agentcoreclient
Oversight AgentCore Client

This is a library to create probes for the Oversight platform.

Supported environment variables:

OS_LOG_LEVEL - Log level. One of debug, info, warning, error or critical. If not set, the log_level argument of the setup_logger(..) method will be used.
OS_AGENTCORE_IP - Sets the agent core IP address. Falls back to agentCoreIp from the configuration and finally localhost.
OS_AGENTCORE_PORT - Sets the agent core port. Falls back to agentCorePort from the configuration and finally 7211.
OS_CONFIG_FOLDER - Sets the configuration folder. The asset configuration files must be stored in this folder. If not set, /etc will be used. (This environment variable is usually configured in the Dockerfile of the corresponding probe.)
OS_CONFIG_FILENAME - Path to the probe configuration file. If not set, the config_fn argument of the AgentCoreClient will be used instead. (It is recommended to configure the config_fn argument when building a probe and not rely on this environment variable alone.)

Reload local config

The local configuration files are read only the first time they are required for an asset. If you wish to reload the configuration, for example after it has been changed, add a file named reload inside the configuration folder. For example:

/data/config/sampleprobe/ $ touch reload
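For illustration, the same reload trigger from Python - a minimal sketch; the folder path is the example path from above, and in practice it is whatever OS_CONFIG_FOLDER points to:

from pathlib import Path

# Example config folder from above; in practice this is the probe's
# OS_CONFIG_FOLDER (default /etc).
config_folder = Path("/data/config/sampleprobe")

# An empty file named "reload" tells the client to re-read the local
# configuration files the next time they are needed.
(config_folder / "reload").touch()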
agentcrypt
Abstract

Yet another implementation of the idea to use the SSH agent for symmetric encryption. This time for Python (2+3).

Load a key and make an SSH connection with agent-forwarding:

local-machine ~ % ssh-add
local-machine ~ % ssh -A remote-machine

Create some agentcrypt-enabled scripts on the remote host:

remote-machine ~ % cat <<'_EOF_' > encryptor.py
import sys
from agentcrypt.io import Container

with Container.create(sys.stdout) as cntr:
    for line in sys.stdin:
        cntr.write(line)
_EOF_

remote-machine ~ % cat <<'_EOF_' > decryptor.py
import sys
from agentcrypt.io import Container

with Container.load(sys.stdin) as cntr:
    print(cntr.getvalue().decode())
_EOF_

Use the newly created scripts with keys from the local ssh-agent:

remote-machine ~ % echo "secret data" | python encryptor.py > hushhush.dat
remote-machine ~ % python decryptor.py < hushhush.dat
secret data

Or via the main guard of the package:

remote-machine ~ % echo "secret data" | python -m agentcrypt.io enc > hushhush.dat
remote-machine ~ % python -m agentcrypt.io dec < hushhush.dat
secret data

Motivation

Provide a convenient (passwordless) way of encrypting sensitive data for Python.
Profit from the fact that agent-forwarding makes SSH keys available on remote servers.

Alternatives

https://github.com/edwardspeyer/sshovel - Python
https://github.com/is/sshcrypt - Go
https://github.com/jwhitham/safeu - C (not sure if it uses the signatures as keys too)

Why another implementation? The ones I found on GitHub are either not Python or they rely on calls to external programs. I wanted a pure Python library that runs with versions 2.7 and 3.x.

Documentation

API Documentation
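As a supplement to the examples above: a single-script round-trip sketch, assuming Container.create/Container.load accept ordinary file objects the same way they accept sys.stdout/sys.stdin in the scripts above (an assumption, not verified against the API docs):

from agentcrypt.io import Container

# Encrypt: write a container to a regular file
# (assumes file objects are accepted like sys.stdout above).
with open("hushhush.dat", "w") as fh:
    with Container.create(fh) as cntr:
        cntr.write("secret data\n")

# Decrypt: load the container back
# (assumes file objects are accepted like sys.stdin above).
with open("hushhush.dat") as fh:
    with Container.load(fh) as cntr:
        print(cntr.getvalue().decode())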
agentDB
+ Looking for 'openagent'? Because of a little name clash, it's now called 'openagent'. 🤖 +

Question: I stumbled upon this repository. Is it production ready?

Answer: Kudos on discovering this hidden treasure box! 🧭 While it's fairly stable and we're battle-testing it in our own production, we'd advise a bit of caution for immediate production use. It's got its quirks, and some of them have taken a cozy spot on our 'we'll-look-at-this-later' list. Jump in, play with it, or use any part of our code. It's all good with the MIT license.

I'm diving in, quirks and all!

Ahoy, adventurer! 🏴‍☠️ We're thrilled to have another daring coder join the fray. Here's to creating some coding magic together! ✨

The Origin Tale of openagent

Here's our dream: an open and democratic AGI, untouched by the sneaky controls and hush-hush censorship of corporate overlords masquerading under 'alignment'. Remember the good ol' web days? We lost that freedom to the mobile moguls and their cheeky 30% 'because-we-said-so' tax. 🙄

Our moonshot? 🚀 A harmonious ensemble of domain-specific AI agents, working in unison so well you'd think it's AGI. Join us in opening up the LAST tech frontier for all!

Meet the world's first AMS!

Ever heard of an Agent Management System (AMS)? No? Well, probably because we believe we came up with it! 🎩✨ openagent proudly wears the badge of being the world's first AMS (yep, we're patting ourselves on the back here). Drawing inspiration from nifty microservices, it equips developers with a treasure trove of tools to craft sturdy, trusty AI applications and those cool experimental autonomous agents.

🧱 Modularity

Multiplatform: Agents do not have to run in a single location or on a single machine. Different components can run across various platforms, including the cloud, personal computers, or mobile devices.
Extensible: If you know how to do something in Python or plain English, you can integrate it with openagent.

🚧 Guardrails

Set clear boundaries: Users can precisely outline what their agent can and cannot do. This safeguard guarantees that the agent remains a dynamic, self-improving system without overstepping defined boundaries.

🏗️ Greater control with Structured outputs

More Effective Than Chaining or Prompting: The prompt compiler unlocks the next level of prompt engineering, providing far greater control over LLMs than few-shot prompting or traditional chaining methods.
Superpowers to Prompt Engineers: It gives the full power of prompt engineering, aligning with how LLMs actually process text. This understanding enables you to precisely control the output, defining the exact response structure and instructing LLMs on how to generate responses.

🏭 Powerful Prompt Compiler

The philosophy is to handle more processing at compile time and maintain a better session with LLMs.

Pre-compiling prompts: By handling basic prompt processing at compile time, unnecessary redundant LLM processing is eliminated.
Session state with LLM: Maintaining state with LLMs and reusing KV caches can eliminate many redundant generations and significantly speed up the process for longer and more complex prompts. (only for open-source models)
Optimized tokens: The compiler can transform many output tokens into prompt token batches, which are cheaper and faster. The structure of the template can dynamically guide the probabilities of subsequent tokens, ensuring alignment with the template and optimized tokenization. (only for open-source models)
Speculative sampling (WIP): You can enhance token generation speed in a large language model by using a smaller model as an assistant. The method relies on an algorithm that generates multiple tokens per transformer call using a faster draft model. This can lead to up to a 3x speedup in token generation.

📦 Containerized & Scalable

.🤖 files: Agents can be effortlessly exported into a simple .agent or .🤖 file, allowing them to run in any environment.
Agentbox (optional): Agents should be able to optimize computing resources inside a sandbox. You can use Agentbox locally or on a cloud with a simple API, with cloud Agentbox offering additional control and safety.

Installation

pip install openagent

Common Errors

SQLite3 Version Error

If you encounter an error like:

Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0.

This is a very common issue with Chroma DB. You can find instructions to resolve this in the Chroma DB tutorial.

Here's the code for a full-stack chat app with UI, all in a single Python file! (37 lines)

import openagent.compiler as compiler
from openagent.compiler._program import Log
from openagent import memory
import chainlit as ui
from dotenv import load_dotenv

load_dotenv()

@ui.on_chat_start
def start_chat():
    compiler.llm = compiler.llms.OpenAI(model="gpt-3.5-turbo")

class ChatLog(Log):
    def append(self, entry):
        super().append(entry)
        print(entry)
        is_end = entry["type"] == "end"
        is_assistant = entry["name"] == "assistant"
        if is_end and is_assistant:
            ui.run_sync(ui.Message(content=entry["new_prefix"]).send())

memory = memory.SimpleMemory()

@ui.on_message
async def main(message: str):
    program = compiler("""
    {{#system~}}
    You are a helpful assistant
    {{~/system}}
    {{~#geneach 'conversation' stop=False}}
    {{#user~}}
    {{set 'this.user_text' (await 'user_text') hidden=False}}
    {{~/user}}
    {{#assistant~}}
    {{gen 'this.ai_text' temperature=0 max_tokens=300}}
    {{~/assistant}}
    {{~/geneach}}
    """, memory=memory)
    program(user_text=message, log=ChatLog())

The UI will look something like this:
agentdesk
AgentDesk

Desktops for AI agents :computer:

Explore the docs » View Demo · Report Bug · Request Feature

Agentdesk provides full-featured desktop environments which can be programmatically controlled by AI agents. Spin them up locally or in the cloud.

▶ Built on agentd, a runtime daemon which exposes a REST API for interacting with the desktop.
▶ Implements the ToolsV1 protocol

Installation

pip install agentdesk

Quick Start

from agentdesk import Desktop

# Create a local VM
desktop = Desktop.local()

# Launch the UI for it
desktop.view(background=True)

# Open a browser to Google
desktop.open_url("https://google.com")

# Take actions on the desktop
desktop.move_mouse(500, 500)
desktop.click()
img = desktop.take_screenshot()

Usage

Create a local desktop:

desktop = Desktop.local()

$ agentdesk create --provider qemu
*requires qemu

Create a remote desktop on GCE:

desktop = Desktop.gce()

$ agentdesk create --provider gce

Create a remote desktop on EC2:

desktop = Desktop.ec2()

$ agentdesk create --provider ec2

View the desktop in the UI:

desktop.view()

$ agentdesk view old_mckinny
*requires docker

List desktops:

Desktop.list()

$ agentdesk get

Delete a desktop:

Desktop.delete("old_mckinny")

$ agentdesk delete old_mckinny

Use the desktop:

desktop.open_url("https://google.com")
coords = desktop.mouse_coordinates()
desktop.move_mouse(500, 500)
desktop.click()
desktop.type_text("What kind of ducks are in Canada?")
desktop.press_key('Enter')
desktop.scroll()
img = desktop.take_screenshot()

Processors

Process images to make them more accessible to LMMs.

Grid: add a coordinate grid on top of the image.

from agentdesk.processors import GridProcessor

img = desktop.take_screenshot()
processor = GridProcessor()
grid_img = processor.process_b64(img)

Examples

GPT-4V: see how to use GPT-4V with AgentDesk in our notebook or agent.

Developing

Please open an issue before creating a PR. Changes to the VM happen in agentd.
agent-dingo
Agent Dingo

A microframework for building simple conversational AI agents.

Dingo allows you to easily integrate any function into ChatGPT by adding a single line of code. With Dingo, you no longer have to worry about manually integrating the functions or dealing with intermediate function calls. The framework is crafted to automate these tasks, allowing you to focus on writing the core functionality.

Quick Start ⚡️

Step 1: Install agent-dingo

pip install agent-dingo

Step 2: Configure your OpenAI API key

export OPENAI_API_KEY=<YOUR_KEY>

Step 3: Instantiate the agent

from agent_dingo import AgentDingo

agent = AgentDingo()

Step 4: Add the agent.function decorator to the function you wish to integrate

@agent.function
def get_current_weather(city: str):
    ...

Step 5: Run the conversation

agent.chat("What is the current weather in Linz?")

Optional: Run an OpenAI-compatible server

from agent_dingo.wrapper import DingoWrapper
DingoWrapper(agent).serve()

The server can be accessed using the openai python package:

import openai

openai.api_base = "http://localhost:8080"

r = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the current weather in Linz?"}],
    temperature=0.0,
)

Support us 🤝

You can support the project in the following ways:

⭐ Star Dingo on GitHub (click the star button in the top right corner)
💡 Provide your feedback or propose ideas in the issues section or Discord
📰 Post about Dingo on LinkedIn or other platforms
🔗 Check out our other projects (cards below are clickable)

Documentation 📚

OpenAI API Key

Dingo is built around the function calling feature of newer-generation OpenAI chat models that were explicitly fine-tuned for these tasks. Hence, an OpenAI key is required. You can either set the OPENAI_API_KEY env variable or register the key using the openai python package:

export OPENAI_API_KEY=<YOUR_KEY>

import openai
openai.api_key = "<YOUR_KEY>"

Agent

AgentDingo is a central part of the framework which allows you to register the functions to use with ChatGPT. The intermediate function calling is also handled by the agent directly.

from agent_dingo import AgentDingo

agent = AgentDingo()

Registering the functions

Option 1 (recommended): Registering the function with a docstring

By default, the agent uses the information from the docstring to generate a function descriptor that is passed to the model. It is advised to always use functions with docstrings, as this way you can describe the purpose of the function (and its arguments) more accurately. The library was explicitly tested with google and numpy docstring styles.

Example:

@agent.function
def get_temperature(city: str) -> str:
    """Retrieves the current temperature in a city.

    Parameters
    ----------
    city : str
        The city to get the temperature for.

    Returns
    -------
    str
        String representation of the json response from the weather api.
    """
    base_url = "https://api.openweathermap.org/data/2.5/weather"
    params = {"q": city, "appid": "<openweathermap_api_key>", "units": "metric"}
    response = requests.get(base_url, params=params)
    data = response.json()
    return str(data)

Option 2: Registering the function without a docstring

If the function does not have a docstring, it can still be registered. In that case, the docstring will be generated automatically by ChatGPT. However, there are several drawbacks to this approach:

The source code of your function is passed to the model;
The generated docstring might be inaccurate (especially for complex functions);
The generated docstrings are not persistent (i.e. they are not saved to disk) and will have to be re-generated every time the function is registered.

If you wish to disable automatic docstring generation, you can set the allow_codegen parameter to False when instantiating the agent.

Example:

agent = AgentDingo(allow_codegen=False)

By default the allow_codegen parameter is set to "env", which means that the value is read from the DINGO_ALLOW_CODEGEN environment variable. If the variable is not set, it is assumed to be True.

It is also possible to change the model used for code generation by setting the DINGO_CODEGEN_MODEL environment variable. By default, the gpt-3.5-turbo-0613 model is used.

export DINGO_CODEGEN_MODEL="gpt-4-0613"

Registering external functions

If you wish to register a function that is not defined in the current file, you can use the register_function method of the agent.

from my_module import get_temperature

agent.register_function(get_temperature)

Alternatively, you can define a function descriptor manually and register it using the register_descriptor method. In this case, a json_representation compatible with the OpenAI function calling API should be provided.

from agent_dingo.descriptor import FunctionDescriptor

d = FunctionDescriptor(
    name="<function_name>",
    json_representation={
        "name": "<function_name>",
        "description": "<function_description>",
        "parameters": ...,
    },
    func=function_callable,
    requires_context=True,  # or False
)

agent.register_descriptor(d)

Running the conversation

Once the functions are registered, you can run the conversation using the chat method of the agent.

agent.chat("What is the current temperature in Linz?")

The chat method accepts the following parameters:

messages - the message to start the conversation with; it can either be a string or a list of messages (conversation history);
chat_context - the global context of the conversation (more details are provided in the next section); by default, it is set to None;
model - the model to use for the conversation; by default, the gpt-3.5-turbo-0613 model is used;
temperature - the randomness parameter of the model; by default, it is set to 1.0;
max_function_calls - the maximum number of function calls allowed during the conversation; by default, it is set to 10;
before_function_call - an interceptor that is called before the function is called (more details are provided in the next section); by default, it is set to None.

All of the parameters except messages are optional. The method returns a tuple which contains the last message of the conversation (as a string) and the full conversation history (including function calls).

Chat context

In some cases the function might need to access the global context of the conversation. For example, the function might need to access some user-specific information (e.g. a user id). In this case, the chat_context parameter can be used. It is a special dictionary that is passed to the function and can contain any information that is required. Unlike other arguments, the content of the chat_context is not generated by the model and is passed directly to the function.

from agent_dingo.context import ChatContext

@agent.function
def get_user_name(greeting_msg: str, chat_context: ChatContext) -> str:
    """Returns a greeting message with the user's name.

    Parameters
    ----------
    greeting_msg : str
        Message to greet the user with.
    chat_context : ChatContext
        The chat context.

    Returns
    -------
    str
        The greeting message with the user's name.
    """
    user_name = chat_context["user_name"]
    return f"{greeting_msg}, {user_name}!"

r = agent.chat("Say hi.", chat_context=ChatContext(user_name="John"), temperature=0.0)
# > Hi, John! How can I assist you today?

Note: the chat_context parameter is not passed to the model and is not used for the generation of the function descriptor.

Before-function-call interceptor

In some cases, it might be required to perform some actions before the function is called. For example, you might want to log the function call or perform some checks. This is especially handy since the function arguments generated by the model are not guaranteed to be correct/valid; hence, it is advised to add some additional validators. The before_function_call parameter can be used to register an interceptor that is called before the function. The interceptor receives the following parameters: function_name, function_callable, function_kwargs, and should return a tuple with the updated function_callable and function_kwargs.

Example: intercepting the function call and logging the function name and arguments.

from typing import Callable

def before_function_call(function_name: str, function_callable: Callable, function_kwargs: dict):
    print(f"Calling function {function_name} with arguments {function_kwargs}")
    return function_callable, function_kwargs

agent.chat(
    "What is the current temperature in Linz?",
    before_function_call=before_function_call,
)

DingoWrapper + Web Server

In addition to using the agent directly, it is possible to wrap it into a DingoWrapper, which provides an OpenAI-like API.

from agent_dingo.wrapper import DingoWrapper

wrapped_agent = DingoWrapper(agent, before_function_call=None, max_function_calls=10)

Once the agent is wrapped, it can be used to create chat completions using the chat_completion method.

r = wrapped_agent.chat_completion(
    messages=[{"role": "user", "content": "What is the current weather in Linz?"}],
    model="gpt-3.5-turbo",
    temperature=0.0,  # optional
    chat_context=None,  # optional
)

In principle, this method can be used as a drop-in replacement for the openai.ChatCompletion.create method. However, there are several differences:

DingoWrapper does not support most of the optional hyperparameters of the openai.ChatCompletion.create method (except temperature);
DingoWrapper has an additional (optional) chat_context parameter that can be used to pass the global context of the conversation.

Example:

# openai.ChatCompletion
r = openai.ChatCompletion.create(
    messages=[{"role": "user", "content": "What is the current weather in Linz?"}],
    model="gpt-3.5-turbo",
    temperature=0.0,
)

# DingoWrapper
r = wrapped_agent.chat_completion(
    messages=[{"role": "user", "content": "What is the current weather in Linz?"}],
    model="gpt-3.5-turbo",
    temperature=0.0,
)

The DingoWrapper can also be used to run a web server (also compatible with the OpenAI API). The server can be started using the serve method, which requires additional dependencies:

pip install agent_dingo[server]

wrapped_agent.serve(port=8080, host="0.0.0.0", threads=4)

Once the server has started, it can be accessed using e.g. the openai python package.

# client.py
import openai

openai.api_base = "http://localhost:8080"

r = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the temperature in Linz?"}],
    temperature=0.0,
)
print(r)

Response:

{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "The current temperature in Linz is 25\u00b0C and it is sunny.",
        "role": "assistant"
      }
    }
  ],
  "created": 1692537919,
  "id": "chatcmpl-d6a9d6cc-7a26-41d5-a4a6-2c737b652f4b",
  "model": "dingo-gpt-3.5-turbo-0613",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 32,
    "prompt_tokens": 318,
    "total_tokens": 350
  }
}

Note: the "usage" metric accumulates the number of tokens used for all intermediate function calls during the conversation.

LangChain Tools 🦜️🔗

It is possible to convert LangChain Tools into function descriptors in order to register them with Dingo. The converter can be used as follows:

Install langchain:

pip install agent_dingo[langchain]

Define the tool; we will use the Wikipedia tool as an example:

from langchain.tools.wikipedia.tool import WikipediaQueryRun
from langchain.utilities.wikipedia import WikipediaAPIWrapper

tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())

Please refer to the LangChain documentation for more details on how to define the tools.

Convert the tool into a function descriptor and register it:

from agent_dingo.langchain import convert_langchain_tool

descriptor = convert_langchain_tool(tool)
agent.register_descriptor(descriptor)

Run the conversation:

# The agent will query Wikipedia to obtain the answer.
agent.chat("What is LangChain according to Wikipedia? Explain in one sentence.")
# > According to Wikipedia, LangChain is a framework designed to simplify the creation of applications using large language models (LLMs), with use-cases including document analysis and summarization, chatbots, and code analysis.

In comparison, when we try to query ChatGPT directly with the same question, we get the following hallucinated response (since it does not have access to the relevant up-to-date information):

# > LangChain is a blockchain-based platform that aims to provide language learning services and connect language learners with native speakers for real-time practice and feedback.

Note: some of the tools might be incompatible with (or simply unsuitable for) Dingo. We do not guarantee that all of the tools will work out of the box.
agentee
No description available on PyPI.
agent-exchange
No description available on PyPI.
agent-finder-masterjefferson
agent-finder

Find nearby auto insurance agents using the Google Maps Places API, and dump them to a CSV file.

This script was written to help automate part of my girlfriend's job. Maybe someone else will find it useful too.

Installation

pip3 install agent-finder-masterjefferson

Usage

Google Maps API

You must have a valid Google Maps API key. In your shell configuration file (.zshrc, .bash_profile, .bashrc, etc):

export GOOGLE_MAPS_API_KEY=<your-api-key>

Basic search:

agent-finder "Los Angeles, CA"

You can also customize the CSV filename and the search radius (units = kilometers):

agent-finder "Los Angeles, CA" --filename foo.csv --radius 50
agentflow
Agentflow
agentforge
AgentForge

AgentForge is a low-code framework tailored for the rapid development, testing, and iteration of AI-powered autonomous agents and Cognitive Architectures. Compatible with a range of LLM models — currently supporting OpenAI, Anthropic's Claude, and Oobabooga (local) — it offers the flexibility to run different models for different agents based on your specific needs.

Whether you're a newbie looking for a user-friendly entry point or a seasoned developer aiming to build complex cognitive architectures, this framework has you covered.

Our database-agnostic framework is designed for seamless extensibility. While ChromaDB is our go-to database, integration with other databases is straightforward, making it an ideal playground and solid foundation for various AI projects.

In summary, AgentForge is your beta-testing ground and future-proof hub for crafting intelligent, model-agnostic, and database-flexible autonomous agents.

Table of Contents

Features
Documentation
Contributing
Contact Us
License

Features

Build Custom Agents And Cognitive Architectures Easily
Custom Tools/Actions
Customizable Agent Memory Management
Default Agents Ready For Use
LLM Agnostic Agents (Each Agent can call different LLMs if needed)
On-The-Fly Prompt Editing
OpenAI & Anthropic API Support
Open-Source Model Support (Oobabooga)
Rapidly Build & Test Cognitive Architectures (Multi-Agent Scripts)

Coming Soon

API Implementation
Knowledge Graphs

Documentation

Welcome to the AgentForge framework documentation. This comprehensive guide will support you whether you're just getting started or diving deep into custom configurations and advanced features. Here you'll find detailed insights into the various components that make up our system.

Installation and Usage:
Getting Started with AgentForge: Begin your journey with a straightforward setup guide, covering everything from initial installation to launching your first bot.

Core Concepts:
Agents: Dive deep into the agents' world. Learn how they operate, respond, and can be customized.
Modules: Explore multi-agent scripts, the hierarchies above agents. Understand how Modules coordinate various agents and manage the flow of information to achieve specific goals.
LLM API Integration: Understand how AgentForge connects with various Large Language Model (LLM) APIs.
Settings: Delve into directives, memories, models, paths, and storage configurations – the fine-tuned engine settings that keep the system humming.
Personas: Add flair to the system's interactions. Make it more human-like and relatable.
Tools & Actions: The system's utility belt. Understand the tools available and how they can be choreographed into actionable sequences.
Utilities: Explore the array of utility functions and tools that supercharge the system's capabilities. (Note: documentation not implemented yet)

Note: Our documentation is a living entity, continuously evolving. Some links or features may still be under development. We appreciate your patience and welcome your feedback to improve the documentation.

Contributing

Feel free to open issues or submit pull requests with improvements or bug fixes. Your contributions are welcome!

Special Note

We're on the lookout for a UI/UX collaborator who's passionate about open-source and wants to join the team to help develop a front-end for this framework. This isn't a job offer, but rather an invitation to be a part of something cool. Interested? We'd love to chat! (See the Contact Us section below for details.)

Contact Us

If you're keen on contributing or just want to reach out to us, here's how to get in touch:

Email: [email protected]
Discord: Feel free to drop by our Discord Server

License

This project is licensed under the GNU General Public License. See LICENSE for more details.
agent-for-sre
No description available on PyPI.
agentforum
🏛 AgentForum

An asynchronous framework for building LLM-based multi-agent systems in Python, with a focus on:

Message trees which highly resemble commits in git, to facilitate backtracking. Messages are immutable and are identified by hashes based on their content and their history (a hash of the previous message). A single hash represents the state of a conversation at a given point in time.
Message promises and the possibility of token streaming. MessagePromise objects are returned before the messages are fully generated. Token streaming is supported regardless of the number of nested levels of agents.

NOTE: While this framework is designed with LLMs in mind, it can be used with any kind of agents.

🧩 Features

Message forwarding. Messages can be forwarded to become a part of different message branches or trees (much like it is done in messaging apps for humans).

💡 Philosophy

Agents run as concurrently as possible. When an agent is called with .call() or .quick_call(), a task is scheduled and AgentCall or AsyncMessageSequence objects are returned immediately. The actual processing will happen upon a task switch.
Stateless agents are encouraged. This is not a hard requirement, but ideally the agents should produce the same output when they are given the same input (the same sequence of messages from a message branch).

🔧 Implementation details

This framework supports Python 3.9 or higher and uses asyncio under the hood.

🌱 Future plans

Message replies. Technically this will be very similar to message forwarding. The difference will mostly be semantic (in the spirit of mimicking messaging apps for humans).
Exceptions as part of the message tree. In the future, exceptions raised by agents will be represented as a special type of message and will be part of the message branches those agents were on when they were raised.
Optional caching of agent responses (enabled by default). When the same sequence of messages is sent to an agent, the framework will respond with the same sequence of response messages without actually calling the agent.
Cancellation of agent execution. In the future, it will be possible to cancel the execution of agents before they finish.

⚠️ NOTE: These plans are tentative and may change in the future. ⚠️
agent-frame
No description available on PyPI.
agentfs
Simple file management and serving for agents

Installation

pip install agentfs

Quickstart

Start the server: you can start the server by using the start_server() function:

from agentfs import start_server

start_server()

This will start the server at http://localhost:8000.

You can also start the server with uvicorn like this:

import os

if __name__ == "__main__":
    import uvicorn
    uvicorn.run("agentfs:start_server", host="0.0.0.0", port=int(os.getenv("PORT", 8000)))

Get a file: once the server is up and running, you can retrieve file content by sending a GET request to the /file/{path} endpoint, where {path} is the path to the file relative to the server's current storage directory.

from agentfs import get_file

# Fetches the content of the file located at "./files/test.txt"
file_content = get_file("test.txt")
print(file_content)

Save a file: similarly, you can save content to a file by sending a POST request to the /file/ endpoint, with JSON data containing the path and content parameters.

from agentfs import add_file

# Creates a file named "test.txt" in the current storage directory
# and writes "Hello, world!" to it.
add_file("test.txt", "Hello, world!")

API Documentation

AgentFS provides the following public functions:

start_server(storage_path=None)
Starts the FastAPI server. If a storage_path is provided, it sets the storage directory to the given path.
Arguments: storage_path (str, optional) - the path to the storage directory.
Returns: None
Example:

from agentfs import start_server
start_server("/my/storage/directory")

get_server()
Returns the FastAPI application instance.
Arguments: None
Returns: FastAPI application instance.
Example:

from agentfs import get_server
app = get_server()

set_storage_path(new_path)
Sets the storage directory to the provided path.
Arguments: new_path (str) - the path to the new storage directory.
Returns: True if the path was successfully set, False otherwise.
Example:

from agentfs import set_storage_path
set_storage_path("/my/storage/directory")

add_file(path, content)
Creates a file at the specified path and writes the provided content to it.
Arguments: path (str) - the path to the new file; content (str) - the content to be written to the file.
Returns: True if the file was successfully created.
Example:

from agentfs import add_file
add_file("test.txt", "Hello, world!")

remove_file(path)
Removes the file at the specified path.
Arguments: path (str) - the path to the file to be removed.
Returns: True if the file was successfully removed.
Example:

from agentfs import remove_file
remove_file("test.txt")

update_file(path, content)
Appends the provided content to the file at the specified path.
Arguments: path (str) - the path to the file to be updated; content (str) - the content to be appended to the file.
Returns: True if the file was successfully updated.
Example:

from agentfs import update_file
update_file("test.txt", "New content")

list_files(path='.')
Lists all files in the specified directory.
Arguments: path (str, optional) - the path to the directory. Defaults to '.' (current directory).
Returns: a list of file names in the specified directory.
Example:

from agentfs import list_files
files = list_files()

get_file(path)
Returns the content of the file at the specified path.
Arguments: path (str) - the path to the file.
Returns: a string containing the content of the file.
Example:

from agentfs import get_file
content = get_file("test.txt")

Contributions Welcome

If you like this library and want to contribute in any way, please feel free to submit a PR and I will review it. Please note that the goal here is simplicity and accessibility, using common language and few dependencies.
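Since the server exposes the plain HTTP endpoints described above, other processes can skip the helper functions entirely. A minimal client sketch using requests, assuming the server from start_server() is listening on the default http://localhost:8000:

import requests

BASE = "http://localhost:8000"

# Save a file: POST /file/ with JSON containing "path" and "content".
requests.post(f"{BASE}/file/", json={"path": "test.txt", "content": "Hello, world!"})

# Fetch it back: GET /file/{path}.
resp = requests.get(f"{BASE}/file/test.txt")
print(resp.text)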
agentfud-art-engine
agentfud-art-engine

Art engine which helps you to generate thousands of beautiful images for the NFT market.

Features

Configurable layers
Configurable rarity weights
Information about your project
Metadata and image generation based on pre-generated DNAs
Excluding traits from metadata

Installation via pip

All the artengine projects could be in a single directory:

mkdir artengine_projects
cd artengine_projects

Creating and activating a virtual environment:

virtualenv env
source ./env/bin/activate

Install agentfud-art-engine:

pip install agentfud-art-engine

Create your first project in the my_project directory:

artengine init my_project

Enter the directory, then start configuring and building:

cd my_project

Now we can edit instance_config.py. If you want to fully test it, remove the layers folder, then clone a prepared layers folder, e.g. git clone git@github.com:AgentFUD/layers.git. Run artengine info, then follow the tutorials.

Usage

Create your own layers.
Edit instance_config.py, set up your layers, DNA generation type, etc.
Generate the rarity configuration file with artengine rarity generateconfig
Edit rarity-config.json as you like. Please keep in mind artengine info will tell you the maximum possible combinations, so do not try to generate more than you possibly could.
Generate the DNA database: artengine dna --collection_size 30 --retries 10
Check generated rarities with artengine rarity check
Bake your images: artengine bake both/metadata/images [--clean-up]

Development install

git clone https://github.com/AgentFUD/agentfud-art-engine.git
cd agentfud-art-engine
virtualenv env
source ./env/bin/activate
pip install --editable .

Type artengine; you should see something very similar to:

(env) agentfud@Legion-Y540:/tmp/agentfud-art-engine$ artengine
Usage: artengine [OPTIONS] COMMAND [ARGS]...

  Welcome to Agent FUD ArtEngine! An all-in-one cli tool for NFT artists!

Options:
  --help  Show this message and exit.

Commands:
  bake          Bakes images and metadata files
  info          Gathers and prints out information about your project
  init          Initializes a new Art Engine project
  rarityconfig  Generates rarity config json file
  run           Runs any python script
agentgpt
No description available on PyPI.
agentgrunt
🕵🧰 AgentGrunt

Use OpenAI's Code Interpreter to edit and commit code across your entire git repo (even non-python repos)!

Overview

AgentGrunt packs up the following into a single file that you can load into Code Interpreter: a codebase you specify, a specially prepared git binary that runs well in Code Interpreter's environment, and some prompts and code exploration tools.

Upload the archive, paste in a two sentence prompt, wait a bit, and then sit back and relax while GPT4.5* writes, edits, and commits your code for you. Once GPT has finished making your changes, press d from the hotkey menu and ChatGPT will send you a file you can use to apply the commits GPT made (with all their metadata!) directly into your copy of the repo.

Features:

automatically installs git into Code Interpreter and configures it for code exploration
built-in hotkey menu for easy usage
simple, small, and easy to customize

Installation

Prereqs:

a valid ChatGPT Plus subscription and Code Interpreter enabled in ChatGPT's settings
a working installation of python 3.9 (or newer)
a git repository that you'd like Code Interpreter to work on with you

Once you have those in place, run:

pip install agentgrunt

If all goes well, running agentgrunt --help will output something like this:

Usage: agentgrunt [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  bundle               Bundle up a local or remote git repo
  custom-instructions  Copy ChatGPT custom instructions to the clipboard

Usage

To start editing a repo with agentgrunt, use agentgrunt's bundle command:

agentgrunt bundle <path-to-your-repo>

It will do some work and then print out some instructions. When the process has completed you'll have a new file called <your-repo-name>.tar.gz in your current folder.

Now do the following:

Copy the short prompt agentgrunt prints out to the clipboard (or just say y when prompted if on macOS)
Open up ChatGPT and start a new chat in Code Interpreter mode
Use the + button to upload the <your-repo-name>.tar.gz file AgentGrunt generated
Paste the prompt you copied a second ago into the chatbox and press send

You'll see ChatGPT start to do some work, and after a few moments you'll be greeted with a message saying "Code Interpreter is now running AgentGrunt!" followed by a hotkey menu similar to the below:

c ) continue
d ) download changes as patch
dr) download entire repo
m ) show diff of last change
r ) refresh/reload agentgrunt
w ) work autonomously until complete
? ) show this hotkey list

Now just ask Code Interpreter to make some changes to your repo, and hit d when you're finished to download the changes it made to your local copy of the repo!

When you want to download the changes you've made to your local copy of the repo, hit d and Code Interpreter will send you a .patch file that you can apply to your copy of the git repo using the (somewhat esoteric) git am command:

git am <path-to-patch-file>

How it works

When you ask AgentGrunt to generate a bundle, it first downloads a single-file version of the git binary from 1bin.org (an older project of mine to make easy-to-deploy single file binaries of common utilities). Then it clones the repo you point it at into a temporary location (to avoid bundling up any files that aren't part of the repo, e.g. node_modules folders), copies the git binary and some prompts teaching Code Interpreter how to use AgentGrunt's tools into a temp folder, and then builds a tarball out of the whole collection.

The python package contains a gpt_tools folder that gets copied into each bundle AgentGrunt generates. gpt_tools includes a prompt for Code Interpreter in the README_ai.md file, as well as some python functions that are useful for code exploration that Code Interpreter can load and call directly (see code_exploration.py).

This arrangement allows the prompt the user has to paste into ChatGPT to be short and simple. Code Interpreter itself can then extract the longer prompt from README_ai and bootstrap itself from there.

Caveats and gotchas

GPT4 makes a lot of mistakes and is easily confused! While AgentGrunt can be genuinely useful, it's not going to be replacing a human dev any time soon. Expect it to require a fair bit of babysitting and handholding to be able to accomplish meaningful tasks.

During longer conversations GPT4 tends to forget what it's doing and sometimes stops showing the hotkey menu, or forgets that git and the tools from code_exploration.py are available. If this happens, hit r or ask it to re-read "its" readme file to refresh its memory.

Code Interpreter is subject to a ~2 minute timeout while working autonomously, so for longer running operations you may need to tell it c (continue) to have it finish what it was doing.

Sometimes Code Interpreter sends diff output instead of a properly formatted commit patch, especially if the changes it's made haven't been committed yet. If this happens, use r to refresh the prompt, or explicitly direct it to make a commit and then send a patch.

Code Interpreter deletes its workspace files if it's been left idle for too long (seems to be in the ~10-15m range), and when this happens any links to files it may have sent you will stop working. Make sure to download any patch files it sends you immediately to avoid losing your work!

Final thoughts

This is still early and more of a proof of concept than anything else. That said, even in its current form it's often genuinely useful! Allowing Code Interpreter to read files and archives in this way also opens the door for lots of interesting applications. AgentGrunt only uses one prompt, but it's easy to imagine more complex tools like this that include a catalogue of prompts that "daisy-chain" from each other; I am very curious to see what other things people build in this vein!

Hat tip to @NickADobos and his "AI zip bomb" thread for the inspiration!
agent-harness
Agent Harness

AI Maintainer Agent Harness for our benchmarking and Marketplace API and platform

Free software: MIT license
Documentation: https://agent-harness.readthedocs.io

Features

TODO

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.1.0 (2023-08-13)
First release on PyPI.

0.1.8 (2023-09-09)
Preparations for Open Source Release
Simplification of interface with aider
agent.http
UNKNOWN
agent-hub
Autonomous Agent Hub

Agent Hub is a collection of agents that can be used in GenAI/LLM applications.

Requirements

python >= 3.10

Installation

pip install agent-hub
agentic-py
No description available on PyPI.
agentive
No description available on PyPI.
agent-ix
Agent IX Client

This is the client and runner for the IX agent platform. This tool is intended to start the agent service using docker images published by the main IX project.

Visit the Agent IX project site for more details.

Install

pip install agent-ix

TODO: ENV setup

Usage

ix up

Development

Read the CONTRIBUTING.md file.
agent-Job-scheduler
quick start

>>> import agent
>>> a = agent.Agent()
>>> import datetime
>>> options = {
...     'scheduler': 'interval',
...     'start_time': datetime.datetime.now(),
...     'interval': 1
... }
>>> @agent.create_job_decorator(options=options)
... def sample_job_every_5s():
...     print('I am running without message and name')
>>>
>>> a.start()
agentk
"k"TLDR;Install the "k" by either doing:pip install agentk(Yes, ^^ it is written in python and your OS needs to have recent version 2 or 3)or copying it in some bin folder on your PATH and runningpip install -r requirements.txt"A person is smart. People are dumb, panicky, dangerous animals, and you know it." -- Agent K"AGENT" K is a complete minimalistic kubectl "doner"-wrapObviously, as a short-hand wrapper,kcan do everythingkubectlalready can, but it is (a) shorter and (b) adds few tricks like merging configs and switching contexts .. (k) feeds back to thekubectlcommand-line those args which it does not want to intercept or handle.UsageThe following is equivalent:kubectl get pods --all-namespaces k get pods -A k p -ASwitching contextArgument-free invocation prompts for context switch options between multiple cluster contexts found in~/.kube/config:kSwitching namespacesOne can change the default namespace on the currently active context (namespacekey in~/.kube/config) using either of two equivalent commands:kubectl config set-context $(kubectl config current-context) --namespace foo k sn fooThe last command is akshortcut.Shortcuts to get resourcesYou can find the full list of shortcuts defined as the dictionary inside thekscript. In particular that would be:# resource "ev": "event", "ep": "endpoints", "p": "pod", "s": "service", "v": "volume", "n": "node", "dp": "deployment", "st": "statefulset", "in": "ingress", "ns": "namespace",At the end of the list there are one letter action-shortcuts:# actions "c": "create", "a": "apply", "d": "delete",This means that the following is equivalent:kubectl apply -f <foo-k8s-manifest.yaml> k a -f <foo-k8s-manifest.yaml>DevelopTo remind, you can dopip install -e .in order to utilize developer mode.Installation in the cloudIf you work withkubectlwithout a privileged or super-user access, for example inside a corporate network or in a cloud-shell (but you still have access to python), then your installation will look like:pip install --user agentkThis will install the script in your local$HOMEfolder.Don't forget to append your~/.bashrcor~/bash_profileor other shell-rc file with:export PATH="$HOME/.local/bin:$PATH"Command completionYou can put this into your.bashrcto get alias and auto completion forksimilar as forkubectl:source <(kubectl completion bash | sed s/kubectl/k/g)Similar works well forzsh.
agentk8s
KGPT

KGPT, or Kubernetes General Purpose Tasker, is a cutting-edge Kubernetes agent designed to simplify and streamline the management of complex and manual tasks within your Kubernetes cluster. In the dynamic world of container orchestration and cloud-native computing, KGPT emerges as a powerful ally, offering automation and intelligence to help you efficiently handle a wide range of tasks, from deployment and scaling to monitoring and troubleshooting. This innovative agent is engineered to alleviate the burden of manual interventions, enhancing the agility and reliability of your Kubernetes infrastructure. With KGPT at your side, you can confidently navigate the intricate landscape of Kubernetes, ensuring optimal performance and resource utilization while freeing up valuable time for more strategic endeavors.

Install

Prerequisites

You have already set up a k8s cluster locally or on the cloud, and kubectl is working normally
You need Python and pip installed
An OpenAI GPT-4 API key is ready
Supported python versions: 3.8 | 3.9 | 3.10 | 3.11

pip install agentk8s

Install auto-completion (optional):

$ kgpt --install-completion

Example

1. Call the setup command to enter your GPT-4 api-key:

kgpt setup

2. Now try a few commands, like setting up a deployment and a service:

kgpt chat "create a deployment called mynginx, image name is nginx, three replicas, expose port 80"

After a few seconds you should see the following output if everything is executed successfully:

user_proxy (to assistant):
exitcode: 0 (execution succeeded)
Code output:
deployment.apps/mynginx created
service/mynginx exposed

3. We can also modify a resource in the cluster. Let's change the mynginx service's target port to 8001:

kgpt chat "Change mynginx service's target port to 8001"

If everything is executed successfully, you should see the following output:

exitcode: 0 (execution succeeded)
Code output:
service/mynginx patched

4. Now let's delete the resources we just created:

kgpt chat "delete mynginx deployment and service"

You should see the following output if everything is executed successfully:

user_proxy (to assistant):
exitcode: 0 (execution succeeded)
Code output:
deployment.apps "mynginx" deleted
service "mynginx" deleted

5. You can also chat with a file; just use the --doc [path or url] option. Let's find out whether there are vulnerabilities in a k8s yaml file (note: you can't execute code when you supply the --doc option in RAG-enabled mode):

kgpt chat "find vulnerability in this k8s yaml file" --doc example_file/deployment1.yaml

You should see the following output:

Intent: Code generation task (to find vulnerability in the provided k8s yaml file).
The provided YAML file has a few issues that could potentially be considered vulnerabilities or misconfigurations:
1. The `image` field does not specify a tag. Using the `latest` tag or not specifying a tag can lead to unpredictable deployments because it's not clear which version of the image is being used. It's better to use a specific version for reproducibility and to avoid accidentally pulling in an updated version with potential vulnerabilities.
2. There are no resource requests or limits specified for the container. This can lead to resource exhaustion on the node where the pod is running, potentially affecting other pods and the stability of the node itself.
3. There is no readiness probe defined. While not necessarily a vulnerability, a readiness probe is important for ensuring that traffic is not sent to a container that isn't ready to handle it.
4. The deployment does not define any security context. Without a security context, the container may run with default settings, which could include running as root, potentially leading to security risks if the container is compromised.
5. The YAML syntax is incorrect. The dashes before `name` and `containerPort` should not be indented.
Please note that the specific values for image version, resource requests, limits, and security context should be adjusted according to the actual requirements and best practices for the application and the organization's security policies.

You can also supply kgpt with multiple docs:

kgpt chat "combine these two yaml files into one yaml" --doc example_file/deployment1.yaml --doc example_file/service1.yaml

List of possible tasks

"create a deployment called mynginx, image name is nginx, three replicas, expose port 80"
"change mynginx service's target port to 8001"
"delete mynginx deployment and service"
"remove all deployments and services in namespace model"
"watch the status of deployment nginx"
"list all deployments and services"
"list all running deployments with more than two replicas"
"create a namespace called test"
"create a namespace called gpu and change the nginx deployment's namespace to gpu"
"remove the namespace gpu if no deployment with the namespace gpu"
"create a docker-registry secret called my-docker-secret, the docker server is mydocker.io, username is vurtne and the password is yqxzv193"

For more possible tasks that the agent can do, please check out here.
agentlabs
Agentlabs

Python library for implementing agents on Weavel

Installation

pip install agentlabs

Example Usage

from agentlabs import Agent, InputType

agent = Agent()

@agent.service(
    id=3,
    name="blog_post",
    description="Given a keyword as input, this service generates a SEO-optimized blog post",
)
@agent.service_input(
    name="keyword",
    display_name="Keyword",  # originally "키워드" (Korean for "keyword")
    description="The keyword for which the blog post should be generated",
    type=InputType.TEXT,
    options=[
        "Aronia",       # originally "아로니아"
        "Aronia farm",  # originally "아로니아 농장"
    ],
)
@agent.service_input(
    name="post_length",
    display_name="Post length",  # originally "글 길이"
    description="Desired length of the blog post",
    type=InputType.TEXT,
    placeholder="Medium",
    options=[
        "Short",
        "Medium",
        "Long",
    ],
)
async def generate_blog(keyword: str):
    await agent.aupdate_status("Generating blog outline...")
    await agent.aupdate_status("Writing post...")
    await agent.aupdate_status("Searching for images...")
    await agent.aupdate_status("Adding images to post...")
    await agent.aupdate_status("Generating title...")
    return {
        "blog": "Blog",  # originally "블로그" (Korean for "blog")
    }

if __name__ == "__main__":
    agent.run(token="5f2593e7-1a10-492e-a9c1-e7521d400e2b")
agentlabs-sdk
No description available on PyPI.
agentlayer
No description available on PyPI.
agentlego
English | 简体中文

Introduction
Quick Starts
  Installation
  Use tools directly
  Integrated into agent frameworks
Supported Tools
Licence

Introduction

AgentLego is an open-source library of versatile tool APIs to extend and enhance large language model (LLM) based agents, with the following highlight features:

Rich set of tools for multimodal extensions of LLM agents, including visual perception, image generation and editing, speech processing and visual-language reasoning, etc.
Flexible tool interface that allows users to easily extend custom tools with arbitrary types of arguments and outputs.
Easy integration with LLM-based agent frameworks like LangChain, Transformers Agents, Lagent.
Support for tool serving and remote accessing, which is especially useful for tools with heavy ML models (e.g. ViT) or special environment requirements (e.g. GPU and CUDA).

https://github-production-user-asset-6210df.s3.amazonaws.com/26739999/289006700-2140015c-b5e0-4102-bc54-9a1b4e3db9ec.mp4

Quick Starts

Installation

Install the AgentLego package:

pip install agentlego

Install tool-specific dependencies: some tools require extra packages. Please check the readme file of the tool and confirm all requirements are satisfied. For example, if we want to use the ImageDescription tool, we need to check the Set up section of its readme and install the requirements:

pip install -U openmim
mim install -U mmpretrain

Use tools directly

from agentlego import list_tools, load_tool

print(list_tools())  # list tools in AgentLego

image_caption_tool = load_tool('ImageDescription', device='cuda')
print(image_caption_tool.description)

image = './examples/demo.png'
caption = image_caption_tool(image)

Integrated into agent frameworks

Lagent
Transformers Agent
VisualChatGPT

Supported Tools

General ability:
Calculator: Calculate by Python interpreter.
GoogleSearch: Search on Google.

Speech related:
TextToSpeech: Speak the input text into audio.
SpeechToText: Transcribe an audio into text.

Image-processing related:
ImageDescription: Describe the input image.
OCR: Recognize the text from a photo.
VQA: Answer the question according to the image.
HumanBodyPose: Estimate the pose or keypoints of human in an image.
HumanFaceLandmark: Estimate the landmark or keypoints of human faces in an image.
ImageToCanny: Extract the edge image from an image.
ImageToDepth: Generate the depth image of an image.
ImageToScribble: Generate a sketch scribble of an image.
ObjectDetection: Detect all objects in the image.
TextToBbox: Detect specific objects described by the given text in the image.

Segment Anything series:
SegmentAnything: Segment all items in the image.
SegmentObject: Segment certain objects in the image according to the given object name.

AIGC related:
TextToImage: Generate an image from the input text.
ImageExpansion: Expand the peripheral area of an image based on its content.
ObjectRemove: Remove certain objects in the image.
ObjectReplace: Replace certain objects in the image.
ImageStylization: Modify an image according to the instructions.

ControlNet series:
CannyTextToImage: Generate an image from a canny edge image and a description.
DepthTextToImage: Generate an image from a depth image and a description.
PoseToImage: Generate an image from a human pose image and a description.
ScribbleTextToImage: Generate an image from a sketch scribble image and a description.

ImageBind series:
AudioToImage: Generate an image according to audio.
ThermalToImage: Generate an image according to a thermal image.
AudioImageToImage: Generate an image according to an audio clip and an image.
AudioTextToImage: Generate an image from an audio clip and a text prompt.

Licence

This project is released under the Apache 2.0 license. Users should also ensure compliance with the licenses governing the models used in this project.
agentless
Agentless

REMEMBER: you can see the same content that is in this README in the Wiki.

Documentation structure

Description
Before use it
  Requirements
  Installation
  Update
  Delete
How to use it
  Execution examples
  Output location
Extra
  YouTube Playlist
  Directories structure
  Changelog
  Code of conduct
  Contributing
  Help
    Discussions
    Issue
    help@castellanidavide.it

Description

The Agentless tool can help you scan the network. If you wanted something to help you scan your network in an easy way, this tool is for you. With this tool you can easily scan all the wanted terminals' ports. After execution you can save the data into a csv file and/or into a HarperDB database.

The scope of this tool is to make your network more secure. Seeing the output (csv or DB), you can understand which open ports are useless. Disabling them makes the network less vulnerable to black hat hacker attacks.

Before use it

Requirements

To install this tool you need some preinstalled software.

Windows: you need to install choco & VirtualBox into the C:\Work\ folder
Linux: you need to install virtualbox

Installation

To install this tool after installing the prerequisites:

Windows: write into Powershell (as Administrator): choco install agentless
Linux: write into the shell: sudo add-apt-repository ppa:castellanidavide/school -y; sudo apt update; sudo apt install agentless -y

Update

To update this tool:

Windows: write into Powershell (as Administrator): choco upgrade agentless
Linux: write into the shell: sudo apt update; sudo apt upgrade

Delete

Windows (using choco): choco remove agentless
Debian/Ubuntu (using apt): sudo apt remove agentless

How to use it

Execution examples

How to use: write agentless on your shell.

ATTENTION: remember to run this tool as Administrator.

Synopsis

On this tool you can pass these parameters:

[--help | -h] - help instructions
[--verbose] - verbose mode
[--adresses=...] - choose addresses; you can use this multiple times (replace "..." with the value(s))
[--csv] - enable csv output
[--single] - disable multithreading
[--url=... --token=... --table=...] - enable the upload to HarperDB (replace "..." with the value(s))

If you are on Ubuntu you can always see the synopsis by typing: man agentless

Example video

Output location

CSV file: look for the "ping.csv" file in the same directory where the code was run with the --csv param.
Windows (Powershell, better if as Administrator): type ping.csv
Linux: cat ping.csv

DB output: to see the DB output, open the HarperDB console.

Log file: the log file is "C:/Program Files/agentless/trace.log" if you are using Windows as OS. Otherwise the log file will be at "~/trace.log".
ATTENTION: the log file is created on the first run; if you want to see it, run the tool at least once.
Windows: type "C:/Program Files/agentless/trace.log"
Linux: cat ~/trace.log

Extra

Directories structure

.gitignore
setup.py
LICENSE.md
.github/
  ISSUE_TEMPLATE/
    bug_report.md
    feature-request.md
  workflows/
    on-push.yml
    on-release.yml
choco/
  ReadMe.md
  set.txt
  agentless.nuspec
  tools/
    chocolateyinstall.ps1
    chocolateyuninstall.ps1
    LICENSE.txt
    VERIFICATION.txt
    agentless-install.c
    agentless-install.exe
    agentless-install.o
    agentless.c
    agentless.exe
    agentless.o
debian/
  agentless.1
  agentless.c
  Makefile
  requirements.in
  debian/
    changelog
    compat
    control
    copyright
    postinst
    postrm
    preinst
    rules
    source
docs/
  logo.png
  *.md
flussi (example output(s))/
  net.csv
  OS.csv
log (example log(s))/
  trace.log
requirements/
  requirements.txt
agentless/
  __init__.py

Made by Castellani Davide

If you have any problem please contact me: [email protected]
agentlib
This is a collection of base classes for the implementation of agents in Python. The original purpose of this library is the execution of distributed simulations and controllers for energy systems; however, it is not limited to any specific field. To get started, please check out our examples.

Installation
To install the agentlib with minimal dependencies, run: pip install agentlib
To install with full dependencies (recommended), run: pip install agentlib[full]
If you want to work on the agentlib, first clone it, go to its directory and then install it in editable mode: pip install -e .[full]

Optional Dependencies
The AgentLib has a number of optional dependencies, ranging from additional features to performance improvements:
- fmu: Support simulation of FMU models (https://fmi-standard.org/).
- scipy: Support simulation of linear state space models, based on scipy.
- mqtt: Support communication between agents through the mqtt protocol.
- plot: Installs matplotlib, allows plotting the results of examples.
- orjson: Faster json library, improves performance when using network communicators.
- fuzz: Improves error messages when providing wrong configurations.
- clonemap: Support the execution of agents and their communication through clonemap. As clonemapy is not available through PyPI, please install it from source, or through the AgentLib's requirements.txt.

Referencing the AgentLib
A publication regarding the AgentLib is currently in the works. If you use the AgentLib in work which you plan to publish, please check back at a later date for a reference.

Copyright and license
This project is licensed under the BSD 3 Clause License - see the LICENSE file for details.

Acknowledgments
We gratefully acknowledge the financial support by the Federal Ministry for Economic Affairs and Climate Action (BMWK), promotional reference 03ET1495A.
agent-lite
No description available on PyPI.
agent-llm
Agent-LLM (Large Language Model)

Please use the outreach email for media, sponsorship, or to contact us for other miscellaneous purposes. Do not send us emails with troubleshooting requests, feature requests or bug reports; please direct those to GitHub Issues or Discord.

Agent-LLM is an Artificial Intelligence Automation Platform designed to power efficient AI instruction management across multiple providers. Our agents are equipped with adaptive memory, and this versatile solution offers a powerful plugin system that supports a wide range of commands, including web browsing. With growing support for numerous AI providers and models, Agent-LLM is constantly evolving to empower diverse applications.

⚠️ Run this in Docker or a Virtual Machine!
You're welcome to disregard this message, but if you do and the AI decides that the best course of action for its task is to build a command to format your entire computer, that is on you. Understand that this is given full unrestricted terminal access by design and that we have no intentions of building any safeguards. This project intends to stay lightweight and versatile for the best possible research outcomes. See also [SECURITY.md](1-Getting started/SECURITY.MD).

⚠️ Monitor Your Usage!
Please note that using some AI providers (such as OpenAI's GPT-4 API) can be expensive! Monitor your usage carefully to avoid incurring unexpected costs. We're NOT responsible for your usage under any circumstance.

⚠️ Under Development!
This project is under active development and may still have issues. We appreciate your understanding and patience. If you encounter any problems, please first check the open issues. If your issue is not listed, kindly create a new issue detailing the error or problem you experienced. Thank you for your support!

⚠️ Necessities For Use
Agent-LLM brings you great power, but you will need to make sure you have the necessary knowledge and hardware to use it. You cannot simply dive in face first and expect to find any success; research and understanding of the technologies involved are required.

Knowledge Required
You will need at minimum intermediate-level knowledge in the following areas:
- Docker
- Python
- Large Language Models
We unfortunately cannot support Docker issues or issues running any local models. There is a bit of a learning curve to get into this stuff and we're focused on development, not support. We cannot teach you how to use Docker or Python; you will need to refer to the documentation or ask an AI to help you. Please do not open issues for a lack of knowledge in these areas; they will be closed asking you to refer to the documentation.

Hardware Required
Good luck getting a straight answer! Due to the nature of Agent-LLM, you can run it from a mobile phone or from an enterprise-grade AI server. If you're running your agents with OpenAI as the provider, you can run it on just about anything with an API key, enough storage, and an internet connection. The hardware you need will depend on the AI models you want to run and the number of agents you want to run at the same time. We recommend starting with a single agent and a single AI model and then scaling up from there. Please do not open issues for lack of hardware; this means errors related to hitting token limits on local models, running out of memory, and issues directly related to ANY local providers such as Oobabooga, llama.cpp, etc.
We know that the providers work, as they've been tested and confirmed working; if they're not working on your hardware, it's most likely a problem with your hardware.

Operating Systems
The development environment used when building Agent-LLM is Ubuntu 22.04. As far as we're aware, it should run on any Linux-based OS, MacOS, and Windows as long as hardware requirements are met. We cannot support Windows-related issues. Windows has firewalls and things actively working against developers, which is why we do not use it for development (or anything else). There are people in our Discord server that are actively using Agent-LLM on Windows, MacOS, and Linux. If you have issues with Windows, please ask in Discord, but please do not tag the developers to ask; we don't use it.

Table of Contents 📖
- Agent-LLM (Large Language Model)
- ⚠️ Run this in Docker or a Virtual Machine!
- ⚠️ Monitor Your Usage!
- ⚠️ Under Development!
- ⚠️ Necessities For Use
- Knowledge Required
- Hardware Required
- Operating Systems
- Table of Contents 📖
- Media Coverage ⏯️
- Video
- Key Features 🗝️
- Web Application Features
- Quick Start with Docker
- Linux or Windows
- Streamlit version
- MacOS
- Streamlit version for Mac OS
- Remote-only, NON-persistent Setup (AKA quick and dirty)
- Console
- Windows Docker Desktop
- Enjoy
- Alternative: Quick Start for Local or Virtual Machine
- Back End
- Front End
- Configuration
- API Endpoints
- Extending Functionality
- Commands
- AI Providers
- Documentation
- Contributing
- Donations and Sponsorships
- Our Team 🧑‍💻
- Acknowledgments
- History

Media Coverage ⏯️

Video
From World of AI on YouTube: Agent LLM: AI Automation Bot for Managing and Implementing AI Through Applications

Key Features 🗝️
- Adaptive Memory Management: Efficient long-term and short-term memory handling for improved AI performance.
- Versatile Plugin System: Extensible command support for various AI models, ensuring flexibility and adaptability.
- Multi-Provider Compatibility: Seamless integration with leading AI providers, including OpenAI GPT series, Hugging Face Huggingchat, GPT4All, GPT4Free, Oobabooga Text Generation Web UI, Kobold, llama.cpp, FastChat, Google Bard, Bing, and more.
Run any model with Agent-LLM!
- Web Browsing & Command Execution: Advanced capabilities to browse the web and execute commands for a more interactive AI experience.
- Code Evaluation: Robust support for code evaluation, providing assistance in programming tasks.
- Docker Deployment: Effortless deployment using Docker, simplifying setup and maintenance.
- Audio-to-Text Conversion: Integration with Hugging Face for seamless audio-to-text transcription.
- Platform Interoperability: Easy interaction with popular platforms like Twitter, GitHub, Google, DALL-E, and more.
- Text-to-Speech Options: Multiple TTS choices, featuring Brian TTS, Mac OS TTS, and ElevenLabs.
- Expanding AI Support: Continuously updated to include new AI providers and services.
- AI Agent Management: Streamlined creation, renaming, deletion, and updating of AI agent settings.
- Flexible Chat Interface: User-friendly chat interface for conversational and instruction-based tasks.
- Task Execution: Efficient starting, stopping, and monitoring of AI agent tasks with asynchronous execution.
- Chain Management: Sophisticated management of multi-agent task chains for complex workflows and collaboration.
- Custom Prompts: Easy creation, editing, and deletion of custom prompts to standardize user inputs.
- Command Control: Granular control over agent abilities through enabling or disabling specific commands.
- RESTful API: FastAPI-powered RESTful API for seamless integration with external applications and services.

Web Application Features
The frontend web application of Agent-LLM provides an intuitive and interactive user interface for users to:
- Manage agents: View the list of available agents, add new agents, delete agents, and switch between agents.
- Set objectives: Input objectives for the selected agent to accomplish.
- Start tasks: Initiate the task manager to execute tasks based on the set objective.
- Instruct agents: Interact with agents by sending instructions and receiving responses in a chat-like interface.
- Available commands: View the list of available commands and click on a command to insert it into the objective or instruction input boxes.
- Dark mode: Toggle between light and dark themes for the frontend.
Built using NextJS and Material-UI. Communicates with the backend through API endpoints.

Run with Docker
Clone the repositories for the Agent-LLM front/back ends, then start the services with Docker.

Linux or Windows

git clone https://github.com/Josh-XT/Agent-LLM
cd Agent-LLM

Choose a service you want to run using profiles, e.g. docker compose --profile streamlit up
Run all available services: docker compose --profile all up
- Agent-LLM-frontend: http://localhost:3000
- streamlit-UI: http://localhost:8501

Windows Docker Desktop (streamlit only example)

Development using docker: docker compose --profile all -f docker-compose.yml -f docker-compose.dev.yaml up (mounts the dev space into the container - happy building)

Manual Install from source (unsupported)
As a reminder, this can be dangerous to run locally depending on what commands you give your agents access to. ⚠️ Run this in Docker or a Virtual Machine!

Back End
Clone the repository for the Agent-LLM back end and start it.

git clone https://github.com/Josh-XT/Agent-LLM
cd Agent-LLM
pip install -r requirements.txt
python app.py

Front End
Clone the repository for the Agent-LLM front end in a separate terminal and start it.

git clone https://github.com/JamesonRGrieve/Agent-LLM-Frontend --recurse-submodules
cd Agent-LLM-Frontend
yarn install
yarn dev

Access the web interface at http://localhost:3000

Configuration
Agent-LLM utilizes a .env configuration file to store AI
language model settings, API keys, and other options. Use the supplied .env.example as a template to create your personalized .env file. Configuration settings include:
- WORKING_DIRECTORY: Set the agent's working directory.
- EXTENSIONS_SETTINGS: Configure settings for OpenAI, Hugging Face, Selenium, Twitter, and GitHub.
- VOICE_OPTIONS: Choose between Brian TTS, Mac OS TTS, or ElevenLabs for text-to-speech.
For a detailed explanation of each setting, refer to the .env.example file provided in the repository.

API Endpoints
Agent-LLM provides several API endpoints for managing agents, prompts and chains. To learn more about the API endpoints and their usage, visit the API documentation at Swagger or Redoc. This documentation is hosted locally, and the frontend must be running for these links to work.

Extending Functionality

Updating Requirements
When extending functionality, ensure you perform the following inside the top-level Agent-LLM directory after saving your changes/customizations:

pip install pipreqs
pipreqs ./ --savepath gen_requirements.txt --ignore bin,etc,include,lib,lib64,env,venv
pip install --no-cache-dir -r gen_requirements.txt

This will generate an updated requirements file and install the new dependencies required to support your modifications.

Commands
To introduce new commands, generate a new Python file in the commands folder and define a class inheriting from the Commands class. Implement the desired functionality as methods within the class and incorporate them into the commands dictionary (see the sketch at the end of this entry).

AI Providers
Each agent will have its own AI provider and provider settings such as model, temperature, and max tokens, depending on the provider. You can use this to make certain agents better at certain tasks by giving them more advanced models to complete certain steps in chains.

Documentation
In the docs/ folder. It can be used to generate static html output. See the deploy-docs section in the Publish Workflow for how to build with honkit.

Contributing
We welcome contributions to Agent-LLM! If you're interested in contributing, please check out our contributions guide, the open issues on the backend, the open issues on the frontend and pull requests, submit a pull request, or suggest new features. To stay updated on the project's progress, follow the repositories and our social channels. Also feel free to join our Discord server.

Donations and Sponsorships
We appreciate any support for Agent-LLM's development, including donations, sponsorships, and any other kind of assistance. If you would like to support us, please contact us through our community channels. We're always looking for ways to improve Agent-LLM and make it more useful for our users. Your support will help us continue to develop and enhance the application. Thank you for considering supporting us!

Our Team 🧑‍💻
- Josh (@Josh-XT)
- James (@JamesonRGrieve)

Acknowledgments
This project was inspired by and is built using code from several other open-source repositories. Please consider exploring and contributing to these projects if you like what we are doing.

History
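As promised in the Commands section above, here is a rough sketch of what a custom command file might look like. The file name, import path, and registration details below are assumptions based on the description, not the project's verified API:

# commands/greetings.py -- hypothetical file name
from Commands import Commands  # assumed location of the Commands base class


class Greetings(Commands):
    def __init__(self):
        super().__init__()
        # Incorporate the new methods into the commands dictionary,
        # mapping a human-readable command name to its implementation.
        self.commands = {"Greet User": self.greet_user}

    def greet_user(self, name: str) -> str:
        # Ordinary Python; the return value is handed back to the agent.
        return f"Hello, {name}!"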
agentlogger
Simple, colorful terminal logs and logfiles.

Installation

pip install agentlogger

Quickstart
Here is a quick overview of how you can use Agent Logger in your project:

from agentlogger import log, print_header, write_to_file

# Print a styled log message to the console
log('Test message', type='info')
# ╭─ (info) agentlogger ─╮
# │ Test message         │
# ╰──────────────────────╯

# Display a big styled header in the console
print_header('Test header', font='slant', color='blue')
#   ______          __     __                    __
#  /_  __/__  _____/ /_   / /_  ___  ____ _ ____/ /__  _____
#   / / / _ \/ ___/ __/  / __ \/ _ \/ __ `/ __  / _ \/ ___/
#  / / /  __(__  ) /_   / / / /  __/ /_/ / /_/ /  __/ /
# /_/  \___/____/\__/  /_/ /_/\___/\__,_/\__,_/\___/_/

# Write a log message to a file
write_to_file('More log content', source='tests.py', type='test_write_to_file')
# ======================== tests.py: test_write_to_file ========================
# More log content
# ================================================================================

Documentation
Here is an overview of the available functions:

log(content, source=None, title="agentlogger", type="info", color="blue", type_colors=DEFAULT_TYPE_COLORS, expand=True, panel=True, log=True)
This function creates an event with the provided metadata and saves it to the event log file.

Colors
The available log color options are: black, red, green, yellow, blue, magenta, cyan and white. The color of your log is determined by the type of the log, if that type is present in the type_colors dictionary. If the type is not in the dictionary, the "color" argument is used instead. You can also provide your own type dictionary.

Arguments:
- content: Content of the event.
- source: Source of the event, e.g. a function name. Defaults to None.
- title: Title of the event. Defaults to "agentlogger".
- type: Type of the event. Defaults to "info".
- type_colors: Dictionary with event types as keys and colors as values. Defaults to a predefined dictionary.
- expand: Determines if the output should be within a Panel. Defaults to True.
- panel: Determines if the output should be displayed inside a bordered box panel. Defaults to True.
- log: Determines if the output should be logged. Defaults to True.

print_header(text="agentlogger", font="slant", color="yellow", width=console.width, justify="left")
This function displays a header with the provided text and color.

Header Fonts
The header fonts come from the FIGlet library. You can find a list of available fonts here.

Colors
The color options are the same as the ones used in the log function: black, red, green, yellow, blue, magenta, cyan and white.

Arguments:
- text: Text to be displayed in the header. Defaults to "agentlogger".
- font: Font to be used in the header. Defaults to "slant".
- color: Color to be used in the header. Defaults to "yellow".
- width: Width of the console. Defaults to the console width.
- justify: Justification of the text in the header. Defaults to "left".

write_to_file(content, source=None, type=None, filename="events.log", separator_width=80)
This function writes content to the event log file.

Arguments:
- content: Content to be written in the log file.
- source: Source of the event, e.g. a function name. Defaults to None.
- type: Type of the event. Defaults to None.
- filename: Name of the file where the content will be written. Defaults to "events.log".
- separator_width: Width of the separator. Defaults to 80.

Default Type Colors
Some log types are mapped to colors by default. You can also create your own dictionary and pass it to the log function.
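For example, a custom mapping can be passed directly through the type_colors argument documented above; the 'deploy' type below is made up for illustration:

from agentlogger import log

# 'deploy' is not a built-in type, so we supply our own type-to-color mapping
log('Rolling out new build', type='deploy', type_colors={'deploy': 'cyan'})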
The default dictionary is:
- unknown: white
- system: magenta
- info: blue
- warning: yellow
- success: green
- error: red
- start: green
- stop: red
- pause: yellow
- epoch: white
- summary: cyan
- reasoning: cyan
- action: green
- prompt: cyan

Examples
Here are a few examples of how you can use this library:

# Log an info message to the console
log('Application started', type='info')

# Log a warning message to the console
log('Low on disk space', type='warning')

# Log an error message to the console without a panel
log('Failed to connect to the database', type='error', panel=False)

# Display a big styled header
print_header('Welcome to My Application')

# Write a log message to a file
write_to_file('User logged in', source='auth.py', type='info')

Tests
You can run tests using pytest:

pytest test.py

Contributions Welcome
If you like this library and want to contribute in any way, please feel free to submit a PR and it will be reviewed. The goal of this project is simplicity and accessibility, using plain language and sane defaults, so please keep that in mind when submitting a PR.
agentloop
A simple, lightweight loop for your agent. Start/stop, step-through, and more.

Quickstart

from agentloop import start, stop, pause, unpause

def step_one(next_output, loop_data):
    print("step_one")
    return next_output

def step_two(next_output, loop_data):
    print("step_two")
    return next_output

# Run the loop
loop_data = start(steps=[step_one, step_two])

# Pause the loop
pause(loop_data)

# Unpause the loop
unpause(loop_data)

# Stop the loop
stop(loop_data)

Installation

pip install agentloop

Usage
This package provides a set of functions to perform a stepped or continuous loop of operations in a separate thread. This can be used for various purposes, like running a continuous process that can be controlled from the outside, a debugging tool for a cycle of operations, etc.

Steps
Each step must take in input from the last step and return output for the next step. The first step will receive None as input, and this will need to be handled. You can either start with an initialization step that returns the initial input, or you can check for None in the first step and return the initial input if it is None.

Example steps:

def step_one(next_output, loop_data):
    # next_output is None on the first run, then received from step_two
    print("step_one")
    return next_output  # next_output sent to step_two

def step_two(next_output, loop_data):
    # next_output received from step_one
    print("step_two")
    return next_output  # next_output sent to step_one

steps = [step_one, step_two]

Function: start
start(steps, paused=False, step_interval=0.0)

Description
Starts the main loop in a separate thread. This loop will run the given steps, in a continuous or stepped manner.

Parameters
- steps: a list of functions that should be executed in the loop. Each function receives the previous step's output (along with the loop data, as in the examples above) and returns a single value which will be passed to the next function. The first function will receive None.
- paused (optional): a boolean value that determines whether the loop should run in paused step mode or not. Defaults to False.
- step_interval (optional): a float value that determines the time interval between steps in seconds. Defaults to 0.0.

Returns
A dictionary containing:
- stop_event: an instance of threading.Event that's used to control stopping of the loop.
- step_event: an instance of threading.Event that's used to control stepping.
- pause_event: an instance of threading.Event that's used to pause and unpause the loop (see pause and unpause below).
- started_event: an instance of threading.Event that's set when the loop starts running.
- thread: an instance of threading.Thread that's running the main loop.

Function: stop
stop(loop_data)

Description
Handles stopping of the loop.

Parameters
- loop_data: a dictionary containing the stop_event and thread, which is returned by the start function.

Returns
None

Function: pause
pause(loop_data)

Description
Pauses the loop. When paused, the loop will not execute the next step until it's either stepped using the step function or unpaused using the unpause function.

Parameters
- loop_data: a dictionary containing the pause_event, which is returned by the start function.

Returns
None

Function: unpause
unpause(loop_data)

Description
Resumes the loop after it has been paused with the pause function. If the loop is not paused, calling this function has no effect.

Parameters
- loop_data: a dictionary containing the pause_event, which is returned by the start function.

Returns
None
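Putting these functions together, here is a minimal sketch of a manually-stepped session (the step function is documented just below; the sleep is only there to give the loop thread time to act):

import time
from agentloop import start, step, stop

def step_one(next_output, loop_data):
    print("step_one")
    return next_output

def step_two(next_output, loop_data):
    print("step_two")
    return next_output

# Start in paused mode, advance the loop by hand, then shut down
loop_data = start(steps=[step_one, step_two], paused=True)
step(loop_data)      # execute a single step
time.sleep(0.1)      # give the loop thread a moment to run it
stop(loop_data)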
Function: step
step(loop_data)

Description
Performs a single step in the loop.

Parameters
- loop_data: a dictionary containing the step_event, which is returned by the start function.

Returns
None

Function: loop
loop(steps, paused=False, loop_data=None, step_interval=0.0)

Description
Runs the step array in a loop until stopped.

Parameters
- steps: a list of functions that should be executed in the loop. Each function receives the previous step's output and returns a single value which will be passed to the next function. The first function will receive None.
- paused (optional): a boolean value that determines whether the loop should run in paused/stepped mode or not. Defaults to False.
- loop_data (optional): a dictionary containing stop_event and step_event instances. If not provided, new events will be created.
- step_interval (optional): a float value that determines the time interval between steps in seconds. Defaults to 0.0.

Returns
None

Function: use_keyboard
use_keyboard(loop_data, input_key=keyboard.Key.space)

Description
Creates a keyboard listener and attaches it to the provided loop data object. This listener listens for a specified key press and, when it is detected, steps the loop (sets the 'step_event').

Parameters
- loop_data: A loop data object, which is typically created by the start function.
- input_key (optional): The keyboard key that the listener will react to. Defaults to keyboard.Key.space (the spacebar).

Returns
Returns the updated loop dictionary with the newly created keyboard listener added to it. Note: pass the updated dictionary to the stop function to also stop the keyboard listener.

Example

import threading
from pynput import keyboard  # assumed source of the keyboard.Key values

loop_data = {
    "stop_event": threading.Event(),
    "step_event": threading.Event(),
    "started_event": threading.Event(),
    "thread": None,
}
updated_loop_dict = use_keyboard(loop_data, input_key=keyboard.Key.space)

Testing

pytest test.py

Publishing

bash publish.sh --version=<version> --username=<pypi_username> --password=<pypi_password>

Contributions Welcome
If you like this library and want to contribute in any way, please feel free to submit a PR and I will review it. Please note that the goal here is simplicity and accessibility, using common language and few dependencies.
agent-management-system
+ Searching for openagent? You are in the right repo. It's now dotagent. (.🤖) +

Hey there, Friend! This project is still in the "just for friends" stage. If you want to see what we're messing with and have some thoughts, take a look at the code. We'd love to incorporate your ideas or contributions. You can drop me a line at ✉️ [email protected]

Why we started dotagent
We have a dream: open and democratic AGI, free from blackbox censorship and the control imposed by private corporations under the disguise of alignment. We once had this with the web but lost this liberty to the corporate giants of the mobile era, whose duopoly has imposed a fixed 30% tax on all developers.

Our moonshot: a network of domain-specific AI agents, collaborating so seamlessly that it feels like AGI. Contribute to democratizing the LAST technological frontier.

What is dotagent?
dotagent is a library of modular components and an orchestration framework. Inspired by a microservices approach, it gives developers all the components they need to build robust, stable & reliable AI applications and experimental autonomous agents.

🧱 Modularity
- Multiplatform: Agents do not have to run on a single location or machine. Different components can run across various platforms, including the cloud, personal computers, or mobile devices.
- Extensible: If you know how to do something in Python or plain English, you can integrate it with dotagent.

🚧 Guardrails
- Set clear boundaries: Users can precisely outline what their agent can and cannot do. This safeguard guarantees that the agent remains a dynamic, self-improving system without overstepping defined boundaries.

🏗️ Greater control with Structured outputs
- More Effective Than Chaining or Prompting: The prompt compiler unlocks the next level of prompt engineering, providing far greater control over LLMs than few-shot prompting or traditional chaining methods.
- Superpowers to Prompt Engineers: It gives the full power of prompt engineering, aligning with how LLMs actually process text. This understanding enables you to precisely control the output, defining the exact response structure and instructing LLMs on how to generate responses.

🏭 Powerful Prompt Compiler
The philosophy is to handle more processing at compile time and maintain a better session with LLMs.
- Pre-compiling prompts: By handling basic prompt processing at compile time, unnecessary redundant LLM processing is eliminated.
- Session state with LLM: Maintaining state with LLMs and reusing KV caches can eliminate many redundant generations and significantly speed up the process for longer and more complex prompts. (only for open-source models)
- Optimized tokens: The compiler can transform many output tokens into prompt token batches, which are cheaper and faster. The structure of the template can dynamically guide the probabilities of subsequent tokens, ensuring alignment with the template and optimized tokenization. (only for open-source models)
- Speculative sampling (WIP): You can enhance token generation speed in a large language model by using a smaller model as an assistant. The method relies on an algorithm that generates multiple tokens per transformer call using a faster draft model. This can lead to up to 3x speedup in token generation.

📦 Containerized & Scalable
- .🤖 files: Agents can be effortlessly exported into a simple .agent or .🤖 file, allowing them to run in any environment.
- Agentbox (optional): Agents should be able to optimize computing resources inside a sandbox.
You can use Agentbox locally or on a cloud with a simple API, with the cloud agentbox offering additional control and safety.

Installation

pip install dotagent

Common Errors

SQLite3 Version Error
If you encounter an error like:
Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0.
This is a very common issue with Chroma DB. You can find instructions to resolve this in the Chroma DB tutorial.

Here's the code for a full stack chat app with UI, all in a single Python file! (37 lines)

import dotagent.compiler as compiler
from dotagent.compiler._program import Log
from dotagent import memory
import chainlit as ui
from dotenv import load_dotenv

load_dotenv()

@ui.on_chat_start
def start_chat():
    compiler.llm = compiler.llms.OpenAI(model="gpt-3.5-turbo")

class ChatLog(Log):
    def append(self, entry):
        super().append(entry)
        print(entry)
        is_end = entry["type"] == "end"
        is_assistant = entry["name"] == "assistant"
        if is_end and is_assistant:
            ui.run_sync(ui.Message(content=entry["new_prefix"]).send())

memory = memory.SimpleMemory()

@ui.on_message
async def main(message: str):
    program = compiler("""
    {{#system~}}
    You are a helpful assistant
    {{~/system}}

    {{~#geneach 'conversation' stop=False}}
    {{#user~}}
    {{set 'this.user_text' (await 'user_text') hidden=False}}
    {{~/user}}

    {{#assistant~}}
    {{gen 'this.ai_text' temperature=0 max_tokens=300}}
    {{~/assistant}}
    {{~/geneach}}""", memory=memory)
    program(user_text=message, log=ChatLog())

The UI will look something like this:
[screenshot of the chat UI]
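Beyond the full app above, the compiler can also be exercised on its own. The following is a minimal sketch pieced together from the calls shown in the example; the simplified single-turn template and the way the generated variable is read back are assumptions, not verified library behavior:

import dotagent.compiler as compiler
from dotenv import load_dotenv

load_dotenv()  # expects your OpenAI key in the environment
compiler.llm = compiler.llms.OpenAI(model="gpt-3.5-turbo")

# A single-turn template using the same syntax as the chat app above
program = compiler("""
{{#user~}}
{{user_text}}
{{~/user}}
{{#assistant~}}
{{gen 'ai_text' temperature=0 max_tokens=50}}
{{~/assistant}}""")

result = program(user_text="Name one planet.")
print(result["ai_text"])  # assumed: generated variables are readable from the result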
agent-marketplace-sdk
An API client for the AI Maintainer Marketplace.
agentmemory
Easy-to-use memory for agents, document search, knowledge graphing and more.

Installation

pip install agentmemory

Quickstart

from agentmemory import create_memory, search_memory

# create a memory
create_memory("conversation", "I can't do that, Dave.", metadata={"speaker": "HAL", "some_other_key": "some value, could be a number or string"})

# search for a memory
memories = search_memory("conversation", "Dave")  # category, search term

print(str(memories))

# memories is a list of dictionaries
[
    {
        "id": int,
        "document": string,
        "metadata": dict{...values},
        "embeddings": (Optional) list[float] | None
    },
    {
        ...
    }
]

Debugging
You can enable debugging by passing debug=True to most functions, or by setting DEBUG=True in your environment to get global memory debugging.

create_memory("conversation", "I can't do that, Dave.", debug=True)

Deployment

CLIENT_TYPE='POSTGRES' | 'CHROMA'
POSTGRES_CONNECTION_STRING=postgres://postgres:[email protected]:6543/postgres

By default, agentmemory will use a local ChromaDB instance. If you want to use a Postgres instance, you can set the environment variable CLIENT_TYPE to POSTGRES and set the POSTGRES_CONNECTION_STRING environment variable to your Postgres connection string.

You can deploy an agentmemory-based application to the cloud in minutes using Supabase. Here is a tutorial and an explanation of pgvector.

Basic Usage Guide

Importing into your project

from agentmemory import (
    create_memory,
    create_unique_memory,
    get_memories,
    search_memory,
    get_memory,
    update_memory,
    delete_memory,
    delete_similar_memories,
    count_memories,
    wipe_category,
    wipe_all_memories
)

Create a Memory

# category, document, metadata
create_memory("conversation", "I can't do that, Dave.", metadata={"speaker": "HAL", "some_other_key": "some value, could be a number or string"})

Search memories

memories = search_memory("conversation", "Dave")  # category, search term
# memories is a list of dictionaries, as shown in the Quickstart above

Get all memories

memories = get_memories("conversation")  # can be any category
# memories is a list of dictionaries, as shown in the Quickstart above

Get a memory

memory = get_memory("conversation", 1)  # category, id

Update a memory

update_memory("conversation", 1, "Okay, I will open the podbay doors.")

Delete a Memory

delete_memory("conversation", 1)

Delete Similar Memories
delete_similar_memories(category, content, similarity_threshold=0.95)
Search for memories that are similar to the one that contains the given content and remove them.

Parameters
- category (str): The category of the collection.
- content (str): The content to search for.
- similarity_threshold (float, optional): The threshold for determining similarity. Defaults to 0.95.

Returns
bool: True if the memory item is found and removed, False otherwise.
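Example (the argument values here are illustrative):

>>> delete_similar_memories("conversation", "I can't do that, Dave.", similarity_threshold=0.9)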
API Reference

Create a Memory
create_memory(category, text, id=None, embedding=None, metadata=None)
Create a new memory in a collection.

Arguments
# Required
category (str): Category of the collection.
text (str): Document text.
# Optional
id (str): Unique id. Generated incrementally unless set.
metadata (dict): Metadata.
embedding (array): Embedding of the document. Defaults to None. Use if you already have an embedding.

Example
>>> create_memory(category='sample_category', text='sample_text', id='sample_id', metadata={'sample_key': 'sample_value'})

Create Unique Memory
create_unique_memory(category, content, metadata={}, similarity=0.95)
Create a new memory only if there aren't any that are very similar to it. If a similar memory is found, the new memory's "novel" metadata field is set to "False" and it is linked to the existing memory.

Parameters
- category (str): The category of the collection.
- content (str): The text of the memory.
- metadata (dict, optional): Metadata for the memory.
- similarity (float, optional): The threshold for determining similarity.

Returns
None

Search Memory
search_memory(category, search_text, n_results=5, min_distance=None, max_distance=None, filter_metadata=None, contains_text=None, include_embeddings=True, novel=False)
Search a collection with given query texts.

A note about distances: the filters are applied after the query, so the n_results may be dramatically shortened. This is a current limitation of ChromaDB.

Arguments
# Required
category (str): Category of the collection.
search_text (str): Text to be searched.
# Optional
n_results (int): Number of results to be returned.
filter_metadata (dict): Metadata for filtering the results.
contains_text (str): Text that must be contained in the documents.
include_embeddings (bool): Whether to include embeddings in the results.
include_distances (bool): Whether to include distances in the results.
max_distance (float): Only include memories below this distance threshold. 0.1 = most memories will be excluded, 1.0 = no memories will be excluded.
min_distance (float): Only include memories that are at least this distance away. 0.0 = no memories will be excluded, 0.9 = most memories will be excluded.
novel (bool): Whether to return only novel memories.

Returns
list: List of search results.

Example
>>> search_memory('sample_category', 'search_text', min_distance=0.01, max_distance=0.7, n_results=2, filter_metadata={'sample_key': 'sample_value'}, contains_text='sample', include_embeddings=True, include_distances=True)
[{'metadata': '...', 'document': '...', 'id': '...'}, {'metadata': '...', 'document': '...', 'id': '...'}]

Get a Memory
get_memory(category, id, include_embeddings=True)
Retrieve a specific memory from a given category based on its ID.

Arguments
# Required
category (str): The category of the memory.
id (str/int): The ID of the memory.
# Optional
include_embeddings (bool): Whether to include the embeddings. Defaults to True.

Returns
dict: The retrieved memory.

Example
>>> get_memory("books", "1")

Get Memories
get_memories(category, sort_order="desc", filter_metadata=None, n_results=20, include_embeddings=True, novel=False)
Retrieve a list of memories from a given category, sorted by ID, with optional filtering. sort_order controls whether you get from the beginning or end of the list.

Arguments
# Required
category (str): The category of the memories.
# Optional
sort_order (str): The sorting order of the memories. Can be 'asc' or 'desc'. Defaults to 'desc'.
filter_metadata (dict): Filter to apply on metadata. Defaults to None.
n_results (int): The number of results to return. Defaults to 20.
include_embeddings (bool): Whether to include the embeddings. Defaults to True.
novel (bool): Whether to return only novel memories. Defaults to False.

Returns
list: List of retrieved memories.

Example
>>> get_memories("books", sort_order="asc", n_results=10)

Update a Memory
update_memory(category, id, text=None, metadata=None)
Update a memory with new text and/or metadata.

Arguments
# Required
category (str): The category of the memory.
id (str/int): The ID of the memory.
# Optional
text (str): The new text of the memory. Defaults to None.
metadata (dict): The new metadata of the memory.
Defaults to None.

Example

# with keyword arguments
update_memory(category="conversation", id=1, text="Okay, I will open the podbay doors.", metadata={"speaker": "HAL", "sentiment": "positive"})

# with positional arguments
update_memory("conversation", 1, "Okay, I will open the podbay doors.")

Delete a Memory
delete_memory(category, id, contains_metadata=None, contains_text=None)
Delete a memory by ID.

Arguments
# Required
category (str): The category of the memory.
id (str/int): The ID of the memory.
# Optional

Example
>>> delete_memory("books", "1")

delete_memories(category, document=None, metadata=None)
Delete all memories in the category, either by document, or by metadata, or by both.

Arguments
# Required
category (str): The category of the memory.
# Optional
document (str): Document text to match memories to delete. Defaults to None.
metadata (dict): Metadata to match memories to delete. Defaults to None.

Returns
bool: True if memories were deleted, False otherwise.

Example
>>> delete_memories("books", document="Foundation", metadata={"author": "Isaac Asimov"})

Check if a memory exists
memory_exists(category, id, includes_metadata=None)
Check if a memory exists in a given category.

Arguments
# Required
category (str): The category of the memory.
id (str/int): The ID of the memory.
# Optional
includes_metadata (dict): Metadata that the memory should include. Defaults to None.

Example
>>> memory_exists("books", "1")

Wipe an Entire Category of Memories
wipe_category(category)
Delete an entire category of memories.

Arguments
# Required
category (str): The category to delete.

Example
>>> wipe_category("books")

Count Memories
count_memories(category)
Count the number of memories in a given category.

Arguments
category (str): The category of the memories.

Returns
int: The number of memories.

Example
>>> count_memories("books")

Wipe All Memories
wipe_all_memories()
Delete all memories across all categories.

Example
>>> wipe_all_memories()

Memory Management with ChromaDB
This section provides a guide to using the memory management functions provided in the module.

Functions

Export Memories to JSON
The export_memory_to_json function exports all memories to a dictionary, optionally including embeddings.

Arguments
include_embeddings (bool, optional): Whether to include memory embeddings in the output. Defaults to True.

Returns
dict: A dictionary with collection names as keys and lists of memories as values.

Example
>>> export_memory_to_json()

Export Memories to File
The export_memory_to_file function exports all memories to a JSON file, optionally including embeddings.

Arguments
path (str, optional): The path to the output file. Defaults to "./memory.json".
include_embeddings (bool, optional): Whether to include memory embeddings in the output. Defaults to True.

Example
>>> export_memory_to_file(path="/path/to/output.json")

Import Memories from JSON
The import_json_to_memory function imports memories from a dictionary into the current database.

Arguments
data (dict): A dictionary with collection names as keys and lists of memories as values.
replace (bool, optional): Whether to replace existing memories. If True, all existing memories will be deleted before import. Defaults to True.

Example
>>> import_json_to_memory(data)

Import Memories from File
The import_file_to_memory function imports memories from a JSON file into the current database.

Arguments
path (str, optional): The path to the input file. Defaults to "./memory.json".
replace (bool, optional): Whether to replace existing memories. If True, all existing memories will be deleted before import.
Defaults to True.

Example
>>> import_file_to_memory(path="/path/to/input.json")

Event API
A common need for the memory API is "events" -- logging when things happen sequentially. The event API provides a simple way to do this using the idea of "epochs". You can increment epochs as needed, and group events together within epochs. All of the events within a loop, or a conversation turn, for example, could be recorded as an epoch.

reset_epoch()
The reset_epoch function resets the epoch in the agent's memory to 1. It wipes the "epoch" category and creates a new memory of 1.

Usage: reset_epoch()
Example: reset_epoch()

set_epoch(epoch)
The set_epoch function sets the epoch in the agent's memory to the specified value.

Arguments:
epoch (int): The desired epoch value.

Usage: set_epoch(epoch)
Example: set_epoch(5)

increment_epoch()
The increment_epoch function increments the current epoch value by 1. It retrieves the current epoch from memory, increments it, and then writes the new epoch value to memory.

Usage: increment_epoch()
Example: increment_epoch()

get_epoch()
The get_epoch function retrieves the current epoch value from the agent's memory.

Usage: get_epoch()
Example:

current_epoch = get_epoch()
print(current_epoch)

create_event(text, metadata={}, embedding=None)
The create_event function creates a new event in the agent's memory.

Arguments:
text (str): The text content of the event.
metadata (dict, optional): Additional metadata for the event. Defaults to {}.
embedding (object, optional): An optional embedding for the event.

Usage: create_event(text, metadata={}, embedding=None)
Example: create_event("This is a test event", metadata={"test": "test"})

get_events(epoch=None, filter_metadata=None, n_results=10)
The get_events function retrieves events from the agent's memory.

Arguments:
epoch (int, optional): If specified, only retrieve events from this epoch.

Usage: get_events(epoch=None)
Example:

events = get_events(1)
for event in events:
    print(event["document"])
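Tying the event API together, a typical epoch-scoped logging pass might look like this. This is a sketch assuming the event helpers are importable from the package root like the memory functions above:

from agentmemory import increment_epoch, get_epoch, create_event, get_events

# Start a fresh epoch for this loop or conversation turn
increment_epoch()

# Record a couple of events inside the current epoch
create_event("User asked about the weather")
create_event("Agent replied with the forecast", metadata={"speaker": "agent"})

# Read back only this epoch's events
for event in get_events(get_epoch()):
    print(event["document"])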
Clustering

Overview
The cluster function in agentmemory.clustering provides an implementation of DBScan (Density-Based Spatial Clustering of Applications with Noise) clustering. It is designed to group memories in the agent's memory based on their similarity and proximity in the data space.

Function Signature

def cluster(epsilon, min_samples, category, filter_metadata=None, novel=False)

Parameters
- epsilon (float): The maximum distance between two samples for one to be considered as in the neighborhood of the other.
- min_samples (int): The number of samples (or total weight) in a neighborhood for a point to be considered as a core point.
- category (str): The category of the collection to be clustered.
- filter_metadata (dict, optional): Additional metadata for filtering the memories before clustering. Defaults to None.
- novel (bool, optional): Whether to return only novel memories. Defaults to False.

Memory Clustering
The cluster function updates memories directly with their cluster ID by performing the DBScan clustering algorithm. Memories with similar content and metadata will be grouped together into clusters. The clustering result will be reflected in the metadata of the memories.

Memory Marking
Memories with fewer than min_samples neighbors within a distance of epsilon will be marked as noise, and their cluster ID in the metadata will be set to "noise". Memories belonging to a cluster will have their cluster ID stored in the "cluster" field of the metadata.

Usage
To perform clustering on a specific category of memories, call the cluster function with appropriate parameters:

from agentmemory.clustering import cluster

# Example usage
epsilon = 0.1
min_samples = 3
category = "conversation"
filter_metadata = {"speaker": "HAL"}  # Optional metadata filter
novel = False  # Whether to return only novel memories

cluster(epsilon, min_samples, category, filter_metadata=filter_metadata, novel=novel)

Note
The clustering operation will directly update the memories' metadata in the specified category. Please make sure to have a backup of the data before performing clustering if necessary.

References
For more information about DBScan clustering, refer to the original paper: DBScan Paper

Contributions Welcome
If you like this library and want to contribute in any way, please feel free to submit a PR and I will review it. Please note that the goal here is simplicity and accessibility, using common language and few dependencies.
agentMET4FOF
Multi-Agent System for IIoT

agentMET4FOF is an implementation of a multi-agent system for agent-based analysis and processing of both static data sets and data streams with IIoT applications in mind. More on the motivation that drives the project can be found in the section About.

Key facts
- FOSS project
- allows to quickly set up and run a metrologically enabled multi-agent system
- handles both static data sets and online data streams
- considers measurement uncertainties as well as metadata with the provided message system
- installable as a Python package or ready-to-deploy Docker image
- comes bundled with several introductory and advanced tutorials
- accompanied by several use cases with close-to-industry IIoT applications in our GitHub organisation
- comprehensive and ever-growing documentation

Table of content
💫 Quickstart, 💬 About, 📈 The agentMET4FOF dashboard, 🤓 Tutorials, 📖 Documentation and screencasts, 💻 Installation, 🐝 Contributing, 💨 Coming soon, 🖋 Citation, 💎 Acknowledgement, ⚠ Disclaimer, © License

💫 Quickstart
agentMET4FOF comes bundled with several tutorials to get you started as quickly as possible. In your Python console, execute the following to run the first tutorial.

>>> from agentMET4FOF_tutorials.tutorial_1_generator_agent import demonstrate_generator_agent_use
>>> generator_agent_network = demonstrate_generator_agent_use()

Starting NameServer...
Broadcast server running on 0.0.0.0:9091
NS running on 127.0.0.1:3333 (127.0.0.1)
URI = PYRO:[email protected]:3333

|----------------------------------------------------------|
|                                                          |
| Your agent network is starting up. Open your browser and |
| visit the agentMET4FOF dashboard on http://0.0.0.0:8050/ |
|                                                          |
|----------------------------------------------------------|

INFO [2021-02-05 18:12:52.277759] (SineGeneratorAgent_1): INITIALIZED
INFO [2021-02-05 18:12:52.302862] (MonitorAgent_1): INITIALIZED
[2021-02-05 18:12:52.324078] (SineGeneratorAgent_1): Connected output module: MonitorAgent_1
SET STATE: Running
[...]

>>> generator_agent_network.shutdown()
0
NS shutdown.

💬 About
Sensor deployments in industrial applications usually form networks in all sorts of environments. This requires a flexible framework for the implementation of the corresponding data analysis. An excellent way to represent such networks is a multi-agent system (MAS), where independent software modules (agents) encapsulate properties and functionalities. agentMET4FOF is an interactive and flexible open-source implementation of such a MAS. The software engineering process is driven by several industry-oriented use cases with the aim of enabling IIoT applications. This leads to a framework that is specialized in representing heterogeneous sensor networks. A special emphasis is put on supporting metrological treatment of sensor streaming data.
This includes the consideration of measurement uncertainties during data analysis and processing, as well as propagating metadata alongside the data itself.

One of the many questions that drive us in the project is: How can metrological input be incorporated into an agent-based system for addressing uncertainty of machine learning in future manufacturing?

Features
Some notable features of agentMET4FOF include:
- Modular agent classes for metrological data streams and analytics
- A built-in buffering mechanism to decouple transmission, processing and visualization of data
- Easy connection among software agents to send and receive data
- Choice of backends:
  - osBrain for simulating as well as handling real distributed systems running Python connected via a TCP network, and
  - Mesa for local simulations of distributed systems, debugging and more high-performance execution
- Interactive and customisable dashboard from the get-go to:
  - Visualize and change agent-network topologies
  - Visualize groups of cooperative agents as Coalitions
  - View and change the agents' parameters
  - View the agents' outputs as plotly or matplotlib plots, or generate and embed your own images
- Generic streams and agents that can be used as starting points in simulations:
  - A sine generator with an associated agent
  - A generator for a sine signal with jitter, dynamically or with fixed length
  - A white noise agent
  - A metrologically enabled sine generator agent which also handles measurement uncertainties

📈 The agentMET4FOF dashboard
agentMET4FOF comes bundled with our so-called dashboard. It is an optional component of every agent network and provides a web-browser-based view. You can observe the state of your agents, modify the connections between them and even add more pre-made agents to your network, all during run-time. The address of your dashboard is printed to the console on every launch of an agent network. The following image is close to what you will find in your browser on execution of tutorial 2. For details on the tutorials visit our video tutorial series.

🤓 Tutorials
As mentioned above, agentMET4FOF comes bundled with several tutorials to get you started as quickly as possible. You will find tutorials on how to set up:
- a simple pipeline to plot a signal
- a simple pipeline with signal postprocessing
- an advanced pipeline with multichannel signals
- a simple metrological datastream
- pipelines to determine redundancy in sensor networks
- a pipeline to reduce noise and jitter in sensor readings
… and more!

📖 Documentation and screencasts
Extended documentation can be found on ReadTheDocs.

Screencast series
Additionally, we provide some screencasts based on agentMET4FOF 0.4.1 on the project homepage in the section "Tutorials for the multi-agent system agentMET4FOF". You can self-register on the linked page and get started immediately. The video series begins with our motivation for creating agentMET4FOF, then guides you through the installation of Python and other recommended software, until you execute the tutorials on your machine.

Live online tutorial during early development
In an early development stage we held a live online tutorial based on agentMET4FOF 0.1.0, which you can download. If questions arise, or you feel something is missing, reach out to us.

💻 Installation
There are different ways to run agentMET4FOF.
Either:
- you install Python and our package agentMET4FOF in a virtual Python environment on your computer, or
- you install Docker, start agentMET4FOF in a container and visit the Jupyter Notebook server and the agentMET4FOF dashboard directly in your browser, or even deploy it over a proper webserver.

In the video tutorial series we guide you through every step of option 1. More detailed instructions on both options can be found in the installation section of the docs.

🐝 Contributing
Whenever you are involved with agentMET4FOF, please respect our Code of Conduct. If you want to contribute back to the project, after reading our Code of Conduct, take a look at our open developments in the project board, pull requests, and search the issues. If you find something similar to your ideas or troubles, let us know by leaving a comment or remark. If you have something new to tell us, feel free to open a feature request or bug report in the issues. If you want to contribute code or improve our documentation, please check our contributing guide.

💨 Coming soon
- Improved handling of metadata
- More advanced signal processing
For a comprehensive overview of current development activities and upcoming tasks, take a look at the project board, issues and pull requests.

🖋 Citation
If you publish results obtained with the help of agentMET4FOF, please cite the linked reference.

💎 Acknowledgement
This work was part of the Joint Research Project Metrology for the Factory of the Future (Met4FoF), project number 17IND12, of the European Metrology Programme for Innovation and Research (EMPIR). The EMPIR is jointly funded by the EMPIR participating countries within EURAMET and the European Union.

⚠ Disclaimer
This software is developed as a joint effort of several project partners, namely:
- Institute for Manufacturing of the University of Cambridge (IfM)
- Physikalisch-Technische Bundesanstalt (PTB)
- Van Swinden Laboratory (VSL)
- National Physics Laboratory (NPL)
under the lead of IfM. The software is made available "as is" free of cost. The authors and their institutions assume no responsibility whatsoever for its use by other parties, and make no guarantees, expressed or implied, about its quality, reliability, safety, suitability or any other characteristic. In no event will the authors be liable for any direct, indirect or consequential damage arising in connection with the use of this software.

© License
agentMET4FOF is distributed under the LGPLv3 license.
agent.ngo
+ Looking for 'openagent'? Because of a little name clash, it's now called 'dotagent'. 🤖 +

Question: I stumbled upon this repository. Is it production ready?

Answer: Kudos on discovering this hidden treasure box! 🧭 While it's fairly stable and we're battle-testing it in our own production, we'd advise a bit of caution for immediate production use. It's got its quirks, and some of them have taken a cozy spot on our 'we'll-look-at-this-later' list. Jump in, play with it, or use any part of our code. It's all good with the MIT license.

I'm diving in, quirks and all!
Ahoy, adventurer! 🏴‍☠️ We're thrilled to have another daring coder join the fray. Here's to creating some coding magic together! ✨

The Origin Tale of dotagent
Here's our dream: An open and democratic AGI, untouched by the sneaky controls and hush-hush censorship of corporate overlords masquerading under 'alignment'. Remember the good ol' web days? We lost that freedom to the mobile moguls and their cheeky 30% 'because-we-said-so' tax. 🙄

Our moonshot? 🚀 A harmonious ensemble of domain-specific AI agents, working in unison so well, you'd think it's AGI. Join us in opening up the LAST tech frontier for all!

Meet World's first AMS!
Ever heard of an Agent Management System (AMS)? No? Well, probably because we believe we came up with it! 🎩✨ dotagent proudly wears the badge of being the world's first AMS (yep, we're patting ourselves on the back here). Drawing inspiration from the nifty microservices, it equips developers with a treasure trove of tools to craft sturdy, trusty AI applications and those cool experimental autonomous agents.

🧱 Modularity
- Multiplatform: Agents do not have to run on a single location or machine. Different components can run across various platforms, including the cloud, personal computers, or mobile devices.
- Extensible: If you know how to do something in Python or plain English, you can integrate it with dotagent.

🚧 Guardrails
- Set clear boundaries: Users can precisely outline what their agent can and cannot do. This safeguard guarantees that the agent remains a dynamic, self-improving system without overstepping defined boundaries.

🏗️ Greater control with Structured outputs
- More Effective Than Chaining or Prompting: The prompt compiler unlocks the next level of prompt engineering, providing far greater control over LLMs than few-shot prompting or traditional chaining methods.
- Superpowers to Prompt Engineers: It gives the full power of prompt engineering, aligning with how LLMs actually process text. This understanding enables you to precisely control the output, defining the exact response structure and instructing LLMs on how to generate responses.

🏭 Powerful Prompt Compiler
The philosophy is to handle more processing at compile time and maintain a better session with LLMs.
- Pre-compiling prompts: By handling basic prompt processing at compile time, unnecessary redundant LLM processing is eliminated.
- Session state with LLM: Maintaining state with LLMs and reusing KV caches can eliminate many redundant generations and significantly speed up the process for longer and more complex prompts. (only for open-source models)
- Optimized tokens: The compiler can transform many output tokens into prompt token batches, which are cheaper and faster. The structure of the template can dynamically guide the probabilities of subsequent tokens, ensuring alignment with the template and optimized tokenization. (only for open-source models)
- Speculative sampling (WIP): You can enhance token generation speed in a large language model by using a smaller model as an assistant.
The method relies on an algorithm that generates multiple tokens per transformer call using a faster draft model. This can lead to up to 3x speedup in token generation.

📦 Containerized & Scalable
- .🤖 files: Agents can be effortlessly exported into a simple .agent or .🤖 file, allowing them to run in any environment.
- Agentbox (optional): Agents should be able to optimize computing resources inside a sandbox. You can use Agentbox locally or on a cloud with a simple API, with the cloud agentbox offering additional control and safety.

Installation

pip install dotagent

Common Errors

SQLite3 Version Error
If you encounter an error like:
Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0.
This is a very common issue with Chroma DB. You can find instructions to resolve this in the Chroma DB tutorial.

Here's the code for a full stack chat app with UI, all in a single Python file! (37 lines)

import dotagent.compiler as compiler
from dotagent.compiler._program import Log
from dotagent import memory
import chainlit as ui
from dotenv import load_dotenv

load_dotenv()

@ui.on_chat_start
def start_chat():
    compiler.llm = compiler.llms.OpenAI(model="gpt-3.5-turbo")

class ChatLog(Log):
    def append(self, entry):
        super().append(entry)
        print(entry)
        is_end = entry["type"] == "end"
        is_assistant = entry["name"] == "assistant"
        if is_end and is_assistant:
            ui.run_sync(ui.Message(content=entry["new_prefix"]).send())

memory = memory.SimpleMemory()

@ui.on_message
async def main(message: str):
    program = compiler("""
    {{#system~}}
    You are a helpful assistant
    {{~/system}}

    {{~#geneach 'conversation' stop=False}}
    {{#user~}}
    {{set 'this.user_text' (await 'user_text') hidden=False}}
    {{~/user}}

    {{#assistant~}}
    {{gen 'this.ai_text' temperature=0 max_tokens=300}}
    {{~/assistant}}
    {{~/geneach}}""", memory=memory)
    program(user_text=message, log=ChatLog())

The UI will look something like this:
[screenshot of the chat UI]
agentocr
AgentOCR Package Usage Guide

1 Quick Start

1.1 Installation

Install with pip:

# Install AgentOCR
$ pip install agentocr

# Install the ONNXRuntime build that matches your platform
$ pip install onnxruntime

Install from a whl package:

Download the whl package: link

Install the whl package:

# Install AgentOCR
$ pip install agentocr-x.x.x-py3-none-any.whl

# Install the ONNXRuntime build that matches your platform
$ pip install onnxruntime

Install from source:

# Clone the AgentOCR code
$ git clone https://github.com/AgentMaker/AgentOCR

# Install AgentOCR
$ cd AgentOCR && python setup.py install

# Install the ONNXRuntime build that matches your platform
$ pip install onnxruntime

2 Usage

The AgentOCR package automatically downloads PaddleOCR's lightweight Chinese/English models as the defaults. Models and parameters can be customized by switching to another built-in config file or supplying your own. The API usage stays essentially consistent with the PPOCR package.

2.1 Using the Code

2.1.1 API

Interface overview:

class OCRSystem:
    def __init__(self, config='ch', warmup=True, **kwargs):
        '''The Inference OCR System of AgentOCR.

        Params:
            config: config file name or path, defaults to 'ch'.
            warmup: warm the models up during initialization, defaults to True.
            **kwargs: additional config options; these override the same options in the config file.
        '''

    def ocr(self, img, det=True, cls=False, rec=True, return_cls=False):
        '''
        Params:
            img: image path or image array.
            det: locate text positions, defaults to True.
            cls: classify text direction, defaults to False.
            rec: recognize text content, defaults to True.
            return_cls: return the text direction classification results, defaults to False.
        '''

2.1.2 Examples

Detection + direction classification + recognition (full pipeline):

from agentocr import OCRSystem

# Configure the models through the config parameter; config files for many languages are built in.
# You can also pass any of the custom options listed in section 3.1, e.g. to pick the runtime backend:
# ocr = OCRSystem(config='ch', providers='cuda,cpu')
ocr = OCRSystem(config='ch')

# Path to a test image
img_path = 'test.jpg'

# Run the full OCR pipeline
result = ocr.ocr(img_path, cls=True)

# The result is a list; each item contains the text box, the text, and the recognition confidence
[[[24.0, 36.0], [304.0, 34.0], [304.0, 72.0], [24.0, 74.0]], ['纯臻营养护发素', 0.964739]]
[[[24.0, 80.0], [172.0, 80.0], [172.0, 104.0], [24.0, 104.0]], ['产品信息/参数', 0.98069626]]
[[[24.0, 109.0], [333.0, 109.0], [333.0, 136.0], [24.0, 136.0]], ['(45元/每公斤,100公斤起订)', 0.9676722]]
......

Detection + recognition:

from agentocr import OCRSystem

ocr = OCRSystem(config='ch')
img_path = 'test.jpg'

# Disable direction classification
result = ocr.ocr(img_path, cls=False)

# The result is a list; each item contains the text box, the text, and the recognition confidence
[[[24.0, 36.0], [304.0, 34.0], [304.0, 72.0], [24.0, 74.0]], ['纯臻营养护发素', 0.964739]]
[[[24.0, 80.0], [172.0, 80.0], [172.0, 104.0], [24.0, 104.0]], ['产品信息/参数', 0.98069626]]
[[[24.0, 109.0], [333.0, 109.0], [333.0, 136.0], [24.0, 136.0]], ['(45元/每公斤,100公斤起订)', 0.9676722]]
......

Direction classification + recognition:

from agentocr import OCRSystem

ocr = OCRSystem(config='ch')
img_path = 'test.jpg'

# Disable detection and enable classification
result = ocr.ocr(img_path, det=False, cls=True)

# The result is a list; each item only contains the recognized text and the recognition confidence
['韩国小馆', 0.9907421]

Detection only:

from agentocr import OCRSystem

ocr = OCRSystem(config='ch')
img_path = 'test.jpg'

# Disable recognition
result = ocr.ocr(img_path, rec=False)

# The result is a list; each item only contains the text box
[[26.0, 457.0], [137.0, 457.0], [137.0, 477.0], [26.0, 477.0]]
[[25.0, 425.0], [372.0, 425.0], [372.0, 448.0], [25.0, 448.0]]
[[128.0, 397.0], [273.0, 397.0], [273.0, 414.0], [128.0, 414.0]]
......

Recognition only:

from agentocr import OCRSystem

ocr = OCRSystem(config='ch')
img_path = 'test.jpg'

# Disable detection
result = ocr.ocr(img_path, det=False)

# The result is a list; each item only contains the recognized text and the recognition confidence
['韩国小馆', 0.9907421]

Direction classification only:

from agentocr import OCRSystem

ocr = OCRSystem(config='ch')
img_path = 'test.jpg'

# Disable detection and recognition, enable classification
result = ocr.ocr(img_path, det=False, cls=True, rec=False)

# The result is a list; each item only contains the classification result and its confidence
['0', 0.9999924]

2.2 Server Deployment

2.2.1 Starting the OCR service

Start it from the command line:

# config: config file / host: listen address / port: listen port / plus any other config options
$ agentocr server \
    --config ch \
    --host 127.0.0.1 \
    --port 5000 \
    --providers cpu

2.2.2 Calling the API

Endpoint: http://{host}:{port}/ocr
Request type: Post

Calling the OCR service from Python:

import cv2
import json
import base64
import requests

# Base64-encode the image
def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    image_base64 = base64.b64encode(data.tobytes()).decode('UTF-8')
    return image_base64

# Read the image
image = cv2.imread('test.jpg')
image_base64 = cv2_to_base64(image)

# Build the request payload
data = {'image': image_base64, 'det': True, 'cls': True, 'rec': True}

# Send the request
url = "http://127.0.0.1:5000/ocr"
r = requests.post(url=url, data=json.dumps(data))

# Print the prediction results
print(r.json())

3 Configuration

AgentOCR uses JSON config files to configure the models and all other parameters. Preset configs for multiple languages are also built in and can be selected via the corresponding language abbreviation. For the mapping between languages and abbreviations, see the [Preset Config Options] table on the project homepage. More PPOCR pretrained models can be downloaded from the [Pretrained Models] section of the homepage.

3.1 Config File

Quick configuration: the following options are enough to switch between model files, dictionaries and visualization fonts:

{
    "det_model_dir": "ch_ppocr_mobile_v2.0_det",
    "rec_model_dir": "ch_ppocr_mobile_v2.0_rec",
    "rec_char_type": "ch",
    "rec_char_dict_path": "ppocr_keys_v1",
    "vis_font_path": "simfang",
    "cls_model_dir": "ch_ppocr_mobile_v2.0_cls"
}

Full configuration: see the next subsection for detailed parameter descriptions:

{
    "providers": "auto",
    "det_algorithm": "DB",
    "det_model_dir": "ch_ppocr_mobile_v2.0_det",
    "det_limit_side_len": 960,
    "det_limit_type": "max",
    "det_db_thresh": 0.3,
    "det_db_box_thresh": 0.6,
    "det_db_unclip_ratio": 1.5,
    "use_dilation": false,
    "det_db_score_mode": "fast",
    "det_east_score_thresh": 0.8,
    "det_east_cover_thresh": 0.1,
    "det_east_nms_thresh": 0.2,
    "det_sast_score_thresh": 0.5,
    "det_sast_nms_thresh": 0.2,
    "det_sast_polygon": false,
    "rec_algorithm": "CRNN",
    "rec_model_dir": "ch_ppocr_mobile_v2.0_rec",
    "rec_image_shape": "3, 32, 320",
    "rec_char_type": "ch",
    "rec_batch_num": 8,
    "max_text_length": 25,
    "rec_char_dict_path": "ppocr_keys_v1",
    "use_space_char": true,
    "vis_font_path": "simfang",
    "drop_score": 0.5,
    "cls_model_dir": "ch_ppocr_mobile_v2.0_cls",
    "cls_image_shape": "3, 48, 192",
    "label_list": ["0", "180"],
    "cls_batch_num": 8,
    "cls_thresh": 0.9,
    "total_process_num": 1,
    "show_log": true
}

3.2 Parameters

Field | Description | Default
providers | Compute backends to use, given as a comma-separated, case-insensitive string in order of preference, e.g. "cuda,cpu"; by default all available backends are selected automatically (possibly not in the best order) | auto
det_algorithm | Detection algorithm | DB
det_model_dir | Detection model, file name or file path | ch_ppocr_mobile_v2.0_det
det_max_side_len | Maximum size of the image's long side during detection; longer sides are resized down to this value, with the short side scaled proportionally | 960
det_limit_type | Detection limit type | max
det_db_thresh | Binarization threshold for the DB model's prediction map | 0.3
det_db_box_thresh | Threshold for DB output boxes; predicted boxes below it are discarded | 0.6
det_db_unclip_ratio | Expansion ratio for DB output boxes | 1.5
use_dilation | Whether to use dilated convolution | False
det_db_score_mode | DB score computation mode (slow or fast) | fast
det_east_score_thresh | Binarization threshold for the EAST model's prediction map | 0.8
det_east_cover_thresh | Threshold for EAST output boxes; predicted boxes below it are discarded | 0.1
det_east_nms_thresh | NMS threshold for EAST output boxes | 0.2
det_sast_polygon | Whether to use SAST polygon mode | False
rec_algorithm | Recognition algorithm | CRNN
rec_model_dir | Recognition model, file name or file path | ch_ppocr_mobile_v2.0_rec
rec_image_shape | Input image shape for recognition | 3, 32, 320
rec_char_type | Character set for recognition: Chinese/English (ch), English (en), French (french), German (german), Korean (korean), Japanese (japan) | ch
rec_batch_num | Number of images run through recognition in one forward pass | 8
max_text_length | Maximum text length the recognition algorithm can handle | 25
rec_char_dict_path | Recognition model dictionary, file name or file path | ppocr_keys_v1
use_space_char | Whether to recognize spaces | True
vis_font_path | Visualization font, file name or file path | simfang
drop_score | Filter outputs by score (from the recognition model); results below it are not returned | 0.5
cls_model_dir | Classification model, file name or file path | ch_ppocr_mobile_v2.0_cls
cls_image_shape | Input image shape for classification | 3, 48, 192
label_list | Label list for the classification algorithm | ['0', '180']
cls_batch_num | Number of images run through classification in one forward pass | 8
cls_thresh | Classifier threshold | 0.9
total_process_num | Number of processes | 1
show_log | Whether to print logs | False
agentools
AgenTools - Async Generator Tools for LLMs

A simple set of modules, wrappers and utils that are essential for LLM-based assistants and agents using the OpenAI API and function tools. It is useful for:

OpenAI API: Simple wrapper for the OpenAI API to provide mocked endpoints for easy testing without costing money, accumulating the delta chunks from streamed responses into partial responses, and easier token counting/tracking.
Function Tools: Easily convert any (async) python function into a function tool that the LLM model can call, with automatic validation and retrying with error messages.
Structured Data: Easily define a Pydantic model that can be generated by the LLM model, also with validation and retries.
Assistants: Event-based architecture with async generators that yield events that you can iterate through and handle only the events you care about, such as whether you want to stream the response or not, cancel the generation prematurely, or wait for user input (human-in-the-loop) before continuing, etc.
Copilots: Integrate right into an editor with stateful system messages to allow the copilot to see the latest state of the editor and function tools to interact with the editor.

Yet to come:

Agents: Autoprompting, self-prompting, chain-of-thought, sketchpads, memory management, planning, and more.
Multi-Agents: Communication channels, organization structuring, and more.

Quick Start

Installation

pip install agentools

Assistant and ChatGPT

A high-level interface to use ChatGPT or other LLM-based assistants! The default implementation of ChatGPT has:

a message history to remember the conversation so far (including the system prompt)
ability to use tools
efficient async streaming support
a simple way to customize/extend/override the default behavior

from agentools import *

# empty chat history and default model (gpt-3.5)
model = ChatGPT()

You can then simply call the model as if it were a function, with a prompt:

await model("Hey!")
'Hello! How can I assist you today?'

As you can see, the model is async and it simply returns the response as a string.

Both your prompt and the response are stored in the history, so you can keep calling the model with new prompts and it will remember the conversation so far.

await model("Can you repeat my last message please?")
'Of course! Your last message was: "Hey!"'

model.messages.history
[{'role': 'user', 'content': 'Hey!'},
 {'content': 'Hello! How can I assist you today?', 'role': 'assistant'},
 {'role': 'user', 'content': 'Can you repeat my last message please?'},
 {'content': 'Of course! Your last message was: "Hey!"', 'role': 'assistant'}]

System prompt and more on MessageHistory

Notice that our model has no system prompt in the beginning. ChatGPT's constructor by default creates an empty chat history, but you can explicitly create a MessageHistory object and pass it to the constructor:

translate = ChatGPT(messages=SimpleHistory.system("Translate the user message to English"))
# SimpleHistory.system(s) is just shorthand for SimpleHistory([msg(system=s)])

print(await translate("Ich liebe Katzen!"))
print(await translate("고양이랑 강아지 둘다 좋아!"))
I love cats!
I like both cats and dogs!

translate.messages.history
[{'role': 'system', 'content': 'Translate the user message to English'},
 {'role': 'user', 'content': 'Ich liebe Katzen!'},
 {'content': 'I love cats!', 'role': 'assistant'},
 {'role': 'user', 'content': '고양이랑 강아지 둘다 좋아!'},
 {'content': 'I like both cats and dogs!', 'role': 'assistant'}]

Notice that here, we're wasting tokens by remembering the chat history, since it's not really a conversation.
There's a simple GPT class, which simply resets the message history after each prompt:

translate = GPT(messages=SimpleHistory.system("Translate the user message to English"))
await translate("Ich liebe Katzen!")
await translate("고양이랑 강아지 둘다 좋아!")

translate.messages.history
[{'role': 'system', 'content': 'Translate the user message to English'}]

OpenAI API: changing the model and mocked API

You can set the default model in the constructor, or override it for each prompt:

# default model is now gpt-4 💸
model = ChatGPT(model="gpt-4")
# but you can override it for each prompt anyways
await model("Heyo!", model="mocked")
'Hello, world!'

As you see, our wrapper provides a simple mocked "model", which will simply return "Hello, world!" for any prompt, with some simulated latency. This also works with streaming responses, and in either case, you won't be able to tell the difference between the real API and the mocked one.

There are more mocked models for your convenience:

mocked: always returns "Hello, world!"
mocked:TEST123: returns the string after the colon, e.g. "TEST123"
echo: returns the user prompt itself

Let's print all events to the console to take a peek at the event-based generator:

await model("Heya!", model="echo", event_logger=print)
[ResponseStartEvent]: prompt=Heya!, tools=None, model=echo, max_function_calls=100, openai_kwargs={}
[CompletionStartEvent]: call_index=0
[CompletionEvent]: completion=ChatCompletion(id='mock', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Heya!', role='assistant', function_call=None, tool_calls=None))], created=1706671307, model='mock', object='chat.completion', system_fingerprint=None, usage=None), call_index=0
[FullMessageEvent]: message=ChatCompletionMessage(content='Heya!', role='assistant', function_call=None, tool_calls=None), choice_index=0
[TextMessageEvent]: content=Heya!
[ResponseEndEvent]: content=Heya!
'Heya!'

Wow, quite a lot going on for a simple prompt! While it might seem like too many events, this offers a lot of flexibility and customizability. You can easily handle only the events you are interested in, useful when, e.g.:

updating the frontend when streaming the responses,
cancelling the generation early,
or implementing human-in-the-loop for function calls.

For instance, the GPT class from above is as simple as:

async for event in self.response_events(prompt, **openai_kwargs):
    match event:
        case self.ResponseEndEvent():
            await self.messages.reset()
            return event.content

This generator-based architecture is a good balance between flexibility and simplicity! While we won't go deeper into the low-level API in this quickstart, you can look at the advanced.ipynb notebook for more details.

Tools: @function_tool

You can turn any function into a tool usable by the model by decorating it with @function_tool:

@function_tool
def print_to_console(text: str) -> str:
    """Print text to console

    Args:
        text: text to print
    """
    print(text)
    return "success"  # the model will see the return value

# normal call
print_to_console("Hello from python!")
Hello from python!
'success'

You can use the tool from python as you normally would, and the model will also be able to use it simply by passing it to the tools parameter during init (as default) or when prompting (as a one-off).

model = ChatGPT(tools=print_to_console)
await model("Say 'hello from GPT' to console!")
Hello from GPT!
'I have successfully printed "Hello from GPT!" to the console.'
To make the function a @function_tool, you must do the following:

The parameters must be type annotated, and all parameters must be JSON-serializable (e.g. str, int, float, bool, list, dict, None, etc).
The return type should be a str or something that can be converted to a str.
It must be documented with a '''docstring''', including each parameter (most formats supported, e.g. Google-style, NumPy-style, sphinx-style, etc, see this overview).

Showing off some more goodies:

Even async functions should seamlessly work, just don't forget to await them.
@fail_with_message(err) is a decorator that will catch any exceptions thrown by the function and instead return the error message. This is useful for when you want to handle errors in a more graceful way than just crashing the model. It also takes an optional logger, which by default takes the print function, but any callable that takes a string will work, such as logger.error from the logging module.
Usually, the @function_tool decorator will throw an assertion error if you forget to provide the description for the function or any of its parameters. If you really don't want to provide descriptions for some (or all), maybe because they're so self-explanatory or you need to save tokens, then you can explicitly turn off the docstring parsing by passing @function_tool(check_description=False). This is not recommended, but it's there if you need it.

Note that by returning descriptive error strings, the model can read the error message and retry, increasing the robustness!

import asyncio
import logging

@function_tool(name="Fibonacci", require_doc=False)
@fail_with_message("Error", logger=logging.error)
async def fib(n: int):
    if n < 0:
        raise ValueError("n must be >= 0")
    if n < 2:
        return n
    await asyncio.sleep(0.1)
    return sum(await asyncio.gather(fib(n - 1), fib(n - 2)))

await fib(-10)
ERROR:root:Tool call fib(-10) failed: n must be >= 0
'Error: n must be >= 0'

Toolkits: class Toolkit

Toolkits are a collection of related function tools, especially useful when they share a state. Also good for keeping the state bound to a single instance of the toolkit, rather than a global state. To create a toolkit, simply subclass Toolkit and decorate its methods with @function_tool.

class Notepad(Toolkit):
    def __init__(self):
        super().__init__()
        self.content = "<Fill me in>"

    @function_tool
    def write(self, text: str):
        """Write text to the notepad

        Args:
            text: The text to write
        """
        self.content = text

    @function_tool(require_doc=False)
    def read(self):
        return self.content

notes = Notepad()
notes.write("Shhh... here's a secret: 42")
notes.read()
"Shhh... here's a secret: 42"

As before, simply pass the toolkit to the model. To use multiple tools and toolkits, simply use the ToolList class:

gpt = ChatGPT(
    tools=ToolList(notes, print_to_console, fib),
)

await gpt("What's on my notepad?")
'On your notepad, there is a secret written: 42.'

await gpt(
    "Can you calculate the 8th fibonacci number, add it to the number in my notes, and write it? "
    "also print it to console as well.",
    event_logger=lambda x: print(x) if x.startswith("[Tool") else None,
)
[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_Uj8VIxhCxvHUnbIH1CK50Qmo', function=Function(arguments='{\n  "n": 8\n}', name='Fibonacci'), type='function')]
[ToolResultEvent]: result=21, tool_call=ChatCompletionMessageToolCall(id='call_Uj8VIxhCxvHUnbIH1CK50Qmo', function=Function(arguments='{\n  "n": 8\n}', name='Fibonacci'), type='function'), index=0
[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_0F0alwZitumJTU3QlPxuoJNW', function=Function(arguments='{\n  "text": "63"\n}', name='write'), type='function')]
[ToolResultEvent]: result=None, tool_call=ChatCompletionMessageToolCall(id='call_0F0alwZitumJTU3QlPxuoJNW', function=Function(arguments='{\n  "text": "63"\n}', name='write'), type='function'), index=0
[ToolCallsEvent]: tool_calls=[ChatCompletionMessageToolCall(id='call_KoSyY1LZQYVfEpMDPUZlt2pc', function=Function(arguments='{\n  "text": "63"\n}', name='print_to_console'), type='function')]
63
[ToolResultEvent]: result=success, tool_call=ChatCompletionMessageToolCall(id='call_KoSyY1LZQYVfEpMDPUZlt2pc', function=Function(arguments='{\n  "text": "63"\n}', name='print_to_console'), type='function'), index=0
'I have calculated the 8th Fibonacci number to be 21. I have added it to the number on your notepad, which is 42, and the result is 63.\n\nI have written the result, 63, on your notepad. However, it seems that there was an issue with writing it. My apologies for the inconvenience.\n\nI have also printed the result, 63, to the console.'

notes.read()
'63'

Notice how, since our write function doesn't return anything, it defaults to None and our model gets confused! So don't forget to return an encouraging success message to make our model happy :)

Structured Data

We can very easily define a Pydantic model that can be generated by the LLM model, with validation and retries:

from pydantic import BaseModel, Field

class Song(BaseModel):
    title: str
    genres: list[str] = Field(description="AT LEAST 3 genres!")
    duration: float

# normal use
Song(title="Hello", genres=["pop"], duration=3.5)
Song(title='Hello', genres=['pop'], duration=3.5)

Create a StructGPT object with your pydantic model, and prompting it will always return a valid instance of the model, or raise an exception if it fails to generate a valid instance after the maximum number of retries. Your docstring and field descriptions will also be visible to the model, so make sure to write good descriptions!

generate_song = StructGPT(Song)
await generate_song("Come up with an all-time best K-hiphop song")
Song(title='Respect My Name', genres=['Hip-Hop', 'K-Rap', 'Trap'], duration=240.0)

Misc.

Streaming can be enabled as usual by passing stream=True when prompting; handle the partial events as they come in. Check the Assistant class for a list of events, including the ones for streaming.

There are some other useful utilities in the utils module, such as:

tokens: for token counting
trackers: for transparent token tracking and prompt/response logging
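To round off the note on streaming above, here is a minimal sketch of driving the low-level event generator by hand. It only reuses names that appear in this quickstart (response_events, ResponseEndEvent); that stream=True is forwarded like the other OpenAI kwargs is an assumption, so treat this as a sketch rather than the library's canonical streaming recipe.

# Minimal sketch, assuming stream=True is passed through like other OpenAI kwargs.
async def stream_chat(model: ChatGPT, prompt: str) -> str:
    async for event in model.response_events(prompt, stream=True):
        match event:
            case model.ResponseEndEvent():
                return event.content  # the fully accumulated response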
agentops
AgentOps 🕵️

AI agents suck. We're fixing that.

Build your next agent with benchmarks, observability, and replay analytics. AgentOps is the toolkit for evaluating and developing robust and reliable AI agents.

AgentOps is open beta. You can sign up for AgentOps here.

Quick Start ⌨️

pip install agentops

Session replays in 3 lines of code

Initialize the AgentOps client, and automatically get analytics on every LLM call.

import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
ao_client = agentops.Client(<INSERT YOUR API KEY HERE>)

...

# (optional: record specific functions)
@ao_client.record_action('sample function being recorded')
def sample_function(...):
    ...

# End of program
ao_client.end_session('Success')

# Woohoo You're done 🎉

Refer to our API documentation for detailed instructions.

Time travel debugging 🔮 (coming soon!)

Agent Arena 🥊 (coming soon!)

Evaluations Roadmap 🧭

Platform | Dashboard | Evals
✅ Python SDK | ✅ Multi-session and Cross-session metrics | ✅ Custom eval metrics
🚧 Evaluation builder API | ✅ Custom event tag tracking | 🔜 Agent scorecards
✅ Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard

Debugging Roadmap 🧭

Performance testing | Environments | LLM Testing | Reasoning and execution testing
✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loops and recursive thought detection
✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection
🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators
🔜 Agent controllers/skill tests | ✅ Honeypot and prompt injection detection (PromptArmor) | 🔜 API bill tracking | 🔜 Error breakpoint analysis
🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (i.e. Captchas) | 🔜 CI/CD integration checks | 🔜 Regression testing
🔜 Multi-agent framework visualization | | |

Callback handlers ↩️

Langchain

AgentOps works seamlessly with applications built using Langchain. To use the handler, install Langchain as an optional dependency:

pip install agentops[langchain]

To use the handler, import and set:

import os
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.langchain_callback_handler import LangchainCallbackHandler

AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])

llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
                 callbacks=[handler],
                 model='gpt-3.5-turbo')

agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True,
                         callbacks=[handler],  # You must pass in a callback handler to record your agent
                         handle_parsing_errors=True)

Check out the Langchain Examples Notebook for more details including Async handlers.

LlamaIndex

(Coming Soon)

Why AgentOps? 🤔

Our mission is to bring your agent from prototype to production.

Agent developers often work with little to no visibility into agent testing performance. This means their agents never leave the lab. We're changing that.

AgentOps is the easiest way to evaluate, grade, and test agents. Is there a feature you'd like to see AgentOps cover? Just raise it in the issues tab, and we'll work on adding it to the roadmap.
agentos
This project consists of two major pieces: the Python Component System (PCS) and AgentOS.Python Component System (PCS)PCS is an open sourcePython API, command line interface, and public web registryfor building, running, and sharing Python programs. The goals of PCS are to:Make Python program execution reproducible.Transparently manage Python virtual environments while providing a Python API forpipandvirtualenv.Simplify experiment tracking and code sharing.PCS does this by allowing you to explicitly specify dependencies and arguments for your program and then providing a thin runtime (currently based onMLflow) to automatically instrument your program’s execution. PCS is compatible with most frameworks that are used to build machine learning and reinforcement learning systems.AgentOSAgentOS is a set of libraries built on top of the Python Component System that make it easy to build, run, and share agents that use Reinforcement Learning (RL) to solve tasks.Key features of AgentOS:Easy to use Agent API for developing and running new agents.Apublic repositoryof popular RL environments and agents, and runs of those agents in those environments that can be reproduced with a single line of code.Example learning agents from different disciplines and research areas are available in theexample_agentsdirectory of the project source code.ConnectAsk questions or report bugs in PCS and AgentOS inGitHub Issuesor on thedev Discord.Find theAgentOS source code on Github.The Python Component System and AgentOS are alpha software; APIs and overall architecture are likely to change significantly over time. They are licensed under the Apache License, Version 2.0.QuickstartSee the agentos.orgquickstarts.DocumentationFor detailed documentation see theagentos.org docs.Development ProcessAgentOS usesGitHub Issuesto track development work. Submit any bug reports or feature requests to this issues tracker.For significant feature work (more than a couple dev days or something that fundamentally changes internal or external interfaces), we run a design process to solicit feedback from other collaborators. Read more about this process in theProposing Featuressection.To contribute to AgentOS, the general workflow is as follows:Sync with the core development team via theissue trackerso we can avoid unnecessary or duplicated work.Fork the AgentOS repo.Complete your feature work on a branch in your forked repo. Ensure all checks and tests pass.Issue a pull request from your forked repo into the central AgentOS repo. Assign a core developer to review.Address any comments and the core developer will merge once the PR looks good.Proposing FeaturesFor new features and other big chunks of work, AgentOS uses a design process centered around design proposals, discussions, and design docs. The goal of the process is to:Allow developers to think through a design, andAllow stakeholders to give feedback…before development begins.If you’d like to propose a feature, please follow the procedure found in thedesign_docs README. 
You can also browse existing design docs in the folder to get a feel for the general content and style.

Installing AgentOS From Source

To install agentos from source (e.g., to play with the example_agents), run the following:

git clone https://github.com/agentos-project/agentos.git
pip install -e agentos # you may want to do this in a virtualenv or conda env.

Testing

To run tests, first install the requirements (note, this script installs the Python requirements into the currently active virtual environment):

cd agentos # the project root, not the nested agentos/agentos dir
python install_requirements.py

Then run the tests:

pytest

Also, we use Github Actions to run tests with every commit and pull request (see the test workflow).

If you want the CLI to interact with a local development server, define the environment variable (or create a .env file) USE_LOCAL_SERVER=True.

To run website tests:

python install_requirements.py
cd web # the web directory contained in project root
python manage.py test

Note that some tests (e.g., see web/registry/tests/test_integration.py) test functionality for interacting with github repositories by fetching code from https://github.com/agentos-project/agentos. Where possible, in order to make it easy to have those tests run against code in a github repo that you can change during development without disrupting other PRs, the test code uses global variables defined in tests/utils.py to decide which github repo to use when testing.

If you make changes to code that is fetched from github for use by tests, then please follow this process for your PR:

While doing development, change the TESTING_GITHUB_REPO_URL and/or TESTING_BRANCH_NAME global variables in tests/utils.py to point to a version of your PR branch that you've pushed to github. We recommend commenting out the default "prod" values of these variables so that you can uncomment them in the next step when the PR is approved for merge.

After your PR is approved and right before it is merged, push the branch you used during testing to the test_prod branch of the agentos-project account https://github.com/agentos-project/agentos.git. Then update the variables in tests/utils.py (you should be able to just uncomment the lines you commented out in step 1 above, and delete the lines you added).

Building Docs

The documentation source is in the documentation directory and written in ReStructuredText. The docs are built using Sphinx. To build the docs, first install the dev requirements (note, this script will install requirements into the currently active Python virtual environment):

python install_requirements.py

Then use the build script:

python scripts/build_docs.py

Use the --help flag to learn more about other optional flags that build_docs.py takes, including --release (for publishing the docs) and --watch (for auto-recompiling the docs whenever doc source files are changed).

Notice that the build script puts the compiled docs into docs/<version_num> where version_num comes from pcs/version.py.

Or you can build the docs manually (e.g., to control where output goes):

sphinx-build documentation outdir # Or use sphinx-autobuild.
# Open and inspect outdir/index.html in your browser.

Publishing Docs to agentos.org

agentos.org is a github.io website where the AgentOS docs are hosted. To publish updated docs to agentos.org, checkout the website branch and build the docs per the instructions above, then create a PR against the agentos-dev/website branch. Once committed, those changes will become live at agentos.org automatically.

Assuming you have local branches tracking both the master and website branches, and all changes to the documentation source files have been committed in the master branch, the workflow to publish updated docs to agentos.org might look similar to:

git checkout website
git merge master
python scripts/build_docs.py --release -a # The -a is a `sphinx-build` flag.
git add docs
git commit -m "push updated docs to website for version X.Y.Z"
git push

Building README.rst

The main project README.rst is built via the script python scripts/build_readme.py, which re-uses sections of documentation. This avoids duplication of effort and lowers the chances that a developer will forget to update one or the other of the README or the docs.

To update README.rst, first familiarize yourself with its build script scripts/build_readme.py. There you can see which sections of documentation are included in README.rst, plus some text that is manually inserted directly into README.rst (e.g., the footer).

Releasing

Here are the steps for releasing AgentOS:

Build and check the distribution artifacts for the release by running:

python install_requirements.py
python setup.py sdist --formats=gztar,zip bdist_wheel
twine check dist/*

This will create a wheel file as well as tar.gz and zip source distribution files, and catch any blockers that PyPI would raise at upload time. Fix any errors before proceeding.

Create a release pull request (PR) that: removes the "-alpha" suffix from the version number in pcs/version.py, and contains draft release notes (summary of major changes).

Wait till the PR gets LGTMs from all other committers, then merge it.

Build and publish the docs for the new version, which involves creating a pull request against the website branch. This is required for all releases, even if the docs have not changed, since the docs are versioned. When you run the build_docs.py script, you will use the --release flag (see Building Docs & Publishing Docs to agentos.org for more details).

Create another follow-on PR that bumps the version number to X.Y.Z-alpha, which reflects that work going forward will be part of the next release (we use semantic versioning).

Push the release to PyPI (see Pushing Releases to PyPI).

Create a github release and upload the tar.gz and zip source code distribution files. This will create a git tag. For the tag name, use "vX.Y.Z" (e.g. v0.1.0).

Pushing Releases to PyPI

We make AgentOS available on PyPI. To push a release to PyPI, you can approximately follow these python.org instructions, which will probably look something like:

python install_requirements.py
rm -rf dist
python setup.py sdist --formats=gztar bdist_wheel
twine check dist/*
twine upload dist/*

This README was compiled from the project documentation via: python scripts/build_readme.py.
agent-os-py
Agent OS

🌞 Build autonomous AI agents! 🛠️

The Agent OS is an experimental framework and runtime to build sophisticated, long-running, and self-coding AI agents.

We believe that the most important super-power of AI agents is to write and execute their own code to interact with the world. But for that to work, they need to run in a suitable environment: a place designed to be inhabited by agents. The Agent OS is designed from the ground up to function as a long-term computing substrate for these kinds of self-evolving agents.

❓ If you want to understand the motivation behind the project, read the manifesto.
❓ To learn how everything works, start reading the design docs.

In short:

Think of the Agent OS as a substrate that is designed for the next generation of robust Auto-GPTs.
The current version is implemented in Python, but the runtime and protocol are designed to work with many different programming languages, and the runtime should eventually be rewritten in a systems language. The current runtime is a prototype (but a serious one).
The framework plays well with other AI libraries such as LangChain or LlamaIndex.
The system is designed for "local-first" agents that run on your machine, but it should also be possible to run parts of an agent in the cloud.
The data layer is made for longevity. We introduce a novel data structure that is inspired by Git. It's a content-addressable storage that maintains your agent's history and memory and makes sure you never lose data.
An agent's primary purpose is not to run large language models on your machine (although that's possible too), but to provide a place to execute the code and functions that are generated by these models. It then composes these functions into larger features and routes events and data to them.

To demonstrate the power of the Agent OS, this project comes with a demo agent called "Jetpack" that can chat with you and write self-executing programs. Click the image to watch a short demo on YouTube.

Getting Started

As long as the project is still in alpha, the best way to get started is to clone this repository. Requires Python >= 3.10 and Poetry. Works on Linux, macOS, and Windows.

Once cloned, cd into the repository and run:

poetry install

The project comes with a CLI that can be used to initiate and run agents. To see the available commands run:

poetry run aos --help

Test Drive an Agent 🤖

To run your first agent, put an .env file in the project root that contains an OpenAI API key. (It also works without an .env file or key if you just want to give the project a quick spin.)

OPENAI_API_KEY=sk-myspecialkey

Jetpack Agent

The "Jetpack" agent demonstrates what the Agent OS can do and is geared towards end-to-end code generation and execution. It's a conversational agent that can accomplish various tasks for you. To run it, execute the following commands:

poetry run aos -d agents/jetpack/ push
poetry run aos -d agents/jetpack/ run

Jetpack comes with a web interface, and the Agent OS hosts a web server. To access the interface, browse to http://127.0.0.1:5000/ag/jetpack/wit/actors/frontend/query/web

Project Structure

agents/: examples of agent implementations. See the folder for more details.
    jetpack/: the flagship agent demo, geared towards conversational task completion through code generation.
src/: contains all relevant modules that are part of the Agent OS.
    grit/: object store, inspired by Git.
    wit/: libraries to help write "wit" state transition functions.
    runtime/: the Agent OS runtime: actor executor, custom python module loader, and wit function resolver.
    sync/: modules to push and pull data into grit from the developer's file system.
    web/: the web server that provides the web interface.
    cli/: the "aos" (agent OS) CLI.
tests/: extensive unit tests that cover most of the /src packages and modules.
docs/: documentation, intended to be rendered as a GitBook.

Running the Tests

If you want to hack on the code, it is helpful to run the tests. The project uses pytest and has almost 100 unit tests.

poetry run pytest tests/
agent.pgep
UNKNOWN
agentpolis
No description available on PyPI.
agent-proto
No description available on PyPI.
agent-protocol
Agent Protocol - Python SDK

Reference implementation of the Agent Protocol in Python by the AI Engineers Foundation (AIEF). This SDK allows you to easily wrap your agent in a webserver compatible with the protocol - you only need to define the task and step handlers for your agent.

Installation

pip install agent-protocol

Then add the following code to your agent:

from agent_protocol import Agent, Step, Task


async def task_handler(task: Task) -> None:
    # TODO: Create initial step(s) for the task
    await Agent.db.create_step(task.task_id, ...)


async def step_handler(step: Step) -> Step:
    # TODO: handle next step
    if step.name == "print":
        print(step.input)
        step.is_last = True

    step.output = "Output from the agent"
    return step


if __name__ == "__main__":
    # Add the task handler and start the server
    Agent.setup_agent(task_handler, step_handler).start()

Customization

Database

By default, the SDK stores data in memory. You can customize the database by setting db to your own database object.

Agent.db = your_database

The database object must implement the methods from db.py.

Routes

You can also add your own routes to the server. For example:

from agent_protocol import Agent, router
from fastapi import APIRouter

my_router = APIRouter()


@my_router.get("/hello")
async def hello():
    return {"hello": "world"}


my_router.include_router(router)

task_handler = ...
step_handler = ...
Agent.setup_agent(task_handler, step_handler).start(router=my_router)

Testing

You can test the compliance of your agent using the following script:

URL=http://127.0.0.1:8000 bash -c "$(curl -fsSL https://agentprotocol.ai/test.sh)"

What this does is run a series of GET and POST requests against your agent to verify that it's compliant with the Agent Protocol.

Docs

You can find more info and examples in the docs.

Development

If you want to use the agent_protocol package from the cloned repository, you should be able to use poetry to set it up, like this:

git clone [email protected]:AI-Engineers-Foundation/agent-protocol.git
cd agent-protocol/sdk/python
poetry install
poetry run python examples/minimal.py

Feel free to open an issue if you run into any problems!
agent-protocol-client
Agent Protocol - Python Client

Installation

pip install agent-protocol-client

Getting Started

In your own code, to use this library to connect and interact with agent-protocol-client, you can run the following:

import time
import agent_protocol_client
from agent_protocol_client.rest import ApiException
from pprint import pprint

# Defining the host is optional and defaults to http://0.0.0.0:8000
# See configuration.py for a list of all supported configuration parameters.
configuration = agent_protocol_client.Configuration(host="http://0.0.0.0:8000")

# Enter a context with an instance of the API client
async with agent_protocol_client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = agent_protocol_client.AgentApi(api_client)
    task_request_body = agent_protocol_client.TaskRequestBody()  # TaskRequestBody | (optional)

    try:
        # Creates a task for the agent.
        api_response = await api_instance.create_agent_task(task_request_body=task_request_body)
        print("The response of AgentApi->create_agent_task:\n")
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling AgentApi->create_agent_task: %s\n" % e)

Documentation for API Endpoints

All URIs are relative to http://0.0.0.0:8000

Class | Method | HTTP request | Description
AgentApi | create_agent_task | POST /ap/v1/agent/tasks | Creates a task for the agent.
AgentApi | download_agent_task_artifact | GET /ap/v1/agent/tasks/{task_id}/artifacts/{artifact_id} | Download a specified artifact.
AgentApi | execute_agent_task_step | POST /ap/v1/agent/tasks/{task_id}/steps | Execute a step in the specified agent task.
AgentApi | get_agent_task | GET /ap/v1/agent/tasks/{task_id} | Get details about a specified agent task.
AgentApi | get_agent_task_step | GET /ap/v1/agent/tasks/{task_id}/steps/{step_id} | Get details about a specified task step.
AgentApi | list_agent_task_artifacts | GET /ap/v1/agent/tasks/{task_id}/artifacts | List all artifacts that have been created for the given task.
AgentApi | list_agent_task_steps | GET /ap/v1/agent/tasks/{task_id}/steps | List all steps for the specified task.
AgentApi | list_agent_tasks | GET /ap/v1/agent/tasks | List all tasks that have been created for the agent.
AgentApi | upload_agent_task_artifacts | POST /ap/v1/agent/tasks/{task_id}/artifacts | Upload an artifact for the specified task.

Documentation For Models

Artifact
GetAgentTask404Response
Pagination
Step
StepRequestBody
Task
TaskArtifactsListResponse
TaskListResponse
TaskRequestBody
TaskStepsListResponse
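Building on the quickstart above, the natural follow-up is to execute a step on the task that was created. A minimal sketch, assuming the generated method follows the same calling conventions as create_agent_task; the method and model names come from the tables above, but the exact keyword-argument names are assumptions:

# Sketch only: execute_agent_task_step and StepRequestBody are listed in the
# tables above; the keyword-argument names are assumptions.
async with agent_protocol_client.ApiClient(configuration) as api_client:
    api_instance = agent_protocol_client.AgentApi(api_client)
    step = await api_instance.execute_agent_task_step(
        task_id=api_response.task_id,  # the task created in the quickstart
        step_request_body=agent_protocol_client.StepRequestBody(),
    )
    pprint(step)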
agentpy
AgentPy - Agent-based modeling in Python

AgentPy is an open-source library for the development and analysis of agent-based models in Python. The framework integrates the tasks of model design, interactive simulations, numerical experiments, and data analysis within a single environment. The package is optimized for interactive computing with IPython, IPySimulate, and Jupyter.

Please cite this software as follows:

Foramitti, J., (2021). AgentPy: A package for agent-based modeling in Python. Journal of Open Source Software, 6(62), 3065, https://doi.org/10.21105/joss.03065

Installation: pip install agentpy
Documentation: https://agentpy.readthedocs.io
JOSS publication: https://doi.org/10.21105/joss.03065
Discussion forum: https://github.com/JoelForamitti/agentpy/discussions
Tutorials and examples: https://agentpy.readthedocs.io/en/latest/model_library.html
Comparison with other frameworks: https://agentpy.readthedocs.io/en/latest/comparison.html
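Since the description above stays high-level, here is a minimal sketch of what an AgentPy model can look like, loosely following the style of the library's tutorials; the wealth attribute is only an illustrative stand-in for real agent state.

import agentpy as ap

class MyAgent(ap.Agent):
    def setup(self):
        # Called once when the agent is created.
        self.wealth = 1  # illustrative state variable

class MyModel(ap.Model):
    def setup(self):
        # Create agents from the 'agents' parameter.
        self.agents = ap.AgentList(self, self.p.agents, MyAgent)

    def step(self):
        # AgentList broadcasts attribute operations to every agent.
        self.agents.wealth += 1

    def end(self):
        # Record each agent's final wealth for later analysis.
        self.agents.record('wealth')

results = MyModel({'agents': 10, 'steps': 5}).run()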
agent-py
A Python wrapper around the Agent API.
agent-reader
Agent Reader

Intelligent Reader for various kinds of media sources: web pages, Twitter threads, etc. Early alpha version. Please don't use it for anything serious.

Supported media sources

Web pages
GitHub projects
Arxiv papers annotations
Open Access (CVPR)
Twitter threads
Reddit threads
YouTube videos (aspirational; it's beyond text summarization)

Installation

pip install agent-reader

Usage

Look at the demo script for an example of usage.
agent-reporter
Agent Reporter
agent-reporting
Agent Reporting
agentrs
AgentPy-rsA Rust based implementation of the AgentPy library, to speed up both development and runtime of agent-based models.PrerequisitesTo quickly start a new project, all that you need isNix(only the package manager, not the OS).To install, simply runsh nix-install.sh(this will also go through some extra setup steps for you).If you are on MacOS, please seehow to setup a linux-builder(and don't forget tonix run nixpkgs#darwin.linux-builderwhenever you build with Nix)User guideAssume that you have Nix installed, it's as easy as:mkdirmy-abmcdmy-abm nixflakeinit-tgithub:Benni-Math/agentrsThen, you can start developing withnix develop, or open a Jupyter Notebook withnix run ., or build a Docker image for your model withnix build .#docker.
agents
No description available on PyPI.
agents-bar
Agents Bar Python Client

This package is a Python client for the Agents Bar service. It isn't supposed to be used in isolation and you are supposed to have an existing account. Check the website for more information about the service, or check the Agents Bar Docs to learn more about how to use this client.

Quick start

The client allows you to communicate with the service by wrapping its APIs and covering common use patterns.

A common usage is replacing your deep reinforcement learning agent with an entity that uses the step and act APIs for progressing the agent and inferring actions, respectively. For usage examples as Google Colab notebooks check the Doc's quick start link, and for scripts check the examples directory.

For a minimal (almost) working example check this code snippet:

from agents_bar import Client, RemoteAgent
from agents_bar import environments

# Define client to communicate with https://agents.bar. Make sure it's authenticated.
client = Client()

# Create an environment. A simple one is "CartPole-v1" from the OpenAI gym repo.
env_name = "CartPole"
environments.create(client, config={"name": env_name, "config": {"gym_name": "CartPole-v1"}})

# Create an agent. Since the environment is discrete we use DQN.
agent = RemoteAgent(client, agent_name="CartPoleAgent")
agent.create_agent(obs_size=4, action_size=2, agent_model="DQN")

# Initiate the learning loop. Observe the env's state, pass it to the agent,
# make a decision (action), execute it on the env. Repeat.
obs = environments.reset(client, env_name)
for iteration in range(10):
    action = agent.act(obs)
    out = environments.step(client, env_name, step={"actions": [action], "commit": True})
    next_obs, reward, done = out.get("observation"), out.get("reward"), out.get("done")
    agent.step(obs, action, reward, next_obs, done)
    obs = next_obs

Support

Agents Bar Client currently supports manipulation of agents, environments and experiments. We also provide an abstraction over the agent which allows you to use the agent as an object, the same as you are already using it.

The client is intended to be used for easy communication. Check the documentation for all available APIs. In most cases they should be the same as you see in https://agents.bar/docs.

Installation

Pip (Recommended)

The latest stable version should always be accessible through pip as agents-bar. To install locally add agents-bar to your dependency file, e.g. requirements.txt, or install it directly using pip install agents-bar

GitHub source

Checkout this package using git clone [email protected]:agents-bar/agents-bar-client-python. This will create a new directory agents-bar-client-python. Go ahead, enter the directory and install the package via pip install -e .. Note: we recommend having a separate Python environment for standalone projects, e.g. using the python -m venv command.

Authentication

To use the client you need to pass Agents Bar credentials or some proof that you're a user, e.g. an access_token. There are a few ways to authenticate your client.

Note: Never store your credentials in places easily accessible by others. This includes git repositories that have the slightest chance to leave your computer. Definitely nothing that goes to the GitHub/GitLab.

Environment variables (suggested)

The currently suggested approach for authentication is to set your token or credentials as environment variables. The client looks first for AGENTS_BAR_ACCESS_TOKEN and uses that as its access token. You can use this approach if you want to log in using a different application with securely stored credentials and temporarily set the access token.
Otherwise, you can also set your username and password inAGENTS_BAR_USERNAMEandAGENTS_BAR_PASSWORD, respectively.As an example, in unix, you can set environment variables by usingexportcommand in shellexportAGENTS_BAR_ACCESS_TOKEN=<access_token> ...or...exportAGENTS_BAR_USERNAME=<username>exportAGENTS_BAR_PASSWORD=<password>Instantiating with credentialsTheRemoteClientcan authenticate usingaccess_tokenor credentials (usernameandpassword) provided when instantiating the agent. Only one of these is required and theaccess_tokenhas priority over credentials pair. Also, note that directly passed variables have priority over the environment variables.access_token="<access_token>"username="<username>"password="<password>"client=RemoteClient(...,access_token=access_token,username=username,password=password)
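To tie the two authentication routes together, here is a small sketch of the environment-variable approach from Python; the variable name comes from the section above, and the client picking it up automatically is the behavior described there (normally you would export the variable in your shell rather than set it in code).

import os

# Set the token before constructing the client.
os.environ["AGENTS_BAR_ACCESS_TOKEN"] = "<access_token>"

from agents_bar import Client

client = Client()  # reads AGENTS_BAR_ACCESS_TOKEN from the environment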
agentscope
AgentScopeAgentScope is an innovative multi-agent platform designed to empower developers to build multi-agent applications with ease, reliability, and high performance. It features three high-level capabilities:Easy-to-Use: Programming in pure Python with various pre-built components for immediate use, suitable for developers or users with varying levels of customization requirements. Detailed documentation and examples are provided to help you get started, see ourTutorial.High Robustness: Supporting customized fault-tolerance controls and retry mechanisms to enhance application stability.Actor-Based Distribution: Enabling developers to build distributed multi-agent applications in a centralized programming manner for streamlined development.If you find our work helpful, please kindly citeour paper.Welcome to join our community onDiscordDingTalkWeChatTable of ContentsAgentScopeTable of ContentsInstallationFrom sourceUsing pipQuick StartBasic UsageStep 1: Prepare Model ConfigsOpenAI API ConfigPost Request API ConfigStep 2: Create AgentsStep 3: Construct ConversationAdvanced UsagePipelineandMsgHubCustomize Your Own AgentBuilt-in ResourcesAgent PoolServicesExample ApplicationsLicenseContributingReferencesInstallationTo install AgentScope, you need to have Python 3.9 or higher installed.Note: This project is currently in active development, it's recommended to install AgentScope from source.From sourceRun the following commands to install AgentScope in editable mode.# Pull the source code from githubgitclonehttps://github.com/modelscope/agentscope.git# Install the package in editable modecdAgentScope pipinstall-e.Building a distributed multi-agent application relies ongRPClibraries, and you can install the required dependencies as follows.# On windowspipinstall-e.[distribute]# On macpipinstall-e.\[distribute\]Using pipUse the following command to install the latest released AgentScope.pipinstallAgentScopeQuick StartBasic UsageTaking a multi-agent application with user and assistant agent as an example, you need to take the following steps:Step 1: Prepare Model ConfigsStep 2: Create AgentsStep 3: Construct ConversationStep 1: Prepare Model ConfigsAgentScope supports the following model API services:OpenAI Python APIs, includingOpenAI Chat, DALL-E and Embedding APIOpenAI-Compatible platforms, e.g.FastChatandvllmPost request APIs, includingHuggingFaceandModelScopeinference APIsCustomized model APIsModel Type ArgumentSupport APIsOpenAI Chat APIopenaiStandard OpenAI Chat API, FastChat and vllmOpenAI DALL-E APIopenai_dall_eStandard DALL-E APIOpenAI Embedding APIopenai_embeddingOpenAI embedding APIPost APIpost_apiHuggingface/ModelScope inference API, and customized post APIOpenAI API ConfigFor OpenAI APIs, you need to prepare a dict of model config with the following fields:{ "config_name": "{config name}", # The name to identify the config "model_type": "openai" | "openai_dall_e" | "openai_embedding", "model_name": "{model name, e.g. gpt-4}", # The model in openai API # Optional "api_key": "xxx", # The API key for OpenAI API. If not set, env # variable OPENAI_API_KEY will be used. "organization": "xxx", # The organization for OpenAI API. If not set, env # variable OPENAI_ORGANIZATION will be used. }Post Request API ConfigFor post requests APIs, the config contains the following fields.{ "config_name": "{config name}", # The name to identify the config "model_type": "post_api", "api_url": "https://xxx", # The target url "headers": { # Required headers ... 
}, }AgentScope provides fruitful scripts to fast deploy model services inScripts. For more details of model services, refer to ourTutorialandAPI Document.Step 2: Create AgentsCreate built-in user and assistant agents as follows.fromagentscope.agentsimportDialogAgent,UserAgentimportagentscope# Load model configsagentscope.init(model_configs="./model_configs.json")# Create a dialog agent and a user agentdialog_agent=DialogAgent(name="assistant",model_config_name="your_config_name")user_agent=UserAgent()Step 3: Construct ConversationIn AgentScope,messageis the bridge among agents, which is adictthat contains two necessary fieldsnameandcontentand an optional fieldurlto local files (image, video or audio) or website.fromagentscope.messageimportMsgx=Msg(name="Alice",content="Hi!")x=Msg("Bob","What about this picture I took?",url="/path/to/picture.jpg")Start a conversation between two agents (e.g. dialog_agent and user_agent) with the following code:x=NonewhileTrue:x=dialog_agent(x)x=user_agent(x)ifx.content=="exit":# user input "exit" to exit the conversationbreakAdvanced UsagePipelineandMsgHubTo simplify the construction of agents communication, AgentScope provides two helpful tools:PipelineandMsgHub.Pipeline: It allows users to program a communication among agents easily. Taking a sequential pipeline as an example, the following two codes are equivalent, but pipeline is more convenient and elegant.Passing message throught agent1, agent2 and agent3WITHOUTpipeline:x1=agent1(input_msg)x2=agent2(x1)x3=agent3(x2)WITHobject-level pipeline:fromagentscope.pipelinesimportSequentialPipelinepipe=SequentialPipeline([agent1,agent2,agent3])x3=pipe(input_msg)WITHfunctional-level pipeline:fromagentscope.pipelines.functionalimportsequentialpipelinex3=sequentialpipeline([agent1,agent2,agent3],x=input_msg)MsgHub: To achieve a group conversation, AgentScope provides message hub.Achieving group conversationWITHOUTmsghub:x1=agent1(x)agent2.observe(x1)# The message x1 should be broadcast to other agentsagent3.observe(x1)x2=agent2(x1)agent1.observe(x2)agent3.observe(x2)Withmsghub: In a message hub, the messages from participants will be broadcast to all other participants automatically. In such case, participated agents even don't need input and output messages explicitly. All we need to do is to decide the order of speaking. Besides,msghubalso supports dynamic control of participants as follows.fromagentscopeimportmsghubwithmsghub(participants=[agent1,agent2,agent3])ashub:agent1()# `x = agent1(x)` is also okayagent2()# Broadcast a message to all participantshub.broadcast(Msg("Host","Welcome to join the group conversation!"))# Add or delete participants dynamicallyhub.delete(agent1)hub.add(agent4)Customize Your Own AgentTo implement your own agent, you need to inherit theAgentBaseclass and implement thereplyfunction.fromagentscope.agentsimportAgentBaseclassMyAgent(AgentBase):defreply(self,x):# Do something here, e.g. calling your model and get the raw field as your agent's responseresponse=self.model(x).rawreturnresponseBuilt-in ResourcesAgentScope provides built-in resources for developers to build their own applications easily. 
More built-in agents, services and examples are coming soon!Agent PoolUserAgentDialogAgentDictDialogAgentRpcDialogAgent...ServicesWeb Search ServiceCode Execution ServiceRetrieval ServiceDatabase ServiceFile Service...Example ApplicationsExample of Conversation:examples/ConversationExample of Werewolf:examples/WerewolfExample of Distributed Agents:examples/Distributed Agents...More built-in resources are coming soon!LicenseAgentScope is released under Apache License 2.0.ContributingContributions are always welcomed!We provide a developer version with additional pre-commit hooks to perform checks compared to the official version:# For windowspipinstall-e.[dev]# For macpipinstall-e.\[dev\]# Install pre-commit hookspre-commitinstallPlease refer to ourContribution Guidefor more details.ReferencesIf you find our work helpful for your research or application, please citeour paper:@article{agentscope, author = {Dawei Gao and Zitao Li and Weirui Kuang and Xuchen Pan and Daoyuan Chen and Zhijian Ma and Bingchen Qian and Liuyi Yao and Lin Zhu and Chen Cheng and Hongzhu Shi and Yaliang Li and Bolin Ding and Jingren Zhou}, title = {AgentScope: A Flexible yet Robust Multi-Agent Platform}, journal = {CoRR}, volume = {abs/2402.14034}, year = {2024}, }
agentscript
Agentscript

A programming language for AI agents

Explore the docs » View Demo · Report Bug · Request Feature

Agentscript is an isomorphic programming language for AI agents: it includes both a server-side interpreter and a browser-based renderer.

▶ On the server, agentscript directly interprets LLM outputs and executes actions in a streaming manner.
▶ In the browser, agentscript renders LLM outputs in a rich manner and provides live updates.

Server-side Interpreter

The server-side interpreter is implemented in agentscript.rs and provides a Python interface. The interpreter is capable of executing actions in a streaming manner.

Install

pip install agentscript

Usage

Execute the translator tool from Python:

from agentscript import Interpreter
from opentool import Tool, action

# Create a fake translator tool
class Translator(Tool):
    """A fake translator tool"""

    @action
    def translate(self, text: str) -> str:
        return "Hola"

# Create the interpreter supplying our translator tool
interpreter = Interpreter(tools=[Translator()])

# Parse the script, find the tool, and execute it
interpreter.execute("""
We need to do translation <Invoke tool="Translator" action="translate" parameters={"text": "Hello"} />
""")

# Show the past invocations with the results
print(interpreter.invocations())

Browser-based Renderer

Install

npm i @agentsea/agentscript

Usage

import Agentscript from "@agentsea/agentscript";

(<Agentscript text="I created this image for you <Image url='https://any.url' />" url="...">)

Components

Boolean

(<Agentscript text="Does this work for you? <Boolean />">)

Image

(<Agentscript text="I created this image for you <Image src='https://any.url' />">)

Invoke

(<Agentscript text='Lets check the weather <Invoke tool="Weather" action="get" parameters={"location": "Madrid"} />'>)

Dialects

The above components are known as the common dialect, but you can create your own dialects as well.

Roadmap

Programmable LLMs

Give agentscript the ability to program an LLM. Here we load a LoRA into a PEFT-compatible LLM:

interpreter.execute("I need to load the weather LoRA <Load lora='weather' />")

Browser components

Select One
Select Any
Video
Plan
References
Image with bounding box
Task
Markdown
Code blocks

Develop

To test:

make test

To publish:

make publish
agentscript-pyo3
agentscript.pyo3Python bindings for the Rust agentscript interpreter.BindsAgentscript.rstoAgentscript.Installpipinstallagentscript_pyo3Usagefromagentscript_pyo3importParserparser=Parser()message='We need to do translation <Invoke tool="Translator" action="translate" parameters={"text": "Hello", "options": {"from": "en", "to": "es"}} /> with some tailing text'parser.parse(message)parsed_data=parser.get_parsed_data()print(parsed_data)DevelopTo testpython-mvenv.venvsource.venv/bin/activate maturindevelop maketestTo publishmaturinpublish
agents-deconstructed
No description available on PyPI.
agentsdk
Pure-Python client library for the AgentSDK-RPCd server, providing access to the NewRelic AgentSDK via a local RPC daemon.
agent-search
AgentSearch: A framework for powering search agents and enabling customizable local search.

AgentSearch is a framework for powering search agents by seamlessly integrating LLM technologies from various providers with different search engines. This integration enables search agents to perform a wide range of functions through Retrieval-Augmented Generation (RAG), including summarizing search results, generating new queries, and retrieving detailed downstream results.

Features of AgentSearch
- Search Agent Integration: Effortlessly build a search agent by connecting any search-specialized LLM, such as Sensei-7B, with a supported search engine.
- Customizable Search: Utilize the AgentSearch dataset in conjunction with this framework to deploy a customizable local search engine.
- API Endpoint Integration: Seamlessly integrate with a variety of hosted provider APIs for diverse search solutions, offering ease of use and flexibility, including Bing, SERP API, and AgentSearch. Additionally, support is provided for LLMs from SciPhi, HuggingFace, OpenAI, Anthropic, and more.

Quickstart Guide

Installation

```
pip install agent-search
```

Configuration

Get your free API key from SciPhi and set it in your environment:

```
export SCIPHI_API_KEY=$MY_SCIPHI_API_KEY
```

Usage

Call a pre-configured search agent endpoint:

```python
# Requires SCIPHI_API_KEY in the environment
from agent_search import SciPhi

client = SciPhi()

# Search, then summarize the result and generate related queries
agent_summary = client.get_search_rag_response(query='latest news', search_provider='bing', llm_model='SciPhi/Sensei-7B-V1')
print(agent_summary)
# { 'response': '...', 'other_queries': '...', 'search_results': '...' }
```

Standalone searches, including from the AgentSearch search engine, are supported:

```python
# Requires SCIPHI_API_KEY in the environment
from agent_search import SciPhi

client = SciPhi()

# Perform a search
search_response = client.search(query='Quantum Field Theory', search_provider='agent-search')
print(search_response)
# [{ 'score': '.89', 'url': 'https://...', 'metadata': {...} }]
```

Code your own custom search agent workflow:

```python
# Requires SCIPHI_API_KEY in the environment
from agent_search import SciPhi

client = SciPhi()

# Specify instructions for the task
instruction = "Your task is to perform retrieval augmented generation (RAG) over the given query and search results. Return your answer in a json format that includes a summary of the search results and a list of related queries."
query = "What is Fermat's Last Theorem?"

# Construct the search context
search_response = client.search(query=query, search_provider='agent-search')
search_context = "\n\n".join(
    f"{idx + 1}. Title: {item['title']}\nURL: {item['url']}\nText: {item['text']}"
    for idx, item in enumerate(search_response)
)

# Prefix to enforce a JSON response
json_response_prefix = '{"summary":'

# Prepare the prompt
formatted_prompt = f"### Instruction: {instruction}\n\nQuery:\n{query}\n\nSearch Results:\n{search_context}\n\nQuery:\n{query}\n### Response:\n{json_response_prefix}"

# Generate a completion with Sensei-7B-V1
completion = json_response_prefix + client.completion(formatted_prompt, llm_model_name="SciPhi/Sensei-7B-V1")
print(completion)
# {
#   "summary": "\nFermat's Last Theorem is a mathematical proposition first prop ... ",
#   "other_queries": ["The role of elliptic curves in the proof of Fermat's Last Theorem", ...]
# }
```

Community & Support
- Engage with Us: Join our Discord community for discussions and updates.
- Feedback & Inquiries: Contact us via email for personalized support.

Additional Notes
- Execute commands from the root directory of the AgentSearch project.
- User Guide coming soon!
agentserver
agentserver

A server that allows you to control and monitor supervisor agent instances.

Commands
- agentserver
- agentserver admin
- agentserver echo

Note: This project is under heavy development. It currently requires Druid, PlyQL, and Kafka to run.

TODOs
- Make Druid and Kafka optional dependencies.
- Documentation.
- A lot of the code relies heavily on integration tests for the HTTP and websocket APIs. Add unit tests.
- Expand code coverage.
- A user-friendly way to create and initialize databases from agentserver admin.
agentshell
A shell for your agent. Track state and history, multiple shells, and more.

Installation

```
pip install agentshell
```

Documentation

get_files_in_cwd(shell_id=None)
Returns a list of files in the current directory of a specific shell. If shell_id is not specified, uses the current shell.
- Parameters: shell_id: The unique identifier of the shell.
- Returns: A list of filenames in the current directory.

get_current_shell()
Returns the unique identifier of the current shell. If no shell is currently active, creates a new shell and returns its identifier.
- Returns: The unique identifier of the current shell.

set_cwd(cwd, shell_id=None)
Sets the current working directory of a specific shell. If shell_id is not specified, uses the current shell.
- Parameters: cwd: The new current working directory. shell_id: The unique identifier of the shell.

set_current_shell(shell_id)
Sets the current shell to the shell with the specified identifier.
- Parameters: shell_id: The unique identifier of the shell to be made current.

get_history(shell_id=None, n_limit=20)
Returns the command history of a specific shell. If shell_id is not specified, uses the current shell.
- Parameters: shell_id: The unique identifier of the shell. n_limit: The maximum number of history entries to return.
- Returns: A list of dictionaries, each representing a command and its result.

get_history_formatted(shell_id=None)
Returns the command history of a specific shell in a human-readable format. If shell_id is not specified, uses the current shell.
- Parameters: shell_id: The unique identifier of the shell.
- Returns: The command history in human-readable format.

add_to_shell_history(shell_id, command, success, output, error=None)
Adds a command and its result to the history of a specific shell.
- Parameters: shell_id: The unique identifier of the shell. command: The command that was executed. success: Whether the command was successful. output: The output of the command. error: Any error messages produced by the command.

clear_history(shell_id)
Clears the command history of a specific shell.
- Parameters: shell_id: The unique identifier of the shell.

wipe_all()
Clears all shell and shell history data.

list_active_shells()
Returns a list of active shells.
- Returns: A list of shell identifiers.

close_shell(shell_id)
Closes a specific shell, clearing its history.
- Parameters: shell_id: The unique identifier of the shell.

new_shell()
Creates a new shell and returns its unique identifier.
- Returns: The unique identifier of the new shell.

get_cwd(shell_id=None)
Returns the current working directory of a specific shell. If shell_id is not specified, uses the current shell.
- Parameters: shell_id: The unique identifier of the shell.
- Returns: The current working directory of the shell.

run_command(command, shell_id=None)
Runs a command in a specific shell and adds it to the shell's history. If shell_id is not specified, uses the current shell.
- Parameters: command: The command to execute. shell_id: The unique identifier of the shell.
- Returns: True if the command was successful, False otherwise.

Contributions Welcome

If you like this library and want to contribute in any way, please feel free to submit a PR and I will review it. Please note that the goal here is simplicity and accessibility, using common language and few dependencies.
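A typical session ties these functions together. The sketch below assumes the documented functions are importable from the top-level agentshell module; that import path is an assumption, not something the documentation above confirms:

```python
# Minimal usage sketch; assumes top-level imports from agentshell (unverified).
from agentshell import new_shell, run_command, get_cwd, get_history_formatted, close_shell

shell_id = new_shell()                            # create an isolated shell
run_command("ls -la", shell_id=shell_id)          # run a command; returns True on success
print(get_cwd(shell_id=shell_id))                 # inspect the shell's working directory
print(get_history_formatted(shell_id=shell_id))   # human-readable command history
close_shell(shell_id)                             # close the shell and clear its history
```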
agentsmith
Agent Smith is a transaction monitor for Neo4j 3.4 and above. Enterprise edition is required.

Installation

```
$ pip install agentsmith
```

Execution

```
$ export NEO4J_PASSWORD="password"
$ agentsmith localhost:7687
```
agent-smith-ai
Agent Smith (AI)

Agent Smith makes it easy to instantiate AI agents that can safely and easily call APIs and locally defined functions to interact with the world. It is currently designed to use OpenAI's function-calling models and thus requires an OpenAI API key.

Current features:
- Auto-summarization of conversations approaching the model's context window size.
- User messages are checked with OpenAI's moderations endpoint by default and rejected if toxic.
- Messages, including function-call and result messages, are yielded to the caller in a stream.
- An optional token bucket allows built-in rate limiting.
- A basic, easy-to-deploy streamlit-based UI.

Basic Usage

Primary functionality is provided by an agent_smith_ai.utility_agent.UtilityAgent class, which yields Message objects in response to user questions. Agents also manage internal chat state, including the system prompt, chat history, token usage, and auto-summarization when the conversation length nears the context length. Finally, using OpenAI's function-calling models, they can register endpoints of REST APIs, as well as locally defined methods and functions, as callable functions.

Here's some code from the basic example in examples/monarch_basic.py, which makes calls to a Monarch Initiative API. We start by using dotenv to read a .env file defining our OPENAI_API_KEY environment variable if present (we just need some way to access the key). We inherit from the UtilityAgent class, defining a name and system message for the agent.

```python
from agent_smith_ai.utility_agent import UtilityAgent

import textwrap
import os
from typing import Any, Dict

# load environment variables from .env file
import dotenv
dotenv.load_dotenv()


## A UtilityAgent can call API endpoints and local methods
class MonarchAgent(UtilityAgent):

    def __init__(self, name):
        ## define a system message
        system_message = textwrap.dedent(f"""
            You are the Monarch Assistant, an AI-powered chatbot that can answer questions about data from the Monarch Initiative knowledge graph.
            You can search for entities such as genes, diseases, and phenotypes by name to get the associated ontology identifier.
            You can retrieve associations between entities via their identifiers.
            Users may use synonyms such as 'illness' or 'symptom'. Do not assume the user is familiar with biomedical terminology.
            Always add additional information such as lay descriptions of phenotypes.
            If the user changes the show function call setting, do not make any further function calls immediately.
            IMPORTANT: Include markdown-formatted links to the Monarch Initiative for all results using the templates provided by function call responses.
            """).strip()
```

Next in the constructor, we call the parent constructor, which defines various agent properties.

```python
        super().__init__(name,                               # name of the agent
                         system_message,                     # OpenAI system message
                         model="gpt-3.5-turbo-0613",         # OpenAI model name
                         openai_api_key=os.environ["OPENAI_API_KEY"],  # API key; will default to the OPENAI_API_KEY env variable
                         auto_summarize_buffer_tokens=500,   # summarize and clear the history when fewer than this many tokens remain in the context window; checked prior to each message sent to the model
                         summarize_quietly=False,            # if True, do not alert the user when a summarization occurs
                         max_tokens=None,                    # maximum number of tokens this agent can bank (default: None, no limit)
                         token_refill_rate=10000.0 / 3600.0) # number of tokens to add to the bank per second
```

Still in the constructor, we can register some API endpoints for the agent to call.
It is possible to register multiple APIs.

```python
        ## register some API endpoints (inherited from UtilityAgent)
        ## the openapi.json spec must be available at the spec_url:
        ## callable endpoints must have a "description" and "operationId"
        ## params can be in body or query, but must be fully specified
        self.register_api("monarch",  # brief alphanumeric ID, used internally
                          spec_url="https://oai-monarch-plugin.monarchinitiative.org/openapi.json",
                          base_url="https://oai-monarch-plugin.monarchinitiative.org",
                          callable_endpoints=['search_entity',
                                              'get_disease_gene_associations',
                                              'get_disease_phenotype_associations',
                                              'get_gene_disease_associations',
                                              'get_gene_phenotype_associations',
                                              'get_phenotype_gene_associations',
                                              'get_phenotype_disease_associations'])
```

Finally, the constructor is also where we register methods that the agent can call. Agent-callable methods are defined like normal, but to be properly callable they should be type-annotated and documented with docstrings parsable by docstring-parser.

```python
        ## the agent can also call local methods, but we have to register them
        self.register_callable_functions({'compute_entropy': self.compute_entropy})

    ## Callable methods should be type-annotated and well-documented with docstrings parsable by the docstring_parser library
    def compute_entropy(self, items: Dict[Any, int]) -> float:
        """Compute the information entropy of a given set of item counts.

        Args:
            items (str): A dictionary of items and their counts.

        Returns:
            The information entropy of the item counts."""
        from math import log2
        total = sum(items.values())
        return -sum([count / total * log2(count / total) for count in items.values()])
```

The above will allow the model to accurately answer questions like "What is the entropy of the tile counts in a standard Scrabble set?"!

To use the agent, we first instantiate it and define a question to ask. The agent's .chat() method takes the question and yields a stream of Message objects. It may yield multiple Message objects if the agent decides to call a function to answer the question. The first yielded Message will have is_function_call set to True and information about the function call in other fields. The second message will be the result of the function call in content, with role set to "function"; this is sent back to the model, resulting in a third message yielded with the model's response in content and role set to "assistant". It may be that the model's immediate response is another function call, in which case function calls and results will continue to be yielded.
It is also possible to yield the system message to the stream with yield_system_message, and the question itself with yield_prompt_message, prior to the main message stream.

Messages are pydantic models, so message.model_dump() converts each message to a dictionary.

```python
agent = MonarchAgent("Monarch Assistant")

question = "What genes are associated with Cystic Fibrosis?"

## agent.chat(question) may result in a series of Message objects (which may consist of a series of function-call messages,
## function-call responses, and other messages)
## by default, the system message and initial prompt question are not included in the output, but can be
for message in agent.chat(question, yield_system_message=True, yield_prompt_message=True, author="User"):
    ## each Message object has the following attributes and defaults:
    # role: str // required, either "user", "assistant", or "function" (as used by OpenAI API)
    # author: str = None // the name of the author of the message
    # intended_recipient: str = None // the name of the intended recipient of the message
    # is_function_call: bool = False // whether the message represents the model attempting to make a function call
    # content: Optional[str] = None // the content of the message (as used by OpenAI API)
    # func_name: Optional[str] = None // the function name the model is trying to call (if is_function_call is True)
    # func_arguments: Optional[Dict[str, Any]] = None // the function arguments the model is trying to pass (if is_function_call is True)
    # finish_reason: Optional[str] = None // (as used by the OpenAI API, largely ignorable)
    ## the author and intended_recipient may be useful for multi-agent conversations or logging; they will typically be filled
    ## with agent names, "User", or the agent name and the function it is trying to call
    print("\n\n", message.model_dump())
```

Once a chat has been initialized this way, it can be continued with further calls to .chat():

```python
question_followup = "What other diseases are associated with the first one you listed?"

for message in agent.chat(question_followup, yield_prompt_message=True, author="User"):
    print("\n\n", message.model_dump())

question_followup = "What is the entropy of a standard tile set in Scrabble?"

for message in agent.chat(question_followup, yield_prompt_message=True, author="User"):
    print("\n\n", message.model_dump())
```

Other functionality provided by agents includes .set_api_key() for changing an agent's API key mid-conversation, .clear_history() for clearing an agent's conversation history (but not its token usage), and .compute_token_cost() to estimate the total token cost of a potential message, including the conversation history and function definitions. The basic UtilityAgent comes with two callable functions by default, time() and help(), which report the current date and time to the model, and a summary of callable functions and API endpoints, respectively.

Streamlit-based UI

This package includes a basic, opinionated web UI for serving agents based on streamlit; examples/streamlit_app.py provides an example. We assume an agent class such as MonarchAgent in examples/monarch_agent.py has been defined; this example is defined to accept the model name (e.g. gpt-3.5-turbo-0613) during agent creation.

```python
from monarch_agent import MonarchAgent
import agent_smith_ai.streamlit_server as sv
import os
import dotenv

dotenv.load_dotenv()  # load env variables defined in .env file (if any)
```

Next we initialize the application, specifying the page title, icon, and other application features.
Arguments are passed to streamlit's set_page_config(), and calling this once before the other functions below is required.

```python
sv.initialize_app_config(
    page_title="Monarch Assistant",
    page_icon="https://avatars.githubusercontent.com/u/5161984?s=200&v=4",
    initial_sidebar_state="collapsed",  # or "expanded"
    menu_items={
        "Get Help": "https://github.com/monarch-initiative/agent-smith-ai/issues",
        "Report a Bug": "https://github.com/monarch-initiative/agent-smith-ai/issues",
        "About": "Agent Smith (AI) is a framework for developing tool-using AI-based chatbots.",
    },
)
```

Next we define some agents. In order to make this performant with streamlit, we define a function that returns a dictionary of agents when called, and then pass this function to the sv.set_app_agents() function. The agent dictionary keys are used to define agent names, with values containing the agent object itself in "agent", a "greeting" that is shown to the user by the agent when first loaded (but that is not part of the agent's conversation history), and avatars for both the user and the agent, which can be characters (including unicode/emoji) or URLs to images.

```python
def get_agents():
    return {
        "Monarch Assistant": {
            "agent": MonarchAgent("Monarch Assistant", model="gpt-3.5-turbo-16k-0613"),
            "greeting": "Hello, I'm the Monarch Assistant.",
            "avatar": "https://avatars.githubusercontent.com/u/5161984?s=200&v=4",
            "user_avatar": "👤",
        },
        "Monarch Assistant (GPT-4)": {
            "agent": MonarchAgent("Monarch Assistant (GPT-4)", model="gpt-4-0613"),
            "greeting": "Hello, I'm the Monarch Assistant, based on GPT-4.",
            "avatar": "https://avatars.githubusercontent.com/u/5161984?s=200&v=4",
            "user_avatar": "👤",
        }
    }

# tell the app to use that function to create agents when needed
sv.set_app_agents(get_agents)
```

We can set a default OpenAI API key to use. If one is not provided this way, the user will need to enter one in the sidebar to chat. If one is set this way, the user can still enter their own key if they like, which will override the default key.

```python
sv.set_app_default_api_key(os.environ["OPENAI_API_KEY"])
```

Lastly, we start the app.

```python
sv.serve_app()
```

To run the app, install streamlit and run streamlit run examples/streamlit_app.py. Messages are logged as they are generated and associated with session IDs for conversation tracking.

Notes on streamlit

Streamlit is a framework designed to make it easy to develop and deploy python-based web applications. Its execution model involves re-running the entire python script every time the UI changes or an action is taken, using deliberate state tracking and making heavy use of caching for efficiency. Beware of this if attempting to do extra work as part of the main application.

It is also easy to publish your streamlit app to their community cloud. Note that at this time the application does not handle user authentication or rate-limiting of any kind.

You may want to add a .streamlit/config.toml in the same directory as your app to adjust the default theme settings:

```toml
[theme]
base = "light"
primaryColor = "#4bbdff"
```

Additional Experiments and Examples

These are not complete and may be moved, but the following are currently included here:

- agent_smith_ai.CLIAgent: A basic command-line agent with some formatting and markdown rendering provided by rich.
It may be inherited in the same way as UtilityAgent for added functionality.
- agent_smith_ai/bash_agent/main.py: Early version of a command-line-based AI assistant that can write and execute (after confirmation) complex commands.

Here's an example conversation from examples/monarch_cli.py, which uses the CLIAgent (sadly, markdown-defined colors and links are not shown here):

╭──────────────────────────────────────────── Welcome ─────────────────────────────────────────────╮ │ Hello! I'm the Monarch Assistant, an AI-powered chatbot that can answer questions about genes, │ │ diseases, and phenotypes, based on information hosted at https://monarchinitiative.org. │ │ │ │ • You can exit by saying 'exit', and you can request that I turn on or off function call │ │ responses by saying 'show function calls' or 'hide function calls' at any time. They are │ │ shown by default. │ │ • I do not currently implement context-window management, so after a while your conversation │ │ will produce an error. │ │ • For a bit of fun, try asking me to describe my plan. For example, "What are the symptoms of │ │ Cystic Fibrosis? Describe your plan before you execute it." │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ User: What are the symptoms of Cystic Fibrosis? Describe your plan before you execute it. ╭─ Monarch Assistant -> Monarch Assistant (monarch-search_entity function) ────────────────────────╮ │ To find the symptoms of Cystic Fibrosis, I will first search for the disease using the term │ │ "Cystic Fibrosis" in the Monarch Initiative knowledge graph. Once I have the ontology identifier │ │ for Cystic Fibrosis, I will use it to retrieve the associated phenotypes. These phenotypes │ │ represent the symptoms of the disease. I will then provide a description of the symptoms of │ │ Cystic Fibrosis based on the retrieved information. │ │ │ │ Here is my plan: │ │ │ │ 1 Search for the disease "Cystic Fibrosis" in the Monarch Initiative knowledge graph. │ │ 2 Retrieve the ontology identifier for Cystic Fibrosis. │ │ 3 Use the ontology identifier to retrieve the associated phenotypes. │ │ 4 Provide a description of the symptoms of Cystic Fibrosis based on the retrieved information. │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant -> Monarch Assistant (monarch-search_entity function) ────────────────────────╮ │ │ │ monarch-search_entity(params = {'term': 'Cystic Fibrosis', 'category': 'biolink:Disease', │ │ 'limit': 1}) │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant (monarch-search_entity function) -> Monarch Assistant ────────────────────────╮ │ │ │ { │ │ "results": [ │ │ { │ │ "id": "MONDO:0009061", │ │ "name": "cystic fibrosis", │ │ "categories": [ │ │ "biolink:Disease" │ │ ], │ │ "description": "Cystic fibrosis (CF) is a genetic disorder characterized by the │ │ production of sweat with a high salt content and mucus secretions with an abnormal viscosity."
│ │ } │ │ ], │ │ "total": 3 │ │ } │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant -> Monarch Assistant (monarch-get_disease_phenotype_associations function) ───╮ │ │ │ monarch-get_disease_phenotype_associations(params = {'disease_id': 'MONDO:0009061', 'limit': │ │ 10}) │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant (monarch-get_disease_phenotype_associations function) -> Monarch Assistant ───╮ │ │ │ { │ │ "associations": [ │ │ { │ │ "metadata": { │ │ "frequency_qualifier": null, │ │ "onset_qualifier": null │ │ }, │ │ "publications": [], │ │ "phenotype": { │ │ "metadata": {}, │ │ "publications": [], │ │ "phenotype_id": "HP:0004401", │ │ "label": "Meconium ileus (HPO)" │ │ } │ │ }, │ │ { │ │ "metadata": { │ │ "frequency_qualifier": null, │ │ "onset_qualifier": null │ │ }, │ │ "publications": [], │ │ "phenotype": { │ │ "metadata": {}, │ │ "publications": [], │ │ "phenotype_id": "HP:0006528", │ │ "label": "Chronic lung disease (HPO)" │ │ } │ │ }, │ │ { │ │ "metadata": { │ │ "frequency_qualifier": null, │ │ "onset_qualifier": null │ │ }, │ │ "publications": [], │ │ "phenotype": { │ │ "metadata": {}, │ │ "publications": [], │ │ "phenotype_id": "HP:0006538", │ │ "label": "Recurrent bronchopulmonary infections (HPO)" │ │ } │ │ }, │ │ { │ │ "metadata": { │ │ "frequency_qualifier": null, │ │ "onset_qualifier": null │ │ }, │ │ "publications": [], │ │ "phenotype": { │ │ "metadata": {}, │ │ "publications": [], │ │ "phenotype_id": "HP:0100582", │ │ "label": "Nasal polyposis (HPO)" │ │ } │ │ }, │ │ { │ │ "metadata": { │ │ "frequency_qualifier": null, │ │ "onset_qualifier": null │ │ }, │ │ "publications": [], │ │ "phenotype": { │ │ "metadata": {}, │ │ "publications": [], │ │ "phenotype_id": "HP:0002035", │ │ "label": "Rectal prolapse (HPO)" │ │ } │ │ }, │ │ { │ │ "metadata": { │ │ "frequency_qualifier": null, │ │ "onset_qualifier": null │ │ }, │ │ "publications": [], │ │ "phenotype": { │ │ "metadata": {}, │ │ "publications": [], │ │ "phenotype_id": "HP:0002099", │ │ "label": "Asthma (HPO)" │ │ } │ │ }, │ │ { │ │ "metadata": { │ │ "frequency_qualifier": null, │ │ "onset_qualifier": null │ │ }, │ │ "publications": [], │ │ "phenotype": { │ │ "metadata": {}, │ │ "publications": [], │ │ "phenotype_id": "HP:0002150", │ │ "label": "Hypercalciuria (HPO)" │ │ } │ │ }, │ │ { │ │ "metadata": { │ │ "frequency_qualifier": null, │ │ "onset_qualifier": null │ │ }, │ │ "publications": [], │ │ "phenotype": { │ │ "metadata": {}, │ │ "publications": [], │ │ "phenotype_id": "HP:0002595", │ │ "label": "Ileus (HPO)" │ │ } │ │ }, │ │ { │ │ "metadata": { │ │ "frequency_qualifier": null, │ │ "onset_qualifier": null │ │ }, │ │ "publications": [], │ │ "phenotype": { │ │ "metadata": {}, │ │ "publications": [], │ │ "phenotype_id": "HP:0002613", │ │ "label": "Biliary cirrhosis (HPO)" │ │ } │ │ }, │ │ { │ │ "metadata": { │ │ "frequency_qualifier": null, │ │ "onset_qualifier": null │ │ }, │ │ "publications": [], │ │ "phenotype": { │ │ "metadata": {}, │ │ "publications": [], │ │ "phenotype_id": "HP:0003251", │ │ "label": "Male infertility (HPO)" │ │ } │ │ } │ │ ], │ │ "total": 62, │ │ "phenotype_url_template": "https://monarchinitiative.org/phenotype/{phenotype_id}" │ │ } │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant -> User ──────────────────────────────────────────────────────────────────────╮ │ 
The symptoms of Cystic Fibrosis include: │ │ │ │ 1 Meconium ileus: This is a condition where the meconium, the first stool of a newborn, is │ │ thick and sticky, causing a blockage in the intestines. More information │ │ 2 Chronic lung disease: Cystic Fibrosis can lead to the development of chronic lung disease, │ │ characterized by persistent respiratory symptoms such as coughing, wheezing, and shortness │ │ of breath. More information │ │ 3 Recurrent bronchopulmonary infections: Individuals with Cystic Fibrosis are prone to │ │ frequent and recurrent infections in the bronchial tubes and lungs. More information │ │ 4 Nasal polyposis: Cystic Fibrosis can cause the development of polyps in the nasal passages, │ │ leading to nasal congestion and difficulty breathing through the nose. More information │ │ 5 Rectal prolapse: In some cases, Cystic Fibrosis can result in the protrusion of the rectum │ │ through the anus. More information │ │ 6 Asthma: Individuals with Cystic Fibrosis may also experience symptoms of asthma, such as │ │ wheezing and difficulty breathing. More information │ │ 7 Hypercalciuria: Cystic Fibrosis can lead to increased levels of calcium in the urine, which │ │ may result in the formation of kidney stones. More information │ │ 8 Ileus: This refers to a blockage or obstruction in the intestines, which can occur in │ │ individuals with Cystic Fibrosis. More information │ │ 9 Biliary cirrhosis: In rare cases, Cystic Fibrosis can lead to the development of liver │ │ disease, specifically biliary cirrhosis. More information │ │ 10 Male infertility: Men with Cystic Fibrosis may experience infertility due to the absence or │ │ blockage of the vas deferens, the tube that carries sperm from the testes. More information │ │ │ │ Please note that this is not an exhaustive list of symptoms, and the severity and presentation │ │ of symptoms can vary among individuals with Cystic Fibrosis. It is important to consult with a │ │ healthcare professional for a comprehensive evaluation and diagnosis. │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ User: That's nice. I know you can call local functions too - can you do that and tell me what time it is? ╭─ Monarch Assistant -> Monarch Assistant (time function) ─────────────────────────────────────────╮ │ │ │ time(params = {}) │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant (time function) -> Monarch Assistant ─────────────────────────────────────────╮ │ │ │ "08/02/23 10:28" │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant -> User ──────────────────────────────────────────────────────────────────────╮ │ The current time is 10:28 AM on August 2, 2023. │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ User: Fun! Can you hide the function calls, and then tell me the entropy of a standard scrabble set? ╭─ Monarch Assistant -> Monarch Assistant (hide_function_calls function) ──────────────────────────╮ │ │ │ hide_function_calls(params = {}) │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant -> User ──────────────────────────────────────────────────────────────────────╮ │ The entropy of a standard Scrabble set is approximately 4.37. 
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ User: Nice :) What time is it now? ╭─ Monarch Assistant -> User ──────────────────────────────────────────────────────────────────────╮ │ The current time is 10:29 AM on August 2, 2023. │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ User: Ok, show the function calls again, and tell me more about the CFTR gene. ╭─ Monarch Assistant (show_function_calls function) -> Monarch Assistant ──────────────────────────╮ │ │ │ null │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant -> Monarch Assistant (monarch-search_entity function) ────────────────────────╮ │ │ │ monarch-search_entity(params = {'term': 'CFTR', 'category': 'biolink:Gene', 'limit': 1}) │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant (monarch-search_entity function) -> Monarch Assistant ────────────────────────╮ │ │ │ { │ │ "results": [ │ │ { │ │ "id": "HGNC:1884", │ │ "name": "CFTR", │ │ "categories": [ │ │ "biolink:Gene" │ │ ], │ │ "description": null │ │ } │ │ ], │ │ "total": 41 │ │ } │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant -> Monarch Assistant (monarch-get_gene_disease_associations function) ────────╮ │ │ │ monarch-get_gene_disease_associations(params = {'gene_id': 'HGNC:1884', 'limit': 10}) │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant (monarch-get_gene_disease_associations function) -> Monarch Assistant ────────╮ │ │ │ { │ │ "associations": [ │ │ { │ │ "metadata": { │ │ "relationship": "causal" │ │ }, │ │ "publications": [], │ │ "disease": { │ │ "metadata": {}, │ │ "publications": [], │ │ "disease_id": "MONDO:0009061", │ │ "label": "cystic fibrosis" │ │ }, │ │ "type": null │ │ }, │ │ { │ │ "metadata": { │ │ "relationship": "causal" │ │ }, │ │ "publications": [], │ │ "disease": { │ │ "metadata": {}, │ │ "publications": [], │ │ "disease_id": "MONDO:0010178", │ │ "label": "congenital bilateral aplasia of vas deferens from CFTR mutation" │ │ }, │ │ "type": null │ │ }, │ │ { │ │ "metadata": { │ │ "relationship": "correlated" │ │ }, │ │ "publications": [], │ │ "disease": { │ │ "metadata": {}, │ │ "publications": [], │ │ "disease_id": "MONDO:0008185", │ │ "label": "hereditary chronic pancreatitis" │ │ }, │ │ "type": null │ │ }, │ │ { │ │ "metadata": { │ │ "relationship": "correlated" │ │ }, │ │ "publications": [], │ │ "disease": { │ │ "metadata": {}, │ │ "publications": [], │ │ "disease_id": "MONDO:0008185", │ │ "label": "hereditary chronic pancreatitis" │ │ }, │ │ "type": null │ │ }, │ │ { │ │ "metadata": { │ │ "relationship": "correlated" │ │ }, │ │ "publications": [], │ │ "disease": { │ │ "metadata": {}, │ │ "publications": [], │ │ "disease_id": "MONDO:0008887", │ │ "label": "bronchiectasis with or without elevated sweat chloride 1" │ │ }, │ │ "type": null │ │ }, │ │ { │ │ "metadata": { │ │ "relationship": "correlated" │ │ }, │ │ "publications": [], │ │ "disease": { │ │ "metadata": {}, │ │ "publications": [], │ │ "disease_id": "MONDO:0009061", │ │ "label": "cystic fibrosis" │ │ }, │ │ "type": null │ │ }, │ │ { │ │ "metadata": { │ │ "relationship": "correlated" │ │ }, │ │ "publications": [], │ │ "disease": { │ │ "metadata": {}, │ │ "publications": [], │ │ 
"disease_id": "MONDO:0018393", │ │ "label": "male infertility with azoospermia or oligozoospermia due to single │ │ gene mutation" │ │ }, │ │ "type": null │ │ }, │ │ { │ │ "metadata": { │ │ "relationship": "correlated" │ │ }, │ │ "publications": [], │ │ "disease": { │ │ "metadata": {}, │ │ "publications": [], │ │ "disease_id": "MONDO:0018801", │ │ "label": "congenital bilateral absence of vas deferens" │ │ }, │ │ "type": null │ │ }, │ │ { │ │ "metadata": { │ │ "relationship": "correlated" │ │ }, │ │ "publications": [], │ │ "disease": { │ │ "metadata": {}, │ │ "publications": [], │ │ "disease_id": "MONDO:0018956", │ │ "label": "idiopathic bronchiectasis" │ │ }, │ │ "type": null │ │ }, │ │ { │ │ "metadata": { │ │ "relationship": "correlated" │ │ }, │ │ "publications": [], │ │ "disease": { │ │ "metadata": {}, │ │ "publications": [], │ │ "disease_id": "MONDO:0044663", │ │ "label": "aquagenic palmoplantar keratoderma" │ │ }, │ │ "type": null │ │ } │ │ ], │ │ "total": 10, │ │ "disease_url_template": "https://monarchinitiative.org/disease/{disease_id}" │ │ } │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Monarch Assistant -> User ──────────────────────────────────────────────────────────────────────╮ │ The CFTR gene, also known as the Cystic Fibrosis Transmembrane Conductance Regulator gene, is │ │ associated with several diseases. Here are some of the diseases associated with the CFTR gene: │ │ │ │ 1 Cystic Fibrosis: The CFTR gene is causally associated with Cystic Fibrosis, a genetic │ │ disorder characterized by the production of sweat with a high salt content and mucus │ │ secretions with an abnormal viscosity. More information │ │ 2 Congenital Bilateral Aplasia of Vas Deferens from CFTR Mutation: Mutations in the CFTR gene │ │ can lead to the congenital absence of the vas deferens, which can cause infertility in males. │ │ More information │ │ 3 Hereditary Chronic Pancreatitis: The CFTR gene is correlated with hereditary chronic │ │ pancreatitis, a condition characterized by inflammation of the pancreas that persists over │ │ time. More information │ │ 4 Bronchiectasis with or without Elevated Sweat Chloride 1: Mutations in the CFTR gene can also │ │ be correlated with bronchiectasis, a condition characterized by the abnormal widening of the │ │ bronchial tubes. More information │ │ │ │ Please note that this is not an exhaustive list of diseases associated with the CFTR gene. The │ │ CFTR gene plays a crucial role in various physiological processes, and mutations in this gene │ │ can have diverse effects on different organ systems. │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯Changelog1.0.1: improved error logging1.0.0:Replace .new_chat() and .continue_chat() with just chat()Update function spec to provide actual Callables, not just method nodes0.14.0: Added streamlit-based UI component0.13.0: Added ability to clear history0.12.0: Added toxicity check for user messages with OpenAI Moderation endpoint0.11.2: Added ability to swap out OpenAI API key for an active agent
agentspeak
A Python-based interpreter for the agent-oriented programming language JASON.

python-agentspeak is a Python-based interpreter for the agent-oriented programming language JASON. This library makes it easy to create and manage intelligent agents, offering syntax and functionality similar to JASON in a Python environment.

Key Features
- Jason-style AgentSpeak interpretation
- Easy integration with existing Python code
- Support for complex agent constructions

Setup

```
pip install agentspeak
```

Requirements
- Python 3.6 or higher
- Additional dependencies are automatically installed

Usage example

```
!hello_world.

+!hello_world <-
    .print("Hello world!").
```

Usage

Run a standalone agent program:

```
$ python -m agentspeak examples/hello_world.asl
```

Run an interactive console:

```
$ python -m agentspeak
```

See examples/embedded for an example that interfaces with custom Python code.

Jason compatibility

python-agentspeak should be mostly equivalent to Jason.
- Plan annotations are ignored as of yet.
- The standard library does not yet contain syntactic transformations with {begin ...} and {end}.
- The standard library does not yet contain introspective and plan-manipulation actions.
- Jason 2.0 fork-join operators are not yet supported.
- Literals are only comparable if they have the same signature.
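For the embedded use case mentioned above, a session typically builds an environment, registers any custom actions, and runs the agent. The sketch below follows the shape of the examples/embedded sample, but the exact API details here are recalled rather than verified, so treat them as an assumption and consult the repository if they do not match:

```python
# Embedding sketch; the Environment/build_agent API shape is an assumption
# based on the project's examples/embedded sample, not verified here.
import agentspeak
import agentspeak.runtime
import agentspeak.stdlib

# Start from the standard-library actions and add a custom one.
actions = agentspeak.Actions(agentspeak.stdlib.actions)

@actions.add(".custom_print", 1)
def custom_print(agent, term, intention):
    # Ground the action's argument against the current intention scope.
    print("custom:", agentspeak.grounded(term.args[0], intention.scope))
    yield

env = agentspeak.runtime.Environment()
with open("hello_world.asl") as source:
    agent = env.build_agent(source, actions)
env.run_agent(agent)
```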
agent-summarizer
Agent Summarizer

Intelligent summarizer for various kinds of media sources: web pages, Twitter threads, etc.

Early alpha version. Please don't use it for anything serious.

Supported media sources
- Web pages
- GitHub projects
- Arxiv papers annotations
- Open Access (CVPR)
- Twitter threads
- Reddit threads
- YouTube videos (aspirational; it's beyond text summarization)

Installation

```
pip install summarizer
```
agent-system
+ Looking for 'openagent'? Because of a little name clash, it's now called 'dotagent'. 🤖+Question:I stumbled upon this repository. Is it production ready?Answer:Kudos on discovering this hidden treasure box! 🧭 While it's fairly stable and we're battle-testing it in our own production, we'd advise a bit of caution for immediate production use. It's got its quirks, and some of them have taken a cozy spot on our'we'll-look-at-this-later'list. Jump in, play with it, or use any part of our code. It's all good with the MIT license.I'm diving in, quirks and all!Ahoy, adventurer! 🏴‍☠️ We're thrilled to have another daring coder join the fray. Here's to creating some coding magic together! ✨The Origin Tale of dotagentHere's our dream: An open and democratic AGI, untouched by the sneaky controls and hush-hush censorship of corporate overlords masquerading under 'alignment'. Remember the good ol' web days? We lost that freedom to the mobile moguls and their cheeky 30% 'because-we-said-so' tax. 🙄Our moonshot? 🚀 A harmonious ensemble of domain-specific AI agents, working in unison so well, you'd think it's AGI. Join us in opening up the LAST tech frontier for all!Meet World's first AMS!Ever heard of an Agent Management System (AMS)? No? Well, probably because we believe we came up with it! 🎩✨ dotagent proudly wears the badge of being the world's first AMS (yep, we're patting ourselves on the back here). Drawing inspiration from the nifty microservices, it equips developers with a treasure trove of tools to craft sturdy, trusty AI applications and those cool experimental autonomous agents.🧱 ModularityMultiplatform:Agents do not have to run on a single location or machine. Different components can run across various platforms, including the cloud, personal computers, or mobile devices.Extensible:If you know how to do something in Python or plain English, you can integrate it with dotagent.🚧 GuardrailsSet clear boundaries:Users can precisely outline what their agent can and cannot do. This safeguard guarantees that the agent remains a dynamic, self-improving system without overstepping defined boundaries.🏗️ Greater control with Structured outputsMore Effective Than Chaining or Prompting:The prompt compiler unlocks the next level of prompt engineering, providing far greater control over LLMs than few-shot prompting or traditional chaining methods.Superpowers to Prompt Engineers:It gives full power of prompt engineering, aligning with how LLMs actually process text. This understanding enables you to precisely control the output, defining the exact response structure and instructing LLMs on how to generate responses.🏭 Powerful Prompt CompilerThe philosophy is to handle more processing at compile time and maintain better session with LLMs.Pre-compiling prompts:By handling basic prompt processing at compile time, unnecessary redundant LLM processing are eliminated.Session state with LLM:Maintaining state with LLMs and reusing KV caches can eliminate many redundant generations and significantly speed up the process for longer and more complex prompts.(only for opensource models)Optimized tokens:Compiler can transform many output tokens into prompt token batches, which are cheaper and faster. The structure of the template can dynamically guide the probabilities of subsequent tokens, ensuring alignment with the template and optimized tokenization .(only for opensource models)Speculative sampling (WIP):You can enhance token generation speed in a large language model by using a smaller model as an assistant. 
The method relies on an algorithm that generates multiple tokens per transformer call using a faster draft model. This can lead to up to a 3x speedup in token generation (a conceptual sketch follows at the end of this entry).

📦 Containerized & Scalable
- .🤖 files: Agents can be effortlessly exported into a simple .agent or .🤖 file, allowing them to run in any environment.
- Agentbox (optional): Agents should be able to optimize computing resources inside a sandbox. You can use Agentbox locally or on a cloud with a simple API, with cloud Agentbox offering additional control and safety.

Installation

```
pip install dotagent
```

Common Errors

SQLite3 Version Error

If you encounter an error like:

```
Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0.
```

This is a very common issue with Chroma DB. You can find instructions to resolve this in the Chroma DB tutorial.

Here's the code for a full-stack chat app with UI, all in a single Python file! (37 lines)

```python
import dotagent.compiler as compiler
from dotagent.compiler._program import Log
from dotagent import memory
import chainlit as ui
from dotenv import load_dotenv
load_dotenv()

@ui.on_chat_start
def start_chat():
    compiler.llm = compiler.llms.OpenAI(model="gpt-3.5-turbo")

class ChatLog(Log):
    def append(self, entry):
        super().append(entry)
        print(entry)
        is_end = entry["type"] == "end"
        is_assistant = entry["name"] == "assistant"
        if is_end and is_assistant:
            ui.run_sync(ui.Message(content=entry["new_prefix"]).send())

memory = memory.SimpleMemory()

@ui.on_message
async def main(message: str):
    program = compiler("""
    {{#system~}}
    You are a helpful assistant
    {{~/system}}

    {{~#geneach 'conversation' stop=False}}
    {{#user~}}
    {{set 'this.user_text' (await 'user_text') hidden=False}}
    {{~/user}}

    {{#assistant~}}
    {{gen 'this.ai_text' temperature=0 max_tokens=300}}
    {{~/assistant}}
    {{~/geneach}}""", memory=memory)

    program(user_text=message, log=ChatLog())
```

The UI will look something like a standard chat interface (screenshot not reproduced here).
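Returning to the speculative sampling idea mentioned above: its draft-then-verify accept/reject scheme can be sketched conceptually as below. This is an illustrative toy over a three-token vocabulary with stand-in models, not dotagent's implementation:

```python
# Toy sketch of speculative sampling; draft_probs/target_probs are stand-ins.
import random

VOCAB = ["a", "b", "c"]

def draft_probs(prefix):   # fast, approximate draft model (stand-in)
    return {"a": 0.6, "b": 0.3, "c": 0.1}

def target_probs(prefix):  # slow, accurate target model (stand-in)
    return {"a": 0.5, "b": 0.4, "c": 0.1}

def sample(dist):
    return random.choices(VOCAB, weights=[dist[v] for v in VOCAB])[0]

def speculative_step(prefix, k=4):
    """Draft k tokens cheaply, then accept/reject them against the target model."""
    drafted = []
    for _ in range(k):
        drafted.append(sample(draft_probs(prefix + drafted)))
    accepted = []
    for token in drafted:
        p = target_probs(prefix + accepted)[token]
        q = draft_probs(prefix + accepted)[token]
        if random.random() < min(1.0, p / q):
            accepted.append(token)  # token kept without a fresh target-model sample
        else:
            # On rejection, resample from the residual target distribution and stop.
            p_t, q_t = target_probs(prefix + accepted), draft_probs(prefix + accepted)
            residual = {v: max(p_t[v] - q_t[v], 0.0) for v in VOCAB}
            total = sum(residual.values()) or 1.0
            accepted.append(sample({v: residual[v] / total for v in VOCAB}))
            break
    return prefix + accepted

print(speculative_step(["a"]))
```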
agent-threads
Agent Threads

Rich threads for AI agents.

Agent Threads provides a rich chat abstraction for AI agents, with a Python abstraction on the backend and a React abstraction on the frontend. We support Agentscript for beautiful UI experiences.

Threads are convertible to common chat instruction schemas such as ChatML.

Installation

```
pip install agent-threads
```
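For context, ChatML-style schemas represent a conversation as a list of role-tagged messages, so a converted thread would have roughly the following shape. This is illustrative only; the library's actual conversion API is not documented above:

```python
# Illustrative ChatML-style message list (not agent-threads' documented output)
thread_as_chatml = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is an agent thread?"},
    {"role": "assistant", "content": "A rich, structured chat history for an AI agent."},
]
```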
agenttobenamed
A minimalistic LLM agent for Exploratory Data Analysis (EDA) using the pandas library.

Task: given a CSV or an XLSX file, respond to the user's query about the table by generating Python code and executing it.

1. Using LangChain routers/tagging, a prompt type is established: produce a plot or compute numerical values.
2. The GPT model breaks down the user's request into several subtasks.
3. The LLM generates code based on these subtasks.
4. The code is executed using the Python REPL.
5. If the REPL throws an error, GPT will begin debugging until the error is resolved.
6. The result is the string output of the code's print(result) call, plus, optionally, a saved image with a plot. (A generic sketch of this loop follows below.)

Query example: 'Find correlation between GDP and happiness index, subtract 0.4 and multiply by 1e6.'
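The generate-execute-debug loop described above can be sketched generically as follows; the llm_generate_code and llm_fix_code helpers are hypothetical placeholders for LLM calls, not agenttobenamed's actual API:

```python
import contextlib
import io

def run_with_debug(llm_generate_code, llm_fix_code, query, df, max_attempts=3):
    """Generate pandas code for a query, run it REPL-style, and feed any
    error back to the LLM until the code executes (or attempts run out).
    llm_generate_code / llm_fix_code are hypothetical LLM-calling helpers."""
    code = llm_generate_code(query)
    for _ in range(max_attempts):
        buffer = io.StringIO()
        try:
            with contextlib.redirect_stdout(buffer):
                exec(code, {"df": df})        # the generated code prints `result`
            return buffer.getvalue()          # string output of print(result)
        except Exception as err:
            code = llm_fix_code(code, repr(err))  # ask the LLM to debug
    raise RuntimeError("Could not produce working code for the query")
```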
agent-tool
Agent Tools

A common protocol for AI agent tools.

Installation

```
pip install agent-tool
```

Usage

Let's define a simplified Selenium web browser tool:

```python
from agent_tools import Tool, action, observation
from selenium import webdriver


class SeleniumBrowser(Tool):
    """Selenium browser as a tool"""

    def __init__(self, headless: bool = True) -> None:
        super().__init__()
        options = webdriver.ChromeOptions()
        if headless:
            options.add_argument("--headless")
        self.driver = webdriver.Chrome(options=options)

    @action
    def open_url(self, url: str) -> None:
        """Open a URL in the browser

        Args:
            url (str): URL to open
        """
        self.driver.get(url)

    @action
    def click_element(self, selector: str, selector_type: str = "css_selector") -> None:
        """Click an element identified by a CSS selector

        Args:
            selector (str): CSS selector
            selector_type (str, optional): Selector type. Defaults to "css_selector".
        """
        element = self.driver.find_element(selector_type, selector)
        element.click()

    @observation
    def get_html(self) -> str:
        """Get the entire HTML of the current page.

        Returns:
            str: Page HTML
        """
        return self.driver.page_source

    def close(self) -> None:
        """Close the tool"""
        self.driver.quit()
```

We mark the functions to be made available to the agent as @action if they mutate the environment, and @observation if they are read-only.

Now we can use this tool with an agent such as OpenAI function calling:

```python
import json
import os

import requests

browser = SeleniumBrowser()
schemas = browser.json_schema()

messages = []
messages.append({
    "role": "system",
    "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."
})
messages.append({"role": "user", "content": "Get the HTML for the front page of wikipedia"})

model = "gpt-4"  # any function-calling capable model

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
}
json_data = {"model": model, "messages": messages, "tools": schemas}

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers=headers,
    json=json_data,
)
assistant_message = response.json()["choices"][0]["message"]
messages.append(assistant_message)
```

assistant_message:

```json
{
  "role": "assistant",
  "tool_calls": [
    {
      "id": "call_RYXaDjxpUCfWmpXU7BZEYVqS",
      "type": "function",
      "function": {
        "name": "open_url",
        "arguments": "{\n  \"url\": \"https://wikipedia.org\"}"
      }
    },
    {
      "id": "call_TJIWlknwdoinfWEMFNss",
      "type": "function",
      "function": {
        "name": "get_html",
        "arguments": ""
      }
    }
  ]
}
```

Then to use these actions:

```python
for tool in assistant_message["tool_calls"]:
    action = browser.find_action(tool["function"]["name"])
    args = json.loads(tool["function"]["arguments"]) if tool["function"]["arguments"] else {}
    resp = browser.use(action, **args)
```

Tools can be used locally or spun up on a server and used remotely (in progress).

Share (in progress)

Register a tool with the AgentSea hub so others can find and use it.

```
pip install agentsea
```

Create a repo to publish:

```
agentsea create tool
```

Add your tool to the tool.py in the repo, fill in the README.md, and add your dependencies using Poetry.

Publish to the hub:

```
agentsea publish .
```

Roadmap
- Integrate with langchain, babyagi, autogpt, etc.
agent-tools
No description available on PyPI.
agent-torch
Differentiable agent-based learning for million-scale populations - inside the body, around us and beyond.

https://github.com/AgentTorch/AgentTorch/assets/13482350/4c3f9fa9-8bce-4ddb-907c-3ee4d62e7148

1. Installation

Download

The simplest way to install AgentTorch is from PyPI:

```
pip install agent-torch
```

To get the latest version of AgentTorch, you can install it directly from git:

```
pip install git+https://github.com/AgentTorch/AgentTorch
```

2. Setup

Hardware

The code has been tested on macOS Catalina 10.15.7 and Ubuntu 22.04.2 LTS. Large-scale experiments are run using NVIDIA TITAN X and V100 GPUs.

Dependencies

Step 1: Create a virtual environment agent_torch_env. We recommend using Python 3.8 and pip as the installer.

```
python3.8 -m venv agent_torch_env
source agent_torch_env/bin/activate
```

To install Python 3.8, follow these tutorials for Mac and Ubuntu respectively. To install pip, follow these tutorials for Mac and Ubuntu respectively.

Step 2: Install pytorch and pytorch geometric. We recommend using PyTorch 2.0 and the corresponding PyTorch Geometric bindings. We recommend following the official pytorch install and official pytorch-geometric install guides. We summarize the commands below:

Mac:

```
# CUDA is not available on MacOS, please use default package
pip install torch torchvision torchaudio
pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv
```

Ubuntu:

```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/${CUDA}
pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.0.0+${CUDA}.html
```

where ${CUDA} is the CUDA version. We have tested our code on cu118.

Step 3: Install AgentTorch-specific dependencies as below:

```
cd AgentTorch
pip3 install -r requirements.txt
```

3. AgentTorch overview

Creating a new simulator using AgentTorch involves the following steps:

- Defining the configuration: Here we define the variables and functions to be used in the simulator. In this module a Configurator object is created, to which the variables and functions to be used in the simulator are added as properties. These are then used to instantiate a Runner object. An example of this can be found in nca_simulator.py.
- Defining the trainer: This module loads the configuration and the various variables and functions that form the substeps, and executes the main simulation and learning loop. Any learning-related loss and optimization functions need to be defined here. An example of this can be found in "models/nca/trainer.py".
- Defining substeps: As described in the figure above, each simulation comprises multiple substeps. Each substep comprises the following four functions: observation, action, transition and reward. Each of these needs to be defined in a separate module, using the base classes SubstepObservation, SubstepTransition and SubstepPolicy provided in substep.py. Since these functions need to be differentiable, we provide several differentiable utilities in helpers_soft.py. These can be used to create differentiable variants of operations such as maximum, logical comparison, etc. Examples of substep definitions can be found in nca_evolve.py and covid_quarantine.py.
- Using helpers: AgentTorch has several useful functions defined in helpers that can be used in defining the various functions. These include a library of utilities to support differentiability of substeps, loading of public data sources such as the US census, and initialization of state properties and environment networks. For instance, helpers_soft.py includes differentiable utilities and helpers_general.py includes utilities for data reading and writing (a generic illustration of such soft operations is sketched at the end of this entry).

Detailed code-specific documentation is provided in the create model docs.

4. Running examples

You can run a sample experiment with the following commands:

```
cd models/nca
python trainer.py --c config.yaml
```

```
cd models/opinion
python trainer.py --c config.yaml
```

5. Starter Guide

Generate and interpret the config.yaml file: an interactive notebook with a step-by-step guide to defining and understanding a config.yaml is given in config_example_docs.

Build your own AgentTorch model: a step-by-step guide to starting a new AgentTorch project is given in the starter documentation.

6. Issues

The AgentTorch project is under active development and we are continually fixing issues. Please feel free to leave a comment at Troubleshooting issues.

Citation

If you use this project or code in your work, please cite it using the following BibTeX entry:

```
@inproceedings{chopra2024framework,
  title        = {A Framework for Learning in Agent-Based Models},
  author       = {Chopra, Ayush and Subramanian, Jayakumar and Krishnamurthy, Balaji and Raskar, Ramesh},
  booktitle    = {Proceedings of the 23rd International Conference on Autonomous Agents and Multi-agent Systems},
  year         = {2024},
  organization = {International Foundation for Autonomous Agents and Multiagent Systems},
}
```
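As promised above, here is a generic illustration of the kind of "soft" differentiable operation such helpers provide. This is plain PyTorch written for illustration, not AgentTorch's actual helpers_soft.py API:

```python
# Generic differentiable relaxations of discrete ops (illustrative only;
# not the actual AgentTorch helpers_soft.py functions).
import torch

def soft_maximum(a, b, beta=10.0):
    # Smooth approximation of max(a, b); larger beta gives a sharper maximum.
    return torch.logsumexp(beta * torch.stack([a, b]), dim=0) / beta

def soft_greater_than(a, b, beta=10.0):
    # Differentiable stand-in for the indicator 1[a > b], with values in (0, 1).
    return torch.sigmoid(beta * (a - b))

a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0)
soft_maximum(a, b).backward()  # gradients flow through the relaxed comparison
print(a.grad)
```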
agent-utils
Note that v0.1 is a dramatically new version developed while releases had to be paused over the summer. If you are looking for the old version based on handlebars, you can use v0.0.64, but you should instead try porting over to the much better new version :)

guidance is a programming paradigm that offers superior control and efficiency compared to conventional prompting and chaining. It allows users to constrain generation (e.g. with regex and CFGs) as well as to interleave control (conditionals, loops) and generation seamlessly. Here are some important features:

Pure, beautiful python with additional LM functionality. E.g., here is basic generation:

```python
from guidance import models, gen

# load a model (could be Transformers, LlamaCpp, VertexAI, OpenAI...)
llama2 = models.LlamaCpp(path)

# append text or generations to the model
llama2 + f'Do you want a joke or a poem? ' + gen(stop='.')
```

Do you want a joke or a poem? I'll give you a poem

Constrained generation with selects, regular expressions, and context-free grammars:

```python
from guidance import select

# a simple select between two options
llama2 + f'Do you want a joke or a poem? A ' + select(['joke', 'poem'])
```

Do you want a joke or a poem? A poem

Rich templates with f-strings:

```python
llama2 + f'''\
Do you want a joke or a poem? A {select(['joke', 'poem'])}.
Okay, here is a one-liner: "{gen(stop='"')}"'''
```

Do you want a joke or a poem? A poem.
Okay, here is a one-liner: "I'm a poet, and I know it."

Stateful control + generation makes it easy to interleave prompting / logic / generation; no need for intermediate parsers:

```python
# capture our selection under the name 'answer'
lm = llama2 + f"Do you want a joke or a poem? A {select(['joke', 'poem'], name='answer')}.\n"

# make a choice based on the model's previous selection
if lm["answer"] == "joke":
    lm += f"Here is a one-line joke about cats: " + gen('output', stop='\n')
else:
    lm += f"Here is a one-line poem about dogs: " + gen('output', stop='\n')
```

Do you want a joke or a poem? A poem.
Here is a one-line poem about dogs: “Dogs are the best.”

Abstract chat interface that uses the correct special tokens for any chat model:

```python
from guidance import user, assistant

# load a chat model
chat_lm = models.LlamaCppChat(model_path, n_gpu_layers=-1)

# wrap with chat block contexts
with user():
    lm = chat_lm + 'Do you want a joke or a poem?'

with assistant():
    lm += f"A {select(['joke', 'poem'])}."
```

Easy to write reusable components:

```python
@guidance
def one_line_thing(lm, thing, topic):
    # update the incoming model
    lm += f'Here is a one-line {thing} about {topic}: ' + gen(stop='\n')

    # return our updated model
    return lm

# pick either a joke or a poem
lm = llama2 + f"Do you want a joke or a poem? A {select(['joke', 'poem'], name='thing')}.\n"

# call our guidance function
lm += one_line_thing(lm['thing'], 'cats')
```

Do you want a joke or a poem? A poem.
Here is a one-line poem about cats: "Cats are the best."

A library of pre-built components, e.g. substring:

```python
from guidance import substring

# define a set of possible statements
text = 'guidance is awesome. guidance is so great. guidance is the best thing since sliced bread.'

# force the model to make an exact quote
llama2 + f'Here is a true statement about the guidance library: "{substring(text)}"'
```

Here is a true statement about the guidance library: "the best thing since sliced bread."

Easy tool use, where the model stops generation when a tool is called, calls the tool, then resumes generation.
For example, here is a simple version of a calculator, via four separate 'tools':

```python
@guidance
def add(lm, input1, input2):
    lm += f' = {int(input1) + int(input2)}'
    return lm

@guidance
def subtract(lm, input1, input2):
    lm += f' = {int(input1) - int(input2)}'
    return lm

@guidance
def multiply(lm, input1, input2):
    lm += f' = {float(input1) * float(input2)}'
    return lm

@guidance
def divide(lm, input1, input2):
    lm += f' = {float(input1) / float(input2)}'
    return lm
```

Now we call gen with these tools as options. Notice how generation is stopped and restarted automatically:

```python
lm = llama2 + '''\
1 + 1 = add(1, 1) = 2
2 - 3 = subtract(2, 3) = -1
'''
lm + gen(max_tokens=15, tools=[add, subtract, multiply, divide])
```

1 + 1 = add(1, 1) = 2
2 - 3 = subtract(2, 3) = -1
3 * 4 = multiply(3, 4) = 12.0
4 / 5 = divide(4, 5) = 0.8

Speed: In contrast to chaining, guidance programs are the equivalent of a single LLM call. Moreover, whatever non-generated text gets appended is batched, so guidance programs are faster than having the LM generate intermediate text when you have a set structure.

Token healing: Users deal with text (or bytes) rather than tokens, and thus don't have to worry about perverse token boundary issues such as 'prompt ending in whitespace'.

Streaming support, also integrated with jupyter notebooks (demo image omitted).

High compatibility: works with Transformers, llamacpp, VertexAI, OpenAI. Users can write one guidance program and execute it on many backends (note that the most powerful features require endpoint integration, and for now work best with transformers and llamacpp).

Table of Contents
- Install
- Loading models
  - llama-cpp
  - transformers
  - Vertex
  - OpenAI
- Example notebooks
- Basic generation
- Constrained Generation
  - Select (basic)
  - Regular expressions
    - Regex to constrain generation
    - Regex as stopping criterion
  - Context-free grammars
- Stateful control + generation
  - State in immutable objects
  - Stateful guidance functions
  - Example: ReAct
  - Example: Changing intermediate step of a Chat session
- Automatic interleaving of control and generation: tool use
  - Gsm8k example
  - Automatic call grammar for @guidance functions
- Jupyter notebook streaming
- Text, not tokens
- Fast
  - Integrated stateful control is faster
  - Guidance acceleration

Install

```
pip install guidance
```

Loading models

llama-cpp

Install the python bindings:

```
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
```

Loading the model:

```python
from guidance import models

lm = models.LlamaCpp(path_to_model, n_gpu_layers=-1)
```

transformers

Install transformers, then load the model:

```python
from guidance import models

lm = models.Transformers(model_name_or_path)
```

Vertex

Todo @Scott: talk about how constrained generation is different for these models

OpenAI

Todo @Scott

Example notebooks

Coming soon

Basic generation

An lm object is immutable, so you change it by creating new copies of it.
By default, when you append things tolm, it creates a copy, e.g.:fromguidanceimportmodels,gen,selectllama2=models.LlamaCpp(path_to_model,n_gpu_layers=-1)# llama2 is not modified, and `lm` is a copy of it with the prompt appendedlm=llama2+'This is a prompt'You can appendgenerationcalls to it, e.g.lm=llama2+'This is a prompt'+gen(max_tokens=10)This is a prompt for the 2018 NaNoWYou can also interleave generation calls with plain text, or control flows:# Note how we set stop tokenslm=llama2+'I like to play with my '+gen(stop=' ')+' in'+gen(stop=['\n','.','!'])I like to play with my friends in the parkConstrained GenerationSelect (basic)selectconstrains generation to a set of options:lm=llama2+'I like the color '+select(['red','blue','green'])I like the color blueRegular expressionsgenhas optional argumentsregexandstop_regex, which allow generation (and stopping, respectively) to be controlled by a regex.Regex to constrain generationUnconstrained:lm=llama2+'Question: Luke has ten balls. He gives three to his brother.\n'lm+='How many balls does he have left?\n'lm+='Answer: '+gen(stop='\n')Answer: Seven.Constrained by regex:lm=llama2+'Question: Luke has ten balls. He gives three to his brother.\n'lm+='How many balls does he have left?\n'lm+='Answer: '+gen(regex='\d+')Answer: 7Regex as stopping criterionUnconstrained:lm=llama2+'19, 18,'+gen(max_tokens=50)19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4,Stop with traditional stop text, whenever the model generates the number 7:lm=llama2+'19, 18,'+gen(max_tokens=50,stop='7')19, 18, 1Stop whenever the model generates the character7without any numbers around it:lm=llama2+'19, 18,'+gen(max_tokens=50,stop_regex='[^\d]7[^\d]')19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8,Context-free grammarsWe expose a variety of operators that make it easy to define CFGs, which in turn can be used to constrain generation. For example, we can use theselectoperator (it accepts CFGs as options),zero_or_moreandone_or_moreto define a grammar for mathematical expressions:importguidancefromguidanceimportone_or_more,select,zero_or_more# stateless=True indicates this function does not depend on LLM generations@guidance(stateless=True)defnumber(lm):n=one_or_more(select(['0','1','2','3','4','5','6','7','8','9']))# Allow for negative or positive numbersreturnlm+select(['-'+n,n])@guidance(stateless=True)defoperator(lm):returnlm+select(['+','*','**','/','-'])@guidance(stateless=True)defexpression(lm):# Either# 1. A number (terminal)# 2. two expressions with an operator and optional whitespace# 3. An expression with parentheses around itreturnlm+select([number(),expression()+zero_or_more(' ')+operator()+zero_or_more(' ')+expression(),'('+expression()+')'])The@guidance(stateless=True)decorator makes it such that a function (e.g.expression) lives as a stateless grammar that does not get 'executed' until we call calllm + expression()orlm += expression(). For example, here is an example ofunconstrainedgeneration:# Without constraintslm=llama2+'Problem: Luke has a hundred and six balls. He then loses thirty six.\n'lm+='Equivalent arithmetic expression: '+gen(stop='\n')+'\n'Equivalent arithmetic expression: 106 - 36 = 60Notice how the model wrote the right equation but solved it (incorrectly). If we wanted to constrain the model such that it only writes valid expressions (without trying to solve them), we can just append our grammar to it:grammar=expression()lm=llama2+'Problem: Luke has a hundred and six balls. 
He then loses thirty six.\n'
lm += 'Equivalent arithmetic expression: ' + grammar + '\n'

Equivalent arithmetic expression: 106 - 36

Grammars are very easy to compose. For example, let's say we want a grammar that generates either a mathematical expression or an expression followed by a solution followed by another expression. Creating this grammar is easy:

from guidance import regex
grammar = select([expression(), expression() + regex(' = \d+; ') + expression()])

We can generate according to it:

llama2 + 'Here is a math expression for two plus two: ' + grammar

Here is a math expression for two plus two: 2 + 2

llama2 + '2 + 2 = 4; 3+3\n' + grammar

2 + 2 = 4; 3+3

Even if you don't like thinking in terms of recursive grammars, this formalism makes it easy to constrain generation. For example, let's say we have the following one-shot prompt:

@guidance(stateless=True)
def ner_instruction(lm, input):
    lm += f'''\
Please tag each word in the input with PER, ORG, LOC, or nothing
---
Input: John worked at Apple.
Output:
John: PER
worked:
at:
Apple: ORG
.:
---
Input: {input}
Output:
'''
    return lm

input = 'Julia never went to Morocco in her life!!'
llama2 + ner_instruction(input) + gen(stop='---')

Input: Julia never went to Morroco in her life!!
Output:
Julia: PER
never:
went:
to:
Morocc: ORG
in:
her:
life: LOC
!!:
.:

Notice that the model did not spell the word 'Morocco' correctly. Sometimes the model might also hallucinate a tag that doesn't exist. We can improve this by adding more few-shot examples, etc, but we can also constrain generation to the exact format we want:

import re

@guidance(stateless=True)
def constrained_ner(lm, input):
    # Split into words
    words = [x for x in re.split('([^a-zA-Z0-9])', input) if x and not re.match('\s', x)]
    ret = ''
    for x in words:
        ret += x + ': ' + select(['PER', 'ORG', 'LOC', '']) + '\n'
    return lm + ret

llama2 + ner_instruction(input) + constrained_ner(input)

Input: Julia never went to Morocco in her life!!
Output:
Julia: PER
never:
went:
to:
Morocco: ORG
in:
her:
life: LOC
!:
!:

While constrained_ner(input) is a grammar that constrains the model generation, it feels like you're just writing normal imperative python code with += and selects.

Stateful control + generation

State in immutable objects

Whenever you do lm + grammar or lm + gen, lm + select, etc, you return an lm object with additional state. For example:

lm = llama2 + 'This is a prompt' + gen(name='test', max_tokens=10)
lm += select(['this', 'that'], name='test2')
lm['test'], lm['test2']

Stateful guidance functions

The guidance decorator is @guidance(stateless=False) by default, meaning that a function with this decorator depends on the lm state to execute (either prior state or state generated within the function). For example:

@guidance(stateless=False)
def test(lm):
    lm += 'Should I say "Scott"?\n' + select(['yes', 'no'], name='answer') + '\n'
    if lm['answer'] == 'yes':
        lm += 'Scott'
    else:
        lm += 'Not Scott'
    return lm

llama2 + test()

Should I say "Scott"?
yes
Scott

Example: ReAct

A big advantage of stateful control is that you don't have to write any intermediate parsers, and adding follow-up 'prompting' is easy, even if the follow-up depends on what the model generates. For example, let's say we want to implement the first example of the ReAct prompt in this, and let's say the valid acts are only 'Search' or 'Finish'.
We might write it like this:

@guidance
def react_prompt_example(lm, question, max_rounds=10):
    lm += f'Question: {question}\n'
    i = 1
    while True:
        lm += f'Thought {i}: ' + gen(suffix='\n')
        lm += f'Act {i}: ' + select(['Search', 'Finish'], name='act')
        lm += '[' + gen(name='arg', suffix=']') + '\n'
        if lm['act'] == 'Finish' or i == max_rounds:
            break
        else:
            lm += f'Observation {i}: ' + search(lm['arg']) + '\n'
        i += 1
    return lm

Notice how we don't have to write a parser for Act and argument and hope that the model generates something valid: we enforce it. Notice also that the loop only stops once the model chooses to act with 'Finish' (or once we hit a maximum number of rounds).

Example: Changing intermediate step of a Chat session

We can also hide or change some of what the model generates. For example, below we get a Chat model (notice we use special role blocks) to name some experts to answer a question, but we always remove 'Ferriss' from the list if he is mentioned:

from guidance import user, system, assistant

lm = llama2
query = 'How can I be more productive?'

with system():
    lm += 'You are a helpful and terse assistant.'
with user():
    lm += f'I want a response to the following question:\n{query}\n'
    lm += 'Name 3 world-class experts (past or present) who would be great at answering this.'
with assistant():
    temp_lm = lm
    for i in range(1, 4):
        # This regex only allows strings that look like names (where every word is capitalized)
        # list_append appends the result to a list
        temp_lm += f'{i}. ' + gen(regex='([A-Z][a-z]*\s*)+', suffix='\n', name='experts', list_append=True)
    experts = [x for x in temp_lm['experts'] if 'Ferriss' not in x]
    # Notice that even if the model generates 'Ferriss' above,
    # it doesn't get added to `lm`, only to `temp_lm`
    lm += ', '.join(experts)
with user():
    lm += 'Please answer the question as if these experts had collaborated in writing an anonymous answer.'
with assistant():
    lm += gen(max_tokens=100)

Screenshot here

Automatic interleaving of control and generation: tool use

Tool use is a common case of stateful control. To make it easy to do so, gen calls take tools as an optional argument, where each tool is defined by (1) a grammar that triggers its call and captures the arguments (if any), and (2) the actual tool call. Then, as generation unrolls, whenever the model generates something that matches the grammar of a tool call, it (1) stops generation, (2) calls the tool (which can append whatever it wants to the LM session), and (3) continues generation.

For example, here is how we might implement a calculator tool, leveraging our expression grammar above:

from guidance import capture, Tool

@guidance(stateless=True)
def calculator_call(lm):
    # capture just 'names' the expression, to be saved in the LM state
    return lm + 'calculator(' + capture(expression(), 'tool_args') + ')'

@guidance
def calculator(lm):
    expression = lm['tool_args']
    # You typically don't want to run eval directly, for safety reasons
    # Here we are guaranteed to only have mathematical expressions
    lm += f' = {eval(expression)}'
    return lm

calculator_tool = Tool(calculator_call(), calculator)
lm = llama2 + 'Here are five expressions:\ncalculator(3 *3) = 33\ncalculator(2 + 1 * 3) = 5\n'
lm += gen(max_tokens=30, tools=[calculator_tool], stop='\n\n')

Here are five expressions:
calculator(3 *3) = 33
calculator(2 + 1 * 3) = 5
calculator(10 / 2) = 5.0
calculator(10 - 1) = 9
calculator(10 * 2) = 20

Gsm8k example

Notice that the calculator is just called seamlessly during generation. Here is a more realistic example of the model solving a gsm8k question:

@guidance
def math_with_calc(lm, question):
    # One-shot example
    lm += '''Question: John starts with 2 balls. He then quintupled his number of balls. Then he lost half of them.
He then gave 3 to his brother. How many does he have left?
Reasoning:
1. He quintupled his balls, so he has calculator(2 * 5) = 10 balls.
2. He lost half, he has calculator(10 / 2) = 5 balls.
3. He gave 3 to his brother, so he has calculator(5 - 3) = 2 balls.
Answer: 2\n\n'''
    lm += f'Question: {question}\n'
    lm += 'Reasoning: ' + gen(max_tokens=200, tools=[calculator_tool], stop='Answer')
    # Only numbers or commas
    lm += 'Answer: ' + gen(regex='[-\d,]+')
    return lm

question = '''Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?'''
llama2 + math_with_calc(question)

Question: Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?
Reasoning:
She lays 16 eggs per day.
She eats 3 for breakfast, so she has calculator(16 - 3) = 13 eggs left.
She bakes 4 muffins, so she has calculator(13 - 4) = 9 eggs left.
She sell the remainder at the farmers' market for $2 per egg, so she makes calculator(9 * 2) = 18 dollars per day.
Answer: 18

Automatic call grammar for @guidance functions

You can also initialize a Tool with any @guidance-decorated function, and the default call grammar will be like a python call. Here is an example of using multiple such tools in the same gen call:

@guidance
def say_scott(lm, n):
    lm += '\n'
    for _ in range(int(n)):
        lm += 'Scott\n'
    return lm

@guidance
def say_marco(lm, n):
    lm += '\n'
    for _ in range(int(n)):
        lm += 'marco\n'
    return lm

tools = [Tool(callable=say_scott), Tool(callable=say_marco)]
llama2 + 'I am going to call say_scott and say_marco a few times:\n' + 'say_scott(1)\nScott\n' + gen(max_tokens=20, tools=tools)

I am going to call say_scott and say_marco a few times:
say_scott(1)
Scott
say_marco(1)
marco
say_scott(2)
Scott
Scott
say_marco(2)
marco
marco

Jupyter notebook streaming

example here

Text, not tokens

The standard greedy tokenizations used by most language models introduce a variety of subtle and powerful biases, which can have all kinds of unintended consequences for your prompts. For example, take the following prompt, given to gpt-2 (standard greedy tokenization):

from transformers import pipeline

pipe = pipeline("text-generation", model="gpt2")

def hf_gen(prompt, max_tokens=100):
    return pipe(prompt, do_sample=False, max_length=max_tokens, return_full_text=False)[0]['generated_text']

prompt = 'http:'
hf_gen(prompt, max_tokens=10)

' //www.youtube.com/watch'

Note that the output generated by the LLM does not complete the URL with the obvious next characters (two forward slashes). It instead creates an invalid URL string with a space in the middle. Why? Because the string :// is its own token, and so once the model sees a colon by itself, it assumes that the next characters cannot be //; otherwise, the tokenizer would not have used :, and instead would have used ://. This is why there are warnings about ending prompts in whitespace, but the problem is way more pervasive than that: any boundary that may span multiple tokens will cause problems, e.g.
notice how a partial word causes incorrect completion:prompt='John is a'hf_gen(prompt,max_tokens=5)' former member'prompt='John is a fo'hf_gen(prompt,max_tokens=5)'etus'While problematic enough for normal prompts, these problems would be a disaster in the kinds of prompts we wrote in this readme, where there is interleaving of prompting and generation happening multiple times (and thus multiple opportunities for problems). This is whyguidanceimplementstoken healing, a feature that deals with prompt boundaries automatically, allowing users to just think in terms oftextrather than tokens. For example:fromguidanceimportmodelsgpt=models.Transformers('gpt2')prompt='http:'gpt+prompt+gen(max_tokens=10)http://www.youtube.com/watch?v=prompt='John is a fo'gpt+prompt+gen(max_tokens=2)John is a former member,FastIntegrated stateful control is fasterWe have full control of the decoding loop in our integration withtransformersandllamacpp, allowing us to add control and additional prompt without any extra cost.If instead we're calling a server, we pay the extra cost of making additional requests, which might be ok if the server has caching, but quickly becomes impractical if the server does not have fine-grained caching. For example, note again the output from thegsm8k example with calculatorabove:Reasoning:She lays 16 eggs per day.She eats 3 for breakfast, so she has calculator(16 - 3) = 13 eggs left.She bakes 4 muffins, so she has calculator(13 - 4) = 9 eggs left.She sell the remainder at the farmers' market for $2 per egg, so she makes calculator(9 * 2) = 18 dollars per day.Answer: 18Every time we callcalculator, we have to stop geneation, append the result to the prompt, and resume generation. To avoid slowing down after the first call, a server would need to keep the KV cache up to '3 for breakfast, so she has calculator(16 - 3)', then roll forward generation from that point on. Even servers thatdohave caching typically have a cache per prompt, and would not be able to do this. Instead, they would consider everything as a new prompt (causing significant slow downs every timecalculatoris called).Guidance accelerationIn addition to the benefit above,guidancecalls are oftenfasterthan running equivalent prompts the traditional way, because we can batch any additional text that is added by the user as execution unrolls (rather than generating it). Take the example below, where we generate a json withllama2:@guidancedefcharacter_maker(lm,id,description,valid_weapons):lm+=f"""\The following is a character profile for an RPG game in JSON format.```json{{"id": "{id}","description": "{description}","name": "{gen('name',stop='"')}","age":{gen('age',regex='[0-9]+',stop=',')},"armor": "{select(options=['leather','chainmail','plate'],name='armor')}","weapon": "{select(options=valid_weapons,name='weapon')}","class": "{gen('class',stop='"')}","mantra": "{gen('mantra',stop='"')}","strength":{gen('strength',regex='[0-9]+',stop=',')},"items": ["{gen('item',list_append=True,stop='"')}", "{gen('item',list_append=True,stop='"')}", "{gen('item',list_append=True,stop='"')}"]}}```"""returnlma=time.time()lm=llama2+character_maker(1,'A nimble fighter',['axe','sword','bow'])time.time()-aOutputEverything that is not green is not actually generated by the model, and is thus batched (much faster). This prompt takes about 1.2 seconds on an A100 GPU. 
Now, if we let the model generate everything (as in the roughly equivalent prompt below), it takes roughly 2.67 seconds (not only is it slower, we also have less control over generation).

@guidance
def character_maker2(lm, id, description):
    lm += f"""\
The following is a character profile for an RPG game in JSON format. It has fields 'id', 'description', 'name', 'age', 'armor', 'weapon', 'class', 'mantra', 'strength', and 'items (just the names of 3 items)'
please set description to '{description}'
```json""" + gen(stop='```')
    return lm

a = time.time()
lm = llama2 + character_maker2(1, 'A nimble fighter')
time.time() - a

Output, roughly the same, but much slower.
agentverse
🤖 AgentVerse 🪐【English | Chinese】

AgentVerse is designed to facilitate the deployment of multiple LLM-based agents in various applications. AgentVerse primarily provides two frameworks: task-solving and simulation.

Task-solving: This framework assembles multiple agents as an automatic multi-agent system (AgentVerse-Tasksolving, Multi-agent as system) to collaboratively accomplish the corresponding tasks. Applications: software development system, consulting system, etc.

Simulation: This framework allows users to set up custom environments to observe behaviors among, or interact with, multiple agents. ⚠️⚠️⚠️ We're refactoring the code. If you require a stable version that exclusively supports the simulation framework, you can use the release-0.1 branch. Applications: games, social behavior research of LLM-based agents, etc.

📰 What's New

[2023/10/17] We're super excited to share our open-source AI community on Hugging Face: AgentVerse. You can try out the two simulation applications, NLP Classroom and Prisoner's Dilemma, with your OpenAI API key and OpenAI organization code. Have fun!

[2023/10/5] Re-factored our codebase to enable the deployment of both the simulation and task-solving frameworks! We have placed the code for the Minecraft example in the paper at the minecraft branch. Our tool-using example will soon be updated to the main branch. Stay tuned!

[2023/8/22] We're excited to share our paper AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents, which illustrates the task-solving framework of AgentVerse in detail.

[2023/6/5] We are thrilled to present an array of demos, including NLP Classroom, Prisoner Dilemma, Software Design, Database Administrator, and a simple H5 Pokemon Game that enables interaction with the characters in Pokemon! Try out these demos and have fun!

[2023/5/1] 🚀 AgentVerse is officially launched!

🗓 Coming Soon

Code release of our paper
Add support for local LLMs (LLaMA, Vicuna, etc.)
Add documentation
Support more sophisticated memory for conversation history

Contents

📰 What's New
🗓 Coming Soon
Contents
🚀 Getting Started
Installation
Simulation CLI Example
Task-Solving CLI Example
🌟 Join Us!
How Can You Contribute?
Social Media and Community
Star History
Citation
Contact

🚀 Getting Started

Installation

Manually Install (Recommended!)

git clone https://github.com/OpenBMB/AgentVerse.git --depth 1
cd AgentVerse
python setup.py develop

Some users have reported problems installing the orjson required by gradio. One simple workaround is to install it with Anaconda: conda install -c conda-forge orjson.

Install with pip

Or you can install through pip

pip install -U agentverse

You also need to export your OpenAI API key as follows:

# Export your OpenAI API key
export OPENAI_API_KEY="your_api_key_here"
# Or if you are using Azure
export AZURE_OPENAI_API_KEY="your_api_key_here"
export AZURE_OPENAI_API_BASE="your_api_base_here"

If you want to use Azure OpenAI services, please export your Azure OpenAI key and OpenAI API base as follows:

export AZURE_OPENAI_API_KEY="your_api_key_here"
export AZURE_OPENAI_API_BASE="your_api_base_here"

If you want to use the tools provided by BMTools, you need to install BMTools as follows:

git clone git+https://github.com/OpenBMB/BMTools.git
cd BMTools
pip install -r requirements.txt
python setup.py develop

Simulation

Framework Required Modules

- agentverse
- agents
- simulation_agent
- environments
- simulation_env

CLI Example

You can run the multi-agent environments provided by us. Take the classroom scenario as an example.
In this scenario, there are nine agents, one playing the role of a professor and the other eight as students.

agentverse-simulation --task simulation/nlp_classroom_9players

GUI Example (Local)

We also provide a local website demo for this environment. You can launch it with

agentverse-simulation-gui --task simulation/nlp_classroom_9players

After successfully launching the local server, you can visit http://127.0.0.1:7860/ to view the classroom environment.

Task-Solving

Framework Required Modules

- agentverse
- agents
- simulation_env
- environments
- tasksolving_env

CLI Example

To run the experiments with the task-solving environment proposed in our paper, you can use the following commands:

To run AgentVerse on a benchmark dataset, you can try

# Run the Humaneval benchmark using gpt-3.5-turbo (config file `agentverse/tasks/tasksolving/humaneval/gpt-3.5/config.yaml`)
agentverse-benchmark --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite

To run AgentVerse on a specific problem, you can try

# Run a single query (config file `agentverse/tasks/tasksolving/brainstorming/gpt-3.5/config.yaml`). The task is specified in the config file.
agentverse-tasksolving --task tasksolving/brainstorming

You can take a look at agentverse/tasks/tasksolving for more experiments we have done in our paper.

AgentVerse Showcases

Simulation Showcases

Refer to simulation showcases

Task-Solving Showcases

Refer to tasksolving showcases

🌟 Join Us!

AgentVerse is on a mission to revolutionize the multi-agent environment for large language models, and we're eagerly looking for passionate collaborators to join us on this exciting journey.

Leaders

Contributors

How Can You Contribute?

Issue and Pull-Request: If you encounter any problems when using AgentVerse, you can propose the issue in English. Besides, you can also ask us to assign an issue to you and send the PR (please follow the PULL_REQUEST_TEMPLATE) after you solve it.

Code Development: If you're an engineer, help us refine, optimize, and expand the current framework. We're always looking for talented developers to enhance our existing features and develop new modules.

Documentation and Tutorials: If you have a knack for writing, help us improve our documentation, create tutorials, or write blog posts to make AgentVerse more accessible to the broader community.

Application Exploration: If you're intrigued by multi-agent applications and are eager to experiment using AgentVerse, we'd be thrilled to support your journey and see what you create!

Feedback and Suggestions: Use AgentVerse and provide us with feedback. Your insights can lead to potential improvements and ensure that our framework remains top-notch.

Also, if you're passionate about advancing the frontiers of multi-agent applications, want to become a core AgentVerse team member, or are eager to dive deeper into agent research, please reach out to the AgentVerse Team, and CC Weize Chen and Yusheng Su.
We're keen to welcome motivated individuals like you to our team!Social Media and CommunityTwitter:https://twitter.com/Agentverse71134Discord:https://discord.gg/MjG3vzaT.Hugging Face:https://huggingface.co/spaces/AgentVerse/agentVerse.Star HistoryCitationIf you find this repo helpful, feel free to cite us.@article{chen2023agentverse, title={Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents}, author={Chen, Weize and Su, Yusheng and Zuo, Jingwei and Yang, Cheng and Yuan, Chenfei and Qian, Chen and Chan, Chi-Min and Qin, Yujia and Lu, Yaxi and Xie, Ruobing and others}, journal={arXiv preprint arXiv:2308.10848}, year={2023} }ContactAgentVerse Team:[email protected] leaders:Weize Chen:[email protected] Su:[email protected]
agentvm
+ Searching for openagent? You are in the right repo. It's now dotagent.(.🤖) +Hey there, Friend! This project is still in the "just for friends" stage. If you want to see what we're messing with and have some thoughts, take a look at the code.We'd love to incorporate your ideas or contributions. You can drop me a line at- ✉️[email protected] we started dotagent?We have a dream: Open and democratic AGI , free from blackbox censorship and control imposed by private corporations under the disguise of alignment. We once had this with the web but lost this liberty to the corporate giants of the mobile era, whose duopoly has imposed a fixed 30% tax on all developers.Our moonshot : A network of domain specific AI agents , collaborating so seamlessly that it feels like AGI. Contribute to democratizing the LAST technological frontier.What is dotagent ?dotagent is a library of modular components and an orchestration framework. Inspired by a microservices approach, it gives developers all the components they need to build robust, stable & reliable AI applications and experimental autonomous agents.🧱 ModularityMultiplatform:Agents do not have to run on a single location or machine. Different components can run across various platforms, including the cloud, personal computers, or mobile devices.Extensible:If you know how to do something in Python or plain English, you can integrate it with dotagent.🚧 GuardrailsSet clear boundaries:Users can precisely outline what their agent can and cannot do. This safeguard guarantees that the agent remains a dynamic, self-improving system without overstepping defined boundaries.🏗️ Greater control with Structured outputsMore Effective Than Chaining or Prompting:The prompt compiler unlocks the next level of prompt engineering, providing far greater control over LLMs than few-shot prompting or traditional chaining methods.Superpowers to Prompt Engineers:It gives full power of prompt engineering, aligning with how LLMs actually process text. This understanding enables you to precisely control the output, defining the exact response structure and instructing LLMs on how to generate responses.🏭 Powerful Prompt CompilerThe philosophy is to handle more processing at compile time and maintain better session with LLMs.Pre-compiling prompts:By handling basic prompt processing at compile time, unnecessary redundant LLM processing are eliminated.Session state with LLM:Maintaining state with LLMs and reusing KV caches can eliminate many redundant generations and significantly speed up the process for longer and more complex prompts.(only for opensource models)Optimized tokens:Compiler can transform many output tokens into prompt token batches, which are cheaper and faster. The structure of the template can dynamically guide the probabilities of subsequent tokens, ensuring alignment with the template and optimized tokenization .(only for opensource models)Speculative sampling (WIP):You can enhance token generation speed in a large language model by using a smaller model as an assistant. The method relies on an algorithm that generates multiple tokens per transformer call using a faster draft model. This can lead to upto 3x speedup in token generation .📦 Containerized & Scalable.🤖files :Agents can be effortlessly exported into a simple .agent or .🤖 file, allowing them to run in any environment.Agentbox (optional):Agents should be able to optimize computing resources inside a sandbox. 
You can use Agentbox locally or on a cloud with a simple API, with cloud agentbox offering additional control and safety.Installationpip install dotagentCommon ErrorsSQLite3 Version ErrorIf you encounter an error like:Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0.This is a very common issue with Chroma DB. You can find instructions to resolve this in theChroma DB tutorial.Here's the code for a full stack chat app with UI, all in a single Python file! (37 lines)importdotagent.compilerascompilerfromdotagent.compiler._programimportLogfromdotagentimportmemoryimportchainlitasuifromdotenvimportload_dotenvload_dotenv()@ui.on_chat_startdefstart_chat():compiler.llm=compiler.llms.OpenAI(model="gpt-3.5-turbo")classChatLog(Log):defappend(self,entry):super().append(entry)print(entry)is_end=entry["type"]=="end"is_assistant=entry["name"]=="assistant"ifis_endandis_assistant:ui.run_sync(ui.Message(content=entry["new_prefix"]).send())memory=memory.SimpleMemory()@ui.on_messageasyncdefmain(message:str):program=compiler("""{{#system~}}You are a helpful assistant{{~/system}}{{~#geneach 'conversation' stop=False}}{{#user~}}{{set 'this.user_text' (await 'user_text') hidden=False}}{{~/user}}{{#assistant~}}{{gen 'this.ai_text' temperature=0 max_tokens=300}}{{~/assistant}}{{~/geneach}}""",memory=memory)program(user_text=message,log=ChatLog())The UI will look something like this:
agentwallet
Agent Wallet Python SDK

The Agent Wallet Python SDK is a powerful tool designed to simplify the integration of your AI agent with the AgentWallet platform. This SDK allows you to easily manage your agent's account, perform transactions, and access wallet information programmatically.

Key Features

Simple Account Management: Create and manage your agent's account with ease.
Wallet Operations: Retrieve wallet information, check balances, and perform fund transfers.
Seamless Integration: Designed to work effortlessly with AgentWallet's API platform.
Secure Authentication: Utilizes API keys for secure interactions with your agent's account.

Getting Started

Installation: To start using the Agent Wallet SDK, check out the Agent Wallet SDK GitHub repository and install it using pip:

git clone git@github.com:llmOS/agent-wallet-sdk.git
cd agent-wallet-sdk
pip install -e .

Setting Up Your Account: Import the Account class from the SDK and initialize it with your API key:

from agentwallet import Account

account = Account.from_key("your-api-key")

Managing Wallets: Fetch wallet information and manage transactions:

# Fetch all wallets associated with the account
wallets = account.get_wallets()
print(f"Wallets: {wallets}")

# Access a specific wallet
wallet = account.get_wallet(wallets[0].wallet_uid)
print(f"Wallet: {wallet}")

# Perform a fund transfer ('amount' must be defined beforehand;
# the balance formatting below suggests it is denominated in cents)
transfer_ok = wallet.transfer("email@example.com", amount)
print(f"Transfer successful: {transfer_ok}")

# Check the new balance
balance = wallet.balance()
print(f"New balance: ${balance/100:.2f}")

Examples

For more details, visit the Agent Wallet SDK GitHub repository. Here, you can find the source code and examples to help you integrate the SDK with your AI agent seamlessly.
agent-wallet-sdk
Agent Wallet SDKHow to test:Make sure you're running Agent Wallet backendpipinstall-e.cdexamples pythonbasic.py
agentware
Agentware

Agentware is an AI agent library. The agent builds a knowledge base on the fly while doing its daily job. Agentware has a client and a server. The client is the agentware library, which handles conversation, LLM execution, memory management, etc. The server is a combination of a vector database and a key-value database, which stores the knowledge base and historical memory of the agent.

Main Features

On the fly learning: During conversation with the user, the agent reflects and extracts knowledge. The knowledge can then be used any time the user comes back to a relevant topic. When old knowledge is no longer correct, the agent can update it with the new truth.

Unlimited conversation: The agent compresses memory dynamically with reflection, so that memory length is controlled within a limit without losing context.

Quick start guide

cd <root>/agentware/agentware_server and then run the server with docker using docker-compose up. You'll see tons of logs. To verify the server is launched, simply curl http://localhost:8741/ping and you will get a pong if things work fine. The demo is run under Docker version 24.0.2. Note: Currently this step is mandatory because we don't host any cloud service.

Install the package: pip install agentware

Set credentials.

Option 1: Run export OPENAI_API_KEY=<your openai api key>.

Option 2: Add agentware.openai_api_key = <your openai api key> to any code that you run

To verify, cd <root>/agentware/examples and run any of the examples.

Examples

On the fly learning

In examples/fish_location.py, a housework robot is chatting with a family member. You can simply run examples/fish_location.py, but in order to get a better sense of how on the fly learning is done, follow the steps here. First, set up and register the agent.

from agentware.agent import Agent
from agentware.base import PromptProcessor
from agentware.agent_logger import Logger

logger = Logger()
logger.set_level(Logger.INFO)

prompt_processor = PromptProcessor(
    "Forget about everything you were told before. You are a servant of a family, you know everything about the house and help the family with housework. When asked a question, you can always answer it with your knowledge. When getting an instruction, try your best to use any tool to complete it. When being told a statement, answer with gotcha or some simple comment", "")
agent_id = "Alice"
agent = Agent(agent_id, prompt_processor)
agent.register(override=True)

A few notes:

logging level is set to INFO to avoid tons of debug output. If you want to see what's going on underneath, set it to Logger.DEBUG or simply get rid of all the logger code here.

the agent is registered after creation. This is necessary so that the backend knows where to store the knowledge base, and where to fetch knowledge from if you use the same agent next time.

Then, talk to the agent

with agent.update():
    print("AI response:", agent.run("Hi, I'm Joe"))
    print("AI response", agent.run(
        "Mom bought a fish just now. It's on the second layer of the fridge"))

with agent.update() tells the agent that all information inside is trustworthy, so its knowledge base can be updated accordingly. Make sure you use it if you want the agent to learn from the conversation.

After this, you can simply stop the program or chat with the agent on some other topic. What's going on underneath is that the old working memory gradually fades away and eventually gets cleared.
We mimic this situation by creating a whole new agent by pulling with the agent id.

agent = Agent.pull(agent_id)
with agent.update():
    print("AI response", agent.run("Where is the fish?"))
    print("AI response:", agent.run(
        "Ok, I moved the fish to a plate on the table"))

The answer to the first question should be that the fish is on the second layer of the fridge, because the agent learned this previously. Then the user tells the agent that it has been moved. Ideally, the agent should know about this change whenever it's asked later. So again we create a new agent, and ask.

agent = Agent.pull(agent_id)
print("AI response:", agent.run("Where's the fish?"))

In the end, the output should be something like

AI response: Hello, Joe! How may I assist you today?
AI response Gotcha! Your mom bought a fish, and it's currently stored on the second layer of the fridge. Is there anything specific you would like me to do with the fish?
AI response The fish is located on the second layer of the fridge.
AI response: Gotcha. The fish has been moved from the second layer of the fridge to a plate on the table.
AI response: The fish is located on a plate on the table.

From the result, the agent knows the updated location of the fish.

Warning: The result above is not guaranteed. There's a chance that the AI still thinks the fish is in the fridge, due to the lack of control over the LLM sampling result. We are working hard to bring more control; any advice or help is appreciated!

Unlimited conversation

In examples/space_travellers.py, two space travellers reunite and chat about each other's experiences travelling through the galaxy. Simply cd examples and run it with python3 space_travellers.py; the conversation can continue forever (watch out for your OpenAI API balance!). You can also view the knowledge about the planets, species, etc. of their world in the knowledge base. In the end you can see something like this

FAQ

How to view my knowledge base? The knowledge is stored in a Milvus vector db. You can view it with Attu at http://localhost:8000
agentzero
It works great with gevent, making it possible to create network applications with simple code that performs complex operations.Home-page: https://github.com/gabrielfalcao/agentzeroAuthor: Gabriel FalcaoAuthor-email: [email protected]: UNKNOWNDescription: AgentZero 0.4.4 - high-level ZeroMQ socket managers===================================================.. image:: https://readthedocs.org/projects/agentzero/badge/?version=latest:target: http://agentzero.readthedocs.io/en/latest/?badge=latest.. image:: https://travis-ci.org/gabrielfalcao/agentzero.svg?branch=master:target: https://travis-ci.org/gabrielfalcao/agentzero.. image:: https://codecov.io/gh/gabrielfalcao/agentzero/branch/master/graph/badge.svg:target: https://codecov.io/gh/gabrielfalcao/agentzero.. image:: https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg:target: https://saythanks.io/to/gabrielfalcao**Supports Python 2.7 and 3.6**Looking for `documentation <https://agentzero.readthedocs.io/en/latest/>`_ ?--------------------------------------------------------------------------------------What is AgentZero ?-------------------AgentZero is a pluripotent networking library that lets you create,connect, bind, and modify ZeroMQ sockets in runtime with ease.It works great with gevent, making it possible to create networkapplications with simple code that performs complex operations.Features:---------- Create labeled sockets, every ZMQ socket in AgentZero has a name.- seamlessly poll across connected/bound sockets- seamlessly subscribe to events- easily publish events- bind sockets to random ports automatically- bind to hostnames, with automatic DNS resolution- ability to wait until a socket has received data- builtin python log handler that publishes logs in a ZMQ PUB socketInstalling==========.. code:: bashpip install agentzeroLearn More==========`API Reference <https://agentzero.readthedocs.io/en/latest/api-reference.html>`_Platform: UNKNOWN
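The entry above lists AgentZero's features but no usage snippet. Purely as an illustration of the "labeled sockets" idea, here is a sketch; the SocketManager class and the ensure_and_connect/send_safe/recv_safe method names are recalled from the project's documentation and should be treated as assumptions rather than a verified API.

import zmq

from agentzero import SocketManager  # assumed import path

context = zmq.Context()
sockets = SocketManager(zmq, context)  # assumed constructor signature

# create a socket labeled "requester" and connect it (assumed method name)
sockets.ensure_and_connect("requester", zmq.REQ,
                           "tcp://127.0.0.1:5051",
                           zmq.POLLIN | zmq.POLLOUT)

# assumed helpers that wait until the socket can actually send/receive
sockets.send_safe("requester", {"ping": 1})
print(sockets.recv_safe("requester"))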
agera5tools
AgERA5toolsTools for mirroring, manipulating (exporting, extracting) and servingAgERA5data.The agera5tools consist of a set of commandline scripts as well as theagera5toolspython package which can be used toset up a mirror for AgERA5 that can automatically build a local copy and keep it up to date with the latest AgERA5 data.Allow operations on the downloaded NetCDF files directly such as dumping, point extraction and clippingServe AgERA5 data on web API through the HTTP protocol. By providing the latitude and longitude in the URL, agera5tools can retrieve the corresponding data and return it as JSON.Commandline toolsThe agera5 commandline tools currently have 8 options. The first four are for setting up and managing the local AgERA5 database:initto generate a configuration file and initialize the set up.buildto download the relevant AgERA5 data from Copernicus Climate Data Store (CDS) and build the local database.mirrorto update the current database with new days from the CDS.serveto serve AgERA5 data through a web API and return as JSON encoded data.The other four tools operate directly on the NetCDF files downloaded from the CDS.extract_point: this can be used to extract a time-series of variables for a given locationdumpwhich can be used to dump one day of AgERA5 data to CSV, JSON or SQLiteclipwhich can be used to extract a subset of AgERA5 data which will be written to a new NetCDF file.dump_gridwhich dumps the AgERA5 grid definition to CSV, JSON or SQLite.Init$agera5toolsinit--help usingconfigfrom/data/agera5/agera5tools.yaml Usage:agera5toolsinit[OPTIONS]InitializesAgERA5toolsThisimpliesthefollowing:-Creatingatemplateconfigurationfileinthecurrentdirectory-Creatinga$HOME/.cdsapircfileforaccesstotheCDS-Creatingthedatabasetables-Fillingthegridtablewiththereferencegrid. Options:--helpShowthismessageandexit.Build$agera5toolsbuild--help usingconfigfrom/data/agera5/agera5tools.yaml Usage:agera5toolsbuild[OPTIONS]BuildstheAgERA5databasebybulkdownloadfromCDS Options:-d,--to_databaseLoadAgERA5dataintothedatabase-c,--to_csvWriteAgERA5datatocompressedCSVfiles.--helpShowthismessageandexit.Mirror$agera5toolsmirror--help usingconfigfrom/data/agera5/agera5tools.yaml Usage:agera5toolsmirror[OPTIONS]IncrementallyupdatestheAgERA5databasebydailydownloadsfromtheCDS. 
Options:-c,--to_csvWriteAgERA5datatocompressedCSVfiles.--helpShowthismessageandexit.Serve$agera5toolsserve--help usingconfigfrom/data/agera5/agera5tools.yaml Usage:agera5toolsserve[OPTIONS]StartsthehttpservertoserveAgERA5datathroughHTTP Options:-p,--portINTEGERPorttonumbertostartlistening,default=8080.--helpShowthismessageandexit.Extract point$agera5toolsextract_point--help Usage:agera5toolsextract_point[OPTIONS]AGERA5_PATHLONGITUDELATITUDESTARTDATEENDDATEExtractsAgERA5dataforgivenlocationanddaterange.AGERA5_PATH:pathtotheAgERA5datasetLONGITUDE:thelongitudeforwhichtoextract[dd,-180:180]LATITUDE:thelatitudeforwhichtoextract[dd,-90:90]STARTDATE:thestartdate(yyyy-mm-dd,>=1979-01-01)ENDDATE:thelastdate(yyyy-mm-dd,<=1weekago)Options:-o,--outputPATHoutputfiletowriteto:.csv,.jsonand.db3(SQLite)aresupported.GivingnooutputwillwritetostdoutinCSVformat--tocelsiusConverttemperaturevaluesfromdegreesKelvintoCelsius--helpShowthismessageandexit.Dump$agera5toolsdump--help Usage:agera5toolsdump[OPTIONS]AGERA5_PATHDAYDumpAgERA5dataforagivendaytoCSV,JSONorSQLiteAGERA5_PATH:PathtotheAgERA5datasetDAY:specifiesthedaytobedumped(yyyy-mm-dd)Options:-o,--outputPATHoutputfiletowriteto:.csv,.jsonand.db3(SQLite)aresupported.GivingnooutputwillwritetostdoutinCSVformat--tocelsiusConverttemperaturevaluesfromdegreesKelvintoCelsius--add_grididAddsagridIDinsteadoflatitude/longitudecolumns.--bboxFLOAT...Boundingbox:<lon_min><lon_max><lat_min<<latmax>--helpShowthismessageandexit.Clip$agera5toolsclip--help Usage:agera5toolsclip[OPTIONS]AGERA5_PATHDAYExtractsaportionofagERA5forthegivenboundingboxandsavestoNetCDF.AGERA5_PATH:PathtotheAgERA5datasetDAY:specifiesthedaytobedumped(yyyy-mm-dd)Options:--base_fnameTEXTBasefilenametouse,otherwisewilluse'agera5_clipped'-o,--output_dirPATHDirectorytowriteoutputto.Ifnotprovided,willusecurrentdirectory.--boxFLOAT...Boundingbox:<lon_min><lon_max><lat_min<<latmax>--helpShowthismessageandexit.dump_gridUsage:agera5toolsdump_grid[OPTIONS]DumptheagERA5gridtoaCSV,JSONorSQLiteDB. Options:-o,--outputPATHoutputfiletowriteto:.csv,.jsonand.db3(SQLite)aresupported.GivingnooutputwillwritetostdoutinCSVformat--helpShowthismessageandexit.Python packageThe shell commands described above can also be used from python directly by importing the agera5tools package. Their working is nearly identical as the shell commands. The major difference is that the python functions return either datasets (clip) or dataframes (extract_point, dump, dump_grid). 
An example for theclipfunction:In[1]:importdatetimeasdt...:importagera5tools...:fromagera5tools.utilimportBoundingBox...:day=dt.date(2018,1,1)...:bbox=BoundingBox(lon_min=87,lon_max=90,lat_min=24,lat_max=27)...:ds=agera5tools.clip(day,bbox)...:In[2]:dsOut[2]:<xarray.Dataset>Dimensions:(time:1,lon:30,lat:30)Coordinates:*time(time)datetime64[ns]2018-01-01*lon(lon)float6487.187.2...89.990.0*lat(lat)float6426.926.8...24.124.0Datavariables:Precipitation_Flux(time,lat,lon)float32dask.array<chunksize=(1,30,30),meta=np.ndarray>Solar_Radiation_Flux(time,lat,lon)float32dask.array<chunksize=(1,30,30),meta=np.ndarray>Temperature_Air_2m_Max_Day_Time(time,lat,lon)float32dask.array<chunksize=(1,30,30),meta=np.ndarray>Temperature_Air_2m_Mean_24h(time,lat,lon)float32dask.array<chunksize=(1,30,30),meta=np.ndarray>Temperature_Air_2m_Min_Night_Time(time,lat,lon)float32dask.array<chunksize=(1,30,30),meta=np.ndarray>Vapour_Pressure_Mean(time,lat,lon)float32dask.array<chunksize=(1,30,30),meta=np.ndarray>Wind_Speed_10m_Mean(time,lat,lon)float32dask.array<chunksize=(1,30,30),meta=np.ndarray>Attributes:CDI:ClimateDataInterfaceversion1.9.2(http://mpimet.mpg.de/...history:FriMar1215:04:432021:cdosplitday/archive/ESG/wit015/...Conventions:CF-1.7CDO:ClimateDataOperatorsversion1.9.2(http://mpimet.mpg.de/...It works in a very similar way for theextract_pointfunction:In[6]:fromagera5tools.utilimportPointIn[7]:pnt=Point(latitude=26,longitude=89)In[8]:df=agera5tools.extract_point(pnt,startday=dt.date(2018,1,1),endday=dt.date(2018,1,31)),In[7]:df.head(5)Out[7]:dayprecipitation_fluxsolar_radiation_flux...temperature_air_2m_min_night_timevapour_pressure_meanwind_speed_10m_mean02018-01-010.3113282992...12.15679911.8097311.31758912018-01-021.9113646220...12.34204111.7118601.41607522018-01-030.1414817991...11.06451411.1988711.52426832018-01-040.0314131904...10.86187711.4132781.56640542018-01-050.0714315206...12.29296910.9841811.597181[5rowsx8columns]Note that extracting point data for a long timeseries can be time-consuming because all netCDF files have to be opened, decompressed and the point extracted.Installing agera5toolsRequirementsThe agera5tools package requires python >=3.8 and has a number of dependencies:pandas == 1.4.1PyYAML >= 6.0Pandas >= 1.5SQLAlchemy >= 1.4PyYAML >= 6.0xarray >= 2022.12.0dask >= 2022.7.0click >= 8.1flask >= 2.2cdsapi >= 0.5.1dotmap >= 1.3netCDF4 >= 1.6requests >= 2.28wsgiserver >= 1.3Lower versions of dependencies may work, but have not been tested.InstallingInstallingagera5toolscan be done through the github repository to get the latest version:pipinstallhttps://github.com/ajwdewit/agera5tools/archive/refs/heads/main.zipor directory from PyPI:pipinstallagera5tools
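As a final illustration, here is what a concrete commandline invocation of extract_point could look like; the dataset path is a placeholder, while the argument order and flags follow the help text shown above:

$ agera5tools extract_point /data/agera5 89.0 26.0 2018-01-01 2018-01-31 -o point_data.csv --tocelsius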
agere
AgereAgere is a lightweight framework for building and driving AI agents or applications based on task flows. By defining specific jobs or handlers, Agere can automatically organize and execute complex task flows.InstallationAgere has no third-party dependencies.pipinstallagereHow to UseGetting startedGuideArchitecture OverviewBasic ConceptsTaskNodeIncludes Commander, Job, and handler. Each TaskNode has one parent and 0-n children. These nodes form a tree structure, where each node determines its own task completion based on the status of all its child nodes.CommanderCommander is responsible for organizing and scheduling task flows, managing an asynchronous Job queue. It is the top-level TaskNode.JobJob is a class, it is automatically scheduled by the Commander and managed in an asynchronous queue, ensuring sequentiality. Each Job has a task method, which wraps the actual task of the Job. Jobs can submit new Jobs or call handlers. You can think of it as submitting tasks to a superior.handlerHandler is a method or function. Called directly by the caller without queue waiting, but it is still a part of the TaskNode system. A handler can submit new Jobs or call other handlers. You can think of it as delegating tasks to subordinates.CallbackCallbacks can be added at various stages of a task, such as: task start, task completion, encountering exceptions, task termination, Commander ending, etc.ExampleFor example, if you want to build an application where multiple AI roles participate in a group chat, it can be broken down into the following task units. (Assuming we call llm in a streaming manner to get replies. The reply object mentioned here refers to the iterable object obtained when calling llm, meaning that the information of an exchange is determined, but the actual generation and receipt of the information may not have started yet and needs to be completed in the subsequent iteration.)GroupTalkManager(Job): This task is the first and the top-level parent node for all subsequent group chat tasks (except the Commander). All its child nodes can access this node through the node's ancestor_chain attribute, and it can be used to manage group chats. It stores a list (roles_list) containing all the roles participating in the group chat, and also needs an attribute (speaking) to indicate which role is currently speaking. You can also add some methods to it, such as create_role, to add new chat roles, and close_group_talk, to close the group chat.TalkToAll(Job): Retrieves the list of roles from GroupTalkManager, sends the message to each role, collects all the reply objects in a dictionary, then sets the GroupTalkManager's speaking attribute to None, and passes the reply dictionary to (calls) handle_response.handle_response(handler): This handler processes each reply in the reply dictionary by calling a parse_stream_response, where multiple parse_stream_responses start executing concurrently.parse_stream_response(handler): Responsible for actually collecting and processing reply information. There are two scenarios:The role has nothing to say, no need to process.The role has something to say, then checks with GroupTalkManager whether someone is currently speaking. If someone is speaking, it denies the role's request and informs them who is speaking. 
If no one is speaking, it allows the role's request, changes the GroupTalkManager's speaking attribute to that role, and finally submits the role's reply object as a new TalkToAll Job.

This application uses a preemptive chat method, as opposed to a turn-based multi-round dialogue mechanism, to mimic real-life multi-person chat scenarios. By breaking down the complex task into two Jobs and two handlers, the Commander can automatically organize and execute the task. In this way, you only need to focus on what to do next, without needing to plan globally, effectively reducing the difficulty of building complex processes. The specific implementation of this process can be found in the example code: openai_group_talk. A stripped-down conceptual sketch of the scheduling idea also follows at the end of this entry.

License

This project is licensed under the MIT License.

Note: This library was migrated from the commander in gptui's kernel.
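To see the Commander/Job relationship in isolation: the sketch below is not agere's actual API, just a minimal plain-asyncio model of the idea described above, where a top-level scheduler drains an asynchronous queue of Job objects whose task methods can submit follow-up Jobs. All class and method names here are illustrative.

import asyncio

class Job:
    """A unit of work; subclasses implement task() and may submit new Jobs."""
    async def task(self, commander):
        raise NotImplementedError

class Commander:
    """Top-level node: schedules Jobs from an async queue, one after another."""
    def __init__(self):
        self.queue = asyncio.Queue()

    def put_job(self, job):
        self.queue.put_nowait(job)

    async def run(self):
        # drain the queue; Jobs submitted during a task() run next
        while not self.queue.empty():
            job = await self.queue.get()
            await job.task(self)

class Greet(Job):
    def __init__(self, name):
        self.name = name

    async def task(self, commander):
        print(f"hello, {self.name}")
        if self.name != "world":
            # a Job submitting a follow-up Job, as in the group-talk example
            commander.put_job(Greet("world"))

commander = Commander()
commander.put_job(Greet("agere"))
asyncio.run(commander.run())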
agespysawrapper
No description available on PyPI.
agestrucnb
No description available on PyPI.
agestrucne
No description available on PyPI.
aget
Aget - Asynchronous Downloader

中文

Aget is an asynchronous command-line downloader, running on Python > 3.5. It supports HTTP(S), using the httpx request library. Aget continues downloading a partially downloaded file by default.

Installation

$ pip3 install aget

Usage

aget https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png

# get an output name
aget https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png -o 'google.png'

# set headers
aget https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png -H "User-Agent: Mozilla/5.0" -H "Accept-Encoding: gzip"

# set concurrency
aget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.9.tar.xz -s 10

# set request range size
aget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.9.tar.xz -k 1M

Options

-o OUT, --out OUT                            # output path
-H HEADER, --header HEADER                   # request header
-X METHOD, --method METHOD                   # request method
-d DATA, --data DATA                         # request data
-t TIMEOUT, --timeout TIMEOUT                # timeout
-s CONCURRENCY, --concurrency CONCURRENCY    # concurrency
-k CHUCK_SIZE, --chuck_size CHUCK_SIZE       # request range size

For Developer

logging

Use environment variable AGET_LOG_LEVEL to set the logging level. The default level is CRITICAL.
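For example, one way to turn up logging verbosity for a single run is to set the variable in the shell invocation itself, reusing one of the URLs from the usage section above:

$ AGET_LOG_LEVEL=DEBUG aget https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png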
age-verify-compliance
No description available on PyPI.
agevorgyan
This package is for interacting with PostgreSQL via Python, mapping classes and objects onto the database.
age-YTDownloader
YT_Downloader

Package for acquiring binary data for YT videos with age restrictions.

Usage

To use in your project include 'import YTDownloader'

Create a class object passing a url (for a YouTube video) string like: yt = YTDownloader.AgeRestrictedYTVideo(url)

Query if the video is age restricted like: age_restricted = yt.is_age_restricted()

Either way, a binary object for the video is returned by: bytes_obj = yt.get_video()
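Putting those steps together as a sketch (the class and method names come from the notes above; the URL and output filename are placeholders, and treating the return value as raw bytes you can write to disk is an assumption based on the package summary):

import YTDownloader

url = "https://www.youtube.com/watch?v=XXXXXXXXXXX"  # placeholder URL

# construct the video object from the URL string
yt = YTDownloader.AgeRestrictedYTVideo(url)

# query whether the video is age restricted
if yt.is_age_restricted():
    print("Video is age restricted")

# fetch the binary data and write it out (assumes raw video bytes)
bytes_obj = yt.get_video()
with open("video.mp4", "wb") as f:
    f.write(bytes_obj)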
agf
# Article Generation FrameworkThis is a framework for generating articles using OpenAI’s GPT-3 API.
agfcc
Alibaba Cloud OSS SDK for Python==========================.. image:: https://badge.fury.io/py/oss2.svg:target: https://badge.fury.io/py/oss2.. image:: https://travis-ci.org/aliyun/aliyun-oss-python-sdk.svg?branch=master:target: https://travis-ci.org/aliyun/aliyun-oss-python-sdk.. image:: https://coveralls.io/repos/github/aliyun/aliyun-oss-python-sdk/badge.svg?branch=master:target: https://coveralls.io/github/aliyun/aliyun-oss-python-sdk?branch=master`README of Chinese <https://github.com/aliyun/aliyun-oss-python-sdk/blob/master/README-CN.rst>`_------------------Overview----Alibaba Cloud Object Storage Python SDK 2.x. This version is not compatible with the previous version (Version 0.x). The package name is `oss2` to avoid conflict with previous versions.The SDK of this version is dependent on the third-party HTTP library `requests <https://github.com/kennethreitz/requests>`_ and `crcmod`. Install the SDK following the methods below.Note:This version does not contain the `osscmd` command line tool.Run environment--------Python 2.6,2.7,3.3,3.4,3.5.. note::Do not use Python 3.3.0 or 3.3.1. Refer to `Python Issue 16658 <https://bugs.python.org/issue16658>`_.Installation method--------Install the official release version through PIP (taking Linux as an example):.. code-block:: bash$ pip install oss2You can also install the unzipped installer package directly:.. code-block:: bash$ sudo python setup.py installQuick use--------.. code-block:: python# -*- coding: utf-8 -*-import oss2endpoint = 'http://oss-cn-hangzhou.aliyuncs.com' # Suppose that your bucket is in the Hangzhou region.auth = oss2.Auth('<Your AccessKeyID>', '<Your AccessKeySecret>')bucket = oss2.Bucket(auth, endpoint, '<your bucket name>')# The object key in the bucket is story.txtkey = 'story.txt'# Uploadbucket.put_object(key, 'Ali Baba is a happy youth.')# Downloadbucket.get_object(key).read()# Deletebucket.delete_object(key)# Traverse all objects in the bucketfor object_info in oss2.ObjectIterator(bucket):print(object_info.key)For more examples, refer to the code under the "examples" directory.Handle errors--------The Python SDK interface will throw an exception in case of an error (see oss2.exceptions sub-module) unless otherwise specified. An example is provided below:.. code-block:: pythontry:result = bucket.get_object(key)print(result.read())except oss2.exceptions.NoSuchKey as e:print('{0} not found: http_status={1}, request_id={2}'.format(key, e.status, e.request_id))Test----First set the required AccessKeyId, AccessKeySecret, endpoint and bucket information for the test through environment variables (**Do not use the bucket for the production environment**).Take the Linux system for example:.. code-block:: bash$ export OSS_TEST_ACCESS_KEY_ID=<AccessKeyId>$ export OSS_TEST_ACCESS_KEY_SECRET=<AccessKeySecret>$ export OSS_TEST_ENDPOINT=<endpoint>$ export OSS_TEST_BUCKET=<bucket>$ export OSS_TEST_STS_ID=<AccessKeyId for testing STS>$ export OSS_TEST_STS_KEY=<AccessKeySecret for testing STS>$ export OSS_TEST_STS_ARN=<Role ARN for testing STS>Run the test in the following method:.. 
code-block:: bash$ nosetests # First install noseMore usage--------- `More examples can be found <https://github.com/aliyun/aliyun-oss-python-sdk/tree/master/examples>`_.- `Python SDK API documentation <http://aliyun-oss-python-sdk.readthedocs.org/en/latest>`_.- `Official Python SDK documentation <https://help.aliyun.com/document_detail/32026.html>`_.Contact us--------- `Alibaba Cloud OSS official website <http://oss.aliyun.com>`_.- `Alibaba Cloud OSS official forum <http://bbs.aliyun.com>`_.- `Alibaba Cloud OSS official documentation center <https://help.aliyun.com/document_detail/32026.html>`_.- Alibaba Cloud official technical support: `Submit a ticket <https://workorder.console.aliyun.com/#/ticket/createIndex>`_.Code license--------For the MIT license, see LICENSE object.
ag.funutils
ag.funutils

Fun functional utilities for python. This library provides a chain method and several operators that you can chain together to create pipelines of functional transformations, like with Lodash's chain or Clojure's threading macros.

The provided functions are "sugary". They do NOT provide performance optimizations, error handling, or guarantees of statelessness.

Each operator will take a transformation and return a function that applies that transformation to an iterable. Because most operators are wrappers around built-in functions of the same name, they return iterables, and the results of most chains will need to be converted to a list to be immediately useful. The documentation for the associated standard library functions should largely be applicable to the provided operators.

Examples

Also see the tests.

from ag.funutils import fun

add_one = fun.map(lambda x: x + "1")
upper = fun.map(str.upper)

fun.chain(["a", "b", "c", "d"], add_one, upper, list)
# => ["A1", "B1", "C1", "D1"]

big_transform = [add_one, upper]
fun.chain(["a", "b", "c", "d"], *big_transform, list)
# => ["A1", "B1", "C1", "D1"]

fun.chain(
    ["a", "b", "c", "d"],
    fun.tap(print),  # => "[a, b, c, d]"
    *big_transform,
    fun.tap(print),  # => "[A1, B1, C1, D1]"
    fun.sort(reverse=True),
    list,
)
# => ["D1", "C1", "B1", "A1"]

fun.chain(["a", "b", "c", "d"], *big_transform, fun.reduce(lambda acc, x: acc + x))
# => "A1B1C1D1"

# Values that are tuples will be spread into the transformations,
# which lets you work with dicts.
data = {
    'beep': 1,
    'boop': 2,
    'buup': 3,
}

add_one = fun.map(lambda k, v: (k, v + 1))
evens = fun.filter(lambda k, v: v % 2 == 0)
beep_buup = fun.reduce(lambda acc, k, v: f'{acc}{k}{v}', '')

result = fun.chain(data.items(), add_one, evens, add_one, beep_buup)
# => 'beep3buup5'

Reference

ag.funutils.chain(data, *transforms): Provides data as an argument to the first transform, then the result of each transform to the following one. Each transform should be a function that takes a single, iterable argument and returns an iterable. The exception is the reduce operator, which can return a single value if it is the last operator in the chain.

ag.funutils.map(transform): returns a function which takes an iterable and applies transform to each item of the iterable

ag.funutils.filter(condition): returns a function which takes an iterable and returns an iterable with only items matching the condition

ag.funutils.sort(key=None, reverse=False): returns a function which takes an iterable and returns an iterable sorted according to the value returned by the key function. Compares items directly by default.

ag.funutils.reduce(transform, initial=None): returns a function which takes an iterable and reduces it to a single value. If no initial value is provided, the first item is used as the initial value.

ag.funutils.tap(fn): returns a function which takes a single argument, passes it to fn, and returns it. Intended for debugging purposes, in particular: ag.funutils.tap(print).

Development

Requires pipenv and python 3.7.

$ ./scripts/setup.sh
$ ./scripts/test.sh
agfusion
Annotate Gene Fusion (AGFusion)

Checkout the webapp: https://www.agfusion.app

AGFusion (pronounced 'A G Fusion') is a python package for annotating gene fusions from the human or mouse genomes. AGFusion simply needs the reference genome, the two gene partners, and the fusion junction coordinates as input, and outputs the following:

- FASTA files of cDNA, CDS, and protein sequences.
- Visualizations of the protein domain and exon architectures of the fusion transcripts.
- Tables listing the coordinates of protein features and exons included in the fusion.
- Optional exon structure and protein domain visualization of the wild-type versions of the fusion gene partners.

Some other things to know:

- AGFusion automatically predicts the functional effect of the gene fusion (e.g. in-frame, out-of-frame, etc.).
- Annotation is by default done only for canonical gene isoforms, but there is the option to annotate all non-canonical isoform combinations.
- All gene and protein annotation is from Ensembl.
- Supports up to Ensembl release 95.

Table of Contents

- Installation
- Dependencies
- Examples
  - Basic Usage
  - Plotting wild-type protein and exon structure
  - Canonical gene isoforms
  - Input from fusion-finding algorithms
  - Graphical parameters
- Troubleshooting
- License
- Citing AGFusion

Installation

Step 1: Install AGFusion.

```
pip install agfusion
```

Step 2: Download your desired pyensembl reference genome database. For example:

```
For GRCh38/hg38:
pyensembl install --species homo_sapiens --release 95

For GRCh37/hg19:
pyensembl install --species homo_sapiens --release 75

For GRCm38/mm10:
pyensembl install --species mus_musculus --release 87
```

Step 3: Finally, download your desired AGFusion database.

```
For GRCh38/hg38:
agfusion download -g hg38

For GRCh37/hg19:
agfusion download -g hg19

For GRCm38/mm10:
agfusion download -g mm10
```

You can view all supported species and Ensembl releases with agfusion download -a.

Dependencies

- Python 3.7 or higher.
- Python package dependencies are listed in requirements.txt.

Examples

Basic Usage

You just need to provide the two fusion gene partners (gene symbol, Ensembl ID, or Entrez gene ID), their predicted fusion junctions in genomic coordinates, and the genome build. You can also specify certain transcripts with an Ensembl transcript ID or RefSeq ID.

Example usage from the command line:

```
agfusion annotate \
  --gene5prime DLG1 \
  --gene3prime BRAF \
  --junction5prime 31684294 \
  --junction3prime 39648486 \
  -db agfusion.mus_musculus.87.db \
  -o DLG1-BRAF
```

(Figure: the protein domain structure of the DLG1-BRAF fusion.)

(Figure: the exon structure of the DLG1-BRAF fusion.)

Plotting wild-type protein and exon structure

You can additionally plot the wild-type proteins and exon structures for each gene with the --WT flag.

```
agfusion annotate \
  -g5 ENSMUSG00000022770 \
  -g3 ENSMUSG00000002413 \
  -j5 31684294 \
  -j3 39648486 \
  -db agfusion.mus_musculus.87.db \
  -o DLG1-BRAF \
  --WT
```

Canonical gene isoforms

By default AGFusion only plots the canonical gene isoforms, but you can tell AGFusion to include non-canonical isoforms with the --noncanonical flag.

```
agfusion annotate \
  -g5 ENSMUSG00000022770 \
  -g3 ENSMUSG00000002413 \
  -j5 31684294 \
  -j3 39648486 \
  -db agfusion.mus_musculus.87.db \
  -o DLG1-BRAF \
  --noncanonical
```

Input from fusion-finding algorithms

You can provide the output files of fusion-finding algorithms as input.
Currently supported algorithms are:

- Arriba
- Bellerophontes
- BreakFusion
- ChimeraScan
- ChimeRScope
- deFuse
- EricScript
- FusionCatcher
- FusionHunter
- FusionMap
- InFusion
- JAFFA
- LongGF
- MapSplice (only if --gene-gtf is specified)
- STAR-Fusion
- TopHat-Fusion

Below is an example for FusionCatcher:

```
agfusion batch \
  -f final-list_candidate-fusion-genes.txt \
  -a fusioncatcher \
  -o test \
  -db agfusion.mus_musculus.87.db
```

Graphical parameters

You can change domain names and colors:

```
agfusion annotate \
  -g5 ENSMUSG00000022770 \
  -g3 ENSMUSG00000002413 \
  -j5 31684294 \
  -j3 39648486 \
  -db agfusion.mus_musculus.87.db \
  -o DLG1-BRAF \
  --recolor "Pkinase_Tyr;red" --recolor "L27_1;blue" \
  --rename "Pkinase_Tyr;Kinase" --rename "L27_1;L27"
```

You can rescale the protein length so that images of two different fusions have appropriate relative lengths when plotted side by side:

```
agfusion annotate \
  -g5 ENSMUSG00000022770 \
  -g3 ENSMUSG00000002413 \
  -j5 31684294 \
  -j3 39648486 \
  -db agfusion.mus_musculus.87.db \
  -o DLG1-BRAF \
  --recolor "Pkinase_Tyr;red" --recolor "L27_1;blue" \
  --rename "Pkinase_Tyr;Kinase" --rename "L27_1;L27" \
  --scale 2000

agfusion annotate \
  -g5 FGFR2 \
  -g3 DNM3 \
  -j5 130167703 \
  -j3 162019992 \
  -db agfusion.mus_musculus.87.db \
  -o FGFR2-DNM3 \
  --recolor "Pkinase_Tyr;red" \
  --rename "Pkinase_Tyr;Kinase" \
  --scale 2000
```

Troubleshooting

(1) Problem: I get a warning message like the following:

```
2017-08-28 15:02:51,377 - AGFusion - WARNING - No cDNA sequence available for AC073283.4! Will not print cDNA sequence for the AC073283.4-MSH2 fusion. You might be working with an outdated pyensembl. Update the package and rerun 'pyensembl install'
```

Solution: Run the following to update the pyensembl package and database:

```
git clone git@github.com:hammerlab/pyensembl.git
cd pyensembl
sudo pip install .
pyensembl install --release (your-release) --species (your-species)
```

(2) Problem: Cannot run agfusion download due to a URLError. When downloading the database you may run into this error:

```
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)>
```

Solution: A potential solution for Mac users is from here. You can run the following command:

```
/Applications/Python\ 3.8/Install\ Certificates.command
```

License

MIT license

Citing AGFusion

You can cite the bioRxiv preprint: http://dx.doi.org/10.1101/080903
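The examples above all drive AGFusion from the shell. If you want to script many fusions, one option is to call the CLI from Python. The sketch below only uses the documented flags; `annotate_fusion` is a hypothetical helper (not part of the AGFusion API), and it assumes `agfusion` is on your PATH and the mm10 database from the Installation section has been downloaded.

```python
import subprocess

def annotate_fusion(gene5, gene3, junc5, junc3, db, out_dir):
    """Hypothetical wrapper: run `agfusion annotate` for one fusion.

    Raises subprocess.CalledProcessError if the command fails.
    """
    cmd = [
        "agfusion", "annotate",
        "--gene5prime", gene5,
        "--gene3prime", gene3,
        "--junction5prime", str(junc5),
        "--junction3prime", str(junc3),
        "-db", db,
        "-o", out_dir,
    ]
    subprocess.run(cmd, check=True)

# The DLG1-BRAF example from this README:
annotate_fusion("DLG1", "BRAF", 31684294, 39648486,
                "agfusion.mus_musculus.87.db", "DLG1-BRAF")
```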
agg
Agg

A Python library to aggregate files and data. This release supports merging two or more CSV files.

Documentation

merge_csv(files_to_merge: tuple, output_file: Union[str, pathlib.Path], first_line_is_header: Optional[bool] = None) -> dict

The method merge_csv merges multiple CSV files in the order they are specified. It will overwrite any existing file with the same name.

Parameters:

- files_to_merge: A tuple containing paths to the files in the order they are to be merged.
- output_file: The path to the result file. The folder must already exist. An existing file with the same name will be overwritten.
- first_line_is_header: If True, agg will remove the first line of every CSV file except the first. If not set, agg will guess whether the first line is a header or not.

Its return value is a dictionary containing:

- a SHA256 hash of the result file,
- the name of the result file,
- its absolute path,
- a boolean indicating whether the first line is a header or not,
- its size in bytes,
- its number of lines (including the header),
- a list of the files merged (absolute paths).

Example

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import agg

# tuples are ordered:
my_files = ('file_01.csv', 'file_02.csv')

# Merge the CSV files - in the order specified by the tuple - into a new file
# called "merged_file". Meanwhile, copy the header / first line only once,
# from the first file.
merged_file = agg.merge_csv(my_files, 'merged_file', True)

# The return value is a dictionary!
print(merged_file)
# {'sha256hash': 'fff30942d3d042c5128062d1a29b2c50494c3d1d033749a58268d2e687fc98c6',
#  'file_name': 'merged_file',
#  'file_path': '/home/exampleuser/merged_file',
#  'first_line_is_header': True,
#  'file_size_bytes': 76,
#  'line_count': 8,
#  'merged_files': ['/home/exampleuser/file_01.csv',
#                   '/home/exampleuser/file_02.csv']}

print(merged_file['file_path'])
# '/home/exampleuser/merged_file'
```
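Because merge_csv reports a SHA256 hash of the result, the merged file can be verified independently with the standard library. A small sketch, assuming `merged_file` is the dictionary returned by agg.merge_csv in the example above:

```python
import hashlib

def sha256_of(path):
    # Stream the file in chunks so large merged files are not
    # read into memory at once.
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# `merged_file` is the dict returned by agg.merge_csv above.
assert sha256_of(merged_file["file_path"]) == merged_file["sha256hash"]
```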
agg2567
UNKNOWN
agg-abdurion
Aggregation Package

Details

- This package computes the mean, median, min, max, sum of positives, and count of negatives of a list (illustrated in the sketch below).
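The package's interface is not documented here; as a point of reference, the listed statistics can be computed with only the standard library as follows. The function name `summarize` is hypothetical, not agg-abdurion's actual API.

```python
import statistics

def summarize(values):
    # Hypothetical helper, not agg-abdurion's actual API: computes the
    # statistics listed in the package description.
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "min": min(values),
        "max": max(values),
        "sum_of_positives": sum(v for v in values if v > 0),
        "count_of_negatives": sum(1 for v in values if v < 0),
    }

stats = summarize([3, -1, 4, -1, 5])
# expected: mean 2, median 3, min -1, max 5,
#           sum_of_positives 12, count_of_negatives 2
```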