agas
Agas defines similarity as the absolute difference between pairs of output scores from an aggregation function applied to the input series. The default behavior of Agas is to maximize similarity on a single dimension (e.g., means of the series in the input matrix) while minimizing similarity on another dimension (e.g., the variance of the series). The main motivation for this library is to provide a data description tool for depicting time series. It is customary to plot pairs of time series, where the pair is composed of data which is similar on one dimension (e.g., mean value) but dissimilar on another dimension (e.g., standard deviation). The library name Agas is an abbreviation for aggregated-series. Also, 'Agas' is Hebrew for 'Pear'.
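The selection rule described above can be sketched with the standard library. This is an illustration of the idea only, not Agas's actual API: score every pair of series by the absolute difference of an aggregate, then pick the pair most similar in mean and, among ties, least similar in standard deviation.

```python
from itertools import combinations
from statistics import mean, stdev

# Stdlib sketch of the idea described above (not the Agas API): pick the
# pair of series with the smallest |mean difference| and, among those,
# the largest |standard-deviation difference|.
def best_pair(series):
    def key(pair):
        i, j = pair
        mean_diff = abs(mean(series[i]) - mean(series[j]))   # maximize similarity
        std_diff = abs(stdev(series[i]) - stdev(series[j]))  # minimize similarity
        return (mean_diff, -std_diff)
    return min(combinations(range(len(series)), 2), key=key)
```

For example, given `[[1, 2, 3], [0, 2, 4], [10, 11, 12]]`, the first two series share a mean of 2 but differ in spread, so they form the chosen pair.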
agat
AGAT (Atomic Graph ATtention networks). The PyTorch backend of AGAT is available now; try it with `pip install agat==8.*`. For the previous version, install with `pip install agat==7.*`.

Using AGAT: the documentation of the AGAT API is available.

Installation (with a conda environment):

```shell
# create and activate a new environment
conda create -n agat python==3.10
conda activate agat
# install PyTorch: choose your platform on the installation page; for example (GPU):
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
# install dgl: see the Get Started page of dgl; for example (GPU):
conda install -c dglteam/label/cu118 dgl
# install the AGAT package
pip install agat
```

Install CUDA and cuDNN [optional]. On HPC systems you may load CUDA by checking `module av`, or you can contact your administrator for help: CUDA Toolkit, cuDNN.

Quick start:

Prepare VASP calculations. Run VASP calculations at this step.

Collect the paths of the VASP calculations (we provide examples of VASP outputs at VASP_calculations_example). Find all directories containing an OUTCAR file, remove the string 'OUTCAR' in paths.log, then make the paths absolute:

```shell
find . -name OUTCAR > paths.log
sed -i 's/OUTCAR$//g' paths.log
sed -i "s#^.#${PWD}#g" paths.log
```

Build the database:

```python
from agat.data import BuildDatabase

if __name__ == '__main__':
    database = BuildDatabase(mode_of_NN='ase_dist', num_of_cores=16)
    database.build()
```

Train the AGAT model:

```python
from agat.model import Fit

f = Fit()
f.fit()
```

Application (geometry optimization):

```python
from ase.optimize import BFGS
from agat.app import AgatCalculator
from ase.io import read
from ase import Atoms

model_save_dir = 'agat_model'
graph_build_scheme_dir = 'dataset'
atoms = read('POSCAR')
calculator = AgatCalculator(model_save_dir, graph_build_scheme_dir)
atoms = Atoms(atoms, calculator=calculator)
dyn = BFGS(atoms, trajectory='test.traj')
dyn.run(fmax=0.05)
```

Application (high-throughput prediction):

```python
from agat.app.cata import HpAds

model_save_dir = 'agat_model'
graph_build_scheme_dir = 'dataset'
formula = 'NiCoFePdPt'
ha = HpAds(model_save_dir=model_save_dir, graph_build_scheme_dir=graph_build_scheme_dir)
ha.run(formula=formula)
```

For more custom manipulations, see our documentation page. Some default parameters are in agat/default_parameters.py; explanations: docs/sphinx/source/Default parameters.md. Change log: please check Change_log.md.
agate
agate is a Python data analysis library that is optimized for humans instead of machines. It is an alternative to numpy and pandas that solves real-world problems with readable code. agate was previously known as journalism.

Important links:
- Documentation: https://agate.rtfd.org
- Repository: https://github.com/wireservice/agate
- Issues: https://github.com/wireservice/agate/issues
agate-charts
agate-charts adds exploratory charting support to agate.

Important links:
- agate: https://agate.rtfd.org
- Documentation: https://agate-charts.rtfd.org
- Repository: https://github.com/wireservice/agate-charts
- Issues: https://github.com/wireservice/agate-charts/issues
agate-dbf
agate-dbf adds read support for dbf files to agate.

Important links:
- agate: https://agate.rtfd.org
- Documentation: https://agate-dbf.rtfd.org
- Repository: https://github.com/wireservice/agate-dbf
- Issues: https://github.com/wireservice/agate-dbf/issues
agate-excel
agate-excel adds read support for Excel files (xls and xlsx) to agate.

Important links:
- agate: https://agate.rtfd.org
- Documentation: https://agate-excel.rtfd.org
- Repository: https://github.com/wireservice/agate-excel
- Issues: https://github.com/wireservice/agate-excel/issues
agate-lookup
agate-lookup adds one-line access to lookup tables to agate.

Important links:
- agate: https://agate.rtfd.org
- Documentation: https://agate-lookup.rtfd.org
- Repository: https://github.com/wireservice/agate-lookup
- Issues: https://github.com/wireservice/agate-lookup/issues
agate-mdb
agate-mdb adds read support for Microsoft Access files (*.mdb, *.accdb) to agate, with the help of mdbtools. It's still a work in progress. Caveat emptor!

Important links:
- agate: http://agate.rtfd.org
- Documentation: http://agate-mdb.rtfd.org
- Repository: https://github.com/lcorbasson/agate-mdb
- Issues: https://github.com/lcorbasson/agate-mdb/issues
agate-remote
agate-remote adds read support for remote files to agate.

Important links:
- agate: https://agate.rtfd.org
- Documentation: https://agate-remote.rtfd.org
- Repository: https://github.com/wireservice/agate-remote
- Issues: https://github.com/wireservice/agate-remote/issues
agatereports
AgateReports: a pure Python engine to generate reports from JasperReports jrxml files.

Introduction: AgateReports is a pure Python tool to generate reports without extensive coding. It currently relies on Jaspersoft Studio (https://sourceforge.net/projects/jasperstudio/) to graphically position reporting elements on a report layout. This package aims to be a solution for the following users:

- Python developers who want to create a report using a GUI tool.
- Users who want to modify existing reports without programming.

Differences with JasperReports: AgateReports uses different components than JasperReports, and there are minor differences:

- AgateReports components are based on modules such as ReportLab that are available in Python.
- AgateReports is able to use .ttc fonts.
- Additional components available in Python are added. Currently, the jrxml file must be edited manually to use these components. Furthermore, since JasperReports does not support these components, an edited jrxml file can no longer be edited with Jaspersoft Studio. There is a plan to fork Jaspersoft Studio to support Python language syntax and these components.

Current restrictions: AgateReports is still in its initial development phase and does not provide all of JasperReports' features:

- Patterns, formats, and Java classes specified in a jrxml file need to be changed to their Python equivalents. For example, "Current Date" needs to be converted from "new java.util.Date()" to "datetime.datetime.now()".
- Currently, only MySQL, PostgreSQL, and csv files are supported as data sources.
- Performance is slow for large data sources.

Requirements: Python 3.6 or above, ReportLab, Pillow, MySQL Connector/Python, psycopg2.

Installation: AgateReports requires Pillow. If Pillow is not already installed, please install it with `pip install Pillow`. AgateReports can then be installed with `pip install agatereports`.

Getting started: please refer to the following GitHub project for full documentation: https://github.com/ozawa-hi/agatereports. The demos directory contains samples of jrxml files and Python scripts on how to use AgateReports.

Usage sample:

```python
from agatereports.adapters import MysqlAdapter
from agatereports.basic_report import BasicReport
import logging

logger = logging.getLogger(__name__)

def datasource_mysql_sample():
    """MySQL data source sample.

    WARNING: Before running this sample, the schema 'agatereports' must be
    created and populated. Run the scripts in the directory
    'agatereports/tests/database/mysql' to create and populate the
    database tables.

    CAUTION: Edit the values of 'host' and 'port' to those in your environment.
    """
    logger.info('running datasource mysql sample')
    jrxml_filename = '../demos/jrxml/datasource_mysql.jrxml'  # input jrxml filename
    output_filename = '../demos/output/datasource_mysql.pdf'  # output pdf filename
    # MySQL datasource
    config = {
        'host': 'localhost',
        'port': '3306',
        'user': 'python',
        'password': 'python',
        'database': 'agatereports',
    }
    data_source = MysqlAdapter(**config)
    pdf_page = BasicReport(jrxml_filename=jrxml_filename,
                           output_filename=output_filename,
                           data_source=data_source)
    pdf_page.generate_report()

if __name__ == '__main__':
    datasource_mysql_sample()
```
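The Java-to-Python expression conversion described above can be automated for the simplest, literal cases. The helper below is hypothetical and not part of AgateReports; the mapping table contains only the one example the documentation gives, and real jrxml files would need a richer rewrite.

```python
# Hypothetical helper (not part of AgateReports): rewrite common Java
# expressions found in a jrxml file to their Python equivalents.
JAVA_TO_PYTHON = {
    "new java.util.Date()": "datetime.datetime.now()",
}

def pythonize_expressions(jrxml_text: str) -> str:
    """Replace each known Java expression with its Python equivalent."""
    for java_expr, py_expr in JAVA_TO_PYTHON.items():
        jrxml_text = jrxml_text.replace(java_expr, py_expr)
    return jrxml_text
```

Applied to a fragment such as `<textFieldExpression>new java.util.Date()</textFieldExpression>`, it yields the `datetime.datetime.now()` form the engine expects.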
agate-sql
agate-sql adds SQL read/write support to agate.

Important links:
- agate: https://agate.rtfd.org
- Documentation: https://agate-sql.rtfd.org
- Repository: https://github.com/wireservice/agate-sql
- Issues: https://github.com/wireservice/agate-sql/issues
agate-stats
agate-stats adds statistical methods to agate.

Important links:
- agate: https://agate.rtfd.org
- Documentation: https://agate-stats.rtfd.org
- Repository: https://github.com/wireservice/agate-stats
- Issues: https://github.com/wireservice/agate-stats/issues
agatha
No description available on PyPI.
agatha-print-hello
No description available (fetching the PyPI description returned HTTP 404).
agave
Agave is a library that implements a REST API through the use of Blueprints based on AWS Chalice. This library allows you to send and receive JSON data at these endpoints to query, modify and create content.

Install agave using pip:

```shell
pip install agave==0.0.2.dev0
```

You can use agave for a blueprint like this:

```python
from agave.blueprints.rest_api import RestApiBlueprint
```

agave includes helpers for mongoengine, for example:

```python
from agave.models.helpers import (
    uuid_field,
    mongo_to_dict,
    EnumField,
    updated_at,
    list_field_to_dict,
)
```

Run tests: `make test`
agavedb
Multiuser-aware key/value store built using the Agave metadata API.

- Documentation: https://agavedb.readthedocs.io/en/latest/
- GitHub: https://github.com/TACC/agavedb
- PyPI: https://pypi.python.org/pypi/agavedb
- Free software: 3-Clause BSD License

Installation: install from PyPI with `pip install agavedb`, or from a GitHub checkout:

```shell
cd agavedb
python setup.py install
```

Tests are implemented using tox. To run them, just type `tox`.
agaveflask
# agaveflask

## Overview

A common set of Python modules for writing flask services for the Agave Platform. The package officially requires Python 3.4+, though some functionality may work with Python 2.

## Installation

`pip install agaveflask`

Requires Python header files and a C++ compiler on top of gcc. On Debian/Ubuntu systems:

`apt-get install python3-dev g++`

## Usage

agaveflask provides the following modules:

* auth.py - configurable authentication/authorization routines.
* config.py - config parsing.
* errors.py - exception classes raised by agaveflask.
* store.py - python bindings for persistence.
* utils.py - general request/response utilities.

It relies on a configuration file for the service. Create a file called service.conf in one of `/`, `/etc`, or `$pwd`. See service.conf.ex in this repository for settings used by this library.

## Using Docker

### Packaging

If you are packaging your flask service with Docker, agaveflask provides a base image, agaveapi/flask_api, that simplifies your Dockerfile and provides a configurable entrypoint for both development and production deployments. In most cases, all you need to do is add your service code and your requirements.txt file. For example, if you have a flask service with a requirements.txt file and code that resides in a directory called "service", the Dockerfile can be as simple as:

```
from agaveapi/flask_api
ADD requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
ADD service /service
```

### Suggested Package Layout

For simple microservices, we make the following recommendations for minimal configuration at deployment time:

* Place the service code in a Python package called `service` that resides at the root of the Docker image (i.e. `/service`).
* Within `/service`, have a python module called `api.py` where the wsgi application is instantiated.
* Call the actual wsgi application object `app`.

Beyond standard flask operations, additional suggestions for the `api.py` module include:

* Add cors support.
* Set up authentication and authorization for your API using the agaveflask authn_and_authz() method.
* Start a development server within `__main__`.

Here is a typical example:

```
from flask import Flask
from flask_cors import CORS
from agaveflask.utils import AgaveApi, handle_error
from agaveflask.auth import authn_and_authz
from resources import JwtResource

# create the wsgi application object
app = Flask(__name__)

# add CORS support
CORS(app)

# create an AgaveApi object so that convenience utilities are available:
api = AgaveApi(app)

# Set up Authn/z for the API
@app.before_request
def auth():
    authn_and_authz()

# Set up error handling
@app.errorhandler(Exception)
def handle_all_errors(e):
    return handle_error(e)

# Add the resources
api.add_resource(JwtResource, '/admin/jwt')

# start a development server
if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
```

### Deployment Configuration

The entry point is configured through environment variables. Using the `server` variable toggles between a development server you have created in your api module and gunicorn, meant for production. All other settings are used to help the entrypoint find your wsgi application. If you are able to use the recommended file system structure and layout, you may be able to rely exclusively on the image defaults. Here is a complete list of config variables, their usage, and their default values:

* server: Value 'dev' attempts to start up a development server by executing your module's `__main__` method. Any other value starts up gunicorn. Default is 'dev'.
* package: path to the package containing the service code (no trailing slash). Default is '/service'.
* module: name of the python module (not including '.py') containing the wsgi application object. Default is 'api'.
* app: name of the wsgi application object. Default is 'app'.
* port: port to start the server on when running with gunicorn. Default is 5000.

### Docker Compose Example

The following snippet from a hypothetical docker-compose.yml file illustrates typical usage. In this example we have a folder, `services`, containing two services like so:

* `/services/serviceA/api.py`
* `/services/serviceB/api.py`

We are bundling them into the same docker image (`jdoe/my_services`). Because of this we need to set the individual packages for each using environment variables. We also set the server variable so that we use gunicorn.

```
. . .
serviceA:
    image: jdoe/my_services
    ports:
        - "5000:5000"
    volumes:
        - ./local-dev.conf:/etc/service.conf
    environment:
        package: /services/serviceA
        server: gunicorn

serviceB:
    image: jdoe/my_services
    ports:
        - "5001:5000"
    volumes:
        - ./local-dev.conf:/etc/service.conf
    environment:
        package: /services/serviceB
        server: gunicorn
. . .
```
agavepy
Python 2/3 bindings for the TACC.Cloud Agave and Abaco APIs.

- Documentation: https://agavepy.readthedocs.io/en/latest/
- GitHub: https://github.com/TACC/agavepy
- PyPI: https://pypi.python.org/pypi/agavepy
- Free software: 3-Clause BSD License

Installation: install from PyPI with `pip install agavepy`, or from a GitHub checkout:

```shell
cd agavepy
python setup.py install
# or
# make install
```

Contributing: in case you want to contribute, you should read our contributing guidelines, and we have a contributor's guide that explains setting up a development environment and the contribution process.

Quickstart: if you already have an active installation of the TACC Cloud CLI, AgavePy will pick up on your existing credential cache, stored in `$HOME/.agave/current`. We illustrate this usage pattern first, as it's really straightforward.

TACC Cloud CLI:

```python
>>> from agavepy.agave import Agave
>>> ag = Agave.restore()
```

Voila! You have an active, authenticated API client. AgavePy will use a cached refresh token to keep this session active as long as the code is running.

Pure Python: authentication and authorization to the TACC Cloud APIs uses OAuth2, a widely-adopted web standard. Our implementation of OAuth2 is designed to give you the flexibility you need to script and automate use of TACC Cloud while keeping your access credentials and digital assets secure. This is covered in great detail in our Developer Documentation, but some key concepts will be highlighted here, interleaved with Python code.

The first step is to create a Python object `ag` which will interact with an Agave tenant:

```python
>>> from agavepy.agave import Agave
>>> ag = Agave()
CODE            NAME                                   URL
3dem            3dem Tenant                            https://api.3dem.org/
agave.prod      Agave Public Tenant                    https://public.agaveapi.co/
araport.org     Araport                                https://api.araport.org/
designsafe      DesignSafe                             https://agave.designsafe-ci.org/
iplantc.org     CyVerse Science APIs                   https://agave.iplantc.org/
irec            iReceptor                              https://irec.tenants.prod.tacc.cloud/
sd2e            SD2E Tenant                            https://api.sd2e.org/
sgci            Science Gateways Community Institute   https://sgci.tacc.cloud/
tacc.prod       TACC                                   https://api.tacc.utexas.edu/
vdjserver.org   VDJ Server                             https://vdj-agave-api.tacc.utexas.edu/
Please specify the ID of a tenant to interact with: araport.org
>>> ag.api_server
'https://api.araport.org/'
```

If you already know what tenant you want to work with, you can instantiate `Agave` as follows:

```python
>>> from agavepy.agave import Agave
>>> ag = Agave(api_server="https://api.tacc.cloud")
```

or

```python
>>> from agavepy.agave import Agave
>>> ag = Agave(tenant_id="tacc.prod")
```

Once the object is instantiated, interact with it according to the API documentation and your specific usage needs.

Create a new OAuth client: in order to interact with Agave, you'll need to first create an OAuth client so that later on you can create access tokens to do work. To create a client you can do the following:

```python
>>> from agavepy.agave import Agave
>>> ag = Agave(api_server='https://api.tacc.cloud')
>>> ag.clients_create("client-name", "some description")
API username: your-username
API password:
>>> ag.api_key
'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
>>> ag.api_secret
'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
```

You will use the api key and secret to generate OAuth tokens, which are temporary credentials that you can use in place of putting your real credentials into code that is interacting with TACC APIs.

Reuse an existing OAuth client: once you generate a client, you can re-use its key and secret. Clients can be created using the Python-based approach illustrated above, via the TACC Cloud CLI `clients-create` command, or by a direct, correctly-structured POST to the clients web service. No matter how you've created a client, setting AgavePy up to use it works the same way:

```python
>>> from agavepy.agave import Agave
>>> ag = Agave(api_server='https://api.tacc.cloud',
...            username='mwvaughn',
...            client_name='my_client',
...            api_key='kV4XLPhVBAv9RTf7a2QyBHhQAXca',
...            api_secret='5EbjEOcyzzIsAAE3vBS7nspVqHQa')
```

The Agave object `ag` is now configured to talk to all TACC Cloud services.

Generate an access token: in order to interact with the TACC cloud services in a more secure and controlled manner - without constantly using your username and password - we will use the OAuth client, created in the previous step, to generate access tokens. The generated tokens will by default have a lifetime of 4 hours, or 14400 seconds. To create a token:

```python
>>> ag.get_access_token()
API password:
>>> ag.token
'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
```

Keep in mind that you will need to create an OAuth client first!

Saving your credentials: to save your session (api key, api secret, access token, refresh token, tenant information) you can use the method `Agave.save_configs()`:

```python
>>> ag.save_configs()
```

By default, `Agave.save_configs` will store credentials in `~/.agave`. It will save all sessions in `~/.agave/config.json` and, for backwards compatibility with other agave tooling, it will save the current session in `~/.agave/current`.

The refresh token: nobody likes to change their password, but they have to if it leaks out into the wild. A tragically easy way for that to happen is in committed code or a Docker container where it's been hard-coded. To get around this, AgavePy works with the TACC authentication APIs to support using a refresh token. Basically, as long as you have the api key, api secret, and the last refresh token for an authenticated session, you can renew the session without sending a password. Neat, right? Let's build on the `ag` object from above to learn about this. Let's start by inspecting its `token` property, which will also demonstrate how you can access token data programmatically for your own purposes:

```python
>>> ag.token.token_info
{u'access_token': u'14f0bbd0b334e594e676661bf9ccc136', 'created_at': 1518136421, u'expires_in': 13283, 'expires_at': 'Thu Feb 8 22:15:04', u'token_type': u'bearer', 'expiration': 1518149704, u'scope': u'default', u'refresh_token': u'b138c49040a6f67f80d49a1c112e44b'}
>>> ag.token.token_info['refresh_token']
u'b138c49040a6f67f80d49a1c112e44b'
```
agave-pyclient
agave_pyclient: a Python client for the Agave 3d volume renderer.

Features: connects to the Agave server and sends draw commands; receives and saves rendered images.

Quick start: you must have Agave installed. On the command line, run:

```shell
agave --server &
```

For Linux headless operation, you need to tell the Qt library to use the offscreen platform plugin:

```shell
agave -platform offscreen --server &
```

```python
from agave_pyclient import AgaveRenderer

# 1. connect to the agave server
r = AgaveRenderer()
# 2. tell it what data to load
r.load_data("my_favorite.ome.tiff")
# 3. set some render settings (abbreviated list here)
r.set_resolution(681, 612)
r.background_color(0, 0, 0)
r.render_iterations(128)
r.set_primary_ray_step_size(4)
r.set_secondary_ray_step_size(4)
r.set_voxel_scale(0.270833, 0.270833, 0.53)
r.exposure(0.75)
r.density(28.7678)
# 4. give the output a name
r.session("output.png")
# 5. wait for render and then save output
r.redraw()
```

Installation, stable release: `pip install agave_pyclient`

Documentation: for full package documentation please visit allen-cell-animated.github.io/agave.

Development: see CONTRIBUTING.md for information related to developing the code. The four commands you need to know:

* `pip install -e .[dev]` - installs your package in editable mode with all the required development dependencies (i.e. tox).
* `make build` - runs tox, which will run all your tests in both Python 3.7 and Python 3.8 as well as linting your code.
* `make clean` - cleans up various Python and build generated files so that you can ensure that you are working in a clean environment.
* `make docs` - generates and launches a web browser to view the most up-to-date documentation for your Python package.

Allen Institute Software License
a-gb-distributions
No description available on PyPI.
agbenchmark
Auto-GPT Benchmarks: built for the purpose of benchmarking the performance of agents regardless of how they work. Objectively know how well your agent is performing in categories like code, retrieval, memory, and safety. Save time and money while doing it through smart dependencies. The best part? It's all automated.

Scores, ranking overall:

1. Beebot
2. mini-agi
3. Auto-GPT

Detailed results: click here to see the results and the raw data! More agents coming soon!
agboost
No description available on PyPI.
agbpycc
agbpycc is a python based compiler frontend for the agbcc compiler for the GameBoy Advance. It provides an interface more similar to a modern gcc frontend, making it easier to use agbcc with other tools like compiler explorer. agbpycc also does some processing of the assembly code, allowing for easier comparison to other assembly. This is due to its main usage in decompiling matching binaries.

Installation: you can install agbpycc from pypi using pip. This installs agbpycc as a command line tool, and also as an importable module. You might want to use a virtual environment.

```shell
pip install agbpycc
```

To install from source, you need to build the ASM parser first. This requires antlr. The provided Makefile takes care of downloading the tool using wget. Should wget not be available to you as a command, download the antlr tool manually and place it in the project's directory.

```shell
git clone https://gitlab.com/henny022/agbpycc
cd agbpycc
make install
```

To work with the project files directly, you need to install the required dependencies in addition to building the antlr files. You can use the provided make target to do all this:

```shell
git clone https://gitlab.com/henny022/agbpycc
cd agbpycc
make setup
```

Usage: to run agbpycc after install, use the `agbpycc` command line tool:

```shell
agbpycc <arguments>
```

To run the files from source without installing, run the python module from the project directory:

```shell
python -m agbpycc <arguments>
```

The following is a list of the most basic arguments. Run `agbpycc --help` to get a full list.

```
--cc1   Path to the agbcc binary (required for compiling C)
-o      output assembly file name
-g      enable debug info; processed output contains only file and line information
```

Assembling objects and linking binaries are not yet supported.

Examples: compile a file to cleaned assembly for nice human reading:

```shell
agbpycc --cc1 agbcc -g -o output.s input.c
```

Clean assembly for nice human reading:

```shell
agbpycc -o output.s input.s
```

Support: for problems with the tool, please use the gitlab issue tracker. For questions you can use the tmc-misc channel on the zeldaret discord server. Ping @Henny022 there.

Contributing: to contribute your own code to this project, create a fork, do your changes and open a Merge Request. Please use gitmoji in your commit messages; this creates a neat git history and allows readers to grasp the contents of a commit in a single look.

Authors and acknowledgment: this was originally based on the pycc.py file in this repo; most of it has been modified or replaced since (original file). Main contributors: Octorock, Henny022.

License: this is licensed under the Unlicense, so you can use this code however you want. Any credit you can give is very appreciated.
agbus
Amazon Glacier BackUp System (AGBUS) is a cli tool that provides an interface to Amazon Glacier that is simpler than the aws cli. It can also be pointed at a mongoDB instance to store information about vaults, archives and more.

Installation: `pip install agbus`

Database configuration file location: `~/.gbs/db_config`. It has only one attribute, `dbUri`, which can define the connection string URI for any mongo instance. Example db_config:

```
{ "dbUri": "mongodb://localhost:27017" }
```

AWS authentication: AGBUS uses the awscli for authentication: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
agc
AGC - Area under Gain Curves. Python code to compute and plot (truncated, weighted) area under gain curves (AGC). For binary classification, gain curves are a nice alternative to ROC curves in that they can naturally be truncated to focus on the top scoring points only. Moreover, the data points can have weights. In this code, we provide two functions:

- agc_score: compute the area under the gain curve (AGC) for binary labelled data
- gain_curve: compute the proportion of data points and true positive rate for all thresholds, for plotting

The first function returns the normalized area by default (improvement over random, so this could be negative). The functions can be imported from the supplied agc.py file, or installed via `pip install agc`.

A simple example:

```python
import numpy as np
from agc import agc_score

## create toy binary labels and scores for illustration
labels = np.concatenate((np.repeat(1, 100), np.repeat(0, 900)))
scores = np.concatenate((np.random.uniform(.4, .8, 100), np.random.uniform(.2, .6, 900)))

## compute (normalized) area under the gain curve
print(agc_score(labels, scores))
## compute (un-normalized) area under the gain curve
print(agc_score(labels, scores, normalized=False))
## now the area for the top scoring 10% of the points
print(agc_score(labels, scores, truncate=0.1))
## or top scoring 100 points
print(agc_score(labels, scores, truncate=100))
```

More details in notebooks. For a quick introduction, see the following notebook: https://github.com/ftheberge/agc/blob/main/agc/agc_intro.ipynb, also available in markup format: https://github.com/ftheberge/agc/blob/main/intro/agc_intro.md. For more details, see the notebook: https://github.com/ftheberge/agc/blob/main/agc/agc.ipynb, also available in markup format: https://github.com/ftheberge/agc/blob/main/example/agc.md
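For intuition, an unweighted, untruncated gain curve and its un-normalized area can be computed with the standard library alone. This is a sketch of the concept, not the agc package's implementation, which additionally handles weights, truncation, and normalization:

```python
# Minimal stdlib sketch of an (unweighted, untruncated) gain curve;
# the agc library adds weights, truncation, and normalization.
def gain_curve_points(labels, scores):
    # sort points by descending score
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    n = len(labels)
    pts = [(0.0, 0.0)]
    tp = 0
    for rank, i in enumerate(order, start=1):
        tp += labels[i]
        pts.append((rank / n, tp / total_pos))  # (proportion of data, TPR)
    return pts

def gain_auc(labels, scores):
    pts = gain_curve_points(labels, scores)
    # trapezoidal area under the curve
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

A perfect ranking of two positives ahead of two negatives gives an area of 0.75, while the worst ranking gives 0.25; 0.5 corresponds to a random ordering, which is the baseline the normalized score improves upon.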
agc-addition
No description available on PyPI.
agci
This is a tutorial on how to debut a PyPI application and how to validate software reproducibility. This agci is renamed and modified from "Symmetric GC integration.py" at https://github.com/1mikegrn/pyGC/blob/master/JCE-supplemental/Asymmetric%20GC%20integration.py. The modification is used to show how to debut a PyPI application and how to validate software reproducibility via Code Ocean.

How to install agci: `pip install agci`

How to run agci (the result picture will be saved in result.png):

```shell
agci "csv data filename"
agci Sample\ GC\ data\ 1\ pt\ acetone\ 1\ pt\ cyclohexane.csv
```
agc-key
No description available on PyPI.
agcli
No description available on PyPI.
agcod
Amazon Gift Code On-Demand (AGCOD). This is a tool for working with the AGCOD service and can be used for easily creating, cancelling and checking the status of Amazon gift codes.

Install: `pip install agcod`

Quickstart:

```python
from agcod import client

client.sandbox = True                 # default is True, False will use production URLs
client.debug = False                  # default is False
client.aws_region_name = 'us-east-1'  # default is us-east-1
client.partner_id = '<Your Partner ID>'
client.aws_key_id = '<Your AWS Key ID>'
client.aws_secret_key = '<Your AWS Secret Key>'

# Request ID must begin with Partner ID
request_id = client.partner_id + 'EXAMPLE'
amount = 1.00
currency = 'USD'

# Create a new gift code
result = client.create_gift_card(request_id, amount, currency)
# Example response
# {
#     "cardInfo": {
#         "cardNumber": null,
#         "cardStatus": "Fulfilled",
#         "expirationDate": null,
#         "value": {
#             "amount": 1.0,
#             "currencyCode": "USD"
#         }
#     },
#     "creationRequestId": "MyidEXAMPLE",
#     "gcClaimCode": "ABCD-12345-WXYZ",
#     "gcExpirationDate": null,
#     "gcId": "AA11BB22CC33DD",
#     "status": "SUCCESS"
# }

# Cancel that gift code
client.cancel_gift_card(request_id, result['gcId'])

# Get account balance
client.get_available_funds()
# Example response
# {
#     "availableFunds": {
#         "amount": 1250.00,
#         "currencyCode": "USD"
#     },
#     "status": "SUCCESS",
#     "timestamp": "20180802T155339Z"
# }
```
agconnect
Huawei AppGallery Connect Python Server SDKOverviewAppGallery Connect is dedicated to providing one-stop services for app creation, development, distribution, operations, and engagement, and building a smart app ecosystem for all scenarios. By opening up a wide range of services and capabilities, which are built upon Huawei's profound experience in globalization, quality, security, and project management, AppGallery Connect substantially simplifies app development and O&M, improves app version quality, and helps apps attract a wider scope of users and generate higher revenue.You can access the services listed below.This module contains Server SDKs for following AGC Services:1.Auth Service 2.Cloud Function 3.Cloud DB 4.Cloud StorageFor more information, visit theAppGallery Connect IntroductionAuth Service OverviewAuth Service provides an SDK and backend services, supports multiple authentication modes, and provides a powerful management console, enabling you to easily develop and manage user authentication,helps you quickly build a secure and reliable user authentication system for your app by directly integrating cloud-based Auth Service capabilities into your app.DocumentationAuth Service IntroductionCloud Function OverviewCloud Functions enables serverless computing. It provides the Function as a Service (FaaS) capabilities to simplify app development and O&M by splitting service logic into functions and offers the Cloud Functions SDK that works with Cloud DB and Cloud Storage so that your app functions can be implemented more easily. Cloud Functions automatically scales in or out server resources for functions based on actual traffic, freeing you from server resource management and reducing your O&M workload.Cloud Function IntroductionCloud DB OverviewCloud DB is a device-cloud synergy database product that provides data synergy management capabilities, unified data models, and various data management APIs. 
In addition to ensuring data availability, reliability, consistency, and security, CloudDB enables seamless data synchronization between the device and cloud, and supports offline application operations, helping developers quickly develop device-cloud and multi-device synergy applications. As a part of the AppGallery Connect solution, Cloud DB builds the Mobile Backend as a Service (MBaaS) capability for the AppGallery Connect platform. In this way, application developers can focus on application services, greatly improving the production efficiency.Cloud Storage OverviewAppGallery Connect Cloud Storage allows you to store high volumes of data such as images, audio, videos, and other user-generated content securely and economically. This scalable and maintenance-free service can free you from development, deployment, O&M, and capacity expansion of storage servers, so you can focus on service capability building and operations with better user experience. The Cloud Storage SDK provided for various clients has the following advantages in file uploads and downloads.Strong security: Data is transmitted using HTTPS, and files are encrypted and stored on the cloud using secure encryption protocols. 
- Resumable transfer: you can resume uploads or downloads from the breakpoint if there is a network failure or misoperation while the upload or download is underway.

Documentation: Cloud Storage Introduction

Agconnect Installation

You can install agconnect via PyPI:

pip install agconnect

After the installation takes place, you can import the SDKs as shown below:

from agconnect.auth_server import AGCAuth
from agconnect.cloud_function import AGConnectFunction
from agconnect.database_server import AGConnectCloudDB
from agconnect.cloud_storage import AGConnectCloudStorage

Supported Environments

This project supports Python version 3.7 or higher. Note that the Huawei AppGallery Connect Python Server SDK should only be used in server-side/back-end environments controlled by the application developer. This includes most server and serverless platforms. Do not use the Python Server SDK in the client.

License

The Huawei AppGallery Connect Python Server SDK for Auth is licensed under the ISC license.

Keywords: agconnect, python, server sdk, authentication
agc-optims
AGC Optimizers

A small lib for using adaptive gradient clipping in your optimizer. Currently PyTorch only.

News

- Sep 15, 2021: add AGC use independent from optimizer choice in PyTorch
- Sep 14, 2021: add AdamW, Adam, SGD and RMSprop with AGC; add first comparison between optimizers with and without AGC based on CIFAR10

Introduction

Brock et al. introduced in 2021 a new clipping technique to increase the stability of large-batch training and high learning rates in their Normalizer-Free Networks (NFNets): adaptive gradient clipping (AGC). This clipping method is not implemented in the leading frameworks, thus I provide optimizers which are capable of AGC.

Installation

pip install agc_optims

Usage

To be consistent with PyTorch, all arguments of the optimizers remain the same as in the standard versions. Only two parameters are added for AGC:

- clipping: hyperparameter for the clipping of the parameter. Default value 1e-2; smaller batch sizes demand a higher clipping parameter.
- agc_eps: term used in AGC to prevent grads being clipped to zero; default value 1e-3.

Optimizer independent

from torch.optim import Adam
from agc_optims.clipper import AGC

net = Net()  # your model
optimizer = Adam(net.parameters(), lr=0.001)
optimizer = AGC(optimizer=optimizer, clipping=0.16)

SGD

from agc_optims.optim import SGD_AGC

net = Net()  # your model
optimizer = SGD_AGC(net.parameters(), lr=0.01, momentum=0.9, clipping=0.16)

Adam

from agc_optims.optim import Adam_AGC

net = Net()  # your model
optimizer = Adam_AGC(net.parameters(), lr=0.001, weight_decay=1e-4, clipping=0.16)

AdamW

from agc_optims.optim import AdamW_AGC

net = Net()  # your model
optimizer = AdamW_AGC(net.parameters(), lr=0.001, weight_decay=1e-4, clipping=0.16)

RMSprop

from agc_optims.optim import RMSprop_AGC

net = Net()  # your model
optimizer = RMSprop_AGC(net.parameters(), lr=0.001, clipping=0.16)

Now you can use the optimizers just like their non-AGC counterparts.

Comparison

The following comparison shows that for batch sizes 64 and 128, Adam with AGC performs better than the normal Adam.
SGD is unfortunately worse with AGC, but the batch size is also very small compared to the NFNet paper. This requires more comparisons with higher batch sizes and also on other data sets. RMSprop is also better at both batch sizes with AGC than without. The learning rate was left at the default value for all optimizers, and the scripts in the performance_tests folder were used as the test environment.

(Figures: accuracy and loss on CIFAR-10 for SGD, Adam and RMSprop, at batch sizes 64 and 128, with and without AGC.)

As a little treat, I have also compared the speed of the optimizers with and without AGC, to see whether AGC greatly increases training times.

(Figures: training-speed comparison at batch size 128.)

To Do

- Add first comparison based on CIFAR10 with a small CNN
- Add AGC independent from optimizer
- Add comparison with higher batch sizes (256, 512, 1024)
- Add tests for each optimizer
- Clipping == 0 → no AGC
- Add comparison based on CIFAR100 with a small CNN
- Add comparison based on CIFAR10/100 with ResNet
- Add comparison with ImageNet (I do not have enough GPU power currently; if someone provides some tests I would be grateful)
- Add all optimizers included in PyTorch
- Support other frameworks than PyTorch
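The clipping rule these optimizers apply can be sketched in a few lines of NumPy. This is a generic illustration of AGC as described by Brock et al., written with whole-tensor norms for brevity (the paper operates unit-wise); it is not the library's actual code:

```python
import numpy as np

def agc_clip(param, grad, clipping=1e-2, agc_eps=1e-3):
    # Rescale the gradient whenever its norm exceeds `clipping` times
    # the parameter norm; `agc_eps` keeps tiny parameters from forcing
    # the gradient to zero.
    param_norm = max(np.linalg.norm(param), agc_eps)
    grad_norm = np.linalg.norm(grad)
    max_norm = clipping * param_norm
    if grad_norm > max_norm:
        grad = grad * (max_norm / grad_norm)
    return grad

w = np.ones(4)                  # ||w|| = 2
g = np.full(4, 100.0)           # a disproportionately large gradient
clipped = agc_clip(w, g, clipping=0.16)   # norm reduced to 0.16 * ||w||

g_small = np.full(4, 1e-4)
untouched = agc_clip(w, g_small, clipping=0.16)  # left unchanged
```

The AGC wrapper and the `*_AGC` optimizers above presumably apply an equivalent check to each parameter group during every step, which is why the rest of the training code does not change.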
agcounts
agcounts

A python package for extracting actigraphy counts from accelerometer data.

Install

pip install agcounts

Test

Download test data:

curl -L https://github.com/actigraph/agcounts/files/8247896/GT3XPLUS-AccelerationCalibrated-1x8x0.NEO1G75911139.2000-01-06-13-00-00-000-P0000.sensor.csv.gz --output data.csv.gz

Run a simple test:

import pandas as pd
import numpy as np
from agcounts.extract import get_counts


def get_counts_csv(
    file,
    freq: int,
    epoch: int,
    fast: bool = True,
    verbose: bool = False,
    time_column: str = None,
):
    if verbose:
        print("Reading in CSV", flush=True)
    raw = pd.read_csv(file, skiprows=0)
    if time_column is not None:
        ts = pd.to_datetime(raw[time_column])
        time_freq = str(epoch) + "S"
        ts = ts.dt.round(time_freq).unique()
        ts = pd.DataFrame(ts, columns=[time_column])
    raw = raw[["X", "Y", "Z"]]
    if verbose:
        print("Converting to array", flush=True)
    raw = np.array(raw)
    if verbose:
        print("Getting Counts", flush=True)
    counts = get_counts(raw, freq=freq, epoch=epoch, fast=fast, verbose=verbose)
    del raw
    counts = pd.DataFrame(counts, columns=["Axis1", "Axis2", "Axis3"])
    counts["AC"] = (
        counts["Axis1"] ** 2 + counts["Axis2"] ** 2 + counts["Axis3"] ** 2
    ) ** 0.5
    if time_column is not None:
        ts = ts[0 : counts.shape[0]]
        counts = pd.concat([ts, counts], axis=1)
    return counts


def convert_counts_csv(
    file,
    outfile,
    freq: int,
    epoch: int,
    fast: bool = True,
    verbose: bool = False,
    time_column: str = None,
):
    counts = get_counts_csv(
        file, freq=freq, epoch=epoch, fast=fast, verbose=verbose,
        time_column=time_column,
    )
    counts.to_csv(outfile, index=False)
    return counts


counts = get_counts_csv("data.csv.gz", freq=80, epoch=60)
counts = convert_counts_csv(
    "data.csv.gz",
    outfile="counts.csv.gz",
    freq=80,
    epoch=60,
    verbose=True,
    time_column="HEADER_TIMESTAMP",
)
agct
agct: Another Genome Conversion Tool

A drop-in replacement for the pyliftover tool, using the St. Jude chainfile crate. Enables significantly faster chainfile loading from cold start (see analysis/).

Installation

Install from PyPI:

python3 -m pip install agct

Usage

Initialize a class instance:

from agct import Converter
c = Converter("hg38", "hg19")

If a chainfile is unavailable locally, it's downloaded from UCSC and saved using the wags-tails package -- see the configuration instructions for information on how to designate a non-default storage location.

Call convert_coordinate():

c.convert_coordinate("chr7", 140453136, "+")
# [['chr7', 140152936, '+']]

Development

The Rust toolchain must be installed.

Create a virtual environment and install developer dependencies:

python3 -m virtualenv venv
source venv/bin/activate
python3 -m pip install -e '.[dev,tests]'

Be sure to install pre-commit hooks:

pre-commit install

This installs the Python code as editable, but after any changes to Rust code, maturin develop must be run:

maturin develop

Check Python style with ruff:

python3 -m ruff format . && python3 -m ruff check --fix .

Use cargo fmt to check Rust style (must be run from within the rust/ subdirectory):

cd rust/
cargo fmt

Run tests with pytest:

pytest
agd
Adaptive Grid Discretizations using Lattice Basis Reduction (AGD-LBR)

A set of tools for discretizing anisotropic PDEs on cartesian grids.

This repository contains:

- the agd library (Adaptive Grid Discretizations), written in Python® and cuda®
- a series of jupyter notebooks in the Python® language (online static and interactive views), reproducing my research in anisotropic PDE discretizations and their applications
- a basic documentation, generated with pdoc

The AGD library

The recommended way to install is

pip install agd

or alternatively (but this option does not include the GPU eikonal solver)

conda install agd -c agd-lbr

Reboot of the git history (february 8th 2024)

The notebooks, including images and videos, were previously saved in the git history, which as a result had grown to approx 750MB. After some unsuccessful attempts with BFG, I eventually had to delete and recreate the repository.

The notebooks

You may:

- Visualize the notebooks online using nbviewer. Note: prefer to use this link to view the notebooks, rather than the present repository, which contains some notebooks in a partially evaluated state.
- Run and modify the notebooks online using Google Colab. Note: some notebooks require turning on the GPU acceleration in Google Colab (typical error: cannot import cupy): Modify -> Notebook parameters -> GPU.

The notebooks are intended as documentation and testing for the agd library.
They encompass:

- Anisotropic fast marching methods, for shortest path computation.
- Non-divergence form PDEs, including non-linear PDEs such as Monge-Ampere.
- Divergence form anisotropic PDEs, often encountered in image processing.
- Algorithmic tools, related with lattice basis reduction methods, and automatic differentiation.

For offline consultation, please download and install anaconda or miniconda. Optionally, you may create a dedicated conda environment by typing the following in a terminal:

conda env create --file agd-hfm.yaml
conda activate agd-hfm

In order to open the book summary, type in a terminal:

jupyter notebook Summary.ipynb

Then use the hyperlinks to navigate within the notebooks.

Matlab users

Recent versions of Matlab are able to call the Python interpreter, and thus to use the agd library. See Notebooks_FMM/Matlab for examples featuring the CPU and GPU eikonal solvers.
agda
Agda Python Distribution

A project that packages Agda as a Python package, which allows you to install Agda from PyPI:

pip install agda

The PyPI package versions follow the PvP version numbers of Agda releases.

Binary wheels are provided for the following platforms:

| Platform | Release     | Architecture |
|----------|-------------|--------------|
| macOS    | ≥10.10      | x86_64       |
| macOS    | ≥13.0       | ARM64        |
| Linux    | libc ≥2.17  | x86_64       |
| Linux    | libc ≥2.28  | aarch64      |
| Linux    | musl ≥1.1   | x86_64       |
| Windows  | —           | AMD64        |

For more information, see:

- The Agda Source Code
- The Agda User Manual
agda-kernel
agda-kernel

An experimental Agda kernel for Jupyter. Used at Nextjournal [nextjournal kernel].

Examples

You can launch the following examples directly via the mybinder interface: example/LabImp.ipynb [binder].

Alternatively, if you have binder, then you can use repo2docker locally:

repo2docker https://github.com/lclem/agda-kernel

Installation

pip install agda_kernel
python -m agda_kernel.install

Syntax highlighting

Syntax highlighting is done separately by Codemirror, but unfortunately there is no Agda mode packaged with it. A rudimentary Agda mode for Codemirror can be found in codemirror-agda/agda.js. In order to install it, type

make codemirror-install

Agda extension

In order to improve the Jupyter interface, it is strongly recommended to also install agda-extension.

Functionality

Each code cell must contain a line of the form module A.B.C where. For instance:

module A.B.C where

id : {A : Set} → A → A
id x = x

Upon execution, the file A/B/C.agda is created containing the cell's contents, and it is fed to the Agda interpreter (via agda --interaction). The results of typechecking the cell are then displayed.

After a cell has been evaluated, one can:

- Run Agsy (auto) by putting the cursor next to a goal ? and hitting TAB. The hole ? is replaced by the result returned by Agsy, if any, or by {! !} if no result was found. If there is more than one result, the first ten of them are presented for the user to choose from.
- Refine the current goal by putting the cursor next to a goal {! !} and hitting TAB. An optional variable can be provided for case-splitting {! m !}.
- Show information about the current context, goal, etc., by putting the cursor near a goal/literal and hitting SHIFT-TAB.

Editing

Inputting common UNICODE characters is facilitated by the code-completion feature of Jupyter. When the cursor is immediately to the right of one of the base form symbols, hitting TAB will replace it by the corresponding alternate form.
Hitting TAB again will go back to the base form.

| base form | alternate form |
|-----------|----------------|
| ->        | →              |
| \         | λ              |
| <         | ⟨              |
| B         | 𝔹              |
| >         | ⟩              |
| =         | ≡              |
| top       | ⊤              |
| /=        | ≢              |
| bot       | ⊥              |
| alpha     | α              |
| /\        | ∧              |
| e         | ε              |
| \/        | ∨              |
| emptyset  | ∅              |
| neg       | ¬              |
| qed       | ∎              |
| forall    | ∀              |
| Sigma     | Σ              |
| exists    | ∃              |
| Pi        | Π              |
| [=        | ⊑              |
agda-pkg
Agda-pkg is a tool to manage Agda libraries, with extra features like installing libraries from different kinds of sources.

This tool does not modify Agda at all; it just manages systematically the directory .agda, with the .agda/defaults and .agda/libraries files used by Agda to locate the available libraries. For more information about how the Agda package system works, please read the official documentation here.

Quick Start

The most common usages of agda-pkg are the following:

To install Agda-pkg, just run the following command:

$ pip3 install agda-pkg

To install your library, go to the root directory of your source code and run:

$ apkg install --editable .

To install a library from agda/package-index:

$ apkg init
$ apkg install standard-library

A list of the available packages is shown below.

To install a Github repository with a specific version release:

$ apkg install --github agda/agda-stdlib --version v1.3

To install a library from a Github repository with a specific branch and a specific library name:

$ apkg install --github plfa/plfa.github.io --branch dev --name plfa

Indexed libraries

| Library name        | Latest version      | URL                                                              |
|---------------------|---------------------|------------------------------------------------------------------|
| agda-base           | v0.2                | https://github.com/pcapriotti/agda-base.git                      |
| agda-categories     | v0.1                | https://github.com/agda/agda-categories.git                      |
| agda-metis          | v0.2.1              | https://github.com/jonaprieto/agda-metis.git                     |
| agda-prelude        | df679cf             | https://github.com/UlfNorell/agda-prelude.git                    |
| agda-prop           | v0.1.2              | https://github.com/jonaprieto/agda-prop.git                      |
| agda-real           | e1558b62            | https://gitlab.com/pbruin/agda-real.git                          |
| agda-ring-solver    | d1ed21c             | https://github.com/oisdk/agda-ring-solver.git                    |
| agdarsec            | v0.3.0              | https://github.com/gallais/agdarsec.git                          |
| alga-theory         | 0fdb96c             | https://github.com/algebraic-graphs/agda.git                     |
| ataca               | a9a7c06             | https://github.com/jespercockx/ataca.git                         |
| cat                 | v1.6.0              | https://github.com/fredefox/cat.git                              |
| cubical             | v0.1                | https://github.com/agda/cubical.git                              |
| FiniteSets          | c8c2600             | https://github.com/L-TChen/FiniteSets.git                        |
| fotc                | apia-1.0.2          | https://github.com/asr/fotc.git                                  |
| generic             | f448ab3             | https://github.com/effectfully/Generic.git                       |
| hott-core           | 1037d82             | https://github.com/HoTT/HoTT-Agda.git                            |
| hott-theorems       | 1037d82             | https://github.com/HoTT/HoTT-Agda.git                            |
| HoTT-UF-Agda        | 9d0f38e             | https://github.com/martinescardo/HoTT-UF-Agda-Lecture-Notes.git  |
| ial                 | v1.5.0              | https://github.com/cedille/ial.git                               |
| lightweight-prelude | b2d440a             | https://github.com/L-TChen/agda-lightweight-prelude.git          |
| mini-hott           | d9b4a7b             | https://github.com/jonaprieto/mini-hott.git                      |
| MtacAR              | 5417230             | https://github.com/L-TChen/MtacAR.git                            |
| plfa                | stable-web-2019.09  | https://github.com/plfa/plfa.github.io.git                       |
| routing-library     | thesis              | https://github.com/MatthewDaggitt/agda-routing.git               |
| standard-library    | v1.3                | https://github.com/agda/agda-stdlib.git                          |

Usage manual

Initialisation of the package index

The easiest way to install libraries is by using the package index. agda-pkg uses a local database to maintain a register of all libraries available in your system. To initialize the index and the database, please run the following command:

$ apkg init
Indexing libraries from https://github.com/agda/package-index.git

Note.
To use a different location for your Agda files defaults and libraries, you can set the environment variable AGDA_DIR before running apkg, as follows:

$ export AGDA_DIR=$HOME/.agda

Another way is to create a directory .agda in your current directory and run agda-pkg from there; agda-pkg will prioritize the .agda directory in the current directory.

Help command

Check all the options of a command or subcommand by using the flag --help:

$ apkg --help
$ apkg install --help

Upgrade the package index

Remember to update the index every once in a while using upgrade:

$ apkg upgrade
Updating Agda-Pkg from https://github.com/agda/package-index.git

If you want to index your library, go to the package index and make a PR.

Environment variables

If there is an issue with your installation, or you suspect something is going wrong, you might want to see the environment variables used by apkg:

$ apkg environment

List all the packages available

To see all the packages available, run the following command:

$ apkg list

This command also has the flag --full to display a version of this list with more details.

Installation of packages

Installing a library is now easy, and we have multiple ways to install a package.

From the package-index:

$ apkg install standard-library

From a local directory:

$ apkg install .

or even much simpler:

$ apkg install

Installing a library creates a copy for Agda in the directory assigned by agda-pkg. If you want your current directory to be taken into account for any changes, use the --editable option.
As shown below:

$ apkg install --editable .

From a Github repository:

$ apkg install --github agda/agda-stdlib --version v1.1

From a git repository:

$ apkg install http://github.com/jonaprieto/agda-prop.git

To specify the version of a library, we use the flag --version:

$ apkg install standard-library --version v1.0

Or, simpler, by using @ or == as follows:

$ apkg install standard-library@v1.0
$ apkg install standard-library==v1.0

Multiple packages at once

To install multiple libraries at once, we have two options:

Using the inline method:

$ apkg install standard-library agda-base

Use @ or == if you need a specific version; see the examples above.

Using a requirements file:

Generate a requirements file using apkg freeze:

$ apkg freeze > requirements.txt
$ cat requirements.txt
standard-library==v1.1

Now, use the flag -r to install all the libraries listed in this file:

$ apkg install -r requirements.txt

Check all the options of this command with the help information:

$ apkg install --help

Uninstalling a package

Uninstalling a package will remove the library from the libraries visible to Agda.

Using the name of the library:

$ apkg uninstall standard-library

Inferring the library name from the current directory:

$ apkg uninstall .

And if we want to remove the library completely (the sources and everything), we use the flag --remove-cache:

$ apkg uninstall standard-library --remove-cache

Update a package to the latest version

We can get the latest version of a package from the versions registered in the package index.

Update all the installed libraries:

$ apkg update

Update a specific list of libraries (if some library is not installed, this command will install the latest version of it):

$ apkg update standard-library agdarsec

See packages installed

$ apkg freeze
standard-library==v1.1

This command is useful to keep in a file the versions used for your project, to install them later:

$ apkg freeze > requirements.txt

To install from this requirements file, run this command:

$ apkg install < requirements.txt

Approximate search of packages

We make an (approximate) search by using keywords and titles of the packages from the index.
To perform such a search, see the following example:

$ apkg search metis
1 result in 0.0012656739999998834 seg

 cubical
 url: https://github.com/agda/cubical.git
 installed: False

Get all the information of a package

$ apkg info cubical

Creating a library for Agda-Pkg

In this section, we describe how to build a library. To build a project using agda-pkg, we just run the following command:

$ apkg create

Some questions are going to be prompted in order to create the necessary files for Agda and for Agda-Pkg. The output is a folder like the one shown below.

Directory structure of an agda library

A common Agda library has the following structure:

$ tree -L 1 mylibrary/
mylibrary/
├── LICENSE
├── README.md
├── mylibrary.agda-lib
├── mylibrary.agda-pkg
├── src
└── test

2 directories, 3 files

.agda-lib library file

$ cat mylibrary.agda-lib
name: mylibrary  -- Comment
depend: LIB1 LIB2 LIB3 LIB4
include: PATH1 PATH2 PATH3

.agda-pkg library file

This file only works for agda-pkg. The idea of this file is to provide more information about the package, pretty similar to the cabal files in Haskell. This file has priority over its .agda-lib version.

$ cat mylibrary.agda-pkg
name: mylibrary
version: v0.0.1
author:
  - AuthorName1
  - AuthorName2
category: cat1, cat2, cat3
homepage: http://github.com/user/mylibrary
license: MIT
license-file: LICENSE.md
source-repository: http://github.com/user/mylibrary.git
tested-with: 2.6.0
description: Put here a description.
include:
  - PATH1
  - PATH2
  - PATH3
depend:
  - LIB1
  - LIB2
  - LIB3
  - LIB4

Using with Nix or NixOS

A nix-shell environment that loads agda-pkg, as well as agda and agda-mode for Emacs, is available.
To use this, apkg can put the necessary files in your project folder by running one of the following commands:

$ curl -L https://gist.github.com/jonaprieto/53e55263405ee48a831d700f27843931/download | tar -xvz --strip-components=1

or, if you have already installed agda-pkg:

$ apkg nixos

Then, you will have the following files:

./hello-world.agda
./agda_requirements.txt
./shell.nix
./deps.nix
./emacs.nix

From where you can run the nix shell:

$ nix-shell

To launch Emacs with agda-mode enabled, run mymacs in the newly launched shell; mymacs will also load your ~/.emacs file if it exists. If you are using Spacemacs, you will need to edit shell.nix to use ~/.spacemacs instead.

The files provided by the commands above are also available in this repository (apkg/support/nix) and in a third-party example repository, to give an idea of exactly which files need to be copied to your project.

Example:

$ cat hello-world.agda
module hello-world where

open import IO

main = run (putStrLn "Hello, World!")

Run mymacs hello-world.agda, then type C-c C-x C-c in Emacs to compile the loaded hello-world file.

Configuration

Edit any of the nix expressions as needed. In particular:

- To add Agda dependencies via agda-pkg, edit agda_requirements.txt.
- To add more Haskell or other system dependencies, or other target-language dependencies, edit deps.nix.
- To add or alter the editor used, change the myEmacs references in shell.nix, or add similar derivations.
- Optionally, create .emacs_user_config in the repository root directory and add any additional config, such as (setq agda2-backend "GHC") to use GHC by default when compiling Agda files from Emacs.

About

This is a proof of concept of an Agda Package Manager. Contributions are always welcomed.
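The `name==version` / `name@version` pin syntax accepted by `apkg install` and emitted by `apkg freeze` is simple to parse. The helper below is a hypothetical illustration of that syntax, not apkg's own code:

```python
def parse_requirement(line):
    # Split 'standard-library==v1.1' or 'standard-library@v1.1'
    # into (name, version); a bare name yields (name, None).
    line = line.strip()
    for sep in ("==", "@"):
        if sep in line:
            name, _, version = line.partition(sep)
            return name.strip(), version.strip()
    return line, None

pins = ["standard-library==v1.1", "agdarsec@v0.3.0", "agda-base"]
parsed = [parse_requirement(p) for p in pins]
# parsed == [('standard-library', 'v1.1'), ('agdarsec', 'v0.3.0'), ('agda-base', None)]
```

This is the same shape as a line in the requirements.txt file produced by `apkg freeze` above.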
agdeblend
No description available on PyPI.
agdetector
agricultural_field_detector.
agdistispy
============
AGDISTIS Py
============

Python bindings for the AGDISTIS web service: http://agdistis.aksw.org/demo/

Usage
=====

In [1]: from agdistispy.agdistis import Agdistis

In [2]: ag = Agdistis()

In [3]: ag.disambiguate("<entity>Austria</entity>")
Out[3]:
[{u'disambiguatedURL': u'http://dbpedia.org/resource/Austria',
  u'namedEntity': u'Austria',
  u'offset': 7,
  u'start': 0}]

Contributors
============

* Ivan Ermilov (AKSW/BIS/UPB)

Notes
=====

The source code is available from: https://github.com/earthquakesan/agdistispy
agd_tools
[![PyPi version](https://img.shields.io/pypi/v/agd_tools.svg)](https://pypi.python.org/pypi/agd_tools/)

# AGD Tools

Useful functions to work with data.

### Documentation

http://agd-tools.readthedocs.org/fr/latest/

### Licence

This library is free software under the [GNU Affero General Public License](http://www.gnu.org/licenses/agpl.html) version 3 or later.
age
pyage

pyage is an experimental implementation of @FiloSottile and @Benjojo12's project "age". The spec is currently available as a seven-page Google doc at age-encryption.org/v1.

This project is still work-in-progress.

⚠️ pyage is not intended to be a secure age implementation! My original intention was to better understand the spec, find mistakes early and provide a redundant implementation for validation. I'm not a cryptographer (IANAC) and did not (yet) find the time to address implementation-specific security issues (such as DoS attacks or side-channel attacks). So: use at your own risk.

Quick Start

Install from pip:

pip install age

Generate a key pair:

mkdir -p ~/.config/age
pyage generate > ~/.config/age/keys.txt

Encrypt a file:

pyage encrypt -i hello.txt -o hello.age pubkey:<recipient public key>

Decrypt a file (uses ~/.config/age/keys.txt):

pyage decrypt -i hello.age

For a real tutorial, see the Tutorial section in the documentation.

Documentation

The full documentation can be found at pyage.readthedocs.io.
age-2020
Age 2020

It takes an integer as an input and prints its square.

Installation

pip install age_2020

How to use it?

Open a terminal, type age, and then input an integer (your birth year).

License

© 2020 Priya_mal

This repository is licensed under the MIT license. See LICENSE for details.
age3d
age3d

A Python library to age 3D models by simulating the effects of weather.

Overview

Age3D is a Python library that allows for eroding of 3D models. It uses the .stl file format and incorporates Open3D functionality, allowing users to simulate material removal.

Features:

- Simplified workflow for .stl → TriangleMesh → PointCloud & BitMask
- Visualization of TriangleMesh & PointCloud
- Calculating metrics of a TriangleMesh
- Erosion method for aging
- Customizable number of passes of simulated erosion particles
- Customizable lifetime of simulated erosion particles

Dependencies

age3d requires the open3d Python library, which is installed during library installation.

Installation

The recommended way to install age3d is through pip:

pip install age3d

Usage

Import the library:

import age3d

Import a .stl model, where file_path points to its location:

mesh = age3d.import_mesh(file_path)

Erosion

If the mesh is low-poly, run with number_of_subdivisions > 0:

mesh = age3d.mesh_subdivision(mesh, iterations = number_of_subdivisions)

Erode the mesh:

eroded_mesh = age3d.erode(mesh)

For erosion with customized passes and max particle lifetime:

updated_vertices, eroded_mesh = age3d.erode(mesh, iterations = 2, erosion_lifetime = 10)

Point Cloud Creation

Make a PointCloud with red color points:

point_cloud = age3d.make_point_cloud(mesh, color = [255, 0, 0])

Visualization

Visualize the eroded mesh:

eroded_mesh.compute_vertex_normals()
age3d.visualize(mesh)

or

eroded_mesh.compute_vertex_normals()
age3d.visualize([mesh])

Visualize mesh & point cloud:

eroded_mesh.compute_vertex_normals()
age3d.visualize([mesh, point_cloud])

Visualize mesh & point cloud with wireframe:

eroded_mesh.compute_vertex_normals()
age3d.visualize([mesh, point_cloud], show_wireframe = True)

Contributing

If you encounter an issue, please feel free to raise it by opening an issue. Likewise, if you have resolved an issue, you are welcome to open a pull request.

See more at CONTRIBUTING.md
age-and-weight-Converter
Failed to fetch description. HTTP Status Code: 404
agea-tools
Agea tools

This package is intended to be used by the Agea Big Data team for the various daily tasks they perform.

Installation

The package can be installed from PyPI:

pip install agea_tools

For the moment it is only tested on Python 3.6.7.

How to use

Examples of how to use it are given below.

You can also call the Real Python Feed Reader in your own Python code, by importing from the reader package:

>>> from reader import feed
>>> feed.get_titles()
['How to Publish an Open-Source Python Package to PyPI', ...]
agecalc
Synopsis

Calculates birth information based on a specified Date of Birth.

Installation

pip install agecalc

What's Inside

Classes

- AgeCalc: stores the DOB data into a class. You can then use the methods below to get data from this.

Functions

- age: displays a DOB's age in years.
- age_days: displays a DOB's age in days.
- age_hours: displays a DOB's age in hours.
- age_months: displays a DOB's age in months.
- age_weeks: displays a DOB's age in weeks.
- age_weeks_days: displays a DOB's age in weeks/days. Will return a dictionary with the 'weeks' and 'days' keys, and their values.
- age_years_months: displays a DOB's age in years/months. Will return a dictionary with the 'years' and 'months' keys, and their values.
- dating_ages: displays the socially acceptable dating ages for a person. Will return a dictionary with the "max", "min" and "original" keys, with their values.
- day_of_birth: displays the DAY of birth of a DOB.
- last_birthday: displays the days since a DOB's last birthday.
- next_birthday: displays the days until a DOB's next birthday.

Example (age function)

With the AgeCalc class:

import agecalc
dob = agecalc.AgeCalc(1, 1, 2000)
print dob.age

With functions:

import agecalc
print agecalc.age(1, 1, 2000)

Notes

All functions/classes take only these three arguments:

- dd: Day
- mm: Month
- yy: Year

Dates should be passed as if they were integers. If the Date/Month contains a '0' before the integer, the '0' should be omitted. E.g.
DOB '01/01/2000' should be passed as:

- dd: 1
- mm: 1
- yy: 2000

Submitting an Issue

If you wish to submit an issue with this module, or suggest any changes, you can either use the GitHub Issue Tracker, or email me at [email protected].

Copyright/License

Copyright (C) 2015, Ali Raja

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.
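For illustration, the core date arithmetic behind functions like age, age_days and age_weeks_days can be sketched with the standard library alone (a hypothetical re-implementation mirroring agecalc's (dd, mm, yy) convention, not agecalc's own code):

```python
from datetime import date

def age_parts(dd, mm, yy, today=None):
    # Age in whole years, total days, and (weeks, days) for a DOB
    # given as integers.
    today = today or date.today()
    dob = date(yy, mm, dd)
    total_days = (today - dob).days
    # Subtract one year if this year's birthday has not happened yet.
    years = today.year - dob.year - (
        (today.month, today.day) < (dob.month, dob.day)
    )
    weeks, days = divmod(total_days, 7)
    return years, total_days, (weeks, days)

# Pinning `today` makes the example reproducible:
years, total_days, weeks_days = age_parts(1, 1, 2000, today=date(2015, 6, 1))
# years == 15, total_days == 5630, weeks_days == (804, 2)
```

The tuple comparison `(today.month, today.day) < (dob.month, dob.day)` is a compact way to check whether the birthday has already occurred this year.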
age-calculator
No description available on PyPI.
agecal-mick
agecal

This library provides:

- saying "Hi Everyone"
- comparing the user's age with mine, to tell who is younger or older; it takes the user's age as an input

Installation

pip install agecal-mick

How to use it?

import agecal
from agecal import greeting
from agecal import compareage

To print "Hi everyone":

greeting.sayHi()

To compare the user's age with mine (takes the user's age as an input and prints younger or older as an output):

compareage.compare_my_age()

License

Copyright 2021 Mickzaa
age-cda
Community-Detection-Modularity

Eigenvector-based community detection is a method used to identify communities or groups within a network by analyzing the eigenvectors of the network's adjacency matrix. The basic idea behind this approach is that nodes that belong to the same community will be more strongly connected to each other than to nodes in other communities.

The method starts by calculating the adjacency matrix of the network, which represents the connections between nodes. Next, the eigenvalues and eigenvectors of this matrix are calculated. The eigenvectors with the largest eigenvalues are then used to assign nodes to communities: nodes that belong to the same community will have similar eigenvector values for these dominant eigenvectors, so by grouping nodes with similar eigenvector values together, communities can be identified.

Installation

Install via PIP:

pip install age-cda

Build from source:

sudo apt-get update
sudo apt-get install libeigen3-dev
git clone https://github.com/Munmud/Community-Detection-Modularity
cd Community-Detection-Modularity
python setup.py install

Configure GSL - GNU Scientific Library

Unit Test

python -m unittest test_community.py

Instructions

Import:

from age_cda import Graph

Create a graph:

nodes = [0, 1, 2, 3, 4, 5]
edges = [[0, 1], [0, 2], [1, 2], [2, 3], [3, 4], [3, 5], [4, 5]]
g = Graph.Graph()
g.createGraph(nodes, edges)

- nodes: any elements
- edges: 2d array (adjacency list); each element refers to an element within the nodes array

Generate the community assignment:

res = g.get_community()

Output format:

[[3, 4, 5], [0, 1, 2]]

- a list of communities
- each community is a list of nodes

Samples

- Creating Graph
- Zachary's karate club

References

- Finding community structure in networks using the eigenvectors of matrices
- Modularity and community structure in networks
- Statistical Mechanics of Community Detection
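The procedure described above — build the adjacency matrix, take a dominant eigenvector, and group nodes by its values — can be sketched with NumPy. This follows Newman's leading-eigenvector method on the modularity matrix B = A − kkᵀ/2m (the first reference listed); it is a generic illustration, not age-cda's implementation:

```python
import numpy as np

def leading_eigenvector_split(nodes, edges):
    # Two-way split by the sign of the leading eigenvector of the
    # modularity matrix B = A - k k^T / (2m).
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    A = np.zeros((n, n))
    for u, v in edges:
        A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
    k = A.sum(axis=1)                    # node degrees
    m = k.sum() / 2.0                    # number of edges
    B = A - np.outer(k, k) / (2.0 * m)   # modularity matrix
    vals, vecs = np.linalg.eigh(B)
    lead = vecs[:, np.argmax(vals)]      # eigenvector of largest eigenvalue
    g1 = [v for v in nodes if lead[idx[v]] >= 0]
    g2 = [v for v in nodes if lead[idx[v]] < 0]
    return g1, g2

# The example graph above (two triangles joined by the edge 2-3)
# splits into its two triangles.
nodes = [0, 1, 2, 3, 4, 5]
edges = [[0, 1], [0, 2], [1, 2], [2, 3], [3, 4], [3, 5], [4, 5]]
g1, g2 = leading_eigenvector_split(nodes, edges)
```

Note that an eigenvector is only defined up to sign, so which triangle lands in g1 versus g2 is arbitrary; only the partition itself is meaningful.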
age-descriptor
No description available on PyPI.
age-detection-local
This is a package for running age detection and getting approximate age
age-detection-local-python-package
This is a package for running age detection and getting approximate age
age-finder
“README.md”
agefromname
No description available on PyPI.
ageliaco.p10userdata
Introduction

This product adds some properties to the user data, to exploit to its best the values there are in the p10 LDAP.

This product is dependent on plone.app.ldap, because it does set the basic properties.

Installation

- Go to admin > Site Setup > Add-ons
- Activate plone.app.ldap
- Activate ageliaco.p10userdata
- Go to ZMI > acl_users > ldap-plugin > acl_users
  - reset LDAP Server
  - reset "Configure" to fit your needs (filter and groups)

There is a bug concerning plone.app.ldap: when the LDAP server is set, it doesn't set the port number properly, and the LDAP filter is not set either.

This product may contain traces of nuts.

Authors

"AGELIACO", Serge Renfer mailto:serge.renfer@gmaildot com

Changelog

1.0dev (unreleased)

- Initial release
ageliaco.rd
IntroductionThis product is intended for managing a service in the Geneva School Department (DIP). The service is “Ressources & Développement” and it provides support for educational projects in the upper secondary schools (colleges).This product depends onageliaco.p10userdata, because it sets the basic properties in user data conforming to our LDAP settings.InstallationGo to admin > Site Setup > Add-onsActivate plone.app.ldapActivate ageliaco.p10userdataGo to ZMI > acl_users > ldap-plugin > acl_users ** reset LDAP Server ** reset “Configure” to fit your needs (filter and groups)Activate ageliaco.rdThere is a bug concerning plone.app.ldap => when the LDAP server is set, it doesn’t set the port number properly, and the LDAP filter is not set either.This product may contain traces of nuts.Authors“AGELIACO”, Serge Renfermailto:serge.renfer@gmaildot comChangelog0.1dev (unreleased)Initial release
ageliaco.rd2
IntroductionThis product helps manage a service in the post-obligatory schools of Geneva. “Ressource & Developpement” is a service that manages resources (hours of teaching used to produce a pedagogic project) to help teachers bring in new teaching resources.This is an experimental project, to see how far I can push this Dexterity/Plone matter ;-)Changelog0.1dev (unreleased)Initial release
ageliaco.recipe.csvconfig
- Code repository: https://github.com/renfers/ageliaco.recipe.csvconfig
- Questions and comments to serge.renfer at gmail dot com

=========================
ageliaco.recipe.csvconfig
=========================

The idea behind this recipe is to have only one source of information for your buildout variable settings.

When you are confronted with several very similar deployments, you gather the variable elements into a CSV file (just like a flat representation of several columns).

For instance, let's say you need to generate several Plone instances and you want to gather them in the same buildout, because you will add a supervisor and a cache with Varnish, plus a config for the nginx that runs on your server.

Your CSV file, main.csv::

    instance,port,domain,subdomain,plone,emailadmin
    albertcair,15004,albertcair.ch,base.albertcair.ch,/,[email protected]
    albertcair,15004,albertcair.ch,albertcair.ch,/alberto,[email protected]
    albertcair,15004,albertcair.ch,www.albertcair.ch,/alberto,[email protected]
    albertcair,15004,albertcair.ch,histoire.albertcair.ch,/bestie,[email protected]
    albertcair,15004,albertcair.ch,images.albertcair.ch,/images,[email protected]
    albertcair,15004,albertcair.ch,italiano.albertcair.ch,/italiano,[email protected]
    bopip,15005,bopip.ch,base.bopip.ch,/,[email protected]
    bopip,15005,bopip.ch,bopip.ch,/bopip,[email protected]
    bopip,15005,bopip.ch,www.bopip.ch,/bopip,[email protected]
    bopip,15005,bopip.ch,jaun.bopip.ch,/jaun,[email protected]
    bopip,15005,bopip.ch,java.bopip.ch,/java,[email protected]
    bopip,15005,bopip.ch,math.bopip.ch,/math,[email protected]
    bopip,15005,bopip.ch,ecole-en-sauvygnon.ch,/ensauvygnon,[email protected]
    bopip,15005,bopip.ch,www.ecole-en-sauvygnon.ch,/ensauvygnon,[email protected]

In your buildout you will be able to spread this information at different levels, that is, on different templates.

Let's make a *templates* directory in our buildout and put in our first template, instances.cfg.in::

    [$${subdomain}-parameters]
    port = $${port}
    host = 127.0.0.1
    plone = $${plone}
    name = $${instance}

and a second one, varsetting.cfg.in::

    [var-settings]
    vh-targets =
        $${subdomain}:$${subdomain}-parameters
    instances-targets =
        $${instance}:$${instance}-parameters
    backup-targets =
        backup-$${instance}:$${instance}-parameters
    cron-targets =
        cron-$${instance}:$${instance}-parameters
    supervisor =
        20 $${instance} ${buildout:directory}/bin/$${instance} [console] true ${users:zope}
    eventlistener =
        $${instance}-HttpOk TICK_60 ${buildout:bin-directory}/httpok [-m $${emailadmin} -p $${instance} http://localhost:11011]

Notice that our variables have the ``$${var}`` format.

In a buildout file you will have a part of the following form, main.cfg::

    [buildout]
    parts = main
    eggs = ageliaco.recipe.csvconfig

    [main]
    recipe = ageliaco.recipe.csvconfig:default
    csvfile = main.csv
    templates =
        templates/varsetting.cfg.in
        templates/instances.cfg.in

Running the following commands::

    ../Python-2.7/bin/python bootstrap.py -c main.cfg
    bin/buildout -c main.cfg

will generate 2 files in your buildout directory, varsetting.cfg::

    [var-settings]
    vh-targets =
        base.albertcair.ch:base.albertcair.ch-parameters
        albertcair.ch:albertcair.ch-parameters
        www.albertcair.ch:www.albertcair.ch-parameters
        histoire.albertcair.ch:histoire.albertcair.ch-parameters
        images.albertcair.ch:images.albertcair.ch-parameters
        italiano.albertcair.ch:italiano.albertcair.ch-parameters
        base.bopip.ch:base.bopip.ch-parameters
        bopip.ch:bopip.ch-parameters
        www.bopip.ch:www.bopip.ch-parameters
        jaun.bopip.ch:jaun.bopip.ch-parameters
        java.bopip.ch:java.bopip.ch-parameters
        math.bopip.ch:math.bopip.ch-parameters
        ecole-en-sauvygnon.ch:ecole-en-sauvygnon.ch-parameters
        www.ecole-en-sauvygnon.ch:www.ecole-en-sauvygnon.ch-parameters
    instances-targets =
        albertcair:albertcair-parameters
        bopip:bopip-parameters
    backup-targets =
        backup-albertcair:albertcair-parameters
        backup-bopip:bopip-parameters
    cron-targets =
        cron-albertcair:albertcair-parameters
        cron-bopip:bopip-parameters
    supervisor =
        20 albertcair ${buildout:directory}/bin/albertcair [console] true ${users:zope}
        20 bopip ${buildout:directory}/bin/bopip [console] true ${users:zope}
    eventlistener =
        albertcair-HttpOk TICK_60 ${buildout:bin-directory}/httpok [-m [email protected] -p albertcair http://localhost:11011]
        bopip-HttpOk TICK_60 ${buildout:bin-directory}/httpok [-m [email protected] -p bopip http://localhost:11011]

and instances.cfg::

    [base.albertcair.ch-parameters]
    port = 15004
    host = 127.0.0.1
    plone = /
    name = albertcair

    [albertcair.ch-parameters]
    port = 15004
    host = 127.0.0.1
    plone = /alberto
    name = albertcair

    [www.albertcair.ch-parameters]
    port = 15004
    host = 127.0.0.1
    plone = /alberto
    name = albertcair

    [histoire.albertcair.ch-parameters]
    port = 15004
    host = 127.0.0.1
    plone = /bestie
    name = albertcair

    [images.albertcair.ch-parameters]
    port = 15004
    host = 127.0.0.1
    plone = /images
    name = albertcair

    [italiano.albertcair.ch-parameters]
    port = 15004
    host = 127.0.0.1
    plone = /italiano
    name = albertcair

    [base.bopip.ch-parameters]
    port = 15005
    host = 127.0.0.1
    plone = /
    name = bopip

    [bopip.ch-parameters]
    port = 15005
    host = 127.0.0.1
    plone = /bopip
    name = bopip

    [www.bopip.ch-parameters]
    port = 15005
    host = 127.0.0.1
    plone = /bopip
    name = bopip

    [jaun.bopip.ch-parameters]
    port = 15005
    host = 127.0.0.1
    plone = /jaun
    name = bopip

    [java.bopip.ch-parameters]
    port = 15005
    host = 127.0.0.1
    plone = /java
    name = bopip

    [math.bopip.ch-parameters]
    port = 15005
    host = 127.0.0.1
    plone = /math
    name = bopip

    [ecole-en-sauvygnon.ch-parameters]
    port = 15005
    host = 127.0.0.1
    plone = /ensauvygnon
    name = bopip

    [www.ecole-en-sauvygnon.ch-parameters]
    port = 15005
    host = 127.0.0.1
    plone = /ensauvygnon
    name = bopip

varsetting.cfg.in exposes one kind of variable substitution: when a variable is present in an option value, the option value is repeated based on the number of different results in the csv file configuration. For instance, the "instance" column in my csv file has 2 different values; based on that, the "eventlistener" option expands into a 2-line value.

instances.cfg.in exposes another kind of variable substitution, where the variable is present in the section identifier: the section with the "subdomain" variable will expand into as many sections as there are different values for this variable in the csv file.

CSV as flat database
--------------------

Let's see another example to show that the csv file can be just a flat representation of a relational database.

The csv file, testmultikey.csv::

    prenom, nom, naissance, profession
    bob,wut,1961,doc
    marie,wut,1962,maitresse
    serge,ren,1960,prof
    coco,ren,1961,maitresse

The template, templates/contact.cfg.in::

    [contact]
    $${prenom}-$${nom}-$${naissance} = $${profession}

    [famille-$${nom}]
    $${prenom}-naissance = $${naissance}
    $${prenom}-profession = $${profession}

    [annee-de-naissance-$${naissance}]
    $${prenom}-$${nom} = $${profession}

    [$${profession}]
    nom = $${prenom}-$${nom}-$${naissance}

and now the buildout, buildout.cfg::

    [buildout]
    parts = main
    develop = src/ageliaco.recipe.csvconfig
    eggs =
        ageliaco.recipe.csvconfig

    [main]
    recipe = ageliaco.recipe.csvconfig
    templates = templates/contact.cfg.in
    csvfile = testmultikey.csv

and the result of `bin/buildout`, contact.cfg::

    [contact]
    bob-wut-1961 = doc
    coco-ren-1961 = maitresse
    marie-wut-1962 = maitresse
    serge-ren-1960 = prof

    [famille-ren]
    coco-naissance = 1961
    serge-naissance = 1960
    coco-profession = maitresse
    serge-profession = prof

    [famille-wut]
    bob-naissance = 1961
    marie-naissance = 1962
    bob-profession = doc
    marie-profession = maitresse

    [annee-de-naissance-1961]
    bob-wut = doc
    coco-ren = maitresse

    [annee-de-naissance-1962]
    marie-wut = maitresse

    [annee-de-naissance-1960]
    serge-ren = prof

    [maitresse]
    nom = marie-wut-1962
        coco-ren-1961

    [doc]
    nom = bob-wut-1961

    [prof]
    nom = serge-ren-1960

Detailed Documentation
**********************

Supported options
=================

The recipe supports the following options:

csvfile
    a path (relative or absolute) to the csv file that will be used by the recipe

templates
    one (or more) path to a template file. By default, a name with the ".in" suffix is expected, and a file with the same name minus the ".in" suffix will be generated in the buildout directory. If you want to use another suffix or naming convention, use the alternative format with a ":" separating the template path from the target path. This alternate format with ":" is also interesting if you want to generate a file in a different directory than the buildout directory!

    For instance::

        templates = templates/instances.cfg.in

    will generate an ./instances.cfg file (in the buildout directory), or::

        templates = templates/init-cache.cfg:production/cache.cfg

    will generate a production/cache.cfg file (notice that in this example it is a relative path, but it can also be a full path).

Contributors
************

"renfers", Author

Change history
**************

0.7 (2013-01-10)
----------------

- changed from ConfigParser to configparser
- added "+=" and "-=" to the delimiters, but could not recreate the same operator on options; it passes automatically to "=" (still a feature to implement)

["renfers"]

0.6 (2013-01-07)
----------------

- changed the variable call from ${} to $${} to easily substitute variables embedded in buildout variables
- added "+" to the generated file mode ("wb+" instead of "wb")

["renfers"]

0.5 (2012-12-28)
----------------

- Documentation updated

["renfers"]

0.4 (2012-12-26)
----------------

- rewrite to be sure to take into account multiple-value keys. For instance, if you have one or several variables in a part name, this set becomes a key, and if there are variables in the options we only consider values whose key values match those of the part. The same applies when the left part of an option has one or several variables: variables on the right part can only apply to values that have the same key values (the ones from the left part).

["renfers"]

0.3 (2012-12-19)
----------------

- Documentation updated

["renfers"]

0.2 (2012-12-19)
----------------

- Changed "update" to redo the install on update

["renfers"]

0.1 (2012-12-18)
----------------

- Created recipe with ZopeSkel

["renfers"]
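The row-driven template expansion the recipe performs can be sketched in a few lines of plain Python. This is a hypothetical illustration of the core idea, not the recipe's actual code: each CSV row yields one copy of the template with its $${var} placeholders filled in.

```python
import csv
import io
import re

# A reduced version of the recipe's inputs (hypothetical example data).
CSV = """instance,port,plone
albertcair,15004,/alberto
bopip,15005,/bopip
"""

TEMPLATE = """[$${instance}-parameters]
port = $${port}
plone = $${plone}
"""

# For each CSV row, substitute every $${var} placeholder with that row's value.
sections = []
for row in csv.DictReader(io.StringIO(CSV)):
    sections.append(re.sub(r"\$\$\{(\w+)\}", lambda m: row[m.group(1)], TEMPLATE))

result = "\n".join(sections)
```

With the two rows above, `result` contains an `[albertcair-parameters]` and a `[bopip-parameters]` section, mirroring how instances.cfg is expanded from main.csv.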
ageliaco.tracker
IntroductionGeneric Issue TrackerThis product may contain traces of nuts.Changelog0.1dev (unreleased)Initial release
agelient-signal-analyzer
No description available on PyPI.
ageml
Age Modelling

Description

BrainAge models (Franke et al. 2010, NeuroImage) have had success in exploring the relationship between healthy and pathological ageing of the brain. Furthermore, this type of age modelling can be extended to multiple body systems and to modelling the interactions between them (Tian et al. 2023, Nature Medicine). However, there is no standard for age modelling. Previous works have attempted to describe proper procedures, especially for age-bias correction (de Lange and Cole 2020, NeuroImage: Clinical). In this work we developed open-source software that allows anyone to do age modelling following well-established and tested methodologies for any type of clinical data. Age modelling with machine learning made easy.

The objective of AgeML is to standardise procedures, lower the barrier to entry into age modelling and ensure reproducibility. The project is open-source to create a welcoming environment and a community working together to improve and validate existing methodologies. We are actively seeking new developers who want to contribute to growing and expanding the package.

References:

- De Lange, A.-M. G., & Cole, J. H. (2020). Commentary: Correction procedures in brain-age prediction. NeuroImage: Clinical, 26, 102229. https://doi.org/10.1016/j.nicl.2020.102229
- Franke, K., Ziegler, G., Klöppel, S., & Gaser, C. (2010). Estimating the age of healthy subjects from T1-weighted MRI scans using kernel methods: Exploring the influence of various parameters. NeuroImage, 50(3), 883–892. https://doi.org/10.1016/j.neuroimage.2010.01.005
- Tian, Y. E., Cropley, V., Maier, A. B., Lautenschlager, N. T., Breakspear, M., & Zalesky, A. (2023). Heterogeneous aging across multiple organ systems and prediction of chronic disease and mortality. Nature Medicine, 29(5), 1221–1231. https://doi.org/10.1038/s41591-023-02296-6

CI/CD

License

Licensed under the Apache 2.0
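The age-bias correction cited above (de Lange and Cole, 2020) is easy to sketch. The code below is a generic illustration on synthetic data, not AgeML's actual API: fit a line of predicted age against chronological age in a reference sample, then invert that line to de-bias the predictions.

```python
import numpy as np

# Synthetic data (hypothetical example): brain-age models typically
# overestimate young subjects and underestimate old ones, producing a
# regression slope below 1.
rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 200)                       # chronological age
predicted = 0.7 * age + 15 + rng.normal(0, 3, 200)   # biased model predictions

# Fit predicted = alpha * age + beta on the reference sample ...
alpha, beta = np.polyfit(age, predicted, 1)

# ... then invert the fit to remove the systematic bias.
corrected = (predicted - beta) / alpha

# After correction, regressing corrected age on chronological age
# gives slope ~1 and intercept ~0.
slope, intercept = np.polyfit(age, corrected, 1)
```

The residual `corrected - age` is then the "age gap" that brain-age studies relate to pathology.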
agemo
agemoAgemo is an open-source tool with a python API that allows users to generate the Laplace Transform of the coalescence time distribution of a sample with a given demographic history. In addition, agemo provides ways to efficiently query that distribution, by using the fact that its generating function can be represented most simply as a directed graph with all possible ancestral states of the sample as nodes. Past implementations have not made full use of this, relying on computer algebra systems instead of graph traversal to process these recursive expressions.So far, agemo has been used to compute the probabilities of the joint site frequency spectrum for blocks of a given size, under models of isolation and migration. Calculating these probabilities requires repeated differentiation of the generating function (Lohse et al, 2011) which suffers from an explosion in the number of terms when implemented naively. Using a closed-form expression for the coefficients of a series expansion of the equations associated with each edge of the graph, we can efficiently propagate these coefficients through the graph avoiding redundant operations.
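For intuition about the object agemo computes, consider the simplest possible case, used here only as an illustration (this is not agemo's API): for a pair of lineages in a constant-size population, the coalescence time T is exponential with rate 1 in coalescent units, so its Laplace transform is E[exp(-sT)] = 1/(1+s). A quick Monte Carlo check:

```python
import numpy as np

# Pairwise coalescence time under a constant-size coalescent: T ~ Exp(1).
rng = np.random.default_rng(1)
T = rng.exponential(1.0, size=200_000)

# Compare the empirical Laplace transform E[exp(-s*T)] with the
# closed form 1 / (1 + s) at a sample point s.
s = 0.5
empirical = np.exp(-s * T).mean()   # Monte Carlo estimate
exact = 1.0 / (1.0 + s)             # closed form
```

Agemo generalises this kind of expression to structured demographies, where the generating function becomes a directed graph over ancestral states rather than a single closed form.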
agen
A very simple code generator.

Free software: MIT license. Documentation: https://agen.readthedocs.com/en/.

Features

- Simple and very simple API
- Custom Jinja Env supported
- Can be used as a command-line tool

No Template

agen doesn't provide any templates. It only provides functions to turn code into a template. If you need any public template, please use the awesome open-source tool, Cookiecutter.

Why agen?

agen is so lightweight that it can be integrated into your project in minutes.

I like Cookiecutter (it's so cool and so awesome), but most of its features are too heavyweight for me.

Installation

Install with pip:

    pip install agen

Install from source code:

    git clone https://github.com/yufeiminds/agen.git
    cd agen
    python setup.py install

Quickstart Guide

agen uses jinja2 as its template engine, so any feature of jinja2 templates is available in agen.

File Generation

    from agen import (
        string_render,
        render,
        generate,
        generate_dir
    )

    # Render text from a templated string
    string_render('{{key}}', {'key': 'value'})
    > 'value'

    cat template.py
    > {{key}}

    # Render text from a template file
    render('template.py', {'key': 'value'})
    > 'value'

    # Generate file from a template file
    generate('template.py', 'output.py', {'key': 'value'})

    # Content of output.py
    value

Directory Generation

If we have a directory like this:

    directory
    ├── __init__.py
    └── {{key}}.py

calling the generate_dir function:

    generate_dir('directory', 'mydir', {'key': 'value'})

will generate:

    mydir
    ├── __init__.py
    └── value.py

Every pure text file will be rendered by the template engine. The context {'key': 'value'} will also be rendered automatically.

Command Line Tool

Basic Usage

agen also implements a very simple command line tool for rendering local templates easily, but it can only be used on *NIX operating systems.

    Usage: agen [OPTIONS] [NAMES]...

    Options:
      -o, --out PATH      Output path or directory
      -s, --source PATH   Source path or directory
      -c, --context PATH  Path of context file
      --help              Show this message and exit.

With no arguments, agen will search the local template directory; e.g. on *NIX operating systems, this directory is usually at:

    $ agen
    --------------------------------------------
    agen Library
    see -> /Users/yufeili/.agen/templates
    --------------------------------------------
    directory
    repo
    single.txt

The simplest way to call it:

    $ agen -s template_path -o output_path -c context.json

A .yaml file can also be used as the context file. If the out option isn't provided, agen will prompt for input on screen (the default is the current directory).

Full Example

You can specify three kinds of directory or file as the source.

Single File

    $ agen -s single.txt -o output.txt -c context.json

Directory

Any directory, such as

    directory
    ├── __init__.py
    └── {{key}}.py

can be the source; template variables may also be used to render the output file name.

    $ agen -s directory -o myapp -c context.json

This command will create a directory named myapp, recursively processing all files under directory and outputting them to myapp based on the original structure.

Repository

Note: agen is not designed as a full command line tool, so for generating repositories we recommend the awesome Cookiecutter.

If there is an inner folder in a directory, and the directory has an agen.json or agen.yaml, it will be treated as a Repo:

    repo
    ├── README.md
    ├── agen.json
    └── {{name}}
        ├── __init__.py
        └── {{name}}.py

The default behavior of the tool changes for a Repo:

    $ agen -s repo -o output -c context.json

This command will create a folder with the same name as the inner directory under the output directory; if the folder name is a template string, it will first be rendered to a plain string, and the remaining behavior is the same as for a directory.

The context is not required. If it isn't provided, agen will load the agen.[json|yaml] file and prompt the user for input.

Example

For context {'key': 'value'}, with output as the current directory, the result is:

    .
    └── value
        ├── __init__.py
        └── value.py

Local Template Directory

Using the optional NAMES argument, file paths can be taken from the local templates directory as the source. 
The following two calls are equivalent in *NIX systems:$ agen -s ~/.agen/templates/single.txt $ agen single.txtCreditsAuthor : Yufei [email protected] me: @yufeiminds (Facebook)、@YufeiMinds (Sina Weibo)ContributionWelcome to develop with me!Fork this repo & develop it.
agency
SummaryAgency is a python library that provides anActor modelframework for creating agent-integrated systems.The library provides an easy to use API that enables you to connect agents with traditional software systems in a flexible and scalable way, allowing you to develop any architecture you need.Agency's goal is to enable developers to create custom agent-based applications by providing a minimal foundation to both experiment and build upon. So if you're looking to build a custom agent system of your own, Agency might be for you.FeaturesEasy to use APIStraightforward class/method based agent and action definitionUp to date documentationandexamplesfor referencePerformance and ScalabilitySupports multiprocessing and multithreading for concurrencyAMQP support for networked agent systemsObservability and ControlAction and lifecycle callbacksAccess policies and permission callbacksDetailed loggingDemo application available atexamples/demoMultiple agent examples for experimentationTwo OpenAI agent examplesHuggingFace transformers agent exampleOperating system accessIncludes Gradio UIDocker configuration for reference and developmentAPI OverviewIn Agency, all entities are represented as instances of theAgentclass. This includes all AI-driven agents, software interfaces, or human users that may communicate as part of your application.All agents may expose "actions" that other agents can discover and invoke at run time. An example of a simple agent could be:classCalculatorAgent(Agent):@actiondefadd(a,b):returna+bThis defines an agent with a single action:add. Other agents will be able to call this method by sending a message to an instance ofCalculatorAgentand specifying theaddaction. 
For example:other_agent.send({'to':'CalcAgent','action':{'name':'add','args':{'a':1,'b':2,}},})Actions may specify an access policy, allowing you to control access for safety.@action(access_policy=ACCESS_PERMITTED)# This allows the action at any timedefadd(a,b):...@action(access_policy=ACCESS_REQUESTED)# This requires review before the actiondefadd(a,b):...Agents may also define callbacks for various purposes:classCalculatorAgent(Agent):...defbefore_action(self,message:dict):"""Called before an action is attempted"""defafter_action(self,message:dict,return_value:str,error:str):"""Called after an action is attempted"""defafter_add(self):"""Called after the agent is added to a space and may begin communicating"""defbefore_remove(self):"""Called before the agent is removed from the space"""ASpaceis how you connect your agents together. An agent cannot communicate with others until it is added to a commonSpace.There are two includedSpaceimplementations to choose from:LocalSpace- which connects agents within the same application.AMQPSpace- which connects agents across a network using an AMQP server like RabbitMQ.Finally, here is a simple example of creating aLocalSpaceand adding two agents to it.space=LocalSpace()space.add(CalculatorAgent,"CalcAgent")space.add(MyAgent,"MyAgent")# The agents above can now communicateThese are just the basic features that Agency provides. For more information please seethe help site.InstallpipinstallagencyorpoetryaddagencyThe Demo ApplicationThe demo application is maintained as an experimental development environment and a showcase for library features. 
It includes multiple agent examples which may communicate with each other and supports a "slash" syntax for invoking actions as an agent yourself.To run the demo, please follow the directions atexamples/demo.The following is a screenshot of the Gradio UI that demonstrates the exampleOpenAIFunctionAgentfollowing orders and interacting with theHostagent.FAQHow does Agency compare to other agent frameworks?Though you could create a simple agent entirely using only the primitives in Agency (seeexamples/demo/agents/), it is not intended to be an all-inclusive LLM-oriented toolset like other libraries. For example, it does not include support for constructing prompts or working with vector databases. Implementation of agent behavior is left entirely up to you, and you are free to use other libraries as needed for those purposes.Agency focuses on the concerns of communication, observation, and scalability. The library strives to provide the operating foundations of an agent system without imposing additional structure on you.The goal is to allow you to experiment and discover the right approaches and technologies that work for your application. And once you've found an implementation that works, you can scale it out to your needs.ContributingPlease do!If you're considering a contribution, please check out thecontributing guide.Planned WorkSee the issues page.If you have any suggestions or otherwise, feel free to add an issue or open adiscussion.
agency-swarm
🐝 Agency SwarmOverviewAgency Swarm started as a desire and effort of Arsenii Shatokhin (aka VRSEN) to fully automate his AI Agency with AI. By building this framework, we aim to simplify the agent creation process and enable anyone to create a collaborative swarm of agents (Agencies), each with distinct roles and capabilities. By thinking about automation in terms of real-world entities, such as agencies and specialized agent roles, we make it a lot more intuitive for both the agents and the users.Key FeaturesCustomizable Agent Roles: Define roles like CEO, virtual assistant, developer, etc., and customize their functionalities withAssistants API.Full Control Over Prompts: Avoid conflicts and restrictions of pre-defined prompts, allowing full customization.Tool Creation: Tools within Agency Swarm are created usingInstructor, which provides a convenient interface and automatic type validation.Efficient Communication: Agents communicate through a specially designed "send message" tool based on their own descriptions.State Management: Agency Swarm efficiently manages the state of your assistants on OpenAI, maintaining it in a specialsettings.jsonfile.Deployable in Production: Agency Swarm is designed to be reliable and easily deployable in production environments.Installationpipinstallagency-swarmGetting StartedSet Your OpenAI Key:fromagency_swarmimportset_openai_keyset_openai_key("YOUR_API_KEY")Create Tools: Define your custom tools withInstructor:fromagency_swarm.toolsimportBaseToolfrompydanticimportFieldclassMyCustomTool(BaseTool):"""A brief description of what the custom tool does.The docstring should clearly explain the tool's purpose and functionality."""# Define the fields with descriptions using Pydantic Fieldexample_field:str=Field(...,description="Description of the example field, explaining its purpose and usage.")# Additional fields as required# ...defrun(self):"""The implementation of the run method, where the tool's main functionality is executed.This 
method should utilize the fields defined above to perform its task.Doc string description is not required for this method."""# Your custom tool logic goes heredo_something(self.example_field)# Return the result of the tool's operationreturn"Result of MyCustomTool operation"or convert from OpenAPI schemas:# using local filewithopen("schemas/your_schema.json")asf:ToolFactory.from_openapi_schema(f.read(),)# using requestsToolFactory.from_openapi_schema(requests.get("https://api.example.com/openapi.json").json(),)Define Agent Roles: Start by defining the roles of your agents. For example, a CEO agent for managing tasks and a developer agent for executing tasks.fromagency_swarmimportAgentceo=Agent(name="CEO",description="Responsible for client communication, task planning and management.",instructions="You must converse with other agents to ensure complete task execution.",# can be a file like ./instructions.mdfiles_folder="./files",# files to be uploaded to OpenAIschemas_folder="./schemas",# OpenAPI schemas to be converted into toolstools=[MyCustomTool,LangchainTool])Import from existing agents (will be deprecated in future versions):fromagency_swarm.agents.browsingimportBrowsingAgentbrowsing_agent=BrowsingAgent()browsing_agent.instructions+="\n\nYou can add additional instructions here."Define Agency Communication Flows: Establish how your agents will communicate with each other.fromagency_swarmimportAgencyagency=Agency([ceo,# CEO will be the entry point for communication with the user[ceo,dev],# CEO can initiate communication with Developer[ceo,va],# CEO can initiate communication with Virtual Assistant[dev,va]# Developer can initiate communication with Virtual Assistant],shared_instructions='agency_manifesto.md')# shared instructions for all agentsIn Agency Swarm, communication flows are directional, meaning they are established from left to right in the agency_chart definition. 
For instance, in the example above, the CEO can initiate a chat with the developer (dev), and the developer can respond in this chat. However, the developer cannot initiate a chat with the CEO. The developer can initiate a chat with the virtual assistant (va) and assign new tasks.Run Demo: Run the demo to see your agents in action!Web interface:agency.demo_gradio(height=900)Terminal version:agency.run_demo()Backend version:completion_output=agency.get_completion("Please create a new website for our client.",yield_messages=False)CLIGenesis AgencyThegenesiscommand starts the genesis agency in your terminal to help you create new agencies and agents.Command Syntax:agency-swarmgenesis[--openai_key"YOUR_API_KEY"]Make sure to include:Your mission and goals.The agents you want to involve and their communication flows.Which tools or APIs each agent should have access to, if any.Creating Agent Templates LocallyThis CLI command simplifies the process of creating a structured environment for each agent.Command Syntax:agency-swarmcreate-agent-template--name"AgentName"--description"Agent Description"[--path"/path/to/directory"][--use_txt]Folder StructureWhen you run thecreate-agent-templatecommand, it creates the following folder structure for your agent:/your-specified-path/ │ ├── agency_manifesto.md or .txt # Agency's guiding principles (created if not exists) └── AgentName/ # Directory for the specific agent ├── files/ # Directory for files that will be uploaded to openai ├── schemas/ # Directory for OpenAPI schemas to be converted into tools ├── tools/ # Directory for tools to be imported by default. ├── AgentName.py # The main agent class file ├── __init__.py # Initializes the agent folder as a Python package ├── instructions.md or .txt # Instruction document for the agent └── tools.py # Custom tools specific to the agentThis structure ensures that each agent has its dedicated space with all necessary files to start working on its specific tasks. 
Thetools.pycan be customized to include tools and functionalities specific to the agent's role.Future EnhancementsCreation of agencies that can autonomously create other agencies.Asynchronous communication and task handling.Inter-agency communication for a self-expanding system.ContributingFor details on how to contribute your agents and tools to Agency Swarm, please refer to theContributing Guide.LicenseAgency Swarm is open-source and licensed underMIT.Need Help?If you require assistance in creating custom agents for your business, feel free to reach out through my website:vrsen.aior schedule a consultation athttps://calendly.com/vrsen/ai-project-consultation
agenda
Simple Python module for pretty task logging.

Exposes the following methods:

- section: starts a new section of application processing
- task: starts a new high-level task within the section
- subtask: starts a new subtask within the task
- subfailure: indicates the failure of a subtask of the current task
- failure: indicates the failure of a high-level task
- subprompt: prints a subtask input prompt
- prompt: prints a high-level input prompt
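The methods listed above suggest a simple nesting convention. Below is a minimal, hypothetical standalone sketch of that style of hierarchical task logging; the real agenda module's signatures and exact output format may differ.

```python
# Hypothetical sketch of hierarchical task logging in the spirit of the
# section/task/subtask/failure methods above (not agenda's implementation).
def section(msg):
    return f"==== {msg} ===="

def task(msg):
    return f" * {msg}"

def subtask(msg):
    return f"   - {msg}"

def failure(msg):
    return f" ! {msg}"

def subfailure(msg):
    return f"   ! {msg}"

# Build a small log: a section, a task, its subtasks, and failures.
log = [
    section("Deploy"),
    task("Build artifacts"),
    subtask("Compile sources"),
    subfailure("Missing dependency"),
    failure("Build failed"),
]
print("\n".join(log))
```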
agenda2pdf
agenda2pdf is a script which generates a book agenda file in PDF format, ready to be printed or to be loaded on a ebook readerYou can choose among different sections. Each section have pdf links to other parts of the agenda.I’ve created it for using with my iLiad eBook reader.
agenet
agenetA Python 3.8 implementation of a system model to estimate the average Age of Information (AoI) in an ultra-reliable low latency communication (URLLC) enabled wireless communication system with Slotted ALOHA scheme over the quasi-static Rayleigh block fading channels. A packet communication scheme is used to meet both the reliability and latency requirements of the proposed wireless network. By resorting to finite block length information theory, queuing theory, and stochastic processes, theoretical results can be obtained with this research software.System modelThe following figure illustrates the wireless communication system that is proposed in this application.The diagram illustrates a wireless network that consists of multiple nodes. The transmission between each node and the relay is done using a transmission scheme similar to that of the Slotted ALOHA protocol, which is a popular random access method used in wireless communication systems.However, the transmission between the relay and each destination uses dedicated communication channels, and as a result, no transmission scheme similar to ALOHA is employed for this part of the communication. This helps to reduce the possibility of collisions and improve the reliability of the communication.Additionally, short packet communication is used for transmission. Since short packets are more susceptible to errors, a finite block length information theory is employed to calculate the block error rate. This allows for a more accurate estimation of the probability of errors occurring during transmission.FeaturesTheagenetpackage allows the user to study the Age of Information (AoI) in a slotted URLLC-enabled ALOHA network, which can be used as a basis for implementing mission-critical wireless communication applications. 
This application can be used as a study tool to analyze the age of information in slotted ALOHA networks with multiple users and short-packet communication scenarios that maintain URLLC. Various parameters such as power allocation, block length, packet size, number of nodes in the network, and activation probability of each node can be adjusted to analyze how the age of information varies.

The agenet package contains several functions that can be used to study the AoI in a slotted URLLC-enabled ALOHA network. These functions allow the user to:

- Calculate the Signal-to-Noise Ratio (SNR) at each receiving node in the network, an important factor in determining the quality of the communication link;
- Calculate the Block Error Rate (BER) for each destination in the network, an important metric for assessing the reliability of the network;
- Calculate the theoretical AoI and simulate the AoI for a given network configuration, allowing comparison of both measures to verify the accuracy of the simulation, as well as analysis of the performance of the network and of the impact of various parameters on the AoI;
- Estimate the average AoI value for a given update generation time and receiving time, a useful metric for evaluating the performance of any network.

Additionally, a command-line script is included in the package that allows for easy experimentation with the model with default or user-defined parameters. The simulation can generate both theoretical and simulated values for various factors such as block lengths, power allocations, packet sizes, activation probabilities, and number of nodes in the network.
These values can be presented in the form of tables using the following command:

```
$ ageprint
```

In addition to the tables, the simulation results can also be displayed as plots using the command:

```
$ ageplot
```

In both cases, adding the `--help` option displays the model's configurable parameters.

## Requirements

The implementation requires Python 3.8+ to run. The following libraries are also required:

- numpy
- matplotlib
- pandas
- tabulate
- argparse
- itertools
- math
- scipy
- random

## How to install

Install from PyPI:

```
pip install agenet
```

Or directly from GitHub:

```
pip install git+https://github.com/cahthuranag/agenet.git#egg=agenet
```

### Installing for development and/or improving the package

```
git clone https://github.com/cahthuranag/agenet.git
cd agenet
pip install -e .[dev]
```

This way, the package is installed in development mode. As a result, development dependencies are also installed.

## Documentation

Agenet package documentation

## License

MIT License

## References

[1] Age of Information in an URLLC-enabled Decode-and-Forward Wireless Communication System
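To make the average-AoI estimate from update generation and receiving times concrete, here is a short self-contained sketch — not agenet's API, the function name is invented here — in which the age grows linearly between receptions and resets to the delay of the freshest update at each reception:

```python
def average_aoi(gen_times, recv_times):
    """Time-average Age of Information over the observation window.

    gen_times[i] is when update i was generated, recv_times[i] when it was
    received; both lists are sorted by reception time. Between receptions the
    age grows linearly; at reception i it drops to recv_times[i] - gen_times[i].
    """
    area = 0.0
    for i in range(1, len(recv_times)):
        # Age at the start and end of the interval between receptions i-1 and i
        a0 = recv_times[i - 1] - gen_times[i - 1]
        a1 = recv_times[i] - gen_times[i - 1]
        area += (recv_times[i] - recv_times[i - 1]) * (a0 + a1) / 2  # trapezoid
    return area / (recv_times[-1] - recv_times[0])

# Updates generated every 2 s, each received 1 s after generation:
print(average_aoi([0, 2, 4], [1, 3, 5]))  # -> 2.0
```

The sawtooth shape of the age process is why the average (2.0 s here) exceeds the per-update delay (1 s): age keeps growing until the next update arrives.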
ageng-distributions
No description available on PyPI.
ageng-tool
No description available on PyPI.
agenius
# AGenius.py

AGenius.py is a LyricsGenius fork, making it easy to use, and async ready.

## Key Features

- Pythonic async/await.
- Removed every possible instance of the Public API to make it safer.

## Setup

You'll need a free Genius account to get access to the Genius API. This provides an `access_token` that is required.

## Installation

Python 3.9 or higher

You can use pip:

```
# Linux
python3 -m pip install agenius

# Windows
py -3 -m pip install agenius
```

## Examples

Importing the package and initiating the main class:

```python
import agenius
genius = agenius.Genius(token)
```

`PUBLIC_API` has been removed in this version. You have to pass an access token to the `Genius()` class.

To search for a specific song, you can either search by the `title` or `song_id`:

```python
# by title
song = await genius.search_song("Never Gonna Give You Up", "Rick Astley")

# by song_id
song = await genius.search_song(song_id=84851)
```

You can also look up artists and their songs via `artist_id`s:

```python
# look up an artist
artist = await genius.artist(artist_id=artist_id)

# look up their songs
song_list = await genius.artist_songs(artist_id=artist_id, per_page=10, sort="title")
```

Configurable parameters in the `Genius()` class:

```python
genius.verbose = False  # Turns status messages off
genius.excluded_terms = ["(Remix)", "(Live)"]  # Exclude songs with these words in their title
```

## More Examples

### Get a song's lyrics

```python
import agenius
genius = agenius.Genius(token)
song = await genius.search_song("Never Gonna Give You Up")
lyrics = song.lyrics
```

### Get a list of an artist's songs, and get the lyrics of every one of them

```python
import agenius
genius = agenius.Genius(token)

async def get_lyrics(artist_id):
    song_list = await genius.artist_songs(artist_id, per_page=50, sort="title")
    lyrics = {}
    async for song in song_list:
        lyrics[song["title"]] = song.lyrics
    return lyrics
```

## License Notice

This program is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License along with this program. If not, see https://www.gnu.org/licenses/.
agensgraph
# Agensgraph python driver

A psycopg2 extension to use [AgensGraph][1] with Python.

### Documentation

Go to [Psycopg's documentation][2] for documentation about the Python interface and to [AgensGraph][1] documentation for AgensSQL. If you want to learn more about Cypher, I recommend you to go to [Neo4j's][3] documentation.

### How to

```python
import agensgraph

conn = agensgraph.connect("dbname=iomed")  # Equivalent to psycopg2.connect
cur = conn.cursor()
cur.execute("SET graph_path=snomed;")
cur.execute("MATCH (a)-[r]->(b) RETURN a,r,b LIMIT 10;")
result = cur.fetchall()
```

**Copyright © 2017, IOMED Medical Solutions S.L.**

[1]: http://bitnine.net/wp-content/uploads/2017/06/html5/main.html
[2]: http://initd.org/psycopg/docs/index.html
[3]: https://neo4j.com/docs/developer-manual/current/cypher/
agensgraph4jupyter
To be done
agenshindot
# AGenshinDot

AGenshinDot is the Python implementation of GenshinDot, powered by Graia-Ariadne.

## Disclaimer

AGenshinDot releases all of its source code under the AGPLv3 license; you can find the project's license here. AGenshinDot is for learning and entertainment purposes only; using this bot for commercial or illegal purposes is prohibited. The AGenshinDot project and its author do not compensate for any losses caused by the use of this project and assume no legal responsibility.

## Installation

### Install from PyPI

```
pip install agenshindot
# or
poetry add agenshindot
```

### Install from GitHub

1. Install directly

```
poetry add git+https://github.com/MingxuanGame/AGenshinDot.git
```

2. Clone and install

```
git clone https://github.com/MingxuanGame/AGenshinDot.git
cd AGenshinDot
poetry install --no-dev
```

## Configuration

All configuration is stored in `config.toml` in the working directory.

A sample configuration follows:

```toml
# Bot QQ account
account = 1185285105
# verifyKey
verify_key = "agenshindot"
# Whether to enable the console
enable_console = false
# Whether to enable Cookie binding
enable_bind_cookie = false
# Bot administrator QQ accounts
admins = [1060148379]

# Connection configuration follows.
# If not configured, the default is HTTP + forward WebSocket at localhost:8080.
# Multiple connection types can be configured at the same time.

# Forward WebSocket configuration
ws = "ws://localhost:8080"
# Equivalent to:
# ws = ["ws://localhost:8080"]

# HTTP configuration
http = "http://localhost:8080"
# Equivalent to:
# http = ["http://localhost:8080"]

# Reverse WebSocket configuration
[ws_reverse]
# Endpoint
path = "/"
# Parameters used for verification
params = {}
# Headers used for verification
headers = {}
# WARNING: the settings above must not be omitted or reordered.
# If you only need to set the Endpoint, you can use the following instead:
# ws_reverse = "/"

# HTTP Webhook configuration
[webhook]
# Endpoint
path = "/"
# Headers used for verification
headers = {}
# WARNING: the settings above must not be omitted or reordered.
# If you only need to set the Endpoint, you can use the following instead:
# webhook = "/"

# Logging configuration
[log]
# Log level; see the loguru documentation for details
level = "INFO"
# Expiry time; expired logs are deleted. For the format, see the
# `timedelta` section of
# https://pydantic-docs.helpmanual.io/usage/types/#datetime-types
expire_time = "P14DT0H0M0S"
# Whether to enable database logging
db_log = false
```

## Launching

1. Run `bot.py` in the project folder

```
python bot.py
```

2. Launch as a module

```
python -m agenshindot
```

## Console commands

WARNING: the console is experimental and not recommended. Enabling the console disables the log-level setting for standard output.

With the console enabled, you can enter the following commands:

- `/stop` — shut down AGenshinDot.
- `/license` — print license information.
- `/version` — print the AGenshinDot logo and version information.
- `/execute <SQL statement>` — execute an SQL statement (dangerous).
agent
# Agent: Async generators for humans

**agent** provides a simple decorator to create python 3.5 [asynchronous iterators](https://docs.python.org/3/reference/compound_stmts.html#async-for) via `yield`s

## Examples

Make people wait for things for no reason!

```python
import agent
import asyncio

@agent.gen  # Shorthand decorator
def wait_for_me():
    yield 'Like '
    yield from asyncio.sleep(1)
    yield 'the line '
    yield from asyncio.sleep(10)
    yield 'at '
    yield from asyncio.sleep(100)
    yield 'the DMV'

async for part in wait_for_me():
    print(part)
```

Paginate websites in an easy asynchronous manner.

```python
import agent
import aiohttp

@agent.async_generator
def gen():
    page, url = 0, 'http://example.com/paginated/endpoint'
    while True:
        resp = yield from aiohttp.request('GET', url, params={'page': page})
        resp_json = (yield from resp.json())['data']
        if not resp_json:
            break
        for blob in resp_json['data']:
            yield blob
        page += 1

# Later on....
async for blob in gen():
    # Do work
```

**The possibilities are endless!**

For additional, crazier, examples take a look in the [tests directory](tests/).

## Get it

```bash
$ pip install -U agent
```

## Caveats

`yield from` syntax must be used, as `yield` in an `async def` block is a syntax error.

```python
async def generator():
    yield 1  # Syntax Error :(
```

`asyncio.Future`s can not be yielded directly, they must be wrapped by `agent.Result`.

## License

MIT licensed. See the bundled [LICENSE](LICENSE) file for more details.
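The caveat above reflects Python 3.5; since Python 3.6 (PEP 525), `yield` inside `async def` is legal and produces a native async generator, so the same pattern works today without a decorator. A minimal sketch for comparison:

```python
import asyncio

async def countdown(n):
    # Native async generator: `yield` inside `async def` (Python 3.6+)
    while n > 0:
        yield n
        await asyncio.sleep(0)  # a cooperative suspension point
        n -= 1

async def main():
    return [item async for item in countdown(3)]

print(asyncio.run(main()))  # -> [3, 2, 1]
```

Native async generators also use `await` where this library uses `yield from`, which sidesteps the `agent.Result` wrapping caveat entirely.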
agent1c-metrics
# Agent1C Metrics Service

Service for collecting metrics from 1C service files. Version: 0.4.4

## Installation

Install the package:

```
pip install --upgrade agent1c_metrics
```

Run the service:

```
python -m agent1c_metrics --reload
```

## Contribution

Install the package in editable mode:

```
pip install -e .
```

Change version (major/minor/patch):

```
bumpver update --patch
```

Build and publish the package:

```
poetry publish --build
```
agent360
# Agent360

360 Monitoring (360monitoring.com) is a web service that monitors and displays statistics of your server performance.

Agent360 is OS agnostic software compatible with Python 3.7 and 3.8. It's been optimized to have low CPU consumption and comes with an extendable set of useful plugins.

## Documentation

You can find the full documentation, including the feature-complete REST API, at docs.360monitoring.com and docs.360monitoring.com/docs/api.

## Automatic Installation (All Linux Distributions)

You can install the default configuration of Agent360 on all Linux distributions with just one click.

1. Connect to your server via SSH.
2. Find your USERTOKEN. To do so, go to the servers page and then click the "Add server" button.
3. Run the following command:

```
wget -q -N https://monitoring.platform360.io/agent360.sh && bash agent360.sh USERTOKEN
```

## Automatic Installation (Windows)

Download the setup and install it on your Windows server. The installer will ask for your USERTOKEN, which you can get from the servers page.

## Manual Installation

To customize installation options, install Agent360 manually.

1. Connect to your server via SSH.

2. Run the following command, which differs depending on your server platform:

Debian GNU/Linux:

```
apt-get install python3-dev python3-setuptools python3-pip
pip3 install agent360
wget -O /etc/agent360.ini https://monitoring.platform360.io/agent360.ini
```

Fedora/CentOS version 6 or earlier (python 2.7):

```
yum install python-devel python-setuptools gcc
easy_install agent360 netifaces psutil
wget -O /etc/agent360.ini https://monitoring.platform360.io/agent360.ini
```

Fedora/CentOS version 7 and later (python 3):

```
yum install python36-devel python36 gcc
pip3 install agent360
wget -O /etc/agent360.ini https://monitoring.platform360.io/agent360.ini
```

3. Find your USERTOKEN. To do so, go to the servers page and then click the "Add server" button. You need this to generate a serverid.

4. Run the following command (USERTOKEN is the one you got during the previous step):

```
agent360 hello USERTOKEN /etc/agent360-token.ini
```

5. Create a systemd service at /etc/systemd/system/agent360.service by adding the following:

```
[Unit]
Description=Agent360

[Service]
ExecStart=/usr/local/bin/agent360
User=agent360

[Install]
WantedBy=multi-user.target
```

6. Run the following commands:

```
chmod 644 /etc/systemd/system/agent360.service
systemctl daemon-reload
systemctl enable agent360
systemctl start agent360
```

## Building Windows setup

Prerequisite: InnoSetup is used as the installer; the build script assumes that it is installed in the default location.

Run `php windows/build.php` to create the setup file.
agent_6tisch
# 6TiSCH testing tool

## Architecture overview

[ASCII architecture diagram omitted: on the F-Interop side, Users, Tests (background tasks), a Manager, and a GUI exchange messages through a RabbitMQ message broker; on the user side, an agent sniffer observes the 6TiSCH nodes.]

- Agent sniffer: in charge of sniffing.
- A message broker (RabbitMQ), which is in charge of passing messages between the different components.
- A tshark-based dissector (AMQP for communication).
- A manager, which is in charge of driving the whole testing tool by receiving messages and launching the tests.
- Tests: in charge of asserting whether or not the messages observed from the 6TiSCH nodes are compliant with the standard.

## Installation

You need to have:

- tshark that correctly supports 6TiSCH and IEEE802.15.4e dissection (TODO: Insert git-hash of the correct revision)
- All python libraries listed in the requirements.txt file

## How to launch?

Run the following command:

```
supervisord -n --conf supervisord.conf.ini
```

This command will launch all the required components and will restart them if they crash.

## In the F-interop context, where does this testing tool run?

At the moment, the testing is launched from the iMinds server (http://orchestrator.f-interop.eu/).

## How does test session isolation happen?

It is managed by the F-Interop orchestrator. When a session is provisioned by the orchestrator, a virtual host is created on the broker and a virtual instance of the testing tool is spawned. Therefore two different sessions launched by the orchestrator don't interact with each other.

## Configuration

The dissector will consult a routing key and an exchange name given as environment variables. After starting, the dissector will send a JSON containing the dissected packet information.

## Tests

This tool aims to have code reuse between:

- **local execution**, where a developer is working locally
- **remote execution**, where the test is run on a remote machine.

Tests can be organized arbitrarily, but the recommended way of organizing tests is the following.

### How does a test work?

The goal of a test is to expose failures inside an implementation. If your test returns, it means that everything went fine. In any other case an exception will be raised (most likely an AssertionError), and you will get all the context to debug it. Your goal as a test implementer is to put as many relevant asserts as possible in your tests, to test a maximum of the assumptions you have about your network packets. What you want is a step that is a series of assertions on a context.

### What is the difference between online verdicts and local verdicts?

In local mode, you don't get an AMQP message. Basically, if pytest returns, no errors were detected. In online mode, at the end of all tasks you get a test that checks that all steps went fine and sends an AMQP message.

### Why do you require a tshark installed on the client side? Is the dissection happening client side?

We use tshark on the client side for two main reasons:

- We want to use eBPF filters to be sure to gather only the relevant packets. You don't want to send us packets that are not related to the test. That's fine, we don't want them either. We provide good defaults (we sniff only the interface you want us to, and only with the filter `wpan`). If you want to provide additional filters, for privacy issues or any other reason, feel free to do so.
- We want to have production-ready, cross-platform sniffing. Sniffing is a solved problem; the wireshark suite provides it for free. We prefer to invest our time in developing tests and improving dissectors rather than working on a solved problem and reinventing the wheel.

No dissection is happening on the client side. Ever. You want to be sure to have the latest dissector. F-Interop provides it for you.

### How can I run all existing tests locally?

```
python3 -m pytest -s tests
```

### Which tests are supported?

- We aim to reproduce interoperability results from the ETSI 6TiSCH plugfest in Prague between [Contiki](http://contiki-os.org) and [OpenWSN](http://openwsn.atlassian.net).

### I want to test something new. How could I get started?

- First, be sure that all the fields you are going to use in your test are supported by tshark. If they are not, add support in Wireshark to dissect these fields. You can get started quickly by going through this [tutorial](https://www.wireshark.org/docs/wsdg_html_chunked/ChDissectAdd.html).
- Second, be sure you have easy access to those fields by using well-named filters. You can easily test that by loading your PCAP files into Wireshark and looking for the display filter used for the relevant field.
- Third, you can get started by looking at the JSON output of tshark and start to assert properties on the JSON document using your favorite language and tools:

```
tshark -r my_capture.pcapng -T json
```

### How is the context of a test saved? How can I alter it?

Simply modify the object. All changes are kept between steps. When you are working in offline mode, the context is an object that all steps refer to. If you are working in online mode, the context is saved at the end of each step and restored at the beginning of a new step. You access packets by accessing attributes of the context, no matter whether it's an online or offline test.

### How can I know in which state my test is?

Simply consult the status property of the context object:

```
context.status
```

### How can I make a test block waiting for a given condition?

You can put, for instance, a while loop with a time.sleep() waiting for the condition to be fulfilled.

### What is the format of a report?

It's a JSON document. TODO: put the report here.

### How can I produce a verdict result in online/offline mode?

TODO

### What is the format of an error? How can I tell if an error happened?

TODO

### What should I do if an error happens?

If you are in offline mode, you can simply fix the code and try again. If you are in online mode, the task will finish and a new one matching the step you are at will be started.

### How can I replay a network trace locally?

Use an iterator:

```python
for packet in context.dissected_packets:
    if validator.validate(packet):
        normalize_my_packet(packet)
        assert my_property(packet) == my_expected_value
```

The context variable is assigned by pytest automatically.

### How can I replay a network trace locally while mimicking arrival times from the PCAP?

TODO

### How is the JSON modified through the test?

We suggest that you normalize the results of the dissection. Several improvements come out of this:

- JSON payloads are smaller and easier to parse
- Situation-specific encapsulation can be removed at this step

### How can I efficiently filter and extract relevant information from the JSON?

[Cerberus](http://docs.python-cerberus.org/en/stable/) is a validation library that can help you validate whether a JSON is correct or not. This library can also help you normalize the JSON document you have. Once you have a correct JSON document, you can create extractor functions that can fetch any properties inside your document.

### How can I distinguish between code that failed to pass a test and code that made the testing tool crash?

TODO

### When should I send debug messages?

When you have a failed assertion, feel free to use as much debug output as possible. By default we send a report once an AssertionError is raised in a step. Knowing about the context helps you understand at which step of the test case you are. It also helps you see how many packets are available.

### How can I send a verdict?

```python
import hammerhead
hammerhead.Verdict(context, {"my_verdict"})
```
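The "extractor functions" mentioned above can be sketched in a few lines. This is illustrative only — the key path below assumes the nesting produced by `tshark -T json` and is not part of this tool:

```python
from functools import reduce

def extract(doc, path, default=None):
    # Walk a list of keys into the nested dicts produced by `tshark -T json`.
    # Keys are passed as a list, not a dotted string, because tshark field
    # names themselves contain dots (e.g. "wpan.frame_type").
    try:
        return reduce(lambda node, key: node[key], path, doc)
    except (KeyError, IndexError, TypeError):
        return default

packet = {"_source": {"layers": {"wpan": {"wpan.frame_type": "0x0001"}}}}
print(extract(packet, ["_source", "layers", "wpan", "wpan.frame_type"]))  # -> 0x0001
```

Returning a `default` instead of raising keeps assertions readable: `assert extract(pkt, path) == expected` fails cleanly when a layer is absent.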
agenta
Home Page | Slack | Documentation

### Quickly iterate, debug, and evaluate your LLM apps

The open-source LLMOps platform for prompt-engineering, evaluation, human feedback, and deployment of complex LLM apps.

About • Quick Start • Installation • Features • Documentation • Enterprise • Community • Contributing

## ℹ️ About

Agenta is an end-to-end LLMOps platform. It provides the tools for prompt engineering and management, ⚖️ evaluation, and 🚀 deployment. All without imposing any restrictions on your choice of framework, library, or model. Agenta allows developers and product teams to collaborate and build robust AI applications in less time.

### 🔨 How does it work?

Using an LLM App Template (For Non-Technical Users):

1. Create an application using a pre-built template from our UI.
2. Access a playground where you can test and compare different prompts and configurations side-by-side.
3. Systematically evaluate your application using pre-built or custom evaluators.
4. Deploy the application to production with one click.

Starting from Code:

1. Add a few lines to any LLM application code to automatically create a playground for it.
2. Experiment with prompts and configurations, and compare them side-by-side in the playground.
3. Systematically evaluate your application using pre-built or custom evaluators.
4. Deploy the application to production with one click.

## Quick Start

- Try the cloud version
- Create your first application in one minute
- Create an application using Langchain
- Self-host agenta
- Read the Documentation
- Check the Cookbook

## Features

### Playground 🪄

With just a few lines of code, define the parameters and prompts you wish to experiment with.
You and your team can quickly experiment and test new variants on the web UI.

https://github.com/Agenta-AI/agenta/assets/4510758/8b736d2b-7c61-414c-b534-d95efc69134c

### Version Evaluation 📊

Define test sets, then evaluate your different variants manually or programmatically.

### API Deployment 🚀

When you are ready, deploy your LLM applications as APIs in one click.

## Why choose Agenta for building LLM-apps?

- 🔨 **Build quickly**: You need to iterate many times on different architectures and prompts to bring apps to production. We streamline this process and allow you to do this in days instead of weeks.
- 🏗️ **Build robust apps and reduce hallucination**: We provide you with the tools to systematically and easily evaluate your application to make sure you only serve robust apps to production.
- 👨‍💻 **Developer-centric**: We cater to complex LLM-apps and pipelines that require more than one simple prompt. We allow you to experiment and iterate on apps that have complex integration, business logic, and many prompts.
- 🌐 **Solution-Agnostic**: You have the freedom to use any libraries and models, be it Langchain, llama_index, or a custom-written alternative.
- 🔒 **Privacy-First**: We respect your privacy and do not proxy your data through third-party services. The platform and the data are hosted on your infrastructure.

## How Agenta works:

**1. Write your LLM-app code**

Write the code using any framework, library, or model you want.
Add the `agenta.post` decorator and put the inputs and parameters in the function call, just like in this example:

Example simple application that generates baby names:

```python
import agenta as ag
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

default_prompt = "Give me five cool names for a baby from {country} with this gender {gender}!!!!"

ag.init()
ag.config(prompt_template=ag.TextParam(default_prompt), temperature=ag.FloatParam(0.9))

@ag.entrypoint
def generate(
    country: str,
    gender: str,
) -> str:
    llm = OpenAI(temperature=ag.config.temperature)
    prompt = PromptTemplate(
        input_variables=["country", "gender"],
        template=ag.config.prompt_template,
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    output = chain.run(country=country, gender=gender)
    return output
```

**2. Deploy your app using the Agenta CLI**

**3. Go to agenta at http://localhost**

Now your team can 🔄 iterate, 🧪 experiment, and ⚖️ evaluate different versions of your app (with your code!) in the web platform.

## Enterprise Support

Contact us here for enterprise support and early access to agenta self-managed enterprise with Kubernetes support.

## Disabling Anonymized Tracking

To disable anonymized telemetry, set the following environment variable:

- For web: Set `TELEMETRY_TRACKING_ENABLED` to `false` in your `agenta-web/.env` file.
- For CLI: Set `telemetry_tracking_enabled` to `false` in your `~/.agenta/config.toml` file.

After making this change, restart agenta compose.

## Contributing

We warmly welcome contributions to Agenta. Feel free to submit issues, fork the repository, and send pull requests. We are usually hanging in our Slack.
Feel free to join our Slack and ask us anything. Check out our Contributing Guide for more information.

## Contributors ✨

Thanks goes to these wonderful people (emoji key): Sameh Methnani 💻📖, Suad Suljovic 💻🎨🧑‍🏫👀, burtenshaw 💻, Abram 💻📖, Israel Abebe 🐛🎨💻, Master X 💻, corinthian 💻🎨, Pavle Janjusevic 🚇, Kaosi Ezealigo 🐛💻, Alberto Nunes 🐛, Maaz Bin Khawar 💻👀🧑‍🏫, Nehemiah Onyekachukwu Emmanuel 💻💡📖, Philip Okiokio 📖, Abhinav Pandey 💻, Ramchandra Warang 💻🐛, Biswarghya Biswas 💻, Uddeepta Raaj Kashyap 💻, Nayeem Abdullah 💻, Kang Suhyun 💻, Yoon 💻, Kirthi Bagrecha Jain 💻, Navdeep 💻, Rhythm Sharma 💻, Osinachi Chukwujama 💻, 莫尔索 📖, Agunbiade Adedeji 💻, Emmanuel Oloyede 💻📖, Dhaneshwarguiyan 💻, Priyanshu Prajapati 📖, Raviteja 💻, Arijit 💻, Yachika9925 📖, Aldrin ⚠️, seungduk.kim.2304 💻, Andrei Dragomir 💻, diego 💻, brockWith 💻, Dennis Zelada 💻, Romain Brucker 💻

This project follows the all-contributors specification. Contributions of any kind are welcome!

Attribution: Testing icons created by Freepik - Flaticon
agentaction
# Action chaining and history for agents

## Why Use This?

This package helps manage and simplify the task of handling actions for an agent, especially a looping agent with chained functions. Actions can be anything, but the intended purpose is to work with openai function calling or other JSON/function calling LLM completion paradigms. This package facilitates action creation, retrieval, and management, all while supporting vector search powered by chromadb to efficiently locate relevant actions.

## Installation

```
pip install agentaction
```

## Quickstart

Create a directory for your action modules:

```
mkdir actions
```

In this directory, you can create Python files (.py) that define your actions. Each file should define a `get_actions` function that returns a list of action dictionaries. Here is a sample action file `sample_action.py`:

```python
def sample_function(args):
    # Your function logic here
    return "Hello, " + args["name"]

def get_actions():
    return [
        {
            "prompt": "Say hello",
            "builder": None,
            "handler": sample_function,
            "suggestion_after_actions": [],
            "never_after_actions": [],
            "function": {
                "name": "sample_function",
                "description": "Says hello to a person",
                "args": ["name"]
            }
        }
    ]
```

Now you can use the action manager in your agent. Here's a simple example:

```python
from actions_manager import import_actions, use_action, search_actions

# Import the actions
import_actions("./actions")

# Use an action
result = use_action("sample_function", {"name": "John"})
actions = search_actions("hello")
print(result)  # Should print: {"success": True, "output": "Hello, John"}
```

You can use the `get_available_actions` and `get_action` functions to search for and retrieve actions, respectively.
And don't forget to use the `add_to_action_history` function to keep track of which actions your agent has performed.

## Usage Guide

### Action Creation and Addition

```python
from actions_manager import add_action

action = {
    "prompt": "Action Prompt",
    "builder": None,  # the function that is called to build the action prompt
    "handler": your_function_name,  # the function that is called when the action is executed
    "suggestion_after_actions": ["other_action_name1", "other_action_name2"],
    "never_after_actions": ["action_name3", "action_name4"],
    "function": {
        "name": "your_function_name",
        "description": "Your function description",
        "args": ["arg1", "arg2"]
    }
}

add_action("your_function_name", action)
```

### Action Execution

```python
from actions_manager import use_action

result = use_action("your_function_name", {"arg1": "value1", "arg2": "value2"})
```

### Search for Relevant Actions

```python
from actions_manager import get_available_actions

actions = get_available_actions("query_text")
```

## API Documentation

- `compose_action_prompt(action: dict, values: dict) -> str` — Generates a prompt for a given action based on provided values.
- `get_actions() -> dict` — Retrieves all the actions present in the global `actions` dictionary.
- `add_to_action_history(action_name: str, action_arguments: dict = {}, success: bool = True)` — Adds an executed action to the action history.
- `get_action_history(n_results: int = 20) -> list` — Retrieves the most recently executed actions.
- `get_last_action() -> str or None` — Retrieves the last executed action from the action history.
- `get_available_actions(search_text: str) -> list` — Retrieves the available actions based on relevance and the last action.
- `get_formatted_actions(search_text: str) -> list` — Retrieves a dict containing the available actions in several formats.
- `get_action_from_memory(action_name) -> dict or None` — Retrieves an action from memory based on the action's name.
- `search_actions(search_text: str, n_results: int = 5) -> list` — Searches for actions based on a query text.
- `use_action(function_name: str, arguments: dict) -> dict` — Executes a specific action by its function name.
- `add_action(name: str, action: dict)` — Adds an action to the actions dictionary and the 'actions' collection in memory.
- `get_action(name: str) -> dict or None` — Retrieves a specific action by its name from the 'actions' dictionary.
- `remove_action(name: str) -> bool` — Removes a specific action by name.
- `import_actions(actions_dir: str)` — Imports all the actions present in the `actions_dir` directory. The actions returned are then added to the 'actions' dictionary.
- `clear_actions()` — Wipes the 'actions' collection in memory and resets the 'actions' dictionary.

## Contributions Welcome

If you like this library and want to contribute in any way, please feel free to submit a PR and I will review it. Please note that the goal here is simplicity and accessibility, using common language and few dependencies.
agent-actors
# Agent Actors: Plan-Do-Check-Adjust with Parallelized LLM Agent Trees

Create your own trees of AI agents that work towards a common objective. Together, let's explore the potential of Agent Actors and inspire the LLM community to delve deeper into this exciting realm of possibilities!

Watch this 2-minute demo walkthrough — https://www.loom.com/share/8e60585f069c4a9f8ac9f01204b41704

Here's a sample generated execution plan: [image omitted]

## Key Features

- **Time Weighted Long-Term Memory** using `langchain.retrievers.TimeWeightedVectorStoreRetriever`
- **Synthesized Working Memory**: An agent draws insights from and synthesizes their relevant memories into a "working memory" of 1–12 items for use with zero-shot prompts.
- Implements the **Plan-Do-Check-Adjust (PDCA)** operational framework for continuous improvement.
- **Automatic Planning and Distribution of Tasks to Agents**: Our `ParentAgent` class plans tasks for its children to do and distributes them to be completed in parallel.
- **Parallel Execution of Agents**: `ChildAgent`s work in parallel to Do and Check their results. Before running, they wait for all task dependencies to be resolved and inject that into their context.
- **Create your own trees of autonomous AI agents**: You can nest `ParentAgent`s under `ParentAgent`s, or comingle them with `ChildAgent`s. Use your own vector store, retriever, or embedding function with our `ParentAgent` and `ChildAgent` classes.
See how easy it is intest_system.py.What will you build?Terminating auto-GPTs that converges across successive runsYour own research and reporting teams of agentsSimulation-driven organizational behaviour researchCreate a developerteamof AutoGPTs that code for you togetherLimitationsProof of Concept, not production readyWe've only tested used GPT3.5InstallationRequires Python: ^3.10Install through your choice of package manager:poetryaddgit+https://github.com/shaman-ai/agent-actors.git pipenvinstallgit+https://github.com/shaman-ai/agent-actors.git#egg=agent-actorsLearn Agent Actors in 5 minutesfromagent_actorsimport(Agent,# subclass and replace with your own `run` methodChildAgent,# Do and CheckParentAgent,# Plan and AdjustConsolePrettyPrinter,# Helpful for printing JSON task outputs, pass as a handler to CallbackManager)Readtest_system.py!Run Agent ActorsClone the repopoetry install --with dev --with typingModifytest_system.pyto your own needsRunpoetry run pytest -s -k 'test_name_str_to_filter_by'You can also run all tests withpoetry run pytest, but this may take a while to execute, and is likely to hit into API rate limits.Contribute to Agent ActorsCheck out this diagram to understand how the system works:https://beta.plectica.com/maps/W26XSGD28Requests for Pull RequestsImproved Agent Prompts: Develop better prompts for the Plan, Do, Check, and Adjust chainsVisualization Tooling: Develop an interface for exploring first, then composing, an execution tree of Agent Actors, allowing researchers to better understand and visualize the interaction between the supervisory agent and worker agents.Evaluation Data: Understanding how this performs in different contexts is key to developing a better AGI architecture.Unlock Talking to Agents: The dialogue functions are there, and we're looking for help on how we can "talk" to these agents from another, say, IPython, to get a look into their state.Unlock Inter-Agent Communication: What happens if agents can talk to each other, 
not just return results to their parents and write memories to the global store?LicenseBUSL-1.1GratefulnessWe extend our gratitude to the contributors of the Python packageslangchainandray, without which this wouldn't be possible. We extend our gratitude to the amazing researchers who wrote Generative Simulacra [TODO], ReAct [TODO], and Jeremy Howard [TODO] and FastAI, without which this wouldn't be possible. And to BabyAGI and AutoGPT for inspiring us.CitationPlease cite the repo if you use the data or code in this repo.@misc{agentactors, author = {Shaman AI}, title = {Agent Actors: Plan-Do-Check-Adjust with Parallelized LLM Agent Trees}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/shaman-ai/agent-actors}}, }
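The Plan-Do-Check-Adjust tree described above can be illustrated, very roughly, without any LLM at all. The sketch below is a conceptual stand-in, not agent-actors' real API: the parent "plans" by splitting a goal into subtasks, children "do" them in parallel, and the parent "checks" each result and "adjusts" by redoing failed work.

```python
# LLM-free sketch of a Plan-Do-Check-Adjust agent tree (illustrative
# only; class names mirror the README but not the library's real code).
from concurrent.futures import ThreadPoolExecutor

class ChildAgent:
    def do(self, task):
        return task.upper()            # "Do": trivial stand-in for real work

    def check(self, task, result):
        return result == task.upper()  # "Check": validate the child's output

class ParentAgent:
    def __init__(self, children):
        self.children = children

    def plan(self, goal):
        # "Plan": split the goal into one subtask per child
        return goal.split()

    def run(self, goal):
        tasks = self.plan(goal)
        results = []
        with ThreadPoolExecutor() as pool:  # children work in parallel
            futures = [pool.submit(c.do, t)
                       for c, t in zip(self.children, tasks)]
            for child, task, fut in zip(self.children, tasks, futures):
                result = fut.result()
                if not child.check(task, result):
                    result = child.do(task)  # "Adjust": redo failed work
                results.append(result)
        return results

parent = ParentAgent([ChildAgent(), ChildAgent()])
print(parent.run("hello world"))  # ['HELLO', 'WORLD']
```

The real library replaces these trivial `do`/`check` methods with LLM chains, but the control flow follows the same shape.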
agent-admin
Agent Admin
agentagenda
An agenda and task manager for your agent.InstallationpipinstallagentagendaQuickstartThis will create a new task with the goal "Write README.md file", and then mark it as completed.fromagentagendaimportcreate_task,finish_task# Create a new tasktask=create_task("Write README.md file")print(task)# Complete the taskfinish_task(task)1. Creating a Task:To create a task, use thecreate_taskmethod. You can optionally specify a plan and steps.task=create_task("Finish the project",plan="Plan for the project")print(task)2. List All Tasks:Retrieve a list of all tasks that are in progress using thelist_tasksmethod.tasks=list_tasks()print(tasks)3. Search for Tasks:You can search for specific tasks using thesearch_tasksmethod.tasks=search_tasks("project")print(tasks)4. Deleting a Task:To delete a task, use thedelete_taskmethod.delete_task(task)5. Completing a Task:Mark a task as complete using thefinish_taskmethod.finish_task(task)6. Cancelling a Task:If you want to cancel a task, use thecancel_taskmethod.cancel_task(task)7. Retrieve Task ID:To get the ID of a specific task, use theget_task_idmethod.task_id=get_task_id(task)print(task_id)8. Working with Plans:To create a plan for a specific goal, use thecreate_planmethod.plan=create_plan("Finish the project")print(plan)To update the plan of a specific task, use theupdate_planmethod.update_plan(task,"New plan for the project")9. Working with Steps:To create a list of steps based on a given goal and plan, use thecreate_stepsmethod.steps=create_steps("Finish the project","Plan for the project")print(steps)To add a new step to a task, use theadd_stepmethod.add_step(task,"New step for the project")To mark a specific step of a task as complete, use thefinish_stepmethod.finish_step(task,"Step to complete")Documentationcreate_task(goal: str, plan: str = None, steps: dict = None) -> dictCreates a new task based on the given goal, as well as plan and steps optionally. 
If no plan or steps are provided they will be generated based on the goal. Returns a dictionary representing the task. *Example:* ```python task = create_task("Finish the project") print(task) ```list_tasks() -> listReturns a list of all tasks that are currently in progress. *Example:* ```python tasks = list_tasks() print(tasks) ```search_tasks(search_term: str) -> listReturns a list of tasks whose goal is most relevant to the search term. *Example:* ```python tasks = search_tasks("project") print(tasks) ```delete_task(task: Union[dict, int, str]) -> NoneDeletes the specified task. The task can be specified as a dictionary (as returned by `create_task`), an integer ID, or a string ID. *Example:* ```python delete_task(task) ```finish_task(task: Union[dict, int, str]) -> NoneMarks the specified task as complete. *Example:* ```python finish_task(task) ```cancel_task(task: Union[dict, int, str]) -> NoneMarks the specified task as cancelled. *Example:* ```python cancel_task(task) ```get_task_id(task: Union[dict, int, str]) -> strReturns the ID of the given task. The task can be specified as a dictionary (as returned by `create_task`), an integer ID, or a string ID. *Example:* ```python task_id = get_task_id(task) print(task_id) ```get_task_by_id(task_id: str) -> dictReturns the task with the given ID. If no task is found, None is returned. *Example:* ```python task = get_task_by_id(task_id) print(task) ```get_last_created_task() -> dictReturns the most recently created task. *Example:* ```python task = get_last_created_task() print(task) ```get_last_updated_task() -> dictReturns the most recently updated task. *Example:* ```python task = get_last_updated_task() print(task) ```get_current_task() -> dictReturns the current task. *Example:* ```python task = get_current_task() print(task) ```set_current_task(task: Union[dict, int, str]) -> dictSets the specified task as the current task. 
The task can be specified as a dictionary (as returned by `create_task`), an integer ID, or a string ID. *Example:* ```python set_current_task(task) ```create_plan(goal: str) -> strCreates a plan based on the given goal. *Example:* ```python plan = create_plan("Finish the project") print(plan) ```update_plan(task: Union[dict, int, str], plan: str) -> dictUpdates the plan of the specified task. The task can be specified as a dictionary (as returned by `create_task`), an integer ID, or a string ID. *Example:* ```python update_plan(task, "New plan for the project") ```create_steps(goal: str, plan: str) -> listCreates a list of steps based on the given goal and plan. *Example:* ```python steps = create_steps("Finish the project", "Plan for the project") print(steps) ```update_step(task: Union[dict, int, str], step: dict) -> dictUpdates the specified step of the specified task. The task can be specified as a dictionary (as returned by `create_task`), an integer ID, or a string ID. *Example:* ```python step = {"content": "New step", "completed": True} update_step(task, step) ```add_step(task: Union[dict, int, str], step: str) -> dictAdds a new step to the specified task. The task can be specified as a dictionary (as returned by `create_task`), an integer ID, or a string ID. *Example:* ```python add_step(task, "New step for the project") ```finish_step(task: Union[dict, int, str], step: str) -> dictMarks the specified step of the specified task as complete. The task can be specified as a dictionary (as returned by `create_task`), an integer ID, or a string ID. *Example:* ```python finish_step(task, "Step to complete") ```cancel_step(task: Union[dict, int, str], step: str) -> dictCancels the specified step of the specified task. The task can be specified as a dictionary (as returned by `create_task`), an integer ID, or a string ID. 
*Example:* ```python cancel_step(task, "Step to cancel") ```get_task_as_formatted_string(task: dict, include_plan: bool = True, include_current_step: bool = True, include_status: bool = True, include_steps: bool = True) -> strReturns a string representation of the task, including the plan, status, and steps based on the arguments provided. *Example:* ```python task_string = get_task_as_formatted_string(task, include_plan=True, include_current_step=True, include_status=True, include_steps=True) print(task_string) ```list_tasks_as_formatted_string() -> strRetrieves and formats a list of all current tasks. Returns a string containing details of all current tasks. *Example:* ```python tasks_string = list_tasks_as_formatted_string() print(tasks_string) ```Contributions WelcomeIf you like this library and want to contribute in any way, please feel free to submit a PR and I will review it. Please note that the goal here is simplicity and accessibility, using common language and few dependencies.
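As a mental model of the API documented above, the sketch below implements a tiny in-memory version of the same task lifecycle (create, add/finish steps, finish task). It is illustrative only, not agentagenda's actual implementation or storage.

```python
# Minimal in-memory mock of the agentagenda-style task lifecycle
# documented above. Hypothetical stand-in, not the library's real code.
import itertools

_tasks = {}
_ids = itertools.count(1)

def create_task(goal, plan=None, steps=None):
    task = {"id": next(_ids), "goal": goal, "plan": plan,
            "steps": steps or [], "status": "in_progress"}
    _tasks[task["id"]] = task
    return task

def get_task_id(task):
    # Accept a task dict or a raw ID, as the real API does
    return task["id"] if isinstance(task, dict) else int(task)

def list_tasks():
    return [t for t in _tasks.values() if t["status"] == "in_progress"]

def add_step(task, step):
    t = _tasks[get_task_id(task)]
    t["steps"].append({"content": step, "completed": False})
    return t

def finish_step(task, step):
    t = _tasks[get_task_id(task)]
    for s in t["steps"]:
        if s["content"] == step:
            s["completed"] = True
    return t

def finish_task(task):
    _tasks[get_task_id(task)]["status"] = "completed"

task = create_task("Write README.md file", plan="Outline, then draft")
add_step(task, "Outline sections")
finish_step(task, "Outline sections")
finish_task(task)
```

The real library additionally generates plans and steps from the goal when they are omitted, which this mock does not attempt.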
agentai
AgentAI: OpenAI Functions + Python FunctionsIt is designed to make it easy to use OpenAI models e.g. GPT3.5-Turbo and GPT4 with existing Python functions by adding a simple decorator.AgentAI is a simple Python library with this ethos:Let developers write code!Do not invent a new syntax!Make it easy to integrate with existing projects!Make it easy to extend!Have fun and use exclamations!Unlike some libraries, AgentAI does NOT require you to learn a new syntax. No chains!Instead, it empowers you to add OpenAI functions using Python decorators and then call them directly from your code. This makes it easy to integrate AgentAI with your existing projects.Colab NotebooksExtract detailed entities using Pydantic:FeaturesAPI Calls: Use AgentAI to decorate your Python functions and make them magical!Nested Pydantic Objects for Extraction: Use nested Pydantic objects to extract information from the user.SQL Database Interaction: Seamlessly extract and utilize data from SQL databases.Function Execution: Generate and execute function calls based on conversation context.Conversation Management: Effectively manage and track the state of conversations. Easily define your own functions which can use messages, functions, and conversation history.Next WeekMultiple Functions: Call multiple functions in a single conversation with a DSL/DAG.Retrieval: Use AgentAI to retrieve information from a vector Database -- but only when needed!Rich Media: Support for rich media types e.g.
images, audioFunction Generation: Generate Python functions based on conversation contextInstallationInstall AgentAI using pip:pipinstallagentaiGetting Started: Asking User for Missing Inputs until all inputs are availableImport required classes and functionsfromagentai.apiimportchat_complete,chat_complete_execute_fnfromagentai.openai_functionimporttool,ToolRegistryfromagentai.conversationimportConversationfromenumimportEnumweather_registry=ToolRegistry()Define a function with@tooldecoratorclassTemperatureUnit(Enum):celsius="celsius"fahrenheit="fahrenheit"@tool(registry=weather_registry)defget_current_weather(location:str,format:TemperatureUnit)->str:"""Get the current weatherArgs:location (str): The city and state, e.g. San Francisco, CAformat (str): The temperature unit to use. Infer this from the user's location.Returns:str: The current weather"""# Your function implementation goes here.return""Note thatagentaiautomatically parses the Python Enum type (TemperatureUnit) and passes it to the model as a JSONSchema Enum.
This saves you time in writing boilerplate JSONSchema which is required by OpenAI API.Create a Conversation object and add messagesconversation=Conversation()conversation.add_message("user","what is the weather like today?")Use thechat_completefunction to get a response from the modelchat_response=chat_complete(conversation.conversation_history,function_registry=weather_registry,model=GPT_MODEL)Output:{'role':'assistant','content':'In which city would you like to know the current weather?'}Add user response to conversation and callchat_completeagainOnce the user provides the required information, the model can generate the function arguments:conversation.add_message("user","I'm in Bengaluru, India")chat_response=chat_complete(conversation.conversation_history,function_registry=weather_registry,model=GPT_MODEL)eval(chat_response.json()["choices"][0]["message"]["function_call"]["arguments"])Output:{'location':'Bengaluru, India','format':'celsius'}Example: Doing a Database Query with Generated SQLDefine a function with@tooldecoratordb_registry=ToolRegistry()@tool(registry=db_registry)defask_database(query:str)->List[Tuple[str,str]]:"""Use this function to answer user questions about music. 
Input should be a fully formed SQL query.Args:query (str): SQL query extracting info to answer the user's question.SQL should be written using this database schema: <database_schema_string>IMPORTANT: Please return a fixed SQL in PLAIN TEXT.Your response should consist of ONLY the SQL query."""try:results=conn.execute(query).fetchall()returnresultsexceptExceptionase:raiseException(f"SQL error:{e}")Registering the function and using itagentai_functions=[json.loads(func.json_info)forfuncin[ask_database]]fromagentai.apiimportchat_complete_execute_fnagent_system_message="""You are ChinookGPT, a helpful assistant who gets answers to user questions from the Chinook Music Database.Provide as many details as possible to your usersBegin!"""sql_conversation=Conversation()sql_conversation.add_message(role="system",content=agent_system_message)sql_conversation.add_message("user","Hi, who are the top 5 artists by number of tracks")assistant_message=chat_complete_execute_fn(conversation=sql_conversation,functions=agentai_functions,model=GPT_MODEL,callable_function=ask_database)sql_conversation.display_conversation(detailed=True)Output:system: You are ChinookGPT, a helpful assistant who gets answers to user questions from the Chinook Music Database. Provide as many details as possible to your users Begin! user: Hi, who are the top 5 artists by number of tracks function: [('Iron Maiden', 213), ('U2', 135), ('Led Zeppelin', 114), ('Metallica', 112), ('Lost', 92)] assistant: The top 5 artists by number of tracks are: 1. Iron Maiden - 213 tracks 2. U2 - 135 tracks 3. Led Zeppelin - 114 tracks 4. Metallica - 112 tracks 5. Lost - 92 tracksDetailed ExamplesCheck out our detailednotebooks with exampleswhere we demonstrate how to integrate AgentAI with a chatbot to create a powerful conversational assistant that can answer questions using a SQLite database.ContributingWe welcome contributions! 
Please see ourcontributing guidelinesfor more details.SupportIf you encounter any issues or require further assistance, please raise an issue on ourGitHub repository.We hope you enjoy using AgentAI and find it helpful in powering up your AI models. Happy coding!
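The README notes that the @tool decorator turns a Python signature, including Enum annotations, into the JSON Schema that OpenAI function calling expects. The sketch below shows conceptually how such introspection can work; the `describe` helper is hypothetical and is not agentai's actual code.

```python
# Conceptual sketch: derive a JSON-Schema-like function description from
# a Python signature, mapping Enum annotations to JSON Schema enums.
# The `describe` helper is illustrative, not agentai's real implementation.
import inspect
from enum import Enum

def describe(fn):
    props = {}
    for name, param in inspect.signature(fn).parameters.items():
        ann = param.annotation
        if isinstance(ann, type) and issubclass(ann, Enum):
            # Enum members become a JSON Schema string enum
            props[name] = {"type": "string",
                           "enum": [m.value for m in ann]}
        elif ann is str:
            props[name] = {"type": "string"}
        else:
            props[name] = {}
    return {"name": fn.__name__,
            "parameters": {"type": "object", "properties": props}}

class TemperatureUnit(Enum):
    celsius = "celsius"
    fahrenheit = "fahrenheit"

def get_current_weather(location: str, format: TemperatureUnit) -> str:
    """Get the current weather."""
    return ""

schema = describe(get_current_weather)
# schema["parameters"]["properties"]["format"]["enum"]
#   -> ["celsius", "fahrenheit"]
```

This is the boilerplate JSON Schema the decorator saves you from writing by hand; the real library also extracts descriptions from the docstring.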
agentarchives
agentarchivesClients to retrieve, add, and modify records from archival management systems.InstallationAgentarchives is onPyPI!pip install agentarchivesOr you can install it directly from gitpip install git+https://github.com/artefactual-labs/agentarchives.gitDependency issue on macOSAgentarchives depends on themysqlclientpackage which has abugthat can cause the install to fail when using Homebrew on macOS computers. A solution suggested on the mysqlclient site is to changemysql_configon or about line 112:# Create options libs="-L$pkglibdir" libs="$libs -l "to# Create options libs="-L$pkglibdir" libs="$libs -lmysqlclient -lssl -lcrypto"See also thisblog.UsageThis library can be used to interact withArchivists Toolkit,ArchivesSpace, andAccess To Memory (AtoM).ArchivesSpaceFirst, you need to import the module in your Python script:fromagentarchivesimportarchivesspaceThen, initiate a new client, passing in the URL, user name, password, port and repository for your AS instance:client=archivesspace.ArchivesSpaceClient('http://localhost','admin','admin',8089,2)Using your client, call one of the included functions (documented inclient.py).
For example, the following:$ resource = client.get_record('/repositories/2/resources/1') $ print resourcewill return:{"classifications":[],"create_time":"2015-11-17T00:23:19Z","created_by":"admin","dates":[{"create_time":"2015-11-17T00:23:19Z","created_by":"admin","date_type":"bulk","expression":"maybe 1999","jsonmodel_type":"date","label":"creation","last_modified_by":"admin","lock_version":0,"system_mtime":"2015-11-17T00:23:19Z","user_mtime":"2015-11-17T00:23:19Z"}],"deaccessions":[],"extents":[{"create_time":"2015-11-17T00:23:19Z","created_by":"admin","extent_type":"cassettes","jsonmodel_type":"extent","last_modified_by":"admin","lock_version":0,"number":"1","portion":"whole","system_mtime":"2015-11-17T00:23:19Z","user_mtime":"2015-11-17T00:23:19Z"}],"external_documents":[],"external_ids":[],"id_0":"blah","instances":[],"jsonmodel_type":"resource","language":"aar","last_modified_by":"admin","level":"collection","linked_agents":[],"linked_events":[],"lock_version":0,"notes":[],"publish":false,"related_accessions":[],"repository":{"ref":"/repositories/2"},"restrictions":false,"revision_statements":[],"rights_statements":[],"subjects":[],"suppressed":false,"system_mtime":"2015-11-17T00:23:19 Z","title":"blah","tree":{"ref":"/repositories/2/resources/1/tree"},"uri":"/repositories/2/resources/1","user_mtime":"2015-11-17T00:23:19 Z"}Access To Memory (AtoM)First, you need to import the module in your Python script:fromagentarchivesimportatomThen, initiate a new client, passing in the URL, REST API access token, password, and port for your AtoM instance:client=atom.AtomClient('http://localhost','68405800c6612599',80)Using your client, call one of the included functions (documented inclient.py). 
For example, the following:$ resource = client.get_record('test-fonds') $ print resourceWill return:{"dates":[{"begin":"2014-01-01","end":"2015-01-01","type":"Creation"}],"level_of_description":"Fonds","notes":[{"content":"Note content","type":"general"}],"publication_status":"Draft","reference_code":"F2","title":"Test fonds"}Current AtoM client limitations (versus the ArchivesSpace client):Identifier wildcard search not supportedCreation of multiple notes not supportedNested digital objects not supportedThe ability to add/list notes with no content isn't supported
agent-attention-pytorch
No description available on PyPI.
agent-automata
Agent AutomataIntroductionagent-automatais a lightweight orchestration architecture for a hierarchical group of modular, autonomous agents, with the goal of composing actions from simple autonomous agents into complex collective behavior.The core idea behind this architecture is that instead of having a complex central agent managing many commands and sub-agents, or a fixed set of agents with specific roles in a task loop, we allow agents to call each other as tools, and then establish a hierarchical, rank-based structure to control the direction of the calls:Agent A (Rank 3): - Agent B (Rank 2) - Tool 1 - Tool 2 - Agent C (Rank 2) - Tool 1 - Tool 3 - Agent D (Rank 1) - Tool 4 - Tool 5 - Tool 6Agent A can then potentially be included as a callable sub-agent by another agent of higher rank, and so on.InstallationRunpip install agent-automatafor the core package. You can also runpip install agent-automata[builtins]to install some additional built-in functionality.Usage/DemoThere is very little concrete functionality included in the package--this is meant to be one component in a larger, more usable system of agents. Thedemodirectory shows a rather trivial example of specifying a simple agent and its sub-agents/tools using yaml spec files.To run the demo:Install the package with the[builtins]option.Download thedemodirectory (you can download the zip and extract just thedemodirectory).cdto thedemodirectory.Runpython run_demo.py.You should see some output from the demo agent, which creates a quiz and saves it to a file in a workspace.If you find this architecture interesting and would like more documentation on how it works, please post an issue.
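The rank rule above — an agent may only call sub-agents and tools of strictly lower rank — can be sketched in a few lines. This is a toy illustration of the idea, with hypothetical names, not the agent-automata API:

```python
# Toy illustration of the rank-based call hierarchy described above:
# an automaton may only include sub-automata of strictly lower rank,
# and calls flow downward until a leaf "tool" handles the request.
# Names are hypothetical, not agent-automata's real API.
class Automaton:
    def __init__(self, name, rank, sub_automata=()):
        self.name, self.rank = name, rank
        for sub in sub_automata:
            if sub.rank >= rank:
                raise ValueError(
                    f"{sub.name} (rank {sub.rank}) cannot serve a "
                    f"rank-{rank} automaton")
        self.sub_automata = {s.name: s for s in sub_automata}

    def run(self, request):
        # Leaves act as plain tools; higher ranks delegate downward.
        if not self.sub_automata:
            return f"{self.name} handled {request!r}"
        delegate = next(iter(self.sub_automata.values()))
        return delegate.run(request)

tool_agent = Automaton("D", rank=1)
manager = Automaton("B", rank=2, sub_automata=[tool_agent])
director = Automaton("A", rank=3, sub_automata=[manager])
print(director.run("make a quiz"))  # D handled 'make a quiz'
```

In the real package the "run" step would invoke an LLM or a tool rather than this trivial dispatch, and an agent like A could itself be registered as a sub-agent of something of rank 4 or higher.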
agent-bdi
Physiologically Inspired Framework for Complex System IntegrationIntroductionIn this study, inspired by the physiological operation of the human body, an efficient and resilient software framework was developed to aid in the integration of mature AI technologies, allowing them to coordinate with one another to accomplish more advanced goals. A navigation system for the visually impaired was then developed to validate the framework, with promising experimental results. This framework can be applied to other types of AI systems.InstructionsInstall the required packages.pip install -r requirements.txtCreate config.py, rewrite the MQTT settings.cp config.sample.py config.pyStart the framework.python start.pyClass DiagramThe class diagram of the integration of agents and coordination. HolonicAgent represents the core, and HeadAgents and BodyAgents represent the collection of head agents and subagents, respectively. MQTT is the fundamental communication protocol for agents in the global circulation system, and MqttClient is a private member of HolonicAgent, which allows the agent to have built-in MQTT connection and reception capabilities.All agents inherit from HolonicAgent to form a hierarchical structure, and they use the DDS to achieve neural message transmission depending on their specific behavior. Each super-agent is a DDS domain that publishes or subscribes to related topics with the required QoS, such as the DEADLINE policy to confirm the date of the data or the TRANSPORT_PRIORITY policy to define the transmission priority order, in order to achieve the purpose of a specific agent.Class diagram of integrationSequence DiagramAccording to the sequence diagram depicted below, the DDS and MQTT serve to transmit messages for the agents. Action 1 entails generating an independent process immediately after the root agent is initialized. Action 2 entails subscribing to or publishing relevant topics within the QoS constraints.
Action 3 entails recursively calling all the subagents to initiate the action. The agent's main action is performed in a separate process (Action 2) until it is notified of its termination. Finally, Action 4 entails generating a global broadcast with MQTT, with a system termination notification serving as an example in this study.Sequence Diagram of Integration
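The recursive start and terminate behavior described in the sequence diagram (Actions 3 and 4) can be sketched as follows. This is a conceptual stand-in: threads and an Event replace the real framework's processes, MQTT, and DDS, and the names only mirror the class diagram.

```python
# Sketch of the holonic start/terminate recursion from the sequence
# diagram: each agent starts its head and body sub-agents recursively
# (Action 3) and propagates shutdown to them (Action 4). Threads stand
# in for the framework's real processes and MQTT/DDS messaging.
import threading

class HolonicAgent:
    def __init__(self, name, head_agents=(), body_agents=()):
        self.name = name
        self.head_agents = list(head_agents)
        self.body_agents = list(body_agents)
        self.started = threading.Event()

    def start(self):
        self.started.set()                 # stand-in for spawning a process
        for agent in self.head_agents + self.body_agents:
            agent.start()                  # Action 3: recursive start

    def terminate(self):
        self.started.clear()
        for agent in self.head_agents + self.body_agents:
            agent.terminate()              # Action 4: propagate shutdown

ear = HolonicAgent("hearing")
voice = HolonicAgent("voice")
brain = HolonicAgent("brain", head_agents=[ear], body_agents=[voice])
brain.start()
```

In the real framework the shutdown notification travels as a global MQTT broadcast rather than a direct method call, so agents on other machines terminate as well.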
agentbox
+ Searching for openagent? You are in the right repo. It's now dotagent.(.🤖) +Hey there, Friend! This project is still in the "just for friends" stage. If you want to see what we're messing with and have some thoughts, take a look at the code.We'd love to incorporate your ideas or contributions. You can drop me a line at- ✉️[email protected] we started dotagent?We have a dream: Open and democratic AGI , free from blackbox censorship and control imposed by private corporations under the disguise of alignment. We once had this with the web but lost this liberty to the corporate giants of the mobile era, whose duopoly has imposed a fixed 30% tax on all developers.Our moonshot : A network of domain specific AI agents , collaborating so seamlessly that it feels like AGI. Contribute to democratizing the LAST technological frontier.What is dotagent ?dotagent is a library of modular components and an orchestration framework. Inspired by a microservices approach, it gives developers all the components they need to build robust, stable & reliable AI applications and experimental autonomous agents.🧱 ModularityMultiplatform:Agents do not have to run on a single location or machine. Different components can run across various platforms, including the cloud, personal computers, or mobile devices.Extensible:If you know how to do something in Python or plain English, you can integrate it with dotagent.🚧 GuardrailsSet clear boundaries:Users can precisely outline what their agent can and cannot do. This safeguard guarantees that the agent remains a dynamic, self-improving system without overstepping defined boundaries.🏗️ Greater control with Structured outputsMore Effective Than Chaining or Prompting:The prompt compiler unlocks the next level of prompt engineering, providing far greater control over LLMs than few-shot prompting or traditional chaining methods.Superpowers to Prompt Engineers:It gives full power of prompt engineering, aligning with how LLMs actually process text. 
This understanding enables you to precisely control the output, defining the exact response structure and instructing LLMs on how to generate responses.🏭 Powerful Prompt CompilerThe philosophy is to handle more processing at compile time and maintain a better session with LLMs.Pre-compiling prompts:By handling basic prompt processing at compile time, unnecessary redundant LLM processing is eliminated.Session state with LLM:Maintaining state with LLMs and reusing KV caches can eliminate many redundant generations and significantly speed up the process for longer and more complex prompts.(only for opensource models)Optimized tokens:Compiler can transform many output tokens into prompt token batches, which are cheaper and faster. The structure of the template can dynamically guide the probabilities of subsequent tokens, ensuring alignment with the template and optimized tokenization.(only for opensource models)Speculative sampling (WIP):You can enhance token generation speed in a large language model by using a smaller model as an assistant. The method relies on an algorithm that generates multiple tokens per transformer call using a faster draft model. This can lead to up to 3x speedup in token generation.📦 Containerized & Scalable.🤖files :Agents can be effortlessly exported into a simple .agent or .🤖 file, allowing them to run in any environment.Agentbox (optional):Agents should be able to optimize computing resources inside a sandbox. You can use Agentbox locally or on a cloud with a simple API, with cloud agentbox offering additional control and safety.Installationpip install dotagentCommon ErrorsSQLite3 Version ErrorIf you encounter an error like:Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0.This is a very common issue with Chroma DB. You can find instructions to resolve this in theChroma DB tutorial.Here's the code for a full stack chat app with UI, all in a single Python file!
(37 lines)importdotagent.compilerascompilerfromdotagent.compiler._programimportLogfromdotagentimportmemoryimportchainlitasuifromdotenvimportload_dotenvload_dotenv()@ui.on_chat_startdefstart_chat():compiler.llm=compiler.llms.OpenAI(model="gpt-3.5-turbo")classChatLog(Log):defappend(self,entry):super().append(entry)print(entry)is_end=entry["type"]=="end"is_assistant=entry["name"]=="assistant"ifis_endandis_assistant:ui.run_sync(ui.Message(content=entry["new_prefix"]).send())memory=memory.SimpleMemory()@ui.on_messageasyncdefmain(message:str):program=compiler("""{{#system~}}You are a helpful assistant{{~/system}}{{~#geneach 'conversation' stop=False}}{{#user~}}{{set 'this.user_text' (await 'user_text') hidden=False}}{{~/user}}{{#assistant~}}{{gen 'this.ai_text' temperature=0 max_tokens=300}}{{~/assistant}}{{~/geneach}}""",memory=memory)program(user_text=message,log=ChatLog())The UI will look something like this:
agentbridge
No description available on PyPI.
agentbrowser
A browser for your agent, built on Playwright.InstallationpipinstallagentbrowserUsageImporting into your projectfromagentbrowserimport(get_browser,init_browser,navigate_to,get_body_html,get_body_text,get_document_html,create_page,close_page,evaluate_javascript,)Quickstartfromagentbrowserimport(navigate_to,get_body_text,)# Navigate to a URLpage=navigate_to("https://google.com")# Get the text from the pagetext=get_body_text(page)print(text)API Documentationensure_event_loop()Ensure that there is an event loop in the current thread. If no event loop exists, a new one is created and set for the current thread. This function returns the current event loop.Example usage:loop=ensure_event_loop()get_browser()Get a Playwright browser. If the browser doesn't exist, initializes a new one.Example usage:browser=get_browser()init_browser(headless=True, executable_path=None)Initialize a new Playwright browser.Parameters:headless: Whether the browser should be run in headless mode, defaults to True.executable_path: Path to a Chromium or Chrome executable to run instead of the bundled Chromium.Example usage:init_browser(headless=False,executable_path="/usr/bin/google-chrome")create_page(site=None)Create a new page in the browser. 
If a site is provided, navigate to that site.Parameters:site: URL to navigate to, defaults to None.Example usage:page=create_page("https://www.example.com")close_page(page)Close a page.Parameters:page: The page to close.Example usage:page=create_page("https://www.example.com")close_page(page)navigate_to(url, page, wait_until="domcontentloaded")Navigate to a URL in a page.Parameters:url: The URL to navigate to.page: The page to navigate in.Example usage:page=create_page()navigate_to("https://www.example.com",page)get_document_html(page)Get the HTML content of a page.Parameters:page: The page to get the HTML from.Example usage:page=create_page("https://www.example.com")html=get_document_html(page)print(html)get_page_title(page)Get the title of a page.Parameters:page: The page to get the title from.Example usage:page=create_page("https://www.example.com")title=get_page_title(page)print(title)get_body_text(page)Get the text content of a page's body.Parameters:page: The page to get the text from.Example usage:page=create_page("https://www.example.com")text=get_body_text(page)print(text)get_body_html(page)Get the HTML content of a page's body.Parameters:page: The page to get the HTML from.Example usage:page=create_page("https://www.example.com")body_html=get_body_html(page)print(body_html)screenshot_page(page)Get a screenshot of a page.Parameters:page: The page to screenshot.Example usage:page=create_page("https://www.example.com")screenshot=screenshot_page(page)withopen("screenshot.png","wb")asf:f.write(screenshot)evaluate_javascript(code, page)Evaluate JavaScript code in a page.Parameters:code: The JavaScript code to evaluate.page: The page to evaluate the code in.Example usage:page=create_page("https://www.example.com")result=evaluate_javascript("document.title",page)print(result)find_chrome()Find the Chrome executable. 
Returns the path to the Chrome executable, or None if it could not be found.Example usage:chrome_path=find_chrome()print(chrome_path)Contributions WelcomeIf you like this library and want to contribute in any way, please feel free to submit a PR and I will review it. Please note that the goal here is simplicity and accessibility, using common language and few dependencies.
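The ensure_event_loop helper documented above can be approximated with the standard asyncio machinery. The sketch below captures the idea — reuse the thread's loop if one is running, otherwise create and install a new one — but it is an approximation, not agentbrowser's exact source.

```python
# Sketch of the ensure_event_loop behavior documented above: return the
# currently running loop if one exists, otherwise create a fresh loop
# and register it for this thread. Not agentbrowser's exact code.
import asyncio

def ensure_event_loop():
    try:
        # Succeeds only when called from inside a running loop
        return asyncio.get_running_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        return loop

loop = ensure_event_loop()
```

Helpers like this let synchronous callers drive Playwright's async API without worrying about whether an event loop already exists in their thread.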
agent_client
No description available on PyPI.
agent-cloud
+ Searching for openagent? You are in the right repo. It's now dotagent.(.🤖) +Hey there, Friend! This project is still in the "just for friends" stage. If you want to see what we're messing with and have some thoughts, take a look at the code.We'd love to incorporate your ideas or contributions. You can drop me a line at- ✉️[email protected] we started dotagent?We have a dream: Open and democratic AGI , free from blackbox censorship and control imposed by private corporations under the disguise of alignment. We once had this with the web but lost this liberty to the corporate giants of the mobile era, whose duopoly has imposed a fixed 30% tax on all developers.Our moonshot : A network of domain specific AI agents , collaborating so seamlessly that it feels like AGI. Contribute to democratizing the LAST technological frontier.What is dotagent ?dotagent is a library of modular components and an orchestration framework. Inspired by a microservices approach, it gives developers all the components they need to build robust, stable & reliable AI applications and experimental autonomous agents.🧱 ModularityMultiplatform:Agents do not have to run on a single location or machine. Different components can run across various platforms, including the cloud, personal computers, or mobile devices.Extensible:If you know how to do something in Python or plain English, you can integrate it with dotagent.🚧 GuardrailsSet clear boundaries:Users can precisely outline what their agent can and cannot do. This safeguard guarantees that the agent remains a dynamic, self-improving system without overstepping defined boundaries.🏗️ Greater control with Structured outputsMore Effective Than Chaining or Prompting:The prompt compiler unlocks the next level of prompt engineering, providing far greater control over LLMs than few-shot prompting or traditional chaining methods.Superpowers to Prompt Engineers:It gives full power of prompt engineering, aligning with how LLMs actually process text. 
This understanding enables you to precisely control the output, defining the exact response structure and instructing LLMs on how to generate responses.🏭 Powerful Prompt CompilerThe philosophy is to handle more processing at compile time and maintain a better session with LLMs.Pre-compiling prompts:By handling basic prompt processing at compile time, unnecessary redundant LLM processing is eliminated.Session state with LLM:Maintaining state with LLMs and reusing KV caches can eliminate many redundant generations and significantly speed up the process for longer and more complex prompts.(only for opensource models)Optimized tokens:Compiler can transform many output tokens into prompt token batches, which are cheaper and faster. The structure of the template can dynamically guide the probabilities of subsequent tokens, ensuring alignment with the template and optimized tokenization.(only for opensource models)Speculative sampling (WIP):You can enhance token generation speed in a large language model by using a smaller model as an assistant. The method relies on an algorithm that generates multiple tokens per transformer call using a faster draft model. This can lead to up to 3x speedup in token generation.📦 Containerized & Scalable.🤖files :Agents can be effortlessly exported into a simple .agent or .🤖 file, allowing them to run in any environment.Agentbox (optional):Agents should be able to optimize computing resources inside a sandbox. You can use Agentbox locally or on a cloud with a simple API, with cloud agentbox offering additional control and safety.Installationpip install dotagentCommon ErrorsSQLite3 Version ErrorIf you encounter an error like:Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0.This is a very common issue with Chroma DB. You can find instructions to resolve this in theChroma DB tutorial.Here's the code for a full stack chat app with UI, all in a single Python file!
(37 lines)

import dotagent.compiler as compiler
from dotagent.compiler._program import Log
from dotagent import memory
import chainlit as ui
from dotenv import load_dotenv

load_dotenv()

@ui.on_chat_start
def start_chat():
    compiler.llm = compiler.llms.OpenAI(model="gpt-3.5-turbo")

class ChatLog(Log):
    def append(self, entry):
        super().append(entry)
        print(entry)
        is_end = entry["type"] == "end"
        is_assistant = entry["name"] == "assistant"
        if is_end and is_assistant:
            ui.run_sync(ui.Message(content=entry["new_prefix"]).send())

memory = memory.SimpleMemory()

@ui.on_message
async def main(message: str):
    program = compiler("""
{{#system~}}
You are a helpful assistant
{{~/system}}
{{~#geneach 'conversation' stop=False}}
{{#user~}}
{{set 'this.user_text' (await 'user_text') hidden=False}}
{{~/user}}
{{#assistant~}}
{{gen 'this.ai_text' temperature=0 max_tokens=300}}
{{~/assistant}}
{{~/geneach}}
""", memory=memory)
    program(user_text=message, log=ChatLog())

The UI will look something like this:
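The sqlite3 error quoted above can be anticipated before Chroma ever loads. A minimal check using only the standard library; the 3.35.0 floor is taken from the error message itself:

```python
import sqlite3

# Chroma requires SQLite >= 3.35.0 (per the error message quoted above).
CHROMA_MIN_SQLITE = (3, 35, 0)

def sqlite_version_tuple():
    """Parse the bundled SQLite version string, e.g. '3.37.2' -> (3, 37, 2)."""
    return tuple(int(part) for part in sqlite3.sqlite_version.split("."))

def sqlite_supported(minimum=CHROMA_MIN_SQLITE):
    """True if Python's bundled SQLite meets the given minimum version."""
    return sqlite_version_tuple() >= minimum

print(sqlite3.sqlite_version, "ok" if sqlite_supported() else "too old for Chroma")
```

Running this before `pip install` tells you up front whether you need a newer SQLite (or the common `pysqlite3-binary` workaround described in the Chroma DB tutorial).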
agent-cloud-os
agentclpr
AgentCLPR. Introduction: a Chinese license-plate detection and recognition system built on ONNXRuntime, AgentOCR, and the License-Plate-Detector project. Recognition results: detection and recognition of multiple plate types is supported (single-layer plates are recognized most reliably). Single-layer plates: [[[[373, 282], [69, 284], [73, 188], [377, 185]], ['苏E05EV8', 0.9923506379127502]]] [[[[393, 278], [318, 279], [318, 257], [393, 255]], ['VA30093', 0.7386096119880676]]] [[[[487, 366], [359, 372], [361, 331], [488, 324]], ['皖K66666', 0.9409016370773315]]] [[[[304, 500], [198, 498], [199, 467], [305, 468]], ['鲁QF02599', 0.995299220085144]]] [[[[309, 219], [162, 223], [160, 181], [306, 177]], ['使198476', 0.9938704371452332]]] [[[[957, 918], [772, 920], [771, 862], [956, 860]], ['陕A06725D', 0.9791222810745239]]] Double-layer plates: [[[[399, 298], [256, 301], [256, 232], [400, 230]], ['浙G66666', 0.8870148431461757]]] [[[[398, 308], [228, 305], [227, 227], [398, 230]], ['陕A00087', 0.9578166644088313]]] [[[[352, 234], [190, 244], [190, 171], [352, 161]], ['宁A66666', 0.9958433652812175]]] Quick start. Installation:

# Install AgentCLPR
$ pip install agentclpr
# Install the ONNXRuntime build that matches your platform:
# CPU build (recommended for non-Windows-10 systems and devices without CUDA)
$ pip install onnxruntime
# GPU build (recommended for devices with CUDA support)
$ pip install onnxruntime-gpu
# DirectML build (recommended on Windows 10; general-purpose GPU acceleration)
$ pip install onnxruntime-directml
# See the ONNXRuntime website for details on other builds

Basic usage:

# Import the CLPSystem module
from agentclpr import CLPSystem

# Initialize the license-plate recognition system
clp = CLPSystem()

# Run plate recognition on an image
results = clp('test.jpg')

Server deployment: start the AgentCLPR server:

$ agentclpr server

Calling it from Python:

import cv2
import json
import base64
import requests

# Base64-encode the image
def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    image_base64 = base64.b64encode(data.tobytes()).decode('UTF-8')
    return image_base64

# Read the image
image = cv2.imread('test.jpg')
image_base64 = cv2_to_base64(image)

# Build the request payload
data = {'image': image_base64}

# Send the request
url = "http://127.0.0.1:5000/ocr"
r = requests.post(url=url, data=json.dumps(data))

# Print the prediction results
print(r.json())
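The result structure shown above is one [box_points, [plate_text, confidence]] pair per detection, which makes post-processing straightforward. A minimal sketch assuming only that structure; `parse_plates` is a hypothetical helper, not part of the agentclpr API:

```python
# Hypothetical post-processing for AgentCLPR-style results; the structure
# [box_points, [plate_text, confidence]] is taken from the sample output above.
def parse_plates(results, min_confidence=0.8):
    """Return (plate_text, confidence) pairs at or above the threshold."""
    plates = []
    for box, (text, confidence) in results:
        if confidence >= min_confidence:
            plates.append((text, confidence))
    return plates

sample = [
    [[[373, 282], [69, 284], [73, 188], [377, 185]], ['苏E05EV8', 0.9924]],
    [[[393, 278], [318, 279], [318, 257], [393, 255]], ['VA30093', 0.7386]],
]
print(parse_plates(sample))  # only the high-confidence detection survives
```

The same loop works on the JSON returned by the server endpoint, since it mirrors the in-process result format.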
agentcomlink
Simple file management and serving for agentsInstallationpipinstallagentcomlinkQuickstartStart the server: You can start the server with uvicorn like this:importosif__name__=="__main__":importuvicornuvicorn.run("agentcomlink:start_server",host="0.0.0.0",port=int(os.getenv("PORT",8000)))This will start the server athttp://localhost:8000.Get a file: Once the server is up and running, you can retrieve file content by sending a GET request to/file/{path}endpoint, where{path}is the path to the file relative to the server's current storage directory.fromagentcomlinkimportget_file# Fetches the content of the file located at "./files/test.txt"file_content=get_file("test.txt")print(file_content)Save a file: Similarly, you can save content to a file by sending a POST request to/file/endpoint, with JSON data containing thepathandcontentparameters.fromagentcomlinkimportadd_file# Creates a file named "test.txt" in the current storage directory# and writes "Hello, world!" to it.add_file("test.txt","Hello, world!")API DocumentationAgentFS provides the following public functions:start_server(storage_path=None)Starts the FastAPI server. 
If astorage_pathis provided, it sets the storage directory to the given path.Arguments:storage_path(str, optional): The path to the storage directory.Returns:NoneExample:fromagentcomlinkimportstart_serverstart_server("/my/storage/directory")get_server()Returns the FastAPI application instance.Arguments:NoneReturns:FastAPI application instance.Example:fromagentcomlinkimportget_serverapp=get_server()set_storage_path(new_path)Sets the storage directory to the provided path.Arguments:new_path(str): The path to the new storage directory.Returns:Trueif the path was successfully set,Falseotherwise.Example:fromagentcomlinkimportset_storage_pathset_storage_path("/my/storage/directory")add_file(path, content)Creates a file at the specified path and writes the provided content to it.Arguments:path(str): The path to the new file.content(str): The content to be written to the file.Returns:Trueif the file was successfully created.Example:fromagentcomlinkimportadd_fileadd_file("test.txt","Hello, world!")remove_file(path)Removes the file at the specified path.Arguments:path(str): The path to the file to be removed.Returns:Trueif the file was successfully removed.Example:fromagentcomlinkimportremove_fileremove_file("test.txt")update_file(path, content)Appends the provided content to the file at the specified path.Arguments:path(str): The path to the file to be updated.content(str): The content to be appended to the file.Returns:Trueif the file was successfully updated.Example:fromagentcomlinkimportupdate_fileupdate_file("test.txt","New content")list_files(path='.')Lists all files in the specified directory.Arguments:path(str, optional): The path to the directory. Defaults to'.'(current directory).Returns:A list of file names in the specified directory.Example:fromagentcomlinkimportlist_filesfiles=list_files()list_files_formatted(path='.')Lists all files in the specified directory as a formatted string. Convenient!Arguments:path(str, optional): The path to the directory. 
Defaults to '.' (current directory). Returns: A string containing a list of file names in the specified directory. Example: from agentcomlink import list_files_formatted; files = list_files_formatted() get_file(path) Returns the content of the file at the specified path. Arguments: path (str): The path to the file. Returns: A string containing the content of the file. Example: from agentcomlink import get_file; content = get_file("test.txt") Contributions Welcome: If you like this library and want to contribute in any way, please feel free to submit a PR and I will review it. Please note that the goal here is simplicity and accessibility, using common language and few dependencies.
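Because the /file/{path} endpoint takes a client-supplied path relative to the storage directory, a server like this needs to reject paths that escape that directory. A stdlib sketch of that check, purely illustrative (this is not agentcomlink's actual implementation):

```python
from pathlib import Path

def resolve_safe(storage_root, requested):
    """Resolve a requested path inside storage_root, rejecting '../' escapes.
    Illustrative only; not agentcomlink's actual logic."""
    root = Path(storage_root).resolve()
    candidate = (root / requested).resolve()
    # Accept the root itself or anything strictly beneath it.
    if candidate != root and root not in candidate.parents:
        raise ValueError(f"path escapes storage directory: {requested}")
    return candidate

print(resolve_safe("/tmp/files", "sub/test.txt"))
```

Resolving both paths first means symlink tricks and `..` segments are normalized away before the containment check runs.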
agentcomms
Connectors for your agent to the outside world.Discord connector with (voice and chat, DMs coming)Twitter connector (feed only, DMs coming)Admin Panel - simple web interface to chat with your agent and upload filesInstallationpipinstallagentcommsTwitter Usage GuideThis module uses a set of environment variables to interact with Twitter, so you'll need to set the following before using:TWITTER_EMAIL: The email address for your Twitter account.TWITTER_USERNAME: The username for your Twitter account.TWITTER_PASSWORD: The password for your Twitter account.Setting up the Twitter connectorBefore you can start using the Twitter connector, you have to initialize it. The initialization is done using thestart_twitter_connectorfunction.importtwittertwitter.start_twitter_connector()This will start the twitter connector with default parameters. If you wish to customize the email, username, password, or session storage path, you can use thestart_twitterfunction like so:twitter.start_twitter(email="[email protected]",username="my_username",password="my_password",session_storage_path="my_session.cookies")Liking a tweetTo like a tweet, you can use thelike_tweetfunction. Pass in the id of the tweet you wish to like.twitter.like_tweet("1234567890")Replying to a tweetTo reply to a tweet, you can use thereply_to_tweetfunction. Pass in the message you wish to send, and the id of the tweet you're replying to.twitter.reply_to_tweet("This is a great tweet!","1234567890")Posting a tweetTo post a new tweet, you can use thetweetfunction. Pass in the message you wish to tweet. You can optionally pass in a media object to attach to the tweet.twitter.tweet("Hello, Twitter!")Registering feed handlersFeed handlers are functions that get called whenever there are new tweets in the feed. 
They can be registered using theregister_feed_handlerfunction.defmy_feed_handler(tweet):print(f"New tweet from{tweet['user']['name']}:{tweet['text']}")twitter.register_feed_handler(my_feed_handler)You can also unregister a handler using theunregister_feed_handlerfunction.twitter.unregister_feed_handler(my_feed_handler)Getting account informationTo get the current account object, you can use theget_accountfunction.account=twitter.get_account()print(account.username)This will print out the username of the current Twitter account.Discord Usage GuideThe Discord connector works with both voice and text. For voice, you will need an Elevenlabs API key.Environment VariablesBefore you start, you need to set the environment variables for the bot to function correctly. Create a.envfile in your project directory and set these variables:DISCORD_API_TOKEN=your_discord_bot_token ELEVENLABS_API_KEY=your_elevenlabs_api_key ELEVENLABS_VOICE=voice_you_want_to_use ELEVENLABS_MODEL=model_you_want_to_useDISCORD_API_TOKENis your Discord bot token, which you get when you create a new bot on the Discord developer portal.ELEVENLABS_API_KEYis your Eleven Labs API key for their TTS service.ELEVENLABS_VOICEis the voice you want to use for the TTS. You will have to check the Eleven Labs API documentation for the voices they support.ELEVENLABS_MODELis the TTS model you want to use. Again, you will have to check the Eleven Labs API documentation for the supported models.Running the BotAfter setting your environment variables, you can run your bot by calling thestart_connectorfunction:start_connector()Registering Message HandlersMessage handlers are functions that are executed when certain events happen in Discord, such as receiving a message. Here's how you can register a message handler:Create the Handler FunctionFirst, you need to create a function that will be executed when a message is received. This function should take one argument, which is the message that was received. 
The message object will contain all the information about the message, such as the content of the message, the author, and the channel where it was sent.Here's an example of a simple message handler function:defhandle_message(message):print(f"Received a message from{message.author}:{message.content}")This function will simply print the author and content of every message that is received.Register the Handler FunctionTo register the handler function, you use theregister_feed_handlerfunction and pass the handler function as an argument:register_feed_handler(handle_message)After calling this function, thehandle_messagefunction will be executed every time a message is received on Discord.Public Functionssend_message(message: str, channel_id: int)This function is used to add a message to the queue. The message will be sent to the channel with the ID specified.send_message("Hello world!",1234567890)start_connector(discord_api_token: str)This function is used to start the bot and the event loop, setting the bot to listen for events on Discord.start_connector("your_discord_api_token")register_feed_handler(func: callable)This function is used to register a new function as a feed handler. 
Feed handlers are functions that process or respond to incoming data in some way.defmy_func(data):print(data)register_feed_handler(my_func)unregister_feed_handler(func: callable)This function is used to remove a function from the list of feed handlers.unregister_feed_handler(my_func)Admin Panel Usage GuideQuickstartStart the server: You can start the server with uvicorn like this:importosif__name__=="__main__":importuvicornuvicorn.run("agentcomms:start_server",host="0.0.0.0",port=int(os.getenv("PORT",8000)))This will start the server athttp://localhost:8000.Get a file: Once the server is up and running, you can retrieve file content by sending a GET request to/file/{path}endpoint, where{path}is the path to the file relative to the server's current storage directory.fromagentcommsimportget_file# Fetches the content of the file located at "./files/test.txt"file_content=get_file("test.txt")print(file_content)Save a file: Similarly, you can save content to a file by sending a POST request to/file/endpoint, with JSON data containing thepathandcontentparameters.fromagentcommsimportadd_file# Creates a file named "test.txt" in the current storage directory# and writes "Hello, world!" to it.add_file("test.txt","Hello, world!")API DocumentationAgentFS provides the following public functions:start_server(storage_path=None)Starts the FastAPI server. 
If astorage_pathis provided, it sets the storage directory to the given path.Arguments:storage_path(str, optional): The path to the storage directory.Returns:NoneExample:fromagentcommsimportstart_serverstart_server("/my/storage/directory")get_server()Returns the FastAPI application instance.Arguments:NoneReturns:FastAPI application instance.Example:fromagentcommsimportget_serverapp=get_server()set_storage_path(new_path)Sets the storage directory to the provided path.Arguments:new_path(str): The path to the new storage directory.Returns:Trueif the path was successfully set,Falseotherwise.Example:fromagentcommsimportset_storage_pathset_storage_path("/my/storage/directory")add_file(path, content)Creates a file at the specified path and writes the provided content to it.Arguments:path(str): The path to the new file.content(str): The content to be written to the file.Returns:Trueif the file was successfully created.Example:fromagentcommsimportadd_fileadd_file("test.txt","Hello, world!")remove_file(path)Removes the file at the specified path.Arguments:path(str): The path to the file to be removed.Returns:Trueif the file was successfully removed.Example:fromagentcommsimportremove_fileremove_file("test.txt")update_file(path, content)Appends the provided content to the file at the specified path.Arguments:path(str): The path to the file to be updated.content(str): The content to be appended to the file.Returns:Trueif the file was successfully updated.Example:fromagentcommsimportupdate_fileupdate_file("test.txt","New content")list_files(path='.')Lists all files in the specified directory.Arguments:path(str, optional): The path to the directory. Defaults to'.'(current directory).Returns:A list of file names in the specified directory.Example:fromagentcommsimportlist_filesfiles=list_files()list_files_formatted(path='.')Lists all files in the specified directory as a formatted string. Convenient!Arguments:path(str, optional): The path to the directory. 
Defaults to '.' (current directory). Returns: A string containing a list of file names in the specified directory. Example: from agentcomms import list_files_formatted; files = list_files_formatted() get_file(path) Returns the content of the file at the specified path. Arguments: path (str): The path to the file. Returns: A string containing the content of the file. Example: from agentcomms import get_file; content = get_file("test.txt") Contributions Welcome: If you like this library and want to contribute in any way, please feel free to submit a PR and I will review it. Please note that the goal here is simplicity and accessibility, using common language and few dependencies.
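The register/unregister pattern used by both the Twitter and Discord connectors above boils down to a list of callables that a dispatch loop walks for each incoming item. A toy sketch of that mechanism (`dispatch` here is a hypothetical stand-in for agentcomms' internal feed loop, not its real API):

```python
# Minimal sketch of the register/unregister/dispatch pattern the feed-handler
# API above describes; a hypothetical stand-in, not agentcomms internals.
_handlers = []

def register_feed_handler(func):
    """Add a handler; duplicates are ignored."""
    if func not in _handlers:
        _handlers.append(func)

def unregister_feed_handler(func):
    """Remove a handler if present."""
    if func in _handlers:
        _handlers.remove(func)

def dispatch(item):
    """Call every registered handler with the incoming item."""
    for handler in list(_handlers):
        handler(item)

seen = []
register_feed_handler(seen.append)
dispatch("new tweet")
print(seen)  # -> ['new tweet']
```

Iterating over a copy of the list in `dispatch` lets a handler unregister itself mid-dispatch without breaking the loop.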
agent-context
+ Looking for 'openagent'? Because of a little name clash, it's now called 'dotagent'. 🤖+Question:I stumbled upon this repository. Is it production ready?Answer:Kudos on discovering this hidden treasure box! 🧭 While it's fairly stable and we're battle-testing it in our own production, we'd advise a bit of caution for immediate production use. It's got its quirks, and some of them have taken a cozy spot on our'we'll-look-at-this-later'list. Jump in, play with it, or use any part of our code. It's all good with the MIT license.I'm diving in, quirks and all!Ahoy, adventurer! 🏴‍☠️ We're thrilled to have another daring coder join the fray. Here's to creating some coding magic together! ✨The Origin Tale of dotagentHere's our dream: An open and democratic AGI, untouched by the sneaky controls and hush-hush censorship of corporate overlords masquerading under 'alignment'. Remember the good ol' web days? We lost that freedom to the mobile moguls and their cheeky 30% 'because-we-said-so' tax. 🙄Our moonshot? 🚀 A harmonious ensemble of domain-specific AI agents, working in unison so well, you'd think it's AGI. Join us in opening up the LAST tech frontier for all!Meet World's first AMS!Ever heard of an Agent Management System (AMS)? No? Well, probably because we believe we came up with it! 🎩✨ dotagent proudly wears the badge of being the world's first AMS (yep, we're patting ourselves on the back here). Drawing inspiration from the nifty microservices, it equips developers with a treasure trove of tools to craft sturdy, trusty AI applications and those cool experimental autonomous agents.🧱 ModularityMultiplatform:Agents do not have to run on a single location or machine. 
Different components can run across various platforms, including the cloud, personal computers, or mobile devices. Extensible: If you know how to do something in Python or plain English, you can integrate it with dotagent. 🚧 Guardrails. Set clear boundaries: Users can precisely outline what their agent can and cannot do. This safeguard guarantees that the agent remains a dynamic, self-improving system without overstepping defined boundaries. 🏗️ Greater control with structured outputs. More effective than chaining or prompting: The prompt compiler unlocks the next level of prompt engineering, providing far greater control over LLMs than few-shot prompting or traditional chaining methods. Superpowers to prompt engineers: It gives the full power of prompt engineering, aligning with how LLMs actually process text. This understanding enables you to precisely control the output, defining the exact response structure and instructing LLMs on how to generate responses. 🏭 Powerful Prompt Compiler. The philosophy is to handle more processing at compile time and maintain a better session with LLMs. Pre-compiling prompts: By handling basic prompt processing at compile time, unnecessary redundant LLM processing is eliminated. Session state with LLM: Maintaining state with LLMs and reusing KV caches can eliminate many redundant generations and significantly speed up the process for longer and more complex prompts. (only for open-source models) Optimized tokens: The compiler can transform many output tokens into prompt token batches, which are cheaper and faster. The structure of the template can dynamically guide the probabilities of subsequent tokens, ensuring alignment with the template and optimized tokenization. (only for open-source models) Speculative sampling (WIP): You can enhance token generation speed in a large language model by using a smaller model as an assistant. The method relies on an algorithm that generates multiple tokens per transformer call using a faster draft model.
This can lead to up to a 3x speedup in token generation. 📦 Containerized & Scalable. .🤖 files: Agents can be effortlessly exported into a simple .agent or .🤖 file, allowing them to run in any environment. Agentbox (optional): Agents should be able to optimize computing resources inside a sandbox. You can use Agentbox locally or on a cloud with a simple API, with the cloud Agentbox offering additional control and safety. Installation: pip install dotagent. Common Errors. SQLite3 Version Error: If you encounter an error like "Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0.", this is a very common issue with Chroma DB. You can find instructions to resolve it in the Chroma DB tutorial. Here's the code for a full stack chat app with UI, all in a single Python file! (37 lines)

import dotagent.compiler as compiler
from dotagent.compiler._program import Log
from dotagent import memory
import chainlit as ui
from dotenv import load_dotenv

load_dotenv()

@ui.on_chat_start
def start_chat():
    compiler.llm = compiler.llms.OpenAI(model="gpt-3.5-turbo")

class ChatLog(Log):
    def append(self, entry):
        super().append(entry)
        print(entry)
        is_end = entry["type"] == "end"
        is_assistant = entry["name"] == "assistant"
        if is_end and is_assistant:
            ui.run_sync(ui.Message(content=entry["new_prefix"]).send())

memory = memory.SimpleMemory()

@ui.on_message
async def main(message: str):
    program = compiler("""
{{#system~}}
You are a helpful assistant
{{~/system}}
{{~#geneach 'conversation' stop=False}}
{{#user~}}
{{set 'this.user_text' (await 'user_text') hidden=False}}
{{~/user}}
{{#assistant~}}
{{gen 'this.ai_text' temperature=0 max_tokens=300}}
{{~/assistant}}
{{~/geneach}}
""", memory=memory)
    program(user_text=message, log=ChatLog())

The UI will look something like this:
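The {{...}} template in the chat example mixes plain variable slots with block commands like {{#geneach}}. The variable-binding half is easy to picture in isolation; here is a toy sketch of just that substitution step in stdlib Python (illustrative only, not the compiler's actual engine, which also handles blocks, streaming, and KV-cache reuse):

```python
import re

def fill_template(template, **values):
    """Replace {{name}} placeholders with provided values. A toy version of
    the compiler's variable binding; block commands are not handled."""
    def substitute(match):
        name = match.group(1)
        if name not in values:
            raise KeyError(f"unbound template variable: {name}")
        return str(values[name])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

print(fill_template("User said: {{user_text}}", user_text="hello"))  # -> User said: hello
```

The real compiler goes further: because the template fixes most of the output text, only the {{gen ...}} spans ever reach the LLM, which is where the prompt-token batching described above comes from.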