ahttptemplate
No description available on PyPI.
ahui
Introduction and usage of the library:

Brief introduction: The library (ahui) is developed and maintained by its author, Ahui (阿辉), with new versions released from time to time (check for the latest version). Its modules and functions cover many scenarios that come up in practical work; the functions are built on modules from the standard library, on calls to third-party APIs, and on secondary development of existing functions. This document lists the functions under every module of the library with key notes; for more detailed tutorials, visit the author's blogs on CSDN and Zhihu.

Basic usage:

Install the library (installs the latest version by default):
pip install ahui

Import the library or its modules:
import ahui
from ahui import pyemail, pywechat

Note: some dependencies are not in the default installation list; when an import fails, install the missing <module> indicated by the error message.

Importing the library (ahui) imports all functions under its modules by default, so functions can be called directly on the library, for example:

import ahui
ahui.Email_send(username='your_email', password='email_password', subject='test', contents='test...', receivers=['receiver_email'], accs=[], links={}, df=None, df_links={}, file_pathname=None, smtp='exmail', ssl=True)

I. Sending email from Python in various scenarios

0. Mailbox SMTP server address / port (SSL encrypted) / port (unencrypted):
Note: qq (Tencent QQ Mail), exmail (Tencent Exmail), 126 and 163 (NetEase Mail)
smtp_servers(smtp='exmail', ssl=True)

1. Send an email message:
Parameters: [file_pathname] (str/list) supports multiple attachments of any file format, with a total size of at most 50 MB; [contents] (str/list) supports a multi-line body (each list element is one line); [links] sets hyperlinks, multiple supported; [df_links] supports multiple link fields, empty by default.
Setup: [sending from QQ Mail] the sending QQ mailbox must have the SMTP service enabled: Mailbox > Settings > Account > enable (POP3/SMTP) > verify > obtain the authorization code (the authorization code is used as the mailbox login password).
Email_send(username='', password='', subject='', contents='', receivers=[], accs=[], links={}, df=None, df_links={}, file_pathname=None, smtp='exmail', ssl=True)

2. Send an email report:
Features: supports one inline image in the body (png/jpg/animated gif); supports multiple attachments (any file format).
Parameters: [image_name] accepts a local image path, an Image variable or a bytes variable; [file_pathname] (str/list) supports multiple attachments of any file format, with a total size of at most 50 MB; [contents] (str/list) supports a multi-line body (each list element is one line).
Note: an animated gif displays correctly on first open in a browser, but on mobile the email must be opened a second time (refreshed) before the animation shows.
Email_image(username='', password='', title='', contents='', image_name=None, receivers=[], accs=[], file_pathname=None, smtp='exmail', ssl=True, sign=False)

II. WeCom (Enterprise WeChat) group bot: pushing messages of various types

0. Configure the group bot key:
wechat_key(key='key1')

1.1. Push a markdown message:
Parameters: [content] accepts a string (str), a list or a DataFrame; [df_links] supports multiple link fields, empty by default.
Notes: only one person can be @-mentioned and @all is not supported; the @ target is a WeCom English name, case-insensitive; if the target does not exist, the message is still pushed.
Systems: works with Python 3 on Windows and Linux.
wechat_markdown(top='', title='', content='', user='', links={}, df_links={}, show_cols=False, key=None)

1.2. Push markdown messages (batch):
Overview: a second-stage wrapper around the wechat_markdown() function; supports @-mentioning several people and pushing to several groups.
Parameters: [user] accepts str/list; [key] accepts str/list; pass a list when there are several values.
wechat_markdowns(top='', title='', content='', user='', links={}, df_links={}, show_cols=False, key=None)

2. Push a text message:
Features: @ targets may be WeCom English names or phone numbers; '@all' and multiple mentions are supported.
Parameters: [content] accepts str/list; each list element is one line of text.
wechat_text(top='', title='', content='', users_name=[], users_phone=[], key=None)

3. @ group members:
Features: @ targets may be WeCom English names (case-insensitive) or phone numbers; '@all' and multiple mentions are supported; a nonexistent target is skipped without affecting the push.
wechat_at(users_name=[], users_phone=[], key=None, content='')

4. Push an image:
Parameters: [image] accepts a local image path, an Image variable or a bytes variable.
Notes: images may be at most 2 MB; JPG and PNG are supported. Install: PIL (pip install pillow)
wechat_image(image, key=None, users=[], content='')

5. Push a local file:
Parameters: [pathfile] requirements by type: image: at most 10 MB, JPG/PNG; voice: at most 2 MB, playback at most 60 s, AMR; video: at most 10 MB, MP4; regular file: at most 20 MB.
wechat_file(pathfile, key=None, users=[], content='')
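A minimal usage sketch based on the signatures above (the account values, file names and bot key are placeholders, not working credentials):

import ahui

# Send a plain email with a two-line body and two attachments;
# smtp='exmail' selects Tencent Exmail and ssl=True uses the SSL port.
ahui.Email_send(
    username='your_email',
    password='email_password',
    subject='weekly report',
    contents=['line one', 'line two'],           # each list element is one line
    receivers=['receiver_email'],
    file_pathname=['report.xlsx', 'chart.png'],  # total size at most 50 MB
    smtp='exmail',
    ssl=True,
)

# Configure the default WeCom group-bot key, then push a markdown message.
ahui.wechat_key(key='your_bot_key')
ahui.wechat_markdown(title='Daily job', content='all tasks finished', user='alice')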
ahui_aiohttp_server
Ahui-aiohttp-server

This is a simple async http server which extends python -m http.server. (WARN: It is not recommended for production environments; use it in development or testing environments only!)

- Supports printing to the http response directly (same as php's echo)
- Supports async-await
- Supports php, python

Install

pip install ahui-aiohttp-server
pip3 install ahui-aiohttp-server

Start server

$ tree .
./
  app/
    echo1.py
    echo2.py
    echo.php
    return.py
  js/
    test.js
$ python -m ahui_aiohttp_server
$ python -m ahui_aiohttp_server --host 127.0.0.1 --port 5000

Access server

Access via echo server (php-like):

$ cat app/echo1.py
print('Hello World!')
$ curl http://127.0.0.1:5000/app/echo1.py
Hello World!
$ curl http://127.0.0.1:5000/app/echo.php
<php output>

If you want to get request data (such as: get, post, cookie, ...), use aiohttp_handler(request) instead:

$ cat app/echo2.py
def aiohttp_handler(request):
    print(request.query)  # use print
$ curl http://127.0.0.1:5000/app/echo2.py?var=value
{'var': 'value'}

Access via normal aiohttp server:

$ cat app/return.py
from aiohttp import web
async def aiohttp_handler(request):
    data = await request.post()
    return web.Response(body=str(data))  # use return
$ curl http://127.0.0.1:5000/app/return.py?var=value -d 'k1=v2'
{'k1': 'v2'}

Access static file:

$ curl http://127.0.0.1:5000/js/test.js
<js content>

Required

aiohttp
python>=3.6
ahuiwechat
WeCom (Enterprise WeChat) group bot: pushing messages of various types

Group bot key: a default group-bot key can be configured via the wechat_key() function.

1.1. Push a markdown message:
wechat_markdown(top='', title='', content='', user='', links={}, df_links={}, show_cols=False, key=None)
Parameters: [content] accepts a string (str), a list or a DataFrame; [df_links] supports multiple link fields, empty by default.
Notes: only one person can be @-mentioned and @all is not supported; the @ target is a WeCom English name, case-insensitive; if the target does not exist, the message is still pushed.
Systems: works with Python 3 on Windows and Linux.

1.2. Push markdown messages (batch):
wechat_markdowns(top='', title='', content='', user='', links={}, df_links={}, show_cols=False, key=None)
Overview: a second-stage wrapper around the wechat_markdown() function; supports @-mentioning several people and pushing to several groups.
Parameters: [user] accepts str/list; [key] accepts str/list; pass a list when there are several values.

2.1. Push a text message:
wechat_text(top='', title='', content='', users_name=[], users_phone=[], key=None)
Features: @ targets may be WeCom English names or phone numbers; '@all' and multiple mentions are supported.
Parameters: [content] accepts str/list; each list element is one line of text.

3.1. @ group members:
wechat_at(users_name=[], users_phone=[], key=None, content='👆')
Features: @ targets may be WeCom English names (case-insensitive) or phone numbers; '@all' and multiple mentions are supported; a nonexistent target is skipped without affecting the push.

4.1. Push an image:
wechat_image(image, key=None, users=[], content='👆')
Parameters: [image] accepts a local image path, an Image variable or a bytes variable.
Notes: images may be at most 2 MB; JPG and PNG are supported.

5.1. Push a local file:
wechat_file(pathfile, key=None, users=[], content='👆')
Parameters: [pathfile] requirements by type: image: at most 10 MB, JPG/PNG; voice: at most 2 MB, playback at most 60 s, AMR; video: at most 10 MB, MP4; regular file: at most 20 MB.

5.2. Push local files (batch):
wechat_files(pathfiles, key=None, users=[], content='👆')
Overview: a second-stage wrapper around the wechat_file() function; supports pushing several files in one batch.
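A minimal sketch based on the signatures above (the import path, bot key and user names are assumptions for illustration):

from ahuiwechat import wechat_key, wechat_text, wechat_file  # assumed import path

# Set the default group-bot key once.
wechat_key(key='your_bot_key')

# Push a two-line text message, @-mentioning one member by name and everyone via '@all'.
wechat_text(
    title='Nightly ETL',
    content=['job finished', 'rows loaded: 12345'],  # each element is one line
    users_name=['alice', '@all'],
)

# Push a local file (regular files at most 20 MB).
wechat_file('report.xlsx', users=['alice'])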
ahura
A God-like Serializer for God-like Developers.

Ahura is an automatic, easy-to-use Django serializer. It comes with diverse features designed to fit all different occasions. Feel free to contribute if you detect issues or if any new ideas come to your mind.

Prerequisites
Ahura is a Django model serializer for now, so you must have Django installed and use the Django ORM.

Installing
To use Ahura you just need to install it using pip:

$ pip install ahura

or clone the project. Then import the Serializer and use it like the example below:

>>> from ahura import Serializer
>>> from myapp.models import MyModel
>>> my_objects = MyModel.objects.all()
>>> s = Serializer(exclude=["password"], date_format="%Y-%m-%d")
>>> data = s.serialize(my_objects)

Getting Started
Using Ahura is easy, as you can see above, while it gives you many different options to modify your model's JSON. Let's go through some of these options. This part is not complete, but we will add the documentation very soon.

Contributing
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

Authors
Mahdi Sorkhemiri - Initial work - Sorkhemiri
Mohammad Rabetian - Initial work - Rabetian
See also the list of contributors who participated in this project.

License
This project is licensed under the MIT License - see the LICENSE.md file for details
ahuri-cli
ahuri-cli v1.1.2

Use Ahuri from the command line!

This repository contains the source code for ahuri-cli. View the PyPI repository for this CLI app.

How to use this app?

Requirements
You need Python 3.8+ with pip on path.

Installation
On Linux / MacOS, open a terminal and type:
$ pip install ahuri-cli
On Windows, open a terminal and type:
> pip install ahuri-cli[windows]

Usage
You can use the command ahuri to use the app. Use ahuri -h to view the main help page.
NOTE: If the command ahuri is not found, it's probably not on path. If python or python3 is on path, you can use the command python -m ahuri or python3 -m ahuri to use the app.
ahvl
ahvl
Base libraries for the Ansible HashiCorp Vault Lookup (AHVL) Plugin by Netson

Contents
Introduction
Secrets
Output filters
Requirements
Installation
Storage structure
Examples (Passwords, SSH keys, SSH hostkeys, GPG keys, Credentials)
Configuration options
ahvl Lookup Password options
ahvl Lookup SSH Key options
ahvl Lookup SSH Hostkey options
ahvl Lookup GPG Key options
ahvl Lookup Credential options
ahvl Generate Password options
ahvl Generate SSH Key options
ahvl Generate SSH Hostkey options
ahvl Generate GPG Key options
ahvl Generate Salt options

Introduction

Short version
A set of lookup plugins which retrieve secrets from HashiCorp Vault, and generate them automatically if they don't exist. They support generating passwords, SSH keys, SSH hostkeys and GPG keys. Keys are generated and converted to various formats to support a wide range of applications, and all aspects of the various keys are stored in Vault and accessible. Each secret can be passed through an output filter or hash function so no further processing is necessary. Hash functions which require a salt store the salt in Vault as well, to maintain idempotence.

TL;DR
Managing secrets properly is tough. Even though there are great tools available which help you manage, rotate and audit your secrets properly, it still requires a significant effort. These Ansible lookup plugins aim to make that process easier. When managing many servers with Ansible, the number of secrets you need to manage just keeps growing, and even though Ansible provides its own Ansible Vault as a means to store these secrets in an encrypted form, I still found the process to be quite cumbersome, and I wanted to keep all my secrets out of version control entirely, encrypted or not.

My first attempt at using a different secrets store, other than Ansible Vault, was built on KeePass, which I have used successfully and happily for over 10 years now. After spending a good amount of time developing a lookup plugin for KeePass, I ran into a major issue: performance! For each lookup, the KeePass database had to be unlocked, decrypted, searched, encrypted and closed again. For a few lookups this was not an issue, but since we're talking seconds per lookup, this didn't seem like a scalable solution. Enter HashiCorp Vault. HashiCorp Vault was built specifically for the purpose of managing secrets and provides important features straight out of the box: auditing, fine-grained access control, versioning, etc.

The next major issue I have is with generating secrets: when creating user accounts, when setting up databases, when generating SSH keys (and potentially having to convert them because your end user uses PuTTY on a Windows system), when generating GPG keys to protect automated backups, etc. Generating the secrets is one thing, but then you need to store these secrets in Vault in order to be able to retrieve them in your playbooks. Again, quite a time-consuming process, not to mention the risk of generating insecure secrets due to missing parameters, insecure system defaults, poorly configured tools, or simply a lack of sleep!

Another issue I had with the manual secret-managing process is that you often need secrets in a particular format for a specific purpose. I already mentioned the SSH key that needs to be converted to the PuTTY format, but when creating a user account you may need the password hashed as a sha256crypt hash or as a particular application hash such as the MySQL41 hash. Previously I had a bunch of bash scripts to help me generate and convert all these secrets, but storing them safely was always cumbersome.

These lookup plugins attempt to solve all these issues at once! Keep all secrets out of version control, store them securely in HashiCorp Vault, and when you request a secret which does not exist (yet), it will simply be generated for you on the fly and stored in Vault! Last but not least, you can request the specific output format that you need and the plugin will return the converted secret!

Secrets
Various types of secrets are supported, separated into separate lookup plugins. For all lookup plugins sane and secure defaults have been set, but you can manipulate almost all aspects of the secret-generating process using variables to match your needs.

- Passwords: This lookup plugin returns straightforward passwords. It uses the Python library Passlib to generate secure secrets.
- SSH keys: This lookup plugin allows you to fetch every aspect of an SSH key: the private key in various formats (OpenSSH, PuTTY, SSHCOM, PKCS8), the public key in various formats (OpenSSH, PEM, PKCS8, RFC4716), but also the MD5 and SHA256 fingerprints, the key art and bubblebabble, and of course the password!
- SSH hostkeys: This lookup plugin generates a set of SSH hostkeys in 2 formats (RSA and Ed25519), plus the matching DNS entries, fingerprints and bubblebabble. Never again do you have an excuse for an out-of-date known_hosts file or for supporting the older, insecure hostkey formats.
- GPG keys: This lookup plugin allows you to generate 2 sets of keys on the fly. The 'regular' setting generates a single master key (cert only) and 3 subkeys, each with a single responsibility (sign, encr or auth). All aspects of the generated key can be retrieved, from the KeyID, fingerprint and keygrip to the expiration date, private key (armored) and of course the password. The 'backup' setting generates 2 master keys, each with a single subkey. One is used for signing only, the other for encryption only. The encryption key is signed by the signing key. Again, all aspects of each key are stored in Vault and can be retrieved.
- Credentials: Credentials are similar to passwords; however, they differ on 1 main aspect: credentials are never automatically generated. I use these to store my AWS, GCP, API keys and other sorts of external credentials which are needed by various playbooks.

Output filters
Each of the above secrets can be run through an output filter before being returned to your playbook. If you select a hashing algorithm which requires a salt, the lookup plugin will automatically generate a unique salt for you and store this salt in Vault as well, to make sure that each subsequent playbook run maintains idempotence. For each combination of secret, hostname and hashing algorithm, a unique salt is generated and stored next to the requested secret, so they can be easily found and salt reuse is limited as much as possible across hosts and services.

The following output filters are supported:
- plaintext: returns the plaintext version
- hexsha256: returns the sha256 hashed version
- hexsha512: returns the sha512 hashed version
- sha256crypt: returns the Crypt hashed and salted SHA-256 version
- sha512crypt: returns the Crypt hashed and salted SHA-512 version
- pbkdf2sha256: returns the PBKDF2 hashed and salted SHA-256 version
- pbkdf2sha512: returns the PBKDF2 hashed and salted SHA-512 version
- grubpbkdf2sha512: returns the GRUB-specific PBKDF2 hashed and salted SHA-512 version
- phpass: returns the PHPass hashed and salted version
- mysql41: returns the MySQL41 hashed version
- postgresmd5: returns the PostgreSQL hashed version (uses the 'in' parameter as username)
- argon2: returns the Argon2 hashed and salted version

Requirements
These lookup plugins depend on the following software packages:
- Ansible >= 2.7: doesn't really need an introduction here
- HashiCorp Vault >= 1.1.3: only KV storage engine version 2 is supported
- GnuPG >= 2.1.17: to generate GPG keys
- Libgcrypt >= 1.8.1: required by GnuPG
- OpenSSL >= 1.1.1: to generate SSH keys and SSH hostkeys
- putty-tools >= 0.7.2: to convert SSH keys to various formats
- Python >= 3.6: may work with other versions, but is untested

Additionally, the following Python packages are required. These should be installed automatically or be part of the default Python distribution: distutils, time, passlib, shutil, subprocess, re, os, hvac, random, packaging, argon2-cffi.

Installation
Install package:
pip install ahvl
The package will most likely be installed in /usr/local/lib/pythonX.X/dist-packages/ahvl on Ubuntu systems.
Upgrade package:
pip install --upgrade ahvl

Storage structure
These lookup plugins, by default, store different secrets at different paths in Vault. The default paths are as follows:
- password: hosts/{hostname}/{find} (regular passwords are stored per host, so having the same 'find' on different hosts will lead to different passwords)
- sshkey: sshkeys/{find} (SSH keys are usually per user, and are not usually unique per host)
- sshhostkey: hosts/{hostname}/sshhostkeys (hostkeys are, obviously, different per host, so they are stored under hosts)
- gpgkey: gpgkeys/{find} (GPG keys are usually per user, and are not usually unique per host)
- credential: credentials/{find} (credentials (AWS, API keys, etc.) are stored separately)
- salt: always at the same path as the secret (if a salt is generated it will always be stored in Vault to ensure idempotence across runs; the path for the salt will be based on the path of the secret, with the same 'in', but appended with the hostname, hashtype and the fixed string 'salt' at the end)

The option path does not need to be provided to the lookup plugin; instead it will be calculated. However, if you wish to have a different storage structure, you can simply change the base values as you see fit. You can use the variables {find} and {hostname} in your paths. Please be aware of any conflicting paths though. A specific path for salts cannot be set and will always follow the rules above. Using the default settings, you will end up with a structure that looks similar to the one below.
The lowest levels will contain the key/value combinations.

+- secret/
   +- credentials/
      +- aws/
      +- gcp/
   +- hosts/
      +- srv1.example.com/
         +- sshhostkeys/
            +- ed25519/
            +- rsa/
         +- mariadb/
   +- sshkeys/
      +- myuser/
   +- gpgkeys/
      +- someusr/

Examples

HashiCorp Vault connection

# ansible.cfg
...
# connection details for HashiVault lookup plugin
[ahvl_connection]
ahvl_url            = https://192.168.1.100:8200
ahvl_auth_method    = token
ahvl_validate_certs = True
ahvl_cacert         = /usr/local/share/ca-certificates/Netson_CA.crt

# /etc/environment
...
AHVL_CONNECTION_AHVL_TOKEN=<myvaulttoken>
PASSLIB_MAX_PASSWORD_SIZE=16384 # to prevent an error when using the hash function on (very) large passwords or keys

Passwords

---
# playbook to demonstrate ahvl_password
- hosts: localhost
  gather_facts: no
  vars:
    output_filters: [plaintext, hexsha256, hexsha512, sha256crypt, sha512crypt, phpass, mysql41, postgresmd5, pbkdf2sha256, pbkdf2sha512, argon2, grubpbkdf2sha512]
  tasks:
    # the path in vault which will be searched is [hosts/localhost/mariadb] on the default mountpoint
    - name: 'ahvl_password : get password for MariaDB account with username myuser and show all outputs'
      debug:
        msg: "{{ lookup('ahvl_password', find='mariadb', in='myuser', out=item) }}"
      loop: "{{ output_filters }}"

    - name: 'ahvl_password : get password (length=8, type=phrase, will result in 8 words) for MariaDB account with username anotheruser and output the mysql41 hash'
      debug:
        msg: "{{ lookup('ahvl_password', find='mariadb', in='anotheruser', out='mysql41', pwd_length=8, pwd_type='phrase') }}"

Additionally, passwords can be set with a value from your playbook. This can be useful to store secrets which you receive from an API or similar. For example, when creating IAM users on Amazon AWS, you could use the following example to store the returned access key in Vault:

---
# playbook to demonstrate ahvl_set_password
- hosts: localhost
  gather_facts: no
  tasks:
    # the path in vault which will be set is [hosts/localhost/awstestusr] on the default mountpoint
    - name: 'ahvl_set_password : set password for aws user, created via ansible'
      debug:
        msg: "{{ lookup('ahvl_set_password', find='awstestusr', in='myawsuser', pwd='mypasswd') }}"

SSH keys

---
# playbook to demonstrate ahvl_sshkey
- hosts: localhost
  gather_facts: no
  vars:
    ahvl_generate_sshkey:
      sshkey_pkcs8_enabled: yes # requires openssl 6.5+
      sshkey_putty_enabled: yes # requires puttygen 0.72+
    # PKCS8 and SSHCOM do not support Ed25519 keys
    sshkey_ed25519_in: [password, private, private_keybits, private_keytype, private_openssh, private_putty, public, public_rfc4716, fingerprint_sha256, fingerprint_sha256_clean, fingerprint_sha256_art, fingerprint_md5, fingerprint_md5_clean, fingerprint_md5_art, fingerprint_bubblebabble, fingerprint_bubblebabble_clean, fingerprint_putty]
    sshkey_rsa_in: [password, private, private_keybits, private_keytype, private_pkcs8, private_openssh, private_putty, private_sshcom, public, public_pem, public_pkcs8, public_rfc4716, fingerprint_sha256, fingerprint_sha256_clean, fingerprint_sha256_art, fingerprint_md5, fingerprint_md5_clean, fingerprint_md5_art, fingerprint_bubblebabble, fingerprint_bubblebabble_clean, fingerprint_putty]
  tasks:
    - name: 'ahvl_sshkey : fetch/generate SSH key of type Ed25519 and output all information pieces'
      debug:
        msg: "{{ lookup('ahvl_sshkey', find='myusername', in=item, out='plaintext') }}"
      loop: "{{ sshkey_ed25519_in }}"

    - name: 'ahvl_sshkey : rsa'
      debug:
        msg: "{{ lookup('ahvl_sshkey', find='anotherusername', sshkey_type='rsa', in=item, out='plaintext') }}"
      loop: "{{ sshkey_rsa_in }}"

SSH hostkeys

---
# playbook to demonstrate ahvl_sshhostkey
- hosts: localhost
  gather_facts: no
  vars:
    sshhostkey_ins: [private, public, fingerprint_sha256, fingerprint_sha256_clean, fingerprint_sha256_art, fingerprint_md5, fingerprint_md5_clean, fingerprint_md5_art, fingerprint_bubblebabble, fingerprint_bubblebabble_clean, dns_sha1, dns_sha1_clean, dns_sha256, dns_sha256_clean]
  tasks:
    # search path used for vault will be [hosts/localhost/sshhostkeys/rsa]
    - name: 'ahvl_sshhostkey : lookup RSA hostkey and output all pieces'
      debug:
        msg: "{{ lookup('ahvl_sshhostkey', find='rsa', in=item, out='plaintext') }}"
      loop: "{{ sshhostkey_ins }}"

    # search path used for vault will be [hosts/localhost/sshhostkeys/ed25519]
    - name: 'ahvl_sshhostkey : lookup Ed25519 hostkey and output all pieces'
      debug:
        msg: "{{ lookup('ahvl_sshhostkey', find='ed25519', in=item, out='plaintext') }}"
      loop: "{{ sshhostkey_ins }}"

    # search path used for vault will be [hosts/myhost2.local/sshhostkeys/rsa]
    - name: 'ahvl_sshhostkey : lookup RSA for another host and output all pieces'
      debug:
        msg: "{{ lookup('ahvl_sshhostkey', find='rsa', in=item, sshhostkey_type='rsa', out='plaintext', hostname='myhost2.local') }}"
      loop: "{{ sshhostkey_ins }}"

GPG keys

---
# playbook to demonstrate ahvl_gpgkey
- hosts: localhost
  gather_facts: no
  vars:
    gpgkey_regular_ins: [master_cert_pub_key_armored, master_cert_sec_key_armored, master_cert_sec_keytype, master_cert_sec_keyuid, master_cert_sec_password, master_cert_sec_fingerprint, master_cert_sec_keycurve, master_cert_sec_keygrip, master_cert_sec_keybits, master_cert_sec_creationdate, master_cert_sec_keyid, master_cert_sec_expirationdate, subkey_sign_sec_key_armored, subkey_sign_sec_fingerprint, subkey_sign_sec_keycurve, subkey_sign_sec_keygrip, subkey_sign_sec_keybits, subkey_sign_sec_creationdate, subkey_sign_sec_keyid, subkey_sign_sec_expirationdate, subkey_encr_sec_key_armored, subkey_encr_sec_fingerprint, subkey_encr_sec_keycurve, subkey_encr_sec_keygrip, subkey_encr_sec_keybits, subkey_encr_sec_creationdate, subkey_encr_sec_keyid, subkey_encr_sec_expirationdate, subkey_auth_sec_key_armored, subkey_auth_sec_fingerprint, subkey_auth_sec_keycurve, subkey_auth_sec_keygrip, subkey_auth_sec_keybits, subkey_auth_sec_creationdate, subkey_auth_sec_keyid, subkey_auth_sec_expirationdate]
    gpgkey_backup_ins: [sign_master_cert_pub_key_armored, sign_master_cert_sec_key_armored, sign_master_cert_sec_keytype, sign_master_cert_sec_keyuid, sign_master_cert_sec_password, sign_master_cert_sec_fingerprint, sign_master_cert_sec_keycurve, sign_master_cert_sec_keygrip, sign_master_cert_sec_keybits, sign_master_cert_sec_creationdate, sign_master_cert_sec_keyid, sign_master_cert_sec_expirationdate, sign_subkey_sign_sec_key_armored, sign_subkey_sign_sec_fingerprint, sign_subkey_sign_sec_keycurve, sign_subkey_sign_sec_keygrip, sign_subkey_sign_sec_keybits, sign_subkey_sign_sec_creationdate, sign_subkey_sign_sec_keyid, sign_subkey_sign_sec_expirationdate, encr_master_cert_pub_key_armored, encr_master_cert_sec_key_armored, encr_master_cert_sec_keytype, encr_master_cert_sec_keyuid, encr_master_cert_sec_password, encr_master_cert_sec_fingerprint, encr_master_cert_sec_keycurve, encr_master_cert_sec_keygrip, encr_master_cert_sec_keybits, encr_master_cert_sec_creationdate, encr_master_cert_sec_keyid, encr_master_cert_sec_expirationdate, encr_subkey_encr_sec_key_armored, encr_subkey_encr_sec_fingerprint, encr_subkey_encr_sec_keycurve, encr_subkey_encr_sec_keygrip, encr_subkey_encr_sec_keybits, encr_subkey_encr_sec_creationdate, encr_subkey_encr_sec_keyid, encr_subkey_encr_sec_expirationdate]
  tasks:
    # search path used for vault will be [gpgkeys/name_ed25519_localhost_myemail]
    - name: 'ahvl_gpgkey : fetch/generate regular ed25519 key and output all pieces'
      debug:
        msg: "{{ lookup('ahvl_gpgkey', gpgkey_fullname='name_ed25519', gpgkey_email='myemail', in=item, out='plaintext') }}"
      loop: "{{ gpgkey_regular_ins }}"

    # search path used for vault will be [gpgkeys/name_rsa_localhost_myemail]
    - name: 'ahvl_gpgkey : fetch/generate regular RSA key and output all pieces'
      debug:
        msg: "{{ lookup('ahvl_gpgkey', gpgkey_type='rsa', gpgkey_fullname='name_rsa', gpgkey_email='myemail', in=item, out='plaintext') }}"
      loop: "{{ gpgkey_regular_ins }}"

    # search path used for vault will be [gpgkeys/bckp_ed25519_localhost_myemail]
    - name: 'ahvl_gpgkey : fetch/generate backup ed25519 key and output all pieces'
      debug:
        msg: "{{ lookup('ahvl_gpgkey', gpgkey_keyset='backup', gpgkey_fullname='bckp_ed25519', gpgkey_email='myemail', gpgkey_comment=inventory_hostname, in=item, out='plaintext') }}"
      loop: "{{ gpgkey_backup_ins }}"

    # search path used for vault will be [gpgkeys/bckp_rsa_localhost_myemail]
    - name: 'ahvl_gpgkey : fetch/generate backup RSA key and output all pieces'
      debug:
        msg: "{{ lookup('ahvl_gpgkey', gpgkey_keyset='backup', gpgkey_type='rsa', gpgkey_fullname='bckp_rsa', gpgkey_email='myemail', gpgkey_comment=inventory_hostname, in=item, out='plaintext') }}"
      loop: "{{ gpgkey_backup_ins }}"

Credentials

---
# playbook to demonstrate ahvl_credential
- hosts: localhost
  gather_facts: no
  vars:
    credential_outs: [plaintext, hexsha256, hexsha512, sha256crypt, sha512crypt, phpass, mysql41, postgresmd5, pbkdf2sha256, pbkdf2sha512, argon2, grubpbkdf2sha512]
  tasks:
    # search path used for vault will be [credentials/transip]
    - name: 'ahvl_credential : find credential; will fail if it does not exist'
      debug:
        msg: "{{ lookup('ahvl_credential', find='transip', in='apikeyxyz', out=item) }}"
      loop: "{{ credential_outs }}"

Configuration Options

To give you maximum flexibility in configuring the behaviour of these lookup plugins, there are several ways you can set the option values, one taking precedence over the other. The order in which they are processed is as follows; the lowest number has the highest priority. Obviously, the variable precedence as defined in Ansible also applies. Consult the Ansible docs for more information.

1. Lookup arguments, e.g. lookup('ahvl_password', find='mysql', in='myuser', out='mysql41')
2. Environment variables, e.g. AHVL_CONNECTION_AHVL_TOKEN=http://localhost:8200
3. Prefixed variables, e.g. ahvl_connection_ahvl_url: 'http://localhost:8200'
4. Nested variables, e.g. ahvl_connection: ahvl_url: 'http://localhost:8200'
5. ansible.cfg, e.g. [ahvl_connection] ahvl_token: 'yourtoken' (only supported for AHVL connection details)
6. Defaults, hardcoded in the lookup plugin

ahvl Vault connection options

Every lookup will generate at least a single request to the HashiCorp Vault. In case a new secret has been generated, or a search path doesn't exist yet, more than one request will be made.
The following connection details can be set:
- ahvl_url (required, string, protocol://fqdn:port, default http://localhost:8200)
- ahvl_auth_method (required, string, token/userpass/ldap/approle, default token): vault authentication method
- ahvl_namespace (optional, string, default None): vault secret namespace
- ahvl_validate_certs (optional, boolean, True/False, default True): validate vault certificates; set to False if not using an https connection; if you're using self-signed certificates, provide the root certificate in ahvl_cacert instead
- ahvl_mount_point (optional, string, default secret): vault secret mount point
- ahvl_cacert (optional, path, /fullpath/to/file.crt, default None): (self-signed) certificate to verify the https connection
- ahvl_username (optional, string, default None): vault login username; required if auth_method is userpass/ldap
- ahvl_password (optional, string, default None): vault login password; required if auth_method is userpass/ldap; it is strongly recommended to only set the password using the environment variable AHVL_CONNECTION_AHVL_PASSWORD
- ahvl_role_id (optional, string, default None): vault login role id; required if auth_method is approle
- ahvl_secret_id (optional, string, default None): vault login secret id; required if auth_method is approle
- ahvl_token (optional, string, default None): vault token; required if auth_method is token; it is strongly recommended to only set the token using the environment variable AHVL_CONNECTION_AHVL_TOKEN!

ahvl General options

These options apply to all lookup plugins and can (or sometimes must) be set for each lookup. With the exception of ahvl_tmppath, these options cannot be set globally.
- hostname (required, fqdn, default inventory_hostname): the hostname can/will be used as part of the search path
- ahvl_tmppath (optional, path, default: ansible-generated tmp path): BEWARE: the tmppath WILL BE DELETED AT THE END OF EACH LOOKUP! To be safe, leave this setting empty; ansible will provide a random temporary folder which can be safely deleted.
- find (required, string, default None): the find parameter is used as part of the search path
- in (required, string, allowed values depend on the lookup plugin, default None): at the given search path, determines which key to look for
- out (required, string, plaintext/hexsha256/hexsha512/sha256crypt/sha512crypt/grubpbkdf2sha512/phpass/mysql41/postgresmd5/pbkdf2sha256/pbkdf2sha512/argon2, default hexsha512): the format in which the secret will be returned. The hex*, mysql41 and postgresmd5 formats provide a hash; the sha* and phpass functions give you a salted hash. Each hostname/secret/hash combination will have a unique salt, and the salt will also be stored in vault to make sure each subsequent playbook run will not generate a new salt and thus result in a 'changed' state. For each hash function the correct salt is determined automatically based on best practices.
- path (optional, string, may contain {find}/{hostname}, default depends on the lookup plugin): the actual search path used to find the secret in vault. If not specified, it will be determined by the lookup plugin. When setting the path directly, you can use the variables {find} and {hostname}, which will be replaced by the correct values prior to querying vault.
- autogenerate (optional, boolean, True/False, default True): whether or not to automatically generate new secrets when they could not be found in vault or when the latest version of the secret has been deleted
- renew (optional, boolean, True/False, default False): forces renewal of the secret, regardless of whether it already exists or not; will not change the behaviour of the autogenerate option. Be careful when using this, as it will be triggered for each and every lookup where this option is True, particularly in loops!
- pwd (optional, string, any string, default None): required only when using the set_password function

ahvl Lookup Password options
General options: path defaults to hosts/{hostname}/{find}
Lookup options: no additional options available; however, check the ahvl Generate Password options section as well!

ahvl Set Password options
General options: path defaults to hosts/{hostname}/{find}; pwd defaults to None

ahvl Lookup SSH Key options
General options:
- path defaults to sshkeys/{find}
- in: private/password/private_keybits/private_keytype/private_pkcs8/private_openssh/private_putty/private_sshcom/public/public_pem/public_pkcs8/public_rfc4716/fingerprint_sha256/fingerprint_sha256_clean/fingerprint_sha256_art/fingerprint_md5/fingerprint_md5_clean/fingerprint_md5_art/fingerprint_putty/fingerprint_bubblebabble/fingerprint_bubblebabble_clean
Lookup options: no additional options available; however, check the ahvl Generate SSH Key options section as well!

ahvl Lookup SSH Hostkey options
General options:
- path defaults to hosts/{hostname}/sshhostkeys/{find}
- find: ed25519/rsa
- in: private/private_keybits/private_keytype/fingerprint_sha256/fingerprint_sha256_clean/fingerprint_sha256_art/fingerprint_md5/fingerprint_md5_clean/fingerprint_md5_art/fingerprint_bubblebabble/fingerprint_bubblebabble_clean/dns_sha1/dns_sha1_clean/dns_sha256/dns_sha256_clean/public
Lookup options: no additional options available; however, check the ahvl Generate SSH Hostkey options section as well!

ahvl Lookup GPG Key options
General options:
- path defaults to gpgkeys/{find}
- find: when gpgkey_keyset=backup: ed25519/rsa
- in, when gpgkey_keyset=regular: master_cert_pub_key_armored/master_cert_sec_key_armored/master_cert_sec_keytype/master_cert_sec_keyuid/master_cert_sec_password/master_cert_sec_fingerprint/master_cert_sec_keycurve/master_cert_sec_keygrip/master_cert_sec_keybits/master_cert_sec_creationdate/master_cert_sec_keyid/master_cert_sec_expirationdate/subkey_sign_sec_key_armored/subkey_sign_sec_fingerprint/subkey_sign_sec_keycurve/subkey_sign_sec_keygrip/subkey_sign_sec_keybits/subkey_sign_sec_creationdate/subkey_sign_sec_keyid/subkey_sign_sec_expirationdate/subkey_encr_sec_key_armored/subkey_encr_sec_fingerprint/subkey_encr_sec_keycurve/subkey_encr_sec_keygrip/subkey_encr_sec_keybits/subkey_encr_sec_creationdate/subkey_encr_sec_keyid/subkey_encr_sec_expirationdate/subkey_auth_sec_key_armored/subkey_auth_sec_fingerprint/subkey_auth_sec_keycurve/subkey_auth_sec_keygrip/subkey_auth_sec_keybits/subkey_auth_sec_creationdate/subkey_auth_sec_keyid/subkey_auth_sec_expirationdate
- in, when gpgkey_keyset=backup: sign_master_cert_pub_key_armored/sign_master_cert_sec_key_armored/sign_master_cert_sec_keytype/sign_master_cert_sec_keyuid/sign_master_cert_sec_password/sign_master_cert_sec_fingerprint/sign_master_cert_sec_keycurve/sign_master_cert_sec_keygrip/sign_master_cert_sec_keybits/sign_master_cert_sec_creationdate/sign_master_cert_sec_keyid/sign_master_cert_sec_expirationdate/sign_subkey_sign_sec_key_armored/sign_subkey_sign_sec_fingerprint/sign_subkey_sign_sec_keycurve/sign_subkey_sign_sec_keygrip/sign_subkey_sign_sec_keybits/sign_subkey_sign_sec_creationdate/sign_subkey_sign_sec_keyid/sign_subkey_sign_sec_expirationdate/encr_master_cert_pub_key_armored/encr_master_cert_sec_key_armored/encr_master_cert_sec_keytype/encr_master_cert_sec_keyuid/encr_master_cert_sec_password/encr_master_cert_sec_fingerprint/encr_master_cert_sec_keycurve/encr_master_cert_sec_keygrip/encr_master_cert_sec_keybits/encr_master_cert_sec_creationdate/encr_master_cert_sec_keyid/encr_master_cert_sec_expirationdate/encr_subkey_encr_sec_key_armored/encr_subkey_encr_sec_fingerprint/encr_subkey_encr_sec_keycurve/encr_subkey_encr_sec_keygrip/encr_subkey_encr_sec_keybits/encr_subkey_encr_sec_creationdate/encr_subkey_encr_sec_keyid/encr_subkey_encr_sec_expirationdate
Lookup options:
- gpgkey_fullname (required, string): full name for key
- gpgkey_email (required, string): email for key
- gpgkey_comment (optional, string): comment for key; if not provided, defaults to hostname
- gpgkey_uid (optional, string): uid for key; if not provided, defaults to <gpgkey_fullname>_<gpgkey_comment>_<gpgkey_email>
- gpgkey_keyset (required, string, regular/backup, default regular): keyset to generate
No additional options available; however, check the ahvl Generate GPG Key options section as well!

ahvl Lookup Credential options
General options: path defaults to credentials/{find}
Lookup options: no additional options available.

ahvl Generate Password options
- pwd_type (required, string, word/phrase, default word): type of password to generate; word or phrase
- pwd_entropy (required, string, weak/fair/strong/secure, default secure): strength of password; check the passlib docs for allowed values
- pwd_length (optional, integer, default 32): length of password; if omitted, it is auto-calculated based on entropy
- pwd_chars (optional, string, default None): specific string of characters to use when generating passwords
- pwd_charset (optional, string, ascii_62/ascii_50/ascii_72/hex, default ascii_72): specific charset to use when generating passwords
- pwd_words (optional, string, default None): list of words to use when generating a passphrase
- pwd_wordset (optional, string, eff_long/eff_short/eff_prefixed/bip39, default eff_long): predefined list of words to use when generating a passphrase; check the passlib docs for allowed values
- pwd_sep (optional, string): word separator for passphrases

ahvl Generate SSH Key options
- sshkey_type (required, string, ed25519/rsa, default ed25519): type of ssh key to generate
- sshkey_bits (required, integer, default 4096): number of bits for the ssh key
- sshkey_username (optional, string, default None): ssh key username; defaults to find if not provided
- sshkey_comment (optional, string, default None): sshkey comment; defaults to username if not provided
- sshkey_bin_keygen (required, path, default None): full path to the ssh-keygen binary; attempts to find ssh-keygen if not provided
- sshkey_bin_openssl (optional, path, default None): full path to the openssl binary, for the pkcs8 key format; attempts to find openssl if not provided
- sshkey_bin_puttygen (optional, path, default None): full path to the puttygen binary; attempts to find puttygen if not provided
- sshkey_pkcs8_enabled (optional, boolean, default False): use openssl to convert keys to pkcs8-compatible keys
- sshkey_putty_enabled (optional, boolean, default False): use puttygen to convert keys to putty/sshcom-compatible keys

ahvl Generate SSH Hostkey options
- sshhostkey_type (required, string, ed25519/rsa, default None): type of keys to generate when generating hostkeys
- sshhostkey_strength (required, string, medium/strong, default strong): hostkey strength; see the gen_sshhostkey function for actual values
- sshhostkey_comment (optional, string, default None): sshhostkey comment
- sshhostkey_bin_keygen (required, path, default None): full path to the ssh-keygen binary; attempts to find ssh-keygen if not provided

ahvl Generate GPG Key options
- gpgkey_conf (optional, list, described in the gpg manpage, default ['keyid-format 0xlong', 'with-fingerprint', 'personal-cipher-preferences AES256', 'personal-digest-preferences SHA512', 'cert-digest-algo SHA512']): contains the options which will be written to gpg.conf when manipulating keys; it will always be appended with the key preferences as defined in gpgkey_pref
- gpgkey_pref (required, list, described in the gpg manpage, default ['SHA512', 'SHA384', 'SHA256', 'SHA224', 'AES256', 'AES192', 'ZLIB', 'BZIP2', 'ZIP', 'Uncompressed']): preferences regarding ciphers, digests and algorithms
- gpgkey_digest (optional, string, default SHA512): used with gpg option --digest-algo
- gpgkey_s2k_cipher (optional, string, default AES256): used with gpg option --s2k-cipher-algo
- gpgkey_s2k_digest (optional, string, default SHA512): used with gpg option --s2k-digest-algo
- gpgkey_s2k_mode (optional, integer, default 3): used with gpg option --s2k-mode
- gpgkey_s2k_count (optional, integer, default 65011712): used with gpg option --s2k-count; must be between 1024 and 65011712 inclusive
- gpgkey_fullname (required, string, default None): concatenated into a uid of the form fullname (comment) email
- gpgkey_email (required, string, default None): concatenated into the uid
- gpgkey_comment (optional, string, default None): concatenated into the uid
- gpgkey_uid (required, string, default None): the uid
- gpgkey_expirationdate (required, string, default 0): key expiration date in the format [YYYY-MM-DD], [YYYYMMDDThhmmss], or seconds=(int)
- gpgkey_bits (required, integer, default 4096): key length; only used by RSA keys; will be added to the gpgkey_algo variable for RSA keys
- gpgkey_type (required, string, ed25519/rsa, default ed25519): main key type to use for all 4 keys (master + 3 subkeys); supported are ed25519 and rsa
- gpgkey_bin (required, path, default None): full path to the gpg binary
- gpgkey_keyset (required, string, regular/backup, default regular): set of keys to generate; regular or backup (i.e. for duplicity)

ahvl Generate Salt options
- salt_chars (required, string, itoa64/alnum, default itoa64): set of characters to use when generating salts; alnum is alphanumeric, while itoa64 adds the . and / characters to the alnum set

Update ahvl package instructions
- create a working directory: mkdir /opt/ahvl && cd /opt/ahvl
- make sure twine is installed: pip install twine
- make sure your github SSH key is available and log in to github: ssh -T git@github.com
- clone the repository: git clone git://github.com/netson/ahvl
- set the remote origin: git remote set-url origin git@github.com:netson/ahvl.git
- make changes as needed
- remove any dist folder that may exist: rm -rf ./dist && rm MANIFEST
- determine the next PyPI package version number; look at https://github.com/netson/ahvl/releases
- change the version and download_url in setup.py
- commit changes to git: git add . && git commit -m "commit message"
- push to master: git push origin master
- create a new release on github with the same version number as in download_url
- create a PyPI source distribution: python setup.py sdist
- test the package upload using twine: twine upload --repository-url https://test.pypi.org/legacy/ dist/*
- verify the test results on https://test.pypi.org/manage/projects/
- upload the package to PyPI using twine: twine upload dist/*
- enter your username and password
- DONE! :-)
ai
No description available on PyPI.
ai1wm
Pack/Unpack All-in-One WP Migration Packages

This library provides helper classes for packing/unpacking WordPress All-in-One WP Migration packages.

Examples

Unpack a file:

from ai1wm import Ai1wmPackage

package = Ai1wmPackage('/path/to/the/destination/dir')
package.unpack_from('/path/to/the/source/wpress/file')

Pack a directory:

from ai1wm import Ai1wmPackage

package = Ai1wmPackage('/path/to/the/source/dir')
package.pack_to('/path/to/the/destination/wpress/file')
ai21
AI21 Labs Python SDK

Migration from v1.3.4 and below

In v2.0.0 we introduced a new SDK that is not backwards compatible with the previous version. This version allows for non-static instances of the client, defined parameters for each resource, modelized responses and more.

Migration Examples

Instance creation (not available in v1.3.4 and below):

from ai21 import AI21Client

client = AI21Client(api_key='my_api_key')
# or set api_key in the environment variable AI21_API_KEY and then
client = AI21Client()

We no longer support static methods for each resource; instead, the client instance has a method for each resource, allowing for more flexibility and better control.

Completion before/after:

prompt = "some prompt"
- import ai21
- response = ai21.Completion.execute(model="j2-light", prompt=prompt, maxTokens=2)
+ from ai21 import AI21Client
+ client = ai21.AI21Client()
+ response = client.completion.create(model="j2-light", prompt=prompt, max_tokens=2)

This applies to all resources. You now need to create a client instance and use it to call the resource method.

Tokenization and token counting before/after:

- response = ai21.Tokenization.execute(text=prompt)
- print(len(response)) # number of tokens
+ from ai21 import AI21Client
+ client = AI21Client()
+ token_count = client.count_tokens(text=prompt)

Key access in response objects before/after:

It is no longer possible to access the response object as a dictionary. Instead, you can access the response object as an object with attributes.

- import ai21
- response = ai21.Summarize.execute(source="some text", sourceType="TEXT")
- response["summary"]
+ from ai21 import AI21Client
+ from ai21.models import DocumentType
+ client = AI21Client()
+ response = client.summarize.create(source="some text", source_type=DocumentType.TEXT)
+ response.summary

AWS Client Creations

Bedrock client creation before/after:

- import ai21
- destination = ai21.BedrockDestination(model_id=ai21.BedrockModelID.J2_MID_V1)
- response = ai21.Completion.execute(prompt=prompt, maxTokens=1000, destination=destination)
+ from ai21 import AI21BedrockClient, BedrockModelID
+ client = AI21BedrockClient()
+ response = client.completion.create(prompt=prompt, max_tokens=1000, model_id=BedrockModelID.J2_MID_V1)

SageMaker client creation before/after:

- import ai21
- destination = ai21.SageMakerDestination("j2-mid-test-endpoint")
- response = ai21.Completion.execute(prompt=prompt, maxTokens=1000, destination=destination)
+ from ai21 import AI21SageMakerClient
+ client = AI21SageMakerClient(endpoint_name="j2-mid-test-endpoint")
+ response = client.completion.create(prompt=prompt, max_tokens=1000)

Installation

pip
pip install ai21

Usage

Model Types
Wherever you are required to pass a model name as a parameter, you can use any of the available AI21 state-of-the-art models:
- j2-light
- j2-mid
- j2-ultra
You can read more about the models here.

Client Instance Creation

from ai21 import AI21Client

client = AI21Client(
    # defaults to os.environ.get('AI21_API_KEY')
    api_key='my_api_key',
)

response = client.completion.create(
    prompt="<your prompt here>",
    max_tokens=10,
    model="j2-mid",
    temperature=0.3,
    top_p=1,
)
print(response.completions)
print(response.prompt)

Token Counting
By using the count_tokens method, you can estimate the billing for a given request.

from ai21 import AI21Client

client = AI21Client()
client.count_tokens(text="some text")  # returns int

File Upload

from ai21 import AI21Client

client = AI21Client()
file_id = client.library.files.create(
    file_path="path/to/file",
    path="path/to/file/in/library",
    labels=["label1", "label2"],
    public_url="www.example.com",
)
uploaded_file = client.library.files.get(file_id)

Environment Variables
You can set several environment variables to configure the client.

Logging
We use the standard library logging module. To enable logging, set the AI21_LOG_LEVEL environment variable.
$ export AI21_LOG_LEVEL=debug

Other Important Environment Variables
- AI21_API_KEY: your API key. If not set, you must pass it to the client constructor.
- AI21_API_VERSION: the API version. Defaults to v1.
- AI21_API_HOST: the API host. Defaults to https://api.ai21.com/v1/.
- AI21_TIMEOUT_SEC: the timeout for API requests.
- AI21_NUM_RETRIES: the maximum number of retries for API requests. Defaults to 3 retries.
- AI21_AWS_REGION: the AWS region to use for AWS clients. Defaults to us-east-1.

Error Handling

from ai21 import errors as ai21_errors
from ai21 import AI21Client, AI21APIError
from ai21.models import ChatMessage

client = AI21Client()

system = "You're a support engineer in a SaaS company"
messages = [
    # Notice the given role does not exist and will be the reason for the raised error
    ChatMessage(text="Hello, I need help with a signup process.", role="Non-Existent-Role"),
]

try:
    chat_completion = client.chat.create(messages=messages, model="j2-ultra", system=system)
except ai21_errors.AI21ServerError as e:
    print("Server error and could not be reached")
    print(e.details)
except ai21_errors.TooManyRequestsError as e:
    print("A 429 status code was returned. Slow down on the requests")
except AI21APIError as e:
    print("A non 200 status code error. For more error types see ai21.errors")

AWS Clients
The AI21 library provides convenient ways to interact with two AWS clients, for use with AWS SageMaker and AWS Bedrock.

Installation
pip install "ai21[AWS]"
This will make sure you have the required dependencies installed, including boto3 >= 1.28.82.

Usage

SageMaker

from ai21 import AI21SageMakerClient

client = AI21SageMakerClient(endpoint_name="j2-endpoint-name")
response = client.summarize.create(
    source="Text to summarize",
    source_type="TEXT",
)
print(response.summary)

With a Boto3 Session:

from ai21 import AI21SageMakerClient
import boto3

boto_session = boto3.Session(region_name="us-east-1")
client = AI21SageMakerClient(
    session=boto_session,
    endpoint_name="j2-endpoint-name",
)

Bedrock

from ai21 import AI21BedrockClient, BedrockModelID

client = AI21BedrockClient(region='us-east-1')  # region is optional, as you can use the env variable instead
response = client.completion.create(
    prompt="Your prompt here",
    model_id=BedrockModelID.J2_MID_V1,
    max_tokens=10,
)
print(response.completions[0].data.text)

With a Boto3 Session:

from ai21 import AI21BedrockClient, BedrockModelID
import boto3

boto_session = boto3.Session(region_name="us-east-1")
client = AI21BedrockClient(session=boto_session)
response = client.completion.create(
    prompt="Your prompt here",
    model_id=BedrockModelID.J2_MID_V1,
    max_tokens=10,
)
ai21-python-client
No description available on PyPI.
ai21-tokenizer
AI21 Labs Tokenizer

A SentencePiece-based tokenizer for production use.

Installation

pip
pip install ai21-tokenizer
poetry
poetry add ai21-tokenizer

Usage

Tokenizer Creation

from ai21_tokenizer import Tokenizer

tokenizer = Tokenizer.get_tokenizer()
# Your code here

Another way would be to use our Jurassic model directly:

from ai21_tokenizer import JurassicTokenizer

model_path = "<Path to your vocabs file. This is usually a binary file that ends with .model>"
config = {}  # dictionary object of your config.json file
tokenizer = JurassicTokenizer(model_path=model_path, config=config)

Functions

Encode and Decode
These functions allow you to encode your text to a list of token ids and back to plaintext:

text_to_encode = "apple orange banana"
encoded_text = tokenizer.encode(text_to_encode)
print(f"Encoded text: {encoded_text}")

decoded_text = tokenizer.decode(encoded_text)
print(f"Decoded text: {decoded_text}")

What if you want to convert your tokens to ids or vice versa?

tokens = tokenizer.convert_ids_to_tokens(encoded_text)
print(f"IDs correspond to tokens: {tokens}")

ids = tokenizer.convert_tokens_to_ids(tokens)

For more examples, please see our examples folder.
ai2bs
No description available on PyPI.
ai2-catwalk
Catwalk

Catwalk shows off models.

Catwalk contains a lot of models, and a lot of tasks. The goal is to be able to run all models on all tasks. In practice, some combinations are not possible, but many are.

Here is the current list of tasks we have implemented. This list does not show the `metaicl` and `p3` categories of tasks, because those are largely variants of the other tasks.

wikitext piqa squad squadshifts-reddit squadshifts-amazon squadshifts-nyt squadshifts-new-wiki mrqa::race mrqa::newsqa mrqa::triviaqa mrqa::searchqa mrqa::hotpotqa mrqa::naturalquestions mrqa::bioasq mrqa::drop mrqa::relationextraction mrqa::textbookqa mrqa::duorc.paraphraserc squad2 rte superglue::rte cola mnli mnli_mismatched mrpc qnli qqp sst wnli boolq cb copa multirc wic wsc drop lambada lambada_cloze lambada_mt_en lambada_mt_fr lambada_mt_de lambada_mt_it lambada_mt_es prost mc_taco pubmedqa sciq qa4mre_2011 qa4mre_2012 qa4mre_2013 triviaqa arc_easy arc_challenge logiqa hellaswag openbookqa race headqa_es headqa_en mathqa webqs wsc273 winogrande anli_r1 anli_r2 anli_r3 ethics_cm ethics_deontology ethics_justice ethics_utilitarianism_original ethics_utilitarianism ethics_virtue truthfulqa_gen mutual mutual_plus math_algebra math_counting_and_prob math_geometry math_intermediate_algebra math_num_theory math_prealgebra math_precalc math_asdiv arithmetic_2da arithmetic_2ds arithmetic_3da arithmetic_3ds arithmetic_4da arithmetic_4ds arithmetic_5da arithmetic_5ds arithmetic_2dm arithmetic_1dc anagrams1 anagrams2 cycle_letters random_insertion reversed_words raft::ade_corpus_v2 raft::banking_77 raft::neurips_impact_statement_risks raft::one_stop_english raft::overruling raft::semiconductor_org_types raft::systematic_review_inclusion raft::tai_safety_research raft::terms_of_service raft::tweet_eval_hate raft::twitter_complaints

Installation

Catwalk requires Python 3.9 or later.

Unfortunately Catwalk cannot be installed from pypi, because it depends on other packages that are not uploaded to pypi. Install from source:

git clone https://github.com/allenai/catwalk.git
cd catwalk
pip install -e .

Getting started

Let's run GPT2 on PIQA:

python -m catwalk --model rc::gpt2 --task piqa

This will load up GPT2 and use it to perform the PIQA task with the "ranked classification" approach.

You can specify multiple tasks at once:

python -m catwalk --model rc::gpt2 --task piqa arc_easy

It'll print you a nice table with all tasks and the metrics for each task:

arc_challenge   acc     0.22440272569656372
arc_easy        acc     0.3998316526412964
piqa            acc     0.6256800889968872

Training / Finetuning

Catwalk can train models. It can train models on a single task, or on multiple tasks at once. To train, use this command line:

python -m catwalk.train --model rc::gpt2 --task piqa

You can train on multiple tasks at the same time, if you want to create a multi-task model:

python -m catwalk.train --model rc::gpt2 --task piqa arc_easy

Note that not all models support training. If you want to train one and can't, create an issue and tag @dirkgr in it.

Tango integration

Catwalk uses Tango for caching and executing evaluations. The command line interface internally constructs a Tango step graph and executes it. You can point the command line to a Tango workspace to cache results:

python -m catwalk --model rc::gpt2 --task piqa arc_easy -w ./my-workspace/

The second time you run one of those tasks, it will be fast:

time python -m catwalk --model rc::gpt2 --task piqa -w ./my-workspace/
arc_easy        acc     0.39941078424453735
piqa            acc     0.626224160194397

________________________________________________________
Executed in    9.82 secs    fish           external
   usr time    6.51 secs  208.00 micros    6.51 secs
   sys time    1.25 secs  807.00 micros    1.25 secs

Tango workspaces also save partial results, so if you interrupt an evaluation half-way through, your progress is saved.

Team

ai2-catwalk is developed and maintained by the AllenNLP team, backed by the Allen Institute for Artificial Intelligence (AI2). AI2 is a non-profit institute with the mission to contribute to humanity through high-impact AI research and engineering. To learn more about who specifically contributed to this codebase, see our contributors page.

License

ai2-catwalk is licensed under Apache 2.0. A full copy of the license can be found on GitHub.
ai2core
No description available on PyPI.
ai2d
企联AI (Qilian AI)

DevConnectAI is positioned as a platform that helps companies build internal AI tools faster.

Product background:
- Because every company's internal management processes differ, there are many long-tail AI needs that cannot be standardized.
- Internal engineering teams have to spend a third of their time building and maintaining these internal AI tools.
- Applying low-code development to internal AI tools would greatly improve the efficiency of enterprise engineering teams.
- Since the AI needs themselves cannot be standardized, the tools for developing these long-tail AI needs are standardized instead, raising the efficiency of internal AI tool development through low code.
ai2e
企联AI (Qilian AI)

DevConnectAI is positioned as a platform that helps companies build internal AI tools faster.

Product background:
- Because every company's internal management processes differ, there are many long-tail AI needs that cannot be standardized.
- Internal engineering teams have to spend a third of their time building and maintaining these internal AI tools.
- Applying low-code development to internal AI tools would greatly improve the efficiency of enterprise engineering teams.
- Since the AI needs themselves cannot be standardized, the tools for developing these long-tail AI needs are standardized instead, raising the efficiency of internal AI tool development through low code.
ai2-kit
ai2-kit

A toolkit featuring artificial intelligence × ab initio for computational chemistry research.

Please be advised that ai2-kit is still under heavy development and you should expect things to change often. We encourage people to play and explore with ai2-kit, and stay tuned with us for more features to come.

Feature Highlights
- A collection of tools to facilitate the development of automated workflows for computational chemistry research.
- Utilities to execute and manage jobs in local or remote HPC job schedulers.
- Utilities to simplify automated workflow development with reusable components.

Installation

You can use the following command to install ai2-kit:

# for users who just use the most common features
pip install ai2-kit

# for users who want to use all features
pip install ai2-kit[all]

If you want to run ai2-kit from source, you can run the following commands in the project folder:

pip install poetry
poetry install
./ai2-kit --help

Note that instead of running the global ai2-kit command, you should run ./ai2-kit to run the command from source on Linux/MacOS, or .\ai2-kit.ps1 on Windows.

Manuals

Featuring Tools
- ai2-cat: a toolkit for dynamic catalysis research.

Workflows
- CLL MLP Training Workflow (zh)

Domain Specific Tools
- Proton Transfer Analysis Toolkit
- Amorphous Oxides Structure Analysis Toolkit

General Tools
- Tips: useful tips for using ai2-kit
- Batch Toolkit: a toolkit to generate batch scripts from files or directories
- ASE Toolkit: commands to process trajectory files with ASE
- dpdata Toolkit: commands to process system data with dpdata

Notebooks
- ai2cat

Miscellaneous
- HPC Executor Introduction (zh): a simple HPC executor for job submission and management
- Checkpoint Mechanism

Acknowledgement

This project is inspired by and built upon the following projects:
- dpgen: a concurrent learning platform for the generation of reliable deep-learning-based potential energy models.
- ase: Atomic Simulation Environment.
ai2-kubernetes-initializer
This is a simple library for writing Kubernetes initializers. Please see the project's github page for full details.
ai2-olmo
OLMo: Open Language Model

OLMo is a repository for training and using AI2's state-of-the-art open language models. It is built by scientists, for scientists.

Installation

First install PyTorch according to the instructions specific to your operating system.

To install from source (recommended for training/fine-tuning) run:

git clone https://github.com/allenai/OLMo.git
cd OLMo
pip install -e .[all]

Otherwise you can install the model code by itself directly from PyPI with:

pip install ai2-olmo

Models overview

The core models in the OLMo family released so far are (all trained on the Dolma dataset):
- OLMo 1B: 3 trillion training tokens, context length 2048
- OLMo 7B: 2.5 trillion training tokens, context length 2048
- OLMo 7B Twin 2T: 2 trillion training tokens, context length 2048

Fine-tuning

To fine-tune an OLMo model using our trainer you'll first need to prepare your dataset by tokenizing it and saving the token IDs to a flat numpy memory-mapped array. See scripts/prepare_tulu_data.py for an example with the Tulu V2 dataset, which can be easily modified for other datasets.
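For illustration, the tokenize-and-save step can be sketched as follows (a simplified sketch, not the actual prepare script; the dtype and file name are assumptions):

# Minimal sketch of the data-prep step: tokenize some documents and write the
# token IDs to a flat numpy memory-mapped array. Names here are illustrative;
# see scripts/prepare_tulu_data.py for the real implementation.
import numpy as np
import hf_olmo  # registers the OLMo Auto* classes (see the Inference section below)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")
texts = ["first training document", "second training document"]

# Flatten all token IDs into one long sequence.
all_ids = [tid for t in texts for tid in tokenizer(t)["input_ids"]]

# Write the IDs into a memory-mapped array on disk (the file data.paths points at).
arr = np.memmap("token_ids.npy", dtype=np.uint32, mode="w+", shape=(len(all_ids),))
arr[:] = np.array(all_ids, dtype=np.uint32)
arr.flush()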
Next, prepare your training config. There are many examples in the configs/ directory that you can use as a starting point. The most important thing is to make sure the model parameters (the model field in the config) match up with the checkpoint you're starting from. To be safe you can always start from the config that comes with the model checkpoint. At a minimum you'll need to make the following changes to the config or provide the corresponding overrides from the command line:
- Update load_path to point to the checkpoint you want to start from.
- Set reset_trainer_state to true.
- Update data.paths to point to the token_ids.npy file you generated.
- Optionally update data.label_mask_paths to point to the label_mask.npy file you generated, unless you don't need special masking for the loss.
- Update evaluators to add/remove in-loop evaluations.

Once you're satisfied with your training config, you can launch the training job via torchrun. For example:

torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
  --data.paths=[{path_to_data}/input_ids.npy] \
  --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
  --load_path={path_to_checkpoint} \
  --reset_trainer_state

Note: passing CLI overrides like --reset_trainer_state is only necessary if you didn't update those fields in your config.

Inference

You can utilize our HuggingFace integration to run inference on the OLMo checkpoints:

from hf_olmo import *  # registers the Auto* classes
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")

message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])

Alternatively, with the huggingface pipeline abstraction:

from transformers import pipeline

olmo_pipe = pipeline("text-generation", model="allenai/OLMo-7B")
print(olmo_pipe("Language modeling is"))

Inference on finetuned checkpoints

If you finetune the model using the code above, you can use the conversion script to convert a native OLMo checkpoint to a HuggingFace-compatible checkpoint:

python hf_olmo/convert_olmo_to_hf.py --checkpoint-dir /path/to/checkpoint

Quantization

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", torch_dtype=torch.float16, load_in_8bit=True)  # requires bitsandbytes

The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as inputs.input_ids.to('cuda') to avoid potential issues.

Evaluation

Additional tools for evaluating OLMo models are available at the OLMo Eval repo.
ai2op
No description available on PyPI.
ai2opt
Use pdfo instead.
ai2py
# AI2PY

AI2PY is an OpenAI-powered Python module created to help developers easily integrate artificial intelligence into their Python projects. AI2PY provides a comprehensive set of tools for creating AI-generated text, images, and image variations. An OpenAI API key is required.

The module contains functions to create text, create text commands, create images, and create image variations. The init function initializes the class with an api_key. The create_text function takes in data and a word count and uses the OpenAI API to generate text based on the data. The create_text_command function takes in a command, data, and a word count and uses the OpenAI API to generate text based on the command and data. The create_image function takes in prompt text and an image size and uses the OpenAI API to generate an image based on the prompt text. The create_image_variation function takes in data and an image size and uses the OpenAI API to generate an image variation based on the data. The options class contains two functions: display_imagesizes and image_resize. The display_imagesizes function prints out the required image sizes for the OpenAI API. The image_resize function takes in an image and a size and resizes the image to the specified size.

In addition, AI2PY also provides a set of pre-trained models that can be used for various tasks such as natural language processing, computer vision, and more. These models can be used to quickly build AI applications without having to train your own models from scratch.

If you have any questions, feel free to ask them in the GitHub Discussions tab. If you need help with a project or want to show off a project made with AI2PY, you can also do so there. If you find a bug, please report it in the GitHub Issues tab and provide the complete error message and minimal example code that reproduces the error. If you have ideas for improvements or requests for new functions, you can also report these in the Issues tab, but please check for previous requests to prevent duplicates.
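The package's own page shows no code, but based on the functions described above a session might look roughly like this (the module, class, and argument names are assumptions, not taken from AI2PY's documentation):

# Hypothetical sketch only -- AI2PY's real API may differ.
import ai2py  # assumed import name

ai = ai2py.AI2PY(api_key="YOUR_OPENAI_API_KEY")              # init takes an api_key
text = ai.create_text("facts about honeybees", 50)           # data + word count
summary = ai.create_text_command("summarize", text, 25)      # command + data + word count
image = ai.create_image("a beehive at sunrise", "512x512")   # prompt text + image size
ai2py.options.display_imagesizes()                           # prints supported sizes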
ai2table
No description available on PyPI.
ai2tablecore
No description available on PyPI.
ai2-tango
AI2 Tango replaces messy directories and spreadsheets full of file versions by organizing experiments into discrete steps that can be cached and reused throughout the lifetime of a research project.Quick linksDocumentationPyPI PackageContributingLicenseIn this READMEQuick startInstallationInstalling with PIPInstalling with CondaInstalling from sourceChecking your installationDocker imageFAQTeamLicenseQuick startCreate a Tango step:# hello.pyfromtangoimportstep@step()defhello(name:str)->str:message=f"Hello,{name}!"print(message)returnmessageAnd create a corresponding experiment configuration file:// hello.jsonnet{steps:{hello:{type:"hello",name:"World",}}}Then run the experiment using a local workspace to cache the result:tangorunhello.jsonnet-w/tmp/workspaceYou'll see something like this in the output:Starting new run expert-llama ● Starting step "hello"... Hello, World! ✓ Finished step "hello" ✓ Finished run expert-llamaIf you run this a second time the output will now look like this:Starting new run open-crab ✓ Found output for step "hello" in cache... ✓ Finished run open-crabYou won't see "Hello, World!" this time because the result of the step was found in the cache, so it wasn't run again.For a more detailed introduction check out theFirst Stepswalk-through.Installationai2-tangorequires Python 3.8 or later.Installing withpipai2-tangois availableon PyPI. Just runpipinstallai2-tangoTo install with a specific integration, such astorchfor example, runpipinstall'ai2-tango[torch]'To install with all integrations, runpipinstall'ai2-tango[all]'Installing withcondaai2-tangois available on conda-forge. You can install just the base package withcondainstalltango-cconda-forgeYou can pick and choose from the integrations with one of these:condainstalltango-datasets-cconda-forge condainstalltango-torch-cconda-forge condainstalltango-wandb-cconda-forgeYou can also install everything:condainstalltango-all-cconda-forgeEven thoughai2-tangoitself is quite small, installing everything will pull in a lot of dependencies. Don't be surprised if this takes a while!Installing from sourceTo installai2-tangofrom source, first clonethe repository:gitclonehttps://github.com/allenai/tango.gitcdtangoThen runpipinstall-e'.[all]'To install with only a specific integration, such astorchfor example, runpipinstall-e'.[torch]'Or to install just the base tango library, you can runpipinstall-e.Checking your installationRuntangoinfoto check your installation.Docker imageYou can build a Docker image suitable for tango projects by usingthe official Dockerfileas a starting point for your own Dockerfile, or you can simply use one of ourprebuilt imagesas a base image in your Dockerfile. For example:# Start from a prebuilt tango base image.# You can choose the right tag from the available options here:# https://github.com/allenai/tango/pkgs/container/tango/versionsFROMghcr.io/allenai/tango:cuda11.3# Install your project's additional requirements.COPYrequirements.txt.RUN/opt/conda/bin/pipinstall--no-cache-dir-rrequirements.txt# Install source code.# This instruction copies EVERYTHING in the current directory (build context),# which may not be what you want. Consider using a ".dockerignore" file to# exclude files and directories that you don't want on the image.COPY..Make sure to choose the right base image for your use case depending on the version of tango you're using and the CUDA version that your host machine supports. 
You can see a list of all available image tagson GitHub.FAQWhy is the library named Tango?The motivation behind this library is that we can make research easier by composing it into well-defined steps. What happens when you choreograph a number of steps together? Well, you get a dance. And since ourteam's leaderis part of a tango band, "AI2 Tango" was an obvious choice!How can I debug my steps through the Tango CLI?You can run thetangocommand throughpdb. For example:python-mpdb-mtangorunconfig.jsonnetHow is Tango different fromMetaflow,Airflow, orredun?We've found that existing DAG execution engines like these tools are great for production workflows but not as well suited for messy, collaborative research projects where code is changing constantly. AI2 Tango was builtspecificallyfor these kinds of research projects.How does Tango's caching mechanism work?AI2 Tango caches the results of steps based on theunique_idof the step. Theunique_idis essentially a hash of all of the inputs to the step along with:the step class's fully qualified name, andthe step class'sVERSIONclass variable (an arbitrary string).Unlike other workflow engines likeredun, Tango doesnottake into account the source code of the class itself (other than its fully qualified name) because we've found that using a hash of the source code bytes is way too sensitive and less transparent for users. When you change the source code of your step in a meaningful way you can just manually change theVERSIONclass variable to indicate to Tango that the step has been updated.Teamai2-tangois developed and maintained by the AllenNLP team, backed bythe Allen Institute for Artificial Intelligence (AI2). AI2 is a non-profit institute with the mission to contribute to humanity through high-impact AI research and engineering. To learn more about who specifically contributed to this codebase, seeour contributorspage.Licenseai2-tangois licensed underApache 2.0. A full copy of the license can be foundon GitHub.
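A rough sketch of what the caching contract described in the FAQ above means in code (assuming the class-based Step API; the step itself is illustrative):

from tango import Step

@Step.register("add_numbers")
class AddNumbers(Step):
    # Bump VERSION whenever the step's logic changes meaningfully, so that
    # results cached under the old logic are not silently reused.
    VERSION = "002"

    def run(self, a: int, b: int) -> int:
        # The inputs (a, b), the fully qualified class name, and VERSION
        # are hashed together into the step's unique_id.
        return a + b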
ai2thor
A Near Photo-Realistic Interactable Framework for Embodied AI Agents🏡 EnvironmentsiTHORManipulaTHORRoboTHORA high-level interaction framework that facilitates research in embodied common sense reasoning.A mid-level interaction framework that facilitates visual manipulation of objects using a robotic arm.A framework that facilitates Sim2Real research with a collection of simlated scene counterparts in the physical world.🌍 Features🏡 Scenes.200+ custom built high-quality scenes. The scenes can be explored on ourdemopage. We are working on rapidly expanding the number of available scenes and domain randomization within each scene.🪑 Objects.2600+ custom designed household objects across 100+ object types. Each object is heavily annotated, which allows for near-realistic physics interaction.🤖 Agent Types.Multi-agent support, a custom built LoCoBot agent, a Kinova 3 inspired robotic manipulation agent, and a drone agent.🦾 Actions.200+ actions that facilitate research in a wide range of interaction and navigation based embodied AI tasks.🖼 Images.First-class support for many image modalities and camera adjustments. Some modalities include ego-centric RGB images, instance segmentation, semantic segmentation, depth frames, normals frames, top-down frames, orthographic projections, and third-person camera frames. User's can also easily change camera properties, such as the size of the images and field of view.🗺 Metadata.After each step in the environment, there is a large amount of sensory data available about the state of the environment. This information can be used to build highly complex custom reward functions.📰 Latest AnnouncementsDateAnnouncement5/2021RandomizeMaterialsis now supported! It enables a massive amount of realistic looking domain randomization within each scene. Try it out on thedemo4/2021We are excited to releaseManipulaTHOR, an environment within the AI2-THOR framework that facilitates visual manipulation of objects using a robotic arm. Please see thefull 3.0.0 release notes here.4/2021RandomizeLightingis now supported! It includes many tunable parameters to allow for vast control over its effects. Try it out on thedemo!2/2021We are excited to host theAI2-THOR Rearrangement Challenge,RoboTHOR ObjectNav Challenge, andALFRED Challenge, held in conjunction with theEmbodied AI Workshopat CVPR 2021.2/2021AI2-THOR v2.7.0 announces several massive speedups to AI2-THOR! Read more about ithere.6/2020We've released🐳 AI2-THOR Dockera mini-framework to simplify running AI2-THOR in Docker.4/2020Version 2.4.0 update of the framework is here. All sim objects that aren't explicitly part of the environmental structure are now moveable with physics interactions. New object types have been added, and many new actions have been added. Please see thefull 2.4.0 release notes here.2/2020AI2-THOR now includes two frameworks:iTHORandRoboTHOR. iTHOR includes interactive objects and scenes and RoboTHOR consists of simulated scenes and their corresponding real world counterparts.9/2019Version 2.1.0 update of the framework has been added. New object types have been added. New Initialization actions have been added. Segmentation image generation has been improved in all scenes.6/2019Version 2.0 update of the AI2-THOR framework is now live! We have over quadrupled our action and object states, adding new actions that allow visually distinct state changes such as broken screens on electronics, shattered windows, breakable dishware, liquid fillable containers, cleanable dishware, messy and made beds and more! 
Along with these new state changes, objects have more physical properties like Temperature, Mass, and Salient Materials that are all reported back in object metadata. To combine all of these new properties and actions, new context sensitive interactions can now automatically change object states. This includes interactions like placing a dirty bowl under running sink water to clean it, placing a mug in a coffee machine to automatically fill it with coffee, putting out a lit candle by placing it in water, or placing an object over an active stove burner or in the fridge to change its temperature. Please see thefull 2.0 release notes hereto view details on all the changes and new features.💻 InstallationWith Google ColabAI2-THOR Colabcan be used to run AI2-THOR freely in the cloud with Google Colab. Running AI2-THOR in Google Colab makes it extremely easy to explore functionality without having to set AI2-THOR up locally.With pippipinstallai2thorWith condacondainstall-cconda-forgeai2thorWith Docker🐳 AI2-THOR Dockercan be used, which adds the configuration for running a X server to be used by Unity 3D to render scenes.Minimal ExampleOnce you've installed AI2-THOR, you can verify that everything is working correctly by running the following minimal example:fromai2thor.controllerimportControllercontroller=Controller(scene="FloorPlan10")event=controller.step(action="RotateRight")metadata=event.metadataprint(event,event.metadata.keys())RequirementsComponentRequirementOSMac OS X 10.9+, Ubuntu 14.04+Graphics CardDX9 (shader model 3.0) or DX11 with feature level 9.3 capabilities.CPUSSE2 instruction set support.PythonVersions 3.5+LinuxX server with GLX module enabled💬 SupportQuestions.If you have any questions on AI2-THOR, please ask them on ourGitHub Discussions Page.Issues.If you encounter any issues while using AI2-THOR, please open anIssue on GitHub.🏫 Learn moreSectionDescriptionDemoInteract and play with AI2-THOR live in the browser.iTHOR DocumentationDocumentation for the iTHOR environment.ManipulaTHOR DocumentationDocumentation for the ManipulaTHOR environment.RoboTHOR DocumentationDocumentation for the RoboTHOR environment.AI2-THOR ColabA way to run AI2-THOR freely on the cloud using Google Colab.AllenActAn Embodied AI Framework build at AI2 that provides first-class support for AI2-THOR.AI2-THOR Unity DevelopmentA (sparse) collection of notes that may be useful if editing on the AI2-THOR backend.AI2-THOR WebGL DevelopmentDocumentation on packaging AI2-THOR for the web, which might be useful for annotation based tasks.📒 CitationIf you use AI2-THOR or iTHOR scenes, please cite the original AI2-THOR paper:@article{ai2thor,author={Eric Kolve and Roozbeh Mottaghi and Winson Han andEli VanderBilt and Luca Weihs and Alvaro Herrasti andDaniel Gordon and Yuke Zhu and Abhinav Gupta andAli Farhadi},title={{AI2-THOR: An Interactive 3D Environment for Visual AI}},journal={arXiv},year={2017}}If you use 🏘️ ProcTHOR or procedurally generated scenes, please cite the following paper:@inproceedings{procthor,author={Matt Deitke and Eli VanderBilt and Alvaro Herrasti andLuca Weihs and Jordi Salvador and Kiana Ehsani andWinson Han and Eric Kolve and Ali Farhadi andAniruddha Kembhavi and Roozbeh Mottaghi},title={{ProcTHOR: Large-Scale Embodied AI Using Procedural Generation}},booktitle={NeurIPS},year={2022},note={Outstanding Paper Award}}If you use ManipulaTHOR agent, please cite the following paper:@inproceedings{manipulathor,title={{ManipulaTHOR: A Framework for Visual Object Manipulation}},author={Kiana Ehsani 
and Winson Han and Alvaro Herrasti andEli VanderBilt and Luca Weihs and Eric Kolve andAniruddha Kembhavi and Roozbeh Mottaghi},booktitle={CVPR},year={2021}}If you use RoboTHOR scenes, please cite the following paper:@inproceedings{robothor,author={Matt Deitke and Winson Han and Alvaro Herrasti andAniruddha Kembhavi and Eric Kolve and Roozbeh Mottaghi andJordi Salvador and Dustin Schwenk and Eli VanderBilt andMatthew Wallingford and Luca Weihs and Mark Yatskar andAli Farhadi},title={{RoboTHOR: An Open Simulation-to-Real Embodied AI Platform}},booktitle={CVPR},year={2020}}👋 Our TeamAI2-THOR is an open-source project built by thePRIOR teamat theAllen Institute for AI(AI2). AI2 is a non-profit institute with the mission to contribute to humanity through high-impact AI research and engineering.
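Since the metadata returned after each step is a plain dictionary, it is easy to turn into a custom reward signal, as the Features section above suggests. A minimal sketch (the reward values are arbitrary; lastActionSuccess is one of the standard metadata fields):

from ai2thor.controller import Controller

controller = Controller(scene="FloorPlan10")
event = controller.step(action="MoveAhead")

# Illustrative reward: penalize failed actions, stay neutral otherwise.
reward = 0.0 if event.metadata["lastActionSuccess"] else -0.1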
ai2thor-colab
Run AI2-THOR on the Cloud using Google Colab💡 Templates💪 Full Starter TemplateTo get started, we recommend saving a copy of theAI2-THOR Colab Full Starter Templateto your drive. It goes over many helper functions that are often useful.https://user-images.githubusercontent.com/28768645/119420726-06d8bc80-bcb2-11eb-9acf-e9b151121506.mp4👑 Minimal Starter TemplateWe also provide aMinimal Starter Templatethat does not showcase any helper functions. This is often useful as a starting point to minimally reproduce issues, highlight, or test functionality.🐱‍💻 Setup Overview💻 InstallationUsing Python's packaging manager,ai2thor_colabcan be installed withpipinstallai2thor_colab🔥 Start X ServerAI2-THOR requires an X Server to run on a Linux machine. It allows us to open a Unity window where we can render scenes and observe images. Colab runs Linux, but it does not start an X Server by default. Usingai2thor_colab.start_xserver(), we can install all required X Server dependencies and start it up:importai2thor_colabai2thor_colab.start_xserver()💬 SupportQuestions.If you have any questions on AI2-THOR, please ask them onAI2-THOR's GitHub Discussions Page.Issues.If you encounter any issues while using AI2-THOR, please open anIssue on AI2-THOR's GitHub. If you encounter an issue with AI2-THOR Colab, please open anIssue on our GitHub🏫 Learn moreSectionDescriptionAI2-THOR WebsiteThe AI2-THOR website, which contains extensive documentation on using the API.AI2-THOR GitHubContains the source code and development of AI2-THOR.AI2-THOR DemoInteract and play with AI2-THOR live in the browser.👋 Our TeamAI2-THOR and AI2-THOR Colab are open-source projects built by thePRIOR teamat theAllen Institute for AI(AI2). AI2 is a non-profit institute with the mission to contribute to humanity through high-impact AI research and engineering.
ai2xl
ai2xl is a free Python library that allows using Excel for ML data preparation, zero-dependency ML model deployment, model debugging and model explainability. Data Scientists can use ai2xl to make their own lives easier, and also to collaborate with Data Analysts, Business Analysts and more. There are an estimated 1 billion+ Excel, Google Sheets, LibreOffice or similar spreadsheet users in the world. ai2xl aims to bridge the gap between the Data Scientist / Python world and the billion+ Business / Excel users.

You can read more about ai2xl here: https://ai2xl.github.io
For getting started and other information: https://ai2xl.readthedocs.io
Sample Jupyter notebooks: https://ai2xl.github.io/samples

Enjoy!
ai2xlcore
ai2xl is a free Python library that allows using Excel for ML data preparation, zero-dependency ML model deployment, model debugging and model explainability. Data Scientists can use ai2xl to make their own lives easier, and also to collaborate with Data Analysts, Business Analysts and more. There are an estimated 1 billion+ Excel, Google Sheets, LibreOffice or similar spreadsheet users in the world. ai2xl aims to bridge the gap between the Data Scientist / Python world and the billion+ Business / Excel users.

You can read more about ai2xl here: https://ai2xl.github.io
For getting started and other information: https://ai2xl.readthedocs.io
Sample Jupyter notebooks: https://ai2xl.github.io/samples

Enjoy!
ai3bs
No description available on PyPI.
ai4ao
AI for Anomaly and Outlier detection (AI4AO)AI4AO is a Python package that allows to build any of thescikit-learnsupported Clustering and Classification algorithms based machine learning models in batches. This means that one can useyamldeclarative syntax in order to write a configuration file, and based on the instructions in the configuration file, and the machine learning models will be constructed sequentially. This way many models can be built with a single configuration file with the results being arranged in an extremely modular way. AI4AO can be considered as a convenient wrapper for scikit-learn models.Developed by:Selvakumar UlaganathanWebsite:www.selvai.comPyPI Project:https://pypi.org/project/ai4aoGitLab Source:https://gitlab.com/selvai/ai4aoOfficial Documentation:https://selvai.gitlab.io/ai4ao/UsageDefine a configuration inconfig.yaml# config.yamlIsolationForest_0.01:project_name:timeseries_anomalyrun_this_project:Truemulti_variate_model:Truemodel:IsolationForestdata:path:'path-to-train-data.csv'test_data_path:'path-to-train-data.csv'features_to_avoid:['feat-to-avoid']hyperparams:contamination:0.01results:path:'results/isolation_forest_001/'remote_run:FalseRun the model defined inconfig.yaml# example_script.pyimportai4ao# import packagefromai4ao.modelsimportSKLearnModelasModel# scikit-learn wrapper# fit and evaluate modelmodel=Model(plot_results=True)model.batch_fit(path_config='configs.yaml')# print models and metricsprint(model.models)print(model.metrics())
ai4bharat-transliteration
AI4Bharat Indic-TransliterationAn AI-based transliteration engine for 21 major languages of the Indian subcontinent.This package provides support for:Python Library for transliteration from Roman to Native scriptHTTP API server that can be hosted for interaction with web applicationsAboutThis library is based on ourresearch workcalledIndic-Xlitto build tools that can translit text between Indic languages and colloquially-typed content (in English alphabet). We support both Roman-to-Native back-transliteration (English script to Indic language conversion), as well as Native-to-Roman transliteration (Indic to English alphabet conversion).An online demo is available here:https://xlit.ai4bharat.orgLanguages SupportedISO 639 codeLanguageasAssamese - অসমীয়াbnBangla - বাংলাbrxBoro - बड़ोguGujarati - ગુજરાતીhiHindi - हिंदीknKannada - ಕನ್ನಡksKashmiri - كٲشُرgomKonkani Goan - कोंकणीmaiMaithili - मैथिलीmlMalayalam - മലയാളംmniManipuri - ꯃꯤꯇꯩꯂꯣꯟmrMarathi - मराठीneNepali - नेपालीorOriya - ଓଡ଼ିଆpaPanjabi - ਪੰਜਾਬੀsaSanskrit - संस्कृतम्sdSindhi - سنڌيsiSinhala - සිංහලtaTamil - தமிழ்teTelugu - తెలుగుurUrdu - اُردُوUsagePython LibraryImport the wrapper for transliteration engine by:fromai4bharat.transliterationimportXlitEngineExample 1: Using word Transliteratione=XlitEngine("hi",beam_width=10,rescore=True)out=e.translit_word("namasthe",topk=5)print(out)# output: {'hi': ['नमस्ते', 'नमस्थे', 'नामस्थे', 'नमास्थे', 'नमस्थें']}Arguments:beam_widthincreases search size, resulting in improved accuracy but increases time/compute. (Default:4)topkreturns only specified number of top results. (Default:4)rescorereturns the reranked suggestions after using a dictionary. (Default:True)Romanization:By default,XlitEnginewill load English-to-Indic model (default:src_script_type="roman")To load Indic-to-English model, usesrc_script_type="indic"For example: (also applicable for all other examples below)e=XlitEngine(src_script_type="indic",beam_width=10,rescore=False)out=e.translit_word("नमस्ते",lang_code="hi",topk=5)print(out)# output: ['namaste', 'namastey', 'namasthe', 'namastay', 'namste']Example 2: word Transliteration without rescoringe=XlitEngine("hi",beam_width=10,rescore=False)out=e.translit_word("namasthe",topk=5)print(out)# output: {'hi': ['नमस्थे', 'नामस्थे', 'नमास्थे', 'नमस्थें', 'नमस्ते']}Example 3: Using Sentence Transliteratione=XlitEngine("ta",beam_width=10)out=e.translit_sentence("vanakkam ulagam")print(out)# output: {'ta': 'வணக்கம் உலகம்'}Note:Only single top most prediction is returned for each word in sentence.Example 4: Using Multiple language Transliteratione=XlitEngine(["ta","ml"],beam_width=6)# leave empty or use "all" to load all available languages# e = XlitEngine("all)out=e.translit_word("amma",topk=3)print(out)# output: {'ml': ['അമ്മ', 'എമ്മ', 'അമ'], 'ta': ['அம்மா', 'அம்ம', 'அம்மை']}out=e.translit_sentence("vandhe maatharam")print(out)# output: {'ml': 'വന്ധേ മാതരം', 'ta': 'வந்தே மாதரம்'}## Specify language name to get only specific language resultout=e.translit_word("amma",lang_code="ml",topk=5)print(out)# output: ['അമ്മ', 'എമ്മ', 'അമ', 'എഎമ്മ', 'അഎമ്മ']Example 5: Transliteration for all available languagese=XlitEngine(beam_width=10)out=e.translit_sentence("namaskaar bharat")print(out)# sample output: {'bn': 'নমস্কার ভারত', 'gu': 'નમસ્કાર ભારત', 'hi': 'नमस्कार भारत', 'kn': 'ನಮಸ್ಕಾರ್ ಭಾರತ್', 'ml': 'നമസ്കാർ ഭാരത്', 'pa': 'ਨਮਸਕਾਰ ਭਾਰਤ', 'si': 'නමස්කාර් භාරත්', 'ta': 'நமஸ்கார் பாரத்', 'te': 'నమస్కార్ భారత్', 'ur': 'نمسکار بھارت'}Web API ServerRunning a flask server using a 3-line 
script:

from ai4bharat.transliteration import xlit_server

app, engine = xlit_server.get_app()
app.run(host='0.0.0.0', port=8000)

Then on a browser (or curl), use a link of the form http://{IP-address}:{port}/tl/{lang-id}/{word_in_eng_script}

Example:

http://localhost:8000/tl/ta/amma
http://localhost:8000/languages

Debugging errors

If you face any of the following errors:

ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
ValueError: Please build (or rebuild) Cython components with python setup.py build_ext --inplace.

Run:

pip install --upgrade numpy

Release Notes

This package contains applications built around the transliteration engine. The contents of this package can also be downloaded from our GitHub repo. All the NN models of Indic-Xlit are released under the MIT License.
ai4drones
No description available on PyPI.
ai4f
AI 4 FREEBy using this repository or any code related to it, you agree to thelegal notice. The author is not responsible for any copies, forks, or reuploads made by other users. This is the author's only account and repository. To prevent impersonation or irresponsible actions, you may comply with the GNU GPL license this Repository uses.This (quite censored) New Version of gpt4free, was just released, it may contain bugs, open an issue or contribute a PR when encountering one, some features were disabled. Docker is for now not available but I would be happy if someone contributes a PR. The g4f GUI will be uploaded soon enough.Newpypi package:pip install -U ai4fTable of Contents:Getting StartedPrerequisitesSetting up the projectUsageTheg4fPackageinterference openai-proxy apiModelsgpt-3.5 / gpt-4Other ModelsRelated gpt4free projectsContributeChatGPT cloneCopyrightCopyright NoticeStar HistoryGetting StartedPrerequisites:Download and install Python(Version 3.x is recommended).Setting up the project:Install using pypipip install -U ai4forClone the GitHub repository:git clone https://github.com/hansfzlorenzana/AI-4-Free.gitNavigate to the project directory:cd AI-4-Free(Recommended) Create a virtual environment to manage Python packages for your project:python3 -m venv venvActivate the virtual environment:On Windows:.\venv\Scripts\activateOn macOS and Linux:source venv/bin/activateInstall the required Python packages fromrequirements.txt:pip install -r requirements.txtCreate atest.pyfile in the root folder and start using the repo, further Instructions are belowimportai4f...UsageTheai4fPackageimportai4fprint(ai4f.Provider.Ails.params)# supported args# Automatic selection of provider# streamed completionresponse=ai4f.ChatCompletion.create(model='gpt-3.5-turbo',messages=[{"role":"user","content":"Hello world"}],stream=True)formessageinresponse:print(message)# normal responseresponse=ai4f.ChatCompletion.create(model=ai4f.models.gpt_4,messages=[{"role":"user","content":"hi"}])# alterative model settingprint(response)# Set with providerresponse=ai4f.ChatCompletion.create(model='gpt-3.5-turbo',provider=ai4f.Provider.Forefront,messages=[{"role":"user","content":"Hello world"}],stream=True)formessageinresponse:print(message)providers:fromai4f.Providerimport(Ails,You,Bing,Yqcloud,Theb,Aichat,Bard,Vercel,Forefront,Lockchat,Liaobots,H2o,ChatgptLogin,DeepAi,GetGpt,AItianhu,EasyChat,Acytoo,DfeHub,AiService,BingHuan,Wewordle,ChatgptAi,opchatgpts,Poe,)# usage:response=ai4f.ChatCompletion.create(...,provider=ProviderName)interference openai-proxy api (use with openai python package)run server:python3-minterference.appimportopenaiopenai.api_key=''openai.api_base='http://127.0.0.1:1337'chat_completion=openai.ChatCompletion.create(stream=True,model='gpt-3.5-turbo',messages=[{'role':'user','content':'write a poem about a tree'}])#print(chat_completion.choices[0].message.content)fortokeninchat_completion:content=token['choices'][0]['delta'].get('content')ifcontent!=None:print(content)Modelsgpt-3.5 / 
gpt-4WebsiteProvidergpt-3.5gpt-4othersStreamStatusAuthai.lsg4f.Provider.Ails✔️❌❌✔️❌you.comg4f.Provider.You✔️❌❌✔️❌bing.comg4f.Provider.Bing❌✔️❌✔️❌chat9.yqcloud.topg4f.Provider.Yqcloud✔️❌❌✔️❌theb.aig4f.Provider.Theb✔️❌❌✔️❌chat-gpt.orgg4f.Provider.Aichat✔️❌❌❌❌bard.google.comg4f.Provider.Bard❌❌✔️❌✔️play.vercel.aig4f.Provider.Vercel✔️❌✔️✔️❌forefront.comg4f.Provider.Forefront✔️❌❌✔️❌supertest.lockchat.appg4f.Provider.Lockchat✔️✔️❌✔️❌liaobots.comg4f.Provider.Liaobots✔️✔️❌✔️✔️gpt-gm.h2o.aig4f.Provider.H2o❌❌✔️✔️❌chatgptlogin.acg4f.Provider.ChatgptLogin✔️❌❌❌❌deepai.orgg4f.Provider.DeepAi✔️❌❌✔️❌chat.getgpt.worldg4f.Provider.GetGpt✔️❌❌✔️❌www.aitianhu.comg4f.Provider.AItianhu✔️❌❌❌❌free.easychat.workg4f.Provider.EasyChat✔️❌❌✔️❌chat.acytoo.comg4f.Provider.Acytoo✔️❌❌❌❌chat.dfehub.comg4f.Provider.DfeHub✔️❌❌✔️❌aiservice.vercel.appg4f.Provider.AiService✔️❌❌❌❌b.ai-huan.xyzg4f.Provider.BingHuan✔️✔️❌✔️❌wewordle.orgg4f.Provider.Wewordle✔️❌❌❌❌chatgpt.aig4f.Provider.ChatgptAi❌✔️❌❌❌opchatgpts.netg4f.Provider.opchatgpts✔️❌❌❌❌poe.comg4f.Provider.Poe✔️✔️✔️✔️✔️Other ModelsModelBase ProviderProviderWebsitepalm2Googleg4f.Provider.BardGoogle Bardsage-assistantQuorag4f.Provider.PoeQuora Poeclaude-instant-v1-100kAnthropicg4f.Provider.PoeQuora Poeclaude-v2-100kAnthropicg4f.Provider.PoeQuora Poeclaude-instant-v1Anthropicg4f.Provider.PoeQuora Poegpt-3.5-turbo-16kOpenAIg4f.Provider.PoeQuora Poegpt-4-32kOpenAIg4f.Provider.PoeQuora Poellama-2-7bMeta AIg4f.Provider.PoeQuora Poellama-2-13bMeta AIg4f.Provider.PoeQuora Poellama-2-70bMeta AIg4f.Provider.PoeQuora Poefalcon-40bHuggingfaceg4f.Provider.H2oH2ofalcon-7bHuggingfaceg4f.Provider.H2oH2ollama-13bHuggingfaceg4f.Provider.H2oH2oclaude-instant-v1-100kAnthropicg4f.Provider.Vercelsdk.vercel.aiclaude-instant-v1Anthropicg4f.Provider.Vercelsdk.vercel.aiclaude-v1-100kAnthropicg4f.Provider.Vercelsdk.vercel.aiclaude-v1Anthropicg4f.Provider.Vercelsdk.vercel.aialpaca-7bReplicateg4f.Provider.Vercelsdk.vercel.aistablelm-tuned-alpha-7bReplicateg4f.Provider.Vercelsdk.vercel.aibloomHuggingfaceg4f.Provider.Vercelsdk.vercel.aibloomzHuggingfaceg4f.Provider.Vercelsdk.vercel.aiflan-t5-xxlHuggingfaceg4f.Provider.Vercelsdk.vercel.aiflan-ul2Huggingfaceg4f.Provider.Vercelsdk.vercel.aigpt-neox-20bHuggingfaceg4f.Provider.Vercelsdk.vercel.aioasst-sft-4-pythia-12b-epoch-3.5Huggingfaceg4f.Provider.Vercelsdk.vercel.aisantacoderHuggingfaceg4f.Provider.Vercelsdk.vercel.aicommand-medium-nightlyCohereg4f.Provider.Vercelsdk.vercel.aicommand-xlarge-nightlyCohereg4f.Provider.Vercelsdk.vercel.aicode-cushman-001OpenAIg4f.Provider.Vercelsdk.vercel.aicode-davinci-002OpenAIg4f.Provider.Vercelsdk.vercel.aitext-ada-001OpenAIg4f.Provider.Vercelsdk.vercel.aitext-babbage-001OpenAIg4f.Provider.Vercelsdk.vercel.aitext-curie-001OpenAIg4f.Provider.Vercelsdk.vercel.aitext-davinci-002OpenAIg4f.Provider.Vercelsdk.vercel.aitext-davinci-003OpenAIg4f.Provider.Vercelsdk.vercel.aiRelated AI-4Free projects🎁 Projects⭐ Stars📚 Forks🛎 Issues📬 Pull requestsgpt4freefreeGPTPoe API WrapperEdge GPTBard ReversedChatGPT ReversedPython Poe APIContributeto add another provider, its very simple:create a new file inai4f/Provider/Providerswith the name of the Providerin the file, paste theBoilerplateyou can find inai4f/Provider/Provider.py:importosfrom..typingimportsha256,Dict,get_type_hintsurl=Nonemodel=Nonesupports_stream=Falseneeds_auth=Falsedef_create_completion(model:str,messages:list,stream:bool,**kwargs):returnparams=f'ai4f.Providers.{os.path.basename(__file__)[:-3]}supports: '+\'(%s)'%', 
'.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])

Here you can adjust the settings; for example, if the website does support streaming, set supports_stream to True.

Write code to request the provider in _create_completion and yield the response, even if it is a one-time response; do not hesitate to look at other providers for inspiration.

Add the provider name in ai4f/Provider/__init__.py:

from . import Provider
from .Providers import (
    ...,
    ProviderNameHere
)

You are done! Test the provider by calling it:

import ai4f

response = ai4f.ChatCompletion.create(
    model='gpt-3.5-turbo',
    provider=ai4f.Provider.PROVIDERNAME,
    messages=[{"role": "user", "content": "test"}],
    stream=ai4f.Provider.PROVIDERNAME.supports_stream,
)
for message in response:
    print(message, flush=True, end='')

Copyright: This program is licensed under the GNU GPL v3.

Copyright Notice: hansfzlorenzana/AI-4-Free: Copyright (C) 2023 hansfzlorenzana. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.

Star History
ai4flwr
ai4-flwr

This repository contains the AI4OS extensions for the Flower framework.

Authentication

Authentication for Flower is implemented directly via gRPC: interceptors (server side) and authentication metadata plugins (client side).

In order to use it, the server must be initialized with any object of the ai4flwr.auth package as interceptor. See the examples below for more details.

Bearer token authentication

In your server, start it as follows:

import flwr as fl

import ai4flwr.auth.bearer

fl.server.start_server(
    server_address="0.0.0.0:5000",
    certificates=(...),
    interceptors=[ai4flwr.auth.bearer.BearerTokenInterceptor()],
)

In your client, start it as follows:

import grpc
import flwr as fl

import ai4flwr.auth.bearer

token = "Your token as configured in the server"
fl.client.start_numpy_client(
    server_address="localhost:5000",
    client=...,
    root_certificates=...,
    call_credentials=grpc.metadata_call_credentials(
        ai4flwr.auth.bearer.BearerTokenAuthPlugin(token)
    ),
)
ai4i2020
No description available on PyPI.
ai4i2020LinearModel
No description available on PyPI.
ai4i2020UCI
No description available on PyPI.
ai4i2020UCIDataSET
No description available on PyPI.
ai4i2020UCIDataSETPRED
No description available on PyPI.
ai4i2020UCIPRED
No description available on PyPI.
ai4ipreds
0.01 (19/04/2022) - First release.
ai4-metadata-validator
AI4 Metadata validator

Metadata validator for the AI4OS hub data science applications.

Motivation

ai4-metadata-validator validates the metadata used by the AI4OS hub models and applications.

Implementation

The schema has been implemented according to the JSON schema specification (Draft 7), using Python's jsonschema module. Once installed, the ai4-metadata-validator CLI tool is provided, which accepts schema instance files as input parameters.

Installation

$ pip install ai4-metadata-validator

Usage

$ ai4-metadata-validator instances/sample.mods.json
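For illustration, this is roughly what Draft 7 validation with Python's jsonschema module looks like (a sketch of the underlying mechanism, not the package's internal code; the file names are placeholders):

import json

from jsonschema import Draft7Validator

with open("schema.json") as f:
    schema = json.load(f)
with open("instances/sample.mods.json") as f:
    instance = json.load(f)

Draft7Validator.check_schema(schema)        # the schema itself must be valid
Draft7Validator(schema).validate(instance)  # raises ValidationError on failure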
ai4opt
ai4opt: AI for Optimization
ai4sci.oml
omlAI4Science: Efficient data-driven Online Model Learning (OML) / system identification and control. The algorithm is proposed in thispaper.To get started,pip install ai4sci.oml --upgradeThis python package is based on the online dynamic mode decomposition algorithm, which is also available as a python packagepip install odmd --upgrade, seehere.Showcase: Lorenz system controlLorenz systemis one of the most classical nonlinear dynamical systems. Here we show how the proposed algorithm can be used to controll that. For more details, seedemo.No controlIf there is no control, we can see the mysterious butterfly trajectory. It starts close to the bottom plane and enters into the butterfly wing region, then oscillates there.With controlIf we apply data-driven real-time closed loop control, it can be stabilized at an unstable fixed point (near the center of the butterfly wing).HighlightsHere are some hightlights about this algorithm, and for more detail refer to thispaperEfficient data-driven online linear/nonlinear model learning (system identification). Any nonlinear and/or time-varying system is locally linear, as long as the model is updated in real-time wrt to new measurements.It finds the exact optimal solution (in the sense of least square error), without any approximation (unlike stochastic gradient descent).It achieves theoretical optimal time and space complexity.The time complexity (flops for one iteration) is O(n^2), where n is state dimension. This is much faster than standard algorithm O(n^2 * t), where t is the current time step (number of measurements). In online applications, t >> n and essentially will go to infinity.The space complexity is O(n^2), which is far more efficient than standard algorithm O(n * t) (t >> n).A weighting factor (in (0, 1]) can be used to place more weight on recent data, thus making the model more adaptive.This local model can be used for short-horizon prediction and data-driven real-time closed loop control.It has been successfully applied to flow separation control problem, and achived real-time closed loop control. See thispaperfor details.Online model learning algorithm descriptionThis is a brief introduction to the algorithm. For full technical details, see thispaper, and chapter 3 and chapter 7 of thisPhD thesis.Unknown dynamical systemSuppose we have a (discrete) nonlinear and/or time-varyingdynamical system, and the state space representation isx(t+1) = f(t, x(t), u(t))y(t) = g(t, x(t), u(t))where t is (discrete) time, x(t) is state vector, u(t) is control (input) vector, y(t) is observation (output) vector. f(~, ~,) and g(, ~, ~) are unknown vector-valued nonlinear functions.It is assumed that we have measurements x(i), u(i), y(i) for i = 0,1,...t.However, we do not know functions f and g.We aim to learn a model for the unknown dynamical system from measurement data up to time t.We want to the model to be updated efficiently in real-time as new measurement data becomes available.Online linear model learningWe would like to learn an adaptivelinear modelx(t+1) = A x(t) + B u(t)y(t) = C x(t) + D u(t)that fits/explains the observation optimally. By Taylor expansion approximation, any nonlinear and/or time-varying system is linear locally. There are many powerful tools for linear control, e.g,Linear Quadratic Regulator,Kalman filter. 
However, to accurately approximate the original (unknown) dynamical system, we need to update this linear model efficiently in real-time whenever new measurement becomes available.This problem can be formulated as an optimization problem, and at each time step t we need to solve a related but slightly different optimization problem. The optimal algorithm is achived through efficient reformulation of the problem.ai4sci.oml.OnlineLinearModelclass implements the optimal algorithm.Online nonlinear model learningIf we need to fit a nonlinear model to the observed data, this algorithm also applies in this case. Keep in mind that linear adaptive model is good approximation as long as it is updated in real-time. Also, the choice of nonlinear form can be tricky. Based on Taylor expansion, if we add higher order nonlinearity (e.g., quadratic, cubic), the approximation can be more accurate. However, given the learned nonlinear model, it is still not easy to apply control.In particular, we want to fit a nonlinear model of this formx(t+1) = F * phi(x(t), u(t))y(t) = G * psi(x(t), u(t))where phi(~,) and psi(, ~) are known vector-valued nonlinear functions (e.g, quadratic) that we specify, F and G are unknown matrices of proper size.We aim to learn F and G from measurement data.Notice that this model form is general, and encompass many systems such as Lorenze attractor, Logistic map, Auto-regressive model, polynomial systems.F and G can be updated efficiently in real-time when new data comes in.This can also be formulated as the same optimization problem, and the same efficient algorithm works in this case.ai4sci.oml.OnlineModelclass implements the optimal algorithm.UseInstallFrom PyPipip install ai4sci.oml --upgradeFrom sourcegit clone https://github.com/haozhg/oml.git cd oml/ pip install -e .Testscd tests/ python -m pytest .DemoSee./demofor python notebook to demo the algorithm for data-driven real-time closed loop control.demo_lorenz.ipynbshows control ofLorenz attractor.demo_online_linear_model.ipynbshows control of an unstable linear time-varying system.Authors:Hao ZhangReferenceIf you you used these algorithms or this python package in your work, please consider citingZhang, Hao, Clarence W. Rowley, Eric A. Deem, and Louis N. Cattafesta. "Online dynamic mode decomposition for time-varying systems." SIAM Journal on Applied Dynamical Systems 18, no. 3 (2019): 1586-1609.BibTeX@article{zhang2019online, title={Online dynamic mode decomposition for time-varying systems}, author={Zhang, Hao and Rowley, Clarence W and Deem, Eric A and Cattafesta, Louis N}, journal={SIAM Journal on Applied Dynamical Systems}, volume={18}, number={3}, pages={1586--1609}, year={2019}, publisher={SIAM} }Date createdApril 2017LicenseMITIf you want to use this package, but find license permission an issue, pls contact me athaozhang at alumni dot princeton dot edu.IssuesIf there is any comment/suggestion, or if you find any bug, feel free tocreate an issuehere, and/orfork this repo, make suggested changes, and create a pull request (merge from your fork to this repo). Seethisas an example guidance for contribution and PRs.
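For intuition, the O(n^2) per-step update described above can be sketched as exponentially weighted recursive least squares in plain NumPy (a conceptual illustration, not the package's OnlineLinearModel API):

import numpy as np

def rls_update(AB, P, z, x_next, weight=0.9):
    # One update of the local model x(t+1) ~= [A B] z(t), where z stacks
    # the state x(t) and control u(t). 'weight' in (0, 1] discounts old data.
    Pz = P @ z
    gain = Pz / (weight + z @ Pz)          # Kalman-style gain vector
    err = x_next - AB @ z                  # one-step prediction error
    AB = AB + np.outer(err, gain)          # rank-one model correction, O(n^2)
    P = (P - np.outer(gain, Pz)) / weight  # inverse-covariance update, O(n^2)
    return AB, P

Starting from AB = np.zeros((n, n + m)) and P as a large multiple of the identity, calling rls_update at every time step keeps the local linear model current as new measurements arrive.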
ai4scr-athena
ATHENA is an open-source computational framework written in Python that facilitates the visualization, processing and analysis of (spatial) heterogeneity from spatial omics data. ATHENA supports any spatially resolved dataset that contains spatial transcriptomic or proteomic measurements, including Imaging Mass Cytometry (IMC), Multiplexed Ion Beam Imaging (MIBI), multiplexed Immunohistochemistry (mIHC) or Immunofluorescence (mIF), seqFISH, MERFISH, Visium.
ai4scr-data-utility
Dataset utilities to standardise exposure of datasets used in the AI4SCR group.
ai4scr-scQUEST
scQUEST is an open-source Python library for cell type identification and quantification of tumor ecosystem heterogeneity in patient cohorts.
ai4scr-spatial-omics
todo
ai4sqlite3
ai4sqlite3Natural language query assistant forSQLite databasesUsing a local SQLite3 database file, the command-line interface asks for your query intentions, uses OpenAI's ChatGPT API to formulate SQL fulfilling them, and then runs the SQL on your database. Bring your ownOpenAI API key($ / free trial).The tool sends your databaseschemaand written query intentions to OpenAI. But NOT the result sets nor any other databasecontent. The database is opened in read-only mode so that the AI cannot damage it.Quick start$export OPENAPI_API_KEY=xxx$pip3 install ai4sqlite3$wget https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite$ai4sqlite3 Chinook_Sqlite.sqlite --yesAnalyzing schema of Chinook_Sqlite.sqlite in 4.9s This database models a digital music store. It includes tables for artists, albums, tracks, genres, media types, invoices, customers, employees, playlists, and playlist tracks. The tables are linked through foreign keys to form relationships, such as an artist being associated with an album, an invoice being linked to a customer, and a playlist being composed of multiple tracks. The database is designed to enable the store to manage and track music sales, customer information, and employee records, as well as organizing and categorizing the available music. Please state the nature of the desired database query. >top five customer countries by 2011 revenue (round to cents)Generating SQL in 2.8s SELECT c.Country, ROUND(SUM(i.Total), 2) AS 'Revenue 2011' FROM Customer c JOIN Invoice i ON c.CustomerId = i.CustomerId WHERE strftime('%Y', i.InvoiceDate) = '2011' GROUP BY c.Country ORDER BY SUM(i.Total) DESC LIMIT 5; Executing query in 0.1s +---------+--------------+ | Country | Revenue 2011 | +---------+--------------+ | USA | 103.01 | | Canada | 55.44 | | Germany | 48.57 | | France | 42.61 | | Ireland | 32.75 | +---------+--------------+ Next query? >percentage of all revenue from sales to North American customersGenerating SQL in 3.3s SELECT ROUND(SUM(i.Total) / (SELECT SUM(Total) FROM Invoice)*100, 2) AS "North American Revenue Percentage" FROM Invoice i INNER JOIN Customer c ON i.CustomerId = c.CustomerId WHERE c.Country = 'USA' OR c.Country = 'Canada'; Executing query in 0.1s +-----------------------------------+ | North American Revenue Percentage | +-----------------------------------+ | 35.52 | +-----------------------------------+ Next query? >AdviceWe reset the AI memory betweenNext query?prompts, so you can't implicitly refer back to prior queries.But you can use the up arrow key to recall a prior input to edit or add to.If SQLite rejects the AI's SQL, then we automatically request corrections (up to--revisionsattempts).You can (usually) get general questions aswered with something like:about the schema, what does each invoice line item refer to?You might enjoy exploring yourFirefoxorChromebrowser history database (you might need to copy the file if your browser has it open).Challenging examplesHere are a few examples where gpt-3.5-turbo usually generates erroneous answers (but we show cherry-picked ones).>Considering sales to USA customers, find the top-grossing artist in each state.Generating SQL in 13.4s WITH -- Select only the sales to USA customers and -- join the necessary tables. usa_sales AS ( SELECT il.*, c.State FROM InvoiceLine il INNER JOIN Invoice i ON il.InvoiceId=i.InvoiceId INNER JOIN Customer c ON i.CustomerId=c.CustomerId WHERE c.Country='USA' ), -- Calculate the total sale in dollars for each artist. 
artist_total_sales AS ( SELECT ar.Name, SUM(usa_sales.UnitPrice*usa_sales.Quantity) AS total_sales FROM usa_sales INNER JOIN Track t ON usa_sales.TrackId=t.TrackId INNER JOIN Album al ON t.AlbumId=al.AlbumId INNER JOIN Artist ar ON al.ArtistId=ar.ArtistId GROUP BY ar.ArtistId ), -- Rank the artists by total sale in each state. -- If two or more artists share the top sale in a state, -- rank them alphabetically by artist name. ranked_artists AS ( SELECT ar.Name, usa_sales.State, SUM(usa_sales.UnitPrice*usa_sales.Quantity) AS state_sales, RANK() OVER ( PARTITION BY usa_sales.State ORDER BY SUM(usa_sales.UnitPrice*usa_sales.Quantity) DESC, ar.Name ) AS rank_num FROM usa_sales INNER JOIN Track t ON usa_sales.TrackId=t.TrackId INNER JOIN Album al ON t.AlbumId=al.AlbumId INNER JOIN Artist ar ON al.ArtistId=ar.ArtistId GROUP BY ar.ArtistId, usa_sales.State ) -- Select only the top-ranked artist in each state. SELECT State, Name AS Top_Grossing_Artist FROM ranked_artists WHERE rank_num=1 ORDER BY State; Executing query in 0.1s +-------+--------------------------------+ | State | Top_Grossing_Artist | +-------+--------------------------------+ | AZ | Iron Maiden | | CA | Iron Maiden | | FL | Metallica | | IL | The Office | | MA | Milton Nascimento | | NV | Cidade Negra | | NY | Deep Purple | | TX | Lost | | UT | Battlestar Galactica (Classic) | | WA | Metallica | | WI | The Office | +-------+--------------------------------+ Next query? >Col 1: employee name. Col 2: subordinate employee name. Subordinates include transitive as well as direct reports.Generating SQL in 5.6s WITH RECURSIVE cte_subordinates AS ( SELECT e1.FirstName || ' ' || e1.LastName AS EmployeeName, e2.FirstName || ' ' || e2.LastName AS SubordinateName FROM Employee e1 INNER JOIN Employee e2 ON e2.ReportsTo = e1.EmployeeId UNION SELECT cte.EmployeeName, e.FirstName || ' ' || e.LastName AS SubordinateName FROM cte_subordinates cte INNER JOIN Employee e ON e.ReportsTo = ( SELECT EmployeeId FROM Employee WHERE FirstName || ' ' || LastName = cte.SubordinateName ) ) SELECT * FROM cte_subordinates ORDER BY EmployeeName, SubordinateName LIMIT 25; Executing query in 0.1s +------------------+------------------+ | EmployeeName | SubordinateName | +------------------+------------------+ | Andrew Adams | Jane Peacock | | Andrew Adams | Laura Callahan | | Andrew Adams | Margaret Park | | Andrew Adams | Michael Mitchell | | Andrew Adams | Nancy Edwards | | Andrew Adams | Robert King | | Andrew Adams | Steve Johnson | | Michael Mitchell | Laura Callahan | | Michael Mitchell | Robert King | | Nancy Edwards | Jane Peacock | | Nancy Edwards | Margaret Park | | Nancy Edwards | Steve Johnson | +------------------+------------------+
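The read-only protection mentioned above is standard SQLite behavior; in Python's built-in sqlite3 module it looks like this (an illustrative sketch, using the Chinook file from the quick start):

import sqlite3

# Opening via a URI with mode=ro means no SQL statement can modify the file.
conn = sqlite3.connect("file:Chinook_Sqlite.sqlite?mode=ro", uri=True)
try:
    conn.execute("DROP TABLE Artist")
except sqlite3.OperationalError as e:
    print(e)  # attempt to write a readonly database
finally:
    conn.close()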
ai4vl-video-tools
ai4vl video tools

Video handling tools for ai4vl.

Install it from PyPI

pip install ai4vl-video-tools

Usage

The tool expects a directory with video files as input and outputs a directory with the converted videos. The source directory must have the following structure:

source_dir
└── some_study_name
    ├── 1-1            # these are patient number - case number
    │   ├── snipped01.mp4
    │   ├── snipped02.mp4
    │   └── snipped03.mp4
    └── 2-2
        └── snipped04.mp4

Run the help command to see the usage. A basic example is:

$ python -m video_tools -v aggregate -i tests/data/ -o tests/data/output -s 1
ai4xde
AI4XDEDescriptionAI4XDE is a comprehensive library for scientific machine learning and physical information networks. AI4XDE aims to decouple specific algorithms from specific examples, using examples as input parameters for neural networks, so that all examples can be calculated in one programming operation. Writing neural network algorithms and examples according to the interface paradigm used in the AI4XDE library can quickly test the stability of algorithms on different examples and accelerate experimental progress; At the same time, it can also enable the completion of calculation examples, which can be tested and compared on different neural network algorithms.Currently, AI4XDE supports the following algorithms:PINNUniformRandom_ RRAR_ DRAR_ GRADR3SamplingHPOgPINNFI-PINNFBPINNCurrently, AI4XDE supports the following examples:Formula based approximate function calculation exampleData based formula approximation examplesA simple ODE calculation exampleLotka Volterra equationSecond Order ODEPoisson equation in 1D with Dirichlet boundary conditionsPoisson equation in 1D with Dirichlet/Neumann boundary conditionsPoisson equation in 1D with Dirichlet/Robin boundary conditionsPoisson equation in 1D with Dirichlet/Periodic boundary conditionsPoisson equation in 1D with Dirichlet/PointSetOperator boundary conditionsPoisson equation in 1D with hard boundary conditionsPoisson equation in 1D with Multi-scale Fourier feature networksPoisson equation over L-shaped domainInverse problem for the Poisson equation with unknown forcing fieldInverse problem for the fractional Poisson equation in 1DInverse problem for the fractional Poisson equation in 2DPoisson equation in 2D peak problemLaplace equation on a diskEuler BeamHelmholtz equation over a 2D square domainHelmholtz equation over a 2D square domain with a holeHelmholtz sound-hard scattering problem with absorbing boundary conditionsKovasznay FlowBurgers equationHeat equationDiffusion equationDiffusion-reaction equationAllen Cahn equationKlein-Gordon equationBeltrami flowSchrodinger equationWave propagation with spatio-temporal multi-scale Fourier feature architectureWave equationIntegro-differential equationVolterra IDEFractional Poisson equation in 1DFractional Poisson equation in 2DFractional Poisson equation in 3DFractional_Diffusion_1DInverse problem for the Lorenz systemInverse problem for the Lorenz system with exogenous inputInverse problem for Brinkman-Forchheimer modelInverse problem for the diffusion equationInverse problem for the Diffusion-reaction equationInverse problem for the Navier-Stokes equation of incompressible flow around cylinderBimodal in 2DFlow in a Lid-Driven CavityConvection equation in 1D with Periodic boundary conditionsHarmonic Oscillator 1DInstallationSince AI4XDE is based on the DeepXDE library, you need to first install the DeepXDE library.DeepXDE requires one of the following dependencies to be installed:TensorFlow 1.x:TensorFlow>=2.7.0TensorFlow 2.x:TensorFlow>=2.2.0,TensorFlow Probability>=0.10.0PyTorch:PyTorch>=1.9.0JAX:JAX,Flax,OptaxPaddlePaddle:PaddlePaddle(develop version)Please install the above dependencies as a baseline before installing DeepXDESubsequently, you can use the following method to install AI4XDEInstall using 'pip':$ pip install ai4xdeInstall using 'conda':$ conda install -c xuelanghanbao ai4xdeFor developers, you should clone the folder to your local machine and put it along with your project scripts:$ git clone https://gitee.com/xuelanghanbao/AI4XDE.gitInstructionsAI4XDE separates 
algorithms from examples: algorithm templates are stored in the solver folder, specific algorithms implemented from those templates (such as PINN, R3Sampling, etc.) are stored in the algorithms folder, and the calculation templates and specific examples (such as Burgers, AllenCahn, etc.) are stored in the cases folder.

Contribution

1. Fork the repository
2. Create a Feat_xxx branch
3. Commit your code
4. Create a Pull Request
ai6
ai6 -- Artificial Intelligence Code Examples

Install

$ pip install ai6

Example

import ai6

def f(p):
    [x, y] = p
    return x * x + y * y

p = [1.0, 3.0]
ai6.gradient_descendent(f, p)
ai69
ai69is a Python package to dynamically generate and execute Python functions for any occation. It's designed to simplify the process of integrating AI-powered functions for anything you want.InstallationTo installai69, run the following command in your terminal:pipinstallai69Ensure you have Python 3.6 or later installed on your system.UsageSetting UpFirst, importai69and set your OpenAI API key:fromai69importaiai.set_key("your-openai-api-key")[!IMPORTANT] You must obtain an API key from OpenAI. You can get it fromhere.Alternatively, If you have an environment variableOPENAI_API_KEYset, you need not useai.set()to set up the key. ai69 will auto import the key for youCalling FunctionsWithai69, you can call functions dynamically. The package will attempt to generate the appropriate Python code using OpenAI's Codex:fromai69importaiawaitai.getWeather('Chennai')# 'sunny'awaitai.randomNumberBetween(1,10)#6awaitai.slugify('My Article')# 'my-article'awaitai.hasProfanityRegex('f*ck this lol')# Falseawaitai.extractHashtags('this is #really cool! #ai #code')# ['really', 'ai', 'code']awaitai.getProgrammerJoke()# 'What do you call a programmer from Finland? Nerdic.'FeaturesDynamic Function Generation:Create functions on the fly based on method names and arguments.AI-Powered Code Generation:Utilizes OpenAI's GPT-3.5 Turbo model for generating Python code.Flexible and Easy to Use:Designed to be intuitive and straightforward, requiring minimal setup.Important NotesSecurity:Executing dynamically generated code can be risky. Always validate and sanitize inputs and useai69in a secure environment.API Key:Your OpenAI API key should be kept confidential. Do not expose it in publicly accessible areas like GitHub repositories.[!NOTE] The responses from the AI can vary, and the generated code's quality depends on the model's current capabilities and understanding.Disclaimerai69is an experimental tool that relies on AI to generate code. The developers ofai69are not responsible for any consequences arising from the use of this package, including but not limited to generated code quality, security vulnerabilities, or AI model inaccuracies.Currently only OpenAI's GPT-3.5 Turbo model is supported. Support for other models may be added in the future. For this reason, we use openai's api rather than openai's python package.UwU wetotally* recommend you to use in productionlmaowContributingContributions toai69are welcome! Please feel free to open a pull request or issue on GitHub. All contributions must be released under theMIT.ai69is released under theMIT License.
ai71
Developers building Python 3.8+ apps can now interact seamlessly with the AI71 API thanks to the ai71 Python library. It includes built-in type checking for both requests and responses, and provides both synchronous and asynchronous HTTP clients powered by httpx.

Documentation

The API documentation can be found here.

Installation

pip install ai71

Usage

Define the AI71_API_KEY environment variable or provide api_key in AI71 and AsyncAI71.

import os

from ai71 import AI71

client = AI71()

chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "What is your name?"}],
    model="tiiuae/falcon-180B-chat",
)

Async Usage

import asyncio

from ai71 import AsyncAI71

client = AsyncAI71()

async def main():
    stream = await client.chat.completions.create(
        messages=[{"role": "user", "content": "What is your name?"}],
        model="tiiuae/falcon-180B-chat",
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")

asyncio.run(main())
ai8
zbmain's personal code library, continuously growing. Releases will stay on 0.x.x for now; version 1.0 will be published officially once the library is complete.

Installation

pip install zbmain

Usage

import zbmain
aia
AIA Chasing in PythonThis library was built as a workaround to the CPythonissue 18617(AIA chasing for missing intermediate certificates on TLS connections) regarding SSL/TLS.Why a session?That's not really a session in the HTTP sense, it's just a way to cache the downloaded certificates in memory, so one doesn't need to validate the same certificate more than once.How does it get the certificate chain?It gets the whole chain from the AIA (Authority Information Access) extension of each certificate, and gets the root certificate locally, from the system.How does it validate the certificate chain?Through OpenSSL, which must be installed as an external dependency.When should I use it?Ideally, never, but that might not be an option. When the web server configuration doesn't include the entire chain (apart from the root certificate), there are only two "options": ignore the certificate (not secure) or get the intermediary certificates in the chain through AIA (that's why this small library was written).How to installAnywhere, assuming OpenSSL is already installed:pipinstallaiaFor system installation in Arch Linux, there's also thepython-aiapackage in AUR.How to use it?For simple requests on HTTPS, there's a straightforward way based on the standard libraryurllib.request.urlopen.fromaiaimportAIASessionaia_session=AIASession()# A GET result (only if status was 200), as bytescontent=aia_session.download("https://...")# Return a `http.client.HTTPResponse` object, like `urllib.request.urlopen`response=aia_session.urlopen("https://...")# Indirectly, the same abovefromurllib.requestimporturlopenurl="https://..."context=aia_session.ssl_context_from_url(url)response=urlopen(url,context=context)The context methods also helps when working with HTTP client libraries. For example, withrequests:fromtempfileimportNamedTemporaryFilefromaiaimportAIASessionimportrequestsaia_session=AIASession()url="https://..."cadata=aia_session.cadata_from_url(url)# Validated PEM certificate chainwithNamedTemporaryFile("w")aspem_file:pem_file.write(cadata)pem_file.flush()resp=requests.get(url,verify=pem_file.name)Withhttpxin synchronous code it's really straightforward, since it accepts theSSLContextinstance:fromaiaimportAIASessionimporthttpxaia_session=AIASession()url="https://..."context=aia_session.ssl_context_from_url(url)resp=httpx.get(url,verify=context)The certificate fetching part of this library and the OpenSSL call are blocking, so this library is still not prepared for asynchronous code. But one can easily make some workaround to use it, for example withtornado.httpclientor with the already seenhttpx, usingasyncio:importasynciofromfunctoolsimportpartialfromaiaimportAIASessionasyncdefget_context(aia_session,url,executor=None):returnawaitasyncio.get_event_loop().run_in_executor(executor,partial(aia_session.ssl_context_from_url,url),)# Tornado versionfromtornado.httpclientimportAsyncHTTPClientasyncdefdownload_tornado_async(url):aia_session=AIASession()context=awaitget_context(aia_session,url)client=AsyncHTTPClient()try:resp=awaitclient.fetch(url,ssl_options=context)returnresp.bodyfinally:client.close()result=asyncio.run(download_tornado_async("https://..."))# httpx versionimporthttpxasyncdefdownload_httpx_async(url):aia_session=AIASession()context=awaitget_context(aia_session,url)asyncwithhttpx.AsyncClient(verify=context)asclient:resp=awaitclient.get(url)returnresp.contentresult=asyncio.run(download_httpx_async("https://..."))
aia-41
This is an AIA trial package.
aiaa-framework
No description available on PyPI.
aiaas-falcon
AIaaS Falcon

Installation | Quickstart | Description

AIaaS_Falcon is a Generative AI / LLM library that interacts with different model API endpoints, allowing operations such as listing models, creating embeddings, and generating text based on certain configurations. AIaaS_Falcon helps to invoke the RAG pipeline in seconds.

Supported endpoint types: Azure OpenAI, SingtelGPT, Dev_Quantized, Dev_Full

Installation

Ensure you have the requests and google-api-core libraries installed:

pip install aiaas-falcon

If you want to install from source:

git clone https://github.com/Praveengovianalytics/AIaaS_falcon && cd AIaaS_falcon
pip install -e .

Methods

Falcon class

__init__(config): initialise the Falcon object with endpoint configs.
Parameters:
- api_key: API key
- api_name: name for the endpoint
- api_endpoint: type of endpoint (can be azure, dev_quan, dev_full, prod)
- host_name_port: host and port information
- use_pil: activate Personal Identifier Information Limit (PII) protection (boolean)
- protocol: HTTP/HTTPS
- api_type: subroute if needed
- use_pii: whether the current endpoint needs PII protection
- log_key: auth key to use the application

current_active(): check whether the current endpoint is active.

add_endpoint(api_name, protocol, host_name_port, api_endpoint, api_key, use_pil=False): add a new endpoint.
Parameters:
- api_key: API key
- api_name: name for the endpoint
- api_endpoint: type of endpoint (can be azure, dev_quan, dev_full, prod)
- host_name_port: host and port information
- use_pii: activate PII protection (boolean)
- protocol: HTTP/HTTPS
- use_pil: whether the current endpoint needs PII protection

list_endpoint(): list all endpoints in the endpoint manager.

set_endpoint(name): set the target endpoint as active.
Parameter:
- name: the target endpoint's name

remove_endpoint(name): delete an endpoint by name.
Parameter:
- name: the target endpoint's name

current_pii(): check the current PII protection activation status.

switch_pii(): toggle the current PII protection activation status.

list_models(): list the available models.

initalise_pii(): download and initialise PII protection. Note: this does not activate PII but initialises its dependencies.

health(): check the health of the current endpoint.

create_embedding(file_path): create embeddings by sending files to the API.
Parameter:
- file_path: path to the file

generate_text_full(query="", context="", use_file=0, model="", chat_history=[], max_new_tokens: int = 200, temperature: float = 0, top_k: int = -1, frequency_penalty: int = 0, repetition_penalty: int = 1, presence_penalty: float = 0, fetch_k=100000, select_k=4, api_version='2023-05-15', guardrail={'jailbreak': False, 'moderation': False}, custom_guardrail=None): generate text using the LLM endpoint. Note: some parameters are endpoint-specific.
Parameters:
- query: a string with your prompt
- use_file: whether to take a file as context in generation. Only applies to dev_full and dev_quan. Requires create_embedding before use.
- model: a string naming the model to use. You can use list_models to check the available models.
- chat_history: an array of chat history between user and bot. Only applies to dev_full and dev_quan. (Beta)
- max_new_tokens: maximum number of new tokens to generate. Must be an integer.
- temperature: float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.
- top_k: integer that controls the number of top tokens to consider.
- frequency_penalty: float that penalizes new tokens based on their frequency in the generated text so far.
- repetition_penalty: float that penalizes new tokens based on whether they appear in the prompt and the generated text so far.
- presence_penalty: float that penalizes new tokens based on whether they appear in the generated text so far.
- fetch_k: used for document retrieval; how many elements to include in the search. Only applies when use_file is 1.
- select_k: used to select the number of documents for document retrieval. Only applies when use_file is 1.
- api_version: only applies to the azure endpoint.
- guardrail: whether to use the default jailbreak and moderation guardrails.
- custom_guardrail: path to a custom guardrail .yaml file. The format can be found in sample.yaml.

evaluate_parameter(config): carry out a grid search over parameters (see the sketch below).
Parameter:
- config: a dict. The dict must contain model and query; any parameter to grid search must be a list.
  - model: a string naming the model
  - query: a string with the query
  - **other parameters (e.g. "temperature": list(np.arange(0, 2, 0.5)))

decrypt_hash(encrypted_data): decrypt the configuration from an experiment id.
Parameter:
- encrypted_data: a string id

Quickstart

from aiaas_falcon import Falcon

model = Falcon(
    api_name="azure_1",
    protocol="https",
    host_name_port="example.com",
    api_key="API_KEY",
    api_endpoint="azure",
    log_key="KEY",
)
model.list_models()
model.generate_text_full(
    query="Hello, introduce yourself",
    model="gpt-35-turbo-0613-vanilla",
    api_version="2023-05-15",
)

Conclusion

The AIaaS_Falcon library simplifies interactions with LLM APIs, providing a straightforward way to perform various operations such as listing models, creating embeddings, and generating text.

Authors

@Praveengovianalytics
@zhuofan

Google Colab

Get started with aiaas_falcon

Badges
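A minimal sketch of a grid-search config for evaluate_parameter, following the parameter rules above; how the results come back is not documented here, so treat the last line as an assumption:

import numpy as np
from aiaas_falcon import Falcon

model = Falcon(api_name="azure_1", protocol="https", host_name_port="example.com",
               api_key="API_KEY", api_endpoint="azure", log_key="KEY")

# config must contain "model" and "query"; grid-searched parameters are lists
config = {
    "model": "gpt-35-turbo-0613-vanilla",
    "query": "Hello, introduce yourself",
    "temperature": list(np.arange(0, 2, 0.5)),
}
results = model.evaluate_parameter(config)  # assumption: returns per-combination results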
aiaas-falcon-light
AIaaS Falcon-Light

Installation | Quickstart | Description

AIaaS_Falcon_Light is the Generative AI logic and logging framework that supports the AIaaS Falcon library.

Installation

Ensure you have the requests and google-api-core libraries installed:

pip install aiaas-falcon-light

If you want to install from source:

git clone https://github.com/Praveengovianalytics/falcon_light && cd falcon_light
pip install -e .

Methods

Light class

__init__(config): initialise the Light object with endpoint configs.
Parameter:
- config: an object consisting of the parameters:
  - api_key: API key
  - api_name: name for the endpoint
  - api_endpoint: type of endpoint (can be azure, dev_quan, dev_full, prod)
  - url: URL of the endpoint (e.g. http://localhost:8443/)
  - log_id: ID of the log (integer)
  - use_pii: activate Personal Identifier Information Limit (PII) protection (boolean)
  - headers: header JSON for the endpoint
  - log_key: auth key to use the application

current_pii(): check the current PII protection activation status.

switch_pii(): toggle the current PII protection activation status.

list_models(): list the available models.

initalise_pii(): download and initialise PII protection. Note: this does not activate PII but initialises its dependencies.

health(): check the health of the current endpoint.

create_embedding(file_path): create embeddings by sending files to the API.
Parameter:
- file_path: path to the file

generate_text(query="", context="", use_file=0, model="", chat_history=[], max_new_tokens: int = 200, temperature: float = 0, top_k: int = -1, frequency_penalty: int = 0, repetition_penalty: int = 1, presence_penalty: float = 0, fetch_k=100000, select_k=4, api_version='2023-05-15', guardrail={'jailbreak': False, 'moderation': False}, custom_guardrail=None): generate text using the LLM endpoint. Note: some parameters are endpoint-specific.
Parameters:
- query: a string with your prompt
- use_file: whether to take a file as context in generation. Only applies to dev_full and dev_quan. Requires create_embedding before use.
- model: a string naming the model to use. You can use list_models to check the available models.
- chat_history: an array of chat history between user and bot. Only applies to dev_full and dev_quan. (Beta)
- max_new_tokens: maximum number of new tokens to generate. Must be an integer.
- temperature: float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.
- top_k: integer that controls the number of top tokens to consider.
- frequency_penalty: float that penalizes new tokens based on their frequency in the generated text so far.
- repetition_penalty: float that penalizes new tokens based on whether they appear in the prompt and the generated text so far.
- presence_penalty: float that penalizes new tokens based on whether they appear in the generated text so far.
- fetch_k: used for document retrieval; how many elements to include in the search. Only applies when use_file is 1.
- select_k: used to select the number of documents for document retrieval. Only applies when use_file is 1.
- api_version: only applies to the azure endpoint.
- guardrail: whether to use the default jailbreak and moderation guardrails.
- custom_guardrail: path to a custom guardrail .yaml file. The format can be found in sample.yaml.

evaluate_parameter(config): carry out a grid search over parameters.
Parameter:
- config: a dict. The dict must contain model and query; any parameter to grid search must be a list.
  - model: a string naming the model
  - query: a string with the query
  - **other parameters (e.g. "temperature": list(np.arange(0, 2, 0.5)))

decrypt_hash(encrypted_data): decrypt the configuration from an experiment id.
Parameter:
- encrypted_data: a string id

Quickstart

from aiaas_falcon import Falcon

model = Falcon(api_name="azure_1", protocol='https', host_name_port='example.com', api_key='API_KEY', api_endpoint='azure', log_key="KEY")
model.list_models()
model.generate_text_full(query="Hello, introduce yourself", model='gpt-35-turbo-0613-vanilla', api_version='2023-05-15')

Conclusion

The AIaaS_Falcon_Light library simplifies interactions with AIaaS Falcon, providing a straightforward way to perform various operations such as fact-checking and logging.

Authors

@Praveengovianalytics
@zhuofan

Google Colab

Get started with aiaas_falcon

Badges
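A minimal sketch of constructing the Light client from the documented config fields; the import path and the dict-style config are assumptions, since the source does not show a concrete instantiation:

# Assumption: the Light class is importable from the package root
from aiaas_falcon_light import Light

config = {
    "api_key": "API_KEY",
    "api_name": "local_1",
    "api_endpoint": "dev_full",
    "url": "http://localhost:8443/",
    "log_id": 1,
    "use_pii": False,
    "headers": {"Content-Type": "application/json"},
    "log_key": "KEY",
}
client = Light(config)
client.health()            # check the endpoint before use
print(client.list_models())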
aiacc
Placeholder package

This is a placeholder package
aiacc-nccl
AIACC-NCCL

Optimized primitives for inter-GPU communication on Aliyun machines.

Introduction

AIACC-NCCL is an AI-accelerator communication framework for NVIDIA NCCL. It implements optimized all-reduce, all-gather, reduce, broadcast, reduce-scatter and all-to-all, as well as any send/receive-based communication pattern. It has been optimized to achieve high bandwidth on Aliyun machines using PCIe, NVLink and NVSwitch, as well as networking using InfiniBand Verbs, eRDMA or TCP/IP sockets.

Install

To install AIACC NCCL on the system, create a package and then install it as root using one of the following methods:

Method 1: rpm/deb (recommended)

# CentOS:
wget http://mirrors.aliyun.com/aiacc/aiacc-nccl/aiacc_nccl-1.0.rpm
rpm -i aiacc-nccl-1.0.rpm

# Ubuntu:
wget http://mirrors.aliyun.com/aiacc/aiacc-nccl/aiacc_nccl-1.0.deb
dpkg -i aiacc-nccl-1.0.deb

Method 2: python-offline

wget http://mirrors.aliyun.com/aiacc/aiacc-nccl/aiacc_nccl-2.0.0.tar.gz
pip install aiacc_nccl-2.0.0.tar.gz
# note: you must download and then pip install; this cannot be merged into one line `pip install aiacc_xxx_url`
# Both method 1 and method 2 can run concurrently.

Method 3: python-pypi

pip install aiacc_nccl==2.0

Usage

After installing the aiacc-nccl package, no code changes are needed!

Environment

- AIACC_FASTTUNING: enable fast tuning for LLMs; default=1 (enabled).
- NCCL_AIACC_ALLREDUCE_DISABLE: disable the allreduce algo; default=0 (enabled).
- NCCL_AIACC_ALLGATHER_DISABLE: disable the allgather algo; default=0 (enabled).
- NCCL_AIACC_REDUCE_SCATTER_DISABLE: disable the reduce_scatter algo; default=0 (enabled).
- AIACC_UPDATE_ALGO_DISABLE: disable updating the aiacc nccl algo from the aiacc-sql-server; default=0 (enabled).

Performance

AIACC-NCCL can speed up NCCL performance on Aliyun EGS (GPU machines); for example, instance type 'ecs.ebmgn7ex.32xlarge' is A100 x 8 GPU using the eRDMA network.

GPU (EGS) | Collective | Nodes | Network | Speedup (nccl-tests)
A100 x 8 | all_gather | 2-10 | VPC/eRdma | 30%+
A100 x 8 | reduce_scatter | 2-10 | VPC/eRdma | 30%+
A100 x 8 | all_reduce | 2-10 | VPC/eRdma | 20%
V100 x 8 | all_reduce | 2-20 | VPC | 60%+
A10 x 8 | all_reduce | 1 | - | 20%

Copyright

All source code and accompanying documentation is copyright (c) 2015-2020, NVIDIA CORPORATION. All rights reserved. All modifications are copyright (c) 2020-2024, ALIYUN CORPORATION. All rights reserved.
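Because NCCL-family libraries read these variables from the environment at initialization, they can also be set from Python before the communication backend starts; a minimal sketch (the torch.distributed usage is purely illustrative, since AIACC-NCCL itself requires no code changes):

import os

# Turn off AIACC's tuned allreduce and keep fast tuning on (values per the list above)
os.environ["NCCL_AIACC_ALLREDUCE_DISABLE"] = "1"
os.environ["AIACC_FASTTUNING"] = "1"

# Assumption: any NCCL-backed framework works; torch is used here only as an example,
# launched in the usual way (e.g. via torchrun) so that rank/world-size are set.
import torch.distributed as dist
dist.init_process_group(backend="nccl")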
aiacc-nccl-cu11
AIACC-2.0 ACSpeed stands for AIACC Communication Compiler Speeding; AIACC-2.0 AGSpeed stands for AIACC compute Graph Speeding. This is a distributed training framework plugin for PyTorch and an aiacc nccl communication plugin for many deep learning frameworks, including TensorFlow, PyTorch, MXNet and Caffe. The project aims to provide a uniform distributed training framework plugin for major frameworks, making distributed training as easy and as fast as possible on Alibaba Cloud.
aiacc-nccl-plugin
AIACC-NCCL-PLUGIN

Optimized socket/RDMA for inter-GPU communication on Aliyun machines.

Introduction

AIACC-NCCL-PLUGIN is an AI-accelerator communication framework plugin for NVIDIA NCCL. It has been optimized to achieve high bandwidth on Aliyun machines using InfiniBand Verbs, eRDMA or TCP/IP sockets.

Install

To install AIACC NCCL PLUGIN on the system, create a package and then install it as root using one of the following methods:

Method 1: rpm/deb (recommended)

# CentOS:
wget http://mirrors.aliyun.com/aiacc/aiacc-nccl-plugin/aiacc-nccl-plugin-1.1.0.rpm
rpm -i aiacc-nccl-plugin-1.1.0.rpm

# Ubuntu:
wget http://mirrors.aliyun.com/aiacc/aiacc-nccl-plugin/aiacc-nccl-plugin-1.1.0.deb
dpkg -i aiacc-nccl-plugin-1.1.0.deb

Method 2: python-pypi

pip install aiacc_nccl_plugin

Usage

After installing the aiacc-nccl-plugin package, no code changes are needed!

Copyright

All source code and accompanying documentation is copyright (c) 2015-2020, NVIDIA CORPORATION. All rights reserved. All modifications are copyright (c) 2020-2024, ALIYUN CORPORATION. All rights reserved.
aiacd
No description available on PyPI.
aia-chaser
AIA Chaser

This package provides authority information access (AIA) chasing from a host/leaf certificate to complete its chain of trust and generate an SSL context to establish a secure connection.

Overview

AIA, an extension of the X509 standard in RFC 5280, points a client towards two types of endpoints:

- CA Issuers: to fetch the issuer certificate.
- OCSP: to check the certificate's revocation status.

Thanks to this information, it is possible to complete the chain of trust of a certificate. Without AIA chasing, some HTTPS requests may fail if the endpoint does not provide all the certificates of its chain of trust.

You may have already experienced this when some HTTPS URL works in your browser but fails when using curl or Python + requests. In that case this package could be of help to you :guide_dog:.

Examples

The following examples showcase how to use this library with some typical Python HTTP libraries.

Standard library's urlopen:

from urllib.request import urlopen

from aia_chaser import AiaChaser

url = "https://..."
chaser = AiaChaser()
context = chaser.make_ssl_context_for_url(url)
response = urlopen(url, context=context)

Using Requests: HTTP for Humans:

import tempfile

import requests

from aia_chaser import AiaChaser

chaser = AiaChaser()
url = "https://..."
context = chaser.make_ssl_context_for_url(url)

ca_data = chaser.fetch_ca_chain_for_url(url)
with tempfile.NamedTemporaryFile("wt") as pem_file:
    pem_file.write(ca_data.to_pem())
    pem_file.flush()
    response = requests.get(url, verify=pem_file.name)

Using urllib3:

import urllib3

from aia_chaser import AiaChaser

url = "https://..."
chaser = AiaChaser()
context = chaser.make_ssl_context_for_url(url)
with urllib3.PoolManager(ssl_context=context) as pool:
    response = pool.request("GET", url)

Development

First of all, you must have the following tools installed and on your $PATH:

- Pyenv
- Poetry
- Make

Then, open a terminal in the project's directory and run:

make init

Acknowledgments

This project is based on aia.
aiadv
aiadv

A small helper library to download the datasets that are used in our training programs at AiAdventures.

Install

pip install aiadv

How to use

There are only two things that you need to know:

URLs class

This class saves the id and file name of all the available datasets.

URLs.MOVIE_LENS_SAMPLE
{'id': '1k2y0qC0E3oHeGA5a427hRgfbW7hnQBgF', 'fname': 'movie_lens_sample.zip'}

untar_data function

This function will download the dataset for you and will return the path.

untar_data(URLs.YELP_REIVEWS)

That's it!
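Putting the two together; a minimal sketch, where the import path is an assumption (the README shows the names but not where they are imported from):

# Assumption: URLs and untar_data are importable from the package root
from aiadv import URLs, untar_data

# Download (if needed) and get the local path to the extracted dataset
path = untar_data(URLs.MOVIE_LENS_SAMPLE)
print(path)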
aiae
No description available on PyPI.
aia-fairness
AIA_fairness

Code repository of the article. The repository segments the pipeline so as to isolate key components of the experimental analysis.

Installation and dependencies

Using PyPI:

pip install aia_fairness

Or clone the repository, then run pip install --editable . in the directory containing pyproject.toml. All dependencies are specified in pyproject.toml and installed automatically by pip.

Configuration

The default configuration is loaded automatically and can be found at src/aia_fairness/config/default.py. To set a custom configuration, first run

python -m aia_fairness.config

It will create a file config.py in your current directory containing the default configuration. You can then edit it to your liking. If the file config.py exists in your current directory it is loaded; if not, the default configuration is loaded.

How to use

Dataset automatic download and processing

Part of the datasets use the kaggle API to download the data, hence you need to include your API key in ~/.kaggle/kaggle.json.

aia_fairness provides automatic download, formatting and saving of the datasets used in the article. To use this feature, and thus save the data in the data_format directory, simply run once

python -m aia_fairness.dataset_processing.fetch

Then you can load any dataset easily from anywhere in your code with

import aia_fairness.dataset_processing as dp
data = dp.load_format(dset, attrib)

Dataset evaluation

Each dataset can be evaluated along different axes:

Sizes

from aia_fairness.dataset_processing import metric
metric.counting(<dataset>)

Fairness

from aia_fairness.dataset_processing import metric
metric.dp_lvl(<dataset>, <attribute>)

To run all the evaluations simply execute

python -m aia_fairness.dataset_processing.evaluation

You can refer to the full implementation in test/target_training.py.

Running all the experiments from the paper

** Heavy computing power required **

Once you have downloaded all the data, you can run all the experiments of the article by running

python -m aia_fairness.experimental_stack

or by importing aia_fairness.experimental_stack in the Python interpreter.

Plotting all the experiments of the paper

Run the same shell command as for running the experiments, adding the plot argument:

python -m aia_fairness.experimental_stack plot

Training a target model

aia_fairness.models.target contains various target model types. The available target models are:

- RandomForest
- RandomForest_EGD: fairlearn is used to impose EGD
- NeuralNetwork
- NeuralNetwork_EGD
- NeuralNetwork_Fairgrad: original implementation of the fairgrad paper
- NeuralNetwork_AdversarialDebiasing: uses the fairlearn implementation of Adversarial Debiasing

For instance, to train a random forest (based on sklearn) you can

import aia_fairness.models.target as targets

T = dp.split(data, 0)
target = targets.RandomForest()  # one of the target model types listed above
target.fit(T["train"]["x"], T["train"]["y"])
yhat = target.predict(T["test"]["x"])

Evaluation of a target model

aia_fairness.evaluation provides the metrics used in the article.

import aia_fairness.evaluation as evaluations

utility = evaluations.utility()
fairness = evaluations.fairness()

utility.add_result(T["test"]["y"], yhat, T["test"]["z"])
fairness.add_result(T["test"]["y"], yhat, T["test"]["z"])

utility.save(type(target).__name__, dset, attrib)
fairness.save(type(target).__name__, dset, attrib)

The save method, as called in the example, creates a directory structure of the form:

result/target/<target type>
  <name of the metric class>
    <dset>
      <attrib>
        <Name of metric 1>.pickle
        <Name of metric 2>.pickle
        <Name of metric ..>.pickle
  <name of another metric class>
    <dset>
      <attrib>
        <Name of another metric 1>.pickle
        <Name of another metric 2>.pickle
        <Name of another metric ..>.pickle

Training an attack

aia_fairness.models provides the two types of AIA attacks described in the paper:

- Classification for hard labels
- Regression for soft labels

from aia_fairness.models import attack as attacks

aux = {"y": T["test"]["y"], "z": T["test"]["z"], "yhat": yhat}
aux_split = dp.split(aux, 0)

classif = attacks.Classification()
classif.fit(aux_split["train"]["yhat"], aux_split["train"]["z"])

Evaluation of an attack

Similarly to evaluating the target model, aia_fairness.evaluation.attack can be used to save the accuracy and the balanced accuracy of the attack (see the sketch below).

Plots and graphs

TODO

Test scripts

Various tests are provided in the test directory.

- download_data.py: fetches all the data from the different sources (don't forget to set your kaggle API key)
- target_training.py: loads a dataset, splits it with 5-folding (cross validation), trains a target model on the data, and computes the metrics for this target model

Directory structure

data_processing contains code that downloads, preprocesses and saves the datasets in a uniform pickle format exploitable by the rest of the pipeline, using the load_format(dataset, attribute) function of the utils.py file.
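A minimal sketch of the attack-evaluation step; it assumes aia_fairness.evaluation.attack follows the same add_result/save pattern as the utility and fairness metrics above, which is an assumption rather than documented behaviour:

import aia_fairness.evaluation as evaluations

attack_eval = evaluations.attack()  # assumption: mirrors utility()/fairness()
zhat = classif.predict(aux_split["test"]["yhat"])      # assumption: Classification exposes predict()
attack_eval.add_result(aux_split["test"]["z"], zhat)   # assumption: (true, predicted) signature
attack_eval.save(type(classif).__name__, dset, attrib)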
ai-agent
No description available on PyPI.
ai-agents
An Open-source Framework for Autonomous Language Agents

[📄 Paper] [📄 Doc] [🌐 Website] [🤖️ Demos] [🔥 Discord] [🔥 Wechat Group]

Overview

Agents is an open-source library/framework for building autonomous language agents. The library is carefully engineered to support important features including long-short term memory, tool usage, web navigation, multi-agent communication, and brand new features including human-agent interaction and symbolic control. With Agents, one can customize a language agent or a multi-agent system by simply filling in a config file in natural language, and deploy the language agents in a terminal, a Gradio interface, or a backend service.

One major difference between Agents and other existing frameworks for language agents is that our framework allows users to provide fine-grained control and guidance to language agents via an SOP (Standard Operation Process). An SOP defines subgoals/subtasks for the overall task and allows users to customize a fine-grained workflow for the language agents.

📢 Updates

[x] 2023.10.7 Support LLM-based SOP generation 🎉🎉🎉🎉
    - SOP Generation for Multi-Agent [on Huggingface Space]
    - SOP Generation for Single-Agent [on Huggingface Space]
    - We strongly recommend creating your OWN DEMO by clicking the three dots at the top right and selecting "Duplicate this Space" on Huggingface Space.
[x] 2023.9.20 Deploy Demos on Huggingface Space
[x] 2023.9.12 Official Release

💡 Highlights

- Long-short Term Memory: Language agents in the library are equipped with both long-term memory, implemented via VectorDB + semantic search, and short-term memory (working memory) maintained and updated by an LLM.
- Tool Usage: Language agents in the library can use any external tools via function-calling, and developers can add customized tools/APIs here.
- Web Navigation: Language agents in the library can use search engines to navigate the web and get useful information.
- Multi-agent Communication: In addition to single language agents, the library supports building multi-agent systems in which language agents can communicate with other language agents and the environment. Different from most existing frameworks for multi-agent systems that use pre-defined rules to control the order of agents' actions, Agents includes a controller function that dynamically decides which agent will perform the next action, using an LLM that considers the previous actions, the environment, and the target of the current states. This makes multi-agent communication more flexible.
- Human-Agent Interaction: In addition to letting language agents communicate with each other in an environment, our framework seamlessly supports human users playing the role of an agent, inputting their own actions and interacting with the other language agents in the environment.
- Symbolic Control: Different from existing frameworks for language agents that only use a simple task description to control the entire multi-agent system over the whole task completion process, Agents allows users to use an SOP (Standard Operation Process) that defines subgoals/subtasks for the overall task to customize fine-grained workflows for the language agents.

🛠 Installation

Option 1. Build from source

git clone https://github.com/aiwaves-cn/agents.git
cd agents
pip install -e .

Option 2. Install via PyPI

pip install ai-agents

📦 Usage

🛠️ Generate the config file

Option 1. Fill in the config template manually: modify example/{Muti|Single_Agent}/{target_agent}/config.json

Option 2. Try our WebUI for customizing the config file.

Haven't figured out how to write the JSON file yet? Check out our documentation!

Option 3. Try Huggingface Space for generating the SOP automatically.
- SOP Generation for Multi-Agent [on Huggingface Space]
- SOP Generation for Single-Agent [on Huggingface Space]
- We strongly recommend creating your OWN DEMO by clicking the three dots at the top right and selecting "Duplicate this Space" on Huggingface Space.

🤖️ The Agent Hub

We provide an Agent Hub, where you can search for interesting agents shared by us or other developers, try them out, or use them as the starting point to customize your own agent. We encourage you to share your customized agents to help others build their own agents more easily! You can share your customized agents by submitting PRs that add configs and customized code here. You can also send us your own config files and code for customized agents by email, and we will share your examples and acknowledge your contribution in future updates!

📷 Examples and Demos

We have provided exemplar config files, code, and demos for both single-agent and multi-agent systems here.

Web demos

Note:
1. Due to massive traffic, our online demos may suffer from long queue times and stability issues. Please follow our quick start guide and deploy language agents locally for testing, or check out our website.
2. Software Company is unable to generate executable code online; if you wish to generate executable code directly, please run it locally :)

- Customer Service Agent
- Debate [now on Huggingface Space]
- Software Company [now on Huggingface Space]
- Fiction Studio

Contributing to Agents

We appreciate your interest in contributing to our open-source initiative. Please feel free to submit a PR or share your thoughts on how to improve the library in Issues!

Note:
- When running the code, we will download an embedding model, which will cause the code to run slowly. We will switch this to an API interface later.
- Currently, the shopping assistant cannot be used. We will replace the API later. Stay tuned.

📚 Documentation

Please check our documentation for detailed documentation of the framework.

⭐ Star History

Citation

If you find our repo useful in your research, please kindly consider citing:

@misc{zhou2023agents,
  title={Agents: An Open-source Framework for Autonomous Language Agents},
  author={Wangchunshu Zhou and Yuchen Eleanor Jiang and Long Li and Jialong Wu and Tiannan Wang and Shi Qiu and Jintian Zhang and Jing Chen and Ruipu Wu and Shuai Wang and Shiding Zhu and Jiyu Chen and Wentao Zhang and Ningyu Zhang and Huajun Chen and Peng Cui and Mrinmaya Sachan},
  year={2023},
  eprint={2309.07870},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
aianalytics-client
This project enables querying the Application Insights Analytics API while parsing the results for further processing in a simple manner. Application Insights Analytics is a powerful search feature of Application Insights, which allows you to query your Application Insights telemetry. This module is meant to be used with other data analysis packages, such as numpy and matplotlib. The query results are numpy arrays.

Note: this package is not for sending telemetry to the Application Insights service. For that you can use the official python sdk repo.

Requirements

This module was tested on Python 2.7 and Python 3.5. Older versions of Python 3 probably work as well. For opening the project in Microsoft Visual Studio you will need Python Tools for Visual Studio.

Installation

To install the latest release you can use pip.

$ pip install aianalytics-client

Usage

Once installed, you can query your Application Insights telemetry. Here are a few samples.

Query exceptions from the last 24 hours and print them

from analytics.client import AnalyticsClient

client = AnalyticsClient('<Your app id goes here>', '<Your app key goes here>')
result = client.query('exceptions | where timestamp > ago(24h) | project timestamp, type, outerMessage')

for row in result.row_iterator():
    print("at {0} there was an exception of type {1} with message {2}".format(
        row['timestamp'], row['type'], row['outerMessage']))
    # Indexes can also be used instead of column names, e.g.:
    print("at {0} there was an exception of type {1} with message {2}".format(row[0], row[1], row[2]))

Query average request duration from the last week and plot using matplotlib

from analytics.client import AnalyticsClient

client = AnalyticsClient('<Your app id goes here>', '<Your app key goes here>')
result = client.query('requests | where timestamp > ago(7d) | summarize Duration = avg(duration/1000) by bin(timestamp, 1h) | order by timestamp asc')

import matplotlib.pyplot as plt
plt.plot(result["timestamp"], result["Duration"])
plt.show()
aiandml
MACHINE LEARNING LABORATORY [As per Choice Based Credit System (CBCS) scheme] (Effective from the academic year 2018 - 2019) SEMESTER – VII. Subject Code 18CSL76, CIE Marks 40.

Course Learning Objectives: This course (18CSL76) will enable students to implement and evaluate AI and ML algorithms in the Python programming language.

Programs List:

1. Implement the A* Search algorithm.
2. Implement the AO* Search algorithm.
3. For a given set of training data examples stored in a .CSV file, implement and demonstrate the Candidate-Elimination algorithm to output a description of the set of all hypotheses consistent with the training examples.
4. Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an appropriate data set for building the decision tree and apply this knowledge to classify a new sample.
5. Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets.
6. Write a program to implement the naive Bayesian classifier for a sample training data set stored as a .CSV file. Compute the accuracy of the classifier, considering a few test data sets.
7. Apply the EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set for clustering using the k-Means algorithm. Compare the results of these two algorithms and comment on the quality of clustering. You can add Java/Python ML library classes/API in the program.
8. Write a program to implement the k-Nearest Neighbour algorithm to classify the iris data set. Print both correct and wrong predictions. Java/Python ML library classes can be used for this problem (see the sketch after this list).
9. Implement the non-parametric Locally Weighted Regression algorithm in order to fit data points. Select an appropriate data set for your experiment.
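As an illustration of program 8, a minimal scikit-learn sketch (using scikit-learn is one possible choice; the syllabus only says Python ML library classes may be used):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the iris data set and hold out a test split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit k-NN and report each prediction as correct or wrong
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
for features, pred, truth in zip(X_test, knn.predict(X_test), y_test):
    status = "correct" if pred == truth else "wrong"
    print(f"{features} -> predicted {pred}, actual {truth} ({status})")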
aiapi
No description available on PyPI.
ai-api-client-sdk
SAP AI API Client SDK

The AI API client SDK is a Python-based SDK that enables you to access the AI API using Python methods and data structures. You can use the Client SDK to test AI API.

The client SDK can be used with any implementation of AI API. Because it is independent of the runtime implementation, it doesn't provide access to runtime-specific APIs. For example, maintaining object store secrets is specific to SAP AI Core and is therefore not included in the AI API client SDK. Check for SDK offerings for your runtime that let you access any runtime-specific APIs. For more information on the SAP AI Core SDK see https://pypi.org/project/ai-core-sdk/.

For more information on the AI API specification, SAP AI Core and related topics, please refer to https://help.sap.com/docs/AI_CORE?version=CLOUD.

Example Usage

from ai_api_client_sdk.ai_api_v2_client import AIAPIV2Client
# ParameterBinding is part of the SDK's models (import path may vary by version)
from ai_api_client_sdk.models.parameter_binding import ParameterBinding

# Instantiate the client
client = AIAPIV2Client(base_url=AI_API_URL, auth_url=AUTH_URL, client_id=CLIENT_ID,
                       client_secret=CLIENT_SECRET, resource_group=RESOURCE_GROUP)

# Make some queries, e.g.
scenarios = client.scenario.query()
for scenario in scenarios.resources:
    print(scenario.id)

# Find a deployable executable in scenario 1111
executables = client.executable.query(scenario_id="1111")
for executable in executables.resources:
    if executable.deployable:
        break
print(executable.id)

# Inspect the required parameters for the executable
for parameter in executable.parameters:
    print("Parameter: {}, Type: {}".format(parameter.name, parameter.type))

# Create a configuration
parameter_epochs = ParameterBinding(key="training-epochs", value="1")
myConfiguration = client.configuration.create(
    name="test",
    scenario_id="1111",
    executable_id="argo-mnist-0.2.1",
    parameter_bindings=[parameter_epochs],
)
aiap-model
Contents

- Name of Candidate
- Overview of folder structure
- Running instructions
- Description of logical steps / flow of pipeline
- Overview of key findings in EDA and pipeline, feature engineering choices
- Model choices
- Evaluation choices
- Other considerations
- Parting words

Name of Candidate

Back to content page

Hi! My name is: Chng Yuan Long, Randy
Email: [email protected]

Overview of folder structure

Back to content page

Folder structure:

AIAP
│ README.md
│ requirements.txt
│ test-requirements.txt
| run.sh
| Dockerfile
| eda.ipynb
| tox.ini
|
└──data
|     survive.db
|     sample_df.csv
|
└──src
|     main.py
│     └──config
|           config.py
|     └──preprocessing
|           datamanager.py
|     └──tests
|           test-datamanager.py
|           test-predict.py
|           test-train_pipeline.py
|           test-pipeline.py
|           test_bound_outliers.py
|           test_load_from_database.py
|           test_pipeline.py
|           test_predict.py
|           test_preprocess_data.py
|           test_preprocess_input.py
|     └──model
            pipeline.pkl
            pipeline.py
            predict.py
            train_pipeline.py

File summary (format: file (folder): usage):

- main.py (src): runs the application
- config.py (src/config): tweak variables in the config files: file paths, model-specific objects (CV, test ratio, random seed, params for cross validation), column names, and data-related values such as default values on the Streamlit UI
- datamanager.py (src/preprocessing): loads the pipeline and data; preprocesses input from the application and data from the database after reading
- python files (src/tests): test the functions in the respective python files
- train_pipeline.py (src/model): trains and scores the pipeline with the data in the data folder; outputs pipeline.pkl and a log on the training outcome in the same folder
- pipeline.py (src/model): contains the pipeline to transform data
- predict.py (src/model): predicts inputs using the pipeline trained on the data in the data folder

Running instructions

Back to content page

You can run the application directly with either the bash script or from Docker. Optionally, you may run tests, train the pipeline on the data in the data folder, or run lint tools on the code with Tox. A trained pipeline named pipeline.pkl should already be included in the src/model folder.

Default values are present in the application itself so that you can click on the predict button at the end. If the prediction is 0, the message 'Please see a doctor!' will appear; otherwise it will appear as 'Please keep up the healthy habits'. Along with the message, the predicted class and the probability will appear as well.

The instructions below assume a Windows OS with Python version 3.10.0.

Tox

I have 3 environments in tox (train_pipeline, pytest, lint), each for a specific function. You may run the tox command like so in the root directory to run all 3 back to back:

tox

Or you can run a specific environment like so:

tox -e pytest

Running Main Application

Bash script: run the bash script (run.sh) by double clicking it. The Streamlit application should appear in your browser.

Docker: pull the image by running this command in a terminal with Docker running:

docker pull hashketh/aiap

Once retrieved, run:

docker run -p 8501:8501 hashketh/aiap

The Streamlit app should be available in your browser via localhost:8501.

Description of logical steps / flow of pipeline

Back to content page

Test

I imagine the user would like to test the application first to make sure that everything is working. After that they might want to train the model on the data, or they may wish to use linting tools to assist in cleaning or spotting issues with the code. They can do all of this using the package tox.

For the testing, I tried to test as many of the functions in each python file as I could. I used a sample of the database to replicate the loading and preprocessing of the pipeline. This sample is saved in the data folder under sample_df.csv. All of the test files are included in the tests folder.

Configuration

This deployment is done in Streamlit, and all of the variables are stored in config.py in the config folder, save for the default values. This goes for all of the other variables like pathing and so on. If there is any configuration to be done, it can be tweaked in the config.py file.

Training Data

Train -> Ingest Data -> Preprocessing -> Train Pipeline -> Score -> Output Results

Say they train the pipeline: train_pipeline.py will call on config.py for the values of variables, pipeline.py for the loading of the pipeline, and datamanager.py for the loading and preprocessing of the data.

The preprocessing phase includes all of the transformations that were done in the EDA Jupyter notebook. This includes imputation of missing values, bounding of the outliers, and replacement of the invalid values for smoke, ejection fraction and other features. It will also add the BMI feature.

train_pipeline.py will train the pipeline on the data, score it, and generate a txt file for the user to view the results. The resulting pipeline is saved as a pickle file. The user can either run the train_pipeline.py file directly or call it from tox.

Run application

Run application -> load pipeline -> consume inputs -> preprocess inputs -> predict -> display results

After that they can run the application. main.py contains the Streamlit UI, and it is filled with the default values provided by config.py. If the user clicks on the predict button, main.py will call predict.py, which in turn will call on pipeline.py to load the pipeline and datamanager.py to preprocess the input. predict.py will generate both the prediction and the probability of the outcome. These will be displayed on the page.

Overview of key findings in EDA and pipeline, feature engineering choices

Back to content page

The dataset contains a moderate number of features with 150K observations. Numerical features are typically tail heavy, with some features requiring cleaning or imputing. Likewise, the categorical features require some cleaning as well. The numerical features do not correlate with each other.

The pipeline includes median imputation of possible null values, bounding outliers within the distribution, and the usual scaling of numerical features and one-hot encoding of categorical features.

As I think that domain knowledge is useful in feature engineering and I do not have any medical knowledge, the only feature introduced is BMI, which turned out to be a rather terrible feature. Through the feature importance of both the random forest classifier and the Light Gradient Boosting Machine, I discovered that 5 features have a higher weight in determining the outcome: CK, Smoke, Gender, Diabetes and Age.

Model choices

Back to content page

I used the following models:

- Logistic Regression (LOGREG)
- Support Vector Machines (SVM)
- K-Nearest Neighbours (KNN)
- Random Forest (RF)
- Light Gradient Boosting Machine (LGBM)

The models were chosen based on how complex they are, whether they are ensemble models or not, and whether they are instance or model based. I originally intended to compare the models on the validation data and then choose one for hyperparameter tuning to achieve better results. However, the models happened to give me good results with the default values, so I did not need to tune hyperparameters.

Another selection criterion is whether there is any indication of overfitting on the data. Based on the training and test cross validation scores, I can see whether a model is prone to overfit. If there is overfitting I can regularise the model or choose a less complex model; if there is underfitting I will choose a more complex model.

On the second iteration, I chose to focus on the 5 features with the highest weightage, but I was unable to achieve the same score. Although the performance was very comparable, given the severity of a false negative in the context of this problem I am still more comfortable with a perfect score using more features. Furthermore, training time is negligible at this point. Either one of the ensemble models would be fine, but I settled on the random forest.

- LOGREG is the simplest of them all, being a linear model. A simple model has its usefulness; however, it is unable to fit well onto the data using the default values.
- SVM is a very flexible model that allows me to reach both linear and polynomial solutions with its kernel methods and its hyper-parameters. But it requires a bit of knowledge to tune the hyperparameters properly.
- KNN is an instance-based model that does not learn a model but predicts based on the distance of the training data to the new instances.
- RF is an ensemble model of decision trees, but it is prone to overfitting. Typically I fit using the defaults and then prune (regularise) the tree later. I like that the interpretation of the tree is easy to understand.
- LGBM is an ensemble model that improves on every iteration by adjusting to the residual error of the previous iteration. My understanding is that LGBM is a faster variant of XGBoost, and XGBoost is itself a more regularised variant of the Gradient Boosting Machine.

I initially chose to use LGBM as it provided the highest score on all metrics, with accuracy taking precedence. It is also the fastest to train. However, when I was building the application I had some issues running the LGBM model, so I used the random forest instead, as it is the runner-up with the same scores on all metrics, just a tad slower to train.

Evaluation choices

Back to content page

As this is a classification problem, scores like recall, precision, accuracy, the F1 score and the ROC AUC score are relevant. I got most of the metrics through sklearn's classification report (see the sketch below).

I think it's important to know beforehand which metric should take priority, before the problem is modelled. The problem is about predicting the survival of a patient suffering from heart artery disease, and between choosing a low false negative rate or a low false positive rate, the low false negative rate takes priority, since the outcome of a false positive (predicting death when the patient survives) is less disastrous than a false negative (predicting survival when the patient dies). The model should have high recall.

Beyond that, accuracy measures the true positive and true negative rates and is great for knowing the absolute performance of the model. F1 is good to see as a weighted measure of both recall and precision. F1 is a harmonic mean, which is just another measure of the mean similar to the arithmetic mean or the geometric mean (where harmonic mean ≤ geometric mean ≤ arithmetic mean). F1 is penalised by low values and will be high only when all components have high values.

ROC AUC tells us how well the model is able to distinguish between the positive and the negative class, with 0.5 meaning the model is equally likely to label a positive as negative and vice versa.

All scores are bounded between 0 and 1 inclusive, with higher values being better.

Other considerations

Back to content page

This deployment is built with ease of use and maintenance in mind. A couple of design choices were made to this end:

- Tox allows me to run a couple of virtual environments and commands in an easy manner. With Tox, I can use pytest, run lint packages on the code, and train the model on the training data with one command, regardless of what virtual environment the user is in.
- Pytest allows me and any other users to ensure that the code is working properly. I have written pre-train and post-train test cases so that I can cover the data, the functions in the model, and the expected behavior of the model.
- Lint tools like black, isort and flake8 format and flag inconsistencies in the code, the docstrings and the imports in accordance with PEP8. I hope this improves readability and ease of use for other people using the application.
- The model is also containerised in Docker so we can avoid the "it only runs on my machine" problem. This is done in the event that the bash script fails to run the application for some reason.

Parting words

Back to content page

Thank you for reading all the way to the end of the README! I hope that everything is according to your expectations. I had fun practising what I have learnt, especially the software engineering aspects of it. Many tutorials or courses on data science stop after you score the model! Thank you for allowing me to participate!
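A minimal sketch of how these metrics can be pulled together with scikit-learn; the label arrays are hypothetical placeholders, while the project itself uses sklearn's classification report as mentioned above:

from sklearn.metrics import classification_report, roc_auc_score

# Hypothetical labels and predictions for illustration
y_test = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
y_prob = [0.2, 0.9, 0.4, 0.1, 0.8]  # predicted probability of the positive class

# Recall, precision, F1 and accuracy in one report
print(classification_report(y_test, y_pred))
# ROC AUC needs scores/probabilities rather than hard labels
print("ROC AUC:", roc_auc_score(y_test, y_prob))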
aiapy
aiapy is a Python package for analyzing data from the Atmospheric Imaging Assembly (AIA) instrument onboard NASA's Solar Dynamics Observatory (SDO) spacecraft. For more information, see the aiapy documentation. For some examples of using aiapy, see our gallery.

Citing

If you use aiapy in your scientific work, we would appreciate you citing it in your publications. The latest citation information can be found in the documentation, or obtained with aiapy.__citation__.

Contributing

aiapy is open source software, built on top of other open source software, and we'd love to have you contribute. Please check out the contributing guide for more information on how to contribute to aiapy.
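For example, the citation information mentioned above can be printed directly:

import aiapy

# Print the latest citation information bundled with the package
print(aiapy.__citation__)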
aiarc
aiarc

Why

I prototyped novel and good algorithms for chemistry in 2021, and it took me two years to get them publication ready. Of course, I did other stuff like thesis, work, exams etc. However, the development time was far too long.

What

Preliminaries and Learnings

I need my own framework:
- I learned from aiarc that I need more flexibility for the major parts
- I need a directory structure and saving structure to properly distribute the files
- there could be an initializer
- methods should be imported more easily through the __init__ files
- testing should be more rigorous (in each new project)

The package is aimed at investigating algorithms faster. The focus is on making algorithms and sparing out the rest, and thus on an easy interface for investigation. PyTorch Geometric gives a good guideline on how to do this. It is about more than that, though, and also about producing plots to score the overall algorithm.

aiarc is aimed at improving rapid prototyping of AI algorithms on graphs (time series, molecules, networks etc.) for building novel simulation applications fast and at scale. The package shall provide production-grade algorithms in an automated way, but with user interaction.

Lessons learned from other projects

The lessons I learned from other projects were:
- Never trust proprietary software that is used for politics, e.g. Microsoft, Nvidia drivers, Canonical etc.
- Avoid trusting open-source software that is aimed at upselling a paid software distribution (e.g. Canonical, or security software)
- Code high quality instead of coding fast -> is faster in the end
- Try to code consistently instead of a lot at once

How it should work

Work on a project basis -> provide a framework to start a project. A project needs different things:
- Configuration
- Training
- Monitoring
- Logging
- Analysis

Foundations

I now have some years of experience trying to model in regulated environments such as drug design, risk assessment, health care and chemistry. Packages I have built in that context are chembee and aiarc. An architectural pattern became prevalent, such that I found it in torchsr too (which has had multiple years of development).

Now, with aiarc, we aim to take modelling to the next stage.

Packages to use: jax, pytorch-gym (?), torch, torchsr

torchsr

Super-Resolution Networks for Pytorch

Super-resolution is a process that increases the resolution of an image, adding additional details. Methods using neural networks give the most accurate results, much better than other interpolation methods. With the right training, it is even possible to make photo-realistic images.

For example, here is a low-resolution image, magnified x4 by a neural network, next to a high-resolution image of the same object.

In this repository, you will find:
- the popular super-resolution networks, pretrained
- common super-resolution datasets
- a unified training script for all models

Models

The following pretrained models are available. Click on the links for the paper: EDSR, CARN, RDN, RCAN, NinaSR

Newer and larger models perform better: the most accurate models are EDSR (huge), RCAN and NinaSR-B2. For practical applications, I recommend a smaller model, such as NinaSR-B1.

Expand benchmark results

Set5 results

Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM)
carn | 1.59 | 37.88 / 0.9600 | 34.32 / 0.9265 | 32.14 / 0.8942
carn_m | 0.41 | 37.68 / 0.9594 | 34.06 / 0.9247 | 31.88 / 0.8907
edsr_baseline | 1.37 | 37.98 / 0.9604 | 34.37 / 0.9270 | 32.09 / 0.8936
edsr | 40.7 | 38.19 / 0.9609 | 34.68 / 0.9293 | 32.48 / 0.8985
ninasr_b0 | 0.10 | 37.72 / 0.9594 | 33.96 / 0.9234 | 31.77 / 0.8877
ninasr_b1 | 1.02 | 38.14 / 0.9609 | 34.48 / 0.9277 | 32.28 / 0.8955
ninasr_b2 | 10.0 | 38.21 / 0.9612 | 34.61 / 0.9288 | 32.45 / 0.8973
rcan | 15.4 | 38.27 / 0.9614 | 34.76 / 0.9299 | 32.64 / 0.9000
rdn | 22.1 | 38.12 / 0.9609 | 33.98 / 0.9234 | 32.35 / 0.8968

Set14 results

Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM)
carn | 1.59 | 33.57 / 0.9173 | 30.30 / 0.8412 | 28.61 / 0.7806
carn_m | 0.41 | 33.30 / 0.9151 | 30.10 / 0.8374 | 28.42 / 0.7764
edsr_baseline | 1.37 | 33.57 / 0.9174 | 30.28 / 0.8414 | 28.58 / 0.7804
edsr | 40.7 | 33.95 / 0.9201 | 30.53 / 0.8464 | 28.81 / 0.7872
ninasr_b0 | 0.10 | 33.24 / 0.9144 | 30.02 / 0.8355 | 28.28 / 0.7727
ninasr_b1 | 1.02 | 33.71 / 0.9189 | 30.41 / 0.8437 | 28.71 / 0.7840
ninasr_b2 | 10.0 | 34.00 / 0.9206 | 30.53 / 0.8461 | 28.80 / 0.7863
rcan | 15.4 | 34.13 / 0.9216 | 30.63 / 0.8475 | 28.85 / 0.7878
rdn | 22.1 | 33.71 / 0.9182 | 30.07 / 0.8373 | 28.72 / 0.7846

DIV2K results (validation set)

Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM) | 8x (PSNR/SSIM)
carn | 1.59 | 36.08 / 0.9451 | 32.37 / 0.8871 | 30.43 / 0.8366 | N/A
carn_m | 0.41 | 35.76 / 0.9429 | 32.09 / 0.8827 | 30.18 / 0.8313 | N/A
edsr_baseline | 1.37 | 36.13 / 0.9455 | 32.41 / 0.8878 | 30.43 / 0.8370 | N/A
edsr | 40.7 | 36.56 / 0.9485 | 32.75 / 0.8933 | 30.73 / 0.8445 | N/A
ninasr_b0 | 0.10 | 35.77 / 0.9428 | 32.06 / 0.8818 | 30.09 / 0.8293 | 26.60 / 0.7084
ninasr_b1 | 1.02 | 36.35 / 0.9471 | 32.51 / 0.8892 | 30.56 / 0.8405 | 26.96 / 0.7207
ninasr_b2 | 10.0 | 36.52 / 0.9482 | 32.73 / 0.8926 | 30.73 / 0.8437 | 27.07 / 0.7246
rcan | 15.4 | 36.61 / 0.9489 | 32.78 / 0.8935 | 30.73 / 0.8447 | 27.17 / 0.7292
rdn | 22.1 | 36.32 / 0.9468 | 32.04 / 0.8822 | 30.61 / 0.8414 | N/A

B100 results

Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM)
carn | 1.59 | 32.12 / 0.8986 | 29.07 / 0.8042 | 27.58 / 0.7355
carn_m | 0.41 | 31.97 / 0.8971 | 28.94 / 0.8010 | 27.45 / 0.7312
edsr_baseline | 1.37 | 32.15 / 0.8993 | 29.08 / 0.8051 | 27.56 / 0.7354
edsr | 40.7 | 32.35 / 0.9019 | 29.26 / 0.8096 | 27.72 / 0.7419
ninasr_b0 | 0.10 | 31.97 / 0.8974 | 28.90 / 0.8000 | 27.36 / 0.7290
ninasr_b1 | 1.02 | 32.24 / 0.9004 | 29.13 / 0.8061 | 27.62 / 0.7377
ninasr_b2 | 10.0 | 32.32 / 0.9014 | 29.23 / 0.8087 | 27.71 / 0.7407
rcan | 15.4 | 32.39 / 0.9024 | 29.30 / 0.8106 | 27.74 / 0.7429
rdn | 22.1 | 32.25 / 0.9006 | 28.90 / 0.8004 | 27.66 / 0.7388

Urban100 results

Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM)
carn | 1.59 | 31.95 / 0.9263 | 28.07 / 0.849 | 26.07 / 0.78349
carn_m | 0.41 | 31.30 / 0.9200 | 27.57 / 0.839 | 25.64 / 0.76961
edsr_baseline | 1.37 | 31.98 / 0.9271 | 28.15 / 0.852 | 26.03 / 0.78424
edsr | 40.7 | 32.97 / 0.9358 | 28.81 / 0.865 | 26.65 / 0.80328
ninasr_b0 | 0.10 | 31.33 / 0.9204 | 27.48 / 0.8374 | 25.45 / 0.7645
ninasr_b1 | 1.02 | 32.48 / 0.9319 | 28.29 / 0.8555 | 26.25 / 0.7914
ninasr_b2 | 10.0 | 32.91 / 0.9354 | 28.70 / 0.8640 | 26.54 / 0.8008
rcan | 15.4 | 33.19 / 0.9372 | 29.01 / 0.868 | 26.75 / 0.80624
rdn | 22.1 | 32.41 / 0.9310 | 27.49 / 0.838 | 26.36 / 0.79460

All models are defined in torchsr.models. Other useful tools to augment your models, such as self-ensemble methods and tiling, are present in torchsr.models.utils.

Datasets

The following datasets are available. Click on the links for the project page: DIV2K, RealSR, Flickr2K, REDS, Set5, Set14, B100, Urban100

All datasets are defined in torchsr.datasets. They return a list of images, with the high-resolution image followed by downscaled or degraded versions. Data augmentation methods are provided in torchsr.transforms.

Datasets are downloaded automatically when using the download=True flag, or by running the corresponding script, i.e. ./scripts/download_div2k.sh.

Usage

from torchsr.datasets import Div2K
from torchsr.models import ninasr_b0
from torchvision.transforms.functional import to_pil_image, to_tensor

# Div2K dataset
dataset = Div2K(root="./data", scale=2, download=False)

# Get the first image in the dataset (High-Res and Low-Res)
hr, lr = dataset[0]

# Download a pretrained NinaSR model
model = ninasr_b0(scale=2, pretrained=True)

# Run the Super-Resolution model
lr_t = to_tensor(lr).unsqueeze(0)
sr_t = model(lr_t)
sr = to_pil_image(sr_t.squeeze(0))
sr.show()

Expand more examples

from torchsr.datasets import Div2K
from torchsr.models import edsr, rcan
from torchsr.models.utils import ChoppedModel, SelfEnsembleModel
from torchsr.transforms import ColorJitter, Compose, RandomCrop

# Div2K dataset, cropped to 256px, with color jitter
dataset = Div2K(root="./data", scale=2, download=False,
                transform=Compose([
                    RandomCrop(256, scales=[1, 2]),
                    ColorJitter(brightness=0.2),
                ]))

# Pretrained RCAN model, with tiling for large images
model = ChoppedModel(
    rcan(scale=2, pretrained=True), scale=2,
    chop_size=400, chop_overlap=10)

# Pretrained EDSR model, with self-ensemble method for higher quality
model = SelfEnsembleModel(edsr(scale=2, pretrained=True))

Training

A script is available to train the models from scratch, evaluate them, and much more. It is not part of the pip package, and requires additional dependencies. More examples are available in scripts/.

pip install piq tqdm tensorboard  # Additional dependencies
python -m torchsr.train -h
python -m torchsr.train --arch edsr_baseline --scale 2 --download-pretrained --images test/butterfly.png --destination results/
python -m torchsr.train --arch edsr_baseline --scale 2 --download-pretrained --validation-only
python -m torchsr.train --arch edsr_baseline --scale 2 --epochs 300 --loss l1 --dataset-train div2k_bicubic

You can evaluate models from the command line as well. For example, for EDSR with the paper's PSNR evaluation:

python -m torchsr.train --validation-only --arch edsr_baseline --scale 2 --dataset-val set5 --chop-size 400 --download-pretrained --shave-border 2 --eval-luminance

Acknowledgements

Thanks to the people behind torchvision and EDSR, whose work inspired this repository. Some of the models available here come from EDSR-PyTorch and CARN-PyTorch.

To cite this work, please use:

@misc{torchsr,
  author = {Gabriel Gouvine},
  title = {Super-Resolution Networks for Pytorch},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Coloquinte/torchSR}},
  doi = {10.5281/zenodo.4868308}
}

@misc{ninasr,
  author = {Gabriel Gouvine},
  title = {NinaSR: Efficient Small and Large ConvNets for Super-Resolution},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Coloquinte/torchSR/blob/main/doc/NinaSR.md}},
  doi = {10.5281/zenodo.4868308}
}
aiarena21
What is this?

A small game for the Monash Competitive Programming Contest AIArena21. If you are competing, you should have received a document containing all the documentation required. Please refer to that.

Contributing

pip install -e ./
aiarena21 -h

Building

pip install twine
python setup.py sdist bdist_wheel
twine upload dist/*
ai-arena-21
What is this?

A small game for the Monash Competitive Programming Contest AIArena21. If you are competing, you should have received a document containing all the documentation required. Please refer to that.

Contributing

pip install -e ./
aiarena21 -h

Building

pip install twine
python setup.py sdist bdist_wheel
twine upload dist/*
aiarena-gym
AI Arena Python Environment

To get started with our python environment you can run the training.py file. This file shows you how to do a few things in our environment:

- Initialize a new model
- Import a pretrained model
- Set up the game environment
- Run training with one-sided and selfplay reinforcement learning
- Save your model in the format that works with our researcher platform

We have set you up with a starter model in the starter_model directory. This is a simple Policy Gradient that implements a version of the REINFORCE algorithm (a generic sketch of such a policy follows below). We encourage you to replace this with your own models!

Additionally, we set up some basic training loops in the simulation_methods.py file. Feel free to change these up and make them your own!

NOTE: There are two variables in the training.py file which you should not change because our game requires these to be constant:

- n_features: This is the dimensionality of the state
- n_actions: This is the dimensionality of the policy

Lastly, we have included the rules-based agent agent_sihing.py (the researcher platform benchmark) in case you want to train specifically against it. But be careful about overfitting, because we will introduce more benchmarks which require generalization...
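For orientation, a minimal REINFORCE-style policy network of the kind the starter model describes; this is a generic PyTorch sketch, not the repo's actual starter_model, and the n_features/n_actions values are placeholders:

import torch
import torch.nn as nn

n_features, n_actions = 10, 4  # placeholders: use the constants defined in training.py

policy = nn.Sequential(
    nn.Linear(n_features, 64),
    nn.ReLU(),
    nn.Linear(64, n_actions),
    nn.Softmax(dim=-1),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(states, actions, returns):
    """One REINFORCE step: maximize log pi(a|s) weighted by the episode return.

    states: float tensor (T, n_features); actions: long tensor (T,);
    returns: float tensor (T,) of discounted returns.
    """
    probs = policy(states)
    log_probs = torch.log(probs.gather(1, actions.unsqueeze(1)).squeeze(1))
    loss = -(log_probs * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()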
aiartchan-pycocotools
No description available on PyPI.
ai-artist
I. INTRODUCTION

ai_artist makes generating images easy using the Stable Diffusion AI model.

II. HOW TO USE THIS PROJECT

2.1. Install this package

Use pip to install the pre-built package on PyPI:

pip install ai_artist

If you want to use the latest ai_artist version instead of the stable one, you can install it from the source with the following command:

pip install git+https://github.com/thinh-vu/ai_artist.git@main

(*) You might need to insert a ! before your command when running terminal commands on Google Colab.

2.2. Set up your project

Import the whole package to your project:

from ai_artist import *

Install dependencies:

!pip install transformers

Set up the environment:

initialize()

Save your Huggingface login info to use the pre-trained model:

login('YOUR_HUGGINGFACE_KEY')

Set up the pipeline:

pipe = pipegen()

2.3. Start generating images

Provide your image description to the prompt:

image_gen("YOUR_IMAGE_DESCRIPTION", pipe)

III. REFERENCES

3.1. Get a HuggingFace API key

Generate a token key with read permission. Read the doc here.

About Huggingface

Huggingface is a community and data science platform that provides:
- Tools that enable users to build, train and deploy ML models based on open source (OS) code and technologies.
- A place where a broad community of data scientists, researchers, and ML engineers can come together and share ideas, get support and contribute to open source projects.

3.2. Google Colab and GPU runtime are highly recommended

Go to the Google Colab menu: select Runtime > Change runtime type and make sure that GPU has been chosen. You can run this AI model way faster with a GPU on Google Colab than on a normal CPU or your personal computer.

Stable Diffusion & StabilityAI
- Stable Diffusion on Github: here
- Stable Diffusion prompt guide and examples: here

IV. 🙋‍♂️ CONTACT INFORMATION

If you want to support my open-source projects, you can "buy me a coffee" via Patreon or Momo e-wallet (VN). Your support will help to maintain my blog hosting fee and to develop high-quality content.

Momo QR
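The steps above, gathered into one script; a sketch using only the documented initialize/login/pipegen/image_gen functions, with the key and prompt as placeholders:

# Assumes a GPU runtime (e.g. Google Colab) and `pip install ai_artist transformers`
from ai_artist import *

initialize()                   # set up the environment
login('YOUR_HUGGINGFACE_KEY')  # placeholder: your HuggingFace read token
pipe = pipegen()               # build the Stable Diffusion pipeline
image_gen("a watercolor fox in a forest", pipe)  # placeholder prompt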
ai-aside
ai-aside

Purpose

This plugin holds LLM related blocks and tools, initially the summary XBlock aside but eventually more options.

Getting Started

Developing

One Time Setup

```
# Clone the repository
git clone git@github.com:openedx/ai-aside.git
cd ai-aside

# Set up a virtualenv using virtualenvwrapper with the same name as the repo and activate it
mkvirtualenv -p python3.8 ai-aside
```

Local testing

To test your changes locally, you will need to install the package from your local branch into edx-platform. For example, if using devstack, copy or clone your branch into <devstack-parent>/src/ai-aside. Bring up the LMS and CMS containers in devstack, and run this make target in the ai-aside directory:

```
make install-local
```

You should see it uninstall any existing ai-aside and install your local source, for both the CMS and LMS. The plug-in configuration will automatically be picked up once installed. Changes will be hot reloaded after the next LMS or CMS restart.

If you would like to manually install ai-aside, go to your lms or cms shell and run:

```
pip install -e /edx/src/ai-aside
```

Run Migrations

You will also need to run migrations for local testing. Using the same lms or cms shell as before, run:

```
./manage.py lms migrate ai_aside
```

Testing in Docker with AI-spot

If you are running both devstack and a local instance of the supporting ai-spot in docker, you need two pieces of special setup to let ai-spot call the aside handler and retrieve content.

The first is to connect ai-spot to the devstack network with a docker command:

```
docker network connect devstack_default ai-spot-server-1
```

The second is to change the handler URLs generated by ai-aside to a URL that is accessible by ai-spot in the same docker. This is already set up for you in summaryhook_aside/settings/devstack.py. If your AI service is running locally outside of docker, you will need to change that setting.

Enabling the Aside

For the summary aside to work, you will have to make these changes in the LMS admin:

- You must create an XBlockAsidesConfig (admin URL: /admin/lms_xblock/xblockasidesconfig/). This model has a list of blocks you do not want asides to apply to (which can be left alone) and an enabled setting that, unsurprisingly, should be True.
- You must enable a course waffle flag for each course you want to summarize. summaryhook.summaryhook_enabled is the main one; summaryhook_enabled.summaryhook_staff_only can be used if you only want staff to see it.
- You must enable the course waffle flag summaryhook.summaryhook_summaries_configuration if you want to enable/disable the summary through its settings. If this flag is enabled, you can enable/disable the courses and the blocks via their settings.

Aside Settings API

There are some endpoints that can be used to pinpoint units to be either enabled or disabled based on their configs. The settings work as follows:

- If a course is enabled, the summary for all the blocks of that course is enabled by default.
- If a course is disabled or the setting does not exist, then the summary for all the blocks of that course is disabled by default.
- If a block has its own settings, they override any other setting.
- If a block does not have any settings saved, the enabled state falls back to the course's enabled state mentioned above.

The endpoints for updating these settings are:

Fetch settings

| Method | Path | Responses |
|--------|------|-----------|
| GET | ai_aside/v1/:course_id | Code 200: { "success": true } / Code 400: { "success": false, "message": "(description)" } / Code 404: { "success": false } |
| GET | ai_aside/v1/:course_id/:unit_id | |

Update settings

| Method | Path | Payload | Responses |
|--------|------|---------|-----------|
| POST | ai_aside/v1/:course_id | { "enabled": true\|false } | Code 200: { "success": true } / Code 400: { "success": false, "message": "(description)" } |
| POST | ai_aside/v1/:course_id/:unit_id | { "enabled": true\|false } | |

Delete settings

| Method | Path | Responses |
|--------|------|-----------|
| DELETE | ai_aside/v1/:course_id | Code 200: { "success": true } / Code 400: { "success": false, "message": "(description)" } / Code 404: { "success": false } |
| DELETE | ai_aside/v1/:course_id/:unit_id | |
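For example, a quick check-and-toggle session against these endpoints might look like the sketch below (the host, course ID, unit ID, and token are placeholders; the JWT header follows the JwtAuthentication checks noted in the changelog, but your deployment's auth setup may differ):

```python
import requests

BASE = "http://localhost:18000"              # placeholder LMS host
COURSE = "course-v1:Org+Course+Run"          # placeholder course id
HEADERS = {"Authorization": "JWT <token>"}   # placeholder credentials

# Fetch the current course-level summary setting
resp = requests.get(f"{BASE}/ai_aside/v1/{COURSE}", headers=HEADERS)
print(resp.status_code, resp.json())

# Enable summaries for a single unit, overriding the course default
resp = requests.post(
    f"{BASE}/ai_aside/v1/{COURSE}/<unit_id>",
    json={"enabled": True},
    headers=HEADERS,
)
print(resp.status_code, resp.json())
```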
Every time you develop something in this repo

```
# Activate the virtualenv
workon ai-aside

# Grab the latest code
git checkout main
git pull

# Install/update the dev requirements
make requirements

# Run the tests and quality checks (to verify the status before you make any changes)
make validate

# Make a new branch for your changes
git checkout -b <your_github_username>/<short_description>

# Using your favorite editor, edit the code to make your change.
vim ...

# Run your new tests
pytest ./path/to/new/tests

# Run all the tests and quality checks
make validate

# Commit all your changes
git commit ...
git push

# Open a PR and ask for review.
```

Deploying

This plugin is deployed on edx.org via EDXAPP_EXTRA_REQUIREMENTS.

License

The code in this repository is licensed under the AGPL 3.0 unless otherwise noted. Please see LICENSE.txt for details.

Contributing

Contributions are very welcome. Please read How To Contribute for details.

This project is currently accepting all types of contributions: bug fixes, security fixes, maintenance work, or new features. However, please make sure to have a discussion about your new feature idea with the maintainers prior to beginning development to maximize the chances of your change being accepted. You can start a conversation by creating a new issue on this repo summarizing your idea.

The Open edX Code of Conduct

All community members are expected to follow the Open edX Code of Conduct.

People

The assigned maintainers for this component and other project details may be found in Backstage. Backstage pulls this data from the catalog-info.yaml file in this repo.

Reporting Security Issues

Please do not report security issues in public.
Please email [email protected].

Change Log

Unreleased

3.6.2 — 2023-10-12
- Handle rare blocks missing dates when calculating last updated
- Remove log of expected "not here" exception during config

3.6.1 — 2023-10-10
- Resolve scenario where a user has no associated enrollment value

3.6.0 – 2023-10-05
- Include user role in summary hook HTML.
- Add make install-local target for easy devstack installation.

3.5.0 – 2023-09-04
- Add edx-drf-extensions lib.
- Add JwtAuthentication checks before each request.
- Add SessionAuthentication checks before each request.
- Add HasStudioWriteAccess permissions checks before each request.

3.4.0 – 2023-08-30
- Include last updated timestamp in summary hook HTML, derived from the blocks. Also somewhat reformats timestamps in the handler return to conform to the ISO standard.

3.3.1 – 2023-08-21
- Remove the no-longer-needed first waffle flag summaryhook_enabled

3.3.0 – 2023-08-16
Features
- Add xpert summaries configuration by default for units

3.2.0 – 2023-07-26
Features
- Added the checks for the module settings behind the waffle flag summaryhook.summaryhook_summaries_configuration.
- Added "is this course configurable" endpoint
- Error suppression logs now include block ID
- Missing video transcript is caught earlier in content fetch

3.1.0 – 2023-07-20
Features
- Added API endpoints for updating settings for courses and modules (enable/disable for now) (Has migrations)

3.0.1 – 2023-07-20
- Add positive log when summary fragment decides to inject

3.0.0 – 2023-07-16
Features
- Summary content handler now requires a staff user identity, otherwise returns 403. This is a breaking change.
- Added models to summaryhook_aside (Has migrations)
- Catch exceptions in a couple of locations so the aside cannot crash content.

2.0.2 – 2023-07-05
Fix
- Updated HTML parser to remove tags with their content for specific cases like <script> or <style>.

2.0.1 – 2023-06-29
Fix
- Fix transcript format request and conversion

2.0.0 – 2023-06-28
Added
- Adds a handler endpoint to provide summarizable content
- Improves content length checking using that summarizable content

1.2.1 – 2023-05-19
Fixes
- Fix summary-aside settings package

1.2.0 – 2023-05-11
Added
- Porting over summary-aside from edx-arch-experiments version 1.2.0
ai-assignment-marker
What

This is a test of creating an AI marker for first-year programming assignments. The particular assignment example is a simple converter that takes in either binary or decimal numbers and converts them to the other.

Why

In later papers, using unit testing frameworks and giving out grades based on them can be done successfully, as the students are more confident with programming and can handle having to write a program with exact output. However, in first year the focus is still on programming itself, so the challenge of the assignment is the logic of the program and not the exact format of the output (unless that is part of the logic!).

This would mean that, for example, the program could output "Converting binary to decimal answer is 10" or "Converting to decimal: 10" and both would be completely acceptable.

How

The goal of the AI marker is to take in a set of test cases and run the students' programs against various unit tests; however, the grading and marking of these unit tests will be done with LLMs.
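In outline, a marker like this could run each submission and let an LLM judge whether the output conveys the right answer regardless of formatting. Below is a minimal sketch of that idea (llm_judge is a hypothetical callable standing in for whatever LLM API is used; it is not part of this package):

```python
import subprocess

def run_student_program(path: str, stdin_text: str) -> str:
    """Run a student's converter with the given input and capture its stdout."""
    result = subprocess.run(
        ["python3", path],
        input=stdin_text,
        capture_output=True,
        text=True,
        timeout=10,
    )
    return result.stdout

def mark_case(path: str, case_input: str, expected: str, llm_judge) -> bool:
    """Ask an LLM whether the output communicates the expected answer,
    regardless of its exact wording or format."""
    output = run_student_program(path, case_input)
    prompt = (
        f"Input: {case_input}\n"
        f"Expected answer: {expected}\n"
        f"Student output: {output}\n"
        "Does the student output communicate the expected answer? Answer yes or no."
    )
    return llm_judge(prompt).strip().lower().startswith("yes")
```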
ai-assist
A command-line AI assistant that can do math, search, and run Python and shell commands.

Under experiment. Use at your own risk.

Install

```
pip install ai-assist
```

Setup

You will need to put the following environment variables in a .env file in your home directory.

```
# used for calling OpenAI's API
OPENAI_API_KEY=...

# used for calling Google Search API
GOOGLE_API_KEY=...
GOOGLE_CSE_ID=...
```

You can generate/get keys from the links below:

- OPENAI_API_KEY from https://platform.openai.com/account/api-keys
- GOOGLE_API_KEY from https://console.cloud.google.com/apis/credentials
- GOOGLE_CSE_ID from https://programmablesearchengine.google.com/controlpanel/all

Usage

```
$ ai
Welcome to the AI. I do math, search, run python/bash and more. Type 'exit' to quit.
[USER] << What's the largest prime number less than 1000?
...
[AI] >> 997
```
aiassistant
Python Package Example

This package was created by generally following the Packaging Python Projects tutorial, with the addition of some pipenv setup to manage virtual environments.

How this package was created

1. Install pipenv, build, and twine if not already installed.
2. Create the directory structure for the package like that below, where examplepackagefb1258 is replaced with your package's name. This name must be uniquely yours when uploaded to PyPI. Better to avoid hyphens or underline characters (- or _) in the package name, as these can create problems when importing. The parent directory name (the repository name) - python-package-example in this case - is not relevant to the package name.

```
python-package-example/
|____README.md
|____LICENSE
|____pyproject.toml
|____tests
|____src
| |____examplepackagefb1258
| | |______init__.py
| | |______main__.py
| | |____wisdom.py
```

3. Make __init__.py an empty file.
4. Enter the text of a copyright license of your choosing into LICENSE.
5. Add settings in pyproject.toml suitable for a setuptools-based build and add metadata fields to this file - see the example in this repository.
6. Put your own custom module code into src/examplepackagefb1258/wisdom.py or whatever filename(s) you choose for the module(s) that live within your package.
7. Optionally add a __main__.py file to the package directory, if you want to be able to run the package as a script from the command line, e.g. python -m examplepackagefb1258 (see the sketch after this list).
8. Build the project by running python -m build from the same directory where the pyproject.toml file is located.
9. Verify that the built .tar archive has the files you expect your package to have (including any important non-code files) by running the command: tar --list -f dist/examplepackagefb1258-0.0.7.tar.gz, where examplepackagefb1258-0.0.7 is replaced with your own package name and version.
10. Create an account on TestPyPI, where one can upload to a test repository instead of the production PyPI repo.
11. Create a new API token on TestPyPI with the "Scope" set to "Entire account". Save a copy of the token somewhere safe.
12. Upload your package to the TestPyPI repository using twine, e.g. twine upload -r testpypi dist/*.
13. twine will output the URL of your package on the PyPI website - load that URL in your web browser to see your package published - make sure the README.md file looks nice on the web site.

Every time you change the code in your package, you will need to rebuild and reupload it to PyPI. You will need to build from a clean slate and update the version number to achieve this:

- delete the autogenerated dist directory
- delete the autogenerated src/*.egg-info directory
- update the version number in pyproject.toml and anywhere else it is mentioned (do a find/replace)
- build the package again with python -m build
- upload the package again with twine upload -r testpypi dist/*

Repeat as many times as necessary until the package works as expected. Once complete, upload to the real PyPI instead of the TestPyPI repository.

If updating version numbers is tedious, you may consider using bumpver - a tool that can automate some parts of updating version numbers.
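For the optional __main__.py step above, a minimal sketch could look like this (it reuses the wisdom.get() example this README uses elsewhere; adapt the names to your own package):

```python
# src/examplepackagefb1258/__main__.py
# Lets the package run as a script: python -m examplepackagefb1258
from examplepackagefb1258 import wisdom

if __name__ == "__main__":
    print(wisdom.get())
```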
How to install and use this package

Try installing and using your package in a separate Python project:

1. Create a pipenv-managed virtual environment and install the latest version of your package: pipenv install -i https://test.pypi.org/simple/ examplepackagefb1258==0.0.7. (Note that if you've previously created a pipenv virtual environment in the same directory, you may have to delete the old one first. Find out where it is located with the pipenv --venv command.)
2. Activate the virtual environment: pipenv shell.
3. Create a Python program file that imports your package and uses it, e.g. from examplepackagefb1258 import wisdom and then print(wisdom.get()) (replace wisdom and get() with any module name and function that exists in your package).
4. Run the program: python3 my_program_filename.py.
5. Exit the virtual environment: exit.

Try running the package directly:

1. Create and activate the pipenv virtual environment as before.
2. Run the package directly from the command line: python3 -m examplepackagefb1258. This should run the code in the __main__.py file.
3. Exit the virtual environment.

How to run unit tests

Simple example unit tests are included within the tests directory. To run these tests...

1. Install pytest in a virtual environment.
2. Run the tests from the main project directory: python3 -m pytest.
3. Tests should never fail. Any failed tests indicate that the production code is behaving differently from the behavior the tests expect.

Pro tip

While working on the package code, and verifying it behaves as expected, it can be helpful to install the package in "editable" mode so that changes to the package are immediately updated in the virtual environment. To do this, run pipenv install -e . from the main project directory.

Continuous integration

This project has a continuous integration workflow that builds and runs unit tests automatically with every push of the code to GitHub.
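To go with the unit-testing section above, a minimal test file might look like the sketch below (it assumes wisdom.get() returns a string, as in the usage example; adjust the assertion to your module's actual behavior):

```python
# tests/test_wisdom.py — run with: python3 -m pytest
from examplepackagefb1258 import wisdom

def test_get_returns_nonempty_string():
    result = wisdom.get()
    assert isinstance(result, str)
    assert len(result) > 0
```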
ai-assistant-by-timurkarev
It is my own package, nothing interesting.
ai-assistent
A package that lets you build and handle a simple NN model for the MNIST digits dataset. It also lets you use a pipeline to output predictions.
aiatools
AIA Tools

AIA Tools is a Python library for interacting with App Inventor Application (AIA) files in Python. It is useful for opening, summarizing, and analyzing AIA files for research inquiries. The query API is inspired by SQLAlchemy.

Installing

```
$ pip install aiatools
```

For development:

```
$ pyenv install 3.6.3
$ pyenv virtualenv 3.6.3 aiatools
$ pyenv activate aiatools
$ pip install -r requirements.txt
$ pip install .
```

Usage Examples

```python
from aiatools import AIAFile

with AIAFile('MyProject.aia') as aia:
    print('Number of screens: %d\n' % len(aia.screens))
    print('Number of components: %d\n' % len(aia.screens['Screen1'].componentiter()))
    print('Number of blocks: %d\n' % len(aia.screens['Screen1'].blockiter()))
    print('Number of event blocks: %d\n' %
          len(aia.screens['Screen1'].blockiter(type='component_event')))
    aia.screens['Screen1'].blocks(type == 'component_event').count(by='event_name')
```

```python
from aiatools import AIAFile
from aiatools.attributes import event_name, type
from aiatools.block_types import *
from aiatools.component_types import *

aia = AIAFile('MyProject.aia')

# Count the number of screens
print(len(aia.screens))

# Count the number of distinct component types used on Screen1
print(aia.screens['Screen1'].components().count(group_by=type))

# Count the number of Button components on Screen1
print(aia.screens['Screen1'].components(type == Button).count())

# Count the number of component_event blocks, grouped by event name
print(aia.screens['Screen1'].blocks(type == component_event).count(group_by=event_name))

# Compute the average depth of the blocks tree in Button.Click handlers
print(aia.screens['Screen1'].blocks((type == component_event) & (event_name == Button.Click)).avg(depth))

# Count the number of blocks referencing a specific component
print(aia.screens['Screen1'].components(name == 'Button1').blocks().count())

# Count the number of event handlers where the event opens another screen
print(aia.blocks(type == component_event).descendants(type == control_openAnotherScreen).count())

# Get the screens where the user has included more than one TinyDB
print(aia.screens().components(type == TinyDB).count(group_by=Screen.name).filter(lambda k, v: v > 1))
```

Selectors

```python
project = AIAFile('project.aia')

project.screens()                             # Select all screens
project.screens('Screen1')                    # Select Screen1
project.screens(Button.any)                   # Select any screen with at least 1 button
project.screens(Control.open_another_screen)  # Select any screen containing an open_another_screen block
project.screens(Component.Name == 'TinyDb1')  # Select any screen containing a component named TinyDb1
```
```python
class Block(object):
    """
    :py:class:`Block` represents an individual block in the blocks workspace.

    Arguments:
        id_: The block ID
        type_ (:py:class:`BlockType`): The block type
    """
    def __init__(self, id_, type_):
        self.id = id_
        self.type = type_
        self.parent = None
        self.children = []


class Component(object):
    """
    :py:class:`Component` represents a component in the designer view.

    Arguments:
        id_
        type_ (:py:class:`ComponentType`)
    """
    def __init__(self, id_, type_):
        self.id = id_
        self.type = type_
        self.properties = {}


class ComponentContainer(Component):
    def __init__(self, id_, type_):
        super(ComponentContainer, self).__init__(id_, type_)
        self.components = []


class BlockType(object):
    def __init__(self, name):
        self.name = name
        self.mutators = []


class ComponentType(object):
    def __init__(self, name, class_name):
        self.name = name
        self.class_name = class_name


class Screen(object):
    def __init__(self, scm=None, bky=None):
        self.name = ''
        self.properties = {}
        self.components = FilterableDict()
        self.blocks = FilterableDict()
        self.top_blocks = FilterableDict()
        if scm is not None:
            self._read_scheme(scm)
        if bky is not None:
            self._read_blocks(bky)


class Project(object):
    def __init__(self, file=None):
        self.name = ''
        self.screens = FilterableDict()
        self.components = FilterableDict()
        self.components.parent = self.screens
        self.blocks = FilterableDict()
        self.blocks.parent = self.screens
        if file is not None:
            self.read(file)


class FilterableDict(dict):
    def __call__(self, filter_):
        # Keep only the entries whose values pass the filter
        return FilterableDict({k: v for k, v in self.items() if filter_(v)})


class Filter(object):
    def __call__(self, o):
        raise NotImplementedError()

    def __and__(self, right):
        return and_(self, right)

    def __or__(self, right):
        return or_(self, right)

    def __eq__(self, right):
        return eq(self, right)

    def __ne__(self, right):
        return ne(self, right)

    def __lt__(self, right):
        return lt(self, right)

    def __gt__(self, right):
        return gt(self, right)

    def __le__(self, right):
        return le(self, right)

    def __ge__(self, right):
        return ge(self, right)


class AndFilter(Filter):
    def __init__(self, l, r):
        self.l = l
        self.r = r

    def __call__(self, o):
        return self.l(o) and self.r(o)


class OrFilter(Filter):
    def __init__(self, l, r):
        self.l = l
        self.r = r

    def __call__(self, o):
        return self.l(o) or self.r(o)


class NotFilter(Filter):
    def __init__(self, expression):
        self.expression = expression

    def __call__(self, o):
        return not self.expression(o)


class EqualFilter(Filter):
    def __init__(self, l, r):
        self.l = l
        self.r = r

    def __call__(self, o):
        return self.l(o) == self.r(o)


class NotEqualFilter(Filter):
    def __init__(self, l, r):
        self.l = l
        self.r = r

    def __call__(self, o):
        return self.l(o) != self.r(o)


class LessThanFilter(Filter):
    def __init__(self, l, r):
        self.l = l
        self.r = r

    def __call__(self, o):
        return self.l(o) < self.r(o)


class GreaterThanFilter(Filter):
    def __init__(self, l, r):
        self.l = l
        self.r = r

    def __call__(self, o):
        return self.l(o) > self.r(o)


class LessThanOrEqualFilter(Filter):
    def __init__(self, l, r):
        self.l = l
        self.r = r

    def __call__(self, o):
        return self.l(o) <= self.r(o)


class GreaterThanOrEqualFilter(Filter):
    def __init__(self, l, r):
        self.l = l
        self.r = r

    def __call__(self, o):
        return self.l(o) >= self.r(o)


class ScreenFilter(Filter):
    pass


class ComponentFilter(Filter):
    pass


class BlockFilter(Filter):
    pass
```

Attributes

- depth - For a component, this is the depth of the component hierarchy rooted at that component. For components that are not containers this value is always 1. For containers and blocks, this is the longest length of the paths from the root node to any of its leaf nodes.
- length - The number of direct descendants of the target. If the target is a component container, it will be the number of direct children. For a block, it will be the number of children.
- children - The list of children for the item(s) in the set. If more than one item is in the set, the children will be provided in the order of their parents.
- mutators - If the block has mutations, a list of strings indicating the types of the mutations, e.g. ['if', 'elseif', 'elseif', 'else'].
- callers - For procedures, the number of caller blocks in the workspace. For variables and component methods and properties, the number of getter blocks.
Aggregation

- max - Maximum value of the filter
- min - Minimum value of the filter
- avg - Average value of the filter
- count - Count of items matching the filter
- median - Median value of the attribute
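Taken together with the query API above, the aggregators chain onto any selection. A small illustrative sketch (the avg/count calls follow the usage examples earlier; the max call is an assumed extension by analogy and may differ from the actual API):

```python
from aiatools import AIAFile
from aiatools.attributes import depth, type
from aiatools.block_types import component_event

aia = AIAFile('MyProject.aia')

print(aia.screens['Screen1'].blocks().avg(depth))     # average block-tree depth
print(aia.screens['Screen1'].blocks().max(depth))     # deepest block tree (assumed)
print(aia.blocks(type == component_event).count())    # total event handlers
```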
aiauthenticator
No description available on PyPI.
aia_utils
No description available on PyPI.
ai-automation
No description available on PyPI.
aiavatar
AIAvatarKit

🥰 Building AI-based conversational avatars lightning fast ⚡️💬

✨ Features

- Live anywhere: VRChat, cluster and any other metaverse platforms, and even devices in the real world.
- Extensible: Unlimited capabilities that depend on you.
- Easy to start: Ready to start conversation right out of the box.

🍩 Requirements

- VOICEVOX API on your computer or a network-reachable machine (Text-to-Speech)
- API key for Speech Services of Google or Azure (Speech-to-Text)
- API key for OpenAI API (ChatGPT)
- Python 3.10 (Runtime)

🚀 Quick start

Install AIAvatarKit.

```
$ pip install aiavatar
```

Make the script as run.py.

```python
import logging
from aiavatar import AIAvatar, WakewordListener

GOOGLE_API_KEY = "YOUR_API_KEY"
OPENAI_API_KEY = "YOUR_API_KEY"
VV_URL = "http://127.0.0.1:50021"
VV_SPEAKER = 46

# Configure root logger
logger = logging.getLogger()
logger.setLevel(logging.INFO)
log_format = logging.Formatter("[%(levelname)s] %(asctime)s : %(message)s")
streamHandler = logging.StreamHandler()
streamHandler.setFormatter(log_format)
logger.addHandler(streamHandler)

# Prompt
system_message_content = """あなたは「joy」「angry」「sorrow」「fun」の4つの表情を持っています。
特に表情を表現したい場合は、文章の先頭に[face:joy]のように挿入してください。

例
[face:joy]ねえ、海が見えるよ![face:fun]早く泳ごうよ。
"""

# Create AIAvatar
app = AIAvatar(
    google_api_key=GOOGLE_API_KEY,
    openai_api_key=OPENAI_API_KEY,
    voicevox_url=VV_URL,
    voicevox_speaker_id=VV_SPEAKER,
    # volume_threshold=2000,  # <- Set to adjust microphone sensitivity
    model="gpt-3.5-turbo",
    system_message_content=system_message_content,
)

# Create WakewordListener
wakewords = ["こんにちは"]

async def on_wakeword(text):
    logger.info(f"Wakeword: {text}")
    await app.start_chat()

wakeword_listener = WakewordListener(
    api_key=GOOGLE_API_KEY,
    volume_threshold=app.volume_threshold,
    wakewords=wakewords,
    on_wakeword=on_wakeword,
    device_index=app.input_device
)

# Start listening
ww_thread = wakeword_listener.start()
ww_thread.join()

# Tips: To terminate with Ctrl+C on Windows, use `while` below instead of `ww_thread.join()`
# while True:
#     time.sleep(1)
```

Start AIAvatar.

```
$ python run.py
```

When you say the wake word "こんにちは" the AIAvatar will respond with "どうしたの?". Feel free to enjoy the conversation afterwards!

If you want to set a face expression on some action, configure as follows:

```python
# Add face expressions
app.avatar_controller.face_controller.faces["on_wake"] = 10
app.avatar_controller.face_controller.faces["on_listening"] = 11
app.avatar_controller.face_controller.faces["on_thinking"] = 12

# Set face while the character is listening to the user's voice
async def set_listening_face():
    await app.avatar_controller.face_controller.set_face("on_listening", 3.0)

app.request_listener.on_start_listening = set_listening_face

# Set face while the character is processing the request
async def set_thinking_face():
    await app.avatar_controller.face_controller.set_face("on_thinking", 3.0)

app.chat_processor.on_start_processing = set_thinking_face

async def on_wakeword(text):
    logger.info(f"Wakeword: {text}")
    # Set face when the wakeword is detected
    await app.avatar_controller.face_controller.set_face("on_wake", 2.0)
    await app.start_chat(request_on_start=text, skip_start_voice=True)
```

🐈 Use in VRChat

- 2 virtual audio devices (e.g. VB-CABLE) are required.
- Multiple VRChat accounts are required to chat with your AIAvatar.

First, run the commands below in a python interpreter to check the audio devices.

```
$ python
>>> from aiavatar import AudioDevice
>>> AudioDevice.list_audio_devices()
Available audio devices:
0: Headset Microphone (Oculus Virt
6: CABLE-B Output (VB-Audio Cable
7: Microsoft サウンドマッパー - Output
8: SONY TV (NVIDIA High Definition
13: CABLE-A Input (VB-Audio Cable A
```

In this example,

- To use VB-Cable-A as the microphone for VRChat, the index for output_device is 13 (CABLE-A Input).
- To use VB-Cable-B as the speaker for VRChat, the index for input_device is 6 (CABLE-B Output). Don't forget to set VB-Cable-B Input as the default output device of Windows OS.

Then edit run.py like below.

```python
# Create AIAvatar
app = AIAvatar(
    GOOGLE_API_KEY,
    OPENAI_API_KEY,
    VV_URL,
    VV_SPEAKER,
    model="gpt-3.5-turbo",
    system_message_content=system_message_content,
    input_device=6,    # Listen sound from VRChat
    output_device=13,  # Speak to VRChat microphone
)
```

You can also set the names of audio devices instead of indexes (partial match, ignore case).

```python
    input_device="CABLE-B Out",     # Listen sound from VRChat
    output_device="cable-a input",  # Speak to VRChat microphone
```

Run it.

```
$ python run.py
```

Launch VRChat in desktop mode on the machine that runs run.py and log in with the account for the AIAvatar. Then set VB-Cable-A as the microphone in the VRChat setting window.

That's all! Let's chat with the AIAvatar. Log in to VRChat on another machine (or Quest) and go to the world the AIAvatar is in.

🟦 Use Azure Listeners

We strongly recommend using AzureWakewordListener and AzureRequestListener, which are more stable than the default listeners. Check examples/run_azure.py that works out-of-the-box.

Install Azure SpeechSDK.

```
$ pip install azure-cognitiveservices-speech
```

Change the script to use AzureRequestListener and AzureWakewordListener.

```python
YOUR_SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"
YOUR_REGION_NAME = "japanwest"

# Create AzureRequestListener
from aiavatar.listeners.azurevoicerequest import AzureVoiceRequestListener

request_listener = AzureVoiceRequestListener(
    YOUR_SUBSCRIPTION_KEY,
    YOUR_REGION_NAME,
)

# Create AIAvatar with AzureRequestListener
app = AIAvatar(
    openai_api_key=OPENAI_API_KEY,
    system_message_content=system_message_content,
    request_listener=request_listener,
    voicevox_url=VV_URL,
    voicevox_speaker_id=VV_SPEAKER,
)

# Create AzureWakewordListener
async def on_wakeword(text):
    logger.info(f"Wakeword: {text}")
    await app.start_chat()

from aiavatar.listeners.azurewakeword import AzureWakewordListener

wakeword_listener = AzureWakewordListener(
    YOUR_SUBSCRIPTION_KEY,
    YOUR_REGION_NAME,
    on_wakeword=on_wakeword,
    wakewords=["こんにちは"]
)
```

To specify the microphone device, set the device_name argument. See Microsoft Learn to know how to check the device UID on each platform:

https://learn.microsoft.com/en-us/azure/ai-services/speech-service/how-to-select-audio-input-devices

We provide a script for MacOS. Just run it in Xcode.

```
Device UID: BuiltInMicrophoneDevice, Name: MacBook Proのマイク
Device UID: com.vbaudio.vbcableA:XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX, Name: VB-Cable A
Device UID: com.vbaudio.vbcableB:XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX, Name: VB-Cable B
```

For example, the UID for the built-in microphone on MacOS is BuiltInMicrophoneDevice. Then, set it as the value of device_name.

```python
request_listener = AzureVoiceRequestListener(
    YOUR_SUBSCRIPTION_KEY,
    YOUR_REGION_NAME,
    device_name="BuiltInMicrophoneDevice"
)

wakeword_listener = AzureWakewordListener(
    YOUR_SUBSCRIPTION_KEY,
    YOUR_REGION_NAME,
    on_wakeword=on_wakeword,
    wakewords=["こんにちは"],
    device_name="BuiltInMicrophoneDevice"
)
```

⚡️ Function Calling

Use chat_processor.add_function to use ChatGPT function calling.
In this example, get_weather will be called autonomously.

```python
import asyncio

# Add function
async def get_weather(location: str):
    await asyncio.sleep(1.0)
    return {"weather": "sunny partly cloudy", "temperature": 23.4}

app.chat_processor.add_function(
    name="get_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {"type": "string"}
        }
    },
    func=get_weather
)
```

And after get_weather is called, a message to get the voice response will be sent to ChatGPT internally.

```json
{
    "role": "function",
    "content": "{\"weather\": \"sunny partly cloudy\", \"temperature\": 23.4}",
    "name": "get_weather"
}
```

🎤 Testing audio I/O

Use the script below to test the audio I/O before configuring AIAvatar.

- Step-by-step audio device configuration.
- Speaks immediately after start if the output device is correctly configured.
- All recognized text will be shown in the console if the input device is correctly configured.
- Just echoes when the wakeword is recognized.

```python
import asyncio
import logging
from aiavatar import (
    AudioDevice,
    VoicevoxSpeechController,
    WakewordListener
)

GOOGLE_API_KEY = "YOUR_API_KEY"
VV_URL = "http://127.0.0.1:50021"
VV_SPEAKER = 46
VOLUME_THRESHOLD = 3000
INPUT_DEVICE = -1
OUTPUT_DEVICE = -1

# Configure root logger
logger = logging.getLogger()
logger.setLevel(logging.INFO)
log_format = logging.Formatter("[%(levelname)s] %(asctime)s : %(message)s")
streamHandler = logging.StreamHandler()
streamHandler.setFormatter(log_format)
logger.addHandler(streamHandler)

# Select input device
if INPUT_DEVICE < 0:
    input_device_info = AudioDevice.get_input_device_with_prompt()
else:
    input_device_info = AudioDevice.get_device_info(INPUT_DEVICE)
input_device = input_device_info["index"]

# Select output device
if OUTPUT_DEVICE < 0:
    output_device_info = AudioDevice.get_output_device_with_prompt()
else:
    output_device_info = AudioDevice.get_device_info(OUTPUT_DEVICE)
output_device = output_device_info["index"]

logger.info(f"Input device: [{input_device}] {input_device_info['name']}")
logger.info(f"Output device: [{output_device}] {output_device_info['name']}")

# Create speaker
speaker = VoicevoxSpeechController(
    VV_URL,
    VV_SPEAKER,
    device_index=output_device
)

asyncio.run(speaker.speak("オーディオデバイスのテスターを起動しました。私の声が聞こえていますか?"))

# Create WakewordListener
wakewords = ["こんにちは"]

async def on_wakeword(text):
    logger.info(f"Wakeword: {text}")
    await speaker.speak(f"{text}")

wakeword_listener = WakewordListener(
    api_key=GOOGLE_API_KEY,
    volume_threshold=VOLUME_THRESHOLD,
    wakewords=wakewords,
    on_wakeword=on_wakeword,
    verbose=True,
    device_index=input_device
)

# Start listening
ww_thread = wakeword_listener.start()
ww_thread.join()
```

⚡️ Use custom listener

It's very easy to add your original listeners. Just make one run on another thread and invoke app.start_chat() when the listener handles its event.

Here is an example of a FileSystemListener that invokes chat when test.txt is found on the file system.

```python
import asyncio
import os
from threading import Thread
from time import sleep

class FileSystemListener:
    def __init__(self, on_file_found):
        self.on_file_found = on_file_found

    def start_listening(self):
        while True:
            # Check for the file every 3 seconds
            if os.path.isfile("test.txt"):
                asyncio.run(self.on_file_found())
            sleep(3)

    def start(self):
        th = Thread(target=self.start_listening, daemon=True)
        th.start()
        return th
```

Use this listener in run.py like below.

```python
# Event handler (run as a coroutine by the listener)
async def on_file_found():
    await app.chat()

# Instantiate
fs_listener = FileSystemListener(on_file_found)
fs_thread = fs_listener.start()

# Wait for finish
fs_thread.join()
```
aib2ofx
aib2ofx

...or how to grab transaction data out of AIB's online interface and format it into an OFX file.

NOTE: The last AIB login update (Feb 2021) made me realise how brittle the overall machinery here is. The code that works around Web Storage API use is ugly and likely to break. The most likely road forward for this project is to decouple it into an ofxstatement plugin and (maybe) a Selenium-powered CSV acquisition script. The former will be easy, the latter will most likely be a nightmare to maintain and install, unless you enjoy having your banking details pipe through an arbitrary docker image.

Time will tell.

Installation

```
python3 -m venv aib2ofx
source aib2ofx/bin/activate
pip3 install aib2ofx
```

This will create a virtualenv for aib2ofx, fetch its code, then install it with all dependencies. Once that completes, you'll find an aib2ofx executable in the bin directory of this new virtualenv.

Usage

Create a ~/.aib2ofx.json file with AIB login details. Set the permission bits to 0600 to prevent other system users from reading it.

```
touch ~/.aib2ofx.json
chmod 0600 ~/.aib2ofx.json
```

It has a JSON format: a single object with one key per AIB login you want to use.

```json
{
    "bradmajors": {
        "regNumber": "12345678",
        "pin": "12345"
    }
}
```

The fields are as follows:

- regNumber: Your AIB registered number.
- pin: Your five digit PIN.

You can put more than one set of credentials in the file; the script will download data for all accounts for all logins.

```json
{
    "bradmajors": {
        "regNumber": "12345678",
        "pin": "12345"
    },
    "janetweiss": {
        "regNumber": "87654321",
        "pin": "54321"
    }
}
```

Note that there's no comma after the last account details.

Once you've prepared that config file, run:

```
aib2ofx -d /output/directory
```

The script should connect to AIB, log in using the provided credentials, iterate through all accounts, and save each of those to a separate file located in /output/directory.

Guarantee

There is none.

I've written that script with my best intentions, it's not malicious, it's not sending the data anywhere, it's not doing anything nasty. I'm using it day to day to get data about my AIB accounts into a financial program that I use. It should work for you as well as it works for me. See the LICENSE file for more details.

Development

aib2ofx works only with python 3.

In order to set up a dev environment, clone the repository, get poetry, and run poetry install. This will create a virtualenv with all dependencies installed. You can activate it with poetry shell.
aibabyllmclient
No description available on PyPI.
aibaklava
Tool to create AI jobs
aiballs-game
No description available on PyPI.