package | package-description |
---|---|
aipalette-nlp | aipalette_nlp is a Python package that contains a list of NLP functions that will be used for future tasks in Ai Palette. Many useful modules and functions will be included in the package. For now, it has a module that consists of tokenizers for different languages, and another module with several functions for text preprocessing, which also includes detecting language. How to Use: Install this package using pip (>> pip install aipalette_nlp) and import it directly in your code. Modules. Module 1: tokenizer. Below is an example of how you can use the word_tokenize function in the tokenizer module, which will automatically detect the input language and call its respective tokenizer.
from aipalette_nlp.tokenizer import word_tokenize
text = "우아아 제 요리에 날개를 달아주는 아름다운 <키친콤마> 식품들이 도착했어요. 저당질, 저탄수화물로 만들어져 건강과 다이어트 그리고 맛까지 한꺼번에 챙길 수 있는 필수템입니다! 처음 호기심에서 시작한 저탄고지 키토식단을 유지한지 어느덧 2년 가까이 되었어요. 저탄고지는 살을 빼기위해 무작정 탄수화물을 끊는다거나 몸에 무리가 갈 수 있는 저칼로리 / 저염식이 아니에요. 내 몸에서 나타나는 반응에 좀더 귀기울이고 끊임없이 공부하고 좋은 음식을 섭취하려고 노력하는 라이프스타일 입니다."
print(word_tokenize(text))
Output: {'tokenized_text': ['우아아', '제', '요리에', '날개를', '달아주는', '아름다운', '<키친콤마>', '식품들이', '도착했어요', '저당질,', '저탄수화물로', '만들어져', '건강과', '다이어트', '그리고', '맛까지', '한꺼번에', '챙길', '수', '있는', '필수템입니ᄃ', 'ᅡ!', '처음', '호기심에서', '시작 한', '저탄고지', '키토식단을', '유지한지', '어느덧', '2년', '가까이', '되었어요', '저탄고지는', '살을', '빼기위해', '무작정', '탄수화물을', '끊는다거나', '몸에', '무리가', '갈', '수', '있는', '저칼로리', '/', '', '저염식이', '아니에요', '내', '몸에서', '나타나는', '반응에', '좀더', '귀기울이고', '끊임없이', '공부하고', '좋은', '음식을', '섭취하려고', '노력하는', '라이프스타일', '입니다']}
Module 2: text_cleaning. Below is an example of how you can use the functions in the text_cleaning module.
from aipalette_nlp.preprocessing import detect_language, clean_text, remove_stopwords
text = """Dinner at @docksidevancouver . Patio season is definitely here!Support your local restaurants.
#foodie #facestuffing #scoutmagazine #vancouvermagazine #dailyhivevancouver #ediblevancouver #eatmagazine #vancouverisawesome #vancouverfoodie #food #foodlover
#curiocityvancouver #foodporn #foodlover #eat #foodgasm #foodinsta #foodinstagram #instafood #instafoodie #foodlover #foodpics #foodiesofinstagram #restaurant #homechef #foodphotography #nomnomnom #georgiastraight #docksiderestaurant #granvilleisland #gnocchi #dinner"""
print("language detected of the given text is : ", detect_language(text))
print(remove_stopwords(text))
print(clean_text(text))
Output: language detected of the given text is : en
dinner @docksidevancouver . patio season definitely here!support local restaurants. #foodie #facestuffing #scoutmagazine #vancouvermagazine #dailyhivevancouver #ediblevancouver #eatmagazine #vancouverisawesome #vancouverfoodie #food #foodlover #curiocityvancouver #foodporn #foodlover #eat #foodgasm #foodinsta #foodinstagram #instafood #instafoodie #foodlover #foodpics #foodiesofinstagram #restaurant #homechef #foodphotography #nomnomnom #georgiastraight #docksiderestaurant #granvilleisland #gnocchi #dinner
{'hashtags': ['foodie', 'facestuffing', 'scoutmagazine', 'vancouvermagazine', 'dailyhivevancouver', 'ediblevancouver', 'eatmagazine', 'vancouverisawesome', 'vancouverfoodie', 'food', 'foodlover', 'curiocityvancouver', 'foodporn', 'foodlover', 'eat', 'foodgasm', 'foodinsta', 'foodinstagram', 'instafood', 'instafoodie', 'foodlover', 'foodpics', 'foodiesofinstagram', 'restaurant', 'homechef', 'foodphotography', 'nomnomnom', 'georgiastraight', 'docksiderestaurant', 'granvilleisland', 'gnocchi', 'dinner'], 'cleaned_text': 'dinner username patio season definitely support local restaurants', 'text_length': 65}
Complete list of tokenizers supported: ['english', 'french', 'italian', 'portuguese', 'spanish', 'swedish', 'turkish', 'russian', 'mandarin', 'thai', 'japanese', 'korean', 'vietnamese', 'german', 'arabic']
Text Processing/Cleaning Functions: The clean_text function from the text_cleaning module does the following steps: replace the hashtags (#______) in the main caption with the original form of the word; replace all the mentioned usernames (@_______) with the word "<username>"; remove punctuation; remove stopwords (using the nltk package); detect language; replace all links/URLs.
Languages supported by our language detector: af, am, an, ar, as, az, be, bg, bn, br, bs, ca, cs, cy, da, de, dz, el, en, eo, es, et, eu, fa, fi, fo, fr, ga, gl, gu, he, hi, hr, ht, hu, hy, id, is, it, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lb, lo, lt, lv, mg, mk, ml, mn, mr, ms, mt, nb, ne, nl, nn, no, oc, or, pa, pl, ps, pt, qu, ro, ru, rw, se, si, sk, sl, sq, sr, sv, sw, ta, te, th, tl, tr, ug, uk, ur, vi, vo, wa, xh, zh, zu |
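For readers who want to see the mechanics, the cleaning steps listed above (hashtag restoration, username masking, URL replacement, punctuation removal) can be approximated with a few standard-library regex passes. This is a hand-rolled sketch of the idea, not aipalette_nlp's actual implementation:

```python
import re

def sketch_clean_text(text):
    """Rough illustration of the cleaning steps described above."""
    # Collect hashtags, then keep the bare word form in the caption
    hashtags = re.findall(r"#(\w+)", text)
    text = re.sub(r"#(\w+)", r"\1", text)
    # Replace @mentions with the placeholder "<username>"
    text = re.sub(r"@\w+", "<username>", text)
    # Replace links/URLs with a placeholder
    text = re.sub(r"https?://\S+", "<url>", text)
    # Remove punctuation (keep the <...> placeholder brackets)
    text = re.sub(r"[^\w\s<>]", "", text)
    return {"hashtags": hashtags, "cleaned_text": text.strip()}

print(sketch_clean_text("Dinner at @docksidevancouver #foodie https://example.com"))
```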
ai-pangu | No description available on PyPI. |
ai-param-validation | Project: Parameter validation utility for Python methods, whether they are standalone or in a class. Install: pip install . Test: pytest tests. See src/Readme.md for more details. See test_class.py for using the utility with a class.
See test_standalone for using it with standalone methods. Contributing: This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the Microsoft Open Source Code of Conduct.
For more information see the Code of Conduct FAQ or
contact [email protected] with any additional questions or comments. Trademarks: This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines.
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies. |
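The description above doesn't show the package's API, so as a concept sketch only, here is how parameter validation for both standalone functions and class methods is commonly implemented with a stdlib decorator; all names below are hypothetical and not ai-param-validation's actual interface:

```python
import functools
import inspect

def validate_params(**checks):
    """Illustrative decorator: 'checks' maps parameter names to predicates."""
    def decorator(func):
        sig = inspect.signature(func)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            for name, predicate in checks.items():
                if name in bound.arguments and not predicate(bound.arguments[name]):
                    raise ValueError(f"invalid value for parameter {name!r}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@validate_params(age=lambda a: a >= 0)
def register(name, age):
    return f"{name}:{age}"

print(register("ada", 36))   # ok
# register("bob", -1)        # would raise ValueError
```

Because inspect.signature binds positional and keyword arguments alike, the same decorator works on standalone functions and on methods (where self simply passes through unchecked).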
aiparo | No description available on PyPI. |
aipat | Failed to fetch description. HTTP Status Code: 404 |
aipfs | No description available on PyPI. |
aiphad | No description available on PyPI. |
aipha-geo-solutions | AIPHA Python API: This package contains the AIPHA Python API. You can install it via: pip install aipha_geo_solutions. You can find detailed documentation at the AIPHA Webservices. |
aip-hera | readme |
aipi | aipi: an open-source command-line AI photo-editing tool written in Python. |
ai-pill | AI-PILL |
aipilot | No description available on PyPI. |
aipin | soon |
aip-infer-utils | readme |
ai-pipeline-params | AI Pipeline Params |
aip-kafka-worker | readme |
aipkg | No description available on PyPI. |
aipkgs-core | No description available on PyPI. |
aipkgs-database | No description available on PyPI. |
aipkgs-firebase | AI’s package |
aipkgs-heartbeat | No description available on PyPI. |
aipkgs-notifications | APNS Push Notification: This Python script allows you to send push notifications to Apple devices manually. It uses the Apple Push Notification service (APNS) to deliver notifications to your application users. Why use initialize_apns? The initialize_apns function is used to set up the APNS with your application's specific details. This includes your key ID, team ID, bundle ID, and the path to your p8 or pem file. You can also specify whether you want to use the production or sandbox environment (default is sandbox). How to use the script? First, import the necessary modules:
import apns
Then, initialize the APNS with your application's details:
KEY_ID = ''
TEAM_ID = ''
BUNDLE_ID = 'com.test.app'
IS_PROD = False
P8_KEY_PATH = 'path/to/p8/key'
PEM_FILE_PATH = 'path/to/pem/file'
APNS_PRIORITY = 10
APNS_EXPIRATION = 0
apns.initialize_apns(key_id=KEY_ID, team_id=TEAM_ID, bundle_id=BUNDLE_ID, is_prod=IS_PROD, p8_key_path=P8_KEY_PATH, pem_file_path=PEM_FILE_PATH, apns_priority=APNS_PRIORITY, apns_expiration=APNS_EXPIRATION)
apns.apns_config().verbose = True
Now, you can send a push notification:
device_token = ""
data = {}
title = "Hello World!"
response: APNSResponse = apns.push(device_token=device_token, title=title, data=data, badge=1, push_type=apns.PushType.alert, collapse_id=None)
Using a p8 or pem file: Depending on your preference or requirements, you can use either a p8 or pem file for authentication. If you want to use a p8 file, pass the path to the p8_key_path parameter in the initialize_apns function. If you want to use a pem file, pass the path to the pem_file_path parameter. Getting a p8 or pem file: To get the p8 or pem file for APNS push notifications, you can follow these steps. For a p8 file: Go to the Apple Developer Portal. Navigate to the "Certificates, Identifiers & Profiles" section. Under the "Keys" section, click on the "+" button to create a new key (make sure to select Apple Push Notifications service (APNs) from the checkbox). Make sure to write down the Key ID, as you will need it later. Download the generated p8 file and save it to a secure location. For a pem file: Convert the exported .p12 file to a .pem file using the following command in the terminal:
openssl pkcs12 -clcerts -legacy -nodes -in Certificates.p12 -out AuthKey.pem
Replace "Certificates.p12" with the path to your .p12 file, or keep it if you're in the same directory as your .p12 file. Enter the password for the .p12 file when prompted (hit Enter if blank). The converted pem file will be saved as "AuthKey.pem" in the current directory. Once you have obtained the p8 or pem file, you can use it in the initialize_apns function by providing the path to the file in the p8_key_path or pem_file_path parameter, respectively. |
aipkgs-requests | AI’s package |
aipkgs-sentry | AI’s package |
aiplat | API for the Tencent AI Open Platform. |
aiplat-acu-asr | https://github.com/baidubce/pie/tree/master/audio-streaming-client-python |
ai-platform | Failed to fetch description. HTTP Status Code: 404 |
ai-platform-iscas | No description available on PyPI. |
aiplatgo | No description available on PyPI. |
aiplogin | No description available on PyPI. |
aiplugins | 🎸 Plug and Plai: Plug and Plai is an open source library aiming to simplify the integration of AI plugins into open-source language models (LLMs). It provides utility functions to get a list of active plugins from the plugnplai.com directory, get plugin manifests, extract OpenAPI specifications, and load plugins. Installation: You can install Plug and PlAI using pip: pip install plugnplai. Usage. Utility Functions: The following utility functions are available in the library: get_plugins(endpoint): get a list of available plugins from a plugins repository; get_plugin_manifest(url): get the AI plugin manifest from the specified plugin URL; get_openapi_url(url, manifest): get the OpenAPI URL from the plugin manifest; get_openapi_spec(openapi_url): get the OpenAPI specification from the specified OpenAPI URL; from_url(url): returns the manifest and OpenAPI specification from the plugin URL. Example: Here is an example of how to use the utility functions:
import plugnplai
# Get all plugins from plugnplai.com
urls = plugnplai.get_plugins()
# Get ChatGPT plugins - only ChatGPT verified plugins
urls = plugnplai.get_plugins(filter='ChatGPT')
# Get working plugins - only tested plugins (in progress)
urls = plugnplai.get_plugins(filter='working')
# Get the Manifest and the OpenAPI specification from the plugin URL
manifest, openapi_spec = plugnplai.spec_from_url(urls[0])
Contributing: Plug and Plai is an open source library, and we welcome contributions from the entire community. If you're interested in contributing to the project, please feel free to fork, submit pull requests, report issues, or suggest new features. Links: Plugins directory: https://plugplai.com/ API reference: https://plugnplai.github.io/ |
aiplusiot | aiPlus-IOT: Simple library to control a Home Assistant installation using Python. |
aip-model-repo | readme |
aipn-guass | No description available on PyPI. |
aipo | Version: 0.1.0. Download: http://pypi.org/project/aipo. Source: http://github.com/CaiqueReinhold/aipo. Keywords: asyncio, task, job, queue. What is Aipo? Aipo is a minimal framework for dispatching and running background tasks using asynchronous Python. Using the task decorator you can turn your Python functions into tasks that can be dispatched to a queue and executed in the aipo server. Setting up your aipo app:
app = Aipo(config='aipo.yaml')
@app.task
async def my_task():
    await get_stuff()
    await comunicate_stuff()
Once you execute your task it will be dispatched to the queue and executed in the aipo server: await my_task(). Run your Aipo server using the command line: aipo run --config aipo.yaml. Installation: You can install Aipo from the Python Package Index (PyPI). To install using pip: $ pip install aipo. Currently supported Aipo backends: Redis. Currently supported event loops: asyncio, uvloop. |
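Putting the pieces from the description together, a consolidated sketch might look like the following. The import path and the assumption that awaiting a decorated task with no arguments dispatches it are inferred from the snippet above, not documented API:

```python
import asyncio
from aipo import Aipo  # import path assumed from the package name

app = Aipo(config='aipo.yaml')  # assumes an aipo.yaml config exists alongside this script

@app.task
async def my_task():
    print("doing background work")

async def main():
    # Per the description, awaiting the task dispatches it to the queue;
    # the aipo server (started with `aipo run --config aipo.yaml`) executes it.
    await my_task()

asyncio.run(main())
```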
aipod | AIPOD: AIPOD is an AI model serving framework. It defines the interface an AI model provides (initialization/training/prediction/logging, etc.) and, based on the model definition, starts an RPC service and provides an RPC client. Steps to integrate a model: Integrating a model through AIPOD looks roughly as follows. Server side:
from aipod.model import AIModelBase
from aipod.rpc.server import serve
class UserModel(AIModelBase):
    def train(self, **kwargs):
        # implement the training procedure here
        pass
    def predict(self, **kwargs):
        # implement the prediction procedure here
        pass
# start an rpc service loaded with the UserModel model
serve(UserModel)
Client side:
from aipod.rpc.client import AIClient
model = AIClient(address="{rpc_server_address}", version="{model_version}")
# initialize (only needed before training the model)
model.initialize(**model_configs)
# train
model.train(**trainning_options)
# view training logs (training progress, evaluation results, etc.; optional)
logs = model.log()
# predict
result = model.predict(**input_data)
The relationship between the class methods is as follows (using predict as an example):
class AIClient(AIModelBase) -> AIClient.predict(**input_data)
          ↓
      RPC Server
          ↓
class UserModel(AIModelBase) -> UserModel.predict(**input_data)
So defining a custom model means implementing a model class based on aipod.model.AIModelBase, mainly its training and prediction methods. Model wrapper: AIPOD defines models through the aipod.model.AIModelBase class, which mainly includes the following methods: AIModelBase.initialize() for model initialization; AIModelBase.train() for model training; AIModelBase.predict() for model prediction; AIModelBase.log() for viewing logs. These methods use **kwargs to pass any required parameters; since they travel over the network, inputs and outputs must be serializable data types. There is one reserved keyword, binary_data, used for passing binary data. If you need to pass massive amounts of training data, it is recommended to use shared/network storage and pass only the path in the parameters. Before training, AIModelBase.initialize() must be called to initialize the model's data directory; it also saves all initialize parameters as model parameters, which can later be accessed through AIModelBase.model_info. The AIModelBase.model_dir() method returns the directory for this model instance; all data used during training/prediction should be stored in this directory. Model projects that don't need online training can also implement only AIModelBase.predict() and load their own model for prediction. Model service: A model class wrapped with AIModelBase can start an rpc service via serve(UserAIModel), and clients can then call it using AIClient. A model service hosts a single model class but supports multiple model instances, distinguished by AIClient's version parameter. To reduce the impact of model loading time, the service caches loaded model instances; because of GPU memory limits, the maximum number of simultaneously loaded instances is configured via the AIPOD_MODEL_POOL_SIZE environment variable (exceeding it triggers LRU eviction). The model service can be controlled by the following environment variables: AIPOD_LISTEN_PORT: rpc service listening port, default 50051; AIPOD_DATA_PATH: model data storage path, default appdata/; AIPOD_MODEL_POOL_SIZE: model cache pool size within the service, default 1; AIPOD_RPC_MAX_WORKERS: RPC service thread pool size, default 12. |
aipoincare | Quick Start: The 'aipoincare' package is available on PyPI and can be installed with pip install aipoincare. Required packages: numpy, pytorch, matplotlib, flask, os, sklearn, copy. First example:
from aipoincare.main import run
run()
Then play with our GUI! Have fun! |
aip-oneservice | readme |
aiPool | Framework for Advanced Statistics and Data Sciences. |
aipose | AIPOSE: Library to use pose estimation in your projects easily. Installation: You can install aipose from PyPI; this installs the library itself and its prerequisites as well:
pip install aipose
You can also install aipose from its source code:
git clone https://github.com/Tlaloc-Es/aipose.git
cd aipose
pip install -e .
Run demo: Use the following command to run a demo with your cam and the YOLOv7 pose estimator: posewebcam. Running over a video results. How to use: You can check the notebooks section in the repository to see the usage of the library, or you can ask in the Issues section. The examples are: How to draw key points in a video; How to draw key points in a video and store it; How to draw key points in a webcam; How to draw key points in a picture; How to capture a frame to apply your business logic; How to stop the video stream when anybody raises hands with YOLOv7; How to calculate pose similarity with YOLOv7; How to turn the pose with YOLOv7; How to train a pose classifier. References: https://github.com/RizwanMunawar/yolov7-pose-estimation. Support: You can make a donation with the following link, or you can try to make a pull request with your improvements to the repo. Source of videos and images:
Video by Mikhail Nilov: https://www.pexels.com/video/a-woman-exercising-using-an-exercise-ball-6739975/
In folder notebooks/poses/:
Photo by Roman Davayposmotrim: https://www.pexels.com/photo/woman-wearing-black-sports-bra-reaching-floor-while-standing-35987/
Photo by Vlada Karpovich: https://www.pexels.com/photo/a-woman-doing-yoga-4534689/
Photo by Lucas Pezeta: https://www.pexels.com/photo/woman-doing-yoga-2121049/
Photo by Cliff Booth: https://www.pexels.com/photo/photo-of-woman-in-a-yoga-position-4057839/
Photo by Cliff Booth: https://www.pexels.com/photo/photo-of-woman-meditating-alone-4056969/
Photo by MART PRODUCTION: https://www.pexels.com/photo/photo-of-a-woman-meditating-8032834/
Photo by Antoni Shkraba: https://www.pexels.com/photo/woman-in-blue-tank-top-and-black-leggings-doing-yoga-4662330/
Photo by MART PRODUCTION: https://www.pexels.com/photo/woman-wearing-a-sports-bra-8032742/
Photo by Elina Fairytale: https://www.pexels.com/photo/woman-in-pink-tank-top-and-blue-leggings-bending-her-body-3823074/
Photo by Cliff Booth: https://www.pexels.com/photo/photo-of-woman-stretching-her-legs-4057525/
Photo by Mikhail Nilov: https://www.pexels.com/photo/woman-standing-in-a-bending-position-on-a-box-6740089/
Photo by cottonbro studio: https://www.pexels.com/photo/woman-in-black-sports-bra-and-black-panty-doing-yoga-4323290/
Photo by ArtHouse Studio: https://www.pexels.com/photo/photo-of-man-bending-his-body-4334910/
Photo by Anna Shvets: https://www.pexels.com/photo/graceful-woman-performing-variation-of-setu-bandha-sarvangasana-yoga-pose-5012071/
Photo by Miriam Alonso: https://www.pexels.com/photo/calm-young-asian-woman-doing-supine-hand-to-big-toe-yoga-asana-7593010/
Photo by Miriam Alonso: https://www.pexels.com/photo/anonymous-sportswoman-doing-stretching-exercise-during-yoga-session-7593002/
Photo by Miriam Alonso: https://www.pexels.com/photo/fit-young-asian-woman-preparing-for-handstand-during-yoga-training-at-home-7593004/
Photo by Anete Lusina: https://www.pexels.com/photo/concentrated-woman-standing-in-tree-pose-on-walkway-4793290/
Photo by Miriam Alonso: https://www.pexels.com/photo/faceless-sportive-woman-stretching-back-near-wall-7592982/ |
aip-py | aip-py: pip install aip-py |
aiprg | Failed to fetch description. HTTP Status Code: 404 |
aiprint | No description available on PyPI. |
ai-privacy-toolkit | ai-privacy-toolkit: A toolkit for tools and techniques related to the privacy and compliance of AI models. The anonymization module contains methods for anonymizing ML model training data, so that when a model is retrained on the anonymized data, the model itself will also be considered anonymous. This may help exempt the model from different obligations and restrictions set out in data protection regulations such as GDPR, CCPA, etc. The minimization module contains methods to help adhere to the data minimization principle in GDPR for ML models. It enables reducing the amount of personal data needed to perform predictions with a machine learning model, while still enabling the model to make accurate predictions. This is done by removing or generalizing some of the input features. The dataset assessment module implements a tool for privacy assessment of synthetic datasets that are to be used in AI model training. Official ai-privacy-toolkit documentation: https://ai-privacy-toolkit.readthedocs.io/en/latest/. Installation: pip install ai-privacy-toolkit. For more information or help using or improving the toolkit, please contact Abigail Goldsteen at [email protected], or join our Slack channel: https://aip360.mybluemix.net/community. We welcome new contributors! If you're interested, take a look at our contribution guidelines. Related toolkits: ai-minimization-toolkit - has been migrated into this toolkit. differential-privacy-library: a general-purpose library for experimenting with, investigating and developing applications in, differential privacy. adversarial-robustness-toolbox: a Python library for Machine Learning Security; includes an attack module called inference that contains privacy attacks on ML models (membership inference, attribute inference, model inversion and database reconstruction) as well as a privacy metrics module that contains membership leakage metrics for ML models. Citation: Abigail Goldsteen, Ola Saadi, Ron Shmelkin, Shlomit Shachor, Natalia Razinkov, "AI privacy toolkit", SoftwareX, Volume 22, 2023, 101352, ISSN 2352-7110, https://doi.org/10.1016/j.softx.2023.101352. |
ai-probability | No description available on PyPI. |
aiprog | Failed to fetch description. HTTP Status Code: 404 |
ai-projects | Artificial Intelligence: A collection of AI algorithms, RL agents, and more. Documentation: https://jakkes.github.io/AI_Projects/ai. Installing: pip install ai-projects |
ai-proto-server | ai proto for coder |
ai-proxy | ai-proxyThis is a placeholder for something to be published soon. |
aiproxy-python | 🦉 AIProxy🦉AIProxyis a Python library that serves as a reverse proxy LLM APIs including ChatGPT and Claude2. It provides enhanced features like monitoring, logging, and filtering requests and responses. This library is especially useful for developers and administrators who need detailed oversight and control over the interaction with LLM APIs.✅ Streaming support: Logs every bit of request and response data with token count – never miss a beat! 💓✅ Custom monitoring: Tailor-made for logging any specific info you fancy. Make it your own! 🔍✅ Custom filtering: Flexibly blocks access based on specific info or sends back your own responses. Be in control! 🛡️✅ Multiple AI Services: Supports ChatGPT (OpenAI and Azure OpenAI Service), Claude2 on AWS Bedrock, and is extensible by yourself! 🤖✅ Express dashboard: We provide template forApache Supersetthat's ready to use right out of the box – get insights quickly and efficiently! 📊🚀 Quick startInstall.$pipinstallaiproxy-pythonRun.$python-maiproxy[--hosthost][--portport][--openai_api_keyOPENAI_API_KEY]Use.importopenaiclient=openai.Client(base_url="http://127.0.0.1:8000/",api_key="YOUR_API_KEY")resp=client.chat.completions.create(model="gpt-3.5-turbo",messages=[{"role":"user","content":"hello!"}])print(resp)Enjoy😊🦉🛠️ Custom entrypointTo customize🦉AIProxy, make your custom entrypoint and configure logger and filters here.fromcontextlibimportasynccontextmanagerimportloggingfromfastapiimportFastAPIfromaiproxyimportChatGPTProxy,AccessLogWorkerimportthreading# Setup loggerlogger=logging.getLogger()logger.setLevel(logging.INFO)log_format=logging.Formatter("%(asctime)s%(levelname)8s%(message)s")streamHandler=logging.StreamHandler()streamHandler.setFormatter(log_format)logger.addHandler(streamHandler)# Setup access log worker# worker = AccessLogWorker()worker=CustomAccessLogWorker(accesslog_cls=CustomAccessLog)# 🌟 Instantiate your custom access log worker# Setup proxy for ChatGPTproxy=ChatGPTProxy(api_key=YOUR_API_KEY,access_logger_queue=worker.queue_client)proxy.add_filter(CustomRequestFilter1())# 🌟 Set your custom filter(s)proxy.add_filter(CustomRequestFilter2())# 🌟 Set your custom filter(s)proxy.add_filter(CustomResponseFilter())# 🌟 Set your custom filter(s)# Setup server application@asynccontextmanagerasyncdeflifespan(app:FastAPI):# Start access log workerthreading.Thread(target=worker.run,daemon=True).start()yield# Stop access log workerworker.queue_client.put(None)app=FastAPI(lifespan=lifespan,docs_url=None,redoc_url=None,openapi_url=None)proxy.add_route(app,"/chat/completions")Run with uvicorn with some params if you need.$uvicornrun:app--host0.0.0.0--port8080To use Azure OpenAI, instantiateChatGPTProxywithAsyncAzureOpenAI.azure_client=openai.AsyncAzureOpenAI(api_key="YOUR_API_KEY",api_version="2023-10-01-preview",azure_endpoint="https://{DEPLOYMENT_ID}.openai.azure.com/")proxy=ChatGPTProxy(async_client=azure_client,access_logger_queue=worker.queue_client)To use Claude2 on AWS Bedrock, instantiateClaude2Proxy.fromaiproxy.claude2importClaude2Proxyclaude_proxy=Claude2Proxy(aws_access_key_id="YOUR_AWS_ACCESS_KEY_ID",aws_secret_access_key="YOUR_AWS_SECRET_ACCESS_KEY",region_name="your-bedrock-region",access_logger_queue=worker.queue_client)claude_proxy.add_route(app,"/model/anthropic.claude-v2")Client side. 
We test API with boto3.importboto3importjson# Make client with dummy credssession=boto3.Session(aws_access_key_id="dummy",aws_secret_access_key="dummy",)bedrock=session.client(service_name="bedrock-runtime",region_name="private",endpoint_url="http://127.0.0.1:8000")# Call APIresponse=bedrock.invoke_model(modelId="anthropic.claude-v2",body=json.dumps({"prompt":"Human: うなぎとあなごの違いは?\nAssistant: ","max_tokens_to_sample":100}))# Show responseprint(json.loads(response["body"].read()))🔍 MonitoringBy default, seeaccesslogtable inaiproxy.db. If you want to use other RDBMS like PostgreSQL, set SQLAlchemy-formatted connection string asconnection_strargument when instancingAccessLogWorker.And, you can customize log format as below:This is an example to addusercolumn to request log. In this case, the customized log are stored into table namedcustomaccesslog, the lower case of your custom access log class.fromdatetimeimportdatetimeimportjsonimporttracebackfromsqlalchemyimportColumn,Stringfromaiproxy.accesslogimportAccessLogBase,AccessLogWorkerclassCustomAccessLog(AccessLogBase):user=Column(String)classCustomGPTRequestItem(ChatGPTRequestItem):defto_accesslog(self,accesslog_cls:_AccessLogBase)->_AccessLogBase:accesslog=super().to_accesslog(accesslog_cls)# In this case, set value of "x-user-id" in request header to newly added colmun "user"accesslog.user=self.request_headers.get("x-user-id")returnaccesslog# Use your custom accesslogworker=CustomAccessLogWorker(accesslog_cls=CustomAccessLog)# Use your custom request itemproxy=ChatGPTProxy(access_logger_queue=worker.queue_client,request_item_class=CustomGPTRequestItem)NOTE: By defaultAccessLog, OpenAI API Key in the request headers is masked.🛡️ FilteringThe filter receives all requests and responses, allowing you to view and modify their content. For example:Detect and protect from misuse: From unknown apps, unauthorized users, etc.Trigger custom actions: Doing something triggered by a request.This is an example for custom request filter that protects the service from banned user. uezo will receive "you can't use this service" as the ChatGPT response.fromtypingimportUnionfromaiproxyimportRequestFilterBaseclassBannedUserFilter(RequestFilterBase):asyncdeffilter(self,request_id:str,request_json:dict,request_headers:dict)->Union[str,None]:banned_user=["uezo"]user=request_json.get("user")# Return string message to return response right after this filter ends (not to call ChatGPT)ifnotuser:return"user is required"elifuserinbanned_user:return"you can't use this service"# Enable this filterproxy.add_filter(BannedUserFilter())Try it.resp=client.chat.completions.create(model="gpt-3.5-turbo",messages=messages,user="uezo")print(resp)ChatCompletion(id='-',choices=[Choice(finish_reason='stop',index=0,message=ChatCompletionMessage(content="you can't use this service",role='assistant',function_call=None,tool_calls=None))],created=0,model='request_filter',object='chat.completion',system_fingerprint=None,usage=CompletionUsage(completion_tokens=0,prompt_tokens=0,total_tokens=0))Another example is the model overwriter that forces the user to use GPT-3.5-Turbo.classModelOverwriteFilter(RequestFilterBase):asyncdeffilter(self,request_id:str,request_json:dict,request_headers:dict)->Union[str,None]:request_model=request_json["model"]ifnotrequest_model.startswith("gpt-3.5"):print(f"Change model from{request_model}-> gpt-3.5-turbo")# Overwrite request_jsonrequest_json["model"]="gpt-3.5-turbo"Lastly,ReplayFilterthat retrieves content for a specific request_id from the histories. 
This is an exceptionally cool feature for developers to test AI-based applications.classReplayFilter(RequestFilterBase):asyncdeffilter(self,request_id:str,request_json:dict,request_headers:dict)->Union[str,None]:# Get request_id to replay from request headerrequest_id=request_headers.get("x-aiproxy-replay")ifnotrequest_id:returndb=worker.get_session()try:# Get and return the response content from historiesr=db.query(AccessLog).where(AccessLog.request_id==request_id,AccessLog.direction=="response").first()ifr:returnr.contentelse:return"Record not found for{request_id}"exceptExceptionasex:logger.error(f"Error at ReplayFilter:{str(ex)}\n{traceback.format_exc()}")return"Error at getting response for{request_id}"finally:db.close()request_idis included in HTTP response headers asx-aiproxy-request-id.NOTE:Responsefilter doesn't work whenstream=True.📊 DashboardWe provide an Apache Superset template as our express dashboard. Please follow the steps below to set up.Install Superset.$pipinstallapache-supersetGet dashboard.zip from release page and extract it to the same directory as aiproxy.db.https://github.com/uezo/aiproxy/releases/tag/v0.3.0Set required environment variables.$exportSUPERSET_CONFIG_PATH=$(pwd)/dashboard/superset_config.py
$exportFLASK_APP=supersetMake database.$supersetdbupgradeCreate admin user. Change username and password as you like.$supersetfabcreate-admin--usernameadmin--firstnameAIProxyAdmin--lastnameAIProxyAdmin--emailadmin@localhost--passwordadminInitialize Superset.$supersetinitImport 🦉AIProxy dashboard template. Execute this command in the same directory as aiproxy.db. If you execute from a different location, open the Database connections page in the Superset after completing these steps and modify the database connection string to the absolute path.$supersetimport-directorydashboard/resourcesStart Superset.$supersetrun-p8088Open and customize the dashboard to your liking, including the metrics you want to monitor and their conditions.👍http://localhost:8088📕 Superset official docs:https://superset.apache.org/docs/intro💡 TipsCORSConfigure CORS if you call API from web apps.https://fastapi.tiangolo.com/tutorial/cors/Retry🦉AIProxy does not retry when API returns 5xx error because the OpenAI official client library retries. Settmax_retriestoChatGPTProxyif you call 🦉AIProxy from general HTTP client library.proxy=ChatGPTProxy(api_key=args.openai_api_key,access_logger_queue=worker.queue_client,max_retries=2# OpenAI's default is 2)DatabaseYou can use other RDBMS that is supported by SQLAlchemy. You can use them by just changing connection string. (and, install client libraries required.)Example for PostgreSQL🐘$pipinstallpsycopg2-binary# connection_str = "sqlite:///aiproxy.db"connection_str=f"postgresql://{USER}:{PASSWORD}@{HOST}:{PORT}/{DATABASE}"worker=AccessLogWorker(connection_str=connection_str)Example for SQL Server or Azure SQL DatabaseThis is a temporary workaroud from AIProxy >= 0.3.6. SetAIPROXY_USE_NVARCHAR=1to use NVARCHAR internally.$exportAIPROXY_USE_NVARCHAR=1Install ODBC driver (version 18 in this example) andpyodbcthen set connection string as follows:# connection_str = "sqlite:///aiproxy.db"connection_str=f"mssql+pyodbc:///?odbc_connect=DRIVER={ODBCDriver18forSQLServer};SERVER=YOUR_SERVER;PORT=1433;DATABASE=YOUR_DB;UID=YOUR_UID;PWD=YOUR_PWD"worker=AccessLogWorker(connection_str=connection_str)RestrictionsFeaturesChatGPTChatGPTStreamClaude2Claude2StreamRequest filter✅✅✅Instant response is non-stream with status code 400Response filter✅N/A✅N/AAPI keysServer/ClientServer/ClientServerServer🛟 SupportFor support, questions, or contributions, please open an issue in the GitHub repository. Please contact me directly when you need an enterprise or business support😊.⚖️ License🦉AIProxyis released under theApache License v2.Made with ❤️ by Uezo, the representive of Unagiken. |
aipsetup | Software for building and maintaining your own GNU/Linux distribution. aipsetup. Features:
- building software packages from tarballs
  * multihost package builds supported
  * multilib package builds supported
- installing and uninstalling packages
- searching for and deletion of garbage among system files
- some sort of tools for creating distributions
Requirements: some packages from my GitHub page are required.
You'll also need BottlePy, CherryPy, Mako, SQLAlchemy, lxml
and some other stuff. GUI elements require Gtk+3 and PyGObject. aipsetup is written for (and oriented toward) systems with Python 3.5. The GitHub page is here: https://github.com/AnimusPEXUS/wayround_org_aipsetup. Some slack distribution packages are here: rsync://wayround.org/Lailalo_snapshots/ |
aip-site-generator | aip.dev static site generator: This is the site generator for aip.dev and its forks. It takes AIP files in a git repository and outputs a static website. Why? We are not fans of rolling our own tools when off-the-shelf alternatives exist. However, the AIP project has grown sufficiently mature to warrant it. GitHub Pages normally automatically builds documentation with Jekyll, but as the AIP system has grown, we are beginning to reach the limits of what Jekyll can handle, and other off-the-shelf generators had similar issues: AIP adoption is handled through fork-and-merge, and top-down configuration files will lead to repetitive merge conflicts. Our grouping and listing logic has grown complicated, and had to be maintained using complex and error-prone Liquid templates. Jekyll is extensible, but GitHub requires specific Jekyll plugins, meaning we can not use off-the-shelf solutions for planned features (e.g. tabbed code examples). Lack of meaningful build CI caused failures. Working with the development environment was (really) slow. There are some additional advantages that we unlock with a custom generator: We can override segments of AIPs using template extensions in new files rather than modifying existing files. We can provide useful abstractions for common deviations between companies (e.g. case systems) that minimize the need to fork AIPs. We can customize the Markdown parsing where necessary (tabs, hotlinking, etc.). How does it work? This is essentially split into three parts: Python code (aip_site/): The majority of the code is models (aip_site/models/) that represent the fundamental concepts of an AIP site. These are rolled up into a singleton object called Site that is used everywhere. All models are dataclasses that get sent to templates. There is also a publisher class (aip_site/publisher.py) that is able to slurp up a repo of AIPs and build a static site. There is some server code (aip_site/server.py) that can run a development server. All remaining files are thin support code to avoid repeating things in or between the above. Templates (support/templates/) are Jinja2 templates containing (mostly) HTML that makes up the layout of the site. Assets (support/assets/ and support/scss/) are other static files. SCSS is automatically compiled into CSS at publication. Of the models, there are three models in particular that matter: Site: A singleton that provides access to all scopes, AIPs, and static pages. This is sent to every template as the site variable. AIP: A representation of a single AIP, including both content and metadata. This is sent to the AIP rendering template as the aip variable. Scope: A group of AIPs that apply to a particular scope. The "general" scope is special, and is the "root" group. This is sent to the AIP listing template as the scope variable. Templates are jinja2 files in the templates/ directory. Note: We run Jinja with "strict undefined", so referencing an undefined variable in a template is a hard error rather than an empty string. Entry points: There are two entry points for the app. The publisher (aip_site/publisher.py) is the program that iterates over the relevant directories, renders HTML files, and writes them out to disk. The app (aip_site/server.py) is a lightweight Flask app that provides a development server. These entry points are routed through the CLI file (aip_site/cli.py); when this application is installed using pip, it makes the aip-site-gen (publisher) and aip-site-serve (server) commands available. Extensions: This site generator includes a basic extension system for AIPs. When processing AIPs as plain Markdown files, it will make any Markdown (level 2 or 3) header into a block. Therefore...
## Foo bar baz
Lorem ipsum dolor set amet
Becomes...
{% block foo_bar_baz %}
## Foo bar baz
Lorem ipsum dolor set amet
{% endblock %}
That allows an overriding template to extend the original one and override sections:
{% extends aip.templates.generic %}
{% block foo_bar_baz %}
## My mo-betta foo bar baz
Lorem ipsum dolor set something-not-amet
{% endblock %}
Developer Setup: If you want to contribute to this project you will want to have a setup where you can make changes to the code and see the result of your changes as soon as possible. Here is a quick way to set up a local development environment that will enable you to work on the code without having to reinstall the command line scripts. Dependencies: You'll need venv. On Linux, install it with: sudo apt-get install python3-venv. Running the dev env: Check out the source:
$ mkdir src
$ cd src
$ git clone https://github.com/aip-dev/site-generator.git
Set up a Python virtual environment:
$ python3 -m venv .venv
$ source .venv/bin/activate
pip install with the editable option:
$ pip install --editable .
Serve the aip.dev site:
$ aip-site-serve /path/to/aip/data/on/your/system |
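The header-to-block transformation the Extensions section describes can be sketched in a few lines of Python. This mirrors the documented behavior (a level-2/3 Markdown header becomes a Jinja2 block named after the header) but is not the generator's actual code:

```python
import re

def headers_to_blocks(markdown):
    """Wrap each level-2/3 Markdown section in a Jinja2 block named after its header."""
    out, open_block = [], False
    for line in markdown.splitlines():
        m = re.match(r"^(##|###) (.+)$", line)
        if m:
            # Close the previous section's block before starting a new one
            if open_block:
                out.append("{% endblock %}")
            slug = re.sub(r"\W+", "_", m.group(2).strip().lower()).strip("_")
            out.append("{%% block %s %%}" % slug)
            open_block = True
        out.append(line)
    if open_block:
        out.append("{% endblock %}")
    return "\n".join(out)

print(headers_to_blocks("## Foo bar baz\nLorem ipsum dolor set amet"))
# {% block foo_bar_baz %}
# ## Foo bar baz
# Lorem ipsum dolor set amet
# {% endblock %}
```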
aiptest | No description available on PyPI. |
aiptrms | No description available on PyPI. |
aipy | This package collects together tools for radio astronomical interferometry. In addition to pure-python phasing, calibration, imaging, and deconvolution code, this package includes interfaces to MIRIAD (a Fortran interferometry package) and HEALPix (a package for representing spherical data sets). Instructions, documentation, and a FAQ may be found at [the aipy GitHub page](http://github.com/HERA-Team/aipy). |
aipysdk | aipysdk: Small library containing some helpers for implementing the server side of the Vercel AI SDK in Python. Installation: pip install aipysdk. Examples: Check out the examples in examples. |
ai-python | # AI Python by Microsoft
This is a library of Python packages scanned using various open source and internal tools to provide up-to-date and secure dependencies.
## Installation
Currently Python 3.7 is supported.
### Ubuntu 18.04
The aiubuntu library is provided to simplify installation of the required Ubuntu dependencies.
add-apt-repository -y ppa:deadsnakes/ppa && apt-get update && apt-get install -y python3.7 python3.7-dev
curl -fSsLO https://bootstrap.pypa.io/get-pip.py && /usr/bin/python3.7 get-pip.py 'pip==20.3.3'
add-apt-repository -y ppa:dciborow/ppa && apt-get update && apt-get install -y aiubuntu
# For Core Libraries (this will install nearly everything)
python3.7 -m pip install ai-python[core]
# For Libraries needed for testing
python3.7 -m pip install ai-python[tests]
# For Libraries needed for a single package
python3.7 -m pip install ai-python[retail]
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies. |
ai-python-common | Failed to fetch description. HTTP Status Code: 404 |
aipyx | No description available on PyPI. |
aipyzip | No description available on PyPI. |
aiqa | qaQuant Analysis. |
aiqc | 📚 Documentation. Technical Overview. What is it? AIQC is an open source Python package that provides a declarative API for end-to-end MLOps (dataset registration, preprocessing, experiment tracking, model evaluation, inference, post-processing, etc.) in order to make deep learning more accessible to researchers. How does it work? The backend is a SQLite object-relational model (ORM) for machine learning objects (Dataset, Feature, Label, Splits, Algorithm, Job, etc.). The high-level API stacks these building blocks into standardized workflows for various analyses (classify, regress, generate), data types (tabular, sequence, image), and libraries (TensorFlow, PyTorch). The benefits of this approach are: ⏱️ 90% reduction in data wrangling via automation of highly conditional and repetitive tasks that vary for each type of dataset and analysis (e.g. model evaluation, metrics, and charts for every split of every model). 💾 Reproducibility, not only because the workflow is persisted (e.g. encoder metadata) but also because it provides standardized classes as opposed to open-ended scripting (e.g. 'X_train, y_test'). 🎛️ No need to install and maintain application and database servers for experiment tracking. SQLite is just a highly-performant and portable file that is automatically configured by `aiqc.setup()`. AIQC is just a pip-installable Python package that works great in Jupyter (or any IDE/shell), and provides a Dash-Plotly user interface (UI) for real-time experiment tracking. What's on the roadmap? 🖥️ Expand the UI (e.g. dataset registration and model design) to make it even more approachable for less technical users. ☁️ Schedule parallel training of models in the cloud. Experiment Tracker. Compare Models. What if? Usage:
# Built on Python 3.7.12 to mirror Google Colab
$ pip install --upgrade pip
$ pip install --upgrade wheel
$ pip install --upgrade aiqc
# Monitor and evaluate models (from CLI)
$ python -m aiqc.ui.app
# High-level API
from aiqc import mlops
# Declare preprocessing steps
mlops.Pipeline()
# Define, train, & evaluate models
mlops.Experiment().run_jobs()
# Infer using original Pipeline
mlops.Inference() |
aiqDocTests | aiqDocTests: A framework to validate request/response JSON and create documentation for applications, maintained by the devs of the most greedy-gut app on the internet! Install with pip3: pip3 install aiqDocTests. Init in the project folder: aiqdoctests --init. This will create the folders data_structures_io and static. In data_structures_io are the JSON files used to test the requests. Example: We use Cerberus to validate the structure, so if any value in the JSON response has an invalid name or type, or is missing, an exception will occur in the tests. In the static folder will be the JSON file for Swagger; with this file you can use any package in any language you want to read it.
This swagger.json is generated, so every time you run the command aiqdoctests -g in the folder, this file will be updated. The .aiqdoctests.config file: This JSON file is for configuration, so the folder names and other things can be personalized. docs_url: with the command aiqdoctests --docs, a Flask server will be brought up on port 3000 to read the swagger file; this value sets which URL it will run on (default: localhost:3000/docs). save_file_swagger: the name of the file generated for Swagger (default: swagger.json). data_structures_folder: the name of the folder holding the data structures for requests (default: data_structures_io). tests_folder: the name of the folder that will hold the tests (default: tests). tests_before_cmd: sometimes in a project we want to run a command/script before starting the tests, for example to create the tables in the database; this command will run before the tests start. tests_between_cmd: this command runs between tests, in the tearDown, so after each test; it is for a script or migration you want to run to clean the database, for example. swagger: the header for the swagger file. For more information, see the example (https://github.com/aiqfome/aiqDocTests-example). Relax, this documentation is still under construction :construction_worker: Any doubt, create an issue. Made with :pizza: & :hearts:! Enjoy it! |
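Since the description says Cerberus validates the response structures, here is a minimal example of the kind of schema check involved (plain Cerberus usage, independent of aiqDocTests itself):

```python
from cerberus import Validator

# Schema describing the expected shape of a JSON response
schema = {
    "name": {"type": "string", "required": True},
    "price": {"type": "integer", "min": 0},
}
v = Validator(schema)

print(v.validate({"name": "pizza", "price": 30}))   # True: matches the schema
print(v.validate({"name": "pizza", "price": "x"}))  # False: wrong type for 'price'
print(v.errors)  # details of the mismatch, e.g. {'price': ['must be of integer type']}
```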
aiqi | This placeholder package exists to publicly claim an internal name, so that it cannot be used as part of a dependency confusion attack. |
aiqpy | aiqpy is a wrapper library that allows you to interact with an AppearIQ 8 platform using Python. The library connects to the REST APIs of the platform and takes care of authentication and session management so you can focus on more fun things. Example: Changing the password of a user:
>>> platform = aiqpy.Connection(profile='dev')
>>> user = platform.get(['admin', 'users'], username='stephen.falken')[0]
>>> user['password'] = 'joshua'
>>> updated = platform.put(['admin', 'users'], user)
Dependencies: aiqpy uses Requests for performing HTTP requests. License: aiqpy is licensed under GNU GPL v3.0. |
aiqterminal | aiqterminal is an interactive shell for administrating an AppearIQ 8 platform. Example: Changing the password of a user:
admin@wapr/default> set_organization('games')
admin@wapr/games> user = get(['admin', 'users'], username='stephen.falken')[0]
admin@wapr/games> user['password'] = 'joshua'
admin@wapr/games> updated = put(['admin', 'users'], user)
Dependencies: aiqterminal uses aiqpy for performing HTTP requests. License: aiqterminal is licensed under GNU GPL v3.0. |
air | An ultra-lightweight static site generator. Documentation: https://air.readthedocs.org. Features: TODO. Credits: This package was created with Cookiecutter and the cookiecutter-pypackage project template. History: 0.1.0 (2015-12-10): First release on PyPI. |
air2neo | air2neo: Airtable to Neo4j data ingestor. Quickstart:
from air2neo import Air2Neo, MetatableConfig

a2n = Air2Neo(
    airtable_api_key,
    airtable_base_id,
    neo4j_uri,
    neo4j_username,
    neo4j_password,
    MetatableConfig(
        table_name,                         # "Metatable". Optionally, you can provide `table`, which is a pyairtable.Table object.
        name_col,                           # "Name"
        index_for_col,                      # "IndexFor"
        constrain_for_col,                  # "ConstrainFor"
        node_properties_col,                # "NodeProperties"
        edges_col,                          # "Edges"
        node_properties_last_ingested_col,  # "nodesLastIngested"
        edges_last_ingested_col,            # "edgesLastIngested"
        airtable_id_property_in_neo4j,      # "_aid": the name of the property in Neo4j that stores the Airtable ID
        format_edge_col_name,               # function that formats edge column names; removes everything after a double underscore, e.g. IN_INDUSTRY__BANK is renamed to IN_INDUSTRY
        airtable_api_key,                   # Airtable API key
        airtable_base_id,                   # Airtable base ID
    ),
)
a2n.run()
If you have a .env file like so:
AIRTABLE_API_KEY=
AIRTABLE_BASE_ID=
AIRTABLE_METATABLE_NAME= # Optional, defaults to "Metatable"
NEO4J_URI=
NEO4J_USERNAME=
NEO4J_PASSWORD=
you just run the following:
from air2neo import Air2Neo
a2n = Air2Neo()
a2n.run()
Installation: $ pip install air2neo. Documentation: To be implemented. For now, please look at the code docstrings. Sorry about that! |
air2phin | Air2phin: air2phin is a tool for migrating Airflow DAGs to the DolphinScheduler Python API. Installation: For now, it is just for testing and has not been published to PyPI, but it will be in the future. You can still install it locally by yourself:
python -m pip install --upgrade air2phin
Quick Start: Here is a quick example showing how to migrate based on standard input.
# Quick test the migrate rule for standard input
# Can also add option `--diff` to see the diff detail of this migrate
air2phin test "from airflow.operators.bash import BashOperator
test = BashOperator(task_id='test',bash_command='echo 1',)"
And you will see the migrated result in the standard output. air2phin can migrate not only standard input, but also files and directories, and it can even be used in your Python code. For more detail, please see our usage. Documentation: The documentation is hosted on Read the Docs and is available at https://air2phin.readthedocs.io. Support Statement: For now, we support the following statements from Airflow's DAG files.
DAG (Before Migration -> After Migration):
from airflow import DAG -> from pydolphinscheduler.core.process_definition import ProcessDefinition
with DAG(...) as dag: pass -> with ProcessDefinition(...) as dag: pass
Operators. Dummy Operator (Before Migration -> After Migration):
from airflow.operators.dummy_operator import DummyOperator -> from pydolphinscheduler.tasks.shell import Shell
from airflow.operators.dummy import DummyOperator -> from pydolphinscheduler.tasks.shell import Shell
dummy = DummyOperator(...) -> dummy = Shell(..., command="echo 'airflow dummy operator'")
Shell Operator (Before Migration -> After Migration):
from airflow.operators.bash import BashOperator -> from pydolphinscheduler.tasks.shell import Shell
bash = BashOperator(...) -> bash = Shell(...)
Spark Sql Operator (Before Migration -> After Migration):
from airflow.operators.spark_sql_operator import SparkSqlOperator -> from pydolphinscheduler.tasks.sql import Sql
spark = SparkSqlOperator(...) -> spark = Sql(...)
Python Operator (Before Migration -> After Migration):
from airflow.operators.python_operator import PythonOperator -> from pydolphinscheduler.tasks.python import Python
python = PythonOperator(...) -> python = Python(...) |
air52 | No description available on PyPI. |
airadar | No description available on PyPI. |
airam | air |
airavat | airavat arm_template: This template detects obsolete datasets and connections of Azure Data Factory (ADF) using an ARM template. obsolete_datasets: this method detects all the obsolete datasets present in ADF. Input: arm_template_folder_path; Output: pandas dataframe having one column (Obsolete_Datasets). obsolete_linked_services: this method detects all the obsolete linked services present in ADF. Input: arm_template_folder_path; Output: pandas dataframe having one column (Obsolete_Linked_Services). |
airavata | # Django Airavata
Build status: https://jenkins.levitnet.be/buildStatus/icon?job=Polla. Docs: https://readthedocs.org/projects/django-polla/badge/?version=latest. PyPI: https://badge.fury.io/py/airavata. Airavata is a mythological white elephant who carries the Hindu god Indra. Airavata has four tusks and seven trunks and is spotless white ([source on Wikipedia](https://en.wikipedia.org/wiki/Airavata)). Airavata is also a Django 1.8+ library that allows you to host multiple dynamic sites running on a single Django instance/db. I have been using a customized version of [dynamicsites](https://bitbucket.org/uysrc/django-dynamicsites/overview) for a while now. But with the new features from Django 1.7 and 1.8, there exists another simpler, and more pythonesque IMHO, way to achieve the same results. Polla is an attempt at that other way. See documentation on [ReadTheDocs](http://django-polla.readthedocs.io/en/latest/). If you are planning on contributing (or are just curious about our values), please take the time to read the [project's Code of Conduct](COC.md). Thanks to: @ashwoods for the new name :-) and [Coraline Ada Ehmke](http://where.coraline.codes/) for creating [the contributor covenant](http://contributor-covenant.org/) |
airavata-custos-portal | Airavata Custos Portal. How to use: The Airavata Custos portal is available as a Python package to install and customise for tenants' needs. The following instructions are for setting up a customised portal using all the features available in the Airavata Custos portal. Install:
pip install airavata-custos-portal
Create a Django app:
django-admin startproject my_first_custos_app
cd my_first_custos_app/my_first_custos_app
django-admin startapp my_custom_ui
Include the custos portal api and frontend in the urls:
# my_first_custos_app/my_first_custos_app/urls.py
from django.contrib import admin
from django.urls import path
from django.conf.urls import include
urlpatterns = [
path('admin/', admin.site.urls),
path("api/", include("airavata_custos_portal.apps.api.urls")),
path("", include("airavata_custos_portal.apps.frontend.urls")),
]
Also, include the custom UI app in the urls:
# my_first_custos_app/my_first_custos_app/urls.py
from django.contrib import admin
from django.urls import path
from django.conf.urls import include
urlpatterns = [
path('admin/', admin.site.urls),
path("api/", include("airavata_custos_portal.apps.api.urls")),
path("", include("airavata_custos_portal.apps.frontend.urls")),
path("custom-ui/", include("my_first_custos_app.my_custom_ui.urls")),
]
Development: The application consists of a Vue.js frontend and a Django-based backend. The instructions below are for setting up a local environment for development. Change the configurations: change the environment variables in .env. Run the Vue.js app:
yarn install
yarn serve
Lint and fix files:
yarn lint
Running the Django server locally:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cd airavata_custos_portal/
./manage.py migrate
./manage.py runserver
And then point to http://localhost:8000. How to publish: Build the static files:
yarn build
Build the python package:
python -m pip install --upgrade build
python -m build
Publish the python package to pypi.org (optionally you can push to test.pypi.org; see https://packaging.python.org/tutorials/packaging-projects/ for more info):
python -m pip install --upgrade twine
python -m twine upload dist/* |
airavata-custos-portal-sdk | custos-demo-gateway. Project setup:
yarn install
Compiles and hot-reloads for development:
yarn serve
Compiles and minifies for production:
yarn build
Lints and fixes files:
yarn lint
Customize configuration: see the Configuration Reference. Running the Django server locally:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cd airavata_custos_portal/
./manage.py migrate
./manage.py runserver |
airavata-django-portal-commons | Airavata Django Portal Commons: Utilities for working with dynamically loaded Django apps. Getting Started: Install this package with pip:
pip install airavata-django-portal-commons
Dynamically loaded Django apps: At the end of your Django server's settings.py file add:
import sys
from airavata_django_portal_commons import dynamic_apps
# Add any dynamic apps installed in the virtual environment
dynamic_apps.load(INSTALLED_APPS)
# (Optional) merge WEBPACK_LOADER settings from custom Django apps
settings_module = sys.modules[__name__]
dynamic_apps.merge_settings(settings_module)
Note: if the dynamic Django app uses WEBPACK_LOADER, keep in mind that it is important that the version of django-webpack-loader and the version of webpack-bundle-tracker be compatible. If you're using django-webpack-loader prior to version 1.0 then a known good pair of versions is django-webpack-loader==0.6.0 and webpack-bundle-tracker==0.4.3. Also add 'airavata_django_portal_commons.dynamic_apps.context_processors.custom_app_registry' to the context_processors list:
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': ...
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                ...
                'airavata_django_portal_commons.dynamic_apps.context_processors.custom_app_registry',
            ],
        },
    },
]
In your urls.py file add the following to the urlpatterns:
urlpatterns = [
    # ...
    path('', include('airavata_django_portal_commons.dynamic_apps.urls')),
]
Creating a dynamically loaded Django app: See https://apache-airavata-django-portal.readthedocs.io/en/latest/dev/custom_django_app/ for the latest information. Note that by default the cookiecutter template registers Django apps under the entry_point group name of airavata.djangoapp, but you can change this. Just make sure that when you call dynamic_apps.load you pass the name of the entry_point group as the second argument. Developing. Making a new release: Update the version in setup.cfg. Commit the update to setup.cfg. Tag the repo with the same version, with the format v${version_number}. For
example, if the version number in setup.cfg is "1.2" then tag the repo with
"v1.2".VERSION=...
git tag -m $VERSION $VERSION
git push --follow-tags
In a clean checkout:
cd /tmp/
git clone /path/to/airavata-django-portal-commons/ -b $VERSION
cd airavata-django-portal-commons
python3 -m venv venv
source venv/bin/activate
python3 -m pip install --upgrade pip build
python3 -m build
Push to pypi.org (optionally you can push to test.pypi.org; see https://packaging.python.org/tutorials/packaging-projects/ for more info):
python3 -m pip install --upgrade twine
python3 -m twine upload dist/* |
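For reference, a dynamically loaded app advertises itself to dynamic_apps.load through a setuptools entry point in the airavata.djangoapp group mentioned in the description above. A minimal sketch, with hypothetical package and module names:

```python
# setup.py of a hypothetical custom Django app package
from setuptools import setup, find_packages

setup(
    name="my-custom-djangoapp",
    version="0.1",
    packages=find_packages(),
    entry_points={
        # group name from the docs above; the target module/class are hypothetical
        "airavata.djangoapp": [
            "my_custom_djangoapp = my_custom_djangoapp.apps:MyCustomAppConfig",
        ],
    },
)
```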
airavata-django-portal-sdk | Airavata Django Portal SDKThe Airavata Django Portal SDK provides libraries that assist in developing
custom Django app extensions to theAiravata Django Portal.See the documentation athttps://airavata-django-portal-sdk.readthedocs.io/for
more details.Getting StartedTo integrate the SDK with an Airavata Django Portal custom app, addairavata-django-portal-sdkto theinstall_requireslist in your setup.cfg or setup.py file. Then
reinstall the Django app with pip install -e . (see your Airavata Django custom app's README for details). You can also just install the library with:
pip install airavata-django-portal-sdk
Migrations
django-admin makemigrations --settings=airavata_django_portal_sdk.tests.test_settings airavata_django_portal_sdk
Developing
Setting up dev environment
To generate the documentation, create a virtual environment and
then:
source venv/bin/activate
pip install --upgrade pip setuptools wheel
pip install -r requirements-dev.txt
Documentation
mkdocs serve
Running tests
pytest
or
django-admin test --settings airavata_django_portal_sdk.tests.test_settings
or use tox to run the tests in all supported Python environments:
tox
Running flake8
flake8 .
Automatically formatting Python code
autopep8 -i -aaa -r .
isort .
Making a new release
Update the version in setup.py.
Tag the repo with the same version, with the format v${version_number}. For
example, if the version number in setup.py is "1.2" then tag the repo with
"v1.2".VERSION=...
git tag -m $VERSION $VERSION
git push --follow-tags
In a clean checkout:
cd /tmp/
git clone /path/to/airavata-django-portal-sdk/ -b $VERSION
cd airavata-django-portal-sdk
python3 -m venv venv
source venv/bin/activate
python3 -m pip install --upgrade build
python3 -m build
Push to pypi.org. Optionally you can push to test.pypi.org. See https://packaging.python.org/tutorials/packaging-projects/ for more info.
python3 -m pip install --upgrade twine
python3 -m twine upload dist/* |
airavata-mft-cli | Airavata Managed File Transfers (MFT)
Apache Airavata MFT is a high-performance, multi-protocol data transfer engine for orchestrating data movement and operations across most cloud and on-premises storages. MFT aims to abstract the complexity of heterogeneous storages by providing a unified and simple interface for users to seamlessly access and move data across any storage endpoint. To accomplish this goal, MFT provides simple but high-performing tools to access most cloud and on-premises storages as seamlessly as users access local files on their workstations.
Apache Airavata MFT bundles easily deployable agents that auto-determine the optimum network path, with additional multi-channel, parallel data paths to optimize transfer performance and gain the maximum throughput between storage endpoints. MFT utilizes parallel agents to transfer data between endpoints to take advantage of multiple network links.
Try Airavata MFT
MFT requires Java 11+ and Python 3.10+ to install Airavata MFT in your environment. MFT currently supports Linux and macOS operating systems. Contributions to support Windows are welcome!
Download and Install
The following commands will download Airavata MFT onto your machine and start the MFT service:
pip3 install airavata-mft-cli
mft init
If the installer fails on M1 and M2 Macs complaining about grpcio installation, follow the solution mentioned here. You might have to uninstall already-installed grpcio and grpcio-tools distributions first.
For other common installation issues, please refer to the troubleshooting section.
To stop MFT after use:
mft stop
Registering Storages
First you need to register your storage endpoints with MFT in order to access them. Registering a storage is an interactive process, and you can easily register one without prior knowledge:
mft storage add
This will ask for the type of storage you need and the credentials to access it. To list already added storages, you can run:
mft storage list
Accessing Data in Storages
In Airavata MFT, we provide a unified interface to access the data in any storage. Users can access data in storages just as they access data on their computers. MFT converts user queries into storage-specific data representations (POSIX, block, objects, ...) internally:
mft ls <storage name>
mft ls <storage name>/<resource path>
Moving Data between Storages
For users, copying data between storages is as simple as copying data between directories on a local machine. MFT takes care of network path optimization, parallel data path selection, and the selection or creation of suitable transfer agents.
mft cp <source storage name>/<path> <destination storage name>/<path>
MFT is capable of auto-detecting directory copying versus file copying based on the path given.
Troubleshooting and Issue Reporting
This is our very first attempt at releasing Airavata MFT for community usage, and there might be lots of corner cases that we have not noticed. All the logs of the MFT service are available in ~/.mft/Standalone-Service-0.01/logs/airavata.log. If you see any error while using MFT, please report it on our GitHub issue page and we will respond as soon as possible. We really appreciate your contribution, as it will greatly help to improve the stability of the product.
Common issues
The following error can occur if you have a Python version lower than 3.10:
ERROR: Could not find a version that satisfies the requirement airavata-mft-cli (from versions: none)
ERROR: No matching distribution found for airavata-mft-cli
If the error still occurs after installing the right Python version, try creating a virtual environment:
source venv/bin/activate
pip install airavata-mft-cli |
airavata-mft-sdk | Python SDK for Apache Airavata Managed File Transfers (MFT) |
airavata-python-sdk | No description available on PyPI. |
airbag | airbag
WIP |
airball | Welcome to AIRBALL
AIRBALL is a package for running and managing flybys using REBOUND. It is an extension to REBOUND, the popular N-body integrator.
AIRBALL is currently in alpha testing. The APIs are subject to change without warning or backwards compatibility. Feedback and feature requests are very welcome.
Features
Logic for handling the geometry of adding, running, and removing a flyby object in a REBOUND simulation.
Stellar environments for generating and managing randomly generated stars from different stellar environments throughout the galaxy.
Initial mass functions for quickly generating samples from probability distributions.
astropy.units integration to help you manage the mess of units and scales.
Interactive examples for teaching and exploring AIRBALL's functionality.
Installation
AIRBALL is installable via pip with one simple command:
pip install airball
The following packages should automatically be installed along with AIRBALL: rebound, numpy, scipy, joblib, astropy.
Contributors
Garett Brown, University of Toronto, [email protected]
Hanno Rein, University of Toronto, [email protected]
Mohsin, University of Toronto, [email protected]
Chao-Ming Lam, University of Waterloo, [email protected]
He, Ivy Shi, and others.
AIRBALL is open source and you are invited to contribute to this project!
Acknowledgments
If you use this code or parts of this code for results presented in a scientific publication, we would greatly appreciate a citation.
License
AIRBALL is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
AIRBALL is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with airball. If not, see http://www.gnu.org/licenses/. |
airbase | 🌬 AirBase
An easy downloader for the AirBase air quality data.
AirBase is an air quality database provided by the European Environment Agency (EEA). The data is available for download at the portal, but
the interface makes it a bit time consuming to do bulk downloads. Hence, an easy
Python-based interface.
Read the full documentation at https://airbase.readthedocs.io.
🔌 Installation
To install airbase, simply run:
$ pip install airbase
🚀 Getting Started
🗺 Get info about available countries and pollutants:
>>> import airbase
>>> client = airbase.AirbaseClient()
>>> client.all_countries
['GR', 'ES', 'IS', 'CY', 'NL', 'AT', 'LV', 'BE', 'CH', 'EE', 'FR', 'DE', ...
>>> client.all_pollutants
{'k': 412, 'CO': 10, 'NO': 38, 'O3': 7, 'As': 2018, 'Cd': 2014, ...
>>> client.pollutants_per_country
{'AD': [{'pl': 'CO', 'shortpl': 10}, {'pl': 'NO', 'shortpl': 38}, ...
>>> client.search_pollutant("O3")
[{'pl': 'O3', 'shortpl': 7}, {'pl': 'NO3', 'shortpl': 46}, ...
🗂 Request download links from the server and save the resulting CSVs into a directory:
>>> r = client.request(country=["NL", "DE"], pl="NO3", year_from=2015)
>>> r.download_to_directory(dir="data", skip_existing=True)
Generating CSV download links...
100%|██████████| 2/2 [00:03<00:00, 2.03s/it]
Generated 12 CSV links ready for downloading
Downloading CSVs to data...
100%|██████████| 12/12 [00:01<00:00, 8.44it/s]
💾 Or concatenate them into one big file:
>>> r = client.request(country="FR", pl=["O3", "PM10"], year_to=2014)
>>> r.download_to_file("data/raw.csv")
Generating CSV download links...
100%|██████████| 2/2 [00:12<00:00, 7.40s/it]
Generated 2,029 CSV links ready for downloading
Writing data to data/raw.csv...
100%|██████████| 2029/2029 [31:23<00:00, 1.04it/s]
📦 Download the entire dataset (not for the faint of heart):
>>> r = client.request()
>>> r.download_to_directory("data")
Generating CSV download links...
100%|██████████| 40/40 [03:38<00:00, 2.29s/it]
Generated 146,993 CSV links ready for downloading
Downloading CSVs to data...
0%| | 299/146993 [01:50<17:15:06, 2.36it/s]
🌡 Don't forget to get the metadata about the measurement stations:
>>> client.download_metadata("data/metadata.tsv")
Writing metadata to data/metadata.tsv...
🚆 Command line interface
$ airbase download --help
Usage: airbase download [OPTIONS]
Download all pollutants for all countries.
The -c/--country and -p/--pollutant options allow you to specify which data to download, e.g.
- download only Norwegian, Danish and Finnish sites:
airbase download -c NO -c DK -c FI
- download only SO2, PM10 and PM2.5 observations:
airbase download -p SO2 -p PM10 -p PM2.5
Options:
-c, --country [AD|AL|AT|...]
-p, --pollutant [k|CO|NO|...]
--path PATH [default: data]
--year INTEGER [default: 2022]
-O, --overwrite Re-download existing files.
-q, --quiet No progress-bar.
--help Show this message and exit.
🛣 Roadmap
Parallel CSV downloads (contributed by @avaldebe)
CLI to avoid using Python altogether (contributed by @avaldebe)
Data wrangling module for AirBase output data |
airbnb | Copyright (C) 2004 Sam Hocevar <[email protected]>
Everyone is permitted to copy and distribute verbatim or modified copies of this license document, and changing it is allowed as long as the name is changed.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You just DO WHAT THE FUCK YOU WANT TO. |
airbnb-script | airbnb_script
Stable version 1.0.6
This script can help you start your own Internet business anywhere in the world. The system allows you to run a super-modern, interactive and fast website.
Free AirBNB script for Real Estate agencies
The script supports:
Selling Entertainment, Tours & Everything!
AirBNB script for Property Rental
Classified for Real Estate agency
Demo:
Website: https://mirador.online
Admin: https://mirador.online/admin/
Login: demo
Password: demodemo
Real example
Canarian Islands, Tenerife — Things to do
Modules
Mobile version
Our script is based on ReactJS technology and works
great on all smartphones and tablets — check!
SEO
The site is perfectly adapted for search engines and supports the Open-Graph & JSON-LD protocols. The news section allows you to post
SEO text with automatic relinking.
Discount system
Allows you to set global discounts or hold one-time promotions. The discount is calculated from the amount of your commission, which is included in the price of each offer.
TripAdvisor Reviews
Automatic parsing of reviews with auto-rewritten
text. Gradual addition of reviews to your website with cross-linking, for a natural increase of content.
Flexible price management
The final price of your offers on the site is formed from the basic price of the supplier and your commission.
Support Chat
A dynamic form placed on all pages of the site, allowing customers to send requests to your email.
Customers and orders
All orders arriving in the system carry extended price data, with a convenient customer management system.
Personal Area
You can allow your users to add their objects to your website through their personal account.
Other features:
Booking in two clicks — the interface allows you to make an order
easily and simply!
White-Label — we do not place our logos.
Unlimited number of copies of your script.
Auto-scaling photos — don't think about the size of photos; our script optimizes them automatically.
YouTube video — specify the link to a video and it will appear on the offer page.
Caching algorithm — pages load in under 1 second.
SSL security certificate based on CloudFlare. Your site will work
through a secure https protocol.
Email notifications. Receive email notifications for orders.
Auto-update system.
In the near future:
Integration with several languages.
Payments by cards
Autoposting to Facebook, Instagram
Installation
Script made with: ReactJS, Django v2, Ubuntu. Installation happens automatically in 5 minutes via the Docker system. First installation is free!
Detailed installation instructions you will receive by email. If necessary, we can adapt templates for your business — the cost depends on the amount of work.
Contact us
Contact Us At Facebook |
airbornerf-aviation-sdk | No description available on PyPI. |
airbornerf-netconfig | No description available on PyPI. |
airbornerf-sdk | No description available on PyPI. |
airboss | Airboss
Manage your software projects' lifecycle.
How to release a project
# This should be run from the project's directory
airboss release --infra aws --platform docker --app profitpulse |
airbotics-agent | The Airbotics Agent
This repository contains the source code for the Airbotics agent. The agent is designed to run as a background service that is responsible for maintaining communications with:
The MQTT broker in the Airbotics cloud.
ROS 2 on the host machine.
The Docker Engine on the host machine (optional).
Documentation
Check the latest documentation to get started.
License
This software is licensed under Apache 2.0 |
airbrake | airbrake-python
Note: Python 3.4+ users are advised to use the new Airbrake Python notifier, which supports the async API and code hunks. Python 2.7 users should continue to use this notifier.
Airbrake integration for python that quickly and easily plugs into your existing code.
import airbrake

logger = airbrake.getLogger()

try:
    1/0
except Exception:
    logger.exception("Bad math.")
airbrake-python is used most effectively through its logging handler, and uses the Airbrake V3 API for error reporting.
install
To install airbrake-python, run:
$ pip install -U airbrake
setup
The easiest way to get set up is with a few environment variables:
export AIRBRAKE_API_KEY=*****
export AIRBRAKE_PROJECT_ID=12345
export AIRBRAKE_ENVIRONMENT=dev
and you're done!
Otherwise, you can instantiate your AirbrakeHandler by passing these values as arguments to the getLogger() helper:
import airbrake

logger = airbrake.getLogger(api_key=*****, project_id=12345)

try:
    1/0
except Exception:
    logger.exception("Bad math.")
By default, airbrake will catch and send uncaught exceptions. To avoid this behaviour, use the send_uncaught_exc option:
logger = airbrake.getLogger(api_key=*****, project_id=12345, send_uncaught_exc=False)
setup for Airbrake On-Premise and other compatible back-ends (e.g. Errbit)
Airbrake Enterprise and self-hosted alternatives, such as Errbit, provide a compatible API.
You can configure a different endpoint than the default (https://api.airbrake.io) by either:
Setting an environment variable:
export AIRBRAKE_HOST=https://self-hosted.errbit.example.com/
Or passing a host argument to the getLogger() helper:
import airbrake

logger = airbrake.getLogger(api_key=*****, project_id=12345, host="https://self-hosted.errbit.example.com/")
adding the AirbrakeHandler to your existing logger
import logging
import airbrake

yourlogger = logging.getLogger(__name__)
yourlogger.addHandler(airbrake.AirbrakeHandler())
By default, the AirbrakeHandler only handles logs of level ERROR (40) and above; the threshold can be adjusted with the standard logging API, as in the sketch below.
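A minimal sketch, not from the original README; it assumes only the standard logging.Handler.setLevel API, and the chosen level is illustrative:
import logging
import airbrake

handler = airbrake.AirbrakeHandler()
# Illustrative: report WARNING (30) and above instead of the default ERROR (40)
handler.setLevel(logging.WARNING)
logging.getLogger(__name__).addHandler(handler)
Additional Options
More options are available to configure this library.
For example, you can set the environment to add more context to your errors.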
One way is by setting the AIRBRAKE_ENVIRONMENT env var:
export AIRBRAKE_ENVIRONMENT=staging
Or you can set it more explicitly when you instantiate the logger:
import airbrake

logger = airbrake.getLogger(api_key=*****, project_id=12345, environment='production')
The available options are:
environment, defaults to env var AIRBRAKE_ENVIRONMENT
host, defaults to env var AIRBRAKE_HOST or https://api.airbrake.io
root_directory, defaults to None
timeout, defaults to 5 (number of seconds before each request times out)
send_uncaught_exc, defaults to True (whether or not to send uncaught exceptions)
giving your exceptions more context
import airbrake

logger = airbrake.getLogger()

def bake(**goods):
    try:
        temp = goods['temperature']
    except KeyError as exc:
        logger.error("No temperature defined!", extra=goods)
Setting severity
Severity (see https://airbrake.io/docs/airbrake-faq/what-is-severity/) allows categorizing how severe an error is. By
default, it's set to error. To redefine severity, call build_notice with the needed severity value. For example:
notice = airbrake.build_notice(exception, severity="critical")
airbrake.notify(notice)
Using this library without a logger
You can create an instance of the notifier directly, and send
errors inside exception blocks.
from airbrake.notifier import Airbrake

ab = Airbrake(project_id=1234, api_key='fake')

try:
    amazing_code()
except ValueError as e:
    ab.notify(e)
except:
    # capture all other errors
    ab.capture()
Running Tests Manually
Create your environment and install the test requirements:
virtualenv venv
source venv/bin/activate
pip install .
python setup.py test
To run via nose (unit/integration tests):
source venv/bin/activate
pip install -r ./test-requirements.txt
source venv/bin/activate
nosetests
Run all tests, including multi-env syntax, and coverage tests:
pip install tox
tox -v --recreate
It's suggested to make sure tox will pass, as CI runs this.
tox needs to pass before any PRs are merged.
The airbrake.io API docs used to implement airbrake-python are here: https://airbrake.io/docs/api/ |
airbrake-flask | airbrake-flask is a fast library that uses the amazing requests library to send error and exception messages to airbrake.io. You can use this library with the amazing gevent library to send your requests asynchronously.
Example Usage with gevent
import os
import sys

import gevent
from airbrake.airbrake import AirbrakeErrorHandler
from flask import Flask, request, got_request_exception

app = Flask(__name__)
ENV = ('ENV' in os.environ and os.environ['ENV']) or 'prod'

def log_exception(error):
    handler = AirbrakeErrorHandler(api_key="PUT_YOUR_AIRBRAKE_KEY_HERE", env_name=ENV, request=request)
    gevent.spawn(handler.emit, error, sys.exc_info())

got_request_exception.connect(log_exception, app)
Contribute
This library is hosted on Github and you can contribute there: http://github.com/kienpham2000/airbrake-flask |
airbrake-integrations | Airbrake python notifier integrations
Airbrake Python Integrations README
Airbrake Python README
Integrations built on top of the airbrake python notifier for quick use with popular frameworks and libraries.
Introduction
Airbrake is an online tool that provides robust exception
tracking in any of your Python applications. In doing so, it allows you to easily
review errors, tie an error to an individual piece of code, and trace the cause
back to recent changes. The Airbrake dashboard provides easy categorization,
searching, and prioritization of exceptions so that when errors occur, your team
can quickly determine the root cause.
Key features
This library is built on top of Airbrake Python. The difference between Airbrake Python and Airbrake Python Integrations is that the airbrake-integrations package is just a collection of integrations with frameworks or other libraries. The airbrake package is the core library that performs exception sending and other heavy lifting.
Normally, you just need to depend on this package, select the integration you are
interested in and follow the instructions for it. If the framework or
application you use does not have an integration available, you can depend on
the airbrake package and ignore this package entirely.
The list of integrations that are available in this package includes:
Django
Flask
Twisted
Installation
To install airbrake-integrations, run:
pip install airbrake-integrations
It's highly suggested that you add the package to your requirements.txt file:
pip freeze > requirements.txt
Configuration
Django
To install the middleware and catch exceptions in your views:
Add the following to your settings.py file, replacing the values with your
project's id and key:
AIRBRAKE = {
    "PROJECT_ID": 123,
    "API_KEY": "123abcde",
    "ENVIRONMENT": "test"
}
Add the middleware to your settings.py file, making sure that the airbrake middleware is at the top of the list. Django processes exceptions through the middleware list from the end to the start, so placing it at the top ensures it sees exceptions after all other middleware and catches everything.
MIDDLEWARE = [
    'airbrake_integrations.django.middleware.AirbrakeNotifierMiddleware',
    ...
]
Note that any middleware that catches exceptions and does not allow them to
flow through will not be sent to airbrake. It's important to make sure any middleware that also processes exceptions re-raises the original exception:
def process_exception(self, request, exception):
    raise exception
An example django app can be found in /examples/django
Flask
To catch exceptions, use the Airbrake extension:
Make sure the airbrake configuration fields are set:
AIRBRAKE_PROJECT_ID = 123456
AIRBRAKE_API_KEY = '1290180gsdf8snfaslfa0'
AIRBRAKE_ENVIRONMENT = "production"And then install the extension!fromairbrake_integrations.flask.appimportAirbrakeAppapp=Flask(__name__)app.config.from_pyfile('config.cfg')ab=AirbrakeApp(app)An example flask app can be found in /examples/flaskTo run the example:exportFLASK_APP=example.py
flask run
Twisted
from airbrake_integrations.twisted.observer import AirbrakeLogObserver
from twisted.logger import globalLogBeginner, Logger

settings = {"AIRBRAKE": {"PROJECT_ID": 1234, "API_KEY": "1234567890asdfghjkl"}}
observers = [AirbrakeLogObserver(settings)]
globalLogBeginner.beginLoggingTo(observers, redirectStandardIO=False)

log = Logger()
try:
    raise Exception("A gremlin in the system is angry")
except:
    log.failure("Error")
This creates an observer that watches the globalLogPublisher twisted object and checks all events for any possible exceptions.
An example twisted app can be found in /examples/twisted |
airbrake-tornado | # airbrake-tornado
Airbrake notifier for Tornado web framework.
## Installation
Install via pip:
pip install airbrake-tornado
## Usage
```python
from airbrake import airbrake

# In your RequestHandler:
API_KEY = "Airbrake API key"
ENV_NAME = "Airbrake env name"

def write_error(self, status_code, **kwargs):
    exc_info = kwargs.get("exc_info")
    if exc_info and status_code == 500:
        airbrake.notify(
            exc_info,
            self.request,
            "My-cool-app",
            api_key=self.API_KEY,
            environment=self.ENV_NAME)
```
## License
airbrake-tornado is available under the MIT license. See the [LICENSE](LICENSE) file for more info. |
airbridge | No description available on PyPI. |
airbrite | UNKNOWN |
airbugmaker | 1. Installation (Python 3.7+ recommended)
pip install dubborequests
2. Upgrading the package
pip install --upgrade dubborequests
3. Examples
Get dubbo service details:
# Import
import dubborequests

# Get dubbo service details
data = dubborequests.search('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService')
Get all methods under a service:
# Import
import dubborequests

# Get all methods of a dubbo service
data = dubborequests.list('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService')
# Get a specific method of a dubbo service
data = dubborequests.list('cn.com.xxx.sso.ehr.api.dubbo.SsoEmpInfoService', 'login')
Get the service's ip and port via zookeeper, and test the dubbo interface with a Telnet command:
import dubborequests
from dubborequests import Config

# First configure the zookeeper registry addresses
Config.zookeeper_url_list = ['192.168.240.15:2181', '192.168.240.15:2182', '192.168.240.15:2183']

invoke_data = {
    "service_name": "cn.com.xxxxx.sso.ehr.api.dubbo.SsoEmpInfoService",
    "method_name": "login",
    "data": {"account": "xxxx", "password": "xxxx"}
}
# Get the service's ip and port via zookeeper, then test the dubbo interface with a Telnet command
data = dubborequests.zk_invoke(**invoke_data)
Test a dubbo interface with a Telnet command:
import dubborequests

invoke_data = {
    "ip": 'xxxx',
    "port": 7777,
    "service_name": "cn.com.xxxxx.sso.ehr.api.dubbo.SsoEmpInfoService",
    "method_name": "login",
    "data": {"account": "xxxx", "password": "xxxx"}
}
# Test the dubbo interface with a Telnet command
data = dubborequests.telnet_invoke(**invoke_data) |
airbus | No description available on PyPI. |
airbyte | PyAirbyte
PyAirbyte brings the power of Airbyte to every Python developer.
Secrets Management
PyAirbyte can auto-import secrets from the following sources:
Environment variables.
Variables defined in a local .env ("Dotenv") file.
Google Colab secrets.
Manual entry via getpass.
Note: Additional secret store options may be supported in the future. More info here.
Retrieving Secrets
from airbyte import get_secret, SecretSource

source = get_connection("source-github")
source.set_config({
    "credentials": {
        "personal_access_token": get_secret("GITHUB_PERSONAL_ACCESS_TOKEN"),
    },
})
The get_secret() function accepts an optional source argument of enum type SecretSource. If omitted or set to SecretSource.ANY, PyAirbyte will search all available secrets sources. If source is set to a specific source, then only that source will be checked. If a list of SecretSource entries is passed, then the sources will be checked using the provided ordering.
By default, PyAirbyte will prompt the user for any requested secrets that are not provided via other secret managers. You can disable this prompt by passing prompt=False to get_secret().
Connector compatibility
To make a connector compatible with PyAirbyte, the following requirements must be met:
The connector must be a Python package, with a pyproject.toml or a setup.py file.
In the package, there must be a run.py file that contains a run method. This method should read arguments from the command line, and run the connector with them, outputting messages to stdout.
The pyproject.toml or setup.py file must specify a command line entry point for the run method called source-<connector name>. This is usually done by adding a console_scripts section to the pyproject.toml file, or an entry_points section to the setup.py file. For example:
[tool.poetry.scripts]
source-my-connector = "my_connector.run:run"
setup(
    ...
    entry_points={
        'console_scripts': [
            'source-my-connector = my_connector.run:run',
        ],
    },
    ...
)
To publish a connector to PyPI, specify the pypi section in the metadata.yaml file. For example:
data:
  # ...
  remoteRegistries:
    pypi:
      enabled: true
      packageName: "airbyte-source-my-connector"
Validating source connectors
To validate a source connector for compliance, the airbyte-lib-validate-source script can be used. It can be used like this:
airbyte-lib-validate-source --connector-dir . --sample-config secrets/config.json
The script will install the python package in the provided directory, and run the connector against the provided config. The config should be a valid JSON file, with the same structure as the one that would be provided to the connector in Airbyte. The script will exit with a non-zero exit code if the connector fails to run.
For a more lightweight check, the --validate-install-only flag can be used. This will only check that the connector can be installed and that it returns a spec; no sample config is required.
Contributing
To learn how you can contribute to PyAirbyte, please see our PyAirbyte Contributors Guide.
Changelog and Release Notes
For a version history and list of all changes, please see our GitHub Releases page.
To close, two small sketches follow, illustrating points described above: the optional source argument to get_secret(), and the run.py entry point required for connector compatibility.
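Both sketches are hypothetical, not from the original README. The first uses only behaviours stated above (the source argument, SecretSource.ANY, and prompt=False); the secret name is the one from the README example:
from airbyte import get_secret, SecretSource

# Search all available secret sources (the default behaviour), but never
# fall back to an interactive prompt if the secret cannot be found
token = get_secret("GITHUB_PERSONAL_ACCESS_TOKEN", source=SecretSource.ANY, prompt=False)
The second sketches a typical run.py for a connector; it assumes the connector is built with the Airbyte CDK (airbyte_cdk and SourceMyConnector are assumptions, not named in the README):
# run.py
import sys

from airbyte_cdk.entrypoint import launch  # assumed CDK helper
from source_my_connector import SourceMyConnector  # hypothetical connector class

def run():
    # Read arguments from the command line and run the connector,
    # writing Airbyte protocol messages to stdout
    source = SourceMyConnector()
    launch(source, sys.argv[1:])
The run method above matches the README's requirement that run.py read command-line arguments and output messages to stdout. |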