package | package-description
---|---
aiodown | An asyncio-based file downloader made in Python 3 using httpx.

Requirements: Python 3.8 or higher; httpx 0.14 or higher; async-files 0.4 or higher.

Installation (NOTE: if `python3` is "not a recognized command", try using `python` instead). For the latest stable version:

```shell
python3 -m pip install aiodown
```

For the latest development version:

```shell
python3 -m pip install git+https://github.com/AmanoTeam/aiodown
```

What's left to do? Write the API documentation. Show some examples.

License: MIT © 2021 AmanoTeam

Changelog. All notable changes to this project will be documented in this file. The format is based on Keep a Changelog.

1.0.4 (March 14th, 2021). Added: assert that the status code is 200. Changed: improved code based on Codacy; downloads are run in an executor instead of threads.

1.0.3 (March 13th, 2021). Added: `aiodown.errors` package.

1.0.2 (March 13th, 2021). Added: `client.rem()` method; `download.get_attempts()`, `download.get_id()`, `download.get_retries()` and `download.get_start_time()` methods.

1.0.1 (March 12th, 2021). Added: support for download retries; `await download.resume()` method. Changed: `client.add()` replaces the `client.download()` method.

1.0.0 (March 12th, 2021). First release. |
aiodownload |  |
aio-downloader | # To be continued

## Main Functions:

- download with cmdline
- download with redis
- download with file |
aiodownloads | Asynchronous downloads.

Installation:

```shell
pip install aiodownloads
```

Usage: inherit `aiodownloads.Downloader`, then override the `handle_success` and `handle_fail` methods.

Example: download a list of URLs.

```python
import asyncio

from aiodownloads import Downloader

urls = [
    'https://httpbin.org/status/200',
    'https://httpbin.org/status/400',
]

class UrlsDownloader(Downloader):
    async def handle_success(self, resp, item):
        content = await resp.read()
        # save content stuff

    async def handle_fail(self, resp, item):
        ...

url_downloader = UrlsDownloader()
asyncio.run(url_downloader.download(urls))
``` |
aio-dprcon | A library and console client for the DarkPlaces RCON protocol.

Installation:

```shell
$ pip install aio-dprcon
```

To use the console tool on Windows, please also install pyreadline:

```shell
$ pip install pyreadline
```

Console tool usage:

```shell
$ dprcon add                  # Add a server
$ dprcon refresh SERVER_NAME  # Fill completion cache (optional)
$ dprcon connect SERVER_NAME  # Launch interactive RCON shell
```

Or watch an asciicast here: https://asciinema.org/a/148143

Library API: to be written. |
aio-dtls | aio_dtls. This implementation of DTLS was made to connect to IoTivity, which uses mbedtls rather than the standard cipher. Handshake over DTLS and TLS for ECDHE_ANON is working now. A simple example of use can be seen in example.py. An example implementation of the CoAPS protocol is in Bubot_CoAP. |
aiodtnsim | A minimal framework for performing DTN simulations based on Python 3.7 and asyncio. Note that this project is still a work in progress.

Requirements: Python 3.7+; NumPy; tqdm for the progress bars; dtn-tvg-util. For uPCN integration, uPCN v0.7.0+ with the pyupcn module installed in the Python environment.

Getting Started: just install aiodtnsim via pip, e.g., in a virtual environment:

```shell
pip install aiodtnsim
```

For generating satellite scenarios (needed by the example script), you need to install the dtn-tvg-util Ring Road dependencies additionally:

```shell
pip install "dtn-tvg-util[ring_road]"
```

Now, you should be able to use the example scripts provided in the root directory of aiodtnsim to perform simple simulation runs, e.g. via:

```shell
bash examples/example_test_run.sh
```

Development Setup: first, clone the aiodtnsim, dtn-tvg-util, and upcn repositories (the latter only for using uPCN emulation) and change into the aiodtnsim directory. Now, create a virtual environment and install the required dependencies:

```shell
python3 -m venv --without-pip .venv
curl -sS https://bootstrap.pypa.io/get-pip.py | .venv/bin/python
source .venv/bin/activate
pip install -e .
pip install -e "../dtn-tvg-util[ring_road,gs_placement]"
pip install -U -r ../upcn/pyupcn/requirements.txt
python ../upcn/pyupcn/install.py
```

If you want to perform a run with uPCN, ensure the latest binary has been built:

```shell
cd ../upcn
make
```

License: aiodtnsim is provided under the MIT license. See LICENSE for details.

Acknowledgments: the simulation event loop is based upon code by Damon Wischik. |
aio-dt-protocol | An asynchronous wrapper over the Chromium browser debugger protocol. Runs are tested only on Windows and Linux. Has a single dependency: https://github.com/aaugustin/websockets

Installation:

```shell
pip install aio-dt-protocol
```

Examples:

```python
import asyncio

from aio_dt_protocol import Browser
from aio_dt_protocol import BrowserName
from aio_dt_protocol.data import KeyEvents

DEBUG_PORT: int = 9222
BROWSER_NAME: str = BrowserName.CHROME
PROFILE_NAME: str = BROWSER_NAME.capitalize() + "_Profile"

async def main() -> None:
    # ? Prints to the console everything that arrives over the page connection.
    # ? Useful during development.
    # async def action_printer(data: dict) -> None:
    #     print(data)
    # browser, conn = await Browser.run(callback=action_printer)

    # ? If a browser is already running on the given port, connect to it.
    # ? Otherwise, launch a new browser.
    browser, conn = await Browser.run(
        debug_port=DEBUG_PORT,
        browser_name=BROWSER_NAME,
        profile_path=PROFILE_NAME
    )

    print("[- GO TO GOOGLE ... -]")
    await conn.Page.navigate("https://www.google.com")

    print("[- EMULATE INPUT TEXT ... -]")
    input_node = await conn.DOM.querySelector("[type=search]")

    # ? Emulate a click on the search box
    await input_node.click()
    await asyncio.sleep(1)

    # ? Insert text
    await conn.Input.insertText("github PieceOfGood")
    await asyncio.sleep(1)

    # ? Emulate pressing the Enter key
    await conn.extend.action.sendKeyEvent(KeyEvents.enter)
    await asyncio.sleep(1)

    # ? Pressing Enter can be replaced with a click on the button
    # ? using the protocol
    # submit_button_selector = "div:not([jsname])>center>[type=submit]:not([jsaction])"
    # submit_button = await conn.DOM.querySelector(submit_button_selector)
    # await submit_button.click()

    # ? Or perform the click using JavaScript
    # click_code = f"""\
    # document.querySelector("{submit_button_selector}").click();
    # """
    # await conn.extend.injectJS(click_code)

    print("[- WAIT FOR CLOSE PAGE ... -]")
    # ? The loop keeps running for as long as the connection exists.
    await conn.waitForClose()
    print("[- DONE -]")

if __name__ == '__main__':
    asyncio.run(main())
```

Listeners can easily be registered on a page; they will be called on the client (Python) side. To do this, register a callable as such a listener. This can be done in two ways: manually, by passing the function name as a string to the `addBinding()` method of the `Runtime` domain; or via a more featureful wrapper of the first approach, the connection's `bindFunction()` method. The second way is less verbose. Under the hood it adds a `py_call()` utility to the page context, which takes the name of the function (listener) as its first argument, followed by any number of positional arguments that the function expects; it also allows attaching any number of extra arguments that are passed to the function last. For example:

```python
html = """\
<html lang="ru">
<head>
    <meta charset="utf-8" />
    <title>Test application</title>
</head>
<body>
    <button id="knopka">Push me</button>
</body>
<script>
    const btn = document.querySelector('#knopka');
    btn.addEventListener('click', () => {
        py_call("test_func", 1, "test")
    });
</script>
</html>
"""

# ? number and text are passed from the browser, while bind_arg is given at registration
async def test_func(number: int, text: str, bind_arg: dict) -> None:
    print(f"[- test_func -] Called with args:\n\tnumber: {number}"
          f"\n\ttext: {text}\n\tbing_arg: {bind_arg}")

await conn.bindFunction(
    test_func,                       # ! listener
    {"name": "test", "value": True}  # ! bind_arg
)

# ? If you expect to attach substantial functionality to the page,
# ? it can be done in a single call.
# await conn.bindFunctions(
#     (test_func, [ {"name": "test", "value": True} ]),
#     # (any_awaitable1, [1, 2, 3])
#     # (any_awaitable2, [])
# )

await conn.Page.navigate(html)
```

Headless. To launch the browser in headless mode, pass an empty string to the argument (`profile_path`) that takes the path to the profile folder.

```python
import asyncio

from aio_dt_protocol import Browser, BrowserName
from aio_dt_protocol.utils import save_img_as, async_util_call

DEBUG_PORT: int = 9222
BROWSER_NAME: str = BrowserName.CHROME

async def main() -> None:
    # ? If a browser is already running on the given port, connect to it.
    # ? Otherwise, launch a new browser.
    browser, conn = await Browser.run(
        debug_port=DEBUG_PORT,
        browser_name=BROWSER_NAME,
        profile_path=""
    )

    print("[- WAITING PAGE -]")
    conn = await browser.waitFirstTab()

    print("[- GO TO GOOGLE -]")
    await conn.Page.navigate("https://www.google.com")

    print("[- MAKE SCREENSHOT -]")
    await async_util_call(
        save_img_as, "google.png", await conn.extend.makeScreenshot()
    )

    print("[- CLOSE BROWSER -]")
    await conn.Browser.close()
    print("[- DONE -]")

if __name__ == '__main__':
    asyncio.run(main())
```

Custom serializer. Since the protocol exchanges data in JSON format and the standard implementation is used under the hood, the global `Serializer` object is used to swap out this mechanism. For example:

```python
from aio_dt_protocol import Browser, Serializer
from msgspec import json

async def main() -> None:
    Serializer.decode = json.decode
    Serializer.encode = lambda x: json.encode(x).decode("utf-8")
    browser, conn = await Browser.run()
    ...
```

Be careful! The method that serializes data to JSON must return type `str`, because only in that case is the message sent in a text frame, which is what the protocol expects. |
aioduckdb | aioduckdb provides a friendly, async interface to DuckDB databases. It has been ported from the original aiosqlite module. It replicates the duckdb module, but with async versions of all the standard connection and cursor methods, plus context managers for automatically closing connections and cursors:

```python
async with aioduckdb.connect(...) as db:
    await db.execute("INSERT INTO some_table ...")
    await db.commit()

    async with db.execute("SELECT * FROM some_table") as cursor:
        async for row in cursor:
            ...
```

It can also be used in the traditional, procedural manner:

```python
db = await aioduckdb.connect(...)
cursor = await db.execute('SELECT * FROM some_table')
row = await cursor.fetchone()
rows = await cursor.fetchall()
await cursor.close()
await db.close()
```

Install: aioduckdb is compatible with Python 3.6 and newer. ~~You can install it from PyPI:~~ Not currently on PyPI.

Details: aioduckdb allows interaction with DuckDB databases on the main AsyncIO event loop without blocking execution of other coroutines while waiting for queries or data fetches. It does this by using a single, shared thread per connection. This thread executes all actions within a shared request queue to prevent overlapping actions.

Connection objects are proxies to the real connections, contain the shared execution thread, and provide context managers to handle automatically closing connections. Cursors are similarly proxies to the real cursors, and provide async iterators to query results.

License: aioduckdb is copyright Salvador Pardiñas, and licensed under the MIT license. I am providing code in this repository to you under an open source license. This is my personal repository; the license you receive to my code is from me and not from my employer. See the LICENSE file for details. Big thanks to Amethyst Reese for the original aiosqlite repository. |
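The single-thread-per-connection design described above can be sketched in a few lines: a dedicated worker thread consumes callables from a queue one at a time and hands results back to the event loop through futures. This is a generic illustration of the pattern, not aioduckdb's actual implementation; `ThreadedExecutor` is a hypothetical name.

```python
import asyncio
import queue
import threading

class ThreadedExecutor:
    """Run blocking calls one at a time on a dedicated thread."""

    def __init__(self):
        self._queue = queue.Queue()
        self._thread = threading.Thread(target=self._worker, daemon=True)
        self._thread.start()

    def _worker(self):
        # Execute queued jobs sequentially so actions never overlap.
        while True:
            job = self._queue.get()
            if job is None:
                break
            future, fn = job
            try:
                result = fn()
                future.get_loop().call_soon_threadsafe(future.set_result, result)
            except Exception as exc:
                future.get_loop().call_soon_threadsafe(future.set_exception, exc)

    async def run(self, fn):
        # Schedule `fn` on the worker thread and await its result
        # without blocking the event loop.
        future = asyncio.get_running_loop().create_future()
        self._queue.put((future, fn))
        return await future

    def close(self):
        self._queue.put(None)

async def main():
    ex = ThreadedExecutor()
    result = await ex.run(lambda: sum(range(10)))
    ex.close()
    return result

print(asyncio.run(main()))  # prints 45
```

A real driver would put the database handle behind `fn` so every query for one connection runs on the same thread, which is what keeps thread-unsafe handles safe to use from asyncio.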
aiodxf | asyncio port of dxf.

Usage: the aiodxf command-line tool uses the following environment variables:

- DXF_HOST - Host where the Docker registry is running.
- DXF_INSECURE - Set this to 1 if you want to connect to the registry using http rather than https (which is the default).
- DXF_USERNAME - Name of user to authenticate as.
- DXF_PASSWORD - User's password.
- DXF_AUTHORIZATION - HTTP Authorization header value.
- DXF_AUTH_HOST - If set, always perform token authentication to this host, overriding the value returned by the registry.
- DXF_PROGRESS - If this is set to 1, a progress bar is displayed (on standard error) during push-blob and pull-blob. If this is set to 0, a progress bar is not displayed. If this is set to any other value, a progress bar is only displayed if standard error is a terminal.
- DXF_BLOB_INFO - Set this to 1 if you want pull-blob to prepend each blob with its digest and size (printed in plain text, separated by a space and followed by a newline).
- DXF_CHUNK_SIZE - Number of bytes pull-blob should download at a time. Defaults to 8192.
- DXF_SKIPTLSVERIFY - Set this to 1 to skip TLS certificate verification.
- DXF_TLSVERIFY - Optional path to custom CA bundle to use for TLS verification.

You can use the following options with dxf. Supply the name of the repository you wish to work with in each case as the second argument.

- `aiodxf push-blob <repo> <file> [@alias]` - Upload a file to the registry and optionally give it a name (alias). The blob's hash is printed to standard output. The hash or the alias can be used to fetch the blob later using pull-blob.
- `aiodxf pull-blob <repo> <hash>|<@alias>...` - Download blobs from the registry to standard output. For each blob you can specify its hash, prefixed by sha256: (remember the registry is content-addressable) or an alias you've given it (using push-blob or set-alias).
- `aiodxf blob-size <repo> <hash>|<@alias>...` - Print the size of blobs in the registry. If you specify an alias, the sum of all the blobs it points to will be printed.
- `aiodxf del-blob <repo> <hash>|<@alias>...` - Delete blobs from the registry. If you specify an alias, the blobs it points to will be deleted, not the alias itself. Use del-alias for that.
- `aiodxf set-alias <repo> <alias> <hash>|<file>...` - Give a name (alias) to a set of blobs. For each blob you can either specify its hash (as printed by get-blob) or, if you have the blob's contents on disk, its filename (including a path separator to distinguish it from a hash).
- `aiodxf get-alias <repo> <alias>...` - For each alias you specify, print the hashes of all the blobs it points to.
- `aiodxf del-alias <repo> <alias>...` - Delete each specified alias. The blobs they point to won't be deleted (use del-blob for that), but their hashes will be printed.
- `aiodxf list-aliases <repo>` - Print all the aliases defined in the repository.
- `aiodxf list-repos` - Print the names of all the repositories in the registry. Not all versions of the registry support this.
- `aiodxf get-digest <repo> <alias>...` - For each alias you specify, print the hash of its configuration blob. For an alias created using dxf, this is the hash of the first blob it points to. For a Docker image tag, this is the same as `docker inspect alias --format='{{.Id}}'`.

Certificates: if your registry uses SSL with a self-issued certificate, you'll need to supply dxf with a set of trusted certificate authorities. You can set the DXF_TLSVERIFY environment variable to the path of a PEM file containing the trusted certificate authority certificates for the command-line tool, or pass the tlsverify option to the module.

Authentication tokens: dxf automatically obtains Docker registry authentication tokens using your DXF_USERNAME and DXF_PASSWORD, or DXF_AUTHORIZATION, environment variables as necessary. However, if you wish to override this then you can use the following command:

- `aiodxf auth <repo> <action>...` - Authenticate to the registry using DXF_USERNAME and DXF_PASSWORD, or DXF_AUTHORIZATION, and print the resulting token. action can be pull, push or *.

If you assign the token to the DXF_TOKEN environment variable, for example:

```shell
DXF_TOKEN=$(aiodxf auth fred/datalogger pull)
```

then subsequent dxf commands will use the token without needing DXF_USERNAME and DXF_PASSWORD, or DXF_AUTHORIZATION, to be set. Note however that the token expires after a few minutes, after which dxf will exit with EACCES.

Installation:

```shell
pip install aiodxf
```

Licence: MIT. Tests: `make test`. Lint: `make lint`. Code coverage: `make coverage`. |
aiodynamo | AsyncIO DynamoDB. Asynchronous pythonic DynamoDB client; 2x faster than aiobotocore/boto3/botocore.

Quick start with httpx. Install this library:

```shell
pip install "aiodynamo[httpx]"
```

or, for poetry users:

```shell
poetry add aiodynamo -E httpx
```

Connect to DynamoDB:

```python
from aiodynamo.client import Client
from aiodynamo.credentials import Credentials
from aiodynamo.http.httpx import HTTPX
from httpx import AsyncClient

async with AsyncClient() as h:
    client = Client(HTTPX(h), Credentials.auto(), "us-east-1")
```

Quick start with aiohttp. Install this library:

```shell
pip install "aiodynamo[aiohttp]"
```

or, for poetry users:

```shell
poetry add aiodynamo -E aiohttp
```

Connect to DynamoDB:

```python
from aiodynamo.client import Client
from aiodynamo.credentials import Credentials
from aiodynamo.http.aiohttp import AIOHTTP
from aiohttp import ClientSession

async with ClientSession() as session:
    client = Client(AIOHTTP(session), Credentials.auto(), "us-east-1")
```

API use:

```python
table = client.table("my-table")

# Create table if it doesn't exist
if not await table.exists():
    await table.create(
        Throughput(read=10, write=10),
        KeySchema(hash_key=KeySpec("key", KeyType.string)),
    )

# Create or override an item
await table.put_item({"key": "my-item", "value": 1})

# Get an item
item = await table.get_item({"key": "my-item"})
print(item)

# Update an item, if it exists.
await table.update_item(
    {"key": "my-item"}, F("value").add(1), condition=F("key").exists()
)
```

Why aiodynamo?

- boto3 and botocore are synchronous. aiodynamo is built for asynchronous apps.
- aiodynamo is fast. Two times faster than aiobotocore, botocore or boto3 for operations such as query or scan.
- aiobotocore is very low level. aiodynamo provides a pythonic API, using modern Python features. For example, paginated APIs are automatically depaginated using asynchronous iterators.
- Legible source code. botocore and derived libraries generate their interface at runtime, so it cannot be inspected and isn't typed. aiodynamo is hand written code you can read, inspect and understand.
- Pluggable HTTP client. If you're already using an asynchronous HTTP client in your project, you can use it with aiodynamo and don't need to add extra dependencies or run into dependency resolution issues.

Complete documentation is here |
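The automatic depagination mentioned above can be illustrated with a generic async-generator pattern. This is a sketch of the idea, not aiodynamo's internals; `fetch_page` is a hypothetical paginated API that returns a batch of items plus an optional continuation key.

```python
import asyncio

async def fetch_page(last_key):
    # Hypothetical paginated API: returns (items, next_key),
    # where next_key is None on the final page.
    pages = {None: ([1, 2], "a"), "a": ([3, 4], "b"), "b": ([5], None)}
    return pages[last_key]

async def depaginate():
    # Yield items one by one, transparently following continuation keys,
    # so the caller never sees page boundaries.
    last_key = None
    while True:
        items, last_key = await fetch_page(last_key)
        for item in items:
            yield item
        if last_key is None:
            return

async def main():
    return [item async for item in depaginate()]

print(asyncio.run(main()))  # prints [1, 2, 3, 4, 5]
```

The caller simply writes `async for item in client.query(...)` and the pagination loop stays hidden inside the iterator.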
aioeachlimit | Apply an async function to each item in an array or queue with limited concurrency.

Install:

```shell
pip install aioeachlimit
```

Usage:

```python
async def f(item):
    await asyncio.sleep(3)
    return item * 2

items = [1, 2, 3, 4]

async for result in aioeachlimit(items, f, concurrency_limit=2):
    print(result)  # Prints 2 4 6 8 in random order
```

If you don't need to return anything:

```python
await aioeachlimit(items, f, concurrency_limit=2, discard_results=True)
```

If `items` is an `asyncio.Queue` then aioeachlimit will read from it indefinitely.

Tests: `pytest .` |
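The same limited-concurrency behaviour can be sketched with nothing but the standard library, using an `asyncio.Semaphore` to cap how many calls are in flight at once. This is a generic illustration of the technique, not aioeachlimit's implementation; `each_limit` and `double` are hypothetical names.

```python
import asyncio

async def each_limit(items, fn, limit):
    # Run fn(item) for every item, with at most `limit` calls in flight.
    sem = asyncio.Semaphore(limit)

    async def guarded(item):
        async with sem:
            return await fn(item)

    # gather preserves input order in its result list.
    return await asyncio.gather(*(guarded(i) for i in items))

async def double(x):
    await asyncio.sleep(0.01)
    return x * 2

results = asyncio.run(each_limit([1, 2, 3, 4], double, limit=2))
print(results)  # prints [2, 4, 6, 8]
```

Unlike this sketch, the library yields results as they complete (hence "in random order") and can also drain an `asyncio.Queue` indefinitely.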
aioeafm | No description available on PyPI. |
aioeagle | Asynchronous library to control the Rainforest EAGLE-200. Requires Python 3.8+ and uses asyncio and aiohttp.

```python
import asyncio
from pprint import pprint

import aiohttp
from aioeagle import EagleHub

CLOUD_ID = "123456"
INSTALL_CODE = "abcdefghijklmn"

async def main():
    async with aiohttp.ClientSession() as session:
        await run(session)

async def run(websession):
    hub = EagleHub(websession, CLOUD_ID, INSTALL_CODE)
    devices = await hub.get_device_list()

    if len(devices) == 0:
        print("No devices found")
        return

    device = devices[0]
    pprint(device.details)
    print()
    pprint(await device.get_device_query(device.ENERGY_AND_POWER_VARIABLES))

asyncio.run(main())
```

Testing locally:

```shell
python3 example.py <cloud_id> <install_code>
```

Timeouts: aioeagle does not specify any timeouts for any requests. You will need to specify them in your own code. We recommend the async_timeout package:

```python
import async_timeout

with async_timeout.timeout(10):
    devices = await hub.get_device_list()
```

Contribution guidelines: object hierarchy and property/method names should match the EAGLE-200 API. |
aio-eapi | Arista EOS API asyncio client. This repository contains an Arista EOS asyncio client.

Quick Example: the following shows how to create a Device instance and run a list of commands. Device will use HTTPS transport by default. The Device instance supports the following initialization parameters:

- host - The device hostname or IP address
- username - The login username
- password - The login password
- proto - (Optional) Choose either "https" or "http", defaults to "https"
- port - (Optional) Choose the protocol port to override the proto default

The Device class inherits directly from httpx.AsyncClient. As such, the caller can provide any initialization parameters. The above specific parameters are all optional.

```python
import json
from aioeapi import Device

username = 'dummy-user'
password = 'dummy-password'

async def run_test(host):
    dev = Device(host=host, username=username, password=password)
    res = await dev.cli(commands=['show hostname', 'show version'])
    json.dumps(res)
```

References: Arista eAPI documents require an Arista Portal customer login. Once logged into the system you can find the documents in the Software Download area. Select an EOS release and then select the Docs folder. You can also take a look at the Arista community client, here. |
aio-easycodefpy | Overview: CODEF helps you use data scattered across the web easily and quickly through a client engine and web APIs, simplifying the complex process of turning an idea into reality. See the homepage, the developer guide, and the blog.

aio-easycodefpy is the asynchronous version of easycodefpy, a utility library that helps you integrate with the CODEF API. To use it, sign up on the homepage and apply for the demo/production service to obtain client credentials; the endpoints of every available API (banking, cards, insurance, securities, public data, etc.) and their request/response fields are documented in the developer guide.

Get it!

```shell
$ python -m pip install aio-easycodefpy
```

Use it! Quick start: the guide below is based on aio-easycodefpy-exam and can be tested immediately against the sandbox server. The sandbox checks the required request parameters and then returns fixed, predefined responses for each requested product, so you can practise CODEF integration and inspect the response structure of each product.

1. Using the Codef class. aio-easycodefpy uses the aiohttp framework for asynchronous networking. The aiohttp.ClientSession context is managed inside the Codef class, and one session is allocated per instance:

```python
class Codef(object):
    def __init__(self):
        ...
        self.__session = aiohttp.ClientSession()
```

A close() method is provided to manage the session's context, and the async context manager can be used:

```python
from aio_easycodefpy import Codef

async def main():
    codef = Codef()
    await codef.close()

# or

async with Codef() as codef:
    ...
```

2. Requesting a token. To use the CODEF API service you must obtain a token by proving your service credentials. The token must be included in the header of every request, and a token, once issued, can be reused for one week. The easycodefpy library issues and reuses tokens automatically and also automatically re-issues a reused token whose validity has expired; you only need to configure the client credentials. The example below shows how to request a token directly. Since tokens are issued automatically inside the library when managing accounts or requesting products, you normally do not need to request one yourself.

Example link: https://github.com/codef-io/aio-easycodefpy-exam/blob/master/00_access_token/main.py

```python
import asyncio

from aio_easycodefpy import Codef, ServiceType

demo_client_id = ''
demo_client_secret = ''
client_id = ''
client_secret = ''
public_key = ''

async def main():
    # Create a Codef instance
    codef = Codef()
    codef.public_key = public_key

    # Set demo client credentials
    # - available on the CODEF homepage after signing up for the demo service (https://codef.io/#/account/keys)
    # - required when requesting products against the demo service
    codef.set_demo_client_info(demo_client_id, demo_client_secret)

    # Set production client credentials
    # - available on the CODEF homepage after signing up for the production service (https://codef.io/#/account/keys)
    # - required when requesting products against the production service
    codef.set_client_info(client_id, client_secret)

    # Request a token
    token = await codef.request_token(ServiceType.SANDBOX)

    # Print the result
    print(token)
    await codef.close()

asyncio.get_event_loop().run_until_complete(main())
```

3. Managing accounts. Some CODEF API products require a Connected ID among their request parameters. To use CODEF APIs that require authentication, you must register end-user account information (the target institution's authentication credentials), through which a distinct Connected ID is issued per user (see the authentication section of the developer guide for details). After a Connected ID has been issued, you can request data from the target institution without sending account credentials directly. The Connected ID is issued when an account is registered and can then be managed via account add/update/delete requests. The same account at the same institution cannot be registered twice, and one personal and one business account can each be registered. Not every product requires a Connected ID; per-product parameters are listed in the product guide. If you only use APIs that do not need a Connected ID, skip account management.

Example link: https://github.com/codef-io/aio-easycodefpy-exam/blob/master/01_create_account/main.py

```python
...
async def main():
    # The async context manager can be used
    async with Codef() as codef:
        ...
        # Set the request parameters
        # - account-management parameters (https://developer.codef.io/cert/account/cid-overview)
        account_list = []
        account = {
            'countryCode': 'KR',
            'businessType': 'BK',
            'clientType': 'P',
            'organization': '0004',
            'loginType': '1',
            'id': "user_id",
        }

        # Set the password
        pwd = encrypt_rsa("password", codef.public_key)
        account['password'] = pwd

        account_list.append(account)
        parameter = {
            'accountList': account_list,
        }

        # Send the request
        res = await codef.create_account(ServiceType.SANDBOX, parameter)
        print(res)

asyncio.get_event_loop().run_until_complete(main())
```

An account-registration request can register accounts for several institutions at once by passing them as a list parameter; the response looks like the one below. The issued Connected ID can then be used in product requests for the institutions whose registration succeeded (successList). For example, a Connected ID registered with Kookmin Bank (0004) cannot be used for a product request against Korea Development Bank (0002).

```json
{
  "result": {
    "code": "CF-00000",
    "extraMessage": "",
    "message": "정상",
    "transactionId": "786e01e459af491888e1f782d1902e40"
  },
  "data": {
    "successList": [
      {
        "code": "CF-00000",
        "message": "정상",
        "extraMessage": "",
        "countryCode": "KR",
        "businessType": "BK",
        "clientType": "P",
        "loginType": "1",
        "organization": "0004"
      }
    ],
    "errorList": [],
    "connectedId": "byi1wYwD40k8hEIiXl6bRF"
  }
}
```

Beyond account registration, the other account-management features (add, update, delete) and lookup features (account list, Connected ID list) can be found in aio-easycodefpy-exam. When registering an account with a certificate, both a cert-file/key-file pair and a pfx file are supported; see the account-registration section of the developer guide for details. If you need certificate relay-server features such as certificate export/import, contact [email protected]; CODEF operates a certificate popup and transfer server for account management.

4. Requesting products. Once end-user account registration is done and the product is ready to use, you can request CODEF API products using the issued Connected ID and the required parameter settings. The code below shows product requests with and without a Connected ID. To emphasise once more: not every product requires a Connected ID; per-product parameters are listed in the product guide.

- Ordinary products. Example code: https://github.com/codef-io/aio-easycodefpy-exam/tree/master/07_product/main.py

```python
import asyncio

from aio_easycodefpy import Codef, ServiceType

demo_client_id = ''
demo_client_secret = ''
client_id = ''
client_secret = ''
public_key = ''

async def main():
    async with Codef() as codef:
        codef.public_key = public_key

        # Set demo client credentials
        # - available on the CODEF homepage after signing up for the demo service (https://codef.io/#/account/keys)
        # - required when requesting products against the demo service
        codef.set_demo_client_info(demo_client_id, demo_client_secret)

        # Set production client credentials
        # - available on the CODEF homepage after signing up for the production service (https://codef.io/#/account/keys)
        # - required when requesting products against the production service
        codef.set_client_info(client_id, client_secret)

        # Set the request parameters
        # - per-product parameters (https://developer.codef.io/products)
        parameter = {
            'connectedId': '8PQI4dQ......hKLhTnZ',
            'organization': '0004',
        }

        # Request CODEF data
        # - service type (0: production, 1: demo, 2: sandbox)
        # Personal account list (https://developer.codef.io/products/bank/common/p/account)
        product_url = "/v1/kr/bank/p/account/account-list"
        res = await codef.request_product(product_url, ServiceType.SANDBOX, parameter)
        print(res)

asyncio.get_event_loop().run_until_complete(main())
```

After creating the Codef object and setting the client credentials, you set the parameters for the personal account-list product and make the product request. Library users do not have to deal with auxiliary work such as issuing and managing tokens, encoding request parameters, or decoding response bodies. Simply set the parameters the product needs and call the request_product method, and you receive the account list for the institution registered under the Connected ID (0004, Kookmin Bank), as below:

```json
{
  "result": {
    "code": "CF-00000",
    "extraMessage": "",
    "message": "성공",
    "transactionId": "5069429e367745baba92f5c12c4343de"
  },
  "data": {
    "resDepositTrust": [
      {
        "resAccount": "06170204000000",
        "resAccountBalance": "874890",
        "resAccountCurrency": "KRW",
        "resAccountDeposit": "11",
        "resAccountDisplay": "061702-04-000000",
        "resAccountEndDate": "",
        "resAccountLifetime": "",
        "resAccountName": "저축예금",
        "resAccountNickName": "",
        "resAccountStartDate": "20120907",
        "resLastTranDate": ""
      },
      {
        "resAccount": "23850204000000",
        "resAccountBalance": "0",
        "resAccountCurrency": "KRW",
        "resAccountDeposit": "11",
        "resAccountDisplay": "238502-04-000000",
        "resAccountEndDate": "",
        "resAccountLifetime": "",
        "resAccountName": "직장인우대통장-저축예금",
        "resAccountNickName": "급여통장",
        "resAccountStartDate": "20060413",
        "resLastTranDate": ""
      },
      {
        "resAccount": "54780300000000",
        "resAccountBalance": "13110000",
        "resAccountCurrency": "KRW",
        "resAccountDeposit": "12",
        "resAccountDisplay": "547803-00-000000",
        "resAccountEndDate": "",
        "resAccountLifetime": "",
        "resAccountName": "OO국민재형저축",
        "resAccountNickName": "",
        "resAccountStartDate": "20151228",
        "resLastTranDate": ""
      }
    ],
    "resForeignCurrency": [],
    "resFund": [],
    "resInsurance": [],
    "resLoan": [
      {
        "resAccount": "75260904000000",
        "resAccountBalance": "120000000",
        "resAccountCurrency": "KRW",
        "resAccountDeposit": "40",
        "resAccountDisplay": "752609-04-000000",
        "resAccountEndDate": "20210628",
        "resAccountLoanExecNo": "",
        "resAccountName": "서울특별시신혼부부임차보증금대출",
        "resAccountNickName": "",
        "resAccountStartDate": "20190628"
      }
    ]
  }
}
```

Next is an example of a product that looks up the business registration status (closed/suspended) of a company:

```python
# Business registration status lookup (no Connected ID)
import asyncio

from aio_easycodefpy import Codef, ServiceType

demo_client_id = ''
demo_client_secret = ''
client_id = ''
client_secret = ''
public_key = ''

async def main():
    # Create a Codef instance
    async with Codef() as codef:
        codef.public_key = public_key

        # Set demo client credentials
        codef.set_demo_client_info(demo_client_id, demo_client_secret)

        # Set production client credentials
        codef.set_client_info(client_id, client_secret)

        # Set the request parameters
        # - per-product parameters (https://developer.codef.io/products)
        parameter = {
            'organization': "0004",
        }
        req_identity_list = [
            {'reqIdentity': '3333344444'},
            {'reqIdentity': '1234567890'},
        ]
        parameter['req_identity_list'] = req_identity_list

        # Request CODEF data
        # - service type (0: production, 1: demo, 2: sandbox)
        product_url = '/v1/kr/public/nt/business/status'
        res = await codef.request_product(product_url, ServiceType.SANDBOX, parameter)
        print(res)

asyncio.get_event_loop().run_until_complete(main())
```

Comparing the two product-request examples, the only differences between the personal account list and the business-status lookup are the request URL and the parameter settings. By using the library you can write consistently shaped code and easily understand product-request code written by others.

```json
{
  "result": {
    "code": "CF-00000",
    "extraMessage": "",
    "message": "성공",
    "transactionId": "786e01e459af491888e1f782d1902e40"
  },
  "data": [
    {
      "resBusinessStatus": "사업을 하지 않고 있습니다.",
      "resCompanyIdentityNo": "3333344444",
      "code": "CF-00000",
      "resTaxationTypeCode": "98",
      "extraMessage": null,
      "resClosingDate": "",
      "resTransferTaxTypeDate": "",
      "message": "성공"
    },
    {
      "resBusinessStatus": "부가가치세 일반과세자입니다.\n*과세유형 전환된 날짜는 2011년 07월 01일입니다.",
      "resCompanyIdentityNo": "1234567890",
      "code": "CF-00000",
      "resTaxationTypeCode": "1",
      "extraMessage": null,
      "resClosingDate": "",
      "resTransferTaxTypeDate": "20110701",
      "message": "성공"
    }
  ]
}
```

- Products with additional authentication. Unlike ordinary products, which return a result from a single API call, these products require additional authentication demanded by the target institution (e-mail, SMS, captcha input, etc.) after the first request before a result can be returned. For example, when logging in, a captcha required by the target institution must be entered in addition to the ID and password. Unlike the fixed ID and password, the captcha image is returned randomly, so additional input from the end user is needed. Depending on the institution's authentication method, N further authentication steps may be required, and the CODEF API request completes only once the additional authentication input has been provided: first input [basic parameters] -> n-th additional authentication [basic parameters + additional authentication parameters]. Additional authentication mostly concerns user-identity information, and the endpoint URL stays the same for additional-authentication requests. Additional-authentication products cannot be tested against the sandbox server. The parameters needed for additional authentication are described on each product page of the developer guide; see the additional-authentication section of the guide for details.

Request types. When sending CODEF requests to targets other than the sandbox, the following types are available:

```python
class ServiceType(Enum):
    PRODUCT = 0  # production
    DEMO = 1     # demo
    SANDBOX = 2  # sandbox
```

Ask us: post questions about using the aio-easycodefpy library, or errors encountered during development, on the homepage Q&A board and the operations team will answer. Please follow the board's posting template; we will respond as quickly as possible. |
aioEasyPillow | A python library based on easy-pil and Pillow to easily edit/modify images.

Installation: Python 3.8 or above is required. To install the library directly from PyPI you can just run the following command:

```shell
# Linux/macOS
python3 -m pip install -U aioEasyPillow

# Windows
py -3 -m pip install -U aioEasyPillow
```

Quick Example:

```python
import asyncio
from aioEasyPillow import Editor, Canvas, Font

async def main():
    blank = Canvas((200, 100), 'black')
    editor = Editor(blank)
    font = Font.poppins('bold', 200)

    await editor.text((20, 20), 'Quick Example', font)
    await editor.save('example.png', 'PNG')
    await editor.show()

asyncio.run(main())
```

Discord Bot Example:

```python
import discord
from discord.ext import commands
from aioEasyPillow import Editor, Canvas, Font, load_image

intents = discord.Intents.default()
intents.members = True  # don't forget to activate this in the dev portal

# You can of course also use the discord.Bot() or commands.Bot() class
bot = commands.Bot(command_prefix='!', intents=intents)

@bot.command()
async def circle(ctx):
    # Load the image using `load_image`
    image = await load_image(ctx.author.display_avatar.url)

    # Initialize the editor and pass the image as a parameter
    editor = Editor(image)

    # Simply circle the image
    await editor.circle_image()

    # Create a discord.File object from the editor's image_bytes; the image need not be saved
    file = discord.File(fp=editor.image_bytes, filename='circle.png')
    await ctx.send('Your circled image', file=file)

bot.run("TOKEN")
``` |
aio-easy-rabbit | An opinionated way to use RabbitMQ in asyncio-based frameworks.

Installation: to be continued ...

Usage example. Simple consumer example in a FastAPI app:

```python
import asyncio

from fastapi import FastAPI

from aio_easy_rabbit.connection import (
    connect_to_rabbitmq,
    start_listening_to_queue,
)

app = FastAPI()

@rabbitmq_consumer("test_queue")
async def test_consumer(message):
    print(message)

@app.on_event("startup")
async def startup_event():
    registry = ConsumerRegistry()
    rabbitmq_connection_string = "amqp://guest:guest@localhost"
    app.state.rabbitmq_connection = await connect_to_rabbitmq(rabbitmq_connection_string)
    for queue_name in registry.get_registered_queues():
        asyncio.create_task(
            start_listening_to_queue(app.state.rabbitmq_connection, queue_name)
        )
```

The key component is the ConsumerRegistry, which manages the state of all registered queues and their consuming functions. Note: each decorated consumer receives a raw string that needs to be deserialized as desired.

Simple publisher example in FastAPI:

```python
from fastapi import FastAPI
from pydantic import BaseModel

from aio_easy_rabbit.connection import connect_to_rabbitmq
from aio_easy_rabbit.producers import RabbitMQPublisher

app = FastAPI()

class Message(BaseModel):
    message: str

@app.post("/test/{queue_name}/")
async def test_endpoint(queue_name: str, message: Message):
    await app.state.rabbitmq_publisher.publish_message(queue_name, message)
    return {"message": "Message published"}

@app.on_event("startup")
async def startup_event():
    rabbitmq_connection_string = "amqp://guest:guest@localhost"
    app.state.rabbitmq_connection = await connect_to_rabbitmq(rabbitmq_connection_string)
    app.state.rabbitmq_publisher = RabbitMQPublisher(rabbitmq_connection_string)
``` |
aioeasywebdav | This project started as a port of the requests-based EasyWebDAV (http://github.com/amnong/easywebdav) to asyncio on Python 3.5. It has
since been extended with additional features.FeaturesBasic authenticationCreating directories, removing directories and filesUploading and downloading filesDirectory listingSupport for client side SSL certificatesFragmented download (multiple chunks in simultaneous streams)MD5 checksum validation when used with OwnCloud/Nextcloud webdavProgress tracking/reporting via callback systemInstallationInstall using distribute:pip install aioeasywebdavQuick Startimport aioeasywebdav
loop = asyncio.get_event_loop()
# Start off by creating a client object. Username and
# password may be omitted if no authentication is needed.
webdav = aioeasywebdav.connect('webdav.your-domain.com', username='myuser', password='mypass')
# Do some stuff:
loop.run_until_complete(webdav.mkdir('some_dir'))
loop.run_until_complete(webdav.rmdir('another_dir'))
async def fn():
await webdav.download('/remote/path/to/file', '/local/target/file')
await webdav.upload('/local/path/to/file', '/remote/target/file')
loop.run_until_complete(fn())Client object APIThe API is pretty much self-explanatory:cd(path)
ls(path=None)
exists(remote_path)
mkdir(path, safe=False)
mkdirs(path)
rmdir(path, safe=False)
delete(file_path)
upload(local_path_or_fileobj, remote_path)
download(remote_path, local_path)Using clientside SSL certificatewebdav = aioeasywebdav.connect('secure.example.net',
username='user',
password='pass',
protocol='https',
cert="/path/to/your/certificate.pem")
# Do some stuff:
print(await webdav.ls())Please note that all options and restrictions regarding the “cert”
parameter from the Requests
API apply here, as
the parameter is only passed through!Developing aioEasyWebDAVWorking with a virtual environment is highly recommended:virtualenv --no-site-packages aioeasywebdav_dev
source aioeasywebdav_dev/bin/activateInstalling the library in development-mode:EASYWEBDAV_DEV=1 python setup.py developThe first part of the command causes setup.py to install development
dependencies, in addition to the normal dependencies.Running the tests:nosetests --with-yanc --nologcapture --nocapture testsRunning the tests with WebDAV server logs:WEBDAV_LOGS=1 nosetests --with-yanc --nologcapture --nocapture -v tests |
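Among the features above is MD5 checksum validation when used with OwnCloud/Nextcloud WebDAV. The digest side of such a check is plain hashlib work; a small stdlib sketch (not aioeasywebdav's own code) that hashes a downloaded file in chunks:

```python
import hashlib
import os
import tempfile

def md5_of_file(path: str, chunk_size: int = 64 * 1024) -> str:
    """Compute an MD5 hex digest in chunks, as a checksum-validation
    step would do after a (possibly fragmented) download."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Demo on a throwaway file standing in for a downloaded resource:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello webdav")
    path = tmp.name

checksum = md5_of_file(path)
os.unlink(path)
print(checksum)
```

The resulting hex digest would then be compared against the checksum the server reports for the remote file.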
aioec | An aiohttp-based client for theEmote Collector API.Usageimportaioecanonymous_client=aioec.Client()authenticated_client=aioec.Client(token='your token here')local_client=aioec.Client(base_url='http://ec.localhost:2018/api/v0')# no trailing slash!# this step isn't necessary but makes sure that your token is correctmy_user_id=awaitclient.login()# it returns the user ID associated with your token# in a coroutine...emote=awaitclient.emote('Think')emote.name# Thinkawaitemote.edit(name='Think_',description='a real happy thinker')# remove the description:awaitemote.edit(description=None)forgamewisp_emoteinawaitclient.search('GW'):awaitgamewisp_emote.delete()all_emotes=[emoteasyncforemoteinclient.emotes()]popular_emotes=awaitclient.popular()awaitclient.close()# it's also a context manager:asyncwithaioec.Client(token=my_token)asclient:awaitclient.delete('Think_')# this will automatically close the clientWith the Tor hidden servicesYou’ll need to installaiohttp_socksfirst.fromaiohttp_socksimportSocksConnectorimportaioecconnector=SocksConnector(port=9050,rdns=True)# without rdns, the connector will fail to resolve onionsclient=aioec.Client(connector=connector,base_url='http://emotesdikhisgxdcmh7wtlvzfw2yxp4vmkyy6mu5wixzgqfmxvuotryd.onion/api/v0',)LicenseMIT/X11Copyright © 2018–2019 Io Mintz <[email protected]> |
aioecobee | aioecobee
Python3 async library for interacting with the ecobee API

Requirements: aiofiles, aiohttp, asyncio

Install aioecobee with python3 -m pip install aioecobee.

Usage
aioecobee's main class is EcobeeAPI; create an API object like this:

from aiohttp import ClientSession
from aioecobee import EcobeeAPI

session = ClientSession()
api_key = xxxxxxxxxxxxxxxxxxxxx
config_file = "/path/to/ecobee.conf"
ecobee = EcobeeAPI(session, api_key=api_key, config_file=config_file)

Where:
session is an instance of aiohttp.ClientSession();
api_key is the API key obtained from ecobee.com (optional); and,
config_file is the name of a config file for use with aioecobee (optional).

If config_file is not specified, api_key is required.

Obtain a PIN for authorizing on ecobee.com:

await ecobee.request_pin()

After authorizing your app on ecobee.com, request tokens:

await ecobee.request_tokens()

After obtaining tokens, populate (or update) ecobee.thermostats and ecobee.sensors:

await ecobee.update()

Calls to the API will raise an ExpiredTokensError if tokens are expired and need refreshing:

from aioecobee import ExpiredTokensError

try:
    await ecobee.update()
except ExpiredTokensError:
    await ecobee.refresh_tokens()

example.py
An example script is provided to demonstrate the usage of aioecobee:

python example.py api_key

Caveats
aioecobee does not implement timeouts; use asyncio_timeout in your client code to wrap calls to the API as needed.

Contributing
Please open issues or pull requests. |
aioecopanel | aioecopanelThis is a library created for communicating with the Bepacom EcoPanel BACnet add-on for Home Assistant.
It lets you interact with the add-on through GET and POST requests, as well as through a websocket. |
aioecowitt | aioEcoWittSimple python library for the EcoWitt ProtocolInspired by pyecowit & ecowitt2mqtt |
aioeeveemobility | aioeeveemobility
Asynchronous library to communicate with the EEVEE Mobility API

API Example

"""Test for aioeeveemobility."""
from aioeeveemobility import EeveeMobilityClient

import asyncio
import json

import aiohttp


async def main():
    client = EeveeMobilityClient(
        "[email protected]",
        "yourpassword",
    )
    try:
        user = await client.request("user")
        print(f"Hello {user.get('first_name')}")
        fleets = await client.request(f"user/{user.get('id')}/fleets")
        for fleet in fleets:
            for entity in fleet.get('fleet').get('entities'):
                if entity.get('id') == fleet.get('entity_id'):
                    break
            print(f"Fleet: {fleet.get('fleet').get('name')}, {entity.get('name')} | Payout rate: {fleet.get('payout_rate').get('rate')} {fleet.get('payout_rate').get('currency_code')}{fleet.get('payout_rate').get('suffix')}")
        cars = await client.request("cars")
        for car in cars:
            print(f"Your car: {car.get('display_name')} {car.get('license')}")
            addresses = await client.request(f"cars/{car.get('id')}/addresses")
            print("Addresses:")
            for address in addresses:
                print(f" > {address.get('name')}: {address.get('location')}")
            events = await client.request(f"cars/{car.get('id')}/events")
            print("Events:")
            for event in events.get('data'):
                print(event)
    finally:
        await client.close_session()


asyncio.run(main()) |
aioeffect | aioeffectAsyncIO/Effect integration.Free software: MIT licenseDocumentation:https://aioeffect.readthedocs.org.FeaturesTODOCreditsThis package was created withCookiecutterand theaudreyr/cookiecutter-pypackageproject template.History0.1.0 (2017-06-23)First release on PyPI. |
aioelasticsearch | info:elasticsearch-py wrapper for asyncioInstallationpipinstallaioelasticsearchUsageimportasynciofromaioelasticsearchimportElasticsearchasyncdefgo():es=Elasticsearch()print(awaites.search())awaites.close()loop=asyncio.get_event_loop()loop.run_until_complete(go())loop.close()FeaturesAsynchronousscrollimportasynciofromaioelasticsearchimportElasticsearchfromaioelasticsearch.helpersimportScanasyncdefgo():asyncwithElasticsearch()ases:asyncwithScan(es,index='index',doc_type='doc_type',query={},)asscan:print(scan.total)asyncfordocinscan:print(doc['_source'])loop=asyncio.get_event_loop()loop.run_until_complete(go())loop.close()ThanksThe library was donated byOcean S.A.Thanks to the company for contribution. |
aioelasticsearch-fork | fork aioelasticsearch and support python3.10 laterand you can install it by pippip install aioelasticsearch-fork |
aioelectricitymaps | aioelectricitymapsAsynchronous Python client for Electricity Maps.AboutThis package allows you to fetch data from electricitymaps.com.InstallationpipinstallaioelectricitymapsUsageimportasynciofromaioelectricitymapsimportElectricityMapsasyncdefmain()->None:"""Run the example."""asyncwithElectricityMaps(token="abc123")asem:response=awaitem.latest_carbon_intensity_by_country_code("DE")print(f"Carbon intensity in Germany:{response.data.carbon_intensity}gCO2eq/kWh")if__name__=="__main__":asyncio.run(main())Changelog & ReleasesThis repository keeps a change log usingGitHub's releasesfunctionality. The format of the log is based onKeep a Changelog.Releases are based onSemantic Versioning, and use the format
ofMAJOR.MINOR.PATCH. In a nutshell, the version will be incremented
based on the following:MAJOR: Incompatible or major changes.MINOR: Backwards-compatible new features and enhancements.PATCH: Backwards-compatible bugfixes and package updates.ContributingThis is an active open-source project. I am always open to people who want to
use the code or contribute to it.Thank you for being involved! :heart_eyes:Setting up development environmentThis Python project is fully managed using thePoetrydependency manager. But also relies on the use of NodeJS for certain checks during development.You need at least:Python 3.11+PoetryNodeJS 20+ (including NPM)To install all packages, including all development requirements:npminstall
poetryinstallAs this repository uses thepre-commitframework, all changes
are linted and tested with each commit. You can run all checks and tests
manually, using the following command:poetryrunpre-commitrun--all-filesTo run just the Python tests:poetryrunpytestAuthors & contributorsThe content is byJan-Philipp Benecke.For a full list of all authors and contributors,
checkthe contributor's page.LicenseMIT LicenseCopyright (c) 2023 Jan-Philipp BeneckePermission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. |
aioelpris | aioelpris
An aio library to retrieve the current electricity price in (some parts) of the Nordics. Currently supported regions are:
DK1: Denmark/west of the Great Belt
DK2: Denmark/east of the Great Belt
NO2: Norway/Kristiansand
SE3: Sweden/Stockholm
SE4: Sweden/Malmö

Prices are returned in DKK and EUR currencies.

Basic example

import asyncio

from aiohttp import ClientSession

from aioelpris import ElPris
from aioelpris.core.models import Price


async def example() -> Price:
    async with ClientSession() as session:
        pris = ElPris(session=session, price_area="SE3")
        price: Price = await pris.get_current_price()
        print(price.SpotPriceDKK)
        return price


asyncio.run(example())

Data sources
Energi Data Service. |
aioelschools | Async Discord API wrapper for elschool |
aioemit | aioemit
aioemit allows you to manage events asynchronously and notify subscribers when those events occur. It provides a simple way to implement the event bus pattern in an event-driven architecture application.

Installation
pip install aioemit

Usage
Creating Events
The Event class represents an event with a specified event type and optional data. You can create an event by initializing an instance of the Event class with the event type and data (if any).

event = Event("example_event", "example_data")

Creating an Emitter
To use the event emitter, create an instance of the Emitter class.

emitter = Emitter()

Subscribing to Events
To subscribe to events, use the subscribe method of the Emitter class. Pass the event type and an observer function that will be called when the event is emitted.

def event_observer(event):
    # Handle the event
    print("Received event:", event)

emitter.subscribe("example_event", event_observer)

Unsubscribing from Events
If you no longer want to receive notifications for a specific event, you can unsubscribe from it using the unsubscribe method. Provide the event type and the observer function that you want to remove.

emitter.unsubscribe("example_event", event_observer)

Emitting Events
To emit an event and notify all subscribers, use the emit method of the Emitter class. Pass the event you want to emit.

event = Event("example_event", "example_data")
await emitter.emit(event)

The emit method will asynchronously call all the subscribed observer functions that are associated with the event type.

License
This project is licensed under the MIT License. Feel free to use, modify, and distribute the code as per the terms of the license. |
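The pieces above fit together into one runnable script; here is a stdlib-only sketch that mirrors the subscribe/emit flow described (a stand-in for illustration, not the aioemit implementation):

```python
import asyncio

# Hypothetical stdlib-only stand-in mirroring the Event/Emitter API described
# above (subscribe/unsubscribe/emit); not the aioemit implementation itself.

class Event:
    def __init__(self, event_type, data=None):
        self.event_type = event_type
        self.data = data

class Emitter:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event_type, observer):
        self._subscribers.setdefault(event_type, []).append(observer)

    def unsubscribe(self, event_type, observer):
        self._subscribers.get(event_type, []).remove(observer)

    async def emit(self, event):
        # Call every observer registered for this event type; await coroutines.
        for observer in list(self._subscribers.get(event.event_type, [])):
            result = observer(event)
            if asyncio.iscoroutine(result):
                await result

received = []

def event_observer(event):
    received.append((event.event_type, event.data))

async def main():
    emitter = Emitter()
    emitter.subscribe("example_event", event_observer)
    await emitter.emit(Event("example_event", "example_data"))
    emitter.unsubscribe("example_event", event_observer)
    await emitter.emit(Event("example_event", "ignored"))

asyncio.run(main())
print(received)  # [('example_event', 'example_data')]
```

After unsubscribing, the second emit finds no observers for the event type, so only the first event is recorded.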
aioemonitor | aioemonitorAsyncio Python lib for SiteSage EmonitorFeaturesRetreive emonitor power statusQuick StartimportasyncioimportpprintfromaioemonitorimportEmonitorfromaiohttpimportClientSessionasyncdefrun():session=ClientSession()emonitor=Emonitor("1.2.3.4",session)status=awaitemonitor.async_get_status()pprint.pprint(status)asyncio.run(run())InstallationStable Release:pip install aioemonitorDevelopment Head:pip install git+https://github.com/bdraco/aioemonitor.gitDocumentationFor full package documentation please visitbdraco.github.io/aioemonitor.DevelopmentSeeCONTRIBUTING.mdfor information related to developing the code.The Four Commands You Need To Knowpip install -e .[dev]This will install your package in editable mode with all the required development
dependencies (i.e.tox).make buildThis will runtoxwhich will run all your tests in both Python 3.7
and Python 3.8 as well as linting your code.make cleanThis will clean up various Python and build generated files so that you can ensure
that you are working in a clean environment.make docsThis will generate and launch a web browser to view the most up-to-date
documentation for your Python package.Additional Optional Setup Steps:Turn your project into a GitHub repository:Make an account ongithub.comGo tomake a new repositoryRecommendations:It is strongly recommended to make the repository name the same as the Python
package nameA lot of the following optional steps arefreeif the repository is Public,
plus open source is coolAfter a GitHub repo has been created, run the commands listed under:
"...or push an existing repository from the command line"Register your project with Codecov:Make an account oncodecov.io(Recommended to sign in with GitHub)
everything else will be handled for you.Ensure that you have set GitHub pages to build thegh-pagesbranch by selecting thegh-pagesbranch in the dropdown in the "GitHub Pages" section of the repository settings.
(Repo Settings)Register your project with PyPI:Make an account onpypi.orgGo to your GitHub repository's settings and under theSecrets tab,
add a secret calledPYPI_TOKENwith your password for your PyPI account.
Don't worry, no one will see this password because it will be encrypted.Next time you push to the branchmainafter usingbump2version, GitHub
actions will build and deploy your Python package to PyPI.Suggested Git Branch Strategymainis for the most up-to-date development, very rarely should you directly
commit to this branch. GitHub Actions will run on every push and on a CRON to this
branch but still recommended to commit to your development branches and make pull
requests to main. If you push a tagged commit with bumpversion, this will also release to PyPI.Your day-to-day work should exist on branches separate frommain. Even if it is
just yourself working on the repository, make a PR from your working branch tomainso that you can ensure your commits don't break the development head. GitHub Actions
will run on every push to any branch or any pull request from any branch to any other
branch.It is recommended to use "Squash and Merge" commits when committing PR's. It makes
each set of changes tomainatomic and as a side effect naturally encourages small
well defined PR's.Apache Software License 2.0 |
aioenkanetworkcard | GitHub | Example | Discord | Documentation
EnkaNetworkCard
Wrapper for EnkaNetwork.py to create character cards in Python.

Installation:
pip install aioenkanetworkcard

Dependencies:
Dependencies that must be installed for the library to work:
Pillow, requests, io, math, threading, datetime, random, enkanetwork.py, logging

Sample Results:
The result of custom images and adaptation (template = 1).
Usual result (template = 1).
The result of custom images and adaptation (template = 2).
Usual result (template = 2). |
aioeos | aioeosAsync Python library for interacting with EOS.io blockchain.FeaturesAsync JSON-RPC client.Signing and verifying transactions using private and public keys.Serializer for basic EOS.io blockchain ABI types.Helpers which provide an easy way to generate common actions, such as token
transfer.InstallationLibrary is available on PyPi, you can simply install it usingpip.$ pip install aioeosUsageImporting a private keyfrom aioeos import EosAccount
account = EosAccount(private_key='your key')Transferring fundsfrom aioeos import EosJsonRpc, EosTransaction
from aioeos.contracts import eosio_token
rpc = EosJsonRpc(url='http://127.0.0.1:8888')
block = await rpc.get_head_block()
transaction = EosTransaction(
ref_block_num=block['block_num'] & 65535,
ref_block_prefix=block['ref_block_prefix'],
actions=[
eosio_token.transfer(
from_addr=account.name,
to_addr='mysecondacc1',
quantity='1.0000 EOS',
authorization=[account.authorization('active')]
)
]
)
await rpc.sign_and_push_transaction(transaction, keys=[account.key])Creating a new accountfrom aioeos import EosJsonRpc, EosTransaction, EosAuthority
from aioeos.contracts import eosio
main_account = EosAccount(name='mainaccount1', private_key='private key')
new_account = EosAccount(name='mysecondacc1')
owner = EosAuthority(
threshold=1,
keys=[new_account.key.to_key_weight(1)]
)
rpc = EosJsonRpc(url='http://127.0.0.1:8888')
block = await rpc.get_head_block()
await rpc.sign_and_push_transaction(
EosTransaction(
ref_block_num=block['block_num'] & 65535,
ref_block_prefix=block['ref_block_prefix'],
actions=[
eosio.newaccount(
main_account.name,
new_account.name,
owner=owner,
authorization=[main_account.authorization('active')]
),
eosio.buyrambytes(
main_account.name,
new_account.name,
2048,
authorization=[main_account.authorization('active')]
)
],
),
keys=[main_account.key]
)DocumentationDocs and usage examples are availablehere.Unit testingTo run unit tests, you need to bootstrap an EOS testnet node first. Use the providedensure_eosio.shscript.$ ./ensure_eosio.sh |
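The transfer and account-creation examples above both compute ref_block_num as block['block_num'] & 65535. A small stdlib sketch of the full TAPoS pair, assuming the standard EOSIO convention that ref_block_prefix is bytes 8–12 of the 32-byte block id read as a little-endian uint32 (the block id below is made up for illustration):

```python
import struct

def tapos_fields(block_num: int, block_id_hex: str):
    """Derive TAPoS reference fields from a head block.

    ref_block_num keeps only the low 16 bits of the block number;
    ref_block_prefix reads bytes 8..12 of the 32-byte block id as a
    little-endian uint32 (assumed standard EOSIO block-id layout).
    """
    ref_block_num = block_num & 65535
    raw = bytes.fromhex(block_id_hex)
    ref_block_prefix = struct.unpack_from("<I", raw, 8)[0]
    return ref_block_num, ref_block_prefix

# Made-up 32-byte block id: block num 0xbf in the first 4 bytes,
# then 4 zero bytes, then 01 00 00 00 at offset 8.
block_id = "000000bf" + "00" * 4 + "01000000" + "00" * 20
num, prefix = tapos_fields(191, block_id)
print(num, prefix)  # 191 1
```

These two fields tie the signed transaction to a recent block, which is why the examples fetch the head block right before building the transaction.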
aioeosabi | aioeosabiUpdated version of aioeos to no longer rely on API calls to serialize data. Integrateshttps://github.com/stuckatsixpm/antelopyto serialize instead.
Should be usable as a drop-in replacement for the original aioeos.For Documentation: See the original aioeos Docs. Only difference is thatsign_and_push_transactionnow lets you specify if you want to use a stored ABI or fetch a new one. Also two new helper functions.smart_sign_and_push_transaction: Signs and pushes a transaction using the cached ABI, refetching the ABI if the first try fails.push_actions: Pushed out a transaction, trying out various endpoints until one succeeds.Async Python library for interacting with EOS.io blockchain.FeaturesAsync JSON-RPC client.Signing and verifying transactions using private and public keys.Serializer for basic EOS.io blockchain ABI types.Helpers which provide an easy way to generate common actions, such as token
transfer.InstallationLibrary is available on PyPi, you can simply install it usingpip.$pipinstallaioeosabiUsageImporting a private keyfromaioeosabiimportEosAccountaccount=EosAccount(private_key='your key')Transferring fundsfromaioeosabiimportEosJsonRpc,EosTransactionfromaioeosabi.contractsimporteosio_tokenrpc=EosJsonRpc(url='http://127.0.0.1:8888')block=awaitrpc.get_head_block()transaction=EosTransaction(ref_block_num=block['block_num']&65535,ref_block_prefix=block['ref_block_prefix'],actions=[eosio_token.transfer(from_addr=account.name,to_addr='mysecondacc1',quantity='1.0000 EOS',authorization=[account.authorization('active')])])awaitrpc.sign_and_push_transaction(transaction,keys=[account.key])Creating a new accountfromaioeosabiimportEosJsonRpc,EosTransaction,EosAuthorityfromaioeosabi.contractsimporteosiomain_account=EosAccount(name='mainaccount1',private_key='private key')new_account=EosAccount(name='mysecondacc1')owner=EosAuthority(threshold=1,keys=[new_account.key.to_key_weight(1)])rpc=EosJsonRpc(url='http://127.0.0.1:8888')block=awaitrpc.get_head_block()awaitrpc.sign_and_push_transaction(EosTransaction(ref_block_num=block['block_num']&65535,ref_block_prefix=block['ref_block_prefix'],actions=[eosio.newaccount(main_account.name,new_account.name,owner=owner,authorization=[main_account.authorization('active')]),eosio.buyrambytes(main_account.name,new_account.name,2048,authorization=[main_account.authorization('active')])],),keys=[main_account.key])DocumentationDocs and usage examples are availablehere.Unit testingTo run unit tests, you need to bootstrap an EOS testnet node first. Use the providedensure_eosio.shscript.$./ensure_eosio.sh |
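push_actions, as described above, tries endpoints until one accepts the transaction. A stdlib sketch of that failover pattern (stand-in names and logic, not the aioeosabi implementation):

```python
import asyncio

# Hypothetical sketch of the endpoint-failover idea behind push_actions:
# try each endpoint in turn and return the first successful result.

async def try_endpoints(endpoints, push):
    last_error = None
    for url in endpoints:
        try:
            return await push(url)
        except Exception as exc:  # a real client would catch narrower errors
            last_error = exc
    raise last_error

async def fake_push(url):
    # Simulate one dead endpoint and one healthy one.
    if "bad" in url:
        raise ConnectionError(f"cannot reach {url}")
    return {"endpoint": url, "transaction_id": "abc123"}

result = asyncio.run(
    try_endpoints(["https://bad.example", "https://good.example"], fake_push)
)
print(result["endpoint"])  # https://good.example
```

Only if every endpoint fails does the helper re-raise the last error, so a single flaky node does not abort the push.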
aioerl | aioerlaioerlis a python library that mimics the philosophy of Erlang's processes with asyncio tasks.Implements the following ideas:Each process has a mailbox: a queue to receive messages from other processes.Message passing: processes communicate entirely with messages (from the point of view of the developer)Supervisor/monitors: processes can monitor other processes (when a process dies or crashes, sends a message to its supervisor with the exit reason or the exception)Why?asynciois awesome and built-in structures likeasyncio.Queueare great for communicating between tasks but is hard to manage errors.Withaioerl, a process just waits for incoming messages from other processes and decides what to do for each event (seeexample).QuickstartRequirements: Python 3.7+Installation:pipinstallaioerlExamplefromaioerlimportreceivefromaioerlimportreplyfromaioerlimportsendfromaioerlimportspawnimportasyncioasyncdefping_pong():whilem:=awaitreceive(timeout=10):ifm.is_ok:ifm.body=="ping":awaitreply("pong")else:raiseException("Invalid message body")elifm.is_timeout:return# terminate processasyncdefmain():p=awaitspawn(ping_pong())awaitsend(p,"ping")print(awaitreceive())# Message(sender=<Proc:Task-2>, event='ok', body='pong')awaitsend(p,"pang")print(awaitreceive())# Message(sender=<Proc:Task-2>, event='err', body=Exception('Invalid message body'))awaitsend(p,"ping")print(awaitreceive())# Message(sender=<Proc:Task-2>, event='exit', body='noproc')if__name__=="__main__":asyncio.run(main())TODO:Lot of things! |
aioes | aioes is an asyncio-compatible library for working with Elasticsearch.
Documentation
Read the aioes documentation on Read The Docs: http://aioes.readthedocs.io/
Example
import asyncio
from aioes import Elasticsearch
@asyncio.coroutine
def go():
es = Elasticsearch(['localhost:9200'])
ret = yield from es.create(index="my-index",
doc_type="test-type",
id=42,
body={"str": "data",
"int": 1})
assert (ret == {'_id': '42',
'_index': 'my-index',
'_type': 'test-type',
'_version': 1,
'ok': True})
answer = yield from es.get(index="my-index",
doc_type="test-type",
id=42)
assert answer['_source'] == {'str': 'data', 'int': 1}
loop = asyncio.get_event_loop()
loop.run_until_complete(go())
Requirements
Python 3.3+ with asyncio, or Python 3.4+
aiohttp 1.3+
Tests
Make sure you have an instance of Elasticsearch running on port 9200
before executing the tests.In order for all tests to work you need to add the following lines in theconfig/elasticsearch.ymlconfiguration file:Enable groovy scripts:script.groovy.sandbox.enabled: trueSet a repository path:path.repo: ["/tmp"]The test suite usespy.test, simply run:$ py.testLicenseaioes is offered under the BSD license.CHANGES0.7.2 (2017-04-19)Allow customConnectorinTransport: #138, #137.Several typos in documentation fixed.0.7.0 (2017-03-29)Fix Elasticsearch 5.x compatibility issues: #48, #72, #112, #73, #123.Addstored_fieldstomget,searchandexplainmethods (#123).Addwait_for_no_relocating_shardsparameter inhealth(#123).Addfilter,token_filter,char_filterparams inanalyze(#123).Addforce_mergemethod (renamedoptimize) (#123).Add ignore_idle_threads param in hot_threads #123.Update project dependencies.Convert tests to pytest.0.6.1 (2016-09-08)Accept bytes as payload #42ConvertElasticsearch.close()into a coroutine.0.6.0 (2016-09-08)Add support for verify_ssl #430.5.0 (2016-07-16)Allow scheme, username and password in connections #400.4.0 (2016-02-10)Fix ES2+ compatibility in transport address regex #380.3.0 (2016-01-27)Use aiohttp.ClientSession internally #360.2.0 (2015-10-08)Make compatible with Elasticsearch 1.7Support Python 3.5Drop Python 3.3 supportRelicense under Apache 20.1.0 (2014-10-04)Initial release |
aioes-ext | aioes is an asyncio-compatible library for working with ElasticSearch.
Documentation
Read the aioes documentation on Read The Docs: http://aioes.readthedocs.org/
Example
import asyncio
from aioes import Elasticsearch
@asyncio.coroutine
def go():
es = Elasticsearch(['localhost:9200'])
ret = yield from es.create(index="my-index",
doc_type="test-type",
id=42,
body={"str": "data",
"int": 1})
assert (ret == {'_id': '42',
'_index': 'my-index',
'_type': 'test-type',
'_version': 1,
'ok': True})
answer = yield from es.get(index="my-index",
doc_type="test-type",
id=42)
assert answer['_source'] == {'str': 'data', 'int': 1}
loop = asyncio.get_event_loop()
loop.run_until_complete(go())
Requirements
Python 3.3+ with asyncio, or Python 3.4+
aiohttp 0.9.1+
Tests
Make sure you have an instance of Elasticsearch running on port 9200
before executing the tests.In order for all tests to work you need to add the following lines in theconfig/elasticsearch.ymlconfiguration file:Enable groovy scripts:script.groovy.sandbox.enabled: trueSet a repository path:path.repo: ["/tmp"]The test suite usesnose, to execute:nosetests testsLicenseaioes is offered under the BSD license.CHANGES0.6.1 (2016-09-08)Accept bytes as payload #42ConvertElasticsearch.close()into a coroutine.0.6.0 (2016-09-08)Add support for verify_ssl #430.5.0 (2016-07-16)Allow scheme, username and password in connections #400.4.0 (2016-02-10)Fix ES2+ compatibility in transport address regex #380.3.0 (2016-01-27)Use aiohttp.ClientSession internally #360.2.0 (2015-10-08)Make compatible with Elasticsearch 1.7Support Python 3.5Drop Python 3.3 supportRelicense under Apache 20.1.0 (2014-10-04)Initial release |
aioesl | ****************************************
aioesl: Protocol for Freeswitch Event Socket
****************************************

About
=====
aioesl supports inbound and outbound connections.
For more information read https://freeswitch.org/confluence/display/FREESWITCH/mod_event_socket

Requirements
============
* Python 3.5.1

Examples
========
Examples are available in the examples/ directory. |
aioesphomeapi | aioesphomeapiallows you to interact with devices flashed withESPHome.InstallationThe module is available from thePython Package Index.$pip3installaioesphomeapiAn optional cython extension is available for better performance, and the module will try to build it automatically.The extension requires a C compiler and Python development headers. The module will fall back to the pure Python implementation if they are unavailable.Building the extension can be forcefully disabled by setting the environment variableSKIP_CYTHONto1.UsageIt’s required that you enable theNative APIcomponent for the device.# Example configuration entryapi:password:'MyPassword'Check the output to get the local address of the device or use thename:``under``esphome:from the device configuration.[17:56:38][C][api:095]:APIServer:[17:56:38][C][api:096]:Address:api_test.local:6053The sample code below will connect to the device and retrieve details.importaioesphomeapiimportasyncioasyncdefmain():"""Connect to an ESPHome device and get details."""# Establish connectionapi=aioesphomeapi.APIClient("api_test.local",6053,"MyPassword")awaitapi.connect(login=True)# Get API version of the device's firmwareprint(api.api_version)# Show device detailsdevice_info=awaitapi.device_info()print(device_info)# List all entities of the deviceentities=awaitapi.list_entities_services()print(entities)loop=asyncio.get_event_loop()loop.run_until_complete(main())Subscribe to state changes of an ESPHome device.importaioesphomeapiimportasyncioasyncdefmain():"""Connect to an ESPHome device and wait for state changes."""cli=aioesphomeapi.APIClient("api_test.local",6053,"MyPassword")awaitcli.connect(login=True)defchange_callback(state):"""Print the state changes of the device.."""print(state)# Subscribe to the state changesawaitcli.subscribe_states(change_callback)loop=asyncio.get_event_loop()try:asyncio.ensure_future(main())loop.run_forever()exceptKeyboardInterrupt:passfinally:loop.close()Other examples:CameraAsync 
print
Simple print
InfluxDB

Development
For development it is recommended to use a Python virtual environment (venv).

# Setup virtualenv (optional)
$ python3 -m venv .
$ source bin/activate

# Install aioesphomeapi and development dependencies
$ pip3 install -e .
$ pip3 install -r requirements_test.txt

# Run linters & test
$ script/lint

# Update protobuf _pb2.py definitions (requires a protobuf compiler installation)
$ script/gen-protoc

A cli tool is also available for watching logs:
aioesphomeapi-logs --help
A cli tool is also available to discover devices:
aioesphomeapi-discover

License
aioesphomeapi is licensed under MIT, for more details check LICENSE. |
aioetcd | # Async etcd client |
aio_etcd | A python client for Etcdhttps://github.com/coreos/etcdOfficial documentation:http://python-aio-etcd.readthedocs.org/InstallationPre-requirementsThis version of python-etcd will only work correctly with the etcd server version 2.0.x or later. If you are running an older version of etcd, please use python-etcd 0.3.3 or earlier.This client is known to work with python 3.5. It will not work in older versions of python due to ist use of “async def” syntax.Python 2 is not supported.From source$pythonsetup.pyinstallUsageThe basic methods of the client have changed compared to previous versions, to reflect the new API structure; however a compatibility layer has been maintained so that you don’t necessarily need to rewrite all your existing code.Create a client objectimportaio_etcdasetcdclient=etcd.Client()# this will create a client against etcd server running on localhost on port 4001client=etcd.Client(port=4002)client=etcd.Client(host='127.0.0.1',port=4003)client=etcd.Client(host=(('127.0.0.1',4001),('127.0.0.1',4002),('127.0.0.1',4003)))client=etcd.Client(host='127.0.0.1',port=4003,allow_redirect=False)# wont let you run sensitive commands on non-leader machines, default is true# If you have defined a SRV record for _etcd._tcp.example.com pointing to the clientsclient=etcd.Client(srv_domain='example.com',protocol="https")# create a client against https://api.example.com:443/etcdclient=etcd.Client(host='api.example.com',protocol='https',port=443,version_prefix='/etcd')Write a keyawaitclient.write('/nodes/n1',1)# with ttlawaitclient.set('/nodes/n1',1)# Equivalent, for compatibility reasons.awaitclient.write('/nodes/n2',2,ttl=4)# sets the ttl to 4 secondsRead a key(awaitclient.read('/nodes/n2')).value# read a value(awaitclient.get('/nodes/n2')).value# Equivalent, for compatibility reasons.awaitclient.read('/nodes',recursive=True)# get all the values of a directory, recursively.# raises etcd.EtcdKeyNotFound when key not 
found
try:
    await client.read('/invalid/path')
except etcd.EtcdKeyNotFound:
    # do something
    print("error")

Delete a key
await client.delete('/nodes/n1')

Atomic Compare and Swap
await client.write('/nodes/n2', 2, prevValue=4)  # will set /nodes/n2's value to 2 only if its previous value was 4
await client.write('/nodes/n2', 2, prevExist=False)  # will set /nodes/n2's value to 2 only if the key did not exist before
await client.write('/nodes/n2', 2, prevIndex=30)  # will set /nodes/n2's value to 2 only if the key was last modified at index 30
await client.test_and_set('/nodes/n2', 2, 4)  # equivalent to client.write('/nodes/n2', 2, prevValue=4)

You can also atomically update a result:
await client.write('/foo', 'bar')
result = await client.read('/foo')
print(result.value)  # bar
result.value += u'bar'
updated = await client.update(result)  # if any other client wrote to '/foo' in the meantime this will fail
print(updated.value)  # barbar

Watch a key
result = await client.read('/nodes/n1')  # start from a known initial value
result = await client.read('/nodes/n1', wait=True, waitIndex=result.modifiedIndex + 1)  # will wait till the key is changed, and return once it's changed
result = await client.read('/nodes/n1', wait=True, waitIndex=10)  # get all changes on this key starting from index 10
result = await client.watch('/nodes/n1')  # equivalent to client.read('/nodes/n1', wait=True)
result = await client.watch('/nodes/n1', index=result.modifiedIndex + 1)

If you want to time out the read() call, wrap it in asyncio.wait_for:
result = await asyncio.wait_for(client.read('/nodes/n1', wait=True), timeout=30)

Refreshing key TTL
(Since etcd 2.3.0) Keys in etcd can be refreshed without notifying current watchers.
This can be achieved by setting the refresh to true when updating a TTL.
You cannot update the value of a key when refreshing it.
await client.write('/nodes/n1', 'value', ttl=30)  # sets the ttl to 30 seconds
await client.refresh('/nodes/n1', ttl=600)  # refresh ttl to 600 seconds, without notifying current watchers

Locking module
# Initialize the lock object:
# NOTE: this does not acquire a
lockfromaio_etcd.lockimportLockclient=etcd.Client()# Or you can custom lock prefix, default is '/_locks/' if you are using HEADclient=etcd.Client(lock_prefix='/my_etcd_root/_locks')lock=etcd.Lock(client,'my_lock_name')# Use the lock object:awaitlock.acquire(blocking=True,# will block until the lock is acquiredlock_ttl=None)# lock will live until we release itlock.is_acquired# Trueawaitlock.acquire(lock_ttl=60)# renew a lockawaitlock.release()# release an existing locklock.is_acquired# False# The lock object may also be used as a context manager:asyncwithLock(client,'customer1')asmy_lock:do_stuff()my_lock.is_acquired# Trueawaitmy_lock.acquire(lock_ttl=60)my_lock.is_acquired# FalseGet machines in the clustermachines=awaitclient.machines()Get leader of the clusterleaderinfo=awaitclient.leader()Generate a sequential key in a directoryx=awaitclient.write("/dir/name","value",append=True)print("generated key: "+x.key)# actually the whole pathprint("stored value: "+x.value)List contents of a directory#stick a couple values in the directoryawaitclient.write("/dir/name","value1",append=True)awaitclient.write("/dir/name","value2",append=True)directory=awaitclient.get("/dir/name")# loop through a directory's childrenforresultindirectory.children:print(result.key+": "+result.value)# or just get the first child valueprint(directory.next(children).value)Development setupThe usual setuptools commands are available.$python3setup.pyinstallTo test, you should have etcd available in your system path:$python3setup.pytestto generate documentation,$cddocs$makeRelease HOWTOTo make a releaseUpdate release date/version in NEWS.txt and setup.pyRun ‘python setup.py sdist’Test the generated source distribution in dist/Upload to PyPI: ‘python setup.py sdist register upload’ |
aioetcd3 | No description available on PyPI. |
aio-eth | aio-eth

A simple python library that can be used to run large Web3 queries on the Ethereum blockchain concurrently, as per the Ethereum JSON-RPC specification.

The library provides a bare minimal framework for expressing raw JSON-RPC queries as described in the Ethereum specification and executing them together, either concurrently (off-chain, on the client side) or as a single batch (the JSON-RPC batch specification, on-chain). This greatly reduces the time required compared to running large queries sequentially, and thus can be used for use-cases where we need to index large numbers of transactions happening on the Ethereum blockchain into a local database for faster Web2 queries.

Features
- Provides an interface for concurrent execution of large numbers of JSON-RPC queries
- Provides an interface for batched execution of large numbers of JSON-RPC queries
- Provides complete flexibility to call any JSON-RPC method by allowing users to specify raw queries directly.

Requirements:
Python 3.6+

How to install:

From source:

git clone [email protected]:Narasimha1997/aio-eth.git
cd aio-eth
pip3 install -e .

From PyPi:

pip3 install aio-eth

Examples:

Run tasks concurrently: This method will create a socket for each task on the client side and execute the JSON-RPC calls concurrently. Under the hood, this method uses the aiohttp module. This way you are using the client machine's resources and bandwidth to run queries by creating N concurrent sockets.

import asyncio
import aio_eth
import time

URL = "https://rinkeby.infura.io/v3/b6fe23ef7add48d18d33c9bf41d5ad0c"

async def query_blocks():
    # create the API handle
    async with aio_eth.EthAioAPI(URL, max_tasks=100) as api:
        # express queries - example: get all transactions from 70 blocks
        # starting from 10553978
        for i in range(10553978, 10553978 + 70):
            # submit tasks to the task list, if `current tasks > max_tasks`
            # this method throws an exception.
            api.push_task({
                "method": "eth_getBlockByNumber",
                "params": [hex(i), True]
            })
        st = time.time()
        # execute the tasks together concurrently, outputs are returned in the
        # same order in which their corresponding queries were submitted.
        result = await api.exec_tasks_async()
        et = time.time()
        print('time taken: ', et - st, ' seconds')

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(query_blocks())

Output:
time taken: 1.5487761497497559 seconds

Run tasks as batch: This method will submit the batch of queries to the connected Ethereum RPC server and expects the output of all the queries at once; unlike the concurrent method, here you will be using only one socket, as all the queries are submitted as a batch. While the Batch API is very useful, a few providers do not support batch queries, so make sure your provider supports batch queries before using this.

import asyncio
import aio_eth
import time

URL = "https://rinkeby.infura.io/v3/b6fe23ef7add48d18d33c9bf41d5ad0c"

async def query_blocks():
    # create the API handle
    async with aio_eth.EthAioAPI(URL, max_tasks=100) as api:
        # express queries - example: get all transactions from 70 blocks
        # starting from 10553978
        for i in range(10553978, 10553978 + 70):
            # submit tasks to the task list, if `current tasks > max_tasks`
            # this method throws an exception.
            api.push_task({
                "method": "eth_getBlockByNumber",
                "params": [hex(i), True]
            })
        st = time.time()
        # execute the tasks together as a batch, outputs are returned in the
        # same order in which their corresponding queries were submitted.
        result = await api.exec_tasks_batch()
        et = time.time()
        print('time taken: ', et - st, ' seconds')

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(query_blocks())

Output:
time taken: 3.698002576828003 seconds

It can be noted that using concurrent connections gives results in less time when compared to the Batch API, but the Batch API can be helpful for large queries involving hundreds of tasks, as opening many concurrent sockets will eat up the system's resources.

Handling errors:
- When using exec_tasks_async, each task can succeed or fail independently, as they are executed concurrently. Each item in the result contains a key called success, which is either True or False; if success is False, then a field called exception can be read to get the Exception object of the corresponding error.
- When using exec_tasks_batch, all of the tasks either succeed or fail together, as they are executed on the server side. For this reason, the method throws an exception on failure and it must be handled externally.

Maximum tasks size:

We can limit the number of tasks that can be submitted at once by calling the set_max_tasks method. By default it is set to 100. When we try to push more tasks above this limit using push_task, an exception is thrown. Example:

async with aio_eth.EthAioAPI(URL, max_tasks=100) as api:
    ......
    # set max task size
    api.set_max_tasks(500)
    .......

TODO:
- Support Web Sockets channel

Contributing

Please feel free to raise issues and submit PRs.
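The per-task error handling described for exec_tasks_async can be sketched with plain dictionaries. This is a hypothetical result shape based only on the description above (a "success" flag per item, plus an "exception" field on failure) — not output captured from the library:

```python
# Hypothetical exec_tasks_async-style results, per the description above:
# each item carries a "success" flag and, on failure, an "exception" object.
results = [
    {"success": True, "result": {"number": "0xa10e7a"}},
    {"success": False, "exception": TimeoutError("request timed out")},
]

ok, failed = [], []
for item in results:
    if item["success"]:
        ok.append(item["result"])      # keep successful payloads
    else:
        failed.append(item["exception"])  # collect errors for retry/logging

print(len(ok), len(failed))  # -> 1 1
```

Splitting the result list this way keeps a partial failure from discarding the blocks that did come back.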
aioethereum | aioethereum

Ethereum RPC client library for the PEP 3156 Python event loop.

Features:
- ujson support: Yes
- uvloop support: Yes
- High-level APIs: Yes
- HTTP support: Yes
- Unix domain socket (IPC) support: Yes
- SSL/TLS support: Yes
- Tested CPython versions: 3.4, 3.5, 3.6
- Tested Geth versions: 1.7.0
- Implemented RPC apis: admin, db, debug, eth, miner, net, personal, shh, txpool, web3

Documentation: http://aioethereum.readthedocs.io/

Usage examples

Simple high-level interface (through HTTP):

import asyncio
import aioethereum

loop = asyncio.get_event_loop()

async def go():
    client = await aioethereum.create_ethereum_client(
        'http://localhost:8545', loop=loop)
    val = await client.web3_clientVersion()
    print(val)

loop.run_until_complete(go())
# will print like 'Geth/v1.7.0-stable-6c6c7b2a/darwin-amd64/go1.9'

or via IPC

import asyncio
import aioethereum

loop = asyncio.get_event_loop()

async def go():
    client = await aioethereum.create_ethereum_client(
        'ipc://<path_to_unix_socket>', loop=loop)
    val = await client.web3_clientVersion()
    print(val)

loop.run_until_complete(go())
# will print like 'Geth/v1.7.0-stable-6c6c7b2a/darwin-amd64/go1.9'

Requirements
- Python 3.3+ and asyncio, or Python 3.4+
- ujson
- aiohttp

Note: ujson is a preferred requirement; its pure-C JSON encoder and decoder will be used automatically when it is installed.

License

The aioethereum is offered under MIT license.

0.2.2 (2018-04-10)
- Fix bug related to https://www.python.org/dev/peps/pep-0492/#new-syntax;
- Fix port detection for client when only a domain is given;

0.2.1 (2017-10-08)
- Add admin and debug management apis;
- Add new tests;
- Add uvloop support (python 3.5+ required);

0.2.0 (2017-10-05)
- Add more docstrings to the code;
- Add tests for all rpc methods;
- Add admin and debug;
- Fix error for unixsocket retrying;
- Fix unixsocket invalid loop for Python 3.4;
- BaseAsyncIOClient._rpc marked as deprecated and will be removed in 0.3.0;

0.1.1 (2017-10-01)
- Add sphinx docs;

0.1.0 (2017-09-30)
- Initial release;
- Ethereum client implemented;
- WIP on RPC management.
aioetherscan | No description available on PyPI. |
aioevent | No description available on PyPI. |
aioeventlet | aioeventlet implements the asyncio API (PEP 3156) on top of eventlet. It makes it possible to write asyncio code in a project currently written for eventlet.

aioeventlet allows you to use greenthreads in asyncio coroutines, and to use asyncio coroutines, tasks and futures in greenthreads: see the link_future() and wrap_greenthread() functions.

The main visible difference between aioeventlet and trollius is the behaviour of run_forever(): run_forever() blocks with trollius, whereas it runs in a greenthread with aioeventlet. This means that the aioeventlet event loop can run in a greenthread while the Python main thread runs other greenthreads in parallel.

- aioeventlet documentation
- aioeventlet project in the Python Cheeseshop (PyPI)
- aioeventlet project at Bitbucket

Copyright/license: Open source, Apache 2.0. Enjoy!
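The two-way bridging that link_future() and wrap_greenthread() provide can be illustrated with a rough stdlib analogy using OS threads instead of greenthreads. This is not the aioeventlet API — just the same bridging idea expressed with asyncio primitives:

```python
import asyncio
import threading

def blocking_work():
    return 6 * 7  # stands in for code running in a green/OS thread

async def main():
    loop = asyncio.get_running_loop()
    # "wrap" threaded work into something awaitable (analogous to
    # wrap_greenthread turning a greenthread into a future)
    result = await loop.run_in_executor(None, blocking_work)

    # the reverse direction (analogous to link_future): a thread
    # blocking on the result of a coroutine scheduled on the loop
    box = {}
    def thread_side():
        fut = asyncio.run_coroutine_threadsafe(
            asyncio.sleep(0, result="ok"), loop)
        box["value"] = fut.result()  # blocks the thread, not the loop
    t = threading.Thread(target=thread_side)
    t.start()
    while t.is_alive():          # keep the loop serving while the thread waits
        await asyncio.sleep(0.01)
    t.join()
    return result, box["value"]

out = asyncio.run(main())
print(out)  # -> (42, 'ok')
```

With eventlet the same shape applies, except the "threads" are greenthreads cooperatively scheduled alongside the loop.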
aioevents | aioevents

Events for asyncio (PEP 3156)

Usage

To declare an event:

from aioevents import Event

class Spam:
    egged = Event("The spam has been egged")

To register a handler:

spam = Spam()

@spam.egged.handler
def on_egged(sender, amt):
    print("Spam got egged {} times".format(amt))

Triggering an event:

spam.egged(42)
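The class-attribute-plus-handler pattern shown above can be sketched with a plain descriptor. This is a toy illustration of the pattern, not aioevents' actual implementation:

```python
# Toy sketch of a descriptor-based event (not aioevents' real code):
# a class attribute that collects handlers and calls them on emit.
class Event:
    def __init__(self, doc=""):
        self.doc = doc
        self._handlers = []

    def __get__(self, obj, objtype=None):
        return self  # one shared event per class attribute

    def handler(self, func):
        self._handlers.append(func)  # usable as a decorator
        return func

    def __call__(self, *args, **kwargs):
        for func in self._handlers:
            func(*args, **kwargs)

class Spam:
    egged = Event("The spam has been egged")

seen = []
spam = Spam()

@spam.egged.handler
def on_egged(amt):
    seen.append(amt)

spam.egged(42)
print(seen)  # -> [42]
```

A real asyncio-aware version would additionally schedule coroutine handlers on the event loop instead of calling them directly.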
aioevents-ng | A simple library for managing events through an asynchronous queue

Installation

pip install aioevents-ng

Usage example

import asyncio
from dataclasses import dataclass

import aioevents


@dataclass
class MyEvent(aioevents.Event):
    payload: str


@aioevents.manager.register(MyEvent)
async def event_hadler(event: aioevents.Event):
    print(f"recieved: {event}")


async def produce():
    async with aioevents.events as events:
        await events.publish(MyEvent("Hello!"))


async def main():
    aioevents.start(asyncio.get_event_loop())
    await produce()
    print('stopping worker')
    aioevents.stop()
    # wait for all coroutines
    await asyncio.sleep(1)


if __name__ == "__main__":
    asyncio.run(main())

License

aioevents library is offered under Apache 2 license.
aioevproc | aioevproc

It is a minimal async/sync event processing framework. It has no dependencies and uses nothing except pure Python 3.8.

TL;DR

Do not have much time? See the recap on examples and the recap on conditions. Now go and use aioevproc! :)

Examples

Simplest example for a single async handler, just echo the message text:

from aioevproc import EventsProcessor, handler, Event

class EchoTelegramBot(EventsProcessor):
    @handler(lambda event: 'message' in event and 'text' in event['message'])
    async def echo_message(self, event: Event) -> None:
        await self.reply_to_message(text=event['message']['text'])

A little bit more complex Telegram bot example, see the explanation below:

import logging
from contextlib import asynccontextmanager, contextmanager
from typing import AsyncGenerator, Generator, Literal

from aioevproc import EventsProcessor, handler, Event

class TelegramBot(EventsProcessor):
    # synchronous middleware for any exception: log exception
    @handler()
    @contextmanager
    def log_exception(self, event: Event) -> Generator[None, None, None]:
        try:
            yield
        except:
            logging.exception('Error!')

    # async middleware for any exception: send excuse message to the user
    @handler()
    @asynccontextmanager
    async def send_excuse_message(self, event: Event) -> AsyncGenerator[None, None]:
        try:
            yield
        except:
            await self.send_message('Sorry!')

    # synchronous handler for all updates: log message
    @handler()
    def log_update_id(self, event: Event) -> Literal[True]:
        logging.info(event['update_id'])
        return True  # call following handlers

    # async handler to check if user is admin for updates with messages and callback queries
    @handler(lambda event: 'message' in event or 'callback_query' in event)
    async def check_admin(self, event: Event) -> bool:
        # the next handler will be called only if this returns True
        return event['message']['from_user']['id'] in await self.get_admins()

    # async handler to echo updates containing a message
    @handler(lambda event: 'message' in event and 'text' in event['message'])
    async def echo_message(self, event: Event) -> None:
        # if the update contains a message then echo it
        await self.reply_to_message(text=event['message']['text'])

    # async handler to answer a callback query
    @handler(lambda event: 'callback_query' in event)
    async def handle_callback_query(self, event: Event) -> None:
        # if the update does not contain a message but a callback query, answer
        await self.answer_callback_query(event['callback_query']['id'])

What do the examples demonstrate?

handler decorates methods of EventsProcessor subclasses. The method can be one of: async function (like check_admin, handle_message and echo_message in the example above), sync function (log_update_id), async context manager (send_excuse_message) or sync context manager (log_exception).

All of the handlers are called in the same order as they are declared in the
class body. Middlewares follow the same rule: they are entered in the order of declaration and exited in the reversed order (in a recursive manner).

Sync and async handlers may return a value: if it is not a truthy value, then none of the following handlers will be called and event processing will be stopped at the handler which did not return a truthy value.

Please notice: if you return nothing from a sync/async handler method (meaning you implicitly return None) then none of the following handlers will be called. This is an intended default behavior, since usually an event requires a single handler. None is a falsy (not truthy) value.

Returning True from the handler is useful for logging purposes: the logging method should not block further processing of the event. This is shown in the example above (log_update_id), as well as the filtering use case for admins: if the user is not an admin then check_admin will return False and no further processing will be done.

Middlewares are based on context managers and are intended to be used for exception handling. Also use them when some related actions are required before and after the event is processed by other handlers: for example, for measuring the execution time.

Recap on examples

Let's sum up on the examples:
- aioevproc supports both sync and async handlers and middlewares.
- Every handler or middleware has to be a method of an EventsProcessor subclass.
- If a handler does not return exactly True then the following handlers are not called.
- Middlewares are sync/async context managers.
- Handlers and middlewares are called in the same order as they are declared.

How to use the handlers conditions

A handler usually has to be applied to certain types of events, not all. The
following handler will be applied only to updates containing a message:

@handler(lambda event: 'message' in event)
async def handle_event(self, event: Event) -> None:
    pass

If the condition check fails then the next handler's condition will be checked:

@handler(lambda event: False)
def always_skipped(self, event: Event) -> Literal[False]:
    # this handler is never called since its predicate always evaluates to False
    return False  # has no effect since this handler is not called

# since the previous handler's condition check failed, this one will be checked next
@handler(lambda event: 'edited_message' in event)
def log_message_edit(self, event: Event) -> None:
    pass

Please notice: if the handler condition check failed, then the handler's return value does not affect the next handlers. The return value of the handler affects the next handlers only if the handler itself is called (meaning that its condition check is passed).

You can specify multiple predicates in a handler call: this will make the handler be called only if all of the predicates evaluate to a truthy value for the event. The example below shows handlers which will be applied only to updates with text messages:

@handler(
    lambda event: 'message' in event,
    lambda event: 'text' in event['message'],
)
async def handle_event(self, event: Event) -> None:
    pass

The predicates are evaluated in the same order as they are declared. So the
above pair of conditions is equivalent to 'message' in event and 'text' in event['message']. This means that specifying multiple predicates for a single handler call implements AND semantics (conditions conjunction).

If you need to apply a single handler if any of the conditions is true, use multiple handler calls:

@handler(lambda event: 'message' in event)
@handler(lambda event: 'callback_query' in event)
async def handle_event(self, event: Event) -> None:
    pass

This will apply the handler for either an update with a message or an update with a callback query. This form implements OR semantics (conditions disjunction).

Please notice: the implementation of aioevproc checks handlers' predicates in the same order as they are declared. First 'message' in event will be checked, and after that the 'callback_query' in event predicate will be evaluated. This is the reverse of the order in which Python applies decorators: Python applies the innermost decorator first, but aioevproc applies the outermost handler call first, since it is more intuitive.

If you need a handler to be applied unconditionally then use just handler() without arguments.

Please notice: you cannot use handler() without arguments on a handler with any other handler call with arguments, since this makes no sense:

@handler()  # will raise an AssertionError
@handler(lambda event: 'message' in event)
async def handle_event(self, event: Event) -> None:
    pass

Don't forget to return True from an unconditionally applied handler to not ignore
all of the following handlers!

Recap on conditions

Let's sum up on conditions:
- A single handler call accepts multiple predicates as arguments. The handler then will be called only if all of the predicates are true (AND semantics).
- If a handler method (or middleware) is decorated with multiple handler calls, then the handler will be called if any of the handlers' conditions is true (OR semantics).
- OR and AND semantics can be combined.
- If the handler's condition check failed, then the handler is skipped and the next handlers' conditions are checked until a matching handler is found.
- All the conditions are checked in the same order as they are declared. The outermost handler decorator is applied first.
- A handler decorated with handler() without arguments is applied unconditionally.

Installation

pip install aioevproc

How to run tests

From the project root directory:

python -m unittest discover -s tests/unit
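The ordered, predicate-gated dispatch summed up above can be sketched in a few lines of plain Python. This is a toy illustration of the dispatch rules (run in declaration order, skip on failed predicates, stop unless the handler returns exactly True) — not aioevproc's internals:

```python
# Toy sketch of predicate-gated, ordered dispatch (not aioevproc's code).
handlers = []

def handler(*predicates):
    def decorate(func):
        handlers.append((predicates, func))
        return func
    return decorate

def process(event):
    called = []
    for predicates, func in handlers:
        if not all(p(event) for p in predicates):
            continue  # condition failed: skip, try the next handler
        called.append(func(event))
        if called[-1] is not True:
            break  # anything but True stops further processing
    return called

@handler()
def log(event):
    return True  # let the following handlers run

@handler(lambda e: 'message' in e)
def on_message(event):
    return 'handled message'

print(process({'message': 'hi'}))  # -> [True, 'handled message']
print(process({'callback': 'x'}))  # -> [True]
```

Note how `all()` over an empty predicate tuple is True, which is exactly the unconditional handler() case.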
aioevt | Aioevt

Simplified Asyncio-Friendly Event Management

Problem

Asyncio offers a lot of utilities that provide thread-safe execution of coroutines and synchronous functions. However, there isn't any one "unified" way of emitting/catching events across threads, and synchronization primitives are not themselves thread-safe. This can lead to unexpected behavior when trying to synchronize multiple event loops on multiple threads.

Solution

aioevt - After creating the manager, you can emit or await 'global' events in a thread-safe way. Callbacks can be registered from any thread and target any event loop. This allows you to very easily share objects and quickly emit information without fussing with thread safety.

Documentation

Evt and EvtData

The core objects used throughout aioevt are the Evt and EvtData dataclasses. Evt represents an event itself and is comprised of a name (identifier), func (callback), loop (for execution), and recurring (automatic re-scheduling). EvtData consists only of args and kwargs, which are splatted into callbacks as needed.

Create a manager

Create an aioevt manager which uses the default event loop.

mgr = aioevt.Manager()

Register an event

Register a global event to be triggered from a provided event loop when a named event is emitted. This can be done in two ways: through the mgr.register method, or the mgr.on decorator. An event can have multiple callbacks, and each callback will be invoked with the same parameters on each emit.

Note: The return value of the event callback is not retrievable. If you'd like to handle a value from inside a callback, simply emit a different event and wait for it in the desired location.

mgr.register(
name="MyEvent", # Name by which the event will be referenced
func=my_func, # Synchronous or Asynchronous function
loop=my_event_loop, # Provide a target loop in which to execute the function, Default: None (get running)
recurring=True, # Determines if the event should be re-registered after the first emit, Default: True
)[email protected](name="Add", loop=my_event_loop, recurring=True)
def my_callback(num1, num2, num3, num4):
# e.g. run hard calculations within a ProcessPoolExecutor
total = num1 + num2 + num3 + num4
mgr.emit("Calculated", args=(total,))
mgr.emit_after(0.1, "Add", args=(1, 2, 3, 4))
data = await mgr.wait("Calculated")
assert data.args[0] == 10Emitting an eventEmit a signal with arbitrary positional and/or keyword parameters. This can be done withmgr.emitormgr.emit_afterwhich is identical except that it accepts an additionaldelayargument as its first parameter.mgr.emit(
name="MyEvent", # Name of the event to emit
args=(1, 2, 3), # Tuple of args used to emit
kwargs={"num4": 4}, # Dict of kwargs used to emit
)Waiting for an eventUsingmgr.wait, you can asynchronously wait until an event is fired. This is commonly used just to wait for a certain status, but will also return anEvtDataobject which contains theargsandkwargsvalues that were passed into the call tomgr.emitdata = await mgr.wait(
name="MyEvent", # Name of the event to wait for
timeout=None, # Timeout in seconds, Default: None
)
print(data.args) # mgr.emit(..., args=...)
print(data.kwargs) # mgr.emit(..., kwargs=...)Unregistering an eventRecurring events can be unregistered manually both by name and by function value.Notethat unregistering by name is significantly faster and more efficient, so use that when possible.mgr.unregister(name="MyEventName")
mgr.unregister(func=my_callback_func) |
aioexec | aioexec

Description

Aioexec is a simple, intuitive interface around the concurrent.futures package and asyncio's loop.run_in_executor method. Aioexec is lightweight: no dependencies and ~100 LOC.

Requirements

aioexec requires Python >= 3.7

Install

pip install aioexec

or

pipenv install aioexec

Usage

Without aioexec you usually run an executor something like this:

import asyncio
from concurrent.futures import ProcessPoolExecutor

# ...
loop = asyncio.get_event_loop()
foo = await loop.run_in_executor(ProcessPoolExecutor(1), lambda: my_func(foo='baz'))

With aioexec you would do the same like this:

from aioexec import Procs

# ...
foo = await Procs(1).call(my_func, foo='baz')

You can pass both sync and async functions to an executor:

def my_sync_func(foo):
    return stuff(foo)

async def my_async_func(foo):
    return await stuff(foo)

foo = await Procs(1).call(my_sync_func, foo='baz')
foo = await Procs(1).call(my_async_func, foo='baz')

You can call a batch of functions in the same executor like this:

import asyncio
from aioexec import Procs, Call

# ...
my_values = await asyncio.gather(*Procs(3).batch(
    Call(my_func, foo='bar'),
    Call(my_func, foo='baz'),
    Call(my_func, foo='qux'),
))

This plays nicely with comprehensions:

my_values = await asyncio.gather(*Procs(10).batch(
    Call(my_func, foo=i) for i in range(0, 10)
))

You can also spawn a pool in a context and make multiple different calls with the same executor:

with Procs(10) as pool:
    value_a = await pool.call(my_func, foo='baz')
    value_b = await aio.gather(*pool.batch(
        Call(my_func, foo=i) for i in range(0, 10)
    ))
    # etc...

The examples from above work the same for Threads, e.g.:

from aioexec import Threads

# ...
foo = await Threads(1).call(my_func, foo='baz')

If necessary, you can pass an event loop to the executors like this:

foo = await Threads(1, my_loop).call(my_func, foo='baz')
foo = await Procs(1, my_loop).call(my_func, foo='baz')

Development / Testing

Clone the repo and install dev packages:

pipenv install --dev

Run tests:

pipenv run python make.py test
aio-executor | aio-executor

A concurrent.futures.Executor implementation that runs asynchronous tasks in an asyncio loop.

Example usage:

from aio_executor import AioExecutor

async def my_async_function(arg):
    ...

with AioExecutor() as aioexec:
    # single invocation
    f = aioexec.submit(my_async_function, 'foo')
    result = f.result()

    # multiple concurrent invocations using "map"
    results = aioexec.map(my_async_function, ['foo', 'bar', 'baz'])

As a convenience, a run_with_asyncio decorator is also provided. This decorator runs the decorated async function in an AioExecutor instance.

The example below shows how to implement an async view function for the Flask framework using this decorator:

@app.route('/')
@run_with_asyncio
async def index():
    return await get_random_quote()

How to Install

pip install aio-executor
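A decorator like run_with_asyncio can be approximated with the standard library alone: run the wrapped coroutine function to completion on a worker thread's fresh event loop and return its result synchronously. This is a hedged sketch of the idea, not aio-executor's actual implementation:

```python
import asyncio
import functools
from concurrent.futures import ThreadPoolExecutor

def run_with_asyncio(func):
    """Sketch: call an async function from sync code and block for its result."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with ThreadPoolExecutor(max_workers=1) as pool:
            # asyncio.run gives the coroutine a fresh loop on the worker thread
            return pool.submit(asyncio.run, func(*args, **kwargs)).result()
    return wrapper

@run_with_asyncio
async def get_answer():
    await asyncio.sleep(0)  # stands in for real async I/O
    return 42

print(get_answer())  # -> 42
```

Running on a worker thread keeps this usable even when the calling thread already hosts an event loop, where a plain asyncio.run would raise.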
aio-exos | WORK IN PROGRESS

Extreme EXOS asyncio Client

This repository contains an Extreme EXOS asyncio client, with support for both JSON-RPC and RESTCONF options.

For reference on the EXOS JSON-RPC, refer to this document. For reference on the EXOS RESTCONF support, refer to this document.

Device Configuration

In order to access the EXOS device via API you must enable the web server feature, using either http or https.

enable web http
enable web https   # requires ssl configuration as well

JSON-RPC Usage

from aioexos.jsonrpc import Device

dev = Device(host='myhostname', username='user', password='Random')
show_one = await dev.cli('show switch')
show_many = await dev.cli(['show switch', 'show version'])

# get text instead of JSON/dict
show_text = await dev.cli('show switch', text=True)

RESTCONF Usage

The RESTCONF API supports only JSON body at this time. XML is not supported, even though the documentation states that it does.

from aioexos.restconf import Device

dev = Device(host='myhostname', username='user', password='Random')

# login step required for session authentication
await dev.login()

# execute commands providing the restconf URL, supports all request methods
# (GET, POST, etc.)
res = await dev.get('/openconfig-system:system')

# close connection when done with commands
await dev.aclose()
aioexponent | aioexponent

If you have problems with the code in this repository, please file an issue or a pull-request. Thanks!

Installation

pip install aioexponent

Usage

Use to send push notifications to Exponent Experiences from a Python server. Full documentation on the API is available if you want to dive into the details.

Here's an example on how to use this with retries and reporting via pyrollbar.

from aioexponent import DeviceNotRegisteredError
from aioexponent import PushClient
from aioexponent import PushMessage
from aioexponent import PushResponseError
from aioexponent import PushServerError
from aiohttp import ClientError


# Basic arguments. You should extend this function with the push features you
# want to use, or simply pass in a `PushMessage` object.
async def send_push_message(tokens, message, extra=None):
    client = PushClient()
    try:
        response = await client.publish_multiple([
            PushMessage(to=token, body=message, data=extra)
            for token in tokens
        ])
    except PushServerError as exc:
        # Encountered some likely formatting/validation error.
        rollbar.report_exc_info(
            extra_data={
                'tokens': tokens,
                'message': message,
                'extra': extra,
                'errors': exc.errors,
                'response_data': exc.response_data,
            })
        raise
    except ClientError as exc:
        # Encountered some Connection or HTTP error - retry a few times in
        # case it is transient.
        rollbar.report_exc_info(
            extra_data={
                'tokens': tokens,
                'message': message,
                'extra': extra,
            })
        raise retry(exc=exc)

    try:
        # We got a response back, but we don't know whether it's an error yet.
        # This call raises errors so we can handle them with normal exception
        # flows.
        response.validate_response()
    except DeviceNotRegisteredError:
        # Mark the push token as inactive
        from notifications.models import PushToken
        PushToken.objects.filter(token=token).update(active=False)
    except PushResponseError as exc:
        # Encountered some other per-notification error.
        rollbar.report_exc_info(
            extra_data={
                'tokens': tokens,
                'message': message,
                'extra': extra,
                'push_response': exc.push_response._asdict(),
            })
        raise retry(exc=exc)
aioextensions | Python Asyncio Extensions |
aioface | aioface

aioface is a powerful and simple asynchronous framework for the Facebook Messenger API written in Python 3.7.

Installation

$ pip install aioface

Examples

Echo bot

from aioface import Bot, Dispatcher, BaseStorage, FacebookRequest

dispatcher = Dispatcher(BaseStorage())
bot = Bot(webhook_token='your_webhook_token',
          page_token='your_page_token',
          dispatcher=dispatcher)


@dispatcher.message_handler()
async def echo_handler(fb_request: FacebookRequest):
    await fb_request.send_message(message=fb_request.message_text)


if __name__ == '__main__':
    bot.run()
aiofase | No description available on PyPI. |
aiofastforward | aiofastforward

Fast-forward time in asyncio Python by patching loop.call_later, loop.call_at, loop.time, and asyncio.sleep. This allows you to test asynchronous code synchronously. Inspired by AngularJS $timeout.$flush.

Installation

pip install aiofastforward

Usage

Patching is done through a context manager, similar to unittest.patch.

import asyncio
from aiofastforward import FastForward

loop = asyncio.get_event_loop()
with FastForward(loop) as forward:
    # Call production function(s), that call asyncio.sleep, loop.call_later,
    # loop.call_at, or loop.time
    # ...

    # Fast-forward time 1 second
    # asyncio.sleeps, and loop.call_at and loop.call_later callbacks
    # will be called: as though 1 second of real-world time has passed
    await forward(1)

    # More production functions or assertions
    # ...

Examples

asyncio.sleep

# Production code
async def sleeper(callback):
    await asyncio.sleep(1)
    callback(0)
    await asyncio.sleep(2)

# Test code
from unittest.mock import Mock, call

loop = asyncio.get_event_loop()
callback = Mock()
with aiofastforward.FastForward(loop) as forward:
    asyncio.ensure_future(sleeper(callback))

    await forward(1)  # Move time forward one second
    self.assertEqual(callback.mock_calls, [])
    await forward(1)  # Move time forward another second
    self.assertEqual(callback.mock_calls, [call(0)])

loop.call_later

# Production code
async def schedule_callback(loop, callback):
    loop.call_later(1, callback, 0)
    loop.call_later(2, callback, 1)

# Test code
from unittest.mock import Mock, call

loop = asyncio.get_event_loop()
with aiofastforward.FastForward(loop) as forward:
    callback = Mock()
    await schedule_callback(loop, callback)

    await forward(1)  # Move time forward one second
    self.assertEqual(callback.mock_calls, [call(0)])
    await forward(1)  # Move time forward another second
    self.assertEqual(callback.mock_calls, [call(0), call(1)])

loop.call_at

# Production code
async def schedule_callback(loop, callback):
    now = loop.time()
    loop.call_at(now + 1, callback, 0)
    loop.call_at(now + 2, callback, 1)

# Test code
from unittest.mock import Mock, call

loop = asyncio.get_event_loop()
with aiofastforward.FastForward(loop) as forward:
    callback = Mock()
    await schedule_callback(loop, callback)

    await forward(1)  # Move time forward one second
    self.assertEqual(callback.mock_calls, [call(0)])
    await forward(1)  # Move time forward another second
    self.assertEqual(callback.mock_calls, [call(0), call(1)])

forwarding time can block

await forward(a) only moves time forward, i.e. resolves calls to asyncio.sleep or calls the callbacks of call_at or call_later, once there are sufficient such calls that time could have progressed that amount. Calls to IO functions, even if they take non-zero amounts of real time in the test, do not advance the patched "pseudo-timeline": they are treated as instantaneous.

This means that there are cases where await forward(a) will block forever.

# Production code
async def sleeper():
    await asyncio.sleep(1)

# Test code
loop = asyncio.get_event_loop()
with aiofastforward.FastForward(loop) as forward:
    asyncio.ensure_future(sleeper())
    await forward(2)  # Will block forever

To avoid this, ensure you only await forward an amount less than or equal to how much pseudo-time will be progressed by asyncio.sleep, call_at or call_later.

# Production code
async def sleeper(callback):
    await asyncio.sleep(1)
    callback(0)
    await asyncio.sleep(1)
    callback(1)

# Test code
from unittest.mock import Mock, call

loop = asyncio.get_event_loop()
callback = Mock()
with aiofastforward.FastForward(loop) as forward:
    asyncio.ensure_future(sleeper(callback))

    start_time = loop.time()
    await forward(1.5)
    # The second sleep will have been called, but not resolved
    self.assertEqual(loop.time(), start_time + 1.5)
    self.assertEqual(callback.mock_calls, [call(0)])

The justification for this design is the consequence of the alternative: if it wouldn't block. This would mean that all sleeps and callbacks would have to be registered before the call to forward, and this in turn would lead to less flexible test code.

For example, the production code may have a chain of 10 asyncio.sleep(1) calls, and in the test you would like to await forward(10) to assert on the state of the system after these. At the time of calling await forward(10), however, at most one of the asyncio.sleep(1) calls would have been made. Not blocking would mean that after await forward(10), the pseudo-timeline in the world of the patched production code would not have moved forward ten seconds.

Differences between aiofastforward.FastForward and asynctest.ClockedTestCase

There is overlap in functionality: both support fast-forwarding time in terms of loop.call_later and loop.call_at. However, there are properties that FastForward has that ClockedTestCase does not:

- FastForward is not coupled to any particular test framework. The only requirement is that the test code must be in an async function. If you wish, you can use FastForward in an asynctest.TestCase test.
- FastForward supports fast-forwarding asyncio.sleep.
- FastForward allows fast-forwarding time in any event loop, not just the one the test code runs in.

ClockedTestCase does have an advantage over FastForward, which may be important for some uses:

- ClockedTestCase supports Python 3.4 onwards, while FastForward supports Python 3.5.0 onwards.
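The core trick behind fast-forwarding — swapping asyncio.sleep for a patched version that advances a pseudo-clock instead of waiting — can be shown with a toy stdlib illustration. This is not aiofastforward itself (which also patches call_later, call_at and loop.time and synchronizes with forward()), just the patching idea:

```python
import asyncio
from unittest import mock

elapsed = []

async def instant_sleep(delay, result=None):
    elapsed.append(delay)  # advance a pseudo-clock instead of really waiting
    return result

async def production():
    await asyncio.sleep(60)  # would be a real minute without patching
    return "done"

# patch asyncio.sleep for the duration of the test
with mock.patch("asyncio.sleep", instant_sleep):
    out = asyncio.run(production())

print(out, sum(elapsed))  # -> done 60
```

The test finishes immediately while still recording that 60 pseudo-seconds "passed", which is what makes time-dependent code testable synchronously.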
aiofauna | AioFauna🚀 Introducing aiofauna: A full-stack framework built on top of Aiohttp, Pydantic and FaunaDB.🔥 Inspired by FastAPI, it focuses on Developer Experience, Productivity and Versatility.🌟 Features:✅ Supports Python 3.7+, comes with an opinionated ODM (Object Document Mapper) out of the box for FaunaDB that abstracts out complex FQL expressions into pythonic, fully typed asynchronous methods for all CRUD operations.✅ Powerful and Scalable: Being built on top of Aiohttp, an asyncio-based HTTP server/client, and FaunaDB, a scalable serverless database for modern applications, allows for powerful and seamless integrations.✅ Performant: As a framework built on top of Aiohttp it leverages the power of asyncio and the fastest pythonAPIClientbuilt on top of aiohttp with Lazy Loading and session sharing, plus the ubiquity of FaunaDB to achieve high performance.✅ Automatic Swagger UI generation: Automatic generation of interactive Swagger UI documentation for instant testing of yourAPIServer, exposed at the/docspath.✅ SSE (Server Sent Events): Built-in support for SSE (Server Sent Events) for real-time streaming of data from FaunaDB to your application, syntactic sugar through the@ssedecorator.✅ Websockets: Built-in support for Websockets for real-time bidirectional communication between your application and the resources served by AioFaunaAPIServer, syntactic sugar through the@websocketdecorator.✅ Robust data validation: Utilizes the rich features of Pydantic for data validation and serialization.✅ OX: Thanks torichandaiohttpyou will get rich logging and error handling out of the box.✅ Auto-provisioning: Automatic management of indexes, unique indexes, and collections withFaunaModelODM.✅ Full JSON communication: Focus on your data, don't worry about the communication protocol.
YourAPIServerwill receive and return JSON.✅ Inspired by fastapi, you will work with almost the same syntax and features like path operations, path parameters, query parameters, request body, status codes,/docsautomatic interactive API documentation, and decorated view functions and automatic serialization and deserialization of your data.💡 With aiofauna, you can build fast, scalable, and reliable modern applications, avoiding decision fatigue and focusing on what really matters, your data and your business logic.📚 Check out the aiofauna library, and start building your next-gen applications today! 🚀#Python #FaunaDB #Async #Pydantic #aiofauna⚙️ If you are using a synchronous framework check outFaudanticfor a similar experience with FaunaDB and Pydantic.📦PyPi📦Demo📦GitHub📦Documentation |
aiofauna-llm | No description available on PyPI. |
aiofb | aiofbA thin asynchronous Python wrapper for Facebook graph API.This library requires Python 3.5+InstallationUsing pip$pipinstallaiofbBasic usageExampleimportasyncioimportaiofb# initialize Graph APIfb=aiofb.GraphAPI(access_token='YOUR_ACCESS_TOKEN')# Get an event looploop=asyncio.get_event_loop()# Get resultsdata=loop.run_until_complete(fb.get('/{some-endpoint}'))History0.1.0 (2018-04-06)Package created.0.1.1 (2018-05-17)Clean upFirst release on PyPI0.1.2 (2018-10-02)Return raw python data decoded from json response instead ofaiohttp.ClientResponseobject.0.1.3 (2019-01-11)Change default Messenger user profile fields to name,first_name,last_name and profile_pic
to reflect new Messenger API policy.0.1.3 (2019-02-10)Add method for taking Messenger thread control (messenger.take_thread_control(data, session=None)) |
aiofcm | aiofcmis a library designed specifically for sending messages such as push-notifications
to Android devices via the Firebase Cloud Messaging platform. aiofcm provides an efficient client
through asynchronous XMPP protocol for use with Python’sasyncioframework.aiofcm requires Python 3.5 or later.PerformanceIn my testing aiofcm allows you to send on average 1k messages per second on a single core.FeaturesInternal connection pool which adapts to the current loadSending notification and/or data messagesAbility to set TTL (time to live) for messagesAbility to set priority for messagesAbility to set collapse-key for messagesInstallationUse pip to install:$ pip install aiofcmBasic Usagefromuuidimportuuid4fromaiofcmimportFCM,Message,PRIORITY_HIGHasyncdefrun():fcm=FCM(123456789000,'<API_KEY>')message=Message(device_token='<DEVICE_TOKEN>',notification={# optional"title":"Hello from Firebase","body":"This is notification","sound":"default"},data={"score":"3x1"},# optionalmessage_id=str(uuid4()),# optionaltime_to_live=3,# optionalpriority=PRIORITY_HIGH,# optional)awaitfcm.send_message(message)loop=asyncio.get_event_loop()loop.run_until_complete(run())Licenseaiofcm is developed and distributed under the Apache 2.0 license. |
aiofcopy | aiofcopyPython3 package to copy binary files inside async loops |
aio-feedfinder2 | This is an asynchronous Python library for finding links feeds on a website.It is based on the synchronous (requestsbased)feedfinder2, written byDan Foreman-Mackey, which is based onfeedfinder- originally
written byMark Pilgrimand subsequently maintained byAaron Swartzuntil his untimely death.UsageFeedfinder2 offers a single public function:find_feeds. You would use it
as follows:import asyncio
from aio_feedfinder2 import find_feeds
loop = asyncio.get_event_loop()
task = asyncio.ensure_future(find_feeds("xkcd.com"))
feeds = loop.run_until_complete(task)Now,feedsis the list:['http://xkcd.com/atom.xml','http://xkcd.com/rss.xml']. There is some attempt made to rank feeds from
best candidate to worst but… well… you never know.Thisasynciovariant is ideally suited to find feeds on multiple domains/
sites in an asynchronous way:import asyncio
from aio_feedfinder2 import find_feeds
loop = asyncio.get_event_loop()
tasks = [find_feeds(url) for url in ["xkcd.com", "abstrusegoose.com"]]
feeds = loop.run_until_complete(asyncio.gather(*tasks))
>>> feeds
... [
... ['http://xkcd.com/atom.xml', 'http://xkcd.com/rss.xml'],
... ['http://abstrusegoose.com/feed.xml', 'http://abstrusegoose.com/atomfeed.xml']
... ]LicenseFeedfinder2 is licensed under the MIT license (see LICENSE). |
aioffsend | No description available on PyPI. |
aiofile | Real asynchronous file operations with asyncio support.StatusDevelopment - StableFeaturesSince version 2.0.0 usingcaio, which contains linuxlibaioand two
thread-based implementations (c-based and pure-python).AIOFile has no internal pointer. You should passoffsetandchunk_sizefor each operation or use helpers (Reader or Writer).
The simplest way is to useasync_openfor creating an object with
file-like interface.For Linux using implementation based onlibaio.For POSIX (MacOS X and optional Linux) using implementation
based onthreadpool.Otherwise using pure-python thread-based implementation.Implementation chooses automatically depending on system compatibility.LimitationsLinux native AIO implementation is not able to open special files.
Asynchronous operations against special fs like/proc//sys/are not
supported by the kernel. It's not anaiofileorcaioissue.
In these cases, you might switch to thread-based implementations
(seetroubleshootingsection).
However, when used on supported file systems, the linux implementation has a
smaller overhead and is preferred but it’s not a silver bullet.Code examplesAll code examples requires python 3.6+.High-level APIasync_openhelperHelper mimics python file-like objects, it returns file-like
objects with similar but async methods.Supported methods:async def read(length =-1)- reading chunk from file, when length is-1, will be reading file to the end.async def write(data)- writing chunk to filedef seek(offset)- setting file pointer positiondef tell()- returns current file pointer positionasync defreadline(size=-1,newline="\n")- read chunks until
newline or EOF. Since version 3.7.0__aiter__returnsLineReader.This method is suboptimal for small lines because it doesn’t reuse read buffer.
When you want to read file by lines please avoid usingasync_openuseLineReaderinstead.def __aiter__()->LineReader- iterator over lines.def iter_chunked(chunk_size: int = 32768)->Reader- iterator over
chunks..fileproperty contains AIOFile objectBasic example:importasynciofrompathlibimportPathfromtempfileimportgettempdirfromaiofileimportasync_opentmp_filename=Path(gettempdir())/"hello.txt"asyncdefmain():asyncwithasync_open(tmp_filename,'w+')asafp:awaitafp.write("Hello ")awaitafp.write("world")afp.seek(0)print(awaitafp.read())awaitafp.write("Hello from\nasync world")print(awaitafp.readline())print(awaitafp.readline())loop=asyncio.get_event_loop()loop.run_until_complete(main())Example without context manager:importasyncioimportatexitimportosfromtempfileimportmktempfromaiofileimportasync_openTMP_NAME=mktemp()atexit.register(os.unlink,TMP_NAME)asyncdefmain():afp=awaitasync_open(TMP_NAME,"w")awaitafp.write("Hello")awaitafp.close()asyncio.run(main())assertopen(TMP_NAME,"r").read()=="Hello"Concatenate example program (cat):importasyncioimportsysfromargparseimportArgumentParserfrompathlibimportPathfromaiofileimportasync_openparser=ArgumentParser(description="Read files line by line using asynchronous io API")parser.add_argument("file_name",nargs="+",type=Path)asyncdefmain(arguments):forsrcinarguments.file_name:asyncwithasync_open(src,"r")asafp:asyncforlineinafp:sys.stdout.write(line)asyncio.run(main(parser.parse_args()))Copy file example program (cp):importasynciofromargparseimportArgumentParserfrompathlibimportPathfromaiofileimportasync_openparser=ArgumentParser(description="Copying files using asynchronous io API")parser.add_argument("source",type=Path)parser.add_argument("dest",type=Path)parser.add_argument("--chunk-size",type=int,default=65535)asyncdefmain(arguments):asyncwithasync_open(arguments.source,"rb")assrc,\async_open(arguments.dest,"wb")asdest:asyncforchunkinsrc.iter_chunked(arguments.chunk_size):awaitdest.write(chunk)asyncio.run(main(parser.parse_args()))Example with opening already open file pointer:importasynciofromtypingimportIO,Anyfromaiofileimportasync_openasyncdefmain(fp:IO[Any]):asyncwithasync_open(fp)asafp:awaitafp.write("Hello from\nasync 
world")print(awaitafp.readline())withopen("test.txt","w+")asfp:asyncio.run(main(fp))Linux native aio doesn’t support reading and writing special files
(e.g. procfs/sysfs/unix pipes/etc.), so you can perform operations with
these files using compatible context objects.importasynciofromaiofileimportasync_openfromcaioimportthread_aio_asynciofromcontextlibimportAsyncExitStackasyncdefmain():asyncwithAsyncExitStack()asstack:# Custom context should be reusedctx=awaitstack.enter_async_context(thread_aio_asyncio.AsyncioContext())# Open special file with custom contextsrc=awaitstack.enter_async_context(async_open("/proc/cpuinfo","r",context=ctx))# Open regular file with default contextdest=awaitstack.enter_async_context(async_open("/tmp/cpuinfo","w"))# Copying file content line by lineasyncforlineinsrc:awaitdest.write(line)asyncio.run(main())Low-level APITheAIOFileclass is a low-level interface for asynchronous file operations, and the read and write methods accept
anoffset=0in bytes at which the operation will be performed.This allows you to do many independent IO operations on an once open file without moving the virtual carriage.For example, you may make 10 concurrent HTTP requests by specifying theRangeheader, and asynchronously write
one opened file, while the offsets must either be calculated manually, or use 10 instances ofWriterwith
specified initial offsets.In order to provide sequential reading and writing, there isWriter,ReaderandLineReader. Keep in mindasync_openis not the same as AIOFile, it provides a similar interface for file operations, it simulates methods
like read or write as it is implemented in the built-in open.importasynciofromaiofileimportAIOFileasyncdefmain():asyncwithAIOFile("hello.txt",'w+')asafp:payload="Hello world\n"awaitasyncio.gather(*[afp.write(payload,offset=i*len(payload))foriinrange(10)])awaitafp.fsync()assertawaitafp.read(len(payload)*10)==payload*10asyncio.run(main())The Low-level API in fact is just little bit sugaredcaioAPI.importasynciofromaiofileimportAIOFileasyncdefmain():asyncwithAIOFile("/tmp/hello.txt",'w+')asafp:awaitafp.write("Hello ")awaitafp.write("world",offset=7)awaitafp.fsync()print(awaitafp.read())loop=asyncio.get_event_loop()loop.run_until_complete(main())ReaderandWriterWhen you want to read or write file linearly following example
might be helpful.importasynciofromaiofileimportAIOFile,Reader,Writerasyncdefmain():asyncwithAIOFile("/tmp/hello.txt",'w+')asafp:writer=Writer(afp)reader=Reader(afp,chunk_size=8)awaitwriter("Hello")awaitwriter(" ")awaitwriter("World")awaitafp.fsync()asyncforchunkinreader:print(chunk)loop=asyncio.get_event_loop()loop.run_until_complete(main())LineReader- read file line by lineLineReader is a helper that is very effective when you want to read a file
linearly and line by line.It contains a buffer and will read the fragments of the file chunk by
chunk into the buffer, where it will try to find lines.The default chunk size is 4KB.importasynciofromaiofileimportAIOFile,LineReader,Writerasyncdefmain():asyncwithAIOFile("/tmp/hello.txt",'w+')asafp:writer=Writer(afp)awaitwriter("Hello")awaitwriter(" ")awaitwriter("World")awaitwriter("\n")awaitwriter("\n")awaitwriter("From async world")awaitafp.fsync()asyncforlineinLineReader(afp):print(line)loop=asyncio.get_event_loop()loop.run_until_complete(main())When you want to read file by lines please avoid to useasync_openuseLineReaderinstead.More examplesUseful examples withaiofileAsync CSV Dict ReaderimportasyncioimportiofromcsvimportDictReaderfromaiofileimportAIOFile,LineReaderclassAsyncDictReader:def__init__(self,afp,**kwargs):self.buffer=io.BytesIO()self.file_reader=LineReader(afp,line_sep=kwargs.pop('line_sep','\n'),chunk_size=kwargs.pop('chunk_size',4096),offset=kwargs.pop('offset',0),)self.reader=DictReader(io.TextIOWrapper(self.buffer,encoding=kwargs.pop('encoding','utf-8'),errors=kwargs.pop('errors','replace'),),**kwargs,)self.line_num=0def__aiter__(self):returnselfasyncdef__anext__(self):ifself.line_num==0:header=awaitself.file_reader.readline()self.buffer.write(header)line=awaitself.file_reader.readline()ifnotline:raiseStopAsyncIterationself.buffer.write(line)self.buffer.seek(0)try:result=next(self.reader)exceptStopIterationase:raiseStopAsyncIterationfromeself.buffer.seek(0)self.buffer.truncate(0)self.line_num=self.reader.line_numreturnresultasyncdefmain():asyncwithAIOFile('sample.csv','rb')asafp:asyncforiteminAsyncDictReader(afp):print(item)asyncio.run(main())TroubleshootingThe caiolinuximplementation works normal for modern linux kernel versions
and file systems, so you may have problems specific to your environment.
It's not a bug and might be resolved in several ways:Upgrade the kernelUse compatible file systemsUse the thread-based or pure-python implementation.caio since version 0.7.0 provides several ways to do this.1. At runtime, use the environment variableCAIO_IMPLwith
possible values:linux- use the native linux kernel aio mechanismthread- use the thread-based implementation written in Cpython- use the pure-python implementation2. The filedefault_implementationlocated near__init__.pyin the caio
installation path. It’s useful for distros package maintainers. This file
might contain comments (lines starting with the#symbol) and the first line
should be one oflinux,threadorpython.You might manually manage contexts:importasynciofromaiofileimportasync_openfromcaioimportlinux_aio_asyncio,thread_aio_asyncioasyncdefmain():linux_ctx=linux_aio_asyncio.AsyncioContext()threads_ctx=thread_aio_asyncio.AsyncioContext()asyncwithasync_open("/tmp/test.txt","w",context=linux_ctx)asafp:awaitafp.write("Hello")asyncwithasync_open("/tmp/test.txt","r",context=threads_ctx)asafp:print(awaitafp.read())asyncio.run(main()) |
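AIOFile's per-call offsets behave like POSIX positional IO: every read and write carries its own offset, so no shared file pointer moves. The semantics can be sketched synchronously with os.pread/os.pwrite (POSIX-only; a model of the behaviour, not aiofile's implementation):

```python
import os

payload = b"Hello world\n"
fd = os.open("offsets_demo.txt", os.O_RDWR | os.O_CREAT | os.O_TRUNC, 0o600)
try:
    # Ten independent writes at explicit offsets; order doesn't matter,
    # because no call depends on a shared file-pointer position.
    for i in range(10):
        os.pwrite(fd, payload, i * len(payload))
    data = os.pread(fd, len(payload) * 10, 0)
finally:
    os.close(fd)
    os.remove("offsets_demo.txt")

print(data == payload * 10)  # True
```

This independence is what lets AIOFile issue many concurrent writes to one open file, as in the gather example above.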
aiofilecache | A file-based backend foraiocache.Installationpip install aiofilecacheUsagefromaiocacheimportcachedfromaiocache.serializersimportPickleSerializerfromaiofilecacheimportFileCache@cached(cache=FileCache,serializer=PickleSerializer(),basedir='/tmp/...')asyncdefcached_func(...):# ... |
aiofiledol | aiofiledolaiofile (async filesys operations) with a simple (dict-like or list-like) interfaceTo install:pip install aiofiledolGet the bytes contents of the file k.>>>importos>>>fromaiofiledolimportAioFileBytesReader>>>filepath=__file__>>>dirpath=os.path.dirname(__file__)# path of the directory where I (the module file) am>>>s=AioFileBytesReader(dirpath,max_levels=0)>>>>>>####### Get the first 9 characters (as bytes) of this module #####################>>>t=awaits.aget(filepath)>>>t[:14]b'import asyncio'>>>>>>####### Test key validation #####################>>>awaits.aget('not_a_valid_key')# this key is not valid since not under the dirpath folderTraceback(mostrecentcalllast):...filesys.KeyValidationError:'Key not valid (usually because does not exist or access not permitted): not_a_valid_key'>>>>>>####### Test further exceptions (that should be wrapped in KeyError) #####################>>># this key is valid, since under dirpath, but the file itself doesn't exist (hopefully for this test)>>>non_existing_file=os.path.join(dirpath,'non_existing_file')>>>try:...awaits.aget(non_existing_file)...exceptKeyError:...print("KeyError (not FileNotFoundError) was raised.")KeyError(notFileNotFoundError)wasraised.Set the contents of filekto be some bytes.>>>fromaiofiledolimportAioFileBytesPersister>>>fromdol.filesysimportmk_tmp_dol_dir>>>importos>>>>>>rootdir=mk_tmp_dol_dir('test')>>>rpath=lambda*p:os.path.join(rootdir,*p)>>>s=AioFileBytesPersister(rootdir)>>>k=rpath('foo')>>>ifkins:...dels[k]# delete key if present...>>>n=len(s)# number of items in store>>>awaits.asetitem(k,b'bar')>>>assertlen(s)==n+1# there's one more item in store>>>assertkins>>>assert(awaits[k])==b'bar' |
aiofile-linux | No description available on PyPI. |
aiofilelock | No description available on PyPI. |
aiofiles | aiofiles: file support for asyncioaiofilesis an Apache2 licensed library, written in Python, for handling local
disk files in asyncio applications.Ordinary local file IO is blocking, and cannot easily and portably be made
asynchronous. This means doing file IO may interfere with asyncio applications,
which shouldn't block the executing thread. aiofiles helps with this by
introducing asynchronous versions of files that support delegating operations to
a separate thread pool.asyncwithaiofiles.open('filename',mode='r')asf:contents=awaitf.read()print(contents)'My file contents'Asynchronous iteration is also supported.asyncwithaiofiles.open('filename')asf:asyncforlineinf:...Asynchronous interface to tempfile module.asyncwithaiofiles.tempfile.TemporaryFile('wb')asf:awaitf.write(b'Hello, World!')Featuresa file API very similar to Python's standard, blocking APIsupport for buffered and unbuffered binary files, and buffered text filessupport forasync/await(PEP 492) constructsasync interface to tempfile moduleInstallationTo install aiofiles, simply:$pipinstallaiofilesUsageFiles are opened using theaiofiles.open()coroutine, which in addition to
mirroring the builtinopenaccepts optionalloopandexecutorarguments. Ifloopis absent, the default loop will be used, as per the
set asyncio policy. Ifexecutoris not specified, the default event loop
executor will be used.In case of success, an asynchronous file object is returned with an
API identical to an ordinary file, except the following methods are coroutines
and delegate to an executor:closeflushisattyreadreadallread1readintoreadlinereadlinesseekseekabletelltruncatewritablewritewritelinesIn case of failure, one of the usual exceptions will be raised.aiofiles.stdin,aiofiles.stdout,aiofiles.stderr,aiofiles.stdin_bytes,aiofiles.stdout_bytes, andaiofiles.stderr_bytesprovide async access tosys.stdin,sys.stdout,sys.stderr, and their corresponding.bufferproperties.Theaiofiles.osmodule contains executor-enabled coroutine versions of
several usefulosfunctions that deal with files:statstatvfssendfilerenamerenamesreplaceremoveunlinkmkdirmakedirsrmdirremovedirslinksymlinkreadlinklistdirscandiraccesspath.existspath.isfilepath.isdirpath.islinkpath.ismountpath.getsizepath.getatimepath.getctimepath.samefilepath.sameopenfileTempfileaiofiles.tempfileimplements the following interfaces:TemporaryFileNamedTemporaryFileSpooledTemporaryFileTemporaryDirectoryResults return wrapped with a context manager allowing use with async with and async for.asyncwithaiofiles.tempfile.NamedTemporaryFile('wb+')asf:awaitf.write(b'Line1\nLine2')awaitf.seek(0)asyncforlineinf:print(line)asyncwithaiofiles.tempfile.TemporaryDirectory()asd:filename=os.path.join(d,"file.ext")Writing tests for aiofilesReal file IO can be mocked by patchingaiofiles.threadpool.sync_openas desired. The return type also needs to be registered with theaiofiles.threadpool.wrapdispatcher:aiofiles.threadpool.wrap.register(mock.MagicMock)(lambda*args,**kwargs:threadpool.AsyncBufferedIOBase(*args,**kwargs))asyncdeftest_stuff():data='data'mock_file=mock.MagicMock()withmock.patch('aiofiles.threadpool.sync_open',return_value=mock_file)asmock_open:asyncwithaiofiles.open('filename','w')asf:awaitf.write(data)mock_file.write.assert_called_once_with(data)History23.2.1 (2023-08-09)Importos.statvfsconditionally to fix importing on non-UNIX systems.#171#17223.2.0 (2023-08-09)aiofiles is now tested on Python 3.12 too.#166#168On Python 3.12,aiofiles.tempfile.NamedTemporaryFilenow accepts adelete_on_closeargument, just like the stdlib version.On Python 3.12,aiofiles.tempfile.NamedTemporaryFileno longer exposes adeleteattribute, just like the stdlib version.Addedaiofiles.os.statvfsandaiofiles.os.path.ismount.#162UsePDMinstead of Poetry.#16923.1.0 (2023-02-09)Addedaiofiles.os.access.#146Removedaiofiles.tempfile.temptypes.AsyncSpooledTemporaryFile.softspace.#151Addedaiofiles.stdin,aiofiles.stdin_bytes, and other stdio streams.#154Transition 
toasyncio.get_running_loop(vsasyncio.get_event_loop) internally.22.1.0 (2022-09-04)Addedaiofiles.os.path.islink.#126Addedaiofiles.os.readlink.#125Addedaiofiles.os.symlink.#124Addedaiofiles.os.unlink.#123Addedaiofiles.os.link.#121Addedaiofiles.os.renames.#120Addedaiofiles.os.{listdir, scandir}.#143Switched to CalVer.Dropped Python 3.6 support. If you require it, use version 0.8.0.aiofiles is now tested on Python 3.11.0.8.0 (2021-11-27)aiofiles is now tested on Python 3.10.Addedaiofiles.os.replace.#107Addedaiofiles.os.{makedirs, removedirs}.Addedaiofiles.os.path.{exists, isfile, isdir, getsize, getatime, getctime, samefile, sameopenfile}.#63Addedsuffix,prefix,dirargs toaiofiles.tempfile.TemporaryDirectory.#1160.7.0 (2021-05-17)Added theaiofiles.tempfilemodule for async temporary files.#56Switched to Poetry and GitHub actions.Dropped 3.5 support.0.6.0 (2020-10-27)aiofilesis now tested on ppc64le.Addednameandmodeproperties to async file objects.#82Fixed a DeprecationWarning internally.#75Python 3.9 support and tests.0.5.0 (2020-04-12)Python 3.8 support. Code base modernization (usingasync/awaitinstead ofasyncio.coroutine/yield from).Addedaiofiles.os.remove,aiofiles.os.rename,aiofiles.os.mkdir,aiofiles.os.rmdir.#620.4.0 (2018-08-11)Python 3.7 support.Removed Python 3.3/3.4 support. If you use these versions, stick to aiofiles 0.3.x.0.3.2 (2017-09-23)The LICENSE is now included in the sdist.#310.3.1 (2017-03-10)Introduced a changelog.aiofiles.os.sendfilewill now work if the standardosmodule contains asendfilefunction.ContributingContributions are very welcome. Tests can be run withtox, please ensure
the coverage at least stays the same before you submit a pull request. |
aiofiles38 | aiofilesis an Apache2 licensed library, written in Python, for handling local
disk files in asyncio applications.Ordinary local file IO is blocking, and cannot easily and portably made
asynchronous. This means doing file IO may interfere with asyncio applications,
which shouldn’t block the executing thread. aiofiles helps with this by
introducing asynchronous versions of files that support delegating operations to
a separate thread pool.asyncwithaiofiles.open('filename',mode='r')asf:contents=awaitf.read()print(contents)'My file contents'Asynchronous iteration is also supported.asyncwithaiofiles.open('filename')asf:asyncforlineinf:...Featuresa file API very similar to Python’s standard, blocking APIsupport for buffered and unbuffered binary files, and buffered text filessupport forasync/await(PEP 492) constructsInstallationTo install aiofiles, simply:$pipinstallaiofilesUsageFiles are opened using theaiofiles.open()coroutine, which in addition to
mirroring the builtinopenaccepts optionalloopandexecutorarguments. Ifloopis absent, the default loop will be used, as per the
set asyncio policy. Ifexecutoris not specified, the default event loop
executor will be used.In case of success, an asynchronous file object is returned with an
API identical to an ordinary file, except the following methods are coroutines
and delegate to an executor:closeflushisattyreadreadallread1readintoreadlinereadlinesseekseekabletelltruncatewritablewritewritelinesIn case of failure, one of the usual exceptions will be raised.Theaiofiles.osmodule contains executor-enabled coroutine versions of
several usefulosfunctions that deal with files:statsendfilerenameremovemkdirrmdirWriting tests for aiofilesReal file IO can be mocked by patchingaiofiles.threadpool.sync_openas desired. The return type also needs to be registered with theaiofiles.threadpool.wrapdispatcher:aiofiles.threadpool.wrap.register(mock.MagicMock)(lambda*args,**kwargs:threadpool.AsyncBufferedIOBase(*args,**kwargs))asyncdeftest_stuff():data='data'mock_file=mock.MagicMock()withmock.patch('aiofiles.threadpool.sync_open',return_value=mock_file)asmock_open:asyncwithaiofiles.open('filename','w')asf:awaitf.write(data)mock_file.write.assert_called_once_with(data)History0.4.0 (2018-08-11)Python 3.7 support.Removed Python 3.3/3.4 support. If you use these versions, stick to aiofiles 0.3.x.0.3.2 (2017-09-23)The LICENSE is now included in the sdist.#310.3.1 (2017-03-10)Introduced a changelog.aiofiles.os.sendfilewill now work if the standardosmodule contains asendfilefunction.ContributingContributions are very welcome. Tests can be run withtox, please ensure
the coverage at least stays the same before you submit a pull request. |
aiofilesearch | # 🤙 Aio File Search 😂 #hackyhollidays## AioHTTP + Preact + Parcel + WebsocketsSearch your local files through a browser interface.This is just a demo project on how to integrate WS streaming responses into a Preact web app.## Install``` pip install aiofilesearch ```## Run``` fsearch ```* You need to have The Silver Searcher (ag) installed, and Sublime Text if you want to open results with it.## Interesting Parts- The front side uses Preact + the Parcel bundler. So easy to start!- The backend part uses asyncio subprocess to launch the ag command and start searching- Results are streamed from the ag command to the websocket frontend.## Todo- Add configuration params |
aiofiles-ext | aiofiles is an Apache2 licensed library, written in Python, for handling local
disk files in asyncio applications.Ordinary local file IO is blocking, and cannot easily and portably made
asynchronous. This means doing file IO may interfere with asyncio applications,
which shouldn’t block the executing thread. aiofiles helps with this by
introducing asynchronous versions of files that support delegating operations to
a separate thread pool.asyncwithaiofiles.open('filename',mode='r')asf:contents=awaitf.read()print(contents)'My file contents'Or, using the old syntax:f=yield fromaiofiles.open('filename',mode='r')try:contents=yield fromf.read()finally:yield fromf.close()print(contents)'My file contents'Featuresa file API very similar to Python’s standard, blocking APIsupport for buffered and unbuffered binary files, and buffered text filessupport for async/await (PEP 492) constructsInstallationTo install aiofiles, simply:$pipinstallaiofilesUsageFiles are opened using theaiofiles.open()coroutine, which in addition to
mirroring the builtinopenaccepts optionalloopandexecutorarguments. Ifloopis absent, the default loop will be used, as per the
set asyncio policy. Ifexecutoris not specified, the default event loop
executor will be used.In case of success, an asynchronous file object is returned with an
API identical to an ordinary file, except the following methods are coroutines
and delegate to an executor:closeflushisattyreadreadallread1readintoreadlinereadlinesseekseekabletelltruncatewritablewritewritelinesIn case of failure, one of the usual exceptions will be raised.Theaiofiles.osmodule contains executor-enabled coroutine versions of
several usefulosfunctions that deal with files:statsendfileLimitations and Differences from the Builtin File APIWhen using Python 3.5 or greater, aiofiles file objects can be used as
asynchronous context managers. Asynchronous iteration is also supported.asyncwithaiofiles.open('filename')asf:asyncforlineinf:...When using Python 3.3 or 3.4, be aware that the closing of a file may block,
and yielding from a coroutine while exiting from a context manager isn’t
possible, so aiofiles file objects can’t be used as (ordinary, non-async)
context managers. Use thetry/finallyconstruct from the introductory
section to ensure files are closed.When using Python 3.3 or 3.4, iteration is also unsupported. To iterate over a
file, callreadlinerepeatedly until an empty result is returned. Keep in
mindreadlinedoesn’t strip newline characters.f=yield fromaiofiles.open('filename')try:whileTrue:line=yield fromf.readline()ifnotline:breakline=line.strip()...finally:yield fromf.close()ContributingContributions are very welcome. Tests can be run withtox, please ensure
the coverage at least stays the same before you submit a pull request. |
aiofilters | No description available on PyPI. |
aiofirebase | UNKNOWN |
aiofix | Place Holder! |
aioflake | No description available on PyPI. |
aioflask | aioflaskFlask 2.x running on asyncio!Is there a purpose for this, now that Flask 2.0 is out with support for async
views? Yes! Flask's own support for async handlers is very limited, as the
application still runs inside a WSGI web server, which severely limits
scalability. With aioflask you get a true ASGI application, running in a 100%
async environment.WARNING: This is an experiment at this point. Not at all production ready!Quick startTo use async view functions and other handlers, use theaioflaskpackage
instead offlask.Theaioflask.Flaskclass is a subclass offlask.Flaskthat changes a few
minor things to help the application run properly under the asyncio loop. In
particular, it overrides the following aspects of the application instance:Theroute,before_request,before_first_request,after_request,teardown_request,teardown_appcontext,errorhandlerandcli.commanddecorators accept coroutines as well as regular functions. The handlers all
run inside an asyncio loop, so when using regular functions, care must be
taken to not block.The WSGI callable entry point is replaced with an ASGI equivalent.Therun()method uses uvicorn as web server.There are also changes outside of theFlaskclass:Theflask aioruncommand starts an ASGI application using the uvicorn web
server.Therender_template()andrender_template_string()functions are
asynchronous and must be awaited.The context managers for the Flask application and request contexts are
async.The test client and test CLI runner use coroutines.ExampleimportasynciofromaioflaskimportFlask,render_templateapp=Flask(__name__)@app.route('/')asyncdefindex():awaitasyncio.sleep(1)returnawaitrender_template('index.html') |
aioflo | 💧 aioflo: a Python3, asyncio-friendly library for Flo Smart Water Detectors

aioflo is a Python 3, asyncio-friendly library for interacting with Flo by Moen Smart Water Detectors.

Python Versions: aioflo is currently supported on Python 3.6, 3.7, 3.8, 3.9 and 3.10.

Installation:

```
pip install aioflo
```

Usage:

```python
import asyncio
from datetime import datetime

from aioflo import async_get_api


async def main() -> None:
    """Run!"""
    api = await async_get_api("<EMAIL>", "<PASSWORD>")

    # Get user account information:
    user_info = await api.user.get_info()
    a_location_id = user_info["locations"][0]["id"]

    # Get location (i.e., device) information:
    location_info = await api.location.get_info(a_location_id)

    # Get device information
    first_device_id = location_info["devices"][0]["id"]
    device_info = await api.device.get_info(first_device_id)

    # Run a health test
    health_test_response = await api.device.run_health_test(first_device_id)

    # Close the shutoff valve
    close_valve_response = await api.device.close_valve(first_device_id)

    # Open the shutoff valve
    open_valve_response = await api.device.open_valve(first_device_id)

    # Get consumption info between a start and end datetime:
    consumption_info = await api.water.get_consumption_info(
        a_location_id,
        datetime(2020, 1, 16, 0, 0),
        datetime(2020, 1, 16, 23, 59, 59, 999000),
    )

    # Get various other metrics related to water usage:
    metrics = await api.water.get_metrics(
        "<DEVICE_MAC_ADDRESS>",
        datetime(2020, 1, 16, 0, 0),
        datetime(2020, 1, 16, 23, 59, 59, 999000),
    )

    # Set the device in "Away" mode:
    await set_mode_away(a_location_id)

    # Set the device in "Home" mode:
    await set_mode_home(a_location_id)

    # Set the device in "Sleep" mode for 120 minutes, then return to "Away" mode:
    await set_mode_sleep(a_location_id, 120, "away")


asyncio.run(main())
```

By default, the library creates a new connection to Flo with each coroutine. If you are calling a large number of coroutines (or merely want to squeeze out every second of runtime savings possible), an aiohttp ClientSession can be used for connection pooling:

```python
import asyncio
from datetime import datetime

from aiohttp import ClientSession

from aioflo import async_get_api


async def main() -> None:
    """Create the aiohttp session and run the example."""
    async with ClientSession() as session:
        api = await async_get_api("<EMAIL>", "<PASSWORD>", session=session)

        # Tell Flo to get updated data from the device
        ping_response = await api.presence.ping()

        # Get user account information:
        user_info = await api.user.get_info()
        a_location_id = user_info["locations"][0]["id"]

        # Get location (i.e., device) information:
        location_info = await api.location.get_info(a_location_id)

        # Get device information
        first_device_id = location_info["devices"][0]["id"]
        device_info = await api.device.get_info(first_device_id)

        # Run a health test
        health_test_response = await api.device.run_health_test(first_device_id)

        # Close the shutoff valve
        close_valve_response = await api.device.close_valve(first_device_id)

        # Open the shutoff valve
        open_valve_response = await api.device.open_valve(first_device_id)

        # Get consumption info between a start and end datetime:
        consumption_info = await api.water.get_consumption_info(
            a_location_id,
            datetime(2020, 1, 16, 0, 0),
            datetime(2020, 1, 16, 23, 59, 59, 999000),
        )

        # Get various other metrics related to water usage:
        metrics = await api.water.get_metrics(
            "<DEVICE_MAC_ADDRESS>",
            datetime(2020, 1, 16, 0, 0),
            datetime(2020, 1, 16, 23, 59, 59, 999000),
        )

        # Set the device in "Away" mode:
        await set_mode_away(a_location_id)

        # Set the device in "Home" mode:
        await set_mode_home(a_location_id)

        # Set the device in "Sleep" mode for 120 minutes, then return to "Away" mode:
        await set_mode_sleep(a_location_id, 120, "away")


asyncio.run(main())
```

Contributing:
1. Check for open features/bugs or initiate a discussion on one.
2. Fork the repository.
3. (optional, but highly recommended) Create a virtual environment: python3 -m venv .venv
4. (optional, but highly recommended) Enter the virtual environment: source ./.venv/bin/activate
5. Install the dev environment: script/setup
6. Code your new feature or bug fix.
7. Write tests that cover your new functionality.
8. Run tests and ensure 100% code coverage: script/test
9. Update README.md with any new documentation.
10. Add yourself to AUTHORS.md.
11. Submit a pull request! |
aioflock | aioflock: file lock support for asyncio (based on fcntl.flock)

Example:

```python
from aioflock import LockFilename

lock = LockFilename('/tmp/test_lock')
yield from lock.acquire()
# ..inside lock..
lock.release()
```

With statement:

```python
from aioflock import LockFilename

with (yield from LockFilename('/tmp/test_lock')):
    ...  # inside lock
```

Can use timeout:

```python
from aioflock import LockFilename, LockFilenameTimeout

try:
    with (yield from LockFilename('/tmp/test_lock')):
        with (yield from LockFilename('/tmp/test_lock', timeout=1)):
            ...  # never here
except LockFilenameTimeout:
    ...  # here
``` |
aioflow | No description available on PyPI. |
aioflowdock | aio-flowdock: Flowdock async client/API for Python3, a Python version of node-flowdock |
aioflows | aioflows: Asynchronous actors framework. Documentation: https://aioflows.github.io Source: https://github.com/apatrushev/aioflows

This project aims to create a support library for constructing asynchronous applications in Python, using the concept of structured data flows and actors. The current phase is purely a proof-of-concept and serves as a basis for discussion with colleagues and the community. It is not intended for use in any production or personal projects.

Minimal working example:

```python
import asyncio
from aioflows.simple import Printer, Ticker


async def start():
    await (Ticker() >> Printer()).start()


asyncio.run(start())
```

Udp echo example:

```python
import asyncio
from aioflows.network import Udp
from aioflows.simple import Printer, Tee


async def start():
    udp = Udp(local_addr=('127.0.0.1', 5353), reuse_port=True)
    await (udp >> Tee(Printer()) >> udp).start()


asyncio.run(start())
```

You can test it with socat:

```
socat - UDP:localhost:5353
```

Other examples: More examples can be found in src/examples.

Installation:
- local: pip install .
- editable: pip install -e .
- development: pip install -e .[dev]
- examples dependencies: pip install -e .[examples]
- all together: pip install -e .[dev,examples]
- from github: pip install git+https://github.com/apatrushev/aioflows.git

Usual development steps. Run checks and tests:

```
inv isort flake test
```

Run examples (all ERRORCODE's should be 0/OK or timeout at the moment):

```
inv examples | grep ERRORCODE
```

Similar projects: I found existing solutions that are almost equal to this concept: https://github.com/ReactiveX/RxPY https://github.com/vxgmichel/aiostream |
aiofluent | WARNING: This is a fork of the https://github.com/fluent/fluent-logger-python project to work with asyncio. Many web/mobile applications generate huge amounts of event logs (e.g. login, logout, purchase, follow, etc.). Analyzing these event logs can be really valuable for improving the service. However, the challenge is collecting these logs easily and reliably. Fluentd solves that problem with easy installation, a small footprint, plugins, reliable buffering, log forwarding, etc. aiofluent is a Python library for recording events from a Python application.

Requirements: Python 3.5 or greater; msgpack-python.

Installation: This library is distributed as the 'aiofluent' python package. Please
execute the following command to install it:

```
$ pip install aiofluent
```

Configuration: Fluentd daemon must be launched with a tcp source configuration:

```
<source>
  type forward
  port 24224
</source>
```

To quickly test your setup, add a matcher that logs to the stdout:

```
<match app.**>
  type stdout
</match>
```

Usage

FluentSender Interface: sender.FluentSender is a structured event logger for Fluentd. By default, the logger assumes the fluentd daemon is launched locally. You can also specify a remote logger by passing the options.

```python
from aiofluent import sender

# for local fluent
logger = sender.FluentSender('app')

# for remote fluent
logger = sender.FluentSender('app', host='host', port=24224)
```

For sending an event, call the emit method with your event. The following example will send the event to fluentd, with tag 'app.follow' and the attributes 'from' and 'to'.

```python
# Use current time
logger.emit('follow', {'from': 'userA', 'to': 'userB'})

# Specify optional time
cur_time = int(time.time())
logger.emit_with_time('follow', cur_time, {'from': 'userA', 'to': 'userB'})
```

You can detect an error via the return value of emit. If an error happens in emit, emit returns False; get the error object using the last_error method.

```python
if not logger.emit('follow', {'from': 'userA', 'to': 'userB'}):
    print(logger.last_error)
    logger.clear_last_error()  # clear stored error after handled errors
```

If you want to shut down the client, call the close() method.

```python
logger.close()
```

Event-Based Interface: This API is a wrapper for sender.FluentSender. First, you need to call sender.setup() to create a global sender.FluentSender logger
instance. This call needs to be called only once, at the beginning of
the application for example. Initialization code of the Event-Based API is below:

```python
from aiofluent import sender

# for local fluent
sender.setup('app')

# for remote fluent
sender.setup('app', host='host', port=24224)
```

Then, please create the events like this. This will send the event to fluentd, with tag 'app.follow' and the attributes 'from' and 'to'.

```python
from aiofluent import event

# send event to fluentd, with 'app.follow' tag
event.Event('follow', {'from': 'userA', 'to': 'userB'})
```

event.Event has one limitation: it can't return a success/failure result. Other methods for the Event-Based Interface:

```python
sender.get_global_sender  # get instance of global sender
sender.close  # Call FluentSender#close
```

Handler for buffer overflow: You can inject your own custom proc to handle buffer overflow in the event of connection failure. This will mitigate the loss of data instead of simply throwing data away.

```python
import msgpack
from io import BytesIO


def handler(pendings):
    unpacker = msgpack.Unpacker(BytesIO(pendings))
    for unpacked in unpacker:
        print(unpacked)


logger = sender.FluentSender(
    'app', host='host', port=24224, buffer_overflow_handler=handler)
```

You should handle any exception in the handler. aiofluent ignores exceptions from buffer_overflow_handler. This handler is also called when pending events exist during close().

Python logging.Handler interface: This client-library also has a FluentHandler class for the Python logging module.

```python
import logging
from aiofluent import handler

custom_format = {
    'host': '%(hostname)s',
    'where': '%(module)s.%(funcName)s',
    'type': '%(levelname)s',
    'stack_trace': '%(exc_text)s',
}

logging.basicConfig(level=logging.INFO)
l = logging.getLogger('fluent.test')
h = handler.FluentHandler('app.follow', host='host', port=24224)
formatter = handler.FluentRecordFormatter(custom_format)
h.setFormatter(formatter)
l.addHandler(h)
l.info({'from': 'userA', 'to': 'userB'})
l.info('{"from": "userC", "to": "userD"}')
l.info("This log entry will be logged with the additional key: 'message'.")
```

You can also customize the formatter via logging.config.dictConfig:

```python
import logging.config
import yaml

with open('logging.yaml') as fd:
    conf = yaml.load(fd)

logging.config.dictConfig(conf['logging'])
```

A sample configuration logging.yaml would be:

```yaml
logging:
    version: 1
    formatters:
        brief:
            format: '%(message)s'
        default:
            format: '%(asctime)s %(levelname)-8s %(name)-15s %(message)s'
            datefmt: '%Y-%m-%d %H:%M:%S'
        fluent_fmt:
            '()': fluent.handler.FluentRecordFormatter
            format:
                level: '%(levelname)s'
                hostname: '%(hostname)s'
                where: '%(module)s.%(funcName)s'
    handlers:
        console:
            class: logging.StreamHandler
            level: DEBUG
            formatter: default
            stream: ext://sys.stdout
        fluent:
            class: fluent.handler.FluentHandler
            host: localhost
            port: 24224
            tag: test.logging
            formatter: fluent_fmt
            level: DEBUG
        none:
            class: logging.NullHandler
    loggers:
        amqp:
            handlers: [none]
            propagate: False
        conf:
            handlers: [none]
            propagate: False
        '':  # root logger
            handlers: [console, fluent]
            level: DEBUG
            propagate: False
```

License: Apache License, Version 2.0

Changelog:

1.2.9 (2020-10-22): Only log errors every 30 seconds

1.2.8 (2020-05-15): Handle TypeError formatting log data

1.2.7 (2020-03-09): Fix repo location

1.2.6 (2020-01-06)

Improve error logging
[vangheem]1.2.5 (2019-12-19)Handle event loop closed error
[vangheem]1.2.4 (2019-12-19)Increase max queue size1.2.3 (2019-04-01)Fix release1.2.2 (2019-04-01)nanosecond_precision by default
[davidonna]1.2.1 (2018-10-31)Add support for nanosecond precision timestamps
[davidonna]1.2.0 (2018-06-14)Maintain one AsyncIO queue for all logs
[vangheem]1.1.4 (2018-05-29)Handle RuntimeError on canceling tasks/cleanup
[vangheem]1.1.3 (2018-02-15)Lock calling the close method of sender
[vangheem]Increase default timeout
[vangheem]1.1.2 (2018-02-07)lock the whole method
[vangheem]1.1.1 (2018-02-07)Use lock on getting connection object
[vangheem]1.1.0 (2018-01-25)Move to using asyncio connection infrastructure instead of sockets
[vangheem]1.0.8 (2018-01-04)Always close out buffer data
[vangheem]1.0.7 (2018-01-04)Handle errors processing log queue
[vangheem]1.0.6 (2017-11-14)Prevent log queue from getting too large
[vangheem]1.0.5 (2017-10-17)Fix release to include CHANGELOG.rst file
[vangheem]1.0.4 (2017-10-10)Fix pushing initial record1.0.3 (2017-10-10)Handle Runtime error when logging done before event loop started
[vangheem]1.0.2 (2017-10-09)Fix to make normal logging call async
[vangheem]1.0.1 (2017-07-03)initial release |
aiofluent-python | aiofluent: An asynchronous fluentd client library. Inspired by fluent-logger-python.

Requirements: Python 3.5 or greater; msgpack; async-timeout.

Installation:

```
pip install aiofluent-python
```

Example:

```python
import asyncio
from aiofluent import FluentSender

sender = FluentSender()


async def go():
    await sender.emit('tag', {'name': 'aiofluent'})
    await sender.close()


asyncio.run(go())
``` |
aio-fluid | Tools for backend python services.

Installation: This is a simple python package you can install via pip:

```
pip install aio-fluid
```

Modules:
- scheduler: A simple asynchronous task queue with a scheduler
- kernel: Async utility for executing commands in sub-processes
- AWS: packages for AWS interaction are installed via aiobotocore; s3fs (which depends on aiobotocore, and therefore versions must be compatible); boto3 is installed as an extra dependency of aiobotocore so versioning is compatible |
aioflureedb | An asynchronous client library for communicating with a FlureeDB server, making signed transactions and queries. |
aiofm | Async file manager for Python.

Description: A longer description of your project goes here…

Note: This project has been set up using PyScaffold 3.2.1. For details and usage information on PyScaffold see https://pyscaffold.org/. |
aiofoam | aiofoam provides an asynchronous (asyncio-based) API for running OpenFOAM cases from Python. Install with pip: `pip install aiofoam`. For more information, check out the documentation. |
aiofortnite | No description available on PyPI. |
aio-framework | A Python website bot development framework (WIP)

Introduction: This project was created to aid the development of website bots and API wrappers. aio-framework handles task management and execution, session management, and captcha queue management (with threads!). Currently, captcha queue management supports 2Captcha. aio-framework is meant to decrease development time by providing common bot and API wrapper functionality.

Basic Usage: This module is available via pip:

```
$ pip install aio-framework
```

Basic ApiWrapper and Bot implementations are shown below. Bot implementations must implement the execute_task method.

ApiWrapper:

```python
# exampleapiwrapper.py
from aio import ApiWrapper


class ExampleApiWrapper(ApiWrapper):
    BASE_URL = 'https://example.com'

    def get_product_data(self, product_url):
        response = self.get(product_url)
        return response.json()['data']  # Or something

    def add_product_to_cart(self, product_data, captcha_token):
        payload = {'product_data': product_data, 'captcha': captcha_token}
        endpoint = '/add-to-cart'
        response = self.post(endpoint, data=payload)
        return response.json()['success']  # Or something
```

Bot:

```python
# examplebot.py
from aio import Bot
from aio.captcha import CaptchaManager
from exampleapiwrapper import ExampleApiWrapper


class ExampleBot(Bot):
    def execute_task(self, task):
        example_api_wrapper = ExampleApiWrapper()
        twocaptcha_api_token = '2CAPTCHA_API_TOKEN_HERE'
        site_key = 'SITE_KEY_HERE'
        page_url = 'PAGE_URL_HERE'
        captcha_manager = CaptchaManager(twocaptcha_api_token, site_key, page_url)
        captcha_manager.start_captcha_queue(num_threads=5)

        task.status = 'STARTED'
        product_url = task.data['product_url']

        task.logger.info('Getting product data')
        product_data = example_api_wrapper.get_product_data(product_url)
        task.logger.info('Got product data!')

        task.logger.info('Waiting for captcha token')
        captcha_token = captcha_manager.wait_for_captcha_token()
        task.logger.info('Got captcha token!')

        task.logger.info('Adding product to cart')
        added = example_api_wrapper.add_product_to_cart(product_data, captcha_token)
        task.logger.info('Added product to cart!')
        task.status = 'FINISHED'
```

Executing:

```python
# main.py
from aio import Task
from examplebot import ExampleBot

example_bot = ExampleBot()
task_data = {'product_url': 'https://example.com/product'}
task = Task(task_data)
example_bot.add_task(task)
example_bot.start_all_tasks()
``` |
aiofreepybox | aiofreepyboxEasily manage your freebox in Python using the Freebox OS API.
Check your calls, manage your contacts, configure your dhcp, disable your wifi, monitor your LAN activity and many others, on LAN or remotely. aiofreepybox is a python library implementing the freebox OS API. It handles the authentication process and provides raw access to the freebox API in an asynchronous manner. This project is based on fstercq/freepybox, which provides the same features as aiofreepybox in a synchronous manner.

Install: Use the PIP package manager

```
$ pip install aiofreepybox
```

Or manually download and install the last version from github

```
$ git clone https://github.com/stilllman/aiofreepybox.git
$ python setup.py install
```

Get started:

```python
# Import the aiofreepybox package.
from aiofreepybox import Freepybox


async def reboot():
    # Instantiate the Freepybox class using default options.
    fbx = Freepybox()

    # Connect to the freebox with default options.
    # Be ready to authorize the application on the Freebox.
    await fbx.open('192.168.0.254')

    # Do something useful, rebooting your freebox for example.
    await fbx.system.reboot()

    # Properly close the session.
    await fbx.close()
```

Have a look at the example.py for a more complete overview.

Notes on HTTPS: When you access a Freebox with its default-assigned domain (ending in fbxos.fr), the library verifies its certificate by automatically trusting the Freebox certificate authority. If you want to avoid this, you can setup a custom domain name which will be associated with a Let's Encrypt certificate.

Resources: Freebox OS API documentation: http://dev.freebox.fr/sdk/os/ |
aiofreqlimit | About: Frequency limit context manager for asyncio.

Installation: aiofreqlimit requires Python 3.8 or greater and is available on PyPI. Use pip to install it:

```
pip install aiofreqlimit
```

Using aiofreqlimit: Pass a value of any hashable type to acquire, or do not specify any parameter:

```python
import asyncio

from aiofreqlimit import FreqLimit

limit = FreqLimit(1 / 10)


async def job():
    async with limit.acquire('some_key'):
        await some_call()


async def main():
    await asyncio.gather(*(job() for _ in range(100)))


asyncio.run(main())
``` |
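The underlying idea — spacing acquisitions for the same key at least `interval` seconds apart — can be sketched with plain asyncio. This is a generic illustration under stated assumptions, not aiofreqlimit's actual implementation:

```python
import asyncio
import time


class SimpleFreqLimit:
    """Allow one acquisition per `interval` seconds for each key (illustrative sketch)."""

    def __init__(self, interval):
        self._interval = interval
        self._last = {}   # key -> monotonic time of last acquisition
        self._locks = {}  # key -> lock serializing acquisitions for that key

    async def acquire(self, key):
        lock = self._locks.setdefault(key, asyncio.Lock())
        async with lock:
            now = time.monotonic()
            # Sleep until the minimum interval since the last acquisition has passed.
            wait = self._last.get(key, now - self._interval) + self._interval - now
            if wait > 0:
                await asyncio.sleep(wait)
            self._last[key] = time.monotonic()


async def main():
    limit = SimpleFreqLimit(0.05)
    start = time.monotonic()
    for _ in range(3):
        await limit.acquire("key")
    return time.monotonic() - start


elapsed = asyncio.run(main())
```

Three acquisitions at a 0.05 s interval take at least 0.1 s in total, since the second and third calls each wait out the remaining interval.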
aioftp | ftp client/server for asyncio (http://aioftp.readthedocs.org)

Features: Simple. Extensible. Client socks proxy via siosocks (pip install aioftp[socks]).

Goals: Minimum usable core. Do not use deprecated or overridden commands and features (if possible). Very high level api.

The client uses these commands: USER, PASS, ACCT, PWD, CWD, CDUP, MKD, RMD, MLSD, MLST, RNFR, RNTO, DELE, STOR, APPE, RETR, TYPE, PASV, ABOR, QUIT, REST, LIST (as fallback). The server supports these commands: USER, PASS, QUIT, PWD, CWD, CDUP, MKD, RMD, MLSD,
LIST (but it’s not recommended to use it, cause it has no standard format),
MLST, RNFR, RNTO, DELE, STOR, RETR, TYPE (“I” and “A”), PASV, ABOR, APPE, REST. These subsets are enough for 99% of tasks, but if you need something more, you can easily extend the current set of commands.

Server benchmark: Compared with pyftpdlib and
checked with its ftpbench script.

aioftp 0.8.0:
STOR (client -> server) 284.95 MB/sec
RETR (server -> client) 408.44 MB/sec
200 concurrent clients (connect, login) 0.18 secs
STOR (1 file with 200 idle clients) 287.52 MB/sec
RETR (1 file with 200 idle clients) 382.05 MB/sec
200 concurrent clients (RETR 10.0M file) 13.33 secs
200 concurrent clients (STOR 10.0M file) 12.56 secs
200 concurrent clients (QUIT) 0.03 secs

aioftp 0.21.4 (python 3.11.2):
STOR (client -> server) 280.17 MB/sec
RETR (server -> client) 399.23 MB/sec
200 concurrent clients (connect, login) 0.22 secs
STOR (1 file with 200 idle clients) 248.46 MB/sec
RETR (1 file with 200 idle clients) 362.43 MB/sec
200 concurrent clients (RETR 10.0M file) 5.41 secs
200 concurrent clients (STOR 10.0M file) 2.04 secs
200 concurrent clients (QUIT) 0.04 secs

pyftpdlib 1.5.2:
STOR (client -> server) 1235.56 MB/sec
RETR (server -> client) 3960.21 MB/sec
200 concurrent clients (connect, login) 0.06 secs
STOR (1 file with 200 idle clients) 1208.58 MB/sec
RETR (1 file with 200 idle clients) 3496.03 MB/sec
200 concurrent clients (RETR 10.0M file) 0.55 secs
200 concurrent clients (STOR 10.0M file) 1.46 secs
200 concurrent clients (QUIT) 0.02 secs

Dependencies: Python 3.8+. 0.13.0 is the last version which supports python 3.5.3+; 0.16.1 is the last version which supports python 3.6+; 0.21.4 is the last version which supports python 3.7+.

License: aioftp is offered under the Apache 2 license.

Library installation: pip install aioftp

Getting started

Client example

WARNING: For all commands which use some sort of «stats» or «listing», aioftp tries
MLSx-family commands first (since they have a structured, machine-readable format on all platforms). But old/lazy/nasty servers do not implement these commands. In this case aioftp tries a LIST command, which has no standard format and cannot be parsed in all cases. Take a look at FileZilla's «directory listing» parser code. So, before creating a new issue, be sure this is not your case (you can check it with the logs). Anyway, you can provide your own LIST parser routine (see the client documentation).

```python
import asyncio
import aioftp


async def get_mp3(host, port, login, password):
    async with aioftp.Client.context(host, port, login, password) as client:
        for path, info in (await client.list(recursive=True)):
            if info["type"] == "file" and path.suffix == ".mp3":
                await client.download(path)


async def main():
    tasks = [
        asyncio.create_task(get_mp3("server1.com", 21, "login", "password")),
        asyncio.create_task(get_mp3("server2.com", 21, "login", "password")),
        asyncio.create_task(get_mp3("server3.com", 21, "login", "password")),
    ]
    await asyncio.wait(tasks)


asyncio.run(main())
```

Server example

```python
import asyncio
import aioftp


async def main():
    server = aioftp.Server([user], path_io_factory=path_io_factory)
    await server.run()


asyncio.run(main())
```

Or just use the simple server:

```
python -m aioftp --help
``` |
aioftps3 | aioftps3FTP in front of AWS S3, usingasyncio, andaiohttp. Only a subset of the FTP protocol is supported, with implicit TLS and PASV mode; connections will fail otherwise.Installationpipinstallaioftps3An SSL key and certificate must be present$HOME/ssl.keyand$HOME/ssl.crtrespectively. To create a self-signed certificate, you can use openssl.opensslreq-new-newkeyrsa:2048-days3650-nodes-x509-subj/CN=selfsigned\-keyout$HOME/ssl.key\-out$HOME/ssl.crtRunningpython-maioftps3.server_mainConfigurationConfiguration is through environment variablesVaraiableDescriptionExampleAWS_AUTH_MECHANISMHow requests to AWS are authenticated. Can besecret_access_keyorecs_role. Ifecs_roleit is expected that the server runs in an ECS container.secret_access_keyAWS_ACCESS_KEY_IDThe ID of the AWS access key, ifAWS_AUTH_MECHANISMissecret_access_key.ommittedAWS_SECRET_ACCESS_KEYThe secret part of the AWS access key, ifAWS_AUTH_MECHANISMissecret_access_keyommittedAWS_S3_BUCKET_REGIONThe region of the S3 bucket that stores the files.eu-west-1AWS_S3_BUCKET_HOSTThe hostname used to communicate with S3.s3-eu-west-1.amazonaws.comAWS_S3_BUCKET_NAMEThe name of the bucket files are stored in.my-bucket-nameAWS_S3_BUCKET_DIR_SUFFIXThe suffix of the keys created in order to simulate a directory. 
Must start with a forward slash, but does not need to be longer./FTP_USERS__i__LOGINForiany integer, the username of an FTP user that can login.my-userFTP_USERS__i__PASSWORD_HASHEDForiany integer, the hash, as generated bycreate_password.py, of the password of an FTP user that can login, using the salt inFTP_USERS__i__PASSWORD_SALTommittedFTP_USERS__i__PASSWORD_SALTSeeFTP_USERS__i__PASSWORD_HASHEDommittedFTP_COMMAND_PORTThe port that the server listens on for command connections.8021FTP_DATA_PORTS_FIRSTThe first data port in the range for PASV mode data transfers.4001FTP_DATA_PORTS_COUNTThe number of ports used afterFTP_DATA_PORTS_FIRST.30FTP_DATA_CIDR_TO_DOMAINS__i__CIDRForiany integer, a CIDR range used to match the IP of incoming command connections. If a match is found, the IP of the corresponding domain or IP address inFTP_DATA_CIDR_TO_DOMAINS__i__DOMAINis returned to the client in response to PASV mode requests. Some clients will respond toFTP_DATA_CIDR_TO_DOMAINS__i__DOMAINbeing0.0.0.0by making PASV mode data connections to the same IP as the original command connection, but not all.0.0.0.0/0FTP_DATA_CIDR_TO_DOMAINS__i__DOMAINSeeFTP_DATA_CIDR_TO_DOMAINS__i__CIDR.ftp.my-domain.comHEALTHCHECK_PORTThe port the server listens on for healthcheck requests, such as from an AWS network load balancer.8022Advanced usageThe code inaioftps3.server_mainsatisfies a very particular use case, which may not be useful to most. However, the bulk of the code can be used for other cases: you will have to write your own aioftps3.server_main-equivalent, using the functionsaioftps3.server.on_client_connectandaioftps3.server_socket.server. For example, you couldStore credentials, appropriately hashed, differently, .e.g. 
in a database.
- Have the credentials hashed differently.
- Allow/deny PASV mode data connections based on some condition.

See the source of aioftps3.server_main for how these functions can be used.

Creating a password and salt:

```
python ./create_password.py
```

Running tests: Certificates must be created, and Minio, which emulates S3 locally, must be started:

```
./certificates-create.sh && ./minio-start.sh
```

and then to run the tests themselves:

```
./tests.sh
```

Features / Design / Limitations:
- Can upload files bigger than 2G: uses multipart upload under the hood.
- Does not store uploading files in memory before uploading them to S3: i.e. it is effectively a streaming upload. However, it's not completely streaming: each part of a multipart upload is stored in memory before it begins to transfer to S3, in order to be able to hash its content and determine its length.
- For uploading files, hashes are computed incrementally as data comes in, in order to not block the event loop just before uploads to S3.
- As few dependencies as is reasonable: aiohttp and its dependencies. Boto 3 is not used.
- May not behave well if upload to the server is faster than its upload to S3.
- There is some locking to deal with the same files being operated on concurrently. However,
it does nothing to deal witheventual consistency of S3, and so some operations may appear to not have an immediate effect.Building and running locallydockerbuild-tftps-s3.&&\dockerrun--rm-p8021-8042:8021-8042\-eAWS_AUTH_MECHANISM=secret_access_key\-eAWS_ACCESS_KEY_ID=ommitted\-eAWS_SECRET_ACCESS_KEY=ommitted\-eAWS_S3_BUCKET_REGION=eu-west-1\-eAWS_S3_BUCKET_HOST=s3-eu-west-1.amazonaws.com\-eAWS_S3_BUCKET_NAME=my-bucket-name\-eAWS_S3_BUCKET_DIR_SUFFIX=/\-eFTP_USERS__1__LOGIN=user\-eFTP_USERS__1__PASSWORD_HASHED=ommitted\-eFTP_USERS__1__PASSWORD_SALT=ommitted\-eFTP_COMMAND_PORT=8021\-eFTP_DATA_PORTS_FIRST=4001\-eFTP_DATA_PORTS_COUNT=2\-eFTP_DATA_CIDR_TO_DOMAINS__1__CIDR=0.0.0.0/0\-eFTP_DATA_CIDR_TO_DOMAINS__1__DOMAIN=0.0.0.0\-eHEALTHCHECK_PORT=8022ftps-s3Building and pushing to Quaydockerbuild-tftps-s3.&&\dockertagftps-s3:latestquay.io/uktrade/ftps-s3:latest&&\dockerpushquay.io/uktrade/ftps-s3:latestBuilding and pushing healthcheck application to Quaydockerbuild-tftps-s3-healthcheck.-fDockerfile-healthcheck&&\dockertagftps-s3-healthcheck:latestquay.io/uktrade/ftps-s3-healthcheck:latest&&\dockerpushquay.io/uktrade/ftps-s3-healthcheck:latestBuilding and pushing Minio, used for testing, to Quaydockerbuild-tftps-s3-minio.-fDockerfile-minio&&\dockertagftps-s3-minio:latestquay.io/uktrade/ftps-s3-minio:latest&&\dockerpushquay.io/uktrade/ftps-s3-minio:latest |
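The incremental-hashing design point above — hashing each chunk as it arrives so the whole payload never has to be buffered — can be sketched with the stdlib hashlib module. This is a generic illustration, not aioftps3's actual code:

```python
import hashlib


def hash_chunks(chunks):
    """Hash an iterable of byte chunks incrementally, tracking total length."""
    h = hashlib.sha256()
    total = 0
    for chunk in chunks:
        # Feed each incoming chunk into the hash as it arrives,
        # so the full payload never needs to be held in memory.
        h.update(chunk)
        total += len(chunk)
    return h.hexdigest(), total


digest, size = hash_chunks([b"part-1", b"part-2"])
```

The digest is identical to hashing the concatenated payload in one go, which is what makes per-chunk hashing of a streamed upload possible.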
aio.functional | Function utils for asyncio. |
aiofunctools | aiofunctools: Library to help with functional programming in Python. It's asyncio compatible. The basic idea behind it is Railway Oriented Programming. This allows us to: simplify our code; improve error management; be cool! be functional!

Examples:

Old code example:

```python
async def create_user_handler(request) -> Response:
    try:
        user = check_valid_user(request)
        create_user(user)
    except InvalidBody:
        return_422('Invalid body')
    except UserAlreadyExists:
        return_409('User already exists')
    return_201(user)
```

ROP example:

```python
async def create_user_handler(request) -> Response:
    return await compose(
        check_valid_user,
        create_user,
        return_201,
    )(request)
``` |
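A minimal sketch of how such an async-aware compose could work — chaining functions left to right and awaiting any coroutine results. This is a generic illustration; aiofunctools' actual signature and error handling may differ:

```python
import asyncio


def compose(*funcs):
    """Chain sync and async functions left-to-right into one async callable."""
    async def composed(value):
        for func in funcs:
            value = func(value)
            # Await the result if the function was a coroutine function.
            if asyncio.iscoroutine(value):
                value = await value
        return value
    return composed


def add_one(x):
    return x + 1


async def double(x):
    return x * 2


async def main():
    # 3 -> add_one -> 4 -> double -> 8
    return await compose(add_one, double)(3)


result = asyncio.run(main())  # 8
```

Each step receives the previous step's output, so sync helpers and coroutines can be mixed freely in one pipeline.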