package | package-description |
---|---|
aiorb | No description available on PyPI. |
aiorchestra | AsyncIO-with-UVLoop TOSCA orchestrator |
aiorchestra-asyncssh-plugin | AsyncIO-with-UVLoop TOSCA orchestrator AsyncSSH plugin |
aiorchestra-openstack-plugin | AsyncIO-with-UVLoop TOSCA orchestrator OpenStack plugin |
aiorcon | An asynchronous interface for the Source RCON Protocol. |
aiordr | Simple and fast asynchronous library for the o!rdr API.

Features
- Support for modern async syntax (async with)
- Event decorators
- Rate limit handling
- Easy to use

Installing

Python 3.9 or higher is required. To install the library, simply run the following commands:

# Linux/macOS
python3 -m pip install -U aiordr
# Windows
py -3 -m pip install -U aiordr

To install the development version, do the following:

$ git clone https://github.com/NiceAesth/aiordr
$ cd aiordr
$ python3 -m pip install -U .

API Example

import aiordr
import asyncio

async def main():
    client = aiordr.ordrClient(verification_key="verylongstring")

    await client.create_render(
        "username",
        "YUGEN",
        replay_url="https://url.to.replay",
    )

    @client.on_render_added
    async def on_render_added(event: aiordr.models.RenderAddEvent) -> None:
        print(event)

    @client.on_render_progress
    async def on_render_progress(event: aiordr.models.RenderProgressEvent) -> None:
        print(event)

    @client.on_render_fail
    async def on_render_fail(event: aiordr.models.RenderFailEvent) -> None:
        print(event)

    @client.on_render_finish
    async def on_render_finish(event: aiordr.models.RenderFinishEvent) -> None:
        print(event)

if __name__ == "__main__":
    asyncio.run(main())

Contributing

Please read the CONTRIBUTING.rst to learn how to contribute to aiordr!

Acknowledgments
- discord.py for README formatting
- aiosu, sister library for the osu! API |
aiords | No description available on PyPI. |
aioreactive | aioreactive - ReactiveX for asyncio using async and await

NEWS: Project rebooted Nov. 2020. Rebuilt using Expression.

Aioreactive is RxPY for asyncio. It's an asynchronous and reactive Python library for asyncio using async and await. Aioreactive is built on the Expression functional library and integrates naturally with the Python language. aioreactive is the unification of RxPY and reactive programming with
asyncio using async and await.

The design goals for aioreactive:
- Python 3.11+ only. We have a hard dependency on Expression v5.
- All operators and tools are implemented as plain old functions.
- Everything is async. Sending values is async, subscribing to observables is async. Disposing subscriptions is async.
- One scheduler to rule them all. Everything runs on the asyncio base event loop.
- No multi-threading. Only async and await with concurrency using asyncio. Threads are hard, and in many cases it doesn't make sense to use multi-threading in Python applications. If you need to use threads you may wrap them with concurrent.futures and compose them into the chain with flat_map() or similar. See parallel.py for an example, and the sketch following this list.
- Simple, clean and use few abstractions. Try to align with the itertools package, and reuse as much from the Python standard library as possible.
- Support type hints and static type checking using Pylance.
- Implicit synchronous back-pressure ™. Producers of events will simply be awaited until the event can be processed by the down-stream consumers.
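A rough sketch of the thread-wrapping idea mentioned above (loosely following the parallel.py example referenced there). The map_async operator is taken from the operator list later in this README; the pipe import and the blocking function are assumptions, not the library's confirmed API:

import asyncio
from concurrent.futures import ThreadPoolExecutor

import aioreactive as rx
from expression import pipe  # assumed import path for the pipe operator

def blocking_work(value: int) -> int:
    # Stand-in for CPU- or IO-bound code that must run on a thread.
    return value * 2

async def in_thread(value: int) -> int:
    # Await the thread-pool future so the result composes back into the chain.
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=1) as executor:
        return await loop.run_in_executor(executor, blocking_work, value)

xs = rx.from_iterable([1, 2, 3])
ys = pipe(xs, rx.map_async(in_thread))  # map_async: transforms the observable asynchronously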
AsyncObservable and AsyncObserver

With aioreactive you subscribe observers to observables, and the key abstractions of aioreactive can be seen in this single line of code:

subscription = await observable.subscribe_async(observer)

The difference from RxPY can be seen with the await expression. Aioreactive is built around the asynchronous duals, or opposites, of the AsyncIterable and AsyncIterator abstract base classes. These async classes are called AsyncObservable and AsyncObserver.

AsyncObservable is a producer of events. It may be seen as the dual or opposite of AsyncIterable and provides a single setter method called subscribe_async() that is the dual of the __aiter__() getter method:

from abc import ABC, abstractmethod

class AsyncObservable(ABC):
    @abstractmethod
    async def subscribe_async(self, observer):
        return NotImplemented

AsyncObserver is a consumer of events and is modeled after the so-called consumer interface, the enhanced generator interface in PEP-342 and async generators in PEP-525. It is the dual of the AsyncIterator __anext__() method, and expands to three async methods: asend(), which is the opposite of __anext__(); athrow(), which is the opposite of raise Exception(); and aclose(), which is the opposite of raise StopAsyncIteration:

from abc import ABC, abstractmethod

class AsyncObserver(ABC):
    @abstractmethod
    async def asend(self, value):
        return NotImplemented

    @abstractmethod
    async def athrow(self, error):
        return NotImplemented

    @abstractmethod
    async def aclose(self):
        return NotImplemented

Subscribing to observables

An observable becomes hot and starts streaming items by using the subscribe_async() method. The subscribe_async() method takes an
observable and returns a disposable subscription. So the subscribe_async() method is used to attach an observer to the observable.

async def asend(value):
    print(value)

disposable = await subscribe_async(source, AsyncAnonymousObserver(asend))

AsyncAnonymousObserver is an anonymous observer that constructs an AsyncObserver out of plain async functions, so you don't have to implement a new named observer every time you need one.

The subscription returned by subscribe_async() is disposable, so to unsubscribe you need to await the dispose_async() method on the subscription.

await subscription.dispose_async()

Asynchronous iteration

Even more interesting, with to_async_iterable you can flip around from AsyncObservable to an AsyncIterable and use async-for to consume the stream of events.

obv = AsyncIteratorObserver()
subscription = subscribe_async(source, obv)
async for x in obv:
    print(x)

This effectively transforms us from an async push model to an async pull model, and lets us use the awesome new language features such as async for and async-with. We do this without any queueing, as a push by the AsyncObservable will await the pull by the AsyncIterator. This effectively applies so-called "back-pressure" up the subscription, as the producer will await the iterator to pick up the item sent.

The for-loop may be wrapped with async-with to control the lifetime of the subscription:

import aioreactive as rx

xs = rx.from_iterable([1, 2, 3])
result = []

obv = rx.AsyncIteratorObserver()
async with await xs.subscribe_async(obv) as subscription:
    async for x in obv:
        result.append(x)

assert result == [1, 2, 3]

Async streams

An async stream is both an async observer and an async observable.
Aioreactive lets you create streams explicitly.

import aioreactive as rx

stream = AsyncSubject()  # Alias for AsyncMultiStream

sink = rx.AsyncAnonymousObserver()
await stream.subscribe_async(sink)
await stream.asend(42)

You can create streams directly from AsyncMultiStream or AsyncSingleStream. AsyncMultiStream supports multiple observers, and is hot in the sense that it will drop any event that is sent if there are currently no observers attached. AsyncSingleStream on the other hand supports a single observer, and is cold in the sense that it will await any producer until there is an observer attached.
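A small sketch contrasting the two stream types just described; it assumes both classes are importable from the top-level aioreactive package, which this README implies but does not show:

from aioreactive import AsyncMultiStream, AsyncSingleStream, AsyncAnonymousObserver

async def demo() -> None:
    async def asend(value) -> None:
        print(value)

    # Hot: with no observer attached, the event below is simply dropped.
    multi = AsyncMultiStream()
    await multi.asend(1)  # dropped silently

    await multi.subscribe_async(AsyncAnonymousObserver(asend))
    await multi.asend(2)  # delivered to the attached observer

    # Cold: asend() awaits until an observer is attached, so subscribe first.
    single = AsyncSingleStream()
    await single.subscribe_async(AsyncAnonymousObserver(asend))
    await single.asend(3)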
Operators

The Rx operators in aioreactive are all plain old functions. You can apply them to an observable and compose it into a transformed, filtered, aggregated or combined observable. This transformed observable can be streamed into an observer:

Observable -> Operator -> Operator -> Operator -> Observer

Aioreactive contains many of the same operators as you know from RxPY. Our goal is not to implement them all, but to provide the most essential ones.

- concat -- Concatenates two or more observables.
- choose -- Filters and/or transforms the observable.
- choose_async -- Asynchronously filters and/or transforms the observable.
- debounce -- Throttles an observable.
- delay -- Delays the items within an observable.
- distinct_until_changed -- An observable with continuously distinct values.
- filter -- Filters an observable.
- filteri -- Filters an observable with index.
- flat_map -- Transforms an observable into a stream of observables and flattens the resulting observable.
- flat_map_latest -- Transforms an observable into a stream of observables and flattens the resulting observable by producing values from the latest observable.
- from_iterable -- Creates an observable from an (async) iterable.
- subscribe -- Subscribes an observer to an observable. Returns a subscription.
- map -- Transforms an observable.
- mapi -- Transforms an observable with index.
- map_async -- Transforms an observable asynchronously.
- mapi_async -- Transforms an observable asynchronously with index.
- merge_inner -- Merges an observable of observables.
- merge -- Merges one observable with another observable.
- merge_seq -- Merges a sequence of observables.
- run -- Awaits the future returned by subscribe. Returns when the subscription closes.
- slice -- Slices an observable.
- skip -- Skips items from the start of the observable stream.
- skip_last -- Skips items from the end of the observable stream.
- starfilter -- Filters an observable with a predicate and spreads the arguments.
- starmap -- Transforms an async observable and spreads the arguments to the mapper.
- switch_latest -- Merges the latest stream in an observable of streams.
- take -- Takes a number of items from the start of the observable stream.
- take_last -- Takes a number of items from the end of the observable stream.
- unit -- Converts a value or future to an observable.
- with_latest_from -- Combines two observables into one.

Functional or object-oriented, reactive or interactive

With aioreactive you can choose to program functionally with plain old
functions, or object-oriented with classes and methods. Aioreactive supports both method chaining and forward pipe programming styles.

Pipe forward programming style

AsyncObservable may compose operators using forward pipelining with the pipe operator provided by the amazing Expression library. This works by having the operators partially applied with their arguments before being given the source stream as the last curried argument.

ys = pipe(xs, filter(predicate), map(mapper), flat_map(request))

Longer pipelines may break lines as for binary operators:

import aioreactive as rx

async def main():
    stream = rx.AsyncSubject()
    obv = rx.AsyncIteratorObserver()
    xs = pipe(
        stream,
        rx.map(lambda x: x["term"]),
        rx.filter(lambda text: len(text) > 2),
        rx.debounce(0.75),
        rx.distinct_until_changed(),
        rx.map(search_wikipedia),
        rx.switch_latest(),
    )

    async with xs.subscribe_async(obv) as ys:
        async for value in obv:
            print(value)

AsyncObservable also supports slicing using the Python slice notation.

@pytest.mark.asyncio
async def test_slice_special():
    xs = rx.from_iterable([1, 2, 3, 4, 5])
    values = []

    async def asend(value):
        values.append(value)

    ys = xs[1:-1]

    result = await run(ys, AsyncAnonymousObserver(asend))

    assert result == 4
    assert values == [2, 3, 4]

Fluent and chained programming style

An alternative to pipelining is to use the classic and fluent method chaining as we know from ReactiveX.

An AsyncObservable created from class methods such as AsyncRx.from_iterable() returns an AsyncChainedObservable, where we may use methods such as .filter() and .map().

@pytest.mark.asyncio
async def test_observable_simple_pipe():
    xs = AsyncRx.from_iterable([1, 2, 3])
    result = []

    async def mapper(value):
        await asyncio.sleep(0.1)
        return value * 10

    async def predicate(value):
        await asyncio.sleep(0.1)
        return value > 1

    ys = xs.filter(predicate).map(mapper)

    async def on_next(value):
        result.append(value)

    subscription = await ys.subscribe_async(AsyncAnonymousObserver(on_next))
    await subscription
    assert result == [20, 30]

Virtual time testing

Aioreactive also provides a virtual time event loop
(VirtualTimeEventLoop) that enables you to write asyncio unit tests that run in virtual time. Virtual time means that time is emulated, so tests run as quickly as possible even if they sleep or await long-lived operations. A test using virtual time still gives the same result as it would have done if it had been run in real time.

For example, the following test still gives the correct result even if it takes 0 seconds to run:

@pytest.fixture()
def event_loop():
    loop = VirtualTimeEventLoop()
    yield loop
    loop.close()

@pytest.mark.asyncio
async def test_call_later():
    result = []

    def action(value):
        result.append(value)

    loop = asyncio.get_event_loop()
    loop.call_later(10, partial(action, 1))
    loop.call_later(1, partial(action, 2))
    loop.call_later(5, partial(action, 3))
    await asyncio.sleep(10)

    assert result == [2, 3, 1]

The aioreactive testing module provides a test AsyncSubject that may delay sending values, and a test AsyncTestObserver that records all events. These two classes help you with testing in virtual time.

@pytest.fixture()
def event_loop():
    loop = VirtualTimeEventLoop()
    yield loop
    loop.close()

@pytest.mark.asyncio
async def test_delay_done():
    xs = AsyncSubject()  # Test stream

    async def mapper(value):
        return value * 10

    ys = delay(0.5, xs)
    lis = AsyncTestObserver()  # Test AsyncAnonymousObserver
    sub = await subscribe_async(ys, lis)

    await xs.asend_later(0, 10)
    await xs.asend_later(1, 20)
    await xs.aclose_later(1)
    await sub

    assert lis.values == [
        (0.5, OnNext(10)),
        (1.5, OnNext(20)),
        (2.5, OnCompleted),
    ]

Why not use AsyncIterable for everything?

AsyncIterable and AsyncObservable are closely related (in fact they
are duals). AsyncIterable is an async iterable (pull) world, while AsyncObservable is an async reactive (push) based world. There are many operations such as map() and filter() that may be simpler to implement using AsyncIterable, but once we start to include time, then AsyncObservable really starts to shine. Operators such as delay() make much more sense for AsyncObservable than for AsyncIterable.

However, aioreactive makes it easy for you to flip around to async iterable just before you need to consume the stream, thus giving you the best of both worlds.
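A tiny sketch of that point, combining the delay operator from the operator list above with the async-for consumption shown earlier. Whether delay is curried as a single-argument pipe operator here is an assumption (the test example above uses the two-argument delay(0.5, xs) form):

import aioreactive as rx
from expression import pipe  # assumed import path for the pipe operator

async def consume() -> None:
    xs = rx.from_iterable([1, 2, 3])
    ys = pipe(xs, rx.delay(5.0))  # time-shifts each item; natural for push-based observables

    obv = rx.AsyncIteratorObserver()
    async with await ys.subscribe_async(obv):
        async for value in obv:
            print(value)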
Will aioreactive replace RxPY?

Aioreactive will not replace RxPY. RxPY is an implementation of Observable. Aioreactive is an implementation of AsyncObservable.

Rx and RxPY have hundreds of different query operators, and we currently have no plans to implement all of them for aioreactive.

Many ideas from aioreactive have already been ported back into "classic" RxPY.

References

Aioreactive was inspired by:
- AsyncRx - Aioreactive is a direct port of AsyncRx from F#.
- Expression - Functional programming for Python. Is it really Pythonic to continue using LINQ operators instead of plain old functions?
- Reactive Extensions (Rx) and RxPY.
- Dart Streams
- Underscore.js
- itertools and functools
- dbrattli/OSlash
- kriskowal/q

License

The MIT License (MIT)
Copyright (c) 2016 Børge Lanes, Dag Brattli. |
aioreadline | aioreadline

Python has a builtin readline module. However, the builtin module is difficult to use in async Python code. This module provides an interface around the async-compatible functions of libreadline using a ctypes wrapper.

Example

import asyncio, atexit
from aioreadline import AIOReadline  # assumed import; the original example uses AIOReadline without importing it
async def _main():
while True:
line = await aiorl.getLine()
if line is None or line == b"quit":
aiorl.stop()
loop.stop()
break
elif len(line) > 0:
aiorl.add_history(line)
print(line)
loop = asyncio.get_event_loop()
loop.create_task(_main())
aiorl = AIOReadline(prompt="> ", loop=loop, history_file=".aioreadline_history")
atexit.register(lambda: aiorl.stop())
try:
loop.run_forever()
except KeyboardInterrupt:
loop.stop() |
aio-readsb | python-aio-geojson-readsb

This is an adaptation of the NSW RFS Incidents feed by Malte Franken.

Installation

pip install aio-geojson-readsb

Usage

See below for examples of how this library can be used. After instantiating a particular class - feed or feed manager - and supplying the required parameters, you can call update to retrieve the feed data. The return value will be a tuple of a status code and the actual data in the form of a list of feed entries specific to the selected feed.

Status Codes
- OK: Update went fine and data was retrieved. The library may still return empty data, for example because no entries fulfilled the filter criteria.
- OK_NO_DATA: Update went fine but no data was retrieved, for example because the server indicated that there was no update since the last request.
- ERROR: Something went wrong during the update.

Parameters
- home_coordinates: Coordinates (tuple of latitude/longitude)

Supported Filters
- filter_radius: Radius in kilometers around the home coordinates in which events from the feed are included.

Example

import asyncio
from aiohttp import ClientSession
from aio_geojson_readsb import readsbFeed

async def main() -> None:
    async with ClientSession() as websession:
        # Home Coordinates: Latitude: -33.0, Longitude: 150.0
        # Filter radius: 50 km
        feed = readsbFeed(websession, (-33.0, 150.0), filter_radius=20000)
        status, entries = await feed.update()
        print(status)
        print(entries)
        for e in entries:
            print(e.publication_date)
            print(e.coordinates)
            print(e.flight_num)

asyncio.get_event_loop().run_until_complete(main())

Feed entry properties

Each feed entry is populated with the following properties (Name: Description [Feed attribute]):
- geometry: All geometry details of this entry. [geometry]
- coordinates: Best coordinates (latitude, longitude) of this entry. [geometry]
- external_id: The unique public identifier for this incident. [guid]
- title: Title of this entry. [title]
- attribution: Attribution of the feed. [n/a]
- distance_to_home: Distance in km of this entry to the home coordinates. [n/a]
- publication_date: The publication date of the incidents. [pubDate]

Feed Manager

The Feed Manager helps manage feed updates over time, by notifying the
consumer of the feed about new feed entries, updates and removed entries compared to the last feed update.

If the current feed update is the first one, then all feed entries will be reported as new. The feed manager will keep track of all feed entries' external IDs that it has successfully processed.

If the current feed update is not the first one, then the feed manager will produce three sets (a callback sketch follows below):
- Feed entries that were not in the previous feed update but are in the current feed update will be reported as new.
- Feed entries that were in the previous feed update and are still in the current feed update will be reported as to be updated.
- Feed entries that were in the previous feed update but are not in the current feed update will be reported to be removed.

If the current update fails, then all feed entries processed in the previous feed update will be reported to be removed.
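A rough sketch of wiring those three outcomes to callbacks. The manager class name (readsbFeedManager) and its constructor signature are assumptions modeled on the aio_geojson_client family this package adapts; check the package for the real names:

import asyncio
from aiohttp import ClientSession
from aio_geojson_readsb import readsbFeedManager  # hypothetical name, not confirmed by this README

async def main() -> None:
    async with ClientSession() as websession:
        async def generate_entity(external_id: str) -> None:
            print("new entry:", external_id)       # first seen in this update

        async def update_entity(external_id: str) -> None:
            print("updated entry:", external_id)   # seen before and still present

        async def remove_entity(external_id: str) -> None:
            print("removed entry:", external_id)   # no longer present (or update failed)

        feed_manager = readsbFeedManager(
            websession,
            generate_entity,
            update_entity,
            remove_entity,
            (-33.0, 150.0),  # home coordinates, as in the example above
        )
        await feed_manager.update()

asyncio.run(main())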
After a successful update from the feed, the feed manager provides two different dates:
- last_update will be the timestamp of the last update from the feed irrespective of whether it was successful or not.
- last_update_successful will be the timestamp of the last successful update from the feed. This date may be useful if the consumer of this library wants to treat intermittent errors from feed updates differently.
- last_timestamp (optional, depends on the feed data) will be the latest timestamp extracted from the feed data. This requires that the underlying feed data actually contains a suitable date. This date may be useful if the consumer of this library wants to process feed entries differently if they haven't actually been updated. |
aio-recaptcha | Async Recaptcha V2 & V3

Setup ⚙️

import aiorecaptcha

@app.route('/')
def render_recaptcha():
    render(aiorecaptcha.html(site_key='your_site_key') + aiorecaptcha.js())

@app.route('/verify', methods=['POST'])
async def verify_recaptcha(response_received_from_form):
    try:
        await aiorecaptcha.verify(
            secret=client_secret,
            response=response_received_from_form,
            fail_for_less_than=0.55,  # Recaptcha V3 only
        )
    except aiorecaptcha.RecaptchaError:
        return 'No! Only hoomans!'
    else:
        return 'Hello hooman!'

API:

js()
html()
coro verify()
exc RecaptchaError

aiorecaptcha.html()

Get HTML <div> used by Recaptcha's JS script
Arguments:
site_key:
* Required
* Your Sitekey
theme:
* The color theme of the widget.
* Optional
* One of: (dark, light)
* Default: light
badge:
* Reposition the reCAPTCHA badge. 'inline' lets you position it with CSS.
* Optional
* One of: ('bottomright', 'bottomleft', 'inline')
* Default: None
size:
* Optional
* The size of the widget
* One of: ("compact", "normal", "invisible")
* Default: normal
type_:
* Optional
* One of: ('image', 'audio')
* Default: 'image'
tabindex (int):
* Optional
* The tabindex of the widget and challenge.
* If other elements in your page use tabindex, it should be set to make user navigation easier.
* Default: 0
callback (str):
* Optional
* The name of your callback function, executed when the user submits a successful response.
* The **g-recaptcha-response** token is passed to your callback.
expired_callback (str):
* Optional
* The name of your callback function, executed when the reCAPTCHA response expires and the user needs to re-verify.
error_callback (str):
* Optional
* The name of your callback function, executed when reCAPTCHA encounters an error
(usually network connectivity) and cannot continue until connectivity is restored.
* If you specify a function here, you are responsible for informing the user that they should retry.

aiorecaptcha.js()

Get JS script that loads the Recaptcha V2/V3 script
Appending this script to your HTML will expose the following API:
https://developers.google.com/recaptcha/docs/display#js_api
**If your html div is invisible, it will expose this API:**
https://developers.google.com/recaptcha/docs/invisible#js_api
Arguments:
onload (str):
* Optional
* The name of your callback function to be executed once all the dependencies have loaded.
render (str):
* Optional
* Whether to render the widget explicitly.
* Defaults to onload, which will render the widget in the first g-recaptcha tag it finds.
* Either: ``"onload"`` or explicitly specify a widget value
language (str):
* Optional
* hl language code
* Reference: https://developers.google.com/recaptcha/docs/language
async_ (bool):
* Optional
* add async tag to JS script
* Default True
defer (bool):
* Optional
* Add defer tag to JS script
* Default True

aiorecaptcha.verify()

Returns None if Recaptcha's response is valid, raises error
Arguments:
secret:
* Required
* The shared key between your site and reCAPTCHA.
response:
* Required
* The user response token provided by reCAPTCHA, verifying the user on your site.
* Should be typically found as an item named: 'g-recaptcha-response'.
remoteip:
* Optional
* The user's IP address.
fail_for_less_than:
* Optional
* Only relevant for Recaptcha V3
* Default 0.5
* Read more about how to interpret the score here: https://developers.google.com/recaptcha/docs/v3#interpreting_the_score
* Fail for score less than this value.

Test

Run:

$ aio-recaptcha/test.sh |
aiorecollect | 🗑 aiorecollect: A Python 3 Library for ReCollect Waste

aiorecollect is a Python 3, asyncio-based library for the ReCollect Waste API. It allows users to programmatically retrieve schedules for waste removal in their area, including trash, recycling, compost, and more.

Special thanks to @stealthhacker for the inspiration!

Installation

pip install aiorecollect

Python Versions

aiorecollect is currently supported on:
- Python 3.10
- Python 3.11
- Python 3.12

Place and Service IDs

To use aiorecollect, you must know both your ReCollect Place and Service IDs. In general, cities/municipalities that utilize ReCollect will give you a way to
subscribe to a calendar with pickup dates. If you examine the iCal URL for this calendar, the Place and Service IDs are embedded in it:

webcal://recollect.a.ssl.fastly.net/api/places/PLACE_ID/services/SERVICE_ID/events.en-US.ics
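A minimal sketch of pulling the two IDs out of such a URL with the standard library; the regex and the sample ID values are assumptions based only on the pattern shown above:

import re

ICAL_URL = "webcal://recollect.a.ssl.fastly.net/api/places/12345/services/678/events.en-US.ics"

match = re.search(r"/places/([^/]+)/services/([^/]+)/", ICAL_URL)
if match:
    place_id, service_id = match.groups()
    print(place_id, service_id)  # -> 12345 678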
Usage

import asyncio
from datetime import date
from aiorecollect import Client

async def main() -> None:
    """Run."""
    client = await Client("<PLACE ID>", "<SERVICE ID>")

    # The client has a few attributes that you can access:
    client.place_id
    client.service_id

    # Get all pickup events on the calendar:
    pickup_events = await client.async_get_pickup_events()

    # ...or get all pickup events within a certain date range:
    pickup_events = await client.async_get_pickup_events(
        start_date=date(2020, 10, 1), end_date=date(2020, 10, 31)
    )

    # ...or just get the next pickup event:
    next_pickup = await client.async_get_next_pickup_event()

asyncio.run(main())

The PickupEvent Object

The PickupEvent object that is returned from the above calls comes with three properties:
- date: a datetime.date that denotes the pickup date
- pickup_types: a list of PickupType objects that will occur with this event
- area_name: the name of the area in which the event is occurring

The PickupType Object

The PickupType object contains the "internal" name of the pickup type and a human-friendly representation when it exists:
- name: the internal name of the pickup type
- friendly_name: the human-friendly name of the pickup type (if it exists)
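A short illustrative loop over those documented properties, using the pickup_events list from the Usage example above; the fallback from friendly_name to name is an assumption:

for event in pickup_events:
    # Prefer the human-friendly name when the pickup type has one.
    type_names = [p.friendly_name or p.name for p in event.pickup_types]
    print(f"{event.date} ({event.area_name}): {', '.join(type_names)}")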
Connection Pooling

By default, the library creates a new connection to ReCollect with each coroutine. If you are calling a large number of coroutines (or merely want to squeeze out every second of runtime savings possible), an aiohttp ClientSession can be used for connection pooling:

import asyncio
from aiohttp import ClientSession
from aiorecollect import Client

async def main() -> None:
    """Run."""
    async with ClientSession() as session:
        client = await Client("<PLACE ID>", "<SERVICE ID>", session=session)

        # Get to work...

asyncio.run(main())

Contributing

Thanks to all of our contributors so far!

1. Check for open features/bugs or initiate a discussion on one.
2. Fork the repository.
3. (optional, but highly recommended) Create a virtual environment: python3 -m venv .venv
4. (optional, but highly recommended) Enter the virtual environment: source ./.venv/bin/activate
5. Install the dev environment: script/setup
6. Code your new feature or bug fix on a new branch.
7. Write tests that cover your new functionality.
8. Run tests and ensure 100% code coverage: poetry run pytest --cov aiorecollect tests
9. Update README.md with any new documentation.
10. Submit a pull request! |
aio-recurring | aio-recurring

Recurring coroutines using asyncio.

Usage:

import asyncio
from datetime import datetime

from aio_recurring.job import (
    recurring,
    run_recurring_jobs,
)

@recurring(every=5)
async def print_info_5():
    print(f"[{datetime.now()}] This coroutine is rescheduled every 5 seconds")

@recurring(every=10)
async def print_info_10():
    print(f"[{datetime.now()}] This coroutine is rescheduled every 10 seconds")

async def main():
    run_recurring_jobs()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.create_task(main())
    loop.run_forever() |
aiorecycle | A decorator to recycle tasks in the event loop

Installation

pip install aiorecycle

Usage example

import asyncio
# NOTE: the import and decorator name below were obscured by an email-protection
# artifact in the source; `cycle` is assumed from the package description.
from aiorecycle import cycle

@cycle()
async def task():
    if asyncio.get_event_loop().time() % 2 == 0:
        print('make some periodic work')

async def main():
    await task()
    await asyncio.sleep(3)  # emulate very important work

if __name__ == "__main__":
    asyncio.run(main())

License

aiorecycle library is offered under Apache 2 license. |
aioredis | aioredis

asyncio (PEP 3156) Redis client library.

The library is intended to provide a simple and clear interface to Redis based on asyncio.

Features

Feature | Supported
hiredis parser | yes
Pure-python parser | yes
Low-level & High-level APIs | yes
Pipelining support | yes
Multi/Exec support | yes
Connections Pool | yes
Pub/Sub support | yes
Sentinel support | yes
ACL support | yes
Streams support | yes
Redis Cluster support | no
Tested Python versions | 3.6, 3.7, 3.8, 3.9, 3.10
Tested for Redis servers | 5.0, 6.0
Support for dev Redis server | through low-level API

Installation

The easiest way to install aioredis is by using the package on PyPi:

pip install aioredis

Recommended with hiredis for performance and stability reasons:

pip install hiredis

Requirements
- Python 3.6+
- hiredis (Optional but recommended)
- async-timeout
- typing-extensions
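A minimal usage sketch, assuming the redis-py-style API that the 2.0.0 changelog entry below says was ported into aioredis; from_url and the calls shown follow that API but do not appear in this README excerpt:

import asyncio
import aioredis

async def main() -> None:
    # aioredis 2.x mirrors the redis-py client API (see the 2.0.0 entry below).
    redis = aioredis.from_url("redis://localhost")
    await redis.set("my-key", "value")
    value = await redis.get("my-key")
    print(value)
    await redis.close()

asyncio.run(main())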
Benchmarks

Benchmarks can be found here: https://github.com/popravich/python-redis-benchmark

Contribute
- Issue Tracker: https://github.com/aio-libs/aioredis/issues
- Google Group: https://groups.google.com/g/aio-libs
- Gitter: https://gitter.im/aio-libs/Lobby
- Source Code: https://github.com/aio-libs/aioredis
- Contributor's guide: devel

Feel free to file an issue or make a pull request if you find any bugs or have some suggestions for library improvement.

License

The aioredis is offered under a MIT License.

Changelog

2.0.1 - (2021-12-20)

Features
- Added Python 3.10 to CI & Updated the Docs (see #1160)
- Enable mypy in CI (see #1101)
- Synchronized reading the responses from a connection (see #1106)

Fixes
- Remove __del__ from Redis (Fixes #1115) (see #1227)
- fix socket.error raises (see #1129)
- Fix buffer is closed error when using PythonParser class (see #1213)

2.0.0 - (2021-03-18)

Features
- Port redis-py's client implementation to aioredis. (see #891)
- Make hiredis an optional dependency. (see #917)

1.3.1 (2019-12-02)

Bugfixes
- Fix transaction data decoding (see #657)
- Fix duplicate calls to pool.wait_closed() upon create_pool() exception. (see #671)

Deprecations and Removals
- Drop explicit loop requirement in API. Deprecate loop argument. Throw warning in Python 3.8+ if explicit loop is passed to methods. (see #666)

Misc
- (#643, #646, #648)

1.3.0 (2019-09-24)

Features
- Added xdel and xtrim method which missed in commands/streams.py & also added unit test code for them (see #438)
- Add count argument to spop command (see #485)
- Add support for zpopmax and zpopmin redis commands (see #550)
- Add towncrier: change notes are now stored in CHANGES.txt (see #576)
- Type hints for the library (see #584)
- A few additions to the sorted set commands: the blocking pop commands BZPOPMAX and BZPOPMIN; the CH and INCR options of the ZADD command (see #618)
- Added no_ack parameter to xread_group streams method in commands/streams.py (see #625)

Bugfixes
- Fix for sensitive logging (see #459)
- Fix slow memory leak in wait_closed implementation (see #498)
- Fix handling of instances where Redis returns null fields for a stream message (see #605)
Improved Documentation
- Rewrite "Getting started" documentation. (see #641)

Misc
- #585, #611, #612, #619, #620, #642

1.2.0 (2018-10-24)

Features
- Implemented new Stream command support (see #299)
- Reduce encode_command() cost about 60% (see #397)

Bugfixes
- Fix pipeline commands buffering was causing multiple sendto syscalls (see #464)
and #473)Python 3.7 compatibility fixes(see #426)Fix typos in documentation(see #400)FixINFOcommand result parsing(see #405)Fix bug inConnectionsPool._drop_closedmethod(see #461)MiscellaneousUpdate dependencies versionsMultiple tests improvements1.1.0 (2018-02-16)FeaturesImplement new commands:wait,touch,swapdb,unlink(see #376)Addasync_opargument toflushallandflushdbcommands(see #364, #370)BugfixesImportant!Fix Sentinel sentinel client with poolminsizegreater than 1(see #380)FixSentinelPool.discover_timeoutusage(see #379)FixReceiverhang on disconnect(see #354, #366)Fix an issue withsubscribe/psubscribewith empty pool(see #351, #355)Fix an issue whenStreamReader's feed_data is called before set_parser(see #347)MiscellaneousUpdate dependencies versionsMultiple test fixes1.0.0 (2017-11-17)FeaturesImportant!Drop Python 3.3, 3.4 support(see #321, #323, #326)Important!Connections pool has been refactored; nowcreate_redisfunction will yieldRedisinstance instead ofRedisPool(see #129)Important!Change sorted set commands reply format:
return list of tuples instead of plain list for commands
acceptingwithscoresargument(see #334)Important!Changehscancommand reply format:
return list of tuples instead of mixed key-value list(see #335)Implement Redis URI support as supportedaddressargument value(see #322)Droppedcreate_reconnecting_redis,create_redis_poolshould be
used insteadImplement customStreamReader(see #273)Implement Sentinel support(see #181)Implement pure-python parser(see #212)Addmigrate_keyscommand(see #187)Addzrevrangebylexcommand(see #201)Addcommand,command_count,command_getkeysandcommand_infocommands(see #229)Addpingsupport in pubsub connection(see #264)Addexistparameter tozaddcommand(see #288)AddMaxClientsErrorand implementReplyErrorspecialization(see #325)Addencodingparameter to sorted set commands(see #289)BugfixesFixCancelledErrorinconn._reader_task(see #301)Fix pending commands cancellation withCancelledError,
use explicit exception instead of callingcancel()method(see #316)Correct error message on Sentinel discovery of master/slave with password(see #327)Fixbytearraysupport as command argument(see #329)Fix critical bug in patched asyncio.Lock(see #256)Fix Multi/Exec transaction canceled error(see #225)Add missing arguments tocreate_redisandcreate_redis_poolFix deprecation warning(see #191)Make correct__aiter__()(see #192)Backward compatibility fix forwith (yield from pool) as conn:(see #205)Fixed pubsub receiver stop()(see #211)MiscellaneousMultiple test fixesAdd PyPy3 to build matrixUpdate dependencies versionsAdd missing Python 3.6 classifier0.3.5 (2017-11-08)BugfixesFix for indistinguishable futures cancellation withasyncio.CancelledError(see #316, cherry-picked from master)0.3.4 (2017-10-25)BugfixesFix time command result decoding when using connection-wide encoding setting(see #266)0.3.3 (2017-06-30)BugfixesCritical bug fixed in patched asyncio.Lock(see #256)0.3.2 (2017-06-21)FeaturesAddedzrevrangebylexcommand(see #201 cherry-picked from master)Add connection timeout(see #221, cherry-picked from master)BugfixesFixed pool close warning(see #239, #236,
cherry-picked from masterFixed asyncio Lock deadlock issue(see #231, #241)0.3.1 (2017-05-09)BugfixesFix pubsub Receiver missing iter() method(see #203)0.3.0 (2017-01-11)FeaturesPub/Sub connection commands acceptChannelinstances(see #168)Implement new Pub/Sub MPSC (multi-producers, single-consumer) Queue --aioredis.pubsub.Receiver(see #176)Addaioredis.abcmodule providing abstract base classes
defining interface for basic lib components (see #176)Implement Geo commands support(see #177, #179)BugfixesMinor tests fixesMiscellaneousUpdate examples and docs to useasync/awaitsyntax
also keepingyield fromexamples for history(see #173)Reflow Travis CI configuration; add Python 3.6 section(see #170)Add AppVeyor integration to run tests on Windows(see #180)Update multiple development requirements0.2.9 (2016-10-24)FeaturesAllow multiple keys inEXISTScommand(see #156, #157)BugfixesClose RedisPool when connection to Redis failed(see #136)Add simpleINFOcommand argument validation(see #140)Remove invalid uses ofnext()MiscellaneousUpdate devel.rst docs; update Pub/Sub Channel docs (cross-refs)Update MANIFEST.in to include docs, examples and tests in source bundle0.2.8 (2016-07-22)FeaturesAddhmset_dictcommand(see #130)AddRedisConnection.addresspropertyRedisPoolminsize/maxsizemust not beNoneImplementclose()/wait_closed()/closedinterface for pool(see #128)BugfixesAdd test forhstrlenTest fixesMiscellaneousEnable Redis 3.2.0 on TravisAdd spell checking when building docs(see #132)Documentation updated0.2.7 (2016-05-27)create_pool()minsize default value changed to 1Fixed cancellation of wait_closed(see #118)Fixedtime()conversion to float(see #126)Fixedhmset()method to return bool instead ofb'OK'(see [#12))Fixed multi/exec + watch issue (changed watch variable was causingtr.execute()to fail)(see #121)Replaceasyncio.Futureuses with utility method(get ready to Python 3.5.2loop.create_future())Tests switched from unittest to pytest (see [#12))Documentation updates0.2.6 (2016-03-30)Fixed Multi/Exec transactions cancellation issue(see #110, #114)Fixed Pub/Sub subscribe concurrency issue(see #113, #115)Add SSL/TLS support(see #116)aioredis.ConnectionClosedErrorraised inexecute_pubsubas well(see #108)Redis.slaveof()method signature changed: now to disable
replication one should callredis.slaveof(None)instead ofredis.slaveof()More tests added0.2.5 (2016-03-02)Close all Pub/Sub channels on connection close(see #88)Additer()method toaioredis.Channelallowing to use it
withasync for(see #89)Inline code samples in docs made runnable and downloadable(see #92)Python 3.5 examples converted to useasync/awaitsyntax(see #93)Fix Multi/Exec to honor encoding parameter(see #94, #97)Add debug message increate_connection(see #90)Replaceasyncio.asynccalls with wrapper that respects asyncio version(see #101)Use NODELAY option for TCP sockets(see #105)Newaioredis.ConnectionClosedErrorexception added. Raised if
connection to Redis server is lost(see #108, #109)Fix RedisPool to close and drop connection in subscribe mode on releaseFixaioredis.util.decodeto recursively decode list responsesMore examples added and docs updatedAdd google groups link to READMEBump year in LICENSE and docs0.2.4 (2015-10-13)Python 3.5asyncsupport:New scan commands API (iscan,izscan,ihscan)Pool made awaitable (allowingwith await pool: ...andasync with pool.get() as conn:constructs)Fixed dropping closed connections from free pool(see #83)Docs updated0.2.3 (2015-08-14)Redis cluster support work in progressFixed pool issue causing pool growth over max size &acquirecall hangs(see #71)infoserver command result parsing implementedFixed behavior of util functions(see #70)hstrlencommand addedFew fixes in examplesFew fixes in documentation0.2.2 (2015-07-07)Decoding data withencodingparameter now takes into account
list (array) replies(see #68)encodingparameter added to following commands:generic commands: keys, randomkeyhash commands: hgetall, hkeys, hmget, hvalslist commands: blpop, brpop, brpoplpush, lindex, lpop, lrange, rpop, rpoplpushset commands: smembers, spop, srandmemberstring commands: getrange, getset, mgetBackward incompatibility:ltrimcommand now returns bool value instead of 'OK'Tests updated0.2.1 (2015-07-06)Logging added (aioredis.log module)Fixed issue withwait_messagein pub/sub(see #66)0.2.0 (2015-06-04)Pub/Sub support addedFix inzrevrangebyscorecommand(see #62)Fixes/tests/docs0.1.5 (2014-12-09)AutoConnector addedwait_closed method added for clean connections shutdownzscorecommand fixedTest fixes0.1.4 (2014-09-22)Dropped following Redis methods --Redis.multi(),Redis.exec(),Redis.discard()Redis.multi_exechack'ish property removedRedis.multi_exec()method addedHigh-level commands implemented:generic commands (tests)transactions commands (api stabilization).Backward incompatibilities:Following sorted set commands' API changed:zcount,zrangebyscore,zremrangebyscore,zrevrangebyscoreset string command' API changed0.1.3 (2014-08-08)RedisConnection.execute refactored to support commands pipelining(see #33)Several fixesWIP on transactions and commands interfaceHigh-level commands implemented and tested:hash commandshyperloglog commandsset commandsscripting commandsstring commandslist commands0.1.2 (2014-07-31)create_connection,create_pool,create_redisfunctions updated: db and password
arguments made keyword-only(see #26)Fixed transaction handling(see #32)Response decoding(see #16)0.1.1 (2014-07-07)Transactions support (in connection, high-level commands have some issues)Docs & tests updated.0.1.0 (2014-06-24)Initial releaseRedisConnection implementedRedisPool implementedDocs for RedisConnection & RedisPoolWIP on high-level API. |
aioredis-cluster | aioredis_cluster

Redis Cluster support for aioredis (supports only v1.x.x). Many implementation features were inspired by the go-redis project.

Requirements
- Python 3.8+
- async_timeout (only for Python < 3.11)

Features
- commands execute failover (retry command on other node in cluster)
- support resharding replies ASK/MOVED
- restore cluster state from alive nodes
- one node is enough to know the topology and initialize client
- cluster state auto reload

Limitations

Commands with limitations: Keys in mget/mset must provide one key slot.

# works
await redis.mget("key1:{foo}", "key2:{foo}")

# throws RedisClusterError
await redis.mget("key1", "key2")

Commands are not supported: Redis methods below do not work and are not supported in the cluster mode implementation.

cluster_add_slots
cluster_count_failure_reports
cluster_count_key_in_slots
cluster_del_slots
cluster_failover
cluster_forget
cluster_get_keys_in_slots
cluster_meet
cluster_replicate
cluster_reset
cluster_save_config
cluster_set_config_epoch
cluster_setslot
cluster_readonly
cluster_readwrite
client_setname
shutdown
slaveof
script_kill
move
select
flushall
flushdb
script_load
script_flush
script_exists
scan
iscan
quit
swapdb
migrate
migrate_keys
wait
bgrewriteaof
bgsave
config_rewrite
config_set
config_resetstat
save
sync
pipeline
multi_exec

But you can always execute the command you need on a concrete node of the cluster with the usual aioredis.RedisConnection, aioredis.ConnectionsPool or high-level aioredis.Redis interfaces.

Installation

pip install aioredis-cluster

Usage

import aioredis_cluster

redis = await aioredis_cluster.create_redis_cluster([
    "redis://redis-cluster-node1",
])

# or
redis = await aioredis_cluster.create_redis_cluster([
    "redis://redis-cluster-node1",
    "redis://redis-cluster-node2",
    "redis://redis-cluster-node3",
])

# or
redis = await aioredis_cluster.create_redis_cluster([
    ("redis-cluster-node1", 6379),
])

await redis.set("key", "value", expire=180)

redis.close()
await redis.wait_closed()

License

The aioredis_cluster is offered under MIT license.

Changes

2.7.0 (2023-12-18)
- Rework PubSub and fix race conditions (#27):
  - add aioredis_cluster.aioredis.stream module
  - rework PubSub command execution flow to prevent race conditions on spontaneous server channel-unsubscribe pushes
  - make a fully dedicated RedisConnection implementation for cluster
  - RedisConnection once entered in PubSub mode was never exiting it, because it is too hard to handle spontaneous unsubscribe events from Redis with simultaneous manual (P|S)UNSUBSCRIBE calls
  - fully rewrite handling of PUB/SUB replies/events for Cluster, RedisConnection and ConnectionsPool
  - in_pubsub now indicates a flag when the connector has connections in PubSub mode instead of the number of PUB/SUB channels
  - add key slot handling for sharded PubSub channels in non-cluster dedicated RedisConnection
  - fix and improve legacy aioredis tests
- improve support for py3.12
- improve support for Redis 7.2

2.6.0 (2023-11-02)
- fix stuck aioredis.Connection socket reader routine for sharded PUB/SUB when the cluster reshards and Redis starts responding with a MOVED error on SSUBSCRIBE commands #24

2.5.0 (2023-04-03)
- improve connection creation timeout
- do not lose connection in Pool while executing PING probe
- respect Pool.minsize in idle connections detector
- shuffle startup nodes to obtain cluster state

2.4.0 (2023-03-08)
- add support for Sharded PUB/SUB, new methods and properties: spublish, ssubscribe, sunsubscribe, pubsub_shardchannels, pubsub_shardnumsub, sharded_pubsub_channels
- drop support for Python 3.6, 3.7
- add support for Python 3.11
- idle connections detection in connections pool
- change acquire connection behaviour from connections pool.
Now connection acquire and release to pool by LIFO way for better idle connection detectiondeprecatedstate_reload_frequencyoption fromcreate_clusterfactory was removed2.3.1 (2022-07-29)fix bypassusernameargument for pool creation2.3.0 (2022-07-26)add support Redis 6AUTHcommand with usernamefactoriescreate_cluster,create_redis_cluster,aioredis_cluster.aioredis.create_connectionnow supportusernameargumentaddauth_with_usernamemethod forAbcConnection,AbcPooland impementations2.2.2 (2022-07-19)fix problem when RedisConnection was GC collected after unhandledasyncio.CancelledErrorfix defaultdbargument for pool/connection in cluster mode2.2.1 (2022-07-18)(revert) apply cluster state only if cluster metadata is changed2.2.0 (2022-07-18)fetch several cluster state candidates from cluster for choose best metadata for final local stateapply cluster state only if cluster metadata is changedFIX: handle closed pubsub connection before gc its collected that triggerTask was destroyed but it is pending!message in logimprove logging in state loader2.1.0 (2022-07-10)fix bug whenConnectionsPool.acquire()is stuck because closed PUB/SUB connection is not cleanup fromusedsetfixConnectionsPool.acquire()incorrect wakeup order for connection waiters when connection is releasedConnectionsPool.execute()now acquire dedicate connection for execution if command is blocking, ex.BLPOPConnectionsPool.execute()now raisesValueErrorexception for PUB/SUB family commandInConnectionsPoolPUB/SUB dedicated connections now is closing onclose()calladdaioredis_cluster.abc.AbcConnectionabstract classadd propertyreadonlyand methodset_readonly()foraioredis_cluster.abc.AbcConnectionandaioredis_cluster.abc.AbcPoolaioredis_cluster.Clusternow requirepool_clsimplementation fromaioredis_cluster.abc.AbcPooladdsslargument for factoriescreate_cluster,create_redis_clusterandClusterconstructoradd 10% jitter for cluster state auto reload intervalfix incorrect iterate free connections inselect(),auth()methods forConnectionsPool2.0.0 (2022-06-20)includeaioredis==1.3.1source code intoaioredis_cluster._aioredisand introduceaioredis_cluster.aioredisbut for compatible and migration periodthis release have not backward incompatible changesDEPRECATION WARNING:you must migrate fromimport aioredistoimport aioredis_cluster.aioredisbecauseaioredis_clusterstarts vendorizeaioredispackage and maintain it separately. Usingaioredispackagewill be removed in v3fix reacquire connection inaioredic.ConnectionsPoolafter Redis node failure1.8.0 (2022-05-20)Addxadd_620commands method for supportXADDoptions for Redis 6.2+1.7.1 (2021-12-15)addClusterState.slots_assignedrequire reload cluster state for some cases withUncoveredSlotError1.7.0 (2021-12-15)addexecute_timeoutforManagerimprove cluster state reload loggingreduce number of addresses to fetch cluster stateacquire dedicate connection from pool to fetch cluster stateextendClusterStateby new attributes:state,state_from,current_epoch1.6.1 (2021-11-23)fix keys extraction forXREADandXREADGROUPcommands1.6.0 (2021-11-20)make publicAddress,ClusterNodeandClusterStatestructs. Available by importfrom aioredis_cluster importClusterprovides some new helpful methods:get_master_node_by_keys(*keys)- return masterClusterNodewhich contains keyskeyscreate_pool_by_addr(addr, **kwargs)- create connection pool byaddrand return pool wrapped bycommands_factoryfromClusterconstructor. 
By default isaioredis_cluster.RedisClusterinstance.get_cluster_state()- returnClusterStateinstance with recent known cluster state received from Redis clusterextract_keys(command_sequence)- returns keys of command sequencedroppytest-aiohttpplugin for testsaddpytest-asynciodependency for testsswitchasynctest->mocklibrary for aio testsdropattrsdependency. For Python 3.6 you need installdataclassesfix extract keys forBLPOP/BRPOPcommandsadd support keys extraction forZUNION,ZINTER,ZDIFF,ZUNIONSTORE,ZINTERSTORE,ZDIFFSTOREcommandsacquire dedicate connection from pool for potential blocking commands likeBLPOP,BRPOP,BRPOPLPUSH,BLMOVE,BLMPOP,BZPOPMIN,BZPOPMAX,XREAD,XREADGROUP1.5.2 (2020-12-14)README update1.5.1 (2020-12-11)speedup crc16. Use implementation from python stdlib1.5.0 (2020-12-10)removestate_reload_frequencyfromClusterManager.state_reload_intervalnow is one relevant option for state auto reloaddefaultstate_reload_intervalincreased and now is 300 seconds (5 minutes)commands registry loads only once, on cluster state initializeimprove failover. First connection problem cause retry to random slot replicaimprove python3.9 supportdefaultidle_connection_timeoutnow is 10 minutes1.4.0 (2020-09-08)fixaioredis.locks.Lockissue (https://github.com/aio-libs/aioredis/pull/802,bpo32734)nowaioredis_cluster.Clusterdo not acquire dedicate connection for every executeaioredis_clusternow requirespython>=3.6.51.3.0 (2019-10-23)improve compatible with Python 3.8improve failover logic while command timed outread-only commands now retries if attempt_timeout is reachedadd required dependenyasync_timeoutaioredisdependency bound now isaioredis >=1.1.0, <2.0.01.2.0 (2019-09-10)add timeout for command execution (per execution try)add Cluster optionattempt_timeoutfor configure command execution timeout, default timeout is 5 secondsCluster.execute_pubsub() fixes1.1.1 (2019-06-07)CHANGES fix1.1.0 (2019-06-06)Cluster state auto reloadnewstate_reload_frequencyoption to configure state reload frequencynewstate_reload_intervaloption to configure state auto reload intervalfollow_clusteroption enable load cluster state from previous cluster state nodesestablish connection only for master nodes after cluster state loadchange default commands_factory to aioredis_cluster.RedisCluster instead aioredis.Redisall cluster info commands always returns structs with str, not byteskeys_masterandall_mastersmethods now try to ensure cluster state instead simply raise exception if connection lost to cluster node, for examplemax_attemptsalways defaults fix1.0.0 (2019-05-29)Library full rewriteCluster state auto reloadCommand failover if cluster node is down or key slot resharded0.2.0 (2018-12-27)Pipeline and MULTI/EXEC cluster implementation with keys distribution limitation (because cluster)0.1.1 (2018-12-26)Python 3.6+ only0.1.0 (2018-12-24)Initial release based on aioredis PR (https://github.com/aio-libs/aioredis/pull/119)ContributorsAnton IlyushenkovVadim Pushtaeverastovroman901Alexander Malev |
aioredis_fastapi | aioredis_fastapi is an asynchronous redis based session backend for FastAPI powered applications.

🚸 This repository is currently under testing, kind of production-ready. 🚸

🛠️ Requirements

aioredis_fastapi requires Python 3.9 or above.

To install Python 3.9, I recommend using pyenv. You can refer to this section of the readme file on how to install poetry and pyenv into your linux machine.

🚨 Installation

With pip:

python3.9 -m pip install aioredis-fastapi

or by checking out the repo and installing it with poetry:

git clone https://github.com/wiseaidev/aioredis_fastapi.git && cd aioredis_fastapi && poetry install

🚸 Usage

from typing import Any
from fastapi import Depends, FastAPI, Request, Response
from aioredis_fastapi import (
    get_session_storage,
    get_session,
    get_session_id,
    set_session,
    del_session,
    SessionStorage,
)

app = FastAPI(title=__name__)

@app.post("/set-session")
async def _set_session(
    request: Request,
    response: Response,
    session_storage: SessionStorage = Depends(get_session_storage),
):
    session_data = await request.json()
    await set_session(response, session_data, session_storage)

@app.get("/get-session")
async def _get_session(session: Any = Depends(get_session)):
    return session

@app.post("/del-session")
async def _delete_session(
    session_id: str = Depends(get_session_id),
    session_storage: SessionStorage = Depends(get_session_storage),
):
    await del_session(session_id, session_storage)
    return None

🛠️ Custom Config

from aioredis_fastapi import settings
from datetime import timedelta
import random

settings(
    redis_url="redis://localhost:6379",
    session_id_name="session-id",
    session_id_generator=lambda: str(random.randint(1000, 9999)),
    expire_time=timedelta(days=1)
)

🌐 Interacting with the endpoints

from httpx import AsyncClient
import asyncio
from aioredis_fastapi.config import settings

async def main():
    client = AsyncClient()
    r = await client.post(
        "http://127.0.0.1:8000/set-session",
        json=dict(a=1, b="data", c=True)
    )
    r = await client.get(
        "http://127.0.0.1:8000/get-session",
        cookies={settings().session_id_name: "ssid"}
    )
    print(r.text)
    return r.text

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
    loop.run_until_complete(main())
finally:
    loop.close()
    asyncio.set_event_loop(None)

🎉 Credits

The following projects were used to build and test aioredis_fastapi: python, poetry, pytest, flake8, coverage, rstcheck, mypy, pytestcov, tox, isort, black, precommit.

👋 Contribute

If you are looking for a way to contribute to the project, please refer to the Guideline.

📝 License

This program and the accompanying materials are made available under the terms and conditions of the GNU GENERAL PUBLIC LICENSE. |
aioredis-lock | aioredis_lock

Implementation of distributed locking with aioredis, an asyncio based redis client.

This is a standalone lib until, and if, aio-libs/aioredis#573 is accepted.

Usage

You need an aioredis.RedisConnection or aioredis.ConnectionsPool already created.

Mutex

from aioredis_lock import RedisLock, LockTimeoutError

try:
    async with RedisLock(
        pool,
        key="foobar",
        # how long until the lock should expire (seconds). this can be extended
        # via `await lock.extend(30)`
        timeout=30,
        # you can customize how long to allow the lock acquisitions to be
        # attempted.
        wait_timeout=30,
    ) as lock:
        # If you get here, you now have a lock and are the only program that
        # should be running this code at this moment.

        # do some work...

        # we may want it longer...
        await lock.extend(30)

except LockTimeoutError:
    # The lock could not be acquired by this worker and we should give up
    pass

Simple Leader/Follower(s)

Let's suppose you need a simple leader/follower type implementation where you have a number of web-workers but just want 1 to perform a repeated task. In the case the leader fails someone else should pick up the work. Simply pass wait_timeout=None to RedisLock allowing the worker to keep trying to get a lock for when the leader eventually fails. The main complication here is extending the lock and validating the leader still owns it.

from aioredis_lock import RedisLock

# if the lock is lost, we still want to be a follower
while True:
    # wait indefinitely to acquire a lock
    async with RedisLock(pool, "shared_key", wait_timeout=None) as lock:
        # hold the lock as long as possible
        while True:
            if not await lock.is_owner():
                logger.debug("We are no longer the lock owner, falling back")
                break

            # do some work

            if not await lock.renew():
                logger.debug("We lost the lock, falling back to follower mode")
                break

This mostly delegates the work of selecting and more importantly promoting leaders. |
aioredis-models | aioredis-models

A wrapper over aioredis that models Redis data as simple data structures.

Supported data structures
- Keys
- Strings
- Lists
- Hash maps
- Sets
- Double hash maps

Requirements
- Python 3.6+
- aioredis requirements

Documentation

Detailed documentation is available at https://aioredis-models.readthedocs.io/.

Usage

Construction of all data structures requires at least an aioredis.Redis instance and a key. For example, to create a RedisString:

from aioredis_models import RedisString  # note: the import uses an underscore, unlike the PyPI package name

redis_string = RedisString(redis, 'my-string')

Once a model has been constructed, various functions can be used to interact with it.

import aioredis
from aioredis_models import RedisString

async def do_work(redis_string: RedisString):
    stored_value = await redis_string.get()
    print(stored_value)

Contributing

The library is currently in very early stages of development and there is a lot of room for growth. As such, contributions are welcome. To contribute, create a pull request into the main branch. Make sure the tests pass and there are no linting errors. Also, please update documentation, if needed.

Testing

The easiest way to run the tests is through docker-compose and Docker:

docker-compose up --build unit-test

To run directly on the host:

pip3 install -r requirements.txt
tox -e unit-py39

End-to-end tests can also be run with docker-compose:

docker-compose up --build e2e-test

Linting

Similar to testing, linting rules can be run through:

docker-compose up --build lint

To run directly on the host:

pip3 install -r requirements.txt
pylint aioredis_models

Documentation

Documentation can get regenerated by starting the generate-docs service in docker-compose:

docker-compose up --build generate-docs

To run directly on the host:

pip3 install -r requirements.txt
sphinx-apidoc -f -o docs/source aioredis_models && (cd docs && make html)

License

This library is offered under the MIT license. |
aio-redis-mq | aio_redis_mq

Lightweight Message Queue & Broker based on async python redis streams.

Suitable Application Environment

Modern software applications have moved from being a single monolithic unit to loosely coupled collections of services. While this new architecture brings many benefits, those services still need to interact with each other, creating the need for robust and efficient messaging solutions.

The following problems are suitable for using message queuing:
- Asynchronous processing
- Flow control
- Service decoupling
- Connect flow computing
- As a publish / subscribe system

Installation

pip install aio-redis-mq

Quick Start

import asyncio
import time
from aio_redis_mq import MQProducer, MQConsumer

_redis_url = 'redis://root:xxxxx@localhost/1'

async def producer_task(producer):
    for _ in range(0, 10):
        await asyncio.sleep(1)
        send_msg_id = await producer.send_message(
            {'msg': f'msg_{_}', 'content': time.strftime("%Y-%m-%d %H:%M:%S")}
        )
        print(f'producer_task time at {time.strftime("%Y-%m-%d %H:%M:%S")}', f'message id={send_msg_id}')

async def consumer_task(consumer: MQConsumer, consumer_index: int):
    for _ in range(0, 10):
        msg = await consumer.block_read_messages(block=1500)
        print(f'consumer_{consumer_index} block read message', msg)

async def main():
    # one producer
    producer = MQProducer('pub_stream', redis_name='_redis_local', redis_url=_redis_url)
    # three consumers
    consumer1 = MQConsumer('pub_stream', redis_name='_redis_local', redis_url=_redis_url)
    consumer2 = MQConsumer('pub_stream', redis_name='_redis_local', redis_url=_redis_url)
    consumer3 = MQConsumer('pub_stream', redis_name='_redis_local', redis_url=_redis_url)

    await asyncio.gather(
        producer_task(producer),
        consumer_task(consumer1, 1),
        consumer_task(consumer2, 2),
        consumer_task(consumer3, 3),
    )

if __name__ == '__main__':
    asyncio.run(main())

Group Consumer

import asyncio
import time
from aio_redis_mq import MQProducer, GroupManager, Group, GroupConsumer

_redis_url = 'redis://root:xxxxx@localhost/1'

async def producer_task(producer):
    for _ in range(0, 10):
        await asyncio.sleep(1)
        print(f'------------------------------------- {_} -------------------------------------')
        send_msg_id = await producer.send_message(
            {'msg': f'msg_{_}', 'content': time.strftime("%Y-%m-%d %H:%M:%S")}
        )
        print(f'group_producer send_message time at {time.strftime("%Y-%m-%d %H:%M:%S")}', f'message id={send_msg_id}')

async def consumer_task(consumer: GroupConsumer):
    for _ in range(0, 10):
        # Here we use a low-level read message API and do not detect pending messages or handle idle messages
        msg = await consumer.block_read_messages(count=1, block=1500)
        await asyncio.sleep(0.05)
        print(f'group_consumer {consumer.consumer_id} group={consumer.group_name} block read message', msg)
        if len(msg) > 0 and len(msg[0][1]) > 0:
            msg_id = msg[0][1][0][0]
            ack_result = await consumer.ack_message(msg_id)
            print(f'group_consumer {consumer.consumer_id} group={consumer.group_name} ack message id='
                  f'{msg_id} {"successful" if ack_result else "failed"}.')

# show info
async def show_groups_infor(group: Group):
    print(f'----------------------------- {group.group_name} ---------- groups info ------------------------------------')
    group_info = await group.get_groups_info()
    print(f'group name: {group.group_name} groups info : {group_info}')
    print(f'----------------------------- {group.group_name} --------- consumer info -----------------------------------')
    consumer_info = await group.get_consumers_info()
    print(f'group name: {group.group_name} consumer info : {consumer_info}')
    print(f'----------------------------- {group.group_name} -------- pending info -------------------------------------')
    pending_info = await group.get_pending_info()
    print(f'group name: {group.group_name} pending info : {pending_info}')

async def main():
    # create one producer
    producer = MQProducer('group_stream1', redis_name='_group_redis_', redis_url=_redis_url)
    # create group manager, via same stream key, same redis_name
    group_manager = GroupManager('group_stream1', redis_name='_group_redis_', redis_url=_redis_url)

    # create first group
    group1 = await group_manager.create_group('group1')
    # create two consumers in the same group
the same groupconsumer1=awaitgroup1.create_consumer('consumer1')consumer2=awaitgroup1.create_consumer('consumer2')# create second groupgroup2=awaitgroup_manager.create_group('group2')# create three consumers in the same groupconsumer3=awaitgroup2.create_consumer('consumer3')consumer4=awaitgroup2.create_consumer('consumer4')consumer5=awaitgroup2.create_consumer('consumer5')awaitasyncio.gather(producer_task(producer),consumer_task(consumer1),consumer_task(consumer2),consumer_task(consumer3),consumer_task(consumer4),consumer_task(consumer5))print('------------------------------------- show total infor -------------------------------------')stream_info=awaitgroup_manager.get_stream_info(group_manager.stream_key)print(f'stream_key:{group_manager.stream_key}stream info :{stream_info}')awaitshow_groups_infor(group1)awaitshow_groups_infor(group2)if__name__=='__main__':asyncio.run(main())More ExampleFor more examples, please query the example folder.About Redis streamsThe Redis Stream is a new data type introduced with Redis 5.0, which models a log data structure in a more abstract way.
Redis Streams doubles as a communication channel for building streaming architectures and as a log-like data structure
for persisting data, making Streams the perfect solution for event sourcing.

The stream type published in Redis 5.0 is also used to implement typical message queues.
The emergence of this stream type meets almost all the requirements of message queues,
including but not limited to:

- Serialized generation of message IDs
- Message traversal
- Blocking and non-blocking reading of messages
- Group consumption of messages
- Unfinished (pending) message processing
- Message queue monitoring

Comparison of basic concepts

Common distributed message systems include RabbitMQ, RocketMQ, Kafka, Pulsar and Redis streams.

Redis streams vs Kafka

| Kafka | Redis Streams | Description |
| --- | --- | --- |
| Record | Message | Objects to be processed in the message engine |
| Producer | Producer | Clients that publish new messages to topics |
| Consumer | Consumer | Clients that subscribe to new messages from topics |
| Consumer Group | Consumer Group | A group composed of multiple consumer instances that can consume the same topic at the same time, to achieve high throughput |
| Broker | Cluster Node | Servers form the storage layer, with leader-follower replicas |
| Topic | Stream data type | Topics are logical containers that carry messages |
| Partitions | Different Redis keys | Redis Streams differ from Kafka (TM) partitions |

Performance

You can use the following tools for performance testing: OpenMessaging Benchmark Framework.

API Reference

MQClient

Client for the message system; can manage and query messages.

__init__(redis_name: Optional[str] = None, redis_url: Optional[str] = None, redis_pool: aioredis.client.Redis = None, **kwargs)
Create an MQ Client instance.
- redis_name: name for the cached redis client
- redis_url: redis server url
- redis_pool: aioredis.client.Redis instance, defaults to None

get_stream_length(stream_key: KeyT)
Returns the number of elements in a given stream.
- stream_key: key of the stream.

query_messages(stream_key: KeyT, min_id: StreamIdT = "-", max_id: StreamIdT = "+", count: Optional[int] = None)
Query message values from min_id to max_id, with a count limit, in a given stream.
- stream_key: key of the stream.
- min_id: first stream ID. Defaults to '-', meaning the earliest available.
- max_id: last stream ID. Defaults to '+', meaning the latest available.
- count: if set, only return this many items, beginning with the earliest available.

reverse_query_messages(stream_key: KeyT, min_id: StreamIdT = "-", max_id: StreamIdT = "+", count: Optional[int] = None)
Query message values in reverse order from min_id to max_id, with a count limit, in a given stream.

get_stream_info(stream_key: KeyT)
Returns general information about the stream.

delete_message(stream_key: KeyT, *ids: StreamIdT)
Deletes one or more messages from a stream.
- stream_key: key of the stream.
- *ids: message ids to delete.

trim_stream(stream_key: KeyT, maxlen: int, approximate: bool = True)
Trims old messages from a stream.
- stream_key: key of the stream.
- maxlen: truncate old stream messages beyond this size
- approximate: the actual stream length may be slightly more than maxlen

client = MQClient(redis_name='my_redis', redis_url='redis://root:xxxxx@localhost/0')

# get stream length
stream_length = await client.get_stream_length('_test_stream1')

# get stream info
stream_info = await client.get_stream_info('_test_stream1')
assert stream_info.get('length') == stream_length

# get first_message_info
first_message_info = await client.query_messages('_test_stream1', count=1)
# get last_message_info
last_message_info = await client.reverse_query_messages('_test_stream1', count=1)
assert first_message_info[0] == stream_info.get('first-entry')
assert last_message_info[0] == stream_info.get('last-entry')

MQProducer <- MQClient

Message producer: an MQClient with a specific stream key.

__init__(stream_key: KeyT, redis_name: str = None, redis_pool: aioredis.client.Redis = None, **kwargs)
Message producer in the message system, based on a specific stream key.
- stream_key: key of the stream
- redis_name: name for the cached redis client
- redis_url: redis server url
- redis_pool: aioredis.client.Redis instance, defaults to None

send_message(message: Dict[FieldT, EncodableT], msg_id: StreamIdT = "*", maxlen: int = None, approximate: bool = True)
Coroutine. Send message content to a stream (which is a message container) and return the message id.
- message: dict of field/value pairs to insert into the stream
- msg_id: location to insert this record. By default it is appended.
- maxlen: max number of messages; truncate old stream members beyond this size
- approximate: the actual stream length may be slightly more than maxlen

producer = MQProducer('pub_stream', redis_name='my_redis', redis_url='redis://root:xxxxx@localhost/0')
send_msg_id = await producer.send_message({'msg_key1': 'value1', 'msg_key2': 'value2'})

MQConsumer <- MQClient

Message consumer: an MQClient with a specific stream key.

__init__(stream_key: KeyT, redis_name: str = None, redis_pool: aioredis.client.Redis = None, **kwargs)
Message consumer in the message system, based on a specific stream key.
- stream_key: key of the stream
- redis_name: name for the cached redis client
- redis_url: redis server url
- redis_pool: aioredis.client.Redis instance, defaults to None

read_messages(streams: Dict[KeyT, StreamIdT], count: Optional[int] = None)
Coroutine. Read messages from streams as message containers.
- streams: a dict of stream keys to stream IDs, where IDs indicate the last ID already seen.
- count: if set, only return this many items, beginning with the earliest available.

block_read_messages(*stream_key: KeyT, count: Optional[int] = None, block: Optional[int] = None)
Coroutine. Block and monitor multiple streams for new data.
- stream_key: key of the stream.
- count: if set, only return this many items, beginning with the earliest available.
- block: number of milliseconds to wait, if nothing is already present.

consumer = MQConsumer('pub_stream', redis_name='my_redis', redis_url='redis://root:xxxxx@localhost/0')
# block read new message
new_msg = await consumer.block_read_messages(block=1500)
# read messages from msg_id (0 or another id) in a single stream (pub_stream)
read_msgs = await consumer.read_messages({'pub_stream': 0}, count=10)

GroupManager

__init__(stream_key: KeyT, redis_name: str = None, **kwargs)
Group manager in the message system, based on a specific stream key.
- stream_key: key of the stream
- redis_name: name for the cached redis client
- redis_url: redis server url

create_group(group_name: GroupT, msg_id: StreamIdT = "$", mkstream: bool = True)
Create a new consumer group associated with a stream.
- group_name: name of the consumer group
- msg_id: ID of the last item in the stream to consider already delivered.
- mkstream: a boolean indicating whether to create a new stream

destroy_group(group_name: GroupT)
Destroy a consumer group.
- group_name: name of the consumer group

get_groups_info()
Returns general information about the consumer groups of the stream.

group_manager = GroupManager('pub_stream', redis_name='my_redis', redis_url='redis://root:xxxxx@localhost/0')
# create group
group = await group_manager.create_group('group')

Group

create_consumer(consumer_id: ConsumerT)
Create a consumer instance in the group.
- consumer_id: id of the consumer.

delete_consumer(consumer_id: ConsumerT)
Remove a specific consumer from a consumer group.
- consumer_id: id of the consumer.

set_msg_id(msg_id: StreamIdT)
Set the consumer group last delivered ID to something else.
- msg_id: ID of the last item in the stream to consider already delivered

get_groups_info()
Returns general information about the consumer groups of the stream.

get_consumers_info()
Returns general information about the consumers in the group. Only returns consumers which have read messages.

get_pending_info()
Returns information about pending messages of a group.

query_pending_messages(min_msg_id: Optional[StreamIdT], max_msg_id: Optional[StreamIdT], count: Optional[int], consumer_id: Optional[ConsumerT] = None)
Returns information about pending messages, in a range.
- min_msg_id: minimum message ID
- max_msg_id: maximum message ID
- count: number of messages to return
- consumer_id: id of a consumer to filter by (optional)

ack_message(*msg_id: StreamIdT)
Acknowledges the successful processing of one or more messages.
- msg_id: message ids to acknowledge.

claim_message(consumer_id: ConsumerT, min_idle_time: int, msg_ids: Union[List[StreamIdT], Tuple[StreamIdT]], idle: Optional[int] = None, time: Optional[int] = None, retrycount: Optional[int] = None, force: bool = False, justid: bool = False)
Changes the ownership of a pending message. In the context of a stream consumer group, this command changes the ownership of a pending message, so that the new owner is the consumer specified as the command argument.
- consumer_id: name of a consumer that claims the message.
- min_idle_time: filter messages that were idle less than this amount of milliseconds
- msg_ids: non-empty list or tuple of message IDs to claim
- idle: set the idle time (last time it was delivered) of the message, in ms
- time: optional integer. This is the same as idle, but instead of a relative amount of milliseconds, it sets the idle time to a specific Unix time (in milliseconds).
- retrycount: optional integer. Set the retry counter to the specified value. This counter is incremented every time a message is delivered again.
- force: optional boolean, false by default. Creates the pending message entry in the PEL even if certain specified IDs are not already in the PEL assigned to a different client.
- justid: optional boolean, false by default. Return just an array of IDs of messages successfully claimed, without returning the actual message.
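Before moving on to GroupConsumer, here is a minimal, hedged sketch of reclaiming stalled messages with the Group API documented above; the stream, group and consumer names are hypothetical, and the exact tuple shape of pending-message entries is an assumption, not something this README specifies.

from aio_redis_mq import GroupManager

async def reclaim_stalled_messages(redis_url: str):
    manager = GroupManager('group_stream1', redis_name='_group_redis_', redis_url=redis_url)
    group = await manager.create_group('group1')
    # Look at up to 10 pending (delivered but unacknowledged) messages.
    pending = await group.query_pending_messages('-', '+', 10)
    # Assumed entry shape: (msg_id, consumer_id, idle_ms, delivery_count).
    msg_ids = [entry[0] for entry in pending]
    if msg_ids:
        # Hand messages idle for more than 60 seconds over to a rescue consumer.
        await group.claim_message('rescue_consumer', 60000, msg_ids)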
GroupConsumer

read_messages(streams: Dict[KeyT, StreamIdT], count: Optional[int] = None, noack: bool = False)
Read from a stream via a consumer group.
- streams: a dict of stream names to stream IDs, where IDs indicate the last ID already seen.
- count: if set, only return this many items, beginning with the earliest available
- noack: do not add messages to the PEL (Pending Entries List)

block_read_messages(*stream_key: KeyT, block: Optional[int] = None, count: Optional[int] = None, noack: bool = False)
Block read from a stream via a consumer group.
- stream_key: a list of stream keys
- block: number of milliseconds to wait, if nothing is already present.
- count: if set, only return this many items, beginning with the earliest available
- noack: do not add messages to the PEL (Pending Entries List)

query_pending_messages(min_msg_id: Optional[StreamIdT], max_msg_id: Optional[StreamIdT], count: Optional[int])
Returns information about pending messages, in a range.
- min_msg_id: minimum message ID
- max_msg_id: maximum message ID
- count: number of messages to return

ack_message(*msg_id: StreamIdT)
Acknowledges the successful processing of one or more messages.
- msg_id: message ids to acknowledge.

Developer

kavinbj |
aioredisorm | AIORedisORM

A Python class for interacting with Redis using asyncio and aioredis.

Installation

You can install AIORedisORM using pip:

pip install aioredisorm

Example Usage

Here is an example that demonstrates the usage of the AIORedisORM class:

import asyncio
from aioredisorm import AIORedisORM

async def main():
    # Add a prefix to the beginning of each key
    redis_client = AIORedisORM(key_prefix='my_prefix')
    await redis_client.connect()

    # Set a value
    await redis_client.set_value('my_key', 'my_value', ex=12)

    # Get a value
    result = await redis_client.get_value('my_key')
    print(result)  # Output: b'my_value'

    # Set a hash
    await redis_client.set_hash('my_hash', {'key1': 'value1', 'key2': 'value2', 'key3': 13})

    # Set a hash with expiration
    await redis_client.set_hash('my_hash', {'key1': 'value1', 'key2': 'value2', 'key3': 13}, ex=5)

    # Get a hash
    hash_result = await redis_client.get_hash('my_hash')
    print(hash_result)  # Output: {b'key1': b'value1', b'key2': b'value2', b'key3': b'123'}

    await asyncio.sleep(5)  # Wait for the expiration to pass
    hash_result = await redis_client.get_hash('my_hash')
    print(hash_result)  # Output: {}

    # Decode the bytes to a string if needed
    result = result.decode('utf-8')
    print(result)  # Output: 'my_value'

    # Set a set
    await redis_client.set_set('my_set', 'value1', 'value2', 'value3')

    # Get a set
    set_result = await redis_client.get_set('my_set')
    print("set_result", set_result)  # Output: {b'value1', b'value2', b'value3'}

    # Transaction example
    commands = [
        ('set', 'key1', 'value1'),
        ('set', 'key2', 'value2')
    ]
    results = await redis_client.execute_transaction(commands)
    print(results)  # Output: [(True, True)]

    # Set a list
    await redis_client.set_list('my_list', 'value1', 'value2', 'value3')

    # Get a list
    list_result = await redis_client.get_list('my_list')
    print(list_result)  # Output: [b'value1', b'value2', b'value3']

    # Get the expiration time of a key
    ttl, pttl = await redis_client.get_key_expiration('key1')
    print(f"TTL of 'my_key': {ttl} seconds")
    print(f"PTTL of 'my_key': {pttl} milliseconds")

    # Close the connection
    await redis_client.close()

# Run the async example
asyncio.run(main())

Make sure to import the AIORedisORM class and replace 'my_prefix' with your desired key prefix. |
aioredis-rate-limiter | aioredis-rate-limiter

Rate limiter implemented on Redis.

Implementation of a distributed rate limiter with aioredis, an asyncio based redis client. Simple rate limiter based on Redis DB.

The key features are:

- Safe under concurrent use
- Rate limiting by request count
- Rate limiting by time window

Usage

We want to limit requests from 5 pods to a limited resource. Limits for the resource: no more than 10 requests per 20 seconds.

locker = AioRedisRateLimiter(redis, rate_limit=10, rate_key_ttl=20)

import asyncio
import os

from aioredis.client import Redis

from aioredis_rate_limiter import AioRedisRateLimiter

class Executor:
    def __init__(self, name: str, locker: AioRedisRateLimiter, task_count: int = 10):
        self._locker = locker
        self._task_count = task_count
        self._name = name

    async def process(self):
        for i in range(self._task_count):
            while True:
                is_ok = await self._locker.acquire()
                if is_ok:
                    print(f'Executor {self._name} by {i + 1}')
                    break
                else:
                    await asyncio.sleep(1)

async def main():
    host = os.getenv('REDIS_HOST')
    db = os.getenv('REDIS_DB')
    redis = Redis.from_url(host, db=db, encoding="utf-8", decode_responses=True)
    locker = AioRedisRateLimiter(redis, rate_limit=10, rate_key_ttl=15)

    w1 = Executor('first', locker, 10)
    w2 = Executor('helper', locker, 8)
    w3 = Executor('lazzy', locker, 5)

    tasks = [w1.process(), w2.process(), w3.process()]
    await asyncio.gather(*tasks)

if __name__ == '__main__':
    asyncio.run(main()) |
aioredis-rpc | aioredis-rpc

An RPC interface using aioredis and pydantic.

Usage

pip install aioredis-rpc

pydantic is used to model complex objects, which are transparently serialized
and packed into messages using msgpack.

# Define Pydantic models
class FileData(BaseModel):
    filename: str
    data: bytes

Define a class using the @endpoint decorator to specify which methods will be
accessible via rpc.

from redisrpc import endpoint

# Define an RPC class
class Dropbox:
    files: Dict[str, FileData]
    max_files: int

    def __init__(self, max_files: int = 1000):
        self.files = dict()
        self.max_files = max_files

    @endpoint
    async def upload_file(self, file: FileData) -> int:
        if len(self.files) >= self.max_files:
            # Errors are propagated to the client-side
            raise Exception('too many files')
        self.files[file.filename] = file
        return len(file.data)

    @endpoint
    def get_file_names(self) -> List[str]:
        return list(self.files.keys())

    @endpoint
    async def download_file(self, name: str) -> FileData:
        return self.files[name]

Use the create_server function to make an instance of your server-side rpc
class. The server instance will be assigned an rpc attribute to access server
functions like connect and disconnect. Once connect is called, methods
decorated with @endpoint will be invoked automatically by remote calls from
the client.

NOTE: The RpcProvider.connect method is non-blocking.

server = create_server(Dropbox, max_files=2)
# Returns once connected to redis
await server.rpc.connect(dsn="redis://localhost")
# Wait forever
while True:
    await asyncio.sleep(1)

The create_client function creates a faux instance of the rpc class with only
the methods decorated by @endpoint present. When these methods are called by
the client, the function arguments are serialized and published to redis.

NOTE: If there are no subscribers to the redis channel then the client will
throw an RpcNotConnectedError.

client = create_client(Dropbox)
await client.rpc.connect(dsn="redis://localhost")

Now that both ends are connected, the @endpoint decorated methods may be called
as if they were accessing the actual class passed to create_client.

file1 = FileData(filename='file1', data=b'1234')
size = await client.upload_file(file1) |
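A hedged addendum to the aioredis-rpc entry above: since the note says calls fail when no server is subscribed, a client can guard against that case. The import location of RpcNotConnectedError is an assumption based on the redisrpc module used in the example.

from redisrpc import RpcNotConnectedError  # assumed import path

async def safe_list_files(client):
    try:
        return await client.get_file_names()
    except RpcNotConnectedError:
        # No subscribers on the redis channel, i.e. no server is listening.
        return []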
aioredis-semaphore | aioredis-semaphore
==================

A distributed semaphore and mutex built on Redis.

Installation
------------

To install aioredis-semaphore, simply::

pip install aioredis-semaphore

Examples::

# -*- coding:utf-8 -*-
import anyio
from aioredis import Redis
from anyio import create_task_group, run

from aioredis_semaphore import Semaphore

semaphore = Semaphore(Redis(), count=2, namespace="example")

async def task(i: int) -> None:
    async with semaphore:
        print("id: {}".format(i))
        print("sleep...")
        await anyio.sleep(2)

async def main() -> None:
    async with create_task_group() as tg:
        for i in range(5):
            tg.start_soon(task, i)

if __name__ == "__main__":
    run(main)
aioredis-watchdog | No description available on PyPI. |
aioredlock | The asyncioredlockalgorithm implementation.Redlock and asyncioThe redlock algorithm is a distributed lock implementation forRedis. There are many implementations of it in several languages. In this case, this is theasynciocompatible implementation for python 3.5+.UsagefromaioredlockimportAioredlock,LockError,Sentinel# Define a list of connections to your Redis instances:redis_instances=[('localhost',6379),{'host':'localhost','port':6379,'db':1},'redis://localhost:6379/2',Sentinel(('localhost',26379),master='leader',db=3),Sentinel('redis://localhost:26379/4?master=leader&encoding=utf-8'),Sentinel('rediss://:password@localhost:26379/5?master=leader&encoding=utf-8&ssl_cert_reqs=CERT_NONE'),]# Create a lock manager:lock_manager=Aioredlock(redis_instances)# Check wether a resourece acquired by any other redlock instance:assertnotawaitlock_manager.is_locked("resource_name")# Try to acquire the lock:try:lock=awaitlock_manager.lock("resource_name",lock_timeout=10)exceptLockError:print('Lock not acquired')raise# Now the lock is acquired:assertlock.validassertawaitlock_manager.is_locked("resource_name")# Extend lifetime of the lock:awaitlock_manager.extend(lock,lock_timeout=10)# Raises LockError if the lock manager can not extend the lock lifetime# on more then half of the Redis instances.# Release the lock:awaitlock_manager.unlock(lock)# Raises LockError if the lock manager can not release the lock# on more then half of redis instances.# The released lock become invalid:assertnotlock.validassertnotawaitlock_manager.is_locked("resource_name")# Or you can use the lock as async context manager:try:asyncwithawaitlock_manager.lock("resource_name")aslock:assertlock.validisTrue# Do your stuff having the lockawaitlock.extend()# alias for lock_manager.extend(lock)# Do more stuff having the lockassertlock.validisFalse# lock will be released by context managerexceptLockError:print('Lock not acquired')raise# Clear the connections with Redis:awaitlock_manager.destroy()How it worksThe Aioredlock constructor accepts the following optional parameters:redis_connections: A list of connections (dictionary of host and port and kwargs foraioredis.create_redis_pool(), or tuple(host, port), or string Redis URI) where the Redis instances are running. The default value is[{'host':'localhost', 'port': 6379}].retry_count: An integer representing number of maximum allowed retries to acquire the lock. The default value is3times.retry_delay_minandretry_delay_max: Float values representing waiting time (in seconds) before the next retry attempt. The default values are0.1and0.3, respectively.In order to acquire the lock, thelockfunction should be called. If the lock operation is successful,lock.validwill be true, if the lock is not acquired then theLockErrorwill be raised.From that moment, the lock is valid until theunlockfunction is called or when thelock_timeoutis reached.Call theextendfunction to reset lifetime of the lock tolock_timeoutinterval.Use theis_lockedfunction to check if the resource is locked by other redlock instance.In order to clear all the connections with Redis, the lock_managerdestroymethod can be called.To-do |
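A hedged footnote to the aioredlock entry above: the constructor parameters described under "How it works" can be combined like this; the values here are illustrative only.

from aioredlock import Aioredlock

lock_manager = Aioredlock(
    [{'host': 'localhost', 'port': 6379}],
    retry_count=5,         # up to 5 attempts to acquire the lock
    retry_delay_min=0.2,   # wait between 0.2 and 0.5 seconds
    retry_delay_max=0.5,   # before each retry attempt
)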
aioredlock-neorisk | The asyncioredlockalgorithm implementation.Redlock and asyncioThe redlock algorithm is a distributed lock implementation forRedis. There are many implementations of it in several languages. In this case, this is theasynciocompatible implementation for python 3.5+.UsagefromaioredlockimportAioredlock,LockError# Define a list of connections to your Redis instances:redis_instances=[('localhost',6379),{'host':'localhost','port':6379,'db':1},'redis://localhost:6379/2',]# Create a lock manager:lock_manager=Aioredlock(redis_instances)# Check wether a resourece acquired by any other redlock instance:assertnotawaitlock_manager.is_locked("resource_name")# Try to acquire the lock:try:lock=awaitlock_manager.lock("resource_name",lock_timeout=10)exceptLockError:print('Lock not acquired')raise# Now the lock is acquired:assertlock.validassertawaitlock_manager.is_locked("resource_name")# Extend lifetime of the lock:awaitlock_manager.extend(lock,lock_timeout=10)# Raises LockError if the lock manager can not extend the lock lifetime# on more then half of the Redis instances.# Release the lock:awaitlock_manager.unlock(lock)# Raises LockError if the lock manager can not release the lock# on more then half of redis instances.# The released lock become invalid:assertnotlock.validassertnotawaitlock_manager.is_locked("resource_name")# Or you can use the lock as async context manager:try:asyncwithawaitlock_manager.lock("resource_name")aslock:assertlock.validisTrue# Do your stuff having the lockawaitlock.extend()# alias for lock_manager.extend(lock)# Do more stuff having the lockassertlock.validisFalse# lock will be released by context managerexceptLockError:print('Lock not acquired')raise# Clear the connections with Redis:awaitlock_manager.destroy()How it worksThe Aioredlock constructor accepts the following optional parameters:redis_connections: A list of connections (dictionary of host and port and kwargs foraioredis.create_redis_pool(), or tuple(host, port), or string Redis URI) where the Redis instances are running. The default value is[{'host':'localhost', 'port': 6379}].retry_count: An integer representing number of maximum allowed retries to acquire the lock. The default value is3times.retry_delay_minandretry_delay_max: Float values representing waiting time (in seconds) before the next retry attempt. The default values are0.1and0.3, respectively.In order to acquire the lock, thelockfunction should be called. If the lock operation is successful,lock.validwill be true, if the lock is not acquired then theLockErrorwill be raised.From that moment, the lock is valid until theunlockfunction is called or when thelock_timeoutis reached.Call theextendfunction to reset lifetime of the lock tolock_timeoutinterval.Use theis_lockedfunction to check if the resource is locked by other redlock instance.In order to clear all the connections with Redis, the lock_managerdestroymethod can be called.To-do |
aioredlock-py | aioredlock-pySecure and efficient distributed locks (Redisson like) implemetation. Ensure efficient performance with biased locking's implementation, can load more than 1k/s of concurrent requests with default parameter settings.Requirementsaioredis>=2.0.0Installpip install aioredlock-pyFeatureEnsure reliability with context manager.Use lua scripts to ensure atomicity on lock release.Notification prompt you to cancel the following execution if acquisition failsReliable in high concurrency.Documentationhttps://aioredlock-py.readthedocs.ioBasic usageimportasyncioimportaioredisfromaioredlock_pyimportRedissonasyncdefsingle_thread(redis):for_inrange(10):asyncwithRedisson(redis,key="no1")aslock:ifnotlock:# If the lock still fails after several attempts, `__aenter__`# will return None to prompt you to cancel the following executionreturn'Do something, failed to acquire lock'# raise ...# else# Service logic protected by Redissonawaitredis.incr("foo")asyncdeftest_long_term_occupancy(redis):asyncwithRedisson(redis,key="no1",ex=10)aslock:ifnotlock:return;# Service logic protected by Redissonawaitredis.set("foo",0)# By default, a lock is automatically released if no action is# taken for 20 seconds after redisson holds it. Let's assume that# your service logic takes a long time (30s in this case) to process,# you don't need to worry about it causing chaos, because there's# background threads help you automatically renew legally held locks.awaitasyncio.sleep(30)awaitredis.incr("foo")asyncdefmain():redis=aioredis.from_url("redis://localhost")awaitredis.delete("redisson:no1")awaitredis.set("foo",0)awaitasyncio.gather(*(single_thread(redis)for_inrange(20)))assertint(awaitredis.get("foo"))==200# test_long_term_occupancy(redis)asyncio.run(main()) |
aioredux | Pythonic Redux

Pythonic Redux using asyncio. aioredux provides a predictable state container with the following goal: “[Redux] helps
you write applications that behave consistently, run in different environments
…, and are easy to test” (from the Redux documentation).

Free software: Mozilla Public License

This package requires Python 3.4 or higher.

Usage

import asyncio

import aioredux

async def go():
    initial_state = {
        'todos': (),
    }

    def add_todo(text):
        return {'type': 'ADD_TODO', 'text': text}

    def reducer(state, action):
        if action['type'] == 'ADD_TODO':
            todos = state['todos'] + (action['text'],)
            return {'todos': todos}
        return state

    store = await aioredux.create_store(reducer, initial_state)
    await store.dispatch(add_todo('todo text'))
    print(store.state['todos'])

asyncio.get_event_loop().run_until_complete(go())

Implementation notes

dispatch is marked as async although in most cases it functions like
a plain Python function returning a Future. This is done to allow for cases
where dispatch performs a more complicated set of (async) actions.

A Pythonic version of redux-thunk is also included. |
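As a loose illustration of the thunk idea mentioned in the aioredux entry above: a thunk is an action creator that returns a function which performs (async) work and dispatches plain actions itself. This README does not document the exact aioredux thunk API, so the names below are hypothetical and this is a sketch of the pattern, not the library's interface.

import asyncio

def fetch_todos():
    # Hypothetical thunk: returns an async function instead of a plain action dict.
    async def thunk(dispatch, state_func):
        await asyncio.sleep(0.1)  # stand-in for real async I/O
        await dispatch({'type': 'ADD_TODO', 'text': 'loaded from network'})
    return thunk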
aioregistry | AIORegistry

aioregistry is a Python library and CLI tool for inspecting and copying container image data
from and between registries.This library primarily focuses on being a useful tool for dealing with container image
registries. It has very limited support for interpretation of the objects stored within.Library usageFind sub-manifest based on platform.asyncwithAsyncRegistryClient()asclient:manifest_ref=parse_image_name("alpine")manifest=awaitclient.manifest_download(manifest_ref)ifisinstance(manifest,ManifestListV2S2):forsub_manifestinmanifest.manifests:ifsub_manifest.platform.architecture=="amd64":manifest_ref.ref=sub_manifest.digestmanifest=awaitclient.manifest_download(manifest_ref)breakelse:raiseException("Found no matching platform")else:print("Not a manifest list")Download layers of an imageforlayerinmanifest.layers:assertlayer.media_type=="application/vnd.docker.image.rootfs.diff.tar.gzip"blob_ref=RegistryBlobRef(manifest_ref.registry,manifest_ref.repo,layer.digest)# For example we just download into memory. In practice don't do this.blob_data=io.BytesIO(b"".join([chunkasyncforchunkinclient.ref_content_stream(blob_ref)]))withtarfile.open(mode="r|*",fileobj=blob_data)astar:fortarinfointar.getmembers():print(tarinfo.name)CLI copy tool# By default it will pull credentials based on ~/.docker/config.jsonpython-maioregistryubuntu:18.04my.private.registry/my-repo:my-tag# Copy all tags matching regexpython-maioregistryubuntumy.private.registry/my-repo--tag-pattern'18\..*' |
aiorelational | Async relational iterators/generators for manipulating data streams |
aioreloader | Tool that reloads yourasyncio-based application automatically when you
modify the source code.

Most of the code has been borrowed from the Tornado reloader, built mostly by @finiteloop and @bdarnell. Thanks!

Since version 0.3.x, aioreloader natively supports -X python arguments, which is the recommended way to enable development debug mode in aiohttp.

Usage

Here’s an example of usage with the aiohttp framework:

app = aiohttp.web.Application()
aioreloader.start()
aiohttp.web.run_app(app)

To add any file to the watching list (one which is not loaded as a python module):

aioreloader.watch('/etc/app_config.yml')

Requirements

Python - at least 3.5

Installation

$ pip install aioreloader |
aioremootio | aioremootio - An asynchronous API client library for Remootio

aioremootio is an asynchronous API client library for Remootio written in Python 3 and
based onasyncioandaiohttp.Supported functionalities of the deviceWith this client library is currently possible to listen to state changes of aRemootiodevice,
to listen to some events triggered by it, furthermore to operate the gate or garage door connected to it.This client library supports currently the listening to following kind of events triggered by the device:STATE_CHANGEwhich is triggered by the device when its state changesRELAY_TRIGGERwhich is triggered by the device when its control output has been triggered to operate the
connected gate or garage doorLEFT_OPENwhich is triggered by the device when the connected gate or garage door has been left openRESTARTwhich is triggered by the device when it was restartedUsing the libraryThe following example demonstrates how you can use this library.fromtypingimportNoReturnimportloggingimportasyncioimportaiohttpimportaioremootioclassExampleStateListener(aioremootio.Listener[aioremootio.StateChange]):__logger:logging.Loggerdef__init__(self,logger:logging.Logger):self.__logger=loggerasyncdefexecute(self,client:aioremootio.RemootioClient,subject:aioremootio.StateChange)->NoReturn:self.__logger.info("State of the device has been changed. Host [%s] OldState [%s] NewState [%s]"%(client.host,subject.old_state,subject.new_state))asyncdefmain()->NoReturn:logger=logging.getLogger(__name__)logger.setLevel(logging.INFO)handler:logging.Handler=logging.StreamHandler()handler.setFormatter(logging.Formatter(fmt="%(asctime)s[%(levelname)s]%(message)s"))logger.addHandler(handler)connection_options:aioremootio.ConnectionOptions=\aioremootio.ConnectionOptions("192.168.0.1","API_SECRET_KEY","API_AUTH_KEY")state_change_listener:aioremootio.Listener[aioremootio.StateChange]=ExampleStateListener(logger)remootio_client:aioremootio.RemootioClientasyncwithaiohttp.ClientSession()asclient_session:try:remootio_client=awaitaioremootio.RemootioClient(connection_options,client_session,aioremootio.LoggerConfiguration(logger=logger),[state_change_listener])exceptaioremootio.RemootioClientConnectionEstablishmentError:logger.exception("The client has failed to establish connection to the Remootio device.")exceptaioremootio.RemootioClientAuthenticationError:logger.exception("The client has failed to authenticate with the Remootio device.")exceptaioremootio.RemootioError:logger.exception("Failed to create client because of an error.")else:logger.info("State of the device:%s",remootio_client.state)ifremootio_client.state==aioremootio.State.NO_SENSOR_INSTALLED:awaitremootio_client.trigger()else:awaitremootio_client.trigger_open()awaitremootio_client.trigger_close()whileTrue:awaitasyncio.sleep(0.1)if__name__=="__main__":try:asyncio.run(main())exceptKeyboardInterrupt:passTo get theAPI Secret KeyandAPI Auth Keyof yourRemootiodevice you must enable the
API on it according to theRemootio Websocket API documentation.Running the bundled examplesTheproject sourcedoes also contain two examples.The exampleexample.pydemonstrates how you can
use the client as a Python object.The exampleexample_mc.pydemonstrates how you can
use the client as a Python object where it does not establish a connection to the Remootio device automatically
during its initialization.The exampleexample_acm.pydemonstrates how
you can use the client as an asynchronous context manager.To run the bundled examples you mustalso enable the API on yourRemootiodevice to get theAPI Secret KeyandAPI Auth
Keyof it, andadd the source folder/srcof the repository to yourPYTHONPATH.After the two steps described above you can run the bundled examples with the argument--helpto show
the usage information. E.g.:python example.py --helpRunning the bundled testsTo run the bundled tests you must create the.\remootio_device.configuration.jsonfile with a content according
to the following template.{
"host": "IP-ADDRESS-OR-HOST-NAME-OF-YOUR-DEVICE",
"api_secret_key": "API-SECRET-KEY-OF-YOUR-DEVICE",
"api_auth_key": "API-AUTH-KEY-OF-YOUR-DEVICE",
"api_version": API-VERSION-OF-YOUR-DEVICE
}Copyright © 2021 Gergö Gabor Ilyes-Veisz.
Licensed under theApache License, Version 2.0 |
aioreq | Aioreq is a Python asynchronous HTTP client library. It is built on top of TCP sockets and implements the HTTP protocol entirely on its own.

Documentation

Click here

Install

From pypi

$ pip install aioreq

From GitHub

$ git clone https://github.com/karosis88/aioreq
$ pip install ./aioreq

Aioreq can be used as a Python library or as a command-line tool to make HTTP requests.

Basic Usage

Python

>>> import aioreq
>>> response = aioreq.get("http://127.0.0.1:7575/")
>>> response.status
200
>>> content_type = response.headers["content-type"]  # Case insensitive
>>> response.content
b'Hello World'

or in an async context

>>> import asyncio
>>>
>>> async def main():
...     async with aioreq.Client() as client:
...         response = await client.get("http://127.0.0.1:7575")
...         return response
>>> asyncio.run(main())
<Response 200 OK>

CLI

Aioreq cli tools are very similar to curl, so if you've used curl before, you should have no trouble.

$ aioreq http://127.0.0.1:7575/cli_doc
Hello World

When performing HTTP requests, there are a few options available.

--method -X      Specify HTTP method
--verbose -v     Show HTTP request headers
--include -i     Include HTTP response headers
--output -o      Output file
--headers -H     Send custom headers
--data -d        HTTP POST data
--user-agent -A  Set User-Agent header

Here are some examples of requests.

$ aioreq http://127.0.0.1:7575
$ aioreq http://127.0.0.1:7575/cli_doc -d "Bob" -X POST
User Bob was created!
$ aioreq http://127.0.0.1:7575/cli_doc -o /dev/null
$ aioreq http://127.0.0.1:7575/cli_doc -v -H "custom-header: custom-value" "second-header: second-value"

======== REQUEST HEADERS ========
user-agent: python/aioreq
accept: */*
custom-header: custom-value
second-header: second-value
accept-encoding: gzip;q=1, deflate;q=1

Hello

Middlewares

Aioreq now supports 'middleware' power.

The first steps with middleware

Aioreq provides default middlewares to each client.
We can see those middlewares by importing the 'default_middlewares' variable.

>>> import aioreq
>>> aioreq.middlewares.default_middlewares
('RetryMiddleWare', 'RedirectMiddleWare', 'CookiesMiddleWare', 'DecodeMiddleWare', 'AuthenticationMiddleWare')

The first item on this list represents the first middleware that should handle our request (i.e. the closest middleware to our client), while the last index represents the closest middleware to the server.

We can pass our modified middlewares tuple to the Client to override the default middlewares.

>>> client = aioreq.Client(middlewares=aioreq.middlewares.default_middlewares[2:])

This client will no longer redirect or retry responses.

Also, because aioreq stores middlewares in Client objects as linked lists, we can simply change the head of that linked list to skip the first middleware.

>>> client = aioreq.Client()
>>> client.middlewares.__class__.__name__
'RetryMiddleWare'
>>>
>>> client.middlewares = client.middlewares.next_middleware
>>> client.middlewares.__class__.__name__
'RedirectMiddleWare'
>>>
>>> client.middlewares = client.middlewares.next_middleware
>>> client.middlewares.__class__.__name__
'CookiesMiddleWare'

or

>>> client = aioreq.Client()
>>> client.middlewares = client.middlewares.next_middleware.next_middleware
>>> # alternative for client = aioreq.Client(middlewares=aioreq.middlewares.default_middlewares[2:])

Create your own middlewares!

All 'aioreq' middlewares must be subclasses of the class middlewares.MiddleWare

The MiddleWare below would add a 'test-md' header if the request domain is www.example.com

>>> import aioreq
>>>
>>> class CustomMiddleWare(aioreq.middlewares.MiddleWare):
...     async def process(self, request, client):
...         if request.host == 'www.example.com':
...             request.headers['test_md'] = 'test'
...         return await self.next_middleware.process(request, client)
...
>>> client = aioreq.Client()
>>> client.middlewares = CustomMiddleWare(next_middleware=client.middlewares)

Our CustomMiddleWare will now be the first middleware (i.e. closest to the client). Because 'aioreq' middlewares are stored as linked lists, this pattern works (i.e. the same as a linked list insert).

Alternatively, we can alter the list of middlewares that the client receives.

>>> client = aioreq.Client(middlewares=(CustomMiddleWare,) + aioreq.middlewares.default_middlewares)
>>> client.middlewares.__class__.__name__
'CustomMiddleWare'

SSL/TLS

Aioreq supports three attributes related to this topic.

- check_hostname: Checks whether the peer cert hostname matches the server domain.
- verify_mode: Specifies whether the server certificate must be verified.
- keylog_filename: File location for dumping private keys

You can also set the environment variable SSLKEYLOGFILE instead of specifying keylog_filename.

You can use a tool like wireshark to decrypt your HTTPS traffic if you have a file with the private keys.

Example:

$ export SSLKEYLOGFILE=logs

Then just run aioreq.

$ aioreq https://example.com
$ ls -l
total 8
-rw-r--r-- 1 user user 406 Dec 5 17:19 logs

Now, the 'logs' file contains keylogs that can be used to decrypt your TLS/SSL traffic with a tool such as 'wireshark'.

Here are a few examples of how to manage the SSL context for your requests.

import aioreq

dont_verify_cert = aioreq.get("https://example.com", verify_mode=False)
verify_and_dump_logs = aioreq.get("https://example.com", verify_mode=True, keylog_filename="logs")
default_configs = aioreq.get("https://example.com", verify_mode=True, check_hostname=True)

Authentication

If the auth parameter is included in the request, Aioreq will handle authentication.

There are two types of authorization that aioreq can handle.

- Digest Authorization
- Basic Authorization

If the incoming response status code is 401 and the header contains www-authenticate, aioreq will attempt each of the schemes until authorization is complete.

>>> import aioreq
>>> import asyncio
>>> async def send_req():
...     async with aioreq.Client() as cl:
...         return await cl.get('http://httpbin.org/basic-auth/foo/bar', auth=('foo', 'bar'))
>>> resp = asyncio.run(send_req())
>>> resp.status
200

The auth parameter should be a tuple with two elements: login and password.

Authentication is enabled by AuthenticationMiddleWare, so exercise caution when managing middlewares manually.

Benchmarks

In these benchmarks, we compare aioreq and httpx during 999 asynchronous requests, without caching.

You can run these tests on your local machine; the directory `aioreq/benchmarks` contains all of the required modules.

$ cd benchmarks
$ ./run_tests
Benchmarks
---------------------------
aioreq benchmark
Total time: 2.99
---------------------------
httpx benchmark
Total time: 7.60

Supported Features

Aioreq supports the basic features for working with HTTP/1.1. More functionality will be available in future releases. The latest version includes the following features:

- Keep-Alive (Persistent Connections)
- Middlewares
- Keylogs
- Authentication
- Cookies
- Automatic accepting and decoding of responses, using the Accept-Encoding header
- HTTPS support, TLS/SSL verification
- Request Timeouts |
aiorequest | aioRequest

Provides an asynchronous, user-friendly micro HTTP client with nothing but clean objects.

Basically, it is a wrapper over the requests python library with an async/await approach.
Represents asynchronous version ofurequestpackage.ToolsProductionpython 3.7, 3.8asynciolibraryrequestslibraryDevelopmenttravisCIpytestblackmypypylintflake8pydocstyleinterrogatebatsUsageInstallationpipinstallaiorequest
✨🍰✨Quick start>>>importasyncio>>>fromtypingimportTuple>>>fromaiorequest.sessionsimportSession,HttpSession>>>fromaiorequest.responsesimportHTTPStatus,Response,JsonType>>>fromaiorequest.urlsimportHttpUrl>>>>>>>>>asyncdefaioresponse()->Tuple[HTTPStatus,JsonType]:...session:Session...asyncwithHttpSession()assession:...response:Response=awaitsession.get(...HttpUrl(host="xkcd.com",path="info.0.json")...)...returnawaitresponse.status(),awaitresponse.as_json()......>>>>>>asyncio.run(aioresponse())(<HTTPStatus.OK:200>,{"month":"3","num":2284,"link":"","year":"2020","news":"","safe_title":"Sabotage","transcript":"","img":"https://imgs.xkcd.com/comics/sabotage.png","title":"Sabotage","day":"23",})Source [email protected]:aiopymake/aiorequest.git
pythonsetup.pyinstallOr using specific release:pipinstallgit+https://github.com/aiopymake/[email protected] [email protected]:aiopymake/aiorequest.git>>>importaiorequest>>>aiorequest.__doc__'Package provides asynchronous user-friendly HTTP client with clean objects.'⬆ back to topDevelopment notesTestingGenerally,pytesttool is used to organize testing procedure.Please follow next command to run unittests:pytestCIProject has Travis CI integration using.travis.ymlfile thus code analysis (black,pylint,flake8,mypy,pydocstyleandinterrogate) and unittests (pytest) will be run automatically after every made change to the repository.To be able to run code analysis, please execute command below:./analyse-source-code.shThe package is also covered with the installation unit tests based onbatsframework. Please run the following command to launch package unit tests:bats--prettytest-package.batsPACKAGE_NAMEandPACKAGE_VERSIONenvironment variables should be specified prelimirary.Release notesPlease checkchangelogfile to get more details about actual versions and it's release notes.MetaAuthor –Volodymyr Yahello. Please checkAUTHORSfile for all contributors.Distributed under theMITlicense. SeeLICENSEfor more information.You can reach out me at:[email protected]://twitter.com/vyahellohttps://www.linkedin.com/in/volodymyr-yahello-821746127ContributingI would highly appreciate any contribution and support. If you are interested to add your ideas into project please follow next simple steps:Clone the repositoryConfiguregitfor the first time after cloning with yournameandemailpip install -r requirements.txtto install all project dependenciespip install -r requirements-dev.txtto install all development project dependenciesCreate your feature branch (git checkout -b feature/fooBar)Commit your changes (git commit -am 'Add some fooBar')Push to the branch (git push origin feature/fooBar)Create a new Pull RequestWhat's nextAll recent activities and ideas are described at projectissuespage.
If you have ideas you want to change/implement please do not hesitate and create an issue.⬆ back to top |
aio-request | aio-requestThis library simplifies an interaction between microservices:Allows sending requests using various strategiesPropagates a deadline and a priority of requestsExposes client/server metricsExample:importaiohttpimportaio_requestasyncwithaiohttp.ClientSession()asclient_session:client=aio_request.setup(transport=aio_request.AioHttpTransport(client_session),endpoint="http://endpoint:8080/",)response_ctx=client.request(aio_request.get("thing"),deadline=aio_request.Deadline.from_timeout(5))asyncwithresponse_ctxasresponse:pass# process response hereRequest strategiesThe following strategies are supported:Single attempt. Only one attempt is sent.Sequential. Attempts are sent sequentially with delays between them.Parallel. Attempts are sent in parallel one by one with delays between them.Attempts count and delays are configurable.Example:importaiohttpimportaio_requestasyncwithaiohttp.ClientSession()asclient_session:client=aio_request.setup(transport=aio_request.AioHttpTransport(client_session),endpoint="http://endpoint:8080/",)response_ctx=client.request(aio_request.get("thing"),deadline=aio_request.Deadline.from_timeout(5),strategy=aio_request.parallel_strategy(attempts_count=3,delays_provider=aio_request.linear_delays(min_delay_seconds=0.1,delay_multiplier=0.1)))asyncwithresponse_ctxasresponse:pass# process response hereDeadline & priority propagationTo enable it for the server side a middleware should be configured:importaiohttp.webimportaio_requestapp=aiohttp.web.Application(middlewares=[aio_request.aiohttp_middleware_factory()])Expose client/server metricsTo enable client metrics a metrics provider should be passed to the transport:importaiohttpimportaio_requestasyncwithaiohttp.ClientSession()asclient_session:client=aio_request.setup(transport=aio_request.AioHttpTransport(client_session,metrics_provider=aio_request.PROMETHEUS_METRICS_PROVIDER),endpoint="http://endpoint:8080/",)It is an example of how it should be done for aiohttp and prometheus.To enable client metrics a metrics provider should be passed to the middleware:importaiohttp.webimportaio_requestapp=aiohttp.web.Application(middlewares=[aio_request.aiohttp_middleware_factory(metrics_provider=aio_request.PROMETHEUS_METRICS_PROVIDER)])Circuit breakerimportaiohttpimportaio_requestasyncwithaiohttp.ClientSession()asclient_session:client=aio_request.setup_v2(transport=aio_request.AioHttpTransport(client_session),endpoint="http://endpoint:8080/",circuit_breaker=aio_request.DefaultCircuitBreaker[str,int](break_duration=1.0,sampling_duration=1.0,minimum_throughput=2,failure_threshold=0.5,),)In the case of requests count >= minimum throughput(>=2) in sampling period(1 second) the circuit breaker will open
if failed requests count / total requests count >= failure threshold (50%).

v0.1.30 (2023-07-23)
- Removal of tracing support
- Drop python 3.8 support

v0.1.29 (2023-04-27)
- Stop losing redirects params in headers update

v0.1.28 (2023-04-27)
- Add allow_redirects and max_redirects options to request

v0.1.27 (2023-02-16)
- Maintenance release

v0.1.26 (2022-11-02)
- Add python 3.11 support

v0.1.25 (2022-08-25)
- Reverted: URL-encode path_parameters - let user decide what to do

v0.1.24 (2022-07-04)
- URL-encode path_parameters

v0.1.23 (2022-02-08)
- Reject throttling (too many requests) status code

v0.1.22 (2022-01-08)
- Return default json expected content_type to "application/json"
- Release aiohttp response instead of close
- Validate json content-type

v0.1.21 (2022-01-05)
- Content type should be None in Response.json()

v0.1.20 (2022-01-05)
- Do not expect json content type by default

v0.1.19 (2021-11-01)
- Support async-timeout 4.0+

v0.1.18 (2021-09-08)
- Reexport explicitly

v0.1.17 (2021-09-01)
- Fix patch/patch_json visibility

v0.1.16 (2021-09-01)
- Support patch method

v0.1.15 (2021-09-01)
- Clean up resources in single shield

v0.1.14 (2021-08-18)
- Keys should be materialized if dict is changed in loop

v0.1.13 (2021-08-15)
- Circuit breaker

v0.1.12 (2021-07-21)
- Basic repr implementation

v0.1.11 (2021-07-21)
- Fix Request.update_headers, add Request.extend_headers #59

v0.1.10 (2021-07-20)
- Add Response.is_json property to check whether content-type is json compatible #58
- Tracing support #54
- Configuration of a new pipeline |
aio_requests | No description available on PyPI. |
aioresponses | Aioresponses is a helper to mock/fake web requests in the python aiohttp package.

For the requests module there are a lot of packages that help us with testing (e.g. httpretty, responses, requests-mock).
The purpose of this package is to provide an easy way to test asynchronous HTTP requests.Installing$pipinstallaioresponsesSupported versionsPython 3.7+aiohttp>=3.3.0,<4.0.0UsageTo mock out HTTP request useaioresponsesas a method decorator or as a context manager.Responsestatuscode,body,payload(for json response) andheaderscan be mocked.Supported HTTP methods:GET,POST,PUT,PATCH,DELETEandOPTIONS.importaiohttpimportasynciofromaioresponsesimportaioresponses@aioresponses()deftest_request(mocked):loop=asyncio.get_event_loop()mocked.get('http://example.com',status=200,body='test')session=aiohttp.ClientSession()resp=loop.run_until_complete(session.get('http://example.com'))assertresp.status==200mocked.assert_called_once_with('http://example.com')for convenience usepayloadargument to mock out json response. Example below.as a context managerimportasyncioimportaiohttpfromaioresponsesimportaioresponsesdeftest_ctx():loop=asyncio.get_event_loop()session=aiohttp.ClientSession()withaioresponses()asm:m.get('http://test.example.com',payload=dict(foo='bar'))resp=loop.run_until_complete(session.get('http://test.example.com'))data=loop.run_until_complete(resp.json())assertdict(foo='bar')==datam.assert_called_once_with('http://test.example.com')aioresponses allows to mock out any HTTP headersimportasyncioimportaiohttpfromaioresponsesimportaioresponses@aioresponses()deftest_http_headers(m):loop=asyncio.get_event_loop()session=aiohttp.ClientSession()m.post('http://example.com',payload=dict(),headers=dict(connection='keep-alive'),)resp=loop.run_until_complete(session.post('http://example.com'))# note that we pass 'connection' but get 'Connection' (capitalized)# under the neath `multidict` is used to work with HTTP headersassertresp.headers['Connection']=='keep-alive'm.assert_called_once_with('http://example.com',method='POST')allows to register different responses for the same urlimportasyncioimportaiohttpfromaioresponsesimportaioresponses@aioresponses()deftest_multiple_responses(m):loop=asyncio.get_event_loop()session=aiohttp.ClientSession()m.get('http://example.com',status=500)m.get('http://example.com',status=200)resp1=loop.run_until_complete(session.get('http://example.com'))resp2=loop.run_until_complete(session.get('http://example.com'))assertresp1.status==500assertresp2.status==200Repeat response for the same urlE.g. 
for cases you want to test retrying mechanismsimportasyncioimportaiohttpfromaioresponsesimportaioresponses@aioresponses()deftest_multiple_responses(m):loop=asyncio.get_event_loop()session=aiohttp.ClientSession()m.get('http://example.com',status=500,repeat=True)m.get('http://example.com',status=200)# will not take effectresp1=loop.run_until_complete(session.get('http://example.com'))resp2=loop.run_until_complete(session.get('http://example.com'))assertresp1.status==500assertresp2.status==500match URLs with regular expressionsimportasyncioimportaiohttpimportrefromaioresponsesimportaioresponses@aioresponses()deftest_regexp_example(m):loop=asyncio.get_event_loop()session=aiohttp.ClientSession()pattern=re.compile(r'^http://example\.com/api\?foo=.*$')m.get(pattern,status=200)resp=loop.run_until_complete(session.get('http://example.com/api?foo=bar'))assertresp.status==200allows to make redirects responsesimportasyncioimportaiohttpfromaioresponsesimportaioresponses@aioresponses()deftest_redirect_example(m):loop=asyncio.get_event_loop()session=aiohttp.ClientSession()# absolute urls are supportedm.get('http://example.com/',headers={'Location':'http://another.com/'},status=307)resp=loop.run_until_complete(session.get('http://example.com/',allow_redirects=True))assertresp.url=='http://another.com/'# and also relativem.get('http://example.com/',headers={'Location':'/test'},status=307)resp=loop.run_until_complete(session.get('http://example.com/',allow_redirects=True))assertresp.url=='http://example.com/test'allows to passthrough to a specified list of serversimportasyncioimportaiohttpfromaioresponsesimportaioresponses@aioresponses(passthrough=['http://backend'])deftest_passthrough(m,test_client):session=aiohttp.ClientSession()# this will actually perform a requestresp=loop.run_until_complete(session.get('http://backend/api'))aioresponses allows to throw an exceptionimportasynciofromaiohttpimportClientSessionfromaiohttp.http_exceptionsimportHttpProcessingErrorfromaioresponsesimportaioresponses@aioresponses()deftest_how_to_throw_an_exception(m,test_client):loop=asyncio.get_event_loop()session=ClientSession()m.get('http://example.com/api',exception=HttpProcessingError('test'))# calling# loop.run_until_complete(session.get('http://example.com/api'))# will throw an exception.aioresponses allows to use callbacks to provide dynamic responsesimportasyncioimportaiohttpfromaioresponsesimportCallbackResult,aioresponsesdefcallback(url,**kwargs):returnCallbackResult(status=418)@aioresponses()deftest_callback(m,test_client):loop=asyncio.get_event_loop()session=ClientSession()m.get('http://example.com',callback=callback)resp=loop.run_until_complete(session.get('http://example.com'))assertresp.status==418aioresponses can be used in a pytest fixtureimportpytestfromaioresponsesimportaioresponses@pytest.fixturedefmock_aioresponse():withaioresponses()asm:yieldmFeaturesEasy to mock out HTTP requests made byaiohttp.ClientSessionLicenseFree software: MIT licenseCreditsThis package was created withCookiecutterand theaudreyr/cookiecutter-pypackageproject template. |
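One more hedged sketch for the aioresponses entry above: the README shows how to register a mocked exception but stops short of asserting on it. Here is one way to complete that example using the standard pytest.raises helper; the test name is illustrative.

import asyncio

import pytest
from aiohttp import ClientSession
from aiohttp.http_exceptions import HttpProcessingError
from aioresponses import aioresponses

@aioresponses()
def test_exception_is_raised(m):
    loop = asyncio.get_event_loop()
    session = ClientSession()
    m.get('http://example.com/api', exception=HttpProcessingError('test'))
    # The mocked exception should propagate out of the request call.
    with pytest.raises(HttpProcessingError):
        loop.run_until_complete(session.get('http://example.com/api'))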
aiorest | JSON REST framework based on aiohttp (an asyncio (PEP 3156) http server).

aiorest development has stopped

The project was always experimental: we tried to build a proof
of concept for an aiohttp high-level server.

Now the work is done; the most important parts have been transplanted to aiohttp.web: Request and Response.

Some aiorest features are not supported by aiohttp.web yet:
sessions, CORS and security. We are working hard on the issue by making aiohttp extension
libraries for those. We will keep aiorest working on top of new aiohttp versions for a
while. Please report incompatibility bugs to the aiorest github
issue tracker – we’ll fix those.

Example usage

Simple REST server can be run like this:

import asyncio
import aiohttp
import aiorest
# define a simple request handler
# which accepts no arguments
# and responds with json
def hello(request):
    return {'hello': 'world'}
loop = asyncio.get_event_loop()
server = aiorest.RESTServer(hostname='127.0.0.1',
                            loop=loop)
# configure routes
server.add_url('GET', '/hello', hello)
# create server
srv = loop.run_until_complete(loop.create_server(
    server.make_handler, '127.0.0.1', 8080))
@asyncio.coroutine
def query():
    resp = yield from aiohttp.request(
        'GET', 'http://127.0.0.1:8080/hello', loop=loop)
    data = yield from resp.read_and_close(decode=True)
    print(data)
loop.run_until_complete(query())
srv.close()
loop.run_until_complete(srv.wait_closed())
loop.close()

this will print {'hello': 'world'} json

See examples for more.

Requirements

- Python 3.3 plus asyncio http://code.google.com/p/tulip/ or Python 3.4+
- aiohttp http://github.com/KeepSafe/aiohttp
- optional module aiorest.redis_session requires aioredis https://github.com/aio-libs/aioredis

License

aiorest is offered under the MIT license.

CHANGES

0.4.0 (2015-01-18)
- The aiorest library development has stopped, use aiohttp.web instead.
- Update aiorest code to be compatible with aiohttp 0.14 release.

0.3.1 (2014-12-22)
- Fixed exceptions logging for unhandled errors

0.3.0 (2014-12-17)
- Made aiorest compatible with aiohttp v0.12

0.2.5 (2014-10-30)
- Fix response.write_eof() to follow aiohttp changes

0.2.4 (2014-09-12)
- Make loop a keyword-only parameter in create_session_factory() function

0.2.3 (2014-08-28)
- Redis session switched from asyncio_redis to aioredis

0.2.2 (2014-08-15)
- Added Pyramid-like matchdict to request
  (see https://github.com/aio-libs/aiorest/pull/18)
- Return “400 Bad Request” for incorrect JSON body in POST/PUT methods
- README fixed
- Custom response status code
  (see https://github.com/aio-libs/aiorest/pull/23)

0.1.1 (2014-07-09)
- Switched to aiohttp v0.9.0

0.1.0 (2014-07-07)
- Basic REST API |
aio-rest | aio-rest – REST Helpers for Asyncio projects

** WORK IN PROGRESS ** the package is not ready yet

Features
- Support for Starlette, AIOHTTP frameworks

Contents
- Features
- Requirements
- Installation
- QuickStart
- Bug tracker
- Contributing
- License

Requirements

python >= 3.4

Installation

aio-rest should be installed using pip:

pip install aio-rest

QuickStart

TODO

Bug tracker

If you have any suggestions, bug reports or annoyances please report them to the issue tracker at https://github.com/klen/aio-rest/issues

Contributing

Development of the project happens at: https://github.com/klen/aio-rest

License

Licensed under a MIT license. |
aiorestapi | Rapid rest resources for aiohttp

Key Features

Supports both client and server side of HTTP protocol.

Getting started

aiorestapi allows you to quickly create a rest resource in a few steps. It automatically creates the resource routes on the collections or individual items; simply specify the suffix '_collection' or '_item' on the methods.
The serialization / deserialization of results / requests occurs transparently using python dictionaries.An example creating a simple rest resourcefromaiohttpimportwebfromaiorestapiimportRestView,[email protected]("/views")classRestResource(RestView):# example call: GET to <server>/views?start=10asyncdefon_get_collection(self,start=0)->list:return[{"id":int(start)+1,"value":1},{"id":int(start)+2,"value":2},]# example call: GET <server>/views/80asyncdefon_get_item(self,id:str)->dict:returnself.key# example call: POST to <server>/viewsasyncdefon_post_collection(self,body:dict)->dict:returnbodyapp=web.Application()app.add_routes(routes)app['key']=[1,2,4,5]if__name__=='__main__':web.run_app(app)InstallationIt's very simple to install aiorestapi:[email protected] decorator the decorator automatically adds the routes to '/myresources' and '/myresources/{id}'From the RestView object it is possible to access the aiohttp request with self.requestThe query parameters are automatically converted into parameters of the view method.If requests have a body it is necessary to specify in the method a parameter called 'body'If requests are to the single item it is necessary to specify a parameter called 'id'The items stored as 'app[]' are accessible into view as properties 'self.'To DoTests!!DocumentationConfigurable custom serializers/deserializersRequirementsPython >= 3.6aiohttpLicenseaiorestapiis offered under the Apache 2 license.Source codeThe latest developer version is available in a GitHub repository:https://github.com/aiselis/aiorestapi |
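Extending the aiorestapi example above, here is a hedged sketch of item updates and deletes following the '_item' suffix convention this README describes; it reuses the imports from the example, and the exact set of supported verbs is an assumption rather than something the README spells out.

@rest_routes.resource("/views")
class WritableResource(RestView):
    # example call: PUT <server>/views/80 with a JSON body
    async def on_put_item(self, id: str, body: dict) -> dict:
        return {"id": id, **body}

    # example call: DELETE <server>/views/80
    async def on_delete_item(self, id: str) -> dict:
        return {"deleted": id}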
aiorest-client | AIOREST Client

A helper to call REST APIs from aiohttp

Free software: MIT license
Documentation: https://aiorest-client.readthedocs.io.

History
0.1.0 (2019-01-29)
First release on PyPI. |
aiorestlight | aiorestlight

PyPi package for communicating with lights over a REST API

Designed to work with the npm package @jms777/light-controller. |
aiorest-ws | UNKNOWN |
aioresult | Welcome to aioresult!

This is a very small library to capture the result of an asynchronous operation, either an async
function (with the ResultCapture class) or more generally (with the Future class). It works
with Trio nurseries and anyio task groups. It is not needed for Python 3.11 asyncio task groups because
those already return an object representing the task, allowing the result to be retrieved.

Code is hosted on github: https://github.com/arthur-tacca/aioresult

Documentation is on ReadTheDocs:
Overview (this page): https://aioresult.readthedocs.io/en/v1.0/overview.html
Capturing a result: https://aioresult.readthedocs.io/en/v1.0/result_capture.html
Future objects: https://aioresult.readthedocs.io/en/v1.0/future.html
Utility functions for waiting: https://aioresult.readthedocs.io/en/v1.0/wait.html

The package is on PyPI: https://pypi.org/project/aioresult/

Quick Overview
The ResultCapture class runs an async function in a nursery and stores its return value (or
raised exception) for later:

async with trio.open_nursery() as n:
    result1 = ResultCapture.start_soon(n, foo, 1)
    result2 = ResultCapture.start_soon(n, foo, 2)
# At this point the tasks have completed, and results are stashed in ResultCapture objects
print("results", result1.result(), result2.result())

When stored in a list, the effect is very similar to the asyncio gather() function:

async with trio.open_nursery() as n:
    results = [ResultCapture.start_soon(n, foo, i) for i in range(10)]
print("results:", *[r.result() for r in results])

Note
A key design decision about the ResultCapture class is that exceptions are allowed
to propagate out of the task into their enclosing nursery. This is unlike some similar
libraries, which consume the exception in its original context and rethrow it later. In practice,
aioresult's behaviour is simpler and less error-prone.

There is also a simple Future class that shares a lot of its code with ResultCapture. The
result is retrieved the same way, but it is set explicitly rather than captured from a task. It is
most often used when an API wants to return a value that will be demultiplexed from a shared
connection:

# When making a request, create a future, store it for later and return to caller
f = aioresult.Future()
# The result is set, usually inside a networking API
f.set_result(result)
# The calling code can wait for the result then retrieve it
await f.wait_done()
print("result:", f.result())

The interface in Future and ResultCapture to wait for a result and retrieve it is shared in
a base class ResultBase.

There are also a few simple utility functions to help waiting for results: wait_any() and
wait_all() to wait for one or all of a collection of tasks to complete, and results_to_channel()
to allow using the results as they become available.

Installation and Usage
Install into a suitable virtual environment with pip:

pip install aioresult

aioresult can be used with Trio nurseries:

import trio
from aioresult import ResultCapture
async def wait_and_return(i):
    await trio.sleep(i)
    return i

async def use_aioresult():
    async with trio.open_nursery() as n:
        results = [ResultCapture.start_soon(n, wait_and_return, i) for i in range(5)]
        print("results:", *[r.result() for r in results])

if __name__ == "__main__":
    trio.run(use_aioresult)

It can also be used with anyio task groups:

import asyncio
import anyio
from aioresult import ResultCapture
async def wait_and_return(i):
    await anyio.sleep(i)
    return i

async def use_aioresult():
    async with anyio.create_task_group() as tg:
        results = [ResultCapture.start_soon(tg, wait_and_return, i) for i in range(5)]
        print("results:", *[r.result() for r in results])

if __name__ == "__main__":
    asyncio.run(use_aioresult())

Contributing
This library is deliberately small and limited in scope, so it is essentially "done". An exception
to this is that the typing annotations are not exhaustive and have not been tested with any type
checker, so contributions to improve this would be welcome. I could perhaps also be persuaded to
add support for optionally including a per-task cancel scope (see issue #2).

To test any changes, install the test requirements (see the pyproject.toml file) and run pytest in the root of the repository:

python -m pytest

To also get coverage information, run it with the coverage command:

coverage run -m pytest

You can then use coverage html to get a nice HTML output of exactly what code has been tested
and what has been missed.

License
Copyright Arthur Tacca 2022 - 2024

Distributed under the Boost Software License, Version 1.0.
See accompanying file LICENSE or the copy at https://www.boost.org/LICENSE_1_0.txt

This is similar to other liberal licenses like MIT and BSD: you can use this library without the
need to share your program's source code, so long as you provide attribution of aioresult.

The Boost license has the additional provision that you do not even need to provide attribution if
you are distributing your software in binary form only, e.g. if you have compiled to an executable
with Nuitka. (Bundlers like pyinstaller and py2exe don't count for this because they still include the source
code internally.) |
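A minimal sketch of the waiting utilities mentioned in the aioresult entry above, assuming wait_any() accepts the collection of ResultCapture objects and returns the first one to finish (check the wait-utilities documentation for the exact signature):

import trio
from aioresult import ResultCapture, wait_any

async def slow_double(x):
    await trio.sleep(x)
    return x * 2

async def main():
    async with trio.open_nursery() as n:
        results = [ResultCapture.start_soon(n, slow_double, i) for i in range(1, 4)]
        first = await wait_any(results)  # assumption: returns the first finished task
        print("first finished:", first.result())

trio.run(main)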
aiorethink | aiorethink is a fairly comprehensive but easy-to-use asyncio-enabled Object Document Mapper
for RethinkDB. It is currently in development.

Documentation: http://aiorethink.readthedocs.org (very early stages)
Source: https://github.com/lars-tiede/aiorethink

Simple example

import aiorethink as ar

class Hero(ar.Document):
    name = ar.Field(ar.StringValueType(), indexed = True)

That's all you need to start out with your own documents. More than that,
actually: declaring and "typing" fields is entirely optional.

Obviously, you need a RethinkDB instance running, and you need a database
including the tables for your Document classes. aiorethink can't help you with
the RethinkDB instance, but the DB setup can be done like so (assuming a
RethinkDB instance runs on localhost):

ar.configure_db_connection(db = "my_db")
await ar.init_app_db()

Let's make a document:

spiderman = Hero(name = "Spiderma")
# declared fields can be accessed by attribute or dict interface
spiderman.name = "Spierman"
spiderman["name"] = "Spiderman" # third time's the charm
# with the dict interface, we can make and access undeclared fields
spiderman["nickname"] = "Spidey"

Validate and save to DB:

try:
    await spiderman.save()  # calls spiderman.validate()
except ar.ValidationError as e:
    print("validation failed, doc not saved: {}".format(e))

# if we don't declare a primary key field, RethinkDB makes us an 'id' field
doc_id = spiderman.id

Load a document from the DB:

spidey = Hero.load(doc_id)  # using primary key
spidey = Hero.from_query(   # using arbitrary query
    Hero.cq().              # "class query prefix", basically rethinkdb.table("Heros")
    get_all("Spiderman", index = "name").nth(0)
)

Iterate over a document's RethinkDB changefeed:

async for spidey, changed_keys, change_msg in await spidey.aiter_changes():
    if "name" in changed_keys:
        print("what, a typo again? {}?".format(spidey.name))
    # change_msg is straight from the rethinkdb changes() query

Features
The following features are either fully or partially implemented:

optional schema: declare fields in Document classes and get serialization and
validation magic much like you know it from other ODMs / ORMs. Or don’t
declare fields and just use them with the dictionary interface. Or use a mix
of declared and undeclared fields.
schema for complex fields such as lists, dicts, or even "sub-documents" with
named and typed fields just like documents.
dict interface that works for both declared and undeclared fields.
all I/O is asynchronous, done with async def / await style coroutines, using asyncio.
lazy-loading and caching (i.e. "awaitable" fields), for example references to other documents.
asynchronous changefeeds using async for, on documents and document classes. aiorethink can also assist with Python object creation on just about any other changefeed.

Planned features:
maybe explicit relations between document classes (think "has_many" etc.)
maybe schema migrations

Philosophy
aiorethink aims to do the following two things very well:
make translations between database documents and Python objects easy and convenient
help with schema and validation

Other than that, aiorethink tries not to hide RethinkDB under a too-thick
abstraction layer. RethinkDB's excellent Python driver, and certainly its
awesome query language, are never far removed and always easy to access. Custom
queries on document objects should be easy. Getting document objects out of
vanilla rethinkdb queries, including changefeeds, should also be easy.

Status
aiorethink is in development. The API is not complete and not stable yet,
although the most important features are present now. |
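A composite sketch assembled from the aiorethink snippets above into one runnable flow (connect, declare, save, load). It assumes a local RethinkDB with the app database initialised as described; note that Hero.load() is written here exactly as in the entry, though it may need awaiting in practice since all I/O is asynchronous.

import asyncio
import aiorethink as ar

class Hero(ar.Document):
    name = ar.Field(ar.StringValueType(), indexed=True)

async def main():
    ar.configure_db_connection(db="my_db")
    await ar.init_app_db()

    hero = Hero(name="Spiderman")
    await hero.save()

    loaded = Hero.load(hero.id)  # as written in the entry above
    print(loaded["name"])

asyncio.run(main())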
aioretry | aioretryAsyncio retry utility for Python 3.7+Upgrade guideInstall$pipinstallaioretryUsageimportasynciofromtypingimport(Tuple)fromaioretryimport(retry,# Tuple[bool, Union[int, float]]RetryPolicyStrategy,RetryInfo)# This example shows the usage with python typingsdefretry_policy(info:RetryInfo)->RetryPolicyStrategy:"""- It will always retry until succeeded- If fails for the first time, it will retry immediately,- If it fails again,aioretry will perform a 100ms delay before the second retry,200ms delay before the 3rd retry,the 4th retry immediately,100ms delay before the 5th retry,etc..."""returnFalse,(info.fails-1)%3*0.1@retry(retry_policy)asyncdefconnect_to_server():# connec to server...asyncio.run(connect_to_server())Use as class instance method decoratorWe could also useretryas a decorator for instance methodclassClient:@retry(retry_policy)asyncdefconnect(self):awaitself._connect()asyncio.run(Client().connect())Use instance method as retry policyretry_policycould be the method name of the class ifretryis used as a decorator for instance method.classClientWithConfigurableRetryPolicy(Client):def__init__(self,max_retries:int=3):self._max_retries=max_retriesdef_retry_policy(self,info:RetryInfo)->RetryPolicyStrategy:returninfo.fails>self._max_retries,info.fails*0.1# Then aioretry will use `self._retry_policy` as the retry policy.# And by using a str as the parameter `retry_policy`,# the decorator must be used for instance methods@retry('_retry_policy')asyncdefconnect(self):awaitself._connect()asyncio.run(ClientWithConfigurableRetryPolicy(10).connect())Register anbefore_retrycallbackWe could also register anbefore_retrycallback which will be executed after every failure of the target function if the corresponding retry is not abandoned.classClientTrackableFailures(ClientWithConfigurableRetryPolicy):# `before_retry` could either be a sync function or an async functionasyncdef_before_retry(self,info:RetryInfo)->None:awaitself._send_failure_log(info.exception,info.fails)@retry(retry_policy='_retry_policy',# Similar to `retry_policy`,# `before_retry` could either be a Callable or a strbefore_retry='_before_retry')asyncdefconnect(self):awaitself._connect()Only retry for certain types of exceptionsdefretry_policy(info:RetryInfo)->RetryPolicyStrategy:ifisinstance(info.exception,(KeyError,ValueError)):# If it raises a KeyError or a ValueError, it will not retry.returnTrue,0# Otherwise, retry immediatelyreturnFalse,0@retry(retry_policy)asyncdeffoo():# do something that might raise KeyError, ValueError or RuntimeError...APIsretry(retry_policy, before_retry)(fn)fnCallable[[...], Awaitable]the function to be wrapped. The function should be an async function or normal function returns an awaitable.retry_policyUnion[str, RetryPolicy]before_retry?Optional[Union[str, Callable[[RetryInfo], Optional[Awaitable]]]]If specified,before_retryis called after each failure offnand before the corresponding retry. If the retry is abandoned,before_retrywill not be executed.Returns a wrapped function which accepts the same arguments asfnand returns anAwaitable.RetryPolicyRetryPolicy=Callable[[RetryInfo],Tuple[bool,Union[float,int]]]Retry policy is used to determine what to do next after thefnfails to do some certain thing.abandon,delay=retry_policy(info)infoRetryInfoinfo.failsintis the counter number of how many times functionfnperforms as a failure. 
If fn fails for the first time, then fails will be 1.
info.exception Exception is the exception that fn raised.
info.since datetime is the datetime when the first failure happened.

If abandon is True, then aioretry will give up the retry and raise the exception directly; otherwise aioretry will sleep delay seconds (asyncio.sleep(delay)) before the next retry.

def retry_policy(info: RetryInfo):
    if isinstance(info.exception, KeyError):
        # Just raise exceptions of type KeyError
        return True, 0
    return False, info.fails * 0.1

Python typings

from aioretry import (
    # The type of retry_policy function
    RetryPolicy,
    # The type of the return value of retry_policy function
    RetryPolicyStrategy,
    # The type of before_retry function
    BeforeRetry,
    RetryInfo
)

Upgrade guide
Since 5.0.0, aioretry introduces RetryInfo as the only parameter of retry_policy or before_retry.

2.x -> 5.x

2.x:

def retry_policy(fails: int):
    """A policy that gives no chances to retry"""
    return True, 0.1 * fails

5.x:

def retry_policy(info: RetryInfo):
    return True, 0.1 * info.fails

3.x -> 5.x

3.x:

def before_retry(e: Exception, fails: int):
    ...

5.x:

# Change the sequence of the parameters
def before_retry(info: RetryInfo):
    info.exception
    info.fails
    ...

4.x -> 5.x
Since 5.0.0, both retry_policy and before_retry have only one parameter of type RetryInfo respectively.

License
MIT |
aioretry-decorator | aioretry

Handy decorator to set retry policies for async callables with some useful features

Usage examples

@retry(
    tries=5,
    allowed_exceptions=(RuntimeError,),
    intervals=(5, 7, 10),
    fail_cb=make_request_callback,
)
async def make_request(address, client):
    ...  # http request code goes here ...
    if response.status_code >= 300:
        raise RuntimeError()

def make_request_callback(address, client):
    ...

5 retries will be performed, with 5, 7, 10, 10 and 10 second intervals between retries;
i.e. if the intervals tuple has fewer entries than the number of tries,
the last interval will be used for the rest of the tries.

The make_request_callback() synchronous function will be called if all attempts have failed.

The accepted exceptions tuple allows you to control when to retry.

You can optionally pass a custom logger: logging.Logger to the decorator
within a logger= parameter. Otherwise a retry_decorator logger will be created to log retries.

Other possible ways to use this decorator:

@retry(5, (MyCustomError,), (5, 7, 10), make_request_callback)
@retry(3, (MyCustomError,), (1,))
# this actually will either successfully return or fail on the first exception that occurs
@retry(3)

Look into tests.py to see more on usage.

Installation
Available as a package on pypi:

pip install aioretry-decorator

Or install it directly from GitHub:

pip install git+https://github.com/remort/aioretry.git#egg=aioretry_decorator |
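An illustration (not library code) of the documented interval reuse: the attempt index is effectively clamped to the last entry of the intervals tuple.

tries = 5
intervals = (5, 7, 10)

for attempt in range(tries):
    delay = intervals[min(attempt, len(intervals) - 1)]
    print(f"retry {attempt + 1} after {delay}s")  # 5, 7, 10, 10, 10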
aio-retrying | info: Simple retrying for asyncio

Installation

pip install aio_retrying

Usage

import asyncio
import aiohttp
from Utils.async_retrying import retry

class Aio:
    def __init__(self):
        self.session = aiohttp.ClientSession()

    async def __aenter__(self):
        return self

    @retry(attempts=3, delay=1, fallback="daoji")
    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self.session.close()
        print("exit")

    async def main(self):
        for i in range(3):
            assert self.session.closed is False
            await asyncio.sleep(1)
            print(f"task - {i}")

async def second_aio():
    aio = Aio()
    # task = asyncio.create_task(aio.main())
    # task.add_done_callback(lambda _: asyncio.create_task(aio.__aexit__(None, None, None)))
    ret = await aio.__aexit__(None, None, None)
    print(f"result: {ret}")

async def first_aio():
    await second_aio()
    while True:
        # print("first")
        await asyncio.sleep(3)

def main():
    asyncio.run(first_aio())

if __name__ == "__main__":
    main()

Python 3.8+ is required |
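A more minimal sketch of the decorator parameters shown in the aio-retrying example above (attempts, delay, fallback). The import path is an assumption based on the package name; the entry's own example imports it from a local Utils module.

import asyncio
from aio_retrying import retry  # assumed import path

@retry(attempts=3, delay=1, fallback="fallback value")
async def flaky():
    raise RuntimeError("boom")  # always fails

# assumption, per the parameters above: after 3 attempts the fallback is returned
print(asyncio.run(flaky()))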
aiorezka | aiorezkaInstallationWithout cachepipinstallaiorezkaWith cacheIt's recommended to use cache, because it will reduce load on Rezka API.pipinstallaiorezka[request_cache]Usagefromaiorezka.apiimportRezkaAPIimportasyncioasyncdefmain():asyncwithRezkaAPI()asapi:details=awaitapi.movie_detail.get('https://rezka.ag/cartoons/comedy/2136-rik-i-morti-2013.html')print(details)asyncio.run(main())You can find more examples inexamplesdirectory.ConfigurationHostname configurationYou can configure hostname for requests. By default it will userezka.aghostname.
To change it, you can pass the environment variable REZKA_HOSTNAME or change it in code:

import aiorezka
aiorezka.host = 'rezka.co'

Concurrency configuration
You can configure concurrency for the API client; basically it will limit the number of concurrent requests via an asyncio.Semaphore.
By default it will use 60 concurrent requests.
To change it, you can pass the environment variable REZKA_CONCURRENCY_LIMIT or change it in code:

import aiorezka
aiorezka.concurrency_limit = 100

Retry configuration
You can configure the retry policy for requests. By default it will retry 3 times with a 1 * (backoff ** retry_no) second delay.
To change it, you can pass environment variables such as REZKA_MAX_RETRY and REZKA_RETRY_DELAY, or change it in code:

import aiorezka
aiorezka.max_retry = 5
aiorezka.retry_delay = 2

Cache configuration
You can configure the cache for requests. By default, it will use aiorezka.cache.QueryCache + aiorezka.cache.DiskCacheThreadProvider with a 1 day TTL.
Cache will periodically save to disk, so you can use it between restarts.use_cacheEnable or disable cache. By default, it's disabled.importaiorezkaaiorezka.use_cache=False# disable cacheor use environment variableREZKA_USE_CACHEcache_directoryDirectory where cache will be stored. By default, it's/tmp/aiorezka_cache.importaiorezkaaiorezka.cache_directory='/tmp/aiorezka_cache'or use environment variableREZKA_CACHE_DIRECTORYmemcache_max_lenMax number of items in memory cache. When it's reached, it will be saved to disk.By default, it's 1000.importaiorezkaaiorezka.memcache_max_len=1000or use environment variableREZKA_MEMCACHE_MAX_LENcache_ttlTTL for cache objects.By default, it's 1 day.importaiorezkaaiorezka.cache_ttl=60*60*24# 1 dayor use environment variableREZKA_CACHE_TTLmax_open_filesMax number of open files for cache. It's used foraiorezka.cache.DiskCacheThreadProvider. When app starts cache will be rebuilt on disk, so it will open a lot of files to check if they are expired.By default, it's 5000.importaiorezkaaiorezka.max_open_files=5000or use environment variableREZKA_MAX_OPEN_FILESYou can disable cache rebuild on start, then TTL will be ignored.fromaiorezka.apiimportRezkaAPIasyncdefmain():asyncwithRezkaAPI(cache_rebuild_on_start=False)asapi:passLogging configurationYou can configure logging for aiorezka. By default, it will uselogging.INFOlevel.importaiorezkaaiorezka.log_level="DEBUG"or use environment variableREZKA_LOG_LEVELDebuggingMeasure RPSMeasure requests per second, use it only for debug purposes.importasynciofromaiorezka.apiimportRezkaAPIfromaiorezka.cliimportmeasure_rps@measure_rpsasyncdefmain():asyncwithRezkaAPI()asapi:movies=awaitapi.movie.iter_pages(range(1,10),chain=True)detailed_movies=awaitapi.movie_detail.many(movies)formovieindetailed_movies:attributes='\n'.join([f'{attr["key"]}:{attr["value"]}'forattrinmovie.attributes])print(f'{movie.title}\n{attributes}\n')if__name__=='__main__':asyncio.run(main())Output will look like:[main][333requestsin37.82s]8.81rps |
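A composite sketch combining the configuration knobs documented above with the client usage from the top of the entry; the values are arbitrary examples.

import asyncio
import aiorezka
from aiorezka.api import RezkaAPI

aiorezka.max_retry = 5          # retry policy
aiorezka.retry_delay = 2
aiorezka.concurrency_limit = 100

async def main():
    async with RezkaAPI() as api:
        details = await api.movie_detail.get(
            'https://rezka.ag/cartoons/comedy/2136-rik-i-morti-2013.html'
        )
        print(details)

asyncio.run(main())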
aiorgwadmin | aiorgwadmin

aiorgwadmin is a fork of the rgwadmin library.

aiorgwadmin is an async Python library to access the Ceph Object Storage Admin API.
http://docs.ceph.com/docs/master/radosgw/adminops/

API Example Usage

import asyncio
from aiorgwadmin import RGWAdmin

async def main():
    rgw = RGWAdmin(access_key='XXX', secret_key='XXX', server='obj.example.com')
    await rgw.create_user(
        uid='liam',
        display_name='Liam Monahan',
        email='[email protected]',
        user_caps='usage=read, write; users=read',
        max_buckets=1000)
    await rgw.set_user_quota(
        uid='liam',
        quota_type='user',
        max_size_kb=1024 * 1024,
        enabled=True)
    await rgw.remove_user(uid='liam', purge_data=True)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

User Example Usage

import asyncio
from aiorgwadmin import RGWAdmin, RGWUser

async def main():
    RGWAdmin.connect(access_key='XXX', secret_key='XXX', server='obj.example.com')
    u = await RGWUser.create(user_id='test', display_name='Test User')
    u.user_quota.size = 1024 * 1024  # in bytes
    u.user_quota.enabled = True
    await u.save()
    await u.delete()

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

Requirements
aiorgwadmin requires the following Python packages:
aiohttp
requests
requests-aws

Additionally, you need to have a Ceph Object Storage
instance with a user that has appropriate caps (capabilities) on the parts of
the API that you want to access. See theCeph Object Storagepage for more
information.Compatibilityaiorgwadmin implements all documented Admin API operations or recent versions of
Ceph. We also implement some of the undocumented ones, too...

Installation

pip install aiorgwadmin

License
rgwadmin - a Python interface to the Rados Gateway Admin API
Copyright (C) 2015 UMIACS
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Email:
[email protected] |
aioriak | Asyncio (PEP 3156) Riak client library.
This project is based on official Basho python client library
(https://github.com/basho/riak-python-client).

Features
Riak KV operations: Yes
Riak Datatypes: Yes
Riak BucketTypes: Yes
Custom resolver: Yes
Node list support: WIP
Custom quorum: No
Connections Pool: No
Operations timeout: No
Security: No
Riak Search: WIP
MapReduce: WIP

Tested python versions: 3.5, 3.6
Tested Riak versions: 2.1.4, 2.2.3

Documentation
You can read the docs here: Documentation

Installation
The easiest way to install aioriak is by using the package on PyPi:

pip install aioriak

Requirements
Python >= 3.5
riak >= 2.7.0

Usage example

client = await RiakClient.create('localhost', loop=loop)
bucket_type = client.bucket_type('default')
bucket = bucket_type.bucket('example')
obj = await bucket.get('key')
print(obj.data)

Testing
Docker based testing
You can use docker for running:

DOCKER_CLUSTER=1 python setup.py test

Contribute
Issue Tracker: https://github.com/rambler-digital-solutions/aioriak/issues
Source Code: https://github.com/rambler-digital-solutions/aioriak

Feel free to file an issue or make a pull request if you find any bugs or have
some suggestions for library improvement.

License
aioriak is offered under the MIT license. |
aioridwell | ♻️ aioridwell: A Python3, asyncio-based API for interacting with Ridwellaioridwellis a Python 3, asyncio-friendly library for interacting withRidwellto view information on upcoming recycling pickups.InstallationPython VersionsUsageContributingInstallationpipinstallaioridwellPython Versionsaioridwellis currently supported on:Python 3.10Python 3.11Python 3.12UsageCreating and Using a ClientTheClientis the primary method of interacting with the API:importasynciofromaioridwellimportasync_get_clientasyncdefmain()->None:client=awaitasync_get_client("<EMAIL>","<PASSWORD>")# ...asyncio.run(main())By default, the library creates a new connection to the API with each coroutine. If
you are calling a large number of coroutines (or merely want to squeeze out every second
of runtime savings possible), anaiohttpClientSessioncan be used for
connection pooling:importasynciofromaiohttpimportClientSessionfromaiowatttimeimportClientasyncdefmain()->None:asyncwithClientSession()assession:client=awaitasync_get_client("<EMAIL>","<PASSWORD>",session=session)# ...asyncio.run(main())Getting the User's Dashboard URLimportasynciofromaioridwellimportasync_get_clientasyncdefmain()->None:client=awaitasync_get_client("<EMAIL>","<PASSWORD>")client.get_dashboard_url()# >>> https://www.ridwell.com/users/userId1/dashboardasyncio.run(main())Getting AccountsGetting all accounts associated with this email address is easy:importasynciofromaioridwellimportasync_get_clientasyncdefmain()->None:client=awaitasync_get_client("<EMAIL>","<PASSWORD>")accounts=awaitclient.async_get_accounts()# >>> {"account_id_1": RidwellAccount(...), ...}asyncio.run(main())TheRidwellAccountobject comes with some useful properties:account_id: the Ridwell ID for the accountaddress: the address being servicedemail: the email address on the accountfull_name: the full name of the account ownerphone: the phone number of the account ownersubscription_id: the Ridwell ID for the primary subscriptionsubscription_active: whether the primary subscription is activeGetting Pickup EventsGetting pickup events associated with an account is easy, too:importasynciofromaioridwellimportasync_get_clientasyncdefmain()->None:client=awaitasync_get_client("<EMAIL>","<PASSWORD>")accounts=awaitclient.async_get_accounts()foraccountinaccounts.values():events=awaitaccount.async_get_pickup_events()# >>> [RidwellPickupEvent(...), ...]# You can also get just the next pickup event from today's date:next_event=awaitaccount.async_get_next_pickup_event()# >>> RidwellPickupEvent(...)asyncio.run(main())TheRidwellPickupEventobject comes with some useful properties:pickup_date: the date of the pickup (indatetime.dateformat)pickups: a list ofRidwellPickupobjectsstate: anEventStateenum whose name represents the current state of the pickup eventLikewise, theRidwellPickupobject comes with some useful properties:category: aPickupCategoryenum whose name represents the type of pickupname: the name of the item being picked upoffer_id: the Ridwell ID for this particular offerpriority: the pickup priorityproduct_id: the Ridwell ID for this particular productquantity: the amount of the product being picked upOpting Into or Out Of a Pickup Eventimportasynciofromaioridwellimportasync_get_clientasyncdefmain()->None:client=awaitasync_get_client("<EMAIL>","<PASSWORD>")accounts=awaitclient.async_get_accounts()foraccountinaccounts.values():events=awaitaccount.async_get_pickup_events()# >>> [RidwellPickupEvent(...), ...]awaitevents[0].async_opt_in()awaitevents[0].async_opt_out()asyncio.run(main())Calculating a Pickup Event's Estimated Add-on Costimportasynciofromaioridwellimportasync_get_clientasyncdefmain()->None:client=awaitasync_get_client("<EMAIL>","<PASSWORD>")accounts=awaitclient.async_get_accounts()foraccountinaccounts.values():events=awaitaccount.async_get_pickup_events()# >>> [RidwellPickupEvent(...), ...]event_1_cost=awaitevents[0].async_get_estimated_addon_cost()# >>> 22.00asyncio.run(main())ContributingThanks to all ofour contributorsso far!Check for open features/bugsorinitiate a discussion on one.Fork the repository.(optional, but highly recommended) Create a virtual environment:python3 -m venv .venv(optional, but highly recommended) Enter the virtual environment:source ./.venv/bin/activateInstall the dev environment:script/setupCode your new feature or bug fix on a new branch.Write tests that cover your new 
functionality.Run tests and ensure 100% code coverage:poetry run pytest --cov aioridwell testsUpdateREADME.mdwith any new documentation.Submit a pull request! |
aioring | aioring (Io Rings for asyncio)

aioring is a library that handles async fileIO.
Currently we only support io_uring on linux; other operating systems fall back to a custom ring using python threads which is only intended for development purposes and should not be used in production.

from aioring import aio

async with await aio.open("file.name", "r") as f:
    content = await f.read()

install
aioring can be installed with pip:

pip install aioring

aos
In aos we expose async versions of functions defined in the 'os' module.
Currently we support:

aos.pread(fd: int, count: int, offset: int)
aos.pwrite(fd: int, buffer: bytes, offset: int)
aos.close(fd: int)
aos.open(path: str, flags: int, mode: int = 0o777, *, dir_fd=None)
aos.fstat(fd: int)
aos.stat(path: str)

These functions should work the same way as their counterparts in the os module but need to be called with await.

aio
In aio we expose an async implementation of the cpython pyio module (https://github.com/python/cpython/blob/3.10/Lib/_pyio.py);
usage is like normal io but with async/await.

from aioring import aio

# read file
async with await aio.open("file.txt", "r") as f:
    data = await f.read()

# write file
async with await aio.open("file.txt", "w") as f:
    data = await f.write("test")

Plans
fileIO
Windows IoRing
socketIO
directory operations? (io_uring currently does not support readdir) |
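A short sketch of the aos layer using only the signatures listed above (open/pwrite/pread/close); the flags are the standard os module flags, and the file name is an arbitrary example.

import asyncio
import os
from aioring import aos

async def main():
    fd = await aos.open("demo.bin", os.O_RDWR | os.O_CREAT)
    await aos.pwrite(fd, b"hello ring", 0)  # write 10 bytes at offset 0
    data = await aos.pread(fd, 10, 0)       # read them back
    print(data)
    await aos.close(fd)

asyncio.run(main())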
aiorinnai | aiorinnai - Python interface for the Rinnai Control-R APIPython library for communicating with theRinnai Control-R Water Heaters and control devicesvia the Rinnai Control-R cloud API.WARNINGThis library only works if you have migrated to the Rinnai 2.0 app. This will require a firmware update to your Control-R module.IOSAndroidNOTE:This library is community supported, please submit changes and improvements.This is a very basic interface, not well thought out at this point, but works for the use cases that initially prompted spitting this out from.Supportsstarting/stop recirculationsetting temperatureInstallationpip install aiorinnai==0.3.0ExamplesimportasynciofromaiohttpimportClientSessionfromaiorinnaiimportasync_get_apiasyncdefmain()->None:"""Run!"""api=awaitasync_get_api("<EMAIL>","<PASSWORD>")# Get user account information:user_info=awaitapi.user.get_info()# Get device informationfirst_device_id=user_info["devices"]["items"][0]["id"]device_info=awaitapi.device.get_info(first_device_id)#Start Recirculation#Last variable is duration in minutesstart_recirculation=awaitapi.device.start_recirculation(device_info['data']['getDevices'],5)#Stop Recirculationstop_recirculation=awaitapi.device.stop_recirculation(device_info['data']['getDevices'])#Set Temperature#Last variable is the temperature in increments of 5set_temperature=awaitapi.device.set_temperature(device_info['data']['getDevices'],130)asyncio.run(main())By default, the library creates a new connection to Rinnai with each coroutine. If you are calling a large number of coroutines (or merely want to squeeze out every second of runtime savings possible), an aiohttp ClientSession can be used for connection pooling:importasynciofromaiohttpimportClientSessionfromaiorinnaiimportasync_get_apiasyncdefmain()->None:"""Create the aiohttp session and run the example."""asyncwithClientSession()aswebsession:api=awaitasync_get_api("<EMAIL>","<PASSWORD>",session=websession)# Get user account information:user_info=awaitapi.user.get_info()# Get device informationfirst_device_id=user_info["devices"]["items"][0]["id"]device_info=awaitapi.device.get_info(first_device_id)#Start Recirculation#Last variable is duration in minutesstart_recirculation=awaitapi.device.start_recirculation(user_info["id"],first_device_id,5)print(start_recirculation)#Stop Recirculationstop_recirculation=awaitapi.device.stop_recirculation(user_info["id"],first_device_id)print(stop_recirculation)#Set Temperature#Last variable is the temperature in increments of 5set_temperature=awaitapi.device.set_temperature(user_info["id"],first_device_id,130)asyncio.run(main())Known Issuesnot all APIs supported |
aiorm | No description available on PyPI. |
aiormq | aiormq is a pure python AMQP client library.Table of contentsStatusFeaturesTutorialIntroductionSimple consumerSimple publisherWork QueuesCreate new taskSimple workerPublish SubscribePublisherSubscriberRoutingDirect consumerEmitterTopicsPublisherConsumerRemote procedure call (RPC)RPC serverRPC clientStatus3.x.x branch - Production/Stable4.x.x branch - Unstable (Experimental)5.x.x and greater is only Production/Stable releases.FeaturesConnecting by URLamqp example:amqp://user:[email protected]/vhostsecure amqp example:amqps://user:[email protected]/vhost?cafile=ca.pem&keyfile=key.pem&certfile=cert.pem&no_verify_ssl=0Buffered queue for received framesOnlyPLAINauth mechanism supportPublisher confirmssupportTransactionssupportChannel based asynchronous locksNoteAMQP 0.9.1 requires serialize sending for some frame types
on a channel: e.g. a content body must follow its
content header. But frames might be sent asynchronously
on other channels.

Tracking unroutable messages
(Useconnection.channel(on_return_raises=False)for disabling)Full SSL/TLS support, using your choice of:amqps://url query parameters:cafile=- string contains path to ca certificate filecapath=- string contains path to ca certificatescadata=- base64 encoded ca certificate datakeyfile=- string contains path to key filecertfile=- string contains path to certificate fileno_verify_ssl- boolean disables certificates validationcontext=SSLContextkeyword argument toconnect().Pythontype hintsUsespamqpas an AMQP 0.9.1 frame encoder/decoderTutorialIntroductionSimple consumerimportasyncioimportaiormqasyncdefon_message(message):"""
on_message doesn't necessarily have to be defined as async.
Here it is to show that it's possible.
"""print(f" [x] Received message{message!r}")print(f"Message body is:{message.body!r}")print("Before sleep!")awaitasyncio.sleep(5)# Represents async I/O operationsprint("After sleep!")asyncdefmain():# Perform connectionconnection=awaitaiormq.connect("amqp://guest:guest@localhost/")# Creating a channelchannel=awaitconnection.channel()# Declaring queuedeclare_ok=awaitchannel.queue_declare('helo')consume_ok=awaitchannel.basic_consume(declare_ok.queue,on_message,no_ack=True)loop=asyncio.get_event_loop()loop.run_until_complete(main())loop.run_forever()Simple publisherimportasynciofromtypingimportOptionalimportaiormqfromaiormq.abcimportDeliveredMessageMESSAGE:Optional[DeliveredMessage]=Noneasyncdefmain():globalMESSAGEbody=b'Hello World!'# Perform connectionconnection=awaitaiormq.connect("amqp://guest:guest@localhost//")# Creating a channelchannel=awaitconnection.channel()declare_ok=awaitchannel.queue_declare("hello",auto_delete=True)# Sending the messageawaitchannel.basic_publish(body,routing_key='hello')print(f" [x] Sent{body}")MESSAGE=awaitchannel.basic_get(declare_ok.queue)print(f" [x] Received message from{declare_ok.queue!r}")loop=asyncio.get_event_loop()loop.run_until_complete(main())assertMESSAGEisnotNoneassertMESSAGE.routing_key=="hello"assertMESSAGE.body==b'Hello World!'Work QueuesCreate new taskimportsysimportasyncioimportaiormqasyncdefmain():# Perform connectionconnection=awaitaiormq.connect("amqp://guest:guest@localhost/")# Creating a channelchannel=awaitconnection.channel()body=b' '.join(sys.argv[1:])orb"Hello World!"# Sending the messageawaitchannel.basic_publish(body,routing_key='task_queue',properties=aiormq.spec.Basic.Properties(delivery_mode=1,))print(f" [x] Sent{body!r}")awaitconnection.close()loop=asyncio.get_event_loop()loop.run_until_complete(main())Simple workerimportasyncioimportaiormqimportaiormq.abcasyncdefon_message(message:aiormq.abc.DeliveredMessage):print(f" [x] Received message{message!r}")print(f" Message body is:{message.body!r}")asyncdefmain():# Perform connectionconnection=awaitaiormq.connect("amqp://guest:guest@localhost/")# Creating a channelchannel=awaitconnection.channel()awaitchannel.basic_qos(prefetch_count=1)# Declaring queuedeclare_ok=awaitchannel.queue_declare('task_queue',durable=True)# Start listening the queue with name 'task_queue'awaitchannel.basic_consume(declare_ok.queue,on_message,no_ack=True)loop=asyncio.get_event_loop()loop.run_until_complete(main())# we enter a never-ending loop that waits for data and runs# callbacks whenever necessary.print(" [*] Waiting for messages. 
To exit press CTRL+C")loop.run_forever()Publish SubscribePublisherimportsysimportasyncioimportaiormqasyncdefmain():# Perform connectionconnection=awaitaiormq.connect("amqp://guest:guest@localhost/")# Creating a channelchannel=awaitconnection.channel()awaitchannel.exchange_declare(exchange='logs',exchange_type='fanout')body=b' '.join(sys.argv[1:])orb"Hello World!"# Sending the messageawaitchannel.basic_publish(body,routing_key='info',exchange='logs')print(f" [x] Sent{body!r}")awaitconnection.close()loop=asyncio.get_event_loop()loop.run_until_complete(main())Subscriberimportasyncioimportaiormqimportaiormq.abcasyncdefon_message(message:aiormq.abc.DeliveredMessage):print(f"[x]{message.body!r}")awaitmessage.channel.basic_ack(message.delivery.delivery_tag)asyncdefmain():# Perform connectionconnection=awaitaiormq.connect("amqp://guest:guest@localhost/")# Creating a channelchannel=awaitconnection.channel()awaitchannel.basic_qos(prefetch_count=1)awaitchannel.exchange_declare(exchange='logs',exchange_type='fanout')# Declaring queuedeclare_ok=awaitchannel.queue_declare(exclusive=True)# Binding the queue to the exchangeawaitchannel.queue_bind(declare_ok.queue,'logs')# Start listening the queue with name 'task_queue'awaitchannel.basic_consume(declare_ok.queue,on_message)loop=asyncio.get_event_loop()loop.create_task(main())# we enter a never-ending loop that waits for data# and runs callbacks whenever necessary.print(' [*] Waiting for logs. To exit press CTRL+C')loop.run_forever()RoutingDirect consumerimportsysimportasyncioimportaiormqimportaiormq.abcasyncdefon_message(message:aiormq.abc.DeliveredMessage):print(f" [x]{message.delivery.routing_key!r}:{message.body!r}"awaitmessage.channel.basic_ack(message.delivery.delivery_tag)asyncdefmain():# Perform connectionconnection=aiormq.Connection("amqp://guest:guest@localhost/")awaitconnection.connect()# Creating a channelchannel=awaitconnection.channel()awaitchannel.basic_qos(prefetch_count=1)severities=sys.argv[1:]ifnotseverities:sys.stderr.write(f"Usage:{sys.argv[0]}[info] [warning] [error]\n")sys.exit(1)# Declare an exchangeawaitchannel.exchange_declare(exchange='logs',exchange_type='direct')# Declaring random queuedeclare_ok=awaitchannel.queue_declare(durable=True,auto_delete=True)forseverityinseverities:awaitchannel.queue_bind(declare_ok.queue,'logs',routing_key=severity)# Start listening the random queueawaitchannel.basic_consume(declare_ok.queue,on_message)loop=asyncio.get_event_loop()loop.run_until_complete(main())# we enter a never-ending loop that waits for data# and runs callbacks whenever necessary.print(" [*] Waiting for messages. 
To exit press CTRL+C")loop.run_forever()Emitterimportsysimportasyncioimportaiormqasyncdefmain():# Perform connectionconnection=awaitaiormq.connect("amqp://guest:guest@localhost/")# Creating a channelchannel=awaitconnection.channel()awaitchannel.exchange_declare(exchange='logs',exchange_type='direct')body=(b' '.join(arg.encode()forarginsys.argv[2:])orb"Hello World!")# Sending the messagerouting_key=sys.argv[1]iflen(sys.argv)>2else'info'awaitchannel.basic_publish(body,exchange='logs',routing_key=routing_key,properties=aiormq.spec.Basic.Properties(delivery_mode=1))print(f" [x] Sent{body!r}")awaitconnection.close()loop=asyncio.get_event_loop()loop.run_until_complete(main())TopicsPublisherimportsysimportasyncioimportaiormqasyncdefmain():# Perform connectionconnection=awaitaiormq.connect("amqp://guest:guest@localhost/")# Creating a channelchannel=awaitconnection.channel()awaitchannel.exchange_declare('topic_logs',exchange_type='topic')routing_key=(sys.argv[1]iflen(sys.argv)>2else'anonymous.info')body=(b' '.join(arg.encode()forarginsys.argv[2:])orb"Hello World!")# Sending the messageawaitchannel.basic_publish(body,exchange='topic_logs',routing_key=routing_key,properties=aiormq.spec.Basic.Properties(delivery_mode=1))print(f" [x] Sent{body!r}")awaitconnection.close()loop=asyncio.get_event_loop()loop.run_until_complete(main())Consumerimportasyncioimportsysimportaiormqimportaiormq.abcasyncdefon_message(message:aiormq.abc.DeliveredMessage):print(f" [x]{message.delivery.routing_key!r}:{message.body!r}")awaitmessage.channel.basic_ack(message.delivery.delivery_tag)asyncdefmain():# Perform connectionconnection=awaitaiormq.connect("amqp://guest:guest@localhost/",loop=loop)# Creating a channelchannel=awaitconnection.channel()awaitchannel.basic_qos(prefetch_count=1)# Declare an exchangeawaitchannel.exchange_declare('topic_logs',exchange_type='topic')# Declaring queuedeclare_ok=awaitchannel.queue_declare('task_queue',durable=True)binding_keys=sys.argv[1:]ifnotbinding_keys:sys.stderr.write(f"Usage:{sys.argv[0]}[binding_key]...\n")sys.exit(1)forbinding_keyinbinding_keys:awaitchannel.queue_bind(declare_ok.queue,'topic_logs',routing_key=binding_key)# Start listening the queue with name 'task_queue'awaitchannel.basic_consume(declare_ok.queue,on_message)loop=asyncio.get_event_loop()loop.create_task(main())# we enter a never-ending loop that waits for# data and runs callbacks whenever necessary.print(" [*] Waiting for messages. To exit press CTRL+C")loop.run_forever()Remote procedure call (RPC)RPC serverimportasyncioimportaiormqimportaiormq.abcdeffib(n):ifn==0:return0elifn==1:return1else:returnfib(n-1)+fib(n-2)asyncdefon_message(message:aiormq.abc.DeliveredMessage):n=int(message.body.decode())print(f" [.] 
fib({n})")response=str(fib(n)).encode()awaitmessage.channel.basic_publish(response,routing_key=message.header.properties.reply_to,properties=aiormq.spec.Basic.Properties(correlation_id=message.header.properties.correlation_id),)awaitmessage.channel.basic_ack(message.delivery.delivery_tag)print('Request complete')asyncdefmain():# Perform connectionconnection=awaitaiormq.connect("amqp://guest:guest@localhost/")# Creating a channelchannel=awaitconnection.channel()# Declaring queuedeclare_ok=awaitchannel.queue_declare('rpc_queue')# Start listening the queue with name 'hello'awaitchannel.basic_consume(declare_ok.queue,on_message)loop=asyncio.get_event_loop()loop.create_task(main())# we enter a never-ending loop that waits for data# and runs callbacks whenever necessary.print(" [x] Awaiting RPC requests")loop.run_forever()RPC clientimportasyncioimportuuidimportaiormqimportaiormq.abcclassFibonacciRpcClient:def__init__(self):self.connection=None# type: aiormq.Connectionself.channel=None# type: aiormq.Channelself.callback_queue=''self.futures={}self.loop=loopasyncdefconnect(self):self.connection=awaitaiormq.connect("amqp://guest:guest@localhost/")self.channel=awaitself.connection.channel()declare_ok=awaitself.channel.queue_declare(exclusive=True,auto_delete=True)awaitself.channel.basic_consume(declare_ok.queue,self.on_response)self.callback_queue=declare_ok.queuereturnselfasyncdefon_response(self,message:aiormq.abc.DeliveredMessage):future=self.futures.pop(message.header.properties.correlation_id)future.set_result(message.body)asyncdefcall(self,n):correlation_id=str(uuid.uuid4())future=loop.create_future()self.futures[correlation_id]=futureawaitself.channel.basic_publish(str(n).encode(),routing_key='rpc_queue',properties=aiormq.spec.Basic.Properties(content_type='text/plain',correlation_id=correlation_id,reply_to=self.callback_queue,))returnint(awaitfuture)asyncdefmain():fibonacci_rpc=awaitFibonacciRpcClient().connect()print(" [x] Requesting fib(30)")response=awaitfibonacci_rpc.call(30)print(r" [.] Got{response!r}")loop=asyncio.get_event_loop()loop.run_until_complete(main()) |
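A condensed publish/consume roundtrip assembled from the calls shown in the aiormq tutorial above (connect, queue_declare, basic_publish, basic_get), with the same local broker URL.

import asyncio
import aiormq

async def main():
    connection = await aiormq.connect("amqp://guest:guest@localhost/")
    channel = await connection.channel()
    declare_ok = await channel.queue_declare("demo", auto_delete=True)
    await channel.basic_publish(b"ping", routing_key=declare_ok.queue)
    message = await channel.basic_get(declare_ok.queue)
    print(message.body)  # b'ping'
    await connection.close()

asyncio.run(main())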
aio-rmq-wss-proxy | Asynchronous RMQ -> Websocket Proxy ServerHere is an implementation of typical websocket server which receives messages from RMQ and process them and send to connected websocket clientsIn the current state WS server is capable of:Running a server instance which accepts websocket connections from clients and control them;connecting to RabbitMQ, creating Exchange and Queue and consume on it;receiving messages from RMQ Queue;processing received RMQ messages and send them to connected websocket clients.StructPippip install aio-rmq-wss-proxySourceClone repositoryRun "cd aio_rmq_wss_proxy/"Run "make install"Run the commands "make run_server" & "make run_client" to see how does it worksPlease, note that before running server and client you need to have installed and running Rabbit MQ broker.This is how it works from the box.CustomizationYou can see the "public_sample" in "_testing/public_sample" directory and use it for build your own proxy server |
aiornot | AIORNOT Python ClientThis is a Python client for theAIORNOTAPI.Getting StartedAccount Registration and API Key GenerationRegister for an account atAIORNOT. After creating an account,
you can generate an API key via yourdashboard. If you
just created your account, the page looks like,Click theRequest API Keybutton to generate a new API key. After generating a key, the page
looks like,Press theCopy API Keybutton to copy the key to your clipboard. If you already have
generated an API key, the page looks like,Press theRefresh API Keybutton to generate a new API key. Then press theCopy API Keybutton
to copy the key to your clipboard.[!WARNING]Never share your API key with anyone. It is like a password.Installing the Python PackageTo install the python package, run the following command,pipinstallaiornotUsing the client requires an API key. You can set the API key in two ways.The easier and more flexible way is to set an environment variable,AIORNOT_API_KEY=your_api_keyOtherwise, you can pass the api key in as an argument to the client,fromaiornotimportClient,AsyncClientclient=Client(api_key='your_api_key')# sync clientasync_client=AsyncClient(api_key='your_api_key')# async clientFailure to set either the environment variable or the api key argument will result in a runtime error.View from 10,000 feetfromaiornotimportClient# Create a client (reads AIORNOT_API_KEY env)client=Client()# Classify an image by urlresp=client.image_report_by_url('https://thispersondoesnotexist.com')# Classify an image by pathresp=client.image_report_by_file('path/to/image.jpg')# Classify audio by urlresp=client.audio_report_by_url('https://www.youtube.com/watch?v=v4WiI4es_UI')# Classify audio by pathresp=client.audio_report_by_file('path/to/audio.mp3')# Check your tokenresp=client.check_token()# Refresh your tokenresp=client.refresh_token()# Revoke your tokenresp=client.revoke_token()# Check if the API is upresp=client.is_live()There is also an async client that has the same methods as the sync client, but as coroutines.importasynciofromaiornotimportAsyncClientasyncdefmain():client=AsyncClient()ifawaitclient.check_api():print('API is up!')else:print('API is down :(')if__name__=='__main__':asyncio.run(main())CLI UsageAIOrNot also comes with a CLI. You can use it easily via apipxinstallation,# For fresh installpipxinstallaiornot# For upgradepipxupgradeaiornotThe CLI also looks for theAIORNOT_API_KEYenvironment variable. But it will also
look for a~/.aiornot/config.jsonfile if the environment variable is not set. To
set it up, run the following command:

aiornot token config

and follow the prompts. Afterwards, you can see a menu of commands with:

aiornot

the two most useful ones being:

# Classify an image by url or path
aiornot image [url|path]

# Classify audio by url or path
aiornot audio [text] |
aiorobinhood | aiorobinhoodThin asynchronous wrapper for the unofficial Robinhood API.Why?Supports automated trading strategies on RobinhoodSupports concurrency using asynchronous programming techniquesGetting StartedimportasyncioimportosfromaiorobinhoodimportRobinhoodClientusername=os.getenv("ROBINHOOD_USERNAME")password=os.getenv("ROBINHOOD_PASSWORD")asyncdefmain():asyncwithRobinhoodClient(timeout=1)asclient:awaitclient.login(username,password)# Buy $10.50 worth of Appleawaitclient.place_market_buy_order("AAPL",amount=10.5)# End sessionawaitclient.logout()if__name__=="__main__":asyncio.run(main())DependenciesPython 3.7+aiohttpyarlLicenseaiorobinhoodis offered under the MIT license. |
aiorobonect | aiorobonectAsynchronous library to communicate with the Robonect APIAPI Example"""Test for aiorobonect."""fromaiorobonectimportRobonectClientimportasyncioimportjsonimportaiohttpasyncdefmain():host="10.0.0.99"## The Robonect mower IPusername="USERNAME"## Your Robonect usernamepassword="xxxxxxxx"## Your Robonect passwordtracking=[## Commands to query"battery","wlan","version","timer","hour","error"]client=RobonectClient(host,username,password)try:status=awaitclient.async_cmd("status")print(status)tracking=awaitclient.async_cmds(tracking)print(json.dumps(tracking,indent=2))exceptExceptionasexception:ifisinstance(exception,aiohttp.ClientResponseError):print(exception)awaitclient.session_close()asyncio.run(main()) |
aioroboremote | aioroboremotePython asyncio RobotFramework remote library implementation |
aiorobot | Root robot

Python async API for iRobot Root (coding robot) over bluetooth-low-energy protocol.
Protocol specifications from https://github.com/RootRobotics/root-robot-ble-protocol.

Installation
Install the aiorobot package from PyPI with pip.

pip install aiorobot

Quickstart
To simply run the robot, use the run function of the aiorobot module.
It takes coroutine callbacks for different root robot events.

from aiorobot import run

async def main(robot):
    for i in range(4):
        await robot.led.on((0, i * 80, 100))
        await robot.motor.drive(150)
        await robot.motor.rotate(900)
    await robot.disconnect()

run(started=main)

This will search for a root robot in bluetooth devices, connect to it and call the main coroutine when the root is ready.
So make sure you have bluetooth enabled and working on your computer.

Accepted keyword-arguments of the run function are event names listed in aiorobot/events.py.

You can also directly get a robot and interact with it with the get_robot function, which you can use as an async context-manager to start the connection.

import asyncio
from aiorobot import get_robot

async def main():
    async with get_robot() as robot:
        await robot.motor.drive(150)

asyncio.run(main())

Then you will need to handle events yourself (iterate over robot.events or call robot.events.process()) to get updates from the robot.

See more code examples in the aiorobot/examples directory. |
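A hedged sketch of the manual event handling mentioned above, iterating robot.events inside the context manager; the entry does not document the event objects, and async iteration over robot.events is an assumption based on its description.

import asyncio
from aiorobot import get_robot

async def main():
    async with get_robot() as robot:
        await robot.motor.drive(150)
        async for event in robot.events:  # assumption: yields robot updates
            print(event)

asyncio.run(main())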
aiorocket | SDK for working with TON Rocket

🔐 Authorization
How to get a token is described here.

Mainnet:

import aiorocket
api = aiorocket.Rocket('token')

Testnet:

import aiorocket
api = aiorocket.Rocket('token', testnet=True)

🚀 Methods

Getting information about the application
Documentation
Example:

await api.info()

Transfer
All parameters as in the documentation
Example:

await api.send(
    tgUserId=1448705322,
    currency="TONCOIN",
    amount=0.123,
    description="Hello to all owls!")

Withdrawal
All parameters as in the documentation
Example:

await api.withdraw(
    address="EQAJkw0RC9s_FAEmKr4GftJsNbA0IK0o4cfEH3bNoSbKJHAy",
    currency="TONCOIN",
    amount=0.123,
    comment="Hello to all owls!")

Creating a cheque
All parameters as in the documentation
Example:

cheque = await api.create_cheque(
    chequePerUser=0.005,
    usersNumber=100,
    refProgram=50,
    password="password :D",
    description="A cheque for you",
    sendNotifications=True,
    enableCaptcha=True,
    telegramResourcesIds=["-1001799549067"])

Getting cheques
Documentation
Example:

await api.get_cheques()

Getting a cheque by ID
All parameters as in the documentation
Example:

await api.get_cheque(1234)

Deleting a cheque
All parameters as in the documentation
Example:

await api.delete_cheque(1234)
# OR LIKE THIS
await cheque.delete()  # OOP style

Creating an invoice
All parameters as in the documentation
Example:

invoice = await api.createInvoice(
    amount=1.23,
    description="buying the best thing in the world",
    hiddenMessage="thank you",
    callbackUrl="https://t.me/ton_rocket",
    payload="some payload",
    expiredIn=10)

Getting invoices
Documentation
Example:

await api.get_invoices()

Getting an invoice by ID
All parameters as in the documentation
Example:

await api.get_invoice(1234)

Deleting an invoice
All parameters as in the documentation
Example:

await api.delete_invoice(1234)
# OR LIKE THIS
await invoice.delete()  # OOP style

Available currencies
Documentation
Example:

await api.available_currencies()

⚠ Error handling

try:
    api.get_invoice(1234)  # method call
except aiorocket.classes.RocketAPIError as err:
    print(err.errors)

Result:

{"property": "somePropertyName", "error": "somePropertyName should be less than X"} |
aiorocksdb | No description available on PyPI. |
aioroku | Screw remotes. Control your Roku via Python. Asynchronously.

Installation

pip install aioroku

Usage
Requires Python 3.5 and uses asyncio and aiohttp.

import asyncio
from argparse import ArgumentParser

import aiohttp
from aioroku import AioRoku

def get_args():
    arg_parser = ArgumentParser()
    arg_parser.add_argument('--roku_ip', '-r', metavar="ROKU_IP_ADDRESS")
    args = arg_parser.parse_args()
    return args.roku_ip

async def main(host):
    async with aiohttp.ClientSession() as session:
        my_roku = AioRoku(host, session)
        print(await my_roku.active_app)

if __name__ == '__main__':
    roku_ip = get_args()
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(roku_ip))
    loop.close()

TODO
Docs
Tests, of course. |
aio-rom | Python Redis Object Mapperasyncio based Redis object mapperTable of contentInstallationUsageFeaturesTODOLimitationsInstallationTODOUsageimportasynciofromdataclassesimportfieldfromtypingimportSet,Dictfromaio_romimportModelfromaio_rom.fieldsimportMetadatafromaio_rom.sessionimportredis_poolclassFoo(Model):bar:intfoobar:Set[int]=field(default_factory=set)my_boolean:bool=Falsetransient_field:Dict=field(metadata=Metadata(transient=True))classOtherFoo(Model):foo:Fooasyncdefmain():asyncwithredis_pool("redis://localhost"):foo=Foo(123,{1,2,3},True)awaitfoo.save()...foo2=awaitFoo.get(321)other_foo=OtherFoo(303,foo2)awaitother_foo.save()asyncio.run(main())FeaturesTODOTODODocsTestsLimitationsconfiguremust be called before other calls to Redis can succeed, no defaults to localhost atm.You cannot usefrom __future__ import annotationsin the same file you define your models. Seehttps://bugs.python.org/issue39442TODO Supported datatypesProbably more ... |
aiorosapi | aiorosapiSimple asyncio-based library to perform API queries on
Mikrotik RouterOS-based devices.

Installation
Install from PyPi:

pip install aiorosapi

Install from sources:

git clone https://github.com/gaussgss/aiorosapi.git
cd aiorosapi
python setup.py install

Usage

import asyncio
from aiorosapi.protocol import create_ros_connection

async def main():
    conn = await create_ros_connection(
        host='192.168.90.1',
        port=8728,
        username='admin',
        password=''
    )

    data = await conn.talk_one('/system/routerboard/print')
    print("Routerboard info:")
    for k, v in data.items():
        print('{:>20s}: {}'.format(k, v))

    data = await conn.talk_all('/interface/ethernet/print')
    print("Ethernet interfaces:")
    for item in data:
        print("{:>20s}: {}".format(item['.id'], item['name']))

    await conn.disconnect()
    await conn.wait_disconnect()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    loop.close() |
aio-routes | UNKNOWN |
aiorow | No description available on PyPI. |
aiorpc | aiorpc is a lightweight asynchronous RPC library. It enables you to easily build a distributed server-side system by writing a small amount of code. It is built on top ofasyncioandMessagePack.Note aiorpc is under development, should not be considered to have a stable API.InstallationTo install aiorpc, simply:$pipinstallaiorpcExamplesRPC serverfromaiorpcimportRPCServerimportasyncioimportuvloopdefecho(msg):returnmsgrpc_server=RPCServer()loop=uvloop.new_event_loop()asyncio.set_event_loop(loop)rpc_server.register("echo",echo)coro=asyncio.start_server(rpc_server.serve,'127.0.0.1',6000,loop=loop)server=loop.run_until_complete(coro)try:loop.run_forever()exceptKeyboardInterrupt:server.close()loop.run_until_complete(server.wait_closed())RPC clientfromaiorpcimportRPCClientimportasyncioimportuvloopasyncdefdo(cli):ret=awaitclient.call('echo','message')print("{}\n".format(ret))loop=uvloop.new_event_loop()asyncio.set_event_loop(loop)client=RPCClient('127.0.0.1',6000)loop.run_until_complete(do(client))client.close()aiorpc client can also be used as an async context manager:asyncdefdo():asyncwithRPCClient('127.0.0.1',6000)asclient:ret=awaitclient.call('echo','message')print("{}\n".format(ret))Performanceaiorpc withuvloopsignificantly outperformsZeroRPC(6xfaster), which is built usingZeroMQandMessagePackand slightly underperformsofficial MessagePack RPC(0.7xslower), which is built usingFacebook’s TornadoandMessagePack.aiorpc%pythonbenchmarks/benchmark_aiorpc.pycall:2236qpsOfficial MesssagePack RPC%pipinstallmsgpack-rpc-python%pythonbenchmarks/benchmark_msgpackrpc.pycall:3112qpsZeroRPC%pipinstallzerorpc%pythonbenchmarks/benchmark_zerorpc.pycall:351qpsDocumentationDocumentation is available athttp://aiorpc.readthedocs.org/. |
aiorpcX | Transport, protocol and framing-independent async RPC client and server implementation. |
aiorpcx-spesmilo | Transport, protocol and framing-independent async RPC client and server implementation. |
aiorq | Aiorq

Introduction
Aiorq is a distributed task queue with asyncio and redis, rewritten from arq with improvements, and includes a web
interface.

See documentation for more details.

Requirements
redis >= 5.0
aioredis >= 2.0.0

Install

pip install aiorq

Quick Start

Task Definition

# demo.py
# -*- coding: utf-8 -*-
import asyncio
import os

from aiorq.connections import RedisSettings
from aiorq.cron import cron

async def say_hello(ctx, name) -> None:
    await asyncio.sleep(5)
    print(f"Hello {name}")

async def startup(ctx):
    print("starting... done")

async def shutdown(ctx):
    print("ending... done")

async def run_cron(ctx, name_):
    return f"hello {name_}"

class WorkerSettings:
    redis_settings = RedisSettings(
        host=os.getenv("REDIS_HOST", "127.0.0.1"),
        port=os.getenv("REDIS_PORT", 6379),
        database=os.getenv("REDIS_DATABASE", 0),
        password=os.getenv("REDIS_PASSWORD", None)
    )

    functions = [say_hello]

    on_startup = startup
    on_shutdown = shutdown

    cron_jobs = [
        cron(coroutine=run_cron, kwargs={"name_": "pai"},
             hour={17, 12, 18}, minute=40, second=50,
             keep_result_forever=True)
    ]

Run aiorq worker

> aiorq tasks.WorkerSettings worker
15:08:50: Starting Queue: ohuo
15:08:50: Starting Worker: ohuo@04dce85c-1798-43eb-89d8-7c6d78919feb
15:08:50: Starting Functions: say_hello, EnHeng
15:08:50: redis_version=5.0.10 mem_usage=731.12K clients_connected=2 db_keys=9
starting... done

Integration in FastAPI

> aiorq tasks.WorkerSettings server
INFO: Started server process [4524]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8080 (Press CTRL+C to quit)

Dashboard

See dashboard for more details.

Thanks

Arq and FastAPI

License

MIT
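The quick start above only covers the worker side. Since aiorq is described as a rewrite of arq, the enqueueing sketch below assumes arq's client conventions (create_pool / enqueue_job); every name in it is an assumption, not confirmed aiorq API:

import asyncio

from aiorq.connections import RedisSettings, create_pool  # create_pool is assumed


async def main():
    # connect to the same redis instance the worker uses
    redis = await create_pool(RedisSettings(host="127.0.0.1"))
    # enqueue the say_hello task defined in demo.py above (assumed call shape)
    job = await redis.enqueue_job("say_hello", name="world")
    print(job)


asyncio.run(main())

If these names differ in your aiorq version, check the documentation linked above. |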
aiorsmq | aiorsmq

This is the repository for aiorsmq (AsyncIO RSMQ), an asynchronous (async/await) implementation of RSMQ for Python 3.6+. It aims to provide all the features that RSMQ provides, but for Python users.

Features

Some of aiorsmq's features are:

- Fully compatible with RSMQ.
- Provides an API similar to that of RSMQ, with some changes done to achieve something more "pythonic".
- All public functions, methods and classes documented.
- Type-annotated and checked with mypy.
- Tested against Redis, and against the original RSMQ library.

Installation

To install aiorsmq, run:

$ pip install aiorsmq

Documentation

For examples and API documentation please visit the documentation pages.

Related Projects

For a synchronous implementation of RSMQ for Python, see PyRSMQ.

License

Distributed under the MIT license. See LICENSE for more details.
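The entry above includes no usage snippet, so here is a purely hypothetical sketch: the class name AIORSMQ and the create_queue / send_message / receive_message methods are guessed from RSMQ's conventions and are not confirmed aiorsmq API; consult the documentation pages before relying on any of it.

import asyncio

from aiorsmq import AIORSMQ  # assumed entry point


async def main():
    mq = AIORSMQ(host="127.0.0.1", port=6379)  # assumed constructor
    await mq.create_queue(qname="jobs")        # assumed method, per RSMQ's createQueue
    await mq.send_message(qname="jobs", message="hello")
    msg = await mq.receive_message(qname="jobs")
    print(msg)


asyncio.run(main())

Treat every name above as a placeholder for the real API. |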
aiorss | aiorss

Asyncio client for interacting with rss feeds

Installation instructions:

pip install aiorss

Usage:

from aiorss import RSSFeed


async def main():
    url = 'http://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml'
    feed = RSSFeed(url)
    return await feed.parse()

Note

aiorss is not affiliated with or endorsed by any of the web services it interacts with. |
aiortc | What is aiortc?

aiortc is a library for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) in Python. It is built on top of asyncio, Python's standard asynchronous I/O framework.

The API closely follows its Javascript counterpart while using pythonic
constructs:

- promises are replaced by coroutines
- events are emitted using pyee.EventEmitter

To learn more about aiortc please read the documentation.

Why should I use aiortc?

The main WebRTC and ORTC implementations are either built into web browsers,
or come in the form of native code. While they are extensively battle tested,
their internals are complex and they do not provide Python bindings.
Furthermore they are tightly coupled to a media stack, making it hard to plug
in audio or video processing algorithms.

In contrast, the aiortc implementation is fairly simple and readable. As
such it is a good starting point for programmers wishing to understand how
WebRTC works or tinker with its internals. It is also easy to create innovative
products by leveraging the extensive modules available in the Python ecosystem.
For instance you can build a full server handling both signaling and data
channels or apply computer vision algorithms to video frames using OpenCV.

Furthermore, a lot of effort has gone into writing an extensive test suite for
the aiortc code to ensure best-in-class code quality.

Implementation status

aiortc allows you to exchange audio, video and data channels and
interoperability is regularly tested against both Chrome and Firefox. Here are
some of its features:

- SDP generation / parsing
- Interactive Connectivity Establishment, with half-trickle and mDNS support
- DTLS key and certificate generation
- DTLS handshake, encryption / decryption (for SCTP)
- SRTP keying, encryption and decryption for RTP and RTCP
- Pure Python SCTP implementation
- Data Channels
- Sending and receiving audio (Opus / PCMU / PCMA)
- Sending and receiving video (VP8 / H.264)
- Bundling audio / video / data channels
- RTCP reports, including NACK / PLI to recover from packet loss

Installing

The easiest way to install aiortc is to run:

pip install aiortc
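To give a feel for the API, here is a minimal sketch (using only documented aiortc classes) that creates a peer connection with a data channel and produces a local SDP offer; a real application would additionally exchange the offer and answer with a remote peer over its own signaling channel:

import asyncio

from aiortc import RTCPeerConnection


async def main():
    pc = RTCPeerConnection()
    channel = pc.createDataChannel("chat")

    @channel.on("open")
    def on_open():
        # fires once the channel is connected to a remote peer
        channel.send("hello")

    # create the SDP offer and apply it as the local description;
    # pc.localDescription.sdp is what you would send for signaling
    await pc.setLocalDescription(await pc.createOffer())
    print(pc.localDescription.sdp)
    await pc.close()


asyncio.run(main())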
Building from source

If there are no wheels for your system or if you wish to build aiortc from
source, you will need a couple of libraries installed on your system:

- Opus for audio encoding / decoding
- LibVPX for video encoding / decoding

Linux

On Debian/Ubuntu run:

apt install libopus-dev libvpx-dev

OS X

On OS X run:

brew install opus libvpx

License

aiortc is released under the BSD license. |
aiortc-datachannel-only | aiortc-datachannel-only

aiortc-datachannel-only is a fork of aiortc which does not require the av, cffi or netifaces dependencies, but supports data channels only.

Install with:

pip install aiortc-datachannel-only

Why?

I only need data channels, and I want a simple "pip install ..." to work without extra dependencies or build steps (some others have had the same need; see issues 78, 118, 243, 324, 364, 436). Netifaces is no longer maintained, and does not have binaries for Python 3.10.

Caveats:

- This library does not support ipv6 (which the original aiortc does support)
- Currently this library uses only the primary ipv4 address (aiortc can use all of the ipv4 addresses)

License

aiortc-datachannel-only is released under the same BSD license as the original aiortc library.

Development

Install: pip install .[dev]
Build: python setup.py bdist_wheel
Release: twine upload dist/* |
aiortc-dc | This repo is a special version of aiortc which can be installed with the data channel feature only, and it works on the Windows platform. A PR has already been submitted, so this is a temporary repository.

Install procedure:

pip install aiortc-dc

File transfer example of P2P direct communication over NAT (which can run on Windows platform!)

Please see: https://github.com/ryogrid/aiortc-dc/blob/pr-websocket-version-filexfer/examples/datachannel-filexfer/README_WS_SIGNALING_VERSION.rst

What is aiortc?

aiortc is a library for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) in Python. It is built on top of asyncio, Python's standard asynchronous I/O framework.

The API closely follows its Javascript counterpart while using pythonic
constructs:

- promises are replaced by coroutines
- events are emitted using pyee.EventEmitter

To learn more about aiortc please read the documentation (https://aiortc.readthedocs.io/en/latest/).

Why should I use aiortc?

The main WebRTC and ORTC implementations are either built into web browsers,
or come in the form of native code. While they are extensively battle tested,
their internals are complex and they do not provide Python bindings.
Furthermore they are tightly coupled to a media stack, making it hard to plug
in audio or video processing algorithms.

In contrast, the aiortc implementation is fairly simple and readable. As
such it is a good starting point for programmers wishing to understand how
WebRTC works or tinker with its internals. It is also easy to create innovative
products by leveraging the extensive modules available in the Python ecosystem.
For instance you can build a full server handling both signaling and data
channels or apply computer vision algorithms to video frames using OpenCV.

Furthermore, a lot of effort has gone into writing an extensive test suite for
the aiortc code to ensure best-in-class code quality.

Implementation status

aiortc allows you to exchange audio, video and data channels and
interoperability is regularly tested against both Chrome and Firefox. Here are
some of its features:

- SDP generation / parsing
- Interactive Connectivity Establishment, including half-trickle
- DTLS key and certificate generation
- DTLS handshake, encryption / decryption (for SCTP)
- SRTP keying, encryption and decryption for RTP and RTCP
- Pure Python SCTP implementation
- Data Channels
- Sending and receiving audio (Opus / PCMU / PCMA)
- Sending and receiving video (VP8 / H.264)
- Bundling audio / video / data channels
- RTCP reports, including NACK / PLI to recover from packet loss

Requirements

In addition to aiortc's Python dependencies you need a couple of libraries
installed on your system for media codecs. FFmpeg 3.2 or greater is required.

On Debian/Ubuntu run:

apt install libavdevice-dev libavfilter-dev libopus-dev libvpx-dev pkg-config

On OS X run:

brew install ffmpeg opus libvpx pkg-config

License

aiortc-dc is released under the BSD license (https://aiortc.readthedocs.io/en/latest/license.html). |
aiortc-pyav-stub | No description available on PyPI. |
aiortm | aiortm

Use the Remember the Milk API with aiohttp.

Installation

Install this via pip (or your favourite package manager):

pip install aiortm

Credits

This package was created with Cookiecutter and the browniebroke/cookiecutter-pypackage project template. |
aiortsp | This is a very simple asyncio library for interacting with an
RTSP server, with basic RTP/RTCP support.

The intended use case is to provide a pretty low level control
of what happens at RTSP connection level, all in python/asyncio.

This library does not provide any decoding capability,
it is up to the client to decide what to do with received RTP packets.

One could easily decode using OpenCV or PyAV, or not at all depending on the intended
use.

See examples for how to use the lib internals, but for quick usage:

import asyncio

from aiortsp.rtsp.reader import RTSPReader


async def main():
    # Open a reader (which means RTSP connection, then media session)
    async with RTSPReader('rtsp://cam/video.sdp') as reader:
        # Iterate on RTP packets
        async for pkt in reader.iter_packets():
            print('PKT', pkt.seq, pkt.pt, len(pkt))

asyncio.run(main()) |
aioruckus | aioruckus

A Python API which interacts with Ruckus Unleashed and ZoneDirector devices via their AJAX Web Service interface.

Configuration information can also be queried from Ruckus Unleashed and ZoneDirector backup files.

Compatible with all Ruckus Unleashed versions, and Ruckus ZoneDirector versions 9.10 onwards.

How to install

pip install aioruckus

Usage

Functions are defined within an async context manager, so you will have to use asyncio rather than calling the functions directly in a shell.

from aioruckus import AjaxSession, BackupSession, SystemStat
import asyncio

async def test_aioruckus():

    async with AjaxSession.async_create("<ruckus ip>", "<ruckus user>", "<ruckus password>") as session:
        ruckus = session.api

        # viewing configuration
        # note: configuration functions resolve all related objects, so may be slower than stats functions
        aps = await ruckus.get_aps()
        ap_groups = await ruckus.get_ap_groups()
        wlans = await ruckus.get_wlans()
        wlan_groups = await ruckus.get_wlan_groups()  # WLAN Groups are CLI-only on Unleashed
        dpsks = await ruckus.get_dpsks()
        mesh = await ruckus.get_mesh_info()
        default_system_info = await ruckus.get_system_info()
        all_system_info = await ruckus.get_system_info(SystemStat.ALL)

        # working with client devices
        active_clients = await ruckus.get_active_clients()
        inactive_clients = await ruckus.get_inactive_clients()  # always empty on Unleashed
        blocked_clients = await ruckus.get_blocked_client_macs()

        await ruckus.do_block_client("60:ab:de:ad:be:ef")
        await ruckus.do_unblock_client("60:ab:de:ad:be:ef")

        new_rogues = await ruckus.get_active_rogues()
        known_rogues = await ruckus.get_known_rogues()
        blocked_rogues = await ruckus.get_blocked_rogues()

        # working with APs
        ap_stats = await ruckus.get_ap_stats()
        ap_group_stats = await ruckus.get_ap_group_stats()

        await ruckus.do_hide_ap_leds("24:79:de:ad:be:ef")
        await ruckus.do_show_ap_leds("24:79:de:ad:be:ef")

        await ruckus.do_restart_ap("24:79:de:ad:be:ef")

        # working with WLANs / VAPs
        vap_stats = await ruckus.get_vap_stats()
        wlan_group_stats = await ruckus.get_wlan_group_stats()

        await ruckus.do_disable_wlan("my ssid")
        await ruckus.do_enable_wlan("my ssid")

        await ruckus.do_set_wlan_password("my ssid", "blah>blah<")

        # viewing events / alarms / logs
        all_alarms = await ruckus.get_all_alarms(limit=15)

        all_events = await ruckus.get_all_events(limit=1000)
        ap_events = await ruckus.get_ap_events()
        ap_group_events = await ruckus.get_ap_events("24:79:de:ad:be:ef", "24:59:de:ad:be:ef")
        wlan_events = await ruckus.get_wlan_events()
        wlan_group_events = await ruckus.get_wlan_events("my ssid", "my other ssid", "my third ssid")
        client_events = await ruckus.get_client_events(limit=50)
        wired_client_events = await ruckus.get_wired_client_events()

        syslog = await ruckus.get_syslog()

        # modifying configuration
        await ruckus.do_add_wlan_group("new empty wlangroup", "empty group added by aioruckus")
        await ruckus.do_add_wlan_group("new full wlangroup", "group added by aioruckus", wlans)

        wlan_group_template = next((wlang for wlang in wlan_groups if wlang["name"] == "Default"), None)
        await ruckus.do_clone_wlan_group(wlan_group_template, "Copy of Default")

        await ruckus.do_delete_wlan_group("Copy of Default")

        await ruckus.do_add_wlan("my new sid", passphrase="mypassphrase")
        await ruckus.do_edit_wlan("my new sid", {"ofdm-rate-only": True})

        template_wlan = next((wlan for wlan in wlans if wlan["name"] == "my ssid"), None)
        await ruckus.do_clone_wlan(template_wlan, "my newer sid")
        await ruckus.do_delete_wlan("my newer sid")

    # viewing backed-up configuration
    with BackupSession.create("<ruckus backup filename>") as session:
        ruckus = session.api
        aps = await ruckus.get_aps()
        ap_groups = await ruckus.get_ap_groups()
        wlans = await ruckus.get_wlans()
        wlan_groups = await ruckus.get_wlan_groups()
        dpsks = await ruckus.get_dpsks()
        blocked = await ruckus.get_blocked_client_macs()
        mesh = await ruckus.get_mesh_info()
        all_system_info = await ruckus.get_system_info(SystemStat.ALL)

asyncio.run(test_aioruckus())

Other APIs for Ruckus Unleashed

This project was originally a fork of pyruckus, which provides similar Python query functionality by controlling an SSH CLI session.

There is a Go client for the latest releases of Unleashed. Since it's strongly typed, has good quality comments, and doesn't (yet) contain the large collection of tweaks and hacks needed to work over a wide range of Unleashed and ZoneDirector releases, the ruckus-go source code is a great place to understand the required requests and responses you should expect to receive from the AJAX API.

There is also scrapli support for the Ruckus Unleashed SSH CLI via scrapli community. Authentication and privilege levels are implemented, but no templates are implemented as of August 2023. |
aiorule34 | No description available on PyPI. |
aiorun | Table of Contents

- 🏃 aiorun
- 🤔 Why?
- 🖥️ What about TCP server startup?
- 🐛 Error Handling
- 💨 Do you like uvloop?
- 🛡️ Smart shield for shutdown
- 🙏 Windows Support

🏃 aiorun

Here's the big idea (how you use it):

import asyncio
from aiorun import run

async def main():
    # Put your application code here
    await asyncio.sleep(1.0)

if __name__ == '__main__':
    run(main())

This package provides a run() function as the starting point
of your asyncio-based application. The run() function will
run forever. If you want to shut down when main() completes, just
call loop.stop() inside it: that will initiate shutdown.

Warning

Note that aiorun.run(coro) will run forever, unlike the standard
library's asyncio.run() helper. You can call aiorun.run() without a coroutine parameter, and it will still run forever.

This is surprising to many people, because they sometimes expect that
unhandled exceptions should abort the program, with an exception and
a traceback. If you want this behaviour, please see the section on error handling further down.

Warning

Note that aiorun.run(coro) will create a new event loop instance
every time it is invoked (same as asyncio.run). This might cause
confusing errors if your code interacts with the default event loop
instance provided by the stdlib asyncio library. For such situations
you can provide the actual loop you're using with aiorun.run(coro, loop=loop). There is more info about this further down.

However, generally speaking, configuring your own loop and providing
it in this way is a code smell. You will find it much easier to
reason about your code if you do all your task creation inside an async context, such as within an async def function, because then
there will be no ambiguity about which event loop is in play: it will
always be the one returned by asyncio.get_running_loop().

🤔 Why?

The run() function will handle everything that normally needs
to be done during the shutdown sequence of the application. All you
need to do is write your coroutines and run them.

So what the heck does run() do exactly? It does these standard,
idiomatic actions for asyncio apps:

- creates a Task for the given coroutine (schedules it on the event loop),
- calls loop.run_forever(),
- adds default (and smart) signal handlers for both SIGINT and SIGTERM that will stop the loop; and
- when the loop stops (either by signal or called directly), it will then:
  - gather all outstanding tasks,
  - cancel them using task.cancel(),
  - resume running the loop until all those tasks are done,
  - wait for the executor to complete shutdown, and
  - finally close the loop.

All of this stuff is boilerplate that you will never have to write
again (a rough hand-written equivalent is sketched at the end of this section). So, if you use aiorun this is what you need to remember:

- Spawn all your work from a single, starting coroutine.
- When a shutdown signal is received, all currently-pending tasks
  will have CancelledError raised internally. It's up to you whether
  you want to handle this inside each coroutine with a try/except or not.
- If you want to protect coros from cancellation, see shutdown_waits_for() further down.
- Try to have executor jobs be shortish, since the shutdown process will wait
  for them to finish. If you need a long-running thread or process tasks, use
  a dedicated thread/subprocess and set daemon=True instead.

There's not much else to know for general use. aiorun has a few special
tools that you might need in unusual circumstances. These are discussed
next.
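Here is the promised rough hand-written equivalent of what run() automates. It is an illustrative sketch only (signal handlers and executor shutdown are omitted), not aiorun's actual implementation:

import asyncio

async def main():
    await asyncio.sleep(1.0)

loop = asyncio.new_event_loop()
loop.create_task(main())
try:
    # runs until loop.stop() is called, e.g. from a signal handler
    loop.run_forever()
finally:
    tasks = asyncio.all_tasks(loop)
    for t in tasks:
        t.cancel()  # request cancellation of everything still pending
    # resume the loop briefly so cancellations can propagate, then clean up
    loop.run_until_complete(asyncio.gather(*tasks, return_exceptions=True))
    loop.close()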
🖥️ What about TCP server startup?

You will see in many examples online that for servers, startup happens in
several run_until_complete() phases before the primary run_forever() which is the "main" running part of the program. How do we handle that with aiorun?

Let's recreate the echo client & server examples from the Standard Library documentation:

Client:

# echo_client.py
import asyncio
from aiorun import run

async def tcp_echo_client(message):
    # Same as original!
    reader, writer = await asyncio.open_connection('127.0.0.1', 8888)
    print('Send: %r' % message)
    writer.write(message.encode())
    data = await reader.read(100)
    print('Received: %r' % data.decode())
    print('Close the socket')
    writer.close()
    asyncio.get_event_loop().stop()  # Exit after one msg like original

message = 'Hello World!'
run(tcp_echo_client(message))

Server:

import asyncio
from aiorun import run

async def handle_echo(reader, writer):
    # Same as original!
    data = await reader.read(100)
    message = data.decode()
    addr = writer.get_extra_info('peername')
    print("Received %r from %r" % (message, addr))
    print("Send: %r" % message)
    writer.write(data)
    await writer.drain()
    print("Close the client socket")
    writer.close()

async def main():
    server = await asyncio.start_server(handle_echo, '127.0.0.1', 8888)
    print('Serving on {}'.format(server.sockets[0].getsockname()))
    async with server:
        await server.serve_forever()

run(main())

It works the same as the original examples, except you see this
when you hit CTRL-C on the server instance:

$ python echo_server.py
Running forever.
Serving on ('127.0.0.1', 8888)
Received 'Hello World!' from ('127.0.0.1', 57198)
Send: 'Hello World!'
Close the client socket
^CStopping the loop
Entering shutdown phase.
Cancelling pending tasks.
Cancelling task: <Task pending coro=[...snip...]>
Running pending tasks till complete
Waiting for executor shutdown.
Leaving. Bye!

Task gathering, cancellation, and executor shutdown all happen
automatically.

🐛 Error Handling

Unlike the standard library's asyncio.run() method, aiorun.run will run forever, and does not stop on unhandled exceptions. This is partly
because we predate the standard library method, during the time in which run_forever() was actually the recommended API for servers, and partly
because it can make sense for long-lived servers to be resilient to
unhandled exceptions. For example, if 99% of your API works fine, but the
one new endpoint you just added has a bug: do you really want that one new
endpoint to crash-loop your deployed service?

Nevertheless, not all usages of aiorun are long-lived servers, so some
users would prefer that aiorun.run() crash on an unhandled exception,
just like any normal Python program. For this, we have an extra parameter
that enables it:

# stop_demo.py
from aiorun import run

async def main():
    raise Exception('ouch')

if __name__ == '__main__':
    run(main(), stop_on_unhandled_errors=True)

This produces the following output:

$ python stop_demo.py
Unhandled exception; stopping loop.
Traceback (most recent call last):
File "/opt/project/examples/stop_unhandled.py", line 9, in <module>
run(main(), stop_on_unhandled_errors=True)
File "/opt/project/aiorun.py", line 294, in run
raise pending_exception_to_raise
File "/opt/project/aiorun.py", line 206, in new_coro
await coro
File "/opt/project/examples/stop_unhandled.py", line 5, in main
raise Exception("ouch")
Exception: ouch

Error handling scenarios can get very complex, and I suggest that you
try to keep your error handling as simple as possible. Nevertheless, sometimes
people have special needs that require some complexity, so let’s look at a
few scenarios where error-handling considerations can be more challenging.

aiorun.run() can also be started without an initial coroutine, in which
case any other created tasks still run as normal; in this case exceptions
still abort the program if the parameter is supplied:

import asyncio
from aiorun import run

async def job():
    raise Exception("ouch")

if __name__ == "__main__":
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.create_task(job())
    run(loop=loop, stop_on_unhandled_errors=True)

The output is the same as the previous program. In this second example,
we made our own loop instance and passed that to run(). It is also possible
to configure your exception handler on the loop, but if you do this the
stop_on_unhandled_errors parameter is no longer allowed:

import asyncio
from aiorun import run

async def job():
    raise Exception("ouch")

if __name__ == "__main__":
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.create_task(job())
    loop.set_exception_handler(lambda loop, context: "Error")
    run(loop=loop, stop_on_unhandled_errors=True)

But this is not allowed:

Traceback (most recent call last):
File "/opt/project/examples/stop_unhandled_illegal.py", line 15, in <module>
run(loop=loop, stop_on_unhandled_errors=True)
File "/opt/project/aiorun.py", line 171, in run
raise Exception(
Exception: If you provide a loop instance, and you've configured a
custom exception handler on it, then the 'stop_on_unhandled_errors'
parameter is unavailable (all exceptions will be handled).
/usr/local/lib/python3.8/asyncio/base_events.py:633:
RuntimeWarning: coroutine 'job' was never awaited

Remember that the parameter stop_on_unhandled_errors is just a convenience. If you're
going to go to the trouble of making your own loop instance anyway, you can
stop the loop yourself inside your own exception handler just fine, and
then you no longer need to set stop_on_unhandled_errors:

# custom_stop.py
import asyncio
from aiorun import run

async def job():
    raise Exception("ouch")

async def other_job():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        print("other_job was cancelled!")

if __name__ == "__main__":
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.create_task(job())
    loop.create_task(other_job())

    def handler(loop, context):
        # https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.call_exception_handler
        print(f'Stopping loop due to error: {context["exception"]}')
        loop.stop()

    loop.set_exception_handler(handler=handler)
    run(loop=loop)

In this example, we schedule two jobs on the loop. One of them raises an
exception, and you can see in the output that the other job was still
cancelled during shutdown as expected (which is what you expect aiorun to do!):

$ python custom_stop.py
Stopping loop due to error: ouch
other_job was cancelled!

Note however that in this situation the exception is being handled by
your custom exception handler, and does not bubble up out of the run()
like you saw in earlier examples. If you want to do something with that
exception, like reraise it or something, you need to capture it inside your
custom exception handler and then do something with it, like add it to a list
that you check after run() completes, and then reraise there or something
similar.

💨 Do you like uvloop?

import asyncio
from aiorun import run

async def main():
    <snip>

if __name__ == '__main__':
    run(main(), use_uvloop=True)

Note that you have to pip install uvloop yourself.

🛡️ Smart shield for shutdown

It's unusual, but sometimes you're going to want a coroutine to not get
interrupted by cancellation during the shutdown sequence. You'll look in
the official docs and find asyncio.shield().

Unfortunately, shield() doesn't work in shutdown scenarios because
the protection offered by shield() only applies if the specific coroutine inside which the shield() is used gets cancelled directly.

Let me explain: if you do a conventional shutdown sequence (like aiorun is doing internally), this is the sequence of steps:

- tasks = all_tasks(), followed by
- [t.cancel() for t in tasks], and then
- run_until_complete(gather(*tasks))

The way shield() works internally is it creates a secret, inner task, which also gets included in the all_tasks() call above! Thus
it also receives a cancellation exception just like everything else.

Therefore, we have an alternative version of shield() that works better for
us: shutdown_waits_for(). If you've got a coroutine that must not be
cancelled during the shutdown sequence, just wrap it in shutdown_waits_for()!

Here's an example:

import asyncio
from aiorun import run, shutdown_waits_for

async def corofn():
    for i in range(10):
        print(i)
        await asyncio.sleep(1)
    print('done!')

async def main():
    try:
        await shutdown_waits_for(corofn())
    except asyncio.CancelledError:
        print('oh noes!')

run(main())

If you hit CTRL-C before 10 seconds has passed, you will see oh noes!
printed immediately, and then after 10 seconds (since start), done!
is printed, and thereafter the program exits.

Output:

$ python testshield.py
0
1
2
3
4
^CStopping the loop
oh noes!
5
6
7
8
9
done!

Behind the scenes, all_tasks() would have been cancelled by CTRL-C, except
ones wrapped in shutdown_waits_for() calls. In this respect, it
is loosely similar to asyncio.shield(), but with special applicability
to our shutdown scenario in aiorun().

Be careful with this: the coroutine should still finish up at some point.
The main use case for this is short-lived tasks for which you don't want to
write explicit cancellation handling.

Oh, and you can use shutdown_waits_for() as if it were asyncio.shield()
too. For that use-case it works the same. If you're using aiorun, there
is no reason to use shield().

🙏 Windows Support

aiorun also supports Windows! Kinda. Sorta. The root problem with Windows,
for a thing like aiorun, is that Windows doesn't support signal handling the way Linux or Mac OS X does. Like, at all.

For Linux, aiorun does "the right thing" out of the box for the SIGINT and SIGTERM signals; i.e., it will catch them and initiate
a safe shutdown process as described earlier. However, on Windows, these
signals don't work.

There are two signals that work on Windows: the CTRL-C signal (happens
when you press, unsurprisingly, CTRL-C), and the CTRL-BREAK signal
which happens when you…well, you get the picture.

The good news is that, for aiorun, both of these will work. Yay! The bad
news is that for them to work, you have to run your code in a Console
window. Boo!

Fortunately, it turns out that you can run an asyncio-based process not attached to a Console window, e.g. as a service or a subprocess, and have
it also receive a signal to safely shut down in a controlled way. It turns
out that it is possible to send a CTRL-BREAK signal to another process,
with no console window involved, but only as long as that process was created
in a particular way (and, here is the catch, this targeted process must be a
child process of the one sending the signal). Yeah, I know, it's a downer.

There is an example of how to do this in the tests:

import subprocess as sp

proc = sp.Popen(
    ['python', 'app.py'],
    stdout=sp.PIPE,
    stderr=sp.STDOUT,
    creationflags=sp.CREATE_NEW_PROCESS_GROUP
)
print(proc.pid)

Notice how we print out the process id (pid). Then you can send that
process the signal from a completely different process, once you know
the pid:

import os, signal

os.kill(pid, signal.CTRL_BREAK_EVENT)

(Remember, os.kill() doesn't actually kill, it only sends a signal.)

aiorun supports the use-case above, although I'll be pretty surprised
if anyone actually uses it to manage microservices (does anyone do this?)

So to summarize: aiorun will do a controlled shutdown if either CTRL-C or CTRL-BREAK is entered via keyboard in a Console window
with a running instance, or if the CTRL-BREAK signal is sent to
a subprocess that was created with the CREATE_NEW_PROCESS_GROUP flag set. Here is a much more
detailed explanation of these issues.

Finally, uvloop is not yet supported on Windows so that won't work
either. At the very least, aiorun will, well, run on Windows ¯\_(ツ)_/¯ |
aio.run.checker | Async checker definitions. |
aio-run-in-process | No description available on PyPI. |
aiorunner | Handle signals and cleanup context. |
aio.run.runner | Async runner definitions. |
aioruuvigateway | aioruuvigateway

An asyncio-native library for requesting data from a Ruuvi Gateway.

Installation

Requires Python 3.7 or newer.

pip install aioruuvigateway

Usage

Ensure you have set up bearer token authentication in your Ruuvi Gateway (and that you know the token).

API

Documentation can be found in test_library.py for now, sorry.

Command line interface

You can use the command line interface to test the library.

python -m aioruuvigateway --host 192.168.1.249 --token bearbear --parse --json

will output data from the gateway in JSON format, printing changed information every 10 seconds.

License

aioruuvigateway is distributed under the terms of the MIT license. |
aioruz | aioruz

Async HSE RUZ API client for Python 3.

Usage

To obtain a student's schedule:

from datetime import date, timedelta
import asyncio

import aioruz

async def main():
    # Get schedule on 10 days forward
    print(await aioruz.student_schedule(email='[email protected]', to_date=10))

    # Suitable for lecturers as there is no way to get a lecturer's person_id by email
    print(await aioruz.schedule(
        person_type='lecturer',
        person_id=12345,
        from_date=date.today(),
        to_date=date.today() + timedelta(days=7)
    ))

    # Get student's info by email
    print(await aioruz.student_info('[email protected]'))

    # Search for query
    print(await aioruz.search('some name'))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

Installation

Install via Pip:

pip install aioruz

Security

By default SSL certificate validation is off. To enable it, set aioruz.VERIFY_SSL to True.

import aioruz

aioruz.VERIFY_SSL = True

Or you can set the RUZ_VERIFY_SSL environment variable to True.

Feedback

Please send your bug reports to this Telegram chat. |
aiorwlock | Read write lock for asyncio. A RWLock maintains a pair of associated
locks, one for read-only operations and one for writing. The read lock may be
held simultaneously by multiple reader tasks, so long as there are
no writers. The write lock is exclusive.

Whether or not a read-write lock will improve performance over the use of
a mutual exclusion lock depends on the frequency that the data is read compared to being modified. For example, a collection that is initially
populated with data and thereafter infrequently modified, while being
frequently searched is an ideal candidate for the use of a read-write lock.
However, if updates become frequent then the data spends most of its time
being exclusively locked and there is little, if any, increase in concurrency.

Implementation is almost a direct port from this patch.

Example

import asyncio
import aiorwlock


async def go():
    rwlock = aiorwlock.RWLock()

    # acquire reader lock, multiple coroutines allowed to hold the lock
    async with rwlock.reader_lock:
        print('inside reader lock')
        await asyncio.sleep(0.1)

    # acquire writer lock, only one coroutine can hold the lock
    async with rwlock.writer_lock:
        print('inside writer lock')
        await asyncio.sleep(0.1)


asyncio.run(go())

Fast path

By default RWLock switches context on lock acquiring. That allows
other waiting tasks to get the lock even if the task that holds the lock
doesn't contain context switches (await fut statements).

The default behavior can be switched off by the fast argument: RWLock(fast=True).

Long story short: the lock is safe by default, but if you are sure you have
context switches (await, async with, async for or yield from statements) inside locked code you may want to use fast=True for
minor speedup.

TLA+ Specification

TLA+ specification of aiorwlock is provided in this repository.

License

aiorwlock is offered under the Apache 2 license.

Changes

1.4.0 (2024-01-20)
- Lazily evaluate current loop to allow instantiating lock outside of async functions.
- Support Python 3.11 and 3.12.
- Drop Python 3.7 support.

1.3.0 (2022-01-18)
- Dropped Python 3.6 support
- Python 3.10 is officially supported
- Drop deprecated loop parameter from RWLock constructor

1.2.0 (2021-11-09)
- Fix a bug that makes concurrent writes possible under some (rare) conjunctions (#235)

1.1.0 (2021-09-27)
- Remove explicit loop usage in asyncio.sleep() call, make the library forward
  compatible with Python 3.10

1.0.0 (2020-12-32)
- Fix a bug with cancelation during acquire #170 (thanks @romasku)
- Deprecate passing explicit loop argument to RWLock constructor
- Deprecate creation of RWLock instance outside of async function context
- Minimal supported version is Python 3.6
- The library works with Python 3.8 and Python 3.9 seamlessly

0.6.0 (2018-12-18)
- Wake up all readers after writer releases lock #60 (thanks @ranyixu)
- Fixed Python 3.7 compatibility
- Removed old yield from syntax
- Minimal supported version is Python 3.5.3
- Removed support for non async context managers

0.5.0 (2017-12-03)
- Fix corner cases and deadlock when we upgrade lock from write to
  read #39
- Use loop.create_future instead of asyncio.Future if possible

0.4.0 (2015-09-20)
- Support Python 3.5 and async with statement
- rename .reader_lock -> .reader, .writer_lock -> .writer. Backward compatibility is preserved.

0.3.0 (2014-02-11)
- Add .locked property

0.2.0 (2014-02-09)
- Make .release() non-coroutine

0.1.0 (2014-12-22)
- Initial release |
aiorx | No description available on PyPI. |
aios | aios

asynchronous state, transition and abstraction manager for Python 3.5+

pip install aios

Docs are available, but are a work in progress.

Features

- State machine
- Class hierarchy management

Todo

- Timed & repeated actions
- Cron/scheduled actions
- Demo: add demo code |
aios3 | File-like object for aiobotocore

With stream you can create a file-like object
to read from aiobotocore "files" in chunks.

Documentation

aioS3

Developers

Do not forget to run: . ./activate.sh

Python github project template

You can use this repository as a template for your Python projects. It brings:

- Python virtual environment (see activate.sh)
- pre-commit configuration with mypy, flake8, black and docstrings linter (see .pre-commit-config.yaml, install pre-commit and activate it with pre-commit install)
- github pages: read the article; the site is generated from md-files in docs/, and docs/docstrings/ is autogenerated from source code docstrings (see .github/workflows/docs.yml)
- pytest with examples of async tests and fixtures (see tests/)
- github actions to lint, test and create github pages (see .github/workflows/)
- versions in git tags (see verup.sh)
- publishing Python PIP package: read the article; automatically on git tag, also creates a github release with a link to the version on pypi.org
- pinning dependencies versions using pip-tools (see scripts/compile_requirements.sh)

Scripts

make help
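The description above is thin on API details, so the snippet below is a purely hypothetical sketch of chunked reading: the import path, the stream() signature, and the returned object's read() method are all assumptions, not confirmed aios3 API.

import asyncio

from aiobotocore.session import get_session
from aios3 import stream  # assumed import path


async def main():
    session = get_session()
    async with session.create_client("s3") as s3:
        # assumed signature: wrap an S3 object as a file-like reader
        fileobj = await stream(s3, bucket="my-bucket", key="big-file.bin")
        while True:
            chunk = await fileobj.read(65536)  # read in chunks
            if not chunk:
                break
            print(len(chunk))


asyncio.run(main())

Check the aioS3 documentation linked above for the real interface. |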
aio-s3 | Status: Alpha

The aio-s3 is a small library for accessing Amazon S3 Service that leverages
python's standard asyncio library.

Only read operations are supported so far, contributions are welcome.

Example

Basically all methods supported so far are shown in this example:

import asyncio
from aios3.bucket import Bucket


@asyncio.coroutine
def main():
    bucket = Bucket('uaprom-logs',
        aws_region='eu-west-1',
        aws_endpoint='s3-eu-west-1.amazonaws.com',
        aws_key='AKIAIOSFODNN7EXAMPLE',
        aws_secret='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY')

    # List keys based on prefix
    lst = yield from bucket.list('some-prefix')

    response = yield from bucket.get(lst[0])
    print(len(response))

    response = yield from bucket.download(lst[0])
    print("GOT Response", dir(response))
    while 1:
        chunk = yield from response.read(65536)
        print("Received", len(chunk))
        if not chunk:
            break

asyncio.get_event_loop().run_until_complete(main())

Reference

Bucket(name, *, aws_key, aws_secret, aws_region, aws_endpoint, connector):

Creates a wrapper object for accessing S3 bucket. Note unlike in many
other bindings you need to specify aws_region (and probably aws_endpoint)
correctly (see a table). The connector is an aiohttp connector,
which might be used to set up a proxy or other useful things.

Bucket.list(prefix='', max_keys=1000):

Lists items which start with prefix. Each returned item is a Key object. This method is a coroutine.

Note: This method raises an assertion error if there are more keys than
max_keys. We do not have a method to return keys iteratively yet.

Bucket.get(key):

Fetches the object named key. The key might be a string or a Key object. Returns bytes. This method is a coroutine.

Bucket.download(key):

Allows downloading the key iteratively. The object returned by the
coroutine is an object having a method .read(bufsize), which is a
coroutine too.

Key

Represents an S3 key returned by Bucket.list. Key has at least the
following attributes:

- key – the full name of the key stored in a bucket
- last_modified – datetime.datetime object
- etag – The ETag, usually md5 of the content with additional quotes
- size – Size of the object in bytes
- storage_class – Storage class of the object |
aiosaber | A concurrent streaming package

- Dataflow based functional syntax.
- Implicit parallelism for both async and non-async functions.
- Composable for both flows and tasks.
- Extensible with middlewares.

Installation

pip install aiosaber

Example

check tests for more examples.

from aiosaber import *

@task
def add(self, num):
    for i in range(100000):
        num += 1
    return num

@task
async def multiply(num1, num2):
    return num1 * num2

@flow
def sub_flow(num):
    return add(num) | map_(lambda x: x ** 2) | add

@flow
def my_flow(num):
    [sub_flow(num), sub_flow(num)] | multiply | view

num_ch = Channel.values(*list(range(100)))
f = my_flow(num_ch)
asyncio.run(f.start())

Middleware example

from aiosaber import *

class NameBuilder(BaseBuilder):
    def __call__(self, com, *args, **kwargs):
        super().__call__(com, *args, **kwargs)
        com.context['name'] = type(com).__name__ + str(id(com))

class ClientProvider(BaseExecutor):
    async def __call__(self, com, **kwargs):
        if not context.context.get('client'):
            context.context['client'] = 'client'
        return await super().__call__(com, **kwargs)

class Filter(BaseHandler):
    async def __call__(self, com, get, put, **kwargs):
        async def filter_put(data):
            if data is END or data > 3:
                await put(data)
        return await super().__call__(com, get, filter_put, **kwargs)

@task
async def add(self, num):
    print(self.context['name'])
    print(context.context['client'])
    return num + 1

@flow
def myflow(num_ch):
    return num_ch | add | view

context.context.update({
    'builders': [NameBuilder],
    'executors': [ClientProvider],
    'handlers': [Filter]
})
f = myflow(Channel.values(1, 2, 3, 4, 5))
context.context.clear()
asyncio.run(f.start()) |
aio-sanitana-eden | aio_sanitana_eden

Python async module to control a Sanitana Eden steam shower. |
aiosasl | aiosasl provides a generic, asyncio-based SASL library. It can be used with
any protocol, provided the necessary interface code is provided by the
application or protocol implementation.

Dependencies

- Python ≥ 3.5

Supported SASL mechanisms

- PLAIN: authenticate with plaintext password (RFC 4616)
- ANONYMOUS: anonymous "authentication" (RFC 4505)
- SCRAM-SHA-1 and SCRAM-SHA-256 (and the -PLUS variants with
  channel binding): Salted Challenge Response Authentication (RFC 5802)

Documentation

Official documentation can be built with sphinx and is available online on our servers.

Supported channel binding methods

- tls-unique and tls-server-end-point with a pyOpenSSL connection
- all methods supported by the Python standard library when using the ssl module
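Because the interface code is supplied by the application, usage depends heavily on your protocol. The sketch below is hypothetical: the SASLInterface base class, the SASLStateMachine constructor, the credential-provider shape, and the authenticate() call are all assumptions about aiosasl's API, not verified signatures.

import aiosasl

# Your protocol must provide an interface object that knows how to send
# SASL initiation/response messages over your transport; the base class
# name and its abstract methods are assumed here.
class MyInterface(aiosasl.SASLInterface):  # assumed base class
    ...  # wire the abstract methods to your protocol's framing

async def authenticate(interface):
    sm = aiosasl.SASLStateMachine(interface)           # assumed constructor
    mechanism = aiosasl.PLAIN(lambda: ("user", "pw"))  # assumed provider shape
    await mechanism.authenticate(sm, "PLAIN")          # assumed call shape

Consult the official documentation for the actual interface contract. |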
aiosc | aiosc is a minimalistic Open Sound Control (OSC) communication module
which uses asyncio for network operations and is compatible with the
asyncio event loop.

Installation

aiosc requires at least Python 3.7. It can be installed using pip:

pip3 install aiosc

Alternatively, use the --user option to install aiosc only for the current user:

pip3 install --user aiosc

Usage

To send OSC messages with aiosc, create an asyncio datagram connection
endpoint using aiosc.OSCProtocol as the protocol factory.

A datagram connection can be created with the create_datagram_endpoint method of the asyncio event loop. Use the argument remote_addr to specify
the OSC server address and port as follows:

import asyncio
import aiosc

async def main():
    loop = asyncio.get_running_loop()
    transport, osc = await loop.create_datagram_endpoint(
        aiosc.OSCProtocol,
        remote_addr=('127.0.0.1', 8000)
    )
    osc.send('/hello/world')
    osc.send('/a/b/cde', 1000, -1, 'hello', 1.234, 5.678)

asyncio.run(main())

For an OSC server implementation, aiosc.OSCProtocol can be subclassed
or directly constructed with a dictionary which maps OSC address patterns to
handler methods for incoming messages.

When creating a datagram connection for an OSC server with create_datagram_endpoint, use the argument local_addr to specify
the interface (address) and listening port for the server.

In a typical case, the local address can look like ('0.0.0.0', 9000) where 9000 is the port number and the 0.0.0.0 address designates that the server
will be listening on all available network interfaces.

import asyncio
import aiosc
import sys

class EchoServer(aiosc.OSCProtocol):
    def __init__(self):
        super().__init__(handlers={
            '/sys/exit': lambda addr, path, *args: sys.exit(0),
            '//*': self.echo,
        })

    def echo(self, addr, path, *args):
        print("incoming message from {}: {} {}".format(addr, path, args))

async def main():
    loop = asyncio.get_running_loop()
    transport, osc = await loop.create_datagram_endpoint(
        EchoServer,
        local_addr=('0.0.0.0', 8000)
    )
    await loop.create_future()

asyncio.run(main())

For more examples, see examples/.

OSC address patterns

aiosc dispatches messages to handler methods using glob-style address
pattern matching as described in the OSC 1.0 specification. The // operator
from the OSC 1.1 preliminary specification is also supported.

Examples:

- /hello/world matches /hello/world.
- /hello/* matches /hello/world and /hello/sarah.
- /{hello,goodbye}//world matches /hello/world and /goodbye/cruel/world.
- //* matches any address.

Notes

- Bundles are not yet supported.
- OSC data types are picked from the preliminary spec documented in the Features
  and Future of Open Sound Control version 1.1 for NIME paper. For example, the I typetag is decoded to Impulse (aka "bang"), which is passed around
  as the aiosc.Impulse singleton.
- Suggestions, bug reports, issues and/or pull requests are, of course, welcome.

License

Copyright (c) 2014 Artem Popov <[email protected]>

aiosc is licensed under the MIT license, please see LICENSE file for details. |