package | package-description |
---|---|
aiopolly | aiopollyaiopolly is an asynchronous library forAmazon Polly APIwhich is written withasyncioandaiohttpand usespydanticmodelsFeaturesAsynchronousRespects PEP-8 (no camelCase args and vars)Provides easy way to work with SSML tags and lexiconsHas mapped and classified AWS API exceptionsHas audio convert support and built-in async opus converterInstallation$pipinstallaiopollyGetting startedTo work with AWS Polly you need AWS account, IAM User and it's credentials,here's the instructionshow to get itThen you can init this class using one of two methods:Provide your access and secret keys directly:fromaiopollyimportPollypolly=Polly(access_key='your_access_key',secret_key='your_secret_key')Create a ~/.aws/credentials file with following data:[default]
aws_access_key_id = your_access_key
aws_secret_access_key = your_secret_keyAnd init class without any auth params:fromaiopollyimportPollypolly=Polly()ExamplesMany voicesimportasyncioimporttimefromaiopollyimportPolly,typesasyncdefmain():time_start=time.time()# Initializing AWS Polly client with default output_formatpolly=Polly(output_format=types.AudioFormat.mp3)voices=awaitpolly.describe_voices()text='Whatever you can do I can override it, got a million ways to synthesize it'# Asynchronously synthesizing text with all available voicessynthesized=awaitvoices.synthesize_speech(text,language_code=types.LanguageCode.en_US)# Asynchronously saving each synthesized audio on diskawaitasyncio.gather(*(speech.save_on_disc(directory='examples')forspeechinsynthesized))# Counting how many characters were synthesizedcharacters_synthesized=sum(speech.request_charactersforspeechinsynthesized)print(f'{characters_synthesized}characters are synthesized on{len(synthesized)}speech'f'and saved on disc in{time.time()-time_start}seconds!')loop=asyncio.get_event_loop()loop.run_until_complete(main())Managing lexiconsimportasynciofromaiopollyimportPollyfromaiopolly.typesimportAlphabet,AudioFormat,LanguageCode,VoiceIDfromaiopolly.utils.lexiconimportnew_lexicon,new_lexemeasyncdefmain():# Creating a new Polly instance with default output format 'mp3'polly=Polly(output_format=AudioFormat.mp3)text='Python is a beautiful programming language which is commonly used for web backend and ML. '\'It also has cool style guides listed in PEP-8, and many community libraries like aiopolly or aiogram.'# Creating new lexemespython_lexemes=[new_lexeme(grapheme='PEP',alias='Python Enhancement Proposals'),new_lexeme(grapheme='ML',alias='Machine Learning'),new_lexeme(grapheme='aiopolly',phoneme='eɪˈaɪoʊˈpɑli'),new_lexeme(grapheme='aiogram',phoneme='eɪˈaɪoʊˌgræm')]# Creating a new lexicon with 'ipa' alphabet and 'en_US' language codelexicon=new_lexicon(alphabet=Alphabet.ipa,lang=LanguageCode.en_US,lexemes=python_lexemes)# Putting lexicon on Amazon serverlexicon_name='PythonML'awaitpolly.put_lexicon(lexicon_name=lexicon_name,content=lexicon)# Synthesizing speech with lexicon we just created# (we don't need to specify required param "output_format", as we using it by default)speech=awaitpolly.synthesize_speech(text,voice_id=VoiceID.Matthew,lexicon_names=[lexicon_name])# Saving speech on disk with default nameawaitspeech.save_on_disc()loop=asyncio.get_event_loop()loop.run_until_complete(main())Using SSML Textaiopolly got built-in ssml-text factory which you can use to manage your ssml text:importasynciofromaiopollyimportPollyfromaiopolly.typesimportAudioFormat,VoiceID,TextTypefromaiopolly.utils.ssmlimportssml_text,prosodyfromaiopolly.utils.ssml.paramsimportVolume,Pitch,Ratesuper_fast=prosody(f'''\Uh, sama lamaa duma lamaa you assuming I'm a human\What I gotta do to get it through to you I'm superhuman\Innovative and I'm made of rubber\So that anything you say is ricocheting off of me and it'll glue to you\I'm devastating more than ever demonstrating\How to give a motherfuckin' audience a feeling like it's levitating\Never fading and I know that the haters are forever waiting\For the day that they can say I fell off they'd be celebrating\'Cause I know the way to get 'em motivated''',rate=Rate.x_fast,volume=Volume.x_loud,pitch=Pitch.high)asyncdefmain():# Creating a new Polly instance with default output format 
'mp3'polly=Polly(output_format=AudioFormat.mp3)text=ssml_text(super_fast)speech=awaitpolly.synthesize_speech(text,voice_id=VoiceID.Matthew,text_type=TextType.ssml)awaitspeech.save_on_disc(directory='speech')loop=asyncio.get_event_loop()loop.run_until_complete(main())Using default paramsYou can init Polly client with any default params.
Those will be used when same params in API methods remain empty.fromaiopollyimportPolly,typespolly=Polly(voice_id=types.VoiceID.Joanna,output_format=types.AudioFormat.ogg_vorbis,sample_rate='16000',speech_mark_types=['ssml'],text_type=types.TextType.ssml,language_code=types.LanguageCode.en_US,lexicon_names=['myLexicon','alsoMyLexicon'],output_s3_key_prefix='s3_key_prefix',output_s3_bucket_name='s3_bucket_name',include_additional_language_codes=True,**{'other_default_param':'value'})Using built-in OpusConverterFor this to work you need ffmpeg and libopus installed on your systemimportasynciofromaiopollyimportPollyfromaiopolly.typesimportAudioFormat,TextType,VoiceIDfromaiopolly.utils.converterimportOpusConverterfromaiopolly.utils.ssmlimportssml_text,pause,Strengthasyncdefmain():# Creating instance if OpusConverterconverter=OpusConverter(auto_convert=True,keep_original=True)polly=Polly(output_format=AudioFormat.mp3,converter=converter)text=ssml_text(f'''sendVoiceUse this method to send audio files, if you want Telegram clients to display the file as a playable voice message.For this to work, your audio must be in an{pause(Strength.none)}.ogg file encoded with OPUS(other formats may be sent as Audio or Document)''')# Synthesizing speech as usual, it will be converted automaticallyspeech=awaitpolly.synthesize_speech(text,voice_id=VoiceID.Matthew,text_type=TextType.ssml)# Saving speech on disk with default nameawaitspeech.save_on_disc(directory='speech')awaitspeech.save_on_disc(directory='speech',converted=False)loop=asyncio.get_event_loop()loop.run_until_complete(main())To-Do:Test Synthesis tasks (not tested yet)Write testsGet rid of botocore (built-in request signer needed)Work on converter API?More docs?Inspired by Alex Root Junior'saiogram |
aiopool | A library for running separate subprocesses with asyncio. |
aio-pool | aio-poolExtending Python'smultiprocessing.Poolto support coroutine functions.Can be useful when using a server with very high bandwidth or doing both very large IO and CPU tasks at the same time.All methods ofmultiprocessing.Poolare supported.All parameters for multiprocessing.Pool are supported.examples:Setting concurrency limit. This means each process can run with up to 8 concurrent tasks at a time.importasynciofromaio_poolimportAioPoolasyncdefpowlong(a):awaitasyncio.sleep(1)returna**2if__name__=='__main__':withAioPool(processes=2,concurrency_limit=8)aspool:results=pool.map(powlong,[iforiinrange(16)])# Should take 2 seconds (2*8).print(results)Async initializers are also supported.importasynciofromaio_poolimportAioPoolasyncdefstart(message):awaitasyncio.sleep(1)print(message)asyncdefpowlong(a):awaitasyncio.sleep(1)returna**2if__name__=='__main__':withAioPool(processes=2,concurrency_limit=8,initializer=start,init_args=("Started with AioPool",))aspool:results=pool.map(powlong,[iforiinrange(16)])# Should take 2 seconds (2*8).print(results)By default, AioPool also sets up a default executor for any non-async tasks.The size can be determined by thethreadpool_sizeargument, which defaults to 1.Non-default event loops (uvloopfor example) are supported as well, using theloop_initializerargument.Also, non-async functions are supported by default, as the AioPool worker identifies whether the function is async or not.If the function is not async, it runs inside the threadpool, to allow the requested concurrency.This means that order of execution is not guaranteed, even if the function is not async.However, the order of results is guaranteed through the pool API (map, starmap, apply, etc...).fromaio_poolimportAioPoolimportuvloopwithAioPool(loop_initializer=uvloop.new_event_loop,threadpool_size=4)aspool:pool.map(print,[iforiinrange(8)]) |
aiopop3 | This is a server for the POP3 protocol. |
aioposter | No description available on PyPI. |
ai-oppose | package for AI opposition of scientific claims (under development) |
aiopriman | AioprimanAttention! The project is at the initial stage of development, so there may be changes that break backward compatibility.This package provides the ability to manage asyncio synchronization primitives.
It allows you to create storages of primitives and provides convenient means for accessing them using context managers,
factories, creation and access to synchronization primitives by key.Designed to solve the problem of managing dynamically created synchronization primitives for different resources.Primitives are stored in memory only when needed.Installpip install aioprimanUsage ExamplesWorking with a specific type of Manager, storage data must be specified as a parameterimportasyncioimportloggingfromaiopriman.managerimportLockManager,SemaphoreManagerfromaiopriman.storageimportStorageDataasyncdefrun_lock(storage,name):logging.debug(f"START Lock{name}")asyncwithLockManager(storage_data=storage,key="test")aslock:logging.debug(f"HERE LOCKED{name}:{lock}")awaitasyncio.sleep(3)asyncdefrun_sem(storage,name):logging.debug(f"START Sem{name}")asyncwithSemaphoreManager(storage_data=storage,key="test",value=2)assem:logging.debug(f"HERE SEM LOCKED{name}:{sem}")awaitasyncio.sleep(3)asyncdefmain_run(*args):awaitasyncio.gather(*args)if__name__=='__main__':logging.basicConfig(level=logging.DEBUG,format='%(levelname)s:%(name)s:(%(filename)s).%(funcName)s(%(lineno)d):%(message)s')tasks=[]storage_data=StorageData()foriinrange(1,10):tasks.append(run_lock(storage_data,i))tasks.append(run_sem(storage_data,i))asyncio.run(main_run(*tasks)) |
aioprint | aioprintaioprintprovides an asynchronous interface forprintby usingaiofilesas a backend.InstallationUsing PyPIpip3 install -U aioprintUsing git with this GitHub repopip3 install -U git+https://github.com/crrapi/aioprintUsageimportasyncioimportsysimportaioprintclassA:asyncdef__aiostr__(self):# The __aiostr__ magic method is preferred# over the __str__ method to provide# a coroutine interfacereturn"pony trick yasuo"asyncdefmain():awaitprint(["sub",2,"pew"],"he is great",end="",sep="LOL")a=A()awaitprint("error",file=sys.stderr)awaitprint(a,file="out.txt")loop=asyncio.get_event_loop()loop.run_until_complete(main())AcknowledgementsSpecial thanks toGelbpunkt aka Adrianfor reviving this and making it useful |
aioproc | aioprocSkeleton project created by Python Project Wizard (ppw)Free software: MITDocumentation:https://aioproc.readthedocs.ioFeaturesTODOCreditsThis package was created withCookiecutterand thezillionare/cookiecutter-pypackageproject template. |
aioprocessing | aioprocessingaioprocessingprovides asynchronous,asynciocompatible, coroutine
versions of many blocking instance methods on objects in themultiprocessinglibrary. To usedillfor universal pickling, install usingpip install aioprocessing[dill]. Here's an example demonstrating theaioprocessingversions ofEvent,Queue, andLock:importtimeimportasyncioimportaioprocessingdeffunc(queue,event,lock,items):""" Demo worker function.This worker function runs in its own process, and usesnormal blocking calls to aioprocessing objects, exactlythe way you would use oridinary multiprocessing objects."""withlock:event.set()foriteminitems:time.sleep(3)queue.put(item+5)queue.close()asyncdefexample(queue,event,lock):l=[1,2,3,4,5]p=aioprocessing.AioProcess(target=func,args=(queue,event,lock,l))p.start()whileTrue:result=awaitqueue.coro_get()ifresultisNone:breakprint("Got result{}".format(result))awaitp.coro_join()asyncdefexample2(queue,event,lock):awaitevent.coro_wait()asyncwithlock:awaitqueue.coro_put(78)awaitqueue.coro_put(None)# Shut down the workerif__name__=="__main__":loop=asyncio.get_event_loop()queue=aioprocessing.AioQueue()lock=aioprocessing.AioLock()event=aioprocessing.AioEvent()tasks=[asyncio.ensure_future(example(queue,event,lock)),asyncio.ensure_future(example2(queue,event,lock)),]loop.run_until_complete(asyncio.wait(tasks))loop.close()The aioprocessing objects can be used just like their multiprocessing
equivalents - as they are infuncabove - but they can also be
seamlessly used inside ofasynciocoroutines, without ever blocking
the event loop.What's newv2.0.1Fixed a bug that kept theAioBarrierandAioEventproxies returned fromAioManagerinstances from working. Thanks to Giorgos Apostolopoulos for the fix.v2.0.0Add support for universal pickling usingdill, installable withpip install aioprocessing[dill]. The library will now attempt to importmultiprocess, falling back to stdlibmultiprocessing. Force stdlib behaviour by setting a non-empty environment variableAIOPROCESSING_DILL_DISABLED=1. This can be used to avoiderrorswhen attempting to combineaioprocessing[dill]with stdlibmultiprocessingbased objects likeconcurrent.futures.ProcessPoolExecutor.How does it work?In most cases, this library makes blocking calls tomultiprocessingmethods
asynchronous by executing the call in aThreadPoolExecutor, usingasyncio.run_in_executor().
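A minimal sketch of that delegation pattern, with illustrative names rather than the library's actual internals:

    import asyncio
    import multiprocessing
    from concurrent.futures import ThreadPoolExecutor

    class SketchAioLock:
        # Illustrative only: wraps a blocking multiprocessing.Lock so that
        # acquiring it from a coroutine never blocks the event loop.
        def __init__(self) -> None:
            self._lock = multiprocessing.Lock()
            self._executor = ThreadPoolExecutor(max_workers=1)

        async def coro_acquire(self) -> bool:
            loop = asyncio.get_running_loop()
            # The blocking acquire() runs in a worker thread; the coroutine
            # only awaits its completion.
            return await loop.run_in_executor(self._executor, self._lock.acquire)

        def release(self) -> None:
            self._lock.release()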
It doesnotre-implement multiprocessing using asynchronous I/O. This means
there is extra overhead added when you useaioprocessingobjects instead ofmultiprocessingobjects, because each one is generally introducing aThreadPoolExecutorcontaining at least onethreading.Thread. It also means
that all the normal risks you get when you mix threads with fork apply here, too
(Seehttp://bugs.python.org/issue6721for more info).The one exception to this isaioprocessing.AioPool, which makes use of the
existingcallbackanderror_callbackkeyword arguments in the variousPool.*_asyncmethods to run them asasynciocoroutines. Note thatmultiprocessing.Poolis actually using threads internally, so the thread/fork
mixing caveat still applies.
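A rough sketch of that callback-to-coroutine bridge, shown here with a plainmultiprocessing.Pool(illustrative code, not the library's actual implementation):

    import asyncio
    import multiprocessing

    def square(x: int) -> int:
        return x * x

    async def coro_apply(pool, func, *args):
        loop = asyncio.get_running_loop()
        future = loop.create_future()
        # Pool invokes these callbacks on its result-handler thread, so hop
        # back onto the event loop thread to resolve the future safely.
        pool.apply_async(
            func, args,
            callback=lambda r: loop.call_soon_threadsafe(future.set_result, r),
            error_callback=lambda e: loop.call_soon_threadsafe(future.set_exception, e),
        )
        return await future

    async def main():
        with multiprocessing.Pool(2) as pool:
            print(await coro_apply(pool, square, 7))  # prints 49

    if __name__ == "__main__":
        asyncio.run(main())

Eachmultiprocessingclass is replaced by an equivalentaioprocessingclass,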
distinguished by theAioprefix. So,PoolbecomesAioPool, etc. All methods
that could block on I/O also have a coroutine version that can be used withasyncio. For example,multiprocessing.Lock.acquire()can be replaced withaioprocessing.AioLock.coro_acquire(). You can pass anasyncioEventLoop object to anycoro_*method using theloopkeyword argument. For example,lock.coro_acquire(loop=my_loop).Note that you can also use theaioprocessingsynchronization primitives as replacements
for their equivalentthreadingprimitives, in single-process, multi-threaded programs
that useasyncio.What parts of multiprocessing are supported?Most of them! All methods that could do blocking I/O in the following objects
have equivalent versions inaioprocessingthat extend themultiprocessingversions by adding coroutine versions of all the blocking methods.PoolProcessPipeLockRLockSemaphoreBoundedSemaphoreEventConditionBarrierconnection.Connectionconnection.Listenerconnection.ClientQueueJoinableQueueSimpleQueueAllmanagers.SyncManagerProxyversions of the items above (SyncManager.Queue,SyncManager.Lock(), etc.).What versions of Python are compatible?aioprocessingwill work out of the box on Python 3.5+.GotchasKeep in mind that, while the API exposes coroutines for interacting withmultiprocessingAPIs, internally they are almost always being delegated
to aThreadPoolExecutor. This means the caveats that apply when usingThreadPoolExecutorwithasyncioapply: namely, you won't be able to
cancel any of the coroutines, because the work being done in the worker
thread can't be interrupted. |
aioprometheus | aioprometheusis a Prometheus Python client library for asyncio-based
applications. It provides metrics collection and serving capabilities for
use with Prometheus and compatible monitoring systems. It supports exporting
metrics into text and pushing metrics to a gateway.The ASGI middleware inaioprometheuscan be used in FastAPI/Starlette and
Quart applications.aioprometheuscan also be used in other kinds of asyncio
applications.The project documentation can be found onReadTheDocs.Install$pipinstallaioprometheusThe ASGI middleware does not have any external dependencies but the Starlette
and Quart convenience functions that handle metrics requests do.If you plan on using the ASGI middleware in a Starlette / FastAPI application
then you can install the extra dependencies alongsideaioprometheusby adding
extras to the install.$pipinstallaioprometheus[starlette]If you plan on using the ASGI middleware in a Quart application then you can
install the extra dependencies alongsideaioprometheusby adding extras
to the install.$pipinstallaioprometheus[quart]A Prometheus Push Gateway client and a HTTP service are included, but their
dependencies are not installed by default. You can install them alongsideaioprometheusby adding extras to the install.$pipinstallaioprometheus[aiohttp]Multiple optional dependencies can be listed at once, such as:$pipinstallaioprometheus[aiohttp,starlette,quart]
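With theaiohttpextra installed, the Push Gateway client can push the default registry to a gateway; a brief sketch (the gateway address below is a placeholder):

    import asyncio
    from aioprometheus import REGISTRY, Counter
    from aioprometheus.pusher import Pusher

    async def main() -> None:
        pusher = Pusher("my-job", "http://127.0.0.1:9091")
        events_counter = Counter("events", "Number of events.")
        events_counter.inc({"kind": "timer_expiry"})
        # Replace the metrics for this job on the gateway.
        await pusher.replace(REGISTRY)

    asyncio.run(main())

UsageThere are two basic steps involved in using aioprometheus; the first is to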
instrument your software by creating metrics to monitor events and the second
is to expose the metrics to a collector.Creating a new metric is easy. First, import the appropriate metric from
aioprometheus. In the example below it’s a Counter metric. Next, instantiate
the metric with a name and a help string. Finally, update the metric when an
event occurs. In this case the counter is incremented.fromaioprometheusimportCounterevents_counter=Counter("events_counter","Total number of events.",)events_counter.inc({"kind":"event A"})By default, metrics get registered into the default collector registry which
is available ataioprometheus.REGISTRY.A number of convenience decorator functions are included in aioprometheus that
can assist with automatically updating metrics. Theexamplesdirectory
contains various decorator examples.
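For instance, thetimerdecorator observes a coroutine's execution time in aSummarymetric; a short sketch based on the documented decorators:

    import asyncio
    from aioprometheus import Summary, timer

    request_time = Summary("request_processing_seconds", "Time spent processing a request")

    @timer(request_time)
    async def handle_request(duration: float) -> None:
        # The decorator records how long each call takes.
        await asyncio.sleep(duration)

    asyncio.run(handle_request(0.1))

Once your software is instrumented with various metrics you'll want to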
expose them to Prometheus or a compatible metrics collector. There are
multiple strategies available for this and the right choice depends on the
kind of thing being instrumented.If you are instrumenting a Starlette, FastAPI or Quart application then the
easiest option for adding Prometheus metrics is to use the ASGI Middleware
provided byaioprometheus.The ASGI middleware provides a default set of metrics that include counters
for total requests received, total responses sent, exceptions raised and
response status codes for route handlers.The example below shows how to use the aioprometheus ASGI middleware in a
FastAPI application. FastAPI is built upon Starlette so using the middleware
in Starlette would be the same.fromfastapiimportFastAPI,Request,ResponsefromaioprometheusimportCounter,MetricsMiddlewarefromaioprometheus.asgi.starletteimportmetricsapp=FastAPI()# Any custom application metrics are automatically included in the exposed# metrics. It is a good idea to attach the metrics to 'app.state' so they# can easily be accessed in the route handler - as metrics are often# created in a different module than where they are used.app.state.users_events_counter=Counter("events","Number of events.")app.add_middleware(MetricsMiddleware)app.add_route("/metrics",metrics)@app.get("/")asyncdefroot(request:Request):returnResponse("FastAPI Middleware Example")@app.get("/users/{user_id}")asyncdefget_user(request:Request,user_id:str,):request.app.state.users_events_counter.inc({"path":request.scope["path"]})returnResponse(f"{user_id}")if__name__=="__main__":importuvicornuvicorn.run(app)Other examples in theexamples/frameworksdirectory show how aioprometheus
can be used within various web application frameworks.The next example shows how to use the Service HTTP endpoint to provide a
dedicated metrics endpoint for other applications such as long running
distributed system processes.The Service object requires optional extras to be installed so make sure you
install aioprometheus with the ‘aiohttp’ extras.$pipinstallaioprometheus[aiohttp]"""
This example demonstrates how the ``aioprometheus.Service`` can be used to
expose metrics on a HTTP endpoint.
.. code-block:: console
(env) $ python simple-service-example.py
Serving prometheus metrics on: http://127.0.0.1:8000/metrics
You can open the URL in a browser or use the ``curl`` command line tool to
fetch metrics manually to verify they can be retrieved by Prometheus server.
"""importasyncioimportsocketfromaioprometheusimportCounterfromaioprometheus.serviceimportServiceasyncdefmain():service=Service()events_counter=Counter("events","Number of events.",const_labels={"host":socket.gethostname()})awaitservice.start(addr="127.0.0.1",port=8000)print(f"Serving prometheus metrics on:{service.metrics_url}")# Now start another coroutine to periodically update a metric to# simulate the application making some progress.asyncdefupdater(c:Counter):whileTrue:c.inc({"kind":"timer_expiry"})awaitasyncio.sleep(1.0)awaitupdater(events_counter)# Finally stop serverawaitservice.stop()if__name__=="__main__":try:asyncio.run(main())exceptKeyboardInterrupt:passA counter metric is used to track the number of while loop iterations executed
by the ‘updater’ coroutine. The Service is started and then a coroutine is
started to periodically update the metric to simulate progress.The Service can be configured to bind to a user-defined network interface and
port.
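For example, reusing thestartcall from the example above:

    # Bind the metrics HTTP endpoint to a specific interface and port.
    await service.start(addr="192.168.1.10", port=9100)

When the Service receives a request for metrics it forms a response by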
rendering the contents of its registry into the appropriate format. By default
the Service uses the default collector registry, which isaioprometheus.REGISTRY. The Service can be configured to use a different
registry by passing one in as an argument to the Service constructor.Licenseaioprometheusis released under the MIT license.aioprometheusoriginates from the (now deprecated)prometheus pythonpackage which
was released under the MIT license.aioprometheuscontinues to use the MIT
license and contains a copy of the original MIT license from theprometheus-pythonproject as instructed by the original license. |
aioprometheus-api-client | aioprometheus-api-clientA Python wrapper for the Prometheus http api and some tools for metrics processing.InstallationTo install the latest release:pip install aioprometheus-api-clientTo install directly from this branch:pip install git+https://github.com/trisongz/aioprometheus-api-clientLinksSlackGoogle ChatDocumentationGetting StartedUsagePrometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics (time series data) from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true. The raw time series data obtained from a Prometheus host can sometimes be hard to interpret. To help better understand these metrics we have created a Python wrapper for the Prometheus http api for easier metrics processing and analysis.Theprometheus-api-clientlibrary consists of multiple modules which assist in connecting to a Prometheus host, fetching the required metrics and performing various aggregation operations on the time series data.Connecting and Collecting Metrics from a Prometheus hostThePrometheusConnectmodule of the library can be used to connect to a Prometheus host. This module is essentially a class created for the collection of metrics from a Prometheus host. It stores the following connection parameters:url- (str) url for the prometheus hostheaders– (dict) A dictionary of http headers to be used to communicate with the host. Example: {“Authorization”: “bearer my_oauth_token_to_the_host”}disable_ssl– (bool) If set to True, will disable ssl certificate verification for the http requests made to the prometheus hostfromprometheus_api_clientimportPrometheusConnectprom=PrometheusConnect(url="<prometheus-host>",disable_ssl=True)# Get the list of all the metrics that the Prometheus host scrapesprom.all_metrics()You can also fetch the time series data for a specific metric using custom queries as follows:prom=PrometheusConnect()my_label_config={'cluster':'my_cluster_id','label_2':'label_2_value'}prom.get_current_metric_value(metric_name='up',label_config=my_label_config)# Here, we are fetching the values of a particular metric nameprom.custom_query(query="prometheus_http_requests_total")# Now, lets try to fetch the `sum` of the metricsprom.custom_query(query="sum(prometheus_http_requests_total)")We can also use custom queries for fetching the metric data in a specific time interval. 
For example, let's try to fetch the past 2 days of data for a particular metric in chunks of 1 day:# Import the required datetime functionsfromprometheus_api_client.utilsimportparse_datetimefromdatetimeimporttimedeltastart_time=parse_datetime("2d")end_time=parse_datetime("now")chunk_size=timedelta(days=1)metric_data=prom.get_metric_range_data("up{cluster='my_cluster_id'}",# this is the metric name and label configstart_time=start_time,end_time=end_time,chunk_size=chunk_size,)For more functions included in thePrometheusConnectmodule, refer to thisdocumentation.Understanding the Metrics Data FetchedTheMetricsListmodule initializes a list of Metric objects for the metrics fetched from a Prometheus host as a result of a promql query.# Import the MetricsList and Metric modulesfromprometheus_api_clientimportPrometheusConnect,MetricsList,Metricprom=PrometheusConnect()my_label_config={'cluster':'my_cluster_id','label_2':'label_2_value'}metric_data=prom.get_metric_range_data(metric_name='up',label_config=my_label_config)metric_object_list=MetricsList(metric_data)# metric_object_list will be initialized as# a list of Metric objects for all the# metrics downloaded using get_metric query# We can see what each of the metric objects look likeforiteminmetric_object_list:print(item.metric_name,item.label_config,"\n")Each of the items in themetric_object_listare initialized as aMetricclass object. Let's look at one of the metrics from themetric_object_listto learn more about theMetricclass:my_metric_object=metric_object_list[1]# one of the metrics from the listprint(my_metric_object)For more functions included in theMetricsListandMetricsmodule, refer to thisdocumentation.Additional Metric FunctionsTheMetricclass also supports multiple functions such as adding, equating and plotting various metric objects.Adding MetricsYou can add add two metric objects for the same time-series as follows:metric_1=Metric(metric_data_1)metric_2=Metric(metric_data_2)metric_12=metric_1+metric_2# will add the data in ``metric_2`` to ``metric_1``# so if any other parameters are set in ``metric_1``# will also be set in ``metric_12``# (like ``oldest_data_datetime``)Equating MetricsOverloading operator =, to check whether two metrics are the same (are the same time-series regardless of their data)metric_1=Metric(metric_data_1)metric_2=Metric(metric_data_2)print(metric_1==metric_2)# will print True if they belong to the same time-seriesPlotting Metric ObjectsPlot a very simple line graph for the metric time series:fromprometheus_api_clientimportPrometheusConnect,MetricsList,Metricprom=PrometheusConnect()my_label_config={'cluster':'my_cluster_id','label_2':'label_2_value'}metric_data=prom.get_metric_range_data(metric_name='up',label_config=my_label_config)metric_object_list=MetricsList(metric_data)my_metric_object=metric_object_list[1]# one of the metrics from the listmy_metric_object.plot()Getting Metrics Data as pandas DataFramesTo perform data analysis and manipulation, it is often helpful to have the data represented using apandas DataFrame. There are two modules in this library that can be used to process the raw metrics fetched into a DataFrame.TheMetricSnapshotDataFramemodule converts "current metric value" data to a DataFrame representation, and theMetricRangeDataFrameconverts "metric range values" data to a DataFrame representation. 
Example usage of these classes can be seen below:importdatetimeasdtfromprometheus_api_clientimportPrometheusConnect,MetricSnapshotDataFrame,MetricRangeDataFrameprom=PrometheusConnect()my_label_config={'cluster':'my_cluster_id','label_2':'label_2_value'}# metric current valuesmetric_data=prom.get_current_metric_value(metric_name='up',label_config=my_label_config,)metric_df=MetricSnapshotDataFrame(metric_data)metric_df.head()""" Output:+-------------------------+-----------------+------------+-------+| __name__ | cluster | label_2 | timestamp | value |+==========+==============+=================+============+=======+| up | cluster_id_0 | label_2_value_2 | 1577836800 | 0 |+-------------------------+-----------------+------------+-------+| up | cluster_id_1 | label_2_value_3 | 1577836800 | 1 |+-------------------------+-----------------+------------+-------+"""# metric values for a range of timestampsmetric_data=prom.get_metric_range_data(metric_name='up',label_config=my_label_config,start_time=(dt.datetime.now()-dt.timedelta(minutes=30)),end_time=dt.datetime.now(),)metric_df=MetricRangeDataFrame(metric_data)metric_df.head()""" Output:+------------+------------+-----------------+--------------------+-------+| | __name__ | cluster | label_2 | value |+-------------------------+-----------------+--------------------+-------+| timestamp | | | | |+============+============+=================+====================+=======+| 1577836800 | up | cluster_id_0 | label_2_value_2 | 0 |+-------------------------+-----------------+--------------------+-------+| 1577836801 | up | cluster_id_1 | label_2_value_3 | 1 |+-------------------------+-----------------+------------=-------+-------+"""For more functions included in theprometheus-api-clientlibrary, please refer to thisdocumentation.Running testsPROM_URL="http://demo.robustperception.io:9090/" pytestCode Styling and LintingPrometheus Api client usespre-commitframework to maintain the code linting and python code styling.The AICoE-CI would run the pre-commit check on each pull request.We encourage our contributors to follow the same pattern, while contributing to the code.we would like to keep the same standard and maintain the code for better quality and readability.The pre-commit configuration file is present in the repository.pre-commit-config.yamlIt contains the different code styling and linting guide which we use for the application.we just need to runpre-commitbefore raising a Pull Request.Following command can be used to run the pre-commit:pre-commit run --all-filesIf pre-commit is not installed in your system, it can be install with :pip install pre-commit |
aioprometheus-binary-format | No description available on PyPI. |
aio-prometheus-client | Asyncio Prometheus ClientPrometheus HTTP client for asyncio Pythoninstallpip install aio-prometheus-clientusageimport asyncio
from aio_prometheus_client import PrometheusClient

async def main():
    client = PrometheusClient('http://127.0.0.1:9090')
    result = await client.query('http_requests_total')
    for s in result.series:
        print(s.metric, s.value.timestamp, s.value.value)

asyncio.run(main()) |
aioprometheus-summary | aioprometheus-summaryAioprometheus summary with quantiles over configurable sliding time windowInstallationpip install aioprometheus-summary==0.1.0This package can be found onPyPI.CollectingBasic usagefromaioprometheus_summaryimportSummarys=Summary("request_latency_seconds","Description of summary")s.observe({},4.7)With labelsfromaioprometheus_summaryimportSummarys=Summary("request_latency_seconds","Description of summary")s.observe({"method":"GET","endpoint":"/profile"},1.2)s.observe({"method":"POST","endpoint":"/login"},3.4)With custom quantiles and precisionsBy default, metrics are observed for next quantile-precision pairs((0.50, 0.05), (0.90, 0.01), (0.99, 0.001))but you can provide your own value when creating the metric.fromaioprometheus_summaryimportSummarys=Summary("request_latency_seconds","Description of summary",invariants=((0.50,0.05),(0.75,0.02),(0.90,0.01),(0.95,0.005),(0.99,0.001)),)s.observe({},4.7)With custom time window settingsTypically, you don't want to have a Summary representing the entire runtime of the application,
but you want to look at a reasonable time interval. Summary metrics implement a configurable sliding time window.The default is a time window of 10 minutes and 5 age buckets, i.e. the time window is 10 minutes wide, and
we slide it forward every 2 minutes, but you can configure these values for your own purposes.fromaioprometheus_summaryimportSummarys=Summary("request_latency_seconds","Description of summary",# time window 5 minutes wide with 10 age buckets (sliding every 30 seconds)max_age_seconds=5*60,age_buckets=10,)s.observe({},4.7)QueryingSuppose we have a metric:fromaioprometheus_summaryimportSummarys=Summary("request_latency_seconds","Description of summary")To show request latency bymethod,endpointandquantile, use the following query:max by (method, endpoint, quantile) (request_latency_seconds)To show only the 99th quantile:max by (method, endpoint) (request_latency_seconds{quantile="0.99"}) |
aioprometheus-thin | aioprometheus-thinThis package wraps the aioprometheus package and gives the developer an easy way to write AsyncIO Prometheus custom collectors, without needing any deep AsyncIO knowledge.Hi !Hello there.
This is my first Python package; it's pretty small but has a big impact on my way of working.
I would like you guys to help me improve my Python skills with some issues/implementation improvements.
Thank you.Usage ExamplesWill be added soon. |
aioproperty | Installpip3installaiopropertyDocumentationYou can find documentationhereDescriptionaioproperty presents async properties with both async getter and setter in one place.Example:fromaiopropertyimportaiopropertyimportasyncioclassSomeClass:@aiopropertyasyncdefhello(self,value):awaitasyncio.sleep(1)returnvaluesome_obj=SomeClass()some_obj.hello='hello'asyncdefrun():print(awaitsome_obj.hello)asyncio.run(run())aioproperty is not a property in the classic meaning: it keeps values inside asyncio tasks. Once
you set a new value, it is scheduled in a task. If another task is currently running, the new one will wait until it has finished.
When you get a value, you actually get the current task, and you can await it to get the value.
What's more, you can use math on it without awaiting, like this:other=some_obj.hello+' byby'print(awaitother)We also introduce chaining:classSomeClass:@aiopropertyasyncdefhello(self,value):awaitasyncio.sleep(1)returnvalue@hello.chainasyncdefdo_more(self,value):push_value(value)Chains work like a reducer: each function is applied to the new value iteratively, and it can return a value or return nothing.
If nothing is returned, the value is kept as-is. Any chained function is inserted at the end of the chain by default, but
you can control that with the priority parameter, or with the is_first parameter. If is_first is True, it will be inserted at the
beginning of the chain.You can also use inheritance and modify inherited properties with our special decoratorinject:classParent:@aiopropertyasyncdefhello(self,value):awaitasyncio.sleep(1)returnvalue@hello.chainasyncdefdo_more(self,value):push_value(value)classChild(Parent):@inject(Parent.hello,priority=100)asyncdefinjected_method(self,value):print('hello from injection')#another form of injection using name of a property@inject('hello')asyncdefinjected_method_2(self,value):print('hello from injection 2')Read more in ourdocs |
aioproxy | No description available on PyPI. |
aioproxyline | 🔗 Links🎓Documentation:CLICK🖱️Developer contacts:🐦 DependenciesLibraryDescriptionaiohttpAsynchronous HTTP Client/Server for asyncio and Python.pydanticJSON Data Validator |
aiops | Basic Utilities for aiops implementationsaiops is a library of basic utilities (including pretrained models/pipelines) for aiops implementations.aiopsextends PyTorch to provide text data processing functions.InstallationMake sure you have Python 3.6+ and PyTorch 1.0+. You can then installaiopsusing
pip:pipinstallaiopsAuthorsAmandeep- DeveloperCitingIf you find aiops useful for an academic publication, then please use the following BibTeX to cite it:@misc{aiops,
author = {Amandeep},
title = {aiops: Basic utilities for aiops implementations},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/amandeep1991/aiops}},
} |
aiops4b-NTTDataUK | AIOps4BAIOps4B (AIOps for business) is a framework for business decision making.Below is the description of what the framework does...Trend Analysis, Root-cause identification and Predicting the future trends accordingly:AIOps4B performs trend analysis on a given time-series dataset (For e.g, Revenue values by date) with additional related metrics that has impact on the data (For e.g, product ratings, delivery timelines, discounts) to detect past trends with changepoints considering external events (for e.g. weather, seasonality, holidays), internal events (for e.g. marketing campaign).Each trend change point (for e.g. from down trend to up trend) will be labeled with its root-cause (for e.g. internal or external event, or some metrics that has impact on the Revenue) The un-known anomolies/events (for e.g. black friday) will need to be detected through manual analysis and then defined as internal/external events to be labeled. Once all past trends are identified/labeled with its root-cause, the future trends can be predicted after providing the related future events.Detailed Root-cause analysis & Recommendations:For each root-cause identified/labeled, it further defines the impact of each event and metrics to the overall trend as percentages (For e.g. the imapct of bad product rating is %25 on the Revenue, etc.) to make recommendations accordingly.** Optimized Recommendations:**After recommending that (for e.g. if the future Revenue is down), the reason of the past/future trends was because of an event or metric (for e.g. product reviews), it further make optimized recommendation after finding the real cause the problem (why the product reviews was bad and what to do with it)How it can be used:AIOps4B is designed as pyhton package currently with three API's. The pyhton package can be downloaded fromhttps://pypi.org/project/aiops4b-NTTDataUK/The package can be wrapped/deployed as REST API and embedded in any application / visualization dashboard in the future. As of now, it is used within a pyhton based dash application to display the results in interactive graphics.**Prediction API **predicts the future trends only and compares the predicted value with actuals on an interactive graph. It returns the graph representation, forecasted trend as a result. The graph can be displayed either on a Dash App or can be embedded into Visualization tools.Recommendation APIanalyzes the past trends and predicts the future trends with its root-causes and labels them on an interactive graph. It returns the graph representation and related recommendations (as what to do with each trend) as a result . The graph can be displayed either on a Dash App or can be embedded into Visualization tools.Optimization APIanalyzes the past trends and predicts the future trends with its root-causes and labels them on an interactive graph. It returns the graph representation and related recommendations (as what to do with each trend) as a result . 
The graph can be displayed either on a Dash App or can be embedded into Visualization tools.What are the use-cases it can be used in:**Enrich existing KPI dashboards: **If the dataset is ready for a given KPI with its related metrics that are contributing to the revenue, the REST APIs can be called to provide past/future complex trend analysis with their root causes and recommendations to fix the issues.Enrich applications with interactive trend analysis/recommendations - it can be embedded into applications of any sort.Respond to business queries: Integrate with Chatbots, Alexa, etc. for business users to ask business questions about KPIs and metrics and get root causes and recommendations to fix them. This can be an innovative enabler, especially for executive stakeholders, to respond to their queries such as "Can we achieve our Revenue targets in the coming quarter?"End to end Business process analysis/issue mining:It can be integrated with business processes (such as the Order to Cash business process) to analyze the overall health of the Revenue generated, define the root causes, and alert if the objective Revenue KPI is not met or will not be met in the coming days/months.Next steps:Use it in real-life projects and improve the existing functionality further.Extend the framework with self-learning. After making recommendations, it will need to analyze the outcomes and optimize itself.
aiops4b-pkg-julgonza | Example PackageThis is a simple example package. You can useGithub-flavored Markdownto write your content. |
aiopsconnector | No description available on PyPI. |
aiops-crypto-utils | No description available on PyPI. |
aiopslibs | No description available on PyPI. |
ai-ops-libs | No description available on PyPI. |
aiopslibscancer | IntroductionTODO: Give a short introduction of your project. Let this section explain the objectives or the motivation behind this project.Getting StartedTODO: Guide users through getting your code up and running on their own system. In this section you can talk about:Installation processSoftware dependenciesLatest releasesAPI referencesBuild and TestTODO: Describe and show how to build your code and run the tests.ContributeTODO: Explain how other users and developers can contribute to make your code better.If you want to learn more about creating good readme files then refer the followingguidelines. You can also seek inspiration from the below readme files:ASP.NET CoreVisual Studio CodeChakra Core |
aiops-staticfiles | No description available on PyPI. |
aiopubsub | Simple publish-subscribe pattern for asyncio applications.WhyWhen building big applications, separation of concerns is a great way to keep things manageable.
In messaging systems, the publish-subscribe pattern is often used to decouple data producers and data
consumers. We went a step ahead and designed even the internals of our applications around this pattern.We explain our thinking and the workings ofaiopubsubin detail in our articleDesign your app using the pub-sub pattern with aiopubsub.
We recommend reading it before usingaiopubsubin your project.Installationaiopubsubis only compatible with Python 3.8 and higher. There are no plans to support older versions.aiopubsubisavailable on PyPIand you can install it with:pip install aiopubsuborpoetry add aiopubsubHow to use itThe following comprehensive example is explained step-by-step
in our article“Design your app using the pub-sub pattern with aiopubsub”.importasyncioimportdataclassesimportdecimalimportaiopubsub@dataclasses.dataclassclassTrade:timestamp:floatquantity:intprice:decimal.Decimalasyncdefon_trade(key:aiopubsub.Key,trade:Trade)->None:print(f'Processing trade ={trade}with key ={key}.')asyncdefon_nyse_trade(key:aiopubsub.Key,trade:Trade)->None:print(f'Processing trade ={trade}with key ={key}that happened in NYSE')asyncdefmain():# create an aiopubsub hubhub=aiopubsub.Hub()# create a sample of data to sendtrade=Trade(timestamp=123.5,quantity=56,price=decimal.Decimal('1639.43'))# subscriber listens on every trade and calls the `on_trade` functionsubscriber=aiopubsub.Subscriber(hub,'trades')subscribe_key=aiopubsub.Key('*','trade','*')subscriber.add_async_listener(subscribe_key,on_trade)# publisher has a NASDAQ prefix and sends the trade that happened on Google stockpublisher=aiopubsub.Publisher(hub,prefix=aiopubsub.Key('NASDAQ'))publish_key=aiopubsub.Key('trade','GOOGL')publisher.publish(publish_key,trade)# sleep so the event loop can process the actionawaitasyncio.sleep(0.001)# expected output:# Processing trade = Trade(timestamp=123.5, quantity=56, price=Decimal('1639.43')) with key = ('NASDAQ', 'trade', 'GOOGL').# sample from another stock exchangetrade_nyse=Trade(timestamp=127.45,quantity=67,price=decimal.Decimal('1639.44'))# subscribe only for the NYSE exchangesubscribe_key_nyse=aiopubsub.Key('NYSE','trade','*')subscriber.add_async_listener(subscribe_key_nyse,on_nyse_trade)# publish NYSE tradepublisher_nyse=aiopubsub.Publisher(hub,prefix=aiopubsub.Key('NYSE'))publisher_nyse.publish(aiopubsub.Key('trade','GOOGL'),trade_nyse)# sleep so the event loop can process the actionawaitasyncio.sleep(0.001)# expected output:# Processing trade = Trade(timestamp=127.45, quantity=67, price=Decimal('1639.44')) with key = ('NYSE', 'trade', 'GOOGL').# Processing trade = Trade(timestamp=127.45, quantity=67, price=Decimal('1639.44')) with key = ('NYSE', 'trade', 'GOOGL') that happened in NYSE# clean the subscriber before the end of the programawaitsubscriber.remove_all_listeners()if__name__=='__main__':asyncio.run(main())Aiopubsub will uselogwoodif it is installed, otherwise it will default
to the standard logging module. Note thatlogwoodis required to run tests.ArchitectureHubaccepts messages fromPublishersand routes them toSubscribers. Each message is routed by itsKey- an iterable of strings forming a hierarchic namespace. Subscribers may subscribe to wildcard keys,
where any part of the key may be replaced with a*(star).addedSubscriberandremovedSubscribermessagesWhen a new subscriber is added the Hub sends this message{
    "key": ("key", "of", "added", "subscriber"),
    "currentSubscriberCount": 2
}
subscribed.When a subscriber is removed a message in the same format is sent, but this time under the key('Hub', 'removedSubscriber', 'key', 'of', 'added', 'subscriber').
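A short sketch of listening for these Hub notifications, assuming a trailing star may stand in for the rest of the key as in the example above:

    import aiopubsub

    hub = aiopubsub.Hub()
    monitor = aiopubsub.Subscriber(hub, 'monitor')

    async def on_added(key: aiopubsub.Key, message) -> None:
        print(f'Subscriber added under {key}: {message["currentSubscriberCount"]} total')

    monitor.add_async_listener(aiopubsub.Key('Hub', 'addedSubscriber', '*'), on_added)

ContributingPull requests are welcome! In particular, we are aware that the documentation could be improved.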
If anything aboutaiopubsubis unclear, please feel free tosimply open an issueand we will do our best
to advise and explain 🙂aiopubsubwas made byQuantlane, a systematic trading firm.
We design, build and run our own stock trading platform. |
aio-pubsub | A generic interface wrapping multiple backends to provide a consistent pubsub API.Installationpipinstallaio-pubsub# for redis backendpipinstallaio-pubsub[aioredis]# for postgresql backendpipinstallaio-pubsub[aiopg]UsageTo use it, implement your own pubsub backend from the provided interfaces, or use one of the backends
fromaio_pubsub.backendspackagefromaio_pubsub.backends.memoryimportMemoryPubSubpubsub=MemoryPubSub()# Create subscribersubscriber=awaitpubsub.subscribe("a_chan")# Push messageawaitpubsub.publish("a_chan","hello world!")awaitpubsub.publish("a_chan","hello universe!")# And listening channeltry:asyncformessageinsubscriber:print(message,flush=True)exceptKeyboardInterrupt:print("Finish listening")Supported backendsDisclaimer: I would not advise you to use this backend, because it is shown only for testing purposes.
You would be better off developing your own implementation.memoryredispostgresql |
aiopubsub-py3 | aio_pubsub
##########

Installation
============

.. code-block:: shell

    pip install aiopubsub-py3
    pip install aiopubsub-py3[redis]

Examples
========

2.1 Publishing

.. code-block:: python

    from aiopubsub import Pubsub, PubsubRole

    async def main():
        async with Pubsub(Pubsub.REDIS, port=16379, namespace="cs", role=PubsubRole.PUB) as pub:
            count = await pub.publish("foo", {"test": 1})
            print(count)
            count = await pub.publish("foo", {"test": 2})
            print(count)
            count = await pub.publish("foo", {"test": 3})
            print(count)

2.2 Subscribing

.. code-block:: python

    from aiopubsub import Pubsub, PubsubRole

    async def print_msg(channel, msg):
        print(f"sub msg: {channel} -- {msg}")

    async def main():
        channels = ["foo"]
        async with Pubsub(Pubsub.REDIS, port=16379, namespace="cs", role=PubsubRole.SUB) as sub:
            await sub.subscribe(*channels)
            async for k in sub.listen(handler=print_msg):
                print(k)
            await sub.unsubscribe(*channels)

2.3 Pattern subscribing

.. code-block:: python

    from aiopubsub import Pubsub, PubsubRole

    async def print_msg(channel, msg):
        print(f"psub msg: {channel} -- {msg}")

    async def main():
        channels = ["foo*"]
        async with Pubsub(Pubsub.REDIS, port=16379, namespace="cs", role=PubsubRole.SUB) as psub:
            await psub.psubscribe(*channels)
            async for k in psub.listen(handler=print_msg):
                print(k)
            await psub.unsubscribe(*channels)

2.4 Automatic role detection (the role is decided by the first call made; it cannot be switched before close)

.. code-block:: python

    from aiopubsub import Pubsub

    async def print_msg(channel, msg):
        print(f"psub msg: {channel} -- {msg}")

    async def main():
        channels = ["foo*"]
        pubsub = Pubsub(Pubsub.REDIS, port=16379, namespace="cs")
        await pubsub.publish("foo", {"test": 1})  # role is set to PUB; succeeds
        await pubsub.subscribe(*channels)  # role is PUB, so subscribing raises an exception
        await pubsub.close()  # the role is released
        await pubsub.subscribe(*channels)  # role is set to SUB; succeeds
        print(pubsub.backend.role)
        await pubsub.publish("foo", {"test": 1})  # role is SUB, so publishing raises an exception
        async with pubsub:
            await pubsub.unsubscribe(*channels)  # role is SUB; succeeds
        # leaving the async with scope releases the role
        await pubsub.publish("foo", {"test": 1})  # role is set to PUB; succeeds |
aiopulsar | AioPulsarAn asynchronous framework forApache Pulsar. |
aiopulsar-py | aiopulsar-pyaiopulsar-pyis a Python 3.5+ module that makes it possible to interact with pulsar servers with asyncio.aiopulsar-pyserves as an asynchronous wrapper for theofficial python pulsar-clientand preserves the look and
feel of the original pulsar-client. It is written using the async/await syntax and hence not compatible with Python
versions older than 3.5. Internally, aiopulsar-py employs threads to avoid blocking the event loop.aiopulsar-pytakes inspiration from other asyncio wrappers released in theaio-libs project.Basic exampleaiopulsar-pyis built around thepython pulsar-clientand provides the same api. You just need to use asynchronous context managers and await for every method. Setting up a
pulsar client that can be used to create readers, producers and consumers requires a call to theaiopulsar.connectmethod.importasyncioimportaiopulsarimportpulsarasyncdeftest_example():topic="persistent://some-test-topic/"asyncwithaiopulsar.connect("localhost")asclient:asyncwithclient.subscribe(topic=topic,subscription_name='my-subscription',consumer_name="my-consumer",initial_position=pulsar.InitialPosition.Earliest,)asconsumer:whileTrue:msg=awaitconsumer.receive()print(msg)loop=asyncio.get_event_loop()loop.run_until_complete(test_example())
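The same pattern should extend to producers. Assuming the wrapper mirrors pulsar-client'screate_producerandsend(this sketch is not verified against the wrapper's actual method names):

    import asyncio
    import aiopulsar

    async def produce_example():
        topic = "persistent://some-test-topic/"
        async with aiopulsar.connect("localhost") as client:
            # create_producer/send follow the blocking pulsar-client API.
            async with client.create_producer(topic=topic) as producer:
                await producer.send("hello aiopulsar".encode("utf-8"))

    loop = asyncio.get_event_loop()
    loop.run_until_complete(produce_example())

Installaiopulsar-pycannot be installed on Windows systems since the wrappedpulsar-clientlibrary only functions on Linux and MacOS.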
The package is available on PyPi and can be installed with:pipinstallaiopulsar-pyContributingYou can contribute to the project by reporting an issue. This is done via GitHub. Please make sure to include
information on your environment and please make sure that you can express the issue with a reproducible test case.You can also contribute by making pull requests. To install the project please use poetry:poetryinstallThe project relies onmypy,blackandflake8and these are configured as git hooks.
To configure the git hooks run:poetryrungithookssetupEvery time the git hooks are changed in the[tool.githooks]section ofpyproject.tomlyou will need to set up
the git hooks again with the command above. |
aiopulse | AiopulseAsynchronous library to control Rollease Acmeda Automate roller blinds via a version 1 Pulse Hub.The Rollease Acmeda Pulse Hub is a WiFi hub that communicates with Rollease Acmeda Automate roller blinds via a proprietary RF protocol.
This module communicates over a local area network using a proprietary binary protocol to issue commands to the Pulse Hub.
A module that supports version 2 Pulse Hubs has been developed separately here:https://pypi.org/project/aiopulse2/This module requires Python 3.4 or newer and uses asyncio.InstallingAvailable on PyPi here:https://pypi.org/project/aiopulse/, runpip install aiopulse.
Alternatively, download and extract a release and from within the folder containing setup.py runpython setup.py install.Demo.py commands:
discover: Find and connect to any hubs on the local network (uses udp broadcast discovery)
connect: Connect to all hubs and trigger update
disconnect: Disconnect all hubs
update: Refresh all information from hub
list: List currently connected hubs and their blinds, use to get the [hub id] and [blind id] for the following commands.
open [hub id] [blind id]: Open blind
close [hub id] [blind id]: Close blind
stop [hub id] [blind id]: Stop blind
moveto [hub id] [blind id] [% closed]: Open blind to percentage
health [hub id] [blind id]: Update the health of the blind
log [level]: Set the log level to one of (critical, error, warning, info, debug)
exit: Exit program |
aiopulse2 | aiopulse2Asynchronous library to control Rollease Acmeda Automate roller blinds with the Pulse v2 HubThis is an updated fork ofaiopulsefor the v2 hub (note: this isnotcompatible with the v1 hub, useaiopulsefor that). The protocol implementation uses a combination of WebSockets and a TCP connection using a serial like protocol. See the project wiki page for details.Requires Python 3.7 or later and uses asyncio andwebsockets.It has been primarily developed as an integration forHome Assistant.InstallingRunpip install aiopulse2.Demo.pyThis is an interactive interface to test the integration. The available commands are listed below.Use thelistcommand to get the id of the hubs/blinds.CommandDescriptionconnect [hub ip][hub ip]...]Connect to the hub at ip(s)disconnectDisconnect all hubslistList currently connected hubs and their blinds, use to get the [hub id] and [blind id] for the following commands.open [hub id][blind id]Open blindclose [hub id][blind id]Close blindstop [hub id][blind id]Stop blindmoveto [hub id][blind id] [% closed]Open blind to percentageexitExit programpulse_hub_cli.pyThis is a trivial work-in-progress aiopulse2 command-line-interface wrapper. It issues a command to a blind given the hub ip address, device name as defined in thePulse 2app and desired percentage closed. It then waits for the command to complete.python3 pulse_hub_cli.py '192.168.1.127' 'Office 1 of 3' 100close.shThis is an example application of pulse_hub_cli.py. It closes three blinds in sequence. In this case, it is useful to close the blinds one at a time because they share a small power supply.python3 pulse_hub_cli.py '192.168.1.127' 'Office 1 of 3' 100
python3 pulse_hub_cli.py '192.168.1.127' 'Office 2 of 3' 100
python3 pulse_hub_cli.py '192.168.1.127' 'Office 3 of 3' 100 |
aiopurpleair | 🟣 aiopurpleair: A Python3, asyncio-based library to interact with the PurpleAir APIaiopurpleairis a Python3, asyncio-based library to interact with thePurpleAirAPI.InstallationPython VersionsUsageChecking an API KeyGetting SensorsGetting a Single SensorGetting Nearby SensorsGetting a Map URLConnection PoolingContributingInstallationpipinstallaiopurpleairPython Versionsaiopurpleairis currently supported on:Python 3.10Python 3.11Python 3.12UsageIn-depth documentation on the API can be foundhere. Unless otherwise
noted,aiopurpleairendeavors to follow the API as closely as possible.Checking an API KeyTo check whether an API key is valid and what properties it has:importasynciofromaiopurpleairimportAPIasyncdefmain()->None:"""Run."""api=API("<API KEY>")response=awaitapi.async_check_api_key()# >>> response.api_key_type == ApiKeyType.READ# >>> response.api_version == "V1.0.11-0.0.41"# >>> response.timestamp_utc == datetime(2022, 10, 27, 18, 25, 41)asyncio.run(main())Getting SensorsimportasynciofromaiopurpleairimportAPIasyncdefmain()->None:"""Run."""api=API("<API_KEY>")response=awaitapi.sensors.async_get_sensors(["name"])# >>> response.api_version == "V1.0.11-0.0.41"# >>> response.data == {# >>> 131075: SensorModel(sensor_index=131075, name=Mariners Bluff),# >>> 131079: SensorModel(sensor_index=131079, name=BRSKBV-outside),# >>> }# >>> response.data_timestamp_utc == datetime(2022, 11, 3, 19, 25, 31)# >>> response.fields == ["sensor_index", "name"]# >>> response.firmware_default_version == "7.02"# >>> response.max_age == 604800# >>> response.timestamp_utc == datetime(2022, 11, 3, 19, 26, 29)asyncio.run(main())Method Parametersfields(required): The sensor data fields to includelocation_type(optional): An LocationType to filter bymax_age(optional): Filter results modified within these secondsmodified_since(optional): Filter results modified since a UTC datetimeread_keys(optional): Read keys for private sensorssensor_indices(optional): Filter results by sensor indexGetting a Single SensorimportasynciofromaiopurpleairimportAPIasyncdefmain()->None:"""Run."""api=API("<API_KEY>")response=awaitapi.sensors.async_get_sensor(131075)# >>> response.api_version == "V1.0.11-0.0.41"# >>> response.data_timestamp_utc == datetime(2022, 11, 5, 16, 36, 21)# >>> response.sensor == SensorModel(sensor_index=131075, ...),# >>> response.timestamp_utc == datetime(2022, 11, 5, 16, 37, 3)asyncio.run(main())Method Parameterssensor_index(required): The sensor index of the sensor to retrieve.fields(optional): The sensor data fields to include.read_key(optional): A read key for a private sensor.Getting Nearby SensorsThis method returns a list ofNearbySensorResultobjects that are within a bounding box
around a given latitude/longitude pair. The list is sorted from nearest to furthest
(i.e., the first index in the list is the closest to the latitude/longitude).NearbySensorResultobjects have two properties:sensor: the correspondingSensorModelobjectdistance: the calculated distance (in kilometers) between this sensor and the provided
latitude/longitudeimportasynciofromaiopurpleairimportAPIasyncdefmain()->None:"""Run."""api=API("<API_KEY>")sensors=awaitapi.sensors.async_get_nearby_sensors(["name"],51.5285582,-0.2416796,10)# >>> [NearbySensorResult(...), NearbySensorResult(...)]asyncio.run(main())Method Parametersfields(required): The sensor data fields to includelatitude(required): The latitude of the point to measure distance fromlongitude(required): The longitude of the point to measure distance fromdistance(required): The distance from the measured point to search (in kilometers)limit(optional): Limit the resultsGetting a Map URLIf you need to get the URL to a particular sensor index on the PurpleAir map website,
simply pass the appropriate sensor index to theget_map_urlmethod:importasynciofromaiopurpleairimportAPIasyncdefmain()->None:"""Run."""api=API("<API_KEY>")map_url=api.get_map_url(12345)# >>> https://map.purpleair.com/1/mAQI/a10/p604800/cC0?select=12345asyncio.run(main())Connection PoolingBy default, the library creates a new connection to the PurpleAir API with each
coroutine. If you are calling a large number of coroutines (or merely want to squeeze
out every second of runtime savings possible), anaiohttpClientSessioncan
be used for connection pooling:importasynciofromaiohttpimportClientSessionfromaiopurpleairimportAPIasyncdefmain()->None:"""Run."""asyncwithClientSession()assession:api=API("<API KEY>",session=session)# pass the shared session in for pooling; API() is a plain constructor and is not awaited# Get to work...asyncio.run(main())ContributingThanks to all ofour contributorsso far!Check for open features/bugsorinitiate a discussion on one.Fork the repository.(optional, but highly recommended) Create a virtual environment:python3 -m venv .venv(optional, but highly recommended) Enter the virtual environment:source ./.venv/bin/activateInstall the dev environment:script/setupCode your new feature or bug fix on a new branch.Write tests that cover your new functionality.Run tests and ensure 100% code coverage:poetry run pytest --cov aiopurpleair testsUpdateREADME.mdwith any new documentation.Submit a pull request!
aiopusher | aiopusherAn async library for subscribing to the Pusher WebSocket protocol.InstallationYou can installaiopushervia pip from PyPI:pipinstallaiopusherOr withPoetry:poetryaddaiopusherUsageHere are some examples of usingaiopusher:importasynciofromaiopusherimportPusherasyncdefmain():asyncwithPusher('<your-app-key>')asclient:channel=awaitclient.subscribe('<channel-name>')channel.bind('<event-name>',lambdadata:print(data))# Run forever (or until manually stopped)whileTrue:awaitasyncio.sleep(1)asyncio.run(main())Or, if you don't want to use context manager, you can useconnectanddisconnectmethods:asyncdefmain():client=Pusher('<your-app-key>')awaitclient.connect()channel=awaitclient.subscribe('<channel-name>')channel.bind('<event-name>',lambdadata:print(data))whileTrue:awaitasyncio.sleep(1)awaitclient.disconnect()# Yes, I know this cannot technically be reachedYou can also use decorators to bind events:importasynciofromaiopusherimportPusherclient=Pusher('<your-app-key>')@client.event('<channel-name>','<event-name>')asyncdefhandle_event(data):print(data)asyncdefmain():awaitclient.connect()# Run forever (or until manually stopped)whileTrue:awaitasyncio.sleep(1)asyncio.run(main())Connect to different endpoints:importasynciofromaiopusherimportPusherasyncdefmain():options={host:'api.example.com',# default: 'ws.pusherapp.com'"userAuthentication":{"endpoint":"/auth","transport":"ajax",}}asyncwithPusher('<your-app-key>',options)asclient:channel=awaitclient.subscribe('<channel-name>')channel.bind('<event-name>',lambdadata:print(data))# Run forever (or until manually stopped)whileTrue:awaitasyncio.sleep(1)asyncio.run(main())You can also make a singleton client, which can be accessed from anywhere in your code:importasynciofromaiopusherimportPusher,SingletonClientasyncdefmain():client=Pusher('<your-app-key>')SingletonClient.set_client(client)awaitclient.connect()channel=awaitSingletonClient.get_client().subscribe('<channel-name>')channel.bind('<event-name>',lambdadata:print(data))whileTrue:awaitasyncio.sleep(1)DevelopmentTo get started with development, you can clone the repository and install the dependencies.Poetryis used to manage the dependencies, so you can install it withpip install poetry(or follow the instructions on the Poetry website) and then runpoetry installto install the dependencies.TestingIf you want to run the tests locally, it should be known that the tests require multiple python versions to be installed. The easiest way to do this is to usepyenv.Once you have pyenv installed, you can runpyenv installto install the required versions of python (as specified in the.python-versionfile). Then, you can runnoxto run the tests. |
aiopvapi | AioPvApiA python asyncio API for PowerView blinds.
Written for Home-Assistant. Adding features as I go...Have a look at the examples folder for some guidance on how to use it.Linkshttps://home-assistant.io/https://www.hunterdouglas.com/operating-systems/powerview-motorizationChangelog
v1.6.19
- Add endpoints and handle 423 response
- Remove loop as argument
v2.0.0
- Add support for all known shade types
- Fallback to shade recognition based on capability
- Clamping to prevent MIN_POSITION or MAX_POSITION being exceeded
- Code refactoring
v2.0.1
- Invert type 3 & 4 to match api documentation from hunter douglas
- Add type 10
v2.0.2
- Bug fix to handle shades with unexpected json responses
v2.0.3
- Add Type 26, 27 & 28 - Skyline Panels
- Force capability 1 for Type 44 - Twist
- Align class name standard
v2.0.4
- Add Type 10 - SkyLift
- Handle calls to update shade position during maintenance
- Raise error directly on hub calls instead of logger
v3.0.0
- Major overhaul to incorporate gateway version 3 API. Version can be automatically detected or manually specified.
- UserData class is deprecated and replaced with Hub.
- ShadePosition class now replaces the raw json management of shades in support of cross generational management.
- Schedules / Automations are now supported by the API
- New get_objecttype methods available to returned structured data objects for consistent management
v3.0.1
- Raw hub data updates made via defined functions (request_raw_data, request_home_data, request_raw_firmware, detect_api_version)
- Parse Gen 3 hub name based on serial + mac
- Find API version based on firmware revision
- Remove async_timeout and move to asyncio
v3.0.2
- Add type 19 (Provenance Woven Wood)
- Fix positioning for ShadeVerticalTiltAnywhere + ShadeTiltOnly (Mid only)
- Fix logging regression on initial setup
- Fixes for ShadeVerticalTiltAnywhere + ShadeTiltOnly
- Fix tests
- Remove unneeded declarations
- Fix shade position reporting for v2 shades
- Handle empty hub data being returned
aiopvpc | aiopvpcSimple aio library to download Spanish electricity hourly prices.Made to support thepvpc_hourly_pricingHomeAssistant integration.InstallInstall withpip install aiopvpcor clone it to run tests or anything else.UsageimportaiohttpfromdatetimeimportdatetimefromaiopvpcimportPVPCDataasyncwithaiohttp.ClientSession()assession:pvpc_handler=PVPCData(session=session,tariff="2.0TD")esios_data=awaitpvpc_handler.async_update_all(current_data=None,now=datetime.utcnow())print(esios_data.sensors["PVPC"]) |
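Since async_update_all returns the full sensor set on each call, a periodic refresh is a small loop; this sketch assumes the handler accepts its own previous return value as current_data, which is worth verifying against the aiopvpc docs:
import asyncio
from datetime import datetime

import aiohttp
from aiopvpc import PVPCData

async def track_prices() -> None:
    async with aiohttp.ClientSession() as session:
        pvpc_handler = PVPCData(session=session, tariff="2.0TD")
        esios_data = None
        while True:
            # feed the previous snapshot back in so the handler can update incrementally (assumed)
            esios_data = await pvpc_handler.async_update_all(current_data=esios_data, now=datetime.utcnow())
            print(esios_data.sensors["PVPC"])
            await asyncio.sleep(3600)  # PVPC prices are published hourly

asyncio.run(track_prices())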
aiopy | UNKNOWN |
aiopyami | No description available on PyPI. |
aiopyarr | aiopyarrPython API client for Lidarr/Radarr/Readarr/Sonarr.Installationpython3-mpipinstallaiopyarrExample usageMore examples can be found in thetestsdirectory."""Example usage of aiopyarr."""importasynciofromaiopyarr.models.host_configurationimportPyArrHostConfigurationfromaiopyarr.radarr_clientimportRadarrClientIP="192.168.100.3"TOKEN="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"asyncdefasync_example():"""Example usage of aiopyarr."""host_configuration=PyArrHostConfiguration(ipaddress=IP,api_token=TOKEN)asyncwithRadarrClient(host_configuration=host_configuration)asclient:print(awaitclient.async_get_system_status())asyncio.get_event_loop().run_until_complete(async_example())ContributeAllcontributions are welcome!Fork the repositoryClone the repository locally and open the devcontainer or use GitHub codespacesDo your changesLint the files withmake lintEnsure all tests passes withmake testEnsure 100% coverage withmake coverageCommit your work, and push it to GitHubCreate a PR against themasterbranch |
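The other clients follow the same shape as the Radarr example above; assuming the module naming mirrors radarr_client (an assumption worth checking against the tests directory), a Sonarr status call would look like:
import asyncio

from aiopyarr.models.host_configuration import PyArrHostConfiguration
from aiopyarr.sonarr_client import SonarrClient  # module path assumed by analogy with radarr_client

IP = "192.168.100.3"
TOKEN = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

async def async_example():
    """Example usage of aiopyarr with Sonarr."""
    host_configuration = PyArrHostConfiguration(ipaddress=IP, api_token=TOKEN)
    async with SonarrClient(host_configuration=host_configuration) as client:
        print(await client.async_get_system_status())

asyncio.get_event_loop().run_until_complete(async_example())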
aio_pybars | aio_pybars========Quick Start------------------0. Install::pip install aio_pybarsOR via setup.py::python setup.py install1. Configure your app::from aio_pybars import FSTemplateLoaderloop.run_until_complete(aio_pybars.setup(app,templates_dir=config['TEMPLATES_DIR'],Loader=FSTemplateLoader))2. Use templates in the view::async def index(request):context = {"var": "value"}return AIOBarsResponse(request, 'template_name', context)It will render the `template_name.hbs` template with variables in the `context` to the aiohttp response.Helpers and partials------------------------------------Partial is the nested template that should be included in the specific place.If the following code occurs in the template::{{> "sidebar"}}pybars3 will search the _partial_ named `sidebar` in the dictionary. How to add your own partial see below.Helper is the callable that can be called from the template. Syntactically it looks same as the variable, but canget the arguments::<link rel="shortcut icon" href="{{asset "favicon.ico"}}">would call the `asset` callable with "favicon.ico" argument and put the results in the rendered template.*To use your own partials and helpers* implement your subclass of templates loader::class AppFSTemplateLoader(FSTemplateLoader):def __init__(self, app, base_dir):super().__init__(app, base_dir)def get_partials(self):"""Load all files in the partials/ subdirectory of templates dir.Method should return the dictionary {'partial_name': <compiled template>, ...}"""partials = super().get_partials()base_partials = os.path.join(self.app.config['TEMPLATES_DIR'], 'partials')for name in os.listdir(base_partials):filename = os.path.splitext(name)[0]template_source = open(os.path.join(base_partials, name), 'r', encoding='utf8').read()template = self.compiler.compile(template_source)partials[filename] = templatereturn partialsdef get_helpers(self):"""Define your own set of helpers.Method should return the dictionary {'helper_name': <callable>, ...}"""helpers = super().get_helpers()helpers.update({"asset": _asset})return helpersdef _asset(options, val, *args, **kwargs):return "/static/{}".format(val)and pass it as Loader argument to the setup::loop.run_until_complete(aio_pybars.setup(app,templates_dir=config['TEMPLATES_DIR'],Loader=AppFSTemplateLoader))Recursive rendering of templates--------------------------The aio_pybars enables templates to be recursive. If the first line of the template contains::{{!< base_template}}all the rendered template will be passed as variable `body` to the base template.For example:base.hbs::<!DOCTYPE html><html><head><title>Template</title></head><body>{{body}}</body>test.hbs::{{!< base}}Hello, {{name}}.Then result of the `render(loader, 'test', {'name': 'Roma'})` will be::<!DOCTYPE html><html><head><title>Template</title></head><body>Hello, Roma</body> |
aio-pydispatch | aio_pydispatchAsyncio pydispatch (Signal Manager)This is based onpyDispatcher, referencesDjango Signals, and references thescrapy SignalManagerimplementation onAsyncio. Event or Signal (not the Python built-in signal module). You can bind multiple listeners (called senders) and multiple handlers (called receivers) to one event (called a signal). By default the listener is None, so when an event is fired with no listener, all handlers that were bound to the default listener are executed.UsageMost programs havestartandstopevents; we can register handlers for these events, and
we can also specify a sender.importasynciofromaio_pydispatchimportSignalserver_start=Signal()server_stop=Signal()defppp(value:str,**kwargs)->None:print(value,kwargs)asyncdefmain():server_start.connect(ppp,sender='loading config')server_stop.connect(ppp)awaitserver_start.send(sender='loading config',value='foo')awaitasyncio.sleep(1)awaitserver_stop.send(value='foo')if__name__=='__main__':asyncio.run(main())Similar designsync:pyDispatcherDjango.dispatchscrapy SignalManagerblinkerasync:Aiohttp tracingOthersEvent system in Python |
aiopyfix | AIOPyFIXOpen Source implementation of a FIX (Financial Information eXchange) Engine implemented in Python asyncioSee here [http://fixprotocol.org/] for more information on what FIX is.InstallationThis package requires Python 3.6 to run.Install in the normal python waypip install aiopyfixor from sourcepython setup.py installand it should install with no errorsUsageUsing the module should be simple. There is an examples directory, which is probably the best place to start.Session SetupEither you can create aFIXClientor aFIXServer. The Client initiates the connection and also initiates the Logon sequence, a Server would sit there waiting for inbound connections, and expect a Logon message to be sent.self.client=FIXClient("aiopyfix.FIX44","TARGET","SENDER")# tell the client to start the connection sequenceawaitself.client.start('localhost',int("9898"),loop)The argument "aiopyfix.FIX44" specifies the module which is used as the protocol, this is dynamically loaded, so you can create and specify your own if required.If you want to do something useful, other than just watching the session level bits work, you'll probably want to register for connection status changes (you'll need to do this before starting the event loop);self.client.addConnectionListener(self.onConnect,ConnectionState.CONNECTED)self.client.addConnectionListener(self.onDisconnect,ConnectionState.DISCONNECTED)The implementation of those methods would be something like this;asyncdefonConnect(self,session):logging.info("Established connection to%s"%(session.address(),))session.addMessageHandler(self.onLogin,MessageDirection.INBOUND,self.client.protocol.msgtype.LOGON)asyncdefonDisconnect(self,session):logging.info("%shas disconnected"%(session.address(),))session.removeMsgHandler(self.onLogin,MessageDirection.INBOUND,self.client.protocol.msgtype.LOGON)in the code above, we are registering to be called back whenever we receive (MessageDirection.INBOUND) a logon requestMsgType[35]=Aon that session.That is pretty much it for the session setup.Message construction and sendingConstructing a message is simple, and is just a matter of adding the fields you require.
The session level tags will be added when the message is encoded by the codec. Setting any of the following session tags will result in the tag being duplicated in the messageBeginStringBodyLengthMsgTypeMsgSeqNoSendingTimeSenderCompIDTargetCompIDCheckSumExample of building a simple messageasyncdefsendOrder(self,connectionHandler):self.clOrdID=self.clOrdID+1# get the codec we are currently using for this sessioncodec=connectionHandler.codec# create a new messagemsg=FIXMessage(codec.protocol.msgtype.NEWORDERSINGLE)# ...and add some data to itmsg.setField(codec.protocol.fixtags.Price,random.random()*1000)msg.setField(codec.protocol.fixtags.OrderQty,int(random.random()*10000))msg.setField(codec.protocol.fixtags.Symbol,"VOD.L")msg.setField(codec.protocol.fixtags.SecurityID,"GB00BH4HKS39")msg.setField(codec.protocol.fixtags.SecurityIDSource,"4")msg.setField(codec.protocol.fixtags.Symbol,"VOD.L")msg.setField(codec.protocol.fixtags.Account,"TEST")msg.setField(codec.protocol.fixtags.HandlInst,"1")msg.setField(codec.protocol.fixtags.ExDestination,"XLON")msg.setField(codec.protocol.fixtags.Side,int(random.random()*2))msg.setField(codec.protocol.fixtags.ClOrdID,str(self.clOrdID))msg.setField(codec.protocol.fixtags.Currency,"GBP")# send the message on the sessionawaitconnectionHandler.sendMsg(codec.pack(msg,connectionHandler.session))A message (which is a subclass ofFIXContext) can also hold instances ofFIXContext, these will be treated as repeating groups. For examplemsg = FIXMessage(codec.protocol.msgtype.NEWORDERSINGLE)
msg.setField(codec.protocol.fixtags.Symbol, "VOD.L")
msg.setField(codec.protocol.fixtags.SecurityID, "GB00BH4HKS39")
msg.setField(codec.protocol.fixtags.SecurityIDSource, "4")
rptgrp1 = FIXContext()
rptgrp1.setField(codec.protocol.fixtags.PartyID, "It's Me")
rptgrp1.setField(codec.protocol.fixtags.PartyIDSource, "1")
rptgrp1.setField(codec.protocol.fixtags.PartyRole, "2")
msg.addRepeatingGroup(codec.protocol.fixtags.NoPartyIDs, rptgrp1)
rptgrp2 = FIXContext()
rptgrp2.setField(codec.protocol.fixtags.PartyID, "Someone Else")
rptgrp2.setField(codec.protocol.fixtags.PartyIDSource, "2")
rptgrp2.setField(codec.protocol.fixtags.PartyRole, "8")
msg.addRepeatingGroup(codec.protocol.fixtags.NoPartyIDs, rptgrp2)This will result in a message like the following8=FIX.4.4|9=144|35=D|49=sender|56=target|34=1|52=20150619-11:08:54.000|55=VOD.L|48=GB00BH4HKS39|22=4|453=2|448=It's Me|447=1|452=2|448=Someone Else|447=2|452=8|10=073|To send the message you need a handle on the session you want to use, this is provided to you in the callback methods. e.g. in the code above we registered for Logon callbacks usingsession.addMessageHandler(self.onLogin, MessageDirection.INBOUND, self.client.protocol.msgtype.LOGON)the signature for the callback is something like;async def onLogin(self, connectionHandler, msg):
logging.info("Logged in") |
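Tying the two sections together, a natural next step is to send the first order as soon as the logon completes; here is a minimal sketch that reuses the sendOrder method and the documented callback signature (nothing here beyond calls already shown above):
async def onLogin(self, connectionHandler, msg):
    logging.info("Logged in")
    # the session is established at this point, so it is safe to start sending orders
    await self.sendOrder(connectionHandler)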
aiopygismeteo | aiopygismeteoAsynchronous wrapper for theGismeteo API.Synchronous versionhere.Development suspendedAn API token is required to use the library; it can be requested by email at[email protected] developer currently has no API token, which makes testing and further development impossible.If you need a weather library that works without an API token, you can considerhttps://github.com/monosans/aiopywttr.Installationpython-mpipinstall-Uaiopygismeteopygismeteo-baseDocumentationhttps://aiopygismeteo.readthedocs.ioLicenseMIT
aiopykube | No description available on PyPI. |
aiopylgtv | aiopylgtvLibrary to control webOS based LG Tv devices.Based on pylgtv library athttps://github.com/TheRealLink/pylgtvwhich is no longer maintained.RequirementsPython >= 3.7InstallpipinstallaiopylgtvInstall from SourceRun the following command inside this folderpipinstall--upgrade.Basic ExampleimportasynciofromaiopylgtvimportWebOsClientasyncdefrunloop():client=awaitWebOsClient.create('192.168.1.53')awaitclient.connect()apps=awaitclient.get_apps()forappinapps:print(app)awaitclient.disconnect()asyncio.get_event_loop().run_until_complete(runloop())Subscribed state updatesA callback coroutine can be registered with the client in order to be notified of any state changes.importasynciofromaiopylgtvimportWebOsClientasyncdefon_state_change():print("State changed:")print(client.current_appId)print(client.muted)print(client.volume)print(client.current_channel)print(client.apps)print(client.inputs)print(client.system_info)print(client.software_info)asyncdefrunloop():client=awaitWebOsClient.create('192.168.1.53')awaitclient.register_state_update_callback(on_state_change)awaitclient.connect()print(client.inputs)ret=awaitclient.set_input("HDMI_3")print(ret)awaitclient.disconnect()asyncio.get_event_loop().run_until_complete(runloop())Calibration functionalityWARNING: Messing with the calibration data COULD brick your TV in some circumstances, requiring a mainboard replacement.
All of the currently implemented functions SHOULD be safe, but no guarantees.On supported models, calibration functionality and upload to internal LUTs is supported. The supported input formats for LUTs are IRIDAS .cube format for both 1D and 3D LUTs, and ArgyllCMS .cal files for 1D LUTs.Not yet supported:
- Dolby Vision config upload
- Custom tone mapping for 2019 models (functionality does not exist on 2018 models)
Supported models:
LG 2019 Alpha 9 G2 OLED R9 Z9 W9 W9S E9 C9 NanoCell SM99
LG 2019 Alpha 7 G2 NanoCell (8000 & higher model numbers)
LG 2018 Alpha 7 Super UHD LED (8000 & higher model numbers)
LG 2018 Alpha 7 OLED B8
LG 2018 Alpha 9 OLED C8 E8 G8 W8Models with Alpha 9 use 33 point 3D LUTs, while those with Alpha 7 use 17 points.n.b. this has only been extensively tested for the 2018 Alpha 9 case, so fixes may be needed still for the others.WARNING: When running the ddc_reset or uploading LUT data on 2018 models the only way to restore the factory
LUTs and behaviour for a given input mode is to do a factory reset of the TV.
ddc_reset uploads unity 1d and 3d luts and resets oled light/brightness/contrast/color to default values (80/50/85/50).
When running the ddc_reset or uploading any 1D LUT data, service menu white balance settings are ignored, and gamma,
colorspace, and white balance settings in the user menu are greyed out and inaccessible.Calibration data is specific to each picture mode, and picture modes are independent for SDR, HDR10+HLG, and Dolby Vision.
Picture modes from each of the three groups are only accessible when the TV is in the appropriate mode. I.e., to upload
calibration data for HDR10 picture modes, one has to send the TV an HDR10 signal or play an HDR10 file, and similarly
for Dolby Vision.For SDR and HDR10 modes there are two 3D LUTs which will be automatically selected depending on the colorspace flags of the signal
or content. In principle almost all SDR content should be bt709 and HDR10 content should be bt2020 but there could be
nonstandard cases where this is not true.For Dolby Vision the bt709 3d LUT seems to be active and the only one used.Known supported picMode strings are:
SDR: cinema, expert1, expert2, game, technicolorExpert
HDR10(+HLG): hdr_technicolorExpert, hdr_cinema, hdr_game
DV: dolby_cinema_dark, dolby_cinema_bright, dolby_gameCalibration commands can only be run while in calibration mode (controlled by "start_calibration" and "end_calibration").While in calibration mode for HDR10 tone mapping is bypassed.
There may be other not fully known/understood changes in the image processing pipeline while in calibration mode.importasynciofromaiopylgtvimportWebOsClientasyncdefrunloop():client=awaitWebOsClient.create('192.168.1.53')awaitclient.connect()awaitclient.set_input("HDMI_2")awaitclient.start_calibration(picMode="expert1")awaitclient.ddc_reset(picMode="expert1")awaitclient.set_oled_light(picMode="expert1",value=26)awaitclient.set_contrast(picMode="expert1",value=100)awaitclient.upload_1d_lut_from_file(picMode="expert1",filename="test.cal")awaitclient.upload_3d_lut_bt709_from_file(picMode="expert1",filename="test3d.cube")awaitclient.upload_3d_lut_bt2020_from_file(picMode="expert1",filename="test3d.cube")awaitclient.end_calibration(picMode="expert1")awaitclient.disconnect()asyncio.run(runloop())Development ofaiopylgtvWe usepre-committo keep a consistent code style, sopip install pre_commitand runpre-commitinstallto install the hooks. |
aiopylimit | No description available on PyPI. |
aiopynoon | No description available on PyPI. |
aiopyo365 | aiopyo365Async wrapper for Python >= 3.8 around theMicrosoft v1.0 graph API.Installationpip install aiopyo365Requirementspython 3.8 or greaterApplication registrationThe Microsoft Graph API requires authentication. You will need to
have aregistered applicationin Azure that will provide you:client idclient secretYou will also need to have therequired permissionsto be able to interact with the desired resources.Installation#TODOAuthenticationTo authenticate you can use theGraphAuthProviderclass in theproviders.auth module.Here is how to use this class. It assumes that you have set the following environment variables:CLIENT_IDCLIENT_SECRETTENANT_IDThe class provides a method to fetch the token in the
form of adict.
import asyncio
import os

from aiopyo365.providers.auth import GraphAuthProvider

async def fetch_auth_header():
    auth_provider = GraphAuthProvider(
        client_id=os.environ["CLIENT_ID"],
        client_secret=os.environ["CLIENT_SECRET"],
        tenant_id=os.environ["TENANT_ID"],
    )
    return await auth_provider.auth()

if __name__ == '__main__':
    auth_header = asyncio.run(fetch_auth_header())
    print(auth_header)
    # output : {"authorization": "<token type> <token>"}
RessourcesThe library tries to resemble the organization of the graph API documentation. For instance, in the Graph documentation you will find theDriveItemsunder theFilessection.Inaiopyo365:fromaiopyo365.ressources.filesimportDriveItemsIf you want to work directly with resource classes you will need to instantiate anaiohttp sessionwith theauth headerand instantiate the client class.
import asyncio
import os

import aiohttp
from aiopyo365.providers.auth import GraphAuthProvider
from aiopyo365.ressources.files import DriveItems

async def upload_smallfile(content, file_name):
    auth_provider = GraphAuthProvider(
        client_id=os.environ["CLIENT_ID"],
        client_secret=os.environ["CLIENT_SECRET"],
        tenant_id=os.environ["TENANT_ID"],
    )
    auth_header = await auth_provider.auth()
    # ClientSession() is a plain constructor; the original example mistakenly awaited it
    session = aiohttp.ClientSession(headers=auth_header)
    drive_items_client = DriveItems(base_url="url", session=session)
    await drive_items_client.upload_small_file(content, file_name)
You can also use factories to work with variants of resources; here we work with a DriveItems client dedicated to a SharePoint site.
import asyncio
import os

import aiohttp
from aiopyo365.providers.auth import GraphAuthProvider
from aiopyo365.factories.drive_items import DriveItemsSitesFactory

async def upload_smallfile(content, file_name):
    auth_provider = GraphAuthProvider(
        client_id=os.environ["CLIENT_ID"],
        client_secret=os.environ["CLIENT_SECRET"],
        tenant_id=os.environ["TENANT_ID"],
    )
    auth_header = await auth_provider.auth()
    session = aiohttp.ClientSession(headers=auth_header)  # again, not awaited
    drive_items_client = DriveItemsSitesFactory(site_id="site_id").create(session=session)
    await drive_items_client.upload_small_file(content, file_name)
Servicesaiopyo365also provides service classes that encapsulate several resources to match business logic, hiding client-class instantiation and so on.
Let's reuse the upload of a file example from above and use theSharePointService
import os

from aiopyo365.providers.auth import GraphAuthProvider
from aiopyo365.services.sharepoint import SharePointService

async def upload_smallfile(small_file_path):  # parameter renamed to match its use below
    auth_provider = GraphAuthProvider(
        client_id=os.environ["CLIENT_ID"],
        client_secret=os.environ["CLIENT_SECRET"],
        tenant_id=os.environ["TENANT_ID"],
    )
    async with SharePointService(auth_provider, "SHAREPOINT_HOSTNAME", "SHAREPOINT_SITE") as sharepoint:
        resp = await sharepoint.upload(small_file_path, "small_file", conflict_behavior="replace")
        assert resp["createdDateTime"]
aiopype | aiopypePython asynchronous data pipelinesaiopypeallows running continuous data pipelines reliably with a
plain simple approach to their development.Aiopypecreates a centralized message handler to allow every
processor to work as an independent non-blocking message
producer/consumer.Aiopypehas 4 main concepts:FlowManagerProcessorMessage HandlerFlowThe Flow isaiopype’s main component. A flow is the entrypoint for
reliably running pipeline managers.Flowis responsible for:Starting all registered managersHandling manager failuresReporting errorsRestarting failed managersManagerThe manager is responsible for registering a data pipeline from top to
bottom. This means it must register a source and connect it with its
consumers, until the pipeline finally outputs.ProcessorA processor is a message consumer/producer.SourcesSources are special cases of processors. Their special characteristic is
that they can run forever, and are the starting point of any pipeline.Examples of sources may be:AREST APIpollerAnWebsocketclientACronjobMessage handlerThe message handler is the central piece that allowsaiopypeto
scale.A Flow will start one or more Sources as the starting point for each
registered Manager. Once a Source produces an event, a message will be
triggered and the handler will identify and fire the corresponding
handlers.There are two available message handlers:SyncProtocolAsyncProtocolSyncProtocolThe synchronous event handler is, as its name suggests, synchronous,
meaning that once the source emits a message, it must be handled until
the end of the pipeline before the source can proceed with its normal
behavior. This is good for development purposes but fails to meet the
asynchronous event-driven pattern required to allow component
isolation.AsyncProtocolThe main difference between SyncProtocol and AsyncProtocol is that the
latter uses a decoupled event loop to assess if there are new messages
in the queue for processing, whilst the first simply starts processing
received messages instantaneously. This allows total isolation of
processors.ExampleApple stock processor.SourceOur source will beYahoo Financefor gathering data from theAAPLticker price. We'll useaiopypeRestSourceas a base class.
from aiopype.sources import RestSource

class YahooRestSource(RestSource):
    """
    Yahoo REST API source.
    """

    def __init__(self, name, handler, symbol):
        super().__init__(name, handler,
            'http://finance.yahoo.com/webservice/v1/symbols/{}/quote?format=json&view=detail'.format(symbol),
            {'exception_threshold': 10, 'request_interval': 30})
Processor
Our sample processor will simply extract the price from the returned
json.
from aiopype import Processor

class HandleRawData(Processor):
    def handle(self, data, time):
        self.emit('price', time, data['list']['resources'][0]['resource']['fields']['price'])
Output
Our output processor will write price data onto a CSV file.
import csv

class CSVOutput(Processor):
    def __init__(self, name, handler, filename):
        super().__init__(name, handler)
        self.filename = filename
        with open(self.filename, 'w', newline='') as csvfile:
            writer = csv.writer(csvfile, delimiter=';')
            writer.writerow(['time', 'price'])

    def write(self, time, price):
        # append; opening with 'w' here (as the original did) would truncate the header and earlier rows
        with open(self.filename, 'a', newline='') as csvfile:
            writer = csv.writer(csvfile, delimiter=';')
            writer.writerow([time, price])
Manager
The manager will instantiateSource,ProcessorandOutput.
It will connectSource’sdataevent toProcessor.handlehandler andProcessor’spriceevent toOutput.writehandler.
This will be our data pipeline.
from aiopype import Manager

class YahooManager(Manager):
    name = 'yahoo_apple'

    def __init__(self, handler):
        super().__init__(handler)
        self.processor = HandleRawData(self.build_processor_name('processor'), self.handler)
        self.source = YahooRestSource(self.build_processor_name('source'), self.handler, 'AAPL')
        self.writer = CSVOutput(self.build_processor_name('writer'), self.handler, 'yahoo_appl.csv')
        self.source.on('data', self.processor.handle)
        self.processor.on('price', self.writer.write)
Flow
Our flow config will have the yahoo_apple manager only.
from aiopype import AsyncFlow

class FlowConfig(object):
    FLOWS = ['yahoo_apple']

dataflow = AsyncFlow(FlowConfig())
Main method:
Will simply start the dataflow.
if __name__ == "__main__":
    dataflow.start()
Running the example
Compile all the above code in a file called example.py and run:
python example.py
Clusters
WIP:This decentralized mechanism makes distributed pipelines a possibility,
if we have coordination between nodes.Changelog0.1.4 / 2016-07-14#10Avoid unfinished
flows (@jAlpedrinha)0.1.3 / 2016-07-11#8Fix AsyncProtocol
termination condition (@jAlpedrinha)0.1.2 / 2016-07-06#6Handle exceptions
from async protocol listener (@jAlpedrinha)0.1.1 / 2016-07-05#4Avoid failure on
pusherclient disconnection (@jAlpedrinha)0.1.0 / 2016-07-05#1Add flow manager
and processors (@jAlpedrinha) |
aiopypes | (pronounced "a-i-o-pipes")Scalable asyncio pipelines in Python, made easy.📝 Table of ContentsAboutGetting StartedUsageBuilt UsingTODOContributingAuthorsAcknowledgments🧐 AboutThis package is designed to make building asynchronous streams that balance and
scale automaticallyeasy. Built on pure Python -- no dependencies -- this
framework can be used for variable, decoupled, task-based workloads like
web scraping, database management operations, and more. Scale this out-of-the-box,
with minimal hardware and coding, to process 10k+/s on production loads.Simple pipelinesimport aiopypes
from datetime import datetime  # the examples below call datetime.utcnow() but the import was missing

app = aiopypes.App()

@app.task(interval=1.0)
async def every_second():
    return datetime.utcnow()

@app.task()
async def task1(stream):
    async for s in stream:
        print(f"streaming from task1: {s}")
        yield s  # the original yielded an undefined name 'obj'

if __name__ == '__main__':
    pipeline = every_second \
        .map(task1)
    pipeline.run()
To scaled pipelines
import aiopypes
import aiohttp

app = aiopypes.App()

@app.task(interval=0.1)
async def every_second():
    return "http://www.google.com"

@app.task(scaler=aiopypes.scale.TanhTaskScaler()) # this scales workers automatically to consume incoming requests
async def task1(stream):
    # aiohttp removed the module-level get(); use a shared ClientSession instead
    async with aiohttp.ClientSession() as session:
        async for s in stream:
            yield await session.get(s)

async def task2(stream):
    async for s in stream:
        if s.status != 200:  # aiohttp responses expose .status, not .response_code
            print(f"failed request: {s}")
        yield

if __name__ == '__main__':
    pipeline = every_second \
        .map(task1) \
        .reduce(task2)
    pipeline.run()
🏁 Getting Started
Start with a simple pipeline, and build out from there!
import aiopypes
app = aiopypes.App()

@app.task(interval=1.0)
async def every_second():
    return datetime.utcnow()

@app.task()
async def task1(stream):
    async for s in stream:
        print(f"streaming from task1: {s}")
        yield s  # again, 'obj' was undefined in the original

if __name__ == '__main__':
    pipeline = every_second \
        .map(task1)
    pipeline.run()
For more, see readthedocs
Prerequisites
aiopypes is based on pure Python (3.5+) and does not require other dependencies.
Installing
Available on PyPi here, installed with pip:
pip install aiopypes
🔧 Running the tests
To be created!
Break down into end to end tests
To be created!
And coding style tests
To be created!
🎈 Usage
Import the library
import aiopypes
Create an App object
app = aiopypes.App()
Create a trigger function.
@app.task(interval=1.0)
async def every_second():
    return 1
Create downstream task(s)
@app.task()
async def task_n(stream):
    async for s in stream:
        # process "s" here
        yield
Create + configure the pipeline
pipeline = every_second \
    .map(task_n)
Run the pipeline
pipeline.run()
This will run continuously until interrupted.
⛏️ Built Using
Python
asyncio
✔️ TODO
Extend to multithreads
Extend to multiprocess
Build visualization server
Add pipeline pipe functions (join, head, ...)
✍️ Authors
@mroumanos
Contributors: you?
🎉 Acknowledgements
faust streaming
aiopypexels | An asynchronous wrapper for the Pexels API based on aiohttp, which additionally allows downloading photos in any of the available resolutions.InstallationInstall aiopypexels with pippipinstallaiopypexelsExamplesSearching photos by queryfromaiopypexelsimportAioPexelsfromaiopypexels.typesimportPhotoSearchResponseapi=AioPexels(API_KEY)asyncdefget_photos_by_query(query:str)->PhotoSearchResponse:response=awaitapi.get_photos_by_query(query)print(response.total_results)# 498Getting photo by IDfromaiopypexelsimportAioPexelsfromaiopypexels.typesimportPhotoapi=AioPexels(API_KEY)asyncdefget_photo_by_id(id:int)->Photo:photo=awaitapi.get_photo_by_id(id)Downloading photo by IDfromaiopypexelsimportAioPexelsapi=AioPexels(API_KEY)asyncdefdownload_photo_by_id(photo_id:int)->None:response=awaitapi.download_photo_by_id(photo_id=photo_id,destination='./photos/test.jpeg',quality='original')LinksLatest VersionFeedbackI would be very pleased for a star :-)
aiopypiserver | aiopypiserverA basic PyPi server using aiohttp to serve web pages. Intended to work behind
an Apache proxy with relative href accessing. Available here or fromPyPI.MotivationThis is intended to work behind an Apache proxy. This means providing
href links in the pages as relative links. i.e. ./packages/pkg_name.tar.gz
and not /packages... .This is addressed as provided byWSGI.
Looking at the code I liked the idea of implementing withasyncioandaiohttpin preference to forking thepypiservercode.Usageusage: aiopypiserver [-h] [-p port] [-i address] [-u username] [-P password] [-v] [-q] [package_path]
Private PyPi server.
positional arguments:
package_path path to packages
options:
-h, --help show this help message and exit
-p port, --port port Listen on port
-i address, --interface address Listen on address
-u username, --username username For uploading packages
-P password, --password password ...
-v, --verbose set debug level
-q, --quiet turn off access logging
Browse index at http://localhost:8080/.Can also be run as a module aspython -m aiopypiserver -h. Using the internal class is probably a bad idea ATM as the API is likely to change.By default access logs are generated, as I find it useful to see these.ApacheAdd the following to your Apache config. This is the item for pypiserver that required wsgi.ProxyPass /pypi/ http://127.0.0.1:8080/
ProxyPassReverse /pypi/ http://127.0.0.1:8080/ThanksPlease let me know how you get on through the github page. |
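For reference, a typical round-trip against a running instance uses the standard tools; the upload endpoint and /simple/ index path are assumptions based on the usual PyPI server layout (and "mypackage" is a hypothetical package name), so adjust as needed:
twine upload --repository-url http://localhost:8080/ -u username -p password dist/*
pip install --index-url http://localhost:8080/simple/ mypackage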
aiopypixel | aiopypixelA complete, asynchronous wrapper for the Hypixel APIIf you'd like to help out or if you've found a bug, open a pull request or bug report!Made byIapetus11&TrustedMercury |
aiopyql | A fast and easy-to-use asyncio ORM (Object-Relational Mapper) for performing C.R.U.D. ops within RDBMS tables using Python.Key Featuresasyncio readydatabase / table query cacheSQL-like query syntaxAutomatic schema discovery / migrationsDocumentationhttps://aiopyql.readthedocs.io/Installation$virtualenv-ppython3.7aiopyql-env
$ source aiopyql-env/bin/activate
(aiopyql-env)$ pip install aiopyql
Compatible Databases
postgres - via asyncpg
mysql - via aiomysql
sqlite - via aiosqlite
Getting Started
import asyncio
from aiopyql import data

async def main():
    # sqlite connection (one variable name used throughout; the original mixed 'sqlite_db' and 'db')
    db = await data.Database.create(database="testdb")

    # create table
    await db.create_table(
        'keystore',
        [
            ('key', str, 'UNIQUE NOT NULL'),
            ('value', str)
        ],
        'key',
        cache_enabled=True
    )

    # insert
    await db.tables['keystore'].insert(key='foo', value={'bar': 30})

    # update
    await db.tables['keystore'].update(value={'bar': 31}, where={'key': 'foo'})

    # delete
    await db.tables['keystore'].delete(where={'key': 'foo'})

loop = asyncio.new_event_loop()
loop.run_until_complete(main())
Recipes
See other usage examples in recipies.
FastAPI
Postgres
import asyncio
from aiopyql import data

async def main():
    postgres_db = await data.Database.create(
        database='postgres_database',
        user='postgres',
        password='my-secret-pw',
        host='localhost',
        port=5432,
        db_type='postgres'
    )

loop = asyncio.new_event_loop()
loop.run_until_complete(main())
Mysql
import asyncio
from aiopyql import data

async def main():
    mysql_db = await data.Database.create(
        database='mysql_database',
        user='mysqluser',
        password='my-secret-pw',
        host='localhost',
        port=3306,
        db_type='mysql'
    )

loop = asyncio.new_event_loop()
loop.run_until_complete(main())
Idea / Suggestion / Issue
Submit an Issue
Create a Pull request
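The examples above only cover writes; assuming aiopyql mirrors pyql's select API (an assumption, so check the documentation linked above), reading rows back inside the same async main() would look like:
# hypothetical read-back, following pyql's select signature
rows = await db.tables['keystore'].select('*', where={'key': 'foo'})
print(rows)  # e.g. [{'key': 'foo', 'value': {'bar': 30}}]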
aiopyramid | IntroductionA library for leveraging pyramid infrastructure asynchronously using the newasyncio.Aiopyramidprovides tools for making web applications withPyramidandasyncio.
It will not necessarily make your application run faster. Instead, it gives you some tools
and patterns to build an application on asynchronous servers.
Bear in mind that you will need to use asynchronous libraries for io where appropriate.Since this library is built on relatively new technology, it is not intended for production use.Getting StartedAiopyramidincludes a scaffold that creates a “hello world” application,
check it out. The scaffold is designed to work with eithergunicornvia a custom worker oruWSGIvia theuWSGI asyncio plugin.For example:pip install aiopyramid gunicorn
pcreate -s aio_starter <project>
cd <project>
python setup.py develop
pserve development.iniThere is also awebsocketscaffoldaio_websocketfor those who want basic tools for setting up
awebsocketserver.DocumentationFull documentation forAiopyramidcan be foundhere.Changes0.4.1 (2016-06-04)Fix dependency mismatch for cases of aiohttp > 1.0 but < 2.00.4.0 (2016-05-29)Refactor to support latests aiohttp0.3.7 (2017-05-07)Peg aiohttp dependency0.3.6 (2016-09-22)Fix header normalization for Gunicorn0.3.5 (2016-02-18)Fix Gunicorn logging support0.3.4 (2016-02-03)Fix compatiblity with websockets 3+0.3.3 (2015-11-21)Merge fix forignore_websocket_closedto allow chained exceptionsAdd option to coerce bytes to str for uwsgi websockets0.3.2 (2015-09-24)Support Python3.50.3.1 (2015-01-31)Fix issues related to POST requestsFix issues related to coroutine mappersSync with Gunicorn settings a la issue #9170.3.0 (2014-12-06)Add sphinxMigrate README to sphinx docsAdd helpers for authenticationDeprecated aiopyramid.traversal, use aiopyramid.helpers.synchronizeDeprecated aiopyramid.tweens, moved examples to docs0.2.4 (2014-10-06)Fix issue with gunicorn websocketsFix issue with class-based view mappers0.2.3 (2014-10-01)Fix issue withsynchronize0.2.2 (2014-09-30)Update example tween to work with gunicornAdd kwargs support to helpersAdd tox for testingAdd decoratorsynchronizefor wrapping coroutinesRefactored mappers and tween example to usesynchronizeBug fixes0.2.1 (2014-09-15)Update scaffold example testsAdd test suiteUpdate README0.2.0 (2014-09-01)Update READMEadded websocket mappers for uwsgi and gunicornadded websocket view class0.1.2 (2014-08-02)Update MANIFEST.in0.1.0 (2014-08-01)Update README ready for releaseAdded asyncio traverser (patched fromResourceTreeTraverser)Added custom gunicorn workerFix issue with uwsgi and executor threadsUpdate starter scaffold0.0.3 (2014-07-30)Moving to an extension-based rather than patched-based approachremoved most code based on pyramid_asyncio except testing and scaffoldsadded view mappers for running views in asyncioadded example tween that can come before or after synchronous tweens0.0.2 (2014-07-22)Removed Gunicorn specific codedisabled excview_tween_factorymade viewresult_to_response a coroutineadded dummy code for testing with uwsgi0.0.1 (2014-07-22)Migrated from pyramid_asyncio (Thank you Guillaume)Removed worker.py and Gunicorn dependencyAdded greenlet dependencyChanged contact information in setup.py |
aiopyrestful | GitHub:aiopyrestfulThis is a specialized version ofpyrestful.PyRestfulpyRestful is an API to develop restful services with Tornado Web Server.We made changes from the last version to improve it and make it easier to use.The last version works with Python 2 and 3.aiopyrestfuladds asyncio support.Installpip install aiopyrestfulExample
import asyncio
from aiopyrestful.rest import get, mediatypes

@asyncio.coroutine  # redundant on an async def; kept from the original example
async def async_fun():
    await asyncio.sleep(10)
    return 'text'

@get(_path='/configure', _produces=mediatypes.APPLICATION_JSON)
@asyncio.coroutine
async def post_configure(self):
    text = await async_fun()
    return {'text': text}
aio-py-rq | No description available on PyPI. |
aio-py-sdk | No description available on PyPI. |
aio-pysqs | aio-sqsAsync aws sqs listener |
aiopystomp | No description available on PyPI. |
aiopyston | No description available on PyPI. |
aiopytesseract | aiopytesseractA Pythonasynciowrapper forTesseract-OCR.InstallationInstall and update using pip:pipinstallaiopytesseractUsageList all available languages by Tesseract installationimportaiopytesseractawaitaiopytesseract.languages()awaitaiopytesseract.get_languages()Tesseract versionimportaiopytesseractawaitaiopytesseract.tesseract_version()awaitaiopytesseract.get_tesseract_version()Tesseract parametersimportaiopytesseractawaitaiopytesseract.tesseract_parameters()Confidence only infoimportaiopytesseractawaitaiopytesseract.confidence("tests/samples/file-sample_150kB.png")Deskew infoimportaiopytesseractawaitaiopytesseract.deskew("tests/samples/file-sample_150kB.png")Extract text from an image: locally or bytesfrompathlibimportPathimportaiopytesseractawaitaiopytesseract.image_to_string("tests/samples/file-sample_150kB.png")awaitaiopytesseract.image_to_string(Path("tests/samples/file-sample_150kB.png").read_bytes(),dpi=220,lang='eng+por')Box estimatesfrompathlibimportPathimportaiopytesseractawaitaiopytesseract.image_to_boxes("tests/samples/file-sample_150kB.png")awaitaiopytesseract.image_to_boxes(Path("tests/samples/file-sample_150kB.png")Boxes, confidence and page numbersfrompathlibimportPathimportaiopytesseractawaitaiopytesseract.image_to_data("tests/samples/file-sample_150kB.png")awaitaiopytesseract.image_to_data(Path("tests/samples/file-sample_150kB.png")Information about orientation and script detectionfrompathlibimportPathimportaiopytesseractawaitaiopytesseract.image_to_osd("tests/samples/file-sample_150kB.png")awaitaiopytesseract.image_to_osd(Path("tests/samples/file-sample_150kB.png")Generate a searchable PDFfrompathlibimportPathimportaiopytesseractawaitaiopytesseract.image_to_pdf("tests/samples/file-sample_150kB.png")awaitaiopytesseract.image_to_pdf(Path("tests/samples/file-sample_150kB.png")Generate HOCR outputfrompathlibimportPathimportaiopytesseractawaitaiopytesseract.image_to_hocr("tests/samples/file-sample_150kB.png")awaitaiopytesseract.image_to_hocr(Path("tests/samples/file-sample_150kB.png")Multi ouputfrompathlibimportPathimportaiopytesseractasyncwithaiopytesseract.run(Path('tests/samples/file-sample_150kB.png').read_bytes(),'output','alto tsv txt')asresp:# will generate (output.xml, output.tsv and output.txt)print(resp)alto_file,tsv_file,txt_file=respConfig variablesfrompathlibimportPathimportaiopytesseractasyncwithaiopytesseract.run(Path('tests/samples/text-with-chars-and-numbers.png').read_bytes(),'output','alto tsv txt'config=[("tessedit_char_whitelist","0123456789")])asresp:# will generate (output.xml, output.tsv and output.txt)print(resp)alto_file,tsv_file,txt_file=respfrompathlibimportPathimportaiopytesseractawaitaiopytesseract.image_to_string("tests/samples/text-with-chars-and-numbers.png",config=[("tessedit_char_whitelist","0123456789")])awaitaiopytesseract.image_to_string(Path("tests/samples/text-with-chars-and-numbers.png").read_bytes(),dpi=220,lang='eng+por',config=[("tessedit_char_whitelist","abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")])For more details on Tesseract best practices and the aiopytesseract, see the folder:docs.ExamplesIf you want to testaiopytesseracteasily, can you use some options like:docker/docker-composestreamlitDocker / docker-composeAfter clone this repo run the command below:docker-composeup-dstreamlit appFor this option it's necessary first installaiopytesseractandstreamlit, after execute:# remote option:streamlitrunhttps://github.com/amenezes/aiopytesseract/blob/master/examples/streamlit/app.py# local 
option:streamlitrunexamples/streamlit/app.pynote: The streamlit example needspython >= 3.10LinksLicense:Apache LicenseCode:https://github.com/amenezes/aiopytesseractIssue tracker:https://github.com/amenezes/aiopytesseract/issuesDocs:https://github.com/amenezes/aiopytesseract
aiopyupbit | aiopyupbitaiopyupbit is python wrapper for upbit API for asyncio and Python
which is based onpyupbitInstallationInstalling:pip install aiopyupbitUsageaiopyupbit syntax strives to be similar topyupbit.
import asyncio
import aiopyupbit

async def main():
    print(await aiopyupbit.get_tickers())
    print(await aiopyupbit.get_current_price("KRW-BTC"))
    print(await aiopyupbit.get_current_price(["KRW-BTC", "KRW-XRP"]))
    print(await aiopyupbit.get_ohlcv("KRW-BTC"))
    ...

if __name__ == "__main__":
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(main())  # the original passed 'main' without calling it
AboutSome features are not currently implemented:WebSocketManager classIssuesPlease report any issues viagithub issues
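Because the package mirrors pyupbit, keyword arguments should carry over as well; this sketch assumes get_ohlcv accepts pyupbit's interval and count parameters (verify against the pyupbit docs):
import asyncio
import aiopyupbit

async def main():
    # fetch 30 daily candles for BTC (parameter names assumed from pyupbit)
    print(await aiopyupbit.get_ohlcv("KRW-BTC", interval="day", count=30))

asyncio.run(main())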
aiopywttr | aiopywttrAsynchronous wrapper forwttr.inweather API.Synchronous versionhere.Installationpython-mpipinstall-Uaiopywttrpywttr-modelsDocumentationhttps://aiopywttr.readthedocs.ioLicenseMIT |
aio-q3-rcon | aio-q3-rconAn async Quake 3 RCON implementation for PythonInstallationpip install aio-q3-rconor with the cli extrapip install aio-q3-rcon[cli]CLI UsageUsage: q3rcon [OPTIONS] ADDRESS PASSWORD
Options:
-p, --port INTEGER RANGE [1<=x<=65535]
--timeout FLOAT RANGE [x>=0.01]
--fragment-read-timeout, --fr-timeout FLOAT RANGE
[x>=0.01]
--retries INTEGER RANGE [x>=1]
--debug
--help Show this message and exit.API ReferenceExamples FolderclassClient(host:str,port:int,timeout:float,fragment_read_timeout:float,retries:int,logger:Logger | None)Parameters:host:str-the host / IP / domain of the server to connect toport:port-the port of the server to connect todefault value is27960timeout:float-the timeout for network operationsdefault value is2.0for network operations with retries, the timeout applies to the rewrite attempts as a whole, rather than being per retryfragment_read_timeout:float-the timeout for waiting on potentially fragmented responsesdefault value is.25the Quake 3 server can sometimes send fragmented responses, since there is no consistent way to tell if a response is fragmented or not, the best solution is to wait for fragmented responses from the server whether they exist or not. This value is the timeout for waiting for those responses.retries:int-the amount of retries per network operationdefault value is2all network operations except for reads are wrapped in retry logiclogger:Logger | None-the logger instancedefault value isNoneif there is no logger specified, a logger that hasdisabledset toTruewill be used insteadcurrently only some debug information is loggedMethods:connect(verify:bool=True) ->Noneconnects to the serverifverifyisTrue, then theheartbeatRCON command is sent and the password is checked as wellifClientis being used as a context manager, this will be called automatically upon enterclose() ->Nonecloses the connection to the serverifClientis being used as a context manager, this will be called automatically upon exitexceptionRCONErrorBase exception all aio-q3-rcon errors derive fromexceptionIncorrectPasswordErrorRaised when the provided password is incorrect |
aioqb | ✨ aioqb ✨The asyncioQbittorrent ClientThe asyncio Qbittorrent Clientimportasyncioimportaioqbasyncdefmain():client=aioqb.QbittorrentClient()awaitclient.torrents_add(torrents=[open("xxx.torrent","rb")])print(awaitclient.transfer_info())print(awaitclient.torrents_info())asyncio.run(main())Auto ban thunder"""Copyright (c) 2008-2022 synodriver <[email protected]>"""# Auto ban xunlei without qbeeimportasynciofrompprintimportpprintfromaioqbimportClientblock_list=["xl","xunlei"]asyncdefmain():asyncwithClient()asclient:pprint(awaitclient.auth_login())whileTrue:d=awaitclient.sync_maindata()# pprint(d)torrent_hashs=d['torrents'].keys()rid=d['rid']fortintorrent_hashs:data=awaitclient.sync_torrentPeers(hash=t,rid=0)# filter(lambda x: for ip, peer in data["peers"].items() if , block_list)forip,peerindata["peers"].items():# print(ip)# pprint(v)forbinblock_list:ifbinpeer['client'].lower():awaitclient.transfer_banPeers(ip)print(f"ban peer{ip}{peer['client']}")breakawaitasyncio.sleep(1)asyncio.run(main()) |
aioqbt | aioqbtPython library for qBittorrent WebAPI with asyncio.Features:Async typed interfaces.Complete qBittorrent WebAPI.Tested with qBittorrent v4.1.5 to v4.6.0 on Debian/Ubuntu.Documentationhttps://aioqbt.readthedocs.io/en/latest/Quick StartInstall withpip$pipinstallaioqbtimportasynciofromaioqbt.apiimportInfoFilterfromaioqbt.clientimportcreate_clientasyncdefmain():client=awaitcreate_client("http://localhost:8080/api/v2/",username="admin",password="adminadmin",)asyncwithclient:# print client and API versionsprint(awaitclient.app.version())# v4.6.1print(awaitclient.app.webapi_version())# 2.9.3# print torrents in downloadingforinfoinawaitclient.torrents.info(filter=InfoFilter.DOWNLOADING):print(f"{info.added_on.isoformat()}added{info.name!r}")# 2023-11-06T17:59:00 added 'ubuntu-22.04.3-desktop-amd64.iso'if__name__=='__main__':asyncio.run(main())Seedetailed usage on Read the Docs. |
aioqiniu | No description available on PyPI. |
aioqs | aioqsPython 3.5+ library package providing yet another queue/scheduler for asynchronous coroutines that limits the number of simultaneously running coroutines, exposed as a simple-to-use async iterator, with no overhead from tracking worker coroutine state.
aioqsw | aioqswPython library to control QNAP QSW devices.RequirementsPython >= 3.11InstallpipinstallaioqswInstall from SourceRun the following command inside this folderpipinstall--upgrade.ExamplesExamples can be found in theexamplesfolder |
aioqs-wWolf | Failed to fetch description. HTTP Status Code: 404 |
aio-quakeml-client | python-aio-quakeml-clientThis library provides convenient async access to QuakeML feeds. |
aio-quakeml-ingv-centro-nazionale-terremoti-client | python-quakeml-ingv-centro-nazionale-terremoti-clientThis library provides convenient async access to the INGV Centro Nazionale Terremoti (Earthquakes) QuakeML feeds.ExamplesRetrieve all events from the last 24 hours (default timeframe):importasynciofromaiohttpimportClientSessionfromaio_quakeml_ingv_centro_nazionale_terremoti_clientimportIngvCentroNazionaleTerremotiQuakeMLFeedasyncdefmain()->None:asyncwithClientSession()aswebsession:# Home Coordinates: Latitude: 43.7, Longitude: 11.2feed=IngvCentroNazionaleTerremotiQuakeMLFeed(websession,(43.7,11.2))status,entries=awaitfeed.update()print(status)ifentries:forentryinentries:print(f"- ID:{entry.external_id}- Magnitude:{entry.magnitude.mag}- Distance:{entry.distance_to_home:.2f}")asyncio.get_event_loop().run_until_complete(main())Retrieve all events from the last 24 hours (default timeframe) and within a radius of
100km around the provided home coordinates:importasynciofromaiohttpimportClientSessionfromaio_quakeml_ingv_centro_nazionale_terremoti_clientimportIngvCentroNazionaleTerremotiQuakeMLFeedasyncdefmain()->None:asyncwithClientSession()aswebsession:# Home Coordinates: Latitude: 43.7, Longitude: 11.2# Filter radius: 100 kmfeed=IngvCentroNazionaleTerremotiQuakeMLFeed(websession,(43.7,11.2),filter_radius=100)status,entries=awaitfeed.update()print(status)ifentries:forentryinentries:print(f"- ID:{entry.external_id}- Magnitude:{entry.magnitude.mag}- Distance:{entry.distance_to_home:.2f}")asyncio.get_event_loop().run_until_complete(main())Retrieve all events from the last 24 hours (default timeframe), within a radius of
100km around the provided home coordinates, and with a magnitude of 2.0 or higher:importasynciofromaiohttpimportClientSessionfromaio_quakeml_ingv_centro_nazionale_terremoti_clientimportIngvCentroNazionaleTerremotiQuakeMLFeedasyncdefmain()->None:asyncwithClientSession()aswebsession:# Home Coordinates: Latitude: 43.7, Longitude: 11.2# Filter radius: 100 km# Filter minimum magnitude: 2.0feed=IngvCentroNazionaleTerremotiQuakeMLFeed(websession,(43.7,11.2),filter_radius=100,filter_minimum_magnitude=2.0)status,entries=awaitfeed.update()print(status)ifentries:forentryinentries:print(f"- ID:{entry.external_id}- Magnitude:{entry.magnitude.mag}- Distance:{entry.distance_to_home:.2f}")asyncio.get_event_loop().run_until_complete(main()) |
aioquant | Failed to fetch description. HTTP Status Code: 404 |
aioquery | InstallPip:pip3 install aioqueryGit:pip3 install git+https://github.com/WardPearce/aioquery.gitDocumentationDocumentation |
aioqueue | UNKNOWN |
aioqueueext | aioqueueextA package that provides asyncio Queues with additional functionality.Work-in-ProgressThe repository contains modules extracted from my other project and was refactored as a separate package.In the current version, I have not verified all of the functions.Additional functions I plan to implement are:return_when_*()- async functions to ease synchronization tasksset_on_get_callback()set_on_put_callback()peek_nowait()- returns the "up-next" item without removing it from the queuepeek_and_get()- async peek and conditionally get (pop) an item from the queueExamplesSync Peeking
import asyncio

from aioqueueext import QueueExt  # import path assumed from the package name

async def sync_peeking_example() -> None:
    queue1 = QueueExt()
    await queue1.put("apple")
    item = queue1.peek_nowait()
    print(f"first peek: {item}")  # "apple"
    await queue1.put("banana")
    item = queue1.peek_nowait()
    print(f"second peek: {item}")  # "apple"
    item = await queue1.get()  # popped "apple"
    item = queue1.peek_nowait()
    print(f"third peek: {item}")  # "banana"
    item = await queue1.get()  # popped "banana"
    # the next peek raises the QueueEmpty exception
    try:
        print(f"fourth peek: {queue1.peek_nowait()}")
    except asyncio.QueueEmpty:
        print("fourth peek failed: QueueEmpty")
aioqueuerpc | aioqueuerpcJSON RPC with asyncio queues (no transport layer implementation).Work-in-ProgressThe repository contains a module extracted from my other project and was refactored as a separate package.Usage Examples... |
aioqui | Overview
Documentation
Installation
pip install aioqui
Package dependencies
includes: pip install pyside6 qasync loguru
speedups: pip install aiodns ujson cchardet uvloop
depending on modules [optional]: pip install aiohttp uvicorn
aioquic | What is aioquic? aioquic is a library for the QUIC network protocol in Python. It features a minimal TLS 1.3 implementation, a QUIC stack and an HTTP/3 stack. QUIC was standardised in RFC 9000 and HTTP/3 in RFC 9114. aioquic is regularly tested for interoperability against other QUIC implementations. To learn more about aioquic please read the documentation.

Why should I use aioquic? aioquic has been designed to be embedded into Python client and server libraries wishing to support QUIC and/or HTTP/3. The goal is to provide a common codebase for Python libraries in the hope of avoiding duplicated effort. Both the QUIC and the HTTP/3 APIs follow the “bring your own I/O” pattern, leaving actual I/O operations to the API user. This approach has a number of advantages, including making the code testable and allowing integration with different concurrency models. (A client sketch using the bundled asyncio glue follows this entry.)

Features:
- minimal TLS 1.3 implementation conforming with RFC 8446
- QUIC stack conforming with RFC 9000
- IPv4 and IPv6 support
- connection migration and NAT rebinding
- logging TLS traffic secrets
- logging QUIC events in QLOG format
- HTTP/3 stack conforming with RFC 9114
- server push support
- WebSocket bootstrapping conforming with RFC 9220
- datagram support conforming with RFC 9297

Installing. The easiest way to install aioquic is to run:

pip install aioquic

Building from source. If there are no wheels for your system or if you wish to build aioquic from source you will need the OpenSSL development headers.

Linux. On Debian/Ubuntu run:

sudo apt install libssl-dev python3-dev

On Alpine Linux run:

sudo apk add openssl-dev python3-dev bsd-compat-headers libffi-dev

OS X. On OS X run:

brew install openssl

You will need to set some environment variables to link against OpenSSL:

export CFLAGS=-I/usr/local/opt/openssl/include
export LDFLAGS=-L/usr/local/opt/openssl/lib

Windows. On Windows the easiest way to install OpenSSL is to use Chocolatey.

choco install openssl

You will need to set some environment variables to link against OpenSSL:

$Env:INCLUDE = "C:\Progra~1\OpenSSL\include"
$Env:LIB = "C:\Progra~1\OpenSSL\lib"

Running the examples. aioquic comes with a number of examples illustrating various QUIC use cases. You can browse these examples here: https://github.com/aiortc/aioquic/tree/main/examples

License. aioquic is released under the BSD license. |
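Although the core stack is I/O-free, the library ships asyncio glue for common cases. The sketch below uses aioquic's documented aioquic.asyncio API to open a QUIC connection and round-trip a PING frame; the host, port, ALPN value, and CA file are placeholders, and a compatible server must be running for it to succeed.

import asyncio

from aioquic.asyncio import connect
from aioquic.quic.configuration import QuicConfiguration

async def main() -> None:
    # Client-side configuration; the ALPN value depends on the server's protocol.
    configuration = QuicConfiguration(is_client=True, alpn_protocols=["doq"])
    configuration.load_verify_locations("pycacert.pem")  # placeholder CA file

    # connect() wraps the sans-I/O QuicConnection in an asyncio protocol.
    async with connect("localhost", 4433, configuration=configuration) as protocol:
        await protocol.ping()  # round-trip a PING frame to verify liveness
        print("QUIC connection established and alive")

asyncio.run(main())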
aioquic-gcc49 | What is aioquic? aioquic is a library for the QUIC network protocol in Python. It features a minimal TLS 1.3 implementation, a QUIC stack and an HTTP/3 stack. QUIC standardisation is not finalised yet, but aioquic closely tracks the specification drafts and is regularly tested for interoperability against other QUIC implementations. To learn more about aioquic please read the documentation.

Why should I use aioquic? aioquic has been designed to be embedded into Python client and server libraries wishing to support QUIC and/or HTTP/3. The goal is to provide a common codebase for Python libraries in the hope of avoiding duplicated effort. Both the QUIC and the HTTP/3 APIs follow the “bring your own I/O” pattern, leaving actual I/O operations to the API user. This approach has a number of advantages, including making the code testable and allowing integration with different concurrency models.

Features:
- QUIC stack conforming with draft-23
- HTTP/3 stack conforming with draft-23
- minimal TLS 1.3 implementation
- IPv4 and IPv6 support
- connection migration and NAT rebinding
- logging TLS traffic secrets
- logging QUIC events in QLOG format
- HTTP/3 server push support

Running the examples. aioquic requires Python 3.6 or better, and the OpenSSL development headers.

$ sudo apt install libssl-dev python3-dev

After checking out the code using git you can run:

$ pip install -e .
$ pip install aiofiles asgiref httpbin starlette wsproto

HTTP/3 server. You can run the example server, which handles both HTTP/0.9 and HTTP/3:

$ python examples/http3_server.py --certificate tests/ssl_cert.pem --private-key tests/ssl_key.pem

HTTP/3 client. You can run the example client to perform an HTTP/3 request:

$ python examples/http3_client.py --ca-certs tests/pycacert.pem https://localhost:4433/

Alternatively you can perform an HTTP/0.9 request:

$ python examples/http3_client.py --ca-certs tests/pycacert.pem --legacy-http https://localhost:4433/

You can also open a WebSocket over HTTP/3:

$ python examples/http3_client.py --ca-certs tests/pycacert.pem wss://localhost:4433/ws

License. aioquic is released under the BSD license. |
aioquic-mitmproxy | What is aioquic_mitmproxy? aioquic_mitmproxy is a fork of aioquic that is specifically targeted towards mitmproxy. If you want to use QUIC and/or HTTP/3 in your Python project, you should use aioquic instead: https://pypi.org/project/aioquic/ Any code contributions to aioquic should also be submitted directly to upstream: https://github.com/aiortc/aioquic

What is aioquic? aioquic is a library for the QUIC network protocol in Python. It features a minimal TLS 1.3 implementation, a QUIC stack and an HTTP/3 stack. QUIC was standardised in RFC 9000 and HTTP/3 in RFC 9114. aioquic is regularly tested for interoperability against other QUIC implementations. To learn more about aioquic please read the documentation.

Why should I use aioquic? aioquic has been designed to be embedded into Python client and server libraries wishing to support QUIC and/or HTTP/3. The goal is to provide a common codebase for Python libraries in the hope of avoiding duplicated effort. Both the QUIC and the HTTP/3 APIs follow the “bring your own I/O” pattern, leaving actual I/O operations to the API user. This approach has a number of advantages, including making the code testable and allowing integration with different concurrency models.

Features:
- QUIC stack conforming with RFC 9000
- HTTP/3 stack conforming with RFC 9114
- minimal TLS 1.3 implementation conforming with RFC 8446
- IPv4 and IPv6 support
- connection migration and NAT rebinding
- logging TLS traffic secrets
- logging QUIC events in QLOG format
- HTTP/3 server push support

Requirements. aioquic requires Python 3.8 or better.

Running the examples. aioquic comes with a number of examples illustrating various QUIC use cases. You can browse these examples here: https://github.com/aiortc/aioquic/tree/main/examples

License. aioquic is released under the BSD license. |
aioquic-pmd3 | Fork of https://github.com/aiortc/aioquic |
aioqvapay | QvaPay client for Python (deprecated). IMPORTANT: deprecated, we recommend using https://pypi.org/project/qvapay

An asynchronous (and also synchronous) non-official QvaPay client for asyncio and Python. This library is still under development; the interface could change.

Features:
- Response models fully annotated with type hints (internal code is fully annotated as well), thanks to Python's type hints and pydantic
- Asynchronous and synchronous behavior, thanks to httpx
- 100% coverage
- Collaborative, open-source project

Alternatives: https://pypi.org/project/qvapay

For more information about the QvaPay API, read the QvaPay docs.

Contributors ✨: thanks goes to these wonderful people (emoji key): Leynier Gutiérrez González 💻🚧⚠️. This project follows the all-contributors specification. Contributions of any kind are welcome! |
aioqzone | aioqzone wraps a number of Qzone interfaces. English | 简体中文

WARNING: aioqzone is still under development; any feature or interface may change in future versions.

IMPORTANT: Chinese-speaking developers willing to help develop or maintain the project are welcome. Not just this repository: every repository under the aioqzone organization needs your help.

Features. Qzone features:
- QR-code login
- password login (restricted)
- slider-captcha solving
- image-selection captcha solving
- network environment detection
- scraping HTML feeds
- scraping feed details
- like/unlike
- publish (text only), edit, and delete feeds
- post comments

Why choose aioqzone:
- full IDE type support (typing)
- API type validation (pydantic)
- async design
- easy to build on
- documentation support
- in progress: thorough test coverage

Package descriptions:
- aioqzone: Qzone API
- qqqr: Qzone login

Examples: these repositories provide some real-world usage examples of aioqzone.

aioqzone plugins: aioqzone-feed provides a simple interface for working with feeds.

License. Copyright (C) 2022-2023 aioqzone.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>. aioqzone is open-sourced under the AGPL-3.0. Disclaimer. |
aior | No description available on PyPI. |
aiorabbit | aiorabbit is an opinionated AsyncIO RabbitMQ client for Python 3 (3.7+).

Project goals:
- To create a simple, robust RabbitMQ client library for AsyncIO development in Python 3.
- To make use of new features and capabilities in Python 3.7+.
- To remove some complexity in using an AMQP client by abstracting away the AMQP channel, using it only as a protocol coordination mechanism inside the client, and removing the nowait keyword to ensure a single round-trip pattern of behavior for client usage.
- To automatically reconnect when a connection is closed due to an AMQP exception/error. When such a behavior is encountered, the exception is raised, but the client continues to operate if the user catches and logs the error.
- To automatically create a new channel when the channel is closed due to an AMQP exception/error. When such a behavior is encountered, the exception is raised, but the client continues to operate if the user catches and logs the error.
- To ensure correctness of API usage, including values passed to RabbitMQ in AMQP method calls.

Example use. The following demonstrates an example of using the library to publish a message with publisher confirmations enabled:

import asyncio
import datetime
import uuid

import aiorabbit

RABBITMQ_URL = 'amqps://guest:guest@localhost:5672/%2f'

async def main():
    async with aiorabbit.connect(RABBITMQ_URL) as client:
        await client.confirm_select()
        if not await client.publish(
                'exchange',
                'routing-key',
                'message-body',
                app_id='example',
                message_id=str(uuid.uuid4()),
                timestamp=datetime.datetime.utcnow()):
            print('Publishing failure')

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())

Documentation: http://aiorabbit.readthedocs.org

License. Copyright (c) 2019-2023 Gavin M. Roy
All rights reserved.Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.Neither the name of the copyright holder nor the names of its contributors may
be used to endorse or promote products derived from this software without
specific prior written permission.THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Python versions supported: 3.7+ |
aio-rabbitmq | aio-rabbitmq: a concise library built on aio-pika for publishing to and consuming from a message queue in RabbitMQ. (A rough sketch of the underlying aio-pika pattern follows.) |
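The entry above gives no usage example. Since the library is described as being built on aio-pika, a sketch of the aio-pika publish/consume pattern it wraps might look like the following; this uses aio-pika's public API, not necessarily aio-rabbitmq's, and the queue name and broker URL are placeholders.

import asyncio

import aio_pika

async def main() -> None:
    # Connect to RabbitMQ; the URL is a placeholder.
    connection = await aio_pika.connect_robust("amqp://guest:guest@localhost/")
    async with connection:
        channel = await connection.channel()
        queue = await channel.declare_queue("demo-queue", durable=True)

        # Publish a message to the default exchange, routed to the queue.
        await channel.default_exchange.publish(
            aio_pika.Message(body=b"hello"),
            routing_key="demo-queue",
        )

        # Consume a single message and acknowledge it.
        async with queue.iterator() as messages:
            async for message in messages:
                async with message.process():  # acks on success
                    print(message.body)
                break

asyncio.run(main())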
aioradio | aioradio: generic asynchronous I/O Python utilities for AWS services (SQS, S3, DynamoDB, Secrets Manager), Redis, MSSQL (pyodbc), JIRA and more.

AWS S3 example code. aioradio abstracts aiobotocore and aioboto3, making async AWS function calls simple one-liners. Besides what is shown below in the examples, there is also support for SQS, DynamoDB and Secrets Manager.

import asyncio

from aioradio.aws.s3 import (
    create_bucket,
    delete_s3_object,
    download_file,
    get_object,
    list_s3_objects,
    upload_file
)

async def main():
    s3_bucket = 'aioradio'
    s3_prefix = 'test'
    filename = 'hello_world.txt'
    s3_key = f'{s3_prefix}/{filename}'

    # create an s3 bucket called aioradio
    await create_bucket(bucket=s3_bucket)

    # create hello_world.txt file
    with open(filename, 'w') as file_handle:
        file_handle.write('hello world of aioradio!')

    # upload the file to s3 and confirm it now exists there
    await upload_file(bucket=s3_bucket, filepath=filename, s3_key=s3_key)
    assert s3_key in await list_s3_objects(bucket=s3_bucket, s3_prefix=s3_prefix)

    # test downloading the file
    await download_file(bucket=s3_bucket, filepath=filename, s3_key=s3_key)

    # test getting file data to object
    result = await get_object(bucket=s3_bucket, s3_key=s3_key)
    assert result == b'hello world of aioradio!'

    # delete the file from s3
    await delete_s3_object(bucket=s3_bucket, s3_prefix=s3_key)
    assert s3_key not in await list_s3_objects(bucket=s3_bucket, s3_prefix=s3_prefix)

asyncio.get_event_loop().run_until_complete(main())

MSSQL example code. aioradio uses the pyodbc library to work with ODBC databases. It currently has support for connecting and sending queries to MSSQL.

from aioradio.pyodbc import establish_pyodbc_connection
from aioradio.pyodbc import pyodbc_query_fetchone
from aioradio.pyodbc import pyodbc_query_fetchall

def main():
    conn = establish_pyodbc_connection(host='your-host', user='your-user', pwd='your-password')

    query = "SELECT homeruns FROM MLB.dbo.LosAngelesAngels WHERE lastname = 'Trout' AND year = '2020'"
    row = pyodbc_query_fetchone(conn=conn, query=query)
    print(row)

    query = "SELECT homeruns FROM MLB.dbo.LosAngelesAngels WHERE lastname = 'Trout'"
    rows = pyodbc_query_fetchall(conn=conn, query=query)
    print(rows)

main()

Jira example code. The Jira helpers use the async library httpx behind the scenes to send HTTP requests.

import asyncio

from aioradio.jira import add_comment_to_jira
from aioradio.jira import get_jira_issue
from aioradio.jira import post_jira_issue

async def main():
    # create a jira ticket
    url = 'https://aioradio.atlassian.net/rest/api/2/issue/'
    payload = {
        "fields": {
            "project": {"key": "aioradio"},
            "issuetype": {"name": "Task"},
            "reporter": {"accountId": "somebodies-account-id"},
            "priority": {"name": "Medium"},
            "summary": "Aioradio rocks!",
            "description": "Aioradio Review",
            "labels": ["aioradio"],
            "assignee": {"accountId": "somebodies-account-id"}
        }
    }
    resp = await post_jira_issue(url=url, jira_user='your-user', jira_token='your-password', payload=payload)
    jira_id = resp.json()['key']

    # get jira ticket info
    resp = await get_jira_issue(url=f'{url}/{jira_id}', jira_user='your-user', jira_token='your-password')

    # add comment to jira ticket
    comment = 'aioradio rocks!'
    response = await add_comment_to_jira(url=url, jira_user='your-user', jira_token='your-password', comment=comment)

asyncio.get_event_loop().run_until_complete(main())

Installing for direct development of aioradio:
- Install Python 3.11.X.
- Make sure you've installed ODBC drivers, required for using the python package pyodbc.
- Clone aioradio locally and navigate to the root directory.
- Install and activate a Python virtualenv:

python3.11 -m venv env
source env/bin/activate

- Install the python modules included in requirements.txt:

pip install cython
pip install -r aioradio/requirements.txt

- Run the Makefile command from the root directory to test all is good before pushing to master:

make all

Authors: Tim Reichard (aioradio). See also the list of contributors who participated in this project.

Acknowledgements: Pedro Artiga, developer contributing to aioradio. |
aioradios | aioradios is an asynchronous API wrapper for www.radio-browser.info.

Installation. Use the package manager pip to install aioradios:

pip install aioradios

Example usage. In:

from aioradios import RadioBrowser

async def main():
    rb = RadioBrowser()
    await rb.init()
    radio = await rb.search(name='UpBeatRadio', limit=1)

Out:

[
    {
        "changeuuid": "29c0910a-2fae-4623-8054-eaee674fe602",
        "stationuuid": "ad95f623-c7fd-4ecb-98d5-32242708ce63",
        "name": "UpBeatRadio",
        "url": "http://live.upbeat.pw/",
        "url_resolved": "http://live.upbeat.pw/",
        "homepage": "https://upbeat.pw/",
        "favicon": "http://upbeatradio.net/UpBeat.png",
        "tags": "",
        "country": "UK",
        "countrycode": "",
        "state": "",
        "language": "english",
        "votes": 0,
        "lastchangetime": "2020-06-23 12:38:08",
        "codec": "MP3",
        "bitrate": 128,
        "hls": 0,
        "lastcheckok": 1,
        "lastchecktime": "2020-11-04 04:14:11",
        "lastcheckoktime": "2020-11-04 04:14:11",
        "lastlocalchecktime": "2020-11-03 19:16:54",
        "clicktimestamp": "2020-10-22 15:09:37",
        "clickcount": 8,
        "clicktrend": 0
    }
]

Contributing. Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Documentation. For documentation do:

from aioradios import RadioBrowser
help(RadioBrowser())

License: MIT |
aioraft | UNKNOWN |
aioraft-ng | No description available on PyPI. |
aiorate | aiorate: loop frequency regulator for asyncio with an API similar to rospy.Rate. This project is archived, as it has been superseded by loop-rate-limiters.

Installation:

pip install aiorate

Usage. The Rate class provides a non-blocking loop frequency limiter:
- Set the loop frequency in Hz at construction: rate = aiorate.Rate(200.0)
- Call await rate.sleep() at every loop cycle

Here is what it looks like in practice:

import asyncio

import aiorate

async def main():
    rate = aiorate.Rate(400.0)  # Hz
    while True:
        loop_time = asyncio.get_event_loop().time()
        print(f"Hello from loop at {loop_time:.3f} s")
        await rate.sleep()

if __name__ == "__main__":
    asyncio.run(main())

Check out the examples folder for more advanced use cases, such as multiple loops running simultaneously at different rates. |
aio-rate-limiter | aio-rate-limiter: rate limit a function using Redis as a backend. This is a smaller library modeled after python-redis-rate-limit but it uses aioredis. Supports Python 3.6+.

Installation:

pip install aio-rate-limiter

Example:

import logging

import aioredis
from aio_rate_limiter import RateLimiter, TooManyRequests

async def example():
    pool = await aioredis.create_redis_pool("redis://localhost:6379")
    try:
        async with RateLimiter(
            pool,
            # Rate limit requests to a resource
            "name-of-external-system",
            # Allow up to 100 requests in 60 seconds
            max_requests=100,
            time_window=60,
        ):
            await do_work()
    except TooManyRequests:
        logging.warning("Try again later")

Development:

# Install poetry
pip install poetry
# Install all package dependencies
poetry install
# Launch a shell with dependencies available
poetry shell
# Run tests (requires Redis server running at localhost:6379)
pytest

# When you're ready to publish...
# Bump version
poetry version <version>
# Set your pypi token
export POETRY_PYPI_TOKEN_PYPI='...'
# Build and publish
poetry build
poetry publish |
aioratelimits | aioratelimits: client rate limiter. It enqueues function calls and runs them as a leaky bucket to enforce the specified rate.

Implementation: leaky bucket. There is one queue for requests and count workers; each worker can handle one request per delay seconds.

Install:

pip install aioratelimits

Use. The following code prints not more than 2 lines per second:

import asyncio

from aioratelimits import RateLimiter

async def critical_resource(i: int):
    print('request:', i)

async def main():
    async with RateLimiter(count=2, delay=1) as limiter:
        await asyncio.gather(*(
            limiter.run(critical_resource(i))
            for i in range(10)
        ))

asyncio.run(main())

Arguments to RateLimiter:
- count: how many calls we can do in the specified interval
- delay: the interval in seconds |
aioraven | aioraven: asynchronous communication with RAVEn devices made by Rainforest Automation. |
aiorazemax | Aiorazemax ✉️ Async communications using AWS SNS + SQS for Python services ✨ Documentation available.

In-memory event manager. Show me the code:

from aiorazemax.event_manager import EventManager

class NorthKoreaThreatCreatedEvent:
    def __init__(self, id, target):
        self.id = id
        self.target = target

async def trump_subscriber(event: NorthKoreaThreatCreatedEvent):
    print(f"North korea will attack us or {event.target}!")

EventManager.subscribe(trump_subscriber, NorthKoreaThreatCreatedEvent)
await EventManager.trigger(NorthKoreaThreatCreatedEvent(0, "Mexico"))

Result: North korea will attack us or Mexico!

Trigger subscribers from SQS. Preconditions: the SQS queue has to be subscribed to the SNS topic before running the consumer.

Code:

import asyncio

from aiorazemax.consumers import MessageConsumer
from aiorazemax.drivers import SQSDriver
from aiorazemax.event_manager import EventManager
from aiorazemax.publisher import SNSMessagePublisher

aws_settings = {
    'region_name': "",
    'aws_access_key_id': "",
    'aws_secret_access_key': "",
    'endpoint_url': ""
}

class NorthKoreaThreatCreatedEvent:
    def __init__(self, id, target):
        self.id = id
        self.target = target

def kp_message_to_event(event_message):
    message = event_message.body
    # Highly recommended to use Marshmallow to validate
    return NorthKoreaThreatCreatedEvent(message['body']['id'],
                                        message['body']['target_name'])

mapper = {
    'KPThreatCreated': kp_message_to_event
}

async def trump_subscriber(event: NorthKoreaThreatCreatedEvent):
    print(f"North korea will attack us or {event.target}!")

async def main():
    EventManager.subscribe(trump_subscriber, NorthKoreaThreatCreatedEvent)

    queue_driver = await SQSDriver.build('korea-threats-queue', aws_settings)
    consumer = MessageConsumer(mapper, EventManager, queue_driver)
    publisher = await SNSMessagePublisher.build('korea-topic', aws_settings)

    await publisher.publish('KPThreatCreated', {'id': 21, 'target_name': 'Portugal'})
    await consumer.process_message()

    await queue_driver.close()
    await publisher.close()

if __name__ == '__main__':
    asyncio.run(main())

Result: North korea will attack us or Portugal!

Installing:

pip install aiorazemax

Running the tests. To run end-to-end tests do:

make unit-tests
make integration-tests

Authors: Jairo Vadillo (@jairovadillo).

License: this project is licensed under the MIT License; see the LICENSE.md file for details. |