package | package-description |
---|---|
aiokafka-commit | asyncio client for Kafka, forked from aio-libs/aiokafka.
This project will be removed in the future when aio-libs/aiokafka fixes the commit issue.

AIOKafkaProducer

AIOKafkaProducer is a high-level, asynchronous message producer.

Example of AIOKafkaProducer usage:

    from aiokafka import AIOKafkaProducer
    import asyncio

    async def send_one():
        producer = AIOKafkaProducer(bootstrap_servers='localhost:9092')
        # Get cluster layout and initial topic/partition leadership information
        await producer.start()
        try:
            # Produce message
            await producer.send_and_wait("my_topic", b"Super message")
        finally:
            # Wait for all pending messages to be delivered or expire.
            await producer.stop()

    asyncio.run(send_one())

AIOKafkaConsumer

AIOKafkaConsumer is a high-level, asynchronous message consumer.
It interacts with the assigned Kafka Group Coordinator node to allow multiple
consumers to load balance consumption of topics (requires kafka >= 0.9.0.0).

Example of AIOKafkaConsumer usage:

    from aiokafka import AIOKafkaConsumer
    import asyncio

    async def consume():
        consumer = AIOKafkaConsumer(
            'my_topic', 'my_other_topic',
            bootstrap_servers='localhost:9092',
            group_id="my-group")
        # Get cluster layout and join group `my-group`
        await consumer.start()
        try:
            # Consume messages
            async for msg in consumer:
                print("consumed: ", msg.topic, msg.partition, msg.offset,
                      msg.key, msg.value, msg.timestamp)
        finally:
            # Will leave consumer group; perform autocommit if enabled.
            await consumer.stop()

    asyncio.run(consume())
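Since 0.6.0 (see the changelog below), both classes also support use as async context managers, which handle start() and stop() automatically; a minimal sketch of the producer variant:

    from aiokafka import AIOKafkaProducer
    import asyncio

    async def send_one():
        # the context manager calls start() on entry and stop() on exit
        async with AIOKafkaProducer(bootstrap_servers='localhost:9092') as producer:
            await producer.send_and_wait("my_topic", b"Super message")

    asyncio.run(send_one())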
Running tests

Docker is required to run tests. See https://docs.docker.com/engine/installation for installation notes. Also note that the lz4 compression libraries for python will require the python-dev package, or python source header files for compilation on Linux.
NOTE: You will also need a valid java installation. It's required for the keytool utility, used to generate ssl keys for some tests.

Setting up test requirements (assuming you're within a virtualenv on ubuntu 14.04+):

    sudo apt-get install -y libsnappy-dev libzstd-dev
    make setup

Running tests with coverage:

    make cov

To run tests with a specific version of Kafka (default one is 1.0.2) use the KAFKA_VERSION variable:

    make cov KAFKA_VERSION=0.10.2.1

Test running cheat sheet:

- make test FLAGS="-l -x --ff" - run until 1 failure, rerun failed tests first. Great for cleaning up a lot of errors, say after a big refactor.
- make test FLAGS="-k consumer" - run only the consumer tests.
- make test FLAGS="-m 'not ssl'" - run tests excluding ssl.
- make test FLAGS="--no-pull" - do not try to pull a new docker image before the test run.

Changelog

0.7.2 (2021-09-02)

Bugfixes:
- Fix CancelledError handling in sender (issue #710)
- Fix exception for weakref use after object deletion (issue #755)
- Fix consumer's start() method hanging after being idle for more than max_poll_interval_ms (issue #764)

Improved Documentation:
- Add SASL_PLAINTEXT and SASL_SSL to valid values of the security protocol attribute (pr #768 by @pawelrubin)

0.7.1 (2021-06-04)

Bugfixes:
- Allow group coordinator to close when all brokers are unavailable (issue #659 and pr #660 by @dkilgore90)
- Exclude .so from source distribution to fix usage of sdist tarball (issue #681 and pr #684 by ods)
- Add dataclasses backport package to dependencies for Python 3.6 (pr #690 by @ods)
- Fix initialization without running loop (issue #689 and pr #690 by @ods)
- Fix consumer fetcher for python3.9 (pr #672 by @dutradda)
- Make sure generation and member id are correct after (re)joining group (issue #727 and pr #747 by @vangheem)

Deprecation:
- Add deprecation warning when the loop argument is passed to AIOKafkaConsumer or AIOKafkaProducer. It's scheduled for removal in 0.8.0 as a preparation step towards upcoming Python 3.10 (pr #699 by @ods)

Improved Documentation:
- Update docs and examples to not use deprecated practices like passing loop explicitly (pr #693 by @ods)
- Add docstring for Kafka header support in Producer.send() (issue #566 and pr #650 by @andreportela)

0.7.0 (2020-10-28)

New features:
- Add support for Python 3.8 and 3.9. (issue #569, pr #669 and #676 by @ods)
- Drop support for Python 3.5. (pr #667 by @ods)
- Add OAUTHBEARER as a new sasl_mechanism. (issue #618 and pr #630 by @oulydna)

Bugfixes:
- Fix memory leak in kafka consumer when the consumer is in idle state, not consuming any message. (issue #628 and pr #629 by @iamsinghrajat)

0.6.0 (2020-05-15)

New features:
- Add async context manager support for both Producer and Consumer. (pr #613 and #494 by @nimish)
- Upgrade to kafka-python version 2.0.0 and set it as a non-strict parameter. (issue #590 by @yumendy and #558 by @originalgremlin)
- Make loop argument optional (issue #544)
- SCRAM-SHA-256 and SCRAM-SHA-512 support for SASL authentication (issue #571 and pr #588 by @SukiCZ)
- Added headers param to AIOKafkaProducer.send_and_wait (pr #553 by @megabotan)
- Add consumer.last_poll_timestamp(partition) which gives the ms timestamp of the last update of highwater and lso. (issue #523 and pr #526 by @aure-olli)
- Change all code base to async-await (pr #522)
- Minor: added PR and ISSUE templates to GitHub

Bugfixes:
- Ignore debug package generation on bdist_rpm command. (issue #599 by @gabriel-tincu)
- UnknownMemberId was raised to the user instead of retrying on auto commit. (issue #611)
- Fix issue with messages not being read after subscriptions change with group_id=None. (issue #536)
- Handle RequestTimedOutError in coordinator._do_commit_offsets() method to explicitly mark coordinator as dead. (issue #584 and pr #585 by @FedirAlifirenko)
- Added handling of asyncio.TimeoutError on metadata request to broker and metadata update. (issue #576 and pr #577 by @MichalMazurek)
- Too many reqs on kafka not available (issue #496 by @lud4ik)
- Consumer.seek_to_committed now returns mapping of committed offsets (pr #531 by @ask)
- Message Accumulator: add_message being recursive eventually overflows (pr #530 by @ask)

Improved Documentation:
- Clarify auto_offset_reset usage. (pr 601 by @dargor)
- Fix spelling errors in comments and documentation using codespell (pr #567 by mauritsvdvijgh)
- Delete old benchmark file (issue #546 by @jeffwidman)
- Fix a few typos in docs (pr #573 and pr #563 by @ultrabug)
- Fix typos, spelling, grammar, etc (pr #545 and pr #547 by @jeffwidman)
- Fix typo in docs (pr #541 by @pablogamboa)
- Fix documentation for benchmark (pr #537 by @abhishekray07)
- Better logging for bad CRC (pr #529 by @ask)

0.5.2 (2019-03-10)

Bugfixes:
- Fix ConnectionError breaking metadata sync background task (issue #517 and #512)
- Fix event_waiter reference before assignment (pr #504 by @romantolkachyov)
- Bump version of kafka-python

0.5.1 (2019-03-10)

New features:
- Add SASL support with both SASL plain and SASL GSSAPI. Support also includes Broker v0.9.0, but you will need to explicitly pass api_version="0.9". (Big thanks to @cyrbil and @jsurloppe for working on this)
- Added support for max_poll_interval_ms and rebalance_timeout_ms settings (issue #67)
- Added pause/resume API for AIOKafkaConsumer. (issue #304)
- Added header support to both AIOKafkaConsumer and AIOKafkaProducer for brokers v0.11 and above. (issue #462)

Bugfixes:
- Made sure to not request metadata for all topics if the broker version is passed explicitly and is 0.10 and above. (issue #440, thanks to @ulrikjohansson)
- Make sure heartbeat task will close if group is reset. (issue #372)

0.5.0 (2018-12-28)

New features:
- Add full support for V2 format messages with a Cython extension. Those are used for Kafka >= 0.11.0.0
- Added support for transactional producing (issue #182)
- Added support for idempotent producing with the enable_idempotence parameter
- Added support for fetch_max_bytes in AIOKafkaConsumer. This can help limit the amount of data transferred in a single roundtrip to the broker, which is essential for consumers with a large number of partitions

Bugfixes:
- Fix issue with connections not propagating serialization errors
- Fix issue with group=None resetting offsets on every metadata update (issue #441)
- Fix issue with messages not delivered in order when Leader changes (issue #228)
- Fixed version parsing of the api_version parameter. Before, it ignored the parameter

0.4.3 (2018-11-01)

Bugfix:
- Fixed memory issue introduced as a result of a bug in asyncio.shield and not cancelling the coroutine after usage. (see issue #444 and #436)

0.4.2 (2018-09-12)

Bugfix:
- Added error propagation from coordinator to main consumer. Before, the consumer just stopped with the error logged. (issue #294)
- Fix manual partition assignment, broken in 0.4.0 (issue #394)
- Fixed RecursionError in MessageAccumulator.add_message (issue #409)
- Update kafka-python to latest 1.4.3 and added support for Python3.7
- Dropped support for Python3.3 and Python3.4

Infrastructure:
- Added Kafka 1.0.2 broker for CI test runner
- Refactored travis CI build pipeline

0.4.1 (2018-05-13)

- Fix issue when offset commit error reports wrong partition in log (issue #353)
- Add ResourceWarning when Producer, Consumer or Connections are not closed properly (issue #295)
- Fix Subscription None in GroupCoordinator._do_group_rejoin (issue #306)

0.4.0 (2018-01-30)

Major changes:
- Full refactor of the internals of AIOKafkaConsumer. Needed to avoid several race conditions in code (PR #286, fixes #258, #264 and #261)
- Rewrote Records parsing protocol to allow implementation of newer protocol versions later
- Added C extension for Records parsing protocol, boosting the speed of produce/consume routines significantly
- Added an experimental batch producer API for unique cases, where the user wants to control batching himself (by @shargan)

Minor changes:
- Add timestamp field to produced message's metadata. This is needed to find LOG_APPEND_TIME configured timestamps.
- Consumer.seek() and similar APIs now raise proper ValueError's on validation failure instead of AssertionError.

Bug fixes:
- Fix connections_max_idle_ms option, as earlier it was only applied to the bootstrap socket. (PR #299)
- Fix consumer.stop() side effect of logging an exception ConsumerStoppedError (issue #263)
- Problem with Producer not able to recover from broker failure (issue #267)
- Traceback containing duplicate entries due to exception sharing (PR #247 by @Artimi)
- Concurrent record consumption raising InvalidStateError('Exception is not set.') (PR #249 by @aerkert)
- Don't fail GroupCoordinator._on_join_prepare() if commit_offset() throws an exception (PR #230 by @shargan)
- Send session_timeout_ms to GroupCoordinator constructor (PR #229 by @shargan)

Big thanks to:
- @shargan for Producer speed enhancements and the batch produce API proposal/implementation.
- @vineet-rh and other contributors for constant feedback on Consumer problems, leading to the refactor mentioned above.

0.3.1 (2017-09-19)

- Added AIOKafkaProducer.flush() method. (PR #209 by @vineet-rh)
- Fixed a bug with uvloop involving float("inf") for timeout. (PR #210 by dmitry-moroz)
- Changed test runner to allow running tests on OSX. (PR #213 by @shargan)

0.3.0 (2017-08-17)

- Moved all public structures and errors to the aiokafka namespace. You will no longer need to import from the kafka namespace.
- Changed ConsumerRebalanceListener to support either function or coroutine for on_partitions_assigned and on_partitions_revoked callbacks. (PR #190 by @ask)
- Added support for offsets_for_times, beginning_offsets, end_offsets APIs. (issue #164)
- Coordinator requests are now sent using a separate socket. Fixes slow commit issue. (issue #137, issue #128)
- Added seek_to_end, seek_to_beginning APIs. (issue #154)
- Updated documentation to provide a more useful usage guide on both the Consumer and Producer interface.

0.2.3 (2017-07-23)

- Fixed retry problem in Producer, when the buffer is not reset to 0 offset. Thanks to @ngavrysh for the fix in the Tubular/aiokafka fork. (issue #184)
- Fixed how Producer handles retries on Leader node failure. It just did not work before... Thanks to @blugowski for the help in locating the problem. (issue #176, issue #173)
- Fixed degradation in v0.2.2 on Consumer with no group_id. (issue #166)

0.2.2 (2017-04-17)

- Reconnect after KafkaTimeoutException. (PR #149 by @Artimi)
- Fixed compacted topic handling. It could skip messages if those were compacted (issue #71)
- Fixed old issue with new topics not adding to subscription on pattern (issue #46)
- Another fix for Consumer race condition on JoinGroup. This forces the Leader to wait for new metadata before assigning partitions. (issue #118)
- Changed metadata listener in Coordinator to avoid 2 rejoins in a rare condition (issue #108)
- getmany will not return 0 results until we hit the timeout. (issue #117)

Big thanks to @Artimi for pointing out several of those issues.

0.2.1 (2017-02-19)

- Add a check to wait for topic autocreation in Consumer, instead of raising UnknownTopicOrPartitionError (PR #92 by fabregas)
- Consumer now stops consumption after the consumer.stop() call. Any new get* calls will result in ConsumerStoppedError (PR #81)
- Added exclude_internal_topics option for Consumer (PR #111)
- Better support for pattern subscription when used with group_id (part of PR #111)
- Fix for Consumer subscribe and JoinGroup race condition (issue #88). Coordinator will now notice subscription changes during rebalance and will join the group again. (PR #106)
- Changed logging messages according to KAFKA-3318. Now INFO level should be less messy and more informative. (PR #110)
- Add support for connections_max_idle_ms config (PR #113)

0.2.0 (2016-12-18)

- Added SSL support. (PR #81 by Drizzt1991)
- Fixed UnknownTopicOrPartitionError error on first message for an autocreated topic (PR #96 by fabregas)
- Fixed next_record recursion (PR #94 by fabregas)
- Fixed Heartbeat fail if no consumers (PR #92 by fabregas)
- Added docs addressing kafka-python and aiokafka differences (PR #70 by Drizzt1991)
- Added max_poll_records option for Consumer (PR #72 by Drizzt1991)
- Fix kafka-python typos in docs (PR #69 by jeffwidman)
- Topics and partitions are now randomized on each Fetch request (PR #66 by Drizzt1991)

0.1.4 (2016-11-07)

- Bumped kafka-python version to 1.3.1 and Kafka to 0.10.1.0.
- Fixed auto version detection, to correctly handle 0.10.0.0 version
- Updated Fetch and Produce requests to use v2 with v0.10.0 message format on brokers. This allows a timestamp to be associated with messages.
- Changed lz4 compression framing, as it was changed due to KIP-57 in the new message format.
- Minor refactorings

Big thanks to @fabregas for the hard work on this release (PR #60)

0.1.3 (2016-10-18)

- Fixed bug with infinite loop on heartbeats with autocommit=True. #44
- Bumped kafka-python to version 1.1.1
- Fixed docker test runner with multiple interfaces
- Minor documentation fixes

0.1.2 (2016-04-30)

- Added Python3.5 usage example to docs
- Don't raise retriable exceptions in 3.5's async for iterator
- Fix Cancellation issue with producer's send_and_wait method

0.1.1 (2016-04-15)

- Fix packaging issues. Removed unneeded files from package.

0.1.0 (2016-04-15)

- Initial release
- Added full support for Kafka 0.9.0. Older Kafka versions are not tested. |
aiokafka-commit-commit | asyncio client for Kafka, forked from aio-libs/aiokafka. This project will be removed in the future when aio-libs/aiokafka fixes the commit issue. (The remainder of this description is identical to the aiokafka-commit entry above.) |
aio-kafka-daemon | No description available on PyPI. |
aiokafka_rpc | No description available on PyPI. |
aiokatcp | aiokatcp is an implementation of the katcp protocol based around the Python asyncio system module. It requires Python 3.7 or later. It is loosely inspired by the Python 2 bindings, but has a much narrower scope.
The current implementation provides both client and server APIs. It only supports katcp version 5, and does not support a number of features that are marked deprecated in version 5.
Full documentation can be found on readthedocs.
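For a flavour of the client API, a minimal sketch; the Client.connect/request names follow the package's documentation as I recall it, but the port and request name here are illustrative assumptions:

    import asyncio
    import aiokatcp

    async def main():
        # connect to a katcp server (host and port are placeholders)
        client = await aiokatcp.Client.connect('localhost', 4280)
        try:
            # issue a katcp request; returns the reply and any inform messages
            reply, informs = await client.request('help')
        finally:
            client.close()
            await client.wait_closed()

    asyncio.run(main()) |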
aio-kavenegar | No description available on PyPI. |
aiokdb | Python asyncio connector to KDB. Pure python, so does not depend on the k.h bindings or kdb shared objects, or numpy/pandas. Fully type hinted to comply with PEP-561. No non-core dependencies, and tested on Python 3.9 - 3.11.

Peer review & motivation

qPython is a widely used library for this task and it maps objects to Pandas Dataframes, which might be more suitable for the majority of applications. This library takes a different approach and aims to replicate using the KDB C-library functions, i.e. being 100% explicit about KDB types. It was built working from the publically documented Serialization Examples and C API for kdb+ pages. Users might also need to be familiar with k.h.

A simple example:

    from aiokdb.socket import khpu

    # run ./q -p 12345 &
    h = khpu("localhost", 12345, "kdb:pass")
    result = h.k("2.0+3.0")
    # if the remote returns a Q Exception, this gets raised, unless khpu(..., raise_krr=False)
    assert result.aF() == 5.0

The result object is a K-like Python object (a KObj), having the usual signed integer type available as result.type. Accessors for the primitive types are prefixed with an "a" and check at runtime that the accessor is appropriate for the stored type (.aI(), .aJ(), .aH(), .aF() etc.). Atoms store their value to a bytes object irrespective of the type, and encode/decode on demand. Atomic values can be set with .i(3), .j(12), .ss("hello").

Arrays are implemented with subtypes that use Python's native arrays module for efficient array types. The MutableSequence arrays are returned using the usual array accessor functions .kI(), .kB(), .kS() etc.

Serialisation is handled by b9, which returns a python bytes, and d9, which takes a bytes and returns a K-object.

Atoms are created by ka, kb, ku, kg, kh, ki, kj, ke, kf, kc, ks, kt, kd, kz, ktj. Lists are created with ktn and knk, dictionaries with xd, and tables with xt. Python manages garbage collection: none of the refcounting primitives exist, i.e. k.r and the functions r1, r0 and m9, setm are absent.
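Putting those pieces together, a serialization round-trip sketch; the constructors and b9/d9 are named in the text above, but the exact import path is an assumption here:

    from aiokdb import kj, b9, d9   # import path assumed; names taken from the text above

    atom = kj(42)                   # create a long atom
    blob = b9(atom)                 # serialize the K object to bytes
    restored = d9(blob)             # deserialize bytes back into a K object
    assert restored.aJ() == 42      # read it back with the long accessor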
RPC

Client support using python asyncio is built into the package, and uses prompt_toolkit for line editing:

    $ pip install aiokdb prompt_toolkit
    $ ./q -p 12345 &
    $ python -m aiokdb.cli --host localhost --port 12345
    (eval)> ([s:7 6 0Nj] x:3?0Ng; y:2)
    s| x                                    y
    -| ---------------------------------------
    7| 409031f3-b19c-6770-ee84-6e9369c98697 2
    6| 52cb20d9-f12c-9963-2829-3c64d8d8cb14 2
     | cddeceef-9ee9-3847-9172-3e3d7ab39b26 2
    (eval)> \\

Tests

The unit tests in test/test_rpc.py will use a real KDB binary to test against (over RPC) if you set KDB_PYTEST_SERVICE to a URL of the form kdb://user:password@hostname:port; otherwise that test is skipped and they are self contained.

- Formatting with ruff check .
- Formatting with black .
- Imports with isort --check --profile black .
- Check type annotations with mypy --strict .
- Run pytest . in the root directory |
aiokea | Failed to fetch description. HTTP Status Code: 404 |
aiokeepin | Async Python Wrapper for KeepinCRM API

This is an async Python wrapper for the KeepinCRM API that allows you to interact with the API using simple and convenient methods. The wrapper provides a KeepinClient class with async methods for making HTTP requests and retrieving data from the API.

Installation

You can install the library using pip:

    pip install aiokeepin

Usage

To use the KeepinCRM async Python wrapper, import the KeepinClient class from the library:

    from aiokeepin import KeepinClient

Initializing the Client

Create an instance of the KeepinClient class by providing your API key:

    client = KeepinClient(api_key='YOUR_API_KEY')

Making API Requests

The KeepinClient instance provides async methods that correspond to the different HTTP request methods (GET, POST, PATCH, PUT, DELETE). Each method returns a dictionary containing the response from the API.

GET Request Example

    response = await client.get('/clients')
    print(response)

POST Request Example

    data = {
        "email": "[email protected]",
        "company": "Company name or full name",
        "lead": True,
        "source_id": 5,
        "status_id": 1,
        "phones": ["+380000000001"],
        "tag_names": ["VIP"],
        "contacts_attributes": [
            {
                "fullname": "Last name, first name, patronymic",
                "phones": ["+380000000002"],
                "custom_fields": [
                    {"name": "Position", "value": "Director"}
                ]
            }
        ]
    }
    response = await client.post('/clients', data=data)
    print(response)

PATCH Request Example

    data = {"email": "[email protected]"}
    response = await client.patch('/clients/{client_id}', data=data)
    print(response)

PUT Request Example

    data = {"email": "[email protected]"}
    response = await client.put('/clients/{client_id}', data=data)
    print(response)

DELETE Request Example

    response = await client.delete('/clients/{client_id}')
    print(response)

GET Paginated Items Example

    response = await client.get_paginated_items('/clients')

Error Handling

In case of an error response from the KeepinCRM API, an exception will be raised. The exceptions provided by the aiokeepin library inherit from the base KeepinException class. There are several specific exceptions available for different types of errors:

- KeepinStatusError: raised for non-2xx status codes. It contains the status_code and response attributes, providing information about the error.
- InvalidAPIKeyError: raised specifically for an invalid API key.
- ValidationError: raised for invalid data.
- NotFoundError: raised when the requested resource is not found.
- InternalServerError: raised for internal server errors.

When making API requests, you can handle exceptions using try-except blocks to capture and handle specific types of errors. Here's an example:

    from aiokeepin.exceptions import KeepinStatusError, InvalidAPIKeyError

    try:
        response = await client.get('/nonexistent_endpoint')
    except InvalidAPIKeyError:
        print("Invalid API key provided.")
    except KeepinStatusError as e:
        print(f"Error: {e.status_code} - {e.response}")

You can customize the exception handling based on your specific needs. By catching the appropriate exceptions, you can handle different error scenarios and provide appropriate error messages or take specific actions. Make sure to refer to the KeepinCRM API documentation for more details on the possible error responses and their corresponding status codes.

Documentation

For detailed information on the KeepinCRM API, refer to the official API documentation: KeepinCRM API Documentation

License

This project is licensed under the MIT License. See the LICENSE file for more information. |
aiokef | Asyncio Python API for KEF speakers.

Supported: KEF LS50 Wireless (tested with latest firmware of 19-11-2019: p6.3001902221.105039422 and older firmware: p6.2101809171.105039422)
Untested: KEF LSX

Supported features

- Get and set volume
- Mute and unmute
- Get and set source input
- Turn speaker on and off
- Invert L/R to R/L
- Play and pause (only works with Wifi and Bluetooth)
- Previous and next track (only works with Wifi and Bluetooth)
- Set the standby time to infinite, 20 minutes, or 60 minutes
- Automatically connects and disconnects when the speaker goes online/offline
- Control all DSP settings!

Use in Home Assistant

See basnijholt/media_player.kef.

Install

    pip install aiokef

Discussion

See this Home Assistant discussion thread where the creation of the KEF speakers is discussed.

License

MIT License

Contributions

- Bas Nijholt
- Robin Grönberg (@Gronis)
- Bastian Beggel (@bastianbeggel)
- chimpy (@chimpy)
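For illustration, a minimal usage sketch based on the feature list above; the AsyncKefSpeaker class name and the method names are assumptions here, so check the project README for the exact API:

    import asyncio
    from aiokef import AsyncKefSpeaker  # class name assumed, not confirmed by this description

    async def main():
        speaker = AsyncKefSpeaker('192.168.1.200')  # speaker IP is a placeholder
        await speaker.turn_on()            # method names assumed from the feature list
        await speaker.set_source('Wifi')
        await speaker.set_volume(0.5)      # volume as a 0-1 fraction is an assumption
        await speaker.mute()

    asyncio.run(main()) |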
aioketraapi | aiohttp-based Python Client SDK for controlling Ketra lighting products. Based on Ketra API documentation available here.

This Python package is automatically generated by the OpenAPI Generator project:

- API version: 1.4.0
- Package version: 1.0.0
- Build package: org.openapitools.codegen.languages.PythonClientCodegen

Requirements

Python 3.6+

Installation & Usage

pip install

You can install from pypi using:

    pip install aioketraapi

Or, you can install directly from github using:

    pip install git+https://github.com/s4v4g3/aio-ketra-api.git

Note that in either case you may need to run pip with root permission: sudo pip install aioketraapi or sudo pip install git+https://github.com/s4v4g3/aio-ketra-api.git

Then import the package:

    import aioketraapi

Setuptools

Install via Setuptools:

    python setup.py install --user

(or sudo python setup.py install to install the package for all users)

Getting Started

In order to use this package you will need to contact Ketra Support and get assigned a client_id and client_secret.

Authentication

You will only need to obtain a single access token for a given user account; at present the access tokens do not expire.

    import asyncio
    from aioketraapi.oauth import OAuthTokenResponse

    async def main():
        client_id = 'YOUR CLIENT ID'
        client_secret = 'YOUR CLIENT SECRET'
        username = '<your Ketra Design Studio username here>'
        password = '<your Ketra Design Studio password here>'
        oauth_token = await OAuthTokenResponse.request_token(
            client_id, client_secret, username, password)
        access_token = oauth_token.access_token
        print(f"My access token is {access_token}")

    if __name__ == '__main__':
        asyncio.run(main())

Usage

    import asyncio
    from aioketraapi.n4_hub import N4Hub
    from aioketraapi.models.lamp_state import LampState

    async def main():
        # Find the installation id for your installation by logging into
        # https://my.goketra.com and finding your installation in the list and
        # going to the "Details" page for your installation. The installation
        # id is displayed in the URL of this page; for example, a URL of
        # https://my.goketra.com/installations/0fbcada7-b318-4d29-858c1ea3ac1fd5cb
        # would indicate an installation id of "0fbcada7-b318-4d29-858c1ea3ac1fd5cb"
        installation_id = 'my-installation-id'

        # Use the access token received from the authentication step above
        access_token = 'access token received from authentication step'

        # Set to False to access your installation directly through your local
        # network, set to True to access your installation through the cloud
        # (requires remote access to be enabled in Design Studio for the installation)
        use_cloud = False
        hub = await N4Hub.get_hub(installation_id, access_token, use_cloud=use_cloud)

        # get the keypads in the installation and print all button names
        keypads = await hub.get_keypads()
        for keypad in keypads:
            for button in keypad.buttons:
                print(button.scene_name)
                # activate the "Natural" button on the "Kitchen" keypad
                if button.scene_name == "Kitchen Natural":
                    await button.activate()

        # get the groups in the installation and print all group names
        groups = await hub.get_groups()
        for group in groups:
            print(group.name)
            # Control the "Office" group
            if group.name == "Office":
                await group.set_state(
                    LampState(transition_time=13000, cct=2000, brightness=0.95, power_on=True))

    if __name__ == '__main__':
        asyncio.run(main())

License

The library is available as open source under the terms of the MIT License. |
aiokevoplus | This started as a fork of https://github.com/Bahnburner/pykevoplus but at this point is pretty much a rewrite.

This library has been converted to be compatible with asyncio and also to use the latest version of the Kevo API, including support for realtime updates via websockets.

Usage

    import asyncio
    from aiokevoplus import KevoApi

    def status_changed(lock):
        print("Status changed for " + lock.name)

    async def main():
        api = KevoApi()
        try:
            await api.login("[email protected]", "password123")
            api.register_callback(status_changed)
            await api.websocket_connect()
            locks = api.get_locks()
            for lock in locks:
                lock.lock()
        except Exception as e:
            print("Something went wrong", e)

    asyncio.run(main()) |
aiokeycloak | No description available on PyPI. |
aiokeydb | Unified synchronous and asynchronous Python client for KeyDB and Redis.

It implements the majority of redis-py API methods and is fully compatible with redis-py API methods, while also providing a unified client for both sync and async methods. Additionally, newer APIs such as KeyDBClient are strongly typed to provide a more robust experience.

Usage

aiokeydb can be installed from pypi using pip or from source.

Installation

    # Install from pypi
    pip install aiokeydb

    # Install from source
    pip install git+https://github.com/trisongz/aiokeydb-py.git

Examples

Below are some examples of how to use aiokeydb. Additional examples can be found in the tests directory.

    import time
    import asyncio
    import uuid
    from aiokeydb import KeyDBClient

    # The session can be explicitly initialized, or will be lazily initialized
    # on first use through environment variables, with all params being
    # prefixed with `KEYDB_`
    keydb_uri = "keydb://localhost:6379/0"

    # Initialize the Unified Client
    KeyDBClient.init_session(
        uri = keydb_uri,
    )

    # Cache the results of these functions.
    # cachify works for both sync and async functions and has many params to
    # customize the caching behavior, and supports both `redis` and `keydb`
    # backends as well as `api` frameworks such as `fastapi` and `starlette`

    @KeyDBClient.cachify()
    async def async_fibonacci(number: int):
        if number == 0: return 0
        elif number == 1: return 1
        return await async_fibonacci(number - 1) + await async_fibonacci(number - 2)

    @KeyDBClient.cachify()
    def fibonacci(number: int):
        if number == 0: return 0
        elif number == 1: return 1
        return fibonacci(number - 1) + fibonacci(number - 2)

    async def test_fib(n: int = 100, runs: int = 10):
        # Test that both results are the same.
        sync_t, async_t = 0.0, 0.0
        for i in range(runs):
            t = time.time()
            print(f'[Async - {i}/{runs}] Fib Result: {await async_fibonacci(n)}')
            tt = time.time() - t
            print(f'[Async - {i}/{runs}] Fib Time: {tt:.2f}s')
            async_t += tt

            t = time.time()
            print(f'[Sync  - {i}/{runs}] Fib Result: {fibonacci(n)}')
            tt = time.time() - t
            print(f'[Sync  - {i}/{runs}] Fib Time: {tt:.2f}s')
            sync_t += tt

        print(f'[Async] Cache Average Time: {async_t / runs:.2f}s | Total Time: {async_t:.2f}s')
        print(f'[Sync ] Cache Average Time: {sync_t / runs:.2f}s | Total Time: {sync_t:.2f}s')

    async def test_setget(runs: int = 10):
        # By default, the client utilizes `pickle` to serialize and
        # deserialize objects. This can be changed by setting the `serializer`
        sync_t, async_t = 0.0, 0.0
        for i in range(runs):
            value = str(uuid.uuid4())
            key = f'async-test-{i}'
            t = time.time()
            await KeyDBClient.async_set(key, value)
            assert await KeyDBClient.async_get(key) == value
            tt = time.time() - t
            print(f'[Async - {i}/{runs}] Get/Set: {key} -> {value} = {tt:.2f}s')
            async_t += tt

            value = str(uuid.uuid4())
            key = f'sync-test-{i}'
            t = time.time()
            KeyDBClient.set(key, value)
            assert KeyDBClient.get(key) == value
            tt = time.time() - t
            print(f'[Sync  - {i}/{runs}] Get/Set: {key} -> {value} = {tt:.2f}s')
            sync_t += tt

        print(f'[Async] GetSet Average Time: {async_t / runs:.2f}s | Total Time: {async_t:.2f}s')
        print(f'[Sync ] GetSet Average Time: {sync_t / runs:.2f}s | Total Time: {sync_t:.2f}s')

    async def run_tests(fib_n: int = 100, fib_runs: int = 10, setget_runs: int = 10):
        # You can explicitly wait for the client to be ready
        # Sync version: KeyDBClient.wait_for_ready()
        await KeyDBClient.async_wait_for_ready()

        # Run the tests
        await test_fib(n = fib_n, runs = fib_runs)
        await test_setget(runs = setget_runs)

        # Utilize the current session
        await KeyDBClient.async_set('async_test_0', 'test')
        assert await KeyDBClient.async_get('async_test_0') == 'test'
        KeyDBClient.set('sync_test_0', 'test')
        assert KeyDBClient.get('sync_test_0') == 'test'

        # you can access the `KeyDBSession` object directly, which mirrors
        # the APIs in `KeyDBClient`
        await KeyDBClient.session.async_set('async_test_1', 'test')
        assert await KeyDBClient.session.async_get('async_test_1') == 'test'
        KeyDBClient.session.set('sync_test_1', 'test')
        assert KeyDBClient.session.get('sync_test_1') == 'test'

        # The underlying client can be accessed directly if the desired api
        # methods aren't mirrored:
        #   KeyDBClient.keydb
        #   KeyDBClient.async_keydb
        # Since encoding / decoding is not handled by the client, you must
        # encode / decode the data yourself
        await KeyDBClient.async_keydb.set('async_test_2', b'test')
        assert await KeyDBClient.async_keydb.get('async_test_2') == b'test'
        KeyDBClient.keydb.set('sync_test_2', b'test')
        assert KeyDBClient.keydb.get('sync_test_2') == b'test'

        # You can also explicitly close the client. However, this closes the
        # connection pool and will terminate all connections. This is not
        # recommended unless you are explicitly closing the client.
        # Sync version: KeyDBClient.close()
        await KeyDBClient.aclose()

    asyncio.run(run_tests())

Additionally, you can use the previous APIs that are expected to be present:

    from aiokeydb import KeyDB, AsyncKeyDB, from_url

    sync_client = KeyDB()
    async_client = AsyncKeyDB()

    # Alternative methods
    sync_client = from_url('keydb://localhost:6379/0')
    async_client = from_url('keydb://localhost:6379/0', asyncio = True)

Setting the Global Client Settings

The library is designed to be explicit and not have any global state. However, you can set the global client settings by using the KeyDBClient.configure method. This is useful if you are using the KeyDBClient class directly, or if you are using the KeyDBClient class in a library that you are developing.

For example, if you initialize a session with a specific uri, the global client settings will still inherit from the default settings.

    from aiokeydb import KeyDBClient
    from lazyops.utils import logger

    keydb_uris = {
        'default': 'keydb://127.0.0.1:6379/0',
        'cache': 'keydb://localhost:6379/1',
        'public': 'redis://public.redis.db:6379/0',
    }

    sessions = {}

    # these will now be initialized
    for name, uri in keydb_uris.items():
        sessions[name] = KeyDBClient.init_session(
            name = name,
            uri = uri,
        )
        logger.info(f'Session {name}: uri: {sessions[name].uri}')

    # however if you initialize another session, it will use the global
    # environment vars
    sessions['test'] = KeyDBClient.init_session(
        name = 'test',
    )
    logger.info(f'Session test: uri: {sessions["test"].uri}')

By configuring the global settings, any downstream sessions will inherit the global settings.

    from aiokeydb import KeyDBClient
    from lazyops.utils import logger

    default_uri = 'keydb://public.host.com:6379/0'
    keydb_dbs = {
        'cache': {
            'db_id': 1,
        },
        'db': {
            'uri': 'keydb://127.0.0.1:6379/0',
        },
    }

    KeyDBClient.configure(
        url = default_uri,
        debug_enabled = True,
        queue_db = 1,
    )

    # now any sessions that are initialized will use the global settings
    sessions = {}

    # Initialize the first default session, which should utilize the `default_uri`
    KeyDBClient.init_session()

    for name, config in keydb_dbs.items():
        sessions[name] = KeyDBClient.init_session(
            name = name,
            **config
        )
        logger.info(f'Session {name}: uri: {sessions[name].uri}')

KeyDB Worker Queues

Released since v0.1.1. KeyDB Worker Queues is a simple, fast, and reliable queue system for KeyDB. It is designed to be used in a distributed environment, where multiple KeyDB instances are used to process jobs. It is also designed to be used in a single instance environment, where a single KeyDB instance is used to process jobs.

    import asyncio
    from aiokeydb import KeyDBClient
    from aiokeydb.queues import TaskQueue, Worker
    from lazyops.utils import logger

    # Configure the KeyDB Client - the default keydb client will use db = 0,
    # and the queue uses db = 2 so that it doesn't conflict with other usage.
    # By configuring it here, you can explicitly set the db to use
    keydb_uri = "keydb://127.0.0.1:6379/0"

    # Configure the Queue to use db = 1 instead of 2
    KeyDBClient.configure(
        url = keydb_uri,
        debug_enabled = True,
        queue_db = 1,
    )

    @Worker.add_cronjob("*/1 * * * *")
    async def test_cron_task(*args, **kwargs):
        logger.info("Cron task ran")
        await asyncio.sleep(5)

    @Worker.add_function()
    async def test_task(*args, **kwargs):
        logger.info("Task ran")
        await asyncio.sleep(5)

    async def run_tests():
        queue = TaskQueue("test_queue")
        worker = Worker(queue)
        await worker.start()

    asyncio.run(run_tests())

Requirements

- deprecated>=1.2.3
- packaging>=20.4
- importlib-metadata >= 1.0; python_version < "3.8"
- typing-extensions; python_version < "3.8"
- async-timeout>=4.0.2
- pydantic
- anyio
- lazyops

Major Changes

v0.0.8 -> v0.0.11:

- Fully migrates away from aioredis to aiokeydb. Inherits all API methods from redis-py to enforce compatibility, since aioredis is deprecated going forward.
- This fork of redis-py has some modified semantics, while attempting to replicate all the API methods of redis-py to avoid compatibility issues with underlying libraries that require a pinned redis version.
- Notably, all async classes and methods are prefixed by Async to avoid name collisions with the sync versions.

v0.0.11 -> v0.1.0:

- Migration of aiokeydb.client -> aiokeydb.core and aiokeydb.asyncio.client -> aiokeydb.asyncio.core. However, these have been aliased to their original names to avoid breaking changes.
- Creates a unified API available through the new KeyDBClient class, which creates sessions; these are KeyDBSession objects that inherit from KeyDB, alongside the AsyncKeyDBClient class that inherits from the AsyncKeyDB class. |
aiokilogram | Convenience tools and wrappers for aiogram, the asynchronous Telegram Bot API framework.

Installation

    pip install aiokilogram

Basic Examples

Class-Based Command Handlers

The aiokilogram toolkit is centered around the notion of class-based command handlers. Basically this means that you can group several commands as methods in a class and assign them to message and callback patterns at class level.

A simplistic bot will look something like this:

    import asyncio
    import os
    from aiokilogram.bot import KiloBot
    from aiokilogram.settings import BaseGlobalSettings
    from aiokilogram.handler import CommandHandler
    from aiokilogram.registration import register_message_handler

    class TestCommandHandler(CommandHandler):
        @register_message_handler(commands={'hello'})
        async def test_handler(self, event) -> None:
            await self.send_text(user_id=event.from_user.id, text=f'This is a test reply')

    def run_bot():
        bot = KiloBot(
            global_settings=BaseGlobalSettings(tg_bot_token=os.environ['TG_BOT_TOKEN']),
            handler_classes=[TestCommandHandler],
        )
        asyncio.run(bot.run())

    if __name__ == '__main__':
        run_bot()

For more info you can take a look at a boilerplate bot with buttons and a simple boilerplate bot. Set the TG_BOT_TOKEN env variable to run them.

Action Buttons

The package also provides a simplified mechanism of using buttons and binding handlers to their callbacks. This is useful when you want your buttons to contain a combination of several parameters and don't want to implement their serialization, deserialization and callback bindings each time.

The idea is simple.

Define an action (a CallbackAction subclass) using a combination of fields:

    from enum import Enum
    from aiokilogram.action import CallbackAction, StringActionField, EnumActionField

    class ActionType(Enum):
        show_recipe = 'show_recipe'
        like_recipe = 'like_recipe'

    class SingleRecipeAction(CallbackAction):
        action_type = EnumActionField(enum_cls=ActionType)
        recipe_title = StringActionField()

Create a page containing action buttons:

    from aiokilogram.page import ActionMessageButton, MessagePage, MessageBody, MessageKeyboard

    page = MessagePage(
        body=MessageBody(text='Main Recipe Menu'),
        keyboard=MessageKeyboard(buttons=[
            ActionMessageButton(
                text='Button Text',
                action=SingleRecipeAction(
                    action_type=ActionType.show_recipe,
                    recipe_title='Fantastic Menemen',
                ),
            ),
            # ...
        ])
    )

Send it as a response to some command in your handler class:

    await self.send_message_page(user_id=event.from_user.id, page=page)

Define and register a handler method for this action where you deserialize the action parameters and somehow use them in your logic:

    class MyHandler(CommandHandler):
        @register_callback_query_handler(action=SingleRecipeAction)
        async def do_single_recipe_action(self, query: types.CallbackQuery) -> None:
            action = SingleRecipeAction.deserialize(query.data)
            if 'soup' in action.recipe_title.lower():
                do_soup_stuff()  # whatever
            # ...

or you can be more precise and limit the binding to specific values of the action's fields:

    @register_callback_query_handler(
        action=SingleRecipeAction.when(action_type=ActionType.like_recipe),
    )

See the boilerplate bot with buttons. Set the TG_BOT_TOKEN env variable to run it.

Error handling

Generic error (exception) handling in bots can be implemented via ErrorHandlers at class or method level.

First, define an error handler:

    from aiokilogram.errors import DefaultErrorHandler

    class MyErrorHandler(DefaultErrorHandler):
        def make_message(self, err: Exception):
            # Any custom logic can go here.
            # This method can return either a `str`, a `MessagePage` or `None`.
            # In case of `None` no message is sent and the exception is re-raised.
            return 'This is my error message'

Then add it either to the message handler class:

    class MyCommandHandler(CommandHandler):
        error_handler = MyErrorHandler()

to handle errors in all methods registered via the register_message_handler and register_callback_query_handler decorators, or you can do it at method level:

    @register_message_handler(commands={'my_command'}, error_handler=MyErrorHandler())
    async def my_command_handler(self, event: types.Message) -> None:
        pass  # do whatever you do...

See the boilerplate bot with error handling. Set the TG_BOT_TOKEN env variable to run it.

Links

- Homepage on GitHub: https://github.com/altvod/aiokilogram
- Project's page on PyPi: https://pypi.org/project/aiokilogram/
- aiogram's homepage on GitHub: https://github.com/aiogram/aiogram
- Telegram Bot API: https://core.telegram.org/bots/api |
aiokinesis | AIOKinesis

Asyncio client library for AWS Kinesis.

Installation

    pip install aiokinesis

AIOKinesisProducer

Usage:

    import asyncio
    from aiokinesis import AIOKinesisProducer

    loop = asyncio.get_event_loop()

    async def send_message():
        producer = AIOKinesisProducer('my-stream-name', loop, region_name='us-east-1')
        await producer.start()

        await producer.send('partition-key', {'data': 'blah'})

        await asyncio.sleep(1)
        await producer.stop()

    loop.run_until_complete(send_message())

Limitations:
- Stopping the producer before all messages are sent will prevent in-flight messages from being sent
- AIOKinesis only supports one shard, so the producer is rate limited to 5 requests per rolling second

AIOKinesisConsumer

Usage:

    import asyncio
    from aiokinesis import AIOKinesisConsumer

    loop = asyncio.get_event_loop()

    async def get_messages():
        consumer = AIOKinesisConsumer('my-stream-name', loop, region_name='us-east-1')
        await consumer.start()
        try:
            async for message in consumer:
                print("Consumed message: ", message)
        except KeyboardInterrupt:
            await consumer.stop()

    loop.run_until_complete(get_messages())

Limitations:
- AIOKinesis only supports one shard, so the consumer is rate limited to 5 requests per rolling second |
aiokit | AioKit: routines for asynchronous programming. |
aio-kong | Async Python Client for Kong. Tested with kong v3.3.

Installation & Testing

To install the package:

    pip install aio-kong

To run tests, clone the repo and run make test.

Warning: if you don't have Kong or postgres running locally, run the services first:

    make services

Test certificates were generated using the command:

    openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -nodes -subj '/CN=localhost'

Client

The client can be imported via:

    from kong.client import Kong

In a coroutine:

    import json
    from kong.client import Kong

    async with Kong() as cli:
        services = await cli.services.get_list()
        print(json.dumps([s.data for s in services], indent=2))

By default the url is obtained from the "KONG_ADMIN_URL" environment variable, which defaults to http://127.0.0.1:8001.

The client has handlers for all Kong objects:

- cli.services - CRUD operations on services
- cli.routes - CRUD operations on routes
- cli.plugins - CRUD operations on plugins
- cli.consumers - CRUD operations on consumers
- cli.certificates - CRUD operations on TLS certificates
- cli.snis - CRUD operations on SNIs
- cli.acls - to list all ACLs

Apply a configuration

The client allows applying a configuration object to kong (see the sketch at the end of this entry):

    await cli.apply_json(config)

Command line tool

The library installs the kongfig command line tool for uploading kong configuration files:

    kongfig --yaml config.yaml
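For illustration, a configuration object passed to apply_json might look like the following; the exact schema is an assumption here, mirroring Kong's services/routes objects rather than a documented example:

    # hypothetical configuration shape, not taken from the project's docs
    config = {
        "services": [
            {
                "name": "my-service",
                "url": "http://backend:8080",
                "routes": [
                    {"name": "my-route", "paths": ["/api"]},
                ],
            },
        ],
    }
    await cli.apply_json(config) |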
aiokonstsmide | An asynchronous library to communicate with Konstsmide Bluetooth string lights.

Documentation

Supported features

- Connect with a device and send the password as the pairing mechanism
- Turn the device on/off
- Control the device's function, brightness and flash speed
- Create timers on the device to turn on/off the device or specific functions at specific times and weekdays

Installation

    $ pip install aiokonstsmide

Usage

    async with aiokonstsmide.Device("11:22:33:44:55:66") as dev:
        await dev.on()

Also check the examples folder.
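An expanded sketch of the documented usage above, runnable as a script; off() is an assumption mirroring the documented on(), and the device address is a placeholder:

    import asyncio
    import aiokonstsmide

    async def main():
        # the Bluetooth address below is a placeholder for your device's address
        async with aiokonstsmide.Device("11:22:33:44:55:66") as dev:
            await dev.on()           # documented in the feature list
            await asyncio.sleep(10)
            await dev.off()          # assumed counterpart to the documented on()

    asyncio.run(main()) |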
aiokraken | No description available on PyPI. |
aio-kraken-ws | A module to collect ohlc candles from Kraken using WebSockets that is asyncio friendly! Looking for automated trading tools? Botcrypto may interest you.

Key features

- Subscribe to kraken data using a single WebSocket.
- Trigger a callback that is a coroutine on each new closed candle from kraken.
- Easy subscribe/unsubscribe to datasets, i.e. [(pair, Unit Time)], ex: [('XBT/EUR', 1)].
- Callback is regularly triggered at each end of the UT intervals, whatever the number of data received from kraken.

Getting started

Install

    pip install aio-kraken-ws

Usage

    # check tests/learning/log_to_file.py for a complete example
    from aio_kraken_ws import KrakenWs  # import path assumed from the package name

    async def callback(pair, interval, timestamp, o, h, l, c, v):
        """ A coroutine handling new candles.

        :param str pair:
        :param int interval: time in minutes
        :param int timestamp: candle open timestamp.
        :param float o: open price
        :param float h: high price
        :param float l: low price
        :param float c: close price
        :param float v: volume
        """
        with open("candles.txt", "a+") as file:
            file.write(f"[{pair}:{interval}] ({timestamp},{o},{h},{l},{c},{v})\n")

    kraken_ws = await KrakenWs.create(callback)

    # subscribe to some datasets
    kraken_ws.subscribe([("XBT/EUR", 1), ("ETH/EUR", 5)])

The callback function is called for each dataset at the end of each dataset's unit time interval.

E.g. if subscription starts at 4h42.05 for the dataset ("XBT/EUR", 1), then callback is triggered at 4h43.00, at 4h44.00, at 4h45.00, etc... For ("XBT/EUR", 60), it would be at 5h00.00, at 6h00.00, etc... It's possible to get at most 10ms delay between the exact UT interval ending and the actual datetime of the call.

If no new data were received from Kraken during an interval, the callback is triggered with the latest known close price and v=0, as described in the following example.

E.g.

    kraken_ws.subscribe([("XBT/EUR", 1)])
    # time.time() = 120
    await callback("XBT/EUR", 1, 60, 42.0, 57.0, 19.0, 24.0, 150.0)
    # time.time() = 180
    await callback("XBT/EUR", 1, 120, 19.0, 24.0, 8.0, 10.0, 13.0)
    # time.time() = 240 : no data received in 60s, i.e. no activity
    await callback("XBT/EUR", 1, 180, 10.0, 10.0, 10.0, 10.0, 0.0)

Error management

- An exception raised by the callback will be logged and it won't stop the streams
- If kraken sends an error message, an ERROR log is emitted with the kraken payload
- If kraken sends 'Subscription ohlc interval not supported', the related dataset is automatically unsubscribed

Warning

The callback should take less than a minute to process. If the callback takes more than a minute, a warning is emitted and you may lose market data.

The Kraken WebSocket server manages 20 subscriptions maximum per connection. Above 20 subscriptions, you may not receive all desired data. Hopefully, aio-kraken-ws manages this limitation for you! A new websocket connection is opened every 20 subscriptions. Moreover, after 24h a subscription seems to expire and no more market data is received. To ensure we do not lose the stream of market data, aio-kraken-ws automatically reconnects and re-subscribes to the datasets every 5 minutes.

Tests

You can find a working example of KrakenWs in tests/learning/log_to_file.py.

Run tests locally

Clone the repo and install requirements:

    pip install -e .[test]

Run the suite tests:

    # unit tests - no call to kraken - fast
    pytest --cov=aio_kraken_ws --cov-report= -v tests/unit

    # integration tests - actual kraken subscription - slow
    pytest --cov=aio_kraken_ws --cov-append -v -n 8 tests/integration

Changelog

See https://cdlr75.gitlab.io/aio-kraken-ws/CHANGELOG.html |
aio-krpc-server | Asyncio Kademlia RPC-serverKademlia protocol based RPC-server.Example
import asyncio

loop = asyncio.get_event_loop()

udp = UDPServer()
udp.run("0.0.0.0", 12346, loop=loop)

app = KRPCServer(server=udp, loop=loop)

@app.callcack(arg_schema={"id": {"type": "integer", "required": True}})
def ping(addr, id):
    print(addr, id)
    return {"id": id}

if __name__ == '__main__':
    loop.run_forever() |
aiokts | Tuned asyncio and aiohttp classes for simpler creation of powerful APIs |
aiokubemq | AIOKubeMQAboutAsynchronous Python KubeMQ Client.Basically an optimized reimplementation of https://github.com/kubemq-io/kubemq-Python with asyncio and typed methods.NoteThis repository is mirrored from Gitlab to Github.
Most features like Issues, MRs/PRs etc. are disabled on Github, please use the Gitlab repository for theseFeaturesModern Pythonic API using asyncioTypedPretty lowlevel, wrappers can be built as seen fitHow to use
import aiokubemq

async def main():
    async with aiokubemq.KubeMQClient("client-id", "host:50000") as client:
        result = await client.ping()
        print(f"We're connected to host '{result.Host}'")
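As a sketch of the kind of thin wrapper this low-level client invites you to build (the helper name is illustrative; only ping() and result.Host are taken from the example above):
import asyncio

async def wait_until_reachable(client) -> str:
    # Retry ping() until the broker answers, then return its host name.
    while True:
        try:
            result = await client.ping()
            return result.Host
        except Exception:
            await asyncio.sleep(1)
see the examples folder for more |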
aiokubernetes | Python client for kuberneteshttp://kubernetes.io/ |
aiokwikset | aiokwikset - Python interface for the Kwikset APIPython library for communicating with the Kwikset Smart Locks via the Kwikset cloud API.WARNINGThis library only works if you have signed up for and created a home/had a home shared with you from the Kwikset Application.IOSAndroidNOTE:This library is community supported, please submit changes and improvements.This is a very basic interface, not well thought out at this point, but works for the use cases that initially prompted splitting this out.Supportslocking/unlockingretrieving basic informationInstallation
pip install aiokwikset
Examples
import asyncio

from aiokwikset import API

async def main() -> None:
    """Run!"""
    # initialize the API
    api = API("<EMAIL>")

    # start auth
    # <CODE_TYPE> = [phone, email]
    pre_auth = await api.authenticate('<PASSWORD>', '<CODE_TYPE>')

    # MFA verification
    await api.verify_user(pre_auth, input("Code: "))

    # Get user account information:
    user_info = await api.user.get_info()

    # Get the homes
    homes = await api.user.get_homes()

    # Get the devices for the first home
    devices = await api.device.get_devices(homes[0]['homeid'])

    # Get information for a specific device
    device_info = await api.device.get_device_info(devices[0]['deviceid'])

    # Lock the specific device
    lock = await api.device.lock_device(device_info, user_info)

    # Set led status
    led = await api.device.set_ledstatus(device_info, "false")

    # Set audio status
    audio = await api.device.set_audiostatus(device_info, "false")

    # Set secure screen status
    screen = await api.device.set_securescreenstatus(device_info, "false")

asyncio.run(main())
Known Issuesnot all APIs supported |
aiolago | aiolagoUnofficial Asyncronous Python Client for LagoLatest Version:Official ClientFeaturesUnified Asyncronous and Syncronous Python Client forLagoSupports Python 3.6+Strongly Typed withPydanticIncludes Function Wrappers to quickly add to existing projectsUtilizes Environment Variables for ConfigurationInstallation# Install from PyPIpipinstallaiolago# Install from sourcepipinstallgit+https://github.com/GrowthEngineAI/aiolago.gitUsageWIP - Simple Usage ExampleimportasynciofromaiolagoimportLagofromaiolago.utilsimportlogger"""Environment Vars that map to Lago.configure:all vars are prefixed with LAGO_LAGO_API_KEY (apikey): strLAGO_URL (url): str takes precedence over LAGO_SCHEME | LAGO_HOST | LAGO_PORTLAGO_SCHEME (scheme): str - defaults to 'http://'LAGO_HOST (host): str - defaults to NoneLAGO_PORT (port): int - defaults to 3000LAGO_API_PATH (api_path): str - defaults to '/api/v1'LAGO_TIMEOUT (timeout): int - defaults to 10LAGO_IGNORE_ERRORS (ignore_errors): bool = defaults to False"""Lago.configure(api_key='...',url='',)customer_id="gexai_demo"metric_name="Demo API Requests"metric_id="demo_requests"plan_name="Demo Plan"plan_id="demo_plan"asyncdefcreate_demo_customer():customer=awaitLago.customers.async_create(external_id=customer_id,email=f"{customer_id}@growthengineai.com",billing_configuration={"tax_rate":8.25,},)logger.info(f'Created Customer:{customer}')returncustomerflat_rate=0.021volume_rate=0.025base_rate=0.023rates={'volume':[{'from_value':0,'to_value':2500,'flat_amount':'0','per_unit_amount':str(round(volume_rate,5)),},# 20% discount{'from_value':2501,'to_value':10000,'flat_amount':'0','per_unit_amount':str(round(volume_rate*.8,5)),},# 50% discount{'from_value':10001,'flat_amount':'0','per_unit_amount':str(round(volume_rate*.5,5)),},],'graduated':[{'to_value':2500,'from_value':0,'flat_amount':'0','per_unit_amount':str(round(base_rate,5)),},# 25% discount{'from_value':2501,'flat_amount':'0','per_unit_amount':str(round(base_rate*.75,5)),},],# 'standard': str(round(flat_rate, 5)),}defcreate_charge(metric_id:str,name:str='volume')->Charge:# https://doc.getlago.com/docs/api/plans/plan-objectifnamein{'volume','graduated'}:returnCharge(billable_metric_id=metric_id,charge_model=name,amount_currency='USD',properties={f'{name}_ranges':rates[name],})returnCharge(billable_metric_id=metric_id,charge_model=name,amount_currency='USD',properties={'amount':rates[name]},)asyncdefcreate_metric()->BillableMetricResponse:"""The upsert logic creates a new metric if it doesn't exist."""returnawaitLago.billable_metrics.async_upsert(resource_id=metric_id,name=metric_name,code=metric_id,description='Demo API Requests',aggregation_type="sum_agg",field_name="consumption")asyncdefcreate_plan()->Plan:plan=awaitLago.plans.async_exists(resource_id=plan_id,)ifnotplan:metric=awaitcreate_metric()plan_obj=Plan(name=plan_name,amount_cents=0,amount_currency='USD',code=plan_id,interval="monthly",description="Demo API Plan",pay_in_advance=False)forrateinrates:charge=create_charge(name=rate,metric_id=metric.resource_id,)plan_obj.add_charge_to_plan(charge)plan=awaitLago.plans.async_create(plan_obj)logger.info(f'Created Plan:{plan}')returnplanasyncdefrun_test():plan=awaitcreate_plan()asyncio.run(run_test()) |
aiolambda | aiolambdaPython async microservices, fast and functional!TechnologiesPython librariescoreconnexion:
Swagger/OpenAPI First framework for Python on top of Flask with automatic endpoint validation & OAuth2 supportaiohttp:
Asynchronous HTTP client/server framework for asyncio and PythonDatabasesasyncpg:
A fast PostgreSQL Database Client Library for Python/asyncio.AMPQaio-pika:
Wrapper for the PIKA for asyncio and humans.SoftwareContainersDockerContinuous Integrationtravis-ci |
aiolancium | aiolanciumaiolancium is a simplistic python REST client for the Lancium Compute REST API utilizing asyncio. The client itself has
been developed against the Lancium Compute REST API documentation.Installationaiolancium can be installed via PyPi using
pip install aiolancium
How to use aiolancium
from aiolancium.auth import Authenticator
from aiolancium.client import LanciumClient

# Authenticate yourself against the API and refresh your token if necessary
auth = Authenticator(api_key="<your_api_key>")

# Initialise the actual client
client = LanciumClient(api_url="https://portal.lancium.com/api/v1/", auth=auth)

# List details about the `lancium/ubuntu` container
await client.images.list_image("lancium/ubuntu")

# Create your first hello world job.
job = {"job": {"name": "GridKa Test Job",
               "qos": "high",
               "image": "lancium/ubuntu",
               "command_line": 'echo "Hello World"',
               "max_run_time": 600}}
await client.jobs.create_job(**job)

# Show all your jobs and their status in Lancium compute
jobs = await client.jobs.show_jobs()

for job in jobs["jobs"]:
    # Retrieve the stdout/stderr output of your finished jobs
    await client.jobs.download_job_output(job["id"], "stdout.txt")
    await client.jobs.download_job_output(job["id"], "stderr.txt")

    # or download them to disk
    await client.download_file_helper("stdout.txt", "stdout.txt", job["id"])
    await client.download_file_helper("stderr.txt", "stderr.txt", job["id"])

# Delete all your jobs in Lancium compute
for job in jobs["jobs"]:
    await client.jobs.delete_job(id=job["id"])
In order to simplify file uploads and downloads to/from the Lancium compute platform, an upload/download helper method
has been added to the client.
The upload helper takes care of reading a file in binary format and uploading it in 32 MB chunks (default) to the
Lancium persistent storage. The download helper downloads a file from the Lancium persistent storage to the local disks.
The download helper also supports the download of jobs outputs (stdout.txt, stderr.txt) to local disk (see example
above).
Unfortunately, streaming of data is not supported by the underlying simple-rest-client. Thus, the entire file is
downloaded to memory before writing to the disk.
from aiolancium.auth import Authenticator
from aiolancium.client import LanciumClient

# Authenticate yourself against the API and refresh your token if necessary
auth = Authenticator(api_key="<your_api_key>")

# Initialise the actual client
client = LanciumClient(api_url="https://portal.lancium.com/api/v1/", auth=auth)

# Upload /bin/bash to /test on the Lancium persistent storage
await client.upload_file_helper(path="test", source="/bin/bash")

# Get information about the uploaded file
await client.data.get_file_info("/test")

# Download the file again
await client.download_file_helper("/test", destination="test_downloaded_again")

# Delete the uploaded file again
arg = {"file-path": "/test"}
await client.data.delete_data_item(**arg)

# Alternative approach to delete the uploaded file
await client.data.delete_data_item("/test") |
aiolangchain | No description available on PyPI. |
aiolastfm | aiolastfm |
aiolavapy | Async Lava API libraryExample to use
import asyncio

from aiolava import LavaBusinessClient

async def main():
    client = LavaBusinessClient(
        private_key="INSERT_PRIVATE_KEY",
        shop_id="INSERT_SHOP_ID"  # optional
    )
    invoice = await client.create_invoice(sum_=10, order_id="order#10")
    print(invoice.data.url)

    status = await client.check_invoice_status(order_id="order#10")
    print(status.data.status)

if __name__ == '__main__':
    asyncio.run(main())
All documentation you can find here |
aiolbry | aioLBRY, a Python API Wrapper for lbry & lbrycrdaioLBRY is a wrapper for the lbry daemon and lbrycrd daemon API for Python 3.7+(Python 2 will never be supported)InstallationWith pipSimply run the following
$ pip install aiolbry
And you're done!Manually Cloning the RepositoryYou can either clone this repository or get a tarball from PyPI's
website for whatever version you want. Simply download it and
# Simply clone the repository somewhere
$ git clone https://gitlab.com/jamieoglindsey0/aiolbry

# Or obtain a release from PyPI's site.
$ wget <extremely long link generated by PyPI>
$ tar -xzf aiolbry-x.x.x.tar.gz aiolbry/

# Change directories into the newly created repository
$ cd aiolbry/

# Now you simply run the setup.py file:
$ python3 setup.py build_py install
UsageUsing the APIMake sure that lbry-daemon is up and running, as you will not be able to
do anything without it.First, import LbrydApi or LbrycrdApi from aiolbry into your project.API for LBRYDUsing the Generated CodeThe API generates all the functions from the lbryd documentation, and translates
it into tangible, documented code.
import asyncio
from aiolbry import LbrydApi

# Initialize the API
lbry = LbrydApi()
loop = asyncio.get_event_loop()

# Just call the method as documented in the LBRYD API
loop.run_until_complete(lbry.claim_list(name="@lbry"))
use it as you would with cURL on the commandline. In fact, this is
actually what the bodies of generated code do.
# You can also use the traditional method of making requests
# if you prefer the cURL commandline syntax, works the same.
response = lbry.call("claim_list", {"name": "bellflower"})
API For LbryCRD
from aiolbry import LbrycrdApi

# Provide the username and password
lbrycrd = LbrycrdApi("username", "password")

# Just specify the method and the parameters
response = lbrycrd.call("wallet_unlock", {"wallet_username", "wallet_password"}) |
aioldap | Not entirely ready, literally just started. Might shuffle things around a bit etc…This was initially going to be a complete “from scratch” LDAP library for asyncio. Having used ldap3 for quite a
while I thought: wouldn’t it be nice to have something ldap3-like but using normal asyncio. So I wrote this library which
is sort of based around ldap3, it uses ldap3’s encoding and decoding functions and I just dealt with the actual packet
handoff. As for why I made this, well, because I can… and because I was bored.I wouldn't quite call this production ready yet, and it could do with a bit of cleaning up, but if anyone actually
finds this library useful, raise an issue with anything you have and I’ll be happy to help out.In its current form it only supports Python3.6 as I have an async generator in the code, am looking at making it
Python3.5 compatible too.DocumentationEventually will be on readthedocsExampleSimple example of binding, adding, modifying and searching with aioldap:
conn = aioldap.LDAPConnection()
await conn.bind(
    bind_dn=ldap_params['user'],
    bind_pw=ldap_params['password'],
    host=ldap_params['host'],
    port=ldap_params['port']
)

dn = user_entry('modify', ldap_params['test_ou1'])
await conn.add(
    dn=dn,
    object_class='inetOrgPerson',
    attributes={
        'description': 'some desc',
        'cn': 'some_user',
        'sn': 'some user',
        'employeeType': ['type1', 'type2']
    }
)

await conn.modify(
    dn=dn,
    changes={
        'sn': [('MODIFY_REPLACE', 'some other user')],
        'employeeType': [
            ('MODIFY_ADD', 'type3'),
            ('MODIFY_DELETE', 'type1'),
        ]
    }
)

# Now search for user
async for user in conn.search(dn, search_filter='(uid=*)', search_scope='BASE', attributes='*'):
    assert user['dn'] == dn
aioletterxpress | No description available on PyPI. |
aioleviosa | No description available on PyPI. |
aioli | Failed to fetch description. HTTP Status Code: 404 |
aiolib | # aiolib## DescriptionThe main purpose of this lib is to provide *asyncio* compatbile alternatives for items of the standard library that aren't asyncio compatible. E.g. `itertools.chain.from_iterable()` doesn't work with `async_generators`, etc.I hope/guess the stdlib will catch up with this sooner or later. This lib is for those who need a shim as soon as possible.If you find something in the standard lib that doesn't work or play well with asyncio, please go ahead and add your replecement/wrapper!## Requirementsaiolib requires Python `>= 3.6`## Installpip install aioliborpipenv install aiolib## Library Documentation### builtins#### `aenumerate`Async version of: [enumerate](https://docs.python.org/3.6/library/functions.html#enumerate)Example:```pythonasync for i in aenumerate(chain(a_gen, range(10))):print(i)```### itertools#### `itertools.chain(*iterables)`Async version of: [chain](https://docs.python.org/3.6/library/itertools.html#itertools.chain) in the stdlib.This function can handle `AsyncGeneratorType`.Example:```python# Use with async_generator:async for i in chain(my_async_gen1, my_async_gen2):print(i)# Allows for mixing normal generators with async generators:async for i in chain(my_async_gen, range(10)):print(i)```#### `itertools.chain_from_iterable(iterable)`Async version of: [from_iterable](https://docs.python.org/3.6/library/itertools.html#itertools.chain.from_iterable) in the stdlib.This function can handle `AsyncGeneratorType`.Example:```python# Use with async_generator:async for i in chain_from_iterable(my_async_generator):print(i)# Works with normal items as well:async for i in chain_from_iterable([range(10)]):print(i)```### More items to comePull requests appreciated!## DevelopmentJust clone the repo and run:pipenv install --devThis project uses [yapf](https://github.com/google/yapf), but I don't really care how you format your pull requests. It would be auto-formatted later. |
aiolibgen | aiolibgenAsynchronous client for Libgen APIExample
import asyncio

from aiolibgen import LibgenClient

async def books(base_url, ids):
    client = LibgenClient(base_url)
    return await client.by_ids(ids)

response = asyncio.get_event_loop().run_until_complete(books('http://gen.lib.rus.ec', [100500])) |
aiolifecycle | aiolifecycleSafely use asyncio handlers in synchronous context.Use caseIf you want to run an asyncio-based program in a synchronous context - such as
a command line invocation - you can useasyncio.runfrom the standard library.
But it immediately spins up and terminates an event loop. What if you want a
continuous workflow, where you can initialise and re-use resources?The original need for this project arose when adapting an asyncio-based system
to AWS Lambda, which makes multiple synchronous function calls in the same
interpreter environment.InstallationRun pip install aiolifecycle, or add it to your package dependencies.UsageHandlerDefine your handler as an async function, and add the sync annotation. Then
it can safely be called synchronously. An event loop will automatically be
initialised with its associated resources.
import asyncio

from aiolifecycle import sync

@sync()
async def my_handler() -> None:
    await asyncio.sleep(1)
By default, handlers are eager, meaning an event loop will be created and
initialisation functions will be immediately run on module import.If you want to initialise resources only when a handler is first called, do:
import asyncio

from aiolifecycle import sync

@sync(eager=False)
async def my_handler() -> None:
    await asyncio.sleep(1)
InitialisationYou can define async initialisation functions to prepare resources for use by
handlers. These can be simple async defs returning nothing.In the following example, my_init will be called exactly once, before any handlers
run.
import asyncio

from aiolifecycle import sync, init

@init
async def my_init() -> None:
    print('Hello, world!')

@sync
async def my_handler() -> None:
    await asyncio.sleep(1)
Initialisation order can be controlled with the order parameter. In the following
example, hello will be called before world. Functions with the same order (or
undefined order) are called in the order they were defined.
import asyncio

from aiolifecycle import sync, init

@init(order=20)
async def world() -> None:
    print('World!')

@init(order=10)
async def hello() -> None:
    print('Hello!')

@sync()
async def my_handler() -> None:
    await asyncio.sleep(1)
Initialisation functions can manage resources. You can simply return the resource
from an init function (if no finalisation is necessary), or define it as an asynccontextmanager. Or you can use AsyncContextManagers, and access the resources they create by
referring to the handler function. Proper lifetime will be managed internally, such
that the initialisation will happen once.
import json
from contextlib import asynccontextmanager
from typing import AsyncIterator

import aiofiles
from aiofiles.threadpool.text import AsyncTextIOWrapper

from aiolifecycle import sync
from aiolifecycle import init

@init
async def json_log_path() -> str:
    # Run complicated logic to determine path, only once!
    # ....
    return '/tmp/my-file.json'

@init()
@asynccontextmanager
async def json_log_file() -> AsyncIterator[AsyncTextIOWrapper]:
    log_path = await json_log_path()
    # File will be open before any handler is called, and cleaned up on shutdown
    async with aiofiles.open(log_path, mode='a') as f:
        yield f

@sync()
async def handler():
    log_file = await json_log_file()
    await log_file.write(json.dumps(event) + "\n")
    await log_file.flush()
Changelogv0.1.8 (2022-05-11)Fixinit:Fix cycle checking scope (9e76ea7)v0.1.7 (2022-05-11)Fixinit:Raise terminating exception on init function failure bubbling up (69c0f88)init:Raise terminating exception on init function failure bubbling up (024734d)v0.1.6 (2022-05-11)Fixinit:Exit immediately on init failure (8abe43b)v0.1.5 (2022-05-11)Fixinit:Exit immediately on init failure (9652c7c)v0.1.4 (2022-05-10)Fixinit:Fix threading atexit registration order (0a3a861)v0.1.3 (2022-05-09)Fixloop:Wait for loop on interpreter termination (e6b9825)v0.1.2 (2022-05-09)Fixinit:Add missing reset for context var (#1) (fc7f654)v0.1.1 (2022-05-05)Fixlicense:Update copyright (588e5a3)v0.1.0 (2022-05-05)Featureall:Remove remaining lambda references (c003c73)v0.0.4 (2022-05-05)Fixhandlers:Cycle detection (018ba95)handlers:Rename handlers to remove lambda references (de9643b)tests:For new resource management (5f30e6d)metadata:Use Markdown version of Apache License [skip ci] (bfaf82e)v0.0.3 (2022-05-04)Fixcd:Fix long_description Markdown content type (b3479ee)v0.0.2 (2022-05-04)Fixcd:Don't release on new tags (581d5ff)v0.0.1 (2022-05-04)Fixcd:Force release (7bbac93)cd:Force release (dd9f209)cd:Force release (e621555)Copyright 2022 Daniel Miranda and contributors.Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. |
aiolifx | aiolifxaiolifx is a Python 3/asyncio library to control Lifx LED lightbulbs over your LAN.Most of it was taken from Meghan Clarkk lifxlan package (https://github.com/mclarkk)
and adapted to Python 3 (and asyncio obviously)InstallationWe are on PyPi so
pip3 install aiolifx
or
python3 -m pip install aiolifx
After installation, the utility aiolifx can be used to test/control devices.NOTE: When installing with Python 3.4, the installation produces an error message
(syntax error). This can be safely ignored.How to useEssentially, you create an object with at least 2 methods:- register
- unregisterYou then start the LifxDiscovery task in asyncio. It will register any new light it finds.
All the methods communicating with the bulb can be passed a callback function to react to
the bulb response. The callback should take 2 parameters:
- a light object
- the response message
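A minimal sketch of such a callback (the function name and body are illustrative; only the two-parameter shape and the mac_addr attribute come from this README):
def my_callback(bulb, response):
    # bulb: the light object the request was sent to
    # response: the message returned by the bulb
    print(bulb.mac_addr, response)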
The easiest way is to look at the files in the examples directory. "Wifi" and "Uptime" use
a callback to print the info when it is returned.
In essence, the test program is this
import sys
import asyncio as aio
import aiolifx as alix  # imports added for completeness; the aliases are assumed from the code below

class bulbs():
""" A simple class with a register and unregister methods
"""
def __init__(self):
self.bulbs=[]
def register(self,bulb):
self.bulbs.append(bulb)
def unregister(self,bulb):
idx=0
for x in list([ y.mac_addr for y in self.bulbs]):
if x == bulb.mac_addr:
del(self.bulbs[idx])
break
idx+=1
def readin():
"""Reading from stdin and displaying menu"""
selection = sys.stdin.readline().strip("\n")
DoSomething()
MyBulbs = bulbs()
loop = aio.get_event_loop()
discovery = alix.LifxDiscovery(loop, MyBulbs)
try:
loop.add_reader(sys.stdin, readin)
discovery.start()
loop.run_forever()
except:
pass
finally:
discovery.cleanup()
loop.remove_reader(sys.stdin)
    loop.close()
Other things worth noting:
- Whilst LifxDiscovery uses UDP broadcast, the bulbs are connected with Unicast UDP
- The socket connecting to a bulb is not closed unless the bulb is deemed to have gone the way of the Dodo. I've been using that for days with no problem
- You can select to use IPv6 connection to the bulbs by passing an IPv6 prefix to LifxDiscovery. It's only been tried with a /64 prefix. If you want to use a /48 prefix, add ":" (colon) at the end of the prefix and pray. (This means 2 colons at the end!)
- I only have Original 1000, so I could not test with other types of bulbs
- Unlike in lifxlan, set_waveform takes a dictionary with the right keys instead of all those parameters
DevelopmentRunning locallyRun this command each time you make changes to the project. It enters at __main__.py
pip3 install . && aiolifx
ThanksThanks to Anders Melchiorsen and Avi Miller for their essential contributions |
aiolifxc | AioLifxCAioLifxC is a Python 3/asyncio library to control Lifx LED lightbulbs over your LAN.Most of it was originally taken from the Meghan Clarkk lifxlan package and adapted to Python 3 (and asyncio obviously)This is a fork from François Wautier's package.
It uses coroutines as opposed to callbacks. If you prefer callbacks,
please see his implementation instead. This was forked from version 0.5.0.InstallationWe are on PyPi so:
pip3 install aiolifxc
or:
python3 -m pip install aiolifxc
How to useIn essence, the test program is this:
import sys
import asyncio as aio
from aiolifxc import Devices  # imports added for completeness; the import path is assumed

def readin():
"""Reading from stdin and displaying menu"""
selection = sys.stdin.readline().strip("\n")
DoSomething()
loop = aio.get_event_loop()
devices = Devices(loop=loop)
loop.add_reader(sys.stdin, readin)
devices.start_discover()
try:
loop.run_forever()
except Exception as e:
print("Got exception %s" % e)
finally:
loop.remove_reader(sys.stdin)
    loop.close()
Other things worth noting:
- Whilst LIFXDiscover uses UDP broadcast, the bulbs are connected with Unicast UDP
- The socket connecting to a bulb is not closed unless the bulb is deemed to have gone the way of the Dodo. I've been using that for days with no problem
- You can select to use IPv6 connection to the bulbs by passing an IPv6 prefix to LifxDiscover. It's only been tried with a /64 prefix. If you want to use a /48 prefix, add ":" (colon) at the end of the prefix and pray. (This means 2 colons at the end!)
- I only have Original 1000, so I could not test with other types of bulbs
- Unlike in lifxlan, set_waveform takes a dictionary with the right keys instead of all those parameters
History1.0.0 (UNRELEASED)ChangedAdd more log messages.Fix display of selected devices in sample app.Simplify API. Merge device classes into light classes.0.5.6 (2017-09-22)ChangedUpdate pip dependencies.Simplify start_discovery function.Update to Beta status.Fix API docs on readthedocs.FixedUpdate MANIFEST.in file.0.5.5 (2017-07-11)ChangedUpdate mypy from 0.511 to 0.520FixedEnsure we act on selected device in sample client.Fix mypy errors.Fix message size calculation.Add configurable grace period for unregister.0.5.4 (2017-07-07)FixedFix failure to re-register a light that went off-line.0.5.3 (2017-07-03)FixedFixed FD resource leak in discovery of existing lights.0.5.2 (2017-07-02)ChangedSignificant changes. Improvements to the API. Type hints, doc strings, etc.0.5.1 (2017-06-26)Initial version after fork from https://github.com/frawau/aiolifx |
aiolifx-connection | AIOLIFX ConnectionA wrapper for aiolifx to connect to a single LIFX deviceFree software: BSD licenseDocumentation:https://aiolifx-connection.readthedocs.io.FeaturesTODOCreditsThis package was created withCookiecutterand theaudreyr/cookiecutter-pypackageproject template.History1.0.0 (2022-07-03)First release on PyPI. |
aiolifx-effects | No description available on PyPI. |
aiolifx-scenes | aiolifx-scenesAn async library with a single input and a single output.If you feed it a LIFX Cloud API Personal Access Token (PAT), it will return all the scenes you that token has access to on the LIFX Cloud.UsageTo generate a personal access token:visithttps://cloud.lifx.comand login using the same login credentials that you use for the LIFX smart phone app.Once logged in, click the arrow next to your email in the top right-hand corner of the "Cloud home" page to reveal the menu.With the menu revealed, click the "Personal access tokens" menu item.On the personal access tokens page, click the big blue "Generate new token" button.Once you have a personal access token, you can install the library:$pipinstallaiolifx-scenesWith the library installed, you can call it from your application:importaiolifx_scenesPAT="personal access token"scenes=awaitaiolifx_scenes.async_get_scenes(token=PAT)Top tip:useaiolifx_scenes.get_scenes()from non-async methods.Sanity checksAn extremely basic command-line tool is provided to enable easier sanity checking of your personal access token and existing LIFX scene information.To use the tool, set theLIFX_API_TOKENenvironment variable, then runlifx-scenes. If human readability is important to you, consider piping the output throughjq.For example:$LIFX_API_TOKEN="your_lifx_api_personal_access_token"lifx-scenes|jq[{'uuid':'031f1116-034f-4d92-a1f3-13420e532706','name':'My Scene','account':{'uuid':'bda95b31-948c-4c34-a330-c5f0c5eeb2a3'},'states':[{"selector":"id:d073d5xxxxxx","power":"off","brightness":0.25,"color":{"hue":0,"saturation":0,"kelvin":3500}},{"selector":"id:d073d5xxxxxx","power":"off","brightness":0.25,"color":{"hue":0,"saturation":0,"kelvin":2500}}],'created_at':1658591387,'updated_at':1679022191}]CreditsThis package was created withCookiecutterand thewaynerv/cookiecutter-pypackageproject template. |
aiolifx-themes | aiolifx-themesAsync library that applies color themes to LIFX lightsInstallationInstall this via pip (or your favourite package manager):pip install aiolifx-themesContributors ✨Thanks goes to these wonderful people (emoji key):Stephen Moore💻Avi Miller💻📖🚧This project follows theall-contributorsspecification. Contributions of any kind welcome!CreditsThis package contains code originally authored [email protected] package was created withCookiecutterand thebrowniebroke/cookiecutter-pypackageproject template. |
aioli-guestbook | aioli-guestbook: RESTful HTTP API Package ExampleThe idea with this example is to show how a CRUD-type RESTful HTTP API package can be built with the Aioli Framework.DocumentationCheck out the Package Documentation for usage and info about the
HTTP and Service APIs.ExamplesEvery guestbook needs a guesthouse, right?Check out the example directory to see how aioli-guestbook can be incorporated into an example Guesthouse Application.AuthorRobert Wikman <[email protected]> |
aiolimit | aiolimitAsync API rate limit, modified from and inspired by redis-token-bucke.Installation
pip install aiolimit
test
py.test |
aiolimiter | aiolimiterIntroductionAn efficient implementation of a rate limiter for asyncio.This project implements the Leaky bucket algorithm, giving you precise control over the rate a code section can be entered:
from aiolimiter import AsyncLimiter

# allow for 100 concurrent entries within a 30 second window
rate_limit = AsyncLimiter(100, 30)

async def some_coroutine():
    async with rate_limit:
        # this section is *at most* going to be entered 100 times
        # in a 30 second period.
        await do_something()
It was first developed as an answer on Stack Overflow.Documentationhttps://aiolimiter.readthedocs.ioInstallation
$ pip install aiolimiter
The library requires Python 3.7 or newer.RequirementsPython >= 3.7Licenseaiolimiter is offered under the MIT license.Source codeThe project is hosted on GitHub.Please file an issue in the bug tracker if you have found a bug
or have some suggestions to improve the library.Developer setupThis project uses poetry to manage dependencies, testing and releases. Make sure you have installed that tool, then run the following command to get set up:
poetry install --with docs && poetry run doit devsetup
Apart from using poetry run doit devsetup, you can either use poetry shell to enter a shell environment with a virtualenv set up for you, or use poetry run ... to run commands within the virtualenv.Tests are run with pytest and tox. Releases are made with poetry build and poetry publish. Code quality is maintained with flake8, black and mypy, and pre-commit runs quick checks to maintain the standards set.A series of doit tasks are defined; run poetry run doit list (or doit list with poetry shell activated) to list them. The default action is to run a full linting, testing and building run. It is recommended you run this before creating a pull request.
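Putting the introduction together, a complete runnable variant of the snippet above (do_something is replaced by a print; only AsyncLimiter and its async-with usage come from this README):
import asyncio

from aiolimiter import AsyncLimiter

# at most 4 entries within an 8 second window
limiter = AsyncLimiter(4, 8)

async def worker(n: int) -> None:
    async with limiter:
        print(f"task {n} ran")

async def main() -> None:
    await asyncio.gather(*(worker(i) for i in range(10)))

asyncio.run(main())
The first four tasks run immediately; the rest are delayed until the limiter frees capacity. |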
aiolinebot | aiolinebotAioLineBotApi provides asynchronous interface for LINE messaging API✨ Features100% coverage: All endpoints of line-bot-sdk supported!100% compatible: Both async and sync methods for each endpoint provided!Up-to-date immediately: Update automatically when your line-bot-sdk is updated!by dynamic class building: making async api client at the first time you import this package, from the source of line-bot-sdk installed in your environment.🥳 UsageJust create instance of AioLineBotApi instead of LineBotApi. That's all.# line_api = LineBotApi("<YOUR CHANNEL ACCESS TOKEN>")line_api=AioLineBotApi("<YOUR CHANNEL ACCESS TOKEN>")Now you are ready to use both async and sync methods for each endpoint.# asyncloop=asyncio.get_event_loop()loop.run_until_complete(line_api.reply_message_async("<REPLY TOKEN>",TextMessage("Hello!")))# syncline_api.reply_message("<REPLY TOKEN>",TextMessage("Hello!"))Note that when you get binary content by stream, you should close the http response after finished.content=awaitline_api.get_message_content_async("<MESSAGE ID>")asyncforbincontent.iter_content(1024):do_something(b)awaitcontent.response.close()📦 Installation$ pip install aiolinebot⚙ Dependenciesaiohttpline-bot-sdkContributionAll kinds of contributions are welcomed🙇♀️🙇♀️🙇♀️Especially we need tests. Because of async we can't useresponsesthat is used in the tests for line-bot-sdk. So at first we have to find out the way of testing...If you have any ideas about testing post issue please🙏🙏🥘 ExampleThis is the echobot on Azure Functions.importloggingimportazure.functionsasfuncfromlinebotimportWebhookParserfromlinebot.modelsimportTextMessagefromaiolinebotimportAioLineBotApiasyncdefmain(req:func.HttpRequest)->func.HttpResponse:# create api clientline_api=AioLineBotApi(channel_access_token="<YOUR CHANNEL ACCESS TOKEN>")# get events from requestparser=WebhookParser(channel_secret="<YOUR CHANNEL SECRET>")events=parser.parse(req.get_body().decode("utf-8"),req.headers.get("X-Line-Signature",""))forevinevents:# reply echoawaitline_api.reply_message(ev.reply_token,TextMessage(text=f"You said:{ev.message.text}"))# 200 responsereturnfunc.HttpResponse("ok") |
aiolinkding | 🔖 aiolinkding: a Python3, async library for the linkding REST APIaiolinkding is a Python3, async library that interfaces with linkding instances. It is intended to be a reasonably light wrapper around the linkding API
(meaning that instead of drowning the user in custom objects/etc., it focuses on
returning JSON straight from the API).InstallationPython VersionsUsageCreating a ClientWorking with BookmarksGetting All BookmarksGetting Archived BookmarksGetting a Single BookmarkCreating a New BookmarkUpdating an Existing Bookmark by IDArchiving/Unarchiving a BookmarkDeleting a BookmarkWorking with TagsGetting All TagsGetting a Single TagCreating a New TagWorking with User DataGetting Profile InfoConnection PoolingContributingInstallationpipinstallaiolinkdingPython Versionsaiolinkdingis currently supported on:Python 3.10Python 3.11Python 3.12UsageCreating a ClientIt's easy to create an API client for a linkding instance. All you need are two
parameters:A URL to a linkding instanceA linkding API tokenimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")asyncio.run(main())Working with BookmarksGetting All Bookmarksimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")# Get all bookmarks:bookmarks=awaitclient.bookmarks.async_get_all()# >>> { "count": 100, "next": null, "previous": null, "results": [...] }asyncio.run(main())client.bookmarks.async_get_all()takes three optional parameters:query: a string query to filter the returned bookmarkslimit: the maximum number of results that should be returnedoffset: the index from which to return results (e.g.,5starts at the fifth bookmark)Getting Archived Bookmarksimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")# Get all archived bookmarks:bookmarks=awaitclient.bookmarks.async_get_archived()# >>> { "count": 100, "next": null, "previous": null, "results": [...] }asyncio.run(main())client.bookmarks.async_get_archived()takes three optional parameters:query: a string query to filter the returned bookmarkslimit: the maximum number of results that should be returnedoffset: the index from which to return results (e.g.,5starts at the fifth bookmark)Getting a Single Bookmark by IDimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")# Get a single bookmark:bookmark=awaitclient.bookmarks.async_get_single(37)# >>> { "id": 37, "url": "https://example.com", "title": "Example title", ... }asyncio.run(main())Creating a New Bookmarkimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")# Create a new bookmark:created_bookmark=awaitclient.bookmarks.async_create("https://example.com",title="Example title",description="Example description",tag_names=["tag1","tag2",],)# >>> { "id": 37, "url": "https://example.com", "title": "Example title", ... }asyncio.run(main())client.bookmarks.async_create()takes four optional parameters:title: the bookmark's titledescription: the bookmark's descriptionnotes: Markdown notes to add to the bookmarktag_names: the tags to assign to the bookmark (represented as a list of strings)is_archived: whether the newly-created bookmark should automatically be archivedunread: whether the newly-created bookmark should be marked as unreadshared: whether the newly-created bookmark should be shareable with other linkding usersUpdating an Existing Bookmark by IDimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")# Update an existing bookmark:updated_bookmark=awaitclient.bookmarks.async_update(37,url="https://different-example.com",title="Different example title",description="Different example description",tag_names=["tag1","tag2",],)# >>> { "id": 37, "url": "https://different-example.com", ... }asyncio.run(main())client.bookmarks.async_update()takes four optional parameters (inclusion of any parameter
will change that value for the existing bookmark):url: the bookmark's URLtitle: the bookmark's titledescription: the bookmark's descriptionnotes: Markdown notes to add to the bookmarktag_names: the tags to assign to the bookmark (represented as a list of strings)unread: whether the bookmark should be marked as unreadshared: whether the bookmark should be shareable with other linkding usersArchiving/Unarchiving a Bookmarkimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")# Archive a bookmark by ID:awaitclient.bookmarks.async_archive(37)# ...and unarchive it:awaitclient.bookmarks.async_unarchive(37)asyncio.run(main())Deleting a Bookmarkimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")# Delete a bookmark by ID:awaitclient.bookmarks.async_delete(37)asyncio.run(main())Working with TagsGetting All Tagsimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")# Get all tags:tags=awaitclient.tags.async_get_all()# >>> { "count": 100, "next": null, "previous": null, "results": [...] }asyncio.run(main())client.tags.async_get_all()takes two optional parameters:limit: the maximum number of results that should be returnedoffset: the index from which to return results (e.g.,5starts at the fifth bookmark)Getting a Single Tag by IDimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")# Get a single tag:tag=awaitclient.tags.async_get_single(22)# >>> { "id": 22, "name": "example-tag", ... }asyncio.run(main())Creating a New Tagimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")# Create a new tag:created_tag=awaitclient.tags.async_create("example-tag")# >>> { "id": 22, "name": "example-tag", ... }asyncio.run(main())Working with User DataGetting Profile Infoimportasynciofromaiolinkdingimportasync_get_clientasyncdefmain()->None:"""Use aiolinkding for fun and profit."""client=awaitasync_get_client("http://127.0.0.1:8000","token_abcde12345")# Get all tags:tags=awaitclient.user.async_get_profile()# >>> { "theme": "auto", "bookmark_date_display": "relative", ... }asyncio.run(main())Connection PoolingBy default, the library creates a new connection to linkding with each coroutine. If you
are calling a large number of coroutines (or merely want to squeeze out every second of
runtime savings possible), an aiohttp ClientSession can be used for
connection pooling:
import asyncio

from aiohttp import ClientSession

from aiolinkding import async_get_client

async def main() -> None:
    """Use aiolinkding for fun and profit."""
    async with ClientSession() as session:
        client = await async_get_client(
            "http://127.0.0.1:8000", "token_abcde12345", session=session
        )
        # Get to work...

asyncio.run(main())
ContributingThanks to all of our contributors so far!Check for open features/bugs or initiate a discussion on one.Fork the repository.(optional, but highly recommended) Create a virtual environment: python3 -m venv .venv(optional, but highly recommended) Enter the virtual environment: source ./.venv/bin/activateInstall the dev environment: script/setupCode your new feature or bug fix on a new branch.Write tests that cover your new functionality.Run tests and ensure 100% code coverage: poetry run pytest --cov aiolinkding testsUpdate README.md with any new documentation.Submit a pull request! |
aioli-openapi | No description available on PyPI. |
aiolip | Async Lutron Integration ProtocolAsync Lutron Integration ProtocolFree software: Apache Software License 2.0Documentation:https://aiolip.readthedocs.io.Example UsageimportasyncioimportloggingfromaiolipimportLIPfromaiolip.dataimportLIPMode_LOGGER=logging.getLogger(__name__)asyncdefmain():lip=LIP()logging.basicConfig(level=logging.DEBUG)awaitlip.async_connect("192.168.209.70")defmessage(msg):_LOGGER.warning(msg)lip.subscribe(message)run_task=asyncio.create_task(lip.async_run())awaitrun_taskawaitlip.async_stop()if__name__=="__main__":loop=asyncio.get_event_loop()loop.run_until_complete(main())CreditsThis package was created withCookiecutterand theaudreyr/cookiecutter-pypackageproject template.History1.0.0 (2021-01-18)First release on PyPI. |
aioliqpay | Version: 1.0.0Web: https://www.liqpay.ua/Download: https://pypi.org/project/aioliqpaySource: https://github.com/toxazhl/aioliqpayDocumentation: https://www.liqpay.ua/documentation/en/Keywords: aioliqpay, liqpay, privat24, privatbank, python, internet acquiring, P2P payments, two-step payments, asyncioWhich Python versions are supported?Python 3.6, 3.7, 3.8, 3.9, 3.10Get StartedSign up at https://www.liqpay.ua/en/authorization.Create a company.In company settings, on the API tab, get the Public key and Private key.Done.InstallationFrom pip
pip install aioliqpay
Working with LiqPay Callback locallyIf you need to debug the API Callback on a local environment, use https://localtunnel.github.io/www/How to use it?Example 1: BasicBackendGet payment button (html response)
liqpay = LiqPay(public_key, private_key)
html = liqpay.cnb_form(
action='pay',
amount=1,
currency='UAH',
description='description text',
order_id='order_id_1',
language='ua'
)Get plain checkout url:liqpay = LiqPay(public_key, private_key)
html = liqpay.checkout_url({
    'action': 'auth',
    'amount': 1,
    'currency': 'UAH',
    'description': 'description text',
    'order_id': 'order_id_1',
    'language': 'ua',
    'recurringbytoken': 1
})
# Response:
str: https://www.liqpay.ua/api/3/checkout/?data=<decoded data>&signature=<decoded signature>FrontendVariablehtmlwill contain next html form<form method="POST" action="https://www.liqpay.ua/api/3/checkout" accept-charset="utf-8">
<input type="hidden" name="data" value="eyAidmVyc2lvbiIgOiAzLCAicHVibGljX2tleSIgOiAieW91cl9wdWJsaWNfa2V5IiwgImFjdGlv
biIgOiAicGF5IiwgImFtb3VudCIgOiAxLCAiY3VycmVuY3kiIDogIlVTRCIsICJkZXNjcmlwdGlv
biIgOiAiZGVzY3JpcHRpb24gdGV4dCIsICJvcmRlcl9pZCIgOiAib3JkZXJfaWRfMSIgfQ=="/>
<input type="hidden" name="signature" value="QvJD5u9Fg55PCx/Hdz6lzWtYwcI="/>
<input type="image"
src="//static.liqpay.ua/buttons/p1ru.radius.png"/>
</form>Example 2: Integrate Payment widget to DjangoPayment widget documentationhttps://www.liqpay.ua/documentation/en/api/aquiring/widget/Backendviews.pyfrom aioliqpay import LiqPay
from django.views.generic import TemplateView
from django.shortcuts import render
from django.http import HttpResponse
# imports added for completeness (standard Django paths for the names used below)
from django.conf import settings
from django.views.generic import View
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import csrf_exempt
class PayView(TemplateView):
template_name = 'billing/pay.html'
def get(self, request, *args, **kwargs):
liqpay = LiqPay(settings.LIQPAY_PUBLIC_KEY, settings.LIQPAY_PRIVATE_KEY)
params = {
'action': 'pay',
'amount': 100,
'currency': 'USD',
'description': 'Payment for clothes',
'order_id': 'order_id_1',
'sandbox': 0, # sandbox mode, set to 1 to enable it
'server_url': 'https://test.com/billing/pay-callback/', # url to callback view
}
signature = liqpay.cnb_signature(params)
data = liqpay.cnb_data(**params)
return render(request, self.template_name, {'signature': signature, 'data': data})
@method_decorator(csrf_exempt, name='dispatch')
class PayCallbackView(View):
def post(self, request, *args, **kwargs):
liqpay = LiqPay(settings.LIQPAY_PUBLIC_KEY, settings.LIQPAY_PRIVATE_KEY)
data = request.POST.get('data')
signature = request.POST.get('signature')
sign = liqpay.str_to_sign(settings.LIQPAY_PRIVATE_KEY + data + settings.LIQPAY_PRIVATE_KEY)
if sign == signature:
print('callback is valid')
response = liqpay.decode_data_from_str(data)
print('callback data', response)
return HttpResponse()urls.pyfrom django.conf.urls import url
from billing.views import PayView, PayCallbackView
urlpatterns = [
url(r'^pay/$', PayView.as_view(), name='pay_view'),
url(r'^pay-callback/$', PayCallbackView.as_view(), name='pay_callback'),
]Frontend<div id="liqpay_checkout"></div>
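For reference, the signature compared in PayCallbackView is a base64-encoded SHA-1 digest of private_key + data + private_key; a standalone sketch of that check (this assumes the standard LiqPay signing scheme, which the library's str_to_sign helper implements):
import base64
import hashlib

def str_to_sign(value: str) -> str:
    # base64(sha1(value)), as used for LiqPay request/callback signatures
    return base64.b64encode(hashlib.sha1(value.encode('utf-8')).digest()).decode('ascii')

# signature == str_to_sign(private_key + data + private_key)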
<script>
window.LiqPayCheckoutCallback = function() {
LiqPayCheckout.init({
data: "{{ data }}",
signature: "{{ signature }}",
embedTo: "#liqpay_checkout",
mode: "embed" // embed || popup,
}).on("liqpay.callback", function(data){
console.log(data.status);
console.log(data);
}).on("liqpay.ready", function(data){
// ready
}).on("liqpay.close", function(data){
// close
});
};
</script>
<script src="//static.liqpay.ua/libjs/checkout.js" async></script> |
aiolirc | Jump ToDocumentationPython package indexSource on githubDownloadsAboutAsynchronous messaging using python's new facility (async-await syntax), introduced in version 3.5, is so fun!So, I decided to provide an asynchronous context manager and iterator wrapper for Linux Infra-Red Remote Control (LIRC).Happily, Cython is working well with asyncio. So the lirc_client C extension has been made by cython's extension
type.In addition, an IRCDispatcher type and a listen_for decorator have been provided.Install
$ apt-get install liblircclient-dev python3.5-dev build-essential
$ pip install cython
$ pip install aiolirc
Quick StartThe simplest way to use this library is the famous very_quickstart function as follows:
from aiolirc import very_quickstart, listen_for
@listen_for('play')
async def do_play(loop):
...
# Do play stuff
very_quickstart('my-prog') # my-prog is configured in your lircrc file.
Another coroutine function named quickstart is also available. This lets you have control over the event loop
life-cycle:
import asyncio
from aiolirc import quickstart
main_loop = asyncio.get_event_loop()
try:
main_loop.run_until_complete(quickstart(loop=main_loop))
except KeyboardInterrupt:
print('CTRL+C detected. terminating...')
return 1
finally:
if not main_loop.is_closed():
    main_loop.close()
The IRCDispatcher Constructor
def __init__(self, source: LIRCClient, loop: asyncio.BaseEventLoop=None):
Example of usage
import asyncio
from aiolirc.lirc_client import LIRCClient
from aiolirc.dispatcher import IRCDispatcher, listen_for
@listen_for('amp power', repeat=5)
async def amp_power(loop):
    ...
    # Do your stuff

@listen_for('amp source')
async def amp_source(loop):
    ...
    # Do your stuff

async with LIRCClient('my-prog') as client:
    dispatcher = IRCDispatcher(client)
    await dispatcher.listen()
The LIRCClient
Constructor
def __cinit__(self, lircrc_prog, *, lircrc_file='~/.config/lircrc', loop=None, check_interval=.05, verbose=False,
              blocking=False):
For advanced control over the messages received from lirc, asynchronously iterate over an instance of the LIRCClient after
calling LIRCClient.lirc_init(), and make sure that LIRCClient.lirc_deinit() has been called after finishing your work
with LIRCClient:
from aiolirc.lirc_client import LIRCClient
client = LIRCClient('my-prog')
try:
    client.lirc_init()
    async for cmd in client:
        print(cmd)
finally:
    client.lirc_deinit()
You may use the LIRCClient as an asynchronous context manager as follows, to automatically call the
LIRCClient.lirc_init() and LIRCClient.lirc_deinit() functions, and also to acquire a lock preventing multiple
instances of the LIRCClient from reading messages from the lirc_client wrapper:
from aiolirc.lirc_client import LIRCClient
async with LIRCClient('my-prog') as client:
    async for cmd in client:
        print(cmd)
Systemd
Create a main.py:
import sys
import asyncio
from aiolirc import IRCDispatcher, LIRCClient
async def launch() -> int:
    async with LIRCClient('my-prog', lircrc_file='path/to/lircrc', check_interval=.06) as client:
        dispatcher = IRCDispatcher(client)
        result = (await asyncio.gather(dispatcher.listen(), return_exceptions=True))[0]
        if isinstance(result, Exception):
            raise result
        return 0

def main():
    main_loop = asyncio.get_event_loop()
    try:
        return main_loop.run_until_complete(launch())
    except KeyboardInterrupt:
        print('CTRL+C detected.')
        return 1
    finally:
        if not main_loop.is_closed():
            main_loop.close()

if __name__ == '__main__':
    sys.exit(main())
/etc/systemd/system/aiolirc.service file:
[Unit]
Description=aiolirc
[Service]
ExecStart=python3.5 /path/to/main.py
User=user
Group=group
[Install]
WantedBy=multi-user.target
systemctl:
$ systemctl enable aiolirc
$ systemctl start aiolirc
$ systemctl restart aiolirc
$ ps -Af | grep 'main.py'
$ systemctl stop aiolirc
Change Log
0.1.0
README.rst |
aioli-rdbms | No description available on PyPI. |
aioli-sdk | Aioli (AIOnLineInference) is a platform for deploying models at scale.
See https://docs.determined.ai/ for more information. |
aioli-sphinx-theme | Alabaster is a visually (c)lean, responsive, configurable theme for the Sphinx documentation system. It is Python 2+3 compatible.
It began as a third-party theme, and is still maintained separately, but as of
Sphinx 1.3, Alabaster is an install-time dependency of Sphinx and is selected
as the default theme.
Live examples of this theme can be seen on this project’s own website, paramiko.org, fabfile.org and pyinvoke.org.
For more documentation, please see http://alabaster.readthedocs.io.
Note: You can install the development version via pip install -e git+https://github.com/bitprophet/alabaster/#egg=alabaster. |
aio-live-task-traces | aio-live-tasks-exceptions
Trace python asyncio tasks exceptions live!
Installation
Install the last released version using pip:
python3 -m pip install -U aio-live-task-traces
Or install the latest version from sources:
git clone git@github.com:matan1008/aio-live-task-traces.git
cd aio-live-task-traces
python3 -m pip install -U -e .
Usage
Usually, if you run a task that throws an exception using asyncio, you will not be aware
of the exception until the task is awaited or deleted. For example:
import asyncio

async def faulty_task():
    raise Exception('foo')

async def main():
    task = asyncio.create_task(faulty_task())
    await asyncio.sleep(3600)
    await task

if __name__ == '__main__':
    # The exception will be printed after 3600 seconds
    asyncio.run(main())
This package will wrap each task you run so the exception will be traced
the moment it raises:
import asyncio
from aio_live_task_traces import set_live_task_traces

async def faulty_task():
    raise Exception('foo')

async def main():
    set_live_task_traces(True)
    task = asyncio.create_task(faulty_task())
    await asyncio.sleep(3600)
    await task

if __name__ == '__main__':
    # The exception will be printed both immediately and after 3600 seconds
    asyncio.run(main()) |
aiolivisi | AioLIVISI
Asynchronous library to communicate with LIVISI Smart Home Controller.
Requires Python 3.8+ and uses asyncio and aiohttp. |
aiolizzer | Example Package
This is a simple example package. You can use Github-flavored Markdown to write your content. |
aiolmdb | aiolmdb
An asyncio wrapper around LMDB.
aiolmdb is alpha quality software; expect the API or architecture to change
significantly over time.
Usage
Opening an aiolmdb environment
import aiolmdb

# Open an aiolmdb environment
# Takes the same arguments that lmdb.open does.
environment = aiolmdb.open("/tmp/path/to/environment", ...)
Opening an aiolmdb database
Unlike pylmdb, aiolmdb does not return a database handle on open_db, but
rather a full Python object.
# Open a database
records = environment.open_db("records")

# Get the default ("" named database) within an environment
default = environment.get_default_database()
Querying an aiolmdb database
All queries against databases return coroutines and are run asynchronously.
# Get a value(s) from the database
result = await db.get(b'key')               # Normal fetch, returns b'value'
result = await db.get(b'key', default=b'')  # Defaults to b'' if no key is found
result = await db.get_multi([b'0', b'1'])   # Gets multiple keys at once

# Write a value into the database
await db.put(b'key', b'value')
await db.put_multi([(b'k1', b'v1'), (b'k2', b'v2')])  # Puts multiple key-values at once, atomically.

# Delete a key from the database
await db.delete(b'key')
await db.delete_multi([b'k1', b'k2', b'k3'])

# Drop the database
await db.drop()

# Run any arbitrary transactions
def transaction_action(txn):
    return txn.id()

await db.run(transaction_action)
Using coders
Applications do not operate directly on bytearrays, and require converting
runtime objects to and from serialized bytearrays. To avoid spending additional
time on the main loop running this conversion code, aiolmdb supports adding
database level coders to run this serialization/deserialization logic in the
executor instead of in the main loop. By default, every aiolmdb database uses
the IdentityCoder, which supports directly writing bytes-like objects. Other
coders can be used for both the key and value to change the types of objects
accepted by the API.
# Opening a database with specific coders
db = env.open_db("records", key_coder=UInt16Coder(), value_coder=JSONCoder())
await db.put(65535, {"key": "value"})  # Takes the appropriate matching keys
await db.get(65535)                    # Returns {"key": "value"}

# Alter the coder for an existing database, useful for altering the environment
# default database.
db.key_coder = StringCoder()
db.value_coder = JSONCoder()

# Supported Coders
IdentityCoder()  # Raw bytes coder
StringCoder()    # String coder
UInt16Coder()    # 16-bit unsigned integer coder
UInt32Coder()    # 32-bit unsigned integer coder
UInt64Coder()    # 64-bit unsigned integer coder
JSONCoder()      # JSON coder, works with any JSON serializable object
PickleCoder()    # Pickle coder, works with any picklable object

# Compression
# Create a new JSONCoder, gzipped with compression level 9
# Runs the encoded JSON through zlib before writing to database, and decompresses
zlib_json_coder = JSONCoder().compressed(level=9)
compressed_db = env.open_db("records", value_coder=zlib_json_coder)

# Write your own custom coder
from aiolmdb.coders import Coder

class CustomCoder(Coder):

    def serialize(self, obj):
        # Custom serialization logic
        #
        # These objects need to have locally immutable state: the objects must not
        # change how they represent their state for the duration of all concurrent
        # transactions dealing with the object.
        #
        # Must return a bytes-like object
        return buffer

    def deserialize(self, buffer):
        # Custom deserialization logic
        #
        # aiolmdb uses LMDB transactions with `buffers=True`: this returns a
        # direct reference to the memory region. This buffer must NOT be modified in
        # any way. The lifetime of the buffer is also only valid during the scope of
        # the transaction that fetched it. To use the buffer outside of the context
        # of the serializer, it must be copied, and references to the buffer must
        # not be used elsewhere.
        #
        # Returns the deserialized object
        return deserialized_object
Caveats and Gotchas
Write transactions (put, delete, pop, replace) still block while executed in
the executor. Thus running multiple simultaneous write transactions will
block all other transactions until they complete, one-by-one. Long running
write transactions are strongly discouraged.
Due to design limitations, atomic transactions across multiple databases are
currently not easy to do, nor is the code very pythonic.
TODOs
Support cursors and range queries |
aiolo | aiolo
asyncio-friendly Python bindings for liblo, an implementation of the Open Sound Control (OSC) protocol for POSIX systems.
Installation
Install liblo:
OS X: brew install liblo
Ubuntu: apt-get install liblo7 liblo-dev
Then:
pip install aiolo
Examples
One of the many beautiful things in Python is support for operator overloading. aiolo embraces this enthusiastically to offer the would-be OSC hacker an intuitive programming experience for objects such as Message, Bundle, Route, and Sub.
Simple echo server
import asyncio

from aiolo import Address, Midi, Server

async def main():
    server = Server(port=12001)
    server.start()

    # Create endpoints
    # /foo accepts an int, a float, and a MIDI packet
    foo = server.route('/foo', [int, float, Midi])
    ex = server.route('/exit')

    address = Address(port=12001)

    for i in range(5):
        address.send(foo, i, float(i), Midi(i, i, i, i))

    # Notify subscriptions to exit in 1 sec
    address.delay(1, ex)

    # Subscribe to messages for any of the routes
    subs = foo.sub() | ex.sub()

    async for route, data in subs:
        print(f'echo_server: {str(route.path)} received {data}')
        if route == ex:
            await subs.unsub()

    server.stop()

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
MultiCast
import asyncio
import random

from aiolo import MultiCast, MultiCastAddress, Route, Server

async def main():
    # Create endpoints for receiving data
    foo = Route('/foo', str)
    ex = Route('/exit')

    # Create a multicast group
    multicast = MultiCast('224.0.1.1', port=15432)

    # Create a cluster of servers in the same multicast group
    cluster = []
    for i in range(10):
        server = Server(multicast=multicast)
        # Have them all handle the same route
        server.route(foo)
        server.route(ex)
        server.start()
        cluster.append(server)

    address = MultiCastAddress(server=random.choice(cluster))

    # Send a single message from any one server to the entire cluster.
    # The message will be received by each server.
    address.send(foo, 'hello cluster')

    # Notify subscriptions to exit in 1 sec
    address.delay(1, ex)

    # Listen for incoming strings at /foo on any server in the cluster
    subs = foo.sub() | ex.sub()
    async for route, data in subs:
        print(f'{route} got data: {data}')
        if route == ex:
            await subs.unsub()

    for server in cluster:
        server.stop()

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
For additional usage see the examples and tests.
Supported platforms
Travis CI tests with the following configurations:
Ubuntu 18.04 Bionic Beaver + liblo 0.29 + [CPython3.6, CPython3.7, CPython3.8, PyPy7.3.0 (3.6.9)]
OS X + liblo 0.29 + [CPython3.6, CPython3.7, CPython3.8, PyPy7.3.0 (3.6.9)]
Contributing
Pull requests are welcome, please file any issues you encounter.
Changelog
4.1.1 (2020-07-22)
Prevent egg installation errors by passing zip_safe=False
4.1.0
Rectify some __hash__ issues.
4.0.0
Use Python-based OSC address pattern matching rather than liblo's, supports escaped special characters
Ensure ThreadedServer.start() waits for thread to be initialized
Fix bug where subscribers might not receive pending data
Fix bug where loop.remove_reader() was not being called on AioServer.stop()
aioload | AIOload
https://blog.mogollon.com.ve/2020/01/10/load-testing-with-python/
Load test tool using the aiosonic http client. For drawing charts we
use matplotlib and pandas.
Usage of uvloop is highly recommended.
Requirements
python>=3.6
Installation
pip install aioload
# optional, highly recommended, doesn't work on Windows
pip install uvloop
Usage
You need to specify your request in a settings file like config.ini:
[http]
sock_read=30
sock_connect=3

[test]
# target url for test
url=http://localhost:8080/api/v1/something
# methods: get, post, put, delete
method=post
## use body for send body in request
# if body is json, indicate correct header in headers section
# comment body line if you're doing a get request
body='{"foo": "bar"}'
# query params if needed, this will transform url
# in something like http://localhost:8080/api/v1/something?token=something

[params]
token=something

# headers if needed
[headers]
content-type=application/json
usage example
> aioload -h
usage: aioload [-h] [-d] [-v] [-n NUMBER_OF_REQUESTS] [-c CONCURRENCY] [--plot] testfile

positional arguments:
  testfile              Test file to be executed

optional arguments:
  -h, --help            show this help message and exit
  -d, --debug           true if present
  -v, --verbose         true if present
  -n NUMBER_OF_REQUESTS, --number_of_requests NUMBER_OF_REQUESTS
                        number of requests to be done, default: 100
  -c CONCURRENCY, --concurrency CONCURRENCY
                        concurrency (requests at the same time), default: 10
  --plot                draw charts if present
> aioload config.ini -n 3000 -c 100 --plot -v
2019-05-29 17:20:51,662 - __init__:135 - info - 8cf56ded860f41d8a86dab2aed05218f - starting script... -
2019-05-29 17:20:55,301 - __init__:102 - info - 8cf56ded860f41d8a86dab2aed05218f - done - min=14.54ms; max=212.21ms; mean=109.36ms; req/s=600.0; req/q_std=333.7; stdev=24.65; codes.200=3000; concurrency=100; requests=3000;
You can override aioload runner methods, here is an example. Then you should execute the script you made, in this example:
python sample/dynamic_test.py conf.ini -v
Note
Python has limits; if your application is crazy fast like this crystal server, the test will be limited by aiosonic’s client speed.
Contribute
fork
create a branch feature/your_feature
commit - push - pull request
Dependencies are handled with pip-tools
thanks :) |
aiolock | do not try
Home-page: https://github.com/bigpangl
Author: bigpangl
Author-email: [email protected]
License: MIT
Description: UNKNOWN
Platform: UNKNOWN |
aiolog | Asynchronous handlers for the standard python logging library.
Currently telegram (requires aiohttp)
and smtp (via aiosmtplib) handlers are available.
Installation
pip install aiolog
Repository: https://github.com/imbolc/aiolog
Configuration
Just use any way you prefer to configure the built-in logging library, e.g.:
logging.config.dictConfig({
    'version': 1,
    'handlers': {
        'telegram': {
            # any built-in `logging.Handler` params
            'level': 'DEBUG',
            'class': 'aiolog.telegram.Handler',
            # common `aiolog` params
            'timeout': 10,       # 60 by default
            'queue_size': 100,   # 1000 by default
            # handler specific params
            'token': 'your telegram bot token',
            'chat_id': 'telegram chat id',
        },
        'smtp': {
            'level': 'WARNING',
            'class': 'aiolog.smtp.Handler',
            'hostname': 'smtp.yandex.com',
            'port': 465,
            'sender': 'bot@email',
            'recipient': 'your@email',
            'use_tls': True,
            'username': 'smtp username',
            'password': 'smtp password',
        },
    },
    'loggers': {
        '': {
            'handlers': ['telegram', 'smtp'],
            'level': 'DEBUG',
        },
    }
})
Usage
You can use the built-in logging library as usual,
just add starting and stopping of aiolog.
log = logging.getLogger(__name__)

async def hello():
    log.debug('Hey')

aiolog.start()
loop = asyncio.get_event_loop()
loop.run_until_complete(hello())
loop.run_until_complete(aiolog.stop())
Look at the example folder for more examples.
aiohttp
With aiohttp, you can use a little more sugar.
Instead of starting and stopping aiolog directly, you can use:
aiolog.setup_aiohttp(app) |
aiologfields | aiologfields
aiologfields makes it easy to include correlation IDs, as well
as other contextual information, into log messages, across await calls
and loop.create_task() calls. Correlation IDs are critically
important for accurate telemetry in monitoring and debugging distributed
microservices.
Instructions
It couldn’t be easier:
aiologfields.install()
After this, every single task created will have a logging_fields
attribute. To add a field to a LogRecord, simply apply it to any task:
t = loop.create_task(coro)
t.logging_fields.correlation_id = '12345'
If you’re using a logging handler that produces JSON output
(like logjson!), or some other formatter that produces output with
all fields in the LogRecord, you will find that each record within the
context of the task will include an additional field called correlation_id
with a value of 12345.
Demo
This is adapted from one of the tests:
aiologfields.install()
correlation_id = str(uuid4())
logger = logging.getLogger('blah')

async def cf2():
    logger.info('blah blah')

async def cf1():
    ct = asyncio.Task.current_task()
    ct.logging_fields.correlation_id = correlation_id
    await cf2()

loop.run_until_complete(cf1())
In the LogRecord produced inside cf2(), an additional field
correlation_id is included, even though the field was set in
coroutine function cf1(). It would also have worked if cf2() had been executed in a separate
task itself, since the logging_fields namespace is copied between
nested tasks. |
aiologger | aiologger
About the Project
The built-in python logger is I/O blocking. This means that using the built-in logging module will interfere with your asynchronous application performance. aiologger aims to be the standard asynchronous non-blocking logging for python and asyncio.
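A minimal usage sketch, based on aiologger's documented Logger.with_default_handlers helper (the logger name is illustrative):
import asyncio
from aiologger import Logger

async def main():
    # Handlers write without blocking the event loop
    logger = Logger.with_default_handlers(name='my-app')
    await logger.info('Hello, non-blocking world!')
    # Flush and close handlers before the loop shuts down
    await logger.shutdown()

asyncio.run(main())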
Documentation
The project documentation can be found here: https://async-worker.github.io/aiologger/ |
aiologs | No description available on PyPI. |
aiologstash | asyncio logging handler for logstash.
Installation
pip install aiologstash
Usage
import logging
from aiologstash import create_tcp_handler

async def init_logger():
    handler = await create_tcp_handler('127.0.0.1', 5000)
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(handler)
Thanks
The library was donated by Ocean S.A.
Thanks to the company for contribution. |
aio-logstash | aio-logstash
python asyncio logstash logger adapter
Installation
pip install aio-logstash
Usage
import logging
import asyncio
from aio_logstash.handler import TCPHandler

async def main():
    handler = TCPHandler()
    await handler.connect('127.0.0.1', 5000)
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    logger.info('test', extra={'foo': 'bar'})
    await handler.exit()

if __name__ == '__main__':
    asyncio.run(main()) |
aiologstash2 | aiologstash2
asyncio logging handler for logstash.
Installation
pip install aiologstash2
Usage
import logging
from aiologstash2 import create_tcp_handler

async def init_logger():
    handler = await create_tcp_handler('127.0.0.1', 5000)
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(handler)
Thanks
This is an actively maintained fork of aio-libs' aiologstash.
The library was donated by Ocean S.A.
Thanks to the company for contribution. |
aiologto | aiologto
Unofficial Asynchronous Python Client for Logto
Latest Version:
Features
Unified Asynchronous and Synchronous Python Client for Logto
Supports Python 3.6+
Strongly Typed with Pydantic
Includes Function Wrappers to quickly add to existing projects
Utilizes Environment Variables for Configuration
Installation
# Install from PyPI
pip install aiologto
# Install from source
pip install git+https://github.com/GrowthEngineAI/aiologto.git
Usage
WIP - Simple Usage Example
import asyncio
from aiologto import Logto, UserListResponse
from aiologto.utils import logger

"""
Environment Vars that map to Logto.configure:
all vars are prefixed with LOGTO_

LOGTO_URL (url): str, takes precedence over LOGTO_SCHEME | LOGTO_HOST | LOGTO_PORT
LOGTO_SCHEME (scheme): str - defaults to 'http://'
LOGTO_HOST (host): str - defaults to None
LOGTO_PORT (port): int - defaults to 3000
LOGTO_APP_ID (app_id): str
LOGTO_APP_SECRET (app_secret): str
LOGTO_RESOURCE (resource): str - defaults to "https://api.logto.io"
LOGTO_OIDC_GRANT_TYPE (oidc_grant_type): str - defaults to "client_credentials"

## these variables are dynamically generated from the oidc
LOGTO_ACCESS_TOKEN (access_token): str - defaults to None
LOGTO_TOKEN_TYPE (token_type): str - defaults to None
LOGTO_JWT_ALGORITHMS (jwt_algorithms): str - defaults to None
LOGTO_JWT_OPTIONS (jwt_options): dict - defaults to {"verify_at_hash": False}
LOGTO_JWT_ISSUER (jwt_issuer): str - defaults to generated value
LOGTO_TIMEOUT (timeout): int - defaults to 10
LOGTO_IGNORE_ERRORS (ignore_errors): bool - defaults to False
"""

Logto.configure(
    url='...',
    app_id="...",
    app_secret="...",
    debug_enabled=True,
)

async def fetch_users():
    # Fetch all the users
    users: UserListResponse = await Logto.users.async_list()
    logger.info(f"Users: {users}")

    # Update a specific user
    user = users[0]
    user.custom_data["email"] = "[email protected]"
    user = await Logto.users.async_update(user)
    logger.info(f"User Updated: {user.dict()}")

asyncio.run(fetch_users()) |
aioloki | aioloki
An asynchronous python logging handler to stream logs to Grafana Loki
Installation
pip install aioloki
Usage
import asyncio
import logging

import aiohttp
import aioloki

async def main():
    session = aiohttp.ClientSession()
    handler = aioloki.AioLokiHandler(
        'http://localhost:3100',
        tags={'cluster': '1'},
        session=session,
    )
    log = logging.getLogger('test-logging')
    log.addHandler(handler)
    log.info('Setup aioloki successfully', extra={'tags': {'function': 'main'}})
    await session.close()

asyncio.run(main()) |
aiolookin | No description available on PyPI. |
aioloop-proxy | A proxy for asyncio.AbstractEventLoop for testing purposes.
When writing tests for asyncio-based code, there are conflicting requirements.
First, a single event loop for the whole test session (or test subset) is desired. For
example, if a web server starts slowly, there is a temptation to create a server only once
and access the single web server instance from each test.
Second, each test should be isolated. It means that asyncio tasks (timers, connections,
etc.) created by test A should be finished at test A finalization and should not affect
test B execution.
The library provides a loop proxy class that fully implements the
asyncio.AbstractEventLoop interface but redirects all actual work to the proxied
parent loop. It allows checking that all activities created with the proxy are finished
at proxy finalization. In turn, all tasks created with the parent loop
keep working during the proxy execution.
Loop proxies can be nested, e.g. global-loop -> module-loop -> test-loop is supported.
The library is test tool agnostic, e.g. it can be integrated with unittest and pytest
easily (the actual integration is out of the project scope).
Installation
pip install aioloop-proxy
Usage
import asyncio
import aioloop_proxy
loop = asyncio.new_event_loop()
server_addr = loop.run_until_complete(setup_and_run_test_server())
...
with aioloop_proxy(loop, strict=True) as loop_proxy:
    loop_proxy.run_until_complete(test_func(server_addr))
Sure, each test system (unittest, pytest, name it) should not run the code
snippet above as-is but incorporate it as a dedicated test-case class or plugin.
Extra loop methods
LoopProxy implements all asyncio.AbstractEventLoop public methods. Additionally,
it provides two proxy-specific ones: loop.check_and_shutdown() and loop.advance_time().
await proxy.check_and_shutdown(kind=CheckKind.ALL) can be used for checking if
the proxy finished without active tasks, open transports etc. kind is an enum.Flag described as the following:
class CheckKind(enum.Flag):
    TASKS = enum.auto()
    SIGNALS = enum.auto()
    SERVERS = enum.auto()
    TRANSPORTS = enum.auto()
    READERS = enum.auto()
    WRITERS = enum.auto()
    HANDLES = enum.auto()
    ALL = TASKS | SIGNALS | SERVERS | TRANSPORTS | READERS | WRITERS
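For instance, a test harness could report only the kinds it cares about at teardown (a minimal sketch; it assumes CheckKind is importable from the package root):
from aioloop_proxy import CheckKind  # assumed to be exported at package root

def teardown(loop_proxy):
    # Report dangling tasks and transports; dangling resources are
    # closed regardless of the kinds selected.
    loop_proxy.run_until_complete(
        loop_proxy.check_and_shutdown(kind=CheckKind.TASKS | CheckKind.TRANSPORTS)
    )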
false positive warning.
N.B. Dangling resources are always closed even if the corresponding kind is omitted.
A proxy loop should clean up all acquired resources at the test finish for the sake of
the test isolation principle.
proxy.advance_time(offset) is a perk that helps with writing tests for scenarios
that use timeouts, delays, etc.
Let’s assume we have code that should read data from a peer or raise TimeoutError after a 15 minute timeout. It can be done by shifting the proxy local time (the value returned by proxy.time()) 15 minutes forward artificially:
task = asyncio.create_task(fetch_or_timeout())
loop.advance_time(15 * 60)
try:
    await task
except TimeoutError:
    ...
In the example above, await task is resumed immediately because the test wall-clock is shifted by 15 minutes two lines above, and all timers created by the
proxy are adjusted accordingly. The parent loop wall-clock is not touched.
The method complexity is O(N) where N is the number of active timers created by the proxy.call_later() or proxy.call_at() methods. |
aiolotus | aiolotus
Unofficial Asynchronous Python Client for Lotus
Latest Version:
Official Client
Installation
# Install from PyPI
pip install aiolotus
# Install from source
pip install git+https://github.com/GrowthEngineAI/aiolotus.git
Usage
WIP - Simple Usage Example
import asyncio
from aiolotus import Lotus
from aiolotus.utils import logger

Lotus.configure(
    apikey='...',
    url='',
)

async def run_test():
    res = await Lotus.get_all_customers()
    logger.info(res)
    res = await Lotus.get_all_metrics()
    logger.info(res)
    res = await Lotus.get_all_plans()
    logger.info(res)
    res = await Lotus.get_all_subscriptions()
    logger.info(res)
    await Lotus.async_shutdown()

asyncio.run(run_test()) |
aiolxd | aiolxd
WIP AsyncIO LXD API for Python 3.
THIS PROJECT IS NOT READY FOR PRODUCTION USE
Example
import asyncio
from aiolxd import LXD

async def main() -> None:
    async with LXD.with_async("https://localhost:8443", cert=("client.crt", "client.key")) as lxd:
        # Request the creation of an instance
        create_task = await lxd.instance.create(
            name="test-instance",
            source="ubuntu/22.04",
            type_="virtual-machine",
        )
        await create_task.wait()  # Wait for the task to complete
        print(await lxd.instance.get("test-instance"))
        # architecture='x86_64' created_at='2023-02-07T13:05:12.631550731Z'
        # last_used_at='1970-01-01T00:00:00Z' location='none' name='test-instance'
        # profiles=['default'] project='default' restore=None stateful=False
        # status='Stopped' status_code=102 type='virtual-machine' description=''
        # devices={} ephemeral=False config=InstanceConfig(security_nesting=None)

        # Request the deletion of an instance
        delete_task = await lxd.instance.delete("test-instance")
        await delete_task.wait()  # Wait for the task to complete

asyncio.run(main())
TODO
Basic API (instance creation, deletion, etc.)
Logging
Websocket operation events (websocket support exists, but events are not parsed)
Tests
More API endpoints |
aiolyric | # AIOLyric
Python package for the Honeywell Lyric Platform.
## Attributions / Contributions
[@bramkragten](https://github.com/bramkragten) for the original implementation. Referenced his [package](https://github.com/bramkragten/python-lyric) quite a bit whilst writing this one.
[@ludeeus](https://github.com/ludeeus) for his generator class. Made reading json into objects super simple. :tada:
[Everyone else](https://github.com/timmo001/aiolyric/graphs/contributors). Thanks for the help! |
aiom3u8 | No description available on PyPI. |
aiom3u8downloader | Update package m3u8downloader to use aiohttp to speed up downloading of m3u8 urls.
Supports fragments disguised as images (png/jpg/jpeg), decoding them into ts files.
aiom3u8downloader is based on package m3u8downloader (https://pypi.org/project/m3u8downloader, version: 0.10.1).
ffmpeg is used to convert the downloaded fragments into the final mp4 video file.
Installation
To install aiom3u8downloader, simply:
$ sudo apt install -y ffmpeg
# python version >= python3.6
$ pip install aiom3u8downloader
Quick Start
Example command line usage:
aiodownloadm3u8 -o ~/Downloads/foo.mp4 https://example.com/path/to/foo.m3u8
If ~/.local/bin is not in $PATH, you can use the full path:
~/.local/bin/aiodownloadm3u8 -o ~/Downloads/foo.mp4 https://example.com/path/to/foo.m3u8
Here is the built-in command line help:
usage: aiom3u8downloader [-h] [--version] [--debug] --output OUTPUT
[--tempdir TEMPDIR] [--limit_conn LIMIT_CONN]
[--auto_rename] URL
download video at m3u8 url
positional arguments:
URL the m3u8 url
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
--debug enable debug log
--output OUTPUT, -o OUTPUT
output video filename, e.g. ~/Downloads/foo.mp4
--tempdir TEMPDIR temp dir, used to store .ts files before combing them into mp4
--limit_conn LIMIT_CONN, -conn LIMIT_CONN
limit amount of simultaneously opened connections
  --auto_rename, -ar    auto rename when output file name already exists
Limitations
This tool only parses minimum m3u8 extensions for selecting media playlist
from master playlist, downloading key and fragments from media playlist. If a
m3u8 file doesn’t download correctly, it’s probably some new extension was
added to the HLS spec which this tool isn’t aware of.
ChangeLog
v0.0.1
use aiohttp to download m3u8 url
v1.0.3
remove multiprocessing package
release to pypi |
aiomadeavr | aiomadeavr
A library/utility to control Marantz/Denon devices over the telnet port.
Installation
We are on PyPi, so:
pip3 install aiomadeavr
Why?
Another project, aio_marantz_avr, targets the
same problem. Unfortunately, it has a few shortcomings for my intended use. For one thing, whilst
it is using asyncio, it is not really asynchronous as you need to poll the device to get data. Second,
there is no automatic discovery of devices.
So I decided to write my own.
Note that I lifted some code from aio_marantz_avr, but
in the end, it is so far from the original that it made no sense to create this as a PR.
Running
This has been tested with a Marantz SR7013 receiver.
Although aiomadeavr is meant to be used as a library, the module can be run, just do:
python3 -m aiomadeavr
After a moment, if you type "enter", you should see a list of the devices that have been
discovered. You will be able to power the device on/off, mute/unmute it, set the volume, choose
the source and select the surround mode. You will also be able to change the sound channels bias.
Discovery
There is actually no way to discover the telnet service of those devices. So aiomadeavr cheats.
As far as I can tell, all recent Marantz/Denon networked devices support Denon's HEOS. That service advertises itself over the Simple Service Discovery Protocol. Discovery looks for those services
and, hopefully, the devices we can telnet to will answer the Denon/Marantz serial protocol.
Documentation
Here are the exposed API functions and object.
avr_factory
This is a coroutine. It is how one creates device instances.
Parameters are:
name: The friendly name of the instance, a string
addr: The IP address, a string
These 2 are required; there are also 2 optional parameters:
port: The port to connect to. An integer. Default is 23
timeout: A timeout,currently not used default 3.0If anything goes wrong, avr_factory will return None. If things go right, it will return an MDAVR objectMDAVRThis is the class used to communicate with the device.When created with avr_factory, the object will connect to the device and start reading the information
coming from the device. It will then issue a list of command to get the current state of the device.All communications with a device must be performed through a MDAVR instance.Here are the exposed attributes and method of the MDAVR class.String Attr: nameThe friendly name of the device. This was passed to avr_factory at creation time.Dictionary Attr: statusCurrent status of the device. Below is a pretty printed example from a marantz SR7013:Power: On
Main Zone: On
Zone 2: Off
Zone 3: Off
Muted: False
Z2 Muted: False
Z3 Muted: False
Volume: 50.0
Z2 Volume: 50.0
Z3 Volume: 1.0
Source: Bluray
Z2 Source: -
Z3 Source: Online Music
Surround Mode: Dolby Digital Surround
Channel Bias:
Front Left: 0.0
Front Right: 0.0
Centre: 0.0
Subwoofer: 0.0
Surround Left: 0.0
Surround Right: 0.0
Subwoofer2: 0.0
Front Top Left: 0.0
Front Top Right: 0.0
Rear Top Left: 0.0
Rear Top Right: 0.0
Picture Mode: ISF Day
Eco Mode: Auto
Sampling Rate: 192.0
String Attr: power, main_power, z2_power, z3_power
Current power status of the device: one of 'On' or 'Standby' for 'power', 'On' or 'Off' for the others.
Bool Attr: muted, z2_muted, z3_muted
Current "muted" status of the device: True or False
Float Attr: volume, z2_volume, z3_volume
Current zone volume of the device. From 0.0 to max_volume by 0.5 increments
Float Attr: max_volume
Maximum of the volume range.
String Attr: source, z2_source, z3_source
Current source of the device, for instance Bluray, CD, Set Top Box,...
List Attr: source_list
List of all the possible sources. When setting a source, the name MUST BE in this list.
Not all sources are available to all devices. aiomadeavr will try to get the list of inputs available to the device.
String Attr: sound_mode
Current sound processing mode, for instance: Stereo, DTS, Pure Direct,...
List Attr: sound_mode_list
List of all the possible sound_mode values. When setting a sound_mode, the name MUST BE in this list.
Not all sound_mode values are available to all devices.
String Attr: picture_mode
Current video processing mode, for instance: Custom, Vivid, ISF Day,...
List Attr: picture_mode_list
List of all the possible picture_mode values. When setting a picture_mode, the name MUST BE in this list.
Not all picture_mode values are available to all devices.
String Attr: eco_mode
Current economy mode setting, one of 'On', 'Off' or 'Auto'
List Attr: eco_mode_list
List of all the possible economy mode settings. When setting the economy mode, the name MUST BE in this list.
Economy mode is not available on all devices.
Dictionary Attr: channels_bias
The bias for all the currently available channels. The key is the channel name, and the
value is the bias as a float. The bias is between -12 dB and +12 dB.
List Attr: channels_bias_list
List of all the possible channels for which a bias can be set. When setting a channel bias the name MUST BE in this list.
Note that this list is dynamic as it depends on the sound mode. Values are like: Front Right, Surround Left,...
Method: refresh
No parameter.
Ask the device to query its current status. Returns None.
Method: turn_on, main_turn_on, z2_turn_on, z3_turn_on
No parameter.
Turn on the device/zone. Returns None. 'turn_on' will affect all zones.
Method: turn_off, main_power_off, z2_power_off, z3_power_off
No parameter.
Turn off the device/zone. Returns None.
Note that the associated value is "Standby" for 'power' and "Off" for zones.
Method: mute_volume, z2_mute_volume, z3_mute_volume
One parameter:
mute: boolean
Returns None.
Method: set_volume, z2_set_volume, z3_set_volume
One parameter:
level: float, value between 0.0 and 98.0, in 0.5 increments for main zone and 1.0 increments for other zones.
Set the volume level.
Returns None.
Method: volume_up, z2_volume_up, z3_volume_up
No parameter.
Raise the volume level by 0.5 for main zone, 1.0 for others.
Returns None.
Method: volume_down, z2_volume_down, z3_volume_down
No parameter.
Lower the volume level by 0.5 for main zone, 1.0 for others.
Returns None.
Method: set_channel_bias
Two parameters:
chan: The channel name. Must be in channels_bias_list
level: float, value between -12.0 and 12.0 in 0.5 increments
Set the bias level for the specified channel.
Returns None.
Method: channel_bias_up
One parameter:
chan: The channel name. Must be in channels_bias_list
Raises the bias level for the specified channel by 0.5.
Returns None.
Method: channel_bias_down
One parameter:
chan: The channel name. Must be in channels_bias_list
Lowers the bias level for the specified channel by 0.5.
Returns None.
Method: channel_bias_reset
No parameter.
Reset all the channels' bias to 0.0.
Returns None.
Method: select_source, z2_select_source, z3_select_source
One parameter:
source: The source name. Must be in source_list
Make the source the current active one for the Main Zone.
Returns None.
Method: select_sound_mode
One parameter:
mode: The mode name. Must be in sound_mode_list
Set the sound mode for the active zone. The name of the sound mode
in the status may not be the same as the one set. For instance, setting 'Auto' may lead to a
'Stereo' mode.
Returns None.
Method: select_picture_mode
One parameter:
mode: The mode name. Must be in picture_mode_list
Set the picture mode for the active zone.
Returns None.
Method: select_eco_mode
One parameter:
mode: The mode name. Must be in eco_mode_list
Set the eco mode for the device.
Returns None.
Method: notifyme
One parameter:
func: A callable with 2 parameters:
lbl: The name of the property, a key in status
value: The new value
This function registers a callable to be called when one
of the status values changes. For 'Channel Bias' it is called
every time the channel bias info is received.
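For example (a minimal sketch; avr is an MDAVR instance returned by avr_factory, and the printed format is illustrative):
def on_change(lbl, value):
    # lbl is a key of the status dictionary, value is its new value
    print(f"{lbl} -> {value}")

avr.notifyme(on_change)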
Coroutine start_discovery
One parameter:
callb: A callable. It is called when and HEOS service is discoverd. The callablew must accept one parameter, a dictionary with the following keys:
ip: ip address of the device
name: friendly name
model: The device model
serial: the device serial number
Caveat
Trying to set the current value will often result in an AvrTimeoutError exception.
The device will simply not respond to unknown commands and will secretly despise you for it. This makes it difficult to use a timeout on sending to detect disconnection.
The channel bias list may get out of sync when setting the sound mode to 'Auto'. It looks like there is a delay before that information is sent.
Afterthoughts
The module uses asyncio Streams. I think using protocols may have been a wiser choice.
Currently, most of the coroutines of the MDAVR object generate a future and wait for it. Not sure it is a good idea. May be removed in the future. Oh, wait! All that silly use of future has now been cleaned up. |
aiomagra | AIO Magra |
aiomail | # aiomail
easy and async e-mail package |
aiomailru | aiomailru
aiomailru is a python Mail.Ru API wrapper.
The main features are:
authorization (Authorization Code, Implicit Flow, Password Grant, Refresh Token)
REST API methods
web scrapers
Usage
To use Mail.Ru API you need a registered app and a Mail.Ru account.
For more details, see aiomailru Documentation.
Client application
Use ClientSession when REST API is needed in:
a client component of the client-server application
a standalone mobile/desktop application
i.e. when you embed your app’s info (private key) in publicly available code.
from aiomailru import ClientSession, API

session = ClientSession(app_id, private_key, access_token, uid)
api = API(session)

events = await api.stream.get()
friends = await api.friends.getOnline()
Use access_token and uid that were received after authorization. For more details, see authorization instruction.
Server application
Use ServerSession when REST API is needed in:
a server component of the client-server application
requests from your servers
from aiomailru import ServerSession, API

session = ServerSession(app_id, secret_key, access_token)
api = API(session)

events = await api.stream.get()
friends = await api.friends.getOnline()
Use access_token that was received after authorization.
For more details, see authorization instruction.
Installation
$ pip install aiomailru
or
$ python setup.py install
Supported Python Versions
Python 3.5, 3.6, 3.7 and 3.8 are supported.
Test
Run all tests.
$ python setup.py test
Run tests with PyTest.
$ python -m pytest [-k TEST_NAME]
License
aiomailru is released under the BSD 2-Clause License. |
aio_manager | .. image:: https://img.shields.io/pypi/v/aio_manager.svg
   :target: https://pypi.org/project/aio_manager

.. image:: https://img.shields.io/travis/rrader/aio_manager/master.svg
   :target: http://travis-ci.org/rrader/aio_manager

.. image:: https://img.shields.io/pypi/pyversions/aio_manager.svg

Script manager for aiohttp.
========

Quick Start
------------------

Install from PYPI:

.. code:: shell

    pip install aio_manager

For optional features, feel free to depend on extras:

.. code:: shell

    pip install aio_manager[mysql,postgres]
    pip install aio_manager[sa]

OR (less popular) via ``setup.py``:

.. code:: shell

    python -m setup install

Example
------------------

.. code:: python
   :number-lines:

    app = build_application()
    manager = Manager(app)
    sqlalchemy.configure_manager(manager, app, Base,
                                 DATABASE_USERNAME,
                                 DATABASE_NAME,
                                 DATABASE_HOST,
                                 DATABASE_PASSWORD) |
aiomangadex | aiomangadex
An asynchronous API wrapper for mangadex.
Basic Usage
import aiomangadex
import aiohttp
import asyncio

async def fetch(id):
    session = aiohttp.ClientSession()
    manga = await aiomangadex.fetch_manga(id, session)
    await session.close()
    print(manga.description)

asyncio.get_event_loop().run_until_complete(fetch(34198))
For more info, visit the docs here. |
aiomangadexapi | Asynchronous MangaDex python API
An unofficial asynchronous python MangaDex API built with the JSON API and web scraping.
Key features
get data on any manga from mangadex
get updates from mangadex main page
get the mangas from anyone's list with a rss link
get any chapter link from mangadex
Examples
# couple examples
import aiomangadexapi
import asyncio

async def get_manga():
    # log in to mangadex
    session = await aiomangadexapi.login(username='username', password='password')
    # search for solo leveling (will return the first result of the search on mangadex)
    manga = await aiomangadexapi.search(session, 'solo leveling')
    await session.close()  # close the session
    return manga

manga = asyncio.run(get_manga())
Documentation
https://github.com/Mudy7/aiomangadexapi |
aiomanhole | aiomanhole
Manhole for accessing asyncio applications. This is useful for debugging
application state in situations where you have access to the process, but need
to access internal application state.
Adding a manhole to your application is simple:
from aiomanhole import start_manhole

start_manhole(namespace={
    'gizmo': application_state_gizmo,
    'whatsit': application_state_whatsit,
})
Quick example, in one shell, run this:
$ python -m aiomanhole
In a secondary shell, run this:
$ nc -U /var/tmp/testing.manhole
Well this is neat
>>> f = 5 + 5
>>> f
10
>>> import os
>>> os.getpid()
4238
>>> import sys
>>> sys.exit(0)
And you’ll see the manhole you started has exited.
The package provides both a threaded and non-threaded interpreter, and allows
you to share the namespace between clients if you want.
I’m getting “Address is already in use” when I start! Help!
Unlike regular TCP/UDP sockets, UNIX domain sockets are entries in the
filesystem. When your process shuts down, the UNIX socket that is created is
not cleaned up. What this means is that when your application starts up again,
it will attempt to bind a UNIX socket to that path again and fail, as it is
already present (it’s “already in use”).
The standard approach to working with UNIX sockets is to delete them before you
try to bind to it again, for example:
import os

try:
    os.unlink('/path/to/my.manhole')
except FileNotFoundError:
    pass

start_manhole('/path/to/my.manhole')
You may be tempted to try and clean up the socket on shutdown, but don’t. What
if your application crashes? What if your computer loses power? There are lots
of things that can go wrong, and hoping the previous run was successful, while
admirably positive, is not something you can do.
Can I specify what is available in the manhole?
Yes! When you call start_manhole, just pass along a dictionary of what you
want to provide as the namespace parameter:
from aiomanhole import start_manhole

start_manhole(namespace={
    'gizmo': application_state_gizmo,
    'whatsit': application_state_whatsit,
    'None': 5,  # don't do this though
})
When should I use threaded=True?
Specifying threaded=True means that statements in the interactive session are
executed in a thread, as opposed to executing them in the event loop.
Say for example you did this in a non-threaded interactive session:
>>> while True:
... pass
...
You’ve just broken your application! You can’t abort that without restarting
the application. If however you ran that in a threaded application, you’d
‘only’ have a thread trashing the CPU, slowing down your application, as
opposed to making it totally unresponsive.
By default, a threaded interpreter will time out commands after 5 seconds,
though this is configurable. Note that this will not kill the thread, but
allow you to keep running commands.
Change History
0.7.0 (23rd January 2022)
Added support for Python 3.10. Thank you to Peter Bábics for contributing this!
Removed support for Python 3.5.
0.6.0 (30th April 2019)
Don’t use the global loop. Thanks Timothy Fitz!
Allow a port of 0. Thanks Timothy Fitz!
Fix unit test failure.
0.5.0 (6th August 2018)
Fix syntax error in 3.7
Drop 3.4 support.
0.4.2 (3rd March 2017)
Handle clients putting the socket into a half-closed state when an EOF
occurs.
0.4.1 (3rd March 2017)
Ensure prompts are bytes, broken in 0.4.0.
0.4.0 (3rd March 2017)
Ensure actual syntax errors get reported to the client.
0.3.0 (23rd August 2016)
Behaviour change: aiomanhole no longer attempts to remove the UNIX socket
on shutdown. This was flakey behaviour and does not match best practice
(i.e. removing the UNIX socket on startup before you start your server). As
a result, errors creating the manhole will now be logged instead of silently
failing.
start_manhole now returns a Future that you can wait on.
Giving a loop to start_manhole now works more reliably. This won’t matter
for most people.
Feels “snappier”
0.2.1 (14th September 2014)
Handle a banner of None.
Fixed small typo in MANIFEST.in for the changelog.
Feels “snappier”
0.2.0 (25th June 2014)
Handle multiline statements much better.
setup.py pointed to wrong domain for project URL
Removed pointless insertion of ‘_’ into the namespace.
Added lots of tests.
Feels “snappier”
0.1.1 (19th June 2014)
Use setuptools as a fallback when installing.
Feels “snappier”
0.1 (19th June 2014)
Initial release
Feels “snappier” |
aio.manhole.server | Manhole server for the aio asyncio framework
Build status
Installation
Requires python >= 3.4
Install with:
pip install aio.manhole.server
Quick start - Manhole server
Save the following into a file “manhole.conf”
[server/my_manhole_server]
factory = aio.manhole.server.factory
port = 7373
Run with the aio run command
aio run -c manhole.conf
You should now be able to telnet into the running server on port 7373
aio.manhole.server usage
Configuration
Lets create a manhole configuration
>>> config = """
... [aio]
... log_level = ERROR
...
... [server/server_name]
... factory = aio.manhole.server.factory
... port = 7373
...
... """>>> import sys
>>> import io
>>> import aiomanhole
>>> import aio.testing
>>> import aio.app
>>> from aio.app.runner import runner
When we run the manhole server, it’s accessible as “server_name” from aio.app.servers
>>> @aio.testing.run_forever(sleep=1)
... def run_manhole_server(config):
...     yield from runner(['run'], config_string=config)
...
...     def call_manhole():
...         print(aio.app.servers["server_name"])
...         aio.app.clear()
...
...     return call_manhole
>>> run_manhole_server(config)
<Server sockets=[<socket.socket ...laddr=('0.0.0.0', 7373)...>
Lets try calling the manhole server
>>> import asyncio
>>> import telnetlib3
>>> @aio.testing.run_forever(sleep=1)
... def run_manhole_server(config):
...     yield from runner(['run'], config_string=config)
...
...     class TestTelnetClient(telnetlib3.TelnetClient):
...
...         def data_received(self, data):
...             print(data)
...
...     def call_manhole():
...         loop = asyncio.get_event_loop()
...         transport, protocol = yield from loop.create_connection(
...             TestTelnetClient, "127.0.0.1", 7373)
...         aio.app.clear()
...
...     return call_manhole
>>> run_manhole_server(config)
b'hello...\n>>> ' |
aio-marantz-avr | AsyncIO Marantz AVR
AsyncIO access to Marantz AVRs.
Free software: MIT license
Documentation: https://aio-marantz-avr.readthedocs.io.
Features
Control and read status of a Marantz AVR over Telnet using python AsyncIO.
Command line tool.
Supported models: SR7011 (tested), AV7703, SR6011, SR5011, NR1607
Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
History
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
0.1.0 (2020-01-07)
First release on PyPI.
Added
Basic control of limited Marantz AVR models. |
aiomarionette | FireFox Marionette Client for asyncio
aiomarionette provides an asynchronous client interface for the Firefox
Marionette remote control protocol.
Usage
To use aiomarionette, create an instance of the Marionette class. By
default, the client will attempt to connect to the Marionette socket on the
local machine, port 2828. You can specify the host and/or port arguments to
change this. Be sure to call the connect method first, before calling any
of the command methods.
async with aiomarionette.Marionette() as mn:
    mn.connect()
    mn.navigate('https://getfirefox.com/')
Compared to marionette_driver
The official Python client for Firefox Marionette is marionette_driver.
Although it is more complete thanaiomarionette(at least for now), it only
provides a blocking API.
Unlike marionette_driver, aiomarionette does not currently support launching
Firefox directly. You must explicitly start a Firefox process in Marionette mode
(e.g. by launching firefox --marionette) before connecting to it with aiomarionette. |
aiomas | aiomas – A library for multi-agent systems and RPC based on asyncio
aiomas is an easy-to-use library for request-reply channels, remote
procedure calls (RPC) and multi-agent systems (MAS). It’s written in pure
Python on top of asyncio.
Here are three simple examples that show the different layers of aiomas and
what they add on top of each other:
The request-reply channel has the lowest level of abstraction (but already
offers more than vanilla asyncio):
>>> import aiomas
>>>
>>> async def handle_client(channel):
...     """Handle a client connection."""
...     req = await channel.recv()
...     print(req.content)
...     await req.reply('cya')
...     await channel.close()
>>>
>>> async def client():
...     """Client coroutine: Send a greeting to the server and wait for a
...     reply."""
...     channel = await aiomas.channel.open_connection(('localhost', 5555))
...     rep = await channel.send('ohai')
...     print(rep)
...     await channel.close()
>>>
>>> server = aiomas.run(aiomas.channel.start_server(('localhost', 5555), handle_client))
>>> aiomas.run(client())
ohai
cya
>>> server.close()
>>> aiomas.run(server.wait_closed())
The RPC layer adds remote procedure calls on top of it:
>>> import aiomas
>>>
>>> class MathServer:
...     router = aiomas.rpc.Service()
...
...     @aiomas.expose
...     def add(self, a, b):
...         return a + b
...
>>> async def client():
...     """Client coroutine: Call the server's "add()" method."""
...     rpc_con = await aiomas.rpc.open_connection(('localhost', 5555))
...     rep = await rpc_con.remote.add(3, 4)
...     print('What’s 3 + 4?', rep)
...     await rpc_con.close()
>>>
>>> server = aiomas.run(aiomas.rpc.start_server(('localhost', 5555), MathServer()))
>>> aiomas.run(client())
What’s 3 + 4? 7
>>> server.close()
>>> aiomas.run(server.wait_closed())
Finally, the agent layer hides some of the boilerplate code required to set up
the sockets and allows agent instances to easily talk with each other:
>>> import aiomas
>>>
>>> class TestAgent(aiomas.Agent):
...     def __init__(self, container):
...         super().__init__(container)
...         print('Ohai, I am %s' % self)
...
...     async def run(self, addr):
...         remote_agent = await self.container.connect(addr)
...         ret = await remote_agent.service(42)
...         print('%s got %s from %s' % (self, ret, remote_agent))
...
...     @aiomas.expose
...     def service(self, value):
...         return value
>>>
>>> c = aiomas.Container.create(('localhost', 5555))
>>> agents = [TestAgent(c) for i in range(2)]
Ohai, I am TestAgent('tcp://localhost:5555/0')
Ohai, I am TestAgent('tcp://localhost:5555/1')
>>> aiomas.run(until=agents[0].run(agents[1].addr))
TestAgent('tcp://localhost:5555/0') got 42 from TestAgentProxy('tcp://localhost:5555/1')
>>> c.shutdown()
aiomas is released under the MIT license. It requires Python 3.4 and above
and runs on Linux, OS X, and Windows.
Installation
aiomas requires Python >= 3.6 (or PyPy3 >= 5.10.0). It uses the JSON codec
by default and only has pure Python dependencies.
Install aiomas via pip by running:
$ pip install aiomas
You can enable the optional MsgPack codec or its Blosc compressed version
by installing the corresponding features (note that you need a C compiler to
install them):
$ pip install aiomas[mp]   # Enables the MsgPack codec
$ pip install aiomas[mpb]  # Enables the MsgPack and MsgPackBlosc codecs
Features
aiomas just puts three layers of abstraction around raw TCP / unix domain
sockets provided by asyncio:
Agents and agent containers:
The top-layer provides a simple base class for your own agents. All agents
live in a container.
Containers take care of creating agent instances and performing the
communication between them.
The container provides a clock for the agents. This clock can either be
synchronized with the real (wall-clock) time or be set by an external process
(e.g., other simulators).
RPC:
The rpc layer implements remote procedure calls which let you call methods
on remote objects nearly as if they were normal objects:
Instead of ret = obj.meth(arg) you write ret = await obj.meth(arg).
Request-reply channel:
The channel layer is the basis for the rpc layer. It sends JSON or MsgPack
encoded byte strings over TCP or unix domain sockets. It also maps
replies (of success or failure) to their corresponding request.
Other features:
TLS support for authorization and encrypted communication.
Interchangeable and extensible codecs: JSON and MsgPack (the latter
optionally compressed with Blosc) are built-in. You can add custom codecs or
write (de)serializers for your own objects to extend a codec.
Deterministic, emulated sockets: A LocalQueue transport lets you send and
receive messages in a deterministic and reproducible order within a single
process. This helps testing and debugging distributed algorithms.
Planned features
Some ideas for future releases:
Optional automatic re-connect after connection loss
Contribute
Issue Tracker: https://gitlab.com/sscherfke/aiomas/issues
Source Code: https://gitlab.com/sscherfke/aiomas
Set-up a development environment with:
$ virtualenv -p `which python3` aiomas
$ pip install -r requirements-setup.txt
Run the tests with:
$ pytest
$ # or
$ tox
Support
Documentation: https://aiomas.readthedocs.io/en/latest/
Mailing list: https://groups.google.com/forum/#!forum/python-tulip
Stack Overflow: http://stackoverflow.com/questions/tagged/aiomas
IRC: #asyncio
License
The project is licensed under the MIT license.
Changelog
2.0.1 – 2017-12-29
[CHANGE] Restore support for Python 3.5 so that the docs on Read the Docs
build again.
2.0.0 – 2017-12-28
[BREAKING] Converted to f-Strings and async/await syntax. The
minimum required Python versions are now Python 3.6 and PyPy3 5.10.0.
[BREAKING] Removed aiomas.util.async() and aiomas.util.create_task().
[CHANGE] Move from Bitbucket and Mercurial to GitLab and Git.
[FIX] Adjust to asyncio changes and explicitly pass references to the current
event loop where necessary.
You can find information about older versions in the documentation.
Authors
The original author of aiomas is Stefan Scherfke.
The initial development has kindly been supported by OFFIS. |
aiomast | aiomastAsynchronous Mastodon library for Python.*** WARNING: This library is still very experimental and unstable. Don't even think about using it in production. *** |
aiomatrix | API endpoint for the matrix.org protocol for async Python.
Designed for writing GUI/TUI clients and bots. |
aiomatrix-py | aiomatrix is a simple and fully asynchronous library for creating bots in Matrix without hassle.
Resources
Community: #aiomatrix-en
Russian community: #aiomatrix-ru
Source code: Github
Issues: Github bug tracker |
aiombus | No description available on PyPI. |
aiomc | aiomc
Forked from bmc.
aiomc is an Async Python 3 wrapper library for MinIO's command line interfaces mc and minio.
MinIO is an Amazon Simple Storage Service (S3) compatible object storage. It has a useful Python client library which unfortunately lacks administrative capabilities that the mc and minio command line interfaces provide, such as adding users and hosts, which we need to do for the iko machine learning platform. This library solves that problem.
Installation
pip install aiomc |
aiomcache | memcached client for asyncio
asyncio (PEP 3156) library to work with memcached.
Getting started
The API looks very similar to the other memcache clients:
import asyncio
import aiomcache

async def hello_aiomcache():
    mc = aiomcache.Client("127.0.0.1", 11211)
    await mc.set(b"some_key", b"Some value")
    value = await mc.get(b"some_key")
    print(value)
    values = await mc.multi_get(b"some_key", b"other_key")
    print(values)
    await mc.delete(b"another_key")

asyncio.run(hello_aiomcache())
Version 0.8 introduces FlagClient, which allows registering callbacks to set or process flags. See examples/simple_with_flag_handler.py
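The rough shape is as follows (a sketch only: the handler names follow the example file, but their exact signatures are assumptions, so check the bundled example before relying on them):
import aiomcache

# Sketch only; handler signatures are assumptions based on the example file.
async def set_flag_handler(value):
    # Return the encoded value plus the flags to store with it
    return value, 0

async def get_flag_handler(value, flags):
    # Use the stored flags to decide how to decode the value
    return value

mc = aiomcache.FlagClient("127.0.0.1", 11211,
                          set_flag_handler=set_flag_handler,
                          get_flag_handler=get_flag_handler)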
CHANGES
0.8.1 (2023-02-10)
Add conn_args to Client to allow TLS and other options when connecting to memcache.
0.8.0 (2022-12-11)
Add FlagClient to support memcached flags.
Fix type annotations for @conn.
Fix rare exception caused by memcached server dying in middle of operation.
Fix get method to not use CAS.
0.7.0 (2022-01-20)
Added support for Python 3.10
Added support for non-ascii keys
Added type annotations
0.6.0 (2017-12-03)
Drop python 3.3 support
0.5.2 (2017-05-27)
Fix issue with pool concurrency and task cancellation
0.5.1 (2017-03-08)
Added MANIFEST.in
0.5.0 (2017-02-08)
Added gets and cas commands
0.4.0 (2016-09-26)
Make max_size strict #14
0.3.0 (2016-03-11)
Dockerize tests
Reuse memcached connections in Client Pool #4
Fix stats parse to compatible more mc class software #5
0.2 (2015-12-15)
Make the library Python 3.5 compatible
0.1 (2014-06-18)
Initial release |
aiomcache-multi | No description available on PyPI. |