package | package-description |
---|---|
aioconfig | aioconfig efficiently and thread-safely stores configurations in the background (asynchronously).

Installation

pip install aioconfig

Usage

The interface of aioconfig is very easy to use. For example, both set(key, value) and await set(key, value) store a key/value pair, where the former is a fire-and-forget asynchronous call while the latter blocks until the data is written to disk.

Init

import aioconfig
storage = aioconfig.get_storage(await aioconfig.attach('example.db'))
section = await aioconfig.get('default')

Delete

section.delete(key='foo')

Blocking delete (wait until it's done)

await section.delete(key='foo')

Get

value1 = await section.get(key='foo', default='bar')
value2 = await section.get(key='baz', default=12.3)

Get all

value = await section.get_all()

Set (fire-and-forget)

section.set(key='foo', value='bar')
section.set(key='baz', value=12.3)

Blocking set (wait until it's done)

await section.set(key='foo', value='bar')
await section.set(key='baz', value=12.3)

Batch set (fire-and-forget) (TBD)

with storage.transation():
    storage.set(key='foo', value='bar', section='default_section')
    storage.set(key='baz', value=12.3, section='default_section')

Blocking batch set (wait until it's done) (TBD)

async with storage.transation():
    storage.set(key='foo', value='bar', section='default_section')
    storage.set(key='baz', value=12.3, section='default_section') |
aio.config | Configuration utilities for the aio asyncio framework.

Installation

Install with:

pip install aio.config

Configuration finder

The configuration finder searches the following paths for configuration files: aio.conf, etc/aio.conf, /etc/aio.conf.

Configuration parser

The configuration parser uses configparser.ExtendedInterpolation |
aioconnect | not yet working |
aioconnection | Async messaging with devices through different interfaces |
aioconnectors | Simple secure asynchronous message queue

Features
Installation
Example Point to point : Server and Client
Example publish/subscribe : Broker, Subscriber, and Publisher
High Level Design
Use Cases
Usage
1.Encryption
2.Run a connector
3.Send/receive messages
4.ConnectorManager and ConnectorAPI
5.send_message
6.Programmatic management tools
7.Command line interface management tools
8.Testing tools
9.Embedded chat
Containers
Windows

FEATURES

aioconnectors is an easy to set up message queue and broker that works on Unix-like systems. Requirements are: Python >= 3.6, and openssl installed.
It provides bidirectional transfer of messages and files, optional authentication and encryption, persistence and reconnection in case of connection loss, proxy support, and client filtering.
It is a point to point broker built on the client/server model, but both peers can push messages. It can also easily be configured as a publish/subscribe broker.
Based on asyncio, message sending and receiving are asynchronous, with the option to wait asynchronously for a response.
A connector can be configured with a short json file.
An embedded command line tool lets you easily run a connector and manage it with shell commands.
A simple Python API provides functions like starting/stopping a connector, sending a message, receiving messages, and other management capabilities. To support other languages for the API, only the file standalone_api.py should be transpiled.

INSTALLATION

pip3 install aioconnectors

BASIC EXAMPLE - POINT TO POINT

You can run a connector with a single shell command:

python3 -m aioconnectors create_connector <config_json_path>

This is covered in section 2, but this example shows the programmatic way to run connectors.
This is a basic example of a server and a client sending messages to each other. For more interesting examples, please refer to applications.py or aioconnectors_test.py.
For both server and client, connector_manager runs the connector, and connector_api sends/receives messages.
In this example, connector_manager and connector_api run in the same process for convenience. They can obviously run in different processes, as shown in the other examples.
In this example we run server and client on the same machine, since server_sockaddr is set to "127.0.0.1".
To run server and client on different machines, you should modify the server_sockaddr value in both the server and client code, with the ip address of the server.
You can run multiple clients, just set a different client_name for each client.

1.No encryption

You can run the following example code directly, since encryption is disabled.
In case you want to use this example with encryption, you should read 2. and 3. after the examples.

Server example

import asyncio
import aioconnectors
loop = asyncio.get_event_loop()
server_sockaddr = ('127.0.0.1',10673)
connector_files_dirpath = '/var/tmp/aioconnectors'
#create connector
connector_manager = aioconnectors.ConnectorManager(is_server=True, server_sockaddr=server_sockaddr, use_ssl=False, use_token=False,
ssl_allow_all=True, connector_files_dirpath=connector_files_dirpath,
certificates_directory_path=connector_files_dirpath,
send_message_types=['any'], recv_message_types=['any'],
file_recv_config={'any': {'target_directory':connector_files_dirpath}},
reuse_server_sockaddr=True)
task_manager = loop.create_task(connector_manager.start_connector())
loop.run_until_complete(task_manager)
#create api
connector_api = aioconnectors.ConnectorAPI(is_server=True, server_sockaddr=server_sockaddr,
connector_files_dirpath=connector_files_dirpath,
send_message_types=['any'], recv_message_types=['any'],
default_logger_log_level='INFO')
#start receiving messages
async def message_received_cb(logger, transport_json, data, binary):
    print('SERVER : message received', transport_json, data.decode())
loop.create_task(connector_api.start_waiting_for_messages(message_type='any', message_received_cb=message_received_cb))
#start sending messages
async def send_messages(destination):
    await asyncio.sleep(2)
    index = 0
    while True:
        index += 1
        await connector_api.send_message(data={'application message': f'SERVER MESSAGE {index}'},
                                         message_type='any', destination_id=destination)
        await asyncio.sleep(1)
loop.create_task(send_messages(destination='client1'))
try:
    print(f'Connector is running, check log at {connector_files_dirpath+"/aioconnectors.log"}'
          f', type Ctrl+C to stop')
    loop.run_forever()
except KeyboardInterrupt:
    print('Connector stopped !')
#stop receiving messages
connector_api.stop_waiting_for_messages(message_type='any')
#stop connector
task_stop = loop.create_task(connector_manager.stop_connector(delay=None, hard=False, shutdown=True))
loop.run_until_complete(task_stop)

Client example

import asyncio
import aioconnectors
loop = asyncio.get_event_loop()
server_sockaddr = ('127.0.0.1',10673)
connector_files_dirpath = '/var/tmp/aioconnectors'
client_name = 'client1'
#create connector
connector_manager = aioconnectors.ConnectorManager(is_server=False, server_sockaddr=server_sockaddr,
use_ssl=False, ssl_allow_all=True, use_token=False,
connector_files_dirpath=connector_files_dirpath,
certificates_directory_path=connector_files_dirpath,
send_message_types=['any'], recv_message_types=['any'],
file_recv_config={'any': {'target_directory':connector_files_dirpath}},
client_name=client_name)
task_manager = loop.create_task(connector_manager.start_connector())
loop.run_until_complete(task_manager)
#create api
connector_api = aioconnectors.ConnectorAPI(is_server=False, server_sockaddr=server_sockaddr,
connector_files_dirpath=connector_files_dirpath, client_name=client_name,
send_message_types=['any'], recv_message_types=['any'],
default_logger_log_level='INFO')
#start receiving messages
async def message_received_cb(logger, transport_json, data, binary):
    print('CLIENT : message received', transport_json, data.decode())
loop.create_task(connector_api.start_waiting_for_messages(message_type='any', message_received_cb=message_received_cb))
#start sending messages
async def send_messages():
    await asyncio.sleep(1)
    index = 0
    while True:
        index += 1
        await connector_api.send_message(data={'application message': f'CLIENT MESSAGE {index}'}, message_type='any')
        await asyncio.sleep(1)
loop.create_task(send_messages())
try:
    print(f'Connector is running, check log at {connector_files_dirpath+"/aioconnectors.log"}'
          f', type Ctrl+C to stop')
    loop.run_forever()
except KeyboardInterrupt:
    print('Connector stopped !')
#stop receiving messages
connector_api.stop_waiting_for_messages(message_type='any')
#stop connector
task_stop = loop.create_task(connector_manager.stop_connector(delay=None, hard=False, shutdown=True))
loop.run_until_complete(task_stop)

BASIC EXAMPLE - PUBLISH/SUBSCRIBE

You can run the following code of a broker, a publisher and a subscriber in 3 different shells on the same machine out of the box.
You should modify some values as explained in the previous example in order to run on different machines, and with encryption.

Broker example

Just a server with pubsub_central_broker=True

import asyncio
import aioconnectors
loop = asyncio.get_event_loop()
server_sockaddr = ('127.0.0.1',10673)
connector_files_dirpath = '/var/tmp/aioconnectors'
#create connector
connector_manager = aioconnectors.ConnectorManager(is_server=True, server_sockaddr=server_sockaddr, use_ssl=False, use_token=False,
ssl_allow_all=True, connector_files_dirpath=connector_files_dirpath,
certificates_directory_path=connector_files_dirpath,
send_message_types=['any'], recv_message_types=['any'],
file_recv_config={'any': {'target_directory':connector_files_dirpath}},
pubsub_central_broker=True, reuse_server_sockaddr=True)
task_manager = loop.create_task(connector_manager.start_connector())
loop.run_until_complete(task_manager)
#create api
connector_api = aioconnectors.ConnectorAPI(is_server=True, server_sockaddr=server_sockaddr,
connector_files_dirpath=connector_files_dirpath,
send_message_types=['any'], recv_message_types=['any'],
default_logger_log_level='INFO')
#start receiving messages
async def message_received_cb(logger, transport_json, data, binary):
    print('SERVER : message received', transport_json, data.decode())
loop.create_task(connector_api.start_waiting_for_messages(message_type='any', message_received_cb=message_received_cb))
try:
    print(f'Connector is running, check log at {connector_files_dirpath+"/aioconnectors.log"}'
          f', type Ctrl+C to stop')
    loop.run_forever()
except KeyboardInterrupt:
    print('Connector stopped !')
#stop receiving messages
connector_api.stop_waiting_for_messages(message_type='any')
#stop connector
task_stop = loop.create_task(connector_manager.stop_connector(delay=None, hard=False, shutdown=True))
loop.run_until_complete(task_stop)

Subscriber example

Just a client with subscribe_message_types = [topic1, topic2, ...]

import asyncio
import aioconnectors
loop = asyncio.get_event_loop()
server_sockaddr = ('127.0.0.1',10673)
connector_files_dirpath = '/var/tmp/aioconnectors'
client_name = 'client2'
#create connector
connector_manager = aioconnectors.ConnectorManager(is_server=False, server_sockaddr=server_sockaddr,
use_ssl=False, ssl_allow_all=True, use_token=False,
connector_files_dirpath=connector_files_dirpath,
certificates_directory_path=connector_files_dirpath,
send_message_types=['any'], recv_message_types=['type1'],
file_recv_config={'type1': {'target_directory':connector_files_dirpath}},
client_name=client_name, subscribe_message_types=["type1"])
task_manager = loop.create_task(connector_manager.start_connector())
loop.run_until_complete(task_manager)
#create api
connector_api = aioconnectors.ConnectorAPI(is_server=False, server_sockaddr=server_sockaddr,
connector_files_dirpath=connector_files_dirpath, client_name=client_name,
send_message_types=['any'], recv_message_types=['type1'],
default_logger_log_level='INFO')
#start receiving messages
async def message_received_cb(logger, transport_json, data, binary):
    print('CLIENT : message received', transport_json, data.decode())
loop.create_task(connector_api.start_waiting_for_messages(message_type='type1', message_received_cb=message_received_cb))
'''
#start sending messages
async def send_messages():
    await asyncio.sleep(1)
    index = 0
    while True:
        index += 1
        await connector_api.send_message(data={'application message': f'CLIENT MESSAGE {index}'}, message_type='any')
        await asyncio.sleep(1)
loop.create_task(send_messages())
'''
try:
    print(f'Connector is running, check log at {connector_files_dirpath+"/aioconnectors.log"}'
          f', type Ctrl+C to stop')
    loop.run_forever()
except KeyboardInterrupt:
    print('Connector stopped !')
#stop receiving messages
connector_api.stop_waiting_for_messages(message_type='type1')
#stop connector
task_stop = loop.create_task(connector_manager.stop_connector(delay=None, hard=False, shutdown=True))
loop.run_until_complete(task_stop)

Publisher example

Just a client which uses publish_message instead of send_message

import asyncio
import aioconnectors
loop = asyncio.get_event_loop()
server_sockaddr = ('127.0.0.1',10673)
connector_files_dirpath = '/var/tmp/aioconnectors'
client_name = 'client1'
#create connector
connector_manager = aioconnectors.ConnectorManager(is_server=False, server_sockaddr=server_sockaddr,
use_ssl=False, ssl_allow_all=True, use_token=False,
connector_files_dirpath=connector_files_dirpath,
certificates_directory_path=connector_files_dirpath,
send_message_types=['type1','type2'], recv_message_types=['any'],
file_recv_config={'any': {'target_directory':connector_files_dirpath}},
client_name=client_name, disk_persistence_send=True)
task_manager = loop.create_task(connector_manager.start_connector())
loop.run_until_complete(task_manager)
#create api
connector_api = aioconnectors.ConnectorAPI(is_server=False, server_sockaddr=server_sockaddr,
connector_files_dirpath=connector_files_dirpath, client_name=client_name,
send_message_types=['type1','type2'], recv_message_types=['any'],
default_logger_log_level='INFO')
#start receiving messages
#async def message_received_cb(logger, transport_json , data, binary):
# print('CLIENT : message received', transport_json , data.decode())
#loop.create_task(connector_api.start_waiting_for_messages(message_type='any', message_received_cb=message_received_cb))
#start sending messages
async def send_messages():
    await asyncio.sleep(1)
    index = 0
    #with_file={'src_path':'file_test','dst_type':'any', 'dst_name':'file_dest',
    #           'delete':False, 'owner':'nobody:nogroup'}
    while True:
        index += 1
        print(f'CLIENT : message {index} published')
        #connector_api.publish_message_sync(data={'application message': f'CLIENT MESSAGE {index}'}, message_type='type1')#,
        await connector_api.publish_message(data={'application message': f'CLIENT MESSAGE {index}'}, message_type='type1')#,
        #with_file=with_file, binary=b'\x01\x02\x03')
        #await connector_api.publish_message(data={'application message': f'CLIENT MESSAGE {index}'}, message_type='type2')#,
        await asyncio.sleep(1)
loop.create_task(send_messages())
try:
    print(f'Connector is running, check log at {connector_files_dirpath+"/aioconnectors.log"}'
          f', type Ctrl+C to stop')
    loop.run_forever()
except KeyboardInterrupt:
    print('Connector stopped !')
#stop receiving messages
connector_api.stop_waiting_for_messages(message_type='any')
#stop connector
task_stop = loop.create_task(connector_manager.stop_connector(delay=None, hard=False, shutdown=True))
loop.run_until_complete(task_stop)

2.Encryption without authentication

In order to use encryption, you should set use_ssl to True in both server and client ConnectorManager instantiations.
A directory containing certificates must be created before running the example, which is done by a single command:

python3 -m aioconnectors create_certificates

If you decide to use server_ca=true on your connector server, then you need to add "--ca" (see section 4).
If you run server and client on different machines, this command should be run on both machines.

3.Encryption with authentication

In this example, the kwarg ssl_allow_all is true (both on server and client), meaning the communication between server and client, while encrypted, is not authenticated.
In case you want to run this example with authentication too, you have 2 options:

3.1. Set use_ssl to True and ssl_allow_all to False in both server and client ConnectorManager instantiations.
If you run server and client on the same machine, this only requires running the command "python3 -m aioconnectors create_certificates" beforehand, as in 2.
In case the server and client run on different machines, you should run the prerequisite command "python3 -m aioconnectors create_certificates" only once, and copy the generated directory /var/tmp/aioconnectors/certificates/server to your server (preserving symlinks) and /var/tmp/aioconnectors/certificates/client to your client.

3.2. Set use_ssl to True, ssl_allow_all to True, and use_token to True, in both server and client ConnectorManager instantiations, to use token authentication. This also requires running "python3 -m aioconnectors create_certificates" beforehand.

HIGH LEVEL DESIGN

The client and server are connected by one single tcp socket.
When a peer sends a message, it is first sent over a unix socket to the connector, then transferred to a different queue for each remote peer. Messages are read from these priority queues and sent to the remote peer on the client/server socket. After a message reaches its peer, it is sent to a queue, one queue per message type. The api listens on a unix socket to receive messages of a specific type, which are read from the corresponding queue.

The optional encryption uses TLS. The server certificate and the default client certificate are automatically generated and pre-shared, so that a server or client without prior knowledge of these certificates cannot communicate. Then, the server generates on the fly a new certificate per client, so that different clients cannot interfere with one another. Alternatively, the server can generate on the fly a new token per client.

USE CASES

-The standard use case is running server and client on separate stations. Each client station can then initiate a connection to the server station.
The valid message topics are defined in the server and client configurations (send_message_types and recv_message_types), and the messages are sent point to point.
In order to have all client/server connections authenticated and encrypted, you just have to call

python3 -m aioconnectors create_certificates <optional_directory_path>

and then share the created directories between server and clients as explained in section 1.
You can also use a proxy between your client and server, as explained in section 4.

-You might prefer to use a publish/subscribe approach.
This is also supported by configuring a single server as the broker (you just need to set pubsub_central_broker=True).
The other connectors should be clients. A client can subscribe to specific topics (message_types) by setting the attribute subscribe_message_types in its constructor, or by calling the set_subscribe_message_types command on the fly.

-You might want both sides to be able to initiate a connection, or even to have multiple nodes able to initiate connections between one another.
The following lines describe a possible approach to do that using aioconnectors.
Each node should run an aioconnectors server, and also be able to spawn an aioconnectors client each time it initiates a connection to a different remote server. A new application layer handling these connectors could be created, and run on each node.
Your application might need to know if a peer is already connected before initiating a connection: to do so, you might use the connector_manager.show_connected_peers method (explained in section 7).
Your application might need to be able to disconnect a specific client on the server: to do so, you might use the connector_manager.disconnect_client method.
A comfortable approach would be to share the certificates directories created in the first step between all the nodes. All nodes would share the same server certificate, and use the same default client certificate to initiate the connection (before receiving their individual certificate). The only differences between client configurations would be their client_name and their remote server (the configurations are explained in section 4).

-There are multiple tools to let the server filter clients.
Your application might need to decide whether to accept a client connection or not.
The following tools filter clients, in this order:

whitelisted_clients_ip/subnet : in the configuration file, or on the fly with add_whitelist_client (it updates the configuration file).
hook_whitelist_clients(extra_info, source_id) : coroutine that lets you take a decision after a non whitelisted client has been filtered (maybe allow it from now on).
blacklisted_clients_ip/subnet : in the configuration file, or on the fly with add_blacklist_client.
whitelisted_clients_id : in the configuration file, or on the fly with add_whitelist_client (uses regex).
hook_whitelist_clients(extra_info, source_id) : same.
blacklisted_clients_id : in the configuration file, or on the fly with add_blacklist_client (uses regex).
hook_allow_certificate_creation(source_id) : coroutine that lets you prevent certificate creation based on the source_id.
hook_server_auth_client(source_id) : coroutine that gives a last opportunity to filter the source_id.

The hooks must be fed to the ConnectorManager constructor (explained in section 4).

USAGE

aioconnectors provides the ConnectorManager class, which runs the connectors, and the ConnectorAPI class, which sends and receives messages. It also provides the ConnectorRemoteTool class, which can lightly manage the connector outside of the ConnectorManager.
The ConnectorManager client and server can run on different machines. However, ConnectorAPI and ConnectorRemoteTool communicate internally with their ConnectorManager, and the three must run on the same machine.
aioconnectors also provides a command line tool, accessible by typing

python3 -m aioconnectors --help

1.Encryption

Encryption mode is, like everything else, configurable through the ConnectorManager kwargs or config file, as explained later in section 4. The relevant parameters are use_ssl and ssl_allow_all.
The default mode is the most secure: use_ssl is enabled and ssl_allow_all is disabled, both on server and client.

-If you choose to use encryption, you should call

python3 -m aioconnectors create_certificates [<optional_directory_path>] [--ca] [--help]

A directory called "certificates" will be created under your optional_directory_path, or under /var/tmp/aioconnectors if not specified.
Under it, 2 subdirectories will be created: certificates/server and certificates/client.
You need to copy certificates/server to your server (preserving symlinks), and certificates/client to your client. That's all you have to do.
This is the recommended approach, since it ensures traffic encryption, client and server authentication, and prevents client impersonation.
Clients use the default certificate to first connect to the server, then an individual certificate is generated by the server for each client. The client automatically uses this individual certificate for further connections. This individual certificate is mapped to the client_name.
The first client named client_name reaching the server is granted a certificate for this client_name. Other clients subsequently attempting to use the same client_name will be rejected.
When server_ca is false on server side (the default), the client certificates are checked against the certificate pems kept on the server, otherwise against the server CA.
When using ssl, the default approach is to have server_ca false (default), meaning your server will generate and manage self signed client certificates, providing certificate visibility, and tools like delete_client_certificate to delete client certificates on the fly.
Using server_ca true lets your server become a CA, with a self signed CA certificate that will sign your client certificates. If you choose to run your server with server_ca true, then you need the --ca argument in create_certificates, otherwise you don't need it (default).
The server_ca true mode comes with server_ca_certs_not_stored enabled by default, meaning the client certificates are deleted from server side. Not having to store the client certificates on the server might be an advantage, but it doesn't enable you to delete them: if you want to be able to delete them in ca mode, then you might just use server_ca false. The server_ca_certs_not_stored option set to false requires you to delete the certificates yourself, since this is not currently supported when server_ca is true: such an implementation would require something like "openssl ca -gencrl -config certificates/server/server_ca_details.conf -out revoked.pem", and also "SSLContext.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF" before loading the revoked.pem into SSLContext.load_verify_locations.

-The client also checks the server certificate, to prevent MITM.
Instead of using the generated server certificate, you also have the option to use a hostname for your server and a CA signed server certificate that the clients will verify. For that you should:

-On server side, under the "certificates" directory, replace /server/server-cert/server.pem and server.key with your signed certificates.
You don't need to do that manually, there is a tool that does it:

python3 -m aioconnectors replace_server_certificate <custom_server_pem_file_path> [<optional_directory_path>]

Note that the custom server pem file should contain the whole chain of .crt, including the intermediate certificates.
In case you want to roll back to the original generated server certificate:

python3 -m aioconnectors replace_server_certificate --revert

-On client side, configure server_sockaddr with the server hostname instead of the IP address, and set client_cafile_verify_server to the ca cert path (like /etc/ssl/certs/ca-certificates.crt), to enable CA verification of your server certificate.

-You can delete a client certificate on the server (and also on the client) by calling delete_client_certificate in

python3 -m aioconnectors cli

For this purpose, you can also call the ConnectorManager.delete_client_certificate method programmatically.

-You shouldn't need to modify the certificates, however there is a way to tweak the certificates template: run create_certificates once, then modify certificates/server/csr_details_template.conf according to your needs (without setting the Organization field), delete the other directories under certificates, and run create_certificates again.

-On server side, you can manually store additional default certificates with their symlink, under certificates/server/client-certs/symlinks. They must be called defaultN where N is an integer, or be another CA certificate in case server_ca is true.

-Other options:

-ssl_allow_all and use_token enabled: this is a similar approach, but instead of generating a certificate per client, the server generates a token per client. This approach is simpler. Note that you can also delete the token on the fly by calling delete_client_token.
You can combine ssl_allow_all with token_verify_peer_cert (on client and server) and token_client_send_cert (on client), in order to authenticate the default certificate only. On client side, token_verify_peer_cert can also be the path of ca certificates (like /etc/ssl/certs/ca-certificates.crt) or a custom server public certificate. token_client_verify_server_hostname can be the server hostname that your client authenticates (through its certificate).
By setting ssl_allow_all on both server and client, you can use encryption without the hassle of sharing certificates. In such a case you can run create_certificates independently on server and client side, without the need to copy a directory. This disables authentication, so that any client and server can communicate.
By unsetting use_ssl, you can disable encryption altogether.

2.Run a connector

You have 2 options to run your connectors: through the command line tool, or programmatically.

2.1.Command line tool

-To configure the Connector Manager, create a <config_json_path> file based on the Manager template json, and configure it according to your needs (more details in section 4).
Relevant for both server and client. A Manager template json can be obtained by calling:

python3 -m aioconnectors print_config_templates

-Then create and start your connector (both server and client, each with its own <config_json_path>):

python3 -m aioconnectors create_connector <config_json_path>

If you are testing your connector server and client on the same machine, you can use the configuration generated by print_config_templates almost out of the box.
The only changes you should make are to set is_server to False in the client configuration, and use_ssl to False in both configurations (unless you have already run "python3 -m aioconnectors create_certificates").
If you want to test message sending/receiving, you should also set a client_name value in the client configuration.
Then you can use the other command line testing facilities mentioned in section 8: on both server and client you can run "python3 -m aioconnectors test_receive_messages <config_json_path>" and "python3 -m aioconnectors test_send_messages <config_json_path>".

2.2.Programmatically

Examples are provided in applications.py and in aioconnectors_test.py.
To create and start a connector:

connector_manager = aioconnectors.ConnectorManager(config_file_path=config_file_path)
await connector_manager.start_connector()

To stop a connector:

await connector_manager.stop_connector()

To shut down a connector:

await connector_manager.stop_connector(shutdown=True)

You don't have to use a config file (config_file_path), you can also directly initialize your ConnectorManager kwargs, as shown in the previous basic examples, and in aioconnectors_test.py.
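
For illustration only, here is a minimal sketch of these calls wired together (the config file path is hypothetical, and the loop handling simply mirrors the basic examples above):

import asyncio
import aioconnectors

loop = asyncio.get_event_loop()
#create the connector from a json config file (hypothetical path)
connector_manager = aioconnectors.ConnectorManager(config_file_path='/var/tmp/aioconnectors/config_server.json')
loop.run_until_complete(connector_manager.start_connector())
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass
#stop the connector cleanly
loop.run_until_complete(connector_manager.stop_connector(shutdown=True))
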
3.Send/receive messages with the API

3.1.To configure the Connector API, create a <config_json_path> file based on the API template json.
Relevant for both server and client. This connector_api config file is a subset of the connector_manager config file. So if you already have a relevant connector_manager config file on your machine, you can reuse it for connector_api, and you don't need to create a different connector_api config file.

python3 -m aioconnectors print_config_templates
connector_api = aioconnectors.ConnectorAPI(config_file_path=config_file_path)

3.2.Or you can directly initialize your ConnectorAPI kwargs.

Then you can send and receive messages by calling the following coroutines in your program, as shown in aioconnectors_test.py, and in applications.py (test_receive_messages and test_send_messages).

3.3.To send messages:

await connector_api.send_message(data=None, binary=None, **kwargs)

This returns a status (True or False). "data" is your message, "binary" is an optional additional binary message, in case you want your "data" to be a json for example.
If your "data" is already a binary, then the "binary" field isn't necessary.kwargs contain all the transport instructions for this message, as explained in5-.If you set the await_response kwarg to True, this returns the response, which is a (transport_json , data, binary) triplet.The received transport_json field contains all the kwargs sent by the peer.You can also send messages synchronously, with :connector_api.send_message_sync(data=None, binary=None, **kwargs)Similarly, use the "publish_message" and "publish_message_sync" methods in the publish/subscribe approach.More details in5-.3.4.To register to receive messages of a specific message_type :await connector_api.start_waiting_for_messages(message_type='', message_received_cb=message_received_cb, reuse_uds_path=False)-binaryis an optional binary message (or None).-datais the message data bytes. It is always bytes, so if it was originally sent as a json or a string, you'll have to convert it back by yourself.-message_received_cbis an async def coroutine that you must provide, receiving and processing the message quadruplet (logger, transport_json, data, binary).-reuse_uds_pathis false by default, preventing multiple listeners of same message type. In case it raises an exception even with a single listener, you might want to find and delete an old uds_path_receive_from_connector file specified in the exception.-transport_jsonis a json with keys related to the "transport layer" of our message protocol : these are the kwargs sent in send_message. They are detailed in5-. The main arguments are source_id, destination_id, request_id, response_id, etc.Your application can read these transport arguments to obtain information about peer (source_id, request_id if provided, etc), and in order to create a proper response (with correct destination_id, and response_id for example if needed, etc).transport_json will contain a with_file key if a file has been received, more details in5-.-Note: if you send a message using send_message(await_response=True), the response value is the expected response message : so in that case the response message is not received by the start_waiting_for_messages task.4.More details about the ConnectorManager and ConnectorAPI arguments.logger=None, use_default_logger=True, default_logger_log_level='INFO', default_logger_rotate=True, config_file_path=<path>,default_logger_bk_count=5config_file_path can be the path of a json file like the following, or instead you can load its items as kwargs, as shown in the basic example later on and in aioconnectors_test.pyYou can use both kwargs and config_file_path : if there are shared items, the ones from config_file_path will override the kwargs, unless you specify config_file_overrides_kwargs=False (True by default).The main use case for providing a config_file_path while having config_file_overrides_kwargs=False is when you prefer to configure your connector only with kwargs but you also want to let the connector update its config file content on the fly (for example blacklisted_clients_id, whitelisted_clients_id, or ignore_peer_traffic).Here is an example of config_file_path, with ConnectorManager class arguments, used to create a connector{
"alternate_client_default_cert": false,
"blacklisted_clients_id": null,
"blacklisted_clients_ip": null,
"blacklisted_clients_subnet": null,
"certificates_directory_path": "/var/tmp/aioconnectors",
"client_bind_ip": null,
"client_cafile_verify_server": null,
"client_name": null,
"connect_timeout": 10,
"connector_files_dirpath": "/var/tmp/aioconnectors",
"debug_msg_counts": true,
"default_logger_bk_count":5,
"default_logger_dirpath": "/var/tmp/aioconnectors",
"default_logger_log_level": "INFO",
"default_logger_rotate": true,
"disk_persistence_recv": false,
"disk_persistence_send": false,
"enable_client_try_reconnect": true,
"everybody_can_send_messages": true,
"file_recv_config": {},
"ignore_peer_traffic": false,
"is_server": true,
"keep_alive_period": null,
"keep_alive_timeout": 5,
"max_certs": 1024,
"max_number_of_unanswered_keep_alive": 2,
"max_size_file_upload_recv": 8589930194,
"max_size_file_upload_send": 8589930194,
"max_size_persistence_path": 1073741824,
"proxy": {},
"pubsub_central_broker": false,
"recv_message_types": [
"any"
],
"reuse_server_sockaddr": false,
"reuse_uds_path_commander_server": false,
"reuse_uds_path_send_to_connector": false,
"send_message_types": [
"any"
],
"send_message_types_priorities": {},
"send_timeout": 50,
"server_ca": false,
"server_ca_certs_not_stored": true,
"server_secure_tls": true,
"server_sockaddr": [
"127.0.0.1",
10673
],
"silent": true,
"ssl_allow_all": false,
"subscribe_message_types": [],
"token_client_send_cert": true,
"token_client_verify_server_hostname": null,
"token_server_allow_authorized_non_default_cert": false,
"token_verify_peer_cert": true,
"tokens_directory_path": "/var/tmp/aioconnectors",
"uds_path_receive_preserve_socket": true,
"uds_path_send_preserve_socket": true,
"use_ssl": true,
"use_token": false,
"whitelisted_clients_id": null,
"whitelisted_clients_ip": null,
"whitelisted_clients_subnet": null
}

Here is an example of config_file_path, with ConnectorAPI class arguments, used to send/receive messages.
These are a subset of the ConnectorManager arguments, which means you can also use the ConnectorManager config file for ConnectorAPI:

{
"client_name": null,
"connector_files_dirpath": "/var/tmp/aioconnectors",
"default_logger_bk_count":5,
"default_logger_dirpath": "/var/tmp/aioconnectors",
"default_logger_log_level": "INFO",
"default_logger_rotate": true,
"is_server": true,
"max_size_chunk_upload": 209715200,
"pubsub_central_broker": false,
"receive_from_any_connector_owner": true,
"recv_message_types": [
"any"
],
"send_message_types": [
"any"
],
"server_sockaddr": [
"127.0.0.1",
10673
],
"uds_path_receive_preserve_socket": true,
"uds_path_send_preserve_socket": true
}

-alternate_client_default_cert is false by default: if true, it lets the client try to connect alternatively with the default certificate, in case of failure with the private certificate. This can save the hassle of having to manually delete your client certificate when the certificate was already deleted on server side. This also affects token authentication: the client will try to connect alternatively by requesting a new token if its token fails.
-blacklisted_clients_id|ip|subnet: a list of blacklisted clients (regex for blacklisted_clients_id), can be updated on the fly with the api functions add|remove_blacklist_client or in the cli.
-certificates_directory_path is where your certificates are located, if use_ssl is True. This is the <optional_directory_path> where you generated your certificates by calling "python3 -m aioconnectors create_certificates <optional_directory_path>".
-client_cafile_verify_server: On client side, if server_sockaddr is configured with the server hostname, you can set client_cafile_verify_server to the ca cert path (like /etc/ssl/certs/ca-certificates.crt), to enable CA verification of your server certificate.
-client_name is used on client side. It is the name that will be associated with this client on server side. Auto generated if not supplied in ConnectorManager. Mandatory in ConnectorAPI. It should match the regex ^[0-9a-zA-Z-_:]+$
-client_bind_ip is optional, and specifies the interface to bind your client to. You can use an interface name or its ip address (string).
-connect_timeout: On client side, the socket timeout to connect to the server. Default is 10s; you might need to increase it when using a server hostname in server_sockaddr, since name resolution with getaddrinfo is sometimes slow.
-connector_files_dirpath is important, it is the path where all internal files are stored. The default is /var/tmp/aioconnectors. Unix socket files, default log files, and persistence files are stored there.
-debug_msg_counts is a boolean; it enables displaying, every 2 minutes, a count of messages in the log file, and in stdout if silent is disabled.
-default_logger_rotate (boolean) can also be an integer telling the maximum size of the log file in bytes.
-default_logger_bk_count: an integer telling the maximum number of gzip compressed logs kept when log rotation is enabled. Default is 5.
-disk_persistence_recv: In order to enable persistence between the connector and a message listener (supported on both client and server sides), use disk_persistence_recv=True (applies to all message types). disk_persistence_recv can also be a list of message types for which to apply persistence. There will be 1 persistence file per message type.
-file_recv_config: In order to be able to receive files, you must define the destination path of files according to their associated dst_type. This is done in file_recv_config, as shown in aioconnectors_test.py. file_recv_config = {"target_directory":"", "owner":"", "override_existing":False}. target_directory is later formatted using the transport_json fields, which means you can use a target_directory value like "/my_destination_files/{message_type}/{source_id}". owner is optional, it is the owner of the uploaded file. It must be of the form "user:group". override_existing is optional and false by default: when receiving a file with an already existing destination path, it decides whether to override the existing file or not.
-enable_client_try_reconnect is a boolean set to True by default.
If enabled, it lets the client try to reconnect automatically to the server every 5 seconds in case of failure.
-keep_alive_period is null by default. If an integer, the client periodically sends a ping keep-alive to the server. If max_number_of_unanswered_keep_alive (default is 2) keep-alive responses are not received by the client, each after keep_alive_timeout (default is 5s), then the client disconnects and tries to reconnect with the same mechanism used by enable_client_try_reconnect.
-everybody_can_send_messages: if True, lets anyone send messages through the connector, otherwise the sender must have write permission to the connector. Setting it to True requires the connector to run as root.
-hook_allow_certificate_creation: does not appear in the config file (usable as a kwarg only). Only for server. Can be an async def coroutine receiving a client_name and returning a boolean, to let the server accept or block the client_name certificate creation.
-hook_server_auth_client: does not appear in the config file (usable as a kwarg only). Only for server. Can be an async def coroutine receiving a client peername and returning a boolean, to let the server accept or block the client connection. An example exists in the chat implementation in applications.py.
-hook_store_token and hook_load_token: let you manipulate the token before it is stored on disk, for client only.
-hook_target_directory: does not appear in the config file (usable as a kwarg only). A dictionary of the form {dst_type: custom_function}, where custom_function receives transport_json as an input and outputs a destination path to be appended to target_directory. If custom_function returns None, it has no effect on the target_directory. If custom_function returns False, the file is refused. This enables better customization of the target_directory according to transport_json. An example exists in the chat implementation in applications.py.
-hook_whitelist_clients: does not appear in the config file (usable as a kwarg only). Has 2 arguments: extra_info, peername. Lets you inject some code when blocking a non whitelisted client.
-hook_proxy_authorization: does not appear in the config file (usable as a kwarg only). Only for client. A function that receives and returns 2 arguments: the proxy username and password. It returns them after an eventual transformation (like a decryption for example).
-ignore_peer_traffic: to ignore a peer's traffic, can be updated on the fly with the api functions ignore_peer_traffic_enable, ignore_peer_traffic_enable_unique, or ignore_peer_traffic_disable, or in the cli.
-is_server (boolean) is important to differentiate between server and client.
-max_certs (integer) limits the maximum number of clients that can connect to a server using client ssl certificates.
-max_size_chunk_upload (integer) is used only by ConnectorAPI to send a file in chunks; the default chunk length is 200MB. You can try a max chunk length of up to 1GB in a fast network, and might need to lower it in a slow network.
-max_size_file_upload_send and max_size_file_upload_recv: Size limit of the files you send and receive, both on server and on client. Default is 8GB. However, best performance is achieved up to 1GB. Once you exceed 1GB, the file is divided into 1GB chunks and reassembled after reception, which is time consuming.
-disk_persistence_send: In order to enable persistence between client and server (supported on both client and server sides), use disk_persistence_send=True (applies to all message types).
disk_persistence_send can also be a list of message types for which to apply persistence. There will be 1 persistence file per message type. You can limit the persistence file size with max_size_persistence_path.
-pubsub_central_broker: set to True if you need your server to be the broker. Used in the publish/subscribe approach, not necessary in the point to point approach.
-proxy: an optional dictionary like {"enabled":true, "address":"<proxy_url>", "port":<proxy_port>, "authorization":"", "ssl_server":false}. Relevant only on client side. Lets the client connect to the server through an http(s) proxy with the connect method, if the enabled field is true. The authorization field can have a value like {"username":"", "password":""}. Regardless of the aioconnectors inner encryption, you can set the "ssl_server" flag in case your proxy listens on ssl: this feature is under development and not tested, because such a proxy setup is rare.
-receive_from_any_connector_owner: if True, lets the api receive messages from a connector run by any user, otherwise the connector user must have write permission to the api. True by default (requires the api to run as root to be effective).
-recv_message_types: the list of message types that can be received by the connector. Default is ["any"]. It should include the send_message_types using await_response.
-reuse_server_sockaddr, reuse_uds_path_send_to_connector, reuse_uds_path_commander_server: booleans false by default, that prevent duplicate processes you might create by mistake from using the same sockets. In case your OS is not freeing a closed socket, you can still set the relevant boolean to true.
-send_message_types: the list of message types that can be sent from the connector. Default is ["any"] if you don't care to differentiate between message types on your application level.
-send_message_types_priorities: None, or a dictionary specifying the priority of each send_message_type. The priority is an integer, a smaller integer meaning a higher priority. Usually this is not needed, but with very high throughputs you may want to use it in order to ensure that a specific message type will not get drowned out by other messages. This might starve the lowest priority messages. Usage example: "send_message_types_priorities": {"type_fast":0, "type_slow":1}.
-send_timeout: maximum time for sending a message between peers on the socket. By default 50 seconds. After the timeout, the message is lost, the sending peer disconnects, and the peers reconnect if enable_client_try_reconnect is set.
-server_ca: (server only) If set to false (default), the server authenticates client certificates according to the stored certificates, otherwise according to its CA. You can always add defaultN or CA certificates manually, under certificates/server/client-certs/symlinks.
-server_ca_certs_not_stored: (server only) True by default. If server_ca is true, the generated client certificates won't be stored on server side.
-server_secure_tls: (server only) If set to true (default), the server allows only clients using TLS version >= 1.2.
-server_sockaddr can be configured as a tuple when used as a kwarg, or as a list when used in the json, and is mandatory on both server and client sides. You can use an interface name instead of its ip on server side, for example ("eth0", 10673).
-subscribe_message_types: In the publish/subscribe approach, specify for your client the message types you want to subscribe to. It is a subset of recv_message_types.
-tokens_directory_path: The path of your server token json file, or client token file.
-token_verify_peer_cert: True by default. If boolean, True means the server/client verifies its peer certificate according to its default location under certificates_directory_path. On client: can also be a string with the full path of a custom server certificate, or even a string with the full path of a CA certificate to authenticate the server hostname (for example "/etc/ssl/certs/ca-certificates.crt", in which case token_client_verify_server_hostname should be true).
-token_client_send_cert: True by default. Boolean, must be True if the server has token_verify_peer_cert enabled: sends the client certificate.
-token_client_verify_server_hostname: if true, the client authenticates the server hostname with token_verify_peer_cert (CA path) during the SSL handshake.
-token_server_allow_authorized_non_default_cert: boolean, false by default. If true, a server using use_token will allow a client with a non default authorized certificate, even if this client doesn't use a token.
-uds_path_receive_preserve_socket should always be True for better performance; your message_received_cb coroutine in start_waiting_for_messages is called for each message without socket disconnection between messages (in fact, only 1 disconnection per 100 messages).
-uds_path_send_preserve_socket should always be True for better performance.
-use_ssl, ssl_allow_all, use_token are booleans, and must be identical on server and client. use_ssl enables encryption as explained previously. When ssl_allow_all is disabled, certificate validation is enforced.
use_token requires use_ssl and ssl_allow_all to both be enabled.
-whitelisted_clients_id|ip|subnet: a list of whitelisted clients (regex for whitelisted_clients_id), can be updated on the fly with the api functions add|remove_whitelist_client or in the cli.

5.More details about the send_message arguments

send_message(message_type=None, destination_id=None, request_id=None, response_id=None,
data=None, data_is_json=True, binary=None, await_response=False, with_file=None,
wait_for_ack=False, await_response_timeout=None)
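
Before the argument details below, here is an illustrative sketch only (assuming a connector_api created as in section 3 on the server side, and a remote client named "client1"; the payloads and file path are made up):

#fire-and-forget json message to the client named 'client1'
await connector_api.send_message(data={'example': 'hello'}, message_type='any', destination_id='client1')

#same call, but wait up to 10s for the peer response : returns a (transport_json, data, binary) triplet, or False on timeout
response = await connector_api.send_message(data={'example': 'hello'}, message_type='any', destination_id='client1',
                                            request_id=1, await_response=True, await_response_timeout=10)

#send a file along with the message; 'dst_type' must match a key of the peer file_recv_config
await connector_api.send_message(data={'example': 'see attached'}, message_type='any', destination_id='client1',
                                 with_file={'src_path':'/tmp/report.txt', 'dst_type':'any',
                                            'dst_name':'report.txt', 'delete':False})
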
with_file can be like: {'src_path':'', 'dst_type':'', 'dst_name':'', 'delete':False, 'owner':''}

send_message is an async coroutine. These arguments must be filled on the application layer by the user.
-await_response is False by default, set it to True if your coroutine calling send_message expects a response value.
In such a case, the remote peer has to answer with a response_id equal to the request_id of the request. (This is shown in aioconnectors_test.py.)
-await_response_timeout is None by default. If set to a number, and if await_response is true, the method waits up to this timeout for the peer response, and if the timeout is exceeded it returns False.
-data is the payload of your message. By default it expects a json, but it can be a string, and even bytes. However, using the "data" argument for a json or a string together with the "binary" argument for a binary payload is a nice way to accompany a binary payload with some textual information. Contrary to "data", binary must be bytes, and cannot be a string. A message size should not exceed 1GB.
-data_is_json is True by default, since it assumes "data" is a json, and it dumps it automatically. Set it to False if "data" is not a json.
-destination_id is mandatory for the server: it is the remote client id. Not needed by the client.
-message_type is mandatory, it enables having different listeners that receive different message types. You can use "any" as a default.
-request_id and response_id are optional (integer or string): they are helpful to keep track of asynchronous messages on the application layer. At the application level, the remote peer should answer with a response_id equal to the request_id of the request. The request sender can then associate the received response with the request sent.
-The publish_message and publish_message_sync methods are the same as the send_message ones, but used by a client in the publish/subscribe approach.
-The send_message_await_response method is the same as send_message, but automatically sets await_response to True.
-The send_message_sync method is almost the same as send_message, but called synchronously (not an async coroutine). It can also receive a "loop" as a kwarg. If a loop is running in the background, it schedules and returns a task. Otherwise it returns the peer response if called with await_response.
-wait_for_ack is not recommended for high throughputs, since it slows things down dramatically. Basic testing showed a rate of ten messages per second, instead of a few thousand messages per second in the point to point approach.
Not a benchmark, but some point-to-point and pubsub trials (VM with 8GB RAM and 4 cores) showed that up to 4000 messages per second (with data of 100 bytes) could be received by a server without delay; beyond that point the receive queue started to be non empty. This test gave the same result with 100 clients each sending 40 events per second, and with 1 client sending 4000 events per second.
-with_file lets you embed a file, with {'src_path':'', 'dst_type':'', 'dst_name':'', 'delete':False, 'owner':''}. src_path is the source path of the file to be sent, dst_type is the type of the file, which enables the remote peer to evaluate the destination path thanks to its ConnectorManager attribute "file_recv_config" dictionary. dst_name is the name the file will be stored under. delete is a boolean telling whether to delete the source file after it has been sent. owner is the optional user:group of your uploaded file: if used, it overrides the "owner" value optionally set on server side in file_recv_config.
If an error occurs while opening the file to send, the file will not be sent, but with_file will still be present in the transport_json received by the peer, and will contain an additional key file_error telling the error to the peer application.
-tag lets you add a tag string to your message in transport_json: it has the advantage of being accessible at reception directly in transport_json, without the need to look into the data structure.

6.Management programmatic tools

The ConnectorManager class has several methods to manage your connector. These methods are explained in section 7.

-delete_client_certificate, delete_client_token, disconnect_client, reload_tokens
-add_blacklist_client, remove_blacklist_client, add_whitelist_client, remove_whitelist_client
-delete_previous_persistence_remains
-ignore_peer_traffic_show, ignore_peer_traffic_enable, ignore_peer_traffic_enable_unique, ignore_peer_traffic_disable
-show_connected_peers
-show_log_level, set_log_level
-show_subscribe_message_types, set_subscribe_message_types
-start_connector, stop_connector, restart_connector

The same methods can be executed remotely, with the ConnectorRemoteTool class. This class is instantiated exactly like ConnectorAPI, with the same arguments (except for receive_from_any_connector_owner):

connector_remote_tool = aioconnectors.ConnectorRemoteTool(config_file_path=config_file_path)

An example of ConnectorRemoteTool is available in applications.py, in the cli implementation.

7.Other management command line tools

python3 -m aioconnectors cli

lets you run several interesting commands, like:

-start/stop/restart your connectors.
-show_connected_peers: show currently connected peers.
-delete_client_certificate enables your server to delete a specific client certificate. delete_client_certificate enables your client to delete its own certificate and fall back to using the default one. In order to delete the certificate of a currently connected client, first delete the certificate on server side, which will disconnect the client instantaneously, and then delete the certificate on client side: the client will then reconnect automatically and obtain a new certificate. The client side deletion is not needed in case alternate_client_default_cert is true.
-delete_client_token enables your server to delete a specific client token. Enables your client to delete its own token and fall back to requesting a new token.
-reload_tokens reloads tokens, after for example modifying them on disk.
-disconnect_client enables your server to disconnect a specific client.
-add_blacklist_client, remove_blacklist_client enable your server to blacklist a client by id (regex), ip, or subnet, at runtime. Disconnects the client if blacklisted by id, and also deletes its certificate if it exists. Kept in the connector config file if it exists.
-add_whitelist_client, remove_whitelist_client enable your server to whitelist a client by id (regex), ip, or subnet, at runtime. Kept in the connector config file if it exists.
-peek_queues to show the internal queue sizes.
-ignore_peer_traffic can be a boolean, or a peer name. When enabled, the connector drops all new messages received from peers, or from the specified peer. It also drops new messages to be sent to all peers, or to the specified peer.
This mode can be useful to let the queues evacuate their accumulated messages.
-show_log_level to show the current log level.
-set_log_level to set the log level on the fly.
-show_subscribe_message_types to show the subscribed message types of a client.
-set_subscribe_message_types to set the list of all subscribed message types of a client.

8.Testing command line tools

-To let your connector send pings to a remote connector, and print its replies:

python3 -m aioconnectors ping <config_json_path>

-To simulate a simple application waiting for messages, and print all received messages. Your application should not wait for incoming messages when using this testing tool:

python3 -m aioconnectors test_receive_messages <config_json_path>

-To simulate a simple application sending dummy messages:

python3 -m aioconnectors test_send_messages <config_json_path>

9.Funny embedded chat

A simple chat using aioconnectors is embedded. It allows you to easily exchange messages, files and directories between 2 Linux or Mac stations. It can also be configured to execute the commands it receives.
It is encrypted, and supports authentication by prompting to accept connections.
It is not a multi user chat, but more of a tool to easily transfer stuff between your computers.

-On the 1st station (server side), type:

python3 -m aioconnectors chat

-Then on the 2nd station (client side), type:

python3 -m aioconnectors chat --target <server_ip>

You can execute local shell commands by preceding them with a "!".
You can also upload files during a chat, by typing "!upload <file or dir path>".
Files are uploaded to your current working directory. A directory is transferred as a zip file.
You can simply unzip a zip file by using "!dezip <file name>".
The cleanest way to exit a chat is by typing "!exit" on both sides.

-On client side, you can also directly upload a file or directory to the server without opening a chat:

python3 -m aioconnectors chat --target <server_ip> --upload <file or dir path>

-You can configure the client or the server (not simultaneously) to execute the commands it receives, by using the --exec <shell_path> option:

python3 -m aioconnectors chat --exec /bin/sh
python3 -m aioconnectors chat --target <server_ip>
or
python3 -m aioconnectors chat
python3 -m aioconnectors chat --target <server_ip> --exec /bin/sh
- On the server side, you can accept client connections without prompting by specifying --accept:
python3 -m aioconnectors chat --accept
- More info:
python3 -m aioconnectors chat --help
- If you need your server to listen on a specific interface:
python3 -m aioconnectors chat --bind_server_ip <server_ip>
<server_ip> can be an ip address, or an interface name.
- If you don't want your server to use the default port (10673), use --port on both peers:
python3 -m aioconnectors chat --port <port> [--target <server_ip>]
- By default the chat has tab completion, you can disable it with --nowrap.
Containers
Connector client and server, as well as the connector api, can run in a Docker container: you just need to pip install aioconnectors in a Python image (or any image having Python >= 3.6 and openssl). A connector and its connector api must run on the same host, or in the same Kubernetes pod. A connector and its connector api can run in the same container, or in different containers. In case you choose to run them in different containers, you must configure their connector_files_dirpath path as a shared volume, in order to let them share their UDS sockets.
Windows ?
To port aioconnectors to Windows, these steps should be taken, and probably more:
- Replace the usage of unix sockets by, maybe, local sockets, named pipes, or uds sockets if and when they are supported. Since the implementation relies on unix socket paths, a possible approach would be to preserve these paths, and manage a mapping between the paths and their corresponding local listening ports.
- Port the usage of openssl in ssl_helper.py
- Convert paths
- Ignore the uploaded file ownership feature
- Convert the interface-to-ipaddress function to use ip (used for sockaddr and client_bind_ip) |
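The aioconnectors management methods listed in section 6 can also be driven from code. Below is a minimal sketch of querying connected peers through ConnectorRemoteTool; whether each management method is awaited and the exact shape of its return value are assumptions here, so check the aioconnectors documentation, and the config path is a placeholder.
import asyncio
import aioconnectors

async def main():
    config_file_path = 'connector_manager_config.json'  # placeholder path
    remote_tool = aioconnectors.ConnectorRemoteTool(config_file_path=config_file_path)
    # Assumed to be a coroutine, like the rest of the API
    peers = await remote_tool.show_connected_peers()
    print(peers)

asyncio.run(main())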
aioconnpass | Aioconnpass
aioconnpass is a wrapper for the connpass API. It uses aiohttp.
Installing
It can be installed with pip.
pip install aiospotipy
Quick Example
from aioconnpass import Connpass

connpass = Connpass()
results = await connpass.search(keyword="長野") |
aioconsole | Asynchronous console and interfaces for asyncio
aioconsole provides:
asynchronous equivalents to input, print, exec and code.interact
an interactive loop running the asynchronous python console
a way to customize and run command line interface using argparse
stream support to serve interfaces instead of using standard streams
the apython script to access asyncio code at runtime without modifying the sources
Requirements
Python >= 3.8
Installation
aioconsole is available on PyPI and GitHub.
Both of the following commands install the aioconsole package
and the apython script.
$ pip3 install aioconsole   # from PyPI
$ python3 setup.py install  # or from the sources
$ apython -h
usage: apython [-h] [--serve [HOST:] PORT] [--no-readline]
[--banner BANNER] [--locals LOCALS]
[-m MODULE | FILE] ...
Run the given python file or module with a modified asyncio policy replacing
the default event loop with an interactive loop. If no argument is given, it
simply runs an asynchronous python console.
positional arguments:
FILE python file to run
ARGS extra arguments
optional arguments:
-h, --help show this help message and exit
--serve [HOST:] PORT, -s [HOST:] PORT
serve a console on the given interface instead
--no-readline force readline disabling
--banner BANNER provide a custom banner
--locals LOCALS provide custom locals as a dictionary
-m MODULE run a python moduleSimple usageThe following example demonstrates the use ofawaitinside the console:$apythonPython 3.5.0 (default, Sep 7 2015, 14:12:03)
[GCC 4.8.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
---
This console is running in an asyncio event loop.
It allows you to wait for coroutines using the 'await' syntax.
Try: await asyncio.sleep(1, result=3, loop=loop)
---
>>> await asyncio.sleep(1, result=3)  # Wait one second...
3
>>>
Documentation
Find more examples in the documentation and the example directory.
Limitations
The python console exposed by aioconsole is quite limited compared to modern consoles such as IPython or ptpython. Luckily, those projects gained greater asyncio support over the years. In particular, the following use cases overlap with aioconsole capabilities:
Embedding a ptpython console in an asyncio program
Using the await syntax in an IPython console
Contact
Vincent Michel: [email protected] |
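As a quick illustration of the "asynchronous equivalents to input and print" mentioned in the aioconsole feature list above, here is a small self-contained sketch using the ainput and aprint coroutines.
import asyncio
from aioconsole import ainput, aprint

async def main():
    # ainput/aprint are the awaitable counterparts of input()/print()
    name = await ainput("What is your name? ")
    await aprint(f"Hello, {name}!")

asyncio.run(main())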
aiocontext | Context information storage for asyncio.
Project Information
AioContext is released under the MIT license, its documentation lives at Read the Docs and the code on GitHub. |
aiocontextvars | aiocontextvarsIMPORTANT:This package will be deprecated aftercontextvars asyncio backportis fixed. Before then, this library
experimentally provides the missing asyncio support for thecontextvarsbackport library. Please read more in Python 3.7contextvars
documentation.
Compatibility
In Python 3.7 this package is 100% contextvars. In Python 3.5 and 3.6, this package added asyncio support to the PEP-567
backport package also namedcontextvars, in a very different way than
Python 3.7contextvarsimplementation:call_soon()and family methods.Python 3.7 added keyword argumentcontexttocall_soon()and its family
methods. By default those methods will copy (inherit) the current context and
run the given method in that context. Butaiocontextvarswon’t touch the
loop, so in order to achieve the same effect, you’ll need to:loop.call_soon(copy_context().run, my_meth)Task local.Python 3.7 used above keyword argumentcontextinTaskto make sure
that each step of a coroutine is run in the same context inherited at the time
its driving task was created. Meanwhile,aiocontextvarsusesTask.current_task()to achieve similar effect: it hacks asyncio and
attaches a copied context to the task on its creation, and replaces thread
local with current task instance to share the context. This behaves identically
to Python 3.7 most of the time. What you need to do is to import aiocontextvars before creating loops.
Custom tasks and loops.
Because the above hack is done by replacing asyncio.get_event_loop and loop.create_task, tasks and loops created by custom/private APIs
won't behave correctly as expected, e.g. uvloop.new_event_loop() or asyncio.Task(). Also, event loops created before importing aiocontextvars are not patched either. So overall, you should import aiocontextvars at the beginning, before creating event loops, always use asyncio.* to operate loops/policies, and use the public asyncio API to create
tasks.
Credits
Fantix King is the author and maintainer of this library. This library is open
source software under BSD license.History0.2.1 (2018-10-24)Changed to single module layout.Updated README.0.2.0 (2018-09-09)This is a breaking change.Most implementation is replaced withcontextvars. In Python 3.5 and 3.6,aiocontextvarsdepends oncontextvarsthe PEP-567 backport in PyPI, and patches it to partially
support asyncio; in Python 3.7aiocontextvarsis only a delegate to the
built-incontextvarslibrary.ModifiedContextVar.set()to return a token.AddedContextVar.reset(token).RemovedContextVar.delete().Removedenable_inherit()anddisable_inherit(), inherit is always enabled.Addedcopy_context()andContext.run().RemovedContext.current()andContext.inherited.Fixed issue thatset_event_loop(None)fails (contributed by J.J. Jackson in #10 #11)0.1.2 (2018-04-04)Supported Python 3.5.0.1.1 (2017-12-03)Fixed setup.py0.1.0 (2017-12-03)First release on PyPI. |
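To make the "task local" behaviour described above concrete, here is a minimal sketch using the plain PEP-567 contextvars API (which is what this package delegates to on Python 3.7+; on 3.5/3.6 you would import aiocontextvars first, as explained above).
import asyncio
from contextvars import ContextVar

request_id = ContextVar('request_id', default=None)

async def handler():
    # The task inherits a copy of the context captured when it was created,
    # so the value set in main() is visible here.
    print('request_id in task:', request_id.get())

async def main():
    request_id.set('abc-123')
    await asyncio.create_task(handler())

asyncio.run(main())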
aiocontroller | aiocontroller |
aio-cooker-client | Cooker Client PythonPython (asyncio) client library for Quortex Cooker API.InstallationThis client can be installed by runningpip install aio_cooker_client. It requires Python 3.7+ to run.Usageimportaio_cooker_clientclient=aio_cooker_client.CookerClient(cooker_domain_name=DOMAIN_NAME,client_id=CLIENT_ID,client_secret=CLIENT_SECRET,cache_ttl=120,)cred=client.get_credential() |
aiocord | A modern API wrapper for Discord.
Installation
pip install aiocord
Features
Complete: Implements the entirety of Discord's services.
Asynchronous: Written in pure asyncio for native parallelism.
Modular: Any component (such as HTTP) can be used in isolation.
Ergonomic: Comes with extreme purpose-driven data reception and caching.
Interactive: Supports slash-commands and related utilities out of the box.
Example
Create a widget/__init__.py file:
import aiocord

@aiocord.client.listen(aiocord.events.CreateMessage)
async def handle(info, event):
    if (message := event.message).author.id == info.client.cache.user.id:
        return
    await info.client.create_message(
        message.channel_id,
        content=f'{message.author.mention()} said {message.content}!'
    )
And run the following in your terminal:
aiocord --token <TOKEN> start widget
This is a simple example to get you started in seconds, but the library covers a vast wealth of tools to fit any scenario.
Check out the Documentation's Examples section for more, such as how to use Commands and Interactions. |
aiocore | UNKNOWN |
aio.core | Utils for asyncio. |
aiocorenlp | aiocorenlp
High-fidelity asyncio-capable Stanford CoreNLP library. Heavily based on ner and nltk.
Rationale and differences from nltk
For every tag operation (in other words, every call to StanfordTagger.tag*), nltk runs a Stanford JAR (stanford-ner.jar/stanford-postagger.jar) in a newly spawned Java subprocess.
In order to pass the input text to these JARs,nltkfirst writes it to atempfileand includes its path in the Java command line using the-textFileflag.This method works well in sequential applications, however once scaled up by concurrency and stress problems begin to arise:Python'stempfile.mkstempdoesn't work very well on Windows to begin with and starts to break down under stress.Calls totempfile.mkstempstart to fail which in turn results in Stanford code failing (no input file to read).Temporary files get leaked resulting in negative impact on disk usage.Repeated calls tosubprocessmean:Multiple Java processes run in parallel causing negative impact on CPU and memory usage.OS-level subprocess and Java startup code has to be run every time causing additional negative impact on CPU usage.All this causes unnecessary slowdown and bad reliability to user-written code.Patchingnltk's code to usetempfile.TemporaryDirectoryinstead oftempfile.mkstempseemed to resolve issue 1 but issue 2 would require more work.This library runs the Stanford code in a server mode and sends input text over TCP, meaning:Filesystem operations and temporary files/directories are avoided entirely.There's no need to run a Java subprocess more than once.The only synchronization bottleneck is offloaded to Java'sSocketServerclass which is used in the Stanford code.CPU, memory and disk usage is greatly reduced.Differences fromnerasynciosupport.Method name manglingis inexplicably enabled in thener.client.NERclass, making subclassing not practical.The ner library appears to be abandoned.Differences fromstanzaasynciosupport.Stanza aims to provide a wider range of uses.Basic Usage>>>fromaiocorenlpimportner_tag>>>awaitner_tag("I complained to Microsoft about Bill Gates.")[('O', 'I'), ('O', 'complained'), ('O', 'to'), ('ORGANIZATION', 'Microsoft'), ('O', 'about'), ('PERSON', 'Bill'), ('PERSON', 'Gates.')]This usage doesn't require interfacing with the server and socket directly and is suitable for low frequency/one-time tagging.Advanced UsageTo fully take advantage of this library's benefits theAsyncNerServerandAsyncPosServerclasses should be used:fromaiocorenlp.async_ner_serverimportAsyncNerServerfromaiocorenlp.async_corenlp_socketimportAsyncCorenlpSocketserver=AsyncNerServer()port=server.start()print(f"Server started on port{port}")socket:AsyncCorenlpSocket=server.get_socket()whileTrue:text=input("> ")iftext=="exit":breakprint(awaitsocket.tag(text))server.stop()Context manager is supported as well:fromaiocorenlp.async_ner_serverimportAsyncNerServerserver:AsyncNerServerasyncwithAsyncNerServer()asserver:socket=server.get_socket()whileTrue:text=input("> ")iftext=="exit":breakprint(awaitsocket.tag(text))ConfigurationAs seen above, all classes and functions this library exposes may be used without arguments (default values).Optionally, the following arguments may be passed toAsyncNerServer(and by extensionner_tag/pos_tag):port: Server bind port. LeaveNonefor random port.model_path: Path to language model. LeaveNoneto letnltkfind the model (supportsSTANFORD_MODELSenvironment variable).jar_path: Path tostanford-*.jar. LeaveNoneto letnltkfind the jar (supportsSTANFORD_POSTAGGERenvironment variable, for NER as well).output_format: Output format. SeeOutputFormatenum for values. Default isslashTags.encoding: Output encoding.java_options: Additional JVM options.It is not possible to configure the server bind interface. This is a limitation imposed by the Stanford code. |
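The Configuration paragraph above lists optional arguments such as model_path and jar_path that can be passed to AsyncNerServer and, by extension, to ner_tag/pos_tag. Below is a small sketch of passing them explicitly; the file names are placeholders for wherever your Stanford NER model and JAR actually live.
import asyncio
from aiocorenlp import ner_tag

async def main():
    tags = await ner_tag(
        "I complained to Microsoft about Bill Gates.",
        model_path="english.all.3class.distsim.crf.ser.gz",  # placeholder model path
        jar_path="stanford-ner.jar",                          # placeholder jar path
    )
    print(tags)

asyncio.run(main())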
aio-cosmos | aio-cosmos
Asyncio SDK for Azure Cosmos DB. This library is intended to be a very thin asyncio wrapper around the Azure Cosmos DB REST API.
It is not intended to have feature parity with the Microsoft Azure SDKs but to provide async versions of the most commonly used interfaces.Feature SupportDatabases✅ List✅ Create✅ DeleteContainers✅ Create✅ DeleteDocuments✅ Create Single✅ Create Concurrent Multiple✅ Delete✅ Get✅ QueryLimitationsThe library currently only supports Session level consistency, this may change in the future.
For concurrent writes the maximum concurrency level is based on a maximum of 100 concurrent
connections from the underlying aiohttp library. This may be exposed to the user as a client
setting in a future version.Sessions are managed automatically for document operations. The session token is returned in the
result so it is possible to manage sessions manually by providing this value in session_token to
the appropriate methods. This facilitates sending the token value back to an end client in a
session cookie so that writes and reads can maintain consistency across multiple instances of
Cosmos.There is currently no retry policy on failed connections/broken connections and this must be entirely
managed by the end user code. This may be implemented in the futureInstallationpipinstallaio-cosmosUsageClient Setup and Basic UsageThe client can be instantiated using either the context manager as below or directly using the CosmosClient class.
If using the CosmosClient class directly the user is responsible for calling the .connect() and .close() methods to
ensure the client is boot-strapped and resources released at the appropriate times.fromaio_cosmos.clientimportget_clientasyncwithget_client(endpoint,key)asclient:awaitclient.create_database('database-name')awaitclient.create_container('database-name','container-name','/partition_key_document_path')doc_id=str(uuid4())res=awaitclient.create_document(f'database-name','container-name',{'id':doc_id,'partition_key_document_path':'Account-1','description':'tax surcharge'},partition_key="Account-1")Querying DocumentsDocuments can be queried using the query_documents method on the client. This method returns an AsyncGenerator and should
be used in an async for statement as below. The generator automatically handles paging for large datasets. If you don't
wish to iterate through the results use a list comprehension to collate all of them.asyncfordocinclient.query_documents(f'database-name','container-name',query="select * from r where r.account = 'Account-1'",partition_key="Account-1"):print(f'doc returned by query:{doc}')Concurrent Writes / Multiple DocumentsThe client provides the ability to issue concurrent document writes using asyncio/aiohttp. Each document is represented
by a tuple of (document, partition key value) as below.docs=[({'id':str(uuid4()),'account':'Account-1','description':'invoice paid'},'Account-1'),({'id':str(uuid4()),'account':'Account-1','description':'VAT remitted'},'Account-1'),({'id':str(uuid4()),'account':'Account-1','description':'interest paid'},'Account-1'),({'id':str(uuid4()),'account':'Account-2','description':'annual fees'},'Account-2'),({'id':str(uuid4()),'account':'Account-2','description':'commission'},'Account-2'),]res=awaitclient.create_documents(f'database-name','container-name',docs)ResultsResults are returned in a dictionary with the following format:{'status':str,'code':int,'session_token':Optional[str],'error':Optional[str],'data':Union[dict,list]}status will be either 'ok' or 'failed'code is the integer HTTP response codesession_token is the string session code vector returned by Cosmoserror is a string error message to provide context to a failed statusdata is the direct JSON response from Cosmos and will contain any error information in the case of failed operationsNote, to see an error return in the above format you must passraise_on_failure=Falseto the client constructor. |
aiocouch | aiocouchAn asynchronous client library for CouchDB 2.0 based on asyncio using aiohttpKey featuresAll requests are asynchronus using aiohttpSupports CouchDB 2.x and 3.xSupport for modern Python ≥ 3.7Library installationpip install aiocouchGetting startedThe following code retrieves and prints the list ofincredientsof theapple_pierecipe.
Theincredientsare stored as a list in theapple_pieaiocouch.document.Document,
which is part of therecipeaiocouch.database.Database. We use the context manageraiocouch.CouchDBto create a new session.fromaiocouchimportCouchDBasyncwithCouchDB("http://localhost:5984",user="admin",password="admin")ascouchdb:db=awaitcouchdb["recipes"]doc=awaitdb["apple_pie"]print(doc["incredients"])We can also create new recipes, for instance for some delicious cookies.new_doc=awaitdb.create("cookies",data={"title":"Granny's cookies","rating":"★★★★★"})awaitnew_doc.save()For further details please refer to the documentation, which is availablehere on readthedocs.org.Run examplesSetup the CouchDB URL and credentials using the environment variablesInstall dependencies usingpip install --editable '.[examples]'run for instancepython examples/getting_started.pyRun testsInstall dependencies usingpip install --editable '.[tests]'Setup the CouchDB URL and credentials using the environment variables (COUCHDB_HOST,COUCHDB_USER,COUCHDB_PASS)runpytest --cov=aiocouchGenerate documentationInstall dependencies usingpip install '.[docs]'switch to thedocsdirectory:cd docsrunmake html |
aiocouchdb | source:https://github.com/aio-libs/aiocouchdbdocumentation:http://aiocouchdb.readthedocs.org/en/latest/license:BSDCouchDB client built on top ofaiohttpand made forasyncio.Current status:beta.aiocouchdbhas all CouchDB API implements up to
1.6.1 release. However, it may lack of some usability and stability bits, but
work is in progress. Feel free tosend pull requestoropen issueif
you’d found something that should be fixed.Features:Modern CouchDB client for Python 3.3+ based onaiohttpComplete CouchDB API support (JSON and Multipart) up to 1.6.1 versionMultiuser workflow with Basic Auth, Cookie, Proxy and OAuth supportStateless behaviorStream-like handling views, changes feeds and bulk docs uploadRoadmap (not exactly in that order):Cloudant supportCouchDB 2.0 supportElasticSearch CouchDB river supportGeoCouch supportMicroframework for OS daemons and external handlersNative integration with Python Query ServerReplicator-as-a-Library / Replicator-as-a-ServiceStateful APIRequirementsPython 3.3+aiohttpoauthlib(optional)Changes0.9.1 (2016-02-03)Read views and changes feeds line by line, not by chunks.
This fixes #8 and #9 issues.Deprecate Python 3.3 support. 0.10 will be 3.4.1+ only.0.9.0 (2015-10-31)First release in aio-libs organization (:Add context managers for response and feeds objects to release connection
when work with them is doneUse own way to handle JSON responses that doesn’t involves chardet usageAdd HTTPSession object that helps to apply the same auth credentials and
TCP connector for the all further requests made with itaiocouchdb now uses own request module which is basically fork of aiohttp oneAuthProviders API upgraded for better workflowFix _bulk_docs request with new_editWorkaround COUCHDB-2295 by calculating multipart request bodyAllow to pass event loop explicitly to every major objectsFix parameters for Server.replicate methodMinor fixes for docstringsQuite a lot of changes in Makefile commands for better lifeMinimal requirements for aiohttp raised up to 0.17.4 version0.8.0 (2015-03-20)Source tree was refactored in the way to support multiple major CouchDB
versions as like as the other friendly forksDatabase create and delete methods now return exact the same response as
CouchDB sends backEach module now contains __all__ list to normalize their exportsAPI classes and Resource now has nicer __repr__ outputBetter error messages formatFix function_clause error on attempt to update a document with attachments
by using multipart requestDocument.update doesn’t makes document’s dict invalid for further requests
after multipart oneFixed accidental payload sent with HEAD/GET/DELETE requests which caused
connection close from CouchDB sideAdded integration with Travis CICode cleaned by following pylint and flake8 noticesAdded short tutorial for documentationMinor fixes and Makefile improvements0.7.0 (2015-02-18)Greatly improved multipart module, added multipart writerDocument.update now supports multipart requests to upload
multiple attachments in single requestAdded Proxy Authentication providerMinimal requirements for aiohttp raised up to 0.14.0 version0.6.0 (2014-11-12)Adopt test suite to run against real CouchDB instanceDatabase, documents and attachments now provides access to their name/idRemove redundant longnamed constructorsConstruct Database/Document/Attachment instances through __getitem__ protocolAdd Document.rev method to get current document`s revisionAdd helpers to work with authentication database (_users)Add optional limitation of feeds bufferAll remove(…) methods are renamed to delete(…) onesAdd support for config option existence checkCorrectly set members for database securityFix requests with Accept-Ranges header against attachmentsFix views requests when startkey/endkey should be nullAllow to pass custom query parameters and request headers onto changes feed
requestHandle correctly HTTP 416 error responseMinor code fixes and cleanup0.5.0 (2014-09-26)Last checkpoint release. It’s in beta now!Implements CouchDB Design Documents HTTP APIViews refactoring and implementation consolidation0.4.0 (2014-09-17)Another checkpoint releaseImplements CouchDB Attachment HTTP APIMinimal requirements for aiohttp raised up to 0.9.1 versionMinor fixes for Document API0.3.0 (2014-08-18)Third checkpoint releaseImplements CouchDB Document HTTP APISupport document`s multipart API (but not doc update due to COUCHDB-2295)Minimal requirements for aiohttp raised up to 0.9.0 versionBetter documentation0.2.0 (2014-07-08)Second checkpoint releaseImplements CouchDB Database HTTP APIBulk docs accepts generator as an argument and streams request doc by docViews are processed as streamUnified output for various changes feed typesBasic Auth accepts non-ASCII credentialsMinimal requirements for aiohttp raised up to 0.8.4 version0.1.0 (2014-07-01)Initial checkpoint releaseImplements CouchDB Server HTTP APIBasicAuth, Cookie, OAuth authentication providersMulti-session workflow |
aiocqhttp | aiocqhttp
aiocqhttp is the Python SDK for OneBot (formerly the CQHTTP plugin for CoolQ). It uses asynchronous I/O, wraps the web-server related code, and supports both of OneBot's communication methods, HTTP and reverse WebSocket, so that Python developers can write plugins conveniently. This SDK requires Python 3.7 or higher, and it is recommended to pair it with an implementation that supports OneBot v11.
Documentation: see the docs for usage instructions.
Having problems: if you run into any issue while using this SDK, please open an issue; contributions to the project are also welcome, see the contribution guide for the workflow. |
aiocqhttp-sanic | CQHTTP Python Async SDK
CQHTTP Python Async SDK is the asynchronous version of the Python SDK for CoolQ's CQHTTP plugin. It uses asynchronous I/O, wraps the web-server related code, and supports both of CQHTTP's communication methods, HTTP and reverse WebSocket, so that Python developers can write plugins conveniently. This SDK requires Python 3.7 or higher and CQHTTP v4.8 or higher. For the synchronous version of the CQHTTP Python SDK, see python-cqhttp.
Documentation: see the docs for usage instructions.
Having problems: if you run into any issue while using this SDK, please open an issue; contributions to the project are also welcome, see the contribution guide for the workflow. |
aiocqlengine | aiocqlengineAsync wrapper for cqlengine of cassandra python driver.This project is built oncassandra-python-driver.Installation$pipinstallaiocqlengineChange log0.3.0Due toaiocassandrais not maintained, removed theaiocassandradependency.0.2.0Create new session wrapper forResultSet, users need to wrap session byaiosession_for_cqlengine:fromaiocqlengine.sessionimportaiosession_for_cqlengineAdd new method ofAioModelfor paging:asyncforresultsinAioModel.async_iterate(fetch_size=100):# Do something with resultspass0.1.1AddAioBatchQuery:batch_query=AioBatchQuery()foriinrange(100):Model.batch(batch_query).create(id=uuid.uuid4())awaitbatch_query.async_execute()Example usageimportasyncioimportuuidimportosfromaiocqlengine.modelsimportAioModelfromaiocqlengine.queryimportAioBatchQueryfromaiocqlengine.sessionimportaiosession_for_cqlenginefromcassandra.clusterimportClusterfromcassandra.cqlengineimportcolumns,connection,managementclassUser(AioModel):user_id=columns.UUID(primary_key=True)username=columns.Text()asyncdefrun_aiocqlengine_example():# Model.objects.create() and Model.create() in async way:user_id=uuid.uuid4()awaitUser.objects.async_create(user_id=user_id,username='user1')awaitUser.async_create(user_id=uuid.uuid4(),username='user2')# Model.objects.all() and Model.all() in async way:print(list(awaitUser.async_all()))print(list(awaitUser.objects.filter(user_id=user_id).async_all()))# Model.object.update() in async way:awaitUser.objects(user_id=user_id).async_update(username='updated-user1')# Model.objects.get() and Model.get() in async way:user=awaitUser.objects.async_get(user_id=user_id)awaitUser.async_get(user_id=user_id)print(user,user.username)# Model.save() in async way:user.username='saved-user1'awaituser.async_save()# Model.delete() in async way:awaituser.async_delete()# Batch Query in async way:batch_query=AioBatchQuery()User.batch(batch_query).create(user_id=uuid.uuid4(),username="user-1")User.batch(batch_query).create(user_id=uuid.uuid4(),username="user-2")User.batch(batch_query).create(user_id=uuid.uuid4(),username="user-3")awaitbatch_query.async_execute()# Async iteratorasyncforusersinUser.async_iterate(fetch_size=100):pass# The original cqlengine functions were still thereprint(len(User.objects.all()))defcreate_session():cluster=Cluster()session=cluster.connect()# Create keyspace, if already have keyspace your can skip thisos.environ['CQLENG_ALLOW_SCHEMA_MANAGEMENT']='true'connection.register_connection('cqlengine',session=session,default=True)management.create_keyspace_simple('example',replication_factor=1)management.sync_table(User,keyspaces=['example'])# Wrap cqlengine connectionaiosession_for_cqlengine(session)session.set_keyspace('example')connection.set_session(session)returnsessiondefmain():# Setup connection for cqlenginesession=create_session()# Run the example function in asyncio looploop=asyncio.get_event_loop()loop.run_until_complete(run_aiocqlengine_example())# Shutdown the connection and loopsession.cluster.shutdown()loop.close()if__name__=='__main__':main()LicenseThis project is under MIT license. |
aiocrawler | No description available on PyPI. |
aiocron | Usage
aiocron provides a decorator to run a function at a scheduled time:
>>> import aiocron
>>> import asyncio
>>>
>>> @aiocron.crontab('*/30 * * * *')
... async def attime():
... print('run')
...
>>> asyncio.get_event_loop().run_forever()
You can also use it as an object:
>>> @aiocron.crontab('1 9 * * 1-5', start=False)
... async def attime():
... print('run')
...
>>> attime.start()
>>> asyncio.get_event_loop().run_forever()
Your function is still available at attime.func
You can also await a crontab. In this case, your coroutine can accept
arguments:>>> @aiocron.crontab('0 9,10 * * * mon,fri', start=False)
... async def attime(i):
... print('run %i' % i)
...
>>> async def once():
... try:
... res = await attime.next(1)
... except Exception as e:
... print('It failed (%r)' % e)
... else:
... print(res)
...
>>> asyncio.get_event_loop().run_forever()
Finally you can use it as a sleep coroutine. The following will wait until the
next hour:
>>> await crontab('0 * * * *').next()
If you don't like the decorator magic you can set the function by yourself:
>>> cron = crontab('0 * * * *', func=yourcoroutine, start=False)
Notice that unlike standard unix crontab you can specify seconds at the 6th
position.
aiocron uses croniter. Refer to
its documentation to know more about the crontab format. |
aiocronjob | aiocronjob
Schedule and run asyncio coroutines and manage them from a web interface or programmatically using the rest api.
Requires python >= 3.8
How to install
pip3 install aiocronjob
Usage example
See examples/simple_tasks.py
Rest API
Open localhost:8000/docs for endpoints docs.
curl example:
$ curl http://0.0.0.0:8000/api/jobs
TBD
Development
Requirements: Python >= 3.8 and PDM for backend
Install dependencies
$ git clone https://github.com/devtud/aiocronjob.git
$ cd aiocronjob
$ pdm sync
Run tests
pdm run coverage run -m unittest discover
pdm run coverage report -m |
aiocrontab | AIOCRONTABSample project to "flex" my asyncio [email protected]("*/5 * * * *")defprint_every_five_mminutes():print(f"{time.ctime()}: Hello World!!!!!")@aiocrontab.register("* * * * *")defprint_every_mminute():print(f"{time.ctime()}: Hello World!")aiocrontab.run()TODOsupport for diff timezonessupport for async tasktake logger as dependencyAdd more meaningful testsfix mypy errorsdocument the codebasedocument usage in readme |
aiocrossref | aiocrossrefAsynchronous client for CrossRef APIExampleimportasynciofromaiocrossrefimportCrossrefClientasyncdefworks(doi):client=CrossrefClient()returnawaitclient.works(doi)response=asyncio.get_event_loop().run_until_complete(works('10.21100/compass.v11i2.812'))assert(response=={'DOI':'10.21100/compass.v11i2.812','ISSN':['2044-0081','2044-0073'],'URL':'http://dx.doi.org/10.21100/compass.v11i2.812','abstract':'<jats:p>Abstract: Educational policy and provision is ''ever-changing; but how does pedagogy need to adapt to respond ''to transhumanism? This opinion piece discusses transhumanism, ''questions what it will mean to be posthuman, and considers the ''implications of this on the future of education.</jats:p>','author':[{'affiliation':[],'family':'Gibson','given':'Poppy Frances','sequence':'first'}],'container-title':['Compass: Journal of Learning and Teaching'],'content-domain':{'crossmark-restriction':False,'domain':[]},'created':{'date-parts':[[2018,12,17]],'date-time':'2018-12-17T09:42:26Z','timestamp':1545039746000},'deposited':{'date-parts':[[2019,6,11]],'date-time':'2019-06-11T10:29:57Z','timestamp':1560248997000},'indexed':{'date-parts':[[2020,4,14]],'date-time':'2020-04-14T14:52:16Z','timestamp':1586875936184},'is-referenced-by-count':0,'issn-type':[{'type':'print','value':'2044-0073'},{'type':'electronic','value':'2044-0081'}],'issue':'2','issued':{'date-parts':[[2018,12,10]]},'journal-issue':{'issue':'2','published-online':{'date-parts':[[2018,12,10]]}},'link':[{'URL':'https://journals.gre.ac.uk/index.php/compass/article/viewFile/812/pdf','content-type':'application/pdf','content-version':'vor','intended-application':'text-mining'},{'URL':'https://journals.gre.ac.uk/index.php/compass/article/viewFile/812/pdf','content-type':'unspecified','content-version':'vor','intended-application':'similarity-checking'}],'member':'8854','original-title':[],'prefix':'10.21100','published-online':{'date-parts':[[2018,12,10]]},'publisher':'Educational Development Unit, University of Greenwich','reference-count':0,'references-count':0,'relation':{},'score':1.0,'short-container-title':['Compass'],'short-title':[],'source':'Crossref','subtitle':[],'title':['From Humanities to Metahumanities: Transhumanism and the Future ''of Education'],'type':'journal-article','volume':'11'}) |
aio_crud_store | A very simple subset of databases capabilities intended to use most of dbs
the same way. You can use it to write database independent asyncio
libraries. It currently supports mongodb (through motor) and postgresql
(throughaiopg), please feel free to add other dbs implementations.Installpip install aio_crud_storeUsageThe api is very simple and obvious (I hope).
The working examples are inexamplesdirectory.# createid=awaitstore.create({'foo':'bar'})# readdoc=awaitstore.read(foo='bar')# updateawaitstore.update(id,{'foo':'baz','spam':1})# deleteawaitstore.delete(id) |
aiocrwaler | No description available on PyPI. |
aiocryptocurrency | aiocryptocurrencyProvides a single abstract interface for managing the funds of various
cryptocurrency wallets via their RPC interfaces.Support for:This project currently supports the following coins:MoneroWowneroFiroQuick startpip install aiocryptocurrencyExample usingFiro(the API is the same for other coins).importasynciofromaiocryptocurrency.coins.neroimportWownero,Monerofromaiocryptocurrency.coins.firoimportFiroasyncdefmain():# ./firod -testnet -rpcbind=127.0.0.1 -rpcallowip=127.0.0.1 -rpcport=18888 -rpcuser=admin -rpcpassword=adminfiro=Firo()firo.port=18888firo.basic_auth=('admin','admin')# create a new receiving addressblob=awaitfiro.create_address()address=blob['address']# # list incoming txstxs=awaitfiro.list_txs(address)fortxintxs:print(tx.txid)# send paymentdest='TRwRAjxfAVKVZYQGdmskZRDSBw9E5YqjC8'amount=0.05txid=awaitfiro.send(dest,amount)loop=asyncio.get_event_loop()loop.run_until_complete(main()) |
aiocryptopay | @cryptobotasynchronous api wrapperDocs:https://help.crypt.bot/crypto-pay-apiMainNet -@CryptoBotTestNet -@CryptoTestnetBotInstallpipinstallaiocryptopay
poetryaddaiocryptopayBasic methodsfromaiocryptopayimportAioCryptoPay,Networkscrypto=AioCryptoPay(token='1337:JHigdsaASq',network=Networks.MAIN_NET)profile=awaitcrypto.get_me()currencies=awaitcrypto.get_currencies()balance=awaitcrypto.get_balance()rates=awaitcrypto.get_exchange_rates()print(profile,currencies,balance,rates,sep='\n')Create, get and delete invoice methodsfromaiocryptopayimportAioCryptoPay,Networkscrypto=AioCryptoPay(token='1337:JHigdsaASq',network=Networks.MAIN_NET)invoice=awaitcrypto.create_invoice(asset='TON',amount=1.5)print(invoice.bot_invoice_url)# Create invoice in fiatfiat_invoice=awaitcrypto.create_invoice(amount=5,fiat='USD',currency_type='fiat')print(fiat_invoice)old_invoice=awaitcrypto.get_invoices(invoice_ids=invoice.invoice_id)print(old_invoice.status)deleted_invoice=awaitcrypto.delete_invoice(invoice_id=invoice.invoice_id)print(deleted_invoice)# Get amount in crypto by fiat summamount=awaitcrypto.get_amount_by_fiat(summ=100,asset='TON',target='USD')invoice=awaitcrypto.create_invoice(asset='TON',amount=amount)print(invoice.bot_invoice_url)Create, get and delete check methods# The check creation method works when enabled in the application settingsfromaiocryptopayimportAioCryptoPay,Networkscrypto=AioCryptoPay(token='1337:JHigdsaASq',network=Networks.MAIN_NET)check=awaitcrypto.create_check(asset='USDT',amount=1)print(check)old_check=awaitcrypto.get_checks(check_ids=check.check_id)print(old_check)deleted_check=awaitcrypto.delete_check(check_id=check.check_id)print(deleted_check)WebHook usagefromaiohttpimportwebfromaiocryptopayimportAioCryptoPay,Networksfromaiocryptopay.models.updateimportUpdateweb_app=web.Application()crypto=AioCryptoPay(token='1337:JHigdsaASq',network=Networks.MAIN_NET)@crypto.pay_handler()asyncdefinvoice_paid(update:Update,app)->None:print(update)asyncdefcreate_invoice(app)->None:invoice=awaitcrypto.create_invoice(asset='TON',amount=1.5)print(invoice.bot_invoice_url)asyncdefclose_session(app)->None:awaitcrypto.close()web_app.add_routes([web.post('/crypto-secret-path',crypto.get_updates)])web_app.on_startup.append(create_invoice)web_app.on_shutdown.append(close_session)web.run_app(app=web_app,host='localhost',port=3001) |
aiocrypto-prices | # aiocrypto_pricesVery early version - API WILL CHANGE!If you happen to stumble upon this library, please provide any and all feedback
through any means comfortable to you.## Install$ pipenv install aiocrypto_pricesor$ pip install aiocrypto_prices –user## UsageBehind the scenes we are (currently) using cryptocompare’s API,
which means all of the symbols need to be in their format and supported
by them.### Simple`python >> from aiocrypto_prices import currencies >> awaitcurrencies.ETH.prices.get('USD')1053.28 `### AdvancedUseful for loading things in parallel.Careful, if you’re not accessing the target price throughget,
it might not reload after cache expires`python >> from aiocrypto_prices import currencies >>currencies.add('BTC','ETH', 'IOT') >> await currencies.load_all() >> currencies.IOT.prices.USD 2.79 `### Setting up extra options`python >>> from aiocrypto_prices import currencies >>> currencies.cache = 120 # 2 minute cache >>>currencies.target_currencies.append('EUR')# In addition to defaults, let's fetch EUR too. >>> currencies.extra_information = True # Get name and url of a logo `or`python >>> from aiocrypto_prices import Currencies >>> currencies = Currencies(cache=120,target_currencies=['USD','EUR'],extra_information=True) `## Changelog### 0.0.3extra_information parameter was renamed to humannew paramter ‘full’ providing market cap and supplyMore data should be provided with ‘full’, but requires a redesign of Prices class## TODOall the TODOs scattered around the codeAll the available information cryptocompare offersAssign amount in that currency? - perhaps aiocrypto_folio?Implement adding together currencies of the same symbol and possibly other interactionsaiocrypto_exchangesaiocrypto_pools |
aio-crystal-pay | aio_crystal_pay
Installation
pip install aio_crystal_pay |
aiocse | aiocse
An asynchronous wrapper for the Google Custom Search JSON API written in Python.
This is basically a copy of async-cse with some modifications.
Features
100% asynchronous (non-blocking)
Toggle safe-search on/off
Image search
Total results
Max results
Query time
Installation
This library can be installed through PyPi:
pip install -U aiocse
You can also install through GitHub:
pip install -U git+https://github.com/Daudd/aiocse
Getting Started
An API key is required for aiocse, which you can get from here. Keep in mind that one key is limited to 100 requests per day, so consider getting more than one key and putting them in a list. |
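The aiocse entry above describes the API key setup but stops short of a usage snippet. Since aiocse presents itself as a modified copy of async-cse, the sketch below assumes a similar interface; the class name, method name and result attributes are assumptions to verify against the aiocse documentation.
import asyncio
import aiocse

async def main():
    client = aiocse.Search("your-google-api-key")     # a list of keys may also work
    results = await client.search("python asyncio")   # assumed search coroutine
    print(results[0].title, results[0].url)
    await client.close()

asyncio.run(main())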
aiocsv | aiocsv
Asynchronous CSV reading and writing.
Installation
pip install aiocsv. Python 3.8+ is required.
This module contains an extension written in C. Pre-built binaries
may not be available for your configuration. You might need a C compiler
and Python headers to install aiocsv.UsageAsyncReader & AsyncDictReader accept any object that has aread(size: int)coroutine,
which should return a string.AsyncWriter & AsyncDictWriter accept any object that has awrite(b: str)coroutine.Reading is implemented using a custom CSV parser, which should behave exactly like the CPython parser.Writing is implemented using the synchronous csv.writer and csv.DictWriter objects -
the serializers write data to a StringIO, and that buffer is then rewritten to the underlying
asynchronous file.ExampleExample usage withaiofiles.importasyncioimportcsvimportaiofilesfromaiocsvimportAsyncReader,AsyncDictReader,AsyncWriter,AsyncDictWriterasyncdefmain():# simple readingasyncwithaiofiles.open("some_file.csv",mode="r",encoding="utf-8",newline="")asafp:asyncforrowinAsyncReader(afp):print(row)# row is a list# dict reading, tab-separatedasyncwithaiofiles.open("some_other_file.tsv",mode="r",encoding="utf-8",newline="")asafp:asyncforrowinAsyncDictReader(afp,delimiter="\t"):print(row)# row is a dict# simple writing, "unix"-dialectasyncwithaiofiles.open("new_file.csv",mode="w",encoding="utf-8",newline="")asafp:writer=AsyncWriter(afp,dialect="unix")awaitwriter.writerow(["name","age"])awaitwriter.writerows([["John",26],["Sasha",42],["Hana",37]])# dict writing, all quoted, "NULL" for missing fieldsasyncwithaiofiles.open("new_file2.csv",mode="w",encoding="utf-8",newline="")asafp:writer=AsyncDictWriter(afp,["name","age"],restval="NULL",quoting=csv.QUOTE_ALL)awaitwriter.writeheader()awaitwriter.writerow({"name":"John","age":26})awaitwriter.writerows([{"name":"Sasha","age":42},{"name":"Hana"}])asyncio.run(main())Differences withcsvaiocsvstrives to be a drop-in replacement for Python's builtincsv module. However, there are 3 notable differences:Readers accept objects with asyncreadmethods, instead of an AsyncIterable over lines
from a file.AsyncDictReader.fieldnamescan beNone- useawait AsyncDictReader.get_fieldnames()instead.Changes tocsv.field_size_limitare not picked up by existing Reader instances.
The field size limit is cached on Reader instantiation to avoid expensive function calls
on each character of the input.Referenceaiocsv.AsyncReaderAsyncReader(asyncfile: aiocsv.protocols.WithAsyncRead, **csvreaderparams)An object that iterates over records in the given asynchronous CSV file.
Additional keyword arguments are understood as dialect parameters.Iterating over this object returns parsed CSV rows (List[str]).Methods:__aiter__(self) -> selfasync __anext__(self) -> List[str]Read-only properties:dialect: The csv.Dialect used when parsingline_num: The number of lines read from the source file. This coincides with a 1-based index
of the line number of the last line of the recently parsed record.aiocsv.AsyncDictReaderAsyncDictReader(
asyncfile: aiocsv.protocols.WithAsyncRead,
fieldnames: Optional[Sequence[str]] = None,
restkey: Optional[str] = None,
restval: Optional[str] = None,
**csvreaderparams,
)An object that iterates over records in the given asynchronous CSV file.
All arguments work exactly the same was as in csv.DictReader.Iterating over this object returns parsed CSV rows (Dict[str, str]).Methods:__aiter__(self) -> selfasync __anext__(self) -> Dict[str, str]async get_fieldnames(self) -> List[str]Properties:fieldnames: field names used when converting rows to dictionaries⚠️Unlike csv.DictReader, this property can't read the fieldnames if they are missing -
it's not possible toawaiton the header row in a property getter.Useawait reader.get_fieldnames().reader=csv.DictReader(some_file)reader.fieldnames# ["cells", "from", "the", "header"]areader=aiofiles.AsyncDictReader(same_file_but_async)areader.fieldnames# ⚠️ Noneawaitareader.get_fieldnames()# ["cells", "from", "the", "header"]restkey: If a row has more cells then the header, all remaining cells are stored under
this key in the returned dictionary. Defaults toNone.restval: If a row has less cells then the header, then missing keys will use this
value. Defaults toNone.reader: Underlyingaiofiles.AsyncReaderinstanceRead-only properties:dialect: Link toself.reader.dialect- the current csv.Dialectline_num: The number of lines read from the source file. This coincides with a 1-based index
of the line number of the last line of the recently parsed record.aiocsv.AsyncWriterAsyncWriter(asyncfile: aiocsv.protocols.WithAsyncWrite, **csvwriterparams)An object that writes csv rows to the given asynchronous file.
In this object "row" is a sequence of values.Additional keyword arguments are passed to the underlying csv.writer instance.Methods:async writerow(self, row: Iterable[Any]) -> None:
Writes one row to the specified file.async writerows(self, rows: Iterable[Iterable[Any]]) -> None:
Writes multiple rows to the specified file.Readonly properties:dialect: Link to underlying's csv.reader'sdialectattributeaiocsv.AsyncDictWriterAsyncDictWriter(asyncfile: aiocsv.protocols.WithAsyncWrite, fieldnames: Sequence[str], **csvdictwriterparams)An object that writes csv rows to the given asynchronous file.
In this object "row" is a mapping from fieldnames to values.Additional keyword arguments are passed to the underlying csv.DictWriter instance.Methods:async writeheader(self) -> None: Writes header row to the specified file.async writerow(self, row: Mapping[str, Any]) -> None:
Writes one row to the specified file.async writerows(self, rows: Iterable[Mapping[str, Any]]) -> None:
Writes multiple rows to the specified file.Readonly properties:dialect: Link to underlying's csv.reader'sdialectattributeaiocsv.protocols.WithAsyncReadAtyping.Protocoldescribing an asynchronous file, which can be read.aiocsv.protocols.WithAsyncWriteAtyping.Protocoldescribing an asynchronous file, which can be written to. |
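The aiocsv reference above notes that readers accept any object exposing an async read(size) coroutine returning a string (the aiocsv.protocols.WithAsyncRead protocol). A minimal in-memory implementation is enough to drive AsyncReader, as sketched below.
import asyncio
from aiocsv import AsyncReader

class StringAsyncFile:
    """Tiny async file-like wrapper around an in-memory CSV string."""
    def __init__(self, data: str) -> None:
        self._data = data
        self._pos = 0

    async def read(self, size: int) -> str:
        chunk = self._data[self._pos:self._pos + size]
        self._pos += size
        return chunk  # returns "" once exhausted, signalling end of data

async def main():
    afp = StringAsyncFile("name,age\r\nJohn,26\r\nHana,37\r\n")
    async for row in AsyncReader(afp):
        print(row)

asyncio.run(main())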
aiocurl | handle=aiocurl.Curl()handle.setopt(aiocurl.URL,'https://example.com')awaithandle.perform()How?Using libcurl'ssocket interfaceto let asyncio's event loop do all the work ofwaiting for I/Oandscheduling of timeouts.multi_socket supports multiple parallel transfers — all done in the same single thread — and have been used to run several tens of thousands of transfers in a single application. It is usually the API that makes the most sense if you do a large number (>100 or so) of parallel transfers.This setup allows clients to scale up the number of simultaneous transfers much higher than with other systems, and still maintain good performance. The "regular" APIs otherwise waste far too much time scanning through lists of all the sockets.More examples?Awaiting multiple transfersUse any of asyncio's functions:awaitasyncio.gather(handle1.perform(),handle2.perform(),)Even better:multi=aiocurl.CurlMulti()awaitasyncio.gather(multi.perform(handle1),multi.perform(handle2),)Advantages of using a multi handle:connection reusemultiplexingshared SSL session and DNS cachePausing and resuming a transferSimply use the existing pause method:handle.pause(aiocurl.PAUSE_ALL)And to resume:handle.pause(aiocurl.PAUSE_CONT)For more pause options seelibcurl's documention.Stopping a tranferThe opposite of perform:handle.stop()And if the transfer is performed by a multi handle:multi.stop(handle)A stopped perform will returnNoneinstead of the finished handle:ifawaithandle.perform():print('finished')else:print('stopped')Cancelling a transferThis is just like stop(), except the corresponding perform() coroutine will be
cancelled instead:try:awaithandle.perform()exceptasyncio.CancelledError:print('cancelled')DependenciesPycURL 7.43.0.4 or above. It has essential fixes that make event-driven transfers work. Older releases fail to relay libcurl's event messages.(optional)Additional PycURLevent-related fixesthat make pausing and resuming of transfers work.Licenseaiocurl - asyncio extension of PycURL
Copyright (C) 2021 fsbs
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>. |
aiocurrencylayer | No description available on PyPI. |
aiocutter | UNKNOWN |
aiocv | AIOCVaiocv Is A Python Library Used To Track Hands, Track Pose, Detect Face, Detect Contours (Shapes), Detect Cars, Detect Number Plate, Detect Smile, Detect Eyes, Control Volume Using Gesture, Read QR Codes And Create Face Mesh On Image/Video.InstallationUse the package managerpipto install aiocv.pipinstallaiocvUsageHand Trackingimportaiocvimportcv2img=cv2.imread("hands.png")# Make An Objecthands=aiocv.HandTrack()# Use findHands() Method To Track Hands On Image/Videohands.findHands(img,draw=True)cv2.imshow("Image",img)cv2.waitKey(0)Params For findHands() Method :findHands(self,img=None,draw=True)If You Are Not Getting Desired Results, Consider Changing detectionConfidence = 1 and trackConfidence = 1Output :Pose Detectorimportaiocvimportcv2img=cv2.imread("man.png")# Make An Objectpose=aiocv.PoseDetector()# Use findPose() Method To Detect Pose On Image/Videopose.findPose(img,draw=True)cv2.imshow("Image",img)cv2.waitKey(0)Params For findPose() Method :findPose(self,img=None,draw=True)If You Are Not Getting Desired Results, Consider Changing detectionConfidence = 1 and trackConfidence = 1Output :Face Detectionimportaiocvimportcv2img=cv2.imread("elon_musk.png")# Make An Objectface=aiocv.FaceDetector()# Use findFace() Method To Detect Face On Image/Videoface.findFace(img,draw=True)cv2.imshow("Image",img)cv2.waitKey(0)Params For findFace() Method :findFace(self,img=None,draw=True)If You Are Not Getting Desired Results, Consider Changing detectionConfidence = 1Output :Face Meshimportaiocvimportcv2img=cv2.imread("elon_musk.png")# Make An Objectmesh=aiocv.FaceMesh()# Use findFaceMesh() Method To Detect Face And Draw Mesh On Image/Videomesh.findFaceMesh(img,draw=True)cv2.imshow("Image",img)cv2.waitKey(0)Params For findFaceMesh() Method :findFaceMesh(self,img=None,draw=True)If You Are Not Getting Desired Results, Consider Changing detectionConfidence = 1 and trackConfidence = 1Output :Contour (Shape) Detectionimportaiocvimportcv2img=cv2.imread("shapes.png")# Make An Objectshape=aiocv.ContourDetector(img)# Use findContours() Method To Detect Shapes On Image/Videoshape.findContours(img,draw=True)cv2.imshow("Image",img)cv2.waitKey(0)Output :Car Detectionimportaiocvimportcv2img=cv2.imread("car.png")# Make An Objectcar=aiocv.CarDetector(img)# Use findCars() Method To Detect Cars On Image/Videocar.findCars()cv2.imshow("Image",img)cv2.waitKey(0)Params For findCars() Method :findCars(self,color=(255,0,0),thickness=2)Output :Number Plate Detectionimportaiocvimportcv2img=cv2.imread("car.png")# Make An Objectcar=aiocv.NumberPlateDetector(img)# Use findNumberPlate() Method To Detect Number Plate On Image/Videocar.findNumberPlate()cv2.imshow("Image",img)cv2.waitKey(0)Params For findNumberPlate() Method :findNumberPlate(self,color=(255,0,0),thickness=2)Output :Smile Detectionimportaiocvimportcv2img=cv2.imread("person.png")# Make An Objectsmile=aiocv.SmileDetector(img)# Use findSmile() Method To Detect Smile On Image/Videosmile.findSmile()cv2.imshow("Image",img)cv2.waitKey(0)Params For findSmile() Method :findSmile(self,color=(255,0,0),thickness=2)Output :Eyes Detectionimportaiocvimportcv2img=cv2.imread("person.png")# Make An Objecteyes=aiocv.EyesDetector(img)# Use findEyes() Method To Detect Eyes On Image/Videoeyes.findEyes()cv2.imshow("Image",img)cv2.waitKey(0)Params For findEyes() Method :findEyes(self,color=(255,0,0),thickness=2)Output :Control Volume Using Gestureimportaiocv# Make An Objectgvc=aiocv.GestureVolumeControl()# Use controlVolume() Method To Control Volumegvc.controlVolume()Params For 
controlVolume() Method :controlVolume(self,color=(255,0,0),thickness=2)Params For GestureVolumeControl Class :gvc=aiocv.GestureVolumeControl(webcamIndex=0)# If You Want To Control From Other Camera, Set The webcamIndex Accordingly.Output :Read QR Codeimportaiocvimportcv2img=cv2.imread("qr.png")# Make An Objectqr=aiocv.QRCodeReader(img)# Use findQRCode() Method To Detect QR Code On Image/Videotext=qr.findQRCode()cv2.imshow("Image",img)cv2.waitKey(0)Params For findQRCode() Method :findQRCode(self,color=(255,0,0),thickness=3)To Print The Extracted Text :print(text)Output :ContributingPull Requests Are Welcome. For Major Changes, Please Open An Issue First To Discuss What You Would Like To Change.Please Make Sure To Update Tests As Appropriate.LicenseMIT |
aiocycletls | CycleTLS Wrapper - convenient, fast, no bansMinimal information before start usingpip3installaiocycletlsAfter install you must get build from my repository or create own for your system. (repository CycleTLS)This project working with proxy on low-level of Go networkSome exampleimportaiocycletlsasyncdefmain()->None:proxy=aiocycletls.WSProxyClient()response=awaitproxy.request(url="https://tools.scrapfly.io/api/fp/ja3?extended=1",method="GET",user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36",ja3=aiocycletls.JA3.CHROME_107)print(response.jsonable_body())# You see your JA3 <3 |
aiocypher | AsyncIO wrapper around the neo4j driver |
aiod | An incubating project. |
aio-daemon | aio-daemonPython async daemon module. Helps bootstrap your code as an async-aware server that comes with daemonization, logging and basic signal handling. |
aiodag | No description available on PyPI. |
aiodagpi | An asynchronous python API wrapper for Dagpi :https://dagpi.xyzInstallationInstall via pip, either directly from PyPI:>>>python3-mpipinstall-Uaiodagpior from the github repository:>>>python3-mpipinstall-Ugit+https://github.com/DevilJamJar/aiodagpiAlternatively, download files directly fromdownload files. This is not recommended and can cause directory issues later down the line if not placed and constructed correctly. If you must, usethis guidefor help on how to install python packages.UsageExamplesAuthorsaiodagpiwas written byRaj Sharma.dagpi.xyzwas constructed byDaggy1234. |
aiodanbooru | Danbooru is a Python library that provides an easy-to-use interface for
interacting with the Danbooru API. It allows you to search for posts,
retrieve post details, and download media files from the Danbooru image
board.FeaturesSimple and intuitive API for interacting with the Danbooru APIRetrieve posts based on tags and limitDownload media files (images, videos) associated with the postsSupports asynchronous requests using aiohttpInstallationYou can install Danbooru using pip:pipinstallaiodanbooruUsageHere’s a simple example that demonstrates how to use the Danbooru
library:fromaiodanbooru.apiimportDanbooruAPIasyncdefmain():api=DanbooruAPI(base_url="https://danbooru.donmai.us")posts=awaitapi.get_posts(tags=["cat_girl","solo"],limit=10)ifposts:post=posts[0]media_data=awaitpost.get_media()withopen(post.filename,"wb")asfile:file.write(media_data)print("Media file saved!")if__name__=="__main__":importasyncioloop=asyncio.get_event_loop()loop.run_until_complete(main())For more details and advanced usage examples, please refer to thedocumentation.ContributingContributions are welcome! If you have any suggestions, bug reports, or
feature requests, please open an issue on theGitHub
repository. Feel free to
submit pull requests with improvements or fixes.LicenseThis project is licensed under the MIT License. See theLICENSEfile for more information. |
aiodantic | AiodanticBuilding site... |
aiodata | A lightweight PostgREST proxy that does not get in your way.
Installing
pip3 install aiodata
Links
Documentation
Warning
Versions below 3.0.0 are not guaranteed to work due to dependency changes. |
aio-databases | AIO-DatabasesThe package gives you async support for a range of databases (SQLite,
PostgreSQL, MySQL).FeaturesHas no dependencies (except databases drivers)SupportsasyncioandtrioSupportsaiosqlite,aiomysql,aiopg,asyncpg,triopg,trio_mysqlManage pools of connectionsManage transactionsRequirementspython >= 3.7Installationaio-databasesshould be installed using pip:$pipinstallaio-databasesYou have to choose and install the required database drivers with:# To support SQLite$pipinstallaio-databases[aiosqlite]# asyncio# To support MySQL$pipinstallaio-databases[aiomysql]# asyncio$pipinstallaio-databases[trio_mysql]# trio# To support PostgreSQL (choose one)$pipinstallaio-databases[aiopg]# asyncio$pipinstallaio-databases[asyncpg]# asyncio$pipinstallaio-databases[triopg]# trio# To support ODBC (alpha state)$pipinstallaio-databases[aioodbc]# asyncioUsageInit a databasefromaio_databasesimportDatabase# Initialize a databasedb=Database('sqlite:///:memory:')# with default driver# Flesh out the driverdb=Database('aiosqlite:///:memory:',**driver_params)Setup a pool of connections (optional)Setup a pool of connections# Initialize a database's poolasyncdefmy_app_starts():awaitdb.connect()# Close the poolasyncdefmy_app_ends():awaitdb.disconnect()# As an alternative users are able to use the database# as an async context managerasyncwithdb:awaitmy_main_coroutine()Get a connection# Acquire and release (on exit) a connectionasyncwithdb.connection():awaitmy_code()# Acquire a connection only if it not existasyncwithdb.connection(False):awaitmy_code()If a pool is setup it will be usedRun SQL queriesawaitdb.execute('select $1','1')awaitdb.executemany('select $1','1','2','3')records=awaitdb.fetchall('select (2 * $1) res',2)assertrecords==[(4,)]record=awaitdb.fetchone('select (2 * $1) res',2)assertrecord==(4,)assertrecord['res']==4result=awaitdb.fetchval('select 2 * $1',2)assertresult==4Iterate through rows one by oneasyncforrecindb.iterate('select name from users'):print(rec)Manage connectionsBy default the database opens and closes a connection for a query.# Connection will be acquired and released for the queryawaitdb.fetchone('select%s',42)# Connection will be acquired and released againawaitdb.fetchone('select%s',77)Manually open and close a connection# Acquire a new connection objectasyncwithdb.connection():# Only one connection will be usedawaitdb.fetchone('select%s',42)awaitdb.fetchone('select%s',77)# ...# Acquire a new connection or use an existingasyncwithdb.connection(False):# ...If there any connection alreadydb.methodwould be using the current oneasyncwithdb.connection():# connection would be acquired hereawaitdb.fetchone('select%s',42)# the connection is usedawaitdb.fetchone('select%s',77)# the connection is used# the connection released thereManage transactions# Start a tranction using the current connectionasyncwithdb.transaction()astrans1:# do some work ...asyncwithdb.transaction()astrans2:# do some work ...awaittrans2.rollback()# unnessesary, the transaction will be commited on exit from the# current contextawaittrans1.commit()# Create a new connection and start a transactionasyncwithdb.tranction(True)astrans:# do some work ...Bug trackerIf you have any suggestions, bug reports or annoyances please report them to
the issue tracker athttps://github.com/klen/aio-databases/issuesContributingDevelopment of the project happens at:https://github.com/klen/aio-databasesLicenseLicensed under the MIT License |
aiodatagram | AsyncIO Datagram UtilsA set of utilities to handle UDP in the asyncio context.Installationpipinstallaiodatagram |
aiodataloader | Asyncio DataLoaderDataLoader is a generic utility to be used as part of your application's data
fetching layer to provide a simplified and consistent API over various remote
data sources such as databases or web services via batching and caching.A port of the "Loader" API originally developed by@schrocknat Facebook in
2010 as a simplifying force to coalesce the sundry key-value store back-end
APIs which existed at the time. At Facebook, "Loader" became one of the
implementation details of the "Ent" framework, a privacy-aware data entity
loading and caching layer within web server product code. This ultimately became
the underpinning for Facebook's GraphQL server implementation and type
definitions.Asyncio DataLoader is a Python port of the original JavaScriptDataLoaderimplementation. DataLoader is often used when implementing aGraphQLservice,
though it is also broadly useful in other situations.Getting StartedFirst, install DataLoader using pip.pipinstallaiodataloaderTo get started, create aDataLoader. EachDataLoaderinstance represents a
unique cache. Typically instances are created per request when used within a
web-server likeSanicif different users can see different things.Note: DataLoader assumes an AsyncIO environment withasync/awaitavailable only in Python 3.5+.BatchingBatching is not an advanced feature, it's DataLoader's primary feature.
Create loaders by providing a batch loading function.fromaiodataloaderimportDataLoaderclassUserLoader(DataLoader):asyncdefbatch_load_fn(self,keys):returnawaitmy_batch_get_users(keys)user_loader=UserLoader()A batch loading function accepts an Iterable of keys, and returns a Promise which
resolves to a List of values*.Then load individual values from the loader. DataLoader will coalesce all
individual loads which occur within a single frame of execution (a single tick
of the event loop) and then call your batch function with all requested keys.user1_future=user_loader.load(1)user2_future=user_loader.load(2)user1=awaituser1_futureuser2=awaituser2_futureuser1_invitedby=user_loader.load(user1.invited_by_id)user2_invitedby=user_loader.load(user2.invited_by_id)print("User 1 was invited by",awaituser1_invitedby)print("User 2 was invited by",awaituser2_invitedby)A naive application may have issued four round-trips to a backend for the
required information, but with DataLoader this application will make at most
two.DataLoader allows you to decouple unrelated parts of your application without
sacrificing the performance of batch data-loading. While the loader presents an
API that loads individual values, all concurrent requests will be coalesced and
presented to your batch loading function. This allows your application to safely
distribute data fetching requirements throughout your application and maintain
minimal outgoing data requests.Batch FunctionA batch loading function accepts an List of keys, and returns a Future which
resolves to a List of values. There are a few constraints that must be upheld:The List of values must be the same length as the List of keys.Each index in the List of values must correspond to the same index in the List of keys.For example, if your batch function was provided the List of keys:[ 2, 9, 6, 1 ],
and loading from a back-end service returned the values:{'id':9,'name':'Chicago'}{'id':1,'name':'New York'}{'id':2,'name':'San Francisco'}Our back-end service returned results in a different order than we requested, likely
because it was more efficient for it to do so. Also, it omitted a result for key6,
which we can interpret as no value existing for that key.To uphold the constraints of the batch function, it must return an List of values
the same length as the List of keys, and re-order them to ensure each index aligns
with the original keys[ 2, 9, 6, 1 ]:[{'id':2,'name':'San Francisco'},{'id':9,'name':'Chicago'},None,{'id':1,'name':'New York'}]CachingDataLoader provides a memoization cache for all loads which occur in a single
request to your application. After.load()is called once with a given key,
the resulting value is cached to eliminate redundant loads.In addition to relieving pressure on your data storage, caching results per-request
also creates fewer objects which may relieve memory pressure on your application:user_future1=user_loader.load(1)user_future2=user_loader.load(1)assertuser_future1==user_future2Caching per-RequestDataLoader cachingdoes notreplace Redis, Memcache, or any other shared
application-level cache. DataLoader is first and foremost a data loading mechanism,
and its cache only serves the purpose of not repeatedly loading the same data in
the context of a single request to your Application. To do this, it maintains a
simple in-memory memoization cache (more accurately:.load()is a memoized function).Avoid multiple requests from different users using the DataLoader instance, which
could result in cached data incorrectly appearing in each request. Typically,
DataLoader instances are created when a Request begins, and are not used once the
Request ends.For example, when using withSanic:defcreate_loaders(auth_token):return{'users':user_loader,}app=Sanic(__name__)@app.route("/")asyncdeftest(request):auth_token=authenticate_user(request)loaders=create_loaders(auth_token)returnrender_page(request,loaders)Clearing CacheIn certain uncommon cases, clearing the request cache may be necessary.The most common case in which clearing the loader's cache is necessary is after
a mutation or update within the same request, when a cached value could be out of
date and future loads should not use any possibly cached value.Here's a simple example using SQL UPDATE to illustrate.# Request begins...user_loader=...# And a value happens to be loaded (and cached).user4=awaituser_loader.load(4)# A mutation occurs, invalidating what might be in cache.awaitsql_run('UPDATE users WHERE id=4 SET username="zuck"')user_loader.clear(4)# Later the value load is loaded again so the mutated data appears.user4=awaituser_loader.load(4)# Request completes.Caching ExceptionsIf a batch load fails (that is, a batch function throws or returns a rejected
Promise), then the requested values will not be cached. However if a batch
function returns anExceptioninstance for an individual value, thatExceptionwill
be cached to avoid frequently loading the sameException.In some circumstances you may wish to clear the cache for these individual Errors:try:user_loader.load(1)exceptExceptionase:user_loader.clear(1)raiseDisabling CacheIn certain uncommon cases, a DataLoader whichdoes notcache may be desirable.
CallingDataLoader(batch_fn, cache=False)will ensure that every
call to.load()will produce anewFuture, and requested keys will not be
saved in memory.However, when the memoization cache is disabled, your batch function will
receive an array of keys which may contain duplicates! Each key will be
associated with each call to.load(). Your batch loader should provide a value
for each instance of the requested key.For example:classMyLoader(DataLoader):cache=Falseasyncdefbatch_load_fn(self,keys):print(keys)returnkeysmy_loader=MyLoader()my_loader.load('A')my_loader.load('B')my_loader.load('A')# > [ 'A', 'B', 'A' ]More complex cache behavior can be achieved by calling.clear()or.clear_all()rather than disabling the cache completely. For example, this DataLoader will
provide unique keys to a batch function due to the memoization cache being
enabled, but will immediately clear its cache when the batch function is called
so later requests will load new values.classMyLoader(DataLoader):cache=Falseasyncdefbatch_load_fn(self,keys):self.clear_all()returnkeysAPIclass DataLoaderDataLoader creates a public API for loading data from a particular
data back-end with unique keys such as theidcolumn of a SQL table or
document name in a MongoDB database, given a batch loading function.EachDataLoaderinstance contains a unique memoized cache. Use caution when
used in long-lived applications or those which serve many users with different
access permissions and consider creating a new instance per web request.DataLoader(batch_load_fn, **options)Create a newDataLoadergiven a batch loading function and options.batch_load_fn: An async function (coroutine) which accepts an List of keys
and returns a Future which resolves to an List of values.options:batch: DefaultTrue. Set toFalseto disable batching, instead
immediately invokingbatch_load_fnwith a single load key.max_batch_size: DefaultInfinity. Limits the number of items that get
passed in to thebatch_load_fn.cache: DefaultTrue. Set toFalseto disable memoization caching,
instead creating a new Promise and new key in thebatch_load_fnfor every
load of the same key.cache_key_fn: A function to produce a cache key for a given load key.
Defaults tokey => key. Useful to provide when Python objects are keys
and two similarly shaped objects should be considered equivalent.cache_map: An instance ofdict(or an object with a similar API) to be
used as the underlying cache for this loader. Default{}.load(key)Loads a key, returning aFuturefor the value represented by that key.key: An key value to load.load_many(keys)Loads multiple keys, promising an array of values:a,b=awaitmy_loader.load_many(['a','b']);This is equivalent to the more verbose:fromasyncioimportgathera,b=awaitgather(my_loader.load('a'),my_loader.load('b'))keys: A list of key values to load.clear(key)Clears the value atkeyfrom the cache, if it exists. Returns itself for
method chaining.key: An key value to clear.clear_all()Clears the entire cache. To be used when some event results in unknown
invalidations across this particularDataLoader. Returns itself for
method chaining.prime(key, value)Primes the cache with the provided key and value. If the key already exists, no
change is made. (To forcefully prime the cache, clear the key first withloader.clear(key).prime(key, value).) Returns itself for method chaining.Using with GraphQLDataLoader pairs nicely well withGraphQL. GraphQL fields are
designed to be stand-alone functions. Without a caching or batching mechanism,
it's easy for a naive GraphQL server to issue new database requests each time a
field is resolved.Consider the following GraphQL request:{
me {
name
bestFriend {
name
}
friends(first: 5) {
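# resolved naively, each friend's fields below can trigger their own backend requests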
name
bestFriend {
name
}
}
}
}Naively, ifme,bestFriendandfriendseach need to request the backend,
there could be at most 13 database requests!When using DataLoader withgraphene, we could define theUsertype with clearer code and
at most 4 database requests, and possibly fewer if there are cache hits.classUser(graphene.ObjectType):name=graphene.String()best_friend=graphene.Field(lambda:User)friends=graphene.List(lambda:User)defresolve_best_friend(self,args,context,info):returnuser_loader.load(self.best_friend_id)defresolve_friends(self,args,context,info):returnuser_loader.load_many(self.friend_ids)Common PatternsCreating a new DataLoader per request.In many applications, a web server using DataLoader serves requests to many
different users with different access permissions. It may be dangerous to use
one cache across many users, and is encouraged to create a new DataLoader
per request:defcreate_loaders(auth_token):return{'users':DataLoader(lambdaids:gen_users(auth_token,ids)),'cdn_urls':DataLoader(lambdaraw_urls:gen_cdn_urls(auth_token,raw_urls)),'stories':DataLoader(lambdakeys:gen_stories(auth_token,keys)),}# When handling an incoming web request:loaders=create_loaders(request.query.auth_token)# Then, within application logic:user=awaitloaders.users.load(4)pic=awaitloaders.cdn_urls.load(user.raw_pic_url)Creating an object where each key is aDataLoaderis one common pattern which
provides a single value to pass around to code which needs to perform
data loading, such as part of theroot_valuein aGraphQLrequest.Loading by alternative keys.Occasionally, some kind of value can be accessed in multiple ways. For example,
perhaps a "User" type can be loaded not only by an "id" but also by a "username"
value. If the same user is loaded by both keys, then it may be useful to fill
both caches when a user is loaded from either source:asyncdefuser_by_id_batch_fn(ids):users=awaitgen_users_by_id(ids)foruserinusers:username_loader.prime(user.username,user)returnusersuser_by_id_loader=DataLoader(user_by_id_batch_fn)asyncdefusername_batch_fn(names):users=awaitgen_usernames(names)foruserinusers:user_by_id_loader.prime(user.id,user)returnusersusername_loader=DataLoader(username_batch_fn)Custom CachesDataLoader can optionaly be provided a custom dict instance to use as its
memoization cache. More specifically, any object that implements the methodsget(),set(),delete()andclear()can be provided. This allows for custom dicts
which implement variouscache algorithmsto be provided. By default,
DataLoader uses the standarddictwhich simply grows until the DataLoader
is released. The default is appropriate when requests to your application are
short-lived.Video Source Code WalkthroughDataLoader Source Code Walkthrough (YouTube): |
aiodataloader-next | This a fork from theasyncio DataLoaderincluding community fixes and Python 3.7+ compatibility.DataLoader is a generic utility to be used as part of your application’s
data fetching layer to provide a simplified and consistent API over
various remote data sources such as databases or web services via
batching and caching.A port of the “Loader” API originally developed by [@schrockn][] at
Facebook in 2010 as a simplifying force to coalesce the sundry key-value
store back-end APIs which existed at the time. At Facebook, “Loader”
became one of the implementation details of the “Ent” framework, a
privacy-aware data entity loading and caching layer within web server
product code. This ultimately became the underpinning for Facebook’s
GraphQL server implementation and type definitions.DataLoader is a simplified version of this original idea implemented in
Python for AsyncIO services. DataLoader is often used when implementing
agrapheneservice,
though it is also broadly useful in other situations.DataLoader is provided so that it may be useful not just to build
GraphQL services with AsyncIO but also as a publicly available reference
implementation of this concept in the hopes that it can be ported to
other languages. If you port DataLoader to another language, please open
an issue to include a link from this repository.Getting StartedFirst, install DataLoader using pip.pipinstallaiodataloader-nextTo get started, create aDataLoader. EachDataLoaderinstance
represents a unique cache. Typically instances are created per request
when used within a web-server likeSanicif different users
can see different things.Note: DataLoader assumes a AsyncIO environment withasync/awaitavailable only in Python 3.5+.BatchingBatching is not an advanced feature, it’s DataLoader’s primary feature.
Create loaders by providing a batch loading function.fromaiodataloaderimportDataLoaderclassUserLoader(DataLoader):asyncdefbatch_load_fn(self,keys):returnawaitmy_batch_get_users(keys)user_loader=UserLoader()A batch loading function accepts a Iterable of keys, and returns a
Promise which resolves to a List of values*.Then load individual values from the loader. DataLoader will coalesce
all individual loads which occur within a single frame of execution (a
single tick of the event loop) and then call your batch function with
all requested keys.user1_future=user_loader.load(1)user2_future=user_loader.load(2)user1=awaituser1_futureuser2=awaituser2_futureuser1_invitedby=user_loader.load(user1.invited_by_id)user2_invitedby=user_loader.load(user2.invited_by_id)print("User 1 was invited by",awaituser1_invitedby)print("User 2 was invited by",awaituser2_invitedby)A naive application may have issued four round-trips to a backend for
the required information, but with DataLoader this application will make
at most two.DataLoader allows you to decouple unrelated parts of your application
without sacrificing the performance of batch data-loading. While the
loader presents an API that loads individual values, all concurrent
requests will be coalesced and presented to your batch loading function.
This allows your application to safely distribute data fetching
requirements throughout your application and maintain minimal outgoing
data requests.Batch FunctionA batch loading function accepts an List of keys, and returns a Future
which resolves to an List of values. There are a few constraints that
must be upheld:The List of values must be the same length as the List of keys.Each index in the List of values must correspond to the same index in
the List of keys.For example, if your batch function was provided the List of keys:[ 2, 9, 6, 1 ], and loading from a back-end service returned the
values:{'id':9,'name':'Chicago'}{'id':1,'name':'New York'}{'id':2,'name':'San Francisco'}Our back-end service returned results in a different order than we
requested, likely because it was more efficient for it to do so. Also,
it omitted a result for key6, which we can interpret as no value
existing for that key.To uphold the constraints of the batch function, it must return an List
of values the same length as the List of keys, and re-order them to
ensure each index aligns with the original keys[ 2, 9, 6, 1 ]:[{'id':2,'name':'San Francisco'},{'id':9,'name':'Chicago'},None,{'id':1,'name':'New York'}]CachingDataLoader provides a memoization cache for all loads which occur in a
single request to your application. After.load()is called once
with a given key, the resulting value is cached to eliminate redundant
loads.In addition to relieving pressure on your data storage, caching results
per-request also creates fewer objects which may relieve memory pressure
on your application:user_future1=user_loader.load(1)user_future2=user_loader.load(1)assertuser_future1==user_future2Caching per-RequestDataLoader cachingdoes notreplace Redis, Memcache, or any other
shared application-level cache. DataLoader is first and foremost a data
loading mechanism, and its cache only serves the purpose of not
repeatedly loading the same data in the context of a single request to
your Application. To do this, it maintains a simple in-memory
memoization cache (more accurately:.load()is a memoized function).Avoid multiple requests from different users using the DataLoader
instance, which could result in cached data incorrectly appearing in
each request. Typically, DataLoader instances are created when a Request
begins, and are not used once the Request ends.For example, when using withSanic:defcreate_loaders(auth_token){return{'users':user_loader,}}app=Sanic(__name__)@app.route("/")asyncdeftest(request):auth_token=authenticate_user(request)loaders=create_loaders(auth_token)returnrender_page(request,loaders)Clearing CacheIn certain uncommon cases, clearing the request cache may be necessary.The most common example when clearing the loader’s cache is necessary is
after a mutation or update within the same request, when a cached value
could be out of date and future loads should not use any possibly cached
value.Here’s a simple example using SQL UPDATE to illustrate.# Request begins...user_loader=...# And a value happens to be loaded (and cached).user4=awaituser_loader.load(4)# A mutation occurs, invalidating what might be in cache.awaitsql_run('UPDATE users WHERE id=4 SET username="zuck"')user_loader.clear(4)# Later the value load is loaded again so the mutated data appears.user4=awaituser_loader.load(4)# Request completes.Caching ExceptionsIf a batch load fails (that is, a batch function throws or returns a
rejected Promise), then the requested values will not be cached. However
if a batch function returns anExceptioninstance for an individual
value, thatExceptionwill be cached to avoid frequently loading the
sameException.In some circumstances you may wish to clear the cache for these
individual Errors:try:user_loader.load(1)exceptExceptionase:user_loader.clear(1)raiseDisabling CacheIn certain uncommon cases, a DataLoader whichdoes notcache may be
desirable. CallingDataLoader(batch_fn, cache=false)will ensure
that every call to.load()will produce anewFuture, and
requested keys will not be saved in memory.However, when the memoization cache is disabled, your batch function
will receive an array of keys which may contain duplicates! Each key
will be associated with each call to.load(). Your batch loader
should provide a value for each instance of the requested key.For example:classMyLoader(DataLoader):cache=Falseasyncdefbatch_load_fn(self,keys):print(keys)returnkeysmy_loader=MyLoader()my_loader.load('A')my_loader.load('B')my_loader.load('A')# > [ 'A', 'B', 'A' ]More complex cache behavior can be achieved by calling.clear()or.clear_all()rather than disabling the cache completely. For
example, this DataLoader will provide unique keys to a batch function
due to the memoization cache being enabled, but will immediately clear
its cache when the batch function is called so later requests will load
new values.classMyLoader(DataLoader):cache=Falseasyncdefbatch_load_fn(self,keys):self.clear_all()returnkeysAPIclass DataLoaderDataLoader creates a public API for loading data from a particular data
back-end with unique keys such as theidcolumn of a SQL table or
document name in a MongoDB database, given a batch loading function.EachDataLoaderinstance contains a unique memoized cache. Use
caution when used in long-lived applications or those which serve many
users with different access permissions and consider creating a new
instance per web request.new DataLoader(batch_load_fn, **options)Create a newDataLoadergiven a batch loading function and options.batch_load_fn: An async function (coroutine) which accepts an
List of keys and returns a Future which resolves to an List of
values.options:batch: DefaultTrue. Set toFalseto disable batching,
instead immediately invokingbatch_load_fnwith a single load
key.max_batch_size: DefaultInfinity. Limits the number of items
that get passed in to thebatch_load_fn.cache: DefaultTrue. Set toFalseto disable memoization
caching, instead creating a new Promise and new key in thebatch_load_fnfor every load of the same key.cache_key_fn: A function to produce a cache key for a given load
key. Defaults tokey => key. Useful to provide when Python
objects are keys and two similarly shaped objects should be
considered equivalent.cache_map: An instance ofdict(or an object with a similar API) to be used as the underlying cache
for this loader. Default{}.load(key)Loads a key, returning aFuturefor the value represented by that
key.key: An key value to load.load_many(keys)Loads multiple keys, promising an array of values:a,b=awaitmy_loader.load_many(['a','b']);This is equivalent to the more verbose:fromasyncioimportgathera,b=awaitgather(my_loader.load('a'),my_loader.load('b'))keys: A list of key values to load.clear(key)Clears the value atkeyfrom the cache, if it exists. Returns itself
for method chaining.key: An key value to clear.clear_all()Clears the entire cache. To be used when some event results in unknown
invalidations across this particularDataLoader. Returns itself for
method chaining.prime(key, value)Primes the cache with the provided key and value. If the key already
exists, no change is made. (To forcefully prime the cache, clear the key
first withloader.clear(key).prime(key,value).) Returns itself for
method chaining.Using with GraphQLDataLoader pairs nicely well withGraphQL. GraphQL fields
are designed to be stand-alone functions. Without a caching or batching
mechanism, it’s easy for a naive GraphQL server to issue new database
requests each time a field is resolved.Consider the following GraphQL request:{
me {
name
bestFriend {
name
}
friends(first: 5) {
name
bestFriend {
name
}
}
}
}Naively, ifme,bestFriendandfriendseach need to request
the backend, there could be at most 13 database requests!When using DataLoader, we could define theUsertype using theSQLiteexample with clearer code and at most 4
database requests, and possibly fewer if there are cache hits.classUser(graphene.ObjectType):name=graphene.String()best_friend=graphene.Field(lambda:User)friends=graphene.List(lambda:User)defresolve_best_friend(self,args,context,info):returnuser_loader.load(self.best_friend_id)defresolve_friends(self,args,context,info):returnuser_loader.load_many(self.friend_ids)Common PatternsCreating a new DataLoader per request.In many applications, a web server using DataLoader serves requests to
many different users with different access permissions. It may be
dangerous to use one cache across many users, and is encouraged to
create a new DataLoader per request:defcreate_loaders(auth_token):return{'users':DataLoader(lambdaids:gen_users(auth_token,ids)),'cdn_urls':DataLoader(lambdaraw_urls:gen_cdn_urls(auth_token,raw_urls)),'stories':DataLoader(lambdakeys:gen_stories(auth_token,keys)),}}# When handling an incoming web request:loaders=create_loaders(request.query.auth_token)# Then, within application logic:user=awaitloaders.users.load(4)pic=awaitloaders.cdn_urls.load(user.raw_pic_url)Creating an object where each key is aDataLoaderis one common
pattern which provides a single value to pass around to code which needs
to perform data loading, such as part of theroot_valuein a
[graphql][] request.Loading by alternative keys.Occasionally, some kind of value can be accessed in multiple ways. For
example, perhaps a “User” type can be loaded not only by an “id” but
also by a “username” value. If the same user is loaded by both keys,
then it may be useful to fill both caches when a user is loaded from
either source:asyncdefuser_by_id_batch_fn(ids):users=awaitgen_users_by_id(ids)foruserinusers:username_loader.prime(user.username,user)returnusersuser_by_id_loader=DataLoader(user_by_id_batch_fn)asyncdefusername_batch_fn(names):users=awaitgen_usernames(names)foruserinusers:user_by_id_loader.prime(user.id,user)returnusersusername_loader=DataLoader(username_batch_fn)Custom CachesDataLoader can optionaly be provided a custom dict instance to use as
its memoization cache. More specifically, any object that implements the
methodsget(),set(),delete()andclear()can be
provided. This allows for custom dicts which implement variouscache
algorithmsto be
provided. By default, DataLoader uses the standarddictwhich simply grows until the DataLoader is released. The default is
appropriate when requests to your application are short-lived.Video Source Code WalkthroughDataLoader Source Code Walkthrough (YouTube): |
aiodataloader-ng | DataLoader is a generic utility to be used as part of your application’s
data fetching layer to provide a simplified and consistent API over
various remote data sources such as databases or web services via
batching and caching.A port of the “Loader” API originally developed by [@schrockn][] at
Facebook in 2010 as a simplifying force to coalesce the sundry key-value
store back-end APIs which existed at the time. At Facebook, “Loader”
became one of the implementation details of the “Ent” framework, a
privacy-aware data entity loading and caching layer within web server
product code. This ultimately became the underpinning for Facebook’s
GraphQL server implementation and type definitions.DataLoader is a simplified version of this original idea implemented in
Python for AsyncIO services. DataLoader is often used when implementing
agrapheneservice,
though it is also broadly useful in other situations.DataLoader is provided so that it may be useful not just to build
GraphQL services with AsyncIO but also as a publicly available reference
implementation of this concept in the hopes that it can be ported to
other languages. If you port DataLoader to another language, please open
an issue to include a link from this repository.Getting StartedFirst, install DataLoader using pip.pipinstallaiodataloaderTo get started, create aDataLoader. EachDataLoaderinstance
represents a unique cache. Typically instances are created per request
when used within a web-server likeSanicif different users
can see different things.Note: DataLoader assumes a AsyncIO environment withasync/awaitavailable only in Python 3.5+.BatchingBatching is not an advanced feature, it’s DataLoader’s primary feature.
Create loaders by providing a batch loading function.fromaiodataloaderimportDataLoaderclassUserLoader(DataLoader):asyncdefbatch_load_fn(self,keys):returnawaitmy_batch_get_users(keys)user_loader=UserLoader()A batch loading function accepts a Iterable of keys, and returns a
Promise which resolves to a List of values*.Then load individual values from the loader. DataLoader will coalesce
all individual loads which occur within a single frame of execution (a
single tick of the event loop) and then call your batch function with
all requested keys.user1_future=user_loader.load(1)user2_future=user_loader.load(2)user1=awaituser1_futureuser2=awaituser2_futureuser1_invitedby=user_loader.load(user1.invited_by_id)user2_invitedby=user_loader.load(user2.invited_by_id)print("User 1 was invited by",awaituser1_invitedby)print("User 2 was invited by",awaituser2_invitedby)A naive application may have issued four round-trips to a backend for
the required information, but with DataLoader this application will make
at most two.DataLoader allows you to decouple unrelated parts of your application
without sacrificing the performance of batch data-loading. While the
loader presents an API that loads individual values, all concurrent
requests will be coalesced and presented to your batch loading function.
This allows your application to safely distribute data fetching
requirements throughout your application and maintain minimal outgoing
data requests.Batch FunctionA batch loading function accepts an List of keys, and returns a Future
which resolves to an List of values. There are a few constraints that
must be upheld:The List of values must be the same length as the List of keys.Each index in the List of values must correspond to the same index in
the List of keys.For example, if your batch function was provided the List of keys:[ 2, 9, 6, 1 ], and loading from a back-end service returned the
values:{'id':9,'name':'Chicago'}{'id':1,'name':'New York'}{'id':2,'name':'San Francisco'}Our back-end service returned results in a different order than we
requested, likely because it was more efficient for it to do so. Also,
it omitted a result for key6, which we can interpret as no value
existing for that key.To uphold the constraints of the batch function, it must return an List
of values the same length as the List of keys, and re-order them to
ensure each index aligns with the original keys[ 2, 9, 6, 1 ]:[{'id':2,'name':'San Francisco'},{'id':9,'name':'Chicago'},None,{'id':1,'name':'New York'}]CachingDataLoader provides a memoization cache for all loads which occur in a
single request to your application. After.load()is called once
with a given key, the resulting value is cached to eliminate redundant
loads.In addition to relieving pressure on your data storage, caching results
per-request also creates fewer objects which may relieve memory pressure
on your application:user_future1=user_loader.load(1)user_future2=user_loader.load(1)assertuser_future1==user_future2Caching per-RequestDataLoader cachingdoes notreplace Redis, Memcache, or any other
shared application-level cache. DataLoader is first and foremost a data
loading mechanism, and its cache only serves the purpose of not
repeatedly loading the same data in the context of a single request to
your Application. To do this, it maintains a simple in-memory
memoization cache (more accurately:.load()is a memoized function).Avoid multiple requests from different users using the DataLoader
instance, which could result in cached data incorrectly appearing in
each request. Typically, DataLoader instances are created when a Request
begins, and are not used once the Request ends.For example, when using withSanic:defcreate_loaders(auth_token){return{'users':user_loader,}}app=Sanic(__name__)@app.route("/")asyncdeftest(request):auth_token=authenticate_user(request)loaders=create_loaders(auth_token)returnrender_page(request,loaders)Clearing CacheIn certain uncommon cases, clearing the request cache may be necessary.The most common example when clearing the loader’s cache is necessary is
after a mutation or update within the same request, when a cached value
could be out of date and future loads should not use any possibly cached
value.Here’s a simple example using SQL UPDATE to illustrate.# Request begins...user_loader=...# And a value happens to be loaded (and cached).user4=awaituser_loader.load(4)# A mutation occurs, invalidating what might be in cache.awaitsql_run('UPDATE users WHERE id=4 SET username="zuck"')user_loader.clear(4)# Later the value load is loaded again so the mutated data appears.user4=awaituser_loader.load(4)# Request completes.Caching ExceptionsIf a batch load fails (that is, a batch function throws or returns a
rejected Promise), then the requested values will not be cached. However
if a batch function returns anExceptioninstance for an individual
value, thatExceptionwill be cached to avoid frequently loading the
sameException.In some circumstances you may wish to clear the cache for these
individual Errors:try:user_loader.load(1)exceptExceptionase:user_loader.clear(1)raiseDisabling CacheIn certain uncommon cases, a DataLoader whichdoes notcache may be
desirable. CallingDataLoader(batch_fn, cache=false)will ensure
that every call to.load()will produce anewFuture, and
requested keys will not be saved in memory.However, when the memoization cache is disabled, your batch function
will receive an array of keys which may contain duplicates! Each key
will be associated with each call to.load(). Your batch loader
should provide a value for each instance of the requested key.For example:classMyLoader(DataLoader):cache=Falseasyncdefbatch_load_fn(self,keys):print(keys)returnkeysmy_loader=MyLoader()my_loader.load('A')my_loader.load('B')my_loader.load('A')# > [ 'A', 'B', 'A' ]More complex cache behavior can be achieved by calling.clear()or.clear_all()rather than disabling the cache completely. For
example, this DataLoader will provide unique keys to a batch function
due to the memoization cache being enabled, but will immediately clear
its cache when the batch function is called so later requests will load
new values.classMyLoader(DataLoader):cache=Falseasyncdefbatch_load_fn(self,keys):self.clear_all()returnkeysAPIclass DataLoaderDataLoader creates a public API for loading data from a particular data
back-end with unique keys such as theidcolumn of a SQL table or
document name in a MongoDB database, given a batch loading function.EachDataLoaderinstance contains a unique memoized cache. Use
caution when used in long-lived applications or those which serve many
users with different access permissions and consider creating a new
instance per web request.new DataLoader(batch_load_fn, **options)Create a newDataLoadergiven a batch loading function and options.batch_load_fn: An async function (coroutine) which accepts an
List of keys and returns a Future which resolves to an List of
values.options:batch: DefaultTrue. Set toFalseto disable batching,
instead immediately invokingbatch_load_fnwith a single load
key.max_batch_size: DefaultInfinity. Limits the number of items
that get passed in to thebatch_load_fn.cache: DefaultTrue. Set toFalseto disable memoization
caching, instead creating a new Promise and new key in thebatch_load_fnfor every load of the same key.cache_key_fn: A function to produce a cache key for a given load
key. Defaults tokey => key. Useful to provide when Python
objects are keys and two similarly shaped objects should be
considered equivalent.cache_map: An instance ofdict(or an object with a similar API) to be used as the underlying cache
for this loader. Default{}.load(key)Loads a key, returning aFuturefor the value represented by that
key.key: An key value to load.load_many(keys)Loads multiple keys, promising an array of values:a,b=awaitmy_loader.load_many(['a','b']);This is equivalent to the more verbose:fromasyncioimportgathera,b=awaitgather(my_loader.load('a'),my_loader.load('b'))keys: A list of key values to load.clear(key)Clears the value atkeyfrom the cache, if it exists. Returns itself
for method chaining.key: An key value to clear.clear_all()Clears the entire cache. To be used when some event results in unknown
invalidations across this particularDataLoader. Returns itself for
method chaining.prime(key, value)Primes the cache with the provided key and value. If the key already
exists, no change is made. (To forcefully prime the cache, clear the key
first withloader.clear(key).prime(key,value).) Returns itself for
method chaining.Using with GraphQLDataLoader pairs nicely well withGraphQL. GraphQL fields
are designed to be stand-alone functions. Without a caching or batching
mechanism, it’s easy for a naive GraphQL server to issue new database
requests each time a field is resolved.Consider the following GraphQL request:{
me {
name
bestFriend {
name
}
friends(first: 5) {
name
bestFriend {
name
}
}
}
}Naively, ifme,bestFriendandfriendseach need to request
the backend, there could be at most 13 database requests!When using DataLoader, we could define theUsertype using theSQLiteexample with clearer code and at most 4
database requests, and possibly fewer if there are cache hits.classUser(graphene.ObjectType):name=graphene.String()best_friend=graphene.Field(lambda:User)friends=graphene.List(lambda:User)defresolve_best_friend(self,args,context,info):returnuser_loader.load(self.best_friend_id)defresolve_friends(self,args,context,info):returnuser_loader.load_many(self.friend_ids)Common PatternsCreating a new DataLoader per request.In many applications, a web server using DataLoader serves requests to
many different users with different access permissions. It may be
dangerous to use one cache across many users, and is encouraged to
create a new DataLoader per request:defcreate_loaders(auth_token):return{'users':DataLoader(lambdaids:gen_users(auth_token,ids)),'cdn_urls':DataLoader(lambdaraw_urls:gen_cdn_urls(auth_token,raw_urls)),'stories':DataLoader(lambdakeys:gen_stories(auth_token,keys)),}}# When handling an incoming web request:loaders=create_loaders(request.query.auth_token)# Then, within application logic:user=awaitloaders.users.load(4)pic=awaitloaders.cdn_urls.load(user.raw_pic_url)Creating an object where each key is aDataLoaderis one common
pattern which provides a single value to pass around to code which needs
to perform data loading, such as part of theroot_valuein a
[graphql][] request.Loading by alternative keys.Occasionally, some kind of value can be accessed in multiple ways. For
example, perhaps a “User” type can be loaded not only by an “id” but
also by a “username” value. If the same user is loaded by both keys,
then it may be useful to fill both caches when a user is loaded from
either source:asyncdefuser_by_id_batch_fn(ids):users=awaitgen_users_by_id(ids)foruserinusers:username_loader.prime(user.username,user)returnusersuser_by_id_loader=DataLoader(user_by_id_batch_fn)asyncdefusername_batch_fn(names):users=awaitgen_usernames(names)foruserinusers:user_by_id_loader.prime(user.id,user)returnusersusername_loader=DataLoader(username_batch_fn)Custom CachesDataLoader can optionaly be provided a custom dict instance to use as
its memoization cache. More specifically, any object that implements the
methodsget(),set(),delete()andclear()can be
provided. This allows for custom dicts which implement variouscache
algorithmsto be
provided. By default, DataLoader uses the standarddictwhich simply grows until the DataLoader is released. The default is
appropriate when requests to your application are short-lived.Video Source Code WalkthroughDataLoader Source Code Walkthrough (YouTube): |
aiodatastore | aiodatastoreaiodatastoreis a low level and high performance asyncio client forGoogle Datastore REST API. Inspired bygcloud-aiolibrary, thanks!Key advantages:lazy properties loading (that's why it's fast, mostly)explicit value types for properties (no types guessing)strictly following Google Datastore REST API data structuresInstallationpip install aiodatastoreHow to create datastore clientfromaiodatastoreimportDatastoreclient=Datastore("project1",service_file="/path/to/file")You can also set namespace if needed:fromaiodatastoreimportDatastoreclient=Datastore("project1",service_file="/path/to/file",namespace="namespace1")To useDatastore emulator(for tests or development), just defineDATASTORE_EMULATOR_HOSTenvironment variable (usually value is127.0.0.1:8081).How to work withkeysandentitiesfromaiodatastoreimportKey,PartitionId,PathElementkey=Key(PartitionId("project1"),[PathElement("Kind1")])You can also setnamespacefor key:fromaiodatastoreimportKey,PartitionId,PathElementkey=Key(PartitionId("project1",namespace_id="namespace1"),[PathElement("Kind1")])Andidornamefor path element:fromaiodatastoreimportKey,PartitionId,PathElementkey1=Key(PartitionId("project1"),[PathElement("Kind1",id="12345")])key2=Key(PartitionId("project1"),[PathElement("Kind1",name="name1")])To create an entity object, you have to specify key and properties. Properties is a dict with string keys and typed values. For eachdata typethe library provides corresponding value class. Every value (except ArrayValue) can be indexed or not (indexed by default):fromaiodatastoreimportEntity,Key,PartitionId,PathElementfromaiodatastoreimport(ArrayValue,BoleanValue,BlobValue,DoubleValue,GeoPointValue,IntegerValue,LatLng,NullValue,StringValue,TimestampValue,)key=Key(PartitionId("project1"),[PathElement("Kind1")])entity=Entity(key,properties={"array-prop":ArrayValue([NullValue(),IntegerValue(123),StringValue("str1")]),"bool-prop":BooleanValue(True),"blob-prop":BlobValue("data to store as blob"),"double-prop":DoubleValue(1.23,indexed=False),"geo-prop":GeoPointValue(LatLng(1.23,4.56)),"integer-prop":IntegerValue(123),"null-prop":NullValue(),"string-prop":StringValue("str1"),"timestamp-prop":TimestampValue(datetime.datetime.utcnow()),})To access property value use.valueattribute:print(entity.properties["integer-prop"].value)123Use.valueattribute to change property value and keep index status. Or assign new value and set index:print(entity.properties["integer-prop"].value,entity.properties["integer-prop"].indexed)123,Trueentity.properties["integer-prop"].value=456print(entity.properties["integer-prop"].value,entity.properties["integer-prop"].indexed)456,Trueentity.properties["integer-prop"]=IntegerValue(456,indexed=True)print(entity.properties["integer-prop"].value,entity.properties["integer-prop"].indexed)456,TrueUse.indexedattribute to access or change index:print(entity.properties["integer-prop"].indexed)Trueentity.properties["integer-prop"].indexed=Falseprint(entity.properties["integer-prop"].indexed)FalseTo insert new entity (the entity key's final path element may be incomplete):key=Key(PartitionId("project1"),[PathElement("Kind1")])entity=Entity(key,properties={"string-prop":StringValue("some value"),})awaitclient.insert(entity)To update an entity (the entity must already exist. Must have a complete key path):entity.properties["string-prop"]=StringValue("new value")awaitclient.update(entity)To upsert an entity (the entity may or may not already exist. 
The entity key's final path element may be incomplete):key=Key(PartitionId("project1"),[PathElement("Kind1")])entity=Entity(key,properties={"string-prop":StringValue("some value"),})awaitclient.upsert(entity)To delete an entity (the entity may or may not already exist. Must have a complete key path and must not be reserved/read-only):awaitclient.delete(entity)If you have entity's key or know how to build it:awaitclient.delete(key) |
aiodatasync | This is a security placeholder package.
If you want to claim this name for legitimate purposes,
please contact us [email protected]@yandex-team.ru |
aiodav | Python Async WebDAV ClientAn asynchronous WebDAV client that uses asyncio. Based onwebdavclient3InstallationWe periodically publish source code and wheels on PyPI.$ pip install aiodavTo install the most recent version:$ git clone https://github.com/jorgeajimenezl/aiodav.git
$ cd aiodav
$ pip install -e .Getting startedfromaiodavimportClientimportasyncioasyncdefmain():asyncwithClient('https://webdav.server.com',login='juan',password='cabilla')asclient:space=awaitclient.free()print(f"Free space:{space}bytes")asyncdefprogress(c,t):print(f"{c}bytes /{t}bytes")awaitclient.download_file('/remote/file.zip','/local/file.zip',progress=progress)asyncio.run(main())LicenseMIT License |
aiodb | RoadmapKey featuresSupport asyncio from scratch.Data mapper pattern.Supports PostgreSQL.Built-in migrations. |
aiodb-helper | Failed to fetch description. HTTP Status Code: 404 |
aiodbm | An AsyncIO bridge for Python’s DBM library.Descriptionaiodbm is a library that allows you to use DBM in asyncio code.Full coverage of Python’s DBM and GDBM APITyping supportDocstrings and documentationFully testedWhy use aiodbm?DBMis a fast and easy to use, embedded key-value store.
It is supported by Python's standard library[1]and can be used on most systems without requiring additional dependencies[2].Compared to SQLite - the other embedded database supported by Python's standard library - it is significantly faster when used as a key/value store.In our measurements we see that aiodbm is hundreds of times faster for writes and more than three times faster for reads compared to aiosqlite[3]. So if you are on a Linux system and need a fast and easy-to-use embedded key-value store for asyncio, aiodbm can be a good solution.CaveatsOn non Linux-like systems DBM is usually not available and Python will fall back on its "dumb" DBM implementation. While DBM's core functionality still works, that implementation is much slower.Python's DBM library is not process safe. If you need a key-value store in a multi-process context (e.g. a web server running with gunicorn) we'd recommend using Redis or something similar instead.UsageHere is a basic example of how to use the library:importasyncioimportaiodbmasyncdefmain():# opening/creating databaseasyncwithaiodbm.open("example.dbm","c")asdb:# creating new key alpha with value greenawaitdb.set("alpha","green")# fetching value for key alphavalue=awaitdb.get("alpha")print(value)# delete key alphaawaitdb.delete("alpha")asyncio.run(main())InstallationYou can install this library directly from PyPI with the following command:pipinstallaiodbmReference[1]See also Python's DBM module:https://docs.python.org/3/library/dbm.html[2]The newer DBM variants GDBM or NDBM are preinstalled on most Linux/Unix systems:https://en.wikipedia.org/wiki/DBM_(computing)#Availability[3]We compared asyncio compatible key/value stores on Linux with GDBM. See also the measurements folder for more details. |
aiodbus | Port of txdbus to the asyncio world |
aiodcard | aiodcard
==============
Dcard crawler using asyncio (coroutine)
Feature
-------
| Get article list and content using coroutine
Dependencies
------------
* Python 3.3 and :mod:`asyncio` or Python 3.4+
* aiohttp
Installation
------------
::
python setup.py install
or::
pip install aiodcard
Example
-------
::
import asyncio
import aiohttp
import aiodcard
@asyncio.coroutine
def get_funny_articles():
    session = aiohttp.ClientSession()
    forum_name = 'funny'
    page_index = 1
    result = yield from aiodcard.get_articles_of_page(session, forum_name, page_index)
    print(result)
def main():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(get_funny_articles())
if __name__ == '__main__':
    main()
Todo
----
* test all functions
Authors and License
-------------------
The ``aiodcard`` package is written by Chien-Wei Huang. It's MIT licensed and freely available.
Feel free to improve this package and send a pull request to GitHub. |
aioddd | Async Python DDD utilities libraryaioddd is an async Python DDD utilities library.InstallationUse the package managerpipto install aioddd.pipinstallaiodddDocumentationVisitaioddd docs.Usagefromasyncioimportget_event_loopfromdataclassesimportdataclassfromtypingimportTypefromaiodddimportNotFoundError,\Command,CommandHandler,SimpleCommandBus,\Query,QueryHandler,OptionalResponse,SimpleQueryBus,Event_products=[]classProductStored(Event):@dataclassclassAttributes:ref:strattributes:AttributesclassStoreProductCommand(Command):def__init__(self,ref:str):self.ref=refclassStoreProductCommandHandler(CommandHandler):defsubscribed_to(self)->Type[Command]:returnStoreProductCommandasyncdefhandle(self,command:StoreProductCommand)->None:_products.append(command.ref)classProductNotFoundError(NotFoundError):_code='product_not_found'_title='Product not found'classFindProductQuery(Query):def__init__(self,ref:str):self.ref=refclassFindProductQueryHandler(QueryHandler):defsubscribed_to(self)->Type[Query]:returnFindProductQueryasyncdefhandle(self,query:FindProductQuery)->OptionalResponse:ifquery.ref!='123':raiseProductNotFoundError.create(detail={'ref':query.ref})return{'ref':query.ref}asyncdefmain()->None:commands_bus=SimpleCommandBus([StoreProductCommandHandler()])awaitcommands_bus.dispatch(StoreProductCommand('123'))query_bus=SimpleQueryBus([FindProductQueryHandler()])response=awaitquery_bus.ask(FindProductQuery('123'))print(response)if__name__=='__main__':get_event_loop().run_until_complete(main())RequirementsPython >= 3.7ContributingPull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.Please make sure to update tests as appropriate.LicenseMIT |
aiodebug | This is a tiny library for monitoring and testing asyncio programs.
Its monitoring features are meant to be always on in production.Installationaiodebugis only compatible with Python 3.8 and higher. There are no plans to support older versions.aiodebugisavailable on PyPIand you can install it with:pip install aiodebugorpoetry add aiodebugaiodebugwill uselogwoodif it is installed, otherwise it will default
to the standard logging module.Log warnings when callbacks block the event loopimportaiodebug.log_slow_callbacksaiodebug.log_slow_callbacks.enable(0.05)This will produce WARNING-level logs such asExecuting <Task pending coro=<foo() running at /home/.../foo.py:37>
wait_for=<Future pending cb=[Task._wakeup()]>> took 0.069 secondsasyncio already does this in debug mode, but you probably don’t want to enable full-on debug mode in production.Instead of defaulting to the logs, you may provide your own callback that gets called with the name of the
slow callback and its execution duration, and can do anything it needs with it. This might be useful
e.g. for structured JSON logging.importaiodebug.log_slow_callbacksaiodebug.log_slow_callbacks.enable(0.05,on_slow_callback=lambdatask_name,duration:json_logger.warning('Task blocked async loop for too long',extra={'task_name':task_name,'duration':duration}))Track event loop lags in StatsDimportaiodebug.monitor_loop_lagaiodebug.monitor_loop_lag.enable(statsd_client)Tracks how much scheduled calls get delayed and sends the lags to StatsD.Dump stack traces of all threads if the event loop hangs for too longimportaiodebug.hang_inspectiondumper=aiodebug.hang_inspection.start('/path/to/output/directory',interval=0.25)# 0.25 is the default...awaitaiodebug.hang_inspection.stop_wait(dumper)Enabling this function may help you in case one of your threads (sometimes) runs a CPU-bound operation that
completely stalls the event loop, but you don’t know which thread it is or what it is doing.Every time the event loop hangs (doesn’t run a scheduled ‘monitoring’ task) for longer than the giveninterval, aiodebug will create 3 stack traces, 1 second apart, in your output directory.
For example:-rw-r--r-- 1 user group 6.7K 4 Jan 09:41 stacktrace-20220104-094154.197418-0.txt
-rw-r--r-- 1 user group 7.0K 4 Jan 09:41 stacktrace-20220104-094155.206574-1.txt
-rw-r--r-- 1 user group 6.6K 4 Jan 09:41 stacktrace-20220104-094156.211781-2.txtEach file then contains the Python stack traces of all threads that were running or waiting at the time.
You might be able to find your culprit blocking the event loop at the end of one of the traces.Speed up or slow down time in the event loopThis is mainly useful for testing.importaiodebug.testing.time_dilated_looploop=aiodebug.testing.time_dilated_loop.TimeDilatedLoop()asyncio.set_event_loop(loop)loop.time_dilation=3awaitasyncio.sleep(1)# Takes 0.333s of real timeloop.time_dilation=0.1awaitasyncio.sleep(1)# Takes 10s of real timeaiodebugwas made byQuantlane, a systematic trading firm.
We design, build and run our own stock trading platform. |
aiodec | aiodecDecorators for asyncioContentsaiodecastopwatchastopwatchTheastopwatchdecorator is used in the following way:fromaiodecimportastopwatch@astopwatchasyncdefblah(x,y):returnx+yWhat does it do? This simple decorator will emit logs with the following message:INFO:aiodec:Time taken: 0.0003 secondsNot terribly special. Yet. You can also customize the log message:fromaiodecimportastopwatch@astopwatch(message_template='Time cost was $time_ sec',fmt='%.1g')asyncdefblah(x,y):returnx+yThis outputs log messages with the following message:INFO:aiodec:Time cost was 3e-4 secTwo things: first, the template parameter used for the time cost is called$time_; second, you can customize the formatting of the seconds value.
However, it can also do something a lot more interesting: it can include
parameters from the wrapped function in the message:fromaiodecimportastopwatch@astopwatch(message_template='x=$x y=$y | $time_ seconds')asyncdefblah(x,y=2):returnx+yloop.run_until_complete(blah(1))This outputs log messages with the following message:INFO:aiodec:x=1 y=2 | 0.0003 secondsMagic! Note that positional args and keyword args and default values
are all handled correctly.As you saw earlier, in addition to the function parameters, the special$time_parameter will also be available. The other extra fields are:$name_, which contains the__name__of the wrapped function, and$qualname_, which contains the__qualname__of the wrapped function.These three template parameters have a trailing underscore, to avoid collisions
with any parameter names. |
aiodecorator | aiodecoratorPython decorators for asyncio, includingthrottle: Throttle a (coroutine) function that return anAwaitableInstall$pipinstallaiodecoratorUsageimporttimeimportasynciofromaiodecoratorimport(throttle)now=time.time()# -----------------------------------------------------# The throttled function is only called twice a second@throttle(2,1)asyncdefthrottled(index:int):diff=format(time.time-now,'.0f')print(index,f'{diff}s')# -----------------------------------------------------asyncdefmain():loop=asyncio.get_running_loop()tasks=[loop.create_task(throttled(index))forindexinrange(5)]awaitasyncio.wait(tasks)asyncio.run(main())# Output# 0 0s# 1 0s# 2 1s# 3 1s# 4 2sAPIsthrottle(limit: int, interval: Union[float, int])limitintMaximum number of calls within aninterval.intervalUnion[int, float]Timespan for limit in seconds.Returns a decorator functionLicenseMIT |
aiodecorators | aiodecoratorsFunction decorators based on asyncio Lock, Semaphore and BoundedSemaphoreInstallpip3 install aiodecoratorsUsageasyncio.Lock
from aiodecorators import Lock
@Lock()
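# a single shared asyncio.Lock guards f(): only one call runs at a time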
async def f():
    pass
asyncio.Semaphore
from aiodecorators import Semaphore
@Semaphore(n)
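# n is an integer you choose: at most n calls of f() run concurrently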
async def f():
    pass
asyncio.BoundedSemaphore
from aiodecorators import BoundedSemaphore
@BoundedSemaphore(n)
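# like Semaphore(n), but the underlying BoundedSemaphore raises if released more times than acquired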
async def f():
    pass |
aiodeepl | Asynchronous API for DeepL TranslatorIt also ships with a simple command line interface.Installationpipinstall-UaiodeeplAPI UsageDocumentation is not yet complete. Apart from the following example, you can also refer to the__main__.pyfile for more examples.importaiodeeplasyncdefmain():translator=aiodeepl.Translator(api_key="123")result=awaittranslator.translate("Hello, World!",target_lang="DE")print(result)CLI Usageaiodeepl--api-key123-t"Hello, World!"-dDEYou can saveapi_keyin a config file or input it interactively for security.To translate a document, you can use the following command:aiodeepl--api-key123-fREADME.pdf-dDE-oREADME_DE.pdf |
aiodeluge | ✨ aiodeluge ✨An asyncio deluge client talk todelugeUsageimportasynciofromaiodelugeimportClientasyncdefmain():asyncwithClient(timeout=10)asclient:print(awaitclient.send_request("daemon.login","synodriver","123456",client_version="2.1.1"))print(awaitclient.send_request("core.get_auth_levels_mappings"))print(awaitclient.send_request("core.get_external_ip"))print(awaitclient.send_request("core.get_config"))if__name__=="__main__":asyncio.run(main())Public apiimportsslasssl_fromtypingimportCallable,Dict,Optional,UnionclassClient:host:strport:intusername:strpassword:strevent_handlers:dictssl:ssl_.SSLContexttimeout:Union[int,float]def__init__(self,host:str="127.0.0.1",port:Optional[int]=58846,username:Optional[str]="",password:Optional[str]="",event_handlers:Optional[Dict[str,Callable]]=None,ssl:Optional[ssl_.SSLContext]=None,timeout:Optional[Union[int,float]]=None,):...asyncdefconnect(self):...asyncdefdisconnect(self):...asyncdefsend_request(self,method:str,*args,**kwargs):...asyncdef__aenter__(self):...asyncdef__aexit__(self,exc_type,exc_val,exc_tb):...def__eq__(self,other:"Client"):... |
aiodesa | Asyncio Dead Easy Sql APISimplify Your Personal Projects with AIODesaAre you tired of the hassle of setting up complex databases for small apllications and personal projects? Designed to streamline monotony, AIODesa makes managing asynchronous database access easy. Perfect for smaller-scale applications where extensive database operations are not a priority.No need to write even a single line of raw SQL.A straightforward and 100% Python interface for managing asynchronous database API's by leveraging Python's built-ins and standard library. It wraps around AioSqlite, providing a hassle-free experience to define, generate, and commit data effortlessly, thanks to shared objects for tables and records.Ideal for Personal ProjectsAIODesa is specifically crafted for simpler projects where database IO is minimal. It's not intended for heavy production use but rather serves as an excellent choice for personal projects that require SQL structured data persistence without the complexity of a full-scale database setup. SQLite is leveraged here, meaning youre free to use other SQLite drivers to consume and transform the data if your project outgrows AIODesa.Read the docsUsageInstall via pippip install aiodesaSample API usage:from aiodesa import Db
import asyncio
from dataclasses import dataclass
from aiodesa.utils.tables import ForeignKey, UniqueKey, PrimaryKey, set_key
async def main():
    # Define structure for both tables and records
    # Easily define key types
    @dataclass
    @set_key(PrimaryKey("username"), UniqueKey("id"), ForeignKey("username", "anothertable"))
    class UserEcon:
        username: str
        credits: int | None = None
        points: int | None = None
        id: str | None = None
        table_name: str = "user_economy"
    async with Db("database.sqlite3") as db:
        # Create table from UserEcon class
        await db.read_table_schemas(UserEcon)
        # Insert a record
        record = db.insert(UserEcon.table_name)
        await record('sockheadrps', id="fffff")
        # Update a record
        record = db.update(UserEcon.table_name, column_identifier="username")
        await record('sockheadrps', points=2330, id="1234")
asyncio.run(main())Development:Ensure poetry is installed:pip install poetryInstall project using poetrypoetry add git+https://github.com/sockheadrps/AIODesa.git
poetry installcreate a python file for using AIODesa and activate poetry virtual env to run itpoetry shell
poetry run python main.py |
aiodesktop | aiodesktopA set of tools which simplify building cross-platform desktop apps with Python, JavaScript, HTML & CSS.FeaturesIn contrast to typical desktop GUI frameworks such astkinter,wxPython,PyQtorKivy:does not define own widgets/layout system (Kivy,Qt,wx), simply use a browser as a platform which already provides those thingsreuse time-saving libraries likeReact,BootstraporHighchartsreuse technologies likeWebRTC,WebGL,WebAssemblyaccess platform features such ascameras,geolocationandothersyour app is client-server and cross-platform by design, different devices may use it simultaneouslyCompared to existing alternatives such asEel,async-eelandguy:runs onasyncioinstead of threads or gevent greenletshighly customizableaiohttpserverno global state / singleton APIInstallInstall from pypi withpip:pipinstallaiodesktopHello, World!importaiodesktopclassServer(aiodesktop.Server):asyncdefon_startup(self):aiodesktop.launch_chrome(self.start_uri)# Use `expose` decorator to mark method as visible from [email protected]_string(self):# Use `await self.js.xxx()` to call JS functions from Pythonreturn'Hello, '+awaitself.js.getWorld()bundle=aiodesktop.Bundle()server=Server(bundle=bundle,init_js_function='onConnect',index_html='''<html><body><script>async function onConnect(server) {// Exposing JS function to pythonserver.expose({async getWorld() {return 'World!'}});// Use `await server.py.xxx()` to call Python methods from JSdocument.body.innerHTML += await server.py.get_string();};</script></body></html>''',)server.run()Seeexample/for aslightlymore complicated app with:static filespyinstaller executableJS & webpackhttps |
aiodeta | aiodetaUnofficial client for Deta CloundSupported functionalityDeta BaseDeta DriveDecorator for cron tasksExamplesimportasynciofromaiodetaimportDetaDETA_PROJECT_KEY="xxx_yyy"asyncdefgo():db_name="users"# Initialize Deta clientdeta=Deta(DETA_PROJECT_KEY)# Initialize Deta Base clientbase=deta.Base(db_name)# Create row in Deta Baseuser={"username":"steve","active":False}resp=awaitbase.insert(user)print(resp)user_key=resp["key"]# Update row by keyresp=awaitbase.update(user_key,set={"active":True})print(resp)# Get row by keyresp=awaitbase.get(user_key)print(resp)# Delete row by keyresp=awaitbase.delete(user_key)print(resp)# Create multiple rows in one requestusers=[{"username":"jeff","active":True},{"username":"rob","active":False},{"username":"joe","active":True}]resp=awaitbase.put(users)print(resp)# Query dataquery=[{"active":True},{"username?pfx":"j"}]result=awaitbase.query(query=query,limit=10)print(result)# Close connectionawaitdeta.close()loop=asyncio.get_event_loop()loop.run_until_complete(go()) |
aiodeu | None |
aiodevision | aiodevisionA simple async wrapper for the idevision api.Note: this may be unstable. This is an experimental wrapper, and is undocumented (for now).If you would like to contribute/add anything, feel free to make a PR.InstallationFor installing the stable version, dopipinstallaiodevisionIf you wanna install the dev version, dopipinstallgit+https://github.com/MrKomodoDragon/aiodevisiontodo:Add Better Exception HaandlingHandle Ratelimits |
aiodgram | aiodgramWhat is this?The module makes it easier for you to use the basic functions of AIOGRAM, such as sending messages/photos/videos and starting your bot.UsingLet's import it first:
First, import the classesTgBot and typesfrom the library (use the 'from...import TgBot, types' construct).
Second, create an object from the classTgBot(use the 'name= TgBot()' construct).
Third, You must write arguments to this object,token:string,admin_username:stringfrom`...`importTgBotbot=TgBot(`token`,`admin_username`)AFTERobject_botname=botUse in asyng definesExamples:@`bot`.dispatcher.message_handler()asyncdeftest:awaitbot.send_message(`chat_id`,`text`)or@`bot`.dispatcher.message_handler()asyncdeftest:awaitbot.send_photo(`chat_id`,`photo_url`,caption=`text`)or@`bot`.dispatcher.message_handler()asyncdeftest:awaitbot.send_video(`chat_id`,`video_url`,caption=`text`)Beautiful messages to consoleYou can create your beautiful messages to console with color!First, import classMyMessagesfrom the library (use the 'from...import MyMessages' construct)Second, create a object from classMyMessages(use 'name= MyMessage()' construct)Third, you must use amessagedefine based on this object (use, 'name.message()' construct)Fourth, You must write arguments to this define,clear:bool,message:string,colors:string.Example:from`...`importMyMessagesmy_msg=MyMessage()print(my_msg.message(clear=`bool`,text=`text`,color=`[green,blue]`))Download a videos from YouTubeYou can download a video from YouTube using this library!First, import classDownloadVideofrom the library (use the 'from...import DownloadVideo' construct)Second, create a object from classDownloadVideo(use 'name= DownloadVideo()' construct)Third, you must use aDownload_This_Videodefine based on this object (use, 'name.Download_This_Video()' construct)Fourth, you must write arguments to this define,link_on_video:string,video_name:string,resolution:intExample:from`...`importDownloadVideovideo=DownloadVideo()video.download_This_Video(link_on_video=`link`,video_name=`nameoffile`,resolution=`720p`)Create your buttonsYou can create buttons for your messages!For reply_markupExample:from`...`importButtonbtns=Button()keyboard=[[btns.add_button(`text`)],[btns.add_button(`text`)]]reply_btns=btns.add_markup(keyboard)And, you must add object from fourth point tosend_message;send_photo;send_videoinreply_markupargumentFor inline_markupExample:from`...`importButtonbtns=Button()keyboard=[[btns.add_inline_button(`text`,`callback`)],[btns.add_inline_button(`text`,`callback`)]]reply_btns=btns.add_inline_markup(keyboard)And, you must add object from fourth point tosend_messageorsend_photoorsend_videoinreply_markupargumentShow loading animation in your botFirst, you must use a define from TgBot (use 'bot.loading()')Second, you must write argumets to this define,chat_id:int,percentages:intExample:bot=TgBot(`token`)@`bot`.dispatcher.message_handler()asyncdeftest:awaitbot.loading(`chat_id`,`percentages`)Easy work with SQLExample:from`...`importDatabasedb=Database(`filenamedb`)db.create_data(`table`,columns=`list`,values=`list`)#Creating a cell in a DB tabledb.select_data(`table`,column=`str`,search_column=`str`,search_data=`str`)#Return list with your datasdb.edit_data(`table`,column=`str`,new_data=`any`,search_column=`str`,search_data=`str`)#Edit data in your DBdb.delete_data(`table`,search_column=`str`,search_data=`str`)#Delete data fro DBYour custom exceptionsExample:from`...`importMyExceptionMyException(message='',extra_info=any)Logging your projectExample:from`...`importLogged`log`=Logged(type_logging='info',filename='log.log')`log`.info(message='')`log`.error(message='')`log`.warning(message='')`log`.critical(message='')`log`.debug(message='')Password 
makerExample:from`...`importPasswordMaker,TgBot,typesbot=TgBot(`token`,`admin_username`)password=PasswordMaker(bot=bot)@bot.dispatcher.message_handler()asyncdefpassword_maker(message:types.Message):awaitpassword.make(message.from_user.id,'Input a password lenght')Edit text in your messageExample:from`...`inportTgBotbot=TgBot(`token`,`admin_username`)@dp.message_handler()asyncdefa(message:types.Message):text=message.textchat_id=message.chat.idawaitbot.edit_message_text(text=text,chat_id=chat_id)Edit inline markup in your messageExample:from`...`inportTgBotbot=TgBot(`token`,`admin_username`)your_inline_markup='your markup'@dp.message_handler()asyncdefa(message:types.Message):chat_id=message.chat.idmessage_id=message.message_idawaitbot.edit_message_markup(chat_id=chat_id,message_id=message_id,reply_markup=your_inline_markup)For start your bot, you needbot.start_polling(dispatcher=bot Dispatcher, skip_updates=True or False, on_startup=define for start, on_shutdown=define for shutdown).Example:from`...`importTgBotbot=TgBot()bot.start_polling(`nothingargumentsoryourarguments`)Developersauthors:Darkangel, Arkeepauthors telegrams:t.me/darkangel58414andt.me/Stillcrayg |
aiodhcpwatcher | aiodhcpwatcherDocumentation:https://aiodhcpwatcher.readthedocs.ioSource Code:https://github.com/bdraco/aiodhcpwatcherWatch for DHCP packets with asyncioInstallationInstall this via pip (or your favourite package manager):pip install aiodhcpwatcherUsageimportasyncioimportaiodhcpwatcherdef_async_process_dhcp_request(response:aiodhcpwatcher.DHCPRequest)->None:print(response)asyncdefrun():cancel=aiodhcpwatcher.start(_async_process_dhcp_request)awaitasyncio.Event().wait()asyncio.run(run())Contributors ✨Thanks goes to these wonderful people (emoji key):This project follows theall-contributorsspecification. Contributions of any kind welcome!CreditsThis package was created withCopierand thebrowniebroke/pypackage-templateproject template. |
aiodht | No description available on PyPI. |
aiodi | Python Dependency Injection libraryaiodi is a modern Python Dependency Injection library that allows you to standardize and centralize the way objects are constructed in your application highly inspired onPHP Symfony's DependencyInjection Component.Key Features:Native-based: ImplementsPEP 621storing project metadata inpyproject.toml.Dual mode: Setting dependencies usingPythonand usingconfiguration files.Clean: Wherever you want just use it,no more decorators and defaults everywhere.InstallationUse the package managerpipto install aiodi.pipinstallaiodiDocumentationVisitaiodi docs.Usagewith Configuration Files# sample/pyproject.toml[tool.aiodi.variables]name="%env(str:APP_NAME, 'sample')%"version="%env(int:APP_VERSION, '1')%"log_level="%env(APP_LEVEL, 'INFO')%"debug="%env(bool:int:APP_DEBUG, '0')%"text="Hello World"[tool.aiodi.services."_defaults"]project_dir="../../.."[tool.aiodi.services."logging.Logger"]class="sample.libs.utils.get_simple_logger"arguments={name="%var(name)%",level="%var(log_level)%"}[tool.aiodi.services."UserLogger"]type="sample.libs.users.infrastructure.in_memory_user_logger.InMemoryUserLogger"arguments={commands="@logging.Logger"}[tool.aiodi.services."*"]_defaults={autoregistration={resource="sample/libs/*",exclude="sample/libs/users/{domain,infrastructure/in_memory_user_logger.py,infrastructure/*command.py}"}}# sample/apps/settings.pyfromtypingimportOptionalfromaiodiimportContainer,ContainerBuilderdefcontainer(filename:str,cwd:Optional[str]=None)->Container:returnContainerBuilder(filenames=[filename],cwd=cwd).load()# sample/apps/cli/main.pyfromsample.apps.settingsimportcontainerfromloggingimportLoggerdefmain()->None:di=container(filename='../../pyproject.toml')di.get(Logger).info('Just simple call get with the type')di.get('UserLogger').logger().info('Just simple call get with the service name')with PythonfromabcimportABC,abstractmethodfromloggingimportLogger,getLogger,NOTSET,StreamHandler,FormatterfromosimportgetenvfromaiodiimportContainerfromtypingimportOptional,Union_CONTAINER:Optional[Container]=Nonedefget_simple_logger(name:Optional[str]=None,level:Union[str,int]=NOTSET,fmt:str='[%(asctime)s] -%(name)s-%(levelname)s-%(message)s',)->Logger:logger=getLogger(name)logger.setLevel(level)handler=StreamHandler()handler.setLevel(level)formatter=Formatter(fmt)handler.setFormatter(formatter)logger.addHandler(handler)returnloggerclassGreetTo(ABC):@abstractmethoddef__call__(self,who:str)->None:passclassGreetToWithPrint(GreetTo):def__call__(self,who:str)->None:print('Hello '+who)classGreetToWithLogger(GreetTo):_logger:Loggerdef__init__(self,logger:Logger)->None:self._logger=loggerdef__call__(self,who:str)->None:self._logger.info('Hello '+who)defcontainer()->Container:global_CONTAINERif_CONTAINER:return_CONTAINERdi=Container({'env':{'name':getenv('APP_NAME','aiodi'),'log_level':getenv('APP_LEVEL','INFO'),}})di.resolve([(Logger,get_simple_logger,{'name':di.resolve_parameter(lambdadi_:di_.get('env.name',typ=str)),'level':di.resolve_parameter(lambdadi_:di_.get('env.log_level',typ=str)),},),(GreetTo,GreetToWithLogger),# -> (GreetTo, GreetToWithLogger, {})GreetToWithPrint,# -> (GreetToWithPrint, GreetToWithPrint, {})])di.set('who','World!')# ..._CONTAINER=direturndidefmain()->None:di=container()di.get(Logger).info('Just simple call get with the type')forgreet_toindi.get(GreetTo,instance_of=True):greet_to(di.get('who'))if__name__=='__main__':main()RequirementsPython >= 3.7ContributingPull requests are welcome. 
For major changes, please open an issue first to discuss what you would like to change.Please make sure to update tests as appropriate.LicenseMIT |
aiodictcc | UNKNOWN |
aiodine | aiodineaiodine provides async-firstdependency injectionin the style ofPytest fixturesfor Python 3.6+.InstallationConceptsUsageFAQChangelogInstallationpipinstall"aiodine==1.*"Conceptsaiodine revolves around two concepts:Providersare in charge of setting up, returning and optionally cleaning upresources.Consumerscan access these resources by declaring the provider as one of their parameters.This approach is an implementation ofDependency Injectionand makes providers and consumers:Explicit: referencing providers by name on the consumer's signature makes dependencies clear and predictable.Modular: a provider can itself consume other providers, allowing to build ecosystems of reusable (and replaceable) dependencies.Flexible: provided values are reused within a given scope, and providers and consumers support a variety of syntaxes (asynchronous/synchronous, function/generator) to make provisioning fun again.aiodine isasync-firstin the sense that:It was made to work with coroutine functions and the async/await syntax.Consumers can only be called in an asynchronous setting.But provider and consumer functions can be regular Python functions and generators too, if only for convenience.UsageProvidersProvidersmake aresourceavailable to consumers within a certainscope. They are created by decorating aprovider [email protected]'s a "hello world" provider:[email protected]():return"Hello, aiodine!"Providers are available in twoscopes:function: the provider's value is re-computed everytime it is consumed.session: the provider's value is computed only once (the first time it is consumed) and is reused in subsequent calls.By default, providers are function-scoped.ConsumersOnce a provider has been declared, it can be used byconsumers. A consumer is built by decorating aconsumer [email protected]. A consumer can declare a provider as one of its parameters and aiodine will inject it at runtime.Here's an example consumer:@aiodine.consumerasyncdefshow_friendly_message(hello):print(hello)All aiodine consumers are asynchronous, so you'll need to run them in an asynchronous context:fromasyncioimportrunasyncdefmain():awaitshow_friendly_message()run(main())# "Hello, aiodine!"Of course, a consumer can declare non-provider parameters too. aiodine is smart enough to figure out which parameters should be injected via providers, and which should be expected from the [email protected]_friendly_message(hello,repeat=1):for_inrange(repeat):print(hello)asyncdefmain():awaitshow_friendly_message(repeat=10)Providers consuming other providersProviders are modular in the sense that they can themselves consume other providers.For this to work however, providers need to befrozenfirst. 
This ensures that the dependency graph is correctly resolved regardless of the declaration [email protected]():return"[email protected]"@aiodine.providerasyncdefsend_email(email):print(f"Sending email to{email}…")aiodine.freeze()# <- Ensures that `send_email` has resolved `email`.Note: it is safe to call.freeze()multiple times.A context manager syntax is also available:importaiodinewithaiodine.exit_freeze():@aiodine.providerdefemail():return"[email protected]"@aiodine.providerasyncdefsend_email(email):print(f"Sending email to{email}…")Generator providersGenerator providers can be used to perform cleanup (finalization) operations after a provider has gone out of [email protected]_resource():print("setting up complex resource…")yield"complex"print("cleaning up complex resource…")Tip: cleanup code is executed even if an exception occurred in the consumer, so there's no need to surround theyieldstatement with atry/finallyblock.Important: session-scoped generator providers will only be cleaned up if using them in the context of a session. SeeSessionsfor details.Lazy async providersAsync providers areeagerby default: their return value is awaited before being injected into the consumer.You can mark a provider aslazyin order to defer awaiting the provided value to the consumer. This is useful when the provider needs to be conditionally [email protected](lazy=True)asyncdefexpensive_io_call():awaitsleep(10)[email protected](expensive_io_call,cache=None):ifcache:returncachereturnawaitexpensive_io_callFactory providersInstead of returning a scalar value, factory providers return afunction. Factory providers are useful to implement reusable providers that accept a variety of inputs.This is adesign patternmore than anything else. In fact, there's no extra code in aiodine to support this feature.The following example defines a factory provider for a (simulated) database query:[email protected](scope="session")asyncdefnotes():# Some hard-coded sticky notes.return[{"id":1,"text":"Groceries"},{"id":2,"text":"Make potatoe smash"},]@aiodine.providerasyncdefget_note(notes):asyncdef_get_note(pk:int)->list:try:# TODO: fetch from a database instead?returnnext(notefornoteinnotesifnote["id"]==pk)exceptStopIteration:raiseValueError(f"Note with ID{pk}does not exist.")return_get_noteExample usage in a consumer:@aiodine.consumerasyncdefshow_note(pk:int,get_note):print(awaitget_note(pk))Tip: you can combine factory providers withgenerator providersto cleanup any resources the factory needs to use. Here's an example that provides temporary files and removes them on cleanup:[email protected](scope="session")deftmpfile():files=set()asyncdef_create_tmpfile(path:str):withopen(path,"w")astmp:files.add(path)returntmpyield_create_tmpfileforpathinfiles:os.remove(path)Using providers without declaring them as parametersSometimes, a consumer needs to use a provider but doesn't care about the value it returns. 
In these situations, you can use the@useproviderdecorator and skip declaring it as a parameter.Tip: the@useproviderdecorator accepts a variable number of providers, which can be given by name or by [email protected]():os.makedirs("cache",exist_ok=True)@aiodine.providerdefdebug_log_file():withopen("debug.log","w"):passyieldos.remove("debug.log")@[email protected]("cache",debug_log_file)asyncdefbuild_index():...Auto-used providersAuto-used providers areautomatically activated(within their configured scope) without having to declare them as a parameter in the consumer.This can typically spare you from decorating all your consumers with [email protected] example, the auto-used provider below would result in printing the current date and time to the console every time a consumer is [email protected](autouse=True)asyncdeflogdatetime():print(datetime.now())SessionsAsessionis the context in whichsession providerslive.More specifically, session providers (resp. generator session providers) are instanciated (resp. setup) when entering a session, and destroyed (resp. cleaned up) when exiting the session.To enter a session, use:awaitaiodine.enter_session()To exit it:awaitaiodine.exit_session()An async context manager syntax is also available:asyncwithaiodine.session():...Context providersWARNING: this is an experimental feature.Context providers were introduced to solve the problem of injectingcontext-local resources. These resources are typically undefined at the time of provider declaration, but become well-defined when entering some kind ofcontext.This may sound abstract, so let's see an example before showing the usage of context providers.ExampleLet's say we're in a restaurant. There, a waiter executes orders submitted by customers. Each customer is given anOrderobject which they can.write()their desired menu items to.In aiodine terminilogy, the waiter is theproviderof the order, and the customer is aconsumer.During service, the waiter needs to listen to new customers, create a newOrderobject, provide it to the customer, execute the order as written by the customer, and destroy the executed order.So, in this example, thecontextspans from when an order is created to when it is destroyed, and is specific to a given customer.Here's what code simulating this situation on the waiter's side may look like:fromasyncioimportQueueimportaiodineclassOrder:defwrite(self,item:str):...classWaiter:def__init__(self):self._order=Noneself.queue=Queue()# Create an `order` provider for customers to use.# NOTE: the actually provided value is not defined [email protected]():returnself._orderasyncdef_execute(self,order:Order):...asyncdef_serve(self,customer):# NOTE: we've now entered the *context* of serving# a particular customer.# Create a new order that the customer can# via the `order` provider.self._order=Order()awaitcustomer()# Execute the order and destroy it.awaitself._execute(self._order)self._order=Noneasyncdefstart(self):whileTrue:customer=awaitself.queue.get()awaitself._serve(customer)It's important to note that customers can doanythingwith the order. In particular, they may take some time to think about what they are going to order. In the meantime, the server will be listening to other customer calls. In this sense, this situation is anasynchronousone.An example customer code may look like this:[email protected](order:Order):# Pondering while looking at the menu…awaitsleep(10)order.write("Pizza Margheritta")Let's reflect on this for a second. 
Have you noticed that the waiter holds onlyonereference to anOrder? This means that the code works fine as long as onlyonecustomer is served at a time.But what if another customer, saybob, comes along whilealiceis thinking about what she'll order? With the current implementation, the waiter will simplyforgetaboutalice's order, and end up executingbob's order twice. In short: we'll encounter arace condition.By using a context provider, we transparently turn the waiter'sorderinto acontext variable(a.k.a.ContextVar). It is local to the context of each customer, which solves the race condition.Here's how the code would then look like:importaiodineclassWaiter:def__init__(self):self.queue=Queue()self.provider=aiodine.create_context_provider("order")asyncdef_execute(self,order:Order):...asyncdef_serve(self,customer):order=Order()withself.provider.assign(order=order):awaitcustomer()awaitself._execute(order)asyncdefstart(self):whileTrue:customer=awaitself.queue.get()awaitself._serve(customer)Note:Customers can use theorderprovider just like before. In fact, it was created when calling.create_context_provider().Theorderis nowcontext-local, i.e. its value won't be forgotten or scrambled if other customers come and make orders concurrently.This situation may look trivial to some, but it is likely to be found in client/server architectures, including in web frameworks.UsageTo create a context provider, useaiodine.create_context_provider(). This method accepts a variable number of arguments and returns aContextProvider. Each argument is used as the name of a new@providerwhich provides the contents of aContextVarobject.importaiodineprovider=aiodine.create_context_provider("first_name","last_name")Each context variable containsNoneinitially. This means that consumers will receiveNone— unless they are called within the context of an.assign()block:withprovider.assign(first_name="alice"):# Consumers called in this block will receive `"alice"`# if they consume the `first_name` provider....FAQWhy "aiodine"?aiodine contains "aio" as inasyncio, and "di" as inDependency Injection. The last two letters end up making aiodine pronounce likeiodine, the chemical element.ChangelogSeeCHANGELOG.md.LicenseMIT |
aiodinweb | OdinWeb API framework for aiohttp. For building your APIs using asyncio.
Note: Currently in development, APIs can change.
Features:
API Framework designed around OpenAPI
Built in support for OpenAPI spec
Built in support for CORS
Handling of validation of all incoming parameters (via Odin)
Handling of Serialisation and Deserialisation of data into common API content types including JSON, XML, and YAML.
Easily extensible.
Built in Authorisation and customisable Authentication.
Fully type annotated with Python 3.6+ typing support.
Contributions
Contributions are always welcome, however please ensure the following guidelines are met to ensure your PR will be accepted.
AIOdinWeb uses Git-Flow
Check with Flake8, this must pass
Ensure type annotations are fully applied.
Ensure your contribution comes with fast test cases (for PyTest)
Documentation is generated from code, ensure your contribution is documented.
Thanks! |
aiodirector | Failed to fetch description. HTTP Status Code: 404 |
aiodirigera | aiodirigera |
aiodiscover | Async Host discoveryDiscover hosts by arp and ptr lookupFeaturesDiscover hosts on the network via ARP and PTR lookupQuick StartimportasyncioimportpprintfromaiodiscoverimportDiscoverHostsdiscover_hosts=DiscoverHosts()hosts=asyncio.run(discover_hosts.async_discover())pprint.pprint(hosts)InstallationStable Release:pip install aiodiscoverDevelopment Head:pip install git+https://github.com/bdraco/aiodiscover.gitDocumentationFor full package documentation please visitbdraco.github.io/aiodiscover.DevelopmentSeeCONTRIBUTING.mdfor information related to developing the code.The Four Commands You Need To Knowpip install -e .[dev]This will install your package in editable mode with all the required development
dependencies (i.e.tox).make buildThis will runtoxwhich will run all your tests in both Python 3.7
and Python 3.8 as well as linting your code.make cleanThis will clean up various Python and build generated files so that you can ensure
that you are working in a clean environment.make docsThis will generate and launch a web browser to view the most up-to-date
documentation for your Python package.Additional Optional Setup Steps:Turn your project into a GitHub repository:Make an account ongithub.comGo tomake a new repositoryRecommendations:It is strongly recommended to make the repository name the same as the Python
package nameA lot of the following optional steps arefreeif the repository is Public,
plus open source is coolAfter a GitHub repo has been created, run the commands listed under:
"...or push an existing repository from the command line"Register your project with Codecov:Make an account oncodecov.io(Recommended to sign in with GitHub)
everything else will be handled for you.Ensure that you have set GitHub pages to build thegh-pagesbranch by selecting thegh-pagesbranch in the dropdown in the "GitHub Pages" section of the repository settings.
(Repo Settings)Register your project with PyPI:Make an account onpypi.orgGo to your GitHub repository's settings and under theSecrets tab,
add a secret calledPYPI_TOKENwith your password for your PyPI account.
Don't worry, no one will see this password because it will be encrypted.Next time you push to the branchmainafter usingbump2version, GitHub
actions will build and deploy your Python package to PyPI.Suggested Git Branch Strategymainis for the most up-to-date development, very rarely should you directly
commit to this branch. GitHub Actions will run on every push and on a CRON to this
branch but still recommended to commit to your development branches and make pull
requests to main. If you push a tagged commit with bumpversion, this will also release to PyPI.Your day-to-day work should exist on branches separate frommain. Even if it is
just yourself working on the repository, make a PR from your working branch tomainso that you can ensure your commits don't break the development head. GitHub Actions
will run on every push to any branch or any pull request from any branch to any other
branch.It is recommended to use "Squash and Merge" commits when committing PR's. It makes
each set of changes tomainatomic and as a side effect naturally encourages small
well defined PR's.Apache Software License 2.0 |
aiodisk | No description available on PyPI. |
aiodiskdb | Minimal, embeddable on-disk DB, tailored for asyncio.aiodiskdbis a lightweight, fast, simpleappend onlydatabase.To be used in theasyncioevent loop.InstallpipinstallaiodiskdbUsageStart the DB by fire and forget:fromaiodiskdbimportAioDiskDB,ItemLocationdb=AioDiskDB('/tmp/aiodiskdb')loop.create_task(db.start())Use the db API to write and read data from a coroutine.asyncdefread_and_write():new_data_location:ItemLocation=awaitdb.add(b'data')data:bytes=awaitdb.read(location)assertdata==b'data'noted_location=ItemLocation(index=0,position=80,size=1024333)prev_saved_data:bytes=awaitdb.read(noted_location)assertlen(prev_saved_data)==1024333Stop the DB before closing the application.awaitdb.stop()Be alerted when data is actually persisted to disk:asyncdefcallback(timestamp:int,event:WriteEvent):human_time=datetime.fromtimestamp(timestamp).isoformat()log(f'{human_time}-{event}persisted to disk.')awaitdo_something(location)db.events.on_write=callbackOr hook to other events:db.events.on_start=...db.events.on_stop=...db.events.on_failure=...db.events.on_index_drop=...Asynchronous non-blockingHandle file writes with no locks.
Data is appended in RAM and persisted asynchronously, according to customizable settings.Transactional"All or nothing" commit.
Locks all DB write operations during commits, while still allowing reads.
Ensure an arbitrary sequence of data is persisted to disk.Transaction is scoped. Data added into a transaction is not available outside until committed.transaction=awaitdb.transaction()transaction.add(b'cafe')transaction.add(b'babe')transaction.add(b'deadbeef')locations:typing.Sequence[ItemLocation]=awaittransaction.commit()Not-so-append-onlyAiodiskdbis an append-only database. It means you'll never see methods todeleteorremovesingle entries.However, data pruning is supported, with the following methods:db.enable_overwrite()db.rtrim(0,400)db.ltrim(8,900)db.drop_index(3)db.disable_overwrite()These three methods respectively:prune data from the right, at index0, starting from the location400to the index end (rtrim)prune data from the left, at index8, starting from the beginning to the location900(ltrim)drop the whole index3, resulting in a file deletion:drop_indexAll the items locations not involved into a TRIM operation remains unmodified, even after anltrim.Highly customizableThe default parameters:_FILE_SIZE=128_FILE_PREFIX='data'_FILE_ZEROS_PADDING=5_BUFFER_SIZE=16_BUFFER_ITEMS=1000_FLUSH_INTERVAL=30_TIMEOUT=30_CONCURRENCY=32can be easily customized. In the following example the files max size is 16 MB,
and data is persisted to disk every 1 MB, OR every 100 new items, OR every minute.db=AioDiskDB(max_file_size=16,max_buffer_size=1,max_buffer_items=100,flush_interval=60)The max DB size ismax_file_size * max_files.
Withfile_padding=5the max number of files is 10,000.A DB created withfile_padding=5andmax_file_size=16is capable of storing up to 160 GB, or 167,772,160,000 items,
and at its maximum capacity will allocate 10,000 files.Try to do its bestHook the blockingon_stop_signalmethod to avoid data losses on exit.importsignalfromaiodiskdbimportAioDiskDBdb=AioDiskDB(...)signal.signal(signal.SIGINT,db.on_stop_signal)signal.signal(signal.SIGTERM,db.on_stop_signal)signal.signal(signal.SIGKILL,db.on_stop_signal)Quite fast enough for some use casesConcurrency tests, part of the unit tests, can be replicated as a system benchmark.
The following results were obtained on a common consumer SSD:Duration: 14.12s,
Reads: 2271 (~162/s),
Writes: 2014 (~143/s),
Bandwidth: 1000MB (71MB/s),
Avg file size: 508.0kBDuration: 18.97s,
Reads: 10244 (~540/s),
Writes: 10245 (~540/s),
Bandwidth: 20MB (1.05MB/s),
Avg file size: 1.0kBLimitationsassertlen(data)<=max_buffer_sizeassertmax_transaction_size<RAMassertmax_file_size<4096Ifrtrimis applied on thecurrentindex, the space is reused, otherwise it is not.
Withltrim, once the space is freed, it is not allocated again.
Withdrop_indexthe discarded index is not reused.With a lot of data turn-over (pruning by trimming), it may be necessary to set an unusually highfile_padding, and
increase the database potential size.CreditsInspired by the raw block data storage of thebitcoincore blocks database.Logo by mepheesto.NotesAlpha stage. Still under development, use with care and expect data losses.Donate :heart:Bitcointo: 3FVGopUDc6tyAP6t4P8f3GkYTJ5JD5tPwV orpaypal |
aiodiskqueue | Persistent queue for Python AsyncIO.DescriptionThis library provides a persistent FIFO queue for Python AsyncIO:Queue content persist a process restartFeature parity with Python’sasyncio.QueueSimilar API to Python’sasyncio.QueueSane loggingType hintsFully testedSupports different storage engines and can be extended with custom storage enginesUsageHere is a basic example on how to use the queue:importasynciofromaiodiskqueueimportQueueasyncdefmain():q=awaitQueue.create("example_queue.sqlite")awaitq.put("some item")item=awaitq.get()print(item)asyncio.run(main())Please see theexamplesfolder for more usage examples.InstallationYou can install this library directly from PyPI with the following command:pipinstallaiodiskqueueLoggingThe name of the logger for all logging by this library is:aiodiskqueue.Storage Enginesaiodiskqueue support different storage engines. The default engine isDbmEngine.We measured the throughput for a typical load scenario (5 producers, 1 consumer) with each storage engine:DbmEngine: Consistent throughput at low and high volumes and about 3 x faster then SqlitePickledList: Very fast at low volumes, but does not scale wellSqliteEngine: Consistent throughput at low and high volumes. Relatively slow.The scripts for running the measurements and generating this chart can be found in the measurements folder. |
aiodispatch | AioDispatchAioDispatch is a simple and pluggable async dispatcher framework with batteries included. AioDispatch can be used
to offload expensive operations to external workers. For example, you might use the framework to send email, execute
big queries or analyse large datasets.AioDispatch is designed to work right out of the box, but to remain pluggable. For example, a custom broker is
a matter of subclassingaiodispatch.brokers.abc.Brokerand a serializeraiodispatch.serializers.abc.Serializer.InstallpipinstallaiodispatchUsageimportasynciofromaiodispatch.brokers.memoryimportMemoryBrokerfromaiodispatch.decoratorsimporttaskfromaiodispatch.dispatchimportDispatcherfromaiodispatch.serializers.jsonimportJsonSerializerfromaiodispatch.workerimportWorker@task()asyncdefslow_greeter(name:str)->None:awaitasyncio.sleep(2)print(f"Hello{name}")asyncdefproducer(num:int=10)->None:foriinrange(num):awaitslow_greeter(name=str(i))asyncdefmain():broker=MemoryBroker()serializer=JsonSerializer()dispatcher=Dispatcher(broker,serializer)worker=Worker(dispatcher,semaphore=asyncio.Semaphore(1))asyncwithasyncio.TaskGroup()astg:tg.create_task(worker.start())tg.create_task(producer())if__name__=="__main__":asyncio.run(main()) |
aiodistbus | A Distributed Eventbus using ZeroMQ and AsyncIO for Python.The objective of this library is to provide both a local and distributed eventbus that are compatible to communicate. A similar API can be used in both versions of the eventbuses implementations.InstallationFor installing the package, download from PyPI and install withpip:pipinstallaiodistbusHere is a link to theDocumentation. If you encounter any issues in terms of code or documentation, please don't hesitate to make an issue.EventBus ExampleThe eventbus implementation follows a client-server design approach, with theDEventBusas the server andDEntryPointas the client. Here is a quick example to emit an event.importasynciofromdataclassesimportdataclassfromdataclasses_jsonimportDataClassJsonMixin# DO NOT FORGET THIS!importaiodistbusasadb@dataclassclassExampleEvent(DataClassJsonMixin):# NEEDS TO BE A DataClassJsonMixin!msg:strasyncdefhandler(event:ExampleEvent):print(event)asyncdefmain():# Create resourcesbus,e1,e2=adb.DEventBus(),adb.DEntryPoint(),adb.DEntryPoint()# Connectawaite1.connect(bus.ip,bus.port)awaite2.connect(bus.ip,bus.port)# Add funcsawaite1.on("test",handler,ExampleEvent)# Send messageawaite2.emit("test",ExampleEvent("hello"))# Flushawaitbus.flush()# Close resourcesawaite1.close()awaite2.close()awaitbus.close()if__name__=='__main__':asyncio.run(main())DesignIn theaiodistbuslibrary, we provided 2 eventbus implementations:EventBusandDEventBus. TheEventBusclass is for local (within same Python runtime) observer pattern. In the other hand,DEventBusclass is for a distributed eventbus that leverages ZeroMQ -- closing following theClone pattern.The Clone pattern uses a client-server structure, where a centralized broker broadcasts messages sent by clients. As described in the ZeroMQ Guide, this creates a single point of failure, but yields in a simpler and more scalable implementation.ContributingContributions are welcomed! OurDeveloper Documentationshould provide more details in how ChimeraPy works and what is in current development.LicenseChimeraPyandChimeraPy/aiodistbususes the GNU GENERAL PUBLIC LICENSE, as found inLICENSEfile.Funding InfoThis project is supported by theNational Science Foundationunder AI Institute Grant No.DRL-2112635. |
aiodistributor | aiodistributorPython Asynchronous Library for Synchronization of Replicated MicroservicesThis library provides a set of tools for synchronizing replicated microservices using Redis. The main goal is to facilitate inter-service communication, rate limiting, and throttling mechanisms in a distributed environment.Features:Distributed sliding counter for implementing rate limiting or throttle mechanisms.Distributed waiter for waiting for signals from other nodes or services and triggering the appropriate callback.Distributed task for managing and distributing tasks across multiple nodes or services.Distributed cache for caching data and sharing it among services.Distributed notifier for event-driven communication between nodes or services.Utilizes Redis for storage and message passing between nodes.Asynchronous and non-blocking design using Python's asyncio library.DependenciesPython 3.10+Redis serverredis-py or aioredisInstallationTo install the aiodistributor library, you can simply use pip:pip install aiodistributorUsageDetailed examples and usage instructions can be found inexamplesfolderContributingContributions are welcome! Please submit a pull request or create an issue to discuss proposed changes. |
aiodjango | This is a proof-of-concept experiment to combine a Django WSGI app with
async views/websocket handlers using aiohttp. The API is highly unstable
and I wouldn’t recommend that you use this code for anything other than
wild experimentation.How It Worksaiodjango.get_aio_applicationbuilds an application which combines both
request handlers/views from Django andaiohttp.web.
Views are defined using the normal Django url pattern syntax but
any handler which is a coroutine is handled by theaiohttpapplication
while the rest of the views are handled by the normal Django app.Internally this makes use ofaiohttp-wsgi, which runs the Django WSGI app in a thread-pool to minimize blocking the async portions of the app.
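As a rough illustration only (the view functions below are invented and the exact signature of get_aio_application is assumed, not taken from this README), mixing the two kinds of handlers might look like:
# hypothetical sketch -- everything except get_aio_application is made up
from aiodjango import get_aio_application
from aiohttp import web
from django.conf.urls import url
from django.http import HttpResponse

def django_view(request):
    # plain synchronous view, served by the Django WSGI app in the thread-pool
    return HttpResponse('rendered by Django')

async def aio_view(request):
    # coroutine view, served directly by the aiohttp application
    return web.Response(text='rendered by aiohttp')

urlpatterns = [
    url(r'^sync/$', django_view),
    url(r'^async/$', aio_view),
]

app = get_aio_application()  # signature assumed; builds the combined aiohttp.web application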
Running the DemoThe example project requires Python 3.4+ to run. You should create a virtualenv
to install the necessary requirements:$ git clone https://github.com/mlavin/aiodjango.git
$ cd aiodjango/
$ mkvirtualenv aiodjango -p `which python3.4`
(aiodjango) $ add2virtualenv .
(aiodjango) $ cd example
(aiodjango) $ pip install -r requirements.txt
(aiodjango) $ python manage.py migrate
(aiodjango) $ python manage.py runserverThis starts the server onhttp://localhost:8000/with a new version of Django’s
built-in runserver. For a more multi-process server you can run using the
aiohttp worker along with Gunicorn:(aiodjango) $ gunicorn example.wsgi:app --worker-class aiohttp.worker.GunicornWebWorker --workers 2 |
aiodl | AiodlAiodl – Yet another command line download accelerator.FeaturesAccelerate the downloading process by using multiple connections for
one file.Reasonable retries on network errors.Breakpoint resume.Installation$pip3installaiodl--user# or$sudopip3installaiodlUsageCommandlineSimply callaiodlwith the URL:$aiodlhttps://dl.google.com/translate/android/Translate.apkFile: Translate.apk
Size: 16.8M
Type: application/vnd.android.package-archive
11%|████▎ | 1.78M/16.0M [00:03<00:26, 565KB/s]Hit Ctrl+C to stop the download. Aiodl will save necessary information
to<download-file>.aiodl; next time it will automatically continue
the download from there.Other arguments:--fake-user-agent, -u Use a fake User-Agent.
--num-tasks N, -n N Limit number of asynchronous tasks.
--max-tries N, -r N Limit retries on network errors.In your scriptimportaiodl# in an async functionfilename=awaitaiodl.download('https://dl.google.com/translate/android/Translate.apk',quiet=True) |
aiodns | aiodns provides a simple way for doing asynchronous DNS resolutions usingpycares.Exampleimportasyncioimportaiodnsloop=asyncio.get_event_loop()resolver=aiodns.DNSResolver(loop=loop)asyncdefquery(name,query_type):returnawaitresolver.query(name,query_type)coro=query('google.com','A')result=loop.run_until_complete(coro)The following query types are supported: A, AAAA, ANY, CAA, CNAME, MX, NAPTR, NS, PTR, SOA, SRV, TXT.APIThe API is pretty simple, three functions are provided in theDNSResolverclass:query(host, type): Do a DNS resolution of the given type for the given hostname. It returns an
instance ofasyncio.Future. The actual result of the DNS query is taken directly from pycares.
As of version 1.0.0 of aiodns (and pycares, for that matter) results are always namedtuple-like
objects with different attributes. Please check thedocumentationfor the result fields.gethostbyname(host, socket_family): Do a DNS resolution for the given
hostname and the desired type of address family (i.e.socket.AF_INET).
Whilequery()always performs a request to a DNS server,gethostbyname()first looks into/etc/hostsand thus can resolve
local hostnames (such aslocalhost). Please checkthe documentationfor the result fields. The actual result of the call is aasyncio.Future.gethostbyaddr(name): Make a reverse lookup for an address.cancel(): Cancel all pending DNS queries. All futures will getDNSErrorexception set, withARES_ECANCELLEDerrno.Note for Windows usersThis library requires the asyncio loop to be aSelectorEventLoop, which is not the default on Windows since
Python 3.8.The default can be changed as follows (do this very early in your application):asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())This may have other implications for the rest of your codebase, so make sure to test thoroughly.Running the test suiteTo run the test suite:python tests.pyAuthorSaúl Ibarra Corretgé <[email protected]>Licenseaiodns uses the MIT license, check LICENSE file.Python versionsPython >= 3.6 are supported.ContributingIf you’d like to contribute, fork the project, make a patch and send a pull
request. Have a look at the surrounding code and please, make yours look
alike :-) |
aiodnsbl | aiodnsblDNSBLlists checker based onaiodns. Checks if an IP or a domain is listed on anti-spam DNS blacklists.NotesThis is a fork ofpydnsbl.Key differences:Fully type annotatedNo sync wrapper (async only)No category classificationInstallationpipinstallaiodnsblUsageimportasynciofromaiodnsblimportDNSBLCheckerloop=asyncio.get_event_loop()checker=DNSBLChecker()# Check IPloop.run_until_complete(checker.check("8.8.8.8"))# <DNSBLResult: 8.8.8.8 (0/10)>loop.run_until_complete(checker.check("68.128.212.240"))# <DNSBLResult: 68.128.212.240 [BLACKLISTED] (4/10)># Check domainloop.run_until_complete(checker.check("example.com"))# <DNSBLResult: example.com (0/4)># Bulk checkloop.run_until_complete(checker.bulk_check(["example.com","8.8.8.8","68.128.212.240"]))# [<DNSBLResult: example.com (0/4)>, <DNSBLResult: 8.8.8.8 (0/10)>, <DNSBLResult: 68.128.212.240 [BLACKLISTED] (4/10)>]importasynciofromaiodnsblimportDNSBLCheckerasyncdefmain():checker=DNSBLChecker()res=awaitchecker.check("68.128.212.240")print(res)# <DNSBLResult: 68.128.212.240 [BLACKLISTED] (4/10)>print(res.blacklisted)# Trueprint([provider.hostforproviderinres.providers])# ['b.barracudacentral.org', 'bl.spamcop.net', 'dnsbl.sorbs.net', 'ips.backscatterer.org', ...]print([provider.hostforproviderinres.detected_by])# ['b.barracudacentral.org', 'dnsbl.sorbs.net', 'spam.dnsbl.sorbs.net', 'zen.spamhaus.org']loop=asyncio.get_event_loop()loop.run_until_complete(main()) |
aiodnsbrute | Brute force DNS domain names asynchronously |
aiodnsresolver | aiodnsresolverAsyncio Python DNS resolver. Pure Python, with no dependencies other than the standard library, threads are not used, no additional tasks are created, and all code is in a single module. The nameservers to query are taken from/etc/resolv.conf, and treats hosts in/etc/hostsas A or AAAA records with a TTL of 0.Designed for highly concurrent/HA situations. Based onhttps://github.com/gera2ld/async_dns.InstallationpipinstallaiodnsresolverUsagefromaiodnsresolverimportResolver,TYPESresolve,_=Resolver()ip_addresses=awaitresolve('www.google.com',TYPES.A)Returned are tuples of subclasses ofIPv4AddressorIPv6Address. Both support conversion to their usual string form by passing them tostr.CacheA cache is part of eachResolver(), expiring records automatically according to their TTL.importasynciofromaiodnsresolverimportResolver,TYPESresolve,clear_cache=Resolver()# Will make a request to the nameserver(s)ip_addresses=awaitresolve('www.google.com',TYPES.A)# Will only make another request to the nameserver(s) if the ip_addresses have expiredip_addresses=awaitresolve('www.google.com',TYPES.A)awaitclear_cache()# Will make another request to the nameserver(s)ip_addresses=awaitresolve('www.google.com',TYPES.A)The cache for each record starts on thestartof each request, so duplicate concurrent requests for the same record are not made.TTL / Record expiryThe address objects each have an extra property,expires_at, that returns the expiry time of the address, according to theloop.time()clock, and the TTL of the records involved to find that address.importasynciofromaiodnsresolverimportResolver,TYPESresolve,_=Resolver()ip_addresses=awaitresolve('www.google.com',TYPES.A)loop=asyncio.get_event_loop()forip_addressinip_address:print('TTL',max(0.0,ip_address.expires_at-loop.time())This can be used in HA situations to assist failovers. The timer forexpires_atstarts justbeforethe request to the nameserver is made.CNAMEsCNAME records are followed transparently. Theexpires_atof IP addresses found via intermediate CNAME(s) is determined by using the minimumexpires_atof all the records involved in determining those IP addresses.Custom nameservers and timeoutsIt is possible to query nameservers other than those in/etc/resolv.conf, and for each to specify a timeout in seconds to wait for a reply before querying the next.asyncdefget_nameservers(_,__):yield(0.5,('8.8.8.8',53))yield(0.5,('1.1.1.1',53))yield(1.0,('8.8.8.8',53))yield(1.0,('1.1.1.1',53))resolve,_=Resolver(get_nameservers=get_nameservers)ip_addresses=awaitresolve('www.google.com',TYPES.A)Parallel requests to multiple nameservers are also possible, where the first response from each set of requests is used.asyncdefget_nameservers(_,__):# For any record request, udp packets are sent to both 8.8.8.8 and 1.1.1.1, waiting 0.5 seconds# for the first response...yield(0.5,('8.8.8.8',53),('1.1.1.1',53))# ... 
if no response, make another set of requests, waiting 1.0 seconds before timing outyield(1.0,('8.8.8.8',53),('1.1.1.1',53))resolve,_=Resolver(get_nameservers=get_nameservers)ip_addresses=awaitresolve('www.google.com',TYPES.A)This can be used as part of a HA system: if a nameserver isn't contactable, this pattern avoids waiting for its timeout before querying another nameserver.Custom hostsIt's possible to specify hosts without editing the/etc/hostsfile.fromaiodnsresolverimportResolver,IPv4AddressExpiresAt,TYPESasyncdefget_host(_,fqdn,qtype):hosts={b'localhost':{TYPES.A:IPv4AddressExpiresAt('127.0.0.1',expires_at=0),},b'example.com':{TYPES.A:IPv4AddressExpiresAt('127.0.0.1',expires_at=0),},}try:returnhosts[qtype][fqdn]exceptKeyError:returnNoneresolve,_=Resolver(get_host=get_host)ip_addresses=awaitresolve('www.google.com',TYPES.A)ExceptionsExceptions are subclasses ofDnsError, and are raised if a record does not exist, on socket errors, timeouts, message parsing errors, or other errors returned from the nameserver.Specifically, if a record is determined to not exist,DnsRecordDoesNotExistis raised.fromaiodnsresolverimportResolver,TYPES,DnsRecordDoesNotExist,DnsErrorresolve,_=Resolver()try:ip_addresses=awaitresolve('www.google.com',TYPES.A)exceptDnsRecordDoesNotExist:print('domain does not exist')raiseexceptDnsErrorasexception:print(type(exception))raiseIf a lower-level exception caused theDnsError, it will be in the__cause__attribute of the exception.LoggingBy default logging is through theLoggernamedaiodnsresolver, and all messages are prefixed with[dns]or[dns:<fqdn>,<query-type>]through aLoggerAdapter. Each function acceptsget_logger_adapter: the default of which results in this behaviour, and can be overridden to set either theLoggeror theLoggerAdapter.importloggingfromaiodnsresolverimportResolver,ResolverLoggerAdapterresolve,clear_cache=Resolver(get_logger_adapter=lambdaextra:ResolverLoggerAdapter(logging.getLogger('my-application.dns'),extra),)TheLoggerAdapterused byresolveandclear_cachedefaults to the one passed toResolver.Chaining logging adaptersFor complex or highly concurrent applications, it may be desirable that logging adapters be chained to output log messages that incorporate a parent context. So the default ouput of[dns:my-domain.com,A] Concurrent request found, waiting for it to completewould be prefixed with aparentcontext to output something like[request:12345] [dns:my-domain.com,A] Concurrent request found, waiting for it to completeTo do this, setget_logger_adapteras a function that chains multipleLoggerAdapter.importloggingfromaiodnsresolverimportResolver,TYPES,ResolverLoggerAdapterclassRequestAdapter(logging.LoggerAdapter):defprocess(self,msg,kwargs):return'[request:%s]%s'%(self.extra['request-id'],msg),kwargsdefget_logger_adapter(extra):parent_adapter=RequestAdapter(logging.getLogger('my-application.dns'),{'request-id':'12345'})child_adapter=ResolverLoggerAdapter(parent_adapter,extra)returnchild_adapterresolve,_=Resolver()result=awaitresolve('www.google.com',TYPES.A,get_logger_adapter=get_logger_adapter)Log levelsA maximum of two messages per DNS query are logged atINFO. If a nameserver fails, aWARNINGis issued [although an exception will be raised if no nameservers succeed], and the remainder of messages are logged atDEBUG. 
NoERRORorCRITICALmessages are issued when exceptions are raised: it is the responsiblity of client code to log these if desired.Disable 0x20-bit encodingBy default each domain name is encoded with0x20-bit encodingbefore being sent to the nameservers. However, some nameservers, such as Docker's built-in, do not support this. So, to control or disable the encoding, you can pass a customtransform_fqdncoroutine to Resolver that does not perform any additional encoding.fromaiodnsresolverimportResolverasyncdeftransform_fqdn_no_0x20_encoding(fqdn):returnfqdnresolve,_=Resolver(transform_fqdn=transform_fqdn_no_0x20_encoding)or performs it conditionallyfromaiodnsresolverimportResolver,mix_caseasyncdeftransform_fqdn_0x20_encoding_conditionally(fqdn):return\fqdniffqdn.endswith(b'some-domain')else\awaitmix_case(fqdn)resolve,_=Resolver(transform_fqdn=transform_fqdn_0x20_encoding_conditionally)Security considerationsTo migitate spoofing, several techniques are used.Each query is given a random ID, which is checked against any response.By default each domain name is encoded with0x20-bit encoding, which is checked against any response.A new socket, and so a new random local port, is used for each query.Requests made for a domain while there is an in-flight query for that domain, wait for the the in-flight query to finish, and use its result.Also, to migitate the risk of evil responses/configurationPointer loopsare detected.CNAME chains have a maximum length.Event loop, tasks, and yieldingNo tasks are created, and the event loop is only yielded to during socket communication. Because fetching results from the cache involves no socket communication, this means that cached results are fetched without yielding. This introduces a small inconsistency between fetching cached and non-cached results, and so clients should be written to not depend on the presence or lack of a yield during resolution. This is a typically recommended process however: it should be expected that coroutines might yield.The trade-off for this inconsistency is that cached results are fetched slightly faster than if resolving were to yield in all cases.For CNAME chains, the event loop is yielded during each communication for non-cached parts of the chain.ScopeThe scope of this project is deliberately restricted to operations that are used to resolve A or AAAA records: to resolve a domain name to its IP addresses so that IP connections can be made, and have similar responsibilities togethostbyname. Some limited extra behaviour is present/may be added, but great care is taken to prevent scope creep, especially to not add complexity that isn't required to resolve A or AAAA records.UDP queries are made, but not TCP. DNS servers must support UDP, and it's impossible for a single A and AAAA record to not fit into the maximum size of a UDP DNS response, 512 bytes. There may be other data that the DNS server would return in TCP connections, but this isn't required to resolve a domain name to a single IP address.It is technically possible that in the case of extremely high numbers of A or AAAA records for a domain, they would not fit in a single UDP message. However, this is extremely unlikely, and in this unlikely case, extremely unlikely to affect target applications in any meaningful way. If a truncated message is received, a warning is logged.The resolver is astubresolver: it delegates the responsibility of recursion to the nameserver(s) it queries. 
In the vast majority of envisioned use cases this is acceptable, since the nameservers in/etc/resolv.confwill be recursive.Example: aiohttpimportasyncioimportsocketfromaiodnsresolverimport(TYPES,Resolver,DnsError,DnsRecordDoesNotExist,)importaiohttpclassAioHttpDnsResolver(aiohttp.abc.AbstractResolver):def__init__(self):super().__init__()self.resolver,self.clear_cache=Resolver()asyncdefresolve(self,host,port=0,family=socket.AF_INET):# Use ipv4 unless requested otherwise# This is consistent with the default aiohttp + aiodns AsyncResolverrecord_type=\TYPES.AAAAiffamily==socket.AF_INET6else\TYPES.Atry:ip_addresses=awaitself.resolver(host,record_type)exceptDnsRecordDoesNotExistasdoes_not_exist:raiseOSError(0,'{}does not exist'.format(host))fromdoes_not_existexceptDnsErrorasdns_error:raiseOSError(0,'{}failed to resolve'.format(host))fromdns_errorreturn[{'hostname':host,'host':str(ip_address),'port':port,'family':family,'proto':socket.IPPROTO_TCP,'flags':socket.AI_NUMERICHOST,}forip_addressinip_addresses]asyncdefclose(self):awaitself.clear_cache()asyncdefmain():asyncwithaiohttp.ClientSession(connector=aiohttp.TCPConnector(use_dns_cache=False,resolver=AioHttpDnsResolver()),)assession:asyncwithawaitsession.get('https://www.google.com/')asresult:print(result)loop=asyncio.get_event_loop()loop.run_until_complete(main())loop.close()Example: tornadoimportasyncioimportsocketfromaiodnsresolverimport(TYPES,DnsError,DnsRecordDoesNotExist,Resolver,)importtornado.httpclientimporttornado.netutilclassAioHttpDnsResolver(tornado.netutil.Resolver):definitialize(self):self.resolver,self.clear_cache=Resolver()asyncdefresolve(self,host,port=0,family=socket.AF_UNSPEC):# Use ipv4 unless ipv6 requestedrecord_type,family_conn=\(TYPES.AAAA,socket.AF_INET6)iffamily==socket.AF_INET6else\(TYPES.A,socket.AF_INET)try:ip_addresses=awaitself.resolver(host,record_type)exceptDnsRecordDoesNotExistasdoes_not_exist:raiseIOError('{}does not exist'.format(host))fromdoes_not_existexceptDnsErrorasdns_error:raiseIOError('{}failed to resolve'.format(host))fromdns_errorreturn[(family_conn,(str(ip_address),port))forip_addressinip_addresses]asyncdefclose(self):awaitself.clear_cache()asyncdefmain():tornado.netutil.Resolver.configure(AioHttpDnsResolver)http_client=tornado.httpclient.AsyncHTTPClient()response=awaithttp_client.fetch("http://www.google.com")print(response.body)loop=asyncio.get_event_loop()loop.run_until_complete(main())loop.close()Example: lowhaioNo extra code is needed to use aiodnsresolver withlowhaio: it is used by default.Testing strategyTests attempt to closly match real-world use, and assert on how input translate to output, i.e. thepublicbehaviour of the resolver. Therefore the tests avoid assumptions on implementation details.There are however exceptions.Many tests assume that timeouts are controlled byasyncio.sleep,loop.call_laterorloop.call_at. This is to allow time to be fast-forwarded through cache invalidation usingaiofastforwardwithout actually having to wait the corresponding time in the tests. Also, many tests assumeopenis used to access files, and patch it to allow assertions on what the code would do with different contents of/etc/resolv.confor/etc/hosts.While both being assumptions, they are both unlikely to change, and in the case that they are changed, this would much more likely result in tests failing incorrectly rather than passing incorrectly. 
Therefore these are low-risk assumptions.

A higher-risk assumption is that many tests use the otherwise-private pack and parse functions as part of the built-in DNS server used by the tests. These are the core functions the production code uses to pack and parse DNS messages. So, while asserting that the resolver can communicate with the built-in nameserver, all the tests really do is assert that pack and parse are consistent with each other: it is an assumption that other nameservers have equivalent behaviour.

To mitigate the risks that these assumptions bring, some "end to end"-style tests are included, which use whatever nameservers are in /etc/resolv.conf and assert on globally available DNS results. While not going through every possible case of input, they do validate that core behaviour is consistent with one other implementation of the protocol.
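For context, the A/AAAA resolution that the sections above describe is a single awaited call. Below is a minimal sketch of resolving both record types for one domain, based on the Resolver usage shown in the aiohttp and tornado examples above; the domain name is only illustrative.

import asyncio

from aiodnsresolver import Resolver, TYPES, DnsError, DnsRecordDoesNotExist


async def main():
    # Resolver() returns a resolve coroutine and a cache-clearing coroutine,
    # as used in the aiohttp and tornado examples above
    resolve, clear_cache = Resolver()
    try:
        ipv4_addresses = await resolve('www.google.com', TYPES.A)
        ipv6_addresses = await resolve('www.google.com', TYPES.AAAA)
        print(ipv4_addresses, ipv6_addresses)
    except DnsRecordDoesNotExist:
        print('No record of the requested type')
    except DnsError:
        print('Query failed')
    finally:
        await clear_cache()


asyncio.run(main())

Adjust the domain and the error handling to your application's needs. |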
aiodo | aiodo: Asynchronous first distributed objects |
aiodocker | AsyncIO bindings for docker.io

A simple Docker HTTP API wrapper written with asyncio and aiohttp.

Installation

pip install aiodocker

Documentation

http://aiodocker.readthedocs.io

Examples

import asyncio
import aiodocker


async def list_things():
    docker = aiodocker.Docker()
    print('== Images ==')
    for image in (await docker.images.list()):
        tags = image['RepoTags'][0] if image['RepoTags'] else ''
        print(image['Id'], tags)
    print('== Containers ==')
    for container in (await docker.containers.list()):
        print(f"{container._id}")
    await docker.close()


async def run_container():
    docker = aiodocker.Docker()
    print('== Running a hello-world container ==')
    container = await docker.containers.create_or_replace(
        config={
            'Cmd': ['/bin/ash', '-c', 'echo "hello world"'],
            'Image': 'alpine:latest',
        },
        name='testing',
    )
    await container.start()
    logs = await container.log(stdout=True)
    print(''.join(logs))
    await container.delete(force=True)
    await docker.close()


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(list_things())
    loop.run_until_complete(run_container())
    loop.close()

Changes

0.21.0 (2021-07-23)

Bugfixes
- Use ssl_context passed to Docker constructor for creating underlying connection to docker engine. (#536)
- Fix an error in attach/exec when the container stops before the connection to it is closed. (#608)

0.20.0 (2021-07-21)

Bugfixes
- Accept auth parameter in run() method; it allows auto-pulling an absent image from private storages. (#295)
- Fix passing of JSON params. (#543)
- Fix issue with unclosed response object in attach/exec. (#604)

0.19.1 (2020-07-09)

Bugfixes
- Fix type annotations for exec.start(), docker.images.pull(), docker.images.push(). Respect default arguments again.

0.19.0 (2020-07-07)

Features
- Run mypy checks on the repo in the non-strict mode. (#466)
- Add container.rename() method. (#458)

Bugfixes
- Changed DockerNetwork.delete() to return True if successful. (#464)

0.18.9 (2020-07-07)

Bugfixes
- Fix closing of the task fetching Docker's event stream and make it re-openable after closing. (#448)
- Fix type annotations for pull() and push() methods. (#465)

Misc
- #442

0.18.8 (2020-05-04)

Bugfixes
- Don't send null for empty BODY.

0.18.7 (2020-05-04)

Bugfixes
- Fix some typing errors.

0.18.1 (2020-04-01)

Bugfixes
- Improve the error message when the connection is closed by Docker Engine on TCP hijacking. (#424)

0.18.0 (2020-03-25)

Features
- Improve the error text message if unable to connect to the docker engine. (#411)
- Rename websocket() to attach(). (#412)
- Implement docker exec protocol. (#415)
- Implement container commit, pause and unpause functionality. (#418)
- Implement auto-versioning of the docker API by default. (#419)

Bugfixes
- Fix volume.delete throwing a TypeError. (#389)

0.17.0 (2019-10-15)

Bugfixes
- Fixed an issue where the entire tar archive was stored in RAM while building the image. (#352)

0.16.0 (2019-09-23)

Bugfixes
- Fix streaming mode for pull, push, build, stats and events. (#344)

0.15.0 (2019-09-22)

Features
- Add support for Docker 17.12.1 and 18.03.1. (#164)
- Add initial support for nodes. (#181)
- Add initial support for networks. (#189)
- Add support for docker info and docker swarm join. (#193)
- Add restart method for containers. (#200)
- Feature: Add support for registry-auth when you create a service. (#215)
- Feature: Add support for docker save and load api methods. (#219)
- Pass params to docker events. (#223)
- Add ability to get a Docker network by name or ID. (#279)
- Always close response after processing, make .logs(…, follow=True) an async iterator. (#341)

Bugfixes
- Fix: Set timeout for docker events to 0 (no timeout). (#115)
- Fix: prevent multiple listener tasks from being created automatically. (#116)
- Fix: if container.start() fails the user won't get the id of the container. (#128)
- Improve logging when the docker socket is not available. (#155)
- Fix current project version. (#156)
- Fix "update out of sequence". (#169)
- Remove asserts used to check auth with docker registry. (#172)
- Fix: parse response of docker load method as a json stream. (#222)
- Fix: Handle responses with 0 or missing Content-Length. (#237)
- Fix: don't remove non-newline whitespace from multiplexed lines. (#246)
- Fix docker_context.tar error. (#253)

Deprecations and Removals
- docker.images.get has been renamed to docker.images.inspect; remove support for Docker 17.06. (#164)
- Drop Python 3.5. (#338)
- Drop deprecated container.copy(). (#339)

Misc
- #28, #167, #192, #286 |
aiodockerpy | No description available on PyPI. |
aiodog.py | No description available on PyPI. |
aiodogstatsd | aiodogstatsd

An asyncio-based client for sending metrics to StatsD with support of the DogStatsD extension.

The library is fully tested with statsd_exporter and supports gauge, counter, histogram, distribution and timing types.

The aiodogstatsd client uses port 9125 by default. That is the default port for statsd_exporter, and it is different from 8125, which is used by default in StatsD and DataDog. Initialize the client with the proper port you need if it's different from 9125.

Installation

Just type:

$ pip install aiodogstatsd

At a glance

Simply use the client as a context manager and send any metric you want:

import asyncio

import aiodogstatsd


async def main():
    async with aiodogstatsd.Client() as client:
        client.increment("users.online")


asyncio.run(main())

Please follow the documentation or look at the examples/ directory to find more examples of library usage, e.g. integration with the AIOHTTP or Starlette frameworks.

Contributing

To work on the aiodogstatsd codebase, you'll want to clone the project locally and install the required dependencies via poetry:

$ git clone git@github.com:Gr1N/aiodogstatsd.git
$ make install

To run tests and linters use the command below:

$ make lint && make test

If you want to run only tests or linters you can explicitly specify which test environment you want to run, e.g.:

$ make lint-black

License

aiodogstatsd is licensed under the MIT license. See the license file for details.
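As a further illustration of the port note and the supported metric types above, here is a hedged sketch; the host/port constructor arguments and the gauge/timing method names are assumptions based on the documented behaviour rather than confirmed signatures, so check the project documentation before relying on them.

import asyncio

import aiodogstatsd


async def main():
    # Assumed constructor arguments: point the client at an agent listening
    # on 8125 (StatsD/DataDog default) instead of the library default 9125
    async with aiodogstatsd.Client(host="127.0.0.1", port=8125) as client:
        client.increment("users.online")
        # Assumed method names for two of the other documented metric types
        client.gauge("queue.size", value=12)
        client.timing("request.duration", value=53)


asyncio.run(main())

The metric names here are only placeholders. |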
aio-doh | aio-doh

aio-doh is a tiny asynchronous client for Google's Public DNS-over-HTTPS service. It is built on top of asyncio and aiohttp.

Installation

pip install aio-doh

Example usage

>>> from doh import DOHClient
>>> from asyncio import get_event_loop
>>>
>>> loop = get_event_loop()
>>> client = DOHClient(loop)
>>> loop.run_until_complete(client.resolve('example.com'))
['93.184.216.34']
>>>

API

The API is simple and small.

DOHClient.query(hostname, type, dnssec)
hostname - name of a target host; type - DNS record type for a query; dnssec - enable DNSSEC validation. Returns a complete DNS response as a Python dictionary.

DOHClient.resolve(hostname, type, dnssec)
hostname - name of a target host; type - DNS record type for a query; dnssec - enable DNSSEC validation. Returns a list of IP addresses.

DOHClient.gethostbyname(hostname, type, dnssec)
hostname - name of a target host; type - DNS record type for a query; dnssec - enable DNSSEC validation. Returns the first IP address found if any, or raises an error.

Documentation

TODO

Contributing

File an issue or create a pull request.
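Building on the API description above, here is a short sketch of query and gethostbyname. It follows the parameter names listed in the API section, but the 'A' record type string and the exact defaults are assumptions, so consult the project for the accepted values.

from asyncio import get_event_loop

from doh import DOHClient

loop = get_event_loop()
client = DOHClient(loop)

# Full DNS response as a dictionary, with DNSSEC validation enabled
# ('A' as the record type value is assumed here)
response = loop.run_until_complete(client.query('example.com', 'A', dnssec=True))
print(response)

# First IP address found for the host, or an error if none
ip = loop.run_until_complete(client.gethostbyname('example.com', 'A', dnssec=False))
print(ip)

Both calls mirror the signatures described in the API section above. |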