package
package-description
alas-tools
A project for developing a client to access the Alas.Ce0 API. This is the README file for the project.
Generate dist and wheel: python setup.py sdist bdist_wheel
Upload to PyPI: twine upload dist/*
alas-tools3
A project for developing a client to access the Alas.Ce0 API. This is the README file for the project.
Generate dist and wheel: python setup.py sdist bdist_wheel
Upload to PyPI: twine upload dist/*
alastria-auth
Alastria ID authentication.
Description: library for authenticating with Alastria ID.
URL: https://pypi.org/project/alastria-auth/
Requires: Django
Install: alastria-auth==0.0.9
How to use: add to settings.py:

INSTALLED_APPS = [
    ...,
    "alastria_auth.apps.AlastriaAuthConfig",
]

# ALASTRIA AUTH VARS
BACKEND_DOMAIN = os.environ.get("BACKEND_DOMAIN", "http://localhost:8000")
NETWORK_SERVICE_HOST = os.environ.get("NETWORK_SERVICE_HOST", "http://host.docker.internal:8001")
ISSUER_PRIVATE_KEY = os.environ.get("ISSUER_PRIVATE_KEY", "")
ISSUER_PUBLIC_KEY = os.environ.get("ISSUER_PUBLIC_KEY", "")
ISSUER_ADDRESS = os.environ.get("ISSUER_ADDRESS", "")
ALASTRIA_T_NETWORK_ID = os.environ.get("ALASTRIA_T_NETWORK_ID", "redT")
ALASTRIA_AUTH_SECRET = os.environ.get("ALASTRIA_AUTH_SECRET", "")
ALASTRIA_SERVICE_HOST = os.environ.get("ALASTRIA_SERVICE_HOST", "http://host.docker.internal:5000")

Then add the URLs in url.py:

from alastria_auth.views import AlastriaAuthView
...
urlpatterns = [
    path("alastria/", include("alastria_auth.urls")),
    ...,
]
alastria-identity
alastria-identity-lib-py. Python version of the Alastria Identity lib.

Installing:
pip install alastria-identity
or you could use Poetry:
poetry add alastria-identity

Testing:
Execute tests:
docker-compose run --rm identity poetry run python -m coverage run -m pytest alastria_identity -v
Create and check test coverage:
docker-compose run --rm identity poetry run coverage html
python -m http.server 8000
Open http://localhost:8000 in your browser.

TODO:
- This README
- Add more code examples
- Create the PyPI package and push it to pypi.org
- Test the connection with the identity Alastria network node
- Delegate calls are still a WIP; we need to finish that
alastria-service-client
alastria-service-client is a Python HTTP client for communicating with the Wealize Alastria service.

Installation: use the package manager pip to install alastria-service-client:
pip install alastria-service-client

Usage:

from alastria_service_client.client import AClient, Client
from django.core.exceptions import PermissionDenied
from alastria_service_client.validators import (
    NetworkValidator,
    OnlyNetworkValidator,
    Address,
)

# returns 'identity_keys'
alastria_service_client: AClient = Client(service_host=settings.ALASTRIA_SERVICE_HOST)
return alastria_service_client.identity_keys(
    address=Address(address),
    body=OnlyNetworkValidator(network=network_body),
).response

Contributing: pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

License: MIT
alas-webapp
alas-webapp: a dependency of the AzurLaneAutoScript webapp.
alat
ALAT: Advanced Linear Algebra Toolkit. The ALAT project was developed to solve linear algebraic problems automatically. Linear algebra is a hot topic in engineering and science, so I decided to write this project. Of course, I may have made mistakes in some methods; please contact me at my e-mail address.

Resource: Elementary Linear Algebra, Sixth Edition by Ron Larson, David C. Falvo
Starting date: 04-07-2022

Features: I've separated this project into 4 main parts. First is the Matrices class, which contains the methods for matrix operations. Second is the Vectors class, which contains the methods for vector operations. Third is the Apps class, which contains common applications of linear algebra. Fourth is Crypts, which provides cryptography operations in 4 steps.

Installation: pip install alat

License: MIT License
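As a sketch of the kind of matrix operation such a toolkit automates (the ALAT API itself is not documented here, so the function below is a plain-Python illustration rather than the package's actual interface):

```python
def det(m):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # minor: drop row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))  # -> -2
```

Cofactor expansion is O(n!) and only suitable for small matrices; a production toolkit would use LU decomposition instead.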
alation
UNKNOWN
alation-api
UNKNOWN
alation-auth
No description available on PyPI.
alation-cli
Alation CLI, a command line experience for the Alation API. Pre-alpha planning stages, do not install.

Usage:
$ al [command] {parameters}

Getting started: for usage and help content, pass in the -h parameter, for example:
$ al [command] -h
alauda
This is the description file for the project.
https://www.alauda.io
https://www.alauda.cn
alaudaapi
Tests for the Alauda API.
https://bitbucket.org/mathildetech/alauda-api-automation/src/master/
alauda-celery
Version: 3.1.25 (Cipater)
Web: http://celeryproject.org/
Download: http://pypi.python.org/pypi/celery/
Source: http://github.com/celery/celery/
Keywords: task queue, job queue, asynchronous, async, rabbitmq, amqp, redis, python, webhooks, queue, distributed

What is a Task Queue? Task queues are used as a mechanism to distribute work across threads or machines. A task queue's input is a unit of work, called a task; dedicated worker processes then constantly monitor the queue for new work to perform. Celery communicates via messages, usually using a broker to mediate between clients and workers. To initiate a task a client puts a message on the queue, and the broker then delivers the message to a worker. A Celery system can consist of multiple workers and brokers, giving way to high availability and horizontal scaling. Celery is a library written in Python, but the protocol can be implemented in any language. So far there's RCelery for the Ruby programming language and a PHP client, but language interoperability can also be achieved by using webhooks.

What do I need? Celery version 3.0 runs on Python (2.5, 2.6, 2.7, 3.2, 3.3), PyPy (1.8, 1.9) and Jython (2.5, 2.7). This is the last version to support Python 2.5; from Celery 3.1, Python 2.6 or later is required. The last version to support Python 2.4 was the Celery 2.2 series. Celery is usually used with a message broker to send and receive messages.
The RabbitMQ and Redis transports are feature complete, but there's also experimental support for a myriad of other solutions, including using SQLite for local development. Celery can run on a single machine, on multiple machines, or even across datacenters.

Get Started: if this is the first time you're trying to use Celery, or you are new to Celery 3.0 coming from previous versions, then you should read our getting started tutorials: "First steps with Celery", a tutorial teaching you the bare minimum needed to get started with Celery, and "Next steps", a more complete overview showing more features.

Celery is…
Simple: Celery is easy to use and maintain, and does not need configuration files. It has an active, friendly community you can talk to for support, including a mailing list and an IRC channel. Here's one of the simplest applications you can make:

from celery import Celery

app = Celery('hello', broker='amqp://guest@localhost//')

@app.task
def hello():
    return 'hello world'

Highly Available: workers and clients will automatically retry in the event of connection loss or failure, and some brokers support HA by way of Master/Master or Master/Slave replication.
Fast: a single Celery process can process millions of tasks a minute, with sub-millisecond round-trip latency (using RabbitMQ, py-librabbitmq, and optimized settings).
Flexible: almost every part of Celery can be extended or used on its own: custom pool implementations, serializers, compression schemes, logging, schedulers, consumers, producers, autoscalers, broker transports and much more.

It supports…
Message Transports: RabbitMQ, Redis, MongoDB (experimental), Amazon SQS (experimental), CouchDB (experimental), SQLAlchemy (experimental), Django ORM (experimental), IronMQ and more…
Concurrency: prefork, Eventlet, gevent, threads/single threaded
Result Stores: AMQP, Redis, memcached, MongoDB, SQLAlchemy, Django ORM, Apache Cassandra, IronCache
Serialization: pickle, json, yaml, msgpack; zlib, bzip2 compression; cryptographic message signing.

Framework Integration: Celery is easy to
integrate with web frameworks, some of which even have integration packages: Django (not needed), Pyramid (pyramid_celery), Pylons (celery-pylons), Flask (not needed), web2py (web2py-celery), Tornado (tornado-celery). The integration packages are not strictly necessary, but they can make development easier, and sometimes they add important hooks like closing database connections at fork.

Documentation: the latest documentation with user guides, tutorials and API reference is hosted at Read the Docs.

Installation: you can install Celery either via the Python Package Index (PyPI) or from source.
To install using pip:
$ pip install -U Celery
To install using easy_install:
$ easy_install -U Celery

Bundles: Celery also defines a group of bundles that can be used to install Celery and the dependencies for a given feature. You can specify these in your requirements or on the pip command-line by using brackets. Multiple bundles can be specified by separating them by commas.
$ pip install "celery[librabbitmq]"
$ pip install "celery[librabbitmq,redis,auth,msgpack]"

The following bundles are available:
Serializers:
celery[auth]: for using the auth serializer.
celery[msgpack]: for using the msgpack serializer.
celery[yaml]: for using the yaml serializer.
Concurrency:
celery[eventlet]: for using the eventlet pool.
celery[gevent]: for using the gevent pool.
celery[threads]: for using the thread pool.
Transports and Backends:
celery[librabbitmq]: for using the librabbitmq C library.
celery[redis]: for using Redis as a message transport or as a result backend.
celery[mongodb]: for using MongoDB as a message transport (experimental), or as a result backend (supported).
celery[sqs]: for using Amazon SQS as a message transport (experimental).
celery[memcache]: for using memcached as a result backend.
celery[cassandra]: for using Apache Cassandra as a result backend.
celery[couchdb]: for using CouchDB as a message transport (experimental).
celery[couchbase]: for using CouchBase as a result backend.
celery[beanstalk]: for using Beanstalk as a message transport
(experimental).
celery[zookeeper]: for using Zookeeper as a message transport.
celery[zeromq]: for using ZeroMQ as a message transport (experimental).
celery[sqlalchemy]: for using SQLAlchemy as a message transport (experimental), or as a result backend (supported).
celery[pyro]: for using the Pyro4 message transport (experimental).
celery[slmq]: for using the SoftLayer Message Queue transport (experimental).

Downloading and installing from source: download the latest version of Celery from http://pypi.python.org/pypi/celery/
You can install it by doing the following:
$ tar xvfz celery-0.0.0.tar.gz
$ cd celery-0.0.0
$ python setup.py build
# python setup.py install
The last command must be executed as a privileged user if you are not currently using a virtualenv.

Using the development version: with pip, the Celery development version also requires the development versions of kombu, amqp and billiard. You can install the latest snapshot of these using the following pip commands:
$ pip install https://github.com/celery/celery/zipball/master#egg=celery
$ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
$ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
$ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
With git: please see the Contributing section.

Getting Help:
Mailing list: for discussions about the usage, development, and future of celery, please join the celery-users mailing list.
IRC: come chat with us on IRC. The #celery channel is located at the Freenode network.
Bug tracker: if you have any suggestions, bug reports or annoyances please report them to our issue tracker at http://github.com/celery/celery/issues/
Wiki: http://wiki.github.com/celery/celery/

Contributing: development of celery happens at Github: http://github.com/celery/celery
You are highly encouraged to participate in the development of celery.
If you don't like Github (for some reason) you're welcome to send regular patches. Be sure to also read the Contributing to Celery section in the documentation.

License: this software is licensed under the New BSD License. See the LICENSE file in the top distribution directory for the full license text.
alauda-django-oauth
Failed to fetch description. HTTP Status Code: 404
alauda-kombu
Version: 3.0.37
Kombu is a messaging library for Python. The aim of Kombu is to make messaging in Python as easy as possible by providing an idiomatic high-level interface for the AMQ protocol, and also to provide proven and tested solutions to common messaging problems. AMQP is the Advanced Message Queuing Protocol, an open standard protocol for message orientation, queuing, routing, reliability and security, for which the RabbitMQ messaging server is the most popular implementation.

Features:
- Allows application authors to support several message server solutions by using pluggable transports.
- AMQP transport using the py-amqp, librabbitmq, or qpid-python client libraries.
- High performance AMQP transport written in C (when using librabbitmq). This is automatically enabled if librabbitmq is installed:
  $ pip install librabbitmq
- Virtual transports make it really easy to add support for non-AMQP transports. There is already built-in support for Redis, Beanstalk, Amazon SQS, CouchDB, MongoDB, ZeroMQ, ZooKeeper, SoftLayer MQ and Pyro.
- You can also use the SQLAlchemy and Django ORM transports to use a database as the broker.
- In-memory transport for unit testing.
- Supports automatic encoding, serialization and compression of message payloads.
- Consistent exception handling across transports.
- The ability to ensure that an operation is performed by gracefully handling connection and channel errors.
- Several annoyances with amqplib have been fixed, like supporting timeouts and the ability to wait for events on more than one channel.
- Projects already using carrot can easily be ported by using a compatibility layer.

For an introduction to AMQP you should read the article "Rabbits and warrens" and the Wikipedia article about AMQP.

Transport Comparison:

Client     | Type    | Direct | Topic  | Fanout
amqp       | Native  | Yes    | Yes    | Yes
qpid       | Native  | Yes    | Yes    | Yes
redis      | Virtual | Yes    | Yes    | Yes
(PUB/SUB)
mongodb    | Virtual | Yes    | Yes    | Yes
beanstalk  | Virtual | Yes    | Yes[1] | No
SQS        | Virtual | Yes    | Yes[1] | Yes[2]
couchdb    | Virtual | Yes    | Yes[1] | No
zookeeper  | Virtual | Yes    | Yes[1] | No
in-memory  | Virtual | Yes    | Yes[1] | No
django     | Virtual | Yes    | Yes[1] | No
sqlalchemy | Virtual | Yes    | Yes[1] | No
SLMQ       | Virtual | Yes    | Yes[1] | No

[1] Declarations only kept in memory, so exchanges/queues must be declared by all clients that need them.
[2] Fanout supported via storing routing tables in SimpleDB. Disabled by default, but can be enabled by using the supports_fanout transport option.

Documentation: Kombu is using Sphinx, and the latest documentation can be found here: https://kombu.readthedocs.io/

Quick overview:

from kombu import Connection, Exchange, Queue

media_exchange = Exchange('media', 'direct', durable=True)
video_queue = Queue('video', exchange=media_exchange, routing_key='video')

def process_media(body, message):
    print(body)
    message.ack()

# connections
with Connection('amqp://guest:guest@localhost//') as conn:

    # produce
    producer = conn.Producer(serializer='json')
    producer.publish({'name': '/tmp/lolcat1.avi', 'size': 1301013},
                     exchange=media_exchange, routing_key='video',
                     declare=[video_queue])

    # the declare above makes sure the video queue is declared
    # so that the messages can be delivered.
    # It's a best practice in Kombu to have both publishers and
    # consumers declare the queue. You can also declare the
    # queue manually using:
    #     video_queue(conn).declare()

    # consume
    with conn.Consumer(video_queue, callbacks=[process_media]) as consumer:
        # Process messages and handle events on all channels
        while True:
            conn.drain_events()

# Consume from several queues on the same channel:
video_queue = Queue('video', exchange=media_exchange, key='video')
image_queue = Queue('image', exchange=media_exchange, key='image')

with connection.Consumer([video_queue, image_queue],
                         callbacks=[process_media]) as consumer:
    while True:
        connection.drain_events()

Or handle channels manually:

with connection.channel() as channel:
    producer = Producer(channel, ...)
    consumer = Consumer(channel)

All objects can be used outside of with statements too, just remember to close the objects after use:

from kombu import Connection, Consumer, Producer

connection = Connection()
# ...
connection.release()

consumer = Consumer(channel_or_connection, ...)
consumer.register_callback(my_callback)
consumer.consume()
# ...
consumer.cancel()

Exchange and Queue are simply declarations that can be pickled and used in configuration files etc. They also support operations, but to do so they need to be bound to a channel. Binding exchanges and queues to a connection will make it use that connection's default channel.

>>> exchange = Exchange('tasks', 'direct')
>>> connection = Connection()
>>> bound_exchange = exchange(connection)
>>> bound_exchange.delete()

# the original exchange is not affected, and stays unbound.
>>> exchange.delete()
raise NotBoundError: Can't call delete on Exchange not bound to a channel.

Installation: you can install Kombu either via the Python Package Index (PyPI) or from source.
To install using pip:
$ pip install kombu
To install using easy_install:
$ easy_install kombu
If you have downloaded a source tarball you can install it by doing the following:
$ python setup.py build
# python setup.py install  # as root

Terminology: there are some concepts you should be familiar with before starting:
Producers: producers send messages to an exchange.
Exchanges: messages are sent to exchanges. Exchanges are named and can be configured to use one of several routing algorithms. The exchange routes the messages to consumers by matching the routing key in the message with the routing key the consumer provides when binding to the exchange.
Consumers: a consumer declares a queue, binds it to an exchange and receives messages from it.
Queues: queues receive messages sent to exchanges. The queues are declared by consumers.
Routing keys: every message has a routing key. The interpretation of the routing key depends on the exchange type.
There are four default exchange types defined by the AMQP standard, and vendors can define custom types (so see your vendor's manual for details). These are the default exchange types defined by AMQP/0.8:
Direct exchange: matches if the routing key property of the message and the routing_key attribute of the consumer are identical.
Fan-out exchange: always matches, even if the binding does not have a routing key.
Topic exchange: matches the routing key property of the message by a primitive pattern matching scheme. The message routing key then consists of words separated by dots (".", like domain names), and two special characters are available: star ("*") and hash ("#"). The star matches any word, and the hash matches zero or more words. For example "*.stock.#" matches the routing keys "usd.stock" and "eur.stock.db" but not "stock.nasdaq".

Getting Help:
Mailing list: join the carrot-users mailing list.
Bug tracker: if you have any suggestions, bug reports or annoyances please report them to our issue tracker at http://github.com/celery/kombu/issues/

Contributing: development of Kombu happens at Github: http://github.com/celery/kombu
You are highly encouraged to participate in the development. If you don't like Github (for some reason) you're welcome to send regular patches.

License: this software is licensed under the New BSD License. See the LICENSE file in the top distribution directory for the full license text.
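The topic-exchange matching rules described above are easy to prototype. The helper below is not part of Kombu's public API; it is a minimal plain-Python sketch that compiles an AMQP-style binding pattern into a regular expression (star = exactly one dot-delimited word, hash = zero or more words):

```python
import re

def topic_to_regex(pattern: str) -> str:
    """Compile an AMQP topic binding pattern into an anchored regex."""
    parts = []
    for word in pattern.split('.'):
        if word == '*':
            parts.append('[^.]+')      # exactly one word
        elif word == '#':
            parts.append('#')          # placeholder, fixed up below
        else:
            parts.append(re.escape(word))
    joined = r'\.'.join(parts)
    # '#' swallows its neighbouring dot so it can also match zero words
    joined = joined.replace(r'\.#', r'(?:\..+)?')
    joined = joined.replace(r'#\.', r'(?:.+\.)?')
    joined = joined.replace('#', '.*')
    return '^' + joined + '$'

def topic_matches(pattern: str, routing_key: str) -> bool:
    return re.match(topic_to_regex(pattern), routing_key) is not None

# the examples from the text above:
print(topic_matches("*.stock.#", "usd.stock"))      # True
print(topic_matches("*.stock.#", "eur.stock.db"))   # True
print(topic_matches("*.stock.#", "stock.nasdaq"))   # False
```

A real broker precomputes bindings into a trie rather than regexes, but the matching semantics are the same.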
alaudaorg-django-oauth
OAuth2 goodies for the Djangonauts! If you are facing one or more of the following:
- Your Django app exposes a web API you want to protect with OAuth2 authentication,
- You need to implement an OAuth2 authorization server to provide token management for your infrastructure,
Django OAuth Toolkit can help you, providing out of the box all the endpoints, data and logic needed to add OAuth2 capabilities to your Django projects. Django OAuth Toolkit makes extensive use of the excellent OAuthLib, so that everything is rfc-compliant.

Support: if you need support please send a message to the Django OAuth Toolkit Google Group.

Contributing: we love contributions, so please feel free to fix bugs, improve things, provide documentation. Just follow the guidelines and submit a PR.

Requirements:
- Python 2.6, 2.7, 3.3, 3.4
- Django 1.4, 1.5, 1.6, 1.7, 1.8

Installation: install with pip:
pip install django-oauth-toolkit

Add oauth2_provider to your INSTALLED_APPS:

INSTALLED_APPS = (
    ...
    'oauth2_provider',
)

If you need an OAuth2 provider you'll want to add the following to your urls.py. Notice that the oauth2_provider namespace is mandatory.

urlpatterns = patterns(
    ...
    url(r'^o/', include('oauth2_provider.urls', namespace='oauth2_provider')),
)

Documentation: the full documentation is on Read the Docs.

License: django-oauth-toolkit is released under the terms of the BSD license. Full details in the LICENSE file.

Roadmap / Todo list (help wanted):
- OAuth1 support
- OpenID connector
- Nonrel storages support

Changelog:
master
- #273: Generic read write scope by resource
0.9.0 [2015-07-28]
- oauthlib_backend_class is now pluggable through Django settings
- #127: application/json Content-Type is now supported using JSONOAuthLibCore
- #238: Fixed redirect uri handling in case of error
- #229: Invalidate access tokens when getting a new refresh token
- Added support for oauthlib 1.0
0.8.2 [2015-06-25]
- Fix the migrations to be two-step and allow upgrade from 0.7.2
0.8.1 [2015-04-27]
- South migrations fixed.
- Added new Django migrations.
0.8.0 [2015-03-27]
- Several docs improvements and minor fixes
- #185: fixed vulnerabilities on Basic authentication
- #173: ProtectResourceMixin now allows OPTIONS requests
- Fixed client_id and client_secret character set
- #169: hide sensitive information in error emails
- #161: extend search to all token types when revoking a token
- #160: return empty response on successful token revocation
- #157: skip authorization form with skip_authorization_completely class field
- #155: allow custom uri schemes
- fixed get_application_model on Django 1.7
- fixed non-rotating refresh tokens
- #137: fixed base template
- customized client_secret length
- #38: create access tokens not bound to a user instance for client credentials flow
0.7.2 [2014-07-02]
- Don't pin oauthlib
0.7.1 [2014-04-27]
- Added database indexes to the OAuth2 related models to improve performance. Warning: schema migration does not work for sqlite3 databases; migration should be performed manually
0.7.0 [2014-03-01]
- Created a setting for the default value for approval prompt.
- Improved docs
- Don't pin django-braces and six versions
Backwards incompatible changes in 0.7.0:
- Make Application model truly "swappable" (introduces a new non-namespaced setting OAUTH2_PROVIDER_APPLICATION_MODEL)
0.6.1 [2014-02-05]
- added support for the scope query parameter, keeping backwards compatibility for the original scopes parameter.
- __str__ method in Application model returns content of the name field when available
0.6.0 [2014-01-26]
- oauthlib 0.6.1 support
- Django dev branch support
- Python 2.6 support
- Skip authorization form via approval_prompt parameter
Bugfixes:
- Several fixes to the docs
- Issue #71: Fix migrations
- Issue #65: Use OAuth2 password grant with multiple devices
- Issue #84: Add information about login template to tutorial.
- Issue #64: Fix urlencode clientid secret
0.5.0 [2013-09-17]
- oauthlib 0.6.0 support
Backwards incompatible changes in 0.5.0:
- backends.py module has been renamed to oauth2_backends.py, so you should change your imports if you're extending this module
Bugfixes:
- Issue #54: Auth backend proposal to address #50
- Issue #61: Fix contributing page
- Issue #55: Add support for authenticating confidential client with request body params
- Issue #53: Quote characters in the url query that are safe for Django but not for oauthlib
0.4.1 [2013-09-06]
- Optimize queries on access token validation
0.4.0 [2013-08-09]
New Features:
- Add Application management views; you no longer need the admin to register, update and delete your applications.
- Add support for a configurable application model
- Add support for function based views
Backwards incompatible changes in 0.4.0:
- SCOPE attribute in settings is now a dictionary to store {'scope_name': 'scope_description'}
- Namespace 'oauth2_provider' is mandatory in urls. See issue #36
Bugfixes:
- Issue #25: Bug in the Basic Auth parsing in Oauth2RequestValidator
- Issue #24: Avoid generation of client_id with ":" colon char when using HTTP Basic Auth
- Issue #21: IndexError when trying to authorize an application
- Issue #9: Default_redirect_uri is mandatory when grant_type is implicit, authorization_code or all-in-one
- Issue #22: Scopes need a verbose description
- Issue #33: Add django-oauth-toolkit version on example main page
- Issue #36: Add mandatory namespace to urls
- Issue #31: Add docstring to OAuthToolkitError and FatalClientError
- Issue #32: Add docstring to validate_uris
- Issue #34: Documentation tutorial part1 needs corsheaders explanation
- Issue #45: Add docs for AbstractApplication
- Issue #47: Add docs for views decorators
0.3.2 [2013-07-10]
- Bugfix #37: Error in migrations with custom user on Django 1.5
0.3.1 [2013-07-10]
- Bugfix #27: OAuthlib refresh token refactoring
0.3.0 [2013-06-14]
- Django REST Framework integration layer
- Bugfix #13: Populate request with client and user in validate_bearer_token
- Bugfix #12: Fix paths in documentation
Backwards incompatible changes in 0.3.0:
- requested_scopes parameter in ScopedResourceMixin changed to required_scopes
0.2.1 [2013-06-06]
- Core optimizations
0.2.0 [2013-06-05]
- Add support for
Django 1.4 and Django 1.6
- Add support for Python 3.3
- Add a default ReadWriteScoped view
- Add tutorial to docs
0.1.0 [2013-05-31]
- Support OAuth2 Authorization Flows
0.0.0 [2013-05-17]
- Discussion with Daniel Greenfeld at Django Circus. Ignition!
alauda-pytest
The pytest framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries.

An example of a simple test:

# content of test_sample.py
def inc(x):
    return x + 1

def test_answer():
    assert inc(3) == 5

To execute it:

$ pytest
============================= test session starts =============================
collected 1 items

test_sample.py F

================================== FAILURES ===================================
_________________________________ test_answer _________________________________

    def test_answer():
>       assert inc(3) == 5
E       assert 4 == 5
E        +  where 4 = inc(3)

test_sample.py:5: AssertionError
========================== 1 failed in 0.04 seconds ===========================

Due to pytest's detailed assertion introspection, only plain assert statements are used. See getting-started for more examples.

Features:
- Detailed info on failing assert statements (no need to remember self.assert* names);
- Auto-discovery of test modules and functions;
- Modular fixtures for managing small or parametrized long-lived test resources;
- Can run unittest (or trial), nose test suites out of the box;
- Python 3.5+ and PyPy3;
- Rich plugin architecture, with over 315+ external plugins and a thriving community;

Documentation: for full documentation, including installation, tutorials and PDF documents, please see https://docs.pytest.org/en/latest/.

Bugs/Requests: please use the GitHub issue tracker to submit bugs or request features.

Changelog: consult the Changelog page for fixes and enhancements of each version.

Support pytest: Open Collective is an online funding platform for open and transparent communities.
It provides tools to raise money and share your finances in full transparency. It is the platform of choice for individuals and companies that want to make one-time or monthly donations directly to the project. See more details in the pytest collective.

pytest for enterprise: available as part of the Tidelift Subscription. The maintainers of pytest and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more.

Security: pytest has never been associated with a security vulnerability, but in any case, to report a security vulnerability please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure.

License: Copyright Holger Krekel and others, 2004-2019. Distributed under the terms of the MIT license, pytest is free and open source software.
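The "modular fixtures" feature mentioned above can be sketched in a few lines. This is a minimal illustration (the fixture and test names are invented for the example, not part of pytest itself):

```python
import pytest

@pytest.fixture
def numbers():
    # a small resource handed to every test that asks for it by name
    return [1, 2, 3]

def test_sum(numbers):
    # pytest injects the fixture's return value as the argument
    assert sum(numbers) == 6

@pytest.mark.parametrize("x,expected", [(3, 4), (7, 8)])
def test_inc(x, expected):
    # one test function, run once per parameter set
    assert x + 1 == expected
```

Saved as a `test_*.py` file, `pytest` discovers and runs all three cases automatically.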
alauda-redis-py-cluster
# redis-py-cluster

This client provides a client for the Redis cluster support that was added in Redis 3.0.

This project is a port of `redis-rb-cluster` by antirez, with a lot of added functionality. The original source can be found at https://github.com/antirez/redis-rb-cluster

Gitter chat room: [![Gitter](https://badges.gitter.im/Grokzen/redis-py-cluster.svg)](https://gitter.im/Grokzen/redis-py-cluster?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)

[![Build Status](https://travis-ci.org/Grokzen/redis-py-cluster.svg?branch=master)](https://travis-ci.org/Grokzen/redis-py-cluster) [![Coverage Status](https://coveralls.io/repos/Grokzen/redis-py-cluster/badge.png)](https://coveralls.io/r/Grokzen/redis-py-cluster) [![PyPI version](https://badge.fury.io/py/redis-py-cluster.svg)](http://badge.fury.io/py/redis-py-cluster)

# Documentation

All documentation can be found at http://redis-py-cluster.readthedocs.org/en/master

This Readme contains a reduced version of the full documentation. Upgrading instructions between each released version can be found [here](docs/upgrading.rst). Changelog for the next release and all older releases can be found [here](docs/release-notes.rst).

## Installation

Latest stable release from pypi

```
$ pip install redis-py-cluster
```

## Usage example

Small sample script that shows how to get started with RedisCluster. It can also be found in [examples/basic.py](examples/basic.py)

```python
>>> from rediscluster import StrictRedisCluster
>>> # Requires at least one node for cluster discovery.
>>> # Multiple nodes are recommended.
>>> startup_nodes = [{"host": "127.0.0.1", "port": "7000"}]
>>> rc = StrictRedisCluster(startup_nodes=startup_nodes, decode_responses=True)
>>> rc.set("foo", "bar")
True
>>> print(rc.get("foo"))
'bar'
```

## License & Authors

Copyright (c) 2013-2017 Johan Andersson

MIT (See docs/License.txt file)

The license should be the same as redis-py (https://github.com/andymccurdy/redis-py)

Release Notes
=============

1.3.4 (Mar 5, 2017)
-------------------

* Package is now built as a wheel and source package when releases are built.
* Fixed issues with some key types in `NodeManager.keyslot()`.
* Add support for `PUBSUB` subcommands `CHANNELS`, `NUMSUB [arg] [args...]` and `NUMPAT`.
* Add method `set_result_callback(command, callback)` allowing the default reply callbacks to be changed, in the same way `set_response_callback(command, callback)` inherited from Redis-Py does for responses.
* Node manager now honors the defined max_connections variable, so connections that are emitted from that class use the same variable.
* Fixed a bug in cluster detection when running on python 3.x when decode_responses=False was used. Data back from redis for the cluster structure is now converted no matter what the data you want to set/get later is using.
* Add SSLClusterConnection for connecting over TLS/SSL to Redis Cluster
* Add new option to make the nodemanager follow the cluster when nodes move around, by avoiding querying the original list of startup nodes that was provided when the client object was first created. This could make the client handle drifting clusters on for example AWS easier, but there is a higher risk of the client talking to the wrong group of nodes during a split-brain event if the cluster is not consistent.
This feature is EXPERIMENTAL; use it with care.

1.3.3 (Dec 15, 2016)
--------------------

* Remove print statement that was faultily committed into release 1.3.2 and caused logs to fill up with unwanted data.

1.3.2 (Nov 27, 2016)
--------------------

* Fix a bug where from_url was not possible to use without passing in additional variables. Now it works the same as the method from redis-py. Note that the same rules that are currently in place for passing ip addresses/dns names into the startup_nodes variable apply the same way through the from_url method.
* Added options to skip full coverage check. This flag is useful when the CONFIG redis command is disabled by the server.
* Fixed a bug where the method *CLUSTER SLOTS* would break in newer redis versions where the node id is included in the response. The method is now compatible with both old and new redis versions.

1.3.1 (Oct 13, 2016)
--------------------

* Rebuilt the broken method scan_iter. Previous tests were too small to detect the problem, but it is now corrected to work on a bigger dataset during the test of that method. (korvus81, Grokzen, RedWhiteMiko)
* Errors in pipeline that should be retried, like connection errors, moved errors and ask errors, now fall back to single operation logic in StrictRedisCluster.execute_command. (72squared)
* Moved reinitialize_steps and counter into nodemanager so it can be correctly counted across pipeline operations. (72squared)

1.3.0 (Sep 11, 2016)
--------------------

* Removed RedisClusterMgt class and file
* Fixed a bug when using pipelines with the RedisCluster class (Ozahata)
* Bump redis-server during travis tests to 3.0.7
* Added docs about the same module name in another python redis cluster project.
* Fix a bug when a connection was to be tracked for a node but the node either did not yet exist or was removed because resharding was done in another thread. (ashishbaghudana)
* Fixed a bug with "CLUSTER ..."
commands when a node_id argument was needed and the return type was supposed to be converted to bool with bool_ok in redis._compat.
* Add back the gitter chat room link.
* Add new client commands:
  - cluster_reset_all_nodes
* The command cluster_delslots now determines which cluster shard each slot is on and sends each slot deletion command to the correct node. The command has a changed argument spec (read Upgrading.rst for details).
* Fixed a bug when hashing the key: if it was a python 3 byte string it would be routed to the wrong slot in the cluster. (fossilet, Grokzen)
* Fixed a bug where reinitializing the node manager would use the old nodes_cache instead of the new one that was just parsed. (monklof)

1.2.0 (Apr 09, 2016)
--------------------

* Drop maintained support for python 3.2.
* Remove the Vagrant file in favor of a repo maintained by 72squared.
* Add support for password protected clusters. (etng)
* Removed an assertion from the code. (gmolight)
* Fixed a bug where a regular connection pool was allocated with each StrictRedisCluster instance.
* Rework pfcount to now work as expected when all arguments point to the same hashslot.
* New code and important changes from redis-py 2.10.5 have been added to the codebase.
* Removed the need for threads inside of pipeline. We write the packed commands to all nodes before reading the responses, which gives us even better performance than threads, especially as we add more nodes to the cluster.
* Allow passing in a custom connection pool.
* Provide a default max_connections value for ClusterConnectionPool (2**31).
* Travis now tests both redis 3.0.x and 3.2.x.
* Add a simple ptpdb debug script to make it easier to test the client.
* Fix a bug in sdiffstore. (mt3925)
* Fix a bug with scan_iter where duplicate keys would be returned during iteration.
* Implement all "CLUSTER ..."
commands as methods in the client class.
* The client now follows the server-side setting 'cluster-require-full-coverage=yes/no'. (baranbartu)
* Change the pubsub implementation (PUBLISH/SUBSCRIBE commands) from using one single node to determining the hashslot for the channel name and using that to connect to a node in the cluster. Other clients that do not use this pattern will not be fully compatible with this client. A known limitation is pattern subscription, which does not work properly because a pattern can't know all the possible channel names in advance.
* Convert all docs to ReadTheDocs.
* Rework the connection pool logic to be more similar to redis-py. This also fixes an issue with pubsub where connections were never released back to the pool of available connections.

1.1.0 (Oct 27, 2015)
--------------------

* Refactored exception handling and exception classes.
* Added READONLY mode support, scaling reads using slave nodes.
* Fix __repr__ for ClusterConnectionPool and ClusterReadOnlyConnectionPool.
* Add a max_connections_per_node parameter to ClusterConnectionPool so that the max_connections parameter is calculated per-node rather than across the whole cluster.
* Improve thread safety of the get_connection_by_slot and get_connection_by_node methods. (iandyh)
* Improved error handling when sending commands to all nodes, e.g. info. Now the connection takes retry_on_timeout as an option and retries once when there is a timeout. (iandyh)
* Added support for the SCRIPT LOAD, SCRIPT FLUSH, SCRIPT EXISTS and EVALSHA commands. (alisaifee)
* Improve thread safety to avoid exceptions when running one client object inside multiple threads while resharding the cluster at the same time.
* Fix ASKING error handling so it now really sends ASKING to the next node during a reshard operation.
This improvement was also made to pipelined commands.
* Improved thread safety in pipelined commands, along with a better explanation of the logic inside pipelining with code comments.

1.0.0 (Jun 10, 2015)
--------------------

* No change to anything, just a bump to 1.0.0 because the lib is now considered stable/production ready.

0.3.0 (Jun 9, 2015)
-------------------

* The simple benchmark now uses docopt for cli parsing.
* New make target to run some benchmarks: 'make benchmark'.
* The simple benchmark now supports pipeline tests.
* Renamed RedisCluster --> StrictRedisCluster.
* Implement a backwards-compatible redis.Redis class in cluster mode. It was named RedisCluster, and everyone updating from 0.2.0 to 0.3.0 should consult docs/Upgrading.md for instructions on how to change their code.
* Added comprehensive documentation regarding pipelines.
* Meta retrieval commands (slots, nodes, info) for Redis Cluster. (iandyh)

0.2.0 (Dec 26, 2014)
--------------------

* Moved pipeline code into a new file.
* The code now uses a proper cluster connection pool class that handles all nodes and connections similar to how redis-py does.
* Better support for pubsub. All clients will now talk to the same server, because pubsub commands do not work reliably if they talk to a random server in the cluster.
* Better result callbacks and node routing support. No more ugly decorators.
* Fix the keyslot command when using non-ascii characters.
* Add bitpos support; redis-py 2.10.2 or higher required.
* Fixed a bug where vagrant users could not build the package via a shared folder.
* Better support for the CLUSTERDOWN error. (Neuront)
* Parallel pipeline execution using threads. (72squared)
* Added vagrant support for testing and development. (72squared)
* Improve stability of the client during resharding operations. (72squared)

0.1.0 (Sep 29, 2014)
--------------------

* Initial release
* First release uploaded to pypi
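Several notes above concern key hashing (`NodeManager.keyslot()` fixes in 1.3.4 and the python 3 byte-string routing fix in 1.3.0). Redis Cluster maps every key to one of 16384 slots via CRC16 (XModem variant) with hash-tag handling; a minimal self-contained sketch of that rule, not the library's actual implementation:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem): poly 0x1021, init 0, MSB-first."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
        crc &= 0xFFFF
    return crc

def keyslot(key: bytes) -> int:
    # Hash tags: if the key contains a non-empty {...} section,
    # only that section is hashed, so related keys share a slot.
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key) % 16384

print(keyslot(b"foo"))  # → 12182, matching CLUSTER KEYSLOT foo
```

Hashing the raw bytes is why the byte-string bug mattered: hashing a `str` repr of a `bytes` key yields a different slot than the server computes.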
alauda_test
This is the description file for the project.

https://www.alauda.io
https://www.alauda.cn
alauda-xdist
xdist: pytest distributed testing plugin

The pytest-xdist plugin extends pytest with some unique test execution modes:

* test run parallelization: if you have multiple CPUs or hosts you can use those for a combined test run. This allows you to speed up development or to use the special resources of remote machines.
* --looponfail: run your tests repeatedly in a subprocess. After each run pytest waits until a file in your project changes and then re-runs the previously failing tests. This is repeated until all tests pass, after which again a full run is performed.
* Multi-Platform coverage: you can specify different Python interpreters or different platforms and run tests in parallel on all of them.

Before running tests remotely, pytest efficiently "rsyncs" your program source code to the remote place. All test results are reported back and displayed to your local terminal. You may specify different Python versions and interpreters.

If you would like to know how pytest-xdist works under the covers, check out OVERVIEW.

Installation

Install the plugin with:

pytest -n NUM ... actually: pip install pytest-xdist

or use the package in develop/in-place mode with a checkout of the pytest-xdist repository:

pip install --editable .

Speed up test runs by sending tests to multiple CPUs

To send tests to multiple CPUs, type:

pytest -n NUM

Especially for longer running tests or tests requiring a lot of I/O this can lead to considerable speed ups. This option can also be set to auto for automatic detection of the number of CPUs.

If a test crashes the interpreter, pytest-xdist will automatically restart that worker and report the failure as usual.
You can use the --max-worker-restart option to limit the number of workers that can be restarted, or disable restarting altogether using --max-worker-restart=0.

By default, the -n option will send pending tests to any worker that is available, without any guaranteed order, but you can control this with these options:

* --dist=loadscope: tests will be grouped by module for test functions and by class for test methods, then each group will be sent to an available worker, guaranteeing that all tests in a group run in the same process. This can be useful if you have expensive module-level or class-level fixtures. Currently the groupings can't be customized, and grouping by class takes priority over grouping by module. This feature was added in version 1.19.
* --dist=loadfile: tests will be grouped by file name, and then will be sent to an available worker, guaranteeing that all tests in a group run in the same worker. This feature was added in version 1.21.

Running tests in a Python subprocess

To instantiate a python3.5 subprocess and send tests to it, you may type:

pytest -d --tx popen//python=python3.5

This will start a subprocess which is run with the python3.5 Python interpreter, found in your system binary lookup path.

If you prefix the --tx option value like this:

--tx 3*popen//python=python3.5

then three subprocesses will be created and tests will be load-balanced across these three processes.

Running tests in a boxed subprocess

This functionality has been moved to the pytest-forked plugin, but the --boxed option is still kept for backward compatibility.

Sending tests to remote SSH accounts

Suppose you have a package mypkg which contains some tests that you can successfully run locally. And you have an ssh-reachable machine myhost.
Then you can ad-hoc distribute your tests by typing:

pytest -d --tx ssh=myhostpopen --rsyncdir mypkg mypkg

This will synchronize your mypkg package directory to a remote ssh account and then locally collect tests and send them to remote places for execution.

You can specify multiple --rsyncdir directories to be sent to the remote side.

Note

For pytest to collect and send tests correctly you not only need to make sure all code and tests directories are rsynced, but that any test (sub) directory also has an __init__.py file, because internally pytest references tests as a fully qualified python module path. You will otherwise get strange errors during setup of the remote side.

You can specify multiple --rsyncignore glob patterns to be ignored when files are sent to the remote side. There are also internal ignores: .*, *.pyc, *.pyo, *~. Those you cannot override using the rsyncignore command-line or ini-file option(s).

Sending tests to remote Socket Servers

Download the single-module socketserver.py Python program and run it like this:

python socketserver.py

It will tell you that it starts listening on the default port. You can now on your home machine specify this new socket host with something like this:

pytest -d --tx socket=192.168.1.102:8888 --rsyncdir mypkg mypkg

Running tests on many platforms at once

The basic command to run tests on multiple platforms is:

pytest --dist=each --tx=spec1 --tx=spec2

If you specify a windows host, an OSX host and a Linux environment this command will send each test to all platforms - and report back failures from all platforms at once.
The specification strings use the xspec syntax.

Identifying the worker process during a test

New in version 1.15.

If you need to determine the identity of a worker process in a test or fixture, you may use the worker_id fixture to do so:

@pytest.fixture()
def user_account(worker_id):
    """ use a different account in each xdist worker """
    return "account_%s" % worker_id

When xdist is disabled (running with -n0 for example), then worker_id will return "master".

Additionally, worker processes have the following environment variables defined:

* PYTEST_XDIST_WORKER: the name of the worker, e.g., "gw2".
* PYTEST_XDIST_WORKER_COUNT: the total number of workers in this session, e.g., "4" when -n 4 is given on the command-line.

The information about the worker_id in a test is stored in the TestReport as well, under the worker_id attribute.

Accessing sys.argv from the master node in workers

To access the sys.argv passed to the command-line of the master node, use request.config.workerinput["mainargv"].

Specifying test exec environments in an ini file

You can use pytest's ini file configuration to avoid typing common options. You can for example make running with three subprocesses your default like this:

[pytest]
addopts = -n3

You can also add default environments like this:

[pytest]
addopts = --tx ssh=myhost//python=python3.5 --tx ssh=myhost//python=python3.6

and then just type:

pytest --dist=each

to run tests in each of the environments.

Specifying "rsync" dirs in an ini-file

In a tox.ini or setup.cfg file in your root project directory you may specify directories to include or to exclude in synchronisation:

[pytest]
rsyncdirs = . mypkg helperpkg
rsyncignore = .hg

These directory specifications are relative to the directory where the configuration file was found.
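The environment variables above are handy for giving each worker its own resources; a minimal sketch (the database-name scheme here is hypothetical, not part of pytest-xdist):

```python
import os

def worker_suffix():
    # PYTEST_XDIST_WORKER is only set inside xdist worker processes;
    # fall back to "master" when running without -n, mirroring the fixture
    return os.environ.get("PYTEST_XDIST_WORKER", "master")

# e.g. give each worker its own scratch database name (hypothetical scheme)
db_name = "test_db_%s" % worker_suffix()
print(db_name)
```

In a worker process this would yield names like `test_db_gw2`, so parallel workers never collide on shared state.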
alay
No description available on PyPI.
alaya.py
No description available on PyPI.
alazia
No description available on PyPI.
alb
Failed to fetch description. HTTP Status Code: 404
alba
ALBA synchrotron python meta package

This is the ALBA synchrotron python meta package.

    Namespaces are one honking great idea -- let's do more of those!

    source: The Zen of Python, by Tim Peters

How to create an alba sub-module

Let's say a new high pressure lab has been installed at ALBA which requires specific software. The goal is that the user can type:

import alba.hplab

... to have access to ALBA's specific high pressure lab software.

Preparation

In the future there might be a cookie cutter for this. For now we have to bootstrap the project by hand:

1. create a directory called hplab and enter it.
2. create a directory called alba
3. create an alba/__init__.py file with a single line:

__path__ = __import__('pkgutil').extend_path(__path__, __name__)

It is crucial that the alba/__init__.py has these precise contents and no more.

4. create a setup.py as usual. Here is a minimal version:

# setup.py
from setuptools import setup, find_packages

setup(
    name="alba-hplab",
    author="ALBA controls team",
    author_email="[email protected]",
    packages=find_packages(),
    description="ALBA controls HP Lab software",
    version="0.0.1",
)

5. create an alba/hplab directory. This is where you should put the specific code for ALBA's HP lab.

By now you should have a structure like this:

hplab/
├── alba
│   ├── hplab
│   │   └── __init__.py
│   └── __init__.py
└── setup.py

That's it! If you publish your package on pypi you will be able to install your software anywhere with:

pip install alba-hplab
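The pkgutil line above is what lets several separately-installed distributions share the alba namespace; a runnable sketch that simulates two such distributions on sys.path (the directory and sub-package names here are made up):

```python
import os
import sys
import tempfile

# Simulate two separately-installed distributions that both ship an "alba" package
root = tempfile.mkdtemp()
for dist, sub in [("dist_a", "hplab"), ("dist_b", "otherlab")]:
    pkg = os.path.join(root, dist, "alba")
    os.makedirs(os.path.join(pkg, sub))
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        # the exact one-liner the README mandates
        f.write("__path__ = __import__('pkgutil').extend_path(__path__, __name__)\n")
    open(os.path.join(pkg, sub, "__init__.py"), "w").close()
    sys.path.insert(0, os.path.join(root, dist))

import alba.hplab      # found under dist_a
import alba.otherlab   # found under dist_b -- same namespace, different directory
```

extend_path scans every sys.path entry for an alba/ directory and appends it to the package's __path__, which is why both sub-modules import even though they live in different installation roots.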
alba-client-python
A library for working with Alba
=============

The library contains two base classes, AlbaService and AlbaCallback, intended for inheritance.

AlbaService is a service in Alba. It lets you fetch the list of available payment methods, initiate a transaction, and retrieve information about it. Create one instance per existing service.

AlbaCallback is a handler for callbacks from Alba. It verifies the signature and calls the method matching the "command" parameter.

An AlbaException may be raised during operation.

Example of initiating a transaction:

from alba_client import AlbaService, AlbaException
service = AlbaService(<service-id>, '<service-secret>')
try:
    response = service.init_payment('mc', 10, 'Test', '[email protected]', '71111111111')
except AlbaException as e:
    print(e)

Checking whether 3-D Secure is required:

card3ds = response.get('3ds')
if card3ds:
    # 3-D Secure is required

If 3-D Secure is required, make a POST request to card3ds['ACSUrl'] with the parameters:

PaReq - the value of card3ds['PaReq']
MD - the value of card3ds['MD']
TermUrl - the URL of a handler on your site. The user will be returned to it after completing 3DS authorization on the site of the card-issuing bank. Build this URL so that it carries information about the transaction: it is recommended to pass service_id, tid and order_id (if the transaction was created with one).

Example of handling a callback:

from alba_client import AlbaCallback

class MyAlbaCallback(AlbaCallback):
    def callback_success(self, data):
        # record the successful transaction

service1 = AlbaService(<service1-id>, '<service1-secret>')
service2 = AlbaService(<service2-id>, '<service2-secret>')
callback = MyAlbaCallback([service1, service2])
callback.handle(<dict-with-POST-data>)
albaem
Library and Tango DS for the Alba electrometer, first version
albalacalculator
No description available on PyPI.
alba-mistral
This project has been developed at ALBA Synchrotron Light Source, and it is mainly used for image processing purposes at BL09-Mistral Tomography Beamline.
albaner
No description available on PyPI.
albanerhello
No description available on PyPI.
albania
No description available on PyPI.
albanian-lang
Make sure you have an upgraded version of pip.

Windows:
py -m pip install --upgrade pip

Linux/macOS:
python3 -m pip install --upgrade pip

Create a project with the following structure:

packaging_tutorial/
├── LICENSE
├── pyproject.toml
├── README.md
├── setup.cfg
├── src/
│   └── example_package/
│       ├── __init__.py
│       └── example.py
└── tests/

touch LICENSE
touch pyproject.toml
touch setup.cfg
mkdir src/mypackage
touch src/mypackage/__init__.py
touch src/mypackage/main.py
mkdir tests

pyproject.toml

This file tells tools like pip and build how to create your project:

[build-system]
requires = [
    "setuptools>=42",
    "wheel"
]
build-backend = "setuptools.build_meta"

build-system.requires gives a list of packages that are needed to build your package. Listing something here will only make it available during the build, not after it is installed.

build-system.build-backend is the name of the Python object that will be used to perform the build. If you were to use a different build system, such as flit or poetry, those would go here, and the configuration details would be completely different from the setuptools configuration described below.

setup.cfg setup

Using setup.cfg is a best practice, but you could have a dynamic setup file using setup.py:

[metadata]
name = example-pkg-YOUR-USERNAME-HERE
version = 0.0.1
author = Example Author
author_email = [email protected]
description = A small example package
long_description = file: README.md
long_description_content_type = text/markdown
url = https://github.com/pypa/sampleproject
project_urls =
    Bug Tracker = https://github.com/pypa/sampleproject/issues
classifiers =
    Programming Language :: Python :: 3
    License :: OSI Approved :: MIT License
    Operating System :: OS Independent

[options]
package_dir =
    = src
packages = find:
python_requires = >=3.6

[options.packages.find]
where = src

Running the build

Make sure your build tool is up to date.

Windows:
py -m pip install --upgrade build

Linux/macOS:
python3 -m pip install --upgrade build

Create the build:

py -m 
build

References

https://packaging.python.org/tutorials/packaging-projects/
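The [metadata] table above is plain INI, so it can be sanity-checked with the standard library before building; a small sketch using a trimmed copy of the example:

```python
import configparser

# Parse a setup.cfg-style metadata table (trimmed copy of the tutorial's example)
cfg = configparser.ConfigParser()
cfg.read_string("""
[metadata]
name = example-pkg-YOUR-USERNAME-HERE
version = 0.0.1

[options]
python_requires = >=3.6
""")

print(cfg["metadata"]["name"])           # the project name pip will see
print(cfg["options"]["python_requires"])
```

A quick parse like this catches malformed sections or stray indentation before `py -m build` fails with a less friendly error.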
alba-probablity
Failed to fetch description. HTTP Status Code: 404
albaraa
No description available on PyPI.
albatradis
AlbaTraDIS is a software application for performing rapid large-scale comparative analysis of TraDIS experiments (transposon mutagenesis) whilst also predicting the impact of inserts on nearby genes. It allows experiments with multiple conditions to be easily analysed using statistical methods developed in the Bio-TraDIS toolkit.
albatross
Albatross is a small and flexible Python toolkit for developing highly stateful web applications. The toolkit has been designed to take a lot of the pain out of constructing intranet applications although you can also use Albatross for deploying publicly accessed web applications.
albatross3
Albatross

A modern, fast, simple, natively-async web framework. (Python 3.5 only)

from albatross import Server
import asyncio


class Handler:
    async def on_get(self, req, res):
        await asyncio.sleep(0.1)
        res.write('Hello, %s' % req.args['name'])


app = Server()
app.add_route('/{name}', Handler())
app.serve()

Notes for Usage

For now (pre 1.0.0), I'm making no claims about API stability (but will try to avoid changes). That said, I'm using this framework for some small projects, and it is a joy to work in! Reach out if you want to use this, as I'm happy to incorporate your feedback!

Install

pip3 install albatross3

Features

* You can read the entire codebase in about 30 minutes.
* It's natively async. Doing await database calls or controller calls in your views just works!
* This works with the uvloop project, to make your server fast!

Benchmarks

My benchmarks indicate that albatross is as fast as aiohttp, both of which are twice as fast as tornado. You can run the benchmarks by poking around in the bench/ folder.
albatross_extras
Handler

There are handlers for:

* static files
* static directories
* server health & profiling
* jinja2 templating

Middleware

There is middleware for:

* authentication
* logging
* statsd
* cors cross-browser authorization

Example

from albatross import Server
from albatross_extras.handler import HealthHandler
from albatross_extras.middleware import (
    StatsdMiddleware,
    LoggingMiddleware,
)
from albatross_extras.lib import logging
import asyncio


class Handler:
    async def on_get(self, req, res):
        await asyncio.sleep(0.1)
        res.write('Hello, %s' % req.args['name'])


app = Server()
logger = logging.get_logger('my-app.web')
app.add_middleware(LoggingMiddleware(logger))
app.add_middleware(StatsdMiddleware())
app.add_route('/health', HealthHandler())
app.serve()
# You'll now emit stats to statsd and log in JSON format to stdout
albatros-uav
Albatros UAV

A python library that provides high-level functions for UAVs based on MAVLink. It allows you to easily handle communication with the flight controller to create friendly mission management systems. Albatros supports direct communication with UAVs as well as via Redis (WIP).

Supported functionalities

Plane:
* arming vehicle,
* setting flight mode,
* setting servos positions,
* flying in GUIDED mode,
* uploading mission and flying in AUTO mode,
* param protocol.

Copter:
* arming vehicle,
* setting flight mode,
* setting servos positions,
* flying in GUIDED mode,
* param protocol,
* coming soon: uploading mission and flying in AUTO mode.

Access to UAV telemetry via UAV.data

Supported MAVLink telemetry messages: Attitude, GlobalPositionInt, GPSRawInt, GPSStatus, Heartbeat, CommandACK, MissionACK, MissionRequestInt, RadioStatus, RcChannelsRaw, ServoOutputRaw, SysStatus, MissionItemReached, ParamValue, PositionTargetLocalNED, HomePosition, LocalPositionNED, NavControllerOutput

Examples

Creating connection

from albatros import Plane, ConnectionType
from albatros.telem import ComponentAddress

# SITL connection is default
plane = Plane()

# Direct connection to the flight controller
plane = Plane(device="/dev/tty/USB0/", baud_rate=57600)

# You can also specify the ID of the vehicle you want to connect to and the ID of your system.
# Read more about MAVLink routing in ArduPilot: https://ardupilot.org/dev/docs/mavlink-routing-in-ardupilot.html
plane_addr = ComponentAddress(system_id=1, component_id=1)
gcs_addr = ComponentAddress(system_id=255, component_id=1)

plane = Plane(uav_addr=plane_addr, my_addr=gcs_addr)

Arming vehicle (SITL simulation)

$ python -m examples.arming_vehicle

from albatros import UAV

vehicle = UAV()

while not vehicle.arm():
    print("Waiting for vehicle to arm")
print("Vehicle armed")

if not vehicle.disarm():
    print("Disarm vehicle failed")
print("Vehicle disarmed")
al-bday-enigma
No description available on PyPI.
albero
UNKNOWN
alber-package-calc
Tutorial on how to upload a package to PYPI

An example of how to create a package in Python and upload it to PYPI, to later install it in another project.

The tutorial will be available soon.
albert
Albert Heijn (unofficial) REST interface

New features are being added.

Installation

pip install albert

Usage

Initialize API

import albert
ah = albert.Api("username", "password")

Make an account on ah.nl.

Add product to cart

ah.add("wi386562", amount=30)

You can find products on www.albertheijn.nl. I.e., https://www.ah.nl/producten2/product/wi383655/ah-conference-peren

By default, one item is added to the cart if you don't pass the second parameter.

Items in cart

ah.cart()

Empty cart

ah.empty()

Save cart to list

ah.shopping_list_add("Name of List", empty_card=False)

By default, after calling the shopping_list_add method, the items in your cart will remain. When passing the second argument as True, the items will be removed from your cart after saving the cart items to the list.

Product Information

ah_product = albert.Product('wi60539/ah-scharrelei-advocaat')
print(ah_product.id)

Pass the query from a product (without the producten/product/ prefix) as a string argument to retrieve information about the product. Several items can be accessed:

id, name, brand, description, summary, unit_size, category, is_available, priceLabel, price_current, price_previous, is_discounted
albertai
No description available on PyPI.
albertk
# Cluster search keywords
from albertk import *
model, tokenizer = load_albert("data/albert_tiny")
keyword = input("Enter keyword: ")
# keyword = "边境牧羊犬智商"
klist = run_search_sent(keyword, 20, tokenizer, model)
print(klist)
alberto-prova
Failed to fetch description. HTTP Status Code: 404
albert_pytorch
This is a package: albert_pytorch

# fix: corrected a wrongly committed version

Changelog:
0.0.2.1.3 fixed an error when running on CPU
0.0.1.9 training now automatically masks 15% of the data at random; applies to all data
0.0.1.7 do not use versions earlier than this one
0.0.2.1 added judgment operations for handling two sentences

Classification example:

from albert_pytorch import classify
tclass = classify(model_name_or_path='outputs/terry_r_rank/', num_labels=1, device='cuda')
p = tclass.pre(text)
albert-tensorflow
ALBERT for TensorFlow

This is a fork of the original ALBERT repository that adds package configuration so that it can be easily installed and used. The purpose is to remove the need of cloning the repository and modifying it locally, which can be quite dirty for common tasks (e.g. training a new classifier). A lot of code could be shared (e.g. modeling.py, optimization.py) and this fork is exactly for that.

All the credit goes to google-research/albert.

Please refer to the official repository for details on how to use the modules contained in this package. Useful information can also be found in the ALBERT paper.

Release Notes

* November 5, 2019: Initial version - v1.0.
* November 15, 2019: version 1.1 - v1.1.
albert-toolkit
Albert Invent Data Science Library

The Albert Invent Data Science Library is a set of wrappers and helper functions which can be used to build data science applications on top of the albert platform.

Non-Python Dependencies

Docker

If you want to use the docker image for doing development you will need to install the docker runtime (or docker desktop for windows/macos). If you are going to be running the docker image on an ubuntu system with a GPU you will need to additionally install the nvidia docker runtime -- instructions for which can be found at https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

Getting Started

After installing the albert-toolkit package, you can set up your environment configuration by running the command albert init and following the prompts. Any credentials that you do not have (e.g. if you do not have data warehouse credentials) you can simply leave blank.

If you will be using clearML integrations then be sure to have your clearml pip install command ready, as you will be prompted for it as part of the init process. If you have already set up clearml in your virtual environment prior to installing albert-toolkit then you can simply answer no to the prompt and it will leave your current configuration untouched.

Running Unit Tests

NOTE: All the following commands should be run from within a virtual environment, and they will all require that your albert API token is set up correctly.

albert-toolkit utilizes pytest as its testing framework -- to run the full suite of unit tests use the helper script and the ta command:

$ ./al.sh ta

Alternatively, if you want to run the tests in a fresh venv (i.e. 
test the build/install/unit test workflow), run the tb command -- this is useful for checking that you have correctly supplied all dependencies in the package setup.cfg.

$ ./al.sh tb

Note that running the above command will look for lib-jwt-python at the relative path ../lib-jwt-python; if it is not located there it will attempt to clone the repo using ssh, and will fail if you don't have your github ssh keys set up correctly.

Package Installation

Until we get a pypi package setup you will need to clone this repository and perform a pip install locally.

Choosing an Install Environment

Depending on the application you may not need or want the full development stack associated with this library. You can therefore install different dependencies using the square bracket syntax, e.g. albert-toolkit[viz] if you only want to install a certain set of dependencies. This is particularly useful if you are building out an application or microservice which only depends upon a subset of the full albert-toolkit library, and you do not want to install unused dependencies.

If you specify no environment tag then you will just get the base code for albert-toolkit installed into your environment, and most of it will not function without the required dependencies -- so be sure to choose one of the following stacks when installing the library.

Note the use of quotes in the package install commands below, e.g. "albert-toolkit[dev]" -- the quotes are required and you will get an error if you forget to include them.

Full Development Stack

The full development stack can be installed using the [dev] or [all] tags. The development environment contains support for jupyterlab and all the associated ipython dependencies. It should not be installed in a deployed application.

$ pip install "albert-toolkit[dev]"  # This should be run from within your python virtual environment

Visualization Stack

The visualization stack includes only those dependencies which are needed to utilize the visualization components of the albert-toolkit library. 
It can be installed with the [viz] tag:

$ pip install "albert-toolkit[viz]"  # This should be run from within your python virtual environment

Metrics Stack

The metrics stack includes only those dependencies which are necessary to compute metrics (for example: if you have a microservice architecture and you are running lots of predictions and storing those in a database, you may want to have a microservice which is dedicated to computing performance/accuracy/etc... metrics, and hence that service should not require any of the other stacks).

$ pip install "albert-toolkit[metrics]"  # This should be run from within your python virtual environment

NLP Stack

The NLP stack includes only those dependencies which are necessary to compute NLP transforms or embeddings, including helper functions for transforming or analyzing text data.

$ pip install "albert-toolkit[nlp]"  # This should be run from within your python virtual environment

Models Stack

The models stack includes all the dependencies necessary to build or run models. Installing the models stack will also install the metrics stack.

$ pip install "albert-toolkit[models]"  # This should be run from within your python virtual environment

Chemistry Stack

The chemistry stack includes all the dependencies needed to run the chemistry modules -- the proper way to install the chemistry stack is through the use of the albert CLI tool. Simply run the following to install the chemistry dependencies after installing the main albert toolkit:

$ albert init-chem

This will require cmake and CUDA to be installed.
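Extras tags like [viz] or [metrics] are typically declared under options.extras_require in setup.cfg; a hypothetical sketch of how such a layout might look (the dependency names here are illustrative, not albert-toolkit's actual ones):

```ini
[options.extras_require]
viz =
    matplotlib
metrics =
    scikit-learn
models =
    scikit-learn
    xgboost
```

With a table like this, `pip install "package[viz]"` pulls in only the matplotlib line, which is exactly the selective-install behavior described above.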
albert-xin-py-pk
albert_xin_py_pk

This is a Python package collecting various odd features.
albhed
Al Bhed

Simple CLI and Library that translates text into Al Bhed.

Python:

>>> from albhed import AlBhed
>>>
>>> albhed = AlBhed("Hello, World!")
>>> albhed.translate()
'Rammu, Funmt!'
>>>
>>> albhed = AlBhed("Rammu, Funmt!")
>>> albhed.revert()
'Hello, World!'

Shell:

$ albhed Hello, World!
Rammu, Funmt!

$ albhed -r Rammu, Funmt!
Hello, World!
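Under the hood, Al Bhed is a simple letter-substitution cipher; a self-contained sketch using the well-known Al Bhed alphabet (this is not the package's actual code):

```python
# Al Bhed substitution: each English letter maps to a fixed counterpart
plain = "abcdefghijklmnopqrstuvwxyz"
cipher = "ypltavkrezgmshubxncdijfqow"
table = str.maketrans(plain + plain.upper(), cipher + cipher.upper())

print("Hello, World!".translate(table))  # → Rammu, Funmt!
```

Because the mapping is a bijection, reverting (as `albhed -r` does) is just translating with the table built in the opposite direction.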
albion-api-client
No description available on PyPI.
albion-data
A simple wrapper for the Albion Data Project API

Install

Install using pip:

python3 -m pip install albion-data

Features

check price of items

# price of t4 and t5 bag at Lymhurst, quality 1
from albion_data import get_price
get_price(["T4_BAG", "T5_BAG"], "Lymhurst", 1)

check history of item

# get daily history of t4 bag at Lymhurst, quality 1
from albion_data import get_history
get_history("T4_BAG", "Lymhurst", 1, time_scale=24)

System for making arithmetic expressions

The values are lazily loaded.

# check if t4 leather refining is profitable in Fort Sterling without focus
from albion_data import Var

t4leather = Var("T4_LEATHER", "Fort Sterling", "sell_price_min")
t4hide = Var("T4_HIDE", "Fort Sterling", "sell_price_min")
t3leather = Var("T3_LEATHER", "Fort Sterling", "sell_price_min")

if (2 * t4hide + t3leather) < t4leather:  # triggers a single API call
    print("refine t4hide")
else:
    print("not worth it")

Use &, |, ~ for logical and, or, not when making arithmetic expressions with Var. Why not and, or, not? Python does not allow overloading those keywords.

NOTES

* The item ids and market names can be found here.
* The PyPI name of the package and all APIs might change until v1.
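The deferred evaluation behind Var can be pictured as an expression tree whose leaves fetch values only when a comparison forces them; a simplified, self-contained sketch (the real Var pulls prices from the Albion Data API, the stand-in prices below are made up):

```python
class LazyVar:
    """A value that is only computed when a comparison forces it."""
    def __init__(self, fetch):
        self.fetch = fetch  # zero-argument callable producing the value

    def __add__(self, other):
        return LazyVar(lambda: self.fetch() + other.fetch())

    def __rmul__(self, n):
        return LazyVar(lambda: n * self.fetch())

    def __lt__(self, other):
        # the comparison is what finally evaluates the whole tree
        return self.fetch() < other.fetch()


t4hide = LazyVar(lambda: 100)      # stand-in prices; the real class queries the API
t3leather = LazyVar(lambda: 40)
t4leather = LazyVar(lambda: 350)

print((2 * t4hide + t3leather) < t4leather)  # → True (240 < 350)
```

Building the tree with overloaded operators is also why the package uses &, |, ~ for boolean logic: those operators can be overloaded, while the and/or/not keywords cannot.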
albion-similog
Albion Similog

This project is dedicated to the generation of possible edges within the Albion framework, by way of the FAMSA algorithm.

albion_similog works with Python 3.8.

Installation

Dependencies

This package works with a few dependencies from the biology ecosystem, like biopython or scikit-bio. They are easily installable through pip.

There is a notable exception with FAMSA, which should be patched for the specific Albion purpose, and compiled. Everything is managed through the Makefile.

Debian/Ubuntu

$ virtualenv .venv
$ source .venv/bin/activate
(.venv) $ make install

Windows

TODO
albopictus
Environmentally-driven population dynamics model of Aedes albopictus

This is a Python (v3.7) package implementing the environmentally-driven population dynamics model of Aedes albopictus.

Contents

1) Prerequisites
2) Linux installation
3) Usage

1) Prerequisites

The model requires the Python packages numpy and pkg_resources, and the GNU Scientific Library (development) for C, which are not included in this package.

2) Linux installation

Easy way:

If you have pip installed, you can use the following command to download and install the package.

pip install albopictus

Alternatively, you can download the source code from PyPI and run pip on the latest version xxx.

pip install albopictus-xxx.tar.gz

Hard way:

If pip is not available, you can unpack the package contents and perform a manual install.

tar -xvzf albopictus-xxx.tar.gz
cd albopictus-xxx
python setup.py install

This will install the package in the site-packages directory of your python distribution. If you do not have root privileges or you wish to install to a different directory, you can use the --prefix argument.

python setup.py install --prefix=<dir>

In this case, please make sure that <dir> is in your PYTHONPATH, or you can add it with the following command.

In bash shell:

export PYTHONPATH=<dir>:$PYTHONPATH

In c shell:

setenv PYTHONPATH <dir>:$PYTHONPATH

Usage

Information for usage and contents is documented in the package, and can be accessed with the help utility.

import albopictus
help(albopictus)

Credits

'modern-package-template' - http://pypi.python.org/pypi/modern-package-template

News

1.15.0 (Release date: 30 November 2023)
- CDZ: Chikungunya - Dengue - Zika model
- Memory fix in modelStochCHIKV
- modelAlbopictus08c: minor bug fix (pupa -> juveniles)

1.13.0 (Release date: 13 September 2023)
- Initial draft of sPop v.2 with accumulative development (sPop2: Python)
- Minor fix: numpy.int -> numpy.int32
- Minor fix: DPOP_EPS double symbol

1.12.6 (Release date: UNRELEASED)
- Included "extract" in the population dynamics model (sPop: Python)
- Included "spop_print_to_csv" in the population dynamics model (sPop: C)

1.12.5 (Release date: UNRELEASED)
- Bug fixes
- Included "flush" in the population dynamics model (sPop)
- Optimized probability function (hash) in sPop (NOT WORKING)

1.12.4 (Release date: UNRELEASED)

1.12.3 (Release date: 19 February 2020)
- Bug fix for the sPop model (albopictus.population)

1.12.2 (Release date: UNRELEASED)
- Culex pipiens complex population dynamics model (first draft)

1.12.1 (Release date: UNRELEASED)
- Dynamic human population density for vector08 (from v.1.0 - 14 February 2017)
- Ability to change MAX_MEM for the gamma distribution module

1.11.0 (Release date: 29 November 2018)
- Ae. albopictus population dynamics model update
- Ae. albopictus egg capture/removal is implemented
- Ae. albopictus egg capture (total egg population laid each day is recorded)
- Ae. albopictus gonotrophic cycle length is introduced (experimental data need to be updated for egg numbers per cycle, not per day)

1.9.3 (Release date: 1 August 2018)
- getClimate and getLonLat updated for Python 3

1.9.1 (Release date: 31 July 2018)
- spop module is now compatible with Python 3

1.9 (Release date: 30 July 2018)
- Compatible with Python 3

1.8.3 (Release date: UNRELEASED)
- Bug fixes for cross-platform compatibility

1.8 (Release date: UNRELEASED)
- Ae. albopictus population dynamics model update initiated

1.7 (Release date: UNRELEASED)
- Age-structured deterministic and stochastic population dynamics model

1.6 (Release date: 27 February 2017)
- This is the model as presented in Erguler K, Pontiki I, Zittis G, Proestos Y, Christodoulou V, Tsirigotakis N, Antoniou M, Kasap OE, Alten B, Lelieveld J. Using longitudinal surveillance data to analyse the environmental dependence of vector populations: the case of sand flies. Scientific Reports. 2018 (submitted)

1.5 (Release date: UNRELEASED)
- Sand fly model update (environmental dependence: functional dependence)

1.4 (Release date: UNRELEASED)
- Sand fly model update

1.3 (Release date: UNRELEASED)
- Preliminary study for a global albopictus-chikungunya model
- Testing a new hypothesis for diapause

1.2 (Release date: 7 June 2017)
- Includes plotting routines for prior and posterior distributions

1.1 (Release date: UNRELEASED)
- Preliminary study for a global albopictus-chikungunya model

1.0 (Release date: 14 February 2017)
- Revised version of the chikungunya model as presented in Erguler K, Chandra NL, Proestos Y, Lelieveld J, Christophides GK, Parham PE. A large-scale stochastic spatiotemporal model for Aedes albopictus-borne chikungunya epidemiology. PLOS ONE. 2017 (under review)

0.9 (Release date: UNRELEASED)
- Preliminary study for a sand fly model

0.8 (Release date: 16 November 2016)
- The 'chikv' model is updated to model spread with stable population sizes in patches
- The Ae. albopictus population dynamics model v.0.3 is available as vector03
- The prior distribution for vector03 is available as prior03
- This is the model as presented in Erguler K, Chandra NL, Proestos Y, Lelieveld J, Christophides GK, Parham PE. A large-scale stochastic spatiotemporal model for Aedes albopictus-borne chikungunya epidemiology. PLOS ONE. 2016 (submitted)

0.7 (Release date: UNRELEASED)
- The 'chikv' model is updated to model spread with stable population sizes in patches

0.6 (Release date: UNRELEASED)
- The spatiotemporal 'chikv' model is added, which models Ae. albopictus-borne Chikungunya transmission

0.5 (Release date: UNRELEASED)
- Hash function for Gamma distribution was implemented for improved flexibility and performance

0.4 (Release date: UNRELEASED)
- Includes initial improvements on the vector population model

0.3 (Release date: 9 February 2016)
- This is the model as presented in Erguler K, Smith-Unna SE, Waldock J, Proestos Y, Christophides GK, Lelieveld J, Parham PE. Large-scale modelling of the environmentally-driven population dynamics of temperate Aedes albopictus (Skuse). PLOS ONE. 2016 (in press)
- This version also includes posterior Q1b, which is sampled from posterior mode Q1 for subsequent research

0.2 (Release date: UNRELEASED)
- Includes the prior distribution

0.1 (Release date: UNRELEASED)
- Initial commit
albp
No description available on PyPI.
albprov
readme
albprov-test
readme
alb-response
AWS ALB Response Creator

The AWS announcement of Application Load Balancers supporting Lambda functions made my reInvent experience!

Purpose

In doing a PoC, however, I found that the statusDescription element was somewhat of an annoyance to code. This package provides a method that returns the appropriate format of this field without copy/pasting response data, and it allows a strategy for swapping response formats between API Gateway and ALB as needed.

Installation

Run pip install alb-response

Usage

from alb_response import alb_response

def lambda_handler(event, context):
    response_dict = process_the_event(event)
    return alb_response(
        http_status=200,
        json=response_dict,
        is_base64_encoded=False,
    )

Architecture

- Set up an Application Load Balancer (ALB)
- Create a target group for the Lambda
- Assign appropriate permissions to your Lambda function
- Attach the target group to the ALB with a rule

Contributing

Contributions are welcome! Please open an issue or make a pull request. If making a pull request, please run the tests and ensure that you maintain or increase code coverage.

Dependencies

To make this project more portable and keep environments organized, this project leverages pipenv. To install deterministic dependencies, run pipenv sync.

Run Tests

To run the tests, install the dependencies and run behave. To get code coverage as well, run coverage run --source='.' -m behave followed by coverage report.

Release Log

0.1.0 - Initial Release
0.1.1 - Dependency update to resolve CVE-2019-11324.
0.1.2 - Patch to support null json responses without sending an empty json object
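The statusDescription quirk the package smooths over can be seen in a minimal hand-rolled sketch. ALB Lambda targets expect statusDescription as "<code> <reason phrase>" (e.g. "200 OK"); build_alb_response below is an invented helper for illustration, not the package's API:

```python
import json
from http import HTTPStatus

def build_alb_response(http_status, body=None, is_base64_encoded=False):
    """Hand-rolled sketch of an ALB Lambda target response.

    ALB expects statusDescription as "<code> <reason phrase>",
    e.g. "200 OK" -- the detail alb_response packages up for you.
    """
    reason = HTTPStatus(http_status).phrase  # e.g. 200 -> "OK"
    return {
        "statusCode": http_status,
        "statusDescription": f"{http_status} {reason}",
        "isBase64Encoded": is_base64_encoded,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body) if body is not None else "",
    }

resp = build_alb_response(200, {"ok": True})
print(resp["statusDescription"])  # -> 200 OK
```

Deriving the reason phrase from http.HTTPStatus is one way to avoid hard-coding a code-to-phrase table by hand; the real package may do this differently.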
albt
Allows for building, creating, and invoking lambda functions.
albula
Albula

A minimalist self-hosted music server. Install with pip install albula.

Current status

The library build is still wonky; I need to untangle scanning, metadata extraction, etc. and clean up. For now, a complete rebuild is often the better option.

Why not Plex / Subsonic / Airsonic / ...?

I just made Albula for myself because I disliked several things about the other options. Most likely, it will not be better than them for you. Some features to note, though:

- Support for multiple artists per track / album artists per album / albums per track
- Less crowded interface
- Direct server-side scrobbling to Maloja, removing the need for individual solutions for each client
- No JavaScript-bloated web interface
- No central authentication / phone-home

Requirements

Python 3.5 or higher

Library guidelines

Albula is fairly good at filling in missing metadata from folder structure. Generally, if you follow the default pattern with folders for each album artist that contain folders for each album, you can have pictures named album.ext and artist.ext in the appropriate folders and they should be correctly assigned.
album
albumIntroduction:https://album.solutions/Documentation:https://docs.album.solutions/CitationAlbrecht, J.P.*, Schmidt, D.*, and Harrington, K., 2021. Album: a framework for scientific data processing with software solutions of heterogeneous tools. arXiv preprint arXiv:2110.00601.https://arxiv.org/abs/2110.00601DevelopersKyle Harrington, Max Delbrueck Center for Molecular Medicine in the Helmholtz AssociationJan Philipp Albrecht, Max Delbrueck Center for Molecular Medicine in the Helmholtz AssociationDeborah Schmidt, Max Delbrueck Center for Molecular Medicine in the Helmholtz Association
album2video
Receives tracks and an image and outputs an album video with track-name subtitles and a timestamps.txt.

Description

album2video is a CLI to create album videos with an image as background and track names as subtitles (useful for uploading album videos to YouTube). Python 3.9.

Installation

From PyPI:

pip3 install album2video

or with pipx (recommended):

pipx install album2video

From source: clone the project or download and extract the zip, cd to the project directory containing the setup.py, then run python setup.py install or pipx install .

Details

Usage: album2video [options] [--] <URL>...

Arguments:
    URL    Path to folder w/ tracks & img,
           or folderpath + img path,
           or individual trackpaths + img path

Examples:
    album2video --help
    album2video path/to/folder
    album2video --title TheAlbumTitle path/to/mp3 path/to/mp3 path/to/img

Requires a path to an img, or a path to a folder with an img. (Needs ImageMagick installed.) Windows users will have to define the magick.exe filepath with album2video --imgmagick path/to/magick.exe

Options:
    -h --help           Show this screen
    -v --version        Show version
    -d --debug          Verbose logging
    -n --notxt          Don't output timestamps.txt
    -t --test           Run program without writing videofile (for test purposes)
    --title=TITLE       Set title beforehand
    --imgmagick=PATH    Set path to ImageMagick & exit
album-client
album-clientCitationAlbrecht, J.P.*, Schmidt, D.*, and Harrington, K., 2021. Album: a framework for scientific data processing with software solutions of heterogeneous tools. arXiv preprint arXiv:2110.00601.https://arxiv.org/abs/2110.00601DevelopersKyle Harrington, Max Delbrueck Center for Molecular Medicine in the Helmholtz AssociationJan Philipp Albrecht, Max Delbrueck Center for Molecular Medicine in the Helmholtz AssociationDeborah Schmidt, Max Delbrueck Center for Molecular Medicine in the Helmholtz Association
album-colours
No description available on PyPI.
album-distributed
Album plugin for distributed calls

This is an early version of enhancing Album with calls for batch and distributed processing.

Installation

1. Install Album
2. Activate the album environment: conda activate album
3. Install this plugin: pip install https://gitlab.com/album-app/plugins/album-distributed/-/archive/main/album-distributed-main.zip

Usage

First, install a solution - replace solution.py with the path to your solution / solution folder or with the group:name:version coordinates of your solution.

album install solution.py

Now you can use the plugin:

album run-distributed solution.py

The plugin does two things:

1. It figures out whether the input arguments match multiple tasks - in this case, it generates the different task arguments.
2. It runs all matching tasks; the mode for running these tasks can be chosen.

Since the matching part can be tricky, please use the --dry-run argument to first print a list of matched tasks:

album run-distributed solution.py --dry-run

On Windows, replace the slashes with backslashes in the examples on this page.

Please let us know if you run into issues.

Matching input arguments

To generate multiple tasks, patterns in file name arguments can be used to match multiple files.

Using patterns in a single argument

You should be able to use all glob features when using a pattern in a single argument. Here are some examples. In the following scenarios, solution.py has an argument called input_data.

Match all .tif files in the current folder:

album run-distributed solution.py --input_data *.tif

Match all .tif files in a specific folder where the file name starts with input:

album run-distributed solution.py --input_data /data/input*.tif

Match all .tif files recursively, starting from the current folder:

album run-distributed solution.py --input_data **/*.tif

Using patterns in multiple arguments

When using patterns in multiple arguments, this plugin will try to figure out the corresponding argument values based on which of the patterns match existing files. This is likely to fail in a number of situations - please use the --dry-run argument to test whether the matched tasks correspond with your expectation.

In the following scenarios, solution.py has two arguments called input_data and output_data.

Use all .tif files in the current folder and append _out to the file name for the output argument:

album run-distributed solution.py --input_data *.tif --output_data *_out.tif

Do the same thing recursively:

album run-distributed solution.py --input_data **/*.tif --output_data **/*_out.tif

Let the output argument values live in a different folder:

album run-distributed solution.py --input_data *.tif --output_data output/*.tif

Since Album does not yet distinguish between input and output arguments, be aware that if the output_data argument in these scenarios matches existing files, the plugin will also try to generate corresponding input_file values. We will work on improving this.

Modes

You can set the mode by using the --mode argument:

album run-distributed solution.py --mode queue

By default, the plugin will use the basic mode.

Basic

In this mode, all tasks will be performed one after the other. The console output of each task will be printed.

Queue

In this mode, a set of thread workers will be created to process tasks in parallel. The console output of each task will not be printed. You can control how many threads should be created with the --threads argument:

album run-distributed solution.py --mode queue --threads 16
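The input/output pattern pairing described above can be approximated with a short sketch: capture what the * in the input pattern matched and substitute it into the output pattern. expand_tasks is invented for illustration under that assumption; the plugin's real matching logic is more involved and works against the filesystem:

```python
import re

def expand_tasks(filenames, input_pattern, output_pattern):
    """Illustrative sketch: pair each file matching input_pattern with an
    output name built by substituting the matched stem into output_pattern.
    Not the plugin's actual algorithm."""
    # Turn the glob into a regex with a capture group for '*'
    regex = re.compile(re.escape(input_pattern).replace(r"\*", "(.*)") + "$")
    tasks = []
    for name in filenames:
        m = regex.match(name)
        if m:
            tasks.append({
                "input_data": name,
                "output_data": output_pattern.replace("*", m.group(1), 1),
            })
    return tasks

files = ["a.tif", "b.tif", "notes.txt"]
tasks = expand_tasks(files, "*.tif", "*_out.tif")
# pairs a.tif -> a_out.tif and b.tif -> b_out.tif; notes.txt is skipped
print(tasks)
```

This also shows why matching can be ambiguous: if the output pattern itself matches existing files, the same substitution can be run in reverse, which is the caveat the plugin documentation warns about.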
album-dl
UNKNOWN
albumentations
Albumentations

Albumentations is a Python library for image augmentation. Image augmentation is used in deep learning and computer vision tasks to increase the quality of trained models. The purpose of image augmentation is to create new training samples from the existing data.

Here is an example of how you can apply some pixel-level augmentations from Albumentations to create new images from the original one.

Why Albumentations

- Albumentations supports all common computer vision tasks such as classification, semantic segmentation, instance segmentation, object detection, and pose estimation.
- The library provides a simple unified API to work with all data types: images (RGB images, grayscale images, multispectral images), segmentation masks, bounding boxes, and keypoints.
- The library contains more than 70 different augmentations to generate new training samples from the existing data.
- Albumentations is fast. We benchmark each new release to ensure that augmentations provide maximum speed.
- It works with popular deep learning frameworks such as PyTorch and TensorFlow. By the way, Albumentations is a part of the PyTorch ecosystem.
- Written by experts. The authors have experience both working on production computer vision systems and participating in competitive machine learning. Many core team members are Kaggle Masters and Grandmasters.
- The library is widely used in industry, deep learning research, machine learning competitions, and open source projects.

Table of contents: Why Albumentations · Authors · Installation · Documentation · A simple example · Getting started · Who is using Albumentations · See also · List of augmentations (pixel-level and spatial-level transforms) · A few more examples of augmentations · Benchmarking results · Contributing · Community and Support · Comments · Citing

Authors

- Vladimir I. Iglovikov | Kaggle Grandmaster
- Mikhail Druzhinin | Kaggle Expert
- Alex Parinov | Kaggle Master
- Alexander Buslaev — Computer Vision Engineer at Mapbox | Kaggle Master
- Evegene Khvedchenya — Computer Vision Research Engineer at Piñata Farms | Kaggle Grandmaster

Installation

Albumentations requires Python 3.8 or higher.
To install the latest version from PyPI:

pip install -U albumentations

Other installation options are described in the documentation.

Documentation

The full documentation is available at https://albumentations.ai/docs/.

A simple example

import albumentations as A
import cv2

# Declare an augmentation pipeline
transform = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

# Read an image with OpenCV and convert it to the RGB colorspace
image = cv2.imread("image.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Augment an image
transformed = transform(image=image)
transformed_image = transformed["image"]

Getting started

I am new to image augmentation

Please start with the introduction articles about why image augmentation is important and how it helps to build better models.

I want to use Albumentations for a specific task such as classification or segmentation

If you want to use Albumentations for a specific task such as classification, segmentation, or object detection, refer to the set of articles that has an in-depth description of this task. We also have a list of examples on applying Albumentations for different use cases.

I want to know how to use Albumentations with deep learning frameworks

We have examples of using Albumentations along with PyTorch and TensorFlow.

I want to explore augmentations and see Albumentations in action

Check the online demo of the library. With it, you can apply augmentations to different images and see the result. Also, we have a list of all available augmentations and their targets.

Who is using Albumentations

See also

- A list of papers that cite Albumentations.
- A list of teams that were using Albumentations and took high places in machine learning competitions.
- Open source projects that use Albumentations.

List of augmentations

Pixel-level transforms

Pixel-level transforms will change just an input image and will leave any additional targets such as masks, bounding boxes, and keypoints unchanged.
The list of pixel-level transforms:AdvancedBlurBlurCLAHEChannelDropoutChannelShuffleColorJitterDefocusDownscaleEmbossEqualizeFDAFancyPCAFromFloatGaussNoiseGaussianBlurGlassBlurHistogramMatchingHueSaturationValueISONoiseImageCompressionInvertImgMedianBlurMotionBlurMultiplicativeNoiseNormalizePixelDistributionAdaptationPosterizeRGBShiftRandomBrightnessContrastRandomFogRandomGammaRandomGravelRandomRainRandomShadowRandomSnowRandomSunFlareRandomToneCurveRingingOvershootSharpenSolarizeSpatterSuperpixelsTemplateTransformToFloatToGrayToRGBToSepiaUnsharpMaskZoomBlurSpatial-level transformsSpatial-level transforms will simultaneously change both an input image as well as additional targets such as masks, bounding boxes, and keypoints. The following table shows which additional targets are supported by each transform.TransformImageMasksBBoxesKeypointsAffine✓✓✓✓BBoxSafeRandomCrop✓✓✓CenterCrop✓✓✓✓CoarseDropout✓✓✓Crop✓✓✓✓CropAndPad✓✓✓✓CropNonEmptyMaskIfExists✓✓✓✓ElasticTransform✓✓✓Flip✓✓✓✓GridDistortion✓✓✓GridDropout✓✓HorizontalFlip✓✓✓✓Lambda✓✓✓✓LongestMaxSize✓✓✓✓MaskDropout✓✓NoOp✓✓✓✓OpticalDistortion✓✓✓PadIfNeeded✓✓✓✓Perspective✓✓✓✓PiecewiseAffine✓✓✓✓PixelDropout✓✓✓✓RandomCrop✓✓✓✓RandomCropFromBorders✓✓✓✓RandomCropNearBBox✓✓✓✓RandomGridShuffle✓✓✓RandomResizedCrop✓✓✓✓RandomRotate90✓✓✓✓RandomScale✓✓✓✓RandomSizedBBoxSafeCrop✓✓✓RandomSizedCrop✓✓✓✓Resize✓✓✓✓Rotate✓✓✓✓SafeRotate✓✓✓✓ShiftScaleRotate✓✓✓✓SmallestMaxSize✓✓✓✓Transpose✓✓✓✓VerticalFlip✓✓✓✓XYMasking✓✓✓A few more examples of augmentationsSemantic segmentation on the Inria datasetMedical imagingObject detection and semantic segmentation on the Mapillary Vistas datasetKeypoints augmentationBenchmarking resultsTo run the benchmark yourself, follow the instructions inbenchmark/README.mdResults for running the benchmark on the first 2000 images from the ImageNet validation set using an Intel(R) Xeon(R) Gold 6140 CPU. All outputs are converted to a contiguous NumPy array with the np.uint8 data type. 
The table shows how many images per second can be processed on a single core; higher is better.albumentations1.1.0imgaug0.4.0torchvision (Pillow-SIMD backend)0.10.1keras2.6.0augmentor0.2.8solt0.1.9HorizontalFlip102202702251787625286798VerticalFlip443821412151438121553659Rotate3892831652860367ShiftScaleRotate66942514629--Brightness276511244112294082335Contrast27671137349-3462341BrightnessContrast2746629190-1891196ShiftRGB27581093-360--ShiftHSV59825959--144Gamma2849-388--933Grayscale5219393723-10821309RandomCrop64163550256250159-4284222260PadToSize5123609-602--3097Resize51210496111066-10411017RandomSizedCrop_64_51232248581660-15982675Posterize2789-----Solarize2761-----Equalize647385--765-Multiply26591129----MultiplyElementwise111200----ColorJitter3517857---Python and library versions: Python 3.9.5 (default, Jun 23 2021, 15:01:51) [GCC 8.3.0], numpy 1.19.5, pillow-simd 7.0.0.post3, opencv-python 4.5.3.56, scikit-image 0.18.3, scipy 1.7.1.ContributingTo create a pull request to the repository, follow the documentation atCONTRIBUTING.mdCommunity and SupportTwitterDiscordCommentsIn some systems, in the multiple GPU regime, PyTorch may deadlock the DataLoader if OpenCV was compiled with OpenCL optimizations. Adding the following two lines before the library import may help. For more detailshttps://github.com/pytorch/pytorch/issues/1355cv2.setNumThreads(0)cv2.ocl.setUseOpenCL(False)CitingIf you find this library useful for your research, please consider citingAlbumentations: Fast and Flexible Image Augmentations:@Article{info11020125,AUTHOR={Buslaev, Alexander and Iglovikov, Vladimir I. and Khvedchenya, Eugene and Parinov, Alex and Druzhinin, Mikhail and Kalinin, Alexandr A.},TITLE={Albumentations: Fast and Flexible Image Augmentations},JOURNAL={Information},VOLUME={11},YEAR={2020},NUMBER={2},ARTICLE-NUMBER={125},URL={https://www.mdpi.com/2078-2489/11/2/125},ISSN={2078-2489},DOI={10.3390/info11020125}}
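The p= parameter in the pipeline example above means each transform fires independently with that probability on every call. That semantic can be illustrated with a library-free toy compose; ToyCompose and the fake list-based "image" transforms are invented here and are not Albumentations code:

```python
import random

class ToyCompose:
    """Toy illustration of Compose with per-transform probability p.
    Invented for this example; not Albumentations' implementation."""
    def __init__(self, transforms, rng=None):
        self.transforms = transforms  # list of (p, function) pairs
        self.rng = rng or random.Random()

    def __call__(self, image):
        for p, fn in self.transforms:
            if self.rng.random() < p:  # apply each transform with prob. p
                image = fn(image)
        return image

# Fake "image": a list of pixel values; fake transforms for demonstration.
flip = lambda img: img[::-1]            # stands in for HorizontalFlip
brighten = lambda img: [v + 10 for v in img]  # stands in for brightness shift

always = ToyCompose([(1.0, flip), (1.0, brighten)])
never = ToyCompose([(0.0, flip), (0.0, brighten)])

print(always([1, 2, 3]))  # both applied -> [13, 12, 11]
print(never([1, 2, 3]))   # none applied -> [1, 2, 3]
```

With p=0.5, repeated calls on the same input produce different outputs, which is exactly how a pipeline like the one above multiplies one training sample into many.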
albumentations-experimental
Albumentations Experimental

The Albumentations Experimental library provides experimental and cutting-edge augmentation techniques on top of Albumentations.

Why a separate library

Albumentations provides stable and well-tested interfaces for performing augmentations. We don't want to pollute the library with features that may be prone to rapid changes in interfaces and behavior, since they could break users' pipelines. But we also want to implement new, experimental features and see whether they will be useful.

So we created Albumentations Experimental, a library that will help us to iterate faster and remove the need to strive for backward compatibility and rigorous testing.

Beware that each new version of Albumentations Experimental may contain backward-incompatible changes, both in interfaces and in behavior.

When features in Albumentations Experimental are mature enough, we will port them to the main library with all our usual policies such as rigorous testing, extensive documentation, and stable behavior.

Installation

Albumentations Experimental requires Python 3.5 or higher.

Install the latest stable version from PyPI:

pip install -U albumentations_experimental

Install the latest version from the master branch on GitHub:

pip install -U git+https://github.com/albumentations-team/albumentations-experimental

Usage

Import augmentations from the library:

from albumentations_experimental import FlipSymmetricKeypoints

Documentation

Documentation is available at https://albumentations.ai/docs/experimental/overview/

List of augmentations and their supported targets

Spatial-level transforms

Transform                        | Image | Masks | BBoxes | Keypoints
FlipSymmetricKeypoints           | ✓     | ✓     | ✓      | ✓
HorizontalFlipSymmetricKeypoints | ✓     | ✓     | ✓      | ✓
TransposeSymmetricKeypoints      | ✓     | ✓     | ✓      | ✓
VerticalFlipSymmetricKeypoints   | ✓     | ✓     | ✓      | ✓
album-environments
album-environmentsEnvironment handling ofAlbum.
albumface
Hello
album-gui
No description available on PyPI.
album-image-util
Album image util

ℹ Note: This project is on pip so I can use it on other machines; you'll probably not find a use case for it, as it is ultra-specific.

This is a project for my Album of the day website, which is a website where I post my writing about albums: one every day. The album of the days were initially written in image form. To get them on the website, I had to do some OCR-reading of previous album image files. This library can extract different parts of these images (because I used the same template for all of them). It can also create new album of the day images.
albuminer
UNKNOWN
album-of-the-year-api
AlbumOfTheYearWrapper

A lightweight python library that acts as an API for https://www.albumoftheyear.org/

Description

Gets data from https://www.albumoftheyear.org/. The website doesn't currently provide API support, so web parsing is required to obtain data. Because of this, and according to https://www.albumoftheyear.org/robots.txt, searching and POST requests are not allowed.

Installation

pip install album-of-the-year-api

or upgrade:

pip install album-of-the-year-api --upgrade

Usage

Examples

Here's a quick example of getting a specific user's follower count:

from albumoftheyearapi import AOTY

client = AOTY()
print(client.user_follower_count('jahsias'))

If you don't need the full functionality, you can also import only the necessary files:

from albumoftheyearapi.artist import ArtistMethods

client = ArtistMethods()
print(client.artist_albums('183-kanye-west'))

Notice that artists also need their unique id along with their name.

Methods

Artist Methods (each takes the parameter artist - artist id and name):

- artist_albums(artist) - Returns a list of all albums by an artist
- artist_mixtapes(artist) - Returns a list of all mixtapes by an artist
- artist_eps(artist) - Returns a list of all eps by an artist
- artist_singles(artist) - Returns a list of all singles by an artist
- artist_name(artist) - Returns the name of the artist
- artist_critic_score(artist) - Returns the critic score of the artist
- artist_user_score(artist) - Returns the user score of the artist
- artist_total_score(artist) - Returns the average of the critic and user scores of the artist
- artist_follower_count(artist) - Returns the follower count of the artist
- artist_details(artist) - Returns the details of the artist
- artist_top_songs(artist) - Returns a list of the top songs of the artist
- similar_artists(artist) - Returns a list of artists similar to the given artist

User Methods (each takes the parameter user - username):

- user_rating_count(user) - Returns the number of ratings by a user
- user_review_count(user) - Returns the number of reviews by a user
- user_list_count(user) - Returns the number of lists by a user
- user_follower_count(user) - Returns the number of followers a user has
- user_about(user) - Returns the about page of a user
- user_rating_distribution(user) - Returns a list of a user's rating distribution
album-package
Album plugin for packaging solutions into executables

This plugin is used to create installation executables for Album and Album solutions, so Album and a solution can be installed with a simple double click. The executable creates a shortcut for running Album's UI on the desktop of the user. The executables can be distributed to a different system running the same operating system. To create executables for different operating systems, run this plugin on a system running the target operating system. If the target system runs Windows or macOS, it doesn't need to have anything preinstalled; the executable will install every needed component (Micromamba and Album) into the ~/.album directory if they are not already installed in this location. Linux users need to have the binutils package installed.

Installation:

1. Install Album
2. Activate the album environment: conda activate album
3. Install the album package plugin: pip install album-package
4. If you are using a Linux system, make sure the source and the target system have the binutils package installed. For example, on Ubuntu it can be installed with the following command: apt-get update && apt-get install binutils

Usage:

To create an executable which installs Album, run the following command:

album package --output_path /your/output/path

To create an executable which installs Album and a solution in one go, run the following command:

album package --solution /path/to/your/solution.py --output_path /your/output/path

Input parameters:

- solution: The album solution.py file which should be packed into an executable. If you provide the path to a directory containing a solution.py, all files in the directory will be packaged into the solution executable. If you provide the direct path to a solution.py, only the solution will be packaged. If your solution contains local imports, make sure all imported files lie in the same directory as the solution and you provide the path containing the solution.py. If this parameter is not set, the resulting executable will only install Album without a solution.
- output_path: The path where the executable should be saved
albumpl
No description available on PyPI.
albumr
albumrImgur album downloader.Installationpip install albumrUsageusage: albumr [<options>] [--] <album>... Imgur album downloader positional arguments: <album> an album hash or URL optional arguments: -h, --help show this help message and exit -n, --numbers, --no-numbers prepend numbers to filenames; default: False -t, --titles, --no-titles append album titles to directory names; default: False -v, --version show program's version number and exitExamplesFrom album URL:albumr http://imgur.com/a/adkETFrom album hash, with numbers in filenames and album title in directory name:albumr -nt adkETLicenseMIT License
album-rsync
album-rsync

A python script to manage synchronising a local directory of photos with a remote storage provider based on an rsync interaction pattern.

Requirements

See requirements.txt for the list of dependencies. Supports Python 3.6+. For Python 2, see https://github.com/phdesign/flickr-rsync

Installation

Via PyPI

Install from the python package manager by:

$ pip install album-rsync

From GitHub repo

Clone the GitHub repo locally.

To install globally:

$ python setup.py install

To install for the current user only:

$ python setup.py install --user

Storage providers

Currently the local file system, Flickr and Google Photos are supported. Below is a list of supported features for each.

Feature            | Local | Flickr | Google
Root files         | Yes   | Yes    | No
Delete extra files | Yes   | Yes    | No
Logout             | No    | Yes    | Yes

Authenticating

To authenticate against a storage provider, you will need to set up API keys, and then authorise your account.

To create API keys, visit:

- Flickr: https://www.flickr.com/services/api/misc.api_keys.html
- Google: https://console.developers.google.com/apis/library/photoslibrary.googleapis.com

You will be issued an api key and a secret. To enable the app to use these keys, either:

- For Flickr, provide --flickr-api-key and --flickr-api-secret arguments to the command line
- For Google, provide --google-api-key and --google-api-secret arguments to the command line
- Or create a config file in $HOME/.album-rsync.ini with the following entries:

# For Flickr
FLICKR_API_KEY = xxxxxxxxxxxxxxxxxxx
FLICKR_API_SECRET = yyyyyyyyyyyyyy
# For Google
GOOGLE_API_KEY = xxxxxxxxxxxxxxxxxxx
GOOGLE_API_SECRET = yyyyyyyyyyyyyy

The first time you perform any action against the storage provider, this app will prompt you to authorise access to your account. For Flickr you may choose to request delete permissions, or write-only permissions if you do not want any photos deleted by this app.

Logout

To remove the authentication token for a storage provider, specify the storage provider as the source and pass the --logout argument.
E.g.

$ album-rsync flickr --logout

Listing files

The --list-only flag will print a list of files in the source storage provider; the src can be flickr, google, or a local file system path. Use --list-sort to sort the files alphabetically (slower). This feature is useful for manually creating a diff between your local files and Flickr files.

E.g. list all files in Flickr photo sets:

$ album-rsync flickr --list-only

or list sorted files from Google:

$ album-rsync google --list-only --list-sort

or list all files in a local folder:

$ album-rsync ~/Pictures --list-only

Tree view vs. csv view

You can change the output from a tree view to a comma separated values view by using --list-format=tree or --list-format=csv. By default the tree view is used.

E.g. print in tree format:

$ album-rsync flickr --list-only --list-format=tree

├─── 2017-04-24 Family Holiday
│    ├─── IMG_2546.jpg [70ebf9]
│    ├─── IMG_2547.jpg [3d3046]
│    ├─── IMG_2548.jpg [2f2385]
│    └─── IMG_2549.jpg [d8e946]
│
└─── 2017-04-16 Easter Camping
     ├─── IMG_2515.jpg [aabe74]
     ├─── IMG_2516.jpg [0eb4f2]
     └─── IMG_2517.jpg [4fe908]

Or csv format:

$ album-rsync flickr --list-only --list-format=csv

Folder, Filename, Checksum
2017-04-24 Family Holiday, IMG_2546.jpg, 70ebf9be4d8301e94c65582977332754
2017-04-24 Family Holiday, IMG_2547.jpg, 3d3046b37ba338793a762ab7bd83e85c
2017-04-24 Family Holiday, IMG_2548.jpg, 2f23853abeb742551043a3514ba4315b
2017-04-24 Family Holiday, IMG_2549.jpg, d8e946e73700b9c2890d3681c3c0fa0b
2017-04-16 Easter Camping, IMG_2515.jpg, aabe74b06c3a53e801893347eb6bd7f5
2017-04-16 Easter Camping, IMG_2516.jpg, 0eb4f2519f6562ff66069618637a7b10
2017-04-16 Easter Camping, IMG_2517.jpg, 4fe9085b9f320a67988f84e85338a3ff

Listing folders

To just list the top level folders (without all the files), use --list-folders.

$ album-rsync ~/Pictures --list-folders

Syncing files

E.g.
To copy all files from Flickr to a local folder:

$ album-rsync flickr ~/Pictures/flickr

Or to copy all files from a local folder up to Flickr:

$ album-rsync ~/Pictures/flickr flickr

You can even copy from a local folder to another local folder:

$ album-rsync ~/Pictures/from ~/Pictures/to

Files are matched by folder names and file names, case insensitively. E.g. if you have a Flickr photoset called 2017-04-16 Easter Camping and a file called IMG_2517.jpg, and you are trying to copy from a folder with 2017-04-16 Easter Camping\IMG_2517.jpg, it will assume this file is the same and will not try to copy it.

Dry run

Before performing any operations, it's recommended to perform a dry run first: pass -n or --dry-run to simulate syncing without actually copying anything.

Deleting extra files

WARNING: Use of this feature will permanently delete files, be sure you know what you're doing.

NOTE: Deleting extra files is not supported by the Google storage provider.

Pass --delete to delete any extra files from the destination that don't exist in the source. E.g.

$ album-rsync ~/Pictures/flickr flickr --delete

Filtering

Filtering is done using regular expressions. The following four options control filtering the files:

- --include= specifies a pattern that file names must match to be included in the operation
- --include-dir= specifies a pattern that folder names must match to be included in the operation
- --exclude= specifies a pattern that file names must NOT match to be included in the operation
- --exclude-dir= specifies a pattern that folder names must NOT match to be included in the operation

Note that filtering by folders is more performant than by file names; prefer folder name filtering where possible. Also note that exclude filters take preference and will override include filters.

Root files

Note that filtering does not apply to root files. Root files (files in the target folder if local file system, or files not in a photoset on Flickr) are excluded by default.
To include them, use --root-files.

Options

All options can be provided by either editing the config file album-rsync.ini or using the command line interface.

usage: album-rsync [-h] [-l] [--list-format {tree,csv}] [--list-sort]
                   [--list-folders] [--delete] [-c] [--include REGEX]
                   [--include-dir REGEX] [--exclude REGEX]
                   [--exclude-dir REGEX] [--root-files] [-n]
                   [--throttling SEC] [--retry NUM]
                   [--flickr-api-key FLICKR_API_KEY]
                   [--flickr-api-secret FLICKR_API_SECRET]
                   [--flickr-tags "TAG1 TAG2"]
                   [--google-api-key GOOGLE_API_KEY]
                   [--google-api-secret GOOGLE_API_SECRET]
                   [--logout] [-v] [--version]
                   [src] [dest]

A python script to manage synchronising a local directory of photos with a
remote storage provider based on an rsync interaction pattern.

positional arguments:
  src                   the source directory to copy or list files from, or
                        FLICKR to specify flickr
  dest                  the destination directory to copy files to, or FLICKR
                        to specify flickr

optional arguments:
  -h, --help            show this help message and exit
  -l, --list-only       list the files in --src instead of copying them
  --list-format {tree,csv}
                        output format for --list-only, TREE for a tree based
                        output or CSV
  --list-sort           sort alphabetically when --list-only, note that this
                        forces buffering of remote sources so will be slower
  --list-folders        lists only folders (no files, implies --list-only)
  --delete              WARNING: permanently deletes additional files in
                        destination
  -c, --checksum        calculate file checksums for local files. Print
                        checksum when listing, use checksum for comparison
                        when syncing
  --include REGEX       include only files matching REGEX.
                        Defaults to media file extensions only
  --include-dir REGEX   include only directories matching REGEX
  --exclude REGEX       exclude any files matching REGEX, note this takes
                        precedence over --include
  --exclude-dir REGEX   exclude any directories matching REGEX, note this
                        takes precedence over --include-dir
  --root-files          includes root files (not in a directory or a
                        photoset) in the list or copy
  -n, --dry-run         in sync mode, don't actually copy anything, just
                        simulate the process and output
  --throttling SEC      the delay in seconds (may be decimal) before each
                        network call
  --retry NUM           the number of times to retry a network call (using
                        exponential backoff) before failing
  --flickr-api-key FLICKR_API_KEY
                        flickr API key
  --flickr-api-secret FLICKR_API_SECRET
                        flickr API secret
  --flickr-tags "TAG1 TAG2"
                        space separated list of tags to apply to uploaded
                        files on flickr
  --google-api-key GOOGLE_API_KEY
                        Google API key
  --google-api-secret GOOGLE_API_SECRET
                        Google API secret
  --logout              logout of remote storage provider (determined by src)
  -v, --verbose         increase verbosity
  --version             show program's version number and exit

Config and token file discovery

The config file album-rsync.ini and token file album-rsync.token are searched for in the following locations, in order:

<current working dir>/album-rsync.ini
<current working dir>/.album-rsync.ini
<users home dir>/album-rsync.ini
<users home dir>/.album-rsync.ini
<executable dir>/album-rsync.ini
<executable dir>/.album-rsync.ini

The token file is an auto generated file containing the authorisation token to access the API.
If deleted, you will need to authorise the app again when next using it.

Developing

Install in development mode so source files are symlinked, meaning changes you make to the source files are reflected when you run the package anywhere.

$ python setup.py develop

Then to uninstall:

$ python setup.py develop --uninstall

Debugging

Use pdb:

python -m pdb ./flickr_rsync/__main__.py <parameters>

Set a breakpoint:

b ./flickr_rsync/flickr_storage.py:74

Then c(ontinue) or n(ext) to step over, or s(tep) to step into. l(ist) shows the current line and 11 lines of context. p (print) or pp (pretty print) prints a variable, e.g.

p dir(photo)
pp photo.__dict__

to print all properties of variable photo. q(uit) to exit.

Check out https://medium.com/instamojo-matters/become-a-pdb-power-user-e3fc4e2774b2

Publishing

Based on http://peterdowns.com/posts/first-time-with-pypi.html

1. Update album_rsync/_version.py with the new version number (e.g. 1.1.1)
2. Create a new GitHub release (e.g. git tag -a v1.1.1 -m "Version v1.1.1" && git push --tags)
3. Push to PyPI:

$ make deploy

Running tests

If make is installed, you can run tests using a virtual environment:

$ make venv
$ make test

which will lint the code and run tests. To just run the linter:

$ make lint

To run the tests without make, use:

$ python setup.py test

To mark a focused test

Add a @pytest.mark.focus decorator to the test.
Run with:

$ pytest -m focus

Tips

To list just root files only:

$ album-rsync flickr --exclude-dir '.*' --root-files --list-only

Videos

Movies should work, but flickr doesn't seem to return the original video when you download it again; it returns a processed video that may have slightly downgraded quality and will not have the same checksum.

Troubleshooting

I get a Version conflict error with the six python package when installing on my Mac

If you're running Mac OSX El Capitan and you get the following error when running python setup.py test:

pkg_resources.VersionConflict: (six 1.4.1 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('six>=1.9'))

Do the following:

$ sudo pip install --ignore-installed six

More details: https://github.com/pypa/pip/issues/3165

I get an error 'The Flickr API keys have not been set'

To access Flickr this application needs API keys; go to http://www.flickr.com/services/apps/create/apply to sign up for a free personal API key.

I get an error 'The Flickr API keys have not been set' but I've set them in my config (ini) file

Perhaps the application can't find the config file location. Use the -v or --verbose option to print the location of the config file being used.

Why are some files not being shown in the file list / sync?

By default only media files are included in file listings and sync operations. Media files are defined as \.(jpg|jpeg|png|gif|tiff|tif|bmp|psd|svg|raw|wmv|avi|mov|mpg|mp4|3gp|ogg|ogv|m2ts)$. Use --include=.* to include all files.

I get an error 'The filename, directory name or volume label syntax is incorrect'

If you're seeing an error like this:

WindowsError: [Error 123] The filename, directory name, or volume label syntax is incorrect: 'C:\\Users\\xxx\\Pictures" --list-only/*.*'

Ensure that you are not using single quotes ' around a folder path in windows; instead use double quotes ".
E.g.

$ album-rsync "C:\Users\xxx\Pictures" --list-only

When I try to list a local folder called 'flickr' it lists my remote flickr files

album-rsync uses the keyword flickr as a src or dest to denote pulling the list from flickr. If you have a folder called flickr, just give it a relative or absolute path to make it obvious that it's a file path, e.g.

$ album-rsync ./flickr --list-only

If I add tags, they get changed by flickr, e.g. 'extn=mov' becomes 'extnmov'

Internally flickr removes all whitespace and special characters, so 'extn mov' and 'extn=mov' match 'extnmov'. You can edit a tag using this URL: https://www.flickr.com/photos/{username}/tags/{tagname}/edit/ or go here to manage all tags: https://www.flickr.com/photos/{username}/tags

In future, put double quotes around your tag to retain special characters.

I get an error 'UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-3: character maps to <undefined>'

This error occurs on Windows when you redirect stdout. To fix this, set PYTHONIOENCODING=utf-8, e.g.

$ PYTHONIOENCODING=utf-8 album-rsync ./flickr --list-only

Release notes

v2.0.4 (14 Mar 2019)

- Renamed to album-rsync
- Converted to Python 3
- Added Google Photos storage provider
- Continues to the next file when an error occurs copying a file (after retry policies have been applied)
- Support for deleting extra files in destination

v1.0.5 (21 Mar 2018)

- Support for videos
- Add tag to maintain original extension

v1.0.4 (2 Nov 2017)

- Improve retry and throttling, now uses exponential backoff
- Use python logging framework, outputs log messages to stderr

v1.0.3 (16 Sep 2017)

- Flickr converts .jpeg to .jpg extensions, so consider them the same when comparing for sync

TODO

- Handle nested directories. Merge with a separator like parent_child.
  Apply --include-dir after merging
- List duplicate files
- Webpage for successful Flickr login
- Optimise - why does sort files seem to run faster?!
- Fix duplicate albums issue
- Why does it make 3 api calls for every photo in --list-only --list-sort mode?
- --init to setup a new .ini file and walk through the auth process
- Add throttling and delay to Google
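The include/exclude filtering described above can be illustrated with plain Python regular expressions. This is a sketch only; the want() helper and its default pattern are hypothetical, not album-rsync's actual implementation, but it mirrors the documented rules that exclude filters override include filters and that matching is case insensitive:

```python
import re

def want(name, include=r'\.(jpg|jpeg|png)$', exclude=None):
    """Sketch of include/exclude file filtering: exclude takes precedence.

    The default include pattern here is illustrative only; album-rsync's
    real default covers many more media extensions.
    """
    if exclude and re.search(exclude, name, re.IGNORECASE):
        return False
    return bool(re.search(include, name, re.IGNORECASE))

print(want("IMG_2546.JPG"))                    # True: case-insensitive include
print(want("IMG_2546.jpg", exclude=r'^IMG_'))  # False: exclude wins
print(want("notes.txt"))                       # False: not matched by include
```

The same precedence applies to --include-dir/--exclude-dir, just evaluated against folder names instead of file names.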
album-runner
album-runner

Interpreting an album script.
album-sender
album_sender

Telegram album sender

usage

import album_sender, web_2_album

result = web_2_album.get(url)
album_sender.send_v2(chat, result, rotate=False)

how to install

pip3 install album_sender
album-server
album-server

Citation

Albrecht, J.P.*, Schmidt, D.*, and Harrington, K., 2021. Album: a framework for scientific data processing with software solutions of heterogeneous tools. arXiv preprint arXiv:2110.00601. https://arxiv.org/abs/2110.00601

Developers

- Kyle Harrington, Max Delbrueck Center for Molecular Medicine in the Helmholtz Association
- Jan Philipp Albrecht, Max Delbrueck Center for Molecular Medicine in the Helmholtz Association
- Deborah Schmidt, Max Delbrueck Center for Molecular Medicine in the Helmholtz Association
album-socket
album-socket

Citation

Albrecht, J.P.*, Schmidt, D.*, and Harrington, K., 2021. Album: a framework for scientific data processing with software solutions of heterogeneous tools. arXiv preprint arXiv:2110.00601. https://arxiv.org/abs/2110.00601

Developers

- Kyle Harrington, Max Delbrueck Center for Molecular Medicine in the Helmholtz Association
- Jan Philipp Albrecht, Max Delbrueck Center for Molecular Medicine in the Helmholtz Association
- Deborah Schmidt, Max Delbrueck Center for Molecular Medicine in the Helmholtz Association
album-solution-api
album-solution-api

Package installed in any Album solution environment. Provides the API for the solution script.
albumsplit
albumsplit

albumsplit is a tool to split up full-length album mp3s into individual tracks.

First, you need the mp3 file, and a tracks file listing the temporal offset of each track, followed by its title, i.e.:

00:00 first track
01:33 second track

(where first track was a minute 33 seconds long ... etc.)

You can use my other tool, trackcalc, to calculate out these offsets.

Then you can split up the mp3 file like this:

albumsplit "Album Name.mp3" tracks.txt

Here is a vim macro I use to reformat tracks:

$Bd$0Pa 0j
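The README points at trackcalc for producing these cumulative offsets. As a rough illustration of the arithmetic involved, here is a small sketch; the offsets() helper is hypothetical and not part of albumsplit or trackcalc:

```python
def offsets(durations_sec, titles):
    """Yield 'MM:SS title' tracks-file lines from per-track durations.

    Illustrative helper only; the trackcalc tool mentioned above is the
    real way to produce these offsets.
    """
    start = 0
    for duration, title in zip(durations_sec, titles):
        yield f"{start // 60:02d}:{start % 60:02d} {title}"
        start += duration

# A 93-second first track puts the second track at 01:33,
# matching the example tracks file above.
lines = list(offsets([93, 210], ["first track", "second track"]))
print(lines)  # ['00:00 first track', '01:33 second track']
```

Each track's start time is just the running sum of the durations of all tracks before it.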
album-splitter
album-splitter

Use album-splitter to automatically split any audio file (youtube videos, albums, podcasts, audiobooks, tapes, vinyls) into separate tracks starting from timestamps. album-splitter will also take care of tagging each part with the correct metadata. If your file is on YouTube, you can download it automatically.

Common use cases covered:

- music album on YouTube to download and split into tracks
- full audiobook to split into chapters
- music tape/cassette rip to split into tracks
- digitalized vinyl to split into tracks

All you need is:

- the file to split OR a URL of a YouTube video
- timestamps for each track, for example:

00:06 - When I Was Young
03:35 Dogs Eating Dogs

How to install

First time only:

1. Install ffmpeg
   - Linux: apt install ffmpeg (or equivalent)
   - Windows: Official website
   - MacOS: Official website or brew install ffmpeg
2. Install Python 3 (a version newer than or equal to 3.7 is required)
   - Linux: apt install python3 (or equivalent)
   - Windows: Official website
   - MacOS: You should have it already installed
3. Open your terminal app
4. Create a virtual environment: python3 -m venv venv
5. Activate the virtual environment
   - Linux/MacOS: source venv/bin/activate
   - Windows: ./venv/Scripts/activate
6. Install album-splitter: python3 -m pip install album-splitter
7. You are ready to go!

After the first time:

1. Open your terminal app
2. Optional, update album-splitter: python3 -m pip install --upgrade album-splitter
3. Activate the virtual environment
   - Linux/MacOS: source venv/bin/activate
   - Windows: ./venv/Scripts/activate
4. You are ready to go!

Quick guide (from a local album)

1. Create a copy of tracks.txt.example, rename it as tracks.txt
2. Open tracks.txt
3. Add your tracks timestamps info in this format: <start-time> - <title>
   - a track on each line
   - see the Examples section; many other formats are supported
4. Run the script
   - basic usage: python -m album_splitter --file <path/to/your/album.mp3>
   - more in the Examples section
5. Wait for the splitting process to complete
6. You will find your tracks in the ./splits/ folder

Quick guide (from a YouTube video)

1. Copy the YouTube URL of the album you want to
download and split
2. Find in the YouTube comments the tracklist with start times and titles
3. Create a copy of tracks.txt.example, rename it as tracks.txt
4. Open tracks.txt
5. Copy the tracklist into the file, adjusting it to one of the supported formats: <start-time> - <title>
   - a track on each line
6. Run the script
   - basic usage: python -m album_splitter -yt <youtube_url>
   - more in the Examples section
7. Wait for the download and for the conversion
8. Wait for the splitting process to complete
9. You will find your tracks in the ./splits folder

Output Format

The format of the output tracks is the same as the format of the input (same extension, same codec, same bitrate, ...): it simply does a copy of the codec. If you want to convert the output tracks to a different format, you can do this using additional tools. For example, to convert from .wav to .mp3 you can use FFmpeg. Here is how you can do it on Linux/macOS; this or this might help for Windows instead. You can adapt such snippets to do other processing, such as changing the bitrate.

Examples

Downloading and splitting an album from YouTube

1. This is the album I want to download and split: https://www.youtube.com/watch?v=p_uqD4ng9hw
2. I find the tracklist in the comments and copy it into tracks.txt, adjusting it to a supported format for the tracklist:

00:06 - When I Was Young
...
14:48 - Pretty Little Girl

3. I execute python -m album_splitter -yt "https://www.youtube.com/watch?v=p_uqD4ng9hw" and wait
4. Once the process is complete I open ./splits and find all my songs:

When I Was Young.mp3
...
Pretty Little Girl.mp3

These songs are already mp3-tagged with their track name and track number, but not their author or their album, since we have not specified them.

Splitting and tagging a local file with Author and Album

1. I somehow got the file DogsEatingDogsAlbum.mp3 that I want to split
2. I set the tracklist in tracks.txt (same tracks as before)
3. I execute python -m album_splitter --file DogsEatingDogsAlbum.mp3 --album "Dogs Eating Dogs" --artist "blink-182" --folder "2012 - Dogs Eating Dogs"

The software will split the album and mp3-tag each track with the author and the album name passed as parameters (as well as the track number and name). It will also put the files in the folder passed as an argument (instead of the default ./splits folder).

Supported formats for the track list (tracks.txt)

These are just some examples; find more in tracks.txt.example.

[hh:]mm:ss - Title
Title - [hh:]mm:ss
Title [hh:]mm:ss

To just see which data would be extracted from the tracklist, use the option --dry-run.

Available Options

To get the full help and all the available options, run:

python -m album_splitter --help

Need help?

If you need any help, just create an Issue or send me an email at the address you can find on my profile.

Updating

To update to the latest version of album-splitter:

python3 -m pip install --upgrade album-splitter

Want to help?

If you want to improve the code and submit a pull request, please feel free to do so.

License

GPL v3
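To make the "[hh:]mm:ss - Title" tracklist format concrete, here is a sketch of how one such line can be turned into a start offset in seconds plus a title. The parse() helper and its regex are illustrative assumptions, not album-splitter's actual parser, which supports several more formats:

```python
import re

# Matches "[hh:]mm:ss - Title"; the hours part is optional.
TRACK_LINE = re.compile(r'^(?:(\d+):)?(\d+):(\d+)\s*-\s*(.+)$')

def parse(line):
    """Return (start_seconds, title) for one tracklist line."""
    match = TRACK_LINE.match(line.strip())
    if not match:
        raise ValueError(f"unrecognised tracklist line: {line!r}")
    hours, minutes, seconds, title = match.groups()
    start = (int(hours or 0) * 60 + int(minutes)) * 60 + int(seconds)
    return start, title

print(parse("00:06 - When I Was Young"))      # (6, 'When I Was Young')
print(parse("1:14:48 - Pretty Little Girl"))  # (4488, 'Pretty Little Girl')
```

With every line reduced to (start_seconds, title), each track's end time is simply the next track's start time (or the end of the file for the last track).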
albumthief
UNKNOWN
albus
This Python data mapper library was created with simplicity in mind, both for learning it and for using it. As opposed to other ORM options, it will not be as powerful and flexible in terms of queries. That is the trade-off to make it simpler to learn and use.

Usage

Defining models is done using property descriptors as you would for SQLAlchemy and Django. Querying them is completely different in regards to how we avoid overriding Python operators or using dynamic keyword arguments.

from albus.model import Model
from albus import field


class Book(Model):
    author = field.StringField()
    title = field.StringField()
    year = field.IntegerField()


query = Book.new_query()
query.filter_equals('author', 'John Doe')
query.filter_greater('year', 2000)

results = query.select()
for current in results:
    print('Found:', current.title)
albylib
No description available on PyPI.
alcali
Alcali

What's Alcali?

Alcali is a web based tool for monitoring and administrating Saltstack Salt.

Features

- Get notified in real time when a job is created, updated or has returned.
- Store your jobs results by leveraging the master_job_store setting with the database master returner.
- Check your minions' conformity to their highstate or any state.
- Keep track of custom state at a glance.
- Use the custom auth module to login into both Alcali and the Salt-api using JWT.
- LDAP and Google OAuth2 authentication.

Try it!

If you just want to have a look, just clone the repository and use docker-compose:

git clone https://github.com/latenighttales/alcali.git
cd alcali
docker compose up --scale minion=2

Once you see minions waiting to be approved by the master, you're good to go:

...
minion_1 | [ERROR ] The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
minion_1 | [INFO ] Waiting 10 seconds before retry.
...

Just connect on http://127.0.0.1:8000, login with:

username: admin
password: password

and follow the walkthrough.

Installation

The easiest way to install it is to use the salt formula. Make sure to check the installation docs first!

Screenshots

Dashboard, Minion Details, Job Details. More here.

License

MIT

Image: Jean-Philippe WMFr, derivative work: User:Benoit Rochon, CC BY-SA 4.0

Contributing

If you'd like to contribute, check the contribute documentation on how to install a dev environment and submit PRs! And if you like this project, consider donating: via GitHub Sponsors, or

Changelog

[3006.3.0] dev

- int: bugfix and deps update
- feat: i18n (#353)

3003.1.0 - 2021-04-23

- int: updated deps (#317)
- fix: py36 compatible (#306)
- fix: non-standard-minion-response (#281)
- int: offline version (#225)

3000.1.0 - 2020-04-26

- use salt 3000
- updated deps (#185)
- fix: UI errors (#187)
- fix: users are able to reset their pw (#184)
- fix: responsive layout (#178)

2019.2.4 - 2020-02-14

- fix: password update (#164)
- update deps 20200207 (#155)
- fix: Less restrictive minion_id regex and error mgmt
(#140)

2019.2.3 - 2019-12-10

- feat: Google OAuth2 (#130)
- updated deps (#111)
- feat: Group jobs by jid (#106)
- int: error mgmt (#105)
- fix: favicon and boolrepr (#102)
- fix: removed useless icon files, fixed boolean repr (#100)
- fix: state render, Layout removed admin
- feat: predefined jobs (#98)
- fix: Boolean repr (#97)
- feat: LDAP auth backend (#84)
- fix: async run, updated deps (#82)
- feat: fold/unfold all
- feat: display current version in gui and cli dynamically (#76)
- fix: timezone, success bool for custom modules (#75)
- async link: resolve #69 (#74)
- feat: schedule disable/enable (#72)
- fix: schedules, keys, updated vuetify (#71)
- int: updated docs, added contribute section, screenshots (#62)

2019.2.2 - 2019-09-21

- use slim docker image
- Added rest auth
- Added pillar override
- Updated deps

2019.2.1 - 2019-09-21

- Frontend refactor
alcazar-web-framework
Alcazar

Alcazar is a Python Web Framework built for learning purposes. The plan is to learn how frameworks are built by implementing their features, writing blog posts about them and keeping the codebase as simple as possible.

It is a WSGI framework and can be used with any WSGI application server such as Gunicorn.

Inspiration

I was inspired to make a web framework after reading Florimond Monca's blog post about how he built a web framework and became an open source maintainer. He wrote about how thrilling the experience has been for him, so I decided I would give it a try as well. Thank you, Florimond and of course Kenneth Reitz, who in turn inspired Florimond to write a framework with his own framework Responder. Go check out both Bocadillo by Florimond Monca and Responder by Kenneth Reitz. If you like them, show some love by starring their repos.

Blog posts

- Part I: Intro, API, request handlers, routing (both simple and parameterized)
- Part II: class based handlers, route overlap check, unit tests
- Part III: templates support, test client, django way of adding routes
- Part IV: custom exception handler, support for static files, middleware

Quick Start

Install it:

pip install alcazar-web-framework

Basic Usage:

# app.py
from alcazar import Alcazar

app = Alcazar()


@app.route("/")
def home(req, resp):
    resp.text = "Hello, this is a home page."


@app.route("/about")
def about_page(req, resp):
    resp.text = "Hello, this is an about page."


@app.route("/{age:d}")
def tell_age(req, resp, age):
    resp.text = f"Your age is {age}"


@app.route("/{name:l}")
class GreetingHandler:
    def get(self, req, resp, name):
        resp.text = f"Hello, {name}"


@app.route("/show/template")
def handler_with_template(req, resp):
    resp.html = app.template("example.html", context={"title": "Awesome Framework", "body": "welcome to the future!"})


@app.route("/json")
def json_handler(req, resp):
    resp.json = {"this": "is JSON"}


@app.route("/custom")
def custom_response(req, resp):
    resp.body = b'any other body'
    resp.content_type = "text/plain"

Start:

gunicorn app:app

Handlers

If you use class based handlers, only the
methods that you implement will be allowed:

@app.route("/{name:l}")
class GreetingHandler:
    def get(self, req, resp, name):
        resp.text = f"Hello, {name}"

This handler will only allow GET requests. That is, POST and others will be rejected. The same thing can be done with function based handlers in the following way:

@app.route("/", methods=["get"])
def home(req, resp):
    resp.text = "Hello, this is a home page."

Note that if you specify methods for class based handlers, they will be ignored.

Unit Tests

The recommended way of writing unit tests is with pytest. There are two built in fixtures that you may want to use when writing unit tests with Alcazar. The first one is app, which is an instance of the main Alcazar class:

def test_route_overlap_throws_exception(app):
    @app.route("/")
    def home(req, resp):
        resp.text = "Welcome Home."

    with pytest.raises(AssertionError):
        @app.route("/")
        def home2(req, resp):
            resp.text = "Welcome Home2."

The other one is client, which you can use to send HTTP requests to your handlers. It is based on the famous requests library and should feel very familiar:

def test_parameterized_route(app, client):
    @app.route("/{name}")
    def hello(req, resp, name):
        resp.text = f"hey {name}"

    assert client.get(url("/matthew")).text == "hey matthew"

Note that there is a url() function used. It is used to generate the absolute url of the request given a relative url. Import it before usage:

from alcazar.utils.tests import url

Templates

The default folder for templates is templates.
You can change it when initializing the main Alcazar() class:

app = Alcazar(templates_dir="templates_dir_name")

Then you can use HTML files in that folder like so in a handler:

@app.route("/show/template")
def handler_with_template(req, resp):
    resp.html = app.template("example.html", context={"title": "Awesome Framework", "body": "welcome to the future!"})

Static Files

Just like templates, the default folder for static files is static, and you can override it:

app = Alcazar(static_dir="static_dir_name")

Then you can use the files inside this folder in HTML files:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>{{ title }}</title>
    <link href="/static/main.css" rel="stylesheet" type="text/css">
</head>
<body>
    <h1>{{ body }}</h1>
    <p>This is a paragraph</p>
</body>
</html>

Custom Exception Handler

Sometimes, depending on the exception raised, you may want to do a certain action. For such cases, you can register an exception handler:

def on_exception(req, resp, exception):
    if isinstance(exception, HTTPError):
        if exception.status == 404:
            resp.text = "Unfortunately the thing you were looking for was not found"
        else:
            resp.text = str(exception)
    else:
        # unexpected exceptions
        if app.debug:
            debug_exception_handler(req, resp, exception)
        else:
            print("These unexpected exceptions should be logged.")

app = Alcazar(debug=False)
app.add_exception_handler(on_exception)

This exception handler will catch 404 HTTPErrors and change the text to "Unfortunately the thing you were looking for was not found". For other HTTPErrors, it will simply show the exception message. If the raised exception is not an HTTPError and if debug is set to True, it will show the exception and its traceback.
Otherwise, it will log it.

Middleware

You can create custom middleware classes by inheriting from the alcazar.middleware.Middleware class and overriding its two methods that are called before and after each request:

from alcazar import Alcazar
from alcazar.middleware import Middleware

app = Alcazar()


class SimpleCustomMiddleware(Middleware):
    def process_request(self, req):
        print("Before dispatch", req.url)

    def process_response(self, req, res):
        print("After dispatch", req.url)


app.add_middleware(SimpleCustomMiddleware)

Features

- WSGI compatible
- Basic and parameterized routing
- Class based handlers
- Test Client
- Support for templates
- Support for static files
- Custom exception handler
- Middleware

Note

It is extremely raw and will hopefully keep improving. If you are interested in knowing how a particular feature is implemented in other frameworks, please open an issue and we will hopefully implement and explain it in a blog post.
alcf
No description available on PyPI.
alchemer
No description available on PyPI.
alchemical
alchemical

Modern SQLAlchemy simplified.

Resources

- Documentation
- PyPI
- Change Log
alchemical-queues
No description available on PyPI.
alchemical-storage
alchemical-storage

alchemical-storage is a library intended to bridge CRUD operations with SQLAlchemy query constructs.

Install

pip install alchemical-storage

Basic Usage

alchemical-storage assumes that you have set up a session (or scoped_session) for your database. This is assumed to have been imported as session in the following example. The table for the defined model is also assumed to be in the database.

Set up the model:

"""package/models.py"""
from sqlalchemy import orm


class Model(orm.DeclarativeBase):
    """Model class"""
    __tablename__ = 'models'

    attr: orm.Mapped[int] = orm.mapped_column(primary_key=True)
    attr2: orm.Mapped[int]
    attr3: orm.Mapped[str]

Set up the storage schema. The Meta class in the schema should set load_instance to True:

"""package/schema.py"""
from marshmallow_sqlalchemy import SQLAlchemyAutoSchema

from package import Model


class ModelSchema(SQLAlchemyAutoSchema):
    """Model storage schema"""
    class Meta:
        model = Model
        load_instance = True

Create a DatabaseStorage instance. Set the primary_key keyword argument (defaults to 'slug') to the primary key of the model:

"""___main___.py"""
from package import models
from package.schema import ModelSchema

storage = DatabaseStorage(
    session,
    models.Model,
    ModelSchema,
    primary_key="attr",
)

Use the DatabaseStorage instance:

# __main__.py continued...
storage.get(1)  # Gets a record from the database
storage.put(2, {'attr2': 1, 'attr3': 'test'})  # Puts a record to the database
storage.patch(1, {'attr2': 42})  # Update a record in the database
storage.delete(1)  # Delete a record from the database
storage.index()  # Get an index of records from the database

Commit changes to the database:

session.commit()

License

MIT License. See LICENSE file for more details.
alchemiscale
alchemiscale

alchemiscale: a high-throughput alchemical free energy execution system for use with HPC, cloud, bare metal, and Folding@Home
alchemist
UNKNOWN