package
package-description
allmlmodelscomparing
No description available on PyPI.
allms
## About

allms is a versatile and powerful library designed to streamline the process of querying Large Language Models (LLMs) 🤖💬

Developed by the Allegro engineers, allms is based on popular libraries like transformers, pydantic, and langchain. It takes care of the boring boilerplate code you write around your LLM applications, quickly enabling you to prototype ideas, and eventually helping you to scale up for production use-cases!

Among the most notable allms features, you will find:

- 😊 Simple and User-Friendly Interface: The module offers an intuitive and easy-to-use interface, making it straightforward to work with the model.
- 🔀 Asynchronous Querying: Requests to the model are processed asynchronously by default, ensuring efficient and non-blocking interactions.
- 🔄 Automatic Retrying Mechanism: The module includes an automatic retrying mechanism, which helps handle transient errors and ensures that queries to the model are robust.
- 🛠️ Error Handling and Management: Errors that may occur during interactions with the model are handled and managed gracefully, providing informative error messages and potential recovery options.
- ⚙️ Output Parsing: The module simplifies the process of defining the model's output format as well as parsing and working with it, allowing you to easily extract the information you need.

## Documentation

Full documentation available at allms.allegro.tech. Get familiar with allms 🚀: introductory jupyter notebook.

## Quickstart

### Installation 🚧

Install the package via pip:

```
pip install allms
```

### Basic Usage ⭐

Configure endpoint credentials and start querying the model with any prompt:

```python
from allms.models import AzureOpenAIModel
from allms.domain.configuration import AzureOpenAIConfiguration

configuration = AzureOpenAIConfiguration(
    api_key="your-secret-api-key",
    base_url="https://endpoint.openai.azure.com/",
    api_version="2023-03-15-preview",
    deployment="gpt-35-turbo",
    model_name="gpt-3.5-turbo",
)

gpt_model = AzureOpenAIModel(config=configuration)
gpt_response = gpt_model.generate(prompt="Plan me a 3-day holiday trip to Italy")
```

You can also pass a system prompt:

```python
gpt_response = gpt_model.generate(
    system_prompt="You are an AI assistant acting as a trip planner",
    prompt="Plan me a 3-day holiday trip to Italy",
)
```

### Advanced Usage 🔥

#### Batch Querying and Symbolic Variables

If you want to generate responses for a batch of examples, you can achieve this by preparing a prompt with symbolic variables and providing input data that will be injected into the prompt. There can be more than one symbolic variable.

```python
positive_review_0 = "Very good coffee, lightly roasted, with good aroma and taste. The taste of sourness is barely noticeable (which is good because I don't like sour coffees). After grinding, the aroma spreads throughout the room. I recommend it to all those who do not like strongly roasted and pitch-black coffees. A very good solution is to close the package with string, which allows you to preserve the aroma and freshness."
positive_review_1 = "Delicious coffee!! Delicate, just the way I like it, and the smell after opening is amazing. It smells freshly roasted. Faithful to Lavazza coffee for years, I decided to look for other flavors. Based on the reviews, I blindly bought it and it was a 10-shot, it outperformed Lavazze in taste. For me the best."
negative_review = "Marketing is doing its job and I was tempted too, but this coffee is nothing above the level of coffees from the supermarket. And the method of brewing or grinding does not help here. The coffee is simply weak - both in terms of strength and taste. I do not recommend."

prompt = "You'll be provided with a review of a coffee. Decide if the review is positive or negative. Review: {review}"
input_data = [
    InputData(input_mappings={"review": positive_review_0}, id="0"),
    InputData(input_mappings={"review": positive_review_1}, id="1"),
    InputData(input_mappings={"review": negative_review}, id="2"),
]

responses = model.generate(prompt=prompt, input_data=input_data)
# >>> {f"review_id={response.input_data.id}": response.response for response in responses}
# {
#     'review_id=0': 'The review is positive.',
#     'review_id=1': 'The review is positive.',
#     'review_id=2': 'The review is negative.'
# }
```

#### Forcing Structured Output Format

Through pydantic integration, in allms you can pass an output dataclass and force the LLM to provide you the response in a structured way.

```python
from pydantic import BaseModel, Field

class ReviewOutputDataModel(BaseModel):
    summary: str = Field(description="Summary of a product description")
    should_buy: bool = Field(description="Recommendation whether I should buy the product or not")
    brand_name: str = Field(description="Brand of the coffee")
    aroma: str = Field(description="Description of the coffee aroma")
    cons: list[str] = Field(description="List of cons of the coffee")

review = "Marketing is doing its job and I was tempted too, but this Blue Orca coffee is nothing above the level of coffees from the supermarket. And the method of brewing or grinding does not help here. The coffee is simply weak - both in terms of strength and taste. I do not recommend."
prompt = "Summarize review of the coffee. Review: {review}"
input_data = [InputData(input_mappings={"review": review}, id="0")]

responses = model.generate(
    prompt=prompt,
    input_data=input_data,
    output_data_model_class=ReviewOutputDataModel,
)
response = responses[0].response

# >>> type(response)
# ReviewOutputDataModel
#
# >>> response.should_buy
# False
#
# >>> response.brand_name
# "Blue Orca"
#
# >>> response.aroma
# "Not mentioned in the review"
#
# >>> response.cons
# ['Weak in terms of strength', 'Weak in terms of taste']
```

## Local Development 🛠️

### Installation from the source

We assume that you have python 3.10.* installed on your machine. You can set it up using pyenv (How to install pyenv on MacOS). To install the allms env locally:

1. Activate your pyenv;
2. Install Poetry via:

```
make install-poetry
```

3. Install allms dependencies with the command:

```
make install-env
```

Now you can use this venv for development. You can activate it in your shell by running:

```
make activate-env  # or simply, poetry shell
```

### Tests

In order to execute tests, run:

```
make tests
```

### Updating the documentation

Run mkdocs serve to serve a local instance of the documentation. Modify the content of the docs directory to update the documentation. The updated content will be deployed via the GitHub action .github/workflows/docs.yml

### Make a new release

When a new version of allms is ready to be released, do the following operations:

1. Merge to master the dev branch in which the new version has been specified:
   - In this branch, version under the [tool.poetry] section in pyproject.toml should be updated, e.g. 0.1.0;
   - Update the CHANGELOG, specifying the new release.
2. Tag the new master with the name of the newest version, e.g. v0.1.0.
3. Publish package to PyPI:
   - Go to Actions → Manual Publish To PyPI;
   - Select "master" as branch and click Run workflow;
   - If successful, you will find the package under # TODO: open-source.
4. Make a GitHub release:
   - Go to Releases → Draft a new release;
   - Select the recently created tag in the Choose a tag window;
   - Copy/paste all the content present in the CHANGELOG under the version you are about to release;
   - Upload allms-<NEW-VERSION>.whl and allms-<NEW-VERSION>.tar.gz as assets;
   - Click Publish release.
allmusic
AllMusic

An unofficial scraper for AllMusic reviews.

## Installation

```
pip install allmusic
```

Alternatively, clone directly from master to run with the freshest bugs:

```
pip install git+git://github.com/fortes/allmusic.git@master
```

## Usage

```python
>>> import allmusic
>>> review = allmusic.getAlbumReviewForAllMusicUrl('https://www.allmusic.com/album/beauty-and-the-beat-mw0000736440')
>>> print(review.album)
'Beauty and the Beat'
>>> print(review.rating)
9
```

## Tests

There is one test, you can try it:

```
python -m unittest
```

## License

MIT

## Contributions

Please do
allmychanges
UNKNOWN
allmydata-tahoe
Tahoe-LAFS is a Free and Open decentralized cloud storage system. It distributes your data across multiple servers. Even if some of the servers fail or are taken over by an attacker, the entire file store continues to function correctly, preserving your privacy and security.

To get started please see quickstart.rst in the docs directory.

LICENCE

Copyright 2006-2015 The Tahoe-LAFS Software Foundation

You may use this package under the GNU General Public License, version 2 or, at your option, any later version. You may use this package under the Transitive Grace Period Public Licence, version 1.0, or at your option, any later version. (You may choose to use this package under the terms of either licence, at your option.) See the file COPYING.GPL for the terms of the GNU General Public License, version 2. See the file COPYING.TGPPL.rst for the terms of the Transitive Grace Period Public Licence, version 1.0. See TGPPL.PDF for why the TGPPL exists, graphically illustrated on three slides.
allneo
Project description

Developed by: Roni Das. Date: 30/05/2021.

allneo is a Python package that provides a fast, flexible, and expressive toolkit that helps you work with the neo4j graph database easily and intuitively. It aims to be the fundamental high-level building block for doing practical, real-world operations and data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open-source neo4j operation and manipulation tool available in Python, and we are working toward this goal. allneo is well suited for many different kinds of neo4j operations.

Changelog (30/05/2021, version 12.0.0):
- added function count_node for getting the node count
- added exception handling for invalid hostname, username and password
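The changelog above names a count_node helper plus validation of hostname, username and password, but the README shows no real signature, so the following is a purely hypothetical sketch:

```python
# Hypothetical usage only - allneo's actual API is not documented in this README.
from allneo import count_node  # assumed import path

# per the changelog, invalid hostname/username/password raise an exception
total = count_node(hostname="localhost", username="neo4j", password="secret")
print(total)
```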
allneo4j
Failed to fetch description. HTTP Status Code: 404
allneo4j-pkg-ronidas39
neo4j operations
allocation-language
allocation_language

Language intended to construct the financial behaviour of (re)insurance contracts. Complete documentation of all methods is still pending because this is still in development.

## Installation

```
$ pip install allocation_language
```

## Examples

```python
import random

from allocation_language import make_contract
from allocation_language.alloc_lang_data_containers import converters

contract = make_contract.make_contract_from_text("alloc @'claim' $name_1 @'liable';")
contract.update('name_1', 50)

def loss_data():
    for i in range(10):
        yield {
            'id': i,
            'claim': random.randint(1, 1000),  # randint requires ints; the original used 1e3
            'liable': 0,
        }

losses = converters.dict_iter_to_event_iter(loss_data())
results = contract.evaluate_stream(losses)
for x in results:
    print(x)
```

The above code is an example of feeding a generator of loss data into a contract. The "alloc @'claim' $name_1 @'liable';" command translates into English as 'move as much as possible, limited to name_1's value, from the claim field into the liable field'. This means it applies an occurrence limit equal to name_1. The contract.update method allows modification of any named variables in a contract; there can be multiple.
allocator
How can we efficiently collect data from geographically distributed locations? If the data collection is being crowd-sourced, then we may want to exploit the fact that workers are geographically distributed. One simple heuristic to do so is to order the locations by distance for each worker (with some task registration backend). If you have hired workers (or rented drones) who you can send to different locations, then you must split the tasks across workers (drones), and plan the 'shortest' routes for each, a la the Traveling Salesman Problem (TSP). This is a problem that companies like FedEx etc. solve all the time. Since there are no computationally feasible solutions for solving for the global minimum, one heuristic solution is to split the locations into clusters of points that are close to each other (ideally, we want the clusters to be 'balanced'), and then to estimate a TSP solution for each cluster.

The package provides a simple way to implement these solutions. Broadly, it provides three kinds of functions:

- Sort by Distance: Produces an ordered list of workers for each point or an ordered list of points for each worker.
- Cluster the Points: Clusters the points into n_worker groups.
- Shortest Path: Order points within a cluster (or any small number of points) into a path or itinerary.

The package also provides access to four different kinds of distance functions for calculating the distance matrices that underlie these functions:

- Euclidean Distance: use option -d euclidean; similar to the Haversine distance within the same UTM zone.
- Haversine Distance: use option -d haversine.
- OSRM Distance: use option -d osrm. Neither Haversine nor Euclidean distance takes account of the actual road network or the traffic. To use actual travel time, use the Open Source Routing Machine API. A maximum of 100 points can be passed to the function if we use the public server. However, you can set up your own private OSRM server with --max-table-size to specify the maximum number of points.
- Google Distance Matrix API: use option -d google. This option is available in sort_by_distance and cluster_kahip only, because the Google Distance Matrix API has very tight usage limits. Please look at the limitations here.

Related Package

To sample locations randomly on the streets, check out geo_sampling.

Application

Missing Women on the streets of Delhi. See women count.

Install

```
pip install allocator
```

Functions

Sort By Distance

Cluster

Cluster data collection locations using k-means (clustering) or KaHIP (graph partitioning). To check which of the algorithms produces more cohesive, balanced clusters, run Compare K-means to KaHIP.

k-means. Examples:

```
python -m allocator.cluster_kmeans -n 10 allocator/examples/chonburi-roads-1k.csv --plot
```

KaHIP allocator

Shortest Path

These functions can be used to find the estimated shortest path across all the locations in a cluster. We expose several ways of getting the 'shortest' path: via MST (Christofides algorithm), via Google OR-Tools, and via routing APIs:

- Approximate TSP using MST
- Google OR-Tools TSP solver shortest path
- Google Maps Directions API shortest path
- OSRM Trip API shortest path

Documentation

Documentation available at: https://allocator.readthedocs.io/en/latest/

Authors

Suriyan Laohaprapanon and Gaurav Sood

Contributor Code of Conduct

The project welcomes contributions from everyone! In fact, it depends on it. To maintain this welcoming atmosphere, and to collaborate in a fun and productive way, we expect contributors to the project to abide by the Contributor Code of Conduct.

License

The package is released under the MIT License.
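For intuition about the -d haversine option above: the great-circle distance it refers to is the standard haversine formula. A small standalone sketch (an illustration of the metric, not allocator's internal code):

```python
# Haversine great-circle distance in km - the metric behind `-d haversine`.
# This is a standalone illustration, not code from the allocator package.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km is Earth's mean radius

print(haversine_km(28.61, 77.21, 28.70, 77.10))  # two points around Delhi
```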
allocGPU
allocGPU

A small Python package that automatically sets multiple least-loaded GPUs.

## How to use the package

After installing the package by

```
pip install allocGPU
```

import allocGPU at the beginning of your main.py file:

```python
import allocGPU
```

You will receive the following message in your terminal:

```
Number of GPUs:
```

You can input the number of GPUs and press Enter to run the program.
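The README doesn't document how the package picks the least-loaded devices. As an illustration only (not allocGPU's actual code), a least-loaded selection can be sketched by querying nvidia-smi and exporting CUDA_VISIBLE_DEVICES:

```python
# Illustrative sketch - not allocGPU's implementation.
# Picks the n GPUs with the least used memory via nvidia-smi.
import os
import subprocess

def set_least_loaded_gpus(n: int) -> None:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    # Each output line looks like "0, 1234" (GPU index, MiB used).
    gpus = sorted(
        (int(used), int(idx))
        for idx, used in (line.split(",") for line in out.strip().splitlines())
    )
    chosen = [str(idx) for _, idx in gpus[:n]]
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(chosen)

set_least_loaded_gpus(2)  # e.g. expose the two least-loaded GPUs
```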
allocGPUs
Failed to fetch description. HTTP Status Code: 404
allocine
Unofficial Python wrapper for allocine.fr
allocine-seances
## Package

https://pypi.org/project/allocine-seances/

```
pip install allocine-seances
```

## Description

Goal: retrieve movie theater showtimes.

The methods get_top_villes(), get_departements() and get_circuit() return a location id. The method get_cinema(id_location) returns a cinema id for a given location. The method get_showtime(id_cinema, day_shift) returns the list of showtimes for a given cinema and day; the day_shift parameter (a non-negative integer) is the offset in days from the current date.

## Import

```python
from allocineAPI.allocineAPI import allocineAPI
api = allocineAPI()
```

## Method list

List of cities:

```python
ret = api.get_top_villes()
# {'id': 'ville-87860', 'name': 'Aix-en-Provence'}
# {'id': 'ville-96943', 'name': 'Bordeaux'}
# {'id': 'ville-85268', 'name': 'Cannes'}
# {'id': 'ville-110514', 'name': 'Clermont-Ferrand'}
# ...
```

List of departments:

```python
ret = api.get_departements()
# {'id': 'departement-83191', 'name': 'Ain'}
# {'id': 'departement-83178', 'name': 'Aisne'}
# {'id': 'departement-83111', 'name': 'Allier'}
# {'id': 'departement-83185', 'name': 'Alpes de Haute-Provence'}
# ...
```

List of theater chains:

```python
ret = api.get_circuit()
# {'id': 'circuit-81002', 'name': 'Pathé Cinémas'}
# {'id': 'circuit-81005', 'name': 'CGR'}
# {'id': 'circuit-4750', 'name': 'UGC'}
# {'id': 'circuit-81027', 'name': 'Megarama'}
# ...
```

List of cinemas:

```python
cinemas = api.get_cinema("departement-83191")
# {'id': 'B0242', 'name': 'Gaumont Stade de France - 4DX', 'address': '8, rue du Mondial 98 93210 Saint-Denis'}
# {'id': 'C0037', 'name': 'Pathé Alésia - Dolby Cinema', 'address': '73 avenue du Général Leclerc 75014 Paris 14e arrondissement'}
# {'id': 'C0116', 'name': 'Gaumont Aquaboulevard - 4DX', 'address': '8-16, rue du Colonel-Pierre-Avia 75015 Paris 13e arrondissement'}
# {'id': 'C0161', 'name': 'Pathé Convention', 'address': '27, rue Alain-Chartier 75015 Paris'}
# ...
```

List of showtimes:

```python
data = api.get_showtime("W2920")
# {'title': 'Les Aventures de Ricky', 'duration': '1h 25min', 'VF': [], 'VO': ['2023-04-15T13:45:00', '2023-04-15T15:45:00']}
# {'title': "Donjons & Dragons : L'Honneur des voleurs", 'duration': '2h 14min', 'VF': [], 'VO': ['2023-04-15T10:30:00', '2023-04-15T13:45:00', '2023-04-15T17:00:00', '2023-04-15T18:25:00', '2023-04-15T21:00:00']}
# {'title': 'Une histoire d’amour', 'duration': '1h 30min', 'VF': [], 'VO': ['2023-04-15T14:40:00', '2023-04-15T16:45:00', '2023-04-15T19:50:00', '2023-04-15T21:55:00']}
# {'title': 'Princes et princesses : le spectacle au cinéma', 'duration': '1h 00min', 'VF': [], 'VO': ['2023-04-15T10:50:00']}
# ...
```
allocine-wrapper
Python Allocine
===============

Python Allocine provides a generic wrapper for Allocine API v3. Typical usage often looks like this::

    #!/usr/bin/env python
    from allocine.Allocine import Allocine

    results = Allocine().search("the godfather")
    movie = results.movies[0]
    print(movie.title)
    movie.getInfo()
    print(movie.synopsisShort)

What this wrapper can do
========================

This API allows you to query Allocine:

* Search
* Access Person & Movies & Reviews
allo-client
Allo client

Python 3 package enabling remote maintenance and automatic updates of Libriciel-SCOP software.

- Prompts for the client identifier
- Prompts for the product code
- Pairs with a PIN code via the Allo server
- Requests a remote-maintenance token
- Clones the GitLab repo

Once activated, it is possible to:

- Start remote maintenance via the "Teleport" system
- Update automatically
- Roll back an automatic update

## Network communications

Allo-client must be able to communicate with: allo.dev.libriciel.fr:443

## Installation

All of the following commands must be run as the root user.

Supported OSes: RHEL 7, RHEL 8, CentOS 7, CentOS 8, Ubuntu 18.04 LTS, Ubuntu 20.04 LTS, Debian 8, Debian 9.

Make sure the default locale is UTF-8 and not ASCII or POSIX. This is configured by default on a standard installation, but not on a Docker image, for example.

### Locale matters

In a Docker environment, the terminal's default locale must be set to UTF-8. Here are the commands needed per OS (OS versions not listed need no extra commands):

- RHEL / CentOS 7: `localedef -i fr_FR -f UTF-8 C.UTF-8`
- Ubuntu 18: `apt install locales`

Finally, in all cases run the following commands:

```
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
```

### RHEL / CentOS prerequisites

For RHEL / CentOS 7:

```
yum install epel-release
yum install python36
export PATH=$PATH:/usr/local/bin
localedef -i fr_FR -f UTF-8 C.UTF-8
```

For RHEL / CentOS 8:

```
yum install python36
export PATH=$PATH:/usr/local/bin
```

### Debian / Ubuntu prerequisites

```
apt update && apt install python3-pip
```

### Installation

```
pip3 install allo-client.beta -i https://pypi.org/simple/
```

## Usage

Just run the `allo` command and let it guide you. To re-run the installation of system dependencies, use the command `allo install`. For use without an interactive prompt, use `allo cli` <-- work in progress.

## Remaining work

- Update via Teleport with Ansible
- Detection and retrieval of current software versions
- Automatic update of allo itself with Ansible
- Better error handling
- Handling of "special" cases (file modifications)
- Handling of version-specific upgrade / downgrade processes
allo-client.beta
Allo client

Python 3 package enabling remote maintenance and automatic updates of Libriciel-SCOP software.

- Prompts for the client identifier
- Prompts for the product code
- Pairs with a PIN code via the Allo server
- Requests a remote-maintenance token
- Clones the GitLab repo

Once activated, it is possible to:

- Start remote maintenance via the "Teleport" system
- Update automatically
- Roll back an automatic update

## Network communications

Allo-client must be able to communicate with: allo.dev.libriciel.fr:443

## Installation

All of the following commands must be run as the root user.

Supported OSes: RHEL 7, RHEL 8, CentOS 7, CentOS 8, Ubuntu 18.04 LTS, Ubuntu 20.04 LTS, Debian 8, Debian 9.

Make sure the default locale is UTF-8 and not ASCII or POSIX. This is configured by default on a standard installation, but not on a Docker image, for example.

### Locale matters

In a Docker environment, the terminal's default locale must be set to UTF-8. Here are the commands needed per OS (OS versions not listed need no extra commands):

- RHEL / CentOS 7: `localedef -i fr_FR -f UTF-8 C.UTF-8`
- Ubuntu 18: `apt install locales`

Finally, in all cases run the following commands:

```
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
```

### RHEL / CentOS prerequisites

For RHEL / CentOS 7:

```
yum install epel-release
yum install python36
export PATH=$PATH:/usr/local/bin
localedef -i fr_FR -f UTF-8 C.UTF-8
```

For RHEL / CentOS 8:

```
yum install python36
export PATH=$PATH:/usr/local/bin
```

### Debian / Ubuntu prerequisites

```
apt update && apt install python3-pip
```

### Installation

```
pip3 install allo-client.beta -i https://pypi.org/simple/
```

## Usage

Just run the `allo` command and let it guide you. To re-run the installation of system dependencies, use the command `allo install`. For use without an interactive prompt, use `allo cli` <-- work in progress.

## Remaining work

- Update via Teleport with Ansible
- Detection and retrieval of current software versions
- Automatic update of allo itself with Ansible
- Better error handling
- Handling of "special" cases (file modifications)
- Handling of version-specific upgrade / downgrade processes
allocmd
Allora CLI

Find the comprehensive documentation here.
all-of-it
No description available on PyPI.
allofplos
No description available on PyPI.
all-of-pypi
This project installs all of PyPI.
allogate
Allogate

A very simple logging package.

## Usage example

```python
import allogate as logging

logging.VERBOSITY = 1
print(f"Verbosity = {logging.VERBOSITY}")
logging.pprint("Hello, this is a failure", 0)
logging.pprint("Hello, this is a success", 1)
logging.pprint("Hello, this is a warning", 2)
logging.pprint("Hello, this is an info", 3)
logging.pprint("Hello, this is verbose", 4)
logging.pprint("Hello, this is very verbose", 12)

# The same calls at increasing verbosity levels print progressively more.
for verbosity in (3, 5, 15):
    logging.VERBOSITY = verbosity
    print(f"Verbosity = {logging.VERBOSITY}")
    logging.pprint("Hello, this is a failure", 0)
    logging.pprint("Hello, this is a success", 1)
    logging.pprint("Hello, this is a warning", 2)
    logging.pprint("Hello, this is an info", 3)
    logging.pprint("Hello, this is verbose", 4)
    logging.pprint("Hello, this is very verbose", 12)

logging.pprint("Clamp me AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", 12)

def test_function():
    logging.pprint("Hello, this is a failure", 0)
    logging.pprint("Hello, this is a success", 1)
    logging.pprint("Hello, this is a warning", 2)
    logging.pprint("Hello, this is an info", 3)
    logging.pprint("Hello, this is verbose", 4)
    logging.pprint("Hello, this is very verbose", 12)

test_function()
```
allokation
Welcome to Allokation 👋

A python package that gets stock prices from yahoo finance and calculates how much of each stock you must buy to have an almost equal distribution between the stocks you want in your portfolio.

***Disclaimer*** NO FINANCIAL ADVICE - This library does NOT offer financial advice; it just calculates the amount of stocks you will need to buy based on stocks that YOU WILL INFORM and the market price of the day for these stocks, given by yahoo finance.

## Requires

python >= 3.x

## Install

install via pip:

```
pip install allokation
```

## Usage

It's quite simple to use this package: you just need to import the function allocate_money, pass a list of tickers you want and the available money you have to invest. If you want, you can also pass a list of the percentages of each stock you want in your portfolio. This list of percentages must have the same length as the tickers.

It will return a dict containing the allocations you need and the total money required for this portfolio (this total will be less than or equal to the available money you informed to the allocate_money function). For each stock, it returns the price that was used to calculate the portfolio, the amount of stocks you need to buy, the total money you need to buy this amount of this stock, and the percentage that this stock represents in your portfolio. For example:

```python
{
    'allocations': {
        'B3SA3': {'price': 58.33, 'amount': 3.0, 'total': 174.99, 'percentage': 18.14420803782506},
        'BBDC4': {'price': 21.97, 'amount': 9.0, 'total': 197.73, 'percentage': 20.50205300485256},
        'MGLU3': {'price': 88.77, 'amount': 2.0, 'total': 177.54, 'percentage': 18.408610177927088},
        'PETR4': {'price': 22.92, 'amount': 9.0, 'total': 206.28000000000003, 'percentage': 21.388577827547596},
        'VVAR3': {'price': 18.9, 'amount': 11.0, 'total': 207.89999999999998, 'percentage': 21.556550951847704}
    },
    'total_value': 964.4399999999999
}
```

## Example

Check out the example available in example/example.py to see it in action.

## Development Guide

Getting the project - clone this repository:

```
git@github.com:capaci/allokation.git
```

install dependencies:

```
pip install -r requirements.txt
pip install -r requirements-tests.txt
```

### Run tests

Unit tests:

```
pytest tests/
```

Coverage:

```
coverage run -m pytest tests/
coverage report
```

Linter:

```
flake8
```

## Author

👤 Rafael Capaci

- Website: capaci.dev
- Twitter: @capacirafael
- Github: @capaci
- LinkedIn: @rafaelcapaci

## Show your support

Give a ⭐️ if this project helped you!

This README was generated with ❤️ by readme-md-generator
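Putting the description above into code: a minimal sketch of calling allocate_money. The function name and the result structure come from the README, but the argument names and order are assumptions - check the package's example/example.py for the real signature:

```python
# Minimal sketch - argument order/names are assumptions, not the documented API.
from allokation import allocate_money

tickers = ["B3SA3", "BBDC4", "MGLU3", "PETR4", "VVAR3"]
result = allocate_money(tickers, 1000)  # 1000 = available money to invest

for ticker, allocation in result["allocations"].items():
    print(ticker, allocation["amount"], allocation["total"])
print("total:", result["total_value"])
```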
allora-wkr
Allora Worker Node CLI

Find the comprehensive documentation here.
allosaurus
No description available on PyPI.
allostery
UNKNOWN
allot
Like functools.singledispatch, but allows registering multiple functions for each class.

If a registered function decides it cannot handle the value after inspecting it, it can give up and let others try their luck:

```python
from allot import allot, Pass

@allot
def f(obj):
    return 'object'

@f.register(int)
def f_small_integer(obj):
    if obj >= 10:  # 10 or more: give up and let other handlers try
        return Pass
    return 'small integer'

assert f('a string') == 'object'
assert f(3) == 'small integer'
assert f(10) == 'object'
```
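To illustrate the headline feature (several functions registered for the same class), a second int handler can be added. This sketch reuses only the API shown above; the Pass conditions are chosen so the outcome doesn't depend on which handler allot tries first:

```python
# Sketch building on the example above: a second handler registered for int.
@f.register(int)
def f_big_integer(obj):
    if obj <= 100:  # 100 or less: defer to the remaining candidates
        return Pass
    return 'big integer'

assert f(3) == 'small integer'   # handled by f_small_integer
assert f(1000) == 'big integer'  # f_small_integer passes, f_big_integer handles
assert f(50) == 'object'         # both pass, falls back to the default f
```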
allotropy
*Allotrope® is a registered trademark of the Allotrope Foundation; no affiliation with the Allotrope Foundation is claimed or implied.

## Introduction

Welcome to allotropy -- a Python library by Benchling for converting instrument data into the Allotrope Simple Model (ASM).

The objective of this library is to read text or Excel based instrument software output and return a JSON representation that conforms to the published ASM schema. The code in this library does not convert from proprietary/binary output formats and so has no need to interact with any of the specific vendor softwares.

If you aren't familiar with Allotrope, we suggest you start by reading the Allotrope Product Overview.

We have chosen to have this library output ASM since JSON is easy to read and consume in most modern systems and can be checked by humans without any special tools needed. All of the published open source ASMs can be found in the ASM Gitlab repository.

We currently have parser support for the following instruments:

- Agilent Gen5
- Applied Bio QuantStudio
- Applied Bio AbsoluteQ
- Beckman Vi-Cell BLU
- Beckman Vi-Cell XR
- ChemoMetec Nucleoview
- Luminex xPONENT
- MolDev SoftMax Pro
- NovaBio Flex2
- PerkinElmer Envision
- Roche Cedex BioHT
- Thermo Fisher NanoDrop Eight
- Unchained Labs Lunatic

This code is published under the permissive MIT license because we believe that standardized instrument data is a benefit for everyone in science.

## Contributing

We welcome community contributions to this library and we hope that together we can expand the coverage of ASM-ready data for everyone. If you are interested, please read our contribution guidelines.

## Usage

Convert a file to an ASM dictionary:

```python
from allotropy.parser_factory import Vendor
from allotropy.to_allotrope import allotrope_from_file

asm_schema = allotrope_from_file("filepath.txt", Vendor.MOLDEV_SOFTMAX_PRO)
```

or, convert any IO:

```python
from io import BytesIO  # import added; the original snippet omitted it

from allotropy.parser_factory import Vendor
from allotropy.to_allotrope import allotrope_from_io

with open("filename.txt") as f:
    asm_schema = allotrope_from_io(f, Vendor.MOLDEV_SOFTMAX_PRO)

bytes_io = BytesIO(file_stream)
asm_schema = allotrope_from_io(bytes_io, Vendor.MOLDEV_SOFTMAX_PRO)
```

## Specific setup and build instructions

.gitignore: used standard GitHub Python template and added their recommended JetBrains lines

### Setup

- Install Hatch: https://hatch.pypa.io/latest/
- Install Python: https://www.python.org/downloads/ (this library supports Python 3.9 or higher; Hatch will install a matching version of Python, defined in pyproject.toml, when it sets up your environment)
- Add pre-push checks to your repo: `hatch run scripts:setup-pre-push`

### Dependencies

To add requirements used by the library, update dependencies in pyproject.toml:

- For project dependencies, update dependencies under [project].
- For script dependencies, update dependencies under [tool.hatch.envs.default].
- For lint dependencies, update dependencies under [tool.hatch.envs.lint].
- For test dependencies, update dependencies under [tool.hatch.envs.test].

### Useful Hatch commands

```
hatch env show                                         # list all environments
hatch run lint:all                                     # run all lint
hatch run lint:fmt                                     # auto-fix all possible lint issues
hatch run test:test                                    # run all tests
hatch run test:test tests/allotrope/allotrope_test.py  # run a specific test file (replace the filepath with your own)
hatch run test:cov                                     # run all tests with coverage
hatch shell                                            # spawn a shell within an environment for development
```

### Publish

To publish a new version, update the version in src/allotropy/__about__.py and run:

```
hatch build
hatch publish
```
allow
No description available on PyPI.
allow2
```
pip install allow2
```

Before the app/device can log any actions or check permissions etc., you need to first "pair" the device or app:

```python
>>> import allow2
>>>
>>> userId, pairId, children = allow2.pair(user, password, deviceToken, deviceName)
```

The userId and pairId are used for all subsequent requests to the API and will work only while the device/app remains paired, so these values should be persisted. If the parent that owns that account deletes the pairing, then the userId / pairId credentials will no longer work.

The "children" return value is an array of all current children definitions in that account when it is paired. You can use this to show the parent an interface to nominate the one permanent child who will use this device/app. Alternately, you can present a selector and use the PIN on each account to allow the child to directly select and unlock their account prior to using the device or app.

Then, to record usage and get permissions and blocks etc., use the following:

```python
>>> import allow2
>>>
>>> ???? = allow2.log(userId, pairId, [activityId, ...], childId)
```

That will…. (TBC)
allowed
Courses often use a restricted subset of a programming language and its library, to reduce cognitive load, focus on concepts, simplify marking, etc. allowed is a program that checks if your code files and Jupyter notebooks only use the Python constructs that were taught.

allowed enables instructors to check in advance their examples, exercises and assessment for inadvertent use of constructs that weren't taught. It also allows students and instructors to check submitted code against the taught constructs. To do its job, allowed requires a short file that lists which constructs were introduced in which 'unit' of the course. That file can be used as a reference document to onboard new tutors and to discuss the design of the course, e.g. to check if important constructs are missing or if some units are overloaded.

Like all static analysis tools, allowed isn't perfect and will never be. There may be false positives (code reported to be a violation, but isn't) and false negatives (code that uses disallowed constructs but isn't reported).

To refer to allowed in a publication, please cite: Michel Wermelinger. Checking Conformance to a Subset of the Python Language. Proceedings of the Conference on Innovation and Technology in Computer Science Education (ITiCSE), vol. 2, pp. 573–574. ACM, 2023.

## Instructions

If you're an M269 student or tutor, follow the M269 software installation instructions, and use the M269 technical forum or the tutor forum to report issues and ask questions. Otherwise, follow the instructions on how to install, use and configure allowed. If you need help, post your query in the Q & A discussion forum.

## Contributing

Any help to improve allowed is welcome and appreciated.

- If you use allowed, please share your experience and tips in the show & tell forum.
- If you require a new feature, please suggest it in the ideas discussion forum.
- If you spot an error in the code or documentation, please check if it has been reported, before creating a new issue.
- If you want to contribute code or documentation that addresses an open issue, please read first our code contribution guide. Your contribution will become available under the terms below.

## Licences

The code and text in this repository are Copyright © 2023 by The Open University, UK. The code is licensed under a BSD 3-clause licence. The text is licensed under a Creative Commons Attribution 4.0 International Licence.
allowedflare
Authenticate to Django with JSON Web Tokens (JWTs) signed by Cloudflare Access. A Django reimplementation of https://developers.cloudflare.com/cloudflare-one/identity/authorization-cookie/validating-json/#python-example

To run the demo, export these environment variables:

- ALLOWEDFLARE_ACCESS_URL: https://your-organization.cloudflareaccess.com
- ALLOWEDFLARE_AUDIENCE: 64-character hexadecimal string
- ALLOWEDFLARE_PRIVATE_DOMAIN: your-domain.tld

Then run:

```
docker-compose up
```

Configure Cloudflare Tunnel public hostname demodj.your-domain.tld to http://localhost:8001 or equivalent.

## TODO

- Better login page
- Django REST Framework (DRF) support
- Grant users view permission to all models
- (Re-)authenticating proxy for different-domain front-ends, like https://developers.cloudflare.com/cloudflare-one/identity/authorization-cookie/cors/#send-authentication-token-with-cloudflare-worker, but:
  - Setting username so it can be logged by gunicorn
  - Rewriting origin redirects
  - Setting the XmlHttpRequest(?) header to avoid redirects to the sign-in page
  - Will the original CF_Authorization cookie need to be copied, similar to X-Forwarded-For?
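A minimal sketch of wiring the three variables listed above into Django settings. The variable names come from this README, but reading them in settings.py like this is an assumption, not the package's documented setup:

```python
# settings.py - illustrative only; allowedflare may read these env vars itself.
import os

ALLOWEDFLARE_ACCESS_URL = os.environ["ALLOWEDFLARE_ACCESS_URL"]
ALLOWEDFLARE_AUDIENCE = os.environ["ALLOWEDFLARE_AUDIENCE"]
ALLOWEDFLARE_PRIVATE_DOMAIN = os.environ["ALLOWEDFLARE_PRIVATE_DOMAIN"]
```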
allowed-ghosts
Allowed Ghosts 👻

Daily inspiration for ALLOWED_HOSTS values.

## Installation & Usage

```
pip install allowed-ghosts
```

Add allowed-ghosts to your requirements.txt, then edit your settings.py:

```python
from allowed_ghosts import ALLOWED_GHOSTS

ALLOWED_HOSTS = ["localhost"]
ALLOWED_HOSTS += ALLOWED_GHOSTS
```

Now you can create a Public Link 🔗 for today's run 👉 https://cloud.sdu.dk/app/public-links 📆

## Documentation 📚

Extended documentation available 👉 https://jv-conseil.github.io/allowed-ghosts/

## TODO

- Use Beautiful Soup to pull collections from web sources instead of downloaded text files.

## Sponsorship

If this project helps you, you can offer me a cup of coffee ☕️ :-)
allows
Allows

Easier mock configuration and assertions in Python using R-spec-like grammar!

```python
allow(my_mock).to(return_value('hi').on_method('wave'))
allow(my_mock).to(return_value('bye').on_method('wave').when_called_with('see ya'))

assert my_mock.wave() == 'hi'
assert my_mock.wave('see ya') == 'bye'
```

This library is built to wrap and configure Mock, MagicMock and other objects from the built-in unittest.mock available in Python 3.3+.

- Free software: MIT license
- Documentation: https://allows.readthedocs.io.

## Features

- R-spec-like grammar for specifying Mock behavior
- Compatible with all Python standard library unittest.mock mocks (MagicMock, Patch, etc.)
- Stand-alone SideEffect builder to model and combine complex side effects

## Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

## History

0.1.0 (2019-05-11)

- First release on PyPI.
alloy
No description available on PyPI.
alloyclient
Description

A Python client and command-line tool for the Alloy digital archive. Also includes a rudimentary client for any CDMI-enabled cloud storage.

After installation, connect to an Alloy archive:

```
alloy init --api=https://alloy.example.com/api/cdmi
```

(or if authentication is required by the archive):

```
alloy init --api=https://alloy.example.com/api/cdmi --username=USER --password=PASS
```

Show current working container:

```
alloy pwd
```

List a container or object:

```
alloy ls [name]
```

Move to a new container:

```
alloy cd subdir
...
alloy cd ..   # back up to parent
```

Create a new container:

```
alloy mkdir new
```

Put a local file:

```
alloy put source.txt
...
alloy put source.txt destination.txt  # Put to a different name remotely
```

Provide the MIME type of the object (if not supplied, alloy put will attempt to guess):

```
alloy put --mimetype "text-plain" source.txt
```

Fetch a data object from the archive to a local file:

```
alloy get source.txt
alloy get source.txt destination.txt  # Get with a different name locally
alloy get --force source.txt          # Overwrite an existing local file
```

Remove an object:

```
alloy rm file.txt
```

Recursively remove a container (WARNING: Dangerous!):

```
alloy rm -r container
```

Remove an already empty container (Safer!):

```
alloy rmdir container
```

Close the current session to prevent unauthorized access:

```
alloy exit
```

Advanced Use - Metadata

Set (overwrite) a metadata value for a field:

```
alloy meta file.txt "org.dublincore.creator=S M Body"
alloy meta . "org.dublincore.title=My Collection"
```

Add another value to an existing metadata field:

```
alloy meta file.txt "org.dublincore.creator+=A N Other"
```

List metadata values for all fields:

```
alloy meta file.txt
```

List metadata value(s) for a specific field:

```
alloy meta file.txt org.dublincore.creator
```

Delete a metadata field:

```
alloy meta file.txt "org.dublincore.creator="
```

Installation

Create and activate a virtual environment:

```
$ virtualenv ~/ve/alloyclient<version>
...
$ source ~/ve/alloyclient/bin/activate
```

Install dependencies:

```
pip install -r requirements.txt
```

Install Alloy Client:

```
pip install -e .
```

Detailed OSX install commands:

```
sudo easy_install virtualenv  # virtualenv installs pip
python -m virtualenv ~/ve/alloyclient<version>
source ~/ve/alloyclient<version>/bin/activate
pip install -r requirements.txt
pip install -e .
```

License

Copyright 2014 Archive Analytics Solutions. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
alloylib
Alloy

Copyright (C) 2020 Transportation, Bots, and Disability Lab - Carnegie Mellon University. Licensed under the MIT license.

Latest Version: 0.3.0 (2022-02-16)

This is a Python library that provides commonly used functions in different areas of robotics. The current library consists of functions for:

- basic vector math operations on numpy,
- ROS math wrappers,
- Baxter related functions,
- graph search implementations,
- basic state machines.

The library is developed for internal use but we welcome others to use it if they find it useful.

## Installation

The best way to install this package is to clone/download the package and install it with:

```
pip install -e /path/to/package
```

The package also lives on PyPI and can be installed through:

```
pip install alloylib
```

However, that version will not be the most up-to-date version.
alloy-ml
# alloy-ml

Machine learning methods for the prediction of alloy properties
alloypress
alloypress

A Python static site generator which I use for my personal website. Made using the Python Packaging User Guide.

## Getting Started

Install via pip install alloypress. Run the following Python code in the root directory of your site:

```python
from alloypress import StaticSite

ssg = StaticSite()
ssg.generate()
```

An example of this can be seen in ./tests. It will take every .md file in ./raw and generate the HTML in ./.

## Features

The high-level approach of alloypress is to serve static HTML and CSS files, rendering nothing client-side. It supports:

- John Gruber's original Markdown syntax via markdown
- LaTeX via latex2mathml inside $ and $$ delimiters
- Syntax highlighting for Python via pygments
- Sidenotes which are displayed inline on narrow devices
- Sortable and tag-filtered index page for each top-level folder
- Only re-generates HTML if the Markdown has been modified, and deletes orphaned HTML files

## To Be Implemented

- Support for other languages via pygments
- Sidebar-based navigation
- Embedding-based search across all pages
- Post summaries on the index page via YAML frontmatter
- More coming soon...
alloy-python
Alloy Python SDK

This SDK provides a Python interface for interacting with Alloy APIs. It's designed for flexibility and easy integration into Python projects and is distributed via PyPI.

## Installation

The Alloy Python SDK can be easily installed using pip:

```
pip install alloy-python-sdk
```

## Usage

The package needs to be configured with your account's API key, which is available in the Alloy Dashboard under settings. You must supply the API key with each instantiation of the module.

### Unified API

To use this SDK with Alloy's Unified API, use the code snippet below:

```python
from alloy_python.uapi import UAPI

uapi = UAPI('YOUR_API_KEY')
```

### Creating a User

To make API calls to Unified API, you must first create a user. To create a user, call the User.create_user() method as seen below. You must pass a unique username.

```python
# `user` is a User instance from the SDK; its construction is omitted in the original README.
user_data = {'username': '[email protected]'}
user_data = user.create_user(user_data)
```

Before you make your first API call, you will need to obtain a connectionId. A connectionId represents the unique identifier of an app you plan to make API calls to. You can obtain a connectionId by using the frontend SDK. Read more here.

### Making requests to Alloy Unified API

Once you have a connectionId, you can start making calls to Alloy Unified API. See the example below for making a request to the Commerce Unified API:

```python
from alloy_python.uapi.commerce import Commerce

commerce = Commerce('YOUR_API_KEY')
list_customers_response = commerce.list_customers()
```

### Alloy Embedded

To set up Alloy's Embedded iPaaS, use the code snippet below:

```python
from alloy_python.embedded import Embedded

# Initialize with your API key
embedded = Embedded('YOUR_API_KEY')

# Example usage with the App class
response = embedded.App.get_apps()
print(response)
```

## Testing

### API Key for Testing

To test the SDK, you'll need to set the ALLOY_API_KEY environment variable. This can be done in your terminal session:

```
export ALLOY_API_KEY="your_api_key_here"
```

### Running Test Scripts

The SDK includes a set of test scripts that you can use to verify its functionality. Before running these scripts, ensure that the ALLOY_API_KEY environment variable is set as described above. You may need to replace user IDs, workflow IDs, integration IDs, etc., in the test scripts with those specific to your Alloy account.

Run a test script using:

```
python3 test/path_to_test_script.py
```

For example:

```
python3 test/apps/get_apps.py
```

## Contributing

Contributions to the SDK are welcome. Please ensure to follow the coding standards and write tests for new features.
all-package-resolver
package-resolver

This package is for those who need to download a package with ALL of its dependencies - download only, not install. The idea is to be able to get all versions of any package on all platforms. This includes:

- Linux distro packages (deb, rpm, apk...)
- Programming-language packages (Python, Node...)

## OS Distributions

At the moment only Ubuntu, CentOS and Alpine are implemented, all at their latest versions. In the future you will be able to get packages based on the distro version.

## Programming-language packages

At the moment only Python and Node are implemented, but in the future you will be able to download packages based on the language version and OS distro.

## Requirements

The problem is that you can't download rpm files with yum if you are on Ubuntu. Right? Wrong. This is possible thanks to Docker containers. You can build any environment imaginable with containers, and that's what this package does. If you want to download vim for Ubuntu, no matter what OS you are running, the package will run an Ubuntu container, download the .deb files and zip them into one single file.

To use this package you will need to run Docker on your system: Install Docker

## Installation

```
$ python -m pip install all-package-resolver
```

## Usage

```
$ download-package [OPTIONS] COMMAND [ARGS]...
```

## Options

```
-v, --verbose                  Show logs
-o, --output-dir <output-dir>  Output directory
-c, --no-cleanup               No cleanup
-h, --help                     output usage information
```

## Commands

```
os <distro> <package>                     Download a package for a specific os distro
lang <language> <lang-version> <package>  Download a package for a specific language
```

## Examples

```
$ download-package os ubuntu vim
$ download-package -v lang python 3.10 boto3
$ download-package --output-dir="/dst/folder" lang python 3.6 boto3

# For multiple packages
$ download-package lang python 3.6 "boto3 requests"
$ download-package --help
```
all-packages
all_packages

The Python Package Index (PyPI) contains over 300,000 Python packages. Need a Python library but don't want to search through all the options? Search no further! all_packages is a Python script that attempts to install every package on PyPI - all 349,451 of them as of 11 Jan 2022.

## Install

```
pip install all_packages
```

## Run

From the command line, execute the following:

```
all_packages install
```

For each package on PyPI, this creates a virtual environment in the all_packages subdirectory of your home directory and attempts to install the package into that virtual environment.

To customize where the virtual environments are created, use the -d option:

```
all_packages install -d MYDIRECTORY
```
all_page_login
No description available on PyPI.
allpairspy
allpairspy

forked from bayandin/allpairs

AllPairs test combinations generator

- Features
- Get Started
  - Basic Usage
  - Filtering
  - Data Source: OrderedDict
  - Parameterized testing pairwise by using pytest
  - Other Examples
- Installation
  - Installation: pip
  - Installation: apt
- Known issues
- Dependencies
- Sponsors

## AllPairs test combinations generator

AllPairs is an open source test combinations generator written in Python, developed and maintained by MetaCommunications Engineering. The generator allows one to create a set of tests using the "pairwise combinations" method, reducing a number of combinations of variables into a lesser set that covers most situations. For more info on pairwise testing see http://www.pairwise.org.

## Features

- Produces good enough dataset.
- Pythonic, iterator-style enumeration interface.
- Allows to filter out "invalid" combinations during search for the next combination.
- Goes beyond pairs! If/when required can generate n-wise combinations.

## Get Started

### Basic Usage

Sample Code:

```python
from allpairspy import AllPairs

parameters = [
    ["Brand X", "Brand Y"],
    ["98", "NT", "2000", "XP"],
    ["Internal", "Modem"],
    ["Salaried", "Hourly", "Part-Time", "Contr."],
    [6, 10, 15, 30, 60],
]

print("PAIRWISE:")
for i, pairs in enumerate(AllPairs(parameters)):
    print("{:2d}: {}".format(i, pairs))
```

Output:

```
PAIRWISE:
 0: ['Brand X', '98', 'Internal', 'Salaried', 6]
 1: ['Brand Y', 'NT', 'Modem', 'Hourly', 6]
 2: ['Brand Y', '2000', 'Internal', 'Part-Time', 10]
 3: ['Brand X', 'XP', 'Modem', 'Contr.', 10]
 4: ['Brand X', '2000', 'Modem', 'Part-Time', 15]
 5: ['Brand Y', 'XP', 'Internal', 'Hourly', 15]
 6: ['Brand Y', '98', 'Modem', 'Salaried', 30]
 7: ['Brand X', 'NT', 'Internal', 'Contr.', 30]
 8: ['Brand X', '98', 'Internal', 'Hourly', 60]
 9: ['Brand Y', '2000', 'Modem', 'Contr.', 60]
10: ['Brand Y', 'NT', 'Modem', 'Salaried', 60]
11: ['Brand Y', 'XP', 'Modem', 'Part-Time', 60]
12: ['Brand Y', '2000', 'Modem', 'Hourly', 30]
13: ['Brand Y', '98', 'Modem', 'Contr.', 15]
14: ['Brand Y', 'XP', 'Modem', 'Salaried', 15]
15: ['Brand Y', 'NT', 'Modem', 'Part-Time', 15]
16: ['Brand Y', 'XP', 'Modem', 'Part-Time', 30]
17: ['Brand Y', '98', 'Modem', 'Part-Time', 6]
18: ['Brand Y', '2000', 'Modem', 'Salaried', 6]
19: ['Brand Y', '98', 'Modem', 'Salaried', 10]
20: ['Brand Y', 'XP', 'Modem', 'Contr.', 6]
21: ['Brand Y', 'NT', 'Modem', 'Hourly', 10]
```

### Filtering

You can restrict pairs by setting a filtering function to filter_func at the AllPairs constructor.

Sample Code:

```python
from allpairspy import AllPairs

def is_valid_combination(row):
    """
    This is a filtering function. Filtering functions should return True
    if combination is valid and False otherwise. Test row that is passed
    here can be incomplete. To prevent search for unnecessary items
    filtering function is executed with found subset of data to validate it.
    """
    n = len(row)
    if n > 1:
        # Brand Y does not support Windows 98
        if "98" == row[1] and "Brand Y" == row[0]:
            return False
        # Brand X does not work with XP
        if "XP" == row[1] and "Brand X" == row[0]:
            return False
    if n > 4:
        # Contractors are billed in 30 min increments
        if "Contr." == row[3] and row[4] < 30:
            return False
    return True

parameters = [
    ["Brand X", "Brand Y"],
    ["98", "NT", "2000", "XP"],
    ["Internal", "Modem"],
    ["Salaried", "Hourly", "Part-Time", "Contr."],
    [6, 10, 15, 30, 60],
]

print("PAIRWISE:")
for i, pairs in enumerate(AllPairs(parameters, filter_func=is_valid_combination)):
    print("{:2d}: {}".format(i, pairs))
```

Output:

```
PAIRWISE:
 0: ['Brand X', '98', 'Internal', 'Salaried', 6]
 1: ['Brand Y', 'NT', 'Modem', 'Hourly', 6]
 2: ['Brand Y', '2000', 'Internal', 'Part-Time', 10]
 3: ['Brand X', '2000', 'Modem', 'Contr.', 30]
 4: ['Brand X', 'NT', 'Internal', 'Contr.', 60]
 5: ['Brand Y', 'XP', 'Modem', 'Salaried', 60]
 6: ['Brand X', '98', 'Modem', 'Part-Time', 15]
 7: ['Brand Y', 'XP', 'Internal', 'Hourly', 15]
 8: ['Brand Y', 'NT', 'Internal', 'Part-Time', 30]
 9: ['Brand X', '2000', 'Modem', 'Hourly', 10]
10: ['Brand Y', 'XP', 'Modem', 'Contr.', 30]
11: ['Brand Y', '2000', 'Modem', 'Salaried', 15]
12: ['Brand Y', 'NT', 'Modem', 'Salaried', 10]
13: ['Brand Y', 'XP', 'Modem', 'Part-Time', 6]
14: ['Brand Y', '2000', 'Modem', 'Contr.', 60]
```

### Data Source: OrderedDict

You can use a collections.OrderedDict instance as an argument for the AllPairs constructor. Pairs will be returned as collections.namedtuple instances.

Sample Code:

```python
from collections import OrderedDict

from allpairspy import AllPairs

parameters = OrderedDict({
    "brand": ["Brand X", "Brand Y"],
    "os": ["98", "NT", "2000", "XP"],
    "minute": [15, 30, 60],
})

print("PAIRWISE:")
for i, pairs in enumerate(AllPairs(parameters)):
    print("{:2d}: {}".format(i, pairs))
```

Output:

```
PAIRWISE:
 0: Pairs(brand='Brand X', os='98', minute=15)
 1: Pairs(brand='Brand Y', os='NT', minute=15)
 2: Pairs(brand='Brand Y', os='2000', minute=30)
 3: Pairs(brand='Brand X', os='XP', minute=30)
 4: Pairs(brand='Brand X', os='2000', minute=60)
 5: Pairs(brand='Brand Y', os='XP', minute=60)
 6: Pairs(brand='Brand Y', os='98', minute=60)
 7: Pairs(brand='Brand X', os='NT', minute=60)
 8: Pairs(brand='Brand X', os='NT', minute=30)
 9: Pairs(brand='Brand X', os='98', minute=30)
10: Pairs(brand='Brand X', os='XP', minute=15)
11: Pairs(brand='Brand X', os='2000', minute=15)
```

### Parameterized testing pairwise by using pytest

Parameterized testing: value matrix

Sample Code:

```python
import pytest
from allpairspy import AllPairs

def function_to_be_tested(brand, operating_system, minute) -> bool:
    # do something
    return True

class TestParameterized(object):
    @pytest.mark.parametrize(
        ["brand", "operating_system", "minute"],
        [
            values
            for values in AllPairs([
                ["Brand X", "Brand Y"],
                ["98", "NT", "2000", "XP"],
                [10, 15, 30, 60],
            ])
        ],
    )
    def test(self, brand, operating_system, minute):
        assert function_to_be_tested(brand, operating_system, minute)
```

Output:

```
$ py.test test_parameterize.py -v
============================= test session starts ==============================
...
collected 16 items

test_parameterize.py::TestParameterized::test[Brand X-98-10] PASSED     [  6%]
test_parameterize.py::TestParameterized::test[Brand Y-NT-10] PASSED     [ 12%]
test_parameterize.py::TestParameterized::test[Brand Y-2000-15] PASSED   [ 18%]
test_parameterize.py::TestParameterized::test[Brand X-XP-15] PASSED     [ 25%]
test_parameterize.py::TestParameterized::test[Brand X-2000-30] PASSED   [ 31%]
test_parameterize.py::TestParameterized::test[Brand Y-XP-30] PASSED     [ 37%]
test_parameterize.py::TestParameterized::test[Brand Y-98-60] PASSED     [ 43%]
test_parameterize.py::TestParameterized::test[Brand X-NT-60] PASSED     [ 50%]
test_parameterize.py::TestParameterized::test[Brand X-NT-30] PASSED     [ 56%]
test_parameterize.py::TestParameterized::test[Brand X-98-30] PASSED     [ 62%]
test_parameterize.py::TestParameterized::test[Brand X-XP-60] PASSED     [ 68%]
test_parameterize.py::TestParameterized::test[Brand X-2000-60] PASSED   [ 75%]
test_parameterize.py::TestParameterized::test[Brand X-2000-10] PASSED   [ 81%]
test_parameterize.py::TestParameterized::test[Brand X-XP-10] PASSED     [ 87%]
test_parameterize.py::TestParameterized::test[Brand X-98-15] PASSED     [ 93%]
test_parameterize.py::TestParameterized::test[Brand X-NT-15] PASSED     [100%]
```

Parameterized testing: OrderedDict

Sample Code:

```python
from collections import OrderedDict  # import added; the original snippet omitted it

import pytest
from allpairspy import AllPairs

def function_to_be_tested(brand, operating_system, minute) -> bool:
    # do something
    return True

class TestParameterized(object):
    @pytest.mark.parametrize(
        ["pair"],
        [
            [pair]
            for pair in AllPairs(OrderedDict({
                "brand": ["Brand X", "Brand Y"],
                "operating_system": ["98", "NT", "2000", "XP"],
                "minute": [10, 15, 30, 60],
            }))
        ],
    )
    def test(self, pair):
        assert function_to_be_tested(pair.brand, pair.operating_system, pair.minute)
```

### Other Examples

Other examples can be found in the examples directory.

## Installation

### Installation: pip

```
pip install allpairspy
```

### Installation: apt

You can install the package by apt via a Personal Package Archive (PPA):

```
sudo add-apt-repository ppa:thombashi/ppa
sudo apt update
sudo apt install python3-allpairspy
```

## Known issues

- Not optimal: there are tools that can create a smaller set covering all the pairs. However, they are missing some other important features and/or do not integrate well with Python.
- A poorly written filtering function may lead to full permutation of parameters.
- Version 2.0 has become slower (a side-effect of introducing the ability to produce n-wise combinations).

## Dependencies

Python 3.7+, no external dependencies.

## Sponsors

Become a sponsor
all-params-env
No description available on PyPI.
allparts
amsin is a simple command-line tool to shortlink an Amazon URL.
all-pay
A lightweight integration of payment methods that completely decouples payment from business logic, so you can build a payment module quickly and simply.

## Features

- Hides the differences in integration APIs and data structures between payment methods behind a unified API and data structure
- Supports horizontal extension with new payment types
- Unified exception handling

## Supported payment methods and features

Payment methods:

- Alipay (pay_type=ali_pay)
- WeChat Pay (pay_type=wx_pay)

Common features:

- Desktop website payment
- Mobile website payment
- In-app payment
- Asynchronous notification verification
- Transaction query
- Transaction cancellation
- Refund
- Refund query

Platform-specific features:

- WeChat JS payment
- WeChat enterprise payment to a user's wallet balance

## Usage

### Core concepts

Configuration (dict):

```python
ALIPAY_CONFIG = {
    'pay_type': 'ali_pay',       # required; distinguishes the payment type
    'app_id': 'xxx',             # required; application id
    'private_key_path': 'xxx',   # required; private key
    'public_key_path': 'xxx',    # required; public key
    'notify_url': 'xxx',         # asynchronous callback URL
    'sign_type': 'RSA2',         # signature algorithm: RSA or RSA2
    'debug': False,              # whether to use sandbox mode
}
WECHAT_CONFIG = {
    'pay_type': 'wx_pay',        # required; distinguishes the payment type
    'app_id': 'xxx',             # required; application id
    'mch_key': 'xxx',            # required; merchant platform key
    'mch_id': 'xxx',             # required; merchant id assigned by WeChat Pay
    'app_secret': 'xxx',         # application secret
    'notify_url': 'xxx',         # asynchronous callback URL
    'api_cert_path': 'xxx',      # API certificate
    'api_key_path': 'xxx',       # API certificate key
}
```

pay_type is required by this project to distinguish the payment type; the remaining keys are the configuration parameters required by the given payment method - see that method's official documentation for details.

Pay: the payment gateway; the entry point that dispatches and forwards to the configured payment method.

PayOrder: a uniform wrapper for payment order information, used mainly to build the unified order when placing a payment. Example:

```python
order = PayOrder.Builder().subject('item title').out_trade_no('order number').total_fee('item cost').build()
```

Common and method-specific parameters are combined flexibly via the Builder pattern and chained calls; see the source for more parameters.

PayResponse: a uniform wrapper for the returned payment information, used mainly to build the unified receipt for payment queries. Example:

```python
response = PayResponse.Builder().trade_no('platform transaction number').out_trade_no('merchant order number').build()
```

Common and method-specific parameters are combined flexibly via the Builder pattern and chained calls; see the source for more parameters.

### Demo

```python
ALIPAY_CONFIG = {
    'pay_type': 'ali_pay',       # required; distinguishes the payment type
    'app_id': 'xxx',             # required; application id
    'private_key_path': 'xxx',   # required; private key
    'public_key_path': 'xxx',    # required; public key
    'notify_url': 'xxx',         # asynchronous callback URL
    'sign_type': 'RSA2',         # signature algorithm: RSA or RSA2
    'debug': False,              # whether to use sandbox mode
}

# extra parameters: some optional parameters of certain payment methods are not
# wrapped by PayOrder and can be passed directly
extra_params = {
    'xxx': 'xxx',
    'xxx': 'xxx',
    'xxx': 'xxx',
}

order = PayOrder.Builder().subject('item title').out_trade_no('order number').total_fee('item cost').build()
pay = Pay(ALIPAY_CONFIG)  # pass the configuration of the chosen payment method
order_res = pay.trade_page_pay(order, extra_params)  # pass the order and extra parameters (if needed)
```

### Feature reference

Desktop website payment [trade_page_pay]:

```python
pay = Pay(ALIPAY_CONFIG)             # pass the configuration of the chosen payment method
order_res = pay.trade_page_pay(order)  # pass the order
```

Mobile website payment [trade_wap_pay]:

```python
pay = Pay(ALIPAY_CONFIG)
order_res = pay.trade_wap_pay(order)
```

In-app payment [trade_app_pay]:

```python
pay = Pay(ALIPAY_CONFIG)
order_res = pay.trade_app_pay(order)
```

Asynchronous notification verification [parse_and_verify_result]:

```python
pay = Pay(WECHAT_CONFIG)
# pass the raw data returned by the payment method;
# on successful verification, returns the data parsed as JSON
data = pay.parse_and_verify_result(req_xml)
```

WeChat JS payment [trade_js_pay]:

```python
pay = Pay(WECHAT_CONFIG)
data = pay.trade_js_pay(order)
```

WeChat enterprise payment to wallet balance [enterprise_pay]:

```python
pay = Pay(WECHAT_CONFIG)
data = pay.enterprise_pay(order)
```

Transaction query [trade_query]:

```python
pay = Pay(WECHAT_CONFIG)
data = pay.trade_query(response)  # pass the receipt info
```

Transaction cancellation [trade_cancel]:

```python
pay = Pay(WECHAT_CONFIG)
data = pay.trade_cancel(response)
```

Refund [trade_refund]:

```python
pay = Pay(WECHAT_CONFIG)
data = pay.trade_refund(response)
```

Refund query [trade_refund_query]:

```python
pay = Pay(WECHAT_CONFIG)
data = pay.trade_refund_query(response)
```

## Contributing

This project doesn't yet support many payment methods and APIs. You are welcome to open a pull request extending new payment interfaces, and if you have good ideas or suggestions, please open an issue.

Statement: the main goal of this project is payment integration, unifying the payment API and data structures. The integration of the concrete payment modules draws on some open-source projects: the Alipay module is based on python-alipay-sdk; the WeChat module is based on wx_pay_python.
allpipus
Failed to fetch description. HTTP Status Code: 404
allplay
No description available on PyPI.
all-plugins
all-plugins

This package prevents malicious activity from a previously-existing package with the same name. If you have a legitimate use case, contact me.
allpowers-ble
Allpowers BLE

Interface with smart batteries from Allpowers.

Installation

Install this via pip (or your favourite package manager):

pip install allpowers-ble

Contributors ✨

Thanks goes to these wonderful people (emoji key):

This project follows the all-contributors specification. Contributions of any kind welcome!

Credits

This package was created with Copier and the browniebroke/pypackage-template project template.
all-prac-codes
No description available on PyPI.
allPrint
tired of the squiggly red lines from typing "console.log" because you were just working on a javascript front end five minutes ago? just want to print one goddamn line to the console but can't remember what to type because there's too many fucking print statements? well do i have news for you. HI BILLY MAYS HERE for ALLPRINT. a pypi package that lets you use all print statements from any coding language**. simply type from allPrint import * at the top of any python file. it just works. ** except C++. what the fuck bjarne??
all-project
Failed to fetch description. HTTP Status Code: 404
allpurpose
No description available on PyPI.
all-purpose-bypass
Installation

pip install in your Databricks Notebook:

%pip install all_purpose_bypass
all-purpose-dict
All Purpose Dict

Table of Contents

What is it?
Why create it?
Simple usage
See Also
Api
Test

What is it?

A dict which doesn't require hashable keys

Why create it?

I often have a need to store non-hashable keys in a dict. For example storing a dict as a key isn't possible with the builtin dict.

# doesn't work
someDict = {"key": "value"}
anotherDict = {someDict: "anotherValue"}

Simple usage

from all_purpose_dict import ApDict

someDict = {"key": "value"}
anotherDict = ApDict([(someDict, "anotherValue")])

print(someDict in anotherDict)  # prints True

See Also

All Purpose Set

Api

Note: This api is young and subject to change quite a bit. There also may be functionality present in the builtin dict which ApDict doesn't cover. I'm willing to add it so please just raise a github issue or PR with details.

class ApDict([a list of pairs])

'pairs' may be either a list or tuple with a length of 2
all methods return self unless specified otherwise
iterates in the order of insertion
views are not implemented because I don't have a need for them. Instead I expose keysIterator and valuesIterator. If you need views then raise a github issue.
the internal methods implemented are __contains__, __delitem__, __getitem__, __iter__, __len__ and __setitem__

clear()
delete(key)
  a function alternative to del aDict[key]
get(key, default=None) => value
  get the value for key if key is in the dictionary, else default.
  note: this never raises a KeyError.
has(key) => bool
  a function alternative to key in aDict
getKeysIterator() => ApDictKeysIterator
set(key, value)
  a function alternative to aDict[key] = val
getValuesIterator() => ApDictValuesIterator

Test

# you must have poetry installed
$ poetry shell
$ poetry install
$ python runTests.py
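A short sketch exercising the function-style ApDict api listed above (set, get, has, delete); behavior is inferred from the method descriptions, so treat the printed values as illustrative:

```python
from all_purpose_dict import ApDict

key = {"some": "dict"}        # a non-hashable key
d = ApDict([(key, "first")])

d.set("plain", "second")      # function alternative to d["plain"] = "second"
print(d.get(key))             # -> first
print(d.has("plain"))         # -> True
d.delete("plain")             # function alternative to del d["plain"]
print(len(d))                 # -> 1
```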
all-purpose-set
All Purpose Set

Table of Contents

What is it?
Why create it?
Simple usage
See also
Api
Test

What is it?

A set which doesn't require hashable contents

Why create it?

I often have a need to store non-hashable contents in a set. For example storing a dict isn't possible with the builtin set.

# doesn't work
someDict = {"key": "value"}
someSet = {someDict}

Simple usage

from all_purpose_set import ApSet

someDict = {"key": "value"}
someSet = ApSet([someDict])

print(someDict in someSet)  # prints True

See also

All Purpose Dict

Api

Note: This api is young and subject to change quite a bit. There also may be functionality present in the builtin set which this set doesn't cover. I'm willing to add it so please just raise a github issue or PR with details.

class ApSet([a list])

all methods return self unless specified otherwise
iterates in the order of insertion
currently the internal methods implemented are __contains__, __iter__ and __len__

add(something)
clear()
has(something) => bool
  a function alternative to key in aSet
remove(something)
  raises a KeyError if the element doesn't exist

Test

# you must have poetry installed
$ poetry shell
$ poetry install
$ python runTests.py
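And a matching sketch for the ApSet api above (add, has, remove); again inferred from the method descriptions rather than taken from the package's own docs:

```python
from all_purpose_set import ApSet

item = {"key": "value"}   # a non-hashable element
s = ApSet([item])

s.add([1, 2, 3])          # lists aren't hashable either
print(s.has(item))        # -> True, function alternative to `item in s`
print(len(s))             # -> 2
s.remove([1, 2, 3])       # raises KeyError if the element is absent
print(item in s)          # -> True
```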
allpwn
No description available on PyPI.
allpy
Solve for Mixed Strategy Equilibrium of All-Pay Contests with Spillovers

This package can currently estimate the mixed strategy equilibrium of an asymmetric all-pay contest with spillovers. It uses the matrix inversion method established in Betto and Thomas (2021).

The package is not yet complete and the commands are subject to design changes. Better documentation is forthcoming.
all-python
No description available on PyPI.
allRank
allRank : Learning to Rank in PyTorch

About

allRank is a PyTorch-based framework for training neural Learning-to-Rank (LTR) models, featuring implementations of:

common pointwise, pairwise and listwise loss functions
fully connected and Transformer-like scoring functions
commonly used evaluation metrics like Normalized Discounted Cumulative Gain (NDCG) and Mean Reciprocal Rank (MRR)
click-models for experiments on simulated click-through data

Motivation

allRank provides an easy and flexible way to experiment with various LTR neural network models and loss functions. It is easy to add a custom loss and to configure the model and the training procedure. We hope that allRank will facilitate both research in neural LTR and its industrial applications.

Features

Implemented loss functions:

ListNet (for binary and graded relevance)
ListMLE
RankNet
Ordinal loss
LambdaRank
LambdaLoss
ApproxNDCG
RMSE
NeuralNDCG (introduced in https://arxiv.org/pdf/2102.07831)

Getting started guide

To help you get started, we provide a run_example.sh script which generates dummy ranking data in libsvm format and trains a Transformer model on the data using the provided example config.json config file. Once you run the script, the dummy data can be found in the dummy_data directory and the results of the experiment in the test_run directory. To run the example, Docker is required.

Configuring your model & training

To train your own model, configure your experiment in a config.json file and run

python allrank/main.py --config_file_name allrank/config.json --run_id <the_name_of_your_experiment> --job_dir <the_place_to_save_results>

All the hyperparameters of the training procedure (i.e. model definition, data location, loss and metrics used, training hyperparameters etc.) are controlled by the config.json file. We provide a template file config_template.json where supported attributes, their meaning and possible values are explained. Note that following MSLR-WEB30K convention, your libsvm file with training data should be named train.txt. You can specify the name of the validation dataset (e.g. valid or test) in the config. Results will be saved under the path <job_dir>/results/<run_id>.

Google Cloud Storage is supported in allRank as a place for data and job results.

Implementing custom loss functions

To experiment with your own custom loss, you need to implement a function that takes two tensors (model prediction and ground truth) as input and put it in the losses package, making sure it is exposed on a package level. (A minimal sketch of such a function appears at the end of this entry.) To use it in training, simply pass the name (and args, if your loss method has some hyperparameters) of your function in the correct place in the config file:

"loss": { "name": "yourLoss", "args": { "arg1": val1, "arg2": val2 } }

Applying click-model

To apply a click model you need to first have an allRank model trained. Next, run:

python allrank/rank_and_click.py --input-model-path <path_to_the_model_weights_file> --roles <comma_separated_list_of_ds_roles_to_process e.g. train,valid> --config_file_name allrank/config.json --run_id <the_name_of_your_experiment> --job_dir <the_place_to_save_results>

The model will be used to rank all slates from the dataset specified in the config. Next, a click model configured in the config will be applied and the resulting click-through dataset will be written under <job_dir>/results/<run_id> in a libSVM format. The path to the results directory may then be used as an input for another allRank model training.

Continuous integration

You should run scripts/ci.sh to verify that code passes style guidelines and unit tests.

Research

This framework was developed to support the research project Context-Aware Learning to Rank with Self-Attention. If you use allRank in your research, please cite:

@article{Pobrotyn2020ContextAwareLT,
  title={Context-Aware Learning to Rank with Self-Attention},
  author={Przemyslaw Pobrotyn and Tomasz Bartczak and Mikolaj Synowiec and Radoslaw Bialobrzeski and Jaroslaw Bojar},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.10084}
}

Additionally, if you use the NeuralNDCG loss function, please cite the corresponding work, NeuralNDCG: Direct Optimisation of a Ranking Metric via Differentiable Relaxation of Sorting:

@article{Pobrotyn2021NeuralNDCG,
  title={NeuralNDCG: Direct Optimisation of a Ranking Metric via Differentiable Relaxation of Sorting},
  author={Przemyslaw Pobrotyn and Radoslaw Bialobrzeski},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.07831}
}

License

Apache 2 License
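As referenced in the custom-loss section above, the contract is just "two tensors in, scalar loss out". A minimal sketch of such a function follows; it is a toy pointwise RMSE stand-in, not one of the shipped losses, and real allRank losses also handle padded slate entries, which this version ignores:

```python
import torch

def my_pointwise_rmse(y_pred, y_true):
    # y_pred: [batch_size, slate_length] model scores
    # y_true: [batch_size, slate_length] ground-truth relevance labels
    # A production listwise loss would mask padded list entries before reducing.
    return torch.sqrt(torch.mean((y_pred - y_true) ** 2))
```

Dropped into the losses package and exposed at package level, it would then be referenced from the config as "loss": {"name": "my_pointwise_rmse"}.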
allrank-mod
Modified version of https://github.com/allegro/allRank

allRank : Learning to Rank in PyTorch

About

allRank is a PyTorch-based framework for training neural Learning-to-Rank (LTR) models, featuring implementations of:

common pointwise, pairwise and listwise loss functions
fully connected and Transformer-like scoring functions
commonly used evaluation metrics like Normalized Discounted Cumulative Gain (NDCG) and Mean Reciprocal Rank (MRR)
click-models for experiments on simulated click-through data

Motivation

allRank provides an easy and flexible way to experiment with various LTR neural network models and loss functions. It is easy to add a custom loss and to configure the model and the training procedure. We hope that allRank will facilitate both research in neural LTR and its industrial applications.

Features

Implemented loss functions:

ListNet (for binary and graded relevance)
ListMLE
RankNet
Ordinal loss
LambdaRank
LambdaLoss
ApproxNDCG
RMSE

The remaining sections (getting started guide, configuring your model & training, implementing custom loss functions, applying a click-model, continuous integration, the research citation and the Apache 2 License) are identical to the allRank entry above.
all-relative
all-relative

A command line tool to convert a static site to use only relative urls.

Run it from the directory which contains your generated static site to have it convert all urls in html and css to be relative to that dir, or specify the dir as a cli argument.

Relative urls leave you with a portable website that doesn't care what path it is mounted at. /, /olizilla/, /ipfs/hash/, file://x/y/z/, the lot, it don't care. This allows you to load the same static site via IPFS or github pages, or the local file system, as well as from the root of your custom domain. Relative urls are wonderful.

The command will edit the files in place, so it's best to run it on generated output that you can recreate if you need to. If you can't, be sure to take a back up of your site first.

Install

Install it with

$ pip install all-relative

or just run it without installing it via pipx (see here if you haven't heard about it)

$ pipx all-relative

Usage

Run the command from the root directory of your static site.

$ all-relative

Inspired by all-relative
all-repos
all-repos

Clone all your repositories and apply sweeping changes.

Installation

pip install all-repos

CLI

All command line interfaces provided by all-repos provide the following options:

-h / --help: show usage information
-C CONFIG_FILENAME / --config-filename CONFIG_FILENAME: use a non-default config file (the default all-repos.json can be changed with the environment variable ALL_REPOS_CONFIG_FILENAME).
--color {auto,always,never}: use color in output (default auto).

all-repos-complete [options]

Add git clone tab completion for all-repos repositories. Requires jq to function.

Add to .bash_profile:

eval "$(all-repos-complete -C ~/.../all-repos.json --bash)"

all-repos-clone [options]

Clone all the repositories into the output_dir. If run again, this command will update existing repositories.

Options:

-j JOBS / --jobs JOBS: how many concurrent jobs will be used to complete the operation. Specify 0 or -1 to match the number of cpus. (default 8).

Sample invocations:

all-repos-clone: clone the repositories specified in all-repos.json
all-repos-clone -C all-repos2.json: clone using a non-default config filename.

all-repos-find-files [options] PATTERN

Similar to a distributed git ls-files | grep -P PATTERN.

Arguments:

PATTERN: the python regex to match.

Options:

--repos-with-matches: only print repositories with matches.

Sample invocations:

all-repos-find-files setup.py: find all setup.py files.
all-repos-find-files --repos setup.py: find all repositories containing a setup.py.

all-repos-grep [options] [GIT_GREP_OPTIONS]

Similar to a distributed git grep ....

Options:

--repos-with-matches: only print repositories with matches.
GIT_GREP_OPTIONS: additional arguments will be passed on to git grep. See git grep --help for available options.

Sample invocations:

all-repos-grep pre-commit -- 'requirements*.txt': find all repositories which have pre-commit listed in a requirements file.
all-repos-grep -L six -- setup.py: find setup.py files which do not contain six.

all-repos-list-repos [options]

List all cloned repository names.

all-repos-manual [options]

Interactively apply a manual change across repos.

note: all-repos-manual will always run in --interactive autofixing mode.
note: all-repos-manual requires the --repos autofixer option.

Options:

autofix options: all-repos-manual is an autofixer and supports all of the autofixer options.
--branch-name BRANCH_NAME: override the autofixer branch name (default all-repos-manual).
--commit-msg COMMIT_MSG (required): set the autofixer commit message.

all-repos-sed [options] EXPRESSION FILENAMES

Similar to a distributed git ls-files -z -- FILENAMES | xargs -0 sed -i EXPRESSION.

note: this assumes GNU sed. If you're on macOS, install gnu-sed with Homebrew:

brew install gnu-sed

# Add to .bashrc / .zshrc
export PATH="$(brew --prefix)/opt/gnu-sed/libexec/gnubin:$PATH"

Arguments:

EXPRESSION: sed program. For example: s/hi/hello/g.
FILENAMES: filenames glob (passed to git ls-files).

Options:

autofix options: all-repos-sed is an autofixer and supports all of the autofixer options.
-r / --regexp-extended: use extended regular expressions in the script. See man sed for further details.
--branch-name BRANCH_NAME: override the autofixer branch name (default all-repos-sed).
--commit-msg COMMIT_MSG: override the autofixer commit message (default git ls-files -z -- FILENAMES | xargs -0 sed -i ... EXPRESSION).

Sample invocations:

all-repos-sed 's/foo/bar/g' -- '*': replace foo with bar in all files.

Configuring

A configuration file looks roughly like this:

{
    "output_dir": "output",
    "source": "all_repos.source.github",
    "source_settings": {"api_key": "...", "username": "asottile"},
    "push": "all_repos.push.github_pull_request",
    "push_settings": {"api_key": "...", "username": "asottile"}
}

output_dir: where repositories will be cloned to when all-repos-clone is run.
source: the module import path to a source; see below for builtin source modules as well as directions for writing your own.
source_settings: the source-type-specific settings; the source module's documentation will explain the various possible values.
push: the module import path to a push; see below for builtin push modules as well as directions for writing your own.
push_settings: the push-type-specific settings; the push module's documentation will explain the various possible values.
include (default ""): python regex for selecting repositories. Only repository names which match this regex will be included.
exclude (default "^$"): python regex for excluding repositories. Repository names which match this regex will be excluded.
all_branches (default false): whether to clone all of the branches or just the default upstream branch.

Source modules

all_repos.source.json_file

Clones all repositories listed in a file. The file must be formatted as follows:

{
    "example/repo1": "https://git.example.com/example/repo1",
    "repo2": "https://git.example.com/repo2"
}

Required source_settings:

filename: file containing repositories one-per-line.

Directory location:

output/
+--- repos.json
+--- repos_filtered.json
+--- {repo_key1}/
+--- {repo_key2}/
+--- {repo_key3}/

all_repos.source.github

Clones all repositories available to a user on github.

Required source_settings:

api_key: the api key which the user will log in as. Use the settings tab to create a personal access token. The minimum scope required to function is public_repo, though you'll need repo to access private repositories.
api_key_env: alternatively, the API key can also be passed via an environment variable.
username: the github username you will log in as.

Optional source_settings:

collaborator (default false): whether to include repositories which are not owned but can be contributed to as a collaborator.
forks (default false): whether to include repositories which are forks.
private (default false): whether to include private repositories.
archived (default false): whether to include archived repositories.
base_url (default https://api.github.com): the base URL of the Github API to use (for Github Enterprise support set this to https://{your_domain}/api/v3).

Directory location:

output/
+--- repos.json
+--- repos_filtered.json
+--- {username1}/
     +--- {repo1}/
     +--- {repo2}/
+--- {username2}/
     +--- {repo3}/

all_repos.source.github_forks

Clones all repositories forked from a repository on github.

Required source_settings:

api_key: the api key which the user will log in as. Use the settings tab to create a personal access token. The minimum scope required to function is public_repo.
api_key_env: alternatively, the API key can also be passed via an environment variable.
repo: the repo which has forks.

Optional source_settings:

collaborator (default true): whether to include repositories which are not owned but can be contributed to as a collaborator.
forks (default true): whether to include repositories which are forks.
private (default false): whether to include private repositories.
archived (default false): whether to include archived repositories.
base_url (default https://api.github.com): the base URL of the Github API to use (for Github Enterprise support set this to https://{your_domain}/api/v3).

Directory location:

See the directory structure for all_repos.source.github.

all_repos.source.github_org

Clones all repositories from an organization on github.

Required source_settings:

api_key: the api key which the user will log in as. Use the settings tab to create a personal access token. The minimum scope required to function is public_repo, though you'll need repo to access private repositories.
api_key_env: alternatively, the API key can also be passed via an environment variable.
org: the organization to clone from.

Optional source_settings:

collaborator (default true): whether to include repositories which are not owned but can be contributed to as a collaborator.
forks (default false): whether to include repositories which are forks.
private (default false): whether to include private repositories.
archived (default false): whether to include archived repositories.
base_url (default https://api.github.com): the base URL of the Github API to use (for Github Enterprise support set this to https://{your_domain}/api/v3).

Directory location:

See the directory structure for all_repos.source.github.

all_repos.source.gitolite

Clones all repositories available to a user on a gitolite host.

Required source_settings:

username: the user to SSH to the server as (usually git).
hostname: the hostname of your gitolite server (e.g. git.mycompany.com).

The gitolite API is served over SSH. It is assumed that when all-repos-clone is called, it's possible to make SSH connections with the username and hostname configured here in order to query that API.

Optional source_settings:

mirror_path (default None): an optional mirror to clone repositories from. This is a Python format string, and can use the variable repo_name. This can be anything git understands, such as another remote server (e.g. gitmirror.mycompany.com:{repo_name}) or a local path (e.g. /gitolite/git/{repo_name}.git).

Directory location:

output/
+--- repos.json
+--- repos_filtered.json
+--- {repo_name1}.git/
+--- {repo_name2}.git/
+--- {repo_name3}.git/

all_repos.source.bitbucket

Clones all repositories available to a user on Bitbucket Cloud.

Required source_settings:

username: the Bitbucket username you will log in as.
app_password: the authentication method for the above user to login with. Create an application password within your account settings. We need the scope: Repositories -> Read.

all_repos.source.bitbucket_server

Clones all repositories available to a user on Bitbucket Server.

Required source_settings:

base_url: the bitbucket server URL (e.g. bitbucket.domain.com).
username: the Bitbucket username you will log in as.
app_password: the authentication method for the above user to login with. Create an application password within your account settings. We need the scope: Repositories -> Read.

Optional source_settings:

project (default None): an optional project to restrict the search for repositories.

Directory location:

output/
+--- repos.json
+--- repos_filtered.json
+--- {username1}/
     +--- {repo1}/
     +--- {repo2}/
+--- {username2}/
     +--- {repo3}/

all_repos.source.gitlab_org

Clones all repositories from an organization on gitlab.

Required source_settings:

api_key: the api key which the user will log in as. Use the settings tab (e.g. https://{gitlab.domain.com}/-/profile/personal_access_tokens) to create a personal access token. We need the scopes: read_api, read_repository.
api_key_env: alternatively, the API key can also be passed via an environment variable.
org: the organization to clone from.

Optional source_settings:

base_url (default https://gitlab.com/api/v4): the gitlab server URL.
archived (default false): whether to include archived repositories.

Directory location:

output/
+--- repos.json
+--- repos_filtered.json
+--- {org}/
     +--- {subpgroup1}/
     +--- {subpgroup2}/
     +--- {repo1}/
     +--- {repo2}/
     +--- {repo3}/
     +--- {repo4}/

Writing your own source

First create a module. This module must have the following api:

A Settings class

This class will receive keyword arguments for all values in the source_settings dictionary.

An easy way to implement the Settings class is by using a namedtuple:

Settings = collections.namedtuple('Settings', ('required_thing', 'optional'))
Settings.__new__.__defaults__ = ('optional default value',)

In this example, the required_thing setting is a required setting whereas optional may be omitted (and will get a default value of 'optional default value').

def list_repos(settings: Settings) -> Dict[str, str]: callable

This callable will be passed an instance of your Settings class. It must return a mapping from {repo_name: repository_url}. The repo_name is the directory name inside the output_dir. (A minimal end-to-end source module is sketched at the end of this entry.)

Push modules

all_repos.push.merge_to_master

Merges the branch directly to the default branch and pushes. The commands it runs look roughly like this:

git checkout main
git pull
git merge --no-ff $BRANCH
git push origin HEAD

Optional push_settings:

fast_forward (default false): if true, perform a fast-forward merge (--ff-only). If false, create a merge commit (--no-ff).

all_repos.push.github_pull_request

Pushes the branch to origin and then creates a github pull request for the branch.

Required push_settings:

api_key: the api key which the user will log in as. Use the settings tab to create a personal access token. The minimum scope required to function is public_repo, though you'll need repo to access private repositories.
api_key_env: alternatively, the API key can also be passed via an environment variable.
username: the github username you will log in as.

Optional push_settings:

fork (default false): (if applicable) a fork will be created and pushed to instead of the upstream repository. The pull request will then be made to the upstream repository.
base_url (default https://api.github.com): the base URL of the Github API to use (for Github Enterprise support set this to https://{your_domain}/api/v3).

all_repos.push.bitbucket_server_pull_request

Pushes the branch to origin and then creates a Bitbucket pull request for the branch.

Required push_settings:

base_url: the Bitbucket server URL (e.g. bitbucket.domain.com).
username: the Bitbucket username you will log in as.
app_password: the authentication method for the above user to login with. Create an application password within your account settings. We need the scope: Repositories -> Read.

all_repos.push.gitlab_pull_request

Pushes the branch to origin and then creates a GitLab pull request for the branch.

Required push_settings:

base_url: the GitLab server URL (e.g. https://{gitlab.domain.com}/api/v4).
api_key: the api key which the user will log in as. Use the settings tab (e.g. https://{gitlab.domain.com}/-/profile/personal_access_tokens) to create a personal access token. We need the scope: write_repository.
api_key_env: alternatively, the API key can also be passed via an environment variable.

all_repos.push.readonly

Does nothing.

push_settings: there are no configurable settings for readonly.

Writing your own push module

First create a module. This module must have the following api:

A Settings class

This class will receive keyword arguments for all values in the push_settings dictionary.

def push(settings: Settings, branch_name: str) -> None:

This callable will be passed an instance of your Settings class. It should deploy the branch. The function will be called with the root of the repository as the cwd.

Writing an autofixer

An autofixer applies a change over all repositories. all-repos provides several api functions to write your autofixers with:

all_repos.autofix_lib.add_fixer_args

def add_fixer_args(parser):

Adds the autofixer cli options.

Options:

--dry-run: show what would happen but do not push.
-i / --interactive: interactively approve / deny fixes.
-j JOBS / --jobs JOBS: how many concurrent jobs will be used to complete the operation. Specify 0 or -1 to match the number of cpus. (default 1).
--limit LIMIT: maximum number of repos to process (default: unlimited).
--author AUTHOR: override commit author. This is passed directly to git commit. An example: --author='Herp Derp <[email protected]>'.
--repos [REPOS [REPOS ...]]: run against specific repositories instead. This is especially useful with xargs autofixer ... --repos. This can be used to specify repositories which are not managed by all-repos.

all_repos.autofix_lib.from_cli

def from_cli(args, *, find_repos, msg, branch_name):

Parse cli arguments and produce autofix_lib primitives. Returns (repos, config, commit, autofix_settings). This is handled separately from fix to allow for fixers to adjust arguments.

find_repos: callback taking Config as a positional argument.
msg: commit message.
branch_name: identifier used to construct the branch name.

all_repos.autofix_lib.fix

def fix(
    repos, *,
    apply_fix,
    check_fix=_noop_check_fix,
    config: Config,
    commit: Commit,
    autofix_settings: AutofixSettings,
):

Apply the fix.

apply_fix: callback which will be called once per repository. The cwd when the function is called will be the root of the repository.

all_repos.autofix_lib.run

def run(*cmd, **kwargs):

Wrapper around subprocess.run which prints the command it will run. Unlike subprocess.run, this defaults check=True unless explicitly disabled.

Example autofixer

The trivial autofixer is as follows:

import argparse

from all_repos import autofix_lib

def find_repos(config):
    return []

def apply_fix():
    pass

def main(argv=None):
    parser = argparse.ArgumentParser()
    autofix_lib.add_fixer_args(parser)
    args = parser.parse_args(argv)
    repos, config, commit, autofix_settings = autofix_lib.from_cli(
        args,
        find_repos=find_repos,
        msg='msg',
        branch_name='branch-name',
    )
    autofix_lib.fix(
        repos,
        apply_fix=apply_fix,
        config=config,
        commit=commit,
        autofix_settings=autofix_settings,
    )

if __name__ == '__main__':
    raise SystemExit(main())

You can find some more involved examples in all_repos/autofix:

all_repos.autofix.azure_pipelines_autoupdate: upgrade pinned azure pipelines template repository references.
all_repos.autofix.pre_commit_autoupdate: runs pre-commit autoupdate.
all_repos.autofix.pre_commit_autopep8_migrate: migrates autopep8-wrapper from pre-commit/pre-commit-hooks to mirrors-autopep8.
all_repos.autofix.pre_commit_cache_dir: updates the cache directory for travis-ci / appveyor for pre-commit 1.x.
all_repos.autofix.pre_commit_flake8_migrate: migrates flake8 from pre-commit/pre-commit-hooks to pycqa/flake8.
all_repos.autofix.pre_commit_migrate_config: runs pre-commit migrate-config.
all_repos.autofix.setup_py_upgrade: runs setup-py-upgrade and then setup-cfg-fmt to migrate setup.py to setup.cfg.
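As referenced in the "Writing your own source" section above, here is a minimal end-to-end sketch of a custom source module. The module name my_source.py, the settings fields and the static repository list are all hypothetical; a real source would query an API instead:

```python
# my_source.py -- configure with "source": "my_source" in all-repos.json
import collections

# Settings receives the keys of source_settings as keyword arguments;
# here prefix is required and base_url gets a default.
Settings = collections.namedtuple('Settings', ('prefix', 'base_url'))
Settings.__new__.__defaults__ = ('https://git.example.com',)


def list_repos(settings):
    # Must return {repo_name: repository_url}; repo_name becomes the
    # directory name inside output_dir.
    candidates = ('team/app', 'team/lib', 'infra/tools')
    return {
        name: f'{settings.base_url}/{name}'
        for name in candidates
        if name.startswith(settings.prefix)
    }
```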
all-repos-depends
all-repos-depends

View the dependencies of your repositories. all-repos-depends is an add-on project to all-repos.

Installation

pip install all-repos-depends

CLI

To generate the database, run all-repos-depends-generate.

To run the webapp, run all-repos-depends-server. The server runs on a configurable --port.

Configuration

{
    "all_repos_config": "../all-repos/all-repos.json",
    "get_packages": [
        "all_repos_depends.packages.setup_py",
        "all_repos_depends.packages.package_json"
    ],
    "get_depends": [
        "all_repos_depends.depends.setup_py",
        "all_repos_depends.depends.requirements_tools"
    ]
}

Providers

Providers are the pluggable bits of all-repos-depends. A few providers are given for free.

The types that the providers will produce are in all_repos_depends.types:

Package = collections.namedtuple('Package', ('type', 'key', 'name'))
Depends = collections.namedtuple(
    'Depends',
    ('relationship', 'package_type', 'package_key', 'spec'),
)

If a provider encounters a detectable error state, it should raise an exception of the type all_repos_depends.errors.DependsError.

package providers

A package provider will be called while the cwd is at the root of the repository. It must return an all_repos_depends.types.Package that the repository provides (or None if it is not applicable).

A few are provided out of the box (PRs welcome for more!):

all_repos_depends.packages.setup_py

This package provider reads the ast of setup.py and searches for the name keyword argument. For now this means it will only be able to read setup.py files which have python3-compatible syntax and set their name literally.

all_repos_depends.packages.package_json

Reads the name field out of an npm package.json file.

depends providers

A depends provider will be called while the cwd is at the root of the repository. It must return a sequence of all_repos_depends.types.Depends that the repository provides (or an empty sequence if it is not applicable).

all_repos_depends.depends.setup_py

This depends provider reads the ast of setup.py and searches for the install_requires keyword argument.

all_repos_depends.depends.requirements_tools

This depends provider reads the following requirements files according to the conventions for requirements-tools:

requirements-minimal.txt (DEPENDS)
requirements.txt (REQUIRES)
requirements-dev-minimal.txt (DEPENDS_DEV)
requirements-dev.txt (REQUIRES_DEV if -minimal is present otherwise DEPENDS_DEV)
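To make the provider contract above concrete, here is a minimal sketch of a package provider in the spirit of all_repos_depends.packages.package_json. The entry-point name get_package and the 'npm' type/key convention are assumptions; the text above only fixes the cwd and the return type:

```python
import json
import os

from all_repos_depends.types import Package


def get_package():
    # Called with the cwd at the root of the repository; return None
    # when the repository is not an npm package.
    if not os.path.exists('package.json'):
        return None
    with open('package.json') as f:
        name = json.load(f).get('name')
    if not name:
        return None
    return Package(type='npm', key=f'npm:{name}', name=name)
```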
all-repos-envvar
all-repos-envvar

Source Code: https://github.com/browniebroke/all-repos-envvar

An all-repos extension to read values from environment variables.

Installation

Install this via pip (or your favourite package manager):

pip install all-repos-envvar

Usage

This library should be installed alongside all-repos so that it's findable at import time. It provides a custom source and push to get the GitHub API key from an environment variable GITHUB_API_KEY (reading from an .env file is also supported), allowing you to omit it from the config:

{
    "output_dir": "output",
    "source": "all_repos_envvar.source",
    "source_settings": {"username": "browniebroke"},
    "push": "all_repos_envvar.push",
    "push_settings": {"username": "browniebroke"}
}

I wanted this feature, but it looks like it won't be implemented in the main repo, hence this little extension. The source module extends all_repos.source.github and the push module extends all_repos.push.github_pull_request.

Contributors ✨

Thanks goes to these wonderful people (emoji key): Bruno Alla 💻🤔📖

This project follows the all-contributors specification. Contributions of any kind welcome!

Credits

This package was created with Cookiecutter and the browniebroke/cookiecutter-pypackage project template.
allrights
Failed to fetch description. HTTP Status Code: 404
allround-blogs
Failed to fetch description. HTTP Status Code: 404
allround-utils
master package for utils for allround ecosystem
allrucodes
allrudirectory

Python package with a collection of official all-Russian code catalogs.

Documentation: https://naydyonov.github.io/allrucodes
GitHub: https://github.com/naydyonov/allrucodes
PyPI: https://pypi.org/project/allrucodes/
Free software: MIT

Features

For now it contains a few all-Russian classifiers:

OKSM (ОКСМ) - All-Russian classifier of world countries (Общероссийский классификатор стран мира)
OKOPF (ОКОПФ) - All-Russian classifier of organizational and legal forms (Общероссийский классификатор организационно-правовых форм)

Credits

This package was created with Cookiecutter and the waynerv/cookiecutter-pypackage project template.
all_search_module
General Search code for all the properties
allset
A small python utility for auto-completing __all__ and binding sub-modules in __init__.py files.

How to Use

Add these lines to the top of your __init__.py:

import allset
allset.set_all_submodules(globals())
allset.bind_all_submodules(globals())
del allset

Now you can reference any sub-module with import mysubmodule or from mysubmodule import SubModClassDef. Additionally, from mymodule import * will work as though you specified all sub-modules in __all__ manually.

What's it do?

set_all_submodules sets up your __all__ variable by auto-detecting the files and sub-modules in the current directory.

bind_all_submodules takes the submodules found in set_all_submodules and applies them to the current namespace.
all-shortcuts
shortcuts

An app that simplifies commands on the command line; it supports remote connections to other nodes and to databases.

Popular commands: go, get, send
allsortalgo
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Sorting Algorithms

This project includes implementations of the following sorting algorithms:

1. Bubble sort: in bubble sort, we repeatedly swap adjacent elements if they are in the wrong order. Its time complexity is O(N*N).
2. Insertion sort: in insertion sort, we start with a sub-list of size 2, and insert the right-most elements into the proper location in such a way that the list remains sorted. Its time complexity is O(N*N).
3. Merge sort: in merge sort, we use a divide and conquer strategy: we sort the sublists and merge them recursively. Its time complexity is O(N*logN).

I will be adding more sorting algorithms to this project.
allspark
UNKNOWN
all_spark_cube_client
See docs online at http://chadharrington.github.io/all_spark_cube/
allspeak
Allspeak

Allspeak is a pythonic (yet ironically inspired by Rails) internationalization and localization solution for Python web applications.

It's flexible, easy to use and, unlike gettext, independent of any external compilation tool.

This library does not use gettext (we find it cumbersome, to say the least), but instead works with translations in YAML files, compatible with those used by the Rails internationalization system, so you can use any third-party service already compatible with them (for example, Transifex).

It is powered by the Babel and pytz libraries and tested with Python 3.5+.

Read the documentation here: http://allspeak.lucuma.co

What's in a name?

"When Thor speaks with the All-Speak anyone who hears him will hear him speak their native language in their hearts" (from Thor's wiki page)

Run the tests

$ pip install .
$ pip install .[testing]

To run the tests in your current Python version do:

$ make test

To run them in every supported Python version do:

$ tox

It might be also necessary to run the coverage report to make sure all lines of code are touched by the tests:

$ make coverage

Our test suite runs continuously on Travis CI with every update.
allsql
SQLITE3

1 - Import file
from allsql import Sqlite

2 - Sqlite([local database])
db = Sqlite('database.db')

3 - db.create_database([database])
db.create_database(db)

4 - db.create_table([table], [columns type parameters])
db.create_table('tb_test', 'id integer primary key autoincrement, name text, status integer')

5 - db.truncate_table([table])
db.truncate_table('tb_test')

6 - db.drop_table([table])
db.drop_table('tb_test')

7 - db.insert([table], [columns], [values])
db.insert('tb_test', 'name,status', '"Joe Climb",0')

8 - db.update([table], [set], [where])
db.update('tb_test', 'name = "Joe Caruzo", status = 0', 'status = 1')

9 - db.delete([table], [where])
db.delete('tb_test', 'id = 1, status = 0')

10 - db.select([table], [columns], [where], [groupby], [orderby])
print(db.select('tb_test', 'name,count(*)', 'id > 0, status != 0', 'name', 'name asc'))

11 - db.sql([query])
print(db.sql('select * from tb_test'))

MYSQL

1 - Import file
from allsql import Mysql

2 - Mysql([user], [password], [host], [sid])
db = Mysql('<user>', '<user_pass>', '<host>', '<dba>')

3 - db.create_table([table], [columns type parameters])
db.create_table('tb_test', 'id integer primary key auto_increment, name text, status integer')

4 - db.truncate_table([table])
db.truncate_table('tb_test')

5 - db.drop_table([table])
db.drop_table('tb_test')

6 - db.insert([table], [columns], [values])
db.insert('tb_test', 'name,status', '"Joe Climb",0')

7 - db.update([table], [set], [where])
db.update('tb_test', 'name = "Joe Caruzo", status = 0', 'status = 1')

8 - db.delete([table], [where])
db.delete('tb_test', 'id = 1, status = 0')

9 - db.select([table], [columns], [where], [groupby], [orderby])
print(db.select('tb_test', 'name,count(*)', 'id > 0, status != 0', 'name', 'name asc'))

10 - db.sql([query])
print(db.sql('select * from tb_test'))
allsql-utils
MIT License

Copyright (c) [year] [fullname]

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

For more methods, please see the module, or contact the author to discuss.
allstar
AllStar

"allstar" is a simple library for managing the __all__ attribute in modules.

The __all__ attribute is used to specify what is imported when using the from module import * syntax. This type of import is commonly used by project mapping tools like documentation generators. It is important to manage __all__ because, for example, you may not want to generate documentation for imported classes, only for the ones you have created.

The library is named "allstar" because the __all__ attribute is imported using the star (*) symbol.

Installation

pip install allstar

Usage

Basic usage

from allstar import Star

# manager instance
# __name__ is used to reference the module and its attributes
star = Star(__name__)

# star.sign adds a callable to the __all__ iterable
@star.sign
class TheClass:
    pass

@star.sign
def the_function():
    pass

print(__all__)  # prints: ['TheClass', 'the_function']

Extended usage

import os, sys, builtins
from allstar import Star

__all__ = ['os']  # Star preserves the previous names

star = Star(__name__)

star.include('builtins')  # include names using strings or references
star.include_all(['os', 'sys'])  # include names using iterables

Some extra features

from allstar import Star

star = Star(__name__)

star.empty()  # empties the __all__ iterable
star.freeze()  # turns the __all__ iterable into a tuple

Author

Alejandro Alvarez - [email protected] - https://github.com/virtualitems/

Project

https://pypi.org/project/allstar/
https://github.com/virtualitems/allstar

License

MIT License

Copyright (c) 2022 Virtual Items

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
all-stats-distributions
No description available on PyPI.
all_test
UNKNOWN
all-the-chatbots
No description available on PyPI.
allthedots
No description available on PyPI.
allthekernels
All the Kernels!

A Jupyter kernel that multiplexes all the kernels you have installed.

Installation

Download and extract the contents of this repository or clone it with

git clone https://github.com/minrk/allthekernels.git

Install with:

python3 -m pip install allthekernels

Or to do a dev install from source:

python3 -m pip install -e .
# manually install kernelspec in your environment, e.g.:
# mkdir ~/mambaforge/envs/allthekernels/share/jupyter/kernels/atk
# cp atk/kernel.json ~/mambaforge/envs/allthekernels/share/jupyter/kernels/atk

Usage

Specify which kernel a cell should use with >kernelname. If no kernel is specified, IPython will be used.

(All the things source)

Making a release

Anyone with push access to this repo can make a release. We use tbump to publish releases. tbump updates version numbers and publishes the git tag of the version. Our GitHub Actions then build the releases and publish them to PyPI.

The steps involved:

install tbump: pip install tbump
tag and publish the release: tbump $NEW_VERSION

That's it!
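To illustrate the >kernelname convention from the Usage section, a cell running under the atk kernel might look like this (assuming a python3 kernelspec is installed; a cell without the prefix falls through to the default IPython kernel):

```
>python3
import sys
print(sys.version)
```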
allthethings
allthethings

Various utilities augmenting the python standard library.

base_convert
  base_convert(s, from_base: int, to_base: int, alphabet="0123456789abcdefghijklmnopqrstuvwxyz") -> str

class DateRange
  ...

at_end_of_month
  at_end_of_month(d: date) -> date
  Returns a date at the end of the month of the given date.
  at_end_of_month(read_date('2022-02-03')) --> '2022-02-28'
  at_end_of_month(read_date('2020-02-03')) --> '2020-02-29'

read_date
  read_date(value: Union[date, str]) -> date

make_dsn
  make_dsn(protocol: str, *, host: str, port: int, database: str, username: str, password: str) -> str

make_postgres_dsn
  make_postgres_dsn(*, host: str, database: str, username: str, password: str, port: int = 5432) -> str

grouper
  grouper(iterable: Iterable[U], n) -> Iterator[List[U]]

groupby
  groupby(f: Callable[[U], R], xs: Iterable[U]) -> Dict[R, List[U]]
  like Scala's groupby, unlike Haskell's/Python's groupby

dedup
  dedup(xs: Iterable[U], on=lambda x: x) -> Iterator[U]

range_incl
  range_incl(lower: E, upper: E, step: Optional[Union[Callable[[E], E], Number]] = None) -> Iterator[E]
  with E = TypeVar('E', Number, date, covariant=True)

range_excl
  range_excl(lower: E, upper: E, step: Optional[Union[Callable[[E], E], Number]] = None) -> Iterator[E]
  with E = TypeVar('E', Number, date, covariant=True)

class Stopwatch
  ...
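The grouping helpers above are documented only by signature, so the sketch below shows the behavior those signatures and the Scala comparison suggest; the flat import path and the handling of the final short chunk are assumptions:

```python
from allthethings import dedup, groupby, grouper

# grouper: chunk an iterable into lists of (at most) n items
print(list(grouper(range(7), 3)))   # e.g. [[0, 1, 2], [3, 4, 5], [6]]

# groupby: Scala-style, keyed over the whole iterable -- unlike
# itertools.groupby, which only groups consecutive runs
print(groupby(lambda x: x % 2, [1, 2, 3, 4]))   # e.g. {1: [1, 3], 0: [2, 4]}

# dedup: drop duplicates, keyed by `on`, preserving first-seen order
print(list(dedup([3, 1, 3, 2, 1])))  # e.g. [3, 1, 2]
```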
allthingsnlp
All-things-NLP

This repository is a collection of different NLP tasks.
allthingstalk
No description available on PyPI.
alltime
time-seriesGeneral purpose time-series predictive modeling package.
alltime-athletics-python
No description available on PyPI.
all-time-convert
No description available on PyPI.
alltrees
alltrees Package

Our package contains:
1. AVL Tree
2. B Tree
3. B+ Tree
4. Binary Search Tree
5. Red Black Tree

This package can be used for storing data and for fast retrieval of the stored data. Different operations can be performed easily, such as:
1. Insertion
2. Deletion
3. Searching
4. Determination of Height
5. Determination of Balancing factor
6. Finding Maximum element
7. Finding Minimum element
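The entry lists operations without call signatures, so as a generic illustration (plain Python, not this package's API), here is how insertion, searching and height determination work on a binary search tree:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Walk down: smaller keys go left, larger (or equal) go right.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

def height(root):
    # Number of nodes on the longest root-to-leaf path.
    return 0 if root is None else 1 + max(height(root.left), height(root.right))

root = None
for k in (8, 3, 10, 1, 6):
    root = insert(root, k)
print(search(root, 6), height(root))  # -> True 3
```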
alltweets
A very simple Twitter crawler that can collect all friends, followers, and tweets of a specified user.
all-twitter-scraper
all_twitter_scraper

All twitter scraper gets all tweets filtered by the parameters that are in twitter advanced search, such as tweets that: include specific keywords, do not include specific keywords, include an exact phrase, include specific hashtags, etc.

Table of Contents

Installation
Usage

Installation

To install the all_twitter_scraper package, simply run:

pip install all_twitter_scraper

Usage

An example that covers most of the package parameters:

from all_twitter_scraper.modules.Twitter_Scraper import Twitter_Scraper
from datetime import date
from all_twitter_scraper.modules.Criteria import Criteria

if __name__ == "__main__":
    criteria = Criteria()
    criteria.set_language('ar')
    criteria.set_since_date(date(2019, 1, 1))
    criteria.set_until_date(date(2020, 4, 1))
    criteria.set_all_of_keywords(['Mohammed', 'Aly'])

    twitter_scraper = Twitter_Scraper()
    tweets = twitter_scraper.scrap(criteria, return_tweets_list=True)

Buy me a coffee
allude
allude

Building precise functionality from vague specifications.

To install: pip install allude
alluka
AllukaA type based dependency injection framework for Python 3.9+.UsageFor how to get started with this library, see thedocumentation.InstallationYou can install Alluka from PyPI using the following command in any Python 3.9 or above environment.python -m pip install -U allukaContributingBefore contributing you should read through thecontributing guidelinesand thecode of conduct.
allumette
A library with whatever I find missing in pytorch! Mostly stuff for logging.
allure2-behave
No description available on PyPI.
allure-adaptor-nio
No description available on PyPI.