package | package-description
---|---
af-remote-html-runtime-manager | AristaFlow BPM REST Client library for connecting to the RuntimeManager/RemoteHTMLRuntimeManager endpoint. https://pypi.org/project/aristaflowpy/ provides APIs on top of the AristaFlow BPM REST endpoints for many use cases, so using aristaflowpy is recommended. |
af-res-model-manager | AristaFlow BPM REST Client library for connecting to the ResModelManager/ResModelManager endpoint. https://pypi.org/project/aristaflowpy/ provides APIs on top of the AristaFlow BPM REST endpoints for many use cases, so using aristaflowpy is recommended. |
africamonitor | Africa Macroeconomic Monitor Database API. A python API providing access to a relational database with macroeconomic data for Africa. The database contains >700 macroeconomic time series from mostly international sources, grouped into 50 macroeconomic and development-related topics. Series are carefully selected on the basis of data coverage for Africa and relevance to the macro-development context. The project is part of the Kiel Institute Africa Initiative, which, amongst other things, aims to develop a parsimonious database with highly relevant indicators to monitor macroeconomic developments in Africa, accessible through a fast API and a web-based platform at https://africamonitor.ifw-kiel.de/. |
africanelephantdatabasedatadownloader | Download data from the African Elephant Database

The African Elephant Database (http://africanelephantdatabase.org/) is an online effort by the IUCN SSC African Elephant Specialist Group (AfESG) to gather data from different surveys and combine it with past African Elephant Status Reports (published by the same group). The database is freely accessible online via a web user interface, and released under a Creative Commons Attribution-NonCommercial-ShareAlike license. Unfortunately, at the time of this writing, the AfESG did not have means to access the backend and retrieve raw data.

This script sifts through the online user interface and downloads all data contained in the "Elephant Estimates" columns as well as the spatial geometry of each stratum (the smallest area reported). It retains the hierarchy of spatial units by referencing the higher-order units in attributes, in order to allow the reconstruction of data on the level of input systems, countries, regions and the entire continent.

If you use python-africanelephantdatadownloader for scientific research, please cite it in your publication: Fink, C. (2019): python-africanelephantdatabasedatadownloader: a Python utility to download the most up-to-date data from the African Elephant Database. doi:10.5281/zenodo.3243872

Dependencies

The script is written in Python 3 and depends on the Python modules BeautifulSoup4, GeoPandas, Shapely and requests. To install all dependencies on a Debian-based system, run:

apt-get update -y && apt-get install -y python3-dev python3-pip python3-virtualenv \
    python3-bs4 python3-geopandas python3-requests python3-shapely

(There's an Archlinux AUR package pulling in all dependencies, see further down.)

Installation

Using pip or similar:

pip3 install -u africanelephantdatabasedatadownloader

OR: manually. Clone this repository:

git clone https://gitlab.com/helics-lab/python-africanelephantdatabasedatadownloader.git

Change to the cloned directory, then use the Python setuptools to install the package:

cd python-africanelephantdatabasedatadownloader
python3 ./setup.py install

OR: (Arch Linux only) from AUR:

# e.g. using yaourt
yaourt python-africanelephantdatabasedatadownloader

Usage

Run aed-downloader [outputFile]. It will download all data (be patient) and save it in GeoPackage format to outputFile (default is output.gpkg in the current working directory).
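The downloaded GeoPackage can then be inspected with GeoPandas, which is already one of the dependencies listed above. A minimal sketch, assuming the documented default output path:

import geopandas as gpd

# read the strata and their "Elephant Estimates" attributes from the default output file
gdf = gpd.read_file("output.gpkg")
print(gdf.head())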
|
africanwordnet | AfricanWordNet: Implementation of WordNets for African languages

This library extends OMW implemented in NLTK to add support for the following African languages: Sepedi (nso), Setswana (tsn), Tshivenda (ven), isiZulu (zul), isiXhosa (xho).

Requirements: Python 3, NLTK

Installation

From PyPI:
pip install africanwordnet

From source:
pip install https://github.com/JosephSefara/AfricanWordNet.git

Citation Paper

@inproceedings{sefara2020practical,
  title={Paper Title},
  author={Sefara, Tshephisho and Mokgonyane, Tumisho and Marivate, Vukosi},
  booktitle={Proceedings of the Eleventh Global Wordnet Conference},
  pages={},
  year={2020},
}

Usage

>>> from nltk.corpus import wordnet as wn
>>> import africanwordnet
>>> wn.langs()
['nso', 'tsn', 'ven', 'zul', 'xho']

Setswana WordNet

>>> wn.synsets('phêpafatsa', lang='tsn')
[Synset('scavenge.v.04'), Synset('tidy.v.01'), Synset('refine.v.04'), Synset('refine.v.03'), Synset('purify.v.01'), Synset('purge.v.04'), Synset('purify.v.02'), Synset('clean.v.08'), Synset('clean.v.01'), Synset('houseclean.v.01')]
>>> wn.lemmas('phêpafatsa', lang='tsn')
[Lemma('scavenge.v.04.phêpafatsa'), Lemma('tidy.v.01.phêpafatsa'), Lemma('refine.v.04.phêpafatsa'), Lemma('refine.v.03.phêpafatsa'), Lemma('purify.v.01.phêpafatsa'), Lemma('purge.v.04.phêpafatsa'), Lemma('purify.v.02.phêpafatsa'), Lemma('clean.v.08.phêpafatsa'), Lemma('clean.v.01.phêpafatsa'), Lemma('houseclean.v.01.phêpafatsa')]
>>> wn.synset('purify.v.01').lemma_names('tsn')
['phêpafatsa']
>>> lemma = wn.lemma('purify.v.01.phêpafatsa', lang='tsn')
>>> lemma.lang()
'tsn'

Sepedi WordNet

>>> wn.synsets('taelo', lang='nso')
[Synset('call.n.12'), Synset('mandate.n.03'), Synset('command.n.01'), Synset('order.n.01'), Synset('commission.n.06'), Synset('commandment.n.01'), Synset('directive.n.01'), Synset('injunction.n.01')]
>>> wn.lemmas('taelo', lang='nso')
[Lemma('call.n.12.taelo'), Lemma('mandate.n.03.taelo'), Lemma('command.n.01.taelo'), Lemma('order.n.01.taelo'), Lemma('commission.n.06.taelo'), Lemma('commandment.n.01.taelo'), Lemma('directive.n.01.taelo'), Lemma('injunction.n.01.taelo')]
>>> wn.synset('call.n.12').lemma_names('nso')
['taelo']
>>> lemma = wn.lemma('call.n.12.taelo', lang='nso')
>>> lemma.lang()
'nso'

isiZulu WordNet

>>> wn.synsets('iqoqo', lang='zul')
[Synset('whole.n.02'), Synset('conspectus.n.01'), Synset('overview.n.01'), Synset('sketch.n.03'), Synset('compilation.n.01'), Synset('collection.n.01'), Synset('team.n.02'), Synset('set.n.01')]
>>> wn.lemmas('iqoqo', lang='zul')
[Lemma('whole.n.02.iqoqo'), Lemma('conspectus.n.01.iqoqo'), Lemma('overview.n.01.iqoqo'), Lemma('sketch.n.03.iqoqo'), Lemma('compilation.n.01.iqoqo'), Lemma('collection.n.01.iqoqo'), Lemma('team.n.02.iqoqo'), Lemma('set.n.01.iqoqo')]
>>> wn.synset('whole.n.02').lemma_names('zul')
['iqoqo']
>>> whole_lemma = wn.lemma('whole.n.02.iqoqo', lang='zul')
>>> whole_lemma.lang()
'zul'

isiXhosa WordNet

>>> wn.synsets('imali', lang='xho')
[Synset('finance.n.03'), Synset('wealth.n.04'), Synset('capital.n.01'), Synset('store.n.02'), Synset('credit.n.02'), Synset('money.n.01'), Synset('currency.n.01'), Synset('purse.n.02'), Synset('franc.n.01'), Synset('cent.n.01')]
>>> wn.lemmas('imali', lang='xho')
[Lemma('finance.n.03.imali'), Lemma('wealth.n.04.imali'), Lemma('capital.n.01.imali'), Lemma('store.n.02.imali'), Lemma('credit.n.02.imali'), Lemma('money.n.01.imali'), Lemma('currency.n.01.imali'), Lemma('purse.n.02.imali'), Lemma('franc.n.01.imali'), Lemma('cent.n.01.imali')]
>>> wn.synset('wealth.n.04').lemma_names('xho')
['imali']
>>> lemma = wn.lemma('wealth.n.04.imali', lang='xho')
>>> lemma.lang()
'xho'

Tshivenda WordNet

>>> wn.synsets('tshifanyiso', lang='ven')
[Synset('picture.n.05'), Synset('word_picture.n.01'), Synset('portrayal.n.01')]
>>> wn.lemmas('tshifanyiso', lang='ven')
[Lemma('picture.n.05.tshifanyiso'), Lemma('word_picture.n.01.tshifanyiso'), Lemma('portrayal.n.01.tshifanyiso')]
>>> wn.synset('picture.n.05').lemma_names('ven')
['tshifanyiso']
>>> lemma = wn.lemma('picture.n.05.tshifanyiso', lang='ven')
>>> lemma.lang()
'ven'

Find related words

The word taelo in Sepedi is related to tagafalo, molao and tlhalošo:

words = set()
synsets = wn.synsets('taelo', lang='nso')
for synset in synsets:  # synset is in English
    for hypo in synset.hyponyms():
        for lemma in hypo.lemmas('nso'):
            words.add(lemma.name())
print('taelo', '---', words)

taelo --- {'taelo', 'tagafalo', 'molao', 'tlhalošo'} |
africastalking | africastalking-python

The SDK provides convenient access to the Africa's Talking APIs from python apps.

Documentation

Take a look at the API docs here.

Install

$ pip install africastalking    # python 2.7.x
OR
$ python -m pip install africastalking    # python 2.7.x
OR
$ pip3 install africastalking    # python 3.6.x
OR
$ python3 -m pip install africastalking    # python 3.6.x

Usage

The package needs to be configured with your app username and API key, which you can get from the dashboard. You can use this SDK for either production or sandbox apps. For sandbox, the app username is ALWAYS sandbox.

# import package
import africastalking

# Initialize SDK
username = "YOUR_USERNAME"   # use 'sandbox' for development in the test environment
api_key = "YOUR_API_KEY"     # use your sandbox app API key for development in the test environment
africastalking.initialize(username, api_key)

# Initialize a service e.g. SMS
sms = africastalking.SMS

# Use the service synchronously
response = sms.send("Hello Message!", ["+2547xxxxxx"])
print(response)

# Or use it asynchronously
def on_finish(error, response):
    if error is not None:
        raise error
    print(response)

sms.send("Hello Message!", ["+2547xxxxxx"], callback=on_finish)

See example for more usage examples.

Initialization

Initialize the SDK by calling africastalking.initialize(username, api_key). After initialization, you can get instances of offered services as follows:

- SMS: africastalking.SMS
- Airtime: africastalking.Airtime
- Voice: africastalking.Voice
- Token: africastalking.Token
- Application: africastalking.Application
- MobileData: africastalking.MobileData

Application

fetch_application_data(): Get app information, e.g. balance.

Airtime

send(recipients: [dict]): Send airtime (see the sketch after this list)
- recipients: Contains an array of dicts containing the following keys:
  - phone_number: Recipient of airtime
  - amount: Amount to send with currency, e.g. 100
  - currency_code: 3-digit ISO format currency code (e.g. KES, USD, UGX etc.)
  - max_num_retry: This allows you to specify the maximum number of retries in case of failed airtime deliveries due to various reasons such as telco unavailability. The default retry period is 8 hours and retries occur every 60 seconds. For example, setting max_num_retry=4 means the transaction will be retried every 60 seconds for the next 4 hours. OPTIONAL.
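A minimal sketch of an airtime call based on the parameter list above; the phone number, amount and currency are placeholder values:

import africastalking

africastalking.initialize("sandbox", "YOUR_API_KEY")
airtime = africastalking.Airtime

# recipient keys follow the documentation above
recipients = [{
    "phone_number": "+2547xxxxxx",
    "amount": 100,
    "currency_code": "KES",
}]
response = airtime.send(recipients=recipients)
print(response)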
Sms

send(message: str, recipients: [str], sender_id: str = None, enqueue: bool = False): Send a message.
- message: SMS content. REQUIRED
- recipients: An array of phone numbers. REQUIRED
- sender_id: Shortcode or alphanumeric ID that is registered with your Africa's Talking account.
- enqueue: Set to true if you would like to deliver as many messages to the API without waiting for an acknowledgement from telcos.

send_premium(message: str, short_code: str, recipients: [str], link_id: [str] = None, retry_duration_in_hours: [int] = None): Send a premium SMS
- message: SMS content. REQUIRED
- short_code: Your premium product shortCode. REQUIRED
- recipients: An array of phone numbers. REQUIRED
- keyword: Your premium product keyword.
- link_id: We forward the linkId to your application when a user sends a message to your onDemand service.
- retry_duration_in_hours: This specifies the number of hours your subscription message should be retried in case it's not delivered to the subscriber.

fetch_messages(last_received_id: int = 0): Fetch your messages
- last_received_id: This is the id of the message you last processed. Defaults to 0.

create_subscription(short_code: str, keyword: str, phone_number: str): Create a premium subscription
- short_code: Premium short code mapped to your account. REQUIRED
- keyword: Premium keyword under the above short code and is also mapped to your account. REQUIRED
- phone_number: PhoneNumber to be subscribed. REQUIRED

fetch_subscriptions(short_code: str, keyword: str, last_received_id: int = 0): Fetch your premium subscription data
- short_code: Premium short code mapped to your account. REQUIRED
- keyword: Premium keyword under the above short code and mapped to your account. REQUIRED
- last_received_id: ID of the subscription you believe to be your last. Defaults to 0.

delete_subscription(short_code: str, keyword: str, phone_number: str): Delete a phone number from a premium subscription
- short_code: Premium short code mapped to your account. REQUIRED
- keyword: Premium keyword under the above short code and is also mapped to your account. REQUIRED
- phone_number: PhoneNumber to be unsubscribed. REQUIRED

MobileData

send(product_name: str, recipients: dict): Send mobile data to customers (see the sketch after this list).
- product_name: Payment product on Africa's Talking. REQUIRED
- recipients: A list of recipients. Each recipient has:
  - phoneNumber: Customer phone number (in international format). REQUIRED
  - quantity: Mobile data amount. REQUIRED
  - unit: Mobile data unit. Can either be MB or GB. REQUIRED
  - validity: How long the mobile data is valid for. Must be one of Day, Week and Month. REQUIRED
  - isPromoBundle: This is an optional field that can be either true or false. OPTIONAL
  - metadata: Additional data to associate with the transaction. OPTIONAL

find_transaction(transaction_id: str): Find a mobile data transaction.
fetch_wallet_balance(): Fetch a mobile data product balance.
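A minimal sketch of the MobileData call documented above; the product name and recipient values are placeholders:

import africastalking

africastalking.initialize("sandbox", "YOUR_API_KEY")
mobile_data = africastalking.MobileData

recipients = [{
    "phoneNumber": "+2547xxxxxx",  # international format
    "quantity": 50,                # mobile data amount
    "unit": "MB",                  # MB or GB
    "validity": "Day",             # Day, Week or Month
}]
response = mobile_data.send(product_name="My Data Product", recipients=recipients)
print(response)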
Token

generate_auth_token(): Generate an auth token to use for authentication instead of an API key.

Ussd

For more information, please read https://developers.africastalking.com/docs/ussd

Development

$ git clone https://github.com/AfricasTalkingLtd/africastalking-python.git
$ cd africastalking-python
$ touch .env

Make sure your .env file has the following content, then run python -m unittest discover -v:

# AT API
USERNAME=sandbox
API_KEY=some_key

Issues

If you find a bug, please file an issue on our issue tracker on GitHub. |
africastalking-python | No description available on PyPI. |
africunia | # Python Library for AfrIcunia




Documentation: [stable](http://python-africunia.readthedocs.io/en/latest/) · [develop](http://python-africunia.readthedocs.io/en/develop/)
## Documentation

Visit the [pybitshares website](http://docs.pybitshares.com/en/latest/) for in-depth documentation on this Python library.

## Installation

### Install with pip3:

$ sudo apt-get install libffi-dev libssl-dev python-dev python3-dev python3-pip
$ pip3 install africunia

### Manual installation:

$ git clone https://github.com/africunia/python-africunia/
$ cd python-africunia
$ python3 setup.py install --user

### Upgrade

$ pip3 install --user --upgrade africunia

## Contributing

python-africunia welcomes contributions from anyone and everyone. Please see our [guidelines for contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md).

### Discussion and Developers

Discussions around development and use of this library can be found in a [dedicated Telegram Channel](https://t.me/pybitshares).

### License

A copy of the license is available in the repository's [LICENSE](LICENSE.txt) file.

### Bounties

As part of [HackTheDex](https://hackthedex.io), security issues found in this library are potentially eligible for a bounty. |
afrigis | # Afrigis Python Library

### Installation

$ pip install afrigis

### Services:

#### Geocode

Example of using the Geocode service:

from afrigis.services import geocode

result = geocode('AFRIGIS_KEY', 'AFRIGIS_SECRET', 'NADID | SEOID')
print(result)  # {'number_of_records': 4, ...}

### Running tests

$ py.test

### Building and pushing to pypi

> In order to do this, please make sure you are authenticated against PyPI first :).
> You can do this with the following method: https://docs.python.org/3/distutils/packageindex.html#the-pypirc-file

$ python setup.py sdist upload |
afrikpay-pip-sdk | Simple python package

Explore the docs » · View Demo · Report Bug · Request Feature

Table of Contents: Getting Started · Usage (Ecommerce Integration, Bill Integration, Airtime Integration, Account Integration) · License · Contact · Acknowledgements

Getting Started

This python library was created to facilitate the integration of our payment api by our partners. It is an ongoing work. Suggestions to improve the api are welcome.

Ecommerce integration

Let's suppose you want to integrate ecommerce payments on your system. Here are the main steps to get the job done in the development environment. You can uncomment the code to test the other apis.

from afrikpay_pip_sdk import Ecommerce
#testing some ecommerce api
ecommerce = Ecommerce(
'AFC6617',
'661671d0bd7bef499e7d80879c27d95e',
'7777',
'http://34.86.5.170:8086/api/ecommerce/collect/',
'http://34.86.5.170:8086/api/ecommerce/payout/',
'http://34.86.5.170:8086/api/ecommerce/deposit/',
'http://34.86.5.170:8086/api/ecommerce/changekey/',
'http://34.86.5.170:8086/api/ecommerce/transaction/status/')
#ecommerce payment with mtn
response = ecommerce.collect(
'mtn_mobilemoney_cm',
'677777777',
4000,
'123456'
)
print(response)
# ecommerce payment with orange
# response = ecommerce.collect(
# 'orange_money_cm',
# '699999999',
# 400,
# 0000
# )
#print(response)
#response = ecommerce.deposit()
#print(response)
# change ecommerce apikey
#response = ecommerce.changeKey()
#print(response)
# get ecommerce transaction status
#response = ecommerce.transactionStatus('128_1622044090')
#print(response)

Bill integration

If you want to integrate the bill payment apis on your system, here are the main steps to get the job done in the development environment. You can uncomment the code to test the other apis.

from afrikpay_pip_sdk import Bill
#testing bill api
bill = Bill(
'3620724907638658',
'3620724907638658',
'e825e83873eafffff315fc3f22db2d59',
'afrikpay',
'http://34.86.5.170:8086/api/bill/v2/',
'http://34.86.5.170:8086/api/bill/getamount/',
'http://34.86.5.170:8086/api/bill/status/v2/')
#camwater
response = bill.payBill(
'camwater',
'111111111111111',
1000,
'cash',
'96543'
)
print(response)
# response = bill.getBillAmount(
# 'camwater',
# '111111111111111',
# )
#print(response)
# response = bill.getBillStatus(
# '96543',
# )
# print(response)
#eneoprepay
# response = bill.payBill(
# 'eneoprepay',
# '014111111111',
# 1000,
# 'cash',
# 'qsde14'
# )
# print(response)
# response = bill.getBillAmount(
# 'eneoprepay',
# '014111111111',
# )
# print(response)
# response = bill.getBillStatus(
# 'qsde14',
# )
# print(response)

Airtime integration

If you want to integrate the airtime apis on your system, here are the main steps to get the job done in the development environment. You can uncomment the code to test the other apis.

from afrikpay_pip_sdk import Airtime
#testing airtime apis
airtime = Airtime(
'3620724907638658',
'3620724907638658',
'e825e83873eafffff315fc3f22db2d59',
'afrikpay',
'http://34.86.5.170:8086/api/airtime/v2/',
'http://34.86.5.170:8086/api/airtime/status/v2/')
response = airtime.makeAirtime(
'mtn',
'677777777',
1000,
'cash',
'123456789'
)
print(response)
# response = airtime.airtimeStatus(
# '123456789'
# )
# print(response)

Account integration

If you want to integrate the account apis on your system, here are the main steps to get the job done in the development environment. You can uncomment the code to test the other apis.

from afrikpay_pip_sdk import Account
#testing account apis
account = Account(
'3620724907638658',
'3620724907638658',
'e825e83873eafffff315fc3f22db2d59',
'http://34.86.5.170:8086/api/account/transaction/status/',
'http://34.86.5.170:8086/api/account/agent/balance/v2/',
'http://34.86.5.170:8086/api/account/developer/changekey/')
response = account.balance()
print(response)

How to switch to production?

You can explore the src folder to see the default production setup. Just use the appropriate apikey, store code and agentid for production. If you have any problem using the library, please contact us; we will be happy to help you.

License

Contact

Project Link: https://github.com/Georges-Ngandeu/SimplePythonPackage

Acknowledgements |
afrinet-automation | In the rapidly evolving landscape of cloud computing, effective management and orchestration of resources across diverse cloud platforms are imperative. Our Cloud Automation solution is designed to provide a unified and comprehensive approach to automate processes, workflows, and tasks across various cloud environments, including but not limited to AWS, Azure, Google Cloud, and more. |
afrinet-cloud-automation | Failed to fetch description. HTTP Status Code: 404 |
afrinet-cloud-automator | In the rapidly evolving landscape of cloud computing, effective management and orchestration of resources across diverse cloud platforms are imperative. Our Cloud Automation solution is designed to provide a unified and comprehensive approach to automate processes, workflows, and tasks across various cloud environments, including but not limited to AWS, Azure, Google Cloud, and more. |
afrinet-system | A longer description of your project |
afrl | AFRL - All Forms of Reinforcement Learning

The main goal of this project is to provide a framework for reinforcement learning research. The framework is designed to be modular and easy to extend. It is written in Python and built on top of PyTorch. The framework is still in its early stages and is under active development.

Usage

The framework is designed to be modular and easy to extend. The main components of the framework are:

- Environments: The environments are the tasks that the agent is trying to solve.
- Agents: The agents are the algorithms that are trying to solve the environment.
- Trainers: The trainers are the methods the agents are training with.

Here's the list of the trainers along with their current implementation status: DQN, DDQN, Dueling DQN, Double Dueling DQN, DDPG, SDDPG, Prioritized Experience Replay, TRPO, PPO, TD-Lambda, SARSA, REINFORCE, Actor-Critic, A2C, A3C, SAC.

To make it easier to choose the right trainer for the right environment, here's a table that shows the different trainers and the environments that they support:

Method | Discrete Action Space | Continuous Action Space | Single-Agent | Multi-Agent | Low-Dimensional Obs | High-Dimensional Obs
---|---|---|---|---|---|---
DQN | ✔️ | ❌ | ✔️ | ❌ | ✔️ | ❌
DDQN | ✔️ | ❌ | ✔️ | ❌ | ✔️ | ❌
Dueling DQN | ✔️ | ❌ | ✔️ | ❌ | ✔️ | ❌
Double Dueling DQN | ✔️ | ❌ | ✔️ | ❌ | ✔️ | ❌
DDPG | ❌ | ✔️ | ✔️ | ❌ | ✔️ | ✔️
SDDPG | ❌ | ✔️ | ✔️ | ❌ | ✔️ | ✔️
Prioritized Experience Replay | ✔️ | ✔️ | ✔️ | ❌ | ✔️ | ❌
TRPO | ✔️ | ✔️ | ✔️ | ❌ | ✔️ | ✔️
PPO | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️
TD-Lambda | ✔️ | ✔️ | ✔️ | ❌ | ✔️ | ❌
SARSA | ✔️ | ❌ | ✔️ | ❌ | ✔️ | ❌
REINFORCE | ✔️ | ✔️ | ✔️ | ❌ | ✔️ | ✔️
Actor-Critic | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️
A2C | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️
A3C | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️
SAC | ❌ | ✔️ | ✔️ | ❌ | ✔️ | ✔️

Characteristics of Different Environments for RL Algorithms

Here is a list of characteristics of different environments, along with some environment examples that have those characteristics and some generic rules of thumb in order to choose the right method for your environment:

1. Discrete vs Continuous Action Space
Examples:
- Discrete: Tic-Tac-Toe, Grid Worlds
- Continuous: Robotic arm control, Portfolio management
Rules of Thumb:
- For discrete action spaces, DQN variants and PPO are commonly used.
- For continuous action spaces, DDPG and SAC offer better performance.

2. Single-Agent vs Multi-Agent
Examples:
- Single-Agent: Mountain Car, Cartpole
- Multi-Agent: Poker, Multi-robot coordination
Rules of Thumb:
- Single-agent tasks often use DQN, PPO, or DDPG.
- Multi-agent tasks typically benefit from specialized algorithms like MADDPG, or generic methods like PPO that are adapted for multi-agent scenarios.

3. Low-Dimensional vs High-Dimensional Observation Space
Examples:
- Low-Dimensional: Frozen Lake, Taxi-v3
- High-Dimensional: Atari games, Visual navigation
Rules of Thumb:
- Low-dimensional problems can be tackled with simpler algorithms like SARSA or TD-Lambda.
- High-dimensional problems often require methods capable of handling complex function approximation, such as Convolutional Neural Networks (CNNs) in DQN for image-based tasks.

4. Partially Observable vs Fully Observable
Examples:
- Partially Observable: Poker, Hide and Seek
- Fully Observable: Chess, Go
Rules of Thumb:
- Fully observable environments can make use of simpler methods such as DQN or PPO.
- Partially observable environments may require methods with memory capabilities like LSTM or GRU incorporated into the algorithm (e.g., DRQN).

5. Sparse vs Dense Reward
Examples:
- Sparse Reward: Maze navigation, robotic grasping
- Dense Reward: Cartpole, Lunar Lander
Rules of Thumb:
- Dense reward problems can use most algorithms effectively.
- Sparse reward problems often benefit from algorithms designed for exploration, such as those utilizing curiosity-driven mechanisms or hierarchical methods.

By considering these environment characteristics and rules of thumb, one can make a more informed decision when selecting an appropriate reinforcement learning algorithm for a specific task. |
afro | Description

AFRO is Another Free Ripping Orchestra.

Documentation

You can read the documentation online. |
afrodite | pacotepypi

Example of a PyPI package.

Getting Started

Dependencies

You need Python 3.7 or later to use pacotepypi. You can find it at python.org. You also need the setuptools, wheel and twine packages, which are available from PyPI. If you have pip, just run:

pip install setuptools
pip install wheel
pip install twine

Installation

Clone this repo to your local machine using:

git clone https://github.com/caiocarneloz/pacotepypi.git

Features

- File structure for PyPI packages
- Setup with package information
- License example |
afrolid | AfroLID, a neural LID toolkit for 517 African languages and varieties. AfroLID exploits a multi-domain web dataset manually curated from across 14 language families utilizing five orthographic systems.

GitHub link: https://github.com/UBC-NLP/afrolid
Online demo link: https://demos.dlnlp.ai/afrolid

Getting Started

The full documentation contains instructions for getting started, translation using different methods, integrating AfroLID with your code, and provides more examples.

License

afrolid(-py) is Apache-2.0 licensed. The license applies to the pre-trained models as well.

Citation

If you use the AfroLID toolkit or the pre-trained models for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:

@inproceedings{adebara2022afrolid,
  title={AfroLID: A Neural Language Identification Tool for African Languages},
  author={Adebara, Ife and Elmadany, AbdelRahim and Abdul-Mageed, Muhammad and Inciarte, Alcides Alcoba},
  booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
  month = dec,
  year = "2022",
}

Acknowledgments

We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 435-2018-0576; 895-2020-1004; 895-2021-1008), The Digital Research Alliance of Canada (the Alliance) and UBC ARC-Sockeye and Advanced Micro Devices, Inc. (AMD). Any opinions, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSERC, SSHRC, CFI, CC, AMD, or UBC ARC-Sockeye. |
afropdf | Failed to fetch description. HTTP Status Code: 404 |
afroy | Type 'afro.help()' for help.
Type 'afro.' followed by one of the following colors [green, red, yellow, cyan, magenta], then ("the text to add color to here") without spaces, e.g. print(afro.red("This Text Is Red")).
Type 'afro.' followed by 'diflet()', which will find the amount of different letters in a string: (string, starting_value). So afro.diflet("hellllllllllo", 1) would return 5, because 4 different letters are in that string and the second parameter is the added value.
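A runnable version of the two examples above. Note the import line is an assumption: the description refers to the module simply as afro, while the PyPI package is named afroy.

import afroy as afro  # assumed import name; the description calls the module 'afro'

print(afro.red("This Text Is Red"))     # wraps the text in red, per the color list above
print(afro.diflet("hellllllllllo", 1))  # 4 different letters + starting value 1 -> 5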
Also: username and password account integration for programs!!! |
afroz-bitcoin-price | BITCOIN NOTIFICATION ALERT APP

This is a python script package, bitcoin price notification, to get notified of regular updates of the bitcoin price on slack, gmail and telegram. The aim is to push notifications when the price of bitcoin changes at a certain time interval; the user can specify the time interval. By default the time interval and threshold are set to 0.1 minutes and $10000 respectively. When the conditions in the program are satisfied, the trigger is fired and the user will receive notifications on slack, gmail and telegram.

DESCRIPTION

Bitcoin price is a fickle thing. You never really know where it's going to be at the end of the day. So, instead of constantly checking various sites for the latest updates, let's make a Python app to do the work for you. We're going to use the popular automation website IFTTT. We're going to create three IFTTT applets: one for emergency notification when the Bitcoin price falls under a certain threshold, one for regular Telegram updates and one for Slack updates on the Bitcoin price. These will be triggered by our Python app, which will consume the data from the Coindesk API.

An IFTTT applet is composed of two parts: a trigger and an action. The trigger will be a webhook service provided by IFTTT. Our Python app will make an HTTP request to the webhook URL, which will trigger an action. Now, this is the fun part: the action could be almost anything you want. IFTTT offers a multitude of actions like sending an email, updating a Google Spreadsheet and even calling your phone.

INSTALLATION

You might need to do these installations:

Install python version 3. This website will help you find all the versions available as per your system requirements: https://www.python.org/downloads/

Installation of PIP for windows: pip install. Refer to this website for further reading: https://pip.pypa.io/en/stable/installing/

Dependencies: the only dependency is on the requests library.

pip install requests==2.18.4

Install the bitcoin-notifier package/module from PIP (on terminal):

pip install afroz-bitcoin-price

USAGE

The following query on terminal will provide you with all the help options:

INPUT

afroz-bitcoin-price -h

OUTPUT

usage: afroz-bitcoin-price [-h] [--d decision] [--i interval]
                           [--u upper threshold]

Bitcoin Notification Alert

optional arguments:
-h, --help show this help message and exit
--d decision Enter (Yes/No) - Yes will run the program
--i interval Enter time interval
  --u upper threshold  Set upper threshold limit in USD for notification

afroz-bitcoin-price --d=Yes --i=0.1 --u=10000

This will provide 5 prices of Bitcoin, at the specified time interval according to the code.

TELEGRAM GROUP INVITE LINK - YOU ARE MOST WELCOME! SEE YOU THERE!
https://t.me/joinchat/L5pJsBmFWhCQwtKVye4oDw

CONTRIBUTIONS

Updates are always welcome, feel free to drop your suggestions! Looking forward to your contribution. ❤️ |
afrr-remuneration | aFRR remuneration

A tool to calculate the aFRR remuneration for the european energy market.

About

This project was initiated with the start of aFRR remuneration in temporal resolution of seconds on October 1st 2021, which is one further step to fulfill the EU target market design. The motivation for creating this python package is to provide a tool for evaluating the remuneration of aFRR activation events by TSOs. Therefore, it provides an implementation of the calculation procedure described in the model description as python code.

Installation

We aim to release a package on PyPI soonish. Until then, please see in development installation how to install the package from sources.

Usage

Here is some example code that shows how to use the functionality of this package. Make sure you have a file at hand with data about setpoints and actual values of an aFRR activation event. See the example files from regelleistung.net to understand the required file format. Note, you have to make sure that the data starts at the beginning of an aFRR activation event.

from afrr_renumeration.aFRR import calc_acceptance_tolerance_band, calc_underfulfillment_and_account
from afrr_renumeration.data import parse_tso_data

# load the setpoint and the measured value, for example by reading the tso data
file = "20211231_aFRR_XXXXXXXXXXXXXXXX_XXX_PT1S_043_V01.csv"
tso_df = parse_tso_data(file)

# calculate the tolerance band
band_df = calc_acceptance_tolerance_band(setpoint=tso_df["setpoint"], measured=tso_df["measured"])

# calculate acceptance values and other relevant series like the under-/overfulfillment
underful_df = calc_underfulfillment_and_account(
    setpoint=band_df.setpoint,
    measured=band_df.measured,
    upper_acceptance_limit=band_df.upper_acceptance_limit,
    lower_acceptance_limit=band_df.lower_acceptance_limit,
    lower_tolerance_limit=band_df.lower_tolerance_limit,
    upper_tolerance_limit=band_df.upper_tolerance_limit,
)
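To experiment before such a file is at hand, one can also feed artificial series directly into the band calculation; everything about the data below (index, resolution, ramp shape, noise) is an illustrative assumption, not part of the package:

import numpy as np
import pandas as pd
from afrr_renumeration.aFRR import calc_acceptance_tolerance_band

# artificial one-second activation data: a ramp up to 50 MW plus measurement noise
index = pd.date_range("2021-10-01 12:00:00", periods=900, freq="S")
setpoint = pd.Series(np.linspace(0, 50, 900), index=index)
measured = setpoint + np.random.normal(0, 0.5, 900)

band_df = calc_acceptance_tolerance_band(setpoint=setpoint, measured=measured)
print(band_df[["upper_acceptance_limit", "lower_acceptance_limit"]].head())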
Next Steps

We plan to:
- Add a testfile with artificial data
- Add an example with a valid MOL

Feel free to help us here!

Contributing

Contributions are highly welcome. For more details, please have a look into the contribution guidelines.

Development installation

For installing the package from sources, please clone the repository with

git clone git@github.com:energy2market/afrr-remuneration.git

Then, in the directory afrr-remuneration (the one the source code was cloned to), execute

poetry install

which creates a virtual environment under ./venv and installs required packages and the package itself to this virtual environment. Read here for more information about poetry. |
af-runtime-service | AristaFlow BPM REST Client library for connecting to the RuntimeService/RuntimeService endpoint. https://pypi.org/project/aristaflowpy/ provides APIs on top of the AristaFlow BPM REST endpoints for many use cases, so using aristaflowpy is recommended. |
afs | AFS

AFS is a Python based library that helps Deep / Machine Learning specialists track their models during training without accessing a server, getting notifications full of their desired information via their beloved Social Media platforms.

PROS
- Built as lightweight as possible
- Takes 14 arguments, therefore users can check almost everything while their model is training
- Back-End is built on the Flask framework, and open-sourced. You can contribute to implement more Social Media platforms' APIs.

TODO
- Finish working on Back-End for Facebook Messenger.

Used Frameworks & Libraries

AFS is built totally on Python & Node.JS.
- Python 3 - for library building
- Node.JS - for Back-End

Installation

Python 3.6+ required to use. Get the package from PyPi:

$ pip install AFS

Usage

Import the AFS and reach the 'teller' function. Define the AFS.teller function inside the training loop, and pass the arguments.

$ import AFS as afs
$ afs.teller(arg1, arg2)

Then, reach the uID function, and pass the 'yes' string, which will basically create a unique id for you, by which you'll then verify your session with the chatbot.

$ afs.uID("yes")

After the execution of the training loop, this line will print a unique ID for you that is generated super randomly to minimize the similarities. It'll look like this:

$ Your unique ID is --- 231409296064663:68137457840134:27374860406350

Copy the unique ID, and text the AFS bot the plain text to verify your session. And, it's all done.

Arguments

The 'teller' function takes a maximum of 14 arguments. Default values are 0s. (A usage sketch follows this list.)
- iteration: for counting iterations. type=number.
- distribution: basically a divider, for every how many iterations you need to send the GET request. type=number.
- maxiter: a maximum of iterations, after which the model finishes training. type=number.
- epochdistribution: the same as the 'distribution' argument, but for epochs. type=number.
- epoch: counts epochs. type=number.
- testloss: takes test loss as information. type=number.
- valloss: takes validation loss as information. type=number.
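To make the argument list concrete, here is a minimal sketch of teller inside a training loop. The docs above only show positional arguments (afs.teller(arg1, arg2)), so the keyword form and the metric values are illustrative assumptions:

import AFS as afs

max_iterations = 10000
for iteration in range(max_iterations):
    test_loss = 1.0 / (iteration + 1)  # placeholder metric; pass your model's real loss
    # argument names taken from the list above; the keyword form is an assumption
    afs.teller(iteration=iteration, distribution=100, maxiter=max_iterations, testloss=test_loss)

afs.uID("yes")  # prints the unique ID used to verify your session with the chatbot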
JSON Instance

The API sends a JSON array, which is basically a stringified version of a combination of dictionaries.

Implementations

The Flask server is deployed on Heroku, and implemented only in Facebook Messenger for now. Next Social Media Platforms: Slack, Discord.

License

BSD 3-Clause Licence |
afs2-datasource | AFS2-DataSource SDK

The AFS2-DataSource SDK package allows developers to easily access PostgreSQL, MongoDB, InfluxDB, S3 and APM.

Installation

Support Python version 3.6 or later:

pip install afs2-datasource

Development:

pip install -e .

Notice

AFS2-DataSource SDK uses the asyncio package, and the Jupyter kernel is also using asyncio and running an event loop, but these loops can't be nested (https://github.com/jupyter/notebook/issues/3397). If using AFS2-DataSource SDK in Jupyter Notebook, please add the following code to resolve this issue:

!pip install nest_asyncio
import nest_asyncio
nest_asyncio.apply()

API

DBManager: Init DBManager, DBManager.connect(), DBManager.disconnect(), DBManager.is_connected(), DBManager.is_connecting(), DBManager.get_dbtype(), DBManager.get_query(), DBManager.execute_query(), DBManager.create_table(table_name, columns), DBManager.is_table_exist(table_name), DBManager.is_file_exist(table_name, file_name), DBManager.insert(table_name, columns, records, source, destination), DBManager.delete_table(table_name), DBManager.delete_record(table_name, file_name, condition)

Init DBManager

With Database Config: import the database config via Python.

from afs2datasource import DBManager, constant

# For MySQL
manager = DBManager(db_type=constant.DB_TYPE['MYSQL'], username=username, password=password, host=host, port=port, database=database, querySql="select {field} from {table}")

# For SQLServer
manager = DBManager(db_type=constant.DB_TYPE['SQLSERVER'], username=username, password=password, host=host, port=port, database=database, querySql="select {field} from {table}")  # only support `SELECT`

# For PostgreSQL
manager = DBManager(db_type=constant.DB_TYPE['POSTGRES'], username=username, password=password, host=host, port=port, database=database, querySql="select {field} from {schema}.{table}")

# For MongoDB
manager = DBManager(db_type=constant.DB_TYPE['MONGODB'], username=username, password=password, host=host, port=port, database=database, collection=collection, querySql="{}")

# For InfluxDB
manager = DBManager(db_type=constant.DB_TYPE['INFLUXDB'], username=username, password=password, host=host, port=port, database=database, querySql="select {field_key} from {measurement_name}")

# For Oracle Database
manager = DBManager(db_type=constant.DB_TYPE['ORACLEDB'], username=username, password=password, host=host, port=port, database=database, querySql="select {field_key} from {measurement_name}")  # only support `SELECT`

# For S3
manager = DBManager(db_type=constant.DB_TYPE['S3'], endpoint=endpoint, access_key=access_key, secret_key=secret_key, is_verify=False, buckets=[{'bucket': 'bucket_name', 'blobs': {'files': ['file_name'], 'folders': ['folder_name']}}])

# For AWS S3
manager = DBManager(db_type=constant.DB_TYPE['AWS'], access_key=access_key, secret_key=secret_key, buckets=[{'bucket': 'bucket_name', 'blobs': {'files': ['file_name'], 'folders': ['folder_name']}}])

# For APM
manager = DBManager(
    db_type=constant.DB_TYPE['APM'],
    username=username,  # sso username
    password=password,  # sso password
    apmUrl=apmUrl,
    apm_config=[{
        'name': name,  # dataset name
        'machines': [{'id': machine_id}],  # node_id in APM
        'parameters': [parameter1, parameter2]  # parameters in APM
    }],
    mongouri=mongouri,
    # timeRange or timeLast
    timeRange=[{'start': start_ts, 'end': end_ts}],
    timeLast={'lastDays': lastDay, 'lastHours': lastHour, 'lastMins': lastMin}
)

# For Azure Blob
manager = DBManager(db_type=constant.DB_TYPE['AZUREBLOB'], account_name=account_name, account_key=account_key, containers=[{'container': container_name, 'blobs': {'files': ['file_name'], 'folders': ['folder_name']}}])

# For DataHub
manager = DBManager(
    db_type=constant.DB_TYPE['DATAHUB'],
    username=username,  # sso username
    password=password,  # sso password
    datahub_url=datahub_url,
    datahub_config=[{
        "name": "string",  # dataset name
        "project_id": "project_id",
        "node_id": "node_id",
        "device_id": "device_id",
        "tags": ["tag_name"]
    }],
    uri=mongouri,  # mongouri or influxuri
    # timeRange or timeLast
    timeRange=[{'start': start_ts, 'end': end_ts}],
    timeLast={'lastDays': lastDay, 'lastHours': lastHour, 'lastMins': lastMin}
)

How to get APM machine id and parameters. How to get DataHub project id, node id, device id and tag.

DBManager.connect()

Connect to MySQL, PostgreSQL, MongoDB, InfluxDB, S3, APM as specified by the given config.

manager.connect()

DBManager.disconnect()

Close the connection. Note: the S3 datasource does not support this function.

manager.disconnect()

DBManager.is_connected()

Return if the connection is connected.

manager.is_connected()

DBManager.is_connecting()

Return if the connection is connecting.

manager.is_connecting()

DBManager.get_dbtype()

Return the database type of the connection.

manager.get_dbtype()  # Return: str

DBManager.get_query()

Return the query in the config.

manager.get_query()

# MySQL, Oracle Database (return type: String)
"select {field} from {table} {condition}"

# PostgreSQL (return type: String)
"select {field} from {schema}.{table}"

# MongoDB (return type: String)
'{"{key}": {value}}'

# InfluxDB (return type: String)
"select {field_key} from {measurement_name}"

# S3 (return type: List)
[{'bucket': 'bucket_name', 'blobs': {'files': ['file_name'], 'folders': ['folder_name']}}]

# Azure Blob (return type: List)
[{'container': container_name, 'blobs': {'files': ['file_name'], 'folders': ['folder_name']}}]

# APM (return type: Dict)
{'apm_config': [{'name': name, 'machines': [{'id': machine_id}], 'parameters': [parameter1, parameter2]}], 'time_range': [{'start': start_ts, 'end': end_ts}], 'time_last': {'lastDays': lastDay, 'lastHours': lastHour, 'lastMins': lastMin}}

# DataHub (return type: Dict)
{'config': [{"name": "string", "project_id": "project_id", "node_id": "node_id", "device_id": "device_id", "tags": ["tag_name"]}], 'time_range': [{'start': start_ts, 'end': end_ts}], 'time_last': {'lastDays': lastDay, 'lastHours': lastHour, 'lastMins': lastMin}}

DBManager.execute_query(querySql=None)

Return the result in MySQL, PostgreSQL, MongoDB or InfluxDB after executing the querySql in the config or the querySql parameter.

Download files which are specified in buckets in the S3 config or containers in the Azure Blob config, and return an array of bucket or container names. If only one csv file is downloaded, a dataframe is returned instead.

Return a dataframe of Machine and Parameter in timeRange or timeLast from APM. Return a dataframe of Tag in timeRange or timeLast from DataHub.

# For MySQL, Postgres, MongoDB, InfluxDB, Oracle Database, APM and DataHub
df = manager.execute_query()  # Return type: DataFrame
#     Age  Cabin  Embarked     Fare  ...  Sex  Survived  Ticket_info  Title
# 0  22.0    7.0       2.0   7.2500  ...  1.0       0.0          2.0    2.0
# 1  38.0    2.0       0.0  71.2833  ...  0.0       1.0         14.0    3.0
# 2  26.0    7.0       2.0   7.9250  ...  0.0       1.0         31.0    1.0
# ...

# For Azure Blob
container_names = manager.execute_query()  # Return: Array, e.g. ['container1', 'container2'], or a DataFrame when a single csv file is downloaded

# For S3
bucket_names = manager.execute_query()  # Return: Array, e.g. ['bucket1', 'bucket2'], or a DataFrame when a single csv file is downloaded

DBManager.create_table(table_name, columns=[])

Create a table in the database for MySQL, Postgres, MongoDB and InfluxDB. Note: to create a new measurement in influxdb, simply insert data into the measurement. Create a Bucket/Container in S3/Azure Blob. Note: the PostgreSQL table_name format is schema.table.

# For MySQL, Postgres, MongoDB and InfluxDB
table_name = 'titanic'
columns = [
    {'name': 'index', 'type': 'INTEGER', 'is_primary': True},
    {'name': 'survived', 'type': 'FLOAT', 'is_not_null': True},
    {'name': 'age', 'type': 'FLOAT'},
    {'name': 'embarked', 'type': 'INTEGER'}
]
manager.create_table(table_name=table_name, columns=columns)

# For S3
manager.create_table(table_name='bucket')

# For Azure Blob
manager.create_table(table_name='container')

DBManager.is_table_exist(table_name)

Return if the table exists in MySQL, Postgres, MongoDB or Influxdb. Return if the bucket exists in S3. Return if the container exists in Azure Blob.

# For Postgres, MongoDB and InfluxDB
manager.is_table_exist(table_name='titanic')

# For S3
manager.is_table_exist(table_name='bucket')

# For Azure blob
manager.is_table_exist(table_name='container')

DBManager.is_file_exist(table_name, file_name)

Return if the file exists in the bucket in S3 & AWS S3. Note: this function only supports S3 and AWS S3.

# For S3 & AWS S3
manager.is_file_exist(table_name='bucket', file_name='test.csv')  # Return: Boolean

DBManager.insert(table_name, columns=[], records=[], source='', destination='')

Insert records into a table in MySQL, Postgres, MongoDB or InfluxDB. Upload a file to S3 and Azure Blob.

# For MySQL, Postgres, MongoDB and InfluxDB
table_name = 'titanic'
columns = ['index', 'survived', 'age', 'embarked']
records = [[0, 1, 22.0, 7.0], [1, 1, 2.0, 0.0], [2, 0, 26.0, 7.0]]
manager.insert(table_name=table_name, columns=columns, records=records)

# For S3: source is the local file path, destination is the file path and name in s3
manager.insert(table_name='bucket', source='test.csv', destination='test_s3.csv')

# For Azure Blob: source is the local file path, destination is the file path and name in Azure blob
manager.insert(table_name='container', source='test.csv', destination='test_s3.csv')

Use APM data source

Get Hist Raw data from the SCADA Mongo database. Required:
- username: APM SSO username
- password: APM SSO password
- mongouri: mongo database uri
- apmurl: APM api url
- apm_config: APM config (type: Array)
  - name: dataset name
  - machines: APM machine list (type: Array); id: APM machine Id
  - parameters: APM parameter name list (type: Array)
- time range: Training date range, example: [{'start': '2019-05-01', 'end': '2019-05-31'}]
- time last: Training date range, example: {'lastDays': 1, 'lastHours': 2, 'lastMins': 3}

DBManager.delete_table(table_name)

Delete a table in MySQL, Postgres, MongoDB or InfluxDB, and return if the table is deleted successfully. Delete the bucket in S3 / the container in Azure Blob, and return if it is deleted successfully.

# For Postgres, MongoDB or InfluxDB
is_success = manager.delete_table(table_name='titanic')  # Return: Boolean

# For S3
is_success = manager.delete_table(table_name='bucket')  # Return: Boolean

# For Azure Blob
is_success = manager.delete_table(table_name='container')  # Return: Boolean

DBManager.delete_record(table_name, file_name, condition)

Delete records matching condition in table_name in MySQL, Postgres and MongoDB, and return if the delete succeeded. Delete a file in a bucket in S3 or in a container in Azure Blob, and return if the file is deleted successfully. Note: Influx does not support this function.

# For MySQL, Postgres
is_success = manager.delete_record(table_name='titanic', condition='passenger_id = 1')  # Return: Boolean

# For MongoDB
is_success = manager.delete_record(table_name='titanic', condition={'passanger_id': 1})  # Return: Boolean

# For S3
is_success = manager.delete_record(table_name='bucket', file_name='data/titanic.csv')  # Return: Boolean

# For Azure Blob
is_success = manager.delete_record(table_name='container', file_name='data/titanic.csv')  # Return: Boolean

Example

MongoDB Example

from afs2datasource import DBManager, constant

# Init DBManager
manager = DBManager(
    db_type=constant.DB_TYPE['MONGODB'],
    username={USERNAME}, password={PASSWORD},
    host={HOST}, port={PORT},
    database={DATABASE}, collection={COLLECTION},
    querySql={QUERYSQL}
)

## Mongo query ISODate Example
QUERYSQL = "{\"ts\": {\"$lte\": ISODate(\"2020-09-26T02:53:00Z\")}}"
QUERYSQL = {'ts': {'$lte': datetime.datetime(2020, 9, 26, 2, 53, 0)}}

# Connect DB
manager.connect()

# Check the status of the connection
is_connected = manager.is_connected()  # Return type: boolean

# Check if the table exists
table_name = 'titanic'
manager.is_table_exist(table_name)  # Return type: boolean

# Create Table
columns = [
    {'name': 'index', 'type': 'INTEGER', 'is_not_null': True},
    {'name': 'survived', 'type': 'INTEGER'},
    {'name': 'age', 'type': 'FLOAT'},
    {'name': 'embarked', 'type': 'INTEGER'}
]
manager.create_table(table_name=table_name, columns=columns)

# Insert Record
columns = ['index', 'survived', 'age', 'embarked']
records = [[0, 1, 22.0, 7.0], [1, 1, 2.0, 0.0], [2, 0, 26.0, 7.0]]
manager.insert(table_name=table_name, columns=columns, records=records)

# Execute querySql in DB config
data = manager.execute_query()  # Return type: DataFrame
#    index  survived   age  embarked
# 0      0         1  22.0       7.0
# 1      1         1   2.0       0.0
# 2      2         0  26.0       7.0

# Delete Document
is_success = manager.delete_record(table_name=table_name, condition={'survived': 0})  # Return type: Boolean

# Delete Table
is_success = manager.delete_table(table_name=table_name)  # Return type: Boolean

# Disconnect from DB
manager.disconnect()

S3 Example

from afs2datasource import DBManager, constant

# Init DBManager
manager = DBManager(
    db_type=constant.DB_TYPE['S3'],
    endpoint={ENDPOINT}, access_key={ACCESSKEY}, secret_key={SECRETKEY},
    buckets=[{'bucket': {BUCKET_NAME}, 'blobs': {'files': ['titanic.csv'], 'folders': ['models/']}}]
)

# Connect S3
manager.connect()

# Check if the bucket exists
bucket_name = 'titanic'
manager.is_table_exist(table_name=bucket_name)  # Return type: boolean

# Create Bucket
manager.create_table(table_name=bucket_name)

# Upload File to S3
local_file = '../titanic.csv'
s3_file = 'titanic.csv'
manager.insert(table_name=bucket_name, source=local_file, destination=s3_file)

# Download files in blob_list (downloads all files in the directory)
bucket_names = manager.execute_query()  # Return type: Array

# Check if the file exists or not
is_exist = manager.is_file_exist(table_name=bucket_name, file_name=s3_file)  # Return type: Boolean

# Delete the file in the bucket and return if the file is deleted successfully
is_success = manager.delete_record(table_name=bucket_name, file_name=s3_file)  # Return type: Boolean

# Delete Bucket
is_success = manager.delete_table(table_name=bucket_name)  # Return type: Boolean

APM Data source example

APMDSHelper(username, password, apmurl, machineIdList, parameterList, mongouri, timeRange)
APMDSHelper.execute()

Azure Blob Example

from afs2datasource import DBManager, constant

# Init DBManager
manager = DBManager(
    db_type=constant.DB_TYPE['AZUREBLOB'],
    account_key={ACCESS_KEY}, account_name={ACCESS_NAME},
    containers=[{'container': {CONTAINER_NAME}, 'blobs': {'files': ['titanic.csv'], 'folders': ['test/']}}]
)

# Connect Azure Blob
manager.connect()

# Check if the container exists
container_name = 'container'
manager.is_table_exist(table_name=container_name)  # Return type: boolean

# Create container
manager.create_table(table_name=container_name)

# Upload File to Azure Blob
local_file = '../titanic.csv'
azure_file = 'titanic.csv'
manager.insert(table_name=container_name, source=local_file, destination=azure_file)

# Download files in `containers` (downloads all files in the directory)
container_names = manager.execute_query()  # Return type: Array

# Check if the file exists in the container or not
is_exist = manager.is_file_exist(table_name=container_name, file_name=azure_file)  # Return type: Boolean

# Delete File
is_success = manager.delete_record(table_name=container_name, file_name=azure_file)

# Delete Container
is_success = manager.delete_table(table_name=container_name)  # Return type: Boolean

Oracle Example

Notice: install the OracleDB client. Documents: https://www.oracle.com/au/database/technologies/instant-client/linux-x86-64-downloads.html#ic_x64_inst

from afs2datasource import DBManager, constant

# Init DBManager
manager = DBManager(
    db_type=constant.DB_TYPE['ORACLEDB'],
    username=username, password=password,
    host=host, port=port, dsn=dsn,
    querySql="select {field_key} from {measurement_name}"  # only support `SELECT`
)

# Connect OracleDB
manager.connect()

# Check if the table exists
manager.is_table_exist(table_name='table')  # Return type: boolean

# Execute querySql in DB config
data = manager.execute_query()  # Return type: DataFrame
#    index  survived   age  embarked
# 0      0         1  22.0       7.0
# 1      1         1   2.0       0.0
# 2      2         0  26.0       7.0
 |
afs2-model | AFS2-MODEL SDK

Documents

Reference documents: Readthedocs

Installation

Support python version 3.6 or later.

pip install on AFS notebook

AFS provides the release version SDK on a private pypi server. Run the following command in a notebook cell to install the SDK:

!pip install afs2-model

List the installed packages.

Develop

(For SDK developer) From sources: clone the repository to local. To build the library, run:

$ python setup.py install

(For SDK developer) Build from source: clone the repository to local. To build the wheel package:

$ python setup.py bdist_wheel

The .whl will be in dist/. |
afsapi | python-afsapi

Asynchronous Python implementation of the Frontier Silicon API.

This project was started in order to embed Frontier Silicon devices in Home Assistant (https://home-assistant.io/).

Inspired by:
- https://github.com/flammy/fsapi/
- https://github.com/tiwilliam/fsapi
- https://github.com/p2baron/fsapi

Required python libs: requests

Usage

import asyncio
from afsapi import AFSAPI

URL = 'http://192.168.1.XYZ:80/device'
PIN = 1234
TIMEOUT = 1  # in seconds

async def test():
    afsapi = await AFSAPI.create(URL, PIN, TIMEOUT)
    print(f'Set power succeeded? - {await afsapi.set_power(True)}')
    print(f'Power on: {await afsapi.get_power()}')
    print(f'Friendly name: {await afsapi.get_friendly_name()}')
    for mode in await afsapi.get_modes():
        print(f'Available Mode: {mode}')
    print(f'Current Mode: {await afsapi.get_mode()}')
    for equaliser in await afsapi.get_equalisers():
        print(f'Equaliser: {equaliser}')
    print(f'EQ Preset: {await afsapi.get_eq_preset()}')
    for preset in await afsapi.get_presets():
        print(f"Preset: {preset}")
    print(f'Set power succeeded? - {await afsapi.set_power(False)}')
    print(f'Set sleep succeeded? - {await afsapi.set_sleep(10)}')
    print(f'Sleep: {await afsapi.get_sleep()}')
    print(f'Get power {await afsapi.get_power()}')

loop = asyncio.new_event_loop()
loop.run_until_complete(test()) |
afscgap | Python Tools for AFSC GAP

Python-based tool chain ("Pyafscgap.org") for working with the public bottom trawl data from the NOAA AFSC GAP. This provides information from multiple survey programs about where certain species were seen and when, under what conditions, information useful for research in ocean health. See the webpage, project Github, and example notebook.

Quickstart

Taking your first step is easy!

- Explore the data in a UI: To learn about the datasets, try out an in-browser visual analytics app at https://app.pyafscgap.org without writing any code.
- Try out a tutorial in your browser: Learn from and modify an in-depth tutorial notebook in a free hosted academic environment (all without installing any local software).
- Jump into code: Ready to build your own scripts? Here's an example querying for Pacific cod in the Gulf of Alaska for 2021:

import afscgap  # install with pip install afscgap

query = afscgap.Query()
query.filter_year(eq=2021)
query.filter_srvy(eq='GOA')
query.filter_scientific_name(eq='Gadus macrocephalus')
results = query.execute()

Continue your exploration in the developer docs.

Installation

Ready to take it to your own machine? Install the open source tools for accessing the AFSC GAP via Pypi / Pip:

$ pip install afscgap

The library's only dependency is requests; Pandas / numpy are not expected but supported. The above will install the release version of the library. However, you can also install the development version via:

$ pip install git+https://github.com/SchmidtDSE/afscgap.git@main

Installing directly from the repo provides the "edge" version of the library, which should be treated as pre-release.

Purpose

Unofficial Python-based tool set for interacting with bottom trawl surveys from the Ground Fish Assessment Program (GAP). It offers:

- Pythonic access to the official NOAA AFSC GAP API service.
- Tools for inference of the "negative" observations not provided by the API service.
- Visualization tools for quickly exploring and creating comparisons within the datasets, including for audiences with limited programming experience.

Note that GAP are an excellent collection of datasets produced by the Resource Assessment and Conservation Engineering (RACE) Division of the Alaska Fisheries Science Center (AFSC) as part of the National Oceanic and Atmospheric Administration's Fisheries organization (NOAA Fisheries). Please see our objectives documentation for additional information about the purpose, developer needs addressed, and goals of the project.

Usage

This library provides access to the AFSC GAP data with optional zero catch ("absence") record inference.

Examples / tutorial: One of the best ways to learn is through our examples / tutorials series. For more details see our usage guide.

API Docs: Full formalized API documentation is available as generated by pdoc in CI / CD.

Data structure: Detailed information about our data structures and their relationship to the data structures found in NOAA's upstream database is available in our data model documentation.

Absence vs presence data: By default, the NOAA API service will only return information on hauls matching a query. So, for example, requesting data on Pacific cod will only return information on hauls in which Pacific cod is found. This can complicate the calculation of important metrics like catch per unit effort (CPUE). With that in mind, one of the most important features in afscgap is the ability to infer "zero catch" records, as enabled by set_presence_only(False). See more information in our inference docs.
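For example, pairing the quickstart query with zero-catch inference looks like this (a sketch built only from the calls named in this description):

import afscgap

query = afscgap.Query()
query.filter_year(eq=2021)
query.filter_srvy(eq='GOA')
query.filter_scientific_name(eq='Gadus macrocephalus')
query.set_presence_only(False)  # include inferred zero-catch ("absence") records
results = query.execute()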
See more information in our inference docs. Data quality and completeness: There are a few caveats for working with these data that are important for researchers to understand. These are detailed in our limitations docs. License: We are happy to make this library available under the BSD 3-Clause license. See LICENSE for more details. (c) 2023 Regents of University of California. See the Eric and Wendy Schmidt Center for Data Science and the Environment
at UC Berkeley. Developing: Interested in contributing to the project or want to build manually? Please see our build docs for details. People: Sam Pottinger is the primary contact, with additional development from Giulia Zarpellon. Additionally, some acknowledgements: Thank you to Carl Boettiger and Fernando Perez for advice on the Python library. Thanks also to Maya Weltman-Fahs, Brookie Guzder-Williams, Angela Hayes, David Joy, and Magali de Bruyn for advice on the visual analytics tool. Lewis Barnett, Emily Markowitz, and Ciera Martinez for general guidance. This is a project of The Eric and Wendy Schmidt Center for Data Science and the Environment
at UC Berkeley, where Kevin Koy is Executive Director. Please contact us at [email protected]. Open Source: We are happy to be part of the open source community. At this time, the only open source dependency used by this microlibrary is Requests, which is available under the Apache v2 License from Kenneth Reitz and other contributors. In addition to GitHub-provided GitHub Actions, our build and documentation systems also use the following, but these are not distributed with or linked to the project itself: mkdocs under the BSD License. mkdocs-windmill under the MIT License. mypy under the MIT License from Jukka Lehtosalo, Dropbox, and other contributors. nose2 under a BSD license from Jason Pellerin and other contributors. pdoc under the Unlicense license from Andrew Gallant and Maximilian Hils. pycodestyle under the Expat License from Johann C. Rocholl, Florent Xicluna, and Ian Lee. pyflakes under the MIT License from Divmod, Florent Xicluna, and other contributors. sftp-action under the MIT License from Niklas Creepios. ssh-action under the MIT License from Bo-Yi Wu. Next, the visualization tool has additional dependencies as documented in the visualization readme. Finally, note that the website uses assets from The Noun Project under the NounPro plan. If used outside of https://pyafscgap.org, they may be subject to a different license. Thank you to all of these projects for their contribution. Version history: Annotated version history: 1.0.4: Minor documentation typo fix. 1.0.3: Documentation edits for journal article. 1.0.2: Minor documentation touch ups for pyopensci. 1.0.1: Minor documentation fix. 1.0.0: Release with pyopensci. 0.0.9: Fix an issue with certain import modalities and the http module. 0.0.8: New query syntax (builder / chaining) and units conversions. 0.0.7: Visual analytics tools. 0.0.6: Performance and size improvements. 0.0.5: Changes to documentation. 0.0.4: Negative / zero catch inference. 0.0.3: Minor updates in documentation. 0.0.2: License under BSD. 0.0.1: Initial release. |
af-simple-process-image-renderer | AristaFlow BPM REST Client library for connecting to the SimpleProcessImageRenderer/SimpleProcessImageRenderer endpoint.https://pypi.org/project/aristaflowpy/provides APIs on top of the AristaFlow BPM REST endpoints for many use cases, so using aristaflowpy is recommended. |
afsk | Library to generate Bell 202 AFSK audio samples and
AFSK encoded APRS/AX.25 packets.Theaprscommand line program encodes APRS packets
as AFSK audio data.e.g.:$ aprs -c <your callsign> ":EMAIL :[email protected] Test email"InstallationInstall withpip:$ pip install afsk
$ pip install --allow-external PyAudio --allow-unverified PyAudio PyAudioPyAudio is optional, so must be installed separately.If you want to use the CLI program to play APRS packets via your
soundcard, install PyAudio. Otherwise, if you just want to generate
Wave files of AFSK data, you can skip it.Note that installing PyAudio will require a C compiler and PyAudio’s various
C dependencies, in addition to the--allow-externaland--allow-unverifiedpipflags.For development, change to the afsk directory and install with:$ pip install -r requirements.txt
$ python setup.py developRequires Python 2.6 or 2.7.Command Line InterfaceGenerate APRS messages with theaprsCLI program:$ aprs --callsign <your callsign> ":EMAIL :[email protected] Test email"Specify your message body with INFO command line argument. Be sure to wrap the message in
quotes so it’s passed as one argument, spaces included. At the moment, no message formats are implemented in the aprs program; you must
construct the body string yourself. For instance, in the example above, the string
passed as an argument to aprs follows the email message format specified for APRS. You must specify your amateur radio callsign with the --callsign or -c flags. Use the --output option to write audio to a Wave file (use ‘-’ for STDOUT) rather
than play over the soundcard.Get a listing of other options withaprs--help.ExamplesPlayback with PyAudio and short options:$ aprs --callsign <your callsign> ":EMAIL :[email protected] Test email"Playback withsox:$ aprs --callsign <your callsign> --output - ":EMAIL :[email protected] Test email" |\
play -t wav -
Save to a wave file using short options:
$ aprs -c <your callsign> -o packet.wav ":EMAIL :[email protected] Test email"
Contributing: Get the source and report any bugs on Github: https://github.com/casebeer/afsk. Version History: 0.0.3 – Pin dependency versions, fix bug with STDOUT playback, verbosity CLI option. |
afsmon | Python library and utilities for monitoring AFS file-systems using OpenAFS tools. Many of the details are inspired by https://github.com/openafs-contrib/afs-tools. Command-line: The afsmon tool provides: show: produce tabular output of key statistics for a cell, such as threads on file-servers, partition usage, volume usage and quotas. statsd: report similar results to a statsd host. Configuration is minimal, see the sample.cfg. Library: The core of afsmon should be suitable for use in other contexts:
import afsmon

fs = afsmon.FileServerStats('hostname')
fs.get_stats()
The fs object now contains a FileServerStats with all available information for the server, partitions and volumes. |
afsql | No description available on PyPI. |
afs-scenario | Create templated molecule scenarios for openafs_contrib.openafs playbooks and roles. |
afsutil | afsutilis a command-line tool to build, install, and setup OpenAFS for
developers and testers.Command line interfaceusage: afsutil <command> [options]
commands:
version Print version
help Print usage
getdeps Install build dependencies
check Check hostname
build Build binaries
reload Reload the kernel module from the build tree
package Build RPM packages
install Install binaries
remove Remove binaries
start Start AFS services
stop Stop AFS services
ktcreate Create a fake keytab
ktdestroy Destroy a keytab
ktsetkey Add a service key from a keytab file
ktlogin Obtain a token with a keytab
newcell Setup a new cell
mtroot Mount root volumes in a new cell
addfs Add a new fileserver to a cellInstallationInstall withyum:$ sudo yum install https://download.sinenomine.net/openafs/repo/sna-openafs-release-latest.noarch.rpm
$ sudo yum install afsutilInstall withpip:$ sudo pip install afsutilInstall withvirtualenv:$ python -m virtualenv ~/.virtualenv/afsutil
$ . ~/.virtualenv/afsutil/bin/activate
(afsutil) $ pip install afsutil
(afsutil) $ deactivate
$ sudo ln -s /home/$USER/.virtualenv/afsutil/bin/afsutil /usr/bin/afsutil
$ afsutil version
$ sudo afsutil versionInstall from source:$ git clone https://github.com/openafs-contrib/afsutil
$ cd afsutil
$ <python-interpreter> configure.py # i.e. python, python2
$ sudo make install # or make install-user for local installExamplesTo build OpenAFS from sources:$ git clone git://git.openafs.org/openafs.git
$ cd openafs
$ sudo afsutil getdeps
$ afsutil buildTo build RPM packages from an arbitrary git checkout (on an rpm-based system):$ sudo afsutil getdeps
$ git clone git://git.openafs.org/openafs.git
$ cd openafs
$ git checkout <branch-or-tag>
$ afsutil package
$ ls ./package/rpmbuild/RPMSTheafsutil packagecommand will build packages for the userspace and kernel
modules by default. See the --build option to build these separately. The afsutil package command also supports the Fedora mock build system, which
is useful to build kernel modules for a large variety of kernel versions.To build RPM packages from a git checkout withmock, including kernel
modules (kmods) for each kernel version found in the yum repositories:# Install mock.
$ sudo yum install mock
$ sudo usermod -a -G mock $USER
$ newgrp - mock
# Install packages needed to build the OpenAFS SRPM.
$ sudo yum install git libtool bzip2
# Checkout and then build packages.
$ git clone git://git.openafs.org/openafs.git
$ git checkout <branch-or-tag>
$ afsutil package --mock # This will take some time.To install legacy “Transarc-style” binaries:$ sudo afsutil install \
--force \
--components server client \
--dist transarc \
--dir /usr/local/src/openafs-test/amd64_linux26/dest \
--cell example.com \
--realm EXAMPLE.COM \
--hosts myhost1 myhost2 myhost3 \
--csdb /root/CellServDB.dist \
-o "afsd=-dynroot -fakestat -afsdb" \
-o "bosserver=-pidfiles"To setup the OpenAFS service key from a Kerberos 5 keytab file:$ sudo afsutil setkey
--cell example.com \
--realm EXAMPLE.COM \
--keytab /root/fake.keytabTo start the OpenAFS servers:$ sudo afsutil start serverTo setup a new OpenAFS cell on 3 servers, after ‘afsutil install’ has been run
on each:$ sudo afsutil newcell \
--cell example.com \
--realm EXAMPLE.COM \
--admin example.admin \
--top test \
--akimpersonate \
--keytab /root/fake.keytab \
--fs myhost1 myhost2 myhost3 \
--db myhost1 myhost2 myhost3 \
--aklog /usr/local/bin/aklog-1.6 \
-o "dafs=yes" \
-o "afsd=-dynroot -fakestat -afsdb" \
-o "bosserver=-pidfiles" \
-o "dafileserver=L"To start the client:$ sudo afsutil start clientTo mount the top-level volumes after the client is running:$ afsutil mtroot \
--cell example.com \
--admin example.admin \
--top test \
--realm EXAMPLE.COM \
--akimpersonate \
--keytab /root/fake.keytab \
--fs myhost1 \
-o "afsd=-dynroot -fakestat -afsdb"Configuration filesAll of the command line values may be set in a configuration file. Place
global configuration in/etc/afsutil.cfg, per user options in~/.afsutil.cfg, and per project options in.git/afsutil.cfg. Use command
line options to override configuration options.Theafsutilconfiguration files are ini-style format. The sections of the
configuration file correspond to the subcommand names, e.g.,build,install,newcell. Options within each section correspond to the command
line option names. Some subcommands, such as install and newcell, have options like --options and --paths, which consist of multiple name/value pairs. These are represented in the configuration file as a subsection of the form [<subcommand>.<option>]. For example, the install command example given above has a set of startup
options forafsdandbosserver. This would be specified in the
configuration file as:[install]
force = yes
components = server client
dist = transarc
dir = /usr/local/src/openafs-test/amd64_linux26/dest
cell = example.com
realm = EXAMPLE.COM
hosts = myhost1 myhost2 myhost3
csdb = /root/CellServDB.dist
[install.options]
afsd = -dynroot -fakestat -afsdb
bosserver = -pidfilesHere is an example configuration file:$ cat /etc/afsutil.cfg
[install]
cell = example.com
realm = EXAMPLE.COM
force = True
components = server client
dist = transarc
hosts = debian9
[install.options]
afsd = -dynroot -fakestat -afsdb
bosserver =
[ktcreate]
cell = example.com
realm = EXAMPLE.COM
keytab = /home/mtycobb/afsrobot/fake.keytab
[ktsetkey]
cell = example.com
realm = EXAMPLE.COM
keytab = /home/mtycobb/afsrobot/fake.keytab
format = detect
[ktsetkey.paths]
asetkey = /usr/afs/bin/asetkey
[newcell]
cell = example.com
realm = EXAMPLE.COM
admin = afsrobot.admin
fs = debian9
db = debian9
[newcell.options]
bosserver =
dafileserver =
davolserver =
debian9.dafileserver = -d 1 -L
debian9.davolserver = -d 1
[newcell.paths]
aklog=/home/mtycobb/.local/bin/aklog-1.6
asetkey=/usr/afs/bin/asetkey
bos=/usr/afs/bin/bos
fs=/usr/afs/bin/fs
gfind=/usr/bin/find
pagsh=/usr/afsws/bin/pagsh
pts=/usr/afs/bin/pts
rxdebug=/usr/afsws/etc/rxdebug
tokens=/usr/afsws/bin/tokens
udebug=/usr/afs/bin/udebug
unlog=/usr/afsws/bin/unlog
vos=/usr/afs/bin/vos
[mtroot]
cell = example.com
realm = EXAMPLE.COM
admin = afsrobot.admin
top = test
akimpersonate = True
keytab = /home/mtycobb/afsrobot/fake.keytab
fs = debian9
[mtroot.options]
afsd = -dynroot -fakestat -afsdb
[mtroot.paths]
aklog = /home/mtycobb/.local/bin/aklog-1.6
asetkey = /usr/afs/bin/asetkey
bos = /usr/afs/bin/bos
fs = /usr/afs/bin/fs
gfind = /usr/bin/find
pagsh = /usr/afsws/bin/pagsh
pts = /usr/afs/bin/pts
rxdebug = /usr/afsws/etc/rxdebug
tokens = /usr/afsws/bin/tokens
udebug = /usr/afs/bin/udebug
unlog = /usr/afsws/bin/unlog
vos = /usr/afs/bin/vosAnd the commands to install OpenAFS and create a new cell on a single
machine:sudo afsutil install
sudo afsutil ktcreate
sudo afsutil ktsetkey
sudo afsutil start server
sudo afsutil newcell
sudo afsutil start client
afsutil mtroot |
aft | BETA SOFTWARE. DO NOT USE WHEN FAILURE WILL HAVE CONSEQUENCES. Rabbit Typing, for Python. Rabbit Typing, as opposed to Duck Typing and Monkey Typing, is a brute-force dynamic type inference approach which attempts to infer argument types for a library function.
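The README exposes no API, so the following is only a generic illustration of the brute-force idea, not aft's actual interface (the function and candidate list below are invented for illustration):
import itertools

CANDIDATES = [0, 0.5, 's', b'b', [0], {0: 0}, None]  # one value per candidate type

def infer_arg_types(func, nargs):
    # Brute force: try every combination of candidate values and keep the
    # type signatures of the calls that do not raise.
    accepted = []
    for args in itertools.product(CANDIDATES, repeat=nargs):
        try:
            func(*args)
        except Exception:
            continue
        accepted.append(tuple(type(a).__name__ for a in args))
    return accepted

print(infer_arg_types(len, 1))  # [('str',), ('bytes',), ('list',), ('dict',)]
A sweep like this only observes which calls do not raise, so inferred types are candidates rather than guarantees. |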
afterbasics | No description available on PyPI. |
afterburner | Afterburner: Python web framework built for learning purposes. Afterburner is a Python web framework built for learning purposes. It's a WSGI framework and can be used with any WSGI application server. Installation:
pip install afterburner
How to use it. Basic usage:
from afterburner.api import API

app = API()

@app.route("/home")
def home(request, response):
    response.text = "Hello from the HOME page"

@app.route("/hello/{name}")
def greeting(request, response, name):
    response.text = f"Hello, {name}"

@app.route("/book")
class BooksResource:
    def get(self, req, resp):
        resp.text = "Books Page"

    def post(self, req, resp):
        resp.text = "Endpoint to create a book"

@app.route("/template")
def template_handler(req, resp):
    resp.body = app.template("index.html", context={"name": "Afterburner", "title": "Best Framework"}).encode()
Unit Tests: The recommended way of writing unit tests is with pytest. There are two built-in fixtures that you may want to use when writing unit tests with Afterburner. The first one is app, which is an instance of the main API class:
def test_route_overlap_throws_exception(app):
    @app.route("/")
    def home(req, resp):
        resp.text = "Welcome Home."

    with pytest.raises(AssertionError):
        @app.route("/")
        def home2(req, resp):
            resp.text = "Welcome Home2."
The other one is client, which you can use to send HTTP requests to your handlers. It is based on the famous requests and it should feel very familiar:
def test_parameterized_route(app, client):
    @app.route("/{name}")
    def hello(req, resp, name):
        resp.text = f"hey {name}"

    assert client.get("http://testserver/matthew").text == "hey matthew"
Templates: The default folder for templates is templates. You can change it when initializing the main API() class:
app = API(templates_dir="templates_dir_name")
Then you can use HTML files in that folder like so in a handler:
@app.route("/show/template")
def handler_with_template(req, resp):
    resp.html = app.template("example.html", context={"title": "Awesome Framework", "body": "welcome to the future!"})
Static Files: Just like templates, the default folder for static files is static and you can override it:
app = API(static_dir="static_dir_name")
Then you can use the files inside this folder in HTML files:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>{{title}}</title>
    <link href="/static/main.css" rel="stylesheet" type="text/css">
</head>
<body>
    <h1>{{body}}</h1>
    <p>This is a paragraph</p>
</body>
</html>
Middleware: You can create custom middleware classes by inheriting from the afterburner.middleware.Middleware class and overriding its two methods that are called before and after each request:
from afterburner.api import API
from afterburner.middleware import Middleware

app = API()

class SimpleCustomMiddleware(Middleware):
    def process_request(self, req):
        print("Before dispatch", req.url)

    def process_response(self, req, res):
        print("After dispatch", req.url)

app.add_middleware(SimpleCustomMiddleware) |
after-class | dv_after_class: a tool to analyze multi-class classification models. |
aftercopy | AfterCopy. Sometimes you need to copy large passages of text out of a PDF or CAJ reader window, but pasting the result into Word is often unsatisfactory: every line of the original layout starts a new paragraph, and some Chinese punctuation turns into English punctuation with stray spaces. Cleaning this up by hand is tedious, so I wrote a simple script to handle it. Quick Start:
pip install aftercopy
aftercopy -v
Then copy text in the reader; what you paste is already processed (line breaks removed, punctuation replaced). Paragraph boundaries cannot be detected, so you have to separate paragraphs by hand. Remember to quit the tool when you are done, so it does not interfere with normal copy and paste. Usage:
aftercopy --help
Usage: aftercopy [OPTIONS]
Options:
-p, --passive Disable active reading from clipboard. Instead you can
paste into and copy from terminal. End your input with
Ctrl-Z + Enter (Windows) or Ctrl-D + Enter.
-v, --verbose Display the concrete re-copied text and more info.
-l, --lang [cn|en] Switch type of language in text. This will influence the
rule set used. (Chinese by default)
--help Show this message and exit.
How it works: the clipboard is read every 0.01 seconds (the performance impact is negligible); when its contents change, the newly read text is processed accordingly and the result is written back to the clipboard. TODO: Replacement rules: the punctuation replacement rules are currently hard-coded, which clearly limits flexibility, but I have not yet found a good way to specify a rule file per run or at install time. Typo detection: I have not found a convenient library for this. One more thing... Please do not use this for plagiarism or other violations of others' copyright. |
aftercovid | documentation. Tools and experiments around COVID epidemics. The module must be compiled to be used in place:
python setup.py build_ext --inplace
Generate the setup in subfolder dist:
python setup.py sdist
Generate the documentation in folder dist/html:
python -m sphinx -T -b html doc dist/html
Run the unit tests:
python -m unittest discover tests
Or:
python -m pytest
To check style:
python -m flake8 aftercovid tests examples
The function check, or the command line python -m aftercovid check, checks the module is properly installed and returns processing time for a couple of functions. Or simply:
import aftercovid
aftercovid.check() |
afterdown | No description available on PyPI. |
afterflight | An application for analysis of UAV log and video. Introductory video with quickstart is at http://youtu.be/wdeeGyvHJ9c. Installing the release version. Installing the development version: Make sure you have scipy installed on your system. On Ubuntu, that means doing sudo apt-get install scipy. Once scipy 0.13 is released, this step will no longer be necessary because setup.py will be able to install it properly. Clone the afterflight repository with git clone https://github.com/foobarbecue/afterflight.git. In the directory that is created (called afterflight unless you specified otherwise), run pip install -r requirements.txt. This will install the remaining dependencies. Create settings_local.py based on the example settings_local_example.py. Usually you can just run cp settings_local_example.py settings_local.py, but if you want to use a database other than sqlite (such as postgres) this is where your database access information will go. Create your database tables by running python afterflight/manage.py syncdb. This will also add a default site for the django sites framework, which is required for the authentication system. Run a local development server: python afterflight/manage.py runserver. By default this will run at http://localhost:8000, so you can point your browser there to get started. If you want to run this on a public server, follow https://docs.djangoproject.com/en/1.5/howto/deployment/. |
afterglow | afterglow. WARNING: Project is currently unstable; API versions and tags can change at any moment. A configuration tool for Ignition-based systems. Ignition-based systems have a 'one-shot' system configuration, which needs to be generally available to all instances. This means that if you are deploying a service that requires configured secrets, you might be tempted to place them in the Ignition config. However, doing so would involve storing secrets in plain text (potentially uploading them to a hosting service). Not only is this insecure, but it also doesn't truly solve the problem since these secrets are likely to rotate, rendering any static values in the Ignition configuration invalid. This service is intended to allow secret provisioning after boot, similar to how you would provision other servers. This aligns with the general principles of other configuration tools such as Ansible and Puppet. Principle of operation: This service uses ssh and scp to copy across configuration files and uses parent/child semantics where the parent provisions the child. A typical boot-up flow may look like this: Parent (CI/Local/Instance) boots up a new VM on some host provider. Parent needs to know the child's public key; this requires that the parent knows the IP address of the child node. Child boots and runs afterglow child <...>, providing a private key. Note: In some ways this is just kicking the can down the road. We still need to get the secret key onto the child node. How exactly is up to you. Two solutions seem promising: add a volume mount to the instance through the host provider; upload a custom FCOS|Flatcar|... image with a preshared key (symmetrical/asymmetrical?) used to decrypt a private key in the Ignition config; or some other trust mechanism through the host provider (AWS Secrets Manager) with IAM permissions provided to the instance. Parent runs afterglow parent <...>, including the child's public key, connecting to the child. Child initiates scp for each configured file. Both parent and child processes return exit code 0 on successful provisioning. Child writes a lock file to --lock-path containing <file tag> = <sha256sum> key/value pairs. Note: The intention of this is to allow use in a systemd unit configuration for oneshot behaviour. In the case of copy failure, the child process keeps running, waiting up to timeout for a new parent connection which succeeds. Roadmap: Add CI integration tests. Usage: Specify the mode, either parent or child:
usage: afterglow [-h] [parent|child] ...

Copy files from one machine to another

positional arguments:
  [parent|child]
    child    copy files onto this machine
    parent   copy files from this machine
Parent options:
usage: afterglow parent [-h] --private-key PRIVATE_KEY --child-key CHILD_KEY --ip IP --port PORT --files FILES [FILES ...] [--timeout TIMEOUT]

options:
  -h, --help                 show this help message and exit
  --private-key PRIVATE_KEY  Path to private key file
  --child-key CHILD_KEY      Path to child's public key
  --ip IP                    The ip address to connect to
  --port PORT                The port to connect to
  --files FILES [FILES ...]  Colon separated file:path mapping
  --timeout TIMEOUT          The time window for which files are expected to be copied across
Child options:
usage: afterglow child [-h] --private-key PRIVATE_KEY --port PORT --files FILES [FILES ...] [--timeout TIMEOUT]

options:
  -h, --help                 show this help message and exit
  --private-key PRIVATE_KEY  Path to private key file
  --port PORT                The port on which the server will listen
  --files FILES [FILES ...]  Colon separated file:path mapping
  --lock-path LOCK_PATH      Path to write the lock file to upon successful provisioning
  --timeout TIMEOUT          The time window for which files are expected to be copied across
Makefile: Simplify docker packaging. Dependencies: Docker or Podman (pass USE_PODMAN=1 to use podman); the pyproject.toml file needs to have a version set correctly. Targets: build: Builds the Docker or Podman image using the specified Dockerfile and assigns appropriate tags based on the project's version defined in pyproject.toml. run: Runs the Docker or Podman container with the specified runtime arguments (RUN_ARGS). It also allows additional runtime arguments to be passed (DOCKER_ARGS). clean: Removes the Docker or Podman image and the running container associated with the project. It stops the running container, removes it, and deletes the image. rebuild: clean + build. rerun: rebuild + run. push: Push image to docker hub. help: Show help information. Developing: Tech stack: pyenv, python-build-dependencies, poetry, python 3.11 (pyenv install 3.11). Example invocations: Child:
docker run \
  -v ~/.ssh:/root/.ssh:ro \
  -v `pwd`:/host \
  -p 127.0.0.1:8022:8022 \
  dataligand/afterglow:latest child \
  --files test_file:/host/child/files \
  --lock-path /host/afterglow.lock \
  --private-key /root/.ssh/id_ed25519 \
  --port 8022
Parent:
docker run \
  -v ~/.ssh:/root/.ssh:ro \
  -v `pwd`:/root/files:ro \
  --network host \
  dataligand/afterglow:latest parent \
  --files test_file:/root/files/test_file \
  --private-key /root/.ssh/id_ed25519 \
  --child-key /root/.ssh/id_ed25519.pub \
  --ip localhost \
  --port 8022 |
afterglowpy | Numeric GRB Afterglow models. A Python 3 module to calculate GRB afterglow light curves and spectra. Details of the methods can be found in Ryan et al 2019. Builds on van Eerten & MacFadyen 2010 and van Eerten 2018. This code is under active development. Documentation available at https://afterglowpy.readthedocs.io/. Attribution: If you use this code in a publication, please refer to the package by name and cite "Ryan, G., van Eerten, H., Piro, L. and Troja, E., 2019, arXiv:1910.11691" (arXiv link). Features: afterglowpy computes synchrotron emission from the forward shock of a relativistic blast wave. It includes: Fully trans-relativistic shock evolution through a constant density medium. On-the-fly integration over the equal-observer-time slices of the shock surface. Approximate prescription for jet spreading. Arbitrary viewing angles. Angularly structured jets, i.e. E(θ). Spherical velocity-stratified outflows, i.e. E(u). Counter-jet emission. It has limited support (these should be considered experimental) for: Initial energy injection. Inverse Compton spectra. Spatially-resolved intensity maps. Early coasting phase. It does not include (yet): External wind medium, i.e. n ∝ r^-2. Synchrotron self-absorption. Reverse shock emission. afterglowpy has been calibrated to the BoxFit code (van Eerten, van der Horst, & MacFadyen 2011, available at the Afterglow Library) and produces similar light curves for top hat jets (within 50% when the same parameters are used) both on- and off-axis. Its jet models by default do not include an initial coasting phase, which may affect predictions for early observations. Installation/Building: afterglowpy is available via pip:
$ pip install afterglowpy
If you are working on a local copy of this repo and would like to install from source, you can run the following from the top level directory of the project:
$ pip install -e .
Using: This interface will be updated to be more sensible in the VERY near future. In your python code, import the library with import afterglowpy as grb. The main function of interest is grb.fluxDensity(t, nu, jetType, specType, *pars, **kwargs).
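As a minimal sketch of this call signature only (every numeric value below is a placeholder assumption, chosen to illustrate the 14-argument ordering documented next for jet-like afterglows):
import numpy as np
import afterglowpy as grb

t = np.geomspace(1.0e3, 1.0e7, 100)  # observer-frame times in seconds
nu = np.full_like(t, 1.0e14)         # observing frequency in Hz
pars = [
    0.3,     # 0  thetaV: viewing angle (rad)
    1.0e52,  # 1  E0: on-axis isotropic equivalent energy (erg)
    0.05,    # 2  thetaC: half-width of the jet core (rad)
    0.4,     # 3  thetaW: wing truncation angle (rad)
    0.0,     # 4  b: structure power-law index (unused for a Gaussian jet)
    0.0,     # 5  L0: energy injection luminosity (off)
    0.0,     # 6  q: energy injection temporal index
    0.0,     # 7  ts: energy injection time-scale (s)
    1.0e-3,  # 8  n0: ISM number density (cm^-3)
    2.2,     # 9  p: electron distribution index
    0.1,     # 10 epsilon_e: electron energy fraction
    0.01,    # 11 epsilon_B: magnetic energy fraction
    1.0,     # 12 xi_N: accelerated electron fraction
    1.0e27,  # 13 d_L: luminosity distance (cm)
]
# jetType=0 selects a Gaussian jet; specType=0 is the basic spectrum
Fnu = grb.fluxDensity(t, nu, 0, 0, *pars, z=0.5)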
See tests/plotLC.py for a simple example. jetType can be -1 (top hat), 0 (Gaussian), 1 (Power Law w/ core), 2 (Gaussian w/ core), 3 (Cocoon), or 4 (Smooth Power Law). specType can be 0 (global cooling time, no inverse Compton) or 1 (global cooling time, inverse Compton). For jet-like afterglows (jetTypes -2, -1, 0, 1, 2, and 4) pars has 14 positional arguments:
0 thetaV: viewing angle in radians
1 E0: on-axis isotropic equivalent energy in erg
2 thetaC: half-width of the jet core in radians (jetType specific)
3 thetaW: "wing" truncation angle of the jet, in radians
4 b: power for power-law structure, θ^(-b)
5 L0: fiducial luminosity for energy injection, in erg/s, typically 0.
6 q: temporal power-law index for energy injection, typically 0.
7 ts: fiducial time-scale for energy injection, in seconds, typically 0.
8 n0: number density of ISM, in cm^-3
9 p: electron distribution power-law index (p>2)
10 epsilon_e: thermal energy fraction in electrons
11 epsilon_B: thermal energy fraction in magnetic field
12 xi_N: fraction of electrons that get accelerated
13 d_L: luminosity distance in cm
For cocoon-like afterglows (jetType 3) pars has 14 positional arguments:
0 umax: initial maximum outflow 4-velocity
1 umin: minimum outflow 4-velocity
2 Ei: fiducial energy in velocity distribution, E(>u) = Ei * u^(-k)
3 k: power-law index of energy velocity distribution
4 Mej: mass of material at umax in solar masses
5 L0: fiducial luminosity for energy injection, in erg/s, typically 0.
6 q: temporal power-law index for energy injection, typically 0.
7 ts: fiducial time-scale for energy injection, in seconds, typically 0.
8 n0: number density of ISM, in cm^-3
9 p: electron distribution power-law index (p>2)
10 epsilon_e: thermal energy fraction in electrons
11 epsilon_B: thermal energy fraction in magnetic field
12 xi_N: fraction of electrons that get accelerated
13 d_L: luminosity distance in cm
Keyword arguments are:
z: redshift (defaults to 0)
tRes: time resolution of shock-evolution scheme, number of sample points per decade in time
latRes: latitudinal resolution for structured jets, number of shells per thetaC
rtol: target relative tolerance of flux integration
spread: boolean (defaults to True), whether to allow the jet to spread. |
afterhours | No description available on PyPI. |
aftermarketpl | No description available on PyPI. |
afterpay | afterpay-pythonPython library for interacting with Afterpay API, based on thebraintree_pythonprojectNoteThis library may be functional but is still in development. Features still need to be tested
and code refactored.DependenciesrequestscertifiThis project has only been tested with Python 3.9 |
afterrealism | No description available on PyPI. |
afterscan | # Afterscan
## Installation
Use Python 3: `pip install afterscan`
## Usage
`afterscan [OPTIONS] FILENAME`
Example: `afterscan myimage.jpg --threshold 75 -f`
### Options
`--threshold INTEGER  Threshold value between 0 and 255. Default=100
-o, --out TEXT  Output path. Default afterscan-[filename] in pwd
-i, --invert / --no-invert  Invert the image
-f, --force / --no-force  Overwrite existing file without asking
--help  Show this message and exit.`
## Demo
When drawing this amazing logo, I found myself having a scan with lines in the background. By using this command I was able to produce the second image: `afterscan logo.jpg --threshold 75 -f`
### Before
### After |
aftership | aftership-sdk-python is a Python SDK (module) for the AfterShip API. The module provides a clean way to access the API endpoints. IMPORTANT NOTE: The current version of aftership-sdk-python (>=0.3) is not compatible with previous versions of the SDK (<=0.2). Also, versions since 1.0 do not support Python 2.X anymore. If you want to use this SDK under Python 2.X, please use versions <1.0. Supported Python Versions: 3.6, 3.7, 3.8, 3.9, 3.10, pypy3. Installation: Via pip (we recommend using a virtualenv or poem to use this SDK):
$ pip install aftership
Via source code: Download the code archive, without unzipping it, go to the
source root directory, then run:
$ pip install aftership-sdk-python.zip
Usage: You need a valid API key to use this SDK. If you don’t have one, please visit https://www.aftership.com/apps/api. Quick Start: The following code gets the list of supported couriers:
import aftership

aftership.api_key = 'YOUR_API_KEY_FROM_AFTERSHIP'
couriers = aftership.courier.list_couriers()
You can also set the API key via the AFTERSHIP_API_KEY environment variable:
export AFTERSHIP_API_KEY=THIS_IS_MY_API_KEY
import aftership

tracking = aftership.get_tracking(tracking_id='your_tracking_id')
The functions of the SDK will return the data field value if the API endpoints return a response with HTTP status 2XX, and otherwise will throw an exception. Exceptions: Exceptions are mapped from https://docs.aftership.com/api/4/errors, and this table is the exception attributes mapping:
API error field -> AfterShipError attribute
http status code -> http_status
meta.code -> code
meta.type -> message
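A minimal sketch of handling that exception, assuming only the attribute names from the table above (the concrete exception class and its import path are not named in this README, so the sketch catches broadly):
import aftership

aftership.api_key = 'YOUR_API_KEY_FROM_AFTERSHIP'
try:
    tracking = aftership.get_tracking(tracking_id='your_tracking_id')
except Exception as err:
    # Per the mapping above, the SDK's AfterShipError carries http_status,
    # code, and message; getattr keeps this sketch safe if the class differs.
    print(getattr(err, 'http_status', None), getattr(err, 'code', None), err)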
Keyword arguments: Most of the SDK functions only accept keyword arguments. Examples: Go to examples to see more examples. |
aftershocks | A small script to plot recent earthquake magnitudes and frequency in Japan by region, using the Japan Meteorological Agency website. Used to track aftershocks of the 2018 Iburi earthquake. Requires matplotlib and pandas. Installation:
pip install aftershocks
Example usage:
aftershocks.py --at iburi
aftershocks.py --help |
aftershoq | aftershoq: A Flexible Tool for Em-Radiation-emitting Semiconductor Heterostructure Optimization using Quantum models. This tool aims to aid in the simulation of quantum cascade structures (such as QC lasers, detectors, or QWIPs) using a variety of different simulation models. It also contains routines for optimization of such structures. It contains a library of common materials and structures used in the literature, and provides a framework for simulations. It does not contain any simulation code; this has to be provided by the users themselves (for now). The respective simulation code can be linked to [aftershoq] by the implementation of a subclass of Interface, which writes input files, executes the model, computes the merit function, and gathers the results data.
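Schematically, and with method names that are assumptions (the README names only the Interface class and its four responsibilities, not its signatures), such a subclass might look like:
class Interface:  # placeholder standing in for aftershoq's real Interface base class
    pass

class MyModelInterface(Interface):
    """Hypothetical adapter; method names are illustrative, not aftershoq's API."""
    def write_input(self, structure, path):
        ...  # 1. write the external model's input files for this structure
    def run(self, path):
        ...  # 2. execute the simulation code
    def merit(self, results):
        ...  # 3. compute the merit function used by the optimizer
    def gather(self, path):
        ...  # 4. gather the results data after the run
Each method corresponds one-to-one to the responsibilities listed above.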
This is a program written for Python 3.6. You need to have Python 3 installed to use and modify this software to your needs. The current implementation also uses numpy, scipy, matplotlib, and lxml for some features. Installation: When cloning, use the --recursive option:
git clone --recursive https://github.com/mfranckie/aftershoq.git
(or git clone --recurse-submodules https://github.com/mfranckie/aftershoq.git depending on your git version) so that the project "hilbert_curve" appears in the base directory of aftershoq.
To install aftershoq and all its dependencies, execute python setup.py install from the aftershoq/ directory. To install on a system without root privileges, run python setup.py install --user instead. Tutorials: For a demonstration, see the Jupyter notebooks located in examples/notebooks. To install Jupyter, run python -m pip install jupyter, then run with jupyter notebook. Materials_guide.ipynb shows how to create materials and alloys with varying composition and strain. QCL_guide.ipynb shows how to generate structures from scratch, how to load them from the library, and how to generate them automatically. Opt_guide.ipynb shows how to set up and run an optimization with Gaussian Process (GP) regression for a test function and for a real QCL (requires ownership of a separate QCL simulation model). If you don't want to/can't use Jupyter, the following examples have similar content: "QCLexample.py" (requires a supported simulation program), "example_sewself.py" (requires the sewself program), "example_sewlab.py" (requires sewlab version 4.6.4 or later), "test_optim.py" (no requirements, this is a test of the optimization scheme). Good luck! |
afthermal | Noteafthermal is currently in alpha status. This is a snapshot release of
the development branch. While it is used productively, some features
may be unfinished or undocumented.afthermalis a driver/library for the popularAdafruit(originally Cashino
A2) thermal printer[1]. Partially, it is inspired by previous efforts: https://github.com/adafruit/Adafruit-Thermal-Printer-Library, https://github.com/adafruit/Python-Thermal-Printer/, https://github.com/luopio/py-thermal-printer/. afthermal tries to be more pythonic and efficient than previous efforts, which have mostly been 1:1 ports from other languages.
Additionally it is not focused on education but rather on being a
reliable library for handling this kind of hardware.Features include:Comfortable handling of text formattingAdapters to print images fromPIL/Pillowas well asOpenCVA fastFloyd-Steinbergimplementation to ditherOpenCVimages.Command-line utilities for calibrating the printer for optimum speed and
quality, as well as other capabilitiesSupport for printing QR codes viaPyQRCodewithout having to render them
into images first[1]Specification is available athttp://www.adafruit.com/datasheets/CSN-A2%20User%20Manual.pdfInstallationafthermalis installable frompip. It supports an extra feature namedtools, installing it will include cli tools for calibrating the
printer, printing test images or other tasks:$pipinstall'afthermal[tools]'It includes a C extension forFloyd-Steinbergdithering, sinceOpenCVdoes
not ship with a dithering function. For this reason C-modules must be
compilable when installing afthermal. Full docs: The complete documentation is housed at http://pythonhosted.org/afthermal. |
aftk | aftkAntenna Fields Toolkit |
aftool | aftoolAsdil's tool |
aft-parse-lib | A parsing utility, from origin to viewable. |
aft-pt | aft-pytorch. Unofficial PyTorch implementation of Attention Free Transformer's layers by Zhai, et al. [abs, pdf] from Apple Inc. Installation: You can install aft_pt via pip:
pip install aft_pt
Usage: You can import the AFT-Full or AFT-Simple layer (as described in the paper) from the package like so:
AFTFull:
import torch
from aft_pt import AFTFull

layer = AFTFull(max_seqlen=20, dim=512, hidden_dim=64)
# a batch of sequences with 10 timesteps of length 512 each
x = torch.rand(32, 10, 512)
y = layer(x)  # [32, 10, 512]
AFTSimple:
import torch
from aft_pt import AFTSimple

layer = AFTSimple(max_seqlen=20, dim=512, hidden_dim=64)
# a batch of sequences with 10 timesteps of length 512 each
x = torch.rand(32, 10, 512)
y = layer(x)  # [32, 10, 512]
AFTLocal:
import torch
from aft_pt import AFTLocal

layer = AFTLocal(max_seqlen=20, dim=512, hidden_dim=64)
# a batch of sequences with 10 timesteps of length 512 each
x = torch.rand(32, 10, 512)
y = layer(x)  # [32, 10, 512]
This layer wrapper is a 'plug-and-play' with your existing networks / Transformers. You can swap out the Self-Attention layer with the available layers in this package with minimal changes. TODO: Add full AFT architecture. Add variants like AFTConv. Benchmark using Karpathy's minGPT. Contributing: If you like this repo, please leave a star! If there are any amends or suggestions, feel free to raise a PR/issue. Credits: @misc{attention-free-transformer,
title = {An Attention Free Transformer},
author = {Shuangfei Zhai and Walter Talbott and Nitish Srivastava and Chen Huang and Hanlin Goh and Ruixiang Zhang and Josh Susskind},
year = {2021},
URL = {https://arxiv.org/pdf/2105.14103.pdf}
}LicenseMIT |
aftpy | AFTpyThe Python package enables users to download, visualize, and analyze
AFT HDF (h5) data seamlessly.AFTPyoffers a comprehensive solution for
analyzing and downloading Advective Flux Transport (AFT) data,
streamlining the process of data downloading in parallel and
also facilitating the conversion of H5 files into the most popular
FITS file format or various other formats with ease. Installation: From PyPI:
pip install aftpy
From GitHub:
git clone [email protected]:bibhuraushan/aftpy.git
cd aftpy
python setup.py install
Descriptions: aftpy provides two important Python modules named aftmap and aftgetdata. These modules can be used to read a single aftmap file or load all the files from a given directory. aftmap module: the aftmap module provides two Python classes, AFTmap and AFTload. The AFTmap class provides an interface to read a single H5 AFTmap file and gives you the functions and instances to get the information and plot the data. The other class, AFTload, provides the interface to load all the data from a directory and provides the instances and functions to know about the loaded data. It also
provides a function to convert all the loaded data into the popular FITS file format. AFTmap Class. AFTload Class: A class for loading all AFT maps from a directory. Attributes: path (str): The path to the directory containing AFT map files. filetype (str): The file extension of the AFT map files (e.g., "h5"). date_fmt (str): The date format string used to parse timestamps from filenames. filelist (list): A list of file paths to AFT map files in the specified directory. filenames (numpy.ndarray): An array of filenames extracted from the filelist. Methods: convert_all(convert_to="fits", outpath=".", verbose=True): Convert all loaded AFT map files to the specified format. convert_to (str, optional): The output format to convert the AFT map files to. Defaults to "fits". outpath (str, optional): The directory path to save the converted files. Defaults to current directory. verbose (bool, optional): Whether to print conversion progress. Defaults to True. Example Usage:
import aftpy.aftmap as aft

# Initialize AFTload object
loader = aft.AFTload(path="/path/to/aft/maps", filetype="h5")

# Convert all AFT map files to FITS format to '/path/to/converted/maps'
loader.convert_all(convert_to="fits", outpath="/path/to/converted/maps", verbose=True)
aftgetdata module: AFTdownload Class: A class for downloading AFT map files from a specified URL. Attributes: ncpu (int): Number of CPU cores to utilize for downloading files. Defaults to cpu_count() - 1. dt (module): Alias for the datetime module. root_url (str): The root URL from where AFT map files will be downloaded. Defaults to "https://data.boulder.swri.edu/lisa/". urls (list): List of URLs of AFT map files. datafile (str): File path to store the list of files in CSV format. datalist (DataFrame): DataFrame containing the list of files and corresponding timestamps. Methods: get_list(t0=None, t1=None, dt=1) -> data (DataFrame): t0 (datetime.datetime, optional): Start time of the time range. Defaults to None. t1 (datetime.datetime, optional): End time of the time range. Defaults to None. dt (int, optional): Time interval for sampling files within the time range. Defaults to 1. Returns: data (DataFrame): DataFrame containing the list of files within the specified time range. reload_files(url=None, filetype="h5"): Reload the list of AFT map files from the root URL. url (str, optional): The URL to reload the list of files from. Defaults to None. filetype (str, optional): The file extension of AFT map files. Defaults to "h5". Returns: True if the list of files is successfully reloaded. download(dataframe, rootpath=None, ncpu=None): Download AFT map files listed in the DataFrame. dataframe (DataFrame): DataFrame containing the list of files to download. rootpath (str, optional): Root directory path to save the downloaded files. Defaults to None. ncpu (int, optional): Number of CPU cores to utilize for downloading files. Defaults to cpu_count() - 1. Example Usage:
import datetime as dt
import aftpy.getaftdata as aftget

# Initialize AFTdownload object
downloader = aftget.AFTdownload()

# Reload the list of AFT map files
downloader.reload_files()

# Get the list of AFT map files within a specified time range
file_list = downloader.get_list(t0=dt.datetime(2023, 1, 1), t1=dt.datetime(2023, 1, 7))

# Download AFT map files listed in the DataFrame
downloader.download(file_list) |
aft-pytorch | aft-pytorch. Unofficial PyTorch implementation of Attention Free Transformer's layers by Zhai, et al. [abs, pdf] from Apple Inc. Installation: You can install aft-pytorch via pip:
pip install aft-pytorch
Usage: You can import the AFT-Full or AFT-Simple layer (as described in the paper) from the package like so:
AFTFull:
import torch
from aft_pytorch import AFTFull

layer = AFTFull(max_seqlen=20, dim=512, hidden_dim=64)
# a batch of sequences with 10 timesteps of length 512 each
x = torch.rand(32, 10, 512)
y = layer(x)  # [32, 10, 512]
AFTSimple:
import torch
from aft_pytorch import AFTSimple

layer = AFTSimple(max_seqlen=20, dim=512, hidden_dim=64)
# a batch of sequences with 10 timesteps of length 512 each
x = torch.rand(32, 10, 512)
y = layer(x)  # [32, 10, 512]
AFTLocal:
import torch
from aft_pytorch import AFTLocal

layer = AFTLocal(max_seqlen=20, dim=512, hidden_dim=64)
# a batch of sequences with 10 timesteps of length 512 each
x = torch.rand(32, 10, 512)
y = layer(x)  # [32, 10, 512]
This layer wrapper is a 'plug-and-play' with your existing networks / Transformers. You can swap out the Self-Attention layer with the available layers in this package with minimal changes. TODO: Add full AFT architecture. Add variants like AFTConv, AFTLocal. Contributing: If you like this repo, please leave a star! If there are any amends or suggestions, feel free to raise a PR/issue. Credits: @misc{attention-free-transformer,
title = {An Attention Free Transformer},
author = {Shuangfei Zhai and Walter Talbott and Nitish Srivastava and Chen Huang and Hanlin Goh and Ruixiang Zhang and Josh Susskind},
year = {2021},
URL = {https://arxiv.org/pdf/2105.14103.pdf}
}LicenseMIT |
afuncion | No description available on PyPI. |
af-utils | af-utilsIntroductionAn airflow utilities package.
An internal use package. |
afvaldienst | Afvaldienst libraryThis library is meant to interface with mijnafvalwijzer.nl and/or afvalstoffendienstkalender.nl
It is meant to be used with home automation projects like Home Assistant. Installation:
pip install afvaldienst
Uninstallation:
pip uninstall afvaldienst
Usage:
>>> from Afvaldienst import Afvaldienst
>>> provider = 'mijnafvalwijzer'
>>> api_token = ''
>>> zipcode = '1111AA'
>>> housenumber = '1'
>>> suffix = ''
>>> start_date = 'True or False' (start counting with Today's date or with Tomorrow's date)
>>> trash = Afvaldienst(provider, api_token, zipcode, housenumber, suffix)
>>> trash.trash_json
[{'nameType': 'gft', 'type': 'gft', 'date': '2019-12-20'}, {'nameType': 'pmd', 'type': 'pmd', 'date': '2019-12-28'}]
>>> trash.trash_schedule
[{'key': 'pmd', 'value': '31-10-2019', 'days_remaining': 8}, {'key': 'restafval', 'value': '15-11-2019', 'days_remaining': 23}, {'key': 'papier', 'value': '20-11-2019', 'days_remaining': 28}]
>>> trash.trash_schedule_custom
[{'key': 'first_next_in_days', 'value': 8}, {'key': 'today', 'value': 'None'}, {'key': 'tomorrow', 'value': 'None'},
>>> trash.trash_types
['gft', 'kerstbomen', 'pmd', 'restafval', 'papier']Or use the scraper:>>>from AfvaldienstScraper import AfvaldienstScraper
>>> provider = 'mijnafvalwijzer'
>>> zipcode = '1111AA'
>>> housenumber = '1'
>>> start_date = 'True or False' (start counting with Today's date or with Tomorrow's date)
>>> trash = AfvaldienstScraper(provider, zipcode, housenumber)
>>> trash.trash_schedule
[{'key': 'pmd', 'value': '31-10-2019', 'days_remaining': 8}, {'key': 'restafval', 'value': '15-11-2019', 'days_remaining': 23}, {'key': 'papier', 'value': '20-11-2019', 'days_remaining': 28}]
>>> trash.trash_schedule_custom
[{'key': 'first_next_in_days', 'value': 8}, {'key': 'today', 'value': 'None'}, {'key': 'tomorrow', 'value': 'None'},
>>> trash.trash_types
['gft', 'kerstbomen', 'pmd', 'textiel', 'restafval', 'papier']
>>>> trash.trash_types_from_schedule
['gft', 'papier', 'pmd', 'restafval', 'textiel', 'kerstbomen', 'today', 'tomorrow', 'day_after_tomorrow', 'first_next_in_days', 'first_next_item', 'first_next_date']Contributors are most welcomeI'm still learning how to code properly... :changelog:Release History1.0.6 (2020-09-21)
++++++++++++++++New releaseRemove unnecessary data1.0.6 (2020-09-21)
++++++++++++++++New releaseAdd scraper functionality1.0.5 (2020-09-21)
++++++++++++++++Bugfix releaseMultiple minor bugs regarding the new api from mijnafvalwijzer1.0.4 (2020-09-16)
++++++++++++++++Bugfix releaseAdd trash None value on no data1.0.3 (2020-09-16)
++++++++++++++++Bugfix releaseadd trash_schedule and trash_schedule_custom trash type list overview1.0.2 (2020-09-16)
++++++++++++++++Bugfix releaseadd trash_schedule and trash_schedule_custom trash type list overview1.0.1 (2020-09-16)
++++++++++++++++Bugfix releaseremove api tokens (as requested by the provider(s))1.0.0 (2020-09-15)
++++++++++++++++Bugfix releasecomplete rewrite of the logic for parsing json data0.8.0 (2020-09-13)
++++++++++++++++Bugfix releasemoved from json..nl to api..nladded additional error handling |
afvalwijzer | This library is meant to interface withmijnafvalwijzer.It is meant as aworkaroundfor the afvalwijzer app (used in the Netherlands) to be notified when to place the bin at the road.
Since this app delivers a poor functionality for notifications, and I needed a small project, I created this.InstallationpipinstallafvalwijzerUninstallationpipuninstallafvalwijzerUsage>>>fromAfvalwijzerimportAfvalwijzer>>>zipcode='3564KV'>>>number='13'>>>garbage=Afvalwijzer(zipcode,number)>>>garbage.pickupdate'Vandaag'>>>garbage.wastetype'Groente-, Fruit- en Tuinafval'>>>garbage.garbage('Vandaag','Groente-, Fruit- en Tuinafval')>>>garbage.pickupdates['dinsdag 02 januari','dinsdag 02 januari']>>>garbage.wastetypes['Groente-, Fruit- en Tuinafval','Kerstbomen']The following function only returns true if the pickup date is the same as today.>>>garbage.notifyTrueBelow is shown how I use it to get notified using pushbullet.fromAfvalwijzerimportAfvalwijzerfrompushbulletimportPushbulletdefnotification(device=None):pb=Pushbullet(pushbulletapi)try:mydevice=pb.get_device(device)except:mydevice=Nonepush=pb.push_note("Container:{}".format(wastetype),"Container:{}\nDate:{}".format(wastetype,pickupdate),device=mydevice)zipcode='3564KV'number=13pushbulletapi='pushbullet_api_key'pushbulletdevice='LGE Nexus 5X'garbage=Afvalwijzer(zipcode,number)pickupdate,wastetype=garbage.garbagenotify=garbage.notifyifnotifyandpushbulletapi:notification(pushbulletdevice)Cron jobThis script can now be set up as a cronjob on your server or alike.06***cd/path/to/script/notify_garbage.py>/dev/null2>&1CaveatOutput is provided in Dutch due to the main website. There is a button for English, but I haven’t got it working (yet).Contributors are most welcomeI’m still learning how to work with it all. Therefore feedback, advice, pull request etc. are most welcome.Release History0.2.7 (2018-01-01)Verification added for zipcode valueTest added for assert on the raise0.2.6 (2017-12-31)Change the dates in ‘HISTORY.rst’ to the correct monthLibrary also returns a list for multiple dates highlighted on the webpage0.2.5 (2017-12-28)Improving the python packaging, following:Python packaging0.2.4 (2017-12-27)Fixed rst issues; now showing correct html on pypilearned aboutpython setup.py checkdocs; requirepygmentsandcollective.checkdocs0.2.1 (2017-12-26)Changing the way of working with ‘__version__’Changed versioning schemeRemoved the datetime dependencyRewritten parts and tests to work with python 2.7 and 3.4+Rewritten Markdown to restructured text0.2 (2017-12-25)Status BetaVersioning in sync, setup reads it from the programHistory (this file) addedProperty decorators instead of traditional gettersREADME improved0.1 (2017-08-24)Initial release- first working release
- py.tests
- travis-ci
- pypi
- hours of troubleshooting the 2 above |
afvalwijzerapi | epsonprinter-api. This package can be used to scrape the ink levels from your Epson Workforce printer. Usage: Connection: create the API object with the IP address of the printer; on connect the values are fetched from the printer:
api = EpsonPrinterAPI(<IP>)
Update values: fetches the latest values from the printer:
api.update()
Get actual values:
# regular colours
black = api.getSensorValue("black")
magenta = api.getSensorValue("magenta")
cyan = api.getSensorValue("cyan")
yellow = api.getSensorValue("yellow")
# Cleaning cartridge
clean = api.getSensorValue("clean") |
afwf | Welcome to afwf Documentation. A powerful framework that enables fast and elegant development of Alfred Workflows in Python. 📔 See Full Documentation HERE. Project Background: The official Alfred Python package has not been updated in 5 years and only supports Python 2.7, not Python 3. Because 2.7 reached end of life on January 1, 2020, and macOS has not shipped with Python 2.7 since 2021, workflows built on the official package no longer work on new Macs. Moreover, for compatibility and legacy reasons, the official Python package bundles too much functionality that should come from third-party libraries, such as HTTP requests and caching, and for compatibility's sake it could only keep stacking workaround code on top of workaround code. That is why I decided to build my own wheel. I personally maintain more than 10 Alfred Workflows in vertical domains. Early on, my source code contained a lot of code unrelated to the business logic, used only for integrating with Alfred, automated testing, and metaprogramming, and much of it was duplicated across projects. I decided it was necessary to abstract these features out and package them as a framework that can be reused across projects, and that became this project. The goal of this project is to provide the data model for the Script Filters needed when writing Alfred Workflows in Python, together with a framework for developing Python Alfred Workflows, distilled from experience on a very large internal enterprise project (I am the author of the internal official AWS Alfred Workflow at AWS). It covers best practices for development, testing, releasing, and fast iteration across the project lifecycle, and addresses problems such as workflows having too many components and being hard to test. In addition, this project provides icons commonly used in the internet domain, which you can preview here. Related Projects: cookiecutter-afwf: a project template for Python Alfred Workflows. All of my Alfred Workflow projects are generated from this template, so I can focus on the business logic rather than the plumbing. afwf_example-project: an example project generated with cookiecutter-afwf, useful for learning how to quickly develop an Alfred Workflow from the cookiecutter-afwf template. Install: afwf is released on PyPI, so all you need is:
$ pip install afwf
To upgrade to latest version:
$ pip install --upgrade afwf |
afwfcfg | No description available on PyPI. |
afwf-shell | Welcome to afwf_shell Documentation. afwf_shell is a framework for building Alfred-Workflow-like apps in the terminal. See the example at ui_example.py. Demo: Examples: ui_example.py, error_handling_example_1, error_handling_example_2, first_run_waiter_example.py. Install: afwf_shell is released on PyPI, so all you need is to:
$ pip install afwf-shell
To upgrade to latest version:
$ pip install --upgrade afwf-shell |
afwizard | Welcome to the Adaptive Filtering WizardFeaturesAFwizard is a Python package to enhance the productivity of ground point filtering workflows in archaeology and beyond.
It provides a Jupyter-based environment for "human-in-the-loop" tuned, spatially heterogeneous ground point filterings.
Core features:Working with Lidar datasets directly in Jupyter notebooksLoading/Storing of LAS/LAZ filesVisualization using hillshade models and slope mapsApplying of ground point filtering algorithmsCropping with a map-based user interfaceAccessibility of existing filtering algorithms under a unified data model:PDAL: The Point Data Abstraction Library is an open source library for point cloud processing.OPALSis a proprietary library for processing Lidar data. It can be tested freely for datasets <1M points.LASToolshas a proprietary tool calledlasground_newthat can be used for ground point filtering.Access to predefined filter pipeline settingsCrowd-sourced library of filter pipelines athttps://github.com/ssciwr/afwizard-library/Filter definitions can be shared with colleagues as filesSpatially heterogeneous application of filter pipelinesAssignment of filter pipeline settings to spatial subregions in map-based user interfaceCommand Line Interface for large scale application of filter pipelinesDocumentationThe documentation of AFwizard can be found here:https://afwizard.readthedocs.io/en/latestPrerequisitesIn order to work with AFwizard, you need the following required pieces of Software.AConda installationIf you want to use the respective backends, you also need to install the following pieces of software:OPALSin version 2.5LASToolsInstalling and usingUsing CondaHaving alocal installation of Conda, the following sequence of commands sets up a new Conda environment and installsafwizardinto it:conda create -n afwizard
conda activate afwizard
conda install -c conda-forge/label/afwizard_dev -c conda-forge -c conda-forge/label/ipywidgets_rc -c conda-forge/label/jupyterlab_widgets_rc -c conda-forge/label/widgetsnbextension_rc afwizardYou can start the JupyterLab frontend by doing:conda activate afwizard
jupyter labIf you need some example notebooks to get started, you can copy them into the current working directory like this:conda activate afwizard
copy_afwizard_notebooksDevelopment BuildIf you are intending to contribute to the development of the library, we recommend the following setup:git clone https://github.com/ssciwr/afwizard.git
cd afwizard
conda env create -f environment-dev.yml --force
conda run -n afwizard-dev python -m pip install --no-deps .Using BinderYou can try AFwizard without prior installation by usingBinder, which is a free cloud-hosted service to run Jupyter notebooks. This will give you an impression of the library's capabilities, but you will want to work on a local setup when using the library productively: On Binder, you might experience very long startup times, slow user experience and limitations to disk space and memory.Using DockerHaving set upDocker, you can use AFwizard directly from a provided Docker image:docker run -t -p 8888:8888 ssciwr/afwizard:latestHaving executed above command, paste the URL given on the command line into your browser and start using AFwizard by looking at the provided Jupyter notebooks.
This image is limited to working with non-proprietary filtering backends (PDAL only).Using PipWe advise you to use Conda as AFwizard depends on a lot of other Python packages, some of which have external C/C++ dependencies. Using Conda, you get all of these installed automatically, using pip you might need to do a lot of manual work to get the same result.That being said,afwizardcan be installed from PyPI:python -m pip install afwizardCitation - How to cite AFwizardThe following scientific article can be referenced when using AFwizard in your research.Doneus, M., Höfle, B., Kempf, D., Daskalakis, G. & Shinoto, M. (2022): Human-in-the-loop development of spatially adaptive ground point filtering pipelines — An archaeological case study.Archaeological Prospection. Vol. 29 (4), pp. 503-524. DOI:https://doi.org/10.1002/arp.1873Related Bibtex entry:@Article{Doneus_2022,
author = {Michael Doneus and Bernhard H\"ofle and Dominic Kempf and Gwydion Daskalakis and Maria Shinoto},
title = {Human-in-the-loop development of spatially adaptive ground point filtering pipelines {\textemdash} An archaeological case study},
journal = {Archaeological Prospection},
year = {2022},
volume = {29},
number = {4},
pages = {503--524},
doi = {10.1002/arp.1873},
url = {https://doi.org/10.1002/arp.1873} }The data from the Nakadake Sanroku Kiln Site Center in Japan used in above article is also accessible under CC-BY-SA 4.0 in thedata repository of the 3D Spatial Data Processing Group:@data{data/TJNQZG_2022,
author = {Shinoto, Maria and Doneus, Michael and Haijima, Hideyuki and Weiser, Hannah and Zahs, Vivien and Kempf, Dominic and Daskalakis, Gwydion and Höfle, Bernhard and Nakamura, Naoko},
publisher = {heiDATA},
title = {{3D Point Cloud from Nakadake Sanroku Kiln Site Center, Japan: Sample Data for the Application of Adaptive Filtering with the AFwizard}},
year = {2022},
version = {V2},
doi = {10.11588/data/TJNQZG},
url = {https://doi.org/10.11588/data/TJNQZG}
}TroubleshootingIf you run into problems using AFwizard, we kindly ask you to do the following in this order:Have a look at the list of ourFrequently Asked Questionsfor a solutionSearch through theGitHub issue trackerOpen a new issue on theGitHub issue trackerprovidingThe version ofafwizardusedInformation about your OSThe output ofconda liston your machineAs much information as possible about how to reproduce the bugIf you can share the data that produced the error, it is much appreciated. |
afwizard-library | afwizard-libraryThis repository contains community-contributed filter pipelines for theAdaptive Filtering Wizardproject.
It is distributed as a Python package through PyPI, which the afwizard Python package depends on. It is currently under construction, but already accepting contributions.

Contributing

If you want to contribute your filter pipelines to this repository, please read the Contribution Guide. |
af-worklist-manager | AristaFlow BPM REST Client library for connecting to the WorklistManager/WorklistManager endpoint.https://pypi.org/project/aristaflowpy/provides APIs on top of the AristaFlow BPM REST endpoints for many use cases, so using aristaflowpy is recommended. |
afx | Implements the Terraform Registry API in AWS Lambda |
afxano-keras | # afxano-keras

https://pypi.org/project/afxano-keras/

> afxano (αυξάνω – to increase, augment, or grow).

afxano-keras is a small collection of functions/classes to extend Keras. |
ag | Coming soon. |
aga | aga grades assignments

aga (aga grades assignments) is a tool for easily producing autograders for python programming assignments, originally developed for Reed College's CS1 course.

Motivation

Unlike traditional software testing, where there is likely no a priori known-correct implementation, in homework grading there is always such an implementation (or one can easily be written by course staff). Traditional software testing frameworks are therefore of limited use for homework grading. Relying on reference implementations (what aga calls golden solutions) has several benefits:

Reliability: having a reference solution gives a second layer of confirmation for the correctness of expected outputs. Aga supports golden tests, which function as traditional unit tests of the golden solution.

Test case generation: many complex test cases can easily be generated via the reference solution, instead of needing to work out the expected output by hand. Aga supports generating test cases from inputs without explicitly referring to an expected output, and supports collecting test cases from python generators.

Property testing: unit testing libraries like hypothesis allow testing large sets of arbitrary inputs for certain properties, and identifying simple inputs which reproduce violations of those properties. This is traditionally unreliable, because identifying specific properties to test is difficult. In homework grading, the property can simply be "the input matches the golden solution's output." Support for hypothesis is a long-term goal of aga.

Installation

Install from pip:

pip install aga

or with the python dependency manager of your choice (I like poetry), for example:

curl -sSL https://install.python-poetry.org | python3 -
echo "cd into aga repo"
cd aga
poetry install && poetry shell

Example

In square.py (or any python file), write:

from aga import problem, test_case, test_cases

@test_cases(-3, 100)
@test_case(2, aga_expect=4)
@test_case(-2, aga_expect=4)
@problem()
def square(x: int) -> int:
    """Square x."""
    return x * x

Then run aga gen square.py from the directory with square.py. This will generate a ZIP file suitable for upload to Gradescope.

Usage

Aga relies on the notion of a golden solution to a given problem which is known to be correct. The main work of the library is to compare the output of this golden solution on some family of test inputs against the output of a student submission. To that end, aga integrates with frontends: existing classroom software which allows submission of student code. Currently, only Gradescope is supported.

To use aga:

1. Write a golden solution to some programming problem.
2. Decorate this solution with the problem decorator.
3. Decorate this problem with any number of test_case decorators, which take arbitrary positional or keyword arguments and pass them verbatim to the golden and submitted functions.
4. Generate the autograder using the CLI: aga gen <file_name>.

The test_case decorator may optionally take a special keyword argument called aga_expect. This allows easy testing of the golden solution: aga will not successfully produce an autograder unless the golden solution's output matches the aga_expect. You should use these as sanity checks to ensure your golden solution is implemented correctly.
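For instance, a minimal sketch of the decorator workflow just described (the function below and its arguments are hypothetical; test_case's verbatim argument passing and aga_expect are the documented behavior):

from aga import problem, test_case

# Positional and keyword arguments are passed verbatim to the golden
# solution and to the student's submission; aga_expect sanity-checks the
# golden solution when the autograder is generated.
@test_case(2, 3, scale=2, aga_expect=10)
@problem()
def scaled_sum(x: int, y: int, scale: int = 1) -> int:
    """Return (x + y) * scale."""
    return (x + y) * scale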
For more info, see the tutorial. For complete documentation, including configuring problem and test case metadata, see the API reference. For CLI documentation, run aga --help, or access the docs online. |
aga8-python | AGA8 Python version

Detail

Module of AGA8 (American Gas Association Report Number 8).

Installation through pip

pip install aga8-python

Test

python tests.py

Original source

https://github.com/usnistgov/AGA8 |
agacsayisi | # Tree count calculation

Given an amount of carbon dioxide, calculates how many trees are needed to replace the resource you have consumed. (A small code sketch of this calculation follows after this entry.)

Prepared by Loggma Yazılım Elektrik Elektronik A.Ş.

### Requirements

* Python 2.7 or later

## Explanations

### Calculating the number of trees:

- A young tree absorbs 11.79 kg of carbon dioxide per year. (http://www.arborenvironmentalalliance.com/carbon-tree-facts.asp)
- Dividing the amount of carbon dioxide saved by the amount absorbed per year gives the number of trees.

#### Multiplying the energy by the coefficient:

- The generated electrical energy is multiplied by a previously hand-determined coefficient.

#### Calculating the coefficient:

- In Turkey, natural gas, imported coal, lignite, hard coal, fuel oil, wind energy, geothermal energy, and solar energy are used to generate electricity. According to data for April 2018, natural gas was used for 28% of electricity generation, imported coal for 16%, lignite for 15%, hard coal for 1%, wind energy for 5%, geothermal energy for 3%, fuel oil for 0%, and finally solar energy for 3%. In total, 23,844,12 kWh was generated. [(AYLIK ENERJİ İSTATİSTİKLERİ RAPORU - 4)](http://www.eigm.gov.tr/File/?path=ROOT%2f4%2fDocuments%2f%c4%b0statistik%20Raporu%2f2018%20Nisan%20Ay%c4%b1%20Enerji%20Raporu.pdf)
- According to EMO, the greenhouse gas emissions (tonnes CO2/GWh) of these sources are as follows: (http://www.emo.org.tr/ekler/15ed8b43a250de0_ek.pdf)

```
Natural gas: 499
Imported coal: 888
Lignite: 1054
Hard coal: 888
Fuel oil: 733
Wind energy: 10
Geothermal energy: 38
Solar energy: 23
```

- To find the electricity generation of each source, the percentages were multiplied by the total energy. The results were multiplied by each source's greenhouse gas emission value, and the necessary unit conversions were made (kg/kWh). The emissions caused by each source were computed one by one and summed, giving the total amount of carbon dioxide. This value was divided by the total electrical energy to obtain the coefficient.

## Contributing

If you have calculations you believe are more accurate, you can contribute.

## License

[MIT](https://choosealicense.com/licenses/mit/) |
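A minimal sketch of the tree-count calculation described in the agacsayisi entry above (an assumed illustration, not the package's actual API):

```python
# A young tree absorbs about 11.79 kg of CO2 per year (figure from the
# entry above); the number of trees is the saved CO2 divided by that rate.
CO2_PER_TREE_KG_PER_YEAR = 11.79

def trees_needed(co2_kg):
    """Return how many trees are needed to absorb co2_kg of CO2 per year."""
    return co2_kg / CO2_PER_TREE_KG_PER_YEAR

print(trees_needed(117.9))  # -> 10.0
```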
agadoodoo | hello world from poetry |
again | UNKNOWN |
againback | UNKNOWN |
agal | No description available on PyPI. |
agalma | Agalma is an automated tool that constructs matrices for phylogenomic analyses, allowing complex phylogenomic analyses to be
implemented and described unambiguously as a series of high-level
commands. The user provides raw Illumina transcriptome data, and
Agalma produces annotated assemblies, aligned gene sequence matrices,
a preliminary phylogeny, and detailed diagnostics that allow the
investigator to make extensive assessments of intermediate analysis
steps and the final results. Sequences from other sources, such as
externally assembled genomes and transcriptomes, can also be
incorporated in the analyses. Agalma is built on the BioLite
bioinformatics framework, which tracks provenance, profiles processor
and memory use, records diagnostics, manages metadata, installs
dependencies, logs version numbers and calls to external programs, and
enables rich HTML reports for all stages of the analysis. Agalma
includes a small test data set and a built-in test analysis of these
data. |
agama | Its main tasks include:

- Computation of gravitational potential and forces;
- Orbit integration and analysis;
- Conversion between position/velocity and action/angle coordinates;
- Distribution functions;
- Self-consistent multi-component galaxy models;
- Framework for finding best-fit parameters of a model from data;
- Auxiliary utilities (e.g., various mathematical routines).

The core of the library is written in C++, and there are Python and Fortran interfaces;
it may be used as a plugin for other stellar-dynamical software packages:
galpy, amuse, nemo.A detailed documentation for the AGAMA library is presented in a separate file doc/reference.pdf,
and a more technical description may be extracted from source files with Doxygen.
Instructions for compilation are in the file INSTALL.
The package also contains a collection of examples and test programs both in C++ and Python.
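A rough illustration of the Python interface mentioned above (a sketch only; the parameter values are made up and the exact API should be checked against doc/reference.pdf):

import agama

# Set up a simple spherical potential; the parameters are illustrative.
pot = agama.Potential(type='Plummer', mass=1e10, scaleRadius=2.5)

# Evaluate the potential and the force at a point (x, y, z).
print(pot.potential([8.0, 0.0, 0.0]))
print(pot.force([8.0, 0.0, 0.0]))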
|
agam-conservation | No description available on PyPI. |
a-game | No description available on PyPI. |
agamemnon | AgamemnonAgamemnon is a thin library built on top of pycassa.
It allows you to use the Cassandra database (<http://cassandra.apache.org>) as a graph database.
Using cassandra provides an extremely high level of reliability and scalability that is not available in other
graph databases. Cassandra provides integrated support for both data partitioning and replication via configuration.

Much of the API was inspired by the excellent neo4j.py project (<http://components.neo4j.org/neo4j.py/snapshot/>); however, the support in this package has diverged from that project.

Agamemnon also has integrated RDF support through RDFLib (http://www.rdflib.net/)

Usage

The following is an example of how to use Agamemnon in your own code:

>>> from agamemnon.factory import load_from_settings

First, we can decide which kind of data store we are creating. In this case we're creating an InMemory data store:

>>> config = {'backend': 'agamemnon.memory.InMemoryDataStore'}
>>> graph_db = load_from_settings(config)In honor of The Simpsons Movie, we’ll create a node called spiderpig>>> spiderpig = graph_db.create_node('test_type', 'spiderpig', {'sound':'oink'})Now we will retrieve the spiderpig from the graph and check that the attributes were correct.>>> spiderpig = graph_db.get_node('test_type', 'spiderpig')
>>> spiderpig['sound']
'oink'Now we will create a friend for the spiderpig (who also happens to be his alter ego). Again, let’s check to
confirm that the node and its attributes were created correctly.

>>> harry_plopper = graph_db.create_node('test_type', 'Harry Plopper', {'sound':'plop'})
>>> harry_plopper = graph_db.get_node('test_type','Harry Plopper')
>>> harry_plopper['sound']
'plop'Nodes can have different types as well. Here we create a node of type simpson, with name Homer. This node has
different attributes than the previous nodes.>>> homer = graph_db.create_node('simpson', 'Homer', {'sound':'Doh', 'job':'Safety Inspector'})
>>> homer = graph_db.get_node('simpson', 'Homer')
>>> homer['sound']
'Doh'
>>> homer['job']
'Safety Inspector'Nodes by themselves are not very useful. Let’s create a relationship between spiderpig and Harry Plopper.>>> rel = spiderpig.friend(harry_plopper, key='spiderpig_harry_plopper_alliance', alter_ego=True, best=False)This creates a relationship of type friend. The key has been specified in this case, although it is not necessary.
If no key is supplied a uuid will be generated for the relationship.Every node type has a “reference node”. This is a metanode for the type and functions as an index for all nodes of a
given type.>>> reference_node = graph_db.get_reference_node('test_type')Getting the instances from the test_type reference node should return the Harry Plopper node and the spiderpig node.>>> sorted([rel.target_node.key for rel in reference_node.instance.outgoing])
['Harry Plopper', 'spiderpig']The spiderpig should only have one friend at this point, and it should be Harry Plopper>>> friends = [rel for rel in spiderpig.friend]>>> len(friends)
1>>> friends[0].target_node.key
'Harry Plopper'Now let’s confirm that Harry Plopper is friends with spider pig as well:>>> 'spiderpig' in harry_plopper.friend
TrueAnd, once more, make sure that spider pig is Harry Plopper’s only friend:>>> friends = [rel for rel in harry_plopper.friend]>>> len(friends)
1>>> friends[0].source_node.key
'spiderpig'They should not be best friends. Let’s confirm this:>>> friends[0]['best']
FalseHomer is spiderpig’s best friend:>>> rel = homer.friend(spiderpig, best=True, alter_ego=False, type='love', strength=100)Here we added additional attributes to the relationship.Now spiderpig should have 2 friends.>>> friends = [rel for rel in spiderpig.friend]
>>> len(friends)
2You can get a list of all of the relationships of a particular type between a node and other nodes with a particular key>>> homer_spiderpig_love = spiderpig.friend.relationships_with('Homer')
>>> len(homer_spiderpig_love)
1>>> homer_spiderpig_love = spiderpig.friend.relationships_with('Homer')
>>> print homer_spiderpig_love[0]['strength']
100Thanks ToThis project is an extension of the globusonline.org project and is being used to power the upcoming version of globusonline.org. I’d like to thank Ian Foster and Steve Tuecke for leading that project, and all of the members of the cloud services team for participating in this effort, especially: Vijay Anand, Kyle Chard, Martin Feller and Mike Russell for helping with design and testing. I’d also like to thank Bryce Allen for his help with some of the python learning curve.0.3.1.0Added many tests and fixed bugs with certain operationsAdded RDF support through RDFlib0.2.1.3fixed bug with in memory column comparisons0.2.1.2fixing bug with root reference node, adding support for unicode serialization and bumping version num0.2.1.1adding method to get all relationships regardless of typeremoving generated doc files and updating index.rstadding doctest to usage documentation and setup.cfgupdating setup files with requirementsmultiple fixes for issues discovered by globusonline0.2.1.0added support for contains operator (with relationships_with(other_node_key) function) and added type conversion for prmerging relationship code from globusonline0.0.1.3Updating datastore.save_node so that it no longer uses batch storage |
aga-ml | No description available on PyPI. |
agamotto | AgamottoAgamotto is a helper module to make it easier to test a running system
with Python.Why not use serverspec? I work in a Python shop and want our devs to be
able to easily write their own tests. Making the test suite use the same
language they use daily removes a potential friction point.

Installation

pip install agamotto

Usage

import agamotto
import unittest2 as unittest

class TestKnownSecurityIssues(unittest.TestCase):

  def testBashHasCVE_2014_6271Fix(self):
    """Confirm that fix has been installed for CVE-2014-6271 Bash Code
    Injection Vulnerability via Specially Crafted Environment Variables
    """
    self.assertFalse(agamotto.process.stdoutContains(
        "(env x='() { :;}; echo vulnerable' bash -c \"echo this is a test\") 2>&1",
        'vulnerable'), 'Bash is vulnerable to CVE-2014-6271')

  def testBashHasCVE_2014_7169Fix(self):
    """Confirm that fix has been installed for CVE-2014-7169 Bash Code
    Injection Vulnerability via Specially Crafted Environment Variables
    """
    self.assertFalse(agamotto.process.stdoutContains(
        "env X='() { (a)=>\\' bash -c \"echo echo vuln\"; [[ \"$(cat echo)\" == \"vuln\" ]] && echo \"still vulnerable :(\" 2>&1",
        'still vulnerable'), 'Bash is vulnerable to CVE-2014-7169')

  def testNoAccountsHaveEmptyPasswords(self):
    """/etc/shadow has : separated fields. Check the password field ($2) and
    make sure no accounts have a blank password.
    """
    self.assertEquals(agamotto.process.execute(
        'sudo awk -F: \'($2 == "") {print}\' /etc/shadow | wc -l').strip(), '0',
        "found accounts with blank password")

  def testRootIsTheOnlyUidZeroAccount(self):
    """/etc/passwd stores the UID in field 3. Make sure only one account entry
    has uid 0.
    """
    self.assertEquals(agamotto.process.execute(
        'awk -F: \'($3 == "0") {print}\' /etc/passwd').strip(),
        'root:x:0:0:root:/root:/bin/bash')

if __name__ == '__main__':
  unittest.main()

Caveats

We're a CentOS shop. This hasn't even been tested on stock RHEL, let
alone Debian or Ubuntu. Pull requests adding that functionality are
welcome, of course. |
agamprimer | No description available on PyPI. |
aganhui | Example Package

This is a simple example package. You can use Github-flavored Markdown to write your content.

2021-01-05: updated ygh.rs so that it works properly |
agape-auth | Agape Auth

The auth package provides a User model and an API for authenticating and registering users. It includes an account recovery API for resetting passwords, and the User model can be extended dynamically.

Quick start

Add the desired applications into the INSTALLED_APPS setting like this:

INSTALLED_APPS = [
...
'agape.authentication',
]

Include the URLconf in your project urls.py like this:

url(r'^/api/', include('agape.authentication.urls')),

Run python manage.py migrate agapeAuth to create the models.

Start the development server and visit http://127.0.0.1:8000/

Developer Instructions

Testing

There are 3 different runtests_*.py files. Each one runs tests using a different configuration file. There is also a runtests.sh file which will run all of the runtests_*.py files.

./runtests.sh
venv/bin/python runtests_default.py
...

Packaging & Distribution

Instead of calling python setup.py sdist directly, use the build.sh script. This script sets the version number in setup.py, builds the package, and releases to the local repository if the $REPO variable is set.

./build.sh

Installation

pip install agape-auth

License

Copyright (C) 2017-2020 Jeffrey Hallock, Maverik Software

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
See the [LICENSE.md](LICENSE.md) file for details. |
agape-core | AgapeAgape is DRY software.DescriptionSeethe github repofor more information.LicenseCopyright (C) 2017-2020 Jeffrey Hallock, Maverik SoftwareThe (MIT) license. |
agape-db | Agape

Field factories for Django models

Synopsis

from django.db.models import Model
from agape import db
class Person( Model ):
first_name = db.string( 32, required=True )
last_name = db.string( 32, required=True )
birthday = db.date( )
email = db.email( )

Description

Shorthand for building Django models. Fields default to null=True and blank=True. Setting required=True will set both null and blank to False on the native Django field.

Fields

These factory methods are wrappers over the native Django fields.
Fields that have required parameters use ordered arguments
instead of requiring named arguments. You may pass in any named
arguments that the native Django fields accept. Additional arguments are also provided for specific fields.

boolean( )
color( )
date( )
datetime( )
decimal( max_digits, decimal_places )
duration( )
email( )
file( )
float( )
image( )
integer( )
small_integer( )
join( model, on_delete_policy )

slug( from_field, auto=True )

If the optional auto parameter is set to True, the slug will be automatically generated when the instance is saved for the first time.

string( max_length )

user( on_delete_policy, auto=True, auto_add=True )

User field which will auto-populate when the instance is updated or created. Set auto=True to update the field each time the instance is saved, or auto_add=True to set it only when the instance is saved for the first time.

You must add the agape.db.middleware.AgapeUserMiddleware to your settings.py file to use auto and auto_add.
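As an illustration of the required flag described above, a minimal sketch of how such a factory might wrap a native Django field (assumed behavior for illustration, not agape-db's actual implementation):

from django.db import models

def string(max_length, required=False, **kwargs):
    # required=True maps to null=False and blank=False on the native field
    kwargs.setdefault('null', not required)
    kwargs.setdefault('blank', not required)
    return models.CharField(max_length=max_length, **kwargs)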
License

Copyright (C) 2021 Maverik Minett

The (MIT) license. |
agape-django | # Agape

Agape is a collection of Django apps to use for building modern web applications.

## Quick start

1. Add the desired applications into the INSTALLED_APPS setting like this:

   INSTALLED_APPS = [
       ...
       'agape.authentication',
   ]

2. Include the URLconf in your project urls.py like this:

   url(r'^/api/v1/', include('agape.authentication.urls')),

3. Run `python manage.py migrate` to create the models.

4. Start the development server and visit http://127.0.0.1:8000/

## Available Applications

* authentication
* files

## Developer Instructions

### Testing

```
python runtests.py
python runtests.py agape.authentication
python runtests.py agape.[module]
```

### Packaging & Distribution

#### Package only

```
python setup.py sdist
```

#### Package and submit to PyPi repository

```
python setup.py sdist upload -r pypi
```

### Installation

#### Install locally

```
pip install ../django-agape/dist/django-agape-*.tar.gz
```

#### Install from PyPi

```
pip install django-agape
```
|
agape-string | Agape

String manipulation

Functions

slugify( string )
Returns a URL-safe slug from a string

tokenize( string )
Returns a string formatted in snake-case

pluralize( string )
Converts a singular noun to a plural noun

(A usage sketch follows this entry.)

License

Copyright (C) 2021 Maverik Minett

The (MIT) license. |
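A hypothetical usage sketch of the agape-string functions listed above (the import path, exact signatures, and outputs are assumptions):

from agape.string import slugify, tokenize, pluralize

print(slugify('Hello World!'))   # e.g. 'hello-world'
print(tokenize('Hello World'))   # e.g. 'hello_world'
print(pluralize('category'))     # e.g. 'categories'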
agar | Agar is a set of utilities for Google App Engine python, created as part of the Substrate Project.

Resources

Documentation
PyPI Package
Source Code Repository

Requirements

Agar requires the Google App Engine SDK, webapp2, webapp2_extras, pytz, restler, and basin. Versions of these (except the Google App
Engine SDK) are located in thelibdirectory.InstallationTo install Agar, download the source and add theagardirectory to
your Google App Engine project. It must be on your path.TestsAgar comes with a set of tests. Running Agar’s tests requiresunittest2andWebTest(included in thelibdirectory). To run them,
execute:$ ./run_tests.pyTestingGoogle App Engine now includes testbed to make local unit testing
easier. This obsoletes the now-unsupported GAE TestBed
library. However, it had several useful helper functions, many of
which have been re-implemented in Agar. To use them, you must use unittest2 and inherit from agar.tests.BaseTest or agar.tests.WebTest.
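A minimal sketch of such a test (assuming only that agar.tests.BaseTest sets up the App Engine testbed stubs, as described above):

from google.appengine.ext import db
from agar.tests import BaseTest

class Counter(db.Model):
    count = db.IntegerProperty(default=0)

class CounterTest(BaseTest):
    def test_datastore_stub_is_isolated(self):
        # Entities written here go to the local testbed datastore stub.
        Counter().put()
        self.assertEqual(1, Counter.all().count())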
License

Agar is licensed under the MIT License. See LICENSE.txt for details.

Contributing

To contribute to the Agar project, fork the repository, make your changes, and submit a pull request. |
agarclient | No description available on PyPI. |
agarilog | agarilog

This is a simple logger for message services.

Intended use

Intended for logging long-running batch jobs, service demos, and the like. Not suited to libraries that are meant to be called from other code.

Installation

pip install agarilog

Features

Use .env file.

>>> import agarilog as logger
>>> logger.info("Hello agarilog!")

Use any .env file.

>>> from agarilog import get_logger
>>> logger = get_logger(name=__name__, env_file="dev.env")
>>> logger.info("Hello agarilog!")

This uses the dev.env file.

Telegram
Slack
Chatwork
Terminal

Environment

Register each service's settings as environment variables, or write them in a .env file in the same directory as the execution path. An arbitrary file can also be loaded by changing the import, as shown above. For the web services, LOG_XXXX_LIMIT caps the number of log messages sent in parallel; setting it to 1 sends logs in order, while for larger values the order is not guaranteed.

Environment variables will always take priority over values loaded from a dotenv file.

LOG_XXXX_LEVEL: ["NOTSET", "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]

Telegram

LOG_TELEGRAM_TOKEN=XXXXXXXXX:YYYYYYYYYYYYYYYYYYYYYYYYYYYY
LOG_TELEGRAM_CHAT_ID=XXXXXXXX
LOG_TELEGRAM_LEVEL=WARNING # default is warning
LOG_TELEGRAM_LIMIT=10 # default is 10

Slack

LOG_SLACK_TOKEN=xxxx-YYYYYYYYYYYY-YYYYYYYYYYYY-xxxxxxxxxxxxxxxxxxxxx
LOG_SLACK_CHANNEL=XXXXXXXXXXX
LOG_SLACK_LEVEL=WARNING # default is warning

Chatwork

LOG_CHATWORK_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
LOG_CHATWORK_ROOM_ID=XXXXXXXXX
LOG_CHATWORK_LEVLE=WARNING # default is warning

Terminal

LOG_TERMINAL_TYPE=COLOR # default is COLOR
LOG_TERMINAL_LEVEL=WARNING # default is warning

LOG_TERMINAL_TYPE: ["NONE", "PRINT", "NORMAL", "COLOR"]

Development

Run this first after git clone; it creates the virtual environment and installs pre-commit.

$ make init |
agario-bot | ## agario-python-bot

##### Lightweight client library for writing agario bots in Python

##### socketio server: https://gitlab.com/unidev/agario

##### installation: pip install agario-bot

#### Examples

###### Simplest cycle bot

```python
from agario_bot.bot import BotClient

b = BotClient('prettygoo', wait_rate=0.1)
surroundings = b.get_visible_surroundings()
while True:
    b.move_left()
    surroundings = b.get_visible_surroundings()
    print(surroundings['cells'])
    print(surroundings['food'])
    b.move_up()
    surroundings = b.get_visible_surroundings()
    print(surroundings['cells'])
    print(surroundings['food'])
    b.move_right()
    surroundings = b.get_visible_surroundings()
    print(surroundings['cells'])
    print(surroundings['food'])
    b.move_down()
    surroundings = b.get_visible_surroundings()
    print(surroundings['cells'])
    print(surroundings['food'])
```

###### Scary bot, just runs away from everyone on the gameboard

```python
from agario_bot.examples.scary_bot import run_scary_bot

run_scary_bot()
```

##### BotClient arguments

- float:speed_rate - time to execute movement, 1 - one second
- float:wait_rate - time to wait before server response on client emitted socket
- list:cells - players
- mass - will be removed
- string:host - default localhost (0.0.0.0 will not work on Windows, 127.0.0.1 is highly likely not to work as well)
- int:port - must be None for real server deployment |
agarnet | No description available on PyPI. |
agaro | Framework to run models

Free software: BSD license
Documentation: https://agaro.readthedocs.org

Features

TODO

History

0.1.0 (2015-01-11)

First release on PyPI. |