package
|
package-description
|
---|---|
aleph-pytezos
|
aleph-pytezos

Add a short description here!

A longer description of your project goes here…

Note

This project has been set up using PyScaffold 4.2.3. For details and usage information on PyScaffold see https://pyscaffold.org/.
|
alephs
|
Pakistan’s emergency services.

Details

Catalogue of emergency services in Pakistan for a consolidated lookup.

Package Usage

from alephs import alephs

# Get version
alephs.get_version()

# get list of institutions
insts = alephs.get_institutes()
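Putting the two calls shown above together, a minimal end-to-end sketch; the structure of the returned institutions list is an assumption for illustration:

```python
from alephs import alephs

# Print the installed package version (call shown in the usage above)
print(alephs.get_version())

# Fetch the catalogue and print each entry; treating each entry as a plain
# printable record is an assumption, the real shape may differ
for inst in alephs.get_institutes():
    print(inst)
```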
|
aleph-sdk-python
|
aleph-sdk-python

Python SDK for the Aleph.im network, next generation network of decentralized big data applications. Development follows the Aleph Whitepaper.

Documentation

Documentation (albeit still vastly incomplete, as it is a work in progress) can be found at http://aleph-sdk-python.readthedocs.io/ or built from this repo with:

$ python setup.py docs

Requirements

Linux

Some cryptographic functionalities use curve secp256k1 and require installing libsecp256k1.

$ apt-get install -y python3-pip libsecp256k1-dev

Using some chains may also require installing libgmp3-dev.

macOS

This project does not support Python 3.12 on macOS. Please use Python 3.11 instead.

$ brew tap cuber/homebrew-libsecp256k1
$ brew install libsecp256k1

Installation

Using pip and PyPI:

$ pip install aleph-sdk-python[ethereum,solana,tezos]

Installation for development

To install from source and still be able to modify the source code:

$ pip install -e .[testing]

or

$ python setup.py develop

Usage with LedgerHQ hardware

The SDK supports signatures using app-ethereum, the Ethereum app for the Ledger hardware wallets. This has been tested successfully on Linux (amd64). Let us know if it works for you on other operating systems.

Using a Ledger device on Linux requires root access or the setup of udev rules. Unlocking the device is required before using the relevant SDK functions.

Debian / Ubuntu

Install ledger-wallets-udev:

sudo apt-get install ledger-wallets-udev

On NixOS

Configure hardware.ledger.enable = true.

Other Linux systems

See https://github.com/LedgerHQ/udev-rules
|
aleph-utils
|
aleph_utils

Some helper functions for Aleph, in French.
|
alephvault-evm-events-http-mongodb-storage
|
No description available on PyPI.
|
alephvault-http-mongodb-storage
|
No description available on PyPI.
|
alephvault-windrose-http-mongodb-storage-generator
|
No description available on PyPI.
|
aleph-vrf
|
Aleph.im Verifiable Random Functions

What is a Verifiable Random Function (VRF)?

Verifiable Random Functions (VRF) are cryptographic primitives that generate random numbers that are both unpredictable and verifiable. This makes it possible to create "trustless randomness", i.e. to generate (pseudo-)random numbers in decentralized systems and provide the assurance that each number was indeed generated randomly.

Aleph.im implementation

Aleph.im uses a combination of virtual machines (VMs) and aleph.im network messages to implement VRFs. The implementation revolves around the following components:

- The VRF coordinator
- N executors

The coordinator receives user requests to generate random numbers. Upon receiving a request, it selects a set of compute resource nodes (CRNs) to act as executors. Each of these executors generates a random number and computes its hash using SHA3-256. These hashes are then posted to aleph.im using a POST message, which also includes a unique request identifier. Once all the hashes are posted and confirmed, the coordinator requests the actual random numbers from each node.

Finally, the coordinator performs a verification process to ensure that all random numbers correspond to their previously posted hashes. The random numbers are then combined using an XOR operation to generate the final random number. This final number, along with a summary of the operations performed, is published on aleph.im for public verification.

How to use aleph.im VRFs

The VRF executors and coordinator are meant to be deployed as VM functions on the aleph.im network. The coordinator can also be deployed in library mode (see below).

We provide a script to deploy the VM functions. Just run the following command to package the application and upload it to the aleph.im network:

python3 deployment/deploy_vrf_vms.py

If the deployment succeeds, the script will display links to the VMs on the aleph.im network. Example:

Executor VM: https://api2.aleph.im/api/v0/messages/558b0eeea54d80d2504b0287d047e0b78458d08022d3600bcf8478700dd0aac2
Coordinator VM: https://api2.aleph.im/api/v0/messages/d9eef54544338685a9b4034cc16e285520eb3cf0c199eeade1d6b290365c95d0

Use the coordinator in library mode

The coordinator can also be used directly from Python code. First, deploy the executors using the deployment script, without the coordinator VM:

python3 deployment/deploy_vrf_vms.py --no-coordinator

This will deploy an executor VM on the network and give you its ID. Example:

Executor VM: https://api2.aleph.im/api/v0/messages/558b0eeea54d80d2504b0287d047e0b78458d08022d3600bcf8478700dd0aac2

Then, install the aleph-vrf module and call it from your code:

pip install aleph-vrf

from aleph_vrf.coordinator.vrf import generate_vrf
from aleph_message.models import ItemHash

async def main():
    aleph_account = ...  # Specify your aleph.im account
    vrf_response = await generate_vrf(
        account=aleph_account,
        vrf_function=ItemHash(
            # The hash of the executor VM deployed above
            "558b0eeea54d80d2504b0287d047e0b78458d08022d3600bcf8478700dd0aac2"
        ),
    )
    random_number = int(vrf_response.random_number)

Contribute

Set up the development environment

You can set up a development environment by configuring a Python virtual environment and installing the project in development mode:

python -m virtualenv venv
source venv/bin/activate
pip install -e .[build,testing]

Run tests

This project uses mypy for static type analysis and pytest for unit/integration tests.

# Static analysis with mypy
mypy src/ tests/

# Run unit/integration tests
pytest -v .

Create a new release

- Deploy the VMs: python3 deployment/deploy_vrf_vms.py
- Update the executor VM hash in the settings (Settings.FUNCTION) and create a Pull Request
- Merge the Pull Request and create a new release on Github
- Build and upload the package on PyPI: python3 -m build && twine upload dist/*

Other resources

Article on Medium
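To illustrate the commit-and-reveal scheme described above, here is a minimal, self-contained sketch (standard library only, not the actual executor/coordinator code): each simulated executor commits to the SHA3-256 hash of its value first, then the coordinator checks every reveal against its commitment and XORs the verified values together.

```python
import hashlib
import secrets
from functools import reduce

NUM_EXECUTORS = 3  # the "N executors" from the description

# Commit phase: each simulated executor draws a 256-bit random value and
# would publish only its SHA3-256 hash (the commitment) to aleph.im
values = [secrets.token_bytes(32) for _ in range(NUM_EXECUTORS)]
commitments = [hashlib.sha3_256(v).digest() for v in values]

# Reveal/verify phase: the coordinator checks each revealed value against
# its previously posted hash
for v, c in zip(values, commitments):
    assert hashlib.sha3_256(v).digest() == c, "revealed value does not match commitment"

# The verified values are combined with XOR to produce the final number
final_number = reduce(lambda acc, v: acc ^ int.from_bytes(v, "big"), values, 0)
print(final_number)
```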
|
alephzero
|
TODO: long description
|
ale-project-x
|
python -m build
twine check dist/*
twine upload -r testpypi dist/*
twine upload dist/*
|
ale-py
|
The Arcade Learning Environment

The Arcade Learning Environment (ALE) is a simple framework that allows researchers and hobbyists to develop AI agents for Atari 2600 games. It is built on top of the Atari 2600 emulator Stella and separates the details of emulation from agent design. This video depicts over 50 games currently supported in the ALE.

For an overview of our goals for the ALE read The Arcade Learning Environment: An Evaluation Platform for General Agents. If you use ALE in your research, we ask that you please cite this paper in reference to the environment. See the Citing section for BibTeX entries.

Features

- Object-oriented framework with support to add agents and games.
- Emulation core uncoupled from rendering and sound generation modules for fast emulation with minimal library dependencies.
- Automatic extraction of game score and end-of-game signal for more than 100 Atari 2600 games.
- Multi-platform code (compiled and tested under macOS, Windows, and several Linux distributions).
- Python bindings through pybind11.
- Native support for OpenAI Gym.
- Visualization tools.

Quick Start

The ALE currently supports three different interfaces: C++, Python, and OpenAI Gym.

Python

You simply need to install the ale-py package distributed via PyPI:

pip install ale-py

Note: Make sure you're using an up to date version of pip or the install may fail.

You can now import the ALE in your Python projects with

from ale_py import ALEInterface
ale = ALEInterface()

ROM Management

The ALE doesn't distribute ROMs but we do provide a couple of tools for managing your ROMs. First is the command line tool ale-import-roms. You can simply specify a directory as the first argument to this tool and we'll import all ROMs supported by the ALE:

ale-import-roms roms/

[SUPPORTED]      breakout   roms/breakout.bin
[SUPPORTED]      freeway    roms/freeway.bin
[NOT SUPPORTED]             roms/custom.bin

Imported 2/3 ROMs

Furthermore, Python packages can expose ROMs for discovery using the special ale-py.roms entry point. For more details check out the example python-rom-package.

Once you've imported a supported ROM you can simply import the path from the ale-py.roms package and load the ROM in the ALE:

from ale_py.roms import Breakout
ale.loadROM(Breakout)

OpenAI Gym

Gym support is included in ale-py. Simply install the Python package using the instructions above. You can also install gym[atari] which also installs ale-py with Gym.

As of Gym v0.20 and onwards all Atari environments are provided via ale-py. We do recommend using the new v5 environments in the ALE namespace:

import gym
env = gym.make('ALE/Breakout-v5')

The v5 environments follow the latest methodology set out in Revisiting the Arcade Learning Environment by Machado et al.

The only major difference from Gym's AtariEnv is that we'd recommend not using the env.render() method in favour of supplying the render_mode keyword argument during environment initialization. The human render mode will give you the advantage of frame-perfect rendering, audio support, and proper resolution scaling. For more information check out docs/gym-interface.md.

For more information on changes to the Atari environments in OpenAI Gym please check out the following blog post.
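As a quick illustration of the Python interface above, a hedged random-agent sketch; it assumes the Breakout ROM has already been imported as described, and uses the classic ALE control methods (getMinimalActionSet, act, game_over, reset_game):

```python
import random

from ale_py import ALEInterface
from ale_py.roms import Breakout  # assumes the ROM was imported as shown above

ale = ALEInterface()
ale.loadROM(Breakout)

# Play one episode by sampling uniformly from the minimal action set
actions = ale.getMinimalActionSet()
total_reward = 0
while not ale.game_over():
    total_reward += ale.act(random.choice(actions))
print(f"Episode ended with score {total_reward}")
ale.reset_game()
```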
C++

The following instructions will assume you have a valid C++17 compiler and vcpkg installed. We use CMake as a first class citizen, and you can use the ALE directly with any CMake project. To compile and install the ALE you can run

mkdir build && cd build
cmake ../ -DCMAKE_BUILD_TYPE=Release
cmake --build . --target install

There are optional flags -DSDL_SUPPORT=ON/OFF to toggle SDL support (i.e., display_screen and sound support; OFF by default), -DBUILD_CPP_LIB=ON/OFF to build the ale-lib C++ target (ON by default), and -DBUILD_PYTHON_LIB=ON/OFF to build the pybind11 wrapper (ON by default).

Finally, you can link against the ALE in your own CMake project as follows

find_package(ale REQUIRED)
target_link_libraries(YourTarget ale::ale-lib)

Citing

If you use the ALE in your research, we ask that you please cite the following.

M. G. Bellemare, Y. Naddaf, J. Veness and M. Bowling. The Arcade Learning Environment: An Evaluation Platform for General Agents, Journal of Artificial Intelligence Research, Volume 47, pages 253-279, 2013.

In BibTeX format:

@Article{bellemare13arcade,
  author = {{Bellemare}, M.~G. and {Naddaf}, Y. and {Veness}, J. and {Bowling}, M.},
  title = {The Arcade Learning Environment: An Evaluation Platform for General Agents},
  journal = {Journal of Artificial Intelligence Research},
  year = "2013",
  month = "jun",
  volume = "47",
  pages = "253--279",
}

If you use the ALE with sticky actions (flag repeat_action_probability), or if you use the different game flavours (mode and difficulty switches), we ask that you also cite the following:

M. C. Machado, M. G. Bellemare, E. Talvitie, J. Veness, M. J. Hausknecht, M. Bowling. Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents, Journal of Artificial Intelligence Research, Volume 61, pages 523-562, 2018.

In BibTeX format:

@Article{machado18arcade,
  author = {Marlos C. Machado and Marc G. Bellemare and Erik Talvitie and Joel Veness and Matthew J. Hausknecht and Michael Bowling},
  title = {Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents},
  journal = {Journal of Artificial Intelligence Research},
  volume = {61},
  pages = {523--562},
  year = {2018}
}
|
ale-python-interface
|
No description available on PyPI.
|
alerce
|
Welcome to the ALeRCE Python Client.

The ALeRCE client is a Python library to interact with ALeRCE services and databases. For full documentation please visit the official Documentation.

Installing ALeRCE Client

pip install alerce

Or clone the repository and install from there:

git clone https://github.com/alercebroker/alerce_client.git
cd alerce_client
python setup.py install

Usage

from alerce.core import Alerce
alerce = Alerce()
dataframe = alerce.query_objects(
classifier="lc_classifier",
class_name="LPV",
format="pandas"
)
detections = alerce.query_detections("ZTF20aaelulu", format="pandas", sort="mjd")
magstats = alerce.query_magstats("ZTF20aaelulu")
query='''
SELECT
oid, sgmag1, srmag1, simag1, szmag1, sgscore1
FROM
ps1_ztf
WHERE
oid = 'ZTF20aaelulu'
'''
detections_direct = alerce.send_query(query, format="pandas")

Configuration

By default the Alerce object should be ready to use without any external configuration, but in case you need to adjust any parameters you can configure the Alerce object in different ways.

At client object initialization

You can pass parameters to the Alerce class constructor to set the parameters for the API connection. For example, using the ZTF API on localhost:5000 and the DB API on localhost:5050:

alerce = Alerce(ZTF_API_URL="http://localhost:5000", ZTF_DB_API_URL="http://localhost:5050")

From a dictionary object

You can pass parameters to the Alerce class from a dictionary object:

my_config = {
    "ZTF_API_URL": "http://localhost:5000",
    "ZTF_DB_API_URL": "http://localhost:5050"
}
alerce = Alerce()
alerce.load_config_from_object(my_config)
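Putting the pieces above together, a small end-to-end sketch using only the methods demonstrated in this description (query_objects and query_detections); whether the pandas result exposes an "oid" column is an assumption:

```python
from alerce.core import Alerce

alerce = Alerce()

# Query a handful of Long Period Variable candidates, as in the usage example
objects = alerce.query_objects(
    classifier="lc_classifier",
    class_name="LPV",
    format="pandas",
)

# Fetch the detections of the first returned object, sorted by MJD
first_oid = objects.iloc[0]["oid"]  # assumes the dataframe has an "oid" column
detections = alerce.query_detections(first_oid, format="pandas", sort="mjd")
print(detections.head())
```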
|
alert
|
Alert

A quick package to email logs for long-running programs.

pip install alert

Email Alerts

import alert

mail = alert.mail(sender_email="", sender_password="")
mail.send_email(receiver_email="", subject="", msg="")
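Building on the two calls above, a minimal sketch of the intended pattern, alerting on completion or failure of a long-running job; the addresses, credentials and run_long_job are placeholders:

```python
import alert

def run_long_job():
    # Hypothetical placeholder for your long-running task
    pass

# Placeholders: fill in real credentials and addresses
mail = alert.mail(sender_email="sender@example.com", sender_password="app-password")

try:
    run_long_job()
    mail.send_email(receiver_email="me@example.com", subject="Job finished", msg="All good.")
except Exception as exc:
    mail.send_email(receiver_email="me@example.com", subject="Job failed", msg=str(exc))
```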
|
alert2me
|
alert2me

CLI tool that alerts to me.

How to install

pip install alert2me
alme-configure

alert2me
alme {cmd}
|
alert360
|
Summary

Alert 360 can be used to trigger any specific action on any specific data changes. The actions can involve any sort of variables required to complete the action, and the trigger can be any sort of change in state in an SQL database.

Compatible RDBMS

- PostgreSQL
- Microsoft SQL
- MySQL/MariaDB
- SQLite
- FireBird

Setup Instructions

Install the package:

pip install alert360

Start a Django project:

django-admin startproject djangoproject

Add the app to INSTALLED_APPS:

INSTALLED_APPS = [
    ...
    'django_ace',
    'alert360',
    ...
]

Create a Python file in which you can write your own actions that will be triggered when the state changes:

actions.py

from alert360.actions import ActionsManager

@ActionsManager.add_handler
def print_changes(changes):
    print("Some changes occurred")
    print(changes)

In the above code we declared our own custom function print_changes, which will be called whenever the state changes, and it will print a summary of the changes. However, there's one more step left to connect this function to the ActionsManager. In the __init__.py file in the folder in which we created actions.py, add the following line:

from . import actions

Now run database migrations and create a superuser so you can access the admin website:

python manage.py migrate
python manage.py createsuperuser

Now log in to the admin website, connect a database, and create a new StateWatcher.
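As a hypothetical variation on the handler above, a sketch of a second action that logs changes instead of printing them; only the ActionsManager.add_handler decorator is taken from the description, the rest is illustrative standard-library code:

```python
# actions.py
import json
import logging

from alert360.actions import ActionsManager

logger = logging.getLogger("alert360.actions")

@ActionsManager.add_handler
def log_changes(changes):
    # Hypothetical handler: write each change summary to the application log;
    # the structure of `changes` is whatever alert360 passes to handlers,
    # serialized defensively here with default=str
    logger.info("state change detected: %s", json.dumps(changes, default=str))
```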
|
alerta
|
Alerta Command-Line Tool

Unified command-line tool, terminal GUI and Python SDK for the Alerta monitoring system. Related projects can be found on the Alerta Org Repo at https://github.com/alerta/.

Installation

To install the Alerta CLI tool run::

$ pip install alerta

Configuration

Options can be set in a configuration file, as environment variables or on the command line.
Profiles can be used to easily switch between different configuration settings.

Option          | Config File | Environment Variable   | Optional Argument       | Default
----------------|-------------|------------------------|-------------------------|---------------------------
file            | n/a         | ALERTA_CONF_FILE       | n/a                     | ~/.alerta.conf
profile         | profile     | ALERTA_DEFAULT_PROFILE | --profile PROFILE       | None
endpoint        | endpoint    | ALERTA_ENDPOINT        | --endpoint-url URL      | http://localhost:8080
key             | key         | ALERTA_API_KEY         | n/a                     | None
timezone        | timezone    | n/a                    | n/a                     | Europe/London
SSL verify      | sslverify   | REQUESTS_CA_BUNDLE     | n/a                     | verify SSL certificates
SSL client cert | sslcert     | n/a                    | n/a                     | None
SSL client key  | sslkey      | n/a                    | n/a                     | None
timeout         | timeout     | n/a                    | n/a                     | 5s TCP connection timeout
output          | output      | n/a                    | --output-format OUTPUT  | simple
color           | color       | CLICOLOR               | --color, --no-color     | color on
debug           | debug       | DEBUG                  | --debug                 | no debug

Example

Configuration file ~/.alerta.conf::

[DEFAULT]
timezone = Australia/Sydney
# output = psql
profile = production
[profile production]
endpoint = https://api.alerta.io
key = demo-key
[profile development]
endpoint = https://localhost:8443
sslverify = off
timeout = 10.0
debug = yes

Environment Variables

Set environment variables to use production configuration settings by default::

$ export ALERTA_CONF_FILE=~/.alerta.conf
$ export ALERTA_DEFAULT_PROFILE=production
$ alerta query

And to switch to development configuration settings when required use the --profile option::

$ alerta --profile development query

Usage

$ alerta
Usage: alerta [OPTIONS] COMMAND [ARGS]...
Alerta client unified command-line tool.
Options:
--config-file <FILE> Configuration file.
--profile <PROFILE> Configuration profile.
--endpoint-url <URL> API endpoint URL.
--output-format <FORMAT> Output format. eg. simple, grid, psql, presto, rst
--color / --no-color Color-coded output based on severity.
--debug Debug mode.
--help Show this message and exit.
Commands:
ack Acknowledge alerts
blackout Suppress alerts
blackouts List alert suppressions
close Close alerts
customer Add customer lookup
customers List customer lookups
delete Delete alerts
heartbeat Send a heartbeat
heartbeats List heartbeats
help Show this help
history Show alert history
key Create API key
keys List API keys
login Login with user credentials
logout Clear login credentials
perm Add role-permission lookup
perms List role-permission lookups
query Search for alerts
raw Show alert raw data
revoke Revoke API key
send Send an alert
status Display status and metrics
tag Tag alerts
token Display current auth token
unack Un-acknowledge alerts
untag Untag alerts
update Update alert attributes
uptime Display server uptime
user Update user
users List users
version Display version info
whoami Display current logged in user

Python SDK

The alerta client Python package can also be used as a Python SDK.

Example

>>> from alertaclient.api import Client
>>> client = Client(key='NGLxwf3f4-8LlYN4qLjVEagUPsysn0kb9fAkAs1l')
>>> client.send_alert(environment='Production', service=['Web', 'Application'], resource='web01', event='HttpServerError', value='501', text='Web server unavailable.')
Alert(id='42254ef8-7258-4300-aaec-a9ad7d3a84ff', environment='Production', resource='web01', event='HttpServerError', severity='normal', status='closed', customer=None)
>>> [a.id for a in client.search([('resource','~we.*01'), ('environment!', 'Development')])]
['42254ef8-7258-4300-aaec-a9ad7d3a84ff']
>>> client.heartbeat().serialize()['status']
'ok'
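A slightly fuller sketch built only from the SDK calls demonstrated above (send_alert, search and heartbeat); the API key is a placeholder:

```python
from alertaclient.api import Client

client = Client(key='REPLACE_WITH_API_KEY')  # placeholder key, as in the example above

# Raise an alert, then look it up and confirm the server is healthy
alert = client.send_alert(
    environment='Production',
    service=['Web'],
    resource='web01',
    event='HttpServerError',
    value='501',
    text='Web server unavailable.',
)
print('created', alert.id)

for a in client.search([('resource', '~we.*01'), ('environment!', 'Development')]):
    print(a.id, a.status)

print(client.heartbeat().serialize()['status'])
```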
License

Alerta monitoring system and console
Copyright 2012-2023 Nick Satterly
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
alerta-blackout-regex
|
Alerta plugin to enhance blackout management by matching alerts against blackouts with PCRE (Perl Compatible Regular Expressions) on attributes.

A blackout is considered matched when all its attributes are matched. Once an alert is identified as matching a blackout, a special label is applied, with the format regex_blackout=<blackout id>, where blackout id is the ID of the matched blackout, e.g., regex_blackout=d8ba1d3b-dbfd-4677-ab00-e7f8469d7ad3. This way, when the alert is fired again, there's no need to verify the matching again, but simply to verify whether the referenced blackout is still active.

Important

Beginning with version 2.0.0, the behaviour has changed: instead of evaluating the alert in the post_receive hook, this plugin now evaluates alerts through the pre_receive hook. The reasoning was that post_receive would set the blackout status after the alert had been sent to other plugins, which resulted in confusing behaviour.

That said, the plugin has been changed to process the alert in pre_receive, and therefore before the alert has been correlated. As the blackouts are retrieved from the Alerta API (unfortunately there's no other way to gather the blackouts from a plugin via internal mechanisms), processing each and every alert through pre_receive would put a lot more workload on your Alerta API. To reduce this, the blackout_regex plugin now caches the blackouts locally, in a file. To fine-tune this behaviour for your own setup you can set a few environment variables. See more details below, under the Configuration section.

Note

Starting with version 3.0.0, the plugin gathers the list of blackouts straight from the database (instead of using the API, as previously). This should normally improve reliability, but as there's no caching involved, every alert notification coming in (before being evaluated and correlated) will cause a DB query.

Installation

This plugin is designed to be installed on the Alerta server; the package is available on PyPI so you can install it as:

pip install alerta-blackout-regex

Configuration

Add blackout_regex to the list of enabled PLUGINS in the alertad.conf server configuration file and set plugin-specific variables either in the server configuration file or as environment variables.

PLUGINS=['blackout_regex']

Note

To ensure this plugin won't affect the existing blackouts you may have in place, it is recommended to list the blackout_regex plugin after the native blackout plugin in the PLUGINS configuration option or environment variable.
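For example, a minimal alertad.conf sketch reflecting that recommendation; the ordering comes from the note above, and any plugin other than blackout and blackout_regex is an illustrative placeholder:

```python
# alertad.conf is evaluated as Python by the Alerta server.
# Native blackout plugin first, blackout_regex after it, per the note above.
PLUGINS = ['blackout', 'blackout_regex']
```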
References

Suppressing Alerts using Blackouts

License

Copyright (c) 2020-2022 Mircea Ulinic. Available under the Apache License 2.0.
|
alertadengue
|
AlertaDengue

This repository contains the main applications and services for the InfoDengue web portal. InfoDengue is an early-warning system for all states of Brazil; the system is based on the continuous analysis of hybrid data generated through the research of climate and epidemiological data and social scraping.

For more information, please visit our website info.dengue.mat.br to visualize the current epidemiological situation in each state.

Sponsors

How to contribute to InfoDengue

You can find more information about Contributing on GitHub. Also check our Team page to see if there is a work opportunity in the project.

How data can be visualized

The InfoDengue website is accessed by many people, and it is common for us to hear that this information is used when planning travel and other activities. All data is compiled, analyzed and generated at a national level with the support of the Brazilian Health Ministry; the weekly reports can be found on our website through graphics or downloaded as JSON and CSV files via the API.

API

The InfoDengue API provides the data contained in the reports, compiled as JSON or CSV files; it also supports a custom time range. If you don't know Python or R, please check the tutorials here.

Reports

If you are a member of a Municipal Health Department, or a citizen, and you have an interest in detailed information on the transmission alerts of your municipality, just type the name of the city or state here.

Where the data comes from

Dengue, Chikungunya and Zika data are provided by SINAN as notification forms that feed a municipal database, which is then consolidated at the state level and finally, federally, by the Ministry of Health. Only a fraction of these cases are laboratory confirmed; most receive a final classification based on clinical and epidemiological criteria. From the notified cases, the incidence indicators that feed InfoDengue are calculated.

InfoDengue has partnered with the Dengue Observatory, which captures and analyzes tweets from geolocalized people for mentions of dengue symptoms on social media. Weather and climate data are obtained from REDEMET at airports all over Brazil. Epidemiological indicators require population size; demographic data of Brazilian cities are updated each year in InfoDengue using IBGE estimates.

Check out below the software we use in the project:
|
alerta-elastalert
|
No description available on PyPI.
|
alertapi
|
AlertAPI

Async and statically typed Air Raid Alert microframework for Python 3. Python 3.8, 3.9 and 3.10 are currently supported.

Installation

Install AlertAPI from PyPI with the following command:

pip install alertapi

Updating

pip install --upgrade alertapi

Start up basic API client

import asyncio

import alertapi


async def main() -> None:
    client = alertapi.APIClient(access_token='...')
    print(await client.fetch_states())


loop = asyncio.get_event_loop()
loop.run_until_complete(main())

Example

import asyncio

import alertapi


async def main() -> None:
    client = alertapi.APIClient(access_token='...')
    print('State list:', await client.fetch_states())
    print('First 5 active alerts:', await client.fetch_states(with_alert=True, limit=5))
    print('Inactive alerts:', await client.fetch_states(with_alert=False))
    print('Kyiv info:', await client.fetch_state(25))
    print('Kyiv info:', await client.fetch_state('Kyiv'))
    print('Is active alert in Lviv oblast:', await client.is_alert('Lviv oblast'))


loop = asyncio.get_event_loop()
loop.run_until_complete(main())

On run GatewayClient

import alertapi

client = alertapi.GatewayClient(access_token='...')


@client.listen(alertapi.ClientConnectedEvent)
async def on_client_connected(event: alertapi.ClientConnectedEvent) -> None:
    states = await event.api.fetch_states()
    print(states)


@client.listen(alertapi.PingEvent)
async def on_ping(event: alertapi.PingEvent) -> None:
    print('Ping event')


@client.listen(alertapi.StateUpdateEvent)
async def on_state_update(event: alertapi.StateUpdateEvent) -> None:
    print('State updated:', event.state)


client.connect()

Python optimization flags

CPython provides two optimisation flags that remove internal safety checks that are useful for development, and change other internal settings in the interpreter.

- python main.py - no optimisation - this is the default.
- python -O main.py - first level optimisation - features such as internal assertions will be disabled.
- python -OO main.py - second level optimisation - more features (including all docstrings) will be removed from the loaded code at runtime.

A minimum of first level optimisation is recommended when running applications in a production environment.
|
alertaseism
|
A library using XML data from INFP to get the earthquake magnitude up to the second.

Github: https://github.com/TudorGruian/alerta-seism-python3

Example:

```python
import alertaseism

print(alertaseism.mag())    # <- for the magnitude
print(alertaseism.heart())  # <- for the server status

if alertaseism.mag() >= 1:
    print("CUTREMUUUR")  # "EARTHQUAKE!" in Romanian
```

You can also find loop examples on the GitHub repo.
|
alerta-server
|
Alerta Release 9.0

The Alerta monitoring tool was developed with the following aims in mind:

- distributed and de-coupled so that it is SCALABLE
- minimal CONFIGURATION that easily accepts alerts from any source
- quick at-a-glance VISUALISATION with drill-down to detail

Requirements

Release 9 only supports Python 3.8 or higher. The only mandatory dependency is MongoDB or PostgreSQL. Everything else is optional.

- Postgres version 11 or better
- MongoDB version 4.4 or better

Installation

To install MongoDB on Debian/Ubuntu run:

$ sudo apt-get install -y mongodb-org
$ mongod

To install MongoDB on CentOS/RHEL run:

$ sudo yum install -y mongodb
$ mongod

To install the Alerta server and client run:

$ pip install alerta-server alerta
$ alertad run

To install the web console run:

$ wget https://github.com/alerta/alerta-webui/releases/latest/download/alerta-webui.tar.gz
$ tar zxvf alerta-webui.tar.gz
$ cd dist
$ python3 -m http.server 8000
>> browse to http://localhost:8000

Docker

Alerta and MongoDB can also run using Docker containers, see alerta/docker-alerta.

Configuration

To configure the alertad server, override the default settings in /etc/alertad.conf or using the ALERTA_SVR_CONF_FILE environment variable::

$ ALERTA_SVR_CONF_FILE=~/.alertad.conf
$ echo "DEBUG=True" > $ALERTA_SVR_CONF_FILE

Documentation

More information on configuration and other aspects of alerta can be found at http://docs.alerta.io

Development

To run in development mode, listening on port 5000:

$ export FLASK_APP=alerta FLASK_DEBUG=1
$ pip install -e .
$ flask run

To run in development mode, listening on port 8080, using Postgres and reporting errors to Sentry:

$ export FLASK_APP=alerta FLASK_DEBUG=1
$ export DATABASE_URL=postgres://localhost:5432/alerta5
$ export SENTRY_DSN=https://8b56098250544fb78b9578d8af2a7e13:[email protected]/153768
$ pip install -e .[postgres]
$ flask run --debugger --port 8080 --with-threads --reload

Troubleshooting

Enable debug log output by setting DEBUG=True in the API server configuration:

DEBUG=True
LOG_HANDLERS = ['console','file']
LOG_FORMAT = 'verbose'
LOG_FILE = '$HOME/alertad.log'

It can also be helpful to check the web browser developer console for JavaScript logging, network problems and API error responses.

Tests

To run all the tests there must be a local Postgres and MongoDB database running. Then run:

$ TOXENV=ALL make test

To just run the Postgres or MongoDB tests run:

$ TOXENV=postgres make test
$ TOXENV=mongodb make test

To run a single test run something like:

$ TOXENV="mongodb -- tests/test_search.py::QueryParserTestCase::test_boolean_operators" make test
$ TOXENV="postgres -- tests/test_queryparser.py::PostgresQueryTestCase::test_boolean_operators" make test

Cloud Deployment

Alerta can be deployed to the cloud easily using Heroku https://github.com/alerta/heroku-api-alerta, AWS EC2 https://github.com/alerta/alerta-cloudformation, or Google Cloud Platform https://github.com/alerta/gcloud-api-alerta

License

Alerta monitoring system and console
Copyright 2012-2023 Nick Satterly
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
alerta-server-ai
|
Alerta Release 7.0

The Alerta monitoring tool was developed with the following aims in mind:

- distributed and de-coupled so that it is SCALABLE
- minimal CONFIGURATION that easily accepts alerts from any source
- quick at-a-glance VISUALISATION with drill-down to detail

Python 2.7 support is EOL

Starting with Release 6.0 only Python 3.5+ is supported. Release 5.2 was the last to support Python 2.7, and feature enhancements for this release ended on August 31, 2018. Only critical bug fixes will be backported to Release 5.2.

Requirements

The only mandatory dependency is MongoDB or PostgreSQL. Everything else is optional.

- Postgres version 9.5 or better
- MongoDB version 3.2 or better

Installation

To install MongoDB on Debian/Ubuntu run:

$ sudo apt-get install -y mongodb-org
$ mongod

To install MongoDB on CentOS/RHEL run:

$ sudo yum install -y mongodb
$ mongod

To install the Alerta server and client run:

$ pip install alerta-server alerta
$ alertad run

To install the web console run:

$ wget https://github.com/alerta/alerta-webui/releases/latest/download/alerta-webui.tar.gz
$ tar zxvf alerta-webui.tar.gz
$ cd dist
$ python3 -m http.server 8000
>> browse to http://localhost:8000

Docker

Alerta and MongoDB can also run using Docker containers, see alerta/docker-alerta.

Configuration

To configure the alertad server, override the default settings in /etc/alertad.conf or using the ALERTA_SVR_CONF_FILE environment variable::

$ ALERTA_SVR_CONF_FILE=~/.alertad.conf
$ echo "DEBUG=True" > $ALERTA_SVR_CONF_FILE

Documentation

More information on configuration and other aspects of alerta can be found at http://docs.alerta.io

Development

To run in development mode, listening on port 5000:

$ export FLASK_APP=alerta FLASK_ENV=development
$ pip install -e .
$ flask run

To run in development mode, listening on port 8080, using Postgres and reporting errors to Sentry:

$ export FLASK_APP=alerta FLASK_ENV=development
$ export DATABASE_URL=postgres://localhost:5432/alerta5
$ export SENTRY_DSN=https://8b56098250544fb78b9578d8af2a7e13:[email protected]/153768
$ pip install -e .[postgres]
$ flask run --debugger --port 8080 --with-threads --reload

Troubleshooting

Enable debug log output by setting DEBUG=True in the API server configuration:

DEBUG=True
LOG_HANDLERS = ['console','file']
LOG_FORMAT = 'verbose'
LOG_FILE = '$HOME/alertad.log'

It can also be helpful to check the web browser developer console for JavaScript logging, network problems and API error responses.

Tests

To run the tests using a local Postgres database run:

$ pip install -r requirements.txt
$ pip install -e .[postgres]
$ createdb test5
$ ALERTA_SVR_CONF_FILE= DATABASE_URL=postgres:///test5 pytest

Cloud Deployment

Alerta can be deployed to the cloud easily using Heroku https://github.com/alerta/heroku-api-alerta, AWS EC2 https://github.com/alerta/alerta-cloudformation, or Google Cloud Platform https://github.com/alerta/gcloud-api-alerta

License

Alerta monitoring system and console
Copyright 2012-2019 Nick Satterly
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
alerta-server-shakti
|
Alerta Release 8.0

The Alerta monitoring tool was developed with the following aims in mind:

- distributed and de-coupled so that it is SCALABLE
- minimal CONFIGURATION that easily accepts alerts from any source
- quick at-a-glance VISUALISATION with drill-down to detail

Requirements

Release 8 only supports Python 3.6 or higher. The only mandatory dependency is MongoDB or PostgreSQL. Everything else is optional.

- Postgres version 9.5 or better
- MongoDB version 3.6 or better (4.0.7 required for full query syntax support)

Installation

To install MongoDB on Debian/Ubuntu run:

$ sudo apt-get install -y mongodb-org
$ mongod

To install MongoDB on CentOS/RHEL run:

$ sudo yum install -y mongodb
$ mongod

To install the Alerta server and client run:

$ pip install alerta-server alerta
$ alertad run

To install the web console run:

$ wget https://github.com/alerta/alerta-webui/releases/latest/download/alerta-webui.tar.gz
$ tar zxvf alerta-webui.tar.gz
$ cd dist
$ python3 -m http.server 8000
>> browse to http://localhost:8000

Docker

Alerta and MongoDB can also run using Docker containers, see alerta/docker-alerta.

Configuration

To configure the alertad server, override the default settings in /etc/alertad.conf or using the ALERTA_SVR_CONF_FILE environment variable::

$ ALERTA_SVR_CONF_FILE=~/.alertad.conf
$ echo "DEBUG=True" > $ALERTA_SVR_CONF_FILE

Documentation

More information on configuration and other aspects of alerta can be found at http://docs.alerta.io

Development

To run in development mode, listening on port 5000:

$ export FLASK_APP=alerta FLASK_ENV=development
$ pip install -e .
$ flask run

To run in development mode, listening on port 8080, using Postgres and reporting errors to Sentry:

$ export FLASK_APP=alerta FLASK_ENV=development
$ export DATABASE_URL=postgres://localhost:5432/alerta5
$ export SENTRY_DSN=https://8b56098250544fb78b9578d8af2a7e13:[email protected]/153768
$ pip install -e .[postgres]
$ flask run --debugger --port 8080 --with-threads --reload

Troubleshooting

Enable debug log output by setting DEBUG=True in the API server configuration:

DEBUG=True
LOG_HANDLERS = ['console','file']
LOG_FORMAT = 'verbose'
LOG_FILE = '$HOME/alertad.log'

It can also be helpful to check the web browser developer console for JavaScript logging, network problems and API error responses.

Tests

To run all the tests there must be a local Postgres and MongoDB database running. Then run:

$ TOXENV=ALL make test

To just run the Postgres or MongoDB tests run:

$ TOXENV=postgres make test
$ TOXENV=mongodb make test

To run a single test run something like:

$ TOXENV="mongodb -- tests/test_search.py::QueryParserTestCase::test_boolean_operators" make test
$ TOXENV="postgres -- tests/test_queryparser.py::PostgresQueryTestCase::test_boolean_operators" make testCloud DeploymentAlerta can be deployed to the cloud easily using Herokuhttps://github.com/alerta/heroku-api-alerta,
AWS EC2https://github.com/alerta/alerta-cloudformation, or Google Cloud Platformhttps://github.com/alerta/gcloud-api-alertaLicenseAlerta monitoring system and console
Copyright 2012-2020 Nick Satterly
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
alerta-slack
|
No description available on PyPI.
|
alerta-syslog
|
UNKNOWN
|
alertbook
|
# alertbook

An Ansible-inspired Prometheus rules file compiler

## Installation

The recommended installation method is via `pip`:

```
pip install alertbook
```

For development use, clone this repository and install with the `-e` (development) flag in `pip`:

```
git clone https://github.com/kneitinger/alertbook.git
cd alertbook
pip install -e .
```

## Usage

This program works analogously to how `ansible-playbook` works. A *rulebook* is like an `ansible` playbook, where variables are defined and the desired rules are listed, each with conditions, or `conds`, under which they should be instantiated.

A project layout might look something like:

```
alertbook_proj
├── foo_cluster.yml
└── rules
    ├── DiskFailure
    └── DiskUsageHigh
```

By default, the `alertbook` command looks for rules in the `./rules` directory, and outputs compiled `*.rules` files to the `./out` directory, but these values can be modified with the `--rules-dir` and `--out-dir` command line arguments, respectively.

### Rules

_Note: this tool is currently only compatible with the rule format of Prometheus versions less than 2.0_

The rules files that `alertbook` expects look no different than the recording and alert rules files that Prometheus already uses...in fact, files in that format can be included as is into an `alertbook` rules directory. They can also be augmented with variables (in the form `${foo}`) that can be assigned in a rulebook. For example, the following rule,

```
ALERT DiskUsageOver${threshold}Percent
  IF node_filesystem_avail / node_filesystem_size < (100 - ${threshold}) / 100
  FOR 5m
  LABELS { severity = "${prio}" }
  ANNOTATIONS {
    description = "{{ $labels.instance }} disk usage has over {$threshold}%."
  }
```

parameterizes the disk usage percentage and alert priority with the `${threshold}` and `${prio}` variables, meaning that the same rule can be used in a variety of contexts.

### Rulebooks

A rulebook is a YAML file of the form.

```
---
name: "Text to appear in compiled .rules file header"
vars:
  some_ident: "in scope for all rules unless overwritten"
rules:
  - file: path-relative-to-rules-dir
    conds:
      - some_var_in_rule_file: doop
        another_var_in_rule_file: [8,16]
      - some_var_in_rule_file: floop
        another_var_in_rule_file: 53
```

The `name` component is a purely cosmetic value that populates the header of the output rules file. The `vars` component allows you to define global variables for the rulebook.

The `rules` component is where all of the desired rules are listed, and their variables, if any, are instantiated. Each entry has a `file` option, which specifies the path of the file relative to the default or user-specified rules directory, and optionally, a `conds` list, where any remaining variables are specified.

#### Conditions

The `conds` list can have many items (conditions), and each item may generate one or more rules, depending on whether or not a variable is defined as an array.

When a condition's variables only have single element values, `alertbook` fills the rule's variables in with those values, and adds it to the output text.

When one or more of the condition's variables has an array value, `alertbook` creates a set of values equal to the Cartesian product of the condition's variables and outputs one rule for each set. See the **Example** section for further clarification.

### Command

```
$ alertbook -h
usage: alertbook [-h] [-r DIR] [-o DIR] book [book ...]

positional arguments:
  book

optional arguments:
  -h, --help               show this help message and exit
  -r DIR, --rules-dir DIR  base directory of rules (default: './rules')
  -o DIR, --out-dir DIR    directory for compiled rules (default: './out')
```

### Example

Let's use the following project structure

```
alertbook_proj
├── foo_cluster.yml
└── rules
    ├── DiskFailure
    └── DiskUsageHigh
```

where,

```
$ cat foo_cluster.yml
---
name: Prometheus Alert Rules for foo cluster
rules:
  - file: DiskFailure
    conds:
      - prio: low
        hours: [8,16]
      - prio: high
        hours: 4
  - file: DiskUsageHigh
    conds:
      - threshold: 85
        prio: high
```

```
$ cat rules/DiskFailure
ALERT DiskWillFillIn{%hours}Hours
  IF predict_linear(node_filesystem_free[1h], ${hours}*3600) < 0
  FOR 5m
  LABELS { severity="${prio}" }
```

```
$ cat rules/DiskUsageHigh
ALERT DiskUsageOver${threshold}Percent
  IF node_filesystem_avail / node_filesystem_size < (100 - ${threshold}) / 100
  FOR 5m
  LABELS { severity = "${prio}" }
  ANNOTATIONS {
    description = "{{ $labels.instance }} disk usage has over {$threshold}%."
  }
```

if we examine the `rules` section of the `rulebook` we can see that we're using 2 rules.

The 2nd rule, `DiskUsageHigh`, is fairly straightforward, we are just populating the values of the variables in the rule file, one for disk percentage, and one for alert priority.

The 1st rule however has a bit more going on:

```
- file: DiskFailure
  conds:
    - prio: low
      hours: [8,16]
    - prio: high
      hours: 4
```

Its second condition is just like the `DiskUsageHigh` rule's form, but the first condition has an array. Again, when `alertbook` encounters an array in one or more of a condition's variables, it constructs the Cartesian product of them and essentially generates one condition for each. With that in mind, we could interpret

```
conds:
  - foo: [bar, baz]
    floop: [doop, boop]
```

as being equivalent to

```
conds:
  - foo: bar
    floop: doop
  - foo: bar
    floop: boop
  - foo: baz
    floop: doop
  - foo: baz
    floop: boop
```

When we run

```
alertbook foo_cluster
```

the following file is generated and output to `foo_cluster.rules` in the `./out` directory

```
##########################################
# Prometheus Alert Rules for foo cluster #
##########################################

## DiskFailure

ALERT DiskWillFillIn{%hours}Hours
  IF predict_linear(node_filesystem_free[1h], 8*3600) < 0
  FOR 5m
  LABELS { severity="low" }

ALERT DiskWillFillIn{%hours}Hours
  IF predict_linear(node_filesystem_free[1h], 16*3600) < 0
  FOR 5m
  LABELS { severity="low" }

ALERT DiskWillFillIn{%hours}Hours
  IF predict_linear(node_filesystem_free[1h], 4*3600) < 0
  FOR 5m
  LABELS { severity="high" }

## DiskUsageHigh

ALERT DiskUsageOver70Percent
  IF node_filesystem_avail / node_filesystem_size < (100 - 70) / 100
  FOR 5m
  LABELS { severity = "low" }
  ANNOTATIONS {
    description = "{{ $labels.instance }} disk usage has over {$threshold}%."
  }

ALERT DiskUsageOver85Percent
  IF node_filesystem_avail / node_filesystem_size < (100 - 85) / 100
  FOR 5m
  LABELS { severity = "high" }
  ANNOTATIONS {
    description = "{{ $labels.instance }} disk usage has over {$threshold}%."
  }
```
|
alert-close
|
Welcome to Squish Alert Close

Installation

Run setup.py

Requirements:
- opencv-python
- pyautogui
- comtypes

Licensing

See the LICENSE file for details.

About Squish Alert Close

This module closes the error window that occurs due to server squash memory leaks (Windows 8 and higher).
|
alert-exporter
|
Alert Exporter

Installation

Use the package manager pip to install alert-exporter.

pip install alert-exporter

Usage

❯ alert-exporter --help

Extract alerts configured in different sources (eg: Prometheus Rules, CloudWatch Alarms, Pingdom)

optional arguments:
  -h, --help            show this help message and exit
  -v, --version         show program's version number and exit
  --log-level {DEBUG,INFO,WARNING,ERROR}
  -o OUTPUT_FILE, --output-file OUTPUT_FILE
  --jinja-template [JINJA_TEMPLATE]
  -f {markdown,yaml,html}, --format {markdown,yaml,html}
  --prometheus
  --prometheus-filters PROMETHEUS_FILTERS
  --context [CONTEXT]
  --cloudwatch
  --aws-profile AWS_PROFILE
  --aws-region AWS_REGION
                        Specific region to target. Default: Iterate over all regions available.
  --pingdom
  --pingdom-api-key PINGDOM_API_KEY
  --pingdom-tags PINGDOM_TAGS
                        Comma separated list of tags. Eg: tag1,tag2

Multiple sources are available; one or many can be selected.

Kubernetes / Prometheus

The current context is used unless you provide the --context flag.

alert-exporter -o minikube.html --prometheus --context minikube

You can filter Prometheus rules to match specific labels using the --prometheus-filters flag.

alert-exporter -o minikube.html --prometheus --context minikube --prometheus-filters '{"severity": "critical"}'

AWS Cloudwatch

All available regions are parsed unless you provide the --aws-region flag. You need to be authenticated before using this tool.

alert-exporter -o aws.html --cloudwatch --aws-region eu-west-1 --aws-profile profile

Pingdom

An API key with read-only permission is required to fetch the checks. The key can be provided in the PINGDOM_API_KEY environment variable.

alert-exporter -o pingdom.html --pingdom --pingdom-tags example-tag

Multiple sources at once

alert-exporter -o combined.html --prometheus --cloudwatch --aws-region eu-west-1

Formats

Predefined formats are provided with this tool:
- HTML
- Markdown
- YAML

You can use a custom format by providing a Jinja2 file with the --jinja-template flag.

HTML output example

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
|
alert-from-script
|
alert_from_script

Generate SNS alerts from any command.

Installation

$ sudo python setup.py install

Usage

usage: alert-from-script [-h] sns_topic sns_subject script
Runs a script and sends the stdout and stderr to SNS in case exit != 0.
positional arguments:
sns_topic ARN of SNS topic to publish to.
sns_subject Subject used for SNS messages.
script The script to run. Includes all args to run.
optional arguments:
-h, --help show this help message and exit
|
alert-grid
|
UNKNOWN
|
alertify
|
Alertify

Uptimedog Alerting Integrations

Note

This project is created and maintained for Uptimedog. See https://github.com/Uptimedog
|
alerting
|
Alerting

Easy to use alerting library for Python 3+.

Tested with: Python 3.6-3.10

Use the following command to install using pip:

pip install alerting

Sample

from alerting import Alerting
from alerting.clients import (
    AlertingMailGunClient,
    AlertingSendGridClient,
    AlertingSlackClient,
    AlertingTelegramClient,
)

my_alerts = Alerting(clients=[
    AlertingSendGridClient(sendgrid_api_key, from_email),
    AlertingMailGunClient(your_mailgun_api_key, your_domain, from_email, target_email),
    AlertingSlackClient(your_bot_user_oauth, target_channel),
    AlertingTelegramClient(bot_token, chat_id),
])

try:
    # something
    ...
except Exception as ex:
    my_alerts.send_alert(title='some bad error happened', message=str(ex))
|
alertlib
|
No description available on PyPI.
|
alertlogic
|
Python interface to Alert Logic.
|
alertlogic-cli
|
No description available on PyPI.
|
alertlogic-sdk-definitions
|
Alert Logic APIs definitions

This repository contains static definitions of Alert Logic APIs, used for documentation generation, the SDK and the CLI.

Usage

Install

pip install alertlogic-sdk-definitions

For those who don't require Python code, GitHub releases are produced containing an archive with the OpenAPI definitions only, see here.

Test

python -m unittest

Use

List available service definitions:

>>> import alsdkdefs
>>> alsdkdefs.list_services()
OrderedDict([('aecontent', ServiceDefinition(aecontent)), ('aefr', ServiceDefinition(aefr)), ('aepublish', ServiceDefinition(aepublish)), ('aerta', ServiceDefinition(aerta)), ('aetag', ServiceDefinition(aetag)), ('aetuner', ServiceDefinition(aetuner)), ('aims', ServiceDefinition(aims)), ('assets_query', ServiceDefinition(assets_query)), ('assets_write', ServiceDefinition(assets_write)), ('connectors', ServiceDefinition(connectors)), ('credentials', ServiceDefinition(credentials)), ('deployments', ServiceDefinition(deployments)), ('herald', ServiceDefinition(herald)), ('ingest', ServiceDefinition(ingest)), ('iris', ServiceDefinition(iris)), ('kalm', ServiceDefinition(kalm)), ('notify', ServiceDefinition(notify)), ('otis', ServiceDefinition(otis)), ('policies', ServiceDefinition(policies)), ('remediations', ServiceDefinition(remediations)), ('responder', ServiceDefinition(responder)), ('search', ServiceDefinition(search)), ('subscriptions', ServiceDefinition(subscriptions)), ('themis', ServiceDefinition(themis))])

Get the paths to a service's definition files:

>>> import alsdkdefs
>>> alsdkdefs.get_service_defs("aerta")
['/usr/local/lib/python3.8/site-packages/alsdkdefs/apis/aerta/aerta.v1.yaml']

Get the normalised service spec of a service (all refs resolved, path parameters moved to the methods, allOfs merged if possible):

>>> import alsdkdefs
>>> alsdkdefs.load_service_spec("aerta")

Validate a service spec:

>>> import alsdkdefs
>>> service_spec = alsdkdefs.load_service_spec("aerta")
>>> alsdkdefs.validate(service_spec)
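Combining the calls shown above, a small sketch that validates every bundled definition in one pass; only functions demonstrated in this description are used:

```python
import alsdkdefs

# Iterate over all bundled service definitions and validate each one;
# list_services(), load_service_spec() and validate() are the calls
# demonstrated above (list_services() yields service names as keys)
for name in alsdkdefs.list_services():
    spec = alsdkdefs.load_service_spec(name)
    alsdkdefs.validate(spec)
    print(f"{name}: OK")
```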
Quick validation of a definition

While a YAML definition is developed apart from the current package and the current repo, it is required to validate it prior to push; please add this to your Makefile in order to achieve quick validation:

curl -s https://raw.githubusercontent.com/alertlogic/alertlogic-sdk-definitions/master/scripts/validate_my_definition.sh | bash -s <path/to/definitions/directory>

If no directory is specified, the doc/openapi/ directory will be used by default; if such behaviour is desired, use the following line instead:

curl -s https://raw.githubusercontent.com/alertlogic/alertlogic-sdk-definitions/master/scripts/validate_my_definition.sh | bash

It is recommended to invoke it via curl, since validation of the definitions might be extended over time. The script requires python3 to be available on the system.

Validation checks:
- YAML of a definition is valid
- Definition passes OpenAPI 3 schema validation

Development

Please submit a PR. Please note that API definitions are updated automatically and any changes to them will be overwritten; see: automatic update process
|
alertlogic-sdk-python
|
The Alert Logic SDK For Python (almdrlib)

The Alert Logic Software Development Kit for Python allows developers to integrate with Alert Logic MDR Services.

Quick Start

Install the library:

pip install alertlogic-sdk-python

Set up a configuration file (in e.g. ~/.alertlogic/config):

[default]
access_key_id = YOUR_KEY
secret_key = YOUR_SECRET

To create and manage access keys, use the Alert Logic Console. For information on creating an access key, see https://docs.alertlogic.com/prepare/access-key-management.htm

Optionally you can specify whether you are working with the integration deployment of Alert Logic MDR Services or production by specifying:

global_endpoint=integration
global_endpoint=production

NOTE: If global_endpoint isn't present, the SDK defaults to production.

Test installation
Launch a Python interpreter and then type:

import almdrlib
aims = almdrlib.client("aims")
res = aims.get_account_details()
print(f"{res.json()}")DevelopmentGetting StartedPrerequisites:Python v3.7or newervirtualenvorvirtualenvwrapper(We recommendvirtualenvwrapperhttps://virtualenvwrapper.readthedocs.io/en/latest/)To produce RESTful APIs documentation installredoc-cliandnpx:npm install --save redoc-cli
Development

Getting Started

Prerequisites:
- Python v3.7 or newer
- virtualenv or virtualenvwrapper (we recommend virtualenvwrapper, https://virtualenvwrapper.readthedocs.io/en/latest/)

To produce RESTful APIs documentation, install redoc-cli and npx:

npm install --save redoc-cli
npm install --save npx

Setup your development environment and install required dependencies:

export WORKON_HOME=~/environments
mkdir -p $WORKON_HOME
source /usr/local/bin/virtualenvwrapper.sh
mkvirtualenv alsdk

git clone https://github.com/alertlogic/alertlogic-sdk-python
cd alertlogic-sdk-python
pip install -r requirements_dev.txt
pip install -e .

Using local services

Setup a local profile:

[aesolo]
access_key_id=skip
secret_key=skip
global_endpoint=map
endpoint_map_file=aesolo.json

Write an endpoint map (here, ~/.alertlogic/aesolo.json; endpoint_map_file can also be an absolute path):

{
"aecontent" : "http://127.0.0.1:8810",
"aefr" : "http://127.0.0.1:8808",
"aepublish" : "http://127.0.0.1:8811",
"aerta" : "http://127.0.0.1:8809",
"aetag" : "http://127.0.0.1:8812",
"aetuner": "http://127.0.0.1:3000",
"ingest" : "http://127.0.0.1:9000"
}

Alternatively, the global_endpoint configuration option or the ALERTLOGIC_ENDPOINT environment variable might be set to a URL value:

[aesolo]
access_key_id=skip
secret_key=skip
global_endpoint=http://api.aesolo.com
...
global_endpoint=http://api.aesolo.com:3001

export ALERTLOGIC_ENDPOINT="http://api.aesolo.com"
...
export ALERTLOGIC_ENDPOINT="http://api.aesolo.com:3001"Historyv1.0.74Wed, 8 Mar 2023 08:54:55 -0600 - Merge pull request #123 from EvanBrown96/fix-service-name-bug-environment-creationTue, 7 Mar 2023 11:05:42 -0600 - Merge pull request #122 from alertlogic/update_workflowMon, 6 Mar 2023 13:16:59 -0500 - fix bug where the service name was not being used correctly when querying dynamodb/ssm for credentialsFri, 3 Mar 2023 11:20:19 -0600 - Update workflow to stop using depricated node versionsv1.0.73Thu, 2 Mar 2023 09:09:13 -0600 - Merge pull request #121 from EvanBrown96/add-ssm-keys-check-2Tue, 28 Feb 2023 13:58:20 -0500 - use raise fromTue, 28 Feb 2023 13:52:05 -0500 - documentation clarityTue, 28 Feb 2023 13:50:37 -0500 - re-add both-source functionalityTue, 28 Feb 2023 10:34:25 -0500 - fix a couple bugs and update testsTue, 28 Feb 2023 10:09:12 -0500 - switch to just one application in AlEnv creationTue, 28 Feb 2023 09:55:27 -0500 - update to fix bugs/better implementationv1.0.72Thu, 23 Feb 2023 16:11:36 -0600 - Merge pull request #114 from alertlogic/fix_deprication_warningThu, 10 Nov 2022 10:26:22 -0600 - Update best practices when calling pip in the MakefileTue, 8 Nov 2022 13:10:36 -0600 - Remove unsupported python 3.6 from toxTue, 8 Nov 2022 10:48:55 -0600 - Move Session param to allowed_methods as previous version was depricatedTue, 8 Nov 2022 10:48:10 -0600 - use _ version of description_fileTue, 8 Nov 2022 10:47:33 -0600 - Stop invoking setup.py directlyThu, 23 Feb 2023 15:45:06 -0600 - Merge pull request #120 from EvanBrown96/add-ssm-keys-checkWed, 22 Feb 2023 14:38:31 -0500 - add tests for new ssm functionalityWed, 22 Feb 2023 11:52:39 -0500 - add some documentation about SSM usageWed, 22 Feb 2023 11:41:37 -0500 - when creating an AL MDR config, check SSM for AIMS access key and secretWed, 22 Feb 2023 09:07:38 -0600 - Merge pull request #117 from alertlogic/dependabot/pip/wheel-0.38.1Tue, 21 Feb 2023 15:09:47 -0600 - Merge pull request #119 from msayler/timeoutsTue, 21 Feb 2023 13:35:07 -0600 - Add request timeout, default 300sMon, 26 Dec 2022 20:53:58 +0000 - Bump wheel from 0.33.1 to 0.38.1v1.0.71Wed, 9 Nov 2022 09:17:41 -0600 - Merge pull request #115 from alertlogic/update_worflowTue, 8 Nov 2022 13:16:09 -0600 - Update github workflow to latest versions of python and depsThu, 20 Oct 2022 13:50:02 -0500 - Merge pull request #113 from msayler/cache_endpoints_lookupMon, 3 Oct 2022 14:25:24 +0200 - Merge pull request #109 from pavel-puchkin/patch-1Mon, 3 Oct 2022 09:35:06 +0200 - Cache service/account endpoints lookupFri, 24 Dec 2021 14:23:46 +0200 - Authenticate dynamically if token is not setv1.0.70Tue, 26 Jul 2022 15:55:09 -0500 - Merge pull request #112 from alertlogic/alcom_jsonTue, 26 Jul 2022 14:53:35 +0300 - fix alertlogic.com/json serializationv1.0.67Mon, 31 Jan 2022 09:10:58 -0600 - Merge pull request #111 from zdaniel86/fmultiThu, 27 Jan 2022 23:44:47 +0000 - fix multiple content type issue for requestBodyv1.0.66Thu, 27 Jan 2022 10:54:07 +0000 - Merge pull request #110 from ivanu-at-AL/m2r2Mon, 24 Jan 2022 16:31:35 +0000 - initv1.0.65Wed, 1 Dec 2021 15:49:46 +0100 - Use AlmdrlibValueError instead of Exception (#108)v1.0.64Fri, 26 Nov 2021 13:48:25 +0100 - Make opeanapi schema validation errors human readable (#107)Fri, 19 Nov 2021 08:54:51 +0000 - Bump pip from 19.3.1 to 21.1 (#106)Tue, 16 Nov 2021 07:58:01 -0600 - Merge pull request #105 from MikeBenza/format-validation-error-betterFri, 12 Nov 2021 15:25:56 -0600 - Only emit schema in debug modeFri, 12 Nov 2021 
15:20:51 -0600 - Merge branch 'master' ofhttps://github.com/alertlogic/alertlogic-sdk-pythoninto format-validation-error-betterFri, 12 Nov 2021 15:20:31 -0600 - Merge pull request #104 from MikeBenza/good-docFri, 12 Nov 2021 08:41:35 -0600 - Format validation errors betterWed, 10 Nov 2021 13:49:42 -0600 - Fix tests to work in a bare environmentWed, 10 Nov 2021 13:28:24 -0600 - Restore sorted parametersWed, 10 Nov 2021 13:25:24 -0600 - Add tests, use 'or'+empty instead of kwargsWed, 10 Nov 2021 11:25:06 -0600 - Handle the content_type param being presentTue, 9 Nov 2021 22:56:12 -0600 - Flake8 client.pyTue, 9 Nov 2021 22:43:32 -0600 - Lazily construct doc and signatureTue, 9 Nov 2021 16:04:58 -0600 - Add type annotations for simple casesTue, 9 Nov 2021 15:51:45 -0600 - Consolidate default content type, add body parameter to signatureTue, 9 Nov 2021 14:23:53 -0600 - Make useful documentation for generated functionsv1.0.63Wed, 27 Oct 2021 08:01:43 -0500 - Support byte RequestBodySimpleParameter (#101)v1.0.62Mon, 25 Oct 2021 12:54:52 +0100 - pyyaml: pin to 5.4.1 (#102)v1.0.61Mon, 28 Jun 2021 17:02:43 +0100 - Do full clone for the pypi release (#100)v1.0.60Mon, 28 Jun 2021 16:57:05 +0100 - Adjust formatting for the release history (#99)v1.0.59Mon, 28 Jun 2021 16:45:42 +0100 - Add automatic rel notes (#98)v1.0.58Mon, 28 Jun 2021 14:18:35 +0100 - For each operation call try to resolve proper service endpoint if account_id is present in the args (#97)v1.0.57Tue, 20 Apr 2021 14:10:27 +0100 - Update setup.pyTue, 20 Apr 2021 14:09:57 +0100 - Update requirements.txtv1.0.54Tue, 30 Mar 2021 17:17:46 +0100 - Bump pyyaml from 5.1.2 to 5.4 (#96)v1.0.53Thu, 25 Mar 2021 06:52:07 -0700 - Add operations to dir(...) result on clients (#94)Wed, 24 Mar 2021 06:55:02 -0700 - Merge pull request #95 from MikeBenza/dont-blap-moduletypeSun, 21 Mar 2021 21:59:27 -0700 - Don't overwrite types.ModuleTypev1.0.52Thu, 18 Mar 2021 12:47:40 +0000 - Support raw endpoint url (#93)Thu, 28 Jan 2021 19:36:24 +0000 - Rename .travis.yml to .travis.yml.defunctv1.0.51Wed, 27 Jan 2021 20:10:03 +0000 - Install newver setuptools on buildWed, 27 Jan 2021 17:03:31 +0000 - PyPi act on tagsv1.0.50Wed, 27 Jan 2021 17:00:38 +0000 - AlEnv support for the mdr lib (#89)Wed, 27 Jan 2021 16:54:03 +0000 - Bump definitions version (#90)Wed, 27 Jan 2021 15:49:41 +0000 - Add test and deploy workflowsv1.0.49Sat, 24 Oct 2020 07:52:04 -0500 - Added user_id property to the session object (#87)v1.0.48Mon, 5 Oct 2020 08:50:45 -0500 - Don't duplicate logger hander (#86)v1.0.47Fri, 2 Oct 2020 11:50:11 +0100 - bump sdk definitions to v0.0.47 (#85)Fri, 2 Oct 2020 05:37:20 -0500 - Don't log AIMS tokens (#84)v1.0.46Tue, 29 Sep 2020 18:01:05 +0100 - bump alertlogic sdk defintions version (#83)v1.0.45Tue, 29 Sep 2020 14:59:05 +0100 - bump definitions dependency (#82)v1.0.44Thu, 17 Sep 2020 11:22:03 +0100 - bump definitions dep (#81)v1.0.43Thu, 13 Aug 2020 16:41:41 +0100 - Revert "support python < 3.6 (#78)" (#79)v1.0.42Wed, 12 Aug 2020 15:15:49 +0100 - support python < 3.6 (#78)v1.0.41Mon, 10 Aug 2020 16:15:13 -0500 - Updated to indicate python 3.6 support (#77)Mon, 10 Aug 2020 11:05:21 +0100 - Add docs test (#76)v1.0.40Mon, 10 Aug 2020 10:10:44 +0100 - Update README.mdMon, 10 Aug 2020 10:07:48 +0100 - add docs badge (#75)Mon, 10 Aug 2020 10:05:06 +0100 - Support documentation case when yaml converted from json has :{} empty object (#74)v1.0.39Fri, 7 Aug 2020 16:00:07 +0100 - Serialize boolean parameters to lowercase (#73)Fri, 7 Aug 2020 11:54:54 +0100 - Initialise 
_endpoints_map in the client since it is requested by default session (#72)Fri, 31 Jul 2020 08:08:51 -0500 - Local services (#61)v1.0.38Sun, 26 Jul 2020 16:23:15 +0100 - bump definitions dependency to 0.0.31 (#60)v1.0.37Fri, 24 Jul 2020 19:23:37 +0100 - bump definitions dependency to v0.0.30 (#59)v1.0.36Fri, 24 Jul 2020 14:20:27 +0100 - bump sdk definitions to 0.0.28 (#58)Fri, 24 Jul 2020 14:17:02 +0100 - Move parsing, loading and normalisation logic for the definitions to the alsdkdefs package (#57)v1.0.35Fri, 17 Jul 2020 19:30:23 +0100 - bump definitions v0.0.23 (#56)v1.0.34Thu, 16 Jul 2020 12:53:39 -0500 - Residency initialization and session initialization logging. (#54)v1.0.33Wed, 15 Jul 2020 11:36:13 -0500 - Merge pull request #53 from alertlogic/init_residency_fixWed, 15 Jul 2020 10:47:04 -0500 - Fixed to correctly initialize default residencyThu, 9 Jul 2020 15:46:35 +0100 - Change readme typo (#51)v1.0.32Wed, 15 Jul 2020 06:39:11 -0500 - Support further query parameter serialization (#52)v1.0.31Fri, 3 Jul 2020 14:09:38 -0500 - Merge pull request #50 from alertlogic/session_global_endpoint_fixFri, 3 Jul 2020 14:07:04 -0500 - Added missing global_endpoint parameter to the session initializationThu, 2 Jul 2020 22:37:02 +0100 - Doc generation requires install first (#49)v1.0.30Thu, 2 Jul 2020 16:55:55 +0100 - Add CR into readme (#48)v1.0.29Mon, 29 Jun 2020 20:49:20 +0100 - bump default definitions to 0.0.13 (#47)v1.0.28Thu, 18 Jun 2020 13:08:44 +0100 - Move definitions to definitions package (#45)Thu, 11 Jun 2020 18:48:46 -0500 - Fixed typov1.0.27Wed, 10 Jun 2020 10:58:35 +0100 - add token for pypi (#43)Wed, 10 Jun 2020 10:30:30 +0100 - add skip cleanup to allow releases (#42)Wed, 10 Jun 2020 04:08:26 -0500 - Request body object serialize fix (#41)Thu, 4 Jun 2020 13:08:55 -0300 - Update AIMS OpenAPI documentation (#39)Fri, 22 May 2020 14:33:41 -0500 - Merge pull request #38 from alertlogic/aertaFri, 22 May 2020 14:29:31 -0500 - Added initial version of aerta API specFri, 22 May 2020 10:10:37 -0500 - Merge branch 'master' of github.com:alertlogic/alertlogic-sdk-pythonFri, 22 May 2020 10:09:50 -0500 - Bumped version number to indicate inclusion of IRIS APIFri, 22 May 2020 10:07:17 -0500 - Merge pull request #36 from FinlayShepherd/irisThu, 21 May 2020 20:28:11 -0500 - Merge pull request #37 from alertlogic/configure_supportThu, 21 May 2020 20:24:36 -0500 - Added support for configure operation. 
Use AIMS token when resolving endpointsThu, 21 May 2020 17:32:04 +0100 - Improve IRIS example responsesThu, 21 May 2020 14:38:47 +0100 - Add IRIS docsMon, 18 May 2020 07:02:48 -0700 - Merge pull request #32 from alertlogic/windows_installerSat, 16 May 2020 20:57:54 -0500 - Ensure to read specs using utf-8 encoding to support running on windowsSun, 10 May 2020 14:04:53 -0500 - Merge pull request #31 from alertlogic/ingest_schema_fixSun, 10 May 2020 13:39:49 -0500 - Fixed ingest service schema to pass 'anyOf' validation for send_data operation'Sat, 9 May 2020 16:54:04 -0500 - Merge pull request #30 from alertlogic/request_body_param_fixSat, 9 May 2020 16:52:09 -0500 - Fixed to not use 'required' for object parameters as it breaks jsonschema validationFri, 8 May 2020 17:50:20 -0500 - Merge pull request #29 from alertlogic/windows_supportFri, 8 May 2020 17:46:08 -0500 - Updated to work on windows plus other minor fixesThu, 7 May 2020 11:10:22 -0700 - Merge pull request #28 from mcnielsen/masterThu, 7 May 2020 10:34:49 -0700 - Added a package.json to allow the repository to be consumed by NPM.Tue, 5 May 2020 08:33:03 -0500 - Handle a case of m2r not being installedTue, 5 May 2020 08:27:01 -0500 - Merge pull request #27 from alertlogic/documentationMon, 4 May 2020 18:14:21 -0500 - Changed to use newer version of sphinxMon, 4 May 2020 18:09:07 -0500 - Changed to use newer version of sphinxMon, 4 May 2020 18:03:37 -0500 - Added support for indirect types and other documentation improvementsThu, 23 Apr 2020 14:58:44 -0500 - Pinned to the supported version of m2rThu, 23 Apr 2020 14:46:10 -0500 - Pinned to the supported version of m2rThu, 23 Apr 2020 11:17:08 -0500 - Merge pull request #26 from alertlogic/incident_handling_supportThu, 23 Apr 2020 11:14:15 -0500 - Updated to the latest version of aetuner that includes incident handling settings supportMon, 20 Apr 2020 17:49:50 -0500 - Merge pull request #25 from alertlogic/config_init_fixMon, 20 Apr 2020 17:47:34 -0500 - Fixed to correctly intialize session configurationWed, 15 Apr 2020 14:15:31 -0500 - Merge pull request #24 from alertlogic/ingest-send-data-improvementWed, 15 Apr 2020 14:12:53 -0500 - Support binary format for simple parametersTue, 14 Apr 2020 16:07:07 -0500 - Merge pull request #23 from alertlogic/ingest-send-data-improvementTue, 14 Apr 2020 16:04:42 -0500 - Increased version numberTue, 14 Apr 2020 16:03:33 -0500 - Added automatic retries for POSTTue, 14 Apr 2020 16:02:00 -0500 - Updated to support publishing syslog data to ingestSun, 12 Apr 2020 14:16:04 -0500 - Merge pull request #22 from alertlogic/aetuner_releaseSun, 12 Apr 2020 14:05:34 -0500 - Updated to support new aetuner specSat, 11 Apr 2020 15:40:56 -0500 - Merge pull request #21 from msayler/doc-fixesThu, 2 Apr 2020 15:11:48 -0500 - Improve docs for first-useFri, 3 Apr 2020 16:31:16 -0500 - Fixed aetuner paths and bumped up sdk versionFri, 3 Apr 2020 15:59:22 -0500 - Erronously bumped the version numberFri, 3 Apr 2020 15:41:59 -0500 - Fixed to actually have proper urlsFri, 3 Apr 2020 15:29:12 -0500 - Added support for aetuner endpoints.Fri, 3 Apr 2020 09:38:49 -0500 - Updated to use endpointsWed, 1 Apr 2020 14:03:48 -0500 - Merge pull request #20 from alertlogic/response_supportWed, 1 Apr 2020 13:56:42 -0500 - Updated ingest api to have response informationWed, 1 Apr 2020 13:53:40 -0500 - Removed the need for pydocTue, 24 Mar 2020 17:11:54 -0500 - Added required module to sphinx doc generationTue, 24 Mar 2020 17:04:19 -0500 - Added initial support for response objectsSun, 
15 Mar 2020 16:16:38 -0500 - Merge pull request #19 from alertlogic/aims_token_fixSun, 15 Mar 2020 16:14:03 -0500 - Fixed handling object based request bodiesSat, 14 Mar 2020 19:54:13 -0500 - Fixed to work with 'dictionary-like' payloadBodySat, 14 Mar 2020 19:53:20 -0500 - fixed lint error'Sat, 14 Mar 2020 11:12:26 -0500 - Updated version numberSat, 14 Mar 2020 11:06:18 -0500 - Updated to allow aims token based sessionsFri, 13 Mar 2020 10:32:38 -0500 - Merge pull request #18 from alertlogic/ingestFri, 13 Mar 2020 10:28:03 -0500 - Added jsonschema to the list of requirementsFri, 13 Mar 2020 10:00:41 -0500 - Updated testsFri, 13 Mar 2020 10:00:21 -0500 - Added examplesFri, 13 Mar 2020 09:59:51 -0500 - Added new required for jsonschema validationFri, 13 Mar 2020 09:59:28 -0500 - Added new required for jsonschema validationFri, 13 Mar 2020 09:53:30 -0500 - Fixed lint errorsFri, 13 Mar 2020 09:53:08 -0500 - Added support for indirect types: oneOf, anyOf, allOfFri, 13 Mar 2020 09:50:06 -0500 - Added logging.Fri, 13 Mar 2020 09:47:19 -0500 - Bumping up versionFri, 13 Mar 2020 09:46:58 -0500 - Enabled logging and put back AL imageFri, 6 Mar 2020 14:33:50 -0600 - Merge pull request #17 from alertlogic/docsFri, 6 Mar 2020 14:31:15 -0600 - Bump version numberFri, 6 Mar 2020 14:29:41 -0600 - Sort operations and parametersFri, 6 Mar 2020 09:41:25 -0600 - Updated readthedocs config to install sdk via setuptoolsFri, 6 Mar 2020 08:34:49 -0600 - Removed incorrect reference to AlertLogic_Logo_White.pngThu, 5 Mar 2020 18:20:13 -0600 - Added response syntaxWed, 4 Mar 2020 12:53:13 -0600 - Merged with redoc PRWed, 4 Mar 2020 12:43:15 -0600 - Merge branch 'master' of github.com:alertlogic/alertlogic-sdk-python into docsWed, 4 Mar 2020 12:43:05 -0600 - Added support for dict params and enumsWed, 4 Mar 2020 12:42:12 -0600 - Merge pull request #16 from mcnielsen/docsTue, 3 Mar 2020 19:55:17 -0800 - Merge branch 'docs' of algithub.pd.alertlogic.net:knielsen/alertlogic-sdk-python into docsTue, 3 Mar 2020 19:47:27 -0800 - fix font familyTue, 3 Mar 2020 19:45:51 -0800 - Merge branch 'docs' of algithub.pd.alertlogic.net:knielsen/alertlogic-sdk-python into docsTue, 3 Mar 2020 19:45:36 -0800 - A few finishing touchesTue, 3 Mar 2020 19:45:12 -0800 - fix font familyTue, 3 Mar 2020 19:42:05 -0800 - fix font familyTue, 3 Mar 2020 19:30:22 -0800 - styles for the skeery UkrainianTue, 3 Mar 2020 17:50:11 -0800 - Fixed some relative paths and added the class to the API selectorTue, 3 Mar 2020 17:27:42 -0800 - Merge branch 'docs' of github.com:alertlogic/alertlogic-sdk-python into docsTue, 3 Mar 2020 15:30:49 -0600 - Fixed to actually build for each serviceTue, 3 Mar 2020 13:21:01 -0800 - Added templateTue, 3 Mar 2020 11:39:27 -0800 - Merge branch 'docs' of github.com:alertlogic/alertlogic-sdk-python into docsTue, 3 Mar 2020 13:32:34 -0600 - Added redoc supportMon, 2 Mar 2020 14:29:50 -0600 - Merge pull request #15 from alertlogic/docsMon, 2 Mar 2020 14:24:28 -0600 - Removing “Edit on …” Buttons from DocumentationMon, 2 Mar 2020 14:16:39 -0600 - Merge pull request #14 from alertlogic/docsMon, 2 Mar 2020 13:22:08 -0600 - Commented out formats settingMon, 2 Mar 2020 11:51:13 -0600 - Merge pull request #13 from alertlogic/docsMon, 2 Mar 2020 11:49:19 -0600 - Added configuration files for readthedocs.ioMon, 2 Mar 2020 10:56:13 -0600 - Merge pull request #12 from alertlogic/docsMon, 2 Mar 2020 10:53:17 -0600 - Added missing docs generating module. 
Fixed lint errorsMon, 2 Mar 2020 10:49:46 -0600 - Initial support of generating sphinx SDK documentationMon, 24 Feb 2020 16:44:18 -0600 - Merge branch 'master' of github.com:alertlogic/alertlogic-sdk-pythonMon, 24 Feb 2020 16:44:13 -0600 - Merge pull request #11 from alertlogic/bump_versionMon, 24 Feb 2020 16:42:37 -0600 - Another tryMon, 24 Feb 2020 16:34:29 -0600 - Increased version to enable push to pypiMon, 24 Feb 2020 16:32:54 -0600 - Merge pull request #10 from alertlogic/readme_travisMon, 24 Feb 2020 16:31:21 -0600 - Fixed formattingMon, 24 Feb 2020 16:28:40 -0600 - Added pypi and python versionsMon, 24 Feb 2020 16:08:12 -0600 - Merge pull request #9 from alertlogic/readme_travisMon, 24 Feb 2020 16:05:44 -0600 - Added travis build status to readmeMon, 24 Feb 2020 15:57:06 -0600 - Merge pull request #8 from alertlogic/travis_ciMon, 24 Feb 2020 15:54:52 -0600 - Initial version of travis-ci supportMon, 24 Feb 2020 14:31:25 -0600 - Merge pull request #7 from alertlogic/payload_serialize_fixMon, 24 Feb 2020 14:30:26 -0600 - Bumped version numberMon, 24 Feb 2020 14:29:27 -0600 - Fixed to return a dictionary not a tupleMon, 24 Feb 2020 10:56:02 -0600 - Merge pull request #6 from alertlogic/bug_fixesMon, 24 Feb 2020 10:54:12 -0600 - Fixed multiple bugs. Reworked RequestBody to support multiple content-types. Fixed lint violations. Added schema tests. Use explode OpenAPI keyword to indicate that an object properties are to be serialized. Introduce v1 of exceptionsFri, 7 Feb 2020 15:56:50 -0600 - Merge pull request #4 from alertlogic/search_v2Fri, 7 Feb 2020 15:55:45 -0600 - Search API (beta) supportThu, 6 Feb 2020 15:25:03 -0600 - Updated to use new version of aetuner implementationThu, 6 Feb 2020 14:13:26 -0600 - Merge pull request #3 from alertlogic/publish_alphaThu, 6 Feb 2020 14:12:44 -0600 - Publish alpha to pypiWed, 5 Feb 2020 15:54:20 -0600 - Merge pull request #2 from alertlogic/config_supportWed, 5 Feb 2020 15:50:46 -0600 - Bumped version numberWed, 5 Feb 2020 15:48:56 -0600 - Added support for alcli to pass global configuration parametersTue, 4 Feb 2020 08:34:14 -0600 - Fixed to use SafeLoader for yamlTue, 28 Jan 2020 13:47:40 -0600 - Merge pull request #1 from alertlogic/bug_fixesTue, 28 Jan 2020 13:46:39 -0600 - Reworked to support lazy authenticationTue, 28 Jan 2020 13:45:00 -0600 - Fixed to use proper package nameTue, 28 Jan 2020 13:44:24 -0600 - Fixed to use proper package nameTue, 28 Jan 2020 13:44:04 -0600 - Initial version of README.mdSun, 26 Jan 2020 17:18:19 -0600 - Initial version OpenAPI based Python SDKWed, 15 Jan 2020 13:08:23 -0600 - Initial commit
|
alertmanager-gchat-integration
|
alertmanager-gchat-integration
Description
The application provides a Webhook integration for Prometheus AlertManager to push alerts to Google Chat rooms.
The application expects a config.toml file like this:

[app.notification]
# Helpful to indicate the origin of the message. Defaults to HOSTNAME.
# origin = "custom-origin"
# Optional Jinja2 custom template to print the message to GChat.
# custom_template_path = "<file>.json.j2"
# Optional: true to send the message as a GChat card
# use_cards = true

[app.room.<room-name>]
notification_url = "https://chat.googleapis.com/v1/spaces/<space-id>/messages?key=<key>&token=<token>&threadKey=<threadId>"

The file may be:
- Located in the current directory and named config.toml.
- Placed in the directory of your choice with the CONFIG_FILE_LOCATION environment variable set.

Also, the application provides a built-in template for GChat notification located here.
If you wish to customize it, create a custom version and use app.notification.custom_template_path.
By default, the message will be sent as a basic message.
If you wish to use cards, set app.notification.use_cards to true.
When the application is started, the following endpoints are available:
- /alerts?room=<room-name>: Endpoint used by AlertManager to send messages to GChat. The room-name should match the value indicated in the config.toml file. HTTP expected methods are: POST.
- /healthz: returns 200 OK if the service is running. HTTP expected methods are: GET.
- /metrics: returns Prometheus metrics regarding HTTP requests. HTTP expected methods are: GET.

Using the python module
$ pip install alertmanager-gchat-integration
$ CONFIG_FILE_LOCATION=config.toml python -m alertmanager_gchat_integrationUsing the containerTo execute the container, you should have a ~/.kube/config with the context pointing to the cluster.
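For a quick smoke test once the service is running, you can post a minimal AlertManager-style payload to the /alerts endpoint from Python. This is only a sketch: the room name my-room and the alert fields are placeholder values, and the payload simply mirrors AlertManager's standard webhook format rather than anything specific to this project.

import requests

# Minimal AlertManager-style webhook payload (placeholder values).
payload = {
    "status": "firing",
    "alerts": [
        {
            "status": "firing",
            "labels": {"alertname": "instance_down", "severity": "critical"},
            "annotations": {"summary": "Instance i-123 is down"},
        }
    ],
}

# "my-room" must match an [app.room.<room-name>] section in config.toml.
response = requests.post(
    "http://localhost:8080/alerts",
    params={"room": "my-room"},
    json=payload,
)
print(response.status_code)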
The user defined in the context should have the appropriate rights in the cluster to manage configmaps.
Starts the service
Run the container as a non-root user:

$ docker run -ti \
    --user 65534:65534 \
    -p 80:8080 \
    -v $(pwd)/config.toml:/app/config.toml \
    julb/alertmanager-gchat-integration:latest

The following environment variables are also available:

| Environment var | Description | Default Value |
| --- | --- | --- |
| PORT | The listening port for the application. | 8080 |
| CONFIG_FILE_LOCATION | The config.toml file path. | /app/config.toml |

Helm chart
A Helm chart is available to install this runtime.
Contributing
This project is totally open source and contributors are welcome.
When you submit a PR, please ensure that the python code is well formatted and linted.

$ make install.dependencies
$ make format
$ make lint
|
alertmanagermeshtastic
|
Alertmanager webhook for meshtastic
This little adapter receives alertmanager webhooks and sends the notifications via a serially attached Meshtastic device to the specified node ID.
Warning
Caution: The tests that are provided for the code in this repository are not currently updated! Also, this is a quickly hacked-together piece of software that has no security built in at the moment. Also, the way the ack is checked is not optimal, so messages tend to take longer to get to the device, but are delivered for sure (maybe multiple times). If you have the skill and the time to contribute in any way, take a look at the Contribution section.
Credits
This is based on the work of https://github.com/homeworkprod/weitersager
Thanks to GUVWAF for the support, and thanks to the whole Meshtastic team for this awesome software!
Alertmanager configuration example
receivers:
- name: 'meshtastic-webhook'
webhook_configs:
- url: http://alertmanager-meshtastic:9119/alert
send_resolved: true

config.toml example
This is an example config that shows all of the config options.

log_level = "debug"
[http]
host = "0.0.0.0"
port = 9119
[meshtastic.connection]
tty = "/tmp/vcom0"
nodeid = 631724152
maxsendingattempts = 30
timeout = 60

docker compose service example - Hardware Serial (default)
To integrate this bridge into your composed prometheus/alertmanager cluster, this is a good starting point.

alertmanagermeshtastic:
image: apfelwurm/alertmanagermeshtastic
ports:
- 9119:9119
devices:
- /dev/ttyACM0
volumes:
- ./alertmanager-meshtastic/config.toml:/app/config.toml
restart: always

docker compose service example - Virtual Serial
To integrate this bridge into your composed prometheus/alertmanager cluster, this is a good starting point.
If you plan to use a virtual serial port that is provided with socat, you have to use the socat connector in this container or run your alertmanagermeshtastic instance on the terminating linux machine, because reconnecting does not work if you mount it either as a volume or as a device.

alertmanagermeshtastic:
image: apfelwurm/alertmanagermeshtastic
ports:
- 9119:9119
environment:
- SOCAT_ENABLE=TRUE
- SOCAT_CONNECTION=tcp:192.168.178.46:5000
volumes:
- ./alertmanager-meshtastic/config.toml:/app/config.toml
restart: always

Note: If you set SOCAT_ENABLE to TRUE, the tty option from [meshtastic.connection] in config.toml will be overwritten with /tmp/vcom0, as that's the virtual serial port.

Running on docker example - Hardware Serial (default)

docker run -d --name alertmanagermeshtastic \
--device=/dev/ttyACM0 \
-v ./alertmanager-meshtastic/config.toml:/app/config.toml \
-p 9119:9119 apfelwurm/alertmanagermeshtastic:latest

Running on docker example - Virtual Serial
If you plan to use a virtual serial port that is provided with socat, you have to use the socat connector in this container or run your alertmanagermeshtastic instance on the terminating linux machine, because reconnecting does not work if you mount it either as a volume or as a device.

docker run -d --name alertmanagermeshtastic \
--env SOCAT_ENABLE=TRUE --env SOCAT_CONNECTION=tcp:192.168.178.46:5000 \
-v ./alertmanager-meshtastic/config.toml:/app/config.toml \
-p 9119:9119 apfelwurm/alertmanagermeshtastic:latest

Note: If you set SOCAT_ENABLE to TRUE, the tty option from [meshtastic.connection] in config.toml will be overwritten with /tmp/vcom0, as that's the virtual serial port.

Contribution
This is currently a minimal implementation that supports only a single node as a receiver. If you need additional features, you are welcome to open an issue or, even better, submit a pull request. You can also take a look at the open issues, where I have opened some for planned features, and work on them if you want. I would appreciate any help.

Example to test
You can use test.sh, test single.sh, or the following curl command to test alertmanager-meshtastic:

curl -XPOST --data '{"status":"resolved","groupLabels":{"alertname":"instance_down"},"commonAnnotations":{"description":"i-0d7188fkl90bac100 of job ec2-sp-node_exporter has been down for more than 2 minutes.","summary":"Instance i-0d7188fkl90bac100 down"},"alerts":[{"status":"resolved","labels":{"name":"olokinho01-prod","instance":"i-0d7188fkl90bac100","job":"ec2-sp-node_exporter","alertname":"instance_down","os":"linux","severity":"page"},"endsAt":"2019-07-01T16:16:19.376244942-03:00","generatorURL":"http://pmts.io:9090","startsAt":"2019-07-01T16:02:19.376245319-03:00","annotations":{"description":"i-0d7188fkl90bac100 of job ec2-sp-node_exporter has been down for more than 2 minutes.","summary":"Instance i-0d7188fkl90bac100 down"}}],"version":"4","receiver":"infra-alert","externalURL":"http://alm.io:9093","commonLabels":{"name":"olokinho01-prod","instance":"i-0d7188fkl90bac100","job":"ec2-sp-node_exporter","alertname":"instance_down","os":"linux","severity":"page"}}' http://alertmanager-meshtastic:9119/alert
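The same test can also be driven from Python. The sketch below simply posts a trimmed version of the payload from the curl example above to the documented /alert endpoint (host and port as configured earlier); it adds nothing beyond that example.

import requests

# Trimmed version of the test payload from the curl example above.
payload = {
    "status": "resolved",
    "alerts": [
        {
            "status": "resolved",
            "labels": {"alertname": "instance_down", "instance": "i-0d7188fkl90bac100"},
            "annotations": {"summary": "Instance i-0d7188fkl90bac100 down"},
        }
    ],
}

response = requests.post("http://alertmanager-meshtastic:9119/alert", json=payload)
print(response.status_code, response.text)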
|
alertme
|
AlertMe is a tool used to alert users when a script of theirs has finished running or has encountered an error. AlertMe will prompt you for your email and will send you an email notification to alert you about your script. Typical usage looks like this:

alertme myscript.py

See http://github.com/ChandranshuRao14/AlertMe for more information on usage.
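Conceptually, a tool like this wraps the script run and emails the outcome. The sketch below only illustrates that idea; it is not AlertMe's actual implementation, and the localhost SMTP server and example.com address are placeholders.

import smtplib
import subprocess
import sys
from email.message import EmailMessage

def run_and_notify(script, recipient):
    # Run the target script and capture its output and exit code.
    result = subprocess.run([sys.executable, script], capture_output=True, text=True)
    # Build a short report and email it to the user.
    msg = EmailMessage()
    msg["Subject"] = f"{script} finished with exit code {result.returncode}"
    msg["From"] = recipient  # placeholder sender
    msg["To"] = recipient
    msg.set_content(result.stdout + result.stderr)
    with smtplib.SMTP("localhost") as smtp:  # placeholder SMTP server
        smtp.send_message(msg)

run_and_notify("myscript.py", "you@example.com")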
|
alert-msgs
|
Easily construct and send formatted emails and Slack alerts.

Install

pip install alert_msgs

If using Slack as an alert destination, you will need to set up a Slack App and get a bot token configured with OAuth permissions.
1. Go to Slack Apps. Click Create New App -> From Scratch (give it a name, select your workspace)
2. Navigate to the OAuth & Permissions on the left sidebar and scroll down to the Bot Token Scopes section. Add chat:write and file:write OAuth scopes.
3. Scroll up to the top of the OAuth & Permissions page and click Install App to Workspace.
4. Copy the Bot User OAuth Token from the OAuth & Permissions page. This is the value that should be used for the bot_token parameter of Slack config.
5. In Slack, go to your channel and click the down arrow next to your channel name at the top. Click Integrations -> Add apps -> select the app you just made.

Usage

send_alert is the high-level/easiest way to send alerts.
Alerts are composed of one or more messages, where each message is composed of one or more components.
Alerts can be sent to one or more Slack and/or Email destinations. See destinations for configuration.

Examples

from alert_msgs import Email, Slack, ContentType, FontSize, Map, Text, Table, send_alert, send_slack_message, send_email
from uuid import uuid4
import random

components = [
    Text(
        "Important things have happened.",
        size=FontSize.LARGE,
        color=ContentType.IMPORTANT,
    ),
    Map({"Field1": "Value1", "Field2": "Value2", "Field3": "Value3"}),
    Table(
        rows=[
            {
                "Process": "thing-1",
                "Status": 0,
                "Finished": True,
            },
            {
                "Process": "thing-2",
                "Status": 1,
                "Finished": False,
            },
        ],
        caption="Process Status",
    ),
]
send_to = [
    Email(
        sender_addr="[email protected]",
        password="myemailpass",
        receiver_addr=["[email protected]", "[email protected]"],
    ),
    Slack(
        bot_token="xoxb-34248928439763-6634233945735-KbePKXfstIRv6YN2tW5UF8tS",
        channel="my-channel",
    ),
]
send_alert(components, subject="Test Alert", send_to=send_to)
|
alertnow-python
|
AlertNow Python
This package is used for logging information or errors.

Installation
pip install AlertNow-Python

How to use it?
First, you need to register with the sign-up remote API and get the API key on the site.
After that, you must initialize the connection using set_api_key, and you can initialize host, user, and tag data using the set_host, set_user, and set_tag methods.
Then you can use the info or error methods.

Example
And you can useinfoorerrormethods.#Examplefrom logger.src.logger import set_host, set_api_key, set_user, set_tag, info, error
from logger.src.common.dto.user import User
from logger.src.common.dto.userGeo import UserGeo
import jsonpickle
def execute_method():
set_host('http://localhost:8080')
set_api_key('b4984c8de7f14b2f86f8e036456fd60c')
set_tag('os.name', 'python OS')
set_user(User(
"13213231",
"111.11.1.11",
UserGeo(
"1044",
"Baku",
"Gadabay"
)
))
response = info('hi from python env')
print(response.status_code)
print(jsonpickle.encode(response))
execute_method()
|
alertover
|
No description available on PyPI.
|
alertPrediction
|
No description available on PyPI.
|
alertpy
|
Alertpy
|
alertscraper
|
alertscraper
General purpose flexible tool for scraping a given URL for a certain
type of items, and then email if new items are added. Useful for
monitoring ad or auction websites. Could also be useful for setting up
email alerts on your own site.
WARNING
Check the Terms of Service of the site before you use this tool! For
some sites, using this tool may violate their terms of service, and
should not be used.
Limitations
This code ONLY scrapes based on the initial HTTP request. Websites
that function as single-page apps will not work. This could be
supported in the future using JSON, or integrating with something
heavier weight like Selenium.
Usage
Installation
Assuming Python’s pip is installed (for Debian-based systems, this
can be installed with sudo apt-get install python-pip), alertscraper
can be installed directly from PyPI:
pip install alertscraper
Python versions 3.3+ (and 2.6+) are supported and tested against.
Quick start
alertscraper is based on URLs, and maintains a history file for each
URL that you scrape so it knows when something is new.
Start by navigating in your web-browser to the website you want to
scrape, and then copying and pasting the URL. Then, inspect the page
source of the site and see if you can figure out the DOM path to the
relevant element. In this case, it was an li element with the class
name result, so the combined thing becomes li.result.
alertscraper 'https://some-site.org/?query=guitar&maxprice=550' li.result
This will download the given URL and list the text content of each item
specified. This lets you know your query is correct.
Now we want to save this to a database file, that is, say that “I’ve
seen everything currently posted and am only now interested in new
stuff”.
alertscraper 'https://some-site.org/?query=guitar&maxprice=550' li.result --file=guitars.txt
Notice that it prints out again all the links it found. If we were to
run the command again, it would not print them out since it will have
stored them as “already seen”.
Finally, let’s run the command to email us everything that has not yet
been seen.
alertscraper 'https://some-site.org/?query=guitar&maxprice=550' li.result --file=guitars.txt [email protected]
This only runs once. If you want it to run continually, I’d recommend
putting it in a cronjob. Eventually I may add a daemon mode, but this is
good for now.
Happy scraping!
Contributing
CONDUCT.md
New features, tests, and bug fixes are welcome!
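For the curious, the core idea is a simple scrape-diff-notify loop. The sketch below illustrates that pattern with requests and BeautifulSoup; it is not alertscraper's actual implementation, and the selector and history-file handling are simplified placeholders.

import requests
from bs4 import BeautifulSoup

def find_new_items(url, selector, history_file="guitars.txt"):
    # Fetch the page (initial HTTP request only, as noted in Limitations).
    html = requests.get(url, timeout=30).text
    items = [el.get_text(strip=True) for el in BeautifulSoup(html, "html.parser").select(selector)]
    # Load previously-seen items, if any.
    try:
        with open(history_file) as f:
            seen = set(line.rstrip("\n") for line in f)
    except FileNotFoundError:
        seen = set()
    # Record everything, but only report the items not seen before.
    new_items = [item for item in items if item not in seen]
    with open(history_file, "a") as f:
        for item in new_items:
            f.write(item + "\n")
    return new_items  # e.g. hand these to an email-sending helper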
|
alerts-in-ua
|
Introduction
The Alerts.in.ua API Client is a Python library that simplifies access to the alerts.in.ua API service. It provides real-time information about air raid alerts and other potential threats.

Installation
To install the Alerts.in.ua API Client, run the following command in your terminal:

pip install alerts_in_ua

Usage
⚠️ Before you can use this library, you need to obtain an API token by [email protected].

Here's a basic example of how to use the library to get a list of active alerts:

Async:

import asyncio
from alerts_in_ua import AsyncClient as AsyncAlertsClient

async def main():
    # Initialize the client with your token
    alerts_client = AsyncAlertsClient(token="your_token")
    # Get the active alerts
    active_alerts = await alerts_client.get_active_alerts()
    print(active_alerts)

# Run the asynchronous function
asyncio.run(main())

or sync:

from alerts_in_ua import Client as AlertsClient

alerts_client = AlertsClient(token="your_token")
# Get the active alerts
active_alerts = alerts_client.get_active_alerts()
print(active_alerts)

Alerts
The Alerts class is a collection of alerts and provides various methods to filter and access these alerts.
When the user calls client.get_active_alerts(), it returns an Alerts instance.

Methods

filter(*args: str) -> List[Alert]
This method filters the alerts based on the given parameters.

filtered_alerts = active_alerts.filter('location_oblast', 'Донецька область', 'alert_type', 'air_raid')

In this example, filtered_alerts will contain all the air raid alerts that have the location oblast as 'Донецька область'.

get_alerts_by_location_title(location_title: str) -> List[Alert]
This method returns all the alerts from the specified location.

kyiv_alerts = active_alerts.get_alerts_by_location_title('м. Київ')

get_air_raid_alerts() -> List[Alert]
This method returns all the alerts that are of alert type 'air_raid'.

air_raid_alerts = active_alerts.get_air_raid_alerts()

get_oblast_alerts() -> List[Alert]
This method returns all the alerts that are of location type 'oblast'.

oblast_alerts = active_alerts.get_oblast_alerts()

get_raion_alerts() -> List[Alert]
This method returns all the alerts that are of location type 'raion'.

raion_alerts = active_alerts.get_raion_alerts()

get_hromada_alerts() -> List[Alert]
This method returns all the alerts that are of location type 'hromada'.

hromada_alerts = active_alerts.get_hromada_alerts()

get_city_alerts() -> List[Alert]
This method returns all the alerts that are of location type 'city'.

city_alerts = active_alerts.get_city_alerts()

get_alerts_by_alert_type(alert_type: str) -> List[Alert]
This method returns all the alerts that are of the given alert type.

artillery_shelling_alerts = active_alerts.get_alerts_by_alert_type('artillery_shelling')

get_alerts_by_location_type(location_type: str) -> List[Alert]
This method returns all the alerts that are of the given location type.

urban_location_alerts = active_alerts.get_alerts_by_location_type('raion')

get_alerts_by_oblast(oblast_title: str) -> List[Alert]
This method returns all the alerts that are of the given oblast title.

donetsk_oblast_alerts = active_alerts.get_alerts_by_oblast('Донецька область')

get_alerts_by_location_uid(location_uid: str) -> List[Alert]
This method returns all the alerts that have the given location uid.

location_uid_alerts = active_alerts.get_alerts_by_location_uid('123456')

get_artillery_shelling_alerts() -> List[Alert]
This method returns all the alerts that are of alert type 'artillery_shelling'.

artillery_shelling_alerts = active_alerts.get_artillery_shelling_alerts()

get_urban_fights_alerts() -> List[Alert]
This method returns all the alerts that are of alert type 'urban_fights'.

urban_fights_alerts = active_alerts.get_urban_fights_alerts()

get_nuclear_alerts() -> List[Alert]
This method returns all the alerts that are of alert type 'nuclear'.

nuclear_alerts = active_alerts.get_nuclear_alerts()

get_chemical_alerts() -> List[Alert]
This method returns all the alerts that are of alert type 'chemical'.

chemical_alerts = active_alerts.get_chemical_alerts()

get_all_alerts() -> List[Alert]
This method returns all alerts.

all_alerts = active_alerts.get_all_alerts()

or you can use a shortcut:

for alert in active_alerts:
    print(alert)

get_last_updated_at() -> datetime.datetime
This method returns the datetime object representing the time when the alert information was last updated (Kyiv timezone).

last_updated_at = alerts.get_last_updated_at()

get_disclaimer() -> str
This method returns the disclaimer associated with the alert information.

disclaimer = alerts.get_disclaimer()

License
MIT 2023
|
alerts-in-ua.py
|
Alerts_in_ua.py
A library for working with the alerts.in.ua API.
The library is still in development; if you find a bug or have an idea for the library, contact the developer!
Telegram: @FOUREX_dot_py.
The developers of alerts.in.ua have released an official library.

Installation

pip install alerts-in-ua.py

Usage example:

from alerts_in_ua.alerts_client import AlertsClient  # Import the client

alerts_client = AlertsClient("token")  # Initialize the client

def main():
    locations = alerts_client.get_active()  # Get the list of locations with alerts
    # Filter the list of locations, keeping those with an AIR RAID alert
    air_raid_locations = locations.filter(alert_type="air_raid")
    for location in air_raid_locations:
        # Print the title and alert start time of each location in the list
        print(location.location_title, location.started_at)

if __name__ == "__main__":
    main()

Async client usage example (recommended for bots):

import asyncio

from alerts_in_ua.async_alerts_client import AsyncAlertsClient

alerts_client = AsyncAlertsClient("token")  # Initialize the client

async def main():
    locations = await alerts_client.get_active()  # Get the list of locations with alerts
    # Filter the list of locations, keeping those with an AIR RAID alert
    air_raid_locations = locations.filter(alert_type="air_raid")
    for location in air_raid_locations:
        # Print the title and alert start time of each location in the list
        print(location.location_title, location.started_at)

if __name__ == "__main__":
    loop = asyncio.new_event_loop()
    loop.run_until_complete(main())

Alert map rendering example (for a Telegram bot written with the aiogram library):

from aiogram import Bot, Dispatcher, executor
from aiogram.types import Message

from alerts_in_ua.async_alerts_client import AsyncAlertsClient

bot = Bot("telegram_bot_token")
dp = Dispatcher(bot)
alerts_client = AsyncAlertsClient("api_alerts_in_ua_token")

@dp.message_handler(commands=["alerts"])
async def yep(message: Message):
    locations = await alerts_client.get_active()
    alerts_map = locations.render_map()
    message_text = "\n".join(locations.location_title)
    await message.reply_photo(alerts_map, message_text)

if __name__ == "__main__":
    executor.start_polling(dispatcher=dp)

Result:

Using filters:

Method 1

locations = alerts_client.get_active()

air_raid = locations.filter(alert_type="air_raid")
oblast = locations.filter(location_type="oblast")
air_raid_and_oblast = locations.filter(alert_type="air_raid", location_type="oblast")

print(air_raid)  # Locations with an air raid alert only
print(oblast)  # Oblasts only
print(air_raid_and_oblast)  # Oblasts with an air raid alert only

Method 2

locations = alerts_client.get_active()

air_raid_filter = {"alert_type": "air_raid"}
oblast_filter = {"location_type": "oblast"}
air_raid_and_oblast_filter = {"alert_type": "air_raid", "location_type": "oblast"}

air_raid = locations.filter(**air_raid_filter)
oblast = locations.filter(**oblast_filter)
air_raid_and_oblast = locations.filter(**air_raid_and_oblast_filter)

print(air_raid)  # Locations with an air raid alert only
print(oblast)  # Oblasts only
print(air_raid_and_oblast)  # Oblasts with an air raid alert only

Getting location attribute values via the location list:

locations = alerts_client.get_active()

print(list(zip(locations.location_title, locations.location_uid)))
# [('Луганська область', '16'), ('Автономна Республіка Крим', '29'), ('Нікопольська територіальна громада', '351'), ('м. Нікополь', '5351')]

Checking whether a location is in the list, by its UID (location_uid) or title (location_title):

locations = alerts_client.get_active()

print("Автономна Республіка Крим" in locations)
|
alerts-msg
|
alerts_msg (v0.2.1)

DESCRIPTION_SHORT
All abilities (mail/telegram) to send alert msgs (threading)

DESCRIPTION_LONG
designed for ...

Features
send alert msgs:
- emails
- telegram
- threading

License
See the LICENSE file for license rights and limitations (MIT).

Release history
See the HISTORY.md file for release history.

Installation
pip install alerts-msg

Import
from alerts_msg import *

USAGE EXAMPLES
See tests and sourcecode for other examples.

1. example1.py

# =========================================================================================
### 0. BEST PRACTICE

from alerts_msg import *

class AlertADX(AlertSelect.TELEGRAM_DEF):
    pass

AlertADX("hello")
AlertADX("World")
AlertADX.threads_wait_all()

# =========================================================================================
### AlertSmtp
#### 1. add new server if not exists

from alerts_msg import *

class SmtpServersMOD(SmtpServers):
    EXAMPLE_RU: SmtpAddress = SmtpAddress("smtp.EXAMPLE.ru", 123)

class AlertSmtpMOD(AlertSmtp):
    SERVER_SMTP: SmtpAddress = SmtpServersMOD.EXAMPLE_RU  # or direct = SmtpAddress("smtp.EXAMPLE.ru", 123)

# =========================================================================================
#### 2. change authorisation data (see `private_values` for details)

from alerts_msg import *

class AlertSmtpMOD(AlertSmtp):
    AUTH: PrivateAuto = PrivateAuto(_section="AUTH_EMAIL_MOD")

# =========================================================================================
#### 3. change other settings (see source for other not mentioned)

from alerts_msg import *

class AlertSmtpMOD(AlertSmtp):
    RECONNECT_PAUSE: int = 60
    RECONNECT_LIMIT: int = 10
    TIMEOUT_RATELIMIT: int = 600
    RECIPIENT_SPECIAL: str = "[email protected]"

# =========================================================================================
#### 4. send

# if no mods
from alerts_msg import *

AlertSmtp(_subj_name="Hello", body="World!")

# with mods
from alerts_msg import *

class AlertSmtpMOD(AlertSmtp):
    pass  # changed

AlertSmtpMOD(_subj_name="Hello", body="World!")

# =========================================================================================
#### 5. using in class with saving alert object

from alerts_msg import *

class AlertSmtpMOD(AlertSmtp):
    pass  # changed

class MyMonitor:
    ALERT = AlertSmtpMOD

monitor = MyMonitor()
monitor.ALERT("Hello")

# =========================================================================================
### AlertTelegram
# All idea is similar to AlertSmtp.
# add auth data
# add pv.json or do smth else (for details see private_values.PrivateJsonTgBotAddress)

# json
{
    "TG_ID": {"MyTgID": 1234567890},
    "TGBOT_DEF": {
        "LINK_ID": "@my_bot_20230916",
        "NAME": "my_bot",
        "TOKEN": "9876543210xxxxxxxxxxxxxxxxxxxxxxxxx"
    }
}

# =========================================================================================

from alerts_msg import *

class MyMonitor:
    ALERT = AlertTelegram

monitor = MyMonitor()
monitor.ALERT("Hello")
|
alesha
|
No description available on PyPI.
|
aletheia
|
A Python 3 implementation of Aletheia.
This is how we get from
“I read it on the Internet, so it must be true.”
to
“Yesterday, the Guardian had a story about a prominent politician doing
something they weren’t supposed to be doing. The video footage was
certified authentic, and the author of the article stands by her work.”
Aletheia is a little program you run to attach your name – and reputation –
to the files you create: audio, video, and documentation, all of it can carry
authorship, guaranteed to be tamper proof.
Once you use Aletheia to sign your files, you can share them all over the web,
and all someone has to do to verify the file’s author is run Aletheia against
the file they just received. The complication of fetching public keys and
verifying signatures is all done for you.
If this sounds interesting to you, have a look at the documentation or even
install it and try it out yourself.
The Goal
I want to live in a world where journalism means something again. Where “some
guy on the internet” making unsubstantiated claims can be fact-checked by
organisations who have a reputation for doing the work of accurate reporting.
More importantly though, I think we need a way to be able to trust what we see
again.New technologies are evolving every day that allow better and better fakes to
be created. Now more than ever we need a way to figure out whether we trust
the source of something we’re seeing. This is an attempt to do that.
How to Use it
The process is pretty straightforward. Install the system dependencies as
described in the setup documentation and then:
$ pip install aletheia
Once it’s installed, you can verify a file to try it out. Use this one as a
starting example.
Command Line API
$ aletheia verify path/to/test.jpg
Python API
from aletheia.utils import verify

verify("path/to/test.jpg")
More details can be found in the command line API and Python API
documentation.
How to Run the Tests
Aletheia uses pytest, so assuming you’ve got a working environment (with
libmagic, exiftool, and ffmpeg installed and working) you can just run it from
the project root:
$ pytest
The reality of this project, however, is that getting a working environment set up
perfectly can be a pain, especially when all you want to do is run the tests.
So to that end, we’ve got some Docker containers set up for you.
To run your tests in a lightweight Alpine Linux container, just run this:
$ docker run --rm -v $(pwd):/app -it registry.gitlab.com/danielquinn/aletheia-python:alpine-python3.7 bash -c 'cd /app && pytest'
That’ll run the entire battery of tests in an environment containing all the
tools Aletheia needs to do its thing. Alternatively, you can just jump into
an instance of the container and use it as a sort of virtualenv:
$ docker run --rm -v $(pwd):/app -it registry.gitlab.com/danielquinn/aletheia-python:alpine-python3.7 /bin/bash
$ cd /app
$ pytest
Testing for Multiple Environments
GitLab will automatically run the tests in a multitude of environments
(Alpine:py3.6, Arch, Debian:py3.5, Debian:py3.7, etc.), but if you want to do
that locally before it goes up to GitLab, there’s a handy test script for you
that does all the work:
$ ./tests/cross-platform
Just note that this script will download all of the required Docker containers
from GitLab to do its thing, so you’re looking at a few hundred MB of disk
space consumed by this process.
Colophon & Disambiguation
This project is named for the Greek goddess of truth & verity – a reasonable
name for a project that’s trying to restore truth and verified origins to the
web. It also doesn’t hurt that the lead developer’s wife is Greek ;-)
It’s been noted that there’s another project out there with the same name.
The two projects are totally unrelated, despite the identical name and the
fact that both lead developers are named “Daniel”.
|
aletheia-dnn
|
No description available on PyPI.
|
alethiometer
|
zero-cost-proxies
Independent ZC proxies, only for testing.
Modified and simplified from the foresight repo; fixes some bugs in model output and removes some unwanted code snippets.
Supported zc-metrics are:
=========================================================
= grad_norm, =
=-------------------------------------------------------=
= grasp, =
=-------------------------------------------------------=
= snip, =
=-------------------------------------------------------=
= synflow, =
=-------------------------------------------------------=
= nwot, (NASWOT) =
= [nwot, nwot_Kmats] =
=-------------------------------------------------------=
= lnwot, (Layerwise NASWOT) =
= [lnwot, lnwot_Kmats] =
=-------------------------------------------------------=
= nwot_relu, (original RELU based NASWOT metric) =
= [nwot_relu, nwot_relu_Kmats] =
=-------------------------------------------------------=
= zen, =
= Your network need have attribute fn: =
= `forward_before_global_avg_pool(inputs)` =
= to calculate zenas score =
= (see sample code in tests/test_zc.py) =
=-------------------------------------------------------=
= tenas, =
= must work in `gpu` env, =
= might encouter bug on `cpu`. =
= also contains metrics: =
= ntk, =
= lrn, =
=-------------------------------------------------------=
= zico, not work in torch-cpu, I will check it later. =
= zico must use at least two batches of data, =
= in order to calculate cross-batch (non-zero) std =
=-------------------------------------------------------=
= tcet, =
= snr-synflow, =
= snr-snip, =
=========================================================

0. How to install.
First create a conda env with python version >= 3.6; this repo has been completely tested on python 3.9.

conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia

Install torch, torchvision, cudatoolkit. Tested on:
pytorch==1.13.1 (py3.9_cuda11.6_cudnn8.3.2_0)
python==3.9.16
cuda 11.6
torchvision==0.14.1 (py39_cu116)
torchaudio==0.13.1 (py39_cu116)

This repo is perfectly compatible with the current mainstream zc testing frameworks, including zennas, naslib, nb201-related repos, nb101, nb1shot1, blox, etc. If you still cannot use this repo, try to contact me, or try to set up some mainstream NAS testing benchmarks; then most problems would be solved.
Finally, if all the previous basic environment requirements are met, try this lib with just one single command.

pip install -e .
# run this command under the root directory where setup.py is located.

Check installation success:

cd tests/
python test_zc.py

1. Tests
ImageNet16-120 cannot be automatically downloaded. Use the script under scripts/download_data.sh to download it:

source scripts/download_data.sh nb201 ImageNet16-120
# do not use `bash`, use `source` instead

2. Versions

V1.1.2
Fix bug in tenas, add net instance deep copy to avoid weight changes.

V1.1.1
Fix warnings in tenas, now using new torch api to calc eigenvalue.
Fix bug in tcet, add net instance deep copy to avoid weight changes, add manually designed tcet copy process, remove bn in synflow, add bn in snip.

V1.1.0
Add tcet metric, which calculates TCET score.
Add snr metrics, which calculate SNR family scores.

V1.0.10
add zico metric, which calculates ZICO score.

V1.0.9
fix readme format, no code change.

V1.0.8
fix bug in nwot_relu for wrong for/backward fn register,
fix bug in zen for missed necessary attribute check, add test sample for zen metric,
fix bug in zen for return value having no .item() attribute,
add tenas metric, which calculates TE-NAS score. (tenas, ntk, lrn)

V1.0.7
add zen metric, which calculates ZenNAS score.

V1.0.6
add original naswot implementation based on RELU, which can be calculated using metric nwot_relu; also fix a potential oom bug, with more reliable GPU memory cache removal code snippets.

V1.0.5
add naswot, lnwot into mats

V1.0.4
fix bugs in calculation, add more test codes.

V1.0.3
add shortcuts to import directly from package root directory.

3. Quick Bug Fix
If you encounter this error:

RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

Traceback (most recent call last):
  File "/home/u2280887/GitHub/zero-cost-proxies/tests/test_zc.py", line 87, in <module>
    test_zc_proxies()
  File "/home/u2280887/GitHub/zero-cost-proxies/tests/test_zc.py", line 49, in test_zc_proxies
    results = calc_zc_metrics(metrics=mts, model=net, train_queue=train_loader, device=device, aggregate=True)
  File "/home/u2280887/GitHub/zero-cost-proxies/alethiometer/zc_proxy.py", line 115, in calc_zc_metrics
    mt_vals = calc_vals(net_orig=model, trainloader=train_queue, device=device, metric_names=metrics, loss_fn=loss_fn)
  File "/home/u2280887/GitHub/zero-cost-proxies/alethiometer/zc_proxy.py", line 101, in calc_vals
    raise e
  File "/home/u2280887/GitHub/zero-cost-proxies/alethiometer/zc_proxy.py", line 73, in calc_vals
    val = M.calc_metric(mt_name, net_orig, device, inputs, targets, loss_fn=loss_fn, split_data=ds)
  File "/home/u2280887/GitHub/zero-cost-proxies/alethiometer/zero_cost_metrics/__init__.py", line 42, in calc_metric
    return _metric_impls[name](net, device, *args, **kwargs)
  File "/home/u2280887/GitHub/zero-cost-proxies/alethiometer/zero_cost_metrics/__init__.py", line 24, in metric_impl
    ret = func(net, *args, **kwargs, **impl_args)
  File "/home/u2280887/GitHub/zero-cost-proxies/alethiometer/zero_cost_metrics/tenas.py", line 316, in compute_TENAS_score
    RN = compute_RN_score(net, inputs, targets, split_data, loss_fn, num_batch)
  File "/home/u2280887/GitHub/zero-cost-proxies/alethiometer/zero_cost_metrics/tenas.py", line 201, in compute_RN_score
    num_linear_regions = float(lrc_model.forward_batch_sample()[0])
  File "/home/u2280887/GitHub/zero-cost-proxies/alethiometer/zero_cost_metrics/tenas.py", line 170, in forward_batch_sample
    return [LRCount.getLinearReginCount() for LRCount in self.LRCounts]
  File "/home/u2280887/GitHub/zero-cost-proxies/alethiometer/zero_cost_metrics/tenas.py", line 170, in <listcomp>
    return [LRCount.getLinearReginCount() for LRCount in self.LRCounts]
  File "/home/u2280887/GitHub/zero-cost-proxies/alethiometer/zero_cost_metrics/tenas.py", line 93, in getLinearReginCount
    self.calc_LR()
  File "/home/u2280887/miniconda3/envs/zc-alth/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/u2280887/GitHub/zero-cost-proxies/alethiometer/zero_cost_metrics/tenas.py", line 62, in calc_LR
    res = torch.matmul(self.activations.half(), (1 - self.activations).T.half())
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

Please check your lib installation: we need GPU support for torch.half(). Please check your CUDA version and PyTorch version, and reinstall PyTorch with CUDA support. It seems the current CPU version of PyTorch does not support torch.half(), even if we are using float32, not float16.
|
ale-uy
|
Module eda.py: Data Manipulation
The classes eda.EDA and eda.Graphs_eda are tools for performing data manipulations and visualizations in a simple and efficient manner. These classes are designed to streamline various tasks related to data processing and cleaning.

Available Methods

Data Preprocessing (EDA)
- EDA.remove_single_value_columns(df): Removes variables that have only one value in a DataFrame.
- EDA.remove_missing_if(df, p=0.5): Removes columns with a percentage of missing values greater than or equal to p in a DataFrame.
- EDA.impute_missing(df, method="median", n_neighbors=None): Imputes missing values in a DataFrame using the median method for numerical variables and the mode method for categorical variables. The K-Nearest Neighbors (KNN) method can also be used to impute missing values.
- EDA.standardize_variables(df, target, method="zscore", cols_exclude=[]): Standardizes numerical variables in a DataFrame using the "z-score" method (mean and standard deviation-based standardization). Other standardization methods such as 'minmax' and 'robust' are also available.
- EDA.balance_data(df, target, oversampling=True): Performs random sampling of data to balance classes in a binary classification problem. This helps mitigate class imbalance issues in the dataset.
- EDA.shuffle_data(df): Shuffles the data in the DataFrame randomly, which can be useful for splitting data into training and testing sets.
- EDA.numeric_statistics(df): Generates statistical data for numerical variables in the DataFrame.
- EDA.convert_to_numeric(df, target, method="ohe", drop_first=True): Performs categorical variable encoding using different methods. In addition to "ohe" (one-hot-encode), "dummy" and "label" (label-encode) methods can be selected.
- EDA.analyze_nulls(df): Returns the percentage of null values in the entire dataset for each column.
- EDA.remove_duplicate(df): Removes duplicate rows from a DataFrame.
- EDA.remove_outliers(df, method='zscore', threshold=3): Removes outliers from a DataFrame using different methods; the method can be 'zscore' (default) or 'iqr'.
- EDA.perform_full_eda(df, target, cols_exclude=[], p=0.5, impute=True, imputation_method='median', n_neighbors=None, convert=True, conversion_method="ohe", drop_duplicate=True, drop_outliers=False, outliers_method='zscore', outliers_threshold=3, standardize=False, standardization_method="zscore", balance=False, balance_oversampling=True, shuffle=False): Pipeline to perform various (or all) steps of the class automatically.

Data Visualization (Graphs_eda)
- Graphs_eda.categorical_plots(df): Creates horizontal bar charts for each categorical variable in the DataFrame.
- Graphs_eda.histogram_plot(df, column): Generates an interactive histogram for a specific column in the DataFrame.
- Graphs_eda.box_plot(df, column_x, column_y): Generates an interactive box plot for a variable y based on another variable x.
- Graphs_eda.scatter_plot(df, column_x, column_y): Generates an interactive scatter plot for two variables, x and y.
- Graphs_eda.hierarchical_clusters_plot(df, method='single', metric='euclidean', save_clusters=False): Generates a dendrogram that is useful for determining the value of k (clusters) in hierarchical clustering.
- Graphs_eda.correlation_heatmap_plot(df): Generates a correlation heatmap for the given DataFrame.

Module ml.py: Data Modeling
The classes ml.ML, ml.Graphs_ml, and ml.Tools are tools for performing modeling, data manipulation, and visualization of data in a simple and efficient manner. These classes are designed to facilitate various tasks related to data processing, training, and evaluation of machine learning models.

Data Modeling
- ML.lightgbm_model(...): Uses LightGBM to predict the target variable in a DataFrame. This method supports both classification and regression problems. You can see the customizable parameters within the docstring.
- ML.xgboost_model(...): Utilizes XGBoost to predict the target variable in a DataFrame. This method is also suitable for both classification and regression problems. You can find customizable parameters within the docstring.
- ML.catboost_model(...): Employs CatBoost to predict the target variable in a DataFrame. Similar to the previous methods, it can handle both classification and regression problems. You can explore the customizable parameters within the docstring.

IMPORTANT: If you pass grid=True as a parameter to any of these models (e.g. model_catboost(grid=True, ...)), a random hyperparameter search is conducted to reduce training times. Additionally, you can pass n_iter=... with the number of parameter combinations you want the model to try (10 is the default option).

Model Evaluation
- Classification Metrics: Calculates various evaluation metrics for a classification problem, such as precision, recall, F1-score, and the area under the ROC curve (AUC-ROC).
- Regression Metrics: Computes various evaluation metrics for a regression problem, including mean squared error (MSE), adjusted R-squared, among others.

Variable Selection and Clustering
- Tools.feature_importance(...): Calculates the importance of variables based on their contribution to prediction using Random Forest with cross-validation. It employs a threshold that determines the minimum importance required to retain or eliminate a variable. You can find customizable parameters within the docstring.
- Tools.generate_clusters(...): Applies the unsupervised algorithms K-Means or DBSCAN to a DataFrame and returns a series with the cluster number to which each observation belongs. You can explore customizable parameters within the docstring.
- Tools.generate_soft_clusters(...): Applies Gaussian Mixture Models (GMM) to the DataFrame to generate a table with the probabilities of each observation belonging to a specific cluster. You can find customizable parameters within the docstring.
- Tools.split_and_convert_data(df, target, test_size=0.2, random_state=np.random.randint(1, 1000), encode_categorical=False): Divides data into training and testing sets and optionally encodes categorical variables.
- Graphs_ml.plot_cluster(df, random_state=np.random.randint(1, 1000)): Elbow and silhouette plot, which is essential for determining the optimal number of clusters to use in the aforementioned clustering methods.

Module ts.py: Time Series Data Manipulation
The classes ts.Ts, ts.Graphs_ts, and ts.Propheta are powerful tools for performing modeling, manipulation, and visualization of time series data. These classes are designed to facilitate various tasks related to statistical time series data, as well as modeling and prediction.

Available Methods

TS Class
Each method has its specific functionality related to the analysis and manipulation of time series data.
You can use these methods to perform various tasks on time series data, including data loading, statistical analysis, stationarity tests, decomposition, differencing, transformation, and SARIMA modeling.
- TS.statistical_data(df, target): This method calculates various statistical properties of a time series, such as mean, median, standard deviation, minimum, maximum, percentiles, coefficient of variation, skewness, and kurtosis. It returns these statistics as a dictionary.
- TS.unit_root_tests(df, target, test='adf', alpha="5%"): This method performs unit root tests to determine if a time series is stationary. It supports three different tests: Augmented Dickey-Fuller (ADF), Kwiatkowski-Phillips-Schmidt-Shin (KPSS), and Phillips-Perron (PP). It returns diagnostic information and, if necessary, performs differencing to make the series stationary.
- TS.apply_decomposition(df, target, seasonal_period, model='additive'): This method applies seasonal decomposition to a time series, separating it into trend, seasonality, and residuals. You can specify the type of decomposition (additive or multiplicative) and the seasonal period.
- TS.apply_differencing(df, target, periods=1): This method performs differencing on a time series to make it stationary. You can specify the number of periods to difference.
- TS.apply_transformation(df, target, method='box-cox'): This method applies transformations to a time series. It supports three transformation methods: Box-Cox, Yeo-Johnson, and logarithmic. It returns the transformed time series.
- TS.sarima_model(df, target, p=0, d=0, q=0, P=0, D=0, Q=0, s=0): This method fits an ARIMA model to a time series by specifying the orders of the autoregressive (AR), differencing (d), and moving average (MA) components. It can also fit a SARIMA model by modifying the other four parameters: seasonal autoregressive order (P), seasonal differencing (D), seasonal moving average (Q), and seasonal periods (s). It returns the results of fitting the ARIMA/SARIMA model.

Class Graphs_ts
These methods are useful for exploring and understanding time series data, identifying patterns, and evaluating model assumptions. To use these methods, you should pass a pandas DataFrame containing time series data and specify the relevant columns and parameters.
- Graphs_ts.plot_autocorrelation(df, value_col, lags=24, alpha=0.05): This method visualizes the autocorrelation function (ACF), partial autocorrelation function (PACF), and seasonal ACF of a time series (SACF and SPACF). You can specify the number of lags and the significance level of the tests.
- Graphs_ts.plot_seasonality_trend_residuals(df, value_col, period=12, model='additive'): This method decomposes a time series into its trend, seasonality, and residual components using an additive or multiplicative model. It then plots these components along with the original time series.
- Graphs_ts.plot_box_plot(df, time_col, value_col, group_by='year'): This method generates and displays box plots to visualize data grouped by year, month, day, etc. You can specify the time column, value column, and grouping option.
- Graphs_ts.plot_correlogram(df, value='value', max_lag=10, title='Correlogram Plot'): This method creates and displays a correlogram (autocorrelation plot) for a time series. It helps identify correlations between different lags in the series.
- Graphs_ts.plot_prophet(model, forecast, plot_components=False): This method generates charts related to a Prophet model and its predictions. You can choose to visualize the components (trend, seasonality) or the entire forecast.

Class Propheta
- Propheta.load_prophet_model(model_name='prophet_model'): This method loads a previously saved Prophet model from a JSON file. You can specify the name of the model file to load.
- Propheta.train_prophet_model(...): This method trains and fits a Prophet model for time series forecasting. You can customize the parameters as described in the docstring.

Module dl.py: Neural Networks Models
The dl.DL class is a tool that will help you model data with neural networks. It is designed to make modeling and prediction with your data easy.

Available Methods
- DL.model_ANN(...): Creates a customizable Artificial Neural Network (ANN) model using scikit-learn. You can explore the customizable parameters within the docstring.
- DL.model_FNN(...): Creates a customizable Feedforward Neural Network (FNN) model using Tensorflow. You can explore the customizable parameters within the docstring.

Install
To use the classes ML, EDA, Graphs_ml, Graphs_eda, DL, and Tools, simply import the class in your code:

from ale_uy.eda import EDA, Graphs_eda
from ale_uy.ml import ML, Tools, Graphs_ml
from ale_uy.ts import TS, Graphs_ts, Propheta
from ale_uy.dl import DL

Usage Example
Here's an example of how to use the EDA and ML classes to preprocess data and train a LightGBM model for a binary classification problem:

# Import the ml and eda modules with their respective classes
from ale_uy.ml import ML, Tools, Graphs_ml
from ale_uy.eda import EDA, Graphs_eda

# Load the data into a DataFrame
data = pd.read_csv(...)  # Your DataFrame with the data

# Data preprocessing with the target variable named 'target'
preprocessed_data = EDA.perform_full_eda(data, target='target')

# Train the LightGBM classification model and obtain its metrics
ML.lightgbm_model(preprocessed_data, target='target', problem_type='classification')

# If the model fits our needs, we can simply save it by adding the 'save_model=True' attribute
ML.lightgbm_model(preprocessed_data, target='target', problem_type='classification', save_model=True)  # It will be saved as "lightgbm.pkl"

To use the saved model with new data, we will use the following code:

import joblib

# File path and name where the model was saved
model_filename = "model_filename.pkl"

# Load the model
loaded_model = joblib.load(model_filename)

# Now you can use the loaded model to make predictions
# Suppose you have a dataset 'X_test' for making predictions
y_pred = loaded_model.predict(X_test)

Contribution
If you encounter any issues or have ideas to improve these classes, please feel free to contribute! You can do so by submitting pull requests or opening issues on the Project Repository.
Thank you for your interest! I hope it proves to be a useful tool for your machine learning projects. If you have any questions or need assistance, don't hesitate to ask. Good luck with your data science and machine learning endeavors!
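Finally, to complement the EDA/ML walkthrough above, here is a hedged sketch of the ts.py workflow using only the signatures documented earlier; the DataFrame df and the 'sales' column are placeholders.

import pandas as pd
from ale_uy.ts import TS

df = pd.read_csv(...)  # your time-series DataFrame with a 'sales' column

# Check stationarity with the ADF test (see TS.unit_root_tests above).
diagnostics = TS.unit_root_tests(df, target='sales', test='adf', alpha="5%")

# Difference once if needed, then fit a monthly seasonal SARIMA model.
df_diff = TS.apply_differencing(df, target='sales', periods=1)
results = TS.sarima_model(df, target='sales', p=1, d=1, q=1, P=1, D=1, Q=1, s=12)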
|
alex
|
UNKNOWN
|
alexa
|
No description available on PyPI.
|
alexa-browser-client
|
Alexa Browser Client
Alexa client in your browser. Django app. Talk to Alexa from your desktop, phone, or tablet browser.
Demo
The demo should really be heard, so click the gif below to view it on YouTube.
Run the demo
First follow these steps:
Configure your Amazon OAuth configuration
Set your environment variables
Install:

$ git clone git@github.com:richtier/alexa-browser-client.git
$ cd alexa-browser-client
$ virtualenv .venv -p python3.6 && source .venv/bin/activate && make test_requirements

Compile snowboy
$ make demo
Go to http://localhost:8000 for the basic demo, or http://localhost:8000/mixer/ to play with the response audio.
Installation
pip install alexa_browser_client
Make sure your settings INSTALLED_APPS contains at least these entries:

INSTALLED_APPS = [
'django.contrib.staticfiles',
'channels',
'alexa_browser_client',
]

Dependencies
Snowboy detects when the wakeword "Alexa" is uttered. You must compile Snowboy manually. Copy the compiled snowboy folder to the top level of your project. By default, the folder structure should be:
.
├── ...
├── snowboy
| ├── snowboy-detect-swig.cc
| ├── snowboydetect.py
| └── resources
| ├── alexa.umdl
| └── common.res
└── ...

If the default folder structure does not suit your needs you can customize the wakeword detector.
Routing and urls
Add url(r'^', include('alexa_browser_client.config.urls')), to your urls.py url_patterns.
Add include('alexa_browser_client.config.routing.channel_routing') to your routing.py channel_routing.
Authentication
This app uses Alexa Voice Service. To use AVS you must first have a developer account. Then register your product here. Choose "Application" under "Is your product an app or a device"?
Ensure you update your settings.py:

Setting | Notes
ALEXA_BROWSER_CLIENT_AVS_CLIENT_ID | Retrieve by clicking on your product listed here
ALEXA_BROWSER_CLIENT_AVS_CLIENT_SECRET | Retrieve by clicking on your product listed here
ALEXA_BROWSER_CLIENT_AVS_DEVICE_TYPE_ID | Retrieve by reading "Product ID" here

Refresh token
You will need to login to Amazon via a web browser to get your refresh token.
To enable this, first go here and click on your product to set some security settings under Security Profile and, assuming you're running on localhost:8000, set the following:

setting | value
Allowed Origins | https://localhost:8000/refreshtoken/
Allowed Return URLs | https://localhost:8000/refreshtoken/callback/

Usage
Once you have all the settings configured:
Run django: ./manage.py runserver
Go to http://localhost:8000 and start talking to Alexa.
Customization
Wakeword
The default wakeword is "Alexa". You can change this by customizing the lifecycle's audio_detector_class:

# my_project/consumers.py
import alexa_browser_client
import command_lifecycle


class CustomAudioDetector(command_lifecycle.wakeword.SnowboyWakewordDetector):
    wakeword_library_import_path = 'dotted.import.path.to.wakeword.Detector'
    resource_file = b'path/to/resource_file.res'
    decoder_model = b'path/to/model_file.umdl'


class CustomAudioLifecycle(alexa_browser_client.AudioLifecycle):
    audio_detector_class = CustomAudioDetector


class CustomAlexaConsumer(alexa_browser_client.AlexaConsumer):
    audio_lifecycle_class = CustomAudioLifecycle

Then in your routing.py:

import alexa_browser_client.consumers
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.sessions import SessionMiddlewareStack
from django.conf.urls import url
application = ProtocolTypeRouter({
'websocket': SessionMiddlewareStack(
URLRouter([
url(r"^ws/$", alexa_browser_client.consumers.AlexaConsumer),
])
),
})

Versioning
We use SemVer for versioning. For the versions available, see the PyPI.
Other projects
This project uses Voice Command Lifecycle and Alexa Voice Service Client.
|
alexachatbot
|
Alexa Chatbot
General Information
Alexa chatbot is a simple Python package for creating chatbots in Python; it helps you to create a basic chatbot.
github
Installation
pip install alexachatbot
First you need to install alexa-chatbot from PyPI; it's open-source and completely free.
Features
Lightweight and fast
Supports multiple languages
Easy to use
Usage
Here is a simple example of how you can use alexa-chatbot:
reply() | function

from alexachatbot.alexaAI import reply

text = "How are you?"
res = reply(text)
print(res)
# do further things you want to do with res

Note: the text parameter is required and it should be a string.
It supports multiple languages, so you can pass any kind of message and you will get a good reply.
Another example of the reply function:

from alexachatbot.alexaAI import reply

text = "こん に ち わ"
res = reply(text)
print(res)
# same as above, do further things with res

replylang() | function

from alexachatbot.alexaAI import replylang

text = "what are you doing right now?"
lang = 'en'
res = replylang(text, lang)
print(res)
# same as above, do further things with res

Note: the text and lang parameters are both required and should be strings.
It supports multiple languages, so you can pass any kind of message and you will get a good reply.
Note: Alexa chatbot uses an API; sometimes the API goes down, so don't worry and wait a little bit.
License
Alexa chatbot is released under the MIT License; see License for more details.
Contact
Sololearn
Discord server
Telegram
youtube
git-hub
Gmail
|
alexa-client
|
Alexa Voice Service Client
Python Client for Alexa Voice Service (AVS)
Installation
pip install alexa_client
or if you want to run the demos:
pip install alexa_client[demo]
Usage
File audio

from alexa_client import AlexaClient

client = AlexaClient(
    client_id='my-client-id',
    secret='my-secret',
    refresh_token='my-refresh-token',
)
client.connect()  # authenticate and other handshaking steps
with open('./tests/resources/alexa_what_time_is_it.wav', 'rb') as f:
    for i, directive in enumerate(client.send_audio_file(f)):
        if directive.name in ['Speak', 'Play']:
            with open(f'./output_{i}.mp3', 'wb') as f:
                f.write(directive.audio_attachment)

Now listen to output_0.mp3 and Alexa should tell you the time.
Microphone audio

import io

from alexa_client import AlexaClient
import pyaudio


def callback(in_data, frame_count, time_info, status):
    buffer.write(in_data)
    return (in_data, pyaudio.paContinue)


p = pyaudio.PyAudio()
stream = p.open(
    format=pyaudio.paInt16,
    channels=1,
    rate=16000,
    input=True,
    stream_callback=callback,
)

client = AlexaClient(
    client_id='my-client-id',
    secret='my-secret',
    refresh_token='my-refresh-token',
)

buffer = io.BytesIO()
try:
    stream.start_stream()
    print('listening. Press CTRL + C to exit.')
    client.connect()
    for i, directive in enumerate(client.send_audio_file(buffer)):
        if directive.name in ['Speak', 'Play']:
            with open(f'./output_{i}.mp3', 'wb') as f:
                f.write(directive.audio_attachment)
finally:
    stream.stop_stream()
    stream.close()
    p.terminate()

Multi-step requests
An Alexa command may relate to a previous command, e.g.,
[you] "Alexa, play twenty questions"
[Alexa] "Is it an animal, mineral, or vegetable?"
[you] "Mineral"
[Alexa] "Is it valuable"
[you] "No"
[Alexa] "is it..."
This can be achieved by passing the same dialog request ID to multiple send_audio_file calls:

from alexa_client.alexa_client import helpers

dialog_request_id = helpers.generate_unique_id()
directives_one = client.send_audio_file(audio_one, dialog_request_id=dialog_request_id)
directives_two = client.send_audio_file(audio_two, dialog_request_id=dialog_request_id)
directives_three = client.send_audio_file(audio_three, dialog_request_id=dialog_request_id)

Run the streaming microphone audio demo to use this feature:

pip install alexa_client[demo]
python -m alexa_client.demo.streaming_microphone \
    --client-id="{enter-client-id-here}" \
    --client-secret="{enter-client-secret-here}" \
    --refresh-token="{enter-refresh-token-here}"

ASR Profiles
Automatic Speech Recognition (ASR) profiles are optimized for user speech from varying distances. By default CLOSE_TALK is used, but this can be specified:

from alexa_client import constants
client.send_audio_file(
audio_file=audio_file,
distance_profile=constants.NEAR_FIELD, # or constants.FAR_FIELD
)

Audio format
By default PCM audio format is assumed, but OPUS can be specified:

from alexa_client import constants
client.send_audio_file(
audio_file=audio_file,
audio_format=constants.OPUS,
)

When the PCM format is specified, the audio should be 16-bit Linear PCM (LPCM16), 16kHz sample rate, single-channel, and little endian.
When the OPUS format is specified, the audio should be 16-bit Opus, 16kHz sample rate, 32k bit rate, and little endian.
Base URL
base_url can be set to improve latency. Choose a region closest to your location.

from alexa_client.alexa_client import constants
client = AlexaClient(
client_id='my-client-id',
secret='my-secret',
refresh_token='my-refresh-token',
base_url=constants.BASE_URL_ASIA
)

The default base URL is Europe. The available constants are BASE_URL_EUROPE, BASE_URL_ASIA and BASE_URL_NORTH_AMERICA, but you can pass any string if required. Read more.
Authentication
To use AVS you must first have a developer account. Then register your product here. Choose "Application" under "Is your product an app or a device"?
The client requires your client_id, secret and refresh_token:

client kwarg | Notes
client_id | Retrieve by clicking on your product listed here
secret | Retrieve by clicking on your product listed here
refresh_token | You must generate this. See below.

Refresh token
You will need to login to Amazon via a web browser to get your refresh token.
To enable this, first go here and click on your product to set some security settings under Security Profile:

setting | value
Allowed Origins | http://localhost:9000
Allowed Return URLs | http://localhost:9000/callback/

Note what you entered for Product ID under Product Information, as this will be used as the device-type-id (case sensitive!)
Then run:

python -m alexa_client.refreshtoken.serve \
    --device-type-id="{enter-device-type-id-here}" \
    --client-id="{enter-client-id-here}" \
    --client-secret="{enter-client-secret-here}"
On completion Amazon will return yourrefresh_token- which you will require tosend audioorrecorded voice.Steaming audio to AVSAlexaClient.send_audio_filestreaming uploads a file-like object to AVS for great latency. The file-like object can be an actual file on your filesystem, an in-memory BytesIo buffer containing audio from your microphone, or even audio streaming fromyour browser over a websocket in real-time.Persistent AVS connectionCallingAlexaClient.connectcreates a persistent connection to AVS. A thread runs that pings AVS after 4 minutes of no request being made to AVS. This prevents the connection getting forcefully closed due to inactivity.Unit testTo run the unit tests, call the following commands:[email protected]:richtier/alexa-voice-service-client.git
Unit test
To run the unit tests, call the following commands:

git clone git@github.com:richtier/alexa-voice-service-client.git
make test_requirements
pytest

Other projects
This library is used by alexa-browser-client, which allows you to talk to Alexa from your browser.
|
alexafsm
|
alexafsm
Finite-state machine library for building complex Alexa conversations.
Free software: Apache Software License 2.0.
Dialog agents need to keep track of the various pieces of information to
make decisions how to respond to a given user input. This is referred to
as context, session, or state tracking. As the dialog complexity
increases, this state-tracking logic becomes harder to write, debug, and
maintain. This library takes the finite-state machine design approach to
address this complexity. Developers using this library can model dialog
agents with first-class concepts such as states, attributes, transition,
and actions. Visualization and other tools are also provided to help
understand and debug complex FSM conversations. Also check out our blog post.
Features
FSM-based library for building Alexa skills with complex dialog state tracking.
Tools to validate, visualize, and print the FSM graph.
Support analytics with VoiceLabs.
Can be paired with any Python server library (Flask, CherryPy, etc.)
Written in Python 3.6 (primarily for type annotation and string interpolation).
Getting Started
Install from PyPI:
pip install alexafsm
Consult the Alexa skill search skill in the tests directory for details of how to write an alexafsm skill. An Alexa skill is composed of the following three classes: SessionAttributes, States, and Policy.
SessionAttributes
SessionAttributes is a class that holds session attributes
(alexa_request['session']['attributes']) and any information we need to keep track of dialog state.
The core attributes are intent, slots, and state.
intent and slots map directly to Alexa's concepts.
slots should be of type Slots, which in turn is defined as a named tuple, one field for each slot type. In the skill search example, Slots = namedtuple('Slots', ['query', 'nth']). This named tuple class should be specified in the class definition as slots_cls = Slots.
state holds the name of the current state in the state machine.
Each Alexa skill can contain an arbitrary number of additional attributes. If an attribute is not meant to be sent back to the Alexa server (e.g. so as to reduce the payload size), it should be added to not_sent_fields. In the skill search example, searched and first_time are not sent to the Alexa server.
See the implementation of the skill search skill's SessionAttributes: https://github.com/allenai/alexafsm/blob/master/tests/skillsearch/session_attributes.py
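A minimal sketch of such a class, mirroring the skill-search example (the base-class import path below is an assumption; see the linked file for the real one):

from collections import namedtuple

from alexafsm.session_attributes import SessionAttributes as SessionAttributesBase  # assumed path

Slots = namedtuple('Slots', ['query', 'nth'])


class SkillSearchSessionAttributes(SessionAttributesBase):
    slots_cls = Slots
    # attributes that should not be echoed back to the Alexa server
    not_sent_fields = ['searched', 'first_time']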
States
States is a class that specifies most of the FSM and its behavior. It holds a reference to a SessionAttributes object, the type of which is specified by overriding the session_attributes_cls class attribute. The FSM is specified by a list of parameter-less methods. Consider the following method:

@with_transitions(
    {'trigger': NEW_SEARCH, 'source': '*', 'prepare': 'm_search',
     'conditions': 'm_has_result_and_query'},
    {'trigger': NTH_SKILL, 'source': '*', 'conditions': 'm_has_nth', 'after': 'm_set_nth'},
    {'trigger': PREVIOUS_SKILL, 'source': '*', 'conditions': 'm_has_previous', 'after': 'm_set_previous'},
    {'trigger': NEXT_SKILL, 'source': '*', 'conditions': 'm_has_next', 'after': 'm_set_next'},
    {'trigger': amazon_intent.NO, 'source': 'has_result', 'conditions': 'm_has_next', 'after': 'm_set_next'}
)
def has_result(self) -> response.Response:
    """Offer a preview of a skill"""
    attributes = self.attributes
    query = attributes.query
    skill = attributes.skill
    asked_for_speech = ''
    if attributes.first_time_presenting_results:
        asked_for_speech = _you_asked_for(query)
    if attributes.number_of_hits == 1:
        skill_position_speech = 'The only skill I found is'
    else:
        skill_position_speech = f'The {ENGLISH_NUMBERS[attributes.skill_cursor]} skill is'
    if attributes.first_time_presenting_results:
        if attributes.number_of_hits > 6:
            num_hits = f'Here are the top {MAX_SKILLS} results.'
        else:
            num_hits = f'I found {len(attributes.skills)} skills.'
        skill_position_speech = f'{num_hits} {skill_position_speech}'
    return response.Response(
        speech=f"{asked_for_speech} "
               f"{skill_position_speech} {_get_verbal_skill(skill)}. "
               f"{HEAR_MORE}",
        card=f"Search for {query}",
        card_content=f"""
Top result: {skill.name}
{_get_highlights(skill)}
""",
        reprompt=DEFAULT_PROMPT)

Each method encodes the following:
The name of the method is also the name of a state (describing) in the FSM.
The method may be decorated with one or several transitions, using with_transitions decorators. Transitions can be inbound (source needs to be specified) or outbound (dest needs to be specified).
Each method returns a Response object which is sent to Alexa.
Transitions can be specified with prepare and conditions attributes. See https://github.com/tyarkoni/transitions for detailed documentation. The values of these attributes are parameter-less methods of the Policy class.
The prepare methods are responsible for "actions" of the FSM such as querying a database. The after methods are responsible for updating the state after the transition completes. They are the only methods responsible for side-effects, e.g. modifying the attributes of the states. This design facilitates ease of debugging.
reference to aStatesobject, the type of which is specified by
overriding thestates_clsclass attribute. APolicyobject
initializes itself by constructing a FSM based on theStatestype.Policyclass contains the following key methods:handletakes an Alexa request, parses it, and hands over all
intent requests toexecutemethod.executeupdates the policy’s internal state with the request’s
details (intent, slots, session attributes), then callstriggerto make the state transition. It then looks up the corresponding
response generating methods of theStatesclass to generate a
response for Alexa.initializewill initialize a policy without any request.validateperforms validation of a policy object based onPolicyclass definition and a intent schema json file. It looks
for intents that are not handled, invalid source/dest/prepare
specifications, and unreachable states. The test intest_skillsearch.pyperforms such validation as a test ofalexafsm.The Alexa skill search skill in thetestsdirectory also contains a
Flask-based server that shows how to usePolicyin five lines of
code:@app.route('/',methods=['POST'])defmain():req=flask_request.jsonpolicy=Policy.initialize()returnjson.dumps(policy.handle(req,settings.vi)).encode('utf-8')Other Toolsalexafsmsupports validation, graph visualization, and printing of
Validation
Simply initialize a Policy before calling validate. This function takes as input the path to the skill's Alexa intent schema json file and performs the following checks:
All Alexa intents have corresponding events/triggers in the FSM.
All states have either inbound or outbound transitions.
All transitions are specified with valid source and destination states.
All conditions and prepare actions are handled with methods in the Policy class.
A short sketch follows.
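For instance (a minimal sketch; the schema path is a placeholder):

policy = Policy.initialize()
# flags unhandled intents, invalid source/dest/prepare specs, and unreachable states
policy.validate('path/to/intent_schema.json')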
Change Detection with Record and Playback
When making code changes that are not supposed to impact a skill's dialog logic, we may want a tool to check that the skill's logic indeed stays the same. This is done by first recording (SkillSettings().record = True) one or several sessions, making the code change, then checking if the changed code still produces the same set of dialogs (SkillSettings().playback = True). During playback, calls to databases such as ElasticSearch can be fulfilled from data read from files generated during the recording. This is done by decorating the database call with the recordable function. See the ElasticSearch call in Skill Search for an example usage.
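A hedged sketch of the record/playback flow described above (SkillSettings' import path and recordable's exact usage are assumptions; consult the skill-search code for the real ones):

# first run: record real sessions and external calls
SkillSettings().record = True

# after the code change: replay and compare dialogs
SkillSettings().playback = True

# database calls are made replayable by decorating them, e.g.:
@recordable
def query_elasticsearch(query):  # hypothetical database call
    ...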
Graph Visualization
alexafsm uses the transitions library's API to draw the FSM graph. For example, the skill search skill's FSM can be visualized using graph.py, invoked from graph.sh. The resulting graph is displayed below:
[FSM Example graph]
Graph Printout
For complex graphs, it may be easier to inspect the FSM in text format.
Use the print_machine method to accomplish this. The output for the skill search skill is below:

Machine states:
bad_navigate, describe_ratings, describing, exiting, has_result, helping, initial, is_that_all, no_query_search, no_result, search_prompt
Events and transitions:
Event: NthSkill
Source: bad_navigate
bad_navigate -> bad_navigate, conditions: ['m_has_nth']
bad_navigate -> has_result, conditions: ['m_has_nth']
Source: describe_ratings
describe_ratings -> bad_navigate, conditions: ['m_has_nth']
describe_ratings -> has_result, conditions: ['m_has_nth']
Source: describing
describing -> bad_navigate, conditions: ['m_has_nth']
describing -> has_result, conditions: ['m_has_nth']
Source: exiting
exiting -> bad_navigate, conditions: ['m_has_nth']
exiting -> has_result, conditions: ['m_has_nth']
Source: has_result
has_result -> bad_navigate, conditions: ['m_has_nth']
has_result -> has_result, conditions: ['m_has_nth']
Source: helping
helping -> bad_navigate, conditions: ['m_has_nth']
helping -> has_result, conditions: ['m_has_nth']
Source: initial
initial -> bad_navigate, conditions: ['m_has_nth']
initial -> has_result, conditions: ['m_has_nth']
Source: is_that_all
is_that_all -> bad_navigate, conditions: ['m_has_nth']
is_that_all -> has_result, conditions: ['m_has_nth']
Source: no_query_search
no_query_search -> bad_navigate, conditions: ['m_has_nth']
no_query_search -> has_result, conditions: ['m_has_nth']
Source: no_result
no_result -> bad_navigate, conditions: ['m_has_nth']
no_result -> has_result, conditions: ['m_has_nth']
Source: search_prompt
search_prompt -> bad_navigate, conditions: ['m_has_nth']
search_prompt -> has_result, conditions: ['m_has_nth']
Event: PreviousSkill
Source: bad_navigate
bad_navigate -> bad_navigate, conditions: ['m_has_previous']
bad_navigate -> has_result, conditions: ['m_has_previous']
Source: describe_ratings
describe_ratings -> bad_navigate, conditions: ['m_has_previous']
describe_ratings -> has_result, conditions: ['m_has_previous']
Source: describing
describing -> bad_navigate, conditions: ['m_has_previous']
describing -> has_result, conditions: ['m_has_previous']
Source: exiting
exiting -> bad_navigate, conditions: ['m_has_previous']
exiting -> has_result, conditions: ['m_has_previous']
Source: has_result
has_result -> bad_navigate, conditions: ['m_has_previous']
has_result -> has_result, conditions: ['m_has_previous']
Source: helping
helping -> bad_navigate, conditions: ['m_has_previous']
helping -> has_result, conditions: ['m_has_previous']
Source: initial
initial -> bad_navigate, conditions: ['m_has_previous']
initial -> has_result, conditions: ['m_has_previous']
Source: is_that_all
is_that_all -> bad_navigate, conditions: ['m_has_previous']
is_that_all -> has_result, conditions: ['m_has_previous']
Source: no_query_search
no_query_search -> bad_navigate, conditions: ['m_has_previous']
no_query_search -> has_result, conditions: ['m_has_previous']
Source: no_result
no_result -> bad_navigate, conditions: ['m_has_previous']
no_result -> has_result, conditions: ['m_has_previous']
Source: search_prompt
search_prompt -> bad_navigate, conditions: ['m_has_previous']
search_prompt -> has_result, conditions: ['m_has_previous']
Event: NextSkill
Source: bad_navigate
bad_navigate -> bad_navigate, conditions: ['m_has_next']
bad_navigate -> has_result, conditions: ['m_has_next']
Source: describe_ratings
describe_ratings -> bad_navigate, conditions: ['m_has_next']
describe_ratings -> has_result, conditions: ['m_has_next']
Source: describing
describing -> bad_navigate, conditions: ['m_has_next']
describing -> has_result, conditions: ['m_has_next']
Source: exiting
exiting -> bad_navigate, conditions: ['m_has_next']
exiting -> has_result, conditions: ['m_has_next']
Source: has_result
has_result -> bad_navigate, conditions: ['m_has_next']
has_result -> has_result, conditions: ['m_has_next']
Source: helping
helping -> bad_navigate, conditions: ['m_has_next']
helping -> has_result, conditions: ['m_has_next']
Source: initial
initial -> bad_navigate, conditions: ['m_has_next']
initial -> has_result, conditions: ['m_has_next']
Source: is_that_all
is_that_all -> bad_navigate, conditions: ['m_has_next']
is_that_all -> has_result, conditions: ['m_has_next']
Source: no_query_search
no_query_search -> bad_navigate, conditions: ['m_has_next']
no_query_search -> has_result, conditions: ['m_has_next']
Source: no_result
no_result -> bad_navigate, conditions: ['m_has_next']
no_result -> has_result, conditions: ['m_has_next']
Source: search_prompt
search_prompt -> bad_navigate, conditions: ['m_has_next']
search_prompt -> has_result, conditions: ['m_has_next']
Event: AMAZON.NoIntent
Source: has_result
has_result -> bad_navigate, conditions: ['m_has_next']
has_result -> has_result, conditions: ['m_has_next']
Source: describe_ratings
describe_ratings -> is_that_all
Source: describing
describing -> search_prompt
Source: is_that_all
is_that_all -> search_prompt
Event: DescribeRatings
Source: bad_navigate
bad_navigate -> describe_ratings, conditions: ['m_has_result']
Source: describe_ratings
describe_ratings -> describe_ratings, conditions: ['m_has_result']
Source: describing
describing -> describe_ratings, conditions: ['m_has_result']
Source: exiting
exiting -> describe_ratings, conditions: ['m_has_result']
Source: has_result
has_result -> describe_ratings, conditions: ['m_has_result']
Source: helping
helping -> describe_ratings, conditions: ['m_has_result']
Source: initial
initial -> describe_ratings, conditions: ['m_has_result']
Source: is_that_all
is_that_all -> describe_ratings, conditions: ['m_has_result']
Source: no_query_search
no_query_search -> describe_ratings, conditions: ['m_has_result']
Source: no_result
no_result -> describe_ratings, conditions: ['m_has_result']
Source: search_prompt
search_prompt -> describe_ratings, conditions: ['m_has_result']
Event: AMAZON.YesIntent
Source: has_result
has_result -> describing
Source: describe_ratings
describe_ratings -> describing
Source: describing
describing -> exiting
Source: is_that_all
is_that_all -> exiting
Event: AMAZON.CancelIntent
Source: no_result
no_result -> exiting
Source: search_prompt
search_prompt -> exiting
Source: is_that_all
is_that_all -> exiting
Source: bad_navigate
bad_navigate -> exiting
Source: no_query_search
no_query_search -> exiting
Source: describing
describing -> is_that_all
Source: has_result
has_result -> is_that_all
Source: describe_ratings
describe_ratings -> is_that_all
Source: initial
initial -> search_prompt
Source: helping
helping -> search_prompt
Event: AMAZON.StopIntent
Source: no_result
no_result -> exiting
Source: search_prompt
search_prompt -> exiting
Source: is_that_all
is_that_all -> exiting
Source: bad_navigate
bad_navigate -> exiting
Source: no_query_search
no_query_search -> exiting
Source: describing
describing -> is_that_all
Source: has_result
has_result -> is_that_all
Source: describe_ratings
describe_ratings -> is_that_all
Source: initial
initial -> search_prompt
Source: helping
helping -> search_prompt
Event: NewSearch
Source: bad_navigate
bad_navigate -> exiting, conditions: ['m_searching_for_exit']
bad_navigate -> has_result, prepare: ['m_search'], conditions: ['m_has_result_and_query']
bad_navigate -> no_query_search, conditions: ['m_no_query_search']
bad_navigate -> no_result, prepare: ['m_search'], conditions: ['m_no_result']
Source: describe_ratings
describe_ratings -> exiting, conditions: ['m_searching_for_exit']
describe_ratings -> has_result, prepare: ['m_search'], conditions: ['m_has_result_and_query']
describe_ratings -> no_query_search, conditions: ['m_no_query_search']
describe_ratings -> no_result, prepare: ['m_search'], conditions: ['m_no_result']
Source: describing
describing -> exiting, conditions: ['m_searching_for_exit']
describing -> has_result, prepare: ['m_search'], conditions: ['m_has_result_and_query']
describing -> no_query_search, conditions: ['m_no_query_search']
describing -> no_result, prepare: ['m_search'], conditions: ['m_no_result']
Source: exiting
exiting -> exiting, conditions: ['m_searching_for_exit']
exiting -> has_result, prepare: ['m_search'], conditions: ['m_has_result_and_query']
exiting -> no_query_search, conditions: ['m_no_query_search']
exiting -> no_result, prepare: ['m_search'], conditions: ['m_no_result']
Source: has_result
has_result -> exiting, conditions: ['m_searching_for_exit']
has_result -> has_result, prepare: ['m_search'], conditions: ['m_has_result_and_query']
has_result -> no_query_search, conditions: ['m_no_query_search']
has_result -> no_result, prepare: ['m_search'], conditions: ['m_no_result']
Source: helping
helping -> exiting, conditions: ['m_searching_for_exit']
helping -> has_result, prepare: ['m_search'], conditions: ['m_has_result_and_query']
helping -> no_query_search, conditions: ['m_no_query_search']
helping -> no_result, prepare: ['m_search'], conditions: ['m_no_result']
Source: initial
initial -> exiting, conditions: ['m_searching_for_exit']
initial -> has_result, prepare: ['m_search'], conditions: ['m_has_result_and_query']
initial -> no_query_search, conditions: ['m_no_query_search']
initial -> no_result, prepare: ['m_search'], conditions: ['m_no_result']
Source: is_that_all
is_that_all -> exiting, conditions: ['m_searching_for_exit']
is_that_all -> has_result, prepare: ['m_search'], conditions: ['m_has_result_and_query']
is_that_all -> no_query_search, conditions: ['m_no_query_search']
is_that_all -> no_result, prepare: ['m_search'], conditions: ['m_no_result']
Source: no_query_search
no_query_search -> exiting, conditions: ['m_searching_for_exit']
no_query_search -> has_result, prepare: ['m_search'], conditions: ['m_has_result_and_query']
no_query_search -> no_query_search, conditions: ['m_no_query_search']
no_query_search -> no_result, prepare: ['m_search'], conditions: ['m_no_result']
Source: no_result
no_result -> exiting, conditions: ['m_searching_for_exit']
no_result -> has_result, prepare: ['m_search'], conditions: ['m_has_result_and_query']
no_result -> no_query_search, conditions: ['m_no_query_search']
no_result -> no_result, prepare: ['m_search'], conditions: ['m_no_result']
Source: search_prompt
search_prompt -> exiting, conditions: ['m_searching_for_exit']
search_prompt -> has_result, prepare: ['m_search'], conditions: ['m_has_result_and_query']
search_prompt -> no_query_search, conditions: ['m_no_query_search']
search_prompt -> no_result, prepare: ['m_search'], conditions: ['m_no_result']
Event: AMAZON.HelpIntent
Source: bad_navigate
bad_navigate -> helping
Source: describe_ratings
describe_ratings -> helping
Source: describing
describing -> helping
Source: exiting
exiting -> helping
Source: has_result
has_result -> helping
Source: helping
helping -> helping
Source: initial
initial -> helping
Source: is_that_all
is_that_all -> helping
Source: no_query_search
no_query_search -> helping
Source: no_result
no_result -> helping
Source: search_prompt
search_prompt -> helping
History
0.1.0 (2017-02-23)
First release on PyPI.
|
alexander
|
UNKNOWN
|
alexander-fw
|
No description available on PyPI.
|
alexander-shlyaev-package
|
redme.md La-la-la
|
alexandra
|
No description available on PyPI.
|
alexandra-ai
|
AlexandraAI
A Python package for Danish data science
Installation
To install the package simply write the following command in your favorite terminal:
pip install alexandra-ai
Quickstart
Benchmarking from the Command Line
The easiest way to benchmark pretrained models is via the command line interface. After
having installed the package, you can benchmark your favorite model like so:

evaluate --model-id <model_id> --task <task>

Here model_id is the HuggingFace model ID, which can be found on the HuggingFace Hub, and task is the task you want to benchmark the model on, such as "ner" for named entity recognition. See all options by typing

evaluate --help

The specific model version to use can also be added after the suffix '@':

evaluate --model_id <model_id>@<commit>

It can be a branch name, a tag name, or a commit id. It defaults to 'main' for latest.
Multiple models and tasks can be specified by just attaching multiple arguments. Here is an example with two models:

evaluate --model_id <model_id1> --model_id <model_id2> --task ner

See all the arguments and options available for the evaluate command by typing

evaluate --help

Benchmarking from a Script
In a script, the syntax is similar to the command line interface. You simply initialise an object of the Evaluator class, and call this evaluator object with your favorite models and/or datasets:

>>> from alexandra_ai import Evaluator
>>> evaluator = Evaluator()
>>> evaluator('<model_id>', '<task>')

Contributors
If you feel like this package is missing a crucial feature, if you encounter a bug or
if you just want to correct a typo in this readme file, then we urge you to join the
community! Have a look at the CONTRIBUTING.md file, where you can
check out all the ways you can contribute to this package. :sparkles:
Your name here? :tada:
Maintainers
The following are the core maintainers of the alexandra_ai package:
@saattrupdan (Dan Saattrup Nielsen; [email protected])
@AJDERS (Anders Jess Pedersen; [email protected])
The AlexandraAI ecosystem
This package is a wrapper around other AlexandraAI packages, each of which is standalone:
AlexandraAI-eval: Evaluation of finetuned models.
Project structure
.
├── .flake8
├── .github
│ └── workflows
│ ├── ci.yaml
│ └── docs.yaml
├── .gitignore
├── .pre-commit-config.yaml
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── gfx
├── makefile
├── notebooks
├── poetry.toml
├── pyproject.toml
├── src
│ ├── alexandra_ai
│ │ └── __init__.py
│ └── scripts
│ ├── fix_dot_env_file.py
│ └── versioning.py
└── tests
└── __init__.py
|
alexandra-ai-eval
|
AlexandraAI-eval
Evaluation of finetuned models
(pronounced as in "Aye aye captain")
Installation
To install the package simply write the following command in your favorite terminal:
pip install alexandra-ai-eval
Quickstart
Benchmarking from the Command Line
The easiest way to benchmark pretrained models is via the command line interface. After
having installed the package, you can benchmark your favorite model like so:

evaluate --model-id <model_id> --task <task>

Here model_id is the HuggingFace model ID, which can be found on the HuggingFace Hub, and task is the task you want to benchmark the model on, such as "ner" for named entity recognition. See all options by typing

evaluate --help

The specific model version to use can also be added after the suffix '@':

evaluate --model_id <model_id>@<commit>

It can be a branch name, a tag name, or a commit id. It defaults to 'main' for latest.
Multiple models and tasks can be specified by just attaching multiple arguments. Here is an example with two models:

evaluate --model_id <model_id1> --model_id <model_id2> --task ner

See all the arguments and options available for the evaluate command by typing

evaluate --help

Benchmarking from a Script
In a script, the syntax is similar to the command line interface. You simply initialise an object of the Evaluator class, and call this evaluator object with your favorite models and/or datasets:

>>> from alexandra_ai_eval import Evaluator
>>> evaluator = Evaluator()
>>> evaluator('<model_id>', '<task>')

Contributors
If you feel like this package is missing a crucial feature, if you encounter a bug or
if you just want to correct a typo in this readme file, then we urge you to join the
community! Have a look at the CONTRIBUTING.md file, where you can
check out all the ways you can contribute to this package. :sparkles:
Your name here? :tada:
Maintainers
The following are the core maintainers of the alexandra_ai_eval package:
@saattrupdan (Dan Saattrup Nielsen; [email protected])
@AJDERS (Anders Jess Pedersen; [email protected])
Project structure
.
├── .flake8
├── .github
│ └── workflows
│ ├── ci.yaml
│ └── docs.yaml
├── .gitignore
├── .pre-commit-config.yaml
├── LICENSE
├── README.md
├── gfx
│ └── alexandra-ai-eval-logo.png
├── makefile
├── models
├── notebooks
├── poetry.toml
├── pyproject.toml
├── src
│ ├── alexandra_ai_eval
│ │ ├── __init__.py
│ │ ├── automatic_speech_recognition.py
│ │ ├── cli.py
│ │ ├── co2.py
│ │ ├── config.py
│ │ ├── country_codes.py
│ │ ├── evaluator.py
│ │ ├── exceptions.py
│ │ ├── hf_hub.py
│ │ ├── image_to_text.py
│ │ ├── named_entity_recognition.py
│ │ ├── question_answering.py
│ │ ├── scoring.py
│ │ ├── task.py
│ │ ├── task_configs.py
│ │ ├── task_factory.py
│ │ ├── text_classification.py
│ │ └── utils.py
│ └── scripts
│ ├── fix_dot_env_file.py
│ └── versioning.py
└── tests
├── __init__.py
├── conftest.py
├── test_cli.py
├── test_co2.py
├── test_config.py
├── test_country_codes.py
├── test_evaluator.py
├── test_exceptions.py
├── test_hf_hub.py
├── test_image_to_text.py
├── test_named_entity_recognition.py
├── test_question_answering.py
├── test_scoring.py
├── test_task.py
├── test_task_configs.py
├── test_task_factory.py
├── test_text_classification.py
└── test_utils.py
|
alexandreleclercq-picsou
|
No description available on PyPI.
|
alexandria
|
Alexandria is a DNS management solution that allows you to easily manage your zone files using a simple-to-use web interface.
Installation
Wait for this package to be available on PyPI…
Developing
Getting the application set up locally:
Change directory to this project
Create virtualenv for Project in $venv
$venv/bin/pip install -e .
$venv/bin/initialize_alexandria_db config/development.ini
$venv/bin/pserve config/development.ini
For submitting patches back to the project:
Create a new branch for the feature/topic from master
Hack away
Submit a pull request
0.0
Initial version
|
alexandria3k
|
Alexandria3k
The alexandria3k package supplies a library and a command-line tool
providing efficient relational query access to diverse publication open
data sets.
The largest one is the entire Crossref data set (157 GB compressed, 1 TB uncompressed).
This contains publication metadata from about 134 million publications from
all major international publishers with full citation data for 60 million
of them.
In addition,
the Crossref data set can be linked with
the ORCID summary data set (25 GB compressed, 435 GB uncompressed),
containing about 78 million author records,
the United States Patent Office issued patents (11 GB compressed, 115 GB uncompressed),
containing about 5.4 million records,
as well as
data sets of
funder bodies,
journal names,
open access journals,
and research organizations.
The alexandria3k package installation contains all elements required
to run it.
It does not require the installation, configuration, and maintenance
of a third party relational or graph database.
It can therefore be used out-of-the-box for performing reproducible
publication research on the desktop.
Documentation
The complete reference and use documentation for alexandria3k can be found here.
Publication
Details about the rationale, design, implementation, and use of this software can be found in the following paper.
Diomidis Spinellis. Open reproducible scientometric research with Alexandria3k. PLoS ONE 18(11): e0294946. November 2023. doi: 10.1371/journal.pone.0294946
|
alexandria-python
|
Alexandria
Alexandria is a Python package for Bayesian time-series econometrics applications. This is the first official release of the software. For its first release, Alexandria includes only the most basic model: the linear regression. However, it proposes a wide range of Bayesian linear regressions:
maximum likelihood / OLS regression (non-Bayesian)
simple Bayesian regression
hierarchical (natural conjugate) Bayesian regression
independent Bayesian regression with Gibbs sampling
heteroscedastic Bayesian regression
autocorrelated Bayesian regression
Alexandria is user-friendly and can be used from a simple Graphical User Interface (GUI). More experienced users can also run the models directly from the Python console by using the model classes and methods.
===============================
Installing Alexandria
Alexandria can be installed from pip:
pip install alexandria-python
A local installation can also be obtained by copy-pasting the folder containing the toolbox programmes. The folder can be downloaded from the project website or Github repo:
https://alexandria-toolbox.github.io
https://github.com/alexandria-toolbox
===============================
Getting started
Simple Python example:

# imports
from alexandria.linear_regression import IndependentBayesianRegression
from alexandria.datasets import data_sets as ds
import numpy as np
# load Taylor dataset, split as train/test
taylor_data = ds.load_taylor()
y_train, X_train = taylor_data[:198,0], taylor_data[:198,1:]
y_test, X_test = taylor_data[198:,0], taylor_data[198:,1:]
# set prior mean and prior variance for the model
b = np.array([1.5, 0.5])
b_const = 1
V = np.array([0.01, 0.0025])
V_const = 0.01
# create and train regression
br = IndependentBayesianRegression(endogenous=y_train, exogenous=X_train,
constant=True, b_exogenous=b, V_exogenous=V, b_constant=b_const, V_constant=V_const)
br.estimate()
# get predictions on test sample, run forecast evaluation, display log score
estimates_forecasts = br.forecast(X_test, 0.95)
br.forecast_evaluation(y_test)
print('log score on test sample : ' + str(round(br.forecast_evaluation_criteria['log_score'], 2)))

===============================
Documentation
Complete manuals and user guides can be found on the project website and Github repo:
https://alexandria-toolbox.github.io
https://github.com/alexandria-toolbox
===============================
Contact
alexandria.toolbox@gmail.com
|
alex-app-auto-test
|
An app auto test
|
alexa-reply
|
alexa-reply : 0.1.2
About: An AI python package to respond to any message suitably.
PyPI repo: https://pypi.org/project/alexa-reply/
Installation:
# from github : unstable
pip install git+https://pypi.org/project/alexa-reply/
# from pypi : stable
pip install alexa-reply
Example Usage:

from alexa_reply import reply

owner = "your name"
bot = "bot's name"
message = "the message u wanna reply to"
resp = reply(message, bot, owner)
print(resp)
|
alexa-responses
|
alexa-responses
Alexa responses python module
|
alexa-siterank
|
SiteRank-Alexa
This is a vanilla Python library for gathering data about website ranks from Alexa!
It is ultra customizable.
Installation
You can use pip install alexa-siterank or pip install git+https://github.com/mytja/SiteRank-Alexa
⚡️ Wanna try out the new OTR function! It is up to 5 times faster than the current functions ⚡️
Wanna try out the new asynchronous function! It requires httpx.
Usage
Get PageRank

from alexa_siterank import *
print(getRank("google.com"))

Output:
{"rank":{"global":1,"us":1},"rating":false}
Get Top keywords

from alexa_siterank import *
print(getTopKeywords("google.com"))

Output:
{"titles":["keyword","metric_one","metric_two"],"google.com":[
[{"title":"keyword","value":"gmail"},{"title":"metric_one","value":"5.11%"},{"title":"metric_two","value":"83.27%"}],
[{"title":"keyword","value":"google translate"},{"title":"metric_one","value":"3.84%"},{"title":"metric_two","value":"59.46%"}],
[{"title":"keyword","value":"google maps"},{"title":"metric_one","value":"1.93%"},{"title":"metric_two","value":"55.67%"}],
[{"title":"keyword","value":"translate"},{"title":"metric_one","value":"1.72%"},{"title":"metric_two","value":"51.89%"}],
...
Get visitors

from alexa_siterank import *
print(getVisitors("google.com"))

Output:
[{"pageviews_per_user":"25.22","code":"US","visitors_percent":"19.5","name":"United States","pageviews_percent":"27.7"},
{"pageviews_per_user":"28.07","code":"IN","visitors_percent":"10.4","name":"India","pageviews_percent":"16.5"},
{"pageviews_per_user":"26.3","code":"JP","visitors_percent":"5.2","name":"Japan","pageviews_percent":"7.8"}]
Get competitors

from alexa_siterank import *
print(getCompetitors("google.com"))

Output:
{"site":"google.com","competitors":["youtube.com","wikipedia.org","facebook.com","vk.com"]}
Get SiteRank 3 month history

from alexa_siterank import *
print(getRankHistory("google.com"))

Output:
{"3mrank":{"20201116":"1","20201117":"1","20201118":"1","20201119":"1","20201120":"1","20201121":"1","20201122":"1","20201123":"1","20201124":"1","20201125":"1","20201126":"1",...}}
Disclaimer
Developer
The developer(s) of this project aren't responsible for any code usage in non-intended ways!
Community / People using this project
By downloading any modified and/or original code from this repository, from any source, you agree that you will use it only for non-production, non-commercial, private uses and for educational purposes, and within monthly limits! If not, you are responsible for non-legal usage of this project!
If you want to use the commercial version, then you have to get an API token from AWIS.
This project
This project can only be used for private, non-commercial, non-production uses, and for educational purposes.
|
alexa-skill
|
alexa-skill
alexa-skill is a flexible, easy to use and extend package for creating Alexa skill applications.
This package is based on the alexa documentation.
Installing
Install and update using pip:
pip install -U alexa-skill
Examples
Define intent class

from alexa_skill.intents import BaseIntents


class ExampleIntents(BaseIntents):
    @property
    def mapper(self):
        return {
            'EXAMPLE.hello': self.hello,
        }

    def hello(self):
        return self.response('Hello. Nice to meet you.'), True

Define intent class with slots

from alexa_skill import dates
from alexa_skill.intents import BaseIntents


class DateIntents(BaseIntents):
    @property
    def mapper(self):
        return {
            'EXAMPLE.date_intent': self.date_intent,
        }

    def date_intent(self, slots):
        date, date_type = dates.AmazonDateParser.to_date(slots['dateslot']['value'])
        text = "Your date is <say-as interpret-as='date'>{}</say-as> and it is a {}".format(
            date.strftime('%Y%m%d'), date_type)
        return self.response(text), True

Define buildin intents

from alexa_skill.intents import BuildInIntents

buildin_intents = BuildInIntents(
    help_message='Say "HI" to us',
    not_handled_message="Sorry, I don't understand you. Could you repeat?",
    stop_message='stop',
    cancel_message='cancel',
)

Falcon
Initiate intents in fulfiller webhook for Alexa

import logging

import alexa_skill
import falcon


class Fulfiller(object):
    def on_post(self, req, resp):
        get_response = alexa_skill.Processor(
            req.media,
            buildin_intents,
            'Welcome to Alexa skill bot',
            'Good bye',
            ExampleIntents(),  # Insert created Intents as arguments
            DateIntents(),
        )
        json_response, handled = get_response()
        logging.info('Response was handled by system: {}'.format(handled))
        resp.media = json_response


app = falcon.API(media_type=falcon.MEDIA_JSON)
app.add_route('/v1/alexa/fulfiller', Fulfiller())

Flask

import logging

import alexa_skill
from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route("/v1/alexa/fulfiller", methods=['POST'])
def fulfiller():
    get_response = alexa_skill.Processor(
        request.json,
        buildin_intents,
        'Welcome to Alexa skill bot',
        'Good bye',
        ExampleIntents(),
        DateIntents(),
    )
    json_response, handled = get_response()
    logging.info('Response was handled by system: {}'.format(handled))
    return jsonify(json_response)

Documentation
Auto generate documentation:

cd docs/
sphinx-apidoc -o ./source/_modules/ ../alexa_skill/
make html
|
alexa-skill-kit
|
Helps you create an Alexa skill super easily in Python, in about 20 lines of code, promise!
Please go to the repo for more info on getting started: https://github.com/KNNCreative/alexa-skill-kit
|
alexa-skills
|
Alexa Skills Python Package
README TODO
|
alex_asr
|
UNKNOWN
|
alexa-teacher-models
|
Alexa Teacher Models
This is the official Alexa Teacher Model program github page.
AlexaTM 20B
AlexaTM 20B is a 20B-parameter sequence-to-sequence transformer model created by the Alexa Teacher Model (AlexaTM) team at Amazon. The model was trained on a mixture of Common Crawl (mC4) and Wikipedia data across 12 languages using denoising and Causal Language Modeling (CLM) tasks.
AlexaTM 20B can be used for in-context learning. "In-context learning," also known as "prompting," refers to a method for using NLP models in which no fine tuning is required per task. Training examples are provided to the model only as part of the prompt given as inference input, a paradigm known as "few-shot in-context learning." In some cases, the model can perform well without any training data at all, a paradigm known as "zero-shot in-context learning."
To learn more about the model, please read the Amazon Science blog post and the paper.
The model is currently available for noncommercial use via SageMaker JumpStart, as described in our AWS blog post. The model can be accessed using the following steps:
Create an AWS account if needed.
In your AWS account, search for SageMaker in the search bar and click on it.
Once in the SageMaker experience, create a domain and a studio user if none yet exist. All of the default settings can be used.
In the control panel, click Launch app next to the user you wish to use. Launch a studio instance.
Once in the studio, there will be a launcher showing JumpStart as one of the tiles. Click Go to SageMaker JumpStart. Alternatively, JumpStart can be accessed via the 3-pointed orange symbol on the far left of the studio.
Once in JumpStart, click the Notebooks button.
Browse or search for our example notebook entitled In-context learning with AlexaTM 20B.
There will be a button at the top to copy the read-only version into your studio.
Ensure that your kernel has started, and run the notebook.
Note: You can also find our example notebook here
Load the Model and Run Inference

from alexa_teacher_models import AlexaTMTokenizerFast
tokenizer = AlexaTMTokenizerFast.from_pretrained('/path/to/AlexaTM-20B-pr/')

# Load the model
from alexa_teacher_models import AlexaTMSeq2SeqForConditionalGeneration
model = AlexaTMSeq2SeqForConditionalGeneration.from_pretrained('/path/to/AlexaTM-20B-pr/')

You can also use the AutoTokenizer and AutoModelForSeq2SeqLM as you would in any other HuggingFace Transformer
program by importing alexa_teacher_models:

import alexa_teacher_models
...
tokenizer = AutoTokenizer.from_pretrained('/path/to/AlexaTM-20B-pr/')
model = AutoModelForSeq2SeqLM.from_pretrained('/path/to/AlexaTM-20B-pr/')

Load the model on 4 GPUs:

model.bfloat16()
model.parallelize(4)

Run the model in CLM mode:

# qa
test = """[CLM] Question: Who is the vocalist of coldplay? Answer:"""
print('Input:', test)
encoded = tokenizer(test, return_tensors="pt").to('cuda:0')
generated_tokens = model.generate(input_ids=encoded['input_ids'],
                                  max_length=32,
                                  num_beams=1,
                                  num_return_sequences=1,
                                  early_stopping=True)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]

Run the model in denoising mode:

# denoising
test = "we went to which is the capital of France"
print('Input:', test)
encoded = tokenizer(test, return_tensors="pt").to('cuda:0')
generated_tokens = model.generate(input_ids=encoded['input_ids'],
                                  max_length=32,
                                  num_beams=5,
                                  num_return_sequences=5,
                                  early_stopping=True)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)

Running the repl example
A sample Read Execute Print Loop (REPL) program is provided in the samples. It can be used to interact with
any AlexaTM model, and has a flexible set of command line arguments, including support for sampling and using multiple turns of history as context:

$ pip install alexa_teacher_models[repl]
$ python -m alexa_teacher_models.scripts.repl --model /path/to/AlexaTM-20B-pr/ --max_length 64
$ python -m alexa_teacher_models.scripts.repl --model /path/to/AlexaTM-20B-pr/ --max_length 64 --do_sample --max_history 3 --join_string " </s> "

Fine-tuning with DeepSpeed on a single P4
Note: We strongly recommend training on multiple instances. For information on how to do this, see the section below.
To run on a single P4 (8 GPUs), you will need to use CPU offload. A deepspeed config is provided in the scripts/deepspeed directory.
Assuming you have a training and validation JSONL formatted file, a run would look like this:

$ pip install alexa_teacher_models[ft]
$ deepspeed --num_gpus 8 --module alexa_teacher_models.scripts.finetune --per_device_train_batch_size $BS \
--deepspeed deepspeed/zero3-offload.json \
--model_name_or_path /home/ubuntu/AlexaTM/ --max_length 512 --bf16 --output_dir output \
--max_target_length 64 --do_train --learning_rate 1e-7 \
--train_file train.json --validation_file valid.json \
--num_train_epochs 1 --save_steps 1000

Fine-tuning with DeepSpeed on multiple machines
There is a detailed tutorial demonstrating how to fine-tune 20B across multiple machines in EC2 using Elastic Fabric Adapter (EFA).
Citation
If you use AlexaTM 20B, please use the following BibTeX entry:

@article{soltan2022alexatm,
title={AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2seq Model},
author={Saleh Soltan, Shankar Ananthakrishnan, Jack FitzGerald, Rahul Gupta, Wael Hamza, Haidar Khan, Charith Peris, Stephen Rawls, Andy Rosenbaum, Anna Rumshisky, Chandana Satya Prakash, Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gokhan Tur, Prem Natarajan},
year={2022}
}

Security
See CONTRIBUTING for more information.
License
The code in this package is subject to the License. However, the model weights are subject to the Model License.
|
alexautils
|
AlexaUtils
AlexaUtils is a PyPI package with utility classes and functions to supplement the Python AWS SDK for Alexa skill development.
Installation
Python 3.6+
ask-sdk-core == 1.11.0
To install Alexa Utils, use pip:
pip install alexautils
Imports
By default, the following classes and functions are imported:
Classes
SlotUtils
Pauser
Functions
logger, log_func_names, log_all
linear_nlg
The ssml_tags module is not imported by default.
Per convention, this module should be imported separately as ssml:

>>> import alexautils.ssml_tags as ssml
>>> ssml.MW_EXCITED.format("Hi!")

Contents
SlotUtils
Utility class with methods to retrieve slots from the user utterance.
get_slot_val_by_name(handler_input, slot_name: str) -> str: Returns the slot value for the slot_name name.
get_all_slot_values(handler_input) -> list: Returns all slot.values from the user utterance.
get_first_slot_value(handler_input) -> str: Returns the first slot value from captured values.
get_resolved_value(handler_input, slot_name: str) -> str: Returns the resolved value for the slot.
Pauser
Utility class to create pauses in the speech response (a usage sketch follows the linear_nlg table below).
get_pause(pause_length: float = 1) -> str: Returns pause speech for the passed length.
get_p_for_msg_len(message: str) -> str: Returns a pause with duration based on message length.
get_p_level(level: float) -> str: Returns a pause length dependent on the level passed. Random variation included for more fluid UX.

Standard level | Pause length (seconds)
1 | 0.35
2 | 0.70
3 | 1.05
4 | 1.40
5 | 1.75

make_ms_pause_level_list(*args) -> list: Returns a list of the arguments to be added to speech_list. Transforms all int/float args into p_levels then adds to the list.
linear_nlg
linear_nlg(tuple_message_clause: tuple, str_joiner: str = ' ') -> str
Returns a message constructed from the tuple message clause. Constructs the message with different methods per data type.

Data type | Method
Tuple/list | random.choice()
str | append
int | Pauser.get_p_level()
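A minimal usage sketch of the SlotUtils and Pauser helpers documented above (this assumes they are exposed as static/class helpers, as the self-less signatures suggest; 'city' is a hypothetical slot name and handler_input comes from the ask-sdk handler):

from alexautils import SlotUtils, Pauser

def build_speech(handler_input):
    city = SlotUtils.get_slot_val_by_name(handler_input, 'city')
    # insert a level-3 pause (1.05 s) between the two sentences
    return "Looking up " + (city or "your city") + "." + Pauser.get_p_level(3) + "One moment."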
linear_nlg is a naive natural language generation (NLG) method to interact with the user. The method transforms linearly connected sentence chunks (e.g., clauses, parts of speech, etc.) into speech responses.
Consider the following arbitrary noun phrase: "The red dog"
This phrase can be parsed into 3 separate chunks:
"The": determiner
"red": colour adjective
"dog": animal noun
In this example, the determiner, adjective, and noun have no effect on the meaning of the response.
We can use naive NLG to create an arbitrary noun phrase. This skill's NLG method would sample from the following three message tuples (MT).
A single item is sampled from each message tuple to create the noun phrase (DET, JJ, NN).

MT_DET = ("The", "A",)
MT_COLOUR_JJ = ("red", "blue", "yellow",)
MT_ANIMAL_NN = ("dog", "cat",)

This NLG method requires careful consideration of sentence structure and semantics to avoid unnatural responses.
However, successful implementation increases response variety multiplicatively.
The speech construction for the above noun phrase yields 12 response permutations.>>>test=[MT_DET,MT_COLOUR_JJ,MT_ANIMAL_NN]>>>naive_nlg(test)"The red dog">>>naive_nlg(test)"A yellow cat"Logsloggerlog_level set by Lambda environment variablelog_leveldef log_func_name(func, *args, **kwargs):Decorator to log.debug the function name.log_all(*args, log_level: int = 10) -> None:Logs all arguments at log_level keyword.ssml_tagsAlexa's voice user interface uses Speech Synthesis Markup Language to control the speech output. SSML reference availablehereSSML are implemented as individual text wrappers so that wrappers can be applied to separate phrases, e.g.:MW_EXCITED_MED.format("Oh No!")+"Don't throw that away, please."NOTE:
May like to implement a class with dictionary structure to access SSML levels.
Referencepyssmlv0.0.1 - initial commit
|
alexa-webcrawler
|
Alexa Webcrawler
A CLI tool to show the top websites in different countries from Alexa
|
alexbasiccalculator
|
This is a very simple calculator that takes in 2 numbers and either adds, subtracts, multiplies or divides them.
Change Log
0.0.1 (28/04/2021)
First release
|
alex-ber-utils
|
AlexBerUtils
AlexBerUtils is a collection of small utilities. See CHANGELOG.md for a detailed description.
Getting Help
QuickStart
python3 -m pip install -U alex-ber-utils
Installing from Github
python3 -m pip install -U https://github.com/alex-ber/AlexBerUtils/archive/master.zip
Optionally installing tests requirements:
python3 -m pip install -U https://github.com/alex-ber/AlexBerUtils/archive/master.zip#egg=alex-ber-utils[tests]
Or explicitly:
wget https://github.com/alex-ber/AlexBerUtils/archive/master.zip -O master.zip; unzip master.zip; rm master.zip
And then installing from source (see below).
Installing from source
python3 -m pip install .            # only installs "required"
python3 -m pip install .[tests]     # installs dependencies for tests
python3 -m pip install .[piptools]  # installs dependencies for pip-tools
python3 -m pip install .[md]        # installs multidispatcher (used in method_overloading_test.py)
python3 -m pip install .[fabric]    # installs fabric (used in fabs.py)
python3 -m pip install .[yml]       # installs Yml related dependencies
                                    # (used in ymlparsers.py, init_app_conf.py, deploys.py;
                                    # optionally used in ymlparsers_extra.py, emails.py)
python3 -m pip install .[env]       # installs pydotenv (optionally used in deploys.py and mains.py)
Alternatively you can install from a requirements file:
python3 -m pip install -r requirements.txt           # only installs "required"
python3 -m pip install -r requirements-tests.txt     # installs dependencies for tests
python3 -m pip install -r requirements-piptools.txt  # installs dependencies for pip-tools
python3 -m pip install -r requirements-md.txt        # installs multidispatcher (used in method_overloading_test.py)
python3 -m pip install -r requirements-fabric.txt    # installs fabric (used in fabs.py)
python3 -m pip install -r requirements-yml.txt       # installs Yml related dependencies
                                                     # (used in ymlparsers.py, init_app_conf.py, deploys.py;
                                                     # optionally used in ymlparsers_extra.py, emails.py)
python3 -m pip install -r requirements-env.txt       # installs pydotenv (optionally used in deploys.py)
Using Docker
alexberkovich/AlexBerUtils:latest contains all AlexBerUtils dependencies.
This Dockerfile is very simple; you can take the parts relevant to you and put them into your own Dockerfile. Alternatively, you can use it as a base Docker image for your project and add/upgrade other dependencies as you need. For example:

```dockerfile
FROM alexberkovich/alex_ber_utils:latest

COPY requirements.txt etc/requirements.txt

RUN set -ex && \
    # latest pip, setuptools, wheel
    pip install --upgrade pip setuptools wheel && \
    pip install alex_ber_utils && \
    pip install -r etc/requirements.txt

CMD ["/bin/sh"]
#CMD tail -f /dev/null
```

where requirements.txt is the requirements file for your project.

From the directory with setup.py:

```bash
python3 setup.py test  # run all tests
```

or

```bash
pytest
```

Installing new version

See https://docs.python.org/3.1/distutils/uploading.html

Installing new version to venv

```bash
python38 -m pip uninstall --yes alex_ber_utils
python38 setup.py clean sdist bdist_wheel
python38 -m pip install --find-links=./dist alex_ber_utils==0.6.5

## Manual upload
#python setup.py clean sdist upload
```

Requirements

AlexBerUtils requires the following modules.

- Python 3.6+
- PyYAML>=6.0.1

Changelog

All notable changes to this project will be documented in this file.

#https://pypi.org/manage/project/alex-ber-utils/releases/

Unreleased

[0.8.0] 04.12.2023

Changed

- In setup.cfg flag mock_use_standalone_module
was changed to false (to use unittest.mock).
- Many versions of the packages in extras were updated to the latest:
  - python-dotenv from 0.15.0 to 1.0.0.
  - MarkupSafe is downgraded from 2.1.3 to 2.0.1.
  - bcrypt is upgraded from 3.2.0 to 4.1.1.
  - cffi is upgraded from 1.14.5 to 1.16.0.
  - cryptography is upgraded from 38.0.4 to 41.0.7.
  - fabric is upgraded from 2.5.0 to 3.2.2.
  - invoke is upgraded from 1.7.3 to 2.2.0.
  - paramiko is upgraded from 2.7.2 to 3.3.1.
  - pycparser is upgraded from 2.20 to 2.21.
  - PyNaCl is upgraded from 1.3.0 to 1.5.0.
  - HiYaPyCo is upgraded from 0.5.1 to 0.5.4.

Added

- New extra group requirements-piptools.txt is added.
- alexber.utils.props.lazyproperty is added. TODO: add test for it.
- New file requirements.in that has all high-level dependencies together.
- New file requirements.all that has the pinned low-level dependency resolution.

Removed

- mock package was removed. pytest will use unittest.mock. See Changed p.1 above.
- six was removed.

[0.7.0] - 04-08-2023

Changed

- Upgrade pyparsing==2.4.7 to 3.1.1.
- Upgrade cryptography from 3.4.7 to 41.0.3.
- Upgrade invoke from 1.4.1 to 1.7.3.
- Upgrade six from 1.15.0 to 1.16.0.
- Upgrade colorama from 0.4.3 to 0.4.4.
- Change declaration of namespace to the declare_namespace() mechanism.

Added

- Explicit dependency on pyOpenSSL==22.1.0 (the lowest version where cryptography's version is pinned). cryptography's and pyOpenSSL's versions should change in sync.

[0.6.6] - 13-06-2021

Added

- stdLogging module. The main function is initStream(). This is a thin adapter layer that redirects stdout/stderr
(or any other stream-like object) to a standard Python logger. Based on https://github.com/fx-kirin/py-stdlogging/blob/master/stdlogging.py, see https://github.com/fx-kirin/py-stdlogging/pull/1. Quote from https://stackoverflow.com/questions/47325506/making-python-loggers-log-all-stdout-and-stderr-messages: "But be careful to capture stdout because it's very fragile". I decided to focus on redirecting only stderr to the logger. If you want, you can also redirect stdout by making 2 calls to the initStream() package-level method. But, because of https://unix.stackexchange.com/questions/616616/separate-stdout-and-stderr-for-docker-run, it is sufficient for me to do it only for stderr.
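For illustration, such a stream-to-logger adapter can be sketched as follows (a minimal sketch of the general technique; the StreamToLogger class below is illustrative and not the package's actual initStream() implementation):

```python
import logging
import sys

class StreamToLogger:
    """File-like object that forwards everything written to it to a logger."""
    def __init__(self, logger, level=logging.ERROR):
        self.logger = logger
        self.level = level

    def write(self, message):
        # A single write() may contain several lines; log each one separately.
        for line in message.rstrip().splitlines():
            self.logger.log(self.level, line.rstrip())

    def flush(self):
        # Nothing to buffer here: the logging handlers manage their own flushing.
        pass

# Redirect stderr only (stdout is fragile to capture, as quoted above).
sys.stderr = StreamToLogger(logging.getLogger("stderr"), logging.ERROR)
```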
See [https://alex-ber.medium.com/stdlogging-module-d5d69ff7103f] for details.

Changed

- Dockerfile's base image. Now you can transparently switch between AMD64 and ARM64 processors.
- cffi dependency from 1.14.3 to 1.14.5.
- cryptography dependency from 3.1.1 to 3.4.7.

[0.6.5] - 12-04-2021

Added

- FixRelCwd context-manager in mains module - This context-manager temporarily changes the current working directory to
the one where relPackage is installed. What if you have some script or application that uses relative paths and you want to invoke it from another directory? To make things more complicated, maybe your "external" code also uses relative paths, but relative to another directory. See [https://alex-ber.medium.com/making-more-yo-relative-path-to-file-to-work-fbf6280f9511] for details.
- GuardedWorkerException context-manager in mains module - a context manager that mitigates exception propagation from another process. It is very difficult, if not impossible, to pickle exceptions back to the parent process. Simple ones work, but many others don't. For example, CalledProcessError is not picklable (my guess is that this is because of its stdout and stderr data members). This means that if a child process raises a CalledProcessError that is not caught, it will propagate to the parent process, but the propagation will fail, apparently because of a bug in Python itself. This causes pool.join() to halt forever, and thus a memory leak! See [https://alex-ber.medium.com/exception-propagation-from-another-process-bb09894ba4ce] for details.
- join_files() function in files module - Suppose that you have some multi-threaded/multi-process application where each thread/process creates some file (each thread/process creates a different file) and you want to join them into one file.
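As a rough illustration of the idea (the actual join_files() signature may differ; this sketch just concatenates the per-worker files in order):

```python
import shutil

def join_files_sketch(filenames, dest):
    """Concatenate the per-thread/per-process files into one destination file."""
    with open(dest, "wb") as out:
        for name in filenames:
            with open(name, "rb") as part:
                shutil.copyfileobj(part, out)  # stream each part into the result
```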
See [https://alex-ber.medium.com/join-files-cc5e38e3c658] for details.

Changed

- fixabscwd() function in mains module - minor refactoring: moving out some internal helper function for reuse in a new function.
- Base docker image version to alexberkovich/alpine-anaconda3:0.2.1-slim. alexberkovich/alpine-anaconda3:0.2.1 has some minor changes relative to alexberkovich/alpine-anaconda3:0.1.1. See [https://github.com/alex-ber/alpine-anaconda3/blob/master/CHANGELOG.md] for details.

Updated

Documentation

- See [https://github.com/alex-ber/AlexBerUtils/issues/8] Config file from another directory is not resolved (using argumentParser with --general.config.file can't be passed to init_app_conf.parse_config()).

[0.6.4] - 12/12/2020

Changed

- Base docker image version to alexberkovich/alpine-anaconda3:0.1.1-slim.
alexberkovich/alpine-anaconda3:0.1.1 has some minor changes relative to alexberkovich/alpine-anaconda3:0.1.0.
See [https://github.com/alex-ber/alpine-anaconda3/blob/master/CHANGELOG.md] for details.
alexberkovich/alpine-anaconda3:0.1.1-slim is a "slim" version of the same docker image; most unused packages are removed.
- Update versions to pip==20.3.1 setuptools==51.0.0 wheel==0.36.1.

Removed

- Script check_d.py

[0.6.3] - 18/11/2020

Changed

- Base docker image version to alexberkovich/alpine-anaconda3:0.1.0; it has a fix for a potential security risk: Git was changed not to store credentials as plain text, but to keep them in memory for 1 hour,
see https://git-scm.com/docs/git-credential-cache.

Updated

Documentation

- My deploys module [https://medium.com/analytics-vidhya/my-deploys-module-26c5599f1b15 for documentation] is updated to contain the fix_retry_env() function in the mains module.

Added

Documentation

- fix_retry_env() function in mains module. [https://alex-ber.medium.com/make-path-to-file-on-windows-works-on-linux-402ed3624f66]

[0.6.2] - 17/11/2020

Deprecation

- method_overloading_test.py is deprecated and will be removed once AlexBerUtils supports Python 3.9. This will happen approximately at 01.11.2021. This test uses the multidispatch project, which hasn't been updated since 2014. In Python 3.8 it has the following warning: multidispatch.py:163: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working from collections import MutableMapping

Added

- class OsEnvrionPathRetry, function fix_retry_env() to mains module.

Changed

- OsEnvrionPathExpender - refactored, functionality is preserved.

[0.6.1] - 16/11/2020

Added

- optional Dockerfile
- optional .env.docker for reference.
- Support of Python 3.8 is validated, see https://github.com/alex-ber/AlexBerUtils/issues/5
- Email formatting changed in Python 3.8, see https://github.com/alex-ber/AlexBerUtils/issues/7 Note that it is possible that 7bit will be replaced with 8bit as Content-Transfer-Encoding, which I consider ok.
- check_all.py to run all unit tests.
- check_d.py for sanity test.
- .dockerignore
- requirements*.txt - dependency versions changed, see https://github.com/alex-ber/AlexBerUtils/issues/6
- Because of the pytest upgrade conftest.py was changed: pytest_configure() was added to support dynamically used marks.
- In ymlparsers_test.py a deprecation warning was removed (it will be an error in Python 3.9): collections.Mapping was changed to collections.abc.Mapping.

Changed

- README.MD: added section about Docker usage.
- setup.py to indicate support of Python 3.8.

[0.5.3] - 10/09/2020

Changed

- alexber.utils.emails.initConfig is fixed. Before this, default variables were ignored.
- 2 unit tests for init_app_conf are fixed. These fixes are minor.

Documentation

- importer module [https://medium.com/analytics-vidhya/how-to-write-easily-customizable-code-8b00b43406b2]
- fixabscwd() function in mains module. [https://medium.com/@alex_ber/making-relative-path-to-file-to-work-d5d0f1da67bf]
- fix_retry_env() function in mains module. [https://alex-ber.medium.com/make-path-to-file-on-windows-works-on-linux-402ed3624f66]
- My parser module [https://medium.com/analytics-vidhya/my-parser-module-429ed1457718]
- My ymlparsers module [https://medium.com/analytics-vidhya/my-ymlparsers-module-88221edf16a6]
- My major init_app_conf module [https://medium.com/analytics-vidhya/my-major-init-app-conf-module-1a5d9fb3998c]
- My deploys module [https://medium.com/analytics-vidhya/my-deploys-module-26c5599f1b15]
- My emails module [https://medium.com/analytics-vidhya/my-emails-module-3ad36a4861c5]
- My processinvokes module [https://medium.com/analytics-vidhya/my-processinvokes-module-de4d301518df]

[0.5.2] - 21/06/2020

Added

- path() function in mains module. For older Python versions it uses the importlib_resources module; for newer versions the built-in importlib.resources.
- load_env() function in mains module. Added kwargs forwarding: if dotenv_path or stream is present it will be used;
if ENV_PCK is present, dotenv_path will be constructed from ENV_PCK and ENV_NAME; otherwise, kwargs will be forwarded as-is to load_dotenv.
- fix_env() function in mains module. For each key in ENV_KEYS, this method prepends full_prefix to os.environ[key]. full_prefix is calculated as the absolute path of __init__.py of ENV_PCK.

Changed

- processinvokes function run_sub_process - documentation typo fixed.
- Lowered Python version to 3.6.

[0.5.1] - 06-05-2020

Added

- mains module explanation article https://medium.com/@alex_ber/making-relative-path-to-file-to-work-d5d0f1da67bf is published.
- fabs module. It adds a cp method to fabric.Connection. This method is a Linux-like cp command. It copies a single file to a remote (Posix) machine.
- Split dependency list for setup.py req.txt (inexact versions, direct dependencies only) and for
reproducible installation requirements.txt (exact versions, all, including transitive dependencies).
- Added req-fabric.txt, requirements-fabric.txt - Fabric, used in the fabs module.
- Added req-yml.txt, requirements-yml.txt - Yml-related dependencies, used in ymlparsers.py and in init_app_conf.py, deploys.py; optionally used in ymlparsers_extra.py, emails.py. The main dependency is HiYaPyCo. I'm using a feature that is available in the minimal version. HiYaPyCo depends upon PyYAML and Jinja2. The limitations for Jinja2 come from the HiYaPyCo project.
- Added req-env.txt, requirements-env.txt - pydotenv, optionally used in deploys.py.
- Added inspects.has_method(cls, methodName). Checks if class cls has a method with name methodName directly, or in one of its super-classes.
- Added parsers.parse_sys_args function that parses command line arguments.
- Added ymlparsers module - load/safe_dump of hierarchical Yml files. This is essentially a wrapper around the HiYaPyCo project with a streamlined and extended API and a couple of workarounds. Note: this module doesn't use any package-level variables in the hiYaPyCo module, including hiYaPyCo.jinja2env. This module does use Jinja2's Environment. It also has different defaults for the load/safe_dump methods. They can be overridden in the initConfig() function.
  - safe_dump() method supports simple Python objects like primitive types (str, integer, etc), list, dict, OrderedDict.
  - as_str() - convenient method for getting a str representation of the data, for example of a dict.
  - DisableVarSubst - use of this context manager disables variable substitution in the load() function.
  - initConfig - this method resets some defaults. If running from the MainThread, this method is idempotent.
- Added init_app_conf major module. The main function is parse_config. This function parses command line arguments first.
Then it parses yml files. Command line arguments override yml file arguments. Parameters of yml files we always try to convert on a best-effort basis. Parameters of system args we try to convert according to the implicit_convert param. If you supply implicit_convert=True, then mask_value() will be applied to the flat map (first parameter). Otherwise, implicit_convert will have the value that was set in initConfig(). By default it is True.
The command line key --general.profiles or the appropriate key in the default yml file is used to find 'profiles'.
Let's suppose that --config_file is resolved to config.yml. If 'profiles' is not empty, it will be used to calculate the filenames that will be used to override the default yml file. Let's suppose 'profiles' is resolved to ['dev', 'local']. Then first config.yml will be loaded, then it will be overridden with config-dev.yml, then it will be overridden with config-local.yml. At last, it will be overridden with system args.
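The filename derivation described above can be sketched like this (a hypothetical helper for illustration, not the package's API):

```python
from pathlib import Path

def profile_files(config_file, profiles):
    """config.yml + ['dev', 'local'] -> [config.yml, config-dev.yml, config-local.yml]"""
    p = Path(config_file)
    return [p] + [p.with_name(f"{p.stem}-{profile}{p.suffix}") for profile in profiles]
```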
This entry can always be overridden with system args. The ymlparsers and parsers modules serve as the low-level API for this module.
- mask_value() is implemented as a wrapper to the parsers.safe_eval() method with support for boolean variables. This implementation is used to get the type for arguments that we get from system args. This mechanism can be easily replaced with your own one.
- to_convex_map() - This method receives a dictionary with 'flat keys'; it has a simple key:value structure where the value can't be another dictionary. It will return a dictionary of dictionaries with natural key mapping; optionally, entries will be filtered out according to white_list_flat_keys and, optionally, values will be implicitly converted to the appropriate type. In order to simulate a dictionary of dictionaries, 'flat keys' compose the key from the outer dict with the key from the inner dict, separated with a dot. For example, the 'general.profiles' 'flat key' corresponds to a convex map with a 'general' key whose dictionary value has one of the keys 'profiles' with a corresponding value.
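To make the flat-key idea concrete, here is a minimal sketch of the conversion (illustrative only; the real to_convex_map() also handles white_list_flat_keys filtering and implicit conversion):

```python
def to_convex_map_sketch(flat_d):
    """Turn {'general.profiles': ['dev']} into {'general': {'profiles': ['dev']}}."""
    result = {}
    for flat_key, value in flat_d.items():
        *outer_keys, last_key = flat_key.split(".")
        d = result
        for part in outer_keys:
            d = d.setdefault(part, {})  # descend, creating nested dicts as needed
        d[last_key] = value
    return result
```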
If you supply implicit_convert=True, then mask_value() will be applied to the values of the received flat dictionary. Otherwise, implicit_convert will have the value that was set in initConfig(). By default it is True.
- merge_list_value_in_dicts - merges the value of 2 dicts. This value represents a list of values. The value from flat_d is roughly obtained by flat_d[main_key+'.'+sub_key]. The value from d is roughly obtained by d[main_key][sub_key]. If you supply implicit_convert=True, then mask_value() will be applied to the flat map (first parameter). Otherwise, implicit_convert will have the value that was set in initConfig(). By default it is True.
- initConfig - you can set the default value of implicit_convert. By default it is True. This parameter is used if implicit_convert wasn't explicitly supplied. This method is idempotent.

Added

- deploys module.
overrided profiles only. Seeinit_app_conf.parse_config()for another variant.split_path- Split filename in 2 part parts by split_dirname. first_part will ends with split_dirname.
second_part will start immediately after split_dirname.add_to_zip_copy_function- Factory method that returns closure that can be used as copy_function param inshutil.copytree().Addedemailsmodule.
This module contains extensions of the logging handlers.
This module optionally depends onymlparsesermodule.
It is better to useEmailStatuscontext manager with configuredemailLogger.
It is intended to configure first youremailLoggerwithOneMemoryHandler(together withSMTPHandler).
Than the code block that you want to aggregate log messages from is better to be enclosed withEmailStatuscontext manager.alexber.utils.emails.SMTPHandleris customization oflogging.handlers.SMTPHandler. It's purpose is to connect to
SMTP server and actually send the e-mail. Unlikelogging.handlers.SMTPHandlerthis class expects for record.msg to be built EmailMessage.
You can also change use of underline SMTP class to SMTP_SSL, LMTP, etc.
This implementation isthread-safe.alexber.utils.emails.OneMemoryHandleris variant oflogging.handlers.MemoryHandler. This handler aggregates
log messages untilFINISHEDlog-level is received or application is going to terminate abruptly (see docstring
ofcalc_abrupt_vars()method for the details) and we have some log messages in the buffer. On such event
all messages (in the current Thread) are aggregated to the singleEmailMessage. The subject of theEmailMessageis determined byget_subject()method.
If you want to change delimeters used to indicate variable declaration inside template, see docstring of theget_subject()method.
It is better to useEmailStatuscontext manager with configured emailLogger. See docstring ofEmailStatus.This implementation isthread-safe.alexber.utils.emails.EmailStatus- if contextmanager exits with exception (it fails), than e-mail with
subject formatted with faildargs and faildkwargs will be send.
Otherwise, e-mail with subject formatted with successargs and successkwargs will be send.
All messages (in the current Thread) will be aggregated to one long e-mail with the subject described inOneMemoryHandler.get_subject()method.alexber.utils.emails.initConfig- this method reset some defaults. This method is idempotent.
By default,SMTPclass fromsmtplibis used to send actual e-mail. You can change it toSMTP_SSL,LMTP,
or another class by specifying default_smpt_cls_name.
You can also specified default port for sending e-mails.processInvokesmodule has one primary function -run_sub_process()This method run subprocess and logs it's out
to the logger. This method is sophisticated decorator tosubprocess.run(). It is useful, when your subprocessrun's a lot of time and you're interesting to receive it'sstdoutandstderr. By default, it's streamed to log.
You can easily customize this behavior, seeinitConig()method.initConig()This method can be optionally called prior any call to another function in this module. You can use your
custom class for the logging. For example, FilePipe.ChangedSpited dependency list for setup.py req.txt (inexact versions, direct dependency only) and for
reproducible installation requirements.txt (exact versions, all, including transitive dependencies).README.md changed, added section 'Alternatively you install install from requirements file:'.
Some other misc changed done.CHANGELOG.md version 0.4.1 misc changes.Misc improvement in unit tests.Fixedparser.safe_eval- safe_eval('%(message)s') was blow up, now it returns value as is.
Seehttps://github.com/alex-ber/AlexBerUtils/issues/2Enhancedimporter.importer- added support for PEP 420 (implicit Namespace Packages).Namespace packages are a mechanism for splitting a single Python package across multiple directories on disk.
When interpreted encounter with non-emptypathattribute it adds modules found in those locations
to the current package.
Seehttps://github.com/alex-ber/AlexBerUtils/issues/3In all documentation refference topip3was changed topython3 -m pip[0.4.1] - 2020-04-02BREAKING CHANGEI highly recommend not to use 0.3.X versions.Removedmodulewarnsis dropedChangedLimitation::mainsmodule wasn't tested with frozen python script (frozen using py2exe).modulemainsis rewritten. FunctioninitConfis dropped entirely.modulemainsnow works with logger and with warnings (it was wrong decision to work with warnings).[0.3.4] - 2020-04-02ChangedCHANGELOG.md fixedwarnsmodule bug fixed, now warnings.warn() works.FixabscwdWarning is added to simplify warnings disabling.Changing howmainsmodule usewarns.[0.3.3] - 2020-04-02ChangedCHANGELOG.md fixed[0.3.2] - 2020-04-01ChangedTo REAMDE.md addInstalling new versionsectionFix typo in REAMDE.md (tests, not test).Fixing bug: now, you're able to import package in the Python interpreter (setups.pyfixed)Fixing bug:warnsmodule now doesn't change log_level in the preconfigured logger in any cases.BREAKING CHANGE: Inmainsmodule methodwarnsInitConfig()was renamed tomainInitConfig()Also singature was changed.mainsmodule minor refactored.AddedUnit tests are added forwarnsmoduleUnit tests are added formainsmodule[0.3.1] - 2020-04-01ChangedTests minor improvements.Excluded tests, data from setup.py (from being installed from the sdist.)Created MANIFEST.inAddedwarnsmodule is added:It provides better integration between warnings and logger.
Unlikelogging._showwarning()this variant will always go through logger.warns.initConfig()has optional file parameter (it's file-like object) to redirect warnings.
Default value issys.stderr.If logger forlog_name(default ispy.warnings) will be configured before call toshowwarning()method,
than warning will go to the logger's handler withlog_level(default islogging.WARNING).If logger forlog_name(default ispy.warnings) willn't be configured before call to showwarning() method,
than warning will be done tofile(default issys.stderr) withlog_level(default islogging.WARNING).mainmodule is added:main.fixabscwd()changesos.getcwd()to be the directory of the__main__module.main.warnsInitConfig()reexportswarns.initConfig()for convenience.AddedTests for alexber.utils.thread_locals added.[0.2.5] - 2019-05-22ChangedFixed bug in UploadCommand, git push should be before git tag.[0.2.4] - 2019-05-22ChangedFixed bug in setup.py, incorrect order between VERSION and UploadCommand (no tag was created on upload)[0.2.1] - 2019-05-22Changedsetup url fixed.Added import of Enum to alexber.utils package.[0.2.0] - 2019-05-22Changedsetup.py - keywords added.[0.1.1] - 2019-05-22ChangedREADME.md fixed typo.[0.1.0] - 2019-05-22Changedalexber.utils.UploadCommand - bug fixed, failed on git tag, because VERSION was undefined.[0.0.1] - 2019-05-22Addedalexber.utils.StrAsReprMixinEnum - Enum Mixin that hasstr() equal torepr().alexber.utils.AutoNameMixinEnum- Enum Mixin that generate value equal to the name.alexber.utils.MissingNoneMixinEnum - Enum Mixin will return None if value will not be found.alexber.utils.LookUpMixinEnum - Enim Mixin that is designed to be used for lookup by value.If lookup fail, None will be return. Also,str() will return the same value asrepr().alexber.utils.threadlocal_var, get_threadlocal_var, del_threadlocal_var.Inspired byhttps://stackoverflow.com/questions/1408171/thread-local-storage-in-pythonalexber.utils.UploadCommand - Support setup.py upload.UploadCommand is intented to be used only from setup.pyIt's builds Source and Wheel distribution.It's uploads the package to PyPI via Twine.It's pushes the git tags.alexber.utils.uuid1mc is is a hybrid between version 1 & version 4. This is v1 with random MAC ("v1mc").uuid1mc() is deliberately generating v1 UUIDs with a random broadcast MAC address.The resulting v1 UUID is time dependant (like regular v1), but lacks all host-specific information (like v4).Note: somebody reported that ran into trouble using UUID1 in Amazon EC2 instances.alexber.utils.importer.importer - Convert str to Python construct that target is represented.alexber.utils.importer.new_instance - Convert str to Python construct that target is represented.
args and kwargs will be passed in to appropriatenew() /init() /init_subclass() methods.alexber.utils.inspects.issetdescriptor - Return true if the object is a method descriptor with setters.But not if ismethod() or isclass() or isfunction() are true.alexber.utils.inspects.ismethod - Return false if object is not a class and not a function.
Otherwise, return true iff signature has 2 params.alexber.utils.parsers.safe_eval - The purpose of this function is convert numbers from str to correct type.This function support convertion of built-in Python number to correct type (int, float)This function doesn't support decimal.Decimal or datetime.datetime or numpy types.alexber.utils.parsers.is_empty - if value is None returns True.if value is empty iterable (for example, empty str or emptry list),returns true otherwise false.Note: For not iterable values, behaivour is undefined.alexber.utils.parsers.parse_boolean - if value is None returns None.if value is boolean, it is returned as it is.
if value is str and value is equals ignoring case to "True", True is returned.
if value is str and value is equals ignoring case to "False", False is returned.For every other value, the answer is undefined.alexber.utils.props.Properties - A Python replacement for java.util.Properties classThis is modelled as closely as possible to the Java original.Created - Anand B [email protected] to Python 3 by Alex.Also there are some tweeks that was done by Alex.
|
alexcalculator
|
alexcalculator

Under Construction...
Not ready for use yet, currently experimenting and planning.

Developed by Alex Gomes from DIA (C) 2023

Examples of How to use (Alex Calculator)

Creating a Calculator
|
alexcanc-de-toolkit
|
No description available on PyPI.
|
alex_chi2
|
Demo package demonstrating actual c code being modulized
|
alexchoitest
|
No description available on PyPI.
|
alexdataconverter
|
This is a simple exercise to publish a package onto PyPI.
To use the module, use the method change_format().
|
alexdistribution
|
No description available on PyPI.
|
alexdlbrain
|
No description available on PyPI.
|
alexdlbrain2
|
No description available on PyPI.
|
alexdlbrain4
|
hello

this is a readme placeholder.
|
alexe-aws-cdk.aws-elasticloadbalancingv2-targets
|
Targets for AWS Elastic Load Balancing V2

---

This package contains targets for ELBv2. See the README of the @aws-cdk/aws-elasticloadbalancingv2 library.
|
alexer
|
UNKNOWN
|
alexeygameframework
|
A library that makes it easier to create games, with a built-in tile system, screen handler, animation, asset importing and handling, and a basic window.

Change Log

0.0.1 (2023-09-16)

- First Release

0.0.11 (2023-09-16)

- Added init python files

0.0.12 (2023-09-17)

- Fixed package issues

0.0.13 (2023-09-17)

- Added imports to init file to make it easier to use the package

0.0.14 (2023-09-17)

- Changed project classifiers to reflect the package better
|
alexeyqu-singleton
|
alexeyqu Singleton
|
alexeyshesh_test_package
|
This is a security placeholder package.
If you want to claim this name for legitimate purposes,
please contact us [email protected]@yandex-team.ru
|
alex-first-project
|
No description available on PyPI.
|