package | package-description
---|---|
aldryn-quote
|
Failed to fetch description. HTTP Status Code: 404
|
aldryn-redirects
|
################
Aldryn Redirects
################

This is a modified version of Django's ``django.contrib.redirects`` app that supports language-dependent target URLs, using ``django-parler``. This is useful for cases in which another middleware strips the language prefix from the URL, like django CMS. It allows you to define different URLs to redirect to, depending on the user's language.

************
Installation
************

Aldryn Platform Users
#####################

To install the addon on Aldryn, all you need to do is follow this `installation link <https://control.aldryn.com/control/?select_project_for_addon=aldryn-redirects>`_ on the Aldryn Marketplace and follow the instructions.

Manually you can:

#. Choose a site you want to install the Addon to from the dashboard.
#. Go to Apps > Install App.
#. Click Install next to the Aldryn Redirects app.
#. Redeploy the site.

Manual Installation
###################

```bash
pip install aldryn-redirects
```

Follow the `setup instructions for django-parler <http://django-parler.readthedocs.org/>`_.

```python
# settings.py
INSTALLED_APPS += ['parler', 'aldryn_redirects']

# add the middleware somewhere near the top of MIDDLEWARE_CLASSES
MIDDLEWARE_CLASSES.insert(0, 'aldryn_redirects.middleware.RedirectFallbackMiddleware')
```
|
aldryn-reversion
|
Description

A collection of shared helpers and mixins to provide support for django-reversion on models with translatable (using django-parler) fields and/or django-cms placeholder fields.

Note: django-parler is optional and is not required. However, if your model is translated with Parler, aldryn-reversion will take translations and the resulting internal Parler translation cache into consideration when making revisions.

Usage

Please refer to the documentation.

Requirements

- Python 2.6, 2.7, 3.4
- Django 1.6 - 1.9
- django-reversion

Optional

- django CMS 3.0.12 or later
- django-parler

Installation

Most likely you won't need to install this addon - it will be installed as a dependency for some other addon. If you do need to install it manually, follow these steps:

1. Run pip install aldryn-reversion.
2. Add the apps below to INSTALLED_APPS:

    INSTALLED_APPS = [
        …
        'reversion',
        'aldryn_reversion',
        …
    ]

3. (Re-)Start your application server.

More detailed installation guidance is also available.
|
aldryn-search
|
This package provides search indexes for easy Haystack 2 integration with django CMS.

Contributing

This is an open-source project. We'll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review our contribution guidelines.

We're grateful to all contributors who have helped create and maintain this package. Contributors are listed in the contributors section.

Usage

After installing aldryn-search through your package manager of choice, add aldryn_search to your INSTALLED_APPS. If you run a multilingual CMS setup, you have to define a Haystack backend for every language in use:

HAYSTACK_CONNECTIONS = {
    'en': {
        'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
        'URL': 'http://my-solr-server/solr/my-site-en/',
        'TIMEOUT': 60 * 5,
        'INCLUDE_SPELLING': True,
        'BATCH_SIZE': 100,
    },
    'fr': {
        'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
        'URL': 'http://my-solr-server/solr/my-site-fr/',
        'TIMEOUT': 60 * 5,
        'INCLUDE_SPELLING': True,
        'BATCH_SIZE': 100,
    },
}

To make sure the correct backend is used during search, add aldryn_search.router.LanguageRouter to your HAYSTACK_ROUTERS setting:

HAYSTACK_ROUTERS = ['aldryn_search.router.LanguageRouter',]

When using multiple languages, there is usually one search backend per language. When indexing, it's important to know which language is currently being used; this can be facilitated by the ALDRYN_SEARCH_LANGUAGE_FROM_ALIAS setting, which can be a callable or a string path that resolves to one.

Please keep in mind that it's usually not a good idea to import things in your settings; however, there are cases where it seems overkill to create a function to handle the alias, for example:

ALDRYN_SEARCH_LANGUAGE_FROM_ALIAS = lambda alias: alias.split('-')[-1]

The example above could be used when using multiple languages and sites, where all backends have a language suffix. The same could be achieved using a function defined somewhere else in your code, like so:

ALDRYN_SEARCH_LANGUAGE_FROM_ALIAS = "my_app.helpers.language_from_alias"

If any of the above return None, then settings.LANGUAGE_CODE will be used. By default this setting evaluates to a function that checks if the alias is in settings.LANGUAGES and, if so, uses the alias as a language.

For a complete Haystack setup, please refer to their documentation. For more documentation, see the docs folder.

Integration with django CMS

aldryn-search comes with an App Hook for django CMS, and a search view using Django's class-based views. If you want to use this app hook, you can either subclass it and register it yourself, or set ALDRYN_SEARCH_REGISTER_APPHOOK to True.

If you want to exclude some CMS plugins from indexing, you can specify the ALDRYN_SEARCH_EXCLUDED_PLUGINS setting like so:

ALDRYN_SEARCH_EXCLUDED_PLUGINS = [
    "PluginA", "PluginB"
]

For pagination, aldryn-search uses aldryn_common.paginator.DiggPaginator. If you want to use this built-in pagination, make sure to install django-spurl and then add spurl to INSTALLED_APPS.

Pagination

Results are paginated according to the ALDRYN_SEARCH_PAGINATION setting (default: 10). If set to None, pagination is disabled.
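The string-path variant above could resolve to a small helper along these lines. This is a sketch only: the module path, the alias naming convention, and the hard-coded language set are assumptions, not part of aldryn-search.

```python
# Hypothetical my_app/helpers.py matching the string-path setting above.
# Assumes connection aliases end in a language suffix, e.g. "my-site-en".
KNOWN_LANGUAGES = {'en', 'fr'}  # would normally derive from settings.LANGUAGES

def language_from_alias(alias):
    """Return the language encoded in a Haystack alias, or None so the
    caller falls back to settings.LANGUAGE_CODE."""
    suffix = alias.split('-')[-1]
    return suffix if suffix in KNOWN_LANGUAGES else None
```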
|
aldryn-sites
|
Extensions to django.contrib.sites.

Features

- Domain redirects: handles smart redirecting to a main domain from alias domains, taking http/https into consideration.
- Site auto-population: automatically populates the domain name in django.contrib.sites.Site.domain based on the ALDRYN_SITES_DOMAINS setting.

Installation

- Add aldryn_sites to INSTALLED_APPS.
- Add aldryn_sites.middleware.SiteMiddleware to MIDDLEWARE_CLASSES (place it before djangosecure.middleware.SecurityMiddleware if redirects should be smart about alias domains possibly not having a valid certificate of their own; the middleware will pick up on SECURE_SSL_REDIRECT from django-secure).
- Configure ALDRYN_SITES_DOMAINS:

ALDRYN_SITES_DOMAINS = {
    1: {  # matches SITE_ID
        'domain': 'www.example.com',  # main domain that all domains in redirects will redirect to.
                                      # Auto-populates ``django.contrib.sites.Site.domain``
        'aliases': [  # these domains will be accessible like the main domain (no redirect).
            'an.other.domain.com',
            r'^[a-z0-9-]+\.anysub\.com$',  # regexes are supported
        ],
        'redirects': [  # these domains will be redirected to the main domain.
            'example.com',  # add ``'*'`` to redirect all non-main domains to the main one.
            'example.ch',
            'www.example.ch',
            r'^[a-z0-9-]+\.my-redirect-domain\.com$',  # regexes are supported
            r'.*',  # matches any domain (makes the above rules useless; it's just an example)
        ],
    }
}

When using regexes:

- exact matches win over pattern matches
- pattern redirect matches win over pattern alias matches

Further Settings

Set ALDRYN_SITES_SET_DOMAIN_NAME to False if you don't want django.contrib.sites.Site.domain to be auto-populated (default: True).

TODOs

- validate settings
- test settings validators
- log warning if there are Sites in the database that are not in the settings
- pretty display of how redirects will work (in admin and as a simple util)
- regex support for aliases
- form to test redirect logic
- pre-compile and cache regexes
|
aldryn-snake
|
Aldryn Snake adds tail and head context processors for addons, similar to django-sekizai.

This addon still uses the legacy "Aldryn" naming. You can read more about this in our support section.

Contributing

This is an open-source project. We'll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review our contribution guidelines.

We're grateful to all contributors who have helped create and maintain this package. Contributors are listed in the contributors section.

Documentation

See REQUIREMENTS in the setup.py file for additional dependencies.

Installation

- Add aldryn_snake.template_api.template_processor to your TEMPLATE_CONTEXT_PROCESSORS settings.
- Somewhere in your app that will be imported on startup (models, admin, etc.), add something to the api:

from aldryn_snake.template_api import registry
from django.conf import settings

OPTIMIZELY_SCRIPT = """<script src="//cdn.optimizely.com/js/%(account_number)s.js"></script>"""

def get_crazyegg_script():
    optimizely_number = getattr(settings, 'OPTIMIZELY_ACCOUNT_NUMBER', None)
    if optimizely_number:
        return OPTIMIZELY_SCRIPT % {'account_number': optimizely_number}
    else:
        return ''

registry.add_to_tail(get_crazyegg_script())

If add_to_tail or add_to_head receive a callable, it will be called with the request keyword argument.

- Add the following in your base template to the HEAD:

{{ ALDRYN_SNAKE.render_head }}

- Add the following in your base template right above </BODY>:

{{ ALDRYN_SNAKE.render_tail }}

Running Tests

You can run tests by executing:

virtualenv env
source env/bin/activate
pip install -r tests/requirements.txt
python setup.py test
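Since add_to_tail and add_to_head also accept callables, a snippet can be registered as a function so the markup is computed per request. A sketch only: the staff check and script URL are made-up examples, not part of aldryn-snake.

```python
# Hypothetical per-request snippet: registering the callable itself
# (not its return value) means it is invoked with the ``request``
# keyword argument at render time, per the docs above.
def tracking_snippet(request=None):
    if request is not None and getattr(request, 'is_staff', False):
        return ''  # e.g. skip tracking for staff users (made-up attribute)
    return '<script src="//example.com/tracking.js"></script>'

# registry.add_to_tail(tracking_snippet)  # requires aldryn_snake installed
```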
|
aldryn-snippet
|
UNKNOWN
|
aldryn-sso
|
Aldryn SSO adds single-sign-on to Divio Cloud.

This addon still uses the legacy "Aldryn" naming. You can read more about this in our support section.

Contributing

This is an open-source project. We'll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review our contribution guidelines.

We're grateful to all contributors who have helped create and maintain this package. Contributors are listed in the contributors section.

Documentation

See REQUIREMENTS in the setup.py file for additional dependencies.

Installation

Nothing to do. aldryn-sso is part of the Divio Cloud.

Running Tests

You can run tests by executing:

virtualenv env
source env/bin/activate
pip install -r tests/requirements.txt
python setup.py test

Sharing Links and Tokens

Aldryn SSO supports a "test link" or "preview mode" feature to bypass the password protection of test environments. This is useful for sharing a test environment with other people without complicated setups and passwords; a link is enough.

The links have the following form: https://{aldryn_url}/?sharing_token={token}, where the token is the value of the SHARING_VIEW_ONLY_SECRET_TOKEN environment variable. This environment variable can be set in the container as part of your build process. The argument name (sharing_token) can also be overridden by setting the SHARING_VIEW_ONLY_TOKEN_KEY_NAME environment variable to your desired value.
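The link format described above can be assembled mechanically. This sketch uses the documented environment variables; the helper function itself is hypothetical, not part of aldryn-sso.

```python
import os

def sharing_link(base_url):
    """Build a sharing link of the form {base_url}/?{key}={token} from
    the environment variables documented above (illustrative helper)."""
    token = os.environ.get('SHARING_VIEW_ONLY_SECRET_TOKEN', '')
    key = os.environ.get('SHARING_VIEW_ONLY_TOKEN_KEY_NAME', 'sharing_token')
    return f'{base_url}/?{key}={token}'
```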
|
aldryn-style
|
Aldryn Style provides a plugin that wraps other plugins in CSS styling, by placing a class name on a containing element.

Installation

This plugin requires django CMS 2.4 or higher to be properly installed.

- Within your virtualenv, run pip install aldryn-style
- Add 'aldryn_style' to your INSTALLED_APPS setting.
- Run manage.py migrate aldryn_style.

Usage

You can define styles and tag types in your settings.py:

# define this lambda if it isn't already defined in your settings...
_ = lambda s: s

ALDRYN_STYLE_CLASS_NAMES = (
    ('info', _('info')),
    ('new', _('new')),
    ('hint', _('hint')),
)

ALDRYN_STYLE_ALLOWED_TAGS = [
    'div', 'p', 'span', 'article', 'section', 'header', 'footer',
]

By default, if ALDRYN_STYLE_ALLOWED_TAGS is not supplied, or contains an empty list, it will default to the following list for backwards compatibility with previous versions:

ALDRYN_STYLE_ALLOWED_TAGS = [
    'div', 'article', 'section', 'p', 'span',
    'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
    'header', 'footer',
]

NOTICE: All tags included in this list should be "paired tags" that require a closing tag. It does not make sense to use 'button', 'input', 'img', or other self-closing tags in this setting. Also, the developer is advised to choose the tag types wisely to avoid HTML validation issues and/or unintentional security vulnerabilities. For example, the 'script' tag should never be allowed in ``ALDRYN_STYLE_ALLOWED_TAGS`` (though we do not prevent this). If you have an application where you find yourself wishing to do this, please see djangocms-snippet or aldryn-snippet as an alternative, but note that these projects also come with appropriate security warnings.

After that you can place any number of other plugins inside this style plugin. It will create a div (or other selected tag type) with the chosen class around the contained plugins.

Translations

If you want to help translate the plugin, please do so on Transifex: https://www.transifex.com/projects/p/django-cms/resource/aldryn-style/
|
aldryn-tours
|
UNKNOWN
|
aldryn-translation-tools
|
No description available on PyPI.
|
aldryn-video
|
Aldryn Video provides an elegant way to embed videos in your django CMS sites.

A number of video hosting providers are supported, including Vimeo and YouTube (any provider that uses the oEmbed specification should be supported). The plugin also provides access to various control and sizing parameters for embedded video.

Installation

Aldryn Platform Users

- Choose a site you want to install the add-on to from the dashboard. Then go to Apps -> Install app and click Install next to the Video app.
- Redeploy the site.

Manual Installation

- pip install aldryn-video
- Add aldryn_video to INSTALLED_APPS.
- Configure aldryn-boilerplates (https://pypi.python.org/pypi/aldryn-boilerplates/). To use the old templates, set ALDRYN_BOILERPLATE_NAME='legacy'. To use https://github.com/aldryn/aldryn-boilerplate-standard (recommended; will be renamed to aldryn-boilerplate-bootstrap3), set ALDRYN_BOILERPLATE_NAME='bootstrap3'.

Credits

Video file type icon by dreamxis, http://dreamxis.themex.net/, under a Creative Commons Attribution license.
|
aldryn-wordpress-import
|
UNKNOWN
|
aldryn-wow
|
Plugin for django-cms to include awesome animations from WOW.js and Animate.css.

Installation

This plugin requires django CMS 3.0 or higher to be properly installed.

- In your project's virtualenv, run pip install aldryn-wow
- If using Django 1.6, add 'aldryn_wow': 'aldryn_wow.south_migrations', to SOUTH_MIGRATION_MODULES (or define SOUTH_MIGRATION_MODULES if it does not exist).
- Run manage.py migrate aldryn_wow

Usage

Default content in Placeholder

If you use django CMS >= 3.0, you can use Animation and Wow Animation in "default_plugins" (see the docs about the CMS_PLACEHOLDER_CONF setting in django CMS 3.0).

Changelog

Version 1.0.0
- Drop backward compatibility.
- Tested and status upgraded to Production/Stable.

Version 0.0.3
- Make parameter fields in WOW animations optional.

Version 0.0.2
- Fixed packaging issues.
- Added tests to cover all plugins.
- Add south and django migrations.
- Drop support for Python 2.6.

Version 0.0.1
- Initial Release.
|
aldServer
|
aldServer - a template server

Inspired by Flask and Jinja. Super easy to use!

Installation

pip3 install aldServer

Simple Code Example

# imports
from aldServer import createServer, Route, Debug
from aldServer import RESPONSE, CONTENT_TYPE, CHARSET

# server debugging (false by default)
Debug.debug = True

# create route obj
route = Route()

# create static folder
route.static_folder('/static')

# create simple route
@route.create_route('/api/v2/json', CONTENT_TYPE.JSON, RESPONSE.OK, CHARSET.UTF8)
def json_test():
    return "{'json': 'is the best'}"

# simple templating, passing variables
@route.create_route('/me', CONTENT_TYPE.TEXT_HTML, RESPONSE.OK, CHARSET.UTF8)
def template_test():
    return route.render_template('/me.html', name="aldison")

''' - me.html file:
<h3>Hello my name is {{%name%}},</h3>
<h4>and this server is made with aldServer.</h4>
'''

''' - me.html file after rendering:
<h3>Hello my name is aldison,</h3>
<h4>and this server is made with aldServer.</h4>
'''

# create the server
server = createServer(hostname="localhost", port=3333)

# run the created server
server.run()

Template Supports

Different loops:

# 1 - to_dos - is given as a parameter
{% for to_do in to_dos %}
<li>{{to_do}}</li>
{% endloop %}

# 2
{% for i in range(5) %}
<li>{{i}}</li>
{% endloop %}

# 3
{% for i, x in enumerate(range(5)) %}
<li>{{i}}-{{x}}</li>
{% endloop %}

# 4
{% for i in range(5) %}
{{i}} x 7 = {{i*7}}<br>
{% endloop %}

# 5
{% for item in ["hello", "world"] %}
<li>{{item}}</li>
{% endloop %}

Printing variables

<h3>Hello my name is {{%name%}},</h3>
<h4>and this server is made with aldServer.</h4>
|
aldy
|
No description available on PyPI.
|
alea
|
No description available on PyPI.
|
alea-inference
|
alea

alea is a flexible statistical inference framework. The Python package is designed for constructing, handling, and fitting statistical models, computing confidence intervals and conducting sensitivity studies. It is primarily developed for the XENONnT dark matter experiment, but can be used for any statistical inference problem. If you use alea in your research, please consider citing the software published on zenodo.

Installation

You can install alea from PyPI using pip, but beware that it is listed there as alea-inference! Thus, you need to run

pip install alea-inference

For the latest version, you can install directly from the GitHub repository by cloning the repository and running

cd alea
pip install .

You are now ready to use alea!

Getting started

The best way to get started is to check out the documentation and have a look at our tutorial notebooks. To explore the notebooks interactively, you can use Binder.

Acknowledgements

alea is a public package that inherited the spirit of the previously private XENON likelihood definition and inference construction code binference, which is based on the blueice repo https://github.com/JelleAalbers/blueice. binference was developed for XENON1T WIMP searches by Knut Dundas Morå, and for the first XENONnT results by Robert Hammann, Knut Dundas Morå and Tim Wolf.

0.2.3 / 2024-02-22

- Improve check of already made toydata and output by @dachengx in https://github.com/XENONnT/alea/pull/128
- Combine several jobs into one to save computation resources by @dachengx in https://github.com/XENONnT/alea/pull/131
- Check locate loaded package by @dachengx in https://github.com/XENONnT/alea/pull/134
- Update hypotheses and common_hypothesis by pre_process_poi by @dachengx in https://github.com/XENONnT/alea/pull/135
- Print total number of submitted jobs by @dachengx in https://github.com/XENONnT/alea/pull/137

0.2.2 / 2024-01-13

- Save dtype of valid_fit as bool by @dachengx in https://github.com/XENONnT/alea/pull/123
- Optional setting of random seed for debugging by @dachengx in https://github.com/XENONnT/alea/pull/122
- Tiny minor change on docstring by @dachengx in https://github.com/XENONnT/alea/pull/126
- Change example filename extension by @kdund in https://github.com/XENONnT/alea/pull/93
- Add axis_names to example templates by @hammannr in https://github.com/XENONnT/alea/pull/127
- Evaluate blueice_anchors expression by @dachengx in https://github.com/XENONnT/alea/pull/124
- Update pypi to use trusted publisher by @dachengx in https://github.com/XENONnT/alea/pull/130
- Update versions of blueice and inference-interface by @dachengx in https://github.com/XENONnT/alea/pull/132

0.2.1 / 2023-12-08

- Add optional argument degree_of_freedom for asymptotic_critical_value by @dachengx in https://github.com/XENONnT/alea/pull/86
- Update readthedocs configurations by @dachengx in https://github.com/XENONnT/alea/pull/88
- Update tutorials by @hammannr in https://github.com/XENONnT/alea/pull/89
- Add column to toyMC results with minuit convergence flag by @kdund in https://github.com/XENONnT/alea/pull/91
- Debug a typo at docstring of fittable parameter by @dachengx in https://github.com/XENONnT/alea/pull/95
- Improve documentation by @hammannr in https://github.com/XENONnT/alea/pull/101
- Update Neyman threshold when changing runner_args by @hammannr in https://github.com/XENONnT/alea/pull/100
- Allow submitter to skip the already succeeded files by @dachengx in https://github.com/XENONnT/alea/pull/94
- Print time usage of Runner.run by @dachengx in https://github.com/XENONnT/alea/pull/104
- Get expectation values per likelihood term by @hammannr in https://github.com/XENONnT/alea/pull/106
- Prevent arguments to submission variations being changed by deepcopy-ing them by @dachengx in https://github.com/XENONnT/alea/pull/107
- Make error message more explicit that an executable is not found and… by @kdund in https://github.com/XENONnT/alea/pull/109
- Read poi and expectation directly from output_filename to accelerate NeymanConstructor by @dachengx in https://github.com/XENONnT/alea/pull/108
- Direct call of used parameters of model by @dachengx in https://github.com/XENONnT/alea/pull/112
- Add function to get all sources names from all likelihoods by @dachengx in https://github.com/XENONnT/alea/pull/111
- Make sure values of parameters that need re-initialization are not changed by @hammannr in https://github.com/XENONnT/alea/pull/110
- Allow all computation names by @kdund in https://github.com/XENONnT/alea/pull/116
- Debug for the missing argument in _read_poi by @dachengx in https://github.com/XENONnT/alea/pull/118
- Remove unnecessary warning given new ptype constraints by @dachengx in https://github.com/XENONnT/alea/pull/119

Full Changelog: https://github.com/XENONnT/alea/compare/v0.2.0...v0.2.1

0.2.0 / 2023-09-01

- Proposal to use pre-commit for continuous integration by @dachengx in https://github.com/XENONnT/alea/pull/78
- Example notebooks by @hammannr in https://github.com/XENONnT/alea/pull/75
- Simplify TemplateSource, CombinedSource and SpectrumTemplateSource by @dachengx in https://github.com/XENONnT/alea/pull/69
- [pre-commit.ci] pre-commit autoupdate by @pre-commit-ci in https://github.com/XENONnT/alea/pull/80
- [pre-commit.ci] pre-commit autoupdate by @pre-commit-ci in https://github.com/XENONnT/alea/pull/82
- Add Submitter and NeymanConstructor by @dachengx in https://github.com/XENONnT/alea/pull/79

New Contributors

- @pre-commit-ci made their first contribution in https://github.com/XENONnT/alea/pull/80

Full Changelog: https://github.com/XENONnT/alea/compare/v0.1.0...v0.2.0

0.1.0 / 2023-08-11

- Unify and clean code style and docstring by @dachengx in https://github.com/XENONnT/alea/pull/68
- First runner manipulating statistical model by @dachengx in https://github.com/XENONnT/alea/pull/50
- Set best_fit_args to confidence_interval_args if None by @kdund in https://github.com/XENONnT/alea/pull/76
- Livetime scaling by @kdund in https://github.com/XENONnT/alea/pull/73

Full Changelog: https://github.com/XENONnT/alea/compare/v0.0.0...v0.1.0

0.0.0 / 2023-07-28

- readme update with pointer to previous work in lieu of commit history by @kdund in https://github.com/XENONnT/alea/pull/8
- Adds a statistical model base class (under construction) by @kdund in https://github.com/XENONnT/alea/pull/7
- change folder/module name by @kdund in https://github.com/XENONnT/alea/pull/9
- Move submission_script.py also from binference to here by @dachengx in https://github.com/XENONnT/alea/pull/10
- Add simple gaussian model by @hammannr in https://github.com/XENONnT/alea/pull/12
- Parameter class by @hammannr in https://github.com/XENONnT/alea/pull/19
- Confidence intervals by @kdund in https://github.com/XENONnT/alea/pull/27
- Update README.md by @kdund in https://github.com/XENONnT/alea/pull/29
- Init code style checking, pytest, and coverage by @dachengx in https://github.com/XENONnT/alea/pull/31
- Add templates for wimp example by @hoetzsch in https://github.com/XENONnT/alea/pull/30
- Removes all hash for parameters not used for each source, and for all… by @kdund in https://github.com/XENONnT/alea/pull/37
- First implementation of an nT-like likelihood by @hammannr in https://github.com/XENONnT/alea/pull/32
- Check if some parameter is not set as guess when fitting by @kdund in https://github.com/XENONnT/alea/pull/44
- Fix likelihood_names check in statistical_model.store_data to handle unnamed likelihoods by @kdund in https://github.com/XENONnT/alea/pull/45
- Create pull_request_template.md by @dachengx in https://github.com/XENONnT/alea/pull/46
- Codes style cleaning by @dachengx in https://github.com/XENONnT/alea/pull/49
- First runner manipulating statistical model by @dachengx in https://github.com/XENONnT/alea/pull/47
- Run test on main not master by @dachengx in https://github.com/XENONnT/alea/pull/55
- Simplify file structure by @dachengx in https://github.com/XENONnT/alea/pull/51
- Move blueice_extended_model to models by @dachengx in https://github.com/XENONnT/alea/pull/56
- Change data format to only use structured arrays by @kdund in https://github.com/XENONnT/alea/pull/42
- Another fitting test by @kdund in https://github.com/XENONnT/alea/pull/59
- Add first tests module and file indexing system by @dachengx in https://github.com/XENONnT/alea/pull/54
- Shape parameters by @hammannr in https://github.com/XENONnT/alea/pull/58
- Recover examples folder, update file indexing, add notebooks folder, remove legacies by @dachengx in https://github.com/XENONnT/alea/pull/61
- Remove pdf_cache folder before pytest by @dachengx in https://github.com/XENONnT/alea/pull/65
- Make 0.0.0, initialize documentation structure based on readthedocs, add badges to README by @dachengx in https://github.com/XENONnT/alea/pull/66

New Contributors

- @kdund made their first contribution in https://github.com/XENONnT/alea/pull/8
- @dachengx made their first contribution in https://github.com/XENONnT/alea/pull/10
- @hammannr made their first contribution in https://github.com/XENONnT/alea/pull/12
- @hoetzsch made their first contribution in https://github.com/XENONnT/alea/pull/30
|
alearn
|
Failed to fetch description. HTTP Status Code: 404
|
aleat1
|
Aleatoryous 1

The original Aleatoryous project.

WARNING! Newer generation available

Since January 20, 2021, the 3rd generation of Aleatoryous objects is available at PyPI. We strongly recommend you to install the latest version of the aleat3 library for these reasons:

- aleat1 does not contain these features from aleat3:
  - the "Roulette" algorithm
  - the "CoinToBinary" function
- aleat3 operations are much easier and faster.
|
aleat2
|
Aleatoryous 2

The second generation of the Aleatoryous project.

WARNING! Newer generation available

Since January 20, 2021, the 3rd generation of Aleatoryous objects is available at PyPI. We strongly recommend you to install the latest version of the aleat3 library for these reasons:

- aleat2 does not contain this feature from aleat3:
  - the "CoinToBinary" function
- aleat3 operations are easier and faster, and also much cleaner.

We are only distributing versions 1 and 2 to record releases, but we only recommend the 3rd version of Aleatoryous.
|
aleat3
|
Aleatoryous 3

This is the 3rd generation of aleatory objects, built by Diego Ramirez.

Introduction

The Aleatoryous package allows you to build aleatory syntax objects:

- Dice: aleatory.dice
- Coin: aleatory.coin
- Roulette: aleatory.roulette

By using the Python library random, Aleatoryous objects can build many solutions for problems where aleatory numbers or specific output are needed.

This package is built to be used with Python versions 3.5 to 3.9.

To enjoy the Aleatoryous materials, you must download the package from PyPI and install it with pip in one of these ways:

pip install aleat3_[version]_[platform].whl
pip install aleat3_[version]_[platform].tar.gz

Also, you can install it with pip from the Internet without downloading:

pip install aleat3==[version]
pip install --upgrade aleat3
pip install aleat3

To view more

Check out more docs here:

- General documentation: get some useful info about the package, issues, etc.
- Contributing page: learn how to contribute to aleat3.
- Some aleat3 examples
|
aleatoire
|
title: Aleatoire
description: A set of tools for the probabilistic analysis of systems.
...

Aleatoire

A set of tools for the probabilistic analysis of systems.

A rough compilation of general-purpose system reliability functions and classes written over the course of a semester. This package implements probability transformations composed of marginal distributions which are defined using objects from the popular scipy.stats statistical library. This package is largely built upon the framework for reliability computations laid out in CalRel and FERUM.

Installation

pip install aleatoire

You can also install the in-development version with:

pip install https://github.com/claudioperez/aleatoire/archive/master.zip

Changelog

0.0.2
- Match version with GitHub tag.

0.0.1
- Fix long description on PyPI.

0.0.0
- First release on PyPI.
|
aleatools
|
No description available on PyPI.
|
aleatora
|
Aleatora

Aleatora is a music composition framework, implemented as a Python library, built around the abstraction of lazy, effectful streams.

What does that mean? Like most audio synthesis frameworks, Aleatora lets you build up complex sounds by connecting generators in an audio graph (function composition + parallel composition). Unlike most, it also lets you build things up horizontally: streams can be composed sequentially, so the audio graph can change over time on its own (based on the computation described in the graph itself).

Additionally, streams may contain any kind of data type, not just samples. So you can use the same basic abstraction, and all the operations that it offers, to work with strings, events, arrays, MIDI data, etc., just as well as with individual audio samples.

Installation

First, set up the environment:

virtualenv venv -p python3  # or pypy3 for better performance
source venv/bin/activate

Then, get the stable version of Aleatora:

pip install aleatora  # for optional features, append a list like `[speech,foxdot]` (or `[all]`)

Or, get the latest version instead:

pip install git+https://github.com/ijc8/aleatora.git

To ensure installation succeeded and that you can get sound out, try playing a sine tone:

from aleatora import *
play(osc(440))

Documentation
|
aleatorio
|
More information about the sample project
|
aleatorpy
|
No description available on PyPI.
|
aleatory
|
aleatory

Git Homepage | Pip Repository | Documentation

Overview

The aleatory (/ˈeɪliətəri/) Python library provides functionality for simulating and visualising stochastic processes. More precisely, it introduces objects representing a number of continuous-time stochastic processes $X = (X_t : t\geq 0)$ and provides methods to:

- generate realizations/trajectories from each process over discrete time sets
- create visualisations to illustrate the processes' properties and behaviour

Currently, aleatory supports the following processes:

- Brownian Motion
- Geometric Brownian Motion
- Ornstein–Uhlenbeck
- Vasicek
- Cox–Ingersoll–Ross
- Constant Elasticity
- Bessel Process
- Squared Bessel Process

Installation

aleatory is available on PyPI and can be installed as follows:

pip install aleatory

Dependencies

aleatory relies heavily on:

- numpy for random number generation
- scipy and statsmodels for support for a number of one-dimensional distributions
- matplotlib for creating visualisations

Compatibility

aleatory is tested on Python versions 3.8, 3.9, and 3.10.

Quick-Start

aleatory allows you to create fancy visualisations of different stochastic processes in an easy and concise way. For example, the following code

from aleatory.processes import BrownianMotion

brownian = BrownianMotion()
brownian.draw(n=100, N=100, colormap="cool", figsize=(12,9))

generates a chart like this. For more examples, visit the Quick-Start Guide.

Thanks for visiting! ✨

Connect with me via: 🦜 Twitter | 👩🏽💼 LinkedIn | 📸 Instagram | 👾 Personal Website

⭐️ If you like this project, please give it a star! ⭐️
|
aleatory-words-api
|
No description available on PyPI.
|
alebot
|
alebot
------

alebot is a super lean and highly modularized IRC bot that lets you extend it in a pythonic way, using classes and decorators. It supports both hooks and background tasks in an easy and fail-safe way.

Links
`````

* `source code <https://github.com/alexex/alebot>`_
* `docs <https://alebot.readthedocs.org/>`_
|
alectiocli
|
Alectio-CLI 🚀 🚀Alectio-cliisAlectio's's official command-line interface (CLI)wrapper. It allows you to create projects, experiments and do hybrid labeling tasks.ConfigurationSetupThese setup instructions are for CLI usage only.We recommend installingalectiocliin your virtualenv rather than doing a global installation so you don't run into unexpected behavior with mismatching versions.pipinsallalectiocliAuthenticationFor usage of cli, you first need to authenticate with Alectio's server.alectio-cli--loginThis will redirect you toAlectio'splatform and once you login, it will generate a authentication token, which you need to copy and paste it on the command line.If you wish to force login or refresh token after the authentication is done, use:alectio-cli--login--refreshUsageIf you wish to get more information about any command withinalectio-cli, you can executealectio-cli --helpcommand.Commonalectio-cli --helpOptionslogin: Login for token authenticationproject: A sub-app which handles all project related tasksexperiment: A sub-app which handles all experiment related tasksCommonalectio-cli project --helpOptionscreate: Create an alectio projectlist: List all the projects of the userdelete: Delete a projecthybrid-labeling:Projection Creation:alectio-cliprojectcreate[OPTIONS]YAML_FILELABEL_FILEA folder Alectio_cli_sample_yamls will be generated which will contain sample YAML files for doing various tasks like:Yaml file for creating an experimentYaml file for manual curation experimentYaml file for Hybrid Labeling TaskSample YAML FILE for creating projectname: "Sample Project"
description: "test "
type: "Image Classification"
alectio-dataset: "false"
test_len: 0
s3_bucket:
data_name:
alectio_dir:
pre_loaded_model:
project_type: "curation"
data_format: "image"
docker_url:
premise: "true"
train_len: 10000
dataset_source: "on-prem"
labels: True

List Projects:

alectio-cli project list

Project Deletion:

alectio-cli project delete <proj_id>

Hybrid Labeling Task:

alectio-cli project hybrid-labeling [OPTIONS] YAML_FILE

Experiment

Common

alectio-cli experiment --help

Options:
- create: Create an Alectio experiment
- run: Run an Alectio experiment

Experiment Creation:

alectio-cli experiment create [OPTIONS] YAML_FILE

Sample YAML file for creating an experiment:

project_id: '22977034a12b11ecaf91cbe75e9b38c0'
name: 'Test Experiment'
n_records: 100
limits: False
qs: [] #Empty for auto_al,ml_driven. To be filled for manual curation.
seed: 120
labeling_type: pre_labeled
is_sdk_setup: False
is_curr_fully_labeled: False
assoc_labeling_task_id: ''

Run Experiment:

alectio-cli experiment run [OPTIONS] PYTHON_FILE EXPERIMENT_FILE

ℹ️ Note that this CLI is still in the development phase; instability might occur.

Future

We are continually expanding and improving the offerings of this application. Some interactions may change over time, but we will do our best to retain backwards compatibility.
|
alectio-kms
|
A CLI interface to get AlectioSDK token
|
alectiolite
|
Alectiolite

New SDK format

Brief instructions:

conda create -n myenv python=3.6
conda activate myenv

Testing

Temporary installation instructions until stable and publishable:

python setup.py sdist bdist_wheel
pip install .

Temporary uninstallation instructions:

rm -fr <path-to-alectio-lite-installation>
rm -rf ./build
rm -rf ./alectiolite.egg-info
rm -rf ./dist

TODO

- create docstrings
|
alectio-sdk
|
Requirements

- Python3 (Required)
- PIP3 (Required)
- Ubuntu 16.04+ / MacOS / Windows 10
- GCC / C++ (will depend on the OS you are using; on Ubuntu and MacOS they come by default, but some flavours of Linux distributions like Amazon Linux or Red Hat Linux might not have GCC or C++ related libraries installed)

For this tutorial, we are assuming you are using Python3 and PIP3. Also, make sure you have the necessary build tools installed (these might vary from OS to OS). If you get any errors while installing any dependent packages, feel free to reach out to us, but most of them can be quickly solved by a simple Google search.

Installation

1. Key Management

If you have not already created your Client ID and Client Secret, do so by visiting https://pro.alectio.com/. Log in there and create a project and an experiment. An experiment token will be generated. Enter your experiment token in main.py (refer to the example in step 3) to authenticate.

2. Set up a virtual environment

We recommend setting up a virtual environment. For example, you can use Python's built-in virtual environment via:

python3 -m venv env
source env/bin/activate

3. Download the examples

All examples are available in the examples directory.

4. Install the requirements in examples

pip install -r requirements.txt

5. Run Examples

The remaining installation instructions are detailed in the examples directory. We cover one example for image classification.

Alectio SDK

AlectioSDK is a package that enables developers to build an ML pipeline as a Flask app to interact with Alectio's platform. It is designed for Alectio's clients, who prefer to keep their model and data on their own server.

The package is currently under active development. More functionality that aims to enhance robustness will be added soon, but for now, the package provides a class alectio_sdk.sdk.Pipeline that interfaces with customer-side processes in a consistent manner. Customers need to implement 4 processes as Python functions:

- A process to train the model
- A process to test the model
- A process to apply the model to infer unlabeled data
- A process to assign each data point in the dataset to a unique index (refer to one of the examples to see how)

A Pipeline can be created inside the main.py file using the following syntax:

import yaml

from alectio_sdk.sdk import Pipeline
from processes import train, test, infer, getdatasetstate

# All the variables can be declared inside the .yaml file
with open("./config.yaml", "r") as stream:
    args = yaml.safe_load(stream)

# Initialising the Experiment Pipeline
AlectioPipeline = Pipeline(
    name=args["exp_name"],
    train_fn=train,  # A process to train the model
    test_fn=test,  # A process to test the model
    infer_fn=infer,  # A process to apply the model to infer unlabeled data
    getstate_fn=getdatasetstate,  # A process to assign each data point in the dataset to a unique index
    args=args,  # Any arguments that the user wants to use inside the train, test, and infer functions
    token="xxxxxx7041a6xxxxx7948cexxxxxxxx",  # Experiment token
    multiple_initialisations={"seeds": [], "limit_value": 0},  # Multiple seed initialisation feature
)

Refer to the alectio examples for more clarity on the use of the
Pipeline class.Train the ModelThe logic for training the model should be implemented in this process. The function should look like this:deftrain(args,labeled,resume_from,ckpt_file):"""Training FunctionInput args:args* # Arguments passed to Alectio Pipelinelabeled: list # List of labeled indices for trainingresume_from: str # Path to last checkpoint fileckpt_file: str # Path to saved modelReturns:Noneoroutput_dict: dict # Labels and Hyperparams"""# implement your logic to train the model# with the selected data indexed by `labeled`# lbs <- dictionary of indices of train data and their ground-truthreturn{'labels':lbs,'hyperparams':hyperparameters}The name of the function can be anything you like. It takes an argument as shown in the example above.keyvalueresume_froma string that specifies which checkpoint to resume fromckpt_filea string that specifies the name of checkpoint to be saved for the current looplabeleda list of indices of selected samples used to train the model in this loopDepending on your situation, the samples indicated in labeled might not be labeled (despite the variable name). We call it labeled because, in the active learning setting, this list represents the pool of samples iteratively labeled by the human oracle.Test the ModelThe logic for testing the model should be implemented in this process. 
The function representing this process should look like this:deftest(args,ckpt_file):"""testing functionInput args:args* # Arguments passed to Alectio Pipelineckpt_file: str # Path to saved modelReturns:output_dict: dict # Preds and Labels"""# implement your testing logic here# put the predictions and labels into# two dictionaries# lbs <- dictionary of indices of test data and their ground-truth# prd <- dictionary of indices of test data and their predictionreturn{'predictions':prd,'labels':lbs}The test function takes arguments as shown in the example above.keyvalueckpt_filea string that specifies which checkpoint to test modelThe test function needs to return a dictionary with two keys:keyvaluepredictionsa dictionary of an index and a prediction for each test samplelabelsa dictionary of an index and a ground truth label for each test sampleThe format of the values depends on the type of ML problem. Please refer to the examples directory for details.Apply InferenceThe logic for applying the model to infer the unlabeled data should be implemented in this process. 
The function representing this process should look like this:definfer(args,unlabeled,ckpt_file):"""Inference FunctionInput args:args* # Arguments passed to Alectio Pipelineunlabeled: list # List of labeled indices for inferenceckpt_file: str # Path to saved modelreturns:output_dict: dict"""# implement your inference logic here# outputs <- save the output from the model on the unlabeled data as a dictionaryreturn{'outputs':outputs}The infer function takes an argument payload, which is a dictionary with 2 keys:keyvalueckpt_filea string that specifies which checkpoint to use to infer on the unlabeled dataunlabeleda list of of indices of unlabeled data in the training setThe infer function needs to return a dictionary with one key.keyvalueoutputsa dictionary of indexes mapped to the models output before an activation function is appliedFor example, if it is a classification problem, return the outputbeforeapplying softmax.
For more details about the format of the output, please refer to theexamplesdirectory.config.yamlPut in all the requirements that are required for the model to train. This will be read and used in processes.py when the model trains. For example if config.yaml looks like this:LOG_DIR:"./log"DATA_DIR:"./data"EXPT_DIR:"./log"exptname:"ManualAL"# Model configsbackbone:"Resnet101"description:"Pedestrian detection"epochs:10..You can access them inside your any of the above 4 processes as lets say args["backbone"] , args["description"] etc.SDK- Features1. Tracking CO2 emissionsThe alectio SDK is capable of tracking the CO2 emissions during the experiment. The SDK uses an open-source package called code carbon to track the CO2 emissions along with the (CPU, GPU, and RAM) usage. This data is tracked and synced, once the experiment ends, with the user account where the user can see the total CO2 emission on his dashboard.2. Time-Saved InformationThe SDK uses linear interpolation to estimate the time that a user saved to train his model in each active learning cycle. The time-saved information is logged after each AL cycle and gets synced with the platform at the end of the experiment. The time-saved insights can be seen on the user dashboard.3. Storing HyperparametersThe SDK has the ability to track the hyperparameters for each AL cycle. To use this feature the user just needs to return a dictionary of their hyperparameters. 
Currently, the SDK supports a limited number of hyperparameters, the list of these parameters is shown below:hyperparameter_names=["optimizer_name",# Name of the optimizer used"loss",# Loss of the training process"running_loss",# Running Loss"epochs",# Number of epochs for which the model was trained"batch_size",# batch size on which the model was trained"loss_function",# name of loss function used for training"activation",# List of activation functions used"optimizer",# Can be a state_dict in case of Pytorch]The syntax for storing these values is shown in the train function section.4. Running Multiple Seed InitializationThe SDK can also help the user choose the right seed for his experiment by training his model on a range of seed values and selecting the best seed depending on the performance of models on these seed values. In order to use this feature the user can just use the multiple_initialisations argument of the Alectio Pipeline. The syntax is as shown below:fromalectio_sdk.sdkimportPipelineAlectioPipeline=Pipeline(name=args["exp_name"],train_fn=train,test_fn=test,infer_fn=infer,getstate_fn=getdatasetstate,args=args,token="xxxxxx7041a6xxxxx7948cexxxxxxxx",multiple_initialisations={"seeds":[10,42,36,78],"limit_value":4000},)The input of this argument is a dict with 2 keys.keyvalueseeda list containing different seed values you want to test your model on.limit_valueThe number of samples from which you want to select the training samples from.5. Accessing Alectio Public DatasetsThe user can access Alectio Public Datasets usin the Alectio SDK. The user needs to select the public dataset he wants to use during creating his/her project on the Alectio platform. Alectio Public datasets contain training, validation and testing data. The code snippet to use the Public datasets is as given below.1. 
Pytorch# Pytorch Syntaximporttorchvisionfromtorchvisionimporttransformsfromalectio_sdk.sdk.alectio_datasetimportAlectioDatasetfromtorch.utils.dataimportDataLoader,Subset# create a public dataset object# token = experiment token# root = directory in which you want to download your dataset# framework = pytorch/tensorflowalectio_dataset=AlectioDataset(token="your_exp_token_goes_here",root="./data",framework="pytorch")# train datasettrain_transforms=transforms.Compose([transforms.Resize((224,224)),transforms.RandomHorizontalFlip(),transforms.ToTensor(),transforms.Normalize(mean=[0.485,0.456,0.406],std=[0.229,0.224,0.225]),])# call the get dataset function# dataset_type = train/test/validation# transforms = augmentations/transformations you want to perform# Returns# DataLoader Object | Length of dataset | Mapping of labels and indicestrain_dataset,train_dataset_len,train_class_to_idx=alectio_dataset.get_dataset(dataset_type="train",transforms=train_transforms)2. Tensorflow# Tensorflow Syntaximporttensorflowastffromalectio_sdk.sdk.alectio_datasetimportAlectioDataset# create a public dataset object# token = experiment token# root = directory in which you want to download your dataset# framework = pytorch/tensorflowalectio_dataset=AlectioDataset(token="your_exp_token_goes_here",root="./data",framework="tensorflow")# train dataset# all transforms supported by Tensoflow ImageDataGenerator can be added to the transform dicttrain_transforms=dict(featurewise_center=False,samplewise_center=False,featurewise_std_normalization=False,samplewise_std_normalization=False,zca_whitening=False,channel_shift_range=0.0,fill_mode='nearest',cval=0.0,horizontal_flip=False,vertical_flip=False,rescale=None,preprocessing_function=None,data_format=None,)# call the get dataset function# dataset_type = train/test/validation# transforms = dict of augmentations/transformations you want to perform# Returns# Imagedatagenerator Object | Length of dataset | Mapping of labels and 
indicestrain_dataset,train_dataset_len,train_class_to_idx=alectio_dataset.get_dataset(dataset_type="train",transforms=train_transforms)
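Of the four required callbacks, getdatasetstate is the only one not illustrated above. A minimal sketch, assuming the same args dict as the other callbacks and a flat directory of sample files (the exact contract may differ; see the examples directory for the authoritative version):

```python
import os

def getdatasetstate(args):
    # Assign each data point in the dataset a stable, unique integer index.
    # Assumes args["DATA_DIR"] points at a flat directory of samples (an
    # assumption for this sketch); sorting keeps the index -> file mapping
    # deterministic across runs, so indices in `labeled`/`unlabeled` lists
    # always refer to the same sample.
    files = sorted(os.listdir(args["DATA_DIR"]))
    return {index: filename for index, filename in enumerate(files)}
```

Determinism matters here: the indices returned by this function are the same ones the platform later passes back in the labeled and unlabeled lists.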
|
alecto
|
Alecto: Advanced Password Hashing ToolAlecto is an advanced command-line utility designed for sophisticated password hashing, offering a comprehensive set of features and algorithms to bolster security. Below, you'll find an in-depth guide on Alecto's features, advanced usage, supported algorithms, and practical examples.Features1.Extensive Algorithm SupportAlecto boasts support for a diverse array of hashing algorithms, providing users with the flexibility to tailor their security measures to specific requirements. Here is a list of available algorithms:apr_md5_cryptargon2bcryptbcrypt_sha256bigcryptblake2bblake2sbsd_nthashbsdi_cryptcisco_type7crypt16cta_pbkdf2_sha1des_cryptdjango_pbkdf2_sha1django_pbkdf2_sha256django_salted_sha1dlitz_pbkdf2_sha1fshpgrub_pbkdf2_sha512lmhashmd4md5md5-sha1md5_cryptmssql2000mssql2005mysql323mysql41nthashoracle11pbkdf2_hmac_sha1pbkdf2_hmac_sha256pbkdf2_hmac_sha512pbkdf2_sha256phpassripemd160scramscryptsha128sha1_cryptsha224sha256sha256_cryptsha384sha3_224sha3_256sha3_384sha3_512sha512sha512_cryptshake_128shake_256sm3spookyhashsun_md5_cryptwhirlpoolxxhashbcrypt_sha256django_salted_sha1ldap_md5ldap_pbkdf2_sha1ldap_pbkdf2_sha256ldap_pbkdf2_sha512siphash2.Algorithm SpecificationDirectly specify the hashing algorithm:pythonalecto.py-a<algorithm><password>3.Custom Salt IntegrationElevate password security by introducing custom salts into the hashing process. Alecto seamlessly accommodates custom salts, providing users with granular control over the salting mechanism, a crucial aspect of robust password storage.4.Terminal Clarity EnhancementAlecto includes a terminal-clearing functionality, optimizing user experience by ensuring a clean and organized interface.5.Fine-tuned Hash Length SpecificationSpecific to shake_128 and shake_256 algorithms, Alecto enables users to precisely specify the hash length using the--hash-lengthoption. 
This advanced feature allows for tailoring hash outputs to exact requirements.

Advanced Usage

1. Parallel Salting

For enhanced security, leverage both custom and default salts simultaneously:

python alecto.py <password> -a <algorithm> --salt --both-salt

2. Custom Salt Usage

Utilize a custom salt in the hashing process:

python alecto.py <password> -a <algorithm> --salt --custom-salt

3. Custom Byte Length for SHAKE128 and SHAKE256

python alecto.py <password> -a <algorithm> --hash-length <length>

Examples

1. Custom Algorithm and Salt

python alecto.py -a sha3_256 --custom-salt mypassword

2. Parallel Salting with shake_256

python alecto.py --salt --both-salt -a shake_256 --hash-length 64 mypassword

Considerations

- When using a custom salt, it is seamlessly integrated into the hashing process.
- For algorithms like argon2 and bcrypt, employing a custom salt enhances overall security.
- Default salts are automatically generated when using the --salt or --both-salt options.

NOTE: Some hashes are not available on every system, so if you face an error like "unsupported algorithm", it is probably because your system does not have that algorithm.

Disclaimer: Alecto is intended for educational and security research purposes. Users are advised to employ the tool responsibly and adhere to ethical guidelines.
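Alecto's internals are not shown here, but the salting and --hash-length behaviour it describes can be reproduced with Python's standard hashlib. A minimal sketch — the function name and 16-byte salt size are assumptions for illustration, not Alecto's code:

```python
import hashlib
import os

def shake_digest(password, salt=None, length=64):
    # SHAKE-256 is an extendable-output function, so the digest length is
    # caller-chosen -- this mirrors the --hash-length option described above.
    if salt is None:
        salt = os.urandom(16)  # default salt, generated automatically
    digest = hashlib.shake_256(salt + password.encode()).hexdigest(length)
    return salt, digest

salt, digest = shake_digest("mypassword", length=64)
print(len(digest))  # 64 output bytes -> 128 hex characters
```

Storing the salt alongside the digest is what makes later verification possible: re-hash the candidate password with the stored salt and compare digests.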
|
alectryon
|
A library to process Coq snippets embedded in documents, showing goals and messages for each Coq sentence. Also a literate programming toolkit for Coq. The goal of Alectryon is to make it easy to write textbooks, blog posts, and other documents that mix Coq code and prose.Alectryon is typically used in one of three ways:As a library, through its Python APIAs a Docutils/Sphinx extension, allowing you to include annotated snippets into your reStructuredText and Markdown documents. During compilation, Alectryon collects all.. coq::code blocks, feeds their contents to Coq, and incorporates the resulting goals and responses into the final document.As a standalone compiler, allowing you to include prose delimited by special(*| … |*)comments directly into your Coq source files (in the style of coqdoc). When invoked, Alectryon translates your Coq file into a reStructuredText document and compiles it using the standard reStructuredText toolchain.For background information, check out thequickstart guideon the MIT PLV blog, theSLE2020 paper(open access) and itslive examples, or theconference talk.Alectryon is free software under a very permissive license. If you use it, please remember tocite it, and please let me know!Some examples of use in the wild are linkedat the bottom of this page. Please add your own work by submitting a PR!SetupTo install from OPAM and PyPI:opam install"coq-serapi>=8.10.0+0.7.0"(from theCoq OPAM archive)python3-mpip install alectryonTo install the latest version from Git, usepython3-mpip installgit+https://github.com/cpitclaudel/alectryon.git. To install from a local clone, usepython3-mpip install ..A note on dependencies: theserapimodule only depends on thecoq-serapiOPAM package.dominateis used inalectryon.htmlto generate HTML output, andpygmentsis used by the command-line application for syntax highlighting. 
reStructuredText support requiresdocutils(and optionallysphinx); Markdown support requiresmyst_parser(docs); Coqdoc support requiresbeautifulsoup4. Support for Coq versions follows SerAPI; Coq ≥ 8.10 works well and ≥ 8.12 works best.UsageAs a standalone programRecipesTry these recipes in therecipesdirectory of this repository (for each task I listed two commands: a short one and a longer one making everything explicit):Generate an interactive webpage from a literate Coq file with reST comments (Coqdoc style):alectryon literate_coq.v
alectryon --frontend coq+rst --backend webpage literate_coq.v -o literate_coq.htmlGenerate an interactive webpage from a plain Coq file (Proof General style):alectryon --frontend coq plain.v
alectryon --frontend coq --backend webpage plain.v -o plain.v.htmlGenerate an interactive webpage from a Coqdoc file (compatibility mode):alectryon --frontend coqdoc coqdoc.v
alectryon --frontend coqdoc --backend webpage coqdoc.v -o coqdoc.htmlGenerate an interactive webpage from a reStructuredText document containing.. coq::directives (coqrst style):alectryon literate_reST.rst
alectryon --frontend rst --backend webpage literate_reST.rst -o literate_reST.htmlGenerate an interactive webpage from a Markdown document written in theMySTdialect, containing.. coq::directives:alectryon literate_MyST.md
alectryon --frontend md --backend webpage literate_MyST.md -o literate_MyST.htmlTranslate a reStructuredText document into a literate Coq file:alectryon literate_reST.rst -o literate_reST.v
alectryon --frontend rst --backend coq+rst literate_reST.rst -o literate_reST.vTranslate a literate Coq file into a reStructuredText document:alectryon literate_coq.v -o literate_coq.v.rst
alectryon --frontend coq+rst --backend rst literate_coq.v -o literate_coq.v.rstRecord goals and responses for fragments contained in a JSON source file:alectryon fragments.v.json
alectryon --frontend coq.json --backend json fragments.json -o fragments.v.io.jsonRecord goals and responses and format them as HTML for fragments contained in a JSON source file:alectryon fragments.v.json -o fragments.v.snippets.html
alectryon --frontend coq.json --backend snippets-html fragments.json -o fragments.v.snippets.htmlCommand-line interfacealectryon [-h] […]
[--frontend {coq,coq+rst,coqdoc,json,md,rst}]
[--backend {coq,coq+rst,json,latex,rst,snippets-html,snippets-latex,webpage,…}]
input [input ...]Usealectryon--helpfor full command line details.Eachinputfile can be.v(a Coq source file, optionally including reStructuredText in comments delimited by(*| … |*)),.json(a list of Coq fragments),.rst(a reStructuredText document including.. coq::code blocks), or.md(a Markdown/MyST document including{coq}code blocks). Each input fragment is split into individual sentences, which are executed one by one (all code is run in a single Coq session).One output file is written per input file. Each frontend supports a subset of all backends.With--backendwebpage(the default for most inputs), output is written as a standalone webpage named<input>.html(forcoq+rstinputs) or<input>.v.html(for plaincoqinputs).With--backendsnippets-html, output is written to<input>.snippets.htmlas a sequence of<preclass="alectryon-io">blocks, separated by<!--alectryon-block-end-->markers (there will be as many blocks as entries in the input list ifinputis a.jsonfile).With--backendjson, output is written to<input>.io.jsonas a JSON-encoded list of Coq fragments (as many as ininputifinputis a.jsonfile). Each fragment is a list of records, each with a_typeand some type-specific fields. Here is an example:Input (minimal.json):["Example xyz (H: False): True. (* ... *) exact I. Qed.","Print xyz."]Output (minimal.json.io.json) after runningalectryon--writerjson minimal.json:[// A list of processed fragments[// Each fragment is a list of records{// Each record has a type, and type-specific fields"_type":"sentence","sentence":"Example xyz (H: False): True.","responses":[],"goals":[{"_type":"goal","name":"2","conclusion":"True","hypotheses":[{"_type":"hypothesis","name":"H","body":null,"type":"False"}]}]},{"_type":"text","string":" (* ... 
*) "},{"_type":"sentence","sentence":"exact I.","responses":[],"goals":[]},{"_type":"text","string":" "},{"_type":"sentence","sentence":"Qed.","responses":[],"goals":[]}],[// This is the second fragment{"_type":"sentence","sentence":"Print xyz.","responses":["xyz = fun _ : False => I\n : False -> True"],"goals":[]}]]The exit code produced by Alectryon indicates whether the conversion succeeded:0for success,1for a generic error, and10+ the level of the most severe Docutils error if using a Docutils-based pipeline (hence10is debug,11is info,12is warning,13is error, and14is severe error). Docutils errors at levels belowexit_status_level(default: 3) do not affect the exit code, so level10,11, and12are not used by default.As a libraryUsealectryon.serapi.annotate(chunks: List[str]), which returns an object with the same structure as the JSON above, but using objects instead of records with a_typefield:>>>fromalectryon.serapiimportannotate>>>annotate(["Example xyz (H: False): True. (* ... *) exact I. Qed.","Check xyz."])[# A list of processed fragments[# Each fragment is a list of records (each an instance of a namedtuple)Sentence(contents='Example xyz (H: False): True.',messages=[],goals=[Goal(name=None,conclusion='True',hypotheses=[Hypothesis(names=['H'],body=None,type='False')])]),Text(contents=' (* ... *) '),Sentence(contents='exact I.',messages=[],goals=[]),Text(contents=' '),Sentence(contents='Qed.',messages=[],goals=[])],[# This is the second fragmentSentence(contents='Check xyz.',messages=[Message(contents='xyz\n: False -> True')],goals=[])]]The results ofannotatecan be fed toalectryon.html.HtmlGenerator(highlighter).gen()to generate HTML (with CSS classes defined inalectryon.css). 
Passhighlighter=alectryon.pygments.highlight_htmlto use Pygments, or any other function from strings todominatetags to use a custom syntax highlighter.As a docutils or Sphinx moduleWith blogs (Pelican, Nikola, Hugo, etc.)Include the following code in your configuration file to setup Alectryon’sdocutilsextensions:importalectryon.docutilsalectryon.docutils.setup()This snippet registers a.. coq::directive, which feeds its contents to Alectryon and displays the resulting responses and goals interleaved with the input and a:coq:role for highlighting inline Coq code. It also replaces the default Pygments highlighter for Coq with Alectryon’s improved one, and sets:coq:as the default role. Seehelp(alectryon.docutils)for more information.To ensure that Coq blocks render properly, you’ll need to tell your blogging platform to includealectryon.css. Using a git submodule or vendoring a copy of Alectryon is an easy way to ensure that this stylesheet is accessible to your blogging software. Alternatively, you can usealectryon.cli.copy_assets. Assets are stored inalectryon.html.ASSETS.PATH; their names are inalectryon.html.ASSETS.CSSandalectryon.html.ASSETS.JS.By default, Alectryon’s docutils module will raise warnings for lines over 72 characters. You can change the threshold or silence the warnings by adjustingalectryon.docutils.LONG_LINE_THRESHOLD. WithPelican, use the following snippet to make warnings non-fatal:DOCUTILS_SETTINGS={'halt_level':3,# Error'warning_stream':None# stderr}I test regularly with Pelican; other systems will likely need minimal adjustments.With SphinxFor Sphinx, add the following to yourconf.pyfile:extensions=["alectryon.sphinx"]If left unset in your config file, the default role (the one you get with single backticks) will be set to:coq:. To get syntax highlighting for inline snippets, create adocutils.conffile with thefollowing contentsalong yourconf.pyfile (seebelowfor details):[restructuredtext parser]
syntax_highlight = shortSetting optionsVarious settings are exposed as global constants in the docutils module:alectryon.docutils.LONG_LINE_THRESHOLD(same as--long-line-threshold)alectryon.docutils.CACHE_DIRECTORY(same as--cache-directory)alectryon.docutils.CACHE_COMPRESSION(same as--cache-compression)alectryon.docutils.HTML_MINIFICATION(same as--html-minification)alectryon.docutils.AlectryonTransform.SERTOP_ARGS(same as--sertop-arg)Controlling outputThe.. coq::directive takes a list of space-separated flags to control the way its contents are displayed:One option controls whether output is folded (fold) or unfolded (unfold). When output is folded, users can reveal the output corresponding to each input line selectively.Multiple options control what is included in the output.
- in: Include input sentences (no-in: hide them)
- goals: Include goals (no-goals: hide them)
- messages: Include messages (no-messages: hide them)
- hyps: Include hypotheses (no-hyps: hide them)
- out: Include goals and messages (no-out: hide them)
- all: Include input, goals, and messages (none: hide them)
- fails (for sentences expected to throw an error): Strip the Fail keyword from the input and remove the "The command has indeed failed with message:" prefix in the output (succeeds: leave input and output as-is)

The default is all fold, meaning that all output is available, and starts folded. The exact semantics depend on the polarity of the first inclusion option encountered: "x y z" means the same as "none x y z", i.e. include x, y, z, and nothing else; "no-x no-y" means "all no-x no-y", i.e. include everything except x and y.

These annotations can also be added to individual Coq sentences (⚠ sentences, not lines), using special comments of the form (* .flag₁ … .flagₙ *) (a list of flags each prefixed with a .):

.. coq::

   Require Coq.Arith. (* .none *)              ← Executed but hidden
   Goal True. (* .unfold *)                    ← Goal unfolded
   Fail exact 1. (* .in .messages .fails *)    ← Goal omitted
   Fail fail. (* .messages .fails *)           ← Error message shown, input hidden

More precise inclusion/exclusion is possible using the marker-placement mini-language described below. For example:

- .h(Inhabited)             Hide all hypotheses that mention Inhabited.
- .g#2.h#IHn                Hide hypothesis IHn in goal 2.
- .g#2.h#*                  Hide all hypotheses of goal 2.
- .h#*.h#IHn                Show only hypothesis IHn.
- .g#*.g#1 .g#3 .g{False}   Show only goals 1, 3, and any goal whose conclusion is False.

Finally, you can use a [lang]=… annotation to choose which Pygments lexer to use to render part of a goal:

- .msg[lang]=haskell        Highlight the bodies of all messages produced by this sentence using the Haskell lexer.

These last two features are experimental; the syntax might change.

Referring to subparts of a proof (the marker-placement mini-language)

Each object in a proof (sentences, goals, messages, hypotheses, conclusions) can be referred to by giving a path that leads to it, written in CSS-inspired notation.
This makes it possible to selectively show, hide, or link to parts of the proof state. In the example below, the markers [α], [β], and [γ] correspond to the paths listed below:

Goal ∀ m n, m + n = n + m. [α]
induction m; intros.
- (* Base case *)
  【n: ℕ ⊢ 0 + n = n + 0 [β]】
  apply plus_n_O.
- (* Induction *)
  【m, n: ℕ; IHm: ∀ n: ℕ, m + n = n + m [γ] ⊢ S m + n = n + S m】

[α] .s(Goal ∀)
    Search for a sentence (.s(…)) by matching its contents.
[β] .s(Base case).ccl
    Search for a sentence (.s(…)) matching "Base case", then match the conclusion (.ccl) of its first goal.
[γ] .s(Induction).h#IHm
    Search for a sentence (.s(…)) matching "Induction", then match the hypothesis IHm by name (.h#…) in the first goal.
    .s(Induction).g#1.h(m + n = n + m)
    Search for a sentence (.s(…)) matching "Induction", select its first goal by number (.g#…), match the hypothesis IHm by searching for its contents (.h(…)).

The full architecture of a path is shown below for reference:

.io#name          ex: .io#intro
    A block of code whose name matches name.
    (.io is optional and defaults to the most recent code block.)
.s(pattern)       ex: .s(Goal True)
    Any sentence matching pattern.
.s{pattern}       ex: .s{forall*m*n*,}
    Any sentence that completely matches pattern.
.in
    The input part of the sentence.
.msg
    Any message.
.msg(str)         ex: .msg(Closed under global context)
    Any message whose text includes str.
.msg{pattern}     ex: .msg{*[*syntax*]*}
    Any message whose complete text matches pattern.
.g#id             ex: .g#1
    Goal number id.
.g#name           ex: .g#base_case
    The goal named name (documentation).
.g(str)           ex: .g(True)
    Any goal whose conclusion includes str.
.g{pattern}       ex: .g{* ++ * ++ * = *}
    Any goal whose complete conclusion matches pattern.
    (.g is optional and defaults to #1.)
.ccl | .name
    The conclusion or name of the goal.
.h#name           ex: .h#IHn
    The hypothesis named name.
.h(str)           ex: .h(Permutation)
    Any hypothesis whose body or type includes str.
.h{pattern}       ex: .h{nat}
    Any hypothesis whose complete body or type matches pattern.
.type | .body | .name
    The type or body or name of the hypothesis.

Plain search patterns (delimited by (…)) are matched literally, anywhere in the term. Other patterns ({…} patterns and #… names) use fnmatch-style matching (? matches any character; * matches any sequence of characters; and [] matches a range of characters), and must match the whole term. Hence:

- To match hypothesis H1 but not H10 nor IH1, use .h#H1.
- To match hypotheses of type nat, but not of type list nat or nat -> nat, use .h{nat}.
- To match hypotheses whose type or body includes Permutation anywhere, use .h(Permutation) or .h{*Permutation*}.
- Etc.

As long as the search term does not contain special characters (*?[]), a plain search ((…)) is the same as an fnmatch-style search with wildcards on both sides ({*…*}).

Finally, you can attach arbitrary key-value pairs to subparts of a goal matched using the marker-placement mini-language by appending [key]=value after the path. This is useful with custom transforms and with the [lang]=… setting to customize highlighting for a given sentence or message.

This feature is experimental; the syntax might change.

Extra roles and directives

For convenience, Alectryon includes a few extra roles and directives:

Markers and marker references

The :mref: role (short for "marker reference") can be used to point the reader to a sentence, a goal, or a hypothesis. The argument is a search pattern written in the marker-placement mini-language; Alectryon locates the corresponding object in the input sent to the prover or in the prover's output, inserts a marker at that point, and replaces the reference with a link to that marker.

For example, the [γ] marker in the example above could be inserted using :mref:`.s(Induction).h#IHm` or :mref:`.s(Induction).g#1.h(m+ n = n + m)`.

By default markers refer to the most recent ..
.. coq:: block, but other blocks can be targeted by name by prepending .io#name to the argument of :mref:. Markers can be customized by setting the :counter-style: option on a custom role derived from :mref:; for example, to use Devanagari numerals:

.. role:: dref(mref)
   :counter-style: ० १ २ ३ ४ ५ ६ ७ ८ ९

More details and examples are given in recipes/references.rst. This feature is experimental: the syntax might change.

Quoted references to goal fragments

The :mquote: role is similar to :mref:, but it inserts a copy of the target instead of a link to it. Targets may only be hypotheses, goal conclusions, or goal or hypothesis names. For example, using :mquote:`.s(Induction).h#IHm.type` in the example above would print the type of IHm, ∀ n: ℕ, m + n = n + m, whereas :mref:`.s(Induction).g#1.h(m + n = n + m).name` would produce only the name of the corresponding hypothesis, IHm. An .. mquote:: directive is also available. It places the quoted elements in a block and preserves indentation and newlines, unlike the :mquote: role (whose output appears inline). More details and examples are given in recipes/references.rst.

Output assertions

Sometimes it is desirable to check that the prover produced the right output, without displaying that output to the user. In these cases, Alectryon's marker-placement mini-language can serve as a poor man's unit test. The massert directive takes one argument (a path prefix) and checks that each line of its body is a valid reference to part of a previous goal. More details and examples are given in recipes/references.rst.

Links to Coq identifiers

:coqid: can be used to link to the documentation or definition of a Coq identifier in an external file.
Some examples:

:coqid:`Coq.Init.Nat.even` → Coq.Init.Nat.even
:coqid:`Coq.Init.Nat#even` → even
:coqid:`a predicate <Coq.Init.Nat.even>` → a predicate
:coqid:`Coq.Arith.PeanoNat#` → Coq.Arith.PeanoNat
:coqid:`a library <Coq.Arith.PeanoNat#>` → a library
:coqid:`Coq.Arith.PeanoNat#Nat.Even` → Nat.Even
:coqid:`a predicate <Coq.Arith.PeanoNat#Nat.Even>` → a predicate

By default, :coqid: only knows how to handle names from Coq's standard library (that is, names starting with Coq., which get translated to links pointing to https://coq.inria.fr/library/). To link to other libraries, you can add entries to alectryon.docutils.COQ_IDENT_DB_URLS, a list of tuples containing a prefix and a templated URL. The URL can refer to $modpath, the part before the last # or . in the fully qualified name, and $ident, the part after the last # or .. Here is an example:

("My.Lib", "https://your-url.com/$modpath.html#$ident")

Alternatively, you can inherit from :coqid: to define new roles. The following defines a new :mylib: role, which assumes that its target is part of My.Lib:

.. role:: mylib(coqid)
:url: https://your-url.com/My.Lib.$modpath.html#$ident

Caching

The alectryon.json module has facilities to cache the prover's output. Caching has multiple benefits:

Recompiling documents with unchanged code is much faster, since Coq snippets do not have to be re-evaluated.
Deploying a website or recompiling a book does not require setting up a complete Coq development environment.
Changes in output can be inspected by comparing cache files. Caches contain just as much information as needed to recreate input/output listings, so they can be checked into source control, making it easy to assess whether a Coq update meaningfully affects a document (it's easy to miss breakage or subtle changes in output otherwise, as when using the copy-paste approach or even Alectryon without caching).

To enable caching on the command line, choose a directory and pass it to --cache-directory. Alectryon will record inputs and outputs in individual JSON files (one .cache file per source file) in subdirectories of that folder. You may pass the directory containing your source files if you'd like to store caches alongside inputs.

From Python, set alectryon.docutils.CACHE_DIRECTORY to enable caching. For example, to store cache files alongside sources in Pelican, use the following code:

import alectryon.docutils
alectryon.docutils.CACHE_DIRECTORY = "content"

With a custom driver

For advanced usage, or to customize Alectryon's command-line interface, you can use a custom driver. Create a new Python file, and add the following to it:

from alectryon import cli
# … any extension code here …
cli.main()

Extensions might include registering additional docutils directives or roles with docutils.directives.register_directive and docutils.roles.register_canonical_role, adding custom syntax highlighting for project-specific tokens using alectryon.pygments.add_tokens, tweaking the operation of the Coq lexer in alectryon.pygments_lexer, or monkey-patching parts of Alectryon's docutils module. See recipes/alectryon_custom_driver.py for a concrete example.

Tips

Prettification

Programming fonts with ligatures are a good way to display prettified symbols without resorting to complex hacks. Good candidates include Fira Code and Iosevka (with the latter, add .alectryon-io { font-feature-settings: 'XV00' 1; } to your CSS to pick Coq-specific ligatures).

Passing arguments to SerAPI

When using the command line interface, you can use the -I, -Q, -R and --sertop-arg flags to specify custom SerAPI arguments, like this:

alectryon -R . Lib --sertop-arg=--async-workers=4

When compiling reStructuredText documents, you can add custom SerAPI arguments in a docinfo section at the beginning of your document, like this:

:alectryon/serapi/args: -R . Lib -I mldir

To set SerAPI's arguments for all input files, modify AlectryonTransform.DRIVER_ARGS["sertop"] in alectryon.docutils. Here's an example that you could use in a Sphinx config file:

from alectryon.docutils import AlectryonTransform
AlectryonTransform.DRIVER_ARGS["sertop"] = ["-Q", "/coq/source/path/,LibraryName"]

Note that the syntax of DRIVER_ARGS["sertop"] is the one of sertop, not the one of coqc (https://github.com/ejgallego/coq-serapi/issues/215).

Adding custom keywords

You can use alectryon.pygments.add_tokens to specify additional highlighting rules, such as custom tactic names. See help(alectryon.pygments.add_tokens) for more details. When compiling reStructuredText documents, you can add per-document highlighting rules to the docinfo section at the beginning of your document, like this:

:alectryon/pygments/coq/tacn: intuition_eauto simplify invert
:alectryon/pygments/coq/tacn-solve: map_tauto solve_eq

Interactivity

Most features in Alectryon's HTML output do not require JavaScript, but extra functionality (including keyboard navigation) can be added by loading assets/alectryon.js (this is done by default). Scripts needed to unminify documents produced with --html-minification (see below) are bundled into the generated HTML and do not need to be loaded separately.

Authoring support

The etc/elisp folder of this directory includes an Emacs mode, alectryon.el, which makes it easy to switch between the Coq and reStructuredText views of a document.

Docutils configuration

You can set Docutils settings for your single-page reST or Coq+reST documents using a docutils.conf file. See the documentation or the example in recipes/. For example, the following changes latex-preamble for the XeTeX backend to custom fonts:

[xetex writer]
latex-preamble:
\setmainfont{Linux Libertine O}
\setsansfont{Linux Biolinum O}
\setmonofont[Scale=MatchLowercase]{Fira Code}

You can also use the DOCUTILSCONFIG environment variable to force Alectryon to use a specific configuration file.

Reducing page and cache sizes (experimental)

Proofs with many repeated subgoals can generate very large HTML files and large caches. In general, these files compress very well, especially with XZ and Brotli (often over 99%), less so with GZip (often over 95%). But if you want to save space at rest, the following options may help:

--html-minification: Replace repeated goals and hypotheses in the generated HTML with back-references and use more succinct markup. Minimal JavaScript is included in the generated page to resolve references and restore full interactivity. Typical results:

4.4M List.html 24.8M Ranalysis3.html
1.4M List.min.html 452K Ranalysis3.min.html

--cache-compression: Compress caches (the default is to use XZ compression). Typical results:

3.2M List.v.cache 21M Ranalysis3.v.cache
66K List.v.cache.xz 25K Ranalysis3.v.cache.xz

From Python, use alectryon.docutils.HTML_MINIFICATION = True and alectryon.docutils.CACHE_COMPRESSION = "xz" to enable minification and cache compression. A minification algorithm for JSON is implemented in json.py but not exposed on the command line.

Diffing compressed caches

Compressed caches kept in a Git repository can be inspected by automatically decompressing them before computing diffs:

# In $GIT_DIR/config or $HOME/.gitconfig:
[diff "xz"]
binary = true
textconv = xzcat
# In .gitattributes:
*.cache.xz diff=xz

Building without SerAPI

Alectryon can compile documents using coqc. Sentences will be split correctly, but goals and messages will not be collected, and error reporting will be less precise. To use this feature, pass --coq-driver=coqc_time to Alectryon.

Building without Alectryon

The alectryon.minimal Python module provides trivial shims for Alectryon's roles and directives, allowing you to continue compiling your documents even if support for Alectryon stops in the future.

Including custom JS or CSS in Alectryon's output

For single-page documents, you can use a .. raw:: directive:

.. raw:: html

   <script src="https://d3js.org/d3.v5.min.js" charset="utf-8"></script>
   <script src="https://dagrejs.github.io/project/dagre-d3/latest/dagre-d3.js"></script>
   <link rel="stylesheet" href="rbt.css">
   <script type="text/javascript" src="rbt.js"></script>

For documents with more pages, you can either move the .. raw part to a separate file and .. include it, or you can use a custom driver: create a new file driver.py and use the following:

import alectryon.html
import alectryon.cli
alectryon.html.ADDITIONAL_HEADS.append('<link rel="stylesheet" href="…" />')
alectryon.cli.main()

But for large collections of related documents, it's likely better to use Sphinx (or some other similar engine). In that case, you can use Sphinx's built-in support for additional JS and CSS: app.add_js_file(js) and app.add_css_file(css).

Special case: MathJax

MathJax is a JavaScript library for rendering LaTeX math within webpages. Properly configuring it can be a bit tricky. If you just want to include math in reStructuredText or Markdown documents, docutils will generally do the right thing: it will generate code to load MathJax from a CDN if you use the :math: role, and it leaves that code out if you don't. If you want to render parts of your Coq code using MathJax, things are trickier.
You need to identify which text to render as math by wrapping it into \( … \) markers; then add the mathjax_process class to the corresponding document nodes to force processing (otherwise MathJax ignores the contents of Alectryon's <pre> blocks); then trigger a recomputation. See ./recipes/mathjax.rst for an example and a more detailed discussion.

Gallery

Pierre Castéran, Hydra Battles and Cie (PDF, using a custom Alectryon driver to render snippets extracted from a large Coq development).
Enrico Tassi, Tutorial on the Elpi programming language (using a custom Alectryon driver to highlight mixed Coq/ELPI code).
Anton Trunov. Introduction to Formal Verification course at CS Club.
Jean-Paul Bodeveix, Érik Martin-Dorel, Pierre Roux. Types Abstraits et Programmation Fonctionnelle Avancée.
Li-yao Xia. Tutorial: Verify Haskell Programs with hs-to-coq.
Silver Oak contributors. Formal specification and verification of hardware, especially for security and privacy.
Philip Zucker. Translating My Z3 Tutorial to Coq.
Li-yao Xia. hakyll-alectryon: Hakyll extension for rendering Coq code using Alectryon.
|
alef
|
No description available on PyPI.
|
alef-namespace
|
No description available on PyPI.
|
ale-frugal
|
No description available on PyPI.
|
alegant
|
Alegant

Alegant is an elegant training framework for PyTorch models.

Install Alegant

Before installing Alegant, please make sure you have the following requirements:

Python >= 3.7
torch >= 1.9

Simple installation from PyPI:

pip install alegant

To install alegant and develop locally:

python setup.py develop

Usage

To use alegant, follow the steps below:

1. Define your Model.
2. Define your DataModule.
3. Define your Trainer.
4. Run the training script using the following command:

python run.py --config_file config.yaml

Make sure to replace config.yaml with the path to your configuration file.

Configuration

To customize the training process, you need to provide a configuration file. This file specifies various parameters such as dataset paths, model architecture, hyperparameters, etc. Make sure to create a valid configuration file before running the framework.

Project Structure

alegant
├── tensorboard
├── data
├── alegant
│ ├── data_module.py
│ ├── trainer.py
│ └── utils.py
├── src
│ ├── dataset.py
│ ├── loss.py
│ ├── model
│ │ ├── modeling.py
│ │ ├── poolers.py
│ ├── trainer.py
│ └── utils.py
├── config.yaml
├── run.py
└── setup.pyContactIf you have any questions or inquiries, please contact us [email protected] you for using Alegant! Happy training!
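The configuration-driven workflow described above (a YAML file feeding dataset paths and hyperparameters into a trainer) can be sketched generically in plain Python. This is only an illustration of the pattern, not Alegant's actual API: the `TrainConfig` dataclass and `load_config` helper below are made-up names.

```python
from dataclasses import dataclass

# Hypothetical config schema; Alegant's real config.yaml fields may differ.
@dataclass
class TrainConfig:
    data_path: str  # dataset location
    lr: float       # learning rate
    epochs: int     # number of training epochs

def load_config(raw: dict) -> TrainConfig:
    """Turn a parsed YAML/JSON mapping into a typed config object,
    failing fast (TypeError) on missing or unexpected keys."""
    return TrainConfig(**raw)

cfg = load_config({"data_path": "data/", "lr": 3e-4, "epochs": 10})
print(cfg.epochs)
```

A trainer would then read all hyperparameters from `cfg` instead of hard-coded values, so a single script can serve many experiments.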
|
alegbra
|
#alegbra :
A PyPI package that can be used to perform algebraic functions.

Installation:

Run the following to install:

pip install alegbra

Team:

Contributor: Harini.T
Mentor: Venkateswar.S
|
alegra
|
A Python library for Alegra's API.

Setup

You can install this package by using the pip tool:

$ pip install alegra

Usage

The library needs to be configured with your user and token, which is
available in your Alegra Account:https://app.alegra.com/configuration/apiSetalegra.userandalegra.tokento their values:importalegraalegra.user="...@..."alegra.token="..."# List contacts.alegra.Contact.list()# Create contact.alegra.Contact.create(name="Chaty",identification={"type":"RUC","number":"20000000000",},email="[email protected]",type=["client"],address={"address":"Jr. Neptuno 777","city":"Lima, Peru",},)# Retrieve contact.alegra.Contact.retrieve(123)# Modify contact.alegra.Contact.modify(resource_id=contact_id,identification={"type":"RUC","number":"20000000001",},)# Delete contact.alegra.Contact.delete(123)Using the Alegra APIDocumentation can be found here:https://developer.alegra.com/docshttps://github.com/okchaty/alegra
|
alegrapy
|
AlegraPy

This package lets us consume the API of the Alegra invoicing platform from Python 🐍

Features

Read invoices, payments, products, and contacts
Create invoices, payments, products, and contacts
Delete invoices, payments, products, and contacts

Installation and usage

1. Get an access token

Follow these steps to obtain the access token:

Log in to the Alegra application.
Click the "Configuración" (Settings) link at the top right of the Alegra screen, then click the "API - Integraciones con otros sistemas" (API - Integrations with other systems) section.
On the new screen you will find the email address you must use to access the API, as well as the token. If you do not have a token yet, you can also generate one there.

2. Installation

pip install alegrapy==0.0.1

3. Usage

An example of using this package:

from alegra import invoices, contacts, session

session.user = "[email protected]"
session.token = "your_token"

invoice = invoices()
invoice.read(1, fields='pdf')
invoice.list(0, 3)

contact = contacts()
contact.read(12)
contact.list(0, 2)

Credits

Camilo Andrés Rodriguez

References

https://developer.alegra.com/

License

This project is under the [MIT] License.

You can customize it to the specific needs of your project!
|
alegra-python
|
alegra-pythonalegra-pythonis an API wrapper for Alegra (accounting software), written in PythonInstallingpipinstallalegra-pythonUsagefromalegra.clientimportClientclient=Client(email,token)Get company information (Compañía)company=client.get_company_info()Get current user (Usuario)user=client.get_current_user()Contacts- List contacts (Contactos)contacts=client.list_contacts(order_field=None,order="ASC",limit=None,start=None)# order options = "ASC" or "DESC"# Max limit = 30- Get contact by idcontact=client.get_contact(contact_id,extra_fields=None)# extra_fields current options are: 'statementLink', 'url' or 'statementLink,url'- Create Contact (Contacto)contacto={"address":{"city":"Villavicencio","address":"Calle 10 #01-10"},"internalContacts":[{"name":"Lina","lastName":"Montoya","email":"[email protected]",}],"name":"Lina Montoya","identification":"1018425711","mobile":"38845555610","seller":1,"priceList":1,"term":1,"email":"[email protected]","type":"client"}contact=client.create_contact(contacto)- List sellers (Vendedores)vendedores=client.list_sellers()Inventory- List items (Items)items=client.list_items(order_field=None,order="ASC",limit=None,start=None)# order options = "ASC" or "DESC"# Max limit = 30- Create item (Item)item={"name":"PS5","description":"Play Station 5","reference":"PS5 nuevo","price":3750000,"category":{"id":5064},"inventory":{"unit":"unit","unitCost":40000,"negativeSale":False,"warehouses":[{"initialQuantity":4,"id":1,"minQuantity":2,"maxQuantity":10}],},"tax":2,"type":"product","customFields":[{"id":1,"value":"BHUJSK888833"},{"id":2,"value":44},{"id":3,"value":44.45}],"itemCategory":{"id":1}}item_created=client.create_item(item)- List item Categories (Categorias de items)item_categorias=client.list_item_categories()- List Warehouses (Bodegas)bodegas=client.list_warehouses()- List Item Custom Fields (Campos adicionales)campos=client.list_item_custom_fields()- List Variant Attributes 
(Variantes)atributos_variantes=client.list_variant_attributes()- List price lists (Lista de precios)lista_precios=client.list_price_lists()Invoices- List invoices (Facturas de venta)items=client.list_invoices(order_field=None,order="ASC",limit=None,start=None,date=None)# order options = "ASC" or "DESC"# Max limit = 30# Date format YYYY-MM-DDTerms- List Terms (Condiciones de pago)condiciones=client.list_terms()Taxes- List Taxes (Impuestos)impuestos=client.list_taxes()Accounts- List Accounts (Cuentas)cuentas=client.list_accounts(format_acc="tree",type_acc=None)# format_acc options = "tree" or "plain"# type_acc options = "income", "expense", "asset", "liability" or "equity"
|
alegria
|
AlegriaPython Boilerplate contains all the boilerplate you need to create a Python package.Free software: MIT licenseDocumentation:https://alegria.readthedocs.io.FeaturesTODOCreditsThis package was created withCookiecutterand theaudreyr/cookiecutter-pypackageproject template.History0.1.0 (2021-06-19)First release on PyPI.
|
alegriadb
| |
aleimi
|
Description

aleimi is a versatile Python package designed for performing conformational analysis of small molecules.
The package utilizes a range of theories, including classical mechanics, semiempirical, and high-level quantum mechanics,
to provide comprehensive and accurate analyses of molecular conformations.WarningPlease be aware thataleimiis currently undergoing heavy development, which may result in significant changes to the codebase without prior notice.
Therefore, we advise caution when using the package. We strongly recommend that you always pin your version of the package to ensure that your pipelines are not broken.You can try it out prior to any installation onBinder.DocumentationThe installation instructions, documentation and tutorials can be found online onReadTheDocs.IssuesIf you have found a bug, please open an issue on theGitHub Issues.DiscussionIf you have questions on how to usealeimi, or if you want to give feedback or share ideas and new features, please head to theGitHub Discussions.Citing aleimiPlease refer to thecitation pageon the documentation.
|
aleister
|
No description available on PyPI.
|
alei-utils
|
alei-utils
|
aleixo50
|
Aleixo 50Get information about any of Bruno Aleixo's famous 50 Portuguese
traditional dishes.
|
alej-supl
|
No description available on PyPI.
|
alek
|
alek is a Python library that contains useful functions for Alek Chase.

Available functions: Hello, Delay Print, Clear. Get int will be working soon.
Report any bugs or email us with questions:[email protected]
|
aleksis
|
What AlekSIS® isAlekSIS®is a web-based school information system (SIS) which can be used to
manage and/or publish organisational subjects of educational institutions.Formerly two separate projects (BiscuIT and SchoolApps), developed byTeckids e.V.and a team of students atKatharineum zu Lübeck, they
were merged into the AlekSIS project in 2020.

AlekSIS is a platform based on Django that provides central functions
and data structures that can be used by apps that are developed and provided
separately. The AlekSIS team also maintains a set of official apps which
make AlekSIS a fully-featured software solution for the information
management needs of schools.

By design, the platform can be used by schools to write their own apps for
specific needs they face, also in coding classes. Students are empowered to
create real-world applications that bring direct value to their environment.AlekSIS is part of theschul-freiproject as a component in sustainable
educational networks.This packageThealeksispackage is a meta-package, which simply depends on the core
and all official apps as requirements. The dependencies are semantically versioned
and limited to the currentminorversion. If installing the distribution meta-package,
all apps will be kept up to date with bugfixes, but will not introduce new features or breakage.

Official apps

App name: Purpose
AlekSIS-App-Chronos: The Chronos app provides functionality for digital timetables.
AlekSIS-App-DashboardFeeds: The DashboardFeeds app provides functionality to add RSS or Atom feeds to the dashboard.
AlekSIS-App-Hjelp: The Hjelp app provides functionality for aiding users.
AlekSIS-App-LDAP: The LDAP app provides functionality to import users and groups from LDAP.
AlekSIS-App-Untis: This app provides import and export functions to interact with Untis, a timetable software.
AlekSIS-App-Alsijil: This app provides an online class register.
AlekSIS-App-CSVImport: This app provides import functions to import data from CSV files.
AlekSIS-App-Resint: This app provides time-based/live documents.
AlekSIS-App-Matrix: This app provides integration with Matrix/Element.
AlekSIS-App-Stoelindeling: This app provides functionality for creating seating plans.

Licence

Licenced under the EUPL, version 1.2 or later, by Teckids e.V. (Bonn, Germany).
For details, please see the README file of the official apps.Please see the LICENCE.rst file accompanying this distribution for the
full licence text or on theEuropean Union Public Licencewebsitehttps://joinup.ec.europa.eu/collection/eupl/guidelines-users-and-developers(including all other official language versions).TrademarkAlekSIS® is a registered trademark of the AlekSIS open source project, represented
by Teckids e.V. Please refer to thetrademark policyfor hints on using the trademark
AlekSIS®.
|
aleksis-app-csvimport
|
AlekSIS

This is an application for use with the AlekSIS® platform.

Features

This app provides general CSV import functions to interact with school administration software.

Generic and customisable importer based on templates
Register import templates in the frontend

Supported systems:

Schild-NRW (North Rhine-Westphalia, Germany)
Pedasos (Schleswig-Holstein, Germany)

Licence

Copyright © 2019, 2020, 2022 Dominik George <[email protected]>
Copyright © 2020, 2021, 2022 Jonathan Weth <[email protected]>
Copyright © 2019 mirabilos <[email protected]>
Copyright © 2019 Tom Teichler <[email protected]>
Copyright © 2022 magicfelix <[email protected]>
Licenced under the EUPL, version 1.2 or later, by Teckids e.V. (Bonn, Germany).Please see the LICENCE.rst file accompanying this distribution for the
full licence text or on theEuropean Union Public Licencewebsitehttps://joinup.ec.europa.eu/collection/eupl/guidelines-users-and-developers(including all other official language versions).TrademarkAlekSIS® is a registered trademark of the AlekSIS open source project, represented
by Teckids e.V. Please refer to thetrademark policyfor hints on using the trademark
AlekSIS®.
|
aleksis-app-discourse
|
AlekSISThis is an application for use with theAlekSIS®platform.FeaturesThe author of this app did not describe it yet.LicenceCopyright © 2022 Dominik George <[email protected]>
Licenced under the EUPL, version 1.2 or laterPlease see the LICENCE.rst file accompanying this distribution for the
full licence text or on theEuropean Union Public Licencewebsitehttps://joinup.ec.europa.eu/collection/eupl/guidelines-users-and-developers(including all other official language versions).TrademarkAlekSIS® is a registered trademark of the AlekSIS open source project, represented
by Teckids e.V. Please refer to thetrademark policyfor hints on using the trademark
AlekSIS®.
|
aleksis-app-fritak
|
AlekSIS

This is an unofficial application for use with the AlekSIS platform.

Features

The Fritak app provides functionality for managing exemption requests.

Typical Workflow

1. Teacher fills out a form with the following fields: start date and time, end date and time, description/reason.
2. Headmaster receives the request and a. approves or b. rejects it:
   a. Deputy headmaster reviews the request and i. approves or ii. rejects it:
      i. Teacher receives positive feedback (notification).
      ii. Teacher receives negative feedback (notification).
   b. Teacher receives negative feedback (notification).

Licence

Copyright © 2017, 2018, 2019, 2020 Frank Poetzsch-Heffter <[email protected]>
Copyright © 2017, 2018, 2019, 2020 Jonathan Weth <[email protected]>
Copyright © 2019 Julian Leucker <[email protected]>
Copyright © 2019 Hangzhi Yu <[email protected]>
Licenced under the EUPL, version 1.2 or laterPlease see the LICENCE file accompanying this distribution for the
full licence text or on theEuropean Union Public Licencewebsitehttps://joinup.ec.europa.eu/collection/eupl/guidelines-users-and-developers(including all other official language versions).
|
aleksis-app-ldap
|
AlekSISThis is an application for use with theAlekSIS®platform.FeaturesConfigurable sync strategiesManagement commands for ldap importMass import of users and groupsSync LDAP users and groups on loginLicenceCopyright © 2020, 2021, 2022 Dominik George <[email protected]>
Copyright © 2020 Tom Teichler <[email protected]>
Licenced under the EUPL, version 1.2 or later, by Teckids e.V. (Bonn, Germany).Please see the LICENCE.rst file accompanying this distribution for the
full licence text or on theEuropean Union Public Licencewebsitehttps://joinup.ec.europa.eu/collection/eupl/guidelines-users-and-developers(including all other official language versions).TrademarkAlekSIS® is a registered trademark of the AlekSIS open source project, represented
by Teckids e.V. Please refer to thetrademark policyfor hints on using the trademark
AlekSIS®.
|
aleksis-app-order
|
AlekSISThis is anunofficialapplication for use with theAlekSISplatform.FeaturesThis application can be used to create order forms and manage orders e. g. for school clothes.LicenceCopyright © 2020, 2021 Jonathan Weth <[email protected]>
Copyright © 2021 Hangzhi Yu <[email protected]>
Licenced under the EUPL, version 1.2 or laterPlease see the LICENCE.rst file accompanying this distribution for the
full licence text or on theEuropean Union Public Licencewebsitehttps://joinup.ec.europa.eu/collection/eupl/guidelines-users-and-developers(including all other official language versions).
|
aleksis-app-paweljong
|
AlekSISThis is an application for use with theAlekSIS®platform.FeaturesThe author of this app did not describe it yet.LicenceCopyright © 2018, 2021, 2022 Dominik George <[email protected]>
Copyright © 2019, 2022 Tom Teichler <[email protected]>
Licenced under the EUPL, version 1.2 or laterPlease see the LICENCE.rst file accompanying this distribution for the
full licence text or on theEuropean Union Public Licencewebsitehttps://joinup.ec.europa.eu/collection/eupl/guidelines-users-and-developers(including all other official language versions).TrademarkAlekSIS® is a registered trademark of the AlekSIS open source project, represented
by Teckids e.V. Please refer to thetrademark policyfor hints on using the trademark
AlekSIS®.
|
aleksis-app-postbuero
|
AlekSISThis is an application for use with theAlekSIS®platform.FeaturesPostbuero provides integration with various mail server functionality, among which are:Management of supported mail domainsManagement of mail addresses (mailboxes) for personsPublic registration for domains allowing itManagement of mail addresses (aliases) for groupsIncluding support for members, owners, and guardiansWebMiltersupport for PostfixAlias resolution for persons and groupsLicenceCopyright © 2020 Tom Teichler <[email protected]>
Copyright © 2022 Tom Teichler <[email protected]>
Licenced under the EUPL, version 1.2 or laterPlease see the LICENCE.rst file accompanying this distribution for the
full licence text or on theEuropean Union Public Licencewebsitehttps://joinup.ec.europa.eu/collection/eupl/guidelines-users-and-developers(including all other official language versions).TrademarkAlekSIS® is a registered trademark of the AlekSIS open source project, represented
by Teckids e.V. Please refer to thetrademark policyfor hints on using the trademark
AlekSIS®.
|
aleksis-app-tezor
|
AlekSISThis is an application for use with theAlekSIS®platform.FeaturesThe author of this app did not describe it yet.LicenceCopyright © 2022 Dominik George <[email protected]>
Copyright © 2022 Tom Teichler <[email protected]>
Licenced under the EUPL, version 1.2 or laterPlease see the LICENCE.rst file accompanying this distribution for the
full licence text or on theEuropean Union Public Licencewebsitehttps://joinup.ec.europa.eu/collection/eupl/guidelines-users-and-developers(including all other official language versions).TrademarkAlekSIS® is a registered trademark of the AlekSIS open source project, represented
by Teckids e.V. Please refer to thetrademark policyfor hints on using the trademark
AlekSIS®.
|
aleksis-app-untis
|
AlekSIS

This is an application for use with the AlekSIS® platform.

Features

Import absence reasons
Import absences
Import breaks
Import classes
Import events
Import exams
Import exported Untis database via MySQL import
Import exported Untis XML files
Import holidays
Import lessons
Import rooms
Import subjects
Import substitutions
Import supervision areas
Import teachers
Import time periods

Licence

Copyright © 2018, 2019, 2020, 2021, 2022 Jonathan Weth <[email protected]>
Copyright © 2018, 2019 Frank Poetzsch-Heffter <[email protected]>
Copyright © 2019, 2020, 2021, 2022 Dominik George <[email protected]>
Copyright © 2019, 2020 Tom Teichler <[email protected]>
Copyright © 2019 Julian Leucker <[email protected]>
Copyright © 2019 mirabilos <[email protected]>
Licenced under the EUPL, version 1.2 or later, by Teckids e.V. (Bonn, Germany).Please see the LICENCE.rst file accompanying this distribution for the
full licence text or on theEuropean Union Public Licencewebsitehttps://joinup.ec.europa.eu/collection/eupl/guidelines-users-and-developers(including all other official language versions).TrademarkAlekSIS® is a registered trademark of the AlekSIS open source project, represented
by Teckids e.V. Please refer to thetrademark policyfor hints on using the trademark
AlekSIS®.
|
aleksis-builddeps
|
No description available on PyPI.
|
alekya
|
Alekya Tree visualization

Change log

0.0.1 (31/10/2020) - First Release
|
alem
|
Alem is a simple revision wrapper of Alembic.

Usage:

Install via pip: "pip install alem".
You then get a command "alem".

Difference from Alembic:

Alem adds two arguments to the Alembic subcommand revision, "--upgrade"/"-U" and
"--downgrade"/"-D". Each accepts either a string of SQL statements separated by ";",
or the path to an SQL file. Statements in an SQL file may span multiple lines or
sit on one line. Both arguments are optional.
Attention: comments in SQL files are not supported yet.

Everything else works as in Alembic.
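The argument semantics described above (a ";"-separated string of SQL statements, or the path to an SQL file) can be sketched in plain Python. This is only an illustrative sketch, not alem's actual implementation; the `resolve_sql` name is made up.

```python
import os

def resolve_sql(value):
    """Resolve an --upgrade/--downgrade argument into a list of SQL
    statements: the value is either the path to an SQL file or a string
    of statements separated by ';' (illustrative sketch only)."""
    if os.path.isfile(value):
        with open(value) as f:
            value = f.read()
    # Split on ';' and drop empty fragments (e.g. after a trailing ';').
    return [stmt.strip() for stmt in value.split(";") if stmt.strip()]

print(resolve_sql("CREATE TABLE t (id INTEGER); DROP TABLE t"))
```

Either form resolves to the same list of statements, which is why the documentation says you can mix multi-line files and one-line strings freely.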
|
alemate_api
|
This is a security placeholder package.
If you want to claim this name for legitimate purposes,
please contact us [email protected]@yandex-team.ru
|
alemate_client
|
This is a security placeholder package.
If you want to claim this name for legitimate purposes,
please contact us [email protected]@yandex-team.ru
|
alemate-tools
|
This is a security placeholder package.
If you want to claim this name for legitimate purposes,
please contact us [email protected]@yandex-team.ru
|
alembic
|
Alembic is a database migrations tool written by the author
of SQLAlchemy. A migrations tool
offers the following functionality:

- Can emit ALTER statements to a database in order to change
  the structure of tables and other constructs
- Provides a system whereby “migration scripts” may be constructed;
  each script indicates a particular series of steps that can “upgrade” a
  target database to a new version, and optionally a series of steps that can
  “downgrade” similarly, doing the same steps in reverse.
- Allows the scripts to execute in some sequential manner.

The goals of Alembic are:

- Very open ended and transparent configuration and operation. A new
Alembic environment is generated from a set of templates which is selected
among a set of options when setup first occurs. The templates then deposit a
series of scripts that define fully how database connectivity is established
and how migration scripts are invoked; the migration scripts themselves are
generated from a template within that series of scripts. The scripts can
then be further customized to define exactly how databases will be
  interacted with and what structure new migration files should take.
- Full support for transactional DDL. The default scripts ensure that all
migrations occur within a transaction - for those databases which support
this (Postgresql, Microsoft SQL Server), migrations can be tested with no
  need to manually undo changes upon failure.
- Minimalist script construction. Basic operations like renaming
tables/columns, adding/removing columns, changing column attributes can be
performed through one line commands like alter_column(), rename_table(),
add_constraint(). There is no need to recreate full SQLAlchemy Table
structures for simple operations like these - the functions themselves
generate minimalist schema structures behind the scenes to achieve the given
  DDL sequence.
- “Auto generation” of migrations. While real world migrations are far more
complex than what can be automatically determined, Alembic can still
eliminate the initial grunt work in generating new migration directives
  from an altered schema. The --autogenerate feature will inspect the
current status of a database using SQLAlchemy’s schema inspection
capabilities, compare it to the current state of the database model as
specified in Python, and generate a series of “candidate” migrations,
rendering them into a new migration script as Python directives. The
developer then edits the new file, adding additional directives and data
migrations as needed, to produce a finished migration. Table and column
level changes can be detected, with constraints and indexes to follow as
  well.
- Full support for migrations generated as SQL scripts. Those of us who
work in corporate environments know that direct access to DDL commands on a
production database is a rare privilege, and DBAs want textual SQL scripts.
Alembic’s usage model and commands are oriented towards being able to run a
series of migrations into a textual output file as easily as it runs them
directly to a database. Care must be taken in this mode to not invoke other
operations that rely upon in-memory SELECTs of rows - Alembic tries to
provide helper constructs like bulk_insert() to help with data-oriented
  operations that are compatible with script-based DDL.
- Non-linear, dependency-graph versioning. Scripts are given UUID
identifiers similarly to a DVCS, and the linkage of one script to the next
is achieved via human-editable markers within the scripts themselves.
The structure of a set of migration files is considered as a
directed-acyclic graph, meaning any migration file can be dependent
on any other arbitrary set of migration files, or none at
all. Through this open-ended system, migration files can be organized
into branches, multiple roots, and mergepoints, without restriction.
Commands are provided to produce new branches, roots, and merges of
  branches automatically.
- Provide a library of ALTER constructs that can be used by any SQLAlchemy
application. The DDL constructs build upon SQLAlchemy’s own DDLElement base
  and can be used standalone by any application or script.
- At long last, bring SQLite and its inability to ALTER things into the fold,
but in such a way that SQLite’s very special workflow needs are accommodated
in an explicit way that makes the most of a bad situation, through the
concept of a “batch” migration, where multiple changes to a table can
be batched together to form a series of instructions for a single, subsequent
“move-and-copy” workflow. You can even use “move-and-copy” workflow for
other databases, if you want to recreate a table in the background
  on a busy system.

Documentation and status of Alembic is at https://alembic.sqlalchemy.org/

The SQLAlchemy Project

Alembic is part of the SQLAlchemy Project and
adheres to the same standards and conventions as the core project.

Development / Bug reporting / Pull requests

Please refer to the SQLAlchemy Community Guide for
guidelines on coding and participating in this project.

Code of Conduct

Above all, SQLAlchemy places great emphasis on polite, thoughtful, and
constructive communication between users and developers.
Please see our current Code of Conduct.

License

Alembic is distributed under the MIT license.
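The dependency-graph versioning described above can be sketched as a topological sort over down-revision links. upgrade_order is an illustrative function, not Alembic's API, and real revision ids are hashes rather than the short labels used here:

```python
# Sketch: each revision names its ancestors (its down-revisions), and a
# valid upgrade order is a topological sort of the resulting directed
# acyclic graph. Not Alembic's actual code.
def upgrade_order(revisions):
    """revisions maps revision id -> tuple of its down-revision ids."""
    order, seen = [], set()

    def visit(rev):
        if rev in seen:
            return
        seen.add(rev)
        for parent in revisions[rev]:
            visit(parent)
        order.append(rev)  # parents first, then the revision itself

    for rev in revisions:
        visit(rev)
    return order

# Two branches ('b1', 'b2') off a common root, joined by a mergepoint 'm1':
graph = {"base": (), "b1": ("base",), "b2": ("base",), "m1": ("b1", "b2")}
print(upgrade_order(graph))
```

Any ordering in which every revision appears after all of its down-revisions is acceptable, which is exactly the freedom the branching model above relies on.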
|
alembic-api
|
No description available on PyPI.
|
alembic-autogen-check
|
alembic-autogen-check

Install:

    pip install alembic-autogen-check

Usage:

    PYTHONPATH=. alembic-autogen-check

This assumes that an alembic.ini file exists in the current working
directory. You can explicitly pass a config file:

    PYTHONPATH=. alembic-autogen-check --config path/to/alembic.ini
|
alembic-autogenerate-enums
|
alembic-autogenerate-enums

This package implements an Alembic hook that causes alembic revision --autogenerate to output PostgreSQL ALTER TYPE .. ADD VALUE SQL
statements as part of new migrations.

Usage

Add the line:

    import alembic_autogenerate_enums

to the top of your env.py.

Notes

Since ALTER TYPE .. ADD VALUE cannot run transactionally, each op.sync_enum_values() call creates its own temporary private DB connection.
See https://bitbucket.org/zzzeek/alembic/issues/123/a-way-to-run-non-transactional-ddl

Tests

We have incredibly basic tests in a sample project.

    mkvirtualenv alembic-autogenerate

Install the main autogenerate package and then the test harness:

    pip install -e .
    pip install -e test-harness

    createuser alembic-autogenerate
    createdb -O alembic-autogenerate alembic-autogenerate_db

    cd test-harness && pytest
|
alembic-bot
|
No description available on PyPI.
|
alembic-clamp
|
A wrapper around alembic (the SQLAlchemy migration tool) that is configurable
from code instead of an alembic.ini file.

Change log

See CHANGELOG.
|
alembic-dddl
|
Alembic Dumb DDL

A plugin for the Alembic DB migration tool that adds support for arbitrary user-defined objects like views, functions, triggers, etc. in the autogenerate command.

Alembic DDDL does not compare the objects in the code with their state in the database. Instead, it only tracks if the source code of the script has changed, compared to the previous revision.

Installation

You can install Alembic Dumb DDL from pip:

    pip install alembic-dddl

Quick start

Step 1: save your DDL script in a file, and make sure that it overwrites the entities, not just creates them (e.g. start with DROP ... IF EXISTS or a similar construct for your DBMS).

    -- myapp/scripts/last_month_orders.sql
    DROP VIEW IF EXISTS last_month_orders;

    CREATE VIEW last_month_orders AS
    SELECT *
    FROM orders
    WHERE order_date > current_date - interval '30 days';

Step 2: Wrap your script in a DDL class:

    # myapp/models.py
    from alembic_dddl import DDL
    from pathlib import Path

    SCRIPTS = Path(__file__).parent / "scripts"

    def load_sql(filename: str) -> str:
        """Helper function to load the contents of a file from a `scripts` directory"""
        return (SCRIPTS / filename).read_text()

    my_ddl = DDL(
        # will be used in revision filename
        name="last_month_orders",
        # DDL script SQL code, will be used in the upgrade commands
        sql=load_sql("last_month_orders.sql"),
        # Cleanup SQL code, will be used in the first downgrade command
        down_sql="DROP VIEW IF EXISTS last_month_orders;",
    )

Step 3: Register your script in alembic's env.py:

    # migrations/env.py
    from myapp.models import my_ddl
    from alembic_dddl import register_ddl

    register_ddl(my_ddl)  # also supports a list

    # ...
    # the rest of the env.py file

From now on the alembic autogenerate command will keep track of last_month_orders.sql, and if it changes, it will automatically add update code to your migration scripts to update your entities.

Run the migration:

    $ alembic revision --autogenerate -m "last_month_orders"
    ...
    INFO  [alembic_dddl.dddl] Detected new DDL "last_month_orders"
      Generating myapp/migrations/versions/2024_01_08_0955-0c897e9399a9_last_month_orders.py ...  done

The generated revision script:

    # migrations/versions/2024_01_08_0955-0c897e9399a9_last_month_orders.py
    ...

    def upgrade() -> None:
        # ### commands auto generated by Alembic - please adjust! ###
        op.run_ddl_script("2024_01_08_0955_last_month_orders_0c897e9399a9.sql")
        # ### end Alembic commands ###

    def downgrade() -> None:
        # ### commands auto generated by Alembic - please adjust! ###
        op.execute("DROP VIEW IF EXISTS last_month_orders;")
        # ### end Alembic commands ###

For more info see the tutorial or take a look at the Example Project.

Why do it this way?

Managing your custom entities with Alembic DDDL has several benefits:

- The DDL scripts are defined in one place in the source code, and any change to them is reflected in git history through direct diffs.
- Any kind of SQL script and any DBMS is supported because the plugin does not interact with the database.
- The migrations for your DDL scripts are fully autogenerated, and they are also clean and concise.

Further reading

- Tutorial
- How it Works
- Configuration
- Setting up Logging
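The change-tracking idea above, comparing the script's source against what the previous revision recorded rather than inspecting the database, can be sketched with a content hash. script_hash and has_changed are invented names for this illustration, not the plugin's internal API:

```python
# Sketch of source-based change detection: fingerprint the DDL script text
# and compare against the fingerprint recorded at the previous revision.
# (Hypothetical helpers, not alembic-dddl's real implementation.)
import hashlib

def script_hash(sql):
    """Fingerprint a DDL script by its source text."""
    return hashlib.sha256(sql.encode()).hexdigest()

def has_changed(sql, last_known, name):
    """True if the script's fingerprint differs from the recorded one,
    meaning a new migration should be generated."""
    return last_known.get(name) != script_hash(sql)

recorded = {"last_month_orders": script_hash("DROP VIEW IF EXISTS v; CREATE VIEW v AS SELECT 1;")}
print(has_changed("DROP VIEW IF EXISTS v; CREATE VIEW v AS SELECT 2;", recorded, "last_month_orders"))
```

Because only the source text is compared, this works for any DBMS and any kind of SQL object, which matches the design rationale given above.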
|
alembic-enums
|
Alembic Enums

Support for migrating PostgreSQL enums with Alembic.

The package doesn't detect enum changes or generate migration code automatically, but it provides a helper class to run the enum migrations in Alembic migration scripts.

Problem statement

When you define an enum column with SQLAlchemy, the initial migration defines a custom enum type.

Once the enum type is created, ALTER TYPE allows you to add new values or rename existing ones, but not delete them.

If you need to delete a value from an enum, you must create a new enum type and migrate all the columns to use the new type.

Installation

    pip install alembic-enums

Usage

Assume you decided to rename the state enum values enabled and disabled to active and archived:

    class Resource(Base):
        __tablename__ = "resources"
        id = Column(Integer, primary_key=True)
        name = Column(String(255), nullable=False)
    -   state = Column(Enum("enabled", "disabled", name="resource_state"), nullable=False)
    +   state = Column(Enum("active", "archived", name="resource_state"), nullable=False)

To migrate the database, we create a new empty migration with alembic revision -m "Rename enum values" and add the following code to the generated migration script:

    from alembic import op
    from alembic_enums import EnumMigration, Column

    # Define a target column. As in PostgreSQL, the same enum can be used in multiple
    # column definitions, you may have more than one target column.
    # The constructor arguments are the table name, the column name, and the
    # server_default values for the old and new enum types.
    column = Column("resources", "state", old_server_default=None, new_server_default=None)

    # Define an enum migration. It defines the old and new enum values
    # for the enum, and the list of target columns.
    enum_migration = EnumMigration(
        op=op,
        enum_name="resource_state",
        old_options=["enabled", "disabled"],
        new_options=["active", "archived"],
        columns=[column],
    )

    # Define upgrade and downgrade operations. Inside upgrade_ctx and downgrade_ctx
    # context managers, you can update your data.

    def upgrade():
        with enum_migration.upgrade_ctx():
            enum_migration.update_value(column, "enabled", "active")
            enum_migration.update_value(column, "disabled", "archived")

    def downgrade():
        with enum_migration.downgrade_ctx():
            enum_migration.update_value(column, "active", "enabled")
            enum_migration.update_value(column, "archived", "disabled")

Under the hood, the EnumMigration class creates a new enum type, updates the target columns to use the new enum type, and deletes the old enum type.

Change column default values

To change the column default values, pass corresponding values to the new_server_default and old_server_default arguments of the Column constructor. The new_server_default is used on upgrade, and the old_server_default is used on downgrade.

IMPORTANT: Setting the server_default value to None will remove the default value from the column. If you want to keep the default value as is, set old_server_default and new_server_default to the same value.

For example, to change the default value of the state column from enabled to active:

    from alembic_enums import Column

    column = Column(
        "resources",
        "state",
        old_server_default="enabled",
        new_server_default="active",
    )

API reference

EnumMigration

A helper class to run enum migrations in Alembic migration scripts.

Constructor arguments:

- op: an instance of alembic.operations.Operations
- enum_name: the name of the enum type
- old_options: a list of old enum values
- new_options: a list of new enum values
- columns: a list of Column instances that use the enum type
- schema: the optional schema of the enum

Methods:

- upgrade_ctx(): a context manager that creates a new enum type, updates the target columns to use the new enum type, and deletes the old enum type
- downgrade_ctx(): a context manager that performs the opposite operations.
- update_value(column, old_value, new_value): a helper method to update the value of the column to new_value where it was old_value before. It's useful to update the data in the upgrade and downgrade operations within the upgrade_ctx and downgrade_ctx context managers.
- upgrade(): a shorthand for "with upgrade_ctx(): pass".
- downgrade(): a shorthand for "with downgrade_ctx(): pass".

Column

A data class to define a target column for an enum migration.

Constructor arguments:

- table_name: the name of the table
- column_name: the name of the column
- old_server_default: the old server_default value. When set to None, the server_default value is removed on downgrade.
- new_server_default: the new server_default value. When set to None, the server_default value is removed on upgrade.
- schema: the optional schema of the table
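The sequence described above, create a new enum type, retype the columns, drop the old type, can be sketched as plain SQL strings. enum_migration_sql is a made-up helper for illustration; the real EnumMigration runs these steps through Alembic's op object rather than returning strings, and the USING cast shown is one common way to retype a column:

```python
# Sketch of the "new type, retype columns, drop old type" dance for a
# PostgreSQL enum, emitted as SQL strings. Illustrative only.
def enum_migration_sql(enum_name, new_options, columns):
    tmp = f"{enum_name}_new"
    stmts = ["CREATE TYPE {} AS ENUM ({})".format(tmp, ", ".join(f"'{o}'" for o in new_options))]
    for table, column in columns:
        # cast through text so old values map onto the new type
        stmts.append(
            f"ALTER TABLE {table} ALTER COLUMN {column} "
            f"TYPE {tmp} USING {column}::text::{tmp}"
        )
    stmts.append(f"DROP TYPE {enum_name}")
    stmts.append(f"ALTER TYPE {tmp} RENAME TO {enum_name}")
    return stmts

for stmt in enum_migration_sql("resource_state", ["active", "archived"], [("resources", "state")]):
    print(stmt)
```

The text-to-new-type cast is exactly why any data updates (update_value calls) have to happen before the old values disappear, which is what the upgrade_ctx context manager arranges.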
|
alembic-migrate
|
alembic-migrate

alembic-migrate is a framework-independent fork of flask-migrate.

Installation

To install, run: pip install alembic-migrate

Usage

Create the following file structure:

    model/
    ├── __init__.py
    ├── base.py
    ├── book.py

Then declare your SQLAlchemy Base and connection string in model/base.py.
Note: the connection string doesn't need to match your
app's DB connection, it's only used for migrations.

    # model/base.py
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    def get_base():
        return {'base': Base, 'sqlalchemy_url': 'sqlite:///demo.db'}

Let's add a model in model/book.py:

    from .base import Base
    from sqlalchemy import Integer, String, Column

    class Book(Base):
        __tablename__ = 'books'
        name = Column(String, primary_key=True)
        year = Column(Integer)

Now cd .. so that you are out of the model package and run:

- alembic-db init to set up the template
- alembic-db migrate to create a migration

You can also check out the example folder in this repo.

Configuring base module

The base module contains get_base() -> dict. By default model.base is used, but you can change the environment variable:

    export ALEMBIC_BASE="my_model.base"

Customize model import logic

By default, all *.py files in the same package as the base will be loaded.
However, if you want to split your models in subpackages or have custom logic,
you should implement import_models inside your base module.

    def import_models():
        from . import car, book
        from .sub import other_models

    def get_base():
        return ...
|
alembic-migration-fixtures
|
Description

Pytest fixture to simplify writing tests against databases managed with alembic.
Before each test run, alembic migrations apply schema changes, which then allows tests to only care about data.
This way your application code and the database migrations get executed by the test.

Only tested with PostgreSQL. However, the code may work with other databases as well.

Installation

Install with pip install alembic-migration-fixtures or any other dependency manager.
Afterwards, create a pytest fixture called database_engine returning an SQLAlchemy Engine instance.

WARNING

Do not specify the production / development / any other database where data is important in the engine fixture.
If you do so, the tests WILL truncate all tables and data loss WILL occur.

Usage

This library provides a pytest fixture called test_db_session.
Use this to replace the normal SQLAlchemy session used within the application, or else tests may not be independent
of one another.

How the fixture works with your tests:

1. Fixture recreates (wipes) the database schema based on the engine provided for the test session
2. Fixture runs alembic migrations (equivalent to alembic upgrade heads)
3. Fixture creates a test database session within a transaction for the test
4. Your test sets up data and runs the test using the session (including COMMITing transactions)
5. Your test verifies data is in the database
6. Fixture rolls back the transaction (and any inner COMMITed transactions in the test)

This two-level transaction strategy makes it so any test is independent of one another,
since the database is empty after each test. Since the database schema only gets re-created once per session,
the test speed is only linearly dependent on the number of migrations.

Development

This library uses the poetry package manager, which has to be installed before installing
other dependencies. Afterwards, run poetry install to create a virtualenv and install all dependencies.
To then activate that environment, use poetry shell. To run a command in the environment without activating it,
use poetry run <command>.

Black is used (and enforced via workflows) to format all code. Poetry will install it
automatically, but running it is up to the user. To format the entire project, run black . inside the virtualenv.

Contributing

This project uses the Apache 2.0 license and is maintained by the data science team @ Barbora. All contributions are
welcome in the form of PRs or raised issues.
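The two-level transaction strategy above can be illustrated with the stdlib sqlite3 module: an outer transaction owned by the fixture, and a savepoint standing in for the session handed to the test. This is a sketch of the rollback behaviour only, not the fixture's actual implementation (which uses SQLAlchemy and, per the description, PostgreSQL):

```python
# Sketch: the fixture opens an outer transaction; the test "commits" inside
# it (modelled here as releasing a savepoint); the fixture's final rollback
# still discards everything the test wrote.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
conn.execute("CREATE TABLE users (name TEXT)")            # schema, as after migrations

conn.execute("BEGIN")                      # outer transaction owned by the fixture
conn.execute("SAVEPOINT test_tx")          # inner "transaction" handed to the test
conn.execute("INSERT INTO users VALUES ('alice')")
conn.execute("RELEASE SAVEPOINT test_tx")  # the test's COMMIT only releases the savepoint
assert conn.execute("SELECT count(*) FROM users").fetchone()[0] == 1  # test sees its data

conn.execute("ROLLBACK")                   # fixture rolls back the outer transaction
print(conn.execute("SELECT count(*) FROM users").fetchone()[0])
```

After the outer rollback the table is empty again while the schema survives, which is why only the (slower) migration step needs to run once per session.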
|
alembic-multischema
|
alembic-multischema

This module provides the ability to act on multiple postgres schemas at once when using alembic.

Functions:

perSchema(**kwargs)
Used to decorate the upgrade() and downgrade() functions in a migration. When upgrade or downgrade are decorated with perSchema(), the decorated function will be called for a list of schemas in the current database.
kwargs:
- schemas: A list of schema names to run the function against. If omitted, perSchema() will automatically generate a list of non-system schemas from the current database by using getAllNonSystemSchemas()
- exclude: A list of schema names to exclude from running the function against

getAllSchemas()
Returns a list of all schemas in the current database.
kwargs:
- exclude: A list of schema names to exclude from running the function against

getAllNonSystemSchemas()
Returns a list of schemas in the current database, omitting information_schema and pg_catalog.
kwargs:
- exclude: A list of schema names to exclude from running the function against

Example Usage:

"""CreateUsersTable
Revision ID: a6a219646b55
Revises:
Create Date: 2019-10-16 14:43:11.347575
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.sql import text
from alembic_multischema import perSchema
# revision identifiers, used by Alembic.
revision = 'a6a219646b55'
down_revision = None
branch_labels = None
depends_on = None
@perSchema(schemas=["public", "foo", "bar"])
def upgrade():
users = text(
"""CREATE TABLE users
(
id serial PRIMARY KEY,
firstname VARCHAR (50) UNIQUE,
lastname VARCHAR (50)
);
""")
op.execute(users)
@perSchema(schemas=["public", "foo", "bar"])
def downgrade():
op.execute(text("DROP TABLE users"))
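What a perSchema-style decorator does can be sketched in pure Python: call the wrapped migration once per schema. Everything here is invented for illustration (the executed list stands in for issuing SET search_path and op.execute against postgres); it is not the module's real implementation:

```python
# Minimal sketch of a per-schema decorator: loop over the schema list and
# run the wrapped migration once for each. (Illustrative stand-in only.)
executed = []

def per_schema(schemas):
    def decorator(fn):
        def wrapper():
            for schema in schemas:
                executed.append(schema)  # stands in for SET search_path TO <schema>
                fn()
        return wrapper
    return decorator

@per_schema(schemas=["public", "foo", "bar"])
def upgrade():
    executed.append("CREATE TABLE users (...)")

upgrade()
print(executed)
```

The decorated upgrade() thus runs its DDL three times, once in each schema's context, which is the behaviour the function list above describes.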
|
alembic-offline
|
alembic-offline

alembic-offline is an extension for alembic that enriches the offline functionality of migrations.

Contents

- Usage
  - Phased migrations
  - Arbitrary script as operation
  - Get migration data
  - Get migration data in batch
  - Command line utilities
- Contact
- License
- Changelog

Usage

Phased migrations

alembic-offline introduces a helper which allows implementing phased migrations, i.e. those whose steps
are divided into logical phases. For example, you can have steps to be executed before code deploy and
those after.

In your alembic config file (main section):

    phases = before-deploy after-deploy final
    default-phase = after-deploy

In your version file:

    from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func

    from alembic import op
    from alembic_offline import phased, execute_script
    from tests.migrations.scripts import script

    revision = '1'
    down_revision = None

    @phased
    def upgrade():
        op.create_table(
            'account',
            Column('id', INTEGER, primary_key=True),
            Column('name', VARCHAR(50), nullable=False),
            Column('description', NVARCHAR(200)),
            Column('timestamp', TIMESTAMP, server_default=func.now()))
        yield
        op.execute("update account set name='some'")
        yield
        execute_script(script.__file__)

    def downgrade():
        pass

Will give the sql output (for sqlite):

    -- Running upgrade  -> 1
    -- PHASE::before-deploy::;
    CREATE TABLE account (
        id INTEGER NOT NULL,
        name VARCHAR(50) NOT NULL,
        description NVARCHAR(200),
        timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (id)
    );
    -- PHASE::after-deploy::;
    update account set name='some';
    -- PHASE::final::;
    -- SCRIPT::scripts/script.py::;
    INSERT INTO alembic_version (version_num) VALUES ('1');

As you see, phases are rendered as SQL comments to divide migration steps, so those who execute the migration
can see which phase's step it is.
However, if the migration procedure is highly customized, you can use the alembic-offline API described below.
get_migration_data returns migration phases in a special form so you can automate their execution.

Arbitrary script as operation

For complex migrations, it's not enough to execute sql; you might need some script to be executed instead.
For that, there's a special operation:

    from alembic_offline import execute_script

    def upgrade():
        execute_script('scripts/script.py')

If you get the migration sql, it will be rendered as an SQL comment:

    -- SCRIPT::scripts/script.py::;

For those who execute migrations it will be visible and they can execute the script manually.
However, if the migration procedure is highly customized, you can use the alembic-offline API described below.
get_migration_data returns script migration steps in a special form so you can automate their execution.
For online mode, the script will be executed as a subprocess via the python subprocess module.

Get migration data

alembic-offline provides a specialized API to get certain migration data as a dictionary:

    from alembic_offline import get_migration_data
    from alembic.config import Config

    config = Config('path to alembic.ini')

    data = get_migration_data(config, 'your-revision')

    assert data == {
        'revision': 'your-revision',
        'phases': {
            'after-deploy': [
                {
                    'type': 'mysql',
                    'script': 'alter table account add column name VARCHAR(255)'
                },
                {
                    'type': 'python',
                    'script': 'from app.models import Session, Account; Session.add(Account()); Session.commit()',
                    'path': 'scripts/my_script.py'
                },
            ]
        }
    }

get_migration_data requires both phases and default-phase configuration options to be set.
default-phase is needed to be able to get migration data even for simple migrations without phases.

Get migration data in batch

alembic-offline provides an API call to get migration data for all revisions:

    from alembic_offline import get_migrations_data
    from alembic.config import Config

    config = Config('path to alembic.ini')

    data = get_migrations_data(config)

    assert data == [
        {
            'revision': 'your-revision',
            'phases': {
                'after-deploy': [
                    {
                        'type': 'mysql',
                        'script': 'alter table account add column name VARCHAR(255)'
                    },
                    {
                        'type': 'python',
                        'script': 'from app.models import Session, Account; Session.add(Account()); Session.commit()',
                        'path': 'scripts/my_script.py'
                    },
                ]
            }
        }
    ]

Command line utilities

Because with alembic revisions it's sometimes hard to find which the correct down revision should be, especially
when there are multiple heads, we added the alembic-offline graph command.

The graph command will generate a dot file of
the revisions; this file can then be converted to an image for easy visualization.

Usage:

    alembic-offline graph --filename revisions.dot --alembic-config path/to/alembic.ini

Then if you have graphviz installed you can run:

    dot -Tpng -o revisions.png revisions.dot

to generate a png image.

Contact

If you have questions, bug reports, suggestions, etc. please create an issue on
the GitHub project page.

License

This software is licensed under the MIT license.

Please refer to the license file.

© 2015 Anatoly Bubenkov, Paylogic International and others.

Changelog

1.2.0
- add migration dependency tree generation command (hvdklauw)

1.1.0
- add down_revision to migration data (bubenkoff)
- reverse migration order to simplify the application (bubenkoff)

1.0.5
- correctly handle multi-phased migration data extraction (bubenkoff)

1.0.4
- online script execution implemented (bubenkoff)
- get_migrations_data API (bubenkoff)

1.0.3
- Added arbitrary script operation (bubenkoff)
- Strict phases configuration assertions for phased migration decorator (bubenkoff)
- get_migration_data API (bubenkoff)

1.0.0
- Initial public release (bubenkoff)
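The phased-migration idea, where each yield in the decorated upgrade() moves execution into the next configured phase, can be sketched with a plain generator driver. run_phased and its log list are made-up names for this illustration; alembic-offline's real @phased decorator emits the PHASE comments through alembic's op machinery:

```python
# Sketch: a generator-based migration is driven phase by phase; each next()
# runs the steps up to the following yield, under the current phase label.
# (Illustrative only; not alembic-offline's implementation.)
PHASES = ["before-deploy", "after-deploy", "final"]

def run_phased(migration):
    """Drive a generator migration, labelling each chunk with its phase."""
    log = []
    gen = migration(log)
    for phase in PHASES:
        log.append(f"-- PHASE::{phase}::")
        try:
            next(gen)
        except StopIteration:
            break
    return log

def upgrade(log):
    log.append("CREATE TABLE account (...)")
    yield
    log.append("UPDATE account SET name='some'")
    yield
    log.append("-- SCRIPT::scripts/script.py::")

print(run_phased(upgrade))
```

The yields split one upgrade() into three phase-tagged chunks, mirroring the sqlite output shown above.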
|
alembic_pastedeploy
|
This is a thin wrapper of alembic which allows alembic to read pastedeploy-flavored ini config files.

Supported features

Importing defaults by get and set directives:

    [alembic]
    get sqlalchemy.url = sqlalchemy.url

Giving global-conf interpolants by the --paste-global option from the command line:

    $ alembic_pastedeploy --paste-global sqlalchemy.url=sqlite:///test.db upgrade head
|
alembic-postgresql-enum
|
alembic-postgresql-enum

Alembic autogenerate support for creation, alteration and deletion of enums.

Alembic will now automatically:

- Create enums that currently are not in the postgres schema
- Remove/add/alter enum values
- Reorder enum values
- Delete unused enums from the schema

If you are curious to know about analogs and reasons for this library to exist, see alternatives and motivation.

Usage

Install the library:

    pip install alembic-postgresql-enum

Add the line:

    # env.py
    import alembic_postgresql_enum
    ...

to the top of your migrations/env.py file.

Features

- Creation of enums
- Deletion of unreferenced enums
- Creation of new enum values
- Deletion of enum values
- Renaming of enum values

Creation of enum

When a table is created:

    class MyEnum(enum.Enum):
        one = 1
        two = 2
        three = 3

    class ExampleTable(BaseModel):
        test_field = Column(Integer, primary_key=True, autoincrement=False)
        enum_field = Column(postgresql.ENUM(MyEnum))

This code will generate the migration given below:

    def upgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        # this line is generated by our library
        sa.Enum('one', 'two', 'three', name='myenum').create(op.get_bind())
        op.create_table('example_table',
            sa.Column('test_field', sa.Integer(), nullable=False),
            # create_type=False argument is now present on postgresql.ENUM
            # as library takes care of enum creation
            sa.Column('enum_field', postgresql.ENUM('one', 'two', 'three', name='myenum', create_type=False), nullable=True),
            sa.PrimaryKeyConstraint('test_field')
        )
        # ### end Alembic commands ###

    def downgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        # drop_table does not drop enum by alembic
        op.drop_table('example_table')
        # It is dropped by us
        sa.Enum('one', 'two', 'three', name='myenum').drop(op.get_bind())
        # ### end Alembic commands ###

When a column is added:

    class MyEnum(enum.Enum):
        one = 1
        two = 2
        three = 3

    class ExampleTable(BaseModel):
        test_field = Column(Integer, primary_key=True, autoincrement=False)
        # this column has just been added
        enum_field = Column(postgresql.ENUM(MyEnum))

This code will generate the migration given below:

    def upgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        # this line is generated by our library
        sa.Enum('one', 'two', 'three', name='myenum').create(op.get_bind())
        # create_type=False argument is now present on postgresql.ENUM
        # as library takes care of enum creation
        op.add_column('example_table', sa.Column('enum_field', postgresql.ENUM('one', 'two', 'three', name='myenum', create_type=False), nullable=False))
        # ### end Alembic commands ###

    def downgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        op.drop_column('example_table', 'enum_field')
        # enum is explicitly dropped as it is no longer used
        sa.Enum('one', 'two', 'three', name='myenum').drop(op.get_bind())
        # ### end Alembic commands ###

Deletion of unreferenced enum

If an enum is defined in the postgres schema, but its mentions are removed from code, it will be automatically removed:

    class ExampleTable(BaseModel):
        test_field = Column(Integer, primary_key=True, autoincrement=False)
        # enum_field is removed from table

    def upgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        op.drop_column('example_table', 'enum_field')
        sa.Enum('one', 'two', 'four', name='myenum').drop(op.get_bind())
        # ### end Alembic commands ###

    def downgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        sa.Enum('one', 'two', 'four', name='myenum').create(op.get_bind())
        op.add_column('example_table', sa.Column('enum_field', postgresql.ENUM('one', 'two', 'four', name='myenum', create_type=False), autoincrement=False, nullable=True))
        # ### end Alembic commands ###

Creation of new enum values

If a new enum value is defined, a sync_enum_values function call will be added to the migration to account for it:

    class MyEnum(enum.Enum):
        one = 1
        two = 2
        three = 3
        four = 4  # New enum value

    def upgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        op.sync_enum_values('public', 'myenum', ['one', 'two', 'three', 'four'],
                            [('example_table', 'enum_field')],
                            enum_values_to_rename=[])
        # ### end Alembic commands ###

    def downgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        op.sync_enum_values('public', 'myenum', ['one', 'two', 'three'],
                            [('example_table', 'enum_field')],
                            enum_values_to_rename=[])
        # ### end Alembic commands ###

Deletion of enum values

If an enum value is removed, it also will be detected:

    class MyEnum(enum.Enum):
        one = 1
        two = 2
        # three = 3 removed

    def upgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        op.sync_enum_values('public', 'myenum', ['one', 'two'],
                            [('example_table', 'enum_field')],
                            enum_values_to_rename=[])
        # ### end Alembic commands ###

    def downgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        op.sync_enum_values('public', 'myenum', ['one', 'two', 'three'],
                            [('example_table', 'enum_field')],
                            enum_values_to_rename=[])
        # ### end Alembic commands ###

Renaming of enum values

In this case you must manually edit the migration:

    class MyEnum(enum.Enum):
        one = 1
        two = 2
        three = 3  # renamed from `tree`

This code will generate this migration:

    def upgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        op.sync_enum_values('public', 'myenum', ['one', 'two', 'three'],
                            [('example_table', 'enum_field')],
                            enum_values_to_rename=[])
        # ### end Alembic commands ###

    def downgrade():
        # ### commands auto generated by Alembic - please adjust! ###
        op.sync_enum_values('public', 'myenum', ['one', 'two', 'tree'],
                            [('example_table', 'enum_field')],
                            enum_values_to_rename=[])
        # ### end Alembic commands ###

This migration will cause problems with existing rows that reference MyEnum, so adjust the migration like this:

    def upgrade():
        op.sync_enum_values('public', 'myenum', ['one', 'two', 'three'],
                            [('example_table', 'enum_field')],
                            enum_values_to_rename=[('tree', 'three')])

    def downgrade():
        op.sync_enum_values('public', 'myenum', ['one', 'two', 'tree'],
                            [('example_table', 'enum_field')],
                            enum_values_to_rename=[('three', 'tree')])

Do not forget to switch places of old and new values for downgrade.

All defaults in postgres will be renamed automatically as well.
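The role of enum_values_to_rename can be sketched in pure Python: each row keeps its value unless a rename pair maps it onto the new enum. migrate_values is a hypothetical stand-in for the row-update step of op.sync_enum_values, which the library performs in SQL:

```python
# Sketch: apply rename pairs to existing row values and verify everything
# lands inside the new enum. (Illustrative stand-in, not the library's code.)
def migrate_values(rows, new_options, enum_values_to_rename):
    renames = dict(enum_values_to_rename)
    migrated = [renames.get(value, value) for value in rows]
    unknown = [v for v in migrated if v not in new_options]
    if unknown:
        # mirrors why an un-renamed old value "causes problems with existing rows"
        raise ValueError(f"values not covered by the new enum: {unknown}")
    return migrated

print(migrate_values(["one", "tree", "two"], ["one", "two", "three"], [("tree", "three")]))
```

Without the ("tree", "three") pair, rows holding "tree" would have no home in the new enum, which is exactly the failure mode the manual-edit step above guards against.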
|
alembic-sdk
|
alembic-sdk

Simplify database migrations using the popular Alembic Python library.
|
alembic-set-date-trigger-plugin
|
No description available on PyPI.
|
alembic-stubs
|
Alembic Stubs

This package contains stub files from the official repository of Alembic.
It allows type checkers to be used with Alembic versions that do not have stubs (< 1.7.0).

Installing

alembic-stubs can be installed using pip:

    pip install alembic-stubs

Requirements

Alembic (< 1.7.0)

License

Alembic is distributed under the MIT license.
|
alembic-utils
|
No description available on PyPI.
|
alembic-verify
|
Verify that your alembic migrations are valid and equivalent to your models.

Documentation

See http://alembic-verify.readthedocs.org.

License

Apache 2.0. See LICENSE for details.
|
alembic-viz
|
#### Installation

```bash
pip install alembic-viz --user
```

#### Usage

```bash
Usage: alembic-viz [OPTIONS]

Options:
  --config TEXT           path to alembic config file
  --name TEXT             name of the alembic ini section
  --filename TEXT         output file name without file extension
  --format [png|svg|pdf]  output file format
  --help                  Show this message and exit.
```

#### Todo

* add revision commit messages to graph
* handle `depends_on` relationships properly
* add test cases
* ...
|
alengen
|
UNKNOWN
|
alenr
|
Failed to fetch description. HTTP Status Code: 404
|
aleo
|
Aleo Python SDK

The Aleo Python SDK provides a set of libraries aimed at empowering Python developers with zk (zero-knowledge) programming capabilities via the usage of Aleo's zkSnarks.

Currently this SDK is in alpha preview stage. It can be installed by using the following command:

```bash
pip3 install zkml
```

Alternatively, you can clone it from GitHub and run:

```bash
bash install.sh
```

This will print out a simple Aleo private key that is generated by the entropy present on your device.

Future Work Planned

This SDK will be expanded to include the following features:

* Aleo Account Management
* Aleo program deployment, execution, and management
* Leo program compilation and execution

It is also planned to integrate this SDK with Aleo's ZkML package so that the full suite of Aleo program execution and account management capabilities can be used with ZkML.

If you wish to contribute, please follow the contribution guidelines outlined on GitHub. For efficient workflows, we also encourage you to get in touch with the developers prior to contributing.
|
aleparser
|
UNKNOWN
|
aleph
|
# ALEPH
ALEPH is a bioinformatics tool to generate fragment-based libraries for molecular replacement.

ALEPH is a bioinformatics tool to generate customised fold libraries for fragment-based molecular replacement. It provides several algorithms to interpret and analyse protein structures.

References:

ALEPH: a network-oriented approach for the generation of fragment-based libraries and for structure interpretation
Medina A, Trivino J, Borges R, Millan C, Uson I and Sammito MD
(2019) Acta Cryst. D Study Weekend

Exploiting tertiary structure through local folds for ab initio phasing
Sammito M, Millan C, Rodriguez DD, M. de Ilarduya I, Meindl K, De Marino I, Petrillo G, Buey RM, de Pereda JM, Zeth K, Sheldrick GM and Uson I
(2013) Nat Methods. 10, 1099-1101.

Email support: [email protected]
Bug tracking: https://gitlab.com/arcimboldo-team/ALEPH/-/issues
|
aleph-alpha-client
|
Aleph Alpha Client

Python client for the Aleph Alpha API.

Usage

Synchronous Client

```python
import os
from aleph_alpha_client import Client, CompletionRequest, Prompt

client = Client(token=os.getenv("AA_TOKEN"))
request = CompletionRequest(
    prompt=Prompt.from_text("Provide a short description of AI:"),
    maximum_tokens=64,
)
response = client.complete(request, model="luminous-extended")
print(response.completions[0].completion)
```

Asynchronous Client

```python
import os
from aleph_alpha_client import AsyncClient, CompletionRequest, Prompt

# Can enter context manager within an async function
async with AsyncClient(token=os.environ["AA_TOKEN"]) as client:
    request = CompletionRequest(
        prompt=Prompt.from_text("Provide a short description of AI:"),
        maximum_tokens=64,
    )
    response = await client.complete(request, model="luminous-base")
    print(response.completions[0].completion)
```

Interactive Examples

This table contains interactive code examples; further exercises can be found in the examples repository.

| Template | Description | Internal Link | Colab Link |
| -------- | ----------- | ------------- | ---------- |
| 1 | Calling the API | Template 1 | |
| 2 | Simple completion | Template 2 | |
| 3 | Simple search | Template 3 | |
| 4 | Symmetric and Asymmetric Search | Template 4 | |
| 5 | Hidden Embeddings | Template 5 | |
| 6 | Task-specific Endpoints | Template 6 | |

Installation

The latest stable version is deployed to PyPI so you can install this package via pip.

```bash
pip install aleph-alpha-client
```

Get started using the client by first creating an account. Afterwards head over to your profile to create an API token. Read more about how you can manage your API tokens here.

Development

For local development, start by creating a Python virtual environment as follows:

```bash
python3 -m venv venv
. ./venv/bin/activate
```

Next, install the test and dev dependencies:

```bash
pip install -e ".[test,dev]"
```

Now you should be able to ...

* run all the tests using `pytest` or, `pytest -k <test_name>` to run a specific test
* typecheck the code and tests using `mypy aleph_alpha_client` resp. `mypy tests`
* format the code using `black .`

Links

* HTTP API Docs
* Interactive Playground
|
alephclient
|
alephclient

Command-line client for Aleph. It can be used to bulk import document sets via the API, without direct access to the server. It requires an active API client to perform uploads.

Installation

Install using pip:

```bash
pip install alephclient
```

Usage

Refer to the aleph handbook for an introduction on how to use alephclient, e.g. to crawl a local file directory, or to stream entities: https://docs.alephdata.org/developers/alephclient
|
aleph-client
|
Python client for the aleph.im network, next generation network of decentralized big data applications. Development follows the Aleph Whitepaper.

Documentation

Documentation (albeit still vastly incomplete as it is a work in progress) can be found at http://aleph-client.readthedocs.io/ or built from this repo with:

```bash
$ python setup.py docs
```

Requirements

Linux: some cryptographic functionalities use curve secp256k1 and require installing libsecp256k1.

```bash
$ apt-get install -y python3-pip libsecp256k1-dev
```

macOS:

```bash
$ brew tap cuber/homebrew-libsecp256k1
$ brew install libsecp256k1
```

Installation

Using pip and PyPI:

```bash
$ pip install aleph-client
```

Installation for development

If you want NULS2 support you will need to install nuls2-python (currently only available on GitHub):

```bash
$ pip install git+https://github.com/aleph-im/nuls2-python.git
```

To install from source and still be able to modify the source code:

```bash
$ pip install -e .
```

or

```bash
$ python setup.py develop
```

Using Docker

Use the Aleph client and its CLI from within Docker or Podman with:

```bash
$ docker run --rm -ti -v $(pwd)/data:/data ghcr.io/aleph-im/aleph-client/aleph-client:master --help
```

Warning: this will use an ephemeral key that will be discarded when stopping the container.
|
aleph-lang
|
aleph

This package provides the Python bindings for aleph. aleph defines a high level programming model that can be embedded into classical languages to develop large scale quantum hybrid applications, without the quantum mechanics.

Getting started

To get started, install the aleph-lang package from PyPI:

```bash
pip install aleph-lang
```

Programming Model

Expressions in Python take a single value; for example, in the following program variable x takes value 1:

```python
x = 1
print(x)  # prints: 1
```

aleph extends this model by allowing registers to take multiple values simultaneously. This is not achieved by defining a list structure that points to multiple values in memory; instead, all the values are in a single quantum register in superposition. We call this new type of register a Ket.

To construct a Ket in Python, create an instance of the KetInt or the KetBool class from the aleph_lang module. KetInt accepts an optional width argument to control the width (number of qubits) of the register; it defaults to 3. To read a value from a Ket use the sample method.

```python
from aleph_lang import KetInt, sample

random_number = KetInt(width=10)
print(sample(random_number))  # prints a number between 0 and 1023 (2^10 - 1).
```

You can perform arithmetic and boolean expressions on Kets. Expressions are applied simultaneously to all Ket elements in a single step using quantum parallelism. The result is a new Ket containing all possible outcomes of the evaluation.

Input and output Kets of an expression are entangled, that is, when sampled the Ket's values are always correlated with each other. This entanglement is preserved across expressions. For example:

```python
x = KetInt()
y = x + 5
z = 2 * y
print(sample([x, y, z]))
# prints [0, 5, 10] or [1, 6, 12] or [2, 7, 14] or ... [7, 12, 24]
# e.g. the value of the second element is the first plus five,
# the third element is the second times two.
```

You can filter the elements of a Ket using where expressions. For example, use where_in to filter the elements of a Ket to a specific list of items:

```python
dice1 = KetInt().where_in(range(1, 7))
dice2 = KetInt().where_in(range(1, 7))
roll = dice1 + dice2

print(sample([dice1, dice2, roll]))
# prints [0,0,0] or [0,1,1] or [0,2,2] or ... [6,6,12]
```

You can also filter the set of elements sample will return by passing an optional when parameter; when provided, sample will return only elements that satisfy the given expression:

```python
# Solve x + a == b * x
def solve_equation(a, b):
    x = KetInt()
    eq1 = x + a
    eq2 = b * x
    return sample(x, when=(eq1 == eq2))

answer = solve_equation(3, 2)
print(answer)  # prints 3
```

Under the covers, when and where expressions use amplitude amplification (a.k.a. Grover) to amplify the probability of measuring the correct elements and filter out the rest.

sample returns a single value; aleph also provides histogram to get the histogram resulting from the outcomes of sampling Kets multiple times; it takes a rounds parameter that indicates how many samples to take:

```python
def dice_roll_histogram():
    dice1 = KetInt().where_in(range(1, 7))
    dice2 = KetInt().where_in(range(1, 7))
    roll = dice1 + dice2
    return histogram([roll], rounds=1000)
```

It is safe to combine Kets with classical expressions, making it possible to create hybrid programs that leverage the host's language features. For example, the following function takes graph information and the max number of colors to solve a graph coloring problem:

```python
# Solve a graph coloring problem, for the given number of nodes and list of edges.
def graph_coloring(max_colors, nodes_count, edges):
    def create_node():
        w = width_for(max_colors)
        return KetInt(width=w).where_less_than_equals(max_colors - 1)

    def compare(edges):
        if len(edges) == 1:
            (left, right) = edges[0]
            return left != right
        else:
            head, *tail = edges
            (left, right) = head
            a = left != right
            b = compare(tail)
            return a & b

    nodes = [create_node() for _ in range(nodes_count)]
    edges = [(nodes[x], nodes[y]) for (x, y) in edges]
    filter = compare(edges)

    ## Print to the console a graphviz representation
    ## of the quantum graph:
    ## tree(filter)

    return sample(nodes, when=filter)

max_colors = 3
total_nodes = 4
edges = [(0, 1), (1, 2), (0, 2), (1, 3)]

print("graph coloring:", graph_coloring(max_colors, total_nodes, edges))
```

API server

To simplify its setup aleph itself runs on the cloud. It uses an external service that at runtime keeps track of a program's quantum graph and enables its evaluation. No personal information is collected or accessible by the service. To change this behavior, you can run a local instance of the API server and set the ALEPH_BASE_URL environment variable to point to your local instance, for example:

```bash
export ALEPH_BASE_URL=http://localhost:7071/
```

Source code for aleph's API server and instructions on how to build and run locally can be found at: https://github.com/anpaz/aleph/tree/main/src/api.
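The filtering semantics of where_in and when described above can be mimicked classically with rejection sampling. The sketch below is not aleph code and involves no quantum execution; the `ket_sample` helper is a hypothetical, purely classical analogue that only illustrates which values survive the filters:

```python
import random

def ket_sample(domain, when=lambda v: True):
    """Classical analogue of sampling a filtered Ket: pick uniformly
    among the domain values that satisfy the `when` predicate."""
    valid = [v for v in domain if when(v)]
    return random.choice(valid)

# Analogue of KetInt().where_in(range(1, 7)): a six-sided die.
roll = ket_sample(range(1, 7))
assert 1 <= roll <= 6

# Analogue of solve_equation(3, 2): keep only x where x + 3 == 2 * x.
answer = ket_sample(range(8), when=lambda x: x + 3 == 2 * x)
print(answer)  # prints 3
```

The real library differs in a crucial way: instead of enumerating and rejecting values, amplitude amplification boosts the probability of the valid outcomes inside the quantum register itself.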
|
alephmarcreader
|
alephmarcreader

General

Python library to read Marc obtained from Aleph, the catalogue of the library of the University of Basel. This library supports Marc21, MARCXML, and AlephX.

Documentation

The docstrings can be displayed with pydoc (from the project root): `pydoc alephmarcreader.abstractalephmarcreader.AbstractAlephMarcReader`. For the inner classes such as Person, run `pydoc alephmarcreader.abstractalephmarcreader.AbstractAlephMarcReader.Person`.

Design

`alephmarcreader.abstractalephmarcreader.AbstractAlephMarcReader` provides methods to access Marc data. It is an abstract class that has two abstract methods, `__get_field` and `__get_subfield_text`, that have to be implemented in the subclass for the file format at hand.

Unit Tests

From the project root, run `python -m unittest alephmarcreader.tests.test_[Marc[21|XML]|X]Reader`.

Dependencies

* pymarc: install with pip
* lxml: install with pip

The library works both with python2 and python3.

Usage

Install the package with `pip install alephmarcreader`.

Example usage:

```python
# import
from alephmarcreader import AlephMarcXMLReader

# Read data from local file
marc = AlephMarcXMLReader('example_file.xml')

# get some fields
author = marc.get_author()[0]
recipient = marc.get_recipient()[0]
date = marc.get_date()[0]

# print it
print(author.name)
print(recipient.name)
print(date)
```

For an exhaustive list of the API, use pydoc, as described above.
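The design described above, shared accessors in an abstract base class built on a couple of format-specific hooks, can be sketched with the standard library's abc module. The class and method names below are simplified stand-ins (one hook instead of two), not the actual alephmarcreader API:

```python
from abc import ABC, abstractmethod

class AbstractReader(ABC):
    """Format-independent accessors built on a format-specific hook,
    mirroring the AbstractAlephMarcReader design (simplified)."""

    @abstractmethod
    def _get_field(self, tag):
        """Return the raw field for a MARC tag; subclass implements this."""

    def get_title(self):
        # Shared logic lives in the base class and relies only on the hook.
        # MARC field 245 holds the title statement.
        return self._get_field('245')

class DictReader(AbstractReader):
    """Toy concrete subclass reading from an in-memory dict
    (a real subclass would parse Marc21, MARCXML, or AlephX)."""
    def __init__(self, record):
        self.record = record

    def _get_field(self, tag):
        return self.record[tag]

reader = DictReader({'245': 'A sample title'})
print(reader.get_title())  # prints: A sample title
```

The payoff of the pattern is that every accessor like `get_author` or `get_date` is written once, while each file format only supplies the low-level field lookups.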
|
aleph-message
|
Aleph.im Message Specification

This library aims to provide an easy way to create, update and manipulate messages from Aleph.im.

It mainly consists of pydantic models that provide field type validation and IDE autocompletion for messages.

This library provides:

* schema validation when parsing messages.
* cryptographic hash validation that the item_hash matches the content of the message.
* type validation using type checkers such as mypy in development environments.
* autocompletion support in development editors.

The item_hash is commonly used as a unique message identifier on Aleph.im.

Cryptographic signatures are out of scope of this library and part of the aleph-sdk-python project, due to their extended scope and dependency on cryptographic libraries.

This library is used in both client and node software of Aleph.im.

Usage

```bash
pip install aleph-message
```

```python
import requests
from aleph_message import parse_message
from pydantic import ValidationError

ALEPH_API_SERVER = "https://official.aleph.cloud"
MESSAGE_ITEM_HASH = "9b21eb870d01bf64d23e1d4475e342c8f958fcd544adc37db07d8281da070b00"

message_dict = requests.get(
    ALEPH_API_SERVER + "/api/v0/messages.json?hashes=" + MESSAGE_ITEM_HASH
).json()

try:
    message = parse_message(message_dict["messages"][0])
    print(message.sender)
except ValidationError as e:
    print(e.json(indent=4))
```
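The idea behind validating that an item_hash matches the message content can be illustrated with the standard library. This is a simplified sketch, not aleph-message's actual hashing scheme; the canonical JSON serialization and the `item_hash` helper below are assumptions made for illustration only:

```python
import hashlib
import json

def item_hash(content):
    """Hypothetical content hash: canonical JSON, then SHA-256 hex digest.
    Aleph.im's real derivation may differ; this only illustrates the check."""
    canonical = json.dumps(content, sort_keys=True, separators=(',', ':'))
    return hashlib.sha256(canonical.encode('utf-8')).hexdigest()

content = {"type": "POST", "body": "hello"}
declared = item_hash(content)

# Validation: recompute the hash from the content and compare.
assert item_hash(content) == declared

# Any tampering with the content changes the hash, so the check fails.
tampered = dict(content, body="bye")
assert item_hash(tampered) != declared
print("hash check passed")
```

Because the identifier is derived from the content itself, a mismatch proves the message body was altered after the hash was declared.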
|
aleph-nuls2
|
Python library to interact with the NULS2 blockchain.

Description

Features:

* Sign data with a NULS2 private key.
* Verify signatures.

Note

This project has been set up using PyScaffold 3.2.1. For details and usage information on PyScaffold see https://pyscaffold.org/.
|