package | package-description
---|---
alternator
|
Alternator provides tools to ease the creation of asynchronous generators.

Synchronous generators are pretty easy to create and manipulate in Python. In Python 3.5 asynchronous generators are possible. Consuming them with async for is pretty nice, but creating them is a fairly manual process. Maybe a future version of Python will correct this imbalance. Or maybe we can fix it with TOOLS!
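For context, the "fairly manual process" alluded to above looks roughly like the sketch below, which implements the plain __aiter__/__anext__ protocol by hand rather than using Alternator's own API (the Countdown class is purely illustrative):

```python
import asyncio

class Countdown:
    """A hand-rolled asynchronous iterator: the boilerplate a tool like Alternator aims to avoid."""

    def __init__(self, start):
        self.current = start

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.current <= 0:
            raise StopAsyncIteration
        await asyncio.sleep(0)   # yield control to the event loop
        self.current -= 1
        return self.current + 1

async def main():
    async for n in Countdown(3):
        print(n)                 # prints 3, 2, 1

asyncio.run(main())
```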
|
alterootheme.busycity
|
UNKNOWN
|
alterootheme.intensesimplicity
|
UNKNOWN
|
alterootheme.lazydays
|
UNKNOWN
|
alterparagraphs
|
Alterparagraphs is an ongoing effort to provide a family of paragraph implementations, each to be used as a replacement for the regular and only paragraph flowable inside the ReportLab package.

The idea behind this collection of paragraphs is to provide simple implementations that can be more easily understood and extended than the monolithic paragraph implementation as implemented by ReportLab.

Note that many of the paragraph classes in alterparagraphs are not finished in the sense that they are directly ready for production (this is especially true for the XMLParagraph, the development of which has barely started). You must test yourself whether they are suitable for your purpose. In any case it should be much easier to tweak them to make them do what you need compared to the standard ReportLab implementation.
|
alterschemo
|
No description available on PyPI.
|
alteruphono
|
alteruphono

alteruphono is a Python library for applying sound changes to phonetic and phonological representations, intended for use in simulations of language evolution.

Please remember that, while usable, alteruphono is a work-in-progress. The best documentation is currently to check the tests, and the library is not recommended for production usage.

Future improvements

- Move from the existing AST to a dictionary, mostly for speed and portability (even if it might make the code more verbose); it should still be a frozen dictionary
- Memoize parser.__call__() calls
- Consider that, if a rule has alternatives, sound_classes, or other prolific rules in context, it might be necessary to perform a more complex merging and add back-references in post to what is matched in ante, which could potentially even mean different ASTs for forward and backward. This needs further and detailed investigation, or explicit exclusion of such rules (the user could always have the prolific rules in ante and post, manually doing what would be done here).
- Use logging where appropriate
- Allow different boundary symbols, including "^" and "$"
- Add support for clusters/diphthongs
- Add tone and other suprasegmentals
- Add custom features
- Research Kleene closures

Installation

In any standard Python environment, alteruphono can be installed with:

pip install alteruphono

How to use

Detailed documentation can be found in the library source code and will be published along with the paper accompanying the library; a terser technical description is available at the end of this document.
Consultation of the sound changes provided for testing purposes is also recommended.

For basic usage as a library, the .forward() and .backward() functions can be used as a wrapper for most common circumstances. In the examples below, a rule p > t / _ V (that is, /p/ turns into /t/ when followed by a vowel) is applied both in forward and backward direction to the /pate/ sound sequence; the .backward() function correctly returns the two possible proto-forms:

>>> import alteruphono
>>> alteruphono.forward("# p a t e #", "p > t / _ V")
['#', 't', 'a', 't', 'e', '#']
>>> alteruphono.backward("# p a t e #", "p > t / _ V")
[['#', 'p', 'a', 't', 'e', '#'], ['#', 'p', 'a', 'p', 'e', '#']]

A stand-alone command-line tool can be used to call these wrapper functions:

$ alteruphono forward '# p a t e #' 'p > t / _ V'
# t a t e #
$ alteruphono backward '# p a t e #' 'p > t / _ V'
['# p a t e #', '# p a p e #']

Elements

We are not exploring every detail of the formal grammar for annotating sound changes, such as the flexibility with spaces and tabulations or equivalent symbols for the arrows; for full information, interested parties can consult the reference PEG grammar and the source code.
AlteruPhono operates by applying ordered lists of sound changes to textual representations of sound sequences. Sound changes are annotated in the A -> B / C syntax, whose constituents are referred to as "source" (A), "target" (B), and "context" (C), with the first two being mandatory; the other elements are named "arrow" and "slash". When applied to segment sequences, we refer to the original one as "ante" and to the resulting one (which might or might not have been modified) as "post". So, with a rule "p -> b / _ a" applied to "pape":

- p is the "source"
- b is the "target"
- _ a is the "context"
- "pape" is the "ante (sequence)"
- "bape" is the "post (sequence)"

Note that, if applied backwards, a rule will have a post sequence but potentially more than one ante sequence. If the rule above is applied backwards to the post sequence "bape", as explained in the backwards definition and given that we have no complementary information, the result is a set of ante sequences "pape" and "bape".
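For reference, the same terminology can be exercised with the wrapper functions already shown (a sketch; the rule is written here with the > arrow, which the grammar treats as equivalent to ->, and the results noted in the comments are the ones stated in the paragraph above):

```python
import alteruphono

# Forward: "pape" is the ante sequence, "bape" the post sequence.
post = alteruphono.forward("# p a p e #", "p > b / _ a")
print(post)    # the post sequence corresponds to "bape"

# Backward: without complementary information, both possible
# ante sequences are returned, corresponding to "bape" and "pape".
antes = alteruphono.backward("# b a p e #", "p > b / _ a")
print(antes)
```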
AlteruPhono operates on sound sequences expressed in standard CLDF/LingPy notation, derived from Cysouw's work, i.e., as a character string with tokens separated by single spaces. As such, a word like the English "chance" is represented not as "/tʃæns/" or /t͡ʃæns/, in proper IPA notation, but as "tʃ æ n s". While the notation might at first seem strange, it has proven its advantages with extensive work on linguistic databases, as it not only facilitates data entry and inspection, but also makes no assumptions about what constitutes a segment, no matter how obvious the segmentation might look to a linguist. On one hand, being agnostic in terms of the segmentation allows the program to operate as a "dumb" machine, and on the other it allows researchers to operate on different kinds of segmentation if suitable for their research, including treating whole syllables as segments. In order to facilitate the potentially tedious and error-prone task of manual segmentation, orthographic profiles can be used as in Lexibank.
Catalogs

While they are not enforced and in some cases are not needed, such as when the system operates as a glorified search&replace, alteruphono is designed to operate with three main catalogs: graphemes, features, and segment classes.

Graphemes are sequences of one or more textual characters where most characters are accepted (exceptions are...). While in most cases they will correspond to a common transcription system such as the IPA, and in most cases correspond to a single sound or phoneme, this is not enforced, and a sequence of characters (with the exception of a white-space, a tabulation, a forward slash, square and curly brackets, and an arrow) can be used to represent anything defined as a segment in a corresponding catalog. Also note that the slash notation of Lexibank is supported. The default catalog distributed with alteruphono is based on the BIPA system of clts.

Features are descriptors... The default is derived from BIPA descriptors, mostly articulatory, but we also include some distinctive feature systems. It is not necessary for a grapheme catalog to specify the features that compose each grapheme, but this severely limits the kind of operations possible, particularly when modelling observed or plausible sound changes. The default catalogs are derived from BIPA... such as in example

Segment classes are just shorthands. The default distributed with AlteruPhono includes a number of shorthands common in the literature and mostly unambiguous.
Types

- A grapheme is a sequence of one or more textual characters representing a segment, such as "a", "kʷʰ".
- A bundle is an explicit listing of features and values, as defined in a reference, enclosed in square brackets, such as "[open,front,vowel]" or "[-vowel]". Features are separated by commas, with optional spaces, and may carry a specific value in the format feature=value, with value being either a logical boolean ("true" or "false") or a numeric value; shorthands for "true" and "false" are defined as the operators "+" and "-"; if no "value" is provided, it defaults to "true" (so that [open,front,vowel] is internally translated to [open=true,front=true,vowel=true]). Note on back-references here (experimental)
- A modifier is a bundle of features used to modify a basic value; for example, if "V" defines a segment class (see item below) of vowels, "V[long]" would restrict the set of matched segments to long vowels.
- A segment-class is a short-hand for a bundle of features, as defined in a reference, intended to match one or more segments; classes are expressed with one or more upper-case characters, such as "C" or "VL" (for [consonant] and [long,vowel], respectively, in the default). A segment class can have a modifier.
- A marker is a single character carrying non-segmental information. Defined markers are # for word-boundary, . for syllable break, + for morpheme boundary, stress marks and tone marks. Note that some markers, particularly suprasegmental features such as stress and tone, in most cases will not be referred to directly when writing a rule, but by tiers. See the section on tiers.
- A focus is a special marker, represented by an underscore, used in the context to indicate the position of the source and target. See the reference where contexts are discussed.
- An alternative is a list of one or more segments (which type?) separated by a vertical bar, such as "b|p". While in almost all cases of actual usage alternatives could be expressed by bundles (such as "b|p" as "[plosive,bilabial]" in most inventories), using an alternative is in most cases preferable for legibility.
- A set is a list of alternative segments where the order is significant, expressed between curly brackets and separated by commas, such as {a,e}. The order is significant in the sense that, in the case of a corresponding set, elements will be matched by their index: if {a,e} is matched with {ɛ,i}, all /a/ will become /ɛ/ and all /e/ will become /i/ (note how, with standard IPA descriptors, it would not be possible to express such raising in an unambiguous way).
- A back-reference is a reference to a previously matched segment, expressed by the symbol @ and the numeric index for the segment (such as @2 for referring to the second element, the vowel /a/, in the segment sequence "b a"). As such, back-references allow identities to be carried: if "V s V" means any intervocalic "s" and "a s a" means only "s" between "a", "V s @1" means any "s" in intervocalic position where the two vowels are equal. Back-references can take a modifier.

TODO

For version 2.0:
- Implement mapper support in the automata (also with test cases)
- Implement parentheses support in the grammar and automata (also with test cases)
- Consider moving to ANTLR
- For the grammar, consider removing direct sound match in segment, only using alternative (potentially renamed to expression and dealt with in an appropriate way)
- Don't collect a context, but left and right already in the AST (i.e., remove the position symbol)
- In Graphviz output:
  - Accept a string with a description (could be the output of the NLAutomata)
  - Draw borders around `source`, `target`, and `context`
  - Add indices to sequences, at least optionally
  - Accept definitions of sound classes and IPA, at least in English

Old version

- Use logging everywhere
- Implement automatic, semi-automatic, and requested syllabification based on prosody strength
- Implement both PEG grammars from separate repository
- Add support for custom replacement functions (deciding on notation)
Manual

There are two basic elements: rules and sequences. A rule operates on a sequence, resulting in a single, potentially different, sequence in the forward direction, and in at least one, potentially different, sequence in the backward direction.

Following the conventions and practices of CLTS, CLDF, Lingpy, and orthographic profiles, the proposed notation operates on "strings", that is, text in Unicode characters representing a sequence of one or more segments separated by spaces. The most common segments are sounds as represented by Unicode glyphs, so that a transcription like /haʊs/ ("house" in English Received Pronunciation) is represented as "h a ʊ s", that is, not considering spaces, U+0068 (LATIN SMALL LETTER H), U+0061 (LATIN SMALL LETTER A), U+028A (LATIN SMALL LETTER UPSILON), and U+0073 (LATIN SMALL LETTER S). The usage of spaces might seem inconvenient and even odd at first, but the convention has proven useful with years of experience of phonological transcription for computer-assisted treatment, as it not only makes no automatic assumption of what constitutes a segment (for example, allowing users to work with fully atomic syllables), but also facilitates validation work.
A rule is a statement expressed in the A > B / C _ D notation, where C and D, both optional, express the preceding and following context. It is a shorthand to common notation, internally mapped to C A D > C B D. While A and B might express something different from historical evolution, such as correspondence, they are respectively named ante and post, and the rule can be read as "the sequence of segments A changes into the sequence of sounds B when preceded by C and followed by D". A, B, and C are referred to as "sequences", and are composed of one or more "segments". A "segment" is the basic, fundamental, atomic unit of a sequence.

Segments can be of X types:

- Sound segments, such as phonemes (like a or ʒ) or whatever is defined as an atomic segment by the user (for example, full-length syllables such as ba or ʈ͡ʂʰjou̯˨˩˦). In most cases, a phonetic or phonological transcription system such as IPA or NAPA will be used; by default, the system operates on BIPA, which also facilitates normalization in terms of homoglyphs, etc.
- A bundle of features, expressed as comma-separated feature-values enclosed by square brackets, such as [consonant], referring to all consonants, or [unrounded,open-mid,central,vowel], referring to all sounds matching this bundle of features (that is, ɜ and the same sound with modifiers), etc. Complex relationships between segments can be expressed with tiers, as described later. By default, the system of descriptors used by BIPA is used.
- Sound-classes, which are common short-hands for bundles of features, such as K for [consonant,velar] or R for "resonants" (defined internally as [consonant,-stop]). A default system, expressed in table X, is provided, and can be replaced, modified, or extended by the user. Sound-classes are expressed in all upper-case.
- Back-references, used to refer to other segments in a sequence, which are expressed by the at-symbol (@) and a numeric index, such as @1 or @3 (1-based). These are better explored in X.
- Special segments related to sequences, which are _ (underscore) for the "focus" in a context (from the name by Hartman 2003), that is, the position where the ante and post sequences are found; # (hash) for word boundaries; . (dot) for syllable breaks.

Sound segments, sound-classes, and back-references can carry a modifier, which is a following bundle of features that modifies the value expressed or referred to. For example, θ[voiced] is equivalent to ð, C[voiceless] would match only voiceless consonants, and C[voiceless] ə @1[voiced] would match sequences of a voiceless consonant, followed by a schwa, followed by the corresponding voiced consonant (thus matching sequences like p ə b and k ə g, but not p ə g).

Other non-primitives include alternatives and sets.

How to cite

If you use alteruphono, please cite it as:

Tresoldi, Tiago (2020). Alteruphono, a tool for simulating sound changes.
Version 0.3. Jena. Available at: https://github.com/tresoldi/alteruphono

In BibTeX:

@misc{Tresoldi2020alteruphono,
  author = {Tresoldi, Tiago},
  title = {Alteruphono, a tool for simulating sound changes. Version 0.3.},
  howpublished = {\url{https://github.com/tresoldi/alteruphono}},
  address = {Jena},
  year = {2020},
}

Author

Tiago Tresoldi ([email protected])

The author was supported during development by the ERC Grant #715618 for the project CALC (Computer-Assisted Language Comparison: Reconciling Computational and Classical Approaches in Historical Linguistics), led by Johann-Mattis List.
|
alteryx-gallery-py
|
alteryx_gallery_py

A lightweight Python wrapper for the Alteryx gallery API.

Installation

Use the package manager pip to install alteryx_gallery_py.

pip install alteryx_gallery_py

Usage

import zipfile
import io
from alteryx_gallery_py import Subscriptions

gallery_url = "http://devalteryx.continuus-technologies.com/gallery/"
api_key = 'INSERT API KEY'
client_secret = 'INSERT API SECRET'

# Initiate Subscription Object
sub = Subscriptions(api_key, client_secret, gallery_url)

# Search for a workflow in gallery
workflows = sub.get_workflows(search="pizza survey")
pizza_app_id = workflows[0]["id"]

# Get Questions for a workflow
questions = sub.get_questions(pizza_app_id)

# Run an app
answers = [
    {"name": "Question 1", "value": "True"},
    {"name": "Question 2", "value": "Cheese"},
    {"name": "Question 3", "value": 150}
]
job_run = sub.create_job(pizza_app_id, questions=answers, priority="0")

# Get job status
job_status = sub.get_job(job_run["id"])
output_id = job_status["outputs"][0]["id"]

# List all jobs for an Alteryx app
all_pizza_jobs = sub.list_jobs(pizza_app_id)

# Get job output
job_output = sub.get_job_output(job_run["id"], output_id)

# Download an Alteryx Analytics App
app_package = sub.download_app(pizza_app_id)
z = zipfile.ZipFile(io.BytesIO(app_package))
z.extractall("./")

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

License

MIT
|
alteryx-open-src-update-checker
|
Alteryx Open Source Update Checker

Alteryx open source update checker is a Python library to automatically check that you have the latest version of an Alteryx open source library. If your Alteryx open source library is out of date, a warning to upgrade will be shown.

Installation

Install with pip (as an add-on to Alteryx open source libraries):

python -m pip install "featuretools[updater]"
python -m pip install "evalml[updater]"
python -m pip install "woodwork[updater]"
python -m pip install "compose[updater]"

Install with conda from the conda-forge channel:

conda install -c conda-forge alteryx-open-src-update-checker

Disable Checker

You can disable the update checker by changing your environment variables to include the following:

export ALTERYX_OPEN_SRC_UPDATE_CHECKER=False

Built at Alteryx Innovation Labs
|
altest
|
No description available on PyPI.
|
altest-multiple-res
|
Getting Started with Multiple Responses Test API

Getting Started

Install the Package

The package is compatible with Python versions 2 >= 2.7.9 and 3 >= 3.4. Install the package from PyPI using the following pip command:

pip install altest-multiple-res==2.0.0

You can also view the package at: https://pypi.python.org/pypi/altest-multiple-res

Initialize the API Client

The following parameters are configurable for the API Client:

- timeout (float): The value to use for connection timeout. Default: 60
- max_retries (int): The number of times to retry an endpoint call if it fails. Default: 3
- backoff_factor (float): A backoff factor to apply between attempts after the second try. Default: 0

The API client can be initialized as follows:

from multipleresponsestestapi.multipleresponsestestapi_client import MultipleresponsestestapiClient
client = MultipleresponsestestapiClient()

Test the SDK

You can test the generated SDK and the server with test cases. unittest is used as the testing framework and nose is used as the test runner. You can run the tests as follows: navigate to the root directory of the SDK and run the following commands:

pip install -r test-requirements.txt
nosetests

Client Class Documentation

Multiple Responses Test API Client

The gateway for the SDK. This class acts as a factory for the Controllers and also holds the configuration of the SDK.

Controllers

- send_messages: Provides access to SendMessagesController

API Reference

List of APIs: Send Messages

Send Messages

Overview: an instance of the SendMessagesController class can be accessed from the API Client.

send_messages_controller = client.send_messages

Multiple Responses Without Range

Note: This endpoint does not require authentication.

def multiple_responses_without_range(self)

Response Type: List of MultipleMessageModel

Example Usage:

result = send_messages_controller.multiple_responses_without_range()

Example Response (as JSON):

[{"from": "Littlecab", "to": ["+254700000001", "+254700000002", "+254700000003"], "text": "Welcome to our Little world."}]

Errors:
- 404 Not found: FailureResponseModelException
- 500 Internal server error: FailureResponseModelException
- Default Continue: SuccessResponseModelException

Multiple Responses With Range

Note: This endpoint does not require authentication.

def multiple_responses_with_range(self)

Response Type: List of MultipleMessageModel

Example Usage:

result = send_messages_controller.multiple_responses_with_range()

Example Response (as JSON):

[{"from": "Littlecab", "to": ["+254700000001", "+254700000002", "+254700000003"], "text": "Welcome to our Little world."}]

Errors:
- 404 Not found: FailureResponseModelException
- 500 Internal server error: FailureResponseModelException
- Default Continue: SuccessResponseModelException

Model Reference

Structures: Single Message Model, Multiple Message Model, Id Type, Reason

Single Message Model

Any payload to send a single message should be in this format. Class name: SingleMessageModel. Fields:
- mfrom (string, Optional): The SMS header you would like to use; these should be registered under the account being managed by the API KEY used.
- to (string, Optional): Mobile number of the recipient of the message with the country code included
- text (string, Optional): Your message to the recipient user

Example (as JSON): {"from": null, "to": null, "text": null}

Multiple Message Model

Any payload to send a message to multiple numbers should be in this format. Class name: MultipleMessageModel. Fields:
- mfrom (string, Optional): The SMS header you would like to use; these should be registered under the account being managed by the API KEY used.
- to (List of string, Optional): List of mobile numbers in the international format receiving your message
- text (string, Optional): Your message to the recipient user

Example (as JSON): {"from": null, "to": null, "text": null}

Id Type

Class name: IdType. Fields:
- kind (string): -
- channel_id (string, Optional): -
- video_id (string, Optional): -

Example (as JSON): {"kind": "kind8", "channelId": null, "videoId": null}

Reason

Reason of the failure. Class name: Reason. Fields:
- name (string, Optional): Name of the error generated
- message (string, Optional): Literal description of the error generated

Example (as JSON): {"name": null, "message": null}

Exceptions: Success Response Model, Failure Response Model

Success Response Model

Any successful response will have this format. Class name: SuccessResponseModelException. Fields:
- status (bool, Optional): Status of the response; when unsuccessful this value will be false
- message (string, Optional): Successful message to your previous request. Messages: "Request sent to queue" => "Your messages are being processed for delivery to your different recipients"

Example (as JSON): {"status": null, "message": null}

Failure Response Model

Any unsuccessful response will have this format. Class name: FailureResponseModelException. Fields:
- status (bool, Optional): Status of the response; when successful this value will be true
- reason (Reason, Optional): Reason of the failure

Example (as JSON): {"status": null, "reason": null}

Utility Classes Documentation

ApiHelper: A utility class for processing API Calls. Also contains classes for supporting standard datetime formats.

Methods:
- json_deserialize: Deserializes a JSON string to a Python dictionary.

Classes:
- HttpDateTime: A wrapper for datetime to support HTTP date format.
- UnixDateTime: A wrapper for datetime to support Unix date format.
- RFC3339DateTime: A wrapper for datetime to support RFC3339 format.

Common Code Documentation

HttpResponse: Http response received. Parameters:
- status_code (int): The status code returned by the server.
- reason_phrase (str): The reason phrase returned by the server.
- headers (dict): Response headers.
- text (str): Response body.
- request (HttpRequest): The request that resulted in this response.

HttpRequest: Represents a single Http Request. Parameters:
- http_method (HttpMethodEnum): The HTTP method of the request.
- query_url (str): The endpoint URL for the API request.
- headers (dict, optional): Request headers.
- query_parameters (dict, optional): Query parameters to add in the URL.
- parameters (dict | str, optional): Request body, either as a serialized string or else a list of parameters to form encode.
- files (dict, optional): Files to be sent with the request.
|
alt-eval
|
alt-eval

An automatic lyrics transcription (ALT) evaluation toolkit, released with the Jam-ALT benchmark.

The package implements metrics designed to work well with lyrics formatted according to music industry standards (see the Jam-ALT annotation guide), namely:

- A word error rate (WER) computed on text tokenized in a way that accounts for non-standard spellings common in song lyrics.
- A case error rate, measuring the rate of incorrectly predicted letter case.
- Precision, recall and F-score for symbols important for written lyrics:
  - Punctuation
  - Parentheses (used to delimit background vocals)
  - Line breaks
  - Section breaks (i.e. double line breaks)

Usage

Install the package with pip install alt-eval.

To compute the metrics:

from alt_eval import compute_metrics
compute_metrics(references, hypotheses)

where references and hypotheses are lists of strings. To specify the language (English by default), use the languages parameter, passing either a single language code, or a list of language codes corresponding to individual examples.

For Jam-ALT, use:

from datasets import load_dataset
dataset = load_dataset("audioshake/jam-alt")["test"]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])

Use visualize_errors=True to also get a list of HTML snippets that can be used to visualize the errors in each transcript.
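As a minimal end-to-end sketch of the call described above (the lyric strings are invented, and the exact structure of the returned results object is not documented here):

```python
from alt_eval import compute_metrics

# One reference transcript and one system hypothesis per example.
references = ["I got a feeling\n(Ooh) tonight's the night"]
hypotheses = ["i got a feelin\ntonights the night"]

# English is the default; pass languages="de" (or a list, one code per example)
# for other languages.
results = compute_metrics(references, hypotheses)
print(results)  # the metrics described above (WER, case error rate, P/R/F, ...)
```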
|
alteza
|
Alteza

Alteza is a static site generator driven by PyPage. Examples of other static site generators can be found here. The differentiator with Alteza is that the site author (if familiar with Python) will have a lot more fine-grained control over the output than what (as far as I'm aware) any of the existing options offer.

The learning curve is also shorter with Alteza. I've tried to follow part of xmonad's philosophy of keeping things small and simple. Alteza doesn't try to do a lot of things; instead it simply offers the core crucial functionality that is common to most static site generators.

Alteza also imposes very little required structure or a particular "way of doing things" on your website (other than requiring unique names). You retain the freedom to organize your website as you wish. (The name Alteza comes from a word that may be translated to illustriousness in Español.)

A key design aspect of Alteza is writing little scripts and executing such code to generate your website. Your static site can contain arbitrary Python that is executed at the time of site generation. PyPage, in particular, makes it seamless to include actual Python code inside page templates. (This of course means that you must run Alteza with trusted code, or in an isolated container.)

Installation

You can install Alteza easily with pip:

pip install alteza

Try running alteza -h to see the command-line options available.

User Guide

- The directory structure is generally mirrored in the generated site.
- By default, nothing is copied/published to the generated site. A file must explicitly indicate, using a public: true variable/field, that it is to be published. So directories with no public files are non-existent in the generated site. Files reachable from marked-as-public files will also be publicly accessible. Here, reachability is discovered when a provided link function is used to link to other files.
- There are two kinds of files that are subject to processing with PyPage: Markdown files (ending with .md) and any file with a .py before its actual extension.
- Markdown Files:
  - Markdown files are first processed to have their "front matter" extracted using Meta-Data. The first blank line or --- ends the front matter section. The front matter is processed as YAML, and the fields are injected into the pypage environment.
  - The Markdown file is processed using pypage, with its Python environment enhanced by the YAML fields from the front matter.
  - The environment dictionary after the Markdown is processed by pypage is treated as the "return value" of this .md file. This "return value" dictionary has a content key added to it which maps to the pypage output for this .md file.
  - This Markdown file is passed to a template specified in configuration, for a second round of processing by PyPage. Templates are HTML files processed by PyPage. The PyPage-processed Markdown HTML output is passed to the template as the body variable. The template itself is executed by PyPage. The template should use this body value via PyPage (with {{ body }}) in order to render the body's contents. (See more on configuration files in the next section.)
  - The template is defined using a template variable declared in a __config__.py file. The template's value must be the entire contents of a template HTML file. A convenience function readfile is provided for this. So you can write template = readfile('some_template.html') in a config file. Templates may be overridden in descendant __config__.py files, or in the Markdown itself using a PyPage multiline code tag (not an inline code tag).
  - Markdown files result in a directory, with an index.html file containing the Markdown's output.
- Other Dynamic Files (i.e. any file with a .py before the last . in its file name): These files are processed with PyPage once, with no template application step afterward.
- Other content files are not read. They are selectively either symlinked or copied. (A sketch of a small __config__.py follows below.)
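As referenced above, here is a minimal sketch of a directory-level configuration file. The file name some_template.html and the page note.md are made up for illustration; public, template, readfile and link are the pieces described in this guide, and readfile/link are only available when Alteza executes the file:

```python
# __config__.py -- executed by Alteza while it descends into this directory.
# Any variable that does not start with "_" is absorbed into the env dict,
# so pages in this directory (and below) see it.
template = readfile("some_template.html")  # readfile is provided by Alteza

# A page in this directory, e.g. note.md, would opt into publication with
# front matter such as:
#
#   public: true
#   title: A note
#
# and could link to another page with {{ link('some-other-blog-post') }},
# which also marks that target as reachable (and therefore published).
```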
- Python Environment and Configuration:
  - Note: Python code in both .md and other .py.* files is run using Python's built-in exec (and eval) functions, and when they're run, we pass in a dictionary for their globals argument. We call that dict the environment, or env.
  - Configuration is done through file(s) called __config__.py. First, we recursively go through all directories top-down. At each directory (descending downward), we execute a __config__.py file, if one is present. After execution, we absorb any variables in it that do not start with a _ into the env dict.
  - This behavior can be used to override values. For example, a top-level directory can define a default_template, which can then be overridden by inner directories.
  - The deepest .md/.py.* files get executed first. After each one executes, we check if the env contains a field public that is set as True. If it does, we mark that file for publication. Other than recording the value of public after each dynamic file is executed, any modifications to env made by a dynamic file are discarded (and not absorbed, unlike with __config__.py).
  - I would recommend not using __config__.py to set public as True, as that would make the entire directory and all its descendants public (unless that behavior is exactly what is desired). Reachability with link (described below) is, in my opinion, a better way to make only reachable content public.
- Name Registry and link:
  - The name of every file in the input content is stored in a "name registry" of sorts that's used by link. Currently, names, without their file extension, have to be unique across input content. This might change in the future. The Name Registry will error out if it encounters any non-unique names. (I understand this is a significant limitation, so I might support making this simply opt-in behavior with a --unique flag in the future.)
  - Any non-dynamic content file that has been link-ed to is marked for publication (i.e. copying or symlinking).
  - A Python function named link is injected into the top level env. This function can be used to get relative links to any other file; link will automatically determine and return the relative path to a file. For example, one can do <a href="{{ link('some-other-blog-post') }}">, and the generated site will have a relative link to it (i.e. to its directory if a Markdown file, and to the file itself otherwise).
  - Reachability of files is determined using this function, and unreachable files will be treated as non-public (and thus not exist in the generated site).
  - Extensions may be omitted for dynamic files (i.e. .md for Markdown, and .py* for any file with .py before its extension). I.e. one can write both link('magic-turtle') or link('magic-turtle.md') for the file magic-turtle.md, and link('pygments-styles') or link('pygments-styles.py.css') for the file pygments-styles.py.css.

Usage, Testing & Development

Running

If you've installed Alteza with pip, you can just run alteza, e.g.:

alteza -h

If you're working on Alteza itself, then run the alteza module itself, from the project directory directly, e.g. python3 -m alteza -h.

Command-line Arguments

The -h argument above will print the list of available arguments:

usage: __main__.py [--copy_assets] [--trailing_slash] [--content CONTENT] [--output OUTPUT] [-h]
options:
--copy_assets (bool, default=False) Copy assets instead of symlinking to them
--trailing_slash (bool, default=False) Include a trailing slash in links to markdown pages
--content CONTENT
(str, default=test_content) Directory to read the input content from.
--output OUTPUT
(str, default=test_output) Directory to send the output. WARNING: This will be deleted first.
-h, --help show this help message and exit

As might be obvious above, you set the content to your content directory. The output directory will be deleted entirely, before being written to.

To test against test_content (and generate output to test_output), run it like this:

python -m alteza --content test_content --output test_output

Code Style

I'm using black. To re-format the code, just run: black alteza. Fwiw, I've configured my IDE (PyCharm) to always auto-format with black.

Type Checking

To ensure better code quality, Alteza is type-checked with five different type checking systems: Mypy, Meta's Pyre, Microsoft's Pyright, Google's Pytype, and Pyflakes; as well as linted with Pylint.

To run some type checks:

mypy alteza      # should have zero errors
pyflakes alteza  # should have zero errors
pyre check       # should have zero errors as well
pyright alteza   # should have zero errors also
pytype alteza    # should have zero errors too

Or, all at once with: mypy alteza ; pyre check ; pyright alteza ; pytype alteza ; pyflakes alteza.

Linting

Linting policy is very strict. Pylint must issue a perfect 10/10 score, otherwise the Pylint CI check will fail. To test whether lints are passing, simply run: pylint -j 0 alteza

Of course, when it makes sense, lints are suppressed next to the relevant line, in code. Also, unlike typical Python code, the naming convention generally followed in this codebase is camelCase. Pylint checks have been mostly disabled for names.

Dependencies

To install dependencies for development, run:

python3 -m pip install -r requirements.txt
python3 -m pip install -r requirements-dev.txt

To use a virtual environment (after creating one with python3 -m venv venv):

source venv/bin/activate
# ... install requirements ...
# ... do some development ...
deactivate  # end the venv

License

This project is licensed under the AGPL v3, but I'm reserving the right to re-license it under a license with fewer restrictions, e.g. the Apache License 2.0, and any PRs constitute consent to re-license as such.
|
altf1be-google-analytics-helpers
|
altf1be_google_analytics_helpers

Helpers for Google Analytics facilitating the setting of the categories, actions, labels, User Id and User Agents. See https://bitbucket.org/altf1be/altf1be_google_analytics_helpers

Google Analytics: https://analytics.google.com

Author: Abdelkrim BOUJRAF, http://www.alt-f1.be

Usage

Create a .env including this parameter COM_GOOGLE_ANALYTICS_TRACKING_ID:

export COM_GOOGLE_ANALYTICS_TRACKING_ID=G-XXXXXXXXXX

Install python-dotenv:

pip install python-dotenv

Load the .env in your entry point (app.py, main.py, test.py):

from dotenv import load_dotenv
load_dotenv()

Run the code and check if the event is set on Google Analytics:

from altf1be_google_analytics_helpers import GoogleAnalytics
import requests

googleAnalytics = GoogleAnalytics()

# USER_ID: int, 0 if you DO NOT store a user_id
googleAnalytics.track_event(
    category="set a category",
    action="set an action",
    label="set a label",
    value=0,  # Event value, must be an integer. i.e. the value of a basket
    ua=request.headers.get("User-Agent"),
)

Installation

install the package on pypi.org:
- install: pip install altf1be_google_analytics_helpers
- upgrade: pip install altf1be_google_analytics_helpers --upgrade

install the package on test.pypi.org:
- install: pip install -i https://test.pypi.org/simple/ altf1be_google_analytics_helpers
- upgrade: pip install -i https://test.pypi.org/simple/ altf1be_google_analytics_helpers --upgrade

Dependencies

See requirements.txt

Build this package

build the setup.py:
- python3 setup.py sdist bdist_wheel
- python3 -m pip install --user --upgrade twine --use-feature=2020-resolver

upload the library on TEST pypi.org:
- python -m twine upload --repository-url https://test.pypi.org/legacy/ dist/*
- Source: https://test.pypi.org/project/altf1be_helpers

upload the library on PROD pypi.org:
- python -m twine upload dist/*
- Source: https://pypi.org/project/altf1be_helpers

Documentation to build a Python package

- Packaging Python Projects: https://packaging.python.org/tutorials/packaging-projects/
- Managing Application Dependencies: https://packaging.python.org/tutorials/managing-dependencies/#managing-dependencies
- Packaging and distributing projects: https://packaging.python.org/guides/distributing-packages-using-setuptools/#distributing-packages

License

Copyright (c) ALT-F1 SPRL, Abdelkrim BOUJRAF. All rights reserved. Licensed under the EUPL License, Version 1.2. See LICENSE in the project root for license information.
|
altf1be-helpers
|
altf1be_helpers

Helpers to deal with basic requirements of an application built by www.alt-f1.be. See https://bitbucket.org/altf1be/altf1be_helpers

Management of a JSON file: load, save, save with datetime.

Usage

install the package on pypi.org:
- install: pip install altf1be_helpers
- upgrade: pip install altf1be_helpers --upgrade

install the package on test.pypi.org:
- install: pip install -i https://test.pypi.org/simple/ altf1be_helpers
- upgrade: pip install -i https://test.pypi.org/simple/ altf1be_helpers --upgrade

Dependencies

See requirements.txt

Build the package

build the setup.py:
- python3 setup.py sdist bdist_wheel
- python3 -m pip install --user --upgrade twine

upload the library on TEST pypi.org:
- python -m twine upload --repository-url https://test.pypi.org/legacy/ dist/*
- Source: https://test.pypi.org/project/altf1be_helpers

upload the library on PROD pypi.org:
- python -m twine upload dist/*
- Source: https://pypi.org/project/altf1be_helpers

test the library altf1be_helpers:
- cd altf1be_helpers
- python altf1be_helpers_unittest.py
- python altf1be_json_helpers_unittest.py

locate the package:
- python -c "from altf1be_helpers import AltF1BeHelpers as _; print(_.__path__)" (does not work yet)

list functions inside the module/the package:
- python -c "import altf1be_helpers as _; print(dir(_))"

test the package:
- python -c "from altf1be_helpers import AltF1BeHelpers; text='éê à iïî'; print(f'{AltF1BeHelpers.unicode_to_ascii(text)}')"
- result: ee a iii

test the library altf1be_helpers:
- cd altf1be_helpers
- python altf1be_json_helpers_unittest.py

locate the package:
- python -c "from altf1be_json_helpers import AltF1BeJSONHelpers as _; print(_.__path__)" (does not work yet)

list functions inside the module/the package:
- python -c "import altf1be_helpers as _; print(dir(_))"

test the package:
- python -c 'import os; from altf1be_helpers import AltF1BeJSONHelpers; altF1BeJSONHelpers = AltF1BeJSONHelpers(); data = altF1BeJSONHelpers.load(os.path.join("data", "altf1be_sample.json")); print(data)'
- result: {"name": "altf1be_json_helpers"}

Documentation

- Packaging Python Projects: https://packaging.python.org/tutorials/packaging-projects/
- Managing Application Dependencies: https://packaging.python.org/tutorials/managing-dependencies/#managing-dependencies
- Packaging and distributing projects: https://packaging.python.org/guides/distributing-packages-using-setuptools/#distributing-packages

License

Copyright (c) ALT-F1 SPRL, Abdelkrim Boujraf. All rights reserved. Licensed under the EUPL License, Version 1.2. See LICENSE in the project root for license information.
|
altf1be-json-helpers
|
altf1be_json_helpers

Helpers to deal with basic requirements of the management of a JSON file: load, save, save with datetime. The library is built by www.alt-f1.be. See https://bitbucket.org/altf1be/altf1be_json_helpers

The class AltF1beJSON counts a limited amount of methods:

- Load a JSON file
- Save a JSON file and create the directory where to store the JSON file if it does not exist
- Save a JSON file appended with a date time; e.g. 2020-06-19_20-45-42 (format YYYY-MM-DD_HH-MM-SS)

Usage

install the package on pypi.org:
- install: pip install altf1be_json_helpers
- upgrade: pip install altf1be_json_helpers --upgrade

install the package on test.pypi.org:
- install: pip install -i https://test.pypi.org/simple/ altf1be_json_helpers
- upgrade: pip install -i https://test.pypi.org/simple/ altf1be_json_helpers --upgrade

Dependencies

See requirements.txt

Build the package

- install NodeJS packages: npm install
- build the setup.py (clean and build the package): npm run clean-build

upload the library on TEST pypi.org:
- npm run push-test:setup.py
- Source: https://test.pypi.org/project/altf1be_json_helpers

upload the library on PROD pypi.org:
- npm run push-prod:setup.py
- Source: https://pypi.org/project/altf1be_json_helpers

test the library:
- cd altf1be_json_helpers
- python altf1be_json_helpers_unittest.py

locate the package:
- python -c "from altf1be_json_helpers import AltF1BeJSONHelpers as _; print(_.__path__)" (does not work yet)

list functions inside the module/the package:
- python -c "import altf1be_json_helpers as _; print(dir(_))"

test the package:
- python -c 'import os; from altf1be_json_helpers import AltF1BeJSONHelpers; altF1BeJSONHelpers = AltF1BeJSONHelpers(); data = altF1BeJSONHelpers.load(os.path.join("data", "altf1be_sample.json")); print(data)'
- result: {"name": "altf1be_json_helpers"}

Documentation

- Packaging Python Projects: https://packaging.python.org/tutorials/packaging-projects/
- Managing Application Dependencies: https://packaging.python.org/tutorials/managing-dependencies/#managing-dependencies
- Packaging and distributing projects: https://packaging.python.org/guides/distributing-packages-using-setuptools/#distributing-packages

License

Copyright (c) ALT-F1 SPRL, Abdelkrim Boujraf. All rights reserved. Licensed under the EUPL License, Version 1.2. See LICENSE in the project root for license information.
|
altf1be-sca-tork-easycube-api
|
altf1be_sca_tork_easycube_api

Helpers for the SCA Tork Easycube API facilitating the collection of data generated by the dispensers. Built by http://www.alt-f1.be.

See:
- https://bitbucket.org/altf1be/com_torkglobal_easycube_fm
- https://easycube.sca-tork.com/en/Plan/Index
- https://fm.easycube.torkglobal.com/en
- https://easycube-external-api-web-c2m2jq5zkw6rc.azurewebsites.net/swagger

Author: Abdelkrim BOUJRAF, http://www.alt-f1.be

Screenshot

cd web; ./run_web.sh

Open a browser at http://0.0.0.0:8000/api/dispensers/sca_tork_easycube/ to display actions required per dispenser.

Usage

Create a .env including these parameters:

export SCA_TORK_EASYCUBE_CLIENT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export SCA_TORK_EASYCUBE_CLIENT_SECRET=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
export SCA_TORK_EASYCUBE_GRANT_TYPE=client_credentials
export SCA_TORK_EASYCUBE_SCOPE=EasyCube.External.Api
export SCA_TORK_EASYCUBE_BASE_URL=https://easycube-external-api-web-c2m2jq5zkw6rc.azurewebsites.net

Install the Python packages:

pip install -r requirements.txt

Load the .env in your entry point (app.py, main.py, test.py):

from dotenv import load_dotenv
load_dotenv()

Run the code and check that you can access the REST API:

python
import altf1be_sca_tork_easycube_api
print(dir(altf1be_sca_tork_easycube_api))
# ['AltF1BeHelpers', 'AltF1BeJSONHelpers', 'DISPENSER_TYPE_NOT_FOUND', 'Dispensers', 'DispensersModel', 'ERROR_UNKNOWN', 'SCATorkEasyCubeAPI', 'SCATorkEasyCubeAPIAuthentication', 'SCATorkEasyCubeAPIHelpers', 'STATUS_UNKNOWN', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'credentials_filename', 'datetime', 'dispensers', 'dispensers_model', 'glob', 'json', 'load_dotenv', 'log_filename', 'logger', 'logging', 'np', 'os', 'pd', 'requests', 'sca_tork_easycube_api', 'sca_tork_easycube_api_authentication', 'sca_tork_easycube_api_helpers', 'sys', 'time', 'timezone']

Installation

install the package on pypi.org:
- install: pip install altf1be_sca_tork_easycube_api
- upgrade: pip install altf1be_sca_tork_easycube_api --upgrade

install the package on test.pypi.org:
- install: pip install -i https://test.pypi.org/simple/ altf1be_sca_tork_easycube_api
- upgrade: pip install -i https://test.pypi.org/simple/ altf1be_sca_tork_easycube_api --upgrade

Dependencies

See requirements.txt

Build this package

build the setup.py:
- python3 setup.py sdist bdist_wheel
- python3 -m pip install --user --upgrade twine --use-feature=2020-resolver

upload the library on TEST pypi.org:
- python -m twine upload --repository-url https://test.pypi.org/legacy/ dist/*
- Source: https://test.pypi.org/project/altf1be_helpers

upload the library on PROD pypi.org:
- python -m twine upload dist/*
- Source: https://pypi.org/project/altf1be_helpers

Documentation to build a Python package

- Packaging Python Projects: https://packaging.python.org/tutorials/packaging-projects/
- Managing Application Dependencies: https://packaging.python.org/tutorials/managing-dependencies/#managing-dependencies
- Packaging and distributing projects: https://packaging.python.org/guides/distributing-packages-using-setuptools/#distributing-packages

License

Copyright (c) ALT-F1 SPRL, Abdelkrim BOUJRAF. All rights reserved. This project IS NOT open sourced BUT the source code is freely available. See LICENSE in the project root for license information.
|
altf2
|
No description available on PyPI.
|
altf4
|
Automatic Load Testing Framework 4

Use at your own risk. I haven't written much yet.
|
altFACS
|
altFACS

Author: David Brown
Last Edited: 2020-August-07

Aim: This package is intended to standardise and simplify the investigation of protein-protein interactions by flow cytometry.

Explanation: In the Huang Lab we are interested in developing new tools to examine biological interactions. Split fluorescent proteins have proven to be a useful tool for the tagging and study of endogenous proteins in living cells, and we have been trying to maximise their utility. Appropriate use of a split fluorescent protein as a probe requires a good understanding of the complementation process, whereby the two halves of the split meet and fold to form the mature fluorescent protein. Complementation can be studied biochemically, however we can exploit the self-reporting nature of fluorescent proteins to study complementation in vivo by microscopy or by flow cytometry, which offers a higher throughput.

Flow cytometry and fluorescence activated cell sorting (FACS) are more frequently used to distinguish cell populations based on a characteristic intensity profile. In our case we often use it to study how proteins behave at a range of concentrations. This alternative approach to FACS is the main purpose of the altFACS package.

Example altFACS Plots (figure caption):
A. Raw flow cytometry events.
B. Scatter gating. Events from A after events saturating in any channel have been removed from all channels. Events likely to correspond to live cells have been gated based on a contour map.
C. Singlet gating. Events from B; events likely to contain more than one cell (below the line) are excluded.
D. Negative control without transfection. Events from C after fluorescence gates have been set to contain 99% of the population.
E. Transfected with CloGFP(1-10) only.
F. Positive control with full length sfGFP::CTLA:T2A:mTagBFP.
G. Fitting of full length GFP in BFP+ cells. altFACS facilitates model fitting to flow cytometry data.
H. CloGFP with wild type GFP11::CTLA:T2A:mTagBFP.
I. CloGFP with Y9F mutant GFP11::CTLA:T2A:mTagBFP.
J-L. Data from panels G-I rescaled for comparison.

We conclude that split CloGFP complementation efficiency is much less than 100%, and that the Y9F mutation in the GFP11 fragment has no impact on split CloGFP complementation efficiency.

Installation

Currently, altFACS is available on GitHub or on the test.PyPI site. Most requirements will install automatically, but you may need to install fcsparser before installing altFACS.
|
alt-fake-useragent
|
Alternate Fake UserAgent

alt-fake-useragent: Up to date and fixed version of fake-useragent, which is a simple useragent faker with a real world database.

Features

- Grabs up to date useragent from useragentstring.com
- Randomize with real world statistic via w3schools.com

Installation

pip install alt-fake-useragent

Usage

from fake_useragent import UserAgent
ua = UserAgent()

ua.ie
# Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US);
ua.msie
# Mozilla/5.0 (compatible; MSIE 10.0; Macintosh; Intel Mac OS X 10_7_3; Trident/6.0)'
ua['Internet Explorer']
# Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; GTB7.4; InfoPath.2; SV1; .NET CLR 3.3.69573; WOW64; en-US)
ua.opera
# Opera/9.80 (X11; Linux i686; U; ru) Presto/2.8.131 Version/11.11
ua.chrome
# Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.2 (KHTML, like Gecko) Chrome/22.0.1216.0 Safari/537.2'
ua.google
# Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/537.13 (KHTML, like Gecko) Chrome/24.0.1290.1 Safari/537.13
ua['google chrome']
# Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11
ua.firefox
# Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:16.0.1) Gecko/20121011 Firefox/16.0.1
ua.ff
# Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:15.0) Gecko/20100101 Firefox/15.0.1
ua.safari
# Mozilla/5.0 (iPad; CPU OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5355d Safari/8536.25
ua.mobile
# Mozilla/5.0 (Android 2.2; Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.19.4 (KHTML, like Gecko) Version/5.0.3 Safari/533.19.4
ua.desktop
# Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; de-de) AppleWebKit/533.20.25 (KHTML, like Gecko) Version/5.0.4 Safari/533.20.27
ua.random
# and the best one, random via real world browser usage statistic

Notes

fake-useragent stores collected data at your os temp dir, like /tmp

If You want to update the saved database just:

from fake_useragent import UserAgent
ua = UserAgent()
ua.update()

If You don't want a cache database or have no writable file system:

from fake_useragent import UserAgent
ua = UserAgent(cache=False)

Sometimes useragentstring.com or w3schools.com changes their html, or is down; in such a case fake-useragent uses a heroku fallback.

If You don't want to use the hosted cache server (version 0.1.5 added):

from fake_useragent import UserAgent
ua = UserAgent(use_cache_server=False)

In a very rare case, if the hosted cache server and sources are unavailable, fake-useragent won't be able to download data: (version 0.1.3 added)

from fake_useragent import UserAgent
ua = UserAgent()
# Traceback (most recent call last):
# ...
# fake_useragent.errors.FakeUserAgentError

# You can catch it via
from fake_useragent import FakeUserAgentError
try:
    ua = UserAgent()
except FakeUserAgentError:
    pass

If You try to get an unknown browser: (version 0.1.3 changed)

from fake_useragent import UserAgent
ua = UserAgent()
ua.best_browser
# Traceback (most recent call last):
# ...
# fake_useragent.errors.FakeUserAgentError

You can completely disable ANY annoying exception by adding fallback: (version 0.1.4 added)

import fake_useragent

ua = fake_useragent.UserAgent(fallback='Your favorite Browser')
# in case if something went wrong, one more time it is REALLY!!! rare case
ua.random == 'Your favorite Browser'

Want to control the location of the data file? (version 0.1.4 added)

import fake_useragent

# I am STRONGLY!!! recommend to use version suffix
location = '/home/user/fake_useragent%s.json' % fake_useragent.VERSION

ua = fake_useragent.UserAgent(path=location)
ua.random

If you need to keep some attributes from being overridden in UserAgent by the __getattr__ method, use safe_attrs: you can pass attribute names there. At least this will prevent you from raising FakeUserAgentError when an attribute is not found.

For example, when using fake_useragent with injections you need to:

import fake_useragent

ua = fake_useragent.UserAgent(safe_attrs=('__injections__',))

Please, do not use if you don't understand why you need this. This is magic for a rarely extreme case.

Experiencing issues???

Make sure that You are using the latest version!!!

pip install -U fake-useragent

Check version via python console: (version 0.1.4 added)

import fake_useragent

print(fake_useragent.VERSION)

And You are always welcome to post issues. Please do not forget to mention the version that You are using.

Tests

pip install tox
tox

Changelog

0.2.1 December 8, 2021
- Added mobile and desktop selectors

0.2.0 December 6, 2021
- Fixed all Known Bugs 🐛
- Bug 🐛 Fixed: Error occurred during loading data.

0.1.11 October 4, 2018
- moved s3 + cloudfront fallback to heroku.com, cuz someone from Florida did ~25M requests last month

0.1.10 February 11, 2018
- Minor fix docs cloudfront url

0.1.9 February 11, 2018
- fix w3schools.com renamed IE/Edge to Edge/IE
- moved heroku.com fallback to s3 + cloudfront
- stop testing Python 3.3 and pypy

0.1.8 November 2, 2017
- fix useragentstring.com Can't connect to local MySQL server through socket

0.1.7 April 2, 2017
- fix broken README.rst

0.1.6 April 2, 2017
- fixes bug use_cache_server do not affected anything
- w3schools.com moved to https
- verify_ssl options added, by default it is True (urllib.urlopen ssl context for Python 2.7.9- and 3.4.3- is not supported)

0.1.5 February 28, 2017
- added ua.edge alias to Internet Explorer
- w3schools.com starts displaying Edge statistic
- Python 2.6 is not tested anymore
- use_cache_server option added
- Increased fake_useragent.settings.HTTP_TIMEOUT to 5 seconds

0.1.4 December 14, 2016
- Added custom data file location support
- Added fallback browser support, in case of unavailable data sources
- Added alias fake_useragent.FakeUserAgent for fake_useragent.UserAgent
- Added alias fake_useragent.UserAgentError for fake_useragent.FakeUserAgentError
- Reduced fake_useragent.settings.HTTP_TIMEOUT to 3 seconds
- Started migration to new data file format
- Simplified a lot 4+ years out of date code
- Better thread/greenlet safety
- Added verbose logging
- Added safe_attrs for prevent overriding by __getattr__

0.1.3 November 24, 2016
- Added hosted data file, when remote services is unavailable
- Raises fake_useragent.errors.FakeUserAgentError in case when there is no way to download data
- Raises fake_useragent.errors.FakeUserAgentError instead of None in case of unknown browser
- Added gevent.sleep support in gevent patched environment when trying to download data

Authors

You can visit the authors page.
|
altflags
|
alt-flags

altflags allows you to easily map, parse and manipulate binary flags.

Why?

- The built in Python Flags and IntFlags didn't fit my needs
- Simple usage to handle binary flag mapping, parsing and manipulation
- Needs to run super efficiently and quick (same thing?)
- This is my first public package, it's small and easy to maintain

Quick Start

1. Install with pip from PyPi

python -m pip install altflags

2. Create altflags, Flags class

from altflags import Flags, flag

class Permissions(Flags):
    create_message = flag(0)
    delete_message = flag(1)
    edit_message = flag(2)

user_permissions = Permissions()

3. Edit your flags

# Set create_message and edit_message flags to true
user_permissions.create_message = True
user_permissions.edit_message = True
# print flags as binary
print("{:0b}".format(user_permissions.flags))
# >>> 101
# all flags are False (0) from initialization
# print flags as integer
print("{:0n}".format(user_permissions.flags))
# >>> 5

4. Compare flags

user2_permissions = Permissions()
user2_permissions.create_message = True
user2_permissions.edit_message = True
print(user_permissions == user2_permissions)
# >>> True
user2_permissions.create_message = False
print(user_permissions == user2_permissions)
# >>> False

5. Extend altflags with class methods that return pre-formatted flag objects

class Permissions(Flags):
    create_message = flag(0)
    delete_message = flag(1)
    edit_message = flag(2)

    @classmethod
    def all(cls):
        new_cls = cls()
        new_cls.create_message = True
        new_cls.delete_message = True
        new_cls.edit_message = True
        return new_cls
user_permissions = Permissions.all()
print("{:0b}".format(user_permissions.flags))
# >>> 111
print("{:0n}".format(user_permissions.flags))
# >>> 7

Notes

flag(n: int): the n argument specifies the bit position of your flag (Warning: These can be overwritten).
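To illustrate that note, a small sketch reusing the Permissions class from the Quick Start above; the integer values in the comments assume flag(n) maps to bit n, i.e. a weight of 2**n, as the note implies:

```python
perms = Permissions()
perms.delete_message = True   # flag(1) -> bit 1 -> weight 2
perms.edit_message = True     # flag(2) -> bit 2 -> weight 4

print("{:0b}".format(perms.flags))          # 110
print(perms.flags == (1 << 1) | (1 << 2))   # True, i.e. the integer 6
```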
|
altgraph
|
altgraph is a fork of graphlib: a graph (network) package for constructing graphs, BFS and DFS traversals, topological sort, shortest paths, etc. with graphviz output.

altgraph includes some additional usage of Python 2.6+ features and enhancements related to modulegraph and macholib.
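As a quick orientation, here is a hedged sketch (not taken from the project's own documentation); it only uses the constructor and methods named in the release history below, and assumes the Graph class lives in the altgraph.Graph module as those notes indicate:

```python
from altgraph.Graph import Graph

# Build a small directed graph from 2-tuples (the constructor also accepts
# 3-tuples carrying edge data, per the 0.8 release notes below).
graph = Graph([(1, 2), (2, 3), (1, 3)])
graph.add_node("docs")       # nodes can also be added explicitly
graph.add_edge(3, "docs")

print(graph.forw_topo_sort())    # a topological ordering of the nodes
print(list(graph.iterdfs(1)))    # depth-first traversal starting at node 1
```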
methods already did).0.14Issue #7: Remove use ofiteritemsin altgraph.GraphAlgo code0.13Issue #4: Graph._bfs_subgraph and back_bfs_subgraph return subgraphs with reversed edgesFix by “pombredanne” on bitbucket.0.12AddedObjectGraph.edgeDatato retrieve the edge data
from a specific edge.AddedAltGraph.update_edge_dataandObjectGraph.updateEdgeDatato update the data associated with a graph edge.0.11Stabilize the order of elements in dot file exports,
patch from bitbucket user ‘pombredanne’.Tweak setup.py file to remove dependency on distribute (but
keep the dependency on setuptools)0.10.2There were no classifiers in the package metadata due to a bug
in setup.py0.10.1This is a bugfix releaseBug fixes:Issue #3: The source archive contains a README.txt
while the setup file refers to ReadMe.txt.This is caused by a misfeature in distutils, as a
workaround I’ve renamed ReadMe.txt to README.txt
in the source tree and setup file.0.10This is a minor feature releaseFeatures:Do not use “2to3” to support Python 3.As a side effect of this altgraph now supports
Python 2.6 and later, and no longer supports
earlier releases of Python.The order of attributes in the Dot output
is now always alphabetical.With this change the output will be consistent
between runs and Python versions.0.9This is a minor bugfix releaseFeatures:Addedaltgraph.ObjectGraph.ObjectGraph.nodes, a method
yielding all nodes in an object graph.Bugfixes:The 0.8 release didn’t work with py2app when using
python 3.x.0.8This is a minor feature release. The major new feature
is an extensive set of unittests, which explains almost
all other changes in this release.Bugfixes:Installing failed with Python 2.5 due to using a distutils
class that isn’t available in that version of Python
(issue #1 on the issue tracker)altgraph.GraphStat.degree_distnow actually worksaltgraph.Graph.add_edge(a, b, create_nodes=False)will
no longer create the edge when one of the nodes doesn’t
exist.altgraph.Graph.forw_topo_sortfailed for some sparse graphs.altgraph.Graph.back_topo_sortwas completely broken in
previous releases.altgraph.Graph.forw_bfs_subgraphnow actually works.altgraph.Graph.back_bfs_subgraphnow actually works.altgraph.Graph.iterdfsnow returns the correct result
when theforwardargument isFalse.altgraph.Graph.iterdatanow returns the correct result
when theforwardargument isFalse.Features:Thealtgraph.Graphconstructor now accepts an argument
that contains 2- and 3-tuples instead of requiring that
all items have the same size. The (optional) argument can now
also be any iterator.altgraph.Graph.Graph.add_nodehas no effect when you
add a hidden node.The private methodaltgraph.Graph._bfsis no longer
present.The private methodaltgraph.Graph._dfsis no longer
present.altgraph.ObjectGraphnow has a__contains__methods,
which means you can use theinoperator to check if a
node is part of a graph.altgraph.GraphUtil.generate_random_graphwill raiseGraphErrorinstead of looping forever when it is
impossible to create the requested graph.altgraph.Dot.edge_styleraisesGraphErrorwhen
one of the nodes is not present in the graph. The method
silently added the tail in the past, but without ensuring
a consistent graph state.altgraph.Dot.save_imgnow works when the mode is"neato".0.7.2This is a minor bugfix releaseBugfixes:distutils didn’t include the documentation subtree0.7.1This is a minor feature releaseFeatures:Documentation is now generated usingsphinxand can be viewed at <http://packages.python.org/altgraph>.The repository has moved to bitbucketaltgraph.GraphStat.avg_hopsis no longer present, the function had no
implementation and no specified behaviour.the modulealtgraph.compatis gone, which means altgraph will no
longer work with Python 2.3.0.7.0This is a minor feature release.Features:Support for Python 3It is now possible to run tests using ‘python setup.py test’(The actual testsuite is still very minimal though)
|
alt-gym-wordle
|
gym-wordleA wordle environment for openai/gymInstallationInstallopenai-gymthen install this package withpip install -e .Usageimport gym
import gym_wordle
env = gym.make('Wordle-v0')See thedocsfor more infoEnvironment detailsThis environment simulates a game of wordle using a wordle python clone fromhttps://github.com/bellerb/wordle_solverThe action space is a discrete space of 12972 numbers which corresponds to a word from a list of all allowed wordle guesses and answers
The observation space is a dict with the guesses and colors for the current game.
Guesses is an array of shape (5, 6) (5 letters and 6 rows), where each element is a number from 0-26, where 0 is '' and 26 is z.
Colors is an array of the same shape, only each element is a number from 0-2, where 0 is a blank (or black or grey) square, 1 is a yellow square, and 2 is a green square.
The reward calculation is as follows:
The agent gets 1-6 points depending on how fast the agent guesses the word. For example, getting the word on the first guess rewards 6 points, getting the word on the second guess rewards 5 points, etc.
The agent is also rewarded for colors in the current row (so the current guess).
Right now, the agent is rewarded 3 points for each green tile, and 1 point for each yellow tile.
No points are given for grey tiles.
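As a rough usage sketch (not taken from the package's own docs), an agent loop built on the standard gym API might look like the following; the random action simply samples one of the 12972 allowed word indices, and the reward accumulates the per-guess points described above. The classic four-value step signature is assumed here:

import gym
import gym_wordle

env = gym.make('Wordle-v0')
obs = env.reset()
done = False
total_reward = 0
while not done:
    action = env.action_space.sample()  # index into the list of allowed guesses/answers
    obs, reward, done, info = env.step(action)  # assumes the classic gym step signature
    total_reward += reward
print(total_reward)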
|
althaia
|
AlthaiaAlthaia:from Latin althaea, from Greek althaia - marsh mallow (literally: healing plant), from Greek althein to healWhat is it?Althaia is a very simple fork ofmarshmallow, with patches to improve the performance when dumping large sets
of data. It is then also compiled viacythonfor some extra performance boost. Ideally, these patches will
one day find their way into the upstream marshmallow in some cleaner form, and this package will become obsolete.How does it work?During the serialization process, marshmallow repeats a lot of lookup operations for each object it's attempting to
serialize, even though these values never change during the single execution. The main patch in this repo is
basically reading those values once and creating a serializer function, which is much more performant on large
data sets.The entire thing is then compiled into C extension modules and released only as binary wheels.Check out the originalupstream PRfor some discussion, or myoriginal announcement.How fast is it?It really depends on your data and usage, but using thebenchmark.pytest from the upstream marshmallow repo,
Althaia seems to shave off some ~30% of execution time on average. These values are example results from a test run
of the upstream benchmark:

Upstream (usec/dump)    Althaia (usec/dump)    Improvement (%)
374.64                  258.61                 -30.97
19189.83                13275.84               -30.81
396368.67               275365.67              -30.52
198163.58               133714.07              -32.52

The table is the result of the following commands:
python performance/benchmark.py
python performance/benchmark.py --object-count 1000
python performance/benchmark.py --iterations=5 --repeat=5 --object-count 20000
python performance/benchmark.py --iterations=10 --repeat=10 --object-count 10000They are also available in this repo aspoetry run task upstream-performance. Note that you may get different
results while running the benchmarks (the numbers above were obtained with Althaia v3.20.1, generally speaking you
should be getting better results with newer versions, but sometimes not).Contribution into theserialization benchmarkis in the works (update:stalled), but
local run seems to be almost comparable toToasted Marshmallow, which is stuck on an old marshmallow 2.x branch.
This means that Althaia gives you (almost) the speed of Toasted Marshmallow, with all the goodies of the latest
marshmallow.

Library                  Many Objects (seconds)   One Object (seconds)   Relative
serpyco                  0.00767612               0.00389147             1
Custom                   0.00965786               0.00467634             1.23917
lima                     0.0116959                0.00583649             1.51564
Pickle                   0.0137603                0.0136833              2.37246
serpy                    0.0352728                0.0181508              4.61839
Strainer                 0.0516005                0.0260506              6.71281
Toasted Marshmallow      0.076792                 0.0412786              10.207
Althaia                  0.101892                 0.0484211              12.9943
Colander                 0.208514                 0.105719               27.1649
Avro                     0.303786                 0.151184               39.3314
Lollipop                 0.352331                 0.173141               45.4262
Marshmallow              0.531636                 0.276243               69.8398
Django REST Framework    0.531175                 0.387527               79.4203
kim                      0.669759                 0.336132               86.9576

Installation
pip install althaia
NOTE: This is still a work in progress and a wheel may not be available for your platform yet. PRs welcome!
Usage
There are two ways to use Althaia: as a standalone package, or as a drop-in replacement for marshmallow.
The latter method is the recommended one. Add the following code as early as possible in your app bootstrap:
import althaia
althaia.patch()
This will install a Python meta path importer which will mimic marshmallow for the rest of your project, without any
changes to the codebase, i.e.import marshmallowwill work as expected. If and when this package becomes obsolete,
there will be no need to change the rest of your source to revert to upstream marshmallow.Alternatively, you can use Althaia directly:fromalthaiaimportmarshmallow# or, e.g.fromalthaia.marshmallowimportSchemaThough I'm not sure why one would do that.Obviously, for allactualusage of marshmallow, you should always refer to the excellentmarshmallow docs.Bugs & ContributingIf there are bugs, please make sure they are not upstream marshmallow bugs before reporting them. Since the patches
applied are picking apart some of the marshmallow internals, any breakage should be immediately visible, and the
chances are that most bugswillbe upstream bugs.Contributingmanylinuxbuilds for the CI pipeline is most welcome.When opening pull requests, please target thedevelopbranch by default.If you have any other ideas on how to tweak the performance, feel free to contribute in any way you can!Versioning & ReleasesAlthaia will always follow the upstream version of marshmallow to reduce confusion. In other words, Althaia versionX.Y.Zwill use marshmallow versionX.Y.Z.Additionally, if it comes to some changes on Althaia side (repo structure, build process, bugfixes),
PEP440 will be followed and will be released either as alpha, beta, rc (X.Y.ZaN,X.Y.ZbN,X.Y.ZrcN) if there is
still no change in the upstream dependency, or post-releases (X.Y.ZpostN). Since bugfixing is discouraged for
post-releases, there may also be a hotfix release asX.Y.Z.N, whereNis the hotfix version.devreleases may appear on test PyPI (X.Y.Z.devN), but these are not relevant to the general public.There will obviously be some delay between marshmallow and Althaia releases, and it is inevitable that I will get
sloppy over time, so feel free to create a GitHub issue if you need an urgent update to latest marshmallow.DevelopingAlthaia is usingPoetrywith a custom build script, and sometaskipyscripts to facilitate things.
You can see them defined inpyproject.toml, or just typepoetry run task --list.Preparing a new version TL;DR:Editpyproject.tomland change the version of the packages for upstream marshmallow and Althaia itself.Runpoetry run task version-check.Runpoetry run task build.Runpoetry run task upstream-test.[Optional] Runpoetry run task upstream-performance.[Optional] Inspect the wheel content withpoetry run task inspect.Runpoetry run task publish-testto deploy to test PyPI.Known IssuesIf you have any marshmallow warnings ignored in yourpytest.ini, i.e. you havefilterwarningsset up
to ignore an error starting withmarshmallow.warnings, you will get an import error even if you're doingalthaia.patch()in yourconftest.py. As a workaround, you can change it to start withalthaia.marshmallow.warnings. This happens because pytest is trying to import marshmallow before Althaia
gets a chance to patch the importer.Since althaia 3.20.1, the support for python3.7 and 3.8 has been dropped, unlike marshmallow which has dropped
support only for python3.7. The reason for this are massive changes in typing annotation starting from python3.9,
which are no longer supported by the recently released Cython 3.0.0. Maintaining patches for 3.8 and 3.9+
would be difficult without being a significant time sink. Since python3.8 is already in security-updates-only state,
it's much easier to just drop it.
|
althea
|
No description available on PyPI.
|
althiqa
|
Example PackageThis is a simple example package. You can useGithub-flavored Markdownto write your content.
|
althra-core
|
althra-coreTable of ContentsInstallationLicenseInstallationpip install althra-coreLicensealthra-coreis distributed under the terms of theBSD-3-Clauselicense.
|
altify
|
Altify automates the task of inserting alternative text attributes for
image tags. Altify uses Microsoft Computer Vision API’s deep learning
algorithms to caption images in an HTML file and returns a new HTML file
in which alt attributes are filled out with their corresponding
captions.Notice: Altify will now ignore any image tag whose alt attribute has
content or is just an empty string. (In compliance with standard web
practices)DependenciesPython 2.7BeautifulSoupInstall and Usage (Latest Version)1) Get a Microsoft API Key for Freehttps://www.microsoft.com/cognitive-services/en-us/sign-up.2) Install via pipOpen up terminal and enter:pip install altify3) Usealtify path_to_your_html api_key4) Enjoy!A new HTML file called altify.html is created next to the HTML file you
selected.
How It was Built
Parses the HTML using BeautifulSoup.
Finds all the image tags.
Sends a request to Microsoft's API to caption.
Fills out the alt attributes for all the images.
Writes an edited HTML file next to the file you selected.
(A rough sketch of these steps is given at the end of this section.)
Captioned Images Samples
Donald Trump wearing a suit and tie
A piano keyboard
A squirrel eating
A close up of a cat looking at the camera
A woman wearing a red hat
A small boat in a lake surrounded by mountains
Disclaimer
Humans are currently better at captioning images than machines. Use
responsibly!
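The build steps described above can be sketched roughly as follows. This is a conceptual illustration only, not altify's actual code: the captioning call is left as a placeholder, and the helper name caption_image is made up (it stands in for the Microsoft Computer Vision API request). The output path is also simplified to the working directory.

from bs4 import BeautifulSoup

def add_alt_text(html_path, api_key):
    with open(html_path) as f:
        soup = BeautifulSoup(f.read(), "html.parser")       # parse the HTML
    for img in soup.find_all("img"):                        # find all image tags
        if img.get("alt"):                                  # skip tags whose alt already has content
            continue
        img["alt"] = caption_image(img.get("src"), api_key)  # hypothetical placeholder for the captioning API call
    with open("altify.html", "w") as f:                     # write the edited HTML file
        f.write(str(soup))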
|
altility
|
The name altility stands for 'actively learning utility', and was first developed
to help electric utilities in the process of placing new smart meters in space and
collecting their data at different times. This package, however, can now be used
and be further developed for any type of spatio-temporal prediction task.Installation:pip install altilityDocker:For using altility within an Ubuntu docker containerdocker run -it aryandoustarsam/altilityFor using altility with Jupyter notebook inside a docker containerdocker run -it -p 3333:1111 -v ~/path_to_data/data:/data aryandoustarsam/altility:jupyter
[inside running container]: jupyter notebook --ip 0.0.0.0 --port 1111 --no-browser --allow-root
[in local machine browser]: localhost:3333
[in local machine browser, type token shown in terminal]Usage guide:At the core of the altility package stands the classaltility.ADL_model. It
bundles properties and methods of the active deep learning (ADL) model that we
want to train. Below is a list of all parameters it takes when initialized, methods
it contains and results it can generate.Parametersname (='adl_model'):stringThe name of active deep learning (ADL) model.path_to_results (='results'):stringThe path to where resulting plots and values are supposed to be stored.Methodsinitialize(y, x_t=None, x_s=None, x_st=None, **kwargs):Initializes prediction model.collect(x_t_cand=None, x_s_cand=None, x_st_cand=None, **kwargs):Collect candidate data with embedding uncertainty active learning.train(y_picked, x_t_picked=None, x_s_picked=None, x_st_picked=None, **kwargs):Train model with queried labels of chosen candidate data points.predict(y_pred=None, x_t_cand=None, x_s_cand=None, x_st_cand=None, **kwargs):Predict labels for unqueried candidate data points. If you are testing model,
and have labels available, you can pass these and see the difference between
true and predicted labels of unqueried candidate data points.Methods:A complete lits of key word arguments or parameters that can be passed toADL_model.initialize()Parametersy (required):numpy arrayArray or matrix of labels.x_t (=None):numpy arrayArray or matrix of time-variant features.x_s (=None):numpy arrayArray or matrix of space-variant features.x_st (=None):numpy arrayArray or matrix of space-time-variant features.encoder_layers (=1):intChoose how many neural network layers you want to use for encoding features.network_layers (=1):intChoose how many layers you want to use after encoders. This is your network
depth.encoding_nodes_x_t (=100):intChoose the dimension of the encoding outcome of temporal features.encoding_nodes_x_s (=100):intChoose the dimension of the encoding outcome of spatial features.encoding_nodes_x_st (=100):intChoose the dimension of the encoding outcome of spatio-temporal features.encoding_nodes_joint (=100):intChoose the dimension of the encoding outcome of entire concatenated
feature vector.nodes_per_layer_dense (=1000):intChoose how many nodes per dense layer you want to use. This determines the
width of your network.filters_per_layer_cnn (=16):intChoose how many filtes per convolutional layer you want to use.states_per_layer_rnn (=200):intChoose how many states per recurrent layer you want to use.activation_encoding (='relu'):stringChoose which activation function to use on last encoding layer. Choose from
None, 'relu', 'tanh', 'selu', 'elu', 'exponential'.activation_dense (='relu'):stringChoose which activation function to use in each dense layer. Choose from
None, 'relu', 'tanh', 'selu', 'elu', 'exponential'.activation_cnn (='relu'):stringChoose which activation function to use in each convolutional layer. Choose
from None, 'relu', 'tanh', 'selu', 'elu', 'exponential'.activation_rnn (='tanh'):stringChoose which activation function to use in each recurrent layer. Choose
from None, 'relu', 'tanh', 'selu', 'elu', 'exponential'.layer_type_x_st (='CNN'):stringChoose which layers to use for X_st inputs. Choose one from 'ANN', 'CNN',
'LSTM'.initialization_method (='glorot_normal'):stringChoose how to initialize weights for Conv1D, Conv2D and Dense layers.
Choose from 'glorot_normal'.initialization_method_rnn (='orthogonal'):stringChoose how to initiliaze weights for LSTM layers. Choose from 'orthogonal'.regularizer (='l1_l2'):stringChoose how to regularize weights. Choose from None, 'l1', 'l2', 'l1_l2'.batch_normalization (=False):boolChoose whether or not to use batch normalization on each layer in your NN.train_split (=0.7):floatChoose on the splitting ratio between training and validation datasets.
Choose a value between 0 and 1.split_intervals (=0.05):floatDecide in which frequency to do train-validation split. 1 equals one datapoint
per bin, 0.5 equals two datapoints per bin.random_seed (=None):floatProvide a seed for reproducibility of your experiments. This is then used
when initializing weights of deep learning model, when choosing random
data sequences during training and anywhere, where stochastic processes play
a role.epochs (=30):intChoose for how many epochs you want to train your model.patience (=10):intChoose how many epochs to have patience on not increasing validation loss
during training before early stopping.batch_size (=16):intChoose how large your data batch size should be during training. Choose a
value to the power of 2.monitor (='val_loss'):stringChoose which value to monitor for early stopping. Choose from 'val_loss' and
'train_loss'.silent (=True):boolDecide whether or not to print out progress.plot (=False):boolDecide whether or not to visualize process.Resultsmodels:list of Tensorflow modelsList of computational graphs that compound our active deep learning embedding
network.A complete lits of key word arguments or parameters that can be passed toADL_model.collect()Parametersx_t_cand (=None):numpy arrayArray or matrix of time-variant features for candidate data points.x_s_cand (=None):numpy arrayArray or matrix of space-variant features for candidate data points.x_st_cand (=None):numpy arrayArray or matrix of space-time-variant features for candidate data points.budget (=0.5):floatChoose which share of candidate data pool we want to select. This is our
data budget for new querying new data points. Choose a value between 0 and
1.method (='embedding_uncertainty'):stringChoose which active learning method to use. Currently, only queries with
embedding uncertainty are supported.method_variant (='max_uncertainty'):stringChoose which variant of the active learning method to use. Choose from
'max_uncertainty', 'min_uncertainty', 'avg_uncertainty' and 'rnd_uncertainty'.method_distance (='laplacian_kernel'):stringChoose which distance metric to use for calculating embedding uncertainty
to cluster centers. Choose from 'rbf_kernel', 'laplacian_kernel' and
'cosine_similarity'.method_cluster (='KMeans'):stringChoose which clustering method to use for clustering embedded candidate data
points. Choose from 'rbf_kernel', 'laplacian_kernel' and 'cosine_similarity'.subsample (=None):intChoose None or a subsample size of uniformly chosen candidates.silent (=True):boolDecide whether or not to print out progress.plot (=False):boolDecide whether or not to visualize process.Resultsbatch_index_list:list of integersList of indices for most informative data points suggested to collect.inf_score_list:list of floatsList of information scores for most informative data points suggested to
collect.A complete lits of key word arguments or parameters that can be passed toADL_model.train()Parametersy_picked (required):numpy arrayArray or matrix of labels.x_t_picked (=None):numpy arrayArray or matrix of time-variant features.x_s_picked (=None):numpy arrayArray or matrix of space-variant features.x_st_picked (=None):numpy arrayArray or matrix of space-time-variant features.silent (=True):boolDecide whether or not to print out progress.plot (=False):boolDecide whether or not to visualize process.Resultsmodels:list of Tensorflow modelsList of computational graphs that compound our active deep learning embedding
network further trained on the passed dataset of picked candidate data.A complete lits of key word arguments or parameters that can be passed toADL_model.predict()Parametersy_pred (=None):numpy arrayArray or matrix of labels.x_t_pred (=None):numpy arrayArray or matrix of time-variant features.x_s_pred (=None):numpy arrayArray or matrix of space-variant features.x_st_pred (=None):numpy arrayArray or matrix of space-time-variant features.silent (=True):boolDecide whether or not to print out progress.plot (=False):boolDecide whether or not to visualize process.Resultspredictions:list of floatsList of predictions made for passed features.testing_loss:floatTesting loss score calculated from true vs. predicted labels. Only calculated
if true labels'y_pred'are provided.Datasets:The package can be tested on datasets that are either publicly available, or which
we make public for making spatio-temporal predictions. A first dataset consists of
electric load that we provide in our Github repository. To prepare the data
for usage with altility, use theprep_load_forecasting_data()function provided
inload_forecasting.pywith the following parameter and return values:Parameterspath_to_data (='data/public/electric load forecasting/'):stringThe path to where data is stored. This is 'data/public/electric load forecasting/'
in our original repository.dataset_name (='profiles_100'):stringChoose between 'profiles_100' and 'profiles_400'. These are two distinct
datasets containing load profiles from either 100 or 400 industrial, commercial,
and residential buildings of different sizes, shapes, consumption and occupancy
patterns in Switzerland.label_type (='feature_scaled'):stringDecide which labels to consider. Choose from 'random_scaled' and 'feature_scaled'.spatial_features (='histogram'):stringDecide how to treat aerial imagery. Choose one from 'average' and 'histogram'.meteo_types:listDecide which meteo data types to consider. Choose from 'air_density',
'cloud_cover', 'precipitation', 'radiation_surface', 'radiation_toa',
'snow_mass', 'snowfall', 'temperature' and 'wind_speed'. The default is a
list of all meteorological conditions.timestamp_data:listDecide which time stamp information to consider. Choose from: '15min',
'hour', 'day', 'month' and 'year'.time_encoding (='ORD'):stringDecide how to encode time stamp data. Choose one of 'ORD', 'ORD-1D' or 'OHE'histo_bins (=100):intSet the number of histogram bins that you want to use. Applied if parameter
spatial_features = 'histogram'.grey_scale (=False):boolDecide whether you want to consider underlying RGB images in grey-scale.profiles_per_year (=1):floatDecide how many building-year profiles you want to consider for each year.
Choose a share between 0 and 1. A value of 1 corresponds to about 100 profiles
for the profiles_100 and 400 profiles for the profiles_400 dataset.points_per_profile (=0.003):floatDecide how many data points per building-year profile you want to consider.
Choose a share between 0 and 1. A value of 0.01 corresponds to approximately
350 points per profile.history_window_meteo (=24):intChoose past time window for the meteo data. Resolution is hourly.prediction_window (=96):intDecide how many time steps to predict consumption into the future. Resolution
is 15 min. A values of 96 corresponds to 24h.test_split (=0.7):floatDecides how many buildings and how much of the time period to separate for
testing.normalization (=True):boolDecide whether or not to normalize features.standardization (=True):boolDecide whether to standardize features to zero mean and unit variance.silent (=True):boolDecide whether or not to print out progress of data processing.plot (=False):boolDecide whether or not to visualize examples of processed data.Returnsdatasets:dictA dictionary containing available and candidate data, that are stored with
the keys'avail_data'and'cand_data'. These are dictionaries
themselves, and store variables under keys'x_t','x_s','x_st'and'y'. These stand for only time-variant features'x_t', only space-variant features'x_s', space- and
time-variant features'x_st'and labels'y'.A second dataset consists of travel time data provided by the Uber movement project.Note:This data is licensed under Creative Commons, Attribution Non-Commercial
(https://creativecommons.org/licenses/by-nc/3.0/us/). This is different from the
MIT license we provide for our package here. To prepare the data for usage with
altility, use theprep_travel_forecasting_data()function provided intravel_forecasting.pywith the following parameters and return values.Parameterspath_to_data (='data/public/travel time forecasting/'):stringThe path to where data is stored. This is 'data/public/travel time forecasting/'
in our original repository.dataset_name (='Uber movement'):stringThis is currently the only dataset source we provide for travel time data.
An alternative source is the Google Maps API.city_name (='Amsterdam'):stringChoose a city for which you want to predict missing travel time data between
their single city zones. All available cities can be seen under the path
'data/public/travel time forecasting/Uber movement/'.test_split (=0.7):floatDecides how many data to separate for creating the candidate data pool.time_encoding (='ORD'):stringDecide how to encode time stamp data. Choose one of 'ORD' for ordinal encoding
or 'OHE' for one-hot encoding.normalization (=True):boolDecide whether or not to normalize features.standardization (=True):boolDecide whether to standardize features to zero mean and unit variance.silent (=True):boolDecide whether or not to print out progress of data processing.plot (=False):boolDecide whether or not to visualize examples of processed data.Returnsdatasets:dictA dictionary containing available and candidate data, that are stored with
the keys'avail_data'and'cand_data'. These are dictionaries
themselves, and store variables under keys'x_t','x_s'and'y'. These stand for only time-variant features'x_t', only
space-variant features'x_s'and labels'y'.Examples:An example for forecasting electric consumption of single buildings.import altility.adl_model as adl_model
import altility.datasets.load_forecasting as load_forecasting
import numpy as np
### Import and prepare load forecasting data
datasets = load_forecasting.prep_load_forecasting_data(
silent=False,
plot=True
)
### Get features and labels for available data
y = datasets['avail_data']['y']
x_t = datasets['avail_data']['x_t']
x_s = datasets['avail_data']['x_s']
x_st = datasets['avail_data']['x_st']
### Get features and labels for candidate data from spatio-temporal test set
y_cand = datasets['cand_data']['y']
x_t_cand = datasets['cand_data']['x_t']
x_s_cand = datasets['cand_data']['x_s']
x_st_cand = datasets['cand_data']['x_st']
### Create a class instance
ADL_model = adl_model.ADL_model('Electrific f_nn')
### Initialize model by creating and training it
ADL_model.initialize(
y,
x_t,
x_s,
x_st,
silent=True,
plot=True
)
### Collect candidate data
ADL_model.collect(
x_t_cand,
x_s_cand,
x_st_cand,
silent=True,
plot=False
)
### Create one array for picked and one for unpicked data to be predicted
picked_array = np.zeros([len(y_cand),], dtype=bool)
picked_array[ADL_model.batch_index_list] = True
pred_array = np.invert(picked_array)
### Extract selected data from candidate data pool for training
y_picked = y_cand[picked_array]
x_t_picked = x_t_cand[picked_array]
x_s_picked = x_s_cand[picked_array]
x_st_picked = x_st_cand[picked_array]
### Train model with picked data
ADL_model.train(
y_picked,
x_t_picked,
x_s_picked,
x_st_picked,
silent=False,
plot=True
)
### Extract not selected data from candidate data pool for testing/predicting
y_pred = y_cand[pred_array]
x_t_pred = x_t_cand[pred_array]
x_s_pred = x_s_cand[pred_array]
x_st_pred = x_st_cand[pred_array]
### Predict on remaining data
ADL_model.predict(
y_pred,
x_t_pred,
x_s_pred,
x_st_pred,
silent=False,
plot=True
)An example for forecasting travel times between single city zones.import altility.adl_model as adl_model
import altility.datasets.travel_forecasting as travel_forecasting
import numpy as np
### Import and prepare travel forecasting data
datasets = travel_forecasting.prep_travel_forecasting_data(
silent=False,
plot=True
)
### Get features and labels for available data
n_points=1000
y = datasets['avail_data']['y'][:n_points]
x_t = datasets['avail_data']['x_t'][:n_points]
x_s = datasets['avail_data']['x_s'][:n_points]
### Get features and labels for candidate data from spatio-temporal test set
y_cand = datasets['cand_data']['y'][:n_points]
x_t_cand = datasets['cand_data']['x_t'][:n_points]
x_s_cand = datasets['cand_data']['x_s'][:n_points]
### Create a class instance
ADL_model = adl_model.ADL_model('Spacetimetravelic f_nn')
### Initialize model by creating and training it
ADL_model.initialize(
y,
x_t=x_t,
x_s=x_s,
silent=True,
plot=True
)
### Show us if we created all models
for model_name, model in ADL_model.models.items():
print(model_name)
### Collect candidate data
ADL_model.collect(
x_t_cand,
x_s_cand,
silent=False,
plot=True
)
### Create one array for picked and one for unpicked data to be predicted
picked_array = np.zeros([len(y_cand),], dtype=bool)
picked_array[ADL_model.batch_index_list] = True
pred_array = np.invert(picked_array)
### Extract selected data from candidate data pool for training
y_picked = y_cand[picked_array]
x_t_picked = x_t_cand[picked_array]
x_s_picked = x_s_cand[picked_array]
### Train model with picked data
ADL_model.train(
y_picked,
x_t_picked,
x_s_picked,
silent=False,
plot=True
)
### Extract not selected data from candidate data pool for testing/predicting
y_pred = y_cand[pred_array]
x_t_pred = x_t_cand[pred_array]
x_s_pred = x_s_cand[pred_array]
### Predict on remaining data
ADL_model.predict(
y_pred,
x_t_pred,
x_s_pred,
silent=False,
plot=True
)
|
altimate-dataminion
|
Internal package. Use this at your own risk, support not guaranteed
|
altimate-datapilot
|
Assistant for Data TeamsFree software: MIT licenseInstallationpip install altimate-datapilotYou can also install the in-development version with:pip install https://github.com/AltimateAI/datapilot/archive/main.zipDocumentationhttps://datapilot.readthedocs.io/DevelopmentTo run all the tests run:toxNote, to combine the coverage data from all the tox environments run:Windowsset PYTEST_ADDOPTS=--cov-append
toxOtherPYTEST_ADDOPTS=--cov-append toxChangelog0.0.0 (2024-01-25)First release on PyPI.
|
altimate-django
|
No description available on PyPI.
|
altimeter
|
AltimeterAltimeter is a system to graph and scan AWS resources across multiple
AWS Organizations and Accounts.Altimeter generates RDF files which can be loaded into a triplestore
such as AWS Neptune for querying.QuickstartInstallationpip install altimeterConfigurationAltimeter's behavior is driven by a toml configuration file. A few sample
configuration files are included in theconf/directory:current_single_account.toml- scans the current account - this is the account
to which the environment's currently configured AWS CLI credentials belong.current_master_multi_account.toml- scans the current account and attempts to
scan all organizational subaccounts - this configuration should be used if you
are scanning all accounts in an organization. To do this the currently
configured AWS CLI credentials should be pointing to an AWS Organizations
master account.To scan a subset of regions, set the region list parameterregionsin thescansection to a list of region names.Required IAM permissionsThe following permissions are required for a scan of all supported resource types:acm:DescribeCertificate
acm:ListCertificates
cloudtrail:DescribeTrails
dynamodb:DescribeContinuousBackups
dynamodb:DescribeTable
dynamodb:ListTables
ec2:DescribeFlowLogs
ec2:DescribeImages
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeNetworkInterfaces
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSnapshots
ec2:DescribeSubnets
ec2:DescribeTransitGateways
ec2:DescribeTransitGatewayAttachments
ec2:DescribeVolumes
ec2:DescribeVpcEndpoints
ec2:DescribeVpcEndpointServiceConfigurations
ec2:DescribeVpcPeeringConnections
ec2:DescribeTransitGatewayVpcAttachments
ec2:DescribeVpcs
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeTargetGroups
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
eks:ListClusters
events:ListRules
events:ListTargetsByRule
events:DescribeEventBus
guardduty:GetDetector
guardduty:GetMasterAccount
guardduty:ListDetectors
guardduty:ListMembers
iam:GetAccessKeyLastUsed
iam:GetAccountPasswordPolicy
iam:GetGroup
iam:GetGroupPolicy
iam:GetLoginProfile
iam:GetOpenIDConnectProvider
iam:GetPolicyVersion
iam:GetRolePolicy
iam:GetSAMLProvider
iam:GetUserPolicy
iam:ListAccessKeys
iam:ListAttachedGroupPolicies
iam:ListAttachedRolePolicies
iam:ListAttachedUserPolicies
iam:ListGroupPolicies
iam:ListGroups
iam:ListInstanceProfiles
iam:ListMFADevices
iam:ListOpenIDConnectProviders
iam:ListPolicies
iam:ListPolicies
iam:ListRolePolicies
iam:ListRoles
iam:ListSAMLProviders
iam:ListUserPolicies
iam:ListUsers
kms:ListKeys
lambda:ListFunctions
rds:DescribeDBInstances
rds:DescribeDBInstanceAutomatedBackups
rds:ListTagsForResource
rds:DescribeDBSnapshots
route53:ListHostedZones
route53:ListResourceRecordSets
s3:ListBuckets
s3:GetBucketLocation
s3:GetBucketEncryption
s3:GetBucketTagging
sts:GetCallerIdentity
support:DescribeSeverityLevelsAdditionally if you are doing multi-account scanning via an MPA master account you
will also need:organizations:DescribeOrganization
organizations:ListAccounts
organizations:ListAccountsForParent
organizations:ListOrganizationalUnitsForParent
organizations:ListRootsGenerating the GraphAssuming you have configured AWS CLI credentials
(seehttps://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html),
run:
altimeter <path-to-config>
This will scan all resources in regions specified in the config file. The full path to the generated RDF file will be printed, for example:
Created /tmp/altimeter/20191018/1571425383/graph.rdf
This RDF file can then be loaded into a triplestore such as Neptune or
Blazegraph for querying.For more user documentation seehttps://tableau.github.io/altimeter/
|
altin
|
UNKNOWN
|
altinity-datasets
|
Altinity Datasets for ClickHouseWelcome!altinity-datasetsloads test datasets for ClickHouse. It is
inspired by Python libraries that automatically load standard datasets for quick testing.
Getting Started
Altinity-datasets requires Python 3.5 or greater. The clickhouse-client executable must be in the path to load data.
Before starting you must install the altinity-datasets package using
pip3. Following example shows install into a Python virtual environment.
First command is only required if you don't have clickhouse-client already
installed on the host.sudo apt install clickhouse-client
sudo pip3 install altinity-datasetsMany users will prefer to install within a Python3 virtual environment,
for example:python3 -m venv my-env
. my-env/bin/activate
pip3 install altinity-datasetsYou can also install a current version directly from Github:pip3 install git+https://github.com/altinity/altinity-datasets.gitTo remove altinity-datasets run the following command:pip3 uninstall altinity-datasetsUsing datasetsThead-clicommand manages datasets. You can see available commands by
typingad-cli -h/--help. All subcommands also accept -h/--help options.Listing reposLet's start by listing repos, which are locations that contain datasets.ad-cli repo listThis will return a list of repos that have datasets. For the time being there
is just a built-in repo that is part of the altinity-datasets package.Finding datasetsNext let's see the available datasets.ad-cli dataset searchThis gives you a list of datasets with detailed descriptions. You can
restrict the search to a single dataset by typing the name, for examplead-cli search wine. You can also search other repos using the repo
file system location, e.g.,ad-cli search wine --repo-path=$HOME/myrepo.Loading datasetsNow, let's load a dataset. Here's a command to load the iris dataset
to a ClickHouse server running on localhost.ad-cli dataset load irisHere is a more complex example. It loads the iris dataset to theiris_newdatabase on a remote server. Also, we parallize the upload with 10 threads.ad-cli load iris --database=iris_new --host=my.remote.host.com --parallel=10The command shown above is typical of the invocation when loading on a
server that has a large number of cores and fast storage.Note that it's common to reload datasets expecially during development.
You can do this usingad-cli load --clean. IMPORTANT: This drops the
database to get rid of dataset tables. If you have other tables in the
same database they will be dropped as well.Dumping datasetsYou can make a dataset from any existing table or tables in ClickHouse
that reside in a single database. Here's a simple example that shows
how to dump the weather dataset to create a new dataset. (The weather
dataset is a built-in that loads by default to the weather database.)ad-cli dataset dump weatherThere are additional options to control dataset dumps. For example,
we can rename the dateset, restrict the dump to tables that start with
'central', compress data, and overwrite any existing data in the output
directory.ad-cli dataset dump new_weather -d weather --tables='^central' --compress \
--overwriteExtra Connection OptionsThe dataset load and dump commands by default connect to ClickHouse
running on localhost with default user and empty password. The following
example options connect using encrypted communications to a specific
server with explicit user name and password. The last option suppresses
certificate verification.ad-cli dataset load iris -H 127.0.0.1 -P 9440 \
-u special -p secret --secure --no-verifyNote: To use --no-verify you must also ensure that clickhouse-client is
configured to accept invalid certificates. Validate by logging in using
clickhouse-client with the --secure option. Check and correct settings
in /etc/clickhouse-client/config.xml if you have problems.Repo and Dataset FormatRepos are directories on the file system. The exact location of the repo is
known as the repo path. Data sets under the repo are child directories that
in turn have subdirectories for DDL commands and data. The following listing
shows part of the organization of the built-ins repo.built-ins/
iris/
data/
iris/
iris.csv
ddl/
iris.sql
manifest.yaml
wine/
data/
wine/
wine.csv
ddl/
wine.sql
manifest.yamlTo create your own dataset you can dump existing tables usingad-cli dataset dumpor copy the examples in built-ins. The format is is simple.The manifest.yaml file describes the dataset. If you put in extra fields
they will be ignored.The DDL directory contains SQL scripts to run. By convention these should
be named for the objects (i.e., tables) that they create.The data directory contains CSV data. There is a separate subdirectory
for each table to be loaded. Its name must match the table name exactly.CSV files can be uncompressed .csv or gzipped .csv.gz. No other formats
are supported and the file types must be correctly specified.You can place new repos in any location you please. To load from your
own repo run a load command and use the --repo-path option to point to the
repo location. Here's an example:ad-cli dataset load mydataset --repo-path=$HOME/my-repoDevelopmentTo work on altinity-datasets clone from Github and install.git clone https://github.com/altinity/altinity-datasets.git
cd altinity-datasets
python3 setup.py developAfter making changes you should run tests.cd tests
python3 -m unittest --verboseThe following commands build an installable and push to pypi.org.
PyPI account credentials must be set in TWINE_USERNAME and TWINE_PASSWORD.python3 setup.py sdist
twine upload --repository-url https://upload.pypi.org/legacy/ dist/*Code conventions are enforced using yapf and flake8. Run the
dev-format-code.sh script to check formatting.Run tests as follows with virtual environment set. You will need a
ClickHouse server with a null password on the default user.cd tests
python3 -m unittest -vErrorsOut-of-date pip3 causes installation failureIf pip3 installs with the messageerror: invalid command 'bdist_wheel'you
may need to upgrade pip. Runpip3 install --upgrade pipto correct the
problem.Materialized views cannot be dumpedad-cli will fail with an error if you try to dump a database that has
materialized views. The workaround is to omit them from the dump operation
using a table regex as shown in the following example:ad-cli dataset dump nyc_taxi_rides --repo-path=. --compress --parallel=6 \
--tables='^(tripdata|taxi_zones|central_park_weather_observations)$'--no-verify option fails on self-signed certsWhen using ad-cli --secure together with --no-verify options you need
to also configure clickhouse-client to skip certificate verification.
This only applies when the certificate is self-signed. You must
change /etc/clickhouse-client/config.xml as follows to skip certificate
validation:<config>
<openSSL>
<client> <!-- Used for connection to server's secure tcp port -->
...
<invalidCertificateHandler>
<name>AcceptCertificateHandler</name>
</invalidCertificateHandler>
</client>
</openSSL>
...
</config>LimitationsThe most important are:Error handling is spotty. If clickhouse-client is not in the path
things may fail mysteriously.Datasets have to be on the local file system. In the future we will
use cloud object storage such as S3.Please file issues athttps://github.com/Altinity/altinity-datasets/issues.
Pull requests to fix problems are welcome.
|
altinkaynak
|
Altinkaynak
This package is used for fetching altinkaynak.com rates based on TRY currency.
To use it, simply do the following:
from altinkaynak import Altinkaynak
altin = Altinkaynak()
# Get all rates based on TRY currency
altin.get_try_currencies()
# Or how many units of EUR equal 1 unit of USD
altin.get_rate("USD","EUR")
#Or other provided rates
altin.get_rate("AFG","TRY")ChangelogWhen you callget_try_currencies()andget_rate()functions, they return currency date as string (eg; 2022-00-00T00:00:00.000Z), not timezone aware datetime object. It is because of the python version compatibility issues.
|
altinn3-common-util
|
Failed to fetch description. HTTP Status Code: 404
|
altinn3-error-handler
|
Error HandlerWhat is it?This module implements a simple, yet flexible framework for handling errors and applying
specific strategies for various cases. Basically, a function is decorated with the error-handler
wrapper, with or without a specific strategy, and exceptions (and for given strategies other error
situations) can be caught, processed and handled according to the needs of the user.Basic usageThe simplest use will be along these lines:@error_handler
def error_prone_func(s: str):
...
< code that might fail >
...
return resultThis will add a basic error handler to the function, using a default error strategy (just basic logging
and re-raising the exception). In this case, the handler will catch any exception from the function,
handle it according to the default strategy, and finally raise the exception after processing. Functionally,
the benefits might be small, but it gives cleaner code with less nesting.
To suppress the exception instead (not re-raised), this can be done by simply setting the re_raise property of the
handler:@error_handler(re_raise=False)
def function(x: int):
< code >
return somethingThis will still handle the exception using the supplied strategy, but it will not be raised further allowing normal
execution to proceed (typically if the handled function is not really required, or if an error is not relevant to
the process)A slightly more involved example using retry:@error_handler(strategy=RetryStrategy())
def my_function(x: int, y: int) -> int:
< code >
return somethingThis will use the RetryStrategy, providing basic retry functionality for the function. Note that the function as a
whole will be retried, with the same arguments. If all retries fail, the strategy will return a result indicating
failure, and the exception will be raised back to the caller.Strategy configurationThe strategies may also be configured more specifically to each case, like this:@error_handler(strategy=RetryStrategy(retries=5, backoff=3, backoff_exponent=2, on_except_hook=do_on_exception_func))
def my_function(x: int, y: int) -> int:
< code >
return somethingThis will override the default values of the strategy, using the user supplied values instead and in this case adding
a function to be called on each exception when attempting the function/retry. To reduce clutter and simplify reuse
these configurations can be instantiated outside the decorator and used where needed:# Set up different strategies for use in code
# A simple retry strategy
simple_retry = RetryStrategy(retries=5)
# Retry with exponential backoff
exp_backoff_retry = RetryStrategy(retries=10, backoff_exponent=2)
# More complex handling of retries
complex_retry = RetryStrategy(
retries=4,
backoff=5,
backoff_exponent=2,
handled_exceptions=(HTTPError, IOError, ConnectionError),
on_except_hook=my_custom_error_logger,
on_except_hook_args={"foo": "bar", "reference": transaction_id}
)
# -- End of strategies
@error_handler(strategy=simple_retry)
def func_a(n: int):
<code>
return something
@error_handler(strategy=exp_backoff_retry)
def func_b(n: int):
<code>
return something
@error_handler(strategy=simple_retry)
def func_c(n: int):
<code>
return something
@error_handler(strategy=complex_retry)
def func_d(n: int):
<code>
return somethingAdvanced usageIt is also possible to use strategies for handling other types of errors than exceptions. A typical
case here would be to handle the response of an http-request based on status code. This is made trickier
by the fact that you probably need to handle the result of the actual request, rather than the function
containing the calling code.As an example, we have the following function:def call_service():
<prepare request>
response = requests.post(url, headers=headers, data=request_data)
<process response>
return resultIn this case, decorating the function with an error_handler wouldn't catch an error from the requests.post()
call, since the request would return a status code on the response object indicating the result, whether it is
a success or otherwise (unless the call itself generated an exception i.e. unable to serialize the data element)
so the previous method of simply decorating like this wouldn't give us what we wanted:@error_handler(strategy=HttpErrorStrategy())
def call_service():
...It would catch exceptions from trying to process a missing body after an internal server error, but that might not be
what we want. Let's say you know the service you are trying to call has availability issues, but getting the call
across is critical to your application you might need to retry the call several times before allowing the program to
try to process the result of the request. Simply setting the retry_on_error=True for the HttpErrorStrategy doesn't
help, since we need to verify (handle) the response from the requests.post() directly. This can be achieved through
calling the decorator directly with the wrapped function call:
response = error_handler(strategy=HttpErrorStrategy(retry_on_error=True))(requests.post)(url, headers=headers, data=request_data)
That looks complicated, but the structure is actually quite straightforward:
decorator(<config parameters>)(<function to wrap>)(<function arguments>)
This syntax allows for using a handler directly on a single line of code if required. Since decorators can be nested,
this allows for more complex handling, like in this imaginary example:# Set up default handling with additional custom logging on exception
@error_handler(strategy=DefaultHandlingStrategy(on_except_hook=my_custom_logger))
def example():
<code>
# Add retry for critical code
@error_handler(strategy=RetryStrategy())
def inner_function():
<code>
result = call_external_service()
<code>
# suppress exceptions from non-critical component
@error_handler(re_raise=False)(call_non_required_func)("data":result.content)
return result
<code>
temp_result = inner_function()
<process temp_result>
return temp_resultThis sets up a function with default error handling and custom logging, while enabling retry for an inner function (not
the whole function). There is also a call to an unreliable dependency that can't be allowed to let the execution fail
(maybe has a tendency to cause timeouts due to long processing, but still gets the job done), so that call is wrapped
in an explicit local handler suppressing any exceptions raised.
Writing your own strategies
It is also pretty straightforward to write your own strategies for error handling to plug in:
Define a class that derives from ErrorHandlingStrategy:
class MyErrorStrategy(ErrorHandlingStrategy)
Override the __init__(self) function:
def __init__(
self,
*,
handled_exceptions: Tuple[Type[Exception], ...] = None,
on_except_hook: Callable[[Exception], None] = None,
on_except_hook_args: Dict[str, Any] = None,
# Arguments for custom handler goes here
):
super().__init__(handled_exceptions=handled_exceptions, on_except_hook=on_except_hook, on_except_hook_args=on_except_hook_args)
# Custom init goes here
Implement the act function in your strategy. This is called by the error_handler to invoke the wrapped function:
def act(self, func: Callable, *args, **kwargs) -> StrategyInvokeResult:
# implementation goes here.
Add whatever logic/functionality you need to your strategy.
Once done, it can be injected into a handler easily:
@error_handler(strategy=MyStrategy(<init params>))
def func_to_be_handled():
# do stuff
|
altinn3-test-lib
|
Altinn3-test-lib
A Python library with shared test functions and mock objects for use in the altinn3 projects.
|
altitude
|
UNKNOWN
|
altitudo
|
# Altitudo

Python package to find the elevation / altitude at a given geo coordinate.

## Usage

### Via Python

```python
>>> from altitudo import altitudo
>>> altitudo(lat=39.90974, lon=-106.17188)  # Returns meters by default
... 2624.0
>>> # Request more than a single coordinate
>>> altitudo(lat=[39.90974, 62.52417], lon=[-106.17188, 10.02487])
... [{"lat": 39.90974, "lon": -106.17188, "elevation": 2624.0}, {"lat": 62.52417, "lon": 10.02487, "elevation": 1111.0}]
```

### Via CLI

```
altitudo -- 39.90974 -106.17188
2624.0
```

Package to get elevation / altitude from a given geo coordinate.

* Free software: The Unlicense
* Documentation: https://altitudo.readthedocs.io

History

0.1.0 (2018-10-07): First release on PyPI.
|
altius-py
|
No description available on PyPI.
|
altk
|
No description available on PyPI.
|
alt-lk-nuuuwan
|
No description available on PyPI.
|
altmetric
|
Altmetric is a Python wrapper for Altmetric API v1 <http://api.altmetric.com/>.
Installation
pip install altmetric
Usage
Fetching details by identifiers
from altmetric import Altmetric
a = Altmetric()
a.id("108989")
a.doi("10.1126/science.1173146")
a.ads("2009sci...325..578w")
a.arxiv("1212.4819")
a.pmid("19644114")
a = Altmetric("you_api_key")
a.fetch("doi","10.1126/science.1173146")Querying the database::from altmetric import Altmetric
a = Altmetric()
a.citations('1d')
a.citations('1d', page=2)
Catching Errors
from altmetric import Altmetric
a = Altmetric()
try:
rsp = a.doi("10.1234/foo")
if rsp is None:
print "DOI not found"
else:
print rsp['altmetric_id']
except AltmetricHTTPException, e:
if e.status_code == 403:
print "You aren't authorized for this call"
elif e.status_code == 420:
print "You are being rate limited"
elif e.status_code == 502:
print "The API version you are using is currently down for maintenance."
elif e.status_code == 404:
print "Invalid API function"
print e.msgAPI ReferencePlease seehttp://api.altmetric.com/for detailed reference on response object
and parameters.
|
altmo
|
AltMoAlternativeMobilities is a CLI tool which helps map alternative mobilities with Open Street Map data.
Specifically, this tool helps you map walkability and bikeability averages as a surface for an area of intent
(usually a city or a region).
It relies on the following external services to work:
A PostgreSQL database with the extensions postgis, hstore and tablefunc enabled
An Open Street Map database imported into this database
A running instance of Valhalla (used for calculating network routing)
A GeoJSON file of the boundary you would like to gather data for (should fit inside OSM data)
For a full description of how to use this tool, you are encouraged to visit https://altmo.readthedocs.io/en/latest/index.html.
|
alt-model-checkpoint
|
alt-model-checkpoint
An adapter callback for Keras ModelCheckpoint that allows checkpointing
an alternate model (often a submodel of a multi-GPU model).
Installation
pip install alt-model-checkpoint
Usage
You must provide your own Keras or Tensorflow installation. See Pipfile for preferred versions.
If using the Keras bundled in Tensorflow:
from alt_model_checkpoint.tensorflow import AltModelCheckpoint
If using Keras standalone:
from alt_model_checkpoint.keras import AltModelCheckpoint
Common usage involving multi-GPU models built with Keras multi_gpu_model():
from alt_model_checkpoint.keras import AltModelCheckpoint
from keras.models import Model
from keras.utils import multi_gpu_model

def compile_model(m):
    """Implement with your model compile logic; both base and GPU models should be compiled identically"""
    m.compile(...)

base_model = Model(...)
gpu_model = multi_gpu_model(base_model)
compile_model(base_model)
compile_model(gpu_model)
gpu_model.fit(..., callbacks=[AltModelCheckpoint('save/path/for/model.hdf5', base_model)])
Constructor args
filepath
Model save file path; see underlying ModelCheckpoint docs for details.
alternate_model
Keras model to save instead of the default. This is used especially when training multi-gpu models built with Keras
necessary if you want to be able to resume training on a saved alternate model. If FALSE, the alternate model's
optimizer will be saved as-is.*args, **kwargsThese are passed as-is to the underlyingModelCheckpointconstructor.Dev environment setupInstallpipenv.Runmake test(runsmake test-buildautomatically to ensure deps)
|
alto
|
This Django app allows you to browse the urlpatterns, views, and templates for
your project, see their source code, and open that code in your favorite editor[*].Planned features include the ability to browse and search for models, template
tags, filters, and celery tasks.At some point, Alto may become aLight Tableplugin.Alto is ALPHA software. It may or may not work with your project. Bug reports
without patches are unlikely to be fixed for now, so unless you’re ready to work
on it, you should hold off for a few releases.RequirementsPython 2.7Django 1.4Other versions may work, but have not been tested.Installationpip install altoSetupAdd'alto'to yourINSTALLED_APPSAdd'alto.middleware.AltoMiddleware'to yourMIDDLEWARE_CLASSESVisithttp://127.0.0.1:8000/_alto/ConfigurationSetALTO_URL_SCHEMEin your Django settings. The default is'mvim'for
opening files in MacVim.'txmt'will work for TextMate, and if you installSublHandler,'subl'will open Sublime Text 2.ThanksAlto is inspired byBret Victor’s talk, “Inventing on Principle” and byLight Table.[*]As long as your favorite editor is MacVim, TextMate or Sublime Text 2. In theory, any editor that can be made to open a file from a custom url scheme will work.
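For illustration, the configuration described above amounts to a single assignment in your Django settings module (a minimal sketch; only the setting name and the values listed in the Configuration section come from this entry, the rest of the settings file is assumed):
# settings.py
# Open files in TextMate instead of the default MacVim handler.
ALTO_URL_SCHEME = 'txmt'  # or 'mvim' (default), or 'subl' with SublHandler installed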
|
alto2txt
|
Extract plain text from newspapers (alto2txt 0.3.1)Converts XML (in METS 1.8/ALTO 1.4, METS 1.3/ALTO 1.4, BLN or UKP format) publications to plaintext articles and generates minimal metadata.Full documentation and demo instructions.InstallationInstallation using an Anaconda environmentWe recommend installation via Anaconda:Refer to theAnaconda website and follow the instructions.Create a new environment for alto2txtcondacreate-npy37altopython=3.7Activate the environment:condaactivatepy37altoInstall alto2txt itselfInstallalto2txtusing pip:pipinstallalto2txt(For now it is still necessary to install using pip. In due course we plan to make alto2txt available through a conda channel, meaning that it can be installed directly using conda commands.)Installation using pip, outside an Anaconda environmentNote, the use of `alto2txt`` outside a conda environment has not been as extensively tested as within a conda environment. Whilst we believe that this should work, please use with caution.pipinstallalto2txtInstallation of a test releaseIf you need (or want) to install a test release ofalto2txtyou will likely be advised of the specific version number to install. This examaple command will installv0.3.1-alpha.20:pipinstall-ihttps://test.pypi.org/simple/alto2txt==0.3.1a20UsageDownsampling can be used to convert only every Nth issue of each newspaper. One text file is output per article, each complemented by one XML metadata file.extract_publications_text.py [-h] [-d [DOWNSAMPLE]]
[-p [PROCESS_TYPE]]
[-l [LOG_FILE]]
[-n [NUM_CORES]]
xml_in_dir txt_out_dir
Converts XML publications to plaintext articles
positional arguments:
xml_in_dir Input directory with XML publications
txt_out_dir Output directory for plaintext articles
optional arguments:
-h, --help show this help message and exit
-d [DOWNSAMPLE], --downsample [DOWNSAMPLE]
Downsample. Default 1
-l [LOG_FILE], --log-file [LOG_FILE]
Log file. Default out.log
-p [PROCESS_TYPE], --process-type [PROCESS_TYPE]
Process type.
One of: single,serial,multi,spark
Default: multi
-n [NUM_CORES], --num-cores [NUM_CORES]
Number of cores (Spark only). Default 1")xml_in_diris expected to hold XML for multiple publications, in the following structure:xml_in_dir
|-- publication
| |-- year
| | |-- issue
| | | |-- xml_content
| |-- year
|-- publicationHowever, if-p|--process-type singleis provided thenxml_in_diris expected to hold XML for a single publication, in the following structure:xml_in_dir
|-- year
| |-- issue
| | |-- xml_content
|-- yeartxt_out_diris created with an analogous structure toxml_in_dir.PROCESS_TYPEcan be one of:single: Process single publication.serial: Process publications serially.multi: Process publications using multiprocessing (default).spark: Process publications using Spark.DOWNSAMPLEmust be a positive integer, default 1.The following XSLT files need to be in anextract_text.xsltsmodule:extract_text_mets18.xslt: METS 1.8 XSL file.extract_text_mets13.xslt: METS 1.3 XSL file.extract_text_bln.xslt: BLN XSL file.extract_text_ukp.xslt: UKP XSL file.Process publicationsAssume~/BNAexists and matches the structure above.Extract text from every publication:./extract_publications_text.py~/BNAtxtExtract text from every 100th issue of every publication:./extract_publications_text.py~/BNAtxt-d100Process a single publicationExtract text from every issue of a single publication:./extract_publications_text.py-psingle~/BNA/0000151txtExtract text from every 100th issue of a single publication:./extract_publications_text.py-psingle~/BNA/0000151txt-d100Configure loggingBy default, logs are put inout.log.To specify an alternative location for logs, use the-lflag e.g../extract_publications_text.py-lmylog.txt~/BNAtxt-d1002>err.logProcess publications via SparkInformation on running on spark.Future workFor a complete list of future plans see theGitHub issues list. Some highlights include:Export more metadata from alto, probably by parsingmetsfirst.Check and ensure that articles that span multiple pages are pulled into a single article file.Smarter handling of articles spanning multiple pages.CopyrightSoftwareCopyright 2022 The Alan Turing Institute, British Library Board, Queen Mary University of London, University of Exeter, University of East Anglia and University of Cambridge.SeeLICENSEfor more details.Example DatasetsThis repo contains example datasets, which have been taken from theBritish Library Research Repository(DOI link).This data is "CC0 1.0 Universal Public Domain" -No Copyright - Other Known Legal RestrictionsThere is a subset of the example data in thedemo-filesdirectory.There are adapted copies of the data in thetests/tests/test_filesdirectory. These have been edited to test errors and edge cases.Funding and AcknowledgementsThis software has been developed as part of theLiving with Machinesproject.This project, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC), with The Alan Turing Institute, the British Library and the Universities of Cambridge, East Anglia, Exeter, and Queen Mary University of London.
|
alto-anomaly-detection
|
No description available on PyPI.
|
altob
|
AltobAbundance learning for ToBRFV variants. The primary purpose of the tool is:Estimating abundace of clades of ToBRFV from sequencing dataYou can read more about how Altob works in the Alcov preprint as it was originally developed for predicting abundances of variants of concern of SARS-CoV-2 in wastewater sequencing data,Alcov: Estimating Variant of Concern Abundance from SARS-CoV-2 Wastewater Sequencing DataThe tool can also be used for:Converting between nucleotide and amino acid mutations for ToBRFVDetermining the frequency of mutations of interest in BAM filesPlotting the depth for each tiled amplicon for ToBRFV, designed based on the ARTIC protocol (https://github.com/artic-network/artic-ncov2019/tree/master/primer\_schemes/nCoV-2019/V3)Comparing amplicon GC content with its read depth (as a measure of degredation)The tool is under active development. If you have questions or issues, please open an issue on GitHub or email me (email in setup.py).InstallingThe latest release can be downloaded from PyPIpip install altobThis will install the Python library and the CLI.To install the development version, clone the repository and runpip install .Usage examplePreprocessingAltob expects a BAM file of reads aligned to the ToBRFV reference genome. For an example of how to process Illumina reads, check theprepdirectory for a script called "prep.py".Estimating relative abundance of lineages/clades:altob find_lineages reads.bamFinding lineages in BAM files for multiple samples:altob find_lineages samples.txtWheresamples.txtlooks like:path/to/reads1.bam Sample 1 name
path/to/reads2.bam Sample 2 name
...Optionally specify which clades to look foraltob find_lineages reads.bam lineages.txtWherelineages.txtlooks like:clade_1
clade_3
...Optionally change minimum read depth (default 40)altob find_lineages --min_depth=5 reads.bamOptionally show how predicted mutation rates agree with observed mutation ratesaltob find_lineages --show_stacked=True reads.bamUse mutations which are found in multiple VOCs (can help for low coverage samples). Note: this is now the defaut behaviour.altob find_lineages --unique=False reads.bamPlotting changes in clade distributions over time for multiple sitesaltob find_lineages --ts samples.txtWheresamples.txtlooks like:path/to/reads1.bam SITE1_2021-09-10
path/to/reads2.bam SITE1_2021-09-12
...
path/to/reads3.bam SITE2_2021-09-10
path/to/reads4.bam SITE2_2021-09-12
...Converting mutation names:(Note: These examples are from SARS-CoV-2 genomic sequences)$ altob nt A23063T
A23063T causes S:N501Y
$ altob aa S:E484K
G23012A causes S:E484KFinding mutations in BAM file:altob find_mutants reads.bamFinding mutations in BAM files for multiple samples:altob find_mutants samples.txtWheresamples.txtlooks like:path/to/reads1.bam Sample 1 name
path/to/reads2.bam Sample 2 name
...Runningfind_mutantswill print the number of reads with and without each mutation in each sample and then generate a heatmap showing the frequencies for all samples.You can also specify a custom mutations file:altob find_mutants samples.txt mutations.txtWheremutations.txtlooks like:
(Note: these examples are from SARS-CoV-2 genomic sequences)S:N501Y
G23012A
...Getting the read depth for each ampliconaltob amplicon_coverage reads.bamoraltob amplicon_coverage samples.txtPlotting amplicon GC content against amplicon depthaltob gc_depth reads.bamoraltob gc_depth samples.txt
|
altocumulus
|
Command line utilities for running workflows onTerraorCromwellincluding:Run a Terra method, and bulk add/delete methods on Terra.Submit WDL workflow jobs to a sever running Cromwell, as well as check jobs’ status, abort jobs, and get logs.Replace local file paths with remote Cloud (Google Cloud or Amazon AWS) bucket URIs, and automatically upload referenced files to Cloud buckets.Parse monitoring log files to determine optimal instance type and disk space.Important tools used by Altocumulus:FireCloud SwaggerDockstore SwaggerFireCloud Service Selector(FISS). In particular,fiss/firecloud/api.py.
|
alto-dev
|
Welcome to Alto!Alto is the easiest way to run any code on the cloud! Alto is designed to be used with Prism projects, but it can be used to any arbitrary code (e.g., functions, scripts, Jupyter notebooks, or entire projects)!Getting StartedAlto can be installed viapip. Alto requires Python >= 3.8.pip install --upgrade pip
pip install alto-devThen, initialize a configuration file with thealto initCLI command. This command will automatically prompt you for all the information needed to configure your cloud environment.$ alto init
What type of cloud environment do you want to use [ec2]? ec2
What would you like the name of your configuration file to be (default: alto.yml)?
<HH:MM:SS> | INFO | Building configuration file...
<HH:MM:SS> | INFO | Done!To run your project on your cloud environment, use thealto buildcommand. Under the hood, this command:Builds the cloud environment according to instructions contained in the configuration file, andExecutes your project on the cloud.$ alto build -f alto.yml
<HH:MM:SS> | INFO | my_cloud_agent[build] | Created key pair my_cloud_agent
<HH:MM:SS> | INFO | my_cloud_agent[build] | Created security group with ID sg-XXXXXXXXXXXXXXXXX in VPC vpc-XXXXXXXXXXXXXXXXX
<HH:MM:SS> | INFO | my_cloud_agent[build] | Created EC2 instance with ID i-XXXXXXXXXXXXXXXXX
<HH:MM:SS> | INFO | my_cloud_agent[build] | Instance i-XXXXXXXXXXXXXXXXX is pending... checking again in 5 seconds
<HH:MM:SS> | INFO | my_cloud_agent[build] | Instance i-XXXXXXXXXXXXXXXXX is pending... checking again in 5 seconds
<HH:MM:SS> | INFO | my_cloud_agent[build] | Instance i-XXXXXXXXXXXXXXXXX is pending... checking again in 5 seconds
<HH:MM:SS> | INFO | my_cloud_agent[build] | Instance i-XXXXXXXXXXXXXXXXX is pending... checking again in 5 seconds
...
...
<HH:MM:SS> | INFO | my_cloud_agent[run] | Done!
<HH:MM:SS> | INFO | my_cloud_agent[delete] | Deleting key-pair my_cloud_agent at /../../../my_cloud_agent.pem
<HH:MM:SS> | INFO | my_cloud_agent[delete] | Deleting instance i-XXXXXXXXXXXXXXXXX
<HH:MM:SS> | INFO | my_cloud_agent[delete] | Deleting security group sg-XXXXXXXXXXXXXXXXXAlternatively, you could use thealto applycommand to first build the cloud environment and then usealto runto actually run the code.Check out ourdocumentationto see the full list of CLI command and their usage!Cloud environmentsAlto currently supports the following cloud environments (which we call "Agents"):ec2Product RoadmapWe're always looking to improve our product. Here's what we're working on at the moment:Additional Agents: GCP Virtual Machines, EMR clusters, Databricks clusters, and more!Managed service: Managed platform to easily view, manage, and schedule your different cloud deploymentsLet us know if you'd like to see another feature!
|
alto-exp-bot
|
No description available on PyPI.
|
alto-pointcloud
|
Failed to fetch description. HTTP Status Code: 404
|
altoshift
|
#Altoshift Module Python#Altoshift.com - Search engine as a service. We provide easy setup for integration.## Installing`pip install altoshift --user`## Upgrade`pip install altoshift --upgrade --user`Author :
Eko Aprili [email protected]
|
alto-tools
|
ALTO Tools:snake:tools for performing various operations onALTOXML filesInstallationClone the repository, enter it and runpipinstall.Usagealto-tools<INPUT>[OPTION]INPUTshould be the path to an ALTO file or directory containing ALTO files.Output is sent tostdout.OPTIONDescription-t--textExtract UTF-8 encoded text content-c--confidenceExtract mean OCR word confidence score-i--illustrationsExtract bounding box coordinates of<Illustration>elements-g--graphicsExtract bounding box coordinates of<GraphicalElement>elements
|
alto-xml
|
AltoA Python parser for alto XML files, for handling OCR outputsExample usagefromaltoimportparse_filealto=parse_file('path/to/alto/file.xml')print(alto.extract_words())InstallationStable Release:pip install alto-xmlDevelopment Head:pip install git+https://github.com/envinorma/alto.gitDocumentationFor full package documentation please visitenvinorma.github.io/alto.DevelopmentSeeCONTRIBUTING.mdfor information related to development.MIT license
|
altparse
|
AltSourceParserThis package is designed to aid in creating and maintaining AltSources (for use primarily with AltStore) which are static json files containing various store-like and app metadata.Installationpip install altparseUsageSeetests/example_update.pyfor how to best utilize the AltSource updating functionality.
|
altplotlib
|
altplotlib provides matplotlib-style object oriented bindings to altair.DescriptionCreate high quality altair plots using more familiar calls.importaltplotlibimportnumpyasnprng=np.random.default_rng(seed=42)n_points,n_series=100,5multiple=rng.normal(size=(n_points,n_series)).cumsum(axis=0)x=np.linspace(0,14,n_points)axis=altplotlib.AltairAxis()axis.plot(multiple[:,:3])axis.set_title("Hello, world!")axis.set_xlabel("This is my x-axis")axis.set_ylabel("This is my y-axis")axis.plot(multiple[:,3],c="xkcd:rust")axis.plot(multiple[:,4],c="tab:cyan")
|
altprint
|
altprintalternative 3d printing path generatorcreate and modify gcode files for use in FFF printers.
|
alt-profanity-check
|
Alt-profanity-checkAlt profanity check is a drop-in replacement of theprofanity-checklibrary for the not so well
maintainedhttps://github.com/vzhou842/profanity-check:A fast, robust Python library to check for profanity or offensive language in strings.
Read more about how and whyprofanity-checkwas built inthis blog post.Our aim is to follow scikit-learn's (main dependency) versions and post models trained with the
same version number, example alt-profanity-check version 1.2.3.4 should be trained with the
1.2.3.4 version of the scikit-learn library.For joblib which is the next major dependency we will be using the latest one which was available
when we trained the models.Last but not least we aim to clean up the codebase a bit andmaybeintroduce some features or
datasets.Learn Python from the Maintainer of alt-profanity-check 🎓🧑💻️⌨️I am teaching Python through Mentorcruise, aiming both to beginners and seasoned developers who want to get to the next level in their learning journey:https://mentorcruise.com/mentor/dimitriosmistriotis/. Please mention that you found me through this repository.ChangelogSeeCHANGELOG.mdHow It Worksprofanity-checkuses a linear SVM model trained on 200k human-labeled samples of clean and
profane text strings. Its model is simple but surprisingly effective, meaningprofanity-checkis both robust and extremely performant.Why Use profanity-check?No Explicit BlacklistMany profanity detection libraries use a hard-coded list of bad words to detect and filter
profanity. For example,profanityusesthis wordlist,
and evenbetter-profanitystill usesa wordlist.
There are obviously glaring issues with this approach, and, while they might be performant,these libraries are not accurate at all.A simple example for whichprofanity-checkis better is the phrase"You cocksucker"* -profanitythinks this is clean because it doesn't have"cocksucker"* in its wordlist.PerformanceOther libraries likeprofanity-filteruse more sophisticated methods that are much more accurate but at the cost of performance.
A benchmark (performed December 2018 on a new 2018 Macbook Pro) usinga Kaggle dataset of Wikipedia commentsyielded roughly
the following results:

Package            1 Prediction (ms)   10 Predictions (ms)   100 Predictions (ms)
profanity-check    0.2                 0.5                   3.5
profanity-filter   60                  1200                  13000
profanity          0.3                 1.2                   24

profanity-check is anywhere from 300 - 4000 times faster than profanity-filter in this
benchmark!AccuracyThis table speaks for itself:

Package            Test Accuracy   Balanced Test Accuracy   Precision   Recall   F1 Score
profanity-check    95.0%           93.0%                    86.1%       89.6%    0.88
profanity-filter   91.8%           83.6%                    85.4%       70.2%    0.77
profanity          85.6%           65.1%                    91.7%       30.8%    0.46

See the How section below for more details on the dataset used for these results.Installation$ pip install alt-profanity-checkFor older Python versionsPython 3.7From Scikit-learn'sGithub page:scikit-learn 1.0 and later require Python 3.7 or newer.
scikit-learn 1.1 and later require Python 3.8 or newer.Which means that from 1.1.2 and later, Python 3.7 is not supported, hence:
If you are using 3.7, pin alt-profanity-check to1.0.2.1.Python 3.6Following Scikit-learn,Python3.6is not supported after its 1.0 version; if you are using 3.6, pin
alt-profanity-check to0.24.2.UsageYou can test from the command line:profanity_check"Check something""Check something else"fromprofanity_checkimportpredict,predict_probpredict(['predict() takes an array and returns a 1 for each string if it is offensive, else 0.'])# [0]predict(['fuck you'])# [1]predict_prob(['predict_prob() takes an array and returns the probability each string is offensive'])# [0.08686173]predict_prob(['go to hell, you scum'])# [0.7618861]Note that bothpredict()andpredict_probreturnnumpyarrays.More on How/Why It WorksHowSpecial thanks to the authors of the datasets used in this project.profanity-checkhence alsoalt-profanity-checkis trained on a combined dataset from 2 sources:t-davidson/hate-speech-and-offensive-language,
used in their paperAutomated Hate Speech Detection and the Problem of Offensive LanguagetheToxic Comment Classification Challengeon Kaggle.profanity-checkrelies heavily on the excellentscikit-learnlibrary. It's mostly powered byscikit-learnclassesCountVectorizer,LinearSVC, andCalibratedClassifierCV.
It uses aBag-of-words modelto vectorize input strings before feeding them to a linear classifier.WhyOne simplified way you could think about whyprofanity-checkworks is this:
during the training process, the model learns which words are "bad" and how "bad" they are
because those words will appear more often in offensive texts. Thus, it's as if the training
process is picking out the "bad" words out of all possible words and using those to make future
predictions. This is better than just relying on arbitrary word blacklists chosen by humans!CaveatsThis library is far from perfect. For example, it has a hard time picking up on less common
variants of swear words like"f4ck you"or"you b1tch"because they don't appear often
enough in the training corpus.Never treat any prediction from this library as
unquestionable truth, because it does and will make mistakes.Instead, use this library as a
heuristic.Developer NotesCreate a virtual environment from the projectpip install -r development_requirements.txtRetraining dataWith the above in place:cdprofanity_check/data
pythontrain_model.pyUploading to PyPiCurrently trying to automate it using Github Actions; see:.github/workflows/package_release_dry_run.yml.Setup:Set up your "~/.pypirc" with the appropriate tokenpip install -r requirements_for_uploading.txtwhich installs twineNew Version:Withx.y.zas the version to be uploaded:First tag:gittag-avx.y.z-m"Version x.y.z"gitpush--tagsThen upload:pythonsetup.pysdist
twineuploaddist/alt-profanity-check-x.y.z.tar.gz
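As a concrete illustration of the modelling approach described in the How section above (a bag-of-words CountVectorizer feeding a LinearSVC wrapped in CalibratedClassifierCV), here is a minimal scikit-learn sketch. It is not the package's actual training script, and the example texts, labels and cv value are placeholders:
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus: 0 = clean, 1 = offensive.
texts = ["have a nice day", "thanks for the help", "what a lovely picture",
         "go to hell, you scum", "you are a worthless idiot", "shut up, moron"]
labels = [0, 0, 0, 1, 1, 1]

model = make_pipeline(
    CountVectorizer(),                          # bag-of-words features
    CalibratedClassifierCV(LinearSVC(), cv=3),  # linear SVM + probability calibration
)
model.fit(texts, labels)

print(model.predict(["have a nice day"]))         # e.g. array([0])
print(model.predict_proba(["go to hell"])[:, 1])  # calibrated probability of being offensive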
|
altpty
|
This module provides an alternate implementation of the openpty() and forkpty()
functions using the pty handling code from Openssh. This should allow those
functions to work across more platforms than the standard python pty module does.
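A minimal usage sketch, under the assumption that altpty mirrors the standard library pty API, i.e. that altpty.openpty() returns a (master_fd, slave_fd) pair like os.openpty(); the exact signatures are not documented here, so treat the calls below as assumptions:
import os
import altpty  # assumed to expose openpty() and forkpty() like the stdlib pty module

master_fd, slave_fd = altpty.openpty()  # allocate a pseudo-terminal pair (assumed signature)
os.write(slave_fd, b"hello from the slave side\n")
print(os.read(master_fd, 1024))         # data written to the slave appears on the master
os.close(master_fd)
os.close(slave_fd)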
|
altpy
|
UNKNOWN
|
alt-pytest-asyncio
|
This plugin allows you to have async pytest fixtures and tests.This plugin only supports python 3.6 and above.The code here is influenced by pytest-asyncio but with some differences:Error tracebacks are from your tests, rather than asyncio internalsThere is only one loop for all of the testsYou can manage the lifecycle of the loop yourself outside of pytest by using
this plugin with your own loopNo need to explicitly mark your tests as async. (pytest-asyncio requires you
mark your async tests because it also supports other event loops like curio
and trio)Like pytest-asyncio it supports async tests, coroutine fixtures and async
generator fixtures.Changelog0.7.2 - 1 October 2023Timeouts don’t take affect if the debugger is active0.7.1 - 23 June 2023No functional changes, only fixing how hatchling understands the
license field in the pyproject.toml with thanks to @piotrm-nvidia0.7.0 - 12 April 2023Changed the pytest dependency to be greater than pytest version 7Using isort nowWent from setuptools to hatchCI now runs against python 3.110.6.0 - 23 October 2021Fix bug where it was possible for an async generator fixture to
be cleaned up even if it was never started.This library is now 3.7+ onlyAdded an equivalentshutdown_asyncgento the OverrideLoop helper0.5.4 - 26 January 2021Added a--default-async-timeoutoption from the CLI. With many thanks
to @andredias.Renamed existing pytest.ini option fromdefault_alt_async_timeoutto
bedefault_async_timeout.0.5.3 - 25 July 2020Make sure a KeyboardInterrupt on running tests still shows errors from
failed tests0.5.2 - 6 February 2020Added ability to make a different event loop for some tests0.5.1 - 15 December 2019Added an ini optiondefault_alt_async_timeoutfor the default async
timeout for fixtures and tests. The default is now 5 seconds. So say
you wanted the default to be 3.5 seconds, you would setdefault_alt_async_timeoutto be 3.50.5 - 16 August 2019I made this functionality in a work project where I needed to run
pytest.main from an existing event loop. I decided to make this it’s
own module so I can have tests for this code.Running from your own event loopIf you want to run pytest.main from with an existing event loop then you can
do something like:fromalt_pytest_asyncio.pluginimportAltPytestAsyncioPlugin,run_coro_as_mainimportnest_asyncioimportasyncioimportpytestasyncdefmy_tests():awaitdo_some_setup_before_pytest()plugins=[AltPytestAsyncioPlugin(loop)]try:code=pytest.main([],plugins=plugins)finally:# Note that alt_pytest_asyncio will make sure all your async tests# have been finalized by this point, even if you KeyboardInterrupt# the pytest.mainawaitdo_any_teardown_after_pytest()ifcode!=0:raiseException(repr(code))if__name__=='__main__':# Nest asyncio is required so that we can do run_until_complete in an# existing event loop - https://github.com/erdewit/nest_asyncioloop=asyncio.get_event_loop()nest_asyncio.apply(loop)run_coro_as_main(loop,my_tests())Note that if you don’t need to run pytest from an existing event loop, you don’t
need to do anything other than have alt_pytest_asyncio installed in your
environment and you’ll be able to just use async keywords on your fixtures and
tests.Timeoutsalt_pytest_asyncio registers apytest.mark.async_timeout(seconds)mark which
you can use to set a timeout for your test.For example:[email protected]_timeout(10)asyncdeftest_something():awaitsomething_that_may_take_a_while()This test will be cancelled after 10 seconds and raise an assertion error saying
the test took too long and the file and line number where the test is.You can also use the async_timeout mark on coroutine fixtures:[email protected]()@pytest.mark.async_timeout(0.5)asyncdefmy_amazing_fixture():awaitasyncio.sleep(1)return1And you can have a timeout on generator fixtures:[email protected]()@pytest.mark.async_timeout(0.5)asyncdefmy_amazing_fixture():try:awaitasyncio.sleep(1)yield1finally:awaitasyncio.sleep(1)Note that for generator fixtures, the timeout is applied in whole to both the
setup and finalization of the fixture. As in the real timeout for the entire
fixture is essentially double the single timeout specified.The default timeout is 5 seconds. You can change this default by setting thedefault_async_timeoutoption to the number of seconds you want.This setting is also available from the CLI using the--default-async-timeoutoption.Note that if the timeout fires whilst you have the debugger active then the timeout
will not cancel the current test. This is determined by checking ifsys.gettrace()returns a non-None value.Overriding the loopSometimes it may be necessary to close the current loop in a test. For this to
not then break the rest of your tests, you will need to set a new event loop for
your test and then restore the old loop afterwards.For this, we have a context manager that will install a new asyncio loop and
then restore the original loop on exit.Usage looks like:from alt_pytest_asyncio.plugin import OverrideLoop
class TestThing:
@pytest.fixture(autouse=True)
def custom_loop(self):
with OverrideLoop() as custom_loop:
yield custom_loop
def test_thing(self, custom_loop):
custom_loop.run_until_complete(my_thing())By putting the loop into an autouse fixture, all fixtures used by the test
will have the custom loop. If you want to include module level fixtures too
then use the OverrideLoop in a module level fixture too.OverrideLoop takes in anew_loopboolean that will make it so no new
loop is set and asyncio is left with no default loop.The new loop itself (or None if new_loop is False) can be found in theloopattribute of the object yielded by the context manager.Therun_until_completeon thecustom_loopin the above example will
do arun_until_completeon the new loop, but in a way that means you
won’t getunhandled exception during shutdownerrors when the context
manager closes the new loop.When the context manager exits and closes the new loop, it will first cancel
all tasks to ensure finally blocks are run.
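To tie the pieces above together, here is a small sketch of what a test file can look like with this plugin installed: plain async def fixtures and tests with no explicit async marker, plus the async_timeout mark described in the Timeouts section (the fixture and test names are made up):
import asyncio

import pytest

@pytest.fixture()
async def answer():
    await asyncio.sleep(0)  # any awaitable setup work
    return 42

@pytest.mark.async_timeout(2)  # fail the test if it runs longer than 2 seconds
async def test_answer(answer):
    assert answer == 42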
|
altrios
|
ALTRIOSThe Advanced Locomotive Technology and Rail Infrastructure Optimization System (ALTRIOS) is a unique, fully integrated, open-source software tool to evaluate strategies for deploying advanced locomotive technologies and associated infrastructure for cost-effective decarbonization. ALTRIOS simulates freight-demand driven train scheduling, mainline meet-pass planning, locomotive dynamics, train dynamics, energy conversion efficiencies, and energy storage dynamics of line-haul train operations. Because new locomotives represent a significant long-term capital investment and new technologies must be thoroughly demonstrated before deployment, this tool provides guidance on the risk/reward tradeoffs of different technology rollout strategies. An open, integrated simulation tool is invaluable for identifying future research needs and making decisions on technology development, routes, and train selection. ALTRIOS was developed as part of a collaborative effort by a team comprising The National Renewable Energy Laboratory (NREL), University of Illinois Urbana-Champaign (UIUC), Southwest Research Institute (SwRI), and BNSF Railway.InstallationIf you are an ALTRIOS developer, seeDeveloper Documentation. Otherwise, read on.Python SetupPython installation options:Option 1 -- Python:https://www.python.org/downloads/. We recommend Python 3.10. Be sure to check theAdd to PATHoption during installation.Option 2 -- Anaconda: we recommendhttps://docs.conda.io/en/latest/miniconda.html.Setup a python environment. ALTRIOS can work with Python 3.9, or 3.10, but we recommend 3.10 for better performance and user experience. Create a python environment for ALTRIOS with either of two methods:Option 1 --Python VenvNavigate to the ALTRIOS folder you just cloned or any folder you'd like for using ALTRIOS. Remember the folder you use!Assuming you have Python 3.10 installed, runpython3.10 -m venv altrios-venvin your terminal enviroment (we recommend PowerShell in Windows, which comes pre-installed). This tells Python 3.10 to use thevenvmodule to create a virtual environment (which will be ignored by git if namedaltrios-venv) in theALTRIOS/altrios-venv/.Activate the environment you just created to install packages or anytime you're running ALTRIOS:Mac and Linux:source altrios-venv/bin/activateWindows:altrios-venv/Scripts/activate.batin a windows command prompt or power shell orsource ./altrios-venv/scripts/activatein git bash terminalWhen the environment is activated, your terminal session will have a decorator that looks like(altrios-venv).Option 2 -- Anaconda:Open an Anaconda prompt (in Windows, we recommend Anaconda Powershell Prompt) and run the commandconda create -n altrios python=3.10to create an Anaconda environment namedaltrios.Activate the environment to install packages or anytime you're running ALTRIOS: runconda activate altrios.ALTRIOS SetupWith your Python environment activated, runpip install altrios.Congratulations, you've completed installation! Whenever you need to use ALTRIOS, be sure to activate your python environment created above.How to run ALTRIOSWith your activated Python environment with ALTRIOS fully installed, you can download the demo scripts to the current working directory inside of ademos/folder with:importaltriosasaltalt.copy_demo_files()You can run the Simulation Manager through a multi-week simulation of train operations in by runningpython sim_manager_demo.pyindemos/. This will create aplots/subfolder in which the plots will be saved. 
To run interactively, fire up a Python IDE (e.g.VS Code,Spyder), and run the file. If you're in VS Code, you can run the file as a virtual jupyter notebook because of the "cells" that are marked with the# %%annotation. You can click on line 2, for example, and hit<Shift> + <Enter>to run the current cell in an interactive terminal (which will take several seconds to launch) and advance to the next cell. Alternatively, you can hit<Ctrl> + <Shift> + pto enable interactive commands and type "run current cell". There are several other python files in thedemos/folder to demonstrate various capabilities of ALTRIOS.If you plan to modify the data used in the demo files, copy the data files to your local directory and load them from there, e.g.res=alt.ReversibleEnergyStorage.from_file(alt.resources_root()/"powertrains/reversible_energy_storages/Kokam_NMC_75Ah_flx_drive.yaml")would becomeres=alt.ReversibleEnergyStorage.from_file("./custom_battery.yaml")AcknowledgementsThe ALTRIOS Team would like to thank ARPA-E for financially supporting the research through the LOCOMOTIVES program and Dr. Robert Ledoux for his vision and support. We would also like to thank the ARPA-E team for their support and guidance: Dr. Apoorv Agarwal, Mirjana Marden, Alexis Amos, and Catherine Good. We would also like to thank BNSF for their cost share financial support, guidance, and deep understanding of the rail industry’s needs. Additionally, we would like to thank Jinghu Hu for his contributions to the core ALTRIOS code. We would like to thank Chris Hennessy at SwRI for his support. Thank you to Michael Cleveland for his help with developing and kicking off this project.
|
altron
|
A Package to use Gcast, Stats and Must Join on telegram
|
altscore
|
Python SDK for AltScore
|
alt-side-parking
|
Inspired by Jessica Garson’s talk at Pyconhttps://us.pycon.org/2020/schedule/presentation/102/Developing>>>pipinstall-e'.[dev]'>>>tox-elint&&tox>>>gitsecretreveal>>>direnvallow
|
altslice
|
The altslice package provides a number ofSlicerclasses which can be used
to index and slice sequences using alternative indexing. For example:fromaltsliceimportCategoricalSlicermonths=['Jan','Feb','Mar','Apr','May','Jun']sales=[100,200,250,300,333,400]slicer=CategoricalSlicer(months)# sales total from Januarysales[slicer['Jan']]# sales from Febuary until Maysales[slicer['Jan':'May']]SlicersThe following Slicers are provided in the library:CategoricalSlicer : Index using discrete categories.UniformSlicer : Index using evenly spaced numbers with a specific interval.SeqeuenceSlicer : Index using a sorted sequence of numbers.OneBasedSlicer : One-based indexing.Installaltslice can be installed using pip:pip install altsliceTestingaltslice uses pytest for testing. The test suite can be executed usingpy.test.One-based indexingIf desired the list container can be adjusted to use one-based indexing:fromaltsliceimportOneBasedSlicerslicer=OneBasedSlicer()classlist(list):def__getitem__(self,x):returnsuper(list,self).__getitem__(slicer[x])This adjustment is not recommended.
|
alt-src
|
Alt-src is a tool for pushing SRPM metadata into a git repo.Alt-src takes source RPMs as input, unpacks packaging metadata such as .spec files and
patch files, and pushes them into a git repository. It's most notably used to populateCentOS git.Usagealt-src --push <branch> <package.src.rpm>This command will check out the git repo for the given package and branch, unpack the
input RPM and create/push a new commit using the unpacked sources.
A tag is also created underimports/<branch>/<nvr>.If a repo doesn't exist for the given package, the command will create one using
the Pagure API.The command accepts these inputs:<package-filename.src.rpm>- path to a local SRPM file--koji <build-nvr>- SRPM is pulled from configured koji instance--koji <build-nvr>:module.src.txt- instead of SRPM, modulemd is importedIf enabled, the command also sends notifications to the configured email address.LicenseThis program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
|
alt-tableaudocumentapi
|
No description available on PyPI.
|
alt-tabpy
|
AltTabPy: An Easy Alternative to TabPyAltTabPy is a lightweight alternative to TabPy aimed at single user Desktop applications aiming to get up and running with Python and Tableau in tandem as fast as possible. It aims to bezero-configurationand can be set up in a few minutes. It also focuses on a much smaller subset of applications, leaving environment and module management as out of project scope.Official project documentation can be found at:https://alttabpy.readthedocs.io/en/latest/index.html
|
alttex
|
A LaTeX pre-processor that supports alternatives, templates and more.Main features:Uses LaTeX syntax.Can be used both as a program (thealttexexecutable) for final users,
or as a separate library for developers.Create alternative documents from a single source using the commands
\ALT, \IF, \ELSE, etc.Support for Jinja2 templates using a LaTeX-like syntax. It can be used as
an alternative to traditional LaTeX programming or to supply the LaTeX
document with data from different sources such as a Python script, a JSON
structure, a database, and others.
|
alt-text
|
Alt-TextA PyPi package used for finding, generating, and setting alt-text for images in HTML and EPUB files.Getting StartedInstallationYou can find the PyPi packagehere. To install the package via, you can execute the following in a terminal for your respective system...Windowspy -m pip install alt-textUnix/MacOSpython3 -m pip install alt-textDeveloper DependenciesAll developer dependencies can be foundhere. You will only need to install these individually when working directly with the source code.Engine DependenciesAs of the moment, the image analyzation tools that Alt-Text uses are not fully bundled with the package itself. Hence, depending on the type of engines you are using (for Description Generation and/or Character Recognition), you will need to install various applications/get API keys for the respective functionalities.Description EnginesDescription Engines are used to generate descriptions of an image. If you are to use one of these, you will need to fulfill that specific Engine's dependencies before use.ReplicateMiniGPT4APIReplicateMiniGPT4API Engine uses theReplicate API, hence you will need to get an API key viaLogging in with Githubon the Replicate website.GoogleVertexAPIGoogleVertexAPI Engine uses theVertex AI API, hence you will need to get access from theGoogle API Marketplace. Additionally, Alt-Text uses Service Account Keys to get authenticated with Google Cloud, hence you will need toCreate a Service Account Keywith permission for the Vertex AI API and have its according JSON.OCR EnginesOptical Character Recognition Engines are used to find text within images. If you are to use one of these, you will need to fulfill that specific Engine's dependencies before use.TesseractThe Tesseract Engine usesTesseract, hence you will need to install theTesseract OCR.Quickstart & UsageTo be added...Our MissionThe Alt-Text project is developed for theFree Ebook Foundationas a Senior Design Project atStevens Institute of Technology.As Ebooks become a more prominant way to consume written materials, it only becomes more important for them to be accessible to all people. Alternative text (aka alt-text) in Ebooks are used as a way for people to understand images in Ebooks if they are unable to use images as intended (e.g. a visual impaired person using a screen reader to read an Ebook).While this feature exists, it is still not fully utilized and many Ebooks lack alt-text in some, or even all their images. 
To illustrate this, theGutenberg Project, the creator of the Ebook and now a distributor of Public Domain Ebooks, have over 70,000 Ebooks in their collection and of those, there are about 470,000 images without alt-text.The Alt-Text project's goal is to use the power of AI, Automation, and the Internet to craft a solution capable of automatically generating descriptions for images lacking alt-text in Ebooks, closing the accessibility gap and improving collections, such as theGutenberg Project.Contact InformationThe emails and relevant information of those involved in the Alt-Text project can be found below.The DeveolpersJack [email protected] [email protected]'s WebsiteDavid's GithubDavid's LinkedInJared [email protected] [email protected] [email protected] [email protected] ClientEric [email protected] [email protected], Tools, & Libraries UsedAlt-Text is developed using an assortment of modern Python tools...Development ToolsAlt-Text is developed using...BeautifulSoup4EbookLibReplicateGoogle-Cloud-AIPlatformPyTesseractAPIs and Supplementary ToolsReplicate APIVertex AI APITesseractPackaging/Distribution ToolsAlt-Text is distributed using...PyPiHatchling
|
altugssecondtest
|
UNKNOWN
|
altugstestpackage
|
UNKNOWN
|
altunityrun
|
This package includes the Python bindings needed for tests to be run against Unity games and apps using AltUnityTester.For more information, visithttps://gitlab.com/altom/altunitytester
|
altunityrunner
|
AltUnityTester Python BindingsThis package contains an library for adding Python language binding to the AltUnity Tester framework.AltUnity Tester is an open-source UI driven test automation tool that helps you find objects in your game and interacts with them using tests written in C#, Python or Java.You can run your tests on real devices (mobile, PCs, etc.) or inside the Unity Editor.Read the documentation onhttps://altom.com/altunity/docs/altunitytester/Get StartedCheck out theGet Startedguide from the documentation.DevelopmentCode Style:PEP-0008Docstring style:Google Style Docstrings.Running TestsRun the following command to install the dev dependencies:$ pip install -r requirements-dev.txtUnit Tests$ pytest tests/unit/Integration Tests$ pytest tests/integration/ContributingCheck out the full contributing guidecontributing.SupportJoin our Google Group for questions and discussions:https://groups.google.com/a/altom.com/forum/#!forum/altunityforumJoin our Discord Server to chat with other members of the community:https://discord.gg/Ag9RSuSLicenseDistributed under theGNU General Public License v3.0. SeeLICENSEfor more information.
|
altunityrunnerfc
|
No description available on PyPI.
|
altuscli
|
Cloudera Altus has been decommissioned, so the Altus CLI no longer functions. If you are using Cloudera Data Platform (CDP), please use the `CDP CLI <https://pypi.org/project/cdpcli/>`__ instead.This package provides a unified command line interface to Cloudera Altus.LicenseThe Altus CLI is licensed under theApache License, Version 2.0.
with asupplemental license disclaimer.
|
altvmasterlist
|
alt:V Masterlist for PythonYou can use this Package to interface with the alt:V master list API.Installpip install altvmasterlistorpip3 install altvmasterlistUsagefrom altvmasterlist import masterlist as altvDocsPlease see the Docshere.Build the docs usingpdoc .\src\altvmasterlist\ -o .\docsDevelopmentYou need to havepoetryinstalled.Then install everything withpoetry installAfter that you can run the examples withpoetry run python example_masterlist.pyandpytest withpoetry run pytest
|
altv-stubs
|
altv-python-stubsWhat is this?This is a file to allow autocomplete for the alt:V Python Module. A manual approach was chosen for this because the autogenerated ones were not great. Based onalt:V Types.Installation$ pip install altv-stubs
|
altwalker
|
AltWalkerAltWalker is an open source, Model-Based Testing framework.Read the documentation onhttps://altwalker.github.io/altwalker.Join ourGitter chat roomor ourGoogle Groupto chat with us or with other members of the community.Table of ContentsOverviewInstallationQuickstartSetting Up a Development EnvironmentSupportLicenseOverviewAltWalkeris an open source Model-Based Testing framework that supports running
tests written in python3 and .NET/C#. You design your tests as a directed graph
and AltWalker generates test cases from your graph (usingGraphWalker) and executes them.Model-Based TestingModel-Based Testingis a testing
technique which offers a way of generating test cases based on models that describe the behavior
(functionality) of the system under test.The goal when designing models is to represent the part of the system under test, usually
by one model for each functionality of your system.With the help of graph theory we can dynamically generate multiple test scripts. A test script is a path passing through the model from a starting point till
a condition is met.Why use Model-Based Testing:the abstraction layer added by the model gives your tests a better structurethe model can be updated to reflect the requirements changes making the tests easy to maintaindynamically generates multiple test scripts based on different conditions (like coverage or length)allows for a large number of tests to be created which results in a larger part of the system under test to be covered.AltWalkerAltWalker is a test execution tool, which aims to make it easy to write and run your model-based tests. AltWalker uses GraphWalker to generate a path through the models.For the test structure it uses an Object-Oriented approach inspired by python'sunittestmodule. Every model is mapped to a class with the same name and each vertex and edge from the model is mapped to a method inside the class.AltWalker also borrows the concept of test fixture from unit tests, and implements the following fixtures:setUpRun,tearDownRun,setUpModelandtearDownModel.Now it supports running tests written in .NET/C# and Python3.AltWalker ComponentsAltWalker has the following components:Model: a directed graph, supplied by the user as a json or graphml file.
A graph is composed from a list of vertices and a list of edges.GeneratorandStop Condition: used to specify how to generate a
path and to decide when a path is complete.Test Code: the implementation of the model(s) as code. Each model is mapped to a
class and each vertex and edge is mapped to a method.Planner: uses themodel(s)and a pair ofgeneratorandstop conditionto provide a path (a sequence of steps) through the model(s).Currently AltWalker provides two planners:Online PlannerOffline PlannerReporter: reports the output of the tests, the reporter is called on
each event (e.g.step_start,step_end, ...).Executor: for each step in the plan it looks up and calls the named method
from thetest code. In addition to the step methods, it also calls
fixture methods if present (e.g.setUpModel,tearDownModel...).Currently AltWalker provides three executors:Python Executor.NET ExecutorAnd anHttp Executorthat allows you to hook up your own executor via HTTP. You can read
more about the Http Executor on theHow to: Write your own executorpage.Walker: the test runner. Coordinates the execution of a test asking thePlannerfor the next step, executing the step using theExecutorand reporting the progress
using theReporter.There are two ways to run your tests:Online Mode(using the Online Planner): Generate one step and then execute
the step, until the path is complete.Offline Mode(using the Offline Planner): Run a path from a sequence of steps.
Usually the path is generated using theofflinecommand.InstallationPrerequisites:Python3(with pip3)Java 8GraphWalker CLI(Optional).NET Core(Optional)git(Optional)Install AltWalkerTo installaltwalkerrun the following command in your command line:$ pip install altwalkerTo check that you have installed the correct version of AltWalker, run the
following command:$ altwalker --versionLiving on the edgeIf you want to work with the latest code before it’s released, install or update the code from thedevelopbranch:$ pip install -U git+https://github.com/altwalker/altwalkerFor a more detailed tutorial read theInstallationsection from the documentation.QuickstartMake a sample project and run the tests.$ altwalker init test-project -l python
$ cd test-project
$ altwalker online tests -m models/default.json "random(vertex_coverage(100))"
Running:
[2019-08-06 16:28:44.030077] ModelName.vertex_A Running
[2019-08-06 16:28:44.030940] ModelName.vertex_A Status: PASSED
[2019-08-06 16:28:44.048492] ModelName.edge_A Running
[2019-08-06 16:28:44.048729] ModelName.edge_A Status: PASSED
[2019-08-06 16:28:44.064495] ModelName.vertex_B Running
[2019-08-06 16:28:44.064746] ModelName.vertex_B Status: PASSED
Statistics:
Model Coverage..................100%
Number of Models...................1
Completed Models...................1
Failed Models......................0
Incomplete Models..................0
Not Executed Models................0
Edge Coverage...................100%
Number of Edges....................1
Visited Edges......................1
Unvisited Edges....................0
Vertex Coverage.................100%
Number of Vertices.................2
Visited Vertices...................2
Unvisited Vertices.................0
Status: PASSSetting Up a Development EnvironmentClone the repository:$ git clone https://github.com/altwalker/altwalker.git
$ cd altwalkerInstall python dependencies:$ pip install -r requirements.txt && \
pip install -r requirements-dev.txtRunning Tests$ pytest tests -s -vRunning tests with tox inside Docker$ docker run -it --rm -v "$(pwd):/altwalker" -w "/altwalker" altwalker/tests:tox toxCLIAfter you install the python dependencies to setup AltWalker CLI locally from code run:$ pip install --editable .Then from any command line you can access:$ altwalker --helpDocumentationAfter you install the python dependencies to generate the documentation run:$ cd docs && \
make clean && \
make htmlTo see the documentation run:$ open build/html/index.htmlTo rebuild the documentation on changes, with live-reload in the browser run:$ sphinx-autobuild docs/source docs/build/htmlNavigate to the documentation athttp://127.0.0.1:8000.Further Reading/Useful Links:Google Style Docstring ExampleGoogle Style GuideSupportJoin ourGitter chat roomor ourGoogle Groupto chat with us or with other members of the community.LicenseThis project is licensed under theGNU General Public License v3.0.
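To make the class-per-model mapping described in the Overview concrete, a hypothetical Python test module for the quickstart's default model (ModelName with vertex_A, edge_A and vertex_B) could look like the sketch below; the fixture bodies and assertions are placeholders:
class ModelName:

    def setUpModel(self):
        # prepare the system under test before the model runs
        self.visited = []

    def vertex_A(self):
        # check the state this vertex represents
        self.visited.append("vertex_A")

    def edge_A(self):
        # perform the action this edge represents
        self.visited.append("edge_A")

    def vertex_B(self):
        assert "edge_A" in self.visited

    def tearDownModel(self):
        # clean up after the model has finished
        self.visited.clear()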
|
altwalker-live-viewer
|
AltWalker's LiveViewerA web application for visualizing the progress of an AltWalker test run.AltWalker's LiveViewer is a powerful tool designed to enhance your experience with AltWalker. This application provides real-time visualization and monitoring capabilities for your AltWalker test runs, allowing you to gain deeper insights into test execution, track progress, and identify potential issues with ease. With AltWalker's LiveViewer, you can effortlessly keep an eye on the execution of
your test models and ensure the success of your testing endeavors.SetupBefore you begin using AltWalker's LiveViewer, make sure you have AltWalker installed. If you haven't already, you can follow the installation instructionshere.Install the AltWalker LiveViewer command-line tool:pipinstallaltwalker-live-viewerTo verify that the CLI was installed correctly, run:altwalker-viewer--versionYou should see the version information displayed:altwalker-viewer,version0.4s.0RunningTo usealtwalker-viewer, you'll need the following prerequisites:Test model(s)Test code for the model(s)If you can run your tests usingaltwalker online, you already have everything you need for the LiveViewer.Thealtwalker-viewer onlinecommand shares arguments and options withaltwalker online. However, it includes the-poption to set up the WebSocket port.To start the WebSocket server:altwalker-vieweronlinepath/to/tests/-mpath/to/model.json"generator(stop_condition)"-x[python|dotnet]For example:altwalker-vieweronlinetests-mmodels/default.json"random(never)"Now, open your web browser and visit:https://altwalker.github.io/live-viewer/.If you want to run the frontend locally, you'll need to start a WebServer, which serves the LiveViewer frontend.altwalker-vieweropenNow, open your web browser and visit:http://localhost:8000/.TroubleshootingIf you encounter any issues while using the LiveViewer, consider the following steps:Check Model and Code Validity: First, ensure that your models and code are valid by using the following commands:altwalker checkfor the model(s)altwalker verifyfor codeTerminating GraphWalker Processes: If you experience problems when running thealtwalker-viewer onlinecommand, it's essential to check for any existing GraphWalker processes. If any GraphWalker processes are running, you should stop them before running thealtwalker-viewer onlinecommand.DocumentationGetting help on commands and option names-h,--help: Show a help message and exit.altwalker-viewer--helpaltwalker-vieweronline--helpaltwalker-vieweropen--helpDevelopment Setuppython3nodenpmInstall npm dependenciesnpminstallInstall PyPi dependenciespipinstall-rrequirementsBuild the FrontendnpmrunbuildnpmrunstartInstall the CLIpipinstall-e.LicenseThis project is licensed under theGNU General Public License v3.0.
|
altwistendpy
|
Extras for working with Twisted.
|
aludel
|
Aludel is a mini-framework for building RESTful APIs. It builds on top ofalchimia(for database things) andklein(for HTTP things).How to use aludelTODO: Write some documentation for this.About the nameAnaludelis a subliming pot used in alchemy and medieval chemistry. It was
used as a condenser in the sublimation process and thus came to signify the
end-stages of transformation and the symbol of creation.
|
alum
|
Failed to fetch description. HTTP Status Code: 404
|
alumin
|
Failed to fetch description. HTTP Status Code: 404
|
aluminium
|
# aluminium
###### System administration toolkit.
|
aluminum
|
AluminumA fast python ORM for InfluxDB 2 written in Rust.IntroductionAluminum is a fast Python library written in Rust that provides an ORM interface for interacting with InfluxDB.Getting StartedThis section will guide you through the basic steps of using the library. It will cover:Setting up a connection with an engineCreating a bucketRetrieving a bucketAdding data to a bucketQuerying data from a bucketInstallationyou can install Aluminum using pip:pipinstallaluminumTo use the library, you first need to create an instance of an engine and bind it to the Store.fromaluminumimportEngine,Store,Baseengine=create_engine(host="http://localhost:8086",token="<INFLUXDB-TOKEN>",org_id="<ORG-ID>",)# Bind it to the storestore=Store(bind=engine)# Initialize the Store's metadata for your modelsstore.collect(Base)The StoreAfter setting up the store, you can create a bucket by calling thecreate_bucketmethod of the Store instance.
The method takes a class that inherits fromBaseas an argument and returns a bucket instance.fromaluminum.baseimportBaseclassSensorBucket(Base):tag:strmeasurement:strfield:intasyncdefrun_async_example():# Create a bucketbucket=awaitstore.create_bucket(SensorBucket)# Get a bucketbucket=store.get_bucket(SensorBucket)# Get all bucketsbuckets=store.get_buckets()# Delete a bucketawaitstore.delete_bucket(SensorBucket)Adding Data to a BucketTo add data to a bucket, you can call theaddmethod of the bucket instance. The add method takes an instance of the bucket class as an argument.fromaluminum.baseimportBaseclassSensorBucket(Base):tag:strmeasurement:strfield:intasyncdefrun_async_example():msmnt=SensorBucket(tag="My Tag",measurement="My Measurement",field=10)awaitbucket.add(user)Querying Data from a BucketTo query data from a bucket, you can call theexecutemethod of the bucket instance. The execute method takes a Select instance as an argument and returns a list of bucket class instances that match the query.fromaluminumimportselectasyncdefrun_async_example():stmt=select(SensorBucket).where(SensorBucket.tag=="My Tag",SensorBucket.field>0)result=awaitbucket.execute(stmt)# list of SensorBucketAcknowledgementThe python-rust bindings are fromthe pyo3 projectLicenseLicensed under the MIT License.Copyright (c) 2022Gabriele FrattiniNeed Help?If you have any questions or need help getting started, please reach out by opening an issue.Conributions are welcome.
|
aluno_exatas
|
Aluno de exatasThe module can be installed with pip:pip3 install aluno_exatas. This is a module intended to help exact-sciences students with their coursework. It currently contains functions for the following subjects:Experimental physics: this is the most developed module; it contains functions related to uncertainty propagation and to handling the data involved, including through the Least Squares Method;Numerical methods: this module does not have many functions, but it aims to implement concepts covered in the course. It is not complete, and I only recommend using the functions after you have understood well what they do and how they do it;Electric circuits: for now this module only helps with handling phasors and contains a few constants related to symmetrical components.The tutorial for the module is available inJupyter Notebookformat and inMarkdownformat.
|
aluratemp
|
A simple temperature converter, with functions for converting from Celsius to Fahrenheit and vice versa, used for a post on the Alura Blog
|
alurinium-image-processing
|
Image Processing for Django
|
al-utils-almirai
|
Python common utilsasync lrc cacheLRU cache for asyncioasync_lru 1.0.3fromal_utils.alruimportalru_cache@alru_cache(maxsize=128,typed=False)deftest():passsync to asyncMake sync function as asyncfromal_utils.async_utilimportasync_wrap@async_wrapdeftest():pass# equals toasync_wrap(test())# equals toasyncdeftest():passasync to syncMake async function as syncfromal_utils.async_utilimportrun_async@run_asyncasyncdeftest():pass# equals torun_async(test())# equals todeftest():passloggerCreate logger easier to print log to console and save logs at the same time.Logger name is current package name split by dot.It contains a default logger config.It will save infos to./logs/info/and errors to./logs/error/. Log files split in each week.fromal_utils.loggerimportLoggerlogger=Logger(class_name='').loggersingletonSingleton container.Support multiple container.fromal_utils.singletonimportSingleton# set class to containerclassTest(Singleton):passfromal_utils.singletonimportresolvetest=resolve(Test)If meta class conflict, usingmerge_metato resolve it.fromal_utils.singletonimportSingletonfromal_utils.metaimportmerge_metaclassTest(merge_meta(Singleton,AnothClass)):passOr, without extends, add class or instance in low-level functions.fromal_utils.singletonimportadd,add_typeclassTest1:passtest1=Test1()add(test1)classTest2:passadd_type(Test2)print colored messagesPrint message to console with custome color(text color, background color) and style(light, normal, bold, ...)fromal_utils.consoleimportColoredConsole# lightColoredConsole.debug('This is a debug.')# greenColoredConsole.success('This is a success.')# yellowColoredConsole.warn('This is a warning.')# red, stderrColoredConsole.error('This is a error.')# custome colored text# set io(stdout or stderr) start colored.ColoredConsole.set(style,text_color,bg_color,io)# print custome colored textColoredConsole.print('Costome color text.')# unset, clear all style and color sets.ColoredConsole.unset(io)
|
alv
|
alv: a command-line alignment viewerView your DNA or protein multiple-sequence alignments right at your command line. No need to launch a
GUI!Note:alvrequires Python v3.4 or later. Earlier versions may also work, but this has not been
tested.Latest feature additionsIf you have more than one alignment in your input file, then the first alignment is output unless you
use the --alignment-index (-ai) option to choose another.alvis now adapted for use in Python notebooks (tested on Jupyter) through two convenience functions
'view' and 'glimpse'. Both functions take a BioPython alignment object and output a view of the
alignment.Writingfrom Bio import AlignIO
msa = AlignIO.read('PF00005.fa', 'fasta')
import alv
alv.view(msa)in a Jupyter notebook cell and evaluating will yield a colored alignment in thealvstyle.For large alignments, the glimpse function is convenient since a subset of the alignment, selected
as an easily detected conserved region, is shown.alv.glimpse(msa)For more usage information, runhelp(alv.view)in a notebook cell.FeaturesCommand-line based, no GUI, so easy to script viewing of many (typically small) MSAs.Reads alignments in FASTA, Clustal, PHYLIP, NEXUS, and Stockholm formats, from file orstdin.Output is formatted to suit your terminal. You can also set the alignment width with option-w.Can color alignments of coding DNA by each codon's translation to an amino acid.Guesses sequence type (DNA/RNA/AA/coding) by default. You can override with option-t.Order sequences explicitly, alphabetically, or by sequence similarity.Restrict coloring to where you don't have indels or where there is a lot of conservation.Focus on variable columns with the options--only-variableand--only-variable-excluding-indels, contributed by nikostr, which constrain
coloring to columns with variation, or with variation not counting indels.The commandalv -g huge_msa.fadisplays a cut-out of the MSA, guaranteed to fit
one terminal page without scrolling or MSA line breaking, that is supposed to
give you an idea of alignment quality and contents.Writealv -r 20 huge_msa.fato get a view of the MSA containing only 20 randomly
selected sequences.InstallRecommended installation is:pip install --upgrade pip
pip install alvIf you have a half-modern BioPython installed, Python v3.4 should work.
BioPython is a dependency and will only get installed automatically withpip install alvif you are using Python v3.6 or later, because BioPython was apparently not on PyPI before that.ExamplesQuick viewing of a small alignment:alv msa.faThis autodetects sequence type (AA, DNA, RNA, coding DNA), colors the sequences, and formats the
alignment for easy viewing in your terminal.
When applyingalvto an alignment of coding DNA, the coding property is autodetected and colors are therefore applied to codons instead
of nucleotides.View three sequences, accessionsa,b, andc, from an alignment:alv -so a,b,c msa.faFeed alignment toless, for paging support.alv -k msa.fa | less -RThe-koption ensures thatalvkeeps coloring the alignment (by default, piping
and redirection removes colors), and the-Roption instructslessto interpret color codes.For developersRunpython setup.py develop testfor development install and to execute tests.ScreenshotsFull PFAM domainAll of the sequences in PFAM's seed alignment for PF00005Yeast sequences from PF00005Using the option-sm YEAST, we reduce the alignment to the ones with a matching accession.
|
alva
|
No description available on PyPI.
|
alvacc
|
PurposeThis script polls the vaccine appointment database, looking for open time slots.Procedure:Create an appointment on any available date athttps://alcovidvaccine.gov/Note your email address and confirmation codeRun this script and follow the prompts to configure.When it finds an open slot, it will open the confirmation page.Click either the first or second edit, depending on whether you need to change location and time, or just time.If you're not quick enough, the appointment slot may be grabbed before you can get to it. Good luck!Feel free to submit an issue and I'll do what I can to help. And try to avoid going too low on the sleep timer. I have never had any issues querying their website, but still best not to overload the servers.UsageThe simplest usage is to runalvaccand follow the prompts to create a configuration file.The configuration values can also be provided directly through the command line arguments, or the.config/alvacc.yamlfile (either in the repo directory or /home/user) can be manually edited.usage: alvacc.py [-h] [-s SLEEP_TIME] [--current_appointment_date CURRENT_APPOINTMENT_DATE]
[--confirmation_number CONFIRMATION_NUMBER]
[--locations LOCATIONS [LOCATIONS ...]] [-v]
optional arguments:
-h, --help show this help message and exit
-s, --sleep SLEEP_TIME
Time to sleep between queries (seconds)
--current_appointment_date CURRENT_APPOINTMENT_DATE
current appointment in `Month day` format
--confirmation_number CONFIRMATION_NUMBER
Confirmation number from previously booked appointment
--locations LOCATIONS [LOCATIONS ...]
space-separated list of counties to use. Available counties: Autauga
Baldwin Barbour Bibb Blount Bullock Butler Chambers Cherokee Chilton
Choctaw Clarke Clay Coffee Colbert County Covington Crenshaw Cullman Dale
Dallas Decatur Dekalb Elmore Etowah Fayette Franklin Geneva Greene Hale
Heflin Henry Houston Huntsville Jackson Lamar Lauderdale Lawrence Limestone
Lowndes Macon Madison Marengo Marion Marshall Monroe Montgomery Morgan
Perry Pickens Pike Rainsville Randolph Russell Sumter Sylacauga Talladega
Tallapoosa Tuscaloosa Walker Washington Wilcox WinstonInstallationThis package can be installed or just run directly from the repo folder. Beyond Python 3+, the only package requirement [email protected]:zrsmithson/alvacc.gitInstall from repo folderpip install .Future workI've only dealt with my own configuration, so if there are any issues running the program, submit an issue or a PR. I'm sure something will change on the website side, so let me know if you come across any issues.There was some work on watching the entire month instead of just the next avilable time, but I found it was easiest just to watch next available appointment slots. With all the walk-in clinics, I'm pretty sure people are setting up their appointments then getting it somewhere else.Simplification can definitely be done between configuration, locations, etc. This would be useful depending on interest in using this package for other purposes than the script.
|
alvadescpy
|
alvaDescPy: A Python wrapper for alvaDesc softwarealvaDescPy provides a Python wrapper for thealvaDescmolecular descriptor calculation software. It was created to allow direct access to the alvaDesc command-line interface via Python.InstallationInstallation via pip:$ pip install alvadescpyInstallation via cloned repository:$ git clone https://github.com/ecrl/alvadescpy
$ cd alvadescpy
$ pip install .There are currently no additional dependencies for alvaDescPy, however it requires a valid, licensed installation ofalvaDesc.Basic UsagealvaDescPy assumes alvaDesc's command-line interface is located at your OS's default location. If alvaDesc is located in a different location, you can change the path:fromalvadescpyimportCONFIGCONFIG['alvadesc_path']='\\path\\to\\alvaDescCLI'alvaDescPy provides direct access to all alvaDesc command line arguments via the "alvadesc" function:fromalvadescpyimportalvadesc# providing an XML script filealvadesc(script='my_script.xml')# supplying a SMILES string returns a list of descriptorsdescriptors=alvadesc(ismiles='CCC',descriptors='ALL')# a Python dictionary is returned if labels are desireddescriptors=alvadesc(ismiles='CCC',descriptors='ALL',labels=True)# specific descriptors can be calculateddescriptors=alvadesc(ismiles='CCC',descriptors=['MW','AMW'],labels=True)# input/output files (and input type) can be specifiedalvadesc(input_file='mols.mdl',inputtype='MDL',descriptors='ALL',output='descriptors.txt')# various fingerprints can be calculatedecfp=alvadesc(ismiles='CCC',ecfp=True)pfp=alvadesc(ismiles='CCC',pfp=True)maccsfp=alvadesc(ismiles='CCC',maccsfp=True)# fingerprint hash size, min/max fragment length, bits/pattern and other# options can be specifiedecfp=alvadesc(ismiles='CCC',ecfp=True,fpsize=2048,fpmin=1,fpmax=4,bits=4,fpoptions='- Additional Options -')# alvaDesc uses a number of threads equal to the maximum number of CPUs, but# can be changeddescriptors=alvadesc(ismiles='CCC',descriptors='ALL',threads=4)alvaDescPy also provides the "smiles_to_descriptors" function:fromalvadescpyimportsmiles_to_descriptors# returns a list of descriptor valuesdescriptors=smiles_to_descriptors('CCC',descriptors='ALL')# returns a dictionary of descriptor labels, valuesdescriptors=smiles_to_descriptors('CCC',descriptors='ALL',labels=True)# returns a dictionary containing MW, AMW labels, valuesdescriptors=smiles_to_descriptors('CCC',descriptors=['MW','AMW'],labels=True)Contributing, Reporting Issues and Other SupportTo contribute to alvaDescPy, make a pull request. Contributions should include tests for new features added, as well as extensive documentation.To report problems with the software or feature requests, file an issue. When reporting problems, include information such as error messages, your OS/environment and Python version.For additional support/questions, contact Travis Kessler ([email protected]).
|