package | package-description
---|---
aniwrapper
|
A simple Python API wrapper. API: Here.

How to use

This API has many endpoints and the wrapper uses them all. For example, if you want to generate a 'hug' image you can use:

```python
from aniwrapper import anime
anime.hug(link=True)   # For generating a link of the image
```

OR

```python
from aniwrapper import anime
anime.hug(link=False)  # For generating the image
```

However, it also has NSFW endpoints:

```python
from aniwrapper import nsfw
nsfw.hentai(link=True)  # Will generate a nsfw image
```

If you have any problems regarding this package, mail me: [email protected].
|
anixart
|
Anixart API wrapper

Description: A wrapper for using the Anixart API. The library was created solely for familiarization with the API. The author despises and does not support the creation of auto-registered accounts, stat-inflating bots, or spam bots. All documentation is in the docs folder.

License

Copyright (c) 2022 Maxim Khomutov
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without limitation in the rights to use, copy, modify, merge, publish, and/or distribute copies of the Software in an educational or personal context, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
Permission is granted to sell and/or distribute copies of the Software in a commercial context, subject to the following conditions:
- Substantial changes: adding, removing, or modifying large parts, shall be developed in the Software. Reorganizing logic in the software does not warrant a substantial change.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
anjana-distributions
|
No description available on PyPI.
|
anjani
|
Anjani

Can be found on Telegram as Anjani. Anjani is a modern, easy-to-develop, fully async group-managing bot for Telegram.

Requirements

- Python 3.9 or higher.
- Telegram API key.
- Telegram Bot Token.
- MongoDB Database.

Features

- Easy to develop with object-oriented models.
- Fully asynchronous with async / await.
- Type-hinted methods making it easy to create plugins.
- Localization support.
- Class-based plugin system.

Documentation

Installing Plugin: If you want to make your custom plugins, refer to Anjani's Plugins Guide.

Credits

Marie, Pyrobud, and all contributors 👥
|
anji-common-addons
|
Common addons for AnjiProject. Free software: MIT license. Features: Git addon, Odoo addon, Makefile addon.
|
anji-core
|
Core for AnjiProject, a framework built on top of ErrBot that provides a simple way to control many servers using ChatOps. Free software: MIT license. Features: uses RethinkDB as a data store; executes tasks on workers; cron tasks; event collector; much more…
|
anjie
|
This Python library provides corpora in English and various local African languages (e.g. Yoruba, Hausa, Pidgin); it also does sentiment analysis on brands.

USAGE

Brand Sentiment Analysis

- brand = the name of the brand you would like to perform sentiment analysis on, e.g. "MTN"
- csvFileName = the name of the csv file you would like to save your output to; default is brandNews.csv. (optional parameter)

```python
from anjie import brandSentimentAnalysis
brandSentimentAnalysis.anjie_brands(brand = "MTN", csvFileName = 'brandNews')

import pandas as pd
df = pd.read_csv("brandNews.csv.csv")
```

Scraping English Corpus

- noRows = the number of rows of news you want.
- csvFileName = the name of the csv file you would like to save your output to; default is news.csv. (optional parameter)
- News categories include ['news', 'sports', 'metro-plus', 'politics', 'business', 'entertainment', 'editorial', 'columnist']
- removeCategories = [] : a parameter for news categories you don't want in the scraped corpus. (optional parameter)
  e.g. englishCorpus.scrape(noRows = 150, removeCategories = ['metro-plus', 'politics'])
- pass onlyCategories = [] : a parameter for only the categories you want in the scraped corpus. (optional parameter)
  e.g. englishCorpus.scrape(noRows = 150, onlyCategories = ['news', 'sports', 'metro-plus', 'entertainment', 'editorial', 'columnist'])

```python
from anjie import englishCorpus
englishCorpus.scrape(noRows = 150)
df = pd.read_csv("news.csv")
```

Scraping Hausa Corpus

- noRows = the number of rows of news you want; only 60 rows of Hausa corpus are currently available.
- csvName = the name of the csv file you would like to save your output to; default is hausa_news.csv. (optional parameter)

```python
from anjie import hausaCorpus
hausaCorpus.scrape(noRows = 10)

import pandas as pd
df = pd.read_csv("hausa_news.csv")
```

Scraping Pidgin English Corpus

- noRows = the number of rows of news you want.
- csvFileName = the name of the csv file you would like to save your output to; default is pidgin_corpus.csv. (optional parameter)
- News categories include ['nigeria', 'africa', 'sport', 'entertainment']
- removeCategories = [] : a parameter for news categories you don't want in the scraped corpus. (optional parameter)
  e.g. englishCorpus.scrape(noRows = 150, removeCategories = ['entertainment'])
- pass onlyCategories = [] : a parameter for only the categories you want in the scraped corpus. (optional parameter)
  e.g. englishCorpus.scrape(noRows = 150, onlyCategories = ['nigeria', 'sport', 'entertainment'])

```python
from anjie import pidginCorpus
pidginCorpus.scrape(noRows = 20)
df = pd.read_csv("pidgin_corpus.csv")
```

Scraping Yoruba Corpus

- noRows = the number of rows of news you want.
- csvFileName = the name of the csv file you would like to save your output to; default is yoruba_corpus.csv. (optional parameter)

```python
from anjie import yorubaCorpus
yorubaCorpus.scrape(noRows = 20)
df = pd.read_csv("yoruba_corpus.csv")
```

GitHub link for project: https://github.com/Free-tek/Anjie_local_language_corpus_generator
|
anjiepackone
|
Failed to fetch description. HTTP Status Code: 404
|
anji-orm
|
anji_orm: a simple ORM for RethinkDB.

Installation

anji_orm is available as a Python library on PyPI. Installation is very simple using pip:

```
$ pip install anji_orm
```

This will install anji_orm as well as its external dependencies.

Basic usage

The ORM registry should be initialized before usage:

```python
# For sync usage
register.init(dict(db='test'))
register.load()

# Or for async usage
register.init(dict(db='test'), async_mode=True)
await register.async_load()
```

After that, create some model:

```python
class T1(Model):
    _table = 't2'

    a1 = StringField()
    a2 = StringField()

t2 = T1(a1='b', a2='c')
t2.send()

# or for async usage
await t2.async_send()
```
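The declarative field syntax in the model above can be approximated in plain Python with descriptors. The sketch below is illustrative only: it mimics the `StringField`/`Model` declaration style but does not talk to RethinkDB, and the type validation shown is an assumption, not anji_orm's actual behavior.

```python
# Minimal sketch of a declarative-field model (illustrative, not anji_orm).
class StringField:
    """Descriptor that stores a per-instance string value."""

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        # Hypothetical validation: only strings (or None) are accepted.
        if value is not None and not isinstance(value, str):
            raise TypeError(f"{self.name} must be a string")
        obj.__dict__[self.name] = value


class Model:
    """Base class that accepts field values as keyword arguments."""

    def __init__(self, **fields):
        for key, value in fields.items():
            setattr(self, key, value)


class T1(Model):
    _table = 't2'

    a1 = StringField()
    a2 = StringField()
```

With this sketch, `T1(a1='b', a2='c')` constructs an instance just like the example above, and assigning a non-string value to a field raises `TypeError`.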
|
ank
|
Copyright 2016 Nhat Vo Van (a.k.a Sunary) and contributors (see CONTRIBUTORS.txt)

Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the
“Software”), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Description: Python streaming system, REST API, and scheduled tasks using message queues
Keywords: ank, streaming, microservice, pipeline, schedule task
Platform: any
|
ankamantatra
|
🤔 ankamantatra

Techzara WCC2 final week. A simple CLI quiz game. The name ankamantatra is a Malagasy word that means riddle.

The user can play within a specific category or mix them all.
A game session consists of 4 questions, each of different type.
At the end of a session, the user is prompted whether they want to play again or not.

⚒️ Installation

To install from PyPI, type in the terminal:

```
pip install ankamantatra
```

Or you can clone this repository and install it manually using poetry, a tool for dependency management and packaging in Python, by following these steps:

```
git clone https://github.com/twisty-team/ankamantatra.git
pip install poetry
# in the project root directory
poetry build && poetry install
```

In some cases you may get a KeyringLocked error that you can bypass by typing:

```
export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring
```

🏃 How to run

If you installed the package with pip, you can run the game by typing in the terminal:

```
ankamantatra
```

If you installed it manually using poetry, you can run the game by typing:

```
poetry run python -m ankamantatra
```

▶ Usage

Usage: ankamantatra [OPTIONS] COMMAND [ARGS]...
A simple quizz game CLI
Options:
--version Show the version and exit.
--help Show this message and exit.
Commands:
list List all available questions to play with.
  play Use to play quiz game

Usage: python -m ankamantatra play [OPTIONS]
Use to play quiz game
Options:
-c, --categorie TEXT Specify Quiz categorie
  --help                Show this message and exit.

Usage: python -m ankamantatra list [OPTIONS]
List all available questions to play with.
Options:
-c, --category TEXT Filter by TEXT
-sa, --show-answer
-sc, --show-category
--category-only Show only the categories and hide questions
  --help               Show this message and exit.

🚀 Features

- Play quiz
- List questions or categories

Authors

- tbgracy
- rhja
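The session structure described above (4 questions, each of a different type) can be sketched in a few lines. This is an illustrative sketch, not ankamantatra's actual code, and the `question_bank` shape is a hypothetical structure assumed for the example.

```python
import random


def build_session(question_bank, seed=None):
    """Build a quiz session with one question per category.

    question_bank is assumed (hypothetically) to map a category/type name
    to a list of available questions; with 4 categories this yields the
    4-question session described above.
    """
    rng = random.Random(seed)
    return [rng.choice(questions) for questions in question_bank.values()]
```

With a bank containing four question types, the result is one session of four questions, one per type.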
|
ankdown
|
Ankdown

A simple way to write Anki decks in Markdown.

What This Is

Anki is awesome, in many ways.
However, its card editor is... a little bit uncomfortable.
I really wanted to write Anki cards in Markdown. So I made
a tool to convert Markdown (+ standard MathJAX math notation)
into Anki decks that can be easily imported. This way, it's
possible to use any fancy markdown (and MathJAX) tools to build
your decks.

How to use it

NOTE: This program requires Python 3, along with the
packages in requirements.txt.

Installing

Ankdown can be installed by doing `pip3 install --user ankdown`.

Writing Cards

Cards are written in the following format:

Expected Value of \(f(x)\)
%
\[\mathbb{E}[f(x)] = \sum_x p(x)f(x)\]
%
math, probability
---
Variance of \(f(x)\)
%
\[\text{Var}(f(x)) = \mathbb{E}[(f(x) - \mathbb{E}[f(x)])^2]\]

Each of the solitary % signs is a field separator: the first
field is the front of the card, the second field is
the back of the card, and subsequent fields can contain whatever
you want them to (all fields after the second are optional). `---` markers represent a card boundary.

The tool needs these separators to be alone on their own lines,
and most markdown editors will work better if you separate them from
other text with empty lines, so that they're treated as their own
paragraphs by the editor.

Running Ankdown

Method A: manually

To compile your cards, put them in markdown files with .md extensions,
inside of a directory that has the name of the deck you'd like to put
the cards into. Then, run `ankdown -r [directory] -p [package filename]`. You can then import the package using the Anki import tool.

Method B: via the add-on

Once you've installed ankdown, it can be a hassle to run it on all
of your decks over and over again. There is anankdownAnki add-onthat you
can use to make this process simpler: If you put all of your decks
in one megadirectory (mine is in ~/Flashcards), you can re-import
your decks in one swell foop by going to Tools > Reload Markdown Decks (or using the operating-system-dependent keybinding).

Gotchas

Ankdown has an unusually large number of known issues; my preferred method
of discussing them is via github ticket.

Multiple Decks

Ankdown uses Genanki as a backend, which doesn't (as of this writing) handle
multiple decks in a single package very well. If you point ankdown at a
directory with multiple decks in subdirectories, it will do its best, and
your cards will all be added to the package, but they won't be assigned
to the correct decks. The ankdown plugin solves this problem by running
the executable on each deck individually, and then importing all the
resulting packages.

Intentional feature removals

There used to be other ways to run ankdown, but they were slowly making
the code worse and worse as I tried to keep them all operational. If there's
a particular method of operating ankdown that you used and miss, let me know
in a github issue.

Math separators

Unfortunately, $ and $$ as math separators were not chosen by the anki
developers for the desktop client's MathJax display, and so in order for math
to work in both web and desktop, it became much simpler to use \( \) and \[ \]. These separators should be configurable in most markdown editors
(e.g. I use the VSCode Markdown+Math plugin). Older decks that were built
for ankdown need to be modified to use the new separators.

Media references

Ankdown should work with media references that result in src="" appearing
somewhere in the generated html (mainly images). If you need it to work with
other media types (like sounds), let me know in a github issue and I may make
time to fix this.

Updating Cards

When you want to modify a card, just run your deck through the above
process after changing the markdown file. Anki should notice, and update
the card. This is done by giving the cards in your deck unique IDs based on
their filename and index in the file.

This is the most robust solution I could come up with, but it has some downsides:

- It's not possible to automatically remove cards from your anki decks, since
  the anki package importer never deletes cards.
- If you delete a card from a markdown file, ankdown will give all of its
successors off-by-one ID numbers, and so if they were different in important
ways (like how much you needed to study them), anki will get confused.
The best way to deal with this is to give each card its own markdown file.

General code quality

Lastly, the catch-all disclaimer: this is, as they say, alpha-quality software.
I wrote this program (and the add-on) to work for me; it's pretty likely that
you'll hit bugs in proportion to how different your desires are from mine. That
said, I want it to be useful for other people as well; please submit github
tickets if you do run into problems!
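The `%`/`---` card format described above is simple enough to parse with stdlib Python alone. The following is an illustrative sketch of the idea, not ankdown's actual parser (which also handles Markdown and MathJax rendering and ID assignment):

```python
def parse_cards(text):
    """Split ankdown-style text into cards.

    A solitary "%" line separates fields within a card; a solitary "---"
    line separates cards. Returns a list of cards, each a list of field
    strings (front, back, extras...). Illustrative sketch only.
    """
    cards, fields, current = [], [], []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped == "%":
            fields.append("\n".join(current).strip())
            current = []
        elif stripped == "---":
            fields.append("\n".join(current).strip())
            cards.append(fields)
            fields, current = [], []
        else:
            current.append(line)
    # Flush the final card (no trailing "---" required).
    if current or fields:
        fields.append("\n".join(current).strip())
        cards.append(fields)
    return cards
```

For the two-card example above, this yields one card with three fields (front, back, tags) and one with two.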
|
anker
|
Failed to fetch description. HTTP Status Code: 404
|
ank-gauss-distribution
|
No description available on PyPI.
|
ankgg
|
No description available on PyPI.
|
ankh
|
Ankh ☥: Optimized Protein Language Model Unlocks General-Purpose Modelling

Ankh is the first general-purpose protein language model trained on Google's TPU-V4, surpassing state-of-the-art performance with dramatically fewer parameters, promoting accessibility to research innovation via attainable resources.

This repository will be updated regularly with new pre-trained models for proteins as part of supporting the biotech community in revolutionizing protein engineering using AI.

Table of Contents: Installation, Models Availability, Dataset Availability, Usage, Original downstream Predictions, Followup use-cases, Comparisons to other tools, Community and Contributions, Have a question?, Found a bug?, Requirements, Sponsors, Team, License, Citation

Installation

```
python -m pip install ankh
```

Models Availability

| Model | ankh | Hugging Face |
|---|---|---|
| Ankh Large | ankh.load_large_model() | Ankh Large |
| Ankh Base | ankh.load_base_model() | Ankh Base |

Datasets Availability

| Dataset | Hugging Face |
|---|---|
| Remote Homology | load_dataset("proteinea/remote_homology") |
| CASP12 | load_dataset("proteinea/secondary_structure_prediction", data_files={'test': ['CASP12.csv']}) |
| CASP14 | load_dataset("proteinea/secondary_structure_prediction", data_files={'test': ['CASP14.csv']}) |
| CB513 | load_dataset("proteinea/secondary_structure_prediction", data_files={'test': ['CB513.csv']}) |
| TS115 | load_dataset("proteinea/secondary_structure_prediction", data_files={'test': ['TS115.csv']}) |
| DeepLoc | load_dataset("proteinea/deeploc") |
| Fluorescence | load_dataset("proteinea/fluorescence") |
| Solubility | load_dataset("proteinea/solubility") |
| Nearest Neighbor Search | load_dataset("proteinea/nearest_neighbor_search") |

Usage

Loading pre-trained models:

```python
import ankh

# To load large model:
model, tokenizer = ankh.load_large_model()
model.eval()

# To load base model:
model, tokenizer = ankh.load_base_model()
model.eval()
```

Feature extraction using ankh large example:

```python
import torch

model, tokenizer = ankh.load_large_model()
model.eval()

protein_sequences = [
    'MKALCLLLLPVLGLLVSSKTLCSMEEAINERIQEVAGSLIFRAISSIGLECQSVTSRGDLATCPRGFAVTGCTCGSACGSWDVRAETTCHCQCAGMDWTGARCCRVQPLEHHHHHH',
    'GSHMSLFDFFKNKGSAATATDRLKLILAKERTLNLPYMEEMRKEIIAVIQKYTKSSDIHFKTLDSNQSVETIEVEIILPR',
]
protein_sequences = [list(seq) for seq in protein_sequences]

outputs = tokenizer.batch_encode_plus(protein_sequences,
                                      add_special_tokens=True,
                                      padding=True,
                                      is_split_into_words=True,
                                      return_tensors="pt")

with torch.no_grad():
    embeddings = model(input_ids=outputs['input_ids'],
                       attention_mask=outputs['attention_mask'])
```

Loading downstream models example:

```python
# To use downstream model for binary classification:
binary_classification_model = ankh.ConvBertForBinaryClassification(
    input_dim=768, nhead=4, hidden_dim=384, num_hidden_layers=1,
    num_layers=1, kernel_size=7, dropout=0.2, pooling='max')

# To use downstream model for multiclass classification:
multiclass_classification_model = ankh.ConvBertForMultiClassClassification(
    num_tokens=2, input_dim=768, nhead=4, hidden_dim=384,
    num_hidden_layers=1, num_layers=1, kernel_size=7, dropout=0.2)

# To use downstream model for regression:
# training_labels_mean is an optional parameter used to fill the output
# layer's bias; it's useful for faster convergence.
regression_model = ankh.ConvBertForRegression(
    input_dim=768, nhead=4, hidden_dim=384, num_hidden_layers=1,
    num_layers=1, kernel_size=7, dropout=0, pooling='max',
    training_labels_mean=0.38145)
```

Original downstream Predictions

Secondary Structure Prediction (Q3):

| Model | CASP12 | CASP14 (HARD) | TS115 | CB513 |
|---|---|---|---|---|
| Ankh Large | 83.59% | 77.48% | 88.22% | 88.48% |
| Ankh Base | 80.81% | 76.67% | 86.92% | 86.94% |
| ProtT5-XL-UniRef50 | 83.34% | 75.09% | 86.82% | 86.64% |
| ESM2-15B | 83.16% | 76.56% | 87.50% | 87.35% |
| ESM2-3B | 83.14% | 76.75% | 87.50% | 87.44% |
| ESM2-650M | 82.43% | 76.97% | 87.22% | 87.18% |
| ESM-1b | 79.45% | 75.39% | 85.02% | 84.31% |

Secondary Structure Prediction (Q8):

| Model | CASP12 | CASP14 (HARD) | TS115 | CB513 |
|---|---|---|---|---|
| Ankh Large | 71.69% | 63.17% | 79.10% | 78.45% |
| Ankh Base | 68.85% | 62.33% | 77.08% | 75.83% |
| ProtT5-XL-UniRef50 | 70.47% | 59.71% | 76.91% | 74.81% |
| ESM2-15B | 71.17% | 61.81% | 77.67% | 75.88% |
| ESM2-3B | 71.69% | 61.52% | 77.62% | 75.95% |
| ESM2-650M | 70.50% | 62.10% | 77.68% | 75.89% |
| ESM-1b | 66.02% | 60.34% | 73.82% | 71.55% |

Contact Prediction Long Precision Using Embeddings:

| Model | ProteinNet (L/1) | ProteinNet (L/5) | CASP14 (L/1) | CASP14 (L/5) |
|---|---|---|---|---|
| Ankh Large | 48.93% | 73.49% | 16.01% | 29.91% |
| Ankh Base | 43.21% | 66.63% | 13.50% | 28.65% |
| ProtT5-XL-UniRef50 | 44.74% | 68.95% | 11.95% | 24.45% |
| ESM2-15B | 31.62% | 52.97% | 14.44% | 26.61% |
| ESM2-3B | 30.24% | 51.34% | 12.20% | 21.91% |
| ESM2-650M | 29.36% | 50.74% | 13.71% | 22.25% |
| ESM-1b | 29.25% | 50.69% | 10.18% | 18.08% |

Contact Prediction Long Precision Using attention scores:

| Model | ProteinNet (L/1) | ProteinNet (L/5) | CASP14 (L/1) | CASP14 (L/5) |
|---|---|---|---|---|
| Ankh Large | 31.44% | 55.58% | 11.05% | 20.74% |
| Ankh Base | 25.93% | 46.28% | 9.32% | 19.51% |
| ProtT5-XL-UniRef50 | 30.85% | 51.90% | 8.60% | 16.09% |
| ESM2-15B | 33.32% | 57.44% | 12.25% | 24.60% |
| ESM2-3B | 33.92% | 56.63% | 12.17% | 21.36% |
| ESM2-650M | 31.87% | 54.63% | 10.66% | 21.01% |
| ESM-1b | 25.30% | 42.03% | 7.77% | 15.77% |

Localization (Q10):

| Model | DeepLoc Dataset |
|---|---|
| Ankh Large | 83.01% |
| Ankh Base | 81.38% |
| ProtT5-XL-UniRef50 | 82.95% |
| ESM2-15B | 81.22% |
| ESM2-3B | 81.22% |
| ESM2-650M | 82.08% |
| ESM-1b | 80.51% |

Remote Homology:

| Model | SCOPe (Fold) |
|---|---|
| Ankh Large | 61.01% |
| Ankh Base | 61.14% |
| ProtT5-XL-UniRef50 | 59.38% |
| ESM2-15B | 54.48% |
| ESM2-3B | 59.24% |
| ESM2-650M | 51.36% |
| ESM-1b | 56.93% |

Solubility:

| Model | Solubility |
|---|---|
| Ankh Large | 76.41% |
| Ankh Base | 76.36% |
| ProtT5-XL-UniRef50 | 76.26% |
| ESM2-15B | 60.52% |
| ESM2-3B | 74.91% |
| ESM2-650M | 74.56% |
| ESM-1b | 74.91% |

Fluorescence (Spearman Correlation):

| Model | Fluorescence |
|---|---|
| Ankh Large | 0.62 |
| Ankh Base | 0.62 |
| ProtT5-XL-UniRef50 | 0.61 |
| ESM2-15B | 0.56 |
| ESM-1b | 0.48 |
| ESM2-650M | 0.48 |
| ESM2-3B | 0.46 |

Nearest Neighbor Search using Global Pooling:

| Model | Lookup69K (C) | Lookup69K (A) | Lookup69K (T) | Lookup69K (H) |
|---|---|---|---|---|
| Ankh Large | 0.83 | 0.72 | 0.60 | 0.70 |
| Ankh Base | 0.85 | 0.77 | 0.63 | 0.72 |
| ProtT5-XL-UniRef50 | 0.83 | 0.69 | 0.57 | 0.73 |
| ESM2-15B | 0.78 | 0.63 | 0.52 | 0.67 |
| ESM2-3B | 0.79 | 0.65 | 0.53 | 0.64 |
| ESM2-650M | 0.72 | 0.56 | 0.40 | 0.53 |
| ESM-1b | 0.78 | 0.65 | 0.51 | 0.63 |

Team

- Technical University of Munich: Ahmed Elnaggar, Burkhard Rost
- Proteinea: Hazem Essam, Wafaa Ashraf, Walid Moustafa, Mohamed Elkerdawy
- University of Columbia: Charlotte Rochereau

Sponsors

- Google Cloud

License

Ankh pretrained models are released under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.

Community and Contributions

The Ankh project is an open source project supported by various partner companies and research institutions. We are committed to sharing all our pre-trained models and knowledge. We are more than happy if you could help us by sharing new pretrained models, fixing bugs, proposing new features, improving our documentation, spreading the word, or supporting our project.

Have a question?

We are happy to hear your questions on our issues page, Ankh! Obviously, if you have a private question or want to cooperate with us, you can always reach out to us directly via Hello.

Found a bug?

Feel free to file a new issue with a respective title and description on the Ankh repository. If you already found a solution to your problem, we would love to review your pull request!

✏️ Citation

If you use this code or our pretrained models for your publication, please cite the original paper:

```
@article{elnaggar2023ankh,
  title={Ankh: Optimized Protein Language Model Unlocks General-Purpose Modelling},
  author={Elnaggar, Ahmed and Essam, Hazem and Salah-Eldin, Wafaa and Moustafa, Walid and Elkerdawy, Mohamed and Rochereau, Charlotte and Rost, Burkhard},
  journal={arXiv preprint arXiv:2301.06568},
  year={2023}
}
```
|
anki
|
Please see https://apps.ankiweb.net
|
ankiapp-easy-deck
|
Overview

Easy way to make an AnkiApp deck.

Installation

To install ankiapp_easy_deck, you can use pip. Open your terminal and run:

```
pip install ankiapp_easy_deck
```

License

This project is licensed under the MIT License.

Links

- Download
- Source

Credits

Author: Johannes
Email: [email protected]

Thank you for using ankiapp_easy_deck!
|
ankicc
|
ankicc

Converts all fields of an Anki deck package (*.apkg) between Traditional and Simplified Chinese.

Installation

pip install ankicc

Usage

usage: ankicc [-h] --apkg_path APKG_PATH [--workspace WORKSPACE] --output_path OUTPUT_PATH
[--convertor {t2jp.json,t2tw.json,hk2t.json,tw2s.json,hk2s.json,s2hk.json,tw2t.json,t2s.json,s2tw.json,s2twp.json,t2hk.json,s2t.json,jp2t.json,tw2sp.json}]
optional arguments:
-h, --help show this help message and exit
--apkg_path APKG_PATH
--workspace WORKSPACE
--output_path OUTPUT_PATH
  --convertor {t2jp.json,t2tw.json,hk2t.json,tw2s.json,hk2s.json,s2hk.json,tw2t.json,t2s.json,s2tw.json,s2twp.json,t2hk.json,s2t.json,jp2t.json,tw2sp.json}

- apkg_path: path of the apkg file to convert
- workspace: ankicc working directory; defaults to the current working directory. Files produced during conversion are kept there (note: they are not deleted automatically after conversion)
- output_path: path of the converted output file
- convertor: OpenCC converter configuration; defaults to Simplified-to-Traditional (s2t.json). For other converter configurations, see OpenCC #Configurations

Third Party Libraries

- OpenCC: Apache-2.0 License
- AnkiPandas: MIT License
|
anki-cli-unofficial
|
Anki CLI

CLI to automate Anki notes/flashcards creation. This project is not part of the official Anki project.

Note: The code was tested using Python 3.8 and Anki 2.1.35.

Installation

```
$ pip3 install aqt==2.1.44  # Must be compatible with your Anki desktop installation
$ pip3 install anki-cli-unofficial
```

Usage

The CLI supports a single command, load.

```
$ anki-cli-unofficial load -h
usage: anki-cli-unofficial load [-h] [--anki-dir ANKI_DIR] [--media-dir MEDIA_DIR] [--deck DECK] input_file output_file

positional arguments:
  input_file            YAML file containing the flashcards to create
  output_file           Anki generated archive file path

optional arguments:
  -h, --help            show this help message and exit
  --anki-dir ANKI_DIR   Anki directory (Default to a temp directory)
  --media-dir MEDIA_DIR local directory containing medias referenced in input_file
  --deck DECK           deck name in which to create flashcards
```

By default, the CLI will create a sandbox Anki environment. This temporary directory will be deleted by your OS after a few days. You can specify an existing Anki directory using the option --anki-dir, but I strongly recommend that you don't run this program on your main Anki directory. This code may become outdated, and I don't want bugs to damage your precious flashcards.

The command load expects a YAML file as input. This file must use this format:

```yaml
# File cards.yaml
# An array of documents
- type: Basic        # The type of the note to create ("Basic", "Basic (with reverse card)", etc.). Also known as the model.
  tags: [tag1, tag2] # An optional list of tags
  fields:            # The ordered list of fields for the selected note type (ex: Basic notes require two fields: Front & Back)
    Front: Bonjour
    Back: Hello
```

The command load also expects the filename of the generated Anki package (usually ends with .apkg). The file doesn't have to exist. The CLI will create it in your current directory.

To generate the flashcards:

```
$ anki-cli-unofficial load cards.yaml archive.apkg
```
📂 Opening Anki collection...
🔍 Loading 'cards.yaml' into the deck 'Default'...
💾 Saving Anki collection...
👍 Done
👉 Anki collection can be opened using the following command: open /Applications/Anki.app --args -b /var/folders/5f/9sp_9nk17jjdtw7y9t9rydr80000gn/T/tmpn8rl4l2w
👉 Anki archive is available here: ./archive.apkg

After completion, you have the option to visualize the generated flashcards by opening Anki on the sandbox directory. The command differs depending on your OS and is therefore displayed by the CLI. If everything looks fine, you can close Anki and reopen it without any option like usual to import the packaged Anki deck generated by the CLI. That's it!

Advanced Uses

Multilingual Support

The CLI was tested using the English locale. You may still use it with a different locale, but minor adjustments are required.

Ex: (French)

# Create an Anki home with the French locale
$ anki -l fr -b ./ankidir
# Make sure your input file follows the locale naming conventions
$ echo cards.yml
- type: Basique # "Basic" translation in French
tags: [idiom]
fields: # Field names must be translated:
Recto: 'Avoir la banane! <small>idiom</small>'
Verso: 'To feel great. (literally: <em>to have the banana<em>)'
# Use the option --deck to override the default deck name
$ anki-cli-unofficial load cards.yaml \
--deck="Par défaut" \
--anki-dir="./ankidir/Utilisateur 1/" \
archive.apkg
# Open Anki to check generated cards
$ anki -b ankidir

Medias

Medias are supported using the usual Anki syntax:

- Include [sound:file.mp3] inside a field to add sounds to your flashcard.
- Include <img src="file.jpg"> inside a field to add images to your flashcard.

Ex:

```yaml
- type: Basic
  fields:
    Front: '<img src="car.jpg" />'
    Back: '[sound:voiture.mp3] Une voiture'
# This card will show the picture of a car and print the translation
# with the pronunciation when the back card is revealed.
```

You must specify the local directory containing the media files. The CLI will copy these files into the Anki media database (missing files are ignored).

```
$ anki-cli-unofficial load --media-dir ~/anki-images cards.yaml
# Where ~/anki-images contains the files car.jpg and voiture.mp3
```

Custom Note Types

The CLI creates a fresh Anki directory to generate the flashcards (for safety reasons). Therefore, only default card types are supported. If you have custom note types, you may decide to launch the CLI directly on your Anki directory using the option --anki-dir. That would be a bad idea. Bugs happen. I don't want to damage your flashcards. A better alternative is to create a copy of your current directory.

Note: The Anki directory differs according to your OS (check the official documentation):

- Windows: %APPDATA%\Anki2
- MacOS: ~/Library/Application Support/Anki2
- Linux: ~/.local/share/Anki2

The CLI expects a user directory (~/Library/Application Support/Anki2/User 1 is valid but ~/Library/Application Support/Anki2/ is not).

Before running the CLI, make sure to back up this directory! (Create a zip archive using the file explorer or using the terminal.) Anki Desktop creates automatic backups by default, but they don't include medias.

Then, run the CLI on the copy of your Anki directory. Example (on MacOS):

```
$ mkdir $TMPDIR/Anki2
$ cp -R ~/Library/Application\ Support/Anki2/User\ 1 $TMPDIR/Anki2
$ anki-cli-unofficial load --anki-dir $TMPDIR/Anki2/User\ 1 cards.yaml archive.apkg
```

When running on an existing Anki directory, the CLI doesn't create the package file (to avoid exporting all of your flashcards). Therefore, you have to create the archive yourself:

1. Open Anki on the clone directory (the command is displayed in the output).
2. Use the menu Browser and select the newly created cards (using a special tag, for example).
3. Right-click and select Export.

Examples

You will find additional examples in the directory examples/.

Development

Test locally

```
$ cd anki-cli/
$ python3 setup.py install
# The binary anki-cli-unofficial is now present in $PATH
$ anki-cli-unofficial load --media-dir ./examples ./examples/french.yaml archive.apkg
```

Upload to PyPI

1. Create an API Token from the Web UI. (Edit your ~/.pypirc with the generated token.)
2. Install Twine: $ python3 -m pip install --user --upgrade twine
3. Upload the bundle: $ python3 -m twine upload dist/*

Note: The upload to PyPI is currently assured by GitHub Actions.

Release

1. Increase the version number in setup.py.
2. Commit and push.
3. Create a new tag in GitHub to trigger the CI pipeline.
|
anki-compressor
|
Anki Compressor

Compresses images and audio in Anki .apkg files to reduce the overall file size.

Installation

anki-compressor can be installed with pip, but it requires Pydub and Pillow, which have native dependencies that need to be installed. You'll need to include support for libvorbis in the audio library, since all audio is converted to ogg and all images are converted to jpg.

Once you've installed those dependencies, run pip install anki-compressor to install the command line script.

Usage

```
usage: anki-compressor [-h] -i INPUT [-o OUTPUT] [-q QUALITY] [-b BITRATE]

Compress Anki .apkg file size

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Input .apkg file to compress
  -o OUTPUT, --output OUTPUT
                        Output file to write, defaults to MIN_<INPUT>
  -q QUALITY, --quality QUALITY
                        Quality value for image compression (0-100), defaults to 50
  -b BITRATE, --bitrate BITRATE
                        ffmpeg-compliant bitrate value for audio compression, defaults to 48k
```

Here's an example of compressing a file input.apkg and writing the output to output.apkg:

```
anki-compressor -i input.apkg -o output.apkg -q 50 -b 64k
```

Arguments:

- -i: Specifies the input file and is required
- -o: Output file name, defaults to MIN_<INPUT>
- -q: Image quality on a scale of 1-100 supplied to Pillow's image processing, defaults to 50
- -b: Bitrate for audio output, defaults to '48k'
|
anki_deck_from_text
|
Purpose

The purpose of this tool is to generate an Anki deck from annotations in a text file. This was specially developed for those who use Anki to learn language vocabulary.

Input structure

- The input must be some sort of non-compressed text file (.txt, .md, etc.)
- Every line to be converted to an Anki card must start with a marker, such as `-`. Use the marker option to set a custom marker. Every other line will be ignored.
- The front and back of the cards are separated by a separator, such as `=`. Use the separator option to set a custom separator.

Example:

```
- die Katze = the cat
- das Haus = the house
```

Outside of these rules, you are free to populate your text file with other annotations, which will be ignored when creating the deck.

Current card types

Currently, the output deck will be populated with cards of one type at a time. The currently implemented types are:

- basic: The Basic card type in Anki. Each line's text is split between front and back of one card by the separator
- sound: Similar to the Basic (type in the answer) card type, but with an added empty field on the back of the card that can be filled up afterwards with (for example) sound files by using an add-on such as HyperTTS

Installation

Make sure you have Python installed (version >= 3.12) and then run in the terminal/command-line:

```
pip install anki_deck_from_text
```

How to run

Open a terminal/command-line instance and follow the general structure:

```
anki_deck_from_text file_name.md output_name amazing_deck_name
```

For all options run anki_deck_from_text --help. You will get the following documentation:

anki_deck_from_text [OPTIONS] INPUT OUTPUT DECK_NAME
Generate an Anki deck from annotations on a text file.
INPUT is the text file. OUTPUT is the desired name for the .apkg file with
the deck. DECK_NAME is the deck name that will be displayed in Anki.
Options:
--separator TEXT Character(s) that separate the text to be written to the
front and back of the cards [default: =]
--marker TEXT Character(s) marking this line to be included in the deck
[default: -]
--card_model TEXT Anki card model to build the deck with. Available options
are: `basic`, `sound` [default: basic]
  -h, --help         Show this message and exit.

Further development

Contributing

To contribute to this project:

1. Fork this project
2. Install Poetry
3. Install Nox (optional but recommended for automated tests and code formatting)
4. Change to the project directory and run $ poetry install

This should get your system setup to:

- Test that your changes didn't break the tool with $ nox or poetry run pytest
- Build with $ poetry build (optional)
- Test run with $ poetry run anki_deck_from_text ...

Once you're happy with your changes and tests:

5. Create a pull request to be reviewed

Extending the tool

Add card types

To add extra card types follow the instructions in the models.py file docstring and then update the current available card types both in the docstring of generate_deck.py and in the relevant section of this README. Refer to the Anki docs for how to design Anki card type structures.
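The marker/separator input format described above is simple enough to parse by hand; the following is an illustrative sketch, not the package's actual implementation:

```python
def parse_cards(text, marker="-", separator="="):
    """Collect (front, back) pairs from annotated text; other lines are ignored."""
    cards = []
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith(marker):
            continue  # lines without the marker are ignored
        body = line[len(marker):]
        if separator not in body:
            continue
        front, _, back = body.partition(separator)
        cards.append((front.strip(), back.strip()))
    return cards
```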
|
ankidmpy
|
ankidmpy

ankidmpy (pronounced "anki-dumpy") is a straightforward port of anki-dm to python. The original anki-dm is written in PHP and is a tool to work with the CrowdAnki plugin for the Anki spaced repetition memory app to facilitate collaborative building of flash card decks.

Overview

CrowdAnki also aims to facilitate collaboration by extracting all the details of an Anki deck into a single json file for easier editing. Building on this, anki-dm splits this single json file into several files: one containing the raw data, one each for template layout of the cards, one for css styling, etc., allowing each of them to be edited independently.

Reversing the process, you can build a CrowdAnki file from these edited files and in turn import these files back into Anki with the plug-in to be used for spaced repetition memorization.

Usage

The usage is nearly identical to the original anki-dm with only slight differences to accommodate standard arg parsing in python.

$ python -m ankidmpy --help
usage: anki-dm [-h] [--base BASE] [--templates] {init,import,build,copy,index} ...

This tool disassembles CrowdAnki decks into collections of files and
directories which are easy to maintain. It then allows you to create
variants of your deck via combining fields, templates and data that you
really need. You can also use this tool to create translations of your deck
by creating localized columns in data files.

positional arguments:
  {init,import,build,copy,index}
    init                Create a new deck from a template.
    import              Import a CrowdAnki deck to Anki-dm format
    build               Build Anki-dm deck into CrowdAnki format
    copy                Make reindexed copy of Anki-dm deck.
    index               Set guids for rows missing them.

optional arguments:
  -h, --help            show this help message and exit
  --base BASE           Path to the deck set directory. [Default: src]
  --templates           List all available templates.
$

There are several sub-commands which each take their own options. The --base switch applies to each of these sub-commands and must be supplied before the sub-command. This switch indicates the root directory to use when looking for or generating new files.

The --templates switch simply lists the sample CrowdAnki decks which can be built upon to generate new decks and doesn't require a sub-command.

Help for the sub-commands can be found by applying --help to the sub-command:

$ python -m ankidmpy init --help
usage: anki-dm init [-h] [--deck DECK] template

positional arguments:
  template     Template to use when creating the deck set.

optional arguments:
  -h, --help   show this help message and exit
  --deck DECK  Name of the default deck of the deck set being created. If not
               provided, then the original deck/template name will be used.
$

Building

ankidmpy is currently written in pure python with no dependencies. I've only tried it with python 3.7 so far but it may work in earlier versions.

You can run ankidmpy with python -m ankidmpy by pointing your PYTHONPATH at the src directory, or you can use poetry to build a wheel distribution like so:

$ poetry install
$ poetry build

Once you run poetry install you can also run ankidmpy using the poetry script like so:

$ poetry run anki-dm --help

See the poetry documentation for more details.
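The core disassembly idea, splitting one CrowdAnki json into independently editable pieces, can be sketched in a few lines. The key names and file layout below are purely illustrative; ankidmpy's real output format differs:

```python
import json

def split_deck(deck, keys=("notes", "note_models", "media_files")):
    """Map selected top-level keys of a CrowdAnki-style dict to
    their own JSON documents, one per hypothetical file name."""
    return {
        f"{key}.json": json.dumps(deck[key], indent=2)
        for key in keys
        if key in deck
    }
```

Rebuilding (the job of the build sub-command) would be the inverse: read each file back and merge the keys into a single deck dict.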
|
ankiepdf
|
No description available on PyPI.
|
anki-export
|
anki-export

Export your Anki *.apkg to Python. Read Anki *.apkg in Python.

Example

from anki_export import ApkgReader
import pyexcel_xlsxwx

with ApkgReader('test.apkg') as apkg:
    pyexcel_xlsxwx.save_data('test.xlsx', apkg.export(), config={'format': None})

See real running example at /__extras__/blank-install/to-xlsx.py.

Installation

$ pip install anki-export

Why?

- *.apkg is quite well structured, convincing me to use this format more.
- Allows you to use *.apkg programmatically in Python.
- Might be less buggy than https://github.com/patarapolw/AnkiTools

My other projects to create SRS flashcards outside Anki

- srs-sqlite - A simple SRS app using Markdown/HandsOnTable/SQLite
- jupyter-flashcards - Excel-powered. Editable in Excel. SRS-enabled.
- gflashcards - A simple app to make formatted flashcards from Google Sheets. SRS-not-yet-enabled.
- HanziLevelUp - A Hanzi learning suite, with levels based on Hanzi Level Project, aka. another attempt to clone WaniKani.com for Chinese. SRS-enabled.
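Part of why *.apkg is "quite well structured": it is a plain zip archive that typically contains an sqlite database named collection.anki2 plus a media-name mapping. Independent of anki-export's own API, you can peek inside with the standard library alone; a sketch:

```python
import sqlite3
import tempfile
import zipfile

def count_notes(apkg_path):
    """Extract collection.anki2 from an .apkg and count the rows in its notes table."""
    with zipfile.ZipFile(apkg_path) as z, tempfile.TemporaryDirectory() as tmp:
        db_path = z.extract("collection.anki2", tmp)
        conn = sqlite3.connect(db_path)
        try:
            (n,) = conn.execute("SELECT count(*) FROM notes").fetchone()
        finally:
            conn.close()
    return n
```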
|
ankify
|
No description available on PyPI.
|
ankify-roam
|
Ankify Roam

A command-line tool which brings flashcards created in Roam Research to Anki.

Main Features

- Create front/back and cloze deletion flashcards in Roam and import to Anki.
- Supports block references, images, and aliases.
- Include parent blocks on your Anki cards
- Make edits in Roam to flashcards you've already imported and sync the changes to Anki.
- Uses similar HTML syntax to Roam so you can style your Anki cards just like you do in Roam.
- Add color to or hide cloze deletion markup in Roam.

Contents: Main Features, Installation, Requirements, Basic Usage, Options, Customize Anki and Roam, Sync Automatically, Problems

Installation

pip install ankify_roam

Requirements

- Python >= 3.6
- Anki
- AnkiConnect (add-on for Anki)

Basic Usage

1. Ankify Roam

Ankify a block (i.e. flag it to go to Anki) by adding the #ankify tag to it. The tag must be included in the block itself; it cannot be inherited from its parents.

By default, the block will be converted into a front/back style Anki note with the block content on the front and its children on the back:

What is the capital of France? #ankify
Paris

If the block includes any cloze deletions, ankify_roam converts it to a cloze style Anki note. Add a cloze deletion by surrounding text in curly brackets:

{Paris} is the capital and most populous city of {France}, with an estimated population of {2,148,271} residents #ankify

In the example above, ankify_roam will add incremental cloze ids for each cloze deletion. But you can also explicitly define them (or a mixture of both). Here's an example showing what cloze markup in Roam becomes in Anki:

{Paris} is the capital and most populous city of {2:France}, with an estimated population of {2,148,271} residents #ankify
→ {{c1::Paris}} is the capital and most populous city of {{c2::France}}, with an estimated population of {{c3::2,148,271}} residents #ankify

Cloze ids matching the following patterns are all supported by ankify_roam: "c1:", "c1|", "1:"

2. Export Roam

Once you've tagged all the blocks to ankify, export your Roam:

1. Click on the "more options" button in the top right corner of Roam.
2. Select Export All > JSON > Export All to export your Roam graph.
3. Unzip the downloaded file.

3. Open Anki

Open Anki. Make sure you're on the profile you'd like to add the cards to and that you've installed the AnkiConnect add-on.

4. Create Roam note types (first time only)

Create 2 new note types in Anki: 'Roam Basic' and 'Roam Cloze'. These are the note types which your flashcards in Roam will be added as.

Steps to create a 'Roam Basic' note type:

1. Go to Tools > Manage Note Types and click on "Add"
2. Select the "Add: Basic" option, then click "OK"
3. Name it "Roam Basic"
4. With Roam Basic selected, click on "Fields..." and add a field called "uid"
5. With Roam Basic selected, click on "Cards..."
6. Replace the css in "Styling" with the contents of roam_basic.css
7. Click "Save"

Steps to create a 'Roam Cloze' note type:

1. Go to Tools > Manage Note Types and click on "Add"
2. Select the "Add: Cloze" option, then click "OK"
3. Name it "Roam Cloze"
4. With Roam Cloze selected, click on "Fields..." and add a field called "uid"
5. With Roam Cloze selected, click on "Cards..."
6. Replace the css in "Styling" with the contents of roam_cloze.css
7. Click "Save"

(You can also create your own note types, and have ankify_roam populate those. For details, see Create custom note types.)

5. Add the Roam export to Anki

ankify_roam add my_roam.json

(Replace "my_roam.json" with the filename of the json within the zip you downloaded in step 2)

Your flashcards should now be in Anki!

6. Repeat

Whenever you create new flashcards in Roam or edit the existing ones, repeat these same steps to update Anki with the changes.

Options

Roam Export Path

The path to your exported Roam graph can refer to the json, the zip containing the json, or the directory which the zip is in. When a directory is given, ankify_roam will find and add the latest export in it. In my case, all 3 of these commands do the same thing:

ankify_roam add my_roam.json
ankify_roam add Roam-Export-1592525007321.zip
ankify_roam add ~/Downloads

Choose a different ankify tag

To use a tag other than #ankify to flag flashcards, pass the tag name to --tag-ankify:

ankify_roam add --tag-ankify=flashcard my_roam.json

... and if there are some blocks which include the #flashcard tag but you actually don't want ankify_roam to ankify, add another tag (eg. #not-a-flashcard) and then tell ankify_roam by passing it to --tag-dont-ankify:

ankify_roam add --tag-ankify=flashcard --tag-dont-ankify=not-a-flashcard my_roam.json

Change the default deck and note types

To import your flashcards to different note types than the default 'Roam Basic' and 'Roam Cloze', pass the note type names to --note-basic and --note-cloze (see Create custom note types for details):

ankify_roam add --note-basic="My Basic" --note-cloze="My Cloze" my_roam.json

To import your flashcards to a different deck than "Default", pass the deck name to --deck:

ankify_roam add --deck="Biology" my_roam.json

You can also specify the deck and note type on a per-note basis using tags in Roam:

2+2={4} #[[ankify_roam: deck="Math"]] #[[ankify_roam: note="Cloze for math"]]

(When a deck or note type is specified using a tag on the block, those will take precedence over the deck and note type specified at the command line.)

Show parent blocks

To show the parents of your ankified block, pass a number of parents (or "all") to --num-parents. Here's an example where we specified that all parents should be included:

ankify_roam add --num-parents=all Geography.json

Notice that "Geography" is shown differently from the rest of the parents. By default, the top level parent is shown as a title and all other parents are shown as breadcrumbs underneath. Because we included all parents, the top level parent for both blocks was the page name. But that's not always the case, as I'll show in the next example.

You can also use a tag to specify the num-parents on a single block. In this example, the num-parents was set to 2 using an inline tag:

This ankified block has 3 parents: the first parent is "[[France]]", the second is "Capitals", and the third is "Geography". Since num-parents was set to 2, only "[[France]]" and "Capitals" were included. In this case, "Capitals" was the topmost parent included, so it's now the one displayed as a title.

Cloze delete the base name only

When you add a cloze deletion around a namespaced page reference, eg.

... you can tell ankify_roam to only cloze delete the base name part of the page reference, leaving out the namespace, eg.

... by setting the --pageref-cloze option to "base_only":

ankify_roam add --pageref-cloze=base_only my_roam.json

You can also set this on an individual note:

The {[[Design Pattern/Adaptor Pattern]]} specifies... #[[ankify_roam: pageref-cloze="base_only"]]

Customize Anki and Roam

Create custom note types

As mentioned in the Options section, you can import to different note types than the default 'Roam Basic' and 'Roam Cloze' types provided. Those note types will need to satisfy 2 requirements to be compatible with ankify_roam:

1. The first field(s) is for content from Roam (first 2 for Basic and 1 for Cloze). When ankify_roam converts a Roam block into an Anki note, it takes the content of the block and places it into the first field of the Anki note. For basic notes, it also takes the content of the block's children and adds them to the second field. The names of these fields don't matter, it just matters that they come first in the field order.
2. Include an additional field called "uid". In addition to those fields, a "uid" field is required. This field is used by ankify_roam to remember which block in Roam corresponds with which note in Anki.
Without this field, when you make a change to a block in Roam, ankify_roam will add that block as a new note in Anki rather than updating the existing one.

If you are going to make your own note types, I'd suggest you create clones of the 'Roam Basic' and 'Roam Cloze' note types and then just edit the style of those clones (see here for a tutorial).

CSS ideas for your Anki cards

Hide all Roam tags (eg. the #ankify tag):

.rm-page-ref-tag { display: none; }

Hide page reference brackets:

.rm-page-ref-brackets { display: none; }

When a block has multiple children, they're added as bullet points on the backside of a card. If you'd prefer not to show the bullets, similar to the "View as Document" option in Roam, use the following CSS:

.back-side ul { list-style-type: none; text-align: left; margin-left: 0; padding-left: 0; }

Add color or hide cloze deletions in Roam

You can also define cloze deletions using curly brackets inside square brackets. The nice thing about doing it this way is that you can now style the cloze markup. For example, you can make the cloze brackets only faintly visible by:

1. Pressing Ctrl-C Ctrl-B in Roam to hide the square brackets surrounding page links.
2. Adding this css to your [[roam/css]] page (how to video here) to change the color of the curly brackets:

span[data-link-title="{"] > span, span[data-link-title="}"] > span { color: #DDDCDC !important; }

Now the block shown above will look like this:

Note: Just like the regular cloze markup, the page links can also include cloze ids eg. [[{c1:]]Paris[[}]]

Sync Automatically

It is possible to set up automatic updates of Anki using Roam To Git. Follow the instructions on the Roam to Git page for setting up an automatically updating repository on GitHub.
Clone that repository to your local machine:

git clone https://github.com/YOURNAME/notes

Now you can run

ankify_roam add /PATH_TO_YOUR_REPO/notes/json/YOURDBNAME.json

And further, you can add the git update to crontab:

echo "15 * * * * 'cd PATH_TO_YOUR_REPO;git pull;PATH_TO_ANKIFY/ankify_roam add PATH_TO_YOUR_REPO/json/YOURDBNAME.json '" | crontab

Now you'll have Roam to Git cloning your notes from Roam on the hour, and fifteen minutes later any updates/new items will be pulled into Anki, as long as it is running.

Problems

Missing Features

- No LaTeX support
- No syntax highlighting for code blocks

Non-Intuitive Behaviour

- If you change a flashcard's field content in Anki, that change will be overwritten with whatever is in Roam the next time you run ankify_roam. So make those changes in Roam, not Anki.
- When a flashcard in Roam has already been imported to Anki, the only changes made in Roam which will be reflected in Anki are changes to the fields. Changes to its tags, deck, and note type need to be done manually in Anki.
- If you move the content of a block into a new block in Roam, ankify_roam will treat that as a new flashcard. This is because ankify_roam uses the block uid and the Anki uid field to know which block corresponds with which Anki note.
- Deleting a flashcard in Roam doesn't delete it in Anki. You'll need to delete it in Anki manually.
- A flashcard deleted in Anki will be re-imported to Anki next time you run ankify_roam if you don't also delete it or remove the #ankify tag in Roam.
- When you let ankify_roam infer the cloze ids, you can get some weird behaviour when you add a new cloze deletion to a note in Roam which was already imported to Anki. For example, if you have "Paris is the capital of {France}" in Roam, that'll become "Paris is the capital of {{c1::France}}" in Anki. Later, if you add a cloze deletion around Paris ie. "{Paris} is the capital of {France}", ankify_roam will convert that into "{{c1::Paris}} is the capital of {{c2::France}}".
Notice that the "France" cloze id is now "c2" instead of "c1". This is because ankify_roam assigns cloze ids in the order that the cloze deletions appear. The result is that in Anki the original flashcard will now cloze delete "Paris" instead of "France" and a new flashcard will be added which cloze deletes "France". To avoid this, explicitely add cloze ids in Roam which match the existing note in Anki eg. "{2:Paris} is the capital of {1:France}"
|
ankigengpt
|
AnkiGenGPT

AnkiGenGPT is a Python CLI tool that harnesses the power of OpenAI's GPT-3 or GPT-4 model to transform your text into flashcards for Anki, an open-source spaced repetition software.

Install

You can install AnkiGenGPT using pip:

pip3 install -U ankigengpt

Alternatively, you can use pipx for installation:

pipx install ankigengpt

Configuration

To use AnkiGenGPT, you need an OpenAI API token, which can be provided either through the --openai-token command-line option or via the OPENAI_TOKEN environment variable.

Epub

AnkiGenGPT can scan an epub file for text and use ChatGPT to create Anki cards. Here's an example of how to use it:

ankigengpt epub --path ~/Downloads/my-ebook.epub

Kindle Highlights

To generate Anki cards from Kindle highlights, ensure the highlights are in the APA format, and then run the following command:

ankigengpt kindle-highlights --path ~/Downloads/Notebook.html

Kobo Highlights

For Kobo highlights, you can enable the export feature on your Kobo device by connecting it via USB and modifying the .kobo/Kobo/Kobo eReader.conf file. Add the following under [FeatureSettings]:

[FeatureSettings]
ExportHighlights=true

After enabling export highlights on the Kobo device, you can retrieve the highlight files via USB and use the kobo-highlights command to create Anki cards:

ankigengpt kobo-highlights --path ~/Downloads/BookHighlights.csv

Plain text

You can also use AnkiGenGPT with plain text files such as markdown or txt. Here's how to use it:

ankigengpt plain --path ~/Downloads/book.md

Debugging

To debug AnkiGenGPT, you can use the --debug option.
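A tool like this has to turn free-form model output back into card fields before writing an Anki deck. As a purely hypothetical sketch (AnkiGenGPT's real prompt and parsing may differ), assume the model is asked to answer with alternating 'Q:'/'A:' lines:

```python
def parse_qa(model_output):
    """Pair up 'Q: ...' and 'A: ...' lines from model output into flashcards."""
    cards, question = [], None
    for line in model_output.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:") and question is not None:
            cards.append((question, line[2:].strip()))
            question = None  # wait for the next question
    return cards
```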
|
anki-kunren
|
Anki Kunren (暗記 訓練)

Anki Kunren is a program to drill japanese kanji stroke order and practice writing in sync with an Anki study session.

Installation

1. Install via pip: pip install anki-kunren
2. Install anki-connect with code 2055492159

Usage

usage: kunren [-h] [-s S] [-d D] [--field FIELD] [--size SIZE]

optional arguments:
  -h, --help     Show help message
  -s S           Start point size in px. defaults to 5px
  -d D           Stroke forgiveness in average px from actual. defaults to 25px
  --field FIELD  name of anki card field containing kanji. defaults to "Vocabulary-Kanji"
  --size SIZE    Length of a side of the square canvas in pixels. Defaults to 300.

While running, you can use the following controls:

- h: hint the current stroke
- a: animate the current stroke
- n: next kanji in the expression
- esc: quit
- c: refresh the current card shown. This is done automatically when all characters have been drawn, but would slow the program to a crawl if checked for every frame.

Notes

When the size parameter is changed, all tolerances get multiplied by the ratio of size/109.

Other

This project uses KanjiVG stroke order data.
It is licensed by Ulrich Apel under the Creative Commons Attribution Share-Alike 3.0 license.

The KanjiVG ascii filename code is taken from Kanji Colorizer, which was also the source of my initial inspiration.

TODO

- catch all indexoutofbounds
- smoother kanji lines
- different coloring for different parts of stroke based on how wrong it is
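Syncing with a study session works through the AnkiConnect add-on, which accepts JSON requests on localhost port 8765. The helper below is an illustrative sketch of that protocol (calling invoke requires Anki to be running with the add-on installed):

```python
import json
from urllib import request

ANKI_CONNECT_URL = "http://localhost:8765"

def build_request(action, **params):
    """Build an AnkiConnect version-6 request body."""
    return {"action": action, "version": 6, "params": params}

def invoke(action, **params):
    """POST a request to a running AnkiConnect instance and return its reply."""
    data = json.dumps(build_request(action, **params)).encode()
    with request.urlopen(request.Request(ANKI_CONNECT_URL, data)) as resp:
        return json.load(resp)
```

For example, invoke("guiCurrentCard") asks AnkiConnect for the card currently shown in the review window, which is how a companion tool can know what to drill.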
|
anki-librarian
|
anki-librarianA simple convention-over-configuration approach to managing shared Anki
content with YAML.
|
ankilist
|
No description available on PyPI.
|
ankillins
|
Generates Anki cards from Collins pages.

ankillins

Ankillins is a Command Line Application that generates Anki cards from collinsdictionary.com pages.

Features

- Generates a cloze card for every definition
- Word pronunciation in cards
- Example pronunciation support (locked because of high memory usage)
- Word search support
- Original site styles

Usage

➜ ankillins search "At the end of the day"
at the end of the day
➜ ankillins gen-cards "At the end of the day"
Word "At the end of the day" processed successfully
➜ ankillins gen-cards hello knife spider "so on" kek does_not_exists "sort of" kind
Word "hello" processed successfully
Word "knife" processed successfully
Word "spider" processed successfully
Word "so on" processed successfully
[Error] Word kek not found
Similar words: keck, keek, keks, kike, eek, kak, keV, kea, keb, ked, kef, keg, ken, kep, ket, kex, key, lek, nek, zek
[Error] Word does_not_exists not found
Similar words: dentists, kenoticists
Word "sort of" processed successfully
Word "kind" processed successfully
➜ ls -lh
total 32K
-rw-r--r-- 1 hairygeek hairygeek 31K Jul 26 12:56 ankillins-result.csv

Ankillins doesn't add cards to Anki itself. Instead, it generates an ankillins-result.csv file which you need to import.

Important details

- When you import the result file, click on "fields separated by" and type ~
- It is better to create your own card type with the styles located in card_styles.css to get a card style close to the collinsdictionary page style.

Installation

Will be available sooner
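Because the result file is just a csv with ~ as the delimiter, it can also be inspected programmatically before importing. A small sketch, assuming only that each row holds one card's fields:

```python
import csv
import io

def parse_result(text):
    """Parse '~'-separated ankillins output into rows of card fields."""
    return [row for row in csv.reader(io.StringIO(text), delimiter="~") if row]

# For a file on disk:
# parse_result(open("ankillins-result.csv", encoding="utf-8").read())
```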
|
ankilol
|
Anki Card Knowledge Syncer

The Problem

I like to ask lots of questions, but I can't always immediately find out the answers to those questions. So I store them in a google doc. When I do figure out the answers, I add them. Now, I would like to take that question/answer pair and create an Anki flashcard, so that I can store it in my long-term memory. However, copy-pasting these questions and answers into anki is a time-consuming process, and one which can be fully automated.

The solution

This project takes a formatted set of questions and answers stored as a cloud document, creates flashcards from those question/answer pairs, adds them to an Anki deck, and syncs that local deck with AnkiWeb.

Example usage with locally-downloaded HTML files

python -m card_parser input_file.html

Example usage with locally-downloaded text files

python -m card_parser input_file.txt

Example usage with files stored on google drive

Prerequisites

#. Sign up for a google cloud account
#. Create a new project and service account for that project
#. Share the document with the service account's e-mail
#. Download the service account's .json credentials and place in service_account.json file in card_parser directory
#. Setup config.ini to point to the appropriate google doc ID

Then, just run the following command:

python -m card_parser

Your document should have been uploaded in-place.

Disclaimer

NOTE: This package is currently under development, and has not yet been published to pip. The only current way to install it is through cloning this repository.
|
ankimaker
|
Ankimaker

WIP

A CLI app to generate anki decks.

- From csv file, with configurable parameters, filters and media.
- From epub, finding difficult* words in the book and getting their translations.

*I still don't know what 'difficult' will mean. Probably difficult words will be less frequent words that are more frequent in the text than in some corpus, cut above a grade threshold. The grades will map percentiles of frequency.

Language Level | Number of Base Words Needed
-------------- | ---------------------------
A1             | 500
A2             | 1000
B1             | 2000
B2             | 4000
C1             | 8000
C2             | 16000

This project is only possible because of the awesome work of the genanki team.
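The "difficult words" heuristic hinted at above, words whose corpus frequency rank lies beyond the learner's level threshold from the table, could look roughly like this (purely speculative; ankimaker may end up doing something different):

```python
def difficult_words(text, corpus_rank, known_rank=2000):
    """Return words in `text` ranked beyond the learner's assumed vocabulary
    (e.g. known_rank=2000 base words for level B1).
    Words missing from the corpus ranking count as rare."""
    words = {w.lower().strip(".,!?") for w in text.split()}
    return sorted(
        w for w in words
        if corpus_rank.get(w, float("inf")) > known_rank
    )
```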
|
anki-ocr
|
anki_ocr

anki_ocr is a python program that converts physical flashcards into digital Anki decks (Anki is a flashcard program that synchronizes your flashcards and uses spaced repetition for efficient memorization). It uses PyTesseract and genanki to turn your handwritten flashcards into digital anki ones.

There are several use cases; mainly it's for you if you have a lot of flashcards and want to digitize them. Anki does support image flashcards, but it would take a lot of time and you wouldn't be able to search the flashcards. It's also useful if you're not allowed to use a laptop/phone in class or prefer to handwrite your notes.

Installation

Use the package manager pip to install anki_ocr.

pip install anki_ocr

Usage

To use anki_ocr, you will need a directory with images of your flashcards. The program will automatically sort the images by date, so you should capture the question followed by its answer (i.e. question1 > answer1 > question2 > answer2 and so on), and ensure the number of images is even.

anki_ocr [img_directory] [output_deck_name]

This will output an Anki deck package output_deck_name.apkg. This package can be imported into the Desktop or mobile Anki apps.

Contributing

This project is beginner friendly. The entire module is a small single file, and the only new package you might have to deal with is genanki, just to see some other ways to generate notes.

Clone the project & you probably want a virtual environment:

git clone https://github.com/madelesi/anki_ocr.git
cd anki_ocr
python3 -m venv venv_anki_ocr
source venv_anki_ocr/bin/activate

Then install an editable version (updates after every save):

pip install -e .

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.
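The date-sorted question/answer pairing described in Usage can be sketched as follows (illustrative only, not anki_ocr's actual code):

```python
import os

def pair_cards(files):
    """Pair consecutive items as (question, answer); requires an even count."""
    if len(files) % 2 != 0:
        raise ValueError("expected an even number of images")
    return list(zip(files[0::2], files[1::2]))

def paired_images(directory):
    """Sort a directory's images by modification date, then pair them up."""
    paths = sorted(
        (os.path.join(directory, f) for f in os.listdir(directory)),
        key=os.path.getmtime,
    )
    return pair_cards(paths)
```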
|
anki-ocr-gui
|
anki_ocr_gui

anki_ocr_gui is a PyQt5 gui for the CLI tool anki_ocr.

anki_ocr is a python program that converts physical flashcards into digital Anki decks (Anki is a flashcard program that synchronizes your flashcards and uses spaced repetition for efficient memorization). It uses PyTesseract and genanki to turn your handwritten flashcards into digital anki ones.

Installation

Use the package manager pip to install anki_ocr_gui.

pip install anki_ocr_gui

Usage

To use anki_ocr, you will need a directory with images of your flashcards. The program will automatically sort the images by date, so you should capture the question followed by its answer, and ensure the number of images is even (i.e. question1 > answer1 > question2 > answer2 and so on).

anki_ocr_gui

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

License

MIT
|
anki-overdrive
|
No description available on PyPI.
|
ankipandas
|
Analyze and manipulate your Anki collection using pandas!

📝 Description

Note: This package needs a new maintainer, as I currently do not have enough time to continue development of this package. Writing modifications back into the Anki database is currently disabled, in particular because of issue #137. Please reach out to me if you are interested in getting involved!

Anki is one of the most popular flashcard systems for spaced repetition learning, pandas is the most popular python package for data analysis and manipulation. So what could be better than to bring both together?

With AnkiPandas you can use pandas to easily analyze or manipulate
your Anki flashcards.

Features:

- Select: Easily select arbitrary subsets of your cards, notes or reviews using pandas (one of many introductions, official documentation)
- Visualize: Use pandas' powerful built in tools or switch to the even more versatile seaborn (statistical analysis) or matplotlib libraries
- Manipulate: Apply fast bulk operations to the table (e.g. add tags, change decks, set field contents, suspend cards, ...) or iterate over the table and perform these manipulations step by step. ⚠️ This functionality is currently disabled until #137 has been resolved! ⚠️
- Import and Export: Pandas can export to (and import from) csv, MS Excel, HTML, JSON, ... (io documentation)

Pros:

- Easy installation: Install via python package manager (independent of your Anki installation)
- Simple: Just one line of code to get started
- Convenient: Bring together information about cards, notes, models, decks and more in just one table!
- Fully documented: Documentation on readthedocs
- Well tested: More than 100 unit tests to keep everything in check

Alternatives: If your main goal is to add new cards, models and more,
you can also take a look at the genanki project.

📦 Installation

AnkiPandas is available as a pypi package and can be installed or upgraded with the python package
manager:

pip3 install --user --upgrade ankipandas

Development installation

For the latest development version you can also work from a cloned version of this repository:

git clone https://github.com/klieret/ankipandas/
cd ankipandas
pip3 install --user --upgrade --editable .

If you want to help develop this package further, please also install the pre-commit hooks and use gitmoji:

pre-commit install
gitmoji -i

🔥 Let's get started!

Starting up is as easy as this:

from ankipandas import Collection

col = Collection()

And col.notes will be a dataframe containing all notes, with additional
methods that make many things easy. Similarly, you can access cards or reviews using col.cards or col.revs.

If called without any argument, Collection() tries to find your Anki database by itself. However, this might take some time. To make it easier, simply supply (part of) the path to the database and (if you have more than one user) your Anki user name, e.g. Collection(".local/share/Anki2/", user="User 1") on many Linux
installations.

To get information about the interpretation of each column, use print(col.notes.help_cols()).

Take a look at the documentation to find out more about the available methods!

Some basic examples:

📈 Analysis

More examples: Analysis documentation, projects that use AnkiPandas.

Show a histogram of the number of reviews (repetitions) of each card for
all decks:

col.cards.hist(column="creps", by="cdeck")

Show the number of leeches per deck as a pie chart:

cards = col.cards.merge_notes()
selection = cards[cards.has_tag("leech")]
selection["cdeck"].value_counts().plot.pie()

Find all notes of model MnemoticModel with empty Mnemotic field:

notes = col.notes.fields_as_columns()
notes.query("model=='MnemoticModel' and 'Mnemotic'==''")

🛠️ Manipulations

Warning: Writing the database has currently been disabled until #137 has been resolved. Help is much appreciated!

Warning: Please be careful and test this well! Ankipandas will create a backup of your database before writing, so you can always restore the previous state. Please make sure that everything is working before continuing to use Anki normally!

Add the difficult-japanese and marked tags to all notes that contain the tags Japanese and leech:

notes = col.notes
selection = notes[notes.has_tags(["Japanese", "leech"])]
selection = selection.add_tag(["difficult-japanese", "marked"])
col.notes.update(selection)
col.write(modify=True)  # Overwrites your database after creating a backup!

Set the language field to English for all notes of model LanguageModel that are tagged with English:

notes = col.notes
selection = notes[notes.has_tag(["English"])].query("model=='LanguageModel'").copy()
selection.fields_as_columns(inplace=True)
selection["language"] = "English"
col.notes.update(selection)
col.write(modify=True)

Move all cards tagged leech to the deck Leeches Only:

cards = col.cards
selection = cards[cards.has_tag("leech")]
selection["cdeck"] = "Leeches Only"
col.cards.update(selection)
col.write(modify=True)

🐞 Troubleshooting

See the troubleshooting section in the
documentation.

💖 Contributing

Your help is greatly appreciated! Suggestions, bug reports and feature requests are best opened as github issues. You could also first discuss in the gitter community. If you want to code something yourself, you are very welcome to submit a pull request!

Bug reports and pull requests are credited with the help of the allcontributors bot.

📃 License & Disclaimer

This software is licenced under the MIT license and (despite best testing efforts) comes without any warranty. The
logo is inspired by the Anki logo (license) and the logo of the pandas package (license 2). This library and its author(s) are not affiliated/associated with the main Anki or pandas project in any way.

✨ Contributors

Thanks goes to these wonderful people (emoji key):

Blocked 🐛, CalculusAce 🐛, Francis Tseng 🐛💻, Keith Hughitt 🐛, Miroslav Šedivý ⚠️💻, Nicholas Bollweg 💻, Thomas Brownback 🐛, eshrh 📖, exc4l 🐛💻, p4nix 🐛

This project follows the all-contributors specification. Contributions of any kind welcome!
|
ankipy
|
No description available on PyPI.
|
ankiqt
|
No description available on PyPI.
|
anki-qt
|
No description available on PyPI.
|
ankirspy
|
To build from scratch, please see https://github.com/ankitects/anki
|
ankisiyuan
|
No description available on PyPI.
|
anki-sqlalchemy
|
Anki SQLAlchemy is an interface for interacting with the Anki sqlite database from Python without having to
either hack an Anki install or figure out the database structure and field
serialization from scratch.

The goal of this project is not to support every version of Anki entirely. The current version supports a significant amount of Anki 2.1.38.

Here is a small code snippet written first without `anki_sqlalchemy` to show
how unintuitive the data format and column names are without a wrapper.

```python
# plain python without anki-sqlalchemy
import sqlite3

conn = sqlite3.connect('backup.db')
cursor = conn.execute("SELECT id, tags FROM notes WHERE mod >= ?", [1445394366])
note = cursor.fetchone()
note[0]  # 1428143940996
note[1]  # ' edit math probability wikipedia '
cursor = conn.execute("SELECT mod, type FROM cards WHERE nid = ?", [note[0]])
card = cursor.fetchone()
card[0]  # 1445394366
card[1]  # 2
```

```python
# with anki-sqlalchemy
import datetime
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from anki_sqlalchemy import Card, Note

engine = create_engine("sqlite:///backup.db", echo=True)
Session = sessionmaker(bind=engine)
session = Session()
note = session.query(Note).filter(
    Note.modification_time >= datetime.datetime(2017, 2, 5, 21, 29, 49)
).first()
note.id  # 1428143940996
note.modification_time  # datetime.datetime(2017, 2, 5, 21, 29, 49)
card = note.cards[0]
card.modification_time  # datetime.datetime(2019, 11, 5, 22, 23, 3)
card.type  # <CardType.due: 2>
```

Anki SQLAlchemy also plays nicely with types too.

```python
card: Card = session.query(Card).first()
reveal_type(card.modification_time)  # Revealed type is 'datetime.datetime*'
reveal_type(card.note.tags)  # Revealed type is 'builtins.list*[builtins.str]'
```

BEWARE! This package can be used to make changes to your anki database. Before
proceeding, please make a backup of your database file. You don't want to lose
all your work with a bad query.

The Anki database typically lives in a collection.anki2 file.

Install:

pip install anki_sqlalchemy
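The typed example above hints at the field serialization involved: Anki stores a note's tags as a single space-padded string, which the wrapper exposes as `list[str]`. A minimal sketch of that round-trip (illustrative helpers, not the package's actual code):

```python
def deserialize_tags(raw: str) -> list:
    # Anki stores tags as one space-separated, space-padded string.
    return raw.split()

def serialize_tags(tags: list) -> str:
    # Re-pad with surrounding spaces, matching Anki's storage format.
    return " %s " % " ".join(tags) if tags else ""

raw = ' edit math probability wikipedia '
tags = deserialize_tags(raw)
print(tags)                          # ['edit', 'math', 'probability', 'wikipedia']
print(serialize_tags(tags) == raw)   # True
```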
|
ankisync
|
This project is deprecated. Please see ankisync2.

ankisync

Doing what AnkiConnect cannot do, including:

- Creating new *.apkg
- Creating new note type / model
- Upserting notes
- Setting next review
- Setting card statistics
- Note ids to Card ids

But of course, this is very unsafe compared to pure AnkiConnect. I hold no liability for any damage it may cause.

Usage

Please close your Anki application first before doing this!

```python
from ankisync.anki import Anki

with Anki() as a:
    a.add_model(
        name='foo',
        fields=['field_a', 'field_b', 'field_c'],
        templates={
            'Forward': (QUESTION1, ANSWER1),
            'Reverse': (QUESTION2, ANSWER2),
        },
    )
```

Most of the other APIs are similar to AnkiConnect, but `_by_id()`'s are preferred.

Creating a new *.apkg is also possible.

```python
from ankisync.apkg import Apkg

with Apkg('bar.apkg') as a:
    model_id = a.init(
        first_model=dict(
            name='foo',
            fields=['field_a', 'field_b', 'field_c'],
            templates={
                'Forward': (QUESTION1, ANSWER1),
                'Reverse': (QUESTION2, ANSWER2),
            },
        ),
        first_deck='baz',
        first_note_data=False,
    )
    a.add_note({
        'modelName': 'foo',
        'deckId': 1,  # "Default" deck
        'fields': {
            'field_a': 'aaaaa',
            'field_b': 123,  # Numbers will be converted to string.
        },
    })
```

For an example of how I use it in action, see https://github.com/patarapolw/zhlib/blob/master/zhlib/export.py

Installation

pip install ankisync

Contributions

- What features outside AnkiConnect (or inside) do you want? I will try to implement it.
- Help me understand the documentation, the AnkiDroid Wiki, and the Anki decks collaboration Wiki.
- Please help me implement the NotImplemented methods.

Note

This is the successor to AnkiTools. I will not update it anymore.
|
ankisync2
|
AnkiSync 2

*.apkg and *.anki2 file structure is very simple, but with some quirks of incompleteness.

*.apkg file structure is a zip of at least two files.

.
├── example
│ ├── collection.anki2
│ ├── collection.anki21 # newer Anki Desktop creates and uses this file instead, while retaining the old one as stub.
│ ├── media # JSON of dict[int, str]
│ ├── 1 # Media files with the names masked as integers
│ ├── 2
│ ├── 3
| └── ...
└── example.apkg

*.anki2 is a SQLite file with foreign keys disabled, and the usage of some JSON schemas instead of some tables.

Also, *.anki2 is used internally at `os.path.join(appdirs.user_data_dir('Anki2'), 'User 1', 'collection.anki2')`, so editing the SQLite file there will also edit the database. However, the internal *.anki2 has recently changed. If you need to edit it internally, it may be safer to do so in Anki<=2.1.26. If you have trouble running two Anki versions (latest and 2.1.26), see /__utils__/anki2.1.26.

The `media` file is a text file of at least a string of `{}`, which is actually a dictionary with keys -- stringified ints -- and values -- filenames.

Usage

Some extra tables are created if they do not exist.

```python
from ankisync2 import Apkg

with Apkg("example.apkg") as apkg:
    # Or Apkg("example/") also works - the folder named 'example' will be created.
    apkg.db.database.execute_sql(SQL, PARAMS)
    apkg.zip(output="example1.apkg")
```

I also support adding media.

```python
apkg.add_media("path/to/media.jpg")
```

To find the wanted cards and media, iterate through the `Apkg` and `Apkg.iter_media` objects.

```python
for card in apkg:
    print(card)
```

Creating a new *.apkg

You can create a new *.apkg via `Apkg` with any custom filename (and *.anki2 via `Anki2()`).
A folder required to create a *.apkg needs to be created first.

```python
apkg = Apkg("example")  # Create example folder
```

After that, the Apkg will require at least 1 card, which is connected to at least 1 note, 1 model, 1 template, and 1 deck; these should be created in this order.

1. Model, Deck
2. Template, Note
3. Card

```python
with Apkg("example.apkg") as apkg:
    m = apkg.db.Models.create(name="foo", flds=["field1", "field2"])
    d = apkg.db.Decks.create(name="bar::baz")
    t = [
        apkg.db.Templates.create(name="fwd", mid=m.id, qfmt="{{field1}}", afmt="{{field2}}"),
        apkg.db.Templates.create(name="bwd", mid=m.id, qfmt="{{field2}}", afmt="{{field1}}"),
    ]
    n = apkg.db.Notes.create(mid=m.id, flds=["data1", "<img src='media.jpg'>"], tags=["tag1", "tag2"])
    c = [apkg.db.Cards.create(nid=n.id, did=d.id, ord=i) for i, _ in enumerate(t)]
```

You can also add media, which is not related to the SQLite database.

```python
apkg.add_media("path/to/media.jpg")
```

Finally, finalize with

```python
apkg.export("example1.apkg")
```

Updating an *.apkg

This is also possible, by modifying `db.Notes.data` as `sqlite_ext.JSONField`, with `peewee.signals`. It is now as simple as,

```python
with Apkg("example1.apkg") as apkg:
    for n in apkg.db.Notes.filter(db.Notes.data["field1"] == "data1"):
        n.data["field3"] = "data2"
        n.save()
    apkg.close()
```

JSON schema of `Col.models`, `Col.decks`, `Col.conf` and `Col.dconf`

I have created dataclasses for this at /ankisync2/builder.py. To serialize them, use `dataclasses.asdict` or

```python
from ankisync2 import DataclassJSONEncoder
import json

json.dumps(dataclassObject, cls=DataclassJSONEncoder)
```

Editing the user's collection.anki2

This can be found at ${ankiPath}/${user}/collection.anki2. Of course, do this at your own risk. Always backup first.

```python
from ankisync2 import AnkiDesktop

AnkiDesktop.backup("/path/to/anki-desktop.db")
anki = AnkiDesktop(filename="/path/to/anki-desktop.db")
...  # Edit as you please
AnkiDesktop.restore("/path/to/anki-desktop.db")
```

Using the peewee framework

This is based on the peewee ORM framework.
You can use Dataclasses and Lists directly, without converting them to string first.ExamplesPlease see/__examples__, and/tests.Installationpipinstallankisync2Related projectshttps://github.com/patarapolw/ankisynchttps://github.com/patarapolw/AnkiTools
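Since, as the file-structure notes above describe, an *.apkg is just a zip whose `media` member is a JSON dictionary mapping stringified integers to filenames, it can be inspected with the standard library alone. A hedged sketch (building a throwaway archive in memory rather than assuming a real deck exists):

```python
import io
import json
import zipfile

# Build a minimal apkg-like archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("collection.anki2", b"")                  # SQLite payload, stubbed here
    z.writestr("media", json.dumps({"0": "media.jpg"}))  # dict[str(int), filename]
    z.writestr("0", b"fake-image-bytes")                 # media file, name masked as "0"

# Read it back: recover the real filenames through the "media" mapping.
with zipfile.ZipFile(buf) as z:
    mapping = json.loads(z.read("media"))
    media = {name: z.read(masked) for masked, name in mapping.items()}

print(sorted(media))  # ['media.jpg']
```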
|
anki-sync-server
|
No description available on PyPI.
|
ankit
|
UNKNOWN
|
ankita
|
To run it you need the PyQt4 module and the PIL module.
Install python-qt4 (for the PyQt4 module) and python-pil (for the Python Imaging Library) on Debian-based distros.
|
ankitar26681845
|
No description available on PyPI.
|
ankit-db
|
No description available on PyPI.
|
ankitdiscountcalculator
|
Failed to fetch description. HTTP Status Code: 404
|
ankit-discounts-calculator
|
This is a very simple discount calculator that takes two parameters, originalRate and discountedPercentage, to calculate the discounted rate.

originalRate: This takes the original rate of any product/service.
discountedPercentage: This takes the percentage of discount being offered.

Change Log

0.0.1 (22/04/2023)

First Release
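The underlying arithmetic is a single line; a standalone sketch (the function name here is illustrative only -- the package's actual entry point is not documented above):

```python
def discounted_rate(original_rate: float, discounted_percentage: float) -> float:
    # Rate remaining after applying the discount percentage.
    return original_rate * (1 - discounted_percentage / 100)

print(discounted_rate(200, 25))  # 150.0
```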
|
ankit_lister
|
UNKNOWN
|
ankitMissingValues
|
No description available on PyPI.
|
anki_tool
|
No description available on PyPI.
|
ankitOutlier
|
No description available on PyPI.
|
ankitpackage
|
No description available on PyPI.
|
ankitpalbuffed
|
No description available on PyPI.
|
ankitrazorpay
|
python3 -m build
python3 -m twine check dist/*
python3 -m twine upload --verbose dist/ankitrazorpay-0.1-py3-none-any.whl dist/ankitrazorpay-0.1.tar.gz
python3 -m twine upload --verbose --repository testpypi dist/ankitrazorpay-0.1-py3-none-any.whl dist/ankitrazorpay-0.1.tar.gz

      - name: Set up Python 3
        uses: actions/setup-python@v2
        with:
          python-version: "3.10"
      - name: Install tools
        run: make venv
      - name: Publish packages to PyPI
run: |
set -ex
source venv/bin/activate
export VERSION=$(cat VERSION)
gpg --detach-sign --local-user $GPG_SIGNING_KEYID --pinentry-mode loopback --passphrase $GPG_SIGNING_PASSPHRASE -a dist/stripe-$VERSION.tar.gz
gpg --detach-sign --local-user $GPG_SIGNING_KEYID --pinentry-mode loopback --passphrase $GPG_SIGNING_PASSPHRASE -a dist/stripe-$VERSION-py2.py3-none-any.whl
python -m twine upload --verbose dist/stripe-$VERSION.tar.gz dist/stripe-$VERSION-py2.py3-none-any.whl dist/stripe-$VERSION.tar.gz.asc dist/stripe-$VERSION-py2.py3-none-any.whl.asc
env:
GPG_SIGNING_KEYID: ${{ secrets.GPG_SIGNING_KEYID }}
TWINE_USERNAME: ${{ secrets.TWINE_USERNAME }}
TWINE_PASSWORD: ${{ secrets.TWINE_PASSWORD }}
GPG_SIGNING_PASSPHRASE: ${{ secrets.GPG_SIGNING_PASSPHRASE }}
- uses: stripe/openapi/actions/notify-release@master
if: always()
with:
bot_token: ${{ secrets.SLACK_BOT_TOKEN }}
|
ankitTopsis
|
No description available on PyPI.
|
ankivalenz
|
AnkivalenzAnkivalenz is a tool for generating Anki cards from HTML files. Read myblog postfor more information on the "Why" of Ankivalenz.Use with QuartoAnkivalenz can be used withQuarto. Take a look at
the repo for thequarto-ankivalenzextension for more information.TutorialIn this walk-through we will write our notes as Markdown files, use
pandoc^pandocto convert them to HTML, and finally use Ankivalenz to
generate an Anki deck with Anki cards extracted from our Markdown files.InstallationAnkivalenz is distributed as a Python package, and requires Python 3.10+. To install:$ pip3 install ankivalenzInitialize projectCreate a folder for your notes:$ mkdir Notes
$ cd Notes

Ankivalenz needs a configuration file, containing the name and ID of the
Anki deck. This can be generated with `ankivalenz init`:

$ ankivalenz init .

Write a note

Add the following to a file named Cell.md:

```markdown
# Cell

## Types

- Prokaryotic ?:: does not contain a nucleus
- Eukaryotic ?:: contains a nucleus
```

Generate Anki deck

Convert it to HTML:

$ pandoc Cell.md > Cell.html

And run Ankivalenz:

$ ankivalenz run .

This generates a file Notes.apkg that can be imported to Anki. Open
Anki and go to File -> Import, and find Notes.apkg.

Updating Anki deck

If you make changes to your notes, you can update the Anki deck by
running `ankivalenz run` again. It is not possible to mark cards
as deleted, so if you remove a note, the corresponding card will
remain in the Anki deck. To work around this issue, all cards are
tagged with a timestamp, and you can use the Anki browser to delete
cards with an old timestamp. Running `ankivalenz run` will provide
you with the filter needed to delete orphaned cards:$ ankivalenz run .
- Added 3 notes to deck Biology in Biology.apkg
- Import the .apkg file into Anki (File -> Import)
- Find and delete orphaned notes with this filter (Browse):
  deck:Biology -tag:ankivalenz:updated:1666899823

Review

The new Anki deck will have two cards:

| Question    | Answer                     | Path         |
|-------------|----------------------------|--------------|
| Prokaryotic | does not contain a nucleus | Cell > Types |
| Eukaryotic  | contains a nucleus         | Cell > Types |

This is what the first note looks like in Anki:

Syntax

Front/back cards

Ankivalenz supports front/back cards, where the front is the question
and the back is the answer. To create a front/back card, add a new list item
with the question, followed by `?::` and the answer:

- Color of the sun ?:: Yellow

You can flip the order of the question and answer by using `::?` instead:

- Answer ::? Question

Two-way cards

Two-way cards can be created with `::`:

- Side 1 :: Side 2

This will create two cards in Anki:

| Front  | Back   |
|--------|--------|
| Side 1 | Side 2 |
| Side 2 | Side 1 |

Standalone questions/answers

Sometimes you want to create a note referring to the parent heading.
This can be done with standalone questions/answers:

- Sun
  - ::? The star in our solar system

This will create a note with the answer "Sun" and the question "The star
in our solar system". The other types of delimeters ("::" and "?::") can
be used in the same way.

Cloze cards

Ankivalenz supports cloze deletion, where the answer is hidden in the
question. To create a cloze card, add a new list item with the question,
using Anki's cloze syntax:

- The {{c1::sun}} is {{c2::yellow}}.

Nested lists

Lists can be nested:

- Solar System
  - Star ?:: Sun
  - Planet
    - Earth ?:: Blue
    - Mars ?:: Red

The headings for the nested lists become a part of the notes' paths:

| Question | Answer | Path                  |
|----------|--------|-----------------------|
| Star     | Sun    | Solar System          |
| Earth    | Blue   | Solar System > Planet |
| Mars     | Red    | Solar System > Planet |

Math

If you are writing Markdown files, and use pandoc to convert them,
the following syntax for math is supported:

- Inline math: $1 + 2$
- Display math: $$1 + 2$$

With the `--mathjax` flag, pandoc will generate the correct markup,
using \( ... \) as delimiters for inline math, and \[ ... \] as
delimiters for display math:

$ pandoc --mathjax Note.md > Note.html

Configuration

ankivalenz.json takes the following options:

| Option     | Description                                       |
|------------|---------------------------------------------------|
| deck_name  | The name of the Anki deck.                        |
| deck_id    | The ID of the Anki deck.                          |
| input_path | The path to the folder containing the HTML files. |
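The delimiter conventions above (`?::`, `::?`, `::`) can be illustrated with a small parser sketch. This is a hypothetical helper for illustration only, not Ankivalenz's actual implementation:

```python
def parse_item(text: str):
    """Classify a list item by its Ankivalenz-style delimiter.

    Returns a list of (front, back) card tuples; an item with no
    delimiter yields no cards.
    """
    if "?::" in text:                      # front/back: question ?:: answer
        q, a = (s.strip() for s in text.split("?::", 1))
        return [(q, a)]
    if "::?" in text:                      # reversed: answer ::? question
        a, q = (s.strip() for s in text.split("::?", 1))
        return [(q, a)]
    if "::" in text:                       # two-way card: both directions
        s1, s2 = (s.strip() for s in text.split("::", 1))
        return [(s1, s2), (s2, s1)]
    return []

print(parse_item("Color of the sun ?:: Yellow"))  # [('Color of the sun', 'Yellow')]
print(parse_item("Side 1 :: Side 2"))             # [('Side 1', 'Side 2'), ('Side 2', 'Side 1')]
```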
|
anki-vector
|
The Vector SDK gives you direct access to Vector's unprecedented set of advanced sensors, AI capabilities, and robotics technologies including computer vision, intelligent mapping and navigation, and a groundbreaking collection of expressive animations.

It's powerful but easy to use, complex but not complicated, and versatile enough to be used across a wide range of domains including enterprise, research, and entertainment. Find out more at https://developer.anki.com

Vector SDK documentation: https://developer.anki.com/vector/docs/
Official developer forum: https://forums.anki.com/
Requirements: Python 3.6.1 or later
|
ankiwiktionary
|
CLI for generating Anki flashcards from wiktionary.org pages. Works only with Russian words.

ankiwiktionary

Usage: ankiwiktionary [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
gen-cards Generate flashcards from passed WORDS
  search     Search for the passed WORDS in wiktionary

Features

- Generating cloze cards
- Examples processing

Usage

➜ tmp ankiwiktionary gen-cards самоопределяться луддит дихотомия привинтивный дискурс конструктивный
Word "самоопределяться" processed successfully
Word "луддит" processed successfully
Word "дихотомия" processed successfully
[Error] Word "привинтивный" not found
Word "дискурс" processed successfully
Word "конструктивный" processed successfully
➜ tmp ankiwiktionary search привинтивный
Results for "привинтивный":
➜ tmp ankiwiktionary search люмпед
Results for "люмпед": люмпен, люмпен-пролетариат, люмпенизация, люмпен-пролетарий, люмпен-интеллигенция, люмпенский, люмпенизироваться, люмпенизировать, люмпенствовать, люмпен-интеллигентImportant detailsGenerated file should be imported to Anki using "Import File" button. Set "Fields separated by" to~and activate the "Allow HTML in fields" checkbox.
|
ankix
|
Ankix

New file format for Anki with improved review intervals. Pure peewee SQLite database, no zipfile. Available to work with on Jupyter Notebook.

Usage

On Jupyter Notebook,

```python
>>> from ankix import ankix, db as a
>>> ankix.init('test.ankix')  # A file named 'test.ankix' will be created.
>>> ankix.import_apkg('foo.apkg')  # Import the contents from 'foo.apkg'
>>> iter_quiz = a.iter_quiz()
>>> card = next(iter_quiz)
>>> card
'A flashcard is shown on Jupyter Notebook. You can click to change card side, to answer-side.'
'It is HTML, CSS, Javascript, Image enabled. Cloze test is also enabled. Audio is not yet tested.'
>>> card.right()  # Mark the card as right
>>> card.wrong()  # Mark the card as wrong
>>> card.mark()   # Add the tag 'marked' to the note.
```

You can directly make use of Peewee capabilities,

```python
>>> a.Card.select().join(a.Note).where(a.Note.data['field_a'] == 'bar')[0]
'The front side of the card is shown.'
```

Adding new cards

Adding new cards is now possible. This has been tested in https://github.com/patarapolw/zhlib/blob/master/zhlib/export.py#L15

```python
from ankix import ankix, db as a

ankix.init('test.ankix')
a_model = a.Model.add(
    name='foo',
    templates=[
        a.TemplateMaker(name='Forward', question=Q_FORMAT, answer=A_FORMAT),
        a.TemplateMaker(name='Reverse', question=Q_FORMAT, answer=A_FORMAT),
    ],
    css=CSS,
    js=JS,
)
# Or, a_model = a.Model.get(name='foo')
for record in records:
    a.Note.add(
        data=record,
        model=a_model,
        card_to_decks={'Forward': 'Forward deck', 'Reverse': 'Reverse deck'},
        tags=['bar', 'baz'],
    )
```

Installation

$ pip install ankix

Plans

Test by using it a lot.
|
ankle
|
UNKNOWN
|
anko
|
ankoToolkit for performing anomaly detection algorithm on 1D time series based on numpy, scipy.Conventional approaches that based on statistical analysis have been implemented, with mainly two approaches included:Normal DistributionData samples are presumably been generated by normal distribution, and therefore anomalous data points can be targeted by analysing the standard deviation.Fitting AnsatzData samples are fitted by several ansatzs, and in accordance with the residual, anomalous data points can be selected.Regarding model selections, models are adopted dynamically by performing normal test and by computing the (Akaike/Bayesian) information criterion.
By default, the algorithm will first try to fit the data to a normal distribution, if it passes the normal test. If this attempt suffers from a loss of convergence, or the data did not pass the normal test to begin with, then the algorithm will pass the data to the second method and try to execute all the available fitting ansatzs simultaneously.
The best fitting ansatz will be selected by information criterion, and finally the algorithm will pick up anomalous points in accordance with the residual.click here to see all available methods.Future development will also include methods that are based on deep learning techniques, such as isolation forest, support vector machine, etc.Requirementspython >= 3.6.0numpy >= 1.16.4scipy >= 1.2.1Installationpip install ankoFor current release version please refer toPyPI - anko homepage.DocumentationFor details about anko API, see thereference documentation.Jupyter Notebook Tutorial (in dev)Runanko_tutorial.ipynbon your local Jupyter Notebook or host ongoogle colab.Basic UsageCall AnomalyDetectorfrom anko.anomaly_detector import AnomalyDetector
agent = AnomalyDetector(t, series)Define policies and threshold values (optional)agent.thres_params["linregress_res"] = 1.5
agent.apply_policies["z_normalization"] = True
agent.apply_policies["info_criterion"] = 'AIC'for the use ofAnomalyDetector.thres_paramsandAnomalyDetector.apply_policies,
please refer to the documentation.Run checkcheck_result = agent.check()The type of outputcheck_resultisCheckResult, which is basically a dictionary that contains the following attributes:model: 'increase_step_func'popt: [220.3243250055105, 249.03846355234577, 74.00000107457113]perr: [0.4247789247961187, 0.7166253174634686, 0.0]anomalous_data: [(59, 209)]residual: [10.050378152592119]extra_info: ['Info: AnomalyDetector is using z normalization.', 'Info: There are more than 1 discontinuous points detected.']model (str): The best fit model been selected by algorithm.popt (list): Estimated fitting parameters.perr (list): Corresponding errors of popt.anomalous_data (list[tuple(float, float)]): Return a list of anomalous data points (t, series(t)), or an empty list if all data points are in order.residual (list): Residual of anomalous data.extra_info (list): All convergence errors, warnings, informations during the execution are stored here.Run Testpython -m unittest discover -s test -p '*_test.py'or simplymake test
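The information-criterion model selection described above can be illustrated with a self-contained sketch. This uses numpy only and is an illustration of the idea, not anko's internal implementation:

```python
import numpy as np

# Compare fitting ansatzs by the Akaike information criterion,
#   AIC = n * ln(RSS / n) + 2k,  where k is the number of fitted parameters.
def aic(residuals, k):
    n = len(residuals)
    rss = float(np.sum(np.square(residuals)))
    return n * np.log(rss / n) + 2 * k

t = np.arange(50, dtype=float)
series = 3.0 * t + 7.0      # clean linear trend...
series[25] += 40.0          # ...with one anomalous jump

fits = {}
for degree in (1, 2):       # linear vs. quadratic ansatz
    coeffs = np.polyfit(t, series, degree)
    residuals = series - np.polyval(coeffs, t)
    fits[degree] = (aic(residuals, degree + 1), residuals)

best = min(fits, key=lambda d: fits[d][0])
residuals = fits[best][1]
# Flag points whose residual exceeds 3 standard deviations.
anomalies = np.nonzero(np.abs(residuals) > 3.0 * residuals.std())[0]
print(anomalies.tolist())   # [25]
```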
|
ankorstore-api-client
|
No description available on PyPI.
|
ankorstore-api-wrapper
|
No description available on PyPI.
|
anko-sdk
|
Anko Investor Python SDK

This module provides a simple Anko Investor Forecasts gRPC Service client for Python. This module does little more than wrap grpc with retry logic and authorization wrappers.

$ pip install anko-sdk

Usage

Given a valid token from https://anko-investor.com (see: Getting Started for more information), the following example will start consuming Forecasts:

```python
import os
import socket

from anko import Client

c = Client(os.environ.get('ANKO_TOKEN'), socket.gethostname())

for forecast in c:
    if forecast:
        print(forecast)
```

(Here we use the current machine's hostname as a client identifier - this can be anything, really; it's useful to set in case you need to open a support ticket to help debug connections. It can even be an empty string.)
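A retry wrapper of the sort the description mentions ("little more than wrap grpc with retry logic") can be sketched generically. This is an illustration with a plain callable standing in for a gRPC stub method, not the SDK's actual code:

```python
import time

def with_retries(call, attempts=3, backoff=0.0):
    """Retry a transiently-failing callable; re-raise after the last attempt."""
    def wrapped(*args, **kwargs):
        for attempt in range(1, attempts + 1):
            try:
                return call(*args, **kwargs)
            except Exception:
                if attempt == attempts:
                    raise
                time.sleep(backoff * attempt)  # linear backoff between tries
    return wrapped

state = {"calls": 0}
def flaky_forecast():
    # Stand-in for a gRPC stub method that fails twice, then succeeds.
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient")
    return "forecast"

print(with_retries(flaky_forecast)())  # forecast, succeeds on the third attempt
```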
|
ankr-sdk
|
⚓️ Ankr Python SDKCompact Python library for interacting with Ankr'sAdvanced APIs.Get started in 2 minutes1. Install the package from PyPipipinstallankr-sdk2. Initialize the SDKNote: to use Advanced API for free starting from May 29th, 2023 you have to register on the platform.Get your individual endpoint herehttps://www.ankr.com/rpc/advanced-apiand provide it to theAnkrWeb3class.fromankrimportAnkrWeb3ankr_w3=AnkrWeb3("YOUR-TOKEN")3. Use the sdk and call one of the supported methodsNode's APIfromankrimportAnkrWeb3ankr_w3=AnkrWeb3("YOUR-TOKEN")eth_block=ankr_w3.eth.get_block("latest")bsc_block=ankr_w3.bsc.get_block("latest")polygon_block=ankr_w3.polygon.get_block("latest")Ankr NFT APIfromankrimportAnkrWeb3fromankr.typesimportBlockchain,GetNFTsByOwnerRequestankr_w3=AnkrWeb3("YOUR-TOKEN")nfts=ankr_w3.nft.get_nfts(request=GetNFTsByOwnerRequest(blockchain=Blockchain.Eth,walletAddress="0x0E11A192d574b342C51be9e306694C41547185DD"))Ankr Token APIfromankrimportAnkrWeb3fromankr.typesimportGetAccountBalanceRequestankr_w3=AnkrWeb3("YOUR-TOKEN")assets=ankr_w3.token.get_account_balance(request=GetAccountBalanceRequest(walletAddress="0x77A859A53D4de24bBC0CC80dD93Fbe391Df45527"))Ankr Query APIfromankrimportAnkrWeb3fromankr.typesimportBlockchain,GetLogsRequestankr_w3=AnkrWeb3("YOUR-TOKEN")logs=ankr_w3.query.get_logs(request=GetLogsRequest(blockchain=[Blockchain.Eth],fromBlock=1181739,toBlock=1181739,address=["0x3589d05a1ec4af9f65b0e5554e645707775ee43c"],topics=[[],["0x000000000000000000000000feb92d30bf01ff9a1901666c5573532bfa07eeec"],],decodeLogs=True,),limit=10)Ankr Advanced APIs supported chainsankr-sdksupports the following chains at this time:MainnetEthereum:"eth"BNB Smart Chain:"bsc"Polygon:"polygon"Fantom:"fantom"Arbitrum:"arbitrum"Avalanche:"avalanche"Syscoin NEVM:"syscoin"Optimism:"optimism"Polygon zkEVM:"polygon_zkevm"Rollux:"rollux"Base:"base"Flare:"flare"Gnosis Chain:"gnosis"Scroll:"scroll"Linea:"linea"TestnetEthereum Goerli:"eth_goerli"Avalanche 
Fuji:"avalanche_fuji"Polygon Mumbai:"polygon_mumbai"Optimism Testnet:"optimism_testnet"AppchainMETA Apes:"bas_metaapes"Appchain TestnetMETA Apes Testnet:"bas_metaapes_testnet"When passing blockchain, you can use one available fromtypes.py(preferred) or just a string value.Available methodsankr-sdksupports the following methods:Early Accessget_token_price_historyget_account_balance_historicalget_internal_transactions_by_block_numberget_internal_transactions_by_parent_hashToken APIexplain_token_priceget_account_balanceget_currenciesget_token_holdersget_token_holders_count_historyget_token_holders_countget_token_priceget_token_transfersNFT APIget_nftsget_nft_metadataget_nft_holdersget_nft_transfersQuery APIget_logsget_blocksget_transactionget_transactions_by_addressget_blockchain_statsget_interactionsNote: some methods are available in *_raw format, allowing to get full reply with syncStatus and control pagination by hands.Paginationmethods with *_raw ending support customized pagination, where you are controlling it, usingpageSizeandpageTokenother methods support automatic pagination, DON'T usepageSizeandpageTokenfields to these methodsExamplesEarly Access APIget_token_price_history/get_token_price_history_rawGet a list of history of the price for given contract to given timestamp.fromankrimportAnkrAdvancedAPIfromankr.typesimportBlockchain,GetTokenPriceHistoryRequestadvancedAPI=AnkrAdvancedAPI("YOUR-TOKEN")result=advancedAPI.get_token_price_history(request=GetTokenPriceHistoryRequest(blockchain=Blockchain.Eth,contractAddress='0x50327c6c5a14dcade707abad2e27eb517df87ab5',toTimestamp=1696970653,interval=100,limit=10))print(result)get_account_balance_historical/get_account_balance_historical_rawGet the coin and token balances of the wallet at specified 
block.fromankrimportAnkrAdvancedAPIfromankr.typesimportBlockchain,GetAccountBalanceHistoricalRequestadvancedAPI=AnkrAdvancedAPI("YOUR-TOKEN")result=advancedAPI.get_account_balance_historical(request=GetAccountBalanceHistoricalRequest(blockchain=Blockchain.Eth,walletAddress='vitalik.eth',onlyWhitelisted=False,blockHeight=17967813,))print(result)get_internal_transactions_by_block_number/get_internal_transactions_by_block_number_rawGet a list of internal transactions in the block.fromankrimportAnkrAdvancedAPIfromankr.typesimportBlockchain,GetInternalTransactionsByBlockNumberRequestadvancedAPI=AnkrAdvancedAPI("YOUR-TOKEN")result=advancedAPI.get_internal_transactions_by_block_number(request=GetInternalTransactionsByBlockNumberRequest(blockchain=Blockchain.Eth,blockNumber=10000000,onlyWithValue=True,))fortransactioninresult:print(transaction)get_internal_transactions_by_parent_hash/get_internal_transactions_by_parent_hash_rawGet a list of internal transactions in the transaction.fromankrimportAnkrAdvancedAPIfromankr.typesimportBlockchain,GetInternalTransactionsByParentHashRequestadvancedAPI=AnkrAdvancedAPI("YOUR-TOKEN")result=advancedAPI.get_internal_transactions_by_parent_hash(request=GetInternalTransactionsByParentHashRequest(blockchain=Blockchain.Eth,parentTransactionHash='0xa50f8744e65cb76f66f9d54499d5401866a75d93db2e784952f55205afc3acc5',onlyWithValue=True,))fortransactioninresult:print(transaction)Token APIexplain_token_price/explain_token_price_rawGet a list of tokens and pool how price for calculated.fromankrimportAnkrAdvancedAPIfromankr.typesimportBlockchain,ExplainTokenPriceRequestadvancedAPI=AnkrAdvancedAPI("YOUR-TOKEN")pairs,estimates=advancedAPI.explain_token_price(request=ExplainTokenPriceRequest(blockchain=Blockchain.Eth,tokenAddress='0x8290333cef9e6d528dd5618fb97a76f268f3edd4',blockHeight=17463534,))print(pairs)print(estimates)get_account_balance/get_account_balance_rawGet the coin and token balances of a 
wallet.fromankrimportAnkrAdvancedAPIfromankr.typesimportGetAccountBalanceRequestadvancedAPI=AnkrAdvancedAPI("YOUR-TOKEN")result=advancedAPI.get_account_balance(request=GetAccountBalanceRequest(walletAddress="0x77A859A53D4de24bBC0CC80dD93Fbe391Df45527"))forbalanceinresult:print(balance)get_currencies/get_currencies_rawGet a list of supported currencies for a given blockchain.fromankrimportAnkrAdvancedAPIfromankr.typesimportBlockchain,GetCurrenciesRequestadvancedAPI=AnkrAdvancedAPI("YOUR-TOKEN")result=advancedAPI.get_currencies(request=GetCurrenciesRequest(blockchain=Blockchain.Fantom,))forcurrencyinresult:print(currency)get_token_holders/get_token_holders_rawGet the list of token holders for a given contract address.fromankrimportAnkrAdvancedAPIfromankr.typesimportBlockchain,GetTokenHoldersRequestadvancedAPI=AnkrAdvancedAPI("YOUR-TOKEN")result=advancedAPI.get_token_holders(request=GetTokenHoldersRequest(blockchain=Blockchain.Eth,contractAddress='0xdac17f958d2ee523a2206206994597c13d831ec7',))forbalanceinresult:print(balance)get_token_holders_count_history/get_token_holders_count_history_rawGet historical data about the number of token holders for a given contract address.fromankrimportAnkrAdvancedAPIfromankr.typesimportBlockchain,GetTokenHoldersCountRequestadvancedAPI=AnkrAdvancedAPI("YOUR-TOKEN")result=advancedAPI.get_token_holders_count_history(request=GetTokenHoldersCountRequest(blockchain=Blockchain.Eth,contractAddress='0xdAC17F958D2ee523a2206206994597C13D831ec7',))forbalanceinresult:print(balance)get_token_holders_count/get_token_holders_count_rawGet current data about the number of token holders for a given contract 
address.

from ankr import AnkrAdvancedAPI
from ankr.types import Blockchain, GetTokenHoldersCountRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_token_holders_count_history_raw(
    request=GetTokenHoldersCountRequest(
        blockchain=Blockchain.Eth,
        contractAddress='0xdAC17F958D2ee523a2206206994597C13D831ec7',
    )
)
print(result)

get_token_price / get_token_price_raw

Get token price by contract.

from ankr import AnkrAdvancedAPI
from ankr.types import Blockchain, GetTokenPriceRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_token_price(
    request=GetTokenPriceRequest(
        blockchain=Blockchain.Eth,
        contractAddress='',
    )
)
print(result)

get_token_transfers / get_token_transfers_raw

Get token transfers of specified address.

from ankr import AnkrAdvancedAPI
from ankr.types import Blockchain, GetTransfersRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_token_transfers(
    request=GetTransfersRequest(
        blockchain=Blockchain.Eth,
        address=['0xf16e9b0d03470827a95cdfd0cb8a8a3b46969b91'],
        fromTimestamp=1674441035,
        toTimestamp=1674441035,
        descOrder=True,
    )
)
for transfer in result:
    print(transfer)

NFT API

get_nfts / get_nfts_raw

Get data about all the NFTs (collectibles) owned by a wallet.

from ankr import AnkrAdvancedAPI
from ankr.types import Blockchain, GetNFTsByOwnerRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_nfts(
    request=GetNFTsByOwnerRequest(
        blockchain=Blockchain.Eth,
        walletAddress='0x0E11A192d574b342C51be9e306694C41547185DD',
    )
)
for nft in result:
    print(nft)

get_nft_metadata / get_nft_metadata_raw

Get NFT's contract metadata.

from ankr import AnkrAdvancedAPI
from ankr.types import Blockchain, GetNFTMetadataRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
reply = advancedAPI.get_nft_metadata(
    request=GetNFTMetadataRequest(
        blockchain=Blockchain.Eth,
        contractAddress='0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d',
        tokenId='1500',
        forceFetch=False,
    )
)
print(reply.metadata)
print(reply.attributes)

get_nft_holders / get_nft_holders_raw

Get NFT's holders.

from ankr import AnkrAdvancedAPI
from ankr.types import Blockchain, GetNFTHoldersRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_nft_holders(
    request=GetNFTHoldersRequest(
        blockchain=Blockchain.Arbitrum,
        contractAddress='0xc36442b4a4522e871399cd717abdd847ab11fe88',
    ),
    limit=1000
)
for holder in result:
    print(holder)

get_nft_transfers / get_nft_transfers_raw

Get NFT transfers of specified address.

from ankr import AnkrAdvancedAPI
from ankr.types import Blockchain, GetTransfersRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_nft_transfers(
    request=GetTransfersRequest(
        blockchain=[Blockchain.Eth, Blockchain.Bsc],
        address=['0xd8da6bf26964af9d7eed9e03e53415d37aa96045'],
        fromTimestamp=1672553107,
        toTimestamp=1672683207,
    )
)
for transfer in result:
    print(transfer)

Query API

get_logs / get_logs_raw

Get logs matching the filter.

from ankr import AnkrAdvancedAPI
from ankr.types import Blockchain, GetLogsRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_logs(
    request=GetLogsRequest(
        blockchain=[Blockchain.Eth],
        fromBlock=1181739,
        toBlock=1181739,
        address=["0x3589d05a1ec4af9f65b0e5554e645707775ee43c"],
        topics=[
            [],
            ["0x000000000000000000000000feb92d30bf01ff9a1901666c5573532bfa07eeec"],
        ],
        decodeLogs=True,
    ),
    limit=10
)
for log in result:
    print(log)

get_blocks / get_blocks_raw

Query data about blocks within a specified range.

from ankr import AnkrAdvancedAPI
from ankr.types import Blockchain, GetBlocksRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_blocks(
    request=GetBlocksRequest(
        blockchain=Blockchain.Eth,
        fromBlock=14500001,
        toBlock=14500004,
        descOrder=True,
        includeLogs=True,
        includeTxs=True,
        decodeLogs=True,
    )
)
for block in result:
    print(block)

get_transaction / get_transaction_raw

Query data about a transaction by the transaction hash.

from ankr import AnkrAdvancedAPI
from ankr.types import GetTransactionsByHashRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_transaction(
    request=GetTransactionsByHashRequest(
        transactionHash='0x82c13aaac6f0b6471afb94a3a64ae89d45baa3608ad397621dbb0d847f51196f',
        decodeTxData=True
    )
)
print(result)

get_transactions_by_address / get_transactions_by_address_raw

Query data about transactions of specified address.

from ankr import AnkrAdvancedAPI
from ankr.types import Blockchain, GetTransactionsByAddressRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_transactions_by_address(
    request=GetTransactionsByAddressRequest(
        blockchain=Blockchain.Bsc,
        fromBlock=23593283,
        toBlock=23593283,
        address=["0x97242e3315c7ece760dc7f83a7dd8af6659f8c4c"],
        descOrder=True,
    )
)
for transaction in result:
    print(transaction)

get_blockchain_stats / get_blockchain_stats_raw

Returns blockchain stats (num of txs, etc.).

from ankr import AnkrAdvancedAPI
from ankr.types import Blockchain, GetBlockchainStatsRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_blockchain_stats(
    request=GetBlockchainStatsRequest(
        blockchain=Blockchain.Bsc,
    )
)
for stat in result:
    print(stat)

get_interactions / get_interactions_raw

Returns the chains an address has interacted with.

from ankr import AnkrAdvancedAPI
from ankr.types import GetInteractionsRequest

advancedAPI = AnkrAdvancedAPI("YOUR-TOKEN")
result = advancedAPI.get_interactions(
    request=GetInteractionsRequest(
        address='0xF977814e90dA44bFA03b6295A0616a897441aceC',
    )
)
for blockchain in result:
    print(blockchain)

About API keys

Ankr is offering free access to the Advanced API; however, you have to register on the Ankr platform to access it. Get your individual API key here: https://www.ankr.com/rpc/advanced-api.
|
ankur-pdf
|
No description available on PyPI.
|
ankush-distributions
|
No description available on PyPI.
|
ankushpdf
|
This is homepage of our project.
|
ankush-test
|
No description available on PyPI.
|
ankylosaurus
|
No description available on PyPI.
|
anl
|
ANAC Automated Negotiations League Platform

Overview

This repository is the official platform for running ANAC Automated Negotiation Leagues (starting 2024). It will contain a package called anlXXXX for the competition run in year XXXX. For example, anl2024 will contain all files related to the 2024 version of the competition.

Installation

pip install anl

You can also install the in-development version with:

pip install https://github.com/autoneg/anl/archive/master.zip

Documentation

https://yasserfarouk.github.io/anl/

Changelog

0.1.9 (2024.02.14)
- Adding divide-the-pies scenarios
- Adding workflow to test on negmas master
- Tutorial and docs update
- Update faq

0.1.8 (2023.12.31)
- Bugfix in visualizer initial tournament list
- Correcting auto pushing to PyPI

0.1.7 (2023.12.31)
- Adding simple dockerfile
- Adding --port, --address to anlv show. You can now set the port and address of the visualizer
- Visualizer parses folders recursively
- minor: faster saving of figs
- Adding mkdocs to dev requirements
- Removing NaiveTitForTat from the default set of competitors
- Improving tutorial

0.1.6 (2023.12.27)
- Improved visualizer:
  - Adding filtering by scenario or strategy to the main view.
  - Adding new options to show scenario statistics, scenario x strategy statistics, and cases with no agreements at all.
  - You can show multiple negotiations together
  - You can show the descriptive statistics of any metric according to strategy or scenario
  - More plotting options for metrics
- Improved CLI:
  - Adding the ability to pass parameters to competitors in the CLI.
  - Removing NaiveTitForTat from the default set of competitors
  - Making small tournaments even smaller
- New and improved strategies:
  - Adding RVFitter strategy which showcases a simple implementation of curve fitting for reserved value estimation and using logging.
  - Adding more comments to NashSeeker strategy
  - Simplified implementation of MiCRO
  - Adding a simple test for MiCRO
  - Avoid failure when Nash cannot be found in NashSeeker
- Migrating to NegMAS 0.10.11. Needed for logging (and 0.10.10 is needed for self.opponent_ufun)

0.1.5 (2023.12.24)
- Changing default order of agents
- Adding a basic visualizer
- Adding make-scenarios to the CLI
- Passing opponent ufun in the private info
- Separating implementation of builtin agents
- Requiring NegMAS 0.10.9

0.1.4 (2023.12.24)
- Retrying scenario generation if it failed
- Defaulting to no plotting in windows

0.1.3 (2023.12.23)
- Defaulting to no-plotting on windows to avoid an error caused by tkinter
- Retry scenario generation on failure. This is useful for piece-wise linear which will fail (by design) if n_pareto happened to be less than n_segments + 1

0.1.2 (2023.12.18)
- Adding better scenario generation and supporting mixtures of zero-sum, monotonic and general scenarios.
- Requiring negmas 0.10.8

0.1.2 (2023.12.11)
- Controlling log path in anl2024_tournament() through the added base_path argument

0.1.1 (2023.12.09)
- Added anl cli for running tournaments.
- Added the ability to hide or show type names during negotiations
- Corrected a bug in importing unique_name
- Now requires negmas 0.10.6

0.1.0 (2023.11.30)
- Adding ANL 2024 placeholder
|
anlearn
|
anlearn - Anomaly learn

In Gauss Algorithmic, we're working on many anomaly/fraud detection projects using open-source tools. We decided to put our two cents in, "tidy up" some of our code snippets, add documentation and examples, and release them as an open-source package. So let me introduce anlearn. It aims to offer multiple interesting anomaly detection methods in the familiar scikit-learn API so you can quickly try some anomaly detection experiments yourself.

So far, this package is in an alpha state and ready for your experiments.

Do you have any questions, suggestions, or want to chat? Feel free to contact us via Github, Gitter, or email.

Installation

anlearn depends on scikit-learn and its dependencies scipy and numpy.

Requirements:
- python >= 3.6
- scikit-learn
- scipy
- numpy

Requirements for every supported python version, with versions and hashes, can be found in the requirements folder. We're using pip-tools for generating requirements files.

Installation options

PyPI installation:

pip install anlearn

Installation from source:

git clone https://github.com/gaussalgo/anlearn
cd anlearn

Install anlearn:

pip install .

or by using poetry:

poetry install

Documentation

You can find documentation at Read the Docs: docs.

Contact us

Do you have any questions, suggestions, or want to chat? Feel free to contact us via Github, Gitter, or email.

License

GNU Lesser General Public License v3 or later (LGPLv3+)

anlearn Copyright (C) 2020 Gauss Algorithmic a.s.

This package is in an alpha state and comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to use, redistribute it, and contribute under certain conditions of its license.

Code of Conduct
|
anlib
|
No description available on PyPI.
|
anlis
|
ANLIS - Analysis for PythonANLIS is a Python package foranalysisbuilt on top ofnumpyandsympy. ANLIS provides a set of functions to perform analysis tasks. ANLIS is a work in progress and currently supports the following tasks:SeriesPlotting any seriesArithmetic SeriesFinding $a_n$ from two elementsGeometric SeriesFinding $a_n$ from two elementsFinding the sum of an infinite series (based on ratio and first elementortwo elements)ConvergenceDetermining if a sequence is convergent (or divergent)Convergence TestsDerivativesFinding the critical points of a functionFinding the extrema of a functionVectorsFinding the magnitude of a vectorFinding the unit vector of a vectorFinding the dot product of two vectorsTaylor SeriesFinding the Taylor Series of a functionFinding the Taylor Polynomial of a functionFinding the Lagrange Remainder of a functionIntegralsLeft/Right Riemann SumsTrapezoidal RuleSimpson's RuleDifferentials (e.g. for error analysis)Absolute differentialRelative differentialMultidimensional CalculusCritical pointsFinding critical pointsFinding extremaDerivativesFinding the (or all) partial derivatives of a functionFinding partial derivatives of composite functionsFinding the gradient of a functionFinding the Jacobian of a functionFinding the determinant of a functionFinding the linearisation of a functionSolving with Newton-RaphsonFinding the directional derivative of a functionContour LinesFinding the contour lines of a functionFinding the tangent lines of contour linesPlotting the contour lines of a function (2D or 3D)Differentials (e.g. for error analysis)Absolute differentialRelative differentialTotal differentialThewikicontains more information on the functions along with examples and when to use them.
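The "Solving with Newton-Raphson" feature listed above refers to the classic root-finding iteration; anlis's own function names are documented in its wiki, so the sketch below only illustrates the underlying method (all names here are my own, not the library's API):

```python
# Generic Newton-Raphson iteration: repeatedly follow the tangent line
# x <- x - f(x)/f'(x) until the step size is below tolerance.
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
# root converges to sqrt(2) ~ 1.41421356
```

For a library like anlis built on sympy, the derivative df would typically be obtained symbolically rather than passed in by hand.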
|
anlogger
|
anlogger

A python3 module to assist in setting up logging.

Install

python3 -m pip install anlogger

Usage

from anlogger import Logger

logger_obj = Logger(name="appname", default_loglevel="INFO", fmt=None, syslog=None,
                    syslog_facility=None, log_to_console=True, console_stream='stderr')
logger = logger_obj.get()
logger.info("Message on info-level")

- name is the application name used in logging (REQUIRED).
- default_loglevel is the logging level which is used unless the LOGLEVEL environment variable is set.
- fmt is the format used for formatting the logger. See python's logging module documentation for formatting options.
- syslog is the syslog configuration. Set to True to use local syslog, or a tuple of ("ipaddress-string", port-int) for remote logging.
- syslog_facility is one of the well-known syslog facilities. If syslog is used but syslog_facility is not set, the user facility is used by default.
- log_to_console defines whether the logging is also outputted to the console. Default is True.
- console_stream defines which output stream to use for console logging. Accepted values are stderr (default) and stdout.

See the logger.Logger class code for additional details.
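For context, this is roughly the standard-library boilerplate that the wrapper's console path replaces — a sketch of an equivalent setup, not anlogger's actual internals:

```python
import logging
import os
import sys

def make_console_logger(name, default_loglevel="INFO", stream=sys.stderr):
    # Named logger whose level can be overridden by the LOGLEVEL environment
    # variable, with a single console handler -- the kind of wiring that
    # Logger(...) above does for you in one call.
    logger = logging.getLogger(name)
    logger.setLevel(os.environ.get("LOGLEVEL", default_loglevel))
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

logger = make_console_logger("appname")
logger.info("Message on info-level")
```

The syslog path would additionally attach a logging.handlers.SysLogHandler, which is where the ("ipaddress-string", port-int) tuple described above comes in.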
|
anlp
|
No description available on PyPI.
|
anls
|
ANLS: Average Normalized Levenshtein SimilarityThis python script is based on the one provided bythe Robust Reading Competitionfor evaluation of the InfographicVQA task.The ANLS metricThe Average Normalized Levenshtein Similarity (ANLS) proposed by [Biten+ ICCV'19] smoothly captures the OCR mistakes applying a slight penalization in case of correct intended responses, but badly recognized. It also makes use of a threshold of value 0.5 that dictates whether the output of the metric will be the ANLS if its value is equal or bigger than 0.5 or 0 otherwise. The key point of this threshold is to determine if the answer has been correctly selected but not properly recognized, or on the contrary, the output is a wrong text selected from the options and given as an answer.More formally, the ANLS between the net output and the groundtruth answers is given byequation 1. Where $N$ is the total number of questions, $M$ total number of GT answers per question, $a_{ij}$ the ground truth answers where $i = {0, ..., N}$, and $j = {0, ..., M}$, and $o_{qi}$ be the network's answer for the ith question $q_i$:$$
\mathrm{ANLS} = \frac{1}{N} \sum_{i=0}^{N} \left(\max_{j} s(a_{ij}, o_{qi}) \right),
$$where $s(\cdot, \cdot)$ is defined as follows:$$
s(a_{ij}, o_{qi}) = \begin{cases}
1 - \mathrm{NL}(a_{ij}, o_{qi}), & \text{if } \mathrm{NL}(a_{ij}, o_{qi}) \lt \tau \\
0, & \text{if } \mathrm{NL}(a_{ij}, o_{qi}) \ge \tau
\end{cases}
$$

The ANLS metric is not case sensitive, but space sensitive. For example:

Q: What soft drink company name is on the red disk?

Possible answers:
$a_{i1}$ : Coca Cola
$a_{i2}$ : Coca Cola Company

Net output ($o_{qi}$) | $s(a_{ij}, o_{qi})$              | Score (ANLS)
The Coca              | $a_{i1} = 0.44$, $a_{i2} = 0.29$ | 0.00
CocaCola              | $a_{i1} = 0.89$, $a_{i2} = 0.47$ | 0.89
Coca cola             | $a_{i1} = 1.00$, $a_{i2} = 0.53$ | 1.00
Cola                  | $a_{i1} = 0.44$, $a_{i2} = 0.23$ | 0.00
Cat                   | $a_{i1} = 0.22$, $a_{i2} = 0.12$ | 0.00

Installation

From PyPI:

pip install anls

From GitHub:

pip install git+https://github.com/shunk031/ANLS

How to use

From CLI:

calculate-anls \
    --gold-label-file test_fixtures/evaluation/evaluate_json/gold_label.json \
    --submission-file test_fixtures/evaluation/evaluate_json/submission.json \
    --anls-threshold 0.5

❯❯❯ calculate-anls --help
usage: calculate-anls [-h] --gold-label-file GOLD_LABEL_FILE --submission-file SUBMISSION_FILE [--anls-threshold ANLS_THRESHOLD]

Evaluation command using ANLS

optional arguments:
  -h, --help            show this help message and exit
  --gold-label-file GOLD_LABEL_FILE
                        Path of the Ground Truth file.
  --submission-file SUBMISSION_FILE
                        Path of your method's results file.
  --anls-threshold ANLS_THRESHOLD
                        ANLS threshold to use (See Scene-Text VQA paper for more info.).

From python script:

>>> from anls import anls_score
>>> ai1 = "Coca Cola"
>>> ai2 = "Coca Cola Company"
>>> net_output = "The Coca"
>>> anls_score(prediction=net_output, gold_labels=[ai1, ai2], threshold=0.5)
0.00
>>> net_output = "CocaCola"
>>> anls_score(prediction=net_output, gold_labels=[ai1, ai2], threshold=0.5)
0.89
>>> net_output = "Coca cola"
>>> anls_score(prediction=net_output, gold_labels=[ai1, ai2], threshold=0.5)
1.0

References

Biten, Ali Furkan, et al. "Scene text visual question answering." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.
anltk
|
Arabic Natural Language Toolkit (ANLTK)

ANLTK is a set of Arabic natural language processing tools, developed with a focus on simplicity and performance.

ANLTK is a C++ library, with python bindings.

Installation

For python:

pip install anltk

Building

Note: Currently only tested on Linux; prebuilt python wheels are available for Linux, Windows, and Macos on pypi.

Dependencies:
- utfcpp, automatically downloaded.
- utf8proc, automatically downloaded.
- C++ compiler that supports c++17.
- Python3, meson, ninja

pip install meson
pip install ninja

git clone https://github.com/Abdullah-AlAttar/anltk.git \
&& cd anltk/ \
&& meson build --buildtype=release -Dbuild_tests=false \
&& cd build \
&& ninja \
&& cd ../ \
&& pip install -e .

Usage Examples:

C++ API:

#include "anltk/anltk.hpp"
#include <iostream>
#include <string>

int main()
{
    std::string ar_text = "أبجد هوز حطي كلمن سعفص قرشت ثخذ ضظغ";
    std::cout << anltk::transliterate(ar_text, anltk::CharMapping::AR2BW) << '\n';
    // >bjd hwz HTy klmn sEfS qr$t vx* DZg

    std::string text = "فَرَاشَةٌ مُلَوَّنَةٌ تَطِيْرُ في البُسْتَانِ، حُلْوَةٌ مُهَنْدَمَةٌ تُدْهِشُ الإِنْسَانَ.";
    std::cout << anltk::remove_tashkeel(text) << '\n';
    // فراشة ملونة تطير في البستان، حلوة مهندمة تدهش الإنسان.

    // Third parameter is a stop_list; characters in this list won't be removed
    std::cout << anltk::remove_non_alpha(text, " ") << '\n';
    // فراشة ملونة تطير في البستان حلوة مهندمة تدهش الإنسان

    anltk::TafqitOptions opts;
    std::cout << anltk::tafqit(15000120, opts) << '\n';
    // خمسة عشر مليونًا ومائة وعشرون
}

Python API:

import anltk

ar = "أبجد هوز حطي كلمن سعفص قرشت ثخذ ضظغ"
bw = anltk.transliterate(ar, anltk.AR2BW)
print(bw)
# >bjd hwz HTy klmn sEfS qr$t vx* DZg

print(anltk.remove_tashkeel("فَرَاشَةٌ مُلَوَّنَةٌ تَطِيْرُ في البُسْتَانِ، حُلْوَةٌ مُهَنْدَمَةٌ تُدْهِشُ الإِنْسَانَ."))
# فراشة ملونة تطير في البستان، حلوة مهندمة تدهش الإنسان.

print(anltk.tafqit(15000120))
# خمسة عشر مليونًا ومائة وعشرون

For a list of features see Features.md

Benchmarks

Processing a file containing 500000 lines, 6787731 words, and 112704541 characters; the task is to remove diacritics / transliterate to Buckwalter.

Buckwalter transliteration:

Method             | Time
anltk python-api   | 1.379 seconds
python camel_tools | 11.46 seconds

Remove Diacritics:

Method             | Time
anltk python-api   | 0.989 seconds
python camel_tools | 4.892 seconds
|
anm
|
No description available on PyPI.
|
anm-addit
|
This is the testing package by Anshuman Mishra

----TO INSTALL----

Write the command in terminal:

pip install anm_addit

----HOW TO USE----

Write the following code:

from anm_addit import addit

x = addit(num1, num2)
print(x)

This will print (num1 + num2)
|
anmetal
|
Another Numeric optimization and Metaheuristics Library

A library to do your metaheuristics and numeric combinatorial stuff.

To install, use:

pip install anmetal

See the /test folder for some examples of use. In later updates I will add documentation; for now there is only example code.

Content

Numeric optimization
- Iterative optimization functions (one solution)
  - Euler method
  - Newton method

Metaheuristics
- Real input
  - Artificial Fish Swarm Algorithm (AFSA) (Li, X. L. (2003). A new intelligent optimization-artificial fish swarm algorithm. Doctor thesis, Zhejiang University of Zhejiang, China, 27.)
  - Particle Swarm Optimization (PSO) (Based on https://en.wikipedia.org/wiki/Particle_swarm_optimization)
  - Particle Swarm Optimization (PSO) With Leap
  - Greedy
  - Greedy With Leap
- Categorical input
  - Genetic
  - Genetic With Leap

Problems and gold-standard functions
- NP-hard problems
  - Real problems: Partition problem, Subset problem
  - Categorical problems: knapsack, sudoku (without initial matrix, just random)
- Non linear functions
  - one input (1-D): F1, F3 (https://doi.org/10.1007/s00521-017-3088-3)
  - two inputs (2-D): Camelback, Goldsteinprice, Pshubert1, Pshubert2, Shubert, Quartic (https://doi.org/10.1007/s00521-017-3088-3)
  - n inputs (N-D): Brown1, Brown3, F10n, F15n (https://doi.org/10.1007/s00521-017-3088-3); Sphere, Rosenbrock, Griewank, Rastrigrin, Sumsquares, Michalewicz, Quartic, Schwefel, Penalty (https://doi.org/10.1007/s00521-018-3512-3)

Another content
- Binarization functions: sShape1, sShape2, sShape3, sShape4, vShape1, vShape2, vShape3, vShape4, erf
- Binarization strategies: standard, complement, static_probability, elitist
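The binarization functions listed above come from the metaheuristics literature on mapping a continuous particle position to a 0/1 decision. The exact sShape1..4 / vShape1..4 definitions live in the anmetal source; the sketch below only illustrates the two families (names and constants are my own):

```python
import math
import random

def s_shape(x: float) -> float:
    # S-shaped (sigmoid) transfer function: maps a real position to a
    # probability in (0, 1) of taking the value 1.
    return 1.0 / (1.0 + math.exp(-x))

def v_shape(x: float) -> float:
    # V-shaped transfer function: |tanh(x)|, symmetric around 0.
    return abs(math.tanh(x))

def binarize(x: float, transfer=s_shape) -> int:
    # "Standard" style strategy: compare the transfer value to a uniform
    # random threshold.
    return 1 if random.random() < transfer(x) else 0

print(s_shape(0.0))  # 0.5 -> equal chance of 0 or 1
print(v_shape(0.0))  # 0.0
```

A binarization strategy then decides what to do with that bit (keep it, complement the current bit, fall back to the elite solution, etc.), which is what the strategy list above refers to.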
|
anmi
|
No description available on PyPI.
|
anminnester
|
No description available on PyPI.
|
anml
|
anml: A Nonlinear Modeling Library

NOTE: This repository is under construction. :construction: :warning: :construction_worker:
This is a nonlinear modeling library.
|
anmoku
|
Anmoku 安黙

A peaceful and fully typed MyAnimeList/Jikan Python API wrapper with caching and proper rate limiting.

[!NOTE] Anmoku is currently a work in progress, so the features below may not be complete yet.

Features ✨
- Rate limiting 🎀 (with actual waiting).
- Supports caching. ⚡
- Fully type hinted. 🌌 yes you heard me correctly

Examples ⚗️

Anmoku is probably the simplest Jikan API wrapper you'll ever use. All you need is the client and the resource. 🌊

from anmoku import Anmoku, AnimeCharacters

client = Anmoku(debug=True)

anime_characters = client.get(AnimeCharacters, id=28851)  # ID for the anime film "A Silent Voice".

for character in anime_characters:
    print(f"{character.name} ({character.url})")

client.close()

We also have an async client:

import asyncio
from anmoku import AsyncAnmoku, AnimeCharacters

async def main():
    client = AsyncAnmoku(debug=True)

    anime_characters = await client.get(AnimeCharacters, id=28851)  # ID for the anime film "A Silent Voice".

    for character in anime_characters:
        print(f"{character.name} ({character.url})")

    await client.close()

asyncio.run(main())

Output:

[DEBUG] (anmoku) - [AsyncAnmoku] GET --> https://api.jikan.moe/v4/anime/28851/characters
Ishida, Shouya (https://myanimelist.net/character/80491/Shouya_Ishida)
Nishimiya, Shouko (https://myanimelist.net/character/80243/Shouko_Nishimiya)
Headteacher (https://myanimelist.net/character/214351/Headteacher)
Hirose, Keisuke (https://myanimelist.net/character/97569/Keisuke_Hirose)
Ishida, Maria (https://myanimelist.net/character/97943/Maria_Ishida)
Ishida, Sister (https://myanimelist.net/character/118723/Sister_Ishida)
# ... more characters below but I cut them off for the convenience of this readme

Searching! 🤩

Here are some searching examples you can try:

from anmoku import Anmoku, Character

client = Anmoku(debug=True)

characters = client.search(Character, "anya forger")

for character in characters:
    print(f"{character.name} ({character.image.url})")

client.close()

Merge that with gradio and you have a GUI: https://github.com/THEGOLDENPRO/anmoku/blob/099f6596b685daa65259319d6730bef674ced38a/examples/gradio_anime_search.py#L1-L23

Type hinting support! 🌌

API responses in our library are strongly typed. On top of that, we even provide class interfaces if you wish for stability and ease of use.
|
anmolpant
|
No description available on PyPI.
|
anmolpant-dist-package
|
No description available on PyPI.
|
anmotordesign
|
Electrical Machines Design Automation by Ansys Maxwell Script

YouTube Video: https://youtu.be/uStT2k3V6x0

Goal

Set up a python api server that accepts a motor spec (stator outer diameter, DC bus voltage, max torque, max speed), automatically designs and draws the motor, runs the Ansys analysis, and finally responds with result data (BEMF, cogging torque, max torque, torque ripple, induced voltage, efficiency). (For now only a 10p12s surface PM design; it is still a work in progress...)

Requirements
- Windows 7 or above
- Legal Ansys Maxwell Electromagnetic Suite
- Python 3 (python 3.7.6)
- Python libraries:

pywin32==227
ramda==0.5.5
six==1.13.0
functional-pipeline==0.3.1
ipdb==0.12.3
Flask==1.1.2
Flask-Cors==3.0.8
pandas==1.0.1
numpy==1.18.1
requests==2.24.0

Environment Install Guide (Verified)
- Install Python 3.7.6
- (optional) Install virtual env: pip install virtualenv
- (optional) Create virtual env: virtualenv venv
- (optional) Activate virtual env: ./venv/Scripts/activate
- Install needed libraries using the following command: pip install -r requirements.txt

SPM Motor params

All settings are in params/

Execute Guide

Activate the virtual env (./venv/Scripts/activate), then execute one of:
- Just run the analysis (params set at spec_params in run.py): python run.py
- Run the flask api server, which calls the Ansys run and returns the result as the response (POST method, json data sample in example/, url = http://localhost:5000/run_simu): python server.py
- Run the flask api server, which runs Ansys async in the background and returns the result by sending a request to another url (POST method, json data sample in example/, url = http://localhost:5000/run_simu): python server_run_back.py
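A request against the running server might be built as sketched below. Note that the field names in the payload are assumptions chosen to illustrate the spec parameters named in the Goal section — the real schema is the json data sample shipped in example/:

```python
import json

# Hypothetical motor-spec payload (field names are made up for illustration;
# see example/ in the repository for the project's actual sample).
spec = {
    "stator_outer_diameter_mm": 120,
    "dc_bus_voltage_V": 310,
    "max_torque_Nm": 15,
    "max_speed_rpm": 4000,
}
payload = json.dumps(spec)

# POST it to the running server, e.g. with the requests library:
#   requests.post("http://localhost:5000/run_simu", data=payload,
#                 headers={"Content-Type": "application/json"})
print(payload)
```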
|
anms-ace
|
ACE ToolsThis is the AMM CODEC Engine (ACE) for the DTN Management Architecture (DTNMA).
It is part of the larger Asynchronous Network Management System (ANMS) managed for NASA AMMOS.

It is a library to manage the information in DTNMA Application Data Models (ADMs) and use that information to encode and decode DTNMA Application Resource Identifiers (ARIs) in:
- Text form based on URI encoding
- Binary form based on CBOR encoding

It also includes an ace_ari command line interface (CLI) for translating between the two ARI forms.

Development

To install development and test dependencies for this project, run from the root directory (possibly under sudo if installing to the system path):

pip3 install -r <(python3 -m piptools compile --extra test pyproject.toml 2>&1)

To install the project itself from source run:

pip3 install .

An example of using the ARI transcoder, from the source tree, to convert from text to binary form is:

echo 'ari:/IANA:ion_admin/CTRL.node_contact_add(UVAST.1685728970,UVAST.1685729269,UINT.2,UINT.2,UVAST.25000,UVAST.1)' | PYTHONPATH=./src ADM_PATH=./tests/adms python3 -m ace.tools.ace_ari --inform=text --outform=cborhex

which will produce a hexadecimal output:

0xC1188D410605061616141416161A647A2ECA1A647A2FF502041961A801

Contributing

To contribute to this project, through issue reporting or change requests, see the CONTRIBUTING document.
|
anms-CAmp
|
CAmpPythonThis is the C code generator for the DTN Management Architecture (DTNMA).
It is part of the larger Asynchronous Network Management System (ANMS) managed for NASA AMMOS.

( ,&&&.
) .,.&&
( ( \=__/
) ,'-'.
( ( ,, _.__|/ /|
) /\ -((------((_|___/ |
( // | (`' (( `'--|
_ -.;_/ \\--._ \\ \-._/.
(_;-// | \ \-'.\ <_,\_\`--'|
( `.__ _ ___,') <_,-'__,'
`'(_ )_)(_)_)'This tool uses the JSON representation of an Application Data Model (ADM) to
generate code for various purposes. CAmp generates:C code for usage in NASA ION (Interplanetary Overlay Network)This generation can also carry over custom functions in existing C files for
the ADM, if indicated appropriately in the existing code (see the
Round-tripping Section).SQL code, also for usage in NASA IONACE input files, for usage with the ARI CBOR Encoder (ACE) ToolAdditional generators may be added to account for use cases outside of ION/ACE.
Please contact the developers for more information or suggestions. The
Architecture Section also provides some explanation of the components of CAmp,
and how to incorporate additional generators.NOTECAmp largely assumes that the ADM JSON input can be trusted (i.e., CAmp does not
go to great lengths to fully sanitize all strings found within the ADM). CAmp
does properly escape necessary sequences found in the ADMs tested during
development (e.g., apostrophes in object descriptions).

Development

To install development and test dependencies for this project, run from the root directory (possibly under sudo if installing to the system path):

pip3 install -r <(python3 -m piptools compile --extra test pyproject.toml 2>&1)

To install the project itself from source run:

pip3 install .

View usage options for CAmp:

camp -h

Basic Usage

The camp tool takes a JSON representation of an ADM for a network protocol as
input and calls each of the included generators to generate files for the ADM.The includedtemplate.jsonprovides an example of how a JSON ADM should be
formatted. For more information on this data model, please consult the AMA
Application Data Model IETF draft.Given the JSON representation of the ADM, run camp with:camp <adm.json>Name RegistryIf you're generating files for a new ADM, you may see an error similar to the
following:[Error] this ADM is not present in the name registry. Pass integer value via
command line or set manually in name_registry.cfgThis is because the name of the ADM is not yet present in the camp Name
Registry. To solve this, pass the nickname value for the ADM to camp via the-ncommand line option:camp <adm.json> -n <value>You can also use the-noption to make camp use a different nickname for an
ADM that is present in the camp Name Registry. For example,camp bp_agent.json -n 23Will generate bp_agent files with a nickname of23instead of the registered
value of2. To make these changes permanent (or to add a new ADM to the
name registry), pass the-uflag to camp:camp <adm.json> -n <value> -uOutputDuring a successful camp execution, output similar to the following will be
printed to STDOUT.Loading <path_to_json_file>/<adm.json> ...
[ DONE ]
Generating files ...
Working on .//ace/adm_<adm>.json [ DONE ]
Working on .//agent/adm_<adm>_impl.h [ DONE ]
Working on .//agent/adm_<adm>_impl.c [ DONE ]
Working on .//adm_<adm>.sql [ DONE ]
Working on .//shared/adm_<adm>.h [ DONE ]
Working on .//mgr/adm_<adm>_mgr.c [ DONE ]
Working on .//agent/adm_<adm>_agent.c [ DONE ]
[ End of CAmpPython Execution ]This output shows that camp completed a successful generation of each of the
files listed. If they don't already exist, camp will create the following
directories in the current directory:aceagentsharedmgrand put generated files into the appropriate created directory. Use the-oflag with camp to redirect output to a different directory.camp <adm.json> -o <output_directory>If the path at <output_directory> does not already exist, camp will create it,
and will create the directories listed above within <output_directory>.Camp will not delete any existing directory structure, but files present in
the output directories with the same name as generated files will be
overwritten.Custom Code and Round-trippingTheadm_<adm>_impl.candadm_<adm>_impl.hfiles generated for NASA ION
contain functions whose bodies cannot be automatically generated with knowledge
of the ADM alone. When generated, these fuctions are marked with tags similar to
the following:/*
* +----------------------------------------------------------------------+
* |START CUSTOM FUNCTION <function_name> BODY
* +----------------------------------------------------------------------+
*/
/*
* +----------------------------------------------------------------------+
* |STOP CUSTOM FUNCTION <function_name> BODY
* +----------------------------------------------------------------------+
*/Additionally, the user may wish to add additional custom functions and/or header
files to these generated files. To allow re-generation of camp files with
minimal re-work for custom code in these files, camp has a 'roundtripping'
feature that allows preservation of these custom additions in subsequent file
generations.The roundtripping feature in camp will save any code in the file that falls
between camp custom tags, and will add it to the newly-generated version of the
file. Example usage:camp <adm.json> -c <path_to_existing_impl.c> -h <path_to_existing_impl.h>The resulting generated impl.c and impl.h files will contain the custom code
from the impl.c and impl.h files passed to camp.Current acceptable custom tags are:custom function body (example above)custom includes (/* [START|STOP] CUSTOM INCLUDES HERE */)custom functions (/* [START|STOP] CUSTOM FUNCTIONS HERE */)For custom function bodies, the <function_name> included in the custom function
tag must be the same as the one used in the ADM for the custom function to be
copied over to the correct area of the new file.CAmp Architecturetemplate.json - Example JSON ADM templateCAmpPython/ - contains all of the source code for campCAmpPython.py - Main script of camp. This script calls all necessary
generators and handles user inputdata/name_registry.cfg - Initial name registry configuration file installed
with camp.utils/name_registry.py - Fuctions for getting and setting values of the camp
name registry.generators/ - All generator scripts and their utility functionscreate_ace.py - Generates ACE tool input filecreate_agent.py - Generates agent file (C code) for usage in NASA IONcreate_gen_h.py - Generates the shared header file needed for NASA IONcreate_impl_c.py - Generates the implementation file (C code) for usage in
NASA IONcreate_impl_h.py - Generates the header file for the implementation file
created by create_impl_c.pycreate_mgr_c.py - Generates the manager file for usage in NASA IONcreate_mysql.py - Generates an SQL file for usage with NASA ION stored
procedureslib/ - Library functions for generating commonly-used patterns and
accessing portions of the ADM.campch.py - library functions commonly needed specifically for C code
generators.campch_roundtrip.py - round-tripping functionscommon/ - Library functions helpful to all generators.campsettings.py - initializes various global variables for camp
(enumerations for portions of the ADM, etc.)camputil.py - utility functions for parsing the JSON input file
and creating ARIs. Contains the Retriever class,
which is used by all generators to access ADM datajsonutil.py - utility functions to validate JSON input.Adding GeneratorsTo add a new generator to camp, create a python script that creates the file
and add it to theCAmpPython/generators/directory. Then, in CAmpPython.py,
import the new generator and add it to the file generation code of themain()function (starting at line 105).All generators should:define acreate()method as their main function, which takes as its first
and second argument:a Retriever object (pre-populated with the ADM passed to camp)a string that represents the path to the output directoryutilize the Retriever object to access fields of the JSON ADMplace generated file(s) in the output directory passed as the second argument
to thecreate()function (the generator may choose to make a sub-directory
in the output directory)ContributingTo contribute to this project, through issue reporting or change requests, see theCONTRIBUTINGdocument.
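Following that contract, a new generator module might be sketched as below. The module name, sub-directory, and output content here are hypothetical; only the create(retriever, out_path) signature and the file-placement behavior come from the requirements above:

```python
# my_generator.py -- skeleton for an additional CAmp generator.
import os

def create(retriever, out_path):
    # First argument: a Retriever pre-populated with the ADM passed to camp;
    # second argument: the path to the output directory. A real generator
    # would pull ADM fields from the retriever; this sketch only demonstrates
    # the expected file placement in a sub-directory of the output directory.
    target_dir = os.path.join(out_path, "custom")
    os.makedirs(target_dir, exist_ok=True)
    out_file = os.path.join(target_dir, "adm_custom.txt")
    with open(out_file, "w") as f:
        f.write("/* generated from the ADM */\n")
    return out_file
```

The module would then be imported in CAmpPython.py and its create() added to the file generation code alongside the built-in generators.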
|
ann
|
No description available on PyPI.
|
ann2pmml
|
ann2pmml is an automated PMML exporter for neural network models (for supported models see below) into PMML text format, which addresses the problems mentioned below.

Storing predictive models in a binary format (e.g. Pickle) may be dangerous from several perspectives - to name a few:

- binary compatibility: you update the libraries and may not be able to open a model serialized with an older version
- dangerous code: when you use a model made by someone else
- interpretability: the model cannot be easily opened and reviewed by a human
- etc.

In addition, PMML is able to persist the scaling of the raw input features, which helps gradient descent run smoothly through the optimization space.

Installation

To install ann2pmml, simply:

    $ pip install ann2pmml

Example

Example on Iris data - for more examples see the examples folder.

    from ann2pmml import ann2pmml
    from sklearn.datasets import load_iris
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from tensorflow.keras.utils import to_categorical
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    iris = load_iris()
    X = iris.data
    y = iris.target
    X = X.astype(np.float32)
    y = y.astype(np.int32)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3)

    std = StandardScaler()
    X_train_scaled = std.fit_transform(X_train)
    X_test_scaled = std.transform(X_test)

    y_train_ohe = to_categorical(y_train)
    y_test_ohe = to_categorical(y_test)

    model = Sequential()
    model.add(Dense(units=X_train.shape[1], input_shape=(X_train_scaled.shape[1],), activation='tanh'))
    model.add(Dense(units=5, activation='tanh'))
    model.add(Dense(units=y_test_ohe.shape[1], activation='sigmoid'))
    model.compile(loss='categorical_crossentropy', optimizer='sgd')
    model.fit(X_train_scaled, y_train_ohe, epochs=10, batch_size=1, verbose=1,
              validation_data=(X_test_scaled, y_test_ohe))

    params = {
        'feature_names': ['sepal_length', 'sepal_width', 'petal_length', 'petal_width'],
        'target_values': ['setosa', 'virginica', 'versicolor'],
        'target_name': 'specie',
        'copyright': 'lampda',
        'description': 'Simple Keras model for Iris dataset.',
        'model_name': 'Iris Model',
    }

    ann2pmml(estimator=model, transformer=std, file='keras_iris.pmml', **params)

Params explained

- estimator: Keras/TF model to be exported as PMML (for supported models - see below).
- transformer: if provided (and it's supported - see below) then scaling is applied to data fields.
- file: name of the file where the PMML will be exported.
- feature_names: when provided and of the same shape as the input layer, features will have custom names, otherwise generic names (x0, ..., xn-1) will be used.
- target_values: when provided and of the same shape as the output layer, target values will have custom names, otherwise generic names (y0, ..., yn-1) will be used.
- target_name: when provided, the target variable will have a custom name, otherwise the generic name class will be used.
- copyright: who is the author of the model.
- description: optional parameter that sets description within the PMML document.
- model_name: optional parameter that sets model_name within the PMML document.

What is supported?

- Models
    - keras.models.Sequential
- Activation functions
    - tanh
    - sigmoid/logistic
    - linear
    - softmax normalization on the output layer (with activation identity on output units)
- Scalers
    - sklearn.preprocessing.StandardScaler
    - sklearn.preprocessing.MinMaxScaler

License

This software is licensed under MIT licence.

https://opensource.org/licenses/MIT
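The interpretability argument above can be illustrated without any of the libraries involved: a PMML file is plain XML that can be opened and inspected with standard tools, whereas a pickled model is opaque bytes. A minimal sketch (the PMML snippet below is a hand-written toy, not actual ann2pmml output):

```python
import pickle
import xml.etree.ElementTree as ET

# A toy PMML-like document: plain text, reviewable by a human.
pmml_text = """<PMML version="4.2">
  <DataDictionary>
    <DataField name="sepal_length" optype="continuous" dataType="double"/>
  </DataDictionary>
</PMML>"""

# The XML structure is trivially machine-readable as well.
root = ET.fromstring(pmml_text)
field_names = [f.get("name") for f in root.iter("DataField")]

# The same information pickled is an opaque binary blob: nothing to review.
blob = pickle.dumps({"field_names": field_names})
```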
|
anna
|
Anna helps you configure your application by building the bridge between the components of your application and external configuration sources. It allows you to keep your code short and flexible yet explicit when it comes to configuration - the necessary tinkering is performed by the framework.

Anna contains lots of "in-place" documentation aka doc strings, so make sure you check out those too ("help yourself")!

80 seconds to Anna

Anna is all about parameters and configuration sources. You declare parameters as part of your application (on a class for example) and specify their values in a configuration source. All you're left to do then is to point your application to the configuration source and let the framework do its job.

An example is worth a thousand words

Say we want to build an application that deals with vehicles. I'm into cars so the first thing I'll do is make sure we get one of those:

>>> class Car:
... def __init__(self, brand, model):
... self._brand = brand
... self._model = model
>>>
>>> your_car = Car('Your favorite brand', 'The hottest model')

Great! We let the user specify the car's brand and model and return him a brand new car!

Now we're using anna for declaring the parameters:

>>> from anna import Configurable, parametrize, String, JSONAdaptor
>>>
>>> @parametrize(
... String('Brand'),
... String('Model')
... )
... class Car(Configurable):
... def __init__(self, config):
... super(Car, self).__init__(config)
>>>
>>> your_car = Car(JSONAdaptor('the_file_where_you_specified_your_favorite_car.json'))

The corresponding json file would look like this:

{
"Car/Parameters/Brand": "Your favorite brand",
"Car/Parameters/Model": "The hottest model",
}

It's a bit more to type but this comes with a few advantages:

- We can specify the type of the parameter and anna will handle the necessary conversions for us; anna ships with plenty of parameter types so there's much more to it than just strings!
- If we change our mind later on and want to add another parameter, say for example the color of the car, it's as easy as declaring a new parameter String('Color') and setting it as a class attribute; all the user needs to do is specify the corresponding value in the configuration source. Note that there's no need to change any interfaces/signatures or other intermediate components which carry the user input to the receiving class; all it expects is a configuration adaptor which points to the configuration source.
- The configuration source can host parameters for more than one component, meaning again that we don't need to modify intermediate parts when adding new components to our application; all we need to do is provide the configuration adaptor.

Five minutes hands-on

The 80 seconds intro piqued your curiosity? Great! So let's move on! For the following considerations we'll pick up the example from above and elaborate on it more thoroughly.

Let's start with a quick Q/A session

So what happened when using the decorator ``parametrize``? It received a number of parameters as arguments which it set as attributes on the receiving class. Field names are deduced from the parameters' names by applying CamelCase to _snake_case_with_leading_underscore conversion. That is, String('Brand') is set as Car._brand.

All right, but how did the instance receive its values then? Note that Car inherits from Configurable, and Configurable.__init__ is where the actual instance configuration happens. We provided it a configuration adaptor which points to the configuration source (in this case a local file) and the specific values were extracted from there. Values are set on the instance using the parameter's field name, that is String('Brand') will make an instance receive the corresponding value at your_car._brand (Car._brand is still the parameter instance).

Okay, but how did the framework know where to find the values in the configuration source? Well, there's a bit more going on during the call to parametrize than is written above. In addition to setting the parameters on the class it also deduces a configuration path for each parameter which specifies where to find the corresponding value in the source. The path consists of a base path and the parameter's name: "<base-path>/<name>" (slashes are used to delimit path elements). parametrize tries to get this base path from the receiving class by looking up the attribute CONFIG_PATH. If it has no such attribute or if it's None, then the base path defaults to "<class-name>/Parameters". However in our example - although we didn't set the config path explicitly - it was already there, because Configurable uses a custom metaclass which adds the class attribute CONFIG_PATH if it's missing or None, using the same default as above. So if you want to specify a custom path within the source you can do so by specifying the class attribute CONFIG_PATH.

_snake_case_with_leading_underscore, not too bad, but can I choose custom field names for the parameters too? Yes, besides providing a number of parameters as arguments to parametrize we have the option to supply it a number of keyword arguments as well which represent field_name / parameter pairs; the key is the field name and the value is the parameter: brand_name=String('Brand').

Now that we declared all those parameters, how does the user know what to specify? anna provides a decorator document_parameters which will add all declared parameters to the component's doc string under a new section. Another option for the user is to retrieve the declared parameters via get_parameters (which is inherited from Configurable) and print their string representations, which contain comprehensive information:

>>> for parameter in Car.get_parameters():
...     print(parameter)

Of course documenting the parameters manually is also an option.

Alright so let's get to the code

>>> from anna import Configurable, parametrize, String, JSONAdaptor
>>>
>>> @parametrize(
... String('Model'),
... brand_name=String('Brand')
... )
... class Car(Configurable):
... CONFIG_PATH = 'Car'
... def __init__(self, config):
...         super(Car, self).__init__(config)

Let's first see what information we can get about the parameters:

>>> for parameter in Car.get_parameters():
... print(parameter)
...
{
"optional": false,
"type": "StringParameter",
"name": "Model",
"path": "Car"
}
{
"optional": false,
"type": "StringParameter",
"name": "Brand",
"path": "Car"
}

Note that it prints "StringParameter" because that's the parameter's actual class; String is just a shorthand. Let's see what we can get from the doc string:

>>> print(Car.__doc__)
None
>>> from anna import document_parameters
>>> Car = document_parameters(Car)
>>> print(Car.__doc__)
Declared parameters
-------------------
(configuration path: Car)
Brand : String
Model : String

Now that we know what we need to specify, let's get us a car! The JSONAdaptor can also be initialized with a dict as root element, so we're just creating our configuration on the fly:

>>> back_to_the_future = JSONAdaptor(root={
... 'Car/Brand': 'DeLorean',
... 'Car/Model': 'DMC-12',
... })
>>> doc_browns_car = Car(back_to_the_future)
>>> doc_browns_car.brand_name # Access via our custom field name.
'DeLorean'
>>> doc_browns_car._model # Access via the automatically chosen field name.
'DMC-12'

Creating another car is as easy as providing another configuration source:

>>> mr_bonds_car = Car(JSONAdaptor(root={
... 'Car/Brand': 'Aston Martin',
... 'Car/Model': 'DB5',
... }))

Let's assume we want more information about the brand than just its name. We have nicely stored all the information in a database:

>>> database = {
... 'DeLorean': {
... 'name': 'DeLorean',
... 'founded in': 1975,
... 'founded by': 'John DeLorean',
... },
... 'Aston Martin': {
... 'name': 'Aston Martin',
... 'founded in': 1913,
... 'founded by': 'Lionel Martin, Robert Bamford',
... }}

We also have a database access function which we can use to load stuff from the database:

>>> def load_from_database(key):
...     return database[key]

To load this database information instead of just the brand's name, we only have to modify the Car class to declare a new parameter: ActionParameter (or Action). An ActionParameter wraps another parameter and lets us specify an action which is applied to the parameter's value when it's loaded. For our case that is:

>>> from anna import ActionParameter
>>> Car.brand = ActionParameter(String('Brand'), load_from_database)
>>> doc_browns_car = Car(back_to_the_future)
>>> doc_browns_car.brand
{'founded by': 'John DeLorean', 'name': 'DeLorean', 'founded in': 1975}
>>> doc_browns_car.brand_name
'DeLorean'

Note that we didn't need to provide a new configuration source, as the new brand parameter is based on the brand name which is already present.

Say we also want to obtain the year in which the model was first produced; we have a function for exactly that purpose, however it requires the brand name and model name as one string:

>>> def first_produced_in(brand_and_model):
...     return {'DeLorean DMC-12': 1981, 'Aston Martin DB5': 1963}[brand_and_model]

That's not a problem because the ActionParameter type lets us combine multiple parameters:

>>> Car.first_produced_in = ActionParameter(
... String('Brand'),
... lambda brand, model: first_produced_in('%s %s' % (brand, model)),
...     depends_on=('Model',))

Other existing parameters, specified either by name or by reference via the keyword argument depends_on, are passed as additional arguments to the given action.

In the above example we declared parameters on a class using parametrize, but you could as well use parameter instances independently and load their values via load_from_configuration, which expects a configuration adaptor as well as a configuration path which localizes the parameter's value. You also have the option to provide a specification directly via load_from_representation. This function expects the specification as a unicode string and additional (meta) data as a dict (a unit for PhysicalQuantities for example).

This introduction was meant to demonstrate the basic principles, but there's much more to anna (especially when it comes to parameter types)! So make sure to check out also the other parts of the docs!

Parameter types

A great variety of parameter types are at your disposal:

- Bool
- Integer
- String
- Number
- Vector
- Duplet
- Triplet
- Tuple
- PhysicalQuantity
- Action
- Choice
- Group
- ComplementaryGroup
- SubstitutionGroup

Configuration adaptors

Two adaptor types are provided:

- XMLAdaptor for connecting to xml files.
- JSONAdaptor for connecting to json files (following some additional conventions).

Generating configuration files

Configuration files can of course be created manually, however anna also ships with a PyQt frontend that can be integrated into custom applications. The frontend provides input forms for all parameter types as well as for whole parametrized classes, together with convenience methods for turning the forms' values into configuration adaptor instances, which in turn can be dumped to files. Both PyQt4 and PyQt5 are supported. See anna.frontends.qt.
|
anna-api-test-framework
|
Framework for rapid development of API tests and report generation

Authors

- @EvgeniiGerasin

Features

- Rapid and straightforward development of tests using high-level methods
- Generating a report with test results in Allure
- The report will be useful for stakeholders

Installation

Install anna-api-test-framework with pip:

    pip install anna-api-test-framework

Usage/Examples

    from anna import Action, Report, Assert


    @Report.epic('Simple tests')
    @Report.story('Tests google')
    @Report.testcase('https://www.google.com', 'Google')
    @Report.link('https://www.google.com', 'Just another link')
    class TestExample:

        @Report.title('Simple test google')
        @Report.severity('CRITICAL')
        def test_simple_request(self):
            url = 'https://google.com'
            method = 'GET'
            want = 200

            # insert description of the test
            Report.description(url=url, method=method, other='other information')

            # doing request and getting response
            action = Action()
            response = action.request(method=method, url=url)
            got = response.status_code

            # checking response
            with Report.step('Checking response'):
                Assert.compare(
                    variable_first=want,
                    comparison_sign='==',
                    variable_second=got,
                    text_error='Response status code is not equal to expected')

To run the tests and generate report data, use the following command:

    pytest --alluredir="./results"

To generate and open a report you need to install Allure and use the following commands:

    allure generate "./results" -c -o "./report"
    allure open "./report"

After that, the generated report will automatically open in your browser. The report contains all the information you need.
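The Assert.compare call in the example maps a comparison sign string onto the actual check. A framework-free sketch of that idea (an assumption about the concept, not the library's real implementation):

```python
import operator

# Map comparison signs onto operator functions; this is a hypothetical
# illustration of what a helper like Assert.compare might do internally.
_SIGNS = {
    '==': operator.eq,
    '!=': operator.ne,
    '<': operator.lt,
    '<=': operator.le,
    '>': operator.gt,
    '>=': operator.ge,
}


def compare(variable_first, comparison_sign, variable_second, text_error=''):
    """Raise AssertionError with text_error if the comparison fails."""
    if not _SIGNS[comparison_sign](variable_first, variable_second):
        raise AssertionError(
            text_error or
            f'{variable_first} {comparison_sign} {variable_second} failed')
```

Dispatching on a sign string like this keeps the test body declarative, so the same call site works for equality, ordering, and inequality checks.
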
|