package
package-description
anime-muip
No description available on PyPI.
anime-or-not
No description available on PyPI.
anime-pgen
Preview generator for Shikimori. Demo. Description: anime-pgen is a CLI utility for generating preview images from Shikimori data (downloading the data is included in the functionality). Typer is used as the framework for organizing the CLI interface.
Requirements and installation: Python ^3.9; pip, poetry, or any other Python package manager; an application on Shikimori (an APPLICATION_NAME is required to work).
Installation: $> pip install anime-pgen
[Optional] Shell completions: $> pgen --install-completion
Usage. You will need to create a folder for configs and content: $> mkdir previews && cd previews
Next, add a config file. It can be taken from the repository. File name: config.yaml
$> cp config.example.yaml config.yaml
$> l
total 16
drwxr-xr-x  4 user staff 128B Jun 28 19:48 .
drwxr-xr-x 23 user staff 736B Jun 28 19:43 ..
-rw-r--r--  1 user staff 1.1K Jun 28 19:48 config.yaml
For convenience, create a content folder; fonts and icons will go there:
$> mkdir content
$> l
total 16
drwxr-xr-x  5 user staff 160B Jun 28 19:52 .
drwxr-xr-x 23 user staff 736B Jun 28 19:49 ..
-rw-r--r--  1 user staff 1.1K Jun 28 19:48 config.yaml
drwxr-xr-x  2 user staff  64B Jun 28 19:52 content
Into the newly created content folder you can copy, straight from the repository, the two-part Shikimori logo, the rating icon and the background fill tile (or use your own):
$> cp shikimori-glyph.png content/shikimori-glyph.png
$> cp shikimori-logo.png content/shikimori-logo.png
$> cp star.png content/star.png
$> cp tile.png content/tile.png
$> tree -a
.
└── previews
    ├── config.yaml
    └── content
        ├── shikimori-glyph.png
        ├── shikimori-logo.png
        ├── star.png
        └── tile.png
Fonts also need to be placed in content. Shikimori uses: 🔗 OpenSans for the title and description, 🔗 Tahoma for the rating, 🔗 NotoSerif_JP for Japanese characters. The final previews folder looks roughly like this:
$> tree -a -L 4
.
└── previews
    ├── config.yaml
    └── content
        ├── Noto_Serif_JP
        │   ├── NotoSerifJP-Black.otf
        │   ├── NotoSerifJP-Bold.otf
        │   ├── NotoSerifJP-ExtraLight.otf
        │   ├── NotoSerifJP-Light.otf
        │   ├── NotoSerifJP-Medium.otf
        │   ├── NotoSerifJP-Regular.otf
        │   ├── NotoSerifJP-SemiBold.otf
        │   └── OFL.txt
        ├── Open_Sans
        │   ├── LICENSE.txt
        │   ├── OpenSans-Italic-VariableFont_wdth,wght.ttf
        │   ├── OpenSans-VariableFont_wdth,wght.ttf
        │   ├── README.txt
        │   └── static
        ├── Tahoma
        │   ├── COPYRIGHT.txt
        │   └── tahoma.ttf
        ├── shikimori-glyph.png
        ├── shikimori-logo.png
        ├── star.png
        └── tile.png
config.yaml. Let's look at the configuration file. By default it looks like this: size: 'big' colors: background: '#ffffff' text: '#343434' year: '#555555' rating: active: '#4c86c8' regular: '#cccccc' content: images: background_tile: content/tile.png star: content/star.png logo: glyph: content/shikimori-glyph.png text: content/shikimori-logo.png fonts: text: content/Open_Sans/OpenSans-VariableFont_wdth,wght.ttf bold_text: content/Open_Sans/static/OpenSans/OpenSans-Bold.ttf numbers: content/Tahoma/tahoma.ttf japanese: content/Noto_Serif_JP/NotoSerifJP-Bold.otf
size: 'big'. Possible values: big = 1200 x 630 (the default) and small = 600 x 315. This is the size of the final image; the numbers follow the preview format recommended by Facebook/Twitter/VK.
rating: active: '#4c86c8' regular: '#cccccc'. Colors of the rating stars, active ones and placeholders. The config shows their default values.
colors: background: '#ffffff' text: '#343434' year: '#555555'. Colors of the background, of all text, and of the release year. The config shows their default values.
Important! colors and size are optional; if they are not specified in the file, default values are used (they match the default config). The content fields are required. Important 2! Images cannot be .svg, only .jpeg|.jpg|.png (a limitation of the imaging library).
content: images: background_tile: content/tile.png. Path to the background tile file (for example, the Shikimori default). Recommendations: square (otherwise it will be squashed), seamless, a .png with an alpha channel if you want it to blend nicely over a white background.
content: images: star: content/star.png. Path to the rating star file. Requirements: transparent background, a black shape, square. When placed on the preview, black is recolored to rating.active or rating.regular.
logo: glyph: content/shikimori-glyph.png text: content/shikimori-logo.png. The two-part Shikimori logo: the glyph plus "SHIKIMORI". Requirements: equal height, .png with an alpha channel.
fonts: text / bold_text / numbers / japanese. Font paths: text for the description and captions, bold_text for the title, numbers for the rating and year, japanese for kanji, hiragana and katakana. Requirements: TrueType fonts.
Usage. Detailed documentation for the CLI interface: DOCS.md. Usage example: Makefile. Usage consists of two parts: download the data from the Shikimori API by anime or manga id, then generate a preview from that data.
Let's download information about the anime "Cowboy Bebop":
$> pgen fetch 1 --app-name <APPLICATION_NAME_from_Shikimori>
Successfully saved to .pgen.json
$> l
total 40
drwxr-xr-x 6 vladimirlevin staff 192B Jun 28 20:36 .
drwxr-xr-x 3 vladimirlevin staff  96B Jun 28 19:56 ..
-rw-r--r-- 1 vladimirlevin staff 9.2K Jun 28 20:36 .pgen.json
-rw-r--r-- 1 vladimirlevin staff 1.1K Jun 28 19:48 config.yaml
drwxr-xr-x 9 vladimirlevin staff 288B Jun 28 20:03 content
By default the data is saved to .pgen.json; the path can be changed by passing the flag --save-path 'my_file.json':
$> pgen fetch 1 --app-name <APPLICATION_NAME_from_Shikimori> --save-path "my_file.json"
Successfully saved to my_file.json
Now the generation:
$> pgen make-preview .pgen.json --output-folder "." --config "config.yaml" --app-name <APPLICATION_NAME_from_Shikimori>
Successfully create previews: - 1.jpg
Done! 🥳
FAQ
Q: How do I render many at once? A: With the -M flag you can download and render many anime/manga in one go:
$> pgen fetch -M "1,5,8" --app-name <APPLICATION_NAME_from_Shikimori>
Successfully saved to .pgen.json
$> pgen make-preview .pgen.json --output-folder "." --config "config.yaml" --app-name <APPLICATION_NAME_from_Shikimori>
Successfully create previews: - 1.jpg - 5.jpg - 8.jpg
Q: How do I render manga? A: With the -m flag you can download manga. Preview creation relies on the downloaded data, so nothing needs to change in the second command:
$> pgen fetch -m -M "1,8" --app-name <APPLICATION_NAME_from_Shikimori>
Successfully saved to .pgen.json
$> pgen make-preview .pgen.json --output-folder "." --config "config.yaml" --app-name <APPLICATION_NAME_from_Shikimori>
Successfully create previews: - 1.jpg - 8.jpg
animeplanet
No description available on PyPI.
animePy
No description available on PyPI.
anime-python
Library anime-python is currently under development. This python lib will use the AnilistPython module to retrieve anime information when complete. For more information, please visit https://github.com/ReZeroE/anime-python.
anime.rank
UNKNOWN
anime-reference
anime_reference. Installing. Via pip: I wrote this library to quickly get summaries from my favourite anime episodes. Hopefully, you find it easy to use. Install using the following command: pip install anime-reference. Via GitHub: Alternatively, you can just clone this repo and import the libraries at your own discretion. Documentation: Currently, the package can get per-episode summaries for a few anime titles. The package will be expanding to include more and other content as well, but this is a start. For full details, please refer to the documentation.
anime-relations-py
anime-relations-py. A parser for anime-relations. So you don't have to. More information on anime episode relations can be found here.
Installation: $ pip install -U anime-relations-py
Usage:
>>> from anime_relations_py import AnimeRelations
>>> parser = AnimeRelations()  # instance is empty until fetched
>>> parser.fetch_sync()  # alt: await parser.fetch_async()
>>> rule = parser.from_mal(40028)
>>> rule
Rule(mal_from=40028, kitsu_from=42422, anilist_from=110277, episodes_from=(60, 75), mal_to=40028, kitsu_to=42422, anilist_to=110277, episodes_to=(1, 16))
>>> rule.get_episode_redirect(65)
6
>>> rule.mal_to
40028
>>> parser.meta
{'version': '1.3.0', 'last_modified': '2021-02-25'}
For more advanced usage and other methods, please look at the source code. It's quite short and well-documented.
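As a rough illustration of what the redirect in the REPL output above appears to compute, the episode ranges seem to map by a simple offset; this is an assumption drawn only from the example shown, not from the library's source:

```python
# Sketch of the offset mapping suggested by the example output above
# (episodes_from=(60, 75) -> episodes_to=(1, 16)); an assumption, not
# anime-relations-py's actual implementation.
def redirect(episode, episodes_from=(60, 75), episodes_to=(1, 16)):
    start_from, end_from = episodes_from
    start_to, _ = episodes_to
    if not start_from <= episode <= end_from:
        raise ValueError("episode outside the rule's range")
    return episode - start_from + start_to

print(redirect(65))  # 6, matching rule.get_episode_redirect(65) above
```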
animerem
Failed to fetch description. HTTP Status Code: 404
anime-renamer
No description available on PyPI.
animerim
This is a simple anime scraper that can scrape anime info or watching/download links for anime episodes. Change Log: 0.1 (29/03/2021) - First Release
animesearchinfo
Anime Search. This Python program features a modular anime library leveraging the Jikan API. The animeLibrary class retrieves anime details such as title, score, episode count, and synopsis based on user input.
API: https://api.jikan.moe/v4/anime
Installation: pip install animesearchinfo
Usage/Examples:
from anime_library import animeLibrary

def main():
    jikan = animeLibrary()
    print(" ")
    query = input("Search an anime: ")
    print(" ")
    hasil_pencarian = jikan.cek_anime(query)
    if isinstance(hasil_pencarian, dict):
        for anime in hasil_pencarian.get('data', []):
            print(f"Title: {anime.get('title')}")
            print(f"Episodes: {anime.get('episodes')}")
            print(f"Score: {anime.get('scores')}")
            print(f"Synopsis: {anime.get('synopsis')}")

if __name__ == "__main__":
    main()
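Since the example above wraps the Jikan search endpoint listed in the entry, the same data can also be fetched directly if you want to see what the library is working with. This is a minimal sketch against Jikan v4 itself, not animesearchinfo; the q and limit parameters and the response fields (title, episodes, score, synopsis) come from Jikan's documented API:

```python
# Minimal sketch: query the Jikan v4 search endpoint the library builds on.
import requests

resp = requests.get("https://api.jikan.moe/v4/anime",
                    params={"q": "Cowboy Bebop", "limit": 3})
resp.raise_for_status()
for anime in resp.json().get("data", []):
    # Jikan v4 returns a list of anime objects under "data"
    print(anime.get("title"), "-", anime.get("episodes"), "eps, score", anime.get("score"))
```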
animesr
AnimeSR (NeurIPS 2022) :open_book: AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos. Yanze Wu, Xintao Wang, Gen Li, Ying Shan. Tencent ARC Lab; Platform Technologies, Tencent Online Video.
:triangular_flag_on_post: Updates
2022.11.28: release codes & models.
2022.08.29: release AVC-Train and AVC-Test.
Video Demos
https://user-images.githubusercontent.com/11482921/204205018-d69e2e51-fbdc-4766-8293-a40ffce3ed25.mp4
https://user-images.githubusercontent.com/11482921/204205109-35866094-fa7f-413b-8b43-bb479b42dfb6.mp4
:wrench: Dependencies and Installation
Python >= 3.7 (recommend to use Anaconda or Miniconda)
PyTorch >= 1.7
Other required packages in requirements.txt
Installation
Clone repo:
git clone https://github.com/TencentARC/AnimeSR.git
cd AnimeSR
Install:
# Install dependent packages
pip install -r requirements.txt
# Install AnimeSR
python setup.py develop
:zap: Quick Inference
Download the pre-trained AnimeSR models [Google Drive] and put them into the weights folder. Currently, the available pre-trained models are:
AnimeSR_v1-PaperModel.pth: v1 model, also the paper model. You can use this model to reproduce the paper results.
AnimeSR_v2.pth: v2 model. Compared with v1, this version has better naturalness, fewer artifacts, and better texture/background restoration. If you want better results, use this model.
AnimeSR supports both frames and videos as input for inference. We provide several sample test cases in google drive; you can download them and put them into the inputs folder.
Inference on Frames
python scripts/inference_animesr_frames.py -i inputs/tom_and_jerry -n AnimeSR_v2 --expname animesr_v2 --save_video_too --fps 20
Usage:
-i --input           Input frames folder/root. Supports first level dir (i.e., input/*.png) and second level dir (i.e., input/*/*.png)
-n --model_name      AnimeSR model name. Default: AnimeSR_v2, can also be AnimeSR_v1-PaperModel
-s --outscale        The netscale is x4, but you can achieve an arbitrary output scale (e.g., x2 or x1) with the outscale argument. The program will further perform a cheap resize operation after the AnimeSR output. Default: 4
-o --output          Output root. Default: results
--expname            Identify the name of your current inference. The outputs will be saved in $output/$expname
--save_video_too     Save the output frames to video. Default: off
--fps                The fps of the (possible) saved videos. Default: 24
After running the above command, you will get the SR frames in results/animesr_v2/frames and the SR video in results/animesr_v2/videos.
Inference on Video
# single gpu and single process inference
CUDA_VISIBLE_DEVICES=0 python scripts/inference_animesr_video.py -i inputs/TheMonkeyKing1965.mp4 -n AnimeSR_v2 -s 4 --expname animesr_v2 --num_process_per_gpu 1 --suffix 1gpu1process
# single gpu and multi process inference (you can use multi-processing to improve GPU utilization)
CUDA_VISIBLE_DEVICES=0 python scripts/inference_animesr_video.py -i inputs/TheMonkeyKing1965.mp4 -n AnimeSR_v2 -s 4 --expname animesr_v2 --num_process_per_gpu 3 --suffix 1gpu3process
# multi gpu and multi process inference
CUDA_VISIBLE_DEVICES=0,1 python scripts/inference_animesr_video.py -i inputs/TheMonkeyKing1965.mp4 -n AnimeSR_v2 -s 4 --expname animesr_v2 --num_process_per_gpu 3 --suffix 2gpu6process
Usage:
-i --input           Input video path or extracted frames folder
-n --model_name      AnimeSR model name. Default: AnimeSR_v2, can also be AnimeSR_v1-PaperModel
-s --outscale        The netscale is x4, but you can achieve an arbitrary output scale (e.g., x2 or x1) with the outscale argument. The program will further perform a cheap resize operation after the AnimeSR output. Default: 4
-o --output          Output root. Default: results
--expname            Identify the name of your current inference. The outputs will be saved in $output/$expname
--fps                The fps of the (possible) saved videos. Default: None
--extract_frame_first  If the input is a video, you can still extract the frames first; otherwise AnimeSR will read from the stream
--num_process_per_gpu  Since slow I/O speed keeps GPU utilization low, as long as the video memory is sufficient we recommend placing multiple processes on one GPU to increase the utilization of each GPU. The total process count will be num_process_per_gpu * num_gpu
--suffix             You can add a suffix string to the SR video name, for example, 1gpu3processx2, which means the SR video was generated with one GPU, three processes and an outscale of x2
--half               Use half precision for inference; it won't have a big impact on the visual results
SR videos are saved in the results/animesr_v2/videos/$video_name folder. If you are looking for portable executable files, you can try our realesr-animevideov3 model, which shares similar technology with AnimeSR.
:computer: Training
See Training.md
Request for AVC-Dataset
Download and carefully read the LICENSE AGREEMENT PDF file. If you understand, acknowledge, and agree to all the terms specified in the LICENSE AGREEMENT, please email [email protected] the LICENSE AGREEMENT PDF file, your name, and institution. We will keep the license and send the download link of the AVC dataset to you.
Acknowledgement
This project is built based on BasicSR.
Citation
If you find this project useful for your research, please consider citing our paper:
@InProceedings{wu2022animesr, author={Wu, Yanze and Wang, Xintao and Li, Gen and Shan, Ying}, title={AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos}, booktitle={Advances in Neural Information Processing Systems}, year={2022}}
:e-mail: Contact
If you have any questions, please email [email protected].
animestreamer
AnimeStreamer. AnimeStreamer is a TUI (text-based user interface) for searching through nyaa.si torrents via NyaaPy and streaming them via WebTorrent. Torrent names can be parsed thanks to Anitopy. Uses Textual for rendering the interface.
Dependencies: webtorrent-cli → npm install webtorrent-cli -g. Python packages are automatically installed when installing the PyPI package; the list can be found in setup.cfg.
Install and run: pip install animestreamer (PyPI package), then animestreamer.
Usage: TODO
animethemes
AnimeThemes
animethemes-batch-encoder
Description: Generate and execute a collection of FFmpeg commands sequentially from an external file to produce WebMs that meet AnimeThemes.moe encoding standards. Take advantage of sleep, work, or any other time that we cannot actively monitor the encoding process to produce a set of encodes for later quality checking and/or tweaking for additional encodes. Ideally we are iterating over a combination of filters and settings, picking the best one at the end.
Install
Requirements: FFmpeg, Python >= 3.6
Install: pip install animethemes-batch-encoder
Usage
python -m batch_encoder [-h] [--generate | -g] [--execute | -e] [--custom | -c] [--file [FILE]] [--configfile [CONFIGFILE]] [--inputfile [INPUTFILES]] --loglevel [{debug,info,error}]
Mode
--generate generates commands from input files in the current directory. The user will be prompted for values that are not determined programmatically, such as inclusion/exclusion of a source file candidate, start time, end time, output file name and new audio filters.
--execute executes commands from a file in the current directory line-by-line. By default, the program looks for a file named commands.txt in the current directory. This file name can be specified by the --file argument.
--generate and --execute together generate commands from input files in the current directory and execute the commands sequentially.
If no mode is given, the program will offer the mode options to run.
Custom
--custom customizes options like Create Preview, Limit Size Enable, CRFs and Encoding Modes for each output file. Default configs are specified in the --file argument.
File
The file that commands are written to or read from. By default, the program will write to or read from commands.txt in the current directory.
Config File
The configuration file in which our encoding properties are defined. By default, the program will write to or read from batch_encoder.ini in the user config directory of appname batch_encoder and author AnimeThemes. Example: C:\Users\Kyrch\AppData\Local\AnimeThemes\batch_encoder\batch_encoder.ini
Input File
--inputfile gives the option to specify input files in advance, separated by two commas. Example: python -m batch_encoder -g --inputfile 'source file.mkv,,source file 2.mkv'.
Audio Filters
Exit: Saves audio filters if selected and continues script execution.
Custom: Apply a custom audio filter string.
Fade In: Select an exponential value to apply Fade In.
Fade Out: Select a start position and an exponential value to Fade Out.
Mute: Select a start and end position to leave the volume at 0.
Video Filters
No Filters: Add a line without filters
scale=-1:720: Add downscale to 720p
scale=-1:720,hqdn3d=0:0:3:3,gradfun,unsharp: Add downscale to 720p and filters by AnimeThemes
hqdn3d=0:0:3:3,gradfun,unsharp: Add filters by AnimeThemes
hqdn3d=0:0:3:3: Add light denoise filter
hqdn3d=1.5:1.5:6:6: Add heavy denoise filter
unsharp: Add unsharp filter
Custom: Apply a custom video filter string.
Encoding Properties (an illustrative config sketch follows after the logging notes below)
AllowedFileTypes is a comma-separated listing of file extensions that will be considered for source file candidates.
EncodingModes is a comma-separated listing of bitrate control modes for inclusion and ordering of commands. Available bitrate control modes are: CBR (Constant Bitrate Mode), VBR (Variable Bitrate Mode), CQ (Constrained Quality Mode).
CRFs is a comma-separated listing of ordered CRF values to use with VBR and/or CQ bitrate control modes.
CBRBitrates is a comma-separated listing of ordered bitrate values to use with CBR.
CBRMaxBitrates is a comma-separated listing of ordered maximum bitrate values to use with CBR.
Threads is the number of threads used to encode. Default is 4.
LimitSizeEnable is a flag for including the -fs argument to terminate an encode when it exceeds the allowed size. Default is True.
AlternateSourceEnable is a flag for alternating command lines between source files. Default is False.
CreatePreview is a flag for creating a command line to preview seeks. Default is False.
IncludeUnfiltered is a flag for including or excluding an encode without video filters for each bitrate control mode and CRF pairing. Default is True.
VideoFilters is a configuration item list used for named video filtergraphs for each bitrate control mode and CRF pairing.
Logging
Determines the level of the logging for the program.
--loglevel error will only output error messages.
--loglevel info will output error messages and script progression info messages.
--loglevel debug will output all messages, including variable dumps.
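To make the Encoding Properties list above more concrete, here is a minimal sketch of what a batch_encoder.ini could contain, written with Python's configparser. Only the property names come from the documentation above; the section name "Encoding" and every value are illustrative assumptions, not the tool's actual defaults:

```python
# Sketch only: writes an illustrative batch_encoder.ini using the property
# names documented above. Section name and values are assumptions.
import configparser

config = configparser.ConfigParser()
config["Encoding"] = {
    "AllowedFileTypes": "mkv,avi",      # source extensions to consider
    "EncodingModes": "CQ,VBR,CBR",      # bitrate control modes, in order
    "CRFs": "18,21,24",                 # CRF values for VBR/CQ
    "CBRBitrates": "3000k",             # CBR target bitrate(s)
    "CBRMaxBitrates": "4500k",          # CBR maximum bitrate(s)
    "Threads": "4",
    "LimitSizeEnable": "True",
    "AlternateSourceEnable": "False",
    "CreatePreview": "False",
    "IncludeUnfiltered": "True",
}

with open("batch_encoder.ini", "w") as f:
    config.write(f)
```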
animethemes-beta-batch-encoder
Description: Generate and execute a collection of FFmpeg commands sequentially from an external file to produce WebMs that meet AnimeThemes.moe encoding standards. Take advantage of sleep, work, or any other time that we cannot actively monitor the encoding process to produce a set of encodes for later quality checking and/or tweaking for additional encodes. Ideally we are iterating over a combination of filters and settings, picking the best one at the end.
Install
Requirements: FFmpeg, Python >= 3.6
Install: pip install animethemes-beta-batch-encoder
Usage
python -m beta_batch_encoder [-h] [--generate | -g] [--execute | -e] [--custom | -c] [--file [FILE]] [--configfile [CONFIGFILE]] --loglevel [{debug,info,error}]
Mode
--generate generates commands from input files in the current directory. The user will be prompted for values that are not determined programmatically, such as inclusion/exclusion of a source file candidate, start time, end time, output file name and new audio filters.
--execute executes commands from a file in the current directory line-by-line. By default, the program looks for a file named commands.txt in the current directory. This file name can be specified by the --file argument.
--generate and --execute together generate commands from input files in the current directory and execute the commands sequentially.
If no mode is given, the program will offer the mode options to run.
Custom
--custom customizes options like Create Preview, Limit Size Enable, CRFs and Encoding Modes for each output file. Default configs are specified in the --file argument.
File
The file that commands are written to or read from. By default, the program will write to or read from commands.txt in the current directory.
Config File
The configuration file in which our encoding properties are defined. By default, the program will write to or read from beta_batch_encoder.ini in the user config directory of appname beta_batch_encoder and author AnimeThemes. Example: C:\Users\Kyrch\AppData\Local\AnimeThemes\beta_batch_encoder\beta_batch_encoder.ini
Audio Filters
Exit: Saves audio filters if selected and continues script execution.
Custom: Apply a custom audio filter string.
Fade In: Select an exponential value to apply Fade In.
Fade Out: Select a start position and an exponential value to Fade Out.
Mute: Select a start and end position to leave the volume at 0.
Video Filters
No Filters: Add a line without filters
scale=-1:720: Add downscale to 720p
scale=-1:720,hqdn3d=0:0:3:3,gradfun,unsharp: Add downscale to 720p and filters by AnimeThemes
hqdn3d=0:0:3:3,gradfun,unsharp: Add filters by AnimeThemes
hqdn3d=0:0:3:3: Add light denoise filter
hqdn3d=1.5:1.5:6:6: Add heavy denoise filter
unsharp: Add unsharp filter
Custom: Apply a custom video filter string.
Encoding Properties
AllowedFileTypes is a comma-separated listing of file extensions that will be considered for source file candidates.
EncodingModes is a comma-separated listing of bitrate control modes for inclusion and ordering of commands. Available bitrate control modes are: CBR (Constant Bitrate Mode), VBR (Variable Bitrate Mode), CQ (Constrained Quality Mode).
CRFs is a comma-separated listing of ordered CRF values to use with VBR and/or CQ bitrate control modes.
Threads is the number of threads used to encode. Default is 4.
LimitSizeEnable is a flag for including the -fs argument to terminate an encode when it exceeds the allowed size. Default is True.
AlternateSourceEnable is a flag for alternating command lines between source files. Default is False.
CreatePreview is a flag for creating a command line to preview seeks. Default is False.
IncludeUnfiltered is a flag for including or excluding an encode without video filters for each bitrate control mode and CRF pairing. Default is True.
VideoFilters is a configuration item list used for named video filtergraphs for each bitrate control mode and CRF pairing.
Logging
Determines the level of the logging for the program.
--loglevel error will only output error messages.
--loglevel info will output error messages and script progression info messages.
--loglevel debug will output all messages, including variable dumps.
animethemes-dl
animethemes-dl
what's this project
This project allows you to automatically download opening and ending songs from all of your favorite anime without the need to download everything yourself. Since almost every weeb uses MAL to track the anime they're watching, this tool is really useful, as all the information you need to give it has been written down already. All you need to do is to enter your MAL or AniList username.
reminder
All videos are downloaded from animethemes.moe. If you plan on using this program just for looking at openings, I recommend using themes.moe or their own site instead. This program is made for creating your own playlist and such.
what's this project for
This project was made for batch downloading themes from anime you have watched, but is programmed so it's easily improved, making it easy to extend. It's made for both command line usage and use as a Python module.
how to install
clone this repository from github.com or download it from pip with pip install animethemes-dl
if you cloned, do pip install -r requirements.txt to install all required modules
install ffmpeg into the same folder or in PATH
usage in command line
Make sure you have ffmpeg and python installed. To run in console use animethemes-dl if installed with pip, or python -m animethemes-dl if you have cloned the repository. These commands will be referred to as animethemes-dl in the documentation.
command line documentation
The script should raise errors in case you pass in an improper arg, but sometimes an error won't be raised if the error is not obvious, therefore make sure you read the documentation before running it. You must set a username and a save folder.
animelist
You must set a username. By default usernames are assumed to be a MAL user; you can use a different site with --site. --animelist-args can be url args for MAL, or query and variables for the AniList POST request. --animelist-args are passed as <key>=<value> pairs, for example: sort1=1,sort2=14
animelist filters
There are filters for minimum score and priority. --minscore is the minimum score between 0 and 10. --minpriority is the minimum priority; for MAL, use Low=0, Medium=1, High=2. --range <start> <end> only gets a slice of the animelist.
tag filters
To download only openings or only endings, use --OP or --ED. By default, you can just use a --smart filter, which takes out all the dialogue. This works by removing all themes that contain a part of the episode and spoilers at the same time. This works 95% of the time. Since animethemes can have a single song bound to multiple anime, --no-copy filters them out. You can set --banned-tags or --required-tags. These take multiple tags; possible tags are:
spoiler: Video contains spoilers.
nsfw: Video is NSFW.
nc: No captions/no credits.
subbed: Video includes English subtitles of dialogue.
lyrics: Video includes English lyrics as subtitles.
uncen: Video does not have censorship.
You can set a --min-resolution; they show up in 420, 720, 1080. You can set the required --source; possible sources are:
BD: Video is sourced from a Blu-ray disc.
DVD: Video is sourced from a DVD.
TV: Video is sourced from a TV release.
Some themes contain a part of the episode. You can set an --overlap to show only some overlaps:
Over: Part of the episode plays over the video.
Transition: Part of the episode transitions into the video.
None: No dialogue in the video.
If you're only looking to remove dialogue, transitions are fairly fine. They don't even have dialogue most of the time; I recommend just banning Over.
download
Downloads are by default disabled for both video and audio. You can enable them by setting a save folder. Save folders are set with -a (audio) and -v (video). The filename format can be changed with --filename. The possible formats are defined in this table:
anime_id: Animethemes' id of anime.
anime_name: Name of anime.
anime_slug: Animethemes' slug of anime.
anime_year: Year the anime came out.
anime_season: Season the anime came out.
theme_id: Animethemes' id of theme.
theme_type: Type of theme (OP/ED).
theme_sequence: Sequence of theme.
theme_group: Group of theme (e.g. language).
theme_slug: Animethemes' slug of theme (type+sequence).
entry_id: Animethemes' id of entry.
entry_version: Version of entry ("" or 1+).
entry_notes: Notes of entry (e.g. SFX version).
video_id: Animethemes' id of video.
video_basename: Animethemes' basename of video.
video_filename: Basename without the filetype.
video_size: Size of file in bytes.
video_resolution: Resolution of video.
video_source: Where the video was sourced from.
video_overlap: Episode overlap over video.
song_id: Animethemes' id of song.
song_title: Title of song.
video_filetype: Filetype of video.
anime_filename: Name of anime used in filenames.
Formats should be used as a python format string, meaning each is written as %(format)s. For example %(anime_filename)s-%(theme_slug)s.%(video_filetype)s. Windows and Linux banned characters will be removed by default; to remove those and also unicode characters use --ascii.
You can disable redownloading with -r. This is highly recommended. If you have downloaded videos you can --update themes; this checks file validity by looking at the filesize. It will also update audio files if the video is downloaded. You can add cover art to audio files with --coverart. --coverart takes in a resolution; if set, the image will be fetched from anilist.co. With high resolutions it's recommended to save them in --coverart-folder. The downloader timeout can be changed with --timeout and the max amount of retries with --retries. Sometimes when using filters a video that you wanted gets filtered out; you can --force-videos to keep them. Re:Zero for example has lots of unique EDs, but they often have an overlay, meaning the smart filter will remove them. Getting data from animethemes means sending a lot of requests at the same time, so to reduce stress on the servers the data is saved in a temp folder. You can change its max age with --max-cache-age.
statuses
You can download anime that you have --on-hold, --dropped or --planned.
compression
Downloaded files can be compressed in case you want to save them. Compression is enabled by setting the directory you want to compress with --compress-dir; this should be the same directory as your chosen one. The destination file is set with --compress-name; set it without the extension. You can choose the --compress-format; this must be a format allowed by shutil.make_archive. Additionally you can set the --compress-base.
printing
You can set the loglevel with --loglevel. This will set logger.setLevel(...). There are quick commands --quiet (print none) and --verbose (print all). To restrict download and ffmpeg messages, you MUST use --quiet. You can disable color with --no-color.
utilities
In case you haven't added ffmpeg to PATH, you can set the path with --ffmpeg. In case the mp3 tags are not showing, you can specify --use-id3v23, which allows support for older systems. You can --repair in case the script made some errors or you picked wrong options. This will delete unexpected files and re-add metadata.
settings
You can load options from a file with --options; the file is in json format. The default options are:
{"animelist":{"username":"","site":"MyAnimeList","animelist_args":{},"minpriority":0,"minscore":0,"range":[0,0]},"filter":{"smart":false,"no_copy":false,"type":null,"spoiler":null,"nsfw":null,"resolution":0,"nc":null,"subbed":null,"lyrics":null,"uncen":null,"source":null,"overlap":null},"download":{"filename":"%(anime_filename)s-%(theme_slug)s.%(video_filetype)s","audio_folder":null,"video_folder":null,"no_redownload":false,"update":false,"ascii":false,"timeout":5,"retries":3,"max_cache_age":10368000,"force_videos":[]},"coverart":{"resolution":0,"folder":null},"compression":{"root_dir":null,"base_name":"animethemes","format":"tar","base_dir":null},"statuses":[1,2],"quiet":false,"no_colors":false,"ffmpeg":"ffmpeg","id3v2_version":4,"ignore_prompts":false}
You can generate the options with python -m animethemes_dl.options.
code documentation
The code uses the module models, which contains models of typing.TypedDict, meaning Python 3.8 is required. The module parsers contains all parsers for MAL, AniList and themes.moe. The module tools contains extra tools for animethemes-dl.
examples:
# parsers module uses API's to get data
import animethemes_dl.parsers as parsers
parsers.fetch_animethemes(username)  # fetches raw data
parsers.get_download_data(username)  # gets download data
# models module uses TypedDict to help linters
import animethemes_dl.models as models
animelist: AnimeThemeAnime = _myanimefunc()
metadata: Metadata = _mymetadatafunc()
# tools have multiple tools used for several stuff
import animethemes_dl.tools as tools
tools.ffmpeg_convert(webm_file, mp3_file)  # converts a webm file
tools.COLORS['progress'] = Fore.CYAN  # changes colors
tools.compress_files(base, 'zip', root)  # compresses a directory
tools.update_metadata(parsers.get_download_data(username), False)  # updates metadata of all audio files
# you can implement your own batch dl
import animethemes_dl
data = parsers.get_download_data(username)
for theme in data:
    animethemes_dl.download_theme(theme, True)
# you can directly change options
animethemes_dl.setOptions(options)
# you can make special catchers
import animethemes_dl.errors as errors
try:
    animethemes_dl.batch_download(data)
except FfmpegException:
    print('oh no')
how does it work?
parser: get data from MAL/AniList, get data from themes.moe, combine data, filter out unwanted themes, create download data.
download: download video file, convert video to audio (convert with ffmpeg, add mp3 metadata).
optional: compress files.
TODO
code optimizations
improve code documentation
make a better README (too complicated rn)
concurrent downloads, since animethemes disabled multithreaded dl.
support for aria2c
animethemes-webm-verifier
Description: Verify WebM(s) Against AnimeThemes Encoding Standards. Executes a test suite on the input WebM(s) to verify compliance. Test success/failure does NOT guarantee acceptance/rejection of submissions. In some tests, we are determining the correctness of our file properties. In other tests, we are flagging uncommon property values for inspection.
Install
Requirements: FFmpeg, Python >= 3.6
Install: pip install animethemes-webm-verifier
Usage
test_webm [-h] [--loglevel [{debug,info,error}]] [--groups [{format,video,audio} ...]] [file ...]
File: The WebM(s) to verify. If not provided, we will test all WebMs in the current directory.
Groups: The groups of tests that should be run. The format group pertains to testing of the file format and context of streams. The video group pertains to testing of the video stream of the file. The audio group pertains to testing of the audio stream of the file. By default, all test groups will be included.
Logging: Determines the level of the logging for the program.
--loglevel error will only output error messages.
--loglevel info will output error messages and script progression info messages.
--loglevel debug will output all messages, including variable dumps.
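Based only on the usage string above, invoking the verifier over a couple of files while restricting it to the video and audio groups might look like the following sketch; the file names are placeholders and the exit-code comment is an assumption, not documented behavior:

```python
# Sketch: run the documented test_webm CLI from Python on two example files,
# limiting the run to the video and audio test groups. File names are
# placeholders; test_webm must already be installed and on PATH.
import subprocess

result = subprocess.run(
    ["test_webm", "--loglevel", "info", "--groups", "video", "audio",
     "OP1.webm", "ED1.webm"],
    check=False,  # assumption: a non-zero exit likely just signals failed checks
)
print("exit code:", result.returncode)
```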
animetime
AnimeTime. AnimeTime is a project developed by swarthur. The goal is to offer users a simple way to track their viewing progress across their favorite anime. For now, AnimeTime is only a Python script, but it can be integrated as a module into other programs or applications thanks to its extensible design. As open source, this script may be reused and modified, as long as its attribution to the original project is maintained. AnimeTime also requires the AnimeData data-management module: https://github.com/swarthur/AnimeData/ To make suggestions or follow the project's progress: https://github.com/swarthur/AnimeTime
animetoolkit
No description available on PyPI.
animeujjwal
This is the homepage of our project.
animeup
AnimeDiffusion: A Pytorch Library for Anime Image Generation
🛠️ Installation
git clone https://github.com/kadirnar/AnimeUpscaler
cd AnimeUpscaler
pip install -r requirements.txt
🎙️ Usage
🏆 Contributing
pip install -r requirements.txt
pre-commit install
pre-commit run --all-files
📜 License: This project is licensed under the terms of the Apache License 2.0.
🤗 Acknowledgments: This project is based on the HuggingFace Diffusers library.
animeworld
AnimeWorld-API
AnimeWorld-API is an unofficial library for AnimeWorld (an Italian anime site).
Installation: This library requires Python 3.7 or higher. You can install the library via pip: pip install animeworld
Usage: To search for an anime by name on the AnimeWorld site, you can use the find() function.
import animeworld as aw
res = aw.find("No game no life")
print(res)
The function returns a dictionary containing the anime name and the link to its animeworld page:
{'name': 'No Game no Life', 'link': 'https://www.animeworld.so/play/no-game-no-life.IJUH1', ...}
It is also possible to download the episodes of an anime:
import animeworld as aw

anime = aw.Anime(link="https://www.animeworld.so/play/danmachi-3.Ydt8-")
for episodio in anime.getEpisodes():
    print("Episodio Numero: ", episodio.number)
    if episodio.download():
        print("scaricato")
    else:
        print("errore")
    if episodio.number == '1':
        break
Episodio Numero: 1
scaricato
Documentation: The full documentation is available here: Documentazione. For an overview of all the basics, go to the QuickStart section. For more advanced topics, see the Advanced Usage section. The API Reference section provides a complete API reference. If you want to contribute to the project, go to the Contributing section.
Star History
animeX
animeX-pack
A lightweight Python library (and command-line utility) for downloading anime.
Table of Contents: Installation, Quick start, Features, Usage, Command-line interface, Development, GUIs and other libraries
Installation: Download using pip via PyPI: $ pip install animeX (Mac/homebrew users may need to use pip3)
Quick start:
py -m animeX --version
py -m animeX --name AnimeName
A GUI frontend for animeX is not yet available. A Windows executable for animeX is available at AnimeX.
Features: Ability to capture the thumbnail URL. Extensively documented source code. No third-party dependencies. Saves video to the local device.
Usage: Let's begin with showing how easy it is to download a video with animeX:
py -m animeX -h
py -m animeX --version
This example will download Boruto at its highest available quality:
py -m animeX --name boruto
Command-line interface: animeX ships with a simple CLI interface for downloading anime. The complete set of flags is:
usage: animeX [-h] [--version] [--name AnimeName]
Command line application to download anime.
positional arguments:
  --name AnimeName   The name of the anime you want to download
optional arguments:
  -h, --help         show this help message and exit
  --version          show program's version number and exit
Development: Pull requests are welcome. For major changes and feature requests, please consider going here and opening an issue first to discuss what you would like to change. For bug fixes to the command line application or enhancements, open an issue first to discuss what you would like to change. To run code checking before a PR use make test.
Virtual environment: The virtual environment is set up with pipenv and can be automatically activated with direnv.
Code Formatting: This project is linted with pyflakes, formatted with black, and typed with mypy.
Code of Conduct: Treat other people with helpfulness, gratitude, and consideration! See the Python Community Code of Conduct.
GUIs and other libraries
animgifviewer
Preview and explore step by step animated GIF images.
animius
Animius is an open source software library for creating deep-learning-powered virtual assistants. It provides an intuitive workflow that extracts data from existing media (such as anime and TV shows) and trains on them to provide a personalized AI. The flexible architecture enables you to add custom functionality to your virtual assistant. Animius also ships with a high-level API, animius.Console, that allows users without programming experience to use Animius.
Installation: Install the current release from PyPI: pip install animius
Then, install Tensorflow (recommended version 1.12). We recommend using the GPU package (tensorflow-gpu) if you are going to train your own virtual assistant. Read more on Tensorflow installation here. See Installing Animius for detailed instructions and a Docker installation guide.
Getting Started: Check out our quick start guide. (WIP)
For more information: Animius Website, Animius Tutorials, Animius Documentation, Animius Blog
License: Apache License 2.0
animl
animl-py
AniML comprises a variety of machine learning tools for analyzing ecological data. This Python package includes a set of functions to classify subjects within camera trap field data and can handle both images and videos. This package is also available in R: animl.
Table of Contents: Installation, Usage
Installation Instructions
It is recommended that you set up a conda environment for using animl. See Dependencies below for more detail. You will have to activate the conda environment first each time you want to run AniML from a new terminal.
From GitHub:
git clone https://github.com/conservationtechlab/animl-py.git
cd animl-py
conda env create --file environment.yml
conda activate animl-gpu
pip install -e .
From PyPi, with NVIDIA GPU:
conda create -n animl-gpu python=3.7
conda activate animl-gpu
conda install cudatoolkit=11.3.1 cudnn=8.2.1
pip install animl
CPU only:
conda create -n animl-cpu python=3.7
conda activate animl
pip install animl
Dependencies
We recommend running AniML on GPU-enabled hardware. **If using an NVIDIA GPU, ensure drivers, cuda-toolkit and cudnn are installed. The /models/ and /utils/ modules are from the YOLOv5 repository: https://github.com/ultralytics/yolov5
Python Package Dependencies: pandas = 1.3.5, tensorflow = 2.6, torch = 1.13.1, torchvision = 0.14.1, numpy = 1.19.5, cudatoolkit = 11.3.1 **, cudnn = 8.2.1 **
A full list of dependencies can be found in environment.yml.
Verify Install
We recommend you download the examples folder within this repository. Download and unarchive the zip folder. Then, with the conda environment active:
python3 -m animl /path/to/example/folder
This should create an Animl-Directory subfolder within the example folder.
Usage
Inference
The functionality of animl can be split into its individual functions to suit your data and scripting needs. The sandbox.ipynb notebook has all of these steps available for further exploration. It is recommended that you use the animl working directory for storing intermediate steps.
from animl import file_management
workingdir = file_management.WorkingDirectory(imagedir)
Build the file manifest of your given directory. This will find both images and videos.
files = file_management.build_file_manifest('/path/to/images', out_file=workingdir.filemanifest)
If there are videos, extract individual frames for processing. Select either the number of frames or fps using the arguments. The other option can be set to None or removed.
from animl import video_processing
allframes = video_processing.images_from_videos(files, out_dir=workingdir.vidfdir, out_file=workingdir.imageframes, parallel=True, frames=3, fps=None)
Pass all images into MegaDetector. We recommend MDv5a. parseMD will merge detections with the original file manifest, if provided.
from animl import detectMD, megadetector, parse_results
detector = megadetector.MegaDetector('/path/to/mdmodel.pt')
mdresults = detectMD.detect_MD_batch(detector, allframes["Frame"], quiet=True)
mdres = parse_results.from_MD(mdresults, manifest=allframes, out_file=workingdir.mdresults)
For speed and efficiency, extract the empty/human/vehicle detections before classification.
from animl import split
animals = split.getAnimals(mdres)
empty = split.getEmpty(mdres)
Classify using the appropriate species model. Merge the output with the rest of the detections if desired.
from animl import classify, parse_results
classifier = classify.load_classifier('/path/to/classifier/')
predresults = classify.predict_species(animals, classifier, batch=4)
animals = parse_results.from_classifier(animals, predresults, '/path/to/classlist.txt', out_file=workingdir.predictions)
manifest = pd.concat([animals, empty])
Training
Training workflows are available in the repo but still under development.
animl-KSwanson
Failed to fetch description. HTTP Status Code: 404
animop-ep
UNKNOWN
animo-trainer
Animo-Trainer uses the ML-Agents Reinforcement Learning Toolkit to train intelligent agents. It is a part of the Little Learning Machines game: https://store.steampowered.com/app/1993710/Little_Learning_Machines/
Dependencies (Windows):
PyTorch depends on Microsoft Visual C++ Redistributable: https://aka.ms/vs/16/release/vc_redist.x64.exe
Animo Simulation depends on DotNet Runtime 7: https://dotnet.microsoft.com/en-us/download/dotnet/7.0
ani-m-package
No description available on PyPI.
animplotlib
animplotlib
This package acts as a thin wrapper around the matplotlib.animation.FuncAnimation class to simplify animating matplotlib plots.
Installation: pip install animplotlib
User manual
There are two classes which can be called: AnimPlot, for 2-D plots, and AnimPlot3D, for 3-D plots.
AnimPlot
As an example, below is a demonstration of the steps required to make a basic plot of an Euler spiral. An Euler spiral can be obtained by plotting the Fresnel integrals, which can be generated using scipy.special.
Import the necessary libraries and create a matplotlib figure and axes:
import animplotlib as anim
import numpy as np
import matplotlib.pyplot as plt
import scipy.special as sc

fig = plt.figure()
ax = fig.add_subplot(111)
Generate the points being plotted:
x = np.linspace(-10, 10, 2500)
y, z = sc.fresnel(x)
Create two empty matplotlib plots: one to plot the points up to the current most point (i.e. the 'line') and one to plot the current most point:
line, = ax.plot([], [], lw=1)
point, = ax.plot([], [], 'o')
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
Call the AnimPlot class and show the plot:
animation = anim.AnimPlot(fig, line, point, y, z, l_num=len(x), plot_speed=5)
plt.show()
l_num is the number of points before the current most point being plotted to line. The default value is set to 10; however, in this example it makes sense to set it to the same length as x (i.e. all the points before the current most point are plotted). Similarly, an argument p_num can be passed to determine the number of points being plotted to point. This is set to 1 by default.
Optional arguments:
plot_speed (int): set to 10 by default.
l_num (int): The number of points being plotted to line each frame. By default this is set to 10.
p_num (int): The number of points being plotted to point each frame. By default, this is set to 1, i.e. only the current most point is plotted each frame (the orange point in the gif).
save_as (str): file name to save the animation as a gif in the current working directory.
**kwargs: other arguments passable into matplotlib.animation.FuncAnimation (see the docs for more info).
AnimPlot3D
Creating a 3-D animated plot is similar to creating a 2-D plot but with a few additional steps.
import animplotlib as anim
import numpy as np
import matplotlib.pyplot as plt
import scipy.special as sc

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

x = np.linspace(-10, 10, 3000)
y, z = sc.fresnel(x)
For 3-D plots, two empty matplotlib plots must be created:
lines, = [ax.plot([], [], [])]
points, = [ax.plot([], [], [], 'o')]
The second plot, points, by default plots the 'ith' point each frame. After that, set the x, y and z limits and call the AnimPlot3D class.
ax.set_xlim(-10, 10)
ax.set_ylim(-1, 1)
ax.set_zlim(-1, 1)

animation = anim.AnimPlot3D(fig, ax, [lines], [points], x, y, z, plot_speed=5)
plt.show()
Optional arguments:
plot_speed (int): set to 10 by default.
rotation_speed (int): proportional to plot_speed. Off by default, enabled by setting a value.
l_num (int): The number of points being plotted to lines each frame. By default, all the points up until the current point get plotted.
p_num (int): The number of points being plotted to points each frame. By default, this is set to 1, i.e. only the current most point is plotted each frame (the orange point in the gif).
save_as (str): file name to save the animation as a gif in the current working directory.
**kwargs: other arguments passable into matplotlib.animation.FuncAnimation (see the docs for more info).
Both the 2-D and 3-D plots can be customised visually the same way you would a normal matplotlib plot.
animrec
Anime_Recommendation_System
Anime Recommendation System is a project designed to give users anime watching recommendations based on the anime they have already watched, using various machine learning techniques.
USAGE: Just type ars followed by the search argument in quotation marks, for example: ars "One Punch Man"
animu
Animu
An async wrapper for the Animu API written in Python.
Key Features: An async library. Anime facts, waifus & more! Especially made for Discord bots. Easy to use with an object-oriented design.
Installing: Python 3.8 or higher is required. You can install it with the following command: pip install animu
FAQ
How do I get the Animu API token? To get the token, join the Discord server of the Animu API, move to the #bot-commands channel, and do -claim. From there, the further process should start in your DMs. Good luck!
My token is not working anymore. Why? Make sure that you've joined the support server, or else it won't work. If it's still not working, please ask for help in their official server.
Any examples? Check the examples folder for examples.
Any ratelimit? Yes, 5 requests/second.
Related Links: Documentation, Official Animu API Discord Server
animu-cf
animu.py is a Python library for CF's API!
## Examples
```py
from animu import CFClient

client = CFClient(user_agent='animu.py/Production/v0.0.1')

def getAnimu():
    anime = client.get_animu()
    return anime['url']

def getHentai():
    hentai = client.get_hentai()
    return hentai['url']
```
## Changelog
* v0.0.1 => Initial Release
animus
Animus
One framework to rule them all.
Animus is a "write it yourself"-based machine learning framework. Please see examples/ for more information. The framework architecture is mainly inspired by Catalyst.
FAQ
What is Animus? Animus is a general-purpose, for-loop-based experiment wrapper. It divides an ML experiment with straightforward logic:
def run(experiment):
    for epoch in experiment.epochs:
        for dataset in epoch.datasets:
            for batch in dataset.batches:
                handle_batch(batch)
Each for is encapsulated with on_{for}_start, run_{for}, and on_{for}_end for customisation purposes. Moreover, each for has its own metrics storage, {for}_metrics (batch_metrics, dataset_metrics, epoch_metrics, experiment_metrics).
What are Animus' competitors? Any high-level ML/DL libraries, like Catalyst, Ignite, FastAI, Keras, etc.
Why do we need Animus if we have high-level alternatives? Although I find high-level DL frameworks an essential step for the community and the spread of Deep Learning (I have written one myself), they have a few weaknesses. First of all, they are usually heavily bound to a single "low-level" DL framework (Jax, PyTorch, Tensorflow). While "low-level" frameworks grow closer to each other every year, high-level frameworks introduce different syntactic sugar, which makes a fair comparison, or complementary use, of "low-level" frameworks impossible. Secondly, high-level frameworks introduce high-level abstractions, which: are built with some assumptions in mind, which could be wrong in your case; can cause additional bugs (even "low-level" frameworks have quite a lot of them); and are really hard to debug/extend because of "user-friendly" interfaces and extra integrations. While these points could seem unimportant in common cases, like supervised learning with (features, targets), they become more and more important during research and heavy pipeline customization (e.g. privacy-aware multi-node distributed training with custom backpropagation). Thirdly, many high-level frameworks try to divide the ML pipeline into data, hardware, model, etc. layers, making it easier for practitioners to start ML experiments and giving teams a tool to split ML pipeline responsibility between different members. However, while this speeds up the creation of ML pipelines, it disregards that ML experiment results are heavily conditioned on the model hyperparameters, and the data preprocessing/transformations/sampling, and the hardware setup. I find this the main reason why ML experiments fail: you have to focus on the whole data transformation pipeline simultaneously, from raw data through the training process to distributed inference, which is quite hard. And that's the reason Animus has the Experiment abstraction (the Catalyst analog is IRunner), which connects all parts of the experiment: hardware backend, data transformations, model training, and validation/inference logic.
What is Animus' purpose? Highlight common "breakpoints" in ML experiments and provide a unified interface for them.
What is Animus' main application? Research experiments, where you have to define everything on your own to get the results right.
Does Animus have any requirements? No. That's the point: only pure Python libraries. PyTorch and Keras could be used for extensions.
Do you have plans for documentation? No. Animus core is about 300 lines of code, so it's much easier to read than 3000 lines of documentation.
Demo: Jax/Keras/Sklearn/Torch pipelines, Jax XLA example, Torch XLA example
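To make the hook naming described above concrete, here is a minimal self-contained sketch of the nested-loop pattern with per-loop metric storages. The class and method names only mirror the on_{for}_start / on_{for}_end convention from the FAQ; they are not Animus' real API:

```python
# Illustrative sketch of the for-loop-with-hooks pattern described above;
# names follow the {for}-hook convention but are not Animus' actual classes.
class ToyExperiment:
    def __init__(self, num_epochs, datasets):
        self.num_epochs, self.datasets = num_epochs, datasets
        self.epoch_metrics, self.experiment_metrics = {}, {}

    def on_experiment_start(self): self.experiment_metrics.clear()
    def on_epoch_start(self, epoch): self.epoch_metrics = {"epoch": epoch}
    def handle_batch(self, batch): self.epoch_metrics["last_batch_sum"] = sum(batch)
    def on_epoch_end(self, epoch): self.experiment_metrics[epoch] = dict(self.epoch_metrics)
    def on_experiment_end(self): print(self.experiment_metrics)

    def run(self):
        self.on_experiment_start()
        for epoch in range(self.num_epochs):      # epoch loop
            self.on_epoch_start(epoch)
            for dataset in self.datasets:         # dataset loop
                for batch in dataset:             # batch loop
                    self.handle_batch(batch)
            self.on_epoch_end(epoch)
        self.on_experiment_end()


# One dataset with two batches, two epochs.
ToyExperiment(num_epochs=2, datasets=[[[1, 2], [3, 4]]]).run()
```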
animus-omni
The Animus Omni CLI helps you separate the signal from the noise in your logfiles. If you are running a service that faces the internet, you likely see thousands of scans, bots, and brute force attempts every day. These scans clog up your log files, and make it hard to find legitimate events of interest. The Animus Omni CLI is a utility that leverages the Animus API to reduce noisy entries from your log files. This tool is currently in ALPHA and will be available for free with rate-limited accounts.
How it Works
Animus Omni is powered by a network of sensors that are deployed across the internet. These sensors have no business value, but have a comprehensive set of logging rules. These logs are aggregated and analyzed before being loaded into a database that is made available through the Animus API. omni-reduce analyzes your log files, and passes metadata to our API. The API returns a filter based on your metadata that is then applied to your file. The result is less noisy log files.
Installation
From the source repository: $ python setup.py install
Or via PyPi: $ pip install animus-omni
Configuration
This command will ask you to provide your e-mail address, which will register a rate limited account for you to use for free during the alpha period: $ omni-reduce --configure
Usage
Commandline usage for the omni-reduce tool:
usage: omni-reduce [-h] [--type {auth,http,generic}] [--noise] [--out-file OUTFILE] [--stats] [--dry-run] [--port PORTS] [--configure] [filename]
positional arguments:
  filename              Filename of log file to reduce
optional arguments:
  -h, --help            show this help message and exit
  --type {auth,http,generic}, -t {auth,http,generic}
                        Log type to analyze
  --noise, -n           Print the noise from the file rather than reducing it
  --out-file OUTFILE, -o OUTFILE
                        Output file for the result
  --stats, -s           Print statistics to STDERR from the reduction operation
  --dry-run, -d         Don't output the reduced log file, only print possible reduction statistics to STDERR
  --port PORTS, -p PORTS
                        Port and protocol used by generic mode. Can be used multiple times. Should be of the form "80:TCP" or "53:UDP"
  --configure           Configure Omni Reduce.
Examples
Output a reduced auth log to the screen:
$ omni-reduce /var/log/auth.log
[Results not shown]
Output a reduced auth log to a file and print aggregate statistics to the screen:
$ omni-reduce --output ~/auth.log.reduced -s /var/log/auth.log
489 lines were analyzed in this log file.
356 lines were determined to be noise by Animus.
133 lines were not determined to be noise by Animus.
The input file was reduced to 27.2% of it's original size.
Output a reduced HTTP access log to a file:
$ omni-reduce -t http --output ~/access.log.reduced /etc/log/access.log
Output lines from an HTTP access log that Animus believes to be bots, crawlers, or other internet noise:
$ cat /etc/log/access.log | omni-reduce -t http -n
[Results not shown]
Show statistics for reducing an access log by traffic seen by Animus on TCP port 80, and do not display results to the screen:
$ omni-reduce -t generic -p 80:tcp --dry-run test/data/access.log.txt
Privacy Notice
In order to reduce noise from your log files, we need to collect metadata from those files. This includes IP addresses, usernames, user agent strings, referrers, and request URIs. We use this metadata to enhance the results of our API. If you have sensitive data in your log files or prefer not to share this data with us, contact us at [email protected] about a private on-premises solution.
animutils
Animutils 0.0.8: Utilities for musical animations in Blender.
aninhado
UNKNOWN
aninhado2
UNKNOWN
aninhador
UNKNOWN
aninja
No description available on PyPI.
aniparse
AniparseAniparse is a Python library for parsing anime video filenames. It's simple to use, and it's based on the C++ libraryAnitomywith a lot of improvement.UpdateThis library has already achieved its goal in a somewhat hacky way, as discussed inissue #9. I am aware that the last commit isn't the clean code, but I don't have much time to work on this project anymore. It's a sacrifice I have to make. I don't expect any improvements here for another year or so unless something breaks. If you have an interest in this project, I would suggest you take a look at thev2-ideabranch instead. I've documented the library's goals, how I plan to achieve them, and other details more comprehensively in that branch.ExampleThe following filename[TaigaSubs]_Toradora!_(2008)_-_01v2_-_Tiger_and_Dragon_[1280x720_H.264_FLAC][1234ABCD].mkv Toradora! S01E03-Your Song.mkvcan be parsed using the following code:importaniparseaniparse.parse('[TaigaSubs]_Toradora!_(2008)_-_01v2_-_Tiger_and_Dragon_[1280x720_H.264_FLAC][1234ABCD].mkv'){'anime_title':'Toradora!','anime_year':2008,'audio_term':'FLAC','episode_number':1,'episode_title':'Tiger and Dragon','file_checksum':'1234ABCD','file_extension':'mkv','file_name':'[TaigaSubs]_Toradora!_(2008)_-_01v2_-_Tiger_and_Dragon_[1280x720_H.264_FLAC][1234ABCD].mkv','release_group':'TaigaSubs','release_version':2,'video_resolution':'1280x720','video_term':'H.264'}aniparse.parse("Toradora! S01E03-Your Song.mkv"){'anime_season':1,'anime_season_prefix':'S','anime_title':'Toradora!','episode_number':3,'episode_prefix':'E','episode_title':'Your Song','file_extension':'mkv','file_name':'Toradora! S01E03-Your Song.mkv'}Theparsefunction receives a string and returns a dictionary containing all found elements. It can also receive parsingoptionsandkeyword_manager, this will be explained below.How does it work?Suppose that we're working on the following filename:"Aim_For_The_Top!_Gunbuster-ep1.BD(H264.FLAC.10bit)[KAA][69ECCDCF].mkv"The filename is first stripped off of its extension and split into groups. Groups are determined by the position of brackets:"Aim_For_The_Top!_Gunbuster-ep1.BD", "H264.FLAC.10bit", "KAA", "69ECCDCF"Each group is then split into tokens. In our current example, the delimiter for the enclosed group is., while the words in other groups are separated by_:"Aim", "For", "The", "Top!", "Gunbuster-ep1", "BD", "H264", "FLAC", "10bit", "KAA", "69ECCDCF"Note: the brackets and delimiter are stored as token with categoryDelimiterandBracket. And each token remembers if it enclosed or not.Once the tokenizer is done, the parser comes into effect. First, all tokens are compared against a set of known keywords. In this case, the tokensBD,H264,FLAC,10bit, and69ECCDCFare recognized as keywords, and are assigned the categorySource,VideoTerm,AudioTerm,VideoResolution, andFileChecksumrespectively."Aim", "For", "The", "Top!", "Gunbuster-ep1", "KAA"The next step is to look for the episode number. Each token that contains a number is analyzed. Here.Gunbuster-ep1contains number, but it doesn't match the episode number pattern. In this case, the token checked againts buggy dash pattern. So,Gunbuster-ep1will be split intoGunbusterandep1. After that, it will check andep1is recognized as an episode number. The categoryEpisodeNumberis assigned to it and the changes is saved."Aim", "For", "The", "Top!", "Gunbuster", "KAA"The next step is to look for the anime title. The parser will try to find unknown token before the episode number and not inside a bracket. 
In this case, Aim, For, The, Top!, and Gunbuster are unknown tokens that are not inside a bracket, so they are assigned to the AnimeTitle category.

"KAA"

The next step is to look for the release group. The parser tries to find an unknown token that comes after the episode number and is inside a bracket. In this case, KAA is an unknown token inside a bracket, so it is assigned to the ReleaseGroup category.

The next step is to look for the episode title. The parser tries to find an unknown token that comes after the episode number and is not inside a bracket. In this case, no unknown tokens are left, so the episode title stays empty.

Lastly, the parser assigns any remaining unknown token to its matching category, or to Others if it is not recognized.

Why should I use it?

Anime video files are commonly named in a format where the anime title is followed by the episode number, and all the technical details are enclosed within brackets. However, fansub groups tend to use their own naming conventions, and the problem is more complicated than it first appears:

- Element order is not always the same.
- Technical information is not guaranteed to be enclosed.
- Brackets and parentheses may be grouping symbols or a part of the anime/episode title.
- Space and underscore are not the only delimiters in use.
- A single filename may contain multiple delimiters.

There are so many cases to cover that it's simply not possible to parse all filenames solely with regular expressions. Aniparse tries a different approach, and it succeeds: it is able to parse tens of thousands of filenames with great accuracy.

Are there any exceptions?

Yes, unfortunately. Aniparse fails to identify the anime title and episode number on rare occasions, mostly due to bad naming conventions. See the examples below.

Arigatou.Shuffle!.Ep08.[x264.AAC][D6E43829].mkv
Here, Aniparse would report that this file is the 8th episode of Arigatou Shuffle!, where Arigatou is actually the name of the fansub group.

Spice and Wolf 2
Is this the 2nd episode of Spice and Wolf, or a batch release of Spice and Wolf 2? With text after the number, there is no way to know; it is up to you to consider both cases. In the current version, the number is treated as part of the title if it has no leading zero, and as an episode number if it does.

Suggestions to fansub groups

Please consider abiding by these simple rules before deciding on your naming convention:

- Don't enclose the anime title, episode number, or episode title within brackets. Enclose everything else, including the name of your group.
- Don't use parentheses to enclose release information; use square brackets instead. Parentheses should only be used if they are a part of the anime/episode title.
- Don't use multiple delimiters in a single filename. If possible, stick with either space or underscore.
- Use a separator (e.g. a dash) between the anime title and the episode number. There are anime titles that end with a number, which creates ambiguity.
- Indicate the episode interval in batch releases.

Installation

To install Aniparse, simply use pip:

pip install aniparse

Or download the source code and inside the source code's folder run:

python setup.py install

Options

The parse function can receive the options parameter.
E.g.:

import aniparse
aniparse_options = {'allowed_delimiters': ' '}
aniparse.parse('DRAMAtical Murder Episode 1 - Data_01_Login', options=aniparse_options)
{'anime_title': 'DRAMAtical Murder', 'episode_prefix': 'Episode', 'episode_number': '1', 'episode_title': 'Data_01_Login', 'file_name': 'DRAMAtical Murder Episode 1 - Data_01_Login'}

If the default options had been used, the parser would have considered _ as a delimiter and replaced it with a space in the episode title.

The options contain the following attributes:

- allowed_delimiters (string): The list of characters to be considered as delimiters. Default: ' _.&+,|'
- check_title_enclosed (boolean): Check for the anime title inside enclosed (bracketed) groups if no title is found. Default: True
- eps_lower_than_alt (boolean): Set the episode number to the lower value and the alternative episode number to the higher one. Default: True
- ignored_dash (boolean): Whether the dash in the anime/episode title should be ignored. Default: True
- ignored_strings (list of strings): A list of strings to be removed from the filename during parsing. Default: []
- keep_delimiters (boolean): Whether the delimiters should be kept in the anime/episode title. Default: False
- max_extension_length (integer): Maximum extension length. Default: 4
- title_before_episode (boolean): Whether the anime title should come before the episode number. Default: True

License

Aniparse is licensed under Mozilla Public License 2.0.
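As an illustration of the "How does it work?" walkthrough above, the following sketch parses the walkthrough filename; the element names and values listed in the comments are assumptions based on that step-by-step description, not verified output.

```python
import aniparse

# Filename used in the "How does it work?" walkthrough above.
filename = "Aim_For_The_Top!_Gunbuster-ep1.BD(H264.FLAC.10bit)[KAA][69ECCDCF].mkv"

elements = aniparse.parse(filename)

# Based on the walkthrough, the result is expected to contain (among others):
#   anime_title     -> "Aim For The Top! Gunbuster"
#   episode_number  -> 1
#   audio_term      -> "FLAC"
#   video_term      -> "H264"
#   file_checksum   -> "69ECCDCF"
#   release_group   -> "KAA"
print(elements)
```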
aniparser
This is a simple parser built to take a file path and return data based on that path, determining some common information about anime.

Usage

The usage is pretty simple; there are two main entry points for grabbing data from files. To parse an entire directory (recursively), you can do the following:

for data in aniparser.parse_directories("/home/user/Anime"): print(data)

To not search recursively, just pass False to the recursive parameter:

for data in aniparser.parse_directories("/home/user/Anime/Specific Anime Folder", recursive=False): print(data)

If you want to parse just a single file:

data = aniparser.parse("/home/user/Anime/Specific Anime Folder/Specific Anime Episode.mpv") print(data)

Details

The idea behind the parsing method in this library is to do the least amount of work possible while maintaining reliability. There are many common patterns that appear in a filename, and the parser tries them in a "sane" order of commonality, doing extra work only when needed. Additionally, since the parser always produces the same output for the same input, and that output has a small memory footprint, it uses aggressive caching that speeds things up tremendously in long-running uses.
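For convenience, here is a minimal sketch that combines the two documented entry points into one script; the paths are placeholders, and since the structure of the returned data is not specified in this description, it is simply printed.

```python
import aniparser

# Walk a library folder recursively and print whatever data the parser
# returns for each file (exact fields are not documented here).
for data in aniparser.parse_directories("/home/user/Anime"):
    print(data)

# Parse a single file; the path is a placeholder.
info = aniparser.parse("/home/user/Anime/Some Show/Some Show - 01.mkv")
print(info)
```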
anipdf
This is the homepage
anipics
Anipics

🖼 Simple module to get anime pictures

📥 Installation

# pip
$ pip install anipics

# poetry
$ poetry add anipics

🔑 Usage

You can see an example of use here.

Available services: AnimePicsX, NekosLife, WaifuPics

📝 License

This project is under the AGPL-3.0 license
anipie
find out on github
anipose
Anipose

Anipose is an open-source toolkit for robust, markerless 3D pose estimation of animal behavior from multiple camera views. It leverages the machine learning toolbox DeepLabCut to track keypoints in 2D, then triangulates across camera views to estimate 3D pose. Check out the Anipose preprint for more information.

The name Anipose comes from AnimalPose, but it also sounds like "any pose".

Documentation

Up to date documentation may be found at anipose.org.

Demos

Videos of flies by Evyn Dickinson (slowed 5x), Tuthill Lab
Videos of hand by Katie Rupp

References

Here are some references for DeepLabCut and other things this project relies upon:

Mathis et al, 2018, "DeepLabCut: markerless pose estimation of user-defined body parts with deep learning"
Romero-Ramirez et al, 2018, "Speeded up detection of squared fiducial markers"
aniposelib
Anipose lib

An easy-to-use library for calibrating cameras and triangulation in Python.

This is the backend library for the Anipose package. The documentation is located in the Anipose repository.
aniposelib-freemocap
Anipose lib

NOTE - This is a fork of the original aniposelib repo so I could tweak it to be compatible with the needs of the FreeMoCap system (github.com/jonmatthis/freemocap).

An easy-to-use library for calibrating cameras and triangulation in Python.

This is the backend library for the Anipose package. The documentation is located in the Anipose repository.
ani-probablity
No description available on PyPI.
anipy
# Anipy[![Build Status](https://travis-ci.org/twissell-/anipy.svg?branch=master)](https://travis-ci.org/twissell-/anipy)[![Codacy Badge](https://api.codacy.com/project/badge/Grade/d811779af6ee4c14a03137894930bb04)](https://www.codacy.com/app/dmaggioesne/anipy?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=twissell-/anipy&amp;utm_campaign=Badge_Grade)[![Codacy Badge](https://api.codacy.com/project/badge/Coverage/d811779af6ee4c14a03137894930bb04)](https://www.codacy.com/app/dmaggioesne/anipy?utm_source=github.com&utm_medium=referral&utm_content=twissell-/anipy&utm_campaign=Badge_Coverage)[![Python Version](https://img.shields.io/badge/python-3.5-blue.svg)]()[![Project License](https://img.shields.io/badge/license-MIT-blue.svg)](https://raw.githubusercontent.com/twissell-/anipy/master/LICENSE)Anipy is a python library that wraps and organize the [Anilist] rest api into modules, classes and functions so it can be used quick, easy, and right out of the box. You can take a look at the api [official docs]. **Anilist is a [Josh Star]'s project**## Table of contents* [Installation](#installation)* [Usage](#usage)* [Authentication](#authentication)* [Resources](#resources)* [Roadmap](#roadmap)## InstallationFor now the only available versions are alphas. You can Instaled the las by:```bash$ git clone https://github.com/twissell-/anipy.git$ cd anipy$ python setup.py # Be sure using Python 3```## UsageI've tried to keep the developer interface as simple as possible.### AuthenticationBefore you can access any Anilist resource you have to get authenticated. Once you have [created a client] you must configure ```auth.AuthenticationProvider``` class with your credentials.Now you can get authenticated with any of the available [grant types]. Aditionaly, Anipy have a ```GrantType.refreshToken``` in case you have saved a refresh token from a previous authentication. *Note that only code and pin authentication gives you a refresh token.*```pythonfrom anipy import AuthenticationProviderfrom anipy import Authenticationfrom anipy import GrantTypeAuthenticationProvider.config('your-client-id', 'your-client-secret', 'your-redirect-uri')auth = Authentication.fromCredentials()# orauth = Authentication.fromCode('code')# orauth = Authentication.fromPin('pin')# Now you can save the refresh tokenrefresh_token = auth.refreshTokenauth = Authentication.fromRefreshToken(refresh_token)```Authentication expires after one hour and will refresh automatically, nevertheless you can do it manually at any time, ie.:```pythonif auth.isExpired:auth.refresh()```### ResourcesResources are one of the most important parts of the library. They are in charge of go an get the data from the Anilist API. Each domain class have a resource, you can compare them to *Data Access Objects*. All resouces are **Singletons**.In order to keep things simple you can access the resource from class it serves```python# Current logged useruser = User.resource().principal()# A user for his Id or Display Nameuser = User.resource().byId(3225)user = User.resource().byDisplayName('demo')```Some resources are injected in other classes also in order to keep things simple (ie. ```AnimeListResource```). 
So if you want to get de watching list of a user you can do:```python# The long wayresource = AnimeListResource()watching_list = resource.byUserId(user.id)# Or the short waywatching_list = user.watching```## RoadmapHere is a sumary of the project state.### Next Release: 0.1- [x] **Authentication**- [x] Authorization Code- [x] Authorization Pin- [x] Client Credentials- [x] **User**- [x] Basics- [ ] **User Lists**- [ ] Animelist- [x] Update watched episodes- [x] Update rewatched- [x] Update notes- [x] Update list status- [ ] Update score (simple)- [ ] Create a entry- [ ] Remove entry- [ ] Mangalist- [ ] List Scores types- [ ] **Anime**- [ ] Basics- [ ] Airing- [ ] Search- [ ] **Manga**- [ ] Basics- [ ] Search### Out of ScopeThing that I'm going to do soon.- Advance rating score- Custom lists[Anilist]: http://Anilist.co[official docs]: https://anilist-api.readthedocs.io[Josh Star]: https://github.com/joshstar[created a client]: https://anilist-api.readthedocs.io/en/latest/introduction.html#creating-a-client[grant types]:https://anilist-api.readthedocs.io/en/latest/authentication.html#which-grant-type-to-use
ani.py
ani.py: This is a simple wrapper for the AniList API.
anipy-cli
ERROR: type should be string, got "https://user-images.githubusercontent.com/63876564/162056019-ed0e7a60-78f6-4a2c-bc73-9be5dc2a4f07.mp4Little tool written in python to watch and download anime from the terminal (the better way to watch anime), also applicable as an API.Scrapes:https://gogoanime.ggIf you dont like to use a cli there is a GUI and other versionshere.ContentsInstallationUsageLibary UsageWhat it can doOther VersionsCreditsInstallationRecommended installation:python3 -m pip install anipy-cli --upgradeDirectly from the repo (may be newer):python3 -m pip install git+https://github.com/sdaqo/anipy-cliFor video playback mpv is needed. Get it here:https://mpv.io/installation/If you would like to use another video player, you will need to specify its path in the config file.Optionally, you can installffmpegto download m3u8 playlists instead of using the internal downloader. You can use it with the-fflag. This is something you should use if the internal downlaoder fails since ffmpeg is comparatively slow.ConfigWhen you start the program for the first time the config file gets created automaticallyPlaces of the config:Linux: ~/.config/anipy-cli/config.yamlWindows: %USERPROFILE%/AppData/Local/anipy-cli/config.yamlMacOS: ~/.config/anipy-cli/config.yamlSample ConfigAttention Windows Users Using MPV:If you activate the optionreuse_mpv_window, you will have to download and put thempv-2.dllin your path. To get it go look here:https://sourceforge.net/projects/mpv-player-windows/files/libmpv/Attention Windows Users on Config File Placement:If you have downloaded Python from the Microsoft Store, your config file will be cached inside of your Python's AppData. For example:%USERPROFILE%\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\Local\\anipy-cli\\config.yaml.Usageusage: anipy-cli [-D | -B | -H | -S | -M | --delete-history] [-q QUALITY] [-f] [-o] [-a] [-p {mpv,vlc,syncplay,mpvnet}] [-l LOCATION] [--mal-password MAL_PASSWORD] [-h] [-v] [--config-path]\n\nPlay Animes from gogoanime in local video-player or Download them.\n\nActions:\n Different Actions and Modes of anipy-cli (only pick one)\n\n -D, --download Download mode. Download multiple episodes like so: first_number-second_number (e.g. 1-3)\n -B, --binge Binge mode. Binge multiple episodes like so: first_number-second_number (e.g. 1-3)\n -H, --history Show your history of watched anime\n -S, --seasonal Seasonal Anime mode. Bulk download or binge watch newest episodes.\n -M, --my-anime-list MyAnimeList mode. Similar to seasonal mode, but using MyAnimeList (requires MAL account credentials to be set in config).\n --delete-history Delete your History.\n\nOptions:\n Options to change the behaviour of anipy-cli\n\n -q QUALITY, --quality QUALITY\n Change the quality of the video, accepts: best, worst or 360, 480, 720 etc. Default: best\n -f, --ffmpeg Use ffmpeg to download m3u8 playlists, may be more stable but is way slower than internal downloader\n -o, --no-seas-search Turn off search in season. 
Disables prompting if GoGoAnime is to be searched for anime in specific season.\n -a, --auto-update Automatically update and download all Anime in seasonals list from start EP to newest.\n -p {mpv,vlc,syncplay,mpvnet}, --optional-player {mpv,vlc,syncplay,mpvnet}\n Override the player set in the config.\n -l LOCATION, --location LOCATION\n Override all configured download locations\n --mal-password MAL_PASSWORD\n Provide password for MAL login (overrides password set in config)\n\nInfo:\n Info about the current anipy-cli installation\n\n -h, --help show this help message and exit\n -v, --version show program's version number and exit\n --config-path Print path to the config file.What it can doFaster than watching in the browser.Play Animes in Your Local video playerSelect a quality in which the video will be played/downloaded.Download AnimesHistory of watched EpisodesBinge Mode to watch a range of episodes back-to-back.Seasonal Mode to bulk download or binge watch the latest episodes of animes you pickConfigurable with config(Optional) MAL Mode: Like seasonal mode, but uses your anime list atMyAnimeList.net(Optional) Search GoGo for animes in specific seasons. Available for the download cli, seasonal mode and MAL mode. Turn it off with -o flag.(Optional) Discord Presence for the anime you currently watch. This is off by default, activate it in the config (-c)(Optional) Ffmpeg to download m3u8 playlists, may be more stable but is slower than internal downloader.Libary UsageDocumentation can be foundhereImportant:To import the libary, don't importanipy-cli, butanipy_cli(no '-' is allowed)Advanced Usage ExamplesLittle example of using anipy-cli for automatically keeping anime library up-to-date:# Cronjob runs every 2 minutes and checks wether anipy-cli is still running or not \n# (only run the job if last one is finished)\n\n*/2 * * * * username pidof -x anipy-cli || anipy-cli -Ma >> /var/log/anipy-cli.logOther versionsGUI Frontend by me (WIP):https://github.com/sdaqo/anipy-guiDmenu script by @Dabbing-Guy:https://github.com/Dabbing-Guy/anipy-dmenuUlauncher extension by @Dankni95 (not maintained):https://github.com/Dankni95/ulauncher-animeCreditsHeavily inspired byhttps://github.com/pystardust/ani-cli/All contributors for contributing"
aniquote
No description available on PyPI.
anircbot
UNKNOWN
anirtic-calculator
Description

calculator.py includes a class 'Calculator' which performs basic calculator operations.

Installation

pip install anirtic_calculator

Importing the package

from anirtic_calculator.calculator import Calculator

Attributes

memory : float
    Calculator memory

Methods

add(number): Takes the number and adds it to calculator memory
subtract(number): Takes the number and subtracts it from calculator memory
multiply(number): Multiplies calculator memory by a given number
divide(number): Divides calculator memory by a given number
root(number): Takes the n-th root of calculator memory
reset(): Resets calculator memory

Example

from anirtic_calculator.calculator import Calculator
Calculator.add(2)
2.0
Calculator.multiply(3)
6.0
Calculator.divide(3)
2.0
Calculator.root(2)
1.4142135623730951
Calculator.reset()
0.0
anirudh-globalmart-api
No description available on PyPI.
anirudhTopsis
No description available on PyPI.
anisble
No description available on PyPI.
ani-sched
Python Anime Schedule API

A lightweight API for gathering anime schedules from MyAnimeList or anime news and announcements from LiveChart.

Installation and Usage

To install the library:

pip install -U ani-sched

To import the library:

from ani_sched import *

Example

To call the API, you need to create an object.

from ani_sched import AniSched

api = AniSched()
fall_2022 = api.season(year=2022, season='fall')  # gets the anime of Fall 2022

print(fall_2022["TV (New)"][7]["title"])  # prints the title of the 8th most popular TV anime of Fall 2022
# output: "Bocchi the Rock!"
ani-sched1
No description available on PyPI.
aniscrape
aniScrapeScraper for aniSearch.de and aniSearch.comrequirementsrequestsPython 3.xHow to use it and what does it do?It returns an dictionary for the given ID.import aniScrape dictionary = aniScrape.scrape(aS_ID,language,imghoster) #aS_ID is the aniSearch ID #language determines whether it uses .de or .com. It will ONLY use .com if you input "en". Standard(without inputting anything) is .de #imghoster: you can input "imgur" or "imgbb", it will upload the image from aS to the given hoster. If nothing is given, it will return an None statementDictionary Definitions{ "id": aS ID, "error": Whether an Error occured or not(if no Error: False), "jap": "Japanese Name*", "kan": "Kanjis*", "eng": "English/International Name*", "ger": "German Name*", "syn": [ "Synonyms in a List" ], "description": "Full Description without links*", "type": "Type of Series*", "time": "Average Time per Episode in Minutes*", "episodes": "Episodes of the season*", "date": { "year": "year*", "month": "month*", "day": "day*" }, "origin": "Japan*", "adaption_of": "Light Novel*", "targetgroup": "Male*", "genres": { "genre_main": [ "Main Genre in a List" ], "genre_sub": [ "Sub", "Genres", "in", "a", "List" ], "tags": [ "Tags", "in", "a", "List" ] }, "img": "Link to the aniSearch image* (Will be None if the picture at aniSearch is Empty)", "hoster": "link to the chosen host*" } * Everything with a "*" CAN be "None" if not available on websiteFull Dictionary Exampleprint(json.dumps(aniScrape.scrape(7335,"en","imgbb"),indent=1)) { "id": 7335, "error": false, "jap": "Sword Art Online", "kan": "\u30bd\u30fc\u30c9\u30a2\u30fc\u30c8\u30fb\u30aa\u30f3\u30e9\u30a4\u30f3", "eng": "Sword Art Online", "ger": null, "syn": [ "SAO" ], "description": "Blurb:Escape was impossible until it was cleared; a game over would mean an actual \u00abdeath\u00bb. Without knowing the \u00abtruth\u00bb of the mysterious next generation MMO, \u00abSword Art Online\u00bb (SAO), approximately ten thousand users logged in together, opening the curtains to this cruel death battle. Participating alone in SAO, protagonist Kirito had promptly accepted the \u00abtruth\u00bb of this MMO. And in the game world, a gigantic floating castle named \u00abAincrad\u00bb, he distinguished himself as a solo player.Aiming to clear the game by reaching the highest floor, Kirito riskily continued alone. Because of a pushy invitation from a female warrior and rapier expert, Asuna, he teamed up with her. That encounter brought about an opportunity to call out to the fated Kirito.", "type": "TV-Series", "time": "24", "episodes": "25", "date": { "year": "2012", "month": "07", "day": "08" }, "origin": "Japan", "adaption_of": "Light Novel", "targetgroup": "Male", "genres": { "genre_main": [ "Action Drama" ], "genre_sub": [ "Action", "Adventure", "Drama", "Fantasy", "Romance", "Science-Fiction" ], "tags": [ "Alternative World", "Contemporary Fantasy", "Hero of Strong Character", "Magic", "Swords & Co", "Virtual World" ] }, "img": "https://cdn.anisearch.com/images/anime/cover/full/7/7335.jpg", "hoster": "https://i.ibb.co/WnBXx3c/7335.jpg" }
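As a small illustration of the dictionary described above, the sketch below pulls out a few of the documented keys; the ID and hoster argument are taken from the example in this description, and the exact runtime behaviour of the scraper is not verified here.

```python
import aniScrape

# ID and image hoster as used in the example above.
info = aniScrape.scrape(7335, "en", "imgbb")

# "error" is False when the lookup succeeded (see the dictionary definitions above).
if not info["error"]:
    print(info["eng"])             # English/International name
    print(info["date"]["year"])    # release year
    print(info["genres"]["tags"])  # list of tags
```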
aniscrape-tami
No description available on PyPI.
anise
ANISE (Attitude, Navigation, Instrument, Spacecraft, Ephemeris)ANISE is a rewrite of the core functionalities of the NAIF SPICE toolkit with enhanced performance, and ease of use, while leveraging Rust's safety and speed.Please fill out our user surveyIntroductionIn the realm of space exploration, navigation, and astrophysics, precise and efficient computation of spacecraft position, orientation, and time is critical. ANISE, standing for "Attitude, Navigation, Instrument, Spacecraft, Ephemeris," offers a Rust-native approach to these challenges. This toolkit provides a suite of functionalities including but not limited to:Loading SPK, BPC, PCK, FK, and TPC files.High-precision translations, rotations, and their combination (rigid body transformations).Comprehensive time system conversions using the hifitime library (including TT, TAI, ET, TDB, UTC, GPS time, and more).ANISE stands validated against the traditional SPICE toolkit, ensuring accuracy and reliability, with translations achieving machine precision (2e-16) and rotations presenting minimal error (less than two arcseconds in the pointing of the rotation axis and less than one arcsecond in the angle about this rotation axis).FeaturesHigh Precision: Matches SPICE to machine precision in translations and minimal errors in rotations.Time System Conversions: Extensive support for various time systems crucial in astrodynamics.Rust Efficiency: Harnesses the speed and safety of Rust for space computations.Multi-threaded:Yup! Forget about mutexes and race conditions you're used to in SPICE, ANISEguaranteesthat you won't have any race conditions.Frame safety: ANISE checks all frames translations or rotations are physically valid before performing any computation, even internally.Tutorials01 - Querying SPK files02 - Loading remote and local files (MetaAlmanac)03 - Defining and working with the orbit structure04 - Computing azimuth, elevation, and range data (AER)Note: The tutorials can be viewed in read-only form onthe Github repo.UsageIn Python, start by adding anise to your project:pip install anise.fromaniseimportAlmanac,Aberrationfromanise.astro.constantsimportFramesfromanise.astroimportOrbitfromanise.timeimportEpochfrompathlibimportPathdeftest_state_transformation():"""This is the Python equivalent to anise/tests/almanac/mod.rs"""data_path=Path(__file__).parent.joinpath("..","..","data")# Must ensure that the path is a stringctx=Almanac(str(data_path.joinpath("de440s.bsp")))# Let's add another file here -- note that the Almanac will load into a NEW variable, so we must overwrite it!# This prevents memory leaks (yes, I promise)ctx=ctx.load(str(data_path.joinpath("pck08.pca"))).load(str(data_path.joinpath("earth_latest_high_prec.bpc")))eme2k=ctx.frame_info(Frames.EME2000)asserteme2k.mu_km3_s2()==398600.435436096asserteme2k.shape.polar_radius_km==6356.75assertabs(eme2k.shape.flattening()-0.0033536422844278)<2e-16epoch=Epoch("2021-10-29 12:34:56 TDB")orig_state=Orbit.from_keplerian(8_191.93,1e-6,12.85,306.614,314.19,99.887_7,epoch,eme2k,)assertorig_state.sma_km()==8191.93assertorig_state.ecc()==1.000000000361619e-06assertorig_state.inc_deg()==12.849999999999987assertorig_state.raan_deg()==306.614assertorig_state.tlong_deg()==0.6916999999999689state_itrf93=ctx.transform_to(orig_state,Frames.EARTH_ITRF93,None)print(orig_state)print(state_itrf93)assertstate_itrf93.latitude_deg()==10.549246868302738assertstate_itrf93.longitude_deg()==133.76889100913047assertstate_itrf93.height_km()==1814.503598063825# Convert 
backfrom_state_itrf93_to_eme2k=ctx.transform_to(state_itrf93,Frames.EARTH_J2000,None)print(from_state_itrf93_to_eme2k)assertorig_state==from_state_itrf93_to_eme2k# Demo creation of a ground stationmean_earth_angular_velocity_deg_s=0.004178079012116429# Grab the loaded frame infoitrf93=ctx.frame_info(Frames.EARTH_ITRF93)paris=Orbit.from_latlongalt(48.8566,2.3522,0.4,mean_earth_angular_velocity_deg_s,epoch,itrf93,)assertabs(paris.latitude_deg()-48.8566)<1e-3assertabs(paris.longitude_deg()-2.3522)<1e-3assertabs(paris.height_km()-0.4)<1e-3if__name__=="__main__":test_state_transformation()Getting started as a developerInstallmaturin, e.g. viapipxaspipx install maturinCreate a virtual environment:cd anise/anise-py && python3 -m venv .venvJump into the virtual environment and installpatchelffor faster builds:pip install patchelf, andpytestfor the test suite:pip install pytestRunmaturin developto build the development package and install it in the virtual environmentFinally, run the testspython -m pytestTo run the development version of ANISE in a Jupyter Notebook, install ipykernels in your virtual environment.pip install ipykernelNow, build the local kernel:python -m ipykernel install --user --name=.venvThen, start jupyter notebook:jupyter notebookOpen the notebook, click on the top right and make sure to choose the environment you created just a few steps above.
anish-101703072-outlier
No description available on PyPI.
anisha-job-selection
This is Anisha's first "solo" project.
anisha-vehicles
This is a test project
anishot
anishotAnimates a long screenshot into a GIF. Use it to show off long screenshots in your GitHub README.Install$ pip install anishotUsage$ anishot Usage: anishot.__main__: --h: Window height (default: '0') (an integer) --inp: Input screenshot image --maxspeed: Max speed on scroll px/frame (default: '200') (an integer) --out: Output antimated GIF --pad: Padding on sides (default: '0') (an integer) --rgb_bg: Background color (default: '#ffffff') --rgb_outline: Screenshot outline color (default: '#e1e4e8') --rgb_shadow: Screenshot shadow color (default: '#999999') --rgb_window: Window outline color (default: '#e1e4e8') --shadow_size: Shadow size (default: '0') (an integer) --start_scale: Start scale (default: '0.5') (a number) --stops: List of stops for scrolling (default: '') (a comma separated list) --zoom_steps: Number of steps on initial zoom in (default: '7') (an integer) --zoom_to: Point to zoom to (default: '0') (an integer)The anishot at the top of this README was generated by:anishot --inp=anishot.png --out=anishot.gif --h=450 --stops=290,640,940 --zoom_to=150 --start_scale=.7You can also experiment with styles. For example, you can go for a retro look:anishot --inp=anishot.png --out=anishot.gif --h=450 --stops=290,640,940 --zoom_to=150 --start_scale=.7 --pad=50 --shadow_size=5 --rgb_bg=#cccccc --rgb_window=#666666
anish-temperature
anish_temperature

A package to convert temperature from one scale to another.

Use the package in your project

Install the package:

pip install anish-temperature

Using the package:

import anish_temperature.temperature as temperature
temperature.celsius_to_fahrenheit(100)

Anish Basukar © 2023

Note: This is for a university project. Might not be maintained in future.
anishTopsis
No description available on PyPI.
anislbe
No description available on PyPI.
aniso8601
aniso8601Another ISO 8601 parser for PythonFeaturesPure Python implementationLogical behaviorParse a time, get adatetime.timeParse a date, get adatetime.dateParse a datetime, get adatetime.datetimeParse a duration, get adatetime.timedeltaParse an interval, get a tuple of dates or datetimesParse a repeating interval, get a date or datetimegeneratorUTC offset represented as fixed-offset tzinfoParser separate from representation, allowing parsing to different datetime representations (seeBuilders)No regular expressionsInstallationThe recommended installation method is to use pip:$ pip install aniso8601Alternatively, you can download the source (git repository hosted atBitbucket) and install directly:$ python setup.py installUseParsing datetimesConsiderdatetime.datetime.fromisoformatfor basic ISO 8601 datetime parsingTo parse a typical ISO 8601 datetime string:>>> import aniso8601 >>> aniso8601.parse_datetime('1977-06-10T12:00:00Z') datetime.datetime(1977, 6, 10, 12, 0, tzinfo=+0:00:00 UTC)Alternative delimiters can be specified, for example, a space:>>> aniso8601.parse_datetime('1977-06-10 12:00:00Z', delimiter=' ') datetime.datetime(1977, 6, 10, 12, 0, tzinfo=+0:00:00 UTC)UTC offsets are supported:>>> aniso8601.parse_datetime('1979-06-05T08:00:00-08:00') datetime.datetime(1979, 6, 5, 8, 0, tzinfo=-8:00:00 UTC)If a UTC offset is not specified, the returned datetime will be naive:>>> aniso8601.parse_datetime('1983-01-22T08:00:00') datetime.datetime(1983, 1, 22, 8, 0)Leap seconds are currently not supported and attempting to parse one raises aLeapSecondError:>>> aniso8601.parse_datetime('2018-03-06T23:59:60') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/nielsenb/Jetfuse/aniso8601/aniso8601/aniso8601/time.py", line 196, in parse_datetime return builder.build_datetime(datepart, timepart) File "/home/nielsenb/Jetfuse/aniso8601/aniso8601/aniso8601/builders/python.py", line 237, in build_datetime cls._build_object(time)) File "/home/nielsenb/Jetfuse/aniso8601/aniso8601/aniso8601/builders/__init__.py", line 336, in _build_object return cls.build_time(hh=parsetuple.hh, mm=parsetuple.mm, File "/home/nielsenb/Jetfuse/aniso8601/aniso8601/aniso8601/builders/python.py", line 191, in build_time hh, mm, ss, tz = cls.range_check_time(hh, mm, ss, tz) File "/home/nielsenb/Jetfuse/aniso8601/aniso8601/aniso8601/builders/__init__.py", line 266, in range_check_time raise LeapSecondError('Leap seconds are not supported.') aniso8601.exceptions.LeapSecondError: Leap seconds are not supported.To get the resolution of an ISO 8601 datetime string:>>> aniso8601.get_datetime_resolution('1977-06-10T12:00:00Z') == aniso8601.resolution.TimeResolution.Seconds True >>> aniso8601.get_datetime_resolution('1977-06-10T12:00') == aniso8601.resolution.TimeResolution.Minutes True >>> aniso8601.get_datetime_resolution('1977-06-10T12') == aniso8601.resolution.TimeResolution.Hours TrueNote that datetime resolutions map toTimeResolutionas a valid datetime must have at least one time member so the resolution mapping is equivalent.Parsing datesConsiderdatetime.date.fromisoformatfor basic ISO 8601 date parsingTo parse a date represented in an ISO 8601 string:>>> import aniso8601 >>> aniso8601.parse_date('1984-04-23') datetime.date(1984, 4, 23)Basic format is supported as well:>>> aniso8601.parse_date('19840423') datetime.date(1984, 4, 23)To parse a date using the ISO 8601 week date format:>>> aniso8601.parse_date('1986-W38-1') datetime.date(1986, 9, 15)To parse an ISO 8601 ordinal date:>>> 
aniso8601.parse_date('1988-132') datetime.date(1988, 5, 11)To get the resolution of an ISO 8601 date string:>>> aniso8601.get_date_resolution('1981-04-05') == aniso8601.resolution.DateResolution.Day True >>> aniso8601.get_date_resolution('1981-04') == aniso8601.resolution.DateResolution.Month True >>> aniso8601.get_date_resolution('1981') == aniso8601.resolution.DateResolution.Year TrueParsing timesConsiderdatetime.time.fromisoformatfor basic ISO 8601 time parsingTo parse a time formatted as an ISO 8601 string:>>> import aniso8601 >>> aniso8601.parse_time('11:31:14') datetime.time(11, 31, 14)As with all of the above, basic format is supported:>>> aniso8601.parse_time('113114') datetime.time(11, 31, 14)A UTC offset can be specified for times:>>> aniso8601.parse_time('17:18:19-02:30') datetime.time(17, 18, 19, tzinfo=-2:30:00 UTC) >>> aniso8601.parse_time('171819Z') datetime.time(17, 18, 19, tzinfo=+0:00:00 UTC)Reduced accuracy is supported:>>> aniso8601.parse_time('21:42') datetime.time(21, 42) >>> aniso8601.parse_time('22') datetime.time(22, 0)A decimal fraction is always allowed on the lowest order element of an ISO 8601 formatted time:>>> aniso8601.parse_time('22:33.5') datetime.time(22, 33, 30) >>> aniso8601.parse_time('23.75') datetime.time(23, 45)The decimal fraction can be specified with a comma instead of a full-stop:>>> aniso8601.parse_time('22:33,5') datetime.time(22, 33, 30) >>> aniso8601.parse_time('23,75') datetime.time(23, 45)Leap seconds are currently not supported and attempting to parse one raises aLeapSecondError:>>> aniso8601.parse_time('23:59:60') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/nielsenb/Jetfuse/aniso8601/aniso8601/aniso8601/time.py", line 174, in parse_time return builder.build_time(hh=hourstr, mm=minutestr, ss=secondstr, tz=tz) File "/home/nielsenb/Jetfuse/aniso8601/aniso8601/aniso8601/builders/python.py", line 191, in build_time hh, mm, ss, tz = cls.range_check_time(hh, mm, ss, tz) File "/home/nielsenb/Jetfuse/aniso8601/aniso8601/aniso8601/builders/__init__.py", line 266, in range_check_time raise LeapSecondError('Leap seconds are not supported.') aniso8601.exceptions.LeapSecondError: Leap seconds are not supported.To get the resolution of an ISO 8601 time string:>>> aniso8601.get_time_resolution('11:31:14') == aniso8601.resolution.TimeResolution.Seconds True >>> aniso8601.get_time_resolution('11:31') == aniso8601.resolution.TimeResolution.Minutes True >>> aniso8601.get_time_resolution('11') == aniso8601.resolution.TimeResolution.Hours TrueParsing durationsTo parse a duration formatted as an ISO 8601 string:>>> import aniso8601 >>> aniso8601.parse_duration('P1Y2M3DT4H54M6S') datetime.timedelta(428, 17646)Reduced accuracy is supported:>>> aniso8601.parse_duration('P1Y') datetime.timedelta(365)A decimal fraction is allowed on the lowest order element:>>> aniso8601.parse_duration('P1YT3.5M') datetime.timedelta(365, 210)The decimal fraction can be specified with a comma instead of a full-stop:>>> aniso8601.parse_duration('P1YT3,5M') datetime.timedelta(365, 210)Parsing a duration from a combined date and time is supported as well:>>> aniso8601.parse_duration('P0001-01-02T01:30:05') datetime.timedelta(397, 5405)To get the resolution of an ISO 8601 duration string:>>> aniso8601.get_duration_resolution('P1Y2M3DT4H54M6S') == aniso8601.resolution.DurationResolution.Seconds True >>> aniso8601.get_duration_resolution('P1Y2M3DT4H54M') == aniso8601.resolution.DurationResolution.Minutes True >>> 
aniso8601.get_duration_resolution('P1Y2M3DT4H') == aniso8601.resolution.DurationResolution.Hours True >>> aniso8601.get_duration_resolution('P1Y2M3D') == aniso8601.resolution.DurationResolution.Days True >>> aniso8601.get_duration_resolution('P1Y2M') == aniso8601.resolution.DurationResolution.Months True >>> aniso8601.get_duration_resolution('P1Y') == aniso8601.resolution.DurationResolution.Years TrueThe defaultPythonTimeBuilderassumes years are 365 days, and months are 30 days. Where calendar level accuracy is required, aRelativeTimeBuildercan be used, see alsoBuilders.Parsing intervalsTo parse an interval specified by a start and end:>>> import aniso8601 >>> aniso8601.parse_interval('2007-03-01T13:00:00/2008-05-11T15:30:00') (datetime.datetime(2007, 3, 1, 13, 0), datetime.datetime(2008, 5, 11, 15, 30))Intervals specified by a start time and a duration are supported:>>> aniso8601.parse_interval('2007-03-01T13:00:00Z/P1Y2M10DT2H30M') (datetime.datetime(2007, 3, 1, 13, 0, tzinfo=+0:00:00 UTC), datetime.datetime(2008, 5, 9, 15, 30, tzinfo=+0:00:00 UTC))A duration can also be specified by a duration and end time:>>> aniso8601.parse_interval('P1M/1981-04-05') (datetime.date(1981, 4, 5), datetime.date(1981, 3, 6))Notice that the result of the above parse is not in order from earliest to latest. If sorted intervals are required, simply use thesortedkeyword as shown below:>>> sorted(aniso8601.parse_interval('P1M/1981-04-05')) [datetime.date(1981, 3, 6), datetime.date(1981, 4, 5)]The end of an interval is returned as a datetime when required to maintain the resolution specified by a duration, even if the duration start is given as a date:>>> aniso8601.parse_interval('2014-11-12/PT4H54M6.5S') (datetime.date(2014, 11, 12), datetime.datetime(2014, 11, 12, 4, 54, 6, 500000)) >>> aniso8601.parse_interval('2007-03-01/P1.5D') (datetime.date(2007, 3, 1), datetime.datetime(2007, 3, 2, 12, 0))Concise representations are supported:>>> aniso8601.parse_interval('2020-01-01/02') (datetime.date(2020, 1, 1), datetime.date(2020, 1, 2)) >>> aniso8601.parse_interval('2007-12-14T13:30/15:30') (datetime.datetime(2007, 12, 14, 13, 30), datetime.datetime(2007, 12, 14, 15, 30)) >>> aniso8601.parse_interval('2008-02-15/03-14') (datetime.date(2008, 2, 15), datetime.date(2008, 3, 14)) >>> aniso8601.parse_interval('2007-11-13T09:00/15T17:00') (datetime.datetime(2007, 11, 13, 9, 0), datetime.datetime(2007, 11, 15, 17, 0))Repeating intervals are supported as well, and return agenerator:>>> aniso8601.parse_repeating_interval('R3/1981-04-05/P1D') <generator object _date_generator at 0x7fd800d3b320> >>> list(aniso8601.parse_repeating_interval('R3/1981-04-05/P1D')) [datetime.date(1981, 4, 5), datetime.date(1981, 4, 6), datetime.date(1981, 4, 7)]Repeating intervals are allowed to go in the reverse direction:>>> list(aniso8601.parse_repeating_interval('R2/PT1H2M/1980-03-05T01:01:00')) [datetime.datetime(1980, 3, 5, 1, 1), datetime.datetime(1980, 3, 4, 23, 59)]Unbounded intervals are also allowed (Python 2):>>> result = aniso8601.parse_repeating_interval('R/PT1H2M/1980-03-05T01:01:00') >>> result.next() datetime.datetime(1980, 3, 5, 1, 1) >>> result.next() datetime.datetime(1980, 3, 4, 23, 59)or for Python 3:>>> result = aniso8601.parse_repeating_interval('R/PT1H2M/1980-03-05T01:01:00') >>> next(result) datetime.datetime(1980, 3, 5, 1, 1) >>> next(result) datetime.datetime(1980, 3, 4, 23, 59)Note that you should never try to convert a generator produced by an unbounded interval to a list:>>> 
list(aniso8601.parse_repeating_interval('R/PT1H2M/1980-03-05T01:01:00')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/nielsenb/Jetfuse/aniso8601/aniso8601/aniso8601/builders/python.py", line 560, in _date_generator_unbounded currentdate += timedelta OverflowError: date value out of rangeTo get the resolution of an ISO 8601 interval string:>>> aniso8601.get_interval_resolution('2007-03-01T13:00:00/2008-05-11T15:30:00') == aniso8601.resolution.IntervalResolution.Seconds True >>> aniso8601.get_interval_resolution('2007-03-01T13:00/2008-05-11T15:30') == aniso8601.resolution.IntervalResolution.Minutes True >>> aniso8601.get_interval_resolution('2007-03-01T13/2008-05-11T15') == aniso8601.resolution.IntervalResolution.Hours True >>> aniso8601.get_interval_resolution('2007-03-01/2008-05-11') == aniso8601.resolution.IntervalResolution.Day True >>> aniso8601.get_interval_resolution('2007-03/P1Y') == aniso8601.resolution.IntervalResolution.Month True >>> aniso8601.get_interval_resolution('2007/P1Y') == aniso8601.resolution.IntervalResolution.Year TrueAnd for repeating ISO 8601 interval strings:>>> aniso8601.get_repeating_interval_resolution('R3/1981-04-05/P1D') == aniso8601.resolution.IntervalResolution.Day True >>> aniso8601.get_repeating_interval_resolution('R/PT1H2M/1980-03-05T01:01:00') == aniso8601.resolution.IntervalResolution.Seconds TrueBuildersBuilders can be used to change the output format of a parse operation. All parse functions have abuilderkeyword argument which accepts a builder class.Two builders are included. ThePythonTimeBuilder(the default) in theaniso8601.builders.pythonmodule, and theTupleBuilderwhich returns the parse result as a corresponding named tuple and is located in theaniso8601.buildersmodule.Information on writing a builder can be found inBUILDERS.The following builders are available as separate projects:RelativeTimeBuildersupports parsing todatetutil relativedelta typesfor calendar level accuracyAttoTimeBuildersupports parsing directly toattotime attodatetime and attotimedelta typeswhich support sub-nanosecond precisionNumPyTimeBuildersupports parsing directly toNumPy datetime64 and timedelta64 typesTupleBuilderTheTupleBuilderreturns parse results asnamed tuples. It is located in theaniso8601.buildersmodule.DatetimesParsing a datetime returns aDatetimeTuplecontainingDateandTimetuples . The date tuple contains the following parse components:YYYY,MM,DD,Www,D,DDD. 
The time tuple contains the following parse componentshh,mm,ss,tz, wheretzitself is a tuple with the following componentsnegative,Z,hh,mm,namewithnegativeandZbeing booleans:>>> import aniso8601 >>> from aniso8601.builders import TupleBuilder >>> aniso8601.parse_datetime('1977-06-10T12:00:00', builder=TupleBuilder) Datetime(date=Date(YYYY='1977', MM='06', DD='10', Www=None, D=None, DDD=None), time=Time(hh='12', mm='00', ss='00', tz=None)) >>> aniso8601.parse_datetime('1979-06-05T08:00:00-08:00', builder=TupleBuilder) Datetime(date=Date(YYYY='1979', MM='06', DD='05', Www=None, D=None, DDD=None), time=Time(hh='08', mm='00', ss='00', tz=Timezone(negative=True, Z=None, hh='08', mm='00', name='-08:00')))DatesParsing a date returns aDateTuplecontaining the following parse components:YYYY,MM,DD,Www,D,DDD:>>> import aniso8601 >>> from aniso8601.builders import TupleBuilder >>> aniso8601.parse_date('1984-04-23', builder=TupleBuilder) Date(YYYY='1984', MM='04', DD='23', Www=None, D=None, DDD=None) >>> aniso8601.parse_date('1986-W38-1', builder=TupleBuilder) Date(YYYY='1986', MM=None, DD=None, Www='38', D='1', DDD=None) >>> aniso8601.parse_date('1988-132', builder=TupleBuilder) Date(YYYY='1988', MM=None, DD=None, Www=None, D=None, DDD='132')TimesParsing a time returns aTimeTuplecontaining following parse components:hh,mm,ss,tz, wheretzis aTimezoneTuplewith the following componentsnegative,Z,hh,mm,name, withnegativeandZbeing booleans:>>> import aniso8601 >>> from aniso8601.builders import TupleBuilder >>> aniso8601.parse_time('11:31:14', builder=TupleBuilder) Time(hh='11', mm='31', ss='14', tz=None) >>> aniso8601.parse_time('171819Z', builder=TupleBuilder) Time(hh='17', mm='18', ss='19', tz=Timezone(negative=False, Z=True, hh=None, mm=None, name='Z')) >>> aniso8601.parse_time('17:18:19-02:30', builder=TupleBuilder) Time(hh='17', mm='18', ss='19', tz=Timezone(negative=True, Z=None, hh='02', mm='30', name='-02:30'))DurationsParsing a duration returns aDurationTuplecontaining the following parse components:PnY,PnM,PnW,PnD,TnH,TnM,TnS:>>> import aniso8601 >>> from aniso8601.builders import TupleBuilder >>> aniso8601.parse_duration('P1Y2M3DT4H54M6S', builder=TupleBuilder) Duration(PnY='1', PnM='2', PnW=None, PnD='3', TnH='4', TnM='54', TnS='6') >>> aniso8601.parse_duration('P7W', builder=TupleBuilder) Duration(PnY=None, PnM=None, PnW='7', PnD=None, TnH=None, TnM=None, TnS=None)IntervalsParsing an interval returns anIntervalTuplecontaining the following parse components:start,end,duration,startandendmay both be datetime or date tuples,durationis a duration tuple:>>> import aniso8601 >>> from aniso8601.builders import TupleBuilder >>> aniso8601.parse_interval('2007-03-01T13:00:00/2008-05-11T15:30:00', builder=TupleBuilder) Interval(start=Datetime(date=Date(YYYY='2007', MM='03', DD='01', Www=None, D=None, DDD=None), time=Time(hh='13', mm='00', ss='00', tz=None)), end=Datetime(date=Date(YYYY='2008', MM='05', DD='11', Www=None, D=None, DDD=None), time=Time(hh='15', mm='30', ss='00', tz=None)), duration=None) >>> aniso8601.parse_interval('2007-03-01T13:00:00Z/P1Y2M10DT2H30M', builder=TupleBuilder) Interval(start=Datetime(date=Date(YYYY='2007', MM='03', DD='01', Www=None, D=None, DDD=None), time=Time(hh='13', mm='00', ss='00', tz=Timezone(negative=False, Z=True, hh=None, mm=None, name='Z'))), end=None, duration=Duration(PnY='1', PnM='2', PnW=None, PnD='10', TnH='2', TnM='30', TnS=None)) >>> aniso8601.parse_interval('P1M/1981-04-05', builder=TupleBuilder) Interval(start=None, end=Date(YYYY='1981', MM='04', DD='05', 
Www=None, D=None, DDD=None), duration=Duration(PnY=None, PnM='1', PnW=None, PnD=None, TnH=None, TnM=None, TnS=None))A repeating interval returns aRepeatingIntervalTuplecontaining the following parse components:R,Rnn,interval, whereRis a boolean,Truefor an unbounded interval,Falseotherwise.:>>> aniso8601.parse_repeating_interval('R3/1981-04-05/P1D', builder=TupleBuilder) RepeatingInterval(R=False, Rnn='3', interval=Interval(start=Date(YYYY='1981', MM='04', DD='05', Www=None, D=None, DDD=None), end=None, duration=Duration(PnY=None, PnM=None, PnW=None, PnD='1', TnH=None, TnM=None, TnS=None))) >>> aniso8601.parse_repeating_interval('R/PT1H2M/1980-03-05T01:01:00', builder=TupleBuilder) RepeatingInterval(R=True, Rnn=None, interval=Interval(start=None, end=Datetime(date=Date(YYYY='1980', MM='03', DD='05', Www=None, D=None, DDD=None), time=Time(hh='01', mm='01', ss='00', tz=None)), duration=Duration(PnY=None, PnM=None, PnW=None, PnD=None, TnH='1', TnM='2', TnS=None)))DevelopmentSetupIt is recommended to develop using avirtualenv.Inside a virtualenv, development dependencies can be installed automatically:$ pip install -e .[dev]pre-commitis used for managing pre-commit hooks:$ pre-commit installTo run the pre-commit hooks manually:$ pre-commit run --all-filesTestsTests can be run using theunittest testing framework:$ python -m unittest discover aniso8601Contributinganiso8601 is an open source project hosted onBitbucket.Any and all bugs are welcome on ourissue tracker. Of particular interest are valid ISO 8601 strings that don’t parse, or invalid ones that do. At a minimum, bug reports should include an example of the misbehaving string, as well as the expected result. Of course patches containing unit tests (or fixed bugs) are welcome!ReferencesISO 8601:2004(E)(Caution, PDF link)Wikipedia article on ISO 8601Discussion on alternative ISO 8601 parsers for Python
anisofilter
Python Wrapper for Anisotropic Denoising of 3D Point Clouds

A Python implementation for denoising 3D point clouds with Gaussian noise, where anisotropic neighborhoods are computed both to denoise the smooth regions and to preserve the sharp features, i.e. edges and corners.

The implementation is based on Z. Xu and A. Foi, "Anisotropic Denoising of 3D Point Clouds by Aggregation of Multiple Surface-Adaptive Estimates," in IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 6, pp. 2851-2868, 1 June 2021, doi: 10.1109/TVCG.2019.2959761.

The package contains the Anisotropic Denoising binaries compiled for:

Windows (Win10, MinGW-64)
Linux (Ubuntu 20.04.2 LTS, 64-bit)
Mac OSX (Big Sur, 64-bit)

The binaries are available for non-commercial use only (please see LICENSE for more details).

For the demo, see the demo folder of the full source zip, which also includes the example noisy and noise-free point clouds demonstrated in the paper. You can also download the demo from https://webpages.tuni.fi/foi/PointCloudFiltering/pcd_anisotropic_denoi_py_demo.zip

Authors: Zhongwei [email protected] Foi
anisoms
anisoms: a Python library for reading AMS dataIntroductionAGICO kappabridges write AMS (anisotropy of magnetic susceptibility) data in two formats: ASC and RAN. The first is an ASCII file formatted for easy perusal; the second is a compact binary format. Neither format is entirely straightforward to read for further processing. anisoms provides a Python library with functions to read and plot data from RAN and ASC files into Python dictionaries. As well as the main libraryanisoms, the package also contains a few short command-line scripts. These scripts demonstrate the usage of the anisoms API, as well as being potentially useful in their own right.Documentation for anisoms is available onreadthedocs.AMS file formatsThe file formats are described in more detail in user manuals for AGICO equipment (AGICO, 2003; AGICO, 2009).The RAN file contains a limited amount of data for each sample, most crucially the orientation tensor. In the RAN file, this tensor is given only in the geographic co-ordinate system (not, as might be expected, in the "raw" specimen co-ordinate system). A RAN file is sometimes used in conjunction with a GED ("geological data") file, which contains some additional sample data such as orientation conventions and additional co-ordinate systems; currently, anisoms does not read GED files.The structure of the ASC file corresponds to the format of the data displayed on the screen during usage of the SUSAR, SUSAM, or SAFYR program, and varies slightly according to the program version and measurement settings. The ASC file contains a more extensive range of data than the RAN file, including anisotropy as both tensors and principal directions, in all the co-ordinate systems which were specified during measurement.anisoms usageThis is a brief overview; the API is fully detailed by the docstrings in the source code andon readthedocs.The functionsread_ranandread_ascread a file of the respective types and return a nested dictionary structure containing the data from the file.TheDirectionclass represents a direction in three-dimensional space, and includes a method to plot itself on an equal-area plot using the pyx graphics library.ThePrincipalDirsclass represents the three principal directions of an anisotropy tensor. It can be initialized from the directions themselves or from a tensor.Thedirections_from_ran,directions_from_asc_tensors, anddirections_from_asc_directionsfunctions read a data file and return a corresponding dictionary containing aPrincipalDirsobject for each sample in the file.Thecorrected_anisotropy_factorfunction calculates the corrected anisotropy factor (P′orPj) (Jelínek, 1981; Hrouda, 1982).Overview of scriptsams-asc-to-csvconverts AMS data from ASC format to CSV format.ams-params-from-ascprints selected parameters from an ASC file.ams-plotplots AMS directions from ASC and RAN files.ams-print-ran-tensorreads RAN files and prints their AMS tensors.ams-tensor-to-dirprints the first principal directions of supplied tensors.More detailed documentation for the scripts is available in their docstrings, in their output when run with a--helpargument, andon readthedocs.Precision considerationsIn the RAN file, the components of the orientation tensor are stored as 32-bit floating point numbers, which have a precision of around 7 significant figures. In the ASC file, they are given as decimals with 5 significant figures of precision. 
So, for maximal precision, the tensors should be read from the RAN file; since the RAN file only gives tensors in the geographic co-ordinate system, they may have to be rotated into the desired co-ordinate system after reading.anisomscurrently focuses on data reading, and does not provide functions for these rotations, but it does provide a function for converting tensors to principal directions.When obtaining principal directions solely from an ASC file, the most precise method is to read directly the directions stored there, rather than reading the tensor and calculating directions from it. I have confirmed this by comparing both methods with the directions calculated from the high-precision tensor in the corresponding RAN file. The principal directions stored in the ASC file are presumably calculated directly from the full-precision floats. Calculating principal directions from the GED tensor is still more precise than reading the directions from the ASC file, since the latter are rounded to the nearest degree.LicenseCopyright 2019 Pontus Lurcock; released under theGNU General Public License, version 3.0ReferencesAGICO, 2003.KLY-3 / KLY-3S / CS-3 / CS-L / CS-23 user’s guide, Brno, Czech Republic: Advanced Geoscience Instruments Co.https://www.agico.com/downloads/documents/manuals/kly3-man.pdfAGICO, 2009.MFK1-FA / CS4 / CSL, MFK1-A / CS4 / CSL, MFK1-FB, MFK1-B user’s guide4th ed., Brno, Czech Republic: Advanced Geoscience Instruments Co.https://www.agico.com/downloads/documents/manuals/mfk1-man.pdfHrouda, F., 1982. Magnetic anisotropy of rocks and its application in geology and geophysics.Geophysical Surveys, 5, pp.37–82.Jelínek, V., 1981. Characterization of the magnetic fabric of rocks.Tectonophysics, 79, pp.T63–T67.
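To tie the API overview above together, here is a minimal sketch of reading an AGICO RAN file and printing per-sample principal directions; the function names come from this description, but the exact argument and return forms are assumptions, and the file path is a placeholder.

```python
import anisoms

# Read the raw nested-dictionary data from a RAN file (path is a placeholder).
data = anisoms.read_ran("samples.ran")

# Get a PrincipalDirs object for each sample in the same file.
# As noted above, RAN tensors are stored in the geographic co-ordinate system.
principal = anisoms.directions_from_ran("samples.ran")
for sample_name, pdirs in principal.items():
    print(sample_name, pdirs)
```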
anisotropic-distance-transform
anisotropic-distance-transform

A 2D adaptation of Jan Hosang's gdt package, with anisotropic capabilities.

Can deal with inf.
Has runtime O(n).
anis-package
Hello, friends
anisq
No description available on PyPI.
anit
AnitProvided by Animite Software FoundationCreated by Shaurya Pratap SinghFor Python3 and aboveAnit 0.0.9Installationtype the following command in command linepip install anitthen in your .py file type this,fromanitimport*EmailFirst we have to create a message class,msg=Message('Hello World',"""May the force be with you.""")You can also pass in a html file,msg=Message('Hello World','something.html')Then, create a new class,mailer=Mailer('email','password')to send it, do this,mailer.send_message('Your message class here.')mailer.send_to('recievers email')Done!Mailer and Message mixin classes:classMyMessage(MessageMixin):subject=Nonemessage=NoneclassMyMailer(MailerMixin):email=Nonepwd=Noneperson='the person who youare sending to'You can also use the terminal command:anit -i mailerOpen browser url,open_browser_url('why do we love star wars?')Terminal command for this:anit -i openCreating package structurecreate_package('name of package')Terminal command for this:anit -i packageAnit variablesprint(anit.OS)# returns your osprint(anit.VERSION)#returns anit versionprint(anit.DATE)# returns the dateprint(anit.TIMESTAMP)# returns the timestamp
anita
Analytic Tableau Proof Assistant (ANITA)

ANITA is a tool written in Python that can be used as a desktop application or in a web platform. There is a Jupyter Notebook (in Portuguese) that presents the Analytic Tableaux and ANITA concepts. The main idea is that students can write their proofs as similarly as possible to what is available in the textbooks and to what they would usually write on paper. ANITA allows students to automatically check whether a proof in the analytic tableaux is valid. If the proof is not correct, ANITA will display the errors of the proof, so students may make mistakes and learn from them.

The web interface is very easy to use and has:

- An area for editing the proof in plain text. The students should write a proof in Fitch style (see AT Rules).
- A message area to display whether the proof is valid, the countermodel, or the errors in the proof.
- And the following links:
  - Check, to check the correctness of the proof;
  - Manual, to view a document with the inference rules and examples;
  - LaTeX, to generate the LaTeX code of the trees from a valid proof. Use the qtree package in your LaTeX code;
  - LaTeX in Overleaf, to open the proof source code directly in Overleaf, a collaborative platform for editing LaTeX.

To facilitate the writing of the proofs, we made the following conventions in ANITA:

- Atoms are written in capital letters (e.g. A, B, H(x));
- Variables are written with the first letter in lowercase, followed by letters and numbers (e.g. x, x0, xP0);
- Formulas with $\forall x$ and $\exists x$ are represented by $Ax$ and $Ex$ ('A' and 'E' followed by the variable x). For instance, Ax(H(x)->M(x)) represents $\forall x~(H(x)\rightarrow M(x))$. The table below shows the equivalence of logic symbols and those used in ANITA.
- The order of precedence of quantifiers and logical connectives is defined by $\lnot,\forall,\exists,\wedge,\vee,\rightarrow$ with right alignment. For example:
  - Formula ~A&B -> C represents the formula $(((\lnot A)\land B)\rightarrow C)$;
  - The theorem ~A|B |- A->C represents $((\lnot A)\vee B)\vdash (A\rightarrow C)$.
- Each inference rule is named by its respective connective and the truth value of the signed formula. For example, &T represents the conjunction rule when the formula is true. Optionally, the rule name can be omitted.
- The justifications for the premises and the conclusion use the reserved words pre and conclusion, respectively.

Symbol          LaTeX           ANITA
$\lnot$         \lnot           ~
$\land$         \land           &
$\lor$          \lor            |
$\rightarrow$   \rightarrow     ->
$\forall x$     \forall x       Ax
$\exists x$     \exists x       Ex
$\bot$          \bot            @
branch          [. ]            { }
$\vdash$        \vdash          |-

License

ANITA is available under the MIT License.

Requirements:

- rply 0.7.8 package
- ipywidgets

Install

To install ANITA from GitHub, run the following command:

pip install git+https://github.com/daviromero/anita.git

To install ANITA from the PyPI repository, run the following command:

pip install anita

ANITA

You can run ANITA from the command line:

anita -l "en" -i [input_file]

ANITA in Jupyter Notebook

You can run ANITA in a Jupyter Notebook:

from anita.anita_en_gui import anita
anita()

ANITA in Voila

You can run ANITA in a Voilà:

voila anita_en.ipynb

ANITA in your code

You can import ANITA in your code (basic usage):

from anita.anita_en_fo import check_proof
print(check_proof('''1. T A|B pre
2. T A->C pre
3. T B->C pre
4. F C conclusion
5. { T A 1
6. { F A 2
7. @ 5,6}
8. { T C 2
9. @ 8,4}}
10.{ T B 1
11. { F B 3
12. @ 10,11}
13. { T C 3
14. @ 13,4}}'''))

A Portuguese Version

We have a Portuguese version. In the Portuguese ANITA syntax, use conclusao instead of conclusion.

Run ANITA with:

anita -i [input_file]

Jupyter Notebook with:

from anita.anita_pt_gui import anita
anita()

Voilà with:

voila anita_pt.ipynb

You can import ANITA in your code (basic usage):

from anita.anita_pt_fo import check_proof
print(check_proof('''1. T A|B pre
2. T A->C pre
3. T B->C pre
4. F C conclusao
5. { T A 1
6. { F A 2
7. @ 5,6}
8. { T C 2
9. @ 8,4}}
10.{ T B 1
11. { F B 3
12. @ 10,11}
13. { T C 3
14. @ 13,4}}'''))
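If you keep your proofs in plain-text files, the same check_proof function can be fed from disk. A minimal sketch, assuming the file (proof.txt is a placeholder name) contains a proof written in the syntax shown above:

from anita.anita_en_fo import check_proof

# Read a proof written in the plain-text ANITA syntax described above.
with open('proof.txt', encoding='utf-8') as f:
    proof = f.read()

# Print the same kind of report the inline examples above produce.
print(check_proof(proof))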
anitakuapi
anitakuapi

Python wrapper for the anitakuapi.

Base URL: anitakuapi-87ab1094388c.herokuapp.com
Documentation: Click Here
Updates Channel: zz
Support Group: zzx

Installation

Install the latest version using:

$ pip install -U anitakuapi

Use the same command to update as well.

Usage

To get the latest releases from Gogo anime:

from anitakuapi.api import AnitakuapiX

GogoApi = AnitakuapiX.Gogo
print(GogoApi.latest())
anita-maheen-338-package
No description available on PyPI.
anitejb
pip install anitejb

Table of Contents

- Overview
- Usage
- License
- Contributions
  - Approved List of Contributors

Overview

A collection of custom Python tools and functions developed for me, by me, with <3.

Full disclosure: the functionality provided in this package may exist in various other forms across the depths of the internet. Everything here stems from a personal experience when I needed Python features that were not built in or importable. I thought it would be nice to have all of these things in one place, so here it is.

Usage

The package is live on PyPI. I plan to use it, and you could too! All you need are three magic words:

pip install anitejb

That being said, keep in mind that this package was developed solely for my own use, and quite literally everything could change at any moment with zero prior notice.

Unless (and even if) you are me, I would not recommend using this on large-scale and/or somewhat important projects. You wouldn't want this to happen.

License

Copyright (c) 2021 Anitej Biradar. Released under the MIT License. See LICENSE for details.

Contributions

Due to the custom nature of this project, in order to contribute code, you must be on the approved list of contributors (shown below).

Things that don't require approval: questions, comments, concerns, thoughts, opinions, and cute animal pictures. Reach out to anitej@biradar.com!

Approved List of Contributors

- Anitej Biradar

Requests to be added to this list will only be considered on the 20th of April every 48 years. The 2021 cycle has concluded. Details regarding the next cycle will be published closer to the event.
anitomy.py
No description available on PyPI.
anitopy
Anitopy is a Python library for parsing anime video filenames. It's simple to use and it's based on the C++ library Anitomy.

Example

The following filename...

[TaigaSubs]_Toradora!_(2008)_-_01v2_-_Tiger_and_Dragon_[1280x720_H.264_FLAC][1234ABCD].mkv

...can be parsed using the following code:

>>> import anitopy
>>> anitopy.parse('[TaigaSubs]_Toradora!_(2008)_-_01v2_-_Tiger_and_Dragon_[1280x720_H.264_FLAC][1234ABCD].mkv')
{'anime_title': 'Toradora!',
 'anime_year': '2008',
 'audio_term': 'FLAC',
 'episode_number': '01',
 'episode_title': 'Tiger and Dragon',
 'file_checksum': '1234ABCD',
 'file_extension': 'mkv',
 'file_name': '[TaigaSubs]_Toradora!_(2008)_-_01v2_-_Tiger_and_Dragon_[1280x720_H.264_FLAC][1234ABCD].mkv',
 'release_group': 'TaigaSubs',
 'release_version': '2',
 'video_resolution': '1280x720',
 'video_term': 'H.264'}

The parse function receives a string and returns a dictionary containing all found elements. It can also receive parsing options, which are explained below.

Installation

To install Anitopy, simply use pip:

$ pip install anitopy

Or download the source code and run the following inside the source code's folder:

$ python setup.py install

Options

The parse function can receive the options parameter. E.g.:

>>> import anitopy
>>> anitopy_options = {'allowed_delimiters': ' '}
>>> anitopy.parse('DRAMAtical Murder Episode 1 - Data_01_Login', options=anitopy_options)
{'anime_title': 'DRAMAtical Murder',
 'episode_number': '1',
 'episode_title': 'Data_01_Login',
 'file_name': 'DRAMAtical Murder Episode 1 - Data_01_Login'}

If the default options had been used, the parser would have considered _ as a delimiter and replaced it with a space in the episode title.

The options contain the following attributes:

Attribute name          Type              Description                                                      Default value
allowed_delimiters      string            The characters to be considered as delimiters.                  ' _.&+,|'
ignored_strings         list of strings   A list of strings to be removed from the filename during parse. []
parse_episode_number    boolean           Whether the episode number should be parsed.                    True
parse_episode_title     boolean           Whether the episode title should be parsed.                     True
parse_file_extension    boolean           Whether the file extension should be parsed.                    True
parse_release_group     boolean           Whether the release group should be parsed.                     True
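If you need to process a whole batch of releases, parse can simply be applied per filename. A small sketch that groups episode numbers by title; the filename list is a placeholder, and the .get() calls hedge against elements that a particular filename may not contain:

import anitopy
from collections import defaultdict

filenames = [
    '[TaigaSubs]_Toradora!_(2008)_-_01v2_-_Tiger_and_Dragon_[1280x720_H.264_FLAC][1234ABCD].mkv',
    # ...add the rest of your files here
]

episodes_by_title = defaultdict(list)
for name in filenames:
    elements = anitopy.parse(name)
    # Not every filename yields every element, hence the .get() calls.
    title = elements.get('anime_title', 'Unknown')
    episodes_by_title[title].append(elements.get('episode_number'))

for title, episodes in episodes_by_title.items():
    print(title, episodes)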
anitrack
A package that helps to get information about any anime in any language.
anitracker
# AniTracker

This tool is designed to help watch, sync, and manage anime with tools such as AniList. As of now this is a VERY early MVP: it only works on Linux, and only if you have mpv installed. This will be expanded in future updates.
anitube-lib
anitube-ua-lib

Python library for working with AniTube, an anime resource.

Installation

pip install anitube-lib

Usage

# Import the library:
from anitube_ua_lib import AniTube

# Initialize:
anitube = AniTube()

# Search for anime:
results = anitube.search_anime("naruto", limit=10)

# Get anime details:
anime = results[0]
print(anime.name)
print(anime.description)
print(anime.rating)

# Get anime screenshots:
screens = anime.get_big_screens()
# or
screens = anime.get_small_screens()

# Get anime playlist:
playlist = anime.get_playlist()
print(playlist.json)

# Get anime list by filters:
anime_list = anitube.get_anime(cat=[6, 22], year=[2010, 2020], sort='rating')

Description

anitube-ua-lib is a Python library for convenient work with the AniTube anime resource.

It allows you to:

- Search anime
- Get anime details like description, rating, categories, etc.
- Get anime screenshots
- Get anime playlists (video links)
- Get anime lists by filters: category, release year, rating, etc.
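Building on the usage above, one could keep only the better-rated hits from a search. This is only a sketch: it assumes the search results are iterable and that rating can be converted to a number, neither of which is stated explicitly in the documentation above.

from anitube_ua_lib import AniTube

anitube = AniTube()
results = anitube.search_anime("naruto", limit=10)

for anime in results:
    # float() is an assumption about the rating field's format.
    try:
        rating = float(anime.rating)
    except (TypeError, ValueError):
        continue
    if rating >= 7:
        print(anime.name, rating)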
anitube-simple-notification
Anitube Simple Notification

Anitube Simple Notification is an application that notifies you when content is updated on the website (anitube.in.ua).

Install

Ensure that Python and pip are installed:

python --version
python -m pip -V
# or
python3 --version
python3 -m pip -V

Install the package:

pip install anitube-simple-notification
# or
pip3 install anitube-simple-notification

Usage

In the application folder, create a file named config.toml. If a value is invalid, an error will be shown. If a value is missing or invalid, the default is used.

Example of a config file with all options:

POSTERS = true
WAITING_PERIOD = 3600
URLS = [
    "https://anitube.in.ua/4110-chainsaw-man.html",
    "https://anitube.in.ua/4010-overlord-iv.html",
    "https://anitube.in.ua/4097-mob-varyat-100-3-sezon.html",
    "https://anitube.in.ua/4087-spy-x-family-part-2.html",
]

The last comma of the URLs list can be omitted.

Run the program with one of the commands:

anitube-simple-notification
asn

Author

Kostiantyn Klochko (c) 2022-2023

Donation

Monero: 8BCZr3LaciDZUwNUbC8M5gNZTtnPKoT9cMH95YcBoo2k8sg4qaxejYL4Qvp6V21ViqMHj5sHLiuRwhMYxHTVW1HUNAawV6c

License

Under the GNU GPL v3 license
anitube-ua-lib
No description available on PyPI.
anitub-lib
anitube-ua-lib

Python library for working with AniTube, an anime resource.

Installation

pip install anitube-lib

Usage

# Import the library:
from anitube_ua_lib import AniTube

# Initialize:
anitube = AniTube()

# Search for anime:
results = anitube.search_anime("naruto", limit=10)

# Get anime details:
anime = results[0]
print(anime.name)
print(anime.description)
print(anime.rating)

# Get anime screenshots:
screens = anime.get_big_screens()
# or
screens = anime.get_small_screens()

# Get anime playlist:
playlist = anime.get_playlist()
print(playlist.json)

# Get anime list by filters:
anime_list = anitube.get_anime(cat=[6, 22], year=[2010, 2020], sort='rating')

Description

anitube-ua-lib is a Python library for convenient work with the AniTube anime resource.

It allows you to:

- Search anime
- Get anime details like description, rating, categories, etc.
- Get anime screenshots
- Get anime playlists (video links)
- Get anime lists by filters: category, release year, rating, etc.
anitui
ani-tui

A TUI written in Python using Textual to navigate local anime files.

Showcase

Getting Started

Prerequisites

Python 3.9+

Install

To use ani-tui, simply install the Python package:

Unix

pip3 install anitui

Windows

py -m pip install anitui

The TUI can then be run by simply typing anitui in the shell.

Connecting with VLC-Ani-Discord

ani-tui is capable of launching the vlc-ani-discord script along with your chosen media to display Discord Rich Presence and automatically update your Anilist episode progress. Note that this is only applicable if you are using VLC as your media player. The setup for this is a bit convoluted at the moment, so it is turned off by default. However, if you would like to use this feature, here's how to do it! :)

1. Find out where ani-tui was installed:

   pip show anitui

   Here you should see something of the form: Location: {PATH}

2. Go to the directory:

   cd {PATH}/script

3. Here you will find a README.md with instructions on setting up vlc-ani-discord. Complete the setup and then move on to Step 4.

4. Modify the ani-tui config file ~/.config/anitui/config.json in your chosen editor. Simply change "script": false to "script": true.

Then you're done! I will be improving this process to be more straightforward in the future.

Notes

Still in development, expect bugs! :)

License

MIT
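If you'd rather flip that flag from a script than by hand, a small stdlib sketch like the one below could do it. It assumes the config file already exists at the path mentioned above and that "script" is a top-level key, as described.

import json
from pathlib import Path

config_path = Path.home() / ".config" / "anitui" / "config.json"

# Load the existing config, enable the vlc-ani-discord integration, and write it back.
config = json.loads(config_path.read_text())
config["script"] = True
config_path.write_text(json.dumps(config, indent=2))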
anitvam-fp
No description available on PyPI.
anity
Anity CLI

anity.io enables developers to monitor their APIs using system tests written in Python rather than traditional 'ping' tests. This CLI is used to deploy and invoke test suites for your monitors.

Installation

pip install anity

Usage

Deploy Test Suite

To deploy your test suite, you'll first need to create a new monitor at anity.io, where you'll be given an API key for the monitor.

Package up your test suite with zip. Anity runs tests using a custom implementation of unittest discover, so anything that works with unittest will work in Anity.

zip -r mysuite.zip mysuite/

Then deploy your test suite to your monitor with:

anity update PATH API_KEY

For example:

anity update mysuite.zip 2a91-85ba-4ceb

Help

If you have any problems getting set up, please contact us at support@anity.io and we'll respond as soon as possible.
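Because anything that works with unittest works in Anity, a monitor suite can be as small as a single test module. A minimal sketch of what mysuite/ might contain; the file name and URL are placeholders, not part of the Anity docs above.

# mysuite/test_health.py (hypothetical file name)
import unittest
import urllib.request

class TestApiHealth(unittest.TestCase):
    def test_health_endpoint_returns_200(self):
        # Replace the placeholder URL with your own service's health-check endpoint.
        with urllib.request.urlopen("https://example.com/health", timeout=10) as response:
            self.assertEqual(response.status, 200)

if __name__ == "__main__":
    unittest.main()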
anitya
Anitya

Anitya is a release monitoring project. It provides a user-friendly interface to add, edit, or browse projects. A cron job can be configured to regularly scan for new releases of projects. When Anitya discovers a new release for a project, it publishes a RabbitMQ message via fedora messaging. This makes it easy to integrate with Anitya and perform actions when a new release is created for a project. For example, the Fedora project runs a service called the-new-hotness, which files a Bugzilla bug against a package when the upstream project makes a new release.

For more information, check out the documentation!

Development

For details on how to contribute, check out the contribution guide.
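To give a feel for what consuming those messages looks like, here is a rough sketch using the fedora-messaging Python API. The topic filter and the body keys are assumptions based on Anitya's published message names, not something the text above guarantees.

from fedora_messaging import api

def on_message(message):
    # React only to Anitya's version-update messages; the topic substring is an assumption.
    if "anitya.project.version.update" not in message.topic:
        return
    body = message.body or {}
    # The exact body layout is defined by Anitya's message schemas (the anitya-schema package);
    # the keys used here are assumptions.
    project = body.get("project", {})
    print("New release detected for project:", project.get("name"))

if __name__ == "__main__":
    # Uses whatever broker fedora-messaging is configured for (e.g. via FEDORA_MESSAGING_CONF).
    api.consume(on_message)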
anitya-schema
Anitya Message Schema

JSON schema definitions for messages published by Anitya.

Documentation for the Anitya Message Schema can be found here.

See http://json-schema.org/ for documentation on the schema format. See https://fedora-messaging.readthedocs.io/en/latest/messages.html for documentation on fedora-messaging.
anitya-telegram
anitya-telegram

Telegram gateway for the Anitya release monitoring system.

Getting started

Before running the application, install the dependencies:

pip install -r requirements.txt

Then create a config.toml configuration file from config.toml.example. The gateway is configured via the consumer_config sections.

The top-level parameters are api_key and chat_ids:

- api_key is a Telegram Bot API key; it can also be configured via the ANITYA_TG_BOT_KEY environment variable;
- chat_ids is the list of chat identifiers that notifications are sent to.

Projects section

consumer_config.projects consists of:

- id is a project ID from Anitya;
- versions is a version filter that matches versions by string prefix;
- allow_nonstable: set to true if you also want to receive notifications about non-stable releases.

To run the gateway, execute the command below:

fedora-messaging --conf config.toml consume --callback-file anitya_tg_gw.py:TelegramForwardConsumer

The configuration file can also be set via the FEDORA_MESSAGING_CONF environment variable.

Development

Create a virtual environment:

python -m venv .venv

Venv activation / deactivation on Windows:

.venv\Scripts\Activate.ps1
deactivate

To run the tests, execute the command below:

python -m unittest discover -v anitya_telegram\tests\

Links

- Release monitoring
- Integrating with Anitya
- Quick Start
- Consumers
- Using the API
- fedora-messaging
- Datagrepper - anitya.project.version.update.v2
aniwrap
aniwrap

An asynchronous wrapper for the MyAnimeList V2 API. Aniwrap aims to make it easier to interact with the MAL API.

Disclaimer

The library is still in Alpha, and the features may change at any time.

Installation

Python version 3.10 or greater is required to use aniwrap.

pip install aniwrap

Features

- Search anime and manga by name
- Fetch anime and manga details by ID
- Fetch seasonal anime
- Fetch anime and manga rankings
- Fetch forum boards and discussions
- Fetch and manipulate a user's anime and manga lists using the user's access token

Usage

Example of using anime and manga related actions:

from aniwrap import Client

client = Client("your MAL client Id")

anime_search_result = await client.anime.search_anime("attack on titan")
manga_search_result = await client.manga.search_manga("attack on titan")

if anime_search_result.is_success:
    anime_results = anime_search_result.value

if anime_search_result.is_error:
    error = anime_search_result.error

if manga_search_result.is_success:
    manga_results = manga_search_result.value

if manga_search_result.is_error:
    error = manga_search_result.error

await client.close()

Example of using user related actions:

from aniwrap import UserClient

user_client = UserClient("user's access token")

anime_list_result = await user_client.user.get_anime_list("user's username")
manga_list_result = await user_client.user.get_manga_list("user's username")

if anime_list_result.is_success:
    anime_list = anime_list_result.value

if anime_list_result.is_error:
    error = anime_list_result.error

if manga_list_result.is_success:
    manga_list = manga_list_result.value

if manga_list_result.is_error:
    error = manga_list_result.error

await user_client.close()

You can find information on generating the Client Id and the user's access token used in the above examples in the MAL documentation.

Issues

If you're facing any problems with the library, please open an issue here.

Credits

Credits to Jonxslays's wom.py. A lot of stuff is copied (er, inspired) from wom.py.

License

aniwrap is licensed under the MIT License.
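Since every call returns the same result shape (is_success / is_error / value / error), you may prefer a small helper that unwraps it once instead of branching at every call site. A sketch under that assumption; only the unwrap helper and the asyncio scaffolding are new, while the client calls mirror the examples above.

import asyncio

from aniwrap import Client

def unwrap(result):
    # Raise on failure instead of checking is_success at every call site.
    if result.is_error:
        raise RuntimeError(f"aniwrap call failed: {result.error}")
    return result.value

async def main():
    client = Client("your MAL client Id")
    try:
        anime_results = unwrap(await client.anime.search_anime("attack on titan"))
        manga_results = unwrap(await client.manga.search_manga("attack on titan"))
        print(anime_results)
        print(manga_results)
    finally:
        await client.close()

asyncio.run(main())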