package | package-description
---|---|
allure-additions
|
No description available on PyPI.
|
allure-api-client
|
allure-api-client

The allure-api-client library is a Python package designed to facilitate API testing and reporting in Allure. Built on top of the httpx library, it provides both synchronous and asynchronous clients for making HTTP requests, complete with automatic request and response logging for Allure reports. The library also includes utilities for bearer token authentication and status code verification.

Features:
- Synchronous and Asynchronous API Clients
- Automatic logging of requests and responses in Allure reports
- Bearer Token Authentication
- Status Code Verification

Installation

To install allure-api-client, you will need Python installed on your system. You can install the library using pip:

```
pip install allure-api-client
```

Usage

Synchronous API Client:

```python
from allure_api_client import APIClient, BearerToken

# Initialize the client
client = APIClient(
    base_url="https://api.example.com",
    auth=BearerToken("YOUR_ACCESS_TOKEN"),  # Optional
    verify=False,  # Optional
)

# Send a request
response = client.send_request(method="GET", path="/endpoint")
```

Asynchronous API Client:

```python
from allure_api_client import AsyncAPIClient, BearerToken

# Initialize the client
async with AsyncAPIClient(
    base_url="https://api.example.com",
    auth=BearerToken("YOUR_ACCESS_TOKEN"),  # Optional
    verify=False,  # Optional
) as client:
    # Send a request
    response = await client.send_request(method="GET", path="/endpoint")
```

Bearer Token Authentication:

```python
from allure_api_client import BearerToken

# Initialize the bearer token
auth = BearerToken("YOUR_ACCESS_TOKEN")

# Use the auth with APIClient or AsyncAPIClient
```

Checking Status Code:

```python
from allure_api_client import check_status_code

# After receiving a response
check_status_code(response, 200)  # Verifies that the status code is 200
```

Contributing

Contributions to allure-api-client are welcome! Please follow the standard procedures to submit issues, pull requests, etc.

License

allure-api-client is released under the MIT License.
|
allure-behave
|
Allure Behave Formatter

Source | Documentation | Gitter

Installation and Usage:

```
$ pip install allure-behave
$ behave -f allure_behave.formatter:AllureFormatter -o %allure_result_folder% ./features
$ allure serve %allure_result_folder%
```

Support for behave parallel

The current implementation of behave-parallel makes some Allure features inaccessible, so in this case you need to patch your environment.py file instead of using the formatter. If you don't use environment.py, just create an empty one that calls allure as in the example below:

```python
from allure_behave.hooks import allure_report

### your code

allure_report("path/to/result/dir")
```

Usage examples: see usage examples here.
|
allure-combine
|
Allure single html file builder

Tool to build an allure-generated folder into a single HTML file.

What does it do?

After being run by a console command, or called from Python code, it:
- Reads the contents of the allure-generated folder
- Creates a server.js file, which contains all the data files and code to start a fake XHR server
- Patches the index.html file so that it uses server.js and sinon-9.2.4.js (taken from here) and can be opened in any browser without Chrome's --allow-file-access-from-files flag
- Creates a file complete.html with all files built in

Requirements:
- Python 3.6+
- You need to have your allure report folder generated (allure generate './some/path/to/allure/generated/folder')

Installation

Install with pip:

```
pip install allure-combine
```

or install manually:

```
git clone git@github.com:MihanEntalpo/allure-single-html-file.git
cd allure-single-html-file
pip install -r ./requirements.txt  # actually the only requirement is BeautifulSoup
python setup.py install
```

Run as console script

If you have cloned the repo rather than installing the module via pip, replace allure-combine with python ./allure_combine/combine.py in the following commands.

Create complete.html inside the allure folder itself:

```
allure-combine ./some/path/to/allure/generated/folder
```

Create complete.html inside a specified folder:

```
allure-combine ./some/path/to/allure/generated/folder --dest /tmp
```

Ensure that the specified dest folder exists (create it if not):

```
allure-combine ./some/path/to/allure/generated/folder --dest /tmp/allure-2022-05-05_12-20-01/result --auto-create-folders
```

Remove sinon.js and server.js from the allure folder after complete.html is generated:

```
allure-combine ./some/path/to/allure/generated/folder --remove-temp-files
```

If HTML/JSON files that should be UTF-8 have broken encoding, ignore the errors:

```
allure-combine ./some/path/to/allure/generated/folder --ignore-utf8-errors
```

Import and use in Python code

```python
from allure_combine import combine_allure

# 1) Create complete.html in the allure-generated folder
combine_allure("./some/path/to/allure/generated/folder")

# 2) Create complete.html in a specified folder
combine_allure("./some/path/to/allure/generated/folder", dest_folder="/tmp")

# 3) Make sure that the dest folder exists, create it if not
combine_allure(
    "./some/path/to/allure/generated/folder",
    dest_folder="/tmp/allure-2022-05-05_12-20-01/result",
    auto_create_folders=True,
)

# 4) Remove sinon.js and server.js from the allure folder after complete.html is generated
combine_allure("./some/path/to/allure/generated/folder", remove_temp_files=True)

# 5) If HTML/JSON files that should be UTF-8 have broken encoding, ignore the errors
combine_allure("./some/path/to/allure/generated/folder", ignore_utf8_errors=True)
```

TODO
- Functionality to open an image or video in a new browser tab doesn't work yet.
- Need functionality to return the combined file as a string instead of saving it to a file directly.
- Functionality to not change the source files at all, to work on a read-only filesystem.

Ports to other languages:
- JavaScript port: https://github.com/aruiz-caritsqa/allure-single-html-file-js
|
allure-custom
|
allure-custom

Customized Allure reports. Currently supported customizations:
- logo
- title bar text
- sidebar color
- default language

Documentation: https://funny-dream.github.io/allure-custom
Source Code: https://github.com/funny-dream/allure-custom

Installation:

```
pip install allure-custom
```

Configuration:

```python
from allure_custom.conf import setting

# Title of the test report
setting.html_title = "funny_test"

# Name of the test report
setting.report_name = "Funny_Test"

# Default language of the test report:
# en: English, ru: Русский, zh: 中文, de: Deutsch, nl: Nederlands,
# he: Hebrew, br: Brazil, pl: Polski, ja: 日本語, es: Español,
# kr: 한국어, fr: Français, az: Azərbaycanca
setting.report_language = "zh"

# Logo image for the top-left corner (use an absolute path)
setting.logo_png = "/home/xxx/logo.png"

# HTML favicon (use an absolute path)
setting.favicon_ico = "/home/xxx/favicon.ico"
```

Usage:

```python
from allure_custom import AllureCustom

# Generate an HTML test report;
# ~/Desktop/report holds allure's json/txt result files
AllureCustom.gen(report_path="~/Desktop/report", generate_allure_html="~/Desktop/html")

# Open the HTML test report
AllureCustom.open(generate_allure_html="~/Desktop/html")

# Serve the report online directly; open the link printed to the
# terminal in a browser
AllureCustom.serve(report_path="~/Desktop/report")
```
|
allure-db-client
|
allure-db-client

The allure-db-client is a Python library designed to facilitate interaction with PostgreSQL databases. It leverages the psycopg library for database connectivity and integrates with the Allure framework for enhanced logging and debugging capabilities. This library simplifies executing SQL queries and retrieving results in various formats, making it particularly useful in testing and debugging environments.

Features:
- Easy connection to PostgreSQL databases using a connection string.
- Methods to fetch query results as lists, dictionaries, or individual values.
- Integration with Allure for logging SQL queries and results.
- Context management for automatic opening and closing of database connections.
- Safe parameter substitution in SQL queries to prevent SQL injection.

Installation

To install allure-db-client, you will need to have Python installed on your system. The library can be installed using pip:

```
pip install allure-db-client
```

Usage

Creating a Client Instance

First, import DBClient from allure_db_client and create an instance with your database connection string:

```python
from allure_db_client import DBClient

db_client = DBClient(connection_string="your_connection_string")
```

Executing Queries

You can execute various types of SQL queries using the provided methods:
- get_list(query, params): fetches the first column of each row as a list.
- get_dict(query, params): fetches the first two columns of each row as a dictionary.
- select_all(query, params): executes a query and fetches all rows.
- get_first_value(query, params): fetches the first column of the first row.
- get_first_row(query, params): fetches the first row.
- execute(query, params): executes a non-returning SQL command (e.g., INSERT, UPDATE).

Context Management

The DBClient can be used as a context manager to automatically handle database connections:

```python
with DBClient(connection_string="your_connection_string") as db_client:
    ...  # Your database operations here
```

Examples

Here's an example of using DBClient to fetch user data from a users table:

```python
with DBClient(connection_string="your_connection_string") as db_client:
    users = db_client.select_all("SELECT * FROM users")
    for user in users:
        print(user)
```

Contributing

Contributions to allure-db-client are welcome! Please read our contributing guidelines for details on how to submit pull requests, report issues, or request features.

License

allure-db-client is released under the MIT License. See the LICENSE file for more details.
|
allure-docx
|
allure-docx

DOCX and PDF report generation based on allure-generated result files.

About

This package is developed in collaboration by Typhoon HIL and the Fraunhofer Institute for Solar Energy Systems ISE. Users are welcome to test and report problems and suggestions by creating an issue. Note that the package is not officially supported by Typhoon HIL and is still in the alpha phase.

Limitations

So far the generator has only been tested with allure results from Python (allure-pytest) and Java (allure-junit5). It therefore expects a folder with .json files generated by the allure plugins. Check the issues on the repository to see other current limitations.

Questions

Feel free to open an issue ticket with any problems or questions. If possible, please provide your test result files so we can debug more efficiently.

Installation

The library uses matplotlib, which depends on the Visual C++ Redistributables. If they are not already installed, you can get them here: https://www.microsoft.com/en-in/download/details.aspx?id=48145.

Windows

We publish Windows standalone executable files. With them you can use allure-docx without having to install anything else (no Python installation, etc.). You can download them at https://github.com/typhoon-hil/allure-docx/releases. Then you can use the executable directly (possibly adding it to a folder on PATH).

Linux and Windows

You can install from source using:

```
pip install git+https://github.com/typhoon-hil/allure-docx.git
```

Then you should be able to use the allure-docx executable just as you use pip.

Usage

Check usage by running allure-docx --help. You can generate the docx file by running allure-docx ALLUREDIR filename.docx.

Configuration

Use the --config_tag/--config_file option to control which information should be added. Use the preset names or give a path to your custom .ini file. When using a custom configuration, the standard configuration is used as the base and is overwritten for each attribute inside the custom file. The standard file looks like this:

```ini
[info]
description = fbpsu
details = fbpsu
trace = fbpsu
parameters = fbpsu
links = fbpsu
setup = fbpsu
body = fbpsu
teardown = fbpsu
duration = fbpsu
attachments = fbpsu

[labels]
severity = fbpsu

[cover]
title = Allure

[details]
```

The report will display tests with the specified field (info and labels sections) if the corresponding character is included. The mapping is as follows:

| failed | broken | passed | skipped | unknown |
|---|---|---|---|---|
| f | b | p | s | u |

There is some additional information you can add to your report, including the following.

Add custom labels

The allure engine allows custom labels inside the test code. These are not printed by default. You can add a custom label under the [labels] section to include it in the report. It will appear inside the label table directly below the test heading. Example:

```ini
[labels]
some_label = fbpsu
```

Add a company name to the cover

You can include a company name on the cover of the report by setting the company_name variable under the [cover] section. Example:

```ini
[cover]
company = Some company
```

Add custom details to the Test Details section

You can add custom variables inside the [details] section. These will be printed in the Test Details section of the document. Example:

```ini
[details]
Test name = Example Test
Device under test = Example device under test
Relevant documents =
    Document abc v1.5
    Document xyz v5.4
Description = This is a test description.
```

This will be printed as the following table under Test Details:

| Test name | Example Test |
| Device under test | Example device under test |
| Relevant documents | Document abc v1.5, Document xyz v5.4 |
| Description | This is a test description. |

Putting "Device under test" into the details section also adds the value to the header.

Custom Title and Logo

You can use the --title option to customize the docx report title (or specify it inside a custom configuration file, as described above). If you want to remove the title altogether (e.g. your logo already includes the company title), you can set --title="". You can also use the --logo option with a path to a custom logo image, and the --logo-width option to adjust the width of the logo image (in centimeters).

Example invocation:

```
allure-docx --pdf --config_file=C:\myconfig.ini --logo=C:\mycompanylogo.png --logo-width=2 allure allure.docx
```

PDF

The --pdf option will search for either Word (Windows only) or soffice (LibreOffice) to generate the PDF. If both Word and soffice are present, Word will be used.

Previous versions

The previous version of the package can be found on the releases page of the plugin here.

Miscellaneous

Building the standalone executable

We use PyInstaller to create standalone executables. If you want to build an executable yourself, follow these steps:
1. Create a new virtual environment with the proper Python version (tested using Python 3, 32- and 64-bit so far).
2. Using pip, install pyinstaller and ONLY the packages needed as defined in setup.py. This prevents your executable from becoming too large with unnecessary dependencies.
3. Delete any dist folder left over from a previous PyInstaller run.
4. Run the build_exe.cmd command to run PyInstaller and create a single-file executable.

If you get the following error:

```
ImportError: This package should not be accessible on Python 3.
Either you are trying to run from the python-future src folder or your installation of python-future is corrupted.
```

run pip install --force-reinstall -U future (https://github.com/nipy/nipype/issues/2646).
|
allure-env-builder
|
Allure Environment builder

A library for creating an environment-variables file for allure-report-service.

Installation:

```
pip install allure-env-builder
```

Usage

The library has a single class, AllureEnvironmentsBuilder, which you can use to create an environment-variables file for the Allure Report service:

```python
from allure_env_builder import AllureEnvironmentsBuilder

if __name__ == '__main__':
    output = (
        AllureEnvironmentsBuilder()
        .add('key_1', 'value_1')
        .add('key_2', 'value_2')
        .build()
    )
    print(output)
```

This code prints:

```
key_1=value_1
key_2=value_2
```

To save these variables to a file, pass the build method the path to the directory where a file named environment.properties will be created.
|
allure-nose2
|
No description available on PyPI.
|
allure-publisher
|
Allure Results Publisher

A utility for publishing test results to the Allure Report Service.

Installation:

```
pip install allure-publisher
```

Running

Help:

```
python -m allure_publisher --help
```

Example run:

```
python -m allure_publisher http://localhost:1155 path/to/results
```
|
allure-pytest
|
Allure Pytest Plugin

Source | Documentation | Gitter

Installation and Usage:

```
$ pip install allure-pytest
$ py.test --alluredir=%allure_result_folder% ./tests
$ allure serve %allure_result_folder%
```

Usage examples: see usage examples here.
|
allure-pytest-acn
|
Allure Pytest Plugin

Source | Documentation | Gitter

Installation and Usage:

```
$ pip install allure-pytest-acn
$ py.test --alluredir=%allure_result_folder% ./tests
$ allure serve %allure_result_folder%
```
|
allure-pytest-bdd
|
No description available on PyPI.
|
allure-pytest-bdd-temp
|
No description available on PyPI.
|
allure-pytest-bdd-temp-fix
|
No description available on PyPI.
|
allure-pytest-logger
|
PyTest plugin that allows you to attach logs to allure-report for failures only
|
allure_pytest_master
|
This is a simple example package. You can use
[Github-flavored Markdown](https://guides.github.com/features/mastering-markdown/)
to write your content.
|
allure-pytest-mecher
|
No description available on PyPI.
|
allure-python-commons
|
No description available on PyPI.
|
allure-python-commons-acn
|
No description available on PyPI.
|
allure-python-commons-temp-fix
|
No description available on PyPI.
|
allure-python-commons-test
|
No description available on PyPI.
|
allure-robotframework
|
Allure Robot Framework Listener

Source | Documentation | Gitter

Installation and Usage:

```
$ pip install allure-robotframework
$ robot --listener allure_robotframework ./my_robot_test
```

An optional argument sets the output directory. Example:

```
$ robot --listener allure_robotframework:/set/your/path/here ./my_robot_test
```

The default output directory is output/allure.

The listener supports the robotframework-pabot library:

```
$ pabot --listener allure_robotframework ./my_robot_test
```

Advanced listener settings:
- ALLURE_MAX_STEP_MESSAGE_COUNT=5. If a robotframework step contains fewer messages than specified in this setting, each message is shown as a substep. This reduces the number of attachments in large projects. The default value is zero: all messages are displayed as attachments.

Usage examples: see usage examples here.

Contributing to allure-robotframework

This project exists thanks to all the people who contribute, especially Megafon and @skhomuti, who started and maintain allure-robotframework.
|
allure-unittest
|
allure-unittest

unittest Allure integration:

```python
import unittest
from allure_unittest import Run

class Activity(unittest.TestCase):
    def test_1(self):
        self._testMethodDoc = "test 1 == 1"
        self.assertEqual(1, 1)

    def test_2(self):
        self._testMethodDoc = "test 1 == 2"
        self.assertEqual(1, 2)

    def test_3(self):
        self._testMethodDoc = "test 2 == 3"
        self.assertEqual(2, 3, "2 not Equal 3")

if __name__ == '__main__':
    c = unittest.defaultTestLoader.loadTestsFromTestCase(Activity)
    Run('test', c, clean=True)
```
|
allurlstatus
|
pypi package for checking the status of all URLs.

Usage (Python):

```python
>>> import allurlstatus
>>> allurlstatus.getStatus(["https://www.google.com", "http://someExampleNonExistingUrl", "etc - but this should be in valid url format"])
```

Usage (bash):

```
$ getallurlstatus < file_with_url_list.txt
```

Project URL: https://github.com/jogind3r/allurlstatus
|
allusgov
|
Overview

This project attempts to map the organization of the US Federal Government by gathering and consolidating information from various directories.

Current sources:
- SAM.gov Federal Hierarchy Public API
- Federal Register Agencies API
- USA.gov A-Z Index of U.S. Government Departments and Agencies
- OPM Federal Agencies List
- CISA .gov data
- FY 2024 Federal Budget Outlays
- USASpending API Agencies & Sub-agencies

Each source is scraped (see the out directory) in raw JSON format, including fields for the organizational unit name/parent (if any), unique ID/parent-ID fields (if the names are not unique), as well as any other attribute data for that organization available from that source. A normalized name (still WIP) is then added, which corrects letter case and spacing and expands acronyms. Acronyms are selected and verified manually using data from UCSD GovSpeak and the DOD Dictionary of Military and Associated Terms, as well as manual entry when needed.

Each source is then imported into a tree and exported into the following formats for easy consumption:
- Plain text tree
- JSON flat format (with path to each element)
- JSON nested tree format
- CSV format (with embedded JSON attributes)
- Wide CSV format (with flattened attributes)
- DOT file (does not include attributes)
- GEXF graph file (includes flattened attributes)
- GraphQL graph file (includes flattened attributes)
- Cytoscape.js JSON format (includes flattened attributes)

To merge the lists, each tree is merged into a selected base tree by comparing the normalized names of each node in the tree to the names of each node in the base tree using a fuzzy matching algorithm. Similarity scores between each pair of parents are incorporated into the score to more correctly identify cases where the same or a similar office or program name is used in different organizations. Note that the fuzzy matching is imperfect: it may produce some inaccurate mappings (although most appear OK) and will certainly miss some entries which actually should be merged.

The final merged dataset is written in the above formats to the data/merged directory.

Setup

Requirements:
- Python 3.10+
- Poetry

Installation

Check out this repository, then from the repository root, install dependencies:

```
$ poetry install
```

See command line usage:

```
poetry run allusgov --help
```

Run a complete scrape and merge:

```
poetry run allusgov
```
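The parent-aware scoring idea described above can be sketched with stdlib fuzzy matching. This is a hypothetical illustration, not the allusgov implementation; the function names, example agency names, and the 0.3 parent weight are assumptions made for the sketch:

```python
# Illustrative sketch: score a candidate merge by its own name similarity
# plus the similarity of the two parents, so identical office names under
# different agencies are not merged by mistake.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Fuzzy string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(name, parent, base_name, base_parent, parent_weight=0.3):
    """Blend node-name similarity with parent-name similarity."""
    name_score = similarity(name, base_name)
    parent_score = similarity(parent or "", base_parent or "")
    return (1 - parent_weight) * name_score + parent_weight * parent_score

# The same office name under matching vs. differing parents:
s1 = match_score("Office of Administration", "Department of Energy",
                 "Office of Administration", "Department of Energy")
s2 = match_score("Office of Administration", "Department of Energy",
                 "Office of Administration", "Department of Justice")
assert s1 > s2  # the parent term breaks the tie
```

In a sketch like this, each node of an incoming tree would be matched to the base-tree node with the highest combined score above some threshold.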
|
alluvium
|
alluvium - interactive bindings visualizer for i3

alluvium guides you through your keybindings and modes in i3 as you enter them. It is heavily inspired by remontoire and reuses its comment syntax in the i3/config file to learn about keybindings, modes and their descriptions.

Screenshots: default overlay; after entering Settings mode; after entering Session mode.

Usage

Install alluvium, for example via pip install alluvium, and run it manually. It will connect to i3 and show your bindings, provided you have put remontoire annotations into your config.

To use alluvium in i3, ensure that the alluvium executable is callable by i3. If you installed it in pyenv, you might need to put a symlink to $(pyenv which alluvium) into your .local/bin, so that i3 can find the executable without knowing about pyenv.

```
## Launch // Toggle alluvium // <> ? ##
bindsym $mod+Shift+question $run alluvium --toggle

## Settings // Enter Settings Mode // <> F11 ##
mode "Settings" {
    ## Settings // Control Center // c ##
    bindsym c exec gnome-control-center; mode "default"
    ## Settings // Display // d ##
    bindsym d exec gnome-control-center display; mode "default"
    ## Settings // Wifi // w ##
    bindsym w exec gnome-control-center wifi; mode "default"
    ## Settings // Bluetooth // b ##
    bindsym b exec gnome-control-center bluetooth; mode "default"
    ## Settings // Exit Settings Mode // Escape or <Ctrl> g ##
    bindsym Escape mode "default"
    bindsym Ctrl+g mode "default"
}
bindsym $mod+F11 mode "Settings"; $run alluvium --mode Settings --quit-on-default
```

The first binding toggles the overlay, while the binding to $mod+F11 enters the settings mode with an overlay showing the bindings available in that mode. --quit-on-default makes the overlay disappear when you return to default mode. This is most helpful for rarely used modes, such as the system settings above or a mode to manage i3 itself.

Syntax

The config syntax is the same as in remontoire, that is:

```
## <Group> // <Label> // <Keys> ##
```

with an extension to define modes. If a group contains a binding with a label of "Enter ... Mode", alluvium recognizes the group as a mode. The enter binding shows up in the Modes group in the top-level display, while the other bindings of the group are shown when you enter the mode.
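An annotation line in this syntax can be picked apart with a simple regular expression. This is an illustrative sketch, not alluvium's actual parser:

```python
import re

# One capture group per field of the "## <Group> // <Label> // <Keys> ##" form
PATTERN = re.compile(
    r"^##\s*(?P<group>.+?)\s*//\s*(?P<label>.+?)\s*//\s*(?P<keys>.+?)\s*##$"
)

def parse_annotation(line: str):
    """Return the group/label/keys of a remontoire-style comment, or None."""
    m = PATTERN.match(line.strip())
    return m.groupdict() if m else None

binding = parse_annotation("## Settings // Control Center // c ##")
print(binding)  # {'group': 'Settings', 'label': 'Control Center', 'keys': 'c'}
```

A tool built on this would scan the i3 config line by line, grouping the parsed bindings by their group field and treating groups with an "Enter ... Mode" label as modes.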
|
alluxio
|
No description available on PyPI.
|
alluxiofs
|
No description available on PyPI.
|
alluxio-python-library
|
No description available on PyPI.
|
allvissone
|
# Import all Data Science Visualization packages in a single line
|
allways
|
allways

Automatically update __all__ statements.

Installation

```
pip install allways
```

Command line interface

```
allways <file1.py> <file2.py> ...
```

As a pre-commit hook

See pre-commit for instructions. Sample .pre-commit-config.yaml:

```yaml
- repo: https://github.com/tjsmart/allways
  rev: v0.0.2
  hooks:
    - id: allways
```

Note: by default the pre-commit hook will run only against __init__.py files.

What does it do?

Add __all__ statements to your python files:

```python
from ._foo import bar
from ._x import y as z

def foo():
    ...
```

becomes

```python
from ._foo import bar
from ._x import y as z

def foo():
    ...

# allways: start
__all__ = [
    "bar",
    "foo",
    "z",
]
# allways: end
```

Ignore private variables:

```python
from . import _foo
from . import bar
```

becomes

```python
from . import _foo
from . import bar

# allways: start
__all__ = [
    "bar",
]
# allways: end
```

Update pre-existing __all__ statements:

```python
from . import bar
from . import baz

# allways: start
__all__ = [
    "bar",
    "foo",
]
# allways: end
```

becomes

```python
from . import bar
from . import baz

# allways: start
__all__ = [
    "bar",
    "baz",
]
# allways: end
```

Why?

the problem

I choose to organize python libraries with:
- PEP 561 compliance
- private module files, public (sub-)package folders
- using __init__.py to define the public interface of a (sub-)package

For example, I might lay out my project as such:

```
pkg/
├── bar/
│   ├── _bar.py
│   └── __init__.py
├── _foo.py
├── __init__.py
└── py.typed
```

Contained in the files pkg/_foo.py and pkg/bar/_bar.py there will be some portion that I would like to expose publicly via pkg/__init__.py and pkg/bar/__init__.py, respectively. For example, perhaps I would like to expose a function do_something from the module file pkg/_foo.py by adding the following line to pkg/__init__.py:

```python
from ._foo import do_something
```

And here is where the problem arises! (I know... a lot of setup...)

When a user of our package turns to use do_something they will be slapped on the wrist by the type-checker.

pyright output:

```
t.py:1:18 - error: "do_something" is not exported from module "pkg"
    Import from "pkg._foo" instead (reportPrivateImportUsage)
```

mypy --strict output:

```
t.py:1: error: Module "pkg" does not explicitly export attribute "do_something"  [attr-defined]
```

And if you aren't concerned that users will have to ignore this type error, know that it gets worse! Language servers will not display any hint that the object pkg.do_something exists. 😱

For small projects maintaining this by hand is no big deal. But for large projects with several contributors this becomes a complete waste of time! 😠

the solution

According to the pyright documentation, a typed library can choose to explicitly re-export symbols by adding them to the __all__ of the corresponding module. The allways mission is to automate the process of updating __all__ statements in __init__.py files. 🤗

but also

My personal goal here is to contribute something to open source and write some more rust! 🦀
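The core transformation, deriving __all__ from a module's public top-level names, can be sketched with the stdlib ast module. This is an illustrative sketch under assumed semantics, not the allways implementation:

```python
import ast

def public_names(source: str) -> list[str]:
    """Collect public names bound by top-level imports and definitions."""
    names = set()
    for node in ast.parse(source).body:
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            for alias in node.names:
                # "import x as y" / "from m import x as y" binds the asname
                names.add(alias.asname or alias.name)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            names.add(node.name)
    # Private names (leading underscore) are excluded, as in the examples above
    return sorted(n for n in names if not n.startswith("_"))

src = "from ._foo import bar\nfrom ._x import y as z\ndef foo(): ...\n"
print(public_names(src))  # ['bar', 'foo', 'z']
```

A full tool would additionally splice the result between the `# allways: start` / `# allways: end` markers, replacing any previous block.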
|
allwhatspy
|
Discontinued. Use pip install -U allwhatspy-awp instead.
|
allwhatspy-awp
|
AllWhatsPy: Automate WhatsApp with Ease

AllWhatsPy is a Python library that offers unprecedented control over WhatsApp. Whether for professional or personal purposes, this versatile and powerful tool was designed to optimize your automation experience in the world's most popular messaging app.

Why AllWhatsPy? As our dependence on WhatsApp becomes undeniable, there is a growing need to improve the way we use this essential tool. AllWhatsPy is the answer to that demand. Inspired by notable projects such as PyWhatsapp and PyWhatKit, AllWhatsPy is the result of a journey of exploration and deep research into WhatsApp bots and APIs.

The development of AllWhatsPy involved thousands of lines of code and meticulous testing to guarantee flawless performance. The result? A versatile and flexible library that lets you do practically anything on WhatsApp, providing a new level of efficiency and automation.

Main features:
- Message encryption:
  - Caesar cipher: protect your messages with the classic Caesar cipher, ensuring that only the right recipients can read them.
  - Vigenère cipher: increase the security of your communications with the sophisticated Vigenère cipher.
- Message sending: four different ways of sending messages, each with its own specific focus.
- Message analysis: gain valuable insights from the analysis of received messages, making it easier to understand their content.
- Scheduling: schedule messages and actions to be executed at specific times, increasing the efficiency of your communications.
- Contacts: manage your contacts with ease, find users, archive conversations and mark messages as unread.
- Flexibility and customization: AllWhatsPy offers several configuration options to meet your specific needs, from JSON generation to detailed logging.

Want to know more? Visit my GitHub to explore everything AllWhatsPy offers. Automate your WhatsApp interactions and take automation efficiency to a new level!
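For illustration, the classic Caesar cipher mentioned in the feature list works by shifting each letter a fixed number of positions. This is a generic sketch of the cipher itself, not the AllWhatsPy API:

```python
# Generic Caesar-cipher sketch (not AllWhatsPy's interface): shift each
# letter by a fixed offset, leaving punctuation and spaces untouched.
def caesar(text: str, shift: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

assert caesar("Hello, WhatsApp!", 3) == "Khoor, ZkdwvDss!"
# Decryption is just the inverse shift:
assert caesar(caesar("mensagem", 5), -5) == "mensagem"
```

Note that a Caesar shift offers no real confidentiality; it is trivially broken by trying all 26 shifts, which is presumably why the library also offers the stronger Vigenère variant.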
|
allwhois
|
All Whois

A Python package for retrieving WHOIS information of domains.

Description/Features
- Python wrapper for the Linux "whois" command
- Get parsed & raw WHOIS data for a given domain
- Extracts data for any TLD
- No TLD regexes
- Dates as datetime objects or strings
- Caching of results

Requirements

Python 3.6+. Supports Python 3.6+ only; works on macOS & Linux only.

Issues

If there is something that is not parsing well, open an issue and I will look into it. Or, if you have fixed it, make a pull request and I can merge it.

Installation

```
pip install allwhois
```

Pre-requisite installation

macOS:

```
brew install whois
```

Linux:

```
apt install whois
```

Usage

```python
import sys
from allwhois import whois
from pprint import pprint

if __name__ == "__main__":
    domain = None
    try:
        domain = sys.argv[1]
    except:
        exit(f"Usage: {sys.argv[0]} <domain_name>")
    response = whois.query(domain)
    pprint(response)
```

Author: Sriram (marirs)
|
ally
|
No description available on PyPI.
|
allya.nestedconfigparser
|
By default, configparser only loads values from 'vars', 'section', and then 'default_section'. This extension allows for nested sections by use of a section splitter (default '.') and attempts to find values from 'vars', then 'section', then its logical parents, and finally 'default_section'.

For example, given the configuration below:

```ini
[DEFAULT]
alpha=first level

[section]
beta=second level

[section.subsection]
gamma=third level
```

the default configparser would behave like:

```
>>> settings = configparser.ConfigParser()
>>> settings.read('config.ini')
['config.ini']
>>> settings.get('section.subsection', 'alpha')
'first level'
>>> settings.get('section.subsection', 'beta')
Traceback (most recent call last):
  ...
configparser.NoOptionError: No option 'beta' in section: 'section.subsection'
>>> settings.get('section.subsection', 'gamma')
'third level'
```

Instead, with this extension, the behaviour would be:

```
>>> settings = nestedconfigparser.NestedConfigParser()
>>> settings.read('config.ini')
['config.ini']
>>> settings.get('section.subsection', 'alpha')
'first level'
>>> settings.get('section.subsection', 'beta')
'second level'
>>> settings.get('section.subsection', 'gamma')
'third level'
```

This extension supports theoretically unlimited levels of nesting. It also does not require each level of the subsection to exist, for the case where a section has no additional configurations.

Note: this extension intentionally does not raise a NoSectionError if a section does not exist when using nestedconfigparser.NestedConfigParser().get(section, option). This is because it will attempt to fall back to higher sections, and this avoids errors from potentially empty sections that don't have any added configurations at the deepest subsection level.
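The stock-library behaviour described above can be verified with the stdlib alone. The snippet below uses only configparser (not this extension) and builds the example configuration in memory rather than reading config.ini:

```python
import configparser

# Recreate the example config in memory
settings = configparser.ConfigParser()
settings.read_string("""
[DEFAULT]
alpha = first level

[section]
beta = second level

[section.subsection]
gamma = third level
""")

# 'alpha' resolves via the DEFAULT section, 'gamma' via the section itself
print(settings.get('section.subsection', 'alpha'))  # first level
print(settings.get('section.subsection', 'gamma'))  # third level

# 'beta' lives in the logical parent [section]; stock configparser does not
# look there, so an explicit fallback is needed to avoid NoOptionError
print(settings.get('section.subsection', 'beta', fallback=None))  # None
```

The NestedConfigParser removes the need for that explicit fallback by walking up the dotted section names itself.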
|
allyoucanuse
|
allyoucanuse

One-liners of everything: a hodge-podge of Python tools
|
allyourbase
|
All Your Base is a Python library for converting number strings from any base to number strings of any other base.

This library was created to make the following improvements on existing base-conversion libraries:
- Can convert both integers and floats
- Uses the Decimal package to allow for arbitrary precision / number of digits
- Uses the Decimal package to avoid binary rounding errors
- Is not limited to base 36, 62, or 64 by the available characters; it can convert to/from any integer base from 2 to whatever you like (higher bases use a delimited decimal format)
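For illustration, the integer case of such a conversion can be sketched in plain Python. This is a generic sketch of the technique, not the allyourbase API; the library additionally handles floats and arbitrary precision via Decimal:

```python
# Generic base-conversion sketch: parse the digits in the source base into
# an integer, then repeatedly divide to emit digits in the target base.
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def convert_base(number: str, from_base: int, to_base: int) -> str:
    # Accumulate the value of the source-base digit string
    value = 0
    for ch in number.lower():
        value = value * from_base + DIGITS.index(ch)
    if value == 0:
        return "0"
    # Peel off target-base digits least-significant first
    out = []
    while value:
        value, rem = divmod(value, to_base)
        out.append(DIGITS[rem])
    return "".join(reversed(out))

print(convert_base("ff", 16, 2))   # 11111111
print(convert_base("255", 10, 16)) # ff
```

This simple scheme caps out at base 36 (digits plus letters), which is exactly the character-set limitation the library works around with its delimited decimal format for higher bases.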
|
allz
|
A universal command line tool for compression and decompressionSupports more normal compressed file formats than I can remember, like Zip, Tar, Gzip, Bzip2, 7-Zip, Rar, Rar.bz2, Tar.gz and so on.Supports split volumn compressed file for certain formats, like Tar.bz2, Tar.bz, Tar.gz, Tgz, Tar.7z, Tar, 7z, Zip, Rar.Supports decompress normal and split volumn compressed file in the same command. In split volumn mode, any part of the input split file can be decompress correctly.Support for automatic creation of target folders. If you have a path with spaces in it, it will handle that too.The decompress engine itself and command-line tool has only been tested in Linux environment so far.Install ClipipinstallallzCommand Descriptionallz-d-q-fsrc_path-odest_pathsrc_path The input source path.dest_path The output destination path.-d Decompress normal or split compressed files.-q Run in quiet mode.-f Always overwrite files when a file to be unpacked already exists on disk. By default, the program will skips the file.-p Output regular expressions for both regular compressed files and slice compressed files successively: the first line is for the normal compressed file, the second line is for the split volumn compressed file.-p --only-normal Output regular matching expressions for normal compressed files only.-p --only-split Output regular expressions for split volumn compressed files only.Available options:--output-directory (-o) The directory to write the contents of the archive to. Defaults to the current directory.ExamplesSuppose we now have a normal compressed file MNIST.7z and two split volumn compressed files MNIST.tar.7z.001, MNIST.tar.7z.002.1. View the versionallz--version2. View the helpallz-horallz--help3. Check which types are supported for decompression in your local environmentallzcheck4. Decompress the normal file MNIST.7z to current directoryallz-dMNIST.7zIn default, if the compressed file have already decompress before, it will skip the same file. 
You can use option -f to overwrite the files.

allz -d -f MNIST.7z

You can also silence screen log output by using option -q.

allz -d -q MNIST.7z

5. Decompress the normal file MNIST.7z to a specified directory by using option -o

allz -d MNIST.7z -o /tmp

You can also use a relative destination path.

allz -d MNIST.7z -o ..

6. Decompress the split-volume file MNIST.tar.7z.001

allz -d MNIST.tar.7z.001

Decompress the split-volume file to a specified directory by using option -o.

allz -d MNIST.tar.7z.001 -o /tmp

7. Handle a path with spaces in it

Using escapes:

allz -d 20220101\ todo/MNIST.7z -o /tmp/20220101\ done/MNIST.7z

Using quotation marks:

allz -d "20220101 todo/MNIST.7z" -o "/tmp/20220101 done/MNIST.7z"

8. Decompress the file into a nested destination directory

Folders that do not exist are created automatically.

allz -d MNIST.7z -o /tmp/today/fruit/apple/

SDK

1. Function get_compressed_files_classify_lst

Source code: allz/libs/file_type_tester

FileTypeTester.get_compressed_files_classify_lst(file_lst)

Returns a nested list that sorts the input file list into multiple normal or split-volume compressed file lists. The input parameter file_lst is a list of files in the same hierarchical directory. Entries can be file names, full paths, or relative paths.
The function only processes compressed files (both normal and split-volume compressed files); ordinary files are ignored.

A short usage example:

from allz.libs.file_type_tester import FileTypeTester

file_lst = ["MNIST.tar.0000", "MNIST.tar.0001", "MNIST.tar.0002", "MNIST.tar.0003", "MNIST.tar.0004",
            "MNIST.tar.7z.001", "MNIST.tar.7z.002",
            "MNIST.part1.rar", "MNIST.part2.rar", "MNIST.part3.rar", "MNIST.part4.rar",
            "MNIST.7z.001", "MNIST.7z.002",
            "123.rar", "abc.zip", "abc", "000", "0000.tar", "02287.txt"]

tester = FileTypeTester()
res_lst = tester.get_compressed_files_classify_lst(file_lst)
print(res_lst)

Output:

[['/home/work/srccode/github/allz/allz/libs/MNIST.tar.0000', '/home/work/srccode/github/allz/allz/libs/MNIST.tar.0002', '/home/work/srccode/github/allz/allz/libs/MNIST.tar.0004', '/home/work/srccode/github/allz/allz/libs/MNIST.tar.0003', '/home/work/srccode/github/allz/allz/libs/MNIST.tar.0001'], ['/home/work/srccode/github/allz/allz/libs/MNIST.tar.7z.002', '/home/work/srccode/github/allz/allz/libs/MNIST.tar.7z.001'], ['/home/work/srccode/github/allz/allz/libs/MNIST.part4.rar', '/home/work/srccode/github/allz/allz/libs/MNIST.part1.rar', '/home/work/srccode/github/allz/allz/libs/MNIST.part2.rar', '/home/work/srccode/github/allz/allz/libs/MNIST.part3.rar'], ['/home/work/srccode/github/allz/allz/libs/MNIST.7z.001', '/home/work/srccode/github/allz/allz/libs/MNIST.7z.002'], ['/home/work/srccode/github/allz/allz/libs/123.rar'], ['/home/work/srccode/github/allz/allz/libs/000.tar']]
|
allzpark
|
A cross-platform launcher for film and games projects, built on Rez
|
allzparkdemo
|
Example files for Allzpark
|
alma-cdk.aws-interface-endpoints
|
npm i -D @alma-cdk/aws-interface-endpoints

L3 construct helping with PrivateLink-powered VPC Interface Endpoints for AWS Services.

🚧 Project Stability

This construct is still versioned with v0 major version and breaking changes might be introduced if necessary (without a major version bump), though we aim to keep the API as stable as possible (even within v0 development). We aim to publish v1.0.0 soon and after that breaking changes will be introduced via major version bumps.

Getting Started

Endpoint open to whole isolated subnet

import { AwsInterfaceEndpoints } from '@alma-cdk/aws-interface-endpoints';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

const vpc = new ec2.Vpc(this, 'Vpc');

new AwsInterfaceEndpoints(this, 'EcrInterfaceEndpoint', {
  vpc,
  services: [
    { id: 'EcrDocker', service: ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER },
  ],
});

Session Manager connection endpoints

import { AwsInterfaceEndpoints } from '@alma-cdk/aws-interface-endpoints';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

const vpc = new ec2.Vpc(this, 'Vpc');

new AwsInterfaceEndpoints(this, 'SessionManagerInterfaceEndpoint', {
  vpc,
  services: AwsInterfaceEndpoints.SessionManagerConnect,
});

Explicitly opened endpoints

In your VPC creation stack:

import { AwsInterfaceEndpoints } from '@alma-cdk/aws-interface-endpoints';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

const vpc = new ec2.Vpc(this, 'Vpc');

new AwsInterfaceEndpoints(this, 'EcrInterfaceEndpoint', {
  vpc,
  open: false,
  services: [
    { id: 'EcrDocker', service: ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER },
  ],
});

In some other stack (maybe in a completely different CDK application):

import { AwsInterfaceEndpoints } from '@alma-cdk/aws-interface-endpoints';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

declare const instance: ec2.Instance;

const endpoints = AwsInterfaceEndpoints.fromAttributes(this, 'EcrInterfaceEndpoint', {
  services: [
    { id: 'EcrDocker', service: ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER },
  ],
});

endpoints.allowDefaultPortFrom(instance);

https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-create-vpc.html
https://aws.amazon.com/privatelink/pricing/
https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html
|
alma-cdk.cross-region-parameter
|
npm i -D @alma-cdk/cross-region-parameter

Store AWS SSM Parameter Store parameters into another AWS Region with AWS CDK.

🚧 Project Stability

This construct is still versioned with v0 major version and breaking changes might be introduced if necessary (without a major version bump), though we aim to keep the API as stable as possible (even within v0 development). We aim to publish v1.0.0 soon and after that breaking changes will be introduced via major version bumps.

Getting Started

import { CrossRegionParameter } from "@alma-cdk/cross-region-parameter";

new CrossRegionParameter(this, 'SayHiToSweden', {
  region: 'eu-north-1',
  name: '/parameter/path/message',
  description: 'Some message for the Swedes',
  value: 'Hej då!',
});
|
alma-cdk.domain
|
npm i -D @alma-cdk/domain

Simplifies creation of a subdomain with a TLS certificate and configuration with services like AWS CloudFront.

🚧 Project Stability

This construct is still versioned with v0 major version and breaking changes might be introduced if necessary (without a major version bump), though we aim to keep the API as stable as possible (even within v0 development). We aim to publish v1.0.0 soon and after that breaking changes will be introduced via major version bumps.

Getting Started

import { Domain } from '@alma-cdk/domain';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';

const domain = new Domain(this, 'Domain', {
  zone: 'example.com', // retrieve the zone via lookup, or provide IHostedZone
  subdomain: 'foobar', // optional subdomain
});

const distribution = new cloudfront.Distribution(this, 'Distribution', {
  /* other cloudfront configuration values removed for brevity */
  certificate: domain.certificate, // reference to created ICertificate
  domainNames: [domain.fqdn],      // foobar.example.com
  enableIpv6: domain.enableIpv6,   // true by default – set enableIpv6 prop to false during new Domain()
})

// assign CloudFront distribution to given fqdn with A+AAAA records
domain.addTarget(new targets.CloudFrontTarget(distribution))

CloudFront helper

Instead of assigning certificate, domainNames and enableIpv6 properties individually, you may choose to use the one-liner helper utility method configureCloudFront() to set all three values at once – don't forget to use ... object spread syntax:

const distribution = new cloudfront.Distribution(this, 'Distribution', {
  /* other cloudfront configuration values removed for brevity */

  // one-liner to configure certificate, domainNames and IPv6 support
  ...domain.configureCloudFront(),
})

// assign CloudFront distribution to given fqdn with A+AAAA records
domain.addTarget(new targets.CloudFrontTarget(distribution))

Note: The returned domain names configuration is domainNames: [domain.fqdn], meaning this only works in scenarios where your CloudFront distribution has only a single domain name.
|
alma-cdk.openapix
|
npm i -D @alma-cdk/openapix

Generate AWS API Gateway REST APIs via OpenAPI (formerly known as “Swagger”) schema definitions by consuming "clean" OpenAPI schemas and injecting x-amazon-apigateway-extensions with type-safety.

🚧 Project Stability

This construct is still versioned with v0 major version and breaking changes might be introduced if necessary (without a major version bump), though we aim to keep the API as stable as possible (even within v0 development). We aim to publish v1.0.0 soon and after that breaking changes will be introduced via major version bumps.

There are also some incomplete or buggy features, such as CORS and CognitoUserPoolsAuthorizer.

Getting Started

- Install: npm i -D @alma-cdk/openapix
- Define your API OpenAPI schema definition in a .yaml file without any x-amazon-apigateway-extensions
- Use openapix constructs in CDK to consume the .yaml file and then assign API Gateway integrations using CDK

HTTP Integration

Given the following http-proxy.yaml OpenAPI schema definition, without any AWS API Gateway OpenAPI extensions:

openapi: 3.0.3
info:
  title: HTTP Proxy
  description: Proxies requests to example.com
  version: "0.0.1"
paths:
  "/":
    get:
      summary: proxy
      description: Proxies example.com

You may then define an API Gateway HTTP integration (within your stack):

new openapix.Api(this, 'HttpProxy', {
  source: path.join(__dirname, '../schema/http-proxy.yaml'),
  paths: {
    '/': {
      get: new openapix.HttpIntegration(this, 'http://example.com', {
        httpMethod: 'get',
      }),
    },
  },
});

See /examples/http-proxy for the full OpenAPI definition (with response models) and an example within a CDK application.

Lambda Integration

Given the following hello-api.yaml OpenAPI schema definition, without any AWS API Gateway OpenAPI extensions:

openapi: 3.0.3
info:
  title: Hello API
  description: Defines an example “Hello World” API
  version: "0.0.1"
paths:
  "/":
    get:
      operationId: sayHello
      summary: Say Hello
      description: Prints out a greeting
      parameters:
        - name: name
          in: query
          required: false
          schema:
            type: string
            default: "World"
      responses:
        "200":
          description: Successful response
          content:
            "application/json":
              schema:
                $ref: "#/components/schemas/HelloResponse"
components:
  schemas:
    HelloResponse:
      description: Response body
      type: object
      properties:
        message:
          type: string
          description: Greeting
          example: Hello World!

You may then define an API Gateway AWS Lambda integration (within your stack):

const greetFn = new NodejsFunction(this, 'greet');

new openapix.Api(this, 'HelloApi', {
  source: path.join(__dirname, '../schema/hello-api.yaml'),
  paths: {
    '/': {
      get: new openapix.LambdaIntegration(this, greetFn),
    },
  },
})

See /examples/hello-api for the full OpenAPI definition (with response models) and an example within a CDK application.

AWS Service Integration

Given a books-api.yaml OpenAPI schema definition, without any AWS API Gateway OpenAPI extensions, you may then define an API Gateway AWS service integration such as DynamoDB (within your stack):

new openapix.Api(this, 'BooksApi', {
  source: path.join(__dirname, '../schema/books-api.yaml'),
  paths: {
    '/': {
      get: new openapix.AwsIntegration(this, {
        service: 'dynamodb',
        action: 'Scan',
        options: {
          credentialsRole: role, // role must have access to DynamoDB table
          requestTemplates: {
            'application/json': JSON.stringify({
              TableName: table.tableName,
            }),
          },
          integrationResponses: [
            {
              statusCode: '200',
              responseTemplates: {
                // See /examples/http-proxy/lib/list-books.vtl
                'application/json': readFileSync(__dirname + '/list-books.vtl', 'utf-8'),
              },
            },
          ],
        },
      }),
    },
    '/{isbn}': {
      get: new openapix.AwsIntegration(this, {
        service: 'dynamodb',
        action: 'GetItem',
        options: {
          credentialsRole: role, // role must have access to DynamoDB table
          requestTemplates: {
            'application/json': JSON.stringify({
              TableName: table.tableName,
              Key: { item: { "S": "$input.params('isbn')" } },
            }),
          },
          integrationResponses: [
            {
              statusCode: '200',
              responseTemplates: {
                // See /examples/http-proxy/lib/get-book.vtl
                'application/json': readFileSync(__dirname + '/get-book.vtl', 'utf-8'),
              },
            },
          ],
        },
      }),
    },
  },
});

See /examples/books-api for the full OpenAPI definition (with response models) and an example within a CDK application.

Mock Integration

Given the following mock-api.yaml OpenAPI schema definition, without any AWS API Gateway OpenAPI extensions:

openapi: 3.0.3
info:
  title: Hello API
  description: Defines an example “Hello World” API
  version: "0.0.1"
paths:
  "/":
    get:
      operationId: sayHello
      summary: Say Hello
      description: Prints out a greeting
      parameters:
        - name: name
          in: query
          required: false
          schema:
            type: string
            default: "World"
      responses:
        "200":
          description: Successful response
          content:
            "application/json":
              schema:
                $ref: "#/components/schemas/HelloResponse"
components:
  schemas:
    HelloResponse:
      description: Response body
      type: object
      properties:
        message:
          type: string
          description: Greeting
          example: Hello World!

You may then define an API Gateway Mock integration (within your stack):

new openapix.Api(this, 'MockApi', {
  source: path.join(__dirname, '../schema/mock-api.yaml'),
  paths: {
    '/': {
      get: new openapix.MockIntegration(this, {
        requestTemplates: {
          "application/json": JSON.stringify({ statusCode: 200 }),
        },
        passthroughBehavior: apigateway.PassthroughBehavior.NEVER,
        requestParameters: {
          'integration.request.querystring.name': 'method.request.querystring.name',
        },
        integrationResponses: [
          {
            statusCode: '200',
            responseTemplates: {
              // see /examples/mock-api/lib/greet.vtl
              'application/json': readFileSync(__dirname + '/greet.vtl', 'utf-8'),
            },
            responseParameters: {},
          },
        ],
      }),
    },
  },
});

See /examples/mock-api for the full OpenAPI definition (with response models) and an example within a CDK application.

Validators

API Gateway REST APIs can perform request parameter and request body validation. You can provide both a default validator and integration-specific validators (which override the default for the given integration).

See /examples/todo-api for a complete example within a CDK application.

Given a todo-api.yaml OpenAPI schema definition, you may define the API Gateway validators for your integrations in CDK:

new openapix.Api(this, 'MyApi', {
  source: path.join(__dirname, '../schema/todo-api.yaml'),
  validators: {
    'all': {
      validateRequestBody: true,
      validateRequestParameters: true,
      default: true, // set this as the "API level" default validator (there can be only one)
    },
    'params-only': {
      validateRequestBody: false,
      validateRequestParameters: true,
    },
  },
  paths: {
    // this one uses the default 'all' validator
    '/todos': {
      post: new openapix.HttpIntegration(this, baseUrl, { httpMethod: 'post' }),
    },
    // this one has a validator override and uses the 'params-only' validator
    '/todos/{todoId}': {
      get: new openapix.HttpIntegration(this, `${baseUrl}/{todoId}`, {
        validator: 'params-only',
        options: {
          requestParameters: {
            'integration.request.path.todoId': 'method.request.path.todoId',
          },
        },
      }),
    },
  },
})

Authorizers

🚧 Work-in-Progress

There are multiple ways to control & manage access to API Gateway REST APIs, such as resource policies, IAM permissions and usage plans with API keys, but this section focuses on Cognito User Pools and Lambda authorizers.

Cognito Authorizers

In this example we're defining a Cognito User Pool based authorizer.

Given the following schema.yaml OpenAPI definition:

openapi: 3.0.3
paths:
  /:
    get:
      security:
        - MyAuthorizer: ["test/read"] # add scope
components:
  securitySchemes:
    MyCognitoAuthorizer:
      type: apiKey
      name: Authorization
      in: header

You can define the Cognito Authorizer in CDK with:

declare const userPool: cognito.IUserPool;

new openapix.Api(this, 'MyApi', {
  source: './schema.yaml',
  authorizers: [
    new openapix.CognitoUserPoolsAuthorizer(this, 'MyCognitoAuthorizer', {
      cognitoUserPools: [userPool],
      resultsCacheTtl: Duration.minutes(5),
    }),
  ],
})

Lambda Authorizers

In this example we're defining a custom Lambda authorizer. The authorizer function code is not relevant to the example, but the idea is that an API caller sends some "secret code" in the query parameters (?code=example123456) which the authorizer function then somehow evaluates.

Given the following schema.yaml OpenAPI definition:

openapi: 3.0.3
paths:
  /:
    get:
      security:
        - MyAuthorizer: [] # note the empty array
components:
  securitySchemes:
    MyCustomAuthorizer:
      type: apiKey
      name: code
      in: query

You can define the custom Lambda Authorizer in CDK with:

declare const authFn: lambda.IFunction;

new openapix.Api(this, 'MyApi', {
  source: './schema.yaml',
  authorizers: [
    new openapix.LambdaAuthorizer(this, 'MyCustomAuthorizer', {
      fn: authFn,
      identitySource: apigateway.IdentitySource.queryString('code'),
      type: 'request',
      authType: 'custom',
      resultsCacheTtl: Duration.minutes(5),
    }),
  ],
})

Inject/Reject

You may modify the generated OpenAPI definition (which is used to define the API Gateway REST API) by injecting or rejecting values from the source OpenAPI schema definition:

new openapix.Api(this, 'MyApi', {
  source: './schema.yaml',

  // Add any OpenAPI v3 data.
  // Can be useful for passing values from CDK code.
  // See https://swagger.io/specification/
  injections: {
    "info.title": "FancyPantsAPI",
  },

  // Reject fields by absolute object path from generated definition
  rejections: ['info.description'],

  // Reject all matching fields from generated definition
  rejectionsDeep: ['example', 'examples'],
});

CORS

🚧 Work-in-Progress

Using openapix.CorsIntegration creates a Mock integration which responds with the correct response headers:

new openapix.Api(this, 'MyApi', {
  source: './schema.yaml',
  paths: {
    '/foo': {
      options: new openapix.CorsIntegration(this, {
        // using helper methods to define explicit values:
        headers: CorsHeaders.from(this, 'Content-Type', 'X-Amz-Date', 'Authorization'),
        origins: CorsOrigins.from(this, 'https://www.example.com'),
        methods: CorsMethods.from(this, 'options', 'post', 'get'),
      }),
    },
    '/bar': {
      options: new openapix.CorsIntegration(this, {
        // using regular string values:
        headers: 'Content-Type,X-Amz-Date,Authorization',
        origins: '*',
        methods: 'options,get',
      }),
    },
    '/baz': {
      options: new openapix.CorsIntegration(this, {
        // using helper constants for wildcard values:
        headers: CorsHeaders.ANY,
        origins: CorsOrigins.ANY,
        methods: CorsMethods.ANY,
      }),
    },
  },
});

When specifying multiple origins the mock integration uses VTL magic to respond with the correct Access-Control-Allow-Origin header.

Default CORS

If you wish to apply the same CORS options to every path, you may do so by providing a default cors value:

new openapix.Api(this, 'MyApi', {
  source: './schema.yaml',
  defaultCors: new openapix.CorsIntegration(this, {
    headers: CorsHeaders.ANY,
    origins: CorsOrigins.ANY,
    methods: CorsMethods.ANY,
  }),
  paths: { /* ... */ },
});

This will apply the given cors configuration to every path as an options method. You may still do path-specific overrides by adding an options method to specific paths.

API Gateway EndpointType

AWS CDK API Gateway constructs default to Edge-optimized API endpoints by using EndpointType.EDGE as the default.

This construct @alma-cdk/openapix instead defaults to using Regional API endpoints by setting EndpointType.REGIONAL as the default value. This is because we believe that in most cases you're better off configuring your own CloudFront distribution in front of the API. If you do that, you might also be interested in the @alma-cdk/origin-verify construct.

You MAY override this default in @alma-cdk/openapix by providing your preferred endpoint types via restApiProps:

new openapix.Api(this, 'MyApi', {
  source: './schema.yaml',
  paths: { /* ... */ },
  restApiProps: {
    endpointConfiguration: {
      types: [apigateway.EndpointType.EDGE],
    },
  },
});
|
alma-cdk.origin-verify
|
npm i -D @alma-cdk/origin-verify

Enforce API Gateway REST API, AppSync GraphQL API, or Application Load Balancer traffic via CloudFront by generating a Secrets Manager secret value which is used as a CloudFront Origin Custom header and a WAFv2 WebACL header match rule.

Essentially this is an implementation of the AWS Solution “Enhance Amazon CloudFront Origin Security with AWS WAF and AWS Secrets Manager” without the secret rotation.

🚧 Project Stability

This construct is still versioned with v0 major version and breaking changes might be introduced if necessary (without a major version bump), though we aim to keep the API as stable as possible (even within v0 development). We aim to publish v1.0.0 soon and after that breaking changes will be introduced via major version bumps.

Getting Started

import { OriginVerify } from '@alma-cdk/origin-verify';
import { Distribution } from 'aws-cdk-lib/aws-cloudfront';

declare const api: RestApi;       // TODO: implement the RestApi
declare const apiDomain: string;  // TODO: implement the domain

const verification = new OriginVerify(this, 'OriginVerify', {
  origin: api.deploymentStage,
});

new Distribution(this, 'CDN', {
  defaultBehavior: {
    origin: new HttpOrigin(apiDomain, {
      customHeaders: {
        [verification.headerName]: verification.headerValue,
      },
      protocolPolicy: OriginProtocolPolicy.HTTPS_ONLY,
    }),
  },
})

For more detailed example usage see the /examples directory.

Custom Secret Value

Additionally, you may pass in a custom secretValue if you don't want to use a generated secret (which you should use in most cases):

const myCustomValue = SecretValue.unsafePlainText('foobar');

const verification = new OriginVerify(this, 'OriginVerify', {
  origin: api.deploymentStage,
  secretValue: myCustomValue,
});

Notes

Use OriginProtocolPolicy.HTTPS_ONLY!

In your CloudFront distribution Origin configuration use OriginProtocolPolicy.HTTPS_ONLY to avoid exposing the verification.headerValue secret to the world.

Why secretValue.unsafeUnwrap()?

Internally this construct creates the headerValue by using AWS Secrets Manager, but the secret value is exposed directly by using the secretValue.unsafeUnwrap() method. This is:

- required, because we must be able to set it into the WAFv2 WebACL rule
- required, because you must be able to set it into the CloudFront Origin Custom Header
- okay, because it's meant to protect the API externally and it's not considered a secret that should be kept – well – secret within your AWS account
|
alma-cdk.project
|
🚧 Work-in-Progress: Breaking changes may occur at any given point during v0.x.

npm i -D @alma-cdk/project

Opinionated CDK “framework” with constructs & utilities for:

- deploying multiple environments to multiple accounts (with many-to-many relationship)
- managing account configuration through standardized props (no more random config files)
- querying account and/or environment specific information within your CDK code
- enabling dynamic & short-lived “feature environments”
- enabling well-defined tagging
- providing structure & common conventions to CDK projects
- choosing the target account & environment by passing in runtime context:

npx cdk deploy -c account=dev -c environment=feature/abc-123

... which means you don't need to define all the possible environments ahead of time!

Account Strategies

Depending on the use case, you may choose a configuration between 1–3 AWS accounts with the following environments:

- Shared account (shared)
- Multi-account (dev+prod) – RECOMMENDED
- Multi-account (dev+preprod+prod)

Getting Started

Steps required to define environmental project resources; at first it might seem complex, but once you get into the habit of defining your projects this way it starts to make sense:

Choose your Account Strategy.

Initialize a new Project instead of cdk.App:

// bin/app.ts
import { Project, AccountStrategy } from '@alma-cdk/project';

const project = new Project({
  // Basic info, you could also read these from package.json if you want
  name: 'my-cool-project',
  author: {
    organization: 'Acme Corp',
    name: 'Mad Scientists',
    email: '[email protected]',
  },

  // If not set, defaults to one of: $CDK_DEFAULT_REGION, $AWS_REGION or us-east-1
  defaultRegion: 'eu-west-1',

  // Configures the project to use 2 AWS accounts (recommended)
  accounts: AccountStrategy.two({
    dev: {
      id: '111111111111',
      config: {
        // whatever you want here as [string]: any
        baseDomain: 'example.net',
      },
    },
    prod: {
      id: '222222222222',
      config: {
        // whatever you want here as [string]: any
        baseDomain: 'example.com',
      },
    },
  }),
})

Define a stack which extends SmartStack with resources:

// lib/my-stack.ts
import { Construct } from 'constructs';
import { StackProps, RemovalPolicy } from 'aws-cdk-lib';
import { SmartStack, Name, UrlName, PathName, EC } from '@alma-cdk/project';

export class MyStack extends SmartStack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new dynamodb.Table(this, 'Table', {
      removalPolicy: EC.isStable(this) ? RemovalPolicy.RETAIN : RemovalPolicy.DESTROY,
      tableName: Name.it(this, 'MyTable'),
      partitionKey: {
        type: dynamodb.AttributeType.STRING,
        name: 'pk',
      },
      // StagingMyTable
    });

    new events.EventBus(this, 'EventBus', {
      eventBusName: Name.withProject(this, 'MyEventBus'), // MyCoolProjectStagingMyEventBus
    });

    new s3.Bucket(this, 'Bucket', {
      removalPolicy: EC.isStable(this) ? RemovalPolicy.RETAIN : RemovalPolicy.DESTROY,
      autoDeleteObjects: EC.isStable(this) ? false : true,
      bucketName: UrlName.globally(this, 'MyBucket'), // acme-corp-my-cool-project-feature-foo-bar-my-bucket
    });

    new ssm.StringParameter(this, 'Parameter', {
      stringValue: 'Foo',
      tier: ssm.ParameterTier.ADVANCED,
      parameterName: PathName.withProject(this, 'MyNamespace/MyParameter'), // /MyCoolProject/Staging/MyNamespace/MyParameter
    });
  }
}

Define a new environmental wrapper which extends EnvironmentWrapper and initialize all your environmental SmartStack stacks within:

// lib/environment.ts
import { Construct } from 'constructs';
import { EnvironmentWrapper } from '@alma-cdk/project';
import { MyStack } from './my-stack';

export class Environment extends EnvironmentWrapper {
  constructor(scope: Construct) {
    super(scope);
    new MyStack(this, 'MyStack', { description: 'This is required' });
  }
}

Resulting Stack properties (given environment=staging):

stackName: "MyCoolProject-Environment-Staging-MyExampleStack"
terminationProtection: true
env.account: "111111111111"
env.region: "eu-west-1"

Resulting Tags for the Stack and its resources (given environment=staging):

Account: dev
Environment: staging
Project: my-cool-project
Author: Mad Scientists
Organization: Acme Corp
Contact: [email protected]

Finally initialize the environment with the Project scope:

// bin/app.ts
import { Project, Accounts } from '@alma-cdk/project';
import { Environment } from '../lib/environment';

const project = new Project({ /* removed for brevity, see step 1 */ })

new Environment(project);

Documentation

See detailed documentation for specific classes & methods at constructs.dev.

Generally speaking you would be most interested in the following:

- Project
- AccountStrategy
- SmartStack
- AccountWrapper & EnvironmentWrapper
- AccountContext (AC)
- EnvironmentContext (EC)
- Name / UrlName / PathName
|
alma-client
|
alma-python-client

Python API Client for the Alma API.

Supports Python >= 3.8

Install

pip install alma-client

Demo

We support both a sync and an async client.

Synchronous client

from alma_client import Client

alma_client = Client.with_api_key("sk_test..")
payments = alma_client.payments.fetch_all()
for p in payments:
    print(f"{p.id}: Paiement en {len(p.payment_plan)} fois")

payment_data = {
    "payment": {
        "purchase_amount": 10000,
        "return_url": "http://merchant.com/payment-success",
        "shipping_address": {
            "first_name": "Martin",
            "last_name": "Dupond",
            "line1": "1 rue de Rivoli",
            "postal_code": "75004",
            "city": "Paris",
        },
    }
}

eligibility = alma_client.payments.eligibility(payment_data)
if eligibility.eligible:
    payment = alma_client.payments.create(payment_data)
    print(payment.raw_data)

Asynchronous client

from alma_client import AsyncClient

alma_client = AsyncClient.with_api_key("sk_test..")
payments = await alma_client.payments.fetch_all()
for p in payments:
    print(f"{p.id}: Paiement en {len(p.payment_plan)} fois")

payment_data = {
    "payment": {
        "purchase_amount": 10000,
        "return_url": "http://merchant.com/payment-success",
        "shipping_address": {
            "first_name": "Martin",
            "last_name": "Dupond",
            "line1": "1 rue de Rivoli",
            "postal_code": "75004",
            "city": "Paris",
        },
    }
}

eligibility = await alma_client.payments.eligibility(payment_data)
if eligibility.eligible:
    payment = await alma_client.payments.create(payment_data)
    print(payment.raw_data)

Changelog

3.0.2 (2022-12-07)
- Fix dump for null-body requests.

3.0.1 (2022-12-05)
- Configure credentials later in the flow.

3.0.0 (2022-06-29)
- Breaking change: move the code from the alma namespace into the alma_client namespace.
- Remove support for Python 3.6 and Python 3.7

2.0.2 (2022-06-22)
- Fix potential-fraud method URLs (#27)

2.0.1 (2022-06-17)
- Add include_child_accounts and custom_fields params to the DataExport creation endpoint

2.0.0 (2021-08-12)
- Breaking changes: move from requests to HTTPX; handle both sync and async python clients
- Remove support for Python 3.5
- Add support for Python 3.9

1.2.0 (2020-09-01)
- Add support for different authentication methods
- Add Black & isort checks on pull requests

1.1.0 (2020-03-25)
- Add support for Python 3.5+

1.0.1 (2020-03-24)
- Automatically detect the API mode from the api_key.

1.0.0 (2020-03-24)
- Create a Python client for Alma
- Handle Order entity for Payment
- Handle the refund endpoint
- Handle pagination for orders
- Handle the send-sms API for payments.
|
almadenkorea
|
No description available on PyPI.
|
almadry
|
This is a very simple calculator that takes two numbers and either adds, subtracts, multiplies or divides them.

Change Log

0.0.1 (18/12/2021)
- First Release
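The description above is the whole of the documented interface, so the sketch below is purely hypothetical: the function name `calculate` and its `op` argument are illustrative assumptions, not the package's actual API, just the shape a four-operation calculator typically takes.

```python
# Hypothetical sketch of a two-number, four-operation calculator
# (NOT the package's actual API).
def calculate(a: float, b: float, op: str) -> float:
    """Apply one of add/subtract/multiply/divide to two numbers."""
    ops = {
        "add": lambda x, y: x + y,
        "subtract": lambda x, y: x - y,
        "multiply": lambda x, y: x * y,
        "divide": lambda x, y: x / y,  # raises ZeroDivisionError for y == 0
    }
    return ops[op](a, b)
```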
|
alma-hilse
|
alma-hilse

ALMA Hardware-In-the-Loop Simulation Environment setup, monitoring and verification package.

The alma-hilse package provides an application and libraries aiming to help in the setup, monitoring and verification of the ALMA HILSE infrastructure.
Most online requests are based on the AmbManager object to minimize dependencies.

Installation

pip install alma-hilse

If installed for the current user only, it may be necessary to modify PATH accordingly by running:
make venv
source venv/bin/activate
alma-hilse --help

Usage

Following is a non-exhaustive list of available commands, for illustrative purposes only. Use the following command to see all available options:

alma-hilse --help

Timing-related commands

alma-hilse timing --help
alma-hilse timing status # LORR/LFTRR status for HILSE
alma-hilse timing resync # LORR/LFTRR resync to CLO reference

Correlator-related commands

alma-hilse corr --help
alma-hilse corr status # power, parity and delay status of DRXs
alma-hilse corr set-metaframe-delays # NOT IMPLEMENTED YET
alma-hilse corr mute-edfa # NOT IMPLEMENTED YET
alma-hilse corr eeprom-read # NOT IMPLEMENTED YET
alma-hilse corr eeprom-write # NOT IMPLEMENTED YETGeneral environment setup and troubleshooting commandsalma-hilse utils --help
alma-hilse utils get-devices # list devices connected to ABM
alma-hilse utils turn-on-ambmanager
alma-hilse utils array-info # NOT IMPLEMENTED YET (to be based on existing BE scripts)

Antenna integration-related commands

Generation of reports aiming to help in the integration of new antennas into the HILSE environment, e.g. by collecting relevant information to be used during AOS patch panel fiber movements and subsequent verifications.

alma-hilse integration --help # NOT IMPLEMENTED YET
alma-hilse integration general-status # NOT IMPLEMENTED YET (to be based on existing BE scripts)
alma-hilse integration lo-resources # NOT IMPLEMENTED YET (to be based on existing BE scripts)
alma-hilse integration dts-resources # NOT IMPLEMENTED YET (to be based on existing BE scripts)
alma-hilse integration pad-resources # NOT IMPLEMENTED YET
|
almalinux-git-utils
|
almalinux-git-utils

Utilities for working with the AlmaLinux OS Git server.

alma_get_sources

The alma_get_sources script downloads sources and BLOBs from the AlmaLinux sources cache.

Usage

Run alma_get_sources in a git project root directory:

1. Clone an AlmaLinux RPM package git project from git.almalinux.org.
2. Switch to a required branch.
3. Run the alma_get_sources tool:

$ alma_get_sources

alma_blob_upload

The alma_blob_upload script uploads sources and BLOBs to the AlmaLinux sources cache.

Prerequisites

Create an AWS credentials file ~/.aws/credentials with the following content:

[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY

Usage

The utility supports two types of input: a CentOS git repository metadata file or a list of files to upload.

For CentOS repositories the workflow will be the following:

1. Install the get_sources.sh script from the centos-git-common repository.
2. Clone a project and download its sources as described on the CentOS Wiki.
3. Run the alma_blob_upload tool (don't forget to replace PROJECT_NAME with an actual project name):

$ alma_blob_upload -i .PROJECT_NAME.metadata

Alternatively, you can upload a list of files in the following way:

$ alma_blob_upload -f SOURCES/FILE_1 SOURCES/FILE_N

The alma_blob_upload utility can also generate a CentOS-compatible metadata file:

$ alma_blob_upload -o .PROJECT_NAME.metadata -f SOURCES/FILE_1 SOURCES/FILE_N

License

Licensed under the GPLv3 license, see the LICENSE file for details.
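For reference, the .metadata files exchanged by the tools above follow the CentOS convention of one "checksum path" pair per line. Assuming that two-column format (an assumption, check your actual files), parsing one reduces to something like this sketch (the helper name is hypothetical):

```python
# Sketch: parse CentOS-style .metadata lines of the form
# "<checksum> <relative path>" into a {path: checksum} mapping.
# The function name is hypothetical, not part of almalinux-git-utils.
def parse_metadata(lines):
    entries = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        checksum, _, rel_path = line.partition(" ")
        entries[rel_path.strip()] = checksum
    return entries
```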
|
alman
|
UNKNOWN
|
almanac
|
a framework for interactive, page-based console applicationsSynopsisThealmanacframework aims to serve as an intuitive interface for spinning up interactive, page-based console applications. Think of it as a Python metaprogramming layer on top ofPython Prompt ToolkitandPygments.Examplealmanacturns this:"""Welcome to a simple interactive HTTP client.The current URL to request is the application's current path. Directories will becreated as you cd into them."""importaiohttpimportasynciofromalmanacimporthighlight_for_mimetype,make_standard_app,PagePathapp=make_standard_app()@app.on_init()asyncdefruns_at_start_up():app.io.raw(__doc__)app.bag.session=aiohttp.ClientSession()app.io.info('Session opened!')@app.on_exit()asyncdefruns_at_shut_down():awaitapp.bag.session.close()app.io.info('Session closed!')@app.prompt_text()defcustom_prompt():stripped_path=str(app.page_navigator.current_page.path).lstrip('/')returnf'{stripped_path}> '@app.hook.before('cd')asyncdefcd_hook_before(path:PagePath):ifpathnotinapp.page_navigator:app.page_navigator.add_directory_page(path)@app.hook.exception(aiohttp.ClientError)asyncdefhandle_aiohttp_errors(exc:aiohttp.ClientError):app.io.error(f'{exc.__class__.__name__}:{str(exc)}')@app.cmd.register()@app.arg.method(choices=['GET','POST','PUT'],description='HTTP verb for request.')@app.arg.proto(choices=['http','https'],description='Protocol for request.')asyncdefrequest(method:str,*,proto:str='https',**params:str):"""Send an HTTP or HTTPS request."""path=str(app.current_path).lstrip('/')url=f'{proto}://{path}'app.io.info(f'Sending{method}request to{url}...')resp=awaitapp.bag.session.request(method,url,params=params)asyncwithresp:text=awaitresp.text()highlighted_text=highlight_for_mimetype(text,resp.content_type)app.io.info(f'Status{resp.status}response from{resp.url}')app.io.info('Here\'s the content:')app.io.ansi(highlighted_text)if__name__=='__main__':asyncio.run(app.prompt())Into this:InstallationYou can download the latest packaged version from 
PyPI:

pip install almanac

Alternatively, you can get the bleeding-edge version from version control:

pip install https://github.com/welchbj/almanac/archive/master.tar.gz

License

The original content of this repository is licensed under the MIT License, as per the LICENSE.txt file. Some of the parsing logic is borrowed from the python-nubia project and is licensed under that project's BSD License. For more information, please see the comment in almanac/parsing/parsing.py.

Development

Development dependencies can be installed with:

pip install -r deps/dev-requirements.txt

To run the tests, use:

python tasks.py test

To lint and type check the code, use:

flake8 .
mypy .

When it's time to cut a release, use:

python setup.py bdist_wheel sdist
twine check dist/*.whl dist/*.gz
twine upload dist/*.whl dist/*.gz
|
almanac-bot
|
Copyright (c) 2017 Julio C. Barrera

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Almanac Bot: an almanac bot for Twitter.
|
almanach
|
Almanach stores the utilization of OpenStack resources (instances and volumes) for each tenant.What is Almanach?The main purpose of this software is to record the usage of the cloud resources of each tenant.Almanach is composed of two parts:Collector: Listens for OpenStack events and stores the relevant information in the database.REST API: Exposes the collected information to external systems.At the moment, Almanach is only able to record the usage of instances and volumes.ResourcesDocumentation: https://almanach.readthedocs.io/Launchpad Project: https://launchpad.net/almanachBlueprints: https://blueprints.launchpad.net/almanachBug Tracking: https://bugs.launchpad.net/almanachLicenseAlmanach is distributed under the Apache 2.0 license.
|
almanak
|
AlmanakA placeholder for the incoming Almanak package.
|
almanasi
|
Kynanlibs LibraryCore library of Naya-Pyro, a Python-based Telegram userbot.Installationpip3 install -U py-AyraDocumentationSee more working plugins on the official repository!Made with 💕 by Kynan.LicenseUltroid is licensed under GNU Affero General Public License v3 or later.Credits
|
almapipy
|
almapipy: Python Wrapper for Alma APIalmapipy is a Python requests wrapper for easily accessing the Ex Libris Alma API. It is designed to be lightweight and imposes a structure that mimics the Alma API architecture.Installationpip install almapipyProgress and RoadmapGet functionality has been developed around all the Alma APIs (listed below).
Post, Put and Delete functions will be gradually added in future releases.APIGetPostPutDeletebibsXanalyticsXNANANAacquisitionsXconfigurationXcoursesXresource sharing partnersXtask-listsXusersXelectronicXUseImport# Import and call primary Client classfromalmapipyimportAlmaCnxnalma=AlmaCnxn('your_api_key',data_format='json')Access Bibliographic DataAlma provides a set of Web services for handling bibliographic records related information, enabling you to quickly and easily manipulate bibliographic records related details. These Web services can be used by external systems to retrieve or update bibliographic records related data.# Use Alma mms_id for retrieving bib recordsharry_potter="9980963346303126"bib_record=alma.bibs.catalog.get(harry_potter)# get holding items for a bib recordholdings=alma.bibs.catalog.get_holdings(harry_potter)# get loans by titleloans=alma.bibs.loans.get_by_title(harry_potter)# or by a specific holding itemloans=alma.bibs.loans.get_by_item(harry_potter,holding_id,item_id)# get requests or availability of bibalma.bibs.requests.get_by_title(harry_potter)alma.bibs.requests.get_by_item(harry_potter,holding_id,item_id)alma.bibs.requests.get_availability(harry_potter,period=20)# get digital representationsalma.bibs.representations.get(harry_potter)# get linked dataalma.bibs.linked_data.get(harry_potter)Access ReportsThe Analytics API returns an Alma report.# Find the system path to the report if don't know pathalma.analytics.paths.get('/shared')# retrieve the report as an XML ET element (native response)report=alma.analytics.reports.get('path_to_report')# or convert the xml to json after API callreport=alma.analytics.reports.get('path_to_report',return_json=True)Access CoursesAlma provides a set of Web services for handling courses and reading lists related information, enabling you to quickly and easily manipulate their details. 
These Web services can be used by external systems such as Courses Management Systems to retrieve or update courses and reading lists related data.# Get a complete list of courses. Makes multiple calls if necessary.course_list=alma.courses.get(all_records=True)# or filter on search parametersecon_courses=alma.courses.get(query={'code':'ECN'})# get reading lists for a coursecourse_id=econ_courses['course'][0]['id']reading_lists=alma.courses.reading_lists.get(course_id)# get more detailed information about a specific reading listreading_list_id=reading_lists['reading_list'][0]['id']alma.courses.reading_lists(course_id,reading_list_id,view='full')# get citations for a reading listalma.courses.citations(course_id,reading_list_id)Access UsersAlma provides a set of Web services for handling user information, enabling you to quickly and easily manipulate user details. These Web services can be used by external systems—such as student information systems (SIS)—to retrieve or update user data.# Get a list of users or filter on search parametersusers=alma.users.get(query={'first_name':'Sterling','last_name':'Archer'})# get more information on that useruser_id=users['user'][0]['primary_id']alma.users.get(user_id)# get all loans or requests for a user. Makes multiple calls if necessary.loans=alma.user.loans.get(user_id,all_records=True)requests=alma.user.requests.get(user_id,all_records=True)# get deposits or fees for a userdeposits=alma.users.deposits.get(user_id)fees=alma.users.fees.get(user_id)Access AcquisitionsAlma provides a set of Web services for handling acquisitions information, enabling you to quickly and easily manipulate acquisitions details. 
These Web services can be used by external systems - such as subscription agent systems - to retrieve or update acquisitions data.# get all fundsalma.acq.funds.get(all_records=True)# get po_lines by searchamazon_lines=alma.acq.po_lines.get(query={'vendor_account':'AMAZON'})single_line_id=amazon_lines['po_line'][0]['number']# or by a specific line numberalma.acq.po_lines.get(single_line_id)# search for a vendoralma.acq.vendors.get(status='active',query={'name':'AMAZON'})# or get a specific vendoralma.acq.vendors.get('AMAZON.COM')# get invoices or polines for a specific vendoralma.acq.vendors.get_invoices('AMAZON.COM')alma.acq.vendors.get_po_lines('AMAZON.COM')# or get specific invoicesalma.acq.invoices.get('invoice_id')# get all licensesalma.acq.licenses.get(all_records=True)Access Configuration SettingsAlma provides a set of Web services for handling Configuration related information, enabling you to quickly and easily receive configuration details. These Web services can be used by external systems in order to get list of possible data.# Get libraries, locations, departments, and hourslibraries=alma.conf.units.get_libaries()library_id=libraries['library'][0]['code']locations=alma.conf.units.get_locations(library_id)hours=alma.conf.general.get_hours(library_id)departments=alma.conf.units.get_departments()# Get system code tablestable='UserGroups'alma.conf.general.get_code_table(table)# Get scheduled jobs and run historyjobs=alma.conf.jobs.get()job_id=jobs['job'][0]['id']run_history=alma.conf.jobs.get_instances(job_id)# Get sets and set memberssets=alma.conf.sets.get()set_id=sets['set'][0]['id']set_members=alma.conf.sets.get_members(set_id)# get profiles and remindersdepost_profiles=alma.conf.deposit_profiles.get()import_profiles=alma.conf.import_profiles.get()reminders=alma.conf.reminders.get()Access Resource Sharing PartnersAlma provides a set of Web services for handling Resource Sharing Partner information, enabling you to quickly and easily manipulate partner 
details. These Web services can be used by external systems to retrieve or update partner data.# get partnerspartners=alma.partners.get()Access ElectronicAlma provides a set of Web services for handling electronic information, enabling you to quickly and easily manipulate electronic details. These Web services can be used by external systems in order to retrieve or update electronic data.# get e-collectionscollections=alma.electronic.collections.get()collection_id=collections['electronic_collection'][0]['id']# get services for a collectionservices=alma.electronic.services.get(collection_id)service_id=services['electronic_service'][0]['id']# get portfolios for a servicealma.electronic.portfolios.get(collection_id,service_id)Access Task ListsAlma provides a set of Web services for handling task lists information, enabling you to quickly and easily manipulate their details. These Web services can be used by external systems.# get requested resources for a specific circulation deskalma.task_lists.resources.get(library_id,circ_desk)# get lending requests for a specific libraryalma.task_lists.lending.get(library_id)Attribution and ContactAuthor:Steve Pelkey
|
almapiwrapper
|
This Python module is a tool for using the Alma APIs. It manages logging and the
backup of records.Author: Raphaël Rey ([email protected])Year: 2022Version: 0.19.5License: GNU General Public License v3.0Documentation
|
alma-probablity
|
No description available on PyPI.
|
almaqrmi
|
This is a very simple calculator that takes two numbers and either adds, subtracts, multiplies, or divides them.Change Log0.0.1 (22/12/2021)First Release
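almaqrmi's actual API is not documented above, so the behaviour it describes can only be illustrated with a hypothetical sketch; the function name `calculate` and the operation strings below are assumptions, not the package's real interface:

```python
def calculate(a: float, b: float, op: str) -> float:
    """Apply one of the four basic operations to two numbers."""
    if op == 'add':
        return a + b
    if op == 'subtract':
        return a - b
    if op == 'multiply':
        return a * b
    if op == 'divide':
        if b == 0:
            raise ZeroDivisionError('cannot divide by zero')
        return a / b
    raise ValueError(f'unknown operation: {op!r}')

print(calculate(6, 7, 'multiply'))  # prints 42
```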
|
almar
|
AlmarAlmar (formerly Lokar) is a script for batch editing and removing controlled
classification and subject heading fields (084/648/650/651/655) in bibliographic
records in Alma using the Alma APIs. Tested with Python 2.7 and Python 3.5+.It will use an SRU service to search for records, fetch and modify the MARCXML
records and use the Alma Bibs API to write the modified records back to Alma.The script will only work with fields having a vocabulary code defined in$2.
Since the Alma SRU service does not provide search indexes for specific
vocabularies, almar instead starts by searching using thealma.subjects+ thealma.authority_vocabularyindices. This returns all records having a subject
field A with the given term and a subject field B with the given vocabulary
code, but where A is not necessarily equal to B, so almar filters the result
list to find the records where A is actually the same as B.Installation and configurationRunpip install -e .to installalmarand its dependencies.Create a configuration file. Almar will first look foralmar.ymlin the
current directory, then forlokar.yml(legacy) and finally for.almar.ymlin your home directory.Here's a minimal configuration file to start with:---
default_vocabulary: INSERT MARC VOCABULARY CODE HERE
vocabularies:
- marc_code: INSERT MARC VOCABULARY CODE HERE
default_env: prod
env:
- name: prod
api_key: INSERT API KEY HERE
api_region: eu
sru_url: INSERT SRU URL HEREReplaceINSERT MARC VOCABULARY CODE HEREwith the vocabulary code of
your vocabulary (the$2value). The script uses this value as a filter,
to ensure it only edits subject fields from the specified vocabulary.ReplaceINSERT API KEY HEREwith the API key of your Alma instance. If
you're connected to a network zone, you should probably use a network zone key.
Otherwise the edits will be stored as local edits in the institution zone.Optionally: Change api_region to 'na' (North America) or 'ap' (Asia Pacific).ReplaceINSERT SRU URL HEREwith the URL to your SRU endpoint. Again: use
the network zone endpoint if you're connected to a network zone. For Bibsys
institutions, usehttps://bibsys-k.alma.exlibrisgroup.com/view/sru/47BIBSYS_NETWORKNote: In the file above, we've configured a single Alma environment called "prod".
It's possible to add multiple environments (for instance a sandbox and a
production environment) and switch between them using the-ecommand line option.
Here's an example:---
default_vocabulary: noubomn
vocabularies:
- marc_code: noubomn
id_service: http://data.ub.uio.no/microservices/authorize.php?vocabulary=realfagstermer&term={term}&tag={tag}
default_env: nz_prod
env:
- name: nz_sandbox
api_key: API KEY HERE
api_region: eu
sru_url: https://sandbox-eu.alma.exlibrisgroup.com/view/sru/47BIBSYS_NETWORK
- name: nz_prod
api_key: API KEY HERE
api_region: eu
sru_url: https://bibsys-k.alma.exlibrisgroup.com/view/sru/47BIBSYS_NETWORKFor all configuration options, seeconfiguration options.UsageBefore using the tool, make sure you have set the vocabulary code (vocabulary.marc_code)
for the vocabulary you want to work with in the configuration file.
The tool will only make changes to fields having a$2value that matches
the vocabulary.marc_code code set in your configuration file.Getting help:almar -h to show a list of commands and general command line options; almar replace -h to show help for the "replace" subcommand.Replace a subject headingTo replace "Term" with "New term" in 650 fields:almar replace '650 Term' 'New term'or, since 650 is defined as the default field, you can also use the shorthand:almar replace 'Term' 'New term'To work with any field other than the 650 field, the field number must be explicit:almar replace '655 Term' 'New term'Supported fields are 084, 648, 650, 651 and 655.Test things first with a dry runTo see the changes made to each catalog record, add the --diffs flag. Combined
with the--dry_runflag (or-d), you will see the changes that would be made
to the records without actually touching any records:almar replace --diffs --dry_run 'Term' 'New term'This way, you can easily get a feel for how the tool works.Move a subject to another MARC tagTo move a subject heading from 650 to 651:almar replace '650 Term' '651 Term'or you can use the shorthandalmar replace '650 Term' '651'if the term itself is the same. You can also move and change a heading in
one operation:almar replace '650 Term' '651 New term'Remove a subject headingTo remove all 650 fields having either$a Termor$x Term:almar remove '650 Term'or, since 650 is the default field, the shorthand:almar remove 'Term'List documentsIf you just want a list of documents without making any changes, usealmar list:almar list '650 Term'Optionally with titles:almar list '650 Term' --titlesMore complex editsFor more complex edits, such as replacing two subject headings with one,
use the--remand--addoptions to remove and add subject headings.
For instance, to replacePhysicsANDHistory(655) with a single subjectHistory of physics:almar --rem 'Physics' --rem '655 History' --add 'History of physics'Note that only records havingallof the two subjects to be removed (the--remsubjects) will be modified.
Any number of--remand--addoptions is supported.Interactive editingIf you need to split a concept into two or more concepts, you can use the
interactive mode.
Example: to replace "Kretser" with "Integrerte kretser"
on some documents, but with "Elektriske kretser" on other, run:almar interactive 'Kretser' 'Integrerte kretser' 'Elektriske kretser'For each record, Almar will print the title and subject headings and ask you
which of the two headings to include on the record. Use the arrow keys and space
to check one or the other, both or none of the headings, then press Enter to
confirm the selection and save the record.Working with a custom document setBy default,almarwill check all the documents returned from the following
CQL query:alma.subjects = "{term}" AND alma.authority_vocabulary = "{vocabulary}",
but you can use the--cqlargument to specify a different query if you only
want to work with a subset of the documents. For instance:almar --cql 'alma.all_for_ui = "999707921404702201"' --diffs replace 'Some subject' 'Some other subject'
around the term, as in the examples above. For single word terms, this is optional.In search, the first letter is case insensitive. If you search for "old term", both
"old term" and "Old term" will be replaced (but not "old Term").IdentifiersIdentifiers ($0) are added/updated if you configure aID lookup service URL(id_service) in your configuration file. The service should accept
a GET request with the parametersvocabulary,termandtagand return the
identifier of the matched concept as a JSON object. See this page for more details.For an example service using Skosmos, see code and demo.Limited support for subject stringsFour kinds of string operations are currently supported:
- almar remove 'Aaa : Bbb' deletes occurrences of $a Aaa $x Bbb
- almar replace 'Aaa : Bbb' 'Ccc : Ddd' replaces $a Aaa $x Bbb with $a Ccc $x Ddd
- almar replace 'Aaa : Bbb' 'Ccc' replaces $a Aaa $x Bbb with $a Ccc (replacing subfield $a and removing subfield $x)
- almar replace 'Aaa' 'Bbb : Ccc' replaces $a Aaa with $a Bbb $x Ccc (replacing subfield $a and adding subfield $x)
Note: A term is only recognized as a string if there is a space before and after the colon (:).More complex replacementsTo make more complex replacements, we can use the advanced MARC syntax, where
each argument is a complete MARC field using double$s as subfield delimiters.Let's start by listing documents having the subject "Advanced Composition Explorer"
in our default vocabulary using the simple syntax:almar list 'Advanced Composition Explorer'To get the same list using the advanced syntax, we would write:almar list '650 #7 $$a Advanced Composition Explorer $$2 noubomn'Notice that the quotation encapsulates the entire MARC field. And that we have explicitly
specified the vocabulary. This means we can make inter-vocabulary replacements.
To move the term to the "bare" vocabulary:almar replace '650 #7 $$a Advanced Composition Explorer $$2 noubomn' '610 27 $$a The Advanced Composition Explorer $$2 noubomn'We also changed the Marc tag and the field indicators in the same process.
We could also include more subfields in the process:almar replace '650 #7 $$a Advanced Composition Explorer $$2 noubomn' '610 27 $$a The Advanced Composition Explorer $$2 noubomn $$0 (NO-TrBIB)99023187'Note that unlike simple search and replace, the order of the subfields does not matter when matching.
Extra subfields do matter, however, except for$0and$9. To match any value (including no value)
for some subfield, use the value{ANY_VALUE}. Example:almar list --subjects '650 #7 $$a Sekvenseringsmetoder $$x {ANY_VALUE} $$2 noubomn'Using it as a Python libraryfromalmarimportSruClient,Almaapi_region='eu'api_key='SECRET'sru_url='https://sandbox-eu.alma.exlibrisgroup.com/view/sru/47BIBSYS_NETWORK'sru=SruClient(sru_url)alma=Alma(api_region,api_key)query='alma.authority_vocabulary="noubomn"'forrecordinsru.search(query):forsubjectinrecord.subjects(vocabulary='noubomn'):ifnotsubject.find('subfield[@code="0"]'):sa=subject.findtext('subfield[@code="a"]')sx=subject.findtext('subfield[@code="x"]')DevelopmentTo run tests:pip install -r test-requirements.txt
py.test
|
almasru
|
This Python module is a tool for using Alma SRU. It manages logging and the
backup of records.Author: Raphaël Rey ([email protected])Year: 2023Version: 0.1License: GNU General Public License v3.0Documentation
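almasru's own API is not documented in this blurb, but SRU itself is a plain HTTP search protocol, so a searchRetrieve request can be sketched independently of the package. The `build_sru_query` helper and the example endpoint below are hypothetical; the version/operation/query/maximumRecords parameters come from the SRU 1.2 specification:

```python
from urllib.parse import urlencode

def build_sru_query(base_url: str, query: str, max_records: int = 10) -> str:
    """Build an SRU 1.2 searchRetrieve URL for a CQL query."""
    params = {
        'version': '1.2',
        'operation': 'searchRetrieve',
        'query': query,
        'maximumRecords': str(max_records),
    }
    return f'{base_url}?{urlencode(params)}'

# Hypothetical Alma-style SRU endpoint; fetch the URL with any HTTP client.
url = build_sru_query('https://example.com/view/sru/INSTITUTION', 'alma.title=python')
```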
|
almatasks
|
ALMA tasks
|
almath
|
ALMath—ALMath is an optimized mathematics toolbox for robotics.
|
almatpdf
|
This is the homepage of our project.
|
almavik
|
AlmavikAlmavik is a program for determining the center of mass of a droplet from an image. It provides a convenient interface and image-processing functions that allow the droplet's center of mass to be determined quickly and accurately.FeaturesCenter-of-mass detection: Almavik uses modern image-processing algorithms to determine the droplet's center of mass accurately.Interface: the program has an intuitive interface that makes it easy to load images and view the processing results.Graphical representation: Almavik provides a graphical representation of the droplet, allowing its shape and size to be assessed visually.Interface:"Previous Image" and "Next Image" buttons: switch to the previous and next image, respectively;"Enable Drop Contour mode" and "Disable Drop Contour mode" buttons: shown depending on which mode is currently active; initially the image is displayed without the droplet contour highlighted, but pressing "Enable Drop Contour mode" highlights it;Slider: for quickly switching between images; since our folder contains 401 images, the slider helps jump quickly from the first image to the last;Plot: a red dot on the plot marks the center of mass in the current image; the 4 points before it belong to the previous images, and the 5 points after it to the following ones.RunningLocal run1. Install the dependencies from requirements.txt by running:
$ pip install -r requirements.txt
2. Create an "exp1" folder in the project's root directory. Images will be stored in this folder.
3. To run Almavik locally, execute:
$ python YPPRPO.pyRunning in a Docker container1. Download the Almavik Docker image from Docker Hub by running:
$ docker pull markuslons/almavik
2. Start the container with:
$ docker run -it markuslons/almavikInstalling via pipTo download the project, enter the following in the terminal:
$ pip install almavik
This automatically installs all required project dependencies, along with the folder of test images.
To run the installed library, create and run a Python file with the following lines of code:
from almavik.YPPRPO import main
main()
After running this file, an additional window opens with the latest version of the "Almavik" project.CreatorsMarkusBlubberyViktoriaTixLinkshttps://pypi.org/project/almavik/https://hub.docker.com/repository/docker/markuslons/almavik
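Almavik's own source is not reproduced here, but the center-of-mass computation it describes can be illustrated with a minimal numpy sketch, assuming the droplet has already been segmented into a binary mask (the `center_of_mass` function name is illustrative, not Almavik's API):

```python
import numpy as np

def center_of_mass(mask: np.ndarray) -> tuple[float, float]:
    """Return the (row, col) centroid of the nonzero pixels in a 2-D mask."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        raise ValueError('mask contains no foreground pixels')
    return float(rows.mean()), float(cols.mean())

# A 5x5 mask with a 3x3 "droplet" centered at (2, 2):
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
print(center_of_mass(mask))  # (2.0, 2.0)
```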
|
almaviva
|
Almaviva | Python LibraryAlmaviva Python library with custom functions for general use (pip install almaviva)Running the programTo use this library, simply install it via pip and then import it into your project's source code.
pip install almaviva
To install a previously published version of the library, append "==1.1.0" to the name of the package you are installing, e.g.:
pip install almaviva==1.4.8
To install the library from the PyPI test repository, run:
pip install -i https://test.pypi.org/simple/ almaviva
Virtual environmentTo create the virtual environment, in case the .venv folder was not created when the Python project template was downloaded (tutorial above), go to the folder containing this project's files and run, in PowerShell:
python -m venv .venv --clear; .venv/scripts/activate;
For the steps above to succeed, you may need to review the PowerShell script execution policy (in administrator mode). Run, in PowerShell:
Get-ExecutionPolicy -List
If the ExecutionPolicy for the LocalMachine scope is anything other than RemoteSigned, run, in PowerShell:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine
Module: DatabaseImporting the libraryImport it into your project's source code:
from almaviva.database import Database
Working with the databaseCreate the connection to the server and provide the name of the main database.Configuring the connection using Windows authentication (trusted connection):
server = '__servidor__'  # e.g. r'servidor1\instancia1'
database = '__banco__'  # e.g. 'Assets'
db = Database(server, database)
db.connect()
Configuring the connection using user authentication:
server = '__servidor__'  # e.g. r'servidor1\instancia1'
database = '__banco__'  # e.g. 'Assets'
username = '__usuario__'  # e.g. 'User1'
password = '__senha__'  # e.g. 'Senha1'
db = Database(server, database, username, password)
db.connect()
Optionally, we can configure the version of the driver installed on the operating system by adding the setting below. By default, when driver=None is not supplied, 'ODBC Driver 13 for SQL Server' is used:
driver = 'ODBC Driver 13 for SQL Server'
db.set_driver(driver)
Logging/debug output in the terminal can be enabled simply by adding the line below to your code (this option is disabled by default):
db.set_debugging(activated=True)
Examples of the SQL data-manipulation methodsSELECT:
query = '__select * from [...]__'
resultado = db.select(query)
INSERT:
query = '__insert [...] (__campos__) values [...] or select [...]__'
db.insert(query)
DELETE:
query = '__delete [...] from [...]__'
db.delete(query)
UPDATE:
query = '__update [...] set [...]__'
db.update(query)
STORED PROCEDURE:
params = '__parametro1__', '__parametro2__'
procedure = '[__banco__].[__schema__].[__procedure__]'
db.execute_sp(stored_procedure=procedure, param=(params))
Module: FileImporting the libraryImport it into your project's source code:
from almaviva.file import Files
Managing folders and files with PythonFirst create an instance of the object by running one of the snippets below.We can initialize the instance with a path whose files will be handled by this class:
folder = Files(tempfolder=r'C:\_automacao')
Or, if no folder path is given for the mapping, the code creates a folder with a random name under the operating system's temporary folder (%tmp%):
folder = Files(tempfolder=None)
print(folder.get_folder_path())
Later, we can point this same instance at another folder path by running:
folder.set_folder_path(tempfolder='__outra pasta__')
Searching and/or listing files in the directoryListing all files in the directory mapped by a given instance is easy; just run:
folder.list_files()
But to locate a specific folder or file we can use the following (wildcard characters can be used to match part of the file or folder name):
folder.search_path_or_file(name_or_pattern_to_search='*')
Deleting the files in the folderWe can run the code below to delete ALL files and folders that exist in the folder mapped by a given instance:
folder.delete_all_files()
It is also possible to delete a specific file by running:
folder.delete_file(file_name='__arquivo1.xlsb__')
Moving files from one directory to anotherTo move all files from a directory, just run the following (using to_folder=folder2.get_folder_path() from another instance is recommended, to avoid exceptions at run time):
folder.move_all_files(to_folder='__nova pasta__')
Moving a specific file from one directory to another (using file_name=folder.search_path_or_file('__arquivo1__.xls*')[0] from the same instance, and to_folder=folder2.get_folder_path() from another instance, is recommended, to avoid exceptions at run time):
folder.move_file(file_name='__arquivo1.xlsb__', to_folder='__nova pasta__')
Copying files from one directory to anotherTo copy all files from a directory, just run the following (using to_folder=folder2.get_folder_path() from another instance is recommended, to avoid exceptions at run time):
folder.copy_all_files(to_folder='__nova pasta__')
Copying a specific file from one directory to another (using file_name=folder.search_path_or_file('__arquivo1__.xls*')[0] from the same instance, and to_folder=folder2.get_folder_path() from another instance, is recommended, to avoid exceptions at run time):
folder.copy_file(file_name='__arquivo1.xlsb__', to_folder='__nova pasta__')
Module: IHMImporting the libraryImport it into your project's source code:
from almaviva.ihm import HUNames
Getting the machine name and the currently logged-in userFirst create an instance of the object by running:
huname = HUNames()
To get the currently logged-in user (username), run:
print(huname.get_username())
To get the machine name (hostname), run:
print(huname.get_hostname())
To get the Python version installed on the system, run:
print(huname.get_app())
Module: LOGGINGImporting the libraryImport it into your project's source code:
from almaviva.logging import Logging
Recording log eventsCreate an instance of the object before you start logging:
log = Logging()
To record a log event at the INFO level, just run:
log.info(
    description='__primeiro log__',
    step='__etapa1__'  # step is optional
    , complete=__True_or_False__  # complete is optional
)
The complete argument can be given as True or False; if omitted, the system always records the status as OK.
To record a log event at the WARN level, just run:
log.warn(
    description='__segundo log__',
    step='__etapa1__'  # step is optional
    , complete=__True_or_False__  # complete is optional
)
The complete argument can be given as True or False; if omitted, the system always records the status as OK.
To record a log event at the ERROR level, just run:
log.error(
    description='__terceiro log__',
    step='__etapa1__'  # step is optional
)
This function's status always comes back as OK, but flagged as ERROR.
Advanced options:The logging_in_db option was replaced in version 1.2.1 by logging_in_server, which uploads the text log record to the central NOC log-processing server.
log.set_options(
    # Default settings
    logging_in_file=True,
    debug_in_terminal=False,
    logging_in_server=False
)
By default the system creates a log file in the same folder the application is running from; to turn this off, disable the option: logging_in_file=False
All log records from INFO through ERROR can be shown in the terminal. To do this, enable the option: debug_in_terminal=True
Every record of the application's activity can be sent to the central log-processing server located at the central NOC. To do this, enable the option: logging_in_server=True
Module: WEBDRIVERImporting the libraryImport it into your project's source code:
from almaviva.webdriver import WebDriverChrome
from selenium.webdriver.common.by import By
Running Google Chrome in automated modeImportant notice! To use this library, the Chrome browser and the Chrome Driver must be installed.Initializing the WebDriver with a profile so the last session can be restored:
driver = WebDriverChrome(
    profile_location=r'profiles\profile1'  # profile_location: optional parameter
    , executable_path=r'drivers\chromedriver.exe'
)
Additional options for configuring the browser preferences:
download_dir = 'download_folder\\'  # important: end with two backslashes
preferences = {
    "profile.default_content_settings.popups": 0,
    "download.default_directory": str(download_dir),
    "download.prompt_for_download": False,
    "directory_upgrade": True
}
driver.add_experimental_option(prefs=preferences)  # Add custom preferences.
Additional advanced options that can be configured in the browser:
driver.set_options(new_argument='--disable-dev-shm-usage')  # Disable dev shared-memory features; saves memory
driver.set_options(new_argument='--disable-gpu')  # Save video processing
driver.set_options(new_argument='--no-sandbox')  # For testing beta versions of the application
driver.set_options(new_argument='--start-maximized')  # Maximize the window when the browser starts
driver.set_options(new_argument='--window-size=960,1080')  # Set width x height in pixels when the browser starts
driver.set_options(new_argument='--headless')  # Do not render the Google Chrome window
Now just start the browser and obtain the object used to locate and control the page elements, calling the previously configured instance's driver.set_browser(url):
browser = driver.set_browser(url=r'https://portaldocolaborador.almavivadobrasil.com.br')
Use commands like the example below to locate elements and control the web page; for more information, search the web for "selenium browser find_elements".
| # | By choices |
| - | ---------- |
| 1 | By.ID |
| 2 | By.XPATH |
| 3 | By.LINK_TEXT |
| 4 | By.PARTIAL_LINK_TEXT |
| 5 | By.NAME |
| 6 | By.TAG_NAME |
| 7 | By.CLASS_NAME |
| 8 | By.CSS_SELECTOR |
browser.find_element(by=By.XPATH, value='__x_path__')
|
almek
|
Readme.md created for creating the package.
|
almetro
|
Almetro Libraryversion number: 1.0.7
author: Arnour SabinoOverviewA python library to measure algorithms execution time and compare with its theoretical complexity.Installation / UsageTo install use pip:$ pip install almetroOr clone the repo:$ git clone https://github.com/arnour/almetro.git
$ python setup.py install

Information

Almetro uses the timeit module from Python to time your algorithms. See more here.

Examples

Applying Almetro to a quadratic algorithm:

import almetro
from almetro.algorithms import loop_n_quadratic
from almetro.complexity import cn_quadratic
from almetro.instance import growing

metro = almetro\
    .new()\
    .with_execution(trials=5)\
    .with_instances(instances=20, provider=growing(initial_size=100, growth_size=100))\
    .metro(algorithm=loop_n_quadratic, complexity=cn_quadratic)

chart = metro.chart()
chart.show()

Applying Almetro to a lg n algorithm:

import almetro
from almetro.algorithms import loop_n_log
from almetro.complexity import clog_n
from almetro.instance import growing

metro = almetro\
    .new()\
    .with_execution(trials=100)\
    .with_instances(instances=20, provider=growing(initial_size=10000, growth_size=10000))\
    .metro(algorithm=loop_n_log, complexity=clog_n)

chart = metro.chart()
chart.show()

Customizing execution:

import almetro
from almetro.complexity import Complexity
from almetro.instance import generator

my_custom_complexity = Complexity(
    theoretical=lambda v=1, e=1, c=1: v * v,
    experimental=lambda v=1, e=1, c=1: v + e,
    text='O(v^2)',
    latex=r'$\mathcal{O}(v^2)$'
)

# You need to provide instances as a dict: {'name': '', 'size': {}, 'value': {}}
# Size must contain all arguments needed by the theoretical complexity
# Value must contain all arguments needed by the algorithm
def my_custom_instances(n):
    g = create_some_graph()
    for _ in range(n):
        yield {
            'name': 'my instance name',
            'size': {'v': len(g.nodes()), 'e': len(g.edges())},
            'c': some_order_value(),
            'value': {'graph': g, 'v': len(g.nodes())}
        }

def my_custom_algorithm(graph, v):
    # Do some stuff
    pass

N = 50
instances_generator = my_custom_instances(N)

# Trials determine how many times each instance will be repeated for Almetro to pick the min time.
metro = almetro\
    .new()\
    .with_execution(trials=5)\
    .with_instances(instances=N, provider=generator(instances_generator))\
    .metro(algorithm=my_custom_algorithm, complexity=my_custom_complexity)

metro.chart().show()
metro.table().show()
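The core idea Almetro describes — timing an algorithm over growing instance sizes and keeping the minimum of several trials — can be sketched with the standard library alone. This is an illustration of the concept, not Almetro's API; `measure` and its parameters are invented for the example:

```python
import timeit

def loop_n_quadratic(n):
    # O(n^2) workload: a nested loop doing n * n constant-time steps
    total = 0
    for _ in range(n):
        for _ in range(n):
            total += 1
    return total

def measure(algorithm, sizes, trials=3):
    # For each instance size, run `trials` times and keep the minimum,
    # which filters out scheduling noise (the same rationale Almetro uses).
    return {n: min(timeit.repeat(lambda: algorithm(n), number=1, repeat=trials))
            for n in sizes}

timings = measure(loop_n_quadratic, sizes=[100, 200, 400])
```

Plotting these minima against the theoretical `c * n^2` curve is then what a `metro.chart()` call would visualize.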
|
almgren-chriss
|
This package provides functions for implementing the Almgren-Chriss model for optimal execution of portfolio transactions.What is the Almgren-Chriss Model?The Almgren-Chriss model is a mathematical model used in financial markets to determine the optimal way to execute large
orders. The model takes into account various factors such as the risk tolerance of the trader, the volatility of the
market, and the impact of the trade on the market price. The goal of the model is to minimize the expected cost of the
trade while taking into account the risk of price fluctuations.Functions in the PackageThe package provides the following functions:cost_expectation: Calculate the expected cost of trading.cost_variance: Calculate the variance of the cost of trading.decay_rate: Calculate the trade decay rate.trade_trajectory: Calculate the trading trajectory.trade_list: Calculate the list of trades.Each function takes various parameters including risk tolerance, interval between trades, volatility, permanent impact
slope, temporary impact slope, total number of shares, and trading duration.ExampleHere is an example of how to use the functions in the package:fromalmgren_chrissimporttrade_trajectory,trade_list,cost_expectation,cost_variancelambda_=2e-6tau=1sigma=0.95gamma=2.5e-7eta=2.5e-6epsilon=0.0625X=1e06T=5>>> trade_trajectory(lambda_, tau, sigma, gamma, eta, X, T)
array([1000000.0, 428598.84574702, 182932.81426177, 76295.72161546, 27643.37739691, 0.0])>>> trade_list(lambda_, tau, sigma, gamma, eta, X, T)
array([571401.15425298, 245666.03148525, 106637.09264631, 48652.34421856, 27643.37739691])>>> cost_expectation(lambda_, tau, sigma, gamma, eta, epsilon, X, T)
1140715.1670497851>>> import math
>>> math.sqrt(cost_variance(lambda_, tau, sigma, gamma, eta, X, T))
449367.65254135116
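The trajectory above follows the standard discrete-time Almgren-Chriss closed form. As a self-contained sketch of that published formula (not necessarily the package's exact code):

```python
import math

def ac_trajectory(lambda_, tau, sigma, gamma, eta, X, T):
    """Discrete-time Almgren-Chriss optimal holdings x_0..x_N (sketch)."""
    # Temporary impact adjusted for the permanent-impact slope
    eta_tilde = eta * (1 - gamma * tau / (2 * eta))
    # kappa solves cosh(kappa * tau) = 1 + lambda * sigma^2 * tau^2 / (2 * eta_tilde)
    kappa = math.acosh(lambda_ * sigma**2 * tau**2 / (2 * eta_tilde) + 1) / tau
    n_steps = round(T / tau)
    # x_k = X * sinh(kappa * (T - t_k)) / sinh(kappa * T); endpoints are X and 0 by construction
    return [X * math.sinh(kappa * (T - k * tau)) / math.sinh(kappa * T)
            for k in range(n_steps + 1)]

traj = ac_trajectory(2e-6, 1, 0.95, 2.5e-7, 2.5e-6, 1e06, 5)
```

The trade list is then just the successive differences of the trajectory, which by construction sum to the total position X.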
|
almheb
|
Helps you log in to Instagram.
|
alminer
|
ALminer: ALMA Archive Mining & Visualization Toolkitalmineris a Python-based code to effectively query, analyse, and visualize theALMA science archive. It also allows users to directly download ALMA data products and/or raw data for further image processing.InstallationThe easiest way to installalmineris withpip:pip install alminerTo obtain the most recent version of the code from Github:pip install https://github.com/emerge-erc/ALminer/archive/refs/heads/main.zipOr clone and install from source:# If you have a Github account:
git clone [email protected]:emerge-erc/ALminer.git
# If you do not:
git clone https://github.com/emerge-erc/ALminer.git
# After cloning:
cd ALminer
pip install .

Note that depending on your setup, you may need to use pip3.

Dependencies

The dependencies are numpy, matplotlib, pandas, pyvo, astropy version 3.1.2 or higher, and astroquery version 0.4.2.dev6649 or higher. We only use the astroquery package for downloading data from the ALMA archive. The strict requirement to have its newest version is due to recent changes made to the ALMA archive. alminer works in Python 3.

Getting started

We have created an extensive tutorial Jupyter Notebook where all alminer features have been highlighted. This is an excellent starting point to get familiar with all the possibilities; a glossary of all functions is provided at the bottom of this notebook. To work with the tutorial notebook interactively, we highly recommend working in a Jupyter notebook environment in order to make use of alminer's visualization tools. We aim to keep adding new notebooks relevant for various sub-fields in the future. Note that the Jupyter notebooks may be outdated. The most up-to-date information can be found on the documentation page.

Documentation

More information can be found in the documentation.

What's new:

- You can now specify which archive mirror to download data from: ESO is the default, and other options are NRAO and NAOJ. This option can be given through the 'archive_mirror' parameter in the download_data function.
- You can now specify which archive service to query: ESO is the default, and other options are NRAO and NAOJ. This option can be given through the 'tap_service' parameter to all functions that do the query (e.g. keysearch, target, catalog). For example: alminer.target(["TW Hya", "HL Tau"], tap_service='NRAO'). Note that currently the ESO service is not returning all results, hence it is advisable to test your queries with multiple services until further notice.
- It is now possible to query entire phrases with the keysearch function. For example: alminer.keysearch({'proposal_abstract': ['"high-mass star formation" outflow disk']}) will query the proposal abstracts for the phrase "high-mass star formation" AND the words outflow AND disk. alminer.keysearch({'proposal_abstract': ['"high-mass star formation" outflow disk', '"massive star formation" outflow disk']}) will query the proposal abstracts for the phrase "high-mass star formation" AND the words outflow AND disk, OR the phrase "massive star formation" AND the words outflow AND disk.

Acknowledgements

alminer has been developed through a collaboration between Allegro, the ALMA Regional Centre in The Netherlands, and the University of Vienna as part of the EMERGE-StG project. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 851435). If you use alminer as part of your research, please consider citing this ASCL article (ADS reference will be added to the Github page when available). alminer makes use of different routines in Astropy and Astroquery. Please also consider citing the following papers:

- Astropy: Astropy Collaboration et al. 2013
- Astroquery: Ginsburg et al. 2019

We also acknowledge the work of Leiden University M.Sc. students, Robin Mentel and David van Dop, who contributed to early versions of this work.

Contact us

If you encounter issues, please open an issue. If you have suggestions for improvement or would like to collaborate with us on this project, please e-mail Aida Ahmadi and Alvaro Hacar.
|
almir
|
Documentation:http://readthedocs.org/docs/almir/Changelog0.1.8 (2013-05-16)fix bug introduced in 0.1.5 that crashed almir while doing configuration
[Domen Kožar]0.1.7 (2013-03-27)[bug] Add also LICENSE to MANIFEST.in
[Domen Kožar]0.1.6 (2013-03-27)[bug] Add .ini to MANIFEST.in
[Domen Kožar]0.1.5 (2013-03-27)[feature] Refactor the package a bit, so it’s easier to package it for Linux distributions
[Domen Kožar][bug] Update MANIFEST.in so all files are included in release
[Domen Kožar][bug] Add new bootstrap.py and pin down zc.buildout version to avoid upgrading zc.buildout to 2.0
[Domen Kožar][feature] Add apache2 configuration example
[Iban]0.1.4 (2013/03/23)brownbag release0.1.3 (2012/08/27)[bug] upgraded docutils as it was failing buildout
[Domen Kožar]removed some dependencies on production, upgraded zc.buildout to 1.6.3 for faster installation
[Domen Kožar]determine version from distribution metadata
[Domen Kožar]0.1.2 (2012/05/31)[bug] interactive installer would swallow error message when SQL connection string was not formatted correctly[bug]: #7: don’t word wrap size columns[feature] add manual install steps[bug] #4: client detail page did not render if client had no successful backup[bug] #5: correctly parse scheduled jobs (choked on Admin job)[feature] use python2.7 or python2.6, otherwise abort installation0.1.1 (2012/04/18)[bug] fix support for postgresql 9.1
[Domen Kožar][feature] add reboot crontab for almir daemon
[Domen Kožar][bug] MySQL database size calculation was wrong, sometimes crashing the dashboard
[Domen Kožar][bug] console command list was not ordered and search box was not shown
[Domen Kožar][bug] bconsole did not accept non-ascii input
[Domen Kožar]0.1 (2012/04/06)Initial version
[Domen Kožar]
|
almirah
|
almiraha wardrobe for datasetsExplore the docs »Report Bug·Request FeatureAbout The ProjectClothes can be organized, maintained, and easily accessed from a
wardrobe.almirahserves the
same purpose for datasets.You can define the method to follow to stack your contents on the
almirah, and even tag.almirahhopes to make working with datasets easier.(back to top)AcknowledgmentsThanks to Upi and Bhalla lab members for their inputs and help.(back to top)
|
almltools
|
No description available on PyPI.
|
al-ml-tools
|
Failed to fetch description. HTTP Status Code: 404
|
almmf
|
Welcome to Our Parallel Programming Library

This project was developed as part of the Parallel Programming course by Group 8.

📚 Group 8 Members

- Castillon Gabriel, Maribel Jazmin
- Cuba Aquino, Camila Isabela
- Jara Nuñez, Jose Ignacio
- Mendoza Melo, Anthony Luis
- Rojas Rivera, Renzo Eduardo

🚀 About Our Library

Our library focuses on implementing parallel programming techniques to optimize common operations such as sums, maxima, minima, and averages, in order to speed up the analysis of large volumes of data.

✨ How to Contribute

We are open to contributions! If you have suggestions for improvements, fixes, or new features, feel free to open an issue or submit a pull request. Your contribution is very valuable to us.

📄 License

This project is licensed under the MIT License, which allows use, modification, and distribution under certain conditions.

📩 Contact

For more information or inquiries, do not hesitate to contact any of the group members. We appreciate your interest in our library and hope you find it useful.

Group 8 - Parallel Programming Course
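The package's own API is not documented here, but the kind of parallel reduction it describes (sum, max, min, average over chunks of data) can be sketched with the standard library. `parallel_stats` and its chunking scheme are illustrative assumptions, not the library's interface:

```python
from concurrent.futures import ThreadPoolExecutor

def _chunk_stats(chunk):
    # Per-chunk partial results that can later be combined
    return sum(chunk), max(chunk), min(chunk), len(chunk)

def parallel_stats(data, workers=4):
    # Split the data into interleaved chunks, process them concurrently,
    # then merge the partial sums/maxima/minima into global statistics.
    # (Threads keep the sketch portable; CPU-bound work would use processes.)
    chunks = [data[i::workers] for i in range(workers)]
    chunks = [c for c in chunks if c]  # drop empty chunks for short inputs
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(_chunk_stats, chunks))
    total = sum(p[0] for p in parts)
    count = sum(p[3] for p in parts)
    return {"sum": total,
            "max": max(p[1] for p in parts),
            "min": min(p[2] for p in parts),
            "mean": total / count}

stats = parallel_stats(list(range(1000)))  # stats["sum"] == 499500
```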
|
al-model-trainer
|
No description available on PyPI.
|
al-model-trainer-with-deps
|
No description available on PyPI.
|
almonaut
|
AlmonautAlmonaut(`Ahl-muh-naut`) is a Python library for interacting with
the Ex Libris Alma©API. It provides a number of methods
which facilitate handling Alma©API data in Python dot
notation.Almonautis built on two excellent Python libraries: theRequestsHTTP library andpydanticfor parsing and
validation.NoteThis is a new project under active development. Its API is subject to
change.Current StateAlma©API AreaReadWriteAcquisitions✔✖ (planned)Analytics✖ (planned)n/aBibliographic Records and Inventory✖ (planned)✖ (planned)Electronic Resources✔✖ (planned)Install Almonautpip install almonautImport Almonaut and instantiate an API clientfromalmonautimportclientalma_api_client=client.AlmaApiClient('a1b2c3myapikeyx1y2z3')search_query='name~classics'my_funds=alma_api_client.get_funds(limit=10,extra_params={'mode':'ALL','q':search_query})iflen(my_funds.funds)>0:forfundinmy_funds.funds:print(f"Name:{fund.name}")print(f"Type:{fund.type_.desc}")print(f"Status:{fund.status.desc}")print(f"Fiscal period:{fund.fiscal_period.desc}")Note:Substitute your own API key for the placeholder shown above.For more information, see thedocumentation.
|
almond
|
almondStatic Site Generator in Python with PandocFree software: BSD licenseDocumentation:https://almond.readthedocs.io.FeaturesTODOHistory0.1.0 (2016-10-06)First release on PyPI.
|
almonds
|
FeaturesFully fledged Mandelbrot viewer, in your terminalNow compatible with the native Windows console!Julia setsHomemade terminal UI8 color ANSI mode with dithering256 color modeParallelized usingmultiprocessingMultiple palettes, adaptive modeSave and load capabilitiesAvailable in standalone, source compatible with Python 2 & 3Infinite fun from the comfort of your terminalRunningUsing PIPJust run:$ pip install almonds
$ almondsOn non-Cygwin Windows, you will still have to install the unofficialcursesmodule (see “From source” below)From sourceClone the repo:$ git clone https://github.com/Tenchi2xh/Almonds.git
$ cd AlmondsOn OS X, Linux and Cygwin:$ pip install Pillow
$ python -m almonds.main(For Cygwin,minttyorbabunare recommended)On Windows, download thecursesmodule from theUnofficial Windows
Binaries for Python Extension
Packages(acursesimplementation for Windows based onPDCurses), then run:> pip install curses‑2.2‑cp27‑none‑win32.whl
> pip install Pillow
> python -m almonds.mainThe fontEnvy Code
R is highly recommended. If your terminal emulator supports it, try to
reduce the line spacing so that the box drawing characters touch. When
using another font, the appearance of the fractal may seem squashed
because the width-to-height ratio of the characters is different; try to
adjust it using the argument--ratio(see “Usage” below).Using PyPy will make the hi-res captures faster, but the terminal
navigation slower.Usage██
██ ██████ ██ .d8b. db db
██████████ d8' `8b 88 .88b d88. .d88b. .888b .d8888 .d8888
██ ██████████████ 88ooo88 88 88 88 88 8P Y8 88 88 88 88 `8bo.
████████████████████ 88 88 88 88 88 88 8b d8 88 88 88 8D `Y8b
██ ██████████████ YP YP YP YP YP YP `Y88P' VP VP Y888D' `8888Y
██████████
██ ██████ ██ T e r m i n a l f r a c t a l v i e w e r
██
usage: almonds [-h] [-p N] [-r RATIO | -d W H] [-z] [save]
version 1.20b
positional arguments:
save path of a save to load
optional arguments:
-h, --help show this help message and exit
-p N, --processes N number of concurrent processes
-r RATIO, --char-ratio RATIO width to height ratio of the terminal characters
-d W H, --dimensions W H width and height of the terminal characters
 -z, --qwertz         swap the "z" and "y" keys

Controls

| Keys | Action |
|------|--------|
| ↑, ↓, ←, → | Move around |
| C, V | Adjust move speed |
| ⏎ | Input manual coordinates |
| Y, U | Zoom / Un-zoom |
| I, O | Increase / Decrease number of iterations |
| J | Enter / Leave Julia set |
| P | Next palette |
| D | Color mode (256 colors / 8 colors ANSI / 8 colors ASCII) |
| R | Reverse palette order |
| A | Palette mode (Normal / Adaptive) |
| Z | Launch palette cycling animation |
| H | Capture current view in a high-resolution PNG file |
| X | Show / Hide crosshairs |
| T | Toggle UI theme (Dark / Light) |
| S | Save all current settings and view |
| L | Load a previous save |
| ESC, CTRL+C | Exit |

Screenshots & Renders

See on the GitHub Project Page
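The heart of any Mandelbrot viewer is the escape-time iteration; below is a minimal sketch of that core loop plus an ASCII shading step. This is illustrative only — not Almonds' actual code; `shade` and its palette are invented for the example:

```python
def escape_time(c, max_iter=100):
    # Iterate z -> z*z + c; return how many steps until |z| escapes radius 2.
    # Points that never escape within max_iter are treated as inside the set.
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

# Map iteration counts onto a small ASCII palette, roughly how an
# 8-color ASCII mode might pick a character per cell.
PALETTE = " .:-=+*#%@"

def shade(c, max_iter=90):
    idx = escape_time(c, max_iter) * len(PALETTE) // max_iter
    return PALETTE[min(idx, len(PALETTE) - 1)]
```

A full viewer then evaluates `shade` over a grid of complex points covering the current viewport.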
|
almost
|
A helper for approximate comparison.from almost import almost
def test_repeating_decimal():
assert almost(1 / 3.) == 0.333
assert almost(1 / 6.) == 0.167
assert almost(3227 / 555., precision=6) == 5.814414
def test_irrational_number():
import math
assert almost(math.pi) == 3.142
assert almost(math.sqrt(2)) == 1.414
def test_random_text():
import random
def gen_text_with_prefix(prefix):
return prefix + str(random.random())[:-5]
assert almost(gen_text_with_prefix('@')) == '@...'LinksGitHub repositorydevelopment version
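The numeric comparisons above can be approximated with plain rounding; here is a minimal sketch of the idea (`almost_eq` is a stand-in, not the library's actual implementation, which also supports text patterns like '@...'):

```python
def almost_eq(value, expected, precision=3):
    # Compare a float after rounding it to `precision` decimal places,
    # mirroring the default 3-digit behaviour shown in the examples above.
    return round(value, precision) == expected
```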
|
almost-make
|
AlmostMakeA pure-python, not-quite-POSIX-compliant implementation of make.Sample Supported Makefile(s)This example consists of a lightly-edited set of files from AlmostMake's tests.macroDefinitions.mk# Use the mini-shell built into AlmostMakeexport_BUILTIN_SHELL:=1export_CUSTOM_BASE_COMMANDS:=1CC=clangCFLAGS=TEST_MACRO=Testing1234=:=:=This**should**work!# A comment!EXEC_PROGRAM=SEND_MACROS:=EXEC_PROGRAM=$(EXEC_PROGRAM)CC=$(CC)CFLAGS=$(CFLAGS)TEST_MACRO="$(TEST_MACRO)"# Note: '=' defers expansion. ':=' does not.exportMAKEFLAGS:=$(MAKEFLAGS)$(SEND_MACROS)Makefile# To be run with AlmostMake.include *.mkall:testSimpletestPhonytestMacrostestRecursiontestParalleltestMisctest%:$(MAKE)-C$@clean$(MAKE)-C$@check$(MAKE)[email protected]:testSimpletestPhonytestMacrostestRecursiontestParalleltestMisctestSimple/Makefile.POSIX:all:# Note: As of v0.0.19, chmod is not built-in.check:allchmodu+xmain$(EXEC_PROGRAM)./main|grepPASSall:mainclean:-rm-fmain.o-rm-fmainmain:main.o$(CC)main.c-omain.SUFFIXES:.c.o.c.o:$(CC)$(CFLAGS)-c$<-o$@UsageAlmostMake comes with thealmakeandalmake_shellcommand-line utilities. Let's see how to use them!almakeRunningalmakein a directory with a file namedMakefilecausesalmaketo satisfy the first target defined in that file.For example, sayMakefilecontains the following:# A makefile!# This is the first target.# (Pretend `echo 'Hello, world'`# is indented with a single tab)firstTarget:echo'Hello, world'# firstTarget isn't the name of a real file!# Mark it as PHONY. We need this because if# firstTarget were to be a file in the same# folder as Makefile, its existence (and lack# of newer dependencies) would cause `almake`# to do nothing!.PHONY:firstTargetalmakethen runs the commands associated with firstTarget. Each line is given its own shell.Additional options are documented throughalmake's helptext:$almake--help
Help:

Summary: Satisfy dependencies of a target in a makefile. This parser is not quite POSIX-compliant, but should be able to parse simple makefiles.

Usage: almake [targets...] [options]

where each target in targets is a valid target and options include:

 -h, --help             Print this message.
 --version              Print version and licensing information.
 --file                 File to parse (default is Makefile).
 -k                     Keep going if errors are encountered.
 -n, --just-print       Just print commands to be run, without evaluating (print commands, don't send them to the shell). Be aware that $(shell ...) macros are still evaluated. This option only applies to individual commands.
 -p                     Rather than finding targets, print the makefile, with top-level targets expanded.
 -C dir                 Switch to directory, dir, before running make.
 -w, --print-directory  Print the current directory before and after running make.
 -j, --jobs             Maximum number of jobs (e.g. almake -j 8).
 -s, --silent           In most cases, don't print output.
 -b, --built-in-shell   Use the built-in shell for commands in the makefile. This can also be enabled as follows:
                          export _BUILTIN_SHELL := 1        # Use the built-in shell instead of the system shell.
                          export _CUSTOM_BASE_COMMANDS := 1 # Enable built-in overrides for several commands like ls, echo, cat, grep, and pwd.
                          export _SYSTEM_SHELL_PIPES := 1   # Send commands that seem related to pipes (e.g. ls | less) directly to the system's shell.

Note: AlmostMake's built-in shell is currently very limited.

Note: Macro definitions that override those from the environment can be provided in addition to targets and options. For example,

 make target1 target2 target3 CC=gcc CFLAGS=-O3

should make target1, target2, and target3 with the macros CC and CFLAGS by default set to gcc and -O3, respectively.

Note: Options can also be given to almake through the environment. This is done through the MAKEFLAGS variable. For example, setting MAKEFLAGS to --built-in-shell causes almake to always use its built-in shell, rather than the system shell.

almake_shell

In addition to the almake command, the almake_shell command is available. This command gives access to an interactive version of the (very limited) shell built into AlmostMake.

Like almake, we get usage information as follows:

 $ almake_shell --help
 Help:

Summary: Run an interactive version of the shell built into almake. This is a POSIX-like shell. It is not POSIX-compliant.

Usage: almake_shell [options] [files...]

...where each filename in [files...] is an optional file to interpret. If files are given, interpret them before opening the shell.

Options include:

 -h, --help               Print this message.
 --version                Print version and licensing information.
 -B, --without-builtins   Do not (re)define built-in commands (like echo). By default, echo, ls, dir, pwd, and perhaps other commands, are defined and override any commands with the same name already present in the system.
 -p, --system-pipe        Rather than attempting to pipe output between commands (e.g. in ls | grep foo), send piped portions of the input to the system's shell.

The almost_make Python module

AlmostMake also makes available the almost_make module! Documentation on this is coming, but for now, check out the source on GitHub!

Installation

From PyPI...

AlmostMake is on the Python Package Index! To install it, run:

 $ python3 -m pip install almost-make

To update it,

 $ python3 -m pip install --upgrade almost-make

From GitHub...

As AlmostMake is hosted on GitHub, it can be installed by cloning:

 $ git clone https://github.com/personalizedrefrigerator/AlmostMake.git
 $ cd AlmostMake
 $ make install

You may also need to install setuptools, wheel, and twine. See Packaging Python Projects for a brief overview of these packages. They can be installed as follows:

 $ python3 -m pip install --user --upgrade setuptools wheel twine

Notable Missing Features

At present, AlmostMake does not support the following notable features.

In almake:
- $(shell ...) that can use almake_shell
- BSDMake-style conditionals
- BSDMake-style .include < ... > includes
- Defining recipes via a:: b and a! b.
- Pre-defined recipes (e.g. .o from .c)

In almake_shell/built-in shell:
- if statements, loops, functions.
- Built-in chmod

Testing

To test AlmostMake, run,

 $ make test

Note, however, that make test depends on make install.

Supported Platforms

At present, it has been tested on the following platforms:
- Ubuntu with Python 3.8, AlmostMake v0.2.0. All tests pass.
- Debian with Python 3.7, older AlmostMake. All tests pass.
- iOS via a-Shell, AlmostMake v0.19.0. Failing tests.

If you find that AlmostMake works on a platform not listed here, please consider creating an issue and/or submitting a pull request to update the list of supported platforms and versions of Python!

If AlmostMake doesn't work for you, you may wish to try PyMake. This package appears to support a wider range of Python versions and platforms, but may have fewer features.
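The macro behaviour described above — $(CC)-style references, with '=' deferring expansion so macros can refer to other macros — can be illustrated with a toy expander. This is an illustration of the concept only, not almake's implementation; `expand` is invented for the example:

```python
import re

_MACRO = re.compile(r'\$\((\w+)\)')

def expand(text, macros):
    # Substitute $(NAME) repeatedly until the text stops changing, so a macro
    # whose value mentions another macro resolves fully (as with deferred '='
    # definitions). Undefined names expand to the empty string, as in make.
    prev = None
    while prev != text:
        prev = text
        text = _MACRO.sub(lambda m: macros.get(m.group(1), ''), text)
    return text
```

For example, `expand("$(CC) $(CFLAGS) -c main.c", {"CC": "clang", "CFLAGS": "-O2"})` yields a fully expanded command line.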
|
almost-p2p-chat
|
No description available on PyPI.
|
almost-p2p-messenget
|
No description available on PyPI.
|
almost_requested
|
Almost RequestedThis module provides a simple wrapper aroundrequeststo make accessing
APIs a little more simple.You start off by providing a requests.session, pre-configured with the required headers, then AlmostRequested will
construct the URL you are accessing by taking the dotted path of the AlmostRequested instance, converting it into
a URL, and returning a new AlmostRequested object, ready to call get, put, post or delete on, which themselves return
arequests.Response.classGithub(AlmostRequested):def__init__(self)->None:s=requests.session()self.GITHUB_TOKEN=os.environ["GITHUB_TOKEN"]s.headers["Authorization"]=f"Bearer{self.GITHUB_TOKEN}"s.headers["Content-Type"]="application/vnd.api+json"base_url="https://api.github.com"super().__init__(s,base_url)defget_asset(self,asset:dict[str,str])->None:s=requests.session()s.headers["Authorization"]=f"Bearer{self.GITHUB_TOKEN}"s.headers["Accept"]="application/octet-stream"ifos.path.exists(asset["name"]):returnprint("downloading",asset["name"],asset["url"])withs.get(url=asset["url"],stream=True)asr:r.raise_for_status()withopen(asset["name"],"wb")asf:forchunkinr.iter_content(chunk_size=8192):f.write(chunk)defmain():gh=Github()splitpolicies=gh.repos.octoenergy.terraform_provider_splitpolicies# Get Releases from https://github.com/octoenergy/terraform-provider-splitpolicies - by default# we convert underscores to dashes in pathsreleases=splitpolicies.releases.latest.get()# pretty print the JSON decoded responsereleases=splitpolicies.releases.latest.get(print_json=True)# Insert something into the URL that we can't type as a python variable namegh.a("some$endpoint").get()# Will raise an exception - this endpoint doesn't exist# The above is the same asgh.append("some$endpoint").get()# Won't work, but with clearer code# Avoid URL magic, just want the headers and response code checkinggh.get(url="https://github.com/octoenergy/terraform-provider-splitpolicies").get()
|
almoststatic
|
AlmoststaticAlmoststaticis a static sites and web pages generator engine written inPythonwhich utilizes theJinja2 template systemto render pages.
It can be integrated withFlaskapps to
serve static contents on dynamic sites or used standalone for static sites
development.Pages are declared inYAMLfiles and rendered with Jinja2
template files. The HTML contents can be written inMarkdownmarkup language or in plain HTML. The “content” folder contains all data needed
to do the job, and the “config.yaml” is used to share global parameters and to
set up configuration.It is loosely inspired byHugostatic site generator, but
it differs in many ways.Why Almoststatic?There are many static site generators such asNext.js,HugoorJekyll, but you can
prefer Almoststatic because:It's perfect for pythonists.It uses Jinja2 and Flask, which are widely
used by python community, so you don't need to learn other programming
languages or template engines.It's easy!In fact the rules are very few and this mean few things to
learn and great flexibility.It's versatile.Its engine has a powerful recursive system for embedding
and including contents. This helps you to build rich contents and also to
split them into small pieces called "widgets" that are easier to maintain.It has blog capabilities.It contains functions to query metadata info
useful to organize blog pages by categories and tags.You can deliver static and dynamic contents at same time.With Flask you
can build your dynamic content and let Almoststatic to render the rest of
page or the whole page, if it is full static.Write static sites.Static sites are composed only by text files and media
contents. They are easy to deliver on the web, are secure by design, require
less maintenance and resources and are faster. If you have no need for dynamic
contents, withAlmoststaticyou can write all pages as static.Not only for pythonists.Basic knowledge of python is needed, but once
your developer environment is ready, Almoststatic lets you focus on
writing YAML and Markdown contents and on creating your own widgets in HTML
and Jinja2.Quick startThe simplest way to see ifAlmoststaticis what you're looking for, is to
try the sample provided with source code package and explore the source code.The following tutorial is tested on Linux Ubuntu, but it's easy to port on
other platforms such as other Linux distros, Windows or Mac.You need git and python3; install them with:$sudoaptinstallgitpython3Recent Linux releases come with python3 already installed, and you only have to
check if your version is 3.6 or higher:$python3--versionNow clone Almoststatic with git and enter into the directory:$gitclonehttps://gitlab.com/claudio.driussi/almoststatic.git
$cdalmoststaticIt's common in Python to use a virtualenv for development; to do so and to install
Almoststatic, write:$python3-mvenvmyvenv
$sourcemyvenv/bin/activate
$pipinstallalmoststatic
$pipinstallpyftpsyncDone! Now you can try the sample, cd into sample directory and run flaskapp.py:$cdsample/
$pythonflaskapp.pyA localhost Flask app server is started and you can test
Almoststatic features.Open your browser and copy and paste the following URL:http://127.0.0.1:5000/orlocalhost:5000You will be redirected to the static site showing some widgets and features
of Almoststatic; you can navigate to see some examples.You can always exit the server by pressing CTRL-C in your terminal.To build the static site, run:$pythonwrite_static.py
$ls-l../_static_site/As you can see, your pages are written as *.html files, but this is not enough
to get a really static site; to do this you have to tune the writing parameters and
copy the media files to the appropriate location. When you are done, the site can be
published as a static site.If you wish, you can run the tests:$cd../test
$pythonas_test.pyThis runs some tests and writes a simpler static site.Now, if you decide thatAlmoststaticis right for you, you can dig into
source code of the sample and tests and read the documentation at:https://almoststatic.readthedocs.ioStatus of projectAlmoststatic is young but stable! It has all planned features and has always given me
the right results, so it can be considered "production ready".At the moment there are only a few themes; we are developing a "rolling theme"
with some beautiful widgets ready to use.
SeeFlatstep themeand follow the tutorial.I'm not a designer, so the result is of average quality. But I'm sure that good
designers can write great themes.DonateIf you appreciate Almoststatic, you can make a donation via PayPal
|
almost-unique-id
|
almost-unique-idGet almost-unique ID names. This package randomly selects anadjective-Namepair from 24,567,933 unique pairs.These IDs serve much the same purpose as generating a random hex string, that is, to get random names for things that are (extremely) unlikely to overlap. This project was developed alongside several scientific research projects where many trials of an experiment needed to be named distinctly. We find that English-language keys are easier to discuss and remember than some number of digits/letters that might be unpronounceable.Feel free to use this for any such application. If you have a use for this that you think is worth mentioning in the README, feel free to open a PR and add it to the description.InstallationThis package is installable with pip:$ pip install almost-unique-idOr, you can install it from source by git cloning it and installing with pip as follows.$ git clone https://github.com/aks2203/almost-unique-id.git
$ cd almost-unique-id
$ pip install -e .Examplefrom almost_unique_id import generate_id
run_id = generate_id()Sources:The names and adjectives are copied from other sources. An effort was made to find a list without explicit content.The names are copied from this file.The adjectives are copied from this file.Development and ContributionWe believe in open-source, community-driven software development. Please open issues and pull requests with any questions or improvements you have.
|
almpy
|
Geo package that generates a base model and performs geological shifts.
|
alms
|
# ALMS - ALMS Library Management System#####[Currently Being Hosted At [Nepal Japan Children Library](https://www.facebook.com/Nepal-Japan-Children-Library-117390398337579), Lainchaur, Kathmandu]ALMS is a Django Based Library Management System###__What Can ALMS Do?__* Catalog Books* Manage Members, volunteers and librarians* Lend books, return books* Take care of your accounting journal, ledger and trial balances* Search For Books based on all their properties###__How To Install ALMS__To install on a debian system, run this :wget -qO- https://raw.githubusercontent.com/ayys/alms/master/install.sh | shThe above code will start the server and run it in the default browser.To start the server again, simply go to the `alms` directory and run start.sh###__Programmer's Documentation__ is on our [wiki page](https://github.com/ayys/alms/wiki)
|
alm.solrindex
|
Introduction

SolrIndex is a product for Plone/Zope that provides enhanced searching capabilities by leveraging Solr, the popular open source enterprise search platform from the Apache Lucene project. It is compatible with Plone 4 and Plone 5.

Out of the box, SolrIndex brings in more relevant search results by replacing Plone’s default full-text indexing with Solr-based search features, and including the ability to assign weights to certain fields.

Leveraging Solr’s advanced search algorithms, SolrIndex comes with exciting features, such as the ability to use stopwords and synonyms. Stopwords allow control over which words the search mechanism should ignore, and synonyms make it possible to extend a query by including additional matches.

SolrIndex also comes with blazing fast and highly scalable search capabilities. SolrIndex is extensible by design, which means it has the ability to integrate with other indexes and catalogs. This is good news for sites that need to provide search capabilities across multiple repositories.

With additional customization, SolrIndex also has the ability to provide faceted search, highlighting of query terms, spelling suggestions and “more like this” suggestions.

Thanks to SolrIndex, Plone and Zope-powered sites now benefit from truly enterprise search capabilities.

Useful Links

Solr: http://lucene.apache.org/solr/
PyPI: http://pypi.python.org/pypi/alm.solrindex
issue tracker: https://github.com/collective/alm.solrindex/issues
git repository: https://github.com/collective/alm.solrindex

Special Thanks

Six Feet Up would especially like to thank Shane Hathaway for his key contribution to SolrIndex.

Detailed Documentation

Installation

Include this package in your Zope 2 or Plone buildout. If you are using the plone.recipe.zope2instance recipe, add alm.solrindex to the eggs parameter and the zcml parameter. See the buildout.cfg in this package for an example. The example also shows how to use the collective.recipe.solrinstance recipe to build a working Solr instance with little extra effort.

Once Zope is running with this package installed, you can visit a ZCatalog and add SolrIndex as an index. You should only add one SolrIndex to a ZCatalog, but a single SolrIndex can take the place of multiple ZCatalog indexes.

The Solr Schema

Configure the Solr schema to store an integer unique key. Add fields
with names matching the attributes of objects you want to index in Solr.
You should avoid creating a Solr field that will index the same data
as what will be indexed in ZODB by another ZCatalog index. In other
words, if you add a Description field to Solr, you probably ought to remove the index named Description from ZCatalog, so that you don’t force your system to index descriptions twice.

Once the SolrIndex is installed, you can query all of the fields described by the Solr schema, even if there is no ZCatalog index with a matching name. For example, if you have configured a Description field in the Solr schema, then you can issue catalog queries against the Description field using the same syntax you would use with other ZCatalog indexes. For example:

    results = portal.portal_catalog(Description={'query': 'waldo'})

Queries of this form pass through a configurable translation layer made
of field handler objects. When you need more flexibility than the field
handlers provide, you can either write your own field handlers (see the
“Writing Your Own Field Handlers” section) or you can provide Solr
parameters that do not get translated (see the “Translucent Solr
Queries” section).

Translucent Solr Queries

You can issue a Solr query through a ZCatalog containing a SolrIndex by providing a solr_params dictionary in the ZCatalog query. For example, if you have a SolrIndex installed in portal_catalog, this call will query Solr:

    results = portal.portal_catalog(solr_params={'q': 'waldo'})

The SolrIndex in the catalog will issue the query parameters specified in solr_params to Solr. Each parameter value can be a string (including unicode) or a list of strings. If you provide query parameters for other Solr fields, the parameters passed to Solr will be mixed with parameters generated for the other fields. Note that Solr requires some value for the ‘q’ parameter, so if you provide Solr parameters but no value for ‘q’, SolrIndex will supply ‘*:*’ as the value for ‘q’.

Solr will return to the SolrIndex a list of matching document IDs and
scores, then the SolrIndex will pass the document IDs and scores to
ZCatalog, then ZCatalog will intersect the document IDs with results
from other indexes. Finally, ZCatalog will return a sorted list of
result objects (“brain” objects) to application code.

If you need access to the Solr response object, provide a solr_callback function in the catalog query. After Solr sends its response, the SolrIndex will call the callback function with the parsed Solr response object. The response object conforms with the documentation of the solrpy package.

Highlighting

Highlighting data may be requested for any field marked as stored in the Solr schema. To enable this feature, pass a highlight value of either True, or a list of field names to highlight. A value of queried will cause Solr to return highlighting data for the list of queried columns. If you pass in a sequence of field names, the requested highlighting data will be limited to that list. You can also enable it by default in your Solr config file. If you do enable it by default in the config file, but don’t want it for a particular query, you must pass hl:off in solr_params.

The retrieved data is stored in the highlighting attribute on the returned brain. To use the custom HighlightingBrain, the index needs to be able to connect to its parent catalog. The code attempts to retrieve a named utility for this, and will attempt to use Acquisition to find the id of its immediate parent. Failing that, it defaults to using portal_catalog. If the code cannot determine the name of your catalog automatically and you want to use highlighting, you will need to change the catalog_name property of the SolrIndex to reflect the correct value.

To retrieve the highlighting data, the brain will have a getHighlighting method. By default, this is set to return the highlighting data for all fields in a single list. You can limit this to specific fields, and change the return format to a dictionary keyed on field name by passing combine_fields=False.

Example:

    results = portal.portal_catalog(SearchableText='lincoln',
                                    solr_params={'highlight': True})

    results[0].getHighlighting()
    [u'<em>lincoln</em>-collections <em>Lincoln</em> ',
     u'The collection of <em>Lincoln</em> plates']

    results[0].getHighlighting(combine_fields=False)
    {'SearchableText': [u'<em>lincoln</em>-collections <em>Lincoln</em> '],
     'Description': [u'The collection of <em>Lincoln</em> plates']}

    results[0].getHighlighting('Description')
    [u'The collection of <em>Lincoln</em> plates']

    results[0].getHighlighting('Description', combine_fields=False)
    {'Description': [u'The collection of <em>Lincoln</em> plates']}
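The combine_fields behaviour shown in the example above can be sketched in plain Python. This is a hypothetical standalone helper illustrating the two return shapes, not the actual HighlightingBrain implementation:

```python
# Hypothetical highlighting data for one document, keyed on field name,
# shaped like the per-document entries Solr returns.
highlighting = {
    "SearchableText": ["<em>lincoln</em>-collections <em>Lincoln</em> "],
    "Description": ["The collection of <em>Lincoln</em> plates"],
}

def get_highlighting(data, fields=None, combine_fields=True):
    """Mimic getHighlighting(): one flat list, or a dict keyed on field."""
    if fields is None:
        selected = data
    else:
        if isinstance(fields, str):
            fields = [fields]
        selected = {f: data[f] for f in fields if f in data}
    if combine_fields:
        # Flatten all snippets into a single list.
        return [snippet for snippets in selected.values() for snippet in snippets]
    return selected
```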
several other settings can all be tweaked in your Solr config.http://wiki.apache.org/solr/HighlightingParametersEncodingAll data submitted to Solr for indexing or as a query must be encoded as
UTF-8. To this end, the SolrIndex has anexpected_encodingslines
property that details the list of encodings for it to try to decode data
from before transcoding to UTF-8. If you submit data to be indexed or
queries with strings in a different encoding, you need to add that
encoding to this list, before UTF-8.http://wiki.apache.org/solr/FAQ#Why_don.27t_International_Characters_Work.3FSortingSolrIndex only provides document IDs and scores, while ZCatalog retains
the responsibility for sorting the results. To sort the results from a
query involving SolrIndex, use the sort_on parameter like you normally would with ZCatalog. At this time, you cannot use a SolrIndex as the index to sort on, but that could change in the future.

Writing Your Own Field Handlers

Field handlers serve two functions. They parse object attributes for
indexing, and they translate field-specific catalog queries to Solr
queries. They are registered as utilities, so you can write your own
handlers and register them using ZCML.

To determine the field handler for a Solr field, alm.solrindex first looks for an ISolrFieldHandler utility with a name matching the field name. If it doesn’t find one, it looks for an ISolrFieldHandler utility with a name matching the name of the Java class that handles the field in Solr. If that also fails, it retrieves the ISolrFieldHandler with no name.

See the documentation of the ISolrFieldHandler interface and the examples in handlers.py.

Integration with ZCatalog

One SolrIndex can take the place of several ZCatalog indexes. In theory, you could replace all of the catalog indexes with just a single SolrIndex. Don’t do that yet, though, because this package needs more maturity before it’s ready to take on that many responsibilities.

Furthermore, replacing all ZCatalog indexes might not be the right
goal. ZCatalog indexes are underappreciated. ZCatalog indexes are built on the excellent transaction-aware object cache provided by ZODB. This gives them certain inherent performance advantages over network-bound search engines like Solr. Any communication with Solr incurs a delay on the order of a millisecond, while a ZCatalog index can often answer a query in a few microseconds. ZCatalog indexes also simplify cluster design. The ZODB cache allows cluster nodes to perform searches without relying on a large central search engine.

Where ZCatalog indexes currently fall short, however, is in the realm of indexing text. None of the text indexes available for ZCatalog match the features and performance of text search engines like Solr.

Therefore, one good way to use this package is to move all text indexes to Solr. That way, queries that don’t need the text engine will avoid the expense of invoking Solr. You can also move other kinds of indexes to Solr.

How This Package Maintains Persistent Connections

This package uses a new method of maintaining an external database connection from a ZODB object. Previous approaches included storing _v_ (volatile) attributes, keeping connections in a thread local variable, and reusing the multi-database support inside ZODB, but those approaches each have significant drawbacks.

The new method is to add a dictionary called foreign_connections to the ZODB Connection object (the _p_jar attribute of any persisted object). Each key in the dictionary is the OID of the object that needs to maintain a persistent connection. Each value is an implementation-dependent database connection or connection wrapper. If it is possible to write to the external database, the database connection or connection wrapper should implement the IDataManager interface so that it can be included in transaction commit or abort.

When a SolrIndex needs a connection to Solr, it first looks in the foreign_connections dictionary to see if a connection has already been made. If no connection has been made, the SolrIndex makes the connection immediately. Each ZODB connection has its own foreign_connections attribute, so database connections are not shared by concurrent threads, making this a thread-safe solution.

This solution is better than _v_ attributes because connections will not be dropped due to ordinary object deactivation. This solution is better than thread local variables because it allows the object database to hold any number of external connections and it does not break when you pass control between threads. This solution is better than using multi-database support because participants in a multi-database are required to fulfill a complex contract that is irrelevant to databases other than ZODB.

Other packages that maintain an external database connection should try out this scheme to see if it improves reliability or readability. Other packages should use the same ZODB Connection attribute name, foreign_connections, which should not cause any clashes, since OIDs cannot be shared.

An implementation note: when ZODB objects are first created, they are not stored in any database, so there is no simple way for the object to get a foreign_connections dictionary. During that time, one way to hold a database connection is to temporarily fall back to the volatile attribute solution. That is what SolrIndex does (see the _v_temp_cm attribute).

Troubleshooting

If the Solr index is preventing you from accessing Zope for some reason, you can set DISABLE_SOLR=YES in the environment, causing the SolrIndex class to bypass Solr for all queries and updates.

Changelog

1.2.0 (2016-10-15)

Fix typo in solrpycore.
[davidblewett]

Thanks to: “Schorr, Dr. Thomas” <[email protected]> for the following encoding fixes, refs ticket #1:

Added an expected_encodings property to SolrIndex that lists the encodings to expect text in; each is tried in turn to decode each parameter sent to Solr. If none succeeds in decoding the text, we fall back to UTF8 and replace failing characters. http://wiki.apache.org/solr/FAQ#Why_don.27t_International_Characters_Work.3F
[davidblewett]

Added an _encode_param method to SolrIndex to encode a given string to UTF8.
[davidblewett]

Modified SolrIndex’s _apply_index to send all parameters through the _encode_param method.
[davidblewett]

Added a test__apply_index_with_unicode to ensure unicode queries are handled correctly.
[davidblewett]

Initial highlighting support:

Imported getToolByName from Products.CMFCore, to be used on import failure.

Updated SolrIndex to pass any fields from the Solr schema that have stored=True to be highlighted.

Updated SolrIndex to store highlighting data returned from Solr in a _highlighting attribute.

Added a HighlightingBrain class that subclasses AbstractCatalogBrain and looks up the highlighted data in SolrIndex.

Added a test__apply_index_with_highlighting test; unfortunately, calling the portal_catalog is not working in the tests currently.
[davidblewett]

Fixed: IIBTree needs integer keys. http://plone.org/products/alm.solrindex/issues/3
[thomasdesvenain]

Quick Plone 4 compatibility fixes
[thomasdesvenain]

Search using the ZCTextIndex ‘*’ key character works with alm.solrindex. Makes livesearch work with SolrIndex as the SearchableText index.
[thomasdesvenain]

Highlighting is not activated by default because there can be severe performance issues. Pass the ‘highlight’ parameter in solr_params to force it, and pass ‘queried’ as the ‘highlight’ value to force highlighting on queried fields only.
[thomasdesvenain]

Improved unicode handling to correctly handle dictionaries passed in as a field search, in SolrIndex._decode_param.
[davidblewett]Extended ZCTextIndex support when a dictionary is passed in as a field search.
[davidblewett]Update test setup so that it is testing against Solr 1.4
[claytron]

Handle empty dismax queries, since a *:* value for q is not interpreted by the dismax query handler and returns no results rather than all results.
[claytron]

Add uninstall profile, restoring the default Plone indexes.
[thet]Give the SolrIndex a meta_type ‘SolrIndex’ and register
ATSimpleStringCriterion for it, otherwise Collections cannot add
SearchableText criteria.
[maurits]Ensure that only one ‘q’ parameter is sent to Solr.
[claytron]Plone 4.1 compatibility.
[timo]Add missing elementtree import
[saily]

Fix stale cached highlighting information that led to inconsistent results.
[nrb]Plone 4.3 compatibility.
[cguardia]Add support for solr.TrieDateField
[mjpieters]Fix decoding of query requests so that lists are not stringified
before getting sent to field handlers.
[davisagli]Implement getIndexQueryNames which is now part of IPluggableIndex.
[davisagli]Add support for range queries to the DateFieldHandler.
[davisagli]Don’t turn wildcard queries into fuzzy queries.
[davisagli]Confirm compatibility with Plone 5
[witekdev, davisagli]

1.1.1 (2010-11-04)

Fix up links to issue tracker and Plone product page
[clayton]

1.1 (2010-10-12)

Added z3c.autoinclude support for Plone
[claytron]

1.0 (2010-05-27)

Initial public release

Clean up docs in prep for release.
[claytron]

Fix up reST errors.
[claytron]

0.14 (2010-05-11)

Updated SolrConnectionManager to have a dummy savepoint implementation, refs #2451.
[davidb]

0.13 (2010-03-01)

Commit to clean up version numbers

0.12 (2010-03-01)

PEP8 cleanup
[clayton]

0.11 (2009-11-27)

A commit after an aborted index update no longer breaks with an assertion error. Refs #1340

0.10 (2009-10-15)

Filter out invalid XML characters from indexed documents.

0.9 (2009-10-14)

Fixed test failure by going to the login_form to log in, instead of
the front page, where we get ambiguity errors.
[maurits]Fixed the catalog object information page. Solr was unable to parse
a negative number in the query.

0.8 (2009-09-18)

Added support for Solr boolean fields.

GenericSetup profiles now have the option of clearing the index.

Made the waituri script wait up to 90 seconds by default, pause a little more between polls, and accept a timeout parameter.

0.7 (2009-09-13)

The Solr URI can now be provided by an environment variable, so that catalog.xml does not need to hard-code the URI.

0.6 (2009-09-11)

Added narrative documentation.

Don’t clear the index when running GenericSetup. Clearing indexes turns out to be a long-standing problem with GenericSetup; in this case the easy solution is to just not clear it.

0.5 (2009-09-10)

Added a script that waits for Solr to start up.

Brought in a private copy of solrpy to fix some bugs:

The connection retry code reconnected, but wasn’t actually retrying the request.

The raw_query method should not assume the parameter values are unicode (they could be lists of unicode).

0.4 (2009-09-10)

Purge Solr when importing a SolrIndex via GenericSetup.

0.3 (2009-09-10)

Made field handlers more flexible. Now they can add any kind of query parameter to the Solr query.

The default field handler now generates “fq” parameters instead of “q” parameters. This seems to fit the intent of the Solr authors much better.

Renamed “solr_additional” to “solr_params”.

0.2 (2009-09-09)

Added a GenericSetup profile that replaces SearchableText with a SolrIndex.

Renamed the catalog parameter for passing extra args to Solr “solr_additional”. Also renamed the response callback parameter to “solr_callback”.

0.1 (2009-09-09)

First release
|
almvotes
|
No description available on PyPI.
|
almy
|
No description available on PyPI.
|
aln2pheno
|
|
alnair
|
Alnair

Alnair is a simple system configuration framework. It is also intended to be used in conjunction with Fabric (https://github.com/fabric/fabric).

Requirement

Python 2.6 and later (but does not work in 3.x)

Installation

from pypi:

    # using pip
    % pip install -U alnair
    # or using easy_install
    % easy_install -U alnair

from source:

    % python setup.py install

Basic usage

First, generate the recipes template set with the following command:

    % alnair generate template archlinux

In this example, the distribution name archlinux is used. The recipes/archlinux/common.py directories and file are created in the current directory by this command. Also, “g” has been defined as an alias for the generate command. The following command has the same meaning as the one above:

    % alnair g template archlinux

Next, edit the install_command variable in common.py for the target distribution:

    # common.py
    install_command = 'pacman -Sy'

Next, generate a recipe template for package setup with the following command:

    % alnair g recipe python

A python.py file is created in the recipes/archlinux/ directory by this command. In fact, the directories where you want to create the files are recipes/*/.

Finally, edit python.py for more settings if necessary and set up the server with the following command:

    % alnair setup archlinux python

Using as a library

You can use the following code instead of the “alnair setup archlinux python” command:

    from alnair import Distribution
    distname = 'archlinux'
    with Distribution(distname) as dist:
        dist.setup('python')

For more documentation, read the sources or please wait while the document is being prepared.

Changes

0.3.2

Add --dry-run option to CLI

Implement multiple packages in a single package name

Implement a host-specific configuration

0.3

Add command-line interface

Add Distribution.config() API

0.2

Change the APIs (incompatible with older releases)

0.1.2

Implement execution of commands to run before setup

Bug fixes

0.1.1

A few bug fixes

0.1

First release
|
alnairjob
|
No description available on PyPI.
|