aminocode
The aminocode library can be used to encode texts written in natural language into a format based on amino acids. With this encoding, various bioinformatics tools can be applied to text mining. Stand-alone tools based on the library are available at <https://sourceforge.net/projects/aminocode>.

Installation
To install aminocode through pip: pip install aminocode

Tested platforms: Python 3.7.4; Windows (64 bits) 10; Ubuntu (64 bits) 18.04.1 LTS
Required external libraries: numpy, unidecode, biopython

Functions

encodetext(text, detailing='')
- text: natural language text string to be encoded;
- detailing: details in coding. 'd' for details in digits, 'p' for details in punctuation, 'dp' or 'pd' for both;
- output: encoded string.

decodetext(text, detailing='')
- text: text string encoded using the encodetext function, to be decoded;
- detailing: details used in the text to be decoded. 'd' for details in digits, 'p' for details in punctuation, 'dp' or 'pd' for both;
- output: decoded string.

encodefile(input_file_name, output_file_name=None, detailing='', header_format='number+originaltext', verbose=False)
- input_file_name: text file name or _io.TextIOWrapper variable. The format imported by Biopython's Bio.SeqIO library can also be used, in which case the function automatically extracts the headers for encoding;
- output_file_name: name for the output file. If not defined, the result is only returned as a variable;
- detailing: same as in the encodetext function;
- header_format: format for the headers of the generated FASTA. It can be 'number+originaltext', 'number' or 'originaltext'. 'number' is a count of the lines in the input file (blank lines are counted but not added to the FASTA file); 'originaltext' is the input text itself;
- verbose: if True, displays progress;
- output: FASTA variable in Biopython format. If output_file_name is defined, a file is also saved.

decodefile(input_file_name, output_file_name=None, detailing='', verbose=False)
- input_file_name: file name or variable in the format used by Biopython's Bio.SeqIO library;
- output_file_name: name for the output file. If not defined, the result is only returned as a variable;
- detailing: same as in the decodetext function;
- verbose: if True, displays progress;
- output: string list. If output_file_name is defined, a file is also saved.
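A minimal usage sketch based on the function signatures above; the assumption that encodetext and decodetext can be imported directly from the aminocode package is mine and is not stated in the description:

```python
# Hedged example: the import path is assumed from the package name.
from aminocode import encodetext, decodetext

encoded = encodetext('Hello world, 42!', detailing='dp')  # keep digit and punctuation details
print(encoded)

decoded = decodetext(encoded, detailing='dp')  # decode with the same detailing setting
print(decoded)
```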
amino.fix
### Discord
https://discord.gg/Bf3dpBRJHj
### About lib
Fix of Amino.py 1.2.17
### How to install?
pip install amino.fix
### API Reference
[Read the Docs Link](https://aminopy.readthedocs.io/en/latest/)
amino.fix-async
### Discord
https://discord.gg/Bf3dpBRJHj
### About lib
Fix of Amino.py 2.0.3
### How to install?
pip install amino.fix-async
amino.li
# amino.li
A literal copy of amino.fix and amino.py; I only fix and add things that they don't want to fix.
pip3 install -U amino.li
aminolib
Aminolib
If you have any questions, join us on Discord!
What is this? Aminolib is an alternate Python API of amino.py with the same purpose of communicating with Amino's servers while pretending that you're an app user. This is mostly accomplished by spoofing device configuration headers, while objectifying and organizing Amino response data so that actually doing anything is easier.
How do I use this? Python 3.6 and above is the recommended version for Aminolib. To install Aminolib, run python3 -m pip install aminolib.
aminoraid
No description available on PyPI.
amino-rus
An asynchronous library for interacting with https://aminoapps.com/api and https://service.narvii.com/api/v1, assembled in a single file. All functions have Russian-language names.
aminos
This is a package for SID login in Amino, for use with Amino.py.
amino.sf
### Telegram
https://t.me/spi22der
### About lib
Fix of Amino.py 1.2.17
### How to install?
pip install amino.sf
aminosp
AminoN: a library to make bots for Amino.
Installation: pip install AminoN
How to use it? You can check the lessons or my Discord server to learn how to use it.
Community: you can join the programmers at http://aminoapps.com/c/Djjjhggghff
Imports: splax (note: you need to have requests, json and urllib installed to use this library)
aminoxii
# amino.xii
### How to install it
pip install aminoxii
amin-qvm
amin-qvm Library DocumentationOverviewTheamin-qvmlibrary has good tools for simulating quantum algorithms and neural networks. It has classes for quantum state manipulation, representation of quantum gates, and implementations of algorithms like Grover and Deutsch-Jozsa.Modules in the LibrarynovumClass: NovumThis is the primary class representing a quantum system.Initialization:Novum(num_qubits: int)Initiates a quantum system with a specified number of qubits.Methods:execute_qasm(qasm_code: str):Executes QASM code on the quantum system.convert_to_binary(input_data: str, num_bits: int):Converts CSV data into binary representation.prepare_classical_state(input_data):Prepares the quantum state based on classical binary input data.apply_gate(gate_matrix: np.ndarray, qubit_indices: List[int]):Applies a specified gate to the qubits of the quantum system.apply_oracle(b1, b2, entanglements):Applies an oracle gate to the qubits using the specified entanglements.measure_qubits():Measures the qubits and returns the outcome.plot_results(title="Measurement Results"):Plots the measurement outcomes of the qubits.gatesDefines an assortment of quantum gates represented as NumPy arrays.Gate Variables:PauliX, PauliY, PauliZ, Hadamard, Identity, S, T, CNOT, SWAP, etc.Functions:PhaseShift(theta): Returns a phase shift gate for a given angle. Rx(theta), Ry(theta), Rz(theta): Rotation gates around the x, y, and z-axis.qasm_processingProvides functions for converting QASM code into executable Python code using the Novum library.Functions:qasm_to_python(qasm_code: str):Converts QASM code into Python code commands for the Novum system.toolsFunctions supporting quantum neural network operations.Functions:update_weights(...), compute_gradients(...), initialize_weights(...), apply_layer(...), forward_pass(...), calculate_loss(...), train_qnn(...)equationsBasic mathematical functions used in quantum computing.Functions:inner_product(...), eigen(...), dot_product(...), normalize_state(...), tensor_product(...), is_unitary(...)groverImplements Grover's search algorithm as a class.Class: GroverInitialization:Grover(data: list[str], target: str, iterations: int = 1)Methods:run():Executes the Grover search algorithm.prepare_superposition_state():Sets up the initial superposition across all qubits.apply_oracle():Applies the oracle function.apply_diffusion_operator():Applies the diffusion (inversion about the mean) operator.deutschjozsaImplementation of the Deutsch-Jozsa algorithm as a class.Class: DeutschJozsaInherits from Novum.Initialization:DeutschJozsa(num_qubits: int, f)Methods:oracle():Encodes the provided function f into the quantum oracle.run():Executes the Deutsch-Jozsa algorithm to determine if the function f is constant or balanced.UsageInstallationpip install amin-qvmSetting Up the EnvironmentMake sure you have Python installed on your system, because like, how else are you going to use the library :)Creating a Novum InstanceTo create a Novum system and run QASM:fromamin_qvm.system.novumimportNovum# Create an instance with 2 qubitsnovum=Novum(2)# Run QASM codeqasm_code="""qreg[2];cx 0,1;h 0;measure;"""novum.execute_qasm(qasm_code)Applying Gates and Running Algorithmsfromamin_qvm.system.gatesimportHadamardfromamin_qvm.algorithms.groverimportGrover# Initialize Grover's algorithm with data and targetgrover=Grover(data=["00","01","10","11"],target="10")grover.run()
amio
No description available on PyPI.
ami-organizer
UNKNOWN
ami-push
Daemon that acts as a bridge: it listens for Asterisk Management Interface (AMI) events and sends HTTP requests.
Requirements: Python 3.4+
Installation: pip install ami-push
amipwnd
No description available on PyPI.
amipwned
No description available on PyPI.
amir
Utility Functions in Python
Installation: First, install Python 3.6 from https://www.python.org, and then run: pip install amir
Citation: If you use the library in academic work, please consider citing https://doi.org/10.1117/12.2304418.
BibTeX entry: @inproceedings{tahmassebi2018ideeple, title={ideeple: Deep learning in a flash}, author={Tahmassebi, Amirhessam}, booktitle={Disruptive Technologies in Information Sciences}, volume={10652}, pages={106520S}, year={2018}, organization={International Society for Optics and Photonics}}
APA entry: Tahmassebi, A. (2018, May). ideeple: Deep learning in a flash. In Disruptive Technologies in Information Sciences (Vol. 10652, p. 106520S). International Society for Optics and Photonics.
amira
AMIRA: Automated Malware Incident Response & AnalysisAMIRA is a service for automatically running the analysis on theOSXCollectoroutput files. The automated analysis is performed viaOSXCollector Output Filters, in particularThe One Filter to Rule Them All: theAnalyze Filter. AMIRA takes care of retrieving the output files from an S3 bucket, running the Analyze Filter and then uploading the results of the analysis back to S3 (although one could envision as well attaching them to the related JIRA ticket).PrerequisitestoxThe following steps assume you havetoxinstalled on your machine.If this is not the case, please run:$sudopipinstalltoxOSXCollector Output Filters configuration fileAMIRA uses OSXCollector Output Filters to do the actual analysis, so you will need to have a validosxcollector.yamlconfiguration file in the working directory. The example configuration file can be found in theOSXCollector Output Filters.The configuration file mentions the location of the file hash and the domain blacklists. Make sure that the blacklist locations mentioned in the configuration file are also available when running AMIRA.AWS credentialsAMIRA uses boto3 to interface with AWS. You can supply credentials using either of the possibleconfiguration options.The credentials should allow reading and deleting SQS messages from the SQS queue specified in the AMIRA config as well as the read access to the objects in the S3 bucket where the OSXCollector output files are stored. To be able to upload the analysis results back to the S3 bucket specified in the AMIRA configuration file, the credentials should also allow write access to this bucket.AMIRA ArchitectureThe service uses theS3 bucket event notificationsto trigger the analysis. You will need to configure an S3 bucket for the OSXCollector output files, so that when a file is added there the notification will be sent to an SQS queue (AmiraS3EventNotificationsin the picture below). AMIRA periodically checks the queue for any new messages and upon receiving one it will fetch the OSXCollector output file from the S3 bucket. It will then run the Analyze Filter on the retrieved file.The Analyze Filter runs all the filters contained in the OSXCollector Output Filters package sequentially. Some of them communicate with the external resources, like domain and hashes blacklists (or whitelists) and threat intel APIs, e.g.VirusTotal,OpenDNS InvestigateorShadowServer. The original OSXCollector output is extended with all of this information and the very last filter run by the Analyze Filter summarizes all of the findings into a human-readable form. After the filter finishes running, the results of the analysis will be uploaded to the Analysis Results S3 bucket.The overview of the whole process and the system components involved in it are depicted below:Using AMIRAThe main entry point to AMIRA is in theamira/amira.pymodule. You will first need to create an instance of AMIRA class by providing the AWS region name, where the SQS queue with the event notifications for the OSXCollector output bucket is, and the SQS queue name:fromamira.amiraimportAMIRAamira=AMIRA('us-west-1','AmiraS3EventNotifications')Then you can register the analysis results uploader, e.g. the S3 results uploader:fromamira.s3importS3ResultsUploaders3_results_uploader=S3ResultsUploader('amira-results-bucket')amira.register_results_uploader(s3_results_uploader)Finally, run AMIRA:amira.run()Go get some coffee, sit back, relax and wait till the analysis results pop up in the S3 bucket!
amirfirstpackage
No description available on PyPI.
amirispy
AMIRIS-Py
Python tools for the electricity market model AMIRIS.

Installation
pip install amirispy
You may also use pipx. For detailed information please refer to the official pipx documentation.
pipx install amirispy

Further Requirements
In order to execute all commands provided by amirispy, you also require a Java Development Kit (JDK). The JDK must be installed and accessible via the console in which you run amirispy. To test, run java --version, which should show your JDK version (required: 8 or above). If the java command is not found or relates to a Java Runtime Environment (JRE), please download and install a JDK (e.g. from Adoptium).

Usage
Currently, there are four distinct commands available:
- amiris install: installation of the latest AMIRIS version and examples to your computer
- amiris run: perform a full workflow by compiling the .pb file from your scenario.yaml, executing AMIRIS, and converting results
- amiris batch: perform multiple runs, each with scenario compilation, AMIRIS execution, and results extraction
- amiris comparison: compare the results of two different AMIRIS runs to check them for equivalence

amiris install
Downloads and installs the latest open-access AMIRIS instance and accompanying examples.

| Option | Action |
|---|---|
| -u or --url | URL to download AMIRIS from (default: latest AMIRIS artifact from https://gitlab.com/dlr-ve/esy/amiris/amiris) |
| -t or --target | Folder to install amiris-core_<version>-jar-with-dependencies.jar to (default: ./) |
| -f or --force | Force install, which may overwrite an existing AMIRIS installation of the same version and existing examples (default: False) |
| -m or --mode | Option to install model and examples: all (default), only model, or only examples |

amiris run
Compile scenario, execute AMIRIS, and extract results.

| Option | Action |
|---|---|
| -j or --jar | Path to amiris-core_<version>-jar-with-dependencies.jar |
| -s or --scenario | Path to a scenario yaml-file |
| -o or --output | Directory to write output to |

amiris batch
Perform multiple runs, each with scenario compilation, AMIRIS execution, and results extraction.

| Option | Action |
|---|---|
| -j or --jar | Path to amiris-core_<version>-jar-with-dependencies.jar |
| -s or --scenarios | Path to a single scenario yaml-file, a list of scenario yaml-files, or their enclosing directories |
| -o or --output | Directory to write output to |
| -r or --recursive | Option to recursively search the provided path for scenarios (default: False) |
| -p or --pattern | Optional name pattern that scenario files searched for must match |

amiris compare
Compare whether the results of two AMIRIS runs are equivalent.

| Option | Action |
|---|---|
| -e or --expected | Path to folder with expected result .csv files |
| -t or --test | Path to folder with result files (.csv) to test for equivalence |
| -i or --ignore | Optional list of file names not to be compared |

Help
You reach the help menu at any point using -h or --help, which gives you a list of all available options, e.g.: amiris --help

Logging
You may define a logging level or an optional log file as first arguments in your workflow using any of the following arguments:

| Option | Action |
|---|---|
| -l or --log | Sets the logging level. Default is error. Options are debug, info, warning, warn, error, critical. |
| -lf or --logfile | Sets the logging file. Default is None. If None is provided, all logs get only printed to the console. |

Example: amiris --log debug --logfile my/log/file.txt install

Cite AMIRIS-Py
If you use AMIRIS-Py for academic work, please cite:
Christoph Schimeczek, Kristina Nienhaus, Ulrich Frey, Evelyn Sperber, Seyedfarzad Sarfarazi, Felix Nitsch, Johannes Kochems & A. Achraf El Ghazi (2023). AMIRIS: Agent-based Market model for the Investigation of Renewable and Integrated energy Systems. Journal of Open Source Software. doi:10.21105/joss.05041

Contributing
Please see CONTRIBUTING.

Available Support
This is a purely scientific project by (at the moment) one research group. Thus, there is no paid technical support available. If you experience any trouble with AMIRIS, you may contact the developers at the openMod-Forum or via [email protected]. Please report bugs and make feature requests by filing issues following the provided templates (see also CONTRIBUTING). For substantial enhancements, we recommend that you contact us at [email protected] about working together on the code in common projects or towards common publications and thus further develop AMIRIS.
amirkabirtsa
Amirkabir students in the TSA course develop projects; these projects are gathered together in this package.
amirPDFhosseintool
Here is the home page of the app.
amis
About Amis
amis is a low-code front-end framework developed by the Baidu team. It uses JSON configuration to generate pages, which reduces page development effort and greatly improves efficiency.
python amis is based on Baidu amis; it converts the amis data structures into corresponding Python data models via pydantic and adds some commonly used methods.

Amis highlights
- No front-end knowledge required: inside Baidu, most amis users had never written a front-end page or any JavaScript, yet they can build professional and complex back-office interfaces, something no other front-end UI library can achieve;
- Not affected by front-end technology churn: the oldest amis pages inside Baidu were created more than six years ago and are still in use, while the Angular/Vue/React versions of that time are now deprecated and the then-popular Gulp has been replaced by Webpack; if those pages had not been built with amis, their maintenance cost would be very high today;
- Benefit from continuous amis upgrades: amis keeps improving interaction details, such as freezing the first table row and keeping drop-downs responsive with large data sets, without requiring any changes to existing JSON configurations;
- Pages can be built entirely with the visual page editor: typical front-end visual editors can only produce static prototypes, whereas pages built with the amis visual editor can go straight to production;
- Provides a complete interface solution: other UI frameworks require JavaScript to wire up business logic, while amis needs only JSON configuration to implement complete functionality, including data fetching, form submission, and validation; the resulting pages can go live without further development;
- A large number of built-in components (120+), a one-stop solution: most other UI frameworks only offer the most common components, and for anything less common you have to find third-party components that are often inconsistent in appearance and interaction; amis includes many built-in components such as a rich-text editor, code editor, diff, condition builder, and real-time log viewer, so knowing amis alone is enough for most back-office pages;
- Supports extension: besides the low-code mode, components can be extended with custom components; in practice amis can be used like an ordinary UI library, in a hybrid mode of 90% low code and 10% code, improving efficiency without losing flexibility;
- Containers support unlimited nesting: all kinds of layout and presentation needs can be met through nesting;
- Battle-tested over a long time: amis is widely used inside Baidu, with 50,000 pages created over more than six years, from content moderation to machine management, from data analysis to model training; the most complex page has more than 10,000 lines of JSON configuration.

Installation
pip install amis

Simple example
main.py:
from amis.components import Page

page = Page(title='标题', body='Hello World!')
# output as json
print(page.amis_json())
# output as dict
print(page.amis_dict())
# output page html
print(page.amis_html())

Development documentation
See the official Amis documentation.

Dependent projects
pydantic, amis

License
This project is licensed under the Apache 2.0 license.
amis-admin-theme-editor
amis-admin-theme-editor - Theme-Editor forfastapi-amis-adminIncludes definition for cxd, antd, ang and dark Theme of amis.FeaturesSupported Themes/Styles: cxd, antd, ang and dark.Supported Vars/Elements: Includes all CSS vars defined in Theme CSS files.Supports to add custom css stylePreview View: After page refresh (currently) shows you most formitem types to check the changesTodo: Translations to DE, CNInstallpipinstallamis-admin-theme-editorUsage example:fromfastapiimportFastAPIfromfastapi_amis_admin.adminimportSettings,AdminSitefromfastapi_amis_admin.amisimportPagefromstarlette.requestsimportRequestfromstarlette.responsesimportRedirectResponsefromamis_admin_theme_editor.adminimportCustomThemeAdminfromamis_admin_theme_editor.modelimportCustomTheme# If you change the amis_theme value in settings don't forget to change the baseTheme value# in the CustomeTheme instance too, this is used by amis to show the correct default values of the selected ThemeclassMySettings(Settings):"""To add cssVars and/or css style to the amis render, we need to apply them for each page.Which means you need to override: `async def get_page(self, request: Request) -> Page`Easiest way is to add your customTheme instance to the site.settings object by override the Settings class and add afield which holds the current theme-editor settings.This way you can refernce it in any admin or form by self.site.settings.custom_theme"""custom_theme:CustomTheme=CustomTheme()classMyAdminSite(AdminSite):""" Beside the forms pages you need to override, we can do this, as an example, for the adminsite itself.Don't forget, this will not take any effect, you need to apply the changes per page,just for the whole site won't work, amis acts per page.But you can use this as an example. So you can Create your own FormAdmin version by inherit FormAdmin,override the get_page there and use your new class instead of the normal FormAdmin."""asyncdefget_page(self,request:Request)->Page:page=awaitsuper().get_page(request)ifself.app.site.settings.custom_theme:page.cssVars=self.app.site.settings.custom_theme.configpage.css=self.app.site.settings.custom_theme.custom_stylereturnpagesettings=MySettings()myCustomTheme=CustomTheme(baseTheme=settings.amis_theme)# as usual create the FastAPI app and the admin siteapp=FastAPI(debug=settings.debug)site=MyAdminSite(settings=settings)# create the CustomThemeAdmin and set/bind your custom theme instancetheme_app=site.get_admin_or_create(CustomThemeAdmin)theme_app.bind_custom_theme(myCustomTheme)# If you want to react on changes of the Theme-Editor you can register a callback# the event name: `settings_changed`@theme_app.on_event("settings_changed")deftheme_change_callback_handler(data:CustomTheme):print("theme_change_callback_handler",data)@app.on_event("startup")asyncdefstartup():# as usual mount your admin site with the fastapi appsite.mount_app(app)@app.get("/")asyncdefindex():# as usual, if you mount late and don't provide the fastapi app instance while creating the site,# you need to Redirect the response to the admin site routerreturnRedirectResponse(url=site.router_path)Interface PreviewTheme Editor with applied changes on pagesLicense Agreementamis-admin-theme-editoris based onMITopen source and free to use, it is free for commercial use, but please show/list the copyright information about amis-admin-theme-editor somewhere.
amisc
Efficient framework for building surrogates of multidisciplinary systems. Uses the adaptive multi-index stochastic collocation (AMISC) technique.InstallationWe highly recommend usingpdm:pipinstall--userpdmcd<your-project> pdminit pdmaddamiscHowever, you can also install normally:pipinstallamiscTo install from an editable local directory (e.g. for development), first fork the repo and then:gitclonehttps://github.com/<your-username>/amisc.git pdmadd-e./amisc--dev# or..pipinstall-e./amisc# similarlyThis way you can make changes toamisclocally while working on some other project for example. You can also quickly set up a dev environment with:gitclonehttps://github.com/<your-username>/amisc.gitcdamisc pdminstall# reads pdm.lock and sets up an identical venvQuickstartimportnumpyasnpfromamisc.systemimportSystemSurrogate,ComponentSpecfromamisc.rvimportUniformRVdeffun1(x):returndict(y=x*np.sin(np.pi*x))deffun2(x):returndict(y=1/(1+25*x**2))x=UniformRV(0,1,'x')y=UniformRV(0,1,'y')z=UniformRV(0,1,'z')model1=ComponentSpec(fun1,exo_in=x,coupling_out=y)model2=ComponentSpec(fun2,coupling_in=y,coupling_out=z)inputs=xoutputs=[y,z]system=SystemSurrogate([model1,model2],inputs,outputs)system.fit()x_test=system.sample_inputs(10)y_test=system.predict(x_test)ContributingSee thecontributionguidelines.CitationsAMISC paper [1].
amiseq
No description available on PyPI.
amispy
pyamis
Purpose: generate amis JSON configuration from Python.
Problems it solves: hand-writing JSON configuration is tiring, copy-pasted configuration has poor reusability, and hand-written JSON is prone to spelling mistakes. With pyamis, JSON configuration can be generated quickly from Python code and can be mixed with Python dicts.

install
git clone https://github.com/1178615156/pyamis
pip install --editable .

Example: generating a Form component
# first, import
from pyamis.components import *

form = Form(
    target='target',
    mode="horizontal",
    # note that attributes can be plain dicts
    controls=[{"type": "submit", "label": "登录"}],
)

This generates a Form configuration; now convert it to JSON:
import json
json.dumps(form)

The resulting JSON:
{"type": "form", "mode": "horizontal", "controls": [{"type": "submit", "label": "\u767b\u5f55"}], "target": "target"}

A further example of combining with dicts:
form_default_options = dict(
    title='条件',
    mode='horizontal',
    horizontal={"leftFixed": "sm"},
    submitOnInit=False,
    autoFocus=False,
)
form = Form(
    **form_default_options,  # dicts can be unpacked directly
    target='target',
)
# setting attributes directly also works
form['controls'] = [{"type": "submit", "label": "提交"}]

Currently available components: Page, Chart, Crud, Action, Form. If the component you want is not in the list above, you can write a plain dict instead.

Quick-start example:
from flask import Flask, send_file
from pyamis.components import *
import json

app = Flask("pyamis-test")

# app pages configuration
def site():
    return [
        # first page
        AppPage(
            label='page-1',
            url='1',
            # the first page has 2 sub-pages
            children=[
                # sub-page 1
                AppPage(
                    label="page-1-1",
                    url='page-1-1',
                    schema_body=[
                        Form(controls=[
                            {'type': 'text', 'name': "email", 'label': "email", "required": True},
                        ]),
                        "hello world 1 - 1",
                    ],
                ),
                # sub-page 2
                AppPage(
                    label="page-1-2",
                    url='page-1-2',
                    # schema_body=[...] is equivalent to
                    schema=Page(body=['hello page 1 - 2 ']),
                ),
            ],
        ),
        # second page
        AppPage(
            label='page-2',
            url='2',
            # if the component you want does not exist, you can write raw JSON
            schema_body=[{
                "type": "service",
                "api": "https://3xsw4ap8wah59.cfc-execute.bj.baidubce.com/api/amis-mock/mock2/page/initData",
                "body": {"type": "panel", "title": "$title", "body": "现在是:${date}"},
            }],
        ),
    ]

# print the generated JSON configuration
print(json.dumps(site(), indent=2))

@app.route('/')
def root():
    return send_file('test_index.html')

# per-page json
@app.route("/pages/site.json")
def site_json():
    return {'data': {'pages': [dict(label='Home', url='/'), dict(children=site())]}}

if __name__ == '__main__':
    app.run(port=8080, debug=True)
amis-python
amis-python
A Python pydantic model wrapper for the Baidu amis front-end framework. Since the original version lacks many components and options from newer amis releases, this project extends it.
Compared with the fastapi-amis-admin version, it:
- covers all amis components up to version 3.1.0
- uses jinja2 templates
- supports changing the theme

Installation
pip install amis-python

Simple usage
from amis.components import Page

page = Page(title='新页面', body='Hello World')
# output as a Python dict
print(page.to_dict())
# output as JSON
print(page.to_json())
# output as a string
print(page.render())
# save as an HTML file
with open('HelloWorld.html', 'w', encoding='utf-8') as f:
    f.write(page.render())

Detailed usage
See the official amis documentation.

Thanks
amis, fastapi-amis-admin
amisrsynthdata
This module provides tools to create synthetic data files for the AMISR (Advanced Module Incoherent Scatter Radar) systems. The files are based on both a specified ionospheric state and a radar configuration. This can be used to generate synthetic data in the “SRI data format” both for the three existing AMISRs and for hypothetical future “AMISR-like” systems. Primarily, it was designed to help test the functionality of various inversion algorithms that attempt to create a complete picture of ionospheric state parameters from discrete measurements by creating a way to check the output of these algorithms against known “truth” data. Please note that this module does NOT attempt to simulate any aspect of fundamental ISR theory.Quick StartInstallationThe amisrsynthdata package is pure python and can be installed easily with pip:$ pip install amisrsynthdataAdditionalinstallation instructionsare also available.Basic UsageThis package installs the command line toolamisrsynthdata, which is used along with a YAML configuration file to generate an output hdf5 AMISR data file. The configuration file specifies the ionosphere state and radar configuration that should be used:$ amisrsynthdata config.yamlRefer to theconfiguration file docsfor information about the contents of these configuration files and how to construct one.LimitationsThe following are NOT currently included in the amisrsynthdata module:Any kind of proper treatment or simulation of ISR theory - The module effectively assumes the radar measures plasma parameters perfectly at a particular location, although empirical errors can be added.Integration over a time period or smearing along the length of pulses, as well as pulse coding.Madrigal data format - Currently files are only generated in the SRI data format.DocumentationFull documentation for amisrsynthdata is available onReadTheDocs.ContributingContributions to this package are welcome and encouraged, particularly to expand the currently set of specified ionospheres. Contributions can take the form ofissuesto report bugs and request new features andpull requeststo submit new code. Please refer to thecontributing guidelinesfor more details. Specific instructions on how to add a new state function to describe the ionosphere are available inNew State Functions.
amisui
amis-python
A Python pydantic model wrapper for the Baidu amis front-end framework. Since the original version lacks many components and options from newer amis releases, this project extends it.
Compared with the fastapi-amis-admin version, it:
- covers all amis components up to version 3.1.0
- uses jinja2 templates
- supports changing the theme

Installation
pip install amisui

Simple usage
from amisui.components import Page

page = Page(title='新页面', body='Hello World')
# output as a Python dict
print(page.to_dict())
# output as JSON
print(page.to_json())
# output as a string
print(page.render())
# save as an HTML file
with open('HelloWorld.html', 'w', encoding='utf-8') as f:
    f.write(page.render())

Detailed usage
See the official amis documentation.

Thanks
amis-python
amis-web-requester
A web requester utility package
amit
AMITCentralise all your enumerations !Report Bug·Request FeatureTable of ContentsAbout the ProjectTools usedGetting StartedPrerequisitesInstallationUsageHelpRoadmapContributingLicenseAbout The ProjectAmit is a tool that group other tools into a unique view. Informations collected are organized and stored into a database.Tools usedAmit uses some performant tools to fetch information about the targets.NmapldapsearchrpcclientGetting StartedJust install it as a python module.pipinstallamitUsageJust run amit !amitAdd machinesHere we add 2 machines defined by their IP or domain.>add10.10.10.10example.comScan machinesThe 2 machines have id 1 and 2.>scanmachines12Show job status>showjobsShow services found>showservicesScan services>scanservices<id><id>...Show informations found for services>showservices<id>-vvvHelpYou can inspect the list of available commands using thehelpcommand. For each command, you can view its usage using the-hflag.RoadmapSee theopen issuesfor a list of proposed features (and known issues).ContributingContributions are what make the open source community such an amazing place to be learn, inspire, and create. Any contributions you make aregreatly appreciated.Fork the ProjectCreate your Feature Branch (git checkout -b feature/AmazingFeature)Commit your Changes (git commit -m 'Add some AmazingFeature')Push to the Branch (git push origin feature/AmazingFeature)Open a Pull RequestLicenseDistributed under the GPL-3.0 License. SeeLICENSEfor more information.
amit26april
Introduction
This project helps beginners generate AI models with one line of code. It is also useful for experts, as it automates repetitive tasks so they can focus on the main model. It currently works only for image classification, but there may be updates in the future.

Simple usage for beginners
Here, I have created a model to classify a human face into 7 emotions in just 4 lines:
from amit26april.image_cnn import classification
model = classification()
model = model.create('TRAIN', 7)
model.predict([x_test[0]])

Simple usage for experts
Here the preparation of data is automated, and it is up to the developer how to move further:
from amit26april.image_cnn import classification
x = classification()
df = x.create_dataframe('TRAIN')
y_train = x.prep_y_train
x_train = x.prep_x_train
# the rest of the complex work can be done on your own
amitapdf
No description available on PyPI.
amitcalculator
This is a very basic calculator that performs addition, subtraction, multiplication and division of two numbers.
Change Log
0.0.1 (20 Oct 2021): First release
0.0.2 (20 Oct 2021): Second release, with Stat
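A purely hypothetical usage sketch: the description above does not document the module layout or function names, so every identifier below is an assumption for illustration only.

```python
# Hypothetical sketch: function names are NOT documented by the package
# description and are assumed here purely for illustration.
from amitcalculator import add, subtract, multiply, divide  # assumed names

print(add(6, 2))       # 8
print(subtract(6, 2))  # 4
print(multiply(6, 2))  # 12
print(divide(6, 2))    # 3.0
```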
amitgptpdf
This is the homepage of the project.
amitgroup
UNKNOWN
amitjaiswal
UNKNOWN
amitkh-wordle
Entropy based Wordle solverCLI built with Typer. Based onthisvideo from 3 Blue 1 BrownInstallationpip install amitkh-wordleQuick StartThe solver needs to generate a pattern dictionary containing all of the word and pattern combinations in order to run. To do this, runwordle gen-pattern-dict. This process will take around 10-15 minutes.By default the dictionary will be compressed using bz2 to about 200 MB. You can also specify the--no-compressoption which will save the pickle without compression and will result in quicker load times, but take about 800 MB of storage .To run the solver in interactive mode, runwordle play. This will be followed by roughly 2 minutes of setup time while the program loads in the pattern dictionary and calculates the initial guess.For maximum speed, you can also add the-sflag, which runs the solver in time saving mode. This means instead of spending 1 minute or so calculating the first guess, which is always "tares", it simply assumes this to be the first guess and allows you to enter your pattern from "tares" straight away.After this, you will be presented with some information and prompted for a guess. The program will present you the top 10 guesses (though you can guess any word you would like), along with (from left to right) the expected score should you play that guess, the expected information that guess will provide in bits, and the probability that guess is the answer.If there are under 20 words remaining, the program will also print out those remaining words along with the probability that they are a potential answer based on frequency (which is not the same as the probability that they are the answer to this game).From there, simply type in your guesses and the pattern that was returned to you according to the instructions, and repeat until solved.Other Functionswordle play-wordsThis command allows you to pass in a word or list of words to be played by the solver, like sowordle play-words snake cater abysswordle test-allThis command runs the wordle solver on all of the 2,315 answers to the additional wordle, tracking the number of guesses taken for each, and writing them as a JSON to the file name given as an argumentwordle --helpDisplay a brief help message
amitool
When using the Zlog table in the database, you need to add __table_args__ = {'implicit_returning': False} when creating the Model.

Usage
pip install amitool

from amitools.AmiTool import AmiTool
ami = AmiTools(LogDir="./log", ConnString="", Host="127.0.0.1", PassWord="", Port=6379, db=10)  # not all parameters need to be filled in
logger = ami.loggers
logger.info("start session")
session = ami.initSession()
logger.info("start redis")
rs = ami.initRedis()
amitools
amitools - various AmigaOS tools for other platformswritten by Christian [email protected] the GNU Public License V2Introductionamitoolsis a collection of Python 3 tools that I've written to work withAmiga OSbinaries and files on macOS and all other *nix-like platforms supporting Python. Windows might work as well, but is heavily untested. However, patches are welcome.I focus with my tools on classic Amiga setups, i.e. a 680x0 based system with Amiga OS 1.x - 3.x running on it. However, this is an open project, so you can provide other Amiga support, too.The tools are mostly developer-oriented, so a background in Amiga programming will be very helpful.PrerequisitesPython >=3.7pip3Optional Packageslhafile - FS Edition: required to use.lhafile scannercython: (version >=0.29) required to rebuild the native moduleInstallationStable/Release Versionpip3installamitoolsNote:on Linux/macOS may usesudoto install for all usersrequires a host C compiler to compile the extension.the version may be a bit outdated. If you need recent changes use the current version.Current Version from GitHubEnsure you have Cython installed:pip3installcythonThen installamitoolsdirectly from the git repository:pip3install-Ugit+https://github.com/cnvogelg/amitools.gitNote:This will install the latest version found in the github repository.You find the latest features but it may also be unstable from time to time.Repeat this command to update to the latest version.DevelopersFollow this route if you want to hack around with the amitools codebaseClone the Git repo:amitools@gitEnsure you have Cython installed:pip3installcythonEnter the directory of the cloned repo and install via pip:pip3install-U-e.This installamitoolsin your current Python environment but takes the source files still from this repository. So you can change the code there and directly test the tools.ContentsThe new Documentation ofamitoolsis hosted onreadthedocsToolsvamosV)irtual AM)iga OSvamos allows you to run command line (CLI) Amiga programs on your host Mac or PC. vamos is an API level Amiga OS Emulator that replaces exec and dos calls with its own implementation and maps all file access to your local file system.xdftoolCreate and modify ADF or HDF disk image files.xdfscanScan directory trees for ADF or HDF disk image files and verify the contents.rdbtoolCreate or modify disk images with Rigid Disk Block (RDB)romtoolA tool to inspect, dissect, and build Amiga Kickstart ROM images to be used with emulators, run with soft kickers or burned into flash ROMs.hunktoolThe hunktool uses amitools' hunk library to load a hunk-based amiga binary. Currently, its main purpose is to display the contents of the files in various formats.You can load hunk-based binaries, libraries, and object files. Even overlayed binary files are supported.typetoolThis little tool is a companion for vamos. It allows you to dump and get further information on the API C structure of AmigaOS used in vamos.fdtoolThis tool reads the fd (function description) files Commodore supplied for all of their libraries and dumps their contents in different formats including a code structure used in vamos.You can query functions and find their jump table offset.Python LibrariesHunk libraryamitools.binfmt.hunkThis library allows to read Amiga OS loadSeg()able binaries and represent them in a python structure. 
You could query all items found there, retrieve the code, data, and bss segments and even relocate them to target addressesELF libraryamitools.binfmt.elfThis library allows to read a subset of the ELF format mainly used in AROS m68k..fd File Parseramitools.fdParse function descriptions shipped by Commodore to describe the Amiga APIsOFS and FFS File System Toolsamitools.fsCreate or modify Amiga's OFS and FFS file system structuresFile Scannersamitools.scanI've written some scanners that walk through file trees and retrieve the file data for further processing. I support file trees on the file system, in lha archives or in adf/hdf disk images
amitpdf1
This is the homepage of our project.
amitu-hstore
You need dynamic columns in your tables. What do you do?
- Create lots of tables to handle it. Nice, now you'll need more models and lots of additional SQL. Insertion and selection will be slow as hell.
- Use a noSQL database just for this issue. Good luck.
- Create a serialized column. Nice, insertion will be fine, and reading data from a record too. But what if you have a condition in your select that includes serialized data? Yeah, regular expressions.
Documentation - Mailing List
Projects using this package:
- django-rest-framework-hstore: django-rest-framework tools for django-hstore
- Nodeshot: extensible Django web application for management of community-led georeferenced data; some features of django-hstore, like the schema-mode, have been developed for this project
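A minimal sketch of the hstore-backed model this package targets; it assumes the fork keeps the upstream django-hstore module layout (django_hstore with DictionaryField and HStoreManager), which is not stated in the description above:

```python
# Sketch assuming the upstream django-hstore API (DictionaryField/HStoreManager);
# the import path is an assumption, not confirmed by this package's description.
from django.db import models
from django_hstore import hstore


class Product(models.Model):
    name = models.CharField(max_length=100)
    data = hstore.DictionaryField()     # the "dynamic columns" live here
    objects = hstore.HStoreManager()    # enables hstore-aware lookups

# querying serialized data without regular expressions:
# Product.objects.filter(data__contains={'colour': 'red'})
```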
amitu.lipy
UNKNOWN
amitu-putils
UNKNOWN
amitu-websocket-client
UNKNOWN
amitu-zutils
UNKNOWN
amiuploader
UNKNOWN
ami-val
ami-valIntroductionami-val is a lightweight, fast tests collection for AMIs.InstallationInstall from pip# pip3 install ami-valInstall from source code# git clone https://github.com/liangxiao1/ami-val.git# cd ami-val# python3 setup.py installBuild wheel from source code and install it# python3 setup.py sdist bdist_wheel# pip install -U dist/ami_val-0.0.1-py3-none-any.whlPublic new wheels onpypi(maintainer use only)# python3 -m twine upload dist/*Run testWe shared the pre configured aws credentials in "~/.aws/", please specify profile name as below format# cat ~/.aws/credentials# aws for default aws regions access[aws]aws_access_key_id=xxxxxxxaws_secret_access_key=xxxxxxxxx# aws for default china regions access[aws-china]aws_access_key_id=xxxxxxxaws_secret_access_key=xxxxxxxxx# aws for default us-gov regions access[aws-us-gov]aws_access_key_id=xxxxxxxaws_secret_access_key=xxxxxxxxxThe config fileYou can change the default setting in "cfg/ami-val.yaml" locates in the same installed dir.Below option is must required for ssh login:# remote_user: user and keyfile to login instance ssh_user: ec2-user ssh_keyfile: '/home/virtqe_s1.pem' # if pair_name keypair not found, will upload ssh_pubfile automatically # ssh_pubfile is not required if pair_name exists already. pair_name: virtqe_s1 ssh_pubfile: '/home/virtqe_s1.pub'Run all ami-val supported cases(the default path is "/usr/local/bin" if not in virtual environment. )# ami-val -f https://xxxx/pub/task/343012 --paralle# ami-val -f images.jsonThe json format should be like this, except required, other are options:# cat images.json[{"ami":"ami-01166axxxxxx",<-required"description":"Provided by Red Hat, Inc.","ena_support":true,<-required"name":"RHEL-xxxxxx-x86_64-1-Hourly2-GP2",<-required"region":"us-east-1",<-required"release":{"arch":"x86_64",<-required"base_product":null,"base_version":null,"date":"20201020","product":"RHEL","respin":1,"type":null,"variant":"BaseOS","version":"8.x"},"root_device":"/dev/sda1","sriov_net_support":"simple","type":"hourly","virtualization":"hvm","volume":"gp2"}]List all supported cases only without run# ami-val -lFilter case name with keywords timezone and bash# ami-val -l -p timezone,bash_historyFilter case name with keywords stage1 and skip timezone check# ami-val -l -p stage1 -s timezoneClean the resource created# ami-val --logdir /tmp/ami_val_344423 --cleanThe log fileThe console only shows the case test run. The test debug log file are saved in "/tmp/ami_val" following case name by default. 
If task id can be detected, it will be in "/tmp/ami_val_taskid" by default.Below is an example:# ami-val -f https://xxxxxxx/pub/task/343012 --paralleRuninmode:is_listcase:Falsepattern:Noneis_paralle:True Taskurlprovided,trytodownloadit Getdatafromhttp://xxxxxxx/pub/task/343012/log/images.json?format=raw Gotdatafromhttp://xxxxxxx/pub/task/343012/log/images.json?format=raw Removeexists/tmp/ami_val_343012 Createnew/tmp/ami_val_343012 Datasavedto/tmp/ami_val_343012/images.json Useprofile:aws resource/tmp/ami_val_343012/resource.json2021-03-0101:41:30.068325INFO:Useprofile:awsinregion['us-west-2','cn-northwest-1','us-gov-west-1']2021-03-0101:41:30.544445INFO:Initkeyinregionus-west-2successfully Runningami_val.tests.test_stage0.test_stage0_check_aminame(1/6)Runningami_val.tests.test_stage0.test_stage0_check_ena_enabled(2/6)Runningami_val.tests.test_stage0.test_stage0_launch_instance(3/6)Runningami_val.tests.test_stage1.test_stage1_check_bash_history(4/6)Runningami_val.tests.test_stage1.test_stage1_check_username(5/6)Runningami_val.tests.test_stage2.test_stage2_check_ha_specific(6/6)Totalcasenum:6Logdir:/tmp/ami_val_343012 HTMLsummary:/tmp/ami_val_343012/sum.html Pleasewaitresourcecleanupdone......The installed filesAll test files are located in "ami_val/tests" directory.# pip3 show -f ami-val|grep -v _pycache|grep -v distName:ami-val Version:0.0.1 Summary:AMIvalidationtool Home-page:https://github.com/liangxiao1/ami-val Author:XiaoLiang Author-email:[email protected] License:GPLv3+ Location:/home/p3_os_env/lib/python3.6/site-packages Requires:PyYAML,filelock,awscli,boto3,tipset,argparse Required-by:Files:../../../bin/ami-valami_val/__init__.pyami_val/ami_val.pyami_val/ami_val_run.pyami_val/cfg/ami-val.yamlami_val/cfg/os-tests.yamlami_val/data/baseline_log.jsonami_val/data/results.htmlami_val/libs/__init__.pyami_val/libs/aws_lib.pyami_val/libs/resource_class.pyami_val/libs/rmt_ssh.pyami_val/libs/utils_lib.pyami_val/scripts/rhel-ha-aws-check.shami_val/tests/__init__.pyami_val/tests/test_stage0.pyami_val/tests/test_stage1.pyami_val/tests/test_stage2.pyContributionYou are welcomed to create pull request or raise issue. New case from real customer senario or rhbz is prefered.
am-i-varun-or-yughandar
No description available on PyPI.
amivcrm
Connector to the AMIV SugarCRMSugarCRM provides a SOAP and a REST api. At the time this tool was written the REST api was unfortunately not available. Therefore SOAP is used.The python library suds is used, more exactly the fork byjurko.Installationpip install amivcrmUsageYou will need a soap username and password. You can find them in theAMIV Wiki. After you got the credentials, its as easy as this:from amivcrm import AMIVCRM CRM = AMIVCRM(username, password) # Optional: Specify `url` and/or `appname` # CRM = AMIVCRM(username, password, url="...", appname="...") # Get Companies CRM.get('Accounts') # Select only certain fields # Filter and order with SQL statements CRM.get('Accounts', # Only companies participating in job fair query="accounts_cstm.messeteilnahme_c = 1", # Order alphabetically order_by="accounts.name", # Return Name and ID only select_fields=['name', 'id']) # Get a single company by id CRM.getentry('Accounts', '505404b1-1851-1472-d63e-4d829377e30b', # Optional: Limit the returned fields as well select_fields=['name']) # Get a company only if modified after given date entry_id = '505404b1-1851-1472-d63e-4d829377e30b' date = '2016-03-20 08:05:39' # Be careful to use quotes in query query = ("accounts.id = '%s' and accounts.date_modified >= '%s'" % (entry_id, date)) CRM.get('Accounts', query=query)
amix
Automatic mix of audio clips.InstallationMake sure, to haveffmpeginstalled.pipinstallamixUsageI also uploaded my first results toSoundCloud.Please check first of all the help function.amix--helpAlso make sure to always obtain the latest version.amix--versionRender audio from the definition fileamix.ymlin the current working directory to disc.amixIncrease verbosity to also output theffmpeglogging.amix-vvUse ajinja2template and supply data.amixtemplates/amix.yml.j2--data"full=8""half=4""from=7.825""tempo=0.538""pitch=1.1""original_tempo=180"Automatically create parts from clips.amix--parts_from_clipsConfigurationYou can find the JSON schemahere.A sample configuration looks like:name:DnBoriginal_tempo:180parts:-name:backbeat_partbars:16clips:-name:backbeatmix:-name:introparts:-name:backbeat_part
amiyabot
Amiya-Bot
A concise and efficient asynchronous, progressive Python framework for QQ channel bots! The built-in adapters can be used to create bot implementations for KOOK, mirai-api-http, go-cqhttp, ComWeChat Client, and anything supporting OneBot v11/12.
Official documentation: www.amiyabot.com

Install
pip install amiyabot

Get started
Single mode
import asyncio
from amiyabot import AmiyaBot, Message, Chain

bot = AmiyaBot(appid='******', token='******')

@bot.on_message(keywords='hello')
async def _(data: Message):
    return Chain(data).text(f'hello,{data.nickname}')

asyncio.run(bot.start())

Multiple mode
import asyncio
from amiyabot import MultipleAccounts, AmiyaBot, Message, Chain

bots = MultipleAccounts(
    AmiyaBot(appid='******', token='******'),
    AmiyaBot(appid='******', token='******'),
    ...
)

@bots.on_message(keywords='hello')
async def _(data: Message):
    return Chain(data).text(f'hello,{data.nickname}')

asyncio.run(bots.start())

Use adapter
import asyncio
from amiyabot import AmiyaBot, Message, Chain
from amiyabot.adapters.onebot.v11 import onebot11

bot = AmiyaBot(appid='******', token='******', adapter=onebot11(host='127.0.0.1', http_port=8080, ws_port=8060))

@bot.on_message(keywords='hello')
async def _(data: Message):
    return Chain(data).text(f'hello,{data.nickname}')

asyncio.run(bot.start())

Get more
amiyabot-core-test
Amiya-Bot
A concise and efficient asynchronous, progressive Python framework for QQ channel bots! The built-in adapters can be used to create bot implementations for KOOK, mirai-api-http, go-cqhttp, ComWeChat Client, and anything supporting OneBot v11/12.
Official documentation: www.amiyabot.com

Install
pip install amiyabot

Get started
Single mode
import asyncio
from amiyabot import AmiyaBot, Message, Chain

bot = AmiyaBot(appid='******', token='******')

@bot.on_message(keywords='hello')
async def _(data: Message):
    return Chain(data).text(f'hello,{data.nickname}')

asyncio.run(bot.start())

Multiple mode
import asyncio
from amiyabot import MultipleAccounts, AmiyaBot, Message, Chain

bots = MultipleAccounts(
    AmiyaBot(appid='******', token='******'),
    AmiyaBot(appid='******', token='******'),
    ...
)

@bots.on_message(keywords='hello')
async def _(data: Message):
    return Chain(data).text(f'hello,{data.nickname}')

asyncio.run(bots.start())

Use adapter
import asyncio
from amiyabot import AmiyaBot, Message, Chain
from amiyabot.adapters.onebot.v11 import onebot11

bot = AmiyaBot(appid='******', token='******', adapter=onebot11(host='127.0.0.1', http_port=8080, ws_port=8060))

@bot.on_message(keywords='hello')
async def _(data: Message):
    return Chain(data).text(f'hello,{data.nickname}')

asyncio.run(bot.start())

Get more
amka-py
amka-pyA validator for greek social security number (AMKA)Installationpipinstallamka-pyUsagefromamka_py.amkaimportvalidate# An invalid AMKAis_valid,err=validate("09095986680")print(is_valid)# Falseprint(err)# A valid AMKAis_valid,err=validate("09095986684");print(is_valid)# True
amk-bipro
No description available on PyPI.
amkiller
PC KILLER WITH PYTHON
This Python script has only one file, killer, and inside there is just one function, start. The script uses the pyautogui, time and random libraries, and the ONLY way to stop it is by shutting down the computer. By the way, the program will start in a dramatic way.
amk.kakashi
kakashi - File Doppelganger
Kakashi, the genius ninja in Naruto, is called the "Copy Ninja" because he copied more than a thousand jutsu with his Sharingan. This tool borrows his name to reflect its purpose of quickly creating "file doppelgangers": while preserving the original file/directory structure, it batch-replaces specified text to generate another copy.

Installation
pip install amk.kakashi

Usage
Show the help:
$ kks
usage: kakashi [-h] [-d] [-v] [-V] [-f FROM_PATH] [-t TO_PATH] [-m MAP_PATH] [-r]

Batch-replace specified text while preserving the original file/directory structure, to quickly create file doppelgangers.

optional arguments:
  -h, --help            show this help message and exit
  -d, --debug           enable debug mode
  -v, --verbose         show verbose logs
  -V, --version         show the current version
  -f FROM_PATH, --from_path FROM_PATH
                        path of the file or directory to clone
  -t TO_PATH, --to_path TO_PATH
                        path where the cloned file or directory is saved
  -m MAP_PATH, --map_path MAP_PATH
                        path of the mapping file listing the text to replace; each line is one mapping in the format "old text => new text"
  -r, --remove_if_exist
                        if a file already exists at the target path, delete it first

How to use:
$ kks -f <from_path> -t <to_path> -m <map_path> [-[d|v]]
# example (effect as shown in the figure)
$ kks -f test_proj -t test_proj_2 -m test_proj_map.txt -dv

Changelog
amkpdf2
This is the readme file.
amk.pytb
PythonToolbox - Python toolbox

Installation
pip install amk.pytb

Usage
pip-related commands:
- Create a new package: pytb create <pkg_name>
- Publish a package:
  - publish a test package: pytb publish <-t/--test>
  - publish a release package: pytb publish
- Install/update a package:
  - install a test package: pytb install <pkg_name> [version] -t, e.g. pytb install ampt2 0.0.1 -t
  - install a release package: pytb install <pkg_name> [version], e.g. pytb install ampt2
  - update a test package: pytb install <pkg_name> -ut, e.g. pytb install ampt2 0.0.2 -ut
  - update a release package: pytb install <pkg_name> -u, e.g. pytb install ampt2 0.0.2 -u

Changelog
amk.rfwd
Rename Files With Date

Background
Files can usually only be sorted by creation time, modification time, or title, but for photos the title is usually useless (a fixed prefix plus a number, or even a meaningless string) and the times are inaccurate too (creation and modification times do not match the time the photo was actually taken). So the existing metadata needs to be adjusted so that photos are displayed in the expected order.

That is why I wrote this tool: it reads a file's creation time (for photos, the capture time) and renames the file so that sorting by title gives the expected order. To avoid name collisions, the naming rule is "year, month, day, hour, minute, second plus a 3-digit random number, 15 digits in total", for example:
202204062217.699.png
201601011713.247.JPG
201602251612.551.JPG
201712311343.878.JPG
202204041615.719.HEIC
202204041615.813.HEIC
202204041710.478.HEIC
202204041710.595.JPG
202204041710.749.HEIC
202204041710.750.HEIC

Installation
Install quickly with:
$ pip install amk.rename_files_with_date

Usage
Show the help:
$ rfwd -h
usage: rfwd [-h] [-d] [-v] [-l] path

Rename Photos With Date

positional arguments:
  path           path of the file/directory to rename

optional arguments:
  -h, --help     show this help message and exit
  -d, --debug    enable debug mode
  -v, --verbose  show verbose logs
  -l, --list     only list the operations that would be performed, without executing them, for review

Preview the operations that will be performed:
$ rfwd ./photos -l
or add -v to see more information:
$ rfwd ./photos -lv

Perform the renaming:
# append -v or -d to enable verbose/debug mode and see more information
$ rfwd ./photos
aml
No description available on PyPI.
amlan
No description available on PyPI.
amlang
# aml Accounting Mini Language — Small and simple expression language running on Python.amlis a very small programming language running on top of Python.amlis ideal for exposing a powerful but safe interface to users who need to define business rules but have no programming knowledge. In this respectamlis an alternative to cumbersome mouse-driven rule engine configurations.The grammar and parser utilise the [pypeg2](http://fdik.org/pyPEG/) library which must be installed for aml to work.To useamlthe programmer creates a “language instace”. Calling the compile function yields an object (essentially an AST). This object can then be evaluated directly using the evaluate function or translated to Python or SQL using the respective functions.Since the language is very simple, sufficient documentation can be given by example:>>> lang_instance = create_lang_instance() >>> lang_instance.aml_evaluate(lang_instance.aml_compile('1 = 1')) True >>> li = create_lang_instance() >>> c = li.aml_compile >>> e = li.aml_evaluate >>> p = li.aml_translate_python >>> s = li.aml_translate_sql >>> u = li.aml_suggest >>> e(c('1 = 0')) False >>> e(c('"1" = "1"')) True >>> e(c('(1=1)')) True >>> e(c('1 > 1')) False >>> e(c('not 1 > 1')) True >>> e(c('1 != 1')) False >>> e(c('-2 = -2')) True >>> eval(p(c('-2 = -2'))) True >>> eval(p(c('null = null'))) True >>> eval(p(c('1 = null'))) False >>> e(c('"foo" = "foo"')) True >>> e(c('"foo" = \\'foo\\'')) True >>> e(c('"fo\\'o" = "fo\\'o"')) True >>> e(c("'foo'" + '=' + '"foo"')) True >>> li = create_lang_instance({'foo' : 1}); >>> c = li.aml_compile >>> e = li.aml_evaluate >>> e(c('foo = 1')) True >>> li = create_lang_instance({'foo' : 1.00}) >>> c = li.aml_compile >>> e = li.aml_evaluate >>> e(c('foo = 1')) True >>> li = create_lang_instance({'foo' : 2.24}) >>> c = li.aml_compile >>> e = li.aml_evaluate >>> e(c('foo = 2.24')) True >>> li = create_lang_instance({'foo' : 'foo'}) >>> c = li.aml_compile >>> e = li.aml_evaluate >>> e(c('foo = "foo"')) True >>> li = create_lang_instance() >>> c = li.aml_compile >>> p = li.aml_translate_python >>> s = li.aml_translate_sql >>> s(c('null = null')) u'null is null' >>> p(c('null = null')) u'None == None' >>> s(c('null != null')) u'null is not null' >>> p(c('null != null')) u'None != None' >>> s(c('5 != 3')) u'5 <> 3' >>> p(c('5 != 3')) u'5 != 3' >>> li = create_lang_instance({'foo' : 'bar', 'fo2' : 'ba2'}) >>> c = li.aml_compile >>> p = li.aml_translate_python >>> e = li.aml_evaluate >>> u = li.aml_suggest >>> u('1 = fo') [u'fo2', u'foo'] >>> u('1 = FO') [u'fo2', u'foo'] >>> p(c('null = null')) u'None == None' >>> e(c('foo = "bar"')) True >>> e(c('fo2 = "ba2"')) True
amlansarkar
No description available on PyPI.
amlb
No description available on PyPI.
amlc
No description available on PyPI.
amlctor
AMLCTORAzure Machine Learning Pipeline Constructoramlctorallows you to create Azure Machine Learning(shortly -AML)Pipeline.amlctorbased on theAzure Machine Learning SDK, and implements main operations of the Pipeline creation. You can create pipelines with AML Steps, which can take DataInputs. In amlctor pipeline creation consists of 3 steps:0. PreporationIt's highly recommended to create separated folder your pipeline projects. And also, virtual environment(venv). You can create separated venv for future AML projects. It's specially useful if you are working with different kinds of libraries: data science oriented, web and so on.1. Pipeline initialisationSomething like project initialisation. You choose pipeline name, directory and credential.envfile. For storing amlctor has denv storage - orEnvBank. Initialise pipeline as:python-mamlctorinit-nmyfirstpipe-p.-edenv_nameHere-nshows pipeline name,-p- directory in which pipeline will be created,-e- dotenv name. I will talk about denv's a little bit later. After this, in the passed directory will be created named as pipeline passed name.myfirstpipe ---|settings/ ------|settings.py ------|.amlignore ------|.env ------|conda_dependencies.ymlInside the directorysettingsdirectory which contains:settings.py,.amlignore,.envandconda_dependencies.ymlfiles.conda_dependencies.ymlwill be used for environment creation on AML side..amlignoresomething like.gitignorebut for AML..envis file form of our EnvBank instance.-eis optional, if it's skipped, will be created.envtemplate with necessary fields, which you have to fill beforerunningpipeline.settings.py:This module contains all necessary configuractions:fromamlctor.inputimportFileInputSchema,PathInputSchemafromamlctor.coreimportStepSchema# --------------------------| Module Names |----------------------------AML_MODULE_NAME:str='aml'SCRIPT_MODULE_NAME:str='script'DATALOADER_MODULE_NAME:str='data_loader'# ---------------------------| General |---------------------------------NAME="{{pipe_name}}"DESCRIPTION="Your pipeline description"# ---------------------------| DataInputs |-------------------------------file=FileInputSchema(name='name',datastore_name='datastore',path_on_datastore='',files=['file.ext'],data_reference_name='')path=PathInputSchema(name='name',datastore_name='datastore',path_on_datastore='',data_reference_name='')# ---------------------------| Steps |---------------------------------step1=StepSchema(name='step_name',compute_target='compute_name',input_data=[file,path],allow_reuse=False)STEPS=[step1,]# ---------------------------| extra |---------------------------------EXTRA={'continue_on_step_failure':False,}Lets look at the variables we have here.AML_MODULE_NAME- initially, pipeline project has 3 main scripts:dataloader.py- loads all the DataInputs into the pipeline,aml.py- main script of the pipeline, loaded data inputs imported here automaticaly,script.py- just empty script for implement your deep logic. You are free for remove this module or add so many as you need, however - the entry point of project isaml.py.AML_MODULE_NAMEis the name of aml.py module. And the same thing forDATALOADER_MODULE_NAMEandSCRIPT_MODULE_NAME.NAME- name of your pipeline.DESCRIPTION- description of the pipeline.PathInputSchemaandFileInputSchemaDataInput of your pipeline. You create instances of the classes and pass intoStepSchemaclass. EachStepSchemaclass is abstraction ofPythonScriptStep. 
All steps must be inside the STEPS list. After filling in the settings, you have to apply them.

2. Apply Settings

python -m amlctor apply -p <path_to_pipeline>

Applying the pipeline means creating the structure based on the settings.py module. A directory is created for each step inside the pipeline directory, and each directory will contain: aml.py, dataloader.py and script.py. Note: the module names are set in the settings.py module.

3. Run Pipeline

python -m amlctor run -p <path_to_pipeline>

This command will publish your pipeline into your AML.

EnvBank

To work on an AML pipeline you have to use your credentials: workspace_name, resource_group, subscription_id, build_id, environment_name and tenant_id. In amlctor these variables are stored as instances of EnvBank, which is an encrypted JSON-like file. You can create, retrieve or remove EnvBank instances (referred to as denv below). For this purpose you have to use the denv command.

Create denv

You can create a denv in two ways: pass the path of an existing .env file, or use interactive mode via the terminal. In the first case:

python -m amlctor denv create -p <path_to_.env_file> -n <new_name>

Then you will type the new password twice for encryption. After that, the denv is saved into local storage and you will be able to use it for future pipeline creation.

To create a denv in interactive mode, pass the -i or --interactive argument:

python -m amlctor denv create -i

After that you have to type each requested field and set a password.

Get denv

To retrieve a denv use:

python -m amlctor denv get -n <name_of_denv>

To list all existing denv names add the --all argument:

python -m amlctor denv get --all

Note: to view the denv, you have to type the password.

Remove denv

To remove a denv:

python -m amlctor denv rm -n <name_of_denv>

DataInputs

DataInputs can be files or paths from the AML Datastore. Behind the scenes, the whole process creates a DataReference object. All inputs will be loaded in dataloader.py and imported into the aml.py module. Let's look at the amlctor DataInputs.

PathInputSchema

Allows you to create a data reference link to any directory inside the datastore. The class looks like this:

class PathInputSchema:
    name: str
    datastore_name: str
    path_on_datastore: str
    data_reference_name: str

Where:
name - name of your PathInput; this name will be used as the variable name for importing.
datastore_name - Datastore name.
path_on_datastore - target path relative to the Datastore.
data_reference_name - data reference name for the DataReference class; optional - if empty, name will be used.

FileInputSchema

Allows you to mount files from the Datastore. Behind the scenes it is very similar to PathInput, but with file-oriented additions.

class FileInputSchema:
    name: str
    datastore_name: str
    path_on_datastore: str
    data_reference_name: str
    files: List[str]

The first 4 fields are the same as above.
files - the file or files to mount from the Datastore. If you want a single file, pass it as a string; for more files, pass a list of strings. When you pass multiple filenames, they must be on the same path. (A minimal settings sketch using these schemas is given at the end of this description.)

Supported file types: amlctor uses pandas read methods to read the mounted files. At the moment, the supported file types are: csv, parquet, excel sheet, json.
Slugged file names will be used as variable names for importing files.

Other commands

Update

You can update the dataloader according to the settings.py module. This is useful when you have made some changes in settings.py and don't want to overwrite the whole pipeline structure from scratch; in this case you can use update:

python -m amlctor update -p <path_to_pipe> -s step_name

[Optional] The step_name argument is optional; if it is not passed, the update applies to all steps, otherwise only to the passed step.

Rename

python -m amlctor rename -p <path_to_pipe> -n <new_name>

Renames the pipeline to new_name.
Renaming the pipeline means: rename the pipeline project directory, change the NAME variable in settings.py and edit ENVIRONMENT_FILE in the .env file.
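To make the DataInputs concrete, here is a minimal, hypothetical settings.py fragment. Only the schema names and fields come from the documentation above; the import path, keyword-argument construction and the shape of the step objects are assumptions for illustration only.

# Hypothetical settings.py sketch -- import path and step wrapper are assumptions;
# the schema fields come from the documentation above.
from amlctor.inputs import FileInputSchema, PathInputSchema  # assumed import path

train_data = FileInputSchema(
    name="train_data",                    # variable name used for importing
    datastore_name="workspaceblobstore",  # example Datastore name
    path_on_datastore="datasets/churn",   # path relative to the Datastore
    data_reference_name="",               # empty -> falls back to `name`
    files=["train.csv", "labels.csv"],    # both files must live on the same path
)

model_dir = PathInputSchema(
    name="model_dir",
    datastore_name="workspaceblobstore",
    path_on_datastore="models/churn",
    data_reference_name="",
)

# Every step must be listed in STEPS; the exact step object is amlctor-specific
# and is only indicated schematically here.
STEPS = [
    # Step(name="preprocess", inputs=[train_data, model_dir]),  # assumed shape
]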
amlearn
# amlearn

Machine Learning Package for Amorphous Materials (WIP).

To featurize the heterogeneous atom-site environments in amorphous materials, we can use amlearn to derive 1k+ candidate features that encompass short-range order (SRO) and medium-range order (MRO) to describe the packing heterogeneity around each atom site. (See the example figure referenced below for combining site features and machine learning (ML) to predict the deformation heterogeneity in metallic glasses.)

Candidate features include recognized signatures such as coordination number (CN), Voronoi indices, characteristic motifs, volume metrics (atomic/cluster packing efficiency), i-fold symmetry indices, bond-orientational orders and symmetry functions (originally proposed to fit ML interatomic potentials and recently successful in featurizing disordered materials). We also include our recently proposed, highly interpretable and generalizable distance/area/volume interstice distribution features in amlearn (see [A transferable machine-learning framework linking interstice distribution and plastic heterogeneity in metallic glasses](https://www.nature.com/articles/s41467-019-13511-9). Qi Wang and Anubhav Jain. Nature Communications 10, 5537 (2019)).

In amlearn, we integrate Fortran90 with Python (using f2py) to combine flexibility with fast computation of features (>10x faster than pure Python). Please refer to the SRO and MRO feature representations in amlearn.featurize. We also include an IntersticeDistribution class as a site featurizer in [matminer](https://github.com/hackingmaterials/matminer), a comprehensive Python library for ML in materials science.

(Figure: schematic of combining site features and ML to predict deformation heterogeneity, docs_rst/_static/schematic_ML_of_deformation.png)

## Installation

Before installing amlearn, please install numpy (version 1.7.0 or greater) first. We recommend the conda install:

conda install numpy

or you can find a numpy installation guide in the [Numpy installation instructions](https://www.scipy.org/install.html).

Then you can install amlearn. There are two ways to install it:

Install amlearn from PyPI (recommended):

pip install amlearn

Alternatively, install amlearn from the GitHub source. First, clone amlearn using git:

git clone https://github.com/Qi-max/amlearn

Then, cd to the amlearn folder and run setup.py:

cd amlearn
sudo python setup.py install

## References

Qi Wang and Anubhav Jain. A transferable machine-learning framework linking interstice distribution and plastic heterogeneity in metallic glasses. Nature Communications 10, 5537 (2019). doi:[10.1038/s41467-019-13511-9](https://www.nature.com/articles/s41467-019-13511-9)
amlengine
Python AML.Engine

This provides access to the .Net implementation of the AML.Engine - an API for simple access to AutomationML files. Access to the functionalities of the .Net DLLs is provided via the pythonnet package. Besides the access to the native API of the AML.Engine, the package also contains some additional helper functions implemented in Python that are available via the amlhelper module.

Usage

# ensure the package is loaded correctly and the required DLLs are registered with pythonnet
import amlengine

# depending on what functionalities you want to use
from Aml.Engine import *
from Aml.Engine.CAEX import *
from Aml.Engine.CAEX.Extensions import *
from Aml.Engine.AmlObjects import *
from Aml.Engine.AmlObjects.Extensions import *
from Aml.Engine.Services import *
from Aml.Engine.Adapter import *

# access to the native .Net API
aml_file = CAEXDocument.New_CAEXDocument(CAEXDocument.CAEXSchema.CAEX2_15)

.Net version

By default, the DLLs compiled for .Net Framework 4.8 are used. However, using another version provided by the AML.Engine is also possible. To do so, set the environment variable AML_ENGINE_DOTNET_VERSION to one of the following values before importing amlengine:

net5.0
net6.0
net48 (default)
netcoreapp3.1
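For example, a minimal sketch of selecting the .Net 6 build before the import; the variable name and values come from the description above, setting it via os.environ inside the same process is an assumption (it can equally be exported in the shell):

import os

# Must be set before amlengine is imported; otherwise the default net48 DLLs are loaded.
os.environ["AML_ENGINE_DOTNET_VERSION"] = "net6.0"

import amlengine
from Aml.Engine.CAEX import CAEXDocument

doc = CAEXDocument.New_CAEXDocument(CAEXDocument.CAEXSchema.CAEX2_15)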
amlensing
GAML: Astrometric MicroLensing prediction using Gaia's data

This Python package searches for astrometric gravitational microlensing events given a list of lens-source pairs, and outputs quality assessments and the astrometric and photometric microlensing effects of significant lensing events.

This project has evolved from Klüter's amlensing, with the following improvements:

major overhaul to standardize and generalize the codebase
substantial refactors to adapt for general lensing objects and background sources, e.g., allowing their mass, mass error, and individual epochs to be set; this makes it easier to prepare the input data files, which was absent in the original code
also a few bug fixes, which affect the result (mostly slightly)

For more detailed changes of this fork, see CHANGES.md and the commit log.

What does it do

GAML performs several filters to exclude lenses and sources with low quality. By predicting the motion over a specific time span, GAML determines the time and angular separation of the lens-source closest approach. By sampling the angular Einstein ring radius and angular separation, it calculates the astrometric and photometric observables of the gravitational microlensing event, such as the centroid shift, the positive-image shift, the centroid shift with a luminous lens, and the magnification.

Although there are quite some changes from the original codebase, it is still recommended to read Kluter 2022 for the theoretical details.

Documentation

For further documentation, see the docs folder
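For orientation, the standard microlensing relations behind these observables (textbook formulas, not specific to this package's implementation) for a lens of mass M at distance D_L and a source at distance D_S are:

\theta_E = \sqrt{\frac{4GM}{c^2}\left(\frac{1}{D_L}-\frac{1}{D_S}\right)}, \qquad u = \frac{\Delta\theta}{\theta_E}

\delta\theta_c = \frac{u}{u^2+2}\,\theta_E, \qquad A = \frac{u^2+2}{u\sqrt{u^2+4}}

where Δθ is the angular lens-source separation, δθ_c the centroid shift of the unresolved image pair for a dark lens, and A the total magnification.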
amle-py
Amstrad Learning Environment

The Amstrad Learning Environment is a Python interface that is meant to be used with AIs and OpenAI Gym. The Python library is available on Linux and Windows.

Installation

Unix

First you'll need to install Python 3 (version >= 3.6). To install Python:

$ sudo apt-get install python3

To check the version:

$ python3 --version

Then you'll need to install pip, so you can install the amle-py package:

$ curl https://bootstrap.pypa.io/get-pip.py | python

With pip, you'll need to install two dependencies:

$ pip install --upgrade setuptools
$ pip install --upgrade numpy

Finally you can install amle-py:

$ pip install --upgrade amle-py

Windows

Install Python by going to their website. Make sure to add Python to your path and install pip during the installation.

How to use

Compile and use the sources (Linux only)

To compile the sources you will need to install a few dependencies:

SDL 1.2
libpng
libzip
cppunit

Once you have all these dependencies, you can compile the project. You may also want to generate the documentation of the project; this is done in the doc/ folder. Make sure to have doxygen installed:

$ sudo apt-get install doxygen

And then:

$ doxygen doxygen.config

Use the Python library

First you will need to create a new Python file in which you import the library:

import amle_py

Then you will need to create a new amle object:

amle = amle_py.AmLEInterface()

And you will need to load a game:

amle.loadSnapshot("Arkanoid", "snap/arka.sna")

Note that the first string has to be a name the AmLE can understand. If you have a doubt you can get the list of all possible strings with:

games = amle.getSupportedGames()

Moreover, the second argument is a path to YOUR .sna file. You have to import one from the internet or generate one yourself with an emulator. Also, it doesn't have to be in a snap/ folder, this is just cleaner.

Then you may want to run the game:

while not amle.gameOver():
    amle.step()

This doesn't do anything interesting though; you can also interact with the game. To do so, before the loop you can do:

nbLegalActions = amle.getNbLegalActions()
legalActions = np.empty(nbLegalActions, dtype=np.int32)
amle.getLegalActions(legalActions)
legalActions = legalActions.tolist()

And then in the loop:

import amle_py
import random

# The previous code discussed
while not amle.gameOver():
    amle.act(random.choice(legalActions))
    amle.step()

Finally, you may want to generate the documentation for the library. To do so go to the amle_py folder and run:

$ pydoc -w amle_python_interface.py
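If it helps to see everything in one place, here is the same walkthrough collapsed into a single random-agent script; only the calls shown above are used, and the game name and .sna path are placeholders:

import random
import numpy as np
import amle_py

amle = amle_py.AmLEInterface()
amle.loadSnapshot("Arkanoid", "snap/arka.sna")  # placeholder game name and snapshot path

# Query the legal actions once, as described above
nbLegalActions = amle.getNbLegalActions()
legalActions = np.empty(nbLegalActions, dtype=np.int32)
amle.getLegalActions(legalActions)
legalActions = legalActions.tolist()

# Act randomly until the game reports it is over
while not amle.gameOver():
    amle.act(random.choice(legalActions))
    amle.step()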
amlfbp
No description available on PyPI.
aml-hallucination
Hallucination measurement

Change Log

0.0.1 (04/03/2023) - First Release
amlhpc
amlhpc

Package to provide a -just enough- Slurm or PBS experience on Azure Machine Learning. Use the infamous sbatch/qsub/sinfo to submit jobs and get insight into the state of the HPC system in a familiar way. This allows applications to interact with AML without the need to re-program another integration.

For the commands to function, the following environment variables have to be set:

SUBSCRIPTION=<guid of your Azure subscription e.g. 12345678-1234-1234-1234-1234567890ab>
CI_RESOURCE_GROUP=<name of the resource group where your Azure Machine Learning Workspace is created>
CI_WORKSPACE=<name of your Azure Machine Learning Workspace>

In the Azure Machine Learning environment, CI_RESOURCE_GROUP and CI_WORKSPACE are normally set, so you only need to export SUBSCRIPTION.

sinfo

Show the available partitions. sinfo does not take any options.

(azureml_py38) azureuser@login-vm:~/cloudfiles/code/Users/username$ sinfo
PARTITION    AVAIL    VM_SIZE                NODES    STATE
f16s         UP       STANDARD_F16S_V2       37
hc44         UP       STANDARD_HC44RS        3
hbv2         UP       STANDARD_HB120RS_V2    4
login-vm     UP       STANDARD_DS12_V2       None

squeue

Show the queue with historical jobs. squeue does not take any options.

(azureml_py38) azureuser@login-vm:~/cloudfiles/code/Users/username$ squeue
JOBID                      NAME            PARTITION    STATE    TIME
crimson_root_52y4l9yfjd    sbatch          f16s
polite_lock_v8wyc9gnx9     runscript.sh    f16s

sbatch

Submit a job, either as a command through the --wrap option or as a (shell) script. sbatch uses several options, which are explained in sbatch --help. Quite a few sbatch options are supported, such as running multi-node MPI jobs with the option to set the number of nodes to be used. Array jobs are also supported with the usual --array option.

Some additional options are introduced to support e.g. the data-handling methods available in AML. These are explained in data.md.

(azureml_py38) azureuser@login-vm:~/cloudfiles/code/Users/username$ sbatch -p f16s --wrap="hostname"
gifted_engine_yq801rygm2

(azureml_py38) azureuser@login-vm:~/cloudfiles/code/Users/username$ sbatch --help
usage: sbatch [-h] [-a ARRAY] -p PARTITION [-N NODES] [-w WRAP] [script]

sbatch: submit jobs to Azure Machine Learning

positional arguments:
  script                script to be executed

optional arguments:
  -h, --help            show this help message and exit
  -a ARRAY, --array ARRAY
                        index for array jobs
  -p PARTITION, --partition PARTITION
                        set compute partition where the job should be run. Use <sinfo> to view available partitions
  -N NODES, --nodes NODES
                        amount of nodes to use for the job
  -w WRAP, --wrap WRAP  command line to be executed, should be enclosed with quotes

If you encounter a scenario or option that is not supported yet or behaves unexpectedly, please create an issue and explain the option and the scenario.
amlnx-adapter
No description available on PyPI.
amlopschat
No description available on PyPI.
amlopsvueelements
No description available on PyPI.
aml-pipeline
Azure-ML-Pipeline

Azure Machine Learning Pipeline high level API

This project defines a set of high level APIs to define and publish Machine Learning Pipelines from the Azure Machine Learning Service.

Azure Machine Learning Service supports 2 types of pipelines:

Http triggered pipeline: exposes a constant REST endpoint to be triggered by authenticated Http requests
Schedule based pipeline: the pipeline is triggered at a predefined time interval.

Use the library

For an Http triggered pipeline: inherit from the HttpTriggeredPipeline class and override two methods: registerDataStores() and definePipelineSteps(). Example: HttpSklearnAzureFunctionPipeline.py. A minimal sketch of such a subclass is given at the end of this description.
For a schedule based pipeline: (developing)

Contribute to this repo

Build locally

Use conda (recommended)

conda create -p .env python=3.7
conda activate ./.env
.env/bin/pip install -r src/requirements.txt

Use pip

python3.6 -m venv .env
source .env/bin/activate
.env/bin/pip install -r src/requirements.txt
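As mentioned above, the library expects you to subclass HttpTriggeredPipeline and override registerDataStores() and definePipelineSteps(). The following is only an illustrative sketch: the import path, constructor and method bodies are assumptions, since only the class and method names appear in the description.

# Hypothetical sketch of an Http triggered pipeline; import path and method
# bodies are assumptions -- only the class/method names come from the docs.
from aml_pipeline import HttpTriggeredPipeline  # assumed import path


class MyHttpPipeline(HttpTriggeredPipeline):
    def registerDataStores(self):
        # Register the datastores (e.g. blob containers) the pipeline steps read from.
        # The exact registration calls depend on the library / Azure ML SDK.
        pass

    def definePipelineSteps(self):
        # Define the ordered pipeline steps here, e.g. a preprocessing step
        # followed by a training step.
        pass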
amlpp
Description

A library wrapper for ML libraries.

Capabilities

The main class is a conveyor, an analogue of a pipeline, and the two are partially interchangeable. The focus is on expanding built-in capabilities such as auto-typing of predictive models, feature importance inference, scores, and various future features. There are also adapted transformers and classes for customizing the structure of projects.

Status

At the moment the functionality is not wide; it is designed for expansion in the future.
aml_python
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
amlr
AMLR - Auto Machine Learning Report

Create a beautiful Machine Learning Report with One-Line-Code

Main Features:

Exploratory Data Analysis
Dataset Configuration:
  Shape
  Detect number of classes (Bernoulli or binary for now)
  Automatically drop duplicate observations (you can drop duplicate observations manually as well)
  Automatically exclude features with the highest frequencies (Names, IDs, FW keys etc.)
Regression Analysis
Automatic Balance of Classes
Correlation Analysis
Detecting Multicollinearity with VIF
Residual Analysis
Grid - Hyperparameter optimization
Partial dependence plot (PDP)
Individual Conditional Expectation (ICE)
Variable Importance by Model
AML - Partial Dependence
Ensemble - (ICE) Individual Conditional Expectation
Correlation Heatmap by Model
Model Performance
Analytical Performance Modeling
Comparative Metrics Table with: Overall ACC, Kappa, Overall RACC, SOA1(Landis & Koch), SOA1(Fleiss), SOA1(Altman), SOA1(Cicchetti), SOA1(Cramer), SOA1(Matthews), TNR Macro, TPR Macro, FPR Macro, FNR Macro, PPV Macro, ACC Macro, F1 Macro, TNR Micro, FPR Micro, TPR Micro, FNR Micro, PPV Micro, F1 Micro, Scott PI, Gwet AC1, Bennett S, Kappa Standard Error, Kappa 1% CI, Chi-Squared, Phi-Squared, Cramer V, Chi-Squared DF, 1% CI, Standard Error, Response Entropy, Reference Entropy, Cross Entropy, Joint Entropy, Conditional Entropy, KL Divergence, Lambda B, Lambda A, Kappa Unbiased, Overall RACCU, Kappa No Prevalence, Mutual Information, Overall J, Hamming Loss, Zero-one Loss, NIR, P-Value, Overall CEN, Overall MCEN, Overall MCC, RR, CBA, AUNU, AUNP, RCI, Pearson C, CSI, ARI, Bangdiwala B, Krippendorff Alpha
The Best Algorithms Table (automatically chooses the best model based on the metrics above)
Confusion Matrix for all Models
Feature Importance for all models
Save all Models into a Pickle file

How to Install

sudo apt-get install default-jre
pip install amlr

How to use

Syntax:

from amlr import amlr as rp
import webbrowser

rp = rp.report()
rp.create_report(dataset='data/titanic-passengers.csv', target='Survived', max_runtime_secs=1)
webbrowser.open('report/index.html')

Another option is to load your own data set with pandas and pass it to the AMLR report command, but you cannot use both methods. The code will be:

df = pd.read_csv('data/titanic-passengers.csv', sep=';')
rp.create_report(data_frame=df, target='Survived', max_runtime_secs=1)

Parameters

dataset: file to be read by AMLR
data_frame: Pandas DataFrame
target: the target column
duplicated: Default True; look for duplicated lines
sep: Default ';'; if the file is a csv, you must explicitly set the column separator character
exclude: Default True; a list with the columns to exclude from the process
max_runtime_secs: Default 1; time limit to run the deep learning models

(A call combining several of these parameters is sketched at the end of this description.)

max_run_time

When building a model, this option specifies the maximum runtime in seconds that you want to allot in order to complete the model. If this maximum runtime is exceeded before the model build is completed, then the model will fail. Specifying max_runtime_secs=1 disables this option for a production environment, thus allowing for an unlimited amount of runtime. If you just want to do a test, regardless of the results, use 1 second or a maximum of 61 seconds.

We tested with the following Dataset

Classic dataset on the Titanic disaster
Bernoulli Distribution Target or Binary Classification
Download here: Titanic

Output

See the output here. This is an example of the test made with the Titanic Dataset; enjoy!
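A hypothetical call combining several of the parameters listed above; the file path, target and excluded column names are placeholders, and only keyword arguments documented in the parameter list are used:

from amlr import amlr as rp
import pandas as pd

df = pd.read_csv('data/my-dataset.csv', sep=';')  # placeholder dataset

report = rp.report()
report.create_report(
    data_frame=df,            # use a DataFrame instead of `dataset`
    target='label',           # placeholder target column
    duplicated=True,          # look for duplicated lines
    exclude=['customer_id'],  # placeholder columns to exclude from the process
    max_runtime_secs=61,      # short run, as suggested for quick tests
)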
amlt
No description available on PyPI.
amltk
AutoML Toolkit

A framework for building an AutoML System. The toolkit is designed to be modular and extensible, allowing you to easily swap out components and integrate your own. The toolkit is designed to be used in a variety of different ways, whether for research purposes, building your own AutoML Tool or educational purposes.

We focus on building complex parametrized pipelines easily, providing tools to optimize these pipeline parameters and lastly, providing tools to schedule compute tasks on a variety of different compute backends, without the need to refactor everything once you swap out any one of these.

The goal of this toolkit is to drive innovation for AutoML Systems by:

Allowing concise research artifacts that can study different design decisions in AutoML.
Enabling simple prototypes to scale to the compute you have available.
Providing a framework for building real and robust AutoML Systems that are extensible by design.

Please check out our documentation for more:

Documentation - The homepage
Guides - How to use the Pipelines, Optimizers and Schedulers in a walkthrough fashion.
Reference - A short-overview reference for the various components of the toolkit.
Examples - A collection of examples for using the toolkit in different ways.
API - The full API reference for the toolkit.

Installation

To install AutoML Toolkit (amltk), you can simply use pip:

pip install amltk

[!TIP] We also provide a list of optional dependencies which you can install if you intend to use them. This allows the toolkit to be as lightweight as possible and play nicely with the tools you use.

pip install amltk[notebook] - For usage in a notebook
pip install amltk[sklearn] - For usage with scikit-learn
pip install amltk[smac] - For using SMAC as an optimizer
pip install amltk[optuna] - For using Optuna as an optimizer
pip install amltk[pynisher, threadpoolctl, wandb] - Various plugins for running compute tasks
pip install amltk[cluster, dask, loky] - Different compute backends to run from

Install from source

To install from source, you can clone this repo and install with pip:

git clone git@github.com:automl/amltk.git
pip install -e amltk  # -e for editable mode

If planning to contribute, you can install the development dependencies, but we highly recommend checking out our contributing guide for more.

pip install -e "amltk[dev]"

Features

Here's a brief overview of 3 of the core components from the toolkit:

Pipelines

Define parametrized machine learning pipelines using a fluid API:

from amltk.pipeline import Component, Choice, Sequential
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.svm import SVC

pipeline = (
    Sequential(name="my_pipeline")
    >> Component(SimpleImputer, space={"strategy": ["mean", "median"]})  # Choose either mean or median
    >> OneHotEncoder(drop="first")  # No parametrization, no problem
    >> Choice(  # Our pipeline can choose between two different estimators
        Component(
            RandomForestClassifier,
            space={"n_estimators": (10, 100), "criterion": ["gini", "log_loss"]},
            config={"max_depth": 3},
        ),
        Component(SVC, space={"kernel": ["linear", "rbf", "poly"]}),
        name="estimator",
    )
)

# Parse the search space with an implemented or your custom parser
search_space = pipeline.search_space(parser=...)

# Configure a pipeline
configured_pipeline = pipeline.configure(config)

# Build the pipeline with a builder, no amltk code in your built model
model = configured_pipeline.build(builder="sklearn")

Optimizers

Optimize your pipelines using a variety of different optimizers, with a unified API and a suite of utilities for recording and taking control of the optimization loop:

from amltk.optimization import Trial, Metric, History

pipeline = ...
accuracy = Metric("accuracy", maximize=True, bounds=(0.0, 1.0))
inference_time = Metric("inference_time", maximize=False)

def evaluate(trial: Trial) -> Trial.Report:
    model = pipeline.configure(trial.config).build("sklearn")
    try:
        # Profile the things you'd like
        with trial.profile("fit"):
            model.fit(...)
    except Exception as e:
        # Generate reports from exceptions easily
        return trial.fail(exception=e)

    # Record anything else you'd like
    trial.summary["model_size"] = ...

    # Store whatever you'd like
    trial.store({"model.pkl": model, "predictions.npy": predictions})
    return trial.success(accuracy=0.8, inference_time=...)

# Easily swap between optimizers, without needing to change the rest of your code
from amltk.optimization.optimizers.smac import SMACOptimizer
from amltk.optimization.optimizers.optuna import OptunaOptimizer
import random

Optimizer = random.choice([SMACOptimizer, OptunaOptimizer])
optimizer = Optimizer(space=pipeline, metrics=[accuracy, inference_time], bucket="results")

# You decide how your optimization loop should work
history = History()
for _ in range(10):
    trial = optimizer.ask()
    report = evaluate(trial)
    history.add(report)
    optimizer.tell(report)

print(history.df())

[!TIP] Check out our integrated optimizers or integrate your own using the very same API we use!

Scheduling

Schedule your optimization jobs or AutoML tasks on a variety of different compute backends. By leveraging compute workers and asyncio, you can easily scale your compute needs, react to events as they happen and swap backends, without needing to modify your code!

from amltk.scheduling import Scheduler

# Create a Scheduler with a backend, here 4 processes
scheduler = Scheduler.with_processes(4)
# scheduler = Scheduler.with_SLURM(...)
# scheduler = Scheduler.with_OAR(...)
# scheduler = Scheduler(executor=my_own_compute_backend)

# Define some compute and wrap it as a task to offload to the scheduler
def expensive_function(x: int) -> float:
    return (2 ** x) / x

task = scheduler.task(expensive_function)

numbers = iter(range(-5, 5))
results = []

# When the scheduler starts, submit 4 tasks to the processes
@scheduler.on_start(repeat=4)
def on_start():
    n = next(numbers)
    task.submit(n)

# When the task is done, store the result
@task.on_result
def on_result(_, result: float):
    results.append(result)

# Easy to incrementally add more functionality
@task.on_result
def launch_next(_, result: float):
    if (n := next(numbers, None)) is not None:
        task.submit(n)

# React to issues when they occur
@task.on_exception
def stop_something_went_wrong(_, exception: Exception):
    scheduler.stop()

# Start the scheduler and run it as you like
scheduler.run(timeout=10)
# ... await scheduler.async_run() for servers and real-time applications

[!TIP] Check out our integrated compute backends or use your own!

Extra Material

AutoML Fall School 2023 Colab
aml-uallu
hey there, this is my first pip package
aml-uallu-greetings
hey there, this is my first pip package
amlutils
Applied Machine Learning Utils (amlutils)

A library of Python 3 utility functions that I have found useful when trying to productionize machine and deep learning models.
amm
AMM

The AI model manager.

Use it

# Initialize a new project
amm init

# Probe an existing directory to build a map file
amm probe

# Install a model from Civitai
amm install https://civitai.com/models/7240/meinamix

# Install a model to a specific place
amm install https://civitai.com/models/7240/meinamix ./models/meinamix

# Install a Lora from Civitai, and automatically pair it with previews
amm install https://civitai.com/models/13213

# Install a model from Hugging Face
amm install https://huggingface.co/THUDM/chatglm-6b/blob/main/pytorch_model-00001-of-00008.bin

# Download a model by name
amm install bloom

# Install from an existing amm.json
amm install
amm install -r non-default-named.amm.json

License

MIT
ammar
No description available on PyPI.
ammarpdf
This is the homepage of our project
ammarpy
No description available on PyPI.
ammcpc
This command-line application and Python module is a simple wrapper around the MediaConch tool which takes a file and a MediaConch policy file as input and prints to stdout a JSON object indicating, in a way that Archivematica likes, whether the file passes the policy check.

Install with Pip:

$ pip install ammcpc

Install from source:

$ python setup.py install

Command-line usage:

$ ammcpc <PATH_TO_FILE> <PATH_TO_POLICY>

Python usage with a policy file path:

>>> from ammcpc import MediaConchPolicyCheckerCommand
>>> policy_checker = MediaConchPolicyCheckerCommand(
        policy_file_path='/path/to/my-policy.xml')
>>> exitcode = policy_checker.check('/path/to/file.mkv')

Python usage with a policy as a string:

>>> policy_checker = MediaConchPolicyCheckerCommand(
        policy='<?xml><policy> ... </policy>',
        policy_file_name='my-policy.xml')
>>> exitcode = policy_checker.check('/path/to/file.mkv')

System dependencies:

MediaConch version 16.12

To run the tests, make sure tox is installed, then:

$ tox
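Building on the usage above, a small sketch of acting on the result; the convention that a zero exit code means the policy check passed is an assumption based on typical CLI behaviour, and the paths are placeholders (the JSON details themselves go to stdout as described):

from ammcpc import MediaConchPolicyCheckerCommand

policy_checker = MediaConchPolicyCheckerCommand(
    policy_file_path='/path/to/my-policy.xml')  # placeholder policy path

exitcode = policy_checker.check('/path/to/file.mkv')  # placeholder media file

# Assumption: exit code 0 indicates the file passed the policy check.
if exitcode == 0:
    print('Policy check passed')
else:
    print('Policy check failed (exit code %d)' % exitcode)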
ammd-test-python
Failed to fetch description. HTTP Status Code: 404
ammeter-logger
Python Ammeter Logger

Works in conjunction with the Micropython ADC Amperage Monitor. View the details of our load testing series here: https://www.learningtopi.com/category/load-testing/

Introduction

This Python module works with the Micropython ADC Amperage Monitor to collect data from an amperage monitor. The module will control the micropython microcontroller to run a baseline of the current sensor or collect data.

The module can be run from the command line or imported into other projects to record amperage data.

Installation

The package is available on PyPI or can be installed manually using the whl/tar.gz file in the dist folder.

pip3 install ammeter_logger

CLI Usage

(venv) $ python3 -m ammeter_logger
usage: __main__.py [-h] [--get-config] [--get-status] [--skip-init] [--force-init] [--init-only]
                   [--sample-interval SAMPLE_INTERVAL] [--capture-time CAPTURE_TIME]
                   [--baudrate BAUDRATE] [--log-level LOG_LEVEL]
                   DEVICE OUTPUT_FILE
__main__.py: error: the following arguments are required: DEVICE, OUTPUT_FILE
(venv) $

The --get-config and --get-status parameters can be used to get the current status of the microcontroller.

It is recommended to initialize the ammeter (get a baseline 0 reading) with no load on the ammeter before connecting the device you intend to monitor. Initialization can be run using the --force-init option.

After the ammeter is initialized, the capture can be started using the --capture-time interval. Logged data will be stored in the specified output_file (as a CSV).
ammico
AMMICO - AI Media and Misinformation Content Analysis ToolThis package extracts data from images such as social media posts that contain an image part and a text part. The analysis can generate a very large number of features, depending on the user input.This project is currently under development!Use pre-processed image files such as social media posts with comments and process to collect information:Text extraction from the imagesLanguage detectionTranslation into English or other languagesCleaning of the text, spell-checkSentiment analysisSubjectivity analysisNamed entity recognitionTopic analysisContent extraction from the imagesTextual summary of the image content ("image caption") that can be analyzed further using the above toolsFeature extraction from the images: User inputs query and images are matched to that query (both text and image query)Question answeringPerforming person and face recognition in imagesFace mask detectionAge, gender and race detectionEmotion recognitionObject detection in imagesDetection of position and number of objects in the image; currently person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, cell phoneCropping images to remove comments from postsInstallationTheAMMICOpackage can be installed using pip:pip install ammicoThis will install the package and its dependencies locally.To make pycocotools work on Windows OS you may need to installvs_BuildTools.exefromhttps://visualstudio.microsoft.com/visual-cpp-build-tools/and choose following elements:Visual Studio extension developmentMSVC v143 - VS 2022 C++ x64/x86 build toolsWindows 11 SDKfor Windows 11 (orWindows 10 SDKfor Windows 10)Be careful, it requires around 7 GB of disk space.UsageThe main demonstration notebook can be found in thenotebooksfolder and also ongoogle colabThere are further sample notebooks in thenotebooksfolder for the more experimental features:Topic analysis: Use the notebookget-text-from-image.ipynbto analyse the topics of the extraced text.You can run this notebook on google colab:HerePlace the data files and google cloud vision API key in your google drive to access the data.Multimodal content: Use the notebookmultimodal_search.ipynbto find the best fitting images to an image or text query.You can run this notebook on google colab:HereColor analysis: Use the notebookcolor_analysis.ipynbto identify colors the image. The colors are then classified into the main named colors in the English language.You can run this notebook on google colab:HereTo crop social media posts use thecropposts.ipynbnotebook.You can run this notebook on google colab:HereFeaturesText extractionThe text is extracted from the images usinggoogle-cloud-vision. For this, you need an API key. Set up your google account following the instructions on the google Vision AI website. You then need to export the location of the API key as an environment variable:export GOOGLE_APPLICATION_CREDENTIALS="location of your .json"The extracted text is then stored under thetextkey (column when exporting a csv).Googletransis used to recognize the language automatically and translate into English. The text language and translated text is then stored under thetext_languageandtext_englishkey (column when exporting a csv).If you further want to analyse the text, you have to set theanalyse_textkeyword toTrue. In doing so, the text is then processed usingspacy(tokenized, part-of-speech, lemma, ...). 
The English text is cleaned from numbers and unrecognized words (text_clean), spelling of the English text is corrected (text_english_correct), and further sentiment and subjectivity analysis are carried out (polarity,subjectivity). The latter two steps are carried out usingTextBlob. For more information on the sentiment analysis using TextBlob seehere.TheHugging Face transformers libraryis used to perform another sentiment analysis, a text summary, and named entity recognition, using thetransformerspipeline.Content extractionThe image content ("caption") is extracted using theLAVISlibrary. This library enables vision intelligence extraction using several state-of-the-art models, depending on the task. Further, it allows feature extraction from the images, where users can input textual and image queries, and the images in the database are matched to that query (multimodal search). Another option is question answering, where the user inputs a text question and the library finds the images that match the query.Emotion recognitionEmotion recognition is carried out using thedeepfaceandretinafacelibraries. These libraries detect the presence of faces, and their age, gender, emotion and race based on several state-of-the-art models. It is also detected if the person is wearing a face mask - if they are, then no further detection is carried out as the mask prevents an accurate prediction.Object detectionObject detection is carried out usingcvliband theYOLOv4model. This library detects faces, people, and several inanimate objects; we currently have restricted the output to person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, cell phone.Color/hue detectionColor detection is carried out usingcolorgram.pyandcolourfor the distance metric. The colors can be classified into the main named colors/hues in the English language, that are red, green, blue, yellow, cyan, orange, purple, pink, brown, grey, white, black.Cropping of postsSocial media posts can automatically be cropped to remove further comments on the page and restrict the textual content to the first comment only.
ammico-lavis
AMMICO-LAVISThis is a fork ofLAVIS(release 1.0.2) that support ARM M1, M2, and M3 Macs. On MacOS it depends oneva-decord, instead ofdecordon other systems. Supportstransformers>=4.25.0,<4.27.Benchmark,Technical Report,Documentation,Jupyter Notebook Examples,BlogLAVIS - A Library for Language-Vision IntelligenceWhat's New: 🎉[Model Release] Jan 2023, released implementation ofBLIP-2Paper,Project Page,,A generic and efficient pre-training strategy that easily harvests development of pretrained vision models and large language models (LLMs) for vision-language pretraining. BLIP-2 beats Flamingo on zero-shot VQAv2 (65.0vs56.3), establishing new state-of-the-art on zero-shot captioning (on NoCaps121.6CIDEr score vs previous best113.2). In addition, equipped with powerful LLMs (e.g. OPT, FlanT5), BLIP-2 also unlocks the newzero-shot instructed vision-to-language generationcapabilities for various interesting applications!Jan 2023, LAVIS is now available onPyPIfor installation![Model Release] Dec 2022, released implementation ofImg2prompt-VQAPaper,Project Page,A plug-and-play module that enables off-the-shelf use of Large Language Models (LLMs) for visual question answering (VQA). Img2Prompt-VQA surpasses Flamingo on zero-shot VQA on VQAv2 (61.9 vs 56.3), while in contrast requiring no end-to-end training![Model Release] Oct 2022, released implementation ofPNP-VQA(EMNLP Findings 2022,"Plug-and-Play VQA: Zero-shot VQA by Conjoining Large Pretrained Models with Zero Training", by Anthony T.M.H. et al),Paper,Project Page,)A modular zero-shot VQA framework that requires no PLMs training, achieving SoTA zero-shot VQA performance.Table of ContentsIntroductionInstallationGetting StartedModel ZooImage CaptioningVisual question answering (VQA)Unified Feature Extraction InterfaceLoad DatasetsJupyter Notebook ExamplesResources and ToolsDocumentationsEthical and Responsible UseTechnical Report and Citing LAVISLicenseIntroductionLAVIS is a Python deep learning library for LAnguage-and-VISion intelligence research and applications. This library aims to provide engineers and researchers with a one-stop solution to rapidly develop models for their specific multimodal scenarios, and benchmark them across standard and customized datasets. It features a unified interface design to access10+tasks (retrieval, captioning, visual question answering, multimodal classification etc.);20+datasets (COCO, Flickr, Nocaps, Conceptual Commons, SBU, etc.);30+pretrained weights of state-of-the-art foundation language-vision models and their task-specific adaptations, includingALBEF,BLIP,ALPRO,CLIP.Key features of LAVIS include:Unified and Modular Interface: facilitating to easily leverage and repurpose existing modules (datasets, models, preprocessors), also to add new modules.Easy Off-the-shelf Inference and Feature Extraction: readily available pre-trained models let you take advantage of state-of-the-art multimodal understanding and generation capabilities on your own data.Reproducible Model Zoo and Training Recipes: easily replicate and extend state-of-the-art models on existing and new tasks.Dataset Zoo and Automatic Downloading Tools: it can be a hassle to prepare the many language-vision datasets. LAVIS provides automatic downloading scripts to help prepare a large variety of datasets and their annotations.The following table shows the supported tasks, datasets and models in our library. 
This is a continuing effort and we are working on further growing the list.TasksSupported ModelsSupported DatasetsImage-text Pre-trainingALBEF, BLIPCOCO, VisualGenome, SBU ConceptualCaptionsImage-text RetrievalALBEF, BLIP, CLIPCOCO, Flickr30kText-image RetrievalALBEF, BLIP, CLIPCOCO, Flickr30kVisual Question AnsweringALBEF, BLIPVQAv2, OKVQA, A-OKVQAImage CaptioningBLIPCOCO, NoCapsImage ClassificationCLIPImageNetNatural Language Visual Reasoning (NLVR)ALBEF, BLIPNLVR2Visual Entailment (VE)ALBEFSNLI-VEVisual DialogueBLIPVisDialVideo-text RetrievalBLIP, ALPROMSRVTT, DiDeMoText-video RetrievalBLIP, ALPROMSRVTT, DiDeMoVideo Question Answering (VideoQA)BLIP, ALPROMSRVTT, MSVDVideo DialogueVGD-GPTAVSDMultimodal Feature ExtractionALBEF, CLIP, BLIP, ALPROcustomizedText-to-image Generation[COMING SOON]Installation(Optional) Creating conda environmentcondacreate-nlavispython=3.8 condaactivatelavisinstall fromPyPIpipinstallsalesforce-lavisOr, for development, you may build from sourcegitclonehttps://github.com/salesforce/LAVIS.gitcdLAVIS pipinstall-e.Getting StartedModel ZooModel zoo summarizes supported models in LAVIS, to view:fromlavis.modelsimportmodel_zooprint(model_zoo)# ==================================================# Architectures Types# ==================================================# albef_classification ve# albef_feature_extractor base# albef_nlvr nlvr# albef_pretrain base# albef_retrieval coco, flickr# albef_vqa vqav2# alpro_qa msrvtt, msvd# alpro_retrieval msrvtt, didemo# blip_caption base_coco, large_coco# blip_classification base# blip_feature_extractor base# blip_nlvr nlvr# blip_pretrain base# blip_retrieval coco, flickr# blip_vqa vqav2, okvqa, aokvqa# clip_feature_extractor ViT-B-32, ViT-B-16, ViT-L-14, ViT-L-14-336, RN50# clip ViT-B-32, ViT-B-16, ViT-L-14, ViT-L-14-336, RN50# gpt_dialogue baseLet’s see how to use models in LAVIS to perform inference on example data. We first load a sample image from local.importtorchfromPILimportImage# setup device to usedevice=torch.device("cuda"iftorch.cuda.is_available()else"cpu")# load sample imageraw_image=Image.open("docs/_static/merlion.png").convert("RGB")This example image showsMerlion park(source), a landmark in Singapore.Image CaptioningIn this example, we use the BLIP model to generate a caption for the image. To make inference even easier, we also associate each pre-trained model with its preprocessors (transforms), accessed viaload_model_and_preprocess().importtorchfromlavis.modelsimportload_model_and_preprocessdevice=torch.device("cuda"iftorch.cuda.is_available()else"cpu")# loads BLIP caption base model, with finetuned checkpoints on MSCOCO captioning dataset.# this also loads the associated image processorsmodel,vis_processors,_=load_model_and_preprocess(name="blip_caption",model_type="base_coco",is_eval=True,device=device)# preprocess the image# vis_processors stores image transforms for "train" and "eval" (validation / testing / inference)image=vis_processors["eval"](raw_image).unsqueeze(0).to(device)# generate captionmodel.generate({"image":image})# ['a large fountain spewing water into the air']Visual question answering (VQA)BLIP model is able to answer free-form questions about images in natural language. 
To access the VQA model, simply replace thenameandmodel_typearguments passed toload_model_and_preprocess().fromlavis.modelsimportload_model_and_preprocessmodel,vis_processors,txt_processors=load_model_and_preprocess(name="blip_vqa",model_type="vqav2",is_eval=True,device=device)# ask a random question.question="Which city is this photo taken?"image=vis_processors["eval"](raw_image).unsqueeze(0).to(device)question=txt_processors["eval"](question)model.predict_answers(samples={"image":image,"text_input":question},inference_method="generate")# ['singapore']Unified Feature Extraction InterfaceLAVIS provides a unified interface to extract features from each architecture. To extract features, we load the feature extractor variants of each model. The multimodal feature can be used for multimodal classification. The low-dimensional unimodal features can be used to compute cross-modal similarity.fromlavis.modelsimportload_model_and_preprocessmodel,vis_processors,txt_processors=load_model_and_preprocess(name="blip_feature_extractor",model_type="base",is_eval=True,device=device)caption="a large fountain spewing water into the air"image=vis_processors["eval"](raw_image).unsqueeze(0).to(device)text_input=txt_processors["eval"](caption)sample={"image":image,"text_input":[text_input]}features_multimodal=model.extract_features(sample)print(features_multimodal.multimodal_embeds.shape)# torch.Size([1, 12, 768]), use features_multimodal[:,0,:] for multimodal classification tasksfeatures_image=model.extract_features(sample,mode="image")features_text=model.extract_features(sample,mode="text")print(features_image.image_embeds.shape)# torch.Size([1, 197, 768])print(features_text.text_embeds.shape)# torch.Size([1, 12, 768])# low-dimensional projected featuresprint(features_image.image_embeds_proj.shape)# torch.Size([1, 197, 256])print(features_text.text_embeds_proj.shape)# torch.Size([1, 12, 256])similarity=features_image.image_embeds_proj[:,0,:]@features_text.text_embeds_proj[:,0,:].t()print(similarity)# tensor([[0.2622]])Load DatasetsLAVIS inherently supports a wide variety of common language-vision datasets by providingautomatic download toolsto help download and organize these datasets. After downloading, to load the datasets, use the following code:fromlavis.datasets.buildersimportdataset_zoodataset_names=dataset_zoo.get_names()print(dataset_names)# ['aok_vqa', 'coco_caption', 'coco_retrieval', 'coco_vqa', 'conceptual_caption_12m',# 'conceptual_caption_3m', 'didemo_retrieval', 'flickr30k', 'imagenet', 'laion2B_multi',# 'msrvtt_caption', 'msrvtt_qa', 'msrvtt_retrieval', 'msvd_caption', 'msvd_qa', 'nlvr',# 'nocaps', 'ok_vqa', 'sbu_caption', 'snli_ve', 'vatex_caption', 'vg_caption', 'vg_vqa']After downloading the images, we can useload_dataset()to obtain the dataset.fromlavis.datasets.buildersimportload_datasetcoco_dataset=load_dataset("coco_caption")print(coco_dataset.keys())# dict_keys(['train', 'val', 'test'])print(len(coco_dataset["train"]))# 566747print(coco_dataset["train"][0])# {'image': <PIL.Image.Image image mode=RGB size=640x480>,# 'text_input': 'A woman wearing a net on her head cutting a cake. ',# 'image_id': 0}If you already host a local copy of the dataset, you can pass in thevis_pathargument to change the default location to load images.coco_dataset=load_dataset("coco_caption",vis_path=YOUR_LOCAL_PATH)Jupyter Notebook ExamplesSeeexamplesfor more inference examples, e.g. 
captioning, feature extraction, VQA, GradCam, zeros-shot classification.Resources and ToolsBenchmarks: seeBenchmarkfor instructions to evaluate and train supported models.Dataset Download and Browsing: seeDataset Downloadfor instructions and automatic tools on download common language-vision datasets.GUI Demo: to run the demo locally, runbash run_scripts/run_demo.shand then follow the instruction on the prompts to view in browser. A web demo is coming soon.DocumentationsFor more details and advanced usages, please refer todocumentation.Ethical and Responsible UseWe note that models in LAVIS provide no guarantees on their multimodal abilities; incorrect or biased predictions may be observed. In particular, the datasets and pretrained models utilized in LAVIS may contain socioeconomic biases which could result in misclassification and other unwanted behaviors such as offensive or inappropriate speech. We strongly recommend that users review the pre-trained models and overall system in LAVIS before practical adoption. We plan to improve the library by investigating and mitigating these potential biases and inappropriate behaviors in the future.Technical Report and Citing LAVISYou can find more details in ourtechnical report.If you're using LAVIS in your research or applications, please cite using this BibTeX:@misc{li2022lavis,title={LAVIS: A Library for Language-Vision Intelligence},author={Dongxu Li and Junnan Li and Hung Le and Guangsen Wang and Silvio Savarese and Steven C. H. Hoi},year={2022},eprint={2209.09019},archivePrefix={arXiv},primaryClass={cs.CV}}Contact usIf you have any questions, comments or suggestions, please do not hesitate to contact us [email protected] 3-Clause License
ammo
UNKNOWN
ammod-blob-detector
Blob Detector

Clone me with git clone https://git.inf-cv.uni-jena.de/AMMOD/blob_detector.git

Description

Blob detection algorithms for insects on a single-color (white) screen.

Installation

conda create -n detector python~=3.9.0 opencv~=4.5
conda activate detector
pip install --upgrade pip
pip install -r requirements.txt

Licence

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
ammolite
Ammolite - From Biotite to PyMOL and back again

This package enables the transfer of structure-related objects from Biotite to PyMOL for visualization, via PyMOL's Python API:

Import AtomArray and AtomArrayStack objects into PyMOL - without intermediate structure files.
Convert PyMOL objects into AtomArray and AtomArrayStack instances.
Use Biotite's boolean masks for atom selection in PyMOL.
Display images rendered with PyMOL in Jupyter notebooks.

Have a look at this example:

PyMOL is a trademark of Schrodinger, LLC.