package
|
package-description
|
---|---|
analiseBseoo
|
Tool to help users analyze the URL of a given website on the web
|
analise-financeira
|
Failed to fetch description. HTTP Status Code: 404
|
analise-rasa-plots
|
No description available on PyPI.
|
analisischip
|
ChIP-seq sequence analysis
Tools for the analysis of sequences in ChIP-seq results. Includes:
A class to search for sequences in ChIP-seq results and in genome promoters.
Functions to search for binding sites in lists of sequences.
Generation of .csv files with results and histograms.
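The class and function names are not given in this description; as a rough plain-Python illustration of the kind of binding-site search and .csv output described (hypothetical code, not the analisischip API):
import csv

def find_binding_sites(sequences, motif):
    """Return (sequence_index, position) pairs where `motif` occurs in each sequence."""
    hits = []
    for i, seq in enumerate(sequences):
        start = seq.find(motif)
        while start != -1:
            hits.append((i, start))
            start = seq.find(motif, start + 1)
    return hits

# Example: search a toy motif in a list of promoter sequences and dump the hits to a .csv file
sequences = ["ATGCGTACGTTAGC", "GGGTACGTACGTAA"]
hits = find_binding_sites(sequences, "TACGT")
with open("binding_sites.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["sequence_index", "position"])
    writer.writerows(hits)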
|
analitica-escalable-pec-2
|
Example model package.
|
analitica-test
|
Example model package.
|
analiticcl
|
Analiticcl
Introduction
Analiticcl is an approximate string matching or fuzzy-matching system that can be used for spelling correction or text normalisation (such as post-OCR correction or post-HTR correction). Texts can be checked against a validated or corpus-derived lexicon (with or without frequency information) and spelling variants will be returned. Please see the main README.md for a further introduction; it also links to a Python tutorial. Analiticcl is written in Rust; this is the Python binding, allowing you to use analiticcl from Python as a module.
Installation
With pip:
pip install analiticcl
From source: to use this method, you need to have Rust installed and in your $PATH. Install it through your package manager or through rustup:
curl https://sh.rustup.rs -sSf | sh -s -- -y
export PATH="$HOME/.cargo/bin:$PATH"
Once Rust is installed, you can compile the analiticcl binding:
# Create a virtual env (you can use yours as well)
python -m venv .env
source .env/bin/activate
# Install `analiticcl` in the current virtual env
pip install setuptools_rust
python setup.py install
Usage
from analiticcl import VariantModel, Weights, SearchParameters
import json

model = VariantModel("examples/simple.alphabet.tsv", Weights(), debug=False)
model.read_lexicon("examples/eng.aspell.lexicon")
model.build()
result = model.find_variants("udnerstand", SearchParameters(max_edit_distance=3))
print(json.dumps(result, ensure_ascii=False, indent=4))
print()
results = model.find_all_matches("I do not udnerstand the probleem", SearchParameters(max_edit_distance=3, max_ngram=1))
print(json.dumps(results, ensure_ascii=False, indent=4))
Note: all offsets reported by analiticcl are utf-8 byte-offsets, not character offsets! If you want proper unicode character
offsets, pass the keyword argument unicodeoffset=True to SearchParameters. You will want to set this if you intend to do
any kind of slicing in Python (which uses unicode points by default).Output:[{"text":"understand","score":0.8978494623655915,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"understands","score":0.6725317693059629,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"understood","score":0.6036866359447004,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"understate","score":0.5967741935483871,"lexicon":"../../../examples/eng.aspell.lexicon"}][{"input":"I","offset":{"begin":0,"end":1},"variants":[{"text":"I","score":0.8387096774193549,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"i","score":0.8064516129032258,"lexicon":"../../../examples/eng.aspell.lexicon"}]},{"input":"do","offset":{"begin":2,"end":4},"variants":[{"text":"do","score":1.0,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"dog","score":0.6236559139784946,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"doc","score":0.6236559139784946,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"doz","score":0.6236559139784946,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"dob","score":0.6236559139784946,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"doe","score":0.6236559139784946,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"dot","score":0.6236559139784946,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"dos","score":0.6236559139784946,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"ado","score":0.6236559139784946,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"don","score":0.6236559139784946,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"d","score":0.5967741935483871,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"o","score":0.5967741935483871,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"DOD","score":0.5913978494623655,"lexicon":"../../../examples/eng.aspell.lexicon"}]},{"input":"not","offset":{"begin":5,"end":8},"variants":[{"text":"not","score":1.0,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"knot","score":0.6370967741935484,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"note","score":0.6370967741935484,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"snot","score":0.6370967741935484,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"no","score":0.6236559139784946,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"nowt","score":0.5967741935483871,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"No","score":0.5913978494623655,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"OT","score":0.5913978494623655,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"pot","score":0.5698924731182795,"lexicon":"../../../examples/eng.aspell.lexicon"}]},{"input":"udnerstand","offset":{"begin":9,"end":19},"variants":[{"text":"understand","score":0.8978494623655915,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"understands","score":0.6725317693059629,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"understood","score":0.6036866359447004,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"understate","score":0.5967741935483871,"lexicon":"../../../examples/eng.aspell.lexicon"}]},{"input":"the","offset":{"begin":20,"end":23},"variants":[{"text":"the","score":1.0,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"thee","score":0.6908602150537635,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"thew","score":0.6370967741935484,"lexicon":"../../../examples/eng.aspell.
lexicon"},{"text":"then","score":0.6370967741935484,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"them","score":0.6370967741935484,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"they","score":0.6370967741935484,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"he","score":0.6236559139784946,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"Thea","score":0.6048387096774194,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"Th","score":0.5913978494623655,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"He","score":0.5913978494623655,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"thy","score":0.5698924731182795,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"she","score":0.5698924731182795,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"tho","score":0.5698924731182795,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"Thu","score":0.5376344086021505,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"Che","score":0.5376344086021505,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"THC","score":0.5376344086021505,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"tee","score":0.5161290322580645,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"toe","score":0.5161290322580645,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"tie","score":0.5161290322580645,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"Te","score":0.510752688172043,"lexicon":"../../../examples/eng.aspell.lexicon"}]},{"input":"probleem","offset":{"begin":24,"end":32},"variants":[{"text":"problem","score":0.9231950844854071,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"problems","score":0.6908602150537635,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"probe","score":0.5913978494623656,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"proclaim","score":0.5766129032258065,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"probated","score":0.543010752688172,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"probates","score":0.543010752688172,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"prole","score":0.5322580645161291,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"prowlers","score":0.4959677419354839,"lexicon":"../../../examples/eng.aspell.lexicon"},{"text":"parolees","score":0.44220430107526887,"lexicon":"../../../examples/eng.aspell.lexicon"}]}]DocumentationThe python binding exposes only a minimal interface, you can use Python'shelp()function to get information on the
classes provided. For more detailed information, please consult Analiticcl's Rust API documentation. The interfaces that are available in the binding are analogous to the Rust versions.
|
analitico
|
Analitico SDK
This package contains plugins and classes used to access analitico.ai cloud services and machine learning models. The package can be installed in Jupyter notebooks, Colaboratory notebooks or other Python environments. To access assets stored in Analitico you will need an API token.
Installation
To install in Python: pip install analitico
To install on Jupyter, Colaboratory, etc.: !pip install analitico
Usage
import analitico

# authorize calls with developer token
sdk = analitico.authorize_sdk(token="tok_xxx")

# retrieve a dataset object from analitico
dataset = sdk.get_dataset("ds_xxx")

# download a data file from storage into a Pandas dataframe
df = dataset.download(df="customers.csv")
|
analitiqs
|
AnalitiQs
Transform dataframes.
|
analiz
|
No description available on PyPI.
|
analog
|
Analog is a weblog analysis utility that provides these metrics:
Number of requests.
Request method (HTTP verb) distribution.
Response status code distribution.
Requests per path.
Response time statistics (mean, median).
Response upstream time statistics (mean, median).
Response body size in bytes statistics (mean, median).
Per path request method (HTTP verb) distribution.
Per path response status code distribution.
Per path response time statistics (mean, median).
Per path response upstream time statistics (mean, median).
Per path response body size in bytes statistics (mean, median).
Documentation is on analog.readthedocs.org, code and issues are on github.com/fabianbuechler/analog, and the package can be installed from PyPI at pypi.python.org/pypi/analog.
Changelog
1.0.0 - 2015-02-26
Provide yaml config file for Travis-CI. Extend tox environments to cover 2.7, 3.2, 3.3, 3.4, pypy and pypy3. Convert repository to git and move to github. Set version only in setup.py, use via pkg_resources.get_distribution.
1.0.0b1 - 2014-04-06
Going beta with Python 3.4 support and good test coverage.
0.3.4 - 2014-04-01
Test analog.analyzer implementation. Test analog.utils implementation.
0.3.3 - 2014-03-10
Test analog.renderers implementation. Fix bug in default plaintext renderer.
0.3.2 - 2014-03-02
Test analog.report.Report implementation and fix some bugs.
0.3.1 - 2014-02-09
Rename --max_age option to --max-age for consistency.
0.3.0 - 2014-02-09
Ignore __init__.py at PEP257 checks since __all__ is not properly supported. Fix custom log format definitions. Format selection in CLI via subcommands. Add pypy to tox environments.
0.2.0 - 2014-01-30
Remove dependency on configparser package for Python 2.x. Allow specifying all analog arguments in a file for convenience.
0.1.7 - 2014-01-27
Giving up on VERSIONS file. Does not work with different distributions.
0.1.6 - 2014-01-27
Include CHANGELOG in documentation. Move VERSION file to analog module to make sure it can be installed.
0.1.5 - 2014-01-27
Replace numpy with backport of statistics for mean and median calculation.
0.1.4 - 2014-01-27
Move fallback for verbs, status_codes and paths configuration to analyzer. Also use the fallbacks in analog.analyzer.Analyzer.__init__ and analog.analyzer.analyze.
0.1.3 - 2014-01-27
Fix API-docs building on readthedocs.
0.1.1 - 2014-01-26
Add numpy to requirements.txt since installation via setup.py install does not work. Strip VERSION when reading it in setup.py.
0.1.0 - 2014-01-26
Start documentation: quickstart and CLI usage plus API documentation. Add renderers for CSV and TSV output. Use --output [csv|tsv]. Unified codebase for all tabular renderers. Add renderer for tabular output. Use --output [grid|table]. Also analyze HTTP verbs distribution for overall report. Remove timezone aware datetime handling for the moment. Introduce Report.add method to not expose Report externals to Analyzer. Install pytz on Python <= 3.2 for UTC object. Else use datetime.timezone. Add tox environment for py2.7 and py3.3 testing. Initial implementation of log analyzer and report object. Initial package structure, docs, requirements, test scripts.
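As a rough sketch of the per-path mean/median response-time aggregation described in the metrics above, using the statistics module mentioned in the changelog (hypothetical, pre-parsed log entries; this is not analog's own API):
from collections import defaultdict
from statistics import mean, median

# Hypothetical pre-parsed log entries: (path, response_time_ms)
entries = [("/api/users", 120.0), ("/api/users", 95.0), ("/index", 30.0), ("/api/users", 410.0)]

times_by_path = defaultdict(list)
for path, response_time in entries:
    times_by_path[path].append(response_time)

for path, times in times_by_path.items():
    print(f"{path}: requests={len(times)} mean={mean(times):.1f}ms median={median(times):.1f}ms")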
|
analogai
|
analogai
Easing the use of analog neural networks.
|
analogainas
|
analogai-nas
AnalogAINAS is a modular and flexible framework to facilitate the implementation of Analog-aware Neural Architecture Search. It offers high-level classes to define: the search space, the accuracy evaluator, and the search strategy. It leverages the aihwkit framework to apply hardware-aware training with analog non-idealities and noise included. Architectures obtained with AnalogAINAS are more robust during inference on analog hardware. We also include two evaluators trained to rank the architectures according to their analog training accuracy.
Setup
While installing the repository, creating a new conda environment is recommended.
git clone https://github.com/IBM/analog-nas/
pip install -r requirements.txt
python setup.py install
Usage
To get started, check out nas_search_demo.py to make sure that the installation went well. This Python script describes how to use the package.
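The class names are not shown in this description; purely as a generic illustration of the search space / accuracy evaluator / search strategy split described above (hypothetical code, not the AnalogAINAS API):
import random

# Hypothetical search space: each candidate architecture is a (depth, width) pair
search_space = [(depth, width) for depth in (2, 4, 8) for width in (32, 64, 128)]

def evaluate(architecture):
    """Stand-in for a trained accuracy/robustness evaluator (returns a dummy score here)."""
    depth, width = architecture
    return 1.0 / (1 + abs(depth - 4)) + width / 256.0

# A simple random-search strategy over the search space
best = max(random.sample(search_space, k=5), key=evaluate)
print("best candidate:", best, "score:", round(evaluate(best), 3))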
|
analogbridge
|
Enable users to import any analog media format directly into your app with the Analog Bridge API
|
analogcaption
|
Analog Image Caption Package
This is a Python package that helps give captions to your images; it is built on the Flickr_8k dataset and trained with the Xception pre-trained model.
How to try this package?
This package is very easy to use and has 2 functions, which are:
extract_features(): this function simply takes the image and extracts the features from it.
generate_desc(): this function then generates a description for the image.
Demo in a Flask app:
# imports added for completeness; the exact import path for extract_features/generate_desc,
# and the model, tokenizer and max_length objects, are assumed to come from your own setup
from flask import Flask, request, render_template
import cv2
import numpy as np

app = Flask(__name__)

@app.route("/generateCaption", methods=["POST"])
def generateCaption():
    image = request.files['image']
    img = image.read()
    # convert string of image data to uint8
    nparr = np.fromstring(img, np.uint8)
    # decode image
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    img = cv2.resize(img, (224, 224))
    photo = extract_features(img)
    # generate description
    caption = generate_desc(model, tokenizer, photo, max_length)
    return render_template("results.html", image=image, caption=caption)
Demo in a Python script:
# import image and extract features (img_path, model, tokenizer and max_length assumed to be defined)
from PIL import Image
import matplotlib.pyplot as plt

photo = extract_features(img_path)
img = Image.open(img_path)
description = generate_desc(model, tokenizer, photo, max_length)
print("\n\n")
print(description)
plt.imshow(img)
Hope this helps you understand the package and use it in a project. We are currently working on more cool features and will do an update when we're done.
|
analog-design
|
Failed to fetch description. HTTP Status Code: 404
|
analogica-probability
|
No description available on PyPI.
|
analogic-core
|
Failed to fetch description. HTTP Status Code: 404
|
analogic-framework
|
No description available on PyPI.
|
analogistics
|
# analogistics
## Analytical Tools for Logistics System Design and Operations Management
This package is developed for analysts and researchers in the field of Supply Chain Management. Several methods are developed to support decisions regarding the design and control of three types of Supply Chain systems:
* Distribution Network
* Storage Systems (i.e. warehouses)
* Production Systems
The subfolder examples contains useful implementations of the methods using sample data.
## Setup
To install the package, use Python >= 3.6, and run: pip install analogistics
On a Windows operating system, you may be required to have Microsoft Visual C++ 14.0 or greater. Please download it from https://visualstudio.microsoft.com/visual-cpp-build-tools/
## License
MIT
|
analogs-finder
|
No description available on PyPI.
|
analogvnn
|
AnalogVNN
Documentation: https://analogvnn.readthedocs.io/
Installation: Install PyTorch, then install AnalogVNN using pip:
# Current stable release for CPU and GPU
pip install analogvnn
# For additional optional features
pip install analogvnn[full]
Usage:
Sample code with AnalogVNN: sample_code.py
Sample code without AnalogVNN: sample_code_non_analog.py
Sample code with AnalogVNN and Logs: sample_code_with_logs.py
Jupyter Notebook: AnalogVNN_Demo.ipynb
Abstract
AnalogVNN is a simulation framework built on PyTorch which can simulate the effects of
optoelectronic noise, limited precision, and signal normalization present in photonic
neural network accelerators. We use this framework to train and optimize linear and
convolutional neural networks with up to 9 layers and ~1.7 million parameters, while
gaining insights into how normalization, activation function, reduced precision, and
noise influence accuracy in analog photonic neural networks. By following the same layer
structure design present in PyTorch, the AnalogVNN framework allows users to convert most
digital neural network models to their analog counterparts with just a few lines of code,
taking full advantage of the open-source optimization, deep learning, and GPU acceleration
libraries available through PyTorch.AnalogVNN Paper:https://arxiv.org/abs/2210.10048Citing AnalogVNNWe would appreciate if you cite the following paper in your publications for which you used AnalogVNN:@article{shah2022analogvnn,title={AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks},author={Shah, Vivswan and Youngblood, Nathan},journal={arXiv preprint arXiv:2210.10048},year={2022}}Or in textual form:Vivswan Shah, and Nathan Youngblood. "AnalogVNN: A fully modular framework for modeling
and optimizing photonic neural networks." arXiv preprint arXiv:2210.10048 (2022).
|
analogVoiceModem
|
analogVoiceModem
|
analphipy
|
analphipy
Utilities to perform metric analysis on fluid pair potentials. The main features of analphipy are as follows:
Overview
analphipy is a Python package to calculate metrics for classical models for pair potentials. It provides a simple and extendable API for pair potential creation. Several routines to calculate metrics are included in the package.
Features
Pre-defined spherically symmetric potentials.
Simple interface to extend to user-defined pair potentials.
Routines to calculate Noro-Frenkel effective parameters.
Routines to calculate Jensen-Shannon divergence.
Status
This package is actively used by the author. Please feel free to create a pull request for wanted features and suggestions!
Quick start
Use one of the following to install analphipy:
pip install analphipy
or
conda install -c conda-forge analphipy
Example usage
# Create a Lennard-Jones potential
>>> import analphipy
>>> p = analphipy.potential.LennardJones(sig=1.0, eps=1.0)
# Get a Noro-Frenkel analysis object
>>> n = p.to_nf()
# Get effective parameters at inverse temperature beta
>>> n.sig(beta=1.0)
1.01560...
>>> n.eps(beta=1.0)
-1.0
>>> n.lam(beta=1.0)
1.44097...
Documentation
See the documentation for a look at analphipy in action.
License
This is free software. See LICENSE.
Contact
The author can be reached at [email protected].
This package was created with Cookiecutter and the wpk-nist-gov/cookiecutter-pypackage project template forked from audreyr/cookiecutter-pypackage.
TODO
Remove # type: ignore from potentials.py, base_potentials.py.
Add back check_untyped_defs to pyproject.toml mypy config.
Remove use of custom_inherit.
Changelog
Changelog for analphipy
Unreleased
See the fragment files in changelog.d.
v0.3.0 — 2023-08-04
Added: Package now available on conda-forge.
Changed: Added better support for mypy/pyright type checking.
v0.2.0 — 2023-05-04
Removed: Removed cached_decorator module. Removed docfiller module.
Added: Now use module-utilities to handle caching and docfilling.
v0.1.0 — 2023-04-24
Changed: Update package layout. New linters via pre-commit. Development env now handled by tox.
v0.0.6 - 2023-03-22
Full set of changes: v0.0.5...v0.0.6
v0.0.5 - 2023-03-22
Full set of changes: v0.0.4...v0.0.5
v0.0.4 - 2022-09-27
Full set of changes: v0.0.3...v0.0.4
v0.0.3 - 2022-09-26
Full set of changes: v0.0.2...v0.0.3
v0.0.2 - 2022-09-26
Full set of changes: v0.0.1...v0.0.2
v0.0.1 - 2022-09-26
This software was developed by employees of the National Institute of Standards
and Technology (NIST), an agency of the Federal Government. Pursuant to title 17
United States Code Section 105, works of NIST employees are not subject to
copyright protection in the United States and are considered to be in the public
domain. Permission to freely use, copy, modify, and distribute this software and
its documentation without fee is hereby granted, provided that this notice and
disclaimer of warranty appears in all copies.THE SOFTWARE IS PROVIDED 'AS IS' WITHOUT ANY WARRANTY OF ANY KIND, EITHER
EXPRESSED, IMPLIED, OR STATUTORY, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY
THAT THE SOFTWARE WILL CONFORM TO SPECIFICATIONS, ANY IMPLIED WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND FREEDOM FROM
INFRINGEMENT, AND ANY WARRANTY THAT THE DOCUMENTATION WILL CONFORM TO THE
SOFTWARE, OR ANY WARRANTY THAT THE SOFTWARE WILL BE ERROR FREE. IN NO EVENT
SHALL NIST BE LIABLE FOR ANY DAMAGES, INCLUDING, BUT NOT LIMITED TO, DIRECT,
INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES, ARISING OUT OF, RESULTING FROM, OR
IN ANY WAY CONNECTED WITH THIS SOFTWARE, WHETHER OR NOT BASED UPON WARRANTY,
CONTRACT, TORT, OR OTHERWISE, WHETHER OR NOT INJURY WAS SUSTAINED BY PERSONS OR
PROPERTY OR OTHERWISE, AND WHETHER OR NOT LOSS WAS SUSTAINED FROM, OR AROSE OUT
OF THE RESULTS OF, OR USE OF, THE SOFTWARE OR SERVICES PROVIDED HEREUNDER.Distributions of NIST software should also include copyright and licensing
statements of any third-party software that are legally bundled with the code in
compliance with the conditions of those licenses.
|
analyse
|
UNKNOWN
|
analyse-exec-utils
|
analyse_exec_utils
A decorator to calculate the execution time of any function.
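The decorator's name is not given in this description; a generic execution-time decorator of the kind described might look like this (hypothetical, not necessarily the analyse-exec-utils API):
import functools
import time

def analyse_exec_time(func):
    """Print how long the wrapped function takes to execute."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.6f}s")
        return result
    return wrapper

@analyse_exec_time
def slow_sum(n):
    return sum(range(n))

slow_sum(1_000_000)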
|
analyseGoogleMyActivity
|
analyseGoogleMyActivity
Generates reports of Sleep Time, Sleep Routine and App Usage using data from Google MyActivity: myactivity.google.com
Sleep data and app usage data are generated on the basis of the following assumptions:
Sleep Data:
Bed Time: the time at which the user stops using the phone and goes to bed.
Wake Up Time: after bed time, the first time at which the user starts using the phone.
Time Spent on an App: the difference in time between the opening time of the app and the next app.
First download the Google Activity data file by following these steps:
1. Login to Google account. Go to: https://takeout.google.com/
2. Under "Select data to include", Click on "Deselect all" .
3. Scroll Down and Select "My Activity" . Click on "Multiple formats". In "Activity records, Choose 'JSON' & then 'ok'.
4. Scroll Down. Click on "Next Step" and then on "Create Export".
5. Wait for the Google Data Download mail to arrive in your Gmail. Download the Zip file.
Installation: pip install analyseGoogleMyActivity
Requirements: "numpy", "pandas", "matplotlib"
Usage:
By default, looks for the latest Takeout zip in the current working directory:
from analyseGoogleMyActivity import androidReport
reports = androidReport()
Directly pass the Takeout zip to the parameter file (pass its path also if the zip file is not in the current working directory):
reports = androidReport(file='takeout-2020XXXXTXXXXXXZ-001.zip')
Parameters:
file : str, optional
Pass MyActivity JSON file or Takeout zip file with path. The default is 'MyActivity.json'.
apps : int or list , optional
No. of Top Apps or List of Apps to find usage for. The default is 12.
timezone : str
Pass the timezone region code. The default is 'in' for Indian Standard Time (IST).
excludeapps : List
List of app names to Exclude from App Usage calculation. The default is ['com.miui.home' ].
idealsleeptime : int
Ideal Sleep Time. The default is 8.
inlineimg : 0 or 1,
To include image in the Report itself or not. The default is 1.
showmarkerday: 0 or 1,
To show day on each marker in sleep routine graphs. The default is 0.
output : 0 or 1, optional
If 1 , Returns Dictionary with Results in Pandas DataFrames, otherwise returns Reports names. The default is 1.
yeartabs : 0 or 1, optional
To Show Year & its Data in Tabs, The default is 1.
verbose : 0 or 1, optional
Shows Additional Progess during Report Generation. The default is 0.
Returns
Dictionary if Parameter output = 1 OR
Tuple having Generated Report names if output = 0
-------
Dictionary with Following Keys having Values
'AppUsage': Time at which an App is opened, Pandas DataFrame
'AppDailyUsage': Day wise data of App opened, Pandas DataFrame
'SleepData': Bed Time & WakeUp Time with Sleep Duration , Pandas DataFrame
'SleepYearlyTable': Yearly Stats of Sleep Time & Sleep Routine
'SleepMonthlyTable': Monthly Stats of Sleep Time & Sleep Routine
'AppYearlyTable': Yearly App Usage Stats
'AppMonthlyTable': Monthly App Usage Stats
__________________________________________________________
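Putting the documented parameters together, a call might look like this (the Takeout file name is a placeholder and the values are examples only):
from analyseGoogleMyActivity import androidReport

reports = androidReport(
    file='takeout-2020XXXXTXXXXXXZ-001.zip',  # placeholder Takeout zip name
    apps=['YouTube', 'Chrome'],               # a list of apps instead of a top-N count
    timezone='in',
    idealsleeptime=8,
    inlineimg=1,
    output=1,                                 # return a dictionary of DataFrames
)
sleep_df = reports['SleepData']               # Bed Time & WakeUp Time with Sleep Duration
app_usage_df = reports['AppUsage']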
|
analyse-obfuscation
|
Windows Command-Line ObfuscationBackgroundanalyse_obfuscationis a python3 module for finding common command-line obfuscation techniques for a given program, as described inthisblog post.By providing one or more commands,analyse_obfuscationwill test if the following obfuscation techniques can be applied:Option Char substitutione.g.ping -n 1 localhost==ping /n 1 localhostCharacter substitutione.g.reg eˣport HKCU out.reg==reg export HKCU out.regCharacter insertione.g.wevtutil gࢯli (…)==wevtutil gli (…)Quotes insertione.g.netsh ad"vfi"rewall show (…)==netsh advfirewall show (…)Shorthandse.g.powershell /encod (…)==powershell /encodedcommand (…)GoalsNote that the goal of this project is to show that a given executable/command line can be obfuscated, not to give a complete list of possible obfuscations for a given command. It should however be possible to derive different obfuscation opportunities fromanalyse_obfuscation's output.Blue teamers 🔵 may want to use this tool, for example, to check if an executable they have written a detection rule is vulnerable to command-line obfuscation, meaning the rule should be improved or additional rules are needed. Note that in some cases this game is unwinnable - please take a look at the recommendations in theblog postfor suggestions on how to tackle this.Red teamers 🔴 may want to use this tool to find opportunities for bypassing simple detection rules.UsageRunThe simplest way to use this project is by running it (without installation).Run script: clone the entire repository, install all dependencies (pip3 install -r requirements.txt) and run via:python3-manalyse_obfuscation.run--helpInstallBy installing the project, it will be possible to simply callanalyse_obfuscationfrom the command line.Via PyPI: install the application via for example pip:pip3installanalyse_obfuscationFrom source: you can install a local version of the module by cloning the entire repository, followed by these commands:(note that this requiressetuptoolsto be installed)python3setup.pysdistbdist_wheel
pip3installdist/analyse_obfuscation-*-py3-none-any.whl--upgradeExamples(Screenshot)Each execution generates a high-level result overview on the stdout, as can be seen in the screenshot. Additionally a .log file providing examples of commands found to be working is created. Sample report files generated by the below commands can be found in thesample_results/folder.# Check simple 'ping' commandanalyse_obfuscation--command"ping /n 1 localhost"# Check 'net share' command using {random}, which will be replaced by random string for each executionanalyse_obfuscation--command"net share x=c:\ /remark:{random}"# Check 'powershell /encodedcommand' command with increased timeout, as executions tend to take longanalyse_obfuscation--command"powershell /encodedcommand ZQBjAGgAbwAgACIAQAB3AGkAZQB0AHoAZQAiAA=="--timeout5# Check 'systeminfo' command by only looking at the exit code, not the output - since every output will be different due to (changing) timestampsanalyse_obfuscation--command"systeminfo /s localhost"--timeout5--exit_code_only# Check all commands as specified in sample.json, saving all reports in 'reports/'analyse_obfuscation--json_filesample/sample.json--report_dirreports/Notethat the results may contain false positives - especially when single-character command-line options are being tested (such as/ninping /n 1 localhost). In such cases, character insertion (method 3) may contain whitespace characters, which doesn't really 'count' as insertion character as whitespaces between command-line arguments are usually filtered out anyway. Similarly, character substitution (method 2) may change the entire option: e.g.ping /s 1 localhostandping /r 1 localhostare functionally different, but happen to give the same output.All optionsAll command-line options of this project can be requested by using the--helpoption:usage: analyse_obfuscation [--threads n] [--verbose] [--report_dir c:\path\to\dir] [--log_file c:\path\to\file.log] [--help] [--command "proc /arg1 /arg2"] [--range {full,educated,ascii,custom}] [--custom_range 0x??..0x?? [0x??..0x?? ...]] [--char_offset n] [--post_command process_name] [--exit_code_only] [--timeout n] [--json_file c:\path\to\file.jsonl]
Tool for identifying executables that have command-line options that can be obfuscated.
required arguments (either is required):
--command "proc /arg1 /arg2"
Single command to test
--json_file c:\path\to\file.jsonl
Path to JSON file (JSON Line formatted) containing commands config
optional --command arguments:
--range {full,educated,ascii,custom}
Character range to scan (default=educated)
--custom_range 0x??..0x?? [0x??..0x?? ...]
Range to scan
--char_offset n Character position used for insertion and replacement
--post_command process_name
Command to run unconditionally after each attempt (e.g. to clean up)
--exit_code_only Only base success on the exit code (and not the output of the command)
--timeout n Number of seconds per execution before timing out.
optional arguments:
--threads n Number of threads to use
--verbose Increase output verbosity
--report_dir c:\path\to\dir
Path to save report files to
--log_file c:\path\to\file.log
Path to save log to
  --help                Show this help message and exit
Repository Contents
Item - Description
analyse_obfuscation/ - Code for python3 module, enabling one to analyse executables for common command-line obfuscation techniques.
sample/ - Sample config file to analyse built-in Windows executables, as well as related input files. Used to generate results in the above folder.
sample_results/ - Report files generated using the JSONL file in the above sample folder.
|
analyser
|
Analyser v1.0
The analyser Python package is designed to fetch and analyze signal data. It interfaces with a web API to retrieve signals and their values, offering functionality to calculate statistical measures like mean and standard deviation. It supports filtering by groups, ensuring tailored analyses. The package also includes efficient data handling with batch retrieval, making it suitable for large datasets.
Environment setup
Create a virtual environment with Python version > 3.8.5: python3 -m venv .venv
Activate the virtual environment: source .venv/bin/activate
Install requirements: pip install -r requirements.txt
Run all tests: pytest -v -s
Usage (refer to usage.md)
Installation (refer to installation.md)
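The web API and class names are not documented in this description; as a minimal sketch of the mean and standard deviation calculation it mentions, assuming the signal values have already been retrieved as a list of floats:
from statistics import mean, stdev

# Hypothetical batch of values fetched for one signal
signal_values = [0.8, 1.1, 0.9, 1.4, 1.0]

print("mean:", round(mean(signal_values), 3))
print("std dev:", round(stdev(signal_values), 3))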
|
analyser-hj3415
|
analyser_hj3415
analyser_hj3415 manages the database.
Manual
The analyser_hj3415 module consists of three parts.
1. setting module
The setting module activates the databases and configures their addresses. It saves the database addresses and whether each database is enabled to a file.
from analyser_hj3415 import setting
# Return the current database state as a DbSetting class instance.
db_setting = setting.load_df()
# Print the current database state
print(db_setting)
# Change the MongoDB address (two ways)
setting.chg_mongo_addr('mongodb://192.168.0.173:27017')
db_setting.mongo_addr = 'mongodb://192.168.0.173:27017'
# Change the sqlite3 path (two ways)
setting.chg_sqlite3_path('/home/hj3415/Stock/_db')
db_setting.sqlite3_path = '/home/hj3415/Stock/_db'
# Reset the databases to their default values.
# DEF_MONGO_ADDR = 'mongodb://localhost:27017'
# DEF_WIN_SQLITE3_PATH = 'C:\\_db'
# DEF_LINUX_SQLITE3_PATH = '/home/hj3415/Stock/_db'
setting.set_default()
# Enable or disable each database
setting.turn_on_mongo()
setting.turn_off_mongo()
setting.turn_off_sqlite3()
setting.turn_on_sqlite3()
2. mongo module
A module of functions for using MongoDB as the database. MongoDB is currently the default database, because the package does not work correctly when MongoDB is disabled.
1) Base class
The base class for all database classes; it is not used directly.
from analyser_hj3415.mongo import Base
base = Base(db='mi', col='kospi')
# Change the db address. Note that this is set temporarily inside the class, not saved to the settings file.
base.chg_addr('mongodb://192.168.0.173:27017')
# Return the currently configured db address, db name and collection.
base.get_status()  # ('mongodb://192.168.0.173:27017', 'mi', 'kospi')
# Database management functions
base.get_all_db()
2 - 1) Corps class
Base class for ticker-related data in the DB; the db name is a 6-digit numeric code.
from analyser_hj3415.mongo import Corps
corps = Corps(code='005930', page='c101')
# Change the code. Verifies that it is a 6-digit number before setting it.
corps.chg_code('005490')
# Change the page. Validates the page name before setting it.
# ('c101', 'c104y', 'c104q', 'c106', 'c108', 'c103손익계산서q', 'c103재무상태표q', 'c103현금흐름표q', 'c103손익계산서y', 'c103재무상태표y', 'c103현금흐름표y', 'dart')
corps.chg_page(page='c108')
# Database management functions
corps.get_all_codes()
corps.del_all_codes()
corps.drop_corp(code='005930')
corps.get_all_pages()
corps.drop_all_pages(code='005930')
corps.drop_page(code='005930', page='c101')
corps.get_all_item()
2 - 2) MI class
Class for market index data in the DB.
from analyser_hj3415.mongo import MI
mi = MI(index='kospi')
# Change the index. Validates the index name before setting it.
# ('aud', 'chf', 'gbond3y', 'gold', 'silver', 'kosdaq', 'kospi', 'sp500', 'usdkrw', 'wti', 'avgper', 'yieldgap', 'usdidx')
mi.chg_index(index='gold')
# Return the most recently saved value
mi.get_recent()
# Save data.
mi.save(mi_dict={'date': '2021.07.21', 'value': '1154.50'})
# Database management functions
mi.get_all_indexes()
mi.drop_all_indexes()
mi.drop_index(index='silver')
mi.get_all_item()
2 - 3) DartByDate class
When a dart dataframe is extracted by the dart module of dart_hj3415, this class stores it in a per-date collection.
from dart_hj3415 import dart
from analyser_hj3415.mongo import DartByDate
date = '20210812'
dart_db = DartByDate(date=date)
# Extract today's dart dataframe and save it to the database
df = dart.get_df(edate=date)
dart_db.save(df)
# Return the disclosure data as a dataframe.
dart_db.get_data()
dart_db.get_data(title='임원ㆍ주요주주특정증권등소유상황보고서')
2 - 4) EvalByDate class
Used to save or load the eval dataframe extracted by the eval module of eval_hj3415.
(In practice, eval_hj3415.eval.make_today_eval_df() always saves today's dataframe.)
import pandas as pd
import datetime
from analyser_hj3415.mongo import EvalByDate
today_str = datetime.datetime.today().strftime('%Y%m%d')
eval_db = EvalByDate(date=today_str)
# Extract today's dataframe and save it to the database
eval_db.save(pd.DataFrame())
# Return the disclosure data as a dataframe.
eval_db.get_data()
2 - 5) Noti class
Class used when the analysis module of dart_hj3415 analyzes disclosures, notifies about meaningful ones, and stores the notification history.
from analyser_hj3415.mongo import Noti
noti_db = Noti()
# Pass the notification data to be stored as a dictionary and save it to the database
data = {'code': '005930',
        'rcept_no': '20210514000624',
        'rcept_dt': '20210514',
        'report_nm': '임원ㆍ주요주주특정증권등소유상황보고서',
        'point': 2,
        'text': '등기임원이 1.0억 이상 구매하지 않음.'}
noti_db.save(noti_dict=data)
# Clean up old notification data
noti_db.cleaning_data(days_ago=15)
=======================================================================================
3) Corps
C101 page management class.
from analyser_hj3415.mongo import C101
c101 = C101(code='005930')
...
The implemented classes are C101, C108, C106, C103, C104.
3. sqlite module
A module of functions for using sqlite3 as the database. It currently does not work because sqlite3 is not in use.
from analyser_hj3415 import sqlite
|
analysestock
|
Failed to fetch description. HTTP Status Code: 404
|
analyse-stock
|
analyse - a powerful stock data analysis and manipulation library for Python
analyse is a Python package providing fast, flexible, and expressive data structures designed to analyse stocks in the Hong Kong, US, and China markets. It has the broader goal of becoming the most powerful and flexible open-source data analysis / manipulation tool for analyzing stocks. It is already well on its way toward this goal.
Main Features
Here are just a few of the things that analyse does well:
- get raw data from tdx
- calculate pe, pe_ttm
- calculate growth
- calculate current vix
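The function names are not listed in this description; the calculations it names are standard, for example (hypothetical helpers, not the analyse-stock API):
def pe_ratio(price, earnings_per_share):
    """Price-to-earnings ratio; use trailing-twelve-month EPS for PE (TTM)."""
    return price / earnings_per_share

def growth_rate(current, previous):
    """Period-over-period growth, e.g. for earnings or revenue."""
    return (current - previous) / previous

print(pe_ratio(100.0, 5.0))    # 20.0
print(growth_rate(5.5, 5.0))   # 0.1, i.e. 10% growth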
|
analysetool
|
No description available on PyPI.
|
analyse-wo
|
No description available on PyPI.
|
analysis
|
The analysis package provides support for the analysis of Python source code
beyond that provided by the compiler standard library package.
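As a baseline for comparison, the standard library's ast module already allows simple source-code analysis; the snippet below uses only the standard library (it does not use the analysis package itself):
import ast

source = "def add(a, b):\n    return a + b\n"
tree = ast.parse(source)

# Count function definitions and report their names
functions = [node.name for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)]
print("functions defined:", functions)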
|
analysisapi
|
Analysis class that contains a deftable and a timeseries table
|
analysisbykwok
|
No description available on PyPI.
|
analysis-engine
|
analysis_engine
Software for portfolio management reporting and analysis in the UK Department for Transport, operated via command line interface (CLI) prompts.
Installing
Python must be installed on your computer. If not already installed, it can be installed via the python website here. IMPORTANT: ensure that Add Python to PATH is ticked when provided with the option as part of the installation wizard.
Open the command line terminal (Windows) or bash shell and install via pip install analysis_engine.
Directories, file paths and poppler
In order to operate, the correct directories and files must be set up and saved on the user's computer. analysis_engine is able to handle different operating systems. Create the following directories in your My Documents directory:
|-- ipdc
|-- core_data
|-- json
|-- input
|-- output
|-- top250
|-- core_data
|-- json
|-- input
    |-- output
For each reporting process, e.g. ipdc and top250, the respective core_data directories require:
excel master data files;
excel project information file; and,
a confi.ini file. This file lists the master data and project information file names.
As a minimum the input folder should have the following documents: summary_temp.docx, summary_temp_landscape.docx. In addition ipdc\input should have the dashboards_master.xlsx file.
All outputs from analysis_engine will be saved into the output directory.
The json folder is where analysis_engine saves master data in an easily accessible format (.json) and after setup can be ignored by the user.
Unfortunately there is one further manual installation, related to a package within analysis_engine which enables high quality rendering of graphical outputs to word documents. On Windows do the following:
Download a zip of the poppler release from this link: https://github.com/oschwartz10612/poppler-windows/releases/download/v21.03.0/Release-21.03.0.zip
Unzip and move the whole directory to My Documents.
Add the poppler bin directory to PATH following these instructions: https://www.architectryan.com/2018/03/17/add-to-the-path-on-windows-10/
Reboot computer.
Mac users should follow the instructions here: https://pypi.org/project/pdf2image/
Most Linux distributions should not require any manual installation.
Operating analysis_engine
To operate analysis_engine the user must enter the initial command analysis followed by a subcommand to specify the reporting process, e.g. ipdc or top250, and then finally an analytical output argument, the options for which are set out below. NOTE: the --help option is available throughout the entire command line prompt construction process and the user should use it for guidance on what subcommands and arguments are available for use.
analysis_engine currently has the following arguments:
initiate: the user must enter this command every time master data, contained in the core_data directory, is updated. The initiate command checks and validates the data in a number of ways.
dashboards: populates the IPDC PfM report dashboard. A blank template dashboard must be saved in the ipdc/input directory. (Not currently available for top250.)
dandelion: produces the portfolio dandelion info-graphic.
costs: produces a cost profile trend graph and data. (Not currently available for top250.)
milestones: produces milestone schedule graphs and data.
vfm: produces vfm data. (Not currently available for top250.)
summaries: produces project summary reports.
risks: produces risk data. (Not currently available for top250.)
dcas: produces dca data. (Not currently available for top250.)
speedial: prints out changes in project dca ratings. (Not currently available for top250.)
query: returns (from master data) specific data required by the user.
The default for each argument is to return outputs with current and last quarter data. Further to each argument the user can specify one or many further optional_arguments to alter the analytical output produced. There are many optional_arguments available, which vary for each argument, and the user should use the --help option to specify those that are available.
|
analysisflow
|
analysisflow
In development.
|
analysishelper
|
The help you need when doing data analysis.
Build from source
Clone this repository.
Install dependencies: pip3 install -r requirements.txt
Run setup from the repository root directory: python setup.py install
|
analysis_project_root
|
analysis-project-root
Reset project root for analysis projects.
For analysis projects it is sometimes useful to have scripts that can import from lots of relative locations in a project;
root/
sub1/
a.py
sub2/
b.py
sub3/
    c.py
This is a very simple package that solves the problem for me and hides away the unpleasant code to add to sys.path and change the working directory.
To install the package: pip install analysis-project-root
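The unpleasant code it hides away is essentially the usual manual fix-up of sys.path and the working directory, roughly like this (illustrative only, not the package's actual implementation):
import os
import sys

# Manual version of what a project-root helper typically does: walk up from this file
# to the assumed project root, put it on sys.path, and make it the working directory.
project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", ".."))
if project_root not in sys.path:
    sys.path.insert(0, project_root)
os.chdir(project_root)

from sub1 import a  # imports from sibling sub-packages in the example layout now resolve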
|
analysis-runner
|
Analysis runnerThis tool helps tomake analysis results reproducible,
by automating the following aspects:Allow quick iteration using an environment that resembles production.Only allow access to production datasets through code that has been reviewed.Link the output data with the exact program invocation of how the data has
been generated.One of our main workflow pipeline systems at the CPG isHail Batch. By default, its
pipelines are defined by running a Python programlocally. This tool instead lets you run the "driver" on Hail Batch itself.Furthermore, all invocations are logged together with the output data, as well asAirtableand the sample-metadata server.When using the analysis-runner, the batch jobs are not run under your standard
Hail Batchservice account user(<USERNAME>-trial). Instead, a separate Hail Batch account is
used to run the batch jobs on your behalf. There's a dedicated Batch service
account for each dataset (e.g. "tob-wgs", "fewgenomes") and access level
("test", "standard", or "full", as documented in the team docsstorage policies),
which helps with bucket permission management and billing budgets.Note that you can use the analysis-runner to start arbitrary jobs, e.g. R scripts. They're just launched in the Hail Batch environment, but you can use any Docker image you like.The analysis-runner is also integrated with our Cromwell server to run WDL based workflows.CLIThe analysis-runner CLI can be used to start pipelines based on a GitHub repository,
commit, and command to run.First, make sure that your environment provides Python 3.10 or newer:>python3--version
Python3.10.7If the installed version is too old, on a Mac you can usebrewto update. E.g.:[email protected] install theanalysis-runnerPython package usingpip:python3-mpipinstallanalysis-runnerRunanalysis-runner --helpto see usage information.Make sure that you're logged into GCP:gcloudauthapplication-defaultloginIf you're in the directory of the project you want to run, you can omit the--commitand--repositoryparameters, which will use your current git remote and
commit HEAD.For example:analysis-runner\--dataset<dataset>\--description<description>\--access-level<level>\--output-dir<directory-within-bucket>\script_to_run.pywitharguments<level>corresponds to anaccess levelas defined in the storage policies.<directory-within-bucket>doesnotcontain a prefix likegs://cpg-fewgenomes-main/. For example, if you want your results to be stored ings://cpg-fewgenomes-main/1kg_pca/v2, specify--output-dir 1kg_pca/v2.If you provide a--repository, you MUST supply a--commit <SHA>, e.g.:analysis-runner\--repositorymy-approved-repo\--commit<commit-sha>\--dataset<dataset>\--description<description>\--access-level<level>--output-dir<directory-within-bucket>\script_to_run.pywithargumentsFor more examples (including for running an R script and dataproc), see theexamplesdirectory.GitHub AuthenticationIf you are submitting an analysis-runner job that needs to clone a private repository owned by populationgenomics on GitHub (eg submitting a script to analysis-runner from a private repository), please make sure that your configuration file contains the following section:[infrastructure]git_credentials_secret_name='<ask_software_team_for_secret_name>'git_credentials_secret_project='<ask_software_team_for_secret_project>'If you are specifying multiple configuration files, please make sure that this section appears in the final right-most config to avoid these settings being overwritten.Custom Docker imagesThe default driver image that's used to run scripts comes with Hail and some statistics libraries preinstalled (see the correspondingHail Dockerfile). It's possible to use any custom Docker image instead, using the--imageparameter. Note that any such image needs to contain the critical dependencies as specified in thebaseimage.For R scripts, we add the R-tidyverse set of packages to the base image, see the correspondingR Dockerfileand theR examplefor more details.Helper for Hail BatchThe analysis-runner package has a number of functions that make it easier to run reproducible analysis through Hail Batch.This is installed in the analysis runner driver image, ie: you can access the analysis_runner module when running scripts through the analysis-runner.Checking out a git repository at the current commitfromcpg_utils.hail_batchimportget_batchfromanalysis_runner.gitimport(prepare_git_job,get_repo_name_from_current_directory,get_git_commit_ref_of_current_repository,)b=get_batch(name='do-some-analysis')j=b.new_job('checkout_repo')prepare_git_job(job=j,# you could specify a name here, like 'analysis-runner'repo_name=get_repo_name_from_current_directory(),# you could specify the specific commit here, eg: '1be7bb44de6182d834d9bbac6036b841f459a11a'commit=get_git_commit_ref_of_current_repository(),)# Now, the working directory of j is the checkout out repositoryj.command('examples/bash/hello.sh')Running a dataproc scriptfromcpg_utils.hail_batchimportget_batchfromanalysis_runner.dataprocimportsetup_dataprocb=get_batch(name='do-some-analysis')# starts up a cluster, and submits a script to the cluster,# see the definition for more information about how you can configure the cluster# https://github.com/populationgenomics/analysis-runner/blob/main/analysis_runner/dataproc.py#L80cluster=dataproc.setup_dataproc(b,max_age='1h',packages=['click','selenium'],cluster_name='My Cluster with max-age=1h',)cluster.add_job('examples/dataproc/query.py',job_name='example')DevelopmentYou can ignore this section if you just want to run the tool.To set up a development environment for the analysis runner 
using pip, run
the following:pipinstall-rrequirements-dev.txt
pre-commitinstall--install-hooks
pipinstall--editable.DeploymentAdd a Hail Batch service account for all supported datasets.Copy the Hail tokensto the Secret Manager.Deploy theserverby invoking thedeploy_serverworkflowmanually.Deploy theAirtablepublisher.Publish theCLI tool and libraryto PyPI.The CLI tool is shipped as a pip package. To build a new version,
we usebump2version.
For example, to increment the patch section of the version tag 1.0.0 and make
it 1.0.1, run:gitcheckout-badd-new-version
bump2versionpatch
gitpush--set-upstreamoriginadd-new-version# Open pull requestopen"https://github.com/populationgenomics/analysis-runner/pull/new/add-new-version"It's important the pull request name start with "Bump version:" (which should happen
by default). Once this is merged intomain, a GitHub action workflow will build a
new package that will be uploaded to PyPI, and become available to install withpip install.
|
analysis-runner-ms
|
Analysis runnerThis tool helps tomake analysis results reproducible,
by automating the following aspects:Allow quick iteration using an environment that resembles production.Only allow access to production datasets through code that has been reviewed.Link the output data with the exact program invocation of how the data has
been generated.One of our main workflow pipeline systems at the CPG isHail Batch. By default, its
pipelines are defined by running a Python programlocally. This tool instead lets you run the "driver" on Hail Batch itself.Furthermore, all invocations are logged together with the output data, as well asAirtableand the sample-metadata server.When using the analysis-runner, the batch jobs are not run under your standard
Hail Batchservice account user(<USERNAME>-trial). Instead, a separate Hail Batch account is
used to run the batch jobs on your behalf. There's a dedicated Batch service
account for each dataset (e.g. "tob-wgs", "fewgenomes") and access level
("test", "standard", or "full", as documented in the team docsstorage policies),
which helps with bucket permission management and billing budgets.Note that you can use the analysis-runner to start arbitrary jobs, e.g. R scripts. They're just launched in the Hail Batch environment, but you can use any Docker image you like.The analysis-runner is also integrated with our Cromwell server to run WDL based workflows.CLIThe analysis-runner CLI can be used to start pipelines based on a GitHub repository,
commit, and command to run.First, make sure that your environment provides Python 3.10 or newer:>python3--version
Python3.10.7If the installed version is too old, on a Mac you can usebrewto update. E.g.:[email protected] install theanalysis-runnerPython package usingpip:python3-mpipinstallanalysis-runnerRunanalysis-runner --helpto see usage information.Make sure that you're logged into GCP:gcloudauthapplication-defaultloginIf you're in the directory of the project you want to run, you can omit the--commitand--repositoryparameters, which will use your current git remote and
commit HEAD.For example:analysis-runner\--dataset<dataset>\--description<description>\--access-level<level>\--output-dir<directory-within-bucket>\script_to_run.pywitharguments<level>corresponds to anaccess levelas defined in the storage policies.<directory-within-bucket>doesnotcontain a prefix likegs://cpg-fewgenomes-main/. For example, if you want your results to be stored ings://cpg-fewgenomes-main/1kg_pca/v2, specify--output-dir 1kg_pca/v2.If you provide a--repository, you MUST supply a--commit <SHA>, e.g.:analysis-runner\--repositorymy-approved-repo\--commit<commit-sha>\--dataset<dataset>\--description<description>\--access-level<level>--output-dir<directory-within-bucket>\script_to_run.pywithargumentsFor more examples (including for running an R script and dataproc), see theexamplesdirectory.Custom Docker imagesThe default driver image that's used to run scripts comes with Hail and some statistics libraries preinstalled (see the correspondingHail Dockerfile). It's possible to use any custom Docker image instead, using the--imageparameter. Note that any such image needs to contain the critical dependencies as specified in thebaseimage.For R scripts, we add the R-tidyverse set of packages to the base image, see the correspondingR Dockerfileand theR examplefor more details.Helper for Hail BatchThe analysis-runner package has a number of functions that make it easier to run reproducible analysis through Hail Batch.This is installed in the analysis runner driver image, ie: you can access the analysis_runner module when running scripts through the analysis-runner.Checking out a git repository at the current commitimporthailtop.batchashbfromcpg_utils.gitimport(prepare_git_job,get_repo_name_from_current_directory,get_git_commit_ref_of_current_repository,)b=hb.Batch('do-some-analysis')j=b.new_job('checkout_repo')prepare_git_job(job=j,organisation='populationgenomics',# you could specify a name here, like 'analysis-runner'repo_name=get_repo_name_from_current_directory(),# you could specify the specific commit here, eg: '1be7bb44de6182d834d9bbac6036b841f459a11a'commit=get_git_commit_ref_of_current_repository(),)# Now, the working directory of j is the checkout out repositoryj.command('examples/bash/hello.sh')Running a dataproc scriptimporthailtop.batchashbfromanalysis_runner.dataprocimportsetup_dataprocb=hb.Batch('do-some-analysis')# starts up a cluster, and submits a script to the cluster,# see the definition for more information about how you can configure the cluster# https://github.com/populationgenomics/analysis-runner/blob/main/analysis_runner/dataproc.py#L80cluster=dataproc.setup_dataproc(b,max_age='1h',packages=['click','selenium'],init=['gs://cpg-common-main/hail_dataproc/install_common.sh'],cluster_name='My Cluster with max-age=1h',)cluster.add_job('examples/dataproc/query.py',job_name='example')DevelopmentYou can ignore this section if you just want to run the tool.To set up a development environment for the analysis runner using pip, run
the following:pipinstall-rrequirements-dev.txt
pre-commitinstall--install-hooks
pipinstall--editable.DeploymentAdd a Hail Batch service account for all supported datasets.Copy the Hail tokensto the Secret Manager.Deploy theserverby invoking thedeploy_serverworkflowmanually.Deploy theAirtablepublisher.Publish theCLI tool and libraryto PyPI.The CLI tool is shipped as a pip package. To build a new version,
we usebump2version.
For example, to increment the patch section of the version tag 1.0.0 and make
it 1.0.1, run:gitcheckout-badd-new-version
bump2versionpatch
gitpush--set-upstreamoriginadd-new-version# Open pull requestopen"https://github.com/populationgenomics/analysis-runner/pull/new/add-new-version"It's important the pull request name start with "Bump version:" (which should happen
by default). Once this is merged intomain, a GitHub action workflow will build a
new package that will be uploaded to PyPI, and become available to install withpip install.
|
analysisstore
|
Mongo backed data analysis tracker service
|
analysistoolbox
|
Analysis Tool BoxDescriptionAnalysis Tool Box (i.e. "analysistoolbox") is a collection of tools in Python for data collection and processing, statisitics, analytics, and intelligence analysis.Getting StartedTo install the package, run the following command in the root directory of the project:pipinstallanalysistoolboxVisualizations are created using the matplotlib and seaborn libraries. While you can select whichever seaborn style you'd like, the following Seaborn style tends to get the best looking plots:sns.set(style="white",font="Arial",context="paper")UsageThere are many modules in the analysistoolbox package, each with their own functions. The following is a list of the modules:CalculusData collectionData processingDescriptive analyticsFile managementHypothesis testingLinear algebraPredictive analyticsStatisticsVisualizationsCalculusThere are several functions in the Calculus submodule. The following is a list of the functions:FindDerivativeFindLimitOfFunctionFindMinimumSquareLossPlotFunctionFindDerivativeTheFindDerivativefunction calculates the derivative of a given function. It uses the sympy library, a Python library for symbolic mathematics, to perform the differentiation. The function also has the capability to print the original function and its derivative, return the derivative function, and plot both the original function and its derivative.# Load the FindDerivative function from the Calculus submodulefromanalysistoolbox.calculusimportFindDerivativeimportsympy# Define a symbolic variablex=sympy.symbols('x')# Define a functionf_of_x=x**3+2*x**2+3*x+4# Use the FindDerivative functionFindDerivative(f_of_x,print_functions=True,return_derivative_function=True,plot_functions=True)FindLimitOfFunctionTheFindLimitOfFunctionfunction finds the limit of a function at a specific point and optionally plot the function and its tangent line at that point. The script uses the matplotlib and numpy libraries for plotting and numerical operations respectively.# Import the necessary librariesfromanalysistoolbox.calculusimportFindLimitOfFunctionimportnumpyasnpimportsympy# Define a symbolic variablex=sympy.symbols('x')# Define a functionf_of_x=np.sin(x)/x# Use the FindLimitOfFunction functionFindLimitOfFunction(f_of_x,point=0,step=0.01,plot_function=True,x_minimum=-10,x_maximum=10,n=1000,tangent_line_window=1)FindMinimumSquareLossTheFindMinimumSquareLossfunction calculates the minimum square loss between observed and predicted values. This function is often used in machine learning and statistics to measure the average squared difference between the actual and predicted outcomes.# Import the necessary librariesfromanalysistoolbox.calculusimportFindMinimumSquareLoss# Define observed and predicted valuesobserved_values=[1,2,3,4,5]predicted_values=[1.1,1.9,3.2,3.7,5.1]# Use the FindMinimumSquareLoss functionminimum_square_loss=FindMinimumSquareLoss(observed_values,predicted_values,show_plot=True)# Print the minimum square lossprint(f"The minimum square loss is:{minimum_square_loss}")PlotFunctionThePlotFunctionfunction plots a mathematical function of x. It takes a lambda function as input and allows for customization of the plot.# Import the necessary librariesfromanalysistoolbox.calculusimportPlotFunctionimportsympy# Set x as a symbolic variablex=sympy.symbols('x')# Define the function to plotf_of_x=lambdax:x**2# Plot the function with default settingsPlotFunction(f_of_x)Data CollectionThere are several functions in the Data Collection submodule. 
The following is a list of the functions:ExtractTextFromPDFFetchPDFFromURLFetchUSShapefileFetchWebsiteTextGetGoogleSearchResultsGetZipFileExtractTextFromPDFTheExtractTextFromPDFfunction extracts text from a PDF file, cleans it, then saves it to a text file.# Import the functionfromanalysistoolbox.data_collectionimportExtractTextFromPDF# Call the functionExtractTextFromPDF(filepath_to_pdf="/path/to/your/input.pdf",filepath_for_exported_text="/path/to/your/output.txt",start_page=1,end_page=None)FetchPDFFromURLTheFetchPDFFromURLfunction downloads a PDF file from a URL and saves it to a specified location.# Import the functionfromanalysistoolbox.data_collectionimportFetchPDFFromURL# Call the function to download the PDFFetchPDFFromURL(url="https://example.com/sample.pdf",filename="C:/folder/sample.pdf")FetchUSShapefileTheFetchUSShapefilefunction fetches a geographical shapefile from the TIGER database of the U.S. Census Bureau.# Import the functionfromanalysistoolbox.data_collectionimportFetchUSShapefile# Fetch the shapefile for the census tracts in King County, Washington, for the 2021 census yearshapefile=FetchUSShapefile(state='PA',county='Allegheny',geography='tract',census_year=2021)# Print the first few rows of the shapefileprint(shapefile.head())FetchWebsiteTextTheFetchWebsiteTextfunction fetches the text from a website and saves it to a text file.# Import the functionfromanalysistoolbox.data_collectionimportFetchWebsiteText# Call the functiontext=FetchWebsiteText(url="https://www.example.com",browserless_api_key="your_browserless_api_key")# Print the fetched textprint(text)GetGoogleSearchResultsTheGetGoogleSearchResultsfunction fetches Google search results for a given query using the Serper API.# Import the functionfromanalysistoolbox.data_collectionimportGetGoogleSearchResults# Call the function with the query# Make sure to replace 'your_serper_api_key' with your actual Serper API keyresults=GetGoogleSearchResults(query="Python programming",serper_api_key='your_serper_api_key',number_of_results=5,apply_autocorrect=True,display_results=True)# Print the resultsprint(results)GetZipFileTheGetZipFilefunction downloads a zip file from a url and saves it to a specified folder. It can also unzip the file and print the contents of the zip file.# Import the functionfromanalysistoolbox.data_collectionimportGetZipFile# Call the functionGetZipFile(url="http://example.com/file.zip",path_to_save_folder="/path/to/save/folder")Data ProcessingThere are several functions in the Data Processing submodule. 
The following is a list of the functions:AddDateNumberColumnsAddLeadingZerosAddRowCountColumnAddTPeriodColumnAddTukeyOutlierColumnCleanTextColumnsConductAnomalyDetectionConductEntityMatchingConvertOddsToProbabilityCountMissingDataByGroupCreateBinnedColumnCreateDataOverviewCreateRandomSampleGroupsCreateRareCategoryColumnCreateStratifiedRandomSampleGroupsImputeMissingValuesUsingNearestNeighborsVerifyGranularityAddDateNumberColumnsTheAddDateNumberColumnsfunction adds columns for the year, month, quarter, week, day of the month, and day of the week to a dataframe.# Import necessary packagesfromanalysistoolbox.data_processingimportAddDateNumberColumnsfromdatetimeimportdatetimeimportpandasaspd# Create a sample dataframedata={'Date':[datetime(2020,1,1),datetime(2020,2,1),datetime(2020,3,1),datetime(2020,4,1)]}df=pd.DataFrame(data)# Use the function on the sample dataframedf=AddDateNumberColumns(dataframe=df,date_column_name='Date')# Print the updated dataframeprint(df)AddLeadingZerosTheAddLeadingZerosfunction adds leading zeros to a column. If fixed_length is not specified, the longest string in the column is used as the fixed length. If add_as_new_column is set to True, the new column is added to the dataframe. Otherwise, the original column is updated.# Import necessary packagesfromanalysistoolbox.data_processingimportAddLeadingZerosimportpandasaspd# Create a sample dataframedata={'ID':[1,23,456,7890]}df=pd.DataFrame(data)# Use the AddLeadingZeros functiondf=AddLeadingZeros(dataframe=df,column_name='ID',add_as_new_column=True)# Print updated dataframeprint(df)AddRowCountColumnTheAddRowCountColumnfunction adds a column to a dataframe that contains the row number for each row, based on a group (or groups) of columns. The function can also sort the dataframe by a column or columns before adding the row count column.# Import necessary packagesfromanalysistoolbox.data_processingimportAddRowCountColumnimportpandasaspd# Create a sample dataframedata={'Payment Method':['Check','Credit Card','Check','Credit Card','Check','Credit Card','Check','Credit Card'],'Transaction Value':[100,200,300,400,500,600,700,800],'Transaction Order':[1,2,3,4,5,6,7,8]}df=pd.DataFrame(data)# Call the functiondf_updated=AddRowCountColumn(dataframe=df,list_of_grouping_variables=['Payment Method'],list_of_order_columns=['Transaction Order'],list_of_ascending_order_args=[True])# Print the updated dataframeprint(df_updated)AddTPeriodColumnTheAddTPeriodColumnfunction adds a T-period column to a dataframe. The T-period column is the number of intervals (e.g., days or weeks) since the earliest date in the dataframe.# Import necessary librariesfromanalysistoolbox.data_processingimportAddTPeriodColumnfromdatetimeimportdatetimeimportpandasaspd# Create a sample dataframedata={'date':pd.date_range(start='1/1/2020',end='1/10/2020'),'value':range(1,11)}df=pd.DataFrame(data)# Use the functiondf_updated=AddTPeriodColumn(dataframe=df,date_column_name='date',t_period_interval='days')# Print the updated dataframeprint(df_updated)AddTukeyOutlierColumnTheAddTukeyOutlierColumnfunction adds a column to a dataframe that indicates whether a value is an outlier. 
The function uses the Tukey method to identify outliers.# Import necessary librariesfromanalysistoolbox.data_processingimportAddTukeyOutlierColumnimportpandasaspd# Create a sample dataframedata=pd.DataFrame({'values':[1,2,3,4,5,6,7,8,9,20]})# Use the functiondf_updated=AddTukeyOutlierColumn(dataframe=data,value_column_name='values',tukey_boundary_multiplier=1.5,plot_tukey_outliers=True)# Print the updated dataframeprint(df_updated)CleanTextColumnsTheCleanTextColumnsfunction cleans string-type columns in a pandas DataFrame by removing all leading and trailing spaces.# Import necessary librariesfromanalysistoolbox.data_processingimportCleanTextColumnsimportpandasaspd# Create a sample dataframedf=pd.DataFrame({'A':[' hello','world ',' python '],'B':[1,2,3],})# Clean the dataframedf_clean=CleanTextColumns(df)ConductAnomalyDetectionTheConductAnomalyDetectionfunction performs anomaly detection on a given dataset using the z-score method.# Import necessary librariesfromanalysistoolbox.data_processingimportConductAnomalyDetectionimportpandasaspd# Create a sample dataframedf=pd.DataFrame({'A':[1,2,3,1000],'B':[4,5,6,2000],})# Conduct anomaly detectiondf_anomaly_detected=ConductAnomalyDetection(dataframe=df,list_of_columns_to_analyze=['A','B'])# Print the updated dataframeprint(df_anomaly_detected)ConductEntityMatchingTheConductEntityMatchingfunction performs entity matching between two dataframes using various fuzzy matching algorithms.fromanalysistoolbox.data_processingimportConductEntityMatchingimportpandasaspd# Create two dataframesdataframe_1=pd.DataFrame({'ID':['1','2','3'],'Name':['John Doe','Jane Smith','Bob Johnson'],'City':['New York','Los Angeles','Chicago']})dataframe_2=pd.DataFrame({'ID':['a','b','c'],'Name':['Jon Doe','Jane Smyth','Robert Johnson'],'City':['NYC','LA','Chicago']})# Conduct entity matchingmatched_entities=ConductEntityMatching(dataframe_1=dataframe_1,dataframe_1_primary_key='ID',dataframe_2=dataframe_2,dataframe_2_primary_key='ID',levenshtein_distance_filter=3,match_score_threshold=80,columns_to_compare=['Name','City'],match_methods=['Partial Token Set Ratio','Weighted Ratio'])ConvertOddsToProbabilityTheConvertOddsToProbabilityfunction converts odds to probability in a new column.# Import necessary packagesfromanalysistoolbox.data_processingimportConvertOddsToProbabilityimportpandasaspd# Create a sample dataframedata={'Team':['Team1','Team2','Team3','Team4'],'Odds':[2.5,1.5,3.0,np.nan]}df=pd.DataFrame(data)# Print the original dataframeprint("Original DataFrame:")print(df)# Use the function to convert odds to probabilitydf=ConvertOddsToProbability(dataframe=df,odds_column='Odds')CountMissingDataByGroupTheCountMissingDataByGroupfunction counts the number of records with missing data in a Pandas dataframe, grouped by specified columns.# Import necessary packagesfromanalysistoolbox.data_processingimportCountMissingDataByGroupimportpandasaspdimportnumpyasnp# Create a sample dataframe with some missing valuesdata={'Group':['A','B','A','B','A','B'],'Value1':[1,2,np.nan,4,5,np.nan],'Value2':[np.nan,8,9,10,np.nan,12]}df=pd.DataFrame(data)# Use the function to count missing data by groupCountMissingDataByGroup(dataframe=df,list_of_grouping_columns=['Group'])CreateBinnedColumnTheCreateBinnedColumnfunction creates a new column in a Pandas dataframe based on a numeric variable. 
Binning is a process of transforming continuous numerical variables into discrete categorical 'bins'.# Import necessary packagesfromanalysistoolbox.data_processingimportCreateBinnedColumnimportpandasaspdimportnumpyasnp# Create a sample dataframedata={'Group':['A','B','A','B','A','B'],'Value1':[1,2,3,4,5,6],'Value2':[7,8,9,10,11,12]}df=pd.DataFrame(data)# Use the function to create a binned columndf_binned=CreateBinnedColumn(dataframe=df,numeric_column_name='Value1',number_of_bins=3,binning_strategy='uniform')CreateDataOverviewTheCreateDataOverviewfunction creates an overview of a Pandas dataframe, including the data type, missing count, missing percentage, and summary statistics for each variable in the DataFrame.# Import necessary packagesfromanalysistoolbox.data_processingimportCreateDataOverviewimportpandasaspdimportnumpyasnp# Create a sample dataframedata={'Column1':[1,2,3,np.nan,5,6],'Column2':['a','b','c','d',np.nan,'f'],'Column3':[7.1,8.2,9.3,10.4,np.nan,12.5]}df=pd.DataFrame(data)# Use the function to create an overview of the dataframeCreateDataOverview(dataframe=df,plot_missingness=True)CreateRandomSampleGroupsTheCreateRandomSampleGroupsfunction a takes a pandas DataFrame, shuffle its rows, assign each row to one of n groups, and then return the updated DataFrame with an additional column indicating the group number.# Import necessary packagesfromanalysistoolbox.data_processingimportCreateRandomSampleGroupsimportpandasaspd# Create a sample DataFramedata={'Name':['Alice','Bob','Charlie','David','Eve'],'Age':[25,31,35,19,45],'Score':[85,95,78,81,92]}df=pd.DataFrame(data)# Use the functiongrouped_df=CreateRandomSampleGroups(dataframe=df,number_of_groups=2,random_seed=123)CreateRareCategoryColumnTheCreateRareCategoryColumnfunction creates a new column in a Pandas dataframe that indicates whether a categorical variable value is rare. A rare category is a category that occurs less than a specified percentage of the time.# Import necessary packagesfromanalysistoolbox.data_processingimportCreateRareCategoryColumnimportpandasaspd# Create a sample DataFramedata={'Name':['Alice','Bob','Charlie','David','Eve','Alice','Bob','Alice'],'Age':[25,31,35,19,45,23,30,24],'Score':[85,95,78,81,92,88,90,86]}df=pd.DataFrame(data)# Use the functionupdated_df=CreateRareCategoryColumn(dataframe=df,categorical_column_name='Name',rare_category_label='Rare',rare_category_threshold=0.05,new_column_suffix='(rare category)')CreateStratifiedRandomSampleGroupsTheCreateStratifiedRandomSampleGroupsunction performs stratified random sampling on a pandas DataFrame. Stratified random sampling is a method of sampling that involves the division of a population into smaller groups known as strata. In stratified random sampling, the strata are formed based on members' shared attributes or characteristics.# Import necessary packagesfromanalysistoolbox.data_processingimportCreateStratifiedRandomSampleGroupsimportnumpyasnpimportpandasaspd# Create a sample DataFramedata={'Name':['Alice','Bob','Charlie','David','Eve','Alice','Bob','Alice'],'Age':[25,31,35,19,45,23,30,24],'Score':[85,95,78,81,92,88,90,86]}df=pd.DataFrame(data)# Use the functionstratified_df=CreateStratifiedRandomSampleGroups(dataframe=df,number_of_groups=2,list_categorical_column_names=['Name'],random_seed=42)ImputeMissingValuesUsingNearestNeighborsTheImputeMissingValuesUsingNearestNeighborsfunction imputes missing values in a dataframe using the nearest neighbors method. 
For each sample with missing values, it finds the n_neighbors nearest neighbors in the training set and imputes the missing values using the mean value of these neighbors.# Import necessary packagesfromanalysistoolbox.data_processingimportImputeMissingValuesUsingNearestNeighborsimportpandasaspdimportnumpyasnp# Create a sample DataFrame with missing valuesdata={'A':[1,2,np.nan,4,5],'B':[np.nan,2,3,4,5],'C':[1,2,3,np.nan,5],'D':[1,2,3,4,np.nan]}df=pd.DataFrame(data)# Use the functionimputed_df=ImputeMissingValuesUsingNearestNeighbors(dataframe=df,list_of_numeric_columns_to_impute=['A','B','C','D'],number_of_neighbors=2,averaging_method='uniform')VerifyGranularityTheVerifyGranularityfunction checks the granularity of a given dataframe based on a list of key columns. Granularity in this context refers to the level of detail or summarization in a set of data.# Import necessary packagesfromanalysistoolbox.data_processingimportVerifyGranularityimportpandasaspd# Create a sample DataFramedata={'Name':['Alice','Bob','Charlie','David','Eve','Alice','Bob','Alice'],'Age':[25,31,35,19,45,23,30,24],'Score':[85,95,78,81,92,88,90,86]}df=pd.DataFrame(data)# Use the functionVerifyGranularity(dataframe=df,list_of_key_columns=['Name','Age'],set_key_as_index=True,print_as_markdown=False)Descriptive AnalyticsThere are several functions in the Descriptive Analytics submodule. The following is a list of the functions:ConductManifoldLearningConductPrincipalComponentAnalysisCreateAssociationRulesCreateGaussianMixtureClustersCreateHierarchicalClustersCreateKMeansClustersGenerateEDAWithLIDAConductManifoldLearningTheConductManifoldLearningfunction performs manifold learning on a given dataframe and returns a new dataframe with the original columns and the new manifold learning components. Manifold learning is a type of unsupervised learning that is used to reduce the dimensionality of the data.# Import necessary packagesfromanalysistoolbox.descriptive_analyticsimportConductManifoldLearningimportpandasaspdfromsklearn.datasetsimportload_iris# Load the iris datasetiris=load_iris()iris_df=pd.DataFrame(data=iris.data,columns=iris.feature_names)# Use the functionnew_df=ConductManifoldLearning(dataframe=iris_df,list_of_numeric_columns=['sepal length (cm)','sepal width (cm)','petal length (cm)','petal width (cm)'],number_of_components=2,random_seed=42,show_component_summary_plots=True,summary_plot_size=(10,10))ConductPrincipalComponentAnalysisTheConductPrincipalComponentAnalysisfunction performs Principal Component Analysis (PCA) on a given dataframe. PCA is a technique used in machine learning to reduce the dimensionality of data while retaining as much information as possible.# Import necessary packagesfromanalysistoolbox.descriptive_analyticsimportConductManifoldLearningimportpandasaspdfromsklearn.datasetsimportload_iris# Load the iris datasetiris=load_iris()iris_df=pd.DataFrame(data=iris.data,columns=iris.feature_names)# Call the functionresult=ConductPrincipalComponentAnalysis(dataframe=iris_df,list_of_numeric_columns=['sepal length (cm)','sepal width (cm)','petal length (cm)','petal width (cm)'],number_of_components=2)CreateAssociationRulesTheCreateAssociationRulesfunction creates association rules from a given dataframe. 
Association rules are widely used in market basket analysis, where the goal is to find associations and/or correlations among a set of items.# Import necessary packagesfromanalysistoolbox.descriptive_analyticsimportCreateAssociationRulesimportpandasaspd# Assuming you have a dataframe 'df' with 'TransactionID' and 'Item' columnsresult=CreateAssociationRules(dataframe=df,transaction_id_column='TransactionID',items_column='Item',support_threshold=0.01,confidence_threshold=0.2,plot_lift=True,plot_title='Association Rules',plot_size=(10,7))CreateGaussianMixtureClustersTheCreateGaussianMixtureClustersfunction creates Gaussian mixture clusters from a given dataframe. Gaussian mixture models are a type of unsupervised learning that is used to find clusters in data. It adds the resulting clusters as a new column in the dataframe, and also calculates the probability of each data point belonging to each cluster.# Import necessary packagesfromanalysistoolbox.descriptive_analyticsimportCreateGaussianMixtureClustersimportpandasaspdfromsklearnimportdatasets# Load the iris datasetiris=datasets.load_iris()# Convert the iris dataset to a pandas dataframedf=pd.DataFrame(data=np.c_[iris['data'],iris['target']],columns=iris['feature_names']+['target'])# Call the CreateGaussianMixtureClusters functiondf_clustered=CreateGaussianMixtureClusters(dataframe=df,list_of_numeric_columns_for_clustering=['sepal length (cm)','sepal width (cm)','petal length (cm)','petal width (cm)'],number_of_clusters=3,column_name_for_clusters='Gaussian Mixture Cluster',scale_predictor_variables=True,show_cluster_summary_plots=True,sns_color_palette='Set2',summary_plot_size=(15,15),random_seed=123,maximum_iterations=200)CreateHierarchicalClustersTheCreateHierarchicalClustersfunction creates hierarchical clusters from a given dataframe. Hierarchical clustering is a type of unsupervised learning that is used to find clusters in data. 
It adds the resulting clusters as a new column in the dataframe.# Import necessary packagesfromanalysistoolbox.descriptive_analyticsimportCreateHierarchicalClustersimportpandasaspdfromsklearnimportdatasets# Load the iris datasetiris=datasets.load_iris()df=pd.DataFrame(data=iris.data,columns=iris.feature_names)# Call the CreateHierarchicalClusters functiondf_clustered=CreateHierarchicalClusters(dataframe=df,list_of_value_columns_for_clustering=['sepal length (cm)','sepal width (cm)','petal length (cm)','petal width (cm)'],number_of_clusters=3,column_name_for_clusters='Hierarchical Cluster',scale_predictor_variables=True,show_cluster_summary_plots=True,color_palette='Set2',summary_plot_size=(6,4),random_seed=412,maximum_iterations=300)CreateKMeansClustersTheCreateKMeansClustersfunction performs K-Means clustering on a given dataset and returns the dataset with an additional column indicating the cluster each record belongs to.# Import necessary packagesfromanalysistoolbox.descriptive_analyticsimportCreateKMeansClustersimportpandasaspdfromsklearnimportdatasets# Load the iris datasetiris=datasets.load_iris()df=pd.DataFrame(data=iris.data,columns=iris.feature_names)# Call the CreateKMeansClusters functiondf_clustered=CreateKMeansClusters(dataframe=df,list_of_value_columns_for_clustering=['sepal length (cm)','sepal width (cm)','petal length (cm)','petal width (cm)'],number_of_clusters=3,column_name_for_clusters='K-Means Cluster',scale_predictor_variables=True,show_cluster_summary_plots=True,color_palette='Set2',summary_plot_size=(6,4),random_seed=412,maximum_iterations=300)GenerateEDAWithLIDATheGenerateEDAWithLIDAfunction uses the LIDA package from Microsoft to generate exploratory data analysis (EDA) goals.# Import necessary packagesfromanalysistoolbox.descriptive_analyticsimportGenerateEDAWithLIDAimportpandasaspdfromsklearnimportdatasets# Load the iris datasetiris=datasets.load_iris()df=pd.DataFrame(data=iris.data,columns=iris.feature_names)# Call the GenerateEDAWithLIDA functiondf_summary=GenerateEDAWithLIDA(dataframe=df,llm_api_key="your_llm_api_key_here",llm_provider="openai",llm_model="gpt-3.5-turbo",visualization_library="seaborn",goal_temperature=0.50,code_generation_temperature=0.05,data_summary_method="llm",number_of_samples_to_show_in_summary=5,return_data_fields_summary=True,number_of_goals_to_generate=5,plot_recommended_visualization=True,show_code_for_recommended_visualization=True)File ManagementHypothesis TestingLinear AlgebraPredictive AnalyticsPrescriptive AnalyticsSimulationsVisualizationsContributionsTo report an issue, request a feature, or contribute to the project, please see theCONTRIBUTING.mdfile (in progress).LicenseThis project is licensed under the MIT License - see theLICENSE.mdfile for details.
|
analysis-tools
|
Analysis tools for machine learning projects

1. Usage

$ pip install analysis-tools

2. Tutorial

Refer to examples/titanic/eda.ipynb

import pandas as pd
from analysis_tools import eda, metrics

data = pd.DataFrame(..)
target = 'survived'
num_features = ['age', 'sibsp', 'parch', 'fare']
cat_features = data.columns.drop(num_features)
data[num_features] = data[num_features].astype('float32')
data[cat_features] = data[cat_features].astype('string')

eda.plot_missing_value(data)
eda.plot_features(data)
eda.plot_features_target(data, target)
eda.plot_corr(data.corr())
metrics.get_feature_importance(data, target)
|
analyst
|
Analytical toolkit for data science and machine learning

Utilities:

- `ml`: machine learning
- `nd`: NumPy and Pandas
- `viz`: data visualization
|
analysta-index
|
indexer

Extension of LangChain loaders, LLMs and retrievers for Analysta
|
analysta-llm-agents
|
Analysta LLM Agents 🤖Welcome to the Agent Framework repository! This robust Python library is engineered for developers looking to craft cutting-edge agents powered by large language models (LLMs). Dive into a world where creating intelligent, customizable agents is streamlined and efficient.Key Features 🛠Custom Agent Profiles:Tailor your agents with distinct roles, skills, and objectives to fit diverse applications. 🎨Advanced Memory Systems:Implements sophisticated short-term and long-term - memory functionalities to enhance agent performance and learning. 🧠Modular Tool Integration:Seamlessly incorporate a variety of tools, contractors, and data sources to expand your agent's capabilities. 🔧Flexible Communication:Craft and manage dynamic interactions between your agent and users, with support for complex command and response structures. 💬Efficient Error Handling:Robust error management ensures your agent remains resilient and responsive under various conditions. 🛡️How-TosHow to Create Custom Agent Profiles 🛠️Creating custom agent profiles allows you to define unique characteristics and capabilities tailored to your specific needs. Here's a step-by-step guide to get you started:Step 1: Define Your Agent's Role and SkillsStart by outlining the role, skills, and purpose of your agent. Consider what tasks your agent should perform and the knowledge it requires.agent_role="Customer Support Assistant"agent_skills=["Handle FAQs","Ticket Routing","User Feedback Collection"]agent_purpose="To assist users with common queries and direct complex issues to the appropriate teams."agent_constraints="""- Adhere strictly to user privacy and data protection standards.- Maintain a polite and professional tone in all interactions.- Limit response length to ensure concise and relevant answers.- Avoid speculative answers in areas outside of defined expertise.- Escalate issues beyond the agent's capability to a human operator."""Step 2: Initialize Your AgentUse the Agent class to create an instance of your agent. Pass the role, skills, and any other specific parameters you defined.fromanalysta_llm_agents.agents.agentimportAgentclassCustomAgent(Agent):def__init__(self,ctx,**kwargs):super.__init__(agent_prompt=f"{agent_role}skilled in{', '.join(agent_skills)}.{agent_purpose}",agent_constraints=agent_constraintsllm_model_type="AzureChatOpenAI",# This is basucally name of langchain classesllm_model_params={...},# Params to init langchain classshort_term_memory_limit=1024,# Size of short-term memory in tokensembedding_model="AzureOpenAIEmbeddings",# Same langchain classembedding_model_params={...}# Params to init langchain classvectorstore="Chroma",# Same langchain classvectorstore_params={...}# Params to init langchain classtools=[]# Optinal list of tools for agent to usecontractors=[]# Optional list of other agents to communicate withctx=ctx# Context shared between agents, have common configurations)@propertydefname(self):return"Customer Support Assistant"@propertydefdescription(self):return("Assists users with common queries and direct complex issues to the appropriate teams.")nameanddescriptionproperties used to clearly articulate agents identity and functionality. These properties should be descriptive and concise, offering a clear understanding of the agent's purpose and capabilities.With thenameanddescriptionproperties defined, your custom agent is now ready to be used as a contractor by other agents. 
Ensure your agent's functionalities are accessible and well-documented, allowing other agents to leverage its capabilities effectively.Step 3: Interact with Your AgentFinally, interact with your agent using the custom logic you've implemented.fromuuidimportuuid4fromanalysta_llm_agents.tools.contextimportContextctx=Context()custom_agent=CustomAgent(ctx)user_query="I have an issue with my recent order."formessageincustom_agent.start(user_query,conversation_id=str(uuid4())):print(message)ConclusionBy following these steps, you can create a custom agent profile that is well-suited to your specific tasks and workflows. Experiment with different configurations and functionalities to fully leverage the capabilities of the LLM Agent Framework.Creating Custom ToolsEnhancing your Customer Support Assistant Agent with custom tools can significantly improve its ability to serve users more efficiently and effectively. Below is a guide on how to develop and integrate a custom tool for handling frequently asked questions (FAQs) and ticket routing, inspired by the given code structure.FAQ Retrieval ToolThis tool fetches frequently asked questions and their answers from a predefined knowledge base or document. This enables the agent to quickly provide users with information without the need for manual lookup or intervention.deffetchFAQs(ctx:Any,category:str=None):"""Retrieves frequently asked questions and their answers from a knowledge base.Args:category (str, optional): The category of FAQs to retrieve. If None, fetches all FAQs."""# Example URL to your FAQ knowledge base or API endpointfaq_url="https://yourknowledgebase.com/api/faqs"try:# Optionally filter FAQs by category if providedifcategory:faq_url+=f"?category={category}"# Send the GET request to fetch FAQsresponse=requests.get(faq_url,headers={"Content-Type":"application/json"})response.raise_for_status()# Raise an exception for HTTP errors# Parse and return the FAQs from the responsefaqs=response.json()["faqs"]return"\n".join([f"Q:{faq['question']}\nA:{faq['answer']}"forfaqinfaqs])exceptrequests.exceptions.RequestExceptionase:logger.error(f"Failed to fetch FAQs:{e}")return"ERROR: Unable to retrieve FAQs at this time."Ticket Routing ToolThis tool analyzes incoming support tickets and routes them to the appropriate department or support tier based on the content of the ticket and predefined routing rules.defrouteTicket(ctx:Any,ticket_description:str):"""Analyzes a support ticket and routes it to the appropriate department or support tier.Args:ctx (Any): The context of the agent, containing shared data and configurations.ticket_description (str): The content of the support ticket."""# Example logic to determine the department based on ticket descriptionif"billing"inticket_description.lower():department="Billing"elif"technical"inticket_description.lower():department="Technical Support"else:department="General Inquiries"# Log the routing decisionlogger.info(f"Routing ticket to{department}")# Placeholder for actual routing logic (e.g., API call to ticketing system)# For demonstration, we'll just return the routing decisionreturnf"Ticket routed to{department}department."Grouping and using in agent as paramsGroup tools as a list of functions.__all__=[fetchFAQs,routeTicket]Import that list in youragent.pyfile and use within Agent initfrom.actionsimport__all__astoolsclassCustomAgent(Agent):def__init__(self,ctx,**kwargs):super.__init__(...tools=tools...)This is it. Enjoy custom tools within your agentContributing 🤝We welcome contributions! 
Whether it's bug fixes, feature additions, or improvements to the documentation, feel free to fork the repository and submit a pull request.License ⚖️This project is licensed under the Apache License 2.0 - see the LICENSE file for details.Acknowledgements 🙏Special thanks to all contributors and the open-source community for support and contributions to this project.Happy Coding! 🚀👾
|
analyst-recommendation-performance
|
View docs at https://github.com/dbondi/analyst_recommendation_performance
|
analyst-remote-control
|
# analyst-remote-control
Python remote control proxy for VB.NET AnalystControl application.
|
analysts-task-pkg-macca2707
|
data-analyst-task

Simple data analyst program developed for Halfbrick Studios as a mini assignment.
Instructions are contained within the CLI program, but I will add them here as well.

Commands:
json -- Creates a json file when provided an appropriate CSV file
data -- Makes a data summary of a provided CSV file
sql -- Creates an SQL file when provided an appropriate CSV file, also requires a table name
quit -- exits the program

*Note: all commands except this one require a file name; the file is contained within this package.*
|
analysts-tool-share
|
Analyst's Tool Share, for Python

Tools for analyzing data, using Python.
|
analytic
|
No description available on PyPI.
|
analytica
|
Analytica is an automated exploratory data analysis library built on Python.
|
analytical
|
Analytical is a Python library for sending pageviews and events to analytics platforms
like Google Analytics except from Python rather than JavaScript so it can be done server side.
This has a number of advantages such as working regardless of whether clients block analytics scripts,
privacy sensitive information can be anonymized or removed before sending,
and it allows sending data only known by the server.

Feature support

- Convenient utilities for anonymizing sensitive information like IP addresses (see the sketch below)
- Pluggable provider backends for different analytics platforms (currently just Google)
- Supports Python 2.7, Python 3.5+, and PyPy.

Example

import analytical

provider = analytical.Provider('googleanalytics', 'UA-XXXXXXX-1')
provider.pageview({
    'dl': 'https://example.com',
    'dt': 'My Page Title',
    'ua': 'user-agent',    # User agent
    'uip': '12.34.56.78',  # User IP address
})

Resources

- GitHub: https://github.com/rtfd/analytical
- Documentation: https://analytical.readthedocs.io
- IRC: #readthedocs on freenode
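The feature list above mentions anonymizing sensitive values such as IP addresses before they are sent. As a rough illustration only, the sketch below masks the last octet of an IPv4 address in plain Python before passing it as the 'uip' field; mask_ipv4 is a hypothetical helper written here for the example, not one of the library's own anonymization utilities:

import analytical

def mask_ipv4(ip):
    # Hypothetical helper (not part of analytical): zero out the last octet
    # so the exact client address is never transmitted.
    parts = ip.split('.')
    parts[-1] = '0'
    return '.'.join(parts)

provider = analytical.Provider('googleanalytics', 'UA-XXXXXXX-1')
provider.pageview({
    'dl': 'https://example.com',
    'dt': 'My Page Title',
    'uip': mask_ipv4('12.34.56.78'),  # anonymized user IP address
})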
|
analyticalmachine
|
No description available on PyPI.
|
analytickitanalytics
|
AnalyticKit is developer-friendly, self-hosted product analytics. analytickit-python is the python package.
|
analyticlab
|
analyticlab (Analysis Laboratory)

analyticlab is a Python library for calculating and analyzing experimental data and for displaying the calculation process. It can be applied to data processing in experimental subjects such as university physics experiments and analytical chemistry, as well as in science and technology competitions such as undergraduate innovation projects and process-design contests.

The library consists of the following 5 parts:

- Numerical operations (num, numitem modules): numerical operations following the rules of significant figures.
- Mathematical statistics (numitem, twoitems modules): including deviation, error, confidence intervals, covariance, correlation coefficients, significance tests for a single sample, and significance tests for two samples.
- Outlier handling (outlier module): including the Nair test, Grubbs test, Dixon test, and skewness-kurtosis test.
- Symbolic expression composition (lsym, lsymitem modules): produces LaTeX-format calculation expressions from symbols and relational formulas.
- Measurement and uncertainty calculation (measure package): derives measurement results, consisting of a measured value and an uncertainty, from experimental data or data taken from other uncertainty reports, the measuring instrument/method, and the measurement formula.

Module definitions

The library defines 9 classes:

- Num: analytical numeric class, in the num module.
- NumItem: analytical array class, in the numitem module.
- LSym: LaTeX symbol generation class, in the lsym module.
- LSymItem: LaTeX symbol group class, in the lsymitem module.
- Const: constant class, in the const module.
- LaTeX: formula collection class, in the latexoutput module.
- measure.Ins: measuring instrument class, in the measure.ins module.
- measure.BaseMeasure: basic measurement class, in the measure.BaseMeasure module.
- measure.Measure: measurement class, in the measure.Measure module.

And 7 function modules:

- amath: root, logarithm and trigonometric operations on numbers, symbols and measurements.
- twoitems: mathematical statistics for two data sets.
- outlier: outlier handling.
- latexoutput: output of mathematical formulas, tables, LaTeX symbols, uncertainties, etc.
- measure.std: calculation of standard deviation.
- measure.ACategory: calculation of Type A uncertainty.
- measure.BCategory: calculation of Type B uncertainty.

The main functionality of the classes and most of the functions support process (displaying the calculation process), enabled by passing the extra argument process=True when calling a class method or function. To find out which class methods and functions support process, consult the tutorial or query their documentation with the help function. Note that the calculation process is output in LaTeX format, so the process display feature can only be used in a Jupyter Notebook environment. If you only need the calculation results and not the process, any ordinary Python 3 development environment is sufficient.

How to install or update

1. Install via pip: pip install analyticlab
2. Update via pip: pip install analyticlab --upgrade
   If the update fails, you can try uninstalling the old version first and then installing the new one:
   pip uninstall analyticlab
   pip install analyticlab
3. Download the analyticlab source code from PyPI and install it:
   open https://pypi.python.org/pypi/analyticlab, download the tar.gz file via download, extract it locally, switch into the extracted folder with cd, and install with python setup.py install.

Runtime environment

analyticlab only runs under Python 3.x; Python 2.x is not supported. The numpy, scipy, sympy and quantities libraries must already be installed. It runs on most Python platforms, but the calculation process can only be displayed in a Jupyter Notebook environment.

Reference standards

- GBT 8170-2008 Rules for rounding of numerical values and the expression and judgement of limiting values
- GBT 4883-2008 Statistical processing and interpretation of data: judgement and handling of outliers in normal samples
- JJF 1059.1-2012 Evaluation and expression of measurement uncertainty
- CNAS-GL06 Guidance on uncertainty in the field of chemistry
|
analyticord
|
# analyticord

-----

**Table of Contents**

* [Installation](#installation)
* [Getting Started](#getting-started)
* [License](#license)

## Installation

analyticord is distributed on [PyPI](https://pypi.org) as a universal wheel and is available on Linux/macOS and Windows and supports Python 3.5+ and PyPy.

```bash
$ pip install analyticord
```

## Getting Started

```python
import asyncio

from analyticord import AnalytiCord

loop = asyncio.get_event_loop()

# The most basic usage, with a single bot token
analytics = AnalytiCord("token")

# start up the analytics
# this could be done inside the on_ready event, etc of a d.py bot
loop.run_until_complete(analytics.start())

# hook the on_message event of a d.py bot
# this will send message count events to analyticord for you
analytics.messages.hook_bot(bot)
```

## License

analyticord is distributed under the terms of the [MIT License](https://choosealicense.com/licenses/mit).
|
analytics
|
Py-Analytics is a library designed to make it easy to provide analytics as part of any project. The project’s goal is to make it easy to store and retrieve analytics data. It does not provide
any means to visualize this data. Currently, only Redis is supported for storing data.

Install

You can install the latest official stable version using pypi:

>>> pip install analytics

Or get the latest version directly from github:

>>> pip install -e git+https://github.com/numan/py-analytics.git#egg=analytics

Requirements

Required

Requirements should be handled by setuptools, but if they are not, you will need the following Python packages:

- nydus
- redis
- dateutil

Optional

- hiredis

analytics.create_analytic_backend

Creates an analytics object that allows you to store and retrieve metrics:

>>> from analytics import create_analytic_backend
>>>
>>> analytics = create_analytic_backend({
>>> 'backend': 'analytics.backends.redis.Redis',
>>> 'settings': {
>>> 'defaults': {
>>> 'host': 'localhost',
>>> 'port': 6379,
>>> 'db': 0,
>>> },
>>> 'hosts': [{'db': 0}, {'db': 1}, {'host': 'redis.example.org'}]
>>> },
>>> })

Internally, the Redis analytics backend uses nydus to distribute your metrics data over your cluster of redis instances.

There are two required arguments:

- backend: full path to the backend class, which should extend analytics.backends.base.BaseAnalyticsBackend
- settings: settings required to initialize the backend. For the Redis backend, this is a list of hosts in your redis cluster.

Example Usage

from analytics import create_analytic_backend
import datetime
analytics = create_analytic_backend({
"backend": "analytics.backends.redis.Redis",
"settings": {
"hosts": [{"db": 5}]
},
})
year_ago = datetime.date.today() - datetime.timedelta(days=365)
#create some analytics data
analytics.track_metric("user:1234", "comment", year_ago)
analytics.track_metric("user:1234", "comment", year_ago, inc_amt=3)
#we can even track multiple metrics at the same time for a particular user
analytics.track_metric("user:1234", ["comments", "likes"], year_ago)
#or track the same metric for multiple users (or a combination or both)
analytics.track_metric(["user:1234", "user:4567"], "comment", year_ago)
#retrieve analytics data:
analytics.get_metric_by_day("user:1234", "comment", year_ago, limit=20)
analytics.get_metric_by_week("user:1234", "comment", year_ago, limit=10)
analytics.get_metric_by_month("user:1234", "comment", year_ago, limit=6)
#create a counter
analytics.track_count("user:1245", "login")
analytics.track_count("user:1245", "login", inc_amt=3)
#retrieve multiple metrics at the same time
#group_by is one of ``month``, ``week`` or ``day``
analytics.get_metrics([("user:1234", "login",), ("user:4567", "login",)], year_ago, group_by="day")
>> [....]
#set a metric count for a day
analytics.set_metric_by_day("user:1245", "login", year_ago, 100)
#sync metrics for week and month after setting day
analytics.sync_agg_metric("user:1245", "login", year_ago, datetime.date.today())
#retrieve a count
analytics.get_count("user:1245", "login")
#retrieve a count between 2 dates
analytics.get_count("user:1245", "login", start_date=datetime.date(month=1, day=5, year=2011), end_date=datetime.date(month=5, day=15, year=2011))
#retrieve counts
analytics.get_counts([("user:1245", "login",), ("user:1245", "logout",)])
#clear out everything we created
analytics.clear_all()

BACKWARDS INCOMPATIBLE CHANGES

v0.6.0

This version introduces prefixes. Any old analytics data will be inaccessible.

v0.5.2

get_metric_by_day, get_metric_by_week and get_metric_by_month return series as a set of strings instead of a list of date/datetime objects.

TODO

- Add more backends possibly…?
- Add an API so it can be deployed as a stand alone service (http, protocolbuffers, …)
|
analytics-command-center
|
No description available on PyPI.
|
analytics-db
|
No description available on PyPI.
|
analyticsdf
|
Analytic generation of datasets with specified statistical characteristics.

Introduction

analytics-dataset provides a set of functionality to enable the specification and generation of a wide range of datasets with specified statistical characteristics. The specification includes the predictor matrix and the response vector. Check the analyticsdf documentation for more details. Examples include (see the sketch after this list):

- High correlation and multi-collinearity among predictor variables
- Interaction effects between variables
- Skewed distributions of predictor and response variables
- Nonlinear relationships between predictor and response variables

Research on existing automated dataset functionality

- Sklearn Make Datasets functionality
- MIT Synthetic Data Vault project (MIT Data to AI Lab, datacebo, 2016 IEEE conference paper, The Synthetic Data Vault)

Public Package

This repo has published beta packages on both PyPI and Anaconda.
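The feature list above is conceptual, and the library's own API is not shown in this description. As a rough, library-agnostic sketch of the first two bullets (correlated predictors plus an interaction effect), the NumPy snippet below draws a predictor matrix from a multivariate normal distribution with a chosen correlation structure and builds a response from it; every name here is illustrative, not an analyticsdf function:

import numpy as np

rng = np.random.default_rng(42)

# Desired correlation structure between three predictors (x1 and x2 highly correlated).
corr = np.array([
    [1.0, 0.9, 0.2],
    [0.9, 1.0, 0.2],
    [0.2, 0.2, 1.0],
])

# Draw 1,000 samples of the predictor matrix X.
X = rng.multivariate_normal(mean=np.zeros(3), cov=corr, size=1000)

# Response with a linear term, an interaction effect and Gaussian noise.
y = 2.0 * X[:, 0] + X[:, 1] * X[:, 2] + rng.normal(scale=0.5, size=1000)

# Check that the generated predictors reproduce the requested correlations.
print(np.corrcoef(X, rowvar=False).round(2))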
|
analytics-GitGut123
|
This is a very long description
|
analytics-insights
|
analytics_viz

A library that can generate different plots with information from analytics.
|
analyticslab
|
No description available on PyPI.
|
analytics-lib
|
Library for text analysis

Installation (requires Python 3.8 or newer)

pip install analytics-lib

Downloading resources for the FastText and spacy models

python3 -m dostoevsky download fasttext-social-network-model
python3 -m spacy download ru_core_news_sm

Downloading resources for the ELMO and Stanza models

From the Soft-Skill-Dev/21_nov folder at the Google Drive link https://drive.google.com/drive/folders/1T1NuaU1qPQsyAM_i55AsJPrsGA28EZ5j?usp=sharing, download the stanza_resources/ folder and place it in the project folder at PATH_TO_STANZA_RESOURCES.
Also download the resources for the ELMO model from http://vectors.nlpl.eu/repository/20/212.zip, unzip them, and place them in the project folder at PATH_TO_ELMO_RESOURCES.

Downloading pickle files with averaged statistics over texts:

From the repository https://github.com/lyoshamipt/bortnik_psychometry, download the current items from the analytics_lib/data folder:
-- folders: assessty_all, assessty_short, telecom
-- files: df_sense.pkl and verbs_df.pkl
and place them in the project folder at PATH_TO_PICKELS.

Usage example

PATH_TO_STANZA_RESOURCES = "../bortnik_psychometry/analytics_lib/notebooks/stanza_resources"
PATH_TO_ELMO_RESOURCES = "../bortnik_psychometry/analytics_lib/notebooks/elmo_resources"
PATH_TO_PICKELS = "../bortnik_psychometry/analytics_lib/data"
import sys
sys.path.append("../")
import warnings
import logging
import pandas as pd
import json
import sys
from morpholog import Morpholog
from dostoevsky.tokenization import RegexTokenizer
from dostoevsky.models import FastTextSocialNetworkModel
from matplotlib import rcParams
from pymystem3 import Mystem
from simple_elmo import ElmoModel
import stanza
import spacy
import snowballstemmer
import os
logging.disable(sys.maxsize)
warnings.filterwarnings("ignore")
# class initialization
from analytics_lib.nlp_texts.text import TextProcessor
mystem = Mystem()
nlp_core = stanza.Pipeline('ru', use_gpu=False, dir=PATH_TO_STANZA_RESOURCES)
morpholog = Morpholog()
tokenizer = RegexTokenizer()
ftsnm = FastTextSocialNetworkModel(tokenizer=tokenizer)
nlp_spacy = spacy.load("ru_core_news_sm")
stemmer = snowballstemmer.stemmer('russian')
import tensorflow.compat.v1 as tf
tf.reset_default_graph()
elmo_model = ElmoModel()
elmo_model.load(PATH_TO_ELMO_RESOURCES)
df_sense = pd.read_pickle(f"{PATH_TO_PICKELS}/df_sense.pkl")
verbs_df = pd.read_pickle(f"{PATH_TO_PICKELS}/verbs_df.pkl")
text_processor = TextProcessor(
m=mystem,
nlp_core=nlp_core,
morpholog=morpholog,
fastTextSocialNetworkModel=ftsnm,
nlp_spacy=nlp_spacy,
stemmer = stemmer,
elmo_model = elmo_model,
df_sense = df_sense,
verbs_df = verbs_df
)
# example: processing a text
text = "Программа \"Вернём клиентов\" для дилеров ГаражТулс.В рамках развития дилерской сети за дополнительные деньги настроить триггерную рассылку для тех клиентов, которые ушли.\
Тестируем: берем лояльного дилера, предлагаем в качестве эксперимента предоставить нам клиентов, которые отказались от покупки (на их сайте или магазине). Собираем контакты, настраиваем триггерную рассылку. Смотрим на результат. Если успех, то проводим опрос на основных дилерах и показываем успешный кейс. Узнаем, сколько бы они заплатили за это. Дальше пробуем продать 2-3- дилерам такую услугу. (пока делаем всё вручную) Если они оплачивают, то гипотезу можно считать проверенной и можно запускать в разработку функционал.(и если юнит экономика сходится)Подключаем дилерскую CRM (или любую другую систему, где есть отвалившиеся клиенты) к системе триггерных рассылок. К пакету дилерских документов предлагаем новую услугу. Непринужденно зарабатываем."
dict_res = text_processor.text_statistics_woe(text=text, quantiles="assessty_short", PATH_TO_PICKELS = PATH_TO_PICKELS) # quantiles: 'assessty_all', "assessty_short", 'dialogs'
|
analytics-logger-rest
|
analytics-logger-rest

REST analytics logger

To use this package, simply install and then:

import analytics_logger_rest.analytics_logger_rest as log_rest

logger = log_rest.LogAnalyticsLogger("NAME")

Built with help from https://docs.microsoft.com/en-us/azure/azure-monitor/logs/data-collector-api
|
analytics-mayhem-adobe
|
Adobe Analytics Python packageDownload Reports data utilising the Adobe.io version 2.0 API.For more Digital Analytics related reading, checkhttps://analyticsmayhem.comAuthentication methods supported by the package:JWTOAuth (tested only through Jupyter Notebook!)JWT Requirements & Adobe.io accessIn order to run the package, first you need to gain access to a service account from Adobe.io. The method used is JWT authentication. More instructions on how to create the integration at:https://www.adobe.io/authentication/auth-methods.html#!AdobeDocs/adobeio-auth/master/JWT/JWT.md. After you have completed the integration, you will need to have available the following information:Organization ID (issuer): It is in the format of < organisation id >@AdobeOrgTechnical Account ID: < tech account id >@techacct.adobe.comClient ID: Information is available on the completion of the Service Account integrationClient Secret: Information is available on the completion of the Service Account integrationAccount ID: Instructions on how to obtain it athttps://youtu.be/lrg1MuVi0Fo?t=96Report suite: Report suite ID from which you want to download the data.Make sure that the integration is associated with an Adobe Analytics product profile that is granted access to the necessary metrics and dimensions.OAuth RequirementsTo perform an OAuth authentication you need to create an integration at the Adobe I/O Console as described in the guide by Adobe athttps://github.com/AdobeDocs/analytics-2.0-apis/blob/master/create-oauth-client.md. The result of the integration provides the following information:Client ID (API Key)Client SecretPackage installationpip install analytics-mayhem-adobeSamplesInitial setup - JWTAfter you have configured the integration and downloaded the package, the following setup is needed:from analytics.mayhem.adobe import analytics_client
import os
ADOBE_ORG_ID = os.environ['ADOBE_ORG_ID']
SUBJECT_ACCOUNT = os.environ['SUBJECT_ACCOUNT']
CLIENT_ID = os.environ['CLIENT_ID']
CLIENT_SECRET = os.environ['CLIENT_SECRET']
PRIVATE_KEY_LOCATION = os.environ['PRIVATE_KEY_LOCATION']
GLOBAL_COMPANY_ID = os.environ['GLOBAL_COMPANY_ID']
REPORT_SUITE_ID = os.environ['REPORT_SUITE_ID']Next initialise the Adobe client:aa = analytics_client(
adobe_org_id = ADOBE_ORG_ID,
subject_account = SUBJECT_ACCOUNT,
client_id = CLIENT_ID,
client_secret = CLIENT_SECRET,
account_id = GLOBAL_COMPANY_ID,
private_key_location = PRIVATE_KEY_LOCATION
)
aa.set_report_suite(report_suite_id = REPORT_SUITE_ID)Initial setup - OAuthImport the package and initiate the required parametersfrom analytics.mayhem.adobe import analytics_client
client_id = '<client id>'
client_secret = '<client secret>'
global_company_id = '<global company id>'Initialise the Adobe client:aa = analytics_client(
auth_client_id = client_id,
client_secret = client_secret,
account_id = global_company_id
)Perform the authenticationaa._authenticate()This will open a new window and will request you to login to Adobe. After you complete the login process, you will be redirect to the URL you configured as redirect URI during the Adobe Integration creation process. If everything is done correctly, final URL will have a URL query string parameter in the format ofwww.adobe.com/?code=eyJ..... Copy the full URL and paste it in the input text.
For a demo notebook, please refer to theJupyter Notebook - OAuth exampleReport ConfigurationsSet the date range of the report (format: YYYY-MM-DD)aa.set_date_range(date_start = '2019-12-01', date_end= '2019-12-31')To configure specific hours for the start and end date:aa.set_date_range(date_start='2020-12-01', date_end='2020-12-01', hour_start= 4, hour_end= 5 )Ifhour_endis set, then only up to that hour in the last day data will be retrieved instead of the full day.Request with 3 metrics and 1 dimensionaa.add_metric(metric_name= 'metrics/visits')
aa.add_metric(metric_name= 'metrics/orders')
aa.add_metric(metric_name= 'metrics/event1')
aa.add_dimension(dimension_name = 'variables/mobiledevicetype')
data = aa.get_report()Output:itemId_lvl_1value_lvl_1metrics/visitsmetrics/ordersmetrics/event10Other500031001728229488Tablet20045302163986270Mobile Phone492331...............Request with 3 metrics and 2 dimensions:aa.add_metric(metric_name= 'metrics/visits')
aa.add_metric(metric_name= 'metrics/orders')
aa.add_metric(metric_name= 'metrics/event1')
aa.add_dimension(dimension_name = 'variables/mobiledevicetype')
aa.add_dimension(dimension_name = 'variables/lasttouchchannel')
data = aa.get_report_multiple_breakdowns()Output:
Each item in level 1 (i.e. Tablet) is broken down by the dimension in level 2 (i.e. Last Touch Channel). The package downloads all possible combinations. In a similar fashion more dimensions can be added.itemId_lvl_1value_lvl_1itemId_lvl_2value_lvl_2metrics/visitsmetrics/ordersmetrics/event10Other1Paid Search23339100Other2Natural Search424124120Other3Display8404131.....................1728229488Tablet1Paid Search8012411728229488Tablet2Natural Search504121.....................Global segmentsTo add a segment, you need the segment ID (currently only this option is supported). To obtain the ID, you need to activate the Adobe Analytics Workspace debugger (https://github.com/AdobeDocs/analytics-2.0-apis/blob/master/reporting-tricks.md). Then inspect the JSON request window and locate the segment ID under the 'globalFilters' object.To apply the segment:aa.add_global_segment(segment_id = "s1689_5ea0ca222b1c1747636dc970")Issues, Bugs and Suggestions:https://github.com/konosp/adobe-analytics-reports-api-v2.0/issuesKnown missing features:No support for filteringNo support for custom sorting
|
analytics-mesh
|
analytics-mesh

Interfaces and facades that facilitate a common approach to analytics tasks

Getting Started

Please install the requirements:

pip install -r requirements

If you are going to be making modifications to the ipynb notebooks, then be sure to install the pre-commit hook (see below):

pre-commit install

Tests

Tests are currently split into unit and integration tests. As this package integrates with storage systems, the integration tests typically run against things like Google Cloud Platform.

Tests may be run in the root folder of the repo with:

coverage run -m unittest discover tests && coverage report

If you want to run just the unit tests (and ignore coverage), then:

python -m unittest discover tests/unit

Similarly for the integration tests.

Pre-Commit Hooks and Notebook Workflow

In this repo we are using the python pre-commit package (included in the requirements.txt file). In order to leverage it in your development workflow, you need to run the following command (assuming you have already installed your requirements):

pre-commit install

We follow the convention that a version-controlled ipynb file is converted to a markdown (md) file to form an ipynb-md pair that are both version controlled. This allows us to code review the markdown files whilst keeping the original ipynb file output intact for easy perusal on the repository manager (Gitlab in our case).

Packaging the Code

The contents of the mesh package are packaged in the build pipeline and submitted to pypi when merged to the master branch. The contents of the demos folder and tests are omitted from the package. See the .gitlab-ci.yml file for the pypi instructions we use, as well as the tests and linting that are performed. The consequence of this is that you will convert ipynb notebooks to md.

Contributions

Contributions are most welcome. Please submit patches or new features for code review by any of the main contributors:

- Jacques du Toit
- Carl du Plessis
- Jaco Gericke

Please be sure to run the tests prior to submission.
|
analytics-monolith
|
Failed to fetch description. HTTP Status Code: 404
|
analytics-monolyth
|
Failed to fetch description. HTTP Status Code: 404
|
analytics-python
|
This library was renamed due to a naming conflict with an official python library.
Please use the new listing at https://pypi.org/project/segment-analytics-python/

Segment is the simplest way to integrate analytics into your application.
One API allows you to turn on any other analytics service. No more learning
new APIs, repeated code, and wasted development time.

This is the official python client that wraps the Segment REST API (https://segment.com). Documentation and more details at https://github.com/segmentio/analytics-python
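This listing does not include a code sample. As a brief sketch of how this client is typically used (based on the documented analytics-python interface; consult the linked documentation for the authoritative API), you set a write key and then record events server side:

import analytics

# Write key from your Segment source settings (placeholder value).
analytics.write_key = 'YOUR_WRITE_KEY'

# Record an event for a known user.
analytics.track('user_123', 'Signed Up', {'plan': 'Pro'})

# Attach traits to that user.
analytics.identify('user_123', {'email': 'user@example.com'})

# Flush queued messages before the process exits.
analytics.flush()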
|
analytics-python-findhotel
|
This is an unofficial fork of Segment’s analytics SDK. This fork is 100%
compatible with Segment’s official SDK (it passes all the tests of the official
version), but it supports configuring the backend HTTPS endpoint to which the
events are delivered.

For more information on Segment go to https://segment.com. Documentation and more details at https://github.com/findhotel/analytics-python
|
analytics-report
|
No description available on PyPI.
|
analytics-reporting-client
|
Python Client Wrapper for Analytics Reporter Web Service
|
analytics-schema
|
No description available on PyPI.
|
analyticsTestNN1
|
No description available on PyPI.
|
analytics-toolbox
|
Titlesummary%load_extautoreload%autoreload2Welcome toanalytics_toolboxakaatbEnabling Data Scientists to amplify their inner Data Engineer.A toolbox for managing data coming from multiple Postgres, Redshift & S3 data sources while performing Analytics and Research. We also have some functionality that help users build Slack Bots.Install Uspip install analytics_toolboxDocumentationOur docs are currently useless as of 2020-02-12.Vote For Change!I'll see your comments on GitHub.Support UsComing someday, maybe?Do You Know About config Files?analytics_toolboxis only made possible by its reliance on standardized credential storage. You wanna use us, you sadly must play by some of our rules.We read and build classes via the variable names in the config files you pass to our code. Trust us. Its worth it. You'll end up saving 100s of lines of code by simply passing 1 to 2 arguments when instantiating our primary classes.Config Filesare a great way to store information. We chose this over other options like json or OS level environment variables for no clear reason. If you really want support for other credential formats,vote with your words here.Config File Format GuidelinesPostgres + Redshift ConnectionsIf your config file section has ahostname,port,databaseandusersections, then we'll parse it as a Redshift/Postgres database. You store your password in.pgpass(see below if this is new).Here is an example of Postgres/Redshift entries.""[dev_db]
hostname = dev.yourhost.com
port = 5432
database = dbname
user = htpeter
[prod_db]
hostname = prod.yourhost.com
port = 5432
database = dbname
user = htpeter

What is .pgpass?

When python's psycopg2 or even psql attempt to connect to a server, they will look in a file called ~/.pgpass. If they find matching server information, based on the target they are connecting to, they use that password.

~/.pgpass's format is simple. Include a line in the file that follows the following format:

hostname:port:database:username:password

Ensure you limit the permissions on this file using chmod 600 ~/.pgpass, otherwise no tools will use it due to its insecurity.

You don't pass database passwords to analytics_toolbox. Instead we leverage pgpass. Simply paste a record for each database in a text file ~/.pgpass with the following information.

Slack Connections

Our Slack APIs use Slack Bot OAuth Tokens. Create an OAuth token and save it to a variable called bot_user_oauth_token. You can store the token in a config section named whatever you want.

[company_slack]
oauth_token = 943f-1ji23ojf-43gjio3j4gio2-2fjoi23jfi23hio
[personal_slack]
oauth_token = 943f-dfase3-basf234234-fw4230kf230kf023k023Usage ExamplesQuerying Multiple Databases & Moving DataOur import is both useful and classy enough to be jammed up at the top with yourpds,nps andplts.importanalytics_toolboxasatbAnd then you simply create a database pool object with your Config File. It loads up all the goodies.db=atb.DBConnector('../atb_config_template.ini')db{ 'dev_db': <analytics_toolbox.connector.DatabaseConnection object at 0x11698bc88>,
'prod_db': <analytics_toolbox.connector.DatabaseConnection object at 0x1169d8198>}

Now we can query any of our databases easily!

# reference with the config file keyname
db['dev_db'].qry('select * from pg_class limit 5')
db['prod_db'].qry('select * from pg_class limit 5')

# or if config file section is pythonic, use its name just like pandas!
db.dev_db.qry('select * from pg_class limit 5')
db.prod_db.qry('select * from pg_class limit 5')
|
analytics-validator
|
No description available on PyPI.
|
analytics-zoo
|
Analytics Zoo is an open source Big Data AI platform, and includes the following features for scaling end-to-end AI to distributed Big Data:

- Orca: seamlessly scale out TensorFlow and PyTorch for Big Data (using Spark & Ray); see the sketch below
- RayOnSpark: run Ray programs directly on Big Data clusters
- BigDL Extensions: high-level Spark ML pipeline and Keras-like APIs for BigDL
- Chronos: scalable time series analysis using AutoML
- PPML: privacy preserving big data analysis and machine learning (experimental)

For more information, you may read the docs.
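As a minimal sketch of the Orca item above, based on the Orca user guide as I recall it, a program starts and stops an Orca context around its distributed training code; treat the exact function names and parameters below as assumptions and verify them against the Analytics Zoo documentation for your version:

from zoo.orca import init_orca_context, stop_orca_context

# Start an Orca context on the local machine; cluster modes such as "yarn-client"
# are also documented for deployment on a cluster (parameters here are assumptions).
sc = init_orca_context(cluster_mode="local", cores=4, memory="4g")

# ... build and run distributed TensorFlow/PyTorch training with Orca here ...

# Shut the context down when the job is finished.
stop_orca_context()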
|
analytics-zoo-serving
|
Analytics Zoo: A unified Data Analytics and AI platform for distributed TensorFlow,
Keras, PyTorch, Apache Spark/Flink and Ray.

You may want to develop your AI solutions using Analytics Zoo if:

- You want to easily prototype the entire end-to-end pipeline that applies AI models (e.g., TensorFlow, Keras, PyTorch, BigDL, OpenVINO, etc.) to production big data.
- You want to transparently scale your AI applications from a laptop to large clusters with "zero" code changes.
- You want to deploy your AI pipelines to existing YARN or K8S clusters WITHOUT any modifications to the clusters.
- You want to automate the process of applying machine learning (such as feature engineering, hyperparameter tuning, model selection and distributed inference).

For instructions to install analytics-zoo via pip, please visit our documentation page: https://analytics-zoo.github.io/master/#PythonUserGuide/install/

For source code and more information, please visit our GitHub page: https://github.com/intel-analytics/analytics-zoo
|
analyticViz
|
What is analyticViz?

analyticViz is a Python package that allows users to plot basic visualization charts using plotly. It is suitable for people who do not wish to write many lines of code to plot graphs using plotly. Some of the graphs available at the moment include bar charts, line plots, histograms, box plots, etc.

Main Features

Here are just a few of the things that analyticViz does well:

- Functions for plotting various visualizations
- Visualizations formatted with a consistent color when used in a Jupyter notebook
- Option to plot multiple graphs using one function

Where to get it

The source code is currently hosted on GitHub at: https://github.com/Adark-Amal/analyticViz

Binary installers for the latest released version are available at the Python Package Index (PyPI).

Dependencies

- Plotly - Provides all the needed visualizations

Installation

To install analyticViz via PyPI:

pip install analyticViz

License

MIT

Documentation

The official documentation is yet to be provided.

Background

Work on analyticViz started in 2022 and has been under active development since then.
|
analytic-wfm
|
UNKNOWN
|
analytic-workbench-clients
|
No description available on PyPI.
|
analytic-workspace-client
|
A client library for Analytic Workspace
Getting a token
Follow the link https://aw.example.ru/data-master/get-token (replace https://aw.example.ru/ with the address of your Analytic Workspace server).
It is best to save the token value in a separate file or put it in the AW_DATA_TOKEN environment variable.
Usage example
from aw_client import Session

with open('aw_token', 'rt') as f:
    aw_token = f.read()

session = Session(token=aw_token, aw_url='https://aw.example.ru')
# If the access token is set in the AW_DATA_TOKEN environment variable, the session object can be created
# without explicitly passing the token parameter: session = Session(aw_url='https://aw.example.ru')

df = session.load()  # df: pandas.DataFrame
display(df)
|
analytic-workspace-jupyter
|
No description available on PyPI.
|
analytiks
|
Failed to fetch description. HTTP Status Code: 404
|
analytix
|
A simple yet powerful SDK for the YouTube Analytics API.

Features
Pythonic syntax lets you feel right at home
Dynamic error handling saves hours of troubleshooting and makes sure only valid requests count toward your API quota
A clever interface allows you to make multiple requests across multiple sessions without reauthorising
Extra support enables you to export reports in a variety of filetypes and to a number of DataFrame formats
Easy enough for beginners, but powerful enough for advanced users

Installation
Installing analytix
To install the latest stable version of analytix, use the following command:
pip install analytix
You can also install the latest development version using the following command:
pip install git+https://github.com/parafoxia/analytix
You may need to prefix these commands with a call to the Python interpreter depending on your OS and Python configuration.

Dependencies
Below is a list of analytix's dependencies. Note that the minimum version assumes you're using CPython 3.8. The latest versions of each library are always supported.

| Name     | Min. version | Required? | Usage                                                          |
|----------|--------------|-----------|----------------------------------------------------------------|
| urllib3  | 1.10.0       | Yes       | Making HTTP requests                                           |
| jwt      | 1.2.0        | No        | Decoding JWT ID tokens (from v5.1)                             |
| openpyxl | 3.0.0        | No        | Exporting report data to Excel spreadsheets                    |
| pandas   | 1.4.0        | No        | Exporting report data to pandas DataFrames                     |
| polars   | 0.15.17      | No        | Exporting report data to Polars DataFrames                     |
| pyarrow  | 5.0.0        | No        | Exporting report data to Apache Arrow tables and file formats  |

OAuth authentication
All requests to the YouTube Analytics API need to be authorised through OAuth 2. In order to do this, you will need a Google Developers project with the YouTube Analytics API enabled. You can find instructions on how to do that in the API setup guide.
Once a project is set up, analytix handles authorisation, including token refreshing, for you. More details regarding how and when refresh tokens expire can be found on the Google Identity documentation.

Usage
Retrieving reports
The following example creates a CSV file containing basic info for the 10 most viewed videos, from most to least viewed, in the US in 2022:

from datetime import date
from analytix import Client

client = Client("secrets.json")
report = client.fetch_report(
    dimensions=("video",),
    filters={"country": "US"},
    metrics=("estimatedMinutesWatched", "views", "likes", "comments"),
    sort_options=("-estimatedMinutesWatched",),
    start_date=date(2022, 1, 1),
    end_date=date(2022, 12, 31),
    max_results=10,
)
report.to_csv("analytics.csv")

If you want to analyse this data using additional tools such as pandas, you can directly export the report as a DataFrame or table using the to_pandas(), to_arrow(), and to_polars() methods of the report instance (see the short sketch at the end of this entry). You can also save the report as a .tsv, .json, .xlsx, .parquet, or .feather file. There are more examples in the GitHub repository.

Fetching group information
You can also fetch groups and group items:

from analytix import Client

# You can also use the client as context manager!
with Client("secrets.json") as client:
    groups = client.fetch_groups()
    group_items = client.fetch_group_items(groups[0].id)

Logging
If you want to see what analytix is doing, you can enable the packaged logger:

import analytix

analytix.enable_logging()

This defaults to showing all log messages of level INFO and above. To show more (or less) messages, pass a logging level as an argument.

Compatibility
CPython versions 3.8 through 3.12 and PyPy versions 3.8 through 3.10 are officially supported*. CPython 3.13-dev is provisionally supported*. Windows, MacOS, and Linux are all supported.
*For base analytix functionality; support cannot be guaranteed for functionality requiring external libraries.

Contributing
Contributions are very much welcome! To get started:
Familiarise yourself with the code of conduct
Have a look at the contributing guide

License
The analytix module for Python is licensed under the BSD 3-Clause License.
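To tie the export and logging options above together, here is a minimal sketch. The dimensions, metrics, and date range are illustrative only; it assumes enable_logging accepts a standard logging level (as the Logging section states) and that pandas is installed so that to_pandas() is available.

import logging
from datetime import date

import analytix
from analytix import Client

# Show DEBUG-level messages instead of the default INFO (see the Logging section above).
analytix.enable_logging(logging.DEBUG)

with Client("secrets.json") as client:
    report = client.fetch_report(
        dimensions=("day",),
        metrics=("views", "estimatedMinutesWatched"),
        start_date=date(2022, 1, 1),
        end_date=date(2022, 3, 31),
    )

# Work with the data in pandas instead of writing a CSV (requires pandas; see Dependencies).
df = report.to_pandas()
print(df.head())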
|
analytixhero
|
AnalytiXHero
Current Version: 0.0.1
Description: Everything That Needs To Be Done While Preprocessing Data, May It Be Outlier Handling, Skewness/Kurtosis Minimization, Treating Null Spaces Etc. Can Be Done With Pre-Defined State-Of-The-Art Features.
Dependencies: NumPy, Pandas, Scikit-Learn, SciPy, MatPlotLib, Seaborn, Python-Dateutil (These Dependencies Are Subjected To Current Version)
For Contribution: Check It Here
Documentation: Check It Here
There Are Many Libraries That One Can Use Like Scikit-Learn, NumPy Or Pandas, But This Library, AnalytiXHero, Can Make Preprocessing A Task Of Just One Line.
|
analytracks
|
No description available on PyPI.
|
analyzdat
|
No description available on PyPI.
|
analyze-distributions
|
No description available on PyPI.
|
analyzefit
|
# analyzefit

Analyzefit is a python package that performs standard analysis on the fit of a regression model. The analysis class validate method will create a residuals vs fitted plot, a quantile plot, a spread location plot, and a leverage plot for the model provided as well as print the accuracy scores for any metric the user likes. For example:

If a detailed plot is desired then the plots can also be generated individually using the methods res_vs_fit, quantile, spread_loc, and leverage respectively. By default when the plots are created individually they are rendered in an interactive environment using the bokeh plotting package. For example:

This allows the user to determine which points the model is failing to predict.

Full API Documentation available at: [github pages](https://wsmorgan.github.io/analysefit/).

## Installing the code

To install analyzefit you may either pip install:

```
pip install analyzefit
```

or clone this repository and install manually:

```
python setup.py install
```

# Validating a Model

To use analyzefit simply pass the feature matrix, target values, and the model to the analysis class, then call the validate method (or any other plotting method). For example:

```
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from analyzefit import Analysis

df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data', header=None, sep="\s+")
df.columns = ["CRIM","ZN","INDUS","CHAS","NOX","RM","AGE","DIS","RAD","TAX","PTRATIO","B","LSTAT","MEDV"]

X = df.iloc[:,:-1].values
y = df[["MEDV"]].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

slr = LinearRegression()
slr.fit(X_train, y_train)

an = Analysis(X_train, y_train, slr)
an.validate()
an.validate(X=X_test, y=y_test, metric=[mean_squared_error, r2_score])
an.res_vs_fit()
an.quantile()
an.spread_loc()
an.leverage()
```

## Python Packages Used

- numpy
- matplotlib
- bokeh
- sklearn
|
analyzefrc
|
AnalyzeFRCDeveloped at the Department of Imaging Physics (ImPhys), Faculty of Applied Sciences, TU Delft.Plots, analysis and resolution measurement of microscopy images using Fourier Ring Correlation (FRC).AnalyzeFRC has native support for .lif files and can also easily read single images in formats supported by Pillow (PIL). Other formats require converting that image into a NumPy array and using that to instantiate AnalyzeFRC's native objects.AnalyzeFRC provides a lot of default options and convenience functions for a specific use case. However, its core functionality, themeasure_frcfunction inanalyzefrc.processcan be adapted in other workflows. You can also directly use thefrc library, on which this library is built.Defaults (please read)By default, when usingprocess_frc,preprocessis set to True. It ensures that each input image is cropped into square form and that a Tukey window is applied. Supplyproprocess=Falseto disable this behavior.By default, when usingprocess_frc,concurrencyis set to False. If set to true by passingconcurrency=True, it leverages thedecopackage to leverage more cores for a 1.5x+ speedup (not higher because the most resource-intensive computations are already parallelized). !! However, please run the program inside aif __name__ == '__main__':block when concurrency is enabled! Otherwise it will fail! Note: on some platforms, this type of concurrency can cause issues, notably Linux and macOS. This is a problem caused by a dependency.By default, if anFRCMeasurementis processed without any presetCurveTaskand has two images, it sets the method to2FRC. Otherwise,1FRCis used.By default, plots are grouped bymeasures, i.e. every measurement will be plotted separately. Use thegroup_<grouping>. Other available groupings includeall(all curves in one plot, use this only to retrieve them to use custom groupings),sets(all curves in the same set name in one plot) andcurves(one plot per curve).By default, 1FRC curves are computed 5 times and averaged, this can be overriden by passingoverride_nto process_frc.InstallationWith (Ana)condaIf you already have Anaconda installed (or miniconda), it is easiest to create a new Python 3.9 environment. Open the Anaconda/miniconda3 prompt and write (here 'envanalyze' can be any environment name you like):condacreate-n'envanalyze'python=3.9This package depends on a number of PyPI-only packages, some also with compiled extensions, which are difficult to port to conda. For this reason, it is recommended to have a seperate environment with only this package and then install using pip:condaactivateenvanalyze
pipinstallanalyzefrcYou now have an environment called 'envanalyze' with analyzefrc installed. Configure your favorite IDE to use the newly created environment and you're good to go! See the usage examples for more details on how to use this package.Without condaCurrently, this library only works on Python 3.9. Ensure you have a working installation. You can use tools likepyenvfor managing Python versions.It is recommended to install this library into avirtual environment. Many tools exist for this today (most IDEs can do it for you), but I recommendPoetry.Install using:pipinstallanalyzefrcIf using Poetry:poetryaddanalyzefrcThis library indirectly (through thefrclibrary) depends onrustfrc(Rust extension) anddiplib(C++ extension). These compiled extensions can sometimes cause issues, so refer to their pages as well.UsageDefault .lif processingTo simply compute the 1FRC of all channels of a .lif dataset and plot the results, you can do the following:importanalyzefrcasafrc# This if-statement is required because concurrency is enabledif__name__=='__main__':# ./ means relative to the current folderfrc_sets=afrc.lif_read('./data/sted/2021_10_05_XSTED_NileRed_variation_excitation_power_MLampe.lif')plot_curves=afrc.process_frc("XSTED_NileRed",frc_sets,preprocess=True)afrc.plot_all(plot_curves)Plot series in one plotIf instead you want to plot each image inside a .lif file in a single plot, do the following:...# imports and processingplot_curves=afrc.process_frc("XSTED_NileRed",frc_sets,grouping='sets',preprocess=True,concurrency=False)afrc.plot_all(plot_curves)Change grouping after computationOr if you already computed the curves with the default grouping ('all'):...# imports and processingfrc_per_set_sets=afrc.group_sets(plot_curves)plot_all(frc_per_set_sets)Save instead of plotIf you don't want to plot the results (in the case of many images the IDE plot buffer can easily be exceeded), but instead save them:...# imports and processing# Will save to './results/<timestamp>-XSTED_NileRed'save_folder=afrc.create_save('./results','XSTED_NileRed',add_timestamp=True)afrc.plot_all(plot_curves,show=False,save=True,save_directory=save_folder,dpi=180)Only extract data, don't plotPlotting using your own tools can also be desired. To extract only the resulting data, do not callplot_all. Instead, use the result ofprocess_frc, which yields a dictionary of lists ofCurve-objects. ACurve-object is simply a data container for NumPy arrays and metadata. 
An example:...# imports and data readingfromanalyzefrcimportCurveimportmatplotlib.pyplotaspltplot_curves:dict[str,list[Curve]]=afrc.process_frc("XSTED_NileRed",frc_sets,grouping='sets',preprocess=True)# plot all on your ownforcurvesinplot_curves.values():first_curve:Curve=curves[0]plt.plot(first_curve.curve_x,first_curve.curve_y)plt.plot(first_curve.curve_x,first_curve.thres)plt.show()Example: 1FRC vs 2FRC from .tiffA slightly more complex example: If you have a sample .tiff file and you want to compare the performance of 1FRC vs 2FRC, you could do the following:importnumpyasnpimportdiplibasdipimportfrc.utilityasfrcuimportanalyzefrcasafrcfromanalyzefrcimportFRCMeasurement,FRCSetdata_array:np.ndarray=afrc.get_image('./data/siemens.tiff')# Blur the image (to create a frequency band)data_array=frcu.gaussf(data_array,30)data_dip=dip.Image(data_array)half_set_1=np.array(dip.PoissonNoise(data_dip/2))half_set_2=np.array(dip.PoissonNoise(data_dip/2))full_set=np.array(dip.PoissonNoise(data_dip))# Create seperate measurement objectsfrc_2:FRCMeasurement=afrc.frc_measure(half_set_1,half_set_2,set_name='2FRC')frc_1:FRCMeasurement=afrc.frc_measure(full_set,set_name='1FRC')# Combine in one set so they can be plot togetherfrc_set:FRCSet=afrc.frc_set(frc_1,frc_2,name='2FRC vs 1FRC')plot_curve=afrc.process_frc("2FRC vs 1FRC",frc_set,concurrency=False)afrc.plot_all(plot_curve)DetailsThe three operations of setting up the measurements, computing the curves and plotting them are all decoupled and each have their Python module (analyzefrc.read,analyzefrc.process,analyzefrc.plot, respectively). Furthermore, actual file reading convenience functions can be found inanalyzefrc.file_read.FRCSet, FRCMeasurement and FRCMeasureSettingsFor setting up the measurements in preparation of processing, these three classes are essential.FRCSet-objects can be completely unrelated, they share no information. As such, if doing batch processing of different datasets, they can be divided overFRCSet-objects.
Within anFRCSet, there can be an arbitrary number ofFRCMeasurement-objects, which should have similar image dimensions and should, in theory, be able to be sensibly plotted in a single figure.FRCMeasurementis the main data container class. It can be instantiated using anFRCMeasureSettings-object, which contains important parameters that are the same across all images within the measurement (such as the objective's NA value). If these differ across the images, multiple measurements should be used.Changing default curvesBy default, when processing, a singleCurveTaskwill be generated for eachFRCMeasurement, meaning a single curve will be generated for each measurement. However, if a different threshold (other than the 1/7) is desired, or multiple curves per figure are wanted, aCurveTaskcan be created beforehand and given to theFRCMeasurement.Example:...# see .tiff examplefromanalyzefrcimportCurveTask# Create seperate measurement objects# For example we want a smoothed curve for the 1FRC, as well as a non-smoothed curvefrc1_task_smooth=CurveTask(key='smooth_curve',smooth=True,avg_n=3,threshold='half_bit')frc1_task=CurveTask(key='standard_curve',avg_n=3,threshold='half_bit')frc_2:FRCMeasurement=afrc.frc_measure(half_set_1,half_set_2,set_name='2FRC')frc_1:FRCMeasurement=afrc.frc_measure(full_set,set_name='1FRC',curve_tasks=[frc1_task,frc1_task_smooth])...# process and plotChanging default processingIf other measurement-based processings are desired, they can be added in two ways. Arbitrary functions (of the typeMeasureProcessing = Callable[[FRCMeasurement], FRCMeasurement]) can be run for each measurement by passing them as a list to theextra_processings-argument forprocess_frc, or by populating theFRCMeasurement-objects'extra_processingsattribute.Note: each processing is performed in list order after the optionalpreprocessingstep, with global extras performed before the measurement-defined extra processing tasks.This can be useful when using a convenience file loading function. For example, to flip every image and apply a different window functon:...# .lif examplefromanalyzefrcimportFRCMeasurementimportnumpyasnpfromscipy.signalimportwindowsaswinsdefflip_window_data(measure:FRCMeasurement)->FRCMeasurement:measure.image=np.flip(measure.image)size=measure.image.shape[0]assertsize==measure.image.shape[1]cosine=wins.tukey(size)cosine_square=np.ones((size,size))*cosine.reshape((size,1))*cosine.reshape((1,size))measure.image=cosine_square*measure.imagereturnmeasureplot_curves=afrc.process_frc("XSTED_NileRed",frc_sets,preprocess=False,extra_processings=[flip_window_data],concurrency=False)...# plotOther internal detailsThe general processing flow is as follows:(read/read_file) CreateFRCMeasureSettingsbased on data acquisition parameters(read/read_file) CreateFRCMeasurementusing the previous step.(Optionally) create customCurveTask-objects for theFRCMeasurement. Created by default in theprocessstep if not provided.(read/read_file) CreateFRCSetusing multipleFRCMeasurement-objects.(process) ComputeCurve-objects usingmeasure_frc.(process) Sort/group theCurve-objects into a dictionary with lists ofCurve-objects as entries.(plot) Plot thelist[Curve]-dictionary, where each entry becomes a single figure.All steps besides themeasure_frc-step can be implemented in a custom way quite trivially. In a way, all steps except step 5 are for your convenience. 
Step 5, which is the only step that involves actually processing all the data using thefrclibrary, forms the core of this package.PerformanceProcessing 32 measurements of 1024x1024 pixels takes about thirty seconds to read from a .lif file, process (computing each curve 5 times) and plot on my i7-8750H laptop CPU (which is decently performant even by today's standards).Over 80% of the time is spent processing, i.e. performing the binomial splitting and computing the FRCs (with the latter taking significantly longer). All these functions are implemented through Rust (rustfrc), C++ (diplib) or C (numpy) extensions, meaning they are as fast as can be and mostly parallelized.10-15% of the time is spent plotting using matplotlib, meaning the overhead of this library is only 5-10%.
|
analyze-html
|
Analyze-HTML
Introduction
Analyze-HTML is a project that reads your HTML tags and counts their appearances.
How it works
It uses Requests to get the HTML of a URL and then uses BeautifulSoup4 to parse the HTML tags it contains.
Typer CLI
Several commands are available through the Typer CLI:
generate-html: returns the HTML; to save it locally, add --save-to-local=True
count-tag-from-local
count-tag-from-url
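The counting approach described above can be sketched in a few lines. This is an illustration of the technique (Requests plus BeautifulSoup4 plus a tag counter), not the package's own CLI or API.

from collections import Counter

import requests
from bs4 import BeautifulSoup

def count_tags(url: str) -> Counter:
    # Fetch the page, parse it, and count how often each HTML tag appears.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return Counter(tag.name for tag in soup.find_all(True))

if __name__ == "__main__":
    print(count_tags("https://example.com").most_common(5))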
|
analyzeMFT
|
===========Analyze MFT===========analyzeMFT.py is designed to fully parse the MFT file from an NTFS filesystemand present the results as accurately as possible in multiple formats.Installation===========You should now be able to install analyzeMFT with pip:pip install analyzeMFTAlternatively:git pull https://github.com/dkovar/analyzeMFT.gitpython setup.py install (or, just run it from that directory)Usage===========Usage: analyzeMFT.py [options]Options:-h, --help show this help message and exit-v, --version report version and exitFile input options:-f FILE, --file=FILE read MFT from FILEFile output options:-o FILE, --output=FILEwrite results to FILE-c FILE, --csvtimefile=FILEwrite CSV format timeline file-b FILE, --bodyfile=FILEwrite MAC information to bodyfileOptions specific to body files:--bodystd Use STD_INFO timestamps for body file rather than FNtimestamps--bodyfull Use full path name + filename rather than justfilenameOther options:-a, --anomaly turn on anomaly detection-l, --localtz report times using local timezone-e, --excel print date/time in Excel friendly format-d, --debug turn on debugging output-s, --saveinmemory Save a copy of the decoded MFT in memory. Do not usefor very large MFTs-p, --progress Show systematic progress reports.-w, --windows-path Use windows path separator when constructing the filepath instead of linuxOutput=========analyzeMFT can produce output in CSV or bodyfile format.CSV output---------The output is currently written in CSV format. Due to the fact that Excelautomatically determines the type of data in a column, it is recommended thatyou write the output to a file without the .csv extension, open it in Excel, andset all the columns to "Text" rather than "General" when the import wizardstarts. Failure to do so will result in Excel formatting the columns in a waythat misrepresents the data.I could pad the data in such a way that forces Excel to set the column type correctlybut this might break other tools.GUI:You can turn off all the GUI dependencies by setting the noGUI flag to 'True'. This is for installations that don't want to install the tk/tcl libraries.Update History=============[See CHANGES.txt]Version 2.0.4:Minor tweaks to support external programsVersion 2.0.3:Restructured to support PyPi (pip)Version 2.0.2:De-OOP'd MFT record parsing to reduce memory consumptionVersion 2.0.1:Added L2T CSV and body file support back in, fixed some minor bugs along the wayMade full file path calculation more efficientVersion 2.0.0 Restructured layout to turn it into a module.Made it more OOP.Improved error handling and corrupt record detection------ Version 1 history follows ------Version 1.0: Initial releaseVersion 1.1: Split parent folder reference and sequence into two fields. I'm still trying to figure out thesignificance of the parent folder sequence number, but I'm convinced that what some documentationrefers to as the parent folder record number is really two values - the parent folder record numberand the parent folder sequence number.Version 1.2: Fixed problem with non-printable characters in filenames. Any Unicode character is legal in afilename, including newlines. This presented some problems in my output. Characters that do notrender well are now converted to hex and a note is added to the Notes column indicating this.(I've learned a lot about Unicode since I first wrote this.)Added "compile time" flag to turn off the inclusion of any GUI related modules and librariesfor systems missing tk/tcl support. 
(Set noGUI to True in the code)Version 1.3: Added new column to hold log entries relating to each record. For example, a note stating thatsome characters in the filename were converted to hex as they could not be printed.Version 1.4: Credit: Spencer Lynch. I was misusing the flags field in the MFT header. The first bit isActive/Inactive. The second bit is File/Folder.Version 1.5: Fixed date/time reporting. I wasn't reporting useconds at all.Added anomaly detection. Adds two columns:std-fn-shift: If Y, entry's FN create time is after the STD create timeusec-zero: If Y, entry's STD create time's usec value is zeroVersion 1.6: Various bug fixesVersion 1.7: Bodyfile support, with thanks to Dave HullVersion 1.8: Added support for full path extraction, written by Kristinn GudjonssonVersion 1.9: Added support for csv timeline outputVersion 1.10: Just for TomVersion 1.11: Fixed TSK bodyfile outputVersion 1.12: Fix orphan file detection issue that caused recursion error (4/18/2013)Version 1.13: Changed from walking all sequence numbers to pulling sequence number from MFT. Previous approach did not handlegaps wellVersion 1.14: Made -o output optional if -b is specified. (Either/or)Version 1.15: Added file size (real, not allocated) to bodyfile.Added bodyfile option to include fullpath + filename rather than just filenameAdded bodyfile option to use STD_INFO timestamps rather than FN timestampsVersion 2 history is in CHANGES.txtInspiration===========My original inspiration was a combination of MFT Ripper (thus the current output format) and theSANS 508.1 study guide. I couldn't bear to read about NTFS structures again,particularly since the information didn't "stick". I also wanted to learn Pythonso I figured that using it to tear apart the MFT file was a reasonably sizedproject.Many of the variable names are taken directly from Brian Carrier's The Sleuth Kit. His code, plus hisbook "File System Forensic Analysis", was very helpful in my efforts to write this code.The output format is almost identical to Mark Menz's MFT Ripper. His tool really inspired me to learnmore about the structure of the MFT and to learn what additional information I could glean fromthe data.I also am getting much more interested in timeline analysis and figured that really understanding thethe MFT and having a tool that could parse it might serve as a good foundationfor further research in that area.Future work===========1) Figure out how to write the CSV file in a manner that forces Excel to interpret the date/timefields as text. If you add the .csv extension Excel will open the file without invoking the importwizard and the date fields are treated as "General" and the date is chopped leaving just the time.2) Add version switch3) Add "mftr" switch - produce MFT Ripper compatible output4) Add "extract" switch - extract or work on live MFT file5) Finish parsing all possible attributes6) Look into doing more timeline analysis with the information7) Improve the documentation so I can use the structures as a reference and reuse the code more effectively8) Clean up the code and, in particular, follow standard naming conventions9) There are two MFT entry flags that appear that I can't determine the significance of. 
These appear inthe output as Unknown1 and Unknown210) Parse filename based on 'nspace' value in FN structure11) Test it and ensure that it works on all major Windows OS versions12) Output HTML as well as CSV13) If you specify a bad input filename and a good output filename, you get anerror about the output filename.Useful Documentation====================1) http://dubeyko.com/development/FileSystems/NTFS/ntfsdoc.pdf
|
analyze-objects
|
analyze_objects contains command line tools that analyze compiled object files (.o, .obj). It is a wrapper around the platform specific tools nm (linux) or dumpbin (windows).
Currently, it consists of the single shell command find_symbols.
Installing
Install from pip:
python -m pip install analyze_objects
Usage
If binaries of installed python packages are added to the PATH, you can call find_symbols directly from the shell:
find_symbols
Otherwise, it can be invoked using python:
python -m analyze_objects.find_symbols
Examples
Use the following command to search the object files foo.o and bar.o for undefined symbols that match the regular expression "foo":
find_symbols --undef_regex foo foo.o bar.o
Using this command requires that nm (linux) or dumpbin (windows) are available in the PATH. If that is not the case, you can use the --nm_exe or --dumpbin_exe arguments to pass their location to find_symbols. For convenience, you may pass --store_config in addition to --nm_exe or --dumpbin_exe, so that this path will be used in all subsequent calls to find_symbols. The stored configuration can be cleared using --clear_config.
Use --def_regex instead of --undef_regex to search for defined symbols. It is possible to combine both arguments and search for both defined and undefined symbols. The find_symbols command accepts an arbitrary number of object files. It is possible to use placeholders ** and * in the object file paths.
|
analyzequicker
|
Analyze QuickerThis package was created to analyze data quicker.