package
|
package-description
|
---|---|
anna-cli
|
No description available on PyPI.
|
anna-client
|
anna client

setup

$ pip install anna-client

usage

initialization

from anna_client.client import Client

client = Client(endpoint='http://localhost:5000/graphql')

get jobs

# get all job ids
jobs = client.get_jobs()
# you can specify a where clause & the fields you wish to receive
jobs = client.get_jobs(where={'id_in': [...]}, fields=('driver', 'site', 'status'))

create jobs

# create_jobs takes a list of dicts describing your jobs
jobs = client.create_jobs(data=[{'driver': 'firefox', 'site': 'test'}])

delete jobs

# provide no parameters in order to delete all jobs
client.delete_jobs(where={})
# or delete specific jobs
client.delete_jobs(where={'id_in': my_jobs})

update jobs

# provide no where parameter in order to update all jobs
client.update_jobs(data={'status': 'STOPPED'})
# or update specific jobs
client.update_jobs(where={'id_in': my_jobs}, data={'status': 'STOPPED'})

reserve jobs

# reserve_jobs takes a worker and a tuple of job ids
client.reserve_jobs(worker='worker', job_ids=my_jobs)

get tasks

# get_tasks takes a namespace & returns a url and a list of tuples containing the task names & definitions
url, tasks = client.get_tasks(namespace='test')
|
anna-dashboard
|
No description available on PyPI.
|
annadb
|
AnnaDB python driver

Install

pip install annadb

Connect

from annadb import Connection

conn = Connection.from_connection_string("annadb://localhost:10001")

Tutorial

Please follow the official tutorial to see all the features: https://annadb.dev/tutorial/python/
|
anna-lib
|
anna-lib

The purpose of this package is to simplify the use of selenium.

requirements

selenium

installation

$ pip install anna-lib

usage

from anna_lib.selenium import driver, events, assertions

result = []
firefox = driver.create(driver='firefox', headless=True)
firefox.get('http://example.com/')
events.click(driver=firefox, target='a[href="http://www.iana.org/domains/example"]')
result.append(assertions.current_url_is(firefox, 'http://www.iana.org/domains/example'))

driver

Use this module to create a webdriver based on a set of options:

param | type | required | values | default value
---|---|---|---|---
driver | string | yes | 'firefox' or 'chrome' for now | 'firefox'
headless | bool | no | True or False | False
resolution | tuple | no | (width, height) | (1920, 1080)

events

Use this module to interact with pages. Each event takes a driver, a target & a timeout which defaults to 16 seconds, the exception being send_keys, which also requires a value.
The target is treated as a css selector unless it starts with '$xpath', in which case it is treated as an xpath selector.

from anna_lib.selenium import events, driver

firefox = driver.create('firefox', headless=True)
events.click(driver=firefox, target='#search')
events.send_keys(driver=firefox, target='#search', value='search terms')
events.submit(driver=firefox, target='#search')
events.hover(driver=firefox, target='$xpath//div.result/a')
events.scroll_to(driver=firefox, target='#thing')
events.switch_to(driver=firefox, target='iframe')

assertions

Use this module to check the state of a page, be it by the url or by the page's elements.
Each assertion takes a driver, some input & a timeout parameter which defaults to 16 seconds.

from anna_lib.selenium import assertions, driver

firefox = driver.create('firefox', headless=True)
try:
    assertions.url_equals(driver=firefox, expected='about:blank')
    assertions.in_url(driver=firefox, part='blank')
    assertions.element_exists(driver=firefox, target='body')
except ValueError as e:
    print(str(e))
except TypeError as e:
    print(str(e))
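Tying the driver options table back to code, a short sketch passing all three documented options (the values here are illustrative; the keywords are the ones from the table above):

from anna_lib.selenium import driver

# 'chrome' plus a custom resolution, per the options table
chrome = driver.create(driver='chrome', headless=False, resolution=(1280, 720))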
|
annalise-confluence-junction
|
Annalise AI - Confluence Junction

This project is expanded from https://github.com/HUU/Junction

TO DO

- move away from using a docker image and use a published python package
- unit tests for everything

Running Locally

To run this locally, run docker compose up with the documents and images directories as env variables:

DOCS=<path to docs> IMAGE=<path to images> docker-compose up

Overview

Junction works by inspecting the changes made on a commit-by-commit basis to your Git repository, and determining what needs to be changed in Confluence to reflect those changes. Junction (currently) expects to manage the entire space in Confluence. Thus when using Junction you must tell it which Space to target and update. You must not manually change, create, or modify pages in the target space, or else Junction may be unable to synchronize the state in Git with the state in Confluence.

To allow mixing code (and other items) with markdown files for Junction in a single repository, you can tell Junction a subpath within your repository that functions as the root, e.g. all markdown files will be kept in docs/. All files should end with the .md extension.

The page will get its title from the file name, and its contents will be translated into Confluence markup. See this example for what the output looks like in Confluence.

Usage

Collect a set of credentials that Junction will use to log in to Confluence. You will need to create an API token to use instead of a password. I recommend you make a dedicated user account with access permissions limited to the space(s) you want to manage with Junction.

In your git repository, create a folder structure and markdown files you would like to publish. Commit those changes.

.
├── (your code and other files)
└── docs/
    ├── Welcome.md
    ├── Installation.md
    └── Advanced Usage/
    |   ├── Airflow.md
    |   ├── Visual Studio Online.md
    |   ├── Atlassian Bamboo.md
    |   └── GitHub Actions.md
    └── Credits.md

Images

Images should be placed inside the images directory within a subdirectory that has the same name as the respective file. For the above example the image directory could look like this:

.
└── images/
├── Welcome/
├── image1.png
└── image2.png
├── Installation/
└── image1.png
└── Advanced Usage/
├── image1.png
├── image2.png
├── Airflow/
            └── image1.png

Mermaid Diagrams

Mermaid diagrams can be included in the markdown but must include the document name in the opening fence:

```mermaid filename=<document name>

See here for using mermaid.js in github.
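For illustration, a complete fenced diagram might then look like the sketch below; the document name "Welcome" and the diagram body are hypothetical examples — only the filename= fence syntax comes from the note above:

```mermaid filename=Welcome
graph TD
    commit[Git commit] --> junction{Junction}
    junction --> page[Confluence page]
```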
|
anna-node
|
anna

Fetches tasks from an API, executes events & reports back.
|
annaohero
|
annaohero

annaohero - this module is a Python library for building multiple files into one file and vice versa.

Installation

Install the current version with PyPI:

pip install annaohero

Usage

Files into one file and vice versa

To combine the files into a single file, use the command:

compress(r"(path to the folder with your files)")
# example:
compress(r'C:\Users\username\Desktop\folder')

After you have entered this command, the "sop.nosh" file will be created. This is the source file with your files!

To separate the files, use the following command (the folder must contain the file that you created with the compress() command):

uncompress(r'(path to folder with file "sop.nosh")')
# example:
uncompress(r'C:\Users\username\Desktop\folderwithsopnosh')

Example

from annaohero import compress, uncompress

pathtofiles = r'C:\Users\username\Desktop\folder'
compress(pathtofiles)

pathtofilewithsop = r'C:\Users\username\Desktop\folderwithsopnosh'
uncompress(pathtofilewithsop)

Contributing

Bug reports and/or pull requests are welcome.
|
annaPack1
|
No description available on PyPI.
|
annapdf
|
This is the homepage of our project
|
anna-risk-score-client
|
Failed to fetch description. HTTP Status Code: 404
|
annas-py
|
annas-py

Anna's Archive unofficial client library based on web scraping.

Usage

Install by running:

pip install annas-py

Usage example:

import annas_py

results = annas_py.search("python", language=annas_py.models.args.Language.EN)
for r in results:
    print(r.title)

information = annas_py.get_informations(results[0].id)
print("Title:", information.title)
print("Description:", information.description)
print("Links:", information.urls)
|
annasys-console
|
python-console

A Click-based console logging utility.
|
anna-tasks
|
anna tasks package
|
anna-unittasks
|
anna tasks package
|
anna-worker
|
anna-worker
|
annax
|
Annax: Approximate Nearest Neighbor Search with JAX

Annax is a high-performance Approximate Nearest Neighbor (ANN) library built on top of the JAX framework. It provides fast and memory-efficient search for high-dimensional data in various applications, such as large-scale machine learning, computer vision, and natural language processing. Annax leverages the power of GPU acceleration to deliver outstanding performance and includes a wide range of indexing structures and distance metrics to cater to different use cases. The easy-to-use API makes it accessible to both beginners and experts in the field.

Features

- Fast and memory-efficient approximate nearest neighbor search
- GPU acceleration for high-performance computing
- Supports a wide range of indexing structures and distance metrics
- Easy-to-use API for seamless integration with existing projects
- Applicable to various domains, including machine learning, computer vision, and natural language processing
- Built on top of the JAX framework for enhanced flexibility and extensibility

Installation

To install Annax, simply run the following command in your terminal:

pip install annax

Quick Start

Here's a simple example of using Annax to find the nearest neighbors in a dataset:

import numpy as np
import annax

# Generate some random high-dimensional data
data = np.random.random((1000, 128))

# Create an Annax index with the default configuration
index = annax.Index(data)

# Query for the 10 nearest neighbors of a random vector
query = np.random.random(128)
neighbors, distances = index.search(query, k=10)

Index Types

- annax.Index: Flat Index
- annax.IndexIVF: Inverted File Index
- annax.IndexPQ: Product Quantization Index
- annax.IndexIVFPQ: Inverted File Index with Product Quantization

Development

To install Annax for development, run the following commands in your terminal:

python -m pip install -e '.[dev]'
pre-commit install

License

Annax is released under the MIT License.
|
annb
|
ANNB: Approximate Nearest Neighbor Benchmark

Note: This is a work in progress. The API/CLI is not stable yet.

Installation

pip install annb
# install the vector search index/client you may need for the benchmark
# e.g. install faiss to run faiss index benchmarks

Usage

CLI Usage

Run Benchmark

Start your first benchmark with a random dataset: just run annb-test.

annb-test

It will produce a result like this:

❯ annb-test
... some logs ...
BenchmarkResult:
attributes:
query_args: [{'nprobe': 1}]
topk: 10
jobs: 1
loop: 5
step: 10
name: Test
dataset: .annb_random_d256_l2_1000.hdf5
index: Test
dim: 256
metric_type: MetricType.L2
index_args: {'index': 'ivfflat', 'nlist': 128}
started: 2023-08-14 13:03:40
durations:
training: 1 items, 1000 total, 1490.03266ms
insert: 1 items, 1000 total, 132.439627ms
query:
   nprobe=1,recall=0.2173 -> 1000 items, 18.615083ms, 53719.878659686874qps, latency=0.18615083ms, p95=0.31939ms, p99=0.41488ms

This is a simple benchmark test with the default index (faiss) and a random l2 dataset.
If you want to generate more data or use different specifications for the dataset, see the options below:

--index-dim          The dimension of the index, default is 256
--index-metric-type  Index metric type, l2 or ip, default is l2
--topk TOPK          topk used for query, default is 10
--step STEP          the query step; by default annb will query 10 items per query. You could set it to 0 to query all items in one query (similar to batch for ann-benchmarks)
--batch              batch mode, alias for --step 0
--count COUNT        the total number of items in the dataset, default is 1000

run benchmark with a specific dataset

You could also use ann-benchmarks's datasets to run a benchmark: download them locally and run the benchmark with the --dataset option.

annb-test --dataset sift-128-euclidean.hdf5

run benchmark with query args

You may benchmark with different query args, e.g. different nprobe values for the faiss ivfflat index. Try the --query-args option:

annb-test --query-args nprobe=10 --query-args nprobe=20

This will output the result below:

durations:
training: 1 items, 1000 total, 1548.84968ms
insert: 1 items, 1000 total, 143.402532ms
query:
nprobe=1,recall=0.2173 -> 1000 items, 20.074236ms, 49815.09632545916qps, latency=0.20074235999999998ms, p95=0.332276ms, p99=0.455525ms
nprobe=10,recall=0.5221 -> 1000 items, 49.141931ms, 20349.2207092961qps, latency=0.49141931ms, p95=0.722628ms, p99=0.818012ms
    nprobe=20,recall=0.6861 -> 1000 items, 69.284072ms, 14433.331805324606qps, latency=0.69284072ms, p95=1.126946ms, p99=1.350359ms

run multiple benchmarks with config file

You may run multiple benchmarks with different indexes and datasets. Use --run-file to run benchmarks from a config file. Below is an example config file, config.yaml:

default:
  index_factory: annb.anns.faiss.indexes.index_under_test_factory
  index_factory_args: {}
  index_name: Test
  dataset: gist-960-euclidean.hdf5
  topk: 10
  step: 10
  jobs: 1
  loop: 2
  result: output.pth
runs:
  - name: faiss-gist960-gpu-ivfflat
    index_args:
      gpu: yes
      index: ivfflat
      nlist: 1024
    query_args:
      - nprobe: 1
      - nprobe: 16
      - nprobe: 256
  - name: faiss-gist960-gpu-ivfpq8
    index_args:
      gpu: yes
      index: ivfpq
      nlist: 1024
    query_args:
      - nprobe: 1
      - nprobe: 16
      - nprobe: 256

Explanation of the above config file:

- The default section is the default config for all benchmarks.
- The config keys are generally the same as the options for the annb-test command, e.g. index_factory is the same as --index-factory.
- You could define multiple benchmarks in the runs section, and each run config will override the default config. In this example we use gist-960-euclidean.hdf5 as the dataset for all benchmarks, with different index and query args for each benchmark. For index_args we use ivfflat (nlist=1024) and ivfpq (nlist=1024) as two benchmark series, and for query_args we use nprobe=1,16,256 for each benchmark. That means we will run 6 benchmarks in total; each series will run 3 benchmarks with different nprobe.
- The result will be saved to the output.pth file by the default setting. Actually, each benchmark series is saved to a separate file, so in this example we get two files: output-1.pth and output-2.pth. You could use annb-report to view them.

more options

You could use annb-test --help to see more options:

❯ annb-test --help

Check Benchmark Results

annb-report is used to view benchmark results as plain/csv text, or export them to a chart graphic:

annb-report --help

examples for viewing/exporting benchmark results

View benchmark results as plain text:

annb-report output.pth

View benchmark results as csv text:

annb-report output.pth --format csv

Export benchmark results to a chart graphic (multiple series):

annb-report output-1.pth output-2.pth --format png --output output.png
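As described above, the config file is passed with the --run-file option; the file name below is the example's own config.yaml:

annb-test --run-file config.yaml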
|
anncheck
|
anncheck

Check for missing annotations in Python. It's super fast compared to mypy; checking for missing
annotations in sympy gave the following results:

Benchmark | Query | Result | Execution time
---|---|---|---
MyPy | mypy --disallow-untyped-calls --disallow-untyped-defs --disallow-incomplete-defs sympy | Found 31037 errors in 1171 files (checked 1405 source files) | 88.32 s
Anncheck | anncheck sympy -r | Found 128059 variables missing annotation(s) in 1197 files | 47.28 s
Anncheck (compact mode) | anncheck sympy -r -c | Found 128059 variables missing annotation(s) in 1197 files | 4.47 s

It also has more options for annotation checking.

Installation

pip install anncheck

Usage

Usage: anncheck [OPTIONS] SRC...
Options:
-a, --include-asterisk Include variables starting with '*'. [default:
False]
-c, --compact Compact mode, displays file name and number of
missing annotations on a single line. [default:
False]
-d, --include-docstrings Anncheck doesn't check for functions inside
triple-quotes by default, set flag to do.
[default: False]
-e, --exclude-return Exclude return annotations. [default: False]
-n, --new-line Set flag to separate functions by an empty line.
[default: False]
-r, --recursive Set flag to recursively go into folders.
[default: False]
-m, --exclude-main Exclude functions defined in 'if __name__ ==
"__main__": ...' [default: False]
-p, --padding INTEGER Padding for line number. [default: 3]
--exclude-dunder Exclude dunder functions. [default: False]
--init-return Set flag to show if __init__ is missing a return
annotation. [default: False]
--match-function TEXT Only search functions matching regex. Note: Put
regex in quotes.
--match-variable TEXT Match variables with regex. Note: Put regex in
quotes.
--help Show this message and exit.
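Putting a few of these flags together, a hypothetical invocation (the path is a placeholder; the flags are the documented ones above):

# recursively check src/ in compact mode, skipping dunder functions
anncheck src/ -r -c --exclude-dunder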
|
annchor
|
ANNchor

A python library implementing ANNchor: k-nearest neighbour graph construction for slow metrics.

User Guide

For the user guide and documentation visit gchq.github.io/annchor

What is ANNchor?

ANNchor is a python library which constructs approximate k-nearest neighbour graphs for slow metrics.
The k-NN graph is an extremely useful data structure that appears in a wide variety of applications, for example: clustering, dimensionality reduction, visualisation and exploratory data analysis (EDA). However, if we want to use a slow metric, these k-NN graphs can take an exceptionally long time to compute.
Typical slow metrics include the Wasserstein metric (Earth Mover's distance) applied to images, and Levenshtein (Edit) distance on long strings, where the time taken to compute these distances is significantly longer than a typical Euclidean distance.

ANNchor uses Machine Learning methods to infer true distances between points in a data set from a variety of features derived from anchor points (aka landmarks/waypoints). In practice, this means that ANNchor does not make as many calls to the underlying metric as other state of the art k-NN graph generation techniques. This translates to quicker run times, especially when the metric is slow.

Results from ANNchor can easily be combined with other popular libraries in the Data Science community. In the docs we give examples of how to use ANNchor in an EDA pipeline alongside UMAP and HDBSCAN.

Installation

Install via PyPI with pip:

pip install annchor

Basic Usage

import numpy as np
import annchor

X =          # your data, list/np.array of items
distance =   # your distance function, distance(X[i],X[j]) = d

ann = annchor.Annchor(X,
                      distance,
                      n_anchors=15,
                      n_neighbors=15,
                      p_work=0.1)
ann.fit()
print(ann.neighbor_graph)

Examples

We demonstrate ANNchor by example, using Levenshtein distance on a data set of long strings.
This data set is bundled with the annchor package for convenience.

Firstly, we import some useful modules and load the data:

import os
import time
import numpy as np

from annchor import Annchor, compare_neighbor_graphs
from annchor.datasets import load_strings

strings_data = load_strings()
X = strings_data['X']
y = strings_data['y']
neighbor_graph = strings_data['neighbor_graph']

nx = X.shape[0]

for x in X[::100]:
    print(x[:50] + '...')

cuiojvfnseoksugfcbwzrcoxtjxrvojrguqttjpeauenefmkmv...
uiofnsosungdgrxiiprvojrgujfdttjioqunknefamhlkyihvx...
cxumzfltweskptzwnlgojkdxidrebonxcmxvbgxayoachwfcsy...
cmjpuuozflodwqvkascdyeosakdupdoeovnbgxpajotahpwaqc...
vzdiefjmblnumdjeetvbvhwgyasygrzhuckvpclnmtviobpzvy...
nziejmbmknuxdhjbgeyvwgasygrhcpdxcgnmtviubjvyzjemll...
yhdpczcjxirmebhfdueskkjjtbclvncxjrstxhqvtoyamaiyyb...
yfhwczcxakdtenvbfctugnkkkjbcvxcxjwfrgcstahaxyiooeb...
yoftbrcmmpngdfzrbyltahrfbtyowpdjrnqlnxncutdovbgabo...
tyoqbywjhdwzoufzrqyltahrefbdzyunpdypdynrmchutdvsbl...
dopgwqjiehqqhmprvhqmnlbpuwszjkjjbshqofaqeoejtcegjt...
rahobdixljmjfysmegdwyzyezulajkzloaxqnipgxhhbyoztzn...
dfgxsltkbpxvgqptghjnkaoofbwqqdnqlbbzjsqubtfwovkbsk...
pjwamicvegedmfetridbijgafupsgieffcwnmgmptjwnmwegvn...
ovitcihpokhyldkuvgahnqnmixsakzbmsipqympnxtucivgqyi...
xvepnposhktvmutozuhkbqarqsbxjrhxuumofmtyaaeesbeuhf...

We see a data set consisting of long strings. A closer inspection may indicate some structure, but it is not obvious at this stage.

We use ANNchor to find the 25-nearest neighbour graph. Levenshtein distance is included in Annchor, and can be called by using the string 'levenshtein'
(we could also define the levenshtein function beforehand and pass that to Annchor instead). We will specify that we want to do no more than 12% of the brute force work (since the data set is size 1600, brute force would be 1600x1599/2=1279200 calls to the metric, so we will make around ~153500 calls to the metric). To get accurate timing information, bear in mind that the first run will be slower than future runs due to the numba.jit compile time.

start_time = time.time()

ann = Annchor(X, 'levenshtein', n_neighbors=25, p_work=0.12)

ann.fit()
print('ANNchor Time: %5.3f seconds' % (time.time() - start_time))

# Test accuracy
error = compare_neighbor_graphs(neighbor_graph,
                                ann.neighbor_graph,
                                k)
print('ANNchor Accuracy: %d incorrect NN pairs (%5.3f%%)' % (error, 100*error/(k*nx)))

ANNchor Time: 34.299 seconds
ANNchor Accuracy: 0 incorrect NN pairs (0.000%)

Not bad!

We can continue to use ANNchor in a typical EDA pipeline. Let's find the UMAP projection of our data set:

from umap import UMAP
from matplotlib import pyplot as plt
# Extract the distance matrix
D = ann.to_sparse_matrix()
U = UMAP(metric='precomputed',n_neighbors=k-1)
T = U.fit_transform(D)
# T now holds the 2d UMAP projection of our data
# View the 2D projection with matplotlib
fig,ax = plt.subplots(figsize=(7,7))
ax.scatter(*T.T,alpha=0.1)
plt.show()

Finally the structure of the data set is clear to us! There are 8 clusters of two distinct varieties: filaments and clouds.

More examples can be found in the Examples subfolder.
Extra python packages will be required to run the examples.
These packages can be installed via:

pip install -r annchor/Examples/requirements.txt
|
anncolvar
|
Read more in
D. Trapl, I. Horvaćanin, V. Mareška, F. Özçelik, G. Unal and V. Spiwok:
anncolvar: Approximation of Complex Collective Variables by Artificial Neural
Networks for Analysis and Biasing of Molecular Simulations
<https://www.frontiersin.org/articles/10.3389/fmolb.2019.00025/> Front. Mol. Biosci. 2019, 6, 25 (doi: 10.3389/fmolb.2019.00025)

anncolvar

News

The current master version makes it possible to use the ANN module of the recent master version of Plumed.

Syntax

Collective variables by artificial neural networks:

usage: anncolvar [-h] [-i INFILE] [-p INTOP] [-c COLVAR] [-col COL]
[-boxx BOXX] [-boxy BOXY] [-boxz BOXZ] [-nofit NOFIT]
[-testset TESTSET] [-shuffle SHUFFLE] [-layers LAYERS]
[-layer1 LAYER1] [-layer2 LAYER2] [-layer3 LAYER3]
[-actfun1 ACTFUN1] [-actfun2 ACTFUN2] [-actfun3 ACTFUN3]
[-optim OPTIM] [-loss LOSS] [-epochs EPOCHS] [-batch BATCH]
[-o OFILE] [-model MODELFILE] [-plumed PLUMEDFILE]
[-plumed2 PLUMEDFILE2]
Artificial neural network learning of collective variables of molecular
systems, requires numpy, keras and mdtraj
optional arguments:
-h, --help show this help message and exit
-i INFILE Input trajectory in pdb, xtc, trr, dcd, netcdf or mdcrd,
WARNING: the trajectory must be 1. must contain only atoms
to be analyzed, 2. must not contain any periodic boundary
condition issues!
-p INTOP Input topology in pdb, WARNING: the structure must be 1.
centered in the PBC box and 2. must contain only atoms
to be analyzed!
-c COLVAR Input collective variable file in text format, must
contain the same number of lines as frames in the
trajectory
-col COL The index of the column containing collective variables
in the input collective variable file
-boxx BOXX Size of x coordinate of PBC box (from 0 to set value in
nm)
-boxy BOXY Size of y coordinate of PBC box (from 0 to set value in
nm)
-boxz BOXZ Size of z coordinate of PBC box (from 0 to set value in
nm)
-nofit NOFIT Disable fitting, the trajectory must be properly fited
(default False)
-testset TESTSET Size of test set (fraction of the trajectory, default =
0.1)
-shuffle SHUFFLE Shuffle trajectory frames to obtain training and test
set (default True)
-layers LAYERS Number of hidden layers (allowed values 1-3, default =
1)
-layer1 LAYER1 Number of neurons in the first encoding layer (default =
256)
-layer2 LAYER2 Number of neurons in the second encoding layer (default
= 256)
-layer3 LAYER3 Number of neurons in the third encoding layer (default =
256)
-actfun1 ACTFUN1 Activation function of the first layer (default =
sigmoid, for options see keras documentation)
-actfun2 ACTFUN2 Activation function of the second layer (default =
linear, for options see keras documentation)
-actfun3 ACTFUN3 Activation function of the third layer (default =
linear, for options see keras documentation)
-optim OPTIM Optimizer (default = adam, for options see keras
documentation)
-loss LOSS Loss function (default = mean_squared_error, for options
see keras documentation)
-epochs EPOCHS Number of epochs (default = 100, >1000 may be necessary
for real life applications)
-batch BATCH Batch size (0 = no batches, default = 256)
-o OFILE Output file with original and approximated collective
variables (txt, default = no output)
-model MODELFILE Prefix for output model files (experimental, default =
no output)
-plumed PLUMEDFILE Output file for Plumed (default = plumed.dat)
-plumed2 PLUMEDFILE2 Output file for Plumed with ANN module (default =
                     plumed2.dat)

Introduction

Biased simulations, such as metadynamics, use a predefined set of parameters known
as collective variables. An artificial bias force is applied on collective variables
to enhance sampling. There are two conditions for a parameter to be applied as
a collective variable. First, the value of the collective variables can be calculated
solely from atomic coordinates. Second, the force acting on collective variables
can be converted to the force acting on individual atoms. In other words, it
is possible to calculate the first derivative of the collective variables with
respect to atomic coordinates. Both calculations must be fast enough, because
they must be evaluated in every step of the simulation.

There are many potential collective variables that cannot be easily calculated.
It is possible to calculate the collective variable for hundreds or thousands of
structures, but not for millions of structures (which is necessary for nanosecond
long simulations). anncolvar can approximate such collective variables using
a neural network.

Installation

You have to choose and install one of the keras backends, such as TensorFlow, Theano or
CNTK. For this follow one of these links: TensorFlow, Theano, CNTK.

Install numpy and cython by PIP:

pip install numpy cython

Next, install anncolvar by PIP:

pip install anncolvar

If you use Anaconda type:

conda install -c spiwokv anncolvar

Usage

A series of representative structures (hundreds or more) with pre-calculated values
of the collective variable is used to train the neural network. The user can specify
the input set of reference structures (-i) in the form of a trajectory in pdb, xtc,
trr, dcd, netcdf or mdcrd. The trajectory must contain only atoms to be analyzed
(for example only non-hydrogen atoms). The trajectory must not contain any periodic
boundary condition issues. Both conversions can be made by molecular dynamics
simulation packages, for example by gmx trjconv. It is not necessary to fit
frames to a reference structure. It is possible to switch fitting off by -nofit True.

It is necessary to supply an input topology in PDB. This is a structure used
as a template for fitting. It is also used to define a box. This box must be large
enough to fit the molecule in all frames of the trajectory. It should not be too
large because this suppresses non-linearity in the neural network. When the user
decides to use a 3x3x3 nm box it is necessary to place the molecule to be centered
at coordinates (1.5,1.5,1.5) nm. In Gromacs it is possible to use:

gmx editconf -f mol.pdb -o reference.pdb -c -box 3 3 3

It must also contain only atoms to be analyzed. The size of the box can be specified
by parameters -boxx, -boxy and -boxz (in nm).

The last input file is the collective variable file. It is a space-separated text
file with the same number of lines as the number of frames in the input trajectory.
The index of the column can be specified by -col (e.g. -col 2 for the second
column of the file).

The option -testset can control the fraction of the trajectory used as
the test set. For example -testset 0.1 means that 10 % of input data is used
as the test set and 90 % as the training set. The option -shuffle False causes
the first 90 % to be used as the training set and the remaining 10 % as the test set.
Otherwise frames are shuffled before separation into the training and test set.

The architecture of the neural network is controlled by multiple parameters.
The input layer contains 3N neurons (where N is the number of atoms). The number
of hidden layers is controlled by -layers. This can be 1, 2 or 3. For a higher
number of layers contact the authors. The number of neurons in the first, second and
third layer is controlled by -layer1, -layer2 and -layer3. It is useful
to use numbers of neurons equal to powers of 2 (32, 64, 128 etc.). Huge numbers
of neurons can cause the program to be slow or run out of memory. Activation
functions of neurons can be controlled by -actfun1, -actfun2 and -actfun3.
Any activation function supported by keras can be used.

The optimizer used in the training process can be controlled by -optim. The
default ADAM optimizer (-optim adam) works well. The loss function can be
controlled by -loss. The default -loss mean_squared_error works well. The
number of epochs can be controlled by -epochs. The default value (100) is
quite little; usually >1000 is necessary for real life applications. The batch
size can be controlled by -batch (-batch 0 for no batches, default is 256).

Output is written into the text file -o. It contains the approximated and
the original values of the collective variable. The model can be stored in a set
of text files (try -model). The input file is printed into the file controlled
by -plumed (by default plumed.dat). This file can be directly used to calculate
the evolution of the collective variable by plumed driver or by a Plumed-patched
molecular dynamics engine. To use the collective variable in enhanced sampling
(for example metadynamics) it is necessary to add a suitable keyword (for example
METAD).
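To make the workflow concrete, a hypothetical invocation combining the options documented above (all file names are placeholders):

anncolvar -i traj.xtc -p reference.pdb -c colvar.txt -col 2 \
          -boxx 3 -boxy 3 -boxz 3 \
          -epochs 2000 -o approx.txt -plumed plumed.dat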
|
anncorra
|
Tags of AnnCorra

The Indian Language Machine Translation (ILMT) project has taken up the task of annotating corpora (AnnCorra) of several Indian languages and came up with tags which have been defined for the tagging schemes for POS (part of speech) tagging.

This repository explains the POS (Part Of Speech) tags along with examples.

Requirements

The package requires the following to run:

python (preferably version 3+)

Installation

Use the package manager pip to install anncorra:

pip install anncorra

or

git clone https://github.com/kuldip-barot/anncorra.git
cd anncorra
python setup.py install

Usage

Import the package after installation.

>>> import anncorra
>>> anncorra.explain('NN')

The output of the above command:

POS Tags : NN
Full form : Noun
Description : The NN tag set makes a distinction between noun singular (NN) and noun plural (NNS).
Example :
yaha bAta galI_NN galI_RDP meM phEla gayI
'this' 'talk' 'lane' 'lane' 'in' 'spread' 'went'
“The word was spread in every lane”.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

References

AnnCorra: Annotating Corpora; Guidelines For POS And Chunk Annotation For Indian Languages
|
anndata
|
anndata - Annotated data

anndata is a Python package for handling annotated data matrices in memory and on disk, positioned between pandas and xarray. anndata offers a broad range of computationally efficient features including, among others, sparse data support, lazy operations, and a PyTorch interface.

- Discuss development on GitHub.
- Read the documentation.
- Ask questions on the scverse Discourse.
- Install via pip install anndata or conda install anndata -c conda-forge.
- See Scanpy's documentation for usage related to single cell data. anndata was initially built for Scanpy.

anndata is part of the scverse project (website, governance) and is fiscally sponsored by NumFOCUS.
Please consider making a tax-deductible donation to help the project pay for developer time, professional services, travel, workshops, and a variety of other needs.

Citation

If you use anndata in your work, please cite the anndata pre-print as follows:

anndata: Annotated data
Isaac Virshup, Sergei Rybakov, Fabian J. Theis, Philipp Angerer, F. Alexander Wolf
bioRxiv 2021 Dec 19. doi: 10.1101/2021.12.16.473007.

You can cite the scverse publication as follows:

The scverse project provides a computational ecosystem for single-cell omics data analysis
Isaac Virshup, Danila Bredikhin, Lukas Heumos, Giovanni Palla, Gregor Sturm, Adam Gayoso, Ilia Kats, Mikaela Koutrouli, Scverse Community, Bonnie Berger, Dana Pe’er, Aviv Regev, Sarah A. Teichmann, Francesca Finotello, F. Alexander Wolf, Nir Yosef, Oliver Stegle & Fabian J. Theis
Nat Biotechnol. 2023 Apr 10. doi: 10.1038/s41587-023-01733-8.
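This README contains no code, so as orientation here is a minimal sketch of the core AnnData workflow (annotated matrix in memory, round trip to disk) using the library's public API; the data are random placeholders:

import numpy as np
import pandas as pd
import anndata as ad
from scipy.sparse import csr_matrix

# 100 cells x 50 genes, stored sparse
X = csr_matrix(np.random.poisson(1, size=(100, 50)).astype(np.float32))
adata = ad.AnnData(
    X,
    obs=pd.DataFrame(index=[f"cell_{i}" for i in range(100)]),
    var=pd.DataFrame(index=[f"gene_{i}" for i in range(50)]),
)
adata.obs["condition"] = np.random.choice(["ctrl", "treated"], size=adata.n_obs)

adata.write_h5ad("example.h5ad")       # persist to disk
adata2 = ad.read_h5ad("example.h5ad")  # read it back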
|
anndata2ri
|
AnnData ↭ SingleCellExperiment

RPy2 converter from AnnData to SingleCellExperiment and back. (For details about conversion see the docs.)

You can for example use it to process your data using both Scanpy and Seurat, as described in this example notebook.

Installation

pip install anndata2ri
# or
conda install -c bioconda anndata2ri

Troubleshooting

If you have problems installing or importing anndata2ri,
please make sure you first:

1. Check the stack trace: if the error happens while installing or importing a dependency such as rpy2, report your problem in that project's bug tracker.
2. Search the issues. At the time of writing 17 of the 29 bugs (60%) are invalid or rpy2 bugs / install problems.
3. Make sure you have a compatible R version: rpy2 requires R ≥ 3.6.

Usage from Python

Either use the converter manually …

import anndata2ri
from rpy2.robjects import r
from rpy2.robjects.conversion import localconverter

with localconverter(anndata2ri.converter):
    adata = r('as(some_data, "SingleCellExperiment")')

… or activate it globally:

import anndata2ri
from rpy2.robjects import r

anndata2ri.activate()
adata = r('as(some_data, "SingleCellExperiment")')

Usage from Jupyter

Activate the conversion before you load the extension:

import anndata2ri
anndata2ri.activate()
%load_ext rpy2.ipython

Now you can move objects from Python to R …

import scanpy.datasets as scd

adata_paul = scd.paul15()

%%R -i adata_paul
adata_paul
# class: SingleCellExperiment ...

… and back:

%%R -o adata_allen
data(allen, package='scRNAseq')
adata_allen <- as(allen, 'SingleCellExperiment')
print(adata_allen)

# AnnData object with ...
|
anndataks
|
Kolmogorov-Smirnov test for two AnnData objects.

Development: https://github.com/iosonofabio/anndata_kolmogorov_smirnov
|
anndata-modified
|
anndata - Annotated Data

Install from PyPI via pip install anndata.

Read the documentation.
|
anndata-sdk
|
anndata_sdk

Helper functions for AnnData
|
anndataview
|
No description available on PyPI.
|
anndb
|
No description available on PyPI.
|
anndb-api
|
No description available on PyPI.
|
anndi
|
UNKNOWN
|
annea-bar
|
Bar

An example lib
|
annea-foo
|
Foo

An example lib with a dependency on another lib
|
anneal
|
Anneal

This is a python package for simulated annealing (and quenching) in all its
many guises. The design decisions are described in the corresponding ArXiV preprint.

Development

pdm is used throughout.

micromamba create -f environment.yml
micromamba activate anneal-dev

Contributing

All contributions are welcome!

- We follow the NumPy commit guidelines.
- Please run pdm all and ensure no linting or test errors exist.
- Co-author commits generously.

License

MIT.
|
an_nester
|
UNKNOWN
|
annet
|
No description available on PyPI.
|
annex
|
Summary

Annex provides assistance with developing plugin-based tools.

With Annex you can load and reload plugins from various python modules
without the requirement that they exist on the PYTHONPATH.

Example Usage

In your project you would define a base class from which all plugins for your
project would subclass.

base_plugin.py

class BaseTestPlugin(object):
    def run(self, *args, **kwargs):
        raise NotImplementedError()

example_plugin.py

from base_plugin import BaseTestPlugin

class PrinterPlugin(BaseTestPlugin):
    def run(self, *args, **kwargs):
        print(args, kwargs)

foo.py

from base_plugin import BaseTestPlugin
from annex import Annex
plugins = Annex(BaseTestPlugin, ["/path/to/plugins"])
for plugin in plugins:
plugin.run("foo", bar="baz")
|
annex-dataproxy
|
annex_dataproxy

This git annex external remote extension uses AnnexRemote to
talk to EBRAINS Data Proxy so that you can use EBRAINS Collaboratory buckets as
Datalad siblings.

Install

pip install annex_dataproxy

Usage

create a dataset

$ datalad create pdfdata
[INFO] Creating a new annex repo at /tmp/pdfdata
[INFO] scanning for unlocked files (this may take some time)
create(ok): /tmp/pdfdata (dataset)
$ cd pdfdata/
$ rsync -ra $HOME/PDFs/ ./
$ datalad save
add(ok): 1107.0903MontbrioPazo-StuartLandau.pdf (file) [199 similar messages have been suppressed; disable with datalad.ui.suppress-similar-results=off]
save(ok): . (dataset)
action summary:
  add (ok: 209)
  save (ok: 1)

tell the dataproxy remote our token and what bucket & prefix to use

$ export EBRAINS_TOKEN=$EBRAINS_TOKEN
$ export DATAPROXY_PATH=insference/pdfs

create the annex remote and datalad push --to it

$ git annex initremote pdfs type=external externaltype=dataproxy encryption=none
initremote pdfs ok
(recording state in git...)
$ datalad push --to pdfs
copy(ok): 15009.full.pdf (file) [to pdfs...]
[193 similar messages have been suppressed; disable with datalad.ui.suppress-similar-results=off]
action summary:
  copy (notneeded: 6, ok: 203)

Status

- sloppy proof of concept
- git annex testremote passes
- PyPI package for easier install
- better mechanism for specifying bucket & prefix
|
annexlang
|
The Annex language is a markup language for protocols. Its current
(only) feature is to create protocol descriptions that can be used in
LaTeX documents.

Annex is based on YAML. As of now, there is no further description of
the language except for the example files which make use of most
language features.

To convert a file from Annex to TeX, just run:

annex-convert infile.yml outfile.tex
|
annexremote
|
AnnexRemote

Helper module to easily develop special remotes for git annex.
AnnexRemote handles all the protocol stuff for you, so you can focus on the remote itself.
It implements the complete external special remote protocol and fulfils all specifications regarding whitespaces etc. This is ensured by an extensive test suite.

Documentation

(Also have a look at the examples and git-annex-remote-googledrive, which is based on AnnexRemote.)

Getting started

Prerequisites

You need python3 installed on your system.

Installing

pip3 install annexremote

Running the tests

If you want to run the tests, copy the content of the tests folder to the same location as annexremote.py.
Then use a test discovery like pytest to run them.

Usage

Import the necessary classes:

from annexremote import Master
from annexremote import SpecialRemote
from annexremote import RemoteError

Now create your special remote class. It must subtype SpecialRemote and implement at least the 6 basic methods:

class MyRemote(SpecialRemote):
    def initremote(self):
        # initialize the remote, eg. create the folders
        # raise RemoteError if the remote couldn't be initialized

    def prepare(self):
        # prepare to be used, eg. open TCP connection, authenticate with the server etc.
        # raise RemoteError if not ready to use

    def transfer_store(self, key, filename):
        # store the file in `filename` to a unique location derived from `key`
        # raise RemoteError if the file couldn't be stored

    def transfer_retrieve(self, key, filename):
        # get the file identified by `key` and store it to `filename`
        # raise RemoteError if the file couldn't be retrieved

    def checkpresent(self, key):
        # return True if the key is present in the remote
        # return False if the key is not present
        # raise RemoteError if the presence of the key couldn't be determined, eg. in case of connection error

    def remove(self, key):
        # remove the key from the remote
        # raise RemoteError if it couldn't be removed
        # note that removing a not existing key isn't considered an error

In your main function, link your remote to the master class and initialize the protocol:

def main():
    master = Master()
    remote = MyRemote(master)
    master.LinkRemote(remote)
    master.Listen()

if __name__ == "__main__":
    main()

Now save your program as git-annex-remote-$something and make it executable:

chmod +x git-annex-remote-$something

(You'll need the shebang line #!/usr/bin/env python3)

That's it. Now you've created your special remote.

Export remotes

Import and subtype ExportRemote instead of SpecialRemote:

# ...
from annexremote import ExportRemote

class MyRemote(ExportRemote):
    # implement the remote methods just like in the above example and then additionally:

    def transferexport_store(self, key, local_file, remote_file):
        # store the file located at `local_file` to `remote_file` on the remote
        # raise RemoteError if the file couldn't be stored

    def transferexport_retrieve(self, key, local_file, remote_file):
        # get the file located at `remote_file` from the remote and store it to `local_file`
        # raise RemoteError if the file couldn't be retrieved

    def checkpresentexport(self, key, remote_file):
        # return True if the file `remote_file` is present in the remote
        # return False if not
        # raise RemoteError if the presence of the file couldn't be determined, eg. in case of connection error

    def removeexport(self, key, remote_file):
        # remove the file in `remote_file` from the remote
        # raise RemoteError if it couldn't be removed
        # note that removing a not existing key isn't considered an error

    def removeexportdirectory(self, remote_directory):
        # remove the directory `remote_directory` from the remote
        # raise RemoteError if it couldn't be removed
        # note that removing a not existing directory isn't considered an error

    def renameexport(self, key, filename, new_filename):
        # move the remote file in `name` to `new_name`
        # raise RemoteError if it couldn't be moved

Logging

This module includes a StreamHandler to send log records to git annex via the special remote protocol (using DEBUG). You can use it like this:

...
import logging
...

def main():
    master = Master()
    remote = MyRemote(master)
    master.LinkRemote(remote)

    logger = logging.getLogger()
    logger.addHandler(master.LoggingHandler())

    master.Listen()

if __name__ == "__main__":
    main()

License

This project is licensed under GPLv3 - see the LICENSE file for details
|
annextimelog
|
⚠️ This tool is still in development. The most basic time tracking features (recording, deletion, editing, search) as well as syncing are implemented though.

annextimelog - ⏱️ Git Annex-backed Time Tracking

This is a brainstorm for a Git Annex-backed time tracker.
The idea originated across some of my Mastodon threads:

- https://fosstodon.org/@nobodyinperson/109596495108921683
- https://fosstodon.org/@nobodyinperson/109159397807119512
- https://fosstodon.org/@nobodyinperson/111591979214726456

The gist is that I was (and still am) unhappy with the existing time tracking solutions. I worked with hledger's timeclock and timewarrior each for quite some time and built my own workflow and scripts around them.

✅ Requirements

Over the years, the below features turned out to be my personal requirements for a time-tracking system (TL;DR: easy and intuitive recording, hassle-free syncing, data export for further analysis).
Here is a table comparing annextimelog with timewarrior and hledger timeclock (✅ = feature available, 🟡 = partly available, ❌ = not available):

feature | timewarrior | hledger timeclock | annextimelog
---|---|---|---
precise start and end times | ✅ | ✅ | ✅ as git-annex metadata
tracking of overlapping/simultaneous periods | ❌ | 🟡 (separate files) | ✅ backend can do it
nice, colourful, graphical summary | ✅ | 🟡 | ✅ with Python rich, more planned
plain text data storage | ✅ | ✅ | 🟡 buried in git-annex branch
git-friendly, merge conflict free data format | 🟡¹ | 🟡¹ | ✅ git-annex's own merge strategy
arbitrary tags attachable to tracked periods | ✅ | 🟡 hledger tags² | ✅ just git-annex metadata
arbitrary notes attachable to tracked periods | 🟡³ | 🟡 hledger tags² | ✅ just git-annex metadata
tags can have values | ❌ | ✅ hledger tags² | ✅ just git-annex metadata
files attach-/linkable to tracked periods | ❌ | 🟡 path as file: tag | 🟡 annexed files, linking is planned
cli to start, stop, edit, etc. tracked periods | ✅⁴ | ❌ own scripts needed | 🟡 recording and editing
plugin system | 🟡⁵ | 🟡⁶ (hledger's own) | ❌ git-style plugin system planned
data export to common format | ✅ (JSON) | ✅ (CSV, JSON) | ✅ as timeclock, JSON, cli commands
syncing functionality built-in | ❌ | ❌ | ✅ git-annex's purpose is syncing
multi-user support | ❌ | ❌ | ✅ nothing in the way, just use tags

¹ last line is always modified, merge conflicts can arise when working from different machines
² hledger tags have limitations, e.g. no spaces, colons, commas, etc.
³ timewarrior annotations can't contain newlines for example. I wrote an extension to edit your annotation in your $EDITOR and optionally GPG-encrypt it, which lets you add newlines. Quite an inconvenience.
⁴ timewarrior's cli has some nasty inconveniences (e.g. no shortcut for 'yesterday', must painfully type out the full date, no intelligence to operate only on yesterday, gets confused and errors out in certain combinations of start/end times, etc…)
⁵ timewarrior extensions (here mine) are just fed the data via STDIN, not other command-line arguments. Not as useful as the git-style plugin system.
⁶ for the analysis part, hledger plugins can be used. But as there is no actual cli to manage the data, there's no plugin system for that.

🛠️ Implementation

To learn more about how annextimelog works under the hood with git-annex as backend, have a look at doc/implementation.

📦 Installation

You can run this tool if you have nix installed:

# drop into a temporary shell with the command available
nix shell gitlab:nobodyinperson/annextimelog

# install it
nix profile install gitlab:nobodyinperson/annextimelog

On Arch Linux you can install from the AUR with your favorite helper, or directly with pacman from this user repository.

# use an AUR helper to install
paru -S annextimelog

Otherwise, you can install it like any other Python package, e.g. with pip or better pipx:

pipx install annextimelog

# latest development version
pipx install git+https://gitlab.com/nobodyinperson/annextimelog

Note that in this case you will need to install git-annex manually.

Any of the above makes the annextimelog (or atl) command available.

❓ Usage

usage: annextimelog [-h] [--no-config] [-c key=value] [--repo REPO] [-n] [--force] [-v] [-q]
                    [-O {json,console,timeclock,cli,rich}] [--version | --version-only]
                    {test,git,config,sync,sy,track,tr,delete,del,rm,remove,summary,su,ls,list,find,search} ...
⏱️ Time tracker based on Git Annex

options:
  -h, --help            show this help message and exit
  --no-config           Ignore config from git
  -c key=value          Set a temporary config key=value. If not present, 'annextimelog.' will be prepended to the key.
  --force               Just do it. Ignore potential data loss.
  --version             show version information and exit
  --version-only        show only version and exit

Data:
  --repo REPO           Backend repository to use. Defaults to $ANNEXTIMELOGREPO, $ANNEXTIMELOG_REPO or $XDG_DATA_HOME/annextimelog (currently: /tmp/annextimelog)
  -n, --dry-run         don't actually store, modify or delete events in the repo. Useful for testing what exactly commands would do. Note that the automatic repo creation is still performed.

Output:
  Options changing output behaviour

  -v, --verbose         verbose output. More -v ⮕ more output
  -q, --quiet           less output. More -q ⮕ less output
  -O {json,console,timeclock,cli,rich}, --output-format {json,console,timeclock,cli,rich}
                        Select output format. Defaults to 'console'.

Subcommands:
  {test,git,config,sync,sy,track,tr,delete,del,rm,remove,summary,su,ls,list,find,search}
    test                run test suite
    git                 Access the underlying git repository
    config              Convenience wrapper around 'atl git config [annextimelog.]key [value]', e.g. 'atl config emojis false' will set the annextimelog.emojis config to false.
    sync (sy)           sync data
    track (tr)          record a time period
    delete (del, rm, remove)
                        delete an event
    summary (su, ls, list, find, search)
                        show a summary of tracked periods
🛠️ Usage

Logging events:

> atl tr work for 4h @home with client=smallcorp on project=topsecret
> atl tr 10 - 11 @doctor
> atl tr y22:00 - 30min ago sleep @home quality=meh
> atl -vvv tr ...   # debug problems

Note: Common prepositions like 'with', 'about', etc. are ignored. See the full list with:

> python -c 'from annextimelog.token import Noop; print(Noop.FILLERWORDS)'

Listing events:
> atl
> atl ls week
> atl -O json ls -a   # dump all data as JSON
> atl -O timeclock ls -a | hledger -f timeclock:- bal --daily   # analyse with hledger

Removing events by ID:

> atl rm O3YzvZ4m

Syncing:

# add a git remote of your choice
> atl git remote add origin [email protected]:you/yourrepo
# sync up
> atl sync
Configuration
> atl -c key=value ...            # temporarily set config
> atl config key value            # permanently set config
> atl config commit ...           # whether events should be committed upon modification. Setting this to false can improve performance but will reduce granularity to undo changes.
> atl config dryrun ...           # equivalent of -n / --dry-run
> atl config emojis ...           # whether emojis should be shown in pretty-formatted event output
> atl config fast ...             # setting this to false will cause annextimelog to be more sloppy (and possibly faster) by leaving out some non-critical cleanup steps.
> atl config longlist ...         # equivalent of specifying --long (e.g. atl ls -l)
> atl config outputformat ...     # equivalent of -O / --output-format
> atl config weekstartssunday ... # whether the week should start on Sunday instead of Monday (the default)

🛠️ Development

This project uses poetry, so you can run the following in this repository to get into a development environment:

poetry install
poetry shell
# now you're in a shell with everything set up

Other:

# Auto-run mypy when file changes:
just watch-mypy

# Auto-run tests when file changes:
just watch-test

# Test how a sequence of command-line args is interpreted as event metadata
just test-tokens work @home note=bla myfield+=one,two,three 2h ago until now

# Run tests against a different Python version
just test-with-python-version 3.10
|
annfunniest
|
Failed to fetch description. HTTP Status Code: 404
|
ann-gsea
|
annsig

An API for annotating single-cell AnnData with Molecular Signatures from MSigDB.

To use:

Step 1. Install

To install the latest release from PyPI:

pip install annsig

Alternatively, install the development version:

git clone https://github.com/mvinyard/annsig
cd annsig; pip install -e .

Step 2. Register and download MSigDB

link

Developed with: "msigdb.v7.4.symbols.gmt" (currently the latest version)

Step 3. Example usage

import ann_gsea as gsea

db = gsea.MSigDB()
db.load()
db.search()
db.fetch()

Instructions from the MSigDB website on how to cite their resource:
|
anngtf
|
anngtf

Lift annotations from a gtf to your adata object.

Installation

To install via pip:

pip install anngtf

To install the development version:

git clone https://github.com/mvinyard/anngtf.git
cd anngtf; pip install -e .

Example usage

Parsing a .gtf file

import anngtf

gtf_filepath = "/path/to/ref/hg38/refdata-cellranger-arc-GRCh38-2020-A-2.0.0/genes/genes.gtf"

If this is your first time using anngtf, run:

gtf = anngtf.parse(path=gtf_filepath, genes=False, force=False, return_gtf=True)

Running this function will create two .csv files from the given .gtf file - one containing all feature types and one containing only genes. Both of these files are smaller than a .gtf and can be loaded into memory much faster using pandas.read_csv() (a shortcut is implemented in the next function). Additionally, this function leaves a paper trail for anngtf to find the newly-created .csv files again in the future such that one does not need to pass a path to the gtf.

In the scenario in which you've already run the above function, run:

gtf = anngtf.load()  # no path necessary!

Updating the adata.var table

import anndata
import anngtf

adata = anndata.read_h5ad("/path/to/singlecell/data/adata.h5ad")
gtf = anngtf.load(genes=True)
anngtf.add(adata, gtf)

Since the anngtf distribution already knows where the .csv / .gtf files are, we could directly annotate adata without first specifying gtf as a DataFrame, saving a step, but I think it's more user-friendly to see what each one looks like first.

Working advantage

Let's take a look at the time difference of loading a .gtf into memory as a pandas.DataFrame:

import anngtf
import gtfparse
import time

start = time.time()
gtf = gtfparse.read_gtf("/home/mvinyard/ref/hg38/refdata-cellranger-arc-GRCh38-2020-A-2.0.0/genes/genes.gtf")
stop = time.time()
print("baseline loading time: {:.2f}s".format(stop - start), end='\n\n')

start = time.time()
gtf = anngtf.load()
stop = time.time()
print("anngtf loading time: {:.2f}s".format(stop - start))

baseline loading time: 87.54s
anngtf loading time: 12.46s

~7x speed improvement.

Note: This is not meant to criticize or comment on anything related to gtfparse - in fact, this library relies solely on gtfparse for the actual parsing of a .gtf file into memory as a pandas.DataFrame.
|
anngyan-prob
|
No description available on PyPI.
|
annhub-python
|
ANNHUB Python library

Main backend module, which is used for developing web-app logic and deploying an AI model with just a few lines of code.

Usage

We developed a RESTful web controller as a reusable library shared between many AI models, with these functionalities:

- Input model
- Define data input
- Logging
- Exception handler

Installing

Delivered and versioned as a PyPI package.
Install and update using pip:

$ pip install annhub-python

A simple example

from annhub_python import PyAnn

app = PyAnn()

# Define the expected AI model
app.set_model("D:\ARI\ANSCENTER\TrainedModel_c++.ann")

# Define which model ID will be used
app.set_model_id(5122020)

# Define the input corresponding to the chosen model
app.set_input_length(4)

if __name__ == "__main__":
    app.run()

API

The library exposes two APIs, health checking and predicting, as well as a Swagger UI for API documentation.

GET: /api/v1/health
POST: /api/v1/predict

Detailed Example

Iris Prediction server

In this example, we illustrate how to develop a server using an AI model powered by ANNHUB in only a few steps. You can use this link to access our code.
The procedure for using our library to serve an AI model is as follows:

1. Put a trained model into your project folder.
2. Create a main.py file, where some key information will be determined, such as model path, model ID, input length, ...
3. Create a Dockerfile to containerize your application. (We recommend reusing our Dockerfile.)
4. Create a docker-compose.yml file, which will construct your docker container with a simple command line. (We also recommend following our instruction.)
5. Run your application with a simple command line: docker-compose up -d

With default settings, your AI can be used at http://localhost:8080. You can access http://localhost:8080/docs to use your Swagger UI documentation.
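For illustration, a request against the prediction endpoint might look like the sketch below; the JSON payload shape is an assumption inferred from set_input_length(4), not documented above:

# hypothetical request; payload shape assumed from set_input_length(4)
curl -X POST http://localhost:8080/api/v1/predict \
     -H "Content-Type: application/json" \
     -d '{"data": [5.1, 3.5, 1.4, 0.2]}'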
|
annict
|
python-annict

Annict API wrapper for Python.

python-annict officially supports Python 3.6 or higher.

Installation

pip install annict

Quickstart

Authentication

Acquire the URL for the authentication code:

>>> from annict.auth import OAuthHandler
>>> handler = OAuthHandler(client_id='Your client ID', client_secret='Your client secret')
>>> url = handler.get_authorization_url(scope='read write')
>>> print(url)

Open the browser and access the URL you obtained; the authentication code will be displayed.
Pass it to handler.authenticate() to get the access token:

>>> handler.authenticate(code='Authentication code')
>>> print(handler.get_access_token())

Note that this authentication flow is unnecessary when issuing a personal access token on Annict and using it.

See: Annict API: personal access tokens can now be issued (個人用アクセストークンが発行できるようになりました)

Hello world

>>> from annict.api import API
>>> annict = API('Your access token')
>>> results = annict.works(filter_title="Re:ゼロから始める異世界生活")
>>> print(results[0].title)
Re:ゼロから始める異世界生活

Cache

For now, we do not have our own cache system. However, caching is important to reduce the load on the Annict API, so I introduce a cache plugin for the requests library called requests_cache.

Install with pip:

pip install requests_cache

requests_cache is very easy to use:

>>> import requests_cache
>>> requests_cache.install_cache(cache_name='annict', backend='memory', expire_after=300)
>>> # At first, from the Annict API.
>>> api.me()
>>> # You can get results from the cache, if it is within the expiration time.
>>> api.me()

For more information: Requests-cache documentation

Documentation

- This library's documentation
- Annict Docs (Japanese)
|
annieslasso
|
No description available on PyPI.
|
annif
|
Annif is an automated subject indexing toolkit. It was originally created as
a statistical automated indexing tool that used metadata from the Finna.fi discovery interface as a training corpus.

This repo contains a rewritten production version of Annif based on the prototype. It is a work in progress, but
already functional for many common tasks.

Finto AI is a service based on Annif; see the source code for Finto AI.

Basic install

Annif is developed and tested on Linux. If you want to run Annif on Windows or Mac OS, the recommended way is to use Docker (see below) or a Linux virtual machine.

You will need Python 3.8+ to install Annif.

The recommended way is to install Annif from PyPI into a virtual environment.

python3 -m venv annif-venv
source annif-venv/bin/activate
pip install annif

You will also need NLTK data files:

python -m nltk.downloader punkt

Start up the application:

annif

See Getting Started in the wiki for more details.

Shell completions

Annif supports tab-key completion in bash, zsh and fish shells for commands and options,
and for project id, vocabulary id and path parameters.

To enable the completion support in your current terminal session, use the annif completion command with the option according to your shell to produce the completion script and source it. For example, run:

source <(annif completion --bash)

To enable the completion support in all new sessions, first add the completion script in your home directory:

annif completion --bash > ~/.annif-complete.bash

Then make the script automatically sourced for new terminal sessions by adding the following to your ~/.bashrc file (or some alternative startup file):

source ~/.annif-complete.bash

For details and usage for other shells see the Click documentation.

Docker install

You can use Annif as a pre-built Docker container. Please see the wiki documentation for details.

Development install

A development version of Annif can be installed by cloning the GitHub repository. Poetry is used for managing dependencies and the virtual environment for the development version.

See CONTRIBUTING.md for information on unit tests, code style, development flow etc. - details that are useful when participating in Annif development.

Installation and setup

Clone the repository.

Switch into the repository directory.

Install pipx and Poetry if you don't have them. First pipx:

python3 -m pip install --user pipx
python3 -m pipx ensurepath

Open a new shell, and then install Poetry:

pipx install poetry

Poetry can also be installed without pipx: check the Poetry documentation.

Create a virtual environment and install dependencies:

poetry install

By default development dependencies are included. Use option -E to install dependencies for selected optional features (-E "extra1 extra2" for multiple extras), or install all of them with --all-extras. By default the virtual environment directory is not under the project directory, but there is a setting for selecting this.

Enter the virtual environment:

poetry shell

You will also need NLTK data files:

python -m nltk.downloader punkt

Start up the application:

annif

Getting help

Many resources are available:

- Usage documentation in the wiki
- Annif tutorial for learning to use Annif
- annif-users discussion forum
- Internal API documentation on ReadTheDocs
- annif.org project web site

Publications / How to cite

Two articles about Annif have been published in peer-reviewed Open Access
journals. The software itself is also archived on Zenodo and
has a citable DOI.

Citing the software itself

See "Cite this repository" in the details of the repository.

Annif articles

Suominen, O.; Inkinen, J.; Lehtinen, M., 2022.
Annif and Finto AI: Developing and Implementing Automated Subject Indexing.
JLIS.It, 13(1), pp. 265–282. URL:
https://www.jlis.it/index.php/jlis/article/view/437

See BibTex:

@article{suominen2022annif,
title={Annif and Finto AI: Developing and Implementing Automated Subject Indexing},
author={Suominen, Osma and Inkinen, Juho and Lehtinen, Mona},
journal={JLIS.it},
volume={13},
number={1},
pages={265--282},
year={2022},
doi = {10.4403/jlis.it-12740},
url={https://www.jlis.it/index.php/jlis/article/view/437},
}

Suominen, O.; Koskenniemi, I., 2022.
Annif Analyzer Shootout: Comparing text lemmatization methods for automated subject indexing.
Code4Lib Journal, (54). URL:
https://journal.code4lib.org/articles/16719

See BibTex:

@article{suominen2022analyzer,
title={Annif Analyzer Shootout: Comparing text lemmatization methods for automated subject indexing},
author={Suominen, Osma and Koskenniemi, Ilkka},
journal={Code4Lib J.},
number={54},
year={2022},
url={https://journal.code4lib.org/articles/16719},
}

Suominen, O., 2019. Annif: DIY automated subject indexing using multiple
algorithms. LIBER Quarterly, 29(1), pp.1–25. DOI:
https://doi.org/10.18352/lq.10285

See BibTex:

@article{suominen2019annif,
title={Annif: DIY automated subject indexing using multiple algorithms},
author={Suominen, Osma},
journal={{LIBER} Quarterly},
volume={29},
number={1},
pages={1--25},
year={2019},
doi = {10.18352/lq.10285},
url = {https://doi.org/10.18352/lq.10285}
}

License

The code in this repository is licensed under Apache License 2.0, except for the
dependencies included underannif/static/cssandannif/static/js,
which have their own licenses, see the file headers for details.
Please note that the YAKE library is licensed
under GPLv3, while Annif is
licensed under the Apache License 2.0. The licenses are compatible, but
depending on legal interpretation, the terms of the GPLv3 (for example the
requirement to publish corresponding source code when publishing an executable
application) may be considered to apply to the whole of Annif+Yake if you
decide to install the optional Yake dependency.
|
annif-client
|
Annif-client

This is a minimal Python 3.x client library for accessing the Annif REST API, which can be used for automated subject indexing and classification of text documents.

Installation

The easiest way to install is via pip:

pip3 install annif-client

Dependencies

The library depends on the requests module, which is used for HTTP/REST access. If you install this via pip, the dependencies will be handled automatically.

How to use

The client library comes with examples demonstrating its usage. You can invoke the example by running the annif_client.py script.

In your own code, you can use the AnnifClient class like this:

from annif_client import AnnifClient
# then you can create your own client
annif = AnnifClient()

Example invocation

Here is the output from a typical example session:

$ python3 annif_client.py
Demonstrating usage of AnnifClient
* Creating an AnnifClient object
* The client uses Annif API at https://api.annif.org/v1/
* The version of Annif serving the API is 0.61.0
* Finding the available projects
Project id: yso-fi lang: fi name: YSO NN ensemble Finnish
Project id: yso-sv lang: sv name: YSO NN ensemble Swedish
Project id: yso-en lang: en name: YSO NN ensemble English
Project id: yso-mllm-fi lang: fi name: YSO MLLM Finnish
Project id: yso-mllm-en lang: en name: YSO MLLM English
Project id: yso-mllm-sv lang: sv name: YSO MLLM Swedish
Project id: yso-bonsai-fi lang: fi name: YSO Omikuji Bonsai Finnish
Project id: yso-bonsai-sv lang: sv name: YSO Omikuji Bonsai Swedish
Project id: yso-bonsai-en lang: en name: YSO Omikuji Bonsai English
Project id: yso-fasttext-fi lang: fi name: YSO fastText Finnish
Project id: yso-fasttext-sv lang: sv name: YSO fastText Swedish
Project id: yso-fasttext-en lang: en name: YSO fastText English
* Looking up information about a specific project
Project id: yso-en lang: en name: YSO NN ensemble English
* Analyzing a short text from a string
<http://www.yso.fi/onto/yso/p5319> 0.2852 dog
<http://www.yso.fi/onto/yso/p8122> 0.1401 laziness
<http://www.yso.fi/onto/yso/p2228> 0.1052 red fox
<http://www.yso.fi/onto/yso/p2352> 0.0914 singers
<http://www.yso.fi/onto/yso/p675> 0.0679 pets
<http://www.yso.fi/onto/yso/p27825> 0.0651 jumping
<http://www.yso.fi/onto/yso/p25726> 0.0631 brown
<http://www.yso.fi/onto/yso/p2023> 0.0584 animals
<http://www.yso.fi/onto/yso/p4484> 0.0453 jazz
<http://www.yso.fi/onto/yso/p22993> 0.0357 clicker training
* Analyzing a longer text from a file, with a limit on number of results
<http://www.yso.fi/onto/yso/p2346> 0.6324 copyright
<http://www.yso.fi/onto/yso/p16495> 0.4211 licences (permits)
<http://www.yso.fi/onto/yso/p26592> 0.1882 computer programmes
<http://www.yso.fi/onto/yso/p3069> 0.1434 patents
<http://www.yso.fi/onto/yso/p3068> 0.1044 intellectual property law
* Analyzing a batch of text documents
doc-0
<http://www.yso.fi/onto/yso/p5319> 0.2852 dog
<http://www.yso.fi/onto/yso/p8122> 0.1401 laziness
<http://www.yso.fi/onto/yso/p2228> 0.1052 red fox
<http://www.yso.fi/onto/yso/p2352> 0.0914 singers
<http://www.yso.fi/onto/yso/p675> 0.0679 pets
<http://www.yso.fi/onto/yso/p27825> 0.0651 jumping
<http://www.yso.fi/onto/yso/p25726> 0.0631 brown
<http://www.yso.fi/onto/yso/p2023> 0.0584 animals
<http://www.yso.fi/onto/yso/p4484> 0.0453 jazz
<http://www.yso.fi/onto/yso/p22993> 0.0357 clicker training
doc-1
<http://www.yso.fi/onto/yso/p1780> 0.7189 history
<http://www.yso.fi/onto/yso/p2787> 0.7167 libraries
<http://www.yso.fi/onto/yso/p11657> 0.6425 national libraries
<http://www.yso.fi/onto/yso/p94426> 0.3903 Finland
<http://www.yso.fi/onto/yso/p12676> 0.3430 collections
<http://www.yso.fi/onto/yso/p8025> 0.2621 architecture
<http://www.yso.fi/onto/yso/p4860> 0.2586 library buildings
<http://www.yso.fi/onto/yso/p19136> 0.2577 scientific libraries
<http://www.yso.fi/onto/yso/p1778> 0.2208 histories (literary works)
<http://www.yso.fi/onto/yso/p10184> 0.2120 university libraries
doc-2
<http://www.yso.fi/onto/yso/p4934> 0.5751 museums
<http://www.yso.fi/onto/yso/p2787> 0.5510 libraries
<http://www.yso.fi/onto/yso/p2336> 0.4488 archives (memory organisations)
<http://www.yso.fi/onto/yso/p26984> 0.4123 subject indexing
<http://www.yso.fi/onto/yso/p11477> 0.2768 automation
<http://www.yso.fi/onto/yso/p13380> 0.2394 subject cataloging
<http://www.yso.fi/onto/yso/p39257> 0.1884 indexing (information technology)
<http://www.yso.fi/onto/yso/p1140> 0.0799 data storage
<http://www.yso.fi/onto/yso/p5521> 0.0771 information management
    <http://www.yso.fi/onto/yso/p21192> 0.0749 long-term preservation
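To reproduce part of that session in your own code, a minimal sketch might look like the following. Note that the method and field names below (projects, suggest, uri/score/label) are assumptions based on recent versions of annif-client; check the bundled annif_client.py example for the exact API.

from annif_client import AnnifClient

annif = AnnifClient()  # uses https://api.annif.org/v1/ by default

# list the available projects (property name assumed)
for project in annif.projects:
    print(project['project_id'], project['name'])

# suggest subjects for a short text (older client versions called this analyze)
results = annif.suggest(project_id='yso-en', text='The quick brown fox jumped over the lazy dog')
for result in results:
    print(result['uri'], result['score'], result['label'])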
License

The code is published under the Apache 2.0 license.
|
annin-dofu
|
author: laplaciannin102 (Kosuke Asada)

Table of Contents

- annin_dofu
- Table of Contents
- How to install
- Overview
- Contents
- Features

How to install

pip install annin_dofu

Overview

Origin of the name annin_dofu: an abbreviation of "analyze nature in data of universe" ("analyze nature using the data of the universe").

Contents

A collection of laplaciannin102's own libraries.

Features

See the documentation for details.

modules

- calc
- matrix
- parallel
- stats
- utils
- response_surface_methodology: response surface methodology utilities
- stocking_quantity_optimization: inventory optimization utilities
|
annize
|
TODO Anise is a Python-based execution engine for automation tasks.

Automation tasks exist in software development, and probably in all kinds of other sectors. They typically require the execution of different smaller and larger tools. Complex tasks often need a sequence of many steps to execute, with some steps having dependencies on each other. Manually triggering all these steps in the graphical interfaces of all the involved tools is possible in theory, but will generate errors and frustration after some cycles. The automation interfaces of those tools are sometimes easier, but sometimes they are error-prone. Some tasks may also need to ask the user for some information in an interactive way. Some smaller parts might also be machine-specific (e.g. filesystem paths or the code for accessing a password vault), while the entire task must be runnable on several different machines.

In some situations, this can lead to a rather intransparent forest of different tools, with unique oddnesses and special conventions. As the number of different projects increases, you will see more and more different tools, often doing a similar job, but for different platforms or frameworks and, of course, with different usage conventions. Spontaneously written glue scripts help in the beginning, but will explode as the complexity exceeds some threshold.

Typical tasks in software development could be:
- Generating documentation
- Testing
- Automatic code generation
- Creating packages
- Creating a homepage, automatically built from the available version information, the packages, the documentation and so on
- Deploying this homepage to a web server
- Handling version information - e.g. printing it in the manual
- and many more

The Anise framework allows you to implement all those tasks in a structured but generic way in a combination of XML and Python code. Once you have created this stuff at a defined place in your project, Anise lets you easily execute your tasks from the command line (or from any editor if you embed it somehow). This gives you a common and easy interface to all your 'tool glue' code.

The Anise engine executes arbitrary Python source code and provides some additional services like logging, parameter passing from the command line, basic graphical user interface support, a plugin interface, a flexible event system, injecting code and data from other places, dependencies between code fragments, and more.

On top of this engine, Anise comes with a bunch of implementations that fulfill tasks (or parts of them) of software development. There is a testing module, a documentation and homepage generator, some package building methods and a lot more. The implementations use the event system in many places in order to allow customization in a somewhat technical but very flexible way. Even so, those implementations are rather specific, and it depends on the particular case if, and how many of, those implementations are useful.
|
ann-linkage-clustering
|
Find Co-Abundant Groups of Genes

Purpose

Analyze gene abundance data from a large set of samples and calculate
which sets of genes are found at a similar abundance across all samples.
Those genes are expected to be biologically linked, such as the case of
metagenomic analysis via whole-genome shotgun sequences, where genes
from the same genome tend to be found at a similar abundance.

Code Availability

The code in this repository is provided in two different formats. There
is a library of Python code (ann_linkage_clustering in PyPI) that can
be used to make CAGs directly from a Pandas DataFrame. There is also a
Docker image that is intended to be run with the script find-cags.py.
The documentation below describes the end-to-end workflow that is available
with that Docker image and the single wrapper script.

Input Data Format

It is assumed that all input data will be in JSON format (gzip optional). The pertinent data for each individual sample is an abundance metric for each gene. The input file must contain a list in which each element is a dict that contains the gene ID under one key and the abundance metric under another key.

For initial development we will assume that each input file is a single dict, with the results located at a single key within that dict. In the future we may end up supporting more flexibility in extracting results from files with different structures, but for the first pass we'll just go with this.

Therefore the features that must be specified by the user are:

- Key for the list of gene abundances within the JSON (e.g. "results")
- Key for the gene_id within each element of the list (e.g. "id")
- Key for the abundance metric within each element (e.g. "depth")

Here is an example of what that might look like in JSON format:

{
  "results": [
    {"id": "gene_1", "depth": 1.1},
    {"id": "gene_2", "depth": 0.2},
    {"id": "gene_3", "depth": 3000.015}
  ],
  "logs": [
    "any other data",
    "that you would like",
    "to include in this file is just fine."
  ]
}

NOTE: All abundance metric values must be >= 0
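To make the format concrete, here is a short sketch that reads a sample sheet plus per-sample JSON files (using the "results"/"id"/"depth" keys from the example above) into a genes-by-samples Pandas DataFrame. This is an illustrative reimplementation of the described inputs, not code shipped with the package:

import gzip
import json

import pandas as pd

def read_sample(path, results_key="results", gene_id_key="id", abund_key="depth"):
    """Read one sample JSON (optionally gzipped) into a dict of gene_id -> abundance."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as handle:
        data = json.load(handle)
    return {rec[gene_id_key]: rec[abund_key] for rec in data[results_key]}

# sample sheet: {"sample_name": "path/to/file.json", ...}
with open("tests/sample_sheet.json") as handle:
    sample_sheet = json.load(handle)

# genes in rows, samples in columns; genes missing from a sample become 0
df = pd.DataFrame({name: read_sample(path) for name, path in sample_sheet.items()}).fillna(0)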
Running from any DataFrame

If you have any other format of data, you can use this code to find CAGs as well. The big difference is that this script does some data normalization that is very
helpful. For example, if you are using cosine distance, it's best to have the value
indicating absence to be zero. So if you are using the centered log-ratio (clr)
normalization approach, you really need to set a standard cutoff across all samples,
trim the lowest value to that, and then set that lowest value to zero. This is all
done automatically by find-cags.py, but you can absolutely use the same functions
to make CAGs with any other input data format or normalization approach.

You can follow the workflow in the find-cags.py script, which basically follows this workflow (assuming that df is your DataFrame of abundance data, with genes in rows and samples in columns):

from multiprocessing import Pool

from ann_linkage_clustering.lib import make_cags_with_ann
from ann_linkage_clustering.lib import iteratively_refine_cags
from ann_linkage_clustering.lib import make_nmslib_index

# Maximum distance threshold (use any value)
max_dist = 0.2

# Distance metric (only 'cosine' is supported)
distance_metric = "cosine"

# Multiprocessing pool (pick any number of threads, in this case `1`)
threads = 1
pool = Pool(threads)

# Linkage type (only `average` is fully supported)
linkage_type = "average"

# Make the ANN index
index = make_nmslib_index(df)

# Make the CAGs in the first round
cags = make_cags_with_ann(
    index,
    max_dist,
    df,
    pool,
    threads=threads,
    distance_metric=distance_metric,
    linkage_type=linkage_type,
)

# Iteratively refine the CAGs (this is the part that is hardcoded to
# use average linkage clustering, while the step above could technically
# use any of `complete`, `single`, `average`, etc.)
iteratively_refine_cags(
    cags,
    df.copy(),
    max_dist,
    distance_metric=distance_metric,
    linkage_type=linkage_type,
    threads=threads,
)

At the end of all of that, the cags object is a dictionary containing
all of the identified groups.

Sample Sheet

To link individual files with sample names, the user will specify a
sample sheet, which is a JSON file formatted as a dict, with sample names as keys and file locations as values.

Data Locations

At the moment we will support data found in either the local file system or AWS S3.

Test Dataset

For testing, I will use a set of JSONs which contain the abundance of individual genes for a set of microbiome samples. That data is found in the tests/ folder. There is also a JSON file indicating which sample goes
with which file, which is formatted as a simple dict (keys are sample names
and values are file locations) and located in tests/sample_sheet.json.

Normalization

The --normalize metric accepts three values: clr, median, and sum. In each case the abundance metric for each gene within each sample is divided by either the median or the sum of the abundance metrics for all genes within that sample. When calculating the median, only genes with non-zero abundances are considered. For clr, each value is divided by the geometric mean for the sample, and then the log10 is taken. All zero values are filled with the minimum value for the entire dataset (so that they are equal across samples, and not sensitive to sequencing depth).
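To make the clr arithmetic concrete, here is a small sketch of the normalization described above, written against a genes-by-samples Pandas DataFrame. It is an illustrative reimplementation of the description, not the package's own code:

import numpy as np
import pandas as pd

def clr_normalize(df: pd.DataFrame) -> pd.DataFrame:
    """clr-style normalization: genes in rows, samples in columns, values >= 0."""
    def clr_sample(col: pd.Series) -> pd.Series:
        nonzero = col[col > 0]
        # geometric mean over the observed (non-zero) genes in this sample
        gmean = np.exp(np.log(nonzero).mean())
        # zeros become NaN here and are filled with a global minimum below
        return np.log10(col.where(col > 0) / gmean)

    normed = df.apply(clr_sample, axis=0)
    # fill all zeros with the single minimum value of the whole dataset,
    # so that the fill value is equal across samples
    return normed.fillna(normed.min().min())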
Approximate Nearest Neighbor

The Approximate Nearest Neighbor algorithm as implemented by nmslib is being used to create the CAGs. This implementation has a high performance in an independent benchmark.

Distance Metric

The distance metric is now hard-coded to be the cosine similarity. This is limited by the available functionality of ANN in nmslib, and therefore has been standardized to the other parts of the codebase as well.

Refinements

After finding CAGs, the algorithm will iteratively join CAGs that are very similar to each other in aggregate. This increases the fidelity of the final CAGs and mitigates some of the sensitivity limitations of ANN.

Invocation

usage: find-cags.py [-h] --sample-sheet SAMPLE_SHEET --output-prefix
OUTPUT_PREFIX --output-folder OUTPUT_FOLDER
[--normalization NORMALIZATION] [--max-dist MAX_DIST]
[--temp-folder TEMP_FOLDER] [--results-key RESULTS_KEY]
[--abundance-key ABUNDANCE_KEY]
[--gene-id-key GENE_ID_KEY] [--threads THREADS]
[--min-samples MIN_SAMPLES] [--clr-floor CLR_FLOOR]
[--test]
Find a set of co-abundant genes
optional arguments:
-h, --help show this help message and exit
--sample-sheet SAMPLE_SHEET
Location for sample sheet (.json[.gz]).
--output-prefix OUTPUT_PREFIX
Prefix for output files.
--output-folder OUTPUT_FOLDER
Folder to place results. (Supported: s3://, or local
path).
--normalization NORMALIZATION
Normalization factor per-sample (median, sum, or clr).
--max-dist MAX_DIST Maximum cosine distance for clustering.
--temp-folder TEMP_FOLDER
Folder for temporary files.
--results-key RESULTS_KEY
Key identifying the list of gene abundances for each
sample JSON.
--abundance-key ABUNDANCE_KEY
Key identifying the abundance value for each element
in the results list.
--gene-id-key GENE_ID_KEY
Key identifying the gene ID for each element in the
results list.
--threads THREADS Number of threads to use.
--min-samples MIN_SAMPLES
Filter genes by the number of samples they are found
in.
--clr-floor CLR_FLOOR
Set a minimum CLR value, 'auto' will use the largest
minimum value.
--test Run in testing mode and only process a subset of 2,000
genes.
|
annlite
|
A fast embedded library for approximate nearest neighbor search

What is AnnLite?

AnnLite is a lightweight and embeddable library for fast and filterable approximate nearest neighbor search (ANNS). It allows you to search for nearest neighbors in a dataset of millions of points with a Pythonic API.

Highlighted features:

- 🐥 Easy-to-use: a simple API is designed to be used with Python. It is easy to use and intuitive to set up for production.
- 🐎 Fast: the library uses a highly optimized approximate nearest neighbor search algorithm (HNSW) to search for nearest neighbors.
- 🔎 Filterable: the library allows you to search for nearest neighbors within a subset of the dataset.
- 🍱 Integration: smooth integration with the neural search ecosystem including Jina and DocArray, so that users can easily expose a search API with gRPC and/or HTTP.

The library is easy to install and use. It is designed to be used with Python.

Installation

To use AnnLite, you need to first install it. The easiest way to install AnnLite is using pip:

pip install -U annlite

or install from source:

python setup.py install

Quick start

Before you start, you need some experience with DocArray. AnnLite is designed to be used with DocArray, so you need to know how to use DocArray first.

For example, you can create a DocArray with 1000 random vectors with 128 dimensions:

from docarray import DocumentArray
import numpy as np

docs = DocumentArray.empty(1000)
docs.embeddings = np.random.random([1000, 128]).astype(np.float32)

Index

Then you can create an AnnIndexer to index the created docs and search for nearest neighbors:

from annlite import AnnLite

ann = AnnLite(128, metric='cosine', data_path="/tmp/annlite_data")
ann.index(docs)

Note that this will create a directory /tmp/annlite_data to persist the documents indexed.
If this directory already exists, the index will be loaded from the directory.
And if you want to create a new index, you can delete the directory first.

Search

Then you can search for nearest neighbors for some query docs with ann.search():

query = DocumentArray.empty(5)
query.embeddings = np.random.random([5, 128]).astype(np.float32)

result = ann.search(query)

Then, you can inspect the retrieved docs for each query doc inside query matches:

for q in query:
    print(f'Query {q.id}')
    for k, m in enumerate(q.matches):
        print(f'{k}: {m.id} {m.scores["cosine"]}')

Query ddbae2073416527bad66ff186543eff8
0: 47dcf7f3fdbe3f0b8d73b87d2a1b266f {'value': 0.17575037}
1: 7f2cbb8a6c2a3ec7be024b750964f317 {'value': 0.17735684}
2: 2e7eed87f45a87d3c65c306256566abb {'value': 0.17917466}
Query dda90782f6514ebe4be4705054f74452
0: 6616eecba99bd10d9581d0d5092d59ce {'value': 0.14570713}
1: d4e3147fc430de1a57c9883615c252c6 {'value': 0.15338594}
2: 5c7b8b969d4381f405b8f07bc68f8148 {'value': 0.15743542}
...

Or shorten the loop into a one-liner using the element & attribute selector:

print(query['@m', ('id', 'scores__cosine')])

Query

You can get a specific document by its id:

doc = ann.get_doc_by_id('<doc_id>')

And you can also get the documents with limit and offset, which is useful for pagination:

docs = ann.get_docs(limit=10, offset=0)

Furthermore, you can also get the documents ordered by a specific column from the index:

docs = ann.get_docs(limit=10, offset=0, order_by='x', ascending=True)

Note: the order_by column must be one of the columns in the index.

Update

After you have indexed the docs, you can update the docs in the index by calling ann.update():

updated_docs = docs.sample(10)
updated_docs.embeddings = np.random.random([10, 128]).astype(np.float32)

ann.update(updated_docs)

Delete

And finally, you can delete the docs from the index by calling ann.delete():

to_delete = docs.sample(10)
ann.delete(to_delete)

Search with filters

To support search with filters, the annlite must be created with the columns parameter, which is a series of fields you want to filter by.
At query time, annlite will filter the dataset by providing conditions for certain fields.

import annlite

# the column schema: (name: str, dtype: type, create_index: bool)
ann = annlite.AnnLite(128, columns=[('price', float)], data_path="/tmp/annlite_data")

Then you can insert the docs, in which each doc has a field price with a float value contained in the tags:

import random

from docarray import Document, DocumentArray

docs = DocumentArray([Document(id=f'{i}', tags={'price': random.random()}) for i in range(1000)])
docs.embeddings = np.random.random([1000, 128]).astype(np.float32)

ann.index(docs)

Then you can search for nearest neighbors with filtering conditions as:

query = DocumentArray.empty(5)
query.embeddings = np.random.random([5, 128]).astype(np.float32)

ann.search(query, filter={"price": {"$lte": 50}}, limit=10)

print(f'the result with filtering:')
for i, q in enumerate(query):
    print(f'query [{i}]:')
    for m in q.matches:
        print(f'\t{m.id} {m.scores["euclidean"].value} (price={m.tags["price"]})')

The conditions parameter is a dictionary of conditions. The key is the field name, and the value is a dictionary of conditions.
The query language is the same as the MongoDB Query Language. We currently support a subset of those selectors:

- $eq - Equal to (number, string)
- $ne - Not equal to (number, string)
- $gt - Greater than (number)
- $gte - Greater than or equal to (number)
- $lt - Less than (number)
- $lte - Less than or equal to (number)
- $in - Included in an array
- $nin - Not included in an array

The query will be performed on the field if the condition is satisfied. The following is an example of a query:

Nike shoes in white color:

{
  "brand": {"$eq": "Nike"},
  "category": {"$eq": "Shoes"},
  "color": {"$eq": "White"}
}

We also support the boolean operators $or and $and:

{
  "$and": {
    "brand": {"$eq": "Nike"},
    "category": {"$eq": "Shoes"},
    "color": {"$eq": "White"}
  }
}

Nike shoes or price less than 100$:

{
  "$or": {
    "brand": {"$eq": "Nike"},
    "price": {"$lte": 100}
  }
}

Dump and Load

By default, the hnsw index is in memory. You can dump the index to data_path by calling .dump():

from annlite import AnnLite

ann = AnnLite(128, metric='cosine', data_path="/path/to/data_path")
ann.index(docs)
ann.dump()

And you can restore the hnsw index from data_path if it exists:

new_ann = AnnLite(128, metric='cosine', data_path="/path/to/data_path")

If you didn't dump the hnsw index, the index will be rebuilt from scratch. This will take a while.

Supported distance metrics

The annlite supports the following distance metrics:

Distance          | parameter     | Equation
Euclidean         | euclidean     | d = sqrt(sum((Ai-Bi)^2))
Inner product     | inner_product | d = 1.0 - sum(Ai*Bi)
Cosine similarity | cosine        | d = 1.0 - sum(Ai*Bi) / sqrt(sum(Ai*Ai) * sum(Bi*Bi))

Note that inner product is not an actual metric. An element can be closer to some other element than to itself.
That allows some speedup if you remove all elements that are not the closest to themselves from the index, e.g., inner_product([1.0, 1.0], [1.0, 1.0]) < inner_product([1.0, 1.0], [2.0, 2.0])

HNSW algorithm parameters

The HNSW algorithm has several parameters that can be tuned to improve the search performance.

Search parameters

- ef_search - The size of the dynamic list for the nearest neighbors during search (default: 50).
The larger the value, the more accurate the search results, but the slower the search speed. ef_search must be larger than the limit parameter in search(..., limit).

- limit - The maximum number of results to return (default: 10).

Construction parameters

- max_connection - The number of bi-directional links created for every new element during construction (default: 16). A reasonable range is from 2 to 100. Higher values work better for datasets with higher dimensionality and/or high recall. This parameter also affects the memory consumption during construction, which is roughly max_connection * 8-10 bytes per stored element. As an example, for n_dim=4 random vectors the optimal max_connection for search is somewhere around 6, while for high dimensional datasets higher max_connection values are required (e.g. M=48-64) for optimal performance at high recall. The range max_connection=12-48 is fine for most of the use cases. When max_connection is changed, one has to update the other parameters. Nonetheless, the ef_search and ef_construction parameters can be roughly estimated by assuming that max_connection * ef_construction is a constant.

- ef_construction - The size of the dynamic list for the nearest neighbors during construction (default: 200). Higher values give better accuracy, but increase construction time and memory consumption. At some point, increasing ef_construction does not give any more accuracy. To set ef_construction to a reasonable value, one can measure the recall: if the recall is lower than 0.9, then increase ef_construction and re-run the search.

To set the parameters, you can define them when creating the annlite:

from annlite import AnnLite

ann = AnnLite(128, columns=[('price', float)], data_path="/tmp/annlite_data", ef_construction=200, max_connection=16)

Benchmark

One can run executor/benchmark.py to get a quick performance overview.

Stored data | Indexing time | Query size=1 | Query size=8 | Query size=64
10000       | 2.970         | 0.002        | 0.013        | 0.100
100000      | 76.474        | 0.011        | 0.078        | 0.649
500000      | 467.936       | 0.046        | 0.356        | 2.823
1000000     | 1025.506      | 0.091        | 0.695        | 5.778

Results with filtering can be generated from examples/benchmark_with_filtering.py. This script should produce a table similar to:

Stored data | % same filter | Indexing time | Query size=1 | Query size=8 | Query size=64
10000       | 5             | 2.869         | 0.004        | 0.030        | 0.270
10000       | 15            | 2.869         | 0.004        | 0.035        | 0.294
10000       | 20            | 3.506         | 0.005        | 0.038        | 0.287
10000       | 30            | 3.506         | 0.005        | 0.044        | 0.356
10000       | 50            | 3.506         | 0.008        | 0.064        | 0.484
10000       | 80            | 2.869         | 0.013        | 0.098        | 0.910
100000      | 5             | 75.960        | 0.018        | 0.134        | 1.092
100000      | 15            | 75.960        | 0.026        | 0.211        | 1.736
100000      | 20            | 78.475        | 0.034        | 0.265        | 2.097
100000      | 30            | 78.475        | 0.044        | 0.357        | 2.887
100000      | 50            | 78.475        | 0.068        | 0.565        | 4.383
100000      | 80            | 75.960        | 0.111        | 0.878        | 6.815
500000      | 5             | 497.744       | 0.069        | 0.561        | 4.439
500000      | 15            | 497.744       | 0.134        | 1.064        | 8.469
500000      | 20            | 440.108       | 0.152        | 1.199        | 9.472
500000      | 30            | 440.108       | 0.212        | 1.650        | 13.267
500000      | 50            | 440.108       | 0.328        | 2.637        | 21.961
500000      | 80            | 497.744       | 0.580        | 4.602        | 36.986
1000000     | 5             | 1052.388      | 0.131        | 1.031        | 8.212
1000000     | 15            | 1052.388      | 0.263        | 2.191        | 16.643
1000000     | 20            | 980.598       | 0.351        | 2.659        | 21.193
1000000     | 30            | 980.598       | 0.461        | 3.713        | 29.794
1000000     | 50            | 980.598       | 0.732        | 5.975        | 47.356
1000000     | 80            | 1052.388      | 1.151        | 9.255        | 73.552

Note that:

- Query times are given in seconds.
- % same filter indicates the share of the data that satisfies a filter in the database. For example, if % same filter = 10 and Stored data = 1_000_000, then 100_000 examples satisfy the filter.

Next steps

If you already have experience with Jina and DocArray, you can start using AnnLite right away. Otherwise, you can check out this advanced tutorial to learn how to use AnnLite in practice.

🙋 FAQ

1. Why should I use AnnLite?

AnnLite is easy to use and intuitive to set up in production. It is also very fast and memory efficient, making it a great choice for approximate nearest neighbor search.

2. How do I use AnnLite with Jina?

We have implemented an executor for AnnLite that can be used with Jina:

from jina import Flow

with Flow().add(uses='jinahub://AnnLiteIndexer', uses_with={'n_dim': 128}) as f:
    f.post('/index', inputs=docs)

3. Does AnnLite support search with filters?

Yes.

Documentation

You can find the documentation on Github and ReadTheDocs.

🤝 Contribute and spread the word

We are also looking for contributors who want to help us improve: code, documentation, issues, feedback! Here is how you can get started:

- Have a look through GitHub issues labeled "Good first issue".
- Read our Contributor Covenant Code of Conduct.
- Open an issue or submit your pull request!

License

AnnLite is licensed under the Apache License 2.0.
|
ann-nmf
|
AnnData wrapper of the ARD-NMF module from SignatureAnalyzer-GPU.

Installation

Install using pip:

pip install ann_nmf

Alternatively, install the development version:

git clone https://github.com/mvinyard/ann_nmf.git; cd ann_nmf; pip install -e .

API overview

Import libraries and get some data:

import ann_nmf
import scanpy as sc

adata = sc.datasets.pbmc3k()
ann_nmf.ut.preprocess_raw_counts(adata)

Key class:

nmf = ann_nmf.NMF(adata, outdir="nmf_results/pbmc3k")  # saves .h5 file
nmf.run(n_runs=10, K0=20, max_iter=2000)

SignatureAnalyzer visualization:

nmf.cluster()
nmf.signatures()

Conceptual background and foundational work

- ARD-NMF theory (Arxiv)
- SignatureAnalyzer (GitHub)
- SignatureAnalyzer-GPU (GitHub)

Acknowledgements

Most of the code to wrap SignatureAnalyzer in an AnnData-friendly API was borrowed directly (and shamelessly) from Shankara Anand (@shankara-a), with only slight refactoring for more flexibility and fewer dependencies on install.
|
anno
|
No description available on PyPI.
|
annobase
|
UNKNOWN
|
annococo
|
AnnoCoco

A library for manipulating annotation data in COCO format. It aims to provide equivalent functionality without depending on pycocotools.

Install

pip install annococo

Usage

TBD
|
annodize
|
Annodize: Python Annotations that are shockingly useful

Using PEP 593 (Python 3.9+) Annotated Types for good... or for evil.

Example uses

- Data type conversion.
- Dataframe validation.
- (Your use case here!)

See Also

Check out future-typing, which lets you use Python 3.10 UnionType types in any Python version:

# -*- coding: future_typing -*-

foo: int | str
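For orientation, here is a minimal sketch of the PEP 593 mechanics that this kind of library builds on: attaching metadata with typing.Annotated and reading it back at runtime to drive a conversion step. The names (AsInt, convert_fields) are illustrative only, not Annodize's API:

from dataclasses import dataclass, fields
from typing import Annotated, get_args, get_origin, get_type_hints

class AsInt:
    """Marker object carried as Annotated metadata."""
    def convert(self, value):
        return int(value)

@dataclass
class Row:
    name: str
    count: Annotated[str, AsInt()]  # declared as str, convertible to int

def convert_fields(obj):
    """Apply any AsInt metadata found in the field annotations."""
    hints = get_type_hints(type(obj), include_extras=True)
    for f in fields(obj):
        hint = hints[f.name]
        if get_origin(hint) is Annotated:
            for meta in get_args(hint)[1:]:
                if isinstance(meta, AsInt):
                    setattr(obj, f.name, meta.convert(getattr(obj, f.name)))
    return obj

row = convert_fields(Row(name="a", count="42"))
print(row.count)  # 42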
|
annofab-3dpc-editor-cli
|
annofab-3dpc-editor-cli

A CLI for working with Annofab 3D projects.

Install

$ pip install annofab-3dpc-editor-cli

Command samples

See https://annofab-3dpc-editor-cli.readthedocs.io/ja/latest/user_guide/command_sample.html

How to check the version

$ anno3d version
annofab-3dpc-editor-cli 0.2.1

Development environment

- poetry (Poetry version 1.0.5)
- python 3.8

Initializing the development environment

For the poetry installation procedure, see the poetry installation section at the bottom of this file.

poetry install

Installing poetry

One example of a poetry installation procedure is shown below (confirmed on Ubuntu 18.04, 2020/05/21). Instead of installing into your local environment with the following steps, you may also use docker/Dockerfile, which comes with Python 3.8 and poetry already set up.

pyenv

pyenv is not needed if you install Python 3.8 directly on your system.

sudo apt-get update
sudo apt-get install build-essential libssl-dev zlib1g-dev libbz2-dev \
libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
  xz-utils tk-dev libffi-dev liblzma-dev python-openssl git

curl https://pyenv.run | bash

The console will print settings like the following, which you should add to your .bashrc or similar:

export PATH="/home/vagrant/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"pyenv install 3.8.3
pyenv global 3.8.3pipx直接 poetry をインストールするなら要らないpython -m pip install --user pipx
python -m pipx ensurepathcompletionを効かせたいときは、pipx completionsの実行結果に従って設定する$ pipx completions
Add the appropriate command to your shell's config file
so that it is run on startup. You will likely have to restart
or re-login for the autocompletion to start working.
bash:
eval "$(register-python-argcomplete pipx)"
zsh:
To activate completions for zsh you need to have
bashcompinit enabled in zsh:
autoload -U bashcompinit
bashcompinit
Afterwards you can enable completion for pipx:
eval "$(register-python-argcomplete pipx)"
tcsh:
eval `register-python-argcomplete --shell tcsh pipx`
fish:
     register-python-argcomplete --shell fish pipx | .

poetry

pipx install poetry
poetry completions bash | sudo tee /etc/bash_completion.d/poetry.bash-completion

Publishing to PyPI

$ make publish
|
annofabapi
|
annofab-api-python-client

A Python client library for the Annofab Web API.

- Annofab Web API Documentation: https://annofab.com/docs/api/
- Reference Documentation: https://annofab-api-python-client.readthedocs.io/en/latest/
- annofab-cli: https://github.com/kurusugawa-computer/annofab-cli — CLI tools for operations that take a long time in the Annofab UI, such as bulk task rejection and showing diffs between projects.
- Developer documentation: README_for_developer.md

Warning

The author and copyright holders accept no liability whatsoever for this software. The API is still under development; incompatible changes may be made without notice. Be careful not to run put, post, or delete methods by mistake; pay particular attention to APIs such as "delete project" and "update annotation specification".

Scheduled deprecations

None at present.

Features

- Easier access to the Annofab Web API than with cURL, Postman, etc.
- You can use the API without worrying about logging in.
- Requests that fail (e.g. due to rate limiting) are retried.
- Functions that combine several API calls are also available, such as "register an image file as input data".

Requirements

Python 3.8+

Install

$ pip install annofabapi

https://pypi.org/project/annofabapi/

Usage

Creating an instance

Passing user_id and password as constructor arguments:

# Create an instance for API access
from annofabapi import build

user_id = "XXXXXX"
password = "YYYYYY"
service = build(user_id, password)

Putting the credentials in .netrc

Write your Annofab user ID and password in the .netrc file:

machine annofab.com
login annofab_user_id
password annofab_password

from annofabapi import build_from_netrc
service = build_from_netrc()

For Linux: the path is $HOME/.netrc; change its permissions with $ chmod 600 $HOME/.netrc.
For Windows: the path is %USERPROFILE%\.netrc.

Setting the credentials in environment variables

Set your user ID and password in the environment variables ANNOFAB_USER_ID and ANNOFAB_PASSWORD:

from annofabapi import build_from_env
service = build_from_env()

Using .netrc or environment variables

Running build() reads the credentials from the environment variables or the .netrc file:

from annofabapi import build
service = build()

The order of precedence is:

1. environment variables
2. .netrc

Sample code for service.api

service.api defines methods corresponding to the Web API. The method names are the operationId values from the Annofab Web API OpenAPI specification, converted to snake case. The return type of each method is Tuple[Content, Response]. Response is a Response object from the requests module; Content is the body of the Response.

project_id = "ZZZZZZ"

# Get the tasks whose `status` is `complete`
content, response = service.api.get_tasks(project_id, query_params={"status": "complete"})

print(type(content))
# <class 'dict'>

print(content)
# {'list': [{'project_id': 'ZZZZZZ', 'task_id': '20190317_2', 'phase': 'acceptance', ...

print(type(response))
# <class 'requests.models.Response'>

print(response.headers["Content-Type"])
# application/json

Sample code for service.wrapper

service.wrapper defines methods that combine calls from service.api.

# Get all tasks whose `status` is `complete`
tasks = service.wrapper.get_all_tasks(project_id, query_params={"status": "complete"})

print(type(tasks))
# <class 'list'>

print(tasks)
# [{'project_id': 'ZZZZZZ', 'task_id': '20190317_2', 'phase': 'acceptance', ...

# Download the simple annotation zip
service.wrapper.download_annotation_archive(project_id, 'output_dir')

# Register an image file as input data
service.wrapper.put_input_data_from_file(project_id, 'sample_input_data_id', f'sample.png')

Reading the annotation zip

Read a downloaded annotation zip JSON file by JSON file. A directory created by unpacking the zip can be read as well.

import zipfile
from pathlib import Path
from annofabapi.parser import lazy_parse_simple_annotation_dir, lazy_parse_simple_annotation_zip, SimpleAnnotationZipParser, SimpleAnnotationDirParser, lazy_parse_simple_annotation_zip_by_task

# Read a simple annotation zip
iter_parser = lazy_parse_simple_annotation_zip(Path("simple-annotation.zip"))
for parser in iter_parser:
    simple_annotation = parser.parse()
    print(simple_annotation)

# Read a directory created by unpacking a simple annotation zip
iter_parser = lazy_parse_simple_annotation_dir(Path("simple-annotation-dir"))
for parser in iter_parser:
    simple_annotation = parser.parse()
    print(simple_annotation)

# Read a simple annotation zip task by task
task_iter_parser = lazy_parse_simple_annotation_zip_by_task(Path("simple-annotation.zip"))
for task_parser in task_iter_parser:
    print(task_parser.task_id)
    for parser in task_parser.lazy_parse():
        simple_annotation = parser.parse()
        print(simple_annotation)

# Read a single JSON file inside a simple annotation zip
with zipfile.ZipFile('simple-annotation.zip', 'r') as zip_file:
    parser = SimpleAnnotationZipParser(zip_file, "task01/12345678-abcd-1234-abcd-1234abcd5678.json")
    simple_annotation = parser.parse()
    print(simple_annotation)

# Read a single JSON file in a directory created by unpacking a simple annotation zip
parser = SimpleAnnotationDirParser(Path("task01/12345678-abcd-1234-abcd-1234abcd5678.json"))
simple_annotation = parser.parse()
print(simple_annotation)

Reading segmentation (painted) images

annofabapi.segmentation provides functions for working with the segmentation images stored in the annotation zip. To use it, run the following command:

$ pip install annofabapi[segmentation]

DataClass

annofabapi.dataclass contains classes for the data structures. Using these classes, you can access each value as an attribute.

from annofabapi.dataclass.task import Task

dict_task, _ = service.api.get_task(project_id, task_id)
task = Task.from_dict(dict_task)
print(task.task_id)
print(task.status)

Notes

How to output annofabapi logs (sample):

import logging
logging_formatter = '%(levelname)-8s:%(asctime)s:%(name)s:%(message)s'
logging.basicConfig(format=logging_formatter)
logging.getLogger("annofabapi").setLevel(level=logging.DEBUG)
|
annofabapi-3dpc-extensions
|
annofabapi-3dpc-extensions

Extensions to annofabapi for 3D annotation.

Requirements

Python 3.8+

Install

$ pip install annofabapi-3dpc-extensions

Usage

Data classes for cuboid annotations and segment annotations are available.

from annofabapi.parser import SimpleAnnotationDirParser
from annofab_3dpc.annotation import (
    CuboidAnnotationDetailDataV2,
    EulerAnglesZXY,
    SegmentAnnotationDetailData,
    SegmentData,
    convert_annotation_detail_data,
)

parser = SimpleAnnotationDirParser("tests/data/task1/input1.json")
result = parser.parse(convert_annotation_detail_data)

segment_annotation_data = result.details[0].data
cuboid_annotation_data = result.details[1].data

assert type(segment_annotation_data) == SegmentAnnotationDetailData
assert type(cuboid_annotation_data) == CuboidAnnotationDetailDataV2

### cuboid annotation
print(cuboid_annotation_data)
# => CuboidAnnotationDetailDataV2(shape=CuboidShapeV2(dimensions=Size(width=6.853874863204751, height=0.2929844409227371, depth=4.092537841193188), location=Location(x=-11.896872014598989, y=-3.0571381239812996, z=0.3601047024130821), rotation=EulerAnglesZXY(x=0, y=0, z=0), direction=CuboidDirection(front=Vector3(x=1, y=0, z=0), up=Vector3(x=0, y=0, z=1))), kind='CUBOID', version='2')

# Convert Euler angles to a quaternion
print(cuboid_annotation_data.shape.rotation.to_quaternion())
# => [1.0, 0.0, 0.0, 0.0]

# Convert a quaternion to Euler angles (method name reconstructed as from_quaternion; the source text was garbled here)
print(EulerAnglesZXY.from_quaternion([1.0, 0.0, 0.0, 0.0]))
# => EulerAnglesZXY(x=-0.0, y=0.0, z=0.0)

### segment annotation
print(segment_annotation_data)
# => SegmentAnnotationDetailData(data_uri='./input1/7ba51c15-f07a-4e29-8584-a4eaf3a6812a')

# Read the file containing the segment information
with parser.open_outer_file(Path(segment_annotation_data.data_uri).name) as f:
    dict_segment_data = json.load(f)
    segment_data = SegmentData.from_dict(dict_segment_data)
    assert type(segment_data) == SegmentData
    assert len(segment_data.points) > 0
    print(segment_data.points)
    # => [130439, 130442, ... ]
|
annofabcli
|
annofab-cli

CLI (Command Line Interface) tools for Annofab. Operations that take a long time in the Annofab UI, such as bulk task rejection and exporting task lists, are provided as commands.

- Annofab
- annofab-cli documentation
- Developer documentation

Warning

The author and copyright holders accept no liability whatsoever for this software. Incompatible changes may be made without notice. Some commands make sweeping changes to an Annofab project; take care not to run them by mistake.

Scheduled deprecations

From 2022-11-01:

- --query, which accepts a JMESPath, will be removed. It has few use cases, and the jq command covers them.
- --wait_options will be removed, because it has few use cases.

Requirements

Python 3.8+

Install

$ pip install annofabcli

https://pypi.org/project/annofabcli/

Using the Windows executable

Download annofabcli-vX.X.X-windows.zip from the GitHub release page. The annofabcli.exe inside the zip is the executable.

Using Docker

$ git clone https://github.com/kurusugawa-computer/annofab-cli.git
$ cd annofab-cli
$ chmod u+x docker-build.sh
$ ./docker-build.sh
$ docker run -it annofab-cli --help
# Pass the Annofab credentials via standard input
$ docker run -it annofab-cli project diff prj1 prj2
Enter Annofab User ID: XXXXXX
Enter Annofab Password:
# Pass the Annofab credentials via environment variables
$ docker run -it -e ANNOFAB_USER_ID=XXXX -e ANNOFAB_PASSWORD=YYYYY annofab-cli project diff prj1 prj2

Configuring the Annofab credentials

See https://annofab-cli.readthedocs.io/ja/latest/user_guide/configurations.html

Usage

See https://annofab-cli.readthedocs.io/ja/latest/user_guide/index.html

Command list

See https://annofab-cli.readthedocs.io/ja/latest/command_reference/index.html

Common use cases

Rejecting tasks that have completed acceptance

Because there was a mistake in the annotation rule for the "occluded" attribute of the "car" label, we reject, in bulk, all tasks that satisfy the following condition:

- The task contains at least one annotation whose "occluded" checkbox on the "car" label is ON.

Prerequisites

- The project owner runs the annofabcli commands.

# Output the list of task_ids of acceptance-completed tasks to acceptance_complete_task_id.txt.
$ annofabcli task list --project_id prj1 --task_query '{"status": "complete","phase":"acceptance"}' \
--format task_id_list --output acceptance_complete_task_id.txt
# Among the acceptance-completed tasks, output the number of annotations whose "occluded" checkbox on the "car" label is ON.
$ annofabcli annotation list_count --project_id prj1 --task_id file://task.txt --output annotation_count.csv \
--annotation_query '{"label_name_en": "car", "attributes":[{"additional_data_definition_name_en": "occluded", "flag": true}]}'
# Open annotation_count.csv in a spreadsheet program and save the list of task_ids of tasks with one or more such annotations to task_id.txt.
# Reject the tasks listed in task_id.txt. The inspection comment is "carラベルのoccluded属性を見直してください" ("Please review the occluded attribute of the car label").
# Assign the rejected tasks to the user who handled the last annotation phase (same behavior as the UI).
$ annofabcli task reject --project_id prj1 --task_id file://tasks.txt --cancel_acceptance \
--comment "carラベルのoccluded属性を見直してください"補足Windowsでannofabcliを使う場合WindowsのコマンドプロンプトやPowerShellでannofabcliを使う場合、JSON文字列内の二重引用をエスケープする必要があります。> annofabcli task list --project_id prj1 --task_query '{"\status\": \"complete\"}'
|
annofetch
|
AnnoFetch is a command line script that fetches biological annotations for valid
database accessions.
Called with a file containing valid gene or protein accessions, it produces a
csv file containing the available annotations for each accession.
To do this, AnnoFetch fetches entries from several databases such as GenBank (NCBI),
ENA (EMBL) or UniProtKB and extracts the desired annotations. A short tutorial is given in the README.md file
within the Example folder.
|
annofilt
|
Filter out misassembled/truncated annotations
|
annohub
|
No description available on PyPI.
|
annolab
|
This is the official python SDK for AnnoLab, the ML platform for NLP projects.

AnnoLab Website

Getting Started

Assuming that you have Python and virtualenv installed, set up your environment and install the required dependencies like this, or install the library using pip:

$ virtualenv venv
$ . venv/bin/activate
$ python -m pip install annolab

Using the AnnoLab SDK

To get started, ensure you have an annolab account at https://app.annolab.ai/signup and have created an API Key. Instructions for creating an API Key may be found at https://docs.annolab.ai/.

Configure the sdk with your api key using one of the following two methods.

Create an instance of the SDK passing your api_key:

>>> from annolab import AnnoLab
>>> lab = AnnoLab(api_key='YOUR_API_KEY')

Or set a global api key. All subsequent uses of the sdk will use this global key for authentication:

>>> import annolab
>>> from annolab import AnnoLab
>>>
>>> annolab.api_key = 'YOUR_API_KEY'
>>> lab = AnnoLab()

Usage Examples

Creating a project:

lab.create_project('My New Project')
# OR
lab.create_project(name='My New Project', owner_name='AnnoLab')

Getting an existing project:

lab.find_project('My New Project')
# OR
lab.find_project(name='My New Project', owner_name='AnnoLab')

Creating a new text source. Will be added to the "Uploads" directory by default:

project = lab.find_project('My New Project')
project.create_text_source(name='New Source', text='Some text or tokens for annotation.')

# Specifying a directory
project.create_text_source(name='New Source', text='Some text or tokens for annotation.', directory='Uploads')

Creating a new pdf source from a file. Will be added to the "Uploads" directory by default:

project = lab.find_project('My New Project')
project.create_pdf_source(file='/path/to/file')
project.create_pdf_source(file='/path/to/file', name='custom_name.pdf', directory='Uploads')

# You may also pass a filelike object or bytes. "name" is required when doing so.
project.create_pdf_source(file=open('myfile.pdf', 'r+b'), name='myfile.pdf')

Creating a new pdf source from a web source:

project = lab.find_project('My New Project')
project.create_pdf_source_from_web(url='https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf', name='mypdf.pdf')

Adding annotations:

project.create_annotations(
    source_name='New Source',
    annotations=[
        {'type': 'one', 'value': 'value one', 'offsets': [0, 10]},
        {'type': 'two', 'value': 'two', 'offsets': [10, 20]},
    ],
)

Adding annotations with relations:

project.create_annotations(
    source_name='New Source',
    annotations=[
        {'clientId': 1, 'type': 'one', 'value': 'value one', 'offsets': [0, 10]},
        {'clientId': 2, 'type': 'two', 'value': 'two', 'offsets': [10, 20]},
    ],
    relations=[{'annotations': [1, 2]}],
)

Exporting a project:

project.export(filepath='/path/to/outfile.zip')

# With options
project.export(
    filepath='/path/to/outfile.zip',
    source_ids=[1, 2, 3],
    layers=['GoldSet'],
    include_annotation_types=True,
    include_sources=True,
)
|
annolid
|
No description available on PyPI.
|
annonex2embl
|
annonex2embl

Converts an annotated DNA multi-sequence alignment (in NEXUS format) to an EMBL flatfile for submission to ENA via the Webin-CLI submission tool.

INSTALLATION

To get the most recent stable version of annonex2embl, run:

pip install annonex2embl

Or, alternatively, if you want to get the latest development version of annonex2embl, run:

pip install git+https://github.com/michaelgruenstaeudl/annonex2embl.git

INPUT, OUTPUT AND PREREQUISITES

Input: an annotated DNA multiple sequence alignment in NEXUS format, and a comma-delimited (CSV) metadata table
Output: a submission-ready, multi-record EMBL flatfile

Requirements / Input preparation

The annotations of a NEXUS file are specified via a SETS-block, which is located beneath a DATA-block and defines sets of characters in the DNA alignment. In such a SETS-block, every gene and every exon charset must be accompanied by one CDS charset. Other charsets can be defined unaccompanied.

Example of a complete SETS-BLOCK

BEGIN SETS;
CHARSET matK_gene_forward = 929-2530;
CHARSET matK_CDS_forward = 929-2530;
CHARSET trnK_intron_forward = 1-928 2531-2813;
END;

Examples of corresponding DESCR variable

DESCR="tRNA-Lys (trnK) intron, partial sequence; maturase K (matK) gene, complete sequence"

EXAMPLE USAGE

cd into the annonex2embl package, then ...

On Linux / MacOS

SCRPT=$PWD/scripts/annonex2embl_launcher_CLI.py
INPUT=$PWD/examples/input/TestData1.nex
METAD=$PWD/examples/input/Metadata.csv
mkdir -p $PWD/examples/temp/
OTPUT=$PWD/examples/temp/TestData1.embl
DESCR='description of alignment here' # Do not use double-quotes
[email protected]
AUTHR='your name here' # Do not use double-quotes
MNFTS=PRJEB00000
MNFTD=${DESCR//[^[:alnum:]]/_}
python3 $SCRPT -n $INPUT -c $METAD -d "$DESCR" -e $EMAIL -a "$AUTHR" -o $OTPUT --qualifiername "note" --productlookup --manifeststudy $MNFTS --manifestdescr $MNFTD --compress

On Windows

SET SCRPT=$PWD\scripts\annonex2embl_launcher_CLI.py
SET INPUT=$PWD\examples\input\TestData1.nex
SET METAD=$PWD\examples\input\Metadata.csv
mkdir $PWD\examples\temp\
SET OTPUT=$PWD\examples\temp\TestData1.embl
SET DESCR='description of alignment here'
SET [email protected]
SET AUTHR='your name here'
SET MNFTS=PRJEB00000
SET MNFTD=a_unique_description_here
python %SCRPT% -n %INPUT% -c %METAD% -d %DESCR% -e %EMAIL% -a %AUTHR% -o %OTPUT% --productlookup --manifeststudy %MNFTS% --manifestdescr %MNFTD% --compress

CHANGELOG

See CHANGELOG.md for a list of recent changes to the software.
|
annonymGenPro
|
No description available on PyPI.
|
annoPipeline
|
annoPipeline - an API-enabled gene annotation pipeline

annoPipeline uses APIs from mygene.info and Entrez esummary to annotate a user-provided list of gene symbols. It generates a pandas DataFrame with gene symbol, gene name, EntrezID, and bibliographic info for up to 5 PubMed publications where a functional reference was provided (more about functional references at GeneRIF).

Designed to be useful for tasks such as:

- identifying relevant publications for a given function
- analyzing publication trends for genes belonging to a common pathway
- identifying influential PIs for a given gene network.

Requirements:

Written for use with Python 3.7, not tested on other versions.

annoPipeline requires:

- numpy >= 1.16.2
- pandas >= 0.24.2
- Biopython >= 1.73
- openpyxl >= 2.6.1
- requests >= 2.21.0

To Install:

pip install annoPipeline

Or clone the repo from github. Then, in the annoPipeline directory, run:

python setup.py install

Required dependencies will be installed if missing; this may take a few seconds.

Example usage:

Execute the full annotation pipeline on a list of gene symbols like this:

import annoPipeline as ap

# define a list of genes you would like annotated
geneList = ['CDK2', 'FGFR1', 'SLC6A4']

# annoPipeline will execute the full annotation pipeline (see individual functions below).
df = ap.annoPipeline(geneList)  # returns pandas df with annotations for gene and bibliographic info.

ap.annoPipeline will by default save the annotation output to an Excel file named by the geneList symbols separated by '_'.

Warning! If querying a single gene, still pass it as a list. For example:

import annoPipeline as ap
df = ap.annoPipeline(['CDK2'])  # for single gene queries still include [] - will be fixed in a later version

v0.0.1 Functionality

Task 1:

- From the MyGeneInfo API, use the "Gene query service" GET method to return details on a given list of human gene symbols.
- From the returned json, parse out the "name", "symbol" and "entrezgene" values and print to screen.

Use queryGenes():

import annoPipeline as ap

geneList = ['CDK2', 'FGFR1', 'SLC6A4']
l1 = ap.queryGenes(geneList)  # returns list of dicts where keys are default mygene fields (symbol, name, taxid, entrezgene, ensemblgene)

Task 2:

- Using the appropriate identifier from the above result, send a query to the MyGeneInfo "Gene annotation services" method for each gene.
- From the resulting json, collate up to 5 generif descriptions per gene.
- Write the results to an Excel spreadsheet with columns: gene_symbol, gene_name, entrez_id, generifs.

Use getAnno():

import annoPipeline as ap

geneList = ['CDK2', 'FGFR1', 'SLC6A4']
l1 = ap.queryGenes(geneList)
l2 = ap.getAnno(l1, saveExcel=True)  # saveExcel defaults False

getAnno() returns a pandas df with genes and up to 5 generifs from mygene.info. saveExcel defaults to False; to save the output to Excel you must pass True. If True, the Excel file will be named by the geneList symbols separated by '_'.

Task 3:

- Use the PubMed IDs associated with the above generif content to extract additional bibliographic information.

Use addBibs():

import annoPipeline as ap

geneList = ['CDK2', 'FGFR1', 'SLC6A4']
l1 = ap.queryGenes(geneList)
l2 = ap.getAnno(l1)
l3 = ap.addBibs(l2)  # will return df with genes and up to 5 generifs from mygene.info

Currently returns the following bibliographic information when available:

- PubDate
- Source
- Title
- LastAuthor
- DOI
- PmcRefCount
|
annopro
|
AnnoPRO

AnnoPRO generation

- Step 1: input protein sequences
- Step 2: feature extraction by Profeat
- Step 3: feature pairwise distance calculation --> cosine, correlation, jaccard
- Step 4: feature 2D embedding --> umap, tsne, mds
- Step 5: feature grid arrangement --> grid, scatter
- Step 6: transform --> minmax, standard

AnnoPRO architecture

- Encoding layers: protein features are learned by CNNs and protein similarity is learned by FCs.
- Decoding layers: LSTMs

Installation

install compilers

The dependency lapjv requires g++ or another C++ compiler, and annopro contains a fortran extension module and requires gfortran or another fortran compiler. Here is an example of installing them on Ubuntu.

sudo apt install gcc g++ gfortran

# or you can install them with conda in your virtual env
# the command names are like
# gcc: x86_64-conda_cos6-linux-gnu-cc
# g++: x86_64-conda_cos6-linux-gnu-c++
# gfortran: x86_64-conda_cos6-linux-gnu-gfortran
conda install gcc_linux-64 gxx_linux-64 gfortran_linux-64

install annopro

You can install it directly by

pip install annopro

or install from source code with the following steps. Note that you should install numpy first if you install from source code, because numpy.f2py is needed to build the fortran extension submodule.
But you should install numpy first if you install it from source code because we neednumpy.f2pyto help us build fortran extension submodule.gitclonehttps://github.com/idrblab/AnnoPRO.gitcdAnnoPRO
condacreate-nannopropython=3.8
condaactivateannopro
pipinstall.UsageUse it as a terminal command. For all parameters, typeannopro -h.annopro-itest_proteins.fasta-ooutputUse it as a python executable packagepython-mannopro-itest_proteins.fasta-ooutputUse it as a library to integrated with your project.fromannoproimportmainmain("test_proteins.fasta","output")The result is displayed in the./output/bp(cc,mf)_result.csv.Notice: if you use annopro for the first time, annopro will
automatically download required resources when they are used
(lazy download mechanism).

Possible problems

1. pip is looking at multiple versions of XXX to determine which version is compatible with other requirements. This could take a while.

Your pip is too new; go back to an old version such as 20.2, or just add the --use-deprecated=legacy-resolver parameter.

2. Argument mismatch when building the source code.

Because your gfortran is too new and incompatible,
edit setup.py and uncomment -fallow-argument-mismatch, or
just use an earlier version of gfortran such as 4.8.5 or 8.4.

Contact

If you have any questions, please create an issue on this repo; we will deal with it as soon as possible.
|
annopyte
|
Overview

This is designed to add and process annotations on python objects, a la Java Annotations.

Annotations are just metadata: something that is not directly used by the code itself, but can be queried by other parts of the program. Python itself doesn't need special low-level implementations in order to support metadata; this library's target is just to provide a standard way of setting and querying metadata from python objects.

Python 3 actually contains a basic version of metadata, but it's limited to the arguments and return values of functions; this library is designed to extend such support to other objects, so you can annotate a class, a function or any object with any kind of data.

Current status

Currently contains a PEP-3107 compatible signature annotation implementation for Python 2.x.

Example code

Basic function annotation:

>>> from annopyte.annotations.signature import annotate_f
>>> @annotate_f("return_value_annotation", param1="asd", param2="fgh")
... def myfunc(param1, param2=None):
...     pass
...
>>> print myfunc.__annotations__
{'return': 'return_value_annotation', 'param2': 'fgh', 'param1': 'asd'}
>>>

Prospective code (to be done)

That's the basic idea I'd like to implement for metadata usage:

class Author(Annotation):
    name = "default"

@Author(name="John Doe")
class MyClass(object):
    pass
>>> query_for_metadata("mypackage", Author, name="John Doe")
[<class 'mypackage.subpackage.MyClass'>]
>>>That’s it.Homepagehttp://annopyte.franzoni.euSupport and [email protected]://groups.google.com/d/forum/pydenji-usersContact meAlan Franzoni <usernameatfranzoni.eu> (please note: write LITERALLY username in the email address!)
|
annorepo-client
|
annorepo-client

A Python client for accessing an annorepo server.

installing

using poetry:

poetry add annorepo-client

using pip:

pip install annorepo-client

USAGE | CONTRIBUTING | LICENSE | AUTHORS | MAINTAINERS | CONTRIBUTORS
|
annorest
|
conda create --name dataprocess python "mamba>=0.22.1"
conda activate dataprocess
mamba install ipywidgets
pip install opencc
mamba install pymongo
pip install -e .
pip install twine
python setup.py sdist bdist_wheel
pip install wheel
|
annosine2
|
AnnoSINE_v2 is a SINE annotation tool for plant/animal genomes. The program is designed to generate high-quality non-redundant SINE libraries for genome annotation.
|
annosSQL
|
annosSQL

Introduction

annosSQL is a Python library for interacting with MySQL, Oracle, and SQLite databases, built as a wrapper around libraries such as pymysql; its usage is somewhat unusual. annosSQL implements a caching scheme (based on an LRU algorithm): recent queries with identical SQL are
served from the cache first, and the database is queried again only when the data is not present in the cache.

Installation

pip install annosSQL

Usage

Example:

from annosSQL.Innos.Interface import Interface, Handler
from annosSQL.Donos.doconn import Connection
from annosSQL.Donos.dosql import execute

@Interface()
class T4:

    # Configure the handler; this is the entry point and is required
    @Handler()
    def hand(self) -> list: pass

    # Configure the connection pool
    @Connection(driver="mysql", host="127.0.0.1", user="root", password="123456", port=3306, database="czh")
    def conn(self) -> any: pass

    '''
    Query all rows. The p1 method must not take any parameters; it must be an empty function.
    The type after the '->' arrow is the return type: 'list', 'dict' or 'tuple' all work.
    Here a list is returned.
    '''
    @execute(sql="select * from user_copy1")
    def p1(self) -> list: pass

    @execute(sql="select * from user_copy1")
    def p2(self) -> dict: pass

    @execute(sql="select * from user_copy1")
    def p3(self) -> tuple: pass

    '''
    Placeholders: {} / #{} / %s. For conditional queries, the function parameters must be typed;
    e.g. the id in the s1 method must be written as id: int.
    '''
    # The {} placeholder is implemented via str().format(); the position of the function
    # parameters matters here, since it determines the correctness of the SQL condition.
    @execute(sql="select * from user_copy1 where id={}")
    def s1(self, id: int) -> list: pass

    # The basic form of the #{} placeholder is #{1}; the number inside the braces
    # binds the placeholder to the corresponding function parameter.
    @execute(sql="select * from user_copy1 where id=#{1}")
    def s2(self, id: int) -> dict: pass

    # %s is a multi-row placeholder; supported multi-row data types: lists and tuples
    @execute(sql="select * from user_copy1 where id=%s")
    def s3(self, id: int) -> tuple: pass

    '''Shortcut usage: A(select)'''
    @execute(sql="A(select) user_copy1")
    def a1(self) -> tuple: pass

if __name__ == '__main__':
    t4 = T4()
    t4.hand()  # invoke the entry point
    print('p1返回的数据:')
    p1 = t4.p1()
    print(p1)
    print('p2返回的数据:')
    p2 = t4.p2()
    print(p2)
    print('p3返回的数据:')
    p3 = t4.p3()
    print(p3)
    print('s1返回的数据:')
    s1 = t4.s1(1)
    print(s1)
    print('s2返回的数据:')
    s2 = t4.s2(1)
    print(s2)
    print('s3返回的数据:')
    s3 = t4.s2(1)
    print(s3)
    print('a1返回的数据:')
    a1 = t4.a1()
    print(a1)

Output

p1返回的数据:
[[1, 'chx', '123455', '1'], [2, 'czh', '123456', '2']]
p2返回的数据:
[{'id': 1, 'user': 'chx', 'password': '123455', 'z': '1'}, {'id': 2, 'user': 'czh', 'password': '123456', 'z': '2'}]
p3返回的数据:
((1, 'chx', '123455', '1'), (2, 'czh', '123456', '2'))
s1返回的数据:
[[1, 'chx', '123455', '1']]
s2返回的数据:
[{'id': 1, 'user': 'chx', 'password': '123455', 'z': '1'}]
s3返回的数据:
[{'id': 1, 'user': 'chx', 'password': '123455', 'z': '1'}]
a1返回的数据:
((1, 'chx', '123455', '1'), (2, 'czh', '123456', '2'))xxxx其他说明#{} %s {}
# #{}、 {} 普通占位符
# %s 多行占位符 支持数据多行数据类型:列表与元组
# select * from -> A(select) user
# insert into table values(S{,}[])
# insert into table values(C{})
# insert into table values(L{})
# insert into table values(T{%s,%s})
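As one more illustration of the positional #{} binding described above, here is a hypothetical extra method you could add inside the T4 class; it is a sketch based only on the placeholder rules in this README, and the user column name is made up:

    # Hypothetical: #{1} binds to the first parameter (id), #{2} to the second (user).
    @execute(sql="select * from user_copy1 where id=#{1} and user=#{2}")
    def s4(self, id: int, user: str) -> list: pass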
|
annot
|
annot

Annotate data analysis

Command line usage

Start in a directory with some data:

ls
faa-wildlife.csv

To create an annot project:

annot
No .annot directory here. Want to create one? [y]/n: y
Created .annot
Created .annot/index.rst
Created .annot/index.html
Created .annot/DataTables/
Created .annot/data
Created .annot/temp

To open the annot in a web browser:

annot open

To add a dataset to the page:

annot add my.csv

To add comments to the page:

annot markdown '# Background'
annot markdown 'This csv has my data'
or edit .annot/index.rst in a text editor, then run:

annot render

To add content to the page from Jupyter:

import annot
page = annot.Page('path/to/.annot')
page.add_dataframe(df)
page.add_plot(matplotlib.pyplot.gca())
page.add_markdown('# Results')
page.add_html('')

Use from Visidata

To save visidata sheets directly to the page:

Using vdannot:

Setup:
PYTHONPATH=dir/with/annot.py
export PYTHONPATH
~/.visidata/vdannot.py # exists
~/.visidatarc contains 'import vdannot'
When you have a sheet to save:
SPACE annot-csv ENTER my_freq_table ENTER

Without using vdannot:

Start this long-running process:
> annot monitor
In a different shell, open visidata. When you create a new sheet you
want to save, such as after creating a frequency table, save the
sheet as csv in the .annot/data directory
- Ctrl-S
- Path: .annot/data/my_freq_table.csv

To add a markdown comment:

SPACE annot-markdown ENTER This is interesting data ENTER
|
annotald
|
Annotald========Annotald is a program for annotating parsed corpora in the Penn Treebankformat. For more information on the format (as instantiated by the PennParsed Corpora of Historical English), see `the documentation byBeatrice Santorini`_. Annotald was originally written by `AntonIngason`_ as part of the `Icelandic Parsed Historical Corpus`_ project.It is currently being developed by him along with `Jana Beck`_ and`Aaron Ecay`_... _the documentation by Beatrice Santorini:http://www.ling.upenn.edu/hist-corpora/annotation/intro.htm#parsed_files.. _Anton Ingason: http://linguist.is/.. _Icelandic Parsed Historical Corpus:http://linguist.is/icelandic_treebank/Icelandic_Parsed_Historical_Corpus_(IcePaHC).. _Jana Beck: http://www.ling.upenn.edu/~janabeck/.. _Aaron Ecay: http://www.ling.upenn.edu/~ecay/Obtaining Annotald------------------The central location for Annotald development is `on Github`_. You canview or download the program's source code from there. The latestrelease is available as a `Python package`_. Install it with thecommand ``pip install annotald`` . (Further information aboutinstallation is available in the user’s manual.).. _on Github: https://github.com/Annotald/annotald.. _Python package: https://pypi.python.org/pypi/annotaldUsing Annotald--------------The `Annotald user’s manual`_ can be found online. For developers,there is also `automatically generated API documentation`_... _Annotald user’s manual: http://annotald.github.com/user.html.. _automatically generated API documentation:http://annotald.github.com/api-doc/global.htmlLicense-------Annotald is available under the terms of the GNU General Public License(GPL) version 3 or (at your option) any later version. Please see the``LICENSE`` file included with the source code for more information.Funding Sources---------------Annotald development has been funded by the following funding sources:- Icelandic Research Fund (RANNÍS), grant #090662011: “Viable LanguageTechnology beyond English – Icelandic as a Test Case”- The research funds of `Anthony Kroch`_ at the University ofPennsylvania... _Anthony Kroch: http://www.ling.upenn.edu/~kroch/News====Release 1.X.X-------------This release adds the following new features:- Popup choice menus (TODO: link doc)- Node relationships (TODO: link doc)- For developers: the setMetadata function and undo hooksRelease 1.3.10--------------This release fixes the follwing bugs:- Annotald would not start if certain variables were not set in thesettings.js file.- The corpus text would erroneously be regarding as having been changed(on save) if the metadata was changed.Release 1.3.9-------------This release fixes further bugs relating to the installation of NLTK. Apatched version of NLTK is now used to avoid such occurrences.Release 1.3.8-------------This release fixes a bug which caused errors on installation related toan old version of NLTK. It also fixed an issue with the display of CODEnodes in the text preview.Release 1.3.7-------------This release fixes one bug with the displayRename function (L key).Release 1.3.6-------------This release fixes two silly bugs in the last one. The pattern that isunintentionally developing is that every other release is an immediatehotfix! 
:-/Release 1.3.5-------------This release fixes 5 issues:- Movement keys (arrows and page up/down) are no longer counted asinterrupting sequences of mouse clicks.- guessLeafNode properly treats \*T* and \*ICH* as non-leafs- addConLeafAfter is now added- The urtext window is a little smarter- leafAfter inserts (VB *) by default nowRelease 1.3.4-------------This release really fixes the bug supposedly fixed by 1.3.3.Release 1.3.3-------------This release fixes one bug.- Fix error in hash-trees. Thanks to Ariel for reporting.Release 1.3.2-------------This release fixes one bug.- Fix bug whereby undoing the deletion of a root level node(e.g. IP-MAT) could do weird and nasty things. Thanks to Ariel forreporting.Release 1.3.1-------------This release fixes one bug.- Fix bug whereby a newer, incompatible version of the NLTK librarycould be installed with Annotald.Release 1.3-----------A release with bug and documentation fixes. Thanks to Ariel Diertani,Sandra, and Catarina for testing and feedback.- Fix (?) unexplained crashes- Fix errors in the interaction between the context menu and undofeatures- Document a bug in CherryPy installation that affects some usersRelease 1.2.1-------------A release with some bug fixes and new features. Thanks to ArielDiertani for testing and feature ideas.New feature:- Add display of the text and token ID of the selected nodeBug fixes:- Don’t add the index of a node to context menu label suggestions, ifthe index is contained on the node’s text (as in the case of a trace)- Fix a corner case in the shouldIndexLeaf function dealing with * emptycategories (not \*X* type traces).Release 1.2-----------A release with some bug fixes and new features.New features:- Node collapsing is added, bound to Shift+C by default. Users with acustom `settings.js` file should add a line such as the following toenable this functionality: `addCommand({ keycode: 67, shift: true },toggleCollapsed);`- Long `CODE` nodes are now truncated with an ellipse by default. Thischange could be applied to all nodes if there is user demand.- Server mode is added. By default, Annotald displays a page askingwhether a user really intends to edit a file, to avoid confusion inmulti-user environments. To turn off this prompt, users may eithernavigate to `http://localhsot:port/<username>` directly, or use avariable in `settings.py` to disable the prompt. Consult the user’smanual for detailsBug fixes:- Disallow saving while editing of a node label (as a textbox) is inprogress- Allow using the mouse to select text in a node label editing textboxRelease 1.1.4-------------A single-bugfix release:- Fix a bug which could prevent the saving of trees on exitRelease 1.1.3-------------A release with some minor fixes. Changes:- Previously, Annotald would reindent the .psd file on every save. Thisproved to be slow for large files. Now Annotald reindents the file onexit (only). This means users **ought to** use the exit button in theAnnotald browser UI to exit, and not kill Annotald in the terminal.It is also possible to use the reindent auxiliary command to reindenta file of trees- The `annotald-aux` command was extended with `cat-settings-js` and`cat-settings-py` commands, which write the contents of the defaultJavascript and Python settings files to standard output (whence theymay be piped into a file and further edited.- The `annotald-aux` command also was extended with the `reindent`command, which takes a .psd file as an argument and reindents it.- It is no longer possible to move empty nodes (traces, comments,etc.). 
It remains possible to move a non-terminal dominating only anempty node(s), so if you must move an empty node create a dummy XP asa “handle” to use for grabbing on.- Deleting a trace now deletes the numeric index from its antecedent, ifthe antecedent is now the only node to bear that index. (If there isanother coindexed trace besides the one deleted, the index willsurvive.)- The search features were improved, especially incremental search.Thanks to Beatrice and Tony for problem reports and discussion.Release 1.1.2-------------A bugfix release. Changes:- Fix overapplication of case in context menu. (Thanks to Joel forreport)- Fix crash when time log db is corrupt. (Thanks to Sandra for report)- Fixes in formatting of documentation. (Thanks to Beatrice for report)- Various code cleanups.Release 1.1.1-------------A hotfix release. Changes:- Fix the height of the context menu (thanks to Jana for reporting)- Fix the interaction of the context menu and case tags. Case is nowfactored out of context menu calculations, just like numerical indices(thanks to Joel for reporting)- Fix calculation of the set of alternatives for the context menu(thanks to Joel for reporting)The user’s manual also acquired an improved section on installation andremote access.Release 1.1-----------Changes:- Annotald is now tested on Python 2.6+ and 3.3+. Annotald officiallysupports (only) these versions of Python- Annotald is now distributed through PyPI, the official python packagearchive- Many bugs fixedRelease 1.0-----------This is the first release since 12.03. The version numbering scheme haschanged.Significant changes in this version:- A user’s manual was written- Significant under-the-hood changes to allow the editing of large filesin Annotald without overly taxing the system CPU or RAM- A structural search feature was added- The case-related functions in the context menu were made portable- A comprehensive time-logging facility was added- The facility to display only a certain number of trees, instead of awhole file at once, was added- A metadata editor for working with the deep format was added (theremaining support for this format remains unimplemented)- A python settings file was added, in addition to the javascriptsettings file- The facility to add custom CSS rules via a file was added- Significant changes of interest to developers:- A developer’s manual was written- Test suites for javascript and python code were addedRelease 12.03-------------This is the first release since 11.12.Potentially backwards-incompatible changes:- The handling of dash tags has been overhauled. Annotald now hasthree separate lists of allowable dash tags: one list for dash tagson word-level labels, one for dash tags on clausal nodes (IP and CP),and one for dash tags on non-clausal non-leaf nodes. Refer to thesettings.js file distributed with Annotald to see how to configurethese options.- Annotald is now licensed under the GPL, version 3 or higher.Other changes:- Added support for validation queries. Use the command-line option -v<path> to the annotald script to specify a validation script. Click the“Validate” button in the annotald interface to invoke the script. Thescript should read trees on standard input, and write (possibly modified)trees to standard output. The output of the script will replace thecontent of the annotald page. By convention, the script should add thedash tag -FLAG to nodes that are considered errors. The “next error”button will scroll the document to the next occurrence of FLAG. 
ThefixError function is available for user keybindings, and removes the-FLAG from the selected node. The -FLAG tag is automatically removed byAnnotald on save.NOTE: the specifics of this interface are expected to change in futureversions.- Added a comment editor. Press ‘l’ with a comment selected to pop up atext box to edit the text of the comment. Spaces in the original textare converted to underscores in the tree representation. A comment isdefined as a CODE node whose text is enclosed in curly braces {}, andthe first part of the text inside the braces is one of “COM:”,“TODO:”, or “MAN:”. The three types of comment can be toggledbetween, using the buttons at the bottom left of the dialog box.- Added time-logging support. Annotald will write a “timelog.txt” filein the working directory, with information about when the program isstarted/stopped/the file is saved. Jana Beck’s (as yet unreleased)CorpusReader tool can be used to calculate parsing time andwords-per-hour statistics.- Added a facility to edit CorpusSearch .out files. These files haveextraneous comments added by CS. Give the -o command-line flag to theannotald program, and the comments will be removed so that Annotaldcan successfully parse the trees.- Annotald successfully runs on systems which have Python 3 as the“python” command. This relies on the existence of Python 2.x as the“python2” command.- Added support for clitic traces. When creating a movement trace withthe leafBefore and leafAfter functions, if the original phrase has thedash tag -CL, the trace inserted will be ``*CL*``.- Annotald now colors IP-level nodes and the topmost “document” nodedifferently.- Bug fixes.Release 11.12-------------Changes:- Various bugs fixed- Support for ID and METADATA nodes, as sisters of the clause root.(Currently, nodes other than ID and METADATA will not work.)- Change how the coloring is applied to clause roots. CallstyleIpNodes() in settings.js after setting the ipnodes variable.- Add mechanism to hide certain tags from view; see settings.js fordetails.- Added mousewheel support; use shift+wheel-up/-down to move through thetree, sisterwise- Limit undo history to 15 steps. This limits how much memory is usedby Annotald, which could be very high.- Allow (optional) specification of port on the commandline:annotald -p <number> <optional settings file> <.psd file>This allows multiple instances of Annotald ot be running at once (eachon a different port)Release 11.11-------------Changes:- Proper Unicode support on OS X and Linux- Remove dependency on a particular charset in parsed files- Code cleanup (see hacking.txt for instructions/style guide)- Add support for lemmata in (POS word-lemma) format- Speed up the moving of nodes in some cases- Add a notification message when save completes successfully- Add an “exit” button, which kills the Annotald server and closes thebrowser window. Exit will fail if there are unsaved changes- Change behavior of mouse click selection. Previously, the followingbehavior was extant:1) Click a node2) Change the node’s label with a keybaord command3) Click another node to select itResult: the just-clicked node is made the selection endpointThis can be surprising. Now, in order to make a secondary selection,the two mouseclicks must immediately follow each other, without anyintervening keystrokes.- Allow context-sensitive label switching commands. See the includedsettings.js file for an example- (Experimental) Add a CSS class to each node in the tree correspondingto its syntactic label. 
This facilitates the specification ofadditional CSS rules (for an example, see the settings file)- Keybindings can now be specified with control and shift modifier keys(though not both together). The second argument (action to be taken)for a binding can now be an arbitrary javascript function; the thirdargument is the argument (singular for now) to be passed to thefunction.IcePaHC version---------------Initial version
|
annotate
|
annotateFunction annotations.Overview@annotateDecorator to set a function’s __annotations__ like Py3.https://www.python.org/dev/peps/pep-3107/PyPI record.Documentation.ExamplesfromtypingimportOptional,Tuple,Union,Sequencefromannotateimportannotatefrom.libimportcachedfrom.importjnifrom.jobjectbaseimportJObjectBasefrom.jclassimportJClassfrom.jobjectimportJObjectclassJArray(JObjectBase):"""Java Array"""@classmethod@annotate('JArray',size=Union[int,long])defnewBooleanArray(cls,size):......@classmethod@annotate('JArray',size=Union[int,long])defnewDoubleArray(cls,size):...@classmethod@annotate('JArray',size=Union[int,long])defnewStringArray(cls,size):...@classmethod@annotate('JArray',size=Union[int,long],component_class=JClass)defnewObjectArray(cls,size,component_class):...@annotate(jenv=jni.JNIEnv,jarr=jni.jarray,borrowed=bool)def__init__(self,jenv,jarr,borrowed=False):...def__hash__(self):returnsuper(JArray,self).__hash__()def__len__(self):returnself.getLength()@annotate(JObject,borrowed=bool)defasObject(self,borrowed=False):...@cached@annotate(int)defgetLength(self):...@annotate(bool,idx=int)defgetBoolean(self,idx):......@annotate(float,idx=int)defgetDouble(self,idx):...@annotate(Optional[str],idx=int)defgetString(self,idx):...@annotate(Optional[JObject],idx=int)defgetObject(self,idx):...@annotate(idx=int,val=bool)defsetBoolean(self,idx,val):...@annotate(idx=int,val=str)defsetChar(self,idx,val):......@annotate(idx=int,val=Union[int,long])defsetLong(self,idx,val):...@annotate(idx=int,val=float)defsetDouble(self,idx,val):...@annotate(idx=int,val=Optional[str])defsetString(self,idx,val):...@annotate(idx=int,val=Optional[JObject])defsetObject(self,idx,val):...@annotate('JArray',start=int,stop=int,step=int)defgetBooleanSlice(self,start,stop,step):......@annotate('JArray',start=int,stop=int,step=int)defgetDoubleSlice(self,start,stop,step):...@annotate('JArray',start=int,stop=int,step=int)defgetStringSlice(self,start,stop,step):...@annotate('JArray',start=int,stop=int,step=int)defgetObjectSlice(self,start,stop,step):...@annotate(start=int,stop=int,step=int,val=Sequence[bool])defsetBooleanSlice(self,start,stop,step,val):...@annotate(start=int,stop=int,step=int,val=Union[Sequence[str],str])defsetCharSlice(self,start,stop,step,val):...@annotate(start=int,stop=int,step=int,val=Union[Sequence[Union[int,bytes]],(bytes,bytearray)])defsetByteSlice(self,start,stop,step,val):......@annotate(start=int,stop=int,step=int,val=Sequence[float])defsetDoubleSlice(self,start,stop,step,val):...@annotate(start=int,stop=int,step=int,val=Sequence[Optional[str]])defsetStringSlice(self,start,stop,step,val):...@annotate(start=int,stop=int,step=int,val=Sequence[Optional[JObject]])defsetObjectSlice(self,start,stop,step,val):...@annotate(Tuple)defgetBooleanBuffer(self):withself.jvmas(jvm,jenv):is_copy=jni.jboolean()returnjenv.GetBooleanArrayElements(self._jobj,is_copy),jni.sizeof(jni.jboolean),b"B",is_copy...@annotate(Tuple)defgetDoubleBuffer(self):withself.jvmas(jvm,jenv):is_copy=jni.jboolean()returnjenv.GetDoubleArrayElements(self._jobj,is_copy),jni.sizeof(jni.jdouble),b"d",is_copy@annotate(buf=object)defreleaseBooleanBuffer(self,buf):withself.jvmas(jvm,jenv):jenv.ReleaseBooleanArrayElements(self._jobj,jni.cast(buf,jni.POINTER(jni.jboolean)))...@annotate(buf=object)defreleaseDoubleBuffer(self,buf):withself.jvmas(jvm,jenv):jenv.ReleaseDoubleArrayElements(self._jobj,jni.cast(buf,jni.POINTER(jni.jdouble)))InstallationPrerequisites:Python 3.7 or higherhttps://www.python.org/3.7 is a primary test environment.pip and 
setuptoolshttps://pypi.org/project/pip/https://pypi.org/project/setuptools/To install run:python -m pip install --upgrade annotateDevelopmentPrerequisites:Development is strictly based ontox. To install it run:python -m pip install --upgrade toxVisitdevelopment page.Installation from sources:clone the sources:git clonehttps://github.com/karpierz/annotate.gitannotateand run:python -m pip install ./annotateor on development mode:python -m pip install --editable ./annotateLicenseCopyright (c) 2012-2022 Adam KarpierzLicensed under the zlib/libpng Licensehttps://opensource.org/licenses/ZlibPlease refer to the accompanying LICENSE file.AuthorsAdam Karpierz <[email protected]>Changelog1.1.19 (2022-10-18)Tox configuration has been moved to pyproject.toml1.0.18 (2022-08-22)Setup update.1.0.17 (2022-07-24)Add support for Python 3.10 and 3.11Setup update (currently based mainly on pyproject.toml).1.0.16 (2022-01-10)Drop support for Python 2, 3.5 and 3.6.Copyright year update.Setup update.1.0.15 (2020-10-18)Setup: fix an improper dependencies versions.Setup general update and cleanup.Fixed docs setup.1.0.8 (2019-05-21)Update required setuptools version.Setup update and improvements.1.0.7 (2018-11-08)Drop support for Python 2.6 and 3.0-3.3Update required setuptools version.1.0.6 (2018-05-08)Update required setuptools version.Improve and simplify setup and packaging.1.0.5 (2018-02-26)Improve and simplify setup and packaging.1.0.4 (2018-01-28)Fix a bug and inconsistencies in tox.iniUpdate of README.rst.1.0.1 (2018-01-24)Update required Sphinx version.Update doc Sphinx configuration files.1.0.0 (2017-11-18)Setup improvements.Other minor improvements.0.7.4 (2017-01-05)Minor setup improvements.0.7.3 (2016-09-25)Fix bug in setup.py0.7.1 (2016-09-25)More PEP8 compliant0.6.7 (2016-09-24)Minor description suplement0.6.4 (2016-09-23)Simplify package structure.0.6.3 (2016-06-19)Fix incompatibility for older versions of setuptools.Add example.0.6.0 (2015-08-17)Python3 support.0.5.1 (2015-02-27)Remove ‘returns’ as keyword argument for declare return type.For now, the type of returned value should be declared by thefirst positional argument.0.3.3 (2014-09-15)Add wheels.0.3.2 (2014-09-13)Standarize package.0.3.0 (2014-09-06)Standarize package.Cosmetic changes.0.2.6 (2014-06-10)Portable setup.py.0.2.5 (2014-06-10)Cosmetic changes.0.2.3 (2012-10-13)Initial release.
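A minimal self-contained use of the decorator, following the Examples section and the 0.5.1 changelog note above; the function itself is made up for illustration:

from annotate import annotate

# The first positional argument declares the return type (per changelog 0.5.1);
# keyword arguments annotate the parameters.
@annotate(bool, x=int, y=int)
def is_even_sum(x, y):
    return (x + y) % 2 == 0

print(is_even_sum.__annotations__)
# e.g. {'return': <class 'bool'>, 'x': <class 'int'>, 'y': <class 'int'>} (ordering may vary)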
|
annotatec
|
annotatec helps you create Python packages with C embeddings.

Imagine you're developing a C library and already have more than 50 functions. Sometimes you change signatures of old functions or rename them. It'll be a headache to write and support all the ctypes declarations for the Python wrapper. annotatec will create all Python objects for you. All you need is to provide declarations, which can be placed directly into your C code.

Minimal livable example

You have some library lib.c and its header lib.h. These files were compiled into lib.so (or lib.dll on Windows).

lib.c source:

#include "lib.h"

int sum(int a, short b, long long c) {
    return a + b + c;
}

lib.h source:

/* @function sum
 * @return int
 * @argument int
 * @argument short
 * @argument longlong
 */
int sum(int, short, long long);

Here's a Python wrapper:

import annotatec

libc = annotatec.Loader(library="lib.so", headers=["lib.h"])

That's all! Now you can test it like so:

>>> libc.sum(1, 2, 3)
<<< 6

Reference

Read the detailed reference in the wiki pages.
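For comparison, here is roughly what annotatec saves you from writing by hand with plain ctypes. This is a sketch, not annotatec's internals, and it assumes lib.so sits in the current directory:

import ctypes

# Manual equivalent of the declarations annotatec reads from lib.h.
lib = ctypes.CDLL("./lib.so")
lib.sum.argtypes = [ctypes.c_int, ctypes.c_short, ctypes.c_longlong]
lib.sum.restype = ctypes.c_int

assert lib.sum(1, 2, 3) == 6

With one function this is trivial; with 50 functions whose signatures keep changing, keeping these declarations in sync by hand is the maintenance burden the package describes.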
|
annotated
|
Annotated
=========

Annotated provides a decorator that flags a function's annotations as useful, callable expressions. Each annotation will be called with its corresponding argument as first parameter, and the result will replace that argument.

If no annotation was specified for this particular argument, it will behave as if `lambda x: x` had been used as annotation.

`@annotated` Decorator
----------------------

The `@annotated` decorator is meant to decorate functions, or other objects with a `__code__` attribute (a class is **not** one). It indicates that the decorated function has "active" annotations, for example:

```python
from annotated import annotated

@annotated
def hello(name: str):
    print('Hello, ' + name + '!')

hello('world')
# "Hello, world!"
hello(None)
# "Hello, None!"
```

Albeit a bad example (one would rather use `str.format` or the `%` notation to include a value in a string), this illustrates the behaviour of an `@annotated` function. Used this way, `@annotated` ensures that the `name` argument of the `hello` function is **always** a character string.

`@annotated` also respects default values, and applies annotations to them. Thus, if we were to rewrite `hello` like so:

```python
from annotated import annotated

@annotated
def hello(name: str='world'):
    print('Hello, ' + name + '!')

hello()
# "Hello, world!"
```

The default value would be honored, as well as any non-defaults.

It should be noted that `@annotated` supports both return annotations (`->`), keyword argument annotations and `*`/`**` annotations. Using `@annotated` on an incompatible (`__code__`-less) object will result in a `TypeError` exception.
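Since return (`->`) annotations are supported as well, the same call-and-replace behaviour presumably applies to the returned value. A hedged sketch of what that implies; this exact function is not from the package's docs:

```python
from annotated import annotated

@annotated
def parse_age(value: str) -> int:
    # `str` coerces the argument on the way in;
    # `int` should coerce the returned value on the way out.
    return value.strip()

print(parse_age('  42 '))
# 42
```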
|
annotated-bibliography
|
UNKNOWN
|
annotated-dataset
|
Annotated Dataset

Install

pip install annotated-dataset

Usage

from annotated_dataset.inception_client import InceptionClient
from annotated_dataset.export import export
import logging

logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)

# Create client for inception server
inception_client = InceptionClient.create_client(
    host='https://mapa.pangeamt.com/',
    username='xxx',
    password='xxx'
)

# Config
config = {
    'dataset_name': 'MAPA_BG',
    'inception_projects': [
        {
            'name': 'Bulgarian_Legal_1',
            'use_segmentation_by_newline': True,
            'inception_client': inception_client
        },
        {
            'name': 'Bulgarian_Legal_2',
            'use_segmentation_by_newline': True,
            'inception_client': inception_client
        }
    ],
    'gold_corpus_preferred_resources': ['Bulgarian_Legal_1'],
    'export_dir': './'
}

export(config)

Advanced usage

See example_advanced.py
|
annotated-doc
|
annotated-doc

Table of Contents

- Installation
- License

Installation

pip install annotated-doc

License

annotated-doc is distributed under the terms of the MIT license.
|
annotated-images
|
Annotated images

Split folders with files (e.g. images) into train, validation and test folders. Keeps the annotation data (if there is any) together with the images.

Given an input folder in the following format:

input/
img1.jpg
img1.xml
img1.json
img1.*
img2.jpg
img2.xml
img2.json
img2.*
...
...

Gives you this:

output/
train/
img1.jpg
img1.xml
img1.json
img1.*
...
val/
img2.jpg
img2.xml
img2.json
img2.*
...
test/
whatever.jpg
whatever.xml
whatever.json
whatever.*
...

Works on any file types. A seed lets you reproduce the splits.

Counting occurrences of tags

This package includes functions to count the occurrences of a tag in JSON and XML files. They can go through all files in a folder and count the occurrence of each tag on every (annotated) image.

Install

pip install annotated-images

Module

import annotated_images

# To only split into a training and validation set, pass a 2-tuple as `ratio`, i.e. `(.8, .2)`.
annotated_images.split('input_folder', output="output", seed=1337, ratio=(.8, .1, .1))

import annotated_images

# Returns total count of 'tag' found in all json files in 'path'
annotated_images.findTagsJson('path', 'tag')

# Returns total count of 'tag' found in all xml files in 'path'
annotated_images.findTagsXml('path', 'tag')

Ref

This package is forked from https://github.com/jfilter/split-folders
|
annotated-types
|
annotated-types

PEP-593 added typing.Annotated as a way of
adding context-specific metadata to existing types, and specifies that Annotated[T, x] should be treated as T by any tool or library without special
logic for x.

This package provides metadata objects which can be used to represent common
constraints such as upper and lower bounds on scalar values and collection sizes,
a Predicate marker for runtime checks, and
descriptions of how we intend these metadata to be interpreted. In some cases,
we also note alternative representations which do not require this package.

Install

pip install annotated-types

Examples

from typing import Annotated
from annotated_types import Gt, Len, Predicate

class MyClass:
    age: Annotated[int, Gt(18)]
    # Valid: 19, 20, ...
    # Invalid: 17, 18, "19", 19.0, ...

    factors: list[Annotated[int, Predicate(is_prime)]]
    # Valid: 2, 3, 5, 7, 11, ...
    # Invalid: 4, 8, -2, 5.0, "prime", ...

    my_list: Annotated[list[int], Len(0, 10)]
    # Valid: [], [10, 20, 30, 40, 50]
    # Invalid: (1, 2), ["abc"], [0] * 20

Documentation

While annotated-types avoids runtime checks for performance, users should not
construct invalid combinations such as MultipleOf("non-numeric") or Annotated[int, Len(3)].
Downstream implementors may choose to raise an error, emit a warning, silently ignore
a metadata item, etc., if the metadata objects described below are used with an
incompatible type - or for any other reason!

Gt, Ge, Lt, Le

Express inclusive and/or exclusive bounds on orderable values - which may be numbers,
dates, times, strings, sets, etc. Note that the boundary value need not be of the
same type that was annotated, so long as they can be compared: Annotated[int, Gt(1.5)] is fine, for example, and implies that the value is an integer x such that x > 1.5.

We suggest that implementors may also interpret functools.partial(operator.le, 1.5) as being equivalent to Gt(1.5), for users who wish to avoid a runtime dependency on
the annotated-types package.

To be explicit, these types have the following meanings:

- Gt(x) - value must be "Greater Than" x - equivalent to exclusive minimum
- Ge(x) - value must be "Greater than or Equal" to x - equivalent to inclusive minimum
- Lt(x) - value must be "Less Than" x - equivalent to exclusive maximum
- Le(x) - value must be "Less than or Equal" to x - equivalent to inclusive maximum

Interval

Interval(gt, ge, lt, le) allows you to specify an upper and lower bound with a single
metadata object. None attributes should be ignored, and non-None attributes
treated as per the single bounds above.

MultipleOf

MultipleOf(multiple_of=x) might be interpreted in two ways:

1. Python semantics, implying value % multiple_of == 0, or
2. JSON schema semantics,
where int(value / multiple_of) == value / multiple_of.

We encourage users to be aware of these two common interpretations and their
distinct behaviours, especially since very large or non-integer numbers make
it easy to cause silent data corruption due to floating-point imprecision.

We encourage libraries to carefully document which interpretation they implement.

MinLen, MaxLen, Len

Len() implies that min_length <= len(value) <= max_length - lower and upper bounds are inclusive.

As well as Len() which can optionally include upper and lower bounds, we also
provide MinLen(x) and MaxLen(y) which are equivalent to Len(min_length=x) and Len(max_length=y) respectively.

Len, MinLen, and MaxLen may be used with any type which supports len(value).

Examples of usage:

- Annotated[list, MaxLen(10)] (or Annotated[list, Len(max_length=10)]) - list must have a length of 10 or less
- Annotated[str, MaxLen(10)] - string must have a length of 10 or less
- Annotated[list, MinLen(3)] (or Annotated[list, Len(min_length=3)]) - list must have a length of 3 or more
- Annotated[list, Len(4, 6)] - list must have a length of 4, 5, or 6
- Annotated[list, Len(8, 8)] - list must have a length of exactly 8

Changed in v0.4.0:

- min_inclusive has been renamed to min_length, no change in meaning
- max_exclusive has been renamed to max_length, upper bound is now inclusive instead of exclusive
- The recommendation that slices are interpreted as Len has been removed due to ambiguity and different semantic
meaning of the upper bound in slices vs. Len. See issue #23 for discussion.

Timezone

Timezone can be used with a datetime or a time to express which timezones
are allowed. Annotated[datetime, Timezone(None)] must be a naive datetime. Timezone[...] (literal ellipsis)
expresses that any timezone-aware datetime is allowed. You may also pass a specific
timezone string or timezone object such as Timezone(timezone.utc) or Timezone("Africa/Abidjan") to express that you only allow a specific timezone,
though we note that this is often a symptom of fragile design.

Predicate

Predicate(func: Callable) expresses that func(value) is truthy for valid values.
Users should prefer the statically inspectable metadata above, but if you need
the full power and flexibility of arbitrary runtime predicates... here it is.

We provide a few predefined predicates for common string constraints:

- IsLower = Predicate(str.islower)
- IsUpper = Predicate(str.isupper)
- IsDigit = Predicate(str.isdigit)
- IsFinite = Predicate(math.isfinite)
- IsNotFinite = Predicate(Not(math.isfinite))
- IsNan = Predicate(math.isnan)
- IsNotNan = Predicate(Not(math.isnan))
- IsInfinite = Predicate(math.isinf)
- IsNotInfinite = Predicate(Not(math.isinf))

Some libraries might have special logic to handle known or understandable predicates,
for example by checking for str.isdigit and using its presence to both call custom
logic to enforce digit-only strings, and customise some generated external schema.
Users are therefore encouraged to avoid indirection like lambda s: s.lower(), in
favor of introspectable methods such as str.lower or re.compile("pattern").search.

To enable basic negation of commonly used predicates like math.isnan, without introducing introspection that makes it impossible for implementers to introspect the predicate, we provide a Not wrapper that simply negates the predicate in an introspectable manner. Several of the predicates listed above are created in this manner.

We do not specify what behaviour should be expected for predicates that raise
an exception. For example Annotated[int, Predicate(str.isdigit)] might silently
skip invalid constraints, or statically raise an error; or it might try calling it
and then propagate or discard the resulting TypeError: descriptor 'isdigit' for 'str' objects doesn't apply to a 'int' object exception. We encourage libraries to document the behaviour they choose.

Doc

doc() can be used to add documentation information in Annotated, for function and method parameters, variables, class attributes, return types, and any place where Annotated can be used.

It expects a value that can be statically analyzed, as the main use case is for static analysis, editors, documentation generators, and similar tools.

It returns a DocInfo class with a single attribute documentation containing the value passed to doc().

This is the early adopter's alternative form of the typing-doc proposal.

Integrating downstream types with GroupedMetadata

Implementers may choose to provide a convenience wrapper that groups multiple pieces of metadata.
This can help reduce verbosity and cognitive overhead for users.
For example, an implementer like Pydantic might provide a Field or Meta type that accepts keyword arguments and transforms these into low-level metadata:

from dataclasses import dataclass
from typing import Iterator
from annotated_types import GroupedMetadata, Ge

@dataclass
class Field(GroupedMetadata):
    ge: int | None = None
    description: str | None = None

    def __iter__(self) -> Iterator[object]:
        # Iterating over a GroupedMetadata object should yield annotated-types
        # constraint metadata objects which describe it as fully as possible,
        # and may include other unknown objects too.
        if self.ge is not None:
            yield Ge(self.ge)
        if self.description is not None:
            yield Description(self.description)

Libraries consuming annotated-types constraints should check for GroupedMetadata and unpack it by iterating over the object and treating the results as if they had been "unpacked" in the Annotated type. The same logic should be applied to the PEP 646 Unpack type, so that Annotated[T, Field(...)], Annotated[T, Unpack[Field(...)]] and Annotated[T, *Field(...)] are all treated consistently.

Libraries consuming annotated-types should also ignore any metadata they do not recognize that came from unpacking a GroupedMetadata, just like they ignore unrecognized metadata in Annotated itself.

Our own annotated_types.Interval class is a GroupedMetadata which unpacks itself into Gt, Lt, etc., so this is not an abstract concern. Similarly, annotated_types.Len is a GroupedMetadata which unpacks itself into MinLen (optionally) and MaxLen.

Consuming metadata

We intend to not be prescriptive as to how the metadata and constraints are used, but as an example of how one might parse constraints from type annotations see our implementation in test_main.py.

It is up to the implementer to determine how this metadata is used.
You could use the metadata for runtime type checking, for generating schemas or to generate example data, amongst other use cases.

Design & History

This package was designed at the PyCon 2022 sprints by the maintainers of Pydantic
and Hypothesis, with the goal of making it as easy as possible for end-users to
provide more informative annotations for use by runtime libraries.

It is deliberately minimal, and following PEP-593 allows considerable downstream
discretion in what (if anything!) they choose to support. Nonetheless, we expect
that staying simple and covering only the most common use-cases will give users
and maintainers the best experience we can. If you'd like more constraints for your
types - follow our lead, by defining them and documenting them downstream!
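To make the "Consuming metadata" section above concrete, here is a minimal sketch of a runtime checker. The validate helper is hypothetical and not part of annotated-types; only the constraint classes and the GroupedMetadata protocol come from the package, and unknown metadata is simply ignored, as the spec recommends.

from collections import deque
from typing import Annotated, get_args, get_origin

import annotated_types as at

def validate(tp, value):
    """Hypothetical helper: enforce annotated-types constraints at runtime."""
    if get_origin(tp) is not Annotated:
        return value
    _, *metadata = get_args(tp)
    queue = deque(metadata)
    while queue:
        meta = queue.popleft()
        if isinstance(meta, at.GroupedMetadata):
            queue.extend(meta)  # e.g. Interval unpacks into Gt/Ge/Lt/Le
        elif isinstance(meta, at.Gt) and not value > meta.gt:
            raise ValueError(f"{value!r} is not > {meta.gt!r}")
        elif isinstance(meta, at.Ge) and not value >= meta.ge:
            raise ValueError(f"{value!r} is not >= {meta.ge!r}")
        elif isinstance(meta, at.Lt) and not value < meta.lt:
            raise ValueError(f"{value!r} is not < {meta.lt!r}")
        elif isinstance(meta, at.Le) and not value <= meta.le:
            raise ValueError(f"{value!r} is not <= {meta.le!r}")
        elif isinstance(meta, at.Predicate) and not meta.func(value):
            raise ValueError(f"{value!r} fails {meta.func!r}")
    return value

validate(Annotated[int, at.Gt(18)], 19)                  # ok
validate(Annotated[int, at.Interval(ge=0, lt=100)], 42)  # ok, via GroupedMetadata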
|
annotateiT
|
No description available on PyPI.
|
annotate-lineinfo
|
annotate_lineinfo

This IDAPython script/plugin will parse the PDB for the loaded executable and annotate the disassembly with source and line number information.

Usage

Script

Option 1) Run annotate_lineinfo.py as a regular IDAPython script.

Option 2) From another script or the IDAPython console:

import annotate_lineinfo
annotate_lineinfo.ida_annotate_lineinfo()

Plugin

To install:

Option 1) Run python setup.py install --install-ida-plugin=PATH to install annotate_lineinfo_plugin.py to PATH\plugins
- If PATH is not specified, %IDAUSR% will be tried first
- If %IDAUSR% does not exist, it defaults to %APPDATA%\Hex-Rays\IDA Pro

Option 2) Manually place annotate_lineinfo_plugin.py in the plugins directory of your IDA installation.

Annotate entire file

Use shortcut key Alt-Shift-A or run from the Edit->Annotate lineinfo menu.

Disassembly view popup menu

- Right click inside a function, select annotate
- Select a range of instructions, right click, select annotate

Functions view popup menu

- Select one or more functions, right click, select annotate

Each of the above actions has a corresponding remove annotations action.

On load, annotate_lineinfo attempts to locate the PDB in the following locations:

- _NT_SYMBOL_PATH if set
- IDA's default PDB download directory %TEMP%\ida
- MSDIA defaults - path in debug directory of executable, same path as executable

You may specify the PDB path manually, or request another auto-locate attempt (e.g. after IDA downloads the PDB),
from the Edit->Annotate lineinfo menu.

Caveats

Only runs on Windows. This script makes use of the COM API provided by msdia[ver].dll to parse the PDB.
|
annotateme
|
No description available on PyPI.
|
annotateonline
|
No description available on PyPI.
|
annotater
|
Inline javascript-based web annotation library incorporating Marginalia
(http://www.geof.net/code/annotation). The package includes a database-backed
annotation store with a RESTful (WSGI-powered) web interface, an abstraction layer
around Marginalia to make it easy to incorporate into your web application,
and all the Marginalia media files (with improvements).
|
annotate_regions
|
UNKNOWN
|
annotation
|
No description available on PyPI.
|
annotation-analysis
|
Annotation Analysis

Package to analyse inter-annotator agreement: a simple package to compute inter-annotator agreement with a simple interface.

Getting Started

Install the package:

pip3 install annotation_analysis

Example usage:

from annotation_analysis import interannotator_metrics

if __name__ == "__main__":
    annotations = [["A", "B", "A"], ["A", "B", "B"]]
    k_alpha = interannotator_metrics.krippendorff_alpha(annotations)
    f_kappa = interannotator_metrics.fleiss_kappa(annotations)
    print(k_alpha)
    print(f_kappa)

Documentation

function krippendorff_alpha(annotations: List[List], labels: Optional[List], ignore: Optional[List])
"""
Arguments:
- annotations:
    - The list of annotations, assumed to be one "row" per annotator (i.e. annotations[0] is annotator #1).
    - The number of columns represents the number of datapoints.
    - Important: at this stage only hashable values are allowed.
    - E.g. with labels ["A", "B"], three datapoints and 2 annotators, the following would be the valid structure:
        - [["A", "B", "A"], ["A", "B", "B"]]
        - Annotator #1 (index=0) -> ["A", "B", "A"]
        - Annotator #2 (index=1) -> ["A", "B", "B"]
- labels:
    - Represents the optional list of valid labels.
    - Important: at this stage only hashable values are allowed.
- ignore:
    - Represents an optional list of labels that should be ignored.
    - I.e. if this is non-empty then, for any datapoint, if any of the annotators used an ignored label, the datapoint is ignored across all annotators.

Return:
- Krippendorff Alpha score for all annotators.
"""

function fleiss_kappa(annotations: List[List], labels: Optional[List], ignore: Optional[List])
"""
Arguments: identical to krippendorff_alpha above.

Return:
- Fleiss Kappa score for all annotators.
"""

(C) - Nikolai Rozanov 2022
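The labels and ignore parameters documented above can be exercised like so. This sketch is based only on the signatures above; the "?" label and the data are made up:

from annotation_analysis import interannotator_metrics

# Two annotators, four items; "?" marks items an annotator could not judge.
annotations = [
    ["A", "B", "A", "?"],
    ["A", "B", "B", "A"],
]

# Per the documented behaviour, any datapoint where some annotator used an
# ignored label is dropped across all annotators before agreement is computed.
k_alpha = interannotator_metrics.krippendorff_alpha(
    annotations, labels=["A", "B"], ignore=["?"]
)
print(k_alpha)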
|
annotation-converter
|
No description available on PyPI.
|
annotationengine
|
No description available on PyPI.
|
annotationfactory
|
Introduction

Annotation-Factory Python SDK. This package works specifically with Microsoft Cognitive Services detection results. AnnotationWriter takes a JSON object received from Cognitive Services and produces annotation files in both VOC and YOLO formats for use in training machine learning models.

Getting Started

Install the annotationfactory package via pip:

pip install annotationfactory

Sample to use

from annotationfactory.annotationwriter import AnnotationWriter
import annotationfactory.annotationconverter as converter

example = {
    'tagId': 0,
    'tagName': 'Apples',
    'region': {
        'left': 0.288039029,
        'top': 0.411838,
        'width': 0.291451037,
        'height': 0.4237842
    }
}

# Initialise AnnotationWriter.
writer = AnnotationWriter()

# Initialise annotation handlers.
writer.initVoc("test.jpg", 608, 608)
writer.initYolo()

# Add VOC object to writer.
writer.addVocObject(example)
writer.addVocObject(example)

# Add YOLO object to writer.
writer.addYoloObject(example)
writer.addYoloObject(example)

# Output VOC annotations to file.
writer.saveVoc("myannotation.xml")

# Output YOLO annotations to file.
writer.saveYolo("myannotation.txt")

# Converts VOC annotations back to CustomVision annotation format.
voc2cv = converter.convertVocFromPath("myannotation.xml")

# Converts YOLO annotations back to CustomVision annotation format.
# Requires a txt file with a list of label names as an input.
yolo2cv = converter.convertYoloFromPath("myannotation.txt", "class.names")

Run locally

pip install -r requirements.txt
python example/test.py

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct.
For more information see the Code of Conduct FAQ or
contact [email protected] with any additional questions or comments.
|
annotationframeworkclient
|
No description available on PyPI.
|
annotation-gpt
|
annotation-gpt

An annotation tool that works through a GPT model.
|