anthill-tormysql
TorMySQL

The highest performance asynchronous MySQL driver.

PyPI page: https://pypi.python.org/pypi/tormysql

About

Presents a Future-based API and uses greenlets for non-blocking access to MySQL. Supports both tornado and asyncio.

Installation

pip install TorMySQL

Using Tornado

Pool example:

```python
from tornado.ioloop import IOLoop
from tornado import gen
import tormysql

pool = tormysql.ConnectionPool(
    max_connections=20,         # max open connections
    idle_seconds=7200,          # connection idle timeout in seconds, 0 disables the timeout
    wait_connection_timeout=3,  # wait connection timeout
    host="127.0.0.1",
    user="root",
    passwd="TEST",
    db="test",
    charset="utf8",
)

@gen.coroutine
def test():
    with (yield pool.Connection()) as conn:
        try:
            with conn.cursor() as cursor:
                yield cursor.execute("INSERT INTO test(id) VALUES(1)")
        except Exception:
            yield conn.rollback()
        else:
            yield conn.commit()

        with conn.cursor() as cursor:
            yield cursor.execute("SELECT * FROM test")
            datas = cursor.fetchall()
    print(datas)
    yield pool.close()

ioloop = IOLoop.instance()
ioloop.run_sync(test)
```

Helpers example:

```python
from tornado.ioloop import IOLoop
from tornado import gen
import tormysql

pool = tormysql.helpers.ConnectionPool(
    max_connections=20,         # max open connections
    idle_seconds=7200,          # connection idle timeout in seconds, 0 disables the timeout
    wait_connection_timeout=3,  # wait connection timeout
    host="127.0.0.1",
    user="root",
    passwd="TEST",
    db="test",
    charset="utf8",
)

@gen.coroutine
def test():
    tx = yield pool.begin()
    try:
        yield tx.execute("INSERT INTO test(id) VALUES(1)")
    except Exception:
        yield tx.rollback()
    else:
        yield tx.commit()

    cursor = yield pool.execute("SELECT * FROM test")
    datas = cursor.fetchall()
    print(datas)
    yield pool.close()

ioloop = IOLoop.instance()
ioloop.run_sync(test)
```

Using asyncio alone

Pool example:

```python
from asyncio import events
import tormysql

pool = tormysql.ConnectionPool(
    max_connections=20,         # max open connections
    idle_seconds=7200,          # connection idle timeout in seconds, 0 disables the timeout
    wait_connection_timeout=3,  # wait connection timeout
    host="127.0.0.1",
    user="root",
    passwd="TEST",
    db="test",
    charset="utf8",
)

async def test():
    async with await pool.Connection() as conn:
        try:
            async with conn.cursor() as cursor:
                await cursor.execute("INSERT INTO test(id) VALUES(1)")
        except Exception:
            await conn.rollback()
        else:
            await conn.commit()

        async with conn.cursor() as cursor:
            await cursor.execute("SELECT * FROM test")
            datas = cursor.fetchall()
    print(datas)
    await pool.close()

ioloop = events.get_event_loop()
ioloop.run_until_complete(test())
```

Helpers example:

```python
from asyncio import events
import tormysql

pool = tormysql.helpers.ConnectionPool(
    max_connections=20,         # max open connections
    idle_seconds=7200,          # connection idle timeout in seconds, 0 disables the timeout
    wait_connection_timeout=3,  # wait connection timeout
    host="127.0.0.1",
    user="root",
    passwd="TEST",
    db="test",
    charset="utf8",
)

async def test():
    async with await pool.begin() as tx:
        await tx.execute("INSERT INTO test(id) VALUES(1)")

    cursor = await pool.execute("SELECT * FROM test")
    datas = cursor.fetchall()
    print(datas)
    await pool.close()

ioloop = events.get_event_loop()
ioloop.run_until_complete(test())
```

Resources

You can read the PyMySQL documentation online for more information.

License

TorMySQL uses the MIT license; see the LICENSE file for details.
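The three pool options used throughout the examples above (max_connections, idle_seconds, wait_connection_timeout) follow a common connection-pool pattern. As a minimal, synchronous sketch of the wait-with-timeout semantics only, here is a hypothetical TinyPool (not TorMySQL's implementation):

```python
import queue

class TinyPool:
    """Illustrative stand-in for a connection pool with a bounded size
    and a bounded wait for a free connection."""
    def __init__(self, max_connections, wait_connection_timeout, factory):
        self._q = queue.Queue(maxsize=max_connections)
        self._timeout = wait_connection_timeout
        for _ in range(max_connections):
            self._q.put(factory())

    def acquire(self):
        # Blocks for up to wait_connection_timeout seconds, then raises
        # queue.Empty, mirroring the "wait connection timeout" knob.
        return self._q.get(timeout=self._timeout)

    def release(self, conn):
        self._q.put(conn)

pool = TinyPool(max_connections=2, wait_connection_timeout=0.1, factory=object)
a = pool.acquire()
b = pool.acquire()
try:
    pool.acquire()  # pool exhausted: times out after 0.1s
    timed_out = False
except queue.Empty:
    timed_out = True
pool.release(a)
c = pool.acquire()  # a connection was returned, so this succeeds
print(timed_out, c is a)  # True True
```

The real driver additionally closes connections that sit idle longer than idle_seconds; that bookkeeping is omitted here.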
anthon
No description available on PyPI.
anthonnn
No description available on PyPI.
anthonysalmerinester
No description available on PyPI.
anthpy
Anthony’s python utils
anthropic
Anthropic Python API library

The Anthropic Python library provides convenient access to the Anthropic REST API from any Python 3.7+ application. It includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by httpx.

For the AWS Bedrock API, see anthropic-bedrock.

Documentation

The REST API documentation can be found on docs.anthropic.com. The full API of this library can be found in api.md.

Installation

pip install anthropic

Usage

The full API of this library can be found in api.md.

```python
import os
from anthropic import Anthropic

client = Anthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "How does a court case get to the supreme court?",
        }
    ],
    model="claude-2.1",
)
print(message.content)
```

While you can provide an api_key keyword argument, we recommend using python-dotenv to add ANTHROPIC_API_KEY="my-anthropic-api-key" to your .env file so that your API key is not stored in source control.

Async usage

Simply import AsyncAnthropic instead of Anthropic and use await with each API call:

```python
import os
import asyncio
from anthropic import AsyncAnthropic

client = AsyncAnthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

async def main() -> None:
    message = await client.messages.create(
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "How does a court case get to the supreme court?",
            }
        ],
        model="claude-2.1",
    )
    print(message.content)

asyncio.run(main())
```

Functionality between the synchronous and asynchronous clients is otherwise identical.

Streaming Responses

We provide support for streaming responses using Server-Sent Events (SSE).

```python
from anthropic import Anthropic

client = Anthropic()

stream = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "your prompt here",
        }
    ],
    model="claude-2.1",
    stream=True,
)
for event in stream:
    print(event.type)
```

The async client uses the exact same interface:

```python
from anthropic import AsyncAnthropic

client = AsyncAnthropic()

stream = await client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "your prompt here",
        }
    ],
    model="claude-2.1",
    stream=True,
)
async for event in stream:
    print(event.type)
```

Streaming Helpers

This library provides several conveniences for streaming messages, for example:

```python
import asyncio
from anthropic import AsyncAnthropic

client = AsyncAnthropic()

async def main() -> None:
    async with client.messages.stream(
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "Say hello there!",
            }
        ],
        model="claude-2.1",
    ) as stream:
        async for text in stream.text_stream:
            print(text, end="", flush=True)
        print()

        message = await stream.get_final_message()
        print(message.model_dump_json(indent=2))

asyncio.run(main())
```

Streaming with client.messages.stream(...) exposes various helpers for your convenience, including event handlers and accumulation.

Alternatively, you can use client.messages.create(..., stream=True), which only returns an async iterable of the events in the stream and thus uses less memory (it does not build up a final message object for you).

AWS Bedrock

This library also provides support for the Anthropic Bedrock API if you install this library with the bedrock extra, e.g. pip install -U anthropic[bedrock].

You can then import and instantiate a separate AnthropicBedrock class; the rest of the API is the same.

```python
from anthropic import AI_PROMPT, HUMAN_PROMPT, AnthropicBedrock

client = AnthropicBedrock()

completion = client.completions.create(
    model="anthropic.claude-instant-v1",
    prompt=f"{HUMAN_PROMPT} hey!{AI_PROMPT}",
    stop_sequences=[HUMAN_PROMPT],
    max_tokens_to_sample=500,
    temperature=0.5,
    top_k=250,
    top_p=0.5,
)
print(completion.completion)
```

For a more fully fledged example see examples/bedrock.py.

Token counting

You can estimate billing for a given request with the client.count_tokens() method, e.g.:

```python
client = Anthropic()
client.count_tokens('Hello world!')  # 3
```

Using types

Nested request parameters are TypedDicts. Responses are Pydantic models, which provide helper methods for things like:

- Serializing back into JSON: model.model_dump_json(indent=2, exclude_unset=True)
- Converting to a dictionary: model.model_dump(exclude_unset=True)

Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set python.analysis.typeCheckingMode to basic.

Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of anthropic.APIConnectionError is raised.

When the API returns a non-success status code (that is, a 4xx or 5xx response), a subclass of anthropic.APIStatusError is raised, containing status_code and response properties.

All errors inherit from anthropic.APIError.

```python
import anthropic
from anthropic import Anthropic

client = Anthropic()

try:
    client.messages.create(
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "your prompt here",
            }
        ],
        model="claude-2.1",
    )
except anthropic.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except anthropic.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except anthropic.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```

Error codes are as follows:

Status Code | Error Type
400         | BadRequestError
401         | AuthenticationError
403         | PermissionDeniedError
404         | NotFoundError
422         | UnprocessableEntityError
429         | RateLimitError
>=500       | InternalServerError
N/A         | APIConnectionError

Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the max_retries option to configure or disable retry settings:

```python
from anthropic import Anthropic

# Configure the default for all requests:
client = Anthropic(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Can you help me effectively ask for a raise at work?",
        }
    ],
    model="claude-2.1",
)
```

Timeouts

By default requests time out after 10 minutes. You can configure this with a timeout option, which accepts a float or an httpx.Timeout object:

```python
import httpx
from anthropic import Anthropic

# Configure the default for all requests:
client = Anthropic(
    # 20 seconds (default is 10 minutes)
    timeout=20.0,
)

# More granular control:
client = Anthropic(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5 * 1000).messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Where can I get a good coffee in my neighbourhood?",
        }
    ],
    model="claude-2.1",
)
```

On timeout, an APITimeoutError is thrown.

Note that requests that time out are retried twice by default.

Default Headers

We automatically send the anthropic-version header set to 2023-06-01.

If you need to, you can override it by setting default headers per-request or on the client object. Be aware that doing so may result in incorrect types and other unexpected or undefined behavior in the SDK.

```python
from anthropic import Anthropic

client = Anthropic(
    default_headers={"anthropic-version": "My-Custom-Value"},
)
```

Advanced

Logging

We use the standard library logging module. You can enable logging by setting the environment variable ANTHROPIC_LOG to debug.

$ export ANTHROPIC_LOG=debug

How to tell whether None means null or missing

In an API response, a field may be explicitly null, or missing entirely; in either case, its value is None in this library. You can differentiate the two cases with .model_fields_set:

```python
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```

Accessing raw response data (e.g. headers)

The "raw" Response object can be accessed by prefixing .with_raw_response. to any HTTP method call, e.g.:

```python
from anthropic import Anthropic

client = Anthropic()
response = client.messages.with_raw_response.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Where can I get a good coffee in my neighbourhood?",
        }
    ],
    model="claude-2.1",
)
print(response.headers.get('X-My-Header'))

message = response.parse()  # get the object that `messages.create()` would have returned
print(message.content)
```

These methods return a LegacyAPIResponse object. This is a legacy class, as we're changing it slightly in the next major version.

For the sync client this will mostly be the same, with the exception that content and text will be methods instead of properties. In the async client, all methods will be async.

A migration script will be provided and the migration in general should be smooth.

.with_streaming_response

The above interface eagerly reads the full response body when you make the request, which may not always be what you want.

To stream the response body, use .with_streaming_response instead, which requires a context manager and only reads the response body once you call .read(), .text(), .json(), .iter_bytes(), .iter_text(), .iter_lines() or .parse(). In the async client, these are async methods.

As such, .with_streaming_response methods return a different APIResponse object, and the async client returns an AsyncAPIResponse object.

```python
with client.messages.with_streaming_response.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Where can I get a good coffee in my neighbourhood?",
        }
    ],
    model="claude-2.1",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```

The context manager is required so that the response will reliably be closed.

Configuring the HTTP client

You can directly override the httpx client to customize it for your use case, including:

- Support for proxies
- Custom transports
- Additional advanced functionality

```python
import httpx
from anthropic import Anthropic

client = Anthropic(
    # Or use the `ANTHROPIC_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=httpx.Client(
        proxies="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```

Managing HTTP resources

By default the library closes underlying HTTP connections whenever the client is garbage collected. You can manually close the client using the .close() method if desired, or with a context manager that closes when exiting.

Versioning

This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:

- Changes that only affect static types, without breaking runtime behavior.
- Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals.)
- Changes that we do not expect to impact the vast majority of users in practice.

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

We are keen for your feedback; please open an issue with questions, bugs, or suggestions.

Requirements

Python 3.7 or higher.
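The retry policy described above (2 retries by default, with a short exponential backoff) can be sketched generically. The helper below is illustrative only and is not the SDK's implementation; the real client also inspects status codes and honors server hints:

```python
attempts = {"n": 0}

def flaky():
    """Hypothetical request that fails twice, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network failure")
    return "ok"

def call_with_retries(fn, max_retries=2, base_delay=0.5):
    """Run fn, retrying up to max_retries times on connection errors."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries:
                raise
            # exponential backoff: 0.5s, 1.0s, ...
            # (the sleep is omitted so this example runs instantly)
            _delay = base_delay * (2 ** attempt)

result = call_with_retries(flaky)
print(result, attempts["n"])  # ok 3
```

With max_retries=0 the first ConnectionError would propagate immediately, which is what disabling retries on the client does.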
anthropicautodocstrings
Source code: https://github.com/JordanM575/anthropicautodocstrings

Anthropicautodocstrings is a command-line tool with the following key features:

- Updates the docstrings in Python files using the Anthropic API.
- Can process a single file or a directory of files, including all subdirectories.

Anthropicautodocstrings uses the Anthropic API to generate docstrings, so these are not guaranteed to be perfect. The claude-instant-1.2 model is used to generate the docstrings; it is the fastest, and the version this was forked from is slow in comparison. To increase raw speed, the tool runs asynchronously.

Anthropicautodocstrings works best for code that already has good type hints. Without type hints, the Anthropic API has to guess input and return types, which may not be accurate.

Requirements

- Python 3.10+
- A valid Anthropic API key, set in the environment variables.

Installation

To install this tool, run the following command:

$ pip install anthropicautodocstrings

Usage

To use this tool, run the following commands:

$ export ANTHROPIC_API_KEY=1234567890
$ aads INPUT \
    [--replace-existing-docstrings] \
    [--skip-constructor-docstrings] \
    [--exclude-directories EXCLUDE_DIRECTORIES] \
    [--exclude-files EXCLUDE_FILES]

where INPUT is a Python file or a directory containing Python files to update the docstrings in. The optional flags --replace-existing-docstrings and --skip-constructor-docstrings control whether existing docstrings are replaced and whether docstrings for constructors (__init__ methods) are skipped. EXCLUDE_DIRECTORIES and EXCLUDE_FILES are comma-separated lists of directories and files to exclude from the update.

Examples

Update the docstrings in all Python files in the cool_code directory:

$ aads cool_code/

Update the docstrings in the awesome_script.py file:

$ aads awesome_script.py

Update the docstrings in all Python files in the cool_code directory and replace existing ones:

$ aads cool_code/ --replace-existing-docstrings

Update the docstrings in all Python files in the cool_code directory, but skip updating docstrings for class constructors (__init__ methods):

$ aads cool_code/ --skip-constructor-docstrings

Update the docstrings in all Python files in the my_code directory, but exclude the exclude_dir directory and the exclude_file_1.py and exclude_file_2.py files:

$ aads my_code/ --exclude-directories exclude_dir --exclude-files exclude_file_1.py,exclude_file_2.py
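The --exclude-directories / --exclude-files behavior described above amounts to a filtered walk over the input tree. Here is a small sketch of that filtering; collect_python_files is a hypothetical helper for illustration, not the tool's actual code:

```python
import os
import tempfile

def collect_python_files(root, exclude_directories=(), exclude_files=()):
    """Yield .py paths under root, skipping excluded directory and file names."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning dirnames in place stops os.walk from descending into them.
        dirnames[:] = [d for d in dirnames if d not in exclude_directories]
        for name in filenames:
            if name.endswith(".py") and name not in exclude_files:
                yield os.path.join(dirpath, name)

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "exclude_dir"))
    for rel in ("keep.py", "exclude_file_1.py",
                os.path.join("exclude_dir", "skipped.py")):
        open(os.path.join(root, rel), "w").close()

    found = sorted(os.path.basename(p) for p in collect_python_files(
        root,
        exclude_directories=["exclude_dir"],
        exclude_files=["exclude_file_1.py"],
    ))

print(found)  # ['keep.py']
```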
anthropic-bedrock
Anthropic Bedrock Python API library

The Anthropic Bedrock Python library provides convenient access to the Anthropic Bedrock REST API from any Python 3.7+ application. It includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by httpx.

For the non-Bedrock Anthropic API at api.anthropic.com, see anthropic-python.

Documentation

The REST API documentation can be found on docs.anthropic.com. The full API of this library can be found in api.md.

Installation

pip install anthropic-bedrock

Usage

The full API of this library can be found in api.md.

```python
import anthropic_bedrock
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock(
    # Authenticate by either providing the keys below or use the default AWS credential providers, such as
    # using ~/.aws/credentials or the "AWS_SECRET_ACCESS_KEY" and "AWS_ACCESS_KEY_ID" environment variables.
    aws_access_key="<access key>",
    aws_secret_key="<secret key>",
    # Temporary credentials can be used with aws_session_token.
    # Read more at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html.
    aws_session_token="<session_token>",
    # aws_region changes the aws region to which the request is made. By default, we read AWS_REGION,
    # and if that's not present, we default to us-east-1.
    # Note that we do not read ~/.aws/config for the region.
    aws_region="us-east-2",
)

completion = client.completions.create(
    model="anthropic.claude-v2:1",
    max_tokens_to_sample=256,
    prompt=f"{anthropic_bedrock.HUMAN_PROMPT} how does a court case get to the Supreme Court?{anthropic_bedrock.AI_PROMPT}",
)
print(completion.completion)
```

This library uses botocore internally for authentication; you can read more about the default providers here.

Async usage

Simply import AsyncAnthropicBedrock instead of AnthropicBedrock and use await with each API call:

```python
import asyncio
import anthropic_bedrock
from anthropic_bedrock import AsyncAnthropicBedrock

client = AsyncAnthropicBedrock()

async def main():
    completion = await client.completions.create(
        model="anthropic.claude-v2:1",
        max_tokens_to_sample=256,
        prompt=f"{anthropic_bedrock.HUMAN_PROMPT} how does a court case get to the Supreme Court?{anthropic_bedrock.AI_PROMPT}",
    )
    print(completion.completion)

asyncio.run(main())
```

Functionality between the synchronous and asynchronous clients is otherwise identical.

Streaming Responses

We provide support for streaming responses using Server-Sent Events (SSE).

```python
from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

client = AnthropicBedrock()

stream = client.completions.create(
    prompt=f"{HUMAN_PROMPT} Your prompt here{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
    stream=True,
)
for completion in stream:
    print(completion.completion, end="", flush=True)
```

The async client uses the exact same interface:

```python
from anthropic_bedrock import AsyncAnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

client = AsyncAnthropicBedrock()

stream = await client.completions.create(
    prompt=f"{HUMAN_PROMPT} Your prompt here{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
    stream=True,
)
async for completion in stream:
    print(completion.completion, end="", flush=True)
```

Using types

Nested request parameters are TypedDicts. Responses are Pydantic models, which provide helper methods for things like:

- Serializing back into JSON: model.model_dump_json(indent=2, exclude_unset=True)
- Converting to a dictionary: model.model_dump(exclude_unset=True)

Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set python.analysis.typeCheckingMode to basic.

Token counting

You can estimate billing for a given request with the client.count_tokens() method, e.g.:

```python
client = AnthropicBedrock()
client.count_tokens('Hello world!')  # 3
```

Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of anthropic_bedrock.APIConnectionError is raised.

When the API returns a non-success status code (that is, a 4xx or 5xx response), a subclass of anthropic_bedrock.APIStatusError is raised, containing status_code and response properties.

All errors inherit from anthropic_bedrock.APIError.

```python
import anthropic_bedrock
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock()

try:
    client.completions.create(
        prompt=f"{anthropic_bedrock.HUMAN_PROMPT} Your prompt here{anthropic_bedrock.AI_PROMPT}",
        max_tokens_to_sample=256,
        model="anthropic.claude-v2:1",
    )
except anthropic_bedrock.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except anthropic_bedrock.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except anthropic_bedrock.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```

Error codes are as follows:

Status Code | Error Type
400         | BadRequestError
401         | AuthenticationError
403         | PermissionDeniedError
404         | NotFoundError
422         | UnprocessableEntityError
429         | RateLimitError
>=500       | InternalServerError
N/A         | APIConnectionError

Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the max_retries option to configure or disable retry settings:

```python
from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

# Configure the default for all requests:
client = AnthropicBedrock(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).completions.create(
    prompt=f"{HUMAN_PROMPT} Can you help me effectively ask for a raise at work?{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
)
```

Timeouts

By default requests time out after 10 minutes. You can configure this with a timeout option, which accepts a float or an httpx.Timeout object:

```python
import httpx
from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

# Configure the default for all requests:
client = AnthropicBedrock(
    # default is 10 minutes
    timeout=20.0,
)

# More granular control:
client = AnthropicBedrock(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5 * 1000).completions.create(
    prompt=f"{HUMAN_PROMPT} Where can I get a good coffee in my neighbourhood?{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
)
```

On timeout, an APITimeoutError is thrown.

Note that requests that time out are retried twice by default.

Advanced

Logging

We use the standard library logging module. You can enable logging by setting the environment variable ANTHROPIC_BEDROCK_LOG to debug.

$ export ANTHROPIC_BEDROCK_LOG=debug

How to tell whether None means null or missing

In an API response, a field may be explicitly null, or missing entirely; in either case, its value is None in this library. You can differentiate the two cases with .model_fields_set:

```python
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```

Accessing raw response data (e.g. headers)

The "raw" Response object can be accessed by prefixing .with_raw_response. to any HTTP method call, e.g.:

```python
from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

client = AnthropicBedrock()
response = client.completions.with_raw_response.create(
    prompt=f"{HUMAN_PROMPT} Your prompt here{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
)
print(response.headers.get('X-My-Header'))

completion = response.parse()  # get the object that `completions.create()` would have returned
print(completion.completion)
```

These methods return a LegacyAPIResponse object. This is a legacy class, as we're changing it slightly in the next major version.

For the sync client this will mostly be the same, with the exception that content and text will be methods instead of properties. In the async client, all methods will be async.

A migration script will be provided and the migration in general should be smooth.

.with_streaming_response

The above interface eagerly reads the full response body when you make the request, which may not always be what you want.

To stream the response body, use .with_streaming_response instead, which requires a context manager and only reads the response body once you call .read(), .text(), .json(), .iter_bytes(), .iter_text(), .iter_lines() or .parse(). In the async client, these are async methods.

As such, .with_streaming_response methods return a different APIResponse object, and the async client returns an AsyncAPIResponse object.

```python
with client.completions.with_streaming_response.create(
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
    prompt=f"{HUMAN_PROMPT} Where can I get a good coffee in my neighbourhood?{AI_PROMPT}",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```

The context manager is required so that the response will reliably be closed.

Configuring the HTTP client

You can directly override the httpx client to customize it for your use case, including:

- Support for proxies
- Custom transports
- Additional advanced functionality

```python
import httpx
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock(
    # Or use the `ANTHROPIC_BEDROCK_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=httpx.Client(
        proxies="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
    aws_secret_key="<secret key>",
    aws_access_key="<access key>",
    aws_region="us-east-2",
)
```

Managing HTTP resources

By default the library closes underlying HTTP connections whenever the client is garbage collected. You can manually close the client using the .close() method if desired, or with a context manager that closes when exiting.

Versioning

This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:

- Changes that only affect static types, without breaking runtime behavior.
- Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals.)
- Changes that we do not expect to impact the vast majority of users in practice.

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

We are keen for your feedback; please open an issue with questions, bugs, or suggestions.

Requirements

Python 3.7 or higher.
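The status-code-to-error table above amounts to a simple dispatch on the HTTP status. The sketch below illustrates that mapping with class names as strings; it is not the library's real dispatch code:

```python
def error_for_status(status_code):
    """Return the SDK error-class name for an HTTP status, per the table above."""
    mapping = {
        400: "BadRequestError",
        401: "AuthenticationError",
        403: "PermissionDeniedError",
        404: "NotFoundError",
        422: "UnprocessableEntityError",
        429: "RateLimitError",
    }
    if status_code >= 500:
        return "InternalServerError"
    # Any other non-success status falls back to the base status error.
    return mapping.get(status_code, "APIStatusError")

print(error_for_status(429), error_for_status(503))  # RateLimitError InternalServerError
```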
anthros-core
[RUS] Welcome to Anthros-core!

Environment, interfaces and tools for your projects, as well as an interactive console for development and testing.

How to start?

1. Install the latest version of this package.
2. Create a *.py file at the root of your project.
3. Import AC: from anthros import core as ac
4. Run interactive mode: ac.interfaces.console.run()

Additionally

How can I find out more about the components? Everything is in the help; enter: help* (for now everything is in Russian, and you can help us with the translation).

How can I contact the creators? Join our communities: Discord, VK. You can also write to me or join Discord (Tand#9533).
anthroscore-eacl
AnthroScore

This repository contains code to compute AnthroScore. AnthroScore is introduced in the following paper, accepted to EACL 2024:

AnthroScore: A Computational Linguistic Measure of Anthropomorphism
Myra Cheng, Kristina Gligoric, Tiziano Piccardi, Dan Jurafsky (Stanford University)

Abstract: Anthropomorphism, or the attribution of human-like characteristics to non-human entities, has shaped conversations about the impacts and possibilities of technology. We present AnthroScore, an automatic metric of implicit anthropomorphism in language. We use a masked language model to quantify how non-human entities are implicitly framed as human by the surrounding context. We show that AnthroScore corresponds with human judgments of anthropomorphism and dimensions of anthropomorphism described in social science literature. Motivated by concerns of misleading anthropomorphism in computer science discourse, we use AnthroScore to analyze 15 years of research papers and downstream news articles. In research papers, we find that anthropomorphism has steadily increased over time, and that papers related to natural language processing (NLP) and language models have the most anthropomorphism. Within NLP papers, temporal increases in anthropomorphism are correlated with key neural advancements. Building upon concerns of scientific misinformation in mass media, we identify higher levels of anthropomorphism in news headlines compared to the research papers they cite. Since AnthroScore is lexicon-free, it can be directly applied to a wide range of text sources.

Setup

Download the repository, either via pip:

pip install anthroscore-eacl

or via GitHub:

git clone https://github.com/myracheng/anthroscore.git
cd anthroscore
pip install .

Install the spaCy model:

python -m spacy download en_core_web_sm

(The specific model used is https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl#sha256=86cc141f63942d4b2c5fcee06630fd6f904788d2f0ab005cce45aadb8fb73889)

Example usage

To obtain AnthroScores for the terms "model" and "system" in abstracts from example/acl_50.csv (a subset of ACL Anthology papers):

python get_anthroscore.py --input_file example/acl_50.csv \
    --text_column_name abstract --entities system model \
    --output_file example/results.csv --text_id_name acl_id
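The abstract describes masking an entity mention and asking a masked language model how likely human versus non-human words are in that slot. As a rough, self-contained illustration of that idea only, the sketch below computes a log-ratio from a dictionary of mask-fill probabilities; the pronoun sets and exact formula here are assumptions for illustration, so consult the paper and repository for the definitive definition:

```python
import math

def anthroscore_from_probs(mask_probs,
                           human=("he", "she", "He", "She"),
                           nonhuman=("it", "It")):
    """Log-ratio of human-pronoun to non-human-pronoun probability mass
    for the masked entity slot. Illustrative assumption, not the paper's
    exact implementation."""
    p_h = sum(mask_probs.get(t, 0.0) for t in human)
    p_n = sum(mask_probs.get(t, 0.0) for t in nonhuman)
    return math.log(p_h / p_n)

# A masked LM that strongly prefers "it" for the entity slot yields a
# negative score, i.e. the entity is framed as non-human by its context.
score = anthroscore_from_probs({"it": 0.6, "It": 0.1, "he": 0.05, "she": 0.05})
print(score < 0)
```

In the real pipeline these probabilities come from a masked language model run over each sentence mentioning the entity, and scores are aggregated across mentions.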
anthunder
anthunder (a.k.a. sofa-bolt-python)

anthunder (ant thunder) is a sofa-bolt library written in Python. It supports RPC calls via the 'sofa-bolt + protobuf' protocol.

Requirements

- python3 >= 3.5 (aio classes need asyncio support)
- python2.7 (limited support, needs extra 3rd-party libraries)
- mosn >= 1.3 (to use with version >= 0.6)
- mosn < 1.3 (to use with version < 0.6)

Roadmap

- bolt client (protobuf serialization)
- service discovery via mosn (sofa servicemesh sidecar)
- bolt server (protobuf serialization)
- hessian2 serialization support

Tutorial

As client (caller)

1. Acquire the .proto file.
2. Execute protoc --python_out=. *.proto to compile the protobuf file and get the _pb2.py file.
3. Import the protobuf classes (postfixed with _pb2):

```python
from SampleServicePbResult_pb2 import SampleServicePbResult
from SampleServicePbRequest_pb2 import SampleServicePbRequest

from anthunder import AioClient

spanctx = SpanContext()  # generate a new context, an object of mytracer.SpanContext, stores rpc_trace_context.
# spanctx = ctx  # or transferred from an upstream rpc

client = AioClient(my_app_name)  # my_app_name will be sent to the sidecar as the caller name.
# This creates a thread and sends a heartbeat to the mesh every 30s.

interface = 'com.alipay.rpc.common.service.facade.pb.SampleServicePb:1.0'

# Subscribe to the interface
client.subscribe(interface)

# Call synchronously
content = client.invoke_sync(interface, "hello",
                             SampleServicePbRequest(name=some_name).SerializeToString(),
                             timeout_ms=500, spanctx=spanctx)
result = SampleServicePbResult()
result.ParseFromString(content)

# Call asynchronously
def client_callback(resp):
    # callback function, accepts bytes as the only argument,
    # then deserializes and does further processing
    result = SampleServicePbResult()
    result.ParseFromString(resp)
    # do something

future = client.invoke_async(interface, "hello",
                             SampleServicePbRequest(name=some_name).SerializeToString(),
                             spanctx=spanctx, callback=client_callback)
```

See the project's unittests for a runnable demo.

As server

```python
from anthunder.listener import aio_listener

class SampleService(object):
    def __init__(self, ctx):
        # service must accept one param as spanctx for rpc tracing support
        self.ctx = ctx

    def hello(self, bs: bytes):
        obj = SampleServicePbRequest()
        obj.ParseFromString(bs)
        print("Processing Request", obj)
        return SampleServicePbResult(result=obj.name).SerializeToString()

listener = aio_listener.AioListener(('127.0.0.1', 12200), "test_app")
# register the interface and its implementation, plus its protobuf definition class
listener.handler.register_interface(
    "com.alipay.rpc.common.service.facade.pb.SampleServicePb:1.0",
    SampleService)
# start the server in a standalone thread
listener.run_threading()
# or start it in the current thread
listener.run_forever()
# publish interfaces to the service mesh
listener.publish()
# shut down the server
listener.shutdown()
```

License

Copyright (c) 2018-present, Ant Financial Service Group

Apache License 2.0. See LICENSE file.

Thirdparty

Part of the mysockpool package uses code from the urllib3 project under the terms of the MIT License. See origin-license.txt under the mysockpool package.

Release History

0.5.6 (2019-03-15)
- Bugfixes: fix an infinite loop bug when parsing the protocol

0.5.4 (2018-11-09)
- Bugfixes: fix server errors under python2.7

0.5.3 (2018-08-27)
- Features: support antsharecloud parameters.

0.5.2 (2018-09-03)
- Bugfixes: fix various errors under python2.7

0.5.1 (2018-08-31)
- Bugfixes: sofa trace rpc id may contain str.
anti
A module for working with browsers, Metrika, Webmaster, and more.

- Lets you retrieve ranking position data
- Lets you get a soup (parsed HTML) from Yandex and Google search results
- Utilities solve the problems of retrieving donors, backlinks, TIC, PR, etc.

Importing the modules:

    from anti import openYandex, openGoogle, SeoParser, Metrika, Webmaster

Changelog

0.7.1 Added a load balancer between servers; reduced the number of SQL queries to the database
0.7.2 Balancer moved to redis
0.7.3 Fix: empty search results
0.7.4 Fix SeoParser: fixed the output of pages in the index
1.0.1 Finished the functions needed for data manipulation
1.0.2 Added regionality
1.0.3 Added the pages_of_site_in_index_google function
1.0.4 Added functions for getting domain age and visits via liveinternet
1.0.5 Fixed the blocks feature. Added the first item to the whitelist. Added a function for clearing the current cache.
1.1.0 Fixed the save/retrieve logic (now everything is saved to redis first, then to postgres)
1.1.1 Added retrieving positions from the redis cache
1.2.0 Removed the multy functions. get_yandex_cache_pos now gets positions from redis. Saving to the database is now done separately.
1.2.1 Added cache clearing
1.2.2 Changed how Webmaster and Metrika are used
1.2.3 Fixed errors in seoparser
1.2.4 urllib and urllib2 replaced with requests
1.2.5 Fixed the PR-retrieval function
1.2.6 Changed how data is fetched from Webmaster; fixed the get_normal_url_decode function
1.2.7 Fixed an error in SeoUtils
1.2.8 Added the self.quote parameter to SeoUtils
1.2.9 Ports moved into the config
1.3.0 Dependencies cleaned up
1.3.1 Fixed output with normalize=True (get_yandex_cache_pos)
1.3.2 Added a separate server for the balancer
1.3.3 The search engine changed its markup
1.3.4 Changed markup handling when fetching results for a site
1.3.5 Returning full page URLs instead of relative ones
1.3.6 Fixed a bug: an empty list was passed as a default value
1.3.7 Fixed getting pages in the index
1.3.8 Added a timeout to requests calls
1.4.0 Reworked the redis functionality. Checks now go through json.
1.4.1 Fixed the redis_key update
1.4.2 Another hotfix for redis_key
1.4.3 Switched to a context manager
1.4.3.3 Updated the GetYaca method in seoutils
1.4.4 Added a storage argument to the get_yandex_cache_pos function
1.5.0 Added a decorator for exceptions
1.5.3 The search engine introduced a strange change to its markup
2.0.1 Tried a new version, decided to roll it back
2.1.2 Fixed functions after the version update
2.1.3 Redis keys changed to page:0:url and page:1:url
3.0.0 Rewrote the balancer on rabbitmq, rewrote the entire server side
3.2.5 Fixed issues with the new Flask, fixed Google parsing, and other features
3.2.6 Fixed fetching pages from Google
antiafk
Antiafk

Antiafk is a CLI built in Python. The intention was to create a package that gives users the option to trigger keyboard events at specified intervals, for a specified duration of time, via the Python REPL.

Installation

Windows

To install antiafk on Windows, you will need to install Python via the Windows installer provided by the Python Organization under Downloads, and then Windows.

Once installed, run whatever version of PowerShell you have installed as root, and execute the following commands:

    python -V && pip -V

If python -V returns the version of Python you just installed (3.6 or higher), your Python installation has been successful. In the event that the version does not match what you installed, try python3 -V to make sure a new alias has not been created for that specific version. If pip -V returns the executable path of pip, then we can move forward with the next step.

After you've run both of those commands and see that you have pip installed as well as a Python 3 interpreter, in your PowerShell session run:

    pip install antiafk

Test your installation by running:

    antiafk or antiafk --help

At this point, if you have not gotten a "command not found" error, then antiafk has been installed successfully.

MacOS

To install antiafk on MacOS, make sure you have at least Python 3.6 on your OS, as this package uses the f-string formatting introduced in Python 3.6. To install Python 3.6 alongside any current installations, either download the latest release provided by the Python Organization under Downloads (which will make an alt install), or use brew to install a specific version. With MacOS, you will NOT need to run this as root, unless you're running from a path with limited access.

Once installed, make sure your version of Python was successfully installed by running:

    python -V
    python3 -V
    python3.6 -V

If one of those installations returns Python 3.6 or higher, locate the pip executable that should have been installed along with it. You can check by running the command:

    pip -V
    pip3 -V
    pip3.6 -V

If you can't locate the pip executable that was installed alongside your Python installation, go to:

    /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip (or whatever version 3.6 or higher you have installed)

You can either execute pip by declaring the path directly, or by exporting a variable to your path, such as:

    export pip36=/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip
    $pip36 install package

Once you have successfully installed Python 3.6 or higher, and have pip working as well, proceed with installing antiafk with whatever pip alias you need to use. For me, it would be:

    pip3 install antiafk

Once installed, test your installation by running:

    antiafk or antiafk --help

If no "command not found" error is returned, then antiafk has been installed successfully.

Usage

Once installed, to run antiafk with the default key-press interval and stop-execution time, run the following command:

    antiafk <key>

Example:

    antiafk space

To view which keys are supported, run:

    antiafk --help

To run antiafk with your own interval and stop-execution time, run the following:

    antiafk <key> -i <interval> -s <how long before program exit>

Example:

    antiafk space -i 5 Minutes -s 1 Hour

Note: there is no need for single or double quotation marks.
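The interval/duration behaviour described above can be sketched in plain Python. `press_schedule` is a hypothetical helper, not part of the antiafk package (the real CLI drives an actual keyboard library); it only shows when key presses would fire for given `-i`/`-s` values.

```python
# Hypothetical sketch of the antiafk timing loop: press a key every
# `interval` seconds until `stop_after` seconds have elapsed.
# The key press itself is stubbed out here.
def press_schedule(interval, stop_after):
    # times (in seconds from start) at which the key would be pressed
    return [t for t in range(interval, stop_after + 1, interval)]

# "antiafk space -i 5 -s 60" would press at these offsets:
presses = press_schedule(5, 60)
```

With a 5-second interval and a 60-second stop time, the schedule holds 12 presses, the last one at the 60-second mark.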
antialiased-cnns
Antialiased CNNs

[Project Page] [Paper] [Talk]

Making Convolutional Networks Shift-Invariant Again
Richard Zhang. In ICML, 2019.

Quick & easy start

Run pip install antialiased-cnns

    import antialiased_cnns
    model = antialiased_cnns.resnet50(pretrained=True)

If you have a model already and want to antialias and continue training, copy your old weights over:

    import torchvision.models as models
    old_model = models.resnet50(pretrained=True)  # old (aliased) model
    antialiased_cnns.copy_params_buffers(old_model, model)  # copy the weights over

If you want to modify your own model, use the BlurPool layer. More information about our provided models and how to use BlurPool is below.

    C = 10  # example feature channel size
    blurpool = antialiased_cnns.BlurPool(C, stride=2)  # BlurPool layer; use to downsample a feature map
    ex_tens = torch.Tensor(1, C, 128, 128)
    print(blurpool(ex_tens).shape)  # 1xCx64x64 tensor

Updates

- (Oct 2020) Finetune. I initialize the antialiased model with weights from the baseline model and finetune. Before, I was training from scratch. The results are better.
- (Oct 2020) Additional models. We now have 23 total model variants. I added variants of the vgg, densenet, resnext, and wide resnet families! The same conclusions hold.
- (Sept 2020) Pip install. You can now also pip install antialiased-cnns and load models with the pretrained=True flag.
- (Sept 2020) Kernel 4. I have added kernel size 4 experiments. When downsampling an even-sized feature map (e.g., 128x128 --> 64x64), this is actually the correct size to use to keep the indices from drifting.

Table of contents

1. More information about antialiased models
2. Instructions for antialiasing your own model, using the BlurPool layer
3. ImageNet training and evaluation code. Achieving better consistency, while maintaining or improving accuracy, is an open problem. Help improve the results!

(0) Preliminaries

Pip install this package:

    pip install antialiased-cnns

Or clone this repository and install requirements (notably, PyTorch):

    git clone https://github.com/adobe/antialiased-cnns.git
    cd antialiased-cnns
    pip install -r requirements.txt

(1) Loading an antialiased model

The following loads a pretrained antialiased model, perhaps as a backbone for your application.

    import antialiased_cnns
    model = antialiased_cnns.resnet50(pretrained=True, filter_size=4)

We also provide weights for antialiased AlexNet, VGG16(bn), Resnet18/34/50/101, Densenet121, and MobileNetv2 (see example_usage.py).

(2) How to antialias your own architecture

The antialiased_cnns module contains the BlurPool class, which does blur + subsampling. Run pip install antialiased-cnns or copy the antialiased_cnns subdirectory.

Methodology

The methodology is simple -- first evaluate with stride 1, and then use our BlurPool layer to do antialiased downsampling. Make the following architectural changes.

    import antialiased_cnns

    # MaxPool --> MaxBlurPool
    baseline = nn.MaxPool2d(kernel_size=2, stride=2)
    antialiased = [nn.MaxPool2d(kernel_size=2, stride=1),
                   antialiased_cnns.BlurPool(C, stride=2)]

    # Conv --> ConvBlurPool
    baseline = [nn.Conv2d(Cin, C, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True)]
    antialiased = [nn.Conv2d(Cin, C, kernel_size=3, stride=1, padding=1),
                   nn.ReLU(inplace=True),
                   antialiased_cnns.BlurPool(C, stride=2)]

    # AvgPool --> BlurPool
    baseline = nn.AvgPool2d(kernel_size=2, stride=2)
    antialiased = antialiased_cnns.BlurPool(C, stride=2)

We assume the incoming tensor has C channels. Computing a layer at stride 1 instead of stride 2 adds memory and run-time. As such, we typically skip antialiasing at the highest resolution (early in the network), to prevent large increases.

Add antialiasing and then continue training

If you already trained a model and then add antialiasing, you can fine-tune from that old model:

    antialiased_cnns.copy_params_buffers(old_model, antialiased_model)

If this doesn't work, you can just copy the parameters (and not buffers).
Adding antialiasing doesn't add any parameters, so the parameter lists are identical. (It does add buffers, so a heuristic is used to match the buffers, which may throw an error.)

    antialiased_cnns.copy_params(old_model, antialiased_model)

(3) ImageNet Evaluation, Results, and Training code

We observe improvements in both accuracy (how often the image is classified correctly) and consistency (how often two shifts of the same image are classified the same).

ACCURACY

| Model | Baseline | Antialiased | Delta |
|---|---|---|---|
| alexnet | 56.55 | 56.94 | +0.39 |
| vgg11 | 69.02 | 70.51 | +1.49 |
| vgg13 | 69.93 | 71.52 | +1.59 |
| vgg16 | 71.59 | 72.96 | +1.37 |
| vgg19 | 72.38 | 73.54 | +1.16 |
| vgg11_bn | 70.38 | 72.63 | +2.25 |
| vgg13_bn | 71.55 | 73.61 | +2.06 |
| vgg16_bn | 73.36 | 75.13 | +1.77 |
| vgg19_bn | 74.24 | 75.68 | +1.44 |
| resnet18 | 69.74 | 71.67 | +1.93 |
| resnet34 | 73.30 | 74.60 | +1.30 |
| resnet50 | 76.16 | 77.41 | +1.25 |
| resnet101 | 77.37 | 78.38 | +1.01 |
| resnet152 | 78.31 | 79.07 | +0.76 |
| resnext50_32x4d | 77.62 | 77.93 | +0.31 |
| resnext101_32x8d | 79.31 | 79.33 | +0.02 |
| wide_resnet50_2 | 78.47 | 78.70 | +0.23 |
| wide_resnet101_2 | 78.85 | 78.99 | +0.14 |
| densenet121 | 74.43 | 75.79 | +1.36 |
| densenet169 | 75.60 | 76.73 | +1.13 |
| densenet201 | 76.90 | 77.31 | +0.41 |
| densenet161 | 77.14 | 77.88 | +0.74 |
| mobilenet_v2 | 71.88 | 72.72 | +0.84 |

CONSISTENCY

| Model | Baseline | Antialiased | Delta |
|---|---|---|---|
| alexnet | 78.18 | 83.31 | +5.13 |
| vgg11 | 86.58 | 90.09 | +3.51 |
| vgg13 | 86.92 | 90.31 | +3.39 |
| vgg16 | 88.52 | 90.91 | +2.39 |
| vgg19 | 89.17 | 91.08 | +1.91 |
| vgg11_bn | 87.16 | 90.67 | +3.51 |
| vgg13_bn | 88.03 | 91.09 | +3.06 |
| vgg16_bn | 89.24 | 91.58 | +2.34 |
| vgg19_bn | 89.59 | 91.60 | +2.01 |
| resnet18 | 85.11 | 88.36 | +3.25 |
| resnet34 | 87.56 | 89.77 | +2.21 |
| resnet50 | 89.20 | 91.32 | +2.12 |
| resnet101 | 89.81 | 91.97 | +2.16 |
| resnet152 | 90.92 | 92.42 | +1.50 |
| resnext50_32x4d | 90.17 | 91.48 | +1.31 |
| resnext101_32x8d | 91.33 | 92.67 | +1.34 |
| wide_resnet50_2 | 90.77 | 92.46 | +1.69 |
| wide_resnet101_2 | 90.93 | 92.10 | +1.17 |
| densenet121 | 88.81 | 90.35 | +1.54 |
| densenet169 | 89.68 | 90.61 | +0.93 |
| densenet201 | 90.36 | 91.32 | +0.96 |
| densenet161 | 90.82 | 91.66 | +0.84 |
| mobilenet_v2 | 86.50 | 87.73 | +1.23 |

To reduce clutter, extended results (different filter sizes) are here. Help improve the results!

Licenses

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

All material is made available under the Creative Commons BY-NC-SA 4.0 license by Adobe Inc. You can use, redistribute, and adapt the material for non-commercial purposes, as long as you give appropriate credit by citing our paper and indicating any changes that you've made.

The repository builds off the PyTorch examples repository and the torchvision models repository. These are BSD-style licensed.

Citation, contact

If you find this useful for your research, please consider citing this bibtex. Please contact Richard Zhang <rizhang at adobe dot com> with any comments or feedback.
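The blur-then-subsample operation that BlurPool performs can be illustrated with a stdlib-only toy in one dimension. This is not the library's implementation: it hard-codes a [1, 2, 1]/4 binomial kernel, edge-replicating padding, and a plain Python list, where the real layer works on 4-D tensors with configurable kernel sizes.

```python
# Toy 1-D sketch of the BlurPool idea: low-pass filter with a
# binomial kernel [1, 2, 1] / 4, THEN subsample by the stride.
# (Naive strided pooling would subsample without the blur, which
# is what causes the shift-sensitivity the paper addresses.)
def blur_pool_1d(x, stride=2):
    # replicate-pad by one sample on each side so the filter stays in bounds
    padded = [x[0]] + list(x) + [x[-1]]
    blurred = [(padded[i - 1] + 2 * padded[i] + padded[i + 1]) / 4.0
               for i in range(1, len(padded) - 1)]
    return blurred[::stride]  # subsample after blurring

signal = [0, 0, 1, 1, 0, 0, 1, 1]
print(blur_pool_1d(signal))  # → [0.0, 0.75, 0.25, 0.75]
```

Note the smoothed, half-length output: shifting `signal` by one sample changes the result gradually instead of flipping it wholesale, which is the intuition behind the consistency gains in the tables above.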
antiberty
AntiBERTy

Official repository for AntiBERTy, an antibody-specific transformer language model pre-trained on 558M natural antibody sequences, as described in "Deciphering antibody affinity maturation with language models and weakly supervised learning".

Setup

To use AntiBERTy, install via pip:

    pip install antiberty

Alternatively, you can clone this repository and install the package locally:

    $ git clone git@github.com:jeffreyruffolo/AntiBERTy.git
    $ pip install AntiBERTy

Usage

Embeddings

To use AntiBERTy to generate sequence embeddings, use the embed function. The output is a list of embedding tensors, where each tensor is the embedding for the corresponding sequence. Each embedding has dimension [(Length + 2) x 512].

    from antiberty import AntiBERTyRunner

    antiberty = AntiBERTyRunner()
    sequences = [
        "EVQLVQSGPEVKKPGTSVKVSCKASGFTFMSSAVQWVRQARGQRLEWIGWIVIGSGNTNYAQKFQERVTITRDMSTSTAYMELSSLRSEDTAVYYCAAPYCSSISCNDGFDIWGQGTMVTVS",
        "DVVMTQTPFSLPVSLGDQASISCRSSQSLVHSNGNTYLHWYLQKPGQSPKLLIYKVSNRFSGVPDRFSGSGSGTDFTLKISRVEAEDLGVYFCSQSTHVPYTFGGGTKLEIK",
    ]
    embeddings = antiberty.embed(sequences)

To access the attention matrices, pass the return_attention flag to the embed function. The output is a list of attention matrices, where each matrix is the attention matrix for the corresponding sequence. Each attention matrix has dimension [Layer x Heads x (Length + 2) x (Length + 2)].

    from antiberty import AntiBERTyRunner

    antiberty = AntiBERTyRunner()
    sequences = [
        "EVQLVQSGPEVKKPGTSVKVSCKASGFTFMSSAVQWVRQARGQRLEWIGWIVIGSGNTNYAQKFQERVTITRDMSTSTAYMELSSLRSEDTAVYYCAAPYCSSISCNDGFDIWGQGTMVTVS",
        "DVVMTQTPFSLPVSLGDQASISCRSSQSLVHSNGNTYLHWYLQKPGQSPKLLIYKVSNRFSGVPDRFSGSGSGTDFTLKISRVEAEDLGVYFCSQSTHVPYTFGGGTKLEIK",
    ]
    embeddings, attentions = antiberty.embed(sequences, return_attention=True)

The embed function can also be used with masked sequences. Masked residues should be indicated with underscores.

Classification

To use AntiBERTy to predict the species and chain type of sequences, use the classify function. The output is two lists of classifications, one entry per input sequence.

    from antiberty import AntiBERTyRunner

    antiberty = AntiBERTyRunner()
    sequences = [
        "EVQLVQSGPEVKKPGTSVKVSCKASGFTFMSSAVQWVRQARGQRLEWIGWIVIGSGNTNYAQKFQERVTITRDMSTSTAYMELSSLRSEDTAVYYCAAPYCSSISCNDGFDIWGQGTMVTVS",
        "DVVMTQTPFSLPVSLGDQASISCRSSQSLVHSNGNTYLHWYLQKPGQSPKLLIYKVSNRFSGVPDRFSGSGSGTDFTLKISRVEAEDLGVYFCSQSTHVPYTFGGGTKLEIK",
    ]
    species_preds, chain_preds = antiberty.classify(sequences)

The classify function can also be used with masked sequences. Masked residues should be indicated with underscores.

Mask prediction

To use AntiBERTy to predict the identity of masked residues, use the fill_masks function. Masked residues should be indicated with underscores. The output is a list of filled sequences, corresponding to the input masked sequences.

    from antiberty import AntiBERTyRunner

    antiberty = AntiBERTyRunner()
    sequences = [
        "____VQSGPEVKKPGTSVKVSCKASGFTFMSSAVQWVRQARGQRLEWIGWIVIGSGN_NYAQKFQERVTITRDM__STAYMELSSLRSEDTAVYYCAAPYCSSISCNDGFD____GTMVTVS",
        "DVVMTQTPFSLPV__GDQASISCRSSQSLVHSNGNTY_HWYLQKPGQSPKLLIYKVSNRFSGVPDRFSG_GSGTDFTLKISRVEAEDLGVYFCSQSTHVPYTFGG__KLEIK",
    ]
    filled_sequences = antiberty.fill_masks(sequences)

Pseudo log-likelihood

To use AntiBERTy to calculate the pseudo log-likelihood of a sequence, use the pseudo_log_likelihood function. The pseudo log-likelihood of a sequence is calculated as the average of the per-residue masked log-likelihoods. The output is a list of pseudo log-likelihoods, corresponding to the input sequences.

    from antiberty import AntiBERTyRunner

    antiberty = AntiBERTyRunner()
    sequences = [
        "EVQLVQSGPEVKKPGTSVKVSCKASGFTFMSSAVQWVRQARGQRLEWIGWIVIGSGNTNYAQKFQERVTITRDMSTSTAYMELSSLRSEDTAVYYCAAPYCSSISCNDGFDIWGQGTMVTVS",
        "DVVMTQSSTPFSLPVSLGDQASISCRSSQSLVHSNGNTYLHWYLQKPGQSPKLLIYKVSNRFSGVPDRFSGSGSGTDFTLKISRVEAEDLGVYFCSQSTHVPYTFGGGTKLEIK",
    ]
    pll = antiberty.pseudo_log_likelihood(sequences, batch_size=16)

Citing this work

    @article{ruffolo2021deciphering,
        title={Deciphering antibody affinity maturation with language models and weakly supervised learning},
        author={Ruffolo, Jeffrey A and Gray, Jeffrey J and Sulam, Jeremias},
        journal={arXiv preprint arXiv:2112.07782},
        year={2021}
    }
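The "average of per-residue masked log-likelihoods" described above is simple arithmetic once the model has produced a probability for each masked-out residue. The sketch below writes that arithmetic out; the probabilities are hypothetical stand-ins for the transformer's masked predictions, which AntiBERTy computes internally.

```python
import math

# Sketch of how a pseudo log-likelihood is assembled from per-residue
# masked probabilities. Each p is "probability the model assigns to the
# true residue when that position is masked" — faked here with constants.
def pseudo_log_likelihood(per_residue_probs):
    # average of log p(residue_i | rest of sequence)
    return sum(math.log(p) for p in per_residue_probs) / len(per_residue_probs)

probs = [0.9, 0.8, 0.95]  # hypothetical masked-prediction probabilities
pll = pseudo_log_likelihood(probs)  # negative; closer to 0 = more "native-like"
```

Since every probability is below 1, the result is always negative, and sequences the model finds more plausible score closer to zero.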
antiberty-pytorch
No description available on PyPI.
antibiotics
antibiotics

NamedTuple / dataclasses <-> delimited text

"The best treatment for acute episodes of PANDAS is to treat the strep infection causing the symptoms, if it is still present, with antibiotics." --National Institute of Mental Health

antibiotics is a minimalist type-driven serialization/deserialization library inspired by Serde and cassava.

It uses type annotations to automatically read and write NamedTuple or @dataclass objects to or from delimited text files.

Out of the box, it only knows about Python scalar types and typing.Unions of them (including typing.Optional), but an extension mechanism for arbitrary type-directed serialization and deserialization is provided through the type_serde_ext argument to the Delimited constructor - see examples/advanced.py.

For Union types, serialization is driven by the runtime type, and deserialization is attempted in the order of declaration of the Union arguments - except that NoneType is tried first if present, to preserve the expected behavior when deserializing null/missing values of types whose deserializers do not throw when receiving '' as an argument.

A type ExternalName is also provided which may be used with typing.Annotated to specify the name which should be used for a member when serializing or deserializing (e.g. to match CSV headers).

Please note that as with the built-in csv module, file-like objects used with this library should be opened with newline=''.

Basic example

    from antibiotics import Delimited
    from dataclasses import dataclass
    from typing import NamedTuple, Optional

    @dataclass
    class SampleDC():
        w: Optional[float]
        x: int
        y: bool
        z: str

    class SampleNT(NamedTuple):
        w: Optional[float]
        x: int
        y: bool
        z: str

    if __name__ == '__main__':
        dcs = list()
        nts = list()
        for i in range(10):
            even = i % 2 == 0
            dcs.append(SampleDC(i * 3.5 if even else None, i, not even, f'_",\t_{i}'))
            nts.append(SampleNT(i * 3.5 if even else None, i, not even, f'_",\t_{i}'))

        csv = Delimited()
        with open('dcs.csv', 'w', newline='') as f:
            csv.write(SampleDC, dcs, f)

        tsv = Delimited(sep='\t', escape='\\', newline='\n')
        with open('nts.tsv', 'w', newline='') as f:
            tsv.write(SampleNT, dcs, f, header=False)

        with open('dcs.csv', 'r', newline='') as f:
            for r in csv.read(SampleDC, f):
                print(r)

        with open('nts.tsv', 'r', newline='') as f:
            for r in tsv.read(SampleNT, f, header=False):
                print(r)

Example with custom external names

    from antibiotics import Delimited, ExternalName
    from dataclasses import dataclass
    from typing import Annotated, Optional

    @dataclass
    class SampleDC():
        w: Annotated[Optional[float], ExternalName('BigW')]
        x: Annotated[int, ExternalName('Fancy X')]
        y: bool
        z: str

    if __name__ == '__main__':
        dcs = list()
        for i in range(10):
            even = i % 2 == 0
            dcs.append(SampleDC(i * 3.5 if even else None, i, not even, f'_",\t_{i}'))

        csv = Delimited()
        with open('dcs.csv', 'w', newline='') as f:
            csv.write(SampleDC, dcs, f)

        with open('dcs.csv', 'r', newline='') as f:
            for dc in csv.read(SampleDC, f):
                print(dc)

Documentation

Documentation strings and type annotations are provided for public types and functions. We recommend viewing "nice" documentation pages using pdoc; e.g. in the same environment as the antibiotics package is installed, install pdoc with pip install pdoc, then run pdoc antibiotics.

Install with:

    pip install antibiotics

Or download directly from PyPI.

(c) 2023 dwt | terminus data science, LLC

available under the Apache License 2.0
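The core "type-driven" idea — deriving the wire format from a class's annotations rather than from hand-written field lists — can be illustrated with a stdlib-only toy. This is not the antibiotics implementation (it skips Unions, ExternalName, escaping, and the extension mechanism), just a sketch of how annotations can drive both the header and the per-field parsers.

```python
# Toy type-driven CSV round-trip: the header comes from the dataclass's
# field names, and parsing calls each field's annotated type as a
# constructor. Point, write_delimited, and read_delimited are all
# illustrative names, not part of the antibiotics API.
import csv
import io
from dataclasses import astuple, dataclass, fields

@dataclass
class Point:
    x: int
    y: float

def write_delimited(cls, rows, f):
    w = csv.writer(f)
    w.writerow([fld.name for fld in fields(cls)])  # header from annotations
    for r in rows:
        w.writerow(astuple(r))

def read_delimited(cls, f):
    r = csv.reader(f)
    names = next(r)                                 # consume the header
    types = {fld.name: fld.type for fld in fields(cls)}
    for row in r:
        # each cell is parsed by calling its field's type, e.g. int("1")
        yield cls(*(types[n](v) for n, v in zip(names, row)))

buf = io.StringIO(newline='')                       # note newline='', as above
write_delimited(Point, [Point(1, 2.5)], buf)
buf.seek(0)
result = list(read_delimited(Point, buf))           # [Point(x=1, y=2.5)]
```

The real library generalizes this pattern: instead of calling the annotation directly, it looks the type up in a serializer/deserializer table that `type_serde_ext` can extend.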
antiblock-scrapy
antiblock-scrapy

Anti-blocking mechanisms for Scrapy.

Features

Scrapy already ships with several anti-blocking features that only need to be configured, and many others are implemented by third parties (some of them listed below).

The features implemented here are:

- User-agent rotation
- IP rotation via a Tor proxy

Installation

The easiest way:

    pip install antiblock-scrapy

Installing and configuring Tor

Tor needs to be configured. First, install it:

    sudo apt-get install tor

Stop it so it can be configured:

    sudo service tor stop

Open its configuration file as root, available at /etc/tor/torrc, for example using nano:

    sudo nano /etc/tor/torrc

Add the lines below and save:

    ControlPort 9051
    CookieAuthentication 0

Restart Tor:

    sudo service tor start

You can check your machine's IP and compare it with Tor's as follows:

To see your IP:

    curl http://icanhazip.com/

To see Tor's IP:

    torify curl http://icanhazip.com/

Tor proxies are not supported by Scrapy. To work around this, an intermediary is needed, in this case Privoxy.

Tor's proxy server listens, by default, at 127.0.0.1:9050.

Installing and configuring Privoxy:

Install:

    sudo apt install privoxy

Stop it:

    sudo service privoxy stop

Configure it to use Tor; open its configuration file:

    sudo nano /etc/privoxy/config

Add the following line:

    forward-socks5t / 127.0.0.1:9050 .

Start it:

    service privoxy start

By default, Privoxy will run at 127.0.0.1:8118.

Test:

    torify curl http://icanhazip.com/
    curl -x 127.0.0.1:8118 http://icanhazip.com/

The IP shown in the two steps above should be the same.

IP rotation via Tor

Configure the middleware in your project's settings file (settings.py):

    DOWNLOADER_MIDDLEWARES = {
        ...,
        'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
        'antiblock_scrapy.middlewares.TorProxyMiddleware': 100
    }

Enable the extension:

    TOR_IPROTATOR_ENABLED = True
    TOR_IPROTATOR_CHANGE_AFTER = # number of requests made from the same IP address

By default, an IP may be reused after 10 other IPs have been used. This value can be changed via the TOR_IPROTATOR_ALLOW_REUSE_IP_AFTER variable, as below:

    TOR_IPROTATOR_ALLOW_REUSE_IP_AFTER = #

Too large a number can make it slow, or even impossible, to obtain a new IP. If the value is 0, no record of used IPs is kept.

IP rotation via a proxy list

Rotating IPs via Tor can slow down crawling. For that case there are third-party tools that rotate a given list of proxies, possibly making the crawl faster (compared to Tor), such as:

    https://github.com/xiaowangwindow/scrapy-rotated-proxy
    https://github.com/TeamHG-Memex/scrapy-rotating-proxies

User-agent rotation

In your Scrapy project's settings file, add the following lines (settings.py):

    DOWNLOADER_MIDDLEWARES = {
        ...,
        'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
        'antiblock_scrapy.middlewares.RotateUserAgentMiddleware': 500,
    }

Define the list of user-agents, enable the module, and set a minimum and maximum usage for each user-agent (the actual usage of each user-agent will be a random value between these numbers) (settings.py):

    USER_AGENTS = ['user-agent-1', 'user-agent-2', ..., 'user-agent-n']
    ROTATE_USER_AGENT_ENABLED = True
    MIN_USER_AGENT_USAGE = # minimum user-agent usage
    MAX_USER_AGENT_USAGE = # maximum user-agent usage

You can check which user-agent is being used at: https://www.whatismybrowser.com/detect/what-is-my-user-agent

Delays between requests

This functionality is already available in Scrapy by default through DOWNLOAD_DELAY.

By default:

- The value of DOWNLOAD_DELAY is 0.25 seconds
- The time between requests is not fixed; a value between 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY is chosen for each request

To change the delay between requests, do (in settings.py):

    DOWNLOAD_DELAY = # time in seconds between requests

To force a fixed time between requests, instead of the default random behavior, do (in settings.py):

    RANDOMIZE_DOWNLOAD_DELAY = False

AutoThrottle

A more advanced option for delays between requests is AutoThrottle. It automatically adjusts the speed between requests according to the server's response latency and the engine's processing capacity.

By default, this setting is disabled, but it can be enabled with the following command (in settings.py):

    AUTOTHROTTLE_ENABLED = True

You must define an initial delay that will be adjusted automatically over the course of the requests. Define it with the command below; the default is 5.0 seconds (in settings.py):

    AUTOTHROTTLE_START_DELAY = # initial delay

Also define a maximum delay; the default is 60.0 seconds (in settings.py):

    AUTOTHROTTLE_MAX_DELAY = # maximum delay

The delay of subsequent requests will be adjusted to a value respecting DOWNLOAD_DELAY and AUTOTHROTTLE_MAX_DELAY, taking into account the average number of parallel requests sent to the server, which by default is 1.0. This value can be adjusted with the command below (in settings.py):

    AUTOTHROTTLE_TARGET_CONCURRENCY = # average number of concurrent requests

More details can be found here.

Cookie management

Scrapy already has cookie-management mechanisms, and details can be found here.

For example, if you have the access cookies for a site that requires login, one possible approach is to create a Spider like the one below:

    import scrapy

    # Login cookies for the site
    COOKIES = [{}, ..., {}]

    class LoginSpider(scrapy.Spider):
        name = 'foo'

        def start_requests(self):
            urls = ['site-com-login-1', 'site-com-login-2', ..., 'site-com-login-n']
            for url in urls:
                yield scrapy.Request(url=url, cookies=COOKIES, callback=self.parse)

        def parse(self, response):
            ...

Other ways of handling cookies, such as giving each request its own cookie, can be implemented using cookiejar; more details here.

Third-party libraries allow cookie persistence and other features, such as scrapy-cookies.
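The randomized-delay rule described above (a uniform draw between 0.5x and 1.5x of DOWNLOAD_DELAY, or a fixed delay when RANDOMIZE_DOWNLOAD_DELAY is False) can be written out directly. `next_delay` is an illustrative helper mimicking the documented behaviour, not Scrapy's internal code.

```python
import random

# Sketch of Scrapy's download-delay behaviour as documented above:
# each request waits a uniform random time in
# [0.5 * DOWNLOAD_DELAY, 1.5 * DOWNLOAD_DELAY], unless randomization
# is disabled, in which case the delay is fixed.
def next_delay(download_delay, randomize=True):
    if not randomize:  # RANDOMIZE_DOWNLOAD_DELAY = False -> fixed delay
        return download_delay
    return random.uniform(0.5 * download_delay, 1.5 * download_delay)

d = next_delay(0.25)  # somewhere between 0.125 and 0.375 seconds
```

With the default DOWNLOAD_DELAY of 0.25, every draw therefore lands between 0.125 and 0.375 seconds, which is the jitter that makes request timing harder to fingerprint.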
antiblock-scrapy-selenium
antiblock_scrapy_selenium

This module is an extension of the scrapy-selenium project.

The main use of scrapy-selenium is for sites that need to execute javascript to render their content. On the other hand, basic anti-blocking mechanisms for crawling are not part of the original project.

scrapy-selenium was extended using antiblock-selenium as a base, which allows rotating IPs via Tor, setting delays between requests (random or fixed), rotating user-agents, and persisting/loading cookies.

Note: there is no compatibility with remote Selenium in this project.

Features

A combination of scrapy-selenium with antiblock-selenium, that is:

- Allow Scrapy to load sites that require javascript, among other scrapy-selenium features
- Avoid blocks while crawling, by means of:
    - IP rotation via Tor
    - User-agent rotation
    - Random or fixed delays between requests
    - Persisting/loading cookies

Installation

The easiest way:

    pip install antiblock-scrapy-selenium

Configuration

Follow the Tor configuration steps in antiblock-selenium.

The supported browsers are:

- Chrome
- Firefox

Usage

Basic

Enable the middleware:

    DOWNLOADER_MIDDLEWARES = {
        'antiblock_scrapy_selenium.SeleniumMiddleware': 800
    }

Add the browser to be used, the location of the driver executable, and the arguments to pass to it:

    # settings.py
    from shutil import which

    SELENIUM_DRIVER_NAME = 'firefox'  # or chrome
    SELENIUM_DRIVER_EXECUTABLE_PATH = which('geckodriver')
    SELENIUM_DRIVER_ARGUMENTS = ['-headless']  # '--headless' if using chrome

Optionally, set the location of the browser executable:

    SELENIUM_BROWSER_EXECUTABLE_PATH = which('firefox')

Use antiblock_scrapy_selenium.SeleniumRequest instead of Scrapy's Request, as below:

    from antiblock_scrapy_selenium import SeleniumRequest

    yield SeleniumRequest(url=url, callback=self.parse_result)

Example with a Spider:

    import scrapy
    from antiblock_scrapy_selenium import SeleniumRequest

    class FooSpider(scrapy.Spider):
        name = 'foo'

        def start_requests(self):
            url = 'https://alguma-url'
            yield SeleniumRequest(url=url, callback=self.parse)

        def parse(self, response):
            pass

Use the other scrapy-selenium features as usual, available here.

The SELENIUM_COMMAND_EXECUTOR parameter of scrapy-selenium is not supported.

Using the anti-blocking mechanisms

After following the basic usage steps, configure the camouflage mechanisms as below.

IP rotation via Tor

Parameters:

- SELENIUM_DRIVER_CHANGE_IP_AFTER: Defines after how many requests the IP is changed (default 42)
- SELENIUM_DRIVER_ALLOW_REUSE_IP_AFTER: Defines when an IP may be reused (default 10)

Example:

    SELENIUM_DRIVER_CHANGE_IP_AFTER = 42
    SELENIUM_DRIVER_ALLOW_REUSE_IP_AFTER = 5

User-agent rotation

User-agent rotation is supported for Firefox only.

Parameters:

- SELENIUM_DRIVER_USER_AGENTS: List of user-agents to rotate.
- SELENIUM_DRIVER_CHANGE_USER_AGENT_AFTER: When the user-agent should be changed (default 0 - the user-agent does not change)

Example:

    # settings.py
    SELENIUM_DRIVER_USER_AGENTS = ['user-agent-1', 'user-agent-2', ..., 'user-agent-n']
    SELENIUM_DRIVER_CHANGE_USER_AGENT_AFTER = 721  # requests with the same user-agent, e.g., 10, 20, 30...

Delays between requests

Allows random or fixed delays between requests.

Parameters:

- SELENIUM_DRIVER_TIME_BETWEEN_CALLS: Time in seconds between requests. Accepts numbers with up to 2 decimal places (default 0.25)
- SELENIUM_DRIVER_RANDOM_DELAY: Whether the delay between requests is fixed (by setting this parameter to False) or random, chosen between 0.5 * SELENIUM_DRIVER_TIME_BETWEEN_CALLS and 1.5 * SELENIUM_DRIVER_TIME_BETWEEN_CALLS (default True)

    # settings.py
    SELENIUM_DRIVER_TIME_BETWEEN_CALLS = 2.5
    SELENIUM_DRIVER_RANDOM_DELAY = False  # fixed minimum time between requests

Cookie management

Parameters:

- SELENIUM_DRIVER_PERSIST_COOKIES_WHEN_CLOSE: Whether the driver's cookies are saved when it is closed (default False)
- SELENIUM_DRIVER_RELOAD_COOKIES_WHEN_START: Whether cookies saved in the last session are reloaded at startup (default False)
    - If True, the cookie domain must be specified in SELENIUM_DRIVER_COOKIE_DOMAIN
- SELENIUM_DRIVER_LOCATION_OF_COOKIES: Where cookies are saved (default 'cookies.pkl')
- SELENIUM_DRIVER_LOAD_COOKIES: List of cookies to load (default [] - empty list)
    - If a non-empty list is passed, the cookie domain must be specified in SELENIUM_DRIVER_COOKIE_DOMAIN
- SELENIUM_DRIVER_COOKIE_DOMAIN: Domain where the cookies are valid.

Example - persisting cookies:

    # settings.py
    SELENIUM_DRIVER_PERSIST_COOKIES_WHEN_CLOSE = True
    SELENIUM_DRIVER_RELOAD_COOKIES_WHEN_START = True
    SELENIUM_DRIVER_COOKIE_DOMAIN = 'https://www.site-sendo-coletado.com/'
    SELENIUM_DRIVER_LOCATION_OF_COOKIES = 'cookies-1.pkl'

Example - loading cookies:

    cookie1 = {'country': 'BR', 'currency': 'dolar'}
    cookie2 = {'lang': 'pt-br'}

    SELENIUM_DRIVER_LOAD_COOKIES = [cookie1, cookie2]
    SELENIUM_DRIVER_COOKIE_DOMAIN = 'https://www.site-sendo-coletado.com/'
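The two IP-rotation settings above can be pictured as simple bookkeeping: rotate identity every CHANGE_IP_AFTER requests, and refuse any exit IP seen among the last ALLOW_REUSE_IP_AFTER identities. `IpRotationPolicy` below is a hypothetical sketch of that policy in plain Python, not the middleware's actual code.

```python
from collections import deque

# Toy bookkeeping for the IP-rotation policy described above.
# With allow_reuse_ip_after=0 the deque has maxlen 0, so nothing is
# remembered and any IP is accepted — matching "0 means reuse anytime".
class IpRotationPolicy:
    def __init__(self, change_ip_after=42, allow_reuse_ip_after=10):
        self.change_ip_after = change_ip_after
        self.recent = deque(maxlen=allow_reuse_ip_after)
        self.requests = 0

    def register_request(self):
        self.requests += 1

    def should_change_ip(self):
        # time to ask Tor for a new circuit?
        return self.requests > 0 and self.requests % self.change_ip_after == 0

    def accept_new_ip(self, ip):
        if ip in self.recent:
            return False  # used too recently, request another circuit
        self.recent.append(ip)  # oldest entry falls off automatically
        return True

policy = IpRotationPolicy(change_ip_after=3, allow_reuse_ip_after=2)
```

Because the deque is bounded, an IP becomes acceptable again as soon as two newer IPs have pushed it out, mirroring the documented reuse window.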
antiblock-selenium
antiblock-seleniumChrome e Firefox selenium webdrivers com alguns mecanismos antibloqueios.RecursosRotação de IPs via TorRotação de user-agents (Apenas para Firefox)Delays aleatórios ou fixos entre requisiçõesPersistência e carregamento de cookiesPermite salvar informações de login, por exemploInstalaçãoA maneira mais simples:pipinstallantiblock-seleniumConfigurando TorÉ necessário configurar oTor. Primeiramente, instale-o:sudoapt-getinstalltorPare sua execução para realizar configurações:sudoservicetorstopAbra seu arquivo de configuração como root, disponível em/etc/tor/torrc, por exemplo, usando o nano:sudonano/etc/tor/torrcColoque as linhas abaixo e salve:ControlPort 9051 CookieAuthentication 0Reinicie o Tor:sudoservicetorstartUsoAs classesFirefoxeChromedeantiblock_seleniumherdam de selenium.webdriver.Firefox e selenium.webdriver.Chrome, respectivamente. Então as use como habitualmente.Uso básicofromantiblock_seleniumimportFirefox,Chromechrome=Chrome()firefox=Firefox()#use os drivers como habitualmente faz com webdriversAs funcionalidades extras a estes webdrivers são listadas abaixo.Rotacionar IPs via TorParâmetros de configuração:allow_reuse_ip_after: TipoInt- default10. Quando um IP é usado, ele poderá ser reusado novamente após allow_reuse_ip_after outros IPs serem usados. Se o número for 0, um IP pode ser reusado em qualquer momento.change_ip_after: TipoInt- default42. Número de requisições feitas por um mesmo IP.Exemplo:fromantiblock_seleniumimportFirefox,Chromechrome=Chrome(allow_reuse_ip_after=5,change_ip_after=100)firefox=Firefox(allow_reuse_ip_after=5,change_ip_after=100)# use chrome/firefox como habitualmente usa os webdriversRotacionar user-agentsPor enquanto, funcionalidade disponível apenas para Firefox.Parâmetros de configuração:user_agents: TipoList- default [] (lista vazia). Lista de user-agents para ser rotacionada.change_user_agent_after: TipoInt- default 0. Número de requisições feitas com o mesmo user-agent. 
If the value is 0, the user-agent is never changed.

Example:

from antiblock_selenium import Firefox

user_agents = [
    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2226.0 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.4; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36',
    'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2224.3 Safari/537.36',
]

firefox = Firefox(user_agents=user_agents, change_user_agent_after=100)
# use firefox as you usually use webdrivers...

Random or fixed delays

By default, the delay between one request and the next is a value chosen at random between 0.5 * 0.25 and 1.5 * 0.25 seconds, the same behaviour as Scrapy.

Configuration parameters:

- time_between_calls: type Float - default 0.25. Time in seconds between one request and the next.
- random_delay: type Bool - default True. If True, the delay between requests is a value between 0.5 * time_between_calls and 1.5 * time_between_calls. If False, the delay between requests is fixed at time_between_calls seconds.

Example:

from antiblock_selenium import Firefox, Chrome

# Fixed time of 10 seconds between requests
chrome = Chrome(time_between_calls=10, random_delay=False)

# Random time between 0.5 * 10 and 1.5 * 10 between requests
firefox = Firefox(time_between_calls=10, random_delay=True)

# use chrome/firefox as you usually use webdrivers

Cookie persistence and loading

Selenium already has cookie management mechanisms, available here.
The features below are just convenience helpers, namely:

- Save cookies when the driver is closed
- Reload the last saved cookies (saved by a previous driver session)
- Load a list of cookies

Configuration parameters:

- cookie_domain: type String - default ''. To load cookies, selenium needs to visit the site where the cookies are valid. This parameter is that site.
- persist_cookies_when_close: type Bool - default False. Whether the driver's cookies are saved when it is closed.
- reload_cookies_when_start: type Bool - default False. Whether cookies saved by a previous, now closed, driver session are reloaded. If this parameter is True, cookie_domain cannot be empty.
- location_of_cookies: type String - default 'cookies.pkl'. Name or location where the cookies are saved when persist_cookies_when_close is set to True.

Example:

from antiblock_selenium import Firefox, Chrome

chrome = Chrome(persist_cookies_when_close=True)
firefox = Firefox(persist_cookies_when_close=True)

chrome.get('algum-site-que-requer-login')
firefox.get('algum-site-que-requer-login')

# Fill in the login details

chrome.close()
firefox.close()

chrome = Chrome(reload_cookies_when_start=True, cookie_domain='https://dominio-site-com-login.com')
firefox = Firefox(reload_cookies_when_start=True, cookie_domain='https://dominio-site-com-login.com')

Not every site will keep login state just by saving cookies.

It is also possible to load a list of cookies for a domain, using the function below:

load_cookies

Parameters:

- cookies: type List. List of cookies.
- cookie_domain: type String.
Domain where the cookies are valid.

Example:

from antiblock_selenium import Firefox, Chrome

chrome = Chrome()
firefox = Firefox()

cookies = [
    {'domain': 'www.google.com.br', 'expiry': 1591216842, 'httpOnly': False, 'name': 'UULE', 'path': '/', 'secure': False, 'value': 'a+cm9sZToxIHByb2R1Y2VyOjEyIHByb3ZlbmFuY2U6NiB0aW1lc3RhbXA6MTU5MTE5NTIzNzI5OTAwMCBsYXRsbmd7bGF0aXR1ZGVfZTc6NDY4MTgxODgwIGxvbmdpdHVkZV9lNzo4MjI3NTEyMH0gcmFkaXVzOjY0MDg4Nzgw'},
    {'domain': '.google.com.br', 'expiry': 2145916801, 'httpOnly': False, 'name': 'CONSENT', 'path': '/', 'secure': False, 'value': 'WP.287758'},
    {'domain': '.google.com.br', 'expiry': 1593787242, 'httpOnly': False, 'name': '1P_JAR', 'path': '/', 'secure': True, 'value': '2020-6-3-14'},
    {'domain': '.google.com.br', 'expiry': 1607006421, 'httpOnly': True, 'name': 'NID', 'path': '/', 'sameSite': 'None', 'secure': True, 'value': '204=aZfE182RJB7HoA9WXJImPNFy4xT0-VCU9t2NhB8byzsMGdSdjnDQo7YkIexDtBsMKQxU0AZDfgyQkKn8T9rD8YN_3hqpIvasJRbg75GZzt8zYTO3dMgS7G1ftELWBzDAuhRb2bCa1iKwut2YfNYJp-2bshYcX0JD5RDW_Gp28Bc'}
]

chrome.load_cookies(cookies=cookies, cookie_domain='https://www.google.com.br/')
firefox.load_cookies(cookies=cookies, cookie_domain='https://www.google.com.br/')

TO-DO

- User-agent rotation for Chrome
- IP rotation via a list of proxies
- Increase test coverage
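The save/reload cycle behind persist_cookies_when_close and reload_cookies_when_start (default file 'cookies.pkl') can be illustrated with standard selenium calls plus pickle. The helper functions below are a sketch of the mechanism, not the library's actual implementation; only get(), get_cookies() and add_cookie() are standard webdriver API, and FakeDriver is a stand-in so the sketch runs without a browser:

```python
import pickle

def save_cookies(driver, path="cookies.pkl"):
    # Persist the driver's current cookies to disk (conceptually what
    # happens on close when persist_cookies_when_close=True).
    with open(path, "wb") as f:
        pickle.dump(driver.get_cookies(), f)

def restore_cookies(driver, cookie_domain, path="cookies.pkl"):
    # Selenium can only attach cookies for the currently loaded page,
    # which is why a valid cookie_domain must be visited first.
    driver.get(cookie_domain)
    with open(path, "rb") as f:
        for cookie in pickle.load(f):
            driver.add_cookie(cookie)

class FakeDriver:
    """Minimal duck-typed driver used only to demonstrate the helpers."""
    def __init__(self):
        self.cookies, self.visited = [], []
    def get_cookies(self):
        return self.cookies
    def get(self, url):
        self.visited.append(url)
    def add_cookie(self, cookie):
        self.cookies.append(cookie)
```

With a real selenium driver the two helpers would be called the same way, passing the driver instance and the domain the cookies belong to.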
antibodies
antibodiesHelpful Scripts for Antibody NGS Data Processing
antibody-features
No description available on PyPI.
antibody-ngs-pipeline
# Antibody NGS Pipeline

Bulk antibody sequence preprocessing, annotation with abstar, import to MongoDB. This is based on the AbStar analysis pipeline (www.github.com/briney/abstar).

### install

pip install antibody-ngs-pipeline

### use

To run antibody_ngs_pipeline:

antibody_ngs_pipeline

To run antibody_ngs_pipeline with FASTQC report on raw data:

antibody_ngs_pipeline -f

To run antibody_ngs_pipeline with adapter trimming by CutAdapt:

antibody_ngs_pipeline -t <path-to-adapters.fasta>

To run antibody_ngs_pipeline with quality trimming by sickle:

antibody_ngs_pipeline -q

To run antibody_ngs_pipeline with adapter trimming by CutAdapt, quality trimming with sickle and a FASTQC report on both raw and processed data:

antibody_ngs_pipeline -f -q -t <path-to-adapters.fasta>

### requirements

Python 3.5+
abstar
abutils
cutadapt
pymongo

Downloading from BaseSpace requires Basemount: https://basemount.basespace.illumina.com/
Quality trimming requires sickle: https://github.com/najoshi/sickle
FastQC report requires FastQC: https://www.bioinformatics.babraham.ac.uk/projects/fastqc/
Merging paired sequences requires PANDAseq: https://github.com/neufeld/pandaseq
batch_mongoimport (from Abstar) requires MongoDB: http://www.mongodb.org/
antibodyomics
About

Source Code: https://github.com/drewschaub/antibodyomics

Antibodyomics is a Python library for performing structural and genetic bioinformatic analysis in Python.

Installation

Installation is handled with the Python package installer pip:

$ pip install antibodyomics

License

Copyright © Andrew J. Schaub
antibodyomics-test
antibodyomicsBackendI'm using setuptools, but other options were Hatchling, Flit and PDMhttps://packaging.python.org/en/latest/tutorials/packaging-projects/Action ItemsI need to add the required packages topyproject.toml
antibody-test-01
# I Love Sam!
antibody-transformer
antibody-transformer
antibot
No description available on PyPI.
antibot-boxdenat
No description available on PyPI.
antibp3
AntiBP3

A tool for predicting and designing antibacterial peptides for three groups of bacteria using sequence information.

Introduction

Antibacterial peptides (ABPs) are specific cationic AMPs that act against various bacterial species by disturbing the bacterial cell membrane through different mechanisms. ABPs can also hamper intracellular processes of the bacterial pathogen in different ways, such as modulating enzymatic activity, protein degradation and synthesis, and nucleic acid synthesis, which gives them an advantage over traditional antibiotics with respect to the development of resistance. Hence, we developed an antibacterial peptide prediction tool for gram-positive, gram-negative and gram-variable bacteria. AntiBP3 is also available as a web server at https://webs.iiitd.edu.in/raghava/antibp3. Please read/cite the AntiBP3 paper for complete information, including the algorithm behind the approach.

Installation

To install the package, type the following command:

pip install antibp3

Minimum usage

To see the available options of the CLI tool, type the following command:

antibp3 -h

This will predict whether the submitted sequences are antibacterial or non-antibacterial for the chosen class of bacteria, using the other parameters at their defaults.
It will save the output in "outfile.csv" in CSV (comma-separated values) format.

Full usage

usage: antibp3 [-h] -i INPUT [-o OUTPUT] [-s {1,2,3}] [-j {1,2,3,4,5}] [-t THRESHOLD] [-e EVAL] [-w {8,...,20}] [-d {1,2}] [-wd WORKING]

Please provide the following arguments for a successful run.

Optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Input: protein or peptide sequence(s) in FASTA format or single sequence per line in single letter code
  -o OUTPUT, --output OUTPUT
                        Output: file for saving results, by default outfile.csv
  -s {1,2,3}, --source {1,2,3}
                        Source: 1: GP ABPs, 2: GN ABPs, 3: GV ABPs, by default 1
  -j {1,2,3,4,5}, --job {1,2,3,4,5}
                        Job type: 1: Predict, 2: Design, 3: BLAST Search, 4: Motif Scan, 5: Protein Scan, by default 1
  -t THRESHOLD, --threshold THRESHOLD
                        Threshold: value between 0 and 1, by default 0.5 for GP ABPs, 0.45 for GN ABPs and 0.51 for GV ABPs
  -e EVAL, --eval EVAL  E-value for BLAST search (BLAST search only), by default 0.01 for GP ABPs, 0.01 for GN ABPs and 0.001 for GV ABPs
  -w {8,9,10,11,12,13,14,15,16,17,18,19,20}, --winleng {8,...,20}
                        Window length: 8 to 20 (scan mode only), by default 8
  -d {1,2}, --display {1,2}
                        Display: 1: ABPs only, 2: all peptides, by default 1
  -wd WORKING, --working WORKING
                        Working directory: location for writing results

Input file: input is provided in FASTA format.
Output file: the program saves results in CSV format; if no output file name is given, results are stored in "outfile.csv".
Source: the user should choose source 1, 2 or 3; by default it is 1, for GP ABPs.
Working directory: lets the user set the directory in which the output files are written.
Job: the user can choose between five modules: 1 for prediction, 2 for designing, 3 for BLAST scan, 4 for motif scan and 5 for protein scan; by default it is 1.
Threshold: a value between 0 and 1; by default 0.5 for GP ABPs, 0.45 for GN ABPs and 0.51 for GV ABPs.
Window length: any pattern length between 8 and 20 for scanning long sequences. This option is available only for the scan module.
E-value: an e-value for BLAST search between 0.0001 and 1e-20; by default 0.01 for GP ABPs, 0.01 for GN ABPs and 0.001 for GV ABPs.
Display type: fetch either only antibacterial peptides (option 1) or predictions for all peptides (option 2).
antibuddy
antibuddyantibuddy is a tool for building multiplexed antibody panels.
antic
Antic is an algebraic number theory library in C.

We do not recommend installing Antic from PyPI, as it has dependencies that are not available on PyPI.

Please consult Antic's home page for further details: https://github.com/wbhart/antic
anticaptcha
anticaptcha
===========

Simple anticaptcha library for images with text.

>>> pip install anticaptcha

.. code:: python

    from anticaptcha import Anticaptcha

    ac = Anticaptcha('API_TOKEN')
    with open('captcha.png', 'rb') as img:
        response = ac.createTask(img.read())
    task_id = response['taskId']
    result = ac.getTaskResult(task_id)
    solution = result['solution']['text']

Test with: python -m pytest
anticaptchaofficial
anticaptchaofficial

Official https://anti-captcha.com/ library for solving images with text, Recaptcha v2/v3 Enterprise or non-Enterprise, Funcaptcha Arkoselabs, GeeTest and hCaptcha Enterprise or non-Enterprise. Anti-Captcha is the most popular and reliable captcha solving service, working since 2007. Prices for solving captchas start from $0.0005 per item.

pip3 install anticaptchaofficial

Check API key balance before creating tasks:

balance = solver.get_balance()
if balance <= 0:
    print("too low balance!")
    return

Check subscription credits balance if you have one:

credits = solver.get_credits_balance()
if credits <= 0:
    print("too low credits balance!")
    return

Example how to create a Recaptcha V2 task and receive g-response:

from anticaptchaofficial.recaptchav2proxyless import *

solver = recaptchaV2Proxyless()
solver.set_verbose(1)
solver.set_key("YOUR_API_KEY")
solver.set_website_url("https://website.com")
solver.set_website_key("SITE_KEY")
# Set True if it is Recaptcha V2-invisible
#solver.set_is_invisible(True)
# Set data-s value for google.com pages
#solver.set_data_s('a_long_string_here')
# Specify softId to earn 10% commission with your app.
# Get your softId here: https://anti-captcha.com/clients/tools/devcenter
solver.set_soft_id(0)

g_response = solver.solve_and_return_solution()
if g_response != 0:
    print("g-response: " + g_response)
else:
    print("task finished with error " + solver.error_code)

Report a previously solved Recaptcha V2/V3/Enterprise as incorrect:

solver.report_incorrect_recaptcha()

Report it as correct to improve your quality:

solver.report_correct_recaptcha()

Solve an image captcha:

from anticaptchaofficial.imagecaptcha import *

solver = imagecaptcha()
solver.set_verbose(1)
solver.set_key("YOUR_KEY")
# Specify softId to earn 10% commission with your app.
# Get your softId here: https://anti-captcha.com/clients/tools/devcenter
solver.set_soft_id(0)

captcha_text = solver.solve_and_return_solution("captcha.jpeg")
if captcha_text != 0:
    print("captcha text " + captcha_text)
else:
    print("task finished with error " + solver.error_code)

Report a previously solved image captcha as incorrect:

solver.report_incorrect_image_captcha()

Solve HCaptcha:

from anticaptchaofficial.hcaptchaproxyless import *

solver = hCaptchaProxyless()
solver.set_verbose(1)
solver.set_key("YOUR_KEY")
solver.set_website_url("https://website.com")
solver.set_website_key("SITE_KEY")
solver.set_user_agent("YOUR FULL USER AGENT HERE")
# tell API that Hcaptcha is invisible
#solver.set_is_invisible(1)
# Specify softId to earn 10% commission with your app.
# Get your softId here: https://anti-captcha.com/clients/tools/devcenter
solver.set_soft_id(0)

g_response = solver.solve_and_return_solution()
if g_response != 0:
    print("g-response: " + g_response)
    # use this user-agent to make requests to your target website
    print("user-agent: " + solver.get_user_agent())
else:
    print("task finished with error " + solver.error_code)

Report a previously solved Hcaptcha as incorrect:

solver.report_incorrect_hcaptcha()

Solve Funcaptcha (Arkoselabs):

from anticaptchaofficial.funcaptchaproxyless import *

solver = funcaptchaProxyless()
solver.set_verbose(1)
solver.set_key("YOUR_KEY")
solver.set_website_url("https://website.com")
solver.set_website_key("XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX")

token = solver.solve_and_return_solution()
if token != 0:
    print("result token: " + token)
else:
    print("task finished with error " + solver.error_code)

Solve a GeeTest captcha:

from anticaptchaofficial.geetestproxyless import *

solver = geetestProxyless()
solver.set_verbose(1)
solver.set_key("YOUR_API_KEY")
solver.set_website_url("https://address.com")
solver.set_gt_key("CONSTANT_GT_KEY")
solver.set_challenge_key("VARIABLE_CHALLENGE_KEY")

token = solver.solve_and_return_solution()
if token != 0:
    print("result tokens: ")
    print(token)
else:
    print("task finished with error " + solver.error_code)

Solve a GeeTest v4 captcha:

from anticaptchaofficial.geetestproxyless import *

solver = geetestProxyless()
solver.set_verbose(1)
solver.set_key("YOUR_API_KEY")
solver.set_website_url("https://address.com")
solver.set_version(4)
solver.set_init_parameters({"riskType": "slide"})

token = solver.solve_and_return_solution()
if token != 0:
    print("result tokens: ")
    print(token)
else:
    print("task finished with error " + solver.error_code)

Solve HCaptcha Enterprise:

from anticaptchaofficial.hcaptchaproxyless import *

solver = hCaptchaProxyless()
solver.set_verbose(1)
solver.set_key("YOUR_KEY")
solver.set_website_url("https://website.com")
solver.set_website_key("SITE_KEY")
solver.set_user_agent("YOUR FULL USER AGENT HERE")
# tell API that Hcaptcha is invisible
#solver.set_is_invisible(1)
# tell API that Hcaptcha is Enterprise
#solver.set_is_enterprise(1)
# set here optional Enterprise parameters like rqdata, sentry, apiEndpoint, endpoint, reportapi, assethost, imghost
solver.set_enterprise_payload({
    "rqdata": "rq data value from target website",
    "sentry": True
})

g_response = solver.solve_and_return_solution()
if g_response != 0:
    print("g-response: " + g_response)
    # use this user-agent to make requests to your target website
    print("user-agent: " + solver.get_user_agent())
else:
    print("task finished with error " + solver.error_code)

Example how to create a Turnstile task and receive a token:

from anticaptchaofficial.turnstileproxyless import *

solver = turnstileProxyless()
solver.set_verbose(1)
solver.set_key("YOUR_API_KEY")
solver.set_website_url("https://website.com")
solver.set_website_key("SITE_KEY")
# Optionally specify page action
solver.set_action("login")
# Optionally specify cData token
solver.set_action("some_custom_token")
# Specify softId to earn 10% commission with your app.
# Get your softId here: https://anti-captcha.com/clients/tools/devcenter
solver.set_soft_id(0)

token = solver.solve_and_return_solution()
if token != 0:
    print("token: " + token)
else:
    print("task finished with error " + solver.error_code)

Solve an AntiGate task:

from anticaptchaofficial.antigatetask import *

solver = antigateTask()
solver.set_verbose(1)
solver.set_key("YOUR_KEY")
solver.set_website_url("http://antigate.com/logintest.php")
solver.set_template_name("Sign-in and wait for control text")
solver.set_variables({
    "login_input_css": "#login",
    "login_input_value": "test login",
    "password_input_css": "#password",
    "password_input_value": "test password",
    "control_text": "You have been logged successfully"
})

result = solver.solve_and_return_solution()
if result != 0:
    cookies, localStorage, fingerprint, url, domain = result["cookies"], result["localStorage"], result["fingerprint"], result["url"], result["domain"]
    print("cookies: ", cookies)
    print("localStorage: ", localStorage)
    print("fingerprint: ", fingerprint)
    print("url: " + url)
    print("domain: " + domain)
else:
    print("task finished with error " + solver.error_code)

Solve an AntiBotCookieTask task to bypass Cloudflare, Datadome and others:

from anticaptchaofficial.antibotcookietask import *

solver = antibotcookieTask()
solver.set_verbose(1)
solver.set_key("YOUR_KEY")
solver.set_website_url("https://www.somewebsite.com/")
solver.set_proxy_address("1.2.3.4")
solver.set_proxy_port(3128)
solver.set_proxy_login("login")
solver.set_proxy_password("password")

result = solver.solve_and_return_solution()
if result == 0:
    print("could not solve task")
    exit()

print(result)
cookies, localStorage, fingerprint = result["cookies"], result["localStorage"], result["fingerprint"]
if len(cookies) == 0:
    print("empty cookies, try again")
    exit()

cookie_string = '; '.join([f'{key}={value}' for key, value in cookies.items()])
user_agent = fingerprint['self.navigator.userAgent']
print(f"use these cookies for requests: {cookie_string}")
print(f"use this user-agent for requests: {user_agent}")

s = requests.Session()
proxies = {
    "http": "http://login:password@1.2.3.4:3128",
    "https": "http://login:password@1.2.3.4:3128"
}
s.proxies = proxies
content = s.get("https://www.somewebsite.com/", headers={"Cookie": cookie_string, "User-Agent": user_agent}).text
print(content)

Get object coordinates in an image:

from anticaptchaofficial.imagetocoordinates import *

solver = imagetocoordinates()
solver.set_verbose(1)
solver.set_key("YOUR_KEY")
solver.set_mode("points")
solver.set_comment("Select in specified order")
# Specify softId to earn 10% commission with your app.
# Get your softId here: https://anti-captcha.com/clients/tools/devcenter
solver.set_soft_id(0)

coordinates = solver.solve_and_return_solution("coordinates.png")
if coordinates != 0:
    print("coordinates: ", coordinates)
else:
    print("task finished with error " + solver.error_code)

Report a previously solved captcha as incorrect:

solver.report_incorrect_image_captcha()

Check out examples for other captcha types.

Useful links:

- How to solve a recaptcha automatically (in Russian)
- Captcha bypass (in Russian)
- How to solve a recaptcha automatically (in Spanish)
- How to solve a recaptcha automatically (in Portuguese)
antichat
Python wrapper for https://forum.antichat.ru/

Installation

pip install antichat

Or you can install from source with:

git clone https://github.com/drygdryg/antichat_python.git
cd antichat_python
python setup.py install

Usage

Posting

import antichat

client = antichat.Client(username='myusername', password='mypassword')
client.auth()

post_id = client.make_post(thread=12345, message='Hi there!')
client.delete_post(post_id=post_id, reason='No hello')

Reading threads

reader = client.get_reader(thread=12345)

all_posts = reader.read()  # Read all posts sequentially
first_posts = reader.read(limit=20)  # Get only first 20 posts
last_posts = reader.read(offset=400)  # Skip first 400 posts
posts = reader.read(start_post=6789)  # Read from post ID 6789
page = reader.read_page(page=5)  # Read page #5
antichecked
Antichecked ExceptionsAntichecked Exceptions are an alternative to checked exceptions. They're designed to actively catch bugs, and to prevent silencing exceptions for the sake of satisfying the compiler (as usually happens with checked exceptions).Rather than requiring the caller to catch some exceptions, Antichecked Exceptions intercepts some raised exceptions, and helps guarantee to the caller, if the caller opts to catch some exception, that the exception was actually raised by a deliberate design decision rather than a typo or bug.
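The description above is abstract, so here is a minimal sketch of the idea: intercept exceptions at a boundary and distinguish declared ones from unexpected bugs, so that catching a declared exception never silences a typo. The decorator and names below are invented for illustration and are not the library's actual API:

```python
import functools

class UnexpectedException(Exception):
    """Raised when a function leaks an exception it did not declare."""

def declares(*exc_types):
    # Hypothetical decorator: exceptions listed in exc_types pass through
    # to the caller as part of the contract; anything else is wrapped, so
    # a bug cannot masquerade as a declared, catchable error.
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except exc_types:
                raise  # deliberate design decision, reaches the caller
            except Exception as exc:
                raise UnexpectedException(f"{fn.__name__} leaked {exc!r}") from exc
        return inner
    return wrap

@declares(KeyError)
def lookup(mapping, key):
    return mapping[key]  # KeyError here is declared and catchable

@declares(KeyError)
def broken(mapping):
    return mapping.no_such_attribute  # AttributeError: a bug, gets wrapped
```

A caller catching KeyError around lookup() can then trust that the exception came from the declared contract, not from an unrelated mistake inside the function.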
anticipate
Anticipate

Expect the unexpected, but get what you want.

@anticipate(int)
def get_int():
    return '1'

assert get_int() == 1

@anticipate(str)
def get_str():
    return 22

assert get_str() == '22'

@anticipate([str])
def get_strs(*args):
    return args

assert list(get_strs(1, 2, 3)) == ['1', '2', '3']

Works much better with your own objects or with SpringField.

Changelog

0.9.0

- Dropped support for Python 2.6, added support for Python 3.7.
- Cleaned up code formatting.
- Bug fixes: fixed issue that prevented use of an adaptable object (has an adapt method) as an anticipated list of type.

0.8.0

- Changed so that anticipating an iterable using [type] will always return a list instead of a generator.
- Added anticipate_input_factory to make it easier to implement input handlers that need to inject values or handle input errors differently.
- Made it so you can use any object that implements adapt as an anticipate type, so you can use SpringField fields as input types.
- Improved error messages.
- Split anticipate input and output handling into separate functions to make it easier to intercept input or output handling.
- Check that the params being anticipated exist in the function signature.
anticipy
Anticipy

Anticipy is a tool to generate forecasts for time series. It takes a pandas Series or DataFrame as input, and returns a DataFrame with the forecasted values for a given period of time.

Features:

- Simple interface. Start forecasting with a single function call on a pandas DataFrame.
- Model selection. If you provide multiple models (e.g. linear, sigmoidal, exponential), the tool will compare them and choose the best fit for your data.
- Trend and seasonality. Support for weekly and monthly seasonality, among other types.
- Calendar events. Provide lists of special dates, such as holiday seasons or bank holidays, to improve model performance.
- Data cleaning. The library has tools to identify and remove outliers, and to detect and handle step changes in the data.

It is straightforward to generate a simple linear model with the tool - just call forecast.run_forecast(my_dataframe)::

    import pandas as pd, numpy as np
    from anticipy import forecast

    df = pd.DataFrame({'y': np.arange(0., 5)},
                      index=pd.date_range('2018-01-01', periods=5, freq='D'))
    df_forecast = forecast.run_forecast(df, extrapolate_years=1)
    print(df_forecast.head(12))

Output::

        date        source  is_actuals  model    y    q5   q20   q80   q95
    0   2018-01-01  src     True        actuals  0.0  NaN  NaN   NaN   NaN
    1   2018-01-02  src     True        actuals  1.0  NaN  NaN   NaN   NaN
    2   2018-01-03  src     True        actuals  2.0  NaN  NaN   NaN   NaN
    3   2018-01-04  src     True        actuals  3.0  NaN  NaN   NaN   NaN
    4   2018-01-05  src     True        actuals  4.0  NaN  NaN   NaN   NaN
    5   2018-01-01  src     False       linear   0.0  NaN  NaN   NaN   NaN
    6   2018-01-02  src     False       linear   1.0  NaN  NaN   NaN   NaN
    7   2018-01-03  src     False       linear   2.0  NaN  NaN   NaN   NaN
    8   2018-01-04  src     False       linear   3.0  NaN  NaN   NaN   NaN
    9   2018-01-05  src     False       linear   4.0  NaN  NaN   NaN   NaN
    10  2018-01-06  src     False       linear   5.0  5.0  5.0   5.0   5.0
    11  2018-01-07  src     False       linear   6.0  6.0  6.0   6.0   6.0

Documentation is available in Read the Docs: https://anticipy.readthedocs.io/en/latest/
anticipython
Anticipython

Create .ics calendars for upcoming CPython releases 🐍👀

Usage

Requires Python 3.6 or later.

pip install anticipython
python -m anticipython  # Produces an output file named `cpython_releases.ics`

Power user configuration

Edit peps.py to include the version and PEPs you want to use.
anti-clustering
Anti-clusteringA generic Python library for solving the anti-clustering problem. While clustering algorithms will achieve high similarity within a cluster and low similarity between clusters, the anti-clustering algorithms will achieve the opposite; namely to minimise similarity within a cluster and maximise the similarity between clusters. Currently, a handful of algorithms are implemented in this library:An exact approach using a BIP formulation.An enumerated exchange heuristic.A simulated annealing heuristic.Keep in mind anti-clustering is computationally difficult problem and may run slow even for small instance sizes. The current ILP does not finish in reasonable time when anti-clustering the Iris dataset (150 data points).The two former approaches are implemented as described in following paper:Papenberg, M., & Klau, G. W. (2021). Using anticlustering to partition data sets into equivalent parts. Psychological Methods, 26(2), 161–174.DOI.PreprintThe paper is accompanied by a library for the R programming language:anticlust.Differently to theanticlustR package, this library currently only have one objective function. In this library the objective will maximise intra-cluster distance: Euclidean distance for numerical columns and Hamming distance for categorical columns.Use casesWithin software testing, anti-clustering can be used for generating test and control groups in AB-testing. Example: You have a webshop with a number of users. The webshop is undergoing active development and you have a new feature coming up. This feature should be tested against as many different users as possible without testing against the entire user-base. For that you can create a maximally diverse subset of the user-base to test against (the A group). The remaining users (B group) will not test this feature. For dividing the user-base you can use the anti-clustering algorithms. 
A and B groups should be as similar as possible to have a reliable basis of comparison, but internally in group A (and B) the elements should be as dissimilar as possible.

This is just one use case; probably many more exist.

Installation

The anti-clustering package is available on PyPI. To install it, run the following command:

pip install anti-clustering

The package currently supports Python 3.8 and above.

Usage

The input to the algorithm is a Pandas dataframe with each row representing a data point. The output is the same dataframe with an extra column containing integer encoded cluster labels. Below is an example based on the Iris dataset:

from anti_clustering import ExactClusterEditingAntiClustering
from sklearn import datasets
import pandas as pd

iris_data = datasets.load_iris(as_frame=True)
iris_df = pd.DataFrame(data=iris_data.data, columns=iris_data.feature_names)

algorithm = ExactClusterEditingAntiClustering()

df = algorithm.run(
    df=iris_df,
    numerical_columns=list(iris_df.columns),
    categorical_columns=None,
    num_groups=2,
    destination_column='Cluster'
)

Contributions

If you have any suggestions or have found a bug, feel free to open issues. If you have implemented a new algorithm or know how to tweak the existing ones, PRs are very appreciated.

License

This library is licensed under the Apache 2.0 license.
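The objective described above, Euclidean distance on numerical columns and Hamming distance on categorical ones, can be illustrated with a small standalone function. This is a sketch of the pairwise dissimilarity being maximised within a cluster, not the library's internal code; in particular, combining the two terms by plain addition (no weighting) is an assumption made here for illustration:

```python
import math

def pair_distance(a, b, numerical_columns, categorical_columns):
    # Euclidean distance over the numerical columns...
    euclid = math.sqrt(sum((a[c] - b[c]) ** 2 for c in numerical_columns))
    # ...plus Hamming distance (count of mismatches) over categorical ones.
    hamming = sum(a[c] != b[c] for c in categorical_columns)
    return euclid + hamming

x = {"height": 1.0, "weight": 2.0, "colour": "red"}
y = {"height": 4.0, "weight": 6.0, "colour": "blue"}
# Euclidean part: sqrt(3^2 + 4^2) = 5; Hamming part: 1 mismatch -> 6.0
assert pair_distance(x, y, ["height", "weight"], ["colour"]) == 6.0
```

An anti-clustering algorithm tries to make this quantity large for pairs placed in the same group, which is the opposite of what an ordinary clustering objective does.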
anticor-features
README

What is this repository for?

- Anti-correlated genes as a method of feature selection
- Unsupervised feature selection for single cell omics (or anything else!) that passes the null-dataset test

How do I get set up?

python3 -m pip install anticor_features

You can also install using the setup.py script in the distribution like so:

python3 setup.py install

This should take less than one minute, or even seconds if dependencies were already installed.

How do I use this package?

from anticor_features.anticor_features import get_anti_cor_genes

## Then feed in the expression matrix, with cells in columns, genes in rows
## and the feature names (all_features)
## and the species code (in gProfiler format, linked below)
anti_cor_table = get_anti_cor_genes(in_mat, all_features, species="hsapiens")

A list of the gProfiler accepted species codes is available here: https://biit.cs.ut.ee/gprofiler/page/organism-list

The above call yields a pandas data frame with the collected summary statistics, and lets you filter based on the features annotated as "selected" in that column:

>>> print(anti_cor_table.head())
     gene  pre_remove_feature  pre_remove_pathway  ...       FDR  num_sig_pos_cor  selected
0    Xkr4               False               False  ...       NaN              NaN       NaN
1     Rp1               False               False  ...       NaN              NaN       NaN
2   Sox17               False               False  ...  0.001883           3406.0      True
3  Mrpl15               False                True  ...       NaN              NaN       NaN
4  Lypla1               False                True  ...       NaN              NaN       NaN

The NaNs are produced where the gene was not assayed for anti-correlations, for example from pre-filtering (the default is to remove genes in pathways related to mitochondria, ribosomes, and hemoglobin).

If you want to customize which GO terms are removed, or specify specific genes to exclude, you can do that with the pre_remove_features and pre_remove_pathways arguments:

anti_cor_table = get_anti_cor_genes(in_mat, all_features, species="hsapiens", pre_remove_features=["ACTB", "MT-COX1"])

Scanpy (or anything from python where you have a matrix)

If you're using scanpy, then you can use the same basic syntax as above.
The only thing worth noting is that our downsampling function assumes that the genes are in rows and cells are in columns, which is flipped from AnnData's formatting; that's why we use the transpose() functions below.

If you follow along Scanpy's tutorial, then the only thing different would be swapping out:

[16]: sc.pp.highly_variable_genes(adata, min_mean=0.0125, max_mean=3, min_disp=0.5)
[17]: sc.pl.highly_variable_genes(adata)
[18]: adata.raw = adata
[19]: adata = adata[:, adata.var.highly_variable]

for

from anticor_features.anticor_features import get_anti_cor_genes

anti_cor_table = get_anti_cor_genes(adata.X.T,
                                    adata.var.index.tolist(),
                                    species="hsapiens")
selected_table = anti_cor_table[anti_cor_table["selected"] == True]
print(selected_table)

## And you can save the anti-correlation dataframe into the adata object as well:
import pandas as pd
adata.var = pd.concat([adata.var, anti_cor_table], axis=1)

## And then we subset the data to only include the selected features!
adata.raw = adata
adata = adata[:, selected_table.index]

## Note that the downstream clusters and marker genes will be slightly different!

It should take ~1-2 minutes for the feature selection, depending on your internet connection and the speed of the gProfiler server (if looking up pathways like ribosomes, mitochondria, etc. to remove), but it will also scale a bit with the complexity of the dataset. For example, the tabula muris dataset used in the manuscript took ~60 minutes, in part because nearly every gene was expressed at appreciable values in a subset of the cells.

An important note if you're working in a cluster environment

Anti-correlated genes are selected out of memory - to do this, the pipeline needs a hard-disk area to work in. On a stand-alone computer it'll automatically find the system temp drive, but this might not be the behavior you want on a cluster, or if you're analyzing several datasets simultaneously, because they would overwrite each other.
In those cases, you should supply the additional argument scratch_dir=</local/path/to/dataset/directory>. This ensures that each dataset will be analyzed properly and there won't be conflicts in terms of where files get written.

Command line interface

You can also use this tool at the command line, if you have either a .tsv or an hdf5 file with the matrix under the key "infile":

python3 -m anticor_features.anticor_features -i exprs.tsv -species mmusculus

or something similar. This outputs the pandas table to a tsv in the same folder as the input file.

See the help section for more detailed usage of the command line interface:

python3 -m anticor_features.anticor_features -h

License

This package is available via the AGPLv3 license.

Who do I talk to?

Repo owner/admin: [email protected]
anticp2
AntiCP2: prediction, design and scanning of anticancer peptides

AntiCP 2.0 is an updated version of AntiCP, developed to predict and design anticancer peptides with high accuracy. This study utilizes the largest possible dataset of anticancer and non-anticancer peptides. The main dataset consists of 861 experimentally validated anticancer peptides and 861 non-anticancer or validated antimicrobial peptides. The alternate dataset comprises 970 anticancer peptides and 970 non-anticancer peptides (randomly picked from Swiss-Prot).

Reference

Agrawal P., Bhagat D., Mahalwal M., Sharma N., Raghava G.P.S. (2020), AntiCP 2.0: an updated model for predicting anticancer peptides, Briefings in Bioinformatics, bbaa153

Web Server

https://webs.iiitd.edu.in/raghava/anticp2/

Installation

pip install anticp2

Introduction

AntiCP2 is developed for predicting, designing and scanning anticancer peptides. More information on AntiCP2 is available from its web server http://webs.iiitd.edu.in/raghava/anticp2/. This page provides information about the standalone version of AntiCP2. Please read/cite the following paper for complete information, including the algorithm behind AntiCP2: Agrawal P., Bhagat D., Mahalwal M., Sharma N., and Raghava GPS (2020) AntiCP 2.0: an updated model for predicting anticancer peptides. Briefings in Bioinformatics doi: 10.1093/bib/bbaa153

Models:

In this program, two models have been incorporated for predicting anticancer peptides. Model 1 is trained on anticancer and antimicrobial peptides; it is the default model.
Model 2 is trained on anticancer and non-anticancer (or random) peptides.

Modules/Jobs:

This program implements three modules (job types): i) Predict: for predicting anticancer peptides; ii) Design: for generating all possible peptides and computing the anticancer potential (score) of each; iii) Scan: for creating all possible overlapping peptides of a given length (window) and computing the anticancer potential (score) of these overlapping peptides.

Minimum USAGE:

The minimum usage is "anticp2 -i peptide.fa", where peptide.fa is an input FASTA file. This will predict the anticancer potential of the sequences in FASTA format, using default values for the other parameters, and save the output in "outfile.csv" in CSV (comma-separated values) format.

Full Usage:

Following is the complete list of all options; you may also get these options with "anticp2 -h":

anticp [-h] -i INPUT [-o OUTPUT] [-j {1,2,3}] [-t THRESHOLD] [-m {1,2}] [-w {5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29}] [-d {1,2}]

optional arguments:
-h, --help show this help message and exit
-i INPUT, --input INPUT Input: protein or peptide sequence in FASTA format or single sequence per line in single letter code
-o OUTPUT, --output OUTPUT Output: file for saving results, by default outfile.csv
-j {1,2,3}, --job {1,2,3} Job type: 1: predict, 2: design, 3: scan; by default 1
-t THRESHOLD, --threshold THRESHOLD Threshold: value between 0 and 1, by default 0.5
-m {1,2}, --model {1,2} Model: 1: ACP/AMP, 2: ACP/non-ACP; by default 1
-w {5,6,7,..,30}, --winleng Window length: 5 to 30 (scan mode only), by default 10
-d {1,2}, --display {1,2} Display: 1: anticancer peptides only, 2: all peptides; by default 1

Input File:

The program allows users to provide input in two formats: i) FASTA format (standard) and ii) simple format. In the simple format, the file should have one peptide sequence per line in single-letter code (e.g. peptide.seq).
Please note that in the predict and design modules (jobs), peptide length should be up to 50 amino acids; if a peptide is longer than 50, the program will take the first 50 residues. In the scan module, the minimum length of the protein/peptide sequence should be greater than or equal to the window length (pattern); see peptide.fa. Please also note that the program will ignore peptides shorter than 5 residues (e.g., protein.fa).

Output File:

The program saves results in CSV format; if the user does not provide an output file name, results are stored in outfile.csv.

Threshold:

The user should provide a threshold between 0 and 1; note that the score is proportional to the anticancer potential of the peptide.

Address for contact

In case of any query please contact:

Prof. G. P. S. Raghava, Head, Department of Computational Biology, Indraprastha Institute of Information Technology (IIIT), Okhla Phase III, New Delhi 110020; Phone: +91-11-26907444; Email: [email protected]; Web: http://webs.iiitd.edu.in/raghava/
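The input-length rules above (truncate to the first 50 residues in predict/design mode; ignore peptides shorter than 5 residues) can be sketched in a few lines. This is an illustration of the documented behaviour, not AntiCP 2.0's own code, and the record names are hypothetical:

```python
def preprocess_peptides(fasta_text, max_len=50, min_len=5):
    """Parse FASTA text and apply AntiCP-style length rules (sketch)."""
    records, header, seq = {}, None, []
    for line in fasta_text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            if header is not None:
                records[header] = "".join(seq)
            header, seq = line[1:], []
        elif line:
            seq.append(line)
    if header is not None:
        records[header] = "".join(seq)
    # Ignore peptides shorter than min_len, truncate the rest to max_len.
    return {h: s[:max_len] for h, s in records.items() if len(s) >= min_len}

# p1 is a 60-mer (truncated to 50), p2 is a 3-mer (dropped as too short).
fasta = ">p1\n" + "ACDEFGHIKLMNPQRSTVWY" * 3 + "\n>p2\nACD\n"
peps = preprocess_peptides(fasta)
print(sorted(peps))       # only p1 survives
print(len(peps["p1"]))    # truncated to 50 residues
```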
antiCPy
The antiCPy package provides tools to monitor destabilization caused by varying control parameters or the influence of noise. Based on early warning measures, it provides an extrapolation tool to estimate the time horizon in which a critical transition will probably occur.

antiCPy

The package abbreviation antiCPy stands for ''anticipate Critical Points (and if you like Change Points) with Python''. The vision of the antiCPy package is to design a collection of state-of-the-art early warning measures, leading indicators and time series analysis tools that focus on system stability and resilience in general, as well as algorithms that might be helpful to estimate time horizons of future transitions or resilience changes. It provides an easily applicable and efficient toolbox

to estimate the drift slope $\hat{\zeta}$ of a polynomial Langevin equation as an early warning signal via Markov Chain Monte Carlo (MCMC) sampling or maximum posterior (MAP) estimation,

to estimate a non-Markovian two-time-scale polynomial system via MCMC or MAP, with the option of a priori activated time scale separation,

to estimate the dominant eigenvalue by empirical dynamic modelling approaches such as delay embedding and shadow manifolds combined with the iterated map's linear stability formalism,

to extrapolate an early warning signal trend to find the probable transition horizon based on the current data information.

Computationally expensive algorithms are implemented both serially and in strongly parallelized form to minimize computation times. For change point trend extrapolation, it furthermore involves algorithms that allow computing complicated fits with high numbers of change points without memory errors. The package aims to provide easily applicable methods, and to guarantee high flexibility and access to the derived interim results for research purposes.

Citing antiCPy

If you use antiCPy's drift_slope measure, please cite

Martin Heßler et al. Bayesian on-line anticipation of critical transitions. New J.
Phys. (2022).https://doi.org/10.1088/1367-2630/ac46d4.If you useantiCPy'sdominant_eigenvalueinstead, please citeMartin Heßler et al. Anticipation of Oligocene's climate heartbeat by simplified eigenvalue estimation. arXiv (2023).https://doi.org/10.48550/arXiv.2309.14179DocumentationYou can find thedocumentation on read the docs.InstallThe package can be installed viapip install antiCPyRelated publicationsUp to now the package is accompanied bythe publicationEfficient Multi-Change Point Analysis to Decode Economic Crisis Information from the S&P500 Mean Market Correlation,the publicationMemory Effects, Multiple Time Scales and Local Stability in Langevin Models of the S&P500 Market Correlation,the publicationIdentifying dominant industrial sectors in market states of the S&P 500 financial data,the publicationQuantifying resilience and the risk of regime shifts under strong correlated noise,the publicationBayesian on-line anticipation of critical transitions,the preprintAnticipation of Oligocene's climate heartbeat by simplified eigenvalue estimation,the preprintQuantifying Tipping Risks in Power Grids and beyond.
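For orientation, the polynomial Langevin model behind the drift_slope measure described above can be sketched as follows. The exact parameterization is in the cited Heßler et al. paper, so treat this as a standard-setup sketch rather than antiCPy's definitive model:

```latex
% Overdamped Langevin dynamics with a polynomial drift h and noise amplitude g:
%   \mathrm{d}x = h(x)\,\mathrm{d}t + g(x)\,\mathrm{d}W_t ,
%   \qquad h(x) = \sum_{i=0}^{n} \theta_i x^i .
% The drift slope is the linearization of h at a stable fixed point x^*
% (where h(x^*) = 0):
%   \hat{\zeta} = \left.\frac{\partial h}{\partial x}\right|_{x = x^*} ,
% with \hat{\zeta} < 0 for a stable state; \hat{\zeta} rising towards zero
% signals destabilization and serves as the early warning signal.
```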
anti-cursing
anti-cursing

"anti-cursing" is a Python package that detects and replaces negative words or any kind of cursing in sentences or comments 🤬

You just install the package the way you install any other package, and then you can use it in your code.

The whole thing is going to be updated soon. So this is the very first idea, but you can find my package on PyPI (https://pypi.org/project/anti-cursing/0.0.1/).

🙏🏻 Please bear with the program while it installs the model's weights and biases from huggingface the first time you use the package.

Concept

There are often situations where you have to code something, detect a forbidden word, and change it to another word. Hardcoding all parts is very inconvenient, and in the Python ecosystem there are many packages to address this. One of them is "anti-cursing".

The package, which operates exclusively for Korean, does not simply change banned words based on a preset list, but detects and replaces them by training a deep learning model. Therefore, it is easy to cope with new malicious words as long as they are learned.
For this purpose, semi-supervised learning through pseudo labeling is used. Additionally, instead of changing malicious words to special characters such as --- or ***, you can convert them into emojis to make them more natural.

Contents

Installation, Usage, Model comparison, Dataset, Used API, License, Working example, References, Project status, Future work

Installation

You can install the package using pip:

pip install anti-cursing

It doesn't work yet, but it will soon!! 👨🏻‍💻

Usage

from anti_cursing.utils import antiCursing

antiCursing.anti_cur("나는 너가 좋지만, 너는 너무 개새끼야")

나는 너가 좋지만, 너는 너무 👼🏻야

Model comparison

| Classification | KcElectra | KoBERT | RoBERTa-base | RoBERTa-large |
| --- | --- | --- | --- | --- |
| Validation Accuracy | 0.88680 | 0.85721 | 0.83421 | 0.86994 |
| Validation Loss | 1.00431 | 1.23237 | 1.30012 | 1.16179 |
| Training Loss | 0.09908 | 0.03761 | 0.0039 | 0.06255 |
| Epoch | 10 | 40 | 20 | 20 |
| Batch-size | 8 | 32 | 16 | 32 |
| transformers | beomi/KcELECTRA-base | skt/kobert-base-v1 | xlm-roberta-base | klue/roberta-large |

Dataset

Smilegate-AI https://github.com/smilegate-ai/korean_unsmile_dataset (Korean sentiment analysis, paper)

Naver portal news articles crawling https://news.naver.com (non-labeled data for test dataset)

😀 Emoji unicode crawling for encoding https://unicode.org/emoji/charts/full-emoji-list.html

Used API

Google translator https://cloud.google.com/translate/docs (API DOCS)

License

This repository is licensed under the MIT license. See LICENSE for details.

Click here to see the License information --> License

Working example

---- some video is gonna be placed here ----

References

Sentiment Analysis Based on Deep Learning: A Comparative Study
Nhan Cach Dang, Maria N. Moreno-Garcia, Fernando De la Prieta. 2006. Sentiment Analysis Based on Deep Learning: A Comparative Study. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006), pages 1–8, Prague, Czech Republic. Association for Computational Linguistics.

Attention is all you need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
In Advances in Neural Information Processing Systems, pages 6000–6010.

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 4171–4186.

Electra: Pre-training Text Encoders as Discriminators Rather Than Generators
Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 2019. Electra: Pre-training text encoders as discriminators rather than generators. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 4171–4186.

BIDAF: Bidirectional Attention Flow for Machine Comprehension
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi. 2016. Bidirectional Attention Flow for Machine Comprehension. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2129–2139.

Effect of Negation in Sentences on Sentiment Analysis and Polarity Detection
Partha Mukherjeea, Saptarshi Ghoshb, and Saptarshi Ghoshc. 2018. Effect of Negation in Sentences on Sentiment Analysis and Polarity Detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2129–2139.

KOAS: Korean Text Offensiveness Analysis System
Seonghwan Kim, Seongwon Lee, and Seungwon Do. 2019. KOAS: Korean Text Offensiveness Analysis System. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1–11.

Korean Unsmile Dataset
Seonghwan Kim, Seongwon Lee, and Seungwon Do. 2019. Korean Unsmile Dataset.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1–11.

Project status

Future work

Updates coming soon, please bear with me 🙏🏻

KOREAN FROM HERE / The Korean description starts here (translated to English below).

anti-cursing

"anti-cursing" is a Python package that detects and replaces negative words or any kind of cursing in sentences or comments 🤬

You can install the package the same way you install any other package, and then use it in your code.

This is still at the idea stage, so nothing works yet, but it will be updated to work soon. The package has been uploaded to PyPI (https://pypi.org/project/anti-cursing/0.0.1/), where you can check it out.

🙏🏻 The first time you install and use the package, it parses the deep learning model from huggingface. This is only needed once, so please allow for some time and disk space.

Concept

While coding, situations often arise where you need to detect a banned word and replace it with another word. Hardcoding every case is very inconvenient, and the Python ecosystem offers many packages to solve this. One of them is "anti-cursing".

This Korean-only package does not simply replace banned words set in advance; it trains a deep learning model to detect and replace them. It can therefore easily handle newly emerging malicious words, as long as the model is trained on them. For this purpose, it uses semi-supervised learning via pseudo labeling.

Additionally, instead of changing malicious words to special characters such as --- or ***, it can convert them into emojis for a more natural result.

Contents

Installation, Usage, Model comparison, Dataset, Used API, License, Working example, References, Project status, Future work

Installation

You can install the package using pip:

pip install anti-cursing

Nothing works yet, but it will be updated to work soon 👨🏻‍💻.

Usage

from anti_cursing.utils import antiCursing

antiCursing.anti_cur("나는 너가 좋지만, 너는 너무 개새끼야")

나는 너가 좋지만, 너는 너무 👼🏻야

Model comparison

| Classification | KcElectra | KoBERT | RoBERTa-base | RoBERTa-large |
| --- | --- | --- | --- | --- |
| Validation Accuracy | 0.88680 | 0.85721 | 0.83421 | 0.86994 |
| Validation Loss | 1.00431 | 1.23237 | 1.30012 | 1.16179 |
| Training Loss | 0.09908 | 0.03761 | 0.0039 | 0.06255 |
| Epoch | 10 | 40 | 20 | 20 |
| Batch-size | 8 | 32 | 16 | 32 |
| transformers | beomi/KcELECTRA-base | skt/kobert-base-v1 | xlm-roberta-base | klue/roberta-large |

Dataset

Smilegate-AI https://github.com/smilegate-ai/korean_unsmile_dataset (Korean sentiment classification dataset, paper)

Naver news article crawling https://news.naver.com (dataset for testing)

😀 Emoji Unicode dataset https://unicode.org/emoji/charts/full-emoji-list.html

Used API

Google translator https://cloud.google.com/translate/docs (API documentation)

License

This project is under the MIT license. See the LICENSE file for details.

License information --> License

Working example

---- a working example will be added here ----

References

Sentiment Analysis Based on Deep Learning: A Comparative Study
Nhan Cach Dang, Maria N. Moreno-Garcia, Fernando De la Prieta. 2006. Sentiment Analysis Based on Deep Learning: A Comparative Study. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006), pages 1–8, Prague, Czech Republic. Association for Computational Linguistics.

Attention is all you need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010.

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 4171–4186.

Electra: Pre-training Text Encoders as Discriminators Rather Than Generators
Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 2019. Electra: Pre-training text encoders as discriminators rather than generators. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 4171–4186.

BIDAF: Bidirectional Attention Flow for Machine Comprehension
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi. 2016. Bidirectional Attention Flow for Machine Comprehension. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2129–2139.

Effect of Negation in Sentences on Sentiment Analysis and Polarity Detection
Partha Mukherjeea, Saptarshi Ghoshb, and Saptarshi Ghoshc. 2018. Effect of Negation in Sentences on Sentiment Analysis and Polarity Detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2129–2139.

KOAS: Korean Text Offensiveness Analysis System
Seonghwan Kim, Seongwon Lee, and Seungwon Do. 2019. KOAS: Korean Text Offensiveness Analysis System. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1–11.

Korean Unsmile Dataset
Seonghwan Kim, Seongwon Lee, and Seungwon Do. 2019. Korean Unsmile Dataset. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1–11.

Project status

Future work

To be added soon, please wait 🙏🏻
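The detect-and-swap concept above can be illustrated with a naive dictionary lookup. The real package detects offensive spans with a fine-tuned KcELECTRA model rather than a static word list, so treat this purely as a sketch:

```python
# Hypothetical banned-word -> emoji table; the package derives replacements
# from a trained classifier, not a hard-coded dict like this one.
EMOJI_SWAPS = {"개새끼": "👼🏻"}

def naive_anti_cur(text: str) -> str:
    """Replace every known banned word with its emoji stand-in."""
    for bad, emoji in EMOJI_SWAPS.items():
        text = text.replace(bad, emoji)
    return text

print(naive_anti_cur("나는 너가 좋지만, 너는 너무 개새끼야"))
# matches the documented example output: 나는 너가 좋지만, 너는 너무 👼🏻야
```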
antidb
antidbQuick startpip3 install antidbfrom antidb import (Idx, Prs, count_exec_time) __version__ = 'v1.0.0' dbsnp_vcf_path = '/path/to/GCF_000001405.40.zst' dbsnp_idx_prefix = 'all_rsids' dbsnp_idx = Idx(dbsnp_vcf_path, dbsnp_idx_prefix, lambda dbsnp_zst_line: dbsnp_zst_line.split('\t')[2]) dbsnp_idx.idx() dbsnp_prs = Prs(dbsnp_vcf_path, dbsnp_idx_prefix) @count_exec_time def get_rsid_lines(dbsnp_prs): for dbsnp_zst_line in dbsnp_prs.prs(['rs1009150', 'rs12044852', 'rs4902496']): print(dbsnp_zst_line) print(get_rsid_lines(dbsnp_prs))NC_000022.11 36306254 rs1009150 C T . . RS=1009150;dbSNPBuildID=86;SSR=0;GENEINFO=MYH9:4627;VC=SNV;PUB;INT;GNO;FREQ=1000Genomes:0.569,0.431|ALSPAC:0.2906,0.7094|Estonian:0.269,0.731|GENOME_DK:0.35,0.65|GnomAD:0.4415,0.5585|GoNL:0.3126,0.6874|HapMap:0.5881,0.4119|KOREAN:0.7334,0.2666|MGP:0.8652,0.1348|NorthernSweden:0.315,0.685|Qatari:0.5463,0.4537|SGDP_PRJ:0.2929,0.7071|Siberian:0.3043,0.6957|TOMMO:0.7117,0.2883|TOPMED:0.4596,0.5404|TWINSUK:0.2869,0.7131|dbGaP_PopFreq:0.3304,0.6696;COMMON;CLNVI=.,;CLNORIGIN=.,1;CLNSIG=.,2;CLNDISDB=.,MedGen:CN517202;CLNDN=.,not_provided;CLNREVSTAT=.,single;CLNACC=.,RCV001695529.1;CLNHGVS=NC_000022.11:g.36306254=,NC_000022.11:g.36306254C>T NC_000001.11 116545157 rs12044852 C A . . RS=12044852;dbSNPBuildID=120;SSR=0;GENEINFO=CD58:965|LOC105378925:105378925;VC=SNV;PUB;INT;GNO;FREQ=1000Genomes:0.7473,0.2527|ALSPAC:0.8957,0.1043|Chileans:0.7396,0.2604|Estonian:0.9125,0.0875|GENOME_DK:0.875,0.125|GnomAD:0.8826,0.1174|GoNL:0.9078,0.09218|HapMap:0.787,0.213|KOREAN:0.3945,0.6055|Korea1K:0.3892,0.6108|NorthernSweden:0.895,0.105|PRJEB37584:0.439,0.561|Qatari:0.8704,0.1296|SGDP_PRJ:0.3373,0.6627|Siberian:0.3846,0.6154|TOMMO:0.4146,0.5854|TOPMED:0.8671,0.1329|TWINSUK:0.8972,0.1028|Vietnamese:0.4486,0.5514|dbGaP_PopFreq:0.8864,0.1136;COMMON NC_000014.9 67588896 rs4902496 C G,T . . 
RS=4902496;dbSNPBuildID=111;SSR=0;GENEINFO=PIGH:5283|GPHN:10243|PLEKHH1:57475;VC=SNV;PUB;U3;INT;R3;GNO;FREQ=1000Genomes:0.3357,0.6643,.|ALSPAC:0.2019,0.7981,.|Estonian:0.1518,0.8482,.|GENOME_DK:0.125,0.875,.|GoNL:0.1703,0.8297,.|HapMap:0.3639,0.6361,.|KOREAN:0.3399,0.6601,.|MGP:0.3558,0.6442,.|NorthernSweden:0.1817,0.8183,.|Qatari:0.2176,0.7824,.|SGDP_PRJ:0.189,0.811,.|Siberian:0.1429,0.8571,.|TOMMO:0.2816,0.7184,.|TOPMED:0.285,0.715,.|TWINSUK:0.1888,0.8112,.|Vietnamese:0.4533,0.5467,.|dbGaP_PopFreq:0.2712,0.7288,0;COMMON ('get_rsid_lines', '0:00:00.007858')App exampleBioinformatic annotator template# autopep8: off import sys; sys.dont_write_bytecode = True # autopep8: on import json import os from argparse import ArgumentParser from datetime import datetime from antidb import (Idx, Prs, count_exec_time) __version__ = 'v1.0.0' def parse_dbsnp_line(dbsnp_zst_line): if 'GnomAD' in dbsnp_zst_line \ and 'CLN' in dbsnp_zst_line: return dbsnp_zst_line.split('\t')[2] return None def parse_rsmerged_line(rsmerged_zst_line): rsmerged_zst_obj = json.loads(rsmerged_zst_line) rsids = list(map(lambda rsid: f'rs{rsid}', ([rsmerged_zst_obj['refsnp_id']] + rsmerged_zst_obj['merged_snapshot_data']['merged_into']))) return rsids def rsid_to_coords(rsid, dbsnp_prs, rsmerged_prs, parse_rsmerged_line): for dbsnp_zst_line in dbsnp_prs.prs(rsid): return dbsnp_zst_line for rsmerged_zst_line in rsmerged_prs.prs(rsid): rsid_syns = parse_rsmerged_line(rsmerged_zst_line) for dbsnp_zst_line in dbsnp_prs.prs(rsid_syns): return dbsnp_zst_line return None arg_parser = ArgumentParser() arg_parser.add_argument('-S', '--ann-file-path', required=True, metavar='str', dest='ann_file_path', type=str, help='Path to table with rsIDs column (uncompressed)') arg_parser.add_argument('-D', '--dbsnp-file-path', required=True, metavar='str', dest='dbsnp_file_path', type=str, help='Path to official dbSNP VCF (uncompressed or compressed via Seekable zstd)') arg_parser.add_argument('-R', '--rsmerged-file-path', 
required=True, metavar='str', dest='rsmerged_file_path', type=str, help='Path to official refsnp-merged JSON (uncompressed or compressed via Seekable zstd)') arg_parser.add_argument('-T', '--trg-dir-path', required=True, metavar='str', dest='trg_dir_path', type=str, help='Path to directory for results') arg_parser.add_argument('-c', '--rsids-col-num', metavar='1', default=1, dest='rsids_col_num', type=int, help='rsIDs-column number in source table') args = arg_parser.parse_args() dbsnp_idx = Idx(args.dbsnp_file_path, 'rsids__gnomad_cln', parse_dbsnp_line) dbsnp_idx.idx() rsmerged_idx = Idx(args.rsmerged_file_path, 'rsids', parse_rsmerged_line) rsmerged_idx.idx() perf = {'dbsnp_idx': dbsnp_idx.perf, 'rsmerged_idx': rsmerged_idx.perf} dbsnp_prs = Prs(args.dbsnp_file_path, 'rsids__gnomad_cln') rsmerged_prs = Prs(args.rsmerged_file_path, 'rsids') @count_exec_time def ann(args, res_files_crt_time, dbsnp_prs, rsmerged_prs, parse_rsmerged_line): trg_file_path = os.path.join(args.trg_dir_path, f'ann_res_{res_files_crt_time}.txt') dump_file_path = os.path.join(args.trg_dir_path, f'ann_dump_{res_files_crt_time}.txt') with open(args.ann_file_path) as ann_file_opened: with open(trg_file_path, 'w') as trg_file_opened: with open(dump_file_path, 'w') as dump_file_opened: for ann_file_line in ann_file_opened: if ann_file_line.startswith('#'): continue ann_file_line = ann_file_line.rstrip() ann_rsid = ann_file_line.split('\t')[args.rsids_col_num - 1] dbsnp_zst_line = rsid_to_coords(ann_rsid, dbsnp_prs, rsmerged_prs, parse_rsmerged_line) if dbsnp_zst_line: trg_file_opened.write(ann_file_line + '\t' + dbsnp_zst_line) else: dump_file_opened.write(ann_file_line + '\n') res_files_crt_time = datetime.now() perf['ann'] = ann(args, res_files_crt_time, dbsnp_prs, rsmerged_prs, parse_rsmerged_line)[1] perf_file_path = os.path.join(args.trg_dir_path, f'ann_perf_{res_files_crt_time}.json') with open(perf_file_path, 'w') as perf_file_opened: json.dump(perf, perf_file_opened, 
indent=4)Performance measurement resultsann_perf_2023-07-09 20:45:36.102376.jsondbsnp_idx- indexingGnomAD- andCLN-containing lines of dbSNP VCF;crt_db_zst- compressing indexable file (output is further called "DB");crt_full_idx_tmp- indexing DB (output is further called "temporary full index");crt_full_idx_tmp_srtd- sorting temporary full index by indexed DB elements;crt_full_idx- compressing sorted temporary full index (output is further called "full index");crt_mem_idx- selective indexing of full index;rsmerged_idx- indexing all lines of rsmerged JSON;<...>ann- querying 2842 rsIDs by indexed dbSNP VCF and indexed rsmerged JSON.{ "dbsnp_idx": [ [ "crt_db_zst", "0:39:02.127938" ], [ "crt_full_idx_tmp", "1:06:13.698458" ], [ "crt_full_idx_tmp_srtd", "0:00:00.928633" ], [ "crt_full_idx", "0:00:00.577710" ], [ "crt_mem_idx", "0:00:00.280014" ] ], "rsmerged_idx": [ [ "crt_db_zst", "0:02:44.068920" ], [ "crt_full_idx_tmp", "0:04:43.153807" ], [ "crt_full_idx_tmp_srtd", "0:00:30.015826" ], [ "crt_full_idx", "0:00:17.204649" ], [ "crt_mem_idx", "0:00:08.811190" ] ], "ann": "0:00:06.995505" }
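The core trick antidb's Idx/Prs pair relies on (index once, then seek straight to the record instead of rescanning the file) can be sketched with the stdlib alone. The file layout, key choice and names here are hypothetical; antidb additionally compresses the database with seekable zstd and stores the index on disk:

```python
import os
import tempfile

def build_index(path, key_func):
    """Map key -> byte offset of the line it appears on (sketch)."""
    index = {}
    with open(path, "rb") as f:
        while True:
            offset = f.tell()
            line = f.readline()
            if not line:
                break
            key = key_func(line.decode())
            if key is not None:
                index[key] = offset
    return index

def fetch(path, index, key):
    """Seek directly to the indexed line instead of scanning the file."""
    with open(path, "rb") as f:
        f.seek(index[key])
        return f.readline().decode()

# Tiny hypothetical VCF-like file keyed on the third column (the rsID).
tmp = tempfile.NamedTemporaryFile("w", suffix=".vcf", delete=False)
tmp.write("chr22\t100\trs1\tC\tT\nchr1\t200\trs2\tC\tA\n")
tmp.close()

idx = build_index(tmp.name, lambda line: line.split("\t")[2].rstrip())
rs2_line = fetch(tmp.name, idx, "rs2")
print(rs2_line.rstrip())
os.remove(tmp.name)
```

Building the index is a one-time linear pass; every subsequent keyed lookup is a single seek plus one line read, which is why the perf report above separates index-creation times from the fast `ann` query step.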
antidevinfo
No description available on PyPI.
antidogpiling
This package provides a generic implementation and Django-specific cache backends for anti-dogpiled caching. Django 1.2 and later are supported.

Dogpiling is the effect of everyone rushing to renew a value that has timed out in a cache, for as long as a new value has not yet been set. Anti-dogpiling tries to mitigate this by limiting how many get to produce a new value. How the limiting is done, and what happens to everyone else, depends on the solution.

The solution provided in this package serves the old value from the cache while a new one is being produced. This is achieved by introducing a soft timeout for when the value should be renewed, in addition to the regular hard timeout (for when the value is no longer in the cache). At the event of a soft timeout, the first request is responded with a cache miss, and will go on to produce a new value, while subsequent requests are responded with the value that is still cached. When a new value is ready, it simply replaces the old value, and a new soft timeout is set.

In this particular implementation, as few requests as possible are let through to produce the new value, without using any locks (I have not tested using locks. Locking across servers could be tricky and undesirable. Unless letting only one call through to renew a value is a hard requirement, locking might not be worth the trouble).

See the Benefits and caveats section for important details.

Using the anti-dogpiled Django backends

The anti-dogpiled Django backends are configured like Django's own backends. See Setting up the cache in the Django documentation for details.

Django 1.2

Simply set the CACHE_BACKEND setting to point to the right module. Examples:

CACHE_BACKEND = 'antidogpiling.django.memcached://127.0.0.1:11211/'
CACHE_BACKEND = 'antidogpiling.django.filebased:///var/tmp/django_cache'

Django 1.3+

Simply replace the BACKEND reference with the corresponding anti-dogpiled backend.
An example:

CACHES = {
    'default': {
        'BACKEND': 'antidogpiling.django.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    },
}

Configuration options

See the cache options setting in the Django documentation on how to specify caching options.

Use the hard_timeout_factor option to control how much longer the hard timeout should be, relative to the soft timeout. The default is 8, so by default, the hard timeout is 8 times longer than the soft timeout. (The point is not to be able to specify an exact hard timeout, but to ensure that values stay in the cache for a sufficient amount of time for the anti-dogpiling to have an effect, and for unused values to eventually disappear from the cache.)

Use the default_grace_time option to set the timeout (in seconds) for renewing a value that has timed out softly. After this period, another client will be allowed to try producing a new value. The default is 60 seconds. The grace time can also be specified per call by using the grace_time parameter on the add and set methods.

An example for Django 1.3+:

CACHES = {
    'default': {
        'BACKEND': 'antidogpiling.django.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
        'OPTIONS': {
            'hard_timeout_factor': 100,
            'default_grace_time': 10,
        },
    },
}

Client usage

The add, get, set, and delete methods work as usual, except that the timeouts set or invalidated are the soft timeouts, instead of the hard timeouts. To affect the hard timeouts, and to not apply any anti-dogpiling, use the hard=True parameter on the add, set, and delete methods.

Note: You must use hard=True when setting an integer to be used with the incr and decr methods.
Increments and decrements require the raw integer to be stored in the cache. See the caveats below for more details.

Benefits and caveats

Benefits of using this package

It provides the generic functionality, for anyone to base their own solution on.

It wraps the cached data in a custom object (not in a tuple or dict or so), making it possible to cache tuples and dicts without conflicting with the internal workings of the anti-dogpiling.

It supports all Django 1.2+ cache backends, without re-implementing any Django functionality.

General caveats

There is no protection against dogpiling when a value is not in the cache at all.

Caveats in the Django backends

The incr and decr methods are not anti-dogpiled, due to being atomic in Memcached (at least). The anti-dogpiling would not be atomic, unless somehow implemented with locks.

Note: When initializing a value for being incremented or decremented, one has to specify hard=True when calling the set method. Otherwise, the anti-dogpiling kicks in and stores a complex value which cannot be incremented (a ValueError is raised)!

The set_many, get_many, and delete_many methods are not anti-dogpiled, due to a combination of laziness and all the decisions that would have to be made about how to handle soft timeouts, etc.

Change history

1.1.3 (2012-07-19)

Replaced the dynamic mixin with a regular object reference in the common Django backend, voiding the issue with using multiple different anti-dogpiled backends at the same time.

Added dummy Django backend.

1.1.2 (2012-07-02)

Documentation update (no functional change)

1.1.1 (2011-05-29)

Added support for Django 1.3 backends

1.0.1 (2011-02-02)

Added manifest file for proper packaging

1.0 (2011-02-02)

Initial version
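The soft/hard-timeout scheme described above can be sketched without Django: store each value with a soft deadline; on a soft miss, let the first caller claim a grace period (receiving a cache miss so it recomputes) while everyone else keeps getting the stale value. This is an illustrative single-process reimplementation, not the package's code (which also tracks a hard timeout of soft_timeout * hard_timeout_factor for actual eviction):

```python
import time

class AntiDogpiledCache:
    """In-memory sketch of soft-timeout anti-dogpiling (no locks)."""

    def __init__(self, default_grace_time=60):
        self._store = {}  # key -> [value, soft_deadline, grace_claimed_until]
        self.default_grace_time = default_grace_time

    def set(self, key, value, soft_timeout):
        # Hard-timeout eviction is omitted here for brevity.
        self._store[key] = [value, time.monotonic() + soft_timeout, 0.0]

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # genuine miss: no dogpile protection at this point
        value, soft_deadline, grace_until = entry
        now = time.monotonic()
        if now < soft_deadline:
            return value  # still fresh
        if now >= grace_until:
            # First caller after the soft timeout: claim the grace period
            # and report a miss so that *this* caller recomputes the value.
            entry[2] = now + self.default_grace_time
            return None
        return value  # stale, served while the first caller recomputes

cache = AntiDogpiledCache(default_grace_time=5)
cache.set("report", "old", soft_timeout=0.0)  # immediately soft-expired
miss = cache.get("report")   # first caller sees a miss and recomputes
stale = cache.get("report")  # second caller gets the stale value meanwhile
cache.set("report", "new", soft_timeout=30)
fresh = cache.get("report")  # recomputation done: fresh value served
```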
antidote
Antidote is a dependency injection micro-framework for Python 3.7+.

It is built on the idea of having declarative, explicit and decentralized definitions of dependencies at the type / function / variable definition, which can be easily tracked down.

Features are built with a strong focus on maintainability, simplicity and ease of use in mind. Everything is statically typed (mypy & pyright), documented with tested examples, can be easily used in existing code and tested in isolation.

Installation

To install Antidote, simply run this command:

pip install antidote

Help & Issues

Feel free to open an issue or a discussion on Github for questions, issues, proposals, etc.!

Documentation

Tutorial, reference and more can be found in the documentation. Some quick links: Guide, Reference, Changelog.

Overview

Accessing dependencies

Antidote works with a Catalog, which is a sort of collection of dependencies. Multiple ones can co-exist, but world is used by default. The most common form of a dependency is an instance of a given class:

from antidote import injectable

@injectable
class Service:
    pass

world[Service]  # retrieve the instance
world.get(Service, default='something')  # similar to a dict

By default, @injectable defines a singleton, but alternative lifetimes (how long the world keeps the value alive in its cache) exist, such as transient, where nothing is cached at all. Dependencies can also be injected into a function/method with @inject. With both, Mypy, Pyright and PyCharm will infer the correct types.

from antidote import inject

@inject  # ⯆ Infers the dependency from the type hint
def f(service: Service = inject.me()) -> Service:
    return service

f()  # service injected
f(Service())  # useful for testing: no injection, argument is used

@inject supports a variety of ways to bind arguments to their dependencies, if any. This binding is always explicit,
for example:fromantidoteimportInjectMe# recommended with inject.me() for best static-typing experience@injectdeff2(service=inject[Service]):...@inject(kwargs={'service':Service})deff3(service):...@injectdeff4(service:InjectMe[Service]):...Classes can also be fully wired, all methods injected, easily with@wire. It is also possible to inject the first argument, commonly namedself, of a method with an instance of a class:@injectableclassDummy:@inject.methoddefmethod(self)->'Dummy':returnself# behaves like a class methodassertDummy.method()isworld[Dummy]# useful for testing: when accessed trough an instance, no injectiondummy=Dummy()assertdummy.method()isdummyDefining dependenciesAntidote provides out of the box 4 kinds of dependencies:@injectableclasses for which an instance is provided.fromantidoteimportinjectable# ⯆ optional: would just call Service() otherwise.@injectable(factory_method='load')classService:@classmethoddefload(cls)->'Service':returncls()world[Service]constfor defining simple constants.fromantidoteimportconst# Used as namespaceclassConf:TMP_DIR=const('/tmp')# From environment variables, lazily retrievedLOCATION=const.env("PWD")USER=const.env()# uses the name of the variablePORT=const.env(convert=int)# convert the environment variable to a given typeUNKNOWN=const.env(default='unknown')world[Conf.TMP_DIR]@injectdeff(tmp_dir:str=inject[Conf.TMP_DIR]):...@lazyfunction calls (taking into account arguments) used for (stateful-)factories, parameterized dependencies, complex constants, etc.fromdataclassesimportdataclassfromantidoteimportlazy@dataclassclassTemplate:name:str# the wrapped template function is only executed when accessed through world/@inject@lazydeftemplate(name:str)->Template:returnTemplate(name=name)# By default a singleton, so it always returns the same instance of Templateworld[template(name="main")]@injectdeff(main_template:Template=inject[template(name="main")]):...@lazywill automatically apply@injectand can also be a value, property or 
even a method, similarly to `@injectable`.

- `@interface` for a function, class or even `@lazy` function call for which one or multiple implementations can be provided.

```python
from antidote import interface, implements

@interface
class Task:
    pass

@implements(Task)
class CustomTask(Task):
    pass

world[Task]  # instance of CustomTask
```

The interface does not need to be a class. It can also be a `Protocol`, a function or a `@lazy` function call!

```python
@interface
def callback(event: str) -> bool:
    ...

@implements(callback)
def on_event(event: str) -> bool:
    # do stuff
    return True

# world[callback] returns the on_event function
assert world[callback] is on_event
```

`@implements` will enforce, as much as possible, that the interface is correctly implemented. Multiple implementations can also be retrieved. Conditions, filters on metadata and weighting of implementations are all supported, allowing full customization of which implementation should be retrieved in which use case.

Each of those has several knobs to adapt them to your needs, which are presented in the documentation.

## Testing & Debugging

Injected functions can typically be tested by passing arguments explicitly, but that is not always enough.
Antidote provides test contexts which fully isolate themselves and allow overriding any dependency:

```python
original = world[Service]

with world.test.clone() as overrides:
    # dependency value is different, but it's still a singleton Service instance
    assert world[Service] is not original

    # override examples
    overrides[Service] = 'x'
    assert world[Service] == 'x'

    del overrides[Service]
    assert world.get(Service) is None

    @overrides.factory(Service)
    def build_service() -> object:
        return 'z'

    # test contexts can be nested without impacting the current test context
    with world.test.clone() as nested_overrides:
        ...

# outside the test context, nothing changed
assert world[Service] is original
```

Antidote also provides introspection capabilities with `world.debug`, which returns a nicely formatted tree showing what Antidote actually sees, without executing anything, like the following:

```
🟉 <lazy> f()
└── ∅ Service
    └── Service.__init__
        └── 🟉 <const> Conf.HOST

∅ = transient
↻ = bound
🟉 = singleton
```

## Going Further

- Scopes are supported. Defining a `ScopeGlobalVar` and using it as a dependency will force any dependents to be updated whenever it changes (a request, for example).
- Multiple catalogs can be used, which allows you to expose only a subset of your API (dependencies) to your consumers within a catalog.
- You can easily define your own kind of dependencies, with proper typing from both `world` and `inject`. `@injectable`, `@lazy`, `inject.me()` etc. all rely on Antidote's core (`Provider`, `Dependency`, etc.), which is part of the public API.

Check out the Guide, which goes more in depth, or the Reference for specific features.

## How to Contribute

- Check for open issues or open a fresh issue to start a discussion around a feature or a bug.
- Fork the repo on GitHub. Run the tests to confirm they all pass on your machine.
If you cannot find why it fails, open an issue.

- Start making your changes on the master branch.
- Send a pull request.
- Be sure to merge the latest from "upstream" before making a pull request!

If you have any issue during development or just want some feedback, don't hesitate to open a pull request and ask for help! You're also more than welcome to open a discussion or an issue on any topic!

However, no code changes will be merged if they do not pass mypy and pyright, lack 100% test coverage, or are missing documentation with tested examples where relevant.
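The catalog-of-dependencies idea the README describes can be illustrated with a minimal, dependency-free sketch. This is not Antidote's implementation (its real core involves providers, lifetimes and thread-safety); the names `Catalog`, `injectable` and `world` here only mirror the concepts above:

```python
# Minimal sketch of a declarative dependency catalog: @injectable-style
# registration plus singleton caching, analogous in spirit to Antidote's
# `world`. Illustrative only; not Antidote's real internals.
class Catalog:
    def __init__(self):
        self._factories = {}   # type -> factory callable
        self._singletons = {}  # type -> cached instance

    def injectable(self, cls):
        """Decorator: register a class so the catalog can build it."""
        self._factories[cls] = cls
        return cls

    def __getitem__(self, key):
        # Build on first access, then keep the instance (singleton lifetime).
        if key not in self._singletons:
            self._singletons[key] = self._factories[key]()
        return self._singletons[key]

world = Catalog()

@world.injectable
class Service:
    pass

assert world[Service] is world[Service]  # same cached instance every time
```

A `transient` lifetime would simply skip the `_singletons` cache and call the factory on every lookup.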
antidotedb
# Antidote Python Client

The repository contains classes for using the Antidote database service in Python. It provides the client implementation for the Antidote database. You can learn more about the Antidote database here.

## Installation

To install the Python AntidoteDB client, just use the pip installer.

```
pip install antidotedb
```

## Documentation

The classes for accessing AntidoteDB are in the package `antidotedb`.

For accessing AntidoteDB, you should start by creating an `AntidoteClient` object, as in the following example.

```python
from antidotedb import *

server = 'localhost'
port = 8087
clt = AntidoteClient(server, port)
```

## Interactive transactions

`start_transaction` starts an interactive transaction. On failure, the `start_transaction` method raises an `AntidoteException`.

```python
tx = clt.start_transaction()
```

The `commit` method commits a sequence of operations executed in a transaction. On success, the `commit` method returns `True`.

```python
ok = tx.commit()
```

The `abort` method rolls back the transaction.

```python
tx.abort()
```

## Session guarantees

By default, the client will not maintain a session across consecutive transactions, i.e., there will be no causal dependency established between two consecutive transactions. To set these dependencies, start a transaction as follows:

```python
tx = clt.start_transaction(min_snapshot=clt.last_commit)
```

## Operations on objects

The `Key` class allows you to specify the AntidoteDB key for an object.

```python
Key(bucket_name, key_name, type_name)
```

The `read_objects` method allows reading the contents of one (or more) objects. On success, `read_objects` returns a list of typed objects (more information next). On failure, `read_objects` returns `None`.

```python
key = Key("some_bucket", "some_key_counter", "COUNTER")
res = tx.read_objects(key)
print(res[0].value())
```

It is also possible to read more than one object.

```python
key1 = Key("some_bucket", "some_key", "COUNTER")
key2 = Key("some_bucket", "some_other_key", "MVREG")
res = tx.read_objects([key1, key2])
print(res[0].value())
print(res[1].values())
```

The `update_objects` method allows updating one (or more) objects.
On success or failure, `update_objects` returns `True` or `False`, respectively.

```python
key1 = Key("some_bucket", "some_key", "COUNTER")
res = tx.update_objects(Counter.IncOp(key1, 2))
```

## Counters

The data type name for the Counter data type is `COUNTER`.

```python
key = Key("some_bucket", "some_key", "COUNTER")
```

The following read-only operation is available on a `Counter` object returned by the `read_objects` method:

- `value()`, for accessing the value of an object.

```python
res = tx.read_objects(key)
print(res[0].value())
```

The following update operation is available:

- `Counter.IncOp(key, value)`, for incrementing a counter.

```python
res = tx.update_objects(Counter.IncOp(key, 2))
```

The other counter data type supported by AntidoteDB, `FATCounter`, can be used with the `FATCOUNTER` data type name.

## Last-writer-wins register

The data type name for the last-writer-wins register data type is `LWWREG`.

```python
key = Key("some_bucket", "some_key", "LWWREG")
```

The following read-only operation is available on a `Register` object returned by the `read_objects` method:

- `value()`, for accessing the value of an object.

```python
res = tx.read_objects(key)
print(res[0].value())
```

The following update operation is available:

- `Register.AssignOp(key, val)`, for assigning a new value to the register.

```python
val = bytes("lightkone", 'utf-8')
res = tx.update_objects(Register.AssignOp(key, val))
```

## Multi-value register

The data type name for the multi-value register data type is `MVREG`.

```python
key = Key("some_bucket", "some_key", "MVREG")
```

The following read-only operation is available on an `MVRegister` object returned by the `read_objects` method:

- `values()`, for accessing the values of an object.
The multiple values are returned in a list.

```python
res = tx.read_objects(key)
print(res[0].values())
```

The following update operation is available:

- `Register.AssignOp(key, val)`, for assigning a new value to the register.

```python
val = bytes("lightkone", 'utf-8')
res = tx.update_objects(Register.AssignOp(key, val))
```

## Sets

The data type name for the add-wins set data type is `ORSET`.

```python
key = Key("some_bucket", "some_key", "ORSET")
```

The data type name for the remove-wins set data type is `RWSET`.

```python
key = Key("some_bucket", "some_key", "RWSET")
```

The following read-only operation is available on a `Set` object returned by the `read_objects` method:

- `values()`, for accessing the value of the set. The multiple values are returned in a list.

```python
res = tx.read_objects(key)
print(res[0].values())
```

The following update operations are available:

- `Set.AddOp(key, vals)`, for adding values to the set.

```python
val1 = bytes("lightkone", 'utf-8')
val2 = bytes("syncfree", 'utf-8')
res = tx.update_objects(Set.AddOp(key, [val1, val2]))
```

- `Set.RemoveOp(key, vals)`, for removing values from the set.

```python
val1 = bytes("lightkone", 'utf-8')
val2 = bytes("syncfree", 'utf-8')
res = tx.update_objects(Set.RemoveOp(key, [val1, val2]))
```

## Flags

The data type name for the enable-wins flag data type is `FLAG_EW`.

```python
key = Key("some_bucket", "some_key", "FLAG_EW")
```

The data type name for the disable-wins flag data type is `FLAG_DW`.

```python
key = Key("some_bucket", "some_key", "FLAG_DW")
```

The following read-only operation is available on a `Flag` object returned by the `read_objects` method:

- `value()`, for accessing the value of the flag.

```python
res = tx.read_objects(key)
print(res[0].value())
```

The following update operation is available:

- `Flag.UpdateOp(key, val)`, for setting the value of the flag.

```python
res = tx.update_objects(Flag.UpdateOp(key, True))
```

## Maps

The data type name for the grow-only map data type is `GMAP`.

```python
key = Key("some_bucket", "some_key", "GMAP")
```

The data type name for the recursive-remove map data type is `RRMAP`.

```python
key = Key("some_bucket", "some_key", "RRMAP")
```

The following read-only operation is available on a `Map` object returned
by the `read_objects` method:

- `value()`, for accessing the contents of the map. The map is represented by a Python dictionary that maps each key to its object.

```python
res = tx.read_objects(key)
print(res[0].value())
```

The following update operations are available:

- `Map.UpdateOp(key, ops)`, for executing a set of update operations on the objects stored in the map.
- `Map.RemoveOp(key, keys)`, for removing keys from the map.

```python
k1 = bytes("k1", 'utf-8')
res = tx.update_objects(Map.RemoveOp(key, [Key("", k1, "COUNTER")]))
```

## Generic operations

The following update operation is available on all data types:

- `Type.ResetOp(key)`, for resetting the value.

```python
res = tx.update_objects(Flag.ResetOp(key))
```

## Development / Contributing

Any help developing this code is welcome. Feel free to open pull requests or issues.

Testing the AntidoteDB client requires a running Antidote instance. You can use Docker to start one on your local machine:

```
docker run -d -p "8087:8087" antidotedb/antidote
```
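AntidoteDB's counters converge under concurrent updates because they are CRDTs. The idea can be sketched with a state-based grow-only counter (G-Counter); this is an illustration of the general technique, not AntidoteDB's actual implementation or wire protocol:

```python
# Sketch of a state-based grow-only counter (G-Counter), the family of
# replicated data type behind counters in databases like AntidoteDB.
# Illustrative only; AntidoteDB's real implementation differs.
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica id -> that replica's increment total

    def inc(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Per-replica max makes merge commutative, associative and
        # idempotent, so replicas converge regardless of sync order.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

a, b = GCounter("a"), GCounter("b")
a.inc(2)
b.inc(3)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5  # both replicas agree
```

Both replicas incremented concurrently, yet after exchanging state in either order they report the same total.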
antidotes
# Antidote - A Skiddies Buster Tool

AntiDOTE v0.2 (project under development)

## Description

Being a script kid is not bad. But using someone else's tools and scripts to harm others is absolutely bad. And this thing is actually needed right now.

## Setup

```
$ python3 -m pip install antidotes
```

## Features

- This tool contains a scanner module to scan the given URLs for infection by the most used and popular scripts.
- It contains a special module for 'Saycheese' and other similar 'camera spoofing' scripts, inherited from https://github.com/StrinTH/saycheese-antidote.
- This tool can run in the background, keep checking the clipboard for malicious URLs, and scan them automatically.
- This tool depends mostly on the requests package, plus some less-used others for basic operations.
- Some new features will be added soon, to play with skids.

## Miscellaneous feature

In case you want it to run permanently, you can launch assistance.pyw by clicking on it; it will keep tracking clipboard URLs and scanning them.

## Modules description

### Antidote Saycheese

This module is an antidote for all those tools and scripts on GitHub and all other platforms which are being used by skids for playing with innocents.

Special feature:

- You can tease them by sending your message on an image.
- In case you can play with SVG payloads, you may be able to get a shell in their system through this tool. ;)

### Antidote Shellphish

This tool is one of the most popular phishing tools among these kids, originally made by @thelinuxchoice (username changed and made private now). Use this module to send messages as text.

Special feature:

- You can send messages as text, but you can also be more creative with them.
;)

### Antidote Whatsapp Phisher

There are a lot of tools available on GitHub similar to https://github.com/Ignitetch/whatsapp-phishing (not targeting the developer, but the misusers), so you can send your message as text to them.

### Antidote z-shadow

z-shadow and shadowave were the most famous free phishing sites for skids (down for a month now). This module manipulates the complete transmission process between the z-shadow server and the attacker's client.

Special feature:

- This tool bypasses the weak human verification used by the z-shadow and shadowave sites.
- You can use this to spam your message as text to them. After a while, their account will be removed by the site's own bots for overloading.

Note: As z-shadow and shadowave have been down for more than a month, this tool has not been tested much, but it was working uninterruptedly for more than half a year.

## TO-DO list

- [Done] Add auto-scan of clipboard-copied URLs. (9cc7fb7)
- Add an anti-saythanks module (in case it is needed).
- Make a GUI for this tool to handle actions for URLs through a corner sidebar.

## Support authors
antidot-fluidtopics-ftml-connector
No description available on PyPI.
antidox
## Summary

`antidox` is a Sphinx extension that can read Doxygen XML "databases" and insert documentation for entities in Sphinx documents, similar to Breathe.

It is intended to be fast and simple, though easily customizable. Document generation (i.e. conversion between Doxygen XML and reStructuredText) is driven by XML stylesheets (powered by lxml), while indexing and selection of documentable entities is done by a SQL database (sqlite3).

Here is an example project showing this extension in action.

## Example usage

Generate the documentation for an entire header, and include all entities defined in that header:

```
.. doxy:c:: lua_run.h::*
   :children:
```

The syntax `<path>::<identifier>` can be used to disambiguate between entities with the same name in different files.

To document a Doxygen group:

```
.. doxy:c:: [CborPretty]
   :children:
```

You can manually specify which children should be documented:

```
.. doxy:c:: be_uint16_t
   :children: u16
```

Cross references are provided by a custom role, e.g.:

```
:doxy:r:`be_uint16_t::u16`
```

The complete syntax is described in the docs.

## Stub generation

The `gen_stubs.py` script shows how stub files can be automatically generated. You can adapt this script to your own project.

Generating API docs this way is fast and convenient, but may be suboptimal, since the spirit of this extension (and of Sphinx) is to generate narrative documentation and not merely an API reference.

## Note: Beta

Though usable, this extension is still under development.
Backwards compatibility will be kept for all releases with the same major/minor version. Be aware, however, that after updating this extension you may need to do a clean build of your docs to see the results.

## Objectives

- Reuse API docs made with Doxygen in a Sphinx project.
- Provide a smooth transition between 100% automatic API docs (what Doxygen generates) and semi-manual documentation (autodoc-style).
- Have sensible defaults for automatic documentation generation while allowing customization.
- Deal with big projects efficiently: the main tool in use now (Breathe) has resource usage issues when dealing with large XML files.
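Reading a Doxygen XML "database" boils down to walking `memberdef` elements and pulling out names and brief descriptions. A hedged sketch with the standard library's `xml.etree` (the XML below is a much-simplified stand-in for Doxygen's real schema, and this is not antidox's lxml/XSLT pipeline):

```python
import xml.etree.ElementTree as ET

# Simplified Doxygen-style XML; the real schema carries many more
# elements and attributes. For illustration only.
DOXY_XML = """
<doxygen>
  <compounddef kind="file">
    <memberdef kind="function" id="lua_run_8h_1a1">
      <name>lua_run</name>
      <briefdescription><para>Run a Lua chunk.</para></briefdescription>
    </memberdef>
  </compounddef>
</doxygen>
"""

def extract_members(xml_text):
    """Yield (id, name, brief) for every memberdef in the document."""
    root = ET.fromstring(xml_text)
    for m in root.iter("memberdef"):
        # itertext() flattens nested markup (<para>, etc.) into plain text.
        brief = "".join(m.find("briefdescription").itertext()).strip()
        yield m.get("id"), m.findtext("name"), brief

members = list(extract_members(DOXY_XML))
```

antidox takes the same input further: an SQL index over these entities plus XSL stylesheets to emit reStructuredText.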
antidxx
No description available on PyPI.
antievil
😈 py-antievil
antifloating
No description available on PyPI.
anti_forgetful
# Never forget your AWS instances!

`anti_forgetful` is a simple and handy tool for launching a single AWS instance from the terminal and tying its lifetime to the lifetime of the process on your machine that launched it. This helps to avoid situations where you forget your instance and leave it running for a month. That could be thousands of dollars!

Please let me know if you have issues!

## Just tell me how to use it!

First, if you haven't used AWS before:

1. Set up your AWS account.
2. Follow the first two steps ("Install the AWS CLI" and "Configure the AWS CLI") here.

Next, install `anti_forgetful`:

```
pip install anti_forgetful
```

Now, check out the example folder for an example of how to launch a Jupyter notebook server. To start building your instance, move to that directory and run:

```
anti_forgetful awscfg
```

This tells the launcher to use `awscfg.py` as your configuration file and starts building your instance. It'll take a few minutes on a free `t2.micro` instance.

## So how do I write one of these configuration files?!

The configuration is specified as a Python file:

```python
# The name of the public/private key pair and the security group created for
# your instance. If this key already exists, it won't be recreated.
key_pair_name = 'tutorial_key_pair'
group_name = 'tutorial_group'

# What instance type do you want? https://aws.amazon.com/ec2/instance-types/
instance_type = 't2.micro'

# This option turns off strict host checking in SSH. This can be handy if you
# aren't worried about security and want to avoid some manual interaction
# launching your instance.
no_strict_host_checking = True

# The base image to build from. You probably shouldn't change this.
base_image_id = 'ami-428aa838'

# The disk size requested from AWS EBS. In GB.
root_volume_size = 30

# Your instance will be given a name so that it can be started and stopped!
# Two instances with the same name could get messy... You've been warned.
instance_name = 'tutorial_instance'

# This function is run once when your instance is built. Build your docker
# images here or install any packages you might want.
def setup_images(s):
    # Copy a file from the local machine to the instance. Accepts an optional
    # parameter "dest_filepath" for the remote destination.
    s.copy_to_remote('docker-compose.yml')
    # Run a shell command on the remote instance.
    s.run_cmd('docker-compose pull')

# This function is run every time your instance boots up.
def start_containers(s):
    # Forward a port from the remote machine to the local machine through an
    # ssh tunnel.
    s.ssh_port_forward(8888, 8888)
    # Start the docker containers!
    s.run_cmd('docker-compose up')
```

## Just in case you still need to terminate some instances

The `awsterminate` command will list all the instances you currently have running and give you the option of terminating them.

## Miscellaneous

- At the moment, this is pretty completely integrated with Docker. That could easily be changed.
- I've only tried this on Ubuntu with Python 3.5 and Python 3.6.
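The core trick, tying a remote resource's lifetime to the launching process, can be sketched with the standard library's `atexit` hook. `FakeInstance` and `launch` are hypothetical stand-ins for illustration; nothing here talks to AWS or is anti_forgetful's real API:

```python
import atexit

# Sketch of binding a resource's lifetime to the launching process, the
# idea behind anti_forgetful. FakeInstance stands in for a real AWS
# instance; no AWS calls are made.
class FakeInstance:
    def __init__(self, name):
        self.name = name
        self.running = True

    def terminate(self):
        self.running = False

def launch(name):
    inst = FakeInstance(name)
    # When this Python process exits, normally or via sys.exit(), the
    # registered hook fires and the instance is terminated with it.
    atexit.register(inst.terminate)
    return inst

instance = launch("tutorial_instance")
```

A production version would also want signal handlers, since `atexit` hooks do not run when the process is killed with SIGKILL.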
antifragility-schema
No description available on PyPI.
anti-fraud
Example PackageThis is a simple example package. You can useGithub-flavored Markdownto write your content.
antifungal
# Antifungal Peptide Prediction Tool

This repository hosts the Antifungal Peptide Prediction Tool, a Python package for predicting and analyzing antifungal peptides. It integrates various functionalities including peptide sequence processing, descriptor calculation, and machine-learning-based prediction models.

## Installation

To install the package, run the following commands:

```
# optional: create a virtual environment
pip install virtualenv
virtualenv env --python=python3.8
env\Scripts\activate      # activate the virtual environment on Windows
source env/bin/activate   # activate the virtual environment on Linux

# install the antifungal package with pip
pip install antifungal
```

## Usage

The tool can be used to predict the antifungal activity of peptides, as well as for rational design, including segmenting peptide sequences, performing single-point mutation analysis, and globally optimizing peptide sequences for enhanced properties.

### Example usage for antifungal peptide prediction

```python
from antifungal.predict import predict_MIC

seq = ['HIHIRHMWLLR', 'HIHIRHMWLLRR']
pred = predict_MIC(seq)
print(pred)
# Expected output:
# {
#     'antifungal': [True, True],
#     'prob_antifungal': [95.2, 97.9],
#     'MIC_C_albicans': [21.8, 17.34],
#     'prob_MIC_C_albicans': [99.8, 99.8],
#     'MIC_C_krusei': [7.13, 5.87],
#     'prob_MIC_C_krusei': [99.3, 99.4],
#     'MIC_C_neoformans': [24.4, 15.57],
#     'prob_MIC_C_neoformans': [99.3, 99.6],
#     'MIC_C_parapsilosis': [18.3, 17.05],
#     'prob_MIC_C_parapsilosis': [84.5, 82.6],
#     'AFI': [16.23, 12.82],
#     'prob_AFI': [79.16, 79.9],
#     'peptide_seq': ['HIHIRHMWLLR', 'HIHIRHMWLLRR']
# }
```

### Example usage for antifungal peptide design

```python
from antifungal.design import segment, single_point_mutation, global_optimization

# Example for the segment class
segment_instance = segment("YOUR_PEPTIDE_SEQUENCE")
segment_predictions = segment_instance.get_segmented_sequences().predict()

# Example for the single_point_mutation class
mutation_instance = single_point_mutation("YOUR_PEPTIDE_SEQUENCE")
mutation_predictions = mutation_instance.get_mutated_sequences().predict()

# Example for the global_optimization class
optimization_instance = global_optimization("YOUR_PEPTIDE_SEQUENCE")
optimized_seq, results = optimization_instance.optimize()
```

## Directory structure

- `data/`: Contains training data used for model development.
  - `training_data/`: Stores the datasets utilized in training the predictive models.
  - `screening_data/`: Stores the screening results from the article referenced below.
- `model/`: Houses the trained models for antifungal peptide prediction.
- `propy/`: Includes a modified version of the propy package, with bug fixes. The original package can be found on PyPI (propy).
- `ChemoinfoPy/`: Contains Python scripts for variable selection, peptide sequence preprocessing, descriptor calculation, and dataset partitioning for correction and validation purposes.

## Reference

For more detailed information, refer to the paper "Large-Scale Screening of Antifungal Peptides Based on Quantitative Structure-Activity Relationship", ACS Med. Chem. Lett. 2022, 13, 1, 99-104, and visit the Antifungal Webserver.

## License

MIT
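QSAR models like the ones this package ships start from numeric descriptors computed over the peptide sequence. A hedged sketch of the simplest such descriptor, amino-acid composition (the fraction of each of the 20 standard residues); this is a generic illustration, not the package's actual descriptor set:

```python
from collections import Counter

# The 20 standard amino acids, one-letter codes.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """Fraction of each standard residue in `seq`: the simplest kind of
    numeric descriptor a peptide QSAR model can consume."""
    seq = seq.upper()
    counts = Counter(seq)
    n = len(seq)
    return {aa: counts.get(aa, 0) / n for aa in AMINO_ACIDS}

# One of the README's example peptides.
desc = aa_composition("HIHIRHMWLLR")
```

The fractions sum to 1 for sequences made only of standard residues; real descriptor sets (e.g. in propy) add many more features such as dipeptide composition and physicochemical indices.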
antigate
Real-time captcha-to-text decodings===================================.. image:: https://api.travis-ci.org/gotlium/antigate.png?branch=master:alt: Build Status:target: https://travis-ci.org/gotlium/antigate.. image:: https://coveralls.io/repos/gotlium/antigate/badge.png?branch=master:target: https://coveralls.io/r/gotlium/antigate?branch=master.. image:: https://img.shields.io/badge/python-2.6,2.7,3.3,3.4,3.5-blue.svg:alt: Python 2.6, 2.7, 3.3, 3.4, 3.5:target: https://pypi.python.org/pypi/antigate/.. image:: https://img.shields.io/pypi/v/antigate.svg:alt: Current version on PyPi:target:https://pypi.python.org/pypi/antigate/.. image:: https://img.shields.io/pypi/dm/antigate.svg:alt: Downloads from PyPi:target:https://pypi.python.org/pypi/antigate/.. image:: https://img.shields.io/badge/license-GPLv2-green.svg:target: https://pypi.python.org/pypi/antigate/:alt: LicenseDocumentation available `here <https://pythonhosted.org/antigate/>`_.Installation------------From source:.. code-block:: bash$ git clone https://github.com/gotlium/antigate.git$ cd antigate && python setup.py installFrom PyPi:.. code-block:: bash$ pip install antigate**Requirements:**You can use grab/requests/urllib as http backends.`Grab` installation:.. code-block:: bashpip install grab pycurl`Requests` installation:.. code-block:: bashpip install requests`UrlLib` used by default.Usage-----.. code-block:: python>>> from antigate import AntiGate # AntiCaptcha# per line example>>> print AntiGate('API-KEY', 'captcha.jpg') # AntiCaptcha('API-KEY', filename or base64 or bytes)# or like this>>> gate = AntiGate('API-KEY') # AntiCaptcha('API-KEY')>>> captcha_id = gate.send('captcha.jpg')>>> print gate.get(captcha_id)If you wish to complain about a mismatch results, use ``abuse`` method:.. code-block:: python>>> from antigate import AntiGate>>> gate = AntiGate('API-KEY', 'captcha.jpg')>>> if str(gate) != 'qwerty':>>> gate.abuse()After all manipulations, you can get your account balance:.. 
code-block:: python>>> print gate.balance()Or get your statistics data:.. code-block:: python>>> print gate.stats()System load info:.. code-block:: python>>> print gate.load()Customizing requests to API---------------------------Customize grab-lib preferences:.. code-block:: python>>> from antigate import AntiGate>>> config = {'connect_timeout': 5, 'timeout': 60}>>> gate = AntiGate('API-KEY', 'captcha.jpg', grab_config=config)>>> print gateAdditional options for sending Captcha:.. code-block:: python>>> from antigate import AntiGate>>> config = {'min_len': '3', 'max_len': '5', 'phrase': '2'}>>> gate = AntiGate('API-KEY', 'captcha.jpg', send_config=config)>>> print gateUse all methods manually:.. code-block:: python>>> from antigate import AntiGate>>> gate = AntiGate('API-KEY')>>> captcha_id1 = gate.send('captcha1.jpg')>>> captcha_id2 = gate.send('captcha2.jpg')>>> print gate.get(captcha_id1)>>> print gate.get(captcha_id2)Get results for multiple ids:.. code-block:: python>>> gate = AntiGate('API-KEY')>>> captcha_id1 = gate.send('captcha1.jpg')>>> captcha_id2 = gate.send('captcha2.jpg')>>> print gate.get_multi([captcha_id1, captcha_id2])If you want use bytes or base64:.. code-block:: python# Per line binary example>>> print AntiGate('API-KEY', fp.read())# Per line base64 example>>> print AntiGate('API-KEY', b64encode(fp.read()))# Custom requests>>> gate = AntiGate('API-KEY')# base64>>> captcha_id = gate.send(b64encode(fp.read()))# or stream>>> captcha_id = gate.send(fp.read())>>> print gate.get(captcha_id)Api documentation-----------------https://anti-captcha.com/apidoc / http://antigate.com/?action=api#algoCompatibility-------------* Python: 2.6, 2.7, 3.3, 3.4, 3.5.. image:: https://d2weczhvl823v0.cloudfront.net/gotlium/antigate/trend.png:alt: Bitdeli badge:target: https://bitdeli.com/free
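The `send()`/`get()` workflow antigate exposes is the classic submit-then-poll API pattern. A hedged sketch with a fake in-memory backend (no network, not antigate's real HTTP calls; `FakeBackend` and `solve` are illustrative names):

```python
# Sketch of the submit-then-poll pattern behind AntiGate's send()/get().
# FakeBackend simulates a service that needs a few polls before the
# captcha text is ready.
class FakeBackend:
    def __init__(self, answer, ready_after=3):
        self._answer = answer
        self._polls_left = ready_after

    def submit(self, image_bytes):
        return "captcha-1"  # job id for later polling

    def poll(self, job_id):
        self._polls_left -= 1
        return self._answer if self._polls_left <= 0 else None

def solve(backend, image_bytes, max_polls=10):
    job_id = backend.submit(image_bytes)
    for _ in range(max_polls):
        result = backend.poll(job_id)  # real clients sleep between polls
        if result is not None:
            return result
    raise TimeoutError("captcha not solved in time")

text = solve(FakeBackend("qwerty"), b"...image bytes...")
```

Capping the number of polls (or elapsed time) matters: without it, a lost job would spin forever, which is why real clients also expose timeout configuration.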
antigen
AntigenExtendable mutation testing frameworkWhat is mutation testing?Mutation testing provides what coverage tries to, it finds logic that is not covered by your test suite.It finds such places by applying mutations to your code and running the modified code against your test suite. If the tests succeed with the mutated code, it means the changed expression is likely not covered ny the tests.In comparison to coverage::green_heart: Checks expressions, not lines.:green_heart: Checks whether the expression is covered, not whether it was executed.:x: Can find irrelevant mutants (e.g. mutations in logging or performance optimizations or a mutation that does not break the code):x: Executes the tests many times and thereforetakes much more time.There are mitigations for these downsides::star: We can mutate only lines that have changed in a given PR:star: We can show the failing mutants via comments/warnings, as opposed to failing the whole CI pipeline.A much more in-depth explanation about the concept can be found inThis blog post by Goran PetrovicWhy use Antigen?ExtendabilityIn my personal experience, trying to integrate mutation testing into your CI pipeline can be a bit challenging. There are a lot of features you might want to customize to mitigate some of the downsides of mutation testing, or to be able to integrate it to your project and dev environment effectively.For example, ignoring mutations on logging logic (which depends on your logging framework and conventions), or showing the results on various platforms (e.g. github, bitbucket, gitlab).Antigen puts extendability as a top priority so that adding mutation testing to your project is feasible.How?Mechine friendlyThe core of antigen is a pure python package that can be used by scripts.The actual CLI uses the core package instead of the logic being coupled to it.PluginableAntigen is written as a pipeline, each stage has an interface (e.g. 
Mutator, MutantFilter, Runner).Extending the logic is as simple as creating an object or function that matches that (simple) interface.The Antigen CLI utilisesentry pointsso that antigen plugins can be added just by installing them with pip.UsageAntigen is currently in development, the API might change between versions.antigen=Antigen(filters=[PatchFilter.from_git_diff("develop")],config=Config(project_root=Path("/home/myuser/myproject/")),)forpathinantigen.find_files("."):formutationinantigen.gen_mutations(path):result=antigen.test_mutation(mutation,run_command="pytest",)print(result)RoadmapAdd CLIUseparsoinstead of the built-inastfor cross-version mutations.Add wrapper class for remote components (i.e.RemoteFilter(hostname),RemoteRunner(hostname)).Add Output component (withJunitXMLOutput,GithubOutput,BitbucketOutputbuiltins)Add Cache component (withFileCache,MongoCachebuiltins)Add Sorter component (for selecting the most likely to succeed mutations)
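The heart of the pipeline, mutate an expression, recompile, and see whether a test still passes, can be sketched with the standard library's `ast` module. This mirrors the idea behind Antigen's Mutator/Runner interfaces, not their actual API:

```python
import ast

# Sketch of the core of mutation testing: flip `+` into `-` in a
# function's AST, recompile, and check whether a test "kills" the mutant.
class AddToSub(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()  # the mutation
        return node

SOURCE = "def total(a, b):\n    return a + b\n"

def build_mutant(source):
    tree = AddToSub().visit(ast.parse(source))
    ast.fix_missing_locations(tree)  # new nodes need line/col info
    ns = {}
    exec(compile(tree, "<mutant>", "exec"), ns)
    return ns["total"]

mutant = build_mutant(SOURCE)
# A good test suite kills the mutant: total(2, 3) must equal 5, not -1.
survived = mutant(2, 3) == 5
```

Here `survived` is `False`, meaning the (one-assertion) "suite" caught the mutation; a surviving mutant would point at an expression the tests never really check.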
antigranular
# Privacy Unleashed: Working with Antigranular

Antigranular is a community-led, open-source platform that combines confidential computing with differential privacy. This integration fosters a secure environment to handle and fully utilize unseen data.

## Connect to Antigranular

You can activate Antigranular using the magic command `%%ag`. Any code that follows `%%ag` will run on our remote server. This server operates under restricted conditions, allowing only methods that guarantee differential privacy.

Install the Antigranular package using pip:

```
!pip install antigranular
```

Import the Antigranular library:

```python
import antigranular as ag
```

To connect to the AG Enclave Server, use your client credentials and either a dataset or competition ID:

```python
ag_client = ag.login(user_id="<user_id>", user_secret="<user_secret>", competition="<competition_name>")
```

or

```python
ag_client = ag.login(user_id="<user_id>", user_secret="<user_secret>", dataset="<dataset_name>")
```

A successful login will register the cell magic `%%ag`.

## Loading private datasets

Private datasets can be loaded as `PrivateDataFrame`s and `PrivateSeries` using the `ag_utils` library. `ag_utils` is a package installed locally on the remote server, which eliminates the need to install anything other than the antigranular package.

The `load_dataset()` method allows for obtaining a dictionary of private objects.
The structure of the response dictionary, along with the dataset path and private object names, will be specified during the competition.%%agfromop_pandasimportPrivateDataFrame,PrivateSeriesfromag_utilsimportload_dataset"""Sample response structure{train_x : priv_train_x,train_y : priv_train_y,test_x : priv_test_x}"""# Obtaining the dictionary containing private objectsresponse=load_dataset("<path_to_dataset>")# Unpacking the response dictionarytrain_x=response["train_x"]train_y=response["train_y"]test_x=response["test_x"]Exporting ObjectsSince the code following%%agruns in a highly restricted environment, it's necessary to export differentially private objects to the local environment for further analysis. Theexportmethod inag_utilsallows data objects to be exported.API info:export(obj, variable_name:str)This command exports the remote object to the local environment and assigns it to the specified variable name. Note thatPrivateSeriesandPrivateDataFrameobjects cannot be exported and will raise an error if you attempt to do so.%%agfromag_utilsimportexporttrain_info=train_x.describe(eps=1)export(train_info,'variable_name')Once exported, you can perform any kind of data analysis on the differentially private object.# Local code blockprint(variable_name)--------------------------------------AgeSalarycount99987.00000099987.000000mean38.435953120009.334336std12.16737946255.486093min18.25744840048.25903725%27.18518980057.63996050%38.210860120380.29121675%49.147724159835.637091max59.282932199920.664706Libraries Supportedpandas: An adaptable data manipulation library offering efficient data structures and tools for data analysis and manipulation.op_pandas: A wrapped library specifically designed for differentially private data manipulation within the Pandas framework. 
It enhances privacy-preserving techniques and enables privacy-aware data processing.op_diffprivlib: A differentially private library that provides various privacy-preserving algorithms and mechanisms for machine learning and data analysis tasks.op_smartnoise: A library focused on privacy-preserving analysis using the SmartNoise framework. It provides tools for differential privacy and secure computation.op_opendp: A library that offers differentially private data analysis and algorithms based on the OpenDP project. It provides privacy-preserving methods and tools for statistical analysis.
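Calls like `describe(eps=1)` above spend an epsilon privacy budget; the textbook way such numeric queries are privatized is the Laplace mechanism. A hedged, self-contained sketch of that mechanism (this is the general technique, not Antigranular's server-side implementation):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale) by inverse-transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, epsilon, lo, hi, rng=random):
    """eps-differentially-private mean of values known to lie in [lo, hi]:
    clamp, average, then add Laplace noise scaled to the query's
    sensitivity. Textbook sketch, not Antigranular's implementation."""
    clamped = [min(max(v, lo), hi) for v in values]
    true_mean = sum(clamped) / len(clamped)
    # Changing one record moves the mean by at most (hi - lo) / n.
    sensitivity = (hi - lo) / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon, rng)

noisy = private_mean([38.0, 41.5, 36.2, 40.1], epsilon=1.0, lo=18.0, hi=60.0)
```

Smaller epsilon means more noise and stronger privacy, which is why the noisy `describe(eps=1)` output above only approximates the true statistics.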
antigravity
UNKNOWN
anti-header
# anti-header

info: fake Chrome, Firefox, Opera browser headers

## Features

- more header params
- more request methods

## Installation

```
pip install anti-header
```

## Usage

```python
import anti_header
from anti_header import Header
from pprint import pprint

hd = Header(platform='windows', min_version=90, max_version=100).base
hd = Header(platform='windows', min_version=90, max_version=100).random
print(anti_header.VERSION)

# must_header param usage
hd = Header(must_header={'aa': 'bb'}).random
hd = Header(must_header={'aa': 'bb'}).base

# rand_header param usage
hd = Header(rand_header={'cc': 'dd'}).random
hd = Header(rand_header={'cc': 'dd'}).base

# default_header param usage
for i in range(10):
    hd = Header(default_header={'ee': 'ff'}).base
    pprint(hd.to_unicode_dict())
```

```
base example
{'cjito': 'azhbmf',
 'ee': 'ff',
 'referer': 'https://www.google.com/',
 'user-agent': 'Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.7.3455.76 Safari/537.36'}

random example
{'accept-encoding': 'gzip, deflate',
 'accept-type': 'utf-8',
 'ee': 'ff',
 'origin': 'https://www.google.com/',
 'referer': 'https://www.google.com/',
 'sec-ch-ua-mobile': '?0',
 'sec-fetch-mode': 'navigate',
 'te': 'Trailers',
 'upgrade-insecure-requests': '1',
 'user-agent': 'Mozilla/5.0 (SM-G3609 Build/KTU84P; WIFI) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.5.6492.87 Safari/537.36',
 'x-forwarded-for': '1',
 'xorsv': 'pvmcue'}
```

If you want to use it with requests, just:

```python
# test.py
import requests
from anti_header import Header

_url = 'https://www.google.com/'
hd = Header(url=_url, platform='windows')
requests.get(_url, headers=hd.random.to_unicode_dict())
```

If you want to use it in a Scrapy downloader middleware, just:

```python
# random_header.py
from anti_header import Header

class RandomHeaderMiddleware(object):
    def __init__(self):
        pass

    def process_request(self, request, spider):
        request.headers = Header(url=request.url).random

    def process_response(self, request, response, spider):
        return response
```

If you want to specify a param, just:

```python
from anti_header import Header

hd = Header(logger=True)  # installs loguru by default
try:
    from loguru import logger
except ImportError:
    install("loguru")
    from loguru import logger

# close default singleton
hd = Header(dry=True)
```

Make sure that you are using the latest version:

```
pip install -U anti-header
```

Check the version via the Python console:

```python
import anti_header
print(anti_header.VERSION)
```
antiheroes
ANTIHEROES

Do you like antiheroes? You can use this python package ANTIHEROES to list and get fascinating antiheroes from movies, TV shows, video games, and comics.

Installation
# Official Version
python -m pip install antiheroes

Getting Started
Directly after the installation you can use this package. You will need Python Release 3.7.0 or newer.

You can use 3 methods:
get_hero(): get a random antihero
list_heroes(): list all antiheroes
popular_heroes(): list the top x antiheroes

Open the python console and use the package:

>>> from antiheroes import get_hero
>>> get_hero()
THE HULK enters in your room!!!
>>> get_hero()
WOLVERINE enters in your room!!!
>>> get_hero()
ELEKTRA NATCHIOS enters in your room!!!
>>> get_hero()
DUKE NUKEM enters in your room!!!
>>> get_hero()
THE HULK enters in your room!!!
>>> from antiheroes import list_heroes
>>> list_heroes()
ALL ANTIHEROES:
[{'name': 'WOLVERINE', 'world': 'MARVEL COMICS'},
 {'name': 'DEADPOOL', 'world': 'MARVEL COMICS'},
 {'name': 'ELEKTRA NATCHIOS', 'world': 'MARVEL COMICS'},
 {'name': 'JOHN WICK', 'world': 'JOHN WICK'},
 {'name': 'BLADE', 'world': 'MARVEL COMICS'},
>>> from antiheroes import popular_heroes
>>> popular_heroes(5)
MY TOP 5 ANTIHEROES:
[{'name': 'WOLVERINE', 'world': 'MARVEL COMICS'},
 {'name': 'DEADPOOL', 'world': 'MARVEL COMICS'},
 {'name': 'ELEKTRA NATCHIOS', 'world': 'MARVEL COMICS'},
 {'name': 'JOHN WICK', 'world': 'JOHN WICK'},
 {'name': 'BLADE', 'world': 'MARVEL COMICS'}]
antikythera
IMSI Catcher detection, analysis and display.

Development Environment Setup

Windows
Wireshark must be installed for the pyshark library to have access to the packet dissectors it needs. See the Wireshark Documentation for details.

Linux
Set up a virtual environment to ensure system packages are not used:

mkdir -p ~/.virtualenv/antikythera
python3 -m venv ~/.virtualenv/antikythera
source ~/.virtualenv/antikythera/bin/activate

Note: The command source ~/.virtualenv/antikythera/bin/activate must be rerun for each new shell instance. When activated, the name of the virtual environment should appear somewhere on the prompt, such as:

(antikythera) user@hostname:~$

Then for Debian or Ubuntu based distributions just run the setup script sudo bash setup.sh. The documentation can be built locally by running python setup.py docs, and to run the tests:

pip install -r test-requirements.txt
python setup.py test

The program can be installed and run as follows:

python setup.py install
anti
antiless
Antiless.
> available to anyone who wants to use it <

Installation
! ~ use at the knowledge of knowing it may be buggy ~ !
pip install antiless

Usage

# default shit if u want to only import for ease - auto switches #
from antiless import printf as print, inputf as input, init

init(debug=False)
print("[DEBUG] Should NOT Show")
print("(~) Should NOT Show")
print("[INFO] Should Show")
print("(*) Should Show")

# ================================== #

# log format: [21:14:38] INF > test
from antiless import log

log.info("info", sep=">")
log.error("error")
log.fatal("fatal")
log.success("success")
log.debug("debug")
log.log("Retrieved", code="3131")
log.vert("test", test=True, madeby="antilag")

import time
from antiless import BetaConsole

c = BetaConsole(speed=2)
while True:
    try:
        timestamp = c.getTimestamp()
        c.alphaPrint("[INF]", f"[{timestamp}] antilag? :Đ", increment=False)
        time.sleep(0.001)
    except KeyboardInterrupt:
        exit(0)

*antilag@discord | antilagg@github | antilag.dev*
antilles-tools
No description available on PyPI.
antimait
antimait

antimait is a library made of tools to ease the implementation of IoT automation systems based on devices such as Arduino and ESP.

To install it run:

pip3 install antimait

or clone this repository, change directory to the root of the project and run:

pip3 install .

Documentation
Documentation for the antimait library can be found in the docs directory or at the following link: https://antimait.github.io/antimait
antimarkdown
Convert HTML to Markdown, quickly and easily:>>> import antimarkdown >>> print(antimarkdown.to_markdown(""" ... <h1>antimarkdown</h1> ... ... <p>Convert HTML to Markdown, quickly and easily!</p> ... """)) antimarkdown ============ Convert HTML to Markdown, quickly and easily!
antimatter
No description available on PyPI.
antimatter-engine
Python Rust Session

This module provides a bridge between Python and Rust, enabling Python code to interact with Rust-implemented functionality related to session and capsule management. It is part of the antimatter project and leverages PyO3 for seamless Python-Rust interoperability.

Deps

Python:
On Mac: maturin
  python3 -m pip install maturin
On Linux: maturin[patchelf]
  python3 -m pip install maturin[patchelf]
Rust:
Cargo will handle these for you.

To build, install and run the example:
make

To just build and install:
make build
make install

To remove:
make uninstall

Limitations of this version
It does build and run on Linux, but you will need to change the name of the .whl file that the Makefile looks at for the install target.

Example use
An example of how to encapsulate and read data from Python is given in the example folder. A short example:

To import
To use this wrapper, you will need to import the built library.

import antimatter_engine

Create a new session
Before any operations can be performed, one needs to create a session in which to perform capsule related requests. This is currently done by providing the Domain ID to the PySession constructor.

antimatter_engine.create_session(str) -> PySession

Upon success, this returns a valid PySession that can be used to create, read and write capsules.

Encapsulate data
To encapsulate data, one must call the encapsulate method on a valid session, providing the following:

col_defns - A list of PyColumnDefinition containing the name and tags for each column.
data_elems - A 2D list of PyDataElement containing the data to be encapsulated, as well as each element's span tags.
write_context_name - The name of the write context used to perform this encapsulate operation.
path - The path to write the capsule to upon successful encapsulation.
capsule_tags - A list of PyTag used as the capsule's tags.
extra - A string used to store any extra information about the capsule. This is freeform and can be used to store anything that can be represented as a string.

valid_session.encapsulate(col_defns=List[PyColumnDefinition], data_elems=List[List[PyDataElement]], write_context_name=str, path=str, capsule_tags=List[PyTag], extra=str) -> None

Upon success, this returns no error.

Open a capsule
To open a capsule for reading, one first opens it using a valid session. This is done by calling the open_capsule method on a valid session with the following:

path - The resource path to the capsule's blob.
read_context_name - The name of the read context used to read data from the capsule.

valid_session.open_capsule(path=str, read_context_name=str) -> PyCapsuleSession

Upon success, this returns a PyCapsuleSession which represents a valid opened capsule that can be read from.

Read from an opened capsule
To read all redacted data from a capsule one can call the read_all method on a valid capsule session, providing the following:

read_parameters - A dictionary with all required read parameters for reading the capsule (this can be empty if none are required).

valid_py_capsule_session.read_all(read_parameters=Dict) -> (List[str], List[List[PyTag]], List[List[str]], str)

Upon success, this returns the following, in order:

col_names - A list of string with column information, normally the name of the column.
col_tags - A list of List[PyTag] containing tags for each column.
redacted_data - A 2D list of string containing the redacted data read from the capsule.
extra - A string used to store any extra information about the capsule. This is freeform and can be used to store anything that can be represented as a string.
antimeridian
antimeridian

Fix shapes that cross the antimeridian. See the documentation for information about the underlying algorithm. Depends on shapely and numpy.

Can fix:
- Shapely Polygon, MultiPolygon, LineString, and MultiLineString objects
- GeoJSON Polygons, MultiPolygons, Features and FeatureCollections, as dictionaries
- Anything that has a __geo_interface__

Usage

pip install antimeridian

Then:

import antimeridian

fixed = antimeridian.fix_geojson(geojson)

We also have some utilities to create bounding boxes and centroids from antimeridian-crossing polygons and multipolygons. See the documentation for a complete API reference.

Command line interface

Use the cli optional dependency to install the antimeridian CLI:

pip install 'antimeridian[cli]'
antimeridian fix input.json > output.json

Developing

Clone and install in editable mode with the development optional dependencies:

git clone https://github.com/gadomski/antimeridian
cd antimeridian
pip install -e '.[dev,docs]'

We use pytest for tests:

pytest

We use Sphinx for docs:

make -C docs html

Contributing

Github issues and pull requests, please and thank you!

License
Apache-2.0
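The core problem can be illustrated without the library: a segment whose endpoint longitudes differ by more than 180° almost certainly wraps the short way around the antimeridian rather than stretching across the map. Below is a toy, stdlib-only sketch of that detect-and-split idea — the function name and approach are illustrative, not the package's actual algorithm (see its documentation for that):

```python
def split_at_antimeridian(coords):
    """Split a list of (lon, lat) points into segments that do not
    cross the antimeridian. A jump of more than 180 degrees between
    consecutive longitudes is treated as a crossing."""
    segments = [[coords[0]]]
    for prev, cur in zip(coords, coords[1:]):
        if abs(cur[0] - prev[0]) > 180:  # wraps around +/-180
            segments.append([])          # start a new segment
        segments[-1].append(cur)
    return segments

# A line from Alaska (-170) across the antimeridian to Siberia (170)
line = [(-170.0, 60.0), (-175.0, 61.0), (175.0, 62.0), (170.0, 63.0)]
print(split_at_antimeridian(line))  # two segments, one on each side of +/-180
```

The real package does considerably more (polygon closure, bounding boxes, GeoJSON handling), but this is the longitude-jump heuristic at the heart of antimeridian cutting.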
antimetal
No description available on PyPI.
antimicrobial
Common calibration methods for multivariate calibration

This is a Python library for designing antimicrobial agents. For more information, refer to www.chemoinfolab.com/antifungal.

Installation
Use the package manager pip to install the toolkit from requirements.txt.

Ref_1. Zhang J., Yang L. B., Tian Z. Q., Zhao W. J., Sun C. Q., Zhu L. J., Huang M. J., Guo G., Liang G. Y. Large-Scale Screening of Antifungal Peptides Based on Quantitative Structure–Activity Relationship [J]. ACS Med. Chem. Lett., 2022, 13(1): 99-104.

Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

License
MIT
antimony
Thank you for using Antimony, a human-readable, human-writable modular model definition language.

The documentation for this project is in the doc/ subdirectory, and was created with Doxygen. Described there is information about the Antimony model language and the libAntimony library, including compilation instructions.

To enable full functionality in libAntimony, you first need to install libSBML, available from http://sbml.org/Software/libSBML/. Both libSBML and libAntimony now use CMake as their build systems. For full functionality, be sure to enable the ‘comp’ package in libsbml, as this enables full hierarchical modeling support. As of antimony v2.4, all binary releases of the library and executables use the ‘comp’ package by default.

For CellML:
The CellML API v1.11 SDK was used in this version of Antimony (and QTAntimony) and seems to work OK (despite producing 2.33 billion warnings during compile). As of this writing, the URL to download the SDK was http://www.cellml.org/tools/downloads/cellml_api/releases/1.11

All that is needed in CMake is to set CELLML_API_INSTALL_DIR to this SDK, and the other CELLML_* variables will be set automatically.

If downloading the SDK does not work (or if it is unavailable for your operating system) it too now uses CMake as its build system, and we have had reasonable success using this on linux-based systems.

Antimony was supported by NIH/NIGMS Grant # GM081070.

What’s new in v2.7.0
- Fixed a bug with QTAntimony and linux systems with German or other languages that use ‘.’ and ‘,’ in numbers opposite to how they are used in American math.
- Added find/replace functionality to QTAntimony.
- Added ‘go to line’ functionality to QTAntimony.
- Added the ability to define submodels with implied parameters: A: foo(1,2) is now the same as declaring A: foo(x,y) x=1 y=2
- Added the ability to concisely define elements and their assignment rules: const species S1 in C := 1+x

What’s new in v2.6.1
- Fixed a bug that prevented the new distributions from being properly
exported to SBML when used in event constructs.

What’s new in v2.6.0
- Built-in distribution functions that round-trip to SBML and SBML-distrib:
  normal(mean, stddev), truncatedNormal(mean, stddev, min, max)
  uniform(min, max)
  exponential(rate), truncatedExponential(rate, min, max)
  gamma(shape, scale), truncatedGamma(shape, scale, min, max)
  poisson(rate), truncatedPoisson(rate, min, max)
- New function ‘rateOf’/‘rate’ that uses the custom annotation of Copasi/roadrunner to define a function that returns the rate of change of a variable.
- Default libsbml bindings are now static.
- Better error handling for miscoded models.
- Better python installer.
- Fix to add ‘arcsinh’ and ‘sec’ back in as built-in functions.
- Allow python-style ‘and’, ‘or’ and ‘not’ functions instead of ‘&&’, ‘||’ and ‘!’.
- Allow shorthand use of function definitions that uses global variables if present.
- Add ability to define and import model-wide units for SBML.
- No longer allow export of invalid SBML, for any reason.
- Various bug fixes for more robust round-tripping of models involving deletions and other hierarchical features.

What’s new in v2.5.1
- Fix installers, particularly for python.

What’s new in v2.5:
- Better support for Python bindings.
- Various fixes for QTAntimony, including better support for turning on/off CellML visualization.
- Better suite of unit tests, along with fixes for the problems they uncovered.
- More compact and consistent spacing in produced math infix.

What’s new in v2.4:
Because the SBML ‘Hierarchical Model Composition’ package has been accepted and is now officially part of SBML (yay!), support for it is now on by default, and has moved out of beta. Models created using this package should be fully compatible with other programs that also support SBML hierarchy.
Support for ‘flattened’ models also remains, so core SBML L3v1 models as well as SBML L2 and L1 models can still be exported and imported. The Tutorial has been updated to include information on all the most recent features.

What’s new in v2.3-beta:
- The ability to import and export SBML models with all the new 2.2 capabilities to ‘hierarchical model composition’ package constructs.

What’s new in v2.2:
- The ability to define units and use them in mathematical equations.
- The ability to define conversion factors when synchronizing elements between submodels.
- The ability to define time and extent conversion factors when importing submodels.
- The ability to delete submodel elements.

The syntax for each is:

Deletions:
delete A1.x;
which will delete the variable ‘x’ from submodel A1, and will clear any equations or reactions that had ‘x’ in them: if a reaction rate was ‘p1*x’, that reaction rate will be cleared entirely (and may be reset using normal Antimony constructs).

Time and extent conversion factors in submodels:
A1: submodel(), timeconv=60, extentconv=100;
or
A1: submodel(), timeconv=tc, extentconv=xc;
(where ‘tc’ and ‘xc’ are parameters that may be defined elsewhere) may be used.

Conversion factors for synchronized variables:
A1.y * cf is x;
or
A1.y is x / cf;

What’s new in v2.0:
- The ability to translate to and from CellML.
- The ability to define events more specifically (following their modification for SBML Level 3).
- The ability to define irreversible reactions with ‘=>’ instead of ‘->’.

What’s new in v2.1-beta:
- The ability to import and export SBML models with ‘hierarchical model composition’ package constructs.

By default, libAntimony will compile as version 2.0. To compile v2.1-beta from source, you will need the working SVN version of libSBML:
https://sbml.svn.sourceforge.net/svnroot/sbml/trunk/libsbml
plus the working SVN version of the ‘comp’ package, from:
https://sbml.svn.sourceforge.net/svnroot/sbml/branches/libsbml-packages/comp/
both of which use CMake.
Check out both, then run CMake on the comp package first, telling it where the libsbml source is. Then make the ‘integrate’ target, which will copy the source code from the comp package into the libsbml source. Then run CMake on the libsbml source, making sure to turn on ‘ENABLE_COMP’, and build the library.

Finally, run CMake on this Antimony distribution, pointing it at your newly-built libsbml-with-comp library, and checking ‘WITH_COMP_SBML’. Assuming everything compiles for you, the library that is then built should call itself ‘v2.1-beta’, and will be able to read and write comp-enabled SBML models.
antinex-client
AntiNex Python Client

Python API Client for training deep neural networks with the REST API running at https://github.com/jay-johnson/train-ai-with-django-swagger-jwt

Install
pip install antinex-client

AntiNex Stack Status
AntiNex client is part of the AntiNex stack: REST API, Core Worker, Network Pipeline, AI Utils, and Client (each with its own build and docs).

Run Predictions
These examples use the default user root with password 123321. It is advised to change this to your own user in the future.

Train a Deep Neural Network with a JSON List of Records
ai -u root -p 123321 -f examples/predict-rows-scaler-django-simple.json

Train a Deep Neural Network to Predict Attacks with the AntiNex Datasets
Please make sure the datasets are available to the REST API, Celery worker, and AntiNex Core worker. The datasets are already included in the docker container ai-core provided in the default compose.yml file: https://github.com/jay-johnson/train-ai-with-django-swagger-jwt/blob/51f731860daf134ea2bd3b68468927c614c83ee5/compose.yml#L53-L104

If you're running outside docker make sure to clone the repo with:
git clone https://github.com/jay-johnson/antinex-datasets.git /opt/antinex/antinex-datasets

Train the Django Defensive Deep Neural Network
Please wait as this will take a few minutes to return and convert the predictions to a pandas DataFrame.

ai -u root -p 123321 -f examples/scaler-full-django-antinex-simple.json
...
[30200 rows x 72 columns]

Using Pre-trained Neural Networks to make Predictions
The AntiNex Core manages pre-trained deep neural networks in memory. These can be used with the REST API by adding "publish_to_core": true to a request while running with the REST API compose.yml docker containers.

Run:
ai -u root -p 123321 -f examples/publish-to-core-scaler-full-django.json

Here is the diff between a request that will run using a pre-trained model and one that will train a new neural network:

antinex-client$ diff examples/publish-to-core-scaler-full-django.json examples/scaler-full-django-antinex-simple.json
5d4
< "publish_to_core": true,
antinex-client$

Prepare a Dataset
ai_prepare_dataset.py -u root -p 123321 -f examples/prepare-new-dataset.json

Get Job Record for a Deep Neural Network
Get a user's MLJob record by setting: -i <MLJob.id>
This includes the model json or model description for the Keras DNN.
ai_get_job.py -u root -p 123321 -i 4

Get Prediction Results for a Deep Neural Network
Get a user's MLJobResult record by setting: -i <MLJobResult.id>
This includes predictions from the training or prediction job.
ai_get_results.py -u root -p 123321 -i 4

Get a Prepared Dataset
Get a user's MLPrepare record by setting: -i <MLPrepare.id>
ai_get_prepared_dataset.py -u root -p 123321 -i 15

Using a Client Built from Environment Variables
This is how the Network Pipeline streams data to the AntiNex Core to make predictions with pre-trained models.

Export the example environment file:
source examples/example-prediction.env

Run the client prediction stream script:
ai_env_predict.py -f examples/predict-rows-scaler-full-django.json

Development
virtualenv -p python3 ~/.venvs/antinexclient && source ~/.venvs/antinexclient/bin/activate && pip install -e .

Testing
Run all: python setup.py test

Linting
flake8 .
pycodestyle .

License
Apache 2.0 - Please refer to the LICENSE for more details
antinex-core
Automating network exploit detection using highly accurate pre-trained deep neural networks.

As of 2018-03-12, the core can repeatedly predict attacks on Django, Flask, React + Redux, Vue, and Spring application servers by training using the pre-recorded AntiNex datasets with cross validation scores above ~99.8% with automated scaler normalization.

Accuracy + Training + Cross Validation in a Jupyter Notebook
https://github.com/jay-johnson/antinex-core/blob/master/docker/notebooks/AntiNex-Protecting-Django.ipynb

Using a Pre-Trained Deep Neural Network in a Jupyter Notebook
https://github.com/jay-johnson/antinex-core/blob/master/docker/notebooks/AntiNex-Using-Pre-Trained-Deep-Neural-Networks-For-Defense.ipynb

Overview
The core is a Celery worker pool for processing training and prediction requests for deep neural networks to detect network exploits (Nex) using Keras and Tensorflow in near real-time. Internally each worker manages a buffer of pre-trained models identified by the label from the initial training request. Once trained, a model can be used for rapid prediction testing provided the same label name is used on the prediction request. Models can also be re-trained by using the training api with the same label.
While the initial focus is on network exploits, the repository also includes mock stock data for demonstrating running a worker pool to quickly predict regression data (like stock prices) with many, pre-trained deep neural networks.

This repository is a standalone training and prediction worker pool that is decoupled from the AntiNex REST API: https://github.com/jay-johnson/train-ai-with-django-swagger-jwt

AntiNex Stack Status
AntiNex Core Worker is part of the AntiNex stack: REST API, Core Worker, Network Pipeline, AI Utils, and Client (each with its own build and docs).

Install
pip install antinex-core

Optional for Generating Images
If you want to generate images please install python3-tk on Ubuntu.
sudo apt-get install python3-tk

Docker
Start the container for browsing with Jupyter:

# if you do not have docker compose installed, you can try installing it with:
# pip install docker-compose
cd docker
./start-stack.sh

Open Jupyter Notebook with Django Deep Neural Network Analysis
Default password is: admin
http://localhost:8888/notebooks/AntiNex-Protecting-Django.ipynb

View Notebook Presentation Slides
Use Alt + r inside the notebook.
Use the non-vertical scrolling url: http://localhost:8889/Slides-AntiNex-Protecting-Django.slides.html
Use the non-vertical scrolling url: http://localhost:8890/Slides-AntiNex-Using-Pre-Trained-Deep-Neural-Networks-For-Defense.slides.html

Run
Please make sure redis is running and accessible before starting the core:

redis-cli
127.0.0.1:6379>

With redis running and the antinex-core pip installed in the python 3 runtime, use this command to start the core:

./run-antinex-core.sh

Or with celery:

celery worker -A antinex_core.antinex_worker -l DEBUG

Publish a Predict Request
To train and predict with the new automated scaler-normalized dataset with a 99.8% prediction accuracy for detecting attacks using a wide, two-layer deep neural network with the AntiNex datasets, run the following steps.

Clone
Please make sure to clone the dataset repo to the pre-configured location:

mkdir -p -m 777 /opt/antinex
git clone https://github.com/jay-johnson/antinex-datasets.git /opt/antinex/antinex-datasets

Django - Train and Predict
./antinex_core/scripts/publish_predict_request.py -f training/scaler-full-django-antinex-simple.json

Flask - Train and Predict
./antinex_core/scripts/publish_predict_request.py -f training/scaler-full-flask-antinex-simple.json

React and Redux - Train and Predict
./antinex_core/scripts/publish_predict_request.py -f training/scaler-full-react-redux-antinex-simple.json

Vue - Train and Predict
./antinex_core/scripts/publish_predict_request.py -f training/scaler-full-vue-antinex-simple.json

Spring - Train and Predict
./antinex_core/scripts/publish_predict_request.py -f training/scaler-full-spring-antinex-simple.json

Accuracy and Prediction Report
After a few minutes the final report will be printed out like:

2018-03-11 23:35:00,944 - antinex-prc - INFO - sample=30178 - label_value=1.0 predicted=1 label=attack
2018-03-11 23:35:00,944 - antinex-prc - INFO - sample=30179 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,944 - antinex-prc - INFO - sample=30180 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,944 - antinex-prc - INFO - sample=30181 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,944 - antinex-prc - INFO - sample=30182 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,945 - antinex-prc - INFO - sample=30183 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,945 - antinex-prc - INFO - sample=30184 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,945 - antinex-prc - INFO - sample=30185 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,945 - antinex-prc - INFO - sample=30186 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,945 - antinex-prc - INFO - sample=30187 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,945 - antinex-prc - INFO - sample=30188 -
label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,945 - antinex-prc - INFO - sample=30189 - label_value=1.0 predicted=1 label=attack
2018-03-11 23:35:00,945 - antinex-prc - INFO - sample=30190 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,945 - antinex-prc - INFO - sample=30191 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,946 - antinex-prc - INFO - sample=30192 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,946 - antinex-prc - INFO - sample=30193 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,946 - antinex-prc - INFO - sample=30194 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,946 - antinex-prc - INFO - sample=30195 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,946 - antinex-prc - INFO - sample=30196 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,946 - antinex-prc - INFO - sample=30197 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,946 - antinex-prc - INFO - sample=30198 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,946 - antinex-prc - INFO - sample=30199 - label_value=-1.0 predicted=-1 label=not_attack
2018-03-11 23:35:00,947 - antinex-prc - INFO - Full-Django-AntiNex-Simple-Scaler-DNN made predictions=30200 found=30200 accuracy=99.84685430463577
2018-03-11 23:35:00,947 - antinex-prc - INFO - Full-Django-AntiNex-Simple-Scaler-DNN - saving model=full-django-antinex-simple-scaler-dnn

If you do not have the datasets cloned locally, you can use the included minimized dataset from the repo:

./antinex_core/scripts/publish_predict_request.py -f training/scaler-django-antinex-simple.json

Publish a Train Request
./antinex_core/scripts/publish_train_request.py

Publish a Regression Prediction Request
./antinex_core/scripts/publish_regression_predict.py

JSON API
The AntiNex core manages a pool of workers that are subscribed to process tasks found in two queues
(webapp.train.requestsandwebapp.predict.requests). Tasks are defined as JSON dictionaries and must have the following structure:{ "label": "Django-AntiNex-Simple-Scaler-DNN", "dataset": "./tests/datasets/classification/cleaned_attack_scans.csv", "apply_scaler": true, "ml_type": "classification", "predict_feature": "label_value", "features_to_process": [ "eth_type", "idx", "ip_ihl", "ip_len", "ip_tos", "ip_version", "tcp_dport", "tcp_fields_options.MSS", "tcp_fields_options.Timestamp", "tcp_fields_options.WScale", "tcp_seq", "tcp_sport" ], "ignore_features": [ ], "sort_values": [ ], "seed": 42, "test_size": 0.2, "batch_size": 32, "epochs": 10, "num_splits": 2, "loss": "binary_crossentropy", "optimizer": "adam", "metrics": [ "accuracy" ], "histories": [ "val_loss", "val_acc", "loss", "acc" ], "model_desc": { "layers": [ { "num_neurons": 250, "init": "uniform", "activation": "relu" }, { "num_neurons": 1, "init": "uniform", "activation": "sigmoid" } ] }, "label_rules": { "labels": [ "not_attack", "not_attack", "attack" ], "label_values": [ -1, 0, 1 ] }, "version": 1 }Regression prediction tasks are also supported, and here is an example from an included dataset with mock stock prices:{ "label": "Scaler-Close-Regression", "dataset": "./tests/datasets/regression/stock.csv", "apply_scaler": true, "ml_type": "regression", "predict_feature": "close", "features_to_process": [ "high", "low", "open", "volume" ], "ignore_features": [ ], "sort_values": [ ], "seed": 7, "test_size": 0.2, "batch_size": 32, "epochs": 50, "num_splits": 2, "loss": "mse", "optimizer": "adam", "metrics": [ "accuracy" ], "model_desc": { "layers": [ { "activation": "relu", "init": "uniform", "num_neurons": 200 }, { "activation": null, "init": "uniform", "num_neurons": 1 } ] } }Splunk Environment VariablesThis repository uses theSpylunkinglogger that supports publishing logs to Splunk over the authenticated HEC REST API. 
You can set these environment variables to publish to Splunk:export SPLUNK_ADDRESS="<splunk address host:port>" export SPLUNK_API_ADDRESS="<splunk api address host:port>" export SPLUNK_USER="<splunk username for login>" export SPLUNK_PASSWORD="<splunk password for login>" export SPLUNK_TOKEN="<Optional - username and password will login or you can use a pre-existing splunk token>" export SPLUNK_INDEX="<splunk index>" export SPLUNK_QUEUE_SIZE="<num msgs allowed in queue - 0=infinite>" export SPLUNK_RETRY_COUNT="<attempts per log to retry publishing>" export SPLUNK_RETRY_BACKOFF="<cooldown in seconds per failed POST>" export SPLUNK_SLEEP_INTERVAL="<sleep in seconds per batch>" export SPLUNK_SOURCE="<splunk source>" export SPLUNK_SOURCETYPE="<splunk sourcetype>" export SPLUNK_TIMEOUT="<timeout in seconds>" export SPLUNK_DEBUG="<1 enable debug|0 off - very verbose logging in the Splunk Publishers>"Developmentvirtualenv -p python3 ~/.venvs/antinexcore && source ~/.venvs/antinexcore/bin/activate && pip install -e .TestingRun allpython setup.py testRun a test casepython -m unittest tests.test_train.TestTrain.test_train_antinex_simple_success_retrainLintingflake8 .pycodestyle .LicenseApache 2.0 - Please refer to theLICENSEfor more details
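Since workers only accept tasks shaped like the JSON API dictionaries above, it can help to assemble and sanity-check a task before publishing it to the queue. The sketch below is a minimal, stdlib-only illustration — the field names come from the classification example in this README, the `build_task`/`REQUIRED` helpers are hypothetical, and the actual Celery/Redis publish step is omitted:

```python
import json

# Required top-level fields, taken from the classification example above
REQUIRED = ("label", "dataset", "ml_type", "predict_feature",
            "features_to_process", "model_desc")

def build_task(label, dataset, predict_feature, features):
    """Assemble a minimal classification task dictionary."""
    return {
        "label": label,
        "dataset": dataset,
        "apply_scaler": True,
        "ml_type": "classification",
        "predict_feature": predict_feature,
        "features_to_process": list(features),
        "model_desc": {"layers": [
            {"num_neurons": 250, "init": "uniform", "activation": "relu"},
            {"num_neurons": 1, "init": "uniform", "activation": "sigmoid"},
        ]},
    }

task = build_task(
    "Demo-DNN",
    "./tests/datasets/classification/cleaned_attack_scans.csv",
    "label_value",
    ["idx", "ip_len"])
missing = [k for k in REQUIRED if k not in task]
assert not missing, f"task is missing fields: {missing}"
payload = json.dumps(task)  # this string is what would be published to the queue
```

Validating locally like this catches malformed tasks before they reach a worker, where a missing field would only surface in the worker logs.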
antinex-utils
Standalone utilities for training AI.

Used in: https://github.com/jay-johnson/train-ai-with-django-swagger-jwt

Install
pip install antinex-utils

Development
Set up the repository:

mkdir -p -m 777 /opt/antinex
git clone https://github.com/jay-johnson/antinex-utils.git /opt/antinex/utils
cd /opt/antinex/utils

Set up the virtual env and install:

virtualenv -p python3 ~/.venvs/antinexutils && source ~/.venvs/antinexutils/bin/activate && pip install -e .

Testing
Run all: python setup.py test

Run a test case:
python -m unittest tests.test_classification.TestClassification.test_classification_deep_dnn
python -m unittest tests.test_regression.TestRegression.test_dataset_regression_using_scaler

AntiNex Stack Status
AntiNex AI Utilities is part of the AntiNex stack: REST API, Core Worker, Network Pipeline, AI Utils, and Client (each with its own build and docs).

Linting
flake8 .
pycodestyle --exclude=.tox,.eggs

License
Apache 2.0 - Please refer to the LICENSE for more details
antiocr
antiOCR

Anti OCR, Free Texts. Stop text in images from being OCR'd — let texts spread freely!

antiOCR converts a given text into an image that humans can read but machines cannot recognize.

You are welcome to scan the QR code to add the assistant as a friend (note: "anti"); the assistant periodically invites everyone into the user group. The author also maintains a private Knowledge Planet (知识星球) group, where questions are answered by the author more quickly — you are welcome to join. The private group also gradually publishes materials related to the open-source projects, including tutorials, unreleased models, invocation code for different application scenarios, and answers to difficult problems encountered in use, as well as the latest related research materials.

Usage
Invocation is simple; here is an example:

from antiocr import AntiOcr

texts = '拒绝图片文字被OCR,让文字自由传播! antiOCR 把指定文本转换成机器无法识别但人可读的文字图片。'
anti = AntiOcr()
# generate the text image
img = anti(
    texts,
    font_fp='/System/Library/Fonts/PingFang.ttc',  # font file to use
)
img.save("output.png")

For a usage example, see the Streamlit Demo.

The anti-OCR methods antiOCR currently uses mainly include:
- each character is rendered with a randomly different font size;
- each Chinese character is flipped with a given probability (with a randomly generated rotation angle);
- each Chinese character is converted to Chinese pinyin with a given probability;
- [optional] a randomly generated distracting background image;
- the font can be specified as needed.

Examples of generated images
(image table omitted: examples with fixed and with random background-image sources — see the repository)

Install
Well, if all goes smoothly, a single command:

pip install antiocr

If installation is slow, you can specify a domestic (Chinese) mirror, such as the Douban source:

pip install antiocr -i https://pypi.doubanio.com/simple

Buy the author a coffee
Open source is not easy. If this project helps you, consider fueling the author up 🥤 with some encouragement 💪🏻.

Official code repository: https://github.com/breezedeus/antiocr
antiope
antiope-awsPython module & scripts for managing Multiple AWS accounts leveraging AntiopeRationaleInstallationFor testing, the install process ispipinstall-e.For production usage, you can install antiope-aws from PyPi.UsageManifest FilesScriptsNone yetPython ModuleThe Python Modules consists of three main classes.All modules use the python logger for debug and informational events. These can be overridden as needed.AWSAccountForeignAWSAccountAWSOrganizationVPCRoadmap
antiorm
Anti-ORM is not an ORM, and it certainly does not want to be. Anti-ORM is a simple Python module that provides a pythonic syntax for making it more convenient to build SQL queries over the DBAPI-2.0 interface.In practice, if you’re the kind of person that likes it to the bare metal, it’s almost as good as the ORMs. At least there is no magic, and it just works.
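For comparison, here is what a raw DBAPI-2.0 query looks like with the stdlib sqlite3 driver; the hand-written SQL strings and parameter binding below are the kind of boilerplate Anti-ORM's pythonic syntax aims to make more convenient (the table and data are made up for the example, and this is not Anti-ORM's own API):

```python
import sqlite3

# In-memory database with a throwaway table for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users (name, age) VALUES (?, ?)",
                 [("alice", 30), ("bob", 25)])

# A plain DBAPI-2.0 query: SQL as a string, placeholders bound by hand.
cur = conn.execute("SELECT name FROM users WHERE age >= ? ORDER BY name", (26,))
rows = [r[0] for r in cur.fetchall()]
print(rows)  # ['alice']
```

Anti-ORM layers query-building helpers on top of exactly this interface, so there is no hidden magic: what runs underneath is still the DBAPI driver.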
antiparking
No description available on PyPI.
antiparser
No description available on PyPI.
antipathy
Tired of calling a function for every path manipulation you need to do? Is:

>>> path, filename = os.path.split(some_name)
>>> basename, ext = os.path.splitext(filename)
>>> basename = basename + '_01'
>>> new_name = os.path.join(path, basename + ext)

wearing on your nerves? In short, are you filled with antipathy [1] for os.path? Then get antipathy and work with Path:

>>> from antipathy import Path
>>> some_name = Path('/home/ethan/source/my_file.txt')
>>> backups = Path('/home/ethan/backup/')
>>> print(some_name.path)
/home/ethan/source/
>>> print(some_name.ext)
.txt
>>> print(some_name.exists())
True  # (well, if it happens to exist at this moment ;)
>>> backup = backups / some_name.filename + '_01' + some_name.ext
>>> print(backup)
/home/ethan/backup/my_file_01.txt
>>> some_name.copy(backup)

Because Path is a subclass of bytes/str/unicode, it can still be passed to other functions that expect a bytes/str/unicode object and work seamlessly [2]. [1] https://www.google.com/#q=antipathy [2] in most cases; there are a few places that do a type check instead of an isinstance check.
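The stdlib pathlib covers similar ground, so a side-by-side sketch helps place antipathy: the attribute names differ, and (the key distinction the README makes) pathlib's path objects are not str subclasses. The filenames below are just examples:

```python
from pathlib import PurePosixPath

some_name = PurePosixPath("/home/ethan/source/my_file.txt")
backups = PurePosixPath("/home/ethan/backup")

# pathlib spellings of the manipulations shown above:
print(some_name.parent)  # /home/ethan/source
print(some_name.suffix)  # .txt
backup = backups / (some_name.stem + "_01" + some_name.suffix)
print(backup)            # /home/ethan/backup/my_file_01.txt

# Unlike antipathy's Path, a pathlib path is not a str subclass,
# so it cannot be handed directly to APIs that expect strings.
print(isinstance(some_name, str))  # False
```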
antipattern-mitigation
No description available on PyPI.
antipetros_discordbot
Antipetros DiscordbotToCInstallationPyPiUsageDescriptionFeaturesDependenciesPython dependenciesExternal dependenciesLicenseDevelopmentFuture PlansSee alsoLinksBot-Name:AntiPetrosVersion:1.3.9InstallationPyPipip install antipetros_discordbot==1.3.9Usageantipetrosbot cleanCli command to clean the 'APPDATA' folder that was created.antipetrosbot get-pathGet remote path to the User data dir or files within.antipetrosbot runStandard way to start the bot and connect it to discord.antipetrosbot stopCli way of autostopping the bot.DescriptionFeaturesCurrently usable CogsAdministrationCogDescriptionShort DescriptionCommands and methods that help administrate the Discord Server.Config NameadministrationCog State Tags- DOCUMENTATION_MISSING- OUTDATED- NEEDS_REFRACTORING- FEATURE_MISSING- UNTESTED- OPEN_TODOSCommandsDELETE_MSGaliases:deletemsg,delete+msg,delete.msg,delete-msgis hidden:Trueusage:NoneMAKE_EMBEDhelp:Creates a simple embed message in the specified channel.No support for embed fields, as input would be too complicated.Args: channel (discord.TextChannel): either channel name or channel id (preferred), where the message should be posted. --title (str): --description (str): --url (str): --thumbnail (str): --image (str): --timestamp (str): --author-name (str): --author-url (str): --author-icon (str): --footer-text (str): --footer-icon (str): --thumbnail (str): --image (str): --disable-mentions (bool): --delete-after (int):aliases:make.embed,make-embed,makeembed,make+embedis hidden:Trueusage:NoneTHE_BOTS_NEW_CLOTHEShelp:Sends about a page's worth of empty messages to a channel, so it looks like the channel got purged.Optionally deletes the empty message after the specified seconds (defaults to not deleting)Args: delete_after (int, optional): time in seconds after which to delete the empty message.
Defaults to None which means that it does not delete the empty message.aliases:thebotsnewclothes,the+bots+new+clothes,the-bots-new-clothes,clr-scrn,the.bots.new.clothesis hidden:Trueusage:NoneWRITE_MESSAGEaliases:write+message,writemessage,write-message,write.messageis hidden:Trueusage:NoneAntistasiLogWatcherCogDescriptionShort DescriptionsoonConfig Nameantistasi_log_watcherCog State Tags- DOCUMENTATION_MISSING- FEATURE_MISSING- UNTESTED+ WORKINGCommandsGET_NEWEST_LOGShelp:Gets the newest log files from the Dev Drive.If the log file is bigger than current file size limit, it will provide it zipped.Tries to fuzzy match both server and sub-folder.Args: server (str): Name of the Server sub_folder (str): Name of the sub-folder e.g. Server, HC_0, HC_1,... amount (int, optional): The amount of log files to get. standard max is 5 . Defaults to 1.aliases:get-newest-logs,get.newest.logs,get+newest+logs,getnewestlogsis hidden:Falseusage:@AntiPetrosget_newest_logsmainserver_1serverGET_NEWEST_MOD_DATAhelp:Gets the required mods for the Server.Provides the list as embed and Arma3 importable html file.Args: server (str): Name of the Antistasi Community Server to retrieve the mod list.aliases:get+newest+mod+data,getnewestmoddata,get.newest.mod.data,get-newest-mod-datais hidden:Falseusage:@AntiPetrosget_newest_mod_datamainserver_1AutoReactionCogDescriptionShort DescriptionWiPConfig Nameauto_reactionCog State Tags- EMPTY- DOCUMENTATION_MISSING- CRASHING- OUTDATED- FEATURE_MISSING- UNTESTEDCommandsADD_CHANNEL_REACTION_INSTRUCTIONaliases:add-channel-reaction-instruction,addchannelreactioninstruction,add+channel+reaction+instruction,add.channel.reaction.instructionis hidden:Falseusage:NoneADD_EXCEPTION_TO_WORD_REACTION_INSTRUCTIONaliases:add+exception+to+word+reaction+instruction,add-exception-to-word-reaction-instruction,addexceptiontowordreactioninstruction,add.exception.to.word.reaction.instructionis 
hidden:Falseusage:NoneADD_WORD_REACTION_INSTRUCTIONaliases:add-word-reaction-instruction,add.word.reaction.instruction,add+word+reaction+instruction,addwordreactioninstructionis hidden:Falseusage:NoneCHANGE_WORD_REACTION_INSTRUCTION_OPTIONaliases:change.word.reaction.instruction.option,change-word-reaction-instruction-option,change+word+reaction+instruction+option,changewordreactioninstructionoptionis hidden:Falseusage:NoneLIST_ALL_REACTION_INSTRUCTIONSaliases:list+all+reaction+instructions,listallreactioninstructions,list.all.reaction.instructions,list-all-reaction-instructionsis hidden:Falseusage:NoneREMOVE_REACTION_INSTRUCTIONaliases:remove-reaction-instruction,remove+reaction+instruction,removereactioninstruction,remove.reaction.instructionis hidden:Falseusage:NoneBotAdminCogDescriptionShort DescriptionCommands and methods that are needed to Administrate the Bot itself.Config Namebot_adminCog State Tags- DOCUMENTATION_MISSING- FEATURE_MISSINGCommandsADD_TO_BLACKLISTaliases:add+to+blacklist,add.to.blacklist,addtoblacklist,add-to-blacklistis hidden:Trueusage:NoneADD_WHO_IS_PHRASEaliases:add.who.is.phrase,add+who+is+phrase,addwhoisphrase,add-who-is-phraseis hidden:Trueusage:NoneALL_ALIASESaliases:allaliases,all.aliases,all+aliases,all-aliasesis hidden:Trueusage:NoneINVOCATION_PREFIXESaliases:invocation+prefixes,invocationprefixes,invocation-prefixes,invocation.prefixesis hidden:Trueusage:NoneLIFE_CHECKaliases:you_dead?,life-check,life+check,are-you-there,poke-with-stick,life.check,lifecheckis hidden:Trueusage:NoneREMOVE_FROM_BLACKLISTaliases:remove-from-blacklist,remove.from.blacklist,remove+from+blacklist,removefromblacklistis hidden:Trueusage:NoneSEND_LOG_FILEhelp:Gets the log files of the bot and post it as a file to discord.You can choose to only get the newest or all logs.Args: which_logs (str, optional): [description]. Defaults to 'newest'. 
other options = 'all'aliases:send.log.file,sendlogfile,send+log+file,send-log-fileis hidden:Trueusage:@AntiPetrossend_log_fileallSEND_LOOP_INFOaliases:sendloopinfo,send+loop+info,send-loop-info,send.loop.infois hidden:Trueusage:NoneTELL_UPTIMEaliases:tell+uptime,tell-uptime,telluptime,tell.uptimeis hidden:Trueusage:NoneTELL_VERSIONaliases:tell+version,tell-version,tell.version,tellversionis hidden:Trueusage:NoneBotFeedbackCogDescriptionShort DescriptionWiPConfig Namebot_feedbackCog State Tags- EMPTY- DOCUMENTATION_MISSING- CRASHING- OUTDATED- FEATURE_MISSING- UNTESTEDCommandsCommunityServerInfoCogDescriptionShort DescriptionsoonConfig Namecommunity_server_infoCog State Tags- EMPTY- DOCUMENTATION_MISSING- CRASHING- OUTDATED- FEATURE_MISSING- UNTESTEDCommandsCURRENT_ONLINE_SERVERhelp:Shows all server of the Antistasi Community, that are currently online.Testserver_3 and Eventserver are excluded as they usually are password guarded.aliases:current-online-server,servers,currentonlineserver,current+online+server,server?,server,current.online.serveris hidden:Falseusage:@AntiPetroscurrent_online_serverCURRENT_PLAYERShelp:Show all players that are currently online on one of the Antistasi Community Server.Shows Player Name, Player Score and Time Played on that Server.Args: server (str): Name of the Server, case insensitive.aliases:currentplayers,current-players,current+players,current.playersis hidden:Falseusage:@AntiPetroscurrent_playersmainserver_1EXCLUDE_FROM_SERVER_STATUS_NOTIFICATIONaliases:exclude-from-server-status-notification,exclude+from+server+status+notification,excludefromserverstatusnotification,exclude.from.server.status.notificationis hidden:Falseusage:NoneUNDO_EXCLUDE_FROM_SERVER_STATUS_NOTIFICATIONaliases:undoexcludefromserverstatusnotification,undo+exclude+from+server+status+notification,undo-exclude-from-server-status-notification,undo.exclude.from.server.status.notificationis hidden:Falseusage:NoneConfigCogDescriptionShort DescriptionCog with commands 
to access and manipulate config files, also for changing command aliases. Almost all are only available in DMs; commands are hidden from the help command.Config NameconfigCog State Tags- NEEDS_REFRACTORING- FEATURE_MISSING- OPEN_TODOSCommandsADD_ALIAShelp:Adds an alias for a command.The alias has to be unique and contain no spaces.Args: command_name (str): name of the command alias (str): the new alias.aliases:addalias,add-alias,add.alias,add+aliasis hidden:Trueusage:@AntiPetrosadd_aliasflip_coinflip_itCHANGE_SETTING_TOhelp:NOT IMPLEMENTEDis hidden:Trueusage:NoneCONFIG_REQUESThelp:Returns a Config file as an attachment, with additional info in an embed.Args: config_name (str, optional): Name of the config, or 'all' for all configs. Defaults to 'all'.is hidden:Trueusage:NoneLIST_CONFIGShelp:NOT IMPLEMENTEDaliases:listconfigs,list+configs,list.configs,list-configsis hidden:Trueusage:NoneOVERWRITE_CONFIG_FROM_FILEhelp:NOT IMPLEMENTEDis hidden:Trueusage:NoneSHOW_CONFIG_CONTENThelp:NOT IMPLEMENTEDis hidden:Trueusage:NoneSHOW_CONFIG_CONTENT_RAWhelp:NOT IMPLEMENTEDis hidden:Trueusage:NoneFaqCogDescriptionShort DescriptionCreates Embed FAQ items.Config NamefaqCog State Tags- DOCUMENTATION_MISSING- FEATURE_MISSING- UNTESTED+ WORKINGCommandsPOST_FAQ_BY_NUMBERhelp:Posts an FAQ as an embed on request.Either as a normal message or as a reply, if the invoking message was also a reply.Deletes the invoking messageArgs: faq_numbers (commands.Greedy[int]): minimum one faq number to request, maximum as many as you want separated by one space (i.e.
14 12 3) as_template (bool, optional): if the resulting faq item should be created via the templated items or from the direct parsed faqs.aliases:faq,postfaqbynumber,post.faq.by.number,post+faq+by+number,post-faq-by-numberis hidden:Falseusage:NoneFixedAnswerCogDescriptionShort DescriptionWiPConfig Namefixed_answerCog State Tags- EMPTY- DOCUMENTATION_MISSING- CRASHING- OUTDATED- FEATURE_MISSING- UNTESTEDCommandsBOB_STREAMINGaliases:bobstreaming,bob.streaming,bob+streaming,bob-streaming,bobdevis hidden:Falseusage:NoneNEW_VERSION_ETAaliases:newversioneta,eta,new+version+eta,update,new.version.eta,new-version-etais hidden:Falseusage:NoneGithubCogDescriptionShort DescriptionWiPConfig NamegithubCog State Tags- EMPTY- DOCUMENTATION_MISSING- CRASHING- OUTDATED- FEATURE_MISSING- UNTESTEDCommandsGET_FILEaliases:get-file,get.file,get+file,getfileis hidden:Falseusage:NoneGITHUB_REFERALSaliases:github+referals,github.referals,github-referals,githubreferalsis hidden:Falseusage:NoneGITHUB_TRAFFICaliases:github-traffic,githubtraffic,github.traffic,github+trafficis hidden:Falseusage:NoneGiveAwayCogDescriptionShort DescriptionSoonConfig Namegive_awayCog State Tags- DOCUMENTATION_MISSING- FEATURE_MISSINGCommandsABORT_GIVE_AWAYhelp:NOT IMPLEMENTEDaliases:abort+give+away,abort-give-away,abortgiveaway,abort.give.awayis hidden:Trueusage:NoneCREATE_GIVEAWAYaliases:giveaway,creategiveaway,create-giveaway,create+giveaway,create.giveawayis hidden:Trueusage:NoneFINISH_GIVE_AWAYhelp:NOT IMPLEMENTEDaliases:finishgiveaway,finish.give.away,finish-give-away,finish+give+awayis hidden:Trueusage:NoneImageManipulatorCogDescriptionShort DescriptionCommands that manipulate or generate images.Config Nameimage_manipulationCog State Tags- NEEDS_REFRACTORING- FEATURE_MISSING- OPEN_TODOS+ WORKINGCommandsADD_FONTaliases:add-font,add+font,add.font,addfontis hidden:Falseusage:NoneADD_STAMPhelp:Adds a new stamp image to the available stamps.This command needs to have the image as an 
attachment.aliases:add_image,add+stamp,add.stamp,add-stamp,addstampis hidden:Falseusage:@AntiPetrosadd_stampAVAILABLE_STAMPShelp:Posts all available stamps.aliases:available+stamps,availablestamps,available.stamps,available-stampsis hidden:Falseusage:@AntiPetrosavailable_stampsGET_STAMP_IMAGEaliases:get.stamp.image,get_image,get-stamp-image,getstampimage,get+stamp+imageis hidden:Falseusage:NoneLIST_FONTSaliases:list+fonts,listfonts,list-fonts,list.fontsis hidden:Falseusage:NoneMEMBER_AVATARhelp:Stamps the avatar of a Member with the Antistasi Crest.Returns the new stamped avatar as a .PNG image that the Member can save and replace his original avatar with.Example: @AntiPetros member_avataris hidden:Falseusage:NoneSTAMP_IMAGEhelp:Stamps an image with a small image from the available stamps.Useful for watermarking images.Get all available stamps with '@AntiPetros available_stamps'aliases:stamp.image,stamp+image,stamp-image,stampimageis hidden:Falseusage:@AntiPetrosstamp_image-siASLOGO-fpbottom-spright-so0.5-f0.25TEXT_TO_IMAGEaliases:text+to+image,text-to-image,text.to.image,texttoimageis hidden:Falseusage:NoneInfoCogDescriptionShort DescriptionWiPConfig NameinfoCog State Tags- EMPTY- DOCUMENTATION_MISSING- CRASHING- OUTDATED- FEATURE_MISSING- UNTESTEDCommandsCODE_FILE_TO_IMAGEaliases:code.file.to.image,code-file-to-image,code+file+to+image,codefiletoimageis hidden:Falseusage:NoneINFO_BOTaliases:infobot,info.bot,info-bot,info+botis hidden:Falseusage:NoneINFO_COMMANDaliases:info+command,infocommand,info-command,info.commandis hidden:Falseusage:NoneINFO_GUILDaliases:info+guild,info-guild,infoguild,info.guildis hidden:Falseusage:NoneINFO_MEaliases:info+me,infome,info.me,info-meis hidden:Falseusage:NoneINFO_OTHERaliases:infoother,info-other,info.other,info+otheris hidden:Falseusage:NoneKlimBimCogDescriptionShort DescriptionCollection of small commands that either don't fit anywhere else or are just for fun.Config Nameklim_bimCog State Tags+
WORKINGCommandsCHOOSE_RANDOMhelp:Selects random items from a semi-colon(;) separated list. No limit on how many items the list can have, except for the Discord character limit.The amount of items to select can be set by specifying a number before the list. Defaults to selecting only 1 item. Max amount is 25.Args:choices (str): input list as a semi-colon separated list. select_amount (Optional[int], optional): How many items to select. Defaults to 1.Example:@AntiPetros 2 this is the first item; this is the second; this is the thirdaliases:choose-random,choose+random,chooserandom,choose.randomis hidden:Falseusage:NoneFLIP_COINhelp:Simulates a coin flip and posts the result as an image of a Petros Dollar.aliases:flipcoin,flip+coin,coinflip,flip.coin,flip,flip-coinis hidden:Falseusage:@AntiPetrosflip_coinMAKE_FIGLEThelp:Posts an ASCII Art version of the input text.Warning, your invoking message gets deleted!Args: text (str): text you want to see as ASCII Art.aliases:make-figlet,make.figlet,make+figlet,makefigletis hidden:Falseusage:@AntiPetrosmake_figletThetexttofigletROLL_DICEhelp:Roll dice and also get the result as an image.All standard DnD dice are available: d4, d6, d8, d10, d12, d20, d100.Args: dice_line (str): the dice you want to roll in the format 2d6, where the first number is the amount. Multiple different dice can be rolled, just separate them by a space: 2d6 4d20 1d4.aliases:roll.dice,rolldice,roll-dice,roll+diceis hidden:Falseusage:NoneTHE_DRAGONhelp:Posts an awesome ASCII Art Dragon!aliases:thedragon,the.dragon,the+dragon,the-dragonis hidden:Falseusage:@AntiPetrosthe_dragonURBAN_DICTIONARYhelp:Searches Urban Dictionary for the search term and posts the answer as an embedArgs:term (str): the search term entries (int, optional): How many UD entries for that term it should post, max is 5.
Defaults to 1.aliases:urban+dictionary,urbandictionary,urban-dictionary,urban.dictionaryis hidden:Falseusage:@AntiPetrosurban_dictionaryPetros2PerformanceCogDescriptionShort DescriptionCollects Latency data and memory usage every 10min and posts every 24h a report of the last 24h as graphs.Config NameperformanceCog State Tags- DOCUMENTATION_MISSING- NEEDS_REFRACTORING- FEATURE_MISSING- OPEN_TODOSCommandsGET_COMMAND_STATSaliases:get-command-stats,get.command.stats,getcommandstats,get+command+statsis hidden:Trueusage:NoneINITIAL_MEMORY_USEaliases:initial+memory+use,initialmemoryuse,initial-memory-use,initial.memory.useis hidden:Trueusage:NoneREPORThelp:Reports both current latency and memory usage as Graph.is hidden:Trueusage:@AntiPetrosreportREPORT_LATENCYaliases:report+latency,report-latency,report.latency,reportlatencyis hidden:Trueusage:NoneREPORT_MEMORYaliases:report.memory,report+memory,reportmemory,report-memoryis hidden:Trueusage:NonePurgeMessagesCogDescriptionShort DescriptionSoonConfig Namepurge_messagesCog State Tags- DOCUMENTATION_MISSING- FEATURE_MISSINGCommandsPURGE_ANTIPETROSaliases:purgeantipetros,purge+antipetros,purge-antipetros,purge.antipetrosis hidden:Trueusage:NoneRulesCogDescriptionShort DescriptionWiPConfig NamerulesCog State Tags- EMPTY- DOCUMENTATION_MISSING- CRASHING- OUTDATED- FEATURE_MISSING- UNTESTEDCommandsALL_RULESaliases:all.rules,all-rules,allrules,all+rulesis hidden:Falseusage:NoneBETTER_RULESaliases:better-rules,betterrules,better.rules,better+rulesis hidden:Falseusage:NoneCOMMUNITY_RULESaliases:community-rules,community.rules,community+rules,communityrulesis hidden:Falseusage:NoneEXPLOITS_RULESaliases:exploits.rules,exploitsrules,exploits-rules,exploits+rulesis hidden:Falseusage:NoneSERVER_RULESaliases:serverrules,server-rules,server.rules,server+rulesis hidden:Falseusage:NoneSaveSuggestionCogDescriptionShort DescriptionProvides functionality for each Antistasi Team to save suggestions by reacting with emojis.Config 
Namesave_suggestionCog State Tags- DOCUMENTATION_MISSING- NEEDS_REFRACTORING- FEATURE_MISSING- UNTESTED- OPEN_TODOS+ WORKINGCommandsAUTO_ACCEPT_SUGGESTIONSis hidden:Trueusage:NoneCLEAR_ALL_SUGGESTIONSis hidden:Trueusage:NoneGET_ALL_SUGGESTIONSis hidden:Trueusage:NoneMARK_DISCUSSEDis hidden:Trueusage:NoneREMOVE_ALL_USERDATAis hidden:Trueusage:NoneREQUEST_MY_DATAis hidden:Trueusage:NoneUNSAVE_SUGGESTIONis hidden:Trueusage:NoneSubscriptionCogDescriptionShort DescriptionOrganizes Topic so they can be subscribed and mentioned selectively.Config NamesubscriptionCog State Tags- DOCUMENTATION_MISSING- FEATURE_MISSINGCommandsCREATE_SUBSCRIPTION_CHANNEL_HEADERaliases:create.subscription.channel.header,createsubscriptionchannelheader,create+subscription+channel+header,create-subscription-channel-headeris hidden:Trueusage:NoneMODIFY_TOPIC_EMBEDaliases:modify-topic-embed,modify.topic.embed,modify+topic+embed,modifytopicembedis hidden:Trueusage:NoneNEW_TOPICaliases:newtopic,new.topic,new+topic,new-topicis hidden:Trueusage:NoneREMOVE_TOPICaliases:removetopic,remove-topic,remove+topic,remove.topicis hidden:Trueusage:NoneTOPIC_TEMPLATEaliases:topictemplate,topic-template,topic.template,topic+templateis hidden:Trueusage:NoneUNSUBSCRIBEis hidden:Trueusage:NoneUPDATE_SUBSCRIPTION_CHANNEL_HEADERaliases:update.subscription.channel.header,update-subscription-channel-header,updatesubscriptionchannelheader,update+subscription+channel+headeris hidden:Trueusage:NoneTeamRosterCogDescriptionShort DescriptionWiPConfig Nameteam_rosterCog State Tags- EMPTY- DOCUMENTATION_MISSING- CRASHING- OUTDATED- FEATURE_MISSING- UNTESTEDCommandsDELETE_AND_REDO_TEAM_ROSTERaliases:delete-and-redo-team-roster,delete+and+redo+team+roster,deleteandredoteamroster,delete.and.redo.team.rosteris hidden:Falseusage:NoneFORCE_UPDATE_TEAM_ROSTERaliases:force+update+team+roster,forceupdateteamroster,force.update.team.roster,force-update-team-rosteris 
hidden:Falseusage:NoneINITIALIZE_TEAM_ROSTERaliases:initialize-team-roster,initialize.team.roster,initializeteamroster,initialize+team+rosteris hidden:Falseusage:NoneTEAM_ROSTER_CHANGE_DESCRIPTIONaliases:team-roster-change-description,team+roster+change+description,team.roster.change.description,teamrosterchangedescriptionis hidden:Falseusage:NoneTEAM_ROSTER_CHANGE_EXTRA_ROLEaliases:teamrosterchangeextrarole,team.roster.change.extra.role,team+roster+change+extra+role,team-roster-change-extra-roleis hidden:Falseusage:NoneTEAM_ROSTER_CHANGE_IMAGEaliases:teamrosterchangeimage,team-roster-change-image,team.roster.change.image,team+roster+change+imageis hidden:Falseusage:NoneTEAM_ROSTER_CHANGE_JOIN_DESCRIPTIONaliases:team+roster+change+join+description,team.roster.change.join.description,team-roster-change-join-description,teamrosterchangejoindescriptionis hidden:Falseusage:NoneTemplateCheckerCogDescriptionShort DescriptionsoonConfig Nametemplate_checkerCog State Tags- EMPTY- DOCUMENTATION_MISSING- CRASHING- OUTDATED- FEATURE_MISSING- UNTESTEDCommandsCHECK_TEMPLATEhelp:Checks all classnames inside a provided template.Needs to have the template as an attachment to the invoking message.Returns the list of classnames it can't find in the config along with possible corrections.Also returns a corrected version of the template file.Args: all_items_file (bool, optional): if it should also provide a file that lists all used classes. Defaults to True. case_insensitive (bool, optional): if it should check case-insensitively.
Defaults to False.aliases:checktemplate,check+template,check-template,check.templateis hidden:Falseusage:NoneTranslateCogDescriptionShort DescriptionCollection of commands that help in translating text to different Languages.Config NametranslateCog State Tags+ WORKINGCommandsAVAILABLE_LANGUAGESaliases:availablelanguages,available+languages,available.languages,available-languagesis hidden:Falseusage:NoneTRANSLATEhelp:Translates text into multiple different languages.Tries to auto-guess input language.Warning, your invoking message gets deleted!Args: text_to_translate (str): the text to translate, quotes are optional to_language_id (Optional[LanguageConverter], optional): either can be the name of the language or an language code (iso639-1 language codes). Defaults to "english".is hidden:Falseusage:@AntiPetrostranslategermanThisistheSentencetotranslateVoteCogDescriptionShort DescriptionWiPConfig NamevoteCog State Tags- EMPTY- DOCUMENTATION_MISSING- CRASHING- OUTDATED- FEATURE_MISSING- UNTESTEDCommandsCREATE_VOTEaliases:create-vote,create+vote,create.vote,createvoteis hidden:Falseusage:NoneDependenciesDeveloped with Python Version3.9.1Python dependenciesJinja22.11.2Markdown3.3.4Pillow8.1.2PyGithub1.54.1Pygments2.8.1WeasyPrint52.2aiohttp3.7.3aiosqlite0.16.1antistasi_template_checker0.1.1arrow0.17.0async_property0.2.1asyncstdlib3.9.0beautifulsoup44.9.3click7.1.2cryptography3.3.1dateparser1.0.0dpytest0.0.22emoji1.1.0fuzzywuzzy0.18.0gidappdata0.1.13gidconfig0.1.16gidlogger0.1.9googletrans4.0.0rc1humanize3.2.0icecream2.0.0imgkit1.1.0marshmallow3.10.0matplotlib3.3.3psutil5.8.0pyfiglet0.8.post1python-youtube0.7.0python_a2s1.3.0python_dotenv0.15.0pytz2020.5rich9.13.0tldextract3.1.0validator_collection1.5.0watchgod0.6webdavclient33.14.5External dependenciesCairoWindowsfollow these instructionshttps://github.com/tschoonj/GTK-for-Windows-Runtime-Environment-InstallerUnixsudo apt-get install -y libcairo2-devPangoWindowsFollow thishttps://github.com/ImageMagick/pangoUnixsudo 
apt-get install -y libsdl-pango-devLicenseMITDevelopmentFuture Planspr loggingPR team cog for all the logging, Youtube twitch twitter.databasemigrating all the json files to an SQLite DBstartup infoauto updating server info in startup info, need to figure out how to separate development from production with this onegithub wikiAutomatically create and modify github Wikiauto reaction regexadd an item that can auto react with an emoji to a regex pattern in a messageauto message cogadd a cog that works like auto reactions but does other things when triggered (than just emojis)See alsoLinksAntistasi WebsiteAntistasi Steam Workshop ItemsA3 Antistasi Official Discord Server
antipickle
image source. antipickle keeps your heterogeneous data freshantipicklewhen you want to use pickle, but you shouldn'tWhy? Because pickle isn't the right way to persist or share data, andwe all know that.When it comes to practice, it takes time and effort to substitute pickle.'Hmm, I can use json here' — I thought on many occasions, and usually was wrong.Something small but annoying was in the way:datetimethat can't be stored ornp.arraythat serializers don't know how to deal with. Or evenbytes! And many smaller things.At this point I either had to give up and pickle itOR allocate time on figuring out 'how do I make this right'.antipicklesolves this for me.antipickleis arestrictedformat forsafe, persistent, and platform-independentstorage.Also, it is very convenient:importantipickleantipickle.dump(data,'data.antipickle')antipickle.dump(data,'s3://mybucket/data.antipickle')# stores in s3antipickle.dump(data,'s3://mybucket/data.antipickle.gz')# will additionally gziploaded_date=antipickle.load('s3://mybucket/data.antipickle.gz')# or local fileTo download/upload from s3 you need an additional dependency:pip install s3fs.Batteries included:Here is a simple example of what antipickle can save/load:data={'constants':[3.1415,2.718,True,False,42],'with nones':[1,None,0],b'bytes':b"can be stored too!",'nested lists and tuples':[[1,[2]],(1,2,None),{'nested':'dict'}],('tuple','as','key'):{'is_ok':True},'numpy nd':np.zeros([3,4],dtype='uint32'),}antipickle.dump(data,'data.antipickle')More formally,antipicklesupports python pieces commonly used for computations:bytes,str,int,float,complex,bool, andNonelist,tuple,set(all of them are stored as different entities)dict(including integer keys andtuplekeys)numpyarrays (native.npyformat used;dtype=Onot supported)pandasseries and dataframe (using parquet serialization viapyarrow)polarsseries and dataframe (using parquet)Any tree-formed structure of the above (no loops allowed)Configurable support:dataclassesandpydanticclasses.For 
reference, other non-pythonic formats (json and its binary relatives) have problems with native types (not distinguishing between list and tuple), encodings (not storing bytes), or collections (not allowing integers, bytes and tuples in dict keys). Antipickle is python-centric and has it solved.Installationpip install antipickleWhat is it forLet's set the expectation bar. antipickle is not fast, but isn't slow either; not super-compact, but quite ok; restricted: it wasn't designed to serialize just anything, it focuses on common python types and cases for data folks. At the same time, antipickle is safe, persistent, very convenient, modular and easy to extend, and thus suitable for data sharing and data preservation.When to (not) use picklepickle is designed for interprocess communication or as temporary storage. pickle has a good tradeoff of space- and time-efficiency and can serialize almost anything, including graphs with cycles.The name pickle suggests you could use it for long-term preservation of data, but that's not true: pickle's serialization is tied to an internal object representation, which is not guaranteed to be preserved in the next release (or even on a different OS). Developers of some packages (notably scikit-learn) provide some guarantees about being able to parse models that were saved with the previous 1-2 minor package releases, but that's an exception, not the rule.Second, pickle is insecure. And unreadable. And pickles can be large. During unpickling they can do anything python can, i.e. anything at all. So the python docs say it clearly: Only unpickle data you trust!
And because very few of us are ready to spend time on figuring out proper serialization.All comments above apply topickle-like libs likejoblib,dill,cloudpickle.Licenseantipickleis distributed under the terms of theMITlicense.Otherantipicklebuilds uponmsgpack-python(the only dependency).antipicklesupports all maintained python versions (Python 3.7+)
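The claims above about JSON's problems with native types are easy to check with the stdlib json module: a tuple silently becomes a list on a round trip, an integer dict key becomes a string, and bytes are rejected outright:

```python
import json

data = {"pair": (1, 2), 3: "int key"}
restored = json.loads(json.dumps(data))

print(restored["pair"])   # [1, 2] -- the tuple came back as a list
print(list(restored))     # ['pair', '3'] -- the int key 3 became the string '3'

# bytes cannot be serialized at all:
try:
    json.dumps({"raw": b"bytes"})
except TypeError as exc:
    print("bytes refused:", exc)
```

These are exactly the silent round-trip changes a restricted-but-faithful format like antipickle is meant to avoid.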
antiplagiarism
AntiPlagiarism: an antiplagiarism tool to compare many files. It gives a plagiarism percentage for a pair of files.
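The underlying idea, a similarity percentage for a pair of texts, can be sketched with stdlib difflib; this is an illustration of the concept only, not AntiPlagiarism's actual algorithm:

```python
import difflib

def similarity_percent(text_a: str, text_b: str) -> float:
    """Return a rough percentage of matching content between two texts."""
    ratio = difflib.SequenceMatcher(None, text_a, text_b).ratio()
    return round(ratio * 100, 1)

print(similarity_percent("the quick brown fox", "the quick brown fox"))  # 100.0
print(similarity_percent("hello", "world"))  # well below 100.0
```

In practice you would read the two files, normalize whitespace and case, and compare token sequences rather than raw characters.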
antiprism-python
antiprism_pythonis a collection of geometry modelling programs written in Python3, and associated with theAntiprismproject. The Antiprism programs may be used to view, process or analyse these models.Some of these programs were written to solve a specific problem, some to solve a general problem, some were written as prototypes. The programs vary in quality, completeness and usefulness. They are all shared under the MIT licence.InstallInstall the whole package withpip3(orpipif that installs as Python3), either from the PyPI repository:pip3 install antiprism_pythonOr directly from the Git repository:pip3 install git+git://github.com/antiprism/antiprism_pythonAlternatively, download the programs you are interested in and copyanti_lib.pyinto the same directory.ProgramsTo see the help for a program, run it with-h.barrel.pyCreate a cyclic polyhedron with one or two bands of equatorial squares, oriented like diamonds.eq_antherm.pyCreate an antihermaphrodite with equilateral trianglesgeodesic.pyCreate coordinates for a higher frequency, plane-faced or spherical, icosahedron, octahedron or tetrahedron.gold_bowtie.pyCreate a polyhedron with axial symmetry involving a golden trapezium bow-tielat_grid.pyMake a variety of lattices and grids using integer coordinates.lamella.pyCreate a lamella domes. Also, makes square barrel models and multiply-gyroelongated antihermaphrodite models.njitterbug.pyCreate a jitterbug model for a general polygon base. The transformation includes, if constructible, models corresponding to the antiprism, snub-antiprisms and gyrobicupola for that base.njohnson.pyCreate a Johnson-based model, from J88, J89, J90, with a general polygon base.packer.pyPack balls in a sphere. 
The pack is seeded with two or more balls, then subsequent balls are added one at a time in three point contact in positions chosen by the packing method.pentabelt.pyMake a model with axial symmetry based on a belt of staggered pentagonsproj_dome.pyMake a Jacoby-style dome, as described inhttp://www.google.com/patents/US7900405. Project a tiling of unit-edged triangles, squares or crossed squares (unit edges), at a specified height, onto a unit hemisphere, by gnomonic, stereographic or general point projection.rotegrity_models.pyCreate cyclic rotegrity models, with 1 or 2 layers of rotegrity unitsring_place.pyPlace maximum radius rings of contacting balls around points on a sphere. Input is a list of coordinates, one set per line.sph_circles.pyDistribute points on horizontal circles on a sphere (like a disco ball). The sphere is split into equal width bands. Balls with a diameter of this width are distributed equally around each band. The number of balls is either as many points as will fit in the band, or a specified number.sph_saff.pyDistribute num_points (default 20) on a sphere using the algorithm from “Distributing many points on a sphere” by E.B. Saff and A.B.J. Kuijlaars, Mathematical Intelligencer 19.1 (1997) 5–11.sph_spiral.pyDistribute points in a spiral on a sphere.spiro.pyCreate a spirograph pattern.str_art.pyCreate simple epicycloid string art patterns.temcor_dome.pyMake a Temcor-style dome, using the method described inhttps://groups.google.com/d/msg/geodesichelp/hJ3V9Nfp3kE/nikgoBPSFfwJ. The base model is a pyramid with a unit edge base polygon at a specified height above the origin.
The axis to rotate the plane about passes through the origin and is in the direction of the base polygon mid-edge to the pyramid apex.tri_tiling.pyCreate a polyhedron which tiles the sphere with congruent triangles.twister.pyTwist two polygons placed on symmetry axes and joined by a vertextwister_rhomb.pyTwist polygons, of the same type, placed on certain fixed axes and joined by vertices.twister_test.pyTwist two polygons placed on axes at a specified angle and joined by a vertex.Complementary ProgramsRelated Python programs in external projectsantitileGenerates geodesic models by various methods.view_off.pyAn OFF file viewer with export to PNG and SVG.
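sph_saff.py follows the generalized spiral construction from the Saff and Kuijlaars paper cited above: points are placed on evenly spaced heights, and the azimuth advances by an amount inversely proportional to the local circle radius. A minimal standalone sketch of that construction (illustrative only, not the package's code; the function name saff_spiral is made up here):

```python
import math

def saff_spiral(n):
    """Distribute n points (n >= 2) on the unit sphere with the
    Saff-Kuijlaars generalized spiral."""
    points = []
    phi = 0.0
    for k in range(1, n + 1):
        h = -1 + 2 * (k - 1) / (n - 1)          # height in [-1, 1]
        theta = math.acos(h)                     # polar angle
        if k == 1 or k == n:
            phi = 0.0                            # the two poles
        else:
            # azimuth step ~ 3.6 / sqrt(n (1 - h^2)), as in the paper
            phi = (phi + 3.6 / math.sqrt(n * (1 - h * h))) % (2 * math.pi)
        points.append((math.sin(theta) * math.cos(phi),
                       math.sin(theta) * math.sin(phi),
                       math.cos(theta)))
    return points
```

Each returned tuple lies on the unit sphere, so the output can be fed directly to a viewer such as view_off.py after conversion to OFF format.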
antipywidgets
AntIPyWidgets

My IPyWidgets.

Installation

    pip install antipywidgets

Jobs runner

    import antipywidgets.jobs_runner as jobs_runner

    def get_job(...):
        def job():
            ...  # do the job
        return job

    jobs = [get_job(...) for ... in ...]
    runner = jobs_runner.make_runner(jobs)

In the next cell:

    runner()
antisafelinks
Microsoft Safelinks Annihilator

Unleash the Real Links

Description

AntiSafeLinks is a liberating Python program that takes a stand against Microsoft's "security-driven" SafeLinks introduced in Office365 emails. This open-source tool empowers users to reclaim their original links by removing the suffocating "safelink" wrappers. If you are suffering from Microsoft's actions and you want to 1) recover a particular URL, 2) recover all the links from an email stored in a local file on your computer, or 3) keep your mailbox locally in Maildir format, then you can use AntiSafeLinks to neutralize it.

Why AntiSafeLinks?

Microsoft perverts the structure of your emails and, in fact, makes them more insecure by obscuring the URLs in them. These actions typically break multiple URLs that may lie in your email, prevent you from checking what address a URL will take you to before entering it and, furthermore, let Microsoft collect metadata from you when you access the SafeLinks website. This tool has been created to recover all these emails when you have no other alternative because of your company's sysadmin decisions.

Features

Link Liberation: AntiSafeLinks does one thing, and it does it well: it liberates your links from the "safelink" tyranny, restoring them to their true form.

Ease of Use: Simply pass the program a modified URL, an email file, or a Maildir directory, and watch it go to work, effortlessly recovering all original links. You can put it as a cronjob and it will go through all your mail periodically.

Preserves Privacy: No need to worry about your sensitive data being unnecessarily routed through Microsoft's servers. AntiSafeLinks ensures your privacy remains intact and only runs locally. Additionally, AntiSafeLinks does not require any external (Python) dependency to run.

Current Issues

The current version fails to properly parse emails that contain mailing list digests containing mails within the mail.
I am still working on how to parse these emails properly. Currently, AntiSafeLinks breaks these emails.

How To Use

Modified URL. Do you have a modified URL that you want to recover?

    antisafelinks --url "URL-perverted-by-microsoft"

Email file. If you have an email stored locally on your computer as a single file that may contain SafeLinks URLs:

    antisafelinks --email <PATH-TO-EMAIL-FILE>

Note that if you want to create a copy of the email, instead of modifying it in situ, you can add the --output <NEW-FILE> option.

Maildir directory. If you keep your mail account as a Maildir directory locally on your computer, you can make AntiSafeLinks run through the mailbox periodically (e.g. with a cronjob) by calling it as:

    antisafelinks --dir <PATH-TO-FOLDER-CONTAINING-THE-MAILDIR-DIRECTORIES>

As an example, I personally synchronize (two-way) my mail with OfflineIMAP and DavMail. Then I have a cronjob that runs AntiSafeLinks after retrieving new emails. Therefore, when the mail is synchronized again, all emails have been recovered to their original URL versions, and as such they show up in my mailboxes.

Installation

Install it via pip:

    pip install antisafelinks

or:

    python3 -m pip install antisafelinks

Install from Source Code

Clone this repository to a folder of your choice. From a terminal this can be done by running:

    git clone https://github.com/bmarcote/antisafelinks.git

Navigate to the antisafelinks directory and install the package with:

    python3 -m pip install .

Disclaimer

This tool is provided as-is and comes with no warranties or guarantees. Use it responsibly and at your own risk. We are not affiliated with Microsoft in any way, and this project is purely for personal joy.

Contributing

We welcome contributions from fellow link liberators!
If you believe in the cause and want to make AntiSafeLinks even better, feel free to submit a pull request or open an issue.

License

AntiSafeLinks is released under the GPLv3 License, a copyleft license that lets you do whatever you damn well please with this code, as long as derivative works stay under the same terms.

Support Me with a Coffee!

If you find this software useful and you plan to use it in your day-to-day life, I'd like to extend an invitation to show your appreciation with a "pay me a coffee" donation. Every cup of coffee represents not just a token of gratitude but a gesture that helps me continue dedicating time and effort to enhance and maintain the software for all of you. Your support goes a long way in keeping this project alive and ensures that I can keep delivering top-notch features and improvements. So, if you find value in what I've crafted, consider contributing the cost of a coffee and be a vital part of our thriving community. Your generosity is greatly appreciated! ☕❤️
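For the curious, the core of the recovery can be sketched in a few lines: a SafeLinks wrapper carries the original address percent-encoded in its url query parameter, so unwrapping is essentially query-string parsing. This is an illustrative sketch, not AntiSafeLinks' actual code (unwrap_safelink is a made-up name):

```python
from urllib.parse import urlparse, parse_qs

def unwrap_safelink(url):
    """Return the original URL hidden inside a SafeLinks wrapper,
    or the input unchanged if it is not a SafeLinks URL."""
    parsed = urlparse(url)
    if "safelinks.protection.outlook.com" not in parsed.netloc:
        return url
    # parse_qs percent-decodes the values, so the 'url' parameter
    # already comes back as the original address
    target = parse_qs(parsed.query).get("url")
    return target[0] if target else url
```

The real tool additionally has to find and rewrite such URLs inside (possibly MIME-encoded) email bodies, which is where most of the complexity lives.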
antismash-models
antiSMASH object model

An object model for mapping antiSMASH data objects from and to Redis via aioredis.

License

Under the same GNU AGPL v3 or later license as the rest of antiSMASH. See LICENSE.txt for details.
antispam
# AntiSpam

[![downloads](https://img.shields.io/pypi/dm/antispam.svg)](https://pypi.python.org/pypi/antispam/) [![version](https://img.shields.io/pypi/v/antispam.svg?label=version)](https://pypi.python.org/pypi/antispam/) [![supported](https://img.shields.io/pypi/pyversions/antispam.svg)](https://pypi.python.org/pypi/antispam/) [![license](https://img.shields.io/pypi/l/antispam.svg)](https://opensource.org/licenses/MIT)

Bayesian anti-spam classifier written in Python.

PyPI: [pypi.python.org/pypi/antispam](https://pypi.python.org/pypi/antispam)

# Installation

```
pip install antispam
```

# Usage

Use the built-in training model provided by antispam:

```python
import antispam

antispam.score("Cheap shoes for sale at DSW shoe store!")
# => 0.9657724517163143

antispam.isspam("Cheap shoes for sale at DSW shoe store!")
# => True

antispam.score("Hi mark could you please send me a copy of your machine learning homework? thanks")
# => 0.0008064840568731558

antispam.isspam("Hi mark could you please send me a copy of your machine learning homework? thanks")
# => False
```

Train your own model:

```python
import antispam

d = antispam.Detector("my_model.dat")

d.train("Super cheap octocats for sale at GitHub.", True)
d.train("Hi John, could you please come to my office by 3pm? Ding", False)

msg1 = "Cheap shoes for sale at DSW shoe store!"
d.score(msg1)
# => 0.9999947825633266
d.is_spam(msg1)
# => True

msg2 = "Hi mark could you please send me a copy of your machine learning homework? thanks"
d.score(msg2)
# => 4.021280114849398e-08
d.is_spam(msg2)
# => False
```

## License

[MIT License](https://opensource.org/licenses/MIT)
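Under the hood, a Bayesian spam filter of this kind combines per-word spam probabilities learned from labelled training messages. A toy sketch of the idea (illustrative only, not antispam's implementation; `ToyBayes` is a made-up name):

```python
import math
from collections import Counter

class ToyBayes:
    """Tiny Bayesian spam scorer: per-word Laplace-smoothed
    probabilities combined in log-odds space."""

    def __init__(self):
        self.spam = Counter()
        self.ham = Counter()
        self.nspam = self.nham = 0

    def train(self, text, is_spam):
        words = text.lower().split()
        if is_spam:
            self.spam.update(words)
            self.nspam += 1
        else:
            self.ham.update(words)
            self.nham += 1

    def score(self, text):
        # Sum log-odds over distinct words, then squash to (0, 1)
        log_odds = 0.0
        for w in set(text.lower().split()):
            p_ws = (self.spam[w] + 1) / (self.nspam + 2)  # P(word | spam)
            p_wh = (self.ham[w] + 1) / (self.nham + 2)    # P(word | ham)
            log_odds += math.log(p_ws) - math.log(p_wh)
        return 1 / (1 + math.exp(-log_odds))
```

A real filter like antispam adds better tokenization, word-probability clamping, and persistence of the trained model, but the scoring principle is the same.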
antispaminc
AntispamInc

A Python wrapper for the AntispamInc API.

Support

Channel: @AntispamInc
Group: @AntispamIncSupport
Bot: @AntispamIncBot
Website: antispaminc.tk

Usage Example

    from antispaminc.connect import Connect

    mytoken = 'your_token_from_@antispamincbot'
    sed = Connect(mytoken)
    sed2 = sed.is_banned('12974624')
    print(sed2.reason)

Installing

    pip install antispaminc

or

    pip3 install antispaminc
antisplodge
AntiSplodge

AntiSplodge is a simple feed-forward neural network-based pipeline, designed to effectively deconvolute spatial transcriptomics profiles in an easy, fast, and intuitive manner. It comes with all functions required to do a full deconvolution, from sampling the synthetic spot profiles required to train the neural network, to the methods required to train the supplied network architecture. It is neatly packed into a Python package with function calls similar to those of traditional R packages, where users are only exposed to fiddling with hyperparameters.

Installation

Using pip

You can install the package directly by running the following pip command:

    pip install antisplodge

You can find the pip page at: https://pypi.org/project/AntiSplodge/

From GitHub

You can install the package directly from GitHub by running the following command:

    python -m pip install git+https://github.com/HealthML/AntiSplodge.git

Directly from source (this repository)

Clone the repository to a folder of your choice. From a terminal this can be done by running:

    git clone git@github.com:HealthML/AntiSplodge.git

Subsequently, run the following pip command from your terminal (in the root of the cloned directory):

    pip install .

Usage

The full pipeline (see below) assumes that you have a scRNA dataset (SC) and a spatial transcriptomics dataset (ST) that are both formatted as .h5ad (AnnData data structures). Please see https://anndata.readthedocs.io/ for information about how to structure your data.
Alternatively, you can check out the tutorial at https://github.com/HealthML/AntiSplodge_Turorial for an example of how to do this.

Standard full pipeline

    import numpy as np
    import torch
    from torch.utils.data import DataLoader

    import antisplodge as AS

    # SC should be the single-cell dataset formatted as .h5ad (AnnData)
    Exp = AS.DeconvolutionExperiment(SC)
    Exp.setVerbosity(True)

    # CELLTYPE_COLUMN should be replaced with the actual column
    Exp.setCellTypeColumn('CELLTYPE_COLUMN')
    # Use 80% as train data and split the rest into a 50/50 split of validation and testing
    Exp.splitTrainTestValidation(train=0.8, rest=0.5)

    # Generate profiles, num_profiles = [#training, #validation, #testing]
    # This will construct 10,000*10(CDs)=100,000, 5,000*10=50,000, 1,000*10=10,000 profiles
    # for train, validation and test (respectively)
    Exp.generateTrainTestValidation(num_profiles=[10000,5000,1000], CD=[1,10])

    # Load the profiles into data loaders
    Exp.setupDataLoaders()

    # Initialize the neural network model and allocate it to the cuda_id specified
    # Use 'cuda_id="cpu"' if you want to allocate it to a CPU
    Exp.setupModel(cuda_id=6)
    Exp.setupOptimizerAndCriterion(learning_rate=0.001)

    # Train the model using the profiles generated
    # The patience parameter determines how long it will run without finding a new better (lower) error
    # The weights found will be saved to 'NNDeconvolver.pt' and will be autoloaded once the training is complete
    stats = AS.train(Exp, save_file="NNDeconvolver.pt", patience=100)

    # Print the mean JSD for train, validation, and test
    print(AS.getMeanJSD(Exp, "train"), AS.getMeanJSD(Exp, "validation"), AS.getMeanJSD(Exp, "test"))

    ## Afterwards do prediction
    ## Assuming we have a spatial transcriptomics dataset ST formatted in .h5ad (AnnData)

    # Create a data loader so that we can predict the profiles of each spot in our ST dataset
    dataset_spots = AS.SingleCellDataset(torch.from_numpy(np.array(ST.X.toarray())).float(),
                                         torch.from_numpy(np.array([0]*ST.n_obs)).float())
    spots_loader = DataLoader(dataset=dataset_spots,
        batch_size=50,  # batch_size doesn't matter
    )

    spot_preds = AS.predict(Exp, spots_loader)  # predict spots
    # The results for each ST profile (spot) are now in spot_preds and can be used for further analysis

Order of execution

1. Start an experiment: Exp = AS.DeconvolutionExperiment(SC) must be the first call.
2. Define datasets based on the SC dataset: Exp.splitTrainTestValidation(train=0.8, rest=0.5) must be called before Exp.generateTrainTestValidation(num_profiles=[10000,5000,1000], CD=[1,10]).
3. Set up the model and optimizers: Exp.setupModel(cuda_id=6) must be called before Exp.setupOptimizerAndCriterion(learning_rate=0.001). Each time setupModel is called, Exp.setupOptimizerAndCriterion must be called again, as optimizers and criterions are bound to the model for use during training.
4. Train the model: stats = AS.train(Exp, save_file="NNDeconvolver.pt", patience=100).
5. Predict spots using the model: spot_preds = AS.predict(Exp, spots_loader).

The order of execution must be in the order listed above.

Useful snippets

Several ways of training

1. Standard training

The standard training procedure:

    # Assuming Exp is a DeconvolutionExperiment
    AS.train(experiment=Exp, patience=25, save_file=None, auto_load_model_on_finish=True)  # default parameters

2. Several warm restarts

Do 10 warm restarts with a low patience (n=5); this will autoload the model per train call, so the best model weights found so far are loaded back onto the model and training tries again from these settings:

    best_error = None
    # Do 10 warm restarts
    for i in range(10):
        stats = AS.train(experiment=Exp, patience=5, best_loss=best_error)
        best_error = np.min(stats['validation_loss'])

3. Lowering learning rate

Start with a high learning rate and lower it by a factor of 10 for each warm restart:

    lr = 0.01
    all_stats = []
    best_error = None
    # Do 5 warm restarts with decreasing learning rate
    for i in range(5):
        print("Training with learning rate:", lr)
        Exp.setupOptimizerAndCriterion(learning_rate=lr)
        lr /= 10  # reduce learning rate by a factor of 10
        # For longer training, increase the patience threshold
        stats = AS.train(Exp, save_file="NNDeconvolver.pt", patience=25, best_loss=best_error)
        all_stats.extend(stats)
        best_error = np.min(stats['validation_loss'])  # set best error as the target error to beat
    # The results in stats are the training errors during each epoch (which might be needed for training plots)

4. Running on systems with reduced memory using smaller sets of training data

For users having trouble with the memory footprint of the profile generation, it is possible to generate smaller sets of training and validation profiles:

    Exp.splitTrainTestValidation(train=0.8, rest=0.5)  # define the dataset splits
    Exp.setupModel(cuda_id=6)  # the model can be built beforehand

    best_error = None
    # Do 100 warm restarts with smaller chunks of training data
    for i in range(100):
        Exp.generateTrainTestValidation(num_profiles=[5000,1000,1], CD=[1,10])
        Exp.setupDataLoaders()
        stats = AS.train(experiment=Exp, save_file="CurrentDeconvolver.pt", patience=10, best_loss=best_error)
        best_error = np.min(stats['validation_loss'])

    # Remember to generate test profiles after training is complete
    Exp.generateTrainTestValidation(num_profiles=[1,1,1000], CD=[1,10])
    # Continue as usual

Tutorial

Check out the tutorial located at https://github.com/HealthML/AntiSplodge_Turorial.
This will give you a full tour from preprocessing to deconvoluting by predicting cell type proportions of the spatial transcriptomics spots.

Dependencies

The major dependencies are:

- numpy>=1.17.2 (https://numpy.org/)
- pandas>=0.25.3 (https://pandas.pydata.org/)
- scikit-learn>=0.22.1 (https://scikit-learn.org/)
- torch>=1.9.0 (https://pytorch.org/)

Documentation

The documentation is available at https://antisplodge.readthedocs.io/.

References

Coming soon.

License

The source code for AntiSplodge is licensed under the MIT License. See the LICENSE file for details.

Known issues

AntiSplodge is prone to be affected by bad initializations. Oftentimes, this can be resolved by simply restarting the experiment (or re-initializing the model). This seems to be more frequent when solving problems with many classes (a large number of cell types). If verbose is set to true, you may see warnings during training (!!NaNs vectors produced!!); these are not a problem if they only persist for a single iteration and are gone in the next.
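The mean JSD reported by AS.getMeanJSD is based on the Jensen-Shannon divergence between predicted and ground-truth cell-type proportion vectors: 0 means a perfect match, 1 (in base 2) maximal disagreement. A minimal standalone version of that metric (assumed behaviour, not the package's code):

```python
import math

def jsd(p, q):
    """Jensen-Shannon divergence (base 2) between two proportion vectors
    of equal length, each summing to 1."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]  # midpoint distribution

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike raw KL divergence, JSD is symmetric and bounded, which makes averages over many spots easy to compare across experiments.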
antispoofing.clientspec
This package implements scripts to train client-specific classifiers for biometric anti-spoofing. For comparison purposes, this package also explains how to generate the results for the corresponding client-independent approaches based on the same classification technique.

The methods are tried out with the Replay-Attack face spoofing database, and each script of the package includes an interface to connect to this database. So, reproducing results on Replay-Attack is very straightforward. The package can also be used with the CASIA-FASD database, but it uses a modified protocol for the database. The modified protocol is available at our interface to CASIA-FASD.

If you use this package and/or its results, please cite the following publications:

1. The original paper with the client-specific counter-measure explained in detail::

    @ARTICLE{Chingovska_IEEETIFSSI_2015,
      author = {Chingovska, Ivana and Anjos, Andr{\'{e}}},
      title = {On the use of client identity information for face anti-spoofing},
      journal = {IEEE Transactions on Information Forensics and Security, Special Issue on Biometric Anti-spoofing},
      year = {2015},
    }

2. Bob as the core framework used to run the experiments::

    @inproceedings{Anjos_ACMMM_2012,
      author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel},
      title = {Bob: a free signal processing and machine learning toolbox for researchers},
      year = {2012},
      month = oct,
      booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan},
      publisher = {ACM Press},
    }

If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.

Raw data

The Replay-Attack data used in the paper is publicly available and should be downloaded and installed prior to trying to use the programs described in this package.
Visit the REPLAY-ATTACK database portal for more information.

Installation

Note: If you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may not be stable, or may become unstable in a matter of moments.

Go to http://pypi.python.org/pypi/antispoofing.clientspec to download the latest stable version of this package. Then, extract the .zip file to a folder of your choice.

The antispoofing.clientspec package is a satellite package of the free signal processing and machine learning library Bob. This dependency has to be downloaded manually. This version of the package depends on Bob version 2 or greater. To install packages of Bob, please read the Installation Instructions. For Bob to be able to work properly, some dependent Bob packages are required to be installed. Please make sure that you have read the Dependencies for your operating system.

The most simple solution is to download and extract the antispoofing.clientspec package, then go to the console and write::

    $ cd antispoofing.clientspec
    $ python bootstrap-buildout.py
    $ bin/buildout

This will download all required dependent Bob and other packages and install them locally.

User Guide

This section explains how to use the package in order to reproduce the results from the paper. It focuses on the following things:

1. Generating the features used in the paper.
2. Generating scores for client-independent SVM-based discriminative baselines. They are generated using other satellite packages.
3. Generating scores for the client-specific SVM-based discriminative approach.
4. Generating scores for client-independent GMM-based generative baselines.
5. Generating scores for the client-specific GMM-based generative approach.
6. Computing the error rates.

Each step consists of several sub-steps.

Generating the features

The paper uses three types of features: LBP, LBP-TOP and MOTION. They can be generated with three different satellite packages.

LBP: We use the simplest regular uniform LBP8,1u2.
The features are extracted from the face bounding box, normalized to 64x64. To compute these features, run the following command from http://pypi.python.org/pypi/antispoofing.lbp::

    $ ./bin/calcframelbp.py -v replaydir -d lbp/dir-features --ff 50 -c replay

You need to run this command once again with the -e option to extract the features for the enrollment videos of the database.

LBP-TOP: We use the simplest regular uniform LBP8,8,8,1,1,1u2. The features are extracted from the face bounding box, normalized to 64x64. To compute these features, run the following command from http://pypi.python.org/pypi/antispoofing.lbp::

    $ ./bin/calcframelbp.py -v replaydir -d lbp-top/dir-features --ff 50 -cXY -cXT -cYT -sC replay

Again, you need to run this command once again with the -e option to extract the features for the enrollment videos of the database.

MOTION: We used the default parameters to extract these features. Please refer to the satellite package http://pypi.python.org/pypi/antispoofing.motion for instructions on how to generate the features. You need to run just the commands ./bin/motion_framediff.py and ./bin/motion_diffcluster.py. Don't forget to run the commands with the -e option as well, in order to extract the features for the enrollment videos of the database.

Please take a look at the corresponding satellite packages about their requirements, installation etc.

Generating scores for baseline client-independent SVM

To generate the baseline results, we used SVM classifiers, which are provided within the satellite packages providing the features. The only exception is the MOTION features, which, in the original paper, were classified using an MLP, while in our case they are classified using the SVM classifier used to classify LBP. In the following, we give the exact commands and parameters to generate the baseline results for the grandtest protocol of Replay-Attack.

LBP: we used the http://pypi.python.org/pypi/antispoofing.lbp satellite package.
The SVM is trained with NO PCA reduction (hence, on the whole data), and after min-max normalization of the data. Note that, in the case of these features, we use min-max normalization as it gives better results than standard normalization::

    $ ./bin/svmtrain_lbp.py -v lbp/dir-features -d lbp/dir-machines --min-max-normalize replay --protocol grandtest

If you want to generate the machine for a different Replay-Attack protocol, just set the --protocol option. More than one protocol can be specified.

To do the classification, call::

    $ ./bin/svmeval_lbp.py -v lbp/dir-features -i lbp/dir-machines/svm_machine.hdf5 -d lbp/dir-scores replay

After this, you will have all the baseline scores for LBP in the directory ``lbp/dir-scores``. The scores will be written as an array in .hdf files with the name of the video, and one score per frame.

LBP-TOP: we used the http://pypi.python.org/pypi/antispoofing.lbptop satellite package. The SVM is trained with NO PCA reduction (hence, on the whole data), and after min-max normalization of the data. Note that, in the case of these features, we use min-max normalization as it gives better results than standard normalization::

    $ ./bin/lbptop_svmtrain.py -i lbp-top/dir-features -d lbp-top/dir-machines -n replay --protocol grandtest

If you want to generate the machine for a different Replay-Attack protocol, just set the --protocol option. More than one protocol can be specified.

To do the classification, call::

    $ ./bin/lbptop_make_scores.py -f lbp-top/dir-features -m lbp-top/dir-machine/svm_machine_XY-XT-YT-plane.txt -n lbp-top/dir-machine/svm_normalization_XY-XT-YT-plane.txt -o lbp-top/dir-scores -l XY-XT-YT -v -a SVM replay

After this, you will have all the baseline scores for LBP-TOP in the directory ``lbp-top/dir-scores``. The scores will be written as an array in .hdf files with the name of the video, and one score per frame.

Note that any of these steps can take a very long time, so if you are at Idiap, consider using the SGE GRID.
Refer to the satellite package about how to use it.

MOTION: we used the http://pypi.python.org/pypi/antispoofing.lbp satellite package, as the original satellite package for the MOTION features does not contain SVM classification utilities. The SVM is trained with NO PCA reduction (hence, on the whole data), and after standard normalization of the data. Note that, in the case of these features, we use standard normalization as it gives better results than min-max normalization::

    $ ./bin/svmtrain_lbp.py -v motion/dir-features -d motion/dir-machines --min-max-normalize replay --protocol grandtest

If you want to generate the machine for a different Replay-Attack protocol, just set the --protocol option. More than one protocol can be specified.

To do the classification, call::

    $ ./bin/svmeval_lbp.py -v motion/dir-features -i motion/dir-machines/svm_machine.hdf5 -d motion/dir-scores replay

After this, you will have all the baseline scores for MOTION in the directory ``motion/dir-scores``. The scores will be written as an array in .hdf files with the name of the video, and one score per frame.

Generating scores for client-specific SVM

Generating the client-specific results consists of 2 steps: training an SVM for each client and calculating the scores for each client. Below we show how to perform these steps for the LBP features and the grandtest protocol of Replay-Attack. The steps for the other types of features are analogous.

Generate client-specific SVMs. In our results, we were training SVMs with NO PCA on the original features and after standard normalization. To train a client-specific SVM for the clients in the test set and for the LBP features, call::

    $ ./bin/svm_clientspec_train.py --featname lbp --outdir lbp/dir-machines --group test lbp/dir-features -n replay --protocol grandtest

This step needs to be run three times: for the training, development and test subsets. The above example shows how to run it for the test set.
The SVM machines, as well as the normalization parameters (and PCA parameters, if needed), will be stored in the ``test`` subdirectory of the output directory, in separate files for each client. The command works analogously for the devel and train sets. The parameter --featname can be any custom name that you choose to give to your features, but pay attention to use it consistently through the calls of all the other scripts. Type --help after the command to see all its available options.

Compute the client-specific scores::

    $ ./bin/svm_clientspec_eval.py --featname lbp --svmdir lbp/dir-machines --group test --outdir lbp/dir-scores lbp/dir-features replay

This step needs to be run three times: for the training, development and test subsets. The above example shows how to run it for the test set. The scores will be stored in the ``test`` subdirectory of the output directory, in separate files for each client. The command works analogously for the devel and train sets. Type --help after the command to see all its available options.

After this, you will have all the client-specific scores for LBP in the directory ``lbp/dir-scores``. The scores will be written as an array in .hdf files with the name of the video, and one score per frame.

Generating scores for baseline client-independent GMM

Generating the baseline results can be done in 5 steps. The values of the hyper-parameters (number of Gaussians) which are given in the commands below are optimized for the grandtest set of Replay-Attack. Please find a table at the end of the section for the parameter values optimized for other Replay-Attack protocols.
Note that the models are created for features which are normalized using standard normalization and PCA reduced.

Create model for Real Accesses (LBP, LBP-TOP and MOTION features)::

    $ ./bin/naive_modelling.py --featname lbp --gaussians 5 --modeltype real -n -r -c -j -o lbp/dir-models/real lbp/dir-features replay --protocol grandtest
    $ ./bin/naive_modelling.py --featname lbp-top --gaussians 5 --modeltype real -n -r -c -j -o lbp-top/dir-models/real lbptop/dir-features replay --protocol grandtest
    $ ./bin/naive_modelling.py --featname motion --gaussians 10 --modeltype real -n -r -c -j -e 0.995 -o motion/dir-models/real motion/dir-features replay --protocol grandtest

Note the parameter -e 0.995 denoting the kept energy during PCA reduction for the MOTION features. We use the default for LBP and LBP-TOP. The parameter --featname can be any custom name that you choose to give to your features, but pay attention to use it consistently through the calls of all the scripts. Don't forget to change the protocol (--protocol) to the corresponding protocol of Replay-Attack that you want to use. Specifying several protocols is possible. Type --help after the command to see all its available options.

Create model for Attacks (LBP, LBP-TOP and MOTION features)::

    $ ./bin/naive_modelling.py --featname lbp --gaussians 10 --modeltype attack -n -r -c -j -o lbp/dir-models/attack lbp/dir-features replay --protocol grandtest
    $ ./bin/naive_modelling.py --featname lbp-top --gaussians 50 --modeltype attack -n -r -c -j -o lbp-top/dir-models/attack lbptop/dir-features replay --protocol grandtest
    $ ./bin/naive_modelling.py --featname motion --gaussians 300 --modeltype attack -n -r -c -j -e 0.995 -o motion/dir-models/attack motion/dir-features replay --protocol grandtest

Note the parameter -e 0.995 denoting the kept energy during PCA reduction for the MOTION features. We use the default for LBP and LBP-TOP. Don't forget to change the protocol to the corresponding protocol of Replay-Attack that you want to use.
Specifying several protocols is possible. Type --help after the command to see all its available options.

Calculate likelihoods to the real access model::

    $ ./bin/naive_likelihood.py --featname lbp --gaussians 5 --modeldir lbp/dir-models/real -o lbp/dir-likelihoods/real lbp/dir-features replay

Generating the likelihoods for the other features is analogous. Just change the --gaussians parameter to the corresponding value. Type --help after the command to see all its available options.

Calculate likelihoods to the attack model::

    $ ./bin/naive_likelihood.py --featname lbp --gaussians 10 --modeldir lbp/dir-models/attack -o lbp/dir-likelihoods/attack lbp/dir-features replay

Generating the likelihoods for the other features is analogous. Just change the --gaussians parameter to the corresponding value. Type --help after the command to see all its available options.

Calculate likelihood ratios::

    $ ./bin/naive_likelihood_ratio.py --dirreal lbp/dir-likelihoods/real/GMM-5 --dirattack lbp/dir-likelihoods/attack/GMM-10 -o lbp/likelihood_ratio/GMM-5/GMM-10/llr_real.vs.attack replay

Generating the likelihood ratios for other features is analogous. You just need to change the number of Gaussians in the input and output folders to the corresponding values. Type --help after the command to see all its available options.

After this, you will have scores for all the videos of Replay-Attack in the directory ``lbp/likelihood_ratio/GMM-5/GMM-10/llr_real.vs.attack`` (or analogous for the other features).
The scores will be written as an array in .hdf files with the name of the video, and one score per frame.

The optimized values (obtained via grid search) for the number of Gaussians for each of the protocols of Replay-Attack are given in the following table:

featuresLBPLBP-TOPMOTIONprotocolrealattackrealattackrealattackgrandtest51055010300print250235510355digital15351015100115video5205301060print+digital51052545165print+video515107510240digital+video510530100295

Generating scores for client-specific GMM

Generating the client-specific results can be done in 7 steps. The values of the hyper-parameters (number of Gaussians and relevance factor) which are given in the commands below are optimized for the grandtest set of Replay-Attack. Please find a table at the end of the section for the parameter values optimized for other Replay-Attack protocols. Note that the models are created for features which are normalized using standard normalization and PCA reduced.

Create model for Real Accesses. This step is exactly the same as step 1 of the previous section. Just replace the values of the number of Gaussians with those optimized for the client-specific models, which are given in the table at the end of the section.

Create model for Attacks. This step is exactly the same as step 2 of the previous section. Just replace the values of the number of Gaussians with those optimized for the client-specific models, which are given in the table at the end of the section.

Enroll clients from the Real Access model using MAP adaptation::

    $ ./bin/map_adapt_per_client.py --featname lbp --modelfile lbp/dir-models/real/GMM-275.hdf5 -o lbp/dir-map-models/TEST/GMM-275/reals.hdf5 --group test --rel 1 --clss enroll lbp/dir-features replay

This step needs to be run three times: for the training, development and test subsets. The above example shows how to run it for the test set. The class of samples used for the MAP adaptation is specified with the --clss parameter and needs to be the enrollment samples in this case.
The output is an .hdf5 file where the MAP-adapted models are stored for each client of the particular subset. Generating the MAP models for the other features is analogous; just change the number of Gaussians in the model filename and the output directory. Type --help after the command to see all its available options.

4. Create cohort models from the attack model using MAP adaptation:

$ ./bin/map_adapt_per_client.py --featname lbp --modelfile lbp/dir-models/attack/GMM-25.hdf5 -o lbp/dir-map-models/TRAIN/GMM-25/attacks.hdf5 --group train --rel 1 --clss attack lbp/dir-features replay --protocol grandtest

This step needs to be run only once, because the cohorts are created from the training set. The class of samples used for the MAP adaptation is specified with the --clss parameter and needs to be the attack samples in this case. Don't forget to change the protocol (--protocol) to the corresponding protocol of Replay-Attack that you want to use. The output is an .hdf5 file where all the cohort models are stored. Generating the cohort models for the other features is analogous; just change the number of Gaussians in the model filename and the output directory. Type --help after the command to see all its available options.

5. Calculate likelihoods to the real access client-specific models:

$ ./bin/naive_likelihood_clientspecmodel.py --featname lbp --mapmodelfile lbp/dir-map-models/TEST/GMM-275/reals.hdf5 -o lbp/dir-likelihood-clientspec/GMM-275 --group test lbp/dir-features replay

This step needs to be run three times: for the training, development and test subset. The above example shows how to run it for the test set. Generating the likelihoods for the other features is analogous; just change the number of Gaussians in the MAP model filename and the output directory.
Type --help after the command to see all its available options.

6. Calculate likelihoods to the attack cohort models:

$ ./bin/naive_likelihood_cohortmodels.py --featname lbp --cohortfile lbp/dir-map-models/TRAIN/GMM-25/attacks.hdf5 -o lbp/dir-likelihood-cohort/likelihood-cohort-all/GMM-25 --group test lbp/dir-features replay

This step needs to be run three times: for the training, development and test subset. The above example shows how to run it for the test set. Generating the likelihoods for the other features is analogous; just change the number of Gaussians in the MAP model filename and the output directory. Note that you can specify the number N of cohorts to use when computing the likelihood with the -s option; in that case, only the N highest cohorts are taken into account. Type --help after the command to see all its available options.

7. Calculate the likelihood ratio:

$ ./bin/naive_likelihood_ratio.py --dirreal lbp/dir-likelihood-clientspec/GMM-275 --dirattack lbp/dir-likelihood-cohort/likelihood-cohort-all/GMM-25 -o lbp/likelihood_ratio/GMM-275/GMM-25/llr_clientspec.vs.cohortall replay

Generating the likelihood ratios for the other features is analogous; just change the number of Gaussians in the input and output folders to the corresponding values. Type --help after the command to see all its available options.

After this, you will have scores for all the videos of Replay-Attack in the directory lbp/likelihood_ratio/GMM-275/GMM-25/llr_clientspec.vs.cohortall (or analogous for the other features).
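The -s option described above keeps only the N highest-scoring cohorts per frame. A sketch of that selection step (the helper name, and the choice of averaging the selected log-likelihoods, are our assumptions, not the package's exact implementation):

```python
import numpy as np

def top_n_cohort_likelihood(cohort_lls, n=None):
    """Reduce the per-cohort log-likelihoods of one frame to a single value.

    cohort_lls: 1D array, one log-likelihood per cohort model.
    n: if given, only the n highest cohort likelihoods are averaged;
       otherwise all cohorts are used.
    """
    lls = np.sort(np.asarray(cohort_lls, dtype=float))[::-1]  # descending
    if n is not None:
        lls = lls[:n]
    return float(np.mean(lls))

all_cohorts = top_n_cohort_likelihood([-5.0, -1.0, -3.0])      # mean of all three
best_two = top_n_cohort_likelihood([-5.0, -1.0, -3.0], n=2)    # mean of the two highest
```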
The scores will be written as an array in .hdf5 files with the name of the video, and one score per frame.

The optimized values (via grid search) for the number of Gaussians and the MAP relevance factor for each of the protocols of Replay-Attack are given in the following table:

                   LBP                LBP-TOP            MOTION
  protocol         real  attack  rel  real  attack  rel  real  attack  rel
  grandtest        275   25      1    295   100     5    10    45      5
  print            160   20      1    300   210     1    70    10      1
  digital          250   5       4    300   35      3    100   165     1
  video            275   15      5    295   55      5    15    230     5
  print+digital    275   20      1    295   60      5    50    100     1
  print+video      280   15      3    240   80      5    15    90      5
  digital+video    250   10      3    240   85      5    45    65      2

Computing the error rates

After the scores have been generated, you can use the script ./bin/score_evaluation_crossdb.py to compute the error rates. For example, to compute the error rates for the scores obtained using the client-specific SVM approach, call:

$ ./bin/score_evaluation_crossdb.py --devel-scores-dir lbp/dir-scores --test_scores-dir lbp/dir-scores --dev-set replay --test-set replay --attack-devel grandtest --attack-test grandtest --verbose

The same command, used for computing the error rates of the scores obtained using the client-specific GMM approach, will look like:

$ ./bin/score_evaluation_crossdb.py --devel-scores-dir lbp/likelihood_ratio/GMM-275/GMM-25/llr_clientspec.vs.cohortall --test_scores-dir lbp/likelihood_ratio/GMM-275/GMM-25/llr_clientspec.vs.cohortall --dev-set replay --test-set replay --attack-devel grandtest --attack-test grandtest --verbose

Type --help after the command to see all its available options. Note that with the options --sd and --st you can specify the directories with the scores of the development and test set, respectively. Note that this script can use one database for computing the threshold and another one for evaluation (specify the names of the databases with the -d and -t parameters). For the cross-protocol evaluation described in the paper, you can specify separate protocols for the decision threshold and for evaluation (use the --ad and --at parameters).
In such a case, most likely the values of the parameters --sd and --st will be different too.

Plotting the box plots

Here is an example of how to plot the box plots of the scores for each user, for the scores obtained using the client-specific GMM approach:

$ ./bin/scores_box_plot.py --devel-scores-dir lbp/likelihood_ratio/GMM-275/GMM-25/llr_clientspec.vs.cohortall --test_scores-dir lbp/likelihood_ratio/GMM-275/GMM-25/llr_clientspec.vs.cohortall --dev-set replay --test-set replay --attack-devel grandtest --attack-test grandtest --normalization --reject-outlier --verbose

Type --help after the command to see all its available options. It is recommended that the scores are always normalized (--normalization option), with outliers rejected during the normalization (--reject-outlier option).

Problems

In case of problems, please contact [email protected] (or any of the authors of the paper).
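The error-rate evaluation described above follows the usual anti-spoofing protocol: a decision threshold is fixed on the development set and then applied unchanged to the test set. A self-contained sketch of that logic (the EER criterion on the development set is our assumption about the script's default; function names are ours):

```python
import numpy as np

def eer_threshold(real_scores, attack_scores):
    """Scan candidate thresholds and return the one where the false
    acceptance rate (attacks accepted) and false rejection rate
    (real accesses rejected) are closest. Higher scores mean 'real'."""
    candidates = np.sort(np.concatenate([real_scores, attack_scores]))
    best_thr, best_gap = candidates[0], float("inf")
    for thr in candidates:
        far = np.mean(attack_scores >= thr)
        frr = np.mean(real_scores < thr)
        if abs(far - frr) < best_gap:
            best_thr, best_gap = thr, abs(far - frr)
    return best_thr

def hter(real_scores, attack_scores, thr):
    """Half total error rate at a fixed threshold."""
    far = np.mean(attack_scores >= thr)
    frr = np.mean(real_scores < thr)
    return (far + frr) / 2.0

# threshold from (toy) development scores, applied to (toy) test scores
dev_real = np.array([2.0, 3.0, 4.0])
dev_attack = np.array([-3.0, -2.0, -1.0])
thr = eer_threshold(dev_real, dev_attack)
test_hter = hter(np.array([2.5, -0.5]), np.array([-2.0, -1.5]), thr)
```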
antispoofing.competition_icb2013
This package implements:

- cropping face bounding boxes from the Replay-Attack database
- extracting GLCM features for spoofing detection
- generating classification scores for the features using SVM and LDA
- extracting other types of features using other satellite packages it depends on
- calculating Q-statistics and fusing classification scores at score level using another satellite package it depends on.

This satellite package depends on the following satellite packages: antispoofing.lbp, antispoofing.lbptop, antispoofing.motion, antispoofing.fusion, antispoofing.utils. This dependence provides an interface to the scripts in these satellite packages through antispoofing.competition_icb2013, which means easy spoofing score generation using different types of features, as well as analysis of the common errors and fusion of the methods at score level.

The fused system consisting of several of these counter-measures was submitted to The 2nd competition on counter measures to 2D facial spoofing attacks, held in conjunction with ICB 2013.

If you use this package and/or its results, please cite the following publications:

Bob as the core framework used to run the experiments:

@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S.
Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, }The 2nd competition on counter measures to 2D facial spoofing attacks:@INPROCEEDINGS{Chingovska_ICB2013_2013, author = {Chingovska, Ivana and others}, keywords = {Anti-spoofing, Competition, Counter-Measures, face spoofing, presentation attack}, title = {The 2nd competition on counter measures to 2D facial spoofing attacks}, booktitle = {International Conference of Biometrics 2013}, year = {2013} }If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.Raw dataThe data used in the paper is publicly available and should be downloaded and installedpriorto try using the programs described in this package. Visitthe REPLAY-ATTACK database portalfor more information.InstallationNoteIf you are reading this page through our GitHub portal and not through PyPI, notethe development tip of the package may not be stableor become unstable in a matter of moments.Go tohttp://pypi.python.org/pypi/antispoofing.competition_icb2013to download the latest stable version of this package.There are 2 options you can follow to get this package installed and operational on your computer: you can use automatic installers likepip(oreasy_install) or manually download, unpack and usezc.buildoutto create a virtual work environment just for this package.Using an automatic installerUsingpipis the easiest (shell commands are marked with a$signal):$ pip install antispoofing.competition_icb2013You can also do the same witheasy_install:$ easy_install antispoofing.competition_icb2013This will download and install this package plus any other required dependencies. 
It will also verify if the version of Bob you have installed is compatible.

This scheme works well with virtual environments created by virtualenv, or if you have root access to your machine. Otherwise, we recommend you use the next option.

Using zc.buildout

Download the latest version of this package from PyPI and unpack it in your working area. The installation of the toolkit itself uses buildout. You don't need to understand its inner workings to use this package. Here is a recipe to get you started:

$ python bootstrap.py
$ ./bin/buildout

These 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.

Note: the python shell used in the first line of the previous command set determines the python interpreter that will be used for all scripts developed inside this package. Because this package makes use of Bob, you must make sure that the bootstrap.py script is called with the same interpreter used to build Bob, or unexpected problems might occur.

If Bob is installed by the administrator of your system, it is safe to assume it uses the default python interpreter. In this case, the above 2 commands should work as expected. If you have Bob installed somewhere else in a private directory, edit the file buildout.cfg before running ./bin/buildout. Find the section named external and edit the line egg-directories to point to the lib directory of the Bob installation you want to use.
For example:

[external]
recipe = xbob.buildout:external
egg-directories=/Users/crazyfox/work/bob/build/lib

User Guide

This section explains how to use the package in order to: a) crop face bounding boxes from Replay-Attack; b) calculate the GLCM features on the Replay-Attack database; c) generate LBP, LBP-TOP and motion correlation features on Replay-Attack; d) generate classification scores using Linear Discriminant Analysis (LDA), Support Vector Machines (SVM) and Multi-Layer Perceptrons (MLP); e) calculate common errors and Q-statistics for each of the features; f) perform fusion at score level for the different classification scores.

For generation of LBP, LBP-TOP and motion-correlation features, please refer to the corresponding satellite packages (antispoofing.lbp, antispoofing.lbptop and antispoofing.motion, respectively). For fusion at score level, please refer to the corresponding satellite package (antispoofing.fusion).

Crop face bounding boxes

The features used in the paper are generated over the normalized face bounding boxes of the frames in the videos. The script to be used for face cropping and normalization is ./bin/crop_faces.py. It outputs one .hdf5 file per video, containing a 3D numpy.array of pixel values of the normalized cropped frames. The first dimension of the array corresponds to the frames of the video file:

$ ./bin/crop_faces.py replay

To execute this script for the anonymized test set, please call:

$ ./bin/crop_faces.py replay --ICB-2013

To see all the options for the script crop_faces.py, just type --help at the command line. If you want to see all the options for a specific database (e.g. protocols, lighting conditions etc.), type the following command (for Replay-Attack):

$ ./bin/crop_faces.py replay --help

This script uses the automatic face detections provided alongside the Replay-Attack database. For frames with no detections, we copy the face detection from the previous frame (if there is one).
In our work, we consider all face bounding boxes smaller than 50x50 pixels as invalid detections (option --ff). Frames with no detected face or with an invalid detected face (<50x50 pixels) are set to NaN in our .hdf5 files. The face bounding boxes are normalized to 64x64 before storing (option -n).

Calculate the GLCM features

The first stage of the process is calculating the feature vectors on a per-frame basis. The script operates on .hdf5 files as obtained using ./bin/crop_faces.py. The first dimension of the array corresponds to the frames of the video files.

The program to be used for calculating the GLCM features is ./bin/calcglcm.py:

$ ./bin/calcglcm.py replay

To execute this script for the anonymized test set, call:

$ ./bin/calcglcm.py replay --ICB-2013

To see all the options for the script calcglcm.py, just type --help at the command line. If you want to see all the options for a specific database (e.g. protocols, lighting conditions etc.), type the following command (for Replay-Attack):

$ ./bin/calcglcm.py replay --help

Classification with linear discriminant analysis (LDA)

The classification with LDA is performed using the script ./bin/ldatrain.py. To execute the script with prior normalization and PCA dimensionality reduction, as is done in the paper (for Replay-Attack), call:

$ ./bin/ldatrain.py -r -n replay

If you want to normalize the output scores as well, just set the --ns option. To execute this script for the anonymized test set, call:

$ ./bin/ldatrain.py -r -n replay --ICB-2013

This script can be used to calculate the LDA scores not only for GLCM, but also for any other features computed with any other of the satellite packages. To see all the options for this script, just type --help at the command line.

Classification with support vector machine (SVM)

The classification with SVM is performed using the script ./bin/svmtrain.py.
To execute the script with prior normalization of the data to the range [-1,1] and PCA reduction as in the paper (for Replay-Attack), call:

$ ./bin/svmtrain.py -n -r replay

If you want to normalize the output scores as well, just set the --ns option. To call this script for the anonymized test set, call:

$ ./bin/svmtrain.py -n -r replay --ICB-2013

To reproduce our results, set the parameters cost=-1 (option -c -1) and gamma=3 (option -g 3) in the training of the SVM.

This script can be used to calculate the SVM scores not only for GLCM, but also for any other features computed with any other of the satellite packages. To see all the options for this script, just type --help at the command line.

Bounding box countermeasure

A fast countermeasure that takes into account the area of the face bounding box as a feature:

$ ./bin/icb2013_facebb_countermeasure.py --input-dir [Database dir] -v [database]

Q-Statistic

Fusing two or more countermeasures is one way to improve the classification performance. Kuncheva and Whitaker [1] showed that combining statistically independent classifiers maximises the performance of a fusion and, in order to measure this dependency, proposed the Q-statistic. For two countermeasures A and B, the Q-statistic is defined as:

\begin{equation*} Q_{A,B} = \frac{N_{11}N_{00} - N_{01}N_{10}}{N_{11}N_{00} + N_{01}N_{10}} \end{equation*}

where N_{ab} is the number of samples on which countermeasure A classifies correctly (a=1) or incorrectly (a=0) and countermeasure B classifies correctly (b=1) or incorrectly (b=0).

To run the Q-statistic script, call:

$ ./bin/icb2013_qstatistic.py --input-dir [Set of scores of each countermeasure] -v [database]

Generating other types of features

This package depends on other satellite packages for calculating other types of features: LBP, LBP-TOP and motion correlation. To read more details and to generate these types of features, please refer to the corresponding satellite packages (antispoofing.lbp, antispoofing.lbptop and antispoofing.motion, respectively).
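Following the Q-statistic formula above, the measure can be computed directly from the per-sample correctness of two countermeasures. A minimal sketch (the function name is ours; the package computes this with ./bin/icb2013_qstatistic.py):

```python
import numpy as np

def q_statistic(correct_a, correct_b):
    """Q-statistic of Kuncheva and Whitaker for two classifiers.

    correct_a, correct_b: boolean arrays, True where the corresponding
    countermeasure classified a sample correctly.
    """
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    n11 = np.sum(a & b)    # both correct
    n00 = np.sum(~a & ~b)  # both wrong
    n10 = np.sum(a & ~b)   # only A correct
    n01 = np.sum(~a & b)   # only B correct
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

# two identical classifiers are maximally dependent (Q = 1);
# classifiers whose errors are unrelated give Q near 0
q_same = q_statistic([True, True, False, False], [True, True, False, False])
q_indep = q_statistic([True, False, True, False], [True, True, False, False])
```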
Note that it is possible to call the scripts belonging to these other satellite packages from within the antispoofing.competition_icb2013 satellite package. To generate classification scores for the other types of features, you can use the methods provided by this or the other corresponding satellite packages.

Fusion of counter-measures

The classification scores obtained using different features and classification techniques can be fused at score level. To read about the available fusion techniques, as well as to perform the fusion, please refer to the corresponding satellite package antispoofing.fusion. Note that you can call the scripts belonging to the antispoofing.fusion satellite package from within the antispoofing.competition_icb2013 satellite package.

Generating error rates

To calculate the threshold on the classification scores of a single or a fused counter-measure, use ./bin/eval_threshold.py. Note that as an input argument you need to give the file with the development scores on which to evaluate the threshold. To calculate the error rates, use ./bin/apply_threshold.py. To see all the options for these two scripts, just type --help at the command line.

References

[1] L. I. Kuncheva and C. J. Whitaker, "Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy," Mach. Learn., vol. 51, pp. 181-207, May 2003.
antispoofing.crossdatabase
This package contains scripts that permit cross-database testing of face anti-spoofing countermeasures in order to evaluate their generalization power.

If you use this package and/or its results, please cite the following publications:

The original paper, with the fusion of countermeasures explained in detail:

@inproceedings{FreitasPereira_ICB_2013, author = {de Freitas Pereira, Tiago and Anjos, Andr{\'{e}} and De Martino, Jos{\'{e}} Mario and Marcel, S{\'{e}}bastien}, month = Jun, title = {Can face anti-spoofing countermeasures work in a real world scenario?}, journal = {International Conference on Biometrics 2013}, year = {2013}, }

Bob as the core framework used to run the experiments:

@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, }

Installation

Note: if you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may not be stable, or may become unstable in a matter of moments.

Go to http://pypi.python.org/pypi/antispoofing.crossdatabase to download the latest stable version of this package.

There are 2 options you can follow to get this package installed and operational on your computer: you can use automatic installers like pip (or easy_install), or manually download, unpack and use zc.buildout to create a virtual work environment just for this package.

Using an automatic installer

Using pip is the easiest (shell commands are marked with a $ sign):

$ pip install antispoofing.crossdatabase

You can also do the same with easy_install:

$ easy_install antispoofing.crossdatabase

This will download and install this package plus any other required dependencies.
It will also verify if the version of Bob you have installed is compatible.

This scheme works well with virtual environments created by virtualenv, or if you have root access to your machine. Otherwise, we recommend you use the next option.

Using zc.buildout

Download the latest version of this package from PyPI and unpack it in your working area. The installation of the toolkit itself uses buildout. You don't need to understand its inner workings to use this package. Here is a recipe to get you started:

$ python bootstrap.py
$ ./bin/buildout

These 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.

Note: the python shell used in the first line of the previous command set determines the python interpreter that will be used for all scripts developed inside this package. Because this package makes use of Bob, you must make sure that the bootstrap.py script is called with the same interpreter used to build Bob, or unexpected problems might occur.

If Bob is installed by the administrator of your system, it is safe to assume it uses the default python interpreter. In this case, the above 2 commands should work as expected. If you have Bob installed somewhere else in a private directory, edit the file buildout.cfg before running ./bin/buildout. Find the section named buildout and edit or add the line prefixes to point to the directory where Bob is installed or built. For example:

[buildout]
...
prefixes=/Users/crazyfox/work/bob/build/lib

User Guide

First of all, it is necessary to be familiarized with the satellite packages antispoofing.motion and antispoofing.lbptop.
The antispoofing.lbptop satellite package generates the scores of the LBP-TOP and LBP countermeasures.

Cross-database test

The examples below show how to reproduce the performance using the inter-database protocol.

For each countermeasure trained and tuned with the Replay-Attack database, to get the performance using the test set of the Casia FASD, just type:

$ ./bin/crossdb_result_analysis.py --scores-dir <scores_replay_countermeasure_directory> -d replay -t casia_fasd

For each countermeasure trained and tuned with the Casia FASD, to get the performance using the test set of the Replay-Attack database, just type:

$ ./bin/crossdb_result_analysis.py --scores-dir <scores_casia_countermeasure_directory> -d casia_fasd -t replay

Training all data

To get the performance using a countermeasure trained and tuned with both databases (the Replay-Attack database and the Casia FASD), just type:

To report the results using the Replay-Attack database:

$ ./bin/crossdb_result_analysis.py --scores-dir <scores_all_countermeasures_directory> -d all -t replay

To report the results using the Casia FASD:

$ ./bin/crossdb_result_analysis.py --scores-dir <scores_all_countermeasures_directory> -d all -t casia_fasd

Framework

For each countermeasure, to get the performance using the score-level-fusion-based framework, just type:

To report the results using the Replay-Attack database:

$ ./bin/crossdb_fusion_framework.py --scores-dir <scores_trained_with_replay> <scores_trained_with_casia> -d all -t replay --normalizer MinMaxNorm --fusion-algorithm SUM

To report the results using the Casia FASD:

$ ./bin/crossdb_fusion_framework.py --scores-dir <scores_trained_with_casia> <scores_trained_with_replay> -d all -t casia_fasd --normalizer MinMaxNorm --fusion-algorithm SUM

Problems

In case of problems, please contact any of the authors of the paper.
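The fusion framework invoked above first brings the scores of each countermeasure to a common range (MinMaxNorm) and then sums them (SUM). A sketch of those two steps (function names are ours; in the package the normalization bounds come from the training set, which we model here by passing explicit per-countermeasure bounds):

```python
import numpy as np

def min_max_norm(scores, lo, hi):
    """Map scores linearly so that [lo, hi] becomes [0, 1]."""
    return (np.asarray(scores, dtype=float) - lo) / (hi - lo)

def sum_fusion(score_lists, bounds):
    """Normalize each countermeasure's scores with its own (lo, hi)
    bounds, then fuse the countermeasures by summation."""
    normed = [min_max_norm(s, lo, hi) for s, (lo, hi) in zip(score_lists, bounds)]
    return np.sum(normed, axis=0)

fused = sum_fusion(
    [[0.0, 10.0], [-1.0, 1.0]],   # scores of two countermeasures, two samples each
    [(0.0, 10.0), (-1.0, 1.0)],   # per-countermeasure (lo, hi) bounds
)
```

Without the normalization step, a countermeasure with a wide score range would dominate the sum, which is why the framework normalizes before fusing.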
antispoofing.evaluation
This package provides methods for the evaluation of biometric verification systems under spoofing attacks. The evaluation is based on the Expected Performance and Spoofability Curve (EPSC). Using this package, you can compute thresholds based on the EPSC, compute various error rates and plot various curves related to the EPSC.

Besides providing methods for plotting the EPSC within your own scripts, this package brings several scripts that you can use to evaluate your own verification system (fused with an anti-spoofing system or not) from several perspectives. For example, you can:

- evaluate the threshold of a classification system on the development set
- apply the threshold on an evaluation or any other set to compute different error rates
- plot score distributions
- plot different performance curves (DET, EPC and EPSC)

Furthermore, you can generate hypothetical data and use them to exemplify the above mentioned functionalities.

If you use this package and/or its results, please cite the following publications:

Our original paper on biometric evaluation:

@ARTICLE{Chingovska_IEEETIFS_2014, author = {Chingovska, Ivana and Anjos, Andr{\'{e}} and Marcel, S{\'{e}}bastien}, title = {Biometrics Evaluation Under Spoofing Attacks}, journal = {IEEE Transactions on Information Forensics and Security}, year = {2014}, }

Bob as the core framework used to run the experiments:

@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S.
Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, }

If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.

Installation

Note: if you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may not be stable, or may become unstable in a matter of moments.

Go to http://pypi.python.org/pypi/antispoofing.evaluation to download the latest stable version of this package. Then, extract the .zip file to a folder of your choice.

The antispoofing.evaluation package is a satellite package of the free signal processing and machine learning library Bob. This dependency has to be downloaded manually. This version of the package depends on Bob version 2 or greater. To install packages of Bob, please read the Installation Instructions. For Bob to be able to work properly, some dependent Bob packages are required to be installed. Please make sure that you have read the Dependencies for your operating system.

The simplest solution is to download and extract the antispoofing.evaluation package, then go to the console and write:

$ cd antispoofing.evaluation
$ python bootstrap-buildout.py
$ bin/buildout

This will download all required dependent Bob and other packages and install them locally.

Using the package

After installation of the package, go to the console and type:

$ ./bin/sphinx-build doc sphinx

Now, the full documentation of the package, including a User Guide, will be available in sphinx/index.html.

Problems

In case of problems, please contact [email protected]
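As an illustration of the quantities this evaluation is built on, the error rates of a verification system under spoofing can be computed from three score sets: genuine users, zero-effort impostors, and spoofing attacks. The sketch below (function and variable names are ours, not the package API) computes the false acceptance rate (FAR), false rejection rate (FRR) and spoof false acceptance rate (SFAR) at a given threshold:

```python
import numpy as np

def error_rates(genuine, impostors, spoofs, thr):
    """Error rates at threshold thr; scores above thr are accepted."""
    frr = float(np.mean(np.asarray(genuine) < thr))     # genuine users rejected
    far = float(np.mean(np.asarray(impostors) >= thr))  # zero-effort impostors accepted
    sfar = float(np.mean(np.asarray(spoofs) >= thr))    # spoofing attacks accepted
    return far, frr, sfar

# a threshold tuned only against zero-effort impostors can look perfect
# (FAR = FRR = 0) while every spoofing attack is accepted (SFAR = 1)
far, frr, sfar = error_rates(
    genuine=[3.0, 4.0, 5.0],
    impostors=[-2.0, -1.0, 0.5],
    spoofs=[1.0, 3.5, 4.0],
    thr=1.0,
)
```

This gap between the licit scenario (FAR/FRR) and the spoofing scenario (SFAR) is exactly what the EPSC makes visible by sweeping the relative weight of the two attack types.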