This tool provides some scripts to import *Nmap* and *Nessus* scan results into a SQLite database.
The imported results can then be analyzed by various tools to generate target lists for other tools, CSV files, and DOCX reports (based on DOCX templates). The following console commands are available after installation:
| cli command | description |
| ----------- | ----------- |
| scandb-importer | Import *nmap* and *nessus* scans into a sqlite database |
| scandb-services | Generate ip address lists based on port filters (e.g. as input for other tools) |
| scandb-vulns | Search and generate ip address lists based on vulnerability filters (e.g. search for severity, cve, plugin-id, plugin output) |
| scandb-statistics | Print scan, port, vulnerability statistics or generate CSV files with these statistics. Can also be used to generate a CSV file with a list of open ports per host. |
| scandb-compare | Compare two scandb instances and generate CSV statistics of the differences. (Because the database schema may change between releases, both instances should be generated with the same *scandb* version.) |
| scandb-report | Generate DOCX Reports based on given templates.|
## License
This script is licensed under the GNU General Public License in version 3. See http://www.gnu.org/licenses/ for further details.
## Installation
The tool has been published to PyPI and can be installed via *pip*.
```
pip install scandb
```
## scandb-importer
This command can be used to import a single file or many files at once into a SQLite database.
You can use the parameters *--file* and *--dir* to specify the files that should be imported.
```
$ scandb-importer -h
usage: scandb-importer [-h] [--db DB] [--file [FILE [FILE ...]]] [--dir DIR]
I will import Nmap and Nessus scans into a SQLite database.
optional arguments:
-h, --help show this help message and exit
--db DB
--file [FILE [FILE ...]]
The nessus and/or nmap file(s)
--dir DIR Directory name with nessus and/or nmap files
```
## scandb-services
This command can be used to generate target lists based on port filters.
```
$ scandb-services -h
usage: scandb-services [-h] [--db DB] [--status STATUS] [-t PORTS] [-u PORTS] [-o UNION|INTERSECTION] [--list] [-d LIST_DELIMITER] [--list-file FILE]
I can be used to generate target lists (ip address lists) that can be used as input for other tools based on given filters.
optional arguments:
-h, --help show this help message and exit
--db DB
--status STATUS Status string stored in database (default: up)
-t PORTS, --tcp PORTS
Open TCP ports
-u PORTS, --udp PORTS
Open UDP ports
-o UNION|INTERSECTION, --operation UNION|INTERSECTION
Operation to combine the sets of TCP and UDP ports (default: UNION)
--list Generate a target list
-d LIST_DELIMITER, --list-delimiter LIST_DELIMITER
Delimiter used to separate hosts in the list output
--list-file FILE Generate a file with the targets instead of printing them to stdout
```
Generate a list of all hosts (with status 'up'):
```
$ scandb-services --list
192.168.1.2
192.168.1.1
192.168.1.11
192.168.1.19
```
Generate a list of all hosts (with status 'up') and use the delimiter "," instead of a new line:
```
$ scandb-services --list -d ","
192.168.1.2,192.168.1.1,192.168.1.11,192.168.1.19
```
Generate a list of hosts with open tcp port 80:
```
$ scandb-services --list -d " " -t 80
192.168.1.2 192.168.1.1
```
Generate a list of hosts with open udp port 53:
```
$ scandb-services --list -d " " -u 53
192.168.1.19 192.168.1.1
```
Generate a list of hosts with open tcp port 80 or udp port 53:
```
$ scandb-services --list -d " " -u 53 -t 80
192.168.1.19 192.168.1.2 192.168.1.1
```
Generate a list of hosts with open tcp port 80 and udp port 53:
```
$ scandb-services --list -d " " -u 53 -t 80 -o intersection
192.168.1.1
```
## scandb-statistics
This command can be used to display statistics or to create a csv file with all IP addresses and their open ports.
```
$ scandb-statistics -h
usage: scandb-statistics [-h] [--db DB] [-s] [-v] [-p] [--host-portlist] [-d DELIMITER] [-o OUTFILE] [-w] [--docx] [--template TEMPLATE]
I can generate statistics about vulnerabilities, open ports or for the imported scans. Furthermore I can generate a host/portlist as csv file. All statistics can be displayed on stdout or they can be written to csv or docx files (based on templates). See
https://bitbucket.org/cbless/scandb/src/master/examples/ for example templates.A description of usable objects and their attributes can be found under: https://bitbucket.org/cbless/scandb/wiki/Report-Templates
optional arguments:
-h, --help show this help message and exit
--db DB
-s, --scan-statistics
Print statistics for each scan
-v, --vuln-statistics
Print number of vulns foreach host.
-p, --port-statistics
Print number of 'open' TCP and UDP ports foreach host.
--host-portlist generate a csv with a list of TCP and UDP Ports per host
-d DELIMITER, --delimiter DELIMITER
Delimiter for CSV files.
-o OUTFILE, --outfile OUTFILE
Prefix for output files.
-w, --write-file Write data to CSV file. Prefix of filename can be changed with parameter outfile
--docx Render the given DOCX template for the selected statistics. Prefix of filename can be changed with parameter '--outfile'. The template can be specified with parameter '--template'
--template TEMPLATE Name of the template to render. Examples can be found under: https://bitbucket.org/cbless/scandb/src/master/examples/
```
To generate a list of open TCP and UDP ports you can use the following command:
```
$ scandb-statistics --host-portlist
Results written to : scandb-hostportlist.csv
```
The content of the file scandb-hostportlist.csv will look like this:
```
192.168.1.1;53;udp
192.168.1.1;53,80,443,5060,8181;tcp
192.168.1.19;161;udp
192.168.1.2;53,80,5060,8089;tcp
```
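The generated CSV can be consumed directly by other tooling. Below is a minimal sketch (not part of scandb itself) that reads the semicolon-delimited layout shown above with Python's standard `csv` module; the filename is the default one printed by `scandb-statistics`:
```
import csv

# Each row has the form: <ip>;<comma-separated ports>;<protocol>
with open("scandb-hostportlist.csv", newline="") as f:
    for ip, ports, protocol in csv.reader(f, delimiter=";"):
        print(f"{ip} has open {protocol} ports: {ports.split(',')}")
```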
## scandb-vulns
This command can be used to generate target lists based on vulnerability filters.
```
$ scandb-vulns -h
usage: scandb-vulns [-h] [--db DB] [--min-severity MIN_SEVERITY] [--filter-by {cve,plugin-id,plugin-name,plugin-output,description,ip}] [--search SEARCH-Term] [--list {ips,details}] [-d LIST_DELIMITER] [--list-file FILE]
I can be used to query the sqlite database to filter specific vulnerabilities. Results can be displayed to stdout or written to a csv file.
optional arguments:
-h, --help show this help message and exit
--db DB
--min-severity MIN_SEVERITY
Minimum severity level (default: 0)
--filter-by {cve,plugin-id,plugin-name,plugin-output,description,ip}
Filter hosts by the given filter. The search value is specified with option --search. The following fields can be used as filter 'cve', 'plugin-id', 'plugin-name', 'description', 'ip'. (Note: The option 'ip' returns just the ip itself, when '
--list ips' is selected and a vulnerability was detected for that ip, otherwise the result is empty.)
--search SEARCH-Term Search term used for querying the database. The type of the search field can be selected with the parameter --filter-by
--list {ips,details} Generate a target list of ip addresses when selecting 'ips' or display the columns Address,Port,Protocol,Severity,Plugin-ID,Plugin-Name
-d LIST_DELIMITER, --list-delimiter LIST_DELIMITER
Delimiter used to separate hosts in the list output. Only when --list ips is used.
--list-file FILE Generate a file with the results instead of printing them to stdout. Incase of '--list ips' is selected the file contains a list of ip address (one per line), in case of '--list details' it will be a csv file
```
Select hosts that are affected by a CVE starting with CVE-2015- and display only the IP addresses.
```
scandb-vulns --filter-by cve --search CVE-2015- --list ips
```
Select hosts that are affected by a vulnerability with Plugin-ID 48243 and display the columns Address,Port,Protocol,Severity,Plugin-ID,Plugin-Name.
```
scandb-vulns --db test.sqlite --filter-by plugin-id --search 48243 --list details
Address            Port    Protocol    Severity    Plugin-ID    Plugin-Name
192.168.100.101    443     tcp         0           48243        PHP Version Detection
192.168.100.111    80      tcp         0           48243        PHP Version Detection
192.168.100.122    443     tcp         0           48243        PHP Version Detection
```
## scandb-compare
This command can be used to compare two scandb database instances (databases must be created with scandb v0.4.0 or
a later version).
```
$ scandb-compare -h
usage: scandb-compare [-h] [--db1 DB1] [--db2 DB2] [-v] [-p] [--host-portlist] [-o OUTFILE]
optional arguments:
-h, --help show this help message and exit
--db1 DB1
--db2 DB2
-v, --vuln-statistics
Print number of vulns foreach host and db.
-p, --port-statistics
Print number of 'open' TCP and UDP ports foreach host and db.
--host-portlist generate a csv with a list of TCP and UDP Ports per host and db
-o OUTFILE, --outfile OUTFILE
Prefix for output files.
```
## scandb-report
This command can be used to export vulnerabilities to a docx format based on custom templates.
See also:
- [DOCX template examples]( https://bitbucket.org/cbless/scandb/src/master/examples/ )
- [Description of Report Objects]( https://bitbucket.org/cbless/scandb/wiki/Report-Templates )
```
$ scandb-report -h
usage: scandb-report [-h] [--db DB] [--min-severity MIN_SEVERITY] [--plugins PLUGINS [PLUGINS ...]] [--export-vulns {all,unsorted,host,plugin}] [--template TEMPLATE] [--outfile OUTFILE]
Generate DOCX reports based on custom templates. See https://bitbucket.org/cbless/scandb/src/master/examples/ for example templates.A description of usable objects and their attributes can be found under: https://bitbucket.org/cbless/scandb/wiki/Report-Templates
optional arguments:
-h, --help show this help message and exit
--db DB
--min-severity MIN_SEVERITY
Minimum severity level (default: 0). Either plugins or min-severity can be used.
--plugins PLUGINS [PLUGINS ...]
List of plugins to export. Either plugins or min-severity can be used.
--export-vulns {all,unsorted,host,plugin}
Can be used to specifiy how the vulnerabilities will be injected into the template. 'unsorted' means that the vulnerabilites will be available unsorted as 'vulns'. 'host' means that a list of vulnerabilities is avaialable per host. 'plugin'
means that the list of affected systems is available per plugin/vulnerability as 'vulns_by_plugin'. 'all' means that all three options are available in the template. (default 'plugin')
--template TEMPLATE Name of the template to render. Examples can be found under: https://bitbucket.org/cbless/scandb/src/master/examples/
--outfile OUTFILE Name that is used for the generated report.
```
**Example:** Export only vulnerabilities with a minimum severity of MEDIUM.
```
scandb-report --min-severity 2 --db scandb.sqlite --template "examples/vulns-by-plugin_with_stats.docx"
```
**Example:** Export only a list of vulnerabilities that match the specified plugin IDs.
```
scandb-report --plugins 12344,44443,22211 --db scandb.sqlite --template "examples/vulns-by-plugin_with_stats.docx"
```
/scandb-1.0.0.tar.gz/scandb-1.0.0/README.md
<div align='center'>
<img src="https://raw.githubusercontent.com/saattrupdan/ScandEval/main/gfx/scandeval.png" width="517" height="217">
</div>
### Evaluation of pretrained language models on mono- or multilingual Scandinavian language tasks.
______________________________________________________________________
[](https://pypi.org/project/scandeval/)
[](https://arxiv.org/abs/2304.00906)
[](https://github.com/saattrupdan/ScandEval/blob/main/LICENSE)
[](https://github.com/saattrupdan/ScandEval/commits/main)
[](https://github.com/saattrupdan/ScandEval/tree/main/tests)
[](https://github.com/saattrupdan/ScandEval/blob/main/CODE_OF_CONDUCT.md)
## Installation
To install the package simply write the following command in your favorite terminal:
```
$ pip install scandeval
```
## Quickstart
### Benchmarking from the Command Line
The easiest way to benchmark pretrained models is via the command line interface. After
having installed the package, you can benchmark your favorite model like so:
```
$ scandeval --model-id <model-id>
```
Here `<model-id>` is the Hugging Face model ID, which can be found on the [Hugging Face
Hub](https://huggingface.co/models). By default this will benchmark the model on all
eligible datasets. If you want to benchmark on a specific dataset, this can be done
via the `--dataset` flag. This will, for instance, evaluate the model on the
`AngryTweets` dataset:
```
$ scandeval --model-id <model-id> --dataset angry-tweets
```
We can also filter by language. To benchmark all Danish models on all Danish
datasets, say, this can be done using the `--language` flag, like so:
```
$ scandeval --language da
```
Multiple models, datasets and/or languages can be specified by just attaching multiple
arguments. Here is an example with two models:
```
$ scandeval --model-id <model-id1> --model-id <model-id2> --dataset angry-tweets
```
A specific model version can also be selected by appending '@' and the desired revision:
```
$ scandeval --model-id <model-id>@<commit>
```
The revision can be a branch name, a tag name, or a commit ID; it defaults to 'main', i.e. the latest version.
See all the arguments and options available for the `scandeval` command by typing
```
$ scandeval --help
```
### Benchmarking from a Script
In a script, the syntax is similar to the command line interface. You simply initialise
an object of the `Benchmarker` class, and call this benchmark object with your favorite
models and/or datasets:
```
>>> from scandeval import Benchmarker
>>> benchmark = Benchmarker()
>>> benchmark('<model-id>')
```
To benchmark on a specific dataset, you simply specify the second argument, shown here
with the `AngryTweets` dataset again:
```
>>> benchmark('<model-id>', 'angry-tweets')
```
If you want to benchmark a subset of all the models on the Hugging Face Hub, you can
specify several parameters in the `Benchmarker` initializer to narrow down the list of
models to the ones you care about. As a simple example, the following would benchmark
all the Nynorsk models on Nynorsk datasets:
```
>>> benchmark = Benchmarker(language='nn')
>>> benchmark()
```
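Analogous to the CLI example with two models above, several models can be benchmarked from a script by calling the benchmark object repeatedly. This is only a minimal sketch; the model IDs are placeholders:
```
>>> for model_id in ['<model-id1>', '<model-id2>']:
...     benchmark(model_id, 'angry-tweets')
```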
## Citing ScandEval
If you want to cite the framework then feel free to use this:
```
@inproceedings{nielsen2023scandeval,
title={ScandEval: A Benchmark for Scandinavian Natural Language Processing},
author={Nielsen, Dan Saattrup},
  booktitle={The 24th Nordic Conference on Computational Linguistics},
year={2023}
}
```
## Remarks
The image used in the logo has been created by the amazing [Scandinavia and the
World](https://satwcomic.com/) team. Go check them out!
## Project structure
```
.
├── .flake8
├── .github
│ └── workflows
│ └── ci.yaml
├── .gitignore
├── .pre-commit-config.yaml
├── CHANGELOG.md
├── LICENSE
├── README.md
├── gfx
│ └── scandeval.png
├── makefile
├── poetry.toml
├── pyproject.toml
├── src
│ ├── scandeval
│ │ ├── __init__.py
│ │ ├── benchmark_config_factory.py
│ │ ├── benchmark_dataset.py
│ │ ├── benchmarker.py
│ │ ├── callbacks.py
│ │ ├── cli.py
│ │ ├── config.py
│ │ ├── dataset_configs.py
│ │ ├── dataset_factory.py
│ │ ├── dataset_tasks.py
│ │ ├── exceptions.py
│ │ ├── hf_hub.py
│ │ ├── languages.py
│ │ ├── model_loading.py
│ │ ├── named_entity_recognition.py
│ │ ├── question_answering.py
│ │ ├── question_answering_trainer.py
│ │ ├── scores.py
│ │ ├── sequence_classification.py
│ │ ├── speed_benchmark.py
│ │ ├── types.py
│ │ └── utils.py
│ └── scripts
│ ├── create_angry_tweets.py
│ ├── create_dane.py
│ ├── create_mim_gold_ner.py
│ ├── create_norec.py
│ ├── create_norne.py
│ ├── create_scala.py
│ ├── create_scandiqa.py
│ ├── create_suc3.py
│ ├── create_swerec.py
│ ├── create_wikiann_fo.py
│ ├── fill_in_missing_model_metadata.py
│ ├── fix_dot_env_file.py
│ ├── load_ud_pos.py
│ └── versioning.py
└── tests
├── __init__.py
├── conftest.py
├── test_benchmark_config_factory.py
├── test_benchmark_dataset.py
├── test_benchmarker.py
├── test_callbacks.py
├── test_cli.py
├── test_config.py
├── test_dataset_configs.py
├── test_dataset_factory.py
├── test_dataset_tasks.py
├── test_exceptions.py
├── test_hf_hub.py
├── test_languages.py
├── test_model_loading.py
├── test_named_entity_recognition.py
├── test_question_answering.py
├── test_question_answering_trainer.py
├── test_scores.py
├── test_sequence_classification.py
├── test_speed_benchmark.py
├── test_types.py
└── test_utils.py
```
/ScandEval-7.1.0.tar.gz/ScandEval-7.1.0/README.md
import json
import logging
import re
from pathlib import Path
from typing import List, Union
import pandas as pd
# Set up logging
logger = logging.getLogger(__name__)
def postprocess(path: Union[str, Path], suffix: str = "-postprocessed") -> None:
"""Post-process the built corpus.
Args:
path (str or Path):
The path to the corpus file.
suffix (str, optional):
The suffix to append to the output file. Defaults to "-postprocessed".
"""
# Convert the path to a Path object
path = Path(path)
# Load the corpus as a Pandas DataFrame
with path.open() as f:
records = [json.loads(line) for line in f]
corpus = pd.DataFrame.from_records(records)
# Remove the duplicates
prev_count = len(corpus)
    corpus = corpus.drop_duplicates(subset="doc")
    if corpus.empty:
        raise ValueError("The corpus is empty.")
logger.info(f"Removed {prev_count - len(corpus):,} duplicate comments.")
    # Remove the comments written by bots
prev_count = len(corpus)
corpus = corpus[~corpus.doc.str.contains("I am a bot")]
logger.info(f"Removed {prev_count - len(corpus):,} bot comments.")
# Remove the comments with less than 20 characters and spaces
prev_count = len(corpus)
corpus = corpus[
corpus.doc.map(lambda doc: len(re.sub(r"[^a-zæøå ]", "", doc.lower())) > 20)
]
logger.info(
f"Removed {prev_count - len(corpus):,} comments that contained too little "
"content."
)
# Remove the inappropriate comments
prev_count = len(corpus)
banned_subreddits = get_banned_subreddits(
corpus.subreddit.unique()
) # NSFW_SUBREDDITS + DRUG_SUBREDDITS
corpus = corpus[~corpus.subreddit.isin(banned_subreddits)]
logger.info(f"Removed {prev_count - len(corpus):,} inappropriate comments.")
# Save the corpus
output_path = path.parent / f"{path.stem}{suffix}.jsonl"
with output_path.open("w") as f:
for _, row in corpus.iterrows():
f.write(json.dumps(row.to_dict()) + "\n")
def get_banned_subreddits(subreddits: List[str]) -> List[str]:
"""Check if a list of subreddits are banned.
Args:
subreddits (List[str]):
The list of subreddits to check.
Returns:
List[str]:
The list of banned subreddits.
Raises:
ValueError:
If the list of subreddits is empty.
"""
banned_words = [
"nsfw",
"gonewild",
"cock",
"tits",
"titties",
"milf",
"porn",
"dirty",
"fraek",
"nipple",
"trusse",
"buksebule",
"rape",
"jodel",
"weed",
"drugs",
"droger",
"stoffer",
"darknet",
"sortemarked",
"psyches",
"rusmidler",
"naket",
]
# Filter the subreddits
banned_subreddits = [
subreddit
for subreddit in subreddits
if any(keyword in subreddit.lower() for keyword in banned_words)
]
return banned_subreddits
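# Example usage (a minimal sketch, not part of the module; the input path is
# hypothetical and is assumed to be a JSONL corpus whose records contain "doc"
# and "subreddit" fields, as postprocess() expects):
#
#     postprocess("data/processed/reddit-da.jsonl")
#     # -> writes data/processed/reddit-da-postprocessed.jsonl next to the input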
/scandi_reddit-0.2.1-py3-none-any.whl/scandi_reddit/postprocess.py
import json
import logging
import subprocess
from multiprocessing import cpu_count
from pathlib import Path
from typing import Any, Dict, Generator, List, Optional
import pandas as pd
import zstandard
from datasets.arrow_dataset import Dataset
from joblib import Parallel, delayed
from nlp_dedup import Deduper
from tqdm.auto import tqdm
from scandi_reddit.postprocess import postprocess
from .download import download_reddit_file
from .language_filter import filter_comment
# Set up logging
logger = logging.getLogger(__name__)
def build_reddit_dataset(
overwrite: bool = False,
n_jobs: int = -2,
starting_year: int = 2005,
starting_month: int = 1,
skip_download: bool = False,
hub_repo_id: Optional[str] = None,
) -> None:
"""Build a Scandinavian Reddit dataset.
Args:
overwrite (bool, optional):
Whether to overwrite existing files. Defaults to False.
n_jobs (int, optional):
The number of jobs to run in parallel. Can be set to a negative number to
use all but that number of cores. Defaults to -2.
starting_year (int, optional):
The year to start downloading from. Defaults to 2005.
starting_month (int, optional):
The month to start downloading from. Defaults to 1.
skip_download (bool, optional):
Whether to skip downloading the files. If this is set then the "data/raw"
directory must contain the files "reddit-da.jsonl", "reddit-no.jsonl",
"reddit-sv.jsonl" and "reddit-is.jsonl". Defaults to False.
hub_repo_id (Optional[str], optional):
The ID of the Hugging Face Hub repository to upload the dataset to. If
this is set then the dataset will be uploaded to the Hugging Face Hub.
If None then the dataset will not be uploaded. Defaults to None.
"""
# Set up paths to data directories
raw_data_dir = Path("data") / "raw"
processed_data_dir = Path("data") / "processed"
final_data_dir = Path("data") / "final"
# Create language mapping
language_mapping = {
"da": "Danish",
"sv": "Swedish",
"no": "Norwegian",
"is": "Icelandic",
}
# Set up the output files
output_paths = {
lang: processed_data_dir / f"reddit-{lang}.jsonl"
for lang in language_mapping.keys()
}
# Ensure `n_jobs` is non-negative
if n_jobs < 0:
n_jobs = cpu_count() + n_jobs + 1
# Remove the previous files if `overwrite` is set
if overwrite:
for path in output_paths.values():
path.unlink(missing_ok=True)
# Replace starting year and month by the newest file present in the raw data
# folder, if any
existing_files = list(raw_data_dir.glob("RC_*.zst"))
for file in existing_files:
year = int(file.stem.split("_")[1].split("-")[0])
month = int(file.stem.split("_")[1].split("-")[1])
starting_year = max(starting_year, year)
starting_month = max(starting_month, month)
# Download the Reddit dumps and apply the language filter
if not skip_download:
logger.info(f"Fetching Reddit comments using {n_jobs} jobs in parallel.")
for year in range(starting_year, 2030):
for month in range(starting_month, 13):
# Download the file
input_path = download_reddit_file(year=year, month=month)
# If the download failed then skip to the next month
if not input_path.exists():
continue
# Extract the comments from the file
extract_comments_from_file(
input_path=input_path,
output_paths=output_paths,
n_jobs=n_jobs,
)
# Delete the input file again
input_path.unlink()
# Set the starting month to 1
starting_month = 1
# Post-process the files
for lang, path in output_paths.items():
logger.info(f"Post-processing the {language_mapping[lang]} corpus.")
postprocess(path=path, suffix="-postprocessed")
# Initialise the Deduper
deduper = Deduper(
split_method="word_ngram",
num_minhashes=128,
ngram_size=5,
similarity_threshold=0.8,
batch_size=1_000_000,
n_jobs=n_jobs,
random_seed=4242,
store_config_to_disk=True,
store_mask_to_disk=True,
store_lsh_cache_to_disk=False,
store_corpus_to_disk=False,
)
# Create the corpus generator
def build_corpus() -> Generator[str, None, None]:
for path in output_paths.values():
path_processed = path.parent / f"{path.stem}-postprocessed.jsonl"
with path_processed.open() as f:
for line in f:
line = json.loads(line)
yield line["doc"] # type: ignore[index]
# Count the lines in the corpus
num_docs = 0
for path in output_paths.values():
proc = subprocess.Popen(["wc", "-l", str(path)], stdout=subprocess.PIPE)
num_docs += int(proc.communicate()[0].decode().split()[0])
# Deduplicate the files
deduper.deduplicate(
corpus=build_corpus(),
output_dir=processed_data_dir / "deduplicated",
num_docs=num_docs,
overwrite=True,
)
# Load the deduplication mask
mask_path = processed_data_dir / "deduplicated" / "mask.jsonl"
with mask_path.open() as f:
mask = [json.loads(line) for line in f]
# Load all the deduplicated files
all_records: List[Dict[str, Any]] = list()
idx: int = 0
for path in output_paths.values():
path_processed = path.parent / f"{path.stem}-postprocessed.jsonl"
with path_processed.open() as f:
for line in f:
if not mask[idx]["duplicate"]:
record = json.loads(line)
all_records.append(record)
idx += 1
# Convert the records to a Hugging Face dataset
df = pd.DataFrame.from_records(all_records)
dataset = Dataset.from_pandas(df)
# Save the dataset to disk
dataset.save_to_disk(str(final_data_dir / "scandireddit"))
# Push the dataset to the Hugging Face Hub
if hub_repo_id is not None:
dataset.push_to_hub(hub_repo_id)
def extract_comments_from_file(
input_path: Path,
output_paths: dict[str, Path],
n_jobs: int,
) -> None:
"""Extract comments from a Reddit file.
Args:
input_path (Path):
The path to the input file.
output_paths (dict[str, Path]):
The paths to the output files.
n_jobs (int):
The number of jobs to run in parallel.
"""
# Open the file
f = input_path.open("rb")
# Open up the output files
output_files = {
lang: output_file.open("a") for lang, output_file in output_paths.items()
}
# Create a decompressor
decompressor = zstandard.ZstdDecompressor(max_window_size=2**31)
# Create a stream reader
stream_reader = decompressor.stream_reader(f)
# Initialise the buffer
buffer: str = ""
# Create progress bar, with unit being millions
progress_bar = tqdm(
desc=f"Processing comments from {input_path.name}",
unit_scale=True,
)
# Infinite loop, break when we reach the end of the file
while True:
# Load a batch of data, break if it cannot be loaded
try:
batch = stream_reader.read(1_000_000_000)
except zstandard.ZstdError:
logger.debug("Could not load batch.")
break
# Decode the batch, skip if it cannot be decoded
try:
batch = batch.decode()
except UnicodeDecodeError:
logger.debug(f"Could not decode batch from {input_path.name}")
continue
# Break if we reached the end of the file
if not batch:
logger.debug(f"Reached end of file {input_path.name}")
break
# Add the buffer
batch = buffer + batch
# Split the batch into individual comments
comments = batch.splitlines()
# Process the comments in parallel
with Parallel(n_jobs=n_jobs) as parallel:
records = parallel(
delayed(filter_comment)(comment) for comment in comments[:-1]
)
# If `records` is None then skip to the next file
if records is None:
logger.debug(f"No records found in {input_path.name}")
continue
# Iterate over the records, writing them to the output files
for item in records:
# Skip if the record is None
if item is None:
progress_bar.update()
continue
# Unpack the record
record, lang = item
# Write the record to the correct file
if lang in output_files:
output_files[lang].write(record + "\n")
# Up the progress bar
progress_bar.update()
# Update the buffer
buffer = comments[-1]
# Close the progress bar
progress_bar.close()
# Close the output files
for output_file in output_files.values():
output_file.close()
# Close the file
f.close()
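# Example usage (a minimal sketch, not part of the module; the arguments mirror
# the defaults documented in the docstring above, and the Hub repository ID is
# hypothetical):
#
#     build_reddit_dataset(
#         overwrite=False,
#         n_jobs=-2,
#         starting_year=2020,
#         starting_month=1,
#         hub_repo_id=None,  # e.g. "username/scandi-reddit" to push to the Hub
#     )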
/scandi_reddit-0.2.1-py3-none-any.whl/scandi_reddit/build.py
<p align="center">
<img src="https://github.com/rederoth/ScanDy/blob/main/docs/scandy_repo_card.png">
</p>
<p align="center">
<a href="https://github.com/psf/black">
<img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>
<a href="https://doi.org/10.1101/2023.03.14.532608">
<img alt="paper" src="https://img.shields.io/badge/preprint-10.1101%2F2023.03.14.532608-blue"></a>
</p>
<!-- # ScanDy
Simulating Realistic Human Scanpaths in Dynamic Real-World Scenes -->
## Introduction
`ScanDy` is a modular and mechanistic computational framework for simulating realistic **scan**paths in **dy**namic real-world scenes. It is specifically designed to quantitatively test hypotheses about eye-movement behavior in videos.
For example, it can be used to demonstrate the influence of object-representations on gaze behavior by comparing object-based and location-based models.
For a visual guide of how `ScanDy` works, have a look at the [interactive notebook](examples/interactive_guide.ipynb) (also on [Colab](https://colab.research.google.com/github/rederoth/ScanDy/blob/main/examples/interactive_guide.ipynb)) and the <a href="#examples">example usecases</a>.
## Software architecture
The structure of `ScanDy` is inspired by the `neurolib` framework, which is also used for parameter optimization and exploration.
<p align="center">
<img src="https://github.com/rederoth/ScanDy/blob/main/docs/software_architecture.png">
</p>
Scanpath models inherit from the `Model` base class, whose functionality includes initializing and running model simulations and the evaluation and visualization of the resulting scanpaths. Models are implemented in a modular way, consisting of modules for (I) Scene features, (II) Visual sensitivity, (III) Scanpath history, (IV) Decision making, and (V) Gaze update.
## Installation
You can install `ScanDy` as pypi package using `pip`:
```
pip install scandy
```
We however recommend that you clone (or fork) this repository and install all dependencies with
```
git clone https://github.com/rederoth/ScanDy.git
cd ScanDy/
pip install -r requirements.txt
pip install .
```
This gives you more freedom to modify the existing models and run the examples.
*CAVEAT*: There is currently an incompatibility between Python 3.11 and the `numba` package (required by `neurolib`), see [numba/numba#8304](https://github.com/numba/numba/issues/8304). We therefore recommend using Python <=3.10 or manually installing `numba`/`neurolib`.
## Dataset
The scanpath models require precomputed maps of the video data. We use the VidCom dataset (Li et al., 2011), for which we provide all the required data on OSF (https://www.doi.org/10.17605/OSF.IO/83XUC).
To prepare the dataset, we used the following resources:
* [VidCom](http://ilab.usc.edu/vagba/dataset/VidCom/) - Video and eye-tracking data
* [deep_em_classifier](https://github.com/MikhailStartsev/deep_em_classifier/) - Eye movement classification
* [detectron2](https://github.com/facebookresearch/detectron2/) - Frame-wise object segmentation
* [deep_sort](https://github.com/nwojke/deep_sort/) - Object tracking
* [dynamic-proto-object-saliency](https://github.com/csmslab/dynamic-proto-object-saliency/) - Low-level saliency maps
* [TASED-Net](https://github.com/MichiganCOG/TASED-Net/) - High-level saliency maps
* [PWC-Net](https://github.com/NVlabs/PWC-Net/) - Optical flow calculation
If you only want to play around with a single video, we uploaded a version of the dataset only containing the "field03" video to [Google drive](https://drive.google.com/file/d/1oT9OJ2tRsvdJGFFLSKDCaY3BJev4Irzf/view?usp=sharing).
## Examples
We prepared a number of [IPython Notebooks](examples/) for you to explore the framework.
To get started with `ScanDy`, have a look at our [interactive guide](examples/interactive_guide.ipynb), where you can explore the effect of individual model parameters.
Additionally, we show instructive usecases, including:
* [Example 1](examples/ex1_scanpath_sgl_video.ipynb), on [Colab](https://colab.research.google.com/github/rederoth/ScanDy/blob/main/examples/ex1_scanpath_sgl_video.ipynb): Scanpath simulation and visualization for a single video
* [Example 2](examples/ex2_model_comparison.ipynb), on [Colab](https://colab.research.google.com/github/rederoth/ScanDy/blob/main/examples/ex2_model_comparison.ipynb): Evolutionary optimization of model parameters
* [Example 3](examples/ex3_model_extension.ipynb), on [Colab](https://colab.research.google.com/github/rederoth/ScanDy/blob/main/examples/ex3_model_extension.ipynb): Extending on existing models: Location-based model with object-based sensitivity
All figures from our manuscript (Roth et al., 2023) can be reproduced with [this notebook](examples/manuscript_results.ipynb), which is also executable on [Colab](https://colab.research.google.com/github/rederoth/ScanDy/blob/main/examples/manuscript_results.ipynb).
## More information
### How to cite
If `ScanDy` is useful for your research, please cite our preprint:
> Roth, N., Rolfs, M., Hellwich, O., & Obermayer, K. (2023). Objects guide human gaze behavior in dynamic real-world scenes. *bioRxiv*, 2023-03.
```bibtex
@article{roth2023objects,
title = {Objects Guide Human Gaze Behavior in Dynamic Real-World Scenes},
author = {Roth, Nicolas and Rolfs, Martin and Hellwich, Olaf and Obermayer, Klaus},
elocation-id = {2023.03.14.532608},
year = {2023},
doi = {10.1101/2023.03.14.532608},
publisher = {Cold Spring Harbor Laboratory},}
```
### Contact
If you have feedback, questions, and/or ideas, feel free to send a [mail](mailto:[email protected]) to Nico.
Nicolas Roth,
PhD Student at Science of Intelligence;
Neural Information Processing Group,
Fakultaet IV, Technische Universitaet Berlin,
MAR 5-6, Marchstr. 23, 10587 Berlin
### Acknowledgments
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2002/1 "Science of Intelligence" – project number 390523135.
/scandy-0.0.3.tar.gz/scandy-0.0.3/README.md
# Scanflow-Kubernetes: An MLOps Platform
Scanflow-Kubernetes is a platform to simplify MLOps. It originally supports deploying and operating on Kubernetes, but users can also extend it into other platforms.
Scanflow is a high-level library that is built on top of MLflow. It provides the ability to define workflows, build each node of a workflow as well as the agents, and deploy/run the agents and workflows. In addition, it offers a framework for developing agents that manage and supervise workflows in both the ML training stage and the inference stage.
Current components of Scanflow includes:
- **Scanflow Developing** (Scanflow Application): A format for teams to define workflows, agents, and the basic environment.
- **Scanflow Building**: An API to build a Scanflow Application (each node of the workflows and the agents as containers).
- **Scanflow Deploying**: An API to create a working environment for each team and deploy agents; it also allows workflows to run as batch jobs or to be deployed as online services.
- **Scanflow Operating** (Scanflow Agent): A framework for developing agents. It provides an online multi-agent system to manage and supervise the workflows.
- **Scanflow Tracking** (supported by MLflow): MLflow provides an API to log parameters, artifacts, and models in machine learning experiments. We use MLflow as a database to track this information and share it between teams.
## Scanflow Architecture

Scanflow Tracker is based on MLflow; by default, MLflow logs are recorded to local files.
In our private platform, we configure PostgreSQL as the backend store and MinIO as the artifact store. For more information, see [MLflow with remote tracking server backend and artifact stores](https://www.mlflow.org/docs/latest/tracking.html#scenario-4-mlflow-with-remote-tracking-server-backend-and-artifact-stores).

## Installing
Please check [installing](installer/Readme.md) for more details
## MLOps

Figure 1: Architecture of MLOps.
The architecture of MLOps is shown in Figure 1. Many phases and steps are required before a machine learning model in production can provide value. The top describes the steps for the data team and the data science team before a model goes into production. Normally, the data team is responsible for discovering and collecting the valuable data, and the data science team then develops a machine learning workflow that contains data preparation, validation, and preprocessing, as well as model training, validation, and testing. A workflow manager (e.g., Scanflow) can track metadata such as metrics and scores, as well as the artifacts produced during the training phase, analyze them, and automatically tune hyper-parameters, apply early stopping, and perform neural architecture search to improve the model.
The bottom describes the model in production, including the model inference workflow deployment and the operation phase that automatically manages the machine learning workflow from both the application layer (e.g., workflow manager Scanflow) and the infrastructure layer (e.g., resource manager Kubernetes).
For deploying and managing the machine learning workflow at scale, the data engineering team should also build a workflow managed by the workflow manager, but wrap and deploy the model as a service. From the application-layer view, the workflow manager can log model metrics (such as scores) and artifacts (such as new data) to detect outliers, adversarial inputs, or drift, provide model explanations, and finally trigger the machine learning workflow to be retrained or the model to be updated. From the infrastructure-layer view, exposing the model as a service allows it to be released, updated, and rolled out independently, and its latency and failure rate can be monitored at inference time. With these observations, the resource manager can automatically scale the service to achieve reliability and efficiency. The definition of each step consists of setting the images, requirements, Python scripts, and parameters. This definition is set just once, and the behavior of each step can be changed through its parameters. In a production system, this notebook should be run once in order to start the network, tracker, executors, and agents as containers. These containers can then be executed or reached on demand via the Scanflow API (e.g. calling the online predictor service or executing the inference batch executor).
As a tool shared between teams, Scanflow helps different teams work under the same concepts and communicate and share data, models, and artifacts. Scanflow also deals with the hard parts of all these stages, and thus helps teams quickly and easily develop, build, deploy, and auto-manage their workflows. The MNIST project is organized in this way; the tutorials below show how different teams use Scanflow.
## Tutorials
Please check the jupyter notebook for more details.
MNIST Project Tutorial: [mnist](tutorials/mnist/Readme.md)
mlperf Project Tutorial: [mlperf](tutorials/mlperf/Readme.md)
/scanflow-0.1.1.tar.gz/scanflow-0.1.1/README.md
import typing as t
from pathlib import Path
from pydantic import BaseModel
from strictyaml import YAML, Float, Int, Map, Seq, Str, load
ROOT_PATH = Path(__file__).resolve().parent
ROOT = ROOT_PATH.parent
CONFIG_FILE_PATH = ROOT / "config.yml"
TRAIN_BATCH_FILES = ROOT / "train_batch_raw_files"
TRAINED_MODEL_DIR = ROOT / "models"
SCHEMA = Map(
{
"pipeline_name": Str(),
"train_batch_files": Str(),
"validated_files": Str(),
"train_data": Str(),
"train_db_path": Str(),
"test_batch_files": Str(),
"test_validated_files": Str(),
"test_data": Str(),
"test_db_path": Str(),
"test_query": Str(),
"train_query": Str(),
"kmeans_model_path": Str(),
"sample_file_name": Str(),
"length_0f_date_stamp_in_file": Int(),
"length_0f_time_stamp_in_file": Int(),
"number_of_columns": Int(),
"unwanted_features": Seq(Str()),
"target": Str(),
"features": Seq(Str()),
"random_state": Int(),
"test_size": Float(),
"logistic_regression_params": Map(
{
"logistic__solver": Seq(Str()),
"logistic__penalty": Seq(Str()),
"logistic__C": Seq(Float()),
}
),
"random_forest_params": Map(
{
"random_forest__criterion": Seq(Str()),
"random_forest__n_estimators": Seq(Int()),
"random_forest__min_samples_leaf": Seq(Int()),
"random_forest__min_samples_split": Seq(Int()),
"random_forest__max_features": Seq(Float()),
}
),
"cv": Int(),
"mlflow_config": Map(
{
"artifacts_dir": Str(),
"experiment_name": Str(),
"run_name": Str(),
"registered_model_name": Str(),
"remote_server_uri": Str(),
}
),
}
)
class AppConfig(BaseModel):
pipeline_name: str
train_batch_files: str
validated_files: str
train_db_path: str
train_data: str
test_data: str
test_batch_files: str
test_validated_files: str
test_db_path: str
test_query: str
train_query: str
kmeans_model_path: str
class ModelConfig(BaseModel):
sample_file_name: str
length_0f_date_stamp_in_file: int
length_0f_time_stamp_in_file: int
number_of_columns: int
unwanted_features: t.Sequence[str]
target: str
features: t.Sequence[str]
random_state: int
test_size: float
logistic_regression_params: t.Dict[str, list]
random_forest_params: t.Dict[str, t.Sequence]
cv: int
mlflow_config: t.Dict[str, str]
class Config(BaseModel):
app_config: AppConfig
model_config: ModelConfig
def get_config_path() -> Path:
    if CONFIG_FILE_PATH.is_file():
return CONFIG_FILE_PATH
raise Exception(f"Config not found at {CONFIG_FILE_PATH!r}")
def parse_config_file(cfg_path: Path = None, schema=None) -> YAML:
    if cfg_path is None:
        cfg_path = get_config_path()
    if schema is None:
        schema = SCHEMA
if cfg_path:
with open(cfg_path, "r") as cfg_file:
data = load(cfg_file.read(), schema)
return data
raise OSError(f"Did not find config file at path: {cfg_path}")
def create_and_valid_config(cfg: YAML = None) -> Config:
if cfg is None:
cfg = parse_config_file(cfg)
_config = Config(
app_config=AppConfig(**cfg.data), model_config=ModelConfig(**cfg.data)
)
return _config
config = create_and_valid_config()
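# Example usage (a minimal sketch; the import path follows this file's location
# in the package, and the attribute names follow the pydantic models defined
# above):
#
#     from scania_truck_air_presure_fault_detector.config.core import config
#     print(config.app_config.pipeline_name)
#     print(config.model_config.test_size)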
/scania_truck_air_presure_fault_detector-0.1.0.tar.gz/scania_truck_air_presure_fault_detector-0.1.0/scania_truck_air_presure_fault_detector/config/core.py
import xml.etree.ElementTree as ET
from math import acos, degrees
from collections import defaultdict
from sys import stdout
import numpy as np
def calc_sphere(x, y, z):
"""Calculate spherical coordinates for axial data."""
return np.degrees(np.arctan2(*(np.array((
x, y)) * np.sign(z)))) % 360, np.degrees(np.arccos(np.abs(z)))
def parse_pp(fname, start, end):
tree = ET.parse(fname)
points = defaultdict(list)
for x in tree.getroot().findall("./point"):
points[x.attrib["name"]].append(
np.array((float(x.attrib['x']), float(x.attrib['y']), float(
x.attrib['z']))))
scanline_start = points.pop(start)[0]
scanline_vector = points.pop(end)[0] - scanline_start
scanline_vector /= np.linalg.norm(scanline_vector)
points_data = []
for point in points:
a, b, c = points[point]
ab = b - a
ac = c - a
bc = c - b
length = max([np.linalg.norm(x) for x in (ab, ac, bc)])
centroid = np.mean((a, b, c), axis=0)
d = np.dot(centroid - scanline_start, scanline_vector)
n = np.cross(ab, ac)
n /= np.linalg.norm(n)
angle = degrees(acos(abs(np.dot(n, scanline_vector))))
theta, phi = calc_sphere(*n)
points_data.append([theta, phi, point, d, length, angle])
points_data.sort(key=lambda point: point[3])
return points_data
def main():
import argparse
parser = argparse.ArgumentParser(
description="Process .pp files into scanline data.")
parser.add_argument(
"--start",
action="store",
dest="start",
default="S1",
help="name of point at start of scanline")
parser.add_argument(
"--end",
action="store",
dest="end",
default="S2",
help="name of point at end of scanline")
parser.add_argument(
"--out",
action="store",
dest="out",
default=None,
help="name of output file, prints to stdout if not given")
parser.add_argument(
"fname",
action="store",
help="input .pp file from meshlab's point picking tool")
args = parser.parse_args()
data = parse_pp(args.fname, args.start, args.end)
if args.out is None:
f = stdout
else:
        f = open(args.out, "w", newline="")  # text mode so csv.writer works under Python 3
from csv import writer
data_writer = writer(f)
data_writer.writerow(
["#dipdir", "dip", "point", "position", "length", "angle"])
data_writer.writerows(data)
if args.out is not None:
f.close()
if __name__ == "__main__":
main()
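# Example invocation (a minimal sketch; the file names are hypothetical):
#
#     python scanline.py --start S1 --end S2 --out scanline.csv picked_points.pp
#
# This reads a MeshLab point-picking (.pp) file, computes dip direction, dip,
# position, length and angle for each picked plane, sorts the planes by their
# position along the S1 -> S2 scanline, and writes them to the given CSV file.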
/scanline-0.2.1.tar.gz/scanline-0.2.1/scanline.py
import numpy as np
class sMerge:
"""The sMerge class, corresponds to the sMerge struct
Attributes
----------
isStack : bool
whether the provided images are in a stack or not
img_shape : ndarray
the shape of the provided images
numImages : int
the number of input images
scanAngles : ndarray
the direction of the input images, same order with the provided images
nr, nc : int
the number of rows and columns of the input images
imageSize : ndarray
the shape of the output images
scanOr : ndarray
the xy points define the origins of the scan lines. Three dimensional
array, with the first dim the number of images, second dim (size 2)
the x and y, and third dim the number of rows of input images.
scanDir : ndarray
contains the xy vectors of which all rotated images. Two dimensional
array, with the first dim the number of images and second dim the
vectors (size 2, with x and y)
scanLines : ndarray
the input images as a stack. Three dimensional array, with the first
dim the number of images, second and third dim the row and column
imageTransform : ndarray
the regenerated image with the current scan line origins. Three
dimensional, the first dim the number of images, the second and third
dim store the regenerated image with shape imageSize
imageDensity : ndarray
the density of the regenerated image during interpolation with the raw
images. Three dimensional, the first dim the number of images, the
second and third dim store the density with shape imageSize
linearSearchScores : ndarray
the correlation score during the search of linear drifts. Three
dimensional, the first dim the number of images, the second and third
dim correspond to the search grid position
xyLinearDrift : ndarray
two values, the x and y linear drift found
ref : ndarray
two values, the x and y coordinates of the reference point during
alignment
scanActive : ndarray
contains the active position (bool array) of scan line used for
alignment
stats : ndarray
the mean absolute difference of each alignment steps during final
alignment. Two dimensional, first dim the number of alignment and
second dim the mean absolute difference.
"""
def __init__(self, scanAngles, images, KDEsigma=1/2, edgeWidth=1/128,
paddingScale=1.125, imageRef=None):
"""
Parameters
----------
scanAngles : array-like
the scan angles in degrees, the same order as provided images.
images : array-like
provided images, can be a sequence of images (e.g. img1, img2,
img3, im4) or as a stack (three-dimensional structure with
navigation index first). When a stack is provided, no check is
performed to ensure the first index is the navigation.
KDEsigma : float, optional
the smoothing between pixels when regenerating images for KDE. The
default is 1/2.
edgeWidth : float, optional
size of edge blending relative to input images. The default is
1/128.
paddingScale : float, optional
padding amount for scaling of the output.
imageRef : array-like, optional
a reference image to compare with when performing alignment. The
default is None.
"""
self.KDEsigma = KDEsigma
self.edgeWidth = edgeWidth
self.paddingScale = paddingScale
if imageRef is None:
self.imageRef = None
else:
self.imageRef = np.asarray(imageRef)
# validate input
self._input_validation(scanAngles, images)
self.nr, self.nc = self.img_shape
# set the size of the output images
self.imageSize = np.floor(self.img_shape *
self.paddingScale/4 + 0.5).astype(int) * 4
# initialise scanOr and scanDir
self.scanOr = np.zeros((self.numImages, 2, self.nr))
self.scanDir = np.zeros((self.numImages, 2))
# save raw data to scanLines
if self.isStack:
self.scanLines = images[0]
else:
self.scanLines = np.empty((self.numImages, *self.img_shape))
for k, im in enumerate(images):
self.scanLines[k, :, :] = im
# calculate the scan line origins
self._set_scanOr_scanDir()
self.imageTransform = np.zeros((self.numImages, *self.imageSize))
self.imageDensity = np.zeros((self.numImages, *self.imageSize))
self.linearSearchScores = None
self.xyLinearDrift = None
self.ref = np.floor(self.imageSize/2 + 0.5).astype(int) - 1
self.scanActive = None
self.stats = None
def _input_validation(self, scanAngles, images):
"""Determine whether provided images is a stack, the shapes of them
and number of images, and some checks on input data
"""
# ensure tuple if not passed from multiple args
images = tuple(images)
if len(images) == 1:
# 3D stack, navigation index first
images = images[0]
if images.ndim != 3:
raise ValueError('A stack of image is expected.')
self.isStack = True
self.img_shape = np.asarray(images.shape[1:])
self.numImages = images.shape[0]
elif len(images) > 1:
# image sequence
shapes = np.asarray([arr.shape for arr in images])
shape_equal = (shapes[0,0] == shapes[:, 0]).all() &\
(shapes[0,1] == shapes[:, 1]).all()
if not shape_equal:
msg = 'The provided images are not of the same shape'
raise ValueError(msg)
self.isStack = False
self.img_shape = shapes[0,:]
self.numImages = len(images)
self.scanAngles = np.asarray(scanAngles)
if self.scanAngles.size != self.numImages:
msg = ('The number of scanning angles ({}) does not match the '
'number of images ({})')
raise ValueError(msg.format(self.scanAngles.size, self.numImages))
return
def _set_scanOr_scanDir(self):
"""Set pixel origins and scan direction
"""
scanAngles_rad = np.deg2rad(self.scanAngles)
for k in range(self.numImages):
# initialise origins of rows
# zero in Python
xy = np.zeros((2, self.nr))
xy[0, :] = np.arange(self.nr)
# coordinates offset by half before rotation
xy[0, :] -= self.img_shape[0] / 2
xy[1, :] -= self.img_shape[1] / 2
# rotate the coordinates above the origin
# the 'origin' is different in MATLAB due to 0 and 1 indexing
# to accommodate this the points are translated by 1 in rotation
ang = scanAngles_rad[k]
rotM = np.array([[np.cos(ang), -np.sin(ang)],
[np.sin(ang), np.cos(ang)]])
xy = rotM @ (xy+1) - 1
# cancel the offset after rotation
xy[0, :] += self.imageSize[0] / 2
xy[1, :] += self.imageSize[1] / 2
# shift coordinates by fractional part of the first one
# ensure first coordinate always integers (why?)
xy[0, :] -= xy[0, 0] % 1
xy[1, :] -= xy[1, 0] % 1
self.scanOr[k, ...] = xy
self.scanDir[k, :] = [np.cos(ang+np.pi/2), np.sin(ang+np.pi/2)]
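# Example construction (a minimal sketch; the random image data and the
# 0/90 degree scan angles are purely illustrative):
#
#     import numpy as np
#     img0 = np.random.rand(128, 128)
#     img90 = np.random.rand(128, 128)
#     sm = sMerge((0, 90), (img0, img90))  # scan angles first, then the images
#     print(sm.numImages, sm.imageSize, sm.scanDir)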
/scanning_drift_corr-1.0.1.tar.gz/scanning_drift_corr-1.0.1/src/scanning_drift_corr/sMerge.py
import numpy as np
from scipy.ndimage import gaussian_filter, distance_transform_edt
def distance_transform(binary_image):
""" Same as bwdist in MATLAB, computes the Euclidean distance transform
of the binary image. For each pixel, the distance transform assigns a
number that is the distance between that pixel and the nearest nonzero
pixel of the binary image.
Parameters
----------
binary_image : array-like
the binary image
Returns
-------
ndarray
the distance transform.
"""
binary_image = np.asarray(binary_image, dtype=bool)
if np.any(binary_image):
return distance_transform_edt(~binary_image)
else:
return np.full(binary_image.shape, np.inf)
def bilinear_interpolation(scanLines, scanOr, scanDir, imageSize,
indLines=None, upsampleFactor=1):
"""Bilinear interpolation, ported from MATLAB
Parameters
----------
scanLines : array-like
the original raw image at this scanning direction
scanOr : array-like
the scan line origins of the image
scanDir : array-like
the scanning direction
imageSize : array-like
the new image size
indLines : array-like
contains position of active scan line origins
upsampleFactor : float
the oversampling ratio
Returns
-------
sig : ndarray
the interpolated signal
count : ndarray
the count of weights in interpolating signal
"""
scanLines = np.asarray(scanLines)
scanOr = np.asarray(scanOr)
scanDir = np.asarray(scanDir)
imageSize = np.asarray(imageSize)
nr, nc = scanLines.shape
if indLines is None:
# use all rows
indLines = np.ones(nr, dtype=bool)
else:
indLines = np.asarray(indLines, dtype=bool)
# Expand coordinates
t = np.arange(1, nc+1)
x0 = scanOr[0, indLines][:,None]
y0 = scanOr[1, indLines][:,None]
# plus here to shift in Python's coordinate system
xInd = x0*upsampleFactor + (upsampleFactor-1)/2 +\
(t*scanDir[0])*upsampleFactor
yInd = y0*upsampleFactor + (upsampleFactor-1)/2 +\
(t*scanDir[1])*upsampleFactor
# initialise empty array
w = np.empty((4, xInd.size), dtype=float)
xAll = np.empty((4, xInd.size), dtype=int)
yAll = np.empty((4, xInd.size), dtype=int)
# Prevent pixels from leaving image boundaries
xInd = np.core.umath.clip(xInd, 0, (imageSize[0]*upsampleFactor)-2).ravel()
yInd = np.core.umath.clip(yInd, 0, (imageSize[1]*upsampleFactor)-2).ravel()
imgsize = imageSize*upsampleFactor
# Convert to bilinear interpolants and weights
# xAll/yAll have 4 rows, each represent the interpolants of the pixel of
# the image which as are column vec (column size is raw data size)
xIndF = np.floor(xInd).astype(int)
yIndF = np.floor(yInd).astype(int)
# remove vstack
# xAll = np.vstack([xIndF, xIndF+1, xIndF, xIndF+1])
# yAll = np.vstack([yIndF, yIndF, yIndF+1, yIndF+1])
xAll[0, :] = xIndF
xAll[1, :] = xIndF+1
xAll[2, :] = xIndF
xAll[3, :] = xIndF+1
yAll[0, :] = yIndF
yAll[1, :] = yIndF
yAll[2, :] = yIndF+1
yAll[3, :] = yIndF+1
dx = xInd - xIndF
dy = yInd - yIndF
# remove vstack
# w = np.vstack([(1-dx)*(1-dy), dx*(1-dy), (1-dx)*dy, dx*dy])
w[0, :] = (1-dx)*(1-dy)
w[1, :] = dx*(1-dy)
w[2, :] = (1-dx)*dy
w[3, :] = dx*dy
# indAll in MATLAB is from sub2ind
# instead of np.ravel_multi_index((xAll, yAll), imgsize)
# plain calculation is quicker, why?
indAll = yAll + xAll*imgsize[-1]
indAll_ravel = indAll.ravel()
# get the active scan line from the raw image
image = scanLines[indLines, :]
# weigh the raw image for interpolation
wsig = w * image.ravel()
wcount = w
# Generate image and density
sig = np.bincount(indAll_ravel,
weights=wsig.ravel(),
minlength=imgsize.prod()).reshape(imgsize)
count = np.bincount(indAll_ravel,
weights=wcount.ravel(),
minlength=imgsize.prod()).reshape(imgsize)
return sig, count
def apply_KDE(img, KDEsigma, rmin=5):
"""Apply KDE
Parameters
----------
img : array-like
the image to be convolved
KDEsigma : float
the sigma value of the Gaussian kernel
rmin : int
the minimum radius to be truncated after convolution. The default is 5.
Returns
-------
imgconv : ndarray
the convolved image
"""
# set the truncated width
r = np.maximum(np.ceil(KDEsigma*3), rmin)
# the parameters match the behaviour of convolving a normalised Gaussian
# kernel in MATLAB
fargs = {'sigma' : KDEsigma,
'mode' : 'constant',
'cval' : 0,
'truncate' : r / KDEsigma}
imgconv = gaussian_filter(img, **fargs)
return imgconv
def hybrid_correlation(img1, img2, padxy=None):
"""hybrid correlation betweeen two images, assuming the same size
Parameters
----------
img1, img2 : array-like
the images to be correlated
padxy : array-like
the padding (zero) in x and y dimensions. The default is [0, 0].
Returns
-------
Icorr : ndarray
the correlation between the two images
"""
if padxy is None:
padxy = np.array([0, 0])
else:
padxy = np.asarray(padxy)
# get the row and column of the un-padded image
nr, nc = np.asarray(img1.shape) - padxy
w2 = _hanning_weight(nr, nc, padxy)
m1 = np.fft.fft2(w2 * img1)
m2 = np.fft.fft2(w2 * img2)
m = m1 * m2.conj()
magnitude = np.sqrt(np.abs(m))
phase = np.exp(1j*np.angle(m))
Icorr = np.fft.ifft2(magnitude * phase).real
return Icorr
def _hanning_weight(nr, nc, padw):
"""Get the Hanning window for smoothing before Fourier transform
"""
# chop off 0 to be consistent with the MATLAB hanningLocal
hanning = np.hanning(nc + 2)[1:-1] * np.hanning(nr + 2)[1:-1][:, None]
shifts = np.floor(padw / 2 + 0.5).astype(int)
padded = np.pad(hanning, ((0, padw[0]), (0, padw[1])),
mode='constant', constant_values=0)
w2 = np.roll(padded, shifts, axis=(0,1))
return w2
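# Example check (a minimal sketch): the hybrid correlation of an image with
# itself should peak at the zero-shift position.
#
#     import numpy as np
#     img = np.random.rand(64, 64)
#     corr = hybrid_correlation(img, img)
#     print(np.unravel_index(np.argmax(corr), corr.shape))  # expected: (0, 0)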
/scanning_drift_corr-1.0.1.tar.gz/scanning_drift_corr-1.0.1/src/scanning_drift_corr/tools.py
import warnings
import numpy as np
from scipy.ndimage.morphology import binary_dilation
from scanning_drift_corr.SPmakeImage import SPmakeImage
def SPmerge02_initial(sm, **kwargs):
"""Initial alignment
Parameters
----------
sm : sMerge object
the sMerge object contains all the data.
densityCutoff : float, optional
density cutoff for image boundaries (norm. to 1). Default to 0.8.
distStart : float, optional
radius of # of scanlines used for initial alignment. Default to
mean of raw data divided by 16.
initialShiftMaximum : float, optional
maximum number of pixels shifted per line for the initial alignment
step. This value should have a maximum of 1, but can be set lower
to stabilize initial alignment. Default to 0.25.
"""
# ignore unknown input arguments
_args_list = ['densityCutoff', 'distStart', 'initialShiftMaximum']
for key in kwargs.keys():
if key not in _args_list:
msg = "The argument '{}' is not recognised, and it is ignored."
warnings.warn(msg.format(key), RuntimeWarning)
meanScanLines = np.mean(sm.scanLines.shape[1:])
densityCutoff = kwargs.get('densityCutoff', 0.8)
distStart = kwargs.get('distStart', meanScanLines/16)
initialShiftMaximum = kwargs.get('initialShiftMaximum', 1/4)
# Rough initial alignment of scanline origins, to nearest pixel
sm.scanActive = np.zeros((sm.numImages, sm.nr), dtype=bool)
indStart = _get_starting_scanlines(sm, distStart)
for k in range(sm.numImages):
# get alignment image for the current image, based on orthogonality
imageAlign = _get_reference_image(sm, k, densityCutoff)
imageAlign = imageAlign.ravel()
# align origins and get the step size
dOr = sm.scanOr[k, :, 1:] - sm.scanOr[k, :, :-1]
xyStep = np.mean(dOr, axis=1)
# set the aligned indices for this image
indAligned = np.zeros(sm.nr, dtype=bool)
indAligned[indStart[k]] = True
        # start the alignment and repeat until all scanlines have been aligned
dxy = np.array([[0,1,-1,0,0], [0,0,0,1,-1]])
while not indAligned.all():
# Determine scanline indices to check next
            # indsMove contains the indices of scanlines to check
# indsActive contains the indices of currently active scanlines
inds = np.arange(sm.nr)
v = binary_dilation(indAligned)
v[indAligned] = False
indsMove = inds[v]
indsActive = inds[indAligned]
# loop over each selected scan line
for m in indsMove:
# determine starting point from neighboring scanline
xyOr = _get_xyOr(sm, k, m, indsActive, xyStep)
# score each of the moved selected scan line against the
# reference image (imageAlign)
score = np.zeros(dxy.shape[1])
raw_scanline = sm.scanLines[k, m, :]
for p in range(dxy.shape[1]):
xymove = dxy[:, p]
score[p] = _get_score(sm, imageAlign, xyOr, k, xymove,
raw_scanline)
# move the scan line
ind = np.argmin(score)
sm.scanOr[k, :, m] = xyOr + dxy[:, ind]*initialShiftMaximum
indAligned[m] = True
return
def _get_starting_scanlines(sm, distStart):
""" Get starting scanlines for initial alignment
indStart is an array containing the index of the starting scanline for
each image
"""
indStart = np.zeros(sm.numImages, dtype=int)
for k in range(sm.numImages):
# Scan line direction and origins
v = np.array([-sm.scanDir[k, 1], sm.scanDir[k, 0]])
or_ = sm.scanOr[k, ...]
# Determine closest scanline origin from point-line distance
c = -np.sum(sm.ref*v)
dist = np.abs(v[0]*or_[0,:] + v[1]*or_[1,:] + c) / np.linalg.norm(v)
indStart[k] = np.argmin(dist)
sub = dist < distStart
sm.scanActive[k, sub] = True
return indStart
def _get_reference_image(sm, k, densityCutoff):
"""Generate alignment image, use the most orthogonal image to current one
unless user has specified a reference image.
"""
    # each entry of ortho is the dot product of scan directions; it is 0 if
    # the two directions are exactly orthogonal
ortho = (sm.scanDir[k, :] * sm.scanDir).sum(axis=1)
indAlign = np.argmin(np.abs(ortho))
if sm.imageRef is None:
sm = SPmakeImage(sm, indAlign, sm.scanActive[indAlign, :])
dens_cut = sm.imageDensity[indAlign, ...] > densityCutoff
imageAlign = sm.imageTransform[indAlign, ...] * dens_cut
else:
imageAlign = sm.imageRef
return imageAlign
def _get_xyOr(sm, k, m, indsActive, xyStep):
"""Determine starting point from neighboring scanline
"""
minDistInd = np.argmin(np.abs(m - indsActive))
# Step perpendicular to scanDir (orthogonality)
indMin = indsActive[minDistInd]
xyOr = sm.scanOr[k, :, indMin] + xyStep * (m - indMin)
return xyOr
def _get_score(sm, imageAlign, xyOr, k, xymove, raw_scanline):
"""Refine score by moving origin of this scanline
"""
t = np.arange(1, sm.nc+1)
xInd = np.floor(xyOr[0] + t*sm.scanDir[k, 0] + 0.5).astype(int)
yInd = np.floor(xyOr[1] + t*sm.scanDir[k, 1] + 0.5).astype(int)
# move the scan line
dx, dy = xymove
nxInd = xInd + dx
nyInd = yInd + dy
# Prevent pixels from leaving image boundaries
nxInd = np.core.umath.clip(nxInd, 0, sm.imageSize[0]-2).ravel()
nyInd = np.core.umath.clip(nyInd, 0, sm.imageSize[1]-2).ravel()
# same as np.ravel_multi_index((nxInd, nyInd), sm.imageSize)
# but quicker, why?
rInd = nyInd + nxInd*sm.imageSize[-1]
# calculate the score after moving the scanline
score = np.abs(imageAlign[rInd] - raw_scanline).sum()
return score
|
/scanning_drift_corr-1.0.1.tar.gz/scanning_drift_corr-1.0.1/src/scanning_drift_corr/SPmerge02_initial.py
| 0.820577 | 0.620075 |
SPmerge02_initial.py
|
pypi
|
import numpy as np
from scanning_drift_corr.SPmakeImage import SPmakeImage
from scanning_drift_corr.tools import distance_transform
def _globbal_phase_correlation(sm, scanOrStep, meanAbsDiff, densityCutoff,
                               densityDist, flagGlobalShiftIncrease,
minGlobalShift, refineInitialStep, alignStep,
flagReportProgress):
"""to prevent unit cell hopping
"""
# save current origins, step size and score
scanOrCurrent = sm.scanOr.copy()
scanOrStepCurrent = scanOrStep.copy()
meanAbsDiffCurrent = meanAbsDiff.copy()
# Align to windowed image 0 or imageRef
smooth, imageFFT1, vecAlign = _get_ref(sm, densityCutoff, densityDist)
# Align datasets 1 and higher to dataset 0, or align all images to imageRef
for k in vecAlign:
# simple phase correlation
phaseCorr = _phase_correlation(sm, k, densityCutoff, imageFFT1)
# Get peak maximum
xInd, yInd = np.unravel_index(phaseCorr.argmax(), phaseCorr.shape)
# Compute relative shifts
nr, nc = sm.imageSize
dx = (xInd + nr/2) % nr - nr/2
dy = (yInd + nc/2) % nc - nc/2
        # Only apply the shift if |dx| + |dy| is larger than minGlobalShift pixels
if (abs(dx) + abs(dy)) > minGlobalShift:
shiftApplied = _apply_shift(sm, k, dx, dy)
# Reset search values for this image if it is globally shifted
if shiftApplied:
scanOrStep[k, :] = refineInitialStep
if not flagGlobalShiftIncrease:
# Verify global shift did not make mean abs. diff. increase.
meanAbsDiffNew = _fraction_MD(sm, densityCutoff)
if meanAbsDiffNew < meanAbsDiffCurrent:
            # If the global shift decreased the mean absolute difference, keep it.
sm.stats[alignStep-1, :] = np.array([alignStep-1, meanAbsDiff])
else:
            # If the global shift increased the mean abs. diff., restore origins
            # and step sizes to their previous values.
sm.scanOr = scanOrCurrent
scanOrStep = scanOrStepCurrent
def _get_ref(sm, densityCutoff, densityDist):
# Align to windowed image 0 or imageRef
intensityMedian = np.median(sm.scanLines)
cut = sm.imageDensity[0, ...] < densityCutoff
min_d = np.minimum(distance_transform(cut) / densityDist, 1)
densityMask = np.sin(min_d * np.pi/2)**2
if sm.imageRef is None:
smooth = sm.imageTransform[0,...]*densityMask +\
(1-densityMask)*intensityMedian
imageFFT1 = np.fft.fft2(smooth)
vecAlign = range(1, sm.numImages)
else:
smooth = sm.imageRef*densityMask + (1-densityMask)*intensityMedian
imageFFT1 = np.fft.fft2(smooth)
vecAlign = range(sm.numImages)
return smooth, imageFFT1, vecAlign
def _phase_correlation(sm, k, densityCutoff, imageFFT1):
"""correlate the phase of current image with reference image
"""
# Simple phase correlation
intensityMedian = np.median(sm.scanLines)
cut = sm.imageDensity[k, ...] < densityCutoff
min_d = np.minimum(distance_transform(cut) / 64, 1)
densityMask = np.sin(min_d * np.pi/2)**2
smooth = sm.imageTransform[k,...]*densityMask +\
(1-densityMask)*intensityMedian
imageFFT2 = np.fft.fft2(smooth).conj()
phase = np.angle(imageFFT1*imageFFT2)
phaseCorr = np.abs(np.fft.ifft2(np.exp(1j*phase)))
return phaseCorr
def _apply_shift(sm, k, dx, dy):
"""apply the shift dx and dy, check if within image after global shift
"""
# apply global origin shift, if possible
xNew = sm.scanOr[k, 0, :] + dx
yNew = sm.scanOr[k, 1, :] + dy
# Verify shifts are within image boundaries
nr, nc = sm.imageSize
withinBoundary = (xNew.min() >= 0) & (xNew.max() < nr-2) &\
(yNew.min() >= 0) & (yNew.max() < nc-2)
if withinBoundary:
sm.scanOr[k, 0, :] = xNew
sm.scanOr[k, 1, :] = yNew
# Recompute image with new origins
sm = SPmakeImage(sm, k)
return withinBoundary
def _fraction_MD(sm, densityCutoff):
"""Get mean absolute difference as a fraction of the mean scanline
intensity.
"""
imgT_mean = sm.imageTransform.mean(axis=0)
Idiff = np.abs(sm.imageTransform - imgT_mean).mean(axis=0)
dmask = sm.imageDensity.min(axis=0) > densityCutoff
img_mean = np.abs(sm.scanLines).mean()
meanAbsDiff = Idiff[dmask].mean() / img_mean
return meanAbsDiff
|
/scanning_drift_corr-1.0.1.tar.gz/scanning_drift_corr-1.0.1/src/scanning_drift_corr/SPmerge02_phase_correlation.py
| 0.777807 | 0.666158 |
SPmerge02_phase_correlation.py
|
pypi
|
import numpy as np
from scanning_drift_corr.tools import distance_transform, \
bilinear_interpolation, apply_KDE
# Developer's use, for normal usage, should be 0. A small number to
# the 'count' array to be consistent with MATLAB while checking the
# logical of implementation.
DELTA = 0
def SPmakeImage(sMerge, indImage, indLines=None):
"""
This function generates a resampled scanning probe image with dimensions
    of imageSize, from an array of N scan lines given in scanLines
    (lines specified as image rows) and an array of Nx2 origins in scanOr.
    scanDir is a 2-element vector specifying the direction of the scan.
    All arrays are stored inside the sMerge object. indImage specifies the update index.
indLines is a vector of binary values specifying which lines to include.
Parameters
----------
sMerge : sMerge object
the sMerge object.
indImage : int
the index of the image to be transformed.
indLines : ndarray, optional
an array of binary values specifying which lines to include.
The default is None, set to use all rows.
Returns
-------
sMerge : sMerge object
the sMerge object.
"""
# perform bilinear interpolation
scanLines = sMerge.scanLines[indImage, ...]
scanOr = sMerge.scanOr[indImage, ...]
scanDir = sMerge.scanDir[indImage, :]
imageSize = sMerge.imageSize
sig, count = bilinear_interpolation(scanLines, scanOr, scanDir, imageSize,
indLines=indLines)
# Apply KDE
sig = apply_KDE(sig, sMerge.KDEsigma)
count = apply_KDE(count, sMerge.KDEsigma)
# cheat mode!
if DELTA:
count += DELTA
    # floating-point precision in MATLAB sometimes results in edge values being
    # evaluated as zero while they are not (this should not be a concern here)
sub = count > 0
sig[sub] /= count[sub]
sMerge.imageTransform[indImage, ...] = sig
# Estimate sampling density
bound = count == 0
bound[[0,-1], :] = True
bound[:, [0, -1]] = True
# MATLAB bwdist calculates 'the distance between that pixel and the
# nearest nonzero pixel', scipy version is more conventional, which is
# the reverse, use a wrapper to handle this
dt = distance_transform(bound)
dtmin = np.minimum(dt/sMerge.edgeWidth, 1)
sMerge.imageDensity[indImage, ...] = np.sin(dtmin*np.pi/2)**2
return sMerge
def makeImage(scanLines, scanOr, scanDir, imageSize, KDEsigma):
"""Generate the resampled image only by using data not in the sMerge
object.
Used in the parallel search.
"""
# perform bilinear interpolation
sig, count = bilinear_interpolation(scanLines, scanOr, scanDir, imageSize)
# Apply KDE
sig = apply_KDE(sig, KDEsigma)
count = apply_KDE(count, KDEsigma)
# cheat mode!
if DELTA:
count += DELTA
sub = count > 0
sig[sub] /= count[sub]
return sig
|
/scanning_drift_corr-1.0.1.tar.gz/scanning_drift_corr-1.0.1/src/scanning_drift_corr/SPmakeImage.py
| 0.888771 | 0.66651 |
SPmakeImage.py
|
pypi
|
# Scanorama
- [API example usage](#api-example-usage)
- [Full tutorial](#full-tutorial)
- [Installation](#installation)
- [Testing](#testing)
- [Troubleshooting](#troubleshooting)
## Overview
Scanorama enables batch-correction and integration of heterogeneous scRNA-seq datasets, which is described in the paper ["Efficient integration of heterogeneous single-cell transcriptomes using Scanorama"](https://www.nature.com/articles/s41587-019-0113-3) by Brian Hie, Bryan Bryson, and Bonnie Berger. This repository contains the Scanorama source code as well as scripts necessary for reproducing the results in the paper.
Scanorama is designed to be used in scRNA-seq pipelines downstream of noise-reduction methods, including those for imputation and highly-variable gene filtering. The results from Scanorama integration and batch correction can then be used as input to other tools for scRNA-seq clustering, visualization, and analysis.
Tools for data sketching can also greatly accelerate Scanorama integration, as described in the paper ["Geometric sketching compactly summarizes the single-cell transcriptomic landscape"](https://www.cell.com/cell-systems/fulltext/S2405-4712\(19\)30152-8) and implemented [here](https://github.com/brianhie/geosketch).
## API example usage
**Scanorama is part of [Scanpy's external API](https://scanpy.readthedocs.io/en/stable/generated/scanpy.external.pp.scanorama_integrate.html).** Consider using this API for easy integration with Scanpy.
Alternatively, parameter documentation using the base Scanorama package is provided in the Scanorama source code at the top of [`scanorama/scanorama.py`](scanorama/scanorama.py).
Here is example usage of Scanorama in Python:
```Python
# List of datasets (matrices of cells-by-genes):
datasets = [ list of scipy.sparse.csr_matrix or numpy.ndarray ]
# List of gene lists:
genes_list = [ list of list of string ]
import scanorama
# Integration.
integrated, genes = scanorama.integrate(datasets, genes_list)
# Batch correction.
corrected, genes = scanorama.correct(datasets, genes_list)
# Integration and batch correction.
integrated, corrected, genes = scanorama.correct(datasets, genes_list, return_dimred=True)
```
There are also wrappers that make it easy to use Scanorama with [scanpy's AnnData object](https://anndata.readthedocs.io/en/latest/):
```Python
# List of datasets:
adatas = [ list of scanpy.AnnData ]
import scanorama
# Integration.
scanorama.integrate_scanpy(adatas)
# Batch correction.
corrected = scanorama.correct_scanpy(adatas)
# Integration and batch correction.
corrected = scanorama.correct_scanpy(adatas, return_dimred=True)
```
The function `integrate_scanpy()` will simply add an entry into `adata.obsm` called `'X_scanorama'` for each `adata` in `adatas`. `obsm['X_scanorama']` contains the low dimensional embeddings as a result of integration, which can be used for KNN graph construction, visualization, and other downstream analysis.
The function `correct_scanpy()` is a little more involved -- it will create new `AnnData` objects and replace `adata.X` with the Scanorama-transformed cell-by-gene matrix, while keeping the other metadata in `adata` as well.
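For instance, here is a minimal sketch of pulling those embeddings out for downstream analysis (the stacking step below is illustrative and not part of the Scanorama API):
```Python
import numpy as np
import scanorama

# adatas: list of scanpy.AnnData objects, one per batch
scanorama.integrate_scanpy(adatas)

# Each adata now carries its integrated embedding in obsm['X_scanorama'];
# stack them to build, e.g., a joint KNN graph or UMAP across batches
X_integrated = np.concatenate(
    [adata.obsm['X_scanorama'] for adata in adatas], axis=0
)
```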
You can also call Scanorama from R using the [`reticulate`](https://rstudio.github.io/reticulate/) package (tested with R version 3.5.1 and reticulate version 1.10):
```R
# List of datasets (matrices of cells-by-genes):
datasets <- list( list of matrix )
# List of gene lists:
genes_list <- list( list of list of string )
library(reticulate)
scanorama <- import('scanorama')
# Integration.
integrated.data <- scanorama$integrate(datasets, genes_list)
# Batch correction.
corrected.data <- scanorama$correct(datasets, genes_list, return_dense=TRUE)
# Integration and batch correction.
integrated.corrected.data <- scanorama$correct(datasets, genes_list,
return_dimred=TRUE, return_dense=TRUE)
```
Note that `reticulate` has trouble returning sparse matrices, so you should set the `return_dense` flag to `TRUE` (which returns the corrected data as R `matrix` objects) when attempting to use Scanorama's `correct()` method in R. This will increase memory usage, however, especially for very large datasets.
## Full tutorial
For step-by-step tutorials on how Scanorama can integrate into a full single-cell analysis pipeline, there are a few excellent resources made available by the community of Scanorama users.
Here is a simple exercise for integrating three PBMC scRNA-seq datasets (by Åsa Björklund and Paulo Czarnewski):
https://nbisweden.github.io/workshop-scRNAseq/labs/compiled/scanpy/scanpy_03_integration.html
Here is a more advanced exercise for integrating scRNA-seq Visium spatial data (by Giovanni Palla):
https://scanpy-tutorials.readthedocs.io/en/latest/spatial/integration-scanorama.html
Our gratitude goes out to the creators of these tutorials!
## Installation
### Setup
You should be able to download Scanorama using `pip`:
```
pip install scanorama
```
If for some reason this doesn't work, you can also install from within the Scanorama repository:
```
git clone https://github.com/brianhie/scanorama.git
cd scanorama/
python setup.py install --user
```
If you are running inside an anaconda environment, first install annoy by doing:
```
conda install -c conda-forge python-annoy
```
## Examples from paper
### Dataset download
All of the data used in our study (around 4 GB) can be downloaded from http://cb.csail.mit.edu/cb/scanorama/data.tar.gz. Download and unpack this data with the command:
```
wget http://cb.csail.mit.edu/cb/scanorama/data.tar.gz
tar xvf data.tar.gz
```
A smaller version of the data (around 720 MB), including 26 heterogeneous datasets, can be similarly downloaded from http://scanorama.csail.mit.edu/data_light.tar.gz.
### Data processing
The script `bin/process.py` can handle two file formats. The first is a tab-delimited table format where the columns correspond to cells and the rows correspond to genes. A sample file looks something like:
```
gene cell_a cell_b
gene_1 10 10
gene_2 20 20
```
The second is a sparse matrix format used by 10X Genomics (example [here](http://cf.10xgenomics.com/samples/cell-exp/1.1.0/293t/293t_filtered_gene_bc_matrices.tar.gz)). This format has a directory where one file has a list of gene names (`genes.tsv`) and one file has a list of the nonzero transcript counts at certain gene/cell coordinates (`matrix.mtx`).
To ensure a consistent data format, the example scripts first process these raw files and save them in `.npz` files along with some related metadata. To generate these files, run the command:
```
python bin/process.py conf/panorama.txt
```
The corresponding `.npz` files will be saved in the `data/` directory.
New files can be processed by feeding them into `bin/process.py` via the command line or a configuration file, or by modifying the `data_names` variables at the top of `bin/config.py`.
### Panorama stitching
#### Toy datasets
For a good illustration of how Scanorama works, we can integrate three toy datasets: 293T cells, Jurkat cells, and a 50:50 293T:Jurkat mixture. To integrate these datasets, run:
```
python bin/293t_jurkat.py
```
By default, this prints a log reporting the alignments the algorithm has found between datasets and saves visualization images to a file in the repository's top-level directory.
#### 26 datasets
We can also stitch a much larger number of cells from many more datasets. To do this, run
```
python bin/integration_panorama.py conf/panorama.txt
```
to integrate the datasets or
```
python bin/panorama.py conf/panorama.txt
```
to batch correct the datasets as well. The collection of datasets to be integrated is specified in the config file `conf/panorama.txt`. Default parameters are listed at the top of `scanorama/scanorama.py`.
By default, this script will output a verbose log as it finds alignments and applies batch correction. At the end, it will automatically save t-SNE visualized images of the integrated result. The numpy matrices containing the batch-corrected datasets are also available (in memory) to integrate with other single cell pipelines and packages.
#### Runtime performance and memory requirements
Scanorama runs on multiple cores to speed up its computation; [here are some instructions](https://roman-kh.github.io/numpy-multicore/) to check if Python is making use of the benefits from multicore processing. Aligning and batch-correcting 105,476 cells across 26 datasets should complete in around 15 minutes with the process running on 10 cores. The memory usage should be under 8 GB for integration and under 26 GB for batch correction.
Note that the gradient descent portion of the t-SNE visualization step can take a very long time (a few hours) and require a lot of memory (around 30 GB) on more than 100k cells. Other methods for accelerating t-SNE could be used in place of the t-SNE implementation used in this pipeline, such as a faster C++ implementation of [t-SNE](https://github.com/lvdmaaten/bhtsne), [Multicore-TSNE](https://github.com/DmitryUlyanov/Multicore-TSNE), or [net-SNE](https://github.com/hhcho/netsne), a version of t-SNE that uses a neural network to reduce the time required for the gradient descent optimization procedure.
#### Additional analyses from paper
Scripts for performing additional analyses of the data are also available in the `bin/` directory.
## Scanorama implementation
For those interested in the algorithm implementation, `scanorama/scanorama.py` is the main file that handles the mutual nearest neighbors-based matching, batch correction, and panorama assembly.
## Testing
Unit tests require using [pytest](https://docs.pytest.org/en/latest/) and can be run with the command
```
python -m pytest tests
```
from the top-level directory.
## Troubleshooting
- Make sure the input matrices are cells-by-genes, not the transpose.
- For large dataset integration under memory constraints (e.g., if you run into a `MemoryError`), try lowering the `batch_size` parameter to improve memory usage and try sketch-based acceleration using the `sketch` parameter to `integrate()` to improve both memory usage and runtime; see the sketch after this list.
- Some users report "Illegal instruction" or "Segfault" errors using the most recent versions of the `annoy` package; Scanorama is tested with `annoy` version 1.11.5 on Ubuntu 18.04. To fix, pass `approx=False` to use scikit-learn's nearest neighbors matching.
- For the example scripts, be sure to run `bin/process.py` first, although this is not necessary if you are using Scanorama through the API.
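As a rough sketch of combining these two options (the specific values are illustrative, not recommendations; `sketch_method` and `sketch_max` are the sketching options used in `bin/mouse_brain_sketched.py`):
```Python
import scanorama

# datasets and genes_list as in the API examples above
integrated, genes = scanorama.integrate(
    datasets, genes_list,
    batch_size=2000,            # smaller batches lower peak memory usage
    sketch=True,                # enable sketch-based acceleration
    sketch_method='geosketch',
    sketch_max=2000,
)
```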
## Questions
For questions, please use the [GitHub Discussions](https://github.com/brianhie/scanorama/discussions) forum. For bugs or other problems, please file an [issue](https://github.com/brianhie/scanorama/issues).
|
/scanorama-1.7.3.tar.gz/scanorama-1.7.3/README.md
| 0.684159 | 0.990357 |
README.md
|
pypi
|
import numpy as np
from scanorama import *
from scipy.sparse import vstack
from sklearn.preprocessing import normalize, LabelEncoder
import sys
from time import time
from benchmark import write_table
from process import load_names, process
np.random.seed(0)
NAMESPACE = 'mouse_brain'
BATCH_SIZE = 10000
data_names = [
'data/mouse_brain/nuclei',
'data/mouse_brain/dropviz/Cerebellum_ALT',
'data/mouse_brain/dropviz/Cortex_noRep5_FRONTALonly',
'data/mouse_brain/dropviz/Cortex_noRep5_POSTERIORonly',
'data/mouse_brain/dropviz/EntoPeduncular',
'data/mouse_brain/dropviz/GlobusPallidus',
'data/mouse_brain/dropviz/Hippocampus',
'data/mouse_brain/dropviz/Striatum',
'data/mouse_brain/dropviz/SubstantiaNigra',
'data/mouse_brain/dropviz/Thalamus',
]
if __name__ == '__main__':
process(data_names, min_trans=100)
datasets, genes_list, n_cells = load_names(data_names)
t0 = time()
datasets_dimred, genes = integrate(
datasets, genes_list, ds_names=data_names,
batch_size=BATCH_SIZE,
)
print('Integrated panoramas in {:.3f}s'.format(time() - t0))
t0 = time()
datasets_dimred, datasets, genes = correct(
datasets, genes_list, ds_names=data_names,
return_dimred=True, batch_size=BATCH_SIZE,
)
print('Integrated and batch corrected panoramas in {:.3f}s'
.format(time() - t0))
labels = []
names = []
curr_label = 0
for i, a in enumerate(datasets_dimred):
labels += list(np.zeros(a.shape[0]) + curr_label)
names.append(data_names[i])
curr_label += 1
labels = np.array(labels, dtype=int)
mouse_brain_genes = [
'Gja1', 'Flt1', 'Gabra6', 'Syt1', 'Gabrb2', 'Gabra1',
'Meg3', 'Mbp', 'Rgs5',
]
# Downsample for visualization purposes
for i in range(len(data_names)):
ds = datasets_dimred[i]
rand_idx = np.random.choice(ds.shape[0], size=int(ds.shape[0]/10),
replace=False)
datasets_dimred[i] = ds[rand_idx, :]
datasets[i] = datasets[i][rand_idx, :]
embedding = visualize(datasets_dimred,
labels, NAMESPACE + '_ds', names,
gene_names=mouse_brain_genes, genes=genes,
gene_expr=vstack(datasets),
multicore_tsne=True,
image_suffix='.png')
np.savetxt('data/{}_embedding.txt'.format(NAMESPACE),
embedding, delimiter='\t')
cell_labels = (
open('data/cell_labels/mouse_brain_cluster.txt')
.read().rstrip().split()
)
le = LabelEncoder().fit(cell_labels)
cell_labels = le.transform(cell_labels)
cell_types = le.classes_
visualize(None,
cell_labels, NAMESPACE + '_type', cell_types,
embedding=embedding, image_suffix='.png')
|
/scanorama-1.7.3.tar.gz/scanorama-1.7.3/bin/mouse_brain.py
| 0.472683 | 0.322313 |
mouse_brain.py
|
pypi
|
# Modified by Brian Hie <[email protected]> to allow for multicore
# pairwise distance matrix computation.
# Original source code available at:
# https://github.com/scikit-learn/scikit-learn/blob/a24c8b46/sklearn/metrics/cluster/unsupervised.py
# Authors: Robert Layton <[email protected]>
# Arnaud Fouchet <[email protected]>
# Thierry Guillemot <[email protected]>
# License: BSD 3 clause
import numpy as np
from sklearn.utils import check_random_state
from sklearn.utils import check_X_y
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.preprocessing import LabelEncoder
def check_number_of_labels(n_labels, n_samples):
if not 1 < n_labels < n_samples:
raise ValueError("Number of labels is %d. Valid values are 2 "
"to n_samples - 1 (inclusive)" % n_labels)
def silhouette_score(X, labels, metric='euclidean', sample_size=None,
random_state=None, **kwds):
"""Compute the mean Silhouette Coefficient of all samples.
The Silhouette Coefficient is calculated using the mean intra-cluster
distance (``a``) and the mean nearest-cluster distance (``b``) for each
sample. The Silhouette Coefficient for a sample is ``(b - a) / max(a,
b)``. To clarify, ``b`` is the distance between a sample and the nearest
cluster that the sample is not a part of.
Note that Silhouette Coefficient is only defined if number of labels
is 2 <= n_labels <= n_samples - 1.
This function returns the mean Silhouette Coefficient over all samples.
To obtain the values for each sample, use :func:`silhouette_samples`.
The best value is 1 and the worst value is -1. Values near 0 indicate
overlapping clusters. Negative values generally indicate that a sample has
been assigned to the wrong cluster, as a different cluster is more similar.
Read more in the :ref:`User Guide <silhouette_coefficient>`.
Parameters
----------
X : array [n_samples_a, n_samples_a] if metric == "precomputed", or, \
[n_samples_a, n_features] otherwise
Array of pairwise distances between samples, or a feature array.
labels : array, shape = [n_samples]
Predicted labels for each sample.
metric : string, or callable
The metric to use when calculating distance between instances in a
feature array. If metric is a string, it must be one of the options
allowed by :func:`metrics.pairwise.pairwise_distances
<sklearn.metrics.pairwise.pairwise_distances>`. If X is the distance
array itself, use ``metric="precomputed"``.
sample_size : int or None
The size of the sample to use when computing the Silhouette Coefficient
on a random subset of the data.
If ``sample_size is None``, no sampling is used.
random_state : int, RandomState instance or None, optional (default=None)
The generator used to randomly select a subset of samples. If int,
random_state is the seed used by the random number generator; If
RandomState instance, random_state is the random number generator; If
None, the random number generator is the RandomState instance used by
`np.random`. Used when ``sample_size is not None``.
**kwds : optional keyword parameters
Any further parameters are passed directly to the distance function.
If using a scipy.spatial.distance metric, the parameters are still
metric dependent. See the scipy docs for usage examples.
Returns
-------
silhouette : float
Mean Silhouette Coefficient for all samples.
References
----------
.. [1] `Peter J. Rousseeuw (1987). "Silhouettes: a Graphical Aid to the
Interpretation and Validation of Cluster Analysis". Computational
and Applied Mathematics 20: 53-65.
<http://www.sciencedirect.com/science/article/pii/0377042787901257>`_
.. [2] `Wikipedia entry on the Silhouette Coefficient
<https://en.wikipedia.org/wiki/Silhouette_(clustering)>`_
"""
if sample_size is not None:
X, labels = check_X_y(X, labels, accept_sparse=['csc', 'csr'])
random_state = check_random_state(random_state)
indices = random_state.permutation(X.shape[0])[:sample_size]
if metric == "precomputed":
X, labels = X[indices].T[indices].T, labels[indices]
else:
X, labels = X[indices], labels[indices]
return np.mean(silhouette_samples(X, labels, metric=metric, **kwds))
def silhouette_samples(X, labels, metric='euclidean', **kwds):
"""Compute the Silhouette Coefficient for each sample.
The Silhouette Coefficient is a measure of how well samples are clustered
with samples that are similar to themselves. Clustering models with a high
Silhouette Coefficient are said to be dense, where samples in the same
cluster are similar to each other, and well separated, where samples in
different clusters are not very similar to each other.
The Silhouette Coefficient is calculated using the mean intra-cluster
distance (``a``) and the mean nearest-cluster distance (``b``) for each
sample. The Silhouette Coefficient for a sample is ``(b - a) / max(a,
b)``.
Note that Silhouette Coefficient is only defined if number of labels
is 2 <= n_labels <= n_samples - 1.
This function returns the Silhouette Coefficient for each sample.
The best value is 1 and the worst value is -1. Values near 0 indicate
overlapping clusters.
Read more in the :ref:`User Guide <silhouette_coefficient>`.
Parameters
----------
X : array [n_samples_a, n_samples_a] if metric == "precomputed", or, \
[n_samples_a, n_features] otherwise
Array of pairwise distances between samples, or a feature array.
labels : array, shape = [n_samples]
label values for each sample
metric : string, or callable
The metric to use when calculating distance between instances in a
feature array. If metric is a string, it must be one of the options
allowed by :func:`sklearn.metrics.pairwise.pairwise_distances`. If X is
the distance array itself, use "precomputed" as the metric.
**kwds : optional keyword parameters
Any further parameters are passed directly to the distance function.
If using a ``scipy.spatial.distance`` metric, the parameters are still
metric dependent. See the scipy docs for usage examples.
Returns
-------
silhouette : array, shape = [n_samples]
        Silhouette Coefficient for each sample.
References
----------
.. [1] `Peter J. Rousseeuw (1987). "Silhouettes: a Graphical Aid to the
Interpretation and Validation of Cluster Analysis". Computational
and Applied Mathematics 20: 53-65.
<http://www.sciencedirect.com/science/article/pii/0377042787901257>`_
.. [2] `Wikipedia entry on the Silhouette Coefficient
<https://en.wikipedia.org/wiki/Silhouette_(clustering)>`_
"""
X, labels = check_X_y(X, labels, accept_sparse=['csc', 'csr'])
le = LabelEncoder()
labels = le.fit_transform(labels)
check_number_of_labels(len(le.classes_), X.shape[0])
distances = pairwise_distances(X, metric=metric, **kwds)
unique_labels = le.classes_
n_samples_per_label = np.bincount(labels, minlength=len(unique_labels))
# For sample i, store the mean distance of the cluster to which
# it belongs in intra_clust_dists[i]
intra_clust_dists = np.zeros(distances.shape[0], dtype=distances.dtype)
# For sample i, store the mean distance of the second closest
# cluster in inter_clust_dists[i]
inter_clust_dists = np.inf + intra_clust_dists
for curr_label in range(len(unique_labels)):
# Find inter_clust_dist for all samples belonging to the same
# label.
mask = labels == curr_label
current_distances = distances[mask]
# Leave out current sample.
n_samples_curr_lab = n_samples_per_label[curr_label] - 1
if n_samples_curr_lab != 0:
intra_clust_dists[mask] = np.sum(
current_distances[:, mask], axis=1) / n_samples_curr_lab
# Now iterate over all other labels, finding the mean
# cluster distance that is closest to every sample.
for other_label in range(len(unique_labels)):
if other_label != curr_label:
other_mask = labels == other_label
other_distances = np.mean(
current_distances[:, other_mask], axis=1)
inter_clust_dists[mask] = np.minimum(
inter_clust_dists[mask], other_distances)
sil_samples = inter_clust_dists - intra_clust_dists
sil_samples /= np.maximum(intra_clust_dists, inter_clust_dists)
# score 0 for clusters of size 1, according to the paper
sil_samples[n_samples_per_label.take(labels) == 1] = 0
return sil_samples
def calinski_harabaz_score(X, labels):
"""Compute the Calinski and Harabaz score.
    The score is defined as the ratio between the within-cluster dispersion and
    the between-cluster dispersion.
Read more in the :ref:`User Guide <calinski_harabaz_index>`.
Parameters
----------
X : array-like, shape (``n_samples``, ``n_features``)
List of ``n_features``-dimensional data points. Each row corresponds
to a single data point.
labels : array-like, shape (``n_samples``,)
Predicted labels for each sample.
Returns
-------
score : float
The resulting Calinski-Harabaz score.
References
----------
.. [1] `T. Calinski and J. Harabasz, 1974. "A dendrite method for cluster
analysis". Communications in Statistics
<http://www.tandfonline.com/doi/abs/10.1080/03610927408827101>`_
"""
X, labels = check_X_y(X, labels)
le = LabelEncoder()
labels = le.fit_transform(labels)
n_samples, _ = X.shape
n_labels = len(le.classes_)
check_number_of_labels(n_labels, n_samples)
extra_disp, intra_disp = 0., 0.
mean = np.mean(X, axis=0)
for k in range(n_labels):
cluster_k = X[labels == k]
mean_k = np.mean(cluster_k, axis=0)
extra_disp += len(cluster_k) * np.sum((mean_k - mean) ** 2)
intra_disp += np.sum((cluster_k - mean_k) ** 2)
return (1. if intra_disp == 0. else
extra_disp * (n_samples - n_labels) /
(intra_disp * (n_labels - 1.)))
|
/scanorama-1.7.3.tar.gz/scanorama-1.7.3/bin/unsupervised.py
| 0.971279 | 0.681747 |
unsupervised.py
|
pypi
|
from sklearn.linear_model import LinearRegression
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
n_cells = np.reshape(np.array([
10547,
26369,
52738,
105476
]), (4, 1))
pano_memory = np.reshape(np.array([
1.1,
2.8,
6.1,
12.9
]), (4, 1))
cca_memory = np.reshape(np.array([
5.0,
12.9,
26.1,
54.2
]), (4, 1))
mnn_memory = np.reshape(np.array([
5.1,
13.1,
25.7,
49.5
]), (4, 1))
pano_runtime = np.reshape(np.array([
40.9,
110.1,
204.8,
469.6,
]) / 3600., (4, 1))
cca_runtime = np.reshape(np.array([
9724.1,
24622.1,
49243.7,
99683.4,
]) / 3600., (4, 1))
mnn_runtime = np.reshape(np.array([
15669.2,
39899.7,
78677.3,
157212.6
]) / 3600., (4, 1))
line_x = np.array(range(n_cells[-1]), dtype=int)
line_x = line_x.reshape(-1, 1)
# Memory plot.
plt.figure()
plt.plot(line_x, LinearRegression().fit(n_cells, pano_memory)
.predict(line_x), 'k')
pano = plt.scatter(n_cells, pano_memory, marker='o')
plt.plot(line_x, LinearRegression().fit(n_cells, cca_memory)
.predict(line_x), 'k')
cca = plt.scatter(n_cells, cca_memory, marker='^')
plt.plot(line_x, LinearRegression().fit(n_cells, mnn_memory)
.predict(line_x), 'k')
mnn = plt.scatter(n_cells, mnn_memory, marker='s')
plt.legend((pano, cca, mnn),
('Scanorama', 'Seurat CCA', 'scran MNN'))
plt.xlabel('Number of cells')
plt.ylabel('Memory (GB)')
plt.savefig('benchmark_memory.svg')
# Runtime plot.
plt.figure()
plt.plot(line_x, LinearRegression().fit(n_cells, pano_runtime)
.predict(line_x), 'k')
pano = plt.scatter(n_cells, pano_runtime, marker='o')
plt.plot(line_x, LinearRegression().fit(n_cells, cca_runtime)
.predict(line_x), 'k')
cca = plt.scatter(n_cells, cca_runtime, marker='^')
plt.plot(line_x, LinearRegression().fit(n_cells, mnn_runtime)
.predict(line_x), 'k')
mnn = plt.scatter(n_cells, mnn_runtime, marker='s')
plt.legend((pano, cca, mnn),
('Scanorama', 'Seurat CCA', 'scran MNN'))
plt.yscale('log')
plt.xlabel('Number of cells')
plt.ylabel('Runtime (hours)')
plt.savefig('benchmark_runtime.svg')
|
/scanorama-1.7.3.tar.gz/scanorama-1.7.3/bin/plot_resources.py
| 0.798147 | 0.567757 |
plot_resources.py
|
pypi
|
import numpy as np
from scanorama import *
from scipy.sparse import vstack
from sklearn.preprocessing import normalize, LabelEncoder
import sys
from time import time
from benchmark import write_table
from process import load_names, process
np.random.seed(0)
NAMESPACE = 'mouse_brain_sketched'
BATCH_SIZE = 10000
data_names = [
'data/mouse_brain/nuclei',
'data/mouse_brain/dropviz/Cerebellum_ALT',
'data/mouse_brain/dropviz/Cortex_noRep5_FRONTALonly',
'data/mouse_brain/dropviz/Cortex_noRep5_POSTERIORonly',
'data/mouse_brain/dropviz/EntoPeduncular',
'data/mouse_brain/dropviz/GlobusPallidus',
'data/mouse_brain/dropviz/Hippocampus',
'data/mouse_brain/dropviz/Striatum',
'data/mouse_brain/dropviz/SubstantiaNigra',
'data/mouse_brain/dropviz/Thalamus',
]
if __name__ == '__main__':
process(data_names, min_trans=100)
datasets, genes_list, n_cells = load_names(data_names)
datasets_merged, genes = merge_datasets(datasets[:], genes_list)
t0 = time()
datasets_dimred, genes = integrate(
datasets, genes_list, ds_names=data_names,
sketch=True, sketch_method='geosketch', sketch_max=2000,
)
print('Sketched and integrated panoramas in {:.3f}s'
.format(time() - t0))
datasets = datasets_merged
names = []
for i, a in enumerate(datasets_dimred):
names.append(data_names[i])
mouse_brain_genes = [
'Gja1', 'Flt1', 'Gabra6', 'Syt1', 'Gabrb2', 'Gabra1',
'Meg3', 'Mbp', 'Rgs5',
]
# Downsample for visualization purposes
rand_idxs = []
labels = []
curr_label = 0
for i in range(len(data_names)):
ds = datasets_dimred[i]
rand_idx = np.random.choice(ds.shape[0], size=int(ds.shape[0]/10),
replace=False)
datasets_dimred[i] = ds[rand_idx, :]
datasets[i] = datasets[i][rand_idx, :]
labels += list(np.zeros(datasets_dimred[i].shape[0]) + curr_label)
curr_label += 1
labels = np.array(labels, dtype=int)
embedding = visualize(datasets_dimred,
labels, NAMESPACE + '_ds', names,
gene_names=mouse_brain_genes, genes=genes,
gene_expr=vstack(datasets),
multicore_tsne=True,
image_suffix='.png')
cell_labels = (
open('data/cell_labels/mouse_brain_cluster.txt')
.read().rstrip().split()
)
le = LabelEncoder().fit(cell_labels)
cell_labels = le.transform(cell_labels)
cell_types = le.classes_
visualize(None,
cell_labels, NAMESPACE + '_type', cell_types,
embedding=embedding, image_suffix='.png')
|
/scanorama-1.7.3.tar.gz/scanorama-1.7.3/bin/mouse_brain_sketched.py
| 0.404743 | 0.36108 |
mouse_brain_sketched.py
|
pypi
|
import numpy as np
from scanorama import *
from scipy.stats import ttest_ind
from sklearn.metrics import silhouette_samples as sil
from process import load_names, process
def test_knn(datasets_dimred, genes, labels, idx, distr, xlabels):
knns = [ 5, 10, 50, 100 ]
len_distr = len(distr)
for knn in knns:
integrated = assemble(datasets_dimred[:], knn=knn, sigma=150)
X = np.concatenate(integrated)
distr.append(sil(X[idx, :], labels[idx]))
for d in distr[:len_distr]:
print(ttest_ind(np.ravel(X[idx, :]), np.ravel(d)))
xlabels.append(str(knn))
print('')
plt.figure()
plt.boxplot(distr, showmeans=True, whis='range')
plt.xticks(range(1, len(xlabels) + 1), xlabels)
plt.ylabel('Silhouette Coefficient')
plt.ylim((-1, 1))
plt.savefig('param_sensitivity_{}.svg'.format('knn'))
def test_sigma(datasets_dimred, genes, labels, idx, distr, xlabels):
sigmas = [ 10, 50, 100, 200 ]
len_distr = len(distr)
for sigma in sigmas:
integrated = assemble(datasets_dimred[:], sigma=sigma)
X = np.concatenate(integrated)
distr.append(sil(X[idx, :], labels[idx]))
for d in distr[:len_distr]:
print(ttest_ind(np.ravel(X[idx, :]), np.ravel(d)))
xlabels.append(str(sigma))
print('')
plt.figure()
plt.boxplot(distr, showmeans=True, whis='range')
plt.xticks(range(1, len(xlabels) + 1), xlabels)
plt.ylabel('Silhouette Coefficient')
plt.ylim((-1, 1))
plt.savefig('param_sensitivity_{}.svg'.format('sigma'))
def test_alpha(datasets_dimred, genes, labels, idx, distr, xlabels):
alphas = [ 0, 0.05, 0.20, 0.50 ]
len_distr = len(distr)
for alpha in alphas:
integrated = assemble(datasets_dimred[:], alpha=alpha, sigma=150)
X = np.concatenate(integrated)
distr.append(sil(X[idx, :], labels[idx]))
for d in distr[:len_distr]:
print(ttest_ind(np.ravel(X[idx, :]), np.ravel(d)))
xlabels.append(str(alpha))
print('')
plt.figure()
plt.boxplot(distr, showmeans=True, whis='range')
plt.xticks(range(1, len(xlabels) + 1), xlabels)
plt.ylabel('Silhouette Coefficient')
plt.ylim((-1, 1))
plt.savefig('param_sensitivity_{}.svg'.format('alpha'))
def test_approx(datasets_dimred, genes, labels, idx, distr, xlabels):
integrated = assemble(datasets_dimred[:], approx=False, sigma=150)
X = np.concatenate(integrated)
distr.append(sil(X[idx, :], labels[idx]))
len_distr = len(distr)
for d in distr[:len_distr]:
print(ttest_ind(np.ravel(X[idx, :]), np.ravel(d)))
xlabels.append('Exact NN')
print('')
plt.figure()
plt.boxplot(distr, showmeans=True, whis='range')
plt.xticks(range(1, len(xlabels) + 1), xlabels)
plt.ylabel('Silhouette Coefficient')
plt.ylim((-1, 1))
plt.savefig('param_sensitivity_{}.svg'.format('approx'))
def fit_tsne(X, perplexity=PERPLEXITY, n_iter=N_ITER,
learn_rate=200., early_exag=12.):
try:
from MulticoreTSNE import MulticoreTSNE
tsne = MulticoreTSNE(
            n_iter=n_iter, perplexity=perplexity,
learning_rate=learn_rate,
early_exaggeration=early_exag,
random_state=69,
n_jobs=40
)
except ImportError:
tsne = TSNEApprox(
            n_iter=n_iter, perplexity=perplexity,
learning_rate=learn_rate,
early_exaggeration=early_exag,
random_state=69,
)
tsne.fit(X)
embedding = tsne.embedding_
return embedding
def test_perplexity(datasets_dimred, genes, labels, idx,
distr, xlabels):
X = np.concatenate(datasets_dimred)
perplexities = [ 10, 100, 500, 2000 ]
len_distr = len(distr)
for perplexity in perplexities:
embedding = fit_tsne(X, perplexity=perplexity)
distr.append(sil(embedding[idx, :], labels[idx]))
for d in distr[:len_distr]:
print(ttest_ind(np.ravel(X[idx, :]), np.ravel(d)))
xlabels.append(str(perplexity))
print('')
plt.figure()
plt.boxplot(distr, showmeans=True, whis='range')
plt.xticks(range(1, len(xlabels) + 1), xlabels)
plt.ylabel('Silhouette Coefficient')
plt.ylim((-1, 1))
plt.savefig('param_sensitivity_{}.svg'.format('perplexity'))
def test_learn_rate(datasets_dimred, genes, labels, idx,
distr, xlabels):
X = np.concatenate(datasets_dimred)
learn_rates = [ 50., 100., 500., 1000. ]
len_distr = len(distr)
for learn_rate in learn_rates:
embedding = fit_tsne(X, learn_rate=learn_rate)
distr.append(sil(embedding[idx, :], labels[idx]))
for d in distr[:len_distr]:
print(ttest_ind(np.ravel(X[idx, :]), np.ravel(d)))
xlabels.append(str(learn_rate))
print('')
plt.figure()
plt.boxplot(distr, showmeans=True, whis='range')
plt.xticks(range(1, len(xlabels) + 1), xlabels)
plt.ylabel('Silhouette Coefficient')
plt.ylim((-1, 1))
plt.savefig('param_sensitivity_{}.svg'.format('learn_rate'))
if __name__ == '__main__':
with open('conf/panorama.txt') as f:
data_names = f.read().split()
labels = np.array(
open('data/cell_labels/all.txt').read().rstrip().split()
)
idx = range(labels.shape[0])
datasets, genes_list, n_cells = load_names(data_names)
datasets, genes = merge_datasets(datasets, genes_list)
datasets_dimred, genes = process_data(datasets, genes)
X = np.concatenate(datasets_dimred)
sil_non = sil(X[idx, :], labels[idx])
print(np.median(sil_non))
X = np.loadtxt('data/corrected_mnn.txt')
sil_mnn = sil(X[idx, :], labels[idx])
print(np.median(sil_mnn))
X = np.loadtxt('data/corrected_seurat.txt')
sil_cca = sil(X[idx, :], labels[idx])
print(np.median(sil_cca))
distr = [ sil_non, sil_mnn, sil_cca ]
xlabels = [ 'No correction', 'scran MNN', 'Seurat CCA' ]
# Test alignment parameters.
test_approx(datasets_dimred[:], genes, labels, idx, distr[:], xlabels[:])
test_alpha(datasets_dimred[:], genes, labels, idx, distr[:], xlabels[:])
test_knn(datasets_dimred[:], genes, labels, idx, distr[:], xlabels[:])
test_sigma(datasets_dimred[:], genes, labels, idx, distr[:], xlabels[:])
datasets_dimred = assemble(datasets_dimred)
# Test visualization parameters.
test_perplexity(datasets_dimred[:], genes, labels, idx, distr[:], xlabels[:])
test_learn_rate(datasets_dimred[:], genes, labels, idx, distr[:], xlabels[:])
|
/scanorama-1.7.3.tar.gz/scanorama-1.7.3/bin/param_sensitivity.py
| 0.421909 | 0.717663 |
param_sensitivity.py
|
pypi
|
# scanphyslog2bids
[](https://travis-ci.org/lukassnoek/scanphyslog2bids)
Code to convert Philips physiology files ("SCANPHYSLOG") to the BIDS-format, including the estimation of volume triggers using the logged gradient, volume markers, or by interpolation. It writes out BIDSified physio-files (as `*.tsv.gz` and associated `*.json` files). From there on, you can use other software to, for example, estimate RETROICOR/HRV/RVT regressors for nuisance regression. I recommend using the [PhysIO toolbox](https://github.com/translationalneuromodeling/tapas/tree/master/PhysIO) for this (worked really well for me in the past, see image below), but FSL's [PNM](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/PNM) should also work.

## Installation
I recommend installing the package from the master branch using `pip`:
```
pip install https://github.com/lukassnoek/scanphyslog2bids/archive/master.zip
```
Or clone this repository locally and install as follows:
```
python setup.py install
```
This package uses Python 3.6 or higher and depends on the following Python packages:
- nibabel
- numpy
- pandas
- matplotlib (optional for plots)
- click (for the CLI)
## Usage
This package comes with a Python interface and a CLI. See the code below for a minimal example using the Python interface:
```python
import nibabel as nib
import numpy as np
from scanphyslog2bids.core import PhilipsPhysioLog
log_file = 'my_scanphyslog_file.log'
out_dir = '~/my_bids_data'  # where the BIDSified data should be saved
deriv_dir = '~/my_bids_data/physio'  # where some QC plots should be saved
# fmri_file is used to extract metadata, such as TR and number of volumes
fmri_file = 'fmri_file_associated_with_scanphyslog.nii.gz'
fmri_img = nib.load(fmri_file)
n_dyns = fmri_img.shape[-1]
tr = np.round(fmri_img.header['pixdim'][4], 3)
# Create PhilipsPhysioLog object with info
phlog = PhilipsPhysioLog(f=log_file, tr=tr, n_dyns=n_dyns, sf=496, manually_stopped=False)
# Load in data, do some preprocessing
phlog.load()
# Try to align physio data with scan data, using a particular method
# (either "vol_markers", "gradient_log", or "interpolation")
phlog.align(trigger_method='gradient_log') # load and find vol triggers
# Write out BIDS files
phlog.to_bids(out_dir) # writes out .tsv.gz and .json files
# Optional: plot some QC graphs for alignment and actual traces
phlog.plot_alignment(out_dir=deriv_dir) # plots alignment with gradient
phlog.plot_traces(out_dir=deriv_dir) # plots cardiac/resp traces
```
The command line interface can be used as follows:
```
(base) lukas@uva:~/software/scanphyslog2bids$ scanphyslog2bids --help
Usage: scanphyslog2bids [OPTIONS]
Options:
--file TEXT Scanphyslog file to convert (mandatory)
--sf INTEGER Sampling rate (optional, default: 496)
--fmri TEXT Associated fmri file (optional, assuming ndyns and tr are given)
--ndyns INTEGER Number of dynamics/volumes (optional, assuming that fmri is given)
--tr FLOAT Repetition time of fmri scan (optional, assuming that fmri is given)
--manualstop Was the scan manually stopped? (optional, store True)
--triggermethod TEXT Method to detect triggers (optional, default: gradient_log)
--outdir TEXT Output directory for BIDS file (optional, default: parent-dir of phys-file)
--plottraces Whether to plot the traces (optional, store True)
--plotalignment Whether to plot the alignment (optional, store True)
--derivdir TEXT Derivatives directory (for plots) (optional, default:
--help Show this message and exit.
```
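An illustrative invocation (file names and values are made up) could look like this:
```
scanphyslog2bids --file sub-01_task-rest_scanphyslog.log --ndyns 240 --tr 2.0 \
    --triggermethod gradient_log --outdir ~/my_bids_data --plotalignment
```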
## Output
Apart from the BIDSified SCANPHYSLOG files (i.e., a `*.tsv.gz` with cardiac/respiratory/volume trigger traces and `*.json` file), the package allows for creating plots of the physio-scan alignment (using the `plot_alignment` method or the `--plotalignment` flag) and the actual respiratory and cardiac traces (using the `plot_traces` method or the `--plottraces` flag).
The alignment plot looks similar to the figure below, which visualizes the full gradient trace (if available) with the estimated volume triggers on top (first row), close-up views of the start and end of the trace (second/third row), where most alignment issues tend to arise, and a trace of the number of samples between (estimated) volume triggers (fourth row). The number of samples between triggers should generally not deviate by more than 2 samples across triggers.

You can also use this plot to detect funky gradients.
The "traces" plot allow you to do some visual QC on the respiratory and cardiac traces:

## Advice
This package should work for different "types" of SCANPHYSLOG files, but ideally they contain volume markers (ask your Philips technician to enable this), so that the actual volume onsets are known. If possible, use the `vol_markers` trigger method. However, in my experience, this feature is rarely enabled on Philips scanners.
If volume markers are not available, I recommend using the `gradient_log` method, which often works quite well, except for when your gradients are really funky (e.g., when you tilt the FOV a lot). For most (2D "ascending", i.e., inferior-superior or vice versa) scans, the "y" gradient direction works best to distill the volume onsets (use `which_grad='y'` when calling the `align` method).
If you don't have volume markers *and* the gradients were not logged (this seems to happen on some scanners), you can use the `interpolation` trigger method. It works by "interpolating" triggers backwards from the end of the file (specifically from the "end marker"). This is, however, definitely not foolproof, as the "end marker" in SCANPHYSLOG files does *not always* seem to coincide with the actual offset of the last volume. I have found that this "offset" between the end of the last volume and the end marker may vary from 5 samples (~0.01 seconds) to about 166 samples (~0.332 seconds) depending on your scanner's hardware or software (interface) system. When using the `interpolation` trigger method, you can control the assumed offset with the `offset_end_scan` parameter in the `align` method (for which the default is set to `20`).
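As a minimal sketch of these alternatives, building on the Python example above (the values shown are illustrative):
```python
# With logged gradients (the usual fallback); 'y' often works for 2D ascending scans
phlog.align(trigger_method='gradient_log', which_grad='y')

# Without volume markers or gradients: interpolate backwards from the end marker
phlog.align(trigger_method='interpolation', offset_end_scan=20)
```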
Also, make sure you use the right sampling frequency (the `sf` parameter). The default is set to 496, which is the sampling frequency for *wireless* physio recorders. If you use a wired physio recorder, the sampling frequency is (as I've been told) 500 Hz.
## Issues
Feel free to submit an issue when you encounter problems (or better yet: send a PR if you fixed the bug yourself).
|
/scanphyslog2bids-0.1.tar.gz/scanphyslog2bids-0.1/README.md
| 0.878445 | 0.943919 |
README.md
|
pypi
|
import click
import sys
class NaturalOrderGroup(click.Group):
"""Command group trying to list subcommands in the order they were added.
With decorator, use::
@click.group(cls=NaturalOrderGroup)
"""
def list_commands(self, ctx):
"""List command names as they are in commands dict.
If the dict is OrderedDict, it will preserve the order commands
were added.
"""
return self.commands.keys()
class CommaSeparatedText(click.ParamType):
"""
Comma separated text
"""
def __init__(self, dtype=click.STRING, simplify=False, length=None):
self.dtype = dtype
self.dtype_name = _get_type_name(dtype)
self.simplify = simplify
self.length = length
if length and length <= 3:
self.name = ",".join([f"{self.dtype_name}"] * length)
else:
self.name = "{}[,{}...]".format(self.dtype_name, self.dtype_name)
def convert(self, value, param, ctx):
"""
>>> @click.command()
... @click.option('--test-param')
... def test_cmd():
... pass
...
>>> ctx = click.Context(test_cmd)
>>> param = test_cmd.params[0]
>>> test_cst1 = CommaSeparatedText()
>>> test_cst2 = CommaSeparatedText(click.INT, length=2)
>>> test_cst3 = CommaSeparatedText(click.FLOAT, simplify=True)
>>>
>>> test_cst1.convert(None, param, ctx)
>>> test_cst2.convert('7,2', param, ctx)
[7, 2]
>>> test_cst2.convert('7.2', param, ctx)
Traceback (most recent call last):
...
click.exceptions.BadParameter: 7.2 is not a valid integer
>>> test_cst2.convert('7', param, ctx)
Traceback (most recent call last):
...
click.exceptions.BadParameter: 7 is not a valid comma separated list of length 2
>>> test_cst3.convert('7.2', param, ctx)
7.2
"""
try:
if value is None:
converted = None
else:
converted = list(map(self.dtype, str(value).split(",")))
if self.simplify and len(converted) == 1:
converted = converted[0]
except ValueError:
self.fail(
"{} is not a valid comma separated list of {}".format(
value, self.dtype_name
),
param,
ctx,
)
if self.length:
if len(converted) != self.length:
self.fail(
"{} is not a valid comma separated list of length {}".format(
value, self.length
),
param,
ctx,
)
return converted
class Dictionary(click.ParamType):
"""
Text to be parsed as a python dict definition
"""
def __init__(self, keys=None):
self.name = "TEXT:VAL[,TEXT:VAL...]"
self.keys = keys
def convert(self, value, param, ctx):
"""
>>> @click.command()
... @click.option('--my-param', type=Dictionary(keys=('abc', 'def', 'ghi', 'jkl', 'mno')))
... def test_cmd():
... pass
...
>>> ctx = click.Context(test_cmd)
>>> param = test_cmd.params[0]
>>> dict_param = param.type
>>> dict_str1 = 'abc:0.1,def:TRUE,ghi:False,jkl:None,mno:some_string'
>>> dict_str2 = 'abc:0.1,def:TRUE,ghi:False,jkl:None,mnp:some_string'
>>> dict_str3 = ''
>>> dict_param.convert(dict_str1, param, ctx)
{'abc': 0.1, 'def': True, 'ghi': False, 'jkl': None, 'mno': 'some_string'}
>>> dict_param.convert(dict_str2, param, ctx)
Traceback (most recent call last):
...
click.exceptions.BadParameter: mnp is not a valid key (('abc', 'def', 'ghi', 'jkl', 'mno'))
>>> dict_param.convert(dict_str3, param, ctx)
Traceback (most recent call last):
...
click.exceptions.BadParameter: is not a valid python dict definition
"""
try:
converted = dict()
for token in value.split(","):
if ":" not in token:
raise ValueError
key, _, value = token.partition(":")
if not key:
raise ValueError
if isinstance(self.keys, (list, tuple)) and key not in self.keys:
self.fail(f"{key} is not a valid key ({self.keys})")
if value == "None":
value = None
elif value.lower() == "true":
value = True
elif value.lower() == "false":
value = False
else:
try:
value = float(value)
except ValueError:
pass
converted[key] = value
return converted
except ValueError:
self.fail(f"{value} is not a valid python dict definition", param, ctx)
def _get_type_name(obj):
name = "text"
try:
name = getattr(obj, "name")
except AttributeError:
name = getattr(obj, "__name__")
return name
def valid_limit(ctx, param, value):
"""
Callback function that checks order of numeric inputs
>>> @click.command()
... @click.option('--test-param', help='Sample help')
... def test_cmd():
... pass
...
>>> ctx = click.Context(test_cmd)
>>> param = test_cmd.params[0]
>>> valid_limit(ctx, param, value=[0.0125, 3])
[0.0125, 3]
>>> valid_limit(ctx, param, value=[0.0125, -0.0125])
Traceback (most recent call last):
...
click.exceptions.BadParameter: lower limit must not exceed upper limit
>>> valid_limit(ctx, param, value=[0.0125, 0.0125])
[0.0125, 0.0125]
"""
if value[0] > value[1]:
param.type.fail("lower limit must not exceed upper limit", param, ctx)
return value
def valid_parameter_limits(ctx, param, value):
"""
Callback function that checks order of multiple numeric inputs
>>> @click.command()
... @click.option('--test-param', type=(click.STRING, click.FLOAT, click.FLOAT), multiple=True)
... def test_cmd():
... pass
...
>>> ctx = click.Context(test_cmd)
>>> param = test_cmd.params[0]
>>> valid_parameter_limits(ctx, param, [['a', 0.0, 2.0]])
[['a', 0.0, 2.0]]
>>> valid_parameter_limits(ctx, param, [['b', 0.0, 0.0]])
[['b', 0.0, 0.0]]
>>> valid_parameter_limits(ctx, param, [['c', 0.0, -1.0]])
Traceback (most recent call last):
...
click.exceptions.BadParameter: lower limit must not exceed upper limit
>>> valid_parameter_limits(ctx, param, [['a', 0.0, 2.0], ['c', 0.0, -1.0]])
Traceback (most recent call last):
...
click.exceptions.BadParameter: lower limit must not exceed upper limit
"""
for val in value:
if val[1] > val[2]:
param.type.fail("lower limit must not exceed upper limit", param, ctx)
return value
def mutually_exclusive_with(param_name):
internal_name = param_name.strip("-").replace("-", "_").lower()
def valid_mutually_exclusive(ctx, param, value):
try:
other_value = ctx.params[internal_name]
except KeyError:
return value
if (value is None) == (other_value is None):
param.type.fail(
'mutually exclusive with "{}", one and only one must be '
"specified.".format(param_name),
param,
ctx,
)
return value
return valid_mutually_exclusive
def required_by(param_name):
internal_name = param_name.strip("-").replace("-", "_").lower()
def required(ctx, param, value):
try:
other_value = ctx.params[internal_name]
except KeyError:
return value
if other_value and not value:
param.type.fail(
'required by "{}".'.format(param_name),
param,
ctx,
)
return value
return required
if __name__ == "__main__":
import doctest
sys.exit(doctest.testmod(verbose=True)[0])
|
/scanpy_scripts-1.1.6-py3-none-any.whl/scanpy_scripts/click_utils.py
| 0.450118 | 0.214311 |
click_utils.py
|
pypi
|
import click
from .click_utils import (
CommaSeparatedText,
Dictionary,
valid_limit,
valid_parameter_limits,
mutually_exclusive_with,
required_by,
)
COMMON_OPTIONS = {
"input": [
click.argument(
"input_obj",
metavar="<input_obj>",
type=click.Path(exists=True, dir_okay=False),
),
click.option(
"--input-format",
"-f",
type=click.Choice(["anndata", "loom"]),
default="anndata",
show_default=True,
help="Input object format.",
),
],
"output": [
click.argument(
"output_obj",
metavar="<output_obj>",
type=click.Path(dir_okay=False, writable=True),
),
click.option(
"--output-format",
"-F",
type=click.Choice(["anndata", "loom", "zarr"]),
default="anndata",
show_default=True,
help="Output object format.",
),
click.option(
"--zarr-chunk-size",
"-z",
type=click.INT,
default=1000,
show_default=True,
help="Chunk size for writing output in zarr format.",
),
click.option(
"--loom-write-obsm-varm",
"-b",
is_flag=True,
default=False,
show_default=True,
help="Write obsm and varm to the Loom file?",
),
click.option(
"--export-mtx",
"-X",
type=click.Path(dir_okay=True, writable=True),
default=None,
show_default=True,
help="When specified, using it as prefix for exporting mtx files. "
'If not empty and not ending with "/" or "_", a "_" will be '
"appended.",
),
click.option(
"--mtx-compression",
"-G",
type=click.Choice(["zip", "gzip", "bz2", "zstd"]),
default=None,
show_default=True,
help="Compression type for MTX output.",
),
click.option(
"--show-obj",
type=click.Choice(["stdout", "stderr"]),
default=None,
show_default=True,
help="Print output object summary info to specified stream.",
),
],
"save": [
click.option(
"--save-raw",
"-r",
is_flag=True,
default=False,
show_default=True,
help="Save adata to adata.raw before processing.",
),
click.option(
"--save-layer",
"-y",
type=click.STRING,
default=None,
show_default=True,
help="Save adata.X to the specified layer before processing.",
),
],
"plot": [
click.argument(
"output_fig",
metavar="<output_fig>",
type=click.Path(dir_okay=False, writable=True),
),
click.option(
"--fig-size",
type=CommaSeparatedText(click.INT, length=2),
default="7,7",
show_default=True,
help="Figure size.",
),
click.option(
"--fig-dpi",
type=click.INT,
default=80,
show_default=True,
help="Figure DPI.",
),
click.option(
"--fig-fontsize",
type=click.INT,
default=15,
show_default=True,
help="Figure font size.",
),
],
"frame_title": [
click.option(
"--frameon/--frameoff",
"frameon",
default=True,
show_default=True,
help="Draw a frame around the plot",
),
click.option(
"--title",
type=CommaSeparatedText(simplify=True),
default=None,
show_default=True,
help="Provide title for the plot or panels.",
),
],
"use_pc": [
click.option(
"--n-pcs",
"-n",
type=click.INT,
default=None,
show_default=True,
help="Use this many PCs. Use `.X` if --n-pcs is 0 when --use-rep is "
"None.",
),
click.option(
"--use-rep",
"-u",
type=click.STRING,
default=None,
show_default=True,
help="Use the indicated representation. If None, the representation is "
"chosen automatically: for `.n_vars` < 50, `.X` is used, otherwise "
"`X_pca` is used. If `X_pca` is not present, it's computed with "
"default parameters.",
),
],
"knn_graph": [
click.option(
"--neighbors-key",
type=click.STRING,
default=None,
show_default=False,
help="If not specified, look in .uns[‘neighbors’] for neighbors "
"settings and .obsp[‘connectivities’], .obsp[‘distances’] for connectivities and "
"distances respectively (default storage places for pp.neighbors). If specified, "
"look in .uns[neighbors_key] for neighbors settings and "
".obsp[.uns[neighbors_key][‘connectivities_key’]], "
".obsp[.uns[neighbors_key][‘distances_key’]] for connectivities and distances "
"respectively.",
),
click.option(
"--obsp",
type=click.STRING,
default=None,
show_default=True,
help="Use .obsp[obsp] as adjacency. You can’t specify both obsp and "
"neighbors_key at the same time.",
),
click.option(
"--directed/--undirected",
"directed",
default=True,
show_default=True,
help="Interpret the adjacency matrix as directed graph.",
),
click.option(
"--use-weights",
is_flag=True,
default=False,
show_default=True,
help="Use weights from KNN graph.",
),
],
"neighbor_metric": click.option(
"--metric",
"-t",
type=click.Choice(
[
"cityblock",
"cosine",
"euclidean",
"l1",
"l2",
"manhattan",
"braycurtis",
"canberra",
"chebyshev",
"correlation",
"dice",
"hamming",
"jaccard",
"kulsinski",
"mahalanobis",
"minkowski",
"rogerstanimoto",
"russellrao",
"seuclidean",
"sokalmichener",
"sokalsneath",
"sqeuclidean",
"yule",
]
),
default="euclidean",
show_default=True,
help="A known metric’s name.",
),
"layer": click.option(
"--layer",
type=CommaSeparatedText(simplify=True),
default=None,
show_default=True,
help="Name of the AnnData object layer that wants to be plotted. By "
"default adata.raw.X is plotted. If use_raw=False is set, then adata.X "
"is plotted. If layer is set to a valid layer name, then the layer is "
"plotted. layer takes precedence over use_raw.",
),
"n_comps": click.option(
"--n-comps",
type=click.INT,
default=None,
show_default=True,
help="Number of components to compute",
),
"key_added": click.option(
"--key-added",
type=CommaSeparatedText(simplify=True),
default=None,
show_default=True,
help="Key under which to add the computed results",
),
"random_state": click.option(
"--random-state",
"-S",
type=click.INT,
default=0,
show_default=True,
help="Seed for random number generator.",
),
"use_raw": click.option(
"--use-raw/--no-raw",
"use_raw",
default=None,
show_default=True,
help="Use expression values in `.raw` if present.",
),
"zero_center": click.option(
"--no-zero-center",
"zero_center",
is_flag=True,
flag_value=False,
default=True,
help="When set, omit zero-centering variables to allow efficient "
"handling of sparse input.",
),
"n_jobs": click.option(
"--n-jobs",
"-J",
type=click.INT,
default=None,
show_default=True,
help="Number of jobs for parallel computation.",
),
"restrict_to": click.option(
"--restrict-to",
type=(click.STRING, CommaSeparatedText()),
default=(None, None),
show_default=True,
help="Restrict the clustering to the categories within the key for "
'sample annotation, in the form of "obs_key list_of_categories".',
),
"export_embedding": click.option(
"--export-embedding",
"-E",
type=click.Path(dir_okay=False, writable=True),
default=None,
show_default=True,
help="Export embeddings in a tab-separated text table.",
),
"export_cluster": click.option(
"--export-cluster",
type=click.Path(dir_okay=False, writable=True),
default=None,
show_default=True,
help="Export embeddings in a tab-separated text table.",
),
"var_names": click.option(
"--var-names",
type=(CommaSeparatedText()),
show_default=True,
help="var_names should be a valid subset of adata.var_names.",
),
"gene_symbols": click.option(
"--gene-symbols",
type=CommaSeparatedText(simplify=True),
default=None,
show_default=True,
help="Column name in .var DataFrame that stores gene symbols. By "
"default this is assumed to be the index column of the .var "
"DataFrame. Setting this option allows alternative names to be "
"used.",
),
"diffexp_plot": [
click.option(
"--rgg",
is_flag=True,
default=False,
show_default=True,
help="When set, use the rank_genes_groups_ form of the function, "
"where gene lists are automatically selected.",
),
click.option(
"--groupby",
type=CommaSeparatedText(simplify=True),
default=None,
show_default=True,
help="The key of the observation grouping to consider.",
),
click.option(
"--log",
is_flag=True,
default=False,
show_default=True,
help="Plot on logarithmic axis.",
),
click.option(
"--num-categories",
type=click.INT,
default=7,
show_default=True,
help="Only used if groupby observation is not categorical. This value "
"determines the number of groups into which the groupby observation "
"should be subdivided.",
),
click.option(
"--dendrogram",
is_flag=True,
default=False,
show_default=False,
help="If True, a dendrogram based on the hierarchical clustering "
"between the groupby categories is added. The dendrogram information is "
"computed using scanpy.tl.dendrogram(). If tl.dendrogram has not been "
"called previously the function is called with default parameters.",
),
click.option(
"--standard-scale",
type=click.Choice(["var", "obs"]),
default=None,
show_default=True,
help="Whether or not to standardize that dimension between 0 and 1, "
"meaning for each variable or group, subtract the minimum and divide "
"each by its maximum.",
),
],
"sviol": [
click.option(
"--no-stripplot",
"stripplot",
is_flag=True,
default=True,
show_default=True,
help="When set, do not add a stripplot on top of the violin plot.",
),
click.option(
"--no-jitter",
"jitter",
is_flag=True,
default=True,
show_default=True,
help="Suppress jitter in the stripplot (only when stripplot is True)",
),
click.option(
"--size",
type=click.INT,
default=1,
show_default=True,
help="Size of the jitter points.",
),
click.option(
"--order",
type=CommaSeparatedText(),
default=None,
show_default=True,
help="Order in which to show the categories.",
),
click.option(
"--scale",
type=click.Choice(["area", "count", "width"]),
default="width",
show_default=True,
help="The method used to scale the width of each violin. If ‘area’, "
"each violin will have the same area. If ‘count’, the width of the "
"violins will be scaled by the number of observations in that bin. If "
"‘width’, each violin will have the same width.",
),
click.option(
"--row-palette",
type=CommaSeparatedText(simplify=True),
default="muted",
show_default=True,
help="The row palette determines the colors to use in each of the "
"stacked violin plots. The value should be a valid seaborn palette name "
"or a valic matplotlib colormap (see "
"https://seaborn.pydata.org/generated/seaborn.color_palette.html). "
"Alternatively, a single color name or hex value can be passed. E.g. "
"‘red’ or ‘#cc33ff’.",
),
],
"dot": [
click.option(
"--expression-cutoff",
type=click.FLOAT,
default=0,
show_default=True,
help="Expression cutoff that is used for binarizing the gene expression "
"and determining the fraction of cells expressing given genes. A gene is "
"expressed only if the expression value is greater than this threshold.",
),
click.option(
"--mean-only-expressed",
is_flag=True,
default=False,
show_default=True,
help="If True, gene expression is averaged only over the cells "
"expressing the given genes.",
),
click.option(
"--color-map",
type=CommaSeparatedText(simplify=True),
default="Reds",
show_default=True,
help="String denoting matplotlib color map.",
),
click.option(
"--dot-max",
type=click.FLOAT,
default=None,
show_default=True,
help="If none, the maximum dot size is set to the maximum fraction "
"value found (e.g. 0.6). If given, the value should be a number between "
"0 and 1. All fractions larger than dot_max are clipped to this value.",
),
click.option(
"--dot-min",
type=click.FLOAT,
default=None,
show_default=True,
help="If none, the minimum dot size is set to 0. If given, the value "
"should be a number between 0 and 1. All fractions smaller than dot_min "
"are clipped to this value.",
),
click.option(
"--smallest-dot",
type=click.FLOAT,
default=0,
show_default=True,
help="If none, the smallest dot has size 0. All expression levels with "
"dot_min are potted with smallest_dot dot size.",
),
],
"heat": [
click.option(
"--show-gene-labels",
is_flag=True,
default=None,
show_default=True,
help="By default gene labels are shown when there are 50 or less "
"genes. Otherwise the labels are removed.",
),
],
"swap_axes": click.option(
"--swap-axes",
is_flag=True,
default=False,
show_default=True,
help="By default, the x axis contains var_names (e.g. genes) and the y "
"axis the groupby categories. By setting swap_axes then x are the "
"groupby categories and y the var_names. When swapping axes "
"var_group_positions are no longer used.",
),
"rank_genes_groups_plots": [
click.option(
"--groups",
type=CommaSeparatedText(),
default=None,
show_default=True,
help="The groups for which to show the gene ranking.",
),
click.option(
"--n-genes",
"-n",
type=click.INT,
default=10,
show_default=True,
help="Number of genes to show.",
),
],
"root": click.option(
"--root",
type=click.INT,
default=0,
show_default=True,
help="If choosing a tree layout, this is the index of the root node.",
),
"plot_embed": [
click.option(
"--use-raw/--no-raw",
default=None,
show_default=True,
help="Use `.raw` attribute for coloring with gene expression. If "
"`None`, uses `.raw` if present.",
),
click.option(
"--groups",
type=click.STRING,
default=None,
help="Key for categorical in `.obs`. You can pass your predefined "
"groups by choosing any categorical annotation of observations.",
),
],
"batch_key": click.option(
"--batch-key",
"key",
type=click.STRING,
required=True,
help="The name of the column in adata.obs that differentiates among "
"experiments/batches.",
),
"batch_layer": click.option(
"--layer",
"-l",
type=click.STRING,
default=None,
show_default=True,
help="Layer to batch correct. By default corrects the contents of .X.",
),
"scrublet": [
click.option(
"--sim-doublet-ratio",
type=click.FLOAT,
default=2.0,
show_default=True,
help="Number of doublets to simulate relative to the number of "
"observed transcriptomes.",
),
click.option(
"--synthetic-doublet-umi-subsampling",
type=click.FLOAT,
default=1.0,
show_default=True,
help="Where input_obj_sim not suplied, rate for sampling UMIs when "
"creating synthetic doublets. If 1.0, each doublet is created by "
"simply adding the UMI counts from two randomly sampled observed "
"transcriptomes. For values less than 1, the UMI counts are added "
"and then randomly sampled at the specified rate.",
),
],
}
COMMON_OPTIONS["opt_output"] = [
click.option(
"--output-obj",
type=click.Path(dir_okay=False, writable=True),
help="Optionally output an object to the specified path.",
),
*COMMON_OPTIONS["output"][1:],
]
CMD_OPTIONS = {
"read": [
click.option(
"--input-10x-h5",
"-i",
type=click.Path(exists=True, dir_okay=False),
callback=mutually_exclusive_with("--input-10x-mtx"),
help="Input 10x data in Cell-Ranger hdf5 format.",
),
click.option(
"--input-10x-mtx",
"-x",
type=click.Path(exists=True, file_okay=False),
callback=mutually_exclusive_with("--input-10x-h5"),
help="Path of input folder containing 10x data in mtx format.",
),
*COMMON_OPTIONS["output"],
click.option(
"--genome",
"-g",
callback=required_by("--input-10x-h5"),
default="hg19",
show_default=True,
help="Name of the genome group in hdf5 file, required by "
'"--input-10x-h5".',
),
click.option(
"--var-names",
"-v",
type=click.Choice(["gene_symbols", "gene_ids"]),
callback=required_by("--input-10x-mtx"),
default="gene_symbols",
show_default=True,
help="Attribute to be used as the index of the variable table, "
'required by "--input-10x-mtx".',
),
click.option(
"--extra-obs",
type=click.Path(exists=True, dir_okay=False),
default=None,
show_default=True,
help="Extra cell metadata table, must be tab-separated with a header "
"row and an index column, and with matched dimension.",
),
click.option(
"--extra-var",
type=click.Path(exists=True, dir_okay=False),
default=None,
show_default=True,
help="Extra gene metadata table, must be tab-separated with a header "
"row and an index column, and with matched dimension.",
),
],
"filter": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["save"][0], # --save-raw
click.option(
"--gene-name",
"-g",
type=click.STRING,
default="index",
show_default=True,
help="Name of the variable that contains gene names, used for flagging "
'mitochondria genes when column "mito" is absent from `.var`.',
),
click.option(
"--list-attr",
"-l",
is_flag=True,
default=False,
help="When set, list attributes that can be filtered on.",
),
click.option(
"--param",
"-p",
type=(click.STRING, click.FLOAT, click.FLOAT),
multiple=True,
callback=valid_parameter_limits,
help="Numerical parameters used to filter the data, "
'in the format of "-p name min max". '
"Multiple -p entries allowed.",
),
click.option(
"--category",
"-c",
type=(click.STRING, CommaSeparatedText()),
multiple=True,
help="Categorical attributes used to filter the data, "
'in the format of "-c <name> <values>", '
"where entries with attribute <name> with value in <values> are kept. "
'If <values> is preceded by "!", entries with value in <values> are '
"removed. Multiple -c entries allowed.",
),
click.option(
"--subset",
"-s",
type=(click.STRING, click.File()),
multiple=True,
help='Similar to --category in the format of "-s <name> <file>", '
"but the <file> to be a one-column table that provides the values. "
"Multiple -s entries allowed.",
),
click.option(
"--force-recalc",
is_flag=True,
default=False,
help="When set, re-calculate `pct_counts_<qc_variable>` and "
"`pct_counts_in_top_<n>_genes` even if they exist.",
),
],
"norm": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
*COMMON_OPTIONS["save"],
COMMON_OPTIONS["key_added"],
click.option(
"--no-log-transform",
"log_transform",
is_flag=True,
default=True,
show_default=True,
help="When set, do not apply (natural) log transform following normalisation.",
),
click.option(
"--normalize-to",
"-t",
"target_sum",
type=float,
default=10_000,
show_default=True,
help="Normalize per cell nUMI to this number.",
),
click.option(
"--exclude-highly-expressed",
"-e",
"exclude_highly_expressed",
is_flag=True,
default=False,
show_default=True,
help="Exclude (very) highly expressed genes for the computation of "
"the normalization factor (size factor) for each cell. A gene is considered "
"highly expressed, if it has more than max_fraction of the total counts in at "
"least one cell. The not-excluded genes will sum up to the number "
"specified by --normalize-to.",
),
click.option(
"--max-fraction",
"-m",
"max_fraction",
type=float,
default=0.05,
show_default=True,
help="If exclude_highly_expressed=True, consider cells as highly "
"expressed that have more counts than max_fraction of the original total counts "
"in at least one cell.",
),
click.option(
"--layers",
"-l",
type=CommaSeparatedText(simplify=True),
default=None,
show_default=True,
help="List of layers to normalize. Set to 'all' to normalize all layers.",
),
click.option(
"--layer-norm",
"-n",
"layer_norm",
type=click.Choice(["after", "X"]),
default=None,
show_default=True,
help="Specifies how to normalize layers: 1) If None, after "
"normalization, for each layer in layers each cell has a total count equal to "
"the median of the counts_per_cell before normalization of the layer. 2) If "
"'after', for each layer in layers each cell has a total count equal to "
"target_sum. 3) If 'X', for each layer in layers each cell has a total count "
"equal to the median of total counts for observations (cells) of adata.X before "
"normalization.'",
),
],
"hvg": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
click.option(
"--mean-limits",
"-m",
type=(click.FLOAT, click.FLOAT),
callback=valid_limit,
default=(0.0125, 3),
show_default=True,
help="Cutoffs for the mean of expression" 'in the format of "-m min max".',
),
click.option(
"--disp-limits",
"-d",
type=(click.FLOAT, click.FLOAT),
callback=valid_limit,
default=(0.5, float("inf")),
show_default=True,
help="Cutoffs for the dispersion of expression"
'in the format of "-d min max".',
),
click.option(
"--span",
type=click.FLOAT,
default=0.3,
show_default=True,
help="The fraction of the data (cells) used when estimating the "
"variance in the loess model fit if flavor='seurat_v3'.",
),
click.option(
"--n-bins",
"-b",
type=click.INT,
default=20,
show_default=True,
help="Number of bins for binning the mean gene expression.",
),
click.option(
"--n-top-genes",
"-t",
type=click.INT,
default=None,
show_default=True,
help="Number of highly-variable genes to keep.",
),
click.option(
"--flavor",
"-v",
type=click.Choice(["seurat", "cell_ranger", "seurat_v3"]),
default="seurat",
show_default=True,
help="Choose the flavor for computing normalized dispersion.",
),
click.option(
"--subset",
"-s",
is_flag=True,
default=False,
help="When set, inplace subset to highly-variable genes, otherwise "
"only flag highly-variable genes.",
),
click.option(
"--batch-key",
"batch_key",
type=click.STRING,
default=None,
help="If specified, highly-variable genes are selected within each "
"batch separately and merged. This simple process avoids the selection of "
"batch-specific genes and acts as a lightweight batch correction method. For all "
"flavors, genes are first sorted by how many batches they are a HVG. For "
"dispersion-based flavors ties are broken by normalized dispersion. If flavor = "
"'seurat_v3', ties are broken by the median (across batches) rank based on "
"within-batch normalized variance.",
),
],
"scale": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
*COMMON_OPTIONS["save"],
COMMON_OPTIONS["zero_center"],
click.option(
"--max-value",
"-m",
type=click.FLOAT,
default=None,
show_default=True,
help="When specified, clip to this value after scaling, otherwise do "
"not clip",
),
click.option(
"--layer",
"-l",
type=CommaSeparatedText(simplify=True),
default=None,
help="If provided, which element of layers to scale.",
),
],
"regress": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
*COMMON_OPTIONS["save"],
COMMON_OPTIONS["n_jobs"],
click.option(
"--keys",
"-k",
type=CommaSeparatedText(simplify=True),
default=None,
show_default=True,
help="Key(s) for observation annotation on which to regress.",
),
],
"pca": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["zero_center"],
COMMON_OPTIONS["random_state"],
COMMON_OPTIONS["export_embedding"],
COMMON_OPTIONS["n_comps"],
click.option(
"--svd-solver",
"-V",
type=click.Choice(["auto", "arpack", "randomized"]),
default="auto",
show_default=True,
help="SVD solver to use.",
),
click.option(
"--use-all",
"-a",
"use_highly_variable",
is_flag=True,
flag_value=False,
default=True,
help="When set, use all genes for PCA, otherwise use "
"highly-variable genes by default.",
),
click.option(
"--chunked",
"-K",
is_flag=True,
default=False,
help="When set, perform an incremental PCA on segments of "
"--chunk-size, which automatically zero centers and ignore settings of "
"--random-state and --svd-solver.",
),
click.option(
"--chunk-size",
"-Z",
type=click.INT,
callback=required_by("--chunked"),
default=None,
show_default=True,
help="Number of observations to include in each chunk, required by "
"--chunked.",
),
],
"neighbor": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
*COMMON_OPTIONS["use_pc"],
COMMON_OPTIONS["key_added"],
COMMON_OPTIONS["random_state"],
click.option(
"--n-neighbors",
"-k",
type=CommaSeparatedText(click.INT, simplify=True),
default=15,
show_default=True,
help="The size of local neighborhood (in terms of number of "
"neighboring data points) used for manifold approximation. Larger "
"values result in more global views of the manifold, while smaller "
"values result in more local data being preserved. In general values "
"should be in the range 2 to 100. If --knn is set, number of nearest "
"neighbors to be searched, othwise a Gaussian kernel width is set to "
"the distance of the --n-neighbors neighbor.",
),
click.option(
"--no-knn",
"knn",
is_flag=True,
flag_value=False,
default=True,
show_default=True,
help="When NOT set, use a hard threshold to restrict the number of "
"neighbors to --n-neighbors. Otherwise, use a Gaussian kernel to "
"assign low weights to neighbors more distant than the --n-neighbors "
"nearest neighbor",
),
click.option(
"--method",
"-m",
type=click.Choice(["umap", "gauss", "rapids"]),
default="umap",
show_default=True,
help="Use umap or gauss with adaptive width for computing "
"connectivities. Use rapids for the RAPIDS implementation of UMAP "
"(experimental, GPU only).",
),
COMMON_OPTIONS["neighbor_metric"],
],
"umap": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["knn_graph"][0], # --neighbors-key
COMMON_OPTIONS["random_state"],
COMMON_OPTIONS["key_added"],
COMMON_OPTIONS["export_embedding"],
click.option(
"--init-pos",
type=click.STRING,
default="spectral",
show_default=True,
help="How to initialize the low dimensional embedding. Can be "
'"spectral", "paga" or "random", or any key of `.obsm`.',
),
click.option(
"--min-dist",
type=click.FLOAT,
default=0.5,
show_default=True,
help="The effective minimum distance between embedded points. Smaller "
"values will result in a more clustered embedding, while larger values "
"will results in a more even dispersal of points.",
),
click.option(
"--spread",
type=click.FLOAT,
default=1.0,
show_default=True,
help="The effective scale of embedded points, which determines the "
"scale at which embedded points will be spread out.",
),
click.option(
"--n-components",
type=click.INT,
default=2,
show_default=True,
help="The number of dimensions of the embedding.",
),
click.option(
"--maxiter",
type=click.INT,
default=None,
show_default=True,
help="The number of iterations of the optimization.",
),
click.option(
"--alpha",
type=click.FLOAT,
default=1.0,
show_default=True,
help="The initial learning rate for the embedding optimization.",
),
click.option(
"--gamma",
type=click.FLOAT,
default=1.0,
show_default=True,
help="Weighting applied to negative samples in low dimensional "
"embedding optimization.",
),
click.option(
"--negative-sample-rate",
type=click.INT,
default=5,
show_default=True,
help="The number of negative edge samples to use per positive edge "
"sample in optimizing the low dimensional embedding.",
),
click.option(
"--method",
type=click.Choice(["umap", "rapids"]),
default="umap",
show_default=True,
help="Use the original ‘umap’ implementation, or ‘rapids’ "
"(experimental, GPU only).",
),
],
"tsne": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
*COMMON_OPTIONS["use_pc"],
COMMON_OPTIONS["random_state"],
COMMON_OPTIONS["key_added"],
COMMON_OPTIONS["n_jobs"],
COMMON_OPTIONS["export_embedding"],
click.option(
"--perplexity",
type=click.FLOAT,
default=30,
show_default=True,
help="The perplexity is related to the number of nearest neighbors "
"that is used in other manifold learning algorithms. Larger datasets "
"usually require a larger perplexity. Consider selecting a value "
"between 5 and 50. The choice is not extremely critical since t-SNE "
"is quite insensitive to this parameter.",
),
click.option(
"--early-exaggeration",
type=click.FLOAT,
default=12,
show_default=True,
help="Controls how tight natural clusters in the original space are in "
"the embedded space and how much space will be between them. For "
"larger values, the space between natural clusters will be larger in "
"the embedded space. Again, the choice of this parameter is not very "
"critical. If the cost function increases during initial optimization, "
"the early exaggeration factor or the learning rate might be too high.",
),
click.option(
"--learning-rate",
type=click.FLOAT,
default=1000,
show_default=True,
help='Note that the R-package "Rtsne" uses a default of 200. The '
"learning rate can be a critical parameter. It should be between 100 "
"and 1000. If the cost function increases during initial optimization, "
"the early exaggeration factor or the learning rate might be too high. "
"If the cost function gets stuck in a bad local minimum increasing the "
"learning rate helps sometimes.",
),
click.option(
"--no-fast-tsne",
"use_fast_tsne",
is_flag=True,
flag_value=False,
default=True,
show_default=True,
help="When NOT set, use the MulticoreTSNE package by D. Ulyanov if "
"installed.",
),
],
"fdg": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["random_state"],
COMMON_OPTIONS["export_embedding"],
COMMON_OPTIONS["root"],
click.option(
"--init-pos",
type=click.STRING,
default=None,
help="Use precomputed coordinates for initialization. Can be any key "
'of `.obsm` or "paga" if .uns["paga"] is present',
),
click.option(
"--layout",
type=click.Choice(
["fa", "fr", "grid_fr", "kk", "lgl", "drl", "rt", "rt_circular"]
),
default="fa",
show_default=True,
help='Name of any valid igraph layout, including "fa" (ForceAtlas2), '
'"fr" (Fruchterman Reingold), "grid_fr" (Grid Fruchterman Reingold, '
'faster than "fr"), "kk" (Kamadi Kawai, slower than "fr"), "lgl" '
'(Large Graph Layout, very fast), "drl" (Distributed Recursive Layout, '
'pretty fast) and "rt" (Reingold Tilford tree layout).',
),
click.option(
"--key-added-ext",
type=click.STRING,
default=None,
show_default=True,
help="By default, append 'layout'",
),
COMMON_OPTIONS["knn_graph"][0], # --neighbors-key
COMMON_OPTIONS["knn_graph"][1], # --obsp
],
"louvain": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["export_cluster"],
*COMMON_OPTIONS["knn_graph"],
COMMON_OPTIONS["restrict_to"],
COMMON_OPTIONS["random_state"],
COMMON_OPTIONS["key_added"],
click.option(
"--flavor",
type=click.Choice(["vtraag", "igraph"]),
default="vtraag",
show_default=True,
help="Choose between two packages for computing the clustering. "
'"vtraag" is much powerful, and the default.',
),
click.option(
"--resolution",
"-r",
type=CommaSeparatedText(click.FLOAT, simplify=True),
default=1,
show_default=True,
help='For the default flavor "vtraag", you can provide a resolution. '
"Higher resolution means finding more and smaller clusters.",
),
],
"leiden": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["export_cluster"],
*COMMON_OPTIONS["knn_graph"],
COMMON_OPTIONS["restrict_to"],
COMMON_OPTIONS["random_state"],
COMMON_OPTIONS["key_added"],
click.option(
"--resolution",
"-r",
type=CommaSeparatedText(click.FLOAT, simplify=True),
default=1,
show_default=True,
help="A parameter value controlling the coarseness of the clustering. "
'Higher values lead to more clusters. Set to "None" if overriding '
"--partition_type to one that doesn't accept `resolution_parameter`.",
),
click.option(
"--n-iterations",
type=click.INT,
default=-1,
show_default=True,
help="How many iterations of the Leiden clustering algorithm to "
"perform. -1 has the algorithm run until it reaches its optimal "
"clustering.",
),
],
"diffexp": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["use_raw"],
COMMON_OPTIONS["key_added"],
click.option(
"--layer",
"-l",
type=click.STRING,
default=None,
help="Key from adata.layers whose value will be used to perform tests on.",
),
click.option(
"--groupby",
"-g",
type=click.STRING,
required=True,
help="The key of the observations grouping to consider.",
),
click.option(
"--groups",
type=CommaSeparatedText(simplify=True),
default="all",
show_default=True,
help="Subset of groups to which comparison shall be restricted.",
),
click.option(
"--reference",
type=click.STRING,
default="rest",
show_default=True,
help='If "rest", compare each group to the union of the rest of the '
"groups. If a group identifier, compare with respect to this group.",
),
click.option(
"--n-genes",
"-n",
type=click.INT,
default=None,
show_default=True,
help="The number of genes that appear in the retured tables. By "
"default return all available genes depending on the value of "
"--use-raw.",
),
click.option(
"--method",
type=click.Choice(["logreg", "t-test", "wilcoxon", "t-test_overestim_var"]),
default="t-test_overestim_var",
show_default=True,
help="Method of performing differential expression analysis.",
),
click.option(
"--corr-method",
type=click.Choice(["benjamini-hochberg", "bonferroni"]),
default="benjamini-hochberg",
show_default=True,
help='P-value correction method. Used only for "t-test", '
'"t-test_overestim_var" and "wilcoxon".',
),
click.option(
"--rankby-abs",
is_flag=True,
default=False,
show_default=True,
help="Rank genes by the absolute value of the score, not by the score. "
"The returned scores are never the absolute values.",
),
click.option(
"--pts",
is_flag=True,
default=False,
show_default=True,
help="Compute the fraction of cells expressing the genes.",
),
click.option(
"--tie-correct",
is_flag=True,
default=False,
show_default=True,
help="Use tie correction for 'wilcoxon' scores. Used only for "
"'wilcoxon'.",
),
click.option(
"--filter-params",
type=Dictionary(
keys=[
"min_in_group_fraction",
"max_out_group_fraction",
"min_fold_change",
]
),
default=None,
show_default=True,
help="Parameters for filtering DE results, valid parameters are: "
'"min_in_group_fraction" (float), "max_out_group_fraction" (float), '
'"min_fold_change" (float).',
),
click.option(
"--logreg-param",
type=Dictionary(),
default=None,
show_default=True,
help="Parameters passed to `sklearn.linear_model.LogisticRegression`.",
),
click.option(
"--save",
type=click.Path(dir_okay=False, writable=True),
default=None,
show_default=True,
help="Tab-separated table to store results of differential expression "
"analysis.",
),
],
"paga": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["knn_graph"][0], # --neighbors-key
COMMON_OPTIONS["key_added"],
click.option(
"--groups",
type=click.STRING,
required=True,
help="Key for categorical in `.obs`. You can pass your predefined "
"groups by choosing any categorical annotation of observations.",
),
click.option(
"--model",
type=click.Choice(["v1.2", "v1.0"]),
default="v1.2",
show_default=True,
help="The PAGA connectivity model.",
),
click.option(
"--use-rna-velocity",
is_flag=True,
default=False,
show_default=True,
help="Use RNA velocity to orient edges in the abstracted graph and "
"estimate transitions. Requires that adata.uns contains a directed single-cell "
"graph with key velocity_graph. This feature might be subject to change in the "
"future.",
),
],
"diffmap": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["knn_graph"][0], # --neighbors-key
COMMON_OPTIONS["key_added"],
COMMON_OPTIONS["export_embedding"],
COMMON_OPTIONS["n_comps"],
],
"dpt": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["knn_graph"][0], # --neighbors-key
COMMON_OPTIONS["key_added"],
click.option(
"--root",
type=(click.STRING, click.STRING),
default=(None, None),
show_default=True,
help="Specify a categorical annotaion of observations (`.obs`) and a "
"value representing the root cells.",
),
click.option(
"--n-dcs",
type=click.INT,
default=10,
show_default=True,
help="The number of diffusion components to use.",
),
click.option(
"--n-branchings",
type=click.INT,
default=0,
show_default=True,
help="Number of branchings to detect.",
),
click.option(
"--min-group-size",
type=click.FLOAT,
default=0.01,
show_default=True,
help="During recursive splitting of branches for --n-branchings > 1, "
"do not consider branches/groups that contain fewer than this fraction "
"of the total number of data points.",
),
click.option(
"--disallow-kendall-tau-shift",
"allow_kendall_tau_shift",
is_flag=True,
default=True,
show_default=True,
help="By default: If a very small branch is detected upon "
"splitting, shift away from maximum correlation in Kendall tau criterion of "
"[Haghverdi16] to stabilize the splitting. Use flag to disable this.",
),
],
"combat": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["batch_key"],
COMMON_OPTIONS["batch_layer"],
click.option(
"--key-added",
type=click.STRING,
default=None,
show_default=True,
help="Key under which to add the computed results. By default a new "
"layer will be created called 'combat', 'combat_{layer}' or "
"'combat_layer_{key_added}' where those parameters were specified. A value of 'X' "
"causes batch-corrected values to overwrite the original content of .X.",
),
click.option(
"--covariates",
type=(CommaSeparatedText()),
default=None,
show_default=True,
help="Comma-separated list of additional covariates besides the "
"batch variable such as adjustment variables or biological condition. This "
"parameter refers to the design matrix X in Equation 2.1 in [Johnson07] and to "
"the mod argument in the original combat function in the sva R package. Note "
"that not including covariates may introduce bias or lead to the removal of "
"biological signal in unbalanced designs.",
),
],
"harmony": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["batch_key"],
click.option(
"--basis",
type=click.STRING,
default="X_pca",
show_default=True,
help="The name of the field in adata.obsm where the PCA table is "
"stored. Defaults to 'X_pca', which is the default for sc.tl.pca().",
),
click.option(
"--adjusted-basis",
type=click.STRING,
default="X_pca_harmony",
show_default=True,
help="The name of the field in adata.obsm where the adjusted PCA "
"table will be stored after running this function.",
),
click.option(
"--theta",
type=click.FLOAT,
default=2,
show_default=True,
help="Diversity clustering penalty parameter. theta=0 does not encourage any "
"diversity. Larger values of theta result in more diverse clusters.",
),
click.option(
"--lambda",
"lamb",
type=click.FLOAT,
default=1,
show_default=True,
help="Ridge regression penalty parameter. Lambda must be strictly "
"positive. Smaller values result in more aggressive correction.",
),
click.option(
"--sigma",
type=click.FLOAT,
default=0.1,
show_default=True,
help="Width of soft kmeans clusters. Sigma scales the distance from "
"a cell to cluster centroids. Larger values of sigma result in cells assigned to "
"more clusters. Smaller values of sigma make soft kmeans cluster approach hard "
"clustering.",
),
click.option(
"--n-clust",
"nclust",
type=click.INT,
default=None,
show_default=False,
help="Number of clusters in model. nclust=1 equivalent to simple "
"linear regression.",
),
click.option(
"--tau",
type=click.INT,
default=0,
show_default=True,
help="Protection against overclustering small datasets with large ones. "
"tau is the expected number of cells per cluster.",
),
click.option(
"--block-size",
type=click.FLOAT,
default=0.05,
show_default=True,
help="What proportion of cells to update during clustering. Between "
"0 to 1, default 0.05. Larger values may be faster but less accurate.",
),
click.option(
"--max-iter-cluster",
"max_iter_kmeans",
type=click.INT,
default=20,
show_default=True,
help="Maximum number of rounds to run clustering at each round of "
"Harmony.",
),
click.option(
"--max-iter-harmony",
type=click.INT,
default=10,
show_default=True,
help="Maximum number of rounds to run Harmony. One round of Harmony "
"involves one clustering and one correction step.",
),
click.option(
"--epsilon-cluster",
type=click.FLOAT,
default=1e-5,
show_default=True,
help="Convergence tolerance for clustering round of Harmony Set to "
"-Inf to never stop early.",
),
click.option(
"--epsilon-harmony",
type=click.FLOAT,
default=1e-5,
show_default=True,
help="Convergence tolerance for clustering round of Harmony Set to "
"-Inf to never stop early.",
),
COMMON_OPTIONS["random_state"],
],
"mnn": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
*COMMON_OPTIONS["save"],
COMMON_OPTIONS["batch_key"],
COMMON_OPTIONS["batch_layer"],
click.option(
"--key-added",
type=click.STRING,
default=None,
show_default=True,
help="Key under which to add the computed results. By default a new "
"layer will be created called 'mnn', 'mnn_{layer}' or "
"'mnn_layer_{key_added}' where those parameters were specified. A value of 'X' "
"causes batch-corrected values to overwrite the original content of .X.",
),
click.option(
"--var-subset",
type=(click.STRING, CommaSeparatedText()),
multiple=True,
help="The subset of vars (list of str) to be used when performing "
"MNN correction in the format of '--var-subset <name> <values>'. Typically, use "
"the highly variable genes (HVGs) like '--var-subset highly_variable True'. When "
"unset, uses all vars.",
),
click.option(
"--n-neighbors",
"-k",
type=CommaSeparatedText(click.INT, simplify=True),
default=20,
show_default=True,
help="Number of mutual nearest neighbors.",
),
click.option(
"--sigma",
type=click.FLOAT,
default=1.0,
show_default=True,
help="The bandwidth of the Gaussian smoothing kernel used to "
"compute the correction vectors.",
),
click.option(
"--no-cos_norm_in",
"cos_norm_in",
is_flag=True,
default=True,
help="Default behaviour is to perform cosine normalization on the "
"input data prior to calculating distances between cells. Use this "
"flag to disable that behaviour.",
),
click.option(
"--no-cos_norm_out",
"cos_norm_out",
is_flag=True,
default=True,
help="Default behaviour is to perform cosine normalization prior to "
"computing corrected expression values. Use this flag to disable that "
"behaviour.",
),
click.option(
"--svd-dim",
type=click.INT,
default=None,
show_default=True,
help="The number of dimensions to use for summarizing biological "
"substructure within each batch. If not set, biological components "
"will not be removed from the correction vectors.",
),
click.option(
"--no-var-adj",
is_flag=True,
default=True,
help="Default behaviour is to adjust variance of the correction "
"vectors. Use this flag to disable that behaviour. Note this step takes most "
"computing time.",
),
click.option(
"--compute-angle",
is_flag=True,
default=False,
help="When set, compute the angle between each cell’s correction "
"vector and the biological subspace of the reference batch.",
),
click.option(
"--svd-mode",
type=click.Choice(["svd", "rsvd", "irlb"]),
default="rsvd",
show_default=True,
help="'svd' computes SVD using a non-randomized SVD-via-ID "
"algorithm, while 'rsvd' uses a randomized version. 'irlb' performs truncated "
"SVD by implicitly restarted Lanczos bidiagonalization (forked from "
"https://github.com/airysen/irlbpy).",
),
],
"bbknn": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
COMMON_OPTIONS["key_added"],
COMMON_OPTIONS["batch_key"],
click.option(
"--use-rep",
"-u",
type=click.STRING,
default="X_pca",
show_default=True,
help="The dimensionality reduction in .obsm to use for neighbour "
"detection.",
),
COMMON_OPTIONS["use_pc"][0], # --n-pcs
click.option(
"--no-approx",
"approx",
is_flag=True,
default=True,
help="Default behaviour is to use annoy’s approximate neighbour "
"finding. This results in a quicker run time for large datasets while also "
"potentially increasing the degree of batch correction. Use this flag to disable "
"that behaviour.",
),
COMMON_OPTIONS["neighbor_metric"],
click.option(
"--neighbors-within-batch",
type=click.INT,
default=3,
show_default=True,
help="How many top neighbours to report for each batch; total "
"number of neighbours will be this number times the number of batches.",
),
click.option(
"--trim",
type=click.INT,
default=None,
show_default=True,
help="Trim the neighbours of each cell to these many top "
"connectivities. May help with population independence and improve the tidiness "
"of clustering. The lower the value the more independent the individual "
"populations, at the cost of more conserved batch effect. If None, sets the "
"parameter value automatically to 10 times the total number of neighbours for "
"each cell. Set to 0 to skip.",
),
click.option(
"--annoy-n-trees",
type=click.INT,
default=10,
show_default=True,
help="Only used when approx=True. The number of trees to construct "
"in the annoy forest. More trees give higher precision when querying, at the "
"cost of increased run time and resource intensity.",
),
click.option(
"--no-use-faiss",
"use_faiss",
is_flag=True,
default=True,
help="Default behaviour If approx=False and the metric is "
"“euclidean”, is to use the faiss package to compute nearest neighbours if "
"installed. This improves performance at a minor cost to numerical precision as "
"faiss operates on float32. Use this flag to disable that behaviour.",
),
click.option(
"--set-op-mix-ratio",
type=click.FLOAT,
default=1,
show_default=True,
help="UMAP connectivity computation parameter, float between 0 and "
"1, controlling the blend between a connectivity matrix formed exclusively from "
"mutual nearest neighbour pairs (0) and a union of all observed neighbour "
"relationships with the mutual pairs emphasised (1).",
),
click.option(
"--local-connectivity",
type=click.INT,
default=1,
show_default=True,
help="UMAP connectivity computation parameter, how many nearest "
"neighbors of each cell are assumed to be fully connected (and given a "
"connectivity value of 1)",
),
],
"scrublet": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
click.option(
"--batch-key",
"batch_key",
type=click.STRING,
help="The name of the column in adata.obs that differentiates among "
"experiments/batches. Doublets will be detected in each batch separately.",
),
click.option(
"--input-obj-sim",
"adata_sim",
type=click.Path(exists=True, dir_okay=False),
default=None,
help="(Advanced use case) Optional annData object generated by "
"sc.external.pp.scrublet_simulate_doublets(), with same number of "
"vars as adata. This should have been built from input_obj after "
"filtering genes and cells and selcting highly-variable genes.",
),
click.option(
"--threshold",
type=click.FLOAT,
default=None,
show_default=True,
help="Doublet score threshold for calling a transcriptome a "
"doublet. If not set, this is set automatically by looking for the "
"minimum between the two modes of the doublet_scores_sim_ histogram. "
"It is best practice to check the threshold visually using the "
"doublet_scores_sim_ histogram and/or based on co-localization of "
"predicted doublets in a 2-D embedding.",
),
*COMMON_OPTIONS["scrublet"],
click.option(
"--expected-doublet-rate",
type=click.FLOAT,
default=0.05,
show_default=True,
help="Where input_obj_sim not suplied, the estimated doublet rate "
"for the experiment.",
),
click.option(
"--stdev-doublet-rate",
type=click.FLOAT,
default=0.02,
show_default=True,
help="Where input_obje_sim not suplied, uncertainty in the expected "
"doublet rate.",
),
click.option(
"--knn-dist-metric",
"-t",
type=click.Choice(
[
"cityblock",
"cosine",
"euclidean",
"l1",
"l2",
"manhattan",
"braycurtis",
"canberra",
"chebyshev",
"correlation",
"dice",
"hamming",
"jaccard",
"kulsinski",
"mahalanobis",
"minkowski",
"rogerstanimoto",
"russellrao",
"seuclidean",
"sokalmichener",
"sokalsneath",
"sqeuclidean",
"yule",
]
),
default="euclidean",
show_default=True,
help="A known metric’s name.",
),
click.option(
"--no-normalize-variance",
"normalize_variance",
is_flag=True,
default=True,
help="Default is to normalize the data such that each gene has a "
"variance of 1. sklearn.decomposition.TruncatedSVD will be used for "
"dimensionality reduction, if --no-mean-center is set. Use this flag "
"to disable that behaviour.",
),
click.option(
"--log-transform",
is_flag=True,
default=False,
show_default=True,
help="Whether to use :func:~scanpy.pp.log1p to log-transform the "
"data prior to PCA.",
),
click.option(
"--no-mean-center",
"mean_center",
is_flag=True,
default=True,
help="If True, center the data such that each gene has a mean of 0. "
"sklearn.decomposition.PCA will be used for dimensionality "
"reduction.",
),
click.option(
"--n-pcs",
"n_prin_comps",
type=click.INT,
default=30,
show_default=True,
help="Number of principal components used to embed the "
"transcriptomes prior to k-nearest-neighbor graph construction.",
),
click.option(
"--no-approx",
"use_approx_neighbors",
is_flag=True,
default=True,
help="Default behaviour is to use the approximate nearest neighbor "
"method (annoy) for the KNN classifier. Use this flag to disable "
"that behaviour.",
),
click.option(
"--get-doublet-neighbor-parents",
is_flag=True,
default=False,
show_default=True,
help="If set, return (in .uns) the parent transcriptomes that "
"generated the doublet neighbors of each observed transcriptome. "
"This information can be used to infer the cell states that "
"generated a given doublet state.",
),
click.option(
"--n-neighbors",
"-k",
type=CommaSeparatedText(click.INT, simplify=True),
default=None,
show_default=True,
help="Number of neighbors used to construct the KNN graph of "
"observed transcriptomes and simulated doublets. If not set, this is "
"automatically set to np.round(0.5 * np.sqrt(n_obs)).",
),
click.option(
"--filter",
"filter",
is_flag=True,
default=False,
help="By default, the output object is annotated but not filtered "
"according to the scrublet status. Setting this flag will cause "
"predicted multiplet elements to be removed.",
),
click.option(
"--no-verbose",
"verbose",
is_flag=True,
default=True,
help="Default behaviour is to print progress updates. Use this flag "
"to disable that.",
),
click.option(
"--export-table",
type=click.Path(dir_okay=False, writable=True),
default=None,
show_default=True,
help="Export a table of double scores and calls to the specified file.",
),
COMMON_OPTIONS["random_state"],
],
"plot_scrublet": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["plot"],
click.option(
"--scale-hist-obs",
"-b",
type=click.Choice(["linear", "log", "symlog", "logit"]),
default="log",
show_default=True,
help="Set y axis scale transformation in matplotlib for the plot of observed transcriptomes.",
),
click.option(
"--scale-hist-sim",
"-s",
type=click.Choice(["linear", "log", "symlog", "logit"]),
default="linear",
show_default=True,
help="Set y axis scale transformation in matplotlib for the plot of observed transcriptomes.",
),
],
"scrublet_simulate_doublets": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["output"],
*COMMON_OPTIONS["scrublet"],
click.option(
"--layer",
"-l",
type=click.STRING,
default=None,
help="Layer of adata where raw values are stored, or ‘X’ if values "
"are in .X.",
),
],
"embed": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["plot"],
*COMMON_OPTIONS["frame_title"],
COMMON_OPTIONS["layer"],
click.option(
"--basis",
type=click.STRING,
default="umap",
show_default=True,
help="Name of the embedding to plot, must be a key of `.obsm` without "
'the prefix "X_".',
),
click.option(
"--color",
type=CommaSeparatedText(simplify=True),
default=None,
show_default=True,
help="Keys for annotations of observations/cells or variables/genes.",
),
click.option(
"--legend-loc",
type=click.Choice(["right margin", "on data"]),
default="right margin",
show_default=True,
help='Location of legend, either "on data", "right margin" or valid '
"keywords for `matplotlib.legend`.",
),
click.option(
"--legend-fontsize",
type=click.INT,
default=15,
show_default=True,
help="Legend font size.",
),
click.option(
"--size",
type=click.FLOAT,
default=None,
show_default=True,
help="Point size. Automatically computed if not specified.",
),
COMMON_OPTIONS["gene_symbols"],
click.option(
"--edges",
is_flag=True,
default=False,
show_default=True,
help="Show edges.",
),
click.option(
"--edges-width",
type=click.FLOAT,
default=0.1,
show_default=True,
help="Width of edges.",
),
click.option(
"--edges-color",
type=click.STRING,
default=None,
show_default=True,
help="Color of edges. See draw_networkx_edges().",
),
COMMON_OPTIONS["knn_graph"][0], # --neighbors-key
click.option(
"--no-sort-order",
"sort_order",
is_flag=True,
default=True,
show_default=True,
help="Disable default behaviour: for continuous annotations used as "
"color parameter, plot data points with higher values on top of others.",
),
*COMMON_OPTIONS["plot_embed"],
click.option(
"--components",
type=click.STRING,
default=None,
show_default=True,
help="For instance, ['1,2', '2,3']. To plot all available components use 'all'.",
),
click.option(
"--projection",
type=click.Choice(["2d", "3d"]),
default="2d",
show_default=True,
help="Projection of plot.",
),
],
"plot_paga": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["plot"],
*COMMON_OPTIONS["frame_title"],
*COMMON_OPTIONS["plot_embed"],
COMMON_OPTIONS["random_state"],
click.option(
"--use-key",
type=click.STRING,
default="paga",
show_default=True,
help="The key in `.uns` that contains trajectory information.",
),
click.option(
"--layout",
type=click.Choice(["fa", "fr", "grid_fr", "kk", "lgl", "drl", "rt"]),
default="fr",
show_default=True,
help="Plotting layout that computes positions.",
),
click.option(
"--init-pos",
type=click.STRING,
default=None,
show_default=True,
help="Plotting layout that computes positions.",
),
click.option(
"--threshold",
type=click.FLOAT,
default=0.01,
show_default=True,
help="Do not draw edges for weights below this threshold. Set to 0 to "
"include all edges.",
),
COMMON_OPTIONS["root"],
click.option(
"--transitions",
type=click.STRING,
default=None,
show_default=True,
help='Key for `.uns["paga"]` that specifies the matrix, e.g. '
"`transition_confidence`, that stores the arrows.",
),
click.option(
"--single-component",
is_flag=True,
default=False,
show_default=True,
help="Restrict to largest connected component",
),
click.option(
"--solid-edges",
type=click.Choice(["connectivities", "connectivities_tree"]),
default="connectivities",
show_default=True,
help='Key for `.uns["paga"]` that specifies the matrix that stores the '
"edges to be drawn solid black.",
),
click.option(
"--basis",
type=click.STRING,
default=None,
show_default=True,
help="Name of the embedding to plot, must be a key of `.obsm` without "
'the prefix "X_".',
),
click.option(
"--color",
type=CommaSeparatedText(simplify=True),
default=None,
show_default=True,
help="Key(s) for annotation of observations/cells or variables/genes. Comma-separated if more than one",
),
click.option(
"--legend-loc",
type=click.Choice(["right margin", "on data"]),
default="right margin",
show_default=True,
help='Location of legend, either "on data", "right margin" or valid '
"keywords for `matplotlib.legend`.",
),
click.option(
"--size",
type=click.FLOAT,
default=None,
show_default=True,
help="Point size. Automatically computed if not specified.",
),
click.option(
"--node-size-scale",
type=click.FLOAT,
default=1.0,
show_default=True,
help="Increase of decrease the size of the nodes.",
),
click.option(
"--fontsize",
type=click.INT,
default=None,
show_default=True,
help="Font size for node labels.",
),
click.option(
"--edge-width-scale",
type=click.FLOAT,
default=1.0,
show_default=True,
help="Increase of decrease the width of the edges.",
),
click.option(
"--arrowsize",
type=click.INT,
default=30,
show_default=True,
help="For directed graphs, specify the length and width of the arrowhead.",
),
*COMMON_OPTIONS["opt_output"],
],
"sviol": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["plot"],
COMMON_OPTIONS["use_raw"],
COMMON_OPTIONS["var_names"],
*COMMON_OPTIONS["rank_genes_groups_plots"],
COMMON_OPTIONS["layer"],
*COMMON_OPTIONS["diffexp_plot"],
COMMON_OPTIONS["gene_symbols"],
*COMMON_OPTIONS["sviol"],
COMMON_OPTIONS["swap_axes"],
],
"dot": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["plot"],
COMMON_OPTIONS["use_raw"],
COMMON_OPTIONS["var_names"],
*COMMON_OPTIONS["rank_genes_groups_plots"],
COMMON_OPTIONS["layer"],
*COMMON_OPTIONS["diffexp_plot"],
COMMON_OPTIONS["gene_symbols"],
*COMMON_OPTIONS["dot"],
],
"matrix": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["plot"],
COMMON_OPTIONS["use_raw"],
COMMON_OPTIONS["var_names"],
*COMMON_OPTIONS["rank_genes_groups_plots"],
COMMON_OPTIONS["layer"],
*COMMON_OPTIONS["diffexp_plot"],
COMMON_OPTIONS["gene_symbols"],
],
"heat": [
*COMMON_OPTIONS["input"],
*COMMON_OPTIONS["plot"],
COMMON_OPTIONS["use_raw"],
COMMON_OPTIONS["var_names"],
*COMMON_OPTIONS["rank_genes_groups_plots"],
COMMON_OPTIONS["layer"],
*COMMON_OPTIONS["diffexp_plot"],
COMMON_OPTIONS["gene_symbols"],
*COMMON_OPTIONS["heat"],
COMMON_OPTIONS["swap_axes"],
],
}
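# --- Hedged usage sketch (added for illustration; not part of the original module) ---
# Each CMD_OPTIONS entry is a flat list of click decorators.  Command modules
# elsewhere in the package are expected to apply them to their command
# functions; the helper below is a hypothetical sketch of that pattern, not
# the package's actual wiring.
def _apply_cmd_options_sketch(cmd_name):
    """Return a decorator applying every option decorator of the named command."""
    def decorate(func):
        # click decorators compose bottom-up, so apply them in reverse to keep
        # the declared order in the generated --help output.
        for option_decorator in reversed(CMD_OPTIONS[cmd_name]):
            func = option_decorator(func)
        return func
    return decorate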
|
/scanpy_scripts-1.1.6-py3-none-any.whl/scanpy_scripts/cmd_options.py
| 0.590779 | 0.273462 |
cmd_options.py
|
pypi
|
import numpy as np
import scanpy as sc
from ..obj_utils import (
_backup_default_key,
_delete_backup_key,
_rename_default_key,
_set_default_key,
_restore_default_key,
)
def paga(
adata,
key_added=None,
**kwargs,
):
"""
Wrapper function for sc.tl.paga, for supporting named slot
"""
sc.tl.paga(adata, **kwargs)
if key_added:
paga_key = f"paga_{key_added}"
_rename_default_key(adata.uns, "paga", paga_key)
else:
_delete_backup_key(adata.uns, "paga")
return adata
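# --- Hedged usage sketch (added for illustration; not part of the original module) ---
# Assuming neighbours have been computed and a clustering named "leiden"
# exists in adata.obs (names are hypothetical), a typical call would be:
#     adata = paga(adata, key_added="leiden", groups="leiden")
# which leaves the result under adata.uns["paga_leiden"] instead of the
# default adata.uns["paga"].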
def plot_paga(
adata,
use_key="paga",
basis=None,
layout=None,
init_pos=None,
legend_loc="on data",
color=None,
size=None,
title=None,
show=None,
save=None,
**kwargs,
):
"""Make PAGA plot"""
if basis is not None and f"X_{basis}" in adata.obsm.keys():
ax = sc.pl.embedding(
adata,
basis=basis,
color=color,
legend_loc=legend_loc,
size=size,
title=None,
save=False,
show=False,
)
grouping = adata.uns[use_key]["groups"]
categories = list(adata.obs[grouping].cat.categories)
obsm = adata.obsm[f"X_{basis}"]
group_pos = np.zeros((len(categories), 2))
for i, label in enumerate(categories):
offset = 1 if basis.startswith("diffmap") else 0
_scatter = obsm[adata.obs[grouping] == label, (0 + offset) : (2 + offset)]
x_pos, y_pos = np.median(_scatter, axis=0)
group_pos[i] = [x_pos, y_pos]
_set_default_key(adata.uns, "paga", use_key)
kwargs["node_size_scale"] = 0
kwargs["fontsize"] = 1
kwargs["pos"] = group_pos
kwargs["color"] = None
try:
sc.pl.paga(
adata,
ax=ax,
title=title,
show=show,
save=save,
**kwargs,
)
finally:
_restore_default_key(adata.uns, "paga", use_key)
else:
_set_default_key(adata.uns, "paga", use_key)
try:
sc.pl.paga(
adata,
layout=layout,
init_pos=init_pos,
color=color,
title=title,
show=show,
save=save,
**kwargs,
)
finally:
_restore_default_key(adata.uns, "paga", use_key)
return adata
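# --- Hedged usage sketch (added for illustration; not part of the original module) ---
# Overlaying the abstracted graph on an existing embedding (names hypothetical):
#     plot_paga(adata, use_key="paga_leiden", basis="umap",
#               color="leiden", save="_paga.png")
# When ``basis`` is given, group centroids are computed from the embedding and
# the PAGA graph is drawn on top of the scatter plot.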
|
/scanpy_scripts-1.1.6-py3-none-any.whl/scanpy_scripts/lib/_paga.py
| 0.446012 | 0.230205 |
_paga.py
|
pypi
|
import pandas as pd
import scanpy as sc
import logging
def diffexp(
adata,
use_raw=None,
n_genes=None,
key_added="rank_genes_groups",
layer=None,
logreg_param=None,
filter_params=None,
save=None,
groupby=None,
groups=None,
**kwargs,
):
"""
Wrapper function for sc.tl.rank_genes_groups.
"""
if adata.raw is None:
use_raw = False
if n_genes is None:
n_genes = adata.raw.shape[1] if use_raw else adata.shape[1]
if logreg_param and isinstance(logreg_param, dict):
        for key, val in logreg_param.items():
kwargs[key] = val
key_added = key_added if key_added else "rank_genes_groups"
diff_key = (key_added + f"_{layer}") if layer else key_added
if groups == "all":
# Avoid divisions by zeros for singlet groups. See
# https://github.com/theislab/scanpy/pull/1490#issuecomment-726031442.
groups_to_test = list(
adata.obs[groupby].value_counts().loc[lambda x: x > 1].index
)
if len(groups_to_test) < len(adata.obs[groupby].cat.categories):
groups = groups_to_test
logging.warning(
"Singlet groups removed before passing to rank_genes_groups()"
)
sc.tl.rank_genes_groups(
adata,
use_raw=use_raw,
n_genes=n_genes,
key_added=diff_key,
groupby=groupby,
groups=groups,
**kwargs,
)
de_tbl = extract_de_table(adata.uns[diff_key])
if isinstance(filter_params, dict):
sc.tl.filter_rank_genes_groups(
adata,
key=diff_key,
key_added=diff_key + "_filtered",
use_raw=use_raw,
**filter_params,
)
de_tbl = extract_de_table(adata.uns[diff_key + "_filtered"])
de_tbl = de_tbl.loc[de_tbl.genes.astype(str) != "nan", :]
if save:
de_tbl.to_csv(save, sep="\t", header=True, index=False)
return de_tbl
def diffexp_paired(adata, groupby, pair, **kwargs):
"""
Restrict DE to between a pair of clusters, return both up and down genes
"""
test, ref = pair
de_key = f"de.{test}-{ref}"
up_de = diffexp(
adata,
key_added=de_key,
groupby=groupby,
groups=[test],
reference=ref,
**kwargs,
)
ref, test = pair
de_key = f"de.{test}-{ref}"
down_de = diffexp(
adata,
key_added=de_key,
groupby=groupby,
groups=[test],
reference=ref,
**kwargs,
)
return up_de, down_de
def extract_de_table(de_dict):
"""
Extract DE table from adata.uns
"""
if de_dict["params"]["method"] == "logreg":
requested_fields = ("scores",)
else:
requested_fields = (
"scores",
"logfoldchanges",
"pvals",
"pvals_adj",
)
gene_df = _recarray_to_dataframe(de_dict["names"], "genes")[
["cluster", "rank", "genes"]
]
gene_df["ref"] = de_dict["params"]["reference"]
gene_df = gene_df[["cluster", "ref", "rank", "genes"]]
de_df = pd.DataFrame(
{
field: _recarray_to_dataframe(de_dict[field], field)[field]
for field in requested_fields
if field in de_dict
}
)
return gene_df.merge(de_df, left_index=True, right_index=True)
def _recarray_to_dataframe(array, field_name):
return (
pd.DataFrame(array)
.reset_index()
.rename(columns={"index": "rank"})
.melt(id_vars="rank", var_name="cluster", value_name=field_name)
)
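# Illustrative usage sketch (not part of the original module). The dataset and
# the grouping column below are only examples; any AnnData with a categorical
# grouping in .obs would work the same way.
if __name__ == "__main__":
    adata = sc.datasets.pbmc68k_reduced()
    # One-vs-rest differential expression for every group, written to a TSV
    de_tbl = diffexp(
        adata,
        groupby="bulk_labels",
        groups="all",
        reference="rest",
        method="wilcoxon",
        save="de.tsv",
    )
    print(de_tbl.head())
    # diffexp_paired(adata, "bulk_labels", (group_a, group_b)) would restrict
    # the test to a single pair of groups and return both up- and down-tables.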
|
/scanpy_scripts-1.1.6-py3-none-any.whl/scanpy_scripts/lib/_diffexp.py
| 0.632049 | 0.310172 |
_diffexp.py
|
pypi
|
from typing import Collection, Optional
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import scanpy as sc
def expr_colormap():
"""\
Gray-to-blue colormap for expression data
"""
cdict = {
'red': [
(0.0, 220/256, 220/256),
(0.5, 42/256, 42/256),
(1.0, 6/256, 6/256)
],
'green': [
(0.0, 220/256, 220/256),
(0.5, 145/256, 145/256),
(1.0, 37/256, 27/256)
],
'blue': [
(0.0, 220/256, 220/256),
(0.5, 174/256, 174/256),
(1.0, 170/256, 170/256)
]
}
return mpl.colors.LinearSegmentedColormap('exprCmap', segmentdata=cdict, N=256)
def feature_plot(
adata: sc.AnnData,
feature: str,
gridsize: tuple = (180, 70),
linewidths: float = 0.15,
figsize: Optional[float] = None
) -> mpl.figure.Figure:
"""\
Plot expression of gene or feature in hexbin
Plots numeric feature value, commonly gene expression, on UMAP
coordinates using hexbin. Feature is taken from ``adata.obs`` if it is
found there, otherwise from ``adata.raw``.
Parameters
----------
adata
Annotated data matrix
feature
Name of the feature to plot
gridsize
        Tuple of hexbin dimensions, larger numbers produce smaller hexbins
linewidths
Width of the lines to draw around each hexbin
figsize
Optional, make figure of this size
Returns
-------
Matplotlib figure with colorbar added.
"""
if feature in adata.obs.columns:
values = adata.obs_vector(feature)
else:
values = adata.raw.obs_vector(feature)
kwargs = {}
if figsize is not None:
kwargs["figsize"] = figsize
fig, ax = plt.subplots(**kwargs)
hb = ax.hexbin(
adata.obsm["X_umap"][:, 0],
adata.obsm["X_umap"][:, 1],
C=values,
cmap=expr_colormap(),
gridsize=gridsize,
linewidths=linewidths
)
cax = fig.add_axes((0.92, 0.8, 0.02, 0.15))
fig.colorbar(hb, cax=cax, fraction=0.05, pad=0.02, aspect=40)
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(f"{feature}")
ax.set_xlabel("UMAP1")
ax.set_ylabel("UMAP2")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
fig.tight_layout()
return fig
def plot_composition(
adata: sc.AnnData,
group_by: str,
color: str,
relative: bool = False,
palette: Optional[Collection] = None,
plot_numbers: bool = False
) -> mpl.axes.Axes:
"""\
Plot composition of clusters or other metadata
Groups cells by one metadata field and plots stacked barplot
colored by another metadata field. Common use case is to see which
samples contribute to which clusters. Plots horizontally.
Parameters
----------
adata
Annotated data matrix
group_by
Name of the field to group by on y axis
color
Name of the field to color by
relative
Plot percentage for each cluster if ``True`` or absolute counts if ``False``
palette
Optional, pass your own palette
plot_numbers
If ``True``, plot number of cells next to the bars
Returns
-------
Matplotlib axes with the plot.
"""
left = np.zeros(len(adata.obs[group_by].unique()))
total = None
if relative:
total = adata.obs[group_by].value_counts().sort_index(ascending=False)
fig, ax = plt.subplots()
num_colors = adata.obs[color].unique().size
# TODO: adjust
if palette is not None:
colors = palette
elif num_colors <= 10:
colors = mpl.cm.tab10
elif num_colors <= 20:
colors = mpl.cm.tab20
elif num_colors <= 28:
colors = sc.pl.palettes.default_28
else:
colors = sc.pl.palettes.default_102
for i, s in enumerate(adata.obs[color].cat.categories):
cnt = adata.obs[group_by][adata.obs[color] == s].value_counts().sort_index(ascending=False)
if relative:
cnt = cnt / total * 100
c = isinstance(colors, list) and colors[i] or colors(i)
ax.barh(cnt.index, cnt, left=left, label=s, color=c)
left += cnt
if plot_numbers:
for i, count in enumerate(total):
ax.text(left[i] + 2, i, str(count), va="center")
ax.legend(title=color.capitalize())
ax.set_title(f"{group_by.capitalize()} by {color}")
return ax
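# Illustrative usage sketch (not part of the original module). Field and column
# names refer to the bundled pbmc68k_reduced example data and are only meant to
# show the expected inputs.
if __name__ == "__main__":
    adata = sc.datasets.pbmc68k_reduced()
    fig = feature_plot(adata, "n_counts")  # any .obs field or gene in .raw
    fig.savefig("n_counts_hexbin.png", dpi=150)
    ax = plot_composition(adata, group_by="louvain", color="bulk_labels", relative=True)
    ax.figure.savefig("composition.png", dpi=150)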
|
/scanpy-utils-0.1.1.tar.gz/scanpy-utils-0.1.1/src/sc_utils/plotting.py
| 0.940997 | 0.650981 |
plotting.py
|
pypi
|
import numpy as np
import pandas as pd
def get_markers(
adata,
groupby,
key="rank_genes_groups",
p_val_cutoff=0.05,
logfc_cutoff=0.5
):
"""\
Extract markers from adata into Seurat-like table
Extracts markers after they are computed by ``scanpy``. Produces Seurat-like
table with fields
``"p_val", "avg_logFC", "pct.1", "pct.2", "p_val_adj", "cluster", "gene"``
Calculates the percentage of cells that express a given gene
in the target cluster (``pct.1`` field) and outside the cluster
(``pct.2`` field) from ``adata.raw`` matrix.
Parameters
----------
adata
Annotated data matrix.
groupby
``adata.obs`` field used for marker calculation
key
``adata.uns`` key that has computed markers
p_val_cutoff
Drop all genes with adjusted p-value greater than or equal to this
logfc_cutoff
Drop all genes with average logFC less than or equal to this
Returns
-------
Returns a pandas dataframe with above listed columns, optionally
subsetted on the genes that pass the cutoffs.
``p_val`` field is a copy of adjusted p-value field.
Example
-------
>>> sc.tl.rank_genes_groups(adata, "leiden", method="wilcoxon", n_genes=200)
>>> markers = sc_utils.get_markers(adata, "leiden")
>>> markers.to_csv("markers.csv")
"""
markers = pd.concat([
pd.DataFrame(adata.uns[key]["names"]).melt(),
pd.DataFrame(adata.uns[key]["pvals_adj"]).melt(),
pd.DataFrame(adata.uns[key]["logfoldchanges"]).melt()
], axis=1)
markers.columns = ("cluster", "gene", "cluster2", "p_val_adj", "cluster3", "avg_logFC")
markers = markers.loc[:, ["cluster", "gene", "avg_logFC", "p_val_adj"]]
markers = markers.loc[markers.avg_logFC > logfc_cutoff, ]
markers = markers.loc[markers.p_val_adj < p_val_cutoff, ]
markers["pct.1"] = pd.Series(dtype=float)
markers["pct.2"] = pd.Series(dtype=float)
for cluster in markers.cluster.unique():
cells = adata.obs[groupby] == cluster
in_cluster_selector = markers.cluster == cluster
genes = markers.gene[in_cluster_selector]
in_cluster = np.sum(adata.raw[cells, genes].X > 0, axis=0) / cells.sum()
markers.loc[in_cluster_selector, "pct.1"] = in_cluster.T
other_cells = adata.obs[groupby] != cluster
other_clusters = np.sum(adata.raw[other_cells, genes].X > 0, axis=0) / other_cells.sum()
markers.loc[in_cluster_selector, "pct.2"] = other_clusters.T
markers["p_val"] = markers.p_val_adj
markers = markers.loc[:, ["p_val", "avg_logFC", "pct.1", "pct.2", "p_val_adj", "cluster", "gene"]]
return markers
|
/scanpy-utils-0.1.1.tar.gz/scanpy-utils-0.1.1/src/sc_utils/markers.py
| 0.896953 | 0.626438 |
markers.py
|
pypi
|
import json
import logging
from argparse import ArgumentParser
from pathlib import Path
from subprocess import CalledProcessError, check_output
from typing import List, Optional
def report_output(stdout: bytes, label: str) -> List[str]:
ret = stdout.decode().strip().split("\n")
print(f"{label}: {ret}")
return ret
def get_branch_contents(ref: str) -> List[str]:
"""Get the list of directories in a branch."""
stdout = check_output(["git", "ls-tree", "-d", "--name-only", ref])
return report_output(stdout, "Branch contents")
def get_sorted_tags_list() -> List[str]:
"""Get a list of sorted tags in descending order from the repository."""
stdout = check_output(["git", "tag", "-l", "--sort=-v:refname"])
return report_output(stdout, "Tags list")
def get_versions(ref: str, add: Optional[str], remove: Optional[str]) -> List[str]:
"""Generate the file containing the list of all GitHub Pages builds."""
# Get the directories (i.e. builds) from the GitHub Pages branch
try:
builds = set(get_branch_contents(ref))
except CalledProcessError:
builds = set()
logging.warning(f"Cannot get {ref} contents")
# Add and remove from the list of builds
if add:
builds.add(add)
if remove:
assert remove in builds, f"Build '{remove}' not in {sorted(builds)}"
builds.remove(remove)
# Get a sorted list of tags
tags = get_sorted_tags_list()
# Make the sorted versions list from main branches and tags
versions: List[str] = []
for version in ["master", "main"] + tags:
if version in builds:
versions.append(version)
builds.remove(version)
# Add in anything that is left to the bottom
versions += sorted(builds)
print(f"Sorted versions: {versions}")
return versions
def write_json(path: Path, repository: str, versions: str):
org, repo_name = repository.split("/")
struct = [
dict(version=version, url=f"https://{org}.github.io/{repo_name}/{version}/")
for version in versions
]
text = json.dumps(struct, indent=2)
print(f"JSON switcher:\n{text}")
path.write_text(text)
def main(args=None):
parser = ArgumentParser(
description="Make a versions.txt file from gh-pages directories"
)
parser.add_argument(
"--add",
help="Add this directory to the list of existing directories",
)
parser.add_argument(
"--remove",
help="Remove this directory from the list of existing directories",
)
parser.add_argument(
"repository",
help="The GitHub org and repository name: ORG/REPO",
)
parser.add_argument(
"output",
type=Path,
help="Path of write switcher.json to",
)
args = parser.parse_args(args)
# Write the versions file
versions = get_versions("origin/gh-pages", args.add, args.remove)
write_json(args.output, args.repository, versions)
if __name__ == "__main__":
main()
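# Illustrative invocation (not part of the original script); the org/repo and
# the version number are hypothetical:
#
#     python make_switcher.py --add 1.2.3 myorg/myrepo switcher.json
#
# This lists the build directories found on origin/gh-pages, adds "1.2.3",
# sorts main/master and tags first, and writes entries of the form
# {"version": "1.2.3", "url": "https://myorg.github.io/myrepo/1.2.3/"}.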
|
/scanspec-0.6.3.tar.gz/scanspec-0.6.3/.github/pages/make_switcher.py
| 0.853837 | 0.311898 |
make_switcher.py
|
pypi
|
import numpy as np
from itertools import product
import taichi as ti
import taichi.math as tm
pos_infinite = np.finfo('f').max # 3.4028235e+38
neg_infinite = np.finfo('f').min # -3.4028235e+38
PI = 3.141592653589793
ti.init(arch=ti.cuda, default_fp=ti.f64)
if ti.cfg.arch == ti.cuda:
print("GPU is available")
else:
print("GPU is not available")
"""
Author: Guangzhao Cheng, Lu Cheng
Date: 22.06.2023
"""
# ------------- base functions -------------
@ti.func
def my_log_taichi(x):
log_x = tm.log(x)
if x <= 0.0:
log_x = neg_infinite
return log_x
@ti.func
def logpdf_normal_taichi(x, mu, sigma):
return -0.5 * ((x - mu) / sigma) ** 2 - tm.log(sigma) - 0.5 * tm.log(2 * PI)
@ti.func
def pdf_normal_taichi(x, mu, sigma):
return tm.exp(-0.5 * ((x - mu) / sigma) ** 2) / tm.sqrt(2 * PI) / sigma
# Note: unlike a plain logsumexp, this Taichi version takes a 2D array plus a row index
@ti.func
def logsumexp_taichi(x_mat: ti.types.ndarray(), idx: int):
n = x_mat.shape[1]
max = x_mat[idx, 0]
sum = 0.0
ti.loop_config(serialize=True)
for i in range(n):
if x_mat[idx, i] > max:
max = x_mat[idx, i]
ti.loop_config(serialize=True)
for i in range(n):
sum += tm.exp(x_mat[idx, i] - max)
return tm.log(sum) + max
@ti.func
def loglik_l_xt_taichi(x, l, theta):
utr_len = theta - x
res = neg_infinite
if l <= utr_len:
res = -tm.log(utr_len)
return res
@ti.func
def lik_l_xt_taichi(x, l, theta) -> ti.f64:
utr_len = theta - x
res = 0.0
if l <= utr_len:
res = 1 / utr_len
return res
@ti.func
def loglik_x_st_pa_taichi(pa, theta, sigma_f):
return logpdf_normal_taichi(pa - theta, 0, sigma_f)
@ti.func
def loglik_x_st_taichi(x, s, theta, mu_f, sigma_f):
return logpdf_normal_taichi(x, theta + s - mu_f, sigma_f)
@ti.func
def lik_x_st_taichi(x, s, theta, mu_f, sigma_f):
return pdf_normal_taichi(x, theta + s - mu_f, sigma_f)
@ti.func
def loglik_r_s_taichi(r, s):
res = neg_infinite
if r <= s:
res = -tm.log(s)
return res
@ti.func
def lik_r_s_taichi(r, s):
res = 0.0
if r <= s:
res = 1 / s
return res
# ------------- kernel functions -------------
# ------------- [core func 1] loglik_xlr_t_pa -------------
@ti.kernel
def loglik_xlr_t_pa_kernel(x_arr: ti.types.ndarray(), l_arr: ti.types.ndarray(),
pa_arr: ti.types.ndarray(), loglik_arr: ti.types.ndarray(),
theta: float, sigma_f: float, n_frag: int):
for i in range(n_frag):
loglik_arr[i] = loglik_l_xt_taichi(x_arr[i], l_arr[i], theta) + \
loglik_x_st_pa_taichi(pa_arr[i], theta, sigma_f)
# ------------- [core func 2] loglik_xlr_t_r_known -------------
@ti.kernel
def loglik_xlr_t_r_known_kernel(x_arr: ti.types.ndarray(), l_arr: ti.types.ndarray(), r_arr: ti.types.ndarray(),
s_dis_arr: ti.types.ndarray(), pmf_s_arr: ti.types.ndarray(),
logpmf_s_arr: ti.types.ndarray(),
loglik_arr: ti.types.ndarray(),
tmp_mat: ti.types.ndarray(),
theta: float, mu_f: float, sigma_f: float):
n_s = s_dis_arr.shape[0]
n_frag = loglik_arr.shape[0]
for i in range(n_frag):
tmpn = 0.0
for j in range(n_s):
s = s_dis_arr[j]
if s < r_arr[i]:
tmp_mat[i, j] = neg_infinite
continue
else:
tmpn += pmf_s_arr[j]
tmp_mat[i, j] = loglik_r_s_taichi(r_arr[i], s) + loglik_x_st_taichi(x_arr[i], s, theta, mu_f, sigma_f) \
+ loglik_l_xt_taichi(x_arr[i], l_arr[i], theta) + logpmf_s_arr[j]
loglik_arr[i] = logsumexp_taichi(tmp_mat, i) - tm.log(tmpn)
# ------------- [core func 3] loglik_xlr_t_r_unknown -------------
# before my_log
# cpu [0. ] taichi [2.16908322e-309]
# after my_log
# cpu [neg_infinite] taichi [-710.7244891]
@ti.kernel
def loglik_xlr_t_r_unknown_kernel(x_arr: ti.types.ndarray(), l_arr: ti.types.ndarray(), r_arr: ti.types.ndarray(),
s_dis_arr: ti.types.ndarray(), pmf_s_arr: ti.types.ndarray(),
loglik_arr: ti.types.ndarray(),
theta: float, mu_f: float, sigma_f: float, n_frag: int, n_s: int):
for i in range(n_frag):
for j in range(n_s):
s = s_dis_arr[j]
loglik_arr[i] += 1 / s * lik_x_st_taichi(x_arr[i], s, theta, mu_f, sigma_f) * lik_l_xt_taichi(x_arr[i],
l_arr[i],
theta) * \
pmf_s_arr[j]
# ----------- add 2023.04.20 ----------------
if loglik_arr[i] < 1e-300:
loglik_arr[i] = 0.0
# ----------- add end 2023.04.20 ------------
loglik_arr[i] = my_log_taichi(loglik_arr[i])
# ------------- [core func 4] loglik_marginal_lxr -------------
@ti.kernel
def call_logp_theta_sum_kernel(all_theta: ti.types.ndarray(), logp_theta_arr: ti.types.ndarray(),
n_sel_theta: int, alpha: float, beta: float, min_ind: int) -> ti.f64:
p_theta_sum = 0.0
for i in range(n_sel_theta):
it = i + min_ind
logp_theta_arr[i] = logpdf_normal_taichi(all_theta[it], alpha, beta)
p_theta_sum += tm.exp(logp_theta_arr[i])
logp_theta_sum = tm.log(p_theta_sum)
return logp_theta_sum
@ti.kernel
def cal_res_kernel(loglik_xlr_t_arr: ti.types.ndarray(), logp_theta_arr: ti.types.ndarray(), res: ti.types.ndarray(),
tmp_mat: ti.types.ndarray(), logp_theta_sum: float, n_sel_theta: int, min_ind: int, n_frag: int):
for j in range(n_frag):
for i in range(n_sel_theta):
it = i + min_ind
tmp_mat[j, i] = loglik_xlr_t_arr[j, it] + logp_theta_arr[i] - logp_theta_sum
res[j] = logsumexp_taichi(tmp_mat, j)
# ------------- interface functions -------------
def loglik_xlr_t_pa(x_arr, l_arr, pa_arr, theta, sigma_f):
"""
Args:
x_arr: NumPy array, float64, (n_frag,)
l_arr: NumPy array, float64, (n_frag,)
pa_arr: NumPy array, float64, (n_frag,)
theta: float64
        sigma_f: float
Returns:
loglik_arr: NumPy array, float64, (n_frag,)
"""
n_frag = len(x_arr)
loglik_arr = np.zeros(n_frag)
loglik_xlr_t_pa_kernel(x_arr, l_arr, pa_arr, loglik_arr, theta, sigma_f, n_frag)
return loglik_arr
def loglik_xlr_t_r_known(x_arr, l_arr, r_arr, s_dis_arr, pmf_s_arr, theta, mu_f, sigma_f):
n_frag, n_s = len(x_arr), len(s_dis_arr)
loglik_arr = np.zeros(n_frag)
logpmf_s_arr = np.log(pmf_s_arr)
tmp_mat = np.zeros((n_frag, n_s)) + neg_infinite
loglik_xlr_t_r_known_kernel(x_arr, l_arr, r_arr, s_dis_arr, pmf_s_arr, logpmf_s_arr, loglik_arr,
tmp_mat, theta, mu_f, sigma_f)
return loglik_arr
def loglik_xlr_t_r_unknown(x_arr, l_arr, r_arr, s_dis_arr, pmf_s_arr, theta, mu_f, sigma_f):
n_frag, n_s = len(x_arr), len(s_dis_arr)
loglik_arr = np.zeros(n_frag)
loglik_xlr_t_r_unknown_kernel(x_arr, l_arr, r_arr, s_dis_arr, pmf_s_arr, loglik_arr,
theta, mu_f, sigma_f, n_frag, n_s)
return loglik_arr
def loglik_marginal_lxr(alpha, beta, all_theta, loglik_xlr_t_arr):
n_frag = loglik_xlr_t_arr.shape[0]
min_ind = int(np.searchsorted(all_theta, alpha - 3 * beta, side='left'))
max_ind = int(np.searchsorted(all_theta, alpha + 3 * beta, side='right') - 1)
n_sel_theta = max_ind - min_ind + 1
res = np.zeros(n_frag)
logp_theta_arr = np.zeros(n_sel_theta)
# ti@kernel 1
logp_theta_sum = call_logp_theta_sum_kernel(all_theta, logp_theta_arr,
n_sel_theta, alpha, beta, min_ind)
tmp_mat = np.zeros((n_frag, n_sel_theta)) + neg_infinite
cal_res_kernel(loglik_xlr_t_arr, logp_theta_arr, res, tmp_mat, logp_theta_sum, n_sel_theta, min_ind, n_frag)
return res # n_frag x 1
def get_loglik_marginal_tensor(all_theta, predef_beta_arr, loglik_xlr_t_arr):
n_alpha = len(all_theta)
n_beta = len(predef_beta_arr)
n_frag = loglik_xlr_t_arr.shape[0]
res = np.zeros((n_alpha, n_beta, n_frag)) + neg_infinite
for i, j in product(range(n_alpha), range(n_beta)):
res[i, j] = loglik_marginal_lxr(all_theta[i], predef_beta_arr[j], all_theta, loglik_xlr_t_arr)
return res
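# Illustrative usage sketch (not part of the original module): synthetic float64
# arrays just to show the expected shapes of the interface functions. Note that
# this module calls ti.init(arch=ti.cuda) at import time, so a CUDA-capable GPU
# is assumed; the parameter values below are arbitrary.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_frag = 100
    x_arr = rng.uniform(0.0, 50.0, n_frag)    # float64, shape (n_frag,)
    l_arr = rng.uniform(10.0, 30.0, n_frag)   # float64, shape (n_frag,)
    pa_arr = rng.uniform(50.0, 70.0, n_frag)  # float64, shape (n_frag,)
    loglik = loglik_xlr_t_pa(x_arr, l_arr, pa_arr, theta=60.0, sigma_f=5.0)
    print(loglik.shape)  # (100,)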
|
/scape-apa-1.0.0.tar.gz/scape-apa-1.0.0/src/scape/taichi_core.py
| 0.40439 | 0.336863 |
taichi_core.py
|
pypi
|
import os
import torch
from torch.utils.data import DataLoader
from torch.autograd import Variable
import numpy as np
from tqdm.auto import tqdm
import matplotlib.pyplot as plt
from loss import compute_mmd, mse_loss, binary_cross_entropy,kl_divergence
import random
def adjust_learning_rate(init_lr, optimizer, iteration,seperation):
lr = max(init_lr * (0.9 ** (iteration//seperation)), 0.0001)
for param_group in optimizer.param_groups:
param_group["lr"] = lr
return lr
class EarlyStopping:
"""
    Stops training early if the loss doesn't improve within a given patience.
"""
def __init__(self, patience=10, verbose=False, checkpoint_file=''):
"""
Parameters
----------
patience
How long to wait after last time loss improved. Default: 10
verbose
If True, prints a message for each loss improvement. Default: False
"""
self.patience = patience
self.verbose = verbose
self.counter = 0
self.best_score = None
self.early_stop = False
self.loss_min = np.Inf
self.checkpoint_file = checkpoint_file
def __call__(self, loss, model):
# loss=loss.cpu().detach().numpy()
if np.isnan(loss):
self.early_stop = True
score = -loss
if self.best_score is None:
self.best_score = score
self.save_checkpoint(loss, model)
elif score < self.best_score:
self.counter += 1
if self.verbose:
print(f'EarlyStopping counter: {self.counter} out of {self.patience}')
if self.counter > self.patience:
self.early_stop = True
model.load_model(self.checkpoint_file)
else:
self.best_score = score
self.save_checkpoint(loss, model)
self.counter = 0
def save_checkpoint(self, loss, model):
'''
        Saves the model when the loss decreases.
'''
if self.verbose:
print(f'Loss decreased ({self.loss_min:.6f} --> {loss:.6f}). Saving model ...')
torch.save(model.state_dict(), self.checkpoint_file)
self.loss_min = loss
def train(model, data, condition, velocity, epoch, batch_size, lr, weight_decay,patience,GPU, seed,verbose, outdir,a):
if torch.cuda.is_available(): # cuda device
device='cuda'
torch.cuda.set_device(GPU)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
else:
device='cpu'
torch.manual_seed(seed)
np.random.seed(seed)
random.seed(seed)
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
model.train()
dataset = torch.utils.data.TensorDataset(data,condition,velocity)
early_stopping = EarlyStopping(patience=patience, checkpoint_file=os.path.join(outdir,'model.pt'))
y_loss = {} # loss history
y_loss['MSE1'] = []
y_loss['MSE2'] = []
y_loss['train']=[]
x_epoch = []
fig = plt.figure()
for epoch in tqdm(range(1, epoch+1)):
epoch_lr = adjust_learning_rate(lr, optimizer, epoch, seperation=10)
MSE1_loss=0.0
MSE2_loss=0.0
train_loss=0.0
train_data=DataLoader(dataset,batch_size=batch_size,shuffle=True, drop_last=True)
for iteration,data_list in enumerate(train_data):
x=data_list[0].to(device)
c=data_list[1].to(device)
v=data_list[2].to(device)
optimizer.zero_grad()
recon_x, recon_g = model(x)
mu, log_var,g = model.encoder(x)
z = model.reparameterize(mu, log_var)
true_samples = Variable(torch.randn(x.shape[0], 10), requires_grad=False)
mmd = 50*compute_mmd(true_samples.to(device), z)
# mmd=kl_divergence(mu,log_var)
mse1= 10* a * mse_loss(recon_x,v)
mse2= 10*(1-a)*mse_loss(recon_g,c)
mse = mse1+mse2
loss=mse+mmd
loss.backward()
optimizer.step()
MSE1_loss += mse1.item()
MSE2_loss += mse2.item()
train_loss += loss.item()
epoch_loss1 = MSE1_loss / len(train_data)
epoch_loss2 = MSE2_loss / len(train_data)
epoch_loss3 = train_loss / len(train_data)
y_loss['MSE1'].append(epoch_loss1)
y_loss['MSE2'].append(epoch_loss2)
y_loss['train'].append(epoch_loss3)
x_epoch.append(epoch)
if verbose:
plt.plot(x_epoch, y_loss['train'], 'go-', label='train',linewidth=1.5, markersize=4)
plt.plot(x_epoch, y_loss['MSE1'], 'ro-', label='MSE1',linewidth=1.5, markersize=4)
plt.plot(x_epoch, y_loss['MSE2'], 'bo-', label='MSE2',linewidth=1.5, markersize=4)
if len(x_epoch)==1:
plt.legend()
print('====> Epoch: {}, Loss: {:.4f}, MSE: {:.4f}, MMD: {:.4f}'.format(epoch,loss.cpu().data.numpy(),mse.cpu().data.numpy(),mmd.cpu().data.numpy()))
early_stopping(loss.cpu().data.numpy(), model)
if early_stopping.early_stop:
print('EarlyStopping: run {} epoch'.format(epoch))
break
if verbose:
plt.xlabel('Epoch')
plt.ylabel('Loss')
fig.savefig(os.path.join(outdir, 'train_loss.pdf'))
return device
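# Illustrative sketch (not part of the original module) of how EarlyStopping is
# driven from a loop. It assumes the model exposes a load_model(path) method,
# as the package's VAE does; the loss values below are dummies, not real
# training output.
if __name__ == "__main__":
    import torch.nn as nn

    class TinyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(4, 2)

        def load_model(self, path):
            self.load_state_dict(torch.load(path))

    tiny = TinyModel()
    stopper = EarlyStopping(patience=3, verbose=True, checkpoint_file="tiny.pt")
    for fake_loss in [1.0, 0.8, 0.9, 0.95, 1.1, 1.2]:
        stopper(fake_loss, tiny)
        if stopper.early_stop:
            print("stopped early; best checkpoint restored")
            break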
|
/scape_sc-0.5.2-py3-none-any.whl/scape/train.py
| 0.737347 | 0.421135 |
train.py
|
pypi
|
import os
import random
import pandas as pd
import numpy as np
import scanpy as sc
import scipy as sci
import scvelo as scv
import torch
from model import VAE
from train import train
def SCAPE(adata,adata_raw,genes,perturbation='KO',epoch=1000,lr=0.0005,patience=10,seed=9,GPU=0,batch_size=128,n_jobs=1,a=0.9,outdir='./',verbose=False):
"""
Single-Cell integrative Analysis via Latent feature Extraction
Parameters
----------
    adata
        An AnnData object with RNA velocity already computed, i.e. with
        adata.layers["velocity"] and adata.var["velocity_genes"] available.
    adata_raw
        An AnnData object providing the expression values of the velocity genes.
    genes
        Genes whose velocity is used as the perturbation condition.
    perturbation
        Perturbation mode, 'KO' (knock-out) or 'OE' (over-expression). Default: 'KO'.
    epoch
        Max iterations for training. Default: 1000.
    lr
        Learning rate. Default: 0.0005.
    patience
        Patience in early stopping. Default: 10.
    seed
        Random seed. Default: 9.
    GPU
        Index of GPU to use if GPU is available. Default: 0.
    batch_size
        Batch size for training. Default: 128.
    n_jobs
        Number of jobs passed to scvelo when recomputing the velocity graph. Default: 1.
    a
        Relative weight of the velocity reconstruction loss versus the condition
        reconstruction loss. Default: 0.9.
    outdir
        Output directory. Default: './'.
    verbose
        Verbosity, True or False. Default: False.
Returns
-------
    adata with the latent representations stored at adata.obsm['latent'] and adata.obsm['latent_g'], the perturbed velocities stored as layers ('velocity_scape', 'velocity_scape_unp', 'velocity_delta'), and the velocity graph recomputed on the perturbed velocities.
The output folder contains:
adata.h5ad
The AnnData Object with the low-dimensional representation of the data stored at adata.obsm['latent'].
checkpoint
model.pt contains the variables of the model.
"""
np.random.seed(seed)
torch.manual_seed(seed)
random.seed(seed)
os.makedirs(outdir, exist_ok=True)
adata.var["cond_genes"]=False
adata.var["cond_genes"].loc[genes]=True
cond = torch.from_numpy(adata.layers["velocity"][:,adata.var["cond_genes"]].astype(np.float32).copy())
for i in genes:
if i in list(adata.var[adata.var["velocity_genes"]].index):
            adata.var.loc[i, 'velocity_genes'] = False
velocity_genes=adata.var[adata.var["velocity_genes"]].index.copy()
data = torch.from_numpy(adata_raw[:, velocity_genes].X.A.astype(np.float32).copy())
vel = torch.from_numpy(adata.layers["velocity"][:, adata.var["velocity_genes"]].astype(np.float32).copy())
x_dim=data.shape[1]
c_dim=cond.shape[1]
model=VAE(x_dim, c_dim)
device=train(model, data,condition=cond, velocity=vel,epoch=epoch, batch_size=batch_size,lr=lr,weight_decay=5e-4, patience=patience,GPU=GPU, seed=seed,verbose=verbose, outdir=outdir,a=a)
# Load model
pretrained_dict = torch.load(os.path.join(outdir,'model.pt'), map_location=device)
model_dict = model.state_dict()
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
model_dict.update(pretrained_dict)
model.load_state_dict(model_dict)
model = model.eval()
# save velocity
latent = model.predict(data,out='latent')
adata.obsm['latent']=latent.copy()
latent_g = model.predict(data,out='z_p')
adata.obsm['latent_g']=latent_g.copy()
if perturbation=='KO':
recon_x = model.predict(data,out='x',device=device)
recon_x_ = model.predict_ko(data,device=device)
adata.layers['velocity_scvelo']=adata.layers['velocity'].copy()
gene_subset=adata.var["velocity_genes"]
adata.layers['velocity_scape_unp'] = np.ones(adata.shape) * np.nan
adata.layers['velocity_scape_unp'][:, gene_subset] = recon_x.copy()
adata.layers['velocity_scape'] = np.ones(adata.shape) * np.nan
adata.layers['velocity_scape'][:, gene_subset] = recon_x_.copy()
velocity_delta = adata.layers['velocity_scape']-adata.layers['velocity_scape_unp']
adata.layers['velocity_delta']=velocity_delta.copy()
elif perturbation=='OE':
recon_x = model.predict(data, out='x',device=device)
recon_x_ = model.predict_oe(data,device=device)
adata.layers['velocity_scvelo']=adata.layers['velocity'].copy()
gene_subset=adata.var["velocity_genes"]
adata.layers['velocity_scape_unp'] = np.ones(adata.shape) * np.nan
adata.layers['velocity_scape_unp'][:, gene_subset] = recon_x.copy()
adata.layers['velocity_scape'] = np.ones(adata.shape) * np.nan
adata.layers['velocity_scape'][:, gene_subset] = recon_x_.copy()
velocity_delta = adata.layers['velocity_scape']-adata.layers['velocity_scape_unp']
adata.layers['velocity_delta']=velocity_delta.copy()
adata.layers['velocity']=adata.layers['velocity_scape'].copy()
scv.tl.velocity_graph(adata,n_jobs=n_jobs)
#Output
adata.write(os.path.join(outdir,'adata.h5ad'))
return adata
|
/scape_sc-0.5.2-py3-none-any.whl/scape/function.py
| 0.717111 | 0.524943 |
function.py
|
pypi
|
import torch
import torch.nn as nn
class weightConstraint(object):
def __init__(self):
pass
def __call__(self,module):
if hasattr(module,'weight'):
w=module.weight.data
w=w.clamp(min=0)
module.weight.data=w
class VAE(nn.Module):
def __init__(self, x_dim, c_dim):
super(VAE, self).__init__()
constraints=weightConstraint()
# encoder layer
self.e_fc1 = self.fc_layer(x_dim, 1024,activation=3)
self.e_fc2 = self.fc_layer(1024, 128,activation=3)
self.mu_enc = nn.Linear(128, 10)
self.var_enc = nn.Linear(128, 10)
self.g_enc = nn.Linear(128, 10)
# decoder layer
self.d_fc1 = self.fc_layer(10, x_dim, activation=2)
self.d_fc2 = self.fc_layer(10, c_dim, activation=6)
self.d_fc2.apply(constraints)
def reparameterize(self, mu, log_var):
# vae reparameterization trick
std = torch.exp(0.5*log_var)
eps = torch.randn_like(std)
self.z_mean = mu
self.z_sigma = std
return mu + eps*std
def forward(self,x):
mu, log_var, g = self.encoder(x)
z = self.reparameterize(mu, log_var)
return self.decoder(z,g)
def encoder(self, x):
layer1 = self.e_fc1(x)
layer2 = self.e_fc2(layer1)
mu=self.mu_enc(layer2)
log_var=self.var_enc(layer2)
g = self.g_enc(layer2)
return mu, log_var, g
def decoder(self, z, g):
# z_g = torch.cat((z,g),dim=1)
z_g=z+g
recon_x = self.d_fc1(z_g)
recon_g = self.d_fc2(g)
return recon_x,recon_g
def fc_layer(self, in_dim, out_dim, activation=0):
if activation == 1:
layer = nn.Sequential(
nn.Linear(in_dim, out_dim),
nn.ReLU())
elif activation == 2:
layer = nn.Sequential(
nn.Linear(in_dim, out_dim))
elif activation == 3:
layer = nn.Sequential(
nn.Linear(in_dim, out_dim),
# nn.Dropout(0.3),
nn.BatchNorm1d(out_dim),
nn.LeakyReLU())
elif activation == 4:
layer = nn.Sequential(
nn.Linear(in_dim, out_dim),
nn.Sigmoid())
elif activation == 5:
layer = nn.Sequential(
nn.Linear(in_dim, out_dim,bias=False),
nn.ReLU())
elif activation == 6:
layer = nn.Sequential(
nn.Linear(in_dim, out_dim,bias=False))
elif activation == 7:
layer = nn.Sequential(
nn.Linear(in_dim, out_dim),
nn.Tanh())
return layer
def predict(self, data, device='cuda', out='z'):
x = data.float().to(device)
mu, log_var, g = self.encoder(x)
# z = self.reparameterize(mu, log_var)
z = mu
# g_0 = torch.zeros_like(g)
if out == 'latent':
z_g=z+g
output=z_g.detach().cpu().data.numpy()
elif out == 'z':
output=z.detach().cpu().data.numpy()
elif out == 'z_p':
output=g.detach().cpu().data.numpy()
elif out == 'x':
recon_x, recon_g = self.decoder(z,g)
output = recon_x.detach().cpu().data.numpy()
elif out == 'g':
recon_x, recon_g = self.decoder(z,g)
output = recon_g.detach().cpu().data.numpy()
return output
def predict_oe(self, data, device='cuda'):
x = data.float().to(device)
mu, log_var,g = self.encoder(x)
# z = self.reparameterize(mu, log_var)
z = mu
g_0 = torch.ones_like(g)
recon_x, recon_g = self.decoder(z,g_0)
# recon_x, recon_g = self.decoder(z,z_p)
output = recon_x.detach().cpu().data.numpy()
return output
def predict_ko(self, data, device='cuda'):
x = data.float().to(device)
mu, log_var,g = self.encoder(x)
# z = self.reparameterize(mu, log_var)
z = mu
# g_0 = torch.zeros_like(g)
g_0 = -1*torch.ones_like(g)
recon_x, recon_g = self.decoder(z,g_0)
output = recon_x.detach().cpu().data.numpy()
return output
def load_model(self, path):
pretrained_dict = torch.load(path, map_location=lambda storage, loc: storage)
model_dict = self.state_dict()
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
model_dict.update(pretrained_dict)
self.load_state_dict(model_dict)
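# Illustrative usage sketch (not part of the original module): the dimensions
# and the batch below are made up and only show the expected tensor shapes.
if __name__ == "__main__":
    model = VAE(x_dim=200, c_dim=5)
    x = torch.randn(16, 200)                               # a batch of 16 cells
    recon_x, recon_g = model(x)                            # (16, 200) and (16, 5)
    latent = model.predict(x, device="cpu", out="latent")  # (16, 10) numpy array
    print(recon_x.shape, recon_g.shape, latent.shape)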
|
/scape_sc-0.5.2-py3-none-any.whl/scape/model.py
| 0.895795 | 0.355104 |
model.py
|
pypi
|
import open3d as o3d
import numpy as np
import copy
import math
from tqdm import tqdm
import pyvista as pv
class scapula():
def __init__(self, file_name):
self.mesh = o3d.io.read_triangle_mesh(file_name)
self.mesh.compute_vertex_normals()
self.pcd = o3d.io.read_point_cloud(file_name)
self.guide_mesh = 0
self.mesh_pv = pv.read(file_name)
def get_points(self, point_local):
self.p1 = point_local[0]
self.p2 = point_local[1]
self.p3 = point_local[2]
def select_points(self):
def pick_points(pcd):
vis = o3d.visualization.VisualizerWithEditing()
vis.create_window()
vis.add_geometry(pcd)
vis.add_geometry(pcd)
vis.run()
vis.destroy_window()
return vis.get_picked_points()
value = self.pcd.points
picked_id_pcd = pick_points(self.pcd)
self.p1 = value[picked_id_pcd[0]]
self.p2 = value[picked_id_pcd[1]]
self.p3 = value[picked_id_pcd[2]]
self.id = picked_id_pcd
def computer_circle(self):
def find_center(p1, p2, p3):
x1 = p1[0];y1 = p1[1];z1 = p1[2]
x2 = p2[0];y2 = p2[1];z2 = p2[2]
x3 = p3[0];y3 = p3[1];z3 = p3[2]
a1 = (y1*z2 - y2*z1 - y1*z3 + y3*z1 + y2*z3 - y3*z2)
b1 = -(x1*z2 - x2*z1 - x1*z3 + x3*z1 + x2*z3 - x3*z2)
c1 = (x1*y2 - x2*y1 - x1*y3 + x3*y1 + x2*y3 - x3*y2)
d1 = -(x1*y2*z3 - x1*y3*z2 - x2*y1*z3 + x2*y3*z1 + x3*y1*z2 - x3*y2*z1)
a2 = 2 * (x2 - x1)
b2 = 2 * (y2 - y1)
c2 = 2 * (z2 - z1)
d2 = x1*x1 + y1*y1 + z1*z1 - x2*x2 - y2*y2 - z2*z2
a3 = 2 * (x3 - x1)
b3 = 2 * (y3 - y1)
c3 = 2 * (z3 - z1)
d3 = x1*x1 + y1*y1 + z1*z1 - x3*x3 - y3*y3 - z3*z3
x = -(b1*c2*d3 - b1*c3*d2 - b2*c1*d3 + b2*c3*d1 + b3*c1*d2 - b3*c2*d1) / (a1*b2*c3 - a1*b3*c2 - a2*b1*c3 + a2*b3*c1 + a3*b1*c2 - a3*b2*c1)
y = (a1*c2*d3 - a1*c3*d2 - a2*c1*d3 + a2*c3*d1 + a3*c1*d2 - a3*c2*d1) / (a1*b2*c3 - a1*b3*c2 - a2*b1*c3 + a2*b3*c1 + a3*b1*c2 - a3*b2*c1)
z = -(a1*b2*d3 - a1*b3*d2 - a2*b1*d3 + a2*b3*d1 + a3*b1*d2 - a3*b2*d1) / (a1*b2*c3 - a1*b3*c2 - a2*b1*c3 + a2*b3*c1 + a3*b1*c2 - a3*b2*c1)
return x, y, z
p1 = self.p1; p2 = self.p2; p3 = self.p3
x, y, z = find_center(p1, p2, p3)
r_circle = np.sqrt((p1[0] - x)**2 + (p1[1] - y)**2 + (p1[2] - z)**2)
self.center = [x, y, z]
self.r = r_circle
def move_center_to_O(self):
def change_mesh(mesh_first, x, y, z):
a = [-x, -y, -z]
mesh_second = copy.deepcopy(mesh_first).translate(tuple(a))
mesh_second.compute_vertex_normals()
return mesh_second
x = self.center[0]; y = self.center[1]; z = self.center[2]
self.mesh = change_mesh(self.mesh, x, y, z)
def find_vector(self):
def find_normal_vector(p1, p2, p3):
x1 = p1[0];y1 = p1[1];z1 = p1[2]
x2 = p2[0];y2 = p2[1];z2 = p2[2]
x3 = p3[0];y3 = p3[1];z3 = p3[2]
a = (y2 - y1) * (z3 - z1) - (y3 - y1) * (z2 - z1)
b = (z2 - z1) * (x3 - x1) - (z3 - z1) * (x2 - x1)
c = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
return [a, b, c]
def find_dis(point, mesh):
mesh2 = copy.deepcopy(mesh)
mesh2 = o3d.t.geometry.TriangleMesh.from_legacy(mesh)
scene = o3d.t.geometry.RaycastingScene()
_ = scene.add_triangles(mesh2)
query_point = o3d.core.Tensor([point], dtype=o3d.core.Dtype.Float32)
return scene.compute_signed_distance(query_point)
def amount_point(normal_vector, mesh_second):
length = 0.1
j = 0
for i in range(100):
vector_point = normal_vector * (length * i)
if find_dis(vector_point, mesh_second) < 0:
j = j + 1
return j
def dis(x, y):
return np.sqrt((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2 + (x[2] - y[2]) ** 2)
def find_angle(p1, p2, p3):
l1 = dis(p1, p2); l2 = dis(p2, p3); l3 = dis(p1, p3)
cos = (l1 ** 2 + l2 ** 2 - l3 ** 2) / (2 * l1 * l2)
return math.acos(cos)/np.pi
def rotate_mesh(normal_vector):
point_coordinate = [0, 0, 0]
            # Vector OB, i.e. the normal vector
vector_ob = [normal_vector[0], normal_vector[1], normal_vector[2]]
            # Angle between the normal vector and the z axis
theta = find_angle(vector_ob, [0, 0, 0], [0, 0, 1])
            # First rotation
vector_ob2 = [0, np.sin(np.pi * theta), np.cos(np.pi * theta)]
alpha = find_angle(vector_ob, [0, 0,np.cos(np.pi * theta)], vector_ob2)
if vector_ob[0] < 0:
alpha = - alpha
R = self.mesh.get_rotation_matrix_from_xyz((0, 0, np.pi * alpha))
mesh_third = copy.deepcopy(self.mesh)
mesh_third.rotate(R, center=point_coordinate)
            # Second rotation
R = self.mesh.get_rotation_matrix_from_xyz((np.pi * theta, 0, 0))
mesh_fourth = copy.deepcopy(mesh_third)
mesh_fourth.rotate(R, center=point_coordinate)
return mesh_fourth
def rotate_mesh2(normal_vector, mesh):
point_coordinate = (0, 0, 0)
            # Vector OB, i.e. the normal vector
vector_ob = [normal_vector[0], normal_vector[1], normal_vector[2]]
# print (vector_ob)
            # Angle between the normal vector and the z axis
mesh_second = copy.deepcopy(mesh)
theta = find_angle(vector_ob, [0, 0, 0], [0, 1, 0])
# print ('/n', theta, '/n'); print ('/n', find_angle(vector_ob, [0, 0, 0], [1, 0, 0]), '/n')
R = mesh_second.get_rotation_matrix_from_xyz((0, 0, theta))
mesh_third = copy.deepcopy(mesh)
mesh_third.rotate(R, center=point_coordinate)
return mesh_third
def change_cylinder(mesh_cylinder1):
point_coordinate = [0, 0, 0]
a = - np.asarray(mesh_cylinder1.vertices)[0] + [0, 0, 0]
mesh_cylinder2 = copy.deepcopy(mesh_cylinder1).translate(tuple(a))
mesh_cylinder2.compute_vertex_normals()
R = self.mesh.get_rotation_matrix_from_xyz((0, np.pi * 1, 0))
mesh_cylinder = copy.deepcopy(mesh_cylinder2)
mesh_cylinder.rotate(R, center=point_coordinate)
return mesh_cylinder
p1 = self.p1; p2 = self.p2; p3 = self.p3
normal_vector_zero = find_normal_vector(p1, p2, p3)
normal_vector_module = (normal_vector_zero[0] **2 + normal_vector_zero[1] **2 + normal_vector_zero[2] **2) **0.5
normal_vector = (np.asarray(normal_vector_zero)) / normal_vector_module
normal_vector_back = normal_vector * (-1)
numeber = amount_point(normal_vector, self.mesh)
numeber_back = amount_point(normal_vector_back, self.mesh)
if numeber_back > numeber:
normal_vector = normal_vector_back
self.mesh = rotate_mesh(normal_vector)
# print (normal_vector)
self.mesh_frame = o3d.geometry.TriangleMesh.create_coordinate_frame(size = 100)
self.mesh_frame.compute_vertex_normals()
p1 = np.array(self.mesh.vertices[self.id[0]])
vector2 = np.array(p1) / ((p1[0] **2 + p1[1] **2 + p1[2] **2) **0.5)
self.mesh = rotate_mesh2(vector2, self.mesh)
self.cylinder = o3d.geometry.TriangleMesh.create_cylinder(radius=3.25,
height=50)
self.cylinder = change_cylinder(self.cylinder)
self.cylinder0 = copy.deepcopy(self.cylinder)
self.mesh_frame = o3d.geometry.TriangleMesh.create_coordinate_frame(size = 100)
self.mesh_frame.compute_vertex_normals()
# o3d.visualization.draw_geometries([self.cylinder, self.mesh, self.mesh_frame])
def find_nail(self):
def dis(x, y):
return np.sqrt((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2 + (x[2] - y[2]) ** 2)
def find_dis2(point):
# mesh = copy.deepcopy(mesh)
query_point = o3d.core.Tensor([point], dtype=o3d.core.Dtype.Float32)
return scene.compute_signed_distance(query_point)
mesh = self.mesh; point_coordinate = (0, 0, 0)
mesh2 = copy.deepcopy(self.mesh)
mesh2 = o3d.t.geometry.TriangleMesh.from_legacy(self.mesh)
scene = o3d.t.geometry.RaycastingScene()
_ = scene.add_triangles(mesh2)
        # 1. Set the step sizes: angle 1 is 1 degree, angle 2 is 18 degrees
theta1 = 5/5; theta2 = 360/20
        # 2. Initialise the recorder
        location = [0, [], []]  # length, point position, cylinder position
        # 3. Start the exhaustive search
p = []
for i in range(5):
for j in range(20):
p.append([i, j])
for z in tqdm(p):
i = z[0]; j = z[1]
            # 3.1. Work out the cylinder position to evaluate and rotate the cylinder from its initial position onto it
theta_y = 10 + theta1 * i; theta_z = theta2 * j
R = mesh.get_rotation_matrix_from_xyz((0, theta_y * np.pi / 180, 0))
mesh_cylinderchange1 = copy.deepcopy(self.cylinder)
mesh_cylinderchange1.rotate(R, center=point_coordinate)
R = mesh.get_rotation_matrix_from_xyz((0, 0, theta_z * np.pi / 180))
mesh_cylinderchange = copy.deepcopy(mesh_cylinderchange1)
mesh_cylinderchange.rotate(R, center=point_coordinate)
            # 3.2. Evaluate the current cylinder position: for every sampled point on the cylinder, walk 200 unit lengths along both the positive and the negative x axis; if one side lies entirely outside the model, the point is outside the model. Keep the point on the nail that is outside the model and closest to the circle centre.
length = 0.1
dis_origin = 100
pcd2 = mesh_cylinderchange.sample_points_uniformly(number_of_points=200)
point = np.asarray(pcd2.points)
point_dis_coordinate = np.array([dis(point[k], point_coordinate) for k in range(200)])
for k in range(200):
if (point_dis_coordinate[k] >= dis_origin) or (point_dis_coordinate[k] <= 5):
continue
judge1 = -1; judge2 = -1
position_x = np.arange(0, 80, 0.1) + point[k][0]
position_x = position_x.reshape(-1, 1)
position_y = np.repeat(point[k][1], 800).reshape(-1, 1)
position_z = np.repeat(point[k][2], 800).reshape(-1, 1)
position = np.concatenate((position_x, position_y, position_z),axis=1)
dis2 = find_dis2(position)
dis2 = dis2.reshape(-1)
if (dis2>=0).all():
judge1 = 1
position_x = np.arange(-80, 0, 0.1) + point[k][0]
position_x = position_x.reshape(-1, 1)
position_y = np.repeat(point[k][1], 800).reshape(-1, 1)
position_z = np.repeat(point[k][2], 800).reshape(-1, 1)
position = np.concatenate((position_x, position_y, position_z),axis=1)
dis2 = find_dis2(position)
dis2 = dis2.reshape(-1)
if (dis2>=0).all():
judge2 = 1
if (judge1 > 0 or judge2 > 0) and (dis_origin > point_dis_coordinate[k]):
dis_origin = point_dis_coordinate[k]
know = point[k]
if (dis_origin != 100) and (dis_origin > location[0]):
location[0] = dis_origin; location[1] = know; location[2] = [i, j]
self.location = location
R = mesh.get_rotation_matrix_from_xyz((0, (5/5*location[2][0]+10)*np.pi / 180, 0))
mesh_cylinderchange1 = copy.deepcopy(self.cylinder)
mesh_cylinderchange1.rotate(R, center=point_coordinate)
R = mesh.get_rotation_matrix_from_xyz((0, 0, (360/20)*location[2][1]*np.pi / 180))
mesh_cylinderchange = copy.deepcopy(mesh_cylinderchange1)
mesh_cylinderchange.rotate(R, center=point_coordinate)
self.cylinder = copy.deepcopy(mesh_cylinderchange)
def find_handle(self):
point_coordinate = (0, 0, 0)
R = self.mesh.get_rotation_matrix_from_xyz((0, (5/5*self.location[2][0]+10)*np.pi / 180, 0))
mesh_cylinderchange1 = copy.deepcopy(self.cylinder2)
mesh_cylinderchange1.rotate(R, center=point_coordinate)
R = self.mesh.get_rotation_matrix_from_xyz((0, 0, (360/20)*self.location[2][1]*np.pi / 180))
mesh_cylinderchange = copy.deepcopy(mesh_cylinderchange1)
mesh_cylinderchange.rotate(R, center=point_coordinate)
self.cylinder2 = copy.deepcopy(mesh_cylinderchange)
def find_guide(self):
mesh1 = o3d.t.geometry.TriangleMesh.from_legacy(self.mesh)
scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(mesh1)
a=np.array([])
r_circle = self.r
r_circle /= 2 / 3
p = []
for i in range(180):
for j in range(180):
for k in range(15):
p.append([i, j, k])
for z1 in tqdm(p):
i = z1[0]; j = z1[1]; k = z1[2]
x=(-r_circle / 2) + r_circle / 180 * i; y=(self.mesh.vertices[self.id[0]][1]) - r_circle / 180 * j; z = (-10) + 0.8 * k
query_point = o3d.core.Tensor([[x,y,z]],dtype=o3d.core.Dtype.Float32)
ans = scene.compute_closest_points(query_point)
points=ans['points'].numpy()
triangle=ans['primitive_ids'][0].item()
a=np.append(a,triangle)
a=a.astype(int)
mesh2 = copy.deepcopy(self.mesh)
mesh2.triangles = o3d.utility.Vector3iVector(
np.asarray(mesh2.triangles)[a])
mesh2.triangle_normals = o3d.utility.Vector3dVector(
np.asarray(mesh2.triangle_normals)[a])
mesh2.paint_uniform_color([0.1, 0.1, 0.7])
mesh2.compute_vertex_normals()
pcd1 = mesh2.sample_points_uniformly(number_of_points=10000)
xyz = np.asarray(pcd1.points)
xyz2 = []
for i in range(10000):
if (xyz[i][0])**2 + (xyz[i][1])**2 > 2.4**2:
xyz2.append(xyz[i])
xyz2 = np.array(xyz2)
xyz = copy.deepcopy(xyz2)
p = []
z1 = []
for i in range(xyz.shape[0]):
for j in range(10):
z1.append([i, j])
for z in tqdm(z1):
i = z[0]; j = z[1]
q = [xyz[i, 0], xyz[i, 1], xyz[i, 2] - j * 0.5]
p.append(q)
p = np.array(p)
pcd2 = o3d.geometry.PointCloud()
pcd2.points = o3d.utility.Vector3dVector(p)
self.guide_pcd = pcd2
mesh4 = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd2, alpha=2)
mesh4.compute_vertex_normals()
mesh4.paint_uniform_color([0, 0.8,0.8])
self.guide_mesh = mesh4
self.guide_mesh.paint_uniform_color([0.1, 0.1, 0.7])
def show(self, l):
pl = pv.Plotter()
for i in range(len(l)):
o3d.io.write_triangle_mesh('%d.stl'%i, l[i])
p = pv.read('%d.stl'%i)
_ = pl.add_mesh(p)
pl.camera_position = 'xz'
pl.show()
def save(self):
o3d.io.write_triangle_mesh('cylinder.stl', self.cylinder)
o3d.io.write_triangle_mesh('guide.stl', self.guide_mesh)
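# Illustrative workflow sketch (not part of the original module). The STL path
# is hypothetical; select_points() opens an interactive Open3D window in which
# three points on the mesh are picked by hand.
if __name__ == "__main__":
    sp = scapula("scapula.stl")
    sp.select_points()      # pick three points interactively
    sp.computer_circle()    # fit the circle through the picked points
    sp.move_center_to_O()   # translate the mesh so the circle centre is at the origin
    sp.find_vector()        # orient the mesh and create the initial cylinder
    sp.find_nail()          # exhaustive search for the nail (cylinder) position
    sp.find_guide()         # build the guide mesh around the picked region
    sp.save()               # writes cylinder.stl and guide.stl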
|
/scapula_predict-0.0.10-py3-none-any.whl/scapula_predict/__init__.py
| 0.433502 | 0.370624 |
__init__.py
|
pypi
|
from tabulate import tabulate
from scapy_helper.main import get_hex, show_diff
class Compare:
def __init__(self, first, second):
self.first = first
self.second = second
def equal(self):
"""
Return true if booth elements are equal
:return: bool
"""
return not self.diff()
def hex(self):
"""
Return tuple with hex elements
:return: Tuple(str, str)
"""
return get_hex(self.first), get_hex(self.second)
def diff(self):
"""
Show differences between two packets
:return: bool: Return True if packets are NOT EQUAL
"""
print("This is temporary -- will be changed in the future")
return show_diff(self.first, self.second)
def tdiff(self):
"""[Shortcut] Wrapper for the table_diff"""
self.table_diff()
def table_diff(self, index=False):
"""
Print a difference and print table information about packets
        :param index: Default=False. If True, show the index under the differing position
:return: bool: Return True if packets are NOT EQUAL
"""
def prepare_data(first, second):
if "=" in first and "=" in second:
column_a = first.split("=")
column_b = second.split("=")
if column_a != column_b:
header = "{} != {}".format(column_a[1], column_b[1])
return header, column_a[0], column_a[1], column_b[1]
return None, column_a[0], column_a[1], column_b[1]
return first, None, None, None
status = show_diff(self.first, self.second, index=index)
self._print_table(prepare_data)
return status
def _print_table(self, prepare_data):
"""
Print table base on prepared data
:param prepare_data:
:return: None
"""
f_details = self.first.show(dump=True).split("\n")
s_details = self.second.show(dump=True).split("\n")
data = [("Diff or header", "Element", "First", "Second")]
for r in range(len(f_details)):
data.append(prepare_data(f_details[r], s_details[r]))
print(tabulate(data, headers="firstrow", tablefmt="github"))
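# Illustrative usage sketch (not part of the original module), assuming scapy is
# installed; the two packets below differ only in the TCP destination port.
if __name__ == "__main__":
    from scapy.all import IP, TCP
    first = IP(dst="10.0.0.1") / TCP(dport=80)
    second = IP(dst="10.0.0.1") / TCP(dport=443)
    comparison = Compare(first, second)
    print(comparison.hex())     # tuple with the hex dump of each packet
    print(comparison.equal())   # False, the destination ports differ
    comparison.table_diff()     # diff plus a field-by-field table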
|
/scapy_helper-0.14.8.tar.gz/scapy_helper-0.14.8/scapy_helper/compare.py
| 0.784814 | 0.514766 |
compare.py
|
pypi
|
"""CLI Command utilities."""
from multiprocessing import Process
from pathlib import Path
from tempfile import mkdtemp
from time import sleep
from typing import Any, Optional
import click
from requests import RequestException, Session
class Printer:
"""Custom Styled Printer class."""
def __init__(self) -> None:
"""Printer class initialisation method."""
self.left_part = "="
self.center_part = "*"
self.right_part = "="
self.max_animation_width = 5
self.wip_content_length = 0
self.process: Optional[Process] = None
def loop_animation(self, position: int, last_postition: int) -> int:
"""Loop animation logic.
Args:
position (int): Current position of moving part
last_postition (int): Final position of moving part
Returns:
int: Calculated position of moving part
"""
if position < last_postition:
position += 1
else:
position = 1
return position
def static_arrow(self, content: str) -> None:
"""Non-animated printing method for first line.
Args:
content (str): Content to print
"""
click.secho("\r====> ", nl=False, bold=True, fg="magenta")
click.secho(content, fg="green")
def animate(self, content: str, speed: float = 0.2) -> None:
"""Add animation in front of content using multithreading.
Args:
content (str): Content to print
            speed (float, optional): Time interval between each animation frame. Defaults to 0.2.
"""
if self.process is not None:
pos = 1
c_len = len(self.center_part)
last_pos = self.max_animation_width - c_len + 1
while True:
l_len = pos - 1
r_len = self.max_animation_width - c_len - l_len
left = self.left_part * l_len
right = self.right_part * r_len
arrow = f"{left}{self.center_part}{right}"
click.secho(
f"\r{arrow} ", nl=False, bold=True, fg="bright_cyan"
)
click.secho(content, nl=False, fg="bright_yellow")
sleep(speed)
pos = self.loop_animation(pos, last_pos)
def stop_animation(self) -> None:
"""Stop the current animation in progress."""
if self.process is not None:
self.process.terminate()
self.process = None
def working(self, content: str, animate: bool = True) -> None:
"""Print animated content with WIP context.
Args:
content (str): Content to print
            animate (bool, optional): If True, animate the message in a background process; otherwise print it statically. Defaults to True.
"""
self.wip_content_length = len(content)
if animate:
self.process = Process(target=self.animate, args=(content,))
self.process.start()
else:
self.static_arrow(content)
def done(self, content: str, url: str = "") -> None:
"""Print completed content.
Args:
content (str): Content to print
url (str, optional): URL to print and launch in browser. Defaults to "".
"""
correction_length = self.wip_content_length - len(content)
empty_string = ""
if correction_length > 0:
empty_string = " " * correction_length
self.stop_animation()
click.secho("\r<===> ", nl=False, fg="magenta")
if not url:
click.secho(f"{content}{empty_string}", fg="green")
else:
click.secho(f"{content} ", nl=False, fg="green")
click.secho(url, nl=False, fg="bright_blue", underline=True)
click.echo(empty_string)
click.launch(url)
class DownloadError(RequestException):
"""Error class raised while download fails."""
def fetch(url: str) -> Any:
"""Fetch content from URL.
Args:
url (str): URL string
Raises:
DownloadError: Raise when download fails
Returns:
Any: JSON data or Binary data
"""
with Session() as session:
res = session.get(url)
if res.ok:
if "json" in res.headers["content-type"]:
result = res.json()
elif "octet-stream" in res.headers["content-type"]:
result = res.content
else:
result = res.text
else:
raise DownloadError("URL not found/accessable")
return result
def download(
url: str, location: Optional[str] = None, verbose: bool = True
) -> str:
"""Download the content and write to file.
Args:
url (str): URL location to download.
location (str, optional): Location to write downloaded file. Defaults to None.
verbose (bool, optional): Control verbose. Defaults to True.
Returns:
str: Path of the file written.
"""
if verbose:
printer = Printer()
data = fetch(url)
for asset in data["assets"]:
if ".deb" in asset["name"]:
download_url = asset["browser_download_url"]
if verbose:
printer.working("Downloading " + asset["name"])
data = fetch(download_url)
directory = Path(
mkdtemp() if location is None else location
).resolve()
directory.mkdir(parents=True, exist_ok=True)
file = directory / asset["name"]
file.write_bytes(data)
if verbose:
printer.done("Downloaded " + asset["name"] + " successfully")
return str(file.resolve(strict=True))
raise DownloadError("Data not found")
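# Illustrative usage sketch (not part of the original module): exercises the
# Printer on its non-animated path so no background process is spawned; the
# messages are hypothetical.
if __name__ == "__main__":
    printer = Printer()
    printer.working("Preparing files", animate=False)
    sleep(1)
    printer.done("Prepared files")
    printer.working("Publishing release", animate=False)
    printer.done("Published release")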
|
/scapy-man-0.3.3.tar.gz/scapy-man-0.3.3/scapy/cli/utils.py
| 0.9025 | 0.209187 |
utils.py
|
pypi
|
"""CLI Command to generate password."""
import secrets
import string
from pathlib import Path
from typing import Optional
import click
@click.command()
@click.option(
"-r",
"--root",
default=None,
type=click.Path(file_okay=True, resolve_path=True),
help="Root password file. Ignored if --directory option is given.",
)
@click.option(
"-i",
"--intermediate",
default=None,
type=click.Path(file_okay=True, resolve_path=True),
help="Intermediate password file. Ignored if --directory option is given.",
)
@click.option(
"-p",
"--provisioner",
default=None,
type=click.Path(file_okay=True, resolve_path=True),
help="Provisioner password file. Ignored if --directory option is given.",
)
@click.option(
"-d",
"--directory",
default=None,
type=click.Path(file_okay=True, resolve_path=True),
help="Directory to store passwords",
)
def passwords(
root: Optional[str],
intermediate: Optional[str],
provisioner: Optional[str],
directory: Optional[str],
) -> None:
"""Generate root, intermediate and provisioner passwords."""
password_characters = (
string.ascii_letters + string.digits + string.punctuation
)
def gen_pass() -> str:
"""Password generation function.
Returns:
str: Password string
"""
while True:
password = "".join(
secrets.choice(password_characters) for _ in range(24)
)
have_lower = any(c.islower() for c in password)
have_upper = any(c.isupper() for c in password)
have_digits = any(c.isdigit() for c in password)
if have_lower and have_upper and have_digits:
break
return password
if any(item is None for item in [root, intermediate, provisioner]):
if directory is None:
directory_path = Path.home() / ".step/secrets/passwords"
else:
directory_path = Path(directory)
if not directory_path.exists():
directory_path.mkdir(parents=True)
if root is None:
root_path = directory_path / "root_ca.txt"
else:
root_path = Path(root)
if intermediate is None:
intermediate_path = directory_path / "intermediate_ca.txt"
else:
intermediate_path = Path(intermediate)
if provisioner is None:
provisioner_path = directory_path / "provisioner.txt"
else:
provisioner_path = Path(provisioner)
root_path.write_text(gen_pass())
if root_path.exists():
click.secho("Root password: ", nl=False, fg="magenta")
click.secho(str(root_path), fg="green")
else:
click.secho("Unable to write to root password file", fg="red")
intermediate_path.write_text(gen_pass())
if intermediate_path.exists():
click.secho("Intermediate password: ", nl=False, fg="magenta")
click.secho(str(intermediate_path), fg="green")
else:
click.secho("Unable to write to intermediate password file", fg="red")
provisioner_path.write_text(gen_pass())
if provisioner_path.exists():
click.secho("Provisioner password: ", nl=False, fg="magenta")
click.secho(str(provisioner_path), fg="green")
else:
click.secho("Unable to write to provisioner password file", fg="red")
|
/scapy-man-0.3.3.tar.gz/scapy-man-0.3.3/scapy/cli/generator/password.py
| 0.770206 | 0.162214 |
password.py
|
pypi
|
"""Cloudflare Worker module to deploy Root certificate."""
from pathlib import Path
from typing import Optional, Union
from CloudflareAPI import Cloudflare
from CloudflareAPI.api import Worker as CFWorker
from CloudflareAPI.dataclass.namespace import Namespace
from CloudflareAPI.exceptions import CFError
class Worker:
"""Cloudflare Worker class which handle the deployment of Root certificate."""
WORKER_NS_NAME_KEY = "CA_CERT_STORE"
WORKER_TITLE_KEY = "CA_TITLE"
WORKER_FINGERPRINT_KEY = "ROOT_CA_FINGERPRINT"
WORKER_CA_URL_KEY = "ROOT_CA_URL"
def __init__(self, token: Optional[str] = None) -> None:
"""Initialise the Cloudflare API.
Args:
token (str): Optional argument to pass Cloudflare API Token
"""
self.api = Cloudflare(token=token)
self.metadata: Optional[CFWorker.Metadata] = None
def store(self, web_title: str, fingerprint: str, ca_url: str) -> None:
"""Store data to be published.
Args:
web_title (str): Title of the Worker
fingerprint (str): Fingerprint to attach to the worker
ca_url (str): CA URL to attach to the worker
"""
self.title, self.fingerprint, self.url = (
web_title,
fingerprint,
ca_url,
)
def get_metadata(self, namespace: Namespace) -> None:
"""Generate for the Cloudflare worker.
Args:
namespace (CloudflareAPI.dataclass.Namespace): Namespace instance of Cloudflare API.
"""
self.metadata = self.api.worker.Metadata()
self.metadata.add_variable(self.WORKER_TITLE_KEY, self.title)
self.metadata.add_variable(
self.WORKER_FINGERPRINT_KEY, self.fingerprint
)
self.metadata.add_variable(self.WORKER_CA_URL_KEY, self.url)
self.metadata.add_binding(self.WORKER_NS_NAME_KEY, namespace.id)
def loadCA(self, rootCA: Union[str, Path]) -> None:
"""Load the CA certificate and write to the Cloudflare KV Namespace.
Args:
rootCA (str): Root Certificate file location
"""
rootca: Optional[Union[bytes, str]] = None
rootCA_file = Path(rootCA) if not isinstance(rootCA, Path) else rootCA
try:
namespace = self.api.store.get_ns(self.WORKER_NS_NAME_KEY)
except CFError:
namespace = self.api.store.create(self.WORKER_NS_NAME_KEY)
try:
rootca = rootCA_file.read_text()
namespace.write("root_ca_format", "pem")
except UnicodeDecodeError:
rootca = rootCA_file.read_bytes()
namespace.write("root_ca_format", "der")
namespace.write("root_ca", rootca)
self.get_metadata(namespace)
def deploy(self, name: str, file: Union[str, Path]) -> str:
"""Deploy the worker in to Cloudflare Edge network.
Args:
name (str): Name of worker. This name will be reflected in the worker url.
file (str): Javascript file of the worker to deploy
Returns:
            str: URL of the worker deployed into the Cloudflare Edge network
"""
if self.metadata is not None:
worker_name = name.strip().lower()
worker_file = (
Path(file).resolve(strict=True)
if not isinstance(file, Path)
else file
)
if self.api.worker.upload(
name=worker_name,
file=worker_file,
metadata=self.metadata,
):
if self.api.worker.deploy(worker_name):
subdomain = self.api.worker.subdomain.get()
return f"https://{worker_name}.{subdomain}.workers.dev"
raise CFError("Deployment failed")
raise CFError("Metadata not found")
|
/scapy-man-0.3.3.tar.gz/scapy-man-0.3.3/scapy/core/worker.py
| 0.92323 | 0.185652 |
worker.py
|
pypi
|
import re
import struct
from scapy.compat import raw, orb
from scapy.layers.inet import TCP, TCPOptions
from scapy_p0f.utils import lparse, guess_dist
from scapy_p0f.consts import WinType
# Convert TCP option num to p0f (nop is handled separately)
tcp_options_p0f = {
2: "mss", # maximum segment size
3: "ws", # window scaling
4: "sok", # selective ACK permitted
5: "sack", # selective ACK (should not be seen)
8: "ts", # timestamp
}
# Signatures
class TCP_Signature(object):
__slots__ = ["olayout", "quirks", "ip_opt_len", "ip_ver", "ttl",
"mss", "win", "win_type", "wscale", "pay_class", "ts1"]
def __init__(self, olayout, quirks, ip_opt_len, ip_ver, ttl,
mss, win, win_type, wscale, pay_class, ts1):
self.olayout = olayout
self.quirks = quirks
self.ip_opt_len = ip_opt_len
self.ip_ver = ip_ver
self.ttl = ttl
self.mss = mss
self.win = win
self.win_type = win_type # None for packet signatures
self.wscale = wscale
self.pay_class = pay_class
self.ts1 = ts1 # None for base signatures
@classmethod
def from_packet(cls, pkt):
"""
Receives a TCP packet (assuming it's valid), and returns
a TCP_Signature object
"""
ip_ver = pkt.version
quirks = set()
def addq(name):
quirks.add(name)
# IPv4/IPv6 parsing
if ip_ver == 4:
ttl = pkt.ttl
ip_opt_len = (pkt.ihl * 4) - 20
if pkt.tos & (0x01 | 0x02):
addq("ecn")
if pkt.flags.evil:
addq("0+")
            if pkt.flags.DF:
                addq("df")
                # p0f quirk semantics: "id+" means DF is set but the IPID is
                # non-zero; "id-" means DF is not set but the IPID is zero
                if pkt.id:
                    addq("id+")
            elif pkt.id == 0:
                addq("id-")
else:
ttl = pkt.hlim
ip_opt_len = 0
if pkt.fl:
addq("flow")
if pkt.tc & (0x01 | 0x02):
addq("ecn")
# TCP parsing
tcp = pkt[TCP]
win = tcp.window
if tcp.flags & (0x40 | 0x80 | 0x01):
addq("ecn")
if tcp.seq == 0:
addq("seq-")
if tcp.flags.A:
if tcp.ack == 0:
addq("ack-")
elif tcp.ack:
addq("ack+")
if tcp.flags.U:
addq("urgf+")
elif tcp.urgptr:
addq("uptr+")
if tcp.flags.P:
addq("pushf+")
pay_class = 1 if tcp.payload else 0
# Manual TCP options parsing
mss = 0
wscale = 0
ts1 = 0
olayout = ""
optlen = (tcp.dataofs << 2) - 20
x = raw(tcp)[-optlen:] # raw bytes of TCP options
while x:
onum = orb(x[0])
if onum == 0:
x = x[1:]
olayout += "eol+%i," % len(x)
if x.strip(b"\x00"): # non-zero past EOL
addq("opt+")
break
if onum == 1:
x = x[1:]
olayout += "nop,"
continue
try:
olen = orb(x[1])
except IndexError: # no room for length field
addq("bad")
break
oval = x[2:olen]
if onum in tcp_options_p0f:
ofmt = TCPOptions[0][onum][1]
olayout += "%s," % tcp_options_p0f[onum]
optsize = 2 + struct.calcsize(ofmt) if ofmt else 2 # total len
if len(x) < optsize: # option would end past end of header
addq("bad")
break
if onum == 5:
if olen < 10 or olen > 34: # SACK length out of range
addq("bad")
break
else:
if olen != optsize: # length field doesn't fit option type
addq("bad")
break
if ofmt:
oval = struct.unpack(ofmt, oval)
if len(oval) == 1:
oval = oval[0]
if onum == 2:
mss = oval
elif onum == 3:
wscale = oval
if wscale > 14:
addq("exws")
elif onum == 8:
ts1 = oval[0]
if not ts1:
addq("ts1-")
if oval[1] and (tcp.flags.S and not tcp.flags.A):
addq("ts2+")
else: # Unknown option, presumably with specified size
if olen < 2 or olen > 40 or olen > len(x):
addq("bad")
break
x = x[olen:]
olayout = olayout[:-1]
return cls(olayout, quirks, ip_opt_len, ip_ver, ttl, mss, win, None, wscale, pay_class, ts1) # noqa: E501
@classmethod
def from_raw_sig(cls, sig_line):
"""
Parses a TCP sig line and returns a tuple consisting of a
TCP_Signature object and bad_ttl as bool
"""
ver, ttl, olen, mss, wsize, olayout, quirks, pclass = lparse(sig_line, 8) # noqa: E501
wsize, _, scale = wsize.partition(",")
ip_ver = -1 if ver == "*" else int(ver)
ttl, bad_ttl = (int(ttl[:-1]), True) if ttl[-1] == "-" else (int(ttl), False) # noqa: E501
ip_opt_len = int(olen)
mss = -1 if mss == "*" else int(mss)
if wsize == "*":
win, win_type = (0, WinType.ANY)
elif wsize[:3] == "mss":
win, win_type = (int(wsize[4:]), WinType.MSS)
elif wsize[0] == "%":
win, win_type = (int(wsize[1:]), WinType.MOD)
elif wsize[:3] == "mtu":
win, win_type = (int(wsize[4:]), WinType.MTU)
else:
win, win_type = (int(wsize), WinType.NORMAL)
wscale = -1 if scale == "*" else int(scale)
if quirks:
quirks = frozenset(q for q in quirks.split(","))
else:
quirks = frozenset()
pay_class = -1 if pclass == "*" else int(pclass == "+")
sig = cls(olayout, quirks, ip_opt_len, ip_ver, ttl, mss, win, win_type, wscale, pay_class, None) # noqa: E501
return sig, bad_ttl
def __str__(self):
quirks = ",".join(q for q in self.quirks)
fmt = "%i:%i+%i:%i:%i:%i,%i:%s:%s:%i"
s = fmt % (self.ip_ver, self.ttl, guess_dist(self.ttl),
self.ip_opt_len, self.mss, self.win, self.wscale,
self.olayout, quirks, self.pay_class)
return s
class HTTP_Signature(object):
__slots__ = ["http_ver", "hdr", "hdr_set", "habsent", "sw"]
def __init__(self, http_ver, hdr, hdr_set, habsent, sw):
self.http_ver = http_ver
self.hdr = hdr
self.hdr_set = hdr_set
self.habsent = habsent # None for packet signatures
self.sw = sw
@classmethod
def from_packet(cls, pkt):
"""
Receives an HTTP packet (assuming it's valid), and returns
a HTTP_Signature object
"""
http_payload = raw(pkt[TCP].payload)
crlfcrlf = b"\r\n\r\n"
crlfcrlfIndex = http_payload.find(crlfcrlf)
if crlfcrlfIndex != -1:
headers = http_payload[:crlfcrlfIndex + len(crlfcrlf)]
else:
headers = http_payload
headers = headers.decode() # XXX: Check if this could fail
first_line, headers = headers.split("\r\n", 1)
if "1.0" in first_line:
http_ver = 0
elif "1.1" in first_line:
http_ver = 1
else:
raise ValueError("HTTP version is not 1.0/1.1")
sw = ""
headers_found = []
hdr_set = set()
for header_line in headers.split("\r\n"):
name, _, value = header_line.partition(":")
if value:
value = value.strip()
headers_found.append((name, value))
hdr_set.add(name)
if name in ("User-Agent", "Server"):
sw = value
hdr = tuple(headers_found)
return cls(http_ver, hdr, hdr_set, None, sw)
@classmethod
def from_raw_sig(cls, sig_line):
"""
Parses an HTTP sig line and returns a HTTP_Signature object
"""
ver, horder, habsent, expsw = lparse(sig_line, 4)
http_ver = -1 if ver == "*" else int(ver)
# horder parsing - split by commas that aren't in []
new_horder = []
for header in re.split(r",(?![^\[]*\])", horder):
name, _, value = header.partition("=")
if name[0] == "?": # Optional header
new_horder.append((name[1:], value[1:-1], True))
else:
new_horder.append((name, value[1:-1], False))
hdr = tuple(new_horder)
hdr_set = frozenset(header[0] for header in hdr if not header[2])
habsent = frozenset(habsent.split(","))
return cls(http_ver, hdr, hdr_set, habsent, expsw)
def __str__(self):
# values that depend on the context are not included in the string
skipval = ("Host", "User-Agent", "Date", "Content-Type", "Server")
hdr = ",".join(n if n in skipval else "%s=[%s]" % (n, v) for n, v in self.hdr) # noqa: E501
fmt = "%i:%s::%s"
s = fmt % (self.http_ver, hdr, self.sw)
return s
# Records
class MTU_Record(object):
__slots__ = ["label_id", "mtu"]
def __init__(self, label_id, sig_line):
self.label_id = label_id
self.mtu = int(sig_line)
class TCP_Record(object):
__slots__ = ["label_id", "bad_ttl", "sig"]
def __init__(self, label_id, sig_line):
self.label_id = label_id
sig, bad_ttl = TCP_Signature.from_raw_sig(sig_line)
self.bad_ttl = bad_ttl
self.sig = sig
class HTTP_Record(object):
__slots__ = ["label_id", "sig"]
def __init__(self, label_id, sig_line):
self.label_id = label_id
self.sig = HTTP_Signature.from_raw_sig(sig_line)
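# --- Illustrative sketch (not part of the original module) ---
# A hedged example of parsing a raw p0f v3 TCP signature line with
# TCP_Signature.from_raw_sig. The line follows the
# "ver:ttl:olen:mss:wsize,scale:olayout:quirks:pclass" layout; the concrete
# values below are for illustration only.
#
# sig, bad_ttl = TCP_Signature.from_raw_sig(
#     "*:64:0:*:mss*10,0:mss,sok,ts,nop,ws:df,id+:0")
# sig.win_type == WinType.MSS    # window expressed as a multiple of the MSS
# sig.quirks                     # frozenset({'df', 'id+'})
# bad_ttl                        # False, since the TTL carries no trailing '-'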
|
/scapy-p0f-1.0.5.tar.gz/scapy-p0f-1.0.5/scapy_p0f/base_classes.py
| 0.48121 | 0.169028 |
base_classes.py
|
pypi
|
from typing import List
from scapy.packet import Packet
from six import PY2
from urwid import AttrMap, SimpleListWalker, CheckBox
from urwid.version import VERSION as URWID_VERSION
from .extended_listbox import ExtendedListBox
from .row_formatter import RowFormatter
class PacketListView(ExtendedListBox):
"""
Lists all the packets which have been sniffed so far
or were given in a list.
"""
def __init__(self, row_formatter):
# type: (RowFormatter) -> None
self.row_formatter = row_formatter
self.packets = [] # type: List[Packet]
super(PacketListView, self).__init__(True, SimpleListWalker([]))
def update_selected_packet(self):
# type: () -> None
text = self.row_formatter.format(self.focus.base_widget.tag)
self.focus.base_widget.set_label(text)
# noinspection PyProtectedMember
def _create_gui_packet(self, pkt):
# type: (Packet) -> CheckBox
text = self.row_formatter.format(pkt)
gui_packet = CheckBox(text)
# Unfortunately we need to access some protected variables here,
# to customize the underlying widgets
wrap = "clip" if PY2 and URWID_VERSION <= (2, 1, 1) else "ellipsis"
gui_packet._label.set_layout("left", wrap) # pylint: disable=protected-access
# The cursor of `urwid.SelectableIcon` doesn't take a color scheme.
# So just hide the cursor.
# len(text) + 1 hides the cursor
checked_state = gui_packet.states[True]
unchecked_state = gui_packet.states[False]
checked_state._cursor_position = len(checked_state.text) + 1 # pylint: disable=protected-access
unchecked_state._cursor_position = len(unchecked_state.text) + 1 # pylint: disable=protected-access
gui_packet.tag = pkt
return gui_packet
def add_packet(self, pkt):
# type: (Packet) -> None
"""
Creates and appends a Packet widget to the end of the list.
The cursor in front of the packet content is colored
in the default background color.
This way, it is invisible and only the cursor
in front of the packet in focus is colored.
:param pkt: packet, which is passed on from the sniffer
:type pkt: Packet
:return: None
"""
if not self.row_formatter.is_pkt_supported(pkt):
return
self.packets.append(pkt)
self.body.append(
AttrMap(self._create_gui_packet(pkt), None, "cyan"))
|
/scapy-packet_viewer-0.0.3.tar.gz/scapy-packet_viewer-0.0.3/scapy_packet_viewer/packet_list_view.py
| 0.843025 | 0.220794 |
packet_list_view.py
|
pypi
|
from ast import literal_eval
from typing import List, Type, Any, Union, Optional
from scapy.base_classes import SetGen
from scapy.config import conf
from scapy.fields import ConditionalField, Emph
from scapy.packet import Packet
from scapy.themes import BlackAndWhite
from scapy.utils import hexdump
import six
from urwid import Columns, SimpleListWalker, Text, connect_signal
from .details_view import DetailsView
from .extended_edit import ExtendedEdit
from .extended_listbox import ExtendedListBox
class EditView(DetailsView):
"""
Custom view which holds the output of Packet.show() as editable list and
shows a hexdump of the current selected Packet
"""
action_name = "Edit"
def __init__(self):
# type: () -> None
"""
        Initialize EditView
"""
self._current_packet = None # type: Optional[Packet]
self._show_text = ExtendedListBox(False, SimpleListWalker([]))
self._hex_text = ExtendedListBox(False, SimpleListWalker([]))
hexdump_str_width = 71
col = Columns([self._show_text, (hexdump_str_width, self._hex_text)],
dividechars=2)
super(EditView, self).__init__(col)
def update_packets(self, focused_packet, all_packets):
# type: (Packet, List[Packet]) -> None
self._update(focused_packet)
def _update(self, packet, force_update=False):
# type: (Packet, Optional[bool]) -> None
"""
Internal update function
:param packet: Packet which get displayed by this view
:param force_update: Forces re-rendering
"""
if packet == self._current_packet and not force_update:
return
self._current_packet = packet
show_text = self._show(packet) + [Text("")]
hexdump_text = hexdump(packet, dump=True)
# Keep an empty line as the last line. This gives a nice
# visual feedback that the end of the list is reached.
# For `show_text` this is given because it always ends with an "\n"
# For `hexdump_text` we add it manually
self._update_hexdump(hexdump_text.split("\n"))
self._update_show(show_text)
@staticmethod
def _build_command(target_type, string):
# type: (Type[Any], str) -> Any
"""
This method tries to build a value from a string for any type.
:param target_type: desired type of string
:param string: string that should be build to a value
:return: value
"""
try:
# For Python3 we need to add the "b" prefix for bytes
# Python2 does not need this
if target_type == bytes and six.PY3:
value = literal_eval(
'b"' + string[1:-1].replace('"', '\\"') + '"')
else:
value = literal_eval(string)
except (SyntaxError, ValueError):
            # Encapsulate the given string in quotes and parse it as a plain string
# Should always work except if the field doesn't accept a string
value = literal_eval('"' + string.replace('"', '\\"') + '"')
return value
def _edit_done_callback(self, packet, field_name, _edit_widget, new_text):
# type: (Packet, str, ExtendedEdit, str) -> None
"""
Gets called after a field has been edited. This method sets a new
value in the field of the current packet
:param packet: Packet where the field has to be updated
:param field_name: Destination field for the new value
:param _edit_widget: Edit widget which caused the callback
:param new_text: Text content of the Edit widget which should be set
as new field value
"""
old_type = type(packet.getfieldval(field_name))
value = self._build_command(old_type, new_text.strip())
if not EditView._is_valid_value(packet, field_name, value):
self._emit("notification",
"Invalid value.\nGiven type: %s\nExpected type: %s" %
(type(value).__name__, old_type.__name__))
else:
packet.setfieldval(field_name, value)
# show changes also in hexdump view
# Also "beautifies" output in show widget automatically
if self._current_packet:
self._update(self._current_packet, True)
self._emit("packet_modified")
@staticmethod
def _is_valid_value(packet, field_name, value):
# type: (Packet, str, Any) -> bool
"""
Checks if the value is valid for the field of a packet
:param packet: Packet where field should get a new value
:param field_name: Destination field for the value
:param value: Value to set in field
:return: Returns True if value can be set without Exception
"""
# noinspection PyBroadException
try:
clone = packet.copy()
clone.setfieldval(field_name, value)
clone.build()
return True
except Exception: # pylint: disable=broad-except
return False
def _update_show(self, lines):
# type: (List[Text]) -> None
"""
:param lines: Lines to display in show part of this view
"""
self._update_existing_lines(self._show_text, lines)
def _update_hexdump(self, lines):
# type: (List[str]) -> None
"""
:param lines: Lines to display in hexdump part of this view
"""
self._update_existing_lines(self._hex_text,
[Text(line) for line in lines])
@staticmethod
def _update_existing_lines(listbox, lines):
# type: (ExtendedListBox, List[Text]) -> None
"""
This method reuses existing lines.
If there are too many, they are stripped.
If there are too few, new ones are created.
        This also ensures that when a new packet is shown,
        the view does not "scroll" back to the top but keeps the current line.
:param listbox: ListBox which holds lines to update
:param lines: Lines to display
"""
# strip lines which are too much
del listbox.body[len(lines):]
for i, item in enumerate(lines):
if i < len(listbox.body):
# reuse line with urwid.Text
listbox.body[i] = item
else:
                # The previously shown packet had fewer lines than the new one
# Or it's the first Packet to be shown
listbox.body.append(item)
# pylint: disable=invalid-name, line-too-long
# noinspection PyProtectedMember,DuplicatedCode,SpellCheckingInspection
def _show(self, pkt, lvl="", label_lvl=""): # noqa: E501
# type: (Packet, str, str) -> List[Union[Text, ExtendedEdit]]
"""
Custom implementation of `Packet.show()`
Returns a list of widgets which represent the show output.
Lines with fields are editable.
:param pkt: the packet for which the show should be generated
:param str lvl: additional information about the layer lvl
:param str label_lvl: additional information about the layer fields
        :return: a hierarchical list of Text and ExtendedEdit widgets
"""
ct = BlackAndWhite()
s = "%s%s %s %s" % (label_lvl,
ct.punct("###["),
ct.layer_name(pkt.name),
ct.punct("]###"))
lines = [Text(s)]
for f in pkt.fields_desc:
if isinstance(f, ConditionalField) and not f.cond(pkt):
continue
if isinstance(f, Emph) or f in conf.emph:
ncol = ct.emph_field_name
vcol = ct.emph_field_value
else:
ncol = ct.field_name
vcol = ct.field_value
fvalue = pkt.getfieldval(f.name)
if isinstance(fvalue, Packet) or (f.islist and f.holds_packets and isinstance(fvalue, list)): # noqa: E501
s = "%s \\%-10s\\" % (label_lvl + lvl, ncol(f.name))
lines.append(Text(s))
fvalue_gen = SetGen(fvalue, _iterpacket=0)
for fvalue in fvalue_gen:
lines.extend(self._show(fvalue, label_lvl=label_lvl + lvl + " |")) # noqa: E501
else:
begn = "%s %-10s%s " % (label_lvl + lvl, ncol(f.name), ct.punct("="),) # noqa: E501
reprval = f.i2repr(pkt, fvalue)
if isinstance(reprval, str):
reprval = reprval.replace("\n", "\n" + " " * (len(label_lvl) + len(lvl) + len(f.name) + 4)) # noqa: E501
edit = ExtendedEdit(True, begn, vcol(reprval))
connect_signal(edit, "apply", self._edit_done_callback,
weak_args=[pkt], user_args=[f.name])
lines.append(edit)
if pkt.payload:
new_lines = self._show(pkt.payload, lvl=lvl + (" " * pkt.show_indent), label_lvl=label_lvl) # noqa: E501
lines.extend(new_lines)
return lines
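# --- Illustrative sketch (not part of the original module) ---
# _build_command falls back to parsing the input as a plain string whenever
# literal_eval cannot interpret it, so (hypothetical) field edits behave like:
#
# EditView._build_command(int, "0x0a")     # -> 10 (literal_eval understands int literals)
# EditView._build_command(str, "hello")    # -> "hello" (fallback: wrapped in quotes)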
|
/scapy-packet_viewer-0.0.3.tar.gz/scapy-packet_viewer-0.0.3/scapy_packet_viewer/edit_view.py
| 0.889295 | 0.202838 |
edit_view.py
|
pypi
|
from collections import OrderedDict
from typing import Tuple, Callable, List
from urwid import AttrMap, Button, Columns, Text
class ButtonBar(Columns):
def __init__(self, commands):
# type: (OrderedDict[str, Action]) -> None
"""
The commandline interface renders a set of buttons implemented
through Action objects. The key for each button is defined by the
key in the commands dict. The Action object delivers the text to
display and the function to execute on a key press.
:param commands: A dictionary to describe the supported keys. The key
of the dict maps to the key press, when the Action
is executed.
"""
self._actions = commands
self._key_button_map = OrderedDict((cmd[0], self._create_button(cmd))
for cmd in commands.items())
widgets = [(len(btn.get_label()) + 2, btn)
for btn in self._key_button_map.values()]
# Fill the rest of the row with the right color
widgets.append(AttrMap(Text(""), "cyan"))
super(ButtonBar, self).__init__(widgets)
def refresh(self):
# type: () -> None
"""
Refreshes the texts of the buttons.
"""
for action, btn in zip(self._actions.values(),
self._key_button_map.values()):
btn.set_label(("cyan", action.text))
def keypress(self, size, key):
# type: (int, str) -> None
"""
Handle editing keystrokes, return None to not forward key press.
:param size:
:param key: Name of key pressed.
"""
if key in self._actions:
self._execute_and_change_state(key)
def _execute_and_change_state(self, key):
# type: (str) -> None
"""
Executes action for a key and updates the according button text
:param key: Key to execute
"""
action = self._actions[key]
action.execute()
btn = self._key_button_map[key]
btn.set_label(("cyan", action.text))
# noinspection PyProtectedMember
def _create_button(self, cmd):
# type: (Tuple[str, Action]) -> Button
"""
Helper function to create a Button object for a command
:param cmd: Tuple of key and Action object
:return: Button for this Action
"""
key, action = cmd
btn = Button(("cyan", action.text),
on_press=lambda _sender, k:
self._execute_and_change_state(k),
user_data=key)
# We need to access the underlying Columns widget
cols = btn._w # pylint: disable=protected-access
# We don't want any dividing chars
cols.dividechars = 0
# Set the prefix and make it pack instead of "<" and fixed length
cols.contents[0] = (Text(key.upper()), cols.options("pack"))
# Remove the ">" behind the actual button text
del cols.contents[2]
# len(text) + 1 hides the cursor
cols.contents[1][0]._cursor_position = len(btn.label) + 1 # pylint: disable=protected-access
# Ensure buttons won't gain focus but they are still clickable
cols._selectable = False # pylint: disable=protected-access
return btn
class Action(object):
"""
Helper class to store a list of texts and functions. On every execute,
the internal index increases. The internal index points to the current
text and function. If the index points to the last function, the next
execution causes a roll-over to index zero.
"""
def __init__(self, texts, funcs, state_index=0):
# type: (List[str], List[Callable[[], None]], int) -> None # noqa: E501
"""
Initialize an Action object
:param texts: A list of texts. Has to have the same order as funcs.
:param funcs: A list of functions. Has to have the same order as texts.
:param state_index: initial index if necessary
"""
self._texts = texts
self._funcs = funcs
self._state_index = state_index
if len(self._texts) != len(self._funcs):
raise AssertionError("The lists texts and funcs need to have "
"the same length")
        if self._state_index >= len(self._texts):
            raise AssertionError("State index must be smaller than the length "
                                 "of texts or funcs")
def execute(self):
# type: () -> None
"""
Executes the function selected by the current index. Afterwards the
index is increased.
"""
self._funcs[self._state_index]()
self._state_index += 1
self._state_index %= len(self._funcs)
def reset(self):
# type: () -> None
"""
Resets internal index back to zero.
"""
self._state_index = 0
@property
def text(self):
# type: () -> str
"""
Get the text selected by the current index.
:return: text selected.
"""
text_width = 12
return self._texts[self._state_index].ljust(text_width)[:text_width]
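# --- Illustrative sketch (not part of the original module) ---
# An Action cycles through its texts and functions on every execute();
# the labels and callbacks below are assumptions for the example only.
#
# toggle = Action(["Pause", "Continue"],
#                 [lambda: print("pausing"), lambda: print("continuing")])
# toggle.text       # "Pause", padded/trimmed to 12 characters
# toggle.execute()  # prints "pausing", index advances to 1
# toggle.text       # "Continue", padded/trimmed to 12 characters
# toggle.execute()  # prints "continuing", index rolls over to 0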
|
/scapy-packet_viewer-0.0.3.tar.gz/scapy-packet_viewer-0.0.3/scapy_packet_viewer/button_bar.py
| 0.857709 | 0.399402 |
button_bar.py
|
pypi
|
from typing import Any, Tuple
from urwid import Edit, Canvas
class ExtendedEdit(Edit):
"""
A new signal "apply" is emitted by this Edit after Enter is pressed.
A new signal "exit" is emitted by this Edit after Escape is pressed.
It also takes care of resetting the text after losing focus.
"""
signals = ["apply", "exit"] + Edit.signals
def __init__(self, use_reset, *args, **kwargs):
# type: (bool, Any, Any) -> None
"""
Initialize ExtendedEdit
:param args: args for Edit
:param use_reset: whether a reset after losing focus is desired
:param kwargs: kwargs for Edit
"""
self._use_reset = use_reset
# Holds this widgets focus state from last rendering
self._had_focus = False
# The text the edit field contained before gaining the focus
self._old_value = ""
super(ExtendedEdit, self).__init__(*args, **kwargs)
def keypress(self, size, key):
# type: (Tuple[int, int], str) -> Any
"""
Custom implementation of keypress from Widget. Key-Presses to Enter
are handled by the edit. The apply signal is emitted on enter.
Other keys are not handled and forwarded.
:param size:
:param key: key which is pressed
:return: None if key is handled otherwise let the super class return
"""
if key == "enter":
            # Lose focus already here so that
            # the old value doesn't get applied in `render`
self._had_focus = False
self._emit("apply", self.edit_text)
return None
if key == "esc":
self._emit("exit")
return None
return super(ExtendedEdit, self).keypress(size, key)
def render(self, size, focus=False):
# type: (Tuple[int], bool) -> Canvas
"""
Custom implementation of render to reset to old value as soon as
the focus is lost.
"""
if self._use_reset:
if not self._had_focus and focus:
# we got the focus
# Cache original value
self._old_value = self.get_edit_text()
elif self._had_focus and not focus:
# We lost the focus
# Set edit_text to old one
self.edit_text = self._old_value
self._had_focus = focus
return super(ExtendedEdit, self).render(size, focus=focus)
|
/scapy-packet_viewer-0.0.3.tar.gz/scapy-packet_viewer-0.0.3/scapy_packet_viewer/extended_edit.py
| 0.816772 | 0.2258 |
extended_edit.py
|
pypi
|
from collections import defaultdict
from itertools import count
from typing import cast, Callable, Dict, List, Tuple, Optional, Any
from scapy.config import conf
from scapy.packet import Packet, Packet_metaclass
from .column_configuration import payload_column, repr_column
class RowFormatter(object):
"""
Helper class for row formatting of Packet fields
"""
def __init__(self, columns=None, basecls=None):
# type: (Optional[List[Tuple[str, int, Callable[[Packet], str]]]], Optional[Packet_metaclass]) -> None # noqa: E501 # pylint: disable=line-too-long
"""
Initialize RowFormatter
:param columns: List of column description tuples for
the generation of formatted rows
:param basecls: Packet_metaclass for evaluation if a certain Packet
is supported by this formatter
"""
self.basecls = basecls
self.columns = columns or self.get_all_columns()
self._format_string = self._create_format_string()
self._time = -1.0 # type: float
nr_messages = count()
self._id_map = \
defaultdict(lambda: next(nr_messages)) # type: Dict[int, int]
        '''
        Holds the mapping of a packet (keyed by its id()) to its sequential
        number. This ensures that a packet, even if "re-rendered", gets the same
        number again; this happens, for example, after editing a packet.
        '''
def is_pkt_supported(self, packet):
# type: (Packet) -> bool
"""
Evaluates if a packet is supported by this formatter
:param packet: Input packet
:return: True if supported
"""
return self.basecls is None or isinstance(packet, self.basecls)
def get_header_string(self):
# type: () -> str
"""
Based on the configured columns, this function returns a string for
the header column.
:return: Formatted string containing all column names
"""
cols = {name: name.upper() for name, _, _ in self.columns}
return self._format_string.format(**cols)
def format(self, packet):
# type: (Packet) -> str
"""
Returns a formatted string containing all desired values of a packet
:param packet: Packet containing all values
:return: Formatted string containing all values formatted in columns
"""
cols = {name: str(func(packet)) for name, _, func in self.columns}
return self._format_string.format(**cols)
def _create_format_string(self):
# type: () -> str
"""
Function to create a format string according to the configured columns
:return:
"""
format_string = ""
for name, width, _ in self.columns[:-1]:
format_string += \
"{" + name + ":" + str(width) + "." + str(width) + "} "
# Do not trim last column. Usually it's the data column
# so allow it to be as long as necessary
name = self.columns[-1][0]
format_string += "{" + name + "}"
return format_string
def get_all_columns(self):
# type: () -> List[Tuple[str, int, Callable[[Packet], str]]] # noqa: E501
"""
        Depending on whether a basecls filter is configured, this function returns
        either a standard column configuration which uses the packet's repr
        function, or a custom column configuration based on the basecls
:return: A default or a basecls specific column configuration
"""
if self.basecls is None:
return self.get_default_columns() + repr_column
config_columns = self.get_config_columns()
if config_columns is not None and len(config_columns) > 0:
return self.get_default_columns() + config_columns
return self.get_default_columns() + self.fields_to_columns() + \
payload_column
def get_default_columns(self):
# type: () -> List[Tuple[str, int, Callable[[Packet], str]]]
"""
Return the default column configuration
:return: The default column configuration
"""
return [
("NO", 5, lambda p: str(self._id_map[id(p)])),
("TIME", 11, self.relative_time)
]
def get_config_columns(self):
# type: () -> List[Tuple[str, int, Callable[[Packet], str]]]
"""
        Return all columns from Scapy's configuration, dependent on the basecls
:return: A columns configuration from
conf.contribs["packet_viewer_columns"] for the current basecls
if a configuration is present.
"""
if self.basecls is None:
return []
try:
config_dict = conf.contribs["packet_viewer_columns"]
value = config_dict.get(self.basecls.__name__, []) # type: List[Tuple[str, int, Callable[[Packet], str]]] # noqa: E501 # pylint: disable=line-too-long
return value
except KeyError:
return []
def fields_to_columns(self, width=12):
# type: (int) -> List[Tuple[str, int, Callable[[Any], str]]]
"""
        Returns a column configuration automatically deduced from the configured
        basecls. All fields of this Packet_metaclass will be returned
        :param width: The width of a field
        :return: An automatically generated column configuration
"""
columns = [] # type: List[Tuple[str, int, Callable[[Any], str]]]
if self.basecls is None:
return columns
for field_desc in self.basecls.fields_desc:
# Instantiate a value for the field to check its type
dummy_field_val = self.basecls().getfieldval(field_desc.name)
            # If the value is bytes, Python adds quotation marks to the repr;
            # text_to_repr removes them
# We use repr() over str() because otherwise byte values
# like 0x0a ('\n') would change the layout
field_name = str(field_desc.name)
if isinstance(dummy_field_val, bytes):
def callback(p, field=field_name):
# type: (Packet, str) -> str
return self.text_to_repr(p, field)
else:
def callback(p, field=field_name):
# type: (Packet, str) -> str
return self.field_to_repr(p, field)
columns.append((field_name, width, callback))
return columns
def relative_time(self, packet):
# type: (Packet) -> str
"""
Returns the relative time between the given packet and the first packet
ever received.
:param packet: Current Packet
:return: Time difference between received and first Packet
"""
if self._time == -1.0:
self._time = packet.time
return str(packet.time - self._time)
@staticmethod
def field_to_repr(p, name):
# type: (Packet, str) -> str
"""
Returns the value of a field
:param p: Packet containing the value
:param name: Field name of value to return
:return: Value of field
"""
repr_val = p.get_field(name).i2repr(p, p.getfieldval(name))
return cast(str, repr_val)
@staticmethod
def text_to_repr(p, name):
# type: (Packet, str) -> str
"""
Returns the value of a field without quote symbols
:param p: Packet containing the value
:param name: Field name of value to return
:return: Value of field
"""
return RowFormatter.field_to_repr(p, name)[1:-1]
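# --- Illustrative sketch (not part of the original module) ---
# Formatting rows with a custom column configuration; the IP packet and the
# column functions below are assumptions for the example only.
#
# from scapy.layers.inet import IP
# fmt = RowFormatter(columns=[("SRC", 15, lambda p: p.src),
#                             ("DST", 15, lambda p: p.dst)])
# print(fmt.get_header_string())                         # column names, padded
# print(fmt.format(IP(src="10.0.0.1", dst="10.0.0.2")))  # one formatted row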
|
/scapy-packet_viewer-0.0.3.tar.gz/scapy-packet_viewer-0.0.3/scapy_packet_viewer/row_formatter.py
| 0.906216 | 0.239594 |
row_formatter.py
|
pypi
|
from platform import platform
import re
from traceback import print_exc
from typing import Optional, Union, Iterable, List, Tuple, Callable, Dict, \
Any, Type
from urwid import MainLoop, connect_signal, raw_display
from scapy.automaton import ObjectPipe
from scapy.config import conf
from scapy.packet import Packet_metaclass, Packet
from scapy.supersocket import SuperSocket
from scapy.themes import BlackAndWhite
from scapy.plist import PacketList
from . import column_configuration # noqa: F401 # pylint: disable=unused-import
from .details_view import DetailsView
from .edit_view import EditView
from .main_window import MainWindow
from .pop_ups import show_question_pop_up, show_info_pop_up
from .row_formatter import RowFormatter
class ScreenWSL(raw_display.Screen):
def write(self, data):
# type: (str) -> None
"""
Write function for custom WSL screen. This replace urwid's SI/SO,
which produce artifacts under WSL
:param data: Some data to write to the raw_display.Screen
"""
if "microsoft" in platform().lower():
data = re.sub("[\x0e\x0f]", "", data)
super(ScreenWSL, self).write(data)
class Viewer(object):
"""
A packet viewer for Scapy. Based on urwid this class can visualize packets.
This viewer is extendable and customizable.
The following configurations are used internally:
conf.contribs["packet_viewer_custom_views"]
conf.contribs["packet_viewer_columns"]
Customize views:
Derive a custom view from DetailsView and implement
the desired behaviour. Add your view as list to the configuration.
conf.contribs["packet_viewer_custom_views"] = [myCustomView]
CustomViews can also be given to the Viewer directly:
```Viewer(source, views=[myCustomView])```
Customize columns:
The configuration of conf.contribs["packet_viewer_columns"] contains
a dictionary where the key is the basecls of a packet. This allows
you to customize the packet_viewer columns dependent on a basecls.
A column description is defined as a list of tuples where every
tuple defines a column.
The definition of a column consists of a string for the name, an int
for the column width and a function for the determination of
the content. Example:
```
src_col = ("SRC", 15, lambda p: p.src)
dst_col = ("DST", 15, lambda p: p.dst)
# assign column definitions to IP packets
conf.contribs["packet_viewer_columns"][IP] = [src_col, dst_col]
```
    Now the packet_viewer shows the default columns, followed by
    these custom columns, if a basecls is provided.
    An identical configuration can be given to the constructor to customize
the columns even more. Example:
```
Viewer(source, columns=[src_col, dst_col])
```
Attention: This requires that every packet from the source has the
attributes `src` and `dst`.
"""
def __init__(self, source, columns, basecls, views,
globals_dict, **kwargs_for_sniff):
# type: (Union[SuperSocket, Iterable[Packet]], Optional[List[Tuple[str, int, Callable[[Packet], str]]]], Optional[Packet_metaclass], Optional[List[Type[DetailsView]]], Optional[Dict[Any, Any]], Any) -> None # noqa: E501 # pylint: disable=line-too-long
"""
Initialization of a Viewer class. Customization and basecls filtering
can be chosen through the arguments
:param source: Any list of Packets or a Socket.
:param columns: A list of column configuration triples.
(<name>, <length>, <function returning content>).
See `column_configuration.py` for examples.
        :param basecls: A basecls for basecls filtering. If this argument is
                        provided, only packets of this class are shown.
                        If a basecls is provided, the Viewer will automatically
                        read the basecls-specific column configuration from
                        `conf.contribs["packet_viewer_columns"]`.
:param views: Custom or additional views.
:param kwargs_for_sniff: Arguments for sniff, if source is a socket.
"""
self.palette = [
("cyan", "black", "dark cyan"),
("green", "black", "dark green"),
("red", "white", "dark red"),
("default_bold", "bold", ""),
]
if views is None:
self.views = [EditView]
self.views += conf.contribs.get("packet_viewer_custom_views", [])
else:
self.views = views
for view in self.views:
self.palette += getattr(view, "palette", [])
self.source = source
self.globals_dict = globals_dict
self.kwargs_for_sniff = kwargs_for_sniff
self.formatter = RowFormatter(columns, basecls)
self.main_window = None # type: Optional[MainWindow]
self.loop = None # type: Optional[MainLoop]
self.msg_pipe = None # type: Optional[ObjectPipe]
def _connect_signals(self):
# type: () -> None
"""
Internal function to connect signals from MainWindow to PopUps
"""
if self.main_window is None:
return
connect_signal(
self.main_window, "question_popup",
lambda _, msg, cb: show_question_pop_up(self.loop, msg, cb))
connect_signal(
self.main_window, "info_popup",
lambda _, info: show_info_pop_up(self.loop, info))
connect_signal(
self.main_window, "msg_to_main_thread",
lambda _, *args: self.msg_pipe.send(args)) # type: ignore[union-attr] # noqa: E501 # pylint: disable=line-too-long
def run(self):
# type: () -> Tuple[PacketList, PacketList]
"""
Start Viewer
:return: Tuple of two PacketLists. First list contains all selected
Packets. Second list contains all Packets
"""
color_theme = conf.color_theme
conf.color_theme = BlackAndWhite()
try:
self.main_window = MainWindow(self.source, self.formatter,
self.views,
self.globals_dict,
**self.kwargs_for_sniff)
self.loop = MainLoop(self.main_window, palette=self.palette,
screen=ScreenWSL())
self.msg_pipe = ObjectPipe()
self.loop.event_loop.watch_file(self.msg_pipe.fileno(),
self._dispatcher)
self._initialize_warning()
self._connect_signals()
self.loop.run()
except Exception: # pylint: disable=broad-except
# We don't want the user session to break if the viewer crashes.
# So catch everything, but at least print the exception
print_exc()
return PacketList(), PacketList()
finally:
conf.color_theme = color_theme
if self.main_window and self.main_window.sniffer \
and self.main_window.sniffer.running:
self.main_window.sniffer.stop()
if self.msg_pipe:
self.msg_pipe.close()
return self.main_window.selected_packets, self.main_window.all_packets
def _dispatcher(self, *_args):
# type: (Optional[Any]) -> None
info = self.msg_pipe.recv() # type: ignore[union-attr]
msg = info[0]
if msg == "redraw":
# Through awaking the MainLoop it will enter idle soon again.
# This redraws automatically.
# So no need to start the drawing here a second time.
# See http://urwid.org/reference/main_loop.html#urwid.MainLoop.entering_idle # noqa: E501
pass
elif msg == "call":
func = info[1]
args = info[2:]
func(*args)
elif msg == "new_packet":
packet = info[1]
if self.main_window is not None:
self.main_window.new_packet(packet)
def _initialize_warning(self):
# type: () -> None
"""
This function allows an initial warning to be displayed
as soon as the viewer is opened.
"""
# The loop isn't running yet thus signals are not usable yet.
# So call show_info_pop_up directly instead of invoking it
# through emitting a signal.
if self.globals_dict is None:
info = "Without giving 'globals()', the Packet Viewer " \
"cannot know your currently imported classes. " \
"Thus the Craft&Send feature will be disabled."
show_info_pop_up(self.loop, info)
def viewer(source, columns=None, basecls=None, views=None, globals_dict=None,
**kwargs_for_sniff):
# type: (Union[SuperSocket, Iterable[Packet]], Optional[List[Tuple[str, int, Callable[[Packet], str]]]], Optional[Packet_metaclass], Optional[List[Type[DetailsView]]], Optional[Dict[Any, Any]], Any) -> Tuple[PacketList, PacketList] # noqa: E501 # pylint: disable=line-too-long
"""
Convenience function for Viewer
:param source: Socket or list of Packets
:param columns: A list of column configuration triples.
(<name>, <length>, <function returning content>).
See `column_configuration.py` for examples.
:param basecls: Packet_metaclass for basecls filtering and
column configuration determination
:param views: List of custom views
:param globals_dict: Necessary for crafting packets in this tool,
since this dictionary contains the imported
Packet classes.
:param kwargs_for_sniff: Parameters forwarded to sniff
if source is a socket
:return: Tuple of two PacketLists. First list contains all selected
Packets. Second list contains all Packets
"""
return Viewer(source, columns, basecls, views, globals_dict,
**kwargs_for_sniff).run()
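# --- Illustrative usage sketch (not part of the original module) ---
# Opening the viewer on a capture; the file name and filter class are
# assumptions for the example only.
#
# from scapy.layers.inet import IP
# from scapy.utils import rdpcap
# packets = rdpcap("capture.pcap")
# selected, all_packets = viewer(packets, basecls=IP, globals_dict=globals())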
|
/scapy-packet_viewer-0.0.3.tar.gz/scapy-packet_viewer-0.0.3/scapy_packet_viewer/viewer.py
| 0.848188 | 0.422713 |
viewer.py
|
pypi
|
from decimal import Decimal
import re
from typing import List, Optional, Tuple
from cantools.database.can import Message, Signal
from scapy.packet import Packet
import urwid
from . import message_layout_string as mls
from .decimal_edit import DecimalEdit
class SignalTableRow(urwid.Columns):
urwid_signals = [ 'message_updated' ]
C_IDENTIFIER_RE = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]{0,31}$")
LETTER_COLUMN_LABEL = "Letter"
LABEL_COLUMN_LABEL = "Label"
SIGNED_COLUMN_LABEL = "Signed?"
FLOAT_COLUMN_LABEL = "Float?"
OFFSET_COLUMN_LABEL = "Offset"
SCALE_COLUMN_LABEL = "Scale"
MINIMUM_COLUMN_LABEL = "Minimum"
MAXIMUM_COLUMN_LABEL = "Maximum"
UNIT_COLUMN_LABEL = "Unit"
DECODED_COLUMN_LABEL = "Decoded Value"
TABLE_COLUMN_INFO: List[Tuple[str, int]] = [
# (column label, (minimum) width to hold values of this column)
(LETTER_COLUMN_LABEL, 1),
(LABEL_COLUMN_LABEL, 30),
(SIGNED_COLUMN_LABEL, 7),
(FLOAT_COLUMN_LABEL, 7),
(OFFSET_COLUMN_LABEL, 5),
(SCALE_COLUMN_LABEL, 5),
(MINIMUM_COLUMN_LABEL, 5),
(MAXIMUM_COLUMN_LABEL, 5),
(UNIT_COLUMN_LABEL, 4),
(DECODED_COLUMN_LABEL, 10)
]
TABLE_COLUMN_LABELS = [ label for label, _ in TABLE_COLUMN_INFO ]
TABLE_COLUMNS: List[Tuple[str, int]] = [
# (column label, column width)
(label, max(len(label), min_width))
for label, min_width
in TABLE_COLUMN_INFO
]
TABLE_COLUMN_DIVIDECHARS = 2
TABLE_ROW_WIDTH = (len(TABLE_COLUMNS) - 1) * TABLE_COLUMN_DIVIDECHARS + sum(w for _, w in TABLE_COLUMNS)
def __init__(
self,
message: Message,
signal: Signal,
letter: str,
focused_packet: Optional[Packet] = None
) -> None:
cls = self.__class__
urwid.register_signal(cls, cls.urwid_signals)
self._message = message
self._signal = signal
self._letter = letter
self._focused_packet: Optional[Packet] = None
self._decoded_value = urwid.Text("")
# Label
signal_label_edit = urwid.Edit(edit_text=signal.name, wrap='clip')
urwid.connect_signal(signal_label_edit, 'postchange', self._update_signal_label)
# Signed?
signal_signed_checkbox = urwid.CheckBox("yes" if signal.is_signed else "no", state=signal.is_signed)
urwid.connect_signal(signal_signed_checkbox, 'postchange', self._update_signal_signed)
# Float?
signal_float_checkbox = urwid.CheckBox("yes" if signal.is_float else "no", state=signal.is_float)
urwid.connect_signal(signal_float_checkbox, 'postchange', self._update_signal_float)
# Offset
signal_offset_edit = DecimalEdit(initial=signal.decimal.offset, default=Decimal(0), wrap='clip')
urwid.connect_signal(signal_offset_edit, 'valuechange', self._update_signal_offset)
# Scale
signal_scale_edit = DecimalEdit(initial=signal.decimal.scale, default=Decimal(1), wrap='clip')
urwid.connect_signal(signal_scale_edit, 'valuechange', self._update_signal_scale)
# Minimum
self._signal_minimum_edit = DecimalEdit(initial=signal.decimal.minimum, wrap='clip')
urwid.connect_signal(
self._signal_minimum_edit,
'valuechange',
lambda _widget, _value: self._update_signal_bounds()
)
# Maximum
self._signal_maximum_edit = DecimalEdit(initial=signal.decimal.maximum, wrap='clip')
urwid.connect_signal(
self._signal_maximum_edit,
'valuechange',
lambda _widget, _value: self._update_signal_bounds()
)
# Unit
signal_unit_edit = urwid.Edit(edit_text=signal.unit or "", wrap='clip')
urwid.connect_signal(signal_unit_edit, 'postchange', self._update_signal_unit)
# Label -> Column mapping
column_widgets = {
cls.LETTER_COLUMN_LABEL: urwid.Text(letter),
cls.LABEL_COLUMN_LABEL: signal_label_edit,
cls.SIGNED_COLUMN_LABEL: signal_signed_checkbox,
cls.FLOAT_COLUMN_LABEL: signal_float_checkbox,
cls.OFFSET_COLUMN_LABEL: signal_offset_edit,
cls.SCALE_COLUMN_LABEL: signal_scale_edit,
cls.MINIMUM_COLUMN_LABEL: self._signal_minimum_edit,
cls.MAXIMUM_COLUMN_LABEL: self._signal_maximum_edit,
cls.UNIT_COLUMN_LABEL: signal_unit_edit,
cls.DECODED_COLUMN_LABEL: self._decoded_value
}
super().__init__(
[ (width, column_widgets[label]) for label, width in cls.TABLE_COLUMNS ],
dividechars=cls.TABLE_COLUMN_DIVIDECHARS
)
self.update(focused_packet)
def update(self, focused_packet: Optional[Packet], force: bool = False) -> None:
if focused_packet is not self._focused_packet or force:
self._focused_packet = focused_packet
# Update the "Decoded Value" cell if needed
if focused_packet is None:
self._decoded_value.set_text("")
else:
self._decoded_value.set_text("{} {}".format(
self._message.decode(focused_packet.data).get(self._signal.name, "n.A."),
self._signal.unit or ""
))
@property
def signal(self) -> Signal:
return self._signal
@property
def letter(self) -> str:
return self._letter
@classmethod
def _validate_c_identifier(cls, text: str) -> bool:
return cls.C_IDENTIFIER_RE.match(text) is not None
@classmethod
def _validate_char_string(cls, text: str) -> bool:
# All printable characters except for '"' are allowed.
return text.isprintable() and '"' not in text
def _signal_updated(self) -> None:
# Refresh and re-validate the message
self._message.refresh(strict=True)
# Force an update
self.update(self._focused_packet, force=True)
urwid.emit_signal(self, 'message_updated')
def _update_signal_label(self, widget: urwid.Edit, old_text: str) -> None:
text = widget.edit_text
# An empty label is a special case, as it has to be possible to fully delete a label before typing a
# new one, but an empty label is obviously invalid. The (slightly hacky) solution chosen here is to
# assume the (valid) signal name "__empty__" instead of an empty label.
if text == "":
text = "__empty__"
if self._validate_c_identifier(text):
self._signal.name = text
self._signal_updated()
else:
widget.edit_text = old_text
def _update_signal_signed(self, widget: urwid.CheckBox, _old_checked: bool) -> None:
checked = widget.get_state()
widget.set_label("yes" if checked else "no")
self._signal.is_signed = checked
self._signal_updated()
def _update_signal_float(self, widget: urwid.CheckBox, _old_checked: bool) -> None:
# TODO: Float signals are kind of a mystery. What about minimum/maximum/scale/offset/signedness etc.
# when dealing with float signals?
checked = widget.get_state()
if checked and self._signal.length not in [ 16, 32, 64 ]:
# Block setting the float flag if the signal is not of the required bit length.
# TODO: Some info about the blocking for the user would be cool here
widget.set_state(False, do_callback=False)
else:
widget.set_label("yes" if checked else "no")
self._signal.is_float = checked
self._signal_updated()
def _update_signal_offset(self, _widget: DecimalEdit, value: Optional[Decimal]) -> None:
if value is None:
            # This can never happen (the offset widget falls back to its default
            # of Decimal(0)); it is just here to satisfy the type checker.
            value = Decimal(0)
self._signal.decimal.offset = value
self._signal.offset = float(value)
self._signal_updated()
def _update_signal_scale(self, _widget: DecimalEdit, value: Optional[Decimal]) -> None:
if value is None:
            # This can never happen (the scale widget falls back to its default
            # of Decimal(1)); it is just here to satisfy the type checker.
            value = Decimal(1)
self._signal.decimal.scale = value
self._signal.scale = float(value)
self._signal_updated()
def _update_signal_bounds(self) -> None:
minimum = self._signal_minimum_edit.value
maximum = self._signal_maximum_edit.value
# Only update the signal's bounds if the minimum is smaller than the maximum (or one or both is not
# defined).
if minimum is None or maximum is None or minimum < maximum:
self._signal.decimal.minimum = minimum
self._signal.decimal.maximum = maximum
self._signal.minimum = None if minimum is None else float(minimum)
self._signal.maximum = None if maximum is None else float(maximum)
self._signal_updated()
def _update_signal_unit(self, widget: urwid.Edit, old_text: str) -> None:
text = widget.edit_text
if self._validate_char_string(text):
self._signal.unit = None if text == "" else text
self._signal_updated()
else:
widget.edit_text = old_text
class SignalTable(urwid.ListBox):
urwid_signals = [ 'focus_changed', 'message_updated' ]
TABLE_WIDTH = SignalTableRow.TABLE_ROW_WIDTH
focus: Optional[SignalTableRow]
def __init__(self, message: Optional[Message] = None, focused_packet: Optional[Packet] = None) -> None:
cls = self.__class__
urwid.register_signal(cls, cls.urwid_signals)
self._message: Optional[Message] = None
super().__init__(urwid.SimpleFocusListWalker([
# Initialized with just the table header
urwid.Columns(
[ (width, urwid.Text(label)) for label, width in SignalTableRow.TABLE_COLUMNS ],
dividechars=SignalTableRow.TABLE_COLUMN_DIVIDECHARS
)
]))
self.update(message, focused_packet)
def _focus_changed(self) -> None:
urwid.emit_signal(self, 'focus_changed')
def _message_updated(self) -> None:
# Simply forward the event
urwid.emit_signal(self, 'message_updated')
def update(
self,
message: Optional[Message],
focused_packet: Optional[Packet] = None,
force: bool = False
) -> None:
# If the message has changed, update the table
if message is not self._message or force:
self._message = message
# Disconnect the 'modified' signal before updating the table walker, as modification in code
# triggers events.
urwid.disconnect_signal(self.body, 'modified', self._focus_changed)
# Delete all rows except for the header
del self.body[1:]
if message is not None:
# Map signals to letters
signal_letter_mapping = mls.get_signal_letter_mapping(message)
# Build the signal rows
for signal, letter in sorted(signal_letter_mapping.items(), key=lambda x: x[1]):
row = SignalTableRow(message, signal, letter)
# Get notified about changes to the message
urwid.connect_signal(row, 'message_updated', self._message_updated)
self.body.append(row)
# Reconnect the signal as soon as the modifications are done
urwid.connect_signal(self.body, 'modified', self._focus_changed)
for row in self.body[1:]:
row.update(focused_packet)
@property
def focused_row(self) -> Optional[SignalTableRow]:
# Exclude the header by checking for a focus position of 0
if self.focus is None or self.focus_position == 0:
return None
return self.focus
|
/scapy-packet_viewer-0.0.3.tar.gz/scapy-packet_viewer-0.0.3/scapy_packet_viewer/custom_views/analyze_can_view/signal_table.py
| 0.873674 | 0.246913 |
signal_table.py
|
pypi
|
from typing import cast, List
import numpy as np
def count_bit_flips(bodies: List[bytes], size: int) -> List[int]:
"""
Args:
bodies: The bodies to analyze.
size: The number of bits in each body. All bodies must have the same bit size.
Returns:
The absolute TAV, i.e. for each bit position the absolute number of bit flips.
"""
bodies_np = np.array(bodies, dtype=np.uint64)
if size < 1:
raise ValueError("Bodies must consist of at least one bit.")
if size > 64:
raise ValueError("Bodies must consist of 64 bits at most.")
tav = np.zeros(size, dtype=np.uint64)
for bit in np.arange(size):
bits = (bodies_np >> bit) & 1
tav[bit] = np.sum(bits[1:] ^ bits[:-1])
return cast(List[int], tav.tolist())
def calculate_bit_flip_correlation(bodies: List[bytes], size: int) -> List[float]:
"""
Args:
bodies: The bodies to analyze.
size: The number of bits in each body. All bodies must have the same bit size.
Returns:
The Bit-Correlation-Over-Time. Like the derivative of the TAV, this metric relates adjacent bit
positions, thus the entry "0" belongs to the relation between bit positions 0 and 1. Note that entries
might be nan (= not a number), in case at least one of the correlated bits is constant. For example,
if bit 4 is constant, the entries "3" and "4" will be nan, because the correlation with a constant bit
is undefined.
"""
bodies_np = np.array(bodies, dtype=np.uint64)
# Free parameters!
bcot_max_samples = 64 * 1024
convolution_length = max(min(bodies_np.shape[0], bcot_max_samples) // 200, 64)
if size < 1:
raise ValueError("Bodies must consist of at least one bit.")
if size > 64:
raise ValueError("Bodies must consist of 64 bits at most.")
bodies_np = bodies_np[:bcot_max_samples]
# Note: this code works with temporary Python list, which are potential bottlenecks, but the
# lists only have one entry per bit position (minus one), so the worst case is 63 entries per
# list, which should not be an issue.
# Note: Variable names are chosen as per the paper that defines this algorithm.
b = bodies_np[1:] ^ bodies_np[:-1] # pylint: disable=invalid-name
b_t = np.array([ ((b >> col) & 1) for col in np.arange(size) ], dtype=np.uint8)
v_t = np.ones((size, convolution_length), dtype=np.uint8)
c_t = np.array([ np.convolve(b_t[row], v_t[row]) for row in np.arange(size) ])
bcot = np.array([ np.corrcoef(c_t[row], c_t[row + 1])[1][0] for row in np.arange(size - 1) ])
return cast(List[float], bcot.astype(np.float64).tolist())
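# --- Illustrative sketch (not part of the original module) ---
# Despite the List[bytes] annotation, np.array(..., dtype=np.uint64) only
# accepts integer values, so this hedged example passes the frame bodies as ints.
#
# bodies = [0b0001, 0b0011, 0b0010, 0b0110]   # four 4-bit bodies
# count_bit_flips(bodies, 4)                  # -> [1, 1, 1, 0] flips per bit position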
|
/scapy-packet_viewer-0.0.3.tar.gz/scapy-packet_viewer-0.0.3/scapy_packet_viewer/custom_views/analyze_can_view/utils.py
| 0.939941 | 0.789558 |
utils.py
|
pypi
|
from typing import Dict, List, Optional, Tuple
from cantools.database.can import Message, Signal
from cantools.database.utils import start_bit
def get_signal_letter_mapping(message: Message) -> Dict[Signal, str]:
"""
Assign each signal of a message a unique letter. The order in which letters are assigned to signals is
based on the order in which the signals are given by message._signals. Should produce the same assignment
every time given the same message.
Args:
message: The message.
Returns:
The mapping between the signals of the message and (unique) letters.
"""
# Mapping between signals and signal letters
signal_letter_mapping: Dict[Signal, str] = {}
# Populate the mappings
next_signal_letter_ord = ord('a')
for signal in message.signals:
next_signal_letter = chr(next_signal_letter_ord)
signal_letter_mapping[signal] = next_signal_letter
next_signal_letter_ord += 1
return signal_letter_mapping
def message_layout_string(message: Message, highlight: Optional[str] = None) -> str:
"""
This is a copy of the Message.layout_string method, adjusted to the needs of the AnalyzeCANView.
The output of the original layout_string method (using signal_names=True), given a DBC message that
consists of 8 bytes and has many signals, is too tall (in lines) for the AnalyzeCANView. Setting
signal_names=False is not a solution, as some sort of association between signals in the ASCII-art and
signal names is needed.
This copy of the layout_string method is adjusted to label signals right in the signal ASCII-art, using
lowercase letters from 'a' to 'z' in place of the original signal-starting x's. The mapping between the
letters and the signals is obtained by calling `get_signal_letter_mapping`.
Args:
message: The message to format.
highlight: The letter of the signal to highlight, or None.
Returns:
The message formatted as ASCII-art.
"""
# Mapping between signals and signal letters
signal_letter_mapping = get_signal_letter_mapping(message)
# A string containing all signal letters for convenience
all_signal_letters = ''.join(signal_letter_mapping.values())
def format_big() -> List[str]:
signals = []
for signal in message.signals:
if signal.byte_order != 'big_endian':
continue
# Small modification here to use the signal letter for the tail instead of 'x' and use '=' instead
# of '-' for highlighted signals.
signal_letter = signal_letter_mapping[signal]
dash = '=' if signal_letter == highlight else '-'
formatted = start_bit(signal) * ' '
formatted += '<{}{}'.format((3 * signal.length - 2) * dash, signal_letter)
signals.append(formatted)
return signals
def format_little() -> List[str]:
signals = []
for signal in message.signals:
if signal.byte_order != 'little_endian':
continue
# Small modification here to use the signal letter for the tail instead of 'x' and use '=' instead
# of '-' for highlighted signals.
signal_letter = signal_letter_mapping[signal]
dash = '=' if signal_letter == highlight else '-'
formatted = signal.start * ' '
formatted += '{}{}<'.format(signal_letter, (3 * signal.length - 2) * dash)
end = signal.start + signal.length
if end % 8 != 0:
formatted += (8 - (end % 8)) * ' '
formatted = ''.join(formatted[i:i + 24][::-1] for i in range(0, len(formatted), 24))
signals.append(formatted)
return signals
def format_byte_lines() -> Tuple[List[str], int, int]:
# Signal lines.
signals = format_big() + format_little()
if len(signals) > 0:
length = max(len(signal) for signal in signals)
if length % 24 != 0:
length += (24 - (length % 24))
signals = [ signal + (length - len(signal)) * ' ' for signal in signals ]
# Signals union line.
signals_union = ''
for chars in zip(*signals):
head = chars.count('<')
dash = chars.count('-') + chars.count('=')
# Modified to detect signal letters as tails instead of 'x'
tail = sum(chars.count(letter) for letter in all_signal_letters)
# Little modification of the original code to find the union char more easily
non_space_chars = list(filter(lambda char: char != ' ', chars))
if head + dash + tail > 1:
signals_union += 'X' # TODO: This swallows tails
else:
if len(non_space_chars) == 0:
signals_union += ' '
else:
signals_union += non_space_chars[0]
# Split the signals union line into byte lines, 8 bits per line.
byte_lines = [ signals_union[i:(i + 24)] for i in range(0, len(signals_union), 24) ]
unused_byte_lines = (message.length - len(byte_lines))
if unused_byte_lines > 0:
byte_lines += unused_byte_lines * [24 * ' ']
# Insert bits separators into each byte line.
lines = []
for byte_line in byte_lines:
line = ''
prev_byte = None
for i in range(0, 24, 3):
byte_triple = byte_line[i:i + 3]
if i == 0:
line += '|'
elif byte_triple[0] in ' <>' + all_signal_letters:
# Detecting signal letters instead of 'x' ^
line += '|'
elif byte_triple[0] == 'X':
if prev_byte == 'X':
line += 'X'
elif prev_byte == '-':
line += '-'
elif prev_byte == '=':
line += '='
else:
line += '|'
elif byte_triple[0] == '=':
line += '='
else:
line += '-'
line += byte_triple
prev_byte = byte_triple[2]
line += '|'
lines.append(line)
# Add byte numbering.
number_width = len(str(len(lines))) + 4
number_fmt = '{{:{}d}} {{}}'.format(number_width - 1)
lines = [ number_fmt.format(number, line) for number, line in enumerate(lines) ]
return lines, len(lines), number_width
def add_header_lines(lines: List[str], number_width: int) -> List[str]:
# Modified to use less rows by moving the "Bit" label next to the numbers.
return [
"Bit".rjust(number_width, ' ') + ' 7 6 5 4 3 2 1 0',
number_width * ' ' + '+---+---+---+---+---+---+---+---+'
] + lines
def add_horizontal_lines(byte_lines: List[str], number_width: int) -> List[str]:
padding = number_width * ' '
lines = []
for byte_line in byte_lines:
lines.append(byte_line)
lines.append(padding + '+---+---+---+---+---+---+---+---+')
return lines
def add_y_axis_name(lines: List[str]) -> List[str]:
number_of_matrix_lines = (len(lines) - 3)
if number_of_matrix_lines < 5:
lines += (5 - number_of_matrix_lines) * [ ' ' ]
start_index = 4 + ((number_of_matrix_lines - 4) // 2 - 1)
# Modified to start at 0 minimum instead of 4, due to the lower number of rows required for the "Bit"
# label
start_index = max(start_index, 0)
axis_lines = start_index * [ ' ' ]
axis_lines += [ ' B', ' y', ' t', ' e' ]
axis_lines += (len(lines) - start_index - 4) * [ ' ' ]
return [ axis_line + line for axis_line, line in zip(axis_lines, lines) ]
# All signal name labelling code was removed.
lines, _, number_width = format_byte_lines()
lines = add_horizontal_lines(lines, number_width)
lines = add_header_lines(lines, number_width)
lines = add_y_axis_name(lines)
lines = [ line.rstrip() for line in lines ]
return '\n'.join(lines)
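# --- Illustrative sketch (not part of the original module) ---
# Rendering the bit layout of a hypothetical two-signal CAN message; the
# cantools constructor arguments below are assumptions of this example.
#
# from cantools.database.can import Message, Signal
# msg = Message(frame_id=0x123, name="Example", length=2, signals=[
#     Signal(name="speed", start=0, length=12),
#     Signal(name="flags", start=12, length=4),
# ])
# letters = get_signal_letter_mapping(msg)
# print(message_layout_string(msg, highlight=letters[msg.signals[0]]))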
|
/scapy-packet_viewer-0.0.3.tar.gz/scapy-packet_viewer-0.0.3/scapy_packet_viewer/custom_views/analyze_can_view/message_layout_string.py
| 0.869035 | 0.743541 |
message_layout_string.py
|
pypi
|
[](https://api.travis-ci.org/tintinweb/scapy-ssl_tls.svg?branch=master)
SSL/TLS layers for scapy the interactive packet manipulation tool.
Scapy-SSL/TLS
=============
SSL/TLS and DTLS layers and TLS utility functions for [Scapy](http://www.secdev.org/projects/scapy/).
An offensive stack for SSLv2, SSLv3 (TLS), TLS, DTLS penetration testing providing easy access to packet crafting, automatic dissection, encryption, decryption, session tracking, basic TLS state machines, automated handshakes, TLSSocket abstraction, cryptography containers, predefined hooks, SSL sniffing including minimalistic PCAP stream decryption (RSA_WITH_\*), fuzzing and security scanning (*Renegotiation, Heartbleed, Poodle, Logjam/Freak, DROWN, various Buffer overflows, ...*).
| branch | release status |
|---------------|----------|
| [v1.2.x](https://github.com/tintinweb/scapy-ssl_tls/releases) | :heavy_check_mark: maintenance: only bug-fixes will be released |
| [v2.x](https://github.com/tintinweb/scapy-ssl_tls/releases) | :warning: experimental: not fully backwards compatible with v1.x due to interface changes |
Features
---------
* Protocol Support
* TLS 1.3 draft 18
* TLS 1.2
* TLS 1.1
* TLS 1.0
* SSLv3/TLS Records
* SSLv2 Handshake
* DTLS Records
* TLS Session Context
* Session Tracking
* Key sniffing (master_key, ...)
* Client and Server support
* Sniffer / PCAP processor and decryptor
* State Machines
* TLS Client Scapy Automata
* TLS Server Scapy Automata
Installation
------------
##### Option 1: pip - download latest release from the python package index
pip install scapy-ssl_tls
##### Option 2: from source
pip install -r requirements.txt
python setup.py install
##### Option 3: manual installation
1) install requirements from requirements.txt
2) locate *< scapy >* installation directory: `python -c "import scapy; print scapy.__file__"`
3) copy scapy_ssl_tls/* to *< scapy >*/layers/
4) modify *< scapy >*/config.py to autoload SSL/TLS
```diff
@@ -373,3 +373,3 @@
load_layers = ["l2", "inet", "dhcp", "dns", "dot11", "gprs", "hsrp", "inet6", "ir", "isakmp", "l2tp",
- "mgcp", "mobileip", "netbios", "netflow", "ntp", "ppp", "radius", "rip", "rtp",
+ "mgcp", "mobileip", "netbios", "netflow", "ntp", "ppp", "radius", "rip", "rtp","ssl_tls",
"sebek", "skinny", "smb", "snmp", "tftp", "x509", "bluetooth", "dhcp6", "llmnr", "sctp", "vrrp" ]
```
##### verify installation:
```python
#> python
>>> from scapy_ssl_tls.ssl_tls import TLS
>>> TLS
<class 'scapy_ssl_tls.ssl_tls.SSL'>
#> scapy # via site-packages
>>> from scapy_ssl_tls.ssl_tls import TLS
>>> TLS
<class 'scapy_ssl_tls.ssl_tls.SSL'>
#> scapy # with layers autoloaded via config.py
>>> SSL
<class 'scapy.layers.ssl_tls.SSL'>
>>> TLS
<class 'scapy.layers.ssl_tls.SSL'>
>>> TLSRecord
<class 'scapy.layers.ssl_tls.TLSRecord'>
```
Troubleshooting
-----------
**Q:** `sessionctx_sniffer.py` does not seem to detect `SSL/TLS` or does not show any sniffed `SSL/TLS` sessions.
**A:** This is a problem caused by the import magic in `sessionctx_sniffer.py`, where the example might mix up imports from the project directory with the ones installed with `pip` or via `setup.py install`. Make sure to update to `>=v1.2.3`, or run `sessionctx_sniffer.py` from a different directory, or uninstall scapy-ssl_tls to use it directly from the project directory, or remove the `from scapy_ssl_tls.ssl_tls import *` import lines from the example.
**Note:** This has been addressed with `>=v1.2.3` where the system-wide import has preference.
**Q:** `sessionctx_sniffer.py` does not seem to dissect large `SSL/TLS` records properly.
**A:** In order to fully reconstruct *sniffed* `SSL/TLS` records one needs to `defragment` the sniffed IP packets and `reassemble` them into TCP segments. Since TCP stream reassembly is not an easy task (retransmissions, out-of-order segments, ...) - and therefore out of scope for this project - the `sessionctx_sniffer.py` example implements a very limited TCP stream reassembly algorithm that only tries to reconstruct consecutive segments, without taking into account any type of flow control (ordering, retransmissions, ...).
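For illustration, here is a minimal sketch of what such a "consecutive segments only" reassembly amounts to (the helper name and parameters below are made up for this example and are not the code used in `sessionctx_sniffer.py`): server-side TCP payloads are appended only while the sequence numbers line up, and any gap or retransmission simply ends the stream.
```python
from scapy.all import rdpcap, IP, TCP

def naive_reassembly(pcap_file, server_ip, server_port=443):
    """Concatenate consecutive TCP payloads sent by the server - nothing more."""
    stream = b""
    expected_seq = None
    for pkt in rdpcap(pcap_file):
        if not (IP in pkt and TCP in pkt):
            continue
        if pkt[IP].src != server_ip or pkt[TCP].sport != server_port:
            continue
        payload = bytes(pkt[TCP].payload)
        if not payload:
            continue
        if expected_seq is None or pkt[TCP].seq == expected_seq:
            stream += payload
            expected_seq = pkt[TCP].seq + len(payload)
        else:
            break  # out-of-order or retransmitted segment: give up instead of reordering
    return stream
```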
## Examples
##### Heartbleed Record
```python
==============================================================================
>>> (TLSRecord(version="TLS_1_1")/TLSHeartBeat(length=2**14-1,data='bleed...')).show()
###[ TLS Record ]###
content_type= heartbeat
version= TLS_1_1
length= None
###[ TLS Extension HeartBeat ]###
type= request
length= 16383
data= 'bleed...'
padding= ''
```
##### Heartbleed Attack
```python
import scapy
from scapy.layers.ssl_tls import *
import socket
target = ('target.local',443)
# create tcp socket
s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
s.connect(target)
# start the handshake with a minimal ClientHello
p = TLSRecord(version="TLS_1_1")/TLSHandshake()/TLSClientHello(version="TLS_1_1")
s.sendall(str(p))
s.recv(8192)
# heartbeat with a declared length (2**14-1) far larger than the actual payload
p = TLSRecord(version="TLS_1_1")/TLSHeartBeat(length=2**14-1,data='bleed...')
s.sendall(str(p))
resp = s.recv(8192)
print "resp: %s"%repr(resp)
s.close()
```
##### Dissect TLSClientHello (pcap)
```python
>>> rdpcap("a.cap")[3].show()
###[ Ethernet ]###
dst= d0:ae:ec:c3:6e:d4
src= f0:1f:af:1c:b6:01
type= 0x800
###[ IP ]###
version= 4L
ihl= 5L
tos= 0x0
len= 257
id= 12457
flags= DF
frag= 0L
ttl= 128
proto= tcp
chksum= 0x5b97
src= 192.168.2.45
dst= 216.58.210.166
\options\
###[ TCP ]###
sport= 54988
dport= https
seq= 2403802801L
ack= 3671968520L
dataofs= 5L
reserved= 0L
flags= PA
window= 64350
chksum= 0x210e
urgptr= 0
options= []
###[ SSL/TLS ]###
\records\
|###[ TLS Record ]###
| content_type= handshake
| version= TLS_1_0
| length= 0xd4
|###[ TLS Handshake ]###
| type= client_hello
| length= 0xd0
|###[ TLS Client Hello ]###
| version= TLS_1_2
| gmt_unix_time= 3242904930L
| random_bytes= 'x"W\xe6\xfd\x97\xb7\xaf \xda\x12c\x8c\x07 o\xe3\th\xc3\xc1\xe0\xe3C\xe4\x00\xc6\xc7'
| session_id_length= 0x0
| session_id= ''
| cipher_suites_length= 0x28
| cipher_suites= ['ECDHE_ECDSA_WITH_AES_128_GCM_SHA256', 'ECDHE_RSA_WITH_AES_128_GCM_SHA256', 'DHE_RSA_WITH_AES_128_GCM_SHA256', '0xcc14', '0xcc13', 'ECDHE_ECDSA_WITH_AES_256_CBC_SHA', 'ECDHE_ECDSA_WITH_AES_128_CBC_SHA', 'ECDHE_RSA_WITH_AES_128_CBC_SHA', 'ECDHE_RSA_WITH_AES_256_CBC_SHA', 'ECDHE_ECDSA_WITH_RC4_128_SHA', 'ECDHE_RSA_WITH_RC4_128_SHA', 'DHE_RSA_WITH_AES_128_CBC_SHA', 'DHE_DSS_WITH_AES_128_CBC_SHA', 'DHE_RSA_WITH_AES_256_CBC_SHA', 'RSA_WITH_AES_128_GCM_SHA256', 'RSA_WITH_AES_128_CBC_SHA', 'RSA_WITH_AES_256_CBC_SHA', 'RSA_WITH_3DES_EDE_CBC_SHA', 'RSA_WITH_RC4_128_SHA', 'RSA_WITH_RC4_128_MD5']
| compression_methods_length= 0x1
| compression_methods= ['NULL']
| extensions_length= 0x7f
| \extensions\
| |###[ TLS Extension ]###
| | type= server_name
| | length= 0x17
| |###[ TLS Extension Servername Indication ]###
| | length= 0x15
| | \server_names\
| | |###[ TLS Servername ]###
| | | type= host
| | | length= 0x12
| | | data= 'ad.doubleclick.net'
| |###[ TLS Extension ]###
| | type= renegotiation_info
| | length= 0x1
| |###[ TLS Extension Renegotiation Info ]###
| | length= 0x0
| | data= ''
| |###[ TLS Extension ]###
| | type= supported_groups
| | length= 0x8
| |###[ TLS Extension Elliptic Curves ]###
| | length= 0x6
| | elliptic_curves= ['secp256r1', 'secp384r1', 'secp521r1']
| |###[ TLS Extension ]###
| | type= ec_point_formats
| | length= 0x2
| |###[ TLS Extension EC Points Format ]###
| | length= 0x1
| | ec_point_formats= ['uncompressed']
| |###[ TLS Extension ]###
| | type= SessionTicket TLS
| | length= 0x0
| |###[ TLS Extension ]###
| | type= next_protocol_negotiation
| | length= 0x0
| |###[ TLS Extension ]###
| | type= application_layer_protocol_negotiation
| | length= 0x1a
| |###[ TLS Extension Application-Layer Protocol Negotiation ]###
| | length= 0x18
| | \protocol_name_list\
| | |###[ TLS ALPN Protocol ]###
| | | length= 0x8
| | | data= 'spdy/3.1'
| | |###[ TLS ALPN Protocol ]###
| | | length= 0x5
| | | data= 'h2-14'
| | |###[ TLS ALPN Protocol ]###
| | | length= 0x8
| | | data= 'http/1.1'
| |###[ TLS Extension ]###
| | type= 0x7550
| | length= 0x0
| |###[ TLS Extension ]###
| | type= status_request
| | length= 0x5
| |###[ Raw ]###
| | load= '\x01\x00\x00\x00\x00'
| |###[ TLS Extension ]###
| | type= signed_certificate_timestamp
| | length= 0x0
| |###[ TLS Extension ]###
| | type= signature_algorithms
| | length= 0x12
| |###[ TLS Extension Signature And Hash Algorithm ]###
| | length= 0x10
| | \algs\
| | |###[ TLS Signature Hash Algorithm Pair ]###
| | | hash_alg= sha256
| | | sig_alg= rsa
| | |###[ TLS Signature Hash Algorithm Pair ]###
| | | hash_alg= sha384
| | | sig_alg= rsa
| | |###[ TLS Signature Hash Algorithm Pair ]###
| | | hash_alg= sha1
| | | sig_alg= rsa
| | |###[ TLS Signature Hash Algorithm Pair ]###
| | | hash_alg= sha256
| | | sig_alg= ecdsa
| | |###[ TLS Signature Hash Algorithm Pair ]###
| | | hash_alg= sha384
| | | sig_alg= ecdsa
| | |###[ TLS Signature Hash Algorithm Pair ]###
| | | hash_alg= sha1
| | | sig_alg= ecdsa
| | |###[ TLS Signature Hash Algorithm Pair ]###
| | | hash_alg= sha256
| | | sig_alg= dsa
| | |###[ TLS Signature Hash Algorithm Pair ]###
| | | hash_alg= sha1
| | | sig_alg= dsa
```
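The dissected layers can also be accessed programmatically. A short hedged example (it assumes the same `a.cap` capture and that packet 3 carries the ClientHello, as in the dump above; use `from scapy_ssl_tls.ssl_tls import ...` instead if the layers were installed via pip and not copied into scapy):
```python
from scapy.all import rdpcap
from scapy.layers.ssl_tls import TLSClientHello

pkt = rdpcap("a.cap")[3]
if pkt.haslayer(TLSClientHello):
    hello = pkt[TLSClientHello]
    print(hello.version)        # offered protocol version, e.g. TLS_1_2
    print(hello.cipher_suites)  # list of offered cipher suites
    print(hello.extensions)     # TLSExtension list (SNI, ALPN, ...)
```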
##### Full Handshake with Application Data (DHE_RSA_WITH_AES_128_CBC_SHA)
see /examples/full_rsa_connection_with_application_data.py
```python
# python examples/full_rsa_connection_with_application_data.py localhost 443
Connected to server: ('localhost', 443)
###[ SSL/TLS ]###
\records \
|###[ TLS Record ]###
| content_type= handshake
| version = TLS_1_1
| length = 0x2a
|###[ TLS Handshake ]###
| type = server_hello
| length = 0x26
|###[ TLS Server Hello ]###
| version = TLS_1_1
| gmt_unix_time= 1439578475
| random_bytes= 'S-\x0f\x1bt\x95\xcc\xa9wwI\xb9\xf5\x10\x12\x11*\x82%\xdd\xb6\x1e\xc0b\xdc\xac\x9b'
| session_id_length= 0x0
| session_id= ''
| cipher_suite= DHE_RSA_WITH_AES_128_CBC_SHA
| compression_method= NULL
| \extensions\
|###[ TLS Record ]###
| content_type= handshake
| version = TLS_1_1
| length = 0x2de
|###[ TLS Handshake ]###
| type = certificate
| length = 0x2da
|###[ TLS Certificate List ]###
| length = 0x2d7
| \certificates\
| |###[ TLS Certificate ]###
| | length = 0x2d4
| | \data \
| | |###[ X509Cert ]###
| | | version = <ASN1_INTEGER[2L]>
| | | sn = <ASN1_INTEGER[14155341744006398450L]>
| | | sign_algo = <ASN1_OID['.1.2.840.113549.1.1.5']>
| | | sa_value = <ASN1_NULL[0L]>
| | | \issuer \
| | | |###[ X509RDN ]###
| | | | oid = <ASN1_OID['.2.5.4.3']>
| | | | value = <ASN1_PRINTABLE_STRING['localhost.localdomain']>
| | | not_before= <ASN1_UTC_TIME['130425105002Z']>
| | | not_after = <ASN1_UTC_TIME['230423105002Z']>
| | | \subject \
| | | |###[ X509RDN ]###
| | | | oid = <ASN1_OID['.2.5.4.3']>
| | | | value = <ASN1_PRINTABLE_STRING['localhost.localdomain']>
| | | pubkey_algo= <ASN1_OID['.1.2.840.113549.1.1.1']>
| | | pk_value = <ASN1_NULL[0L]>
| | | pubkey = <ASN1_BIT_STRING["\x000\x82\x01\n\x02\x82\x01\x01\x00\xdcS\xa3%U\r\xe0\xb3\xab5=$'\x8d\x13\x95cp\x0c\xe2p\xb5\x0e\xe3J\x1fy\x7f\x876\x9cH\xd8Z\x8e\x1c\x04\xc4C\x8e<\x1a\xd1\x90\xbdm\xaa\x08ku<Tw\t\xbd{\xb7wZm\x9cmW\\o\x9dw\xdf\xa3\xe7}\xac!:\x150\xb7\x98lCA\xec\x18\x97\xba#B\x8b\xa1c\xd8aw\xbb\xc6\xc4\x0fbs\x87eT<E\xbf\r\x92\xfc\x8b}7b7\xf12\x19(\x95y+\x12oiW4\xd7\xf5\x06\xf2G\xf2\x15\xfc\xf6\xa6Y\x83\x11\xc7P\\'\x8b\xd2\x96\xd0\xa2\xb51\xb3\x00N\xb9s\\\x03\x95\xb0\x12\xe1l\x9d\x83\x92uU\x9d\xbd\xdct}@6\r\xbb\xc9\xea@S\xf4D\xbe\x93\x99`xUjF.M\xd8\xbc\xfc\xdb 1\xaa{;\xf3\xec)1\xa9\xe4\xfapl\x18\x07O\x88Y\xc8\xed\xb63\xf2\x7f\xe2~g\xe7\xf9\xc4L\x9d\xcbg\xda\xdf\x1e5\xb3C\x07\xeav\xf0\x13m]\x94\xdaY\xc8\xc3?\x99\xb6\xb6\xb5\xc5bM\x02\x03\x01\x00\x01"]>
| | | \x509v3ext \
| | | |###[ X509v3Ext ]###
| | | | val = <ASN1_SEQUENCE[[<ASN1_OID['.2.5.29.19']>, <ASN1_STRING['0\x00']>]]>
| | | sign_algo2= <ASN1_OID['.1.2.840.113549.1.1.5']>
| | | sa2_value = <ASN1_NULL[0L]>
| | | signature = <ASN1_BIT_STRING['\x00X\xaf\xa2B\xb4c\x83}S\x06\x07\xb7\xb6\xa4nT\xeeAS\xe0\x93\x81\x820\x9c\x92\x16\xb3H\xd0\x11Z\x02\\g|\x9f\x0b\x8f\x96\x82\x1a3\x8d\xe1.3\xcd\xe9\xc2K\x990\x8c\x98\x1b\xf6\x03\x1a\x06\xc2l2\xcb+x$-\xd8J9\xae\xc8\xdd\x8a\x7f8\x1e\xf9z\x10\xdd\xf9\x88s\xf5\xd1\xf3i\x7f\x8d\xbahU{]\x9bTu\x81T\xda\x0e`\x86\xd1\xbb\xe4\x98\xb2\r\xa2\x9a9N\xedmOw1I\xe4\xe3GCw\xad\xa2\xe7\x18\x8d"\xb7\x8c~B\xce\xba\xfc+\x8a\x81$\xdb\xc33\x01a\xd8\x9al\xack\x07\xbe\x18f2\x13\xa8\xc2\xf2\xa4\xcb\x86x\xd2\xa9\xf2\xef\xb3\x14<\xb10\x91W\xbfA_F\x81\xe8A\x8ac\xa9\n\x82\n\n\x93\xfd7\xb3Z\xe9\xab\x18\xc0=\x96\x84\x02?UC\xb6\x0ep\xfa\x19\xa6\xfcbM\x9d\x00\xa1\x03`\x0c\xbe\xda;+`\x13\xd6\xbaly\xeb\x02\xf7Mr\x9a\x00\xc1W7~\x89^6I\x1fj5u\xa8 r;\x8d']>
|###[ TLS Record ]###
| content_type= handshake
| version = TLS_1_1
| length = 0x20d
|###[ TLS Handshake ]###
| type = server_key_exchange
| length = 0x209
|###[ TLS Server Key Exchange ]###
|###[ TLS Diffie-Hellman Server Params ]###
| p_length = 0x80
| p = '\xd6}\xe4@\xcb\xbb\xdc\x196\xd6\x93\xd3J\xfd\n\xd5\x0c\x84\xd29\xa4_R\x0b\xb8\x81t\xcb\x98\xbc\xe9Q\x84\x9f\x91.c\x9cr\xfb\x13\xb4\xb4\xd7\x17~\x16\xd5Z\xc1y\xbaB\x0b*)\xfe2JFzc^\x81\xffY\x017{\xed\xdc\xfd3\x16\x8aF\x1a\xad;r\xda\xe8\x86\x00x\x04[\x07\xa7\xdb\xcaxt\x08}\x15\x10\xea\x9f\xcc\x9d\xdd3\x05\x07\xddb\xdb\x88\xae\xaat}\xe0\xf4\xd6\xe2\xbdh\xb0\xe79>\x0f$!\x8e\xb3'
| g_length = 0x1
| g = '\x02'
| ys_length = 0x80
| y_s = "\xc9\x1aK\xe5\xc2\xd9@\x83\x05\xd7\xd1J1[\xdb3\xc2\xa8\xb7\xa0\xdd\xc6cFjje\x92d\xc0\n\x1b\xb6N\xf3f\x9c\xa6\xb86\xf3\xd8\x91\xcf\x18\x87|3\x13fh\x8a$\xdf\xd6\xb6D\x9d\x90\xf6\x08*\xee?\x1f\xc3/|\xbe\xbc\xdd\xf0\x9aX\x8b\x00E\x06\x01\x9a\xc3\xfc\xb2\x1b\xa5\xa7>3\xc8\x95\x07\xfb\x84\x1b\xf9\xa2!%\xfc\xf4\xca`\x1a'\xd1\xeaj\x15c%\xe7\xa8 \xfe,E\x82\x8e\xc2S\xd4e\x88\xf6\xde\xa7\xd5 "
| sig_length= 0x100
| sig = '1\xd5!6H\xfa\x0e\xe1\x7f\xa8\x13!\x83\x05X1\x92\xab\x9e^\x8c\xa1\xe2\x05Q\xdajb\x1b\x98\xc0\xc0y\xcbJ5!@P\xe1\xf02\xc9Ar@\xf5\x1d\xe3\xa7<\x10:\xcd\xab\xa6\r\xf2p\xbc@&l8\xf9|\xcd\xc6\xf5K\x1c\xbd\xb0P1\x18W\x9b98O\xa6\xf4\x95\nm\x92\xb4\xf8"o\xeb\xcc\xf7\xbd\xa6\xf5\x9b\xc9\xe1Iw\xe8\xefkn\x13,\x7f\\\x7f(\xc7X\xad|\x19\xbd\n\x85\xcd1\xa3\xb6=\xd1\xda\xd1\xec\x95J\x82\xf4\xcc/wz P\x16\xc3\x99y\xc1\x08A\xec\x11\xeb\xb6tA*+\xff\xd5\x0e\xdb\xf0I\xb5^\x8d2\xc0\x8b\x06yuw\xe9Z\x80v\xd8\xca\xe4\x1f&\x14\xd4\x8e\x13\xe4\xef/6Jq\xe6\x87Y\xb6i\x03Y\xa88\xf3\xe6|b8n\xae\xf4\x81\xc2\xd6\xcd\x82\xe9=\xe1\xfe\r\x90\x9fp\xa4\t\xe8\xd4\x7fL\xa35\xaa#\xaa\x9a\x05\xbfO\xe9w\x11d\xa4\xa7\x98?\xcb\xec\x1c\xc6:l\x0cb7\xb0!,P\xcc'
|###[ TLS Record ]###
| content_type= handshake
| version = TLS_1_1
| length = 0x4
|###[ TLS Handshake ]###
| type = server_hello_done
| length = 0x0
###[ SSL/TLS ]###
\records \
|###[ TLS Record ]###
| content_type= change_cipher_spec
| version = TLS_1_1
| length = 0x1
|###[ TLS ChangeCipherSpec ]###
| message = '\x01'
|###[ TLS Record ]###
| content_type= handshake
| version = TLS_1_1
| length = 0x40
|###[ TLS Plaintext ]###
| data = '\x14\x00\x00\x0c\x94\tJ\xb0\xe5\x8a\xcb\xceN\xa3\x16\x86'
| explicit_iv= '\xbd\xd3\xcf\x0e\xd6Q\xba\xec:\xad\xc0\xb8\x81%a!'
| mac = "@*'?:\x1bCR\xf5UZ\xcb\t\xbc\x12CwW\xfc\x01"
| padding = '\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b'
| padding_len= 0xb
Finished handshake. Sending application data (GET request)
Got response from server
###[ SSL/TLS ]###
\records \
|###[ TLS Record ]###
| content_type= application_data
| version = TLS_1_1
| length = 0x140
|###[ TLS Plaintext ]###
| data = 'HTTP/1.1 200 OK\r\nDate: Fri, 14 Aug 2015 18:54:36 GMT\r\nServer: Apache/2.2.22 (Debian)\r\nLast-Modified: Thu, 25 Apr 2013 10:50:57 GMT\r\nETag: "46fc5-b1-4db2d317b0640"\r\nAccept-Ranges: bytes\r\nContent-Length: 177\r\nVary: Accept-Encoding\r\nContent-Type: text/html\r\nX-Pad: avoid browser bug\r\n\r\n'
| explicit_iv= '\xa7\xb5p\xf9\x87!\x89\x1fS{\xb3\x90\x86=]w'
| mac = '\xaf\xcf\x85.\x1f\xed\x18\x97\xf1L.\xa1\x03\xabh\xcd\xc6\xaa\xcb\xdf'
| padding = ''
|###[ TLS Record ]###
| content_type= application_data
| version = TLS_1_1
| length = 0xe0
|###[ TLS Plaintext ]###
| data = '<html><body><h1>It works!</h1>\n<p>This is the default web page for this server.</p>\n<p>The web server software is running but no content has been added, yet.</p>\n</body></html>\n'
| explicit_iv= 'FqV\x86\xe8v\xafoJz\x1c\xdb\xc6\x0b\x8ab'
| mac = '\x15\x9b!\x183\xea\xb0\xa0\x15\xeedc2H\xd8\x97\xf8\x8d\xaay'
| padding = '\n\n\n\n\n\n\n\n\n\n'
| padding_len= 0xa
<TLSSessionCtx: id=153622476
params.handshake.client=<TLSClientHello version=TLS_1_1 cipher_suites=['DHE_RSA_WITH_AES_128_CBC_SHA'] compression_methods=['NULL'] |>
params.handshake.server=<TLSServerHello version=TLS_1_1 gmt_unix_time=1439578475 random_bytes='S-\x0f\x1bt\x95\xcc\xa9wwI\xb9\xf5\x10\x12\x11*\x82%\xdd\xb6\x1e\xc0b\xdc\xac\x9b' session_id_length=0x0 session_id='' cipher_suite=DHE_RSA_WITH_AES_128_CBC_SHA compression_method=NULL |>
params.negotiated.version=TLS_1_1
params.negotiated.ciphersuite=DHE_RSA_WITH_AES_128_CBC_SHA
params.negotiated.key_exchange=DHE
params.negotiated.encryption=('AES', 16, 'CBC')
params.negotiated.mac=SHA
params.negotiated.compression=NULL
crypto.client.enc=<Crypto.Cipher.AES.AESCipher instance at 0x92d4f2c>
crypto.client.dec=<Crypto.Cipher.AES.AESCipher instance at 0x92d4f8c>
crypto.server.enc=<Crypto.Cipher.AES.AESCipher instance at 0x92d4fac>
crypto.server.dec=<Crypto.Cipher.AES.AESCipher instance at 0x92d4fcc>
crypto.server.rsa.privkey=None
crypto.server.rsa.pubkey=<Crypto.Cipher.PKCS1_v1_5.PKCS115_Cipher instance at 0x92b5bcc>
crypto.server.dsa.privkey=None
crypto.server.dsa.pubkey=None
crypto.client.dh.x='\xac\x93\x94\xd8\xf8\x85hb\xc4\xb5\x17\x80\x1b\xb1\xb9\xcb\xa3v$[\xb5\x95*\xeb\xfb\xc5\xdc\x0c\xa2J\xbe\x08'
crypto.client.dh.y_c=':\xe97\x06{:\xb2\x13\xb8\xaa\xa8\x1b\xf9\xa5\x13B\xf6\xe0\xe2AY\x97\x9c\xc7\xcf|\xc1XQ\x98\x9e\xc2\xd3\t\xf9\xa7\x9a\xae\x95\xc1i\xc4\xe3\x84D\xdf\x11^Z\x1d7r:\xd9\xa1\xf1\x96\xcf\xdc\x92\x15\x9f-\x9a\xbe\x84 \x9c\x9clQ\x8f\xe7p\x9c\x8f\xcf\xefT)!\x10I\xb9\x99\xc5\x99\xe1\x1f\x03\r\xf8\xa5\xb1o\t\x01t\x1a\x0e\x1c\x029\xc49\xf5\x08 _\x03p\xbe\x97uZ\xd2\x0e\x19\xb8l[\xd2\x85\x02\x8e\xc1j\xaa'
crypto.server.dh.p='\xd6}\xe4@\xcb\xbb\xdc\x196\xd6\x93\xd3J\xfd\n\xd5\x0c\x84\xd29\xa4_R\x0b\xb8\x81t\xcb\x98\xbc\xe9Q\x84\x9f\x91.c\x9cr\xfb\x13\xb4\xb4\xd7\x17~\x16\xd5Z\xc1y\xbaB\x0b*)\xfe2JFzc^\x81\xffY\x017{\xed\xdc\xfd3\x16\x8aF\x1a\xad;r\xda\xe8\x86\x00x\x04[\x07\xa7\xdb\xcaxt\x08}\x15\x10\xea\x9f\xcc\x9d\xdd3\x05\x07\xddb\xdb\x88\xae\xaat}\xe0\xf4\xd6\xe2\xbdh\xb0\xe79>\x0f$!\x8e\xb3'
crypto.server.dh.g='\x02'
crypto.server.dh.x=None
crypto.server.dh.y_s="\xc9\x1aK\xe5\xc2\xd9@\x83\x05\xd7\xd1J1[\xdb3\xc2\xa8\xb7\xa0\xdd\xc6cFjje\x92d\xc0\n\x1b\xb6N\xf3f\x9c\xa6\xb86\xf3\xd8\x91\xcf\x18\x87|3\x13fh\x8a$\xdf\xd6\xb6D\x9d\x90\xf6\x08*\xee?\x1f\xc3/|\xbe\xbc\xdd\xf0\x9aX\x8b\x00E\x06\x01\x9a\xc3\xfc\xb2\x1b\xa5\xa7>3\xc8\x95\x07\xfb\x84\x1b\xf9\xa2!%\xfc\xf4\xca`\x1a'\xd1\xeaj\x15c%\xe7\xa8 \xfe,E\x82\x8e\xc2S\xd4e\x88\xf6\xde\xa7\xd5 "
crypto.session.encrypted_premaster_secret=None
crypto.session.premaster_secret='\xb7`\xc2\xb2\x99\xeb\xbd\xbee\x9cD\xaf\x15A\x1a3\x1b\x1b\xc6\xf3UKf\xda\xd1\xe8\x02\xf2\xce\x10\xe5$\xe3J/\x1cK\x1b\x9fP5b\xc5\xa0\xab\x1c_\xca\x0cH\xb3\xfb\x10q\x83,\x148\xb5\xf1\x0e\x8d\xd1\xfd\x03\xa2,\xa3\xd1,\xc3i)\x0c\xe9p\xd0\xc7:2\xe5\xdb1\xb3\x9f;h4\xc5\xce\xad\xa2\x1d\xf4\xc7-\xb5)\x99l\x93\xc5~\x92\x1f\xe0b\xc5\xea\xb6(\xee\x9eHT\x01\xcb\x9a\xa5\x07p\x02\x13\xf3W\xf4\xf4V'
crypto.session.master_secret='\x00y\x00b\xfb\xb7\x95\x1c\x8d\xaa\x0f2q\xc9G<\xf8\x15B`pp\x05\x88\xb6\x02\x00\t:k\xc1\xd4t\xdc&\xa6\x040\xfa4z8\x18yVz\xcd\x00'
crypto.session.randombytes.client='U\xce9k\xb0l\x89\xfe\x95\xe45\xef\x88g\xe8\x1cz%wc\xb7\xd1\xcc\xd5,\x03Xx\x0eB\xd9@'
crypto.session.randombytes.server='U\xce9kS-\x0f\x1bt\x95\xcc\xa9wwI\xb9\xf5\x10\x12\x11*\x82%\xdd\xb6\x1e\xc0b\xdc\xac\x9b\x00'
crypto.session.key.client.mac='\xd9\xdcX\xf9\x83\x10j\xf9\x9bz8i\nzt\xc2|wn\x11'
crypto.session.key.client.encryption='S\xa8F\x18x\xae\xd5\x0e\x97\xdb\x05PU-+"'
crypto.session.key.cllient.iv='\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
crypto.session.key.server.mac='\xda\xe2\x9fw\xe0\x87\xabDD\xfb\xfc\xa1&\xff\xf1\x82\x8e\xe5\xd38'
crypto.session.key.server.encryption='\x981\xbf\xcb\x1b<\xa3!\xa2\x85[I\xafb\xe2\xfe'
crypto.session.key.server.iv='\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
crypto.session.key.length.mac=20
crypto.session.key.length.encryption=16
crypto.session.key.length.iv=16
>
```
##### Full Handshake with Application Data (ECDHE_RSA_WITH_AES_128_CBC_SHA256)
see /examples/full_rsa_connection_with_application_data.py
```python
# python examples/full_rsa_connection_with_application_data.py localhost 443
Connected to server: ('localhost', 443)
###[ SSL/TLS ]###
\records \
|###[ TLS Record ]###
| content_type= handshake
| version = TLS_1_2
| length = 0x2a
|###[ TLS Handshake ]###
| type = server_hello
| length = 0x26
|###[ TLS Server Hello ]###
| version = TLS_1_2
| gmt_unix_time= 1450127754
| random_bytes= 'b\x81\x06Q\xca\x9a71N\xc5<TT\xfb!R\x01\x87H\xe7\t\x11\xec\x9f\xd9D\xfa\xa3'
| session_id_length= 0x0
| session_id= ''
| cipher_suite= ECDHE_RSA_WITH_AES_128_CBC_SHA256
| compression_method= NULL
| \extensions\
|###[ TLS Record ]###
| content_type= handshake
| version = TLS_1_2
| length = 0x2de
|###[ TLS Handshake ]###
| type = certificate
| length = 0x2da
|###[ TLS Certificate List ]###
| length = 0x2d7
| \certificates\
| |###[ TLS Certificate ]###
| | length = 0x2d4
| | \data \
| | |###[ X509Cert ]###
| | | version = <ASN1_INTEGER[2L]>
| | | sn = <ASN1_INTEGER[14155341744006398450L]>
| | | sign_algo = <ASN1_OID['.1.2.840.113549.1.1.5']>
| | | sa_value = <ASN1_NULL[0L]>
| | | \issuer \
| | | |###[ X509RDN ]###
| | | | oid = <ASN1_OID['.2.5.4.3']>
| | | | value = <ASN1_PRINTABLE_STRING['localhost.localdomain']>
| | | not_before= <ASN1_UTC_TIME['130425105002Z']>
| | | not_after = <ASN1_UTC_TIME['230423105002Z']>
| | | \subject \
| | | |###[ X509RDN ]###
| | | | oid = <ASN1_OID['.2.5.4.3']>
| | | | value = <ASN1_PRINTABLE_STRING['localhost.localdomain']>
| | | pubkey_algo= <ASN1_OID['.1.2.840.113549.1.1.1']>
| | | pk_value = <ASN1_NULL[0L]>
| | | pubkey = <ASN1_BIT_STRING["\x000\x82\x01\n\x02\x82\x01\x01\x00\xdcS\xa3%U\r\xe0\xb3\xab5=$'\x8d\x13\x95cp\x0c\xe2p\xb5\x0e\xe3J\x1fy\x7f\x876\x9cH\xd8Z\x8e\x1c\x04\xc4C\x8e<\x1a\xd1\x90\xbdm\xaa\x08ku<Tw\t\xbd{\xb7wZm\x9cmW\\o\x9dw\xdf\xa3\xe7}\xac!:\x150\xb7\x98lCA\xec\x18\x97\xba#B\x8b\xa1c\xd8aw\xbb\xc6\xc4\x0fbs\x87eT<E\xbf\r\x92\xfc\x8b}7b7\xf12\x19(\x95y+\x12oiW4\xd7\xf5\x06\xf2G\xf2\x15\xfc\xf6\xa6Y\x83\x11\xc7P\\'\x8b\xd2\x96\xd0\xa2\xb51\xb3\x00N\xb9s\\\x03\x95\xb0\x12\xe1l\x9d\x83\x92uU\x9d\xbd\xdct}@6\r\xbb\xc9\xea@S\xf4D\xbe\x93\x99`xUjF.M\xd8\xbc\xfc\xdb 1\xaa{;\xf3\xec)1\xa9\xe4\xfapl\x18\x07O\x88Y\xc8\xed\xb63\xf2\x7f\xe2~g\xe7\xf9\xc4L\x9d\xcbg\xda\xdf\x1e5\xb3C\x07\xeav\xf0\x13m]\x94\xdaY\xc8\xc3?\x99\xb6\xb6\xb5\xc5bM\x02\x03\x01\x00\x01"]>
| | | \x509v3ext \
| | | |###[ X509v3Ext ]###
| | | | val = <ASN1_SEQUENCE[[<ASN1_OID['.2.5.29.19']>, <ASN1_STRING['0\x00']>]]>
| | | sign_algo2= <ASN1_OID['.1.2.840.113549.1.1.5']>
| | | sa2_value = <ASN1_NULL[0L]>
| | | signature = <ASN1_BIT_STRING['\x00X\xaf\xa2B\xb4c\x83}S\x06\x07\xb7\xb6\xa4nT\xeeAS\xe0\x93\x81\x820\x9c\x92\x16\xb3H\xd0\x11Z\x02\\g|\x9f\x0b\x8f\x96\x82\x1a3\x8d\xe1.3\xcd\xe9\xc2K\x990\x8c\x98\x1b\xf6\x03\x1a\x06\xc2l2\xcb+x$-\xd8J9\xae\xc8\xdd\x8a\x7f8\x1e\xf9z\x10\xdd\xf9\x88s\xf5\xd1\xf3i\x7f\x8d\xbahU{]\x9bTu\x81T\xda\x0e`\x86\xd1\xbb\xe4\x98\xb2\r\xa2\x9a9N\xedmOw1I\xe4\xe3GCw\xad\xa2\xe7\x18\x8d"\xb7\x8c~B\xce\xba\xfc+\x8a\x81$\xdb\xc33\x01a\xd8\x9al\xack\x07\xbe\x18f2\x13\xa8\xc2\xf2\xa4\xcb\x86x\xd2\xa9\xf2\xef\xb3\x14<\xb10\x91W\xbfA_F\x81\xe8A\x8ac\xa9\n\x82\n\n\x93\xfd7\xb3Z\xe9\xab\x18\xc0=\x96\x84\x02?UC\xb6\x0ep\xfa\x19\xa6\xfcbM\x9d\x00\xa1\x03`\x0c\xbe\xda;+`\x13\xd6\xbaly\xeb\x02\xf7Mr\x9a\x00\xc1W7~\x89^6I\x1fj5u\xa8 r;\x8d']>
|###[ TLS Record ]###
| content_type= handshake
| version = TLS_1_2
| length = 0x14d
|###[ TLS Handshake ]###
| type = server_key_exchange
| length = 0x149
|###[ TLS Server Key Exchange ]###
|###[ TLS EC Diffie-Hellman Server Params ]###
| curve_type= named_curve
| curve_name= secp256r1
| p_length = 0x41
| p = "\x04\x1b\x85z\xe3\xf1\xfe\x107\xfa\x1d\x85b2\xe2\x96\x85'\x80\n\x9c\x85\xa5\xfa\x10&L\xb9\x82\x18\xe3\xd5\xff\x0eD|(g\x1c\x03\x9b\xe2\xa8\x1f\x92\x8b\xa7\xb8\xeb\xd8\xf6\x14v\xafQ\x94U1[\xc0d1\xff\xc2\xca"
| hash_type = sha1
| sig_type = rsa
| sig_length= 0x100
| sig = '\xc07E\xab\xe9\xb6\xe5\x8a_\x1f;\x7f>\x8c\xb5\xe0\xf2:\xbb\xeaIk\xee0f\xc0\xef\x94`\xfc\x9e\x00\x0e\x00\x14\x01\x0b\x01\x9akqXw\xc90AO\x1ar\xf4\x82\x86Y`\xb5;\xad]\x9e\x16\x866\x0c:"O\xf3l\x0c\xd8\x14\xda\x17E+\x14\xd5F\x07\xf3\xafF\x0f.+\x05i\xc1\x13\x0f2\x0f\xc0l(\x86\xa0N\x08\xad\xd19&i2\' \x0e\x19}\xb6\xbf\xed\xf1\xbf\x89\xe9\xd7\x179I\xe2$\xa4\xd4pX\xfb\x0c\t-5\x8f\xe69R\xf1U\xf2\xfc\xd3\x0c\x14\xa7f\xf9\xba(t\x0b\xec\x82?wWe\x88\xf8\x943Kf\xa8`\xf5\xa0b\xdea\xc4\xef\x8e\xcc\xbbb\x97\x0b\x00\xb9\x02\xf7\xf6\x1a\xf8\xedjv\xa6 \xfc\x95!\x93\x1c\xfd\x13Y\x1c(\x07\x95\xbf\xa8\x17\xd5\x96\xd5\xa3\xc4c\xcd\xfa\xac\x12U|!ti\x15O\xf5\xd3F\xdd\x7fr\xf5\x83\x11\xb9\xf7`\x0f\xf9?<\x96\xd8dL\xcd\x02\x1f\xf6\x12\x07\x14\xa1\x8d#\xde9\x86J]'
|###[ TLS Record ]###
| content_type= handshake
| version = TLS_1_2
| length = 0x4
|###[ TLS Handshake ]###
| type = server_hello_done
| length = 0x0
###[ SSL/TLS ]###
\records \
|###[ TLS Record ]###
| content_type= change_cipher_spec
| version = TLS_1_2
| length = 0x1
|###[ TLS ChangeCipherSpec ]###
| message = '\x01'
|###[ TLS Record ]###
| content_type= handshake
| version = TLS_1_2
| length = 0x50
|###[ TLS Plaintext ]###
| data = '\x14\x00\x00\x0c\x10s\xd9?)WB\xcf\xffY\xed}'
| explicit_iv= '\xca7\xa8\x86\x86\xd2\xe1\x18&\xf9r-\x8a\x86\xbf\x16'
| mac = '\xbf\xb8\x07\x15\xc5\x91\xe4SBLQ\xef\x9b\xdc\xcb\x89d\xb5\xde\xec\x11T\x98gG>T\xc4\xe8\x8b\n\x03'
| padding = '\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f'
| padding_len= 0xf
Finished handshake. Sending application data (GET request)
Got response from server
###[ SSL/TLS ]###
\records \
|###[ TLS Record ]###
| content_type= application_data
| version = TLS_1_2
| length = 0x150
|###[ TLS Plaintext ]###
| data = 'HTTP/1.1 200 OK\r\nDate: Mon, 14 Dec 2015 21:15:56 GMT\r\nServer: Apache/2.2.22 (Debian)\r\nLast-Modified: Thu, 25 Apr 2013 10:50:57 GMT\r\nETag: "46fc5-b1-4db2d317b0640"\r\nAccept-Ranges: bytes\r\nContent-Length: 177\r\nVary: Accept-Encoding\r\nContent-Type: text/html\r\nX-Pad: avoid browser bug\r\n\r\n'
| explicit_iv= '\x04\xa4lS\xa1\xbe\xeaI\xca\xc9Zp\xa6\xc8\x94\x9e'
| mac = '5\xb374\xeb\xd7\x990\xaf\x11/\xd8\x8c\x86\x9f\x8cVm\xe1\xfbD>P\xf1\x84\xd4\xb1\x7f[Ku\n'
| padding = '\x04\x04\x04\x04'
| padding_len= 0x4
|###[ TLS Record ]###
| content_type= application_data
| version = TLS_1_2
| length = 0xf0
|###[ TLS Plaintext ]###
| data = '<html><body><h1>It works!</h1>\n<p>This is the default web page for this server.</p>\n<p>The web server software is running but no content has been added, yet.</p>\n</body></html>\n'
| explicit_iv= '\x19\t-\xe8\xa5\xe3;\xad^\x8d\x8d\xf2I\x1c\xcb\xad'
| mac = '<\xd5\xb5\x90\x9d\x9b\x8c8B\xc1\xe8\xfb\xdd\x91\n\x8b\xaee\xab]\xfd\xd5kD\xc8\x86\xa1\x02YR\x1e\x9a'
| padding = '\x0e\x0e\x0e\x0e\x0e\x0e\x0e\x0e\x0e\x0e\x0e\x0e\x0e\x0e'
| padding_len= 0xe
<TLSSessionCtx: id=151963340
params.handshake.client=<TLSClientHello version=TLS_1_2 cipher_suites=['ECDHE_RSA_WITH_AES_128_CBC_SHA256'] compression_methods=['NULL'] |>
params.handshake.server=<TLSServerHello version=TLS_1_2 gmt_unix_time=1450127754 random_bytes='b\x81\x06Q\xca\x9a71N\xc5<TT\xfb!R\x01\x87H\xe7\t\x11\xec\x9f\xd9D\xfa\xa3' session_id_length=0x0 session_id='' cipher_suite=ECDHE_RSA_WITH_AES_128_CBC_SHA256 compression_method=NULL |>
params.negotiated.version=TLS_1_2
params.negotiated.ciphersuite=ECDHE_RSA_WITH_AES_128_CBC_SHA256
params.negotiated.key_exchange=ECDHE
params.negotiated.encryption=('AES', 16, 'CBC')
params.negotiated.mac=SHA256
params.negotiated.compression=NULL
crypto.client.enc=<Crypto.Cipher.AES.AESCipher instance at 0x913598c>
crypto.client.dec=<Crypto.Cipher.AES.AESCipher instance at 0x91359ec>
crypto.server.enc=<Crypto.Cipher.AES.AESCipher instance at 0x9135a0c>
crypto.server.dec=<Crypto.Cipher.AES.AESCipher instance at 0x9135a2c>
crypto.server.rsa.privkey=None
crypto.server.rsa.pubkey=<Crypto.Cipher.PKCS1_v1_5.PKCS115_Cipher instance at 0x912ef8c>
crypto.server.dsa.privkey=None
crypto.server.dsa.pubkey=None
crypto.client.dh.x=None
crypto.client.dh.y_c=None
crypto.server.dh.p=None
crypto.server.dh.g=None
crypto.server.dh.x=None
crypto.server.dh.y_s=None
crypto.client.ecdh.curve_name=None
crypto.client.ecdh.priv='^\xba\xeb\xcc\xb3>\x85\xa4O\x88#\t\xfe\x11etc\xe3HE\xdf\xab5"\x00*\xa7\xa4\xba\x16\rY'
crypto.client.ecdh.pub=(15593007407665255161332890480389306948921121224892181265648081329388797451046, 97367016829523129655161775995807426469043502553948069450170722834830665800268) on "secp256r1" => y^2 = x^3 + 115792089210356248762697446949407573530086143415290314195533631308867097853948x + 41058363725152142129326129780047268409114441015993725554835256314039467401291 (mod 115792089210356248762697446949407573530086143415290314195533631308867097853951)
crypto.server.ecdh.curve_name='secp256r1'
crypto.server.ecdh.priv=None
crypto.server.ecdh.pub=(12448285729810697387785923206705205168894064463590796449895082178698960688639, 6453382386374218660658583494811319811574853038993757274506963746262301524682) on "secp256r1" => y^2 = x^3 + 115792089210356248762697446949407573530086143415290314195533631308867097853948x + 41058363725152142129326129780047268409114441015993725554835256314039467401291 (mod 115792089210356248762697446949407573530086143415290314195533631308867097853951)
crypto.session.encrypted_premaster_secret=None
crypto.session.premaster_secret='\xd8\xf0&5\x02\xcar^(\xd9\x1b0X\xb5`\x89\x16\xc0HM\x85[*\x93\xacx\xfbj\x86O\x01\x83'
crypto.session.master_secret='\xb91\xaa&\xfc\xac\xf7\x12\xca\xa0\xa8\xc5\xd5\x9e\xdf\x14\x877\xdf(#\xe0\x9c\xc6\xf1\x93@\x15\x8dgS4\xe0\x915\x1a\x1d\xcc\x10g\xde\x16=\x0f\x1a\x02s\xe7'
crypto.session.randombytes.client='Vo1\x8aP\x01,C\xc8(\x17\x8eb}\xeeZ\xde\xb6\xd0\xf7\xd7\x96)\xc0\xb2\xc9\xb4\x10\xc1P\\J'
crypto.session.randombytes.server='Vo1\x8ab\x81\x06Q\xca\x9a71N\xc5<TT\xfb!R\x01\x87H\xe7\t\x11\xec\x9f\xd9D\xfa\xa3'
crypto.session.key.client.mac='m\xbe\x8b\xc1\x06\xba;%\xd5\xa7.\xc1\xc0|6\x17\x7f\xd8k\xac!4o\xcdWvz7\xc4\xec\x95\xb5'
crypto.session.key.client.encryption='\xa8\x93Ro\xe0\xc5\x93E\xaa1\xa0p0!\x04p'
crypto.session.key.cllient.iv='\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
crypto.session.key.server.mac='k\xc5\xa2VU\xcd\x1f\xf9;dF2\xb5\x15n[\xf8\xff\xd3\xb5\xfc\xf7(\x99\xe8q\\A\xf0\xedeY'
crypto.session.key.server.encryption='#\xc0%-;\xc1\xfa\xbc\xdbe\x04f\xaa\xf3\xc7\xec'
crypto.session.key.server.iv='\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
crypto.session.key.length.mac=32
crypto.session.key.length.encryption=16
crypto.session.key.length.iv=16
>
```
##### SCSV Fallback Testing
Socket stream example to test remote implementations for protocol downgrade attempts (following the latest SSL POODLE attacks) - examples/SCSV_fallback_test.py
```python
for: ('google.com', 443)
record hello
('SSL_3_0', 'SSL_3_0') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK SSL_3_0
('SSL_3_0', 'TLS_1_0') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK TLS_1_0
('SSL_3_0', 'TLS_1_2') ... resp: TLSServerHello: outer TLS_1_2 inner TLS_1_2
('SSL_3_0', 'TLS_1_1') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK TLS_1_1
('TLS_1_0', 'SSL_3_0') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK SSL_3_0
('TLS_1_0', 'TLS_1_0') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK TLS_1_0
('TLS_1_0', 'TLS_1_2') ... resp: TLSServerHello: outer TLS_1_2 inner TLS_1_2
('TLS_1_0', 'TLS_1_1') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK TLS_1_1
('TLS_1_2', 'SSL_3_0') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK SSL_3_0
('TLS_1_2', 'TLS_1_0') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK TLS_1_0
('TLS_1_2', 'TLS_1_2') ... resp: TLSServerHello: outer TLS_1_2 inner TLS_1_2
('TLS_1_2', 'TLS_1_1') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK TLS_1_1
('TLS_1_1', 'SSL_3_0') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK SSL_3_0
('TLS_1_1', 'TLS_1_0') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK TLS_1_0
('TLS_1_1', 'TLS_1_2') ... resp: TLSServerHello: outer TLS_1_2 inner TLS_1_2
('TLS_1_1', 'TLS_1_1') ... resp: TLSAlert.INAPPROPRIATE_FALLBACK TLS_1_1
overall:
TLS_FALLBACK_SCSV_SUPPORTED ... True
SSLv3_ENABLED ... True
```
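A hedged sketch of the kind of probe behind this output (the helper below is illustrative only and not the code of `SCSV_fallback_test.py`; `0x002f` is RSA_WITH_AES_128_CBC_SHA and `0x5600` is the standard TLS_FALLBACK_SCSV code point, both given here as raw values): a ClientHello advertising TLS_FALLBACK_SCSV is sent with a deliberately lowered hello version, and a server supporting the fallback SCSV answers with an inappropriate_fallback alert whenever it could actually negotiate something newer.
```python
import socket
from scapy.layers.ssl_tls import *  # or: from scapy_ssl_tls.ssl_tls import *

def scsv_probe(host, port, record_version, hello_version):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    hello = TLSRecord(version=record_version) / TLSHandshake() / \
        TLSClientHello(version=hello_version,
                       cipher_suites=[0x002f, 0x5600])  # AES128-SHA + TLS_FALLBACK_SCSV
    s.sendall(str(hello))
    resp = TLS(s.recv(8192))  # TLSAlert.INAPPROPRIATE_FALLBACK or a TLSServerHello
    s.close()
    return resp

# e.g. scsv_probe("google.com", 443, "TLS_1_2", "TLS_1_0")
```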
##### SSLv2 dissection
```python
-----------------------
###[ SSL/TLS ]###
\records \
|###[ SSLv2 Record ]###
| length = 0x3e
| content_type= client_hello
|###[ SSLv2 Client Hello ]###
| version = SSL_2_0
| cipher_suites_length= 0x15
| session_id_length= 0x10
| challenge_length= 0x10
| cipher_suites= [131200, 393280, 65664, 262272, 458944, 524416, 327808]
| session_id= 'aaaaaaaaaaaaaaaa'
| challenge = 'aaaaaaaaaaaaaaaa'
```
##### TLS Sniffer / PCAP decryption
TLS 1.0 session-context based decryption of RSA_WITH_AES_128_CBC_SHA with a known private key
```python
# python examples/sessionctx_sniffer.py 192.168.220.131 443 tests/files/RSA_WITH_AES_128_CBC_SHA_w_key.pcap tests/files/openssl_1_0_1_f_server.pem
* pcap ready!
* load servers privatekey for ciphertext decryption (RSA key only): tests/files/openssl_1_0_1_f_server.pem
| 192.168.220.1 :54908 => 192.168.220.131 :443 | <SSL records=[<TLSRecord content_type=handshake version=TLS_1_0 lengunix_time=120678007 random_bytes="Ua\xc1\\w22\xc4\x01s\x8d>\xc0\xd2\xa6\xe2\xb7#4*]#\xaf\x003\xa3'\xa0" session_id_length=0x0ECDHE_ECDSA_WITH_AES_256_GCM_SHA384', 'ECDHE_RSA_WITH_AES_256_CBC_SHA384', 'ECDHE_ECDSA_WITH_AES_256_CBC_SHA384', 'ECDHE_RSA_'DHE_RSA_WITH_AES_256_GCM_SHA384', 'DHE_RSA_WITH_AES_256_CBC_SHA256', 'DHE_DSS_WITH_AES_256_CBC_SHA256', 'DHE_RSA_WITH_AES_25_CAMELLIA_256_CBC_SHA', 'ECDH_RSA_WITH_AES_256_GCM_SHA384', 'ECDH_ECDSA_WITH_AES_256_GCM_SHA384', 'ECDH_RSA_WITH_AES_256_CBC_TH_AES_256_CBC_SHA', 'RSA_WITH_AES_256_GCM_SHA384', 'RSA_WITH_AES_256_CBC_SHA256', 'RSA_WITH_AES_256_CBC_SHA', 'RSA_WITH_CAME 'ECDHE_RSA_WITH_AES_128_CBC_SHA256', 'ECDHE_ECDSA_WITH_AES_128_CBC_SHA256', 'ECDHE_RSA_WITH_AES_128_CBC_SHA', 'ECDHE_ECDSA_WHE_RSA_WITH_AES_128_CBC_SHA256', 'DHE_DSS_WITH_AES_128_CBC_SHA256', 'DHE_RSA_WITH_AES_128_CBC_SHA', 'DHE_DSS_WITH_AES_128_CBCC_SHA', 'DHE_DSS_WITH_CAMELLIA_128_CBC_SHA', 'ECDH_RSA_WITH_AES_128_GCM_SHA256', 'ECDH_ECDSA_WITH_AES_128_GCM_SHA256', 'ECDH__SHA', 'ECDH_ECDSA_WITH_AES_128_CBC_SHA', 'RSA_WITH_AES_128_GCM_SHA256', 'RSA_WITH_AES_128_CBC_SHA256', 'RSA_WITH_AES_128_CBCSHA', 'ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA', 'DHE_RSA_WITH_3DES_EDE_CBC_SHA', 'DHE_DSS_WITH_3DES_EDE_CBC_SHA', 'ECDH_RSA_WITH_3GOTIATION_INFO_SCSV'] compression_methods_length=0x1 compression_methods=['NULL'] extensions_length=0x15d extensions=[<TLSExt'uncompressed', 'ansiX962_compressed_prime', 'ansiX962_compressed_char2'] |>>, <TLSExtension type=supported_groups length=0x 'sect409k1', 'sect409r1', 'secp384r1', 'sect283k1', 'sect283r1', 'secp256k1', 'secp256r1', 'sect239k1', 'sect233k1', 'sect23', 'sect163r1', 'sect163r2', 'secp160k1', 'secp160r1', 'secp160r2'] |>>, <TLSExtension type=signature_algorithms length=0x20lgorithm=sha512 sig_alg=rsa |>, <TLSSignatureHashAlgorithm hash_alg=sha512 sig_alg=dsa |>, <TLAlgorithm hash_alg=sha384 sig_alg=rsa |>, <TLSSignatureHashAlgorithm hash_alg=sha384 signature_algo<TLSSignatureHashAlgorithm hash_alg=sha256 sig_alg=rsa |>, <TLSSignatureHashAlgorithm hash_alg=sha2orithm=ecdsa |>, <TLSSignatureHashAlgorithm hash_alg=sha224 sig_alg=rsa |>, <TLSSignatureHashAlgorithm ha224 sig_alg=ecdsa |>, <TLSSignatureHashAlgorithm hash_alg=sha1 sig_alg=rsa |>, <TLSSignatureHaalgorithm=sha1 sig_alg=ecdsa |>] |>>, <TLSExtension type=heartbeat length=0x1 |<TLSExtHeartbeat mode=peer_allowx00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0000\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' |>>] |>>>] |>
| 192.168.220.131 :443 => 192.168.220.1 :54908 | <SSL records=[<TLSRecord content_type=handshake version=TLS_1_0 lengix_time=1435009774 random_bytes='\x1d\xc0u!\xbd\xf9\xc3\xd9\xadmYR\xb4G\x93\xeacX\x88\xe1q/\x08\x16xp+$' session_id_length=0xcipher_suite=RSA_WITH_AES_128_CBC_SHA compression_method=NULL extensions_length=0xa extensions=[<TLSExtension type=renegotialength=0x1 |<TLSExtHeartbeat mode=peer_allowed_to_send |>>] |>>>, <TLSRecord content_type=handshake version=TLS_1_0 length=cates=[<TLSCertificate length=0x3eb data=<X509Cert version=<ASN1_INTEGER[2L]> sn=<ASN1_INTEGER[13397879971383713459L]> sign_OID['.2.5.4.6']> value=<ASN1_PRINTABLE_STRING['UK']> |>, <X509RDN oid=<ASN1_OID['.2.5.4.10']> value=<ASN1_BADTAG[<ASN1_DECORING[12]>}}>]> |>, <X509RDN oid=<ASN1_OID['.2.5.4.11']> value=<ASN1_BADTAG[<ASN1_DECODING_ERROR['\x0c\x19FOR TESTING PURPOSEDN oid=<ASN1_OID['.2.5.4.3']> value=<ASN1_BADTAG[<ASN1_DECODING_ERROR['\x0c\x1cOpenSSL Test Intermediate CA']{{Codec <ASN1Co1208140148Z']> not_after=<ASN1_UTC_TIME['211016140148Z']> subject=[<X509RDN oid=<ASN1_OID['.2.5.4.6']> value=<ASN1_PRINTABLEOR['\x0c\rOpenSSL Group']{{Codec <ASN1Codec BER[1]> not found for tag <ASN1Tag UTF8_STRING[12]>}}>]> |>, <X509RDN oid=<ASN1_{{Codec <ASN1Codec BER[1]> not found for tag <ASN1Tag UTF8_STRING[12]>}}>]> |>, <X509RDN oid=<ASN1_OID['.2.5.4.3']> value=<Ad for tag <ASN1Tag UTF8_STRING[12]>}}>]> |>] pubkey_algo=<ASN1_OID['.1.2.840.113549.1.1.1']> pk_value=<ASN1_NULL[0L]> pubkey=\xf3I("\xd3\xb9\xfe\xe0\xde\xe48\xce\xee"\x1c\xe9\x91;\x94\xd0r/\x87\x85YKf\xb1\xc5\xf5z\x85]\xc2\x0f\xd3.)X6\xccHk\xa2\xa2\xxfd\xea\xf985+\xf4\xe6\x9a\x0e\xf6\xbb\x12\xab\x87!\xc3/\xbc\xf4\x06\xb8\x8f\x8e\x10\x07\'\x95\xe5B\xcb\xd1\xd5\x10\x8c\x92\xbMW\x06U!"%\xdb\xf3\xaa\xa9`\xbfM\xaay\xd1\xab\x92H\xba\x19\x8e\x12\xech\xd9\xc6\xba\xdf\xecZ\x1c\xd8C\xfe\xe7R\xc9\xcf\x02\xxa2\x13J%\xaf\xe6\x1c\xb1%\xbf\xb4\x99\xa2S\xd3\xa2\x02\xbf\x11\x02\x03\x01\x00\x01']> x509v3ext=[<X509v3Ext val=<ASN1_SEQUEval=<ASN1_SEQUENCE[[<ASN1_OID['.2.5.29.15']>, <ASN1_BOOLEAN[-1L]>, <ASN1_STRING['\x03\x02\x05\xe0']>]]> |>, <X509v3Ext val=<Certificate']>]]> |>, <X509v3Ext val=<ASN1_SEQUENCE[[<ASN1_OID['.2.5.29.14']>, <ASN1_STRING["\x04\x14\x82\xbc\xcf\x00\x00\x1['.2.5.29.35']>, <ASN1_STRING['0\x16\x80\x146\xc3l\x88\xe7\x95\xfe\xb0\xbd\xec\xce>=\x86\xab!\x81\x87\xda\xda']>]]> |>] sign_["\x00\xa9\xbdMW@t\xfe\x96\xe9+\xd6x\xfd\xb3c\xcc\xf4\x0bM\x12\xcaZt\x8d\x9b\xf2a\xe6\xfd\x06\x11C\x84\xfc\x17\xa0\xeccc6\xb9x02\x081\x9a\xf1\xd9\x17\xc5\xe9\xa6\xa5\x96Km@\xa9[e(\xcb\xcb\x00\x03\x82c7\xd3\xad\xb1\x96;v\xf5\x17\x16\x02{\xbdSSFr4\xd6\b3\x10\xf7l\xc6\x85K-'\xad\n \\\xfb\x8d\x19p4\xb9u_|\x87\xd5\xc3\xec\x93\x13A\xfcs\x03\xb9\x8d\x1a\xfe\xf7&\x86I\x03\xa9\xc5\\xc1C\xc7\xe0%\xb6\xf1\xd3\x00\xd7@\xabK\x7f+z>\xa6\x99LT"]> |> |>] |>>>, <TLSRecord content_type=handshake version=TLS_1_0
<TLSSessionCtx: id=153917580
params.handshake.client=<TLSClientHello version=TLS_1_2 gmt_unix_time=120678007 random_bytes="Ua\xc1\\w22\xc4\x01s\x8d>\xength=0x76 cipher_suites=['ECDHE_RSA_WITH_AES_256_GCM_SHA384', 'ECDHE_ECDSA_WITH_AES_256_GCM_SHA384', 'ECDHE_RSA_WITH_AES_256ECDSA_WITH_AES_256_CBC_SHA', 'DHE_DSS_WITH_AES_256_GCM_SHA384', 'DHE_RSA_WITH_AES_256_GCM_SHA384', 'DHE_RSA_WITH_AES_256_CBC_256_CBC_SHA', 'DHE_RSA_WITH_CAMELLIA_256_CBC_SHA', 'DHE_DSS_WITH_CAMELLIA_256_CBC_SHA', 'ECDH_RSA_WITH_AES_256_GCM_SHA384', '256_CBC_SHA384', 'ECDH_RSA_WITH_AES_256_CBC_SHA', 'ECDH_ECDSA_WITH_AES_256_CBC_SHA', 'RSA_WITH_AES_256_GCM_SHA384', 'RSA_WITHWITH_AES_128_GCM_SHA256', 'ECDHE_ECDSA_WITH_AES_128_GCM_SHA256', 'ECDHE_RSA_WITH_AES_128_CBC_SHA256', 'ECDHE_ECDSA_WITH_AES_1_WITH_AES_128_GCM_SHA256', 'DHE_RSA_WITH_AES_128_GCM_SHA256', 'DHE_RSA_WITH_AES_128_CBC_SHA256', 'DHE_DSS_WITH_AES_128_CBC_SHSHA', 'DHE_DSS_WITH_SEED_CBC_SHA', 'DHE_RSA_WITH_CAMELLIA_128_CBC_SHA', 'DHE_DSS_WITH_CAMELLIA_128_CBC_SHA', 'ECDH_RSA_WITH_A'ECDH_ECDSA_WITH_AES_128_CBC_SHA256', 'ECDH_RSA_WITH_AES_128_CBC_SHA', 'ECDH_ECDSA_WITH_AES_128_CBC_SHA', 'RSA_WITH_AES_128_G, 'RSA_WITH_CAMELLIA_128_CBC_SHA', 'ECDHE_RSA_WITH_3DES_EDE_CBC_SHA', 'ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA', 'DHE_RSA_WITH_3DESWITH_3DES_EDE_CBC_SHA', 'RSA_WITH_3DES_EDE_CBC_SHA', 'EMPTY_RENEGOTIATION_INFO_SCSV'] compression_methods_length=0x1 compresslength=0x4 |<TLSExtECPointsFormat length=0x3 ec_point_formats=['uncompressed', 'ansiX962_compressed_prime', 'ansiX962_compregth=0x32 elliptic_curves=['sect571r1', 'sect571k1', 'secp521r1', 'sect409k1', 'sect409r1', 'secp384r1', 'sect283k1', 'sect283, 'sect193r1', 'sect193r2', 'secp192k1', 'secp192r1', 'sect163k1', 'sect163r1', 'sect163r2', 'secp160k1', 'secp160r1', 'secp1ithm length=0x1e algs=[<TLSSignatureHashAlgorithm hash_alg=sha512 sig_alg=rsa |>, <TLSSignatureHashalgorithm=sha512 sig_alg=ecdsa |>, <TLSSignatureHashAlgorithm hash_alg=sha384 sig_alg=rsa |>, hAlgorithm hash_alg=sha384 sig_alg=ecdsa |>, <TLSSignatureHashAlgorithm hash_alg=sha256 signature_a <TLSSignatureHashAlgorithm hash_alg=sha256 sig_alg=ecdsa |>, <TLSSignatureHashAlgorithm hash_alg=salgorithm=dsa |>, <TLSSignatureHashAlgorithm hash_alg=sha224 sig_alg=ecdsa |>, <TLSSignatureHashAlgorithm a1 sig_alg=dsa |>, <TLSSignatureHashAlgorithm hash_alg=sha1 sig_alg=ecdsa |>] |>>, <TLSExtensi type=padding length=0xf0 |<Raw load='\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0000\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
params.handshake.server=<TLSServerHello version=TLS_1_0 gmt_unix_time=1435009774 random_bytes='\x1d\xc0u!\xbd\xf9\xc3\xd9fa\xa56F\xd8,\x07=\xb1:y\x12P\xc04"\xd4\xfe\x88eC}\xe1\xad]\xdf1' cipher_suite=RSA_WITH_AES_128_CBC_SHA compression_method=NUtRenegotiationInfo length=0x0 |>>, <TLSExtension type=heartbeat length=0x1 |<TLSExtHeartbeat mode=peer_allowed_to_send |>>
params.negotiated.version=TLS_1_0
params.negotiated.ciphersuite=RSA_WITH_AES_128_CBC_SHA
params.negotiated.key_exchange=RSA
params.negotiated.encryption=('AES', 16, 'CBC')
params.negotiated.mac=SHA
params.negotiated.compression=NULL
crypto.client.enc=<Crypto.Cipher.AES.AESCipher instance at 0x938042c>
crypto.client.dec=<Crypto.Cipher.AES.AESCipher instance at 0x932944c>
crypto.server.enc=<Crypto.Cipher.AES.AESCipher instance at 0x932948c>
crypto.server.dec=<Crypto.Cipher.AES.AESCipher instance at 0x934bd4c>
crypto.server.rsa.privkey=<Crypto.Cipher.PKCS1_v1_5.PKCS115_Cipher instance at 0x932946c>
crypto.server.rsa.pubkey=<Crypto.Cipher.PKCS1_v1_5.PKCS115_Cipher instance at 0x93804ec>
crypto.server.dsa.privkey=None
crypto.server.dsa.pubkey=None
crypto.client.dh.x=None
crypto.client.dh.y_c=None
crypto.server.dh.p=None
crypto.server.dh.g=None
crypto.server.dh.x=None
crypto.server.dh.y_s=None
crypto.client.ecdh.curve_name=None
crypto.client.ecdh.priv=None
crypto.client.ecdh.pub=None
crypto.server.ecdh.curve_name=None
crypto.server.ecdh.priv=None
crypto.server.ecdh.pub=None
crypto.session.encrypted_premaster_secret=None
crypto.session.premaster_secret='\x03\x03Ux\xff,U\x8bM\xf4\xf7\x9b\xe4\xb4\x95\xdf\x90\x02\\I{<\xbe\x87uui\xdc\x16\xffn\xf
crypto.session.master_secret='\xb7\xe38\x8a\xbc\t9Q\xac,\r\r\x0f(\xbd\\\r<\xa3F\xf2\xc0\xff\xfc\x88\xe1J\xed\x08\xf8\xbc\x
crypto.session.randombytes.client="\x071fwUa\xc1\\w22\xc4\x01s\x8d>\xc0\xd2\xa6\xe2\xb7#4*]#\xaf\x003\xa3'\xa0"
crypto.session.randombytes.server='U\x88\x82\xee\x1d\xc0u!\xbd\xf9\xc3\xd9\xadmYR\xb4G\x93\xeacX\x88\xe1q/\x08\x16xp+$'
crypto.session.key.client.mac=' d\x90\xca\xbdUKe\x96\xc9Y":^w\xa0\x01\xbd=\xbc'
crypto.session.key.client.encryption="\xc4/\xcb\xc7\n\x85\x0bx\x8c\xd8\x8e+\x83\x8b'{"
crypto.session.key.cllient.iv='\xdfV\xee\xb1Y\xe1\xae\xfd\xb0\xee\xd9\x1ey\xd2\xf7\xd4'
crypto.session.key.server.mac='\xcf\xe2F\x97\x81\x9cw\x03\xbc~\x1e\xaf\x15\xdd2J\xd0\x07I\x87'
crypto.session.key.server.encryption='Zw\xfd\x15\x15a\x0bh@F\xac\xfen\x0ea\xa8'
crypto.session.key.server.iv='\x16\xcb)\xfa\xfc\x9f\xaar/\x19\xb5\x88\x85o\x8e\xe3'
crypto.session.key.length.mac=20
crypto.session.key.length.encryption=16
crypto.session.key.length.iv=16
>
| 192.168.220.1 :54908 => 192.168.220.131 :443 | <SSL records=[<TLSRecord content_type=handshake version=TLS_1_0 lengload='\x01\x00\x08\xa9xP\xf3\xdb\xfc\x8b,\xc0C^N\x96ALQ\t\xabW\xcb\x9a\xe4\'\xa96\xb8y\xf8\x1d\xda\x7f\x97Q\x804\x12\n\xe4\xcee\xaeW\xe5\xa4k\xc4^\x95\x8e\xba\r#\xdf\xa2JD\xca\xa0\x98S\x933*<\xc1\n\x18\x1f\xd9\xe4\xad\x82\xb6\xea\x9c\xb8\x14\xa61\xb1x00\x0f1\x0e\xcb\xc3=G^??\xba\xee\xc3\xeb\x16\xe8\xf9\xd6\xdf5e\xb8\r5)\xc7\xc1\xf3\x1d\x85\x181:/\x1d\x16j\xdcS`E\xa7\xc2D"\xc6\xb0Y@\x90\x18\xe4\x1c\xb1\xf3\x9a\xe9\xd9\x80P\xd8\xa9\x01Z\x9d\x000\x95\xbb\xddf\x13\xc9' |>>>>, <TLSRecord content_typSRecord content_type=handshake version=TLS_1_0 length=0x30 |<TLSCiphertext data=',\x8c\xecA\x83\xa7\x8c\xce\xe3\x9e\xb20\xd\x08' |>>] |>
|-> decrypted record | <SSL records=[<TLSRecord content_type=handshake version=TLS_1_0 lengientRSAParams length=0x100 data='\x08\xa9xP\xf3\xdb\xfc\x8b,\xc0C^N\x96ALQ\t\xabW\xcb\x9a\xe4\'\xa96\xb8y\xf8\x1d\xda\x7f\x9\x03 \x91\xe2\xa9I\xee\xaeW\xe5\xa4k\xc4^\x95\x8e\xba\r#\xdf\xa2JD\xca\xa0\x98S\x933*<\xc1\n\x18\x1f\xd9\xe4\xad\x82\xb6\xea\\xa0\x8dJ\xf9b\xe4k\x00\x0f1\x0e\xcb\xc3=G^??\xba\xee\xc3\xeb\x16\xe8\xf9\xd6\xdf5e\xb8\r5)\xc7\xc1\xf3\x1d\x85\x181:/\x1d\x10,U\x0c@-[_\x0e\xfd\xc6\xb0Y@\x90\x18\xe4\x1c\xb1\xf3\x9a\xe9\xd9\x80P\xd8\xa9\x01Z\x9d\x000\x95\xbb\xddf\x13\xc9' |>>>>, <TLsage='\x01' |>>, <TLSRecord content_type=handshake version=TLS_1_0 length=0x30 |<TLSPlaintext data='\x14\x00\x00\x0c\xc2\xc\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b' padding_len=0xb |>>] |>
| 192.168.220.131 :443 => 192.168.220.1 :54908 | <SSL records=[<TLSRecord content_type=change_cipher_spec version=TLSversion=TLS_1_0 length=0x30 |<TLSCiphertext data='\x917\xacq\x0f\x8a\xe6\xcd\xc7\x0c\xe8\xe9(\xe2\xda\xbc\xe2\xcd\x8cbP9$\xc
|-> decrypted record | <SSL records=[<TLSRecord content_type=change_cipher_spec version=TLSversion=TLS_1_0 length=0x30 |<TLSPlaintext data='\x14\x00\x00\x0c1\xa9\xd7 v\r\xe1\x0e\xa4M2x' mac='\x9f\x81w\x94\xd1\xd9pe\ng_len=0xb |>>] |>
| 192.168.220.1 :54908 => 192.168.220.131 :443 | <SSL records=[<TLSRecord content_type=application_data version=TLS_1da\xa3?/\xc8\xe0\xbbR\xc0u\xde' |>>, <TLSRecord content_type=application_data version=TLS_1_0 length=0x70 |<TLSCiphertext db2\x1e\xdc\x94\xccq\x04\xb7\x8e\xe3[\xcb=\xb1\x0c3\xd8\x82\xec\xa7\x97\xf2\xfe\x1f\xcdp\x94\xc5\x06]\xf0\xee\xadZ\xb4\xe7L<T\\x90\x98\xb3\xf6\x9b\x1e\x8e\xa0\xcd' |>>] |>
|-> decrypted record | <SSL records=[<TLSRecord content_type=application_data version=TLS_1ng='\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b' padding_len=0xb |>>, <TLSRecord content_type=application_data version=TLS_68.220.131\r\nAccept: */*\r\n\r\n' mac='\x96\xee\xffa\x13\xd3\xa6\x97C\xa2\xd0y\xf1\x00r(\x07\x12\xb3\xff' padding='\x0c\x0c\
| 192.168.220.131 :443 => 192.168.220.1 :54908 | <SSL records=[<TLSRecord content_type=application_data version=TLS_1xea\x9f\x12\x0b\xd5\xf94lR\x7f\xa6g\xf3' |>>, <TLSRecord content_type=application_data version=TLS_1_0 length=0xb40 |<TLSCipce\x86\xb8\xc5R\xb1\xf0\xcd\x93w\xe1X\n\xaf*(0+t\xe7S\xc7\xe2\x15\x0f\x9f[\xac\x8c\xfbW\x05Zv1|\xdf\xe9\xddT\xf2\x02\x92a\x9f\x92gp\x94\x98\xa6\xe4\xb6\xc6\xce\xefTr\xe8-\xde\xeaI\xf0\xf4bJ\xa3U\xefTg\x05\x83\xfaZ\xc8 Q\x02\xba\xb1\x9e\x95\xb5\xf5\xaxbd\xd0 P\x92\xcc\x18;\xff]e\x00^[\xd6q\xf2w\xd9]\xe7\xde\x1c}\xd4B\xf1x\xf8\x966\x81,\xea\xb8#\x1d\x1b\xc9\xberTQ\x99{]\xeb\cfS\x92e\x0cX|\xb9}\xcd([[d-\xf9\x99\xc2Xe\xe7\x92v\xef \xe5}g;\x13\x93 R\x90s\xf7\x08\xee\xdav\xe6\x17\x84\x8fbZ\xa3\\#\xba\xa2\xe1D\'\x11\xbf\xfe\xeb\xa8\xb9^\x8e\x9bY\x9e\x1a\x95\xb0F\x15\x14\xd0\xf9)\xc9bW\xd2\x16\xbbb\x14+\xe1\x92=cl{P\xfc\x10\xx19\x19\xdfuB$8\xf2\xc1\xa6S\x88\xc3\xc8\xbd\xb4\x87I\xeeA\xf0\nS8mj6\xc8\x0b*\xc0\x9e-\xc2\xcf\xee\xd9#BG\xb2\x1d\xfd*bu\x85x00\x86\xf5\x18\x19H\xf80\x1fG\x01^R(\xc7\xd23z\xcf\xbf\x16\x87\xcaR\xd2\xc6\xdc\xde\xc8R-\x1aAF=\x16\xe2\xd6\xb2!I\xa8L\x98\\xf9@u\xf1"\x8a\xf2\x1f\xe8\xdc\x9cEU\xc5\xa9x\\\xd4\xeb\xd6\'\xb6%\x8a\x18;O\xb9)\xa7\x9c\xe4\xd8q\x1d\xcf\x80\xa0\xb9_C$\xde-D\x1c\x1e\x17\xe7\xc4\xace\xc0\x7fFTk\x8aL\x08\xfe0M>\x87\x0e\x19B\xe2\xad\x12Q!\xb7\'\x9drRZ\x9a\xe5\x01q\x05q\x15\xb4\xad\xd8\x12\xb1@\x88\xbf\x9f\xef3N\x97\xd8V>\x9d#\xee\xed\x9f\xac\xec\x06\xd1\xb9\x99n\xd5\xadT\x15\x9cY\xa9|\xa8\xc1P_x1N\x0c\xxb8zJ\x8b\xf1\x04\xadF\xa1\xa3\x82\x93\xceU\xdbf\x97\xc2$T2\x9c\x1b\xc8\x86\x18A\xf5FyW\xf8\xd0\xba\xb8\x12\xb8\xdeB\xf5\xcfzb\xd3\xfeA\x9b\r\xa4PB\xc4Qy!\xe0T\x14)\xfdb\xb2\x99w\x90\xde@\x0eg\xbb\xa6\r9\x96rd9\xe6\x868\xbe\x84/\t)gxRM=\xe4\x06\xa1\x\x92\xd5\xc0u`\xf15\x95\x05\x92ja\xe3\x80w\x95+\xc4c\xc8Kf/\xaf\xbd\xc4\xc9e\xba\xc4\xb9\xde\x9d\x1b\x96\x9d\x9b \xd6]\xe3Q\x6\xd7~\xe9H\xeb\x90\x88\xa9\n\x85\xcc\xad\x02\x04B\xd9\xca-\xffk&7\x98\xa3\xaf\xddsm\x0fr\x05\xf9=\x12^\xcf\xca\x92\x1cwa\x9fxfe\x9a\xd7T\x90%q\x1c\x17\x95Q\xe0n\xf46\x97\xdf\xa7q\x1b:\x88\x98\xfbxu\x8d*~h\r<\xcf\x7f\xb0\xd8\xd6\xca\x8b}\'G\xdfj\xfd7cb\xc4K\x9b3\xb9\xd9F\xe3\xfa\xc4/\x1fs\xc8\x8c\x11\xde\xd8w\xd9\xee\xd6=|\x12 ?\x9f\xc8\xc2\xa9\xd6\x8b\x0e\xc2\xeaIS\xb1\xexdd\xa5m\xa6\x93\x92\x9a\x1ce\x93S\xadln\xe3\xa2\xc0\x82M\xe3:\xc7\xaa\x9e\xd4\x99{%9\xd5\x1bw\xd4c}\xd7p\xaf\xee\xadx\'H\xcc0?>\xd1\x17\xa2g\xaa\xde\xf6t!{\xd7\xc7\xf5b\xe4\xf45\xa8(\xd0\xdc\xbf\x86\xff\xf9\xc9\xfc\x9b\xc2\xe2@\x0b\x8bm\x06\x98@\xfaa1\xbf_5\xc0s\x9f\xfc\xf3\xb2\xe0\x14\xb04\xa8\xe2\x8eck\xfer\xe2\x81\x8a\x9a\xf2\xbai\xd6\x13G\x8b\xe4}</\xe3\xd9=\xdb\n\xc2\xfd\x14\xf1T5\x02VX\xbea.\x98q\xf9\r\x15,\xe4\xc6g\xf2\x83\xf63Az_ef\x1d\x95,\xc43 
\x16E\xca9b\x83JAa\xd5?\x0b\xf0\x7f\xfeY\\x9e,\xd7lH\xc4&Z9Q^\x1e\xbf\x1c\xdbt\x00\xbe\xaf7\xa9\'^MH\xf1\xa3\xd7W[\xbf\x9b\xe0\x00\xce\xa3\x18\x1cz\x1f\xeaV?\xab\x8d-97#\x8e\x08\xd8\xc9\x0cd&9.\xb0\x9d\x13\x03\xe2N<\x0b\xdf\x95\x9e\xa9\xe5R\xac\x1201\xb0"\xe8v]\x89\x0ez~\x1de\x91\xa6\xcd\xfa8\x9d2\xe8|\x02\xe0\xb1\r\xf5\x99N/\x16\xf1ky\xfc\xb5\xf4\xf5\xc3VQ=k\xee\xb8\x8fg\x9c,\x85yu\x05C\xc3\xe5!\x14>\xee,(y\xd8\xf8-\x13\xba\xc2\xf6\x18\xfe\x9c\x10\x15_\x80\xffE~g\x96a\x91\xaf\x1f\x8a1\x12A\x05\xa6T\x01\xa0e\x9e\x0c\x9b\x9b\xc2\xd3\xd7dcg\xd8\nk\xe8n\x1d\x8c\xb1%\xb7\x8bl\xc0]F\xf4X\xe7\x8fE3K\xe3\x06\xa0d\x08\x98\xb4\xb8\x0c\xa7\xc2\xa3O\x93\xcc\xc2PC\x86J\ef\xfd|\xa8\x15__U\x87\r\xae\xf8\x97\x92\xd19\x81s?U\x01\x01\x9f\xe0&\x9f\x99\x87\x7f\x8a\x84\x08n]\xc4\x00\xd6|\x1e-\x83\x90F\x8b\xc0\xcd\xa2+\\\x9b.z\xf1\x1b\xe6G\xe1lscV\x00\x87\x9e\xf1\x93\xb5\xe9\xcb\x164\x140g\xd0\xb9\x1d5\xc7\x7f/\xdc\xb6{|\xcb\xff\x95\xb1\xa8mp\xec\xcb;\x8aM\x11&\xaf\xa3\xe6\r}\xc6K\xd9w\xe3\x99\xc4\rQ\x93A.\x19\xb1:\xec\x1e\xbd{},\x1f\xfe\x10\x984\x7f\xe3\x10\xe9\x85K\x9d\xf0\xa3\x9a\xf3\x85\xf9\xce\xbc*h\x10\xc2\xf9\x8c/\r\x84\xf5\xdf%{iI7&\xf6\x08\x14M]y\xe9\xb0VH\xe3f9\x08\'\xfd]T\xcd\xf8Ey\xc6\xd8"@cq>\xa6\x12d\xbb\xd2\x92uw:#\xe2\xaf\x19\x01\x7f\xe92X\x8f\xad\xe2hO\xf6\x14\xc2c\xee\x8a\x\x83\x0e\x15\xda`}\xa5\xc9\xcbM\xc3\xff\x15\xa0\x9bt\xb9\x8cWwL\x91\xbd\x00\xcdA\nK\\K/\xd2~p{\xf6\xe4\xaav\x07X\n\xef\xfe\x8xc2\x08h\xf3\xc3\xf1\xd5l\xe4\xf5,[\xa0-?\x9b\x12\x99\xaf\xb5\xd30\xc6K\xd3\xf0A\x93e\xf9\xf3\x07\xe0\xe2\x9b\xc3)\x00\xac6\xx1a\x8e\xc5C"\x8a\x0c\xa9\xc6\xe4\xe9\xf4\xc6Sz%L\xe5\xb6f\x86\x9e\x03b\x08\xb0\x86\xc2\x1b\xe4\x9b\x1f\xfb\xa8]fb$\xae\xb3f~ea\xa8\xd4\x99\xea\xb7\xd4J\x9c\xb7\xcd\x10\xa5#\xd8>\xcde\x9a\x9f\x10\xef)\xe1\xfb,\xf3\xee0\xa9\xa4\xe2f\xa5_y\xa7\xb6\x8b)xae0"\xcc\x01m\xe8\xe4 R\x8c\xc6.v\x8c\xdc\x98\xbc\xe5\xf4\xc8\xaa\xc2\xc6\x11i\xa7\xcc\xc9\x10\x9c\xeb\x96\xc4\xd4\xd0\xd0C\b8\xc2\xac\xdb\xad\xda\x86\xde\x0cVc\xea\xfe\xbb-?:\xbb\xf4|\xb1yi\xfb\xafw\xed\xa3]:y(\xa7\xe9etN\xf9cG\x1dux\xad\\\x8c\x84\\x9b\xf4\xa91\xd7\xf2\xc2\x0f\xf1\xd8\x8a~\xee\x17\xa4\x05\x7f\x0ce-O\xd6\xa9\x95\xa3\xe9\xebu\nd\xdc\t\xaa~OU\xd8\x8c\xfa\xb5\x04V"\x96\x8d\x87\x92\xbd\x90\xa4\xbb\x80\x96\x1dG\xb2NDzJBt\xa9\xf8\xcc\xf5\x8c\x1e\x11fP\xba\xbe\xf64"s\xd6$\xc9T\xda)\xd
|-> decrypted record | <SSL records=[<TLSRecord content_type=application_data version=TLS_1\xe1\xd5' padding='\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b' padding_len=0xb |>>, <TLSRecord content_type=application_daml\r\n\r\n<HTML><BODY BGCOLOR="#ffffff">\n<pre>\n\ns_server -accept 443 -cert openssl_1_0_1_f_server.pem -tls1 -cipher AES128v3:AES128-SHA \n---\nCiphers common between both SSL end points:\nECDHE-RSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES25 ECDHE-ECDSA-AES256-SHA \nDHE-DSS-AES256-GCM-SHA384 DHE-RSA-AES256-GCM-SHA384 DHE-RSA-AES256-SHA256 \nDHE-DSS-AES256 DHE-DSS-CAMELLIA256-SHA ECDH-RSA-AES256-GCM-SHA384\nECDH-ECDSA-AES256-GCM-SHA384 ECDH-RSA-AES256-SHA384 ECDH-ECDSA-AE \nAES256-SHA256 AES256-SHA CAMELLIA256-SHA \nECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA- ECDHE-ECDSA-AES128-SHA \nDHE-DSS-AES128-GCM-SHA256 DHE-RSA-AES128-GCM-SHA256 DHE-RSA-AES128-SHA256 \nDHE-DSS-A DHE-DSS-SEED-SHA DHE-RSA-CAMELLIA128-SHA \nDHE-DSS-CAMELLIA128-SHA ECDH-RSA-AES128-GCM-SHA256 ECDH-ECDSA \nECDH-ECDSA-AES128-SHA AES128-GCM-SHA256 AES128-SHA256 \nAES128-SHA SEED-SHA-SHA EDH-RSA-DES-CBC3-SHA \nEDH-DSS-DES-CBC3-SHA ECDH-RSA-DES-CBC3-SHA ECDH-ECDSA-DES-CBC3-SHA \nDES-CBC3pher : AES128-SHA\n Session-ID: B458EC666AFAA53646D82C073DB13A791250C03422D4FE8865437DE1AD5DDF31\n Session-ID-ctx: 0E6652DAF255AFACF0E16C286A8D\n Key-Arg : None\n PSK identity: None\n PSK identity hint: None\n SRP username: Non\n 1 items in the session cache\n 0 client connects (SSL_connect())\n 0 client renegotiates (SSL_connect())\n 0 cliencept())\n 1 server accepts that finished\n 0 session cache hits\n 0 session cache misses\n 0 session cache timeouts\navailable\n</BODY></HTML>\r\n\r\n' mac='\x97$\x1a\x18\x12B\r6,d\xb0\x9fMq\xdd\xe6\xd2\\\n\xe7' padding='\x08\x08\x08\x08\x08\
| 192.168.220.1 :54908 => 192.168.220.131 :443 | <SSL records=[<TLSRecord content_type=alert version=TLS_1_0 length=04\xa0\x07N^v\xa83kh\xc0\xfd\xe9' |>>>] |>
|-> decrypted record | <SSL records=[<TLSRecord content_type=alert version=TLS_1_0 length=0an\xfbZ\xf5\x82\x16' padding='\t\t\t\t\t\t\t\t\t' padding_len=0x9 |>>] |>
```
##### SSL Security Scanner
Active Scanner:
```python
# python examples/security_scanner.py active localhost 443
An example implementation of a passive TLS security scanner with custom starttls support:
TLSScanner() generates TLS probe traffic (optional)
TLSInfo() passively evaluates the traffic and generates events/warning
Scanning with 10 parallel threads...
=> accepted_ciphersuites
=> accepted_ciphersuites_ssl2
=> compressions
=> heartbleed
=> poodle2
=> scsv
=> secure_renegotiation
=> supported_protocol_versions
[*] Capabilities (Debug)
<TLSInfo
packets.processed: 403
client.versions: set([])
client.ciphers: set([])
client.compressions: set([])
client.preferred_ciphers: set([])
client.sessions_established: 0
client.heartbeat: None
server.versions: set([768, 769, 770, 771])
server.ciphers: set([65, 132, 3, 4, 5, 6, 8, 9, 10, 47, 136, 51, 20, 21, 22, 150, 57, 154, 159, 69, 53])
server.compressions: set([0])
server.sessions_established: 0
server.fallback_scsv: False
server.heartbeat: 1
server.certificates: set([<TLSCertificateList length=0x2d7 certificates=[<TLSCertificate length=0x2d4 data=<X509Cert version=<ASN1_INTEGER[2L]> sn=<ASN1_INTEGER[14155341744006398450L]> sign_algo=<ASN1_OID['.1.2.840.113549.1.1.5']> sa_value=<ASN1_NULL[0L]> issuer=[<X509RDN oid=<ASN1_OID['.2.5.4.3']> value=<ASN1_PRINTABLE_STRING['localhost.localdomain']> |>] not_before=<ASN1_UTC_TIME['130425105002Z']> not_after=<ASN1_UTC_TIME['230423105002Z']> subject=[<X509RDN oid=<ASN1_OID['.2.5.4.3']> value=<ASN1_PRINTABLE_STRING['localhost.localdomain']> |>] pubkey_algo=<ASN1_OID['.1.2.840.113549.1.1.1']> pk_value=<ASN1_NULL[0L]> pubkey=<ASN1_BIT_STRING["\x000\x82\x01\n\x02\x82\x01\x01\x00\xdcS\xa3%U\r\xe0\xb3\xab5=$'\x8d\x13\x95cp\x0c\xe2p\xb5\x0e\xe3J\x1fy\x7f\x876\x9cH\xd8Z\x8e\x1c\x04\xc4C\x8e<\x1a\xd1\x90\xbdm\xaa\x08ku<Tw\t\xbd{\xb7wZm\x9cmW\\o\x9dw\xdf\xa3\xe7}\xac!:\x150\xb7\x98lCA\xec\x18\x97\xba#B\x8b\xa1c\xd8aw\xbb\xc6\xc4\x0fbs\x87eT<E\xbf\r\x92\xfc\x8b}7b7\xf12\x19(\x95y+\x12oiW4\xd7\xf5\x06\xf2G\xf2\x15\xfc\xf6\xa6Y\x83\x11\xc7P\\'\x8b\xd2\x96\xd0\xa2\xb51\xb3\x00N\xb9s\\\x03\x95\xb0\x12\xe1l\x9d\x83\x92uU\x9d\xbd\xdct}@6\r\xbb\xc9\xea@S\xf4D\xbe\x93\x99`xUjF.M\xd8\xbc\xfc\xdb 1\xaa{;\xf3\xec)1\xa9\xe4\xfapl\x18\x07O\x88Y\xc8\xed\xb63\xf2\x7f\xe2~g\xe7\xf9\xc4L\x9d\xcbg\xda\xdf\x1e5\xb3C\x07\xeav\xf0\x13m]\x94\xdaY\xc8\xc3?\x99\xb6\xb6\xb5\xc5bM\x02\x03\x01\x00\x01"]> x509v3ext=[<X509v3Ext val=<ASN1_SEQUENCE[[<ASN1_OID['.2.5.29.19']>, <ASN1_STRING['0\x00']>]]> |>] sign_algo2=<ASN1_OID['.1.2.840.113549.1.1.5']> sa2_value=<ASN1_NULL[0L]> signature=<ASN1_BIT_STRING['\x00X\xaf\xa2B\xb4c\x83}S\x06\x07\xb7\xb6\xa4nT\xeeAS\xe0\x93\x81\x820\x9c\x92\x16\xb3H\xd0\x11Z\x02\\g|\x9f\x0b\x8f\x96\x82\x1a3\x8d\xe1.3\xcd\xe9\xc2K\x990\x8c\x98\x1b\xf6\x03\x1a\x06\xc2l2\xcb+x$-\xd8J9\xae\xc8\xdd\x8a\x7f8\x1e\xf9z\x10\xdd\xf9\x88s\xf5\xd1\xf3i\x7f\x8d\xbahU{]\x9bTu\x81T\xda\x0e`\x86\xd1\xbb\xe4\x98\xb2\r\xa2\x9a9N\xedmOw1I\xe4\xe3GCw\xad\xa2\xe7\x18\x8d"\xb7\x8c~B\xce\xba\xfc+\x8a\x81$\xdb\xc33\x01a\xd8\x9al\xack\x07\xbe\x18f2\x13\xa8\xc2\xf2\xa4\xcb\x86x\xd2\xa9\xf2\xef\xb3\x14<\xb10\x91W\xbfA_F\x81\xe8A\x8ac\xa9\n\x82\n\n\x93\xfd7\xb3Z\xe9\xab\x18\xc0=\x96\x84\x02?UC\xb6\x0ep\xfa\x19\xa6\xfcbM\x9d\x00\xa1\x03`\x0c\xbe\xda;+`\x13\xd6\xbaly\xeb\x02\xf7Mr\x9a\x00\xc1W7~\x89^6I\x1fj5u\xa8 r;\x8d']> |> |>] |>, ... (further identical TLSCertificateList entries omitted for brevity) ...])
>
[*] supported ciphers: 34/326
* SSLv2_RC4_128_EXPORT40_WITH_MD5 (0x20080)
* ECDH_anon_WITH_RC4_128_SHA (0xc016)
* RSA_EXPORT_WITH_RC4_40_MD5 (0x0003)
* RSA_WITH_CAMELLIA_256_CBC_SHA (0x0084)
* RSA_WITH_RC4_128_SHA (0x0005)
* RSA_EXPORT_WITH_RC2_CBC_40_MD5 (0x0006)
* RSA_WITH_IDEA_CBC_SHA (0x0007)
* RSA_EXPORT_WITH_DES40_CBC_SHA (0x0008)
* RSA_WITH_DES_CBC_SHA (0x0009)
* RSA_WITH_3DES_EDE_CBC_SHA (0x000a)
* ECDH_anon_WITH_3DES_EDE_CBC_SHA (0xc017)
* ECDHE_RSA_WITH_RC4_128_SHA (0xc011)
* ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (0xc012)
* ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
* DHE_RSA_EXPORT_WITH_DES40_CBC_SHA (0x0014)
* DHE_RSA_WITH_DES_CBC_SHA (0x0015)
* DHE_RSA_WITH_3DES_EDE_CBC_SHA (0x0016)
* ECDH_anon_WITH_AES_256_CBC_SHA (0xc019)
* ECDH_anon_WITH_AES_128_CBC_SHA (0xc018)
* RSA_WITH_RC4_128_MD5 (0x0004)
* DHE_RSA_WITH_SEED_CBC_SHA (0x009a)
* RSA_WITH_SEED_CBC_SHA (0x0096)
* DHE_RSA_WITH_AES_256_GCM_SHA384 (0x009f)
* SSLv2_RC2_CBC_128_CBC_WITH_MD5 (0x40080)
* RSA_WITH_AES_128_CBC_SHA (0x002f)
* DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (0x0088)
* DHE_RSA_WITH_AES_128_CBC_SHA (0x0033)
* RSA_WITH_AES_256_CBC_SHA (0x0035)
* DHE_RSA_WITH_AES_256_CBC_SHA (0x0039)
* SSLv2_DES_64_CBC_WITH_MD5 (0x60040)
* RSA_WITH_CAMELLIA_128_CBC_SHA (0x0041)
* DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (0x0045)
* SSLv2_RC4_128_WITH_MD5 (0x10080)
* ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
[*] supported protocol versions: 5/8
* SSL_3_0 (0x0300)
* TLS_1_0 (0x0301)
* SSL_2_0 (0x0002)
* TLS_1_1 (0x0302)
* TLS_1_2 (0x0303)
[*] supported compressions methods: 1/3
* NULL (0x0000)
[*] Events: 16
* EVENT - HEARTBLEED - vulnerable
* EVENT - DROWN - SSLv2 with EXPORT ciphers enabled
* EVENT - CIPHERS - Export ciphers enabled
* EVENT - CIPHERS - RC4 ciphers enabled
* EVENT - CIPHERS - MD5 ciphers enabled
* EVENT - FREAK - server supports RSA_EXPORT cipher suites
* EVENT - LOGJAM - server supports weak DH-Group (512) (DHE_*_EXPORT) cipher suites
* EVENT - PROTOCOL VERSION - SSLv2 supported
* EVENT - PROTOCOL VERSION - SSLv3 supported
* EVENT - HEARTBEAT - enabled (non conclusive heartbleed)
* EVENT - INSUFFICIENT SERVER CERT PUBKEY SIZE - 2048 >= 640 bits
* EVENT - SUSPICIOUS SERVER CERT PUBKEY SIZE - 640 not a multiple of 2048 bits
* EVENT - SERVER CERT PUBKEY FACTORED - trivial private_key recovery possible due to known factors n = p x q. See https://en.wikipedia.org/wiki/RSA_numbers | grep 3107418240490043721350750035888567930037346022842727545720161948823206440518081504556346829671723286782437916272838033415471073108501919548529007337724822783525742386454014691736602477652346609
* EVENT - DOWNGRADE / POODLE - FALLBACK_SCSV - not honored
* EVENT - TLS EXTENSION SECURE RENEGOTIATION - not supported
* EVENT - HEARTBEAT - enabled (non conclusive heartbleed)
Scan took: 30.60623884201s
```
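The hexadecimal IDs shown next to each cipher suite are IANA TLS code points. As a minimal sketch (assuming the `TLS_CIPHER_SUITE_REGISTRY` mapping shipped in `scapy_ssl_tls.ssl_tls_registry`), such a code point can be resolved to its name as follows; note that the SSLv2-only identifiers in the list (e.g. `0x20080`) use a separate numbering scheme and are not part of this registry:
```python
from scapy_ssl_tls.ssl_tls_registry import TLS_CIPHER_SUITE_REGISTRY

# e.g. 0xc014 as reported above -> 'ECDHE_RSA_WITH_AES_256_CBC_SHA'
print(TLS_CIPHER_SUITE_REGISTRY.get(0xc014, "unknown cipher suite"))
```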
Passive Scanner:
```python
# python examples/security_scanner.py sniff 192.168.139.131 443
An example implementation of a passive TLS security scanner with custom starttls support:
TLSScanner() generates TLS probe traffic (optional)
TLSInfo() passively evaluates the traffic and generates events/warning
[*] [passive] Scanning in 'sniff' mode...
Connection: 192.168.139.1:1364 <==> 192.168.139.131:443
* EVENT - CRIME - client supports compression
* EVENT - SLOTH - client announces capability of signature/hash algorithm: RSA/sha1
Connection: 192.168.139.131:443 <==> 192.168.139.1:1364
* EVENT - CRIME - client supports compression
* EVENT - SLOTH - client announces capability of signature/hash algorithm: RSA/sha1
Connection: 192.168.139.131:443 <==> 192.168.139.1:1364
* EVENT - CRIME - client supports compression
* EVENT - SLOTH - client announces capability of signature/hash algorithm: RSA/sha1
* EVENT - CRIME - server supports compression
* EVENT - INSUFFICIENT SERVER CERT PUBKEY SIZE - 2048 >= 640 bits
* EVENT - SUSPICIOUS SERVER CERT PUBKEY SIZE - 640 not a multiple of 2048 bits
* EVENT - SERVER CERT PUBKEY FACTORED - trivial private_key recovery possible due to known factors n = p x q. See https://en.wikipedia.org/wiki/RSA_numbers | grep 3107418240490043721350750035888567930037346022842727545720161948823206440518081504556346829671723286782437916272838033415471073108501919548529007337724822783525742386454014691736602477652346609
* EVENT - HEARTBEAT - enabled (non conclusive heartbleed)
Connection: 192.168.139.1:1364 <==> 192.168.139.131:443
```
## Authors / Contributors
* tintinweb ( http://oststrom.com | https://github.com/tintinweb)
* alexmgr ( https://github.com/alexmgr )
|
/scapy-ssl_tls-2.0.0.tar.gz/scapy-ssl_tls-2.0.0/README.md
| 0.673192 | 0.816772 |
README.md
|
pypi
|
TLS_CLIENTCERTIFICATETYPE_IDENTIFIERS_REGISTRY = {
0x00: 'Unassigned',
0x01: 'rsa_sign',
0x02: 'dss_sign',
0x03: 'rsa_fixed_dh',
0x04: 'dss_fixed_dh',
0x05: 'rsa_ephemeral_dh_RESERVED',
0x06: 'dss_ephemeral_dh_RESERVED',
0x14: 'fortezza_dms_RESERVED',
0x40: 'ecdsa_sign',
0x41: 'rsa_fixed_ecdh',
0x42: 'ecdsa_fixed_ecdh',
}
TLS_CIPHER_SUITE_REGISTRY = {
0x0000: 'NULL_WITH_NULL_NULL',
0x0001: 'RSA_WITH_NULL_MD5',
0x0002: 'RSA_WITH_NULL_SHA',
0x0003: 'RSA_EXPORT_WITH_RC4_40_MD5',
0x0004: 'RSA_WITH_RC4_128_MD5',
0x0005: 'RSA_WITH_RC4_128_SHA',
0x0006: 'RSA_EXPORT_WITH_RC2_CBC_40_MD5',
0x0007: 'RSA_WITH_IDEA_CBC_SHA',
0x0008: 'RSA_EXPORT_WITH_DES40_CBC_SHA',
0x0009: 'RSA_WITH_DES_CBC_SHA',
0x000a: 'RSA_WITH_3DES_EDE_CBC_SHA',
0x000b: 'DH_DSS_EXPORT_WITH_DES40_CBC_SHA',
0x000c: 'DH_DSS_WITH_DES_CBC_SHA',
0x000d: 'DH_DSS_WITH_3DES_EDE_CBC_SHA',
0x000e: 'DH_RSA_EXPORT_WITH_DES40_CBC_SHA',
0x000f: 'DH_RSA_WITH_DES_CBC_SHA',
0x0010: 'DH_RSA_WITH_3DES_EDE_CBC_SHA',
0x0011: 'DHE_DSS_EXPORT_WITH_DES40_CBC_SHA',
0x0012: 'DHE_DSS_WITH_DES_CBC_SHA',
0x0013: 'DHE_DSS_WITH_3DES_EDE_CBC_SHA',
0x0014: 'DHE_RSA_EXPORT_WITH_DES40_CBC_SHA',
0x0015: 'DHE_RSA_WITH_DES_CBC_SHA',
0x0016: 'DHE_RSA_WITH_3DES_EDE_CBC_SHA',
0x0017: 'DH_anon_EXPORT_WITH_RC4_40_MD5',
0x0018: 'DH_anon_WITH_RC4_128_MD5',
0x0019: 'DH_anon_EXPORT_WITH_DES40_CBC_SHA',
0x001a: 'DH_anon_WITH_DES_CBC_SHA',
0x001b: 'DH_anon_WITH_3DES_EDE_CBC_SHA',
0x001e: 'KRB5_WITH_DES_CBC_SHA',
0x001f: 'KRB5_WITH_3DES_EDE_CBC_SHA',
0x0020: 'KRB5_WITH_RC4_128_SHA',
0x0021: 'KRB5_WITH_IDEA_CBC_SHA',
0x0022: 'KRB5_WITH_DES_CBC_MD5',
0x0023: 'KRB5_WITH_3DES_EDE_CBC_MD5',
0x0024: 'KRB5_WITH_RC4_128_MD5',
0x0025: 'KRB5_WITH_IDEA_CBC_MD5',
0x0026: 'KRB5_EXPORT_WITH_DES_CBC_40_SHA',
0x0027: 'KRB5_EXPORT_WITH_RC2_CBC_40_SHA',
0x0028: 'KRB5_EXPORT_WITH_RC4_40_SHA',
0x0029: 'KRB5_EXPORT_WITH_DES_CBC_40_MD5',
0x002a: 'KRB5_EXPORT_WITH_RC2_CBC_40_MD5',
0x002b: 'KRB5_EXPORT_WITH_RC4_40_MD5',
0x002c: 'PSK_WITH_NULL_SHA',
0x002d: 'DHE_PSK_WITH_NULL_SHA',
0x002e: 'RSA_PSK_WITH_NULL_SHA',
0x002f: 'RSA_WITH_AES_128_CBC_SHA',
0x0030: 'DH_DSS_WITH_AES_128_CBC_SHA',
0x0031: 'DH_RSA_WITH_AES_128_CBC_SHA',
0x0032: 'DHE_DSS_WITH_AES_128_CBC_SHA',
0x0033: 'DHE_RSA_WITH_AES_128_CBC_SHA',
0x0034: 'DH_anon_WITH_AES_128_CBC_SHA',
0x0035: 'RSA_WITH_AES_256_CBC_SHA',
0x0036: 'DH_DSS_WITH_AES_256_CBC_SHA',
0x0037: 'DH_RSA_WITH_AES_256_CBC_SHA',
0x0038: 'DHE_DSS_WITH_AES_256_CBC_SHA',
0x0039: 'DHE_RSA_WITH_AES_256_CBC_SHA',
0x003a: 'DH_anon_WITH_AES_256_CBC_SHA',
0x003b: 'RSA_WITH_NULL_SHA256',
0x003c: 'RSA_WITH_AES_128_CBC_SHA256',
0x003d: 'RSA_WITH_AES_256_CBC_SHA256',
0x003e: 'DH_DSS_WITH_AES_128_CBC_SHA256',
0x003f: 'DH_RSA_WITH_AES_128_CBC_SHA256',
0x0040: 'DHE_DSS_WITH_AES_128_CBC_SHA256',
0x0041: 'RSA_WITH_CAMELLIA_128_CBC_SHA',
0x0042: 'DH_DSS_WITH_CAMELLIA_128_CBC_SHA',
0x0043: 'DH_RSA_WITH_CAMELLIA_128_CBC_SHA',
0x0044: 'DHE_DSS_WITH_CAMELLIA_128_CBC_SHA',
0x0045: 'DHE_RSA_WITH_CAMELLIA_128_CBC_SHA',
0x0046: 'DH_anon_WITH_CAMELLIA_128_CBC_SHA',
0x0067: 'DHE_RSA_WITH_AES_128_CBC_SHA256',
0x0068: 'DH_DSS_WITH_AES_256_CBC_SHA256',
0x0069: 'DH_RSA_WITH_AES_256_CBC_SHA256',
0x006a: 'DHE_DSS_WITH_AES_256_CBC_SHA256',
0x006b: 'DHE_RSA_WITH_AES_256_CBC_SHA256',
0x006c: 'DH_anon_WITH_AES_128_CBC_SHA256',
0x006d: 'DH_anon_WITH_AES_256_CBC_SHA256',
0x0084: 'RSA_WITH_CAMELLIA_256_CBC_SHA',
0x0085: 'DH_DSS_WITH_CAMELLIA_256_CBC_SHA',
0x0086: 'DH_RSA_WITH_CAMELLIA_256_CBC_SHA',
0x0087: 'DHE_DSS_WITH_CAMELLIA_256_CBC_SHA',
0x0088: 'DHE_RSA_WITH_CAMELLIA_256_CBC_SHA',
0x0089: 'DH_anon_WITH_CAMELLIA_256_CBC_SHA',
0x008a: 'PSK_WITH_RC4_128_SHA',
0x008b: 'PSK_WITH_3DES_EDE_CBC_SHA',
0x008c: 'PSK_WITH_AES_128_CBC_SHA',
0x008d: 'PSK_WITH_AES_256_CBC_SHA',
0x008e: 'DHE_PSK_WITH_RC4_128_SHA',
0x008f: 'DHE_PSK_WITH_3DES_EDE_CBC_SHA',
0x0090: 'DHE_PSK_WITH_AES_128_CBC_SHA',
0x0091: 'DHE_PSK_WITH_AES_256_CBC_SHA',
0x0092: 'RSA_PSK_WITH_RC4_128_SHA',
0x0093: 'RSA_PSK_WITH_3DES_EDE_CBC_SHA',
0x0094: 'RSA_PSK_WITH_AES_128_CBC_SHA',
0x0095: 'RSA_PSK_WITH_AES_256_CBC_SHA',
0x0096: 'RSA_WITH_SEED_CBC_SHA',
0x0097: 'DH_DSS_WITH_SEED_CBC_SHA',
0x0098: 'DH_RSA_WITH_SEED_CBC_SHA',
0x0099: 'DHE_DSS_WITH_SEED_CBC_SHA',
0x009a: 'DHE_RSA_WITH_SEED_CBC_SHA',
0x009b: 'DH_anon_WITH_SEED_CBC_SHA',
0x009c: 'RSA_WITH_AES_128_GCM_SHA256',
0x009d: 'RSA_WITH_AES_256_GCM_SHA384',
0x009e: 'DHE_RSA_WITH_AES_128_GCM_SHA256',
0x009f: 'DHE_RSA_WITH_AES_256_GCM_SHA384',
0x00a0: 'DH_RSA_WITH_AES_128_GCM_SHA256',
0x00a1: 'DH_RSA_WITH_AES_256_GCM_SHA384',
0x00a2: 'DHE_DSS_WITH_AES_128_GCM_SHA256',
0x00a3: 'DHE_DSS_WITH_AES_256_GCM_SHA384',
0x00a4: 'DH_DSS_WITH_AES_128_GCM_SHA256',
0x00a5: 'DH_DSS_WITH_AES_256_GCM_SHA384',
0x00a6: 'DH_anon_WITH_AES_128_GCM_SHA256',
0x00a7: 'DH_anon_WITH_AES_256_GCM_SHA384',
0x00a8: 'PSK_WITH_AES_128_GCM_SHA256',
0x00a9: 'PSK_WITH_AES_256_GCM_SHA384',
0x00aa: 'DHE_PSK_WITH_AES_128_GCM_SHA256',
0x00ab: 'DHE_PSK_WITH_AES_256_GCM_SHA384',
0x00ac: 'RSA_PSK_WITH_AES_128_GCM_SHA256',
0x00ad: 'RSA_PSK_WITH_AES_256_GCM_SHA384',
0x00ae: 'PSK_WITH_AES_128_CBC_SHA256',
0x00af: 'PSK_WITH_AES_256_CBC_SHA384',
0x00b0: 'PSK_WITH_NULL_SHA256',
0x00b1: 'PSK_WITH_NULL_SHA384',
0x00b2: 'DHE_PSK_WITH_AES_128_CBC_SHA256',
0x00b3: 'DHE_PSK_WITH_AES_256_CBC_SHA384',
0x00b4: 'DHE_PSK_WITH_NULL_SHA256',
0x00b5: 'DHE_PSK_WITH_NULL_SHA384',
0x00b6: 'RSA_PSK_WITH_AES_128_CBC_SHA256',
0x00b7: 'RSA_PSK_WITH_AES_256_CBC_SHA384',
0x00b8: 'RSA_PSK_WITH_NULL_SHA256',
0x00b9: 'RSA_PSK_WITH_NULL_SHA384',
0x00ba: 'RSA_WITH_CAMELLIA_128_CBC_SHA256',
0x00bb: 'DH_DSS_WITH_CAMELLIA_128_CBC_SHA256',
0x00bc: 'DH_RSA_WITH_CAMELLIA_128_CBC_SHA256',
0x00bd: 'DHE_DSS_WITH_CAMELLIA_128_CBC_SHA256',
0x00be: 'DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256',
0x00bf: 'DH_anon_WITH_CAMELLIA_128_CBC_SHA256',
0x00c0: 'RSA_WITH_CAMELLIA_256_CBC_SHA256',
0x00c1: 'DH_DSS_WITH_CAMELLIA_256_CBC_SHA256',
0x00c2: 'DH_RSA_WITH_CAMELLIA_256_CBC_SHA256',
0x00c3: 'DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256',
0x00c4: 'DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256',
0x00c5: 'DH_anon_WITH_CAMELLIA_256_CBC_SHA256',
0x00ff: 'EMPTY_RENEGOTIATION_INFO_SCSV',
0x5600: 'FALLBACK_SCSV',
0xc001: 'ECDH_ECDSA_WITH_NULL_SHA',
0xc002: 'ECDH_ECDSA_WITH_RC4_128_SHA',
0xc003: 'ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA',
0xc004: 'ECDH_ECDSA_WITH_AES_128_CBC_SHA',
0xc005: 'ECDH_ECDSA_WITH_AES_256_CBC_SHA',
0xc006: 'ECDHE_ECDSA_WITH_NULL_SHA',
0xc007: 'ECDHE_ECDSA_WITH_RC4_128_SHA',
0xc008: 'ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA',
0xc009: 'ECDHE_ECDSA_WITH_AES_128_CBC_SHA',
0xc00a: 'ECDHE_ECDSA_WITH_AES_256_CBC_SHA',
0xc00b: 'ECDH_RSA_WITH_NULL_SHA',
0xc00c: 'ECDH_RSA_WITH_RC4_128_SHA',
0xc00d: 'ECDH_RSA_WITH_3DES_EDE_CBC_SHA',
0xc00e: 'ECDH_RSA_WITH_AES_128_CBC_SHA',
0xc00f: 'ECDH_RSA_WITH_AES_256_CBC_SHA',
0xc010: 'ECDHE_RSA_WITH_NULL_SHA',
0xc011: 'ECDHE_RSA_WITH_RC4_128_SHA',
0xc012: 'ECDHE_RSA_WITH_3DES_EDE_CBC_SHA',
0xc013: 'ECDHE_RSA_WITH_AES_128_CBC_SHA',
0xc014: 'ECDHE_RSA_WITH_AES_256_CBC_SHA',
0xc015: 'ECDH_anon_WITH_NULL_SHA',
0xc016: 'ECDH_anon_WITH_RC4_128_SHA',
0xc017: 'ECDH_anon_WITH_3DES_EDE_CBC_SHA',
0xc018: 'ECDH_anon_WITH_AES_128_CBC_SHA',
0xc019: 'ECDH_anon_WITH_AES_256_CBC_SHA',
0xc01a: 'SRP_SHA_WITH_3DES_EDE_CBC_SHA',
0xc01b: 'SRP_SHA_RSA_WITH_3DES_EDE_CBC_SHA',
0xc01c: 'SRP_SHA_DSS_WITH_3DES_EDE_CBC_SHA',
0xc01d: 'SRP_SHA_WITH_AES_128_CBC_SHA',
0xc01e: 'SRP_SHA_RSA_WITH_AES_128_CBC_SHA',
0xc01f: 'SRP_SHA_DSS_WITH_AES_128_CBC_SHA',
0xc020: 'SRP_SHA_WITH_AES_256_CBC_SHA',
0xc021: 'SRP_SHA_RSA_WITH_AES_256_CBC_SHA',
0xc022: 'SRP_SHA_DSS_WITH_AES_256_CBC_SHA',
0xc023: 'ECDHE_ECDSA_WITH_AES_128_CBC_SHA256',
0xc024: 'ECDHE_ECDSA_WITH_AES_256_CBC_SHA384',
0xc025: 'ECDH_ECDSA_WITH_AES_128_CBC_SHA256',
0xc026: 'ECDH_ECDSA_WITH_AES_256_CBC_SHA384',
0xc027: 'ECDHE_RSA_WITH_AES_128_CBC_SHA256',
0xc028: 'ECDHE_RSA_WITH_AES_256_CBC_SHA384',
0xc029: 'ECDH_RSA_WITH_AES_128_CBC_SHA256',
0xc02a: 'ECDH_RSA_WITH_AES_256_CBC_SHA384',
0xc02b: 'ECDHE_ECDSA_WITH_AES_128_GCM_SHA256',
0xc02c: 'ECDHE_ECDSA_WITH_AES_256_GCM_SHA384',
0xc02d: 'ECDH_ECDSA_WITH_AES_128_GCM_SHA256',
0xc02e: 'ECDH_ECDSA_WITH_AES_256_GCM_SHA384',
0xc02f: 'ECDHE_RSA_WITH_AES_128_GCM_SHA256',
0xc030: 'ECDHE_RSA_WITH_AES_256_GCM_SHA384',
0xc031: 'ECDH_RSA_WITH_AES_128_GCM_SHA256',
0xc032: 'ECDH_RSA_WITH_AES_256_GCM_SHA384',
0xc033: 'ECDHE_PSK_WITH_RC4_128_SHA',
0xc034: 'ECDHE_PSK_WITH_3DES_EDE_CBC_SHA',
0xc035: 'ECDHE_PSK_WITH_AES_128_CBC_SHA',
0xc036: 'ECDHE_PSK_WITH_AES_256_CBC_SHA',
0xc037: 'ECDHE_PSK_WITH_AES_128_CBC_SHA256',
0xc038: 'ECDHE_PSK_WITH_AES_256_CBC_SHA384',
0xc039: 'ECDHE_PSK_WITH_NULL_SHA',
0xc03a: 'ECDHE_PSK_WITH_NULL_SHA256',
0xc03b: 'ECDHE_PSK_WITH_NULL_SHA384',
0xc03c: 'RSA_WITH_ARIA_128_CBC_SHA256',
0xc03d: 'RSA_WITH_ARIA_256_CBC_SHA384',
0xc03e: 'DH_DSS_WITH_ARIA_128_CBC_SHA256',
0xc03f: 'DH_DSS_WITH_ARIA_256_CBC_SHA384',
0xc040: 'DH_RSA_WITH_ARIA_128_CBC_SHA256',
0xc041: 'DH_RSA_WITH_ARIA_256_CBC_SHA384',
0xc042: 'DHE_DSS_WITH_ARIA_128_CBC_SHA256',
0xc043: 'DHE_DSS_WITH_ARIA_256_CBC_SHA384',
0xc044: 'DHE_RSA_WITH_ARIA_128_CBC_SHA256',
0xc045: 'DHE_RSA_WITH_ARIA_256_CBC_SHA384',
0xc046: 'DH_anon_WITH_ARIA_128_CBC_SHA256',
0xc047: 'DH_anon_WITH_ARIA_256_CBC_SHA384',
0xc048: 'ECDHE_ECDSA_WITH_ARIA_128_CBC_SHA256',
0xc049: 'ECDHE_ECDSA_WITH_ARIA_256_CBC_SHA384',
0xc04a: 'ECDH_ECDSA_WITH_ARIA_128_CBC_SHA256',
0xc04b: 'ECDH_ECDSA_WITH_ARIA_256_CBC_SHA384',
0xc04c: 'ECDHE_RSA_WITH_ARIA_128_CBC_SHA256',
0xc04d: 'ECDHE_RSA_WITH_ARIA_256_CBC_SHA384',
0xc04e: 'ECDH_RSA_WITH_ARIA_128_CBC_SHA256',
0xc04f: 'ECDH_RSA_WITH_ARIA_256_CBC_SHA384',
0xc050: 'RSA_WITH_ARIA_128_GCM_SHA256',
0xc051: 'RSA_WITH_ARIA_256_GCM_SHA384',
0xc052: 'DHE_RSA_WITH_ARIA_128_GCM_SHA256',
0xc053: 'DHE_RSA_WITH_ARIA_256_GCM_SHA384',
0xc054: 'DH_RSA_WITH_ARIA_128_GCM_SHA256',
0xc055: 'DH_RSA_WITH_ARIA_256_GCM_SHA384',
0xc056: 'DHE_DSS_WITH_ARIA_128_GCM_SHA256',
0xc057: 'DHE_DSS_WITH_ARIA_256_GCM_SHA384',
0xc058: 'DH_DSS_WITH_ARIA_128_GCM_SHA256',
0xc059: 'DH_DSS_WITH_ARIA_256_GCM_SHA384',
0xc05a: 'DH_anon_WITH_ARIA_128_GCM_SHA256',
0xc05b: 'DH_anon_WITH_ARIA_256_GCM_SHA384',
0xc05c: 'ECDHE_ECDSA_WITH_ARIA_128_GCM_SHA256',
0xc05d: 'ECDHE_ECDSA_WITH_ARIA_256_GCM_SHA384',
0xc05e: 'ECDH_ECDSA_WITH_ARIA_128_GCM_SHA256',
0xc05f: 'ECDH_ECDSA_WITH_ARIA_256_GCM_SHA384',
0xc060: 'ECDHE_RSA_WITH_ARIA_128_GCM_SHA256',
0xc061: 'ECDHE_RSA_WITH_ARIA_256_GCM_SHA384',
0xc062: 'ECDH_RSA_WITH_ARIA_128_GCM_SHA256',
0xc063: 'ECDH_RSA_WITH_ARIA_256_GCM_SHA384',
0xc064: 'PSK_WITH_ARIA_128_CBC_SHA256',
0xc065: 'PSK_WITH_ARIA_256_CBC_SHA384',
0xc066: 'DHE_PSK_WITH_ARIA_128_CBC_SHA256',
0xc067: 'DHE_PSK_WITH_ARIA_256_CBC_SHA384',
0xc068: 'RSA_PSK_WITH_ARIA_128_CBC_SHA256',
0xc069: 'RSA_PSK_WITH_ARIA_256_CBC_SHA384',
0xc06a: 'PSK_WITH_ARIA_128_GCM_SHA256',
0xc06b: 'PSK_WITH_ARIA_256_GCM_SHA384',
0xc06c: 'DHE_PSK_WITH_ARIA_128_GCM_SHA256',
0xc06d: 'DHE_PSK_WITH_ARIA_256_GCM_SHA384',
0xc06e: 'RSA_PSK_WITH_ARIA_128_GCM_SHA256',
0xc06f: 'RSA_PSK_WITH_ARIA_256_GCM_SHA384',
0xc070: 'ECDHE_PSK_WITH_ARIA_128_CBC_SHA256',
0xc071: 'ECDHE_PSK_WITH_ARIA_256_CBC_SHA384',
0xc072: 'ECDHE_ECDSA_WITH_CAMELLIA_128_CBC_SHA256',
0xc073: 'ECDHE_ECDSA_WITH_CAMELLIA_256_CBC_SHA384',
0xc074: 'ECDH_ECDSA_WITH_CAMELLIA_128_CBC_SHA256',
0xc075: 'ECDH_ECDSA_WITH_CAMELLIA_256_CBC_SHA384',
0xc076: 'ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256',
0xc077: 'ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384',
0xc078: 'ECDH_RSA_WITH_CAMELLIA_128_CBC_SHA256',
0xc079: 'ECDH_RSA_WITH_CAMELLIA_256_CBC_SHA384',
0xc07a: 'RSA_WITH_CAMELLIA_128_GCM_SHA256',
0xc07b: 'RSA_WITH_CAMELLIA_256_GCM_SHA384',
0xc07c: 'DHE_RSA_WITH_CAMELLIA_128_GCM_SHA256',
0xc07d: 'DHE_RSA_WITH_CAMELLIA_256_GCM_SHA384',
0xc07e: 'DH_RSA_WITH_CAMELLIA_128_GCM_SHA256',
0xc07f: 'DH_RSA_WITH_CAMELLIA_256_GCM_SHA384',
0xc080: 'DHE_DSS_WITH_CAMELLIA_128_GCM_SHA256',
0xc081: 'DHE_DSS_WITH_CAMELLIA_256_GCM_SHA384',
0xc082: 'DH_DSS_WITH_CAMELLIA_128_GCM_SHA256',
0xc083: 'DH_DSS_WITH_CAMELLIA_256_GCM_SHA384',
0xc084: 'DH_anon_WITH_CAMELLIA_128_GCM_SHA256',
0xc085: 'DH_anon_WITH_CAMELLIA_256_GCM_SHA384',
0xc086: 'ECDHE_ECDSA_WITH_CAMELLIA_128_GCM_SHA256',
0xc087: 'ECDHE_ECDSA_WITH_CAMELLIA_256_GCM_SHA384',
0xc088: 'ECDH_ECDSA_WITH_CAMELLIA_128_GCM_SHA256',
0xc089: 'ECDH_ECDSA_WITH_CAMELLIA_256_GCM_SHA384',
0xc08a: 'ECDHE_RSA_WITH_CAMELLIA_128_GCM_SHA256',
0xc08b: 'ECDHE_RSA_WITH_CAMELLIA_256_GCM_SHA384',
0xc08c: 'ECDH_RSA_WITH_CAMELLIA_128_GCM_SHA256',
0xc08d: 'ECDH_RSA_WITH_CAMELLIA_256_GCM_SHA384',
0xc08e: 'PSK_WITH_CAMELLIA_128_GCM_SHA256',
0xc08f: 'PSK_WITH_CAMELLIA_256_GCM_SHA384',
0xc090: 'DHE_PSK_WITH_CAMELLIA_128_GCM_SHA256',
0xc091: 'DHE_PSK_WITH_CAMELLIA_256_GCM_SHA384',
0xc092: 'RSA_PSK_WITH_CAMELLIA_128_GCM_SHA256',
0xc093: 'RSA_PSK_WITH_CAMELLIA_256_GCM_SHA384',
0xc094: 'PSK_WITH_CAMELLIA_128_CBC_SHA256',
0xc095: 'PSK_WITH_CAMELLIA_256_CBC_SHA384',
0xc096: 'DHE_PSK_WITH_CAMELLIA_128_CBC_SHA256',
0xc097: 'DHE_PSK_WITH_CAMELLIA_256_CBC_SHA384',
0xc098: 'RSA_PSK_WITH_CAMELLIA_128_CBC_SHA256',
0xc099: 'RSA_PSK_WITH_CAMELLIA_256_CBC_SHA384',
0xc09a: 'ECDHE_PSK_WITH_CAMELLIA_128_CBC_SHA256',
0xc09b: 'ECDHE_PSK_WITH_CAMELLIA_256_CBC_SHA384',
0xc09c: 'RSA_WITH_AES_128_CCM',
0xc09d: 'RSA_WITH_AES_256_CCM',
0xc09e: 'DHE_RSA_WITH_AES_128_CCM',
0xc09f: 'DHE_RSA_WITH_AES_256_CCM',
0xc0a0: 'RSA_WITH_AES_128_CCM_8',
0xc0a1: 'RSA_WITH_AES_256_CCM_8',
0xc0a2: 'DHE_RSA_WITH_AES_128_CCM_8',
0xc0a3: 'DHE_RSA_WITH_AES_256_CCM_8',
0xc0a4: 'PSK_WITH_AES_128_CCM',
0xc0a5: 'PSK_WITH_AES_256_CCM',
0xc0a6: 'DHE_PSK_WITH_AES_128_CCM',
0xc0a7: 'DHE_PSK_WITH_AES_256_CCM',
0xc0a8: 'PSK_WITH_AES_128_CCM_8',
0xc0a9: 'PSK_WITH_AES_256_CCM_8',
0xc0aa: 'PSK_DHE_WITH_AES_128_CCM_8',
0xc0ab: 'PSK_DHE_WITH_AES_256_CCM_8',
0xc0ac: 'ECDHE_ECDSA_WITH_AES_128_CCM',
0xc0ad: 'ECDHE_ECDSA_WITH_AES_256_CCM',
0xc0ae: 'ECDHE_ECDSA_WITH_AES_128_CCM_8',
0xc0af: 'ECDHE_ECDSA_WITH_AES_256_CCM_8',
0xc0b0: 'ECCPWD_WITH_AES_128_GCM_SHA256',
0xc0b1: 'ECCPWD_WITH_AES_256_GCM_SHA384',
0xc0b2: 'ECCPWD_WITH_AES_128_CCM_SHA256',
0xc0b3: 'ECCPWD_WITH_AES_256_CCM_SHA384',
0xcca8: 'ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256',
0xcca9: 'ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256',
0xccaa: 'DHE_RSA_WITH_CHACHA20_POLY1305_SHA256',
0xccab: 'PSK_WITH_CHACHA20_POLY1305_SHA256',
0xccac: 'ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256',
0xccad: 'DHE_PSK_WITH_CHACHA20_POLY1305_SHA256',
0xccae: 'RSA_PSK_WITH_CHACHA20_POLY1305_SHA256',
0xd000: 'Unassigned',
0xd001: 'ECDHE_PSK_WITH_AES_128_GCM_SHA256',
0xd002: 'ECDHE_PSK_WITH_AES_256_GCM_SHA384',
0xd003: 'ECDHE_PSK_WITH_AES_128_CCM_8_SHA256',
0xd004: 'Unassigned',
0xd005: 'ECDHE_PSK_WITH_AES_128_CCM_SHA256',
}
TLS_CONTENTTYPE_REGISTRY = {
0x14: 'change_cipher_spec',
0x15: 'alert',
0x16: 'handshake',
0x17: 'application_data',
0x18: 'heartbeat',
}
TLS_ALERT_REGISTRY = {
0x00: 'close_notify',
0x0a: 'unexpected_message',
0x14: 'bad_record_mac',
0x15: 'decryption_failed',
0x16: 'record_overflow',
0x1e: 'decompression_failure',
0x28: 'handshake_failure',
0x29: 'no_certificate_RESERVED',
0x2a: 'bad_certificate',
0x2b: 'unsupported_certificate',
0x2c: 'certificate_revoked',
0x2d: 'certificate_expired',
0x2e: 'certificate_unknown',
0x2f: 'illegal_parameter',
0x30: 'unknown_ca',
0x31: 'access_denied',
0x32: 'decode_error',
0x33: 'decrypt_error',
0x3c: 'export_restriction_RESERVED',
0x46: 'protocol_version',
0x47: 'insufficient_security',
0x50: 'internal_error',
0x56: 'inappropriate_fallback',
0x5a: 'user_canceled',
0x64: 'no_renegotiation',
0x6e: 'unsupported_extension',
0x6f: 'certificate_unobtainable',
0x70: 'unrecognized_name',
0x71: 'bad_certificate_status_response',
0x72: 'bad_certificate_hash_value',
0x73: 'unknown_psk_identity',
}
TLS_HANDSHAKETYPE_REGISTRY = {
0x00: 'hello_request',
0x01: 'client_hello',
0x02: 'server_hello',
0x03: 'hello_verify_request',
0x04: 'NewSessionTicket',
0x0b: 'certificate',
0x0c: 'server_key_exchange',
0x0d: 'certificate_request',
0x0e: 'server_hello_done',
0x0f: 'certificate_verify',
0x10: 'client_key_exchange',
0x14: 'finished',
0x15: 'certificate_url',
0x16: 'certificate_status',
0x17: 'supplemental_data',
}
TLS_SUPPORTED_GROUPS_REGISTRY = {
0x00: 'Unassigned',
0x01: 'sect163k1',
0x02: 'sect163r1',
0x03: 'sect163r2',
0x04: 'sect193r1',
0x05: 'sect193r2',
0x06: 'sect233k1',
0x07: 'sect233r1',
0x08: 'sect239k1',
0x09: 'sect283k1',
0x0a: 'sect283r1',
0x0b: 'sect409k1',
0x0c: 'sect409r1',
0x0d: 'sect571k1',
0x0e: 'sect571r1',
0x0f: 'secp160k1',
0x10: 'secp160r1',
0x100: 'ffdhe2048',
0x101: 'ffdhe3072',
0x102: 'ffdhe4096',
0x103: 'ffdhe6144',
0x104: 'ffdhe8192',
0x11: 'secp160r2',
0x12: 'secp192k1',
0x13: 'secp192r1',
0x14: 'secp224k1',
0x15: 'secp224r1',
0x16: 'secp256k1',
0x17: 'secp256r1',
0x18: 'secp384r1',
0x19: 'secp521r1',
0x1a: 'brainpoolP256r1',
0x1b: 'brainpoolP384r1',
0x1c: 'brainpoolP512r1',
0x1d: 'x25519',
0x1e: 'x448',
0xff00: 'Unassigned',
0xff01: 'arbitrary_explicit_prime_curves',
0xff02: 'arbitrary_explicit_char2_curves',
}
TLS_EC_POINT_FORMAT_REGISTRY = {
0x00: 'uncompressed',
0x01: 'ansiX962_compressed_prime',
0x02: 'ansiX962_compressed_char2',
}
TLS_EC_CURVE_TYPE_REGISTRY = {
0x00: 'Unassigned',
0x01: 'explicit_prime',
0x02: 'explicit_char2',
0x03: 'named_curve',
}
TLS_SUPPLEMENTAL_DATA_FORMATS = {
0x00: 'user_mapping_data',
0x4002: 'authz_data',
}
TLS_USERMAPPINGTYPE_VALUES = {
0x40: 'upn_domain_hint',
}
TLS_SIGNATUREALGORITHM_REGISTRY = {
0x00: 'anonymous',
0x01: 'rsa',
0x02: 'dsa',
0x03: 'ecdsa',
0x07: 'ed25519',
0x08: 'ed448',
}
TLS_HASHALGORITHM_REGISTRY = {
0x00: 'none',
0x01: 'md5',
0x02: 'sha1',
0x03: 'sha224',
0x04: 'sha256',
0x05: 'sha384',
0x06: 'sha512',
0x07: 'Unassigned',
0x08: 'Intrinsic',
}
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
# Skipping: AttributeError("'NoneType' object has no attribute 'text'",)
TLS_EXPORTER_LABEL_REGISTRY = {
}
TLS_AUTHORIZATION_DATA_FORMATS = {
0x00: 'x509_attr_cert',
0x01: 'saml_assertion',
0x02: 'x509_attr_cert_url',
0x03: 'saml_assertion_url',
0x40: 'keynote_assertion_list',
0x41: 'keynote_assertion_list_url',
0x42: 'dtcp_authorization',
}
HEARTBEAT_MESSAGE_TYPES = {
0x00: 'Reserved',
0x01: 'heartbeat_request',
0x02: 'heartbeat_response',
0xff: 'Reserved',
}
HEARTBEAT_MODES = {
0x00: 'Reserved',
0x01: 'peer_allowed_to_send',
0x02: 'peer_not_allowed_to_send',
0xff: 'Reserved',
}
# Generator: fetch_iana_tls_registry.py
# date: 2018-02-12
# sources: https://www.iana.org/assignments/comp-meth-ids/comp-meth-ids.xml
# WARNING! THIS FILE IS AUTOGENERATED, DO NOT EDIT!
TLS_COMPRESSION_METHOD_IDENTIFIERS = {
0x00: 'NULL',
0x01: 'DEFLATE',
0x40: 'LZS',
}
# Generator: fetch_iana_tls_registry.py
# date: 2018-02-12
# sources: https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xml
# WARNING! THIS FILE IS AUTOGENERATED, DO NOT EDIT!
EXTENSIONTYPE_VALUES = {
0x00: 'server_name',
0x01: 'max_fragment_length',
0x02: 'client_certificate_url',
0x03: 'trusted_ca_keys',
0x04: 'truncated_hmac',
0x05: 'status_request',
0x06: 'user_mapping',
0x07: 'client_authz',
0x08: 'server_authz',
0x09: 'cert_type',
0x0a: 'supported_groups',
0x0b: 'ec_point_formats',
0x0c: 'srp',
0x0d: 'signature_algorithms',
0x0e: 'use_srtp',
0x0f: 'heartbeat',
0x10: 'application_layer_protocol_negotiation',
0x11: 'status_request_v2',
0x12: 'signed_certificate_timestamp',
0x13: 'client_certificate_type',
0x14: 'server_certificate_type',
0x15: 'padding',
0x16: 'encrypt_then_mac',
0x17: 'extended_master_secret',
0x18: 'token_binding',
0x19: 'cached_info',
0x23: 'SessionTicket_TLS',
0xff01: 'renegotiation_info',
}
TLS_CERTIFICATE_TYPES = {
0x00: 'X_509',
0x01: 'OpenPGP',
0x02: 'Raw_Public_Key',
}
TLS_CERTIFICATE_STATUS_TYPES = {
0x00: 'Reserved',
0x01: 'ocsp',
0x02: 'ocsp_multi',
}
APPLICATION_LAYER_PROTOCOL_NEGOTIATION_PROTOCOL_IDS = {
'c-webrtc': 'Confidential_WebRTC_Media_and_Data',
'coap': 'CoAP',
'ftp': 'FTP',
'h2': 'HTTP_2_over_TLS',
'h2c': 'HTTP_2_over_TCP',
'http/1.1': 'HTTP_1_1',
'imap': 'IMAP',
'managesieve': 'ManageSieve',
'pop3': 'POP3',
'spdy/1': 'SPDY_1',
'spdy/2': 'SPDY_2',
'spdy/3': 'SPDY_3',
'stun.nat-discovery': 'NAT_discovery_using_Session_Traversal_Utilities_for_NAT',
'stun.turn': 'Traversal_Using_Relays_around_NAT',
'webrtc': 'WebRTC_Media_and_Data',
}
TLS_CACHEDINFORMATIONTYPE_VALUES = {
0x00: 'Reserved',
0x01: 'cert',
0x02: 'cert_req',
}
|
/scapy-ssl_tls-2.0.0.tar.gz/scapy-ssl_tls-2.0.0/scapy_ssl_tls/ssl_tls_registry.py
| 0.432782 | 0.265907 |
ssl_tls_registry.py
|
pypi
|
import binascii
import StringIO
class PKCS7Encoder(object):
"""
RFC 2315: PKCS#7 page 21
Some content-encryption algorithms assume the
input length is a multiple of k octets, where k > 1, and
let the application define a method for handling inputs
whose lengths are not a multiple of k octets. For such
algorithms, the method shall be to pad the input at the
trailing end with k - (l mod k) octets all having value k -
(l mod k), where l is the length of the input. In other
words, the input is padded at the trailing end with one of
the following strings:
01 -- if l mod k = k-1
02 02 -- if l mod k = k-2
.
.
.
k k ... k k -- if l mod k = 0
The padding can be removed unambiguously since all input is
padded and no padding string is a suffix of another. This
padding method is well-defined if and only if k < 256;
methods for larger k are an open issue for further study.
"""
def __init__(self, k=16):
self.k = k
# @param text The padded text for which the padding is to be removed.
# @exception ValueError Raised when the input padding is missing or corrupt.
def decode(self, text):
"""
Remove the PKCS#7 padding from a text string
"""
nl = len(text)
val = int(binascii.hexlify(text[-1]), 16)
if val > self.k:
raise ValueError('Input is not padded or padding is corrupt')
l = nl - val
return text[:l]
# @param text The text to encode.
def encode(self, text):
"""
Pad an input string according to PKCS#7
"""
return text + self.get_padding(text)
def get_padding(self, text):
l = len(text)
output = StringIO.StringIO()
val = self.k - (l % self.k)
for _ in xrange(val):
output.write('%02x' % val)
return binascii.unhexlify(output.getvalue())
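# Illustrative usage sketch (not part of the original module; assumes the
# Python 2 string semantics used above, i.e. StringIO/xrange and str inputs):
#
#   encoder = PKCS7Encoder(k=16)
#   padded = encoder.encode("hello")          # appends 11 bytes, each of value 0x0b
#   assert encoder.decode(padded) == "hello"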
|
/scapy-ssl_tls-2.0.0.tar.gz/scapy-ssl_tls-2.0.0/scapy_ssl_tls/pkcs7.py
| 0.702938 | 0.46794 |
pkcs7.py
|
pypi
|
import logging
import os
from typing import Dict, List
from pathlib import Path
from importlib_metadata import version
from flask.cli import load_dotenv
from msal import PublicClientApplication
from sentry_sdk.integrations.flask import FlaskIntegration
from str2bool import str2bool
from bas_style_kit_jinja_templates import BskTemplates
class Config:
"""
Flask/App configuration base class
Configuration options are mostly set using class properties and are typically hard-coded. A limited number of
options can be set at runtime using environment variables (set directly or through an `.env` file).
"""
ENV = os.environ.get("FLASK_ENV")
DEBUG = False
TESTING = False
LOG_FORMAT = "[%(asctime)s] %(levelname)s [%(name)s.%(funcName)s:%(lineno)d] %(message)s"
# Used as defaults for values that can be set at runtime
_APP_ENABLE_SENTRY = True
_LOGGING_LEVEL = logging.WARNING
_COLLECTIONS_PATH = Path.home().joinpath(".config/scar_add_metadata_toolbox/collections.json")
_AUTH_SESSION_FILE_PATH = Path.home().joinpath(".config/scar_add_metadata_toolbox/auth.json")
_SITE_PATH = Path.home().joinpath(".config/scar_add_metadata_toolbox/_site")
def __init__(self):
load_dotenv()
"""
APP_ENABLE_SENTRY - Whether to enable Sentry error reporting
If true, errors and uncaught exceptions will be reported to Sentry. A default value is set on a per-environment
basis (off in development/testing) by overriding the attribute; however, it can also be set at runtime.
"""
self.APP_ENABLE_SENTRY = str2bool(os.environ.get("APP_ENABLE_SENTRY") or str(self._APP_ENABLE_SENTRY))
"""
AUTH_SESSION_FILE_PATH - Path to the file used to store authentication information
When run as a CLI using containers, this application becomes stateless. Therefore user auth information (access
token etc.) needs to be persisted elsewhere, in this case as a file written to the path set by this config option.
Note: As this file stores authentication information, its contents should be considered sensitive; for example,
restricted read/write permissions should be set. Note that as OAuth is used for authentication, no
long-lived credentials (e.g. passwords) will be stored in this file.
"""
self.AUTH_SESSION_FILE_PATH = Path(os.environ.get("APP_AUTH_SESSION_FILE_PATH") or self._AUTH_SESSION_FILE_PATH)
# noinspection PyPep8Naming
@property
def NAME(self) -> str:
"""
Application/Package name
:rtype str
:return: Application name
"""
return "scar-add-metadata-toolbox"
# noinspection PyPep8Naming
@property
def VERSION(self) -> str:
"""
Application version
Taken from the package where possible, otherwise a generic placeholder is used.
:rtype str
:return: Application version
"""
return "Unknown"
# noinspection PyPep8Naming
@property
def LOGGING_LEVEL(self) -> int:
"""
Application logging level
Python logging module logging level. If set at runtime, the level set as a descriptive string is mapped to the
relevant numeric level using the logging level enumeration.
:rtype int
:return: Application logging level
"""
if "APP_LOGGING_LEVEL" in os.environ: # pragma: no cover
if os.environ.get("APP_LOGGING_LEVEL") == "debug":
return logging.DEBUG
elif os.environ.get("APP_LOGGING_LEVEL") == "info":
return logging.INFO
elif os.environ.get("APP_LOGGING_LEVEL") == "warning":
return logging.WARNING
elif os.environ.get("APP_LOGGING_LEVEL") == "error":
return logging.ERROR
elif os.environ.get("APP_LOGGING_LEVEL") == "critical":
return logging.CRITICAL
return self._LOGGING_LEVEL
# noinspection PyPep8Naming
@property
def SENTRY_CONFIG(self) -> Dict:
"""
Sentry runtime configuration
Settings used for Sentry, typically reusing other config options. Only relevant if `APP_ENABLE_SENTRY` is True.
:rtype dict
:return: Sentry runtime configuration
"""
return {
"dsn": "https://[email protected]/5197036",
"integrations": [FlaskIntegration()],
"environment": self.ENV,
"release": f"{self.NAME}@{self.VERSION}",
}
# noinspection PyPep8Naming
@property
def BSK_TEMPLATES(self) -> BskTemplates:
"""
BAS Style Kit Jinja2 templates configuration
Sets relevant configuration options for setting application identity, primary navigation, analytics and
required CSS styles and JavaScript.
:rtype BskTemplates
:return: BAS Style Kit Jinja2 templates configuration
"""
bsk_templates = BskTemplates()
bsk_templates.site_title = "BAS Data Catalogue"
bsk_templates.site_description = (
"Discover data, services and records held by the British Antarctic Survey and UK Polar Data Centre"
)
bsk_templates.bsk_site_nav_brand_text = "BAS Data Catalogue"
bsk_templates.bsk_site_development_phase = "alpha"
bsk_templates.bsk_site_feedback_href = "/feedback"
bsk_templates.bsk_site_footer_policies_cookies_href = "/legal/cookies"
bsk_templates.bsk_site_footer_policies_copyright_href = "/legal/copyright"
bsk_templates.bsk_site_footer_policies_privacy_href = "/legal/privacy"
bsk_templates.site_analytics["id"] = "UA-64130716-19"
bsk_templates.site_styles.append(
{
"href": "https://cdn.web.bas.ac.uk/libs/font-awesome-pro/5.13.0/css/all.min.css",
"integrity": "sha256-DjbUjEiuM4tczO997cVF1zbf91BC9OzycscGGk/ZKks=",
}
)
bsk_templates.site_scripts.append(
{
"href": "https://browser.sentry-cdn.com/5.15.4/bundle.min.js",
"integrity": "sha384-Nrg+xiw+qRl3grVrxJtWazjeZmUwoSt0FAVsbthlJ5OMpx0G08bqIq3b/v0hPjhB",
}
)
bsk_templates.site_scripts.append(
{
"href": "https://cdn.web.bas.ac.uk/libs/jquery-sticky-tabs/1.2.0/jquery.stickytabs.js",
"integrity": "sha256-JjbqQErDTc0GyOlDQLEgyqoC6XR6puR0wIJFkoHp9Fo=",
}
)
bsk_templates.site_scripts.append(
{
"href": "https://cdn.web.bas.ac.uk/libs/markdown-it/11.0.0/js/markdown-it.min.js",
"integrity": "sha256-3mv+NUxFuBg26MtcnuN2X37WUxuGunWCCiG2YCSBjNc=",
}
)
bsk_templates.site_styles.append({"href": "/static/css/app.css"})
bsk_templates.site_scripts.append({"href": "/static/js/app.js"})
return bsk_templates
# noinspection PyPep8Naming
@property
def COLLECTIONS_CONFIG(self) -> dict:
"""
Collections config
Configuration for the application Collections class instance. See the Collections class for details on
required/available configuration options.
:rtype dict
:return: Collections config
"""
return {"collections_path": Path(os.environ.get("APP_COLLECTIONS_PATH") or self._COLLECTIONS_PATH)}
# noinspection PyPep8Naming
@property
def CSW_CLIENTS_CONFIG(self) -> dict:
"""
CSW clients config
Configuration for CSW clients used in application Repository class instances. See Repository class for details
on required/available options. This arrangement of configuration options is intended for use with the
application MirrorRepository class instance.
:rtype dict
:return: CSW clients config
"""
return {
"unpublished": {"client_config": {"endpoint": os.environ.get("CSW_ENDPOINT_UNPUBLISHED")}},
"published": {"client_config": {"endpoint": os.environ.get("CSW_ENDPOINT_PUBLISHED")}},
}
# noinspection PyPep8Naming
@property
def CSW_SERVERS_CONFIG(self) -> dict:
"""
CSW servers config
Configuration for CSW servers/repositories used in CSWServer class instances. See CSWServer class for details on
required/available options. This arrangement of configuration options is intended for use with the application
CSWServer class instances set by the `scar_add_metadata_toolbox.utils._create_csw_repositories` method.
:rtype dict
:return: CSW servers config
"""
return {
"unpublished": {
"endpoint": os.environ.get("CSW_SERVER_CONFIG_UNPUBLISHED_ENDPOINT"),
"title": "Internal CSW (Unpublished)",
"abstract": "Internal PyCSW OGC CSW server for unpublished records",
"database_connection_string": os.environ.get("CSW_SERVER_CONFIG_UNPUBLISHED_DATABASE_CONNECTION"),
"database_table": "records_unpublished",
"auth_required_scopes_read": ["BAS.MAGIC.ADD.Records.ReadWrite.All"],
"auth_required_scopes_write": ["BAS.MAGIC.ADD.Records.ReadWrite.All"],
},
"published": {
"endpoint": os.environ.get("CSW_SERVER_CONFIG_PUBLISHED_ENDPOINT"),
"title": "Internal CSW (Published)",
"abstract": "Internal PyCSW OGC CSW server for published records",
"database_connection_string": os.environ.get("CSW_SERVER_CONFIG_PUBLISHED_DATABASE_CONNECTION"),
"database_table": "records_published",
"auth_required_scopes_read": [],
"auth_required_scopes_write": ["BAS.MAGIC.ADD.Records.Publish.All"],
},
}
# noinspection PyPep8Naming
@property
def AZURE_OAUTH_TENANCY(self) -> str:
"""
Azure tenancy (server)
Tenancy ID for the Azure app registration representing the server/catalogue component of this application.
Note: This value is not sensitive.
:rtype str
:return: Azure tenancy ID
"""
return "b311db95-32ad-438f-a101-7ba061712a4e"
# noinspection PyPep8Naming
@property
def AZURE_OAUTH_APPLICATION_ID(self) -> str:
"""
Azure application (server)
Azure app registration ID for the registration representing the server/catalogue component of this application.
Note: This value is not sensitive.
:rtype str
:return: Azure app registration ID
"""
return "8b45581e-1b2e-4b8c-b667-e5a1360b6906"
# noinspection PyPep8Naming
@property
def AZURE_OAUTH_CLIENT_APPLICATION_IDS(self) -> List[str]:
"""
Azure approved applications (server)
List of Azure app registration IDs for applications/services (clients) trusted/approved to use the
server/catalogue component of this application.
This list automatically includes the app registration representing the client/editor component of this
application, in addition to these services:
* 3b864b8d-a6b8-44c1-8468-16f455e5eb4f = BAS Nagios (for uptime/availability monitoring)
Note: These values are not sensitive.
:rtype list
:return: List of approved Azure app registration IDs
"""
return [self.AUTH_CLIENT_ID, "3b864b8d-a6b8-44c1-8468-16f455e5eb4f"]
# noinspection PyPep8Naming
@property
def AUTH_CLIENT_SCOPES(self) -> List[str]:
"""
Azure scopes (client)
List of scopes requested in OAuth authorisation requests to Azure (i.e. sign-in requests).
These should be scopes always required by this application, rather than scopes needed for specific/privileged
actions, as these are typically conferred on specific users and will be included as roles in access tokens.
This scope is very general and is effectively static. Other scopes, needed for publishing records for example,
are granted to specific users as roles (which the Flask Azure OAuth provider treats as scopes).
Note: These values are not sensitive.
:rtype list
:return: OAuth authorisation request scopes
"""
return ["api://8bfe65d3-9509-4b0a-acd2-8ce8cdc0c01e/BAS.MAGIC.ADD.Access"]
# noinspection PyPep8Naming
@property
def AUTH_CLIENT_ID(self) -> str:
"""
Azure application (client)
Azure app registration ID for the registration representing the client/editor component of this application.
Note: This value is not sensitive.
:rtype str
:return: Azure app registration ID
"""
return "91c284e7-6522-4eb4-9943-f4ec08e98cb9"
# noinspection PyPep8Naming
@property
def AUTH_CLIENT_TENANCY(self) -> str:
"""
Azure tenancy (client)
Tenancy endpoint for the Azure app registration representing the client/editor component of this application.
Note: This value is not sensitive.
:rtype str
:return: Azure tenancy endpoint
"""
return "https://login.microsoftonline.com/b311db95-32ad-438f-a101-7ba061712a4e"
# noinspection PyPep8Naming
@property
def CLIENT_AUTH(self) -> PublicClientApplication:
"""
Azure auth provider (client)
Uses the Microsoft Authentication Library (MSAL) for Python to simplify requesting access tokens from Azure.
This is used for the client/editor component of this application, which is considered a 'public' client as this
application runs on the user's device, and therefore isn't confidential.
Note: The Flask Azure OAuth provider is used for the server/catalogue component, instantiated in the
application factory method.
:rtype PublicClientApplication
:return: Microsoft Authentication Library Public Client application
"""
return PublicClientApplication(client_id=self.AUTH_CLIENT_ID, authority=self.AUTH_CLIENT_TENANCY)
# noinspection PyPep8Naming
@property
def SITE_PATH(self) -> Path:
"""
Path to the directory used to store generated static site content
The contents of this directory should be considered ephemeral and under the exclusive control of this application.
:rtype Path
:return: Site content path
"""
return Path(os.environ.get("APP_SITE_PATH") or self._SITE_PATH)
# noinspection PyPep8Naming
@property
def S3_BUCKET(self) -> str:
return os.environ.get("APP_S3_BUCKET")
class ProductionConfig(Config): # pragma: no cover
"""
Flask configuration for Production environments
Note: This method is excluded from test coverage as its meaning would be undermined.
"""
# noinspection PyPep8Naming
@property
def VERSION(self) -> str:
return version("scar-add-metadata-toolbox")
class DevelopmentConfig(Config): # pragma: no cover
"""
Flask configuration for (local) Development environments
Note: This method is excluded from test coverage as its meaning would be undermined.
"""
DEBUG = True
_APP_ENABLE_SENTRY = False
_LOGGING_LEVEL = logging.INFO
_COLLECTIONS_PATH = Path(f"./collections.json")
_AUTH_SESSION_FILE_PATH = Path("./auth.json")
_SITE_PATH = Path("./_site")
def __init__(self):
"""
Use this method to override property values defined in the config base class.
For this class, values will typically be local services to ensure production data is not inadvertently modified.
"""
super().__init__()
if "CSW_ENDPOINT_UNPUBLISHED" not in os.environ:
os.environ["CSW_ENDPOINT_UNPUBLISHED"] = "http://app:9000/csw/unpublished"
if "CSW_ENDPOINT_PUBLISHED" not in os.environ:
os.environ["CSW_ENDPOINT_PUBLISHED"] = "http://app:9000/csw/published"
if "CSW_SERVER_CONFIG_UNPUBLISHED_ENDPOINT" not in os.environ:
os.environ["CSW_SERVER_CONFIG_UNPUBLISHED_ENDPOINT"] = "http://app:9000/csw/unpublished"
if "CSW_SERVER_CONFIG_PUBLISHED_ENDPOINT" not in os.environ:
os.environ["CSW_SERVER_CONFIG_PUBLISHED_ENDPOINT"] = "http://app:9000/csw/published"
if "CSW_SERVER_CONFIG_UNPUBLISHED_DATABASE_CONNECTION" not in os.environ:
os.environ[
"CSW_SERVER_CONFIG_UNPUBLISHED_DATABASE_CONNECTION"
] = "postgresql://postgres:password@db/postgres"
if "CSW_SERVER_CONFIG_PUBLISHED_DATABASE_CONNECTION" not in os.environ:
os.environ["CSW_SERVER_CONFIG_PUBLISHED_DATABASE_CONNECTION"] = "postgresql://postgres:password@db/postgres"
# noinspection PyPep8Naming
@property
def VERSION(self) -> str:
return "N/A"
@property
def S3_BUCKET(self) -> str:
if "APP_S3_BUCKET" in os.environ:
return os.environ["APP_S3_BUCKET"]
return "add-catalogue-integration.data.bas.ac.uk"
class TestingConfig(DevelopmentConfig):
"""
Flask configuration for Testing environments
"""
TESTING = True
_LOGGING_LEVEL = logging.DEBUG
def __init__(self):
"""
Use this method to override property values defined in the config base class.
For this class, values will typically be generic or intentionally wrong to ensure components are mocked
correctly or production data is not inadvertently modified.
"""
super().__init__()
os.environ["CSW_ENDPOINT_UNPUBLISHED"] = "http://example.com/csw/unpublished"
os.environ["CSW_ENDPOINT_PUBLISHED"] = "http://example.com/csw/published"
os.environ["CSW_SERVER_CONFIG_UNPUBLISHED_ENDPOINT"] = "http://example.com/csw/unpublished"
os.environ["CSW_SERVER_CONFIG_PUBLISHED_ENDPOINT"] = "http://example.com/csw/published"
os.environ[
"CSW_SERVER_CONFIG_UNPUBLISHED_DATABASE_CONNECTION"
] = "postgresql://postgres:password@example/postgres"
os.environ[
"CSW_SERVER_CONFIG_PUBLISHED_DATABASE_CONNECTION"
] = "postgresql://postgres:password@example/postgres"
os.environ["S3_BUCKET"] = "example"
|
/scar_add_metadata_toolbox-0.3.0-py3-none-any.whl/scar_add_metadata_toolbox/config.py
| 0.754915 | 0.161056 |
config.py
|
pypi
|
import os
import json
from base64 import urlsafe_b64decode
from typing import Dict, Optional
from pathlib import Path
# noinspection PyPackageRequirements
from awscli.clidriver import create_clidriver
from jinja2 import PrefixLoader, PackageLoader
from werkzeug.utils import import_string
from scar_add_metadata_toolbox.config import Config
from scar_add_metadata_toolbox.csw import CSWServer
def _create_app_config() -> Config:
"""
Create a Flask application configuration object
Creates an instance of the relevant Config class defined in `config.py` based on the application environment
(e.g. in production, the ProductionConfig class).
:rtype Config
:return: Flask config object
"""
return import_string(f"scar_add_metadata_toolbox.config.{str(os.environ['FLASK_ENV']).capitalize()}Config")()
def _create_app_jinja_loader() -> PrefixLoader:
"""
Create a Jinja environment's template sources
Creates a Jinja prefix loader to load shared and application specific templates together. A prefix (namespace) is
used to select which set of templates to use. Templates are loaded from relevant Python modules
:rtype PrefixLoader
:return: Jinja prefix loader
"""
return PrefixLoader(
{
"app": PackageLoader("scar_add_metadata_toolbox"),
"bas_style_kit": PackageLoader("bas_style_kit_jinja_templates"),
}
)
def _create_csw_repositories(repositories_config: dict) -> Dict[str, CSWServer]:
"""
Create application CSW servers
Creates CSW servers (catalogues/repositories) used in the server/catalogue component of this application.
The arrangement of servers used is designed to provide the catalogues needed for the MirrorRepository class.
:rtype dict
:param repositories_config: dictionary of configurations for CSW servers, keyed by MirrorRepository class reference
:return: dictionary of CSWServer class instances, keyed by repository name
"""
_repositories = {}
for repository_name, repository_config in repositories_config.items():
_repositories[repository_name] = CSWServer(config=repository_config)
return _repositories
def aws_cli(*cmd) -> None:
"""
AWS CLI python bindings
Creates an instance of the AWS CLI that can be used via Python. This allows convenience commands like `s3 sync`,
rather than needing to implement this ourselves using the underlying boto (AWS Python SDK) methods.
Source: https://github.com/boto/boto3/issues/358#issuecomment-372086466
"""
old_env = dict(os.environ)
try:
env = os.environ.copy()
env["LC_CTYPE"] = "en_US.UTF"
os.environ.update(env)
exit_code = create_clidriver().main(*cmd)
if exit_code > 0:
raise RuntimeError(f"AWS CLI exited with code {exit_code}")
finally:
os.environ.clear()
os.environ.update(old_env)
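# Illustrative call (not part of the original module; the local path and bucket
# are hypothetical), mirroring the `s3 sync` convenience mentioned above:
#
#   aws_cli(["s3", "sync", "./_site", "s3://example-bucket"])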
class AppAuthToken:
"""
Azure auth token
This class serves two main purposes:
1. enabling easier access to access tokens returned in auth requests from the Microsoft Authentication Library
2. persisting auth information to a local file for situations where this application is run statelessly
"""
def __init__(self, session_file_path: Path):
"""
:type session_file_path Path
:param session_file_path: Path to the file used to persist auth information
"""
self.session_file_path = session_file_path
self._payload = None
@property
def access_token_bearer_insecure(self) -> str:
"""
Return the name of the user identified in the access token
This is a convenience method to return the name of the user an access token is issued for. This method avoids
having to fetch signing key sets to authenticate tokens etc. where the claims shown don't have an
impact on security (e.g. greeting messages).
WARNING: This method is insecure as it does not validate that its claims are authentic, or that the token is still
valid. This method therefore MUST NOT be used in a secure context (e.g. determining if a user has access to a
resource or action). A full JWT library MUST be used instead in such circumstances.
:rtype str
:return: Name of the user in access token (or '*unknown*')
"""
try:
access_token_parts = self.access_token.split(".")
access_token_payload = urlsafe_b64decode(access_token_parts[1].encode() + b"===").decode()
access_token_claims = json.loads(access_token_payload)
return f"{access_token_claims['given_name']} {access_token_claims['family_name']}"
except KeyError:
return "*unknown*"
@property
def access_token(self) -> Optional[str]:
"""
OAuth access token
As defined by Azure: https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens
This application uses V2 access tokens.
None is returned if an access token isn't set so that this class is compatible with the OWSLib Authentication
class, which defaults credentials to None if not set (i.e. unauthenticated).
:rtype str or None
:return: access token
"""
try:
return self.payload["access_token"]
except KeyError:
return None
@property
def payload(self) -> dict:
"""
Azure device flow response payload
Payload returned by the Azure OAuth device flow via the Microsoft Authentication Library Public Client object.
This includes tokens (access, refresh, id) and metadata (expiration times) for various purposes.
When read, the payload is loaded from a JSON file.
:rtype dict
:return: Azure device flow response payload
"""
self._payload = self._load()
return self._payload
@payload.setter
def payload(self, payload: dict):
"""
Azure device flow response payload
When set, the payload is saved to a JSON file.
:type payload dict
:param payload: Azure device flow response payload
"""
self._payload = payload
self._dump()
@payload.deleter
def payload(self):
"""
Azure device flow response payload
When deleted, the stored payload file is removed.
"""
self._payload = None
self.session_file_path.unlink()
def _load(self) -> dict:
"""
Loads payload information from a JSON encoded file
:rtype dict
:returns Azure device flow response payload
"""
try:
with open(str(self.session_file_path), "r") as auth_file:
return json.load(auth_file)
except FileNotFoundError:
return {}
def _dump(self) -> None:
"""
Saves payload information to a file encoded as JSON
"""
self.session_file_path.parent.mkdir(parents=True, exist_ok=True)
with open(str(self.session_file_path), "w") as auth_file:
json.dump(self._payload, auth_file, indent=4)
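# Illustrative usage sketch (not part of the original module; the session file
# path and payload are placeholders):
#
#   token = AppAuthToken(session_file_path=Path("./auth.json"))
#   token.payload = {"access_token": "<access token from the MSAL device flow>"}
#   assert token.access_token == "<access token from the MSAL device flow>"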
|
/scar_add_metadata_toolbox-0.3.0-py3-none-any.whl/scar_add_metadata_toolbox/utils.py
| 0.788217 | 0.218795 |
utils.py
|
pypi
|
import tensorflow as tf
import numpy as np
import time
class DetectorAPI:
def __init__(self, path_to_ckpt):
self.path_to_ckpt = path_to_ckpt
self.detection_graph = tf.Graph()
with self.detection_graph.as_default():
od_graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(self.path_to_ckpt, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
self.default_graph = self.detection_graph.as_default()
self.sess = tf.compat.v1.Session(graph=self.detection_graph)
# Define input and output Tensors for detection_graph
self.image_tensor = self.detection_graph.get_tensor_by_name(
'image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
self.detection_boxes = self.detection_graph.get_tensor_by_name(
'detection_boxes:0')
# Each score represents the level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
self.detection_scores = self.detection_graph.get_tensor_by_name(
'detection_scores:0')
self.detection_classes = self.detection_graph.get_tensor_by_name(
'detection_classes:0')
self.num_detections = self.detection_graph.get_tensor_by_name(
'num_detections:0')
def processFrame(self, image):
# Expand dimensions since the trained_model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image, axis=0)
# Actual detection.
start_time = time.time()
(boxes, scores, classes,
num) = self.sess.run([
self.detection_boxes, self.detection_scores,
self.detection_classes, self.num_detections
],
feed_dict={self.image_tensor: image_np_expanded})
end_time = time.time()
print("Elapsed Time:", end_time - start_time)
im_height, im_width, _ = image.shape
boxes_list = [None for i in range(boxes.shape[1])]
for i in range(boxes.shape[1]):
boxes_list[i] = (int(boxes[0, i, 1] * im_width),
int(boxes[0, i, 0] * im_height),
int(boxes[0, i, 3] * im_width),
int(boxes[0, i, 2] * im_height))
return boxes_list, scores[0].tolist(), [
int(x) for x in classes[0].tolist()
], int(num[0])
def close(self):
self.sess.close()
self.default_graph.close()
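# Illustrative usage sketch (not part of the original module; the checkpoint
# path is a placeholder and a dummy frame is used as input):
#
#   detector = DetectorAPI(path_to_ckpt="frozen_inference_graph.pb")
#   frame = np.zeros((480, 640, 3), dtype=np.uint8)
#   boxes, scores, classes, num = detector.processFrame(frame)
#   detector.close()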
|
/scar-4.3.0.tar.gz/scar-4.3.0/examples/mask-detector-workflow/blurry-faces/src/DetectorAPI.py
| 0.843412 | 0.349255 |
DetectorAPI.py
|
pypi
|
import json, os, argparse, importlib, sys, traceback
from typing import Any, Dict, List, Literal, Tuple
import dearpygui.dearpygui as dpg
from scared.remote import load_remote_run
# {run name => {config key => config value}}
run_configs: Dict[str, Dict[str, Any]] = {}
# {run name => {metrics name => metrics value}}
run_metrics: Dict[str, Dict[str, Any]] = {}
# {run name => {property name => property value}}
run_props: Dict[str, Dict[Literal["remote"], Any]] = {}
dpg.create_context()
dpg.create_viewport(title="sacred-dpg-dashboard")
config_windows = []
def get_all_possible_metrics() -> List[Tuple[str, str]]:
"""Return all opened metrics
:return: a list of tuples of the form ``(run_name, metrics_name)``
"""
global run_metrics
metrics = []
for run_name, metrics_dict in run_metrics.items():
for metrics_name in metrics_dict.keys():
metrics.append((run_name, metrics_name))
return metrics
# themes
with dpg.theme(tag="hist_theme"):
with dpg.theme_component(dpg.mvHistogramSeries):
dpg.add_theme_style(
dpg.mvPlotStyleVar_FillAlpha, 0.5, category=dpg.mvThemeCat_Plots
)
with dpg.theme(tag="metrics_list_button_theme"):
with dpg.theme_component(dpg.mvButton):
# left align
dpg.add_theme_style(
dpg.mvStyleVar_ButtonTextAlign, 0.0, category=dpg.mvThemeCat_Core
)
with dpg.theme(tag="remote"):
with dpg.theme_component(dpg.mvTab):
PURPLE = (124, 36, 179)
GRAY_PURPLE = (98, 66, 117)
dpg.add_theme_color(dpg.mvThemeCol_TabActive, PURPLE)
dpg.add_theme_color(dpg.mvThemeCol_TabHovered, PURPLE)
dpg.add_theme_color(dpg.mvThemeCol_Tab, GRAY_PURPLE)
# main metrics window
with dpg.window(tag="metrics_window"):
with dpg.child_window(no_scrollbar=True):
with dpg.table(header_row=False, resizable=True):
dpg.add_table_column(init_width_or_weight=0.25)
dpg.add_table_column()
with dpg.table_row():
dpg.add_child_window(tag="metrics_list")
dpg.add_child_window(tag="plots")
def create_plot(tag: str, parent: str):
yaxis = f"{tag}_plot_y"
xaxis = f"{tag}_plot_x"
def fit_axis_data():
dpg.fit_axis_data(xaxis)
dpg.fit_axis_data(yaxis)
def add_series(x, y, label: str, plot_type: Literal["plot", "hist", "scatter"]):
"""Add a series to the plot"""
assert len(x) == len(y)
def change_series_plot_type(sender, app_data, user_data):
"""
:param user_data: ``(series, plot_type)``
"""
series, plot_type = user_data
dpg.delete_item(series)
add_series(x, y, label, plot_type)
fit_axis_data()
if plot_type == "plot":
if len(x) == 1:
series = dpg.add_hline_series(y, label=label, parent=yaxis)
else:
series = dpg.add_line_series(x, y, label=label, parent=yaxis)
for other_type in ["hist", "scatter"]:
dpg.add_button(
label=f"plot as {other_type}",
parent=series,
user_data=(series, other_type),
callback=change_series_plot_type,
)
elif plot_type == "hist":
series = dpg.add_histogram_series(
y, label=label, parent=yaxis, min_range=min(y), max_range=max(y)
)
dpg.bind_item_theme(series, "hist_theme")
for other_type in ["plot", "scatter"]:
dpg.add_button(
label=f"plot as {other_type}",
parent=series,
user_data=(series, other_type),
callback=change_series_plot_type,
)
elif plot_type == "scatter":
series = dpg.add_scatter_series(x, y, label=label, parent=yaxis)
for other_type in ["plot", "hist"]:
dpg.add_button(
label=f"plot as {other_type}",
parent=series,
user_data=(series, other_type),
callback=change_series_plot_type,
)
else:
raise ValueError(f"unknown plot type : {plot_type}")
def delete_series(sender, app_data, user_data):
dpg.delete_item(user_data)
fit_axis_data()
dpg.add_button(
label="delete",
parent=series,
user_data=series,
callback=delete_series,
)
fit_axis_data()
with dpg.plot(
tag=tag,
parent=parent,
drop_callback=lambda s, a, u: add_series(a[0], a[1], a[2], "plot"),
payload_type="plotting",
width=-1,
height=-1,
):
dpg.add_plot_axis(dpg.mvXAxis, tag=xaxis)
dpg.add_plot_axis(dpg.mvYAxis, tag=yaxis)
dpg.add_plot_legend(show=True)
def set_plot_grid(grid: Literal["1", "2x1", "2x2"]):
# delete current plots
dpg.delete_item("plots", children_only=True)
# create a new grid
if grid == "1":
create_plot("plot1", "plots")
elif grid == "2x1":
with dpg.subplots(2, 1, width=-1, height=-1, parent="plots") as s:
create_plot("plot1", s)
create_plot("plot2", s)
elif grid == "2x2":
with dpg.subplots(2, 2, width=-1, height=-1, parent="plots") as s:
create_plot("plot1", s)
create_plot("plot2", s)
create_plot("plot3", s)
create_plot("plot4", s)
else:
raise ValueError(f"unknown grid specification: '{grid}'")
set_plot_grid("1")
def refresh_metrics_list():
"""
Refresh the metrics list according to the currently opened runs.
"""
dpg.delete_item("metrics_list", children_only=True)
with dpg.tab_bar(parent="metrics_list"):
for run_name in run_metrics.keys():
with dpg.tab(label=run_name) as t:
# set "remote" theme if the run is a remote one
if run_props.get(run_name, {}).get("remote"):
dpg.bind_item_theme(t, "remote")
filter_set = dpg.add_filter_set()
dpg.add_input_text(
label=f"Filter",
user_data=filter_set,
callback=lambda sender, input_string: dpg.set_value(
dpg.get_item_user_data(sender), input_string
),
before=filter_set,
)
for metrics_name in run_metrics[run_name]:
button = dpg.add_button(
label=metrics_name,
filter_key=metrics_name,
width=-1,
parent=filter_set,
)
dpg.bind_item_theme(button, "metrics_list_button_theme")
# drag payload
x_data = run_metrics[run_name][metrics_name]["steps"]
y_data = run_metrics[run_name][metrics_name]["values"]
label = f"{run_name}|{metrics_name}"
with dpg.drag_payload(
parent=dpg.last_item(),
drag_data=(x_data, y_data, label),
payload_type="plotting",
):
# display text and a plot preview when dragging
dpg.add_text(label)
dpg.add_simple_plot(default_value=y_data)
def config_window_tabbar(w: int) -> str:
return f"config_window_{w}_tabbar"
def refresh_config_window(w: int):
"""Refresh a config window according to the currently opened runs"""
tabbar = config_window_tabbar(w)
dpg.delete_item(tabbar, children_only=True)
for run_name, run_config in run_configs.items():
with dpg.tab(label=run_name, parent=tabbar):
dpg.add_text(json.dumps(run_config, indent=4))
def create_config_window(sender, app_data):
global config_windows
with dpg.window(pos=(0, 0), width=400, height=400) as w:
dpg.add_tab_bar(tag=config_window_tabbar(w))
config_windows.append(w)
dpg.configure_item(w, on_close=lambda *args: config_windows.remove(w))
refresh_config_window(w)
def open_run(run_root_dir: str):
global run_metrics
global run_configs
run_name = os.path.basename(run_root_dir)
# metrics
with open(f"{run_root_dir}/metrics.json") as f:
local_run_metrics = json.load(f)
run_metrics[run_name] = local_run_metrics
# config
with open(f"{run_root_dir}/config.json") as f:
local_run_config = json.load(f)
local_run_config = {
k: v for k, v in local_run_config.items() if not k == "__annotations__"
}
run_configs[run_name] = local_run_config
run_props[run_name] = {}
refresh_metrics_list()
for w in config_windows:
refresh_config_window(w)
def open_remote_run(host: str, run_root_dir: str):
name, config, metrics = load_remote_run(host, run_root_dir)
run_configs[name] = config
run_metrics[name] = metrics
run_props[name] = {"remote": True}
refresh_metrics_list()
for w in config_windows:
refresh_config_window(w)
# menu bar
def on_open(sender, app_data):
global run_metrics
global run_configs
root_dir = os.path.dirname(app_data["file_path_name"])
    run_names = list(app_data["selections"].keys())
for run_name in run_names:
open_run(f"{root_dir}/{run_name}")
with dpg.menu_bar(parent="metrics_window"):
with dpg.menu(label="Runs"):
# open local runs
fs = dpg.add_file_dialog(
show=False, callback=on_open, directory_selector=True, width=800, height=600
)
dpg.add_menu_item(label="Open...", callback=lambda: dpg.show_item(fs))
# open remote runs
with dpg.window(
tag="open_remote_runs_window",
label="Open remote run",
modal=True,
show=False,
no_title_bar=True,
) as remote_runs_w:
dpg.add_input_text(tag="open_remote_run_host", label="host")
dpg.add_input_text(tag="open_remote_run_path", label="path")
def open_remote_runs_window():
dpg.configure_item(remote_runs_w, show=True, width=200)
dpg.set_item_pos(remote_runs_w, [200, 200])
def close_remote_runs_window():
dpg.configure_item(remote_runs_w, show=False)
def on_ok():
open_remote_run(
dpg.get_value("open_remote_run_host"),
dpg.get_value("open_remote_run_path"),
)
close_remote_runs_window()
with dpg.group(horizontal=True):
dpg.add_button(label="OK", callback=on_ok)
dpg.add_button(label="Cancel", callback=close_remote_runs_window)
dpg.add_menu_item(label="Open remote...", callback=open_remote_runs_window)
# open configs window
dpg.add_menu_item(label="Open configs window", callback=create_config_window)
with dpg.menu(label="Plots"):
with dpg.menu(label="Set grid"):
dpg.add_menu_item(label="1", callback=lambda: set_plot_grid("1"))
dpg.add_menu_item(label="2x1", callback=lambda: set_plot_grid("2x1"))
dpg.add_menu_item(label="2x2", callback=lambda: set_plot_grid("2x2"))
with dpg.menu(label="Debug"):
dpg.add_menu_item(label="open dpg debug window", callback=dpg.show_debug)
dpg.add_menu_item(
label="Open item registry",
callback=lambda: dpg.show_tool(dpg.mvTool_ItemRegistry),
)
parser = argparse.ArgumentParser()
parser.add_argument(
"-i", "--input-files", nargs="*", help="List of files to open at startup"
)
parser.add_argument(
"-c",
"--custom-metrics",
nargs="*",
help="List of modules where custom metrics are defined in a 'custom_metrics' variable. A custom metrics should be a function, taking as input the run_metrics dict and outputting a list of values or a tuple (steps, values)",
)
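# A minimal sketch of a custom metrics module, based on the help text above
# (module and function names here are illustrative, not shipped with this tool):
#
#   # my_metrics.py
#   def smoothed_loss(run_metrics):
#       m = run_metrics["my_run"]["loss"]  # assumes a run named "my_run" with a "loss" metric
#       return m["steps"], [v * 0.5 for v in m["values"]]
#   custom_metrics = [smoothed_loss]
#
# It would then be loaded with: -c path/to/my_metrics (no .py suffix).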
args = parser.parse_args()
if args.input_files:
for f in args.input_files:
if f.startswith("ssh:"):
host, path, *_ = f[4:].split(":")
open_remote_run(host, path)
else:
open_run(f.rstrip("/"))
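# Example invocation (paths and host are illustrative):
#   python dashboard.py -i runs/exp1 ssh:gpu-box:/home/user/runs/exp2
# Remote entries use the "ssh:<host>:<path>" form parsed above.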
if args.custom_metrics:
run_metrics["custom"] = {}
for module_path in args.custom_metrics:
module_root_dir = os.path.abspath(os.path.dirname(module_path))
sys.path.append(module_root_dir)
mod_name = os.path.basename(module_path)
module = importlib.import_module(mod_name)
for f in module.custom_metrics:
try:
metrics = f(run_metrics)
            except Exception:
                print(f"exception computing custom metric {f.__name__}", file=sys.stderr)
                traceback.print_exc(file=sys.stderr)
                sys.exit(1)
neg_error = None
pos_error = None
if isinstance(metrics, tuple):
if len(metrics) == 2:
steps, values = metrics
else:
steps, values, neg_error, pos_error = metrics
else:
steps, values = (list(range(len(metrics))), metrics)
run_metrics["custom"][f.__name__] = {"values": values, "steps": steps}
            if neg_error is not None:
                assert pos_error is not None
                run_metrics["custom"][f.__name__]["error"] = (neg_error, pos_error)
dpg.setup_dearpygui()
dpg.show_viewport()
dpg.set_primary_window("metrics_window", True)
dpg.start_dearpygui()
dpg.destroy_context()
|
/scared_dashboard-0.1.0.tar.gz/scared_dashboard-0.1.0/scared/dashboard.py
| 0.570331 | 0.209975 |
dashboard.py
|
pypi
|
import numpy as np
import scarf.instance
__all__ = ["gen_random_instance"]
def _id_2_tuple(num_single, idx):
if idx < num_single:
return idx
else:
couple_idx = idx - num_single
return (couple_idx // 2, couple_idx % 2)
def gen_random_instance(num_single, num_couple, num_hospital,
num_additional_seat=0,
single_pref_len=0, couple_pref_len=0, ihp=True):
"""Generate a uniform random instance.
Generate a stable matching instance where single doctor's preference lists are
chosen uniformly at random, each hospital's ranking of doctors are also chosen
at random. The couple's preference lists are chosen at random from all lists
that are unemployment averse.
Args:
num_single: int
Number of single doctors.
num_couple: int
Number of coupled doctors.
num_hospital: int
Number of hospitals.
num_additional_seat: int, optional
            The total number of seats will be the number of applicants plus
            num_additional_seat, subject to every hospital having at least one
            seat. Default is 0.
single_pref_len: int, optional
            Length of each single doctor's preference list. Any hospital outside
            the preference list is worse than the unemployment option. Default:
            generate a full preference list.
couple_pref_len: int, optional
            Length of each couple's preference list. Any hospital pair outside
            the preference list is worse than the unemployment option. Default:
            generate a full preference list.
ihp: bool, optional
If True (default), hospitals will share the same preference on individual
doctors. Otherwise hospitals will have independent preference lists.
Returns:
A `ScarfInstance` object.
"""
num_applicant = num_single + 2 * num_couple
single_pref_list = np.argsort(
np.random.rand(num_single, num_hospital)
).tolist()
if single_pref_len:
for s in range(num_single):
single_pref_list[s] = single_pref_list[s][:single_pref_len]
num_hospital_pair = (num_hospital + 1) ** 2 - 1
couple_pref_seed = np.random.rand(num_couple, num_hospital_pair)
couple_pref_seed += np.array(
[(pid // (num_hospital + 1) == num_hospital) or
(pid % (num_hospital + 1) == num_hospital)
for pid in range(num_hospital_pair)], dtype=np.float64)
    # plans with unemployment are ranked lower
couple_pref_list = np.argsort(couple_pref_seed).tolist()
wrap = lambda x: x if x < num_hospital else -1
cpl = min(couple_pref_len,
num_hospital_pair) if couple_pref_len > 0 else num_hospital_pair
couple_pref_list = [
[(wrap(pid // (num_hospital + 1)),
wrap(pid % (num_hospital + 1))) for pid in li[:cpl]]
for li in couple_pref_list
]
if ihp:
hospital_pref_list = np.argsort(
np.random.rand(num_applicant)
).tolist()
hospital_pref_list = [
_id_2_tuple(num_single, i) for i in hospital_pref_list
]
else:
hospital_pref_list = np.argsort(
np.random.rand(num_hospital, num_applicant)
).tolist()
hospital_pref_list = [
[_id_2_tuple(num_single, i) for i in li] for li in hospital_pref_list
]
hosp_seat = np.random.randint(
num_hospital,
size=max(num_applicant - num_hospital + num_additional_seat, 0)
) # assign each seat randomly to a hospital
hospital_cap = []
for h in range(num_hospital):
hospital_cap.append(int(sum(hosp_seat == h) + 1))
return scarf.instance.ScarfInstance(
single_pref_list=single_pref_list,
couple_pref_list=couple_pref_list,
hospital_pref_list=hospital_pref_list,
hospital_cap=hospital_cap
)
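# A small usage sketch (parameter values are illustrative):
#   import scarf.random
#   inst = scarf.random.gen_random_instance(
#       num_single=5, num_couple=2, num_hospital=4,
#       num_additional_seat=3, single_pref_len=3)
#   # inst is a scarf.instance.ScarfInstance built from the generated lists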
|
/scarfmatch-0.0.3.tar.gz/scarfmatch-0.0.3/scarf/random.py
| 0.573798 | 0.368946 |
random.py
|
pypi
|
import numpy as np
def check_single_preflist(s, pair_list):
assert(np.all([len(p) == 2 and p[0] == s for p in pair_list]))
assert(pair_list[-1] == (s, -1))
def check_couple_preflist(c, pair_list):
assert(np.all([len(p) == 3 and p[0] == c for p in pair_list]))
assert(pair_list[-1] == (c, -1, -1))
def check_hospital_preflist(h, pair_list):
    # use a list: np.all() on a bare generator object is always truthy
    assert(np.all([h in p for p in pair_list]))
assert(pair_list[-1] == (-1, h))
def recover_pref_lists(num_single, num_couple, num_hospital, U,
pair_list):
"""Recover preference list from utility matrix."""
assert(U.shape[0] == num_single + num_couple + num_hospital)
num_hospital_pair = (num_hospital + 1) ** 2 - 1
assert(U.shape[1] == len(pair_list))
    # Sort descending by negating the array
orders_U = np.argsort(-U)
single_pref_list, couple_pref_list, hospital_pref_list = [], [], []
for s in range(num_single):
cols = orders_U[s]
single_s_pair_list = [pair_list[col] for col in cols if U[s][col] < 0]
check_single_preflist(s, single_s_pair_list)
        # The last one is (s, -1), the unemployment option; remove it
single_pref_list.append([p[1] for p in single_s_pair_list[:-1]])
for c in range(num_couple):
cols = orders_U[num_single + c]
couple_c_pair_list = [pair_list[col] for col in cols
if U[num_single + c][col] < 0]
check_couple_preflist(c, couple_c_pair_list)
        # The last one is (c, -1, -1), the unemployment option; remove it
couple_pref_list.append([p[1:] for p in couple_c_pair_list[:-1]])
for h in range(num_hospital):
cols = orders_U[num_single + num_couple + h]
hospital_h_pref_list = [pair_list[col] for col in cols
if U[num_single + num_couple + h][col] < 0]
check_hospital_preflist(h, hospital_h_pref_list)
        # The last one is (-1, h), the unemployment option; remove it
hospital_pref_list.append(hospital_h_pref_list[:-1])
return single_pref_list, couple_pref_list, hospital_pref_list
def create_hospital_pref_on_pairs(num_hospital, h, one_hospital_pref_list,
single_pref_list, couple_pref_list):
"""Create hospital's preference on pairs given preference on individuals."""
pair_pref_list = []
for i in one_hospital_pref_list:
if isinstance(i, int):
if h in single_pref_list[i]:
pair_pref_list += [(i, h)]
else:
c, j = i # couple c, member j
# find out if this member is the better one
member_j_position = one_hospital_pref_list.index((c, j))
other_member_position = one_hospital_pref_list.index((c, 1 - j))
is_better_member = member_j_position < other_member_position
# filter out the pairs related to hospital h and (c, j)
            # where (c, j) is the worse member assigned to hospital h in this pair
            def is_relevant_pair(p):
                return p[j] == h and not (p == (h, h) and is_better_member)
            couple_c_pref_list = list(filter(
                is_relevant_pair,
                couple_pref_list[c]
            ))
pair_pref_list += [(c,) + p for p in couple_c_pref_list]
return pair_pref_list
def check_stable(U, basis):
"""Check if a basis is ordinal basis for a utility matrix.
Args:
U: (m, n) utility matrix.
basis: a list of m column indices
Returns:
True if `basis` is an ordinal basis of `U`
"""
U_pt = U + np.tile(
np.linspace(start=0.5, stop=0.0, num=U.shape[1]),
(U.shape[0], 1))
rowmins = np.min(U[:, basis], axis=1, keepdims=True)
U_dom = U <= rowmins
return np.all(np.any(U_dom, axis=0))
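# Illustrative example: with basis columns [0, 1], every column of U is weakly
# dominated by the basis row-minimum in at least one row, so the basis is ordinal.
#   U = np.array([[-1.0, -2.0, -3.0],
#                 [-3.0, -1.0, -2.0]])
#   check_stable(U, [0, 1])  # True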
def check_feasible(A, basis, alloc, b, tol=1e-6):
"""Check if a basis is a feasible basis for a polytope.
Args:
A: (m, n) constraint matrix.
basis: a list of m column indices
alloc: allocation vector
b: the right hand side vector of size (m,)
Returns:
True if `basis` is a feasible basis of the polytope `Ax=b, x>=0`.
"""
return np.all(np.dot(A[:, basis], alloc) <= np.array(b) + tol)
|
/scarfmatch-0.0.3.tar.gz/scarfmatch-0.0.3/scarf/utils.py
| 0.783243 | 0.526708 |
utils.py
|
pypi
|
from abc import ABC, abstractmethod
from typing import Optional
from urllib.parse import urlencode
import requests
class ScarletSharkClient(ABC):
version: str
api_actions: dict = {}
api_key: str
print_json = False
def __init__(
self, api_key: str, print_json: bool = False):
self.api_key = api_key
self.print_json = print_json
def _resolve_url(self, action_name: str, params) -> str:
if action_name not in self.api_actions:
raise Exception(f'The action [{action_name}] is not supported')
query_parameters: dict = {}
for k, v in params.items():
if k != 'self' and v:
key = k
if k == 'ip' and action_name == 'search_ip':
key = 'ips[]'
elif k == 'email' and action_name == 'search_email':
key = 'emails[]'
elif k == 'url' and action_name == 'search_url':
key = 'urls[]'
query_parameters[key] = v
if not query_parameters:
            raise Exception(f'At least one query parameter has to be specified in [{params}]')
endpoint = self.api_actions[action_name]
query_string = urlencode(query_parameters)
return f'{self.version}{endpoint}?{query_string}'
def _prepare_request(self, uri: str):
headers = {
'Authorization': f'Bearer {self.api_key}'
}
base_url = 'https://api.scarletshark.com/'
url = f'{base_url}{uri}'
response = requests.get(url, headers=headers)
if response.status_code == 200:
r = response.json()
if int(r.get('result_code', -1)) < 0:
raise Exception(r.get('result').get('message'))
result = r.get('result')
if self.print_json:
import json
print(json.dumps(result, indent=2, sort_keys=True))
return result
return None
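    # A hypothetical sketch of a concrete subclass (the real clients live in the
    # sibling modules of this package; the version string and endpoint name below
    # are made up for illustration):
    #
    #   class ExampleClient(ScarletSharkClient):
    #       version = 'v0.0/'
    #       api_actions = {'search_dns': 'search_dns.php'}
    #       def search_dns(self, ip=None, hostname=None, nonce=None):
    #           uri = self._resolve_url('search_dns', locals())
    #           return self._prepare_request(uri)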
@abstractmethod
def search_dns(
self,
ip: Optional[str] = None,
hostname: Optional[str] = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns known hostname and IP associations from the Scarlet Shark database. These associations are mostly active IP lookups.
:param ip: String [optional] - IP address to find hostnames for
:param hostname: String [optional] - hostname to find IPs for
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Returns known hostname and IP associations from the Scarlet Shark database
"""
pass
@abstractmethod
def search_domain(
self,
domain: str,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns information on the given domain.
:param domain: String - The domain to search for. The domain will automatically be changed from Unicode to an IDNA ASCII-compatible format
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Returns information on the given domain.
"""
pass
@abstractmethod
def search_email(
self,
email: str,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns threat information for the given email addresses.
:param email: String - Email address to search for threat data on
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Returns threat information for the given email addresses.
"""
pass
@abstractmethod
def search_hash(
self,
sha256: Optional[str] = None,
md5: Optional[str] = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns information on either a SHA256 or a MD5 hash.
:param sha256: String [optional] - The SHA256 hash to search for
:param md5: String [optional] - The MD5 hash to search for
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Returns information on either a SHA256 or a MD5 hash.
"""
pass
@abstractmethod
def search_ip(
self,
ip: str,
context: Optional[str] = None,
time_period: Optional[int] = None,
time_zone: Optional[str] = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Looks up information for an IP and any threat intel information about that IP.
:param ip: String - v4 or v6 IP address
:param context: String [optional] – Possible values: [user_activity, none] - The context of the IP look up. This helps give a more accurate threat classification.
:param time_period: Integer - The number of days to show security issues for.
:param time_zone: String [optional] - PHP Time Zone Strings. Results will be returned in the given time zone. UTC is the default. See: https://www.php.net/manual/en/timezones.php
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Looks up information for an IP and any threat intel information about that IP
"""
pass
@abstractmethod
def search_network(
self,
ip: str = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
        Looks up network information about a given IP.
:param ip: String - v4 or v6 IP address
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
        :return: Network information about the given IP
"""
pass
@abstractmethod
def search_threat_actors(
self,
query: Optional[str] = None,
threat_actor_id: Optional[int] = None,
vertical: Optional[str] = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns information about a given threat actor or threat actors targeting a given vertical.
:param query: String [optional] - Search string to match against threat actor aliases
:param threat_actor_id: Integer [optional] - The Scarlet Shark threat_actor_id to search by
:param vertical: String [optional] - The vertical to search by
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Information about a given threat actor or threat actors targeting a given vertical
"""
pass
@abstractmethod
def search_threat_tools(
self,
query: Optional[str] = None,
threat_actor_id: Optional[int] = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns information about a given threat tool. The threat tool can be malware or a legitimate tool.
:param query: String [optional] - Search string to match against threat tool aliases
:param threat_actor_id: Integer [optional] - The Scarlet Shark threat_tool_id to search by
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Information about a given threat tool. The threat tool can be malware or a legitimate tool.
"""
pass
@abstractmethod
def search_url(
self,
url: str,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Looks up threat information for the given URL.
:param url: String - URL to search for threat data on.
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Looks up threat information for the given URL
"""
pass
|
/scarlet-shark-client-1.0.5.tar.gz/scarlet-shark-client-1.0.5/scarlet_shark_client/clients/abstract.py
| 0.882371 | 0.298453 |
abstract.py
|
pypi
|
from abc import ABC, abstractmethod
from typing import Optional
from urllib.parse import urlencode
import requests
class ScarletSharkClient(ABC):
version: str
api_actions: dict = {}
api_key: str
print_json = False
def __init__(
self, api_key: str, print_json: bool = False):
self.api_key = api_key
self.print_json = print_json
def _resolve_url(self, action_name: str, params) -> str:
if action_name not in self.api_actions:
raise Exception(f'The action [{action_name}] is not supported')
query_parameters: dict = {}
for k, v in params.items():
if k != 'self' and v:
query_parameters[k] = v
if not query_parameters:
            raise Exception(f'At least one query parameter has to be specified in [{params}]')
endpoint = self.api_actions[action_name]
query_string = urlencode(query_parameters)
return f'{self.version}{endpoint}?{query_string}'
def _prepare_request(self, uri: str):
headers = {
'Authorization': f'Bearer {self.api_key}'
}
base_url = 'https://api.scarletshark.com/'
url = f'{base_url}{uri}'
response = requests.get(url, headers=headers)
if response.status_code == 200:
r = response.json()
if int(r.get('result_code', -1)) < 0:
raise Exception(r.get('result').get('message'))
result = r.get('result')
if self.print_json:
import json
print(json.dumps(result, indent=2, sort_keys=True))
return result
return None
@abstractmethod
def search_dns(
self,
ip: Optional[str] = None,
hostname: Optional[str] = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns known hostname and IP associations from the Scarlet Shark database. These associations are mostly active IP lookups.
:param ip: String [optional] - IP address to find hostnames for
:param hostname: String [optional] - hostname to find IPs for
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Returns known hostname and IP associations from the Scarlet Shark database
"""
pass
@abstractmethod
def search_domain(
self,
domain: str,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns information on the given domain.
:param domain: String - The domain to search for. The domain will automatically be changed from Unicode to an IDNA ASCII-compatible format
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Returns information on the given domain.
"""
pass
@abstractmethod
def search_email(
self,
emails: list[str],
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns threat information for the given email addresses.
:param emails: String Array - Email addresses to search for threat data on
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Returns threat information for the given email addresses.
"""
pass
@abstractmethod
def search_hash(
self,
sha256: Optional[str] = None,
md5: Optional[str] = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns information on either a SHA256 or a MD5 hash.
:param sha256: String [optional] - The SHA256 hash to search for
:param md5: String [optional] - The MD5 hash to search for
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Returns information on either a SHA256 or a MD5 hash.
"""
pass
@abstractmethod
def search_ip(
self,
ips: list[str],
context: Optional[str] = None,
time_period: Optional[int] = None,
time_zone: Optional[str] = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Looks up information for an IP and any threat intel information about that IP.
:param ips: String array - v4 or v6 IP address
:param context: String [optional] – Possible values: [user_activity, none] - The context of the IP look up. This helps give a more accurate threat classification.
:param time_period: Integer - The number of days to show security issues for.
:param time_zone: String [optional] - PHP Time Zone Strings. Results will be returned in the given time zone. UTC is the default. See: https://www.php.net/manual/en/timezones.php
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Looks up information for an IP and any threat intel information about that IP
"""
pass
@abstractmethod
def search_network(
self,
ip: str = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
        Looks up network information about a given IP.
:param ip: String - v4 or v6 IP address
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
        :return: Network information about the given IP
"""
pass
@abstractmethod
def search_threat_actors(
self,
query: Optional[str] = None,
threat_actor_id: Optional[int] = None,
vertical: Optional[str] = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns information about a given threat actor or threat actors targeting a given vertical.
:param query: String [optional] - Search string to match against threat actor aliases
:param threat_actor_id: Integer [optional] - The Scarlet Shark threat_actor_id to search by
:param vertical: String [optional] - The vertical to search by
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Information about a given threat actor or threat actors targeting a given vertical
"""
pass
@abstractmethod
def search_threat_tools(
self,
query: Optional[str] = None,
threat_actor_id: Optional[int] = None,
nonce: Optional[int] = None) -> Optional[dict]:
"""
Returns information about a given threat tool. The threat tool can be malware or a legitimate tool.
:param query: String [optional] - Search string to match against threat tool aliases
:param threat_actor_id: Integer [optional] - The Scarlet Shark threat_tool_id to search by
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Information about a given threat tool. The threat tool can be malware or a legitimate tool.
"""
pass
@abstractmethod
def search_url(
self,
urls: list[str],
nonce: Optional[int] = None) -> Optional[dict]:
"""
Looks up threat information for the given URLs.
:param urls: String Array - URLs to search for threat data on. The domain of each URL will automatically be changed from Unicode to an IDNA ASCII-compatible format
:param nonce: Integer [optional] - A nonce that is returned, if provided in the request
:return: Looks up threat information for the given URLs
"""
pass
|
/scarlet_shark_python-1.0.3-py3-none-any.whl/scarlet_shark_python/clients/abstract.py
| 0.910326 | 0.303061 |
abstract.py
|
pypi
|
from django.forms.widgets import ClearableFileInput
from django.template.loader import render_to_string
from django.utils.html import escape
from django.utils.safestring import mark_safe
try:
from ..cms.internal_tags.fields import TaggedRelationWidget
from ..cms.widgets import APIChoiceWidget
except ValueError:
from cms.internal_tags.fields import TaggedRelationWidget
from cms.widgets import APIChoiceWidget
class AssetsFileWidget(TaggedRelationWidget):
crop_link = "crops/{0}/edit/"
def get_qs(self):
qs = super().get_qs()
if self.asset_type:
qs["ftype"] = self.asset_type
return qs
def get_add_qs(self):
qs = self.get_qs()
if "ftype" in qs:
qs["type"] = qs.pop("ftype")
return qs
def get_crop_sizes(self):
from . import get_image_cropper
sizes = []
if self.sizes:
for x in self.sizes:
crop = get_image_cropper().get_crop_config(x)
if crop and crop.editable:
sizes.append(
{
"name": crop.name,
"width": crop.width,
"height": crop.height,
"post_link": self.crop_link.format(x),
}
)
return sizes
def render(self, name, value, attrs=None, renderer=None):
obj = self.obj_for_value(value)
# Go directly to parent of APIChoiceWidget to get input
hidden_input = super(APIChoiceWidget, self).render(name, value, attrs=attrs, renderer=None)
context = {
"hidden_input": hidden_input,
"object": obj,
"asset_type": self.asset_type,
"asset_tags": self.tags,
"link": self.get_api_link(),
"add_link": self.get_add_link(),
"base_api_link": self._api_link,
"sizes": self.get_crop_sizes(),
"required_tags": self.required_tags,
}
html = render_to_string("assets/asset_widget.html", context)
return mark_safe(html)
class RawImageWidget(ClearableFileInput):
template_with_initial = (
"%(initial_text)s: %(initial)s %(clear_template)s<br />%(input)s"
)
def render(self, name, value, attrs=None, renderer=None):
thumbnail = None
data = super().render(name, value, attrs, renderer=None)
if value and hasattr(value, "admin_url"):
thumbnail = value.admin_url()
if thumbnail:
data = mark_safe(
'<p class="widget-asset-simple"><span class="widget-asset-simple-preview" style="background-image:url({0})"></span>{1}</p>'.format(
escape(thumbnail), data
)
)
return data
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/assets/widgets.py
| 0.485112 | 0.210219 |
widgets.py
|
pypi
|
import taggit.managers
from django.db import migrations, models
import scarlet.assets.fields
import scarlet.assets.utils
class Migration(migrations.Migration):
dependencies = [
("taggit", "0001_initial"),
]
operations = [
migrations.CreateModel(
name="Asset",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
("title", models.CharField(max_length=255)),
(
"file",
scarlet.assets.fields.AssetRealFileField(
upload_to=scarlet.assets.utils.assets_dir
),
),
(
"type",
models.CharField(
db_index=True,
max_length=255,
choices=[
(b"unknown", b"Unknown"),
(b"image", b"Image"),
(b"document", b"Document"),
(b"audio", b"Audio"),
(b"video", b"Video"),
],
),
),
("slug", models.SlugField(unique=True, max_length=255)),
("user_filename", models.CharField(max_length=255)),
("created", models.DateTimeField(auto_now_add=True)),
("modified", models.DateTimeField(auto_now=True)),
("cbversion", models.PositiveIntegerField(editable=False)),
(
"tags",
taggit.managers.TaggableManager(
to="taggit.Tag",
through="taggit.TaggedItem",
help_text="A comma-separated list of tags.",
verbose_name="Tags",
),
),
],
options={"abstract": False,},
bases=(models.Model,),
),
migrations.CreateModel(
name="ImageDetail",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
("width", models.PositiveIntegerField()),
("height", models.PositiveIntegerField()),
("name", models.CharField(max_length=255)),
("editable", models.BooleanField(default=False, editable=False)),
("x", models.PositiveIntegerField(null=True)),
("x2", models.PositiveIntegerField(null=True)),
("y", models.PositiveIntegerField(null=True)),
("y2", models.PositiveIntegerField(null=True)),
(
"image",
models.ForeignKey(
to="assets.Asset", on_delete=models.deletion.CASCADE,
),
),
],
options={"abstract": False,},
bases=(models.Model,),
),
]
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/assets/migrations/0001_initial.py
| 0.489503 | 0.266853 |
0001_initial.py
|
pypi
|
from django.core.management.base import BaseCommand
from django.db.models.loading import get_models
try:
from django.db.transaction import atomic
except ImportError:
from django.db.transaction import commit_on_success as atomic
from ...fields import AssetsFileField
from ...models import Asset
class Command(BaseCommand):
args = None
help = "Make sure all uploaded files have the minumum required tags"
def handle(self, *args, **options):
seen = {}
with atomic():
for m in get_models():
if hasattr(m._meta, "_view_model") and not (m._meta, "is_view", False):
continue
for field in m._meta.local_fields:
if isinstance(field, AssetsFileField):
assert isinstance(field.asset_tags, tuple), (field.name, m)
assert isinstance(field.required_tags, tuple), (field.name, m)
qs = m.objects.filter().values_list(field.name, flat=True)
ids = set([x for x in qs if x])
s = seen.get(field.asset_type, set())
s = s.union(ids)
for t in Asset.TYPES:
if t[0] != field.asset_type:
has = seen.get(t[0], set())
double = s.intersection(has)
if double:
raise Exception(
f"{double} are in {field.asset_type} and {t[0]}"
)
seen[field.asset_type] = s
if ids:
Asset.objects.filter(pk__in=ids).update(
type=field.asset_type
)
for asset in Asset.objects.filter(pk__in=ids):
has = set([a.name for a in asset.tags.all()])
needs = set(field.asset_tags).difference(has)
for t in needs:
asset.tags.add(t.lower())
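# Typical usage as a Django management command (project layout may differ):
#   ./manage.py update_model_tags
# It walks every AssetsFileField, reassigns the asset type, and adds any missing
# required tags inside a single transaction.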
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/assets/management/commands/update_model_tags.py
| 0.497559 | 0.187951 |
update_model_tags.py
|
pypi
|
import django.db.models.deletion
from django.db import migrations, models
import scarlet.assets.fields
import scarlet.assets.utils
import scarlet.cms.fields
class Migration(migrations.Migration):
initial = True
dependencies = [
('assets', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='HotSpot',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('x_cord', models.IntegerField()),
('y_cord', models.IntegerField()),
('overlay_size_x', models.IntegerField()),
('overlay_size_y', models.IntegerField()),
('order', scarlet.cms.fields.OrderField(db_index=True, default=0, verbose_name=b'Pin number')),
('label', models.CharField(blank=True, max_length=255, verbose_name=b'Pin title')),
('text', scarlet.cms.fields.HTMLTextField(blank=True)),
('video_json', models.TextField(blank=True)),
('image_cache', scarlet.assets.fields.AssetRealFileField(blank=True, editable=False, max_length=255, upload_to=scarlet.assets.utils.assets_dir)),
('icon_cache', scarlet.assets.fields.AssetRealFileField(blank=True, editable=False, max_length=255, upload_to=scarlet.assets.utils.assets_dir)),
('icon', scarlet.assets.fields.AssetsFileField(blank=True, denormalize=False, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='+', to='assets.Asset', verbose_name=b'Icon')),
('image', scarlet.assets.fields.AssetsFileField(blank=True, denormalize=False, null=True, on_delete=django.db.models.deletion.PROTECT, related_name='+', to='assets.Asset')),
],
options={
'verbose_name': 'Hotspot',
'verbose_name_plural': 'Hotspots',
},
),
migrations.CreateModel(
name='HotSpotModule',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('slug', models.SlugField(blank=True)),
('intro_copy', models.TextField(blank=True)),
('image_cache', scarlet.assets.fields.AssetRealFileField(editable=False, max_length=255, upload_to=scarlet.assets.utils.assets_dir)),
('image', scarlet.assets.fields.AssetsFileField(denormalize=False, on_delete=django.db.models.deletion.PROTECT, related_name='+', to='assets.Asset')),
],
options={
'verbose_name': 'Hotspot module',
'verbose_name_plural': 'Hotspot modules',
},
),
migrations.AddField(
model_name='hotspot',
name='module',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='hotspots', to='hotspots.HotSpotModule'),
),
]
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/hotspots/migrations/0001_initial.py
| 0.45181 | 0.25965 |
0001_initial.py
|
pypi
|
import random
import re
from functools import update_wrapper
from django import http
from django.conf.urls import include, url
from django.urls import reverse
from django.utils.decorators import classonlymethod
from django.utils.safestring import mark_safe
from . import actions, helpers, options, views
from .item import VersionsList
# Constant that indicates an attribute points to its parent
PARENT = "parent"
ACTION_ALIAS = "_action"
def create_new_viewclass(base, **kwargs):
# Create a new view class based on a view instance
data = {}
kwargs.update(getattr(base, "changed_kwargs", {}))
for k, v in list(kwargs.items()):
if hasattr(base, k):
data[k] = v
if isinstance(base, views.CMSView):
name = f"{base.__class__.__name__}{hex(id(base))}{random.random()}"
parent = base.__class__
else:
name = base.__name__ + "Sub"
parent = base
return type(name, (parent,), data)
class PromiseBundle:
def __init__(self, cls, name=None, title=None, title_plural=None):
assert name
self.name = name
self.title = title
self.title_plural = title_plural
self.cls = cls
self.initialized = None
def __call__(self, child_name, parent, site):
return self.cls(
name=self.name,
title=self.title,
title_plural=self.title_plural,
parent=parent,
attr_on_parent=child_name,
site=site,
)
@staticmethod
def hidden_name(name):
return f"_{name}_promise"
class URLAlias:
"""
Alias urls to some other view or bundle. Aliases
created in this way will not be added to the actual
urls in the cms site. But when a url is requested
for an attribute on a bundle that points to a URLAlias
instance, whether that happens through a template tag
    or one of the bundle's view getter methods, the url or view
returned will be the one for the aliased name/bundle.
:param bundle_attr: The name of the bundle that this alias \
points to. None means the current bundle, using the `PARENT` \
constant means the view name will be looked up on the \
parent bundle. Defaults to None.
:param alias_to: The name of the view that you want this \
to point to instead. Defaults to None.
"""
def __init__(self, bundle_attr=None, alias_to=None):
self.bundle_attr = bundle_attr
self.alias_to = alias_to
def get_bundle(self, current_bundle, url_kwargs, context_kwargs):
"""
Returns the bundle to get the alias view from.
        If 'self.bundle_attr' is set, the bundle that it points to
will be returned, otherwise the current_bundle will be
returned.
"""
if self.bundle_attr:
if self.bundle_attr == PARENT:
return current_bundle.parent
view, name = current_bundle.get_view_and_name(self.bundle_attr)
return view
return current_bundle
def get_view_name(self, requested):
"""
Returns the name of the view to lookup.
If `requested` is equal to 'self.bundle_attr' then
'main' will be returned. Otherwise if `self.alias_to`
        is set, its value will be returned. Otherwise
the `requested` itself will be returned.
"""
        value = self.alias_to or requested
if value == self.bundle_attr:
return "main"
return value
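    # Example (mirrors DelegatedObjectBundle further down in this module):
    #   delete = URLAlias(bundle_attr="edit")  # serve "delete" from the edit bundle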
class ViewAlias(URLAlias):
"""
    Works the same as URLAlias except that it allows
    you to reuse a view registered somewhere else
    at a different url on this bundle.
"""
pass
class BundleMeta(type):
"""
Metaclass for bundle that gathers the known views,
subbundles and meta options from all the parent classes.
"""
def __new__(cls, name, bases, attrs):
meta = options.Meta()
_children = set()
_views = set()
# Copy views from bases along with meta
for base in bases[::-1]:
val = getattr(base, "_views", None)
if val and isinstance(val, tuple):
_views = _views.union(set(base._views))
val = getattr(base, "_children", None)
if val and isinstance(val, tuple):
_children = _children.union(set(base._children))
if hasattr(base, "_meta"):
meta.add_meta(base._meta)
m = attrs.pop("Meta", None)
meta.add_meta(m)
for k, v in list(attrs.items()):
if isinstance(v, PromiseBundle):
_children.add(k)
_views.add(k)
attrs[v.hidden_name(k)] = v
elif isinstance(v, views.CMSView):
_views.add(k)
elif isinstance(v, ViewAlias):
_views.add(k)
for v in _children:
attrs.pop(v, None)
attrs["_children"] = tuple(_children)
attrs["_views"] = tuple(_views)
attrs["_meta"] = meta
cls = super().__new__(cls, name, bases, attrs)
return cls
class Bundle(metaclass=BundleMeta):
"""
Base bundle class. A bundle is a class that is meant to group together
CMSViews and other bundle classes. It contains some methods to
help the views know where to find each other, keep track of their
url parameters and provide page navigation and headers.
Views and sub bundles are specified as class attributes when
creating a new Bundle class.
    Each bundle class has an options class stored at _meta. When one bundle
inherits from another the meta class attributes are copied from all
base classes, with the normal resolution rules applying. The exception
is attributes containing a dictionary. In that case a copy of the
    dictionary from the more distant ancestor will be made and then updated
    with the dictionary from the closer one. The resulting new dictionary
is stored as the value for that attribute.
Any time you set the value of a class attribute to the constant
`PARENT` (also available on bundle instances as `self.parent_attr`)
you are saying that attribute should be looked up on the parent object.
This works for view attributes and some non view attributes like
`navigation` and `object_header`.
:param navigation: A list of tuples that represent the side navigation \
items for this bundle. The format is (attribute_name, title, url_kwargs). \
Title and url_kwargs are optional. If no title is given the title of the bundle
that the view is on will be used. Default is an empty tuple.
    :param dashboard: A list of tuples that represent the main navigation. \
    The format is the same as `navigation`. Default is an empty tuple.
    :param required_groups: A list of group names that a visitor must \
be a member of to access views in this bundle. Default is an empty tuple.
    :param live_groups: A list of group names that a visitor must \
    be a member of to access the `live_views` in this bundle. Default is None \
which means same as `required_groups`.
:param object_view: The name of the view that should be rendered as \
the object header. Defaults to 'delete'.
:param main_list: A URLAlias for 'main' used by main views as their \
default redirect target.
By default the following views are created:
* **main** - ListView
* **add*** - FormView
* **edit** - FormView
* **delete** - DeleteActionView
"""
parent_attr = PARENT
action_alias = ACTION_ALIAS
navigation = ()
dashboard = ()
required_groups = ()
live_groups = None
_children = ()
_views = ()
main = views.ListView()
add = views.FormView(force_add=True)
edit = views.FormView()
delete = actions.DeleteActionView()
main_list = URLAlias(alias_to="main")
object_view = "delete"
def __init__(
self,
title=None,
title_plural=None,
name=None,
parent=None,
attr_on_parent=None,
site=None,
):
assert name
self.name = name
self.title = title
self.title_plural = title_plural
self.admin_site = site
self._url_params = ()
self.attr_on_parent = attr_on_parent
self.parent = parent
if self.parent:
self.name = f"{self.parent.name}_{self.name}"
reg = rf"^{parent.get_regex_for_name(self.name, attr_on_parent)}"
url_params = list(re.compile(reg).groupindex.keys())
l = list(parent.url_params)
l.extend(url_params)
self._url_params = tuple(l)
if self.required_groups == self.parent_attr:
self.required_groups = self.parent.required_groups
self.item_regex = self._meta.item_regex_base % {"name": self.name}
# Only process defaults if we have a model
if self._meta.model:
if site and self._meta.primary_model_bundle:
site.register_model(self._meta.model, self)
added_views = []
action_views = set(self._meta.action_views)
for view in self._views:
v = getattr(self, view, None)
if v and isinstance(v, views.CMSView):
view_kwargs = self._meta.get_kwargs_for_view(view)
if self.live_groups and view in self._meta.live_views:
view_kwargs["required_groups"] = list(self.live_groups)
setattr(self, view, create_new_viewclass(v, **view_kwargs))
# Create aliases for action views
if view in action_views:
view_name = "{0}{1}".format(view, ACTION_ALIAS)
if not hasattr(self, view_name):
setattr(self, view_name, ViewAlias(alias_to=view))
added_views.append(view_name)
if added_views:
self._views = tuple(list(self._views) + added_views)
def set_admin_site(self, site):
self.admin_site = site
if site and self._meta.primary_model_bundle:
site.register_model(self._meta.model, self)
def _get_url_params(self):
return self._url_params
url_params = property(_get_url_params)
def get_object_header_view(
self, request, url_kwargs, parent_only=False, render_type="object_header"
):
"""
        An object header is the title block of a CMS page. Actions
        linked to in the header are based on this view's bundle.
        This returns a view instance and the view name of the view that
        should be rendered as an object header; the view used is specified
        in `self.object_view`. If no match is found, (None, None) is returned.
:param request: The request object
:param url_kwargs: Any url keyword arguments as a dictionary
:param parent_only: If `True` then the view will only \
        be rendered if object_view points to the parent. This is usually \
        what you want, to avoid extra lookups to get the object \
you already have.
:param render_type: The render type to use for the header. \
Defaults to 'object_header'.
"""
if parent_only and self.object_view != self.parent_attr:
return None, None
if self.object_view == self.parent_attr and self.parent:
return self.parent.get_object_header_view(
request, url_kwargs, render_type=render_type
)
elif self.object_view:
view, name = self.get_initialized_view_and_name(
self.object_view,
can_submit=False,
base_template="cms/partial.html",
request=request,
kwargs=url_kwargs,
render_type=render_type,
)
if view and view.can_view(request.user):
return view, name
return None, None
def get_string_from_view(
self, request, view_name, url_kwargs, render_type="string"
):
"""
Returns a string that is a rendering of the view given a
request, view_name, and the original url_kwargs. Makes the
        following changes to the view before rendering:
* Sets can_submit to False.
* Adds action_url to the context. This is the url where \
this view actually lives.
* Sets the default base_template to be 'cms/partial.html'
This will always call GET and never POST as any actions
that modify data should take place on the original
url and not like this.
:param request: The request object.
:param view_name: The name of the view that you want.
:param url_kwargs: The url keyword arguments that came \
with the request object. The view itself is responsible \
to remove arguments that would not be part of a normal match \
for that view. This is done by calling the `get_url_kwargs` \
method on the view.
:param render_type: The render type to use. Defaults to \
'string'.
"""
response = ""
try:
view, name = self.get_initialized_view_and_name(
view_name,
render_type=render_type,
can_submit=False,
base_template="cms/partial.html",
request=request,
kwargs=url_kwargs,
)
if isinstance(view, URLAlias):
view_name = view.get_view_name(view_name)
bundle = view.get_bundle(self, url_kwargs, {})
if bundle and isinstance(bundle, Bundle):
return bundle.get_string_from_view(
request, view_name, url_kwargs, render_type=render_type
)
elif view:
if view and name and view.can_view(request.user):
response = self._render_view_as_string(
view, name, request, url_kwargs
)
except http.Http404:
pass
return response
def _render_view_as_string(self, view, name, request, url_kwargs):
url_kwargs = view.get_url_kwargs()
url = reverse(f"admin:{name}", kwargs=url_kwargs)
view.add_to_render_data(action_url=url)
return mark_safe(view.as_string(request, **url_kwargs))
def get_view_url(
self,
view_name,
user,
url_kwargs=None,
context_kwargs=None,
follow_parent=True,
check_permissions=True,
):
"""
Returns the url for a given view_name. If the view isn't
found or the user does not have permission None is returned.
A NoReverseMatch error may be raised if the view was unable
to find the correct keyword arguments for the reverse function
from the given url_kwargs and context_kwargs.
:param view_name: The name of the view that you want.
:param user: The user who is requesting the view
:param url_kwargs: The url keyword arguments that came \
with the request object. The view itself is responsible \
to remove arguments that would not be part of a normal match \
for that view. This is done by calling the `get_url_kwargs` \
method on the view.
:param context_kwargs: Extra arguments that will be passed \
to the view for consideration in the final keyword arguments \
for reverse.
:param follow_parent: If we encounter a parent reference should \
we follow it. Defaults to True.
        :param check_permissions: Run permission checks. Defaults to True.
"""
view, url_name = self.get_initialized_view_and_name(
view_name, follow_parent=follow_parent
)
if isinstance(view, URLAlias):
view_name = view.get_view_name(view_name)
bundle = view.get_bundle(self, url_kwargs, context_kwargs)
if bundle and isinstance(bundle, Bundle):
return bundle.get_view_url(
view_name,
user,
url_kwargs=url_kwargs,
context_kwargs=context_kwargs,
follow_parent=follow_parent,
check_permissions=check_permissions,
)
elif view:
# Get kwargs from view
if not url_kwargs:
url_kwargs = {}
url_kwargs = view.get_url_kwargs(context_kwargs, **url_kwargs)
view.kwargs = url_kwargs
if check_permissions and not view.can_view(user):
return None
url = reverse(f"admin:{url_name}", kwargs=url_kwargs)
return url
def _view_uses_name_as_url_kwarg(self, view_name):
# Returns True if the given view_name uses
# self.name in url kwargs
view_name = view_name.replace(ACTION_ALIAS, "")
return (view_name in self._meta.item_views) or (
view_name in self._meta.action_views
)
def _get_slug_url_kwarg_for_name(self, view_name):
arg = None
if self._view_uses_name_as_url_kwarg(view_name):
arg = f"{self.name}_pk"
elif self.parent:
# Get the attribute from the parent so this can be chained
arg = self.parent._get_slug_url_kwarg_for_name(self.attr_on_parent)
return arg
def _get_view_kwargs(self, view, view_name):
kwargs = {}
if hasattr(view, "bundle"):
kwargs["bundle"] = self
if hasattr(view, "slug_url_kwarg"):
kwargs["slug_url_kwarg"] = self._get_slug_url_kwarg_for_name(view_name)
return kwargs
def get_initialized_view_and_name(
self, view_name, follow_parent=True, **extra_kwargs
):
"""
Creates and returns a new instance of a CMSView \
        and its url_name.
:param view_name: The name of the view to return.
:param follow_parent: If we encounter a parent reference should \
we follow it. Defaults to True.
:param extra_kwargs: Keyword arguments to pass to the view.
"""
view, name = self.get_view_and_name(view_name)
# Initialize the view with the right kwargs
if hasattr(view, "as_view"):
e = dict(extra_kwargs)
e.update(**self._get_view_kwargs(view, view_name))
e["name"] = view_name
view = view(**e)
        # It is a Bundle, so return its main view
elif isinstance(view, Bundle):
view, name = view.get_initialized_view_and_name("main", **extra_kwargs)
elif view == self.parent_attr and self.parent:
if follow_parent:
return self.parent.get_initialized_view_and_name(
view_name, **extra_kwargs
)
else:
view = None
name = None
return view, name
def get_single_title(self):
return self.get_title(plural=False)
def get_title(self, plural=True):
"""
        Gets the title of the bundle. Titles can be singular
or plural.
"""
value = self.title
if value == self.parent_attr:
return self.parent.get_title(plural=plural)
if not value and self._meta.model:
value = helpers.model_name(
self._meta.model,
self._meta.custom_model_name,
self._meta.custom_model_name_plural,
plural,
)
elif self.title and plural:
value = helpers.pluralize(self.title, self.title_plural)
return helpers.capfirst_if_needed(value)
def _get_bundle_from_promise(self, attname):
assert (
self.admin_site
), "You must specify an admin_site before initializing sub bundles"
attr = f"_{attname}_bundle"
view = getattr(self, attr, None)
if not view:
promise = getattr(self, PromiseBundle.hidden_name(attname), None)
if promise:
view = promise(attname, self, self.admin_site)
setattr(self, attr, view)
return view
def get_view_and_name(self, attname):
"""
Gets a view or bundle and returns it
        and its url_name.
"""
view = getattr(self, attname, None)
if attname in self._children:
view = self._get_bundle_from_promise(attname)
if view:
if attname in self._children:
return view, view.name
elif isinstance(view, ViewAlias):
view_name = view.get_view_name(attname)
bundle = view.get_bundle(self, {}, {})
if bundle and isinstance(bundle, Bundle):
view, name = bundle.get_view_and_name(view_name)
if hasattr(view, "as_view"):
if attname != "main":
name = f"{self.name}_{attname}"
else:
name = self.name
return view, name
elif view == self.parent_attr and self.parent:
return self.parent_attr, None
elif isinstance(view, URLAlias):
return view, None
return None, None
def get_regex_for_name(self, name, attname):
# Get the regex for this view
regex = ""
if name != self.name and attname != "main":
regex = f"{attname}/"
if hasattr(self._meta, f"{attname}_regex_base"):
regex = getattr(self._meta, f"{attname}_regex_base")
regex = regex % {"group_name": self.name, "attname": attname}
elif attname in self._meta.item_views or attname in self._meta.action_views:
regex = f"{self.item_regex}{regex}"
return regex
def get_url(self, name, view_obj, attname):
def wrap(view):
def wrapper(*args, **kwargs):
return self.admin_site.admin_view(view)(*args, **kwargs)
return update_wrapper(wrapper, view)
regex = self.get_regex_for_name(name, attname)
if isinstance(view_obj, Bundle):
reg = rf"^{regex}"
u = url(reg, include(view_obj.get_urls()))
else:
view_kwargs = self._get_view_kwargs(view_obj, attname)
u = url(rf"^{regex}$", wrap(view_obj.as_view(**view_kwargs)), name=name)
return u
def get_urls(self):
"""
Returns urls handling bundles and views.
        This processes the item views first, in order,
        and then adds any non-item views at the end.
"""
parts = []
seen = set()
# Process item views in order
for v in list(self._meta.item_views) + list(self._meta.action_views):
if v not in seen:
view, name = self.get_view_and_name(v)
if view and name:
parts.append(self.get_url(name, view, v))
seen.add(v)
# Process everything else that we have not seen
for v in set(self._views).difference(seen):
# Get the url name
view, name = self.get_view_and_name(v)
if view and name:
parts.append(self.get_url(name, view, v))
return parts
def _optional_tuples(self, tup):
for item in tup:
if len(item) == 1:
yield (item[0], None, None)
elif len(item) == 2:
yield (item[0], item[1], None)
else:
yield item
def _nav_from_tuple(self, request, tup, **kwargs):
navigation = []
for view_name, title, url_kwargs in self._optional_tuples(tup):
url = self.get_view_url(
view_name, request.user, url_kwargs=url_kwargs, context_kwargs=kwargs
)
if url:
if not title and view_name in self._children:
b = self._get_bundle_from_promise(view_name)
title = b.get_title()
elif not title:
title = self.get_title()
navigation.append((url, title))
return navigation
def get_dashboard_urls(self, request):
"""
Generates a list of tuples based on the values
in `self.dashboard` that are the main navigation links
for this bundle. The tuple format is (url, title).
"""
return self._nav_from_tuple(request, self.dashboard)
def get_dashboard_block(self, request):
"""
Returns a block of html for display on the dashboard.
"""
return None
def get_navigation(self, request, **kwargs):
"""
Generates a list of tuples based on the values
in `self.navigation` that are the side navigation links
for this bundle. The tuple format is (url, title).
"""
if self.navigation == self.parent_attr:
if self.parent:
return self.parent.get_navigation(request, **kwargs)
return ()
else:
return self._nav_from_tuple(request, self.navigation, **kwargs)
@classonlymethod
def as_subbundle(cls, name=None, title=None, title_plural=None):
"""
Wraps the given bundle so that it can be lazily
instantiated.
:param name: The slug for this bundle.
:param title: The verbose name for this bundle.
"""
return PromiseBundle(cls, name=name, title=title, title_plural=title_plural)
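# A minimal sketch of a concrete bundle (the model and titles are illustrative,
# not part of this module):
#
#   class ArticleBundle(Bundle):
#       navigation = (("main", "Articles"),)
#       dashboard = (("main",),)
#       class Meta:
#           model = Article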
class BlankBundle(Bundle):
"""
Base bundle that has no preset views. Should be used as a base
    for bundles that are not meant for typical CRUD operations.
"""
main = None
add = None
edit = None
delete = None
publish = None
versions = None
unpublish = None
main_list = None
class VersionMixin:
_views = ("publish", "unpublish", "versions")
publish = actions.PublishActionView()
unpublish = actions.UnPublishActionView()
versions = VersionsList()
class VersionedBundle(Bundle, VersionMixin):
"""
Base bundle for versioned models. Adds views for publishing,
un-publishing and managing versions.
"""
class Meta(options.VersionMeta):
pass
class DelegatedObjectBundle(Bundle):
"""
Base bundle that delegates the following views to use the
bundle specified by edit:
* delete
* publish
* unpublish
* versions
This is useful for bundles that contain a list but all the actions
for items in that list are specified on the sub bundle edit.
"""
delete = URLAlias(bundle_attr="edit")
publish = URLAlias(bundle_attr="edit")
unpublish = URLAlias(bundle_attr="edit")
versions = URLAlias(bundle_attr="edit")
delete_action = ViewAlias(bundle_attr="edit", alias_to="delete")
publish_action = ViewAlias(bundle_attr="edit", alias_to="publish")
unpublish_action = ViewAlias(bundle_attr="edit", alias_to="unpublish")
class Meta(options.VersionMeta):
pass
class ObjectOnlyBundle(Bundle):
"""
Base Bundle for sub bundles that do not contain a list
    page. Makes the following changes:
* Removes add.
* main is a FormView.
* edit points to PARENT, since that is what main is.
* main_list points to PARENT.
* The item views attribute of meta is set to be empty.
"""
add = None
main = views.FormView()
edit = PARENT
main_list = URLAlias(bundle_attr=PARENT)
delegated = True
class Meta:
item_views = ()
action_views = ()
live_views = ("delete", "publish", "unpublish", "versions")
class VersionedObjectOnlyBundle(ObjectOnlyBundle, VersionMixin):
"""
Same as ObjectOnlyBundle but adds version management views.
"""
pass
class ChildBundle(Bundle):
"""
Base Bundle for sub bundles. Makes the following changes:
* required_groups is inherited from PARENT.
"""
required_groups = PARENT
class Meta:
pass
class ParentVersionedBundle(ChildBundle):
"""
    Same as ChildBundle except that it also changes:
* object_view is inherited from PARENT.
"""
object_view = PARENT
class SingletonBundle(Bundle):
"""
    Bundle with only an edit view that points to a single
    instance with `pk = 1`, using SingletonFormView.
"""
add = None
delete = None
edit = views.SingletonFormView()
main = edit
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/cms/bundles.py
| 0.712532 | 0.176689 |
bundles.py
|
pypi
|
import json
from django import http
from django.contrib import messages
from django.core.serializers.json import DjangoJSONEncoder
from django.shortcuts import render
from django.template.defaultfilters import slugify
from django.template.loader import render_to_string
from django.utils.encoding import force_text
class RenderResponse:
"""
Render a template. Doesn't do anything special with css/js
as per current front end direction.
:param template: The template to render.
:param partial_base: The template to use as a base for \
partial rendering. IE: ajax requests.
    :param base: The template to use as the base template.
"""
template = None
base = "base.html"
partial_base = "partial.html"
def __init__(self, **kwargs):
# Go through keyword arguments and save to instance
for key, value in kwargs.items():
setattr(self, key, value)
def update_kwargs(self, request, **kwargs):
"""
Hook for adding data to the context before
rendering a template.
:param kwargs: The current context keyword arguments.
:param request: The current request object.
"""
if "base" not in kwargs:
kwargs["base"] = self.base
if request.is_ajax() or request.GET.get("json"):
kwargs["base"] = self.partial_base
return kwargs
def render(self, request, redirect_url=None, **kwargs):
"""
Uses `self.template` to render a response.
:param request: The current request object.
:param redirect_url: If given this will return the \
redirect method instead of rendering the normal template. \
Renders that provide this argument are referred to as a \
'render redirect' in this documentation.
:param kwargs: The current context keyword arguments.
"""
if redirect_url:
# Redirection is used when we click on `Save` for ordering
# items on `ListView`. `kwargs` contains `message` but that
# one does not survive the redirect. That's the reason for using
# `messages` directly and picking the message up in the result template.
if kwargs.get("message"):
messages.success(request, kwargs.get("message"))
return self.redirect(request, redirect_url, **kwargs)
kwargs = self.update_kwargs(request, **kwargs)
return render(request, self.template, kwargs)
def redirect(self, request, url, renderer=None, **kwargs):
"""
Hook for changing redirect behavior. Should
return a HttpResponse object. Default implementation
redirects to the given url.
:param request: The current request object.
:param url: The url to redirect to.
:param kwargs: The current context keyword arguments.
"""
return http.HttpResponseRedirect(url)
class CMSRender(RenderResponse):
"""
Render a template to use in the cms application. Inherits
from RenderResponse. Used by most CMS views.
"""
def update_kwargs(self, request, **kwargs):
"""
Adds variables to the context that are expected by the
base cms templates.
* **navigation** - The side navigation for this bundle and user.
* **dashboard** - The list of dashboard links for this user.
* **object_header** - If no 'object_header' was passed in the \
current context and the current bundle is set to get its \
object_header from its parent, this will get that view and render \
it as a string. Otherwise 'object_header' will remain unset.
* **subitem** - This is set to true if we rendered a new object_header \
and the object used to render that string is not present in the \
context args as 'obj'. This affects navigation and wording in the \
templates.
"""
kwargs = super().update_kwargs(request, **kwargs)
# Check if we need to include a separate object
# bundle for the title
bundle = kwargs.get("bundle")
url_kwargs = kwargs.get("url_params")
view = None
if bundle:
view, name = bundle.get_object_header_view(
request, url_kwargs, parent_only=True
)
kwargs["dashboard"] = bundle.admin_site.get_dashboard_urls(request)
if view:
obj = view.get_object()
if "object_header" not in kwargs:
kwargs["object_header"] = bundle._render_view_as_string(
view, name, request, url_kwargs
)
if obj and obj != kwargs.get("obj"):
kwargs["subitem"] = True
return kwargs
class ChoicesRender:
"""
A Renderer meant to render an object list view as JSON.
Used by ListView when called with ?type=choices.
"""
def get_different_page(self, request, page):
"""
Returns a url that preserves the current querystring
while changing the page requested to `page`.
"""
if page:
qs = request.GET.copy()
qs["page"] = page
return f"{request.path_info}?{qs.urlencode()}"
return None
def get_label_attr(self, label):
attr = label.attr
if label.attr == "__str__":
attr = force_text(slugify(label.name))
if hasattr(attr, "__call__"):
attr = attr.__name__
return attr
def get_object_list(self, adm_list):
l = []
labels = list(adm_list.labels())
for row in adm_list:
data = {
"id": row.instance.pk,
}
for label in labels:
d = row.get_value(label.attr, 1)
if callable(d):
d = d()
data[self.get_label_attr(label)] = force_text(d)
l.append(data)
return l
def get_fields(self, adm_list):
data = {}
for label in adm_list.labels():
data[self.get_label_attr(label)] = {
"name": force_text(label.name),
"sortable": label.sortable,
"order_type": label.order_type,
}
return data
def render(self, request, **kwargs):
"""
Returns a JSON representation of an object list page.
The json has the following attributes:
* **is_paginated** - Is the list paginated.
* **results** - A list of objects, where each object has an \
attribute/value for each field in the list. An 'id' attribute \
is always included.
* **fields** - An object whose properties are the fields \
in the results list. Each property will have an object with \
the following attributes:
* **name** - The verbose name of the field.
* **sortable** - Can this column be sorted. True or False.
* **order_type** - What is the current order of this column.
The following attributes only appear if the list is paginated:
* **count** - If the list is paginated, how many objects \
total are there.
* **page** - Current page number.
* **next** - The full link to the next page.
* **previous** - The full link to the previous page.
If the list can be filtered the following attribute is included:
* **params** - An object whose properties are the filter options. \
Each property contains an object with the following attributes:
* **value** - If the current result list has been filtered by \
this field then value will contain the filter value that was used.
* **choices** - If the field is a choice field this will contain \
the options.
Example JSON:
::
{"count": 1,
"fields": {
"name": {"sortable": true, "name": "name", "order_type": "asc"}
},
"results": [{"id": 12, "name": "Test"}],
"next": "",
"params": {"name": {"value": null}},
"is_paginated": true,
"page": 1,
"previous": ""}
"""
data = {"is_paginated": kwargs.get("is_paginated")}
if data.get("is_paginated"):
page = kwargs["page_obj"]
next_p = ""
previous = ""
if page.has_next():
next_p = self.get_different_page(request, page.number + 1)
if page.has_previous():
previous = self.get_different_page(request, page.number - 1)
data.update(
{
"count": page.paginator.count,
"page": page.number,
"next": next_p,
"previous": previous,
}
)
if kwargs.get("filter_form"):
exclude = request.GET.getlist("exclude")
filter_form = {}
form = kwargs.get("filter_form")
for name in form.get_search_fields(exclude):
k = form[name]
obj = {}
obj["value"] = k.value()
obj["label"] = k.label
if hasattr(k.field, "choices"):
obj["choices"] = k.field.choices
filter_form[k.name] = obj
data["params"] = filter_form
adm_list = kwargs["list"]
data["fields"] = self.get_fields(adm_list)
data["results"] = self.get_object_list(adm_list)
return http.HttpResponse(json.dumps(data, cls=DjangoJSONEncoder))
class RenderString(RenderResponse):
"""
A Renderer that returns a rendered string instead of
a HttpResponse object. Inherits from RenderResponse.
Used by CMS views when called with render_type = 'string'.
"""
def render(self, request, **kwargs):
kwargs = self.update_kwargs(request, **kwargs)
return render_to_string(self.template, kwargs, request)
class PopupRender(RenderResponse):
"""
A Renderer that forces a special popup base template to be
used. Returns a rendered response when a redirect is requested
instead of redirecting to the given url. Used by FormView when called
with ?popup=1.
:param base: The popup only base template.
:param redirect_template: The template to use for redirect renders.
"""
base = "cms/base_popup.html"
redirect_template = "cms/popup_redirect.html"
def update_kwargs(self, request, **kwargs):
kwargs["base"] = self.base
return kwargs
def redirect(self, request, url, **kwargs):
return render(request, self.redirect_template, kwargs)
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/cms/renders.py
| 0.798305 | 0.195921 |
renders.py
|
pypi
|
class Meta:
"""
The options class for Bundle objects, every bundle will
have an instance of this class as a _meta class attribute.
The following options, if set, are passed to all view instances
on the bundle. For more information on what each one does
see the CMS Views documentation.
* model
* parent_field
* parent_lookups
* base_template
* custom_model_name
* custom_model_name_plural
You can specify additional arguments to all view classes by
setting a dictionary to `default_kwargs`. You can also specify
additional arguments to just one view class by using FOO_kwargs.
Other settings are not passed to views. These are:
* **item_views** - A tuple of attribute names that should be \
treated as item views. Meaning that they need additional url \
keyword arguments to lookup their item. The regular expression \
to use is set by `item_regex_base` or FOO_regex_base. \
The order of the regular expressions in the resulting url \
config matters, so the order specified here is preserved.
* **item_regex_base** - A regular expression string for 'item \
views'. This argument must take the string formatter %(name)s \
followed by '_pk' to keep it distinct from any parent url arguments. \
Defaults to '(?P<%(name)s_pk>\d+)/'. You can modify the regex \
for a particular view by adding a FOO_regex_base attribute to \
your meta class. The base regex strings should use the string \
formatters %(group_name)s and %(attname)s. As with the regular \
regex base, %(group_name)s must be followed by '_pk' to keep \
it distinct from any parent url arguments.
* **live_views** - Live views are views that will have their \
required groups set to the live_groups attribute of the bundle. \
Set to 'delete' by default.
* **primary_model_bundle** - Specifies that this bundle is the \
primary bundle for its model. This allows the custom relationship \
widgets to be used by other CMS views that contain fields that \
reference this model.
"""
view_attributes = (
"model",
"parent_field",
"parent_lookups",
"base_template",
"custom_model_name",
"custom_model_name_plural",
)
other_attributes = (
"item_regex_base",
"item_views",
"live_views",
"defaults",
"default_kwargs",
"primary_model_bundle",
"action_views",
)
def __init__(self):
self.primary_model_bundle = False
# which items should use item_regex_base
self.item_views = ("edit",)
# which items are considered live actions
self.live_views = ("delete",)
# which items should be displayed as mass actions
self.action_views = ("delete",)
# The regex that should be used to match item urls; the value for
# %(name)s is determined by the bundle
self.item_regex_base = r"(?P<%(name)s_pk>\d+)/"
# the models that views are based on.
# If not given, all items are ignored.
self.model = None
# Custom model name if you don't want to use the default
self.custom_model_name = None
self.custom_model_name_plural = None
# Optional: field on model that refers to a foreign key that must
# be present in order to work on this bundle.
self.parent_field = None
# Optional: Any additional fields that you need to be
# included when finding a parent. Only used if parent_field is set.
self.parent_lookups = None
self.add_kwargs = {"force_add": True}
# Kwargs that get passed to all views
self.default_kwargs = {}
def add_meta(self, meta):
allowed = list(self.view_attributes)
allowed.extend(list(self.other_attributes))
if meta:
for k in [
x
for x in dir(meta)
if x in allowed or x.endswith("_kwargs") or x.endswith("_regex_base")
]:
v = getattr(meta, k)
if isinstance(v, dict) and getattr(self, k, None):
tmp = dict(getattr(self, k))
tmp.update(v)
v = tmp
setattr(self, k, v)
def get_kwargs_for_view(self, name):
"""
Returns the full list of keyword arguments
for the given view name as a dictionary.
First the default_kwargs dictionary is copied.
Then it is updated with any of the 'view attributes'
that can be specified directly on this instance. IE: model.
Then that dictionary is updated with the values
particular to this view name from the FOO_kwargs dictionary.
"""
data = dict(self.default_kwargs)
for k in self.view_attributes:
if hasattr(self, k):
data[k] = getattr(self, k)
data.update(getattr(self, f"{name}_kwargs", {}))
return data
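# Illustrative sketch (not part of the original module; model/template/field
# names are assumptions): a bundle Meta combining the options described above.
# default_kwargs reaches every view, while edit_kwargs only reaches the "edit"
# view via get_kwargs_for_view("edit").
#
#   class Meta:
#       model = Post
#       default_kwargs = {"base_template": "cms/base_bundle_view.html"}
#       edit_kwargs = {"fieldsets": (("Post", {"fields": ("title",)}),)}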
class VersionMeta:
item_views = ("edit", "versions")
live_views = ("delete", "publish", "unpublish", "versions")
action_views = ("delete", "publish", "unpublish")
class Orderable:
"""
Allows rows to be reordered on the 'main' list page.
"""
main_kwargs = {
"change_fields": ("order",),
"base_template": "cms/base_bundle_view.html",
"can_sort": False,
}
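# Illustrative sketch (an assumption about intended usage, not confirmed in
# this module): because Orderable only defines main_kwargs, it reads like a
# Meta mixin, e.g. for a hypothetical Slide model with an "order" field:
#
#   class Meta(options.Orderable):
#       model = Slide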
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/cms/options.py
| 0.836655 | 0.38168 |
options.py
|
pypi
|
from django import forms
from django.core.validators import EMPTY_VALUES
from django.db.models import F, Q
from . import settings, widgets
class BaseFilterForm(forms.Form):
"""
A base filter form. Implementing classes should
define their own filter fields.
"""
exclude = []
search_fields = None
SEARCH_KEY = "search"
def get_filter_fields(self, exclude=None):
"""
Get the fields that are normal filter fields
"""
exclude_set = set(self.exclude)
if exclude:
exclude_set = exclude_set.union(set(exclude))
return [name for name in self.fields if name not in exclude_set]
def get_search_fields(self, exclude=None):
"""
Get the fields for searching for an item.
"""
exclude = set(exclude)
if self.search_fields and len(self.search_fields) > 1:
exclude = exclude.union(self.search_fields)
return self.get_filter_fields(exclude=exclude)
def get_filter_kwargs(self):
"""
Translates the cleaned data into a dictionary
that can be used to generate the filter, removing
blank values.
"""
if self.is_valid():
filter_kwargs = {}
for field in self.get_filter_fields():
empty_values = EMPTY_VALUES
if hasattr(self.fields[field], "empty_values"):
empty_values = self.fields[field].empty_values
value = self.cleaned_data.get(field)
if value not in empty_values:
if self.search_fields and field in self.search_fields:
filter_kwargs[f"{field}__icontains"] = value
else:
filter_kwargs[field] = value
return filter_kwargs
else:
return {}
def get_filter(self):
"""
Returns a list of Q objects
that is created by passing for the keyword arguments
from `self.get_filter_kwargs`.
If search_fields are specified and we received
a search query, all search_fields will be queried
using OR (|) for that term, and any specific terms for
those search_fields will be ignored.
Returns an empty list if there is nothing to filter on.
"""
args = []
filter_kwargs = self.get_filter_kwargs()
search = filter_kwargs.pop("search", None)
if search and self.search_fields:
search_args = []
for field in self.search_fields:
k = f"{field}__icontains"
filter_kwargs.pop(k, None)
q = Q(**{k: search})
if search_args:
q = search_args[0] | q
search_args[0] = q
else:
search_args.append(q)
args.append(search_args[0])
if filter_kwargs:
args.append(Q(**filter_kwargs))
return args
class VersionFilterForm(BaseFilterForm):
DRAFT = "draft"
LIVE = "live"
SCHEDULED = "scheduled"
exclude = ("item_status",)
item_status = forms.ChoiceField(
required=False,
choices=(
("", "All"),
(DRAFT, "Has unpublished changes"),
(LIVE, "Is Live"),
(SCHEDULED, "Is scheduled"),
),
)
def get_status_filter(self):
q = None
if self.is_valid():
ftype = self.cleaned_data.get("item_status")
if ftype == self.DRAFT:
q = Q(is_published=False) | Q(last_save__gt=F("last_scheduled"))
elif ftype == self.LIVE:
q = Q(last_save=F("last_scheduled"), last_scheduled=F("v_last_save"))
elif ftype == self.SCHEDULED:
q = Q(
last_save=F("last_scheduled"), last_scheduled__gt=F("v_last_save")
)
return q
def get_filter(self):
l = super().get_filter()
q = self.get_status_filter()
if q:
l.append(q)
return l
def search_form(*fields, **kwargs):
"""
Construct a search filter form using the fields
provided as arguments to this function.
By default a field will be created for each field passed
and a hidden field will be created for search. If you pass
the keyword argument `search_only` then only a visible
search field will be created on the form.
Passing `status_filter` will include a version status filter
on this form.
"""
fdict = {"search_fields": set(fields)}
if kwargs.get("search_only"):
fdict["search"] = forms.CharField(max_length=255, required=False)
else:
fdict["search"] = forms.CharField(
max_length=255, required=False, widget=forms.HiddenInput
)
for f in fields:
fdict[f] = forms.CharField(max_length=255, required=False)
if kwargs.get("status_filter", False):
return type("filterform", (VersionFilterForm,), fdict)
else:
return type("filterform", (BaseFilterForm,), fdict)
class HiddenObjectForm(forms.ModelForm):
"""
Base form with no fields. Meant for use with formsets.
"""
class Meta:
fields = []
class WhenForm(forms.Form):
"""
Base form for actions that are date based.
Set a 'when' DateTimeField that is not required.
"""
when = forms.DateTimeField(
widget=widgets.RadioDateTimeWidget,
input_formats=settings.DATETIME_INPUT_FORMATS,
required=False,
)
class MassActionForm(forms.ModelForm):
selected = forms.BooleanField(required=False)
class ActionForm(forms.Form):
action = forms.ChoiceField(label=("Action:"))
class LazyFormSetFactory:
"""
Wrapper class for formset factories, for use with FormView.
To create a formset, you create an instance of this class
where the first argument is the factory function. Any other
arguments will get passed to the factory function when it
gets called.
::
LazyFormSetFactory(inlineformset_factory, models.Parent, models.Child)
"""
def __init__(self, *args, **kwargs):
assert len(args) > 0, "You must provide at least one argument"
assert callable(args[0]), "The first argument must be a formset factory"
self.args = args
self.kwargs = kwargs
if "extra" not in self.kwargs:
self.kwargs["extra"] = 0
def __call__(self, callback, form_processor):
"""
Return a formset class. Uses the factory function
that was specified on initialization.
:param callback: A callable that will be used as the \
*formfield_callback*.
:param form_processor: A callable that will be used to \
prep the form before the factory is called.
"""
self.kwargs["formfield_callback"] = callback
if "form" in self.kwargs:
self.kwargs["form"] = form_processor(self.kwargs["form"])
else:
self.kwargs["exclude"] = []
return self.args[0](*self.args[1:], **self.kwargs)
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/cms/forms.py
| 0.690455 | 0.237189 |
forms.py
|
pypi
|
import re
from django.contrib.contenttypes.models import ContentType
from django.db import models
from taggit.managers import TaggableManager
from taggit.models import Tag, TaggedItem
from .fields import TaggedRelationFormField
def get_model():
return Tag
def get_tag_manager():
return TaggableManager()
def tokenize_tags(tags_string):
"""
This function is responsible for extracting usable tags from a text.
:param tags_string: a string of text
:return: a string of comma separated tags
"""
# text is parsed in two steps:
# the first step extracts every single word that is at least 3 chars long
# and that contains only alphanumeric characters, underscores and dashes
tags_string = tags_string.lower().strip(",")
single_words = set(
[
w[:100]
for w in re.split(";|,|\*|\n| ", tags_string)
if len(w) >= 3 and re.match("^[A-Za-z0-9_-]*$", w)
]
)
# the second step divides the original string using a comma as separator
comma_separated = set([t[:100] for t in tags_string.split(",") if t])
# the resulting sets are merged using a union
return list(single_words | comma_separated)
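# Illustrative example of the two-step parse above (the returned order is not
# guaranteed because sets are used internally):
#
#   tokenize_tags("Django,CMS tools")
#   # -> ["django", "cms", "tools", "cms tools"] (in some order)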
def tags_to_string(tags):
return ",".join(tags).lower()
def set_auto_tags_for_form(form, auto_tags):
for name, field in list(form.fields.items()):
if (
isinstance(field, TaggedRelationFormField)
and name in form.changed_data
and form.cleaned_data.get(name)
):
form.cleaned_data[name].auto_tags = auto_tags
def set_auto_tags_for_formset(formset, auto_tags):
for form in formset:
set_auto_tags_for_form(form, auto_tags)
def update_changed_tags(new_tags, old_tags):
args = None
for tag in old_tags:
q = models.Q(tag__name=tag)
if not args:
args = q
else:
args = q | args
types = (
TaggedItem.objects.filter(args)
.values("content_type", "object_id")
.annotate(cs=models.Count("content_type"))
.filter(cs=len(old_tags))
)
# get_or_create returns (tag, created) tuples; keep just the Tag instances
add_tags = [Tag.objects.get_or_create(name=tag)[0] for tag in new_tags]
mapping = {}
for t in types:
if not t["content_type"] in mapping:
mapping[t["content_type"]] = []
mapping[t["content_type"]].append(t["object_id"])
for t, ids in list(mapping.items()):
t = ContentType.objects.get_for_id(t)
m = t.model_class()
for ins in m.objects.filter(pk__in=ids):
# add every new tag to each matched object
ins.tags.add(*add_tags)
def get_tags_from_data(data, view_tags):
view_tags = set(tokenize_tags(",".join(view_tags)))
old_tags = set(tokenize_tags(data.get("view_tags", "")))
auto_tags = set(tokenize_tags(data.get("auto_tags", "")))
changed_tags = set(view_tags).difference(old_tags)
if changed_tags:
auto_tags = changed_tags.union(auto_tags)
return set(auto_tags), changed_tags, old_tags
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/cms/internal_tags/taggit_handler.py
| 0.517571 | 0.326513 |
taggit_handler.py
|
pypi
|
from django.db import models
from django.db.models.fields import related
from .models import VersionView
class FKToVersion(models.ForeignKey):
"""
Field that creates a relation between a
version and another model
"""
def __init__(self, *args, **kwargs):
kwargs["to_field"] = "vid"
# Two cases that should only be caused from an upgrade
# of an old project where certain params weren't required
if kwargs.get("on_delete"):
on_delete = kwargs.get("on_delete")
del kwargs["on_delete"]
else:
on_delete = models.CASCADE
if kwargs.get("to"):
to = kwargs.get("to")
del kwargs["to"]
else:
to = args[0]
args = (to, on_delete)
super().__init__(*args, **kwargs)
def deconstruct(self):
"""
FK to version always points to a version table
"""
name, path, args, kwargs = super().deconstruct()
if not kwargs["to"].endswith("_version"):
kwargs["to"] = "{0}_version".format(kwargs["to"])
return name, path, args, kwargs
class M2MFromVersion(models.ManyToManyField):
"""
Field that creates a many to many relation between a
version and another model.
"""
def __init__(self, to, **kwargs):
# Symmetrical doesn't work with M2m relationships to
# self and versioning.
if to == "self":
kwargs["symmetrical"] = False
super().__init__(to, **kwargs)
def update_rel_to(self, klass):
"""
If we have a string for a model, see if we know about it yet,
if so use it directly, otherwise take the lazy approach.
This check is needed because this method is called before
the main M2M field's contribute_to_class is called.
"""
if isinstance(related.resolve_relation(klass, self.remote_field.model), str):
relation = related.resolve_relation(klass, self.remote_field.model)
try:
app_label, model_name = relation.split(".")
except ValueError:
# If we can't split, assume a model in current app
app_label = klass._meta.app_label
model_name = relation
model = None
try:
model = klass._meta.apps.get_registered_model(app_label, model_name)
# For django < 1.6
except AttributeError:
model = models.get_model(
app_label, model_name, seed_cache=False, only_installed=False
)
except LookupError:
print(
f"LookupError: Unable to find model {app_label}.{model_name}."
)
if model:
self.remote_field.model = model
def contribute_to_class(self, cls, name):
"""
Because django doesn't give us a nice way to provide
a through table without losing functionality, we have to
provide our own through table creation that uses the
FKToVersion field for the 'from' field.
"""
self.update_rel_to(cls)
# Called to get a name
self.set_attributes_from_name(name)
self.model = cls
# Set the through field
if not self.remote_field.through and not cls._meta.abstract:
self.remote_field.through = create_many_to_many_intermediary_model(
self, cls
)
# Do the rest
super().contribute_to_class(cls, name)
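# Illustrative sketch (hypothetical models, for the example only):
# M2MFromVersion is declared on the versioned model; the generated through
# table then uses FKToVersion for its 'from' column as described above.
#
#   class Gallery(VersionView):
#       images = M2MFromVersion("gallery.Image")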
def create_many_to_many_intermediary_model(field, klass):
"""
Copied from django, but uses FKToVersion for the
'from' field. Fields are also always called 'from' and 'to'
to avoid problems between version combined models.
"""
managed = True
temp_to_model = related.resolve_relation(klass, field.remote_field.model)
if (
isinstance(temp_to_model, str)
and temp_to_model != related.RECURSIVE_RELATIONSHIP_CONSTANT
):
to_model = temp_to_model
to = to_model.split(".")[-1]
def set_managed(model, related, through):
through._meta.managed = model._meta.managed or related._meta.managed
lazy_name = f"{klass._meta.object_name}_{field.name}"
related.lazy_related_operation(set_managed, klass, to_model, lazy_name)
elif isinstance(temp_to_model, str):
to = klass._meta.object_name
to_model = klass
managed = klass._meta.managed
else:
to = temp_to_model._meta.object_name
to_model = temp_to_model
managed = klass._meta.managed or to_model._meta.managed
if issubclass(klass, VersionView):
managed = False
name = f"{klass._meta.object_name}_{field.name}"
if (
temp_to_model == related.RECURSIVE_RELATIONSHIP_CONSTANT
or to == klass._meta.object_name
):
from_ = f"from_{to.lower()}"
to = f"to_{to.lower()}"
else:
from_ = klass._meta.object_name.lower()
to = to.lower()
meta = type(
"Meta",
(object,),
{
"db_table": field._get_m2m_db_table(klass._meta),
"managed": managed,
"auto_created": klass,
"app_label": klass._meta.app_label,
"db_tablespace": klass._meta.db_tablespace,
"unique_together": ("from", "to"),
"verbose_name": f"{from_}-{to} relationship",
"verbose_name_plural": f"{from_}-{to} relationships",
"apps": field.model._meta.apps,
},
)
# Construct and return the new class.
return type(
str(name),
(models.Model,),
{
"Meta": meta,
"__module__": klass.__module__,
"from": FKToVersion(
klass,
related_name=f"{name}+",
db_tablespace=field.db_tablespace,
db_constraint=field.remote_field.db_constraint,
on_delete=models.CASCADE,
),
"to": models.ForeignKey(
to_model,
related_name=f"{name}+",
db_tablespace=field.db_tablespace,
db_constraint=field.remote_field.db_constraint,
on_delete=models.CASCADE,
),
},
)
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/versioning/fields.py
| 0.615897 | 0.229557 |
fields.py
|
pypi
|
from .groups import CacheGroup
class CacheManager:
"""
A CacheManager is where you register all the cache groups
that you are tracking.
Similar to the django admin, most implementations would only
have one instance of this class that all managers would be
registered with. If you don't need any customizations you can
simply register with the default instance
"""
_registry = {}
@classmethod
def reset(cls):
cls._registry = {}
def get_group(self, key):
"""
Returns the cache group that matches the given key;
if no such key was registered None is returned.
"""
return self._registry.get(key)
def register_cache(self, cache_group):
"""
Register a cache_group with this manager.
Use this method to register more complicated
groups that you create yourself, such as when you
need to register several models, each with different
parameters.
:param cache_group: The group to register. \
The group is registered with the cache_group key attribute. \
Raises an exception if the key is already registered.
"""
if cache_group.key in self._registry:
raise Exception(f"{cache_group.key} is already registered")
self._registry[cache_group.key] = cache_group
def register_model(self, key, *models, **kwargs):
"""
Register a cache_group with this manager.
Use this method to register simpler
groups where all models share the same parameters.
Any arguments are treated as models that you would like
to register.
Any keyword arguments received are passed to the
register method when registering each model.
:param key: The key to register this group as. \
Raises an exception if the key is already registered.
"""
assert models, "No models passed in!"
cache_group = CacheGroup(key)
for model in models:
cache_group.register(model, **kwargs)
self.register_cache(cache_group)
def invalidate_cache(self, klass, extra=None, **kwargs):
"""
Invalidate a cache for a specific class.
This will loop through all registered groups that have registered
the given model class and call their invalidate_cache method.
All keyword arguments will be directly passed through to the
group's invalidate_cache method, with the exception of **extra**
as noted below.
:param klass: The model class that need some invalidation.
:param extra: A dictionary where the key corresponds to the name \
of a group where this model is registered and a value that is a \
list that will be passed as the extra keyword argument when \
calling invalidate_cache on that group. In this way you can \
specify specific extra values to invalidate only for specific \
groups.
"""
extra = extra or kwargs.pop("extra", {})
for group in list(self._registry.values()):
if klass in group.models:
e = extra.get(group.key)
group.invalidate_cache(klass, extra=e, **kwargs)
cache_manager = CacheManager()
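# Illustrative sketch (hypothetical Post/Comment models and "blog" group key;
# the import path is assumed from this file's location):
#
#   from scarlet.cache.manager import cache_manager
#
#   cache_manager.register_model("blog", Post, Comment)
#   # later, e.g. from a post_save handler:
#   cache_manager.invalidate_cache(Post, extra={"blog": ["front-page"]})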
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/cache/manager.py
| 0.875335 | 0.411052 |
manager.py
|
pypi
|
from django.contrib.auth import get_user_model
from django.contrib.auth.forms import AdminPasswordChangeForm
try:
from ..cms import bundles, views
from ..cms.sites import site
except ValueError:
from cms import bundles, views
from cms.sites import site
from . import forms, groups
class AddView(views.FormView):
force_add = True
form_class = forms.SignupModelForm
fieldsets = (
(
"User Information",
{"fields": ("username", "first_name", "last_name", "email")},
),
("Password", {"fields": ("password1", "password2")}),
)
class PasswordView(views.FormView):
redirect_to_view = "edit"
context_object_name = "object"
def get_form_class(self):
return AdminPasswordChangeForm
def write_message(self, message=None):
message = f"{self.object} password changed"
super().write_message(message=message)
def get_form_kwargs(self):
# Since we aren't using a model form
# strip instance and use user instead
kwargs = super().get_form_kwargs()
instance = kwargs.pop("instance")
kwargs["user"] = instance
return kwargs
class AccountBundle(bundles.Bundle):
required_groups = (groups.ADMIN,)
add = AddView()
edit = views.FormView(
form_class=forms.UserForm,
fieldsets=(
(
"User Information",
{
"fields": (
"username",
"first_name",
"last_name",
"email",
"password",
)
},
),
("Status", {"fields": ("is_active", "is_superuser", "is_staff")}),
("Groups", {"fields": ("groups",)}),
),
context_object_name="object",
)
password = PasswordView()
main = views.ListView(
paginate_by=100,
display_fields=("username", "first_name", "last_name", "email", "groups"),
action_links=(
("edit", "Edit", "e"),
("delete", "Delete", "d"),
("password", "Change Password", "k"),
),
)
class Meta:
model = get_user_model()
primary_model_bundle = True
item_views = ("password", "edit", "delete")
default_kwargs = {"object_header_tmpl": "cms/object_header_no_preview.html"}
site.register("users", AccountBundle(name="accounts_admin", title="Account"), 10)
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/accounts/cms_bundles.py
| 0.510496 | 0.193319 |
cms_bundles.py
|
pypi
|
from django.contrib import messages
from django.contrib.auth import authenticate, login, REDIRECT_FIELD_NAME
from django.contrib.auth.forms import PasswordChangeForm
from django.contrib.auth.views import logout as Signout
from django.http import Http404, HttpResponseForbidden
from django.shortcuts import get_object_or_404, redirect
from django.urls import reverse
from django.utils.translation import ugettext as _
from django.views.generic import FormView, TemplateView, View
from django.views.generic.list import ListView, MultipleObjectMixin
from . import settings as accounts_settings
from . import signals as accounts_signals
from .decorators import secure_required
from .forms import (
AuthenticationForm,
ChangeEmailForm,
EditProfileForm,
SignupForm
)
from .models import AccountsSignup
from .utils import get_profile_model, get_user_model, signin_redirect
class ExtraContextTemplateView(TemplateView):
""" Add extra context to a simple template view """
extra_context = None
def get_context_data(self, *args, **kwargs):
context = super().get_context_data(
*args, **kwargs
)
if self.extra_context:
context.update(self.extra_context)
return context
# this view is used in POST requests,
# e.g. signup when the form is not valid
post = TemplateView.get
@secure_required
def activate(
request,
activation_key,
template_name="accounts/activate_fail.html",
success_url=None,
extra_context=None,
):
"""
Activate a user with an activation key.
The key is a SHA1 string. When the SHA1 is found with an
:class:`AccountsSignup`, the :class:`User` of that account will be
activated. After a successful activation the view will redirect to
``success_url``. If the SHA1 is not found, the user will be shown the
``template_name`` template displaying a fail message.
:param activation_key:
String of a SHA1 hash, 40 characters long. A SHA1 is always 160 bits
long; at 4 bits per hex character that makes it 160/4 = 40 characters
long.
:param template_name:
String containing the template name that is used when the
``activation_key`` is invalid and the activation fails. Defaults to
``accounts/activate_fail.html``.
:param success_url:
String containing the URL where the user should be redirected to after
a successful activation. Will replace ``%(username)s`` with string
formatting if supplied. If ``success_url`` is left empty, will direct
to ``accounts_profile_detail`` view.
:param extra_context:
Dictionary containing variables which could be added to the template
context. Default to an empty dictionary.
"""
user = AccountsSignup.objects.activate_user(activation_key)
if user:
# Sign the user in.
auth_user = authenticate(identification=user.email, check_password=False)
login(request, auth_user)
if accounts_settings.ACCOUNTS_USE_MESSAGES:
messages.success(
request,
_("Your account has been activated and you have been signed in."),
fail_silently=True,
)
if success_url:
redirect_to = success_url % {"username": user.username}
else:
redirect_to = reverse(
"accounts_profile_detail", kwargs={"username": user.username}
)
return redirect(redirect_to)
else:
if not extra_context:
extra_context = dict()
return ExtraContextTemplateView.as_view(
template_name=template_name, extra_context=extra_context
)(request)
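# Illustrative sketch (assumed URL pattern, URL name and import path; adjust
# to the project's own urls.py conventions): wiring the activation view.
#
#   from django.conf.urls import url
#   from scarlet.accounts import views
#
#   urlpatterns = [
#       url(r"^activate/(?P<activation_key>\w+)/$", views.activate,
#           name="accounts_activate"),
#   ]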
@secure_required
def email_confirm(
request,
confirmation_key,
template_name="accounts/email_confirm_fail.html",
success_url=None,
extra_context=None,
):
"""
Confirms an email address with a confirmation key.
Confirms a new email address by running :func:`User.objects.confirm_email`
method. If the method returns a :class:`User` the user will have their new
e-mail address set and be redirected to ``success_url``. If no ``User`` is
returned the user will be presented with a fail message from
``template_name``.
:param confirmation_key:
String with a SHA1 representing the confirmation key used to verify a
new email address.
:param template_name:
String containing the template name which should be rendered when
confirmation fails. When confirmation is successful, no template is
needed because the user will be redirected to ``success_url``.
:param success_url:
String containing the URL which is redirected to after a successful
confirmation. Supplied argument must be able to be rendered by
``reverse`` function.
:param extra_context:
Dictionary of variables that are passed on to the template supplied by
``template_name``.
"""
user = AccountsSignup.objects.confirm_email(confirmation_key)
if user:
if accounts_settings.ACCOUNTS_USE_MESSAGES:
messages.success(
request, _("Your email address has been changed."), fail_silently=True
)
if success_url:
redirect_to = success_url
else:
redirect_to = reverse(
"accounts_email_confirm_complete", kwargs={"username": user.username}
)
return redirect(redirect_to)
else:
if not extra_context:
extra_context = dict()
return ExtraContextTemplateView.as_view(
template_name=template_name, extra_context=extra_context
)(request)
def direct_to_user_template(request, username, template_name, extra_context=None):
"""
Simple wrapper for Django's :func:`direct_to_template` view.
This view is used when you want to show a template to a specific user. A
wrapper for :func:`direct_to_template` where the template also has access
to the user that is found with ``username``. For ex. used after signup,
activation and confirmation of a new e-mail.
:param username:
String defining the username of the user that made the action.
:param template_name:
String defining the name of the template to use. Defaults to
``accounts/signup_complete.html``.
**Keyword arguments**
``extra_context``
A dictionary containing extra variables that should be passed to the
rendered template. The ``account`` key is always the ``User``
that completed the action.
**Extra context**
``viewed_user``
The currently :class:`User` that is viewed.
"""
user = get_object_or_404(get_user_model(), username__iexact=username)
if not extra_context:
extra_context = dict()
extra_context["viewed_user"] = user
extra_context["profile"] = user.get_profile()
return ExtraContextTemplateView.as_view(
template_name=template_name, extra_context=extra_context
)(request)
@secure_required
def signin(
request,
auth_form=AuthenticationForm,
template_name="accounts/signin_form.html",
redirect_field_name=REDIRECT_FIELD_NAME,
redirect_signin_function=signin_redirect,
extra_context=None,
):
"""
Signin using email or username with password.
Signs a user in by combining email/username with password. If the
combination is correct and the user :func:`is_active` the
:func:`redirect_signin_function` is called with the arguments
``REDIRECT_FIELD_NAME`` and an instance of the :class:`User` who is
trying to log in. The returned value of the function will be the URL that
is redirected to.
A user can also select to be remembered for ``ACCOUNTS_REMEMBER_DAYS``.
:param auth_form:
Form to use for signing the user in. Defaults to the
:class:`AuthenticationForm` supplied by accounts.
:param template_name:
String defining the name of the template to use. Defaults to
``accounts/signin_form.html``.
:param redirect_field_name:
Form field name which contains the value for a redirect to the
succeeding page. Defaults to ``next`` and is set in
``REDIRECT_FIELD_NAME`` setting.
:param redirect_signin_function:
Function which handles the redirect. This functions gets the value of
``REDIRECT_FIELD_NAME`` and the :class:`User` who has logged in. It
must return a string which specifies the URI to redirect to.
:param extra_context:
A dictionary containing extra variables that should be passed to the
rendered template. The ``form`` key is always the ``auth_form``.
**Context**
``form``
Form used for authentication supplied by ``auth_form``.
"""
form = auth_form()
if request.method == "POST":
form = auth_form(request.POST, request.FILES)
if form.is_valid():
identification = form.cleaned_data["identification"]
password = form.cleaned_data["password"]
remember_me = form.cleaned_data["remember_me"]
user = authenticate(identification=identification, password=password)
if user.is_active:
login(request, user)
if remember_me:
request.session.set_expiry(
accounts_settings.ACCOUNTS_REMEMBER_ME_DAYS[1] * 86400
)
else:
request.session.set_expiry(0)
if accounts_settings.ACCOUNTS_USE_MESSAGES:
messages.success(
request, _("You have been signed in."), fail_silently=True
)
# Whereto now?
redirect_to = redirect_signin_function(
request.GET.get(redirect_field_name), user
)
return redirect(redirect_to)
else:
return redirect(
reverse("accounts_disabled", kwargs={"username": user.username})
)
if not extra_context:
extra_context = dict()
extra_context.update(
{"form": form, "next": request.GET.get(redirect_field_name),}
)
return ExtraContextTemplateView.as_view(
template_name=template_name, extra_context=extra_context
)(request)
@secure_required
def signout(
request,
next_page=accounts_settings.ACCOUNTS_REDIRECT_ON_SIGNOUT,
template_name="accounts/signout.html",
*args,
**kwargs
):
"""
Signs out the user and adds a success message ``You have been signed
out.`` If next_page is defined you will be redirected to the URI. If
not the template in template_name is used.
:param next_page:
A string which specifies the URI to redirect to.
:param template_name:
String defining the name of the template to use. Defaults to
``accounts/signout.html``.
"""
if (
request.user.is_authenticated() and accounts_settings.ACCOUNTS_USE_MESSAGES
): # pragma: no cover
messages.success(request, _("You have been signed out."), fail_silently=True)
return Signout(request, next_page, template_name, *args, **kwargs)
@secure_required
def email_change(
request,
username,
email_form=ChangeEmailForm,
template_name="accounts/email_form.html",
success_url=None,
extra_context=None,
):
"""
Change email address
:param username:
String of the username which specifies the current account.
:param email_form:
Form that will be used to change the email address. Defaults to
:class:`ChangeEmailForm` supplied by accounts.
:param template_name:
String containing the template to be used to display the email form.
Defaults to ``accounts/email_form.html``.
:param success_url:
Named URL where the user will get redirected to when successfully
changing their email address. When not supplied will redirect to
``accounts_email_complete`` URL.
:param extra_context:
Dictionary containing extra variables that can be used to render the
template. The ``form`` key is always the form supplied by the keyword
argument ``form`` and the ``user`` key by the user whose email address
is being changed.
**Context**
``form``
Form that is used to change the email address supplied by ``form``.
``account``
Instance of the ``Account`` whose email address is about to be changed.
**Todo**
Need to have per-object permissions, which enables users with the correct
permissions to alter the email address of others.
"""
user = get_object_or_404(get_user_model(), username__iexact=username)
form = email_form(user)
if request.method == "POST":
form = email_form(user, request.POST, request.FILES)
if form.is_valid():
form.save()
if success_url:
redirect_to = success_url
else:
redirect_to = reverse(
"accounts_email_change_complete", kwargs={"username": user.username}
)
return redirect(redirect_to)
if not extra_context:
extra_context = dict()
extra_context["form"] = form
extra_context["profile"] = user.get_profile()
return ExtraContextTemplateView.as_view(
template_name=template_name, extra_context=extra_context
)(request)
@secure_required
def password_change(
request,
username,
template_name="accounts/password_form.html",
pass_form=PasswordChangeForm,
success_url=None,
extra_context=None,
):
""" Change password of user.
This view is almost a mirror of the view supplied in
:func:`contrib.auth.views.password_change`, with the minor change that in
this view we also use the username to change the password. This was needed
to keep our URLs logical (and REST) across the entire application, and
so that at a later stage administrators can also change the user's password
through the web application itself.
:param username:
String supplying the username of the user whose password is about to be
changed.
:param template_name:
String of the name of the template that is used to display the password
change form. Defaults to ``accounts/password_form.html``.
:param pass_form:
Form used to change password. Default is the form supplied by Django
itself named ``PasswordChangeForm``.
:param success_url:
Named URL that is passed onto a :func:`reverse` function with
``username`` of the active user. Defaults to the
``accounts_password_complete`` URL.
:param extra_context:
Dictionary of extra variables that are passed on to the template. The
``form`` key is always used by the form supplied by ``pass_form``.
**Context**
``form``
Form used to change the password.
"""
user = get_object_or_404(get_user_model(), username__iexact=username)
form = pass_form(user=user)
if request.method == "POST":
form = pass_form(user=user, data=request.POST)
if form.is_valid():
form.save()
# Send a signal that the password has changed
accounts_signals.password_complete.send(sender=None, user=user)
if success_url:
redirect_to = success_url
else:
redirect_to = reverse(
"accounts_password_change_complete",
kwargs={"username": user.username},
)
return redirect(redirect_to)
if not extra_context:
extra_context = dict()
extra_context["form"] = form
extra_context["profile"] = user.get_profile()
return ExtraContextTemplateView.as_view(
template_name=template_name, extra_context=extra_context
)(request)
@secure_required
def profile_edit(
request,
username,
edit_profile_form=EditProfileForm,
template_name="accounts/profile_form.html",
success_url=None,
extra_context=None,
**kwargs
):
"""
Edit profile.
Edits a profile selected by the supplied username. First checks
permissions if the user is allowed to edit this profile, if denied will
show a 404. When the profile is successfully edited will redirect to
``success_url``.
:param username:
Username of the user which profile should be edited.
:param edit_profile_form:
Form that is used to edit the profile. The :func:`EditProfileForm.save`
method of this form will be called when the form
:func:`EditProfileForm.is_valid`. Defaults to :class:`EditProfileForm`
from accounts.
:param template_name:
String of the template that is used to render this view. Defaults to
``accounts/profile_form.html``.
:param success_url:
Named URL which will be passed on to a django ``reverse`` function
after the form is successfully saved. Defaults to the
``accounts_detail`` url.
:param extra_context:
Dictionary containing variables that are passed on to the
``template_name`` template. ``form`` key will always be the form used
to edit the profile, and the ``profile`` key is always the edited
profile.
**Context**
``form``
Form that is used to alter the profile.
``profile``
Instance of the ``Profile`` that is edited.
"""
user = get_object_or_404(get_user_model(), username__iexact=username)
profile = user.get_profile()
user_initial = {"first_name": user.first_name, "last_name": user.last_name}
form = edit_profile_form(instance=profile, initial=user_initial)
if request.method == "POST":
form = edit_profile_form(
request.POST, request.FILES, instance=profile, initial=user_initial
)
if form.is_valid():
profile = form.save()
if accounts_settings.ACCOUNTS_USE_MESSAGES:
messages.success(
request, _("Your profile has been updated."), fail_silently=True
)
if success_url:
redirect_to = success_url
else:
redirect_to = reverse(
"accounts_profile_detail", kwargs={"username": username}
)
return redirect(redirect_to)
if not extra_context:
extra_context = dict()
extra_context["form"] = form
extra_context["profile"] = profile
return ExtraContextTemplateView.as_view(
template_name=template_name, extra_context=extra_context
)(request)
def profile_detail(
request,
username,
template_name=accounts_settings.ACCOUNTS_PROFILE_DETAIL_TEMPLATE,
extra_context=None,
**kwargs
):
"""
Detailed view of a user.
:param username:
String of the username of which the profile should be viewed.
:param template_name:
String representing the template name that should be used to display
the profile.
:param extra_context:
Dictionary of variables which should be supplied to the template. The
``profile`` key is always the current profile.
**Context**
``profile``
Instance of the currently viewed ``Profile``.
"""
user = get_object_or_404(get_user_model(), username__iexact=username)
profile_model = get_profile_model()
try:
profile = user.get_profile()
except profile_model.DoesNotExist:
profile = profile_model(user=user)
profile.save()
if not profile.can_view_profile(request.user):
return HttpResponseForbidden(
_("You don't have permission to view this profile.")
)
if not extra_context:
extra_context = dict()
extra_context["profile"] = user.get_profile()
return ExtraContextTemplateView.as_view(
template_name=template_name, extra_context=extra_context
)(request)
def account_delete(
request,
username,
template_name=accounts_settings.ACCOUNTS_PROFILE_DETAIL_TEMPLATE,
extra_context=None,
**kwargs
):
"""
Delete an account.
"""
user = get_object_or_404(get_user_model(), username__iexact=username)
user.is_active = False
user.save()
return redirect(reverse("accounts_admin"))
class ProfileListView(ListView):
"""
Lists all profiles
"""
context_object_name = "profile_list"
page = 1
paginate_by = 20
template_name = "accounts/profile_list.html"
extra_context = None
def get_context_data(self, **kwargs):
# Call the base implementation first to get a context
context = super().get_context_data(**kwargs)
try:
page = int(self.request.GET.get("page", None))
except (TypeError, ValueError):
page = self.page
if not self.request.user.is_staff:
raise Http404
if not self.extra_context:
self.extra_context = dict()
context["page"] = page
context["paginate_by"] = self.paginate_by
context["extra_context"] = self.extra_context
context["form"] = SignupForm()
return context
def get_queryset(self):
profile_model = get_profile_model()
queryset = profile_model.objects.get_visible_profiles(self.request.user)
return queryset
class AccountsFormView(FormView, MultipleObjectMixin):
template_name = "accounts/profile_list.html"
form_class = SignupForm
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
return context
def get_success_url(self):
return reverse("accounts_admin", kwargs=None)
def form_valid(self, form):
if not self.request.user.is_authenticated():
return HttpResponseForbidden()
user = form.save()
# Send the signup complete signal
accounts_signals.signup_complete.send(sender=None, user=user)
# record the interest using the message in form.cleaned_data
return super().form_valid(form)
class AccountsListView(View):
def get(self, request, *args, **kwargs):
view = ProfileListView.as_view()
return view(request, *args, **kwargs)
def post(self, request, *args, **kwargs):
view = AccountsFormView.as_view()
return view(request, *args, **kwargs)
|
/scarletcms-3.1.0b8.tar.gz/scarletcms-3.1.0b8/scarlet/accounts/views.py
| 0.704465 | 0.156137 |
views.py
|
pypi
|
# scarplet
[](https://travis-ci.com/rmsare/scarplet)
[](https://scarplet.readthedocs.io/en/latest/?badge=latest)
scarplet is a Python package for applying template matching techniques to digital elevation data, in
particular for detecting and measuring the maturity of fault scarps and other
landforms [[0, 1]](#references).
It is intended for earth scientists who want to apply diffusion dating methods
to or extract landforms from large datasets. The scarplet API can be used to
estimate the height and relative age of a landform or identify DEM pixels
based on their fit to a landform template.
It was designed with two main goals:
* Allow contributors to define template functions for their problem area of interest
* Make it straightforward to apply these methods to large datasets by parallelizing/distributing computation using multiprocessing, [dask](https://dask.readthedocs.io), or other tools [[2]](#references)
## Getting started
### Installation
`scarplet` can be installed using `conda` or `pip`. It is developed for Python 3.4+ and currently works on Linux and Mac OS X.
```bash
conda install scarplet -c conda-forge
```
Or, to manually install the latest version from github:
```bash
git clone https://github.com/rmsare/scarplet
cd scarplet
conda install --file=requirements.txt -c conda-forge
python setup.py develop
```
The main dependencies are numpy, scipy, numexpr, pyfftw (which requires LibFFTW3)
and rasterio/GDAL.
## Examples
Example notebooks can be found in the [docs folder](docs/source/examples/) or [website](https://scarplet.readthedocs.io/en/latest/examples/scarps.html) and sample datasets can be loaded using the [datasets submodule](https://scarplet.readthedocs.io/en/latest/scarplet.datasets.base.html).
### Detecting fault scarps
This example uses a scarp template based on a diffusion model of scarp degradation
[[0]](#references) to identify scarp-like landforms along the San Andreas Fault near
Wallace Creek, CA.
```python
import numpy as np
import scarplet as sl
from scarplet.WindowedTemplate import Scarp
params = {'scale': 100,
'age': 10,
'ang_min': -10 * np.pi / 2,
'ang_max': 10 * np.pi / 2
}
data = sl.datasets.load_carrizo()
res = sl.match(data, Scarp, **params)
sl.plot_results(data, res)
```
<img src="docs/img/carrizo_example.png" alt="Fault scarp results" height="340">
### Extracting confined river channels
To illustrate template function flexibility, this example uses a Channel
template similar to the Ricker wavelet [[3]](#references) to extract part of a channel network.
This example uses a moderate resolution SRTM data tile. In general, for
high resolution data like lidar, there are more robust alternatives for
channel network extraction or channel head identification [[4, 5]](#references).
```python
import numpy as np
import scarplet as sl
from scarplet.WindowedTemplate import Channel
params = {'scale': 10,
'age': 0.1,
'ang_min': -np.pi / 2,
'ang_max': np.pi / 2
}
data = sl.datasets.load_grandcanyon()
res = sl.match(data, Channel, **params)
sl.plot_results(data, res)
```
<img src="docs/img/rivers_example.png" alt="Channel results" height="340">
There are also [example notebooks](https://scarplet.readthedocs.io/en/latest/index.html) and [an API reference](https://scarplet.readthedocs.io/en/latest/api.html) in the documentation.
## Documentation
Read the documentation for example use cases, an API reference, and more. They
are hosted at [scarplet.readthedocs.io](https://scarplet.readthedocs.io).
## Contributing
### Bug reports
Bug reports are much appreciated. Please [open an issue](https://github.com/rmsare/scarplet/issues/new) with the `bug` label,
and provide a minimal example illustrating the problem.
### Suggestions
Feel free to [suggest new features](https://github.com/rmsare/scarplet/issues/new) in an issue with the `new-feature` label.
### Pull requests
Don't hesitate to file an issue; I would be happy to discuss extensions or help to build a new feature.
If you would like to add a feature or fix a bug, please fork the repository, create a feature branch, and [submit a PR](https://github.com/rmsare/scarplet/compare) and reference any relevant issues. There are nice guides to contributing with GitHub [here](https://akrabat.com/the-beginners-guide-to-contributing-to-a-github-project/) and [here](https://yourfirstpr.github.io/). Please include tests where appropriate and check that the test suite passes (a Travis build or `pytest scarplet/tests`) before submitting.
### Support and questions
Please [open an issue](https://github.com/rmsare/scarplet/issues/new) with your question.
## References
[0] Hanks, T.C., 2000. The age of scarplike landforms from diffusion‐equation analysis. Quaternary Geochronology, 4, pp. 313-338. [doi](https://doi.org/10.1029/RF004p0313)
[1] Hilley, G.E., DeLong, S., Prentice, C., Blisniuk, K. and Arrowsmith, J.R., 2010. Morphologic dating of fault scarps using airborne laser swath mapping (ALSM) data. Geophysical Research Letters, 37(4). [doi](https://doi.org/10.1029/2009GL042044)
[2] Sare, R, Hilley, G. E., and DeLong, S. B., 2018, Regional scale detection of fault scarps and other tectonic landforms: Examples from Northern California, in review, Journal of Geophysical Research: Solid Earth.
[3] Lashermes, B., Foufoula‐Georgiou, E., and Dietrich, W. E., 2007, Channel network extraction from high resolution topography using wavelets. Geophysical Research Letters, 34(23). [doi](https://doi.org/10.1029/2007GL031140)
[4] Passalacqua, P., Tarolli, P., and Foufoula‐Georgiou, E., 2010, Testing space‐scale methodologies for automatic geomorphic feature extraction from lidar in a complex mountainous landscape. Water Resources Research, 46(11). [doi](https://doi.org/10.1029/2009WR008812)
[5] Clubb, F. J., Mudd, S. M., Milodowski, D. T., Hurst, M. D., and Slater, L. J., 2014, Objective extraction of channel heads from high‐resolution topographic data. Water Resources Research, 50(5). [doi](https://doi.org/10.1002/2013WR015167)
## License
This work is licensed under the MIT License (see [LICENSE](LICENSE)).
|
/scarplet-0.1.3-py3-none-any.whl/scarplet-0.1.3.dist-info/DESCRIPTION.rst
| 0.436142 | 0.976061 |
DESCRIPTION.rst
|
pypi
|
import cupy as cp
import numpy as np
from scipy.signal import tukey
def gaussian_window(x, width):
"""Gaussian window.
This function can generate a bank of windows at once if the width
argument is a vector. In this case, a new axis is added to the width
with respect to the time vector to allow for an outer product.
Parameters
----------
x : :class:`T.ndarray` or np.ndarray
Input variable (in the same units as the width).
width : float or np.ndarray
Window width (in the same units as the input variable). If an array
is provided, the function returns as many windows as the number of
elements of this parameter.
Returns
-------
:class:`T.ndarray`
The Gaussian window in the time domain. If the width argument is a
vector, the function returns a matrix with shape (len(width), len(x)).
"""
    # turn parameters into CuPy arrays for dimension checks
x = cp.array(x)
width = cp.array(width)
# add new axis for outer product if several widths are given
width = width[:, None] if width.shape and (width.ndim == 1) else width
return cp.exp(-((x / width) ** 2))
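# e.g. gaussian_window(cp.linspace(-1, 1, 256), cp.array([0.1, 0.2, 0.4]))
# returns a (3, 256) bank of windows (hypothetical call, for illustration only).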
def complex_morlet(x, center, width):
"""Complex Morlet wavelet.
The complex Morlet wavelet is a complex plane wave modulated by a
Gaussian window. The oscillatory frequency of the plane wave is the
center frequency, and the temporal width of the Gaussian is the width
argument.
This function can generate a filter bank at once if the width and center
arguments are vectors of the same size. In this case, they should have a
new axis with respect to the time vector to allow for outer product.
Arguments
---------
x: :class:`T.ndarray` or np.ndarray
Time vector in seconds.
width: float or :class:`T.ndarray` or np.ndarray
Temporal signal width in seconds.
center: float or :class:`T.ndarray` or np.ndarray
Center frequency in hertz.
Keyword arguments
-----------------
amplitude: float (optional)
Wavelet normalization (default 1). If amplitude is a vector, it should
have the same dimension than width (and center).
Returns
-------
filter: :class:`T.ndarray`
The complex Mortlet wavelet in the time domain. If the center and width
(and possibly amplitude) arguments are vectors, the function returns
a matrix with shape (len(width), len(x)).
"""
    # turn parameters into CuPy arrays for dimension checks
x = cp.array(x)
width = cp.array(width)
center = cp.array(center)
# add new axis for outer product if several widths are given
width = width[:, None] if width.shape else width
center = center[:, None] if center.shape else center
# check compatibility between arguments
if width.shape and center.shape:
assert (
width.shape == center.shape
), f"Shape for widths {width.shape} and centers {center.shape} differ."
return gaussian_window(x, width) * cp.exp(2j * cp.pi * center * x)
class ComplexMorletBank:
"""Complex Morlet filter bank."""
def __init__(
self, bins, octaves, resolution=1, quality=4, taper_alpha=1e-3
):
"""Filter bank creation.
This function creates the filter bank in the time domain, and obtains
it in the frequency domain with a fast Fourier transform.
Arguments
---------
bins: int
Number of samples in the time domain.
octaves: int
Number of octaves spanned by the filter bank.
Keyword arguments
-----------------
resolution: int
Number of filters per octaves (default 1).
sampling: float
Input data sampling rate (default 1 Hz).
quality: float
Filter bank quality factor (constant, default 4).
"""
# attribution
self.bins = bins
self.octaves = octaves
self.resolution = resolution
self.quality = quality
# generate bank
self.wavelets = complex_morlet(
self.times(), self.centers(), self.widths()
)
self.spectra = cp.fft.fft(self.wavelets)
self.size = self.wavelets.shape[0]
self.taper = cp.array(tukey(bins, alpha=taper_alpha))
pass
def transform(self, sample):
"""Scalogram applied to a data sample.
Arguments
---------
x: np.ndarray
A data sample of shape `(..., channels, bins)`, with the same
number of bins than the filter bank.
Returns
-------
wx: cp.ndarray
The scalograms for all channels with shape (the ellipsis stands for
unknown number of input dimensions)
`n_channels, ..., n_filters, n_bins`.
"""
sample = cp.fft.fft(cp.array(sample) * self.taper)
convolved = sample[..., None, :] * self.spectra
scalogram = cp.fft.fftshift(cp.fft.ifft(convolved), axes=-1)
return cp.abs(scalogram)
def times(self, sampling_rate=1):
"""Wavelet bank symmetric time vector in seconds."""
duration = self.bins / sampling_rate
return np.linspace(-0.5, 0.5, num=self.bins) * duration
def frequencies(self, sampling_rate=1):
"""Wavelet bank frequency vector in hertz."""
return np.linspace(0, sampling_rate, self.bins)
def nyquist(self, sampling_rate=1):
"""Wavelet bank frequency vector in hertz."""
return sampling_rate / 2
@property
def shape(self):
"""Filter bank total number of filters."""
return self.octaves * self.resolution, self.bins
@property
def ratios(self):
"""Wavelet bank ratios."""
ratios = np.linspace(self.octaves, 0.0, self.shape[0], endpoint=False)
return -ratios[::-1]
@property
def scales(self):
"""Wavelet bank scaling factors."""
return 2 ** self.ratios
def centers(self, sampling_rate=1):
"""Wavelet bank center frequencies."""
return self.scales * self.nyquist(sampling_rate)
def widths(self, sampling_rate=1):
"""Wavelet bank temporal widths."""
return self.quality / self.centers(sampling_rate)
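# Minimal usage sketch (illustrative only, not part of the original module):
# build a small bank and transform a random three-channel sample. Assumes a
# CUDA-capable device, since the transform runs on CuPy arrays.
if __name__ == "__main__":
    bank = ComplexMorletBank(bins=1024, octaves=5, resolution=2, quality=4)
    sample = np.random.randn(3, 1024)  # (channels, bins)
    scalograms = bank.transform(sample)  # -> (3, octaves * resolution, 1024)
    print(scalograms.shape)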
/scatseisnet_gpu-0.1.6-py3-none-any.whl/scatseisnet/wavelet.py
import obspy
import pandas as pd
from glob import glob
from parse import parse
from tqdm import tqdm
from .io import stdout
def inventorize(parsable_path, channel_pattern, tag_pattern):
"""Glob files with expansion on tag and channel, and get headers.
Arguments
---------
parsable_path: str
The path to data with a tag and a channel string variables.
channel_pattern: str
Pattern to expand for channels.
tag_pattern: str
Pattern to expand for tag.
Returns
-------
db: pandas.DataFrame
The extracted availability and metadata.
"""
# File pattern creation
pattern = dict(tag=tag_pattern, channel=channel_pattern)
filepath_pattern = parsable_path.format(**pattern)
stdout("Data files pattern {}", filepath_pattern)
# File pattern glob
filepaths_matching = sorted(glob(filepath_pattern))
if len(filepaths_matching) > 0:
stdout("Found {} matching files", len(filepaths_matching))
else:
print("No files matching pattern; exiting.")
exit()
# Build up inventory
header = obspy.read(filepaths_matching[0], headonly=True)[0].stats
db_entrynames = [item for item in header]
db = pd.DataFrame(columns=db_entrynames)
    # Extract headers from every file
index = 0
for filepath in tqdm(filepaths_matching, desc="Making inventory"):
# Fix Windows filepath (temporary solution)
if "\\" in filepath:
filepath=filepath.replace("\\", "/")
# Read headers
stream = obspy.read(filepath, headonly=True)
parsed = parse(parsable_path, filepath)
# print(parsable_path)
# print(filepath)
# Extract stats from every traces
for trace in stream:
for entry in db_entrynames:
db.loc[index, entry] = trace.stats[entry]
db.loc[index, "tag"] = parsed["tag"]
db.loc[index, "path"] = filepath
index += 1
return db
def read(filename_inventory):
"""Read inventory pickle file.
Arguments
---------
    filename_inventory: str
The filename of the inventory.
Returns
-------
pandas.DataFrame
The tags and corresponding metadata to read.
"""
# Read pickle file
db = pd.read_pickle(filename_inventory)
# Drop columns
db = db.drop(db._format.unique()[0].lower(), axis=1)
db = db.drop("_format", axis=1)
# Convert to pandas timestamps
db.starttime = pd.to_datetime([t.datetime for t in db.starttime])
db.endtime = pd.to_datetime([t.datetime for t in db.endtime])
# Infer types
db = db.infer_objects()
# Duration
db["duration"] = db.endtime - db.starttime
db["duration_hours"] = db.duration.dt.total_seconds() / 3600
return db
/scatseisnet_gpu-0.1.6-py3-none-any.whl/scatseisnet/inventory.py
import argparse
import click
import glob
import json
import logging
import numpy as np
import os
from pathlib import Path
FILE_ARGUMENTS = "arguments.json"
def stdout(message, value):
"""Format message with click echo and style.
Arguments
---------
message: str
The message to use as a base string with {} variables.
    value: str or tuple
        The value(s) to substitute for the {} placeholder(s).
"""
    # Append curly brackets if missing
if "{" not in message:
message = message + " {}"
# Turn into tuple for allowing several values
if not isinstance(value, tuple):
values = (value,)
else:
values = value
# Format
values = [click.style(value, bold=True) for value in values]
message = message.format(*values)
# Echo
click.echo(message)
pass
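# e.g. stdout("Saved features at", "/tmp/features.npz") appends " {}" to the
# message and echoes it with the path rendered in bold (illustrative values).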
def mkdir(dirpath):
"""Create directory.
Arguments:
----------
dirpath: str
Path to the directory to create.
"""
if not os.path.exists(dirpath):
Path(dirpath).mkdir(parents=True, exist_ok=True)
stdout("Created directory", dirpath)
else:
stdout("Using existing directory", dirpath)
pass
def mkdirs(*args):
"""Create directories.
Arguments:
----------
dirpaths: tuple or list
Paths to the directories to create.
"""
for dirpath in args:
mkdir(dirpath)
def parse(init=False):
"""Command-line argument parser.
The usage and help for every command-line arguments for the main program is
avaialbe from the main running script:
python main.py -h
assuming that the main.py script implements at least the following lines:
import scat
scat.parse()
Keyword arguments
-----------------
readonly: bool
A safe switch to use in order to prevent from erasing the data.
Returns
-------
args: :class:`scat.argparse.Namespace`
The parsed arguments.
reader: module
The imported data reading module.
"""
    # Instantiate the parser with the module docstring as description.
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
# Logging level.
parser.add_argument(
"--log",
default="INFO",
metavar="LEVEL",
choices=["NOTSET", "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"],
help="""Logging level. The level name can either be NOTSET, DEBUG,
INFO, WARNING (default), ERROR or CRITICAL. Please visit
https://docs.python.org/3/howto/logging.html for further info.""",
)
# Output directory.
parser.add_argument(
"--outdir",
metavar="PATH",
type=str,
default="out/case",
help="""Specify output directory path. The directory should exist
already.""",
)
# Window duration
parser.add_argument(
"--segment",
metavar="DURATION",
type=float,
default=1800.0,
help="""Segment duration in seconds. A default window duration of 20
seconds is set, but this argument should be considered as a required
argument. This segment also defines the pooling size, since the pooling
is performed over the full segment duration.""",
)
# Window step
parser.add_argument(
"--step",
metavar="DURATION",
type=float,
default=900.0,
help="""Segment step in seconds. This defines the time interval between
the starting time of two consecutive segments. The clustering
performances with respect to the step is still under debate. for
cleaner results, the step should be set with the segment duration.""",
)
# Number of octaves at each layer
parser.add_argument(
"--octaves",
type=int,
default=[7, 12],
nargs="+",
metavar="J",
help="""Number of octaves for each scattering layer. The number of
octaves define the freequency extent of each layer. The number of
octaves is defined from the Nyquist frequency. The scattering network
depth depends on the length of this list.""",
)
# Number of filter per octaves at each layer
parser.add_argument(
"--resolution",
type=int,
default=[6, 1],
nargs="+",
metavar="Q",
help="""Number of wavelets per octaves for each scattering layer. This
define the frequency resolution of each scattering layer, and
consequently, the representation density. Note that the length of this
argument must be the same that the number of octaves.""",
)
# Wavelet banks quality factors
parser.add_argument(
"--quality",
type=int,
default=[1, 1],
nargs="+",
metavar="Qc",
help="""Wavelet bank qulity factor. This defines the ratio bewteen
the center frequency qnd the frequency bandwitdh, and therefore
represents the selectivity of the filter, or quality. It allows to
choose the density level of each representation.""",
)
# Pooling type
parser.add_argument(
"--pooling",
metavar="TYPE",
type=str,
default="max",
choices=["max", "avg"],
help="""Pooling reduction operation. The pooling is performed on a
the duration of a full segment. By default, the maximum pooling
is performed (choices: max or avg).""",
)
# Waveform inventory
parser.add_argument(
"--inventory",
metavar="PATH",
type=str,
default="INVENTORY",
help="""Waveform inventory to read with obspy.""",
)
# Features files
parser.add_argument(
"--file_features",
metavar="PATH",
type=str,
default="out/case/features/*.npz",
help="""Path to features, where the wildcard is replaced by the
tag.""",
)
# Network file
parser.add_argument(
"--file_network",
metavar="PATH",
type=str,
default="out/case/models/network.pickle",
help="""Path to network pickle file for saving.""",
)
if parser.prog == "make_latent.py":
parser.add_argument(
"dim",
metavar="DIM",
type=int,
default=10,
help="""Latent space dimensions.""",
)
parser.add_argument(
"--file_latent",
metavar="PATH",
type=str,
default="out/case/latent/latent.npz",
help="""Path to latent file for saving.""",
)
parser.add_argument(
"--file_reduction",
metavar="PATH",
type=str,
default="out/case/model/reduce.pickle",
help="""Path to reduction model pickle file for saving.""",
)
parser.add_argument(
"--normalize",
metavar="NORM",
type=int,
default=0,
help="""If 1, the higher-order scatterings are normalized.""",
)
if parser.prog == "make_cluster.py":
parser.add_argument(
"--depth",
metavar="depth",
type=int,
default=40,
help="""Number of dendrogram splits.""",
)
parser.add_argument(
"--threshold",
metavar="THRESHOLD",
type=float,
default=0.5,
help="""Threshold for cluster definition.""",
)
parser.add_argument(
"--distance",
metavar="distance",
type=str,
default="ward",
help="""Method to calculate clusters.""",
)
parser.add_argument(
"--file_leaves",
metavar="FILE_LEAVES",
type=str,
default="out/case/model/leaves.pickle",
help="""Save file for leaves.""",
)
parser.add_argument(
"--file_clustering",
metavar="FILE_CLUSTERING",
type=str,
default="out/case/model/clusters.pickle",
help="""Save file for clusters.""",
)
parser.add_argument(
"--file_latent",
metavar="PATH",
type=str,
default="out/case/latent/latent.npz",
help="""Path to latent file for saving.""",
)
parser.add_argument(
"--file_linkage",
metavar="PATH",
type=str,
default="out/case/cluster/linkage.pickle",
help="""Path to linkage file for saving.""",
)
parser.add_argument(
"--linkage",
metavar="linkage",
type=int,
default=1,
help="""Recalculate linkage.""",
)
# Parse command-line arguments
args = parser.parse_args()
# Set logging
logging.basicConfig(format="[%(levelname)s] %(message)s", level=args.log)
logging.info(f"Logging level set to {args.log}")
    if init:
save_arguments(args)
return args
else:
return load_arguments(args)
def save_arguments(args, skip=["log"]):
"""Save arguments.
The input arguments out of the parser function are saved into a JSON file
format in order to ensure readability by the user.
Arguments
---------
args: :class:`scat.argparse.Namespace`
The argument to save.
skip: list
A list of arguments not to save. Useful for avoiding saved arguments
to overwrite new arguments in later runs like log or mode.
"""
# Filename
json_filename = os.path.join(args.outdir, FILE_ARGUMENTS)
# Skip arguments from the list that are not to be saved
args_dict = args.__dict__.copy()
for arg_skip in skip:
args_dict.pop(arg_skip)
# Save
with open(json_filename, "w") as json_file:
json.dump(args_dict, json_file, indent=4)
# Logging checkpoint
logging.info("Saved arguments at {}".format(json_filename))
pass
def load_arguments(args=argparse.Namespace(), filename=None):
"""Load arguments.
    This function updates the arguments pre-loaded by the parser from a JSON
    file previously saved with the save_arguments function, allowing a set of
    arguments from a previous run to be recovered.
Arguments
---------
args: :class:`argparse.Namespace`
The default arguments to overwrite.
Returns
-------
args: :class:`argparse.Namespace`
The updated arguments.
"""
# Filename
if filename is not None:
json_filename = filename
else:
json_filename = os.path.join(args.outdir, FILE_ARGUMENTS)
# Read and update arguments
with open(json_filename, "r") as json_file:
args.__dict__.update(json.load(json_file))
# Logging checkpoint
logging.info("Loaded arguments from {}".format(json_filename))
logging.debug("Arguments list {}".format(args))
return args
def load_features(path):
"""Read scattering coefficients and return design matrix."""
# init
x = list()
t = list()
# loop over available npz files
for datafile in sorted(glob.glob(path)):
data = np.load(datafile)
features = [data[key] for key in data if "features" in key]
# reshape and stack
if features[0].shape[0]:
features = [feat.reshape(feat.shape[0], -1) for feat in features]
x.append(np.hstack(features))
t.append(data["times"])
return np.vstack(x), np.hstack(t)
def load_feature_file(path):
"""Read scattering coefficients and return design matrix."""
data = np.load(path)
features = [data[key] for key in data if "features" in key]
if features[0].shape[0]:
features = [feat.reshape(feat.shape[0], -1) for feat in features]
x = np.hstack(features)
t = data["times"]
return x, t
/scatseisnet_gpu-0.1.6-py3-none-any.whl/scatseisnet/io.py
"""Show selected features in time and latent spaces."""
import nmmn.plots
import numpy as np
import os
from matplotlib import dates as mdates
from matplotlib import pyplot as plt
from scipy.stats import median_abs_deviation
from scipy.signal import medfilt
from ..io import stdout
def demad(x, factor=10.0):
"""Normalize signal with median absolute deviation.
Arguments
---------
x: np.ndarray
The input signal.
factor: float, optional
An additional normalization factor.
Returns
-------
The data normalized with median absolute deviation.
"""
mad = median_abs_deviation(x)
return x / np.mean(mad) / factor
def show_time(times, features, factor=0.4, medfilt_kernel=101):
"""Latent variables in time domain."""
# Preprocess
features = features.T
features = demad(features)
n_features, n_bins = features.shape
# Figure
fig, ax = plt.subplots(1, figsize=(5, 7))
# Show
for index, feature in enumerate(features):
color = f"C{index % 3}"
feature += index + 1
feature_filtered = medfilt(feature, medfilt_kernel)
ax.plot(times, feature, ".", ms=1, alpha=0.5, mew=0, color=color)
ax.plot(times, feature_filtered, lw=0.7, color=color)
# Labels
ax.grid()
ax.set_ylim(0, n_features + factor)
ax.set_yticks(np.arange(n_features) + 1)
ax.set_ylabel("Independant component index")
# Date labels
dateticks = mdates.AutoDateLocator()
datelabels = mdates.ConciseDateFormatter(dateticks)
ax.xaxis.set_major_locator(dateticks)
ax.xaxis.set_major_formatter(datelabels)
ax.set_xlim(times.min(), times.max())
# Remove borders
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
ax.tick_params(axis="y", length=0)
return fig
def show_latent(features, cmap=nmmn.plots.wolframcmap(), nbins=800):
"""Latent variables in versus diagram."""
# Preprocess
features = features.T
features = demad(features)
n_features = features.shape[0] - 1
# Figure
figsize = 2 * [n_features]
gridspec_kw = dict(hspace=0.1, wspace=0.1)
fig, ax = plt.subplots(
n_features,
n_features,
figsize=figsize,
gridspec_kw=gridspec_kw,
constrained_layout=False,
sharex="col",
sharey="row",
)
# Versus diagrams
for i in range(n_features):
x = features[i]
x_min, x_max = x.min(), x.max()
x_bins = np.linspace(x_min, x_max, nbins)
for j in range(0, n_features):
y = features[j + 1]
y_min, y_max = y.min(), y.max()
y_bins = np.linspace(y_min, y_max, nbins)
# Lower triangular
if j >= i:
# Histogram
counts, _, _ = np.histogram2d(x, y, (x_bins, y_bins))
counts = counts.T
counts[counts == 0] = 1e-4
counts = np.log(counts)
extent = [x_min, x_max, y_min, y_max]
ax[j, i].imshow(
counts, cmap=cmap, extent=extent, aspect="auto"
)
ax[j, i].grid()
ax[j, i].set_xticks([])
ax[j, i].set_yticks([])
ax[j, i].set_ylim(y_min, y_max)
ax[j, i].set_xlim(x_min, x_max)
# Style
for side in ax[j, i].spines:
ax[j, i].spines[side].set_visible(False)
# Upper triangular
else:
ax[j, i].set_axis_off()
# Labels
if j == n_features - 1:
ax[j, i].set_xlabel(f"Latent {i + 1}")
if i == 0:
ax[j, i].set_ylabel(f"Latent {j + 2}")
return fig
def show_features(file_features, file_figure, medfilt_kernel=101):
# Basename
basename = os.path.basename(file_features)
basename = basename.split(".")[0]
# Load features
with np.load(file_features) as data:
features = data["features"]
times = data["times"]
stdout("Loaded features from", file_features)
# Show in time and save
fig = show_time(times, features, medfilt_kernel=medfilt_kernel)
print("figure created")
file_figure_time = file_figure + "_time.png"
fig.show()
fig.savefig(file_figure_time, dpi=600)
print("figure saved")
stdout("Figure saved at", file_figure_time)
# Show in latent space and save
fig = show_latent(features)
file_figure_space = file_figure + "_space.png"
fig.savefig(file_figure_space)
stdout("Figure saved at", file_figure_space)
/scatseisnet_gpu-0.1.6-py3-none-any.whl/scatseisnet/display/features.py
import numpy as np
from matplotlib import dates as mdates
from matplotlib import pyplot as plt
from string import ascii_lowercase as letters
from scipy.cluster import hierarchy
COLORS = [
"0.8",
"#222222",
"#F3C300",
"#875692",
"#F38400",
"#A1CAF1",
"#BE0032",
"#C2B280",
"#848482",
"#008856",
"#E68FAC",
"#0067A5",
"#F99379",
"#604E97",
"#F6A600",
"#B3446C",
"#DCD300",
"#882D17",
"#8DB600",
"#654522",
"#E25822",
"#2B3D26",
]
def get_leaves(dendrogram_info, ax):
"""Get dendrogram list of leaves coordinates and colors.
Arguments
---------
d: dict
Output of the scipy.hierarchy.dendrogram function.
Returns
-------
coords, colors: array-like
The x coordinates and color of each leave.
"""
# Extract coordinates of each leave (with a depth coordinate indexed 0)
infos = (key for key in dendrogram_info)
node_index, node_depth, *_ = (dendrogram_info[key] for key in infos)
leaves_coordinates = list()
for index, depth in zip(node_index, node_depth):
if depth[0] == 0:
leaves_coordinates.append(index[0])
if depth[-1] == 0:
leaves_coordinates.append(index[-1])
leaves_coordinates = sorted(set(leaves_coordinates))
# Cardinality
leaves_population_size = list()
for label in ax.get_yticklabels():
label = label.get_text()
label = label.replace("(", "").replace(")", "")
population_size = int(label)
leaves_population_size.append(population_size)
return leaves_coordinates, leaves_population_size
def get_prediction(linkage, population_size):
"""Get cluster predection for each sample.
Arguments
---------
linkage: np.ndarray
Output of the linkage function.
population_size: list
The size of every cluster
"""
indexes_flat = hierarchy.leaves_list(linkage)
predictions = np.zeros_like(indexes_flat)
start = 0
for index, size in enumerate(population_size):
predictions[indexes_flat[start : start + size]] = index + 1
start += size
return predictions
def show_dendrogram(linkage, ax=None, depth=30):
    """Show the dendrogram and return basic cluster information.
Arguments
---------
linkage: np.ndarray
Linkage matrix.
Keyword arguments
-----------------
ax: plt.Axes
The axes to draw the dendrogram into.
    depth: int
The dendrogram depth
Return
------
prediction: np.ndarray
The cluster prediction per sample.
"""
    # Use the current axes if none are provided
    ax = plt.gca() if ax is None else ax
    # Show and get dendrogram
with plt.rc_context({"lines.linewidth": 0.7}):
dendrogram_infos = hierarchy.dendrogram(
linkage,
p=depth,
truncate_mode="lastp",
color_threshold=0,
ax=ax,
orientation="left",
above_threshold_color="0.3",
count_sort=True,
labels=None,
)
# Extract informations
coordinates, population_sizes = get_leaves(dendrogram_infos, ax)
predictions = get_prediction(linkage, population_sizes)
# Plot leave nodes
node_style = dict(ms=5, mec="0.3", mew=0.7, clip_on=False)
for coordinate, color in zip(coordinates, COLORS):
ax.plot(0, coordinate, "o", mfc=color, **node_style)
index = int((coordinate - 5) / 10) + 1
label = "{:d}".format(index)
ax.text(-0.1, coordinate, label, color=color, va="center")
return predictions
def dendrogram(
linkage, times, n_clusters, hourly=np.arange(24), n_cal_bins=150
):
# Deactivate axes basic properties
spines_off = {
"axes.spines.right": False,
"axes.spines.left": False,
"axes.spines.top": False,
"axes.facecolor": "none",
"xtick.top": False,
"ytick.left": False,
}
# Generate axes
gs = dict(width_ratios=[2, 4, 1, 2])
figsize = 6, n_clusters * 0.35
with plt.rc_context(spines_off):
figure_kwargs = dict(sharey=True, figsize=figsize, gridspec_kw=gs)
figure, axes = plt.subplots(1, 4, **figure_kwargs)
# Axes unpack
ax_dendrogram, ax_cal, ax_hourly, ax_population = axes
# Calendar bins
timestamps = mdates.date2num(times)
edge_shift = 0.1 * (timestamps[-1] - timestamps[0])
start, end = timestamps[0] - edge_shift, timestamps[-1] + edge_shift
cal_bins = np.linspace(start, end, n_cal_bins)
cal_step = cal_bins[1] - cal_bins[0]
h_step = hourly[1] - hourly[0]
# Show dendrogram
predictions = show_dendrogram(linkage, ax=ax_dendrogram, depth=n_clusters)
classes = sorted(set(predictions))
# Show other cluster properties
for cluster, color in zip(classes, COLORS):
# Cluster coordinates
yshift = (cluster - 1) * 10 + 5
indexes = predictions == cluster
# Population size
size = np.sum(predictions == cluster)
ratio = 100 * size / len(times)
# Calendar occurrences
cluster_times = times[indexes]
cluster_timestamps = timestamps[indexes]
cluster_hours = [time.hour for time in cluster_times]
cal_counts, _ = np.histogram(cluster_timestamps, cal_bins)
cal_counts = cal_counts / cal_counts.max()
# Hourly occurrences
hourly_counts = np.sum(cluster_hours == hourly[:, None], axis=1)
hourly_counts = hourly_counts / hourly_counts.max()
# Population graph
bar_style = dict(height=3, color=color, ec="0.3", lw=0.5, align="edge")
text_style = dict(size=6, va="center", color=color)
text_label = f" {size}"
ax_population.barh(yshift, ratio, **bar_style)
ax_population.text(ratio, yshift + 1.5, text_label, **text_style)
# Calendar graph
bar_style = dict(bottom=yshift, width=cal_step, fc=color, align="edge")
step_style = dict(c="0.3", lw=0.5, where="post")
ax_cal.bar(cal_bins[:-1], cal_counts * 5, **bar_style)
ax_cal.step(cal_bins[:-1], cal_counts * 5 + yshift, **step_style)
# Hourly graph
bar_style = dict(bottom=yshift, width=h_step, fc=color, align="edge")
step_style = dict(c="0.3", lw=0.5, where="post")
ax_hourly.bar(hourly, hourly_counts * 5, **bar_style)
ax_hourly.step(hourly, hourly_counts * 5 + yshift, **step_style)
# Labels dendrogram
ax_dendrogram.set_xlabel("Rescaled distance")
ax_dendrogram.set_yticklabels([])
ax_dendrogram.yaxis.set_label_position("right")
# Labels population
ax_population.set_yticks(10 * np.arange(len(classes)) + 5)
ax_population.set_xlabel("Relative\npopulation size (%)", loc="left")
# Labels calendar
ax_cal.set_yticks(10 * np.arange(len(classes)) + 5)
ax_cal.set_xlabel("Calendar date", loc="left")
dateticks = mdates.AutoDateLocator()
datelabels = mdates.ConciseDateFormatter(dateticks, show_offset=False)
ax_cal.xaxis.set_major_locator(dateticks)
ax_cal.xaxis.set_major_formatter(datelabels)
ax_cal.set_xlim(start, end)
plt.setp(ax_cal.get_xticklabels(), rotation="vertical")
# Labels hourly
hours_ticks = range(0, 25, 12)
hours_labels = [f"{h:02d}" for h in hours_ticks]
ax_hourly.set_xlim(0, 24)
ax_hourly.set_xlabel("Local\ntime (hours)", loc="left")
ax_hourly.set_xticks(hours_ticks)
ax_hourly.set_xticklabels(hours_labels)
# All-axes cosmetics
for ax, letter in zip(axes, letters):
ax.grid(clip_on=False)
ax.set_title(letter, loc="left")
return figure, predictions
/scatseisnet_gpu-0.1.6-py3-none-any.whl/scatseisnet/display/linkage.py
import nmmn.plots
import numpy as np
import pickle
from matplotlib import dates as mdates
from matplotlib import pyplot as plt
from .. import inventory
from .. import signal
from ..io import load_feature_file, stdout
plt.rcParams["figure.constrained_layout.use"] = True
def show(t, x, sx, net, timestamp, channel=1):
# Create figure
fig, ax = plt.subplots(3, figsize=(5, 6))
# Metadata
sampling_rate = net.sampling_rate
freq = [bank.centers(sampling_rate) for bank in net.banks]
    # Log-transformation
sx = np.log10(sx + 1)
scats_0, scats_1 = signal.reshape_features(sx, net)
# show single window
sx1 = np.squeeze(scats_1[channel]).T
# cmap = nmmn.plots.wolframcmap()
cmap = "Spectral_r"
img = ax[0].pcolormesh(freq[0], freq[1], sx1[:-1, :-1], cmap=cmap)
ax[0].set_xscale("log")
ax[0].set_yscale("log")
ax[0].set_xlim(freq[0].min(), freq[0].max())
ax[0].set_ylim(freq[1].min(), freq[1].max())
ax[0].set_xlabel("First-order frequency (Hz)")
ax[0].set_ylabel("Second-order frequency (Hz)")
ax[0].grid()
ax[0].set_title("a", loc="left")
cb = plt.colorbar(img, ax=ax[0], aspect=10)
cb.set_label("Second-order\nscattering coefficients")
    # show first-order scatterings
ax[1].step(freq[0], scats_0[channel], where="post")
ax[1].fill_between(freq[0], 0, scats_0[channel], alpha=0.2, step="post")
ax[1].set_ylim(bottom=0, top=40)
ax[1].set_xscale("log")
ax[1].set_xlim(freq[0].min(), freq[0].max())
ax[1].set_xlabel("First-order frequency (Hz)")
ax[1].grid(which="both")
ax[1].set_ylabel("First-order\nscattering coefficients")
ax[1].set_title("b", loc="left")
# show single window
ax[2].plot(t, x[channel], "k")
dateticks = mdates.AutoDateLocator()
datelabels = mdates.ConciseDateFormatter(dateticks)
ax[2].xaxis.set_major_locator(dateticks)
ax[2].xaxis.set_major_formatter(datelabels)
ax[2].set_xlim(t.min(), t.max())
ax[2].set_ylim(-1, 1)
ax[2].axvline(
timestamp, c="C1", zorder=0, lw=4, alpha=0.4, label="Timestamp"
)
ax[2].legend()
ax[2].grid()
ax[2].set_ylabel("Amplitude")
ax[2].set_title("c", loc="left")
return fig
def show_scatterings(
file_features,
file_inventory,
file_network,
file_figure,
timestamp,
step,
reader=None,
):
"""Show features on given time stamp."""
# Read inventory
tags, paths, start, end, _ = inventory.read(file_inventory)
start, end = mdates.datestr2num(np.vstack((start, end)))
# Get closest path to timestamp
timestamp = mdates.date2num(timestamp)
timestamp_index = np.searchsorted(start, timestamp) - 1
timestamp_index = 0 if timestamp_index < 0 else timestamp_index
path = paths[timestamp_index]
tag = tags[timestamp_index]
file_features = file_features.replace("*", tag)
file_figure = file_figure.replace("*", tag)
# load network
net = pickle.load(open(file_network, "rb"))
stdout("Loaded scatterings network from", file_network)
bins = net.banks[0].bins
# load features
scatterings, scattering_times = load_feature_file(file_features)
stdout("Loaded scatterings from", file_features)
# Read seismograms
stream = reader(path)
stdout("Loaded seismogram from", path)
times = stream[0].times("matplotlib")
seismograms = [trace.data for trace in stream]
npts = min((len(trace) for trace in seismograms))
seismograms = np.array([component[:npts] for component in seismograms])
times = times[:npts]
step = int(step * stream[0].stats.sampling_rate)
seismograms = signal.segmentize(seismograms, bins, step)
times = signal.segmentize(times, bins, step)
# Find corresponding window index
window_index = np.searchsorted(scattering_times, timestamp) - 1
window_index = 0 if window_index < 0 else window_index
# select window
scatterings = scatterings[window_index]
t = times[window_index]
x = seismograms[window_index]
x = 0.9 * x / (np.abs(x).max() + 1e-5)
# Reshape features
sx = signal.reshape_features(scatterings, net)
sx = signal.normalize_features(sx)
sx = signal.vectorize_features(sx)
# show
show(t, x, sx, net, timestamp).savefig(file_figure)
stdout("Saved figure at", file_figure + ".png")
/scatseisnet_gpu-0.1.6-py3-none-any.whl/scatseisnet/display/scatterings.py
import click
import numpy as np
import os
import pickle
from sklearn.decomposition import FastICA as skmodel
from sklearn.metrics import r2_score, mean_squared_error
from .common import common_options
from .. import signal, io
from ..display import show_features
@click.command("features", short_help="Calculate features.")
@common_options
@click.option(
"--dimensions",
default=10,
show_default=True,
help="Number of latent space dimensions.",
type=int,
)
@click.option(
"--normalize",
is_flag=True,
default=False,
show_default=True,
help="Normalization flag.",
)
@click.option(
"--medfilt",
type=int,
default=101,
show_default=True,
help="Median filter kernel size.",
)
def features(
dimensions,
normalize=False,
savepath=None,
figpath=None,
filename_network=None,
filename_reduction=None,
path_scatterings=None,
path_features=None,
show=False,
medfilt=None,
**kwargs,
):
"""Reduce scattering domain dimensions."""
# Path
dirpath_models = os.path.join(savepath, "models")
dirpath_scats = os.path.join(savepath, path_scatterings)
dirpath_features = os.path.join(savepath, path_features)
dirpath_figure = os.path.join(figpath, "features")
# Files
filepath_network = os.path.join(dirpath_models, filename_network)
filepath_reduction = os.path.join(dirpath_models, filename_reduction)
filepath_scatterings = os.path.join(dirpath_scats, "scatterings_*.npz")
filepath_features = os.path.join(dirpath_features, "features_{}_{}.npz")
filepath_figure = os.path.join(dirpath_figure, "features_{}_{}")
# Append parameters to filenames
norm = "norm" if normalize is True else "no-norm"
filepath_reduction = filepath_reduction.format(dimensions)
filepath_features = filepath_features.format(dimensions, norm)
filepath_figure = filepath_figure.format(dimensions, norm)
if show is True:
io.mkdir(dirpath_figure)
show_features(
filepath_features, filepath_figure, medfilt_kernel=medfilt
)
else:
# Directories
io.mkdir(dirpath_features)
# Parameters
io.stdout("Using {} for scattering coefficients", filepath_scatterings)
io.stdout("Using {} for features", filepath_features)
io.stdout("Using {} dimensions", dimensions)
io.stdout("Using normalization", normalize)
# Load features
features, times = io.load_features(filepath_scatterings)
# Normalize features
if normalize is True:
io.stdout("Read newtork from", filepath_network)
net = pickle.load(open(filepath_network, "rb"))
for index in range(features.shape[0]):
feature = signal.reshape_features(features[index], net)
feature = signal.normalize_features(feature)
features[index] = signal.vectorize_features(feature)
# Preprocess
keep = features.sum(axis=1) > 1e-3
times = times[keep]
features = features[keep]
features = np.log10(features + 1e-3)
features = (features - features.min()) / (
features.max() - features.min()
)
# Reduce
print("Performing reduction")
model = skmodel(n_components=dimensions)
latents = model.fit_transform(features)
inversed = model.inverse_transform(latents)
io.stdout("Mean squared error", mean_squared_error(features, inversed))
io.stdout("R2 coefficient", r2_score(features, inversed))
# Save latent variables
np.savez(filepath_features, times=times, features=latents)
io.stdout("Saved features at", filepath_features)
# save model
pickle.dump(model, open(filepath_reduction, "wb"))
io.stdout("Saved model at", filepath_reduction)
pass
/scatseisnet_gpu-0.1.6-py3-none-any.whl/scatseisnet/cli/features.py
import click
import os
import pickle
from .common import common_options
from .transform import load_waveform
from ..io import mkdir, stdout
from ..display import show_waveforms
@click.command("waveforms", short_help="Show waveforms from clustering.")
@common_options
@click.option(
"--segment",
type=float,
default=200.0,
show_default=True,
help="Segment duration (seconds).",
)
@click.option(
"--n_samples",
type=int,
default=10,
show_default=True,
help="Number of waveforms to show.",
)
@click.option(
"--dimensions",
default=10,
show_default=True,
help="Number of latent space dimensions.",
type=int,
)
@click.option(
"--normalize",
is_flag=True,
default=False,
show_default=True,
help="Normalization flag.",
)
def waveforms(
segment,
dimensions,
normalize,
n_samples,
savepath=None,
figpath=None,
filename_inventory=None,
filename_network=None,
path_features=None,
path_clusters=None,
**kwargs,
):
"""Transform seismograms into scattering domain."""
# Path
dirpath_clusters = os.path.join(savepath, path_clusters)
dirpath_features = os.path.join(savepath, path_features)
dirpath_inventory = os.path.join(savepath, "inventories")
dirpath_figure = os.path.join(figpath, "waveforms", "waveforms_{}_{}")
# Files
filepath_features = os.path.join(dirpath_features, "features_{}_{}.npz")
filepath_clusters = os.path.join(dirpath_clusters, "clusters_{}_{}.npz")
filepath_inventory = os.path.join(dirpath_inventory, filename_inventory)
filepath_figure = os.path.join(dirpath_figure, "cluster_*")
# Append
norm = "norm" if normalize is True else "no-norm"
dirpath_figure = dirpath_figure.format(dimensions, norm)
filepath_features = filepath_features.format(dimensions, norm)
filepath_clusters = filepath_clusters.format(dimensions, norm)
filepath_figure = filepath_figure.format(dimensions, norm)
mkdir(dirpath_figure)
show_waveforms(
segment,
filepath_features,
filepath_clusters,
filepath_inventory,
filepath_figure,
n_samples=n_samples,
reader=load_waveform,
factor=0.8,
)
pass
/scatseisnet_gpu-0.1.6-py3-none-any.whl/scatseisnet/cli/waveforms.py
import click
import os
from .common import common_options
from ..inventory import inventorize
from .. import io
@click.command("inventory", short_help="Create dataset inventory.")
@common_options
@click.option(
"--tags",
type=str,
default="*",
show_default=True,
help="Tags to search for in the datapath. Accepts wildcards.",
)
def inventory(
tags=None,
channels=None,
savepath=None,
datapath=None,
filename_inventory=None,
**kwargs,
):
"""Create an inventory of available data.
The script generates an inventory of the data files from a parsable path.
This inventory allows later to connect the calculated features and the
input data quickly. In addition, this script enables selecting the input
data based on different criteria, such as the sampling rate, duration,
channel, or dates.
The parser determines two elements from the given parsable path: a "tag"
identifier for each file and a "channel" variable. Note that these two
variables can appear multiple times in the string.
    Let us consider the example list of files below containing three days of
    data (tagged as 2010.014, 2010.015, and 2010.016) recorded with two
    channels (E and Z).
/path/to/data/2010.014/HHE/2010.014.HHE.sac
/path/to/data/2010.014/HHZ/2010.014.HHZ.sac
/path/to/data/2010.015/HHE/2010.015.HHE.sac
/path/to/data/2010.015/HHZ/2010.015.HHZ.sac
/path/to/data/2010.016/HHE/2010.016.HHE.sac
/path/to/data/2010.016/HHZ/2010.016.HHZ.sac
The idea is to generate a list of file paths that can read all channels of
the same date at once with the obspy's read routine with the following
command:
        scatseisnet inventory --datapath /path/to/data/{tag}/HH{channel}/{tag}.HH{channel} --channels Z E --tags * --filename_inventory inventory
    From this command, the parser will determine the following list of tags
    (which can be restricted to a more specific pattern with the --tags
    command-line argument) with corresponding paths and save them in the
    "inventory" file specified in the --filename_inventory option.
        2010.014 /path/to/data/2010.014/HH[Z,E]/2010.014.HH[Z,E].sac
        2010.015 /path/to/data/2010.015/HH[Z,E]/2010.015.HH[Z,E].sac
        2010.016 /path/to/data/2010.016/HH[Z,E]/2010.016.HH[Z,E].sac
Note that the notation "[Z,E]" enables obspy to read both channels Z and E
in the same stream at once, given the path expansion capabilities of the
obspy read routine (based on the glob Python library).
"""
# Resolve paths
dirpath_inventory = os.path.join(savepath, "inventories")
filepath_inventory = os.path.join(dirpath_inventory, filename_inventory)
# Calculate inventory
io.mkdir(dirpath_inventory)
db = inventorize(datapath, channels, tags)
# Save inventory
db.to_pickle(filepath_inventory)
io.stdout("Inventory complete saved at", filepath_inventory)
/scatseisnet_gpu-0.1.6-py3-none-any.whl/scatseisnet/cli/inventory.py
"""Calculate scattering transform on segmented time series."""
import click
import numpy as np
import os
import pickle
from dateutil import tz
from matplotlib import dates as mdates
from .common import common_options
from .. import hierarchy
from ..display import dendrogram
from ..io import stdout, mkdir
@click.command("linkage", short_help="Calculate linkage matrix.")
@common_options
@click.option(
"--method",
type=click.Choice(("single", "centroid", "median", "ward")),
default="ward",
help="Number of clusters splits.",
)
@click.option(
"--dimensions",
default=10,
show_default=True,
help="Number of latent space dimensions.",
type=int,
)
@click.option(
"--normalize",
is_flag=True,
default=False,
show_default=True,
help="Normalization flag.",
)
@click.option(
"--n_clusters",
"-n",
type=int,
default=10,
show_default=True,
help="Number of clusters.",
)
@click.option(
"--time-zone",
default="Mexico/General",
type=str,
show_default=True,
help="Time zone for local time histogram.",
)
def linkage(
method,
n_clusters,
dimensions,
normalize,
time_zone,
path_features=None,
path_clusters=None,
savepath=None,
figpath=None,
show=False,
**kwargs,
):
# Path
dirpath_clusters = os.path.join(savepath, path_clusters)
dirpath_features = os.path.join(savepath, path_features)
# Files
filepath_features = os.path.join(dirpath_features, "features_{}_{}.npz")
filepath_linkage = os.path.join(dirpath_clusters, "linkage_{}_{}.npz")
filepath_clusters = os.path.join(dirpath_clusters, "clusters_{}_{}.npz")
filepath_dendrogram = os.path.join(figpath, "dendrogram_{}_{}.png")
# Append
norm = "norm" if normalize is True else "no-norm"
filepath_features = filepath_features.format(dimensions, norm)
filepath_linkage = filepath_linkage.format(dimensions, norm)
filepath_clusters = filepath_clusters.format(dimensions, norm)
filepath_dendrogram = filepath_dendrogram.format(dimensions, norm)
if show is True:
mkdir(figpath)
linkage = pickle.load(open(filepath_linkage, "rb"))
stdout("Loaded linkage matrix from", filepath_linkage)
with np.load(filepath_features) as data:
timestamps = data["times"]
times = np.array(mdates.num2date(timestamps, tz=tz.gettz(time_zone)))
# Show dendrogram
fig, predictions = dendrogram(linkage, times, n_clusters)
fig.savefig(filepath_dendrogram)
stdout("Saved figure at", filepath_dendrogram)
# Save predictions
np.savez(filepath_clusters, predictions=predictions, times=timestamps)
stdout("Saved predictions at", filepath_clusters)
else:
mkdir(dirpath_clusters)
# Load features
with np.load(filepath_features) as data:
features = data["features"]
stdout("Features loaded from", filepath_features)
# Calculate and save linkage
linkage = hierarchy.linkage(features, method)
pickle.dump(linkage, open(filepath_linkage, "wb"))
stdout("Saved linkage matrix at", filepath_linkage)
/scatseisnet_gpu-0.1.6-py3-none-any.whl/scatseisnet/cli/linkage.py
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from scipy.linalg import eigh
import warnings
warnings.simplefilter(action='ignore',category=FutureWarning)
import string
#reshape the 3D array into 4D with dimensions (h/2,w/2,c,4)
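# e.g. a (100, 100, 3) image becomes (50, 50, 3, 4), one (h/2, w/2, c) slice
# per 2x2 sub-sampling offset (illustrative shapes).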
def reshape_array(I):
h,w,c = I.shape
new_arr=np.zeros(shape=(int(h/2),int(w/2),int(c),4))
k = 0
for i in range(2):
for j in range(2):
new_arr[:,:,:,k] = I[i:len(new_arr)*2:2,j:len(new_arr[0])*2:2,:]
k += 1
return new_arr
def pca(Iout,normalize=None,mask=None):
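    """Principal component analysis over the last axis of `Iout`.
    Pixels are treated as samples and the last axis as variables; an optional
    mask weights the mean and covariance estimates. The `normalize` string
    selects scaling by one of the singular values ('d0'..'d4', or 'd' for all)
    and, if it contains 'n', maps the scores through a normal CDF. Components
    are returned in descending order of variance, reshaped like the input.
    (Docstring added for clarity; it describes the code below.)
    """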
if mask is None:
mask = np.ones_like(Iout[...,0])
if normalize is None:
normalize = 'dn'
X = np.reshape(Iout,(-1,Iout.shape[-1]))
mask = mask.ravel()[...,None]
Xbar = np.sum(X*mask,0)/np.sum(mask)
X0 = X - Xbar
Sigma = X0.transpose() @ (X0*mask) / np.sum(mask)
d2,V = eigh(Sigma)
d = np.sqrt(d2)
XV = X0 @ V
if 'd0' in normalize:
XVd = XV / d[-1]
elif 'd1' in normalize:
XVd = XV / d[-2]
elif 'd2' in normalize:
XVd = XV / d[-3]
elif 'd3' in normalize:
XVd = XV / d[-4]
elif 'd4' in normalize:
XVd = XV / d[-5]
elif 'd' in normalize:
XVd = XV / d
else:
XVd = XV
if 'n' in normalize:
XVdn = norm.cdf(XVd)
else:
XVdn = XVd
out = np.reshape(XVdn,Iout.shape)
out = out[:,:,::-1] # change it from ascending to descending
return out
def make_filters(radius=15,n_directions=1,
scale_low=2.0, scale_high=3.0,
slant=1.7, draw=False):
'''
Produce a set of low pass and high pass filters
based on the Gaussian kernel, with direction selectivity
    through the Morlet wavelet.
    For the special case of n_directions=1, we do not use the
    Morlet wavelet.
'''
# build a Gaussian filter for low pass
x = np.arange(-radius,radius+1)
X = np.stack(np.meshgrid(x,x,indexing='ij'))
R2 = np.sum(X**2,0)
h = np.exp(-R2/2.0/scale_low**2)
h /= np.sum(h) # normalize to 1
# angles for direction selectivity
thetas = np.arange(n_directions)/n_directions * np.pi
# build a list of filters at different angles
gs = []
for t in thetas:
# we use a Gaussian filter times a complex exponential
g0 = np.exp(-((X[0]*np.cos(t) + X[1]*np.sin(t))**2 + (-X[0]*np.sin(t) + X[1]*np.cos(t))**2/(slant)**2)/2.0/(scale_high)**2)
g0 /= np.sum(g0)
wave = np.exp(1j*(X[0]*np.cos(t) + X[1]*np.sin(t))/scale_high*slant)
# we normalize the filter such that it is zero mean and its absolute value sums to 1
g = g0 * ( wave - np.sum(g0*wave)/np.sum(g0) )
# the sum is sum(g0*wave) - sum(g0)*sum(g0*wave)/sum(g0) = 0
g /= np.sum(np.abs(g))
gs.append(g)
# special case if not using directions
if n_directions == 1:
# we use a highpass filter that corresponds to the complement of our original lowpass filter
gs = [ np.fft.fftshift(np.fft.ifftn(1.0 - np.fft.fftn(np.fft.ifftshift(h)))).real ]
if draw:
gshow = [h] + gs
f,ax = plt.subplots(2,n_directions+1,squeeze=False)
for i,g in enumerate(gshow):
handle = ax[0,i].imshow(g.real)
ax[0,i].set_xticks([])
ax[0,i].set_yticks([])
#plt.colorbar(handle,ax=ax[0,i])
handle = ax[1,i].imshow(g.imag)
ax[1,i].set_xticks([])
ax[1,i].set_yticks([])
#plt.colorbar(handle,ax=ax[1,i])
if i == len(gs)//2:
ax[0,i].set_title('real part')
ax[1,i].set_title('imaginary part')
f.canvas.draw()
#h: low pass, gs: high pass (last dim)
return h,np.stack(gs,-1)
def apply_filters(I,filters):
'''
    Apply filters to a multi-channel image.
Note this only works for square filters (rows = columns)
This introduces a new dimension to the image, one for each filter
If there is only one filter, it will introduce a singleton dimension
If the image is only one channel, it will introduce a singleton dimension
'''
# append another dimension if necessary
if filters.ndim == 2:
filters = filters[...,None]
if I.ndim == 2:
I = I[...,None]
# Fourier transform over first two axes
Ihat = np.fft.fftn(I,axes=(0,1))
# pad the filters on the right so they are the same size as the image
topad = np.array(I.shape)[:2] - np.array(filters.shape)[:2]
topad = [(0,t) for t in topad]
topad.append((0,0))
filtersp = np.pad(filters,topad,mode='constant')
# roll them so the center pixel is at the top left corner
r = (filters.shape[0]-1)//2
filtersp = np.roll(filtersp,[-r]*2,[0,1])
# Fourier transform over first two axes
filtersphat = np.fft.fftn(filtersp,axes=(0,1))
    # filtering is equivalent to multiplying in the Fourier domain;
    # invert only over the two spatial axes that were forward transformed
    Ifiltered = np.fft.ifftn(Ihat[..., None] * filtersphat[..., None, :], axes=(0, 1))
return Ifiltered
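# Minimal usage sketch (illustrative only, not part of the original module):
# build a small directional filter bank and apply it to a random RGB image.
if __name__ == "__main__":
    h, gs = make_filters(radius=7, n_directions=4)
    I = np.random.rand(64, 64, 3)
    low = apply_filters(I, h)    # -> (64, 64, 3, 1), complex valued
    high = apply_filters(I, gs)  # -> (64, 64, 3, 4), complex valued
    print(low.shape, high.shape)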
/scatter_downsample-0.2.6.tar.gz/scatter_downsample-0.2.6/scatter_downsample/helpers.py
import torch
from torch import nn
class ScatteringCompositionalLearner(nn.Module):
"""
Scattering Compositional Learner (SCL) [1] for solving Raven's Progressive Matrices.
[1] Wu, Yuhuai, et al. "The Scattering Compositional Learner: Discovering Objects, Attributes, Relationships in Analogical Reasoning." arXiv 2020
"""
def __init__(self, image_size=160):
"""
Initializes the SCL model.
:param image_size: width and height of RPM panels
"""
super(ScatteringCompositionalLearner, self).__init__()
self.conv = nn.Sequential(
ConvBnRelu(1, 16, kernel_size=3, stride=2, padding=1),
ConvBnRelu(16, 16, kernel_size=3, padding=1),
ConvBnRelu(16, 32, kernel_size=3, padding=1),
ConvBnRelu(32, 32, kernel_size=3, padding=1)
)
conv_dimension = 40 * (image_size // 80) * 40 * (image_size // 80)
self.conv_projection = nn.Sequential(
nn.Linear(conv_dimension, 80),
nn.ReLU(inplace=True)
)
self.ff_object = FeedForwardResidualBlock(80)
self.scattering = Scattering()
self.attribute_network = nn.Sequential(
nn.Linear(32 * 8, 128),
nn.ReLU(inplace=True),
nn.Linear(128, 8)
)
self.ff_attribute = FeedForwardResidualBlock(80)
self.relation_network = nn.Sequential(
nn.Linear(9, 64),
nn.ReLU(inplace=True),
nn.Linear(64, 32),
nn.ReLU(inplace=True),
nn.Linear(32, 5)
)
self.ff_relation = FeedForwardResidualBlock(5 * 80)
self.score = nn.Linear(5 * 80, 1)
def forward(self, x: torch.Tensor):
batch_size, num_panels, height, width = x.size()
x = x.view(batch_size * num_panels, 1, height, width)
x = self.conv(x)
x = x.view(batch_size, num_panels, 32, -1)
x = self.conv_projection(x)
x = self.ff_object(x)
x = self.scattering(x, num_groups=10)
x = self.attribute_network(x)
x = x.view(batch_size, num_panels, 10 * 8)
x = self.ff_attribute(x)
x = torch.cat([
x[:, :8, :].unsqueeze(dim=1).repeat(1, 8, 1, 1),
x[:, 8:, :].unsqueeze(dim=2)
], dim=2)
x = self.scattering(x, num_groups=80)
x = self.relation_network(x)
x = x.view(batch_size, 8, 80 * 5)
x = self.ff_relation(x)
x = self.score(x).squeeze()
return x
class ConvBnRelu(nn.Module):
def __init__(self, in_channels, out_channels, **kwargs):
super(ConvBnRelu, self).__init__()
self.projection = nn.Sequential(
nn.Conv2d(in_channels, out_channels, **kwargs),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True)
)
def forward(self, x):
return self.projection(x)
class FeedForwardResidualBlock(nn.Module):
def __init__(self, dim, expansion_multiplier=1):
super(FeedForwardResidualBlock, self).__init__()
self.projection = nn.Sequential(
nn.Linear(dim, dim * expansion_multiplier),
nn.ReLU(inplace=True),
nn.LayerNorm(dim * expansion_multiplier),
nn.Linear(dim * expansion_multiplier, dim)
)
def forward(self, x: torch.Tensor):
return x + self.projection(x)
class Scattering(nn.Module):
def forward(self, x, num_groups):
"""
:param x: a Tensor with rank >= 3 and last dimension divisible by number of groups
:param num_groups: number of groups
"""
shape = x.shape[:-1] + (num_groups,) + (x.shape[-1] // num_groups,)
x = x.view(shape)
x = x.transpose(-3, -2).contiguous()
return x.flatten(start_dim=-2)
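# Minimal usage sketch (illustrative only, not part of the original module):
# score a batch of 2 RPM problems, each with 8 context panels followed by
# 8 answer candidates of size 160x160 (panel ordering assumed from forward()).
if __name__ == "__main__":
    model = ScatteringCompositionalLearner(image_size=160)
    panels = torch.randn(2, 16, 160, 160)
    scores = model(panels)  # one score per answer candidate -> shape (2, 8)
    print(scores.shape)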
/scattering_compositional_learner-0.1.0-py3-none-any.whl/scattering_compositional_learner/scl.py
from typing import List, IO, Tuple
import numpy as np
import math
from sgt import _cpolarize
def make_rotation_matrix(alpha_deg: float, beta_deg: float, gamma_deg: float) -> np.ndarray:
"""Makes a rotation matrix that defines orientation of the detector.
The rotation matrix is calculated based on the given Euler angles.
The Euler angles here is the x-y-x type, that is,
1. The initial axes :math:`(x, y, z)` are rotated by `alpha_deg` around the :math:`x` axis.
2. The resultant axes :math:`(x', y', z')` are rotated by `beta_deg` around the :math:`y'` axis.
3. The resultant axes :math:`(x'', y'', z'')` are rotated by `gamma_deg` around the :math:`x''` axis.
Args:
alpha_deg: Rotation around the :math:`x` axis, in degrees.
beta_deg: Rotation around the :math:`y'` axis, in degrees.
gamma_deg: Rotation around the :math:`x''` axis, in degrees.
Returns:
A 3x3 numpy array.
Example:
>>> R = make_rotation_matrix(10.0, 10.0, 10.0)
>>> vec = np.array([1.0, 2.0, 3.0])
>>> rotated = np.matmul(R, vec)
"""
r1: float = math.radians(alpha_deg)
r2: float = math.radians(beta_deg)
r3: float = math.radians(gamma_deg)
c1: float = math.cos(r1)
c2: float = math.cos(r2)
c3: float = math.cos(r3)
s1: float = math.sin(r1)
s2: float = math.sin(r2)
s3: float = math.sin(r3)
return np.array([[c2, s2*s3, c3*s2],
[s1*s2, c1*c3-c2*s1*s3, -c1*s3-c2*c3*s1],
[-c1*s2, c3*s1+c1*c2*s3, c1*c2*c3-s1*s3]]) # xyx
def make_pixel_coords_in_detector_system(
hor_px_num: int, ver_px_num: int,
px_width: float, px_height: float,
center_coord_hor: float, center_coord_ver: float
) -> Tuple[np.ndarray, np.ndarray]:
"""Makes the matrices of coordinates of each pixel on the 2D detector coordinate system.
The detector coordinate system is a 2D Cartesian coordinate system
whose origin is at the image center (= where the direct beam hits the detector plane).
Two axes, denoted as u and v, are defined to be parallel to the horizontal and vertical
edges of the detector, respectively.
Args:
hor_px_num: Number of pixels along the horizontal axis.
ver_px_num: Number of pixels along the vertical axis.
px_width: Size of a single pixel along the horizontal axis.
        px_height: Size of a single pixel along the vertical axis.
center_coord_hor: Horizontal coordinate of the image center
measured from the center of pixel at index `[0,0]`.
center_coord_ver: Vertical coordinate of the image center.
Returns:
2D numpy arrays of the horizontal and vertical coordinates.
Example:
>>> u, v = make_pixel_coords_in_detector_system(1475, 1679, 0.172, 0.172, 132.35, 134.39)
"""
# u coordinates = (index of the pixel along u axis)*(pixel width) - (u coord at the center)
u: np.ndarray = np.arange(hor_px_num).astype(float)*px_width - center_coord_hor
# v coordinates = (index of the pixel along v axis)*(pixel width) - (v coord at the center)
v: np.ndarray = np.arange(ver_px_num).astype(float)*px_height - center_coord_ver
# matrix of u coordinates & v coordinates
uu: np.ndarray = np.array([])
vv: np.ndarray = np.array([])
uu, vv = np.meshgrid(u, v, indexing="xy")
return uu, vv
def make_default_mask(hor_px_num: int, ver_px_num: int) -> np.ndarray:
"""Makes a default mask array.
A mask array is a 2D array of the same shape as the scattering image
but of `numpy.uint8` type. Pixels to be masked are assigned with 1
and unmasked pixels are assigned with zero.
Args:
hor_px_num: Number of pixels along the horizontal edge.
        ver_px_num: Number of pixels along the vertical edge.
Returns:
A 2D numpy array of the dtype `numpy.uint8`.
"""
return np.zeros((ver_px_num, hor_px_num), dtype=np.uint8)
def make_default_array(hor_px_num: int, ver_px_num: int) -> np.ndarray:
"""Makes a default float array.
Args:
hor_px_num: Number of pixels along the horizontal edge.
        ver_px_num: Number of pixels along the vertical edge.
Returns:
A 2D numpy array of the float type.
"""
return np.zeros((ver_px_num, hor_px_num), dtype=float)
def make_basis_vectors_on_detector_in_lab_system(rotation_matrix: np.ndarray) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
"""Makes the basis vectors of the detector coordinate system,
expressed in the lab coordinate system.
For the definition of the detector coordinate system,
refer to :py:func:`make_pixel_coords_in_detector_system`.
The basis vectors of the detector coordinate system
is basically the basis vectors of the lab coordinate system
being rotated by the rotation matrix that defines
the detector orientation.
The input rotation matrix can be created by
:py:func:`make_rotation_matrix`.
(but actually can be any SO(3) matrix)
Args:
rotation_matrix: A 3x3 numpy array representing
the detector orientation.
Returns:
Three numpy arrays `a`, `b`, and `n` representing the basis vectors.
`a` and `b` are the basis vector along the horizontal and vertical
edge of the detector, respectively, and `n` is the one
perpendicular to the detector plane.
"""
# basis vectors on the detector plane, expressed in the lab coordinate system
a: np.ndarray = np.matmul(rotation_matrix, np.array([1.0, 0.0, 0.0])) # in-plane basis vector of the detector plane
b: np.ndarray = np.matmul(rotation_matrix, np.array([0.0, 1.0, 0.0])) # in-plane basis vector of the detector plane
n: np.ndarray = np.matmul(rotation_matrix, np.array([0.0, 0.0, 1.0])) # plane normal
return a, b, n
def make_pixel_coords_in_lab_system(
xcoords_det: np.ndarray, ycoords_det: np.ndarray,
a: np.ndarray, b: np.ndarray, n: np.ndarray, sdd: float
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
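    """Converts pixel coordinates from the detector system to the lab system.
    The lab-frame position of each pixel is u*a + v*b + sdd*ez, where (u, v)
    are the detector-plane coordinates and a, b are the in-plane basis
    vectors. (Docstring added for clarity; it describes the code below.)
    """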
x: np.ndarray = xcoords_det*a[0] + ycoords_det*b[0]
y: np.ndarray = xcoords_det*a[1] + ycoords_det*b[1]
z: np.ndarray = xcoords_det*a[2] + ycoords_det*b[2] + sdd
return x, y, z
def calc_shortest_dist_to_detector(a: np.ndarray, b: np.ndarray, n: np.ndarray, sdd: float) -> float:
"""Computes the shortest distance from the sample to the detector plane.
Let P be the point on the detector such that OP is the shortest distance
between the origin and the detector plane.
The vector OP must be perpendicular to the detector plane,
so the vector :math:`\\vec{\\mathrm{OP}}` is proportional
to the detector plane normal vector :math:`\\vec{n}`.
That is,
.. math::
\\vec{\\mathrm{OP}} = \\mathrm{OP} \\vec{n}
The vector OP can also be expressed as
.. math::
\\vec{\\mathrm{OP}} = u\\vec{a} + v\\vec{b} + L\\vec{e}_z
where :math:`(u, v)` is the coordinate of point P
on the detector coordinate system
and vector :math:`\\vec{e}_z` is the z basis vector.
:math:`L` is the sample-to-detector distance.
Equating the two expressions,
.. math::
k\\vec{n} = u\\vec{a} + v\\vec{b} + L\\vec{e}_z
which reads
.. math::
u a_x + v b_x - \\mathrm{OP} n_x &= 0 \\\\
u a_y + v b_y - \\mathrm{OP} n_y &= 0 \\\\
u a_z + v b_z - \\mathrm{OP} n_z &= -L
By defining a matrix
.. math::
\\mathbf{M} =
\\begin{pmatrix}
a_x & b_x & -n_x \\\\
a_y & b_y & -n_y \\\\
a_z & b_z & -n_z \\\\
\\end{pmatrix}
the equations are simplified to
.. math::
\\mathbf{M} \\vec{s} &= -L \\vec{e}_z \\\\
\\vec{s} &= -L \\mathbf{M}^{-1}\\vec{e}_z
where :math:`\\vec{s} = (u, v, \\mathrm{OP})`.
This method computes :math:`\\vec{s}` using the above equation
and returns its third component, :math:`\\mathrm{OP}`.
Args:
a: basis vector of the detector coordinate system
along the horizontal edge of the detector.
b: basis vector of the detector coordinate system
along the vertical edge of the detector.
n: basis vector of the detector coordinate system
along the plane normal of the detector.
sdd: sample-to-detector distance.
Returns:
The distance as a float.
Note:
The input vectors should be expressed
in the lab coordinate system.
Use :py:func:`make_basis_vectors_on_detector_in_lab_system`
to generate the basis vectors.
"""
M: np.ndarray = np.array([[a[0], b[0], -n[0]],
[a[1], b[1], -n[1]],
[a[2], b[2], -n[2]]])
ez: np.ndarray = np.array([0.0, 0.0, 1.0])
s: np.ndarray = -sdd*np.matmul(np.linalg.inv(M), ez)
return s[2]
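# A minimal sanity check of the derivation above (illustrative sketch): for an
# untilted detector the rotation matrix is the identity, so a = (1, 0, 0),
# b = (0, 1, 0), n = (0, 0, 1), and the shortest distance equals sdd itself.
#
#     a, b, n = make_basis_vectors_on_detector_in_lab_system(np.eye(3))
#     assert np.isclose(calc_shortest_dist_to_detector(a, b, n, sdd=1.5), 1.5)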
def make_solid_angle_coverage_correction_factors(
x: np.ndarray, y: np.ndarray, z: np.ndarray, shortest_dist_to_detector: float
) -> np.ndarray:
"""Makes an array with the correction factor for solid angle coverage of each pixel.
Based on Equation 28 in Pauw, J. Phys.: Condens. Matter 25, 383201 (2013).
DOI: 10.1088/0953-8984/25/38/383201.
Args:
x: x coordinates of pixels in the lab system.
y: y coordinates of pixels in the lab system.
z: z coordinates of pixels in the lab system.
shortest_dist_to_detector: shortest distance from the sample to the detector plane.
see :py:func:`calc_shortest_dist_to_detector`.
Returns:
A 2D array of correction factors for each pixel.
The correction factor is normalized at the beam center.
The correction is applied by multiplying the intensity array by this array.
"""
# Lp = (x^2 + y^2 + z^2)^(1/2)
# Lp^3 = (x^2 + y^2 + z^2)^(3/2)
Lp3: np.ndarray = np.power(x*x + y*y + z*z, 3.0/2.0)
return Lp3/np.power(shortest_dist_to_detector, 3.0)
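# Quick check (illustrative sketch): for the pixel whose sample-to-pixel distance Lp
# equals the shortest distance to the detector, the factor is exactly 1; elsewhere it
# grows as (Lp / shortest_dist_to_detector)^3.
#
#     f = make_solid_angle_coverage_correction_factors(
#         np.array([[0.0]]), np.array([[0.0]]), np.array([[1.5]]),
#         shortest_dist_to_detector=1.5)
#     assert np.isclose(f[0, 0], 1.0)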
def make_q(x: np.ndarray, y: np.ndarray, z: np.ndarray, wavelength: float) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
"""Computes q vector.
By definition,
.. math::
\\vec{q} = \\dfrac{2 \\pi}{\\lambda}(\\vec{e}_\\mathrm{s} - \\vec{e}_\\mathrm{i})
where :math:`\\lambda` is the wavelength,
:math:`\\vec{e}_\\mathrm{s}` is the basis vector along the scattered ray,
and :math:`\\vec{e}_\\mathrm{i}` is the basis vector along the incident ray.
Here, :math:`\\vec{e}_\\mathrm{i}` is fixed to (0, 0, 1).
Since the sample is placed at the origin,
.. math::
\\vec{e}_\\mathrm{s} = \\dfrac{\\vec{r}}{|\\vec{r}|}
where :math:`\\vec{r}` is the coordinate of the pixel in the lab system.
Args:
x: x coordinates of pixels in the lab system.
y: y coordinates of pixels in the lab system.
z: z coordinates of pixels in the lab system.
wavelength: wavelength of the incident beam.
Returns:
Three 2D arrays representing x, y, and z components of the q vector.
"""
ei_z: float = 1.0
pre: float = 2.0*np.pi/wavelength
Lp: np.ndarray = np.sqrt(x*x + y*y + z*z)
es_x: np.ndarray = x/Lp
es_y: np.ndarray = y/Lp
es_z: np.ndarray = z/Lp
qx: np.ndarray = pre * es_x
qy: np.ndarray = pre * es_y
qz: np.ndarray = pre * (es_z - ei_z)
return qx, qy, qz
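# Quick check of the q-vector formula (illustrative sketch): for a pixel on the
# direct beam, r = (0, 0, L), hence e_s = e_i = (0, 0, 1) and q vanishes.
#
#     qx, qy, qz = make_q(np.array([[0.0]]), np.array([[0.0]]), np.array([[1.0]]),
#                         wavelength=1.0)
#     assert np.allclose([qx, qy, qz], 0.0)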
def calc_polar_map(
qx: np.ndarray, qy: np.ndarray, qz: np.ndarray,
mask: np.ndarray,
qmin: float, qmax: float,
N_q: int, N_azi: int) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
"""Calculates mapping to the polar coordinate system.
Args:
qx: x component of q vector.
qy: y component of q vector.
qz: z component of q vector.
mask: mask array.
qmin: lower boundary of q.
qmax: upper boundary of q.
N_q: number of bins along the q axis.
N_azi: number of bins along the azimuthal axis.
360 deg is divided into `N_azi` sections.
Returns:
Five numpy arrays: `map_q` and `map_azi` (per-pixel bin indices), `density` (number of pixels per bin), and `ax_q` and `ax_azi` (bin axes).
"""
assert qx.dtype == np.float64
assert qy.dtype == np.float64
assert qz.dtype == np.float64
assert mask.dtype == np.uint8
return _cpolarize.calc_polar_map(qx, qy, qz, mask, qmin, qmax, N_q, N_azi)
def circular_average(
i: np.ndarray, e: np.ndarray,
map_q: np.ndarray, map_azi: np.ndarray,
density: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
"""Performs circular (azimuthal) averaging of the intensity `i` and error `e`
arrays using the polar mapping produced by :py:func:`calc_polar_map`."""
assert i.dtype == np.float64
assert e.dtype == np.float64
assert map_q.dtype == np.int64
assert map_azi.dtype == np.int64
assert density.dtype == np.int64
return _cpolarize.circular_average(i, e, map_q, map_azi, density)
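# Typical reduction pipeline built from the functions above (illustrative sketch; the
# exact contents of the returned arrays are defined by the compiled _cpolarize module):
#
#     qx, qy, qz = make_q(x, y, z, wavelength)
#     map_q, map_azi, density, ax_q, ax_azi = calc_polar_map(
#         qx, qy, qz, mask, qmin=0.0, qmax=3.0, N_q=200, N_azi=36)
#     intensity_1d, error_1d = circular_average(intensity, error, map_q, map_azi, density)
#
# Here `intensity` and `error` are float64 arrays of the detector shape and `mask` is
# a `numpy.uint8` mask array of the same shape.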
if __name__ == "__main__":
pass
|
/scattering-geometry-tools-0.1.3.tar.gz/scattering-geometry-tools-0.1.3/src/sgt/core.py
| 0.977371 | 0.909345 |
core.py
|
pypi
|
import torch
from torch import nn
import torch.nn.functional as F
# helper functions
def default(val, default_val):
return val if val is not None else default_val
def expand_dim(t, dim, k):
t = t.unsqueeze(dim)
expand_shape = [-1] * len(t.shape)
expand_shape[dim] = k
return t.expand(*expand_shape)
# simple MLP with ReLU activation
class MLP(nn.Module):
def __init__(self, *dims, activation = None):
super().__init__()
assert len(dims) > 2, 'must have at least 3 dimensions, for dimension in, hidden dimension, and dimension out'
activation = default(activation, nn.ReLU)
layers = []
pairs = list(zip(dims[:-1], dims[1:]))
for ind, (dim_in, dim_out) in enumerate(pairs):
is_last = ind >= (len(pairs) - 1)
layers.append(nn.Linear(dim_in, dim_out))
if not is_last:
layers.append(activation())
self.net = nn.Sequential(*layers)
def forward(self, x):
return self.net(x)
# the feedforward residual block mentioned in the paper
# used after extracting the visual features, as well as post-extraction of attribute information
class FeedForwardResidual(nn.Module):
def __init__(self, dim, mult = 4):
super().__init__()
self.net = nn.Sequential(
nn.Linear(dim, dim * mult),
nn.LayerNorm(dim * mult),
nn.ReLU(inplace = True),
nn.Linear(dim * mult, dim)
)
def forward(self, x):
return x + self.net(x)
# convolutional net
# todo, make customizable and add Evonorm for batch independent normalization
class ConvNet(nn.Module):
def __init__(self, image_size, chans, output_dim):
super().__init__()
num_conv_layers = len(chans) - 1
conv_output_size = image_size // (2 ** num_conv_layers)
convolutions = []
channel_pairs = list(zip(chans[:-1], chans[1:]))
for ind, (chan_in, chan_out) in enumerate(channel_pairs):
is_last = ind >= (len(channel_pairs) - 1)
convolutions.append(nn.Conv2d(chan_in, chan_out, 3, padding=1, stride=2))
if not is_last:
convolutions.append(nn.BatchNorm2d(chan_out))
self.net = nn.Sequential(
*convolutions,
nn.Flatten(1),
nn.Linear(chans[-1] * (conv_output_size ** 2), output_dim),
nn.ReLU(inplace=True),
FeedForwardResidual(output_dim)
)
def forward(self, x):
return self.net(x)
# scattering transform
class ScatteringTransform(nn.Module):
def __init__(self, dims, heads, activation = None):
super().__init__()
assert len(dims) > 2, 'must have at least 3 dimensions, for dimension in, the hidden dimension, and dimension out'
dim_in, *hidden_sizes, dim_out = dims
dim_in //= heads
dim_out //= heads
self.heads = heads
self.mlp = MLP(dim_in, *hidden_sizes, dim_out, activation = activation)
def forward(self, x):
shape, heads = x.shape, self.heads
dim = shape[-1]
assert (dim % heads) == 0, f'the dimension {dim} must be divisible by the number of heads {heads}'
x = x.reshape(-1, heads, dim // heads)
x = self.mlp(x)
return x.reshape(shape)
# main scattering compositional learner class
class SCL(nn.Module):
def __init__(
self,
image_size = 160,
set_size = 9,
conv_channels = [1, 16, 16, 32, 32, 32],
conv_output_dim = 80,
attr_heads = 10,
attr_net_hidden_dims = [128],
rel_heads = 80,
rel_net_hidden_dims = [64, 23, 5]):
super().__init__()
self.vision = ConvNet(image_size, conv_channels, conv_output_dim)
self.attr_heads = attr_heads
self.attr_net = ScatteringTransform([conv_output_dim, *attr_net_hidden_dims, conv_output_dim], heads = attr_heads)
self.ff_residual = FeedForwardResidual(conv_output_dim)
self.rel_heads = rel_heads
self.rel_net = MLP(set_size * (conv_output_dim // rel_heads), *rel_net_hidden_dims)
self.to_logit = nn.Linear(rel_net_hidden_dims[-1] * rel_heads, 1)
def forward(self, sets):
b, m, n, c, h, w = sets.shape
images = sets.view(-1, c, h, w)
features = self.vision(images)
attrs = self.attr_net(features)
attrs = self.ff_residual(attrs)
attrs = attrs.reshape(b, m, n, self.rel_heads, -1).transpose(-2, -3).flatten(3)
rels = self.rel_net(attrs)
rels = rels.flatten(2)
logits = self.to_logit(rels).flatten(1)
return logits
# wrapper for easier training
class SCLTrainingWrapper(nn.Module):
def __init__(self, scl):
super().__init__()
self.scl = scl
def forward(self, questions, answers):
answers = answers.unsqueeze(2)
questions = expand_dim(questions, dim=1, k=8)
permutations = torch.cat((questions, answers), dim=2)
return self.scl(permutations)
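# Usage sketch (inferred from the tensor shapes above, not from upstream documentation):
# with the default settings the wrapper expects 8 context panels and 8 candidate answers
# per sample, each a 1x160x160 image, and returns one logit per candidate.
#
#     model = SCL(image_size=160, set_size=9)
#     wrapper = SCLTrainingWrapper(model)
#     questions = torch.randn(2, 8, 1, 160, 160)  # (batch, context panels, C, H, W)
#     answers = torch.randn(2, 8, 1, 160, 160)    # (batch, candidates, C, H, W)
#     logits = wrapper(questions, answers)        # shape (2, 8)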
|
/scattering_transform-0.0.7-py3-none-any.whl/scattering_transform/scattering_transform.py
| 0.867471 | 0.614683 |
scattering_transform.py
|
pypi
|
import inspect
import logging
import sys
from importlib.machinery import PathFinder, SourceFileLoader, ModuleSpec
from typing import Optional, List, Dict
from scavenger.internal.invocation_registry import InvocationRegistry
from scavenger.internal.util import md5, filter_by_exclude_packages
class Finder(PathFinder):
invocation_registry: InvocationRegistry
packages: List[str]
exclude_packages: List[str]
def __init__(self, packages: List[str], exclude_packages: List[str], decorators: List[str], exclude_init: bool,
invocation_registry: InvocationRegistry):
self.packages = packages
self.exclude_packages = exclude_packages
self.decorators = decorators
self.exclude_init = exclude_init
self.invocation_registry = invocation_registry
def find_spec(self, fullname, path=None, target=None) -> Optional[ModuleSpec]:
try:
spec = super().find_spec(fullname, path, target)
if spec and spec.loader and isinstance(spec.loader, SourceFileLoader) and self.patch_required(fullname):
loader = ScavengerSourceFileLoader(fullname, spec.origin, self.invocation_registry, self.decorators, self.exclude_init)
spec.loader = loader
return spec
except Exception as e:
logging.warning("Creating a custom loader failed: %s", e)
def patch_required(self, fullname) -> bool:
if filter_by_exclude_packages(fullname, self.exclude_packages):
return False
return 0 < sum(1 for module_name in self.packages if
fullname.startswith(module_name))
@staticmethod
def filter_by_exclude_packages(target, exclude_packages):
for exclude_package in exclude_packages:
if target.startswith(exclude_package):
return True
class ScavengerSourceFileLoader(SourceFileLoader):
invocation_registry: InvocationRegistry
def __init__(self, fullname, origin, invocation_registry, decorators, exclude_init):
super().__init__(fullname, origin)
self.invocation_registry = invocation_registry
self.decorators = decorators
self.exclude_init = exclude_init
def exec_module(self, module):
super().exec_module(module)
try:
self.patch_recursively(module)
except Exception as e:
logging.warning("Scavenger function patching failed: %s", e)
def patch_recursively(self, obj):
for key, value in inspect.getmembers(obj):
if inspect.isfunction(value) or inspect.ismethod(value):
if not self.is_target_module(value):
continue
if isinstance(inspect.getattr_static(obj, key), classmethod):
value = value.__func__
if self.exclude_init and key == '__init__':
continue
if self.decorators and not self.has_decorator(self.get_decorators(value)):
continue
signature = f"{value.__module__}.{value.__qualname__}{inspect.signature(value)}"
if isinstance(inspect.getattr_static(obj, key), classmethod):
setattr(obj, key, classmethod(self.patch_class_method(value, signature, self.invocation_registry)))
elif isinstance(inspect.getattr_static(obj, key), staticmethod):
setattr(obj, key, staticmethod(self.patch(value, signature, self.invocation_registry)))
else:
setattr(obj, key, self.patch(value, signature, self.invocation_registry))
elif inspect.isclass(value) and key != "__class__":
if not self.is_target_module(value):
continue
self.patch_recursively(value)
def is_target_module(self, child: type) -> bool:
return child.__module__.startswith(self.name)
def has_decorator(self, decorators):
return sum(1 for decorator in decorators if decorator in self.decorators) >= 1
@staticmethod
def patch_class_method(function, signature, invocation_registry):
def wrapper(cls, *args, **kwargs):
invocation_registry.register(md5(signature))
return function(cls, *args, **kwargs)
return wrapper
@staticmethod
def patch(function, signature, invocation_registry):
def wrapper(*args, **kwargs):
invocation_registry.register(md5(signature))
return function(*args, **kwargs)
return wrapper
@staticmethod
def get_decorators(func):
try:
source = inspect.getsource(func)
except TypeError:
logging.warning(f"Function inspection error: {func}")
return []
index = source.find("def ")
return [
line.strip().split()[0].split("(")[0]
for line in source[:index].strip().splitlines()
if line.strip()[0] == "@"
]
class Patcher:
_finder: Optional[Finder]
packages: List[str]
exclude_packages: List[str]
store: Dict[str, int]
def __init__(self, packages: List[str], exclude_packages: List[str], decorators: List[str], exclude_init: bool,
invocation_registry: InvocationRegistry):
self._finder = None
self.packages = packages
self.exclude_packages = exclude_packages
self.decorators = decorators
self.exclude_init = exclude_init
self.invocation_registry = invocation_registry
def patch(self):
finder = Finder(self.packages, self.exclude_packages, self.decorators, self.exclude_init, self.invocation_registry)
sys.meta_path.insert(0, finder)
self._finder = finder
def unpatch(self):
sys.meta_path.remove(self._finder)
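# Usage sketch (hypothetical wiring; "myapp" and the InvocationRegistry constructor
# arguments are assumptions, not part of this module):
#
#     registry = InvocationRegistry()
#     patcher = Patcher(packages=["myapp"], exclude_packages=["myapp.tests"],
#                       decorators=[], exclude_init=False, invocation_registry=registry)
#     patcher.patch()    # install the import hook
#     import myapp       # modules imported from now on are instrumented
#     patcher.unpatch()  # remove the import hook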
|
/scavenger_agent_python-0.1.2-py3-none-any.whl/scavenger/internal/patch.py
| 0.552057 | 0.177169 |
patch.py
|
pypi
|
import ast
import logging
import os
import time
from _ast import FunctionDef, Module, Call, Attribute, Name
from dataclasses import dataclass
from pathlib import Path
from typing import List
from scavenger.internal.model import Codebase, Function
from scavenger.internal.util import remove_suffix, filter_by_exclude_packages, remove_prefix
logger = logging.getLogger(__name__)
@dataclass
class PyFile:
codebase_path: Path
relative_path: Path
class CodeBaseScanner:
def __init__(self, codebase: List[str], packages: List[str], exclude_packages: List[str], decorators: List[str], exclude_init: bool):
self.codebase_path_list = [Path(codebase_path) for codebase_path in codebase]
self.packages = packages
self.exclude_packages = exclude_packages
self.decorators = decorators
self.exclude_init = exclude_init
def scan(self) -> Codebase:
logger.info("Codebase scanning is starting.")
start: int = time.perf_counter_ns()
functions: List[Function] = []
for py_file in self.find_all_py_files():
with open(py_file.codebase_path.joinpath(py_file.relative_path), "r") as r:
root: Module = ast.parse(r.read())
package: str = remove_suffix(str(py_file.relative_path).replace(os.sep, "."), ".py")
function_nodes: List[FunctionDef] = self.get_all_functions_from_tree(root)
for functionDef in function_nodes:
functions.append(self.function_def_to_function(functionDef, package))
logger.info(
f"Codebase scanning is done. Found {len(functions)} functions. {int((time.perf_counter_ns() - start) / 1_000_000)}ms elapsed")
return Codebase(functions=functions)
def get_all_functions_from_tree(self, root: Module) -> List[FunctionDef]:
function_nodes: List[FunctionDef] = []
for node in ast.walk(root):
if isinstance(node, ast.ClassDef):
for child in ast.iter_child_nodes(node):
child._parent = remove_prefix(f"{getattr(node, '_parent', '')}.{node.name}", ".")
elif isinstance(node, ast.FunctionDef):
if self.exclude_init and node.name == '__init__':
continue
if self.decorators and not self.has_decorator(self.get_decorators(node)):
continue
function_nodes.append(node)
return function_nodes
def has_decorator(self, decorators):
return sum(1 for decorator in decorators if decorator in self.decorators) >= 1
@staticmethod
def get_decorators(function_def: ast.FunctionDef):
decorators = []
for decorator in function_def.decorator_list:
decorators.append(f"@{get_decorator_from_node(decorator)}")
return decorators
@staticmethod
def function_def_to_function(function: FunctionDef, package: str) -> Function:
parameter_types: List[str] = [arg.arg for arg in function.args.args]
if function.args.vararg is not None:
parameter_types.append(f"*{function.args.vararg.arg}")
if function.args.kwarg is not None:
parameter_types.append(f"**{function.args.kwarg.arg}")
parameter_types_str = ", ".join(parameter_types)
declaring_type = remove_suffix(f"{package}.{getattr(function, '_parent', '')}", ".")
return Function(
declaring_type=declaring_type,
name=function.name,
parameter_types=parameter_types_str,
signature=f"{declaring_type}.{function.name}({parameter_types_str})",
package_name=package
)
def find_all_py_files(self) -> List[PyFile]:
py_files: List[PyFile] = []
for codebase_path in self.codebase_path_list:
exclude_packages = [str(codebase_path.joinpath(exclude_package.replace(".", os.sep))) for exclude_package in
self.exclude_packages]
for package in self.packages:
py_files += self.find_files_in_package(codebase_path, exclude_packages, package)
return py_files
@staticmethod
def find_files_in_package(codebase_path, exclude_packages, package):
py_files = []
pattern: str = os.path.join(package.replace(".", os.sep), "**", "*.py")
for absolute_path in codebase_path.glob(pattern):
if not filter_by_exclude_packages(str(absolute_path), exclude_packages):
py_files.append(PyFile(codebase_path, absolute_path.relative_to(codebase_path)))
return py_files
def get_decorator_from_node(node):
if isinstance(node, Name):
return node.id
elif isinstance(node, Call):
return f"{get_decorator_from_node(node.func)}"
elif isinstance(node, Attribute):
return f"{get_decorator_from_node(node.value)}.{node.attr}"
else:
logger.warning(f"Unknown decorator type: {node}")
raise TypeError(f"Unknown decorator type: {node}")
|
/scavenger_agent_python-0.1.2-py3-none-any.whl/scavenger/internal/scan.py
| 0.693992 | 0.194521 |
scan.py
|
pypi
|
import hashlib
import math
import time
from dataclasses import dataclass
from typing import List
from scavenger.config import Config
from scavenger.internal.util import md5
from scavenger.model.CodeBasePublication_pb2 import CodeBasePublication
@dataclass
class Function:
name: str
declaring_type: str
signature: str
parameter_types: str
package_name: str
def to_codebase_entry(self) -> CodeBasePublication.CodeBaseEntry:
return CodeBasePublication.CodeBaseEntry(
declaring_type=self.declaring_type,
method_name=self.name,
modifiers="public",
package_name=self.package_name,
parameter_types=self.parameter_types,
signature=self.signature,
signature_hash=md5(self.signature),
visibility="public"
)
@dataclass
class Codebase:
functions: List[Function]
def get_fingerprint(self, config: Config, sort: bool = False):
m = hashlib.sha256()
m.update(bytes(str(config.codebase), 'utf-8'))
m.update(bytes(str(config.packages), 'utf-8'))
m.update(bytes(str(config.exclude_packages), 'utf-8'))
m.update(bytes(str(config.exclude_init), 'utf-8'))
m.update(len(self.functions).to_bytes(8, 'big'))  # a single byte would overflow for codebases with more than 255 functions
functions = sorted(self.functions, key=lambda x: x.name) if sort else self.functions
for function in functions:
m.update(bytes(function.signature, 'utf-8'))
return m.hexdigest()
class SchedulerState:
name: str
interval_seconds: int
retry_interval_seconds: int
retry_interval_factor: int
num_failures: int
next_event_at_seconds: float
clock: int
def __init__(self, name):
self.name = name
def initialize(self, interval_seconds, retry_interval_seconds):
self.interval_seconds = interval_seconds
self.retry_interval_seconds = retry_interval_seconds
self.next_event_at_seconds = 0
self.reset_retry_counter()
return self
def reset_retry_counter(self):
self.num_failures = 0
self.retry_interval_factor = 1
def update_intervals(self, interval_seconds, retry_interval_seconds):
if self.next_event_at_seconds != 0:
if interval_seconds < self.interval_seconds and self.retry_interval_factor == 1:
self.next_event_at_seconds = time.time() + interval_seconds
elif retry_interval_seconds < self.retry_interval_seconds and self.retry_interval_factor > 1:
self.next_event_at_seconds = time.time() + retry_interval_seconds * self.retry_interval_factor
self.interval_seconds = interval_seconds
self.retry_interval_seconds = retry_interval_seconds
def schedule_next(self):
self.next_event_at_seconds = time.time() + self.interval_seconds
self.reset_retry_counter()
def schedule_now(self):
self.next_event_at_seconds = 0
def schedule_retry(self):
back_off_limit = 5
if self.num_failures < back_off_limit:
self.retry_interval_factor = 1
else:
self.retry_interval_factor = int(math.pow(2, min(self.num_failures - back_off_limit + 1, 4)))
self.next_event_at_seconds = time.time() + self.retry_interval_seconds * self.retry_interval_factor
self.num_failures += 1
def is_due_time(self):
return time.time() >= self.next_event_at_seconds
|
/scavenger_agent_python-0.1.2-py3-none-any.whl/scavenger/internal/model.py
| 0.833019 | 0.169509 |
model.py
|
pypi
|
from typing import List, Dict
import pandas as pd
import re
def _get_content_row(bl_sdf: pd.DataFrame, t1_sdf: pd.DataFrame, t2_sdf: pd.DataFrame, profile_name: str, profile_group: str,
bl_total_cust: int, t1_total_cust: int, t2_total_cust: int):
# sum abs number for each label value
bl_abs = bl_sdf['no_of_cust'][bl_sdf[profile_name] == profile_group].sum()
t1_abs = t1_sdf['no_of_cust'][t1_sdf[profile_name] == profile_group].sum()
t2_abs = t2_sdf['no_of_cust'][t2_sdf[profile_name] == profile_group].sum()
bl_perc = 0.0
t1_perc = 0.0
t2_perc = 0.0
# calculate penetration
if bl_abs is not None:
bl_perc = round((bl_abs / bl_total_cust) * 100, 2)
if t1_abs is not None:
t1_perc = round((t1_abs / t1_total_cust) * 100, 2)
if t2_abs is not None:
t2_perc = round((t2_abs / t2_total_cust) * 100, 2)
# calculate deviation
t1_dev = (t1_perc - bl_perc)
t2_dev = (t2_perc - bl_perc)
return [profile_group, bl_abs, bl_perc, t1_abs, t1_perc, t1_dev, t2_abs, t2_perc, t2_dev]
def create_profile_df(bl_sdf: pd.DataFrame, t1_sdf: pd.DataFrame, t2_sdf: pd.DataFrame, selected_profiles: List[str], profile_title_map: Dict[str, str]):
column_names = ['label', 'base_abs', 'base_perc', 'target1_abs', 'target1_perc', 'target1_dev', 'target2_abs', 'target2_perc', 'target2_dev']
bl_total_cust = bl_sdf['no_of_cust'].sum()
t1_total_cust = t1_sdf['no_of_cust'].sum()
t2_total_cust = t2_sdf['no_of_cust'].sum()
result_list = []
for profile_name in selected_profiles:
profile_groups = bl_sdf.groupby([profile_name]).groups.keys()
profile_title = profile_title_map[profile_name]
# create title row
result_list.append([profile_title, None, None, None, None, None, None, None, None])
# create content row
for profile_group in profile_groups:
if profile_group != "" and profile_group is not None and str(profile_group) != '0' and str(profile_group) != 'N':
content_row = _get_content_row(bl_sdf, t1_sdf, t2_sdf, profile_name, profile_group, bl_total_cust, t1_total_cust, t2_total_cust)
result_list.append(content_row)
profile_df = pd.DataFrame(result_list, columns=column_names)
return profile_df
def create_product_df(bl_sdf: pd.DataFrame, t1_sdf: pd.DataFrame, t2_sdf: pd.DataFrame, selected_products: List[str], product_title_map: Dict[str, str]):
column_names = ['label', 'base_abs', 'base_perc', 'target1_abs', 'target1_perc', 'target1_dev', 'target2_abs', 'target2_perc', 'target2_dev']
bl_total_cust = bl_sdf['no_of_cust'].sum()
t1_total_cust = t1_sdf['no_of_cust'].sum()
t2_total_cust = t2_sdf['no_of_cust'].sum()
result_list = [["Product Holding", None, None, None, None, None, None, None, None]]
for product_name in selected_products:
product_values = bl_sdf.groupby([product_name]).groups.keys()
for product_value in product_values:
if product_value != "" and product_value is not None and str(product_value) != '0' and str(product_value) != 'N':
content_row = _get_content_row(bl_sdf, t1_sdf, t2_sdf, product_name, product_value, bl_total_cust, t1_total_cust, t2_total_cust)
content_row[0] = f"Have {product_title_map[product_name]}"
result_list.append(content_row)
profile_df = pd.DataFrame(result_list, columns=column_names)
return profile_df
def get_result_df(bl_sdf: pd.DataFrame, t1_sdf: pd.DataFrame, t2_sdf: pd.DataFrame, selected_profiles: List[str], profile_title_map: Dict[str, str],
selected_products: List[str], product_title_map: Dict[str, str]):
profile_df = create_profile_df(bl_sdf, t1_sdf, t2_sdf, selected_profiles, profile_title_map)
product_df = create_product_df(bl_sdf, t1_sdf, t2_sdf, selected_products, product_title_map)
# DataFrame.append was removed in pandas 2.0; pd.concat is the equivalent call
return pd.concat([profile_df, product_df]).reset_index(drop=True)
def convert_result_df_to_data_as_rows(result_df: pd.DataFrame):
data_rows = result_df[["label", "base_perc", "target1_abs", "target1_perc", "target1_dev", "target2_abs", "target2_perc", "target2_dev"]].to_numpy().copy().tolist()
for i, row in enumerate(data_rows):
count_null = 0
for j, value in enumerate(row):
if pd.isnull(value):
count_null += 1
data_rows[i][j] = 0
if j in [4, 7]:
data_rows[i][j] = round(data_rows[i][j])
elif j in [1, 3, 6]:
data_rows[i][j] = str(round(data_rows[i][j])) + " %"
elif j in [2, 5]:
data_rows[i][j] = "{:,}".format(int(data_rows[i][j]))
else:
data_rows[i][j] = re.sub(r'\[*.*\]', '', str(data_rows[i][j])).strip()
# Check if is header row
if count_null > 3:
data_rows[i] = [data_rows[i][0]]
return data_rows
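# Usage sketch (hypothetical inputs: bl_sdf, t1_sdf and t2_sdf are assumed to be pandas
# DataFrames that contain a `no_of_cust` column plus the profile and product columns):
#
#     result_df = get_result_df(
#         bl_sdf, t1_sdf, t2_sdf,
#         selected_profiles=["age_group"], profile_title_map={"age_group": "Age Group"},
#         selected_products=["has_savings"], product_title_map={"has_savings": "Savings Account"},
#     )
#     data_rows = convert_result_df_to_data_as_rows(result_df)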
|
/scb_profile_x-0.0.9-py3-none-any.whl/scb_profile_x/data_preparation.py
| 0.583678 | 0.339609 |
data_preparation.py
|
pypi
|
import logging
import matplotlib.pyplot as plt
import pandas as pd
from pandas import DataFrame, Series
from .stats import save_stats
logger = logging.getLogger(__name__)
base_dir = "."
def plot_rr_overall_winrates(ser_overall_winrate: Series):
plt.figure()
bot_winrates = ser_overall_winrate.sort_values(ascending=False)
ax = bot_winrates.plot(
kind="bar",
figsize=(ser_overall_winrate.shape[0] / 4, 6),
title=f"Win rates of tournament bots",
ylim=(0, 1.1),
color="#8B96D0"
)
ax.set_xlabel("Bot name")
ax.set_ylabel("Win rate")
fig = ax.get_figure()
fig.tight_layout()
for p in ax.patches:
ax.annotate("%.2f" % p.get_height(),
(p.get_x() + p.get_width() / 2., 0.01),
ha='center', va='bottom', xytext=(0, 10), textcoords='offset points',
rotation=90)
fig.savefig(f"{base_dir}/rr_overall_winrates.pdf")
save_stats(rr_overall_winrates=DataFrame(bot_winrates))
def plot_rr_elos(ser_elos: Series):
plt.figure()
ser_elos = ser_elos.sort_values(ascending=False)
ax = ser_elos.plot(
kind="bar",
figsize=(len(ser_elos) / 4, 5),
title=f"Elo ratings of tournament bots",
ylim=(0, ser_elos.max()),
color="#8B96D0"
)
ax.set_xlabel("Bot name")
ax.set_ylabel("Elo rating")
for p, patch_bot in zip(ax.patches, ser_elos.index.tolist()):
ax.annotate("%.2f" % p.get_height(),
(p.get_x() + p.get_width() / 2., 0.01),
ha='center', va='bottom', xytext=(0, 10), textcoords='offset points',
rotation=90)
fig = ax.get_figure()
fig.tight_layout()
fig.savefig(f"{base_dir}/rr_elos.pdf")
save_stats(rr_elos=DataFrame(ser_elos))
def plot_rr_race_winrates(df_race_wintimes):
race_winrates = df_race_wintimes.sum(axis=1) / \
(df_race_wintimes + df_race_wintimes.transpose()).sum(axis=1)
plt.figure()
ax = race_winrates.sort_values(ascending=False).plot(
kind="bar",
figsize=(5, 3),
title=f"Win rates of tournament races",
ylim=(0, 1.1),
color="#8B96D0"
)
ax.set_ylabel("Win rate")
ax.set_xlabel("")
for p in ax.patches:
ax.annotate("%.2f" % p.get_height(),
(p.get_x() + p.get_width() / 2., 0.01),
ha='center', va='bottom', xytext=(0, 10), textcoords='offset points',
rotation=90)
fig = ax.get_figure()
fig.tight_layout()
fig.savefig(f"{base_dir}/rr_race_winrates.pdf")
save_stats(rr_race_winrates=DataFrame(race_winrates))
def plot_rr_race_counts(ser_races: Series):
race_counts = ser_races.groupby(ser_races).count().sort_values(ascending=False)
plt.figure()
ax = race_counts.plot(
kind="bar",
figsize=(5, 3),
title=f"Number of tournament bots that use given race",
ylim=(0, race_counts.max()),
color="#8B96D0"
)
ax.set_xlabel("")
ax.set_ylabel("")
for p in ax.patches:
ax.annotate("%d" % p.get_height(),
(p.get_x() + p.get_width() / 2., 0.01),
ha='center', va='bottom', xytext=(0, 10), textcoords='offset points',
rotation=90)
fig = ax.get_figure()
fig.tight_layout()
fig.savefig(f"{base_dir}/rr_race_counts.pdf")
save_stats(rr_race_counts=DataFrame(race_counts))
def plot_rr_game_times(df_gametimes: DataFrame):
plt.figure()
df2 = pd.DataFrame({col: vals['game_time'] for col, vals in df_gametimes.groupby(["bot"])})
meds = df2.median()
meds.sort_values(ascending=False, inplace=True)
df2 = df2[meds.index]
ax = df2.plot(
kind="box",
figsize=(df2.shape[1] / 4, 7),
rot=90,
grid=True
)
ax.set_title("Real-life time durations of play sorted by median times")
ax.set_xlabel("Bot name")
ax.set_ylabel("Time [sec]")
fig = ax.get_figure()
fig.tight_layout()
fig.savefig(f"{base_dir}/rr_times.pdf")
save_stats(rr_times=df2)
def plot_rr_maps_winrates(ser_maps: Series):
maps = ser_maps.index.get_level_values(level=0)
best_bot_on_map = DataFrame(ser_maps).groupby(maps).apply(
lambda x: pd.Series([x['win_rate'].idxmax()[1], x['win_rate'].max()],
index=["bot", "score"]))
ax = best_bot_on_map.plot(
kind="bar",
figsize=(12, 6),
rot=90,
legend=None,
ylim=(0, 1.1),
color="#8B96D0"
)
ax.set_title("Best-scoring bots on each tournament map scenario")
ax.set_xlabel("Map name")
ax.set_ylabel("Win rate")
for p, bot in zip(ax.patches, best_bot_on_map['bot'].tolist()):
ax.annotate("%.2f %s" % (p.get_height(), bot),
(p.get_x() + p.get_width() / 2., 0.01),
ha='center', va='bottom', xytext=(0, 10), textcoords='offset points',
rotation=90)
fig = ax.get_figure()
fig.tight_layout()
fig.savefig(f"{base_dir}/rr_maps.pdf")
save_stats(rr_maps=best_bot_on_map)
def plot_bot_overall_winrates(bot: str, ser_overall_winrate: Series):
plt.figure()
bot_winrates = ser_overall_winrate.sort_values(ascending=False)
ax = bot_winrates.plot(
kind="bar",
figsize=(ser_overall_winrate.shape[0] / 4, 6),
title=f"Updated win rates after playing '{bot}' with tournament bots",
ylim=(0, 1.1),
color="#8B96D0"
)
ax.set_xlabel("Bot name")
ax.set_ylabel("Win rate")
fig = ax.get_figure()
fig.tight_layout()
for p in ax.patches:
ax.annotate("%.2f" % p.get_height(),
(p.get_x() + p.get_width() / 2., 0.01),
ha='center', va='bottom', xytext=(0, 10), textcoords='offset points',
rotation=90)
pos = bot_winrates.index.get_loc(bot)
ax.patches[pos].set_facecolor('#ff0000')
fig.savefig(f"{base_dir}/bot_overall_winrates.pdf")
save_stats(bot_overall_winrates=DataFrame(bot_winrates))
def plot_bot_rr_winrates(bot: str, df_rr_winrate: DataFrame):
plt.figure()
bots = set(df_rr_winrate.columns)
other_winrates = df_rr_winrate.loc[bot, bots - {bot}].sort_values(ascending=False)
ax = other_winrates.plot(
kind="bar",
figsize=(df_rr_winrate.shape[0] / 4, 5),
title=f"Win rate of bot '{bot}' against each opponent",
ylim=(0, 1.1),
color="#8B96D0"
)
ax.set_xlabel("Bot name")
ax.set_ylabel("Win rate")
for p in ax.patches:
ax.annotate("%.2f" % p.get_height(),
(p.get_x() + p.get_width() / 2., 0.01),
ha='center', va='bottom', xytext=(0, 10), textcoords='offset points',
rotation=90)
fig = ax.get_figure()
fig.tight_layout()
fig.savefig(f"{base_dir}/bot_rr_winrates.pdf")
save_stats(bot_rr_winrates=DataFrame(other_winrates))
def plot_bot_elos(bot: str, ser_elos: Series):
plt.figure()
ser_elos = ser_elos.sort_values(ascending=False)
ax = ser_elos.plot(
kind="bar",
figsize=(len(ser_elos) / 4, 5),
title=f"Updated elo ratings after playing '{bot}' with tournament bots",
ylim=(0, ser_elos.max()),
color="#8B96D0"
)
ax.set_xlabel("Bot name")
ax.set_ylabel("Elo rating")
for p, patch_bot in zip(ax.patches, ser_elos.index.tolist()):
ax.annotate("%.2f" % p.get_height(),
(p.get_x() + p.get_width() / 2., 0.01),
ha='center', va='bottom', xytext=(0, 10), textcoords='offset points',
rotation=90)
pos = ser_elos.index.get_loc(bot)
ax.patches[pos].set_facecolor('#ff0000')
fig = ax.get_figure()
fig.tight_layout()
fig.savefig(f"{base_dir}/bot_elos.pdf")
save_stats(bot_elos=DataFrame(ser_elos))
def plot_bot_race_winrates(bot: str, df_botrace_winrate: DataFrame):
plt.figure()
bot_races = df_botrace_winrate.loc[bot].sort_values(ascending=False)
ax = bot_races.plot(
kind="bar",
figsize=(5, 3),
title=f"Win rate of bot '{bot}' given a race",
ylim=(0, 1.1),
color="#8B96D0"
)
ax.set_ylabel("Win rate")
ax.set_xlabel("")
fig = ax.get_figure()
fig.tight_layout()
for p in ax.patches:
ax.annotate("%.2f" % p.get_height(),
(p.get_x() + p.get_width() / 2., 0.01),
ha='center', va='bottom', xytext=(0, 10), textcoords='offset points',
rotation=90)
fig.savefig(f"{base_dir}/bot_races.pdf")
save_stats(bot_races=DataFrame(bot_races))
def plot_bot_maps_winrates(bot, ser_maps):
plt.figure()
map_results = ser_maps.unstack(level=0).transpose()
bot_maps = map_results.loc[:, bot]
ax = bot_maps.plot(
kind="bar",
figsize=(7, 6),
ylim=(0, 1.1),
title=f"Win rate of bot '{bot}' given a map",
color="#8B96D0")
ax.set_xlabel("Map name")
ax.set_ylabel("Win rate")
fig = ax.get_figure()
fig.tight_layout()
for p in ax.patches:
ax.annotate("%.2f" % p.get_height(),
(p.get_x() + p.get_width() / 2., 0.01),
ha='center', va='bottom', xytext=(0, 10), textcoords='offset points',
rotation=90)
fig.savefig(f"{base_dir}/bot_maps.pdf")
save_stats(bot_maps=DataFrame(bot_maps))
def plot_overall_results(ser_overall_winrate, df_race_wintimes, df_gametimes,
ser_bot_races, ser_maps, ser_elos):
plot_rr_overall_winrates(ser_overall_winrate)
plot_rr_elos(ser_elos)
plot_rr_race_winrates(df_race_wintimes)
plot_rr_race_counts(ser_bot_races)
plot_rr_game_times(df_gametimes)
plot_rr_maps_winrates(ser_maps)
def plot_bot_results(bot: str, df_rr_winrate, ser_overall_winrate, df_botrace_winrate,
ser_maps, ser_elos):
plot_bot_overall_winrates(bot, ser_overall_winrate)
plot_bot_elos(bot, ser_elos)
plot_bot_rr_winrates(bot, df_rr_winrate)
plot_bot_race_winrates(bot, df_botrace_winrate)
plot_bot_maps_winrates(bot, ser_maps)
|
/scbw_mq-0.2.2-py3-none-any.whl/scbw_mq/tournament/benchmark/plots.py
| 0.595257 | 0.297081 |
plots.py
|
pypi
|
import csv
import glob
import json
import logging
from typing import Dict, Optional
import elo
import numpy as np
import pandas as pd
from pandas import DataFrame, Series
from tqdm import tqdm
logger = logging.getLogger(__name__)
elo.WIN = 1.
elo.DRAW = 0.5
elo.LOSS = 0.
elo.K_FACTOR = 20
elo.INITIAL = 2000
elo.BETA = 200
def process_results(result_dir: str) -> DataFrame:
rows = {"game_name": [],
"map": [],
"winner": [],
"winner_race": [],
"loser": [],
"loser_race": [],
"game_time": []}
race = dict(
T="Terran",
Z="Zerg",
P="Protoss",
R="Random"
)
for file in tqdm(glob.glob(f"{result_dir}/*/result.json"), unit="game"):
with open(file, "r") as f:
info = json.load(f)
rows["game_name"].append(info["game_name"])
rows["map"].append(info["map"].replace("sscai/", ""))
rows["winner"].append(info['winner'])
rows["winner_race"].append(race[info['winner_race']])
rows["loser"].append(info['loser'])
rows["loser_race"].append(race[info['loser_race']])
rows["game_time"].append(info["game_time"])
return DataFrame(rows).set_index("game_name")
def calc_stats(df: DataFrame, ser_round_robin_elos: Optional[Series]):
# helper col
df['one'] = 1
bots = set(df['winner']).union(set(df['loser']))
# Win times
df_wintimes = pd.pivot_table(df, index='winner', columns='loser', values='one', aggfunc=np.sum) \
.fillna(0).sort_index()
for missing_bot in bots - set(df_wintimes.columns):
df_wintimes[missing_bot] = 0
for missing_bot in bots - set(df_wintimes.index):
df_wintimes = pd.concat([df_wintimes, Series({bot: 0 for bot in bots}, name=missing_bot).to_frame().T])
df_wintimes = df_wintimes.sort_index(axis=0).sort_index(axis=1)
# Win rate
df_rr_winrate = (df_wintimes / (df_wintimes + df_wintimes.transpose()))
ser_overall_winrate = (
df_wintimes.sum(axis=1) /
(df_wintimes + df_wintimes.transpose()).sum(axis=1)
).sort_values(ascending=False)
# Game times
df_winner = df[['winner', 'winner_race', 'game_time']]
df_winner.columns = ['bot', 'race', 'game_time']
df_loser = df[['loser', 'loser_race', 'game_time']]
df_loser.columns = ['bot', 'race', 'game_time']
df_gametimes: DataFrame = pd.concat((df_winner, df_loser))
# Map winning rates
map_index = pd.MultiIndex.from_product([list(set(df['map'])), list(bots)])
df_map_winners = df.groupby(by=["map", "winner"])['one'].sum().reindex(map_index)
df_map_losers = df.groupby(by=["map", "loser"])['one'].sum().reindex(map_index)
ser_maps = df_map_winners / (df_map_winners + df_map_losers)
ser_maps.name = 'win_rate'
# Race win times
ser_bot_races = df_gametimes.set_index("bot").groupby("bot")['race'].head(1)
df_race_wintimes = pd.pivot_table(df, index='winner_race', columns='loser_race',
values='one', aggfunc=np.sum)
# ... this is probably not needed
races = set(ser_bot_races.unique())
for missing_race in races - set(df_race_wintimes.columns):
df_race_wintimes[missing_race] = 0
for missing_race in races - set(df_race_wintimes.index):
df_race_wintimes = pd.concat(
[df_race_wintimes, Series({race: 0 for race in races}, name=missing_race).to_frame().T])
# Each bot winrates against each race
df_botrace_wintimes = pd.pivot_table(
df, index='winner', columns='loser_race', values='one', aggfunc=np.sum) \
.fillna(0).sort_index()
df_botrace_losetimes = pd.pivot_table(
df, index='loser', columns='winner_race', values='one', aggfunc=np.sum) \
.fillna(0).sort_index()
df_botrace_winrate = df_botrace_wintimes / (df_botrace_wintimes + df_botrace_losetimes)
# Calculate elos
if ser_round_robin_elos is None:
ser_elos = calc_round_robin_elo(df)
else:
ser_elos = calc_player_elo(ser_round_robin_elos, df)
return df_rr_winrate, \
ser_overall_winrate, \
df_gametimes, \
df_race_wintimes, \
df_botrace_winrate, \
ser_maps, \
ser_bot_races, \
ser_elos
def save_stats(**dfs):
for name, df in dfs.items():
df.to_csv(f"{name}.csv", sep=",", quoting=csv.QUOTE_ALL)
def calc_round_robin_elo(df_results: DataFrame) -> Series:
bots = set(df_results['winner']).union((set(df_results['loser'])))
initial_ratings = {bot: elo.Rating() for bot in bots}
return calc_elo(initial_ratings, df_results)
def calc_player_elo(ser_round_robin_elo: Series, df_bot_results: DataFrame) -> Series:
ratings = {bot: elo.Rating(value=score) for bot, score in ser_round_robin_elo.items()}
return calc_elo(ratings, df_bot_results)
def calc_elo(ratings: Dict[str, elo.Rating], df_game_results: DataFrame):
elo_calc = elo.Elo(rating_class=elo.Rating)
for i in np.arange(len(df_game_results)):
winner = df_game_results.iloc[i]['winner']
loser = df_game_results.iloc[i]['loser']
winner_elo, loser_elo = elo_calc.rate_1vs1(ratings[winner],
ratings[loser], drawn=False)
ratings[winner] = winner_elo
ratings[loser] = loser_elo
return Series({bot: rating.value for bot, rating in ratings.items()}, name="elo")
|
/scbw_mq-0.2.2-py3-none-any.whl/scbw_mq/tournament/benchmark/stats.py
| 0.589362 | 0.238667 |
stats.py
|
pypi
|
# scc4onnx
A very simple NCHW/NHWC conversion tool for ONNX. It changes each input OP to a specified input order and can also swap the RGB/BGR channel order. **S**imple **C**hannel **C**onverter for **ONNX**.
https://github.com/PINTO0309/simple-onnx-processing-tools
[](https://pepy.tech/project/scc4onnx)  [](https://pypi.org/project/scc4onnx/) [](https://github.com/PINTO0309/scc4onnx/actions?query=workflow%3ACodeQL)
<p align="center">
<img src="https://user-images.githubusercontent.com/33194443/170157082-e0de3434-483f-4167-a71d-6ad4e087ac68.png" />
</p>
# Key concept
- [x] Allow the user to specify the name of the input OP to change the input order.
- [x] Any number of dimensions can be changed freely, not only 4-dimensional layouts such as NCHW and NHWC.
- [x] Simply rewrite the input order of the input OP to the specified order and insert a Transpose right after the input OP so that the processing of subsequent OPs is not affected.
- [x] Allows the user to change the channel order of RGB and BGR by specifying options.
## 1. Setup
### 1-1. HostPC
```bash
### option
$ echo export PATH="~/.local/bin:$PATH" >> ~/.bashrc \
&& source ~/.bashrc
### run
$ pip install -U onnx \
&& python3 -m pip install -U onnx_graphsurgeon --index-url https://pypi.ngc.nvidia.com \
&& pip install -U scc4onnx
```
### 1-2. Docker
https://github.com/PINTO0309/simple-onnx-processing-tools#docker
## 2. CLI Usage
```bash
$ scc4onnx -h
usage:
scc4onnx [-h]
--input_onnx_file_path INPUT_ONNX_FILE_PATH
--output_onnx_file_path OUTPUT_ONNX_FILE_PATH
[--input_op_names_and_order_dims INPUT_OP_NAME ORDER_DIM]
[--channel_change_inputs INPUT_OP_NAME DIM]
[--non_verbose]
optional arguments:
-h, --help
show this help message and exit
--input_onnx_file_path INPUT_ONNX_FILE_PATH
Input onnx file path.
--output_onnx_file_path OUTPUT_ONNX_FILE_PATH
Output onnx file path.
--input_op_names_and_order_dims INPUT_OP_NAME ORDER_DIM
Specify the name of the input_op to be dimensionally changed and the order of the
dimensions after the change.
The name of the input_op to be dimensionally changed can be specified multiple times.
e.g.
--input_op_names_and_order_dims aaa [0,3,1,2] \
--input_op_names_and_order_dims bbb [0,2,3,1] \
--input_op_names_and_order_dims ccc [0,3,1,2,4,5]
--channel_change_inputs INPUT_OP_NAME DIM
Change the channel order of RGB and BGR.
If the original model is RGB, it is transposed to BGR.
If the original model is BGR, it is transposed to RGB.
It can be selectively specified from among the OP names specified
in --input_op_names_and_order_dims.
OP names not specified in --input_op_names_and_order_dims are ignored.
Multiple times can be specified as many times as the number of OP names specified
in --input_op_names_and_order_dims.
--channel_change_inputs op_name dimension_number_representing_the_channel
dimension_number_representing_the_channel must specify the dimension position before
the change in input_op_names_and_order_dims.
For example, dimension_number_representing_the_channel is 1 for NCHW and 3 for NHWC.
e.g.
--channel_change_inputs aaa 3 \
--channel_change_inputs bbb 1 \
--channel_change_inputs ccc 5
--non_verbose
Do not show all information logs. Only error logs are displayed.
```
## 3. In-script Usage
```python
$ python
>>> from scc4onnx import order_conversion
>>> help(order_conversion)
Help on function order_conversion in module scc4onnx.onnx_input_order_converter:
order_conversion(
input_op_names_and_order_dims: Union[dict, NoneType] = None,
channel_change_inputs: Union[dict, NoneType] = None,
input_onnx_file_path: Union[str, NoneType] = '',
output_onnx_file_path: Union[str, NoneType] = '',
onnx_graph: Union[onnx.onnx_ml_pb2.ModelProto, NoneType] = None,
non_verbose: Union[bool, NoneType] = False
) -> onnx.onnx_ml_pb2.ModelProto
Parameters
----------
input_onnx_file_path: Optional[str]
Input onnx file path.
Either input_onnx_file_path or onnx_graph must be specified.
output_onnx_file_path: Optional[str]
Output onnx file path.
If output_onnx_file_path is not specified, no .onnx file is output.
onnx_graph: Optional[onnx.ModelProto]
onnx.ModelProto.
Either input_onnx_file_path or onnx_graph must be specified.
onnx_graph If specified, ignore input_onnx_file_path and process onnx_graph.
input_op_names_and_order_dims: Optional[dict]
Specify the name of the input_op to be dimensionally changed and
the order of the dimensions after the change.
The name of the input_op to be dimensionally changed
can be specified multiple times.
e.g.
input_op_names_and_order_dims = {
"input_op_name1": [0,3,1,2],
"input_op_name2": [0,2,3,1],
"input_op_name3": [0,3,1,2,4,5],
}
channel_change_inputs: Optional[dict]
Change the channel order of RGB and BGR.
If the original model is RGB, it is transposed to BGR.
If the original model is BGR, it is transposed to RGB.
It can be selectively specified from among the OP names
specified in input_op_names_and_order_dims.
OP names not specified in input_op_names_and_order_dims are ignored.
Multiple times can be specified as many times as the number
of OP names specified in input_op_names_and_order_dims.
channel_change_inputs = {"op_name": dimension_number_representing_the_channel}
dimension_number_representing_the_channel must specify
the dimension position after the change in input_op_names_and_order_dims.
For example, dimension_number_representing_the_channel is 1 for NCHW and 3 for NHWC.
e.g.
channel_change_inputs = {
"aaa": 1,
"bbb": 3,
"ccc": 2,
}
non_verbose: Optional[bool]
Do not show all information logs. Only error logs are displayed.
Default: False
Returns
-------
order_converted_graph: onnx.ModelProto
Order converted onnx ModelProto
```
## 4. CLI Execution
```bash
$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1] \
--channel_change_inputs left 1 \
--channel_change_inputs right 1
```
## 5. In-script Execution
```python
from scc4onnx import order_conversion
order_converted_graph = order_conversion(
onnx_graph=graph,
input_op_names_and_order_dims={"left": [0,2,3,1], "right": [0,2,3,1]},
channel_change_inputs={"left": 1, "right": 1},
non_verbose=True,
)
```
## 6. Sample
### 6-1. Transpose only

```bash
$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1]
```


### 6-2. Transpose + RGB<->BGR

```bash
$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1] \
--channel_change_inputs left 1 \
--channel_change_inputs right 1
```

### 6-3. RGB<->BGR only

```bash
$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--channel_change_inputs left 1 \
--channel_change_inputs right 1
```

## 7. Issues
https://github.com/PINTO0309/simple-onnx-processing-tools/issues
|
/scc4onnx-1.0.4.tar.gz/scc4onnx-1.0.4/README.md
| 0.585101 | 0.851645 |
README.md
|
pypi
|
from typing import Union
import numpy as np
import torch
from anndata import AnnData
from torch.types import Device
from .pca import scPCA
from .train import SUBSAMPLE
from .utils import get_protein_counts, get_rna_counts
class scCCA(scPCA):
"""
scCCA model.
Parameters
----------
adata: anndata.AnnData
Anndata object with the single-cell data.
num_factors: int
Number of factors to fit.
protein_obsm_key: str or None (default: None)
Key to extract single-cell protein matrix from `adata.obsm`.
layers_key: str or None (default: None)
Key to extract single-cell count matrix from adata.layers. If layers_key is None,
scPCA will try to extract the count matrix from the adata.X.
batch_formula: str or None (default: None)
R style formula to extract batch information from adata.obs. If batch_formula is None,
scPCA assumes a single batch. Usually `batch_column - 1`.
design_formula: str or None (default: None)
R style formula to construct the design matrix from adata.obs. If design_formula is None,
scPCA fits a normal PCA.
subsampling: int (default: 4096)
Number of cells to subsample for training. A larger number will result in a more accurate
computation of the gradients, but will also increase the training time and memory usage.
device: torch.device (default: torch.device("cuda") if a GPU is available)
Device to run the model on. A GPU is highly recommended.
model_key: str (default: "sccca")
Key to store the model in the AnnData object.
model_kwargs: dict
Model parameters. See sccca.model.model for more details.
training_kwargs: dict
Training parameters. See sccca.handler for more details.
"""
def __init__(
self,
adata: AnnData,
num_factors: int,
protein_obsm_key: str,
layers_key: Union[str, None] = None,
batch_formula: Union[str, None] = None,
design_formula: Union[str, None] = None,
subsampling: int = 4096,
device: Device = torch.device("cuda" if torch.cuda.is_available() else "cpu"),
model_key: str = "sccca",
model_kwargs: dict = {
"β_rna_sd": 0.01,
"β_rna_mean": 3,
"intercept": True,
"batch_beta": False,
"horseshoe": False,
},
training_kwargs: dict = SUBSAMPLE,
):
self.protein_obsm_key = protein_obsm_key
super().__init__(
adata=adata,
num_factors=num_factors,
layers_key=layers_key,
batch_formula=batch_formula,
design_formula=design_formula,
subsampling=subsampling,
device=device,
model_key=model_key,
model_kwargs=model_kwargs,
training_kwargs=training_kwargs,
)
def _setup_data(self):
"""
Sets up the data.
"""
X = get_rna_counts(self.adata, self.layers_key)
Y = get_protein_counts(self.adata, self.protein_obsm_key)
X_size = np.log(X.sum(axis=1, keepdims=True))
Y_size = np.log(Y.sum(axis=1, keepdims=True))
batch = np.asarray(self.batch_states.encoding).astype(np.float32)
design = np.asarray(self.design_states.encoding).astype(np.float32)
batch_idx = self.batch_states.index
design_idx = self.design_states.index
num_genes = X.shape[1]
num_cells = X.shape[0]
num_batches = batch.shape[1]
num_proteins = Y.shape[1]
idx = np.arange(num_cells)
data = dict(
num_factors=self.num_factors,
X=X,
X_size=X_size,
Y=Y,
Y_size=Y_size,
design=design,
batch=batch,
design_idx=design_idx,
batch_idx=batch_idx,
idx=idx,
num_genes=num_genes,
num_proteins=num_proteins,
num_batches=num_batches,
num_cells=num_cells,
)
return self._to_torch(data)
def posterior_to_anndata(self, model_key=None, num_samples=25):
model_key = self.model_key if model_key is None else model_key
_ = self._meta_to_anndata(model_key, num_samples)
adata = self.adata
adata.varm[f"{model_key}_W_rna"] = (
self.handler.predict_global_variable("W_lin", num_samples=num_samples).T.swapaxes(-1, -3).swapaxes(-1, -2)
)
adata.varm[f"{model_key}_V_rna"] = self.handler.predict_global_variable(
"W_add", num_samples=num_samples
).T.swapaxes(-1, -2)
α_rna = self.handler.predict_global_variable("α_rna", num_samples=num_samples).T
if α_rna.ndim == 2:
α_rna = np.expand_dims(α_rna, 1)
adata.varm[f"{model_key}_α_rna"] = α_rna.swapaxes(-1, -2)
σ_rna = self.handler.predict_global_variable("σ_rna", num_samples=num_samples).T
if σ_rna.ndim == 2:
σ_rna = np.expand_dims(σ_rna, 1)
adata.varm[f"{model_key}_σ_rna"] = σ_rna.swapaxes(-1, -2)
adata.obsm[f"X_{model_key}"] = self.handler.predict_local_variable("z", num_samples=num_samples).swapaxes(0, 1)
def mean_to_anndata(self, model_key=None, num_samples=25):
model_key = self.model_key if model_key is None else model_key
_ = self._meta_to_anndata(model_key, num_samples)
adata = self.adata
adata.layers[f"{model_key}_μ_rna"] = self.handler.predict_local_variable("μ_rna", num_samples=num_samples).mean(
0
)
adata.obsm[f"{model_key}_μ_prot"] = self.handler.predict_local_variable("μ_prot", num_samples=num_samples).mean(
0
)
adata.layers[f"{model_key}_offset_rna"] = self.handler.predict_local_variable(
"offset_rna", num_samples=num_samples
).mean(0)
adata.obsm[f"X_{model_key}"] = self.handler.predict_local_variable("z", num_samples=num_samples).mean(0)
adata.varm[f"{model_key}_W_rna"] = (
self.handler.predict_global_variable("W_lin", num_samples=num_samples).mean(0).T
)
adata.varm[f"{model_key}_V_rna"] = (
self.handler.predict_global_variable("W_add", num_samples=num_samples).mean(0).T
)
adata.varm[f"{model_key}_α_rna"] = self.handler.predict_global_variable("α_rna").mean(0).T
adata.varm[f"{model_key}_σ_rna"] = self.handler.predict_global_variable("σ_rna").mean(0).T
def to_anndata(self, adata=None, model_key=None, num_samples=25):
model_key = self.model_key if model_key is None else model_key
adata = self.adata if adata is None else adata
adata.uns[f"{model_key}"] = {}
res = adata.uns[f"{model_key}"]
res["design"] = self.design_states.mapping
res["intercept"] = self.batch_states.mapping
res["model"] = {"num_factors": self.num_factors, **self.model_kwargs}
res["α_rna"] = self.handler.predict_global_variable("α_rna", num_samples=num_samples).mean(0)
res["α_prot"] = self.handler.predict_global_variable("α_prot", num_samples=num_samples).mean(0)
res["W_fac"] = self.handler.predict_global_variable("W_fac", num_samples=num_samples).mean(0)
res["W_vec"] = self.handler.predict_global_variable("W_vec", num_samples=num_samples).mean(0)
res["W_lin"] = self.handler.predict_global_variable("W_lin", num_samples=num_samples).mean(0)
res["W_add"] = self.handler.predict_global_variable("W_add", num_samples=num_samples).mean(0)
res["V_fac"] = self.handler.predict_global_variable("V_fac", num_samples=num_samples).mean(0)
res["V_vec"] = self.handler.predict_global_variable("V_vec", num_samples=num_samples).mean(0)
res["V_lin"] = self.handler.predict_global_variable("V_lin", num_samples=num_samples).mean(0)
res["V_add"] = self.handler.predict_global_variable("V_add", num_samples=num_samples).mean(0)
res["μ_rna"] = self.handler.predict_local_variable("μ_rna", num_samples=num_samples).mean(0)
res["μ_prot"] = self.handler.predict_local_variable("μ_prot", num_samples=num_samples).mean(0)
adata.obsm[f"X_{model_key}"] = self.handler.predict_local_variable("z", num_samples=num_samples).mean(0)
adata.obsm[f"Z_{model_key}"] = self.handler.predict_local_variable("z_vec", num_samples=num_samples).mean(0)
|
/sccca-0.3.1-py3-none-any.whl/scCCA/cca.py
| 0.955047 | 0.653127 |
cca.py
|
pypi
|
import numpy as np
import pyro
import torch
from pyro.infer import Predictive, Trace_ELBO
from tqdm import tqdm
from .handler import SVIBaseHandler
class SVILocalHandler(SVIBaseHandler):
"""
Extends SVIBaseHandler to allow using a separate model and guide
for prediction. Assumes that the model and guide accept an idx argument
that is a torch tensor of indices.
"""
def __init__(
self,
model,
guide,
loss: Trace_ELBO = pyro.infer.TraceMeanField_ELBO,
optimizer=torch.optim.Adam,
scheduler=pyro.optim.ReduceLROnPlateau,
seed=None,
num_epochs: int = 30000,
log_freq: int = 10,
checkpoint_freq: int = 500,
to_numpy: bool = True,
optimizer_kwargs: dict = {"lr": 1e-2},
scheduler_kwargs: dict = {"factor": 0.99},
loss_kwargs: dict = {"num_particles": 1},
predict_model=None,
predict_guide=None,
idx: torch.Tensor = None,
):
super().__init__(
model=model,
guide=guide,
loss=loss,
optimizer=optimizer,
scheduler=scheduler,
seed=seed,
num_epochs=num_epochs,
log_freq=log_freq,
checkpoint_freq=checkpoint_freq,
to_numpy=to_numpy,
optimizer_kwargs=optimizer_kwargs,
scheduler_kwargs=scheduler_kwargs,
loss_kwargs=loss_kwargs,
)
self.predict_model = predict_model
self.predict_guide = predict_guide
self.idx = idx
def predict(self, return_sites, num_samples=25, *args, **kwargs):
if self.params is not None:
pyro.clear_param_store()
pyro.get_param_store().set_state(self.params)
predictive = Predictive(
self.predict_model,
guide=self.predict_guide,
num_samples=num_samples,
return_sites=return_sites,
)
posterior = predictive(*args, **kwargs)
self.posterior = self._to_numpy(posterior) if self.to_numpy else posterior
torch.cuda.empty_cache()
def predict_global_variable(self, var: str, num_samples: int = 25):
"""
Sample global variables from the posterior.
Parameters
----------
var : str
Name of the variable to sample.
num_samples : int
Number of samples to draw.
"""
self.predict([var], num_samples=num_samples, idx=self.idx[0:1])
return self.posterior[var]
def predict_local_variable(
self,
var: str,
num_samples: int = 25,
num_split: int = 2048,
obs_dim: int = 1,
):
"""
Sample local variables from the posterior. In order to
avoid memory issues, the sampling is performed in batches.
Parameters
----------
var : str
Name of the variable to sample.
num_samples : int
Number of samples to draw.
num_split : int
The parameter determines the size of the batches. The actual
batch size is total number of observations divided by num_split.
obs_dim : int
The dimension of the observations. After sampling, the output
is concatenated along this dimension.
"""
split_obs = torch.split(self.idx, num_split)
# create status bar
pbar = tqdm(range(len(split_obs)))
results = []
for i in pbar:
self.predict([var], num_samples=num_samples, idx=split_obs[i])
results.append(self.posterior[var])
# update status bar
pbar.set_description(f"Predicting {var} for obs {torch.min(split_obs[i])}-{torch.max(split_obs[i])}.")
return np.concatenate(results, obs_dim)
|
/sccca-0.3.1-py3-none-any.whl/scCCA/train/local_handler.py
| 0.91094 | 0.378804 |
local_handler.py
|
pypi
|
from typing import List, Union
import matplotlib.cm as cm
import matplotlib.patheffects as PathEffects
import matplotlib.pyplot as plt
import numpy as np
from .utils import set_up_cmap, set_up_plot
def loading_bar(
adata,
model_key: str,
state: str,
factor: Union[int, List[int], None] = None,
vector: str = "W_rna",
design_dim=0,
sign=1,
lowest=4,
highest=3,
fat_bar=0.6,
thin_bar=0.01,
offset=0.1,
fontsize=10,
cmap=cm.RdBu,
annot_bottom=False,
ax=None,
):
"""
Plot factor on a given embedding.
Parameters
----------
adata: AnnData
AnnData object.
model_key: str, optional (default: "X_scpca")
Key for the fitted model.
embedding: str, optional (default: "X_umap")
Key for the embedding (e.g. UMAP, T-SNE).
factor: int, list, optional (default: None)
Factor(s) to plot. If None, then all factors are plotted.
sign: float, optional (default: 1.0)
Sign of the factor. Should be either 1.0 or -1.0.
cmap: str, optional (default: "PiYG")
Colormap for the scatterplot.
colorbar_pos: str, optional (default: "right")
Position of the colorbar.
colorbar_width: str, optional (default: "3%")
Width of the colorbar.
orientation: str, optional (default: "vertical")
Orientation of the colorbar. Should be either "vertical" or "horizontal".
size: float, optional (default: 1)
Marker/Dot size of the scatterplot.
ncols: int, optional (default: 4)
Number of columns for the subplots.
width: int, optional (default: 4)
Width of each subplot.
height: int, optional (default: 3)
Height of each subplot.
ax: matplotlib.axes.Axes, optional (default: None)
Axes object to plot on. If None, then a new figure is created.
Returns
-------
ax: matplotlib.axes.Axes
Axes object.
"""
ax = set_up_plot(
adata,
model_key,
factor,
_loadings_bar,
state=state,
vector=vector,
design_dim=design_dim,
sign=sign,
lowest=lowest,
highest=highest,
fat_bar=fat_bar,
thin_bar=thin_bar,
offset=offset,
fontsize=fontsize,
cmap=cmap,
annot_bottom=annot_bottom,
ax=ax,
)
return ax
def _loadings_bar(
adata,
model_key: str,
factor: int,
state: Union[str, List[str]],
vector: str = "W_rna",
design_dim=0,
sign=1,
lowest=4,
highest=3,
fat_bar=0.6,
thin_bar=0.01,
offset=0.1,
fontsize=10,
cmap=cm.RdBu,
annot_bottom=False,
ax=None,
):
model_dict = adata.uns[model_key]
if isinstance(state, str):
idx = model_dict["design"][state]
# loadings = sign * model_dict[vector][idx][factor]
loadings = sign * adata.varm[f"{model_key}_{vector}"][..., factor, idx]
else:
model_design = model_dict["design"]
state_a = model_design[state[0]]
state_b = model_design[state[1]]
# loadings = sign * (model_dict[vector][state_b][factor] - model_dict[vector][state_a][factor])
loadings = sign * (
adata.varm[f"{model_key}_{vector}"][..., factor, state_b]
- adata.varm[f"{model_key}_{vector}"][..., factor, state_a]
)
y = loadings
other = len(loadings) - (lowest + highest)
loadings_idx = np.argsort(loadings)
w = np.concatenate(
[
np.ones(lowest) * fat_bar,
np.ones(other) * thin_bar,
np.ones(highest) * fat_bar,
]
)
cmap, norm = set_up_cmap(loadings, cmap)
mapper = cm.ScalarMappable(norm=norm, cmap=cmap)
colors = [mapper.to_rgba(v) for v in y[loadings_idx]]
xticks = []
for n, c in enumerate(w):
xticks.append(sum(w[:n]) + w[n] / 2)
if ax is None:
plt.bar(xticks, height=y[loadings_idx], width=w, color=colors, alpha=0.9)
ax = plt.gca()
else:
ax.bar(
xticks,
height=y[loadings_idx],
width=w,
color=colors,
alpha=0.9,
)
ax.set_xticks([])
# _ = ax.set_xticklabels(xticks_labels, rotation=90)
ax.margins(x=0.01)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.set_title(f"Loading {factor}")
for name, xtick in zip(adata.var_names[loadings_idx].values[:lowest].tolist(), xticks[:lowest]):
if not annot_bottom:
txt = ax.text(
x=xtick,
y=-offset,
s=name,
rotation=90,
ha="center",
color="white",
va="top",
fontweight="bold",
fontsize=fontsize,
)
txt.set_path_effects([PathEffects.withStroke(linewidth=2, foreground="black")])
else:
ax.text(
x=xtick,
y=offset,
s=name,
rotation=90,
ha="center",
color="black",
va="bottom",
fontsize=fontsize,
)
for name, xtick in zip(adata.var_names[loadings_idx].values[-highest:].tolist(), xticks[-highest:]):
if not annot_bottom:
txt = ax.text(
x=xtick,
y=offset,
s=name,
rotation=90,
ha="center",
color="white",
va="bottom",
fontweight="bold",
fontsize=fontsize,
)
txt.set_path_effects([PathEffects.withStroke(linewidth=2, foreground="black")])
else:
ax.text(
x=xtick,
y=-offset,
s=name,
rotation=90,
ha="center",
color="black",
va="top",
fontsize=fontsize,
)
return ax
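
if __name__ == "__main__":
    # Usage sketch (illustrative, not part of the original module). Builds a
    # synthetic AnnData object with the layout this plot expects: a design
    # mapping in ``adata.uns[model_key]["design"]`` and a 3-D loading array
    # ``adata.varm[f"{model_key}_{vector}"]`` of shape (genes, factors, states).
    # All names and shapes below are made up; assumes the installed anndata
    # version accepts 3-D ``.varm`` arrays, as the code above does.
    from anndata import AnnData

    n_genes, n_factors = 30, 5
    adata = AnnData(np.random.poisson(1.0, size=(100, n_genes)).astype(np.float32))
    adata.uns["scpca"] = {"design": {"Intercept": 0}}
    adata.varm["scpca_W_rna"] = np.random.randn(n_genes, n_factors, 1)

    loading_bar(adata, model_key="scpca", state="Intercept", factor=0)
    plt.show()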
|
/sccca-0.3.1-py3-none-any.whl/scCCA/plots/loadings_bar.py
| 0.969871 | 0.480966 |
loadings_bar.py
|
pypi
|
from typing import Union
import matplotlib.pyplot as plt
import numpy as np
from adjustText import adjust_text
from matplotlib.colors import LogNorm
from mpl_toolkits.axes_grid1 import make_axes_locatable
from scipy.stats import gamma, variation
from sklearn.metrics import mean_squared_error
from ..utils import extract_counts
def disp(
adata,
model_key: str = "scpca",
layers_key: Union[str, None] = None,
protein_obsm_key: Union[str, None] = None,
cmap: str = "viridis",
ax: plt.Axes = None,
) -> plt.Axes:
"""
    Plots the fitted dispersion against the coefficient of variation of the
    RNA or protein counts.
Parameters
----------
adata: AnnData
AnnData object.
model_key: str, optional (default: "scpca")
Key for the fitted model.
layers_key: str, optional (default: None)
If `layers_key` is None, then the raw counts are extracted from `adata.X`.
Otherwise, the counts are extracted from `adata.layers[layers_key]`.
protein_obsm_key: str, optional (default: None)
Key for protein counts in `adata.obsm`. Providing `protein_obsm_key`
overrides `layers_key`, i.e. protein counts are plotted.
cmap: str, optional (default: "viridis")
Colormap for the scatterplot. Color represents the mean of the counts.
ax: matplotlib.axes.Axes, optional (default: None)
Axes to plot on. If None, then a new figure is created.
Returns
-------
ax: matplotlib.axes.Axes
"""
# Extract counts
counts = extract_counts(adata, layers_key, protein_obsm_key)
posterior_key = "α_prot" if protein_obsm_key is not None else "α_rna"
if ax is None:
plt.scatter(
adata.uns[model_key][posterior_key],
variation(counts, axis=0),
c=counts.mean(0),
cmap="viridis",
norm=LogNorm(),
)
ax = plt.gca()
else:
ax.scatter(
adata.uns[model_key][posterior_key],
variation(counts, axis=0),
c=counts.mean(0),
cmap="viridis",
norm=LogNorm(),
)
plt.colorbar()
# ax.plot(np.linspace(1, 60), np.linspace(1, 60), color='C1')
ax.set_yscale("log")
ax.set_xscale("log")
ax.set_xlabel(r"$\alpha$")
ax.set_ylabel("CV")
return ax
def qc_hist(
adata,
model_key: str = "scpca",
layers_key: Union[str, None] = None,
protein_obsm_key: Union[str, None] = None,
cmap: str = "viridis",
colorbar_pos="right",
colorbar_width="3%",
orientation="vertical",
ax: plt.Axes = None,
) -> plt.Axes:
"""
Plots a 2D histogram of the predicted counts against the true counts.
Parameters
----------
adata: AnnData
AnnData object.
model_key: str, optional (default: "scpca")
Key for the fitted model.
layers_key: str, optional (default: None)
If `layers_key` is None, then the raw counts are extracted from `adata.X`.
Otherwise, the counts are extracted from `adata.layers[layers_key]`.
protein_obsm_key: str, optional (default: None)
Key for protein counts in `adata.obsm`. Providing `protein_obsm_key`
overrides `layers_key`, i.e. protein counts are plotted.
cmap: str, optional (default: "viridis")
Colormap for the scatterplot. Color represents the mean of the counts.
ax: matplotlib.axes.Axes, optional (default: None)
Axes to plot on. If None, then a new figure is created.
Returns
-------
ax: matplotlib.axes.Axes
"""
if ax is None:
fig = plt.figure()
ax = plt.gca()
else:
fig = plt.gcf()
# Extract counts
counts = extract_counts(adata, layers_key, protein_obsm_key)
posterior_key = "μ_prot" if protein_obsm_key is not None else "μ_rna"
if posterior_key == "μ_rna":
predicted_counts = adata.layers[f"{model_key}_{posterior_key}"]
else:
predicted_counts = adata.obsm[f"{model_key}_{posterior_key}"]
im = ax.hist2d(
np.log10(counts.reshape(-1) + 1),
np.log10(predicted_counts.reshape(-1) + 1),
bins=50,
norm=LogNorm(),
cmap=cmap,
)
divider = make_axes_locatable(ax)
cax = divider.append_axes(colorbar_pos, size=colorbar_width, pad=0.1)
fig.colorbar(im[3], cax=cax, orientation=orientation)
max_val = np.max([*ax.get_xlim(), *ax.get_ylim()])
min_val = np.min([*ax.get_xlim(), *ax.get_ylim()])
# print(max_val)
ax.set_xlim([min_val, max_val])
ax.set_ylim([min_val, max_val])
ax.plot(
np.linspace(min_val, max_val),
np.linspace(min_val, max_val),
color="w",
linewidth=2,
)
ax.set_aspect("equal")
ax.set_ylabel(r"Predicted count ($\log_{10}(x+1)$ scaled)")
ax.set_xlabel(r"True count ($\log_{10}(x+1)$ scaled)")
    rmse = np.sqrt(mean_squared_error(counts, predicted_counts))
ax.set_title(f"RMSE {rmse:.2f}")
return ax
def mean_var(
adata,
model_key: Union[str, None] = None,
layers_key: Union[str, None] = None,
protein_obsm_key: Union[str, None] = None,
highest: Union[int, None] = None,
β_rna_mean: float = 3,
β_rna_sd: float = 1,
alpha: float = 1.0,
repel: float = 0.1,
margin: float = 0.01,
max_distance: float = 0.1,
ax: plt.Axes = None,
):
if ax is None:
plt.figure()
ax = plt.gca()
counts = extract_counts(adata, layers_key, protein_obsm_key)
def vart(concentration, mean):
"""Computes the expected variance of the Gamma Poission distribution."""
return concentration / (concentration / mean) ** 2 * (1 + concentration / mean)
if model_key is not None:
model_dict = adata.uns[model_key]
params = model_dict["model"]
# prior_mean = params["β_rna_mean"]
β_rna_mean = params["β_rna_mean"]
β_rna_sd = params["β_rna_sd"]
a = β_rna_mean**2 / β_rna_sd**2
b = β_rna_mean / β_rna_sd**2
upper = gamma(a, scale=1 / b).ppf(0.975)
lower = gamma(a, scale=1 / b).ppf(0.025)
true_mean = np.mean(counts, axis=0)
true_var = np.var(counts, axis=0)
theoretical = vart(β_rna_mean, np.logspace(-4, 3, 1000))
expectation = vart(β_rna_mean, true_mean)
ax.fill_between(
np.logspace(-4, 3, 1000),
vart(lower, np.logspace(-4, 3, 1000)),
vart(upper, np.logspace(-4, 3, 1000)),
color="C3",
alpha=0.2,
)
im = ax.scatter(
true_mean,
true_var,
alpha=alpha,
s=10,
c=adata.varm[f"{model_key}_α_rna"] if model_key is not None else None,
cmap="viridis",
)
ax.plot(np.logspace(-4, 3), np.logspace(-4, 3), color="C3", label="Identity")
ax.plot(
np.logspace(-4, 3, 1000),
theoretical,
color="C3",
linestyle="--",
label=f"Prior mean {β_rna_mean:.2f}",
)
# ax.plot(np.logspace(-3, 3, 1000), vart(upper, np.logspace(-3, 3, 1000)), color='C3', linestyle='--')
# ax.plot(np.logspace(-3, 3, 1000), vart(lower, np.logspace(-3, 3, 1000)), color='C3', linestyle='--')
ax.legend()
ax.set_yscale("log")
ax.set_xscale("log")
ax.set_ylabel("Variance")
ax.set_xlabel("Mean")
cax = plt.colorbar(im)
cax.set_label("α")
if highest is not None:
deviation = np.abs((true_var - expectation) / expectation)
highest_genes = np.argsort(deviation)[-highest:]
genes = adata.var_names[highest_genes]
texts = [ax.text(true_mean[h], true_var[h], adata.var_names[h], fontsize=10) for h in highest_genes]
adjust_text(texts, arrowprops=dict(arrowstyle="-", color="k", lw=0.5))
print(true_mean[highest_genes])
# ta.allocate_text(
# fig,
# ax,
# true_mean[highest_genes],
# true_var[highest_genes],
# genes,
# x_scatter=true_mean[highest_genes],
# y_scatter=true_var[highest_genes],
# textsize=10,
# linecolor="grey",
# min_distance=repel,
# max_distance=max_distance,
# margin=margin,
# )
# txt_height = np.log(0.04*(ax.get_ylim()[1] - ax.get_ylim()[0]))
# txt_width = np.log(0.02*(ax.get_xlim()[1] - ax.get_xlim()[0]))
# text_positions = get_text_positions(true_mean[highest_genes], true_var[highest_genes], txt_width, txt_height)
# text_plotter(true_mean[highest_genes], true_var[highest_genes], text_positions, ax, txt_width, txt_height)
print(genes)
return expectation, true_var
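
# ---------------------------------------------------------------------------
# Usage sketch (illustrative, not part of the original module). The QC plots
# above expect a fitted model to be stored in the AnnData object; the keys
# below follow the access patterns in this file, everything else is assumed.
#
#   disp(adata, model_key="scpca")
#       plots the dispersions in adata.uns["scpca"]["α_rna"] against the
#       coefficient of variation of the raw counts (adata.X or a layer).
#
#   qc_hist(adata, model_key="scpca")
#       compares the predicted counts in adata.layers["scpca_μ_rna"] with the
#       raw counts in a log10(x + 1) scaled 2-D histogram.
#
#   mean_var(adata, model_key="scpca", highest=10)
#       plots the mean-variance relationship, the Gamma-Poisson expectation
#       under the fitted prior (adata.uns["scpca"]["model"]["β_rna_mean"]) and
#       labels the 10 genes deviating most from that expectation.
# ---------------------------------------------------------------------------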
|
/sccca-0.3.1-py3-none-any.whl/scCCA/plots/qc.py
| 0.95418 | 0.686928 |
qc.py
|
pypi
|
from typing import Callable, List, Union
import matplotlib.colors as co
import matplotlib.pyplot as plt
import numpy as np
from anndata import AnnData
def set_up_cmap(array: np.ndarray, cmap: str = "RdBu"):
vmin = array.min()
vmax = array.max()
if vmin < 0 and vmax > 0:
norm = co.TwoSlopeNorm(vmin=vmin, vmax=vmax, vcenter=0)
elif vmin < 0 and vmax < 0:
# print('min color')
norm = co.Normalize(vmin=vmin, vmax=0)
cmap = co.LinearSegmentedColormap.from_list("name", [cmap(-0.001), "w"])
else:
# print('max color')
cmap = co.LinearSegmentedColormap.from_list("name", ["w", cmap(1.001)])
norm = co.Normalize(vmin=0, vmax=vmax)
return cmap, norm
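
# Example (illustrative, not part of the original module): for an array that
# spans zero the colormap is returned unchanged together with a TwoSlopeNorm
# centred at 0; for an all-negative or all-positive array a one-sided colormap
# is constructed instead.
#
#   cmap, norm = set_up_cmap(np.array([-2.0, 0.5, 3.0]), plt.cm.RdBu)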
def rand_jitter(arr, stdev=1):
# stdev = .01 * (max(arr) - min(arr))
# print(stdev)
return arr + np.random.randn(len(arr)) * stdev
def set_up_subplots(num_plots, ncols=4, width=4, height=3):
"""Set up subplots for plotting multiple factors."""
if num_plots == 1:
fig, ax = plt.subplots()
return fig, ax
    nrows, remainder = divmod(num_plots, ncols)
    if num_plots < ncols:
        nrows = 1
        ncols = num_plots
    else:
        nrows, remainder = divmod(num_plots, ncols)
        if nrows == 0:
            nrows = 1
        if remainder > 0:
            nrows += 1
fig, axes = plt.subplots(nrows, ncols, figsize=(width * ncols, height * nrows))
_ = [ax.axis("off") for ax in axes.flatten()[num_plots:]]
return fig, axes
def set_up_plot(
adata: AnnData,
model_key: str,
instances: Union[int, List[int], None],
func: Callable,
ncols: int = 4,
width: int = 4,
height: int = 3,
ax: Union[plt.Axes, None] = None,
**kwargs
):
if isinstance(instances, list):
num_plots = len(instances)
fig, ax = set_up_subplots(num_plots, ncols=ncols, width=width, height=height)
elif isinstance(instances, int):
num_plots = 1
if ax is None:
fig, ax = plt.subplots(1, 1)
else:
model_dict = adata.uns[model_key]
if model_key == "pca":
num_plots = model_dict["variance"].shape[0]
else:
num_plots = model_dict["model"]["num_factors"]
instances = [i for i in range(num_plots)]
fig, ax = set_up_subplots(num_plots, ncols=ncols, width=width, height=height)
if num_plots == 1:
if isinstance(instances, list):
instances = instances[0]
func(adata, model_key, instances, ax=ax, **kwargs)
else:
for i, ax_i in zip(instances, ax.flatten()):
func(adata, model_key, i, ax=ax_i, **kwargs)
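
if __name__ == "__main__":
    # Demo of the dispatch pattern (illustrative, not part of the original
    # module): set_up_plot builds the subplot grid and calls the supplied
    # per-instance callable once per requested instance. The data and key
    # names below are made up.
    adata = AnnData(np.random.rand(50, 20).astype(np.float32))
    adata.obsm["X_demo"] = np.random.randn(50, 3)

    def _demo(adata, model_key, factor, ax=None):
        # Histogram of one latent dimension per subplot.
        ax.hist(adata.obsm["X_demo"][:, factor], bins=20)
        ax.set_title(f"Factor {factor}")

    set_up_plot(adata, "demo", [0, 1, 2], _demo)
    plt.show()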
|
/sccca-0.3.1-py3-none-any.whl/scCCA/plots/utils.py
| 0.607547 | 0.469703 |
utils.py
|
pypi
|
from typing import List, Union
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from .utils import set_up_cmap, set_up_plot
def factor_embedding(
adata,
model_key="X_scpca",
basis: str = "X_umap",
factor: Union[int, List[int], None] = None,
sign: float = 1.0,
cmap=cm.PiYG,
colorbar_pos="right",
colorbar_width="3%",
orientation="vertical",
pad=0.1,
size: float = 1,
ncols: int = 4,
width: int = 4,
height: int = 3,
ax=None,
):
"""
Plot factor on a given basis.
Parameters
----------
adata: AnnData
AnnData object.
model_key: str, optional (default: "X_scpca")
Key for the fitted model.
basis: str, optional (default: "X_umap")
Key for the basis (e.g. UMAP, T-SNE).
factor: int, list, optional (default: None)
Factor(s) to plot. If None, then all factors are plotted.
sign: float, optional (default: 1.0)
Sign of the factor. Should be either 1.0 or -1.0.
cmap: str, optional (default: "PiYG")
Colormap for the scatterplot.
colorbar_pos: str, optional (default: "right")
Position of the colorbar.
colorbar_width: str, optional (default: "3%")
Width of the colorbar.
orientation: str, optional (default: "vertical")
Orientation of the colorbar. Should be either "vertical" or "horizontal".
size: float, optional (default: 1)
Marker/Dot size of the scatterplot.
ncols: int, optional (default: 4)
Number of columns for the subplots.
width: int, optional (default: 4)
Width of each subplot.
height: int, optional (default: 3)
Height of each subplot.
ax: matplotlib.axes.Axes, optional (default: None)
Axes object to plot on. If None, then a new figure is created.
Returns
-------
ax: matplotlib.axes.Axes
Axes object.
"""
ax = set_up_plot(
adata,
model_key,
factor,
_factor_embedding,
sign=sign,
cmap=cmap,
basis=basis,
colorbar_pos=colorbar_pos,
colorbar_width=colorbar_width,
orientation=orientation,
pad=pad,
size=size,
ncols=ncols,
width=width,
height=height,
ax=ax,
)
return ax
def _factor_embedding(
adata,
model_key: str,
factor: int,
basis: str = "X_umap",
sign=1.0,
cmap=cm.PiYG,
colorbar_pos="right",
colorbar_width="3%",
orientation="vertical",
pad=0.1,
size: float = 1,
ax=None,
):
if ax is None:
fig = plt.figure()
ax = plt.gca()
else:
fig = plt.gcf()
weights = sign * adata.obsm[f"X_{model_key}"][..., factor]
cmap, norm = set_up_cmap(weights, cmap)
im = ax.scatter(
adata.obsm[basis][:, 0],
adata.obsm[basis][:, 1],
s=size,
c=weights,
norm=norm,
cmap=cmap,
)
divider = make_axes_locatable(ax)
cax = divider.append_axes(colorbar_pos, size=colorbar_width, pad=pad)
fig.colorbar(im, cax=cax, orientation=orientation)
ax.set_title(f"Factor {factor}")
ax.set_xlabel(f"{basis}")
ax.set_ylabel(f"{basis}")
ax.set_xticks([])
ax.set_yticks([])
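
if __name__ == "__main__":
    # Usage sketch (illustrative, not part of the original module). Builds a
    # synthetic AnnData object with a 2-D embedding in adata.obsm["X_umap"] and
    # factor scores in adata.obsm["X_scpca"] (the key read by _factor_embedding
    # is f"X_{model_key}"). All names and shapes below are made up.
    import numpy as np
    from anndata import AnnData

    adata = AnnData(np.random.rand(200, 20).astype(np.float32))
    adata.obsm["X_umap"] = np.random.randn(200, 2)
    adata.obsm["X_scpca"] = np.random.randn(200, 5)

    factor_embedding(adata, model_key="scpca", basis="X_umap", factor=[0, 1])
    plt.show()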
|
/sccca-0.3.1-py3-none-any.whl/scCCA/plots/factor_embedding.py
| 0.950709 | 0.660011 |
factor_embedding.py
|
pypi
|
from collections import OrderedDict, namedtuple
import numpy as np
import pandas as pd
from patsy import dmatrix
from patsy.design_info import DesignMatrix
StateMapping = namedtuple("StateMapping", "mapping, reverse, encoding, index, columns, states, sparse")
def get_states(design: DesignMatrix) -> namedtuple:
"""Extracts the states from the design matrix.
Parameters
----------
design: DesignMatrix
Design matrix of the model.
Returns
-------
StateMapping: namedtuple
        Named tuple with the fields ``mapping``, ``reverse``, ``encoding``,
        ``index``, ``columns``, ``states`` and ``sparse``.
"""
unique_rows, inverse_rows = np.unique(np.asarray(design), axis=0, return_inverse=True)
combinations = OrderedDict()
sparse_state = {}
for j, row in enumerate(range(unique_rows.shape[0])):
idx = tuple(np.where(unique_rows[row] == 1)[0])
combinations[idx] = unique_rows[row], j
state_name = "|".join([design.design_info.column_names[i] for i in np.where(unique_rows[row] == 1)[0]])
        if state_name.startswith("Intercept|"):
            state_name = state_name[len("Intercept|"):]
sparse_state[state_name] = j
factor_cols = {v: k for k, v, in design.design_info.column_name_indexes.items()}
state_cols = {v: k for k, v in factor_cols.items()}
state_mapping = {}
reverse_mapping = {}
for idx, (k, v) in enumerate(combinations.items()):
state = ""
for idx in k:
state += factor_cols[idx] + "|"
state = state.rstrip("|")
state_mapping[state] = v[1]
reverse_mapping[v[1]] = state
return StateMapping(
state_mapping, reverse_mapping, unique_rows, inverse_rows, factor_cols, state_cols, sparse_state
)
def get_state_loadings(adata, model_key: str) -> dict:
"""
Computes the loading matrix for each state defined in the
design matrix of the model.
Parameters
----------
adata: AnnData
Anndata object with the fitted scPCA model stored.
model_key: str
Key of the model in the AnnData object.
Returns
-------
dict of np.ndarray with
Dictionary with the loading matrices for each state.
"""
design = adata.uns[model_key]["design"]
states = {}
for k, v in design.items():
states[k] = adata.varm[model_key][..., v].sum(-1)
return states
def get_formula(adata, formula):
if formula is None:
batch = dmatrix("1", adata.obs)
else:
batch = dmatrix(formula, adata.obs)
return batch
def get_ordered_genes(adata, model_key, state, factor, sign=1.0, vector="W_rna", highest=10, lowest=0, ascending=False):
model_dict = adata.uns[model_key]
model_design = model_dict["design"]
state = model_design[state]
diff_factor = adata.varm[f"{model_key}_{vector}"][..., factor, state]
order = np.argsort(diff_factor)
if highest == 0:
gene_idx = order[:lowest]
else:
gene_idx = np.concatenate([order[:lowest], order[-highest:]])
magnitude = np.abs(diff_factor[gene_idx])
genes = adata.var_names.to_numpy()[gene_idx]
return (
pd.DataFrame(
{
"gene": genes,
"magnitude": magnitude,
"diff": diff_factor[gene_idx],
"type": ["lowest"] * lowest + ["highest"] * highest,
"state": state,
"factor": factor,
}
)
.sort_values(by="diff", ascending=ascending)
.reset_index(drop=True)
)
def get_diff_genes(adata, model_key, state, factor, sign=1.0, vector="W_rna", highest=10, lowest=0, ascending=False):
model_dict = adata.uns[model_key]
model_design = model_dict["design"]
state_a = model_design[state[0]]
state_b = model_design[state[1]]
# diff_factor = sign * (model_dict[vector][state_b][factor] - model_dict[vector][state_a][factor])
diff_factor = sign * (
adata.varm[f"{model_key}_{vector}"][..., factor, state_b]
- adata.varm[f"{model_key}_{vector}"][..., factor, state_a]
)
order = np.argsort(diff_factor)
if highest == 0:
gene_idx = order[:lowest]
else:
gene_idx = np.concatenate([order[:lowest], order[-highest:]])
magnitude = np.abs(diff_factor[gene_idx])
genes = adata.var_names.to_numpy()[gene_idx]
return (
pd.DataFrame(
{
"gene": genes,
"magnitude": magnitude,
"diff": diff_factor[gene_idx],
"type": ["lowest"] * lowest + ["highest"] * highest,
"state": state[1] + "-" + state[0],
"factor": factor,
}
)
.sort_values(by="diff", ascending=ascending)
.reset_index(drop=True)
)
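
if __name__ == "__main__":
    # Illustrative sketch (not part of the original module): map the rows of a
    # toy patsy design matrix to named states. The data frame below is made up.
    obs = pd.DataFrame({"condition": ["ctrl", "ctrl", "stim", "stim"]})
    design = dmatrix("condition", obs)
    states = get_states(design)
    print(states.mapping)  # e.g. {'Intercept': 0, 'Intercept|condition[T.stim]': 1}
    print(states.reverse)  # state index -> state name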
|
/sccca-0.3.1-py3-none-any.whl/scCCA/utils/design.py
| 0.941352 | 0.633368 |
design.py
|
pypi
|
import numpy as np
import anndata
from scipy.sparse import csr_matrix, hstack
from sccloud import io, tools, cite_seq
def run_pipeline(input_file, output_name, **kwargs):
is_raw = not kwargs["processed"]
if "seurat_compatible" not in kwargs:
kwargs["seurat_compatible"] = False
# load input data
adata = io.read_input(
input_file,
genome=kwargs["genome"],
concat_matrices=False if kwargs["cite_seq"] else True,
h5ad_mode=("a" if (is_raw or kwargs["subcluster"]) else "r+"),
select_singlets=kwargs["select_singlets"],
channel_attr=kwargs["channel_attr"],
black_list=(
kwargs["black_list"].split(",") if kwargs["black_list"] is not None else []
),
)
if not kwargs["cite_seq"]:
if is_raw:
values = adata.X.getnnz(axis=1)
if values.min() == 0: # 10x raw data
adata._inplace_subset_obs(values >= kwargs["min_genes_on_raw"])
else:
data_list = adata
assert len(data_list) == 2
adata = cdata = None
for i in range(len(data_list)):
if data_list[i].uns["genome"].startswith("CITE_Seq"):
cdata = data_list[i]
else:
adata = data_list[i]
assert adata is not None and cdata is not None
print("Inputs are loaded.")
if kwargs["seurat_compatible"]:
assert is_raw and kwargs["select_hvf"]
if kwargs["subcluster"]:
adata = tools.get_anndata_for_subclustering(adata, kwargs["subset_selections"])
is_raw = True # get submat and then set is_raw to True
if is_raw:
if not kwargs["subcluster"]:
# filter out low quality cells/genes
tools.run_filter_data(
adata,
output_filt=kwargs["output_filt"],
plot_filt=kwargs["plot_filt"],
plot_filt_figsize=kwargs["plot_filt_figsize"],
mito_prefix=kwargs["mito_prefix"],
min_genes=kwargs["min_genes"],
max_genes=kwargs["max_genes"],
min_umis=kwargs["min_umis"],
max_umis=kwargs["max_umis"],
percent_mito=kwargs["percent_mito"],
percent_cells=kwargs["percent_cells"],
)
if kwargs["seurat_compatible"]:
raw_data = adata.copy() # raw as count
        # normalize counts and then transform to log space
tools.log_norm(adata, kwargs["norm_count"])
# set group attribute
if kwargs["batch_correction"] and kwargs["group_attribute"] is not None:
tools.set_group_attribute(adata, kwargs["group_attribute"])
# select highly variable features
if kwargs["select_hvf"]:
tools.highly_variable_features(
adata,
kwargs["batch_correction"],
flavor=kwargs["hvf_flavor"],
n_top=kwargs["hvf_ngenes"],
n_jobs=kwargs["n_jobs"],
)
if kwargs["hvf_flavor"] == "sccloud":
if kwargs["plot_hvf"] is not None:
from sccloud.plotting import plot_hvf
robust_idx = adata.var["robust"].values
plot_hvf(
adata.var.loc[robust_idx, "mean"],
adata.var.loc[robust_idx, "var"],
adata.var.loc[robust_idx, "hvf_loess"],
adata.var.loc[robust_idx, "highly_variable_features"],
kwargs["plot_hvf"] + ".hvf.pdf",
)
# batch correction
if kwargs["batch_correction"]:
tools.correct_batch(adata, features="highly_variable_features")
# PCA
tools.pca(
adata,
n_components=kwargs["nPC"],
features="highly_variable_features",
random_state=kwargs["random_state"],
)
# Find K neighbors
tools.neighbors(
adata,
K=kwargs["K"],
rep="pca",
n_jobs=kwargs["n_jobs"],
random_state=kwargs["random_state"],
full_speed=kwargs["full_speed"],
)
# calculate diffmap
if (
kwargs["fle"]
or kwargs["net_fle"]
):
if not kwargs["diffmap"]:
print("Turn on --diffmap option!")
kwargs["diffmap"] = True
if kwargs["diffmap"]:
tools.diffmap(
adata,
n_components=kwargs["diffmap_ndc"],
rep="pca",
solver=kwargs["diffmap_solver"],
random_state=kwargs["random_state"],
max_t=kwargs["diffmap_maxt"],
)
if kwargs["diffmap_to_3d"]:
tools.reduce_diffmap_to_3d(adata, random_state=kwargs["random_state"])
# calculate kBET
if ("kBET" in kwargs) and kwargs["kBET"]:
stat_mean, pvalue_mean, accept_rate = tools.calc_kBET(
adata,
kwargs["kBET_batch"],
K=kwargs["kBET_K"],
alpha=kwargs["kBET_alpha"],
n_jobs=kwargs["n_jobs"],
)
print(
"kBET stat_mean = {:.2f}, pvalue_mean = {:.4f}, accept_rate = {:.2%}.".format(
stat_mean, pvalue_mean, accept_rate
)
)
# clustering
if kwargs["spectral_louvain"]:
tools.spectral_louvain(
adata,
rep="pca",
resolution=kwargs["spectral_louvain_resolution"],
rep_kmeans=kwargs["spectral_louvain_basis"],
n_clusters=kwargs["spectral_louvain_nclusters"],
n_init=kwargs["spectral_louvain_ninit"],
n_jobs=kwargs["n_jobs"],
random_state=kwargs["random_state"],
temp_folder=kwargs["temp_folder"],
class_label="spectral_louvain_labels",
)
if kwargs["spectral_leiden"]:
tools.spectral_leiden(
adata,
rep="pca",
resolution=kwargs["spectral_leiden_resolution"],
rep_kmeans=kwargs["spectral_leiden_basis"],
n_clusters=kwargs["spectral_leiden_nclusters"],
n_init=kwargs["spectral_leiden_ninit"],
n_jobs=kwargs["n_jobs"],
random_state=kwargs["random_state"],
temp_folder=kwargs["temp_folder"],
class_label="spectral_leiden_labels",
)
if kwargs["louvain"]:
tools.louvain(
adata,
rep="pca",
resolution=kwargs["louvain_resolution"],
random_state=kwargs["random_state"],
class_label=kwargs["louvain_class_label"],
)
if kwargs["leiden"]:
tools.leiden(
adata,
rep="pca",
resolution=kwargs["leiden_resolution"],
n_iter=kwargs["leiden_niter"],
random_state=kwargs["random_state"],
class_label=kwargs["leiden_class_label"],
)
# visualization
if kwargs["net_tsne"]:
tools.net_tsne(
adata,
rep="pca",
n_jobs=kwargs["n_jobs"],
perplexity=kwargs["tsne_perplexity"],
random_state=kwargs["random_state"],
select_frac=kwargs["net_ds_frac"],
select_K=kwargs["net_ds_K"],
select_alpha=kwargs["net_ds_alpha"],
net_alpha=kwargs["net_l2"],
polish_learning_frac=kwargs["net_tsne_polish_learing_frac"],
polish_n_iter=kwargs["net_tsne_polish_niter"],
out_basis=kwargs["net_tsne_basis"],
)
if kwargs["net_umap"]:
tools.net_umap(
adata,
rep="pca",
n_jobs=kwargs["n_jobs"],
n_neighbors=kwargs["umap_K"],
min_dist=kwargs["umap_min_dist"],
spread=kwargs["umap_spread"],
random_state=kwargs["random_state"],
select_frac=kwargs["net_ds_frac"],
select_K=kwargs["net_ds_K"],
select_alpha=kwargs["net_ds_alpha"],
full_speed=kwargs["full_speed"],
net_alpha=kwargs["net_l2"],
polish_learning_rate=kwargs["net_umap_polish_learing_rate"],
polish_n_epochs=kwargs["net_umap_polish_nepochs"],
out_basis=kwargs["net_umap_basis"],
)
if kwargs["net_fle"]:
tools.net_fle(
adata,
output_name,
n_jobs=kwargs["n_jobs"],
K=kwargs["fle_K"],
full_speed=kwargs["full_speed"],
target_change_per_node=kwargs["fle_target_change_per_node"],
target_steps=kwargs["fle_target_steps"],
is3d=False,
memory=kwargs["fle_memory"],
random_state=kwargs["random_state"],
select_frac=kwargs["net_ds_frac"],
select_K=kwargs["net_ds_K"],
select_alpha=kwargs["net_ds_alpha"],
net_alpha=kwargs["net_l2"],
polish_target_steps=kwargs["net_fle_polish_target_steps"],
out_basis=kwargs["net_fle_basis"],
)
if kwargs["tsne"]:
tools.tsne(
adata,
rep="pca",
n_jobs=kwargs["n_jobs"],
perplexity=kwargs["tsne_perplexity"],
random_state=kwargs["random_state"],
)
if kwargs["fitsne"]:
tools.fitsne(
adata,
rep="pca",
n_jobs=kwargs["n_jobs"],
perplexity=kwargs["tsne_perplexity"],
random_state=kwargs["random_state"],
)
if kwargs["umap"]:
tools.umap(
adata,
rep="pca",
n_neighbors=kwargs["umap_K"],
min_dist=kwargs["umap_min_dist"],
spread=kwargs["umap_spread"],
random_state=kwargs["random_state"],
)
if kwargs["fle"]:
tools.fle(
adata,
output_name,
n_jobs=kwargs["n_jobs"],
K=kwargs["fle_K"],
full_speed=kwargs["full_speed"],
target_change_per_node=kwargs["fle_target_change_per_node"],
target_steps=kwargs["fle_target_steps"],
is3d=False,
memory=kwargs["fle_memory"],
random_state=kwargs["random_state"],
)
# calculate diffusion-based pseudotime from roots
if len(kwargs["pseudotime"]) > 0:
tools.calc_pseudotime(adata, kwargs["pseudotime"])
# merge cite-seq data and run t-SNE
if kwargs["cite_seq"]:
adt_matrix = np.zeros((adata.shape[0], cdata.shape[1]), dtype="float32")
idx = adata.obs_names.isin(cdata.obs_names)
adt_matrix[idx, :] = cdata[adata.obs_names[idx],].X.toarray()
if abs(100.0 - kwargs["cite_seq_capping"]) > 1e-4:
cite_seq.capping(adt_matrix, kwargs["cite_seq_capping"])
var_names = np.concatenate(
[adata.var_names, ["AD-" + x for x in cdata.var_names]]
)
new_data = anndata.AnnData(
X=hstack([adata.X, csr_matrix(adt_matrix)], format="csr"),
obs=adata.obs,
obsm=adata.obsm,
uns=adata.uns,
var={
"var_names": var_names,
"gene_ids": var_names,
"n_cells": np.concatenate(
[adata.var["n_cells"].values, [0] * cdata.shape[1]]
),
"percent_cells": np.concatenate(
[adata.var["percent_cells"].values, [0.0] * cdata.shape[1]]
),
"robust": np.concatenate(
[adata.var["robust"].values, [False] * cdata.shape[1]]
),
"highly_variable_features": np.concatenate(
[
adata.var["highly_variable_features"].values,
[False] * cdata.shape[1],
]
),
},
)
new_data.obsm["X_CITE-Seq"] = adt_matrix
adata = new_data
print("ADT count matrix is attached.")
tools.fitsne(
adata,
rep="CITE-Seq",
n_jobs=kwargs["n_jobs"],
perplexity=kwargs["tsne_perplexity"],
random_state=kwargs["random_state"],
out_basis="citeseq_fitsne",
)
print("Antibody embedding is done.")
if kwargs["seurat_compatible"]:
seurat_data = adata.copy()
seurat_data.raw = raw_data
seurat_data.uns["scale.data"] = adata.uns["fmat_highly_variable_features"]
seurat_data.uns["scale.data.rownames"] = adata.var_names[
adata.var["highly_variable_features"]
].values
io.write_output(seurat_data, output_name + ".seurat.h5ad")
# write out results
io.write_output(adata, output_name + ".h5ad")
if kwargs["output_loom"]:
io.write_output(adata, output_name + ".loom")
print("Results are written.")
|
/sccloud-0.14.0.tar.gz/sccloud-0.14.0/scCloud/pipeline/pipeline.py
| 0.424889 | 0.24801 |
pipeline.py
|
pypi
|