{
"source": "josephmje/sdcflows",
"score": 2
}
#### File: workflows/fit/fieldmap.py
```python
r"""
Processing phase-difference and *directly measured* :math:`B_0` maps.
Theory
~~~~~~
The displacement suffered by every voxel along the phase-encoding (PE) direction
can be derived from Eq. (2) of [Hutton2002]_:
.. math::
\Delta_\text{PE} (i, j, k) = \gamma \cdot \Delta B_0 (i, j, k) \cdot T_\text{ro},
\label{eq:fieldmap-1}\tag{1}
where
:math:`\Delta_\text{PE} (i, j, k)` is the *voxel-shift map* (VSM) along the *PE* direction,
:math:`\gamma` is the gyromagnetic ratio of the H proton in Hz/T
(:math:`\gamma = 42.576 \cdot 10^6 \, \text{Hz} \cdot \text{T}^\text{-1}`),
:math:`\Delta B_0 (i, j, k)` is the *fieldmap variation* in T (Tesla), and
:math:`T_\text{ro}` is the readout time of one slice of the EPI dataset
we want to correct for distortions.
Let :math:`V` represent the «*fieldmap in Hz*» (or equivalently,
«*voxel-shift-velocity map*» as Hz are equivalent to voxels/s), with
:math:`V(i,j,k) = \gamma \cdot \Delta B_0 (i, j, k)`, then, introducing
the voxel zoom along the phase-encoding direction, :math:`s_\text{PE}`,
we obtain the nonzero component of the associated displacements field
:math:`\Delta D_\text{PE} (i, j, k)` that unwarps the target EPI dataset:
.. math::
\Delta D_\text{PE} (i, j, k) = V(i, j, k) \cdot T_\text{ro} \cdot s_\text{PE}.
\label{eq:fieldmap-2}\tag{2}
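For illustration, Eq. :math:`\eqref{eq:fieldmap-2}` can be evaluated voxel-wise with
NumPy (a minimal sketch, not part of this module; ``fieldmap_hz`` stands for
:math:`V` in Hz, ``t_ro`` for :math:`T_\text{ro}` in seconds, and ``zoom_pe`` for
:math:`s_\text{PE}` in mm)::

    import numpy as np

    def fieldmap_to_displacements(fieldmap_hz, t_ro, zoom_pe):
        """Voxel-shift map (Eq. 1) and displacements in mm (Eq. 2) along PE."""
        vsm = np.asanyarray(fieldmap_hz) * t_ro  # Eq. (1): shifts in voxels
        return vsm * zoom_pe                     # Eq. (2): displacements in mm
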
.. _sdc_direct_b0 :
Direct B0 mapping sequences
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some MR schemes such as :abbr:`SEI (spiral-echo imaging)` can directly
reconstruct an estimate of *the fieldmap in Hz*, :math:`V(i,j,k)`.
These *fieldmaps* are described in more detail `here
<https://cni.stanford.edu/wiki/GE_Processing#Fieldmaps>`__.
This corresponds to `this section of the BIDS specification
<https://bids-specification.readthedocs.io/en/latest/04-modality-specific-files/01-magnetic-resonance-imaging-data.html#case-3-direct-field-mapping>`__.
.. _sdc_phasediff :
Phase-difference B0 estimation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The fieldmap variation in T, :math:`\Delta B_0 (i, j, k)`, that is necessary to obtain
:math:`\Delta_\text{PE} (i, j, k)` in Eq. :math:`\eqref{eq:fieldmap-1}` can be
calculated from two subsequent :abbr:`GRE (Gradient-Recalled Echo)` echoes,
via Eq. (1) of [Hutton2002]_:
.. math::
\Delta B_0 (i, j, k) = \frac{\Delta \Theta (i, j, k)}{2\pi \cdot \gamma \, \Delta\text{TE}},
\label{eq:fieldmap-3}\tag{3}
where
:math:`\Delta \Theta (i, j, k)` is the phase-difference map in radians,
and :math:`\Delta\text{TE}` is the elapsed time between the two GRE echoes.
For simplicity, the «*voxel-shift-velocity map*» :math:`V(i,j,k)`, which we
can introduce in Eq. :math:`\eqref{eq:fieldmap-2}` to directly obtain
the displacements field, can be obtained as:
.. math::
V(i, j, k) = \frac{\Delta \Theta (i, j, k)}{2\pi \cdot \Delta\text{TE}}.
\label{eq:fieldmap-4}\tag{4}
This calculation is further complicated by the fact that :math:`\Theta_i`
(and therefore, :math:`\Delta \Theta`) are clipped (or *wrapped*) within
the range :math:`[0 \dotsb 2\pi )`.
It is necessary to find the integer number of offsets that make a region
continuously smooth with its neighbors (*phase-unwrapping*, [Jenkinson2003]_).
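As a schematic example (not part of this module), an already-unwrapped
phase-difference map can be converted into the voxel-shift-velocity map of
Eq. :math:`\eqref{eq:fieldmap-4}` as follows (``np.angle`` is also a convenient
way of re-wrapping angles onto the circle when recentering is needed)::

    import numpy as np

    def phasediff_to_hz(dphi_rads, delta_te):
        """Eq. (4): fieldmap in Hz from an unwrapped phase difference (rad)."""
        return np.asanyarray(dphi_rads) / (2.0 * np.pi * delta_te)

    def rewrap(phase_rads):
        """Re-wrap phases onto (-pi, pi] via the complex exponential."""
        return np.angle(np.exp(1j * np.asanyarray(phase_rads)))
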
This corresponds to `this section of the BIDS specification
<https://bids-specification.readthedocs.io/en/latest/04-modality-specific-files/01-magnetic-resonance-imaging-data.html#two-phase-images-and-two-magnitude-images>`__.
Some scanners produce one ``phasediff`` map, where the drift between the two echoes has
already been calculated (see `the corresponding section of BIDS
<https://bids-specification.readthedocs.io/en/latest/04-modality-specific-files/01-magnetic-resonance-imaging-data.html#case-1-phase-difference-map-and-at-least-one-magnitude-image>`__).
References
----------
.. [Hutton2002] Hutton et al., Image Distortion Correction in fMRI: A Quantitative
Evaluation, NeuroImage 16(1):217-240, 2002. doi:`10.1006/nimg.2001.1054
<https://doi.org/10.1006/nimg.2001.1054>`__.
.. [Jenkinson2003] <NAME>. (2003) Fast, automated, N-dimensional phase-unwrapping
algorithm. MRM 49(1):193-197. doi:`10.1002/mrm.10354
<https://doi.org/10.1002/mrm.10354>`__.
"""
from nipype.pipeline import engine as pe
from nipype.interfaces import utility as niu
from niworkflows.engine.workflows import LiterateWorkflow as Workflow
def init_fmap_wf(omp_nthreads=1, debug=False, mode="phasediff", name="fmap_wf"):
"""
Estimate the fieldmap based on a field-mapping MRI acquisition.
    Estimates the fieldmap using either one phase-difference map (or a pair of
    phase images) and one or more magnitude images corresponding to two or more
    :abbr:`GRE (Gradient Echo sequence)` acquisitions.
When we have a sequence that directly measures the fieldmap,
we just need to mask it (using the corresponding magnitude image)
to remove the noise in the surrounding air region, and ensure that
units are Hz.
Workflow Graph
.. workflow ::
:graph2use: orig
:simple_form: yes
from sdcflows.workflows.fit.fieldmap import init_fmap_wf
wf = init_fmap_wf(omp_nthreads=6)
Parameters
----------
omp_nthreads : :obj:`int`
Maximum number of threads an individual process may use.
    debug : :obj:`bool`
        Run in debug mode.
    mode : :obj:`str`
        Estimation strategy: ``"phasediff"`` (default) for phase-difference or
        phase maps; any other value (e.g., ``"mapped"``) selects the directly
        measured fieldmap pathway.
    name : :obj:`str`
        Unique name of this workflow.
Inputs
------
magnitude : :obj:`list` of :obj:`str`
Path to the corresponding magnitude image for anatomical reference.
fieldmap : :obj:`list` of :obj:`tuple`(:obj:`str`, :obj:`dict`)
Path to the fieldmap acquisition (``*_fieldmap.nii[.gz]`` of BIDS).
Outputs
-------
fmap : :obj:`str`
Path to the estimated fieldmap.
fmap_ref : :obj:`str`
Path to a preprocessed magnitude image reference.
fmap_coeff : :obj:`str` or :obj:`list` of :obj:`str`
The path(s) of the B-Spline coefficients supporting the fieldmap.
fmap_mask : :obj:`str`
Path to a binary brain mask corresponding to the ``fmap`` and ``fmap_ref``
pair.
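    For example, inputs can be set directly on the workflow's ``inputnode``
    (the file names and echo times below are hypothetical)::

        wf = init_fmap_wf(omp_nthreads=1, mode="phasediff")
        wf.inputs.inputnode.magnitude = ["sub-1_magnitude1.nii.gz"]
        wf.inputs.inputnode.fieldmap = [
            ("sub-1_phasediff.nii.gz",
             {"EchoTime1": 0.00492, "EchoTime2": 0.00738}),
        ]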
"""
from ...interfaces.bspline import (
BSplineApprox,
DEFAULT_LF_ZOOMS_MM,
DEFAULT_HF_ZOOMS_MM,
DEFAULT_ZOOMS_MM,
)
workflow = Workflow(name=name)
inputnode = pe.Node(
niu.IdentityInterface(fields=["magnitude", "fieldmap"]), name="inputnode"
)
outputnode = pe.Node(
niu.IdentityInterface(fields=["fmap", "fmap_ref", "fmap_mask", "fmap_coeff"]),
name="outputnode",
)
magnitude_wf = init_magnitude_wf(omp_nthreads=omp_nthreads)
bs_filter = pe.Node(BSplineApprox(), n_procs=omp_nthreads, name="bs_filter")
bs_filter.interface._always_run = debug
bs_filter.inputs.bs_spacing = (
[DEFAULT_LF_ZOOMS_MM, DEFAULT_HF_ZOOMS_MM] if not debug else [DEFAULT_ZOOMS_MM]
)
bs_filter.inputs.extrapolate = not debug
# fmt: off
workflow.connect([
(inputnode, magnitude_wf, [("magnitude", "inputnode.magnitude")]),
(magnitude_wf, bs_filter, [("outputnode.fmap_mask", "in_mask")]),
(magnitude_wf, outputnode, [
("outputnode.fmap_mask", "fmap_mask"),
("outputnode.fmap_ref", "fmap_ref"),
]),
(bs_filter, outputnode, [
("out_extrapolated" if not debug else "out_field", "fmap"),
("out_coeff", "fmap_coeff")]),
])
# fmt: on
if mode == "phasediff":
workflow.__postdesc__ = """\
A *B<sub>0</sub>* nonuniformity map (or *fieldmap*) was estimated from the
phase-drift map(s) measured with two consecutive GRE (gradient-recalled echo)
acquisitions.
"""
phdiff_wf = init_phdiff_wf(omp_nthreads, debug=debug)
# fmt: off
workflow.connect([
(inputnode, phdiff_wf, [("fieldmap", "inputnode.phase")]),
(magnitude_wf, phdiff_wf, [
("outputnode.fmap_ref", "inputnode.magnitude"),
("outputnode.fmap_mask", "inputnode.mask"),
]),
(phdiff_wf, bs_filter, [
("outputnode.fieldmap", "in_data"),
]),
])
# fmt: on
else:
from niworkflows.interfaces.images import IntraModalMerge
from ...interfaces.fmap import CheckB0Units
workflow.__postdesc__ = """\
A *B<sub>0</sub>* nonuniformity map (or *fieldmap*) was directly measured with
an MRI scheme designed for that purpose, such as SEI (Spiral-Echo Imaging).
"""
# Merge input fieldmap images (assumes all are given in the same units!)
fmapmrg = pe.Node(
IntraModalMerge(zero_based_avg=False, hmc=False, to_ras=False),
name="fmapmrg",
)
units = pe.Node(CheckB0Units(), name="units", run_without_submitting=True)
# fmt: off
workflow.connect([
(inputnode, units, [(("fieldmap", _get_units), "units")]),
(inputnode, fmapmrg, [(("fieldmap", _get_file), "in_files")]),
(fmapmrg, units, [("out_avg", "in_file")]),
(units, bs_filter, [("out_file", "in_data")]),
])
# fmt: on
return workflow
def init_magnitude_wf(omp_nthreads, name="magnitude_wf"):
"""
Prepare the magnitude part of :abbr:`GRE (gradient-recalled echo)` fieldmaps.
Average (if not done already) the magnitude part of the
:abbr:`GRE (gradient recalled echo)` images, run N4 to
correct for B1 field nonuniformity, and skull-strip the
preprocessed magnitude.
Workflow Graph
.. workflow ::
:graph2use: orig
:simple_form: yes
from sdcflows.workflows.fit.fieldmap import init_magnitude_wf
wf = init_magnitude_wf(omp_nthreads=6)
Parameters
----------
omp_nthreads : :obj:`int`
Maximum number of threads an individual process may use
name : :obj:`str`
Name of workflow (default: ``magnitude_wf``)
Inputs
------
    magnitude : :obj:`os.PathLike`
        Path(s) to the corresponding magnitude image(s).
Outputs
-------
fmap_ref : :obj:`os.PathLike`
Path to the fieldmap reference calculated in this workflow.
fmap_mask : :obj:`os.PathLike`
Path to a binary brain mask corresponding to the reference above.
"""
from nipype.interfaces.ants import N4BiasFieldCorrection
from niworkflows.interfaces.masks import BETRPT
from niworkflows.interfaces.images import IntraModalMerge
workflow = Workflow(name=name)
inputnode = pe.Node(niu.IdentityInterface(fields=["magnitude"]), name="inputnode")
outputnode = pe.Node(
niu.IdentityInterface(fields=["fmap_ref", "fmap_mask", "mask_report"]),
name="outputnode",
)
# Merge input magnitude images
# Do not reorient to RAS to preserve the validity of PhaseEncodingDirection
magmrg = pe.Node(IntraModalMerge(hmc=False, to_ras=False), name="magmrg")
# de-gradient the fields ("bias/illumination artifact")
n4_correct = pe.Node(
N4BiasFieldCorrection(dimension=3, copy_header=True),
name="n4_correct",
n_procs=omp_nthreads,
)
bet = pe.Node(BETRPT(generate_report=True, frac=0.6, mask=True), name="bet")
# fmt: off
workflow.connect([
(inputnode, magmrg, [("magnitude", "in_files")]),
(magmrg, n4_correct, [("out_avg", "input_image")]),
(n4_correct, bet, [("output_image", "in_file")]),
(bet, outputnode, [("mask_file", "fmap_mask"),
("out_file", "fmap_ref"),
("out_report", "mask_report")]),
])
# fmt: on
return workflow
def init_phdiff_wf(omp_nthreads, debug=False, name="phdiff_wf"):
r"""
Generate a :math:`B_0` field from consecutive-phases and phase-difference maps.
    This workflow preprocesses phase-difference maps (or computes the phase-difference
    map, should two ``phase1``/``phase2`` images be provided at the input), and generates
an image equivalent to BIDS's ``fieldmap`` that can be processed with the
general fieldmap workflow.
    Besides the phase2 - phase1 subtraction, the core of this particular workflow relies
    on the phase-unwrapping with FSL PRELUDE [Jenkinson2003]_.
FSL PRELUDE takes wrapped maps in the range 0 to 6.28, `as per the user guide
<https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FUGUE/Guide#Step_2_-_Getting_.28wrapped.29_phase_in_radians>`__.
For the phase-difference maps, recentering back to :math:`[-\pi \dotsb \pi )`
is necessary.
    After some massaging and scaling with the echo separation factor
:math:`\Delta \text{TE}`, the phase-difference maps are converted into
an actual :math:`B_0` map in Hz units.
Workflow Graph
.. workflow ::
:graph2use: orig
:simple_form: yes
from sdcflows.workflows.fit.fieldmap import init_phdiff_wf
wf = init_phdiff_wf(omp_nthreads=1)
Parameters
----------
omp_nthreads : :obj:`int`
Maximum number of threads an individual process may use
debug : :obj:`bool`
        Run in debug mode
name : :obj:`str`
Name of workflow (default: ``phdiff_wf``)
Inputs
------
magnitude : :obj:`os.PathLike`
A reference magnitude image preprocessed elsewhere.
phase : :obj:`list` of :obj:`tuple` of (:obj:`os.PathLike`, :obj:`dict`)
List containing one GRE phase-difference map with its corresponding metadata
(requires ``EchoTime1`` and ``EchoTime2``), or the phase maps for the two
subsequent echoes, with their metadata (requires ``EchoTime``).
mask : :obj:`os.PathLike`
A brain mask calculated from the magnitude image.
Outputs
-------
fieldmap : :obj:`os.PathLike`
The estimated fieldmap in Hz.
"""
from nipype.interfaces.fsl import PRELUDE
from ...interfaces.fmap import Phasediff2Fieldmap, PhaseMap2rads, SubtractPhases
workflow = Workflow(name=name)
workflow.__postdesc__ = f"""\
The corresponding phase-map(s) were phase-unwrapped with `prelude` (FSL {PRELUDE.version}).
"""
inputnode = pe.Node(
niu.IdentityInterface(fields=["magnitude", "phase", "mask"]), name="inputnode"
)
outputnode = pe.Node(niu.IdentityInterface(fields=["fieldmap"]), name="outputnode",)
def _split(phase):
return phase
split = pe.MapNode( # We cannot use an inline connection function with MapNode
niu.Function(function=_split, output_names=["map_file", "meta"]),
iterfield=["phase"],
run_without_submitting=True,
name="split",
)
# phase diff -> radians
phmap2rads = pe.MapNode(
PhaseMap2rads(),
name="phmap2rads",
iterfield=["in_file"],
run_without_submitting=True,
)
# FSL PRELUDE will perform phase-unwrapping
prelude = pe.Node(PRELUDE(), name="prelude")
calc_phdiff = pe.Node(
SubtractPhases(), name="calc_phdiff", run_without_submitting=True
)
calc_phdiff.interface._always_run = debug
compfmap = pe.Node(Phasediff2Fieldmap(), name="compfmap")
# fmt: off
workflow.connect([
(inputnode, split, [("phase", "phase")]),
(inputnode, prelude, [("magnitude", "magnitude_file"),
("mask", "mask_file")]),
(split, phmap2rads, [("map_file", "in_file")]),
(phmap2rads, calc_phdiff, [("out_file", "in_phases")]),
(split, calc_phdiff, [("meta", "in_meta")]),
(calc_phdiff, prelude, [("phase_diff", "phase_file")]),
(prelude, compfmap, [("unwrapped_phase_file", "in_file")]),
(calc_phdiff, compfmap, [("metadata", "metadata")]),
(compfmap, outputnode, [("out_file", "fieldmap")]),
])
# fmt: on
return workflow
def _get_file(intuple):
"""
Extract the filename from the inputnode.
>>> _get_file([("fmap.nii.gz", {"Units": "rad/s"})])
'fmap.nii.gz'
>>> _get_file(("fmap.nii.gz", {"Units": "rad/s"}))
'fmap.nii.gz'
"""
if isinstance(intuple, list):
intuple = intuple[0]
return intuple[0]
def _get_units(intuple):
"""
Extract Units from metadata.
>>> _get_units([("fmap.nii.gz", {"Units": "rad/s"})])
'rad/s'
>>> _get_units(("fmap.nii.gz", {"Units": "rad/s"}))
'rad/s'
"""
if isinstance(intuple, list):
intuple = intuple[0]
return intuple[1]["Units"]
```
#### File: fit/tests/test_phdiff.py
```python
import os
from pathlib import Path
from json import loads
import pytest
from ..fieldmap import init_fmap_wf, Workflow
@pytest.mark.skipif(os.getenv("TRAVIS") == "true", reason="this is TravisCI")
@pytest.mark.skipif(os.getenv("GITHUB_ACTIONS") == "true", reason="this is GH Actions")
@pytest.mark.parametrize(
"fmap_file",
[
("ds001600/sub-1/fmap/sub-1_acq-v4_phasediff.nii.gz",),
(
"ds001600/sub-1/fmap/sub-1_acq-v2_phase1.nii.gz",
"ds001600/sub-1/fmap/sub-1_acq-v2_phase2.nii.gz",
),
("ds001771/sub-36/fmap/sub-36_acq-topup1_fieldmap.nii.gz",),
("HCP101006/sub-101006/fmap/sub-101006_phasediff.nii.gz",),
],
)
def test_phdiff(tmpdir, datadir, workdir, outdir, fmap_file):
"""Test creation of the workflow."""
tmpdir.chdir()
fmap_path = [datadir / f for f in fmap_file]
fieldmaps = [
(str(f.absolute()), loads(Path(str(f).replace(".nii.gz", ".json")).read_text()))
for f in fmap_path
]
wf = Workflow(
name=f"phdiff_{fmap_path[0].name.replace('.nii.gz', '').replace('-', '_')}"
)
mode = "mapped" if "fieldmap" in fmap_path[0].name else "phasediff"
phdiff_wf = init_fmap_wf(omp_nthreads=2, debug=True, mode=mode,)
phdiff_wf.inputs.inputnode.fieldmap = fieldmaps
phdiff_wf.inputs.inputnode.magnitude = [
f.replace("diff", "1")
.replace("phase", "magnitude")
.replace("fieldmap", "magnitude")
for f, _ in fieldmaps
]
if outdir:
from ...outputs import init_fmap_derivatives_wf, init_fmap_reports_wf
outdir = outdir / "unittests" / fmap_file[0].split("/")[0]
fmap_derivatives_wf = init_fmap_derivatives_wf(
output_dir=str(outdir),
write_coeff=True,
bids_fmap_id="phasediff_id",
)
fmap_derivatives_wf.inputs.inputnode.source_files = [f for f, _ in fieldmaps]
fmap_derivatives_wf.inputs.inputnode.fmap_meta = [f for _, f in fieldmaps]
fmap_reports_wf = init_fmap_reports_wf(
output_dir=str(outdir), fmap_type=mode if len(fieldmaps) == 1 else "phases",
)
fmap_reports_wf.inputs.inputnode.source_files = [f for f, _ in fieldmaps]
# fmt: off
wf.connect([
(phdiff_wf, fmap_reports_wf, [
("outputnode.fmap", "inputnode.fieldmap"),
("outputnode.fmap_ref", "inputnode.fmap_ref"),
("outputnode.fmap_mask", "inputnode.fmap_mask")]),
(phdiff_wf, fmap_derivatives_wf, [
("outputnode.fmap", "inputnode.fieldmap"),
("outputnode.fmap_ref", "inputnode.fmap_ref"),
("outputnode.fmap_coeff", "inputnode.fmap_coeff"),
]),
])
# fmt: on
else:
wf.add_nodes([phdiff_wf])
if workdir:
wf.base_dir = str(workdir)
wf.run(plugin="Linear")
```
{
"source": "josephmjoy/robotics",
"score": 3
}
#### File: robotutils/comm/common.py
```python
import abc
from collections import namedtuple
class DatagramTransport(abc.ABC):
"""
Interface for a datagram transport that is provided to an instance of
RobotComm to provide the underlying raw communication.
NOTE TO IMPLEMENTORS: The destination/remote node object is opaque
    to the clients; however, there are two requirements for this object:
1. It must support str to obtain a text representation of the address
2. It must be suitable for use as a key or set element, hence immutable.
Strings and tuples work well for this.
"""
@abc.abstractmethod
def send(self, msg, destination) -> None:
"""Sends a single text message. {destination} is an opaque
        node object returned by get_remote_node or obtained from the transport's
        receive handler."""
@abc.abstractmethod
def start_listening(self, handler) -> None:
"""
Starts listening for incoming datagrams. This will likely use up
resources, such as a dedicated thread, depending on the implementation.
handler(msg: str, remote_node: Object) --
called when a message arrives. The handler will
likely be called in some other thread's context.
The handler MUST NOT block. If time consuming operations
need to be performed, queue the message for further processing, or
implement a state machine. The handler *may* be reentered or called
concurrently from another thread. Call stop_listening to stop new
messages from being received. """
@abc.abstractmethod
def stop_listening(self) -> None:
"""Stops listening"""
@abc.abstractmethod
def close(self) ->None:
"""Closes all open listeners and remote notes."""
ReceivedMessage = namedtuple('ReceivedMessage',
('msgtype',
'message',
'remote_node',
'received_timestamp',
'channel'))
# Following requires Python 3.5+
ReceivedMessage.__doc__ += """: Incoming message as reported to client"""
ServerStatistics = namedtuple('ServerStatistics',
('rcvd_commands',
'rcvd_CMDs',
'sent_CMDRESPs',
'rcvd_CMDRESPACKs',
'cur_SvrRecvdCmdMap_size',
'cur_SvrRcvdCmdIncomingQueue_size',
'cur_SvrRcvdCmdCompletedQueue_size'))
ClientStatistics = namedtuple('ClientStatistics',
('sent_commands',
'sent_CMDs',
'rcvd_CMDRESPs',
'sent_CMDRESPACKs',
'cur_CliSentCmdMap_size',
'cur_CliSentCmdCompletionQueue_size'))
ClientRtStatistics = namedtuple('ClientRtStatistics',
('approx_sent_rtcommands',
'approx_send_RTCMDs',
'approx_rcvd_RTCMDRESPs',
'approx_rttimeouts',
'cur_CliSentRtCmdMap_size'))
ChannelStatistics = namedtuple('ChannelStatistics',
('channel_name',
'sent_messages',
'rcvd_messages',
'client_stats',
'client_rtstats',
'server_stats'))
```
#### File: python_robotutils/robotutils/config_helper.py
```python
import sys
import re
_COLONSPACE = ": " # space after : is MANDATORY
_REGEX_WHITESPACE = re.compile(r"\s+")
_HYPHENSPACE = "- " # space after - is MANDATORY
def read_section(reader, section_name, keys) -> dict:
"""
Loads the specified section from the specified reader. It will not raise an
exception. On error (OS exception or not being able to find the section) it
    will return an empty dict. It leaves the reader at the same seek position
    as on entry, unless there was an OS exception, in which case the state of
    the file pointer will be unknown.
    If {keys} is not None, it will be cleared and filled with the keys in the
    order they were found in the input.
Returns: dict representing mapping of keys to values for that specified section.
"""
mapping = {}
if keys:
keys.clear()
    # We work with a copy because we have to seek ahead
# looking for the section
# OBSOLETE with io.TextIOWrapper(reader) as reader2:
try:
start = reader.tell()
if _find_section(section_name, reader):
_process_section(reader, mapping, keys)
reader.seek(start)
except OSError as err:
_printerr(err) #nothing to do
return mapping
def write_section(section_name, section, keys, writer) -> bool:
"""
Saves the specified section to the specified writer starting at the current
point in the writer. It will not throw an exception. On error (IO exception
    or not being able to write the section) it will return False. WARNING: It
    cannot scan the destination to see if this section has already been written, so
typically this method is called when writing out an entire configuration with
multiple sections in sequence.
Returns True on success and False on failure.
"""
keys = keys if keys else section.keys()
ret = False
# OBSOLETE with io.TextIOWrapper(writer) as writer2:
try:
writer.write(section_name + ":\n")
for k in keys:
val = section.get(k)
if val:
output = " " + k + _COLONSPACE + val + "\n"
writer.write(output)
ret = True
except OSError as err:
_printerr(err) # Just return false
return ret
def read_list(section_name, reader) -> list:
"""
Loads the specified list-of-strings section from the specified reader.
    It will not throw an exception. On error (OS exception or not being able to find
the section) it will return an empty list.
    It leaves the reader at the same seek position as on entry,
unless there was an OS exception, in which case the state of the file pointer will be unknown.
Return: List of strings in the section
"""
list_items = []
# OBSOLETE with io.TextIOWrapper(reader) as reader2:
try:
start = reader.tell()
if _find_section(section_name, reader):
_process_list_section(reader, list_items)
reader.seek(start)
except OSError as err:
_printerr(err) # Just return false
return list_items
def write_list(section_name, list_items, writer) -> bool:
"""
Saves the specified list-of-strings section to the specified writer starting
at the current point in the writer. It will not throw an exception. On error
(OS exception or not being able to write the section) it will return false.
WARNING: It cannot scan the destination to see if this section has already
been written, so typically this method is called when writing out an entire
configuration with multiple sections in sequence.
Returns: True on success and False on failure.
"""
ret = False
#OBSOLETE with io.TextIOWrapper(writer) as writer2:
try:
writer.write(section_name + ":\n")
for item in list_items:
output = " " + _HYPHENSPACE + item + "\n"
writer.write(output)
ret = True
except OSError as err:
_printerr(err) # Just return false
return ret
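# Illustrative round-trip using the helpers above (a sketch; the tiny config
# below is hypothetical but follows the YAML-like subset these helpers parse):
#
#   import io
#   text = ("motors:\n"
#           "  left_port: 3\n"
#           "  right_port: 4\n"
#           "waypoints:\n"
#           "  - home\n"
#           "  - dock\n")
#   params = read_section(io.StringIO(text), "motors", [])
#   # params == {'left_port': '3', 'right_port': '4'}
#   stops = read_list("waypoints", io.StringIO(text))
#   # stops == ['home', 'dock']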
#
# ------------ Private Methods ----------------
#
def _find_section(section_name, reader) -> bool:
"""
Seeks to the start of the specified section. If it does not find the section
the reader's file pointer will be in an unknown state.
Returns: True on success and False on failure.
"""
if not section_name or ' ' in section_name or ':' in section_name or '\t' in section_name:
# Bogus section name. Note that colons are not allowed within section names,
# though that is valid YAML.
# This is because this is a configuration reader, not general YAML parser.
return False # ******************** EARLY RETURN *******************
ret = False
for line in reader:
if line.startswith("..."):
break
if _matches_section_name(line, section_name):
remaining = line[len(section_name):]
remaining = _trim_comment(remaining).strip()
if ':' in remaining:
icolon = remaining.index(':')
precolon = remaining[:icolon].strip()
postcolon = remaining[icolon + 1:].strip()
ret = not (precolon or postcolon)
break
# doesn't match, keep looking...
return ret
def _matches_section_name(line, section_name) -> bool:
"""Checks if {line} starts with the specified section name """
if not line.startswith(section_name):
return False
# Line STARTS with {section_name} - promising, but perhaps it's a prefix of
# a longer section name or something badly garbled...
if len(line) > len(section_name):
        char = line[len(section_name)]  # char right after section_name
        if char not in "\t :#":
            # The next character could be part of a longer section name, so
            # keep looking.
            return False
return True
def _trim_comment(line) -> str:
"""Remove comment from end of line, if any"""
if '#' in line:
icomment = line.index('#')
line = line[:icomment]
return line
def _process_section(reader, mapping, keys) -> None:
"""
    Read in a section. Place any (k, v) pairs into dict {mapping}. If {keys} is not
    None, append any keys read, in the order they were read.
"""
indentation = -1 # we will set it when we find the first child of the section.
for line in reader:
if line.startswith("..."):
break
line = _trim_comment(line)
validkv = False
pre = post = ""
if line and not _REGEX_WHITESPACE.fullmatch(line):
icolon = line.index(_COLONSPACE) if _COLONSPACE in line else -1
            if icolon >= 0:
pre = line[:icolon].strip()
post = line[icolon + 1:].strip() # +1 for space after colon
else:
# The other case is the line *ends* in a colon...
if line.endswith(':'):
pre = line[:-1].strip()
post = ""
if pre:
this_indentation = line.index(pre)
if indentation < 0:
indentation = this_indentation
assert indentation >= 0
if indentation < 1 or this_indentation < indentation:
# We are done because indentation level has popped up.
break
validkv = indentation == this_indentation and post
if validkv:
mapping[pre] = post
if keys:
keys.append(pre)
def _process_list_section(reader, items) -> None:
"""
Read a list of strings and put them into {items}. Quit early if encountering
anything unexpected
"""
indentation = -1 # we will set it when we find the first child of the section.
quit_ = False
for line in reader:
if quit_ or line.startswith("..."):
break
line = _trim_comment(line)
if line and not _REGEX_WHITESPACE.fullmatch(line):
quit_ = True
if _HYPHENSPACE in line:
ihyphen = line.index(_HYPHENSPACE)
pre = line[:ihyphen].strip()
post = line[ihyphen + 1:].strip() # +1 for space after hyphen
if not pre:
this_indentation = ihyphen
if indentation < 0:
indentation = this_indentation
assert indentation >= 0
# We expect strictly indented lines with exactly the same
# indentation.
if indentation >= 1 and this_indentation == indentation:
items.append(post) # Empty strings are added too.
quit_ = False
def _printerr(*args):
"""print error message"""
print(*args, file=sys.stderr)
```
#### File: python_robotutils/tests/test_comm_helper.py
```python
import time
import logging
import unittest
import concurrent.futures
#Pylint complains about import order of .context, but if it is put later,
#unittest (with -k) can't find robotutils, because it hasn't loaded .context
#when discovering other tests
#from .context import robotutils
from robotutils.comm_helper import UdpTransport, EchoServer, EchoClient
from robotutils import concurrent_helper as conc
from .context import logging_helper # also to ensure .context gets loaded
_LOGNAME = "test"
_LOGGER = logging.getLogger(_LOGNAME)
_TRACE = logging_helper.LevelSpecificLogger(logging_helper.TRACELEVEL, _LOGGER)
# Uncomment one of these to set the global trace level for ALL unit tests, not
# just the ones in this file.
# logging.basicConfig(level=logging.INFO)
#logging.basicConfig(level=logging_helper.TRACELEVEL)
SERVER_IP_ADDRESS = "127.0.0.1"
SERVER_PORT = 41899 + 3
MAX_PACKET_SIZE = 1024
class CommUtilsTest(unittest.TestCase):
"""Unit tests for comm_helper"""
# Echo client-server tests use these channel names.
ECHO_CHANNEL_A = "A"
MSG_PREFIX = "MSG"
def test_udp_transport_simple(self):
"""Simple test of the UDP transport"""
# The client sends string versions of integers 1 to N
# The server parses and adds these up and in the end
# the test verifies that the sum is as expected.
client = UdpTransport(recv_bufsize=MAX_PACKET_SIZE)
server = UdpTransport(recv_bufsize=MAX_PACKET_SIZE,
local_host=SERVER_IP_ADDRESS,
local_port=SERVER_PORT)
count = 1000
total = conc.AtomicNumber(0)
latch = conc.CountDownLatch(count)
def process_message(msg, node):
_TRACE("Server got message [{}] from {}".format(msg, node))
val = int(msg[len(self.MSG_PREFIX):]) # skip past prefix
total.add(val)
latch.count_down()
server.start_listening(process_message)
remote = client.new_remote_node(SERVER_IP_ADDRESS, SERVER_PORT)
for i in range(1, count+1):
msg = self.MSG_PREFIX + str(i)
client.send(msg, remote)
_TRACE("Client sent message [{}]".format(msg))
# time.sleep(0.1)
_LOGGER.info("Waiting for all messages to arrive...")
if latch.wait(1):
_LOGGER.info("Done waiting for all %d messages to arrive.", count)
expected = count * (count + 1) // 2 # sum of 1 to count
self.assertEqual(total.value(), expected)
else:
msg = "TIMED OUT waiting for all {} messages to arrive.".format(count)
self.fail(msg)
client.close()
server.close()
self.assertEqual(client._send_errors, 0) # pylint: disable=protected-access
self.assertEqual(server._send_errors, 0) # pylint: disable=protected-access
def test_only_echo_client(self):
"""Test the UDP echo client sending to nowhere"""
client = EchoClient('localhost')
num_sends = 4
# send_messages will block until done...
_TRACE("GOING TO SEND MESSAGES")
client.send_messages(num_sends)
_TRACE("DONE SENDING MESSAGES")
client.close()
self.assertTrue(bool(client)) # replace with some better check
def test_only_echo_server(self):
"""Test the UDP echo server waiting for a client never shows up"""
server = EchoServer('localhost')
stop_server = False
with concurrent.futures.ThreadPoolExecutor(1) as executor:
def runserver():
server.start()
while not stop_server:
server.periodic_work()
time.sleep(0.1)
server.stop()
executor.submit(runserver)
time.sleep(1)
stop_server = True
_TRACE("Waiting for server to shut down")
self.assertTrue(stop_server) # replace with some better check
def test_echo_nomsgs(self):
"""Test the UDP echo client and server with 0 messages"""
self.run_echo_test(num_sends=0)
def test_echo_simple(self):
"""Test the UDP echo client and server with a small number of messages"""
self.run_echo_test(num_sends=1)
def test_echo_stress(self):
"""Test the UDP echo client and server with a small number of messages"""
self.run_echo_test(num_sends=1000, rate=5000)
def run_echo_test(self, *, num_sends, rate=1):
"""Parametrized echo client server test"""
server = EchoServer('localhost')
client = EchoClient('localhost')
client.set_parameters(rate=rate)
stop_server = False
receive_count = 0
server.start()
with concurrent.futures.ThreadPoolExecutor(1) as executor:
def runserver():
while not stop_server:
server.periodic_work()
time.sleep(0.1)
executor.submit(runserver)
time.sleep(0.1) # Give some time for server to get started
def response_handler(resptype, respbody):
_TRACE("GOT RESPONSE (%s, %s)", resptype, respbody)
nonlocal receive_count
            receive_count += 1  # assume calls to the handler are serialized
# send_messages will block until done...
try:
_TRACE("GOING TO SEND MESSAGES")
client.send_messages(num_sends, response_handler=response_handler)
_TRACE("DONE SENDING MESSAGES")
except Exception: # pylint: disable=broad-except
_LOGGER.exception("While sending messages")
self.fail("Exception thrown")
finally:
client.close()
server.stop()
stop_server = True
_TRACE("Waiting for server to shut down")
_LOGGER.info("Echo test complete. Sent: %d Received: %d", num_sends, receive_count)
if num_sends > 2: # We may lose the 1st or last message based on timing
            self.assertGreater(receive_count, 0)  # should receive at least 1 message
```
#### File: python_robotutils/tests/test_config_helper.py
```python
import unittest
import io
from .context import config_helper
class TestConfigHelper(unittest.TestCase):
"""Container for config_helper unit tests"""
def test_simple_section_usage(self):
"""Test simple non-empty section"""
inp = "\n".join(("mySection:", " sk: sv", " ik: 10\n"))
reader = io.StringIO(inp) # in-memory stream input
keys = []
mapper = config_helper.read_section(reader, "mySection", keys)
writer = io.StringIO() # in-memory stream output
bresult = config_helper.write_section("mySection", mapper, keys, writer)
self.assertTrue(bresult)
output = writer.getvalue()
self.assertEqual(inp, output)
def test_simple_list_usage(self):
"""Test a simple list of two items"""
inp = "\n".join(("myList:", " - item1", " - item2\n"))
reader = io.StringIO(inp) # in-memory stream input
lines = config_helper.read_list("myList", reader)
writer = io.StringIO() # in-memory stream output
bresult = config_helper.write_list("myList", lines, writer)
self.assertTrue(bresult)
output = writer.getvalue()
self.assertEqual(inp, output)
def test_messy_yaml_input(self):
"""Test a much more complicated config file"""
inp = """
# This is a comment
---
# Section 1
section1:
k1a: v1a
k1b: v1b
sectionx:# empty section
section2: # with comment
k2a: v2a
k2b: v2b
section3: # with comment
kComplex: #complex section to be ignored
with complex floating text
complex1: 1
complex2: 2
k3a: v3a # added comment
# some comment and empty lines and lines with spacecs
# comment
\t
k3b: v3b
section4: # with comment
k4a: v4a
k4b: v4b
badSect: badVal
skip: skip
# List 1
list1:
- v1a
- v1b
listx:# empty section
list2: # with comment
- v2a
- v2b
list3: # with comment
- v3a # added comment
# some comment and empty lines and lines with spacecs
# comment
\t
- v3b
kComplex: #complex section will trigger early quitting
with complex floating text
complex1: 1
complex2: 2
- random item
list4: # with comment
- v4a
- v4b
- badVal
- skip
"""
keys = []
writer = io.StringIO() # in-memory stream output
reader = io.StringIO(inp) # in-memory stream input
# Process sections
for i in range(1, 5):
stri = str(i)
section = "section" + stri
mapping = config_helper.read_section(reader, section, keys)
ret = config_helper.write_section(section, mapping, keys, writer)
self.assertEqual(True, ret)
# We expect certain keys and values to be there.
self.assertEqual(2, len(mapping))
self.assertEqual("v" + stri + "a", mapping.get("k" + stri + "a"))
self.assertEqual("v" + stri + "b", mapping.get("k" + stri + "b"))
# Process lists
for i in range(1, 5):
stri = str(i)
section = "list" + stri
items = config_helper.read_list(section, reader)
ret = config_helper.write_list(section, items, writer)
self.assertTrue(ret)
# We expect certain list items to be there.
self.assertEqual(2, len(items))
self.assertEqual("v"+ stri +"a", items[0])
self.assertEqual("v"+ stri +"b", items[1])
        # Create new input from what was just written
input2 = writer.getvalue()
writer2 = io.StringIO()
# Process sections a 2nd time - from the new input
reader = io.StringIO(input2)
for i in range(1, 5):
section = "section" + str(i)
mapping = config_helper.read_section(reader, section, keys)
ret = config_helper.write_section(section, mapping, keys, writer2)
self.assertTrue(ret)
# Process lists a 2nd time - from the new input
for i in range(1, 5):
section = "list" + str(i)
items = config_helper.read_list(section, reader)
ret = config_helper.write_list(section, items, writer2)
self.assertTrue(ret)
output2 = writer2.getvalue()
self.assertEqual(input2, output2)
```
#### File: python_robotutils/tests/test_msgmap.py
```python
import unittest
import random
import sys
from .context import msgmap
# pylint: disable=invalid-name
class TestStringMethods(unittest.TestCase):
"""Container class for unittest tests."""
def test_empty_str_to_dict(self):
"""Empty string to dict """
d = msgmap.str_to_dict('')
self.assertEqual(len(d), 0)
def test_empty_dict_to_str(self):
"""Empty dict to string"""
s = msgmap.dict_to_str(dict())
self.assertEqual(len(s), 0)
def test_singleton_str_to_dict(self):
"""single kv-pair to dict"""
d = msgmap.str_to_dict('k:v')
self.assertEqual(len(d), 1)
self.assertEqual(d.get('k'), 'v')
def test_singleton_dict_to_str(self):
"""dict to single kv-pair"""
s = msgmap.dict_to_str({'k':'v'})
self.assertEqual(s, 'k:v')
def test_simple_str_to_dict(self):
""" Simple case of multiple kv-pairs - str to dict"""
d = msgmap.str_to_dict('k1:v1 k2:v2 k3:v3')
self.assertEqual(len(d), 3)
self.assertEqual(d.get('k1'), 'v1')
self.assertEqual(d.get('k2'), 'v2')
self.assertEqual(d.get('k3'), 'v3')
def test_simple_dict_to_str(self):
""" Simple case of multiple kv-pairs - dict to str"""
s = msgmap.dict_to_str({'k1':'v1', 'k2':'v2', 'k3':'v3'})
        # On Python < 3.7, dict key order may not match insertion order...
sorted_s = " ".join(sorted(s.split()))
self.assertEqual(sorted_s, 'k1:v1 k2:v2 k3:v3')
def test_more_complex_mappings(self):
'''
This test creates some crazy key-value pairs with random amounts of
whitespace between them, and verifies that str-to-dict works in both
directions.
'''
keys = [
"!#!@AB89.[],/-+",
"2f0j20j0j",
"13r2ffs,,,,,,",
"-n339ghhss8898v",
"d"
]
values = [
"13r \n\t \r doo bE B",
"waka waka waka",
"What's all the fuss?!",
"(a=3.5, b=4.5, c=11.2)",
"\"some quoted string\""
]
#keys = ['a', 'b', 'c']
#values = ['1', '2', '3']
assert len(keys) == len(values)
kv_dirty = [] # includes random whitespace
kv_clean = [] # does not include any whitespace
pre = ''
for (k, v) in zip(keys, values):
# make sure our keys and values have no edge whitespaces. We
# depend on this so that when we convert from dict to str we
# get exactly what we predict.
assert k == k.strip()
assert v == v.strip()
# This is to ensure that there is a space between each value and
# subsequent key
pk = pre + k
pre = ' '
# note: random_whitespace may return an empty string
kv_dirty.append(random_whitespace())
kv_clean.append(pk)
kv_dirty.append(pk)
kv_dirty.append(random_whitespace())
kv_clean.append(':')
kv_dirty.append(':')
kv_dirty.append(random_whitespace())
kv_clean.append(v)
kv_dirty.append(v)
kv_dirty.append(random_whitespace())
msg_dirty = ''.join(kv_dirty)
msg_clean = ''.join(kv_clean)
# create a dictionary using the 'dirty' message - which has random
# whitespace inserted -- and verify that all the k:v mappings are there
d = msgmap.str_to_dict(msg_dirty)
for (k, v) in zip(keys, values):
self.assertEqual(v, d.get(k))
# now convert back to a string and verify it is what we expect - the
# clean string! This only works in Python 3.7+ because key order is
        # not deterministic in earlier versions
if min_version(3, 7):
output_msg = msgmap.dict_to_str(d)
self.assertEqual(output_msg, msg_clean)
def random_whitespace():
"""Return a 'random' amount of 'random' whitespace characters"""
whitespace = ' \t\t \n\r '
i = random.randrange(len(whitespace))
return whitespace[i:]
def min_version(major, minor):
"""Returns true if the python version is at least {major}'.'{minor}"""
cur = sys.version_info
    return (cur.major, cur.minor) >= (major, minor)
```
{
"source": "Joseph-ML/training-data-analyst",
"score": 2
}
#### File: blogs/beamadvent/day3b.py
```python
import apache_beam as beam
import numpy as np
import argparse, logging
def find_locations(wire):
positions = [(0,(0,0))] # row, (col, steps)
for nav in wire.split(','):
dir = nav[0]
if dir == 'L':
update = (-1, 0)
elif dir == 'R':
update = (1, 0)
elif dir == 'U':
update = (0, 1)
else:
update = (0, -1)
n = int(nav[1:])
for _ in range(n):
row, (col, steps) = positions[-1]
newpos = (row + update[0],
(col + update[1], steps+1))
positions.append(newpos)
return positions[1:] # remove the 0,0
def find_intersection(kv):
row, d = kv
if d['wire1'] and d['wire2']:
wire1 = d['wire1'][0]
wire2 = d['wire2'][0]
for col1, steps1 in wire1:
for col2, steps2 in wire2:
if col1 == col2:
yield (row, col1, steps1+steps2)
def manhattan(rc):
row, col = rc
return abs(row) + abs(col)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Solutions to https://adventofcode.com/2019/ using Apache Beam')
parser.add_argument('--input', required=True, help='Specify input file')
parser.add_argument('--output', required=True, help='Specify output file')
options = parser.parse_args()
runner = 'DirectRunner' # run Beam on local machine, but write outputs to cloud
logging.basicConfig(level=getattr(logging, 'INFO', None))
#wires = ('R75,D30,R83,U83,L12,D49,R71,U7,L72', 'U62,R66,U55,R34,D71,R55,D58,R83')
#wires = ('R98,U47,R26,D63,R33,U87,L62,D20,R33,U53,R51', 'U98,R91,D20,R16,D67,R40,U7,R15,U6,R7')
wires = [line.rstrip() for line in open(options.input)]
print(wires)
opts = beam.pipeline.PipelineOptions(flags=[])
p = beam.Pipeline(runner, options=opts)
locations = {'wire1':
(p | 'create1' >> beam.Create(find_locations(wires[0]))
| 'group1' >> beam.GroupByKey()),
'wire2':
(p | 'create2' >> beam.Create(find_locations(wires[1]))
| 'group2' >> beam.GroupByKey())
}
(locations
| 'cogroup' >> beam.CoGroupByKey()
| 'intersect' >> beam.FlatMap(find_intersection)
| 'signal' >> beam.Map(lambda rcs: rcs[2])
| 'mindist' >> beam.CombineGlobally(beam.transforms.combiners.TopCombineFn(1, reverse=True))
| 'output' >> beam.io.textio.WriteToText(options.output)
)
job = p.run()
if runner == 'DirectRunner':
job.wait_until_finish()
```
#### File: blogs/df_linear_opt/linearopt.py
```python
PROJECT = 'ai-analytics-solutions'
BUCKET = 'ai-analytics-solutions-kfpdemo'
REGION = 'us-central1'
INPUT = 'input.json'
RUNNER = 'DirectRunner'
OUTPUT = 'output.json'
# to try it in streaming mode, write one json message at a time to pub/sub
# and change the input to beam.io.ReadFromPubSub(topic=input_topic)
# and change the output to beam.io.WriteStringsToPubSub(output_topic)
from datetime import datetime
import apache_beam as beam
class Inventory:
# only dye & concentrate can be carried forward in time, not labor or water
def __init__(self, leftover=[]):
if len(leftover) == 4:
self.dye = leftover[0]
self.concentrate = leftover[3]
else:
self.dye = self.concentrate = 0
def update(self, leftover):
self.dye = leftover[0]
self.concentrate = leftover[3]
def linopt(materials, inventory):
import numpy as np
from scipy.optimize import linprog
from scipy.optimize import OptimizeResult
# coefficients of optimization function to *minimize*
c = -1 * np.array([50, 100, 125, 40])
# constraints A_ub @x <= b_ub (could also use a_eq, b_eq, etc.)
A_ub = [
[50, 60, 100, 50],
[5, 25, 10, 5],
[300, 400, 800, 200],
[30, 75, 50, 20]
]
b_ub = [
materials['dye'] + inventory.dye,
materials['labor'],
materials['water'],
materials['concentrate'] + inventory.concentrate
]
bounds = [
(0, np.inf),
(0, 25),
(0, 10),
(0, np.inf)
]
def log_info(status):
print(status.nit, status.fun)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, callback=log_info)
qty = np.floor(np.round(res.x, 1))
leftover = b_ub - np.matmul(A_ub, qty)
print("{} --> {} --> {} + {}".format(materials, b_ub, qty, list(np.round(leftover))))
inventory.update(leftover)
return qty
def get_latest_inventory(pvalue):
return Inventory(beam.pvalue.AsSingleton(lambda x: x[-1])) # last value
def run():
import json
options = beam.options.pipeline_options.PipelineOptions()
setup_options = options.view_as(beam.options.pipeline_options.SetupOptions)
setup_options.save_main_session = True
google_cloud_options = options.view_as(beam.options.pipeline_options.GoogleCloudOptions)
google_cloud_options.project = PROJECT
google_cloud_options.region = REGION
google_cloud_options.job_name = 'linearopt-{}'.format(datetime.now().strftime("%Y%m%d-%H%M%S"))
google_cloud_options.staging_location = 'gs://{}/staging'.format(BUCKET)
google_cloud_options.temp_location = 'gs://{}/temp'.format(BUCKET)
std_options = options.view_as(beam.options.pipeline_options.StandardOptions)
std_options.runner = RUNNER
p = beam.Pipeline(options=options)
inventory = Inventory()
(p
| 'ingest' >> beam.io.ReadFromText(INPUT)
| 'parse' >> beam.Map(lambda x: json.loads(x))
| 'with_ts' >> beam.Map(lambda x: beam.window.TimestampedValue(x, x['timestamp']))
# | 'windowed' >> beam.WindowInto(beam.window.FixedWindows(60)) # 1-minute windows
# | 'materials' >> beam.CombinePerKey(sum)
| 'optimize' >> beam.Map(lambda x: linopt(x, inventory))
| 'output' >> beam.io.WriteToText(OUTPUT)
)
result = p.run()
result.wait_until_finish()
if __name__ == '__main__':
run()
```
#### File: blogs/gcp_forecasting/time_series.py
```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import numpy as np
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar as calendar
def _keep(window, windows):
"""Helper function for creating rolling windows."""
windows.append(window.copy())
return -1. # Float return value required for Pandas apply.
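# Note on the trick above (illustrative): series.rolling(window_size).apply(_keep,
# args=(windows,)) discards the numeric apply result (a column of -1.0) and instead
# accumulates a copy of every full window into `windows`; e.g. a series [1, 2, 3, 4]
# with window_size=2 collects the windows [1, 2], [2, 3] and [3, 4].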
def create_rolling_features_label(series, window_size, pred_offset, pred_n=1):
"""Computes rolling window of the series and creates rolling window of label.
Args:
series: A Pandas Series. The indices are datetimes and the values are
numeric type.
window_size: integer; steps of historical data to use for features.
pred_offset: integer; steps into the future for prediction.
pred_n: integer; window size of label.
Returns:
Pandas dataframe where the index is the datetime predicting at. The columns
beginning with "-" indicate windows N steps before the prediction time.
Examples:
>>> series = pd.Series(np.random.random(6),
index=pd.date_range(start='1/1/2018', end='1/06/2018'))
# Example #1:
>>> series
2018-01-01 0.803948
2018-01-02 0.269849
2018-01-03 0.971984
2018-01-04 0.809718
2018-01-05 0.324454
2018-01-06 0.229447
>>> window_size = 3 # get 3 months of historical data
>>> pred_offset = 1 # predict starting next month
>>> pred_n = 1 # for predicting a single month
>>> utils.create_rolling_features_label(series,
window_size,
pred_offset,
pred_n)
pred_datetime -3_steps -2_steps -1_steps label
2018-01-04 0.803948 0.269849 0.971984 0.809718
2018-01-05 0.269849 0.971984 0.809718 0.324454
2018-01-06 0.971984 0.809718 0.324454 0.229447
# Example #2:
>>> window_size = 3 # get 3 months of historical data
>>> pred_offset = 2 # predict starting 2 months into future
>>> pred_n = 1 # for predicting a single month
>>> utils.create_rolling_features_label(series,
window_size,
pred_offset,
pred_n)
pred_datetime -4_steps -3_steps -2_steps label
2018-01-05 0.803948 0.269849 0.971984 0.324454
2018-01-06 0.269849 0.971984 0.809718 0.229447
# Example #3:
>>> window_size = 3 # get 3 months of historical data
>>> pred_offset = 1 # predict starting next month
>>> pred_n = 2 # for predicting a multiple months
>>> utils.create_rolling_features_label(series,
window_size,
pred_offset,
pred_n)
    pred_datetime  -3_steps  -2_steps  -1_steps  label_0_steps  label_1_steps
    2018-01-04     0.803948  0.269849  0.971984  0.809718       0.324454
    2018-01-05     0.269849  0.971984  0.809718  0.324454       0.229447
"""
if series.isnull().sum() > 0:
raise ValueError('Series must not contain missing values.')
if pred_n < 1:
raise ValueError('pred_n must not be < 1.')
if len(series) < (window_size + pred_offset + pred_n):
raise ValueError('window_size + pred_offset + pred_n must not be greater '
'than series length.')
total_steps = len(series)
def compute_rolling_window(series, window_size):
# Accumulate series into list.
windows = []
series.rolling(window_size)\
.apply(_keep, args=(windows,))
return np.array(windows)
features_start = 0
features_end = total_steps - (pred_offset - 1) - pred_n
historical_windows = compute_rolling_window(
series[features_start:features_end], window_size)
# Get label pred_offset steps into the future.
label_start, label_end = window_size + pred_offset - 1, total_steps
label_series = series[label_start:label_end]
y = compute_rolling_window(label_series, pred_n)
if pred_n == 1:
# TODO(crawles): remove this if statement/label name. It's for backwards
# compatibility.
columns = ['label']
else:
columns = ['label_{}_steps'.format(i) for i in range(pred_n)]
# Make dataframe. Combine features and labels.
label_ix = label_series.index[0:len(label_series) + 1 - pred_n]
df = pd.DataFrame(y, columns=columns, index=label_ix)
df.index.name = 'pred_date'
# Populate dataframe with past sales.
for day in range(window_size - 1, -1, -1):
day_rel_label = pred_offset + window_size - day - 1
df.insert(0, '-{}_steps'.format(day_rel_label), historical_windows[:, day])
return df
def add_aggregate_features(df, time_series_col_names):
"""Compute summary statistic features for every row of dataframe."""
x = df[time_series_col_names]
features = {'mean': x.mean(axis=1)}
features['std'] = x.std(axis=1)
features['min'] = x.min(axis=1)
features['max'] = x.max(axis=1)
percentiles = range(10, 100, 20)
for p in percentiles:
features['{}_per'.format(p)] = np.percentile(x, p, axis=1)
df_features = pd.DataFrame(features, index=x.index)
return df_features.merge(df, left_index=True, right_index=True)
def move_column_to_end(df, column_name):
temp = df[column_name]
df.drop(column_name, axis=1, inplace=True)
df[column_name] = temp
def is_between_dates(dates, start=None, end=None):
"""Return boolean indices indicating if dates occurs between start and end."""
if start is None:
start = pd.to_datetime(0)
if end is None:
end = pd.to_datetime(sys.maxsize)
date_series = pd.Series(pd.to_datetime(dates))
return date_series.between(start, end).values
def _count_holidays(dates, months, weeks):
"""Count number of holidays spanned in prediction windows."""
cal = calendar()
holidays = cal.holidays(start=dates.min(), end=dates.max())
def count_holidays_during_month(date):
beg = date
end = date + pd.DateOffset(months=months, weeks=weeks)
return sum(beg <= h < end for h in holidays)
return pd.Series(dates).apply(count_holidays_during_month)
def _get_day_of_month(x):
"""From a datetime object, extract day of month."""
return int(x.strftime('%d'))
def add_date_features(df, dates, months, weeks, inplace=False):
"""Create features using date that is being predicted on."""
if not inplace:
df = df.copy()
df['doy'] = dates.dayofyear
df['dom'] = dates.map(_get_day_of_month)
df['month'] = dates.month
df['year'] = dates.year
df['n_holidays'] = _count_holidays(dates, months, weeks).values
return df
class Metrics(object):
"""Performance metrics for regressor."""
def __init__(self, y_true, predictions):
self.y_true = y_true
self.predictions = predictions
self.residuals = self.y_true - self.predictions
self.rmse = self.calculate_rmse(self.residuals)
self.mae = self.calculate_mae(self.residuals)
self.malr = self.calculate_malr(self.y_true, self.predictions)
def calculate_rmse(self, residuals):
"""Root mean squared error."""
return np.sqrt(np.mean(np.square(residuals)))
def calculate_mae(self, residuals):
"""Mean absolute error."""
return np.mean(np.abs(residuals))
def calculate_malr(self, y_true, predictions):
"""Mean absolute log ratio."""
return np.mean(np.abs(np.log(1 + predictions) - np.log(1 + y_true)))
def report(self, name=None):
if name is not None:
print_string = '{} results'.format(name)
print(print_string)
print('~' * len(print_string))
print('RMSE: {:2.3f}\nMAE: {:2.3f}\nMALR: {:2.3f}'.format(
self.rmse, self.mae, self.malr))
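# Illustrative use of Metrics (numbers are made up):
#   m = Metrics(np.array([10., 12., 9.]), np.array([11., 11., 10.]))
#   m.report('baseline')  # prints RMSE, MAE and MALR for these predictions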
```
#### File: rl_model_code/trainer/model.py
```python
import json
import os
import gym
import numpy as np
import tensorflow as tf
tf.reset_default_graph()
# task.py arguments.
N_GAMES_PER_UPDATE = None
DISCOUNT_RATE = None
N_HIDDEN = None
LEARNING_RATE = None
# Currently hardcoded.
n_max_steps = 1000
n_iterations = 30
save_iterations = 5
# For cartpole.
env = gym.make('CartPole-v0')
n_inputs = 4
n_outputs = 1
def discount_rewards(rewards, discount_rate):
discounted_rewards = np.zeros(len(rewards))
cumulative_rewards = 0
for step in reversed(range(len(rewards))):
cumulative_rewards = rewards[step] + cumulative_rewards * discount_rate
discounted_rewards[step] = cumulative_rewards
return discounted_rewards
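# Worked example (not executed here): discount_rewards([1., 1., 1.], 0.9) returns
# [2.71, 1.9, 1.0], i.e. 1 + 0.9 + 0.81, then 1 + 0.9, then 1.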
def discount_and_normalize_rewards(all_rewards, discount_rate):
all_discounted_rewards = [
discount_rewards(rewards, discount_rate) for rewards in all_rewards
]
flat_rewards = np.concatenate(all_discounted_rewards)
reward_mean = flat_rewards.mean()
reward_std = flat_rewards.std()
return [(discounted_rewards - reward_mean) / reward_std
for discounted_rewards in all_discounted_rewards]
def hp_directory(model_dir):
"""If running a hyperparam job, create subfolder name with trial ID.
If not running a hyperparam job, just keep original model_dir.
"""
trial_id = json.loads(os.environ.get('TF_CONFIG', '{}')).get('task', {}).get(
'trial', '')
return os.path.join(model_dir, trial_id)
# Play games and train agent. Or evaluate and make gifs.
def run(outdir, train_mode):
# Build network.
initializer = tf.keras.initializers.VarianceScaling()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden = tf.layers.dense(
X, N_HIDDEN, activation=tf.nn.elu, kernel_initializer=initializer)
logits = tf.layers.dense(hidden, n_outputs)
outputs = tf.nn.sigmoid(logits) # probability of action 0 (left)
p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs])
action = tf.multinomial(tf.log(p_left_and_right), num_samples=1)
# Optimizer, gradients.
y = 1. - tf.to_float(action)
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(
labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(LEARNING_RATE)
grads_and_vars = optimizer.compute_gradients(cross_entropy)
gradients = [grad for grad, variable in grads_and_vars]
gradient_placeholders = []
grads_and_vars_feed = []
for grad, variable in grads_and_vars:
gradient_placeholder = tf.placeholder(tf.float32, shape=grad.get_shape())
gradient_placeholders.append(gradient_placeholder)
grads_and_vars_feed.append((gradient_placeholder, variable))
training_op = optimizer.apply_gradients(grads_and_vars_feed)
# For TensorBoard.
episode_reward = tf.placeholder(dtype=tf.float32, shape=[])
tf.summary.scalar('reward', episode_reward)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
if train_mode:
hp_save_dir = hp_directory(outdir)
with tf.Session() as sess:
init.run()
# For TensorBoard.
      print('hp_save_dir: {}'.format(hp_save_dir))
train_writer = tf.summary.FileWriter(hp_save_dir, sess.graph)
for iteration in range(n_iterations):
all_rewards = []
all_gradients = []
for game in range(N_GAMES_PER_UPDATE):
current_rewards = []
current_gradients = []
obs = env.reset()
for _ in range(n_max_steps):
action_val, gradients_val = sess.run(
[action, gradients], feed_dict={X: obs.reshape(1, n_inputs)})
obs, reward, done, info = env.step(action_val[0][0])
current_rewards.append(reward)
current_gradients.append(gradients_val)
if done:
break
all_rewards.append(current_rewards)
all_gradients.append(current_gradients)
avg_reward = np.mean(([np.sum(r) for r in all_rewards]))
        print('\rIteration: {}, Reward: {}'.format(
            iteration, avg_reward), end='')
all_rewards = discount_and_normalize_rewards(
all_rewards, discount_rate=DISCOUNT_RATE)
feed_dict = {}
for var_index, gradient_placeholder in enumerate(gradient_placeholders):
mean_gradients = np.mean([
reward * all_gradients[game_index][step][var_index]
for game_index, rewards in enumerate(all_rewards)
for step, reward in enumerate(rewards)
],
axis=0)
feed_dict[gradient_placeholder] = mean_gradients
sess.run(training_op, feed_dict=feed_dict)
if iteration % save_iterations == 0:
print('Saving model to ', hp_save_dir)
model_file = '{}/my_policy_net_pg.ckpt'.format(hp_save_dir)
saver.save(sess, model_file)
# Also save event files for TB.
merge = tf.summary.merge_all()
summary = sess.run(merge, feed_dict={episode_reward: avg_reward})
train_writer.add_summary(summary, iteration)
obs = env.reset()
steps = []
done = False
else: # Make a gif.
from moviepy.editor import ImageSequenceClip
model_file = '{}/my_policy_net_pg.ckpt'.format(outdir)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver.restore(sess, save_path=model_file)
# Run model.
obs = env.reset()
done = False
steps = []
rewards = []
while not done:
s = env.render('rgb_array')
steps.append(s)
action_val = sess.run(action, feed_dict={X: obs.reshape(1, n_inputs)})
obs, reward, done, info = env.step(action_val[0][0])
rewards.append(reward)
print('Final reward :', np.mean(rewards))
clip = ImageSequenceClip(steps, fps=30)
clip.write_gif('cartpole.gif', fps=30)
```
#### File: mnist/testing/tfjob_test.py
```python
import json
import logging
import os
import pytest
from kubernetes.config import kube_config
from kubernetes import client as k8s_client
from kubeflow.tf_operator import tf_job_client #pylint: disable=no-name-in-module
from kubeflow.testing import util
def test_training(record_xml_attribute, tfjob_name, namespace, trainer_image, num_ps, #pylint: disable=too-many-arguments
num_workers, train_steps, batch_size, learning_rate, model_dir, export_dir):
util.set_pytest_junit(record_xml_attribute, "test_mnist")
util.maybe_activate_service_account()
app_dir = os.path.join(os.path.dirname(__file__), "../training/GCS")
app_dir = os.path.abspath(app_dir)
logging.info("--app_dir not set defaulting to: %s", app_dir)
# TODO (@jinchihe) Using kustomize 2.0.3 to work around below issue:
# https://github.com/kubernetes-sigs/kustomize/issues/1295
kusUrl = 'https://github.com/kubernetes-sigs/kustomize/' \
'releases/download/v2.0.3/kustomize_2.0.3_linux_amd64'
util.run(['wget', '-q', '-O', '/usr/local/bin/kustomize', kusUrl], cwd=app_dir)
util.run(['chmod', 'a+x', '/usr/local/bin/kustomize'], cwd=app_dir)
# TODO (@jinchihe): kubectl needs to be upgraded to 1.14.0 due to the issue below.
# Invalid object doesn't have additional properties ...
kusUrl = 'https://storage.googleapis.com/kubernetes-release/' \
'release/v1.14.0/bin/linux/amd64/kubectl'
util.run(['wget', '-q', '-O', '/usr/local/bin/kubectl', kusUrl], cwd=app_dir)
util.run(['chmod', 'a+x', '/usr/local/bin/kubectl'], cwd=app_dir)
# Configure custom parameters using kustomize
util.run(['kustomize', 'edit', 'set', 'namespace', namespace], cwd=app_dir)
util.run(
['kustomize', 'edit', 'set', 'image', f'training-image={trainer_image}'],
cwd=app_dir,
)
util.run(['../base/definition.sh', '--numPs', num_ps], cwd=app_dir)
util.run(['../base/definition.sh', '--numWorkers', num_workers], cwd=app_dir)
training_config = {
"name": tfjob_name,
"trainSteps": train_steps,
"batchSize": batch_size,
"learningRate": learning_rate,
"modelDir": model_dir,
"exportDir": export_dir,
}
configmap = 'mnist-map-training'
for key, value in training_config.items():
util.run(
[
'kustomize',
'edit',
'add',
'configmap',
configmap,
f'--from-literal={key}={value}',
],
cwd=app_dir,
)
# Create the TFJob.
util.run(['kustomize', 'build', app_dir, '-o', 'generated.yaml'], cwd=app_dir)
util.run(['kubectl', 'apply', '-f', 'generated.yaml'], cwd=app_dir)
logging.info("Created job %s in namespaces %s", tfjob_name, namespace)
kube_config.load_kube_config()
api_client = k8s_client.ApiClient()
# Wait for the job to complete.
logging.info("Waiting for job to finish.")
results = tf_job_client.wait_for_job(
api_client,
namespace,
tfjob_name,
status_callback=tf_job_client.log_status)
logging.info("Final TFJob:\n %s", json.dumps(results, indent=2))
if creation_failures := tf_job_client.get_creation_failures_from_tfjob(
api_client, namespace, results):
logging.warning(creation_failures)
if not tf_job_client.job_succeeded(results):
failure = "Job {0} in namespace {1} in status {2}".format( # pylint: disable=attribute-defined-outside-init
tfjob_name, namespace, results.get("status", {}))
logging.error(failure)
# if the TFJob failed, print out the pod logs for debugging.
pod_names = tf_job_client.get_pod_names(
api_client, namespace, tfjob_name)
logging.info("The Pods name:\n %s", pod_names)
core_api = k8s_client.CoreV1Api(api_client)
for pod in pod_names:
logging.info("Getting logs of Pod %s.", pod)
try:
pod_logs = core_api.read_namespaced_pod_log(pod, namespace)
logging.info("The logs of Pod %s log:\n %s", pod, pod_logs)
except k8s_client.rest.ApiException as e:
logging.info("Exception when calling CoreV1Api->read_namespaced_pod_log: %s\n", e)
return
# We don't delete the jobs. We rely on TTLSecondsAfterFinished
# to delete old jobs. Leaving jobs around should make it
# easier to debug.
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO,
format=('%(levelname)s|%(asctime)s'
'|%(pathname)s|%(lineno)d| %(message)s'),
datefmt='%Y-%m-%dT%H:%M:%S',
)
logging.getLogger().setLevel(logging.INFO)
pytest.main()
``` |
{
"source": "JosephMontoya-TRI/api",
"score": 2
} |
#### File: tests/materials/test_utils.py
```python
from mp_api.routes.materials.utils import formula_to_criteria
def test_formula_to_criteria():
# Regular formula
assert formula_to_criteria("Cr2O3") == {
"composition_reduced.Cr": 2.0,
"composition_reduced.O": 3.0,
"nelements": 2,
}
# Add wildcard
assert formula_to_criteria("Cr2*3") == {
"composition_reduced.Cr": 2.0,
"formula_anonymous": "A2B3",
}
# Anonymous formula
assert formula_to_criteria("A2B3") == {"formula_anonymous": "A2B3"}
# Chemsys
assert formula_to_criteria("Si-O") == {"chemsys": "O-Si"}
assert formula_to_criteria("Si-*") == {"elements": {"$all": ["Si"]}, "nelements": 2}
assert formula_to_criteria("*-*-*") == {"nelements": 3}
``` |
{
"source": "JosephMontoya-TRI/BEEP",
"score": 3
} |
#### File: BEEP/beep/principal_components.py
```python
import numpy as np
import pandas as pd
import json
from monty.json import MSONable
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from monty.serialization import loadfn
class PrincipalComponents(MSONable):
"""
PCA object.
Attributes:
data (pandas.DataFrame): dataframe to be decomposed using PCA.
name (str): name for PCA instance.
n_components (int): number of principal components to use.
explained_variance_threshold (float): desired variance to be explained.
pca (sklearn.decomposition.PCA): pca object.
"""
def __init__(
self,
data,
name="FastCharge",
n_components=15,
explained_variance_threshold=0.90,
):
"""
Args:
data (pandas.DataFrame): dataframe to be decomposed using PCA.
name (str): name for PCA instance.
n_components (int): number of principal components to use.
explained_variance_threshold (float): desired variance to be explained.
"""
self.data = data
self.name = name
self.explained_variance_threshold = explained_variance_threshold
self.n_components = n_components
self.scaler = StandardScaler()
self.pca = PCA(n_components=self.n_components)
self.fit()
self.get_reconstruction_errors()
@classmethod
def from_interpolated_data(
cls,
file_list_json,
name="FastCharge",
qty_to_pca="discharge_capacity",
pivot_column="voltage",
cycles_to_pca=np.linspace(20, 500, 20, dtype="int"),
):
"""
Method to take a list of structure jsons containing interpolated capacity vs voltage,
create a PCA object and perform fitting.
Args:
file_list_json (str): json string or json filename corresponding to a
dictionary with a file_list and validity attribute.
name (str): name for the PCA instance.
qty_to_pca (str): string denoting quantity to pca.
pivot_column (str): string denoting column to pivot on. For PCA of
Q(V), pivot_column would be voltage.
cycles_to_pca (np.array): which cycles per file to use for pca decomposition.
Returns:
beep.principal_components.PrincipalComponents:
"""
return cls(
pivot_data(file_list_json, qty_to_pca, pivot_column, cycles_to_pca), name
)
def as_dict(self):
"""
Method for dictionary/json serialization.
Returns:
dict: object representation as dictionary.
"""
obj = {
"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"pca": self.pca.__dict__,
"scaler": self.scaler.__dict__,
"embeddings": self.embeddings,
"reconstruction_errors": self.reconstruction_errors,
}
return obj
def fit(self):
"""
Method to scale the dataframe, run PCA and evaluate embeddings.
"""
# Center and scale training data
scaled_data = self.scaler.fit_transform(self.data)
self.pca.fit(scaled_data)
# Find minimum number of components to explain threshold amount of variance in the data.
self.min_components = (
np.min(
np.where(
np.cumsum(self.pca.explained_variance_ratio_)
> self.explained_variance_threshold
)
)
+ 1
)
# Eval embeddings of training data
self.embeddings = self.pca.transform(scaled_data)
self.white_embeddings = (
self.embeddings - np.mean(self.embeddings, axis=0)
) / np.std(self.embeddings, axis=0)
self.reconstructions = self.scaler.inverse_transform(
self.pca.inverse_transform(self.embeddings)
)
return
def get_pca_embeddings(self, data):
"""
Method to compute PCA embeddings on new data using the trained PCA fit.
Args:
data (pandas.DataFrame): data frame
Returns:
numpy.array: transformed to embedded space, shape (n_samples, n_components)
"""
return self.pca.transform(self.scaler.transform(data))
def get_pca_reconstruction(self, embeddings):
"""
Method to inverse transform PCA embeddings to reconstruct data
Returns:
numpy.array, shape [n_samples, n_features]. Transformed array.
"""
return self.scaler.inverse_transform(self.pca.inverse_transform(embeddings))
def get_pca_decomposition_outliers(self, data, upper_quantile=95, lower_quantile=5):
"""
Outlier detection using PCA decomposition.
Args:
data (pandas.DataFrame): dataframe for which outlier detection needs
to be performed
upper_quantile (int): upper quantile for outlier detection
lower_quantile (int): lower quantile for outlier detection
Returns:
numpy.array: distances to center of PCA set
numpy.array: boolean vector of same length as data
"""
# Compute center of the PCA training set
center = np.median(self.white_embeddings, axis=0)
# Define upper and lower quantiles for detecting outliers
q_upper, q_lower = np.percentile(
self.white_embeddings, [upper_quantile, lower_quantile], axis=0
)
# Transform new data
embeddings = self.pca.transform(self.scaler.transform(data))
# Compute centered embeddings and distances
white_embeddings = (embeddings - np.mean(self.embeddings, axis=0)) / np.std(
self.embeddings, axis=0
)
distances = np.linalg.norm(white_embeddings - center, axis=1)
# Flag outliers if even one of the principal components falls outside the inter-quantile range
outlier_list = (white_embeddings > q_upper).any(axis=1) | (
white_embeddings < q_lower
).any(axis=1)
return distances, outlier_list
def get_reconstruction_errors(self):
"""
Method to compute reconstruction errors of training dataset
"""
self.reconstruction_errors = np.mean(
np.abs(self.reconstructions - self.data), axis=1
)
return
def get_reconstruction_error_outliers(self, data, threshold=1.5):
"""
Get outliers based on PCA reconstruction errors.
Args:
data (pandas.DataFrame): dataframe for which outlier detection needs to be performed.
threshold (float): threshold for outlier detection
Returns:
numpy.array: vector of same length as data.
"""
embeddings = self.pca.transform(self.scaler.transform(data))
reconstructions = self.scaler.inverse_transform(
self.pca.inverse_transform(embeddings)
)
reconstruction_errors = np.mean(np.abs(reconstructions - data), axis=1)
return (
reconstruction_errors,
reconstruction_errors > max(self.reconstruction_errors) * threshold,
)
def pivot_data(
file_list_json,
qty_to_pca="discharge_capacity",
pivot_column="voltage",
cycles_to_pca=np.linspace(10, 100, 10, dtype="int"),
):
"""
Method to take a list of structure jsons, construct a dataframe to PCA using
a pivoting column.
Args:
file_list_json (str): json string or json filename corresponding to a
dictionary with a file_list and validity attribute, if this string
ends with ".json", a json file is assumed.
qty_to_pca (str): string denoting quantity to pca.
pivot_column (str): string denoting column to pivot on. For PCA of Q(V),
pivot_column would be voltage.
cycles_to_pca (np.array): which cycles per file to use for pca
decomposition.
Returns:
pandas.DataFrame: pandas dataframe to PCA.
"""
if file_list_json.endswith(".json"):
file_list_data = loadfn(file_list_json)
else:
file_list_data = json.loads(file_list_json)
file_list = file_list_data["file_list"]
df_to_pca = pd.DataFrame()
for file in file_list:
processed_run = loadfn(file)
df = processed_run.cycles_interpolated
df = df[df.cycle_index.isin(cycles_to_pca)]
df_to_pca = df_to_pca.append(
df.pivot(index="cycle_index", columns=pivot_column, values=qty_to_pca),
ignore_index=True,
)
return df_to_pca
```
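For orientation, here is a brief usage sketch of the `PrincipalComponents` class defined above. The random dataframes stand in for real pivoted Q(V) data, and the import path mirrors the file location (`beep/principal_components.py`); both are assumptions rather than part of the module itself.
```python
import numpy as np
import pandas as pd
from beep.principal_components import PrincipalComponents  # path assumed from file layout

# Synthetic stand-in for pivoted data: rows are cycles, columns are voltage points.
rng = np.random.RandomState(0)
train_df = pd.DataFrame(rng.normal(size=(100, 30)))
new_df = pd.DataFrame(rng.normal(size=(10, 30)))

# Fitting happens in the constructor (scaling, PCA, embeddings, reconstruction errors).
pca = PrincipalComponents(train_df, name="demo", n_components=15)
print("components needed to reach the variance threshold:", pca.min_components)

# Screen new data against the trained decomposition.
distances, is_outlier = pca.get_pca_decomposition_outliers(new_df)
errors, high_error = pca.get_reconstruction_error_outliers(new_df, threshold=1.5)
```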
#### File: beep/tests/test_splice.py
```python
import os
import unittest
import numpy as np
from beep.utils import MaccorSplice
TEST_DIR = os.path.dirname(__file__)
TEST_FILE_DIR = os.path.join(TEST_DIR, "test_files")
class SpliceTest(unittest.TestCase):
def setUp(self):
self.arbin_file = os.path.join(TEST_FILE_DIR, "FastCharge_000000_CH29.csv")
self.filename_part_1 = os.path.join(TEST_FILE_DIR, "xTESLADIAG_000038.078")
self.filename_part_2 = os.path.join(TEST_FILE_DIR, "xTESLADIAG_000038con.078")
self.output = os.path.join(TEST_FILE_DIR, "xTESLADIAG_000038joined.078")
self.test = os.path.join(TEST_FILE_DIR, "xTESLADIAG_000038test.078")
def test_maccor_read_write(self):
splicer = MaccorSplice(self.filename_part_1, self.filename_part_2, self.output)
meta_1, data_1 = splicer.read_maccor_file(self.filename_part_1)
splicer.write_maccor_file(meta_1, data_1, self.test)
meta_test, data_test = splicer.read_maccor_file(self.test)
assert meta_1 == meta_test
assert np.allclose(data_1["Volts"].to_numpy(), data_test["Volts"].to_numpy())
assert np.allclose(data_1["Amps"].to_numpy(), data_test["Amps"].to_numpy())
assert np.allclose(
data_1["Test (Sec)"].to_numpy(), data_test["Test (Sec)"].to_numpy()
)
def test_column_increment(self):
splicer = MaccorSplice(self.filename_part_1, self.filename_part_2, self.output)
meta_1, data_1 = splicer.read_maccor_file(self.filename_part_1)
meta_2, data_2 = splicer.read_maccor_file(self.filename_part_2)
data_1, data_2 = splicer.column_increment(data_1, data_2)
assert data_1["Rec#"].max() < data_2["Rec#"].min()
``` |
{
"source": "JosephMontoya-TRI/CAMD",
"score": 2
} |
#### File: CAMD/camd/analysis.py
```python
import abc
import warnings
import json
import pickle
import os
import numpy as np
import itertools
import pandas as pd
from camd import tqdm
from qmpy.analysis.thermodynamics.phase import Phase, PhaseData
from qmpy.analysis.thermodynamics.space import PhaseSpace
from multiprocessing import Pool, cpu_count
from pymatgen import Composition
from pymatgen.entries.computed_entries import ComputedEntry
from pymatgen.analysis.phase_diagram import (
PhaseDiagram,
PDPlotter,
tet_coord,
triangular_coord,
)
from pymatgen.analysis.structure_matcher import StructureMatcher
from pymatgen import Structure
from camd.utils.data import cache_matrio_data, \
filter_dataframe_by_composition, ELEMENTS
from camd import CAMD_CACHE
from monty.os import cd
from monty.serialization import loadfn
class AnalyzerBase(abc.ABC):
"""
The AnalyzerBase class defines the contract
for post-processing experiments and creating
a new seed_data object for the agent.
"""
def __init__(self):
"""
Initialize an Analyzer. Should contain all necessary
state variables for consistent analysis methods
"""
self._initial_seed_indices = []
@abc.abstractmethod
def analyze(self, new_experimental_results, seed_data):
"""
Analyze method, performs some operation on new
experimental results in order to place them
in the context of the seed data
Args:
new_experimental_results (DataFrame): new data
to be added to the seed
seed_data (DataFrame): current seed data from
campaign
Returns:
(DataFrame): dataframe corresponding to the summary
of the previous set of experiments
(DataFrame): dataframe corresponding to the new
seed data
"""
@property
def initial_seed_indices(self):
"""
Returns: The list of indices contained in the initial seed. Intended to be set by the Campaign.
"""
return self._initial_seed_indices
class GenericMaxAnalyzer(AnalyzerBase):
"""
Generic analyzer that checks new data with a target column against a threshold to be crossed.
"""
def __init__(self, threshold=0):
"""
Args:
threshold (int or float): target values of new acquisitions are compared against this
threshold to keep track of performance in the sequential framework.
"""
self.threshold = threshold
self.score = []
self.best_examples = []
super(GenericMaxAnalyzer, self).__init__()
def analyze(self, new_experimental_results, seed_data):
"""
Analyzes the results of an experiment by finding
the best examples and their scores
Args:
new_experimental_results (pandas.DataFrame): new experimental
results to be analyzed
seed_data (pandas.DataFrame): past data to include in analysis
Returns:
(pandas.DataFrame): one-row dataframe summarizing past results
(pandas.DataFrame): new seed data to be passed to agent
"""
new_seed = seed_data.append(new_experimental_results)
self.score.append(np.sum(new_seed["target"] > self.threshold))
self.best_examples.append(new_seed.loc[new_seed.target.idxmax()])
new_discovery = (
[self.score[-1] - self.score[-2]]
if len(self.score) > 1
else [self.score[-1]]
)
summary = pd.DataFrame(
{
"score": [self.score[-1]],
"best_example": [self.best_examples[-1]],
"new_discovery": new_discovery,
}
)
return summary, new_seed
class AnalyzeStructures(AnalyzerBase):
"""
This class tests if a list of structures are unique. Typically
used for comparing hypothetical structures (post-DFT relaxation)
and those from ICSD.
"""
def __init__(self, structures=None, hull_distance=None):
"""
Analyzer for structural analysis of jobs
Args:
structures ([Structure]): list of a-priori structures to
compare against
hull_distance ([float]): hull_distance by which to filter
results
"""
self.structures = structures if structures else []
self.structure_ids = None
self.unique_structures = None
self.groups = None
self.energies = None
self.against_icsd = False
self.structure_is_unique = None
self.hull_distance = hull_distance
super(AnalyzeStructures, self).__init__()
def analyze(
self, structures=None, structure_ids=None, against_icsd=False, energies=None
):
"""
Exactly one encounter of a given structure is labeled True; its
remaining matching structures are labeled False.
Args:
structures (list): a list of structures to be compared.
structure_ids (list): uids of structures, optional.
against_icsd (bool): whether a comparison to icsd is also made.
energies (list): list of energies (per atom) corresponding
to structures. If given, the lowest energy instance of a
given structure will be returned as the unique one. Otherwise,
there is no such guarantee. (optional)
Returns:
([bool]) list of bools corresponding to the given list of
structures corresponding to uniqueness
"""
self.structures = structures
self.structure_ids = structure_ids
self.against_icsd = against_icsd
self.energies = energies
smatch = StructureMatcher()
self.groups = smatch.group_structures(structures)
self.structure_is_unique = []
if self.energies:
for i in range(len(self.groups)):
self.groups[i] = [
x
for _, x in sorted(
zip(
[
self.energies[self.structures.index(s)]
for s in self.groups[i]
],
self.groups[i],
)
)
]
self._unique_structures = [i[0] for i in self.groups]
for s in structures:
if s in self._unique_structures:
self.structure_is_unique.append(True)
else:
self.structure_is_unique.append(False)
self._not_duplicate = self.structure_is_unique
if self.against_icsd:
structure_file = "oqmd1.2_exp_based_entries_structures.json"
cache_matrio_data(structure_file)
with open(os.path.join(CAMD_CACHE, structure_file), "r") as f:
icsd_structures = json.load(f)
chemsys = set()
for s in self._unique_structures:
chemsys = chemsys.union(set(s.composition.as_dict().keys()))
self.icsd_structs_inchemsys = []
for k, v in icsd_structures.items():
try:
s = Structure.from_dict(v)
elems = set(s.composition.as_dict().keys())
if elems == chemsys:
self.icsd_structs_inchemsys.append(s)
# TODO: can we make this exception more specific,
# do we have an example where this fails?
except Exception as e:
warnings.warn("Unable to process structure {}".format(k))
warnings.warn("Error: {}".format(e))
self.matching_icsd_strs = []
for i in range(len(structures)):
if self.structure_is_unique[i]:
match = None
for s2 in self.icsd_structs_inchemsys:
if smatch.fit(self.structures[i], s2):
match = s2
break
self.matching_icsd_strs.append(
match
) # store the matching ICSD structures.
else:
self.matching_icsd_strs.append(None)
# Flip matching bools, and create a filter
self._icsd_filter = [not i for i in self.matching_icsd_strs]
self.structure_is_unique = (
np.array(self.structure_is_unique) * np.array(self._icsd_filter)
).tolist()
self.unique_structures = list(
itertools.compress(self.structures, self.structure_is_unique)
)
else:
self.unique_structures = self._unique_structures
# We store the final list of unique structures as unique_structures.
# We return a corresponding list of bool to the initial structure
# list provided.
return self.structure_is_unique
def analyze_vaspqmpy_jobs(self, jobs, against_icsd=False, use_energies=False):
"""
Useful for analysis integrated as part of a campaign itself
Args:
jobs (pd.DataFrame): dataframe of DFT experiment results
against_icsd (bool): whether to validate against ICSD or not
use_energies (bool): whether to use energies to select the
lowest-energy instance of each duplicate structure
Returns:
([bool]): list of bools corresponding to the uniqueness of
the structures from successfully completed jobs
"""
self.structure_ids = []
self.structures = []
self.energies = []
for j, r in jobs.iterrows():
if r["status"] == "SUCCEEDED":
rdict = r['result'].as_dict()
self.structures.append(r['result'].final_structure)
self.structure_ids.append(j)
self.energies.append(rdict["output"]["final_energy_per_atom"])
if use_energies:
return self.analyze(
self.structures, self.structure_ids, against_icsd, self.energies
)
else:
return self.analyze(self.structures, self.structure_ids, against_icsd)
class StabilityAnalyzer(AnalyzerBase):
"""
Analyzer object for stability campaigns
"""
def __init__(self, hull_distance=0.05, parallel=cpu_count(), entire_space=False,
plot=True):
"""
The Stability Analyzer is intended to analyze DFT-result
data in the context of a global compositional seed in
order to determine phase stability.
Args:
hull_distance (float): distance above hull below
which to deem a material "stable"
parallel (bool): flag for whether or not
multiprocessing is to be used
# TODO: is there ever a case where you
# would want to do the entire space?
entire_space (bool): flag for whether to analyze
entire space of results or just new chemical
space
plot (bool): whether to generate plot as part of
standard analyze sequence
"""
self.hull_distance = hull_distance
self.parallel = parallel
self.entire_space = entire_space
self.space = None
self.plot = plot
super(StabilityAnalyzer, self).__init__()
@staticmethod
def get_phase_space(dataframe):
"""
Gets PhaseSpace object associated with dataframe
Args:
dataframe (DataFrame): dataframe with columns "Composition"
containing formula and "delta_e" containing
formation energy per atom
"""
phases = []
for data in dataframe.iterrows():
phases.append(
Phase(
data[1]["Composition"],
energy=data[1]["delta_e"],
per_atom=True,
description=data[0],
)
)
for el in ELEMENTS:
phases.append(Phase(el, 0.0, per_atom=True))
pd = PhaseData()
pd.add_phases(phases)
space = PhaseSpaceAL(bounds=ELEMENTS, data=pd)
return space
def analyze(self, new_experimental_results, seed_data):
"""
Args:
new_experimental_results (DataFrame): new experimental
results to be added to the seed
seed_data (DataFrame): seed to be augmented via
the new_experimental_results
Returns:
(DataFrame): summary of the process, i. e. of
the increment or experimental results
(DataFrame): augmented seed data, i. e. "new"
seed data according to the experimental results
"""
# Check for new results
new_comp = new_experimental_results['Composition'].sum()
new_experimental_results = new_experimental_results.dropna(subset=['delta_e'])
new_seed = seed_data.append(new_experimental_results)
# Aggregate seed_data and new experimental results
include_columns = ["Composition", "delta_e"]
filtered = new_seed[include_columns].drop_duplicates(keep="last").dropna()
if not self.entire_space:
# Constrains the phase space to that of the target compounds.
# More efficient when searching in a specified chemistry,
# less efficient if larger spaces are without specified chemistry.
filtered = filter_dataframe_by_composition(filtered, new_comp)
space = self.get_phase_space(filtered)
new_phases = [p for p in space.phases if p.description in filtered.index]
space.compute_stabilities(phases=new_phases, ncpus=self.parallel)
# Compute new stabilities and update new seed, note that pandas will complain
# if the index is not explicit due to multiple types (e. g. ints for OQMD
# and strs for prototypes)
new_data = pd.DataFrame(
{"stability": [phase.stability for phase in new_phases]},
index=[phase.description for phase in new_phases]
)
new_data["is_stable"] = new_data["stability"] <= self.hull_distance
# TODO: This is implicitly adding "stability", and "is_stable" columns
# but could be handled more gracefully
if "stability" not in new_seed.columns:
new_seed = pd.concat([new_seed, new_data], axis=1, sort=False)
else:
new_seed.update(new_data)
# Write hull figure to disk
if self.plot:
self.plot_hull(filtered, new_experimental_results.index, filename="hull.png")
# Compute summary metrics
summary = self.get_summary(
new_seed,
new_experimental_results.index,
initial_seed_indices=self.initial_seed_indices,
)
# Drop excess columns from experiment
new_seed = new_seed.drop([
'path', 'status', 'start_time', 'jobId', 'jobName', 'jobArn',
'result', 'error', 'elapsed_time'
], axis="columns", errors="ignore")
return summary, new_seed
@staticmethod
def get_summary(new_seed, new_ids, initial_seed_indices=None):
"""
Gets summary row for given experimental results after
preliminary stability analysis. This is not meant
to provide the basis for a generic summary method
and is particular to the StabilityAnalyzer.
Args:
new_seed (DataFrame): dataframe corresponding to
new processed seed
new_ids ([]): list of index values for those
experiments that are "new"
initial_seed_indices ([]): indices of the initial
seed
Returns:
(DataFrame): dataframe summarizing processed
experimental data including values for
how many materials were discovered
"""
# TODO: Right now analyzers don't know anything about the history
# of experiments, so can be difficult to determine marginal
# value of a given experimental run
processed_new = new_seed.loc[new_ids]
initial_seed_indices = initial_seed_indices if initial_seed_indices else []
total_discovery = new_seed.loc[
~new_seed.index.isin(initial_seed_indices)
].is_stable.sum()
return pd.DataFrame(
{
"new_candidates": [len(processed_new)],
"new_discovery": [processed_new.is_stable.sum()],
"total_discovery": [total_discovery],
}
)
def plot_hull(self, df, new_result_ids, filename=None, finalize=False):
"""
Generate plots of convex hulls for each of the runs
Args:
df (DataFrame): dataframe with formation energies and formulas
new_result_ids ([]): list of new result ids (i. e. indexes
in the updated dataframe)
filename (str): filename to output, if None, no file output
is produced
finalize (bool): flag indicating whether to include all new results
Returns:
(pyplot): plotter instance
"""
# Generate all entries
total_comp = Composition(df['Composition'].sum())
if len(total_comp) > 4:
warnings.warn("Number of elements too high for phase diagram plotting")
return None
filtered = filter_dataframe_by_composition(df, total_comp)
filtered = filtered[['delta_e', 'Composition']]
filtered = filtered.dropna()
# Create computed entry column with un-normalized energies
filtered["entry"] = [
ComputedEntry(
Composition(row["Composition"]),
row["delta_e"] * Composition(row["Composition"]).num_atoms,
entry_id=index,
)
for index, row in filtered.iterrows()
]
ids_prior_to_run = list(set(filtered.index) - set(new_result_ids))
if not ids_prior_to_run:
warnings.warn("No prior data, prior phase diagram cannot be constructed")
return None
# Create phase diagram based on everything prior to current run
entries = filtered.loc[ids_prior_to_run]["entry"].dropna()
# Filter for nans by checking if it's a computed entry
pg_elements = sorted(total_comp.keys())
pd = PhaseDiagram(entries, elements=pg_elements)
plotkwargs = {
"markerfacecolor": "white",
"markersize": 7,
"linewidth": 2,
}
if finalize:
plotkwargs.update({"linestyle": "--"})
else:
plotkwargs.update({"linestyle": "-"})
plotter = PDPlotter(pd, backend='matplotlib', **plotkwargs)
getplotkwargs = {"label_stable": False} if finalize else {}
plot = plotter.get_plot(**getplotkwargs)
# Get valid results
valid_results = [
new_result_id
for new_result_id in new_result_ids
if new_result_id in filtered.index
]
if finalize:
# If finalize, we'll reset pd to all entries at this point to
# measure stabilities wrt. the ultimate hull.
pd = PhaseDiagram(filtered["entry"].values, elements=pg_elements)
plotter = PDPlotter(
pd, backend="matplotlib", **{"markersize": 0, "linestyle": "-", "linewidth": 2}
)
plot = plotter.get_plot(plt=plot)
for entry in filtered["entry"][valid_results]:
decomp, e_hull = pd.get_decomp_and_e_above_hull(entry, allow_negative=True)
if e_hull < self.hull_distance:
color = "g"
marker = "o"
markeredgewidth = 1
else:
color = "r"
marker = "x"
markeredgewidth = 1
# Get coords
coords = [entry.composition.get_atomic_fraction(el) for el in pd.elements][1:]
if pd.dim == 2:
coords = coords + [pd.get_form_energy_per_atom(entry)]
if pd.dim == 3:
coords = triangular_coord(coords)
elif pd.dim == 4:
coords = tet_coord(coords)
plot.plot(
*coords,
marker=marker,
markeredgecolor=color,
markerfacecolor="None",
markersize=11,
markeredgewidth=markeredgewidth
)
if filename is not None:
plot.savefig(filename, dpi=70)
plot.close()
def finalize(self, path="."):
"""
Post-processing a dft campaign
"""
update_run_w_structure(
path, hull_distance=self.hull_distance, parallel=self.parallel
)
class PhaseSpaceAL(PhaseSpace):
"""
Modified qmpy.PhaseSpace for GCLP based stability computations
TODO: basic multithread or Gurobi for gclp
"""
def compute_stabilities(self, phases, ncpus=cpu_count()):
"""
Calculate the stability for every Phase.
Args:
phases ([Phase]): list of Phases for which to compute
stability
ncpus (int): number of cpus to use, i. e. processes
to use
Returns:
([float]) stability values for all phases
"""
self.update_phase_dict(ncpus=ncpus)
if ncpus > 1:
with Pool(ncpus) as pool:
stabilities = pool.map(self.compute_stability, phases)
# Pool doesn't always modify the phases directly,
# so assign stability after
for phase, stability in zip(phases, stabilities):
phase.stability = stability
else:
stabilities = [self.compute_stability(phase) for phase in tqdm(phases)]
return stabilities
def compute_stability(self, phase):
"""
Computes stability for a given phase in the phase
diagram
Args:
phase (Phase): phase for which to compute
stability
Returns:
(float): stability of given phase
"""
# If the phase name (formula) is in the set of minimal
# phases by formula, compute it relative to that minimal phase
if phase.name in self.phase_dict:
phase.stability = (
phase.energy
- self.phase_dict[phase.name].energy
+ self.phase_dict[phase.name].stability
)
else:
phase.stability = self._compute_stability_gclp(phase)
return phase.stability
def _compute_stability_gclp(self, phase):
"""
Computes stability using gclp. The function
is still a little unstable, so we use a blank
try-except to let it do what it can.
Args:
phase (Phase): phase for which to compute
stability using gclp
Returns:
(float): stability
"""
try:
phase.stability = phase.energy - self.gclp(phase.unit_comp)[0]
# TODO: do we have an example where this fails? Can we provide
# a more concrete exception?
except Exception as e:
print(phase, "stability determination failed, error {}".format(e))
phase.stability = np.nan
return phase.stability
def update_phase_dict(self, ncpus=cpu_count()):
"""
Function to update the phase dict associated with
the PhaseSpaceAL
Args:
ncpus (int): number of processes to use
Returns:
(None)
"""
uncomputed_phases = [
phase for phase in self.phase_dict.values() if phase.stability is None
]
if ncpus > 1:
# Compute stabilities, then update, pool doesn't modify attribute
with Pool(ncpus) as pool:
stabilities = pool.map(self._compute_stability_gclp, uncomputed_phases)
for phase, stability in zip(uncomputed_phases, stabilities):
phase.stability = stability
else:
for phase in uncomputed_phases:
self._compute_stability_gclp(phase)
assert (
len(
[phase for phase in self.phase_dict.values() if phase.stability is None]
)
== 0
)
def update_run_w_structure(folder, hull_distance=0.2, parallel=True):
"""
Updates a campaign grouped in directories with structure analysis
"""
with cd(folder):
required_files = ["seed_data.pickle"]
if os.path.isfile("error.json"):
error = loadfn("error.json")
print("{} ERROR: {}".format(folder, error))
if not all([os.path.isfile(fn) for fn in required_files]):
print("{} ERROR: no seed data, no analysis to be done")
else:
with open("seed_data.pickle", "rb") as f:
df = pickle.load(f)
with open("experiment.pickle", "rb") as f:
experiment = pickle.load(f)
# Hack to update agg_history
experiment.update_current_data(None)
all_submitted, all_results = experiment.agg_history
old_results = df.drop(all_results.index, errors='ignore')
new_results = df.drop(old_results.index)
st_a = StabilityAnalyzer(
hull_distance=hull_distance, parallel=parallel, entire_space=True, plot=False)
summary, new_seed = st_a.analyze(new_results, old_results)
# Having calculated stabilities again, we plot the overall hull.
# Filter by chemsys
new_comp = new_results['Composition'].sum()
filtered = filter_dataframe_by_composition(new_seed, new_comp)
st_a.plot_hull(
filtered,
all_submitted.index,
filename="hull_finalized.png",
finalize=True,
)
stable_discovered = new_seed[new_seed["is_stable"].fillna(False)]
# Analyze structures if present in experiment
if "structure" in all_results.columns:
s_a = AnalyzeStructures()
s_a.analyze_vaspqmpy_jobs(all_results, against_icsd=True, use_energies=True)
unique_s_dict = {}
for i in range(len(s_a.structures)):
if s_a.structure_is_unique[i] and (
s_a.structure_ids[i] in stable_discovered
):
unique_s_dict[s_a.structure_ids[i]] = s_a.structures[i]
with open("discovered_unique_structures.json", "w") as f:
json.dump(dict([(k, s.as_dict()) for k, s in unique_s_dict.items()]), f)
with open("structure_report.log", "w") as f:
f.write("consumed discovery unique_discovery duplicate in_icsd \n")
f.write(
str(len(all_submitted))
+ " "
+ str(len(stable_discovered))
+ " "
+ str(len(unique_s_dict))
+ " "
+ str(len(s_a.structures) - sum(s_a._not_duplicate))
+ " "
+ str(sum([not i for i in s_a._icsd_filter]))
)
```
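As a quick illustration of the analyzer contract above, the sketch below runs `GenericMaxAnalyzer` on toy dataframes. It assumes a pandas version where `DataFrame.append` is still available (as the module itself does); the `target` column name is the one the analyzer expects, and the toy values are not from any real campaign.
```python
import pandas as pd
from camd.analysis import GenericMaxAnalyzer  # path assumed from file layout

# Toy seed data and new "experimental" results with a target column.
seed = pd.DataFrame({"target": [0.1, 0.5, 1.2]}, index=["a", "b", "c"])
new_results = pd.DataFrame({"target": [2.3, 0.05]}, index=["d", "e"])

analyzer = GenericMaxAnalyzer(threshold=1.0)
summary, new_seed = analyzer.analyze(new_results, seed)
print(summary[["score", "new_discovery"]])  # 2 entries exceed the threshold on the first call
```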
#### File: CAMD/camd/domain.py
```python
import pandas as pd
import abc
import warnings
import itertools
import numpy as np
from protosearch.build_bulk.oqmd_interface import OqmdInterface
from pymatgen.io.ase import AseAtomsAdaptor
from pymatgen import Composition, Element
from matminer.featurizers.base import MultipleFeaturizer
from matminer.featurizers.composition import (
ElementProperty,
Stoichiometry,
ValenceOrbital,
IonProperty,
)
from matminer.featurizers.structure import (
SiteStatsFingerprint,
StructuralHeterogeneity,
ChemicalOrdering,
StructureComposition,
MaximumPackingEfficiency,
)
class DomainBase(abc.ABC):
"""
Domains combine generation and featurization and prepare
the search space for CAMD Loop.
"""
@abc.abstractmethod
def candidates(self):
"""
Primary method for every Domain to provide candidates.
Returns:
(pandas.DataFrame): features for generated hypothetical
structures. The Index of dataframe should be the
unique ids for the structures.
"""
pass
@property
@abc.abstractmethod
def bounds(self):
"""
Returns:
list: names of dimensions of the search space.
"""
pass
@abc.abstractmethod
def sample(self, num_samples):
"""
Abstract method for sampling from created domain
Args:
num_samples:
Returns:
"""
pass
@property
def bounds_string(self):
"""
Property representation of search space bounds
Returns:
(str): representation of search space bounds, e.g.
"Ir-Fe-O" or "x1-x2-x3"
"""
return "-".join(self.bounds)
class StructureDomain(DomainBase):
"""
Provides machine learning ready candidate domains (search spaces) for
hypothetical structures. If scanning an entire system, use the
StructureDomain.from_bounds method. If scanning for formula(s),
provide a list of formulas directly to StructureDomain.
Once the StructureDomain is initialized, the method candidates returns
a fully-featurized hypothetical materials set subject to n_max_atoms.
"""
def __init__(self, formulas, n_max_atoms=None):
"""
Args:
formulas ([str]): list of chemical formulas to create new
material candidates.
n_max_atoms (int): number of max atoms
"""
self.formulas = formulas
self.n_max_atoms = n_max_atoms
self.features = None
self._hypo_structures = None
@classmethod
def from_bounds(
cls,
bounds,
n_max_atoms=None,
charge_balanced=True,
create_subsystems=False,
**kwargs
):
"""
Convenience constructor that delivers an ML-ready domain
from defined chemical boundaries.
Args:
bounds ([str]): list of element strings corresponding to bounds of
the composition space, e. g. ['Fe', 'O', 'N']
n_max_atoms (int): maximum number of atoms in the generated
formulae
charge_balanced (bool): whether to filter generated formulae by
charge balancing the respective elements according to allowed
oxidation states
create_subsystems (bool): TODO - what is this?
**kwargs: arguments to pass to formula creator
"""
formulas = create_formulas(
bounds,
charge_balanced=charge_balanced,
create_subsystems=create_subsystems,
**kwargs
)
print("Generated chemical formulas: {}".format(formulas))
return cls(formulas, n_max_atoms)
@property
def bounds(self):
"""
Method to get bounds from StructureDomain
Returns:
([]): list of dimensions in search space
"""
bounds = set()
for formula in self.formulas:
bounds = bounds.union(Composition(formula).as_dict().keys())
return bounds
def get_structures(self):
"""
Method to call protosearch structure generation
"""
if self.formulas:
print("Generating hypothetical structures...")
self._hypo_structures = get_structures_from_protosearch(self.formulas)
print(
"Generated {} hypothetical structures".format(len(self.hypo_structures))
)
else:
raise ValueError("Need formulas to create structures")
@property
def hypo_structures(self):
"""
Returns (dataframe): Hypothetical structures generated by
protosearch, filtered by n_max_atoms
"""
if self._hypo_structures is None:
self.get_structures()
if self.n_max_atoms:
n_max_filter = [
i.num_sites <= self.n_max_atoms
for i in self._hypo_structures["structure"]
]
if self._hypo_structures is not None:
return self._hypo_structures[n_max_filter]
else:
return None
else:
return self._hypo_structures
@property
def hypo_structures_dict(self):
"""
Returns:
(dict): Hypothetical structures generated by
protosearch, filtered by n_max_atoms
"""
return self.hypo_structures["structure"].to_dict()
@property
def compositions(self):
"""
Returns:
(list): Compositions of hypothetical structures generated.
"""
if self.hypo_structures is not None:
return [s.composition for s in self.hypo_structures]
else:
warnings.warn("No stuctures available.")
return []
@property
def formulas_with_valid_structures(self):
"""
Quick method to filter formulas with valid structures
Returns:
([str]): list of formulas with corresponding valid
structures
"""
# Note the redundancy here is for pandas to work
if self.valid_structures is not None:
return [s.composition.formula for s in self.valid_structures["structure"]]
else:
warnings.warn("No structures available yet.")
return []
def featurize_structures(self, featurizer=None, **kwargs):
"""
Featurizes the hypothetical structures available from
hypo_structures method. Hypothetical structures for which
featurization fails are removed and valid structures are
made available as valid_structures
Args:
featurizer (Featurizer): A MatMiner Featurizer.
Defaults to MultipleFeaturizer with PRB Ward
Voronoi descriptors.
**kwargs (dict): kwargs passed to featurize_many
method of featurizer.
Returns:
(pandas.DataFrame): features
"""
# Note the redundancy here is for pandas to work
if self.hypo_structures is None:
warnings.warn("No structures available. Generating structures.")
self.get_structures()
print("Generating features")
featurizer = (
featurizer
if featurizer
else MultipleFeaturizer(
[
SiteStatsFingerprint.from_preset(
"CoordinationNumber_ward-prb-2017"
),
StructuralHeterogeneity(),
ChemicalOrdering(),
MaximumPackingEfficiency(),
SiteStatsFingerprint.from_preset(
"LocalPropertyDifference_ward-prb-2017"
),
StructureComposition(Stoichiometry()),
StructureComposition(ElementProperty.from_preset("magpie")),
StructureComposition(ValenceOrbital(props=["frac"])),
StructureComposition(IonProperty(fast=True)),
]
)
)
features = featurizer.featurize_many(
self.hypo_structures["structure"], ignore_errors=True, **kwargs
)
n_species, formula = [], []
for s in self.hypo_structures["structure"]:
n_species.append(len(s.composition.elements))
formula.append(s.composition.formula)
self._features_df = pd.DataFrame.from_records(
features, columns=featurizer.feature_labels()
)
self._features_df.index = self.hypo_structures.index
self._features_df["N_species"] = n_species
self._features_df["Composition"] = formula
self._features_df["structure"] = self.hypo_structures["structure"]
self.features = self._features_df.dropna(axis=0, how="any")
self.features = self.features.reindex(sorted(self.features.columns), axis=1)
self._valid_structure_labels = list(self.features.index)
self.valid_structures = self.hypo_structures.loc[self._valid_structure_labels]
print(
"{} out of {} structures were successfully featurized.".format(
self.features.shape[0], self._features_df.shape[0]
)
)
return self.features
def candidates(self, include_composition=True):
"""
This is the recommended convenience method that returns
a fully-featurized set of hypothetical structures.
Args:
include_composition (bool): Adds a column named "formula"
to the dataframe.
Returns:
(pandas.DataFrame) feature vectors of valid
hypothetical structures.
"""
if self._hypo_structures is None:
self.get_structures()
if self.features is None:
self.featurize_structures()
if include_composition:
return self.features
else:
return self.features.drop("Composition", axis=1)
def sample(self, num_samples):
"""
Method for sampling domain
Args:
num_samples (int): number of samples to return
Returns:
(pd.DataFrame): dataframe corresponding to sampled
domain with num_samples candidates
"""
return self.candidates().sample(num_samples)
def get_structures_from_protosearch(formulas, source="icsd", db_interface=None):
"""
Calls protosearch to get the hypothetical structures.
Args:
formulas ([str]): list of chemical formulas from which
to generate candidate structures
source (str): project name in OQMD to be used as source.
Defaults to ICSD.
db_interface (DbInterface): interface to OQMD database
by default uses the one pulled from data.matr.io
Returns:
(pandas.DataFrame) hypothetical pymatgen structures
generated and their unique ids from protosearch
"""
if db_interface is None:
db_interface = OqmdInterface(source)
dataframes = [
db_interface.create_proto_data_set(chemical_formula=formula)
for formula in formulas
]
_structures = pd.concat(dataframes)
# Drop bad structures
_structures.dropna(axis=0, how="any", inplace=True)
# conversion to pymatgen structures
ase_adap = AseAtomsAdaptor()
pmg_structures = [
ase_adap.get_structure(_structures.iloc[i]["atoms"])
for i in range(len(_structures))
]
_structures["structure"] = pmg_structures
# This is for compatibility with Mc1, which doesn't allow
# underscores
structure_uids = [
_structures.iloc[i]["structure_name"].replace('_', '-')
for i in range(len(_structures))
]
_structures.index = structure_uids
return _structures
def get_stoichiometric_formulas(n_components, grid=None):
"""
Generates anonymous stoichiometric formulas for a set
of n_components with specified coefficients
Args:
n_components (int): number of components (dimensions)
grid (list): a range of integers
Returns:
(list): unique stoichiometric formula from an
allowed grid of integers.
"""
grid = grid if grid else list(range(1, 8))
args = [grid for _ in range(n_components)]
stoics = np.array(list(itertools.product(*args)))
fracs = stoics.astype(float) / np.sum(stoics, axis=1)[:, None]
_, indices, counts = np.unique(fracs, axis=0, return_index=True, return_counts=True)
return stoics[indices]
def create_formulas(
bounds,
charge_balanced=True,
oxi_states_extend=None,
oxi_states_override=None,
all_oxi_states=False,
grid=None,
create_subsystems=False,
):
"""
Creates a list of formulas given the bounds of a chemical space.
TODO:
- implement create_subsystems
Args:
bounds ([str]): list of elements to bound the space
charge_balanced (bool): whether to balance oxidations
states in the generated formulae
oxi_states_extend ({}): dictionary of {element: [int]}
where the value is the added oxidation state to be
included
oxi_states_override ({str: int}): override for oxidation
states, see Composition.oxi_state_guesses
all_oxi_states (bool): global config for oxidation
states, see Composition.oxi_state_guesses
grid ([]): list of integers to use for coefficients
create_subsystems (bool): whether to create formulas
for sub-chemical systems, e. g. for Sr-Ti-O,
whether to create Ti-O and Sr-O
Returns:
([str]): list of chemical formulas
"""
if create_subsystems:
raise NotImplementedError("Create subsystems not yet implemented.")
stoichs = get_stoichiometric_formulas(len(bounds), grid=grid)
formulas = []
for f in stoichs:
f_ = ""
for i in range(len(f)):
f_ += bounds[i] + f.astype(str).tolist()[i]
formulas.append(f_)
if charge_balanced:
charge_balanced_formulas = []
if oxi_states_extend:
oxi_states_override = oxi_states_override if oxi_states_override else {}
for element, states in oxi_states_extend.items():
states = states if isinstance(states, list) else [states]
_states = states + list(Element[element].common_oxidation_states)
if element in oxi_states_override:
oxi_states_override[element] += states
else:
oxi_states_override[element] = _states
for formula in formulas:
c = Composition(formula)
if c.oxi_state_guesses(
oxi_states_override=oxi_states_override, all_oxi_states=all_oxi_states
):
charge_balanced_formulas.append(formula)
return charge_balanced_formulas
else:
return formulas
def heuristic_setup(elements):
"""
Helper function to set up a default structure domain
Args:
elements ([str]): list of elements to use to
generate formulae
Returns:
(int): maximum coefficient for element set
(bool): whether or not charge balancing should be used
"""
grid_defaults = {2: 5, 3: 5}
n_comp = len(elements)
_g = grid_defaults.get(n_comp, 4)
# Charge balance ionic compounds
if {"O", "Cl", "F", "S", "N", "Br", "I"}.intersection(set(elements)):
charge_balanced = True
else:
charge_balanced = False
if not charge_balanced:
return _g, charge_balanced
else:
g_max_max = 8
while True:
sd = StructureDomain.from_bounds(
elements, charge_balanced=True, grid=range(1, _g)
)
n = len(sd.formulas)
if n >= 20:
return _g, charge_balanced
else:
if _g < g_max_max:
_g += 1
else:
return _g, charge_balanced
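# A hedged usage sketch (not part of the original module), assuming protosearch
# and its OQMD-backed prototype data are available locally:
#
#     max_coef, charge_balanced = heuristic_setup(["Fe", "O"])
#     domain = StructureDomain.from_bounds(
#         ["Fe", "O"], charge_balanced=charge_balanced, grid=range(1, max_coef)
#     )
#     candidates = domain.candidates()  # featurized hypothetical structures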
``` |
{
"source": "JosephMontoya-TRI/FTCP",
"score": 2
} |
#### File: JosephMontoya-TRI/FTCP/main_semi.py
```python
from data import *
from model import *
from utils import *
from sampling import *
import joblib
import numpy as np
import matplotlib.pyplot as plt
from keras import optimizers
from keras.callbacks import ReduceLROnPlateau, LearningRateScheduler
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
# Query ternary and quaternary compounds with number of sites <= 40
max_elms = 4
min_elms = 3
max_sites = 40
# Use your own API key to query Materials Project (https://materialsproject.org/open)
mp_api_key = 'YourAPIKey'
dataframe = data_query(mp_api_key, max_elms, min_elms, max_sites, include_te=True)
# Obtain FTCP representation
FTCP_representation, Nsites = FTCP_represent(dataframe, max_elms, max_sites, return_Nsites=True)
# Preprocess FTCP representation to obtain input X
FTCP_representation = pad(FTCP_representation, 2)
X, scaler_X = minmax(FTCP_representation)
# Get Y from queried dataframe
prop = ['formation_energy_per_atom', 'band_gap', 'Powerfactor', 'ind']
prop_dim = 2
semi_prop_dim = 1
Y = dataframe[prop].values
scaler_y = MinMaxScaler()
scaler_y_semi = MinMaxScaler()
Y[:, :prop_dim] = scaler_y.fit_transform(Y[:, :prop_dim])
Y[:, prop_dim:prop_dim+semi_prop_dim] = scaler_y_semi.fit_transform(Y[:, prop_dim:prop_dim+semi_prop_dim])
# Get training and test data; feel free to add a validation set if you need to tune hyperparameters
ind_train, ind_test = train_test_split(np.arange(len(Y)), test_size=0.2, random_state=21)
X_train, X_test = X[ind_train], X[ind_test]
y_train, y_test = Y[ind_train], Y[ind_test]
# Get model
VAE, encoder, decoder, regression, vae_loss = FTCP(X_train,
y_train,
coeffs=(3, 20, 5,),
semi=True,
label_ind=dataframe.dropna()['ind'].values,
prop_dim=(prop_dim, semi_prop_dim),
)
# Train model
reduce_lr = ReduceLROnPlateau(monitor='loss', factor=0.3, patience=4, min_lr=1e-6)
def scheduler(epoch, lr):
if epoch == 50:
lr = 2e-4
elif epoch == 100:
lr = 5e-5
return lr
schedule_lr = LearningRateScheduler(scheduler)
VAE.compile(optimizer=optimizers.rmsprop(lr=8e-4), loss=vae_loss)
VAE.fit([X_train, y_train],
X_train,
shuffle=True,
batch_size=256,
epochs=200,
callbacks=[reduce_lr, schedule_lr],
)
#%% Visualize latent space with two arbitrary dimensions
train_latent = encoder.predict(X_train, verbose=1)
y_train_ = np.concatenate((scaler_y.inverse_transform(y_train[:, :prop_dim]),
scaler_y_semi.inverse_transform(y_train[:, prop_dim:prop_dim+semi_prop_dim])),
axis=1
)
y_test_ = np.concatenate((scaler_y.inverse_transform(y_test[:, :prop_dim]),
scaler_y_semi.inverse_transform(y_test[:, prop_dim:prop_dim+semi_prop_dim])),
axis=1
)
font_size = 26
plt.rcParams['axes.labelsize'] = font_size
plt.rcParams['xtick.labelsize'] = font_size-2
plt.rcParams['ytick.labelsize'] = font_size-2
fig, ax = plt.subplots(1, 3, figsize=(18, 5.3))
s0 = ax[0].scatter(train_latent[:,0], train_latent[:,1], s=7, c=np.squeeze(y_train_[:,0]))
plt.colorbar(s0, ax=ax[0], ticks=list(range(-1, -8, -2)))
s1 = ax[1].scatter(train_latent[:,0], train_latent[:,1], s=7, c=np.squeeze(y_train_[:,1]))
plt.colorbar(s1, ax=ax[1], ticks=list(range(0, 10, 2)))
s2 = ax[2].scatter(train_latent[:,0], train_latent[:,1], s=7, c=np.squeeze(y_train_[:,2]))
plt.colorbar(s2, ax=ax[2])
fig.text(0, 0.92, '(A) $E_\mathrm{f}$', fontsize=font_size)
fig.text(0.33, 0.92, '(B) $E_\mathrm{g}$', fontsize=font_size)
fig.text(0.678, 0.92, '(C) Power Factor', fontsize=font_size)
plt.tight_layout()
plt.subplots_adjust(wspace=0.3, top=0.85)
plt.show()
#%% Evaluate Reconstruction and Target-Learning Branch Error
X_test_recon = VAE.predict([X_test, y_test], verbose=1)
X_test_recon_ = inv_minmax(X_test_recon, scaler_X)
X_test_recon_[X_test_recon_ < 0.1] = 0
X_test_ = inv_minmax(X_test, scaler_X)
# Mean absolute percentage error
def MAPE(y_true, y_pred):
# Add a small value to avoid division by zero
y_true, y_pred = np.array(y_true+1e-12), np.array(y_pred+1e-12)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
# Mean absolute error
def MAE(y_true, y_pred):
return np.nanmean(np.abs(y_true - y_pred), axis=0)
# Mean absolute error for reconstructed site coordinate matrix
def MAE_site_coor(SITE_COOR, SITE_COOR_recon, Nsites):
site = []
site_recon = []
# Only consider valid sites, namely to exclude zero padded (null) sites
for i in range(len(SITE_COOR)):
site.append(SITE_COOR[i, :Nsites[i], :])
site_recon.append(SITE_COOR_recon[i, :Nsites[i], :])
site = np.vstack(site)
site_recon = np.vstack(site_recon)
return np.mean(np.ravel(np.abs(site - site_recon)))
# Read string of elements considered in the study (to get dimension for element matrix)
elm_str = joblib.load('data/element.pkl')
# Get lattice constants, abc
abc = X_test_[:, len(elm_str), :3]
abc_recon = X_test_recon_[:, len(elm_str), :3]
print('abc (MAPE): ', MAPE(abc, abc_recon))
# Get lattice angles, alpha, beta, and gamma
ang = X_test_[:, len(elm_str)+1, :3]
ang_recon = X_test_recon_[:, len(elm_str)+1, :3]
print('angles (MAPE): ', MAPE(ang, ang_recon))
# Get site coordinates
coor = X_test_[:, len(elm_str)+2:len(elm_str)+2+max_sites, :3]
coor_recon = X_test_recon_[:, len(elm_str)+2:len(elm_str)+2+max_sites, :3]
print('coordinates (MAE): ', MAE_site_coor(coor, coor_recon, Nsites[ind_test]))
# Get accuracy of reconstructed elements
elm_accu = []
for i in range(max_elms):
elm = np.argmax(X_test_[:, :len(elm_str), i], axis=1)
elm_recon = np.argmax(X_test_recon_[:, :len(elm_str), i], axis=1)
elm_accu.append(metrics.accuracy_score(elm, elm_recon))
print(f'Accuracy for {len(elm_str)} elements are respectively: {elm_accu}')
# Get target-learning branch regression error
y_test_hat = regression.predict(X_test, verbose=1)
y_test_hat_ = scaler_y.inverse_transform(y_test_hat[0])
y_test_semi_hat_ = scaler_y_semi.inverse_transform(y_test_hat[1])
print(f'The regression MAE for {prop[:prop_dim]} are respectively', MAE(y_test_[:, :prop_dim], y_test_hat_))
print(f'The regression MAE for {prop[prop_dim:prop_dim+semi_prop_dim]} are respectively', MAE(y_test_[:, prop_dim:prop_dim+semi_prop_dim], y_test_semi_hat_))
#%% Sample the latent space and perform inverse design
# Specify design targets, 0.3 eV <= Eg <= 1.5 eV, Ef < 0 eV/atom (power factor as high as possible)
target_Ef, target_Eg_min, target_Eg_max = -1.5, 0.3, 1.5
# Set number of compounds to perturb locally about
Nsamples = 10
# Obtain points that are closest to the design target in the training set
ind_constraint_1 = np.squeeze(np.argwhere(y_train_[:, 0] < target_Ef))
ind_constraint_2 = np.squeeze(np.argwhere(y_train_[:, 1] >= target_Eg_min))
ind_constraint_3 = np.squeeze(np.argwhere(y_train_[:, 1] <= target_Eg_max))
ind_constraint = np.intersect1d(np.intersect1d(ind_constraint_1, ind_constraint_2), ind_constraint_3)
# Sort the latent space according to the value of predicted power factor
y_train_semi_hat = regression.predict(X_train, verbose=1)[1]
ind_temp = np.argsort(-y_train_semi_hat[ind_constraint, 0])
ind_sample = ind_constraint[ind_temp][:Nsamples]
# Set number of perturbing instances around each compound
Nperturb = 3
# Set local perturbation (Lp) scale
Lp_scale = 0.9
# Sample (Lp)
samples = train_latent[ind_sample, :]
samples = np.tile(samples, (Nperturb, 1))
gaussian_noise = np.random.normal(0, 1, samples.shape)
samples = samples + gaussian_noise * Lp_scale
ftcp_designs = decoder.predict(samples, verbose=1)
ftcp_designs = inv_minmax(ftcp_designs, scaler_X)
# Get chemical info for designed crystals and output CIFs
pred_formula, pred_abc, pred_ang, pred_latt, pred_site_coor, ind_unique = get_info(ftcp_designs,
max_elms,
max_sites,
elm_str=joblib.load('data/element.pkl'),
to_CIF=True,
check_uniqueness=True,
mp_api_key=mp_api_key,
)
``` |
{
"source": "JosephMontoya-TRI/materialnet",
"score": 3
} |
#### File: legacy/data/process.py
```python
import csv
import json
import sys
def unique_nodes(edges):
nodes = set()
for e in edges:
nodes.add(e[0])
nodes.add(e[1])
return nodes
def filter_edges(edges, nodes):
return filter(lambda e: e[0] in nodes and e[1] in nodes, edges)
def extract(rec):
val = {'degree': rec['deg'][-1],
'eigen_cent': rec['eigen_cent'][-1],
'deg_cent': rec['deg_cent'][-1],
'shortest_path': rec['shortest_path'][-1],
'deg_neigh': rec['deg_neigh'][-1],
'clus_coeff': rec['clus_coeff'][-1],
'discovery': rec['discovery'],
'x': rec['x'],
'y': rec['y']}
if 'synthesis_probability' in rec:
val['synthesis_probability'] = rec['synthesis_probability']
if 'formation_energy' in rec:
val['formation_energy'] = rec['formation_energy']
if 'discovery_prec' in rec:
val['discovery'] = min(rec['discovery_prec'])
return val
def exit_error():
print >>sys.stderr, 'usage: process.py <num_nodes> <edges> <networkexist> <networkhypo> <synthesis> <oqmdexist> <oqmdhypo> <position>'
return 1
def main():
if len(sys.argv) < 9:
return exit_error()
n = None
try:
n = int(sys.argv[1])
except ValueError:
return exit_error()
edgefile = sys.argv[2]
nodefile = sys.argv[3]
hyponodefile = sys.argv[4]
synthfile = sys.argv[5]
oqmdfile = sys.argv[6]
oqmdhypofile = sys.argv[7]
positionfile = sys.argv[8]
try:
with open(edgefile) as f:
reader = csv.reader(f)
edges = list(reader)
except IOError:
print >>sys.stderr, 'fatal: could not open edgefile %s' % (edgefile)
return 1
nodes = unique_nodes(edges)
node_sample = set(list(nodes)[:n]) if n > 0 else set(list(nodes))
try:
with open(nodefile) as f:
data = json.loads(f.read())
with open(hyponodefile) as f:
hypodata = json.loads(f.read())
for h in hypodata:
assert h not in data
data[h] = hypodata[h]
with open(positionfile) as f:
positions = json.loads(f.read())
for p in positions:
name = p['name']
if name in data:
data[name]['x'] = p['x']
data[name]['y'] = p['y']
with open(synthfile) as f:
synth = json.loads(f.read())
for k, v in synth.iteritems():
assert k in data
data[k]['synthesis_probability'] = v
for oqmdf in [oqmdfile, oqmdhypofile]:
with open(oqmdf) as f:
oqmd = json.loads(f.read())
for k, v in oqmd.iteritems():
formula = v['formula']
assert formula in data
data[formula]['formation_energy'] = v['formation_energy']
except IOError:
print >>sys.stderr, 'fatal: could not open nodefile %s' % (nodefile)
return 1
for k in data:
data[k] = extract(data[k])
edge_sample = filter_edges(edges, data)
with open('edges.json', 'w') as f:
print >>f, json.dumps(edge_sample, indent=2)
with open('nodes.json', 'w') as f:
print >>f, json.dumps(data)
return 0
if __name__ == '__main__':
sys.exit(main())
``` |
{
"source": "JosephMontoya-TRI/monty",
"score": 3
} |
#### File: monty/monty/math.py
```python
from __future__ import division, unicode_literals, absolute_import
"""
Additional math functions.
"""
__author__ = '<NAME>'
__copyright__ = 'Copyright 2013, The Materials Virtual Lab'
__version__ = '0.1'
__maintainer__ = '<NAME>'
__email__ = '<EMAIL>'
__date__ = '10/28/14'
import math
def nCr(n, r):
"""
Calculates nCr.
Args:
n (int): total number of items.
r (int): items to choose
Returns:
nCr.
"""
f = math.factorial
return int(f(n) / f(r) / f(n-r))
def nPr(n, r):
"""
Calculates nPr.
Args:
n (int): total number of items.
r (int): items to permute
Returns:
nPr.
"""
f = math.factorial
return int(f(n) / f(n-r))
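# --- Added usage sketch (not part of the original module) ---
# Quick sanity checks for nCr and nPr; the expected values follow directly
# from the factorial definitions above.
if __name__ == '__main__':
    assert nCr(5, 2) == 10  # 5! / (2! * 3!)
    assert nPr(5, 2) == 20  # 5! / 3!
    print('nCr/nPr sanity checks passed')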
```
#### File: monty/os/__init__.py
```python
from __future__ import absolute_import
import os
import errno
from contextlib import contextmanager
__author__ = '<NAME>'
__copyright__ = 'Copyright 2013, The Materials Project'
__version__ = '0.1'
__maintainer__ = '<NAME>'
__email__ = '<EMAIL>'
__date__ = '1/24/14'
@contextmanager
def cd(path):
"""
A Fabric-inspired cd context that temporarily changes directory for
performing some tasks, and returns to the original working directory
afterwards. E.g.,
with cd("/my/path/"):
do_something()
Args:
path: Path to cd to.
"""
cwd = os.getcwd()
os.chdir(path)
try:
yield
finally:
os.chdir(cwd)
def makedirs_p(path, **kwargs):
"""
Wrapper for os.makedirs that does not raise an exception if the directory already exists, in the fashion of
"mkdir -p" command. The check is performed in a thread-safe way
Args:
path: path of the directory to create
kwargs: standard kwargs for os.makedirs
"""
try:
os.makedirs(path, **kwargs)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise
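# --- Added usage sketch (not part of the original module) ---
# Create a nested directory idempotently, then temporarily work inside it.
if __name__ == '__main__':
    import tempfile
    base = tempfile.mkdtemp()
    target = os.path.join(base, 'a', 'b')
    makedirs_p(target)  # safe to call even if the directory already exists
    makedirs_p(target)
    with cd(target):
        print('inside:', os.getcwd())
    print('back in:', os.getcwd())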
```
#### File: monty/tests/test_itertools.py
```python
from __future__ import division, unicode_literals, print_function
"""
#TODO: Replace with proper module doc.
"""
__author__ = '<NAME>'
__copyright__ = 'Copyright 2013, The Materials Project'
__version__ = '0.1'
__maintainer__ = '<NAME>'
__email__ = '<EMAIL>'
__date__ = '8/29/14'
import unittest
from monty.itertools import iterator_from_slice
class FuncTest(unittest.TestCase):
def test_iterator_from_slice(self):
self.assertEqual(list(iterator_from_slice(slice(0, 6, 2))), [0, 2, 4])
if __name__ == '__main__':
unittest.main()
```
#### File: monty/tests/test_math.py
```python
from __future__ import division, unicode_literals
__author__ = '<NAME>'
__copyright__ = 'Copyright 2014, The Materials Virtual Lab'
__version__ = '0.1'
__maintainer__ = '<NAME>'
__email__ = '<EMAIL>'
__date__ = '1/24/14'
import unittest
from monty.math import nCr, nPr
class FuncTest(unittest.TestCase):
def test_nCr(self):
self.assertEqual(nCr(4, 2), 6)
    def test_nPr(self):
self.assertEqual(nPr(4, 2), 12)
if __name__ == "__main__":
unittest.main()
``` |
{
"source": "JosephMontoya-TRI/qmpy",
"score": 2
} |
#### File: qmpy/tests/test_tri_py3.py
```python
import unittest
import os
import tempfile
import shutil
from qmpy.materials.structure import Structure
TEST_DIR = os.path.dirname(os.path.abspath(__file__))
# Stub for testing phases
class PhaseTest(unittest.TestCase):
def test_phases(self):
pass
class VaspIOTest(unittest.TestCase):
def setUp(self):
self.cwd = os.getcwd()
self.temp_dir = tempfile.mkdtemp()
os.chdir(self.temp_dir)
def tearDown(self):
os.chdir(self.cwd)
shutil.rmtree(self.temp_dir)
def test_vasp_io(self):
# Get structure from CIF
# Write VASP files from structure
pass
if __name__ == '__main__':
unittest.main()
``` |
{
"source": "josephmoyer/WALKOFF-Apps",
"score": 3
} |
#### File: WALKOFF-Apps/Converter/main.py
```python
from apps import App, action
from PIL import Image
import logging
import os
logger = logging.getLogger(__name__)
@action
def convert_image(input_file, output_file):
"""Converts images from one file type to another
Arguments:
input_file -- the file path to be converted from
output_file -- the file path to the new file
Note:
List of supported formats: https://pillow.readthedocs.io/en/5.3.x/handbook/image-file-formats.html
"""
if input_file != output_file:
try:
Image.open(input_file).save(output_file)
except IOError:
return ('Failed to convert ' + input_file + ' to ' + output_file), 'FailedToConvert'
except ValueError:
            return ('Failed to convert ' + input_file + ' to ' + output_file + '. Check supported file types.'), 'FailedToConvert'
return output_file, 'Success'
@action
def convert_image_batch(input_dir, output_type, sub_directories):
"""Converts all images from their original file type to a specified type calls
convert_image for each file. Outputs the new file to the same directory as the
original file. Tries to convert every file, will only convert supported formats
Arguments:
input_dir -- the directory path to convert
output_type -- the type to convert to ('PNG', 'JPG', etc.)
sub_directories -- boolean to determine if all sub dir will be explored
Note:
List of supported formats: https://pillow.readthedocs.io/en/5.3.x/handbook/image-file-formats.html
"""
successful_converts = 0
if(sub_directories):
for root, dirs, files in os.walk(input_dir):
for name in files:
fp = os.path.join(root, name) # file path
output_name = fp[0:(fp.rfind('.') + 1)] + output_type
if (convert_image(fp, output_name).status == 'Success'):
successful_converts += 1
else:
for entry in os.scandir(input_dir):
if entry.is_file():
output_name = entry.path[0:(entry.name.rfind('.') + 1)] + output_type
if (convert_image(entry.path, output_name).status == 'Success'):
successful_converts += 1
# Make sure at least one file gets converted, if not it's a failure
if successful_converts > 0:
return input_dir, 'Success'
else:
return input_dir, 'FailedToConvert'
```
#### File: signature-base/threatintel/get-otx-iocs.py
```python
from OTXv2 import OTXv2
import re
import os
import sys
import traceback
import argparse
OTX_KEY = '7607c7e15409381f7f8532b4d4caaeaa6c96e75cea7ef3169d9b6bad19290c43'
# Hashes that are often included in pulses but are false positives
HASH_WHITELIST = ['e617348b8947f28e2a280dd93c75a6ad',
'125da188e26bd119ce8cad7eeb1fc2dfa147ad47',
'06f7826c2862d184a49e3672c0aa6097b11e7771a4bf613ec37941236c1a8e20',
'd378bffb70923139d6a4f546864aa61c',
'8094af5ee310714caebccaeee7769ffb08048503ba478b879edfef5f1a24fefe',
'01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b',
'b6f9aa44c5f0565b5deb761b1926e9b6',
# Empty file
'd41d8cd98f00b204e9800998ecf8427e',
'da39a3ee5e6b4b0d3255bfef95601890afd80709',
'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855',
# One byte line break file (Unix) 0x0a
'68b329da9893e34099c7d8ad5cb9c940',
'adc83b19e793491b1c6ea0fd8b46cd9f32e592fc',
'01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b',
# One byte line break file (Windows) 0x0d0a
'81051bcc2cf1bedf378224b0a93e2877',
'ba8ab5a0280b953aa97435ff8946cbcbb2755a27',
'<KEY>',
]
FILENAMES_WHITELIST = ['wncry']
DOMAIN_WHITELIST = ['proofpoint.com']
class WhiteListedIOC(Exception): pass
class OTXReceiver():
# IOC Strings
hash_iocs = ""
filename_iocs = ""
c2_iocs_ipv4 = ""
c2_iocs_ipv6 = ""
c2_iocs_domain = ""
# Output format
separator = ";"
use_csv_header = False
extension = "txt"
hash_upper = True
filename_regex_out = True
def __init__(self, api_key, siem_mode, debug, proxy, csvheader, extension):
self.debug = debug
self.otx = OTXv2(api_key, proxy)
if siem_mode:
self.separator = ","
self.use_csv_header = csvheader
self.extension = extension
self.hash_upper = True
self.filename_regex_out = False
def get_iocs_last(self):
# mtime = (datetime.now() - timedelta(days=days_to_load)).isoformat()
print("Starting OTX feed download ...")
self.events = self.otx.getall()
print("Download complete - %s events received" % len(self.events))
# json_normalize(self.events)
def write_iocs(self, ioc_folder):
hash_ioc_file = os.path.join(ioc_folder, "otx-hash-iocs.{0}".format(self.extension))
filename_ioc_file = os.path.join(ioc_folder, "otx-filename-iocs.{0}".format(self.extension))
c2_ioc_ipv4_file = os.path.join(ioc_folder, "otx-c2-iocs-ipv4.{0}".format(self.extension))
c2_ioc_ipv6_file = os.path.join(ioc_folder, "otx-c2-iocs-ipv6.{0}".format(self.extension))
c2_ioc_domain_file = os.path.join(ioc_folder, "otx-c2-iocs.{0}".format(self.extension))
print("Processing indicators ...")
for event in self.events:
try:
for indicator in event["indicators"]:
try:
# Description
                        description = event["name"].encode('unicode-escape').decode('ascii').replace(self.separator, " - ")
# Hash IOCs
if indicator["type"] in ('FileHash-MD5', 'FileHash-SHA1', 'FileHash-SHA256'):
# Whitelisting
if indicator["indicator"].lower() in HASH_WHITELIST:
raise WhiteListedIOC
hash = indicator["indicator"]
if self.hash_upper:
hash = indicator["indicator"].upper()
self.hash_iocs += "{0}{3}{1} {2}\n".format(
hash,
description,
" / ".join(event["references"])[:80],
self.separator)
# Filename IOCs
if indicator["type"] == 'FilePath':
# Whitelisting
for w in FILENAMES_WHITELIST:
if w in indicator["indicator"]:
raise WhiteListedIOC
filename = indicator["indicator"]
if self.filename_regex_out:
filename = my_escape(indicator["indicator"])
self.filename_iocs += "{0}{3}{1} {2}\n".format(
filename,
description,
" / ".join(event["references"])[:80],
self.separator)
# C2 IOCs
# Whitelisting
if indicator["type"] in ('IPv4', 'IPv6', 'domain', 'hostname', 'CIDR'):
for domain in DOMAIN_WHITELIST:
if domain in indicator["indicator"]:
print(indicator["indicator"])
raise WhiteListedIOC
if indicator["type"] == 'IPv4':
self.c2_iocs_ipv4 += "{0}{3}{1} {2}\n".format(
indicator["indicator"],
description,
" / ".join(event["references"])[:80],
self.separator)
if indicator["type"] == 'IPv6':
self.c2_iocs_ipv6 += "{0}{3}{1} {2}\n".format(
indicator["indicator"],
description,
" / ".join(event["references"])[:80],
self.separator)
if indicator["type"] in ('domain', 'hostname', 'CIDR'):
self.c2_iocs_domain += "{0}{3}{1} {2}\n".format(
indicator["indicator"],
description,
" / ".join(event["references"])[:80],
self.separator)
except WhiteListedIOC as e:
pass
except Exception as e:
traceback.print_exc()
# Write to files
with open(hash_ioc_file, "w") as hash_fh:
if self.use_csv_header:
hash_fh.write('hash{0}'.format(self.separator) + 'source\n')
hash_fh.write(self.hash_iocs)
print("{0} hash iocs written to {1}".format(self.hash_iocs.count('\n'), hash_ioc_file))
with open(filename_ioc_file, "w") as fn_fh:
if self.use_csv_header:
fn_fh.write('filename{0}'.format(self.separator) + 'source\n')
fn_fh.write(self.filename_iocs)
print("{0} filename iocs written to {1}".format(self.filename_iocs.count('\n'), filename_ioc_file))
with open(c2_ioc_ipv4_file, "w") as c24_fh:
if self.use_csv_header:
c24_fh.write('host{0}'.format(self.separator) + 'source\n')
c24_fh.write(self.c2_iocs_ipv4)
print("{0} c2 ipv4 iocs written to {1}".format(self.c2_iocs_ipv4.count('\n'), c2_ioc_ipv4_file))
with open(c2_ioc_ipv6_file, "w") as c26_fh:
if self.use_csv_header:
c26_fh.write('host{0}'.format(self.separator) + 'source\n')
c26_fh.write(self.c2_iocs_ipv6)
print("{0} c2 ipv6 iocs written to {1}".format(self.c2_iocs_ipv6.count('\n'), c2_ioc_ipv6_file))
with open(c2_ioc_domain_file, "w") as c2d_fh:
if self.use_csv_header:
c2d_fh.write('host{0}'.format(self.separator) + 'source\n')
c2d_fh.write(self.c2_iocs_domain)
print("{0} c2 domain iocs written to {1}".format(self.c2_iocs_domain.count('\n'), c2_ioc_domain_file))
def my_escape(string):
return re.sub(r'([\-\(\)\.\[\]\{\}\\\+])', r'\\\1', string)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='OTX IOC Receiver')
parser.add_argument('-k', help='OTX API key', metavar='APIKEY', default=OTX_KEY)
# parser.add_argument('-l', help='Time frame in days (default=1)', default=1)
parser.add_argument('-o', metavar='dir', help='Output directory', default='../iocs')
parser.add_argument('-p', metavar='proxy', help='Proxy server (e.g. http://proxy:8080 or '
'http://user:pass@proxy:8080', default=None)
parser.add_argument('--verifycert', action='store_true', help='Verify the server certificate', default=False)
parser.add_argument('--siem', action='store_true', default=False,
help='CSV output for use in SIEM systems (e.g. Splunk)')
parser.add_argument('--nocsvheader', action='store_true', default=False,
help='Disable header in CSV output (e.g. McAfee SIEM)')
parser.add_argument('-e', metavar='ext', help='File extension', default='txt')
parser.add_argument('--debug', action='store_true', default=False, help='Debug output')
args = parser.parse_args()
if len(args.k) != 64:
print("Set an API key in script or via -k APIKEY. Go to https://otx.alienvault.com create an account and get your own API key")
sys.exit(0)
# Create a receiver
otx_receiver = OTXReceiver(api_key=args.k, siem_mode=args.siem, debug=args.debug, proxy=args.p,
csvheader=(not args.nocsvheader), extension=args.e)
# Retrieve the events and store the IOCs
# otx_receiver.get_iocs_last(int(args.l))
otx_receiver.get_iocs_last()
# Write IOC files
otx_receiver.write_iocs(ioc_folder=args.o)
```
#### File: interfaces/nmapopenvas/graphs.py
```python
from interfaces import dispatcher, AppBlueprint
from walkoff.events import WalkoffEvent
from flask import Blueprint, jsonify
import json
blueprint = AppBlueprint(blueprint=Blueprint('NOVAS_Demo', __name__))
latest_graph = "WalkoffDemoGraph.json"
@dispatcher.on_app_actions('Nmap', actions=['graph from results'],
events=WalkoffEvent.ActionExecutionSuccess)
def get_latest_graph(data):
global latest_graph
latest_graph = data['arguments'][3]['value']
@blueprint.blueprint.route('/demo', methods=['GET'])
def read_and_send_graph():
try:
global latest_graph
with open(latest_graph) as f:
r = jsonify(json.load(f))
return r, 200
except IOError:
return None, 461
```
#### File: WALKOFF-Apps/SkeletonApp/main.py
```python
import logging
from apps import App, action
logger = logging.getLogger(__name__)
@action
def test_global_action(data):
return data
class Main(App):
"""
Skeleton example app to build other apps off of
Args:
name (str): Name of the app
device (list[str]): List of associated device names
"""
def __init__(self, name, device, context):
App.__init__(self, name, device, context) #Required to call superconstructor
@action
def test_function(self):
"""
Basic self contained function
"""
return {}
@action
def test_function_with_param(self, test_param):
"""
Basic function that takes in a parameter
Args:
test_param (str): String that will be returned
"""
return test_param
@action
def test_function_with_device_reference(self):
"""
Basic function that calls an instance variable. In this case, a device name.
"""
# password = self.get_device().get_encrypted_field('password'); do not store this variable in cache
return self.device_fields['username']
``` |
{
"source": "josephmtinangi/python-basics",
"score": 2
} |
#### File: python-basics/functions/check-for-prime.py
```python
def check(number):
print('Coming soon')
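# --- Added sketch (illustrative only, not the author's implementation) ---
# One straightforward way the stub above could be completed: trial division
# up to the square root of the candidate number.
def check_sketch(number):
    """Return True if number is prime, False otherwise (illustrative sketch)."""
    if number < 2:
        return False
    i = 2
    while i * i <= number:
        if number % i == 0:
            return False
        i += 1
    return True
# e.g. check_sketch(7) -> True, check_sketch(9) -> False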
``` |
{
"source": "joseph-nagel/torchutils",
"score": 3
} |
#### File: torchutils/tests/test_data.py
```python
import pytest
import numpy as np
import torch
from torch.utils.data import TensorDataset
from torchutils.data import mean_std_over_dataset, image2tensor, tensor2image
@pytest.mark.parametrize('no_samples', [100, 1000])
@pytest.mark.parametrize('feature_shape', [(), (1,), (10,), (10,10)])
def test_mean_std_over_dataset(no_samples, feature_shape):
'''Test correctness of evaluating the mean and standard deviation.'''
torch.manual_seed(0)
X = torch.randn(no_samples, *feature_shape)
y = torch.randint(2, size=(no_samples,))
data_set = TensorDataset(X, y)
mean, std = mean_std_over_dataset(data_set)
ref_mean = X.numpy().mean()
ref_std = X.numpy().std()
assert np.isclose(mean, ref_mean, rtol=1e-02, atol=1e-03)
assert np.isclose(std, ref_std, rtol=1e-02, atol=1e-03)
@pytest.mark.parametrize('shape', [(10,10), (10,10,3), (1,10,10,3)])
def test_image2tensor2image(shape):
'''Test the transformation and back-transformation of an image.'''
np.random.seed(0)
image = np.random.randn(*shape)
tensor = image2tensor(image)
new_image = tensor2image(tensor)
assert np.allclose(image.squeeze(), new_image.squeeze())
@pytest.mark.parametrize('shape', [(10,10), (3,10,10), (1,3,10,10)])
def test_tensor2image2tensor(shape):
'''Test the transformation and back-transformation of a tensor.'''
torch.manual_seed(0)
tensor = torch.randn(*shape)
image = tensor2image(tensor)
new_tensor = image2tensor(image)
assert np.allclose(tensor.squeeze(), new_tensor.squeeze())
```
#### File: torchutils/tests/test_tools.py
```python
import pytest
import itertools
import torch
import torch.nn as nn
from torchutils.tools import conv_out_shape
input_shapes = [(10,10), (100,100)]
kernel_sizes = [(3,3), (5,5)]
strides = [1]
paddings = [0, 1, 2]
dilations = [1]
cartesian_product = [elem for elem in itertools.product(
input_shapes,
kernel_sizes,
strides,
paddings,
dilations
)]
@pytest.fixture(params=cartesian_product)
def data_conv_model_and_input(request):
'''Create convolutional layer and input tensor.'''
torch.manual_seed(0)
input_shape, kernel_size, stride, padding, dilation = request.param
model = nn.Conv2d(in_channels=1,
out_channels=1,
kernel_size=kernel_size,
stride=stride,
padding=padding,
dilation=dilation)
X = torch.randn(1, model.in_channels, *input_shape)
return model, X
def test_conv_out_shape(data_conv_model_and_input):
'''Test the predicted output shape after the convolution.'''
model, X = data_conv_model_and_input
y = model(X)
actual_out_shape = y.shape[2:]
predicted_out_shape = conv_out_shape(input_shape=X.shape[2:],
kernel_size=model.kernel_size,
stride=model.stride,
padding=model.padding,
dilation=model.dilation)
assert predicted_out_shape == actual_out_shape
```
#### File: torchutils/torchutils/data.py
```python
import numpy as np
import torch
from torch.utils.data import DataLoader, Sampler
def mean_std_over_dataset(data_set, batch_size=1, channel_wise=False, verbose=True):
'''
Calculate mean and std. in a batch-wise sweep over the dataset.
Parameters
----------
data_set : PyTorch DataSet object
Set of data to be analyzed.
Returns
-------
mean : float or array
Mean value or channel-wise mean values.
std : float or array
Standard deviation or channel-wise standard deviations.
'''
# data loader
data_loader = DataLoader(data_set, batch_size=batch_size, shuffle=False)
# mean and std.
if not channel_wise:
# mean
mean = 0.
for images, labels in data_loader:
# print('{}'.format(images.numpy().shape))
mean += np.mean(images.numpy())
mean /= len(data_loader)
# std.
std = 0.
for images, labels in data_loader:
std += np.mean((images.numpy() - mean)**2)
std /= len(data_loader) - 1
std = np.sqrt(std)
# channel-wise mean and std.
else:
# mean
no_summands = 0.
mean = np.zeros(3)
for images, labels in data_loader:
mean += np.sum(images.numpy(), axis=(0,2,3))
no_summands += np.size(images.numpy()[:,0,:,:])
mean /= no_summands
# std.
no_summands = 0.
std = np.zeros(3)
for images, labels in data_loader:
std += np.sum((images.numpy() - mean.reshape(1,-1,1,1))**2, axis=(0,2,3))
no_summands += np.size(images.numpy()[:,0,:,:])
std /= no_summands - 1
std = np.sqrt(std)
if verbose:
print('Mean: {}'.format(np.array2string(np.array(mean), precision=4)))
print('Std.: {}'.format(np.array2string(np.array(std), precision=4)))
return mean, std
class GaussianNoise(object):
'''
Gaussian noise corruptions.
Summary
-------
The class realizes a transformer corrupting images with Gaussian noise.
This can be used for data augmentation or robustness evaluation.
'''
def __init__(self, noise_std=1.0):
self.noise_std = noise_std
def __call__(self, X):
X_noisy = X + torch.randn_like(X) * self.noise_std
return X_noisy
class BalancedSampler(Sampler):
'''
Balanced sampling of imbalanced datasets.
Summary
-------
In order to deal with an imbalanced classification dataset,
an appropriate over/undersampling scheme is implemented.
Here, samples are taken with replacement from the set, such that
all classes are equally likely to occur in the training mini-batches.
This might be especially helpful in combination with data augmentation.
Different weights for samples in the empirical loss would be an alternative.
Parameters
----------
data_set : PyTorch dataset
Imbalanced dataset to be over/undersampled.
no_samples : int or None
Number of samples to draw in one epoch.
indices : array_like or None
Subset of indices that are sampled.
'''
def __init__(self, dataset, no_samples=None, indices=None):
self.indices = list(range(len(dataset)))
if no_samples is None:
self.no_samples = len(dataset) if indices is None else len(indices)
else:
self.no_samples = no_samples
# class occurrence counts
data_loader = DataLoader(dataset, batch_size=1, shuffle=False)
labels_list = []
for image, label in data_loader:
labels_list.append(label)
labels_tensor = torch.cat(labels_list, dim=0)
unique_labels, counts = torch.unique(labels_tensor, return_counts=True)
# unnormalized probabilities
weights_for_class = 1.0 / counts.float()
weights_for_index = torch.tensor(
[weights_for_class[labels_tensor[idx]] for idx in self.indices]
)
# zero indices
if indices is not None:
zero_ids = np.setdiff1d(self.indices, indices).tolist()
weights_for_index[zero_ids] = torch.tensor(0.0)
# balanced sampling distribution
self.categorical = torch.distributions.Categorical(probs=weights_for_index)
def __iter__(self):
return (idx for idx in self.categorical.sample((self.no_samples,)))
def __len__(self):
return self.no_samples
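# --- Added usage sketch (comment only, not part of the original module) ---
# Pass the sampler to a DataLoader so minority classes are drawn as often as
# majority ones. `train_set` is a placeholder for any map-style dataset that
# yields (image, label) pairs with integer labels 0..K-1.
#
#     sampler = BalancedSampler(train_set)
#     loader = DataLoader(train_set, batch_size=32, sampler=sampler)
#     for images, labels in loader:
#         ...  # labels are approximately class-balanced within an epoch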
def image2tensor(image, unsqueeze=True):
'''
Convert image array to PyTorch tensor.
Parameters
----------
image : array
unsqueeze : bool
Returns
-------
tensor : PyTorch tensor
'''
if image.ndim == 2: # (no_rows, no_cols)
tensor = torch.from_numpy(image)
elif image.ndim == 3: # (no_rows, no_cols, no_channels)
tensor = torch.from_numpy(image.transpose(2, 0, 1))
    elif image.ndim == 4: # (no_samples, no_rows, no_cols, no_channels)
tensor = torch.from_numpy(image.transpose(0, 3, 1, 2))
if unsqueeze:
for _ in range(4 - image.ndim):
tensor = tensor.unsqueeze(0)
return tensor # (no_samples, no_channels, no_rows, no_colums)
def tensor2image(tensor, squeeze=True):
'''
Convert PyTorch tensor to image array.
Parameters
----------
tensor : PyTorch tensor
squeeze : bool
Returns
-------
image : array
'''
if tensor.ndim == 2: # (no_rows, no_cols)
image = tensor.numpy()
elif tensor.ndim == 3: # (no_channels, no_rows, no_cols)
image = tensor.numpy().transpose((1, 2, 0))
elif tensor.ndim == 4: # (no_samples, no_channels, no_rows, no_cols)
image = tensor.numpy().transpose((0, 2, 3, 1))
if squeeze:
image = image.squeeze()
return image # (no_samples, no_rows, no_cols, no_channels)
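# --- Added round-trip sketch (comment only, not part of the original module) ---
# image2tensor moves the channel axis into PyTorch's (N, C, H, W) layout and
# tensor2image moves it back, so a conversion there and back is lossless.
#
#     img = np.random.rand(10, 10, 3)   # (H, W, C) image
#     t = image2tensor(img)             # tensor of shape (1, 3, 10, 10)
#     back = tensor2image(t)            # array of shape (10, 10, 3)
#     assert np.allclose(img, back)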
```
#### File: torchutils/torchutils/pretrained.py
```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import AlexNet, VGG, ResNet, DenseNet
def create_feature_extractor(model_architecture,
input_shape=None,
is_pretrained=True,
is_frozen=True):
'''
Create a pretrained feature extractor.
Summary
-------
A feature extractor is created from a predefined model architecture.
At the moment, AlexNet, VGG, and ResNet and DenseNet models are supported.
The shape of the output features is empirically determined.
Parameters
----------
model_architecture : PyTorch module or constructor function
Predefined model architecture.
input_shape : tuple or None
Model input shape. If not given, ImageNet defaults are used.
is_pretrained : bool
Determines whether pretrained weights are loaded.
is_frozen : bool
Determines whether weights are frozen.
'''
# initialize model
if isinstance(model_architecture, (AlexNet, VGG, ResNet, DenseNet)):
pretrained_model = model_architecture # is already a model instance
else:
pretrained_model = model_architecture(pretrained=is_pretrained) # is only a model constructor
# freeze parameters
if is_frozen:
freeze_parameters(pretrained_model)
# input shape
if input_shape is None:
input_shape = (3, 224, 224)
elif len(input_shape) == 2:
input_shape = (3,) + input_shape
# create features
if isinstance(pretrained_model, (AlexNet, VGG, DenseNet)):
feature_list = list(pretrained_model.features.children())
elif isinstance(pretrained_model, ResNet):
feature_list = list(pretrained_model.children())[:-2]
# feature_list.append(nn.AvgPool2d((7,7)))
feature_extractor = nn.Sequential(*feature_list)
# feature shape
feature_shape = get_output_shape(feature_extractor, input_shape)
# no_features = np.prod(feature_shape)
return feature_extractor, feature_shape
def freeze_parameters(model):
'''Freeze the weights of a model.'''
for param in model.parameters():
param.requires_grad = False
def get_output_shape(model, input_shape):
'''
Return the output shape of a model.
Summary
-------
The model output shape is empirically determined for input tensors of a certain shape.
Its values are randomly sampled from a standard normal distribution.
Very generally, for a given input tensor with shape (no_samples, *input_shape),
the model predicts an (no_samples, *output_shape)-shaped output.
The shape of this output, without the sample size, is returned.
'''
model.eval()
with torch.no_grad():
predictions = model(torch.randn(1, *input_shape))
output_shape = tuple(predictions.shape[1:])
return output_shape
def extract_features(feature_extractor, data_loader, expand=None, as_array=False):
'''
Extract features given a model and a data loader.
Summary
-------
Extracts features from all images generated by the data loader.
After they are computed in batches, they are eventually concatenated
and returned as either PyTorch tensors or Numpy arrays.
Parameters
----------
feature_extractor : PyTorch module
Feature extraction model.
data_loader : PyTorch DataLoader
Data loader instance.
expand : tuple or None
If given, input tensors are expanded accordingly.
as_array : bool
Determines whether outputs are returned as Numpy arrays.
'''
features_list = []
labels_list = []
feature_extractor.eval()
with torch.no_grad():
for images, labels in data_loader:
if expand is not None:
images = images.expand(*expand)
features = feature_extractor(images)
features_list.append(features)
labels_list.append(labels)
if as_array: # Numpy arrays
features = np.concatenate([tensor.numpy() for tensor in features_list], axis=0)
labels = np.concatenate([tensor.numpy() for tensor in labels_list], axis=0)
else: # PyTorch tensors
features = torch.cat(features_list, dim=0)
labels = torch.cat(labels_list, dim=0)
return features, labels
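# --- Added usage sketch (comment only, not part of the original module) ---
# Wire a frozen pretrained backbone to a data loader and collect features as
# NumPy arrays. `some_loader` is a placeholder for any loader yielding
# (images, labels) batches of 3x224x224 inputs.
#
#     from torchvision.models import resnet18
#     extractor, feat_shape = create_feature_extractor(resnet18)
#     print(feat_shape)  # e.g. (512, 7, 7) for 224x224 inputs
#     features, labels = extract_features(extractor, some_loader, as_array=True)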
```
#### File: torchutils/torchutils/tools.py
```python
import numpy as np
def moving_average(x, window=3, mode='full'):
'''
Calculate the moving average over an array.
Summary
-------
This function computes the running mean of an array.
Padding is performed for the "left" side, not for the "right".
Parameters
----------
x : array
Input array.
window : int
Window size.
mode : {'full', 'last'}
Determines whether the full rolling mean history
or only its last element is returned.
Returns
-------
running_mean : float
Rolling mean.
'''
x = np.array(x)
if mode == 'full':
x_padded = np.pad(x, (window-1, 0), mode='constant', constant_values=x[0])
running_mean = np.convolve(x_padded, np.ones((window,))/window, mode='valid')
elif mode == 'last':
if x.size >= window:
running_mean = np.convolve(x[-window:], np.ones((window,))/window, mode='valid')[0]
else:
x_padded = np.pad(x, (window-x.size, 0), mode='constant', constant_values=x[0])
running_mean = np.convolve(x_padded, np.ones((window,))/window, mode='valid')[0]
return running_mean
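# --- Added example (comment only, not part of the original module) ---
# A window of 3 over a short series; the left edge is padded with the first value.
#
#     moving_average([1, 2, 3, 4], window=3)
#     -> [1.0, 1.333..., 2.0, 3.0] (as a NumPy array)
#     moving_average([1, 2, 3, 4], window=3, mode='last')
#     -> 3.0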
def conv_out_shape(input_shape,
kernel_size,
stride=1,
padding=0,
dilation=1,
mode='floor'):
'''
Calculate the output shape of a convolutional layer.
Summary
-------
This function returns the output tensor shape of a convolutional layer.
One needs to pass the input shape and all relevant layer properties as arguments.
The parameter convention of PyTorch's convolutional layer modules is adopted herein,
e.g. see the documentation of the torch.nn.Conv2d class.
Parameters
----------
input_shape : int or array-like
Shape of the layer input tensor.
kernel_size : int or array-like
Size of the convolutional kernels.
stride : int or array-like
Stride parameter.
padding : int or array-like
Padding parameter.
dilation : int or array-like
Dilation parameter.
mode : {'floor', 'ceil'}
Determines whether to floor or to ceil.
Returns
-------
output_shape : int or tuple
Shape of the layer output tensor.
Notes
-----
The same function can be used to determine the output size of pooling layers.
Though, some care regarding the ceil/floor mode has to be taken.
PyTorch's default behavior is to floor the output size.
'''
input_shape = np.array(input_shape)
no_dims = input_shape.size
kernel_size = _make_array(kernel_size, no_dims)
stride = _make_array(stride, no_dims)
padding = _make_array(padding, no_dims)
dilation = _make_array(dilation, no_dims)
if mode == 'floor':
output_shape = np.floor((input_shape + 2*padding - dilation*(kernel_size-1) - 1) / stride + 1).astype('int')
elif mode == 'ceil':
output_shape = np.ceil((input_shape + 2*padding - dilation*(kernel_size-1) - 1) / stride + 1).astype('int')
if no_dims == 1:
output_shape = int(output_shape)
if no_dims >= 2:
output_shape = tuple([int(output_shape[i]) for i in range(no_dims)])
return output_shape
def _make_array(x, no_dims):
'''Transform a scalar into an array with equal entries.'''
return np.array(x) if np.size(x) == no_dims else np.array([x for i in range(no_dims)])
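# --- Added usage sketch (not part of the original module) ---
# The familiar "same" 3x3 convolution with padding 1 keeps a 32x32 input at
# 32x32, while adding stride 2 halves each spatial dimension.
if __name__ == '__main__':
    print(conv_out_shape((32, 32), kernel_size=3, padding=1))            # (32, 32)
    print(conv_out_shape((32, 32), kernel_size=3, stride=2, padding=1))  # (16, 16)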
``` |
{
"source": "josephnavarro/ascii-art-generator",
"score": 4
} |
#### File: ascii-art-generator/src/hashable.py
```python
from PIL import Image
import uuid
class HashableImage:
"""
:
: Hashable PIL Image container. Enables PIL image data to be used as dictionary keys.
:
:
: Attrs:
: Image image : Image object
: UUID _hash : Uniquely hashable element
:
:
"""
__slots__ = [
"image",
"_hash",
]
def __init__(self, image: Image):
self.image = image # type: Image
self._hash = uuid.uuid4() # type: uuid.UUID
@property
def size(self) -> (int, int):
"""
:
: Gets size (i.e. width and height) of the contained image object.
:
:
"""
return self.image.size
def __hash__(self):
"""
:
: Returns hash(self).
:
:
"""
return hash(self._hash)
def convert(self, mode: str) -> Image:
"""
:
: Implements Image.convert(). Note that this returns a PIL Image, not a HashableImage.
:
:
"""
return self.image.convert(mode)
def crop(self, rect: (int, int, int, int)):
"""
:
: Implements Image.crop(). Note that this returns another HashableImage, not a PIL Image.
:
:
"""
return HashableImage(self.image.crop(rect))
def load(self):
"""
:
: Returns pixel access data for the contained image object.
:
:
"""
return self.image.load()
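# --- Added usage sketch (comment only, not part of the original module) ---
# Because HashableImage defines __hash__ via a per-instance UUID, instances can
# key a dictionary or populate a set, e.g. to cache per-tile results:
#
#     tile = HashableImage(Image.new('L', (8, 8)))
#     cache = {tile: 'ascii art for this tile'}
#     sub = tile.crop((0, 0, 4, 4))   # crop() returns another HashableImage
#     cache[sub] = 'ascii art for the sub-tile'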
``` |
{
"source": "josephndep/Traffic-signals-simple-codes--Intermediate-Python-program",
"score": 4
} |
#### File: Traffic-signals-simple-codes--Intermediate-Python-program/Traffic signal codes/patrol.py
```python
import time
from datetime import datetime
from time import sleep
now = datetime.now()
current_time = now.strftime("%H:%M:%S") # Time is of the essence, which is why current_time is included
# Define the traffic function (handles the convoy/ambulance interrupt)
def traffic():
while True:
for X in range(10000):
print("Green....", current_time) # GREEN Traffic lights with timer below
time.sleep(5) #
print("Orange....") # With the orange light
print("Possible convoy or Ambulance", current_time) # Decisions of leaving space for convoys
print('Press tap Ctrl+C for Emergency: ') # or Ambulance can be decided
try:
for i in range(0,5):
sleep(3)
print("No Interruptions...")
except KeyboardInterrupt:
print("OK Blue Traffic lights Activated") # BLUE Activated
print("Blue....")
print("Space for Emergency")
time.sleep(7)
print("Red okay stop", current_time) # Red Traffic lights with Timer below
time.sleep(5)
traffic() # recall the function
```
#### File: Traffic-signals-simple-codes--Intermediate-Python-program/Traffic signal codes/traffic.py
```python
import time
import random
from random import choice
# Green = 2
# Orange = 0
# Red = 1
def traffic_colors():
color = [1, 2]
print('Orange[0]...')
X = (choice(color))
print(X)
if X == 1:
print("Red!..")
time.sleep(3.5)
print("Hey! Hold on")
elif X == 2:
print("Green!...Move on")
traffic_colors()
# for n in range(n, -1, -1):
# print(n, end= '', flush=True)
# print(c, flush=True)
# time.sleep(5)
# pass
``` |
{
"source": "josephneumann/spirit-island-logger",
"score": 3
} |
#### File: spirit-island-logger/app/forms.py
```python
from flask_wtf import FlaskForm
from wtforms import (
ValidationError,
BooleanField,
IntegerField,
SubmitField,
DateField,
SelectMultipleField,
SelectField,
TextAreaField,
PasswordField,
StringField,
)
from wtforms.validators import DataRequired, NumberRange, Optional
class CreateGameForm(FlaskForm):
players = SelectField("Players", validators=[DataRequired()])
expansions = SelectMultipleField("Expansions")
create_game = SubmitField("Create Game")
def validate_spirit_count(form, field):
if field.data and len(field.data) != form.players.data:
raise ValidationError("Number of Spirits selected does not match player count")
def validate_board_count(form, field):
if field.data and len(field.data) != form.players.data:
raise ValidationError("Number of Boards selected does not match player count")
class SpiritCreateGameForm(FlaskForm):
players = SelectField("Players", validators=[DataRequired()])
spirits = SelectMultipleField("Spirits")
expansions = SelectMultipleField("Expansions")
create_game = SubmitField("Create Game")
class EditGameForm(FlaskForm):
players = IntegerField(
"Players",
validators=[
DataRequired(),
NumberRange(min=1, max=4, message="Must be 1-4 players"),
],
)
update_game = SubmitField("Save Game")
date = DateField("Date")
spirits = SelectMultipleField("Spirits", validators=[validate_spirit_count])
boards = SelectMultipleField("Boards", validators=[validate_board_count])
scenario = SelectField("Scenario")
adversary = SelectField("Adversary")
status = SelectField("Status")
rating = IntegerField(
"Rating (1-10)", validators=[Optional(), NumberRange(min=0, max=10)]
)
notes = TextAreaField("Notes")
class ScoreGameForm(FlaskForm):
outcome = SelectField("Outcome")
invader_cards = IntegerField(
"Invader Cards in Deck", validators=[NumberRange(min=0, max=12)]
)
dahan = IntegerField(
"Dahan Left", validators=[NumberRange(min=0, max=50)]
)
blight = IntegerField(
"Blight", validators=[NumberRange(min=0, max=50)]
)
score_game = SubmitField("Calculate Score")
class RandomizeGameForm(FlaskForm):
use_thematic_boards = BooleanField("Thematic")
spirit_max_complexity = SelectField("Max Spirit Complexity")
use_scenario = BooleanField("Use Scenario")
scenario_max_difficulty = SelectField(
"Scenario Max Difficulty", validators=[Optional()]
)
use_adversary = BooleanField("Use Adversary")
adversary_max_difficulty = SelectField(
"Adversary Max Difficulty", validators=[Optional()]
)
force = BooleanField("Force Over-write")
randomize = SubmitField("Randomize Game")
class LoginForm(FlaskForm):
username = StringField("Username")
password = PasswordField("Password")
remember_me = BooleanField("Remember Me")
login = SubmitField("Log In")
class DeleteGameForm(FlaskForm):
confirm_text = StringField("Confirm Text")
delete = SubmitField("Delete")
``` |
{
"source": "josephnglynn/blenderCustomSaveFolder",
"score": 2
} |
#### File: josephnglynn/blenderCustomSaveFolder/CustomRenderOptions.py
```python
import ntpath
import os
import bpy
bl_info = {
"name": "Render TO Specific Directory",
"blender": (2, 80, 0),
"category": "Render",
}
home = os.path.expanduser("~")
RenderOutputDir = home + "/Documents/Blender Render Outputs/"
RenderAnimationDir = home + "/Documents/Blender Video Outputs/"
def checkIfDirectoriesExist(projectName):
if (os.path.exists(RenderOutputDir) == False):
os.mkdir(RenderOutputDir)
if (os.path.exists(RenderOutputDir + projectName) == False):
os.mkdir(RenderOutputDir + projectName)
if (os.path.exists(RenderAnimationDir) == False):
os.mkdir(RenderAnimationDir)
if (os.path.exists(RenderAnimationDir + projectName) == False):
os.mkdir(RenderAnimationDir + projectName)
def getVersion(projectName, dir):
location = dir + projectName + "/config"
v = ""
print("starting")
if os.path.isfile(location):
config = open(location, 'r+')
version = config.readlines()[0]
if (version[len(version)-1] == "9"):
stringVersion = str(version)
strings = []
for i in stringVersion:
strings.append(i)
bigNumber = ""
for i in range(0, len(stringVersion)-1):
if (strings[i] != "."):
bigNumber += strings[i]
numberAsInt = int(bigNumber)
numberAsInt += 1
numberAsString = str(numberAsInt)
writeToFile = [numberAsString, ".", "0"]
config.seek(0)
config.writelines(writeToFile)
for i in writeToFile:
v += i
else:
stringVersion = str(version)
strings = []
for i in stringVersion:
strings.append(i)
numberAsInt = int(version[len(version)-1])
numberAsInt = numberAsInt + 1
numberAsString = str(numberAsInt)
strings[len(stringVersion)-1] = numberAsString
finalVersion = ""
for i in strings:
finalVersion += i
config.seek(0)
config.writelines(finalVersion)
v = finalVersion
else:
config = open(location, 'w+')
config.writelines("1.0")
v = "1.0"
return v
def getFileNameAndLocation(dir):
projectName = os.path.splitext(ntpath.basename(bpy.data.filepath))[0]
checkIfDirectoriesExist(projectName=projectName)
version = getVersion(projectName=projectName, dir=dir)
fileName = dir + projectName + "/" + projectName + "_v" + version + \
"." + str(bpy.context.scene.render.image_settings.file_format).lower()
return fileName
class SimpleRender(bpy.types.Operator):
bl_idname = "myops.render"
bl_label = "Render"
def execute(self, context):
bpy.context.scene.render.image_settings.file_format = "PNG"
bpy.context.scene.render.filepath = getFileNameAndLocation(RenderOutputDir)
bpy.ops.render.render('INVOKE_DEFAULT',animation=False, write_still=True)
return {'FINISHED'}
class SimpleRenderAnimation(bpy.types.Operator):
bl_idname = "myops.renderanimation"
bl_label = "Render Animation"
def execute(self, context):
bpy.context.scene.render.image_settings.file_format = "FFMPEG"
bpy.context.scene.render.ffmpeg.constant_rate_factor = "PERC_LOSSLESS"
bpy.context.scene.render.filepath = getFileNameAndLocation(RenderAnimationDir)
bpy.ops.render.render('INVOKE_DEFAULT',animation=True, write_still=True)
return {'FINISHED'}
class TOPBAR_MT_custom_menu(bpy.types.Menu):
bl_label = "Custom Render"
def draw(self, context):
layout = self.layout
layout.operator("myops.render", text="render")
layout.operator("myops.renderanimation", text="render Animation")
def menu_draw(self, context):
self.layout.menu("TOPBAR_MT_custom_menu")
def register():
bpy.utils.register_class(SimpleRender)
bpy.utils.register_class(SimpleRenderAnimation)
bpy.utils.register_class(TOPBAR_MT_custom_menu)
bpy.types.TOPBAR_MT_editor_menus.append(TOPBAR_MT_custom_menu.menu_draw)
def unregister():
bpy.types.TOPBAR_MT_editor_menus.remove(TOPBAR_MT_custom_menu.menu_draw)
bpy.utils.unregister_class(SimpleRender)
bpy.utils.unregister_class(SimpleRenderAnimation)
bpy.utils.unregister_class(TOPBAR_MT_custom_menu)
if __name__ == "__main__":
register()
``` |
{
"source": "josephniwjc/MINE-Database",
"score": 3
} |
#### File: MINE-Database/minedatabase/metabolomics.py
```python
from __future__ import annotations
import math
import os
import re
import time
import xml.etree.ElementTree as ET
from ast import literal_eval
from typing import Callable, Generator, List, Optional, Set, Tuple
import numpy as np
import pymongo
import minedatabase
from minedatabase.databases import MINE
from minedatabase.utils import mongo_ids_to_mine_ids, score_compounds
MINEDB_DIR = os.path.dirname(minedatabase.__file__)
class MetabolomicsDataset:
"""A class containing all the information for a metabolomics data set."""
# pylint: disable=redefined-outer-name
def __init__(
self,
name: str,
adducts: List[str] = None,
known_peaks: List[Peak] = None,
unknown_peaks: List[Peak] = None,
native_set: Set[str] = set(),
ppm: bool = False,
tolerance: float = 0.001,
halogens: bool = False,
verbose: bool = False,
):
"""Metabolomics Dataset initialization.
Parameters
----------
name : str
Name of metabolomics dataset.
adducts : List[str], optional
List of adduct names, e.g. ["[M-H]-", "[M+H]+"] (defaults to all)
See minedatabase/data/adducts/{Negative/Positive} Adducts full.txt
for a full list of possible adducts.
known_peaks : List[Peak], optional
List of Peak objects annotated with ID and associated data, by
default None.
unknown_peaks : List[Peak], optional
List of Peak objects annotated with associated data, by default
None.
native_set : Set[str], optional
Set of compound IDs native to the organism that generated the
dataset (e.g. IDs from model), by default set().
ppm : bool, optional
If True, tolerance is set in parts per million, and if false
(default), tolerance is set in Daltons.
tolerance : float, optional
Mass tolerance for hits, by default 0.001.
halogens : bool, optional
            Include compounds containing halogens (F, Cl, Br) if True; they
            are filtered out when False (default).
verbose : bool, optional
Prints more info to stdout if True, by default False.
"""
# Load adducts
pos_fp = os.path.join(MINEDB_DIR, "data/adducts/Positive Adducts full.txt")
neg_fp = os.path.join(MINEDB_DIR, "data/adducts/Negative Adducts full.txt")
all_pos_adducts = self._read_adduct_file(pos_fp)
all_neg_adducts = self._read_adduct_file(neg_fp)
if adducts:
self.pos_adducts = list(filter(lambda x: x[0] in adducts, all_pos_adducts))
self.neg_adducts = list(filter(lambda x: x[0] in adducts, all_neg_adducts))
else:
self.pos_adducts = all_pos_adducts
self.neg_adducts = all_neg_adducts
# MongoDB projection for compound search
self.hit_projection = {
"Formula": 1,
"MINE_id": 1,
"SMILES": 1,
"Inchikey": 1,
"Spectra.Positive": 1,
"Spectra.Negative": 1,
"logP": 1
}
# Load peak data and initialize other attributes
self.name = name
self.known_peaks = known_peaks if known_peaks else []
self.unknown_peaks = unknown_peaks if unknown_peaks else []
self.native_set = native_set
self.ppm = ppm
self.tolerance = tolerance
self.halogens = halogens
self.verbose = verbose
self.total_formulas = 0
self.total_hits = 0
self.matched_peaks = 0
self.possible_masses = {"+": [], "-": []}
self.possible_ranges = {"+": [], "-": []}
def __str__(self) -> str:
"""Give string representation.
Returns
-------
self.name : str
Name of the metabolomics dataset.
"""
return self.name
def _read_adduct_file(self, filepath: str) -> List[Tuple]:
"""Read specified adduct file.
Parameters
----------
filepath : str
Path to adduct file.
Returns
-------
adducts : List[Tuple]
A list of (str, float, float) tuples of form
('adduct name', m/z multiplier, adduct mass change).
"""
adducts = []
with open(filepath, "r") as infile:
for line in infile:
if line.startswith("#"):
continue
adduct = line.strip().split("\t")
adduct[0] = adduct[0].strip()
adduct[1] = float(adduct[1])
adduct[2] = float(adduct[2])
adducts.append(tuple(adduct))
return adducts
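    # --- Added illustration (comment only, not part of the original file) ---
    # Each non-comment line of an adduct file is tab-separated, e.g. a line
    # such as "[M+H]+ <tab> 1 <tab> 1.007276" parses to the tuple
    # ('[M+H]+', 1.0, 1.007276) == (name, m/z multiplier, mass shift in Da).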
def enumerate_possible_masses(self, tolerance: float) -> None:
"""Generate all possible masses from unknown peaks and list of
adducts. Saves these mass ranges to self.possible_ranges.
Parameters
----------
tolerance : float
Mass tolerance in Daltons.
"""
for peak in self.unknown_peaks:
if peak.charge == "+":
peak_adducts = self.pos_adducts
else:
peak_adducts = self.neg_adducts
masses, ranges = peak._enumerate_possible_masses(
self, peak_adducts, tolerance
)
self.possible_masses[peak.charge] += masses
self.possible_ranges[peak.charge] += ranges
for charge in ["+", "-"]:
            self.possible_masses[charge] = np.unique(np.array(self.possible_masses[charge]))
def get_rt(self, peak_id: str) -> Optional[float]:
"""Return retention time for peak with given ID. If not found, returns
None.
Parameters
----------
peak_id : str
ID of peak as listed in dataset.
Returns
-------
rt : float, optional
Retention time of peak with given ID, None if not found.
"""
rt = None
for peak in self.unknown_peaks + self.known_peaks:
if peak_id == peak.name:
rt = peak.r_time
break
return rt
def find_db_hits(
self,
peak: Peak,
db: MINE,
core_db: MINE,
adducts: List[Tuple[str, float, float]],
) -> None:
"""This function searches the database for matches of a peak given
adducts and updates the peak object with that information.
Parameters
----------
peak : Peak
Peak object to query against MINE compound database.
db : MINE
MINE database to query.
adducts : List[Tuple[str, float, float]]
List of adducts. Each adduct contains three values in a tuple:
(adduct name, mass multiplier, ion mass).
"""
# find nominal mass for a given m/z for each adduct and the max and
# min values for db
potential_masses = [(peak.mz - adduct[2]) / adduct[1] for adduct in adducts]
        if self.ppm:
            precisions = [(self.tolerance / 100000.0) * pm for pm in potential_masses]
        else:
            precisions = [self.tolerance * 0.001 for _ in potential_masses]  # mDa to Da
        upper_bounds = [pm + prec for pm, prec in zip(potential_masses, precisions)]
        lower_bounds = [pm - prec for pm, prec in zip(potential_masses, precisions)]
# search database for hits in the each adducts mass range that have no
# innate charge.
mongo_ids = []
for i, adduct in enumerate(adducts):
# build the query by adding the optional terms
query_terms = [
{"Mass": {"$gte": float(lower_bounds[i])}},
{"Mass": {"$lte": float(upper_bounds[i])}},
{"Charge": 0},
{"MINES": {"$eq": db.name}},
]
if adduct[0] == "[M]+":
query_terms[2] = {"Charge": 1}
for compound in core_db.compounds.find(
{"$and": query_terms}, self.hit_projection
):
                # Skip compounds containing halogens unless the halogens flag
                # is set, moving to the next compound before it is counted or
                # stored.
if not self.halogens:
if re.search("F[^e]|Cl|Br", compound["Formula"]):
continue
# update the total hits for the peak and make a note if the
# compound is in the native_set
peak.total_hits += 1
if compound["_id"] in self.native_set:
peak.native_hit = True
compound["native_hit"] = True
peak.formulas.add(compound["Formula"])
compound["adduct"] = adduct[0]
compound["peak_name"] = peak.name
mongo_ids.append(compound["_id"])
peak.isomers.append(compound)
# Get MINE IDs in bulk
mongo_to_mine = mongo_ids_to_mine_ids(mongo_ids, core_db)
for cpd in peak.isomers:
cpd["MINE_id"] = mongo_to_mine[cpd["_id"]]
def annotate_peaks(self, db: MINE, core_db: MINE) -> None:
"""This function iterates through the unknown peaks in the dataset and
searches the database for compounds that match a peak m/z given the
adducts permitted. Statistics on the annotated data set are printed.
Parameters
----------
db : MINE
MINE database.
core_db : MINE
Core database containing spectra info.
"""
for i, peak in enumerate(self.unknown_peaks):
positive = (
peak.charge == "+"
or peak.charge == "Positive"
or (peak.charge and isinstance(peak.charge, bool))
)
negative = (
peak.charge == "-"
or peak.charge == "Negative"
or (not peak.charge and isinstance(peak.charge, bool))
)
if positive:
self.find_db_hits(peak, db, core_db, self.pos_adducts)
elif negative:
self.find_db_hits(peak, db, core_db, self.neg_adducts)
else:
raise ValueError(
"Invalid compound charge specification. "
'Please use "+" or "Positive" for '
'positive ions and "-" or "Negative" for '
f"negative ions. (charge = {peak.charge})"
)
if peak.total_hits > 0:
self.matched_peaks += 1
self.total_hits += peak.total_hits
self.total_formulas += len(peak.formulas)
if self.verbose:
pct_done = int(float(i) / float(len(self.unknown_peaks)) * 100)
print(f"{pct_done} percent of peaks processed")
# Scoring functions appear before the Peak class because dot_product method is
# default object for Peak.score_isomers
def dot_product(x: List[tuple], y: List[tuple], epsilon: float = 0.01) -> float:
"""Calculate the dot product of two spectra, allowing for some variability
in mass-to-charge ratios
Parameters
----------
x : List[tuple]
First spectra m/z values.
y : List[tuple]
Second spectra m/z values.
epsilon : float, optional
Mass tolerance in Daltons, by default 0.01.
Returns
-------
dot_prod : float
Dot product of x and y.
"""
z = 0
n_v1 = 0
n_v2 = 0
for int1, int2 in _approximate_matches(x, y, epsilon):
z += int1 * int2
n_v1 += int1 * int1
n_v2 += int2 * int2
dot_prod = z / (math.sqrt(n_v1) * math.sqrt(n_v2))
return dot_prod
def jaccard(x: List[tuple], y: List[tuple], epsilon: float = 0.01) -> float:
"""Calculate the Jaccard Index of two spectra, allowing for some
variability in mass-to-charge ratios
Parameters
----------
x : List[tuple]
First spectra m/z values.
y : List[tuple]
Second spectra m/z values.
epsilon : float, optional
Mass tolerance in Daltons, by default 0.01.
Returns
-------
jaccard_index : float
Jaccard Index of x and y.
"""
intersect = 0
for val1, val2 in _approximate_matches(x, y, epsilon):
if val1 and val2:
intersect += 1
jaccard_index = intersect / float((len(x) + len(y) - intersect))
return jaccard_index
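# --- Added worked example (comment only, not part of the original module) ---
# Two tiny spectra as (m/z, intensity) tuples. Within the 0.01 Da window,
# 100.0 matches 100.005 and 150.0 matches 150.002, while 200.0 goes unmatched.
#
#     a = [(100.0, 1.0), (150.0, 0.5), (200.0, 0.2)]
#     b = [(100.005, 1.0), (150.002, 0.5)]
#     dot_product(a, b)   # ~0.98, cosine similarity of the matched intensities
#     jaccard(a, b)       # 2 / (3 + 2 - 2) = 0.666...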
def _approximate_matches(
list1: List[tuple], list2: List[tuple], epsilon: float = 0.01
) -> Generator:
"""Takes two list of tuples and searches for matches of tuples first value
within the supplied epsilon. Emits tuples with the tuples second values
where found. if a value in one dist does not match the other list, it is
emitted alone but with a 0 as the other value.
Parameters
----------
list1 : list
First list of tuples.
list2 : list
Second list of tuples.
epsilon : float, optional
Maximum difference, by default 0.01.
Yields
-------
Generator
Generator that yields found matches.
"""
list1.sort()
list2.sort()
list1_index = 0
list2_index = 0
while list1_index < len(list1) or list2_index < len(list2):
if list1_index == len(list1):
yield (0, list2[list2_index][1])
list2_index += 1
continue
if list2_index == len(list2):
yield (list1[list1_index][1], 0)
list1_index += 1
continue
list1_element = list1[list1_index][0]
list2_element = list2[list2_index][0]
difference = abs(list1_element - list2_element)
if difference < epsilon:
yield (list1[list1_index][1], list2[list2_index][1])
list1_index += 1
list2_index += 1
elif list1_element < list2_element:
yield (list1[list1_index][1], 0)
list1_index += 1
elif list2_element < list1_element:
yield (0, list2[list2_index][1])
list2_index += 1
class Peak:
"""Peak object which contains peak metadata as well as mass, retention
time, spectra, and any MINE database hits.
Parameters
----------
name : str
Name or ID of the peak.
r_time : float
Retention time of the peak.
mz : float
Mass-to-charge ratio (m/z) of the peak.
charge : str
Charge of the peak, "+" or "-".
inchi_key : str, optional
InChI key of the peak, if already identified, by default None.
ms2 : List[float], optional
MS2 spectra m/z values for this peak, by default None.
Attributes
----------
isomers : List[Dict]
List of compound documents in JSON (dict) format.
formulas : Set[str]
All the unique compound formulas from compounds found for this peak.
total_hits : int
Number of compound hits for this peak.
native_hit : bool
Whether this peak matches a compound provided in the native set.
"""
def __init__(
self,
name: str,
r_time: float,
mz: float,
charge: str,
inchi_key: str = None,
ms2: List[(float, float)] = None,
) -> None:
self.name = name
if r_time:
self.r_time = float(r_time)
else:
self.r_time = None
self.mz = float(mz)
self.charge = charge
self.inchi_key = inchi_key
self.ms2peaks = ms2
self.isomers = []
self.formulas = set()
self.total_hits = 0
self.native_hit = False
def __str__(self) -> str:
"""String representation of the peak.
Returns
-------
str
Name of the peak.
"""
return self.name
def __repr__(self) -> str:
"""Print representation of the peak.
Returns
-------
str
Print representation of the peak.
"""
        ms2_info = (
            f"Contains {len(self.ms2peaks)} MS2 peaks starting with {self.ms2peaks[:3]}..."
            if self.ms2peaks
            else "Contains no MS2 peaks"
        )
        return (
            f"Peak {self.name}: {self.mz} m/z, {self.r_time} RT, {self.charge} mode, "
            f"{ms2_info}"
        )
def _enumerate_possible_masses(
self, met_dataset: MetabolomicsDataset, adducts: List[str], tolerance: float
) -> (List[float], List[Tuple[float, float]]):
"""Generate all possible masses for a given peak.
Parameters
----------
met_dataset : MetabolomicsDataset
Instance of MetabolomicsDataset with associated adducts.
adducts : List[str]
List of adducts, charge should match charge of this peak.
tolerance : float
Mass tolernace in Daltons.
Returns
-------
possible_masses : List[float]
List of possible masses.
possible_ranges : List[Tuple(float, float)]
List of lower and upper bounds provided aggregate mass + tolerance.
"""
possible_masses = []
possible_ranges = []
if self.charge == "+":
adducts = met_dataset.pos_adducts
else:
adducts = met_dataset.neg_adducts
for adduct in adducts:
possible_mass = (self.mz - adduct[2]) / adduct[1]
possible_masses.append(possible_mass)
possible_ranges.append(
(
possible_mass - tolerance,
possible_mass + tolerance,
self.name,
adduct[0],
)
)
return possible_masses, possible_ranges
def score_isomers(
self,
metric: Callable[[list, list], float] = dot_product,
energy_level: int = 20,
tolerance: float = 0.005,
) -> None:
"""Scores and sorts isomers based on mass spectra data.
Calculates the cosign similarity score between the provided ms2 peak
list and pre-calculated CFM-spectra and sorts the isomer list
according to this metric.
Parameters
----------
metric : function, optional
The scoring metric to use for the spectra. Function must accept 2
lists of (mz, intensity) tuples and return a score, by default dot_product.
energy_level : int, optional
The Fragmentation energy level to use. May be 10,
20 or 40., by default 20.
tolerance : float, optional
The precision to use for matching m/z in mDa, by default 0.005.
Raises
------
ValueError
Empty ms2 peak.
"""
if not self.ms2peaks:
raise ValueError("The ms2 peak list is empty")
if self.charge == "+":
spec_key = "Positive"
else:
spec_key = "Negative"
for i, hit in enumerate(self.isomers):
if spec_key in hit['Spectra']:
hit_spec = hit['Spectra'][spec_key][f"{energy_level}V"]
score = metric(self.ms2peaks, hit_spec, epsilon=tolerance)
rounded_score = round(score * 1000)
self.isomers[i]["Spectral_score"] = rounded_score
else:
self.isomers[i]["Spectral_score"] = 0
self.isomers.sort(key=lambda x: x["Spectral_score"], reverse=True)
def get_KEGG_comps(
db: MINE, core_db: MINE, kegg_db: pymongo.database.Database, model_ids: List[str]
) -> set:
"""Get KEGG IDs from KEGG MINE database for compounds in model(s).
Parameters
----------
db : MINE
MINE Mongo database.
kegg_db : pymongo.database.Database
Mongo database with annotated organism metabolomes from KEGG.
model_ids : List[str]
List of organism identifiers from KEGG.
Returns
-------
set
MINE IDs of compounds that are linked to a KEGG ID in at least one of
the organisms in model_ids.
"""
kegg_ids, _ids = set(), set()
for model in kegg_db.models.find({"_id": {"$in": model_ids}}):
comp_ids = model['Compounds']
kegg_ids = kegg_ids.union(comp_ids)
kegg_id_list = list(kegg_ids) # sets are not accepted as query params for pymongo
for comp in core_db.compounds.find({"$and": [{"KEGG_id": {"$in": kegg_id_list}}, {"MINES": db.name}]}):
_ids.add(comp["_id"])
return _ids
def read_adduct_names(filepath: str) -> List[str]:
"""Read adduct names from text file at specified path into a list.
Parameters
----------
filepath : str
Path to adduct text file.
Returns
-------
adducts : list
Names of adducts in text file.
Notes
-----
Not used in this codebase but used by MINE-Server to validate adduct input.
"""
with open(filepath) as infile:
adducts = [line.split(" \t")[0] for line in infile if not line[0] == "#"]
return adducts
def read_mgf(input_string: str, charge: bool, ms2_delim="\t") -> List[Peak]:
"""Parse mgf metabolomics data file.
Parameters
----------
input_string : str
Metabolomics input data file.
charge : bool
True if positive, False if negative.
ms2_delim : str
Delimiter for whitespace between intensity and m/z value. Usually tab
but can also be a space in some MGF files. Tab by default.
Returns
-------
peaks : List[Peak]
A list of Peak objects.
"""
peaks = []
ms2 = []
r_time = None
for line in input_string.split("\n"):
sl = line.strip(" \r\n").split("=")
if sl[0] == "PEPMASS":
if len(sl) > 1:
mass = sl[1]
else:
mass = None
elif sl[0] == "TITLE":
if len(sl) > 1:
name = sl[1]
else:
name = ""
elif sl[0] == "RTINSECONDS":
r_time = sl[1]
elif sl[0] == "END IONS":
peaks.append(Peak(name, r_time, mass, charge, "False", ms2=ms2))
ms2 = []
else:
try:
mz, i = sl[0].split(ms2_delim)
ms2.append((float(mz), float(i)))
except ValueError:
continue
return peaks
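# --- Added illustration (comment only, not part of the original module) ---
# A minimal MGF block that read_mgf understands; each BEGIN/END IONS section
# becomes one Peak and the fragment rows (m/z and intensity, tab-separated by
# default) populate its ms2 list.
#
#     BEGIN IONS
#     TITLE=example peak
#     PEPMASS=180.0634
#     RTINSECONDS=75.2
#     83.0502    1200.0
#     110.0712   800.0
#     END IONS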
def read_msp(input_string: str, charge: bool) -> List[Peak]:
"""Parse msp metabolomics data file.
Parameters
----------
input_string : str
Metabolomics input data file.
charge : bool
True if positive, False if negative.
Returns
-------
peaks : List[Peak]
A list of Peak objects.
"""
peaks = []
for spec in input_string.strip().split("\n\n"):
ms2 = []
inchikey = "False"
r_time = 0
name = "N/A"
for line in spec.split("\n"):
sl = line.split(": ")
sl[0] = sl[0].replace(" ", "").replace("/", "").upper()
if sl[0] == "PRECURSORMZ":
mass = sl[1]
elif sl[0] == "NAME":
name = sl[1]
elif sl[0] == "RETENTIONTIME":
r_time = sl[1]
elif sl[0] == "INCHIKEY":
inchikey = sl[1]
elif line and line[0].isdigit():
try:
row = re.split("[\t ]", line)
ms2.append((float(row[0]), float(row[1])))
except ValueError:
continue
peaks.append(Peak(name, r_time, mass, charge, inchikey, ms2=ms2))
return peaks
def read_mzxml(input_string: str, charge: bool) -> List[Peak]:
"""Parse mzXML metabolomics data file.
Parameters
----------
input_string : str
Metabolomics input data file.
charge : bool
True if positive, False if negative.
Returns
-------
List[Peak]
A list of Peak objects.
"""
peaks = []
root = ET.fromstring(input_string)
prefix = root.tag.strip("mzXML")
for scan in root.findall(f".//{prefix}scan"):
# somewhat counter intuitively we will get the peak info from the
# second fragments precursor info.
if scan.attrib["msLevel"] == "2":
precursor = scan.find(f"./{prefix}precursorMz")
mz = precursor.text
r_time = scan.attrib["retentionTime"][2:-1]
name = f"{mz} @ {r_time}"
charge = scan.attrib["polarity"]
peaks.append(Peak(name, r_time, mz, charge, "False"))
return peaks
class Struct:
"""convert key-value pairs into object-attribute pairs."""
def __init__(self, **entries):
self.__dict__.update(entries)
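# Struct simply mirrors a dict as attributes, so a plain ms_params dict can be
# used with attribute access, e.g.:
#
#     params = Struct(**{"tolerance": 10.0, "ppm": False})
#     params.tolerance  # -> 10.0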
def ms_adduct_search(
db: MINE,
core_db: MINE,
keggdb: pymongo.database.Database,
text: str,
text_type: str,
ms_params,
) -> List:
"""Search for compound-adducts matching precursor mass.
Parameters
----------
db : MINE
Contains compound documents to search.
core_db : MINE
Contains extra info (including spectra) for compounds in db.
keggdb : pymongo.database.Database
Contains models with associated compound documents.
text : str
Text as in metabolomics datafile for specific peak.
text_type : str
Type of metabolomics datafile (mgf, mzXML, and msp are supported). If
text, assumes m/z values are separated by newlines (and set text_type
to "form").
ms_params : dict
Specifies search settings, using the following key-value pairs:
------------------------
Required Key-Value Pairs
------------------------
"tolerance": float specifying tolerance for m/z, in mDa by default.
Can specify in ppm if "ppm" key's value is set to True.
"charge": bool ('+' for positive, '-' for negative).
------------------------
Optional Key-Value Pairs
------------------------
"adducts": list of adducts to use. If not specified, uses all adducts.
"models": List of model _ids. If supplied, score compounds higher if
present in model. ["eco"] by default (E. coli).
"ppm": bool specifying whether "tolerance" is in mDa or ppm. Default
value for ppm is False (so tolerance is in mDa by default).
"kovats": length 2 tuple specifying min and max kovats retention index
to filter compounds (e.g. (500, 1000)).
"logp": length 2 tuple specifying min and max logp to filter compounds
(e.g. (-1, 2)).
"halogens": bool specifying whether to filter out compounds containing
F, Cl, or Br. Filtered out if set to True. False by default.
Returns
-------
ms_adduct_output : list
Compound JSON documents matching ms adduct query.
"""
print(
f"<MS Adduct Search: TextType={text_type}, Text={text}, Parameters={ms_params}>"
)
name = text_type + time.strftime("_%d-%m-%Y_%H:%M:%S", time.localtime())
if isinstance(ms_params, dict):
ms_params = Struct(**ms_params)
dataset = MetabolomicsDataset(
name,
adducts=ms_params.adducts,
ppm=ms_params.ppm,
tolerance=ms_params.tolerance,
halogens=ms_params.halogens,
verbose=ms_params.verbose,
)
ms_adduct_output = []
if text_type == "form":
for mz in text.split("\n"):
dataset.unknown_peaks.append(
Peak(mz, 0, float(mz), ms_params.charge, "False")
)
elif text_type == "mgf":
dataset.unknown_peaks = read_mgf(text, ms_params.charge)
elif text_type == "mzXML" or text_type == "mzxml":
dataset.unknown_peaks = read_mzxml(text, ms_params.charge)
elif text_type == "msp":
dataset.unknown_peaks = read_msp(text, ms_params.charge)
else:
raise IOError(f"{text_type} files not supported")
if ms_params.models:
dataset.native_set = get_KEGG_comps(db, core_db, keggdb, ms_params.models)
else:
dataset.native_set = set()
dataset.annotate_peaks(db, core_db)
if ms_params.logp:
min_logp, max_logp = ms_params.logp
else:
min_logp, max_logp = (-1000, 1000)
for peak in dataset.unknown_peaks:
for hit in peak.isomers:
if min_logp < hit['logP'] < max_logp:
ms_adduct_output.append(hit)
if ms_params.models:
ms_adduct_output = score_compounds(
db,
ms_adduct_output,
ms_params.models[0],
parent_frac=0.75,
reaction_frac=0.25,
)
return ms_adduct_output
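# A hedged example of the ms_params dict described in the docstring above
# (values are illustrative only; "verbose" is also read by this function):
#
#     example_params = {
#         "tolerance": 2.0,       # mDa, since "ppm" is False
#         "charge": True,         # positive mode
#         "ppm": False,
#         "adducts": ["[M+H]+"],
#         "models": ["eco"],
#         "logp": (-1, 2),
#         "halogens": False,
#         "verbose": False,
#     }
#     # hits = ms_adduct_search(db, core_db, keggdb, "261.0369\n280.2250",
#     #                         "form", example_params)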
def ms2_search(
db: MINE, core_db: MINE, keggdb: pymongo.database.Database, text: str, text_type: str, ms_params
) -> List:
"""Search for compounds matching MS2 spectra.
Parameters
----------
db : MINE
Contains compound documents to search.
core_db : MINE
Contains extra info (including spectra) for compounds in db.
keggdb : pymongo.database.Database
Contains models with associated compound documents.
text : str
Text as in metabolomics datafile for specific peak.
text_type : str
Type of metabolomics datafile (mgf, mzXML, and msp are supported). If
text, assumes m/z values are separated by newlines (and set text_type
to "form").
ms_params : dict
Specifies search settings, using the following key-value pairs:
------------------------
Required Key-Value Pairs
------------------------
"tolerance": float specifying tolerance for m/z, in mDa by default.
Can specify in ppm if "ppm" key's value is set to True.
"charge": bool (1 for positive, 0 for negative).
"energy_level": int specifying fragmentation energy level to use. May
be 10, 20, or 40.
"scoring_function": str describing which scoring function to use. Can
be either "jaccard" or "dot product".
------------------------
Optional Key-Value Pairs
------------------------
"adducts": list of adducts to use. If not specified, uses all adducts.
"models": List of model _ids. If supplied, score compounds higher if
present in model.
"ppm": bool specifying whether "tolerance" is in mDa or ppm. Default
value for ppm is False (so tolerance is in mDa by default).
"kovats": length 2 tuple specifying min and max kovats retention index
to filter compounds (e.g. (500, 1000)).
"logp": length 2 tuple specifying min and max logp to filter compounds
(e.g. (-1, 2)).
"halogens": bool specifying whether to filter out compounds containing
F, Cl, or Br. Filtered out if set to True. False by default.
Returns
-------
ms_adduct_output : list
Compound JSON documents matching ms2 search query.
"""
print(f"<MS2 Search: TextType={text_type}, Parameters={ms_params}, Spectra={repr(text)}>")
name = text_type + time.strftime("_%d-%m-%Y_%H:%M:%S", time.localtime())
if isinstance(ms_params, dict):
ms_params = Struct(**ms_params)
dataset = MetabolomicsDataset(
name,
adducts=ms_params.adducts,
ppm=ms_params.ppm,
tolerance=ms_params.tolerance,
halogens=ms_params.halogens,
verbose=ms_params.verbose,
)
ms_adduct_output = []
if text_type == "form":
split_form = [x.split() for x in text.strip().split("\n")]
ms2_data = [(float(mz), float(i)) for mz, i in split_form[1:]]
peak = Peak(
split_form[0][0],
0,
float(split_form[0][0]),
ms_params.charge,
"False",
ms2=ms2_data,
)
dataset.unknown_peaks.append(peak)
elif text_type == "mgf":
dataset.unknown_peaks = read_mgf(text, ms_params.charge)
elif text_type == "mzXML":
dataset.unknown_peaks = read_mzxml(text, ms_params.charge)
elif text_type == "msp":
dataset.unknown_peaks = read_msp(text, ms_params.charge)
else:
raise IOError(f"{text_type} files not supported")
if not ms_params.models:
ms_params.models = ["eco"]
if ms_params.models:
dataset.native_set = get_KEGG_comps(db, core_db, keggdb, ms_params.models)
dataset.annotate_peaks(db, core_db)
if ms_params.logp:
min_logp, max_logp = ms_params.logp
else:
min_logp, max_logp = (-1000, 1000)
for peak in dataset.unknown_peaks:
if ms_params.scoring_function == "jaccard":
if not ms_params.ppm:
peak.score_isomers(
metric=jaccard,
energy_level=ms_params.energy_level,
tolerance=float(ms_params.tolerance) / 1000,
)
else:
peak.score_isomers(metric=jaccard, energy_level=ms_params.energy_level)
elif ms_params.scoring_function == "dot product":
if not ms_params.ppm:
peak.score_isomers(
metric=dot_product,
energy_level=ms_params.energy_level,
tolerance=float(ms_params.tolerance) / 1000,
)
else:
peak.score_isomers(
metric=dot_product, energy_level=ms_params.energy_level
)
else:
raise ValueError(
'ms_params["scoring_function"] must be either '
'"jaccard" or "dot product".'
)
for hit in peak.isomers:
if min_logp < hit['logP'] < max_logp:
ms_adduct_output.append(hit)
if ms_params.models:
ms_adduct_output = score_compounds(
db,
ms_adduct_output,
ms_params.models[0],
parent_frac=0.75,
reaction_frac=0.25,
)
return ms_adduct_output
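# Compared with ms_adduct_search, two extra required keys are read here
# (illustrative values, reusing example_params from the sketch above):
#
#     ms2_params = {**example_params,
#                   "energy_level": 20,
#                   "scoring_function": "dot product"}
#     # hits = ms2_search(db, core_db, keggdb, spectra_text, "msp", ms2_params)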
def spectra_download(db: MINE, mongo_id: str = None) -> str:
"""Download one or more spectra for compounds matching a given query.
Parameters
----------
db : MINE
Contains compound documents to search.
mongo_id : str, optional (default: None)
Mongo _id of the compound whose spectra should be downloaded.
Returns
-------
spectral_library : str
Text of all matching spectra, including headers and peak lists.
"""
def print_peaklist(peaklist):
text = [f"Num Peaks: {len(peaklist)}"]
for x in peaklist:
text.append(f"{x[0]} {x[1]}")
text.append("")
return text
spectral_library = []
msp_projection = {
"MINE_id": 1,
"KEGG_id": 1,
"Mass": 1,
"Inchikey": 1,
"Formula": 1,
"SMILES": 1,
"logP": 1,
"Charge": 1,
"Spectra.Positive": 1,
"Spectra.Negative": 1,
}
query_dict = {"_id": mongo_id}
compound = db.compounds.find_one(query_dict, msp_projection)
# add header
header = []
header.append(f"Name: MINE Compound {compound['_id']}")
for k, v in compound.items():
if k not in {"_id", "Spectra"}:
header.append(f"{k}: {v}")
header.append("Instrument: CFM-ID 4.0")
# add peak lists
if compound["Spectra"]:
for ion_mode in ["Positive", "Negative"]:
for energy, spec in compound["Spectra"][ion_mode].items():
spectral_library += header
spectral_library += [f"Ionization: {ion_mode}", f"Energy: {energy}"]
spectral_library += print_peaklist(spec)
spectral_library = "\n".join(spectral_library)
return spectral_library
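# Usage sketch (hypothetical compound _id): returns an MSP-style text block with
# a header plus peak list for every (ion mode, energy level) spectrum stored for
# that compound.
#
#     msp_text = spectra_download(db, "Ccffda1b2e82fcdb0e1e710cad4d5f70df7a5d74f")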
```
#### File: josephniwjc/MINE-Database/pickaxe_run_template.py
```python
import datetime
import multiprocessing
import pickle
import time
import pymongo
from rdkit import DataStructs
from rdkit.Chem import AllChem
# Make sure you have minedatabase installed! (either from GitHub or via pip)
from minedatabase.filters import (
AtomicCompositionFilter,
MCSFilter,
MetabolomicsFilter,
MWFilter,
SimilarityFilter,
SimilaritySamplingFilter
)
# Uncomment to use these. Pickaxe doesn't come packaged with dependencies by default.
# from minedatabase.filters import ThermoFilter
# from minedatabase.filters import ReactionFeasibilityFilter
from minedatabase.pickaxe import Pickaxe
from minedatabase.rules import metacyc_generalized, metacyc_intermediate
start = time.time()
###############################################################################
#### Database and output information
# The default mongo is localhost:27017
# Connecting remotely requires the location of the database
# as well as username/password if security is being used.
#
# The mongo_uri is read from mongo_uri.csv
# Database writing options
write_db = False
# Database name and message to print in metadata
database = "example_db"
message = "Example run to show how pickaxe is run."
# Whether to write compounds to core compound database with extra info
write_core = False
# Force overwrite existing database
database_overwrite = False
# Use local DB, i.e. localhost:27017
use_local = False
# Writing compound and reaction csv files locally
write_to_csv = False
output_dir = "."
###############################################################################
###############################################################################
#### Starting Compounds, Cofactors, and Rules
# Input compounds in a csv file with headings:
# id,smiles
input_cpds = "./example_data/starting_cpds_single.csv"
# Rule specification and generation. Rules can be manually created or
# metacyc_intermediate or metacyc_generalized can provide correctly formatted
# biological reactions derived from metacyc.
#
# See the documentation for description of options.
rule_list, coreactant_list, rule_name = metacyc_intermediate(
n_rules=None,
fraction_coverage=0.2,
anaerobic=True,
exclude_containing = ["aromatic", "halogen"]
)
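# Hypothetical alternative: the generalized rule set imported above can be
# requested in the same way, e.g. by MetaCyc reaction coverage.
# rule_list, coreactant_list, rule_name = metacyc_generalized(
#     fraction_coverage=0.9
# )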
###############################################################################
###############################################################################
# Core Pickaxe Run Options
generations = 1 # Total rounds of rule applications
processes = 1 # Number of processes for parallelization
verbose = False # Display RDKit warnings and errors
# These are for MINE-Database generation and advanced options.
# Be careful changing these.
inchikey_blocks_for_cid = 1 # Number of inchi key blocks to gen cid
explicit_h = False # use explicit hydrogens in rules
kekulize = True # kekulize molecules
neutralise = True # Neutralise all molecules when loading
quiet = True # Silence errors
indexing = False #
###############################################################################
###############################################################################
#### Filtering and Sampling Options
#############################################
# Global Filtering Options
# Path to target cpds file (not required for all filters)
target_cpds = "./example_data/target_list_many.csv"
# Load compounds even without a filter
# Can be paired with prune_to_targets to reduce end network
load_targets_without_filter = True
# Should targets be flagged for reaction
react_targets = False
# Prune generated network to contain only compounds
# that are used to generate a target
prune_to_targets = False
# Apply filter after final generation, before pruning
filter_after_final_gen = True
##########################################
# Molecular Weight Filter options.
# Removes compounds not in range of [min_MW, max_MW]
# Apply this filter?
MW_filter = True
# Minimum MW in g/mol. None gives no lower bound.
min_MW = 100
# Maximum MW in g/mol. None gives no upper bound.
max_MW = 150
##########################################
# Atomic Composition Filter options.
# Filters compounds that do not fall within atomic composition range
# Only elements specified will be filtered
# Apply this filter?
atomic_composition_filter = False
# Atomic composition constraint specification
atomic_composition_constraints = {
"C": [4, 7],
"O": [5, 5]
}
##########################################
# Thermodynamics Filter options.
# Uses eQuilibrator to filter by ∆Gr
# Information for use of eQuilibrator
# URI of eQuilibrator DB, either postgres URI, or sqlite file name location
eq_uri = "compounds.sqlite"
# Maximum allowable ∆Gr in kJ/mol
dg_max = 10
# conditions
p_h = 7
p_mg = 3
ionic_strength = 0.15
# comment below line and uncomment other definition if using thermo filter
thermo_filter = None
# thermo_filter = ThermoFilter(
# eq_uri=eq_uri,
# dg_max=dg_max,
# p_h=p_h,
# p_mg=p_mg,
# ionic_strength=ionic_strength
# )
##########################################
# Feasibility Filter options.
# Checks Feasibility of reaction
# Apply this filter?
feasibility_filter = False
# Which generations to filter; an empty list filters all
generation_list = []
last_generation_only = True
# comment below line and uncomment other definition if using feasibility filter
feasibility_filter = None
# uncomment below if using feasibility filter (note this requires extra dependencies)
# feasibility_filter = ReactionFeasibilityFilter(
# generation_list=generation_list,
# last_generation_only=last_generation_only
# )
##########################################
# Similarity Filtering options.
# Filters by similarity score, uses default RDKit fingerprints and tanimoto by default
# Apply this filter?
similarity_filter = True
# Methods to calculate similarity by, default is RDkit and Tanimoto
# Supports Morgan Fingerprints and Dice similarity as well.
cutoff_fingerprint_method = "Morgan"
# arguments to pass to fingerprint_method
cutoff_fingerprint_args = {"radius": 2}
cutoff_similarity_method = "Tanimoto"
# Similarity filter threshold. Can be single number or a list with length at least
# equal to the number of generations (+1 if filtering after expansion)
similarity_threshold = [0, 0.2, 0.7]
# Only accepts compounds whose similarity is increased in comparison to their parent
increasing_similarity = False
##########################################
# Similarity Sampling Options
# Samples by similarity score
# Uses default RDKit fingerprints and tanimoto by default, but supports
# Morgan and dice
# Apply this sampler?
similarity_sample = True
# Number of compounds per generation to sample
sample_size = 100
# Default is RDKit
sample_fingerprint_method = "Morgan"
# arguments to pass to fingerprint_method
sample_fingerprint_args = {"radius": 2}
sample_similarity_method = "Tanimoto"
def weight(score):
"""weight is a function that accepts a similarity score as the sole argument
and returns a scaled value.
"""
return score**4
# How to represent the function in text for database entry
weight_representation = "score^4"
##########################################
# Maximum common substructure (MCS) filter
# Apply this filter?
mcs_filter = False
# Finds the MCS of the target and compound and identifies fraction of target
# the MCS composes
crit_mcs = [0.3, 0.8, 0.95]
##########################################
# Metabolomics Filter Options
# Apply this filter?
metabolomics_filter = False
# Path to csv with list of detected masses (and optionally, retention times).
# For example: Peak ID, Retention Time, Aggregate M/Z, Polarity, Compound Name,
# Predicted Structure (smile), ID
#
# Peak1, 6.33, 74.0373, negative, propionic acid, CCC(=O)O, yes
# Peak2, 26.31, 84.06869909, positive, , , no
# ...
met_data_path = "./local_data/ADP1_Metabolomics_PeakList_final.csv"
# Name of dataset
met_data_name = "ADP1_metabolomics"
# Adducts to add to each mass in mass list to create final list of possible
# masses.
# See "./minedatabase/data/adducts/All adducts.txt" for options.
possible_adducts = ["[M+H]+", "[M-H]-"]
# Tolerance in Da
mass_tolerance = 0.001
# Retention Time Filter Options (optional but included in metabolomics filter)
# Path to pickled machine learning predictor (SMILES => RT)
rt_predictor_pickle_path = "../RT_Prediction/final_RT_model.pickle"
# Allowable deviation in predicted RT (units just have to be consistent with dataset)
rt_threshold = 4.5
# Mordred descriptors to use as input to model (must be in same order as in trained model)
# If None, will try to use all (including 3D) mordred descriptors
rt_important_features = ["nAcid", "ETA_dEpsilon_D", "NsNH2", "MDEO-11"]
###############################################################################
###############################################################################
# Verbose output
print_parameters = True
def print_run_parameters():
"""Write relevant parameters."""
def print_parameter_list(plist):
for i in plist:
print(f"--{i}: {eval(i)}")
print("\n-------------Run Parameters-------------")
print("\nRun Info")
print_parameter_list(["coreactant_list", "rule_name", "input_cpds"])
print("\nExpansion Options")
print_parameter_list(["generations", "processes"])
print("\nGeneral Filter Options")
print_parameter_list(
[
"filter_after_final_gen",
"react_targets",
"prune_to_targets",
]
)
if similarity_sample:
print("\nTanimoto Sampling Filter Options")
print_parameter_list(
[
"sample_size",
"weight_representation",
"sample_fingerprint_args",
"sample_fingerprint_method",
"sample_similarity_method"
]
)
if similarity_filter:
print("\nTanimoto Threshold Filter Options")
print_parameter_list(
[
"similarity_threshold",
"increasing_similarity",
"cutoff_fingerprint_args",
"cutoff_fingerprint_method",
"cutoff_similarity_method"
]
)
if mcs_filter:
print("\nMaximum Common Substructure Filter Options")
print_parameter_list(["crit_mcs"])
if metabolomics_filter:
print("\nMetabolomics Filter Options")
print_parameter_list(["met_data_path", "met_data_name",
"possible_adducts", "mass_tolerance"])
if MW_filter:
print("\nMolecular Weight Filter Options")
print_parameter_list(["min_MW", "max_MW"])
if atomic_composition_filter:
print("\nAtomic Composition Filter")
print_parameter_list(["atomic_composition_constraints"])
print("\nPickaxe Options")
print_parameter_list(
[
"verbose",
"explicit_h",
"kekulize",
"neutralise",
"quiet",
"indexing"
]
)
print("----------------------------------------\n")
###############################################################################
###############################################################################
# Running pickaxe, don"t touch unless you know what you are doing
if __name__ == "__main__":
# Use "spawn" for multiprocessing
multiprocessing.set_start_method("spawn")
# Define mongo_uri
# mongo_uri definition, don't modify
if write_db == False:
mongo_uri = None
elif use_local:
mongo_uri = "mongodb://localhost:27017"
else:
mongo_uri = open("mongo_uri.csv").readline().strip("\n")
# Change database to none if not writing
if write_db is False:
database = None
### Initialize the Pickaxe class
# print parameters
if print_parameters:
print_run_parameters()
pk = Pickaxe(
coreactant_list=coreactant_list,
rule_list=rule_list,
errors=verbose,
explicit_h=explicit_h,
kekulize=kekulize,
neutralise=neutralise,
image_dir=None,
inchikey_blocks_for_cid=inchikey_blocks_for_cid,
database=database,
database_overwrite=database_overwrite,
mongo_uri=mongo_uri,
quiet=quiet,
react_targets=react_targets,
filter_after_final_gen=filter_after_final_gen
)
# Load compounds
pk.load_compound_set(compound_file=input_cpds)
# Load target compounds for filters
if (
similarity_filter or mcs_filter or similarity_sample
or load_targets_without_filter or MW_filter
or atomic_composition_filter or thermo_filter
or feasibility_filter
):
pk.load_targets(target_cpds)
# Apply filters in this order
if MW_filter:
pk.filters.append(MWFilter(min_MW, max_MW))
if atomic_composition_filter:
pk.filters.append(AtomicCompositionFilter(atomic_composition_constraints))
if similarity_filter:
taniFilter = SimilarityFilter(
crit_similarity=similarity_threshold,
increasing_similarity=increasing_similarity,
fingerprint_method=cutoff_fingerprint_method,
fingerprint_args=cutoff_fingerprint_args,
similarity_method=cutoff_similarity_method
)
pk.filters.append(taniFilter)
if similarity_sample:
taniSampleFilter = SimilaritySamplingFilter(
sample_size=sample_size,
weight=weight,
fingerprint_method=sample_fingerprint_method,
fingerprint_args=sample_fingerprint_args,
similarity_method=sample_similarity_method)
pk.filters.append(taniSampleFilter)
if mcs_filter:
mcsFilter = MCSFilter(crit_mcs=crit_mcs)
pk.filters.append(mcsFilter)
if metabolomics_filter:
if rt_predictor_pickle_path:
with open(rt_predictor_pickle_path, "rb") as infile:
rt_predictor = pickle.load(infile)
else:
rt_predictor = None
metFilter = MetabolomicsFilter(
filter_name="ADP1_Metabolomics_Data",
met_data_name=met_data_name,
met_data_path=met_data_path,
possible_adducts=possible_adducts,
mass_tolerance=mass_tolerance,
rt_predictor=rt_predictor,
rt_threshold=rt_threshold,
rt_important_features=rt_important_features
)
pk.filters.append(metFilter)
if feasibility_filter:
pk.filters.append(feasibility_filter)
if thermo_filter:
pk.filters.append(thermo_filter)
# Transform compounds (the main step)
pk.transform_all(processes, generations)
if pk.targets and prune_to_targets:
pk.prune_network_to_targets()
# Write results to database
if write_db:
pk.save_to_mine(processes=processes, indexing=indexing, write_core=write_core)
client = pymongo.MongoClient(mongo_uri)
db = client[database]
db.meta_data.insert_one({"Timestamp": datetime.datetime.now(),
"Run Time": f"{round(time.time() - start, 2)}",
"Generations": f"{generations}",
"Rule Name": f"{rule_name}",
"Input compound file": f"{input_cpds}"
})
db.meta_data.insert_one({"Timestamp": datetime.datetime.now(),
"Message": message})
if (similarity_filter or mcs_filter or similarity_sample):
db.meta_data.insert_one(
{
"Timestamp": datetime.datetime.now(),
"React Targets": react_targets,
"Tanimoto Filter": similarity_filter,
"Tanimoto Values": f"{similarity_threshold}",
"MCS Filter": mcs_filter,
"MCS Values": f"{crit_mcs}",
"Sample By": similarity_sample,
"Sample Size": sample_size,
"Sample Weight": weight_representation,
"Pruned": prune_to_targets
}
)
if write_to_csv:
pk.assign_ids()
pk.write_compound_output_file(output_dir + "/compounds.tsv")
pk.write_reaction_output_file(output_dir + "/reactions.tsv")
print("----------------------------------------")
print(f"Overall run took {round(time.time() - start, 2)} seconds.")
print("----------------------------------------")
```
#### File: MINE-Database/Scripts/db_plots.py
```python
import matplotlib
matplotlib.use('Agg')
import seaborn
import pandas
import matplotlib.pyplot as plt
from minedatabase.databases import MINE
from collections import defaultdict, Counter
import sys
def make_violin_plots(db_list, prop_list=('Mass', 'logP', 'NP_likeness')):
df = pandas.DataFrame()
for db_name in db_list:
db = MINE(db_name)
l = []
cursor = db.compounds.find({"Type":{'$ne': 'Coreactant'}},
dict([('_id', 0), ('Type', 1)]
+ [(x, 1) for x in prop_list]))
for x in cursor:
x['DB'] = str(db_name.strip('exp2'))
l.append(x)
df = df.append(l)
f, ax = plt.subplots(1, len(prop_list))
for i, prop in enumerate(prop_list):
seaborn.violinplot(split=True, hue='Type', x='DB', y=prop, data=df,
ax=ax[i])
if i > 0:
ax[i].legend_.remove()
plt.tight_layout()
plt.savefig("MINE property comparison.png")
def make_box_plots(db_list, prop_list=('Mass', 'logP', 'NP_likeness')):
df = pandas.DataFrame()
for db_name in db_list:
db = MINE(db_name)
new_name = str(db_name.replace('exp2', 'MINE').split('-')[0])
l = []
cursor = db.compounds.find(dict([(x, {'$exists': 1})
for x in prop_list]),
dict([('_id', 0)]
+ [(x, 1) for x in prop_list]))
for x in cursor:
x['DB'] = new_name
l.append(x)
df = df.append(l)
f, ax = plt.subplots(1, len(prop_list))
for i, prop in enumerate(prop_list):
seaborn.boxplot(x='DB', y=prop, data=df, ax=ax[i],
showfliers=False)
plt.tight_layout()
plt.savefig("MINE property comparison.png")
def make_fp_heatmap(db_name, fp_type='MACCS', n_rows=25):
db = MINE(db_name)
data = defaultdict(Counter)
for comp in db.compounds.find({}, {"_id": 0, "Generation": 1, fp_type: 1}):
if fp_type in comp and int(comp['Generation']) > -1:
data[int(comp['Generation'])].update(comp[fp_type])
df = pandas.DataFrame(data)
df_norm = df.div(df.max(axis=0), axis=1)
if not n_rows:
df_top = df_norm
else:
df_norm['range'] = df_norm.max(axis=1) - df_norm.min(axis=1)
df_top = df_norm.sort_values('range', ascending=False).head(int(n_rows)).iloc[:, :-1]
hm = seaborn.heatmap(df_top)
hm.collections[0].colorbar.set_label("Prevalence")
plt.xlabel('Generation')
plt.ylabel(fp_type + " bit")
plt.yticks(rotation=0)
plt.savefig(db_name + '_fp_heatmap.png')
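# Expected command-line usage (database names are placeholders), matching the
# dispatch in the __main__ block below:
#   python db_plots.py violin exp2DB1 exp2DB2
#   python db_plots.py boxplot exp2DB1 exp2DB2
#   python db_plots.py heatmap exp2DB1 MACCS 25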
if __name__ == "__main__":
if sys.argv[1] == "violin":
make_violin_plots(sys.argv[2:])
elif sys.argv[1] == "boxplot":
make_box_plots(sys.argv[2:])
elif sys.argv[1] == 'heatmap':
make_fp_heatmap(*sys.argv[2:])
else:
print("Unrecognised plot type")
```
#### File: tests/test_unit/test_databases.py
```python
import json
import os
import shutil
from copy import copy, deepcopy
from pathlib import Path
from shutil import rmtree
import pymongo
import pytest
from pymongo.errors import ServerSelectionTimeoutError
from minedatabase import utils
from minedatabase.databases import (
MINE,
write_compounds_to_mine,
write_core_compounds,
write_reactions_to_mine,
)
try:
client = pymongo.MongoClient(ServerSelectionTimeoutMS=10)
client.server_info()
del client
except ServerSelectionTimeoutError as err:
pytest.skip("No Mongo Server Found", allow_module_level=True)
file_path = Path(__file__)
file_dir = file_path.parent
DATA_DIR = (file_dir / "../data/").resolve()
@pytest.fixture()
def cpd_dict():
"""Create a compound dict for testing."""
cpd_dict = {
"_id": "Ctest",
"SMILES": (
"Nc1ncnc2c1ncn2[C@@H]1O[C@H](COP(=O)(O)OS(=O)(=O)O)[C@@H]"
"(OP(=O)(O)O)[C@H]1O"
),
"Inchi": (
"InChI=1S/C10H15N5O13P2S/c11-8-5-9(13-2-12-8)15(3-14-5)10-"
"6(16)7(27-29(17,18)19)4(26-10)1-25-30(20,21)28-31(22,23)"
"24/h2-4,6-7,10,16H,1H2,(H,20,21)(H2,11,12,13)(H2,17,18,19"
")(H,22,23,24)/t4-,6-,7-,10-/m1/s1"
),
"Type": "Coreactant",
"Generation": 0,
"Formula": "C10H15N5O13P2S",
"Expand": False,
}
yield cpd_dict
@pytest.fixture()
def rxn_dicts():
"""Create two reaction dictionaries for testing."""
rxn_dicts = [
{
"_id": "RXN1",
"Reactants": [[1, "cpd1"], [2, "cpd2"]],
"Products": [[1, "cpd3"], [1, "cpd4"]],
"Operators": ["op1", "op2"],
"SMILES_rxn": "(1) A + (2) B -> (1) C + (2) D",
},
{
"_id": "RXN2",
"Reactants": [[1, "cpd1"], [2, "cpd2"]],
"Products": [[1, "cpd3"], [1, "cpd4"]],
"Operators": ["op1"],
"SMILES_rxn": "(1) A + (2) B -> (1) C + (2) D",
},
]
return rxn_dicts
@pytest.mark.skipif(
os.name == "nt", reason="MolConvert fails on Windows due to permissions errors"
)
def test_generate_image_files(test_db):
"""Test image generation."""
img_dir = DATA_DIR / "imgs/"
test_db.generate_image_files(img_dir)
try:
assert (img_dir / "C455bc3dc93cd3bb3bef92a34767693a4716aa3fb.svg").is_file()
assert len(os.listdir(img_dir)) == 26
finally:
rmtree(img_dir)
# Use 'Generation': 1 to do this for just the starting compound
test_db.generate_image_files(img_dir, {"Generation": 1}, 3, "png")
try:
assert (
img_dir / "C/c/f" / "Ccffda1b2e82fcdb0e1e710cad4d5f70df7a5d74f.png"
).is_file()
finally:
shutil.rmtree(img_dir)
def test_insert_single_mine_compound(test_db):
"""Test single mine insertion."""
smiles = "CC(=O)O"
compound_dict = [{"_id": "test_cpd", "SMILES": smiles}]
write_compounds_to_mine(compound_dict, test_db)
entry = test_db.compounds.find_one({"SMILES": smiles})
assert entry
assert entry["_id"] == "test_cpd"
def test_insert_bulk_mine_compounds(test_db):
"""Test inserting bulk compounds."""
smiles1 = "CC(=O)O"
smiles2 = "CCN"
compound_dict = [
{"_id": "test_cpd1", "SMILES": smiles1},
{"_id": "test_cpd2", "SMILES": smiles2},
]
write_compounds_to_mine(compound_dict, test_db)
entry1 = test_db.compounds.find_one({"SMILES": smiles1})
assert entry1
assert entry1["_id"] == "test_cpd"
entry2 = test_db.compounds.find_one({"SMILES": smiles2})
assert entry2
assert entry2["_id"] == "test_cpd2"
def test_insert_single_core_compound(test_db, cpd_dict):
"""Test inserting a single core compound."""
write_core_compounds([cpd_dict], test_db, "test")
try:
entry = test_db.core_compounds.find_one({"_id": cpd_dict["_id"]})
assert entry
assert isinstance(entry["Mass"], float)
assert entry["Inchi"]
assert entry["Inchikey"]
finally:
test_db.core_compounds.delete_many({"_id": cpd_dict["_id"]})
def test_insert_bulk_core_compound(test_db, cpd_dict):
"""Test inserting many core compounds."""
cpd_dict2 = copy(cpd_dict)
cpd_dict2["_id"] = "cpd_dict2"
write_core_compounds([cpd_dict, cpd_dict2], test_db, "test")
#
try:
for smiles in [cpd_dict["SMILES"], cpd_dict2["SMILES"]]:
entry = test_db.core_compounds.find_one({"SMILES": smiles})
assert entry
assert isinstance(entry["Mass"], float)
assert entry["Inchi"]
assert entry["Inchikey"]
finally:
for i in [0, 1]:
test_db.core_compounds.delete_many({"_id": f"test_mine_cpd{i}"})
def test_write_reaction(test_db, rxn_dicts):
"""Test writing reactions."""
write_reactions_to_mine(rxn_dicts, test_db)
assert (deepcopy(rxn_dicts[0])) == deepcopy(
test_db.reactions.find_one({"_id": "RXN1"})
)
assert (deepcopy(rxn_dicts[1])) == deepcopy(
test_db.reactions.find_one({"_id": "RXN2"})
)
def db_driver(test_db):
"""Test database driver."""
assert isinstance(test_db.compounds, pymongo.collection.Collection)
assert isinstance(test_db.reactions, pymongo.collection.Collection)
assert isinstance(test_db.operators, pymongo.collection.Collection)
assert isinstance(test_db.core_compounds, pymongo.collection.Collection)
assert isinstance(test_db._db, pymongo.database.Database)
```
#### File: tests/test_unit/test_pickaxe.py
```python
import hashlib
import os
import re
import subprocess
from filecmp import cmp
from pathlib import Path
import pymongo
import pytest
from pymongo.errors import ServerSelectionTimeoutError
from rdkit.Chem import AllChem
from minedatabase import pickaxe
from minedatabase.databases import MINE
try:
client = pymongo.MongoClient(ServerSelectionTimeoutMS=20)
client.server_info()
del client
is_mongo = True
except ServerSelectionTimeoutError as err:
is_mongo = False
valid_db = pytest.mark.skipif(not is_mongo, reason="No MongoDB Connection")
file_path = Path(__file__)
file_dir = file_path.parent
DATA_DIR = (file_dir / "../data/").resolve()
def purge(directory, pattern):
"""Delete all files in a directory matching a regex pattern."""
for filename in os.listdir(directory):
if re.search(pattern, filename):
os.remove(os.path.join(directory, filename))
def delete_database(name):
"""Delete database."""
mine = MINE(name)
mine.client.drop_database(name)
mine.client.close()
def test_cofactor_loading(pk):
"""Test loading cofactors.
GIVEN a default Pickaxe object
WHEN cofactors are loaded into the Pickaxe object in its creation
THEN make sure those cofactors were loaded correctly
"""
c_id = "X73bc8ef21db580aefe4dbc0af17d4013961d9d17"
assert c_id in pk.compounds
assert pk.compounds[c_id]["Formula"] == "H2O"
assert pk.compounds[c_id]["Type"] == "Coreactant"
assert isinstance(pk.coreactants["Water"][0], AllChem.Mol)
assert pk.coreactants["Water"][1][0] == "X"
def test_reaction_rule_loading(default_rule):
"""Test loading rules.
GIVEN a reaction rule dict
WHEN reaction rules are loaded during Pickaxe object initialization
THEN make sure it is formatted correctly
"""
assert isinstance(default_rule[0], AllChem.ChemicalReaction)
assert isinstance(default_rule[1], dict)
assert default_rule[1]["Reactants"] == ["ATP", "Any"]
assert "Products" in default_rule[1]
assert "Comments" in default_rule[1]
def test_compound_loading(pk):
"""Test loading compounds.
GIVEN a default Pickaxe object
WHEN compounds are loaded
THEN check that they are loaded correctly
"""
compound_smiles = pk.load_compound_set(
compound_file=file_dir / "../data/test_compounds.tsv"
)
assert len(compound_smiles) == 14
def test_transform_all(default_rule, smiles_dict, coreactant_dict):
"""Test transform function.
GIVEN a set of rules and starting compounds
WHEN we run pickaxe to predict potential transformations
THEN make sure all expected transformations are predicted
"""
pk = pickaxe.Pickaxe(errors=False, explicit_h=True)
pk._load_coreactant(coreactant_dict["ATP"])
pk._load_coreactant(coreactant_dict["ADP"])
pk._add_compound(
smiles_dict["FADH"], smiles_dict["FADH"], cpd_type="Starting Compound"
)
pk.operators["2.7.1.a"] = default_rule
pk.transform_all(generations=2)
assert len(pk.compounds) == 31
assert len(pk.reactions) == 49
comp_gens = set([x["Generation"] for x in pk.compounds.values()])
assert comp_gens == {0, 1, 2}
def test_compound_output_writing(pk_transformed):
"""Test compound output writing.
GIVEN a Pickaxe object with predicted transformations
WHEN all compounds (including predicted) are written to an output file
THEN make sure they are correctly written, and that they are all present
"""
with open(file_dir / "../data/testcompoundsout.tsv", "rb") as infile:
expected = hashlib.sha256(infile.read()).hexdigest()
pk_transformed.write_compound_output_file(file_dir / "../data/testcompoundsout.tsv")
assert os.path.exists(file_dir / "../data/testcompoundsout_new.tsv")
try:
with open(file_dir / "../data/testcompoundsout_new.tsv", "rb") as infile:
output_compounds = hashlib.sha256(infile.read()).hexdigest()
assert expected == output_compounds
finally:
os.remove(file_dir / "../data/testcompoundsout_new.tsv")
def test_reaction_output_writing(pk_transformed):
"""Test writing reaction output.
GIVEN a Pickaxe object with predicted transformations
WHEN all reactions (including predicted) are written to an output file
THEN make sure they are correctly written, and that they are all present
"""
with open(file_dir / "../data/testreactionsout.tsv", "rb") as infile:
expected = hashlib.sha256(infile.read()).hexdigest()
pk_transformed.write_reaction_output_file(file_dir / "../data/testreactionsout.tsv")
assert os.path.exists(file_dir / "../data/testreactionsout_new.tsv")
try:
with open(file_dir / "../data/testreactionsout_new.tsv", "rb") as infile:
output_compounds = hashlib.sha256(infile.read()).hexdigest()
assert expected == output_compounds
finally:
os.remove(file_dir / "../data/testreactionsout_new.tsv")
def test_multiprocessing(pk, smiles_dict, coreactant_dict):
"""Test multiprocessing.
GIVEN a Pickaxe object
WHEN we use multiprocessing to enumerate predicted reactions
THEN make sure those predictions are correct
"""
pk._load_coreactant(coreactant_dict["ATP"])
pk._load_coreactant(coreactant_dict["ADP"])
pk._add_compound("FADH", smiles_dict["FADH"], cpd_type="Starting Compound")
pk.transform_all(generations=2, processes=2)
assert len(pk.compounds) == 67
assert len(pk.reactions) == 49
comp_gens = set([x["Generation"] for x in pk.compounds.values()])
assert comp_gens == {0, 1, 2}
def test_pruning(default_rule, smiles_dict, coreactant_dict):
"""Test pruning network to targets.
GIVEN a Pickaxe expansion
WHEN that expansion is pruned via Pickaxe.prune_network()
THEN make sure that the pruned compounds no longer exist in the network
"""
pk = pickaxe.Pickaxe(explicit_h=True)
pk.operators["2.7.1.a"] = default_rule
pk._load_coreactant(coreactant_dict["ATP"])
pk._load_coreactant(coreactant_dict["ADP"])
pk._add_compound("FADH", smiles_dict["FADH"], cpd_type="Starting Compound")
pk.transform_all(generations=2)
ids = [
"C89d19c432cbe8729c117cfe50ff6ae4704a4e6c1",
"C750e93db23dd3f796ffdf9bdefabe32b10710053",
"C41",
]
pk.prune_network(ids)
pk.assign_ids()
DATA_DIR = (file_dir / "../data").resolve()
pk.write_compound_output_file(DATA_DIR / "pruned_comps.tsv")
pk.write_reaction_output_file(DATA_DIR / "pruned_rxns.tsv")
assert os.path.exists(DATA_DIR / "pruned_comps_new.tsv")
assert os.path.exists(DATA_DIR / "pruned_rxns_new.tsv")
try:
assert cmp(DATA_DIR / "pruned_comps.tsv", DATA_DIR / "pruned_comps_new.tsv")
assert cmp(DATA_DIR / "pruned_rxns.tsv", DATA_DIR / "pruned_rxns_new.tsv")
finally:
os.remove((DATA_DIR / "pruned_comps_new.tsv").resolve())
os.remove((DATA_DIR / "pruned_rxns_new.tsv").resolve())
def test_target_generation(default_rule, smiles_dict, coreactant_dict):
"""Test generating a target from starting compounds."""
pk = pickaxe.Pickaxe(explicit_h=True)
pk.operators["2.7.1.a"] = default_rule
pk._load_coreactant(coreactant_dict["ATP"])
pk._load_coreactant(coreactant_dict["ADP"])
pk._add_compound("FADH", smiles_dict["FADH"], cpd_type="Starting Compound")
pk.load_targets(file_dir / "../data/test_targets.csv")
pk.transform_all(generations=2)
pk.prune_network_to_targets()
assert "C11088915f64b93293e70af9c3b7822a4f131225d" in pk.compounds
assert len(pk.reactions) == 4
assert len(pk.compounds) == 6
@valid_db
def test_save_as_mine(default_rule, smiles_dict, coreactant_dict):
"""Test saving compounds to database.
GIVEN a Pickaxe expansion
WHEN that expansion is saved as a MINE DB in the MongoDB
THEN make sure that all features are saved in the MongoDB as expected
"""
DATA_DIR = (file_dir / "../data").resolve()
delete_database("MINE_test")
pk = pickaxe.Pickaxe(database="MINE_test", image_dir=DATA_DIR, explicit_h=True)
pk.operators["2.7.1.a"] = default_rule
pk._load_coreactant(coreactant_dict["ATP"])
pk._load_coreactant(coreactant_dict["ADP"])
pk._add_compound("FADH", smiles_dict["FADH"], cpd_type="Starting Compound")
pk.transform_all(generations=2)
pk.save_to_mine(processes=1)
mine_db = MINE("MINE_test")
try:
assert mine_db.compounds.estimated_document_count() == 31
assert mine_db.reactions.estimated_document_count() == 49
assert mine_db.operators.estimated_document_count() == 1
assert mine_db.operators.find_one()["Reactions_predicted"] == 49
assert os.path.exists(
DATA_DIR / "X9c29f84930a190d9086a46c344020283c85fb917.svg"
)
start_comp = mine_db.compounds.find_one({"Type": "Starting Compound"})
assert len(start_comp["Reactant_in"]) > 0
# Don't track sources of coreactants
coreactant = mine_db.compounds.find_one({"Type": "Coreactant"})
assert "Product_of" not in coreactant
assert "Reactant_in" not in coreactant
product = mine_db.compounds.find_one({"Generation": 2})
assert len(product["Product_of"]) > 0
assert product["Type"] == "Predicted"
finally:
delete_database("MINE_test")
purge(DATA_DIR, r".*\.svg$")
@valid_db
def test_save_target_mine(default_rule, smiles_dict, coreactant_dict):
"""Test saving the target run to a MINE."""
delete_database("MINE_test")
pk = pickaxe.Pickaxe(database="MINE_test", explicit_h=True)
pk.operators["2.7.1.a"] = default_rule
pk._load_coreactant(coreactant_dict["ATP"])
pk._load_coreactant(coreactant_dict["ADP"])
pk._add_compound("FADH", smiles_dict["FADH"], cpd_type="Starting Compound")
pk.load_targets(file_dir / "../data/test_targets.csv")
pk.transform_all(generations=2)
pk.prune_network_to_targets()
pk.save_to_mine()
mine_db = MINE("MINE_test")
try:
assert mine_db.compounds.estimated_document_count() == 6
assert mine_db.reactions.estimated_document_count() == 4
assert mine_db.operators.estimated_document_count() == 1
assert mine_db.operators.find_one()["Reactions_predicted"] == 4
start_comp = mine_db.target_compounds.find_one()
assert start_comp["InChI_key"] == "<KEY>"
assert all([i in start_comp.keys() for i in ["_id", "SMILES", "InChI_key"]])
finally:
delete_database("MINE_test")
@valid_db
def test_database_already_exists(default_rule, smiles_dict, coreactant_dict):
"""Test database collision.
GIVEN an existing MINE
WHEN a new pickaxe object is defined
THEN make sure program exits with database collision
"""
delete_database("MINE_test")
pk = pickaxe.Pickaxe(database="MINE_test")
pk.operators["2.7.1.a"] = default_rule
pk._load_coreactant(coreactant_dict["ATP"])
pk._load_coreactant(coreactant_dict["ADP"])
pk._add_compound("FADH", smiles_dict["FADH"], cpd_type="Starting Compound")
pk.transform_all(generations=2)
pk.save_to_mine(processes=1)
try:
with pytest.raises(SystemExit) as pytest_wrapped_e:
pk = pickaxe.Pickaxe(database="MINE_test")
assert pytest_wrapped_e.type == SystemExit
assert pytest_wrapped_e.value.code == (
"Exiting due to database name collision."
)
finally:
delete_database("MINE_test")
def test_pickle(coreactant_dict, smiles_dict, default_rule):
"""Test pickling of pickaxe objects."""
pickle_path = Path("test_pickle.pk")
pk = pickaxe.Pickaxe(errors=False, explicit_h=True)
pk._load_coreactant(coreactant_dict["ATP"])
pk._load_coreactant(coreactant_dict["ADP"])
pk._add_compound(
smiles_dict["FADH"], smiles_dict["FADH"], cpd_type="Starting Compound"
)
pk.operators["2.7.1.a"] = default_rule
pk.transform_all(generations=2)
pk.pickle_pickaxe(pickle_path)
del pk
pk = pickaxe.Pickaxe(errors=False)
pk.load_pickled_pickaxe(pickle_path)
assert len(pk.compounds) == 31
assert len(pk.reactions) == 49
comp_gens = set([x["Generation"] for x in pk.compounds.values()])
assert comp_gens == {0, 1, 2}
pickle_path.unlink()
def test_local_cli():
"""Test command line interface writing locally.
GIVEN the pickaxe CLI
WHEN pickaxe is run from the command line
THEN make sure it exits with exit code 0 (no errors)
"""
os.chdir(file_dir / "../data/../..")
rc = subprocess.call(
f"python minedatabase/pickaxe.py -o tests/ -r tests/data/test_cd_rxn_rule.tsv",
shell=True,
)
assert not rc
purge(DATA_DIR / "..", r".*\.tsv$")
@valid_db
def test_mongo_cli():
"""Test command line interface writing to mongo."""
mine = MINE("tests")
os.chdir(file_dir / "../data/../..")
rc = subprocess.call(
"python minedatabase/pickaxe.py -d tests -r tests/data/test_cd_rxn_rule.tsv",
shell=True,
)
assert not rc
try:
assert mine.compounds.estimated_document_count() == 51
finally:
mine.client.drop_database("tests")
purge(file_dir / "..", r".*\.svg$")
``` |
{
"source": "josephnjeri/advent_of_code",
"score": 3
} |
#### File: advent_of_code/src/laternfish.py
```python
from collections import deque
import sys
def create_and_reset_lanternfish_timer(initial_states):
next_states = [None] * len(initial_states)
for index, item in enumerate(initial_states):
if index == len(initial_states) - 1:
if item == 0:
next_states[index] = 6  # reset the parent fish's timer
next_states.append(8)  # and spawn a newborn
else:
next_states[index] = initial_states[index] - 1
else:
if item == 0:
next_states[index] = 6
next_states.append(8)
else:
next_states[index] = initial_states[index] - 1
return next_states
def count_number_of_fish_after_n_days(laternfish_list, no_of_days):
# init counts
basket = deque([0] * 9)
for laternfish in laternfish_list:
basket[laternfish] += 1
# run through days
for each_day in range(no_of_days):
basket[7] += basket[0]
basket.rotate(-1)
return sum(basket)
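# Worked example of the deque bookkeeping above, using the well-known AoC 2021
# day 6 sample (assumed here purely for illustration): timers [3, 4, 3, 1, 2]
# give basket = [0, 1, 1, 2, 1, 0, 0, 0, 0]. Each day, basket[0] fish spawn:
# adding basket[0] to basket[7] re-queues the parents at timer 6 once the deque
# is rotated, and rotate(-1) shifts every count down one slot while moving the
# old basket[0] total to index 8 (the newborns). After 18 days the sample grows
# to 26 fish.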
class Lanternfish:
"""
lanternfish swimming across the deep sea
"""
def __init__(self, data_file, days_to_simulate):
self.initial_states = None
self.initial_state = None
self.data_file = data_file
self.days_to_simulate = days_to_simulate
def read_data(self):
with open(self.data_file, "r") as afile:
data = afile.read()
self.initial_states = [int(i) for i in data.split(",")]
def examine_lanternfish_growth_over_time(self, days):
self.read_data()
initial_states = self.initial_states
for day in range(days):
print(f'day: {day}/{days} days')
next_states = create_and_reset_lanternfish_timer(initial_states)
initial_states = next_states
sys.getsizeof(initial_states)
return next_states
def count_total_number_of_fish(self, days):
return len(self.examine_lanternfish_growth_over_time(days))
def get_total_number_of_fish_deque(self):
self.read_data()
return count_number_of_fish_after_n_days(self.initial_states, self.days_to_simulate)
if __name__ == "__main__":
# datafile = "../data/day6_example_data.csv"
datafile = "../data/day6_lanternfish_data.csv"
obj = Lanternfish(datafile, days_to_simulate=256)
print("")
print("")
print(f'After {obj.days_to_simulate} days, there shall be a total of {obj.get_total_number_of_fish_deque()} fish')
# print(obj.count_total_number_of_fish(150))
``` |
{
"source": "joseph-njogu/Devansiblelib",
"score": 4
} |
#### File: joseph-njogu/Devansiblelib/calculator.py
```python
def add(first_term, second_term):
return first_term + second_term
def subtract(first_term, second_term):
return first_term - second_term
# def multiplication(first_term, second_term):
# return first_term * second_term
``` |
{
"source": "joseph-njogu/Django_local_lib",
"score": 3
} |
#### File: locallib/catalog/tests.py
```python
from django.core.management import call_command
from django.test import TestCase
from catalog.models import Author
from django.urls import reverse
from django.test import Client
class AuthorModelTest(TestCase):
@classmethod
def setUpTestData(cls):
# Set up non-modified objects used by all test methods
Author.objects.create(first_name='Njogu', last_name='Joseph')
def test_first_name_label(self):
author = Author.objects.get(id=1)
field_label = author._meta.get_field('first_name').verbose_name
self.assertEquals(field_label, 'first name')
def test_date_of_birth_label(self):
author = Author.objects.get(id=1)
field_label = author._meta.get_field('date_of_birth').verbose_name
self.assertEquals(field_label, 'date of birth')
def test_date_of_death_label(self):
author = Author.objects.get(id=1)
field_label = author._meta.get_field('date_of_death').verbose_name
self.assertEquals(field_label, 'Died')
def test_first_name_max_length(self):
author = Author.objects.get(id=1)
max_length = author._meta.get_field('first_name').max_length
self.assertEquals(max_length, 100)
def test_last_name_max_length(self):
author = Author.objects.get(id=1)
max_length = author._meta.get_field('last_name').max_length
self.assertEquals(max_length, 100)
def test_object_name_is_last_name_comma_first_name(self):
author = Author.objects.get(id=1)
expected_object_name = f'{author.last_name},{author.first_name}'
self.assertEquals(expected_object_name, str(author))
def test_get_absolute_url(self):
author = Author.objects.get(id=1)
# This will also fail if the urlconf is not defined.
self.assertEquals(author.get_absolute_url(), '/catalog/author/1')
# Class AuthorListViewTest(TestCase):
from django.test import Client
@classmethod
def setUpTestData(cls):
# Create 13 authors for pagination tests
number_of_authors = 13
# for author_id in range(number_of_authors):
# Author.objects.create(
# first_name=f' Christian {author_id}',
# last_name=f'Surname {author_id}',
# )
def test_view_uses_correct_view(self):
response = self.client.get(reverse('index'))
self.assertEqual(response.status_code, 200)
def test_view_url_exists_at_desired_location(self):
response = self.client.get('/catalog/authors/')
self.assertEqual(response.status_code, 200)
def test_view_url_accessible_by_name(self):
response = self.client.get(reverse('authors'))
self.assertEqual(response.status_code, 200)
def test_view_uses_correct_template(self):
response = self.client.get(reverse('authors'))
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, 'catalog/author_list.html')
def test_pagination_is_ten(self):
response = self.client.get(reverse('authors'))
self.assertEqual(response.status_code, 200)
self.assertTrue('is_paginated' in response.context)
# self.assertTrue(response.context['is_paginated'] == True)
self.assertTrue(len(response.context['author_list']) == 2)
def test_lists_all_authors(self):
# Get second page and confirm it has (exactly) remaining 3 items
response = self.client.get(reverse('authors') + '?page=2')
self.assertEqual(response.status_code, 200)
self.assertTrue('is_paginated' in response.context)
# self.assertTrue(response.context['is_paginated'] == True)
self.assertTrue(len(response.context['author_list']) == 2)
import datetime
from django.utils import timezone
from catalog.models import BookInstance, Book, Genre
from django.test import TestCase
# Create your tests here.
import datetime
from catalog.forms import RenewBookForm
class RenewBookFormTest(TestCase):
def test_renew_form_date_in_past(self):
"""Test form is invalid if renewal_date is before today."""
date = datetime.date.today() - datetime.timedelta(days=1)
form = RenewBookForm(data={'renewal_date': date})
self.assertFalse(form.is_valid())
def test_renew_form_date_too_far_in_future(self):
# """Test form is invalid if renewal_\
# date more than 4 weeks from today."""
date = datetime.date.today() + datetime.timedelta(weeks=4) + \
datetime.timedelta(days=1)
form = RenewBookForm(data={'renewal_date': date})
self.assertFalse(form.is_valid())
def test_renew_form_date_today(self):
"""Test form is valid if renewal_date is today."""
date = datetime.date.today()
form = RenewBookForm(data={'renewal_date': date})
self.assertTrue(form.is_valid())
def test_renew_form_date_max(self):
"""Test form is valid if renewal_date is within 4 weeks."""
date = datetime.date.today() + datetime.timedelta(weeks=4)
form = RenewBookForm(data={'renewal_date': date})
self.assertTrue(form.is_valid())
def test_renew_form_date_field_label(self):
"""Test renewal_date label is 'renewal date'."""
form = RenewBookForm()
self.assertTrue(
form.fields['renewal_date'].label is None or
form.fields['renewal_date'].label == 'renewal date')
def test_renew_form_date_field_help_text(self):
"""Test renewal_date help_text is as expected."""
form = RenewBookForm()
self.assertEqual(
form.fields['renewal_date'].help_text,
'Enter a date between now and 4 weeks (default 3).')
``` |
{
"source": "joseph-njogu/EcloudFinance-module",
"score": 2
} |
#### File: e_cloud_finance/e_cloud_finance_module/models.py
```python
from django.db import models
# Create your models here.
from django.contrib.auth.models import User
# Create your models here.
class UserProfileInfo(models.Model):
user = models.OneToOneField(User,on_delete=models.CASCADE)
portfolio_site = models.URLField(blank=True)
profile_pic = models.ImageField(upload_to='profile_pics',blank=True)
def __str__(self):
return self.user.username
class banking_details(models.Model):
banke_name = models.OneToOneField(User,on_delete=models.CASCADE)
portfolio_site = models.URLField(blank=True)
profile_pic = models.ImageField(upload_to='profile_pics',blank=True)
def __str__(self):
return self.banke_name.username  # the linked User is stored on banke_name
class institution_income(models.Model):
institution_name = models.OneToOneField(User,on_delete=models.CASCADE)
portfolio_site = models.URLField(blank=True)
profile_pic = models.ImageField(upload_to='profile_pics',blank=True)
def __str__(self):
return self.institution_name.username  # the linked User is stored on institution_name
``` |
{
"source": "joseph-njogu/Projects-from-the-Backend-Development",
"score": 4
} |
#### File: Projects-from-the-Backend-Development/concurrency/determine_the_current_thread.py
```python
import threading
import time
# Naming threads is useful in server processes with multiple service threads
# that can handle different operations
def first_function():
print(threading.currentThread().getName()+str(' is Starting \n'))
time.sleep(2)
print(threading.currentThread().getName()+str(' is Exiting \n'))
return
def second_function():
print(threading.currentThread().getName()+str(' is Starting \n'))
time.sleep(2)
print(threading.currentThread().getName()+str(' is Exiting \n'))
return
def third_function():
print(threading.currentThread().getName()+str(' is Starting \n'))
time.sleep(2)
print(threading.currentThread().getName()+str(' is Exiting \n'))
return
if __name__ == "__main__":
t1 = threading.Thread(name='first_function', target=first_function)
t2 = threading.Thread(name='second_function', target=second_function)
t3 = threading.Thread(name='third_function', target=third_function)
t1.start()
t2.start()
t3.start()
t1.join()
t2.join()
t3.join()
```
#### File: Projects-from-the-Backend-Development/concurrency/spawn_a_process.py
```python
import multiprocessing
def function(i):
print('called function in process: {}'.format(i))
return
if __name__ == '__main__':
Process_jobs = []
for i in range(5):
p = multiprocessing.Process(target=function, args=(i,))
Process_jobs.append(p)
p.start()
p.join()
```
#### File: Projects-from-the-Backend-Development/memtests/memc.py
```python
MEM_SERVER = '127.0.0.1' # Server Address
MEM_PORT = '11211' # Mem Port
MEM_TIMEOUT = 30 # Data TTL (Seconds)
import memcache, hashlib
# Connect To Local Memcache Server
mem = memcache.Client(['%s:%s' % (MEM_SERVER,MEM_PORT)])
def get_result(query):
# Check if query has a value
if query:
hash = hashlib.md5(query.encode()).hexdigest() # hash the query string with md5
# Check Cache
result = mem.get(hash)
if result: # Check if query is cached
return result
else:
return False # Nothing Cached, Run Query
def set_result(query, result):
# Check if query has a value
if query:
hash = hashlib.md5(query.encode()).hexdigest()
# Set Key and Value (hash, result)
mem.set(hash,result, MEM_TIMEOUT)
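# Usage sketch (assumes a reachable memcached instance; run_query is a
# hypothetical stand-in for the real data source):
#
#     query = "SELECT name FROM users WHERE id = 1"
#     cached = get_result(query)
#     if cached is False:
#         result = run_query(query)
#         set_result(query, result)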
``` |
{
"source": "JosephNjuguna/QuestionerV2",
"score": 3
} |
#### File: v2/views/questionsviews.py
```python
import datetime
#imports
from flask_restful import Resource
from flask import request, make_response, jsonify
#local import
from app.api.v2.utilis.validations import CheckData
from app.api.v2.models.questions import QuestionsModel
from app.api.v2.models.meetup import MeetUp
#error messages
empty_question_title = "Question title empty. Please input data"
empty_question_body = "Question body empty. Please input data"
class PostQuestion(Resource):
"""post question class"""
def post(self, m_id):
try:
data = request.get_json()
question_body = data['question_body']
question_title = data['question_title']
meetup_id = m_id
user_id = 1
votes = 0
question = {
"question_body":question_body,
"question_title":question_title,
"meetup_id": meetup_id,
"user_id":user_id,
"votes":votes
}
if len(question_title)== 0:
return make_response(jsonify({"message": empty_question_title}),400)
if len(question_body)== 0:
return make_response(jsonify({"message": empty_question_body }),400)
question_data = QuestionsModel(**question)
saved = question_data.post_question_db()
question_id = saved
resp = {
"message": "Question successfully posted",
"username": question_body,
"question_id": "{}".format(question_id)
}
if saved == True:
return make_response(jsonify({"Message":"Question already exist"}),409)
return resp, 201
except KeyError:
return make_response(jsonify({"status":400, "message": "Missing either Question body or Question title input"}),400)
class GetQuestionsMeetup(Resource):
def get(self, m_id):
questions = QuestionsModel()
meetup_id = questions.check_meetup_id(m_id)
single_meetup_questions= questions.get_questions(m_id)
resp = {
"status":200,
"message":"all meetups",
"data":[{
"meetups": single_meetup_questions
}]
}
if not meetup_id:
return make_response(jsonify({"Message":"Meetup id not found"}),404)
return resp
class GetSingleQuestion(Resource):
"""get single question class"""
def get(self, m_id, q_id ):
single_question = QuestionsModel()
one_meetup_questions= single_question.get_specificquestion(m_id, q_id)
resp = {
"status":200,
"message":"all meetups",
"data":[{
"meetups": str(one_meetup_questions)
}]
}
return resp,200
class UpvoteQuestion(Resource):
"""upvote question class"""
@staticmethod
def patch(m_id, q_id):
upvoted = QuestionsModel().upvote_question(m_id, q_id)
question_id, question_createdon, question_title, question_body, question_votes= upvoted
resp = {
"id":question_id,
"createdon": question_createdon,
"question_meetup_id":question_body,
"question_title":question_title,
"question_body":question_body,
"votes":question_votes
}
return make_response(jsonify(resp), 200)
class DownVoteQuestion(Resource):
"""downvote question class"""
def __init__(self):
pass
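# Route wiring sketch (hypothetical): in the full app these resources are
# registered on a flask_restful Api instance, along the lines of
# api.add_resource(PostQuestion, '/api/v2/meetup/<int:m_id>/question')
# api.add_resource(GetSingleQuestion, '/api/v2/meetup/<int:m_id>/question/<int:q_id>')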
```
#### File: test/v2/test_questions.py
```python
import json
import unittest
# local imports
from app import create_app
class QuestionsTest(unittest.TestCase):
"""Testing Question URL API endpoints"""
def setUp(self):
self.app = create_app("testing").test_client()
self.question1 = {
"question_body":"Youtube ads are everything??",
"question_title":"Youtube"
}
self.question2 = {
"question_title":"",
"question_body":"Youtube ads are everything"
}
self.question3 = {
"question_title":"Youtube",
"question_body":""
}
def test_1_question_post(self):
"""test that user post a question with all required fields"""
response = self.app.post('/api/v2/meetup/1/question', data=json.dumps(self.question1), content_type='application/json')
self.assertEqual(response.status_code, 201)
def test_2_questions(self):
"""test that user post a question with all required fields"""
response = self.app.get('/api/v2/meetup/1/question', data=json.dumps(self.question1), content_type='application/json')
self.assertEqual(response.status_code, 200)
def test_3_single_questions(self):
"""test that user post a question with all required fields"""
response = self.app.get('/api/v2/meetup/1/question/1', data=json.dumps(self.question1), content_type='application/json')
self.assertEqual(response.status_code, 200)
def test_empty_title_question_post(self):
"""test user input question with empty title"""
response = self.app.post('/api/v2/meetup/1/question', data=json.dumps(self.question2), content_type='application/json')
self.assertEqual(response.status_code, 400)
def test_empty_body_question_post(self):
"""test user input question with empty body"""
response = self.app.post('/api/v2/meetup/1/question', data=json.dumps(self.question3), content_type='application/json')
self.assertEqual(response.status_code, 400)
``` |
{
"source": "josephnl/deploy",
"score": 2
} |
#### File: josephnl/deploy/diff_deploy.py
```python
import configparser
import paramiko
import os
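# Illustrative config.ini layout this script expects (all values hypothetical):
# [server]
# host = 192.168.1.10
# port = 22
# user = deploy
# passwd = secret
# [path]
# local_path = C:\Work\Joseph\deploy\patch
# project_path = C:\Work\Joseph\project\web
# server_path = /opt/tomcat/webapps/app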
def main():
# Read the config file; note the capitalization of the configparser module and its functions
cf = configparser.ConfigParser()
cf.read("C:\\Work\\Joseph\\deploy\\config.ini")
host = cf.get("server", "host")
port = int(cf.get("server", "port"))
user = cf.get("server", "user")
passwd = cf.get("server", "passwd")
local_path = cf.get("path", "local_path")
project_path = cf.get("path", "project_path")
server_path = cf.get("path", "server_path")
# Build the local directory paths
webinf_path = local_path + '\\webapp\\WEB-INF'
java_path = local_path + '\\java'
webapp_path = local_path + '\\webapp'
# The split package (JSP and Java files) has been exported from SVN to local_path
# Build the deploy file list: locate the .class file for every .java file and copy it under WEB-INF
# os.path.walk is not usable here on Windows, so use the dir command to list all .java files under java_path
listdir_cmd = 'dir ' + java_path + ' /aa /s /b'
file_list = os.popen(listdir_cmd).readlines()
for java_file in file_list:
classfile_src = project_path + '\\WEB-INF\\classes'+ java_file[len(java_path):-5] + 'class'
classfile_des = webinf_path + '\\classes' + java_file[len(java_path):-5] + 'class'
os.system('mkdir ' + os.path.dirname(classfile_des)) # create the target directory
os.system('copy ' + classfile_src + ' ' + classfile_des) # copy the class file
print(os.path.basename(classfile_des) + ' has been copied\n')
# Connect over SFTP and upload the files
t = paramiko.Transport(host, port)
t.connect(username=user, password=passwd) # credentials come from config.ini above
sftp = paramiko.SFTPClient.from_transport(t)
# Build the upload list relative to webapp, since all class files are already under WEB-INF
# Missing remote directories are not handled yet; to be added later
listdir_cmd = 'dir ' + webinf_path + ' /aa /s /b'
file_list = os.popen(listdir_cmd).readlines()
print(file_list)
for file in file_list:
src = file[0:-1]
des = server_path + file[len(webapp_path):-1].replace('\\','/')
try:
sftp.put(src, des)
print(os.path.basename(des) + ' Uploading success\n')
except FileNotFoundError:
print(os.path.basename(des) + ' Uploading failed, server path not exist\n')
t.close()
if __name__ == '__main__':
main()
``` |
{
"source": "josephnoir/ctutlz",
"score": 2
} |
#### File: ctutlz/ctutlz/ctlog.py
```python
import json
import re
import requests
from os.path import abspath, expanduser, join, isfile, dirname
import html2text
from utlz import load_json, namedtuple, text_with_newlines
from utlz.types import Enum
from ctutlz.utils.encoding import decode_from_b64, encode_to_b64
from ctutlz.utils.encoding import digest_from_b64
from ctutlz.utils.logger import logger
# https://groups.google.com/forum/#!topic/certificate-transparency/zZwGExvQeiE
# PENDING:
# The Log has requested inclusion in the Log list distributor’s trusted Log list,
# but has not yet been accepted.
# A PENDING Log does not count as ‘currently qualified’, and does not count as ‘once qualified’.
# QUALIFIED:
# The Log has been accepted by the Log list distributor, and added to the CT checking code
# used by the Log list distributor.
# A QUALIFIED Log counts as ‘currently qualified’.
# USABLE:
# SCTs from the Log can be relied upon from the perspective of the Log list distributor.
# A USABLE Log counts as ‘currently qualified’.
# FROZEN (READONLY in JSON-schema):
# The Log is trusted by the Log list distributor, but is read-only, i.e. has stopped accepting
# certificate submissions.
# A FROZEN Log counts as ‘currently qualified’.
# RETIRED:
# The Log was trusted by the Log list distributor up until a specific retirement timestamp.
# A RETIRED Log counts as ‘once qualified’ if the SCT in question was issued before the retirement timestamp.
# A RETIRED Log does not count as ‘currently qualified’.
# REJECTED:
# The Log is not and will never be trusted by the Log list distributor.
# A REJECTED Log does not count as ‘currently qualified’, and does not count as ‘once qualified’.
KnownCTStates = Enum(
PENDING='pending',
QUALIFIED='qualified',
USABLE='usable',
READONLY='readonly', # frozen
RETIRED='retired',
REJECTED='rejected'
)
Log = namedtuple(
typename='Log',
field_names=[ # each of type: str
'key', # base-64 encoded, type: str
'log_id',
'mmd', # v1: maximum_merge_delay
'url',
# optional ones:
'description=None',
'dns=None',
'temporal_interval=None',
'log_type=None',
'state=None', # JSON-schema has: pending, qualified, usable, readonly, retired, rejected
'operated_by=None'
],
lazy_vals={
'key_der': lambda self: decode_from_b64(self.key), # type: bytes
'log_id_der': lambda self: digest_from_b64(self.key), # type: bytes
'pubkey': lambda self: '\n'.join([ # type: str
'-----BEGIN PUBLIC KEY-----',
text_with_newlines(text=self.key,
line_length=64),
'-----END PUBLIC KEY-----']),
'scts_accepted_by_chrome':
lambda self:
None if self.state is None else
True if next(iter(self.state)) in [KnownCTStates.USABLE,
KnownCTStates.QUALIFIED,
KnownCTStates.READONLY] else
False,
}
)
# plurale tantum constructor
def Logs(log_dicts):
'''
Arg log_dicts example:
{
"logs": [
{
"description": "Google 'Argon2017' log",
"log_id": "+tTJfMSe4vishcXqXOoJ0CINu/TknGtQZi/4aPhrjCg=",
"key": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEVG18id3qnfC6X/RtYHo3TwIlvxz2b4WurxXfaW7t26maKZfymXYe5jNGHif0vnDdWde6z/7Qco6wVw+dN4liow==",
"url": "https://ct.googleapis.com/logs/argon2017/",
"mmd": 86400,
"state": {
"rejected": {
"timestamp": "2018-02-27T00:00:00Z"
}
},
"temporal_interval": {
"start_inclusive": "2017-01-01T00:00:00Z",
"end_exclusive": "2018-01-01T00:00:00Z"
},
"operated_by": {
"name": "Google",
"email": [
"<EMAIL>"
],
}
},
'''
logs_out = []
for log in log_dicts:
logs_out += [Log(**kwargs) for kwargs in log['logs']]
return logs_out
def set_operator_names(logs_dict):
'''
Fold the logs listing by operator into list of logs.
Append operator information to each log
Arg log_dicts example:
{
"operators": [
{
"name": "Google",
"email": [
"<EMAIL>"
],
"logs": [
{
"description": "Google 'Argon2017' log",
"log_id": "+tTJfMSe4vishcXqXOoJ0CINu/TknGtQZi/4aPhrjCg=",
"key": "<KEY>
"url": "https://ct.googleapis.com/logs/argon2017/",
"mmd": 86400,
"state": {
"rejected": {
"timestamp": "2018-02-27T00:00:00Z"
}
},
"temporal_interval": {
"start_inclusive": "2017-01-01T00:00:00Z",
"end_exclusive": "2018-01-01T00:00:00Z"
}
},
'''
logs_dict['logs'] = []
for operator in logs_dict['operators']:
operator_name = operator['name']
operator_email = operator['email']
for log in operator['logs']:
log['operated_by'] = {
'name': operator_name,
'email': operator_email
}
logs_dict['logs'].append(log)
del logs_dict['operators']
# logs included in the Chrome browser
BASE_URL = 'https://www.gstatic.com/ct/log_list/v2/'
URL_LOG_LIST = BASE_URL + 'log_list.json'
URL_ALL_LOGS = BASE_URL + 'all_logs_list.json'
def download_log_list(url=URL_ALL_LOGS):
'''Download json file with known logs accepted by chrome and return the
logs as a list of `Log` items.
Return: dict, the 'logs_dict'
Arg log_dicts example:
{
"operators": [
{
"name": "Google",
"email": [
"<EMAIL>"
],
"logs": [
{
"description": "Google 'Argon2017' log",
"log_id": "+tTJfMSe4vishcXqXOoJ0CINu/TknGtQZi/4aPhrjCg=",
"key": "<KEY>
"url": "https://ct.googleapis.com/logs/argon2017/",
"mmd": 86400,
"state": {
"rejected": {
"timestamp": "2018-02-27T00:00:00Z"
}
},
"temporal_interval": {
"start_inclusive": "2017-01-01T00:00:00Z",
"end_exclusive": "2018-01-01T00:00:00Z"
}
},
'''
response = requests.get(url)
response_str = response.text
data = json.loads(response_str)
data['url'] = url
return data
def read_log_list(filename):
'''Read log list from file `filename` and return as logs_dict.
Return: dict, the 'logs_dict'
logs_dict example: {
'logs: [
{
"description": "Google 'Aviator' log",
"key": "MFkwE..."
"url": "ct.googleapis.com/aviator/",
"maximum_merge_delay": 86400,
"operated_by": [0],
"final_sth": {
...
},
"dns_api_endpoint": ...
},
],
'operators': [
...
]
}
'''
filename = abspath(expanduser(filename))
data = load_json(filename)
return data
def get_log_list(list_name='really_all_logs.json'):
'''Try to read log list from local file. If file not exists download
log list.
Return: dict, the 'logs_dict'
logs_dict example: {
'logs: [
{
"description": "Google 'Aviator' log",
"key": "MFkwE..."
"url": "ct.googleapis.com/aviator/",
"maximum_merge_delay": 86400,
"operated_by": [0],
"final_sth": {
...
},
"dns_api_endpoint": ...
},
],
'operators': [
...
]
}
'''
thisdir = dirname(__file__)
filename = join(thisdir, list_name)
if isfile(filename):
logs_dict = read_log_list(filename)
else:
logs_dict = download_log_list(''.join([BASE_URL, list_name]))
return logs_dict
def print_schema():
thisdir = dirname(__file__)
filename = join(thisdir, 'log_list_schema.json')
with open(filename, 'r') as fh:
json_str = fh.read()
print(json_str.strip())
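# Usage sketch: fetch the Chrome log list, flatten the operator grouping and
# build Log objects (uses only functions and constants defined above).
#
# logs_dict = download_log_list(URL_LOG_LIST)
# set_operator_names(logs_dict)
# logs = Logs([logs_dict])
# usable = [log for log in logs if log.scts_accepted_by_chrome]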
```
#### File: ctutlz/scripts/ctloglist.py
```python
import argparse
import datetime
import json
import logging
from utlz import first_paragraph, red
from ctutlz.ctlog import download_log_list
from ctutlz.ctlog import set_operator_names, print_schema
from ctutlz.ctlog import URL_ALL_LOGS, Logs
from ctutlz.utils.logger import VERBOSE, init_logger, setup_logging, logger
from ctutlz._version import __version__
def create_parser():
parser = argparse.ArgumentParser(description=first_paragraph(__doc__))
parser.epilog = __doc__.split('\n', 1)[-1]
parser.add_argument('-v', '--version',
action='version',
default=False,
version=__version__,
help='print version number')
me1 = parser.add_mutually_exclusive_group()
me1.add_argument('--short',
dest='loglevel',
action='store_const',
const=logging.INFO,
default=VERBOSE, # default loglevel if nothing set
help='show short results')
me1.add_argument('--debug',
dest='loglevel',
action='store_const',
const=logging.DEBUG,
help='show more for diagnostic purposes')
me2 = parser.add_mutually_exclusive_group()
me2.add_argument('--json',
action='store_true',
dest='print_json',
help='print merged log lists as json')
me2.add_argument('--schema',
action='store_true',
dest='print_schema',
help='print json schema')
return parser
def warn_inconsistency(url, val_a, val_b):
# suppress warning doubles (i know it's hacky)
key = url + ''.join(sorted('%s%s' % (val_a, val_b)))
if not hasattr(warn_inconsistency, 'seen'):
warn_inconsistency.seen = {}
if not warn_inconsistency.seen.get(key, False):
warn_inconsistency.seen[key] = True
else:
return
logger.warning(red('inconsistent data for log %s: %s != %s' % (url, val_a, val_b)))
def data_structure_from_log(log):
log_data = dict(log._asdict())
log_data['id_b64'] = log.id_b64
log_data['pubkey'] = log.pubkey
log_data['scts_accepted_by_chrome'] = \
log.scts_accepted_by_chrome
return log_data
def list_from_lists(log_lists):
log_list = []
for item_dict in log_lists:
for log in item_dict['logs']:
log_data = data_structure_from_log(log)
log_list.append(log_data)
return log_list
def show_log(log, order=3):
logger.verbose('#' * order + ' %s\n' % log.url)
logdict = log._asdict()
for key, value in logdict.items():
if key == 'id_b64_non_calculated' and value == log.id_b64:
value = None # don't log this value
if key == 'operated_by':
value = ', '.join(value)
# avoid markdown syntax interpretation and improve readablity
key = key.replace('_', ' ')
if value is not None:
logger.verbose('* __%s__: `%s`' % (key, value))
logger.verbose('* __scts accepted by chrome__: '
'%s' % log.scts_accepted_by_chrome)
if log.key is not None:
logger.verbose('* __id b64__: `%s`' % log.log_id)
logger.verbose('* __pubkey__:\n```\n%s\n```' % log.pubkey)
logger.verbose('')
def show_logs(logs, heading, order=2):
if len(logs) <= 0:
return
logger.info(('#' * order + ' %s\n' % heading) if heading else '')
s_or_not = 's'
if len(logs) == 1:
s_or_not = ''
# show log size
logger.info('%i log%s\n' % (len(logs), s_or_not))
# list log urls
for log in logs:
if logger.level < logging.INFO:
anchor = log.url.replace('/', '')
logger.verbose('* [%s](#%s)' % (log.url, anchor))
else:
logger.info('* %s' % log.url)
logger.info('')
for log in logs:
show_log(log)
logger.info('End of list')
def ctloglist(print_json=None):
'''Gather ct-log lists and print the merged log list.
Args:
print_json(boolean): If True, print merged log list as json data.
Else print as markdown.
'''
if not print_json:
today = datetime.date.today()
now = datetime.datetime.now()
logger.info('# Known Certificate Transparency (CT) Logs\n')
logger.verbose('Created with [ctloglist]'
'(https://github.com/theno/ctutlz#ctloglist)\n')
logger.verbose('* [all_logs_list.json]('
'https://www.gstatic.com/ct/log_list/v2/all_logs_list.json)'
'\n')
logger.info('Version (Date): %s\n' % today)
logger.verbose('Datetime: %s\n' % now)
logger.info('') # formatting: insert empty line
# all_logs_list.json
all_dict = download_log_list(URL_ALL_LOGS)
orig_all_dict = dict(all_dict)
set_operator_names(all_dict)
all_logs = Logs([all_dict])
if print_json:
json_str = json.dumps(orig_all_dict, indent=4, sort_keys=True)
print(json_str)
else:
show_logs(all_logs, '')
def main():
init_logger()
parser = create_parser()
args = parser.parse_args()
setup_logging(args.loglevel)
logger.debug(args)
if args.print_schema:
print_schema()
else:
ctloglist(args.print_json)
if __name__ == '__main__':
main()
```
#### File: ctutlz/utils/string.py
```python
def to_hex(val):
'''Return val as str of hex values concatenated by colons.'''
if type(val) is int:
return hex(val)
try:
# Python-2.x
if type(val) is long:
return hex(val)
except NameError:
pass
# else:
try:
# Python-2.x
return ":".join("{0:02x}".format(ord(char)) for char in val)
except TypeError:
# Python-3.x
return ":".join("{0:02x}".format(char) for char in val)
# http://stackoverflow.com/a/16891418
def string_without_prefix(prefix, string):
'''Return string without prefix. If string does not start with prefix,
return string.
'''
if string.startswith(prefix):
return string[len(prefix):]
return string
def string_with_prefix(prefix, string):
'''Return string with prefix prepended. If string already starts with
prefix, return string.
'''
return str(prefix) + string_without_prefix(str(prefix), str(string))
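# Illustrative results:
# to_hex(255) -> '0xff'
# to_hex(b'\x01\x02') -> '01:02'
# string_without_prefix('foo', 'foobar') -> 'bar'
# string_with_prefix('foo', 'bar') -> 'foobar'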
```
#### File: ctutlz/utils/tdf_bytes.py
```python
import struct
from utlz import namedtuple as namedtuple_utlz
# tdf := "TLS Data Format" (cf. https://tools.ietf.org/html/rfc5246#section-4)
def namedtuple(typename, field_names='arg', lazy_vals=None, **kwargs):
lazy_vals['_parse'] = lambda self: \
self.arg if type(self.arg) == dict else \
self._parse_func(self.arg)[0] if type(self.arg) == bytes else \
None
lazy_vals['tdf'] = lambda self: \
self._parse['tdf']
return namedtuple_utlz(typename, field_names, lazy_vals, **kwargs)
class TdfBytesParser(object):
'''An instance of this is a file like object which enables access of a
tdf (data) struct (a bytes string).
'''
# context methods
def __init__(self, tdf_bytes):
self._bytes = tdf_bytes
self.offset = 0
'''mutable parse results (read and delegate) dict'''
self.res = {}
def __enter__(self):
self.offset = 0
return self
def __exit__(self, exc_type, exc_value, exc_traceback):
self.offset = 0
return
# methods for parsing
def read(self, key, fmt):
data = struct.unpack_from(fmt, self._bytes, self.offset)
self.offset += struct.calcsize(fmt)
if len(data) == 1:
self.res[key] = data[0]
else:
self.res[key] = data
return self.res[key]
def delegate(self, key, read_func):
self.res[key], offset = read_func(self._bytes[self.offset:])
self.offset += offset
return self.res[key]
def result(self):
self.res['tdf'] = bytes(bytearray(self._bytes[0:self.offset]))
return self.res, self.offset
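# Usage sketch: parse a 2-byte big-endian integer from a TLS-style byte string
# (format strings follow the struct module conventions).
#
# with TdfBytesParser(b'\x00\x01rest') as parser:
# parser.read('val', '!H') # -> 1
# parse, offset = parser.result() # offset == 2, parse['tdf'] == b'\x00\x01'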
```
#### File: josephnoir/ctutlz/setup.py
```python
import os
import shutil
from setuptools import setup, find_packages
from codecs import open
def create_readme_with_long_description():
this_dir = os.path.abspath(os.path.dirname(__file__))
readme_md = os.path.join(this_dir, 'README.md')
readme = os.path.join(this_dir, 'README')
if os.path.isfile(readme_md):
if os.path.islink(readme):
os.remove(readme)
shutil.copy(readme_md, readme)
try:
import pypandoc
long_description = pypandoc.convert(readme_md, 'rst', format='md')
if os.path.islink(readme):
os.remove(readme)
with open(readme, 'w') as out:
out.write(long_description)
except(IOError, ImportError, RuntimeError):
if os.path.isfile(readme_md):
os.remove(readme)
os.symlink(readme_md, readme)
with open(readme, encoding='utf-8') as in_:
long_description = in_.read()
return long_description
this_dir = os.path.abspath(os.path.dirname(__file__))
filename = os.path.join(this_dir, 'ctutlz', '_version.py')
with open(filename, 'rt') as fh:
version = fh.read().split('"')[1]
description = __doc__.split('\n')[0]
long_description = create_readme_with_long_description()
setup(
name='ctutlz',
version=version,
description=description,
long_description=long_description,
url='https://github.com/theno/ctutlz',
author='<NAME>',
author_email='<EMAIL>',
license='MIT',
entry_points={
'console_scripts': [
'ctloglist = ctutlz.scripts.ctloglist:main',
'decompose-cert = ctutlz.scripts.decompose_cert:main',
'verify-scts = ctutlz.scripts.verify_scts:main',
],
},
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'Topic :: Software Development :: Libraries :: Python Modules',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
],
keywords='python development utilities library '
'certificate-transparency ct signed-certificate-timestamp sct',
packages=find_packages(exclude=[
'contrib',
'docs',
'tests',
]),
package_data={'ctutlz': ['really_all_logs.json', 'log_list_schema.json'], },
install_requires=[
'cffi>=1.4.0',
'cryptography>=2.8.0',
'html2text>=2016.9.19',
'pyasn1>=0.4.0',
'pyasn1-modules>=0.2.0',
'pyOpenSSL>=18.0.0',
'requests>=2.20.0',
'utlz>=0.10.0',
],
extras_require={
'dev': ['pypandoc'],
},
setup_requires=[
'cffi>=1.4.0'
],
cffi_modules=[
'ctutlz/tls/handshake_openssl_build.py:ffibuilder'
],
)
```
#### File: ctutlz/tests/test_rfc6962.py
```python
from ctutlz import rfc6962
def test_parse_log_entry_type_0():
tdf = b'\x00\x00'
parse, offset = rfc6962._parse_log_entry_type(tdf)
assert offset == 2
assert parse == {
'tdf': b'\x00\x00',
'val': 0,
}
def test_parse_log_entry_type_1():
tdf = b'\x00\x01'
parse, offset = rfc6962._parse_log_entry_type(tdf)
assert offset == 2
assert parse == {
'tdf': b'\x00\x01',
'val': 1,
}
def test_log_entry_type_0_from_tdf():
tdf = b'\x00\x00anything'
log_entry_type = rfc6962.LogEntryType(arg=tdf)
assert log_entry_type.is_x509_entry is True
assert log_entry_type.is_precert_entry is False
assert log_entry_type.tdf == b'\x00\x00'
assert str(log_entry_type) == 'x509_entry'
assert log_entry_type._parse == {
'tdf': b'\x00\x00',
'val': 0,
}
def test_log_entry_type_0_from_parse():
parse = {
'tdf': b'\x00\x00',
'val': 0,
}
log_entry_type = rfc6962.LogEntryType(arg=parse)
assert log_entry_type.is_x509_entry is True
assert log_entry_type.is_precert_entry is False
assert log_entry_type.tdf == b'\x00\x00'
assert str(log_entry_type) == 'x509_entry'
assert log_entry_type._parse == {
'tdf': b'\x00\x00',
'val': 0,
}
def test_log_entry_type_1_from_tdf():
tdf = b'\x00\x01'
log_entry_type = rfc6962.LogEntryType(arg=tdf)
assert log_entry_type.is_x509_entry is False
assert log_entry_type.is_precert_entry is True
assert log_entry_type.tdf == b'\x00\x01'
assert str(log_entry_type) == 'precert_entry'
assert log_entry_type._parse == {
'tdf': b'\x00\x01',
'val': 1,
}
def test_log_entry_type_1_from_parse():
parse = {
'tdf': b'\x00\x01',
'val': 1,
}
log_entry_type = rfc6962.LogEntryType(arg=parse)
assert log_entry_type.is_x509_entry is False
assert log_entry_type.is_precert_entry is True
assert log_entry_type.tdf == b'\x00\x01'
assert str(log_entry_type) == 'precert_entry'
assert log_entry_type._parse == {
'tdf': b'\x00\x01',
'val': 1,
}
def test_signature_type_0_from_tdf():
tdf = b'\x00\x01\x02\x03\x04\x05\x06\x07\x89'
signature_type = rfc6962.SignatureType(arg=tdf)
assert signature_type.is_certificate_timestamp is True
assert signature_type.is_tree_hash is False
assert signature_type._parse == {
'tdf': b'\x00',
'val': 0,
}
def test_signature_type_0_from_parse():
parse = {
'tdf': b'\x00',
'val': 0,
}
signature_type = rfc6962.SignatureType(arg=parse)
assert signature_type.is_certificate_timestamp is True
assert signature_type.is_tree_hash is False
assert signature_type._parse == {
'tdf': b'\x00',
'val': 0,
}
def test_signature_type_1_from_tdf():
tdf = b'\x01'
signature_type = rfc6962.SignatureType(arg=tdf)
assert signature_type.is_certificate_timestamp is False
assert signature_type.is_tree_hash is True
assert signature_type._parse == {
'tdf': b'\x01',
'val': 1,
}
def test_signature_type_1_from_parse():
parse = {
'tdf': b'\x01',
'val': 1,
}
signature_type = rfc6962.SignatureType(arg=parse)
assert signature_type.is_certificate_timestamp is False
assert signature_type.is_tree_hash is True
assert signature_type._parse == {
'tdf': b'\x01',
'val': 1,
}
def test_version_from_tdf():
tdf = b'\x00anything'
version = rfc6962.Version(tdf)
assert version.is_v1 is True
assert version._parse == {
'tdf': b'\x00',
'val': 0,
}
# invalid version number
invalid_tdf = b'\x10'
version = rfc6962.Version(invalid_tdf)
assert version.is_v1 is False
assert version._parse == {
'tdf': b'\x10',
'val': 16,
}
def test_version_from_parse():
parse = {
'val': 0,
'tdf': b'\x00',
}
version = rfc6962.Version(arg=parse)
assert version.is_v1 is True
assert version._parse == {
'tdf': b'\x00',
'val': 0,
}
def test_SignedCertificateTimestamp_from_tdf():
tdf = (b'\x00\xeeK\xbd\xb7u\xce`\xba\xe1Bi\x1f\xab\xe1\x9ef\xa3\x0f~_\xb0r'
b'\xd8\x83\x00\xc4{\x89z\xa8\xfd\xcb\x00\x00\x01]\xe7\x11\xf5\xf7'
b'\x00\x00\x04\x03\x00F0D\x02 ph\xa0\x08\x96H\xbc\x1b\x11\x0e\xd0'
b'\x98\x02\xa8\xac\xb8\x19-|,\xe5\x0e\x9e\xf8/_&\xf7b\x88\xb4U\x02 X'
b'\xbc\r>jFN\x0e\xda\x0b\x1b\xb5\xc0\x1a\xfd\x90\x91\xb0&\x1b\xdf'
b'\xdc\x02Z\xd4zd\xd7\x80c\x0f\xd5')
sct = rfc6962.SignedCertificateTimestamp(arg=tdf)
assert sct.log_id.tdf == (b'\xeeK\xbd\xb7u\xce`\xba\xe1Bi\x1f\xab\xe1\x9ef'
b'\xa3\x0f~_\xb0r\xd8\x83\x00\xc4{\x89z\xa8\xfd'
b'\xcb')
assert sct.tdf == tdf
```
#### File: ctutlz/tests/test_sct_ee_cert.py
```python
from os.path import join, dirname
import OpenSSL
from pyasn1.codec.der.decoder import decode as der_decoder
from pyasn1.type.univ import ObjectIdentifier, Sequence
from utlz import flo
from ctutlz.sct.ee_cert import pyopenssl_certificate_from_der, EndEntityCert
def test_pyopenssl_certificate_from_der():
basedir = join(dirname(__file__), 'data', 'test_sct_ee_cert')
for filename in ['ev_cert.der', 'cert_no_ev.der']:
cert_der = open(flo('{basedir}/{filename}'), 'rb').read()
got = pyopenssl_certificate_from_der(cert_der)
assert type(got) is OpenSSL.crypto.X509
def test_is_ev_cert():
basedir = join(dirname(__file__), 'data', 'test_sct_ee_cert')
test_data = [
('ev_cert.der', True),
('cert_no_ev.der', False),
]
for filename, expected in test_data:
cert_der = open(flo('{basedir}/{filename}'), 'rb').read()
ee_cert = EndEntityCert(cert_der)
assert ee_cert.is_ev_cert is expected
def test_is_letsencrypt_cert():
basedir = join(dirname(__file__), 'data', 'test_sct_ee_cert')
test_data = [
('issued_by_letsencrypt.der', True),
('issued_by_letsencrypt_2.der', True),
('issued_by_letsencrypt_not.der', False),
]
for filename, expected in test_data:
cert_der = open(flo('{basedir}/{filename}'), 'rb').read()
ee_cert = EndEntityCert(cert_der)
assert ee_cert.is_letsencrypt_cert is expected
``` |
{
"source": "josephnowak/multitask_organizer",
"score": 3
} |
#### File: multitask_queue/tests/test_decorator.py
```python
from multitask_queue.task import TaskDescriptor
from multitask_queue import decorators
class TestDecorators:
def test_decorators(self):
total_decorators = {
'regular_task': 'regular',
'parallel_task': 'parallel',
'independent_task': 'independent',
'pre_execution_task': 'pre_execution',
'autofill_task': 'autofill'
}
for decorator_name, type_task in total_decorators.items():
decorator = getattr(decorators, decorator_name)
if decorator_name != 'autofill_task':
parameters = dict(
exec_on_events=['event'],
exec_after_tasks=['dummy'],
exec_before_tasks=['dummy2'],
autofill=['dummy3']
)
if type_task in ['independent', 'parallel']:
parameters['type_parallelization'] = 'thread'
@decorator(**parameters)
def dummy_func(a: int = 5):
pass
task = decorators.PLUGINS['dummy_func']
assert task.func == dummy_func
assert task.exec_on_events == {'event'}
assert task.exec_after_tasks == {'dummy'}
assert task.exec_before_tasks == {'dummy2'}
assert task.autofill == {'dummy3'}
else:
@decorator
def dummy_func(a: int = 5):
pass
task = decorators.PLUGINS['dummy_func']
assert task.type_task == type_task
assert task.parameters == ['a']
assert task.default_parameters == {'a': 5}
del decorators.PLUGINS['dummy_func']
if __name__ == "__main__":
test = TestDecorators()
test.test_decorators()
```
#### File: multitask_queue/tests/test_multitask.py
```python
from multitask_queue.multitask import MultitasksQueue, Multitask, MultitasksOrganizer
from multitask_queue.task import TasksOrganizer, Task, TaskDescriptor
from multitask_queue.decorators import regular_task, pre_execution_task, parallel_task
class TestMultitask:
def test_multitask(self):
def add_event_modification(multitask: Multitask):
multitask.add_events({'modification'})
return {}
def add_event_exception(multitask: Multitask):
multitask.add_events({'exception'})
return {}
def sum_to_value(value: int, a: int):
return {'value': value + a}
def mul_value(value: int, b: int):
return {'value': value * b}
def divide_value(value: int, c: int):
return {'value': value / c}
task_event_modification = Task(
TaskDescriptor(
func=add_event_modification,
type_task='pre_execution',
exec_on_events=['regular'],
exec_after_tasks=[],
exec_before_tasks=[],
autofill=[]
)
)
task_event_exception = Task(
TaskDescriptor(
func=add_event_exception,
type_task='pre_execution',
exec_on_events=['regular'],
exec_after_tasks=[],
exec_before_tasks=[],
autofill=[]
)
)
task_sum = Task(
TaskDescriptor(
func=sum_to_value,
type_task='regular',
exec_on_events=['modification'],
exec_after_tasks=[],
exec_before_tasks=[],
autofill=[]
)
)
task_mul = Task(
TaskDescriptor(
func=mul_value,
type_task='regular',
exec_on_events=['modification'],
exec_after_tasks=['sum_to_value'],
exec_before_tasks=[],
autofill=['sum_to_value']
)
)
task_div = Task(
TaskDescriptor(
func=divide_value,
type_task='regular',
exec_on_events=['exception'],
exec_after_tasks=[],
exec_before_tasks=[],
autofill=[]
)
)
multitask_handler = Multitask(
multitask_id='any',
events={'regular'}
)
data = {'value': 1, 'a': 1, 'b': 2, 'c': 2}
multitask_handler.run(
data,
TasksOrganizer([task_mul, task_sum, task_div, task_event_modification])
)
assert data['value'] == 4
multitask_handler = Multitask(
multitask_id='any',
events={'regular'}
)
multitask_handler.run(
data,
TasksOrganizer([task_mul, task_sum, task_div, task_event_exception])
)
assert data['value'] == 2
def test_multitasks_queue(self):
@pre_execution_task(exec_on_events=['preprocess'])
def enqueue_multitasks(mt_queue: MultitasksOrganizer):
mt_queue.put(
Multitask(
multitask_id=1,
events={'modification'}
)
)
mt_queue.put(
Multitask(
multitask_id=2,
events={'exception'}
)
)
return {}
@regular_task(exec_on_events=['modification'])
def sum_to_value(value: int, a: int):
return {'value': value + a}
@regular_task(exec_on_events=['modification'], exec_after_tasks=['sum_to_value'])
def mul_value(value: int, b: int):
return {'value': value * b}
@parallel_task(exec_on_events=['exception'], type_parallelization='thread')
def divide_value(value: int, c: int):
return {'value': value / c}
data = {'value': 1, 'a': 1, 'b': 4, 'c': 2}
multitasks = MultitasksQueue()
multitasks.run(data)
assert data['value'] == 4
if __name__ == "__main__":
test = TestMultitask()
# test.test_multitask()
test.test_multitasks_queue()
``` |
{
"source": "josephobonyo/sigma_coding_youtube",
"score": 3
} |
#### File: python/python-vba/Lesson 2 - Creating Python Formulas.py
```python
import pythoncom
import numpy as np
import win32com.client
class PythonObjectLibrary:
# This will create a GUID to register it with Windows, it is unique.
_reg_clsid_ = pythoncom.CreateGuid()
# Register the object as an EXE file, the alternative is an DLL file (INPROC_SERVER)
_reg_clsctx_ = pythoncom.CLSCTX_LOCAL_SERVER
# the program ID, this is the name of the object library that users will use to create the object.
_reg_progid_ = "Python.ObjectLibrary"
# this is a description of our object library.
_reg_desc_ = "This is our Python object library."
# a list of strings that indicate the public methods for the object. If they aren't listed they are considered private.
_public_methods_ = ['pythonSum', 'pythonMultiply','addArray']
# multiply two cell values.
def pythonMultiply(self, a, b):
return a * b
# add two cell values
def pythonSum(self, x, y):
return x + y
# add a range of cell values
def addArray(self, myRange):
# create an instance of the range object that is passed through
rng1 = win32com.client.Dispatch(myRange)
# Get the values from the range
rng1val = np.array(list(rng1.Value))
return rng1val.sum()
if __name__ == '__main__':
import win32com.server.register
win32com.server.register.UseCommandLine(PythonObjectLibrary)
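# Usage note (illustrative): running this script registers the COM server with
# Windows; in Excel VBA the library can then be created and called, e.g.
# Set pyObj = CreateObject("Python.ObjectLibrary")
# result = pyObj.pythonMultiply(Range("A1").Value, Range("B1").Value)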
``` |
{
"source": "Joseph-Odhiambo/Blog-Post",
"score": 3
} |
#### File: Blog-Post/tests/user_test.py
```python
import unittest
from app.models import User
class UserTest(unittest.TestCase):
def setUp(self):
self.new_user = User(username='joseph', email="<EMAIL>", bio='default bio', password='<PASSWORD>')
def test_password_setter(self):
self.assertTrue(self.new_user.hashed_password is not None)
def test_no_access_password(self):
with self.assertRaises(AttributeError):
self.new_user.hashed_password
def test_password_verification(self):
self.assertTrue(self.new_user.verify_password('<PASSWORD>'))
``` |
{
"source": "josePhoenix/omabutton",
"score": 2
} |
#### File: josePhoenix/omabutton/player.py
```python
import re
import sys
import multiprocessing
import subprocess
import time
import os
import glob
import Queue as queue
import logging
LOG_FORMAT = "%(asctime)s %(module)s:%(lineno)d [%(levelname)s]: %(message)s"
logging.basicConfig(format=LOG_FORMAT, level=logging.WARN)
log = logging # shorthand
try:
from RPi import GPIO
_ON_RASPI = True
except ImportError:
GPIO = None
_ON_RASPI = False
import vlc
import id3reader
if _ON_RASPI:
MEDIA_ROOT = "/media/usb0"
SPEECH_HELPER = "flite"
else:
MEDIA_ROOT = "/Users/jdl/dev/omabutton/media"
SPEECH_HELPER = "say"
BUTTON_PREVIOUS = 25
BUTTON_PLAYPAUSE = 22
BUTTON_NEXT = 4
class Buttons(object):
def __init__(self, button_names):
self._lookup = {}
for identifier, channel_number in button_names.items():
setattr(self, identifier, channel_number)
self._lookup[channel_number] = identifier
def __getitem__(self, channel_number):
return self._lookup[channel_number]
def initialize(self, callback):
GPIO.setmode(GPIO.BCM)
for channel in self._lookup.keys():
GPIO.setup(channel, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.add_event_detect(
channel,
GPIO.FALLING, # buttons pulled down, so falling
# means when button released
callback=callback,
bouncetime=1000
)
class SpeechRequest(object):
NOT_STARTED = 0
IN_PROGRESS = 1
COMPLETED = 2
def __init__(self, msg):
self.msg = msg
self.ref = None
def play(self):
self.ref = subprocess.Popen((SPEECH_HELPER, self.msg))
def wait(self):
if self.ref is None:
return
while self.status() != SpeechRequest.COMPLETED:
time.sleep(0.2)
def stop(self):
if self.ref:
try:
self.ref.terminate()
except OSError as e:
if e.errno == errno.ESRCH:
pass # race condition where pid might be cleaned up before
# we try to kill it
else:
raise
def status(self):
if self.ref is None:
return SpeechRequest.NOT_STARTED
else:
if self.ref.poll() is None:
return SpeechRequest.IN_PROGRESS
else:
return SpeechRequest.COMPLETED
class Player(multiprocessing.Process):
media_files = []
def __init__(self, media_root, buttons):
self.buttons = buttons
self.event_queue = multiprocessing.Queue()
# VLC plumbing
self._media_root = media_root
self._instance = vlc.Instance()
self.now_playing = self._instance.media_player_new()
self._vlc_event_manager = self.now_playing.event_manager()
self._vlc_event_manager.event_attach(
vlc.EventType.MediaPlayerEndReached,
self._auto_advance
# proceed to the next item upon finishing one
)
self._media_list_position = 0
self.running = False
self.media_files = glob.glob(os.path.join(self._media_root, '*.[mM][pP]3'))
self.media_files.sort()
if not len(self.media_files) > 0:
log.error('Cannot init player! No media '
'found in {}'.format(self._media_root))
sys.exit(1)
super(Player, self).__init__()
def _auto_advance(self, *args, **kwargs):
log.debug('[_auto_advance] event from VLC: {} {}'.format(args, kwargs))
self.send_event(BUTTON_NEXT)
@staticmethod
def _say(message):
speech = SpeechRequest(message)
speech.play()
speech.wait()
@staticmethod
def _name_for_media(normpath):
id3r = id3reader.Reader(normpath)
if id3r.getValue('title') is not None and \
id3r.getValue('performer') is not None:
return '{0} from {1}'.format(
id3r.getValue('title').encode('ascii', 'ignore'),
id3r.getValue('performer').encode('ascii', 'ignore')
)
else:
# Fallback on filename parsing
filename = os.path.basename(normpath).replace('.mp3', '')
return re.sub(r'\d+\s*-?\s*', '', filename)
def _begin_media(self, filepath, begin_as_paused=False):
if self.now_playing.is_playing():
log.debug('[_begin_media] currently is_playing, stopping before '
'switching media')
self.now_playing.stop()
normpath = os.path.abspath(filepath)
log.debug('[_begin_media] attempting to load {}'.format(normpath))
try:
media = self._instance.media_new(normpath)
except NameError:
log.error('NameError: %s (%s vs LibVLC %s)' % (sys.exc_info()[1],
__version__,
libvlc_get_version()))
sys.exit(1)
self.now_playing.set_media(media)
log.debug('[_begin_media] set_media succeeded')
self._name = self._name_for_media(normpath)
if not begin_as_paused:
log.debug('[_begin_media] begin_as_paused not set, playing')
self.play()
def play(self):
log.debug('[play] announcing name: {}'.format(self._name))
self._say('Now playing: {}'.format(self._name))
log.debug('[play] begin playing...')
self.now_playing.play()
log.debug('[play] ...playing!')
def pause(self):
if self.now_playing.is_playing():
self.now_playing.set_pause(True)
def playpause(self):
if not self.now_playing.is_playing():
log.debug('[playpause] currently not is_playing, play()')
self.play()
else:
log.debug('[playpause] currently is_playing, pause()')
self.pause()
def next_media(self):
self._media_list_position += 1
if self._media_list_position >= len(self.media_files):
self._say('That is the end of the material currently available. '
'Have someone load some more for you, or press the green '
'play button to start from the beginning.')
log.debug('[next_media] wrap around, we reached the end')
self._media_list_position = 0
self._begin_media(self.media_files[self._media_list_position],
begin_as_paused=True)
else:
self._begin_media(self.media_files[self._media_list_position],
begin_as_paused=False)
def previous_media(self):
self._media_list_position -= 1
if self._media_list_position < 0:
self._say('You have reached the beginning of the list. Now '
'counting back from the last item on the list.')
log.debug('[previous_media] wrap around, we reached the beginning')
self._media_list_position = len(self.media_files) - 1
self._begin_media(self.media_files[self._media_list_position])
def send_event(self, event_channel):
log.debug('[send_event] {}'.format(event_channel))
self.event_queue.put(event_channel)
def initialize(self):
# on first start, load the first media item in the list
# and set it to paused
log.debug("[initialize] Queueing up first media item...")
self._media_list_position = 0
self._begin_media(self.media_files[0], begin_as_paused=True)
self._say("There are {} pod cast recordings available. "
"Ready to play!".format(len(self.media_files)))
log.debug("[initialize] Ready to play!")
def dispatch(self, event):
if self.now_playing is None:
log.debug('[dispatch] Not done initializing yet; '
'ignore button press')
return
if event == BUTTON_NEXT:
# cancel currently playing item
# begin playing next item, wrapping if necessary
log.debug('[dispatch] got BUTTON_NEXT')
self.next_media()
elif event == BUTTON_PLAYPAUSE:
# pause currently playing item
log.debug('[dispatch] got BUTTON_PLAYPAUSE')
self.playpause()
elif event == BUTTON_PREVIOUS:
# cancel currently playing item
# begin playing previous item, wrapping if necessary
log.debug('[dispatch] got BUTTON_PREVIOUS')
self.previous_media()
else:
log.debug('[dispatch] What? Unknown button: {}'.format(event))
def run(self):
self.running = True
self.initialize()
while self.running:
try:
log.debug('[run] checking for event...')
event = self.event_queue.get(block=True, timeout=1.0)
log.debug('[run] got event {} ({})'.format(self.buttons[event], event))
self.dispatch(event)
except queue.Empty:
pass # try again for another second
# (prevents waiting forever on quit for a blocking get())
if __name__ == "__main__":
# construct the button map unconditionally so Player can resolve channel
# names even off the Pi; GPIO is only touched in Buttons.initialize()
buttons = Buttons({
'PREVIOUS': BUTTON_PREVIOUS,
'PLAYPAUSE': BUTTON_PLAYPAUSE,
'NEXT': BUTTON_NEXT,
})
player = Player(MEDIA_ROOT, buttons)
if _ON_RASPI:
buttons.initialize(callback=player.send_event)
player.start()
while not _ON_RASPI:
command = raw_input("---\nnext - n\nplaypause - .\nprevious - p\n> ")
if not command:
continue
elif command[0] == 'n':
print 'next'
player.send_event(BUTTON_NEXT)
elif command[0] == 'p':
print 'previous'
player.send_event(BUTTON_PREVIOUS)
elif command[0] == '.':
print 'play/pause'
player.send_event(BUTTON_PLAYPAUSE)
else:
print '???'
# run forever
player.join()
``` |
{
"source": "JosephOHagan/pyspheregl",
"score": 2
} |
#### File: pyspheregl/demo/dots_test.py
```python
import numpy as np
import pyglet
from pyglet.gl import *
# sphere stuff
from ..sim.sphere_sim import getshader, resource_file
from ..sim import sphere_sim
from ..sphere import sphere
from ..utils.graphics_utils import make_unit_quad_tile
from ..utils.shader import ShaderVBO, shader_from_file
from ..utils.np_vbo import VBuf, IBuf
from ..utils import transformations as tn
from ..touch.rotater import RotationHandler
from ..touch import rotater
class DotsTest(object):
def __init__(self):
self.viewer = sphere_sim.make_viewer(show_touches=True, draw_fn=self.draw,
tick_fn=self.tick,
touch_fn=self.touch)
self.rotater = RotationHandler()
self.pts = sphere.lon_ring(0, 200) + sphere.lon_ring(30, 200) + sphere.lon_ring(-30, 200) + sphere.lon_ring(90, 200) + sphere.lon_ring(60, 200) + sphere.lon_ring(-60, 200)
self.pts += sphere.lat_ring(0, 200) + sphere.lat_ring(30, 200) + sphere.lat_ring(-30, 200) + sphere.lat_ring(90, 200) + sphere.lat_ring(60, 200) + sphere.lat_ring(-60, 200)
self.pts = np.array(self.pts, dtype=float)
point_shader = shader_from_file([getshader("sphere.vert"), getshader("user/point.vert")], [getshader("user/point.frag")])
self.point_vbo = ShaderVBO(point_shader, IBuf(np.arange(len(self.pts))),
buffers={"position":VBuf(self.pts), },
vars={"constant_size":10,
},
attribs={"color":(1,1,1,1)},
primitives=GL_POINTS)
self.viewer.start()
def touch(self, events):
for event in events:
xyz = sphere.polar_to_cart(*event.touch.lonlat)
if event.event=="DRAG":
self.rotater.drag(event.touch.id, xyz)
if event.event=="UP":
self.rotater.up(event.touch.id, xyz)
if event.event=="DOWN":
self.rotater.down(event.touch.id, xyz)
def draw(self):
glClearColor(0.0,0.0,0.0,1)
glClear(GL_COLOR_BUFFER_BIT)
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE)
glEnable(GL_POINT_SPRITE)
self.point_vbo.draw(vars={"quat":self.rotater.orientation})
def tick(self):
self.rotater.update(1/60.0)
pass
if __name__=="__main__":
p = DotsTest()
```
#### File: pyspheregl/demo/search_test.py
```python
import math
import random
from random import randint
import datetime
import numpy as np
import pyglet
from pyglet.gl import *
# sphere stuff
from ..sim.sphere_sim import getshader, resource_file
from ..sim import sphere_sim
from ..sphere import sphere
from ..utils.shader import ShaderVBO, shader_from_file
from ..utils.np_vbo import VBuf, IBuf
from ..utils import transformations as tn
from ..touch.rotater import RotationHandler
class SearchTest(object):
spiral_amount = 256 # Number of spirals to generate
target_distance = 99 # Initial dummy value for target distance
target_threshold = 3.0 # Threshold target in degrees before target timer starts
threshold_flag = True
end_threshold = datetime.datetime.now() + datetime.timedelta(seconds=3600) # Dummy initial value
time_within_ring = 3 # Number of seconds to align dot within ring
align_n = 31 # Dot position to display task dot
target_pt = np.array([[np.radians(180), np.radians(0)]], dtype=np.float) # Position of target on sphere
reset = False
touched_point = False
complete = False
# Generate a new random dot position from spiral amount
def generate_random_dot_pt(self):
t = self.align_n
while t == self.align_n:
t = randint(0, self.spiral_amount-1)
return t
# Check if two points are within some threshold degree of each other
def check_pts(self, xyz, rotated_origin):
if (np.degrees(self.target_distance) < self.target_threshold and np.degrees(self.target_distance) > (-1) * self.target_threshold):
return True
else:
return False
def xyz_to_lonlat(self, xyz):
return sphere.cart_to_polar(xyz[0], xyz[1], xyz[2])
def __init__(self):
self.viewer = sphere_sim.make_viewer(show_touches=True, draw_fn=self.draw,
tick_fn=self.tick,
touch_fn=self.touch)
self.rotater = RotationHandler()
self.point_colour = (1.0, 0.0, 0.0, 1.0)
self.background_colour = (1.0, 1.0, 1.0, 1.0)
# Generate random point
self.pts = np.array(sphere.spiral_layout(256))
self.align_n = self.generate_random_dot_pt()
self.align_pts = self.pts[self.align_n : self.align_n + 1, :]
self.origin = sphere.spherical_to_cartesian(self.pts[self.align_n]) # The point that we start at
self.finger_pts = np.array([[0.0, 0.0]], dtype=np.float)
# Point shader; simple coloured circles, with no spherical correction
point_shader = shader_from_file([getshader("sphere.vert"), getshader("user/point.vert")], [getshader("user/point.frag")])
self.point_vbo = ShaderVBO(point_shader, IBuf(np.arange(len(self.pts))),
buffers={"position":VBuf(self.pts), },
vars={"constant_size":10.0,
},
primitives=GL_POINTS)
# Point (dot) shader and VBO
align_point_shader = shader_from_file([getshader("sphere.vert"), getshader("user/point.vert")], [getshader("user/point.frag")])
self.align_point_vbo = ShaderVBO(align_point_shader, IBuf(np.arange(len(self.align_pts))),
buffers={"position":VBuf(self.align_pts), },
vars={"constant_size":20.0,
},
primitives=GL_POINTS)
self.viewer.start()
def touch(self, events):
for event in events:
xyz = sphere.polar_to_cart(*event.touch.lonlat)
if event.event=="DRAG":
self.rotater.drag(event.touch.id, xyz)
if event.event=="UP":
self.rotater.up(event.touch.id, xyz)
if not self.complete:
self.end_threshold = datetime.datetime.now() + datetime.timedelta(seconds=360000)
self.point_colour = (1.0, 0.0, 0.0, 1.0)
self.touched_point = False
if event.event=="DOWN":
self.rotater.down(event.touch.id, xyz)
self.finger_pts = np.array([self.xyz_to_lonlat(xyz)], dtype=np.float)
if (self.check_pts(xyz, self.rotated_origin)):
self.end_threshold = datetime.datetime.now() + datetime.timedelta(seconds=2)
self.touched_point = True
self.point_colour = (0.0, 1.0, 0.0, 1.0)
def draw(self):
glClearColor(0.0,0.0,0.0,1)
glClear(GL_COLOR_BUFFER_BIT)
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE)
glEnable(GL_POINT_SPRITE)
# Draw the background
self.point_vbo.set_attrib("color", self.background_colour)
self.point_vbo.draw(vars={"quat":self.rotater.orientation})
# self.point_vbo.draw() # Modification
# Draw the single moving target point
self.align_point_vbo.set_attrib("color", self.point_colour)
self.align_point_vbo.draw(vars={"quat":self.rotater.orientation})
# self.align_point_vbo.draw() # Modification
# Compute distance between the points (must be in Cartesian space)
quat = self.rotater.orientation
self.rotated_origin = sphere.rotate_cartesian(quat/np.linalg.norm(quat), np.array(self.origin)) # rotate the points with the quaternion
# Compare distance, again directly in Cartesian space
self.target_distance = sphere.spherical_distance(sphere.cart_to_polar(self.rotated_origin[0], self.rotated_origin[1], -self.rotated_origin[2]) , self.finger_pts[0])
# self.target_distance = sphere.spherical_distance_cartesian(self.rotated_origin, self.target)
def tick(self):
self.rotater.update(1/60.0)
if (datetime.datetime.now() >= self.end_threshold and self.touched_point):
self.touched_point = False
self.complete = True
# Change background dots to indicate success
self.background_colour = (0.0, 1.0, 0.0, 1.0)
self.reset = True
# Set and start timer between iterations
if self.reset:
self.end_threshold = datetime.datetime.now() + datetime.timedelta(seconds=5)
self.reset = False
# Start next iteration
if datetime.datetime.now() >= self.end_threshold:
# Reset flags and dummy value
self.touched_point = False
self.complete = False
self.end_threshold = datetime.datetime.now() + datetime.timedelta(seconds=360000)
# Reset colours and orientation
self.background_colour = (1.0, 1.0, 1.0, 1.0)
self.point_colour = (1.0, 0.0, 0.0, 1.0)
self.rotater.orientation = np.array([0,0,0,1], dtype=np.float)
self.rotater.angular_velocity = np.array([0,0,0,0.0], dtype=np.float)
# Get next target point
self.align_n = self.generate_random_dot_pt()
self.align_pts = self.pts[self.align_n : self.align_n + 1, :]
self.origin = sphere.spherical_to_cartesian(self.pts[self.align_n])
align_point_shader = shader_from_file([getshader("sphere.vert"), getshader("user/point.vert")], [getshader("user/point.frag")])
self.align_point_vbo = ShaderVBO(align_point_shader, IBuf(np.arange(len(self.align_pts))),
buffers={"position":VBuf(self.align_pts), },
vars={"constant_size":20.0,
},
primitives=GL_POINTS)
pass
if __name__=="__main__":
p = SearchTest()
```
#### File: pyspheregl/utils/np_vbo.py
```python
import pyglet
from pyglet.gl import *
import numpy as np
class VBuf:
def __init__(self, buffer, name="", id=-1, divisor=-1, mode=GL_STATIC_DRAW):
self.buffer = create_vbo(buffer, mode=mode)
self.name = name
self.id = id
self.shape = buffer.shape
self.divisor = divisor
self.mode = mode
def set(self, array):
assert(self.shape==array.shape)
self.buffer.set_data(array)
#self.buffer.set_data(array.astype(np.float32).ctypes.data)
def create_vao(vbufs, ibo):
"""
Takes a list of VBufs, and generates
the VAO which attaches all of them and returns it.
"""
# generate a new vao
vao = GLuint()
glGenVertexArrays(1, vao)
glBindVertexArray(vao)
# attach vbos
for vbuf in vbufs:
glEnableVertexAttribArray(vbuf.id)
attach_vbo(vbuf.buffer, vbuf.id)
if vbuf.divisor!=-1:
glVertexAttribDivisor(vbuf.id, vbuf.divisor)
if ibo!=None:
glBindBuffer(ibo.target, ibo.id)
# unbind all buffers
glBindVertexArray(0)
glBindBuffer(GL_ARRAY_BUFFER, 0)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)
return vao
def draw_vao(vao, primitives=GL_QUADS, n_vtxs=0, n_prims=0):
glBindVertexArray(vao)
if n_prims==0:
glDrawElements(primitives, n_vtxs, GL_UNSIGNED_INT, 0)
else:
glDrawElementsInstanced(primitives, n_vtxs, GL_UNSIGNED_INT, 0, n_prims)
glBindVertexArray(0)
# simple VBO wrapper since pyglet's built in object
# seems to be buggy (?)
class VBO:
def __init__(self, data, mode):
vbo = GLuint()
glGenBuffers(1, vbo)
self.target = GL_ARRAY_BUFFER
self.id = vbo
self.bind()
self.mode = mode
# upload the placeholder data
# must be the same shape on subsequent updates!
data = data.astype(np.float32)
glBufferData(self.target, data.nbytes, data.ctypes.data, self.mode)
self.nbytes = data.nbytes
self.vbo_id = vbo
self.shape = data.shape
self.unbind()
def bind(self):
glBindBuffer(self.target, self.id)
def unbind(self):
glBindBuffer(self.target, 0)
def set_data(self, data):
data = data.astype(np.float32)
assert(data.nbytes == self.nbytes and data.shape==self.shape)
self.bind()
glBufferSubData(self.target, 0, data.nbytes, data.ctypes.data)
def create_vbo(arr, mode=GL_STATIC_DRAW):
"""Creates an np.float32/GL_FLOAT buffer from the numpy array arr on the GPU"""
#bo = pyglet.graphics.vertexbuffer.create_buffer(arr.nbytes, GL_ARRAY_BUFFER, mode, vbo=True)
bo = VBO(arr, mode)
return bo
def create_elt_buffer(arr, mode=GL_STATIC_DRAW):
"""Creates an np.uint32/GL_UNSIGNED_INT buffer from the numpy array arr on the GPU"""
arr = arr.astype(np.uint32)
bo = pyglet.graphics.vertexbuffer.create_buffer(arr.nbytes, GL_ELEMENT_ARRAY_BUFFER, mode, vbo=True)
bo.bind()
bo.set_data(arr.ctypes.data)
bo.shape = arr.shape # store shape for later
# unbind
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)
return bo
def IBuf(arr):
return create_elt_buffer(arr)
def attach_vbo(bo, n):
"""Attach a vertex buffer object to attribute pointer n"""
glEnableVertexAttribArray(n)
glBindBuffer(bo.target, bo.id)
# use number of elements in last element of the buffer object
glVertexAttribPointer(n, bo.shape[-1], GL_FLOAT, False, 0, 0)
def draw_elt_buffer(elt_bo, primitives=GL_QUADS):
"""Using the given element buffer, draw the indexed geometry"""
glBindBuffer(elt_bo.target, elt_bo.id)
glDrawElements(primitives, elt_bo.shape[0], GL_UNSIGNED_INT, 0)
# unbind all buffers
glBindBuffer(GL_ARRAY_BUFFER, 0)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)
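# Minimal usage sketch (assumes a current GL context and a shader whose
# attribute 0 is the vertex position):
#
# verts = np.array([[0, 0], [1, 0], [0, 1]], dtype=np.float32)
# vbuf = VBuf(verts, name="position", id=0)
# vao = create_vao([vbuf], IBuf(np.arange(3)))
# draw_vao(vao, primitives=GL_TRIANGLES, n_vtxs=3)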
``` |
{
"source": "josephok/ProjectEuler",
"score": 4
} |
#### File: ProjectEuler/p29/solve.py
```python
def distinct_powers(low = 2, up = 5):
result = set()
for a in range(low, up + 1):
for b in range(low, up + 1):
result.add(a ** b)
return result
# test
# answer is 9183
print(len(distinct_powers(2, 100)))
```
#### File: ProjectEuler/p74/solve.py
```python
import math
def digit_factorial_chains(n=1000000):
number_of_chains = 0
for i in range(1, n):
if chains_length(i) == 60:
number_of_chains += 1
return number_of_chains
def sum_of_factorial(number):
return sum(math.factorial(int(i)) for i in str(number))
def chains_length(number):
chains = set()
chains.add(number)
t = number
while True:
t = sum_of_factorial(t)
if t not in chains:
chains.add(t)
else:
break
return len(chains)
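# Worked example: 69 -> 363600 -> 1454 -> 169 -> 363601 -> (1454 repeats),
# so chains_length(69) == 5 non-repeating terms.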
if __name__ == "__main__":
print(digit_factorial_chains())
```
#### File: ProjectEuler/p80/solve.py
```python
from decimal import Decimal, getcontext
import unittest
getcontext().prec = 105
def sum_of_100_digits(n):
sq_root = str(Decimal(n).sqrt())
if sq_root.find('.') != -1:
temp = sq_root.replace('.', '')[0:100]
return sum(map(int, [i for i in temp]))
return 0
def one_hundred_natural_numbers(n=100):
return sum([sum_of_100_digits(i) for i in range(1, n+1)])
class TestResult(unittest.TestCase):
def test_sum_of_100_digits(self):
self.assertEqual(sum_of_100_digits(2), 475)
def test_one_hundred_natural_numbers(self):
self.assertEqual(one_hundred_natural_numbers(n=100), 40886)
if __name__ == '__main__':
unittest.main()
```
#### File: ProjectEuler/p92/test.py
```python
import unittest
from solve import sum_of_square
class MyTest(unittest.TestCase):
def test_sum_of_square(self):
self.assertEqual(1, sum_of_square(1))
self.assertEqual(4, sum_of_square(2))
self.assertEqual(9, sum_of_square(3))
self.assertEqual(32, sum_of_square(44))
self.assertEqual(89, sum_of_square(85))
if __name__ == '__main__':
unittest.main()
``` |
{
"source": "Josepholaidepetro/tensorflow_job",
"score": 3
} |
#### File: tensorflow_job/churn/tfjobchurn.py
```python
import argparse
import logging
import json
import os
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import numpy as np
import pandas as pd
# splitting the data
from sklearn.model_selection import train_test_split
# Standardization - feature scaling
from sklearn.preprocessing import StandardScaler
# data encoding
from sklearn.preprocessing import LabelEncoder
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, models
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import SGD, Adam, RMSprop
logging.getLogger().setLevel(logging.INFO)
def make_datasets_unbatched():
data = pd.read_csv("https://raw.githubusercontent.com/AdeloreSimiloluwa/Artificial-Neural-Network/master/data/Churn_Modelling.csv")
#preprocessing
X = data.iloc[:, 3:-1]
y = data.iloc[:,-1:]
# encoding country
encoder_X_1= LabelEncoder()
X.iloc[:,1] = encoder_X_1.fit_transform(X.iloc[:,1])
# encoding gender
encoder_X_2= LabelEncoder()
X.iloc[:,2] = encoder_X_2.fit_transform(X.iloc[:,2])
# we would also use the dummy variable because they are norminal variables
dummy = pd.get_dummies(X["Geography"], prefix = ['Geography'],drop_first=True)
X=pd.concat([X,dummy], axis = 1)
X=X.drop(columns = ['Geography'], axis = 1)
# split the data
X_train,X_test,y_train,y_test = train_test_split( X,y, test_size=0.2, random_state = 10)
train_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train))
test_dataset = tf.data.Dataset.from_tensor_slices((X_test, y_test))
train = train_dataset.cache().shuffle(2000).repeat()
return train, test_dataset
def model(args):
model = models.Sequential()
model.add(Dense(units =9, activation='relu', input_dim=11))
model.add(Dense(units =9, activation='relu'))
model.add(Dense(units =1, activation='sigmoid'))
model.summary()
opt = args.optimizer
model.compile(optimizer=opt,
loss='binary_crossentropy',
metrics=['accuracy'])
tf.keras.backend.set_value(model.optimizer.learning_rate, args.learning_rate)
return model
def main(args):
# MultiWorkerMirroredStrategy creates copies of all variables in the model's
# layers on each device across all workers
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
communication=tf.distribute.experimental.CollectiveCommunication.AUTO)
logging.debug(f"num_replicas_in_sync: {strategy.num_replicas_in_sync}")
BATCH_SIZE_PER_REPLICA = args.batch_size
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
# Datasets need to be created after instantiation of `MultiWorkerMirroredStrategy`
train_dataset, test_dataset = make_datasets_unbatched()
train_dataset = train_dataset.batch(batch_size=BATCH_SIZE)
test_dataset = test_dataset.batch(batch_size=BATCH_SIZE)
# See: https://www.tensorflow.org/api_docs/python/tf/data/experimental/DistributeOptions
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = \
tf.data.experimental.AutoShardPolicy.DATA
train_datasets_sharded = train_dataset.with_options(options)
test_dataset_sharded = test_dataset.with_options(options)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = model(args)
# Keras' `model.fit()` trains the model with specified number of epochs and
# number of steps per epoch.
multi_worker_model.fit(train_datasets_sharded,
epochs=50,
steps_per_epoch=30)
eval_loss, eval_acc = multi_worker_model.evaluate(test_dataset_sharded,
verbose=0, steps=10)
# Log metrics for Katib
logging.info("loss={:.4f}".format(eval_loss))
logging.info("accuracy={:.4f}".format(eval_acc))
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size",
type=int,
default=32,
metavar="N",
help="Batch size for training (default: 128)")
parser.add_argument("--learning_rate",
type=float,
default=0.1,
metavar="N",
help='Initial learning rate')
parser.add_argument("--optimizer",
type=str,
default='adam',
metavar="N",
help='optimizer')
parsed_args, _ = parser.parse_known_args()
main(parsed_args)
``` |
{
"source": "josepholasoji/expert-valuator",
"score": 3
} |
#### File: expert-valuator/model_generator/gen.py
```python
import definitions as utils
import json
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing import text as txtProcessor
from tensorflow.keras.preprocessing.text import Tokenizer
def generate():
# Parse and tokenize the library data
tokenizer = Tokenizer(num_words=None, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n', lower=True, split=',', char_level=False, oov_token=None)
tokenizer.fit_on_texts(utils.libraries.split(","))
librarySequence = tokenizer.word_index
# Parse and tokenize the language data
tokenizer = Tokenizer(num_words=None, filters='!"$%&()*,./:;<=>?@[\\]^_`{|}~\t\n', lower=True, split=',', char_level=False, oov_token=None)
tokenizer.fit_on_texts(utils.programming_languages.split(","))
programming_laguage_sequence = tokenizer.word_index
# Parse and tokenize the classification data
tokenizer = Tokenizer(num_words=None, filters='!"$%&()*,./:;<=>?@[\\]^_`{|}~\t\n', lower=True, split=',', char_level=False, oov_token=None)
tokenizer.fit_on_texts(utils.classifications.split(","))
classifications_sequence = tokenizer.word_index
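# Note: `word_index` maps each token to a 1-based integer id ordered by frequency. A tiny
# illustrative example (not the real vocabularies defined in `definitions`):
#
#   t = Tokenizer()
#   t.fit_on_texts(["python", "java", "none"])
#   t.word_index  # -> {"python": 1, "java": 2, "none": 3}
#
# The loops below translate template entries into these integer ids.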
#load the template data
template = (json.loads(utils.template_data))
language_of_interest = []
for language in template['languages']:
language = language.lower()
if language in programming_laguage_sequence:
token = programming_laguage_sequence[language]
else:
token = programming_laguage_sequence["none"]
language_of_interest.append(token)
lib_of_interest = []
for lib in template['technology']:
lib = lib.lower()
if lib in librarySequence:
token = librarySequence[lib]
else:
token = librarySequence["none"]
lib_of_interest.append(token)
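# Note: both lookup loops above fall back to the "none" token for unknown entries; each lookup
# could equivalently be written with dict.get, for example:
#
#   token = librarySequence.get(lib, librarySequence["none"])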
header = ["language",
"library",
"imported_library",
"commit_count",
"days_since_last_commit",
"code_churn",
"classificatioin"]
table = []
table.append(header)
#for disqualified
for entry_data in template['disqualified']:
entry = utils.gen_classification_data(entry_data, librarySequence, programming_laguage_sequence, classifications_sequence)
table+=(entry)
#for expert
for entry_data in template['expert']:
entry = utils.gen_classification_data(entry_data, librarySequence, programming_laguage_sequence, classifications_sequence)
table+=(entry)
#for novice
for entry_data in template['novice']:
entry = utils.gen_classification_data(entry_data, librarySequence, programming_laguage_sequence, classifications_sequence)
table+=(entry)
#for intermediate
for entry_data in template['intermediate']:
entry = utils.gen_classification_data(entry_data, librarySequence, programming_laguage_sequence, classifications_sequence)
table+=(entry)
my_df = pd.DataFrame(table)
my_df.to_csv('truth_template_dataset.csv', index=True)
return 0
``` |
{
"source": "josephoregon/plant-cv",
"score": 2
} |
#### File: plant-cv/tests/tests.py
```python
import pytest
import os
import shutil
import json
import numpy as np
import cv2
import sys
import pandas as pd
from plotnine import ggplot
from plantcv import plantcv as pcv
import plantcv.learn
import plantcv.parallel
import plantcv.utils
# Import matplotlib and use a null Template to block plotting to screen
# This will let us test debug = "plot"
import matplotlib
import dask
from dask.distributed import Client
PARALLEL_TEST_DATA = os.path.join(os.path.dirname(os.path.abspath(__file__)), "parallel_data")
TEST_TMPDIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", ".cache")
TEST_IMG_DIR = "images"
TEST_IMG_DIR2 = "images_w_date"
TEST_SNAPSHOT_DIR = "snapshots"
TEST_PIPELINE = os.path.join(PARALLEL_TEST_DATA, "plantcv-script.py")
META_FIELDS = {"imgtype": 0, "camera": 1, "frame": 2, "zoom": 3, "lifter": 4, "gain": 5, "exposure": 6, "id": 7}
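# META_FIELDS maps each metadata term to its position in an underscore-delimited filename, so
# the test image "VIS_SV_0_z1_h1_g0_e82_117770.jpg" parses as imgtype=VIS, camera=SV, frame=0,
# zoom=z1, lifter=h1, gain=g0, exposure=e82, id=117770 (cf. METADATA_VIS_ONLY below).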
VALID_META = {
# Camera settings
"camera": {
"label": "camera identifier",
"datatype": "<class 'str'>",
"value": "none"
},
"imgtype": {
"label": "image type",
"datatype": "<class 'str'>",
"value": "none"
},
"zoom": {
"label": "camera zoom setting",
"datatype": "<class 'str'>",
"value": "none"
},
"exposure": {
"label": "camera exposure setting",
"datatype": "<class 'str'>",
"value": "none"
},
"gain": {
"label": "camera gain setting",
"datatype": "<class 'str'>",
"value": "none"
},
"frame": {
"label": "image series frame identifier",
"datatype": "<class 'str'>",
"value": "none"
},
"lifter": {
"label": "imaging platform height setting",
"datatype": "<class 'str'>",
"value": "none"
},
# Date-Time
"timestamp": {
"label": "datetime of image",
"datatype": "<class 'datetime.datetime'>",
"value": None
},
# Sample attributes
"id": {
"label": "image identifier",
"datatype": "<class 'str'>",
"value": "none"
},
"plantbarcode": {
"label": "plant barcode identifier",
"datatype": "<class 'str'>",
"value": "none"
},
"treatment": {
"label": "treatment identifier",
"datatype": "<class 'str'>",
"value": "none"
},
"cartag": {
"label": "plant carrier identifier",
"datatype": "<class 'str'>",
"value": "none"
},
# Experiment attributes
"measurementlabel": {
"label": "experiment identifier",
"datatype": "<class 'str'>",
"value": "none"
},
# Other
"other": {
"label": "other identifier",
"datatype": "<class 'str'>",
"value": "none"
}
}
METADATA_COPROCESS = {
'VIS_SV_0_z1_h1_g0_e82_117770.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'snapshots', 'snapshot57383', 'VIS_SV_0_z1_h1_g0_e82_117770.jpg'),
'camera': 'SV',
'imgtype': 'VIS',
'zoom': 'z1',
'exposure': 'e82',
'gain': 'g0',
'frame': '0',
'lifter': 'h1',
'timestamp': '2014-10-22 17:49:35.187',
'id': '117770',
'plantbarcode': 'Ca031AA010564',
'treatment': 'none',
'cartag': '2143',
'measurementlabel': 'C002ch_092214_biomass',
'other': 'none',
'coimg': 'NIR_SV_0_z1_h1_g0_e65_117779.jpg'
},
'NIR_SV_0_z1_h1_g0_e65_117779.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'snapshots', 'snapshot57383', 'NIR_SV_0_z1_h1_g0_e65_117779.jpg'),
'camera': 'SV',
'imgtype': 'NIR',
'zoom': 'z1',
'exposure': 'e65',
'gain': 'g0',
'frame': '0',
'lifter': 'h1',
'timestamp': '2014-10-22 17:49:35.187',
'id': '117779',
'plantbarcode': 'Ca031AA010564',
'treatment': 'none',
'cartag': '2143',
'measurementlabel': 'C002ch_092214_biomass',
'other': 'none'
}
}
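# METADATA_COPROCESS differs from METADATA_VIS_ONLY below in that its VIS entry carries a
# 'coimg' key naming the co-acquired NIR image; the metadata parser adds this pairing when
# config.coprocess is set (exercised by the coprocess tests further down).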
METADATA_VIS_ONLY = {
'VIS_SV_0_z1_h1_g0_e82_117770.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'snapshots', 'snapshot57383', 'VIS_SV_0_z1_h1_g0_e82_117770.jpg'),
'camera': 'SV',
'imgtype': 'VIS',
'zoom': 'z1',
'exposure': 'e82',
'gain': 'g0',
'frame': '0',
'lifter': 'h1',
'timestamp': '2014-10-22 17:49:35.187',
'id': '117770',
'plantbarcode': 'Ca031AA010564',
'treatment': 'none',
'cartag': '2143',
'measurementlabel': 'C002ch_092214_biomass',
'other': 'none'
}
}
METADATA_NIR_ONLY = {
'NIR_SV_0_z1_h1_g0_e65_117779.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'snapshots', 'snapshot57383', 'NIR_SV_0_z1_h1_g0_e65_117779.jpg'),
'camera': 'SV',
'imgtype': 'NIR',
'zoom': 'z1',
'exposure': 'e65',
'gain': 'g0',
'frame': '0',
'lifter': 'h1',
'timestamp': '2014-10-22 17:49:35.187',
'id': '117779',
'plantbarcode': 'Ca031AA010564',
'treatment': 'none',
'cartag': '2143',
'measurementlabel': 'C002ch_092214_biomass',
'other': 'none'
}
}
# Set the temp directory for dask
dask.config.set(temporary_directory=TEST_TMPDIR)
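# Keeping dask's temporary_directory inside TEST_TMPDIR places any worker spill/scratch files
# alongside the other test artifacts rather than in the system temp directory.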
# ##########################
# Tests setup function
# ##########################
def setup_function():
if not os.path.exists(TEST_TMPDIR):
os.mkdir(TEST_TMPDIR)
# ##############################
# Tests for the parallel subpackage
# ##############################
def test_plantcv_parallel_workflowconfig_save_config_file():
# Create a test tmp directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_workflowconfig_save_config_file")
os.mkdir(cache_dir)
# Define output path/filename
template_file = os.path.join(cache_dir, "config.json")
# Create config instance
config = plantcv.parallel.WorkflowConfig()
# Save template file
config.save_config(config_file=template_file)
assert os.path.exists(template_file)
def test_plantcv_parallel_workflowconfig_import_config_file():
# Define input path/filename
config_file = os.path.join(PARALLEL_TEST_DATA, "workflow_config_template.json")
# Create config instance
config = plantcv.parallel.WorkflowConfig()
# import config file
config.import_config(config_file=config_file)
assert config.cluster == "LocalCluster"
def test_plantcv_parallel_workflowconfig_validate_config():
# Create a test tmp directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_workflowconfig_validate_config")
os.mkdir(cache_dir)
# Create config instance
config = plantcv.parallel.WorkflowConfig()
# Set valid values in config
config.input_dir = os.path.join(PARALLEL_TEST_DATA, "images")
config.json = os.path.join(cache_dir, "valid_config.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.img_outdir = cache_dir
# Validate config
assert config.validate_config()
def test_plantcv_parallel_workflowconfig_invalid_startdate():
# Create a test tmp directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_workflowconfig_invalid_startdate")
os.mkdir(cache_dir)
# Create config instance
config = plantcv.parallel.WorkflowConfig()
# Set valid values in config
config.input_dir = os.path.join(PARALLEL_TEST_DATA, "images")
config.json = os.path.join(cache_dir, "valid_config.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.img_outdir = cache_dir
config.start_date = "2020-05-10"
# Validate config
assert not config.validate_config()
def test_plantcv_parallel_workflowconfig_invalid_enddate():
# Create a test tmp directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_workflowconfig_invalid_enddate")
os.mkdir(cache_dir)
# Create config instance
config = plantcv.parallel.WorkflowConfig()
# Set valid values in config
config.input_dir = os.path.join(PARALLEL_TEST_DATA, "images")
config.json = os.path.join(cache_dir, "valid_config.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.img_outdir = cache_dir
config.end_date = "2020-05-10"
config.timestampformat = "%Y%m%d"
# Validate config
assert not config.validate_config()
def test_plantcv_parallel_workflowconfig_invalid_metadata_terms():
# Create a test tmp directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_workflowconfig_invalid_metadata_terms")
os.mkdir(cache_dir)
# Create config instance
config = plantcv.parallel.WorkflowConfig()
# Set invalid values in config
# input_dir and json are not defined by default, but are required
# Set an incorrect metadata term
config.filename_metadata.append("invalid")
# Validate config
assert not config.validate_config()
def test_plantcv_parallel_workflowconfig_invalid_filename_metadata():
# Create a test tmp directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_workflowconfig_invalid_filename_metadata")
os.mkdir(cache_dir)
# Create config instance
config = plantcv.parallel.WorkflowConfig()
# Set invalid values in config
# input_dir and json are not defined by default, but are required
# Do not set required filename_metadata
# Validate config
assert not config.validate_config()
def test_plantcv_parallel_workflowconfig_invalid_cluster():
# Create a test tmp directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_workflowconfig_invalid_cluster")
os.mkdir(cache_dir)
# Create config instance
config = plantcv.parallel.WorkflowConfig()
# Set invalid values in config
# input_dir and json are not defined by default, but are required
# Set invalid cluster type
config.cluster = "MyCluster"
# Validate config
assert not config.validate_config()
def test_plantcv_parallel_metadata_parser_snapshots():
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
config.json = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_metadata_parser_snapshots", "output.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.metadata_filters = {"imgtype": "VIS", "camera": "SV"}
config.start_date = "2014-10-21 00:00:00.0"
config.end_date = "2014-10-23 00:00:00.0"
config.timestampformat = '%Y-%m-%d %H:%M:%S.%f'
config.imgformat = "jpg"
meta = plantcv.parallel.metadata_parser(config=config)
assert meta == METADATA_VIS_ONLY
def test_plantcv_parallel_metadata_parser_snapshots_coimg():
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
config.json = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_metadata_parser_snapshots_coimg", "output.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.metadata_filters = {"imgtype": "VIS"}
config.start_date = "2014-10-21 00:00:00.0"
config.end_date = "2014-10-23 00:00:00.0"
config.timestampformat = '%Y-%m-%d %H:%M:%S.%f'
config.imgformat = "jpg"
config.coprocess = "FAKE"
meta = plantcv.parallel.metadata_parser(config=config)
assert meta == METADATA_VIS_ONLY
def test_plantcv_parallel_metadata_parser_images():
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_IMG_DIR)
config.json = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_metadata_parser_images", "output.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.metadata_filters = {"imgtype": "VIS"}
config.start_date = "2014"
config.end_date = "2014"
config.timestampformat = '%Y'  # no date in the filename, so the date-range check and date format are ignored
config.imgformat = "jpg"
meta = plantcv.parallel.metadata_parser(config=config)
expected = {
'VIS_SV_0_z1_h1_g0_e82_117770.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'images', 'VIS_SV_0_z1_h1_g0_e82_117770.jpg'),
'camera': 'SV',
'imgtype': 'VIS',
'zoom': 'z1',
'exposure': 'e82',
'gain': 'g0',
'frame': '0',
'lifter': 'h1',
'timestamp': None,
'id': '117770',
'plantbarcode': 'none',
'treatment': 'none',
'cartag': 'none',
'measurementlabel': 'none',
'other': 'none'}
}
assert meta == expected
config.include_all_subdirs = False
meta = plantcv.parallel.metadata_parser(config=config)
assert meta == expected
def test_plantcv_parallel_metadata_parser_regex():
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_IMG_DIR)
config.json = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_metadata_parser_images", "output.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.metadata_filters = {"imgtype": "VIS"}
config.start_date = "2014-10-21 00:00:00.0"
config.end_date = "2014-10-23 00:00:00.0"
config.timestampformat = '%Y-%m-%d %H:%M:%S.%f'
config.imgformat = "jpg"
config.delimiter = r'(VIS)_(SV)_(\d+)_(z1)_(h1)_(g0)_(e82)_(\d+)'
meta = plantcv.parallel.metadata_parser(config=config)
expected = {
'VIS_SV_0_z1_h1_g0_e82_117770.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'images', 'VIS_SV_0_z1_h1_g0_e82_117770.jpg'),
'camera': 'SV',
'imgtype': 'VIS',
'zoom': 'z1',
'exposure': 'e82',
'gain': 'g0',
'frame': '0',
'lifter': 'h1',
'timestamp': None,
'id': '117770',
'plantbarcode': 'none',
'treatment': 'none',
'cartag': 'none',
'measurementlabel': 'none',
'other': 'none'}
}
assert meta == expected
def test_plantcv_parallel_metadata_parser_images_outside_daterange():
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_IMG_DIR2)
config.json = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_metadata_parser_images_outside_daterange",
"output.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "timestamp"]
config.workflow = TEST_PIPELINE
config.metadata_filters = {"imgtype": "NIR"}
config.start_date = "1970-01-01 00_00_00"
config.end_date = "1970-01-01 00_00_00"
config.timestampformat = "%Y-%m-%d %H_%M_%S"
config.imgformat = "jpg"
config.delimiter = r"(NIR)_(SV)_(\d)_(z1)_(h1)_(g0)_(e65)_(\d{4}-\d{2}-\d{2} \d{2}_\d{2}_\d{2})"
meta = plantcv.parallel.metadata_parser(config=config)
assert meta == {}
def test_plantcv_parallel_metadata_parser_no_default_dates():
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
config.json = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_metadata_parser_no_default_dates", "output.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.metadata_filters = {"imgtype": "VIS", "camera": "SV", "id": "117770"}
config.start_date = None
config.end_date = None
config.timestampformat = '%Y-%m-%d %H:%M:%S.%f'
config.imgformat = "jpg"
meta = plantcv.parallel.metadata_parser(config=config)
assert meta == METADATA_VIS_ONLY
def test_plantcv_parallel_check_date_range_wrongdateformat():
start_date = 10
end_date = 10
img_time = '2010-10-10'
with pytest.raises(SystemExit, match=r'does not match format'):
date_format = '%Y%m%d'
_ = plantcv.parallel.check_date_range(
start_date, end_date, img_time, date_format)
def test_plantcv_parallel_metadata_parser_snapshot_outside_daterange():
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
config.json = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_metadata_parser_snapshot_outside_daterange",
"output.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.metadata_filters = {"imgtype": "VIS"}
config.start_date = "1970-01-01 00:00:00.0"
config.end_date = "1970-01-01 00:00:00.0"
config.timestampformat = '%Y-%m-%d %H:%M:%S.%f'
config.imgformat = "jpg"
meta = plantcv.parallel.metadata_parser(config=config)
assert meta == {}
def test_plantcv_parallel_metadata_parser_fail_images():
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
config.json = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_metadata_parser_fail_images", "output.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.metadata_filters = {"cartag": "VIS"}
config.start_date = "1970-01-01 00:00:00.0"
config.end_date = "1970-01-01 00:00:00.0"
config.timestampformat = '%Y-%m-%d %H:%M:%S.%f'
config.imgformat = "jpg"
config.coprocess = "NIR"
meta = plantcv.parallel.metadata_parser(config=config)
assert meta == METADATA_NIR_ONLY
def test_plantcv_parallel_metadata_parser_images_with_frame():
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
config.json = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_metadata_parser_images_with_frame", "output.json")
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.metadata_filters = {"imgtype": "VIS"}
config.start_date = "2014-10-21 00:00:00.0"
config.end_date = "2014-10-23 00:00:00.0"
config.timestampformat = '%Y-%m-%d %H:%M:%S.%f'
config.imgformat = "jpg"
config.coprocess = "NIR"
meta = plantcv.parallel.metadata_parser(config=config)
assert meta == {
'VIS_SV_0_z1_h1_g0_e82_117770.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'snapshots', 'snapshot57383', 'VIS_SV_0_z1_h1_g0_e82_117770.jpg'),
'camera': 'SV',
'imgtype': 'VIS',
'zoom': 'z1',
'exposure': 'e82',
'gain': 'g0',
'frame': '0',
'lifter': 'h1',
'timestamp': '2014-10-22 17:49:35.187',
'id': '117770',
'plantbarcode': 'Ca031AA010564',
'treatment': 'none',
'cartag': '2143',
'measurementlabel': 'C002ch_092214_biomass',
'other': 'none',
'coimg': 'NIR_SV_0_z1_h1_g0_e65_117779.jpg'
},
'NIR_SV_0_z1_h1_g0_e65_117779.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'snapshots', 'snapshot57383', 'NIR_SV_0_z1_h1_g0_e65_117779.jpg'),
'camera': 'SV',
'imgtype': 'NIR',
'zoom': 'z1',
'exposure': 'e65',
'gain': 'g0',
'frame': '0',
'lifter': 'h1',
'timestamp': '2014-10-22 17:49:35.187',
'id': '117779',
'plantbarcode': 'Ca031AA010564',
'treatment': 'none',
'cartag': '2143',
'measurementlabel': 'C002ch_092214_biomass',
'other': 'none'
}
}
def test_plantcv_parallel_metadata_parser_images_no_frame():
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
config.json = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_metadata_parser_images_no_frame",
"output.json")
config.filename_metadata = ["imgtype", "camera", "X", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.metadata_filters = {"imgtype": "VIS"}
config.start_date = "2014-10-21 00:00:00.0"
config.end_date = "2014-10-23 00:00:00.0"
config.timestampformat = '%Y-%m-%d %H:%M:%S.%f'
config.imgformat = "jpg"
config.coprocess = "NIR"
meta = plantcv.parallel.metadata_parser(config=config)
assert meta == {
'VIS_SV_0_z1_h1_g0_e82_117770.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'snapshots', 'snapshot57383', 'VIS_SV_0_z1_h1_g0_e82_117770.jpg'),
'camera': 'SV',
'imgtype': 'VIS',
'zoom': 'z1',
'exposure': 'e82',
'gain': 'g0',
'frame': 'none',
'lifter': 'h1',
'timestamp': '2014-10-22 17:49:35.187',
'id': '117770',
'plantbarcode': 'Ca031AA010564',
'treatment': 'none',
'cartag': '2143',
'measurementlabel': 'C002ch_092214_biomass',
'other': 'none',
'coimg': 'NIR_SV_0_z1_h1_g0_e65_117779.jpg'
},
'NIR_SV_0_z1_h1_g0_e65_117779.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'snapshots', 'snapshot57383', 'NIR_SV_0_z1_h1_g0_e65_117779.jpg'),
'camera': 'SV',
'imgtype': 'NIR',
'zoom': 'z1',
'exposure': 'e65',
'gain': 'g0',
'frame': 'none',
'lifter': 'h1',
'timestamp': '2014-10-22 17:49:35.187',
'id': '117779',
'plantbarcode': 'Ca031AA010564',
'treatment': 'none',
'cartag': '2143',
'measurementlabel': 'C002ch_092214_biomass',
'other': 'none'
}
}
def test_plantcv_parallel_metadata_parser_images_no_camera():
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
config.json = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_metadata_parser_images_no_frame", "output.json")
config.filename_metadata = ["imgtype", "X", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.metadata_filters = {"imgtype": "VIS"}
config.start_date = "2014-10-21 00:00:00.0"
config.end_date = "2014-10-23 00:00:00.0"
config.timestampformat = '%Y-%m-%d %H:%M:%S.%f'
config.imgformat = "jpg"
config.coprocess = "NIR"
meta = plantcv.parallel.metadata_parser(config=config)
assert meta == {
'VIS_SV_0_z1_h1_g0_e82_117770.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'snapshots', 'snapshot57383', 'VIS_SV_0_z1_h1_g0_e82_117770.jpg'),
'camera': 'none',
'imgtype': 'VIS',
'zoom': 'z1',
'exposure': 'e82',
'gain': 'g0',
'frame': '0',
'lifter': 'h1',
'timestamp': '2014-10-22 17:49:35.187',
'id': '117770',
'plantbarcode': 'Ca031AA010564',
'treatment': 'none',
'cartag': '2143',
'measurementlabel': 'C002ch_092214_biomass',
'other': 'none',
'coimg': 'NIR_SV_0_z1_h1_g0_e65_117779.jpg'
},
'NIR_SV_0_z1_h1_g0_e65_117779.jpg': {
'path': os.path.join(PARALLEL_TEST_DATA, 'snapshots', 'snapshot57383', 'NIR_SV_0_z1_h1_g0_e65_117779.jpg'),
'camera': 'none',
'imgtype': 'NIR',
'zoom': 'z1',
'exposure': 'e65',
'gain': 'g0',
'frame': '0',
'lifter': 'h1',
'timestamp': '2014-10-22 17:49:35.187',
'id': '117779',
'plantbarcode': 'Ca031AA010564',
'treatment': 'none',
'cartag': '2143',
'measurementlabel': 'C002ch_092214_biomass',
'other': 'none'
}
}
def test_plantcv_parallel_job_builder_single_image():
# Create cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_job_builder_single_image")
os.mkdir(cache_dir)
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
config.json = os.path.join(cache_dir, "output.json")
config.tmp_dir = cache_dir
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.img_outdir = cache_dir
config.metadata_filters = {"imgtype": "VIS", "camera": "SV"}
config.start_date = "2014-10-21 00:00:00.0"
config.end_date = "2014-10-23 00:00:00.0"
config.timestampformat = '%Y-%m-%d %H:%M:%S.%f'
config.imgformat = "jpg"
config.other_args = ["--other", "on"]
config.writeimg = True
jobs = plantcv.parallel.job_builder(meta=METADATA_VIS_ONLY, config=config)
image_name = list(METADATA_VIS_ONLY.keys())[0]
result_file = os.path.join(cache_dir, image_name + '.txt')
expected = ['python', TEST_PIPELINE, '--image', METADATA_VIS_ONLY[image_name]['path'], '--outdir',
cache_dir, '--result', result_file, '--writeimg', '--other', 'on']
if len(expected) != len(jobs[0]):
assert False
else:
assert all(i == j for i, j in zip(jobs[0], expected))
def test_plantcv_parallel_job_builder_coprocess():
# Create cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_job_builder_coprocess")
os.mkdir(cache_dir)
# Create config instance
config = plantcv.parallel.WorkflowConfig()
config.input_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
config.json = os.path.join(cache_dir, "output.json")
config.tmp_dir = cache_dir
config.filename_metadata = ["imgtype", "camera", "frame", "zoom", "lifter", "gain", "exposure", "id"]
config.workflow = TEST_PIPELINE
config.img_outdir = cache_dir
config.metadata_filters = {"imgtype": "VIS", "camera": "SV"}
config.start_date = "2014-10-21 00:00:00.0"
config.end_date = "2014-10-23 00:00:00.0"
config.timestampformat = '%Y-%m-%d %H:%M:%S.%f'
config.imgformat = "jpg"
config.other_args = ["--other", "on"]
config.writeimg = True
config.coprocess = "NIR"
jobs = plantcv.parallel.job_builder(meta=METADATA_COPROCESS, config=config)
img_names = list(METADATA_COPROCESS.keys())
vis_name = img_names[0]
vis_path = METADATA_COPROCESS[vis_name]['path']
result_file = os.path.join(cache_dir, vis_name + '.txt')
nir_name = img_names[1]
coresult_file = os.path.join(cache_dir, nir_name + '.txt')
expected = ['python', TEST_PIPELINE, '--image', vis_path, '--outdir', cache_dir, '--result', result_file,
'--coresult', coresult_file, '--writeimg', '--other', 'on']
if len(expected) != len(jobs[0]):
assert False
else:
assert all(i == j for i, j in zip(jobs[0], expected))
def test_plantcv_parallel_multiprocess_create_dask_cluster_local():
client = plantcv.parallel.create_dask_cluster(cluster="LocalCluster", cluster_config={})
status = client.status
client.shutdown()
assert status == "running"
def test_plantcv_parallel_multiprocess_create_dask_cluster():
client = plantcv.parallel.create_dask_cluster(cluster="HTCondorCluster", cluster_config={"cores": 1,
"memory": "1GB",
"disk": "1GB"})
status = client.status
client.shutdown()
assert status == "running"
def test_plantcv_parallel_multiprocess_create_dask_cluster_invalid_cluster():
with pytest.raises(ValueError):
_ = plantcv.parallel.create_dask_cluster(cluster="Skynet", cluster_config={})
def test_plantcv_parallel_convert_datetime_to_unixtime():
unix_time = plantcv.parallel.convert_datetime_to_unixtime(timestamp_str="1970-01-01", date_format="%Y-%m-%d")
assert unix_time == 0
def test_plantcv_parallel_convert_datetime_to_unixtime_bad_strptime():
with pytest.raises(SystemExit):
_ = plantcv.parallel.convert_datetime_to_unixtime(timestamp_str="1970-01-01", date_format="%Y-%m")
def test_plantcv_parallel_multiprocess():
image_name = list(METADATA_VIS_ONLY.keys())[0]
image_path = os.path.join(METADATA_VIS_ONLY[image_name]['path'], image_name)
result_file = os.path.join(TEST_TMPDIR, image_name + '.txt')
jobs = [['python', TEST_PIPELINE, '--image', image_path, '--outdir', TEST_TMPDIR, '--result', result_file,
'--writeimg', '--other', 'on']]
# Create a dask LocalCluster client
client = Client(n_workers=1)
plantcv.parallel.multiprocess(jobs, client=client)
assert os.path.exists(result_file)
def test_plantcv_parallel_process_results():
# Create a test tmp directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_process_results")
os.mkdir(cache_dir)
plantcv.parallel.process_results(job_dir=os.path.join(PARALLEL_TEST_DATA, "results"),
json_file=os.path.join(cache_dir, 'appended_results.json'))
plantcv.parallel.process_results(job_dir=os.path.join(PARALLEL_TEST_DATA, "results"),
json_file=os.path.join(cache_dir, 'appended_results.json'))
# Assert that the output JSON file matches the expected output JSON file
result_file = open(os.path.join(cache_dir, "appended_results.json"), "r")
results = json.load(result_file)
result_file.close()
expected_file = open(os.path.join(PARALLEL_TEST_DATA, "appended_results.json"))
expected = json.load(expected_file)
expected_file.close()
assert results == expected
def test_plantcv_parallel_process_results_new_output():
# Create a test tmp directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_process_results_new_output")
os.mkdir(cache_dir)
plantcv.parallel.process_results(job_dir=os.path.join(PARALLEL_TEST_DATA, "results"),
json_file=os.path.join(cache_dir, 'new_result.json'))
# Assert output matches expected values
result_file = open(os.path.join(cache_dir, "new_result.json"), "r")
results = json.load(result_file)
result_file.close()
expected_file = open(os.path.join(PARALLEL_TEST_DATA, "new_result.json"))
expected = json.load(expected_file)
expected_file.close()
assert results == expected
def test_plantcv_parallel_process_results_valid_json():
# Test when the file is a valid json file but doesn't contain expected keys
with pytest.raises(RuntimeError):
plantcv.parallel.process_results(job_dir=os.path.join(PARALLEL_TEST_DATA, "results"),
json_file=os.path.join(PARALLEL_TEST_DATA, "valid.json"))
def test_plantcv_parallel_process_results_invalid_json():
# Create a test tmp directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_parallel_process_results_invalid_json")
os.mkdir(cache_dir)
# Move the test data to the tmp directory
shutil.copytree(os.path.join(PARALLEL_TEST_DATA, "bad_results"), os.path.join(cache_dir, "bad_results"))
with pytest.raises(RuntimeError):
plantcv.parallel.process_results(job_dir=os.path.join(cache_dir, "bad_results"),
json_file=os.path.join(cache_dir, "bad_results", "invalid.txt"))
# ####################################################################################################################
# ########################################### PLANTCV MAIN PACKAGE ###################################################
matplotlib.use('Template')
TEST_DATA = os.path.join(os.path.dirname(os.path.abspath(__file__)), "data")
HYPERSPECTRAL_TEST_DATA = os.path.join(os.path.dirname(os.path.abspath(__file__)), "hyperspectral_data")
HYPERSPECTRAL_DATA = "darkReference"
HYPERSPECTRAL_WHITE = "darkReference_whiteReference"
HYPERSPECTRAL_DARK = "darkReference_darkReference"
HYPERSPECTRAL_HDR = "darkReference.hdr"
HYPERSPECTRAL_MASK = "darkReference_mask.png"
HYPERSPECTRAL_DATA_NO_DEFAULT = "darkReference2"
HYPERSPECTRAL_HDR_NO_DEFAULT = "darkReference2.hdr"
HYPERSPECTRAL_DATA_APPROX_PSEUDO = "darkReference3"
HYPERSPECTRAL_HDR_APPROX_PSEUDO = "darkReference3.hdr"
HYPERSPECTRAL_HDR_SMALL_RANGE = {'description': '{[HEADWALL Hyperspec III]}', 'samples': '800', 'lines': '1',
'bands': '978', 'header offset': '0', 'file type': 'ENVI Standard',
'interleave': 'bil', 'sensor type': 'Unknown', 'byte order': '0',
'default bands': '159,253,520', 'wavelength units': 'nm',
'wavelength': ['379.027', '379.663', '380.3', '380.936', '381.573', '382.209']}
FLUOR_TEST_DATA = os.path.join(os.path.dirname(os.path.abspath(__file__)), "photosynthesis_data")
FLUOR_IMG = "PSII_PSD_supopt_temp_btx623_22_rep1.DAT"
TEST_COLOR_DIM = (2056, 2454, 3)
TEST_GRAY_DIM = (2056, 2454)
TEST_BINARY_DIM = TEST_GRAY_DIM
TEST_INPUT_COLOR = "input_color_img.jpg"
TEST_INPUT_GRAY = "input_gray_img.jpg"
TEST_INPUT_GRAY_SMALL = "input_gray_img_small.jpg"
TEST_INPUT_BINARY = "input_binary_img.png"
# Image from http://www.libpng.org/pub/png/png-OwlAlpha.html
# This image may be used, edited and reproduced freely.
TEST_INPUT_RGBA = "input_rgba.png"
TEST_INPUT_BAYER = "bayer_img.png"
TEST_INPUT_ROI_CONTOUR = "input_roi_contour.npz"
TEST_INPUT_ROI_HIERARCHY = "input_roi_hierarchy.npz"
TEST_INPUT_CONTOURS = "input_contours.npz"
TEST_INPUT_OBJECT_CONTOURS = "input_object_contours.npz"
TEST_INPUT_OBJECT_HIERARCHY = "input_object_hierarchy.npz"
TEST_VIS = "VIS_SV_0_z300_h1_g0_e85_v500_93054.png"
TEST_NIR = "NIR_SV_0_z300_h1_g0_e15000_v500_93059.png"
TEST_VIS_TV = "VIS_TV_0_z300_h1_g0_e85_v500_93054.png"
TEST_NIR_TV = "NIR_TV_0_z300_h1_g0_e15000_v500_93059.png"
TEST_INPUT_MASK = "input_mask_binary.png"
TEST_INPUT_MASK_OOB = "mask_outbounds.png"
TEST_INPUT_MASK_RESIZE = "input_mask_resize.png"
TEST_INPUT_NIR_MASK = "input_nir.png"
TEST_INPUT_FDARK = "FLUO_TV_dark.png"
TEST_INPUT_FDARK_LARGE = "FLUO_TV_DARK_large"
TEST_INPUT_FMIN = "FLUO_TV_min.png"
TEST_INPUT_FMAX = "FLUO_TV_max.png"
TEST_INPUT_FMASK = "FLUO_TV_MASK.png"
TEST_INPUT_GREENMAG = "input_green-magenta.jpg"
TEST_INPUT_MULTI = "multi_ori_image.jpg"
TEST_INPUT_MULTI_MASK = "multi_ori_mask.jpg"
TEST_INPUT_MULTI_OBJECT = "roi_objects.npz"
TEST_INPUT_MULTI_CONTOUR = "multi_contours.npz"
TEST_INPUT_ClUSTER_CONTOUR = "clusters_i.npz"
TEST_INPUT_MULTI_HIERARCHY = "multi_hierarchy.npz"
TEST_INPUT_VISUALIZE_CONTOUR = "roi_objects_visualize.npz"
TEST_INPUT_VISUALIZE_HIERARCHY = "roi_obj_hierarchy_visualize.npz"
TEST_INPUT_VISUALIZE_CLUSTERS = "clusters_i_visualize.npz"
TEST_INPUT_VISUALIZE_BACKGROUND = "visualize_background_img.png"
TEST_INPUT_GENOTXT = "cluster_names.txt"
TEST_INPUT_GENOTXT_TOO_MANY = "cluster_names_too_many.txt"
TEST_INPUT_CROPPED = 'cropped_img.jpg'
TEST_INPUT_CROPPED_MASK = 'cropped-mask.png'
TEST_INPUT_MARKER = 'seed-image.jpg'
TEST_INPUT_SKELETON = 'input_skeleton.png'
TEST_INPUT_SKELETON_PRUNED = 'input_pruned_skeleton.png'
TEST_FOREGROUND = "TEST_FOREGROUND.jpg"
TEST_BACKGROUND = "TEST_BACKGROUND.jpg"
TEST_PDFS = "naive_bayes_pdfs.txt"
TEST_PDFS_BAD = "naive_bayes_pdfs_bad.txt"
TEST_VIS_SMALL = "setaria_small_vis.png"
TEST_MASK_SMALL = "setaria_small_mask.png"
TEST_VIS_COMP_CONTOUR = "setaria_composed_contours.npz"
TEST_ACUTE_RESULT = np.asarray([[[119, 285]], [[151, 280]], [[168, 267]], [[168, 262]], [[171, 261]], [[224, 269]],
[[246, 271]], [[260, 277]], [[141, 248]], [[183, 194]], [[188, 237]], [[173, 240]],
[[186, 260]], [[147, 244]], [[163, 246]], [[173, 268]], [[170, 272]], [[151, 320]],
[[195, 289]], [[228, 272]], [[210, 272]], [[209, 247]], [[210, 232]]])
TEST_VIS_SMALL_PLANT = "setaria_small_plant_vis.png"
TEST_MASK_SMALL_PLANT = "setaria_small_plant_mask.png"
TEST_VIS_COMP_CONTOUR_SMALL_PLANT = "setaria_small_plant_composed_contours.npz"
TEST_SAMPLED_RGB_POINTS = "sampled_rgb_points.txt"
TEST_TARGET_IMG = "target_img.png"
TEST_TARGET_IMG_WITH_HEXAGON = "target_img_w_hexagon.png"
TEST_TARGET_IMG_TRIANGLE = "target_img copy.png"
TEST_SOURCE1_IMG = "source1_img.png"
TEST_SOURCE2_IMG = "source2_img.png"
TEST_TARGET_MASK = "mask_img.png"
TEST_TARGET_IMG_COLOR_CARD = "color_card_target.png"
TEST_SOURCE2_MASK = "mask2_img.png"
TEST_TARGET_MATRIX = "target_matrix.npz"
TEST_SOURCE1_MATRIX = "source1_matrix.npz"
TEST_SOURCE2_MATRIX = "source2_matrix.npz"
TEST_MATRIX_B1 = "matrix_b1.npz"
TEST_MATRIX_B2 = "matrix_b2.npz"
TEST_TRANSFORM1 = "transformation_matrix1.npz"
TEST_MATRIX_M1 = "matrix_m1.npz"
TEST_MATRIX_M2 = "matrix_m2.npz"
TEST_S1_CORRECTED = "source_corrected.png"
TEST_SKELETON_OBJECTS = "skeleton_objects.npz"
TEST_SKELETON_HIERARCHIES = "skeleton_hierarchies.npz"
TEST_THERMAL_ARRAY = "thermal_img.npz"
TEST_THERMAL_IMG_MASK = "thermal_img_mask.png"
TEST_INPUT_THERMAL_CSV = "FLIR2600.csv"
PIXEL_VALUES = "pixel_inspector_rgb_values.txt"
# ##########################
# Tests for the main package
# ##########################
@pytest.mark.parametrize("debug", ["print", "plot"])
def test_plantcv_debug(debug, tmpdir):
from plantcv.plantcv._debug import _debug
# Create a test tmp directory
img_outdir = tmpdir.mkdir("sub")
pcv.params.debug = debug
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
_debug(visual=img, filename=os.path.join(img_outdir, TEST_INPUT_COLOR))
assert True
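# The parametrize decorator above runs test_plantcv_debug once per debug mode; the "plot" case
# only works headlessly because matplotlib was switched to the non-interactive 'Template'
# backend at module import (see matplotlib.use('Template') in the main-package section).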
@pytest.mark.parametrize("datatype,value", [[list, []], [int, 2], [float, 2.2], [bool, True], [str, "2"], [dict, {}],
[tuple, ()], [None, None]])
def test_plantcv_outputs_add_observation(datatype, value):
# Create output instance
outputs = pcv.Outputs()
outputs.add_observation(sample='default', variable='test', trait='test variable', method='type', scale='none',
datatype=datatype, value=value, label=[])
assert outputs.observations["default"]["test"]["value"] == value
def test_plantcv_outputs_add_observation_invalid_type():
# Create output instance
outputs = pcv.Outputs()
with pytest.raises(RuntimeError):
outputs.add_observation(sample='default', variable='test', trait='test variable', method='type', scale='none',
datatype=list, value=np.array([2]), label=[])
def test_plantcv_outputs_save_results_json_newfile(tmpdir):
# Create a test tmp directory
cache_dir = tmpdir.mkdir("sub")
outfile = os.path.join(cache_dir, "results.json")
# Create output instance
outputs = pcv.Outputs()
outputs.add_observation(sample='default', variable='test', trait='test variable', method='test', scale='none',
datatype=str, value="test", label="none")
outputs.save_results(filename=outfile, outformat="json")
with open(outfile, "r") as fp:
results = json.load(fp)
assert results["observations"]["default"]["test"]["value"] == "test"
def test_plantcv_outputs_save_results_json_existing_file(tmpdir):
# Create a test tmp directory
cache_dir = tmpdir.mkdir("sub")
outfile = os.path.join(cache_dir, "data_results.txt")
shutil.copyfile(os.path.join(TEST_DATA, "data_results.txt"), outfile)
# Create output instance
outputs = pcv.Outputs()
outputs.add_observation(sample='default', variable='test', trait='test variable', method='test', scale='none',
datatype=str, value="test", label="none")
outputs.save_results(filename=outfile, outformat="json")
with open(outfile, "r") as fp:
results = json.load(fp)
assert results["observations"]["default"]["test"]["value"] == "test"
def test_plantcv_outputs_save_results_csv(tmpdir):
# Create a test tmp directory
cache_dir = tmpdir.mkdir("sub")
outfile = os.path.join(cache_dir, "results.csv")
testfile = os.path.join(TEST_DATA, "data_results.csv")
# Create output instance
outputs = pcv.Outputs()
outputs.add_observation(sample='default', variable='string', trait='string variable', method='string', scale='none',
datatype=str, value="string", label="none")
outputs.add_observation(sample='default', variable='boolean', trait='boolean variable', method='boolean',
scale='none', datatype=bool, value=True, label="none")
outputs.add_observation(sample='default', variable='list', trait='list variable', method='list',
scale='none', datatype=list, value=[1, 2, 3], label=[1, 2, 3])
outputs.add_observation(sample='default', variable='tuple', trait='tuple variable', method='tuple',
scale='none', datatype=tuple, value=(1, 2), label=(1, 2))
outputs.add_observation(sample='default', variable='tuple_list', trait='list of tuples variable',
method='tuple_list', scale='none', datatype=list, value=[(1, 2), (3, 4)], label=[1, 2])
outputs.save_results(filename=outfile, outformat="csv")
with open(outfile, "r") as fp:
results = fp.read()
with open(testfile, "r") as fp:
test_results = fp.read()
assert results == test_results
def test_plantcv_transform_warp_smaller():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR),-1)
bimg = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY),-1)
bimg_small = cv2.resize(bimg, (200,300)) #not sure why INTER_NEAREST doesn't preserve values
bimg_small[bimg_small>0]=255
mrow, mcol = bimg_small.shape
vrow, vcol, vdepth = img.shape
pcv.params.debug = None
mask_warped = pcv.transform.warp(bimg_small, img[:,:,2],
pts = [(0,0),(mcol-1,0),(mcol-1,mrow-1),(0,mrow-1)],
refpts = [(0,0),(vcol-1,0),(vcol-1,vrow-1),(0,vrow-1)])
pcv.params.debug = 'plot'
mask_warped_plot = pcv.transform.warp(bimg_small, img[:,:,2],
pts = [(0,0),(mcol-1,0),(mcol-1,mrow-1),(0,mrow-1)],
refpts = [(0,0),(vcol-1,0),(vcol-1,vrow-1),(0,vrow-1)])
assert np.count_nonzero(mask_warped)==93142
assert np.count_nonzero(mask_warped_plot)==93142
def test_plantcv_transform_warp_larger():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR),-1)
gimg = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY),-1)
gimg_large = cv2.resize(gimg, (5000,7000))
mrow, mcol = gimg_large.shape
vrow, vcol, vdepth = img.shape
pcv.params.debug='print'
mask_warped_print = pcv.transform.warp(gimg_large, img,
pts = [(0,0),(mcol-1,0),(mcol-1,mrow-1),(0,mrow-1)],
refpts = [(0,0),(vcol-1,0),(vcol-1,vrow-1),(0,vrow-1)])
assert np.sum(mask_warped_print)==83103814
def test_plantcv_transform_warp_rgbimgerror():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR),-1)
gimg = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY),-1)
gimg_large = cv2.resize(gimg, (5000,7000))
mrow, mcol = gimg_large.shape
vrow, vcol, vdepth = img.shape
with pytest.raises(RuntimeError):
_ = pcv.transform.warp(img, img,
pts = [(0,0),(mcol-1,0),(mcol-1,mrow-1),(0,mrow-1)],
refpts = [(0,0),(vcol-1,0),(vcol-1,vrow-1),(0,vrow-1)])
def test_plantcv_transform_warp_4ptserror():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR),-1)
mrow, mcol, _ = img.shape
vrow, vcol, vdepth = img.shape
with pytest.raises(RuntimeError):
_ = pcv.transform.warp(img[:,:,0], img,
pts = [(0,0),(mcol-1,0),(0,mrow-1)],
refpts = [(0,0),(vcol-1,0),(0,vrow-1)])
with pytest.raises(RuntimeError):
_ = pcv.transform.warp(img[:,:,1], img,
pts = [(0,0),(mcol-1,0),(0,mrow-1)],
refpts = [(0,0),(vcol-1,0),(vcol-1,vrow-1),(0,vrow-1)])
with pytest.raises(RuntimeError):
_ = pcv.transform.warp(img[:,:,2], img,
pts = [(0,0),(mcol-1,0),(mcol-1,mrow-1),(0,mrow-1)],
refpts = [(0,0),(vcol-1,0),(vcol-1,vrow-1),(0,vrow-1),(0,vrow-1)])
def test_plantcv_acute():
# Read in test data
mask = cv2.imread(os.path.join(TEST_DATA, TEST_MASK_SMALL), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_VIS_COMP_CONTOUR), encoding="latin1")
obj_contour = contours_npz['arr_0']
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.acute(obj=obj_contour, win=5, thresh=15, mask=mask)
_ = pcv.acute(obj=obj_contour, win=0, thresh=15, mask=mask)
_ = pcv.acute(obj=np.array(([[213, 190]], [[83, 61]], [[149, 246]])), win=84, thresh=192, mask=mask)
_ = pcv.acute(obj=np.array(([[3, 29]], [[31, 102]], [[161, 63]])), win=148, thresh=56, mask=mask)
_ = pcv.acute(obj=np.array(([[103, 154]], [[27, 227]], [[152, 83]])), win=35, thresh=0, mask=mask)
# Test with debug = None
pcv.params.debug = None
_ = pcv.acute(obj=np.array(([[103, 154]], [[27, 227]], [[152, 83]])), win=35, thresh=0, mask=mask)
_ = pcv.acute(obj=obj_contour, win=0, thresh=15, mask=mask)
homology_pts = pcv.acute(obj=obj_contour, win=5, thresh=15, mask=mask)
assert all(i == j for i, j in zip(np.shape(homology_pts), (29, 1, 2)))
def test_plantcv_acute_vertex():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_acute_vertex")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_VIS_SMALL))
contours_npz = np.load(os.path.join(TEST_DATA, TEST_VIS_COMP_CONTOUR), encoding="latin1")
obj_contour = contours_npz['arr_0']
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.acute_vertex(obj=obj_contour, win=5, thresh=15, sep=5, img=img, label="prefix")
_ = pcv.acute_vertex(obj=[], win=5, thresh=15, sep=5, img=img)
_ = pcv.acute_vertex(obj=[], win=.01, thresh=.01, sep=1, img=img)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.acute_vertex(obj=obj_contour, win=5, thresh=15, sep=5, img=img)
# Test with debug = None
pcv.params.debug = None
acute = pcv.acute_vertex(obj=obj_contour, win=5, thresh=15, sep=5, img=img)
assert all(i == j for i, j in zip(np.shape(acute), np.shape(TEST_ACUTE_RESULT)))
pcv.outputs.clear()
def test_plantcv_acute_vertex_bad_obj():
img = cv2.imread(os.path.join(TEST_DATA, TEST_VIS_SMALL))
obj_contour = np.array([])
pcv.params.debug = None
result = pcv.acute_vertex(obj=obj_contour, win=5, thresh=15, sep=5, img=img)
assert all(i == j for i, j in zip(result, [0, ("NA", "NA")]))
pcv.outputs.clear()
def test_plantcv_analyze_bound_horizontal():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_analyze_bound_horizontal")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
img_above_bound_only = cv2.imread(os.path.join(TEST_DATA, TEST_MASK_SMALL_PLANT))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_CONTOURS), encoding="latin1")
object_contours = contours_npz['arr_0']
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.analyze_bound_horizontal(img=img, obj=object_contours, mask=mask, line_position=300, label="prefix")
pcv.outputs.clear()
_ = pcv.analyze_bound_horizontal(img=img, obj=object_contours, mask=mask, line_position=100)
_ = pcv.analyze_bound_horizontal(img=img_above_bound_only, obj=object_contours, mask=mask, line_position=1756)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.analyze_bound_horizontal(img=img, obj=object_contours, mask=mask, line_position=1756)
# Test with debug = None
pcv.params.debug = None
_ = pcv.analyze_bound_horizontal(img=img, obj=object_contours, mask=mask, line_position=1756)
assert len(pcv.outputs.observations["default"]) == 7
def test_plantcv_analyze_bound_horizontal_grayscale_image():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_CONTOURS), encoding="latin1")
object_contours = contours_npz['arr_0']
# Test with a grayscale reference image and debug="plot"
pcv.params.debug = "plot"
boundary_img1 = pcv.analyze_bound_horizontal(img=img, obj=object_contours, mask=mask, line_position=1756)
assert len(np.shape(boundary_img1)) == 3
def test_plantcv_analyze_bound_horizontal_neg_y():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_analyze_bound_horizontal")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_CONTOURS), encoding="latin1")
object_contours = contours_npz['arr_0']
# Test with debug=None, line position that will trigger -y
pcv.params.debug = "plot"
_ = pcv.analyze_bound_horizontal(img=img, obj=object_contours, mask=mask, line_position=-1000)
_ = pcv.analyze_bound_horizontal(img=img, obj=object_contours, mask=mask, line_position=0)
_ = pcv.analyze_bound_horizontal(img=img, obj=object_contours, mask=mask, line_position=2056)
assert pcv.outputs.observations['default']['height_above_reference']['value'] == 713
def test_plantcv_analyze_bound_vertical():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_analyze_bound_vertical")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_CONTOURS), encoding="latin1")
object_contours = contours_npz['arr_0']
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.analyze_bound_vertical(img=img, obj=object_contours, mask=mask, line_position=1000, label="prefix")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.analyze_bound_vertical(img=img, obj=object_contours, mask=mask, line_position=1000)
# Test with debug = None
pcv.params.debug = None
_ = pcv.analyze_bound_vertical(img=img, obj=object_contours, mask=mask, line_position=1000)
assert pcv.outputs.observations['default']['width_left_reference']['value'] == 94
def test_plantcv_analyze_bound_vertical_grayscale_image():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_analyze_bound_vertical")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_CONTOURS), encoding="latin1")
object_contours = contours_npz['arr_0']
# Test with a grayscale reference image and debug="plot"
pcv.params.debug = "plot"
_ = pcv.analyze_bound_vertical(img=img, obj=object_contours, mask=mask, line_position=1000)
assert pcv.outputs.observations['default']['width_left_reference']['value'] == 94
pcv.outputs.clear()
def test_plantcv_analyze_bound_vertical_neg_x():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_analyze_bound_vertical")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_CONTOURS), encoding="latin1")
object_contours = contours_npz['arr_0']
# Test with debug="plot", line position that will trigger -x
pcv.params.debug = "plot"
_ = pcv.analyze_bound_vertical(img=img, obj=object_contours, mask=mask, line_position=2454)
assert pcv.outputs.observations['default']['width_left_reference']['value'] == 441
def test_plantcv_analyze_bound_vertical_small_x():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_analyze_bound_vertical")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_CONTOURS), encoding="latin1")
object_contours = contours_npz['arr_0']
# Test with debug='plot', line position that will trigger -x, and two channel object
pcv.params.debug = "plot"
_ = pcv.analyze_bound_vertical(img=img, obj=object_contours, mask=mask, line_position=1)
assert pcv.outputs.observations['default']['width_right_reference']['value'] == 441
def test_plantcv_analyze_color():
# Clear previous outputs
pcv.outputs.clear()
# Test with debug = None
pcv.params.debug = None
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type="all")
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type=None, label="prefix")
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type=None)
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type='lab')
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type='hsv')
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type=None)
# Test with debug = "print"
# pcv.params.debug = "print"
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type="all")
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type=None, label="prefix")
# Test with debug = "plot"
# pcv.params.debug = "plot"
# _ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type=None)
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type='lab')
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type='hsv')
# _ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type=None)
# Test with debug = None
# pcv.params.debug = None
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type='rgb')
assert pcv.outputs.observations['default']['hue_median']['value'] == 84.0
def test_plantcv_analyze_color_incorrect_image():
img_binary = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
with pytest.raises(RuntimeError):
_ = pcv.analyze_color(rgb_img=img_binary, mask=mask, hist_plot_type=None)
#
#
def test_plantcv_analyze_color_bad_hist_type():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
pcv.params.debug = "plot"
with pytest.raises(RuntimeError):
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type='bgr')
def test_plantcv_analyze_color_incorrect_hist_plot_type():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
with pytest.raises(RuntimeError):
pcv.params.debug = "plot"
_ = pcv.analyze_color(rgb_img=img, mask=mask, hist_plot_type="bgr")
def test_plantcv_analyze_nir():
# Clear previous outputs
pcv.outputs.clear()
# Test with debug=None
pcv.params.debug = None
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR), 0)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
_ = pcv.analyze_nir_intensity(gray_img=img, mask=mask, bins=256, histplot=True)
result = len(pcv.outputs.observations['default']['nir_frequencies']['value'])
assert result == 256
def test_plantcv_analyze_nir_16bit():
# Clear previous outputs
pcv.outputs.clear()
# Test with debug=None
pcv.params.debug = None
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR), 0)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
_ = pcv.analyze_nir_intensity(gray_img=np.uint16(img), mask=mask, bins=256, histplot=True)
result = len(pcv.outputs.observations['default']['nir_frequencies']['value'])
assert result == 256
def test_plantcv_analyze_object():
# Test with debug = None
pcv.params.debug = None
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_CONTOURS), encoding="latin1")
obj_contour = contours_npz['arr_0']
obj_images = pcv.analyze_object(img=img, obj=obj_contour, mask=mask)
pcv.outputs.clear()
assert len(obj_images) != 0
def test_plantcv_analyze_object_grayscale_input():
# Test with debug = None
pcv.params.debug = None
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR), 0)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_CONTOURS), encoding="latin1")
obj_contour = contours_npz['arr_0']
obj_images = pcv.analyze_object(img=img, obj=obj_contour, mask=mask)
assert len(obj_images) != 1
def test_plantcv_analyze_object_zero_slope():
# Test with debug = None
pcv.params.debug = None
# Create a test image
img = np.zeros((50, 50, 3), dtype=np.uint8)
img[10:11, 10:40, 0] = 255
mask = img[:, :, 0]
obj_contour = np.array([[[10, 10]], [[11, 10]], [[12, 10]], [[13, 10]], [[14, 10]], [[15, 10]], [[16, 10]],
[[17, 10]], [[18, 10]], [[19, 10]], [[20, 10]], [[21, 10]], [[22, 10]], [[23, 10]],
[[24, 10]], [[25, 10]], [[26, 10]], [[27, 10]], [[28, 10]], [[29, 10]], [[30, 10]],
[[31, 10]], [[32, 10]], [[33, 10]], [[34, 10]], [[35, 10]], [[36, 10]], [[37, 10]],
[[38, 10]], [[39, 10]], [[38, 10]], [[37, 10]], [[36, 10]], [[35, 10]], [[34, 10]],
[[33, 10]], [[32, 10]], [[31, 10]], [[30, 10]], [[29, 10]], [[28, 10]], [[27, 10]],
[[26, 10]], [[25, 10]], [[24, 10]], [[23, 10]], [[22, 10]], [[21, 10]], [[20, 10]],
[[19, 10]], [[18, 10]], [[17, 10]], [[16, 10]], [[15, 10]], [[14, 10]], [[13, 10]],
[[12, 10]], [[11, 10]]], dtype=np.int32)
obj_images = pcv.analyze_object(img=img, obj=obj_contour, mask=mask)
assert len(obj_images) != 0
def test_plantcv_analyze_object_longest_axis_2d():
# Test with debug = None
pcv.params.debug = None
# Create a test image
img = np.zeros((50, 50, 3), dtype=np.uint8)
img[0:5, 45:49, 0] = 255
img[0:5, 0:5, 0] = 255
mask = img[:, :, 0]
obj_contour = np.array([[[45, 1]], [[45, 2]], [[45, 3]], [[45, 4]], [[46, 4]], [[47, 4]], [[48, 4]],
[[48, 3]], [[48, 2]], [[48, 1]], [[47, 1]], [[46, 1]], [[1, 1]], [[1, 2]],
[[1, 3]], [[1, 4]], [[2, 4]], [[3, 4]], [[4, 4]], [[4, 3]], [[4, 2]],
[[4, 1]], [[3, 1]], [[2, 1]]], dtype=np.int32)
obj_images = pcv.analyze_object(img=img, obj=obj_contour, mask=mask)
assert len(obj_images) != 0
def test_plantcv_analyze_object_longest_axis_2e():
# Test with debug = None
pcv.params.debug = None
# Create a test image
img = np.zeros((50, 50, 3), dtype=np.uint8)
img[10:15, 10:40, 0] = 255
mask = img[:, :, 0]
obj_contour = np.array([[[10, 10]], [[10, 11]], [[10, 12]], [[10, 13]], [[10, 14]], [[11, 14]], [[12, 14]],
[[13, 14]], [[14, 14]], [[15, 14]], [[16, 14]], [[17, 14]], [[18, 14]], [[19, 14]],
[[20, 14]], [[21, 14]], [[22, 14]], [[23, 14]], [[24, 14]], [[25, 14]], [[26, 14]],
[[27, 14]], [[28, 14]], [[29, 14]], [[30, 14]], [[31, 14]], [[32, 14]], [[33, 14]],
[[34, 14]], [[35, 14]], [[36, 14]], [[37, 14]], [[38, 14]], [[39, 14]], [[39, 13]],
[[39, 12]], [[39, 11]], [[39, 10]], [[38, 10]], [[37, 10]], [[36, 10]], [[35, 10]],
[[34, 10]], [[33, 10]], [[32, 10]], [[31, 10]], [[30, 10]], [[29, 10]], [[28, 10]],
[[27, 10]], [[26, 10]], [[25, 10]], [[24, 10]], [[23, 10]], [[22, 10]], [[21, 10]],
[[20, 10]], [[19, 10]], [[18, 10]], [[17, 10]], [[16, 10]], [[15, 10]], [[14, 10]],
[[13, 10]], [[12, 10]], [[11, 10]]], dtype=np.int32)
obj_images = pcv.analyze_object(img=img, obj=obj_contour, mask=mask)
assert len(obj_images) != 0
def test_plantcv_analyze_object_small_contour():
# Test with debug = None
pcv.params.debug = None
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
obj_contour = [np.array([[[0, 0]], [[0, 50]], [[50, 50]], [[50, 0]]], dtype=np.int32)]
obj_images = pcv.analyze_object(img=img, obj=obj_contour, mask=mask)
assert obj_images is None
def test_plantcv_analyze_thermal_values():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_analyze_thermal_values")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
# img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR), 0)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_THERMAL_IMG_MASK), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_THERMAL_ARRAY), encoding="latin1")
img = contours_npz['arr_0']
pcv.params.debug = None
thermal_hist = pcv.analyze_thermal_values(thermal_array=img, mask=mask, histplot=True)
assert thermal_hist is not None and pcv.outputs.observations['default']['median_temp']['value'] == 33.20922
def test_plantcv_apply_mask_white():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_apply_mask_white")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.apply_mask(img=img, mask=mask, mask_color="white")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.apply_mask(img=img, mask=mask, mask_color="white")
# Test with debug = None
pcv.params.debug = None
masked_img = pcv.apply_mask(img=img, mask=mask, mask_color="white")
    assert all(i == j for i, j in zip(np.shape(masked_img), TEST_COLOR_DIM))
def test_plantcv_apply_mask_black():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_apply_mask_black")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.apply_mask(img=img, mask=mask, mask_color="black")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.apply_mask(img=img, mask=mask, mask_color="black")
# Test with debug = None
pcv.params.debug = None
masked_img = pcv.apply_mask(img=img, mask=mask, mask_color="black")
    assert all(i == j for i, j in zip(np.shape(masked_img), TEST_COLOR_DIM))
def test_plantcv_apply_mask_hyperspectral():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_apply_mask_hyperspectral")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
hyper_array = pcv.hyperspectral.read_data(filename=spectral_filename)
img = np.ones((2056, 2454))
img_stacked = cv2.merge((img, img, img, img))
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.apply_mask(img=img_stacked, mask=img, mask_color="black")
# Test with debug = "plot"
pcv.params.debug = "plot"
masked_array = pcv.apply_mask(img=hyper_array.array_data, mask=img, mask_color="black")
assert np.mean(masked_array) == 13.97111260224949
def test_plantcv_apply_mask_bad_input():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
with pytest.raises(RuntimeError):
pcv.params.debug = "plot"
_ = pcv.apply_mask(img=img, mask=mask, mask_color="wite")
def test_plantcv_auto_crop():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_auto_crop")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), -1)
contours = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_OBJECT), encoding="latin1")
roi_contours = [contours[arr_n] for arr_n in contours]
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.auto_crop(img=img1, obj=roi_contours[1], padding_x=(20, 10), padding_y=(20, 10), color='black')
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.auto_crop(img=img1, obj=roi_contours[1], color='image')
_ = pcv.auto_crop(img=img1, obj=roi_contours[1], padding_x=2000, padding_y=2000, color='image')
# Test with debug = None
pcv.params.debug = None
cropped = pcv.auto_crop(img=img1, obj=roi_contours[1], padding_x=20, padding_y=20, color='black')
x, y, z = np.shape(img1)
x1, y1, z1 = np.shape(cropped)
assert x > x1
def test_plantcv_auto_crop_grayscale_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_auto_crop_grayscale_input")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), -1)
gray_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2GRAY)
contours = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_OBJECT), encoding="latin1")
roi_contours = [contours[arr_n] for arr_n in contours]
# Test with debug = "plot"
pcv.params.debug = "plot"
cropped = pcv.auto_crop(img=gray_img, obj=roi_contours[1], padding_x=20, padding_y=20, color='white')
x, y = np.shape(gray_img)
x1, y1 = np.shape(cropped)
assert x > x1
def test_plantcv_auto_crop_bad_color_input():
# Read in test data
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), -1)
gray_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2GRAY)
contours = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_OBJECT), encoding="latin1")
roi_contours = [contours[arr_n] for arr_n in contours]
with pytest.raises(RuntimeError):
_ = pcv.auto_crop(img=gray_img, obj=roi_contours[1], padding_x=20, padding_y=20, color='wite')
def test_plantcv_auto_crop_bad_padding_input():
# Read in test data
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), -1)
gray_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2GRAY)
contours = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_OBJECT), encoding="latin1")
roi_contours = [contours[arr_n] for arr_n in contours]
with pytest.raises(RuntimeError):
_ = pcv.auto_crop(img=gray_img, obj=roi_contours[1], padding_x="one", padding_y=20, color='white')
def test_plantcv_canny_edge_detect():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_canny_edge_detect")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.canny_edge_detect(img=rgb_img, mask=mask, mask_color='white')
_ = pcv.canny_edge_detect(img=img, mask=mask, mask_color='black')
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.canny_edge_detect(img=img, thickness=2)
_ = pcv.canny_edge_detect(img=img)
# Test with debug = None
pcv.params.debug = None
edge_img = pcv.canny_edge_detect(img=img)
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(edge_img), TEST_BINARY_DIM)):
        # Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(edge_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_canny_edge_detect_bad_input():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_canny_edge_detect")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
with pytest.raises(RuntimeError):
_ = pcv.canny_edge_detect(img=img, mask=mask, mask_color="gray")
def test_plantcv_closing():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_closing")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), -1)
gray_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2GRAY)
bin_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug=None
pcv.params.debug = None
_ = pcv.closing(gray_img)
# Test with debug='plot'
pcv.params.debug = 'plot'
_ = pcv.closing(bin_img, np.ones((4, 4), np.uint8))
# Test with debug='print'
pcv.params.debug = 'print'
filtered_img = pcv.closing(bin_img)
assert np.sum(filtered_img) == 16261860
def test_plantcv_closing_bad_input():
# Read in test data
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), -1)
with pytest.raises(RuntimeError):
_ = pcv.closing(rgb_img)
def test_plantcv_cluster_contours():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_cluster_contours")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), -1)
roi_objects = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_OBJECT), encoding="latin1")
hierarchy = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_HIERARCHY), encoding="latin1")
objs = [roi_objects[arr_n] for arr_n in roi_objects]
obj_hierarchy = hierarchy['arr_0']
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.cluster_contours(img=img1, roi_objects=objs, roi_obj_hierarchy=obj_hierarchy, nrow=4, ncol=6)
_ = pcv.cluster_contours(img=img1, roi_objects=objs, roi_obj_hierarchy=obj_hierarchy, show_grid=True)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.cluster_contours(img=img1, roi_objects=objs, roi_obj_hierarchy=obj_hierarchy, nrow=4, ncol=6)
# Test with debug = None
pcv.params.debug = None
clusters_i, contours, hierarchy = pcv.cluster_contours(img=img1, roi_objects=objs, roi_obj_hierarchy=obj_hierarchy,
nrow=4, ncol=6)
lenori = len(objs)
lenclust = len(clusters_i)
assert lenori > lenclust
def test_plantcv_cluster_contours_grayscale_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_cluster_contours_grayscale_input")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), 0)
roi_objects = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_OBJECT), encoding="latin1")
    hierarchy = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_HIERARCHY), encoding="latin1")
    objs = [roi_objects[arr_n] for arr_n in roi_objects]
    obj_hierarchy = hierarchy['arr_0']
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.cluster_contours(img=img1, roi_objects=objs, roi_obj_hierarchy=obj_hierarchy, nrow=4, ncol=6)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.cluster_contours(img=img1, roi_objects=objs, roi_obj_hierarchy=obj_hierarchy, nrow=4, ncol=6)
# Test with debug = None
pcv.params.debug = None
    clusters_i, contours, hierarchy = pcv.cluster_contours(img=img1, roi_objects=objs, roi_obj_hierarchy=obj_hierarchy,
nrow=4, ncol=6)
lenori = len(objs)
lenclust = len(clusters_i)
assert lenori > lenclust
def test_plantcv_cluster_contours_splitimg():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_cluster_contours_splitimg")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), -1)
contours = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_CONTOUR), encoding="latin1")
clusters = np.load(os.path.join(TEST_DATA, TEST_INPUT_ClUSTER_CONTOUR), encoding="latin1")
    hierarchy = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_HIERARCHY), encoding="latin1")
cluster_names = os.path.join(TEST_DATA, TEST_INPUT_GENOTXT)
cluster_names_too_many = os.path.join(TEST_DATA, TEST_INPUT_GENOTXT_TOO_MANY)
roi_contours = [contours[arr_n] for arr_n in contours]
cluster_contours = [clusters[arr_n] for arr_n in clusters]
    obj_hierarchy = hierarchy['arr_0']
# Test with debug = None
pcv.params.debug = None
_, _, _ = pcv.cluster_contour_splitimg(img=img1, grouped_contour_indexes=cluster_contours,
contours=roi_contours,
hierarchy=obj_hierarchy, outdir=cache_dir, file=None, filenames=None)
_, _, _ = pcv.cluster_contour_splitimg(img=img1, grouped_contour_indexes=[[0]], contours=[],
hierarchy=np.array([[[1, -1, -1, -1]]]))
_, _, _ = pcv.cluster_contour_splitimg(img=img1, grouped_contour_indexes=cluster_contours,
contours=roi_contours,
hierarchy=obj_hierarchy, outdir=cache_dir, file='multi', filenames=None)
_, _, _ = pcv.cluster_contour_splitimg(img=img1, grouped_contour_indexes=cluster_contours,
contours=roi_contours,
hierarchy=obj_hierarchy, outdir=None, file=None, filenames=cluster_names)
_, _, _ = pcv.cluster_contour_splitimg(img=img1, grouped_contour_indexes=cluster_contours,
contours=roi_contours,
hierarchy=obj_hierarchy, outdir=None, file=None,
filenames=cluster_names_too_many)
output_path, imgs, masks = pcv.cluster_contour_splitimg(img=img1, grouped_contour_indexes=cluster_contours,
contours=roi_contours, hierarchy=obj_hierarchy, outdir=None,
file=None,
filenames=None)
assert len(output_path) != 0
def test_plantcv_cluster_contours_splitimg_grayscale():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_cluster_contours_splitimg_grayscale")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), 0)
contours = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_CONTOUR), encoding="latin1")
clusters = np.load(os.path.join(TEST_DATA, TEST_INPUT_ClUSTER_CONTOUR), encoding="latin1")
    hierarchy = np.load(os.path.join(TEST_DATA, TEST_INPUT_MULTI_HIERARCHY), encoding="latin1")
cluster_names = os.path.join(TEST_DATA, TEST_INPUT_GENOTXT)
cluster_names_too_many = os.path.join(TEST_DATA, TEST_INPUT_GENOTXT_TOO_MANY)
roi_contours = [contours[arr_n] for arr_n in contours]
cluster_contours = [clusters[arr_n] for arr_n in clusters]
    obj_hierarchy = hierarchy['arr_0']
pcv.params.debug = None
output_path, imgs, masks = pcv.cluster_contour_splitimg(img=img1, grouped_contour_indexes=cluster_contours,
contours=roi_contours, hierarchy=obj_hierarchy, outdir=None,
file=None,
filenames=None)
assert len(output_path) != 0
def test_plantcv_color_palette():
# Return a color palette
colors = pcv.color_palette(num=10, saved=False)
assert np.shape(colors) == (10, 3)
def test_plantcv_color_palette_random():
# Return a color palette in random order
pcv.params.color_sequence = "random"
colors = pcv.color_palette(num=10, saved=False)
assert np.shape(colors) == (10, 3)
def test_plantcv_color_palette_saved():
# Return a color palette that was saved
pcv.params.saved_color_scale = [[0, 0, 0], [255, 255, 255]]
colors = pcv.color_palette(num=2, saved=True)
assert colors == [[0, 0, 0], [255, 255, 255]]
def test_plantcv_crop():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_crop")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img, _, _ = pcv.readimage(os.path.join(TEST_DATA, TEST_INPUT_NIR_MASK), 'gray')
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.crop(img=img, x=10, y=10, h=50, w=50)
# Test with debug = "plot"
pcv.params.debug = "plot"
cropped = pcv.crop(img=img, x=10, y=10, h=50, w=50)
assert np.shape(cropped) == (50, 50)
def test_plantcv_crop_hyperspectral():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_crop_hyperspectral")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = np.ones((2056, 2454))
img_stacked = cv2.merge((img, img, img, img))
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.crop(img=img_stacked, x=10, y=10, h=50, w=50)
# Test with debug = "plot"
pcv.params.debug = "plot"
cropped = pcv.crop(img=img_stacked, x=10, y=10, h=50, w=50)
assert np.shape(cropped) == (50, 50, 4)
def test_plantcv_crop_position_mask():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_crop_position_mask")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
nir, path1, filename1 = pcv.readimage(os.path.join(TEST_DATA, TEST_INPUT_NIR_MASK), 'gray')
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MASK), -1)
    mask_three_channel = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MASK))
mask_resize = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MASK_RESIZE), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.crop_position_mask(nir, mask, x=40, y=3, v_pos="top", h_pos="right")
_ = pcv.crop_position_mask(nir, mask_resize, x=40, y=3, v_pos="top", h_pos="right")
_ = pcv.crop_position_mask(nir, mask_three_channel, x=40, y=3, v_pos="top", h_pos="right")
# Test with debug = "print" with bottom
_ = pcv.crop_position_mask(nir, mask, x=40, y=3, v_pos="bottom", h_pos="left")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.crop_position_mask(nir, mask, x=40, y=3, v_pos="top", h_pos="right")
# Test with debug = "plot" with bottom
_ = pcv.crop_position_mask(nir, mask, x=45, y=2, v_pos="bottom", h_pos="left")
# Test with debug = None
pcv.params.debug = None
newmask = pcv.crop_position_mask(nir, mask, x=40, y=3, v_pos="top", h_pos="right")
assert np.sum(newmask) == 707115
def test_plantcv_crop_position_mask_color():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_crop_position_mask")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
nir, path1, filename1 = pcv.readimage(os.path.join(TEST_DATA, TEST_INPUT_COLOR), mode='native')
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MASK), -1)
mask_resize = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MASK_RESIZE))
mask_non_binary = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MASK))
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.crop_position_mask(nir, mask, x=40, y=3, v_pos="top", h_pos="right")
# Test with debug = "print" with bottom
_ = pcv.crop_position_mask(nir, mask, x=40, y=3, v_pos="bottom", h_pos="left")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.crop_position_mask(nir, mask, x=40, y=3, v_pos="top", h_pos="right")
# Test with debug = "plot" with bottom
_ = pcv.crop_position_mask(nir, mask, x=45, y=2, v_pos="bottom", h_pos="left")
_ = pcv.crop_position_mask(nir, mask_non_binary, x=45, y=2, v_pos="bottom", h_pos="left")
_ = pcv.crop_position_mask(nir, mask_non_binary, x=45, y=2, v_pos="top", h_pos="left")
_ = pcv.crop_position_mask(nir, mask_non_binary, x=45, y=2, v_pos="bottom", h_pos="right")
_ = pcv.crop_position_mask(nir, mask_resize, x=45, y=2, v_pos="top", h_pos="left")
# Test with debug = None
pcv.params.debug = None
newmask = pcv.crop_position_mask(nir, mask, x=40, y=3, v_pos="top", h_pos="right")
assert np.sum(newmask) == 707115
def test_plantcv_crop_position_mask_bad_input_x():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_crop_position_mask")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MASK), -1)
# Read in test data
nir, path1, filename1 = pcv.readimage(os.path.join(TEST_DATA, TEST_INPUT_NIR_MASK))
pcv.params.debug = None
with pytest.raises(RuntimeError):
_ = pcv.crop_position_mask(nir, mask, x=-1, y=-1, v_pos="top", h_pos="right")
def test_plantcv_crop_position_mask_bad_input_vpos():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_crop_position_mask")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MASK), -1)
# Read in test data
nir, path1, filename1 = pcv.readimage(os.path.join(TEST_DATA, TEST_INPUT_NIR_MASK))
pcv.params.debug = None
with pytest.raises(RuntimeError):
_ = pcv.crop_position_mask(nir, mask, x=40, y=3, v_pos="below", h_pos="right")
def test_plantcv_crop_position_mask_bad_input_hpos():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_crop_position_mask")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MASK), -1)
# Read in test data
nir, path1, filename1 = pcv.readimage(os.path.join(TEST_DATA, TEST_INPUT_NIR_MASK))
pcv.params.debug = None
with pytest.raises(RuntimeError):
_ = pcv.crop_position_mask(nir, mask, x=40, y=3, v_pos="top", h_pos="starboard")
def test_plantcv_dilate():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_dilate")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.dilate(gray_img=img, ksize=5, i=1)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.dilate(gray_img=img, ksize=5, i=1)
# Test with debug = None
pcv.params.debug = None
dilate_img = pcv.dilate(gray_img=img, ksize=5, i=1)
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(dilate_img), TEST_BINARY_DIM)):
        # Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(dilate_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_dilate_small_k():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = None
pcv.params.debug = None
with pytest.raises(ValueError):
_ = pcv.dilate(img, 1, 1)
def test_plantcv_erode():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_erode")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.erode(gray_img=img, ksize=5, i=1)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.erode(gray_img=img, ksize=5, i=1)
# Test with debug = None
pcv.params.debug = None
erode_img = pcv.erode(gray_img=img, ksize=5, i=1)
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(erode_img), TEST_BINARY_DIM)):
        # Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(erode_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_erode_small_k():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = None
pcv.params.debug = None
with pytest.raises(ValueError):
_ = pcv.erode(img, 1, 1)
def test_plantcv_distance_transform():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_distance_transform")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_CROPPED_MASK), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.distance_transform(bin_img=mask, distance_type=1, mask_size=3)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.distance_transform(bin_img=mask, distance_type=1, mask_size=3)
# Test with debug = None
pcv.params.debug = None
distance_transform_img = pcv.distance_transform(bin_img=mask, distance_type=1, mask_size=3)
# Assert that the output image has the dimensions of the input image
    assert all(i == j for i, j in zip(np.shape(distance_transform_img), np.shape(mask)))
def test_plantcv_fatal_error():
# Verify that the fatal_error function raises a RuntimeError
with pytest.raises(RuntimeError):
pcv.fatal_error("Test error")
def test_plantcv_fill():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_fill")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.fill(bin_img=img, size=63632)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.fill(bin_img=img, size=63632)
# Test with debug = None
pcv.params.debug = None
fill_img = pcv.fill(bin_img=img, size=63632)
# Assert that the output image has the dimensions of the input image
# assert all([i == j] for i, j in zip(np.shape(fill_img), TEST_BINARY_DIM))
assert np.sum(fill_img) == 0
def test_plantcv_fill_bad_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_fill_bad_input")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
with pytest.raises(RuntimeError):
_ = pcv.fill(bin_img=img, size=1)
def test_plantcv_fill_holes():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_fill_holes")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.fill_holes(bin_img=img)
pcv.params.debug = "plot"
_ = pcv.fill_holes(bin_img=img)
# Test with debug = None
pcv.params.debug = None
fill_img = pcv.fill_holes(bin_img=img)
assert np.sum(fill_img) > np.sum(img)
def test_plantcv_fill_holes_bad_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_fill_holes_bad_input")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
with pytest.raises(RuntimeError):
_ = pcv.fill_holes(bin_img=img)
def test_plantcv_find_objects():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_find_objects")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.find_objects(img=img, mask=mask)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.find_objects(img=img, mask=mask)
# Test with debug = None
pcv.params.debug = None
contours, hierarchy = pcv.find_objects(img=img, mask=mask)
# Assert the correct number of contours are found
assert len(contours) == 2
def test_plantcv_find_objects_grayscale_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_find_objects_grayscale_input")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR), 0)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "plot"
pcv.params.debug = "plot"
contours, hierarchy = pcv.find_objects(img=img, mask=mask)
# Assert the correct number of contours are found
assert len(contours) == 2
def test_plantcv_flip():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_flip")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
img_binary = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.flip(img=img, direction="horizontal")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.flip(img=img, direction="vertical")
_ = pcv.flip(img=img_binary, direction="vertical")
# Test with debug = None
pcv.params.debug = None
flipped_img = pcv.flip(img=img, direction="horizontal")
    assert all(i == j for i, j in zip(np.shape(flipped_img), TEST_COLOR_DIM))
def test_plantcv_flip_bad_input():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
pcv.params.debug = None
with pytest.raises(RuntimeError):
_ = pcv.flip(img=img, direction="vert")
def test_plantcv_gaussian_blur():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_gaussian_blur")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
img_color = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.gaussian_blur(img=img, ksize=(51, 51), sigma_x=0, sigma_y=None)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.gaussian_blur(img=img, ksize=(51, 51), sigma_x=0, sigma_y=None)
_ = pcv.gaussian_blur(img=img_color, ksize=(51, 51), sigma_x=0, sigma_y=None)
# Test with debug = None
pcv.params.debug = None
gaussian_img = pcv.gaussian_blur(img=img, ksize=(51, 51), sigma_x=0, sigma_y=None)
imgavg = np.average(img)
gavg = np.average(gaussian_img)
assert gavg != imgavg
def test_plantcv_get_kernel_cross():
kernel = pcv.get_kernel(size=(3, 3), shape="cross")
assert (kernel == np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])).all()
def test_plantcv_get_kernel_rectangle():
kernel = pcv.get_kernel(size=(3, 3), shape="rectangle")
assert (kernel == np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]])).all()
def test_plantcv_get_kernel_ellipse():
kernel = pcv.get_kernel(size=(3, 3), shape="ellipse")
assert (kernel == np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])).all()
def test_plantcv_get_kernel_bad_input_size():
with pytest.raises(ValueError):
_ = pcv.get_kernel(size=(1, 1), shape="ellipse")
def test_plantcv_get_kernel_bad_input_shape():
with pytest.raises(RuntimeError):
_ = pcv.get_kernel(size=(3, 1), shape="square")
def test_plantcv_get_nir_sv():
nirpath = pcv.get_nir(TEST_DATA, TEST_VIS)
nirpath1 = os.path.join(TEST_DATA, TEST_NIR)
assert nirpath == nirpath1
def test_plantcv_get_nir_tv():
nirpath = pcv.get_nir(TEST_DATA, TEST_VIS_TV)
nirpath1 = os.path.join(TEST_DATA, TEST_NIR_TV)
assert nirpath == nirpath1
def test_plantcv_hist_equalization():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hist_equalization")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.hist_equalization(gray_img=img)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.hist_equalization(gray_img=img)
# Test with debug = None
pcv.params.debug = None
hist = pcv.hist_equalization(gray_img=img)
histavg = np.average(hist)
imgavg = np.average(img)
assert histavg != imgavg
def test_plantcv_hist_equalization_bad_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hist_equalization_bad_input")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), 1)
# Test with debug = None
pcv.params.debug = None
with pytest.raises(RuntimeError):
_ = pcv.hist_equalization(gray_img=img)
def test_plantcv_image_add():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_image_add")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
img2 = np.copy(img1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.image_add(gray_img1=img1, gray_img2=img2)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.image_add(gray_img1=img1, gray_img2=img2)
# Test with debug = None
pcv.params.debug = None
added_img = pcv.image_add(gray_img1=img1, gray_img2=img2)
    assert all(i == j for i, j in zip(np.shape(added_img), TEST_BINARY_DIM))
def test_plantcv_image_subtract():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_image_sub")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# read in images
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
img2 = np.copy(img1)
# Test with debug = "print"
pcv.params.debug = 'print'
_ = pcv.image_subtract(img1, img2)
# Test with debug = "plot"
pcv.params.debug = 'plot'
_ = pcv.image_subtract(img1, img2)
# Test with debug = None
pcv.params.debug = None
new_img = pcv.image_subtract(img1, img2)
assert np.array_equal(new_img, np.zeros(np.shape(new_img), np.uint8))
def test_plantcv_image_subtract_fail():
# read in images
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
img2 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY))
# test
with pytest.raises(RuntimeError):
_ = pcv.image_subtract(img1, img2)
def test_plantcv_invert():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_invert")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.invert(gray_img=img)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.invert(gray_img=img)
# Test with debug = None
pcv.params.debug = None
inverted_img = pcv.invert(gray_img=img)
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(inverted_img), TEST_BINARY_DIM)):
        # Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(inverted_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_landmark_reference_pt_dist():
# Clear previous outputs
pcv.outputs.clear()
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_landmark_reference")
os.mkdir(cache_dir)
points_rescaled = [(0.0139, 0.2569), (0.2361, 0.2917), (0.3542, 0.3819), (0.3542, 0.4167), (0.375, 0.4236),
(0.7431, 0.3681), (0.8958, 0.3542), (0.9931, 0.3125), (0.1667, 0.5139), (0.4583, 0.8889),
(0.4931, 0.5903), (0.3889, 0.5694), (0.4792, 0.4306), (0.2083, 0.5417), (0.3194, 0.5278),
(0.3889, 0.375), (0.3681, 0.3472), (0.2361, 0.0139), (0.5417, 0.2292), (0.7708, 0.3472),
(0.6458, 0.3472), (0.6389, 0.5208), (0.6458, 0.625)]
centroid_rescaled = (0.4685, 0.4945)
bottomline_rescaled = (0.4685, 0.2569)
_ = pcv.landmark_reference_pt_dist(points_r=[], centroid_r=('a', 'b'), bline_r=(0, 0))
_ = pcv.landmark_reference_pt_dist(points_r=[(10, 1000)], centroid_r=(10, 10), bline_r=(10, 10))
_ = pcv.landmark_reference_pt_dist(points_r=[], centroid_r=(0, 0), bline_r=(0, 0))
_ = pcv.landmark_reference_pt_dist(points_r=points_rescaled, centroid_r=centroid_rescaled,
bline_r=bottomline_rescaled, label="prefix")
assert len(pcv.outputs.observations['prefix'].keys()) == 8
def test_plantcv_laplace_filter():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_laplace_filter")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.laplace_filter(gray_img=img, ksize=1, scale=1)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.laplace_filter(gray_img=img, ksize=1, scale=1)
# Test with debug = None
pcv.params.debug = None
lp_img = pcv.laplace_filter(gray_img=img, ksize=1, scale=1)
# Assert that the output image has the dimensions of the input image
    assert all(i == j for i, j in zip(np.shape(lp_img), TEST_GRAY_DIM))
def test_plantcv_logical_and():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_logical_and")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
img2 = np.copy(img1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.logical_and(bin_img1=img1, bin_img2=img2)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.logical_and(bin_img1=img1, bin_img2=img2)
# Test with debug = None
pcv.params.debug = None
and_img = pcv.logical_and(bin_img1=img1, bin_img2=img2)
    assert all(i == j for i, j in zip(np.shape(and_img), TEST_BINARY_DIM))
def test_plantcv_logical_or():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_logical_or")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
img2 = np.copy(img1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.logical_or(bin_img1=img1, bin_img2=img2)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.logical_or(bin_img1=img1, bin_img2=img2)
# Test with debug = None
pcv.params.debug = None
or_img = pcv.logical_or(bin_img1=img1, bin_img2=img2)
    assert all(i == j for i, j in zip(np.shape(or_img), TEST_BINARY_DIM))
def test_plantcv_logical_xor():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_logical_xor")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
img2 = np.copy(img1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.logical_xor(bin_img1=img1, bin_img2=img2)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.logical_xor(bin_img1=img1, bin_img2=img2)
# Test with debug = None
pcv.params.debug = None
xor_img = pcv.logical_xor(bin_img1=img1, bin_img2=img2)
    assert all(i == j for i, j in zip(np.shape(xor_img), TEST_BINARY_DIM))
def test_plantcv_median_blur():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_median_blur")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.median_blur(gray_img=img, ksize=5)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.median_blur(gray_img=img, ksize=5)
# Test with debug = None
pcv.params.debug = None
blur_img = pcv.median_blur(gray_img=img, ksize=5)
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(blur_img), TEST_BINARY_DIM)):
        # Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(blur_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_median_blur_bad_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_median_blur_bad_input")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
with pytest.raises(RuntimeError):
_ = pcv.median_blur(img, 5.)
def test_plantcv_naive_bayes_classifier():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_naive_bayes_classifier")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.naive_bayes_classifier(rgb_img=img, pdf_file=os.path.join(TEST_DATA, TEST_PDFS))
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.naive_bayes_classifier(rgb_img=img, pdf_file=os.path.join(TEST_DATA, TEST_PDFS))
# Test with debug = None
pcv.params.debug = None
mask = pcv.naive_bayes_classifier(rgb_img=img, pdf_file=os.path.join(TEST_DATA, TEST_PDFS))
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(mask), TEST_GRAY_DIM)):
        # Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(mask), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_naive_bayes_classifier_bad_input():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
pcv.params.debug = None
with pytest.raises(RuntimeError):
_ = pcv.naive_bayes_classifier(rgb_img=img, pdf_file=os.path.join(TEST_DATA, TEST_PDFS_BAD))
def test_plantcv_object_composition():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_object_composition")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
object_contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_OBJECT_CONTOURS), encoding="latin1")
object_contours = [object_contours_npz[arr_n] for arr_n in object_contours_npz]
object_hierarchy_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_OBJECT_HIERARCHY), encoding="latin1")
object_hierarchy = object_hierarchy_npz['arr_0']
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.object_composition(img=img, contours=object_contours, hierarchy=object_hierarchy)
_ = pcv.object_composition(img=img, contours=[], hierarchy=object_hierarchy)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.object_composition(img=img, contours=object_contours, hierarchy=object_hierarchy)
# Test with debug = None
pcv.params.debug = None
contours, mask = pcv.object_composition(img=img, contours=object_contours, hierarchy=object_hierarchy)
# Assert that the objects have been combined
contour_shape = np.shape(contours) # type: tuple
assert contour_shape[1] == 1
def test_plantcv_object_composition_grayscale_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_object_composition_grayscale_input")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR), 0)
object_contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_OBJECT_CONTOURS), encoding="latin1")
object_contours = [object_contours_npz[arr_n] for arr_n in object_contours_npz]
object_hierarchy_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_OBJECT_HIERARCHY), encoding="latin1")
object_hierarchy = object_hierarchy_npz['arr_0']
# Test with debug = "plot"
pcv.params.debug = "plot"
contours, mask = pcv.object_composition(img=img, contours=object_contours, hierarchy=object_hierarchy)
# Assert that the objects have been combined
contour_shape = np.shape(contours) # type: tuple
assert contour_shape[1] == 1
def test_plantcv_within_frame():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_within_frame")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
mask_ib = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MASK), -1)
mask_oob = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MASK_OOB), -1)
in_bounds_ib = pcv.within_frame(mask=mask_ib, border_width=1, label="prefix")
in_bounds_oob = pcv.within_frame(mask=mask_oob, border_width=1)
assert (in_bounds_ib is True and in_bounds_oob is False)
def test_plantcv_within_frame_bad_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_within_frame")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
grayscale_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR), 0)
with pytest.raises(RuntimeError):
_ = pcv.within_frame(grayscale_img)
def test_plantcv_opening():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_closing")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), -1)
gray_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2GRAY)
bin_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug=None
pcv.params.debug = None
_ = pcv.opening(gray_img)
# Test with debug='plot'
pcv.params.debug = 'plot'
_ = pcv.opening(bin_img, np.ones((4, 4), np.uint8))
# Test with debug='print'
pcv.params.debug = 'print'
filtered_img = pcv.opening(bin_img)
assert np.sum(filtered_img) == 16184595
def test_plantcv_opening_bad_input():
# Read in test data
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI), -1)
with pytest.raises(RuntimeError):
_ = pcv.opening(rgb_img)
def test_plantcv_output_mask():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_output_mask")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
img_color = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.output_mask(img=img, mask=mask, filename='test.png', outdir=None, mask_only=False)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.output_mask(img=img, mask=mask, filename='test.png', outdir=cache_dir, mask_only=False)
_ = pcv.output_mask(img=img_color, mask=mask, filename='test.png', outdir=None, mask_only=False)
    # Remove tmp files in working directory
shutil.rmtree("ori-images")
shutil.rmtree("mask-images")
# Test with debug = None
pcv.params.debug = None
imgpath, maskpath, analysis_images = pcv.output_mask(img=img, mask=mask, filename='test.png',
outdir=cache_dir, mask_only=False)
assert all([os.path.exists(imgpath) is True, os.path.exists(maskpath) is True])
def test_plantcv_output_mask_true():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_output_mask")
pcv.params.debug_outdir = cache_dir
os.mkdir(cache_dir)
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
img_color = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.output_mask(img=img, mask=mask, filename='test.png', outdir=cache_dir, mask_only=True)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.output_mask(img=img_color, mask=mask, filename='test.png', outdir=cache_dir, mask_only=True)
pcv.params.debug = None
imgpath, maskpath, analysis_images = pcv.output_mask(img=img, mask=mask, filename='test.png', outdir=cache_dir,
mask_only=False)
assert all([os.path.exists(imgpath) is True, os.path.exists(maskpath) is True])
def test_plantcv_plot_image_matplotlib_input():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_pseudocolor")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
pimg = pcv.visualize.pseudocolor(gray_img=img, mask=mask, min_value=10, max_value=200)
with pytest.raises(RuntimeError):
pcv.plot_image(pimg)
def test_plantcv_plot_image_plotnine():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_plot_image_plotnine")
os.mkdir(cache_dir)
dataset = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [1, 2, 3, 4]})
img = ggplot(data=dataset)
try:
pcv.plot_image(img=img)
except RuntimeError:
assert False
# Assert that the image was plotted without error
assert True
def test_plantcv_print_image():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_print_image")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img, path, img_name = pcv.readimage(filename=os.path.join(TEST_DATA, TEST_INPUT_COLOR))
filename = os.path.join(cache_dir, 'plantcv_print_image.png')
pcv.print_image(img=img, filename=filename)
# Assert that the file was created
assert os.path.exists(filename) is True
def test_plantcv_print_image_bad_type():
with pytest.raises(RuntimeError):
pcv.print_image(img=[], filename="/dev/null")
def test_plantcv_print_image_plotnine():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_print_image_plotnine")
os.mkdir(cache_dir)
dataset = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [1, 2, 3, 4]})
img = ggplot(data=dataset)
filename = os.path.join(cache_dir, 'plantcv_print_image.png')
pcv.print_image(img=img, filename=filename)
# Assert that the file was created
assert os.path.exists(filename) is True
def test_plantcv_print_results(tmpdir):
# Create a tmp directory
cache_dir = tmpdir.mkdir("sub")
outfile = os.path.join(cache_dir, "results.json")
pcv.print_results(filename=outfile)
assert os.path.exists(outfile)
def test_plantcv_readimage_native():
# Test with debug = None
pcv.params.debug = None
_ = pcv.readimage(filename=os.path.join(TEST_DATA, TEST_INPUT_COLOR), mode='rgba')
_ = pcv.readimage(filename=os.path.join(TEST_DATA, TEST_INPUT_COLOR))
img, path, img_name = pcv.readimage(filename=os.path.join(TEST_DATA, TEST_INPUT_COLOR), mode='native')
# Assert that the image name returned equals the name of the input image
# Assert that the path of the image returned equals the path of the input image
# Assert that the dimensions of the returned image equals the expected dimensions
if img_name == TEST_INPUT_COLOR and path == TEST_DATA:
        if all(i == j for i, j in zip(np.shape(img), TEST_COLOR_DIM)):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_readimage_grayscale():
# Test with debug = None
pcv.params.debug = None
_, _, _ = pcv.readimage(filename=os.path.join(TEST_DATA, TEST_INPUT_GRAY), mode="grey")
img, path, img_name = pcv.readimage(filename=os.path.join(TEST_DATA, TEST_INPUT_GRAY), mode="gray")
assert len(np.shape(img)) == 2
def test_plantcv_readimage_rgb():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readimage(filename=os.path.join(TEST_DATA, TEST_INPUT_GRAY), mode="rgb")
assert len(np.shape(img)) == 3
def test_plantcv_readimage_rgba_as_rgb():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readimage(filename=os.path.join(TEST_DATA, TEST_INPUT_RGBA), mode="native")
assert np.shape(img)[2] == 3
def test_plantcv_readimage_csv():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readimage(filename=os.path.join(TEST_DATA, TEST_INPUT_THERMAL_CSV), mode="csv")
assert len(np.shape(img)) == 2
def test_plantcv_readimage_envi():
# Test with debug = None
pcv.params.debug = None
array_data = pcv.readimage(filename=os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA), mode="envi")
if sys.version_info[0] < 3:
assert len(array_data.array_type) == 8
def test_plantcv_readimage_bad_file():
with pytest.raises(RuntimeError):
_ = pcv.readimage(filename=TEST_INPUT_COLOR)
def test_plantcv_readbayer_default_bg():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_readbayer_default_bg")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Test with debug = "print"
pcv.params.debug = "print"
_, _, _ = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="BG", alg="default")
# Test with debug = "plot"
pcv.params.debug = "plot"
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="BG", alg="default")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
def test_plantcv_readbayer_default_gb():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="GB", alg="default")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
def test_plantcv_readbayer_default_rg():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="RG", alg="default")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
def test_plantcv_readbayer_default_gr():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="GR", alg="default")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
def test_plantcv_readbayer_edgeaware_bg():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="BG", alg="edgeaware")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
def test_plantcv_readbayer_edgeaware_gb():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="GB", alg="edgeaware")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
def test_plantcv_readbayer_edgeaware_rg():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="RG", alg="edgeaware")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
def test_plantcv_readbayer_edgeaware_gr():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="GR", alg="edgeaware")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
def test_plantcv_readbayer_variablenumbergradients_bg():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="BG", alg="variablenumbergradients")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
def test_plantcv_readbayer_variablenumbergradients_gb():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="GB", alg="variablenumbergradients")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
def test_plantcv_readbayer_variablenumbergradients_rg():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="RG", alg="variablenumbergradients")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
def test_plantcv_readbayer_variablenumbergradients_gr():
# Test with debug = None
pcv.params.debug = None
img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
bayerpattern="GR", alg="variablenumbergradients")
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))
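# The twelve readbayer tests above repeat the same logic for every Bayer pattern and
# demosaicing algorithm. As an illustrative sketch only (the test name below is
# hypothetical and not part of the original suite), they could be consolidated with
# pytest.mark.parametrize; this sketch relies on the module-level imports (os, np,
# pytest, pcv) and constants (TEST_DATA, TEST_INPUT_BAYER) defined at the top of this file.
@pytest.mark.parametrize("bayerpattern", ["BG", "GB", "RG", "GR"])
@pytest.mark.parametrize("alg", ["default", "edgeaware", "variablenumbergradients"])
def test_plantcv_readbayer_parametrized(bayerpattern, alg):
    # Test with debug = None
    pcv.params.debug = None
    img, path, img_name = pcv.readbayer(filename=os.path.join(TEST_DATA, TEST_INPUT_BAYER),
                                        bayerpattern=bayerpattern, alg=alg)
    assert all(i == j for i, j in zip(np.shape(img), (335, 400, 3)))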
def test_plantcv_readbayer_default_bad_input():
# Test with debug = None
pcv.params.debug = None
with pytest.raises(RuntimeError):
_, _, _ = pcv.readbayer(filename=os.path.join(TEST_DATA, "no-image.png"), bayerpattern="GR", alg="default")
def test_plantcv_rectangle_mask():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_rectangle_mask")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
img_color = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.rectangle_mask(img=img, p1=(0, 0), p2=(2454, 2056), color="white")
_ = pcv.rectangle_mask(img=img, p1=(0, 0), p2=(2454, 2056), color="white")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.rectangle_mask(img=img_color, p1=(0, 0), p2=(2454, 2056), color="gray")
# Test with debug = None
pcv.params.debug = None
masked, hist, contour, heir = pcv.rectangle_mask(img=img, p1=(0, 0), p2=(2454, 2056), color="black")
maskedsum = np.sum(masked)
imgsum = np.sum(img)
assert maskedsum < imgsum
def test_plantcv_rectangle_mask_bad_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_rectangle_mask")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with debug = None
pcv.params.debug = None
with pytest.raises(RuntimeError):
_ = pcv.rectangle_mask(img=img, p1=(0, 0), p2=(2454, 2056), color="whit")
def test_plantcv_report_size_marker_detect():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_report_size_marker_detect")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MARKER), -1)
# ROI contour
roi_contour = [np.array([[[3550, 850]], [[3550, 1349]], [[4049, 1349]], [[4049, 850]]], dtype=np.int32)]
roi_hierarchy = np.array([[[-1, -1, -1, -1]]], dtype=np.int32)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.report_size_marker_area(img=img, roi_contour=roi_contour, roi_hierarchy=roi_hierarchy, marker='detect',
objcolor='light', thresh_channel='s', thresh=120, label="prefix")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.report_size_marker_area(img=img, roi_contour=roi_contour, roi_hierarchy=roi_hierarchy, marker='detect',
objcolor='light', thresh_channel='s', thresh=120)
# Test with debug = None
pcv.params.debug = None
images = pcv.report_size_marker_area(img=img, roi_contour=roi_contour, roi_hierarchy=roi_hierarchy, marker='detect',
objcolor='light', thresh_channel='s', thresh=120)
pcv.outputs.clear()
assert len(images) != 0
def test_plantcv_report_size_marker_define():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MARKER), -1)
# ROI contour
roi_contour = [np.array([[[3550, 850]], [[3550, 1349]], [[4049, 1349]], [[4049, 850]]], dtype=np.int32)]
roi_hierarchy = np.array([[[-1, -1, -1, -1]]], dtype=np.int32)
# Test with debug = None
pcv.params.debug = None
images = pcv.report_size_marker_area(img=img, roi_contour=roi_contour, roi_hierarchy=roi_hierarchy, marker='define',
objcolor='light', thresh_channel='s', thresh=120)
assert len(images) != 0
def test_plantcv_report_size_marker_grayscale_input():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# ROI contour
roi_contour = [np.array([[[0, 0]], [[0, 49]], [[49, 49]], [[49, 0]]], dtype=np.int32)]
roi_hierarchy = np.array([[[-1, -1, -1, -1]]], dtype=np.int32)
# Test with debug = None
pcv.params.debug = None
images = pcv.report_size_marker_area(img=img, roi_contour=roi_contour, roi_hierarchy=roi_hierarchy, marker='define',
objcolor='light', thresh_channel='s', thresh=120)
assert len(images) != 0
def test_plantcv_report_size_marker_bad_marker_input():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MARKER), -1)
# ROI contour
roi_contour = [np.array([[[3550, 850]], [[3550, 1349]], [[4049, 1349]], [[4049, 850]]], dtype=np.int32)]
roi_hierarchy = np.array([[[-1, -1, -1, -1]]], dtype=np.int32)
with pytest.raises(RuntimeError):
_ = pcv.report_size_marker_area(img=img, roi_contour=roi_contour, roi_hierarchy=roi_hierarchy, marker='none',
objcolor='light', thresh_channel='s', thresh=120)
def test_plantcv_report_size_marker_bad_threshold_input():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MARKER), -1)
# ROI contour
roi_contour = [np.array([[[3550, 850]], [[3550, 1349]], [[4049, 1349]], [[4049, 850]]], dtype=np.int32)]
roi_hierarchy = np.array([[[-1, -1, -1, -1]]], dtype=np.int32)
with pytest.raises(RuntimeError):
_ = pcv.report_size_marker_area(img=img, roi_contour=roi_contour, roi_hierarchy=roi_hierarchy, marker='detect',
objcolor='light', thresh_channel=None, thresh=120)
def test_plantcv_rgb2gray_cmyk():
# Test with debug = None
pcv.params.debug = None
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
c = pcv.rgb2gray_cmyk(rgb_img=img, channel="c")
# Assert that the output image has the dimensions of the input image but is only a single channel
assert all(i == j for i, j in zip(np.shape(c), TEST_GRAY_DIM))
def test_plantcv_rgb2gray_cmyk_bad_channel():
# Test with debug = None
pcv.params.debug = None
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
with pytest.raises(RuntimeError):
# Channel S is not in CMYK
_ = pcv.rgb2gray_cmyk(rgb_img=img, channel="s")
def test_plantcv_rgb2gray_hsv():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_rgb2gray_hsv")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.rgb2gray_hsv(rgb_img=img, channel="s")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.rgb2gray_hsv(rgb_img=img, channel="s")
# Test with debug = None
pcv.params.debug = None
s = pcv.rgb2gray_hsv(rgb_img=img, channel="s")
# Assert that the output image has the dimensions of the input image but is only a single channel
assert all(i == j for i, j in zip(np.shape(s), TEST_GRAY_DIM))
def test_plantcv_rgb2gray_hsv_bad_input():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
pcv.params.debug = None
with pytest.raises(RuntimeError):
_ = pcv.rgb2gray_hsv(rgb_img=img, channel="l")
def test_plantcv_rgb2gray_lab():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_rgb2gray_lab")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.rgb2gray_lab(rgb_img=img, channel='b')
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.rgb2gray_lab(rgb_img=img, channel='b')
# Test with debug = None
pcv.params.debug = None
b = pcv.rgb2gray_lab(rgb_img=img, channel='b')
# Assert that the output image has the dimensions of the input image but is only a single channel
assert all(i == j for i, j in zip(np.shape(b), TEST_GRAY_DIM))
def test_plantcv_rgb2gray_lab_bad_input():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
pcv.params.debug = None
with pytest.raises(RuntimeError):
_ = pcv.rgb2gray_lab(rgb_img=img, channel="v")
def test_plantcv_rgb2gray():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_rgb2gray")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.rgb2gray(rgb_img=img)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.rgb2gray(rgb_img=img)
# Test with debug = None
pcv.params.debug = None
gray = pcv.rgb2gray(rgb_img=img)
# Assert that the output image has the dimensions of the input image but is only a single channel
assert all(i == j for i, j in zip(np.shape(gray), TEST_GRAY_DIM))
def test_plantcv_roi2mask():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_roi2mask")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_VIS_SMALL))
contours_npz = np.load(os.path.join(TEST_DATA, TEST_VIS_COMP_CONTOUR), encoding="latin1")
obj_contour = contours_npz['arr_0']
pcv.params.debug = "plot"
_ = pcv.roi.roi2mask(img=img, contour=obj_contour)
pcv.params.debug = "print"
mask = pcv.roi.roi2mask(img=img, contour=obj_contour)
assert np.shape(mask)[0:2] == np.shape(img)[0:2] and np.sum(mask) == 255
def test_plantcv_roi_objects():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_roi_objects")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
roi_contour_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_ROI_CONTOUR), encoding="latin1")
roi_contour = [roi_contour_npz[arr_n] for arr_n in roi_contour_npz]
roi_hierarchy_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_ROI_HIERARCHY), encoding="latin1")
roi_hierarchy = roi_hierarchy_npz['arr_0']
object_contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_OBJECT_CONTOURS), encoding="latin1")
object_contours = [object_contours_npz[arr_n] for arr_n in object_contours_npz]
object_hierarchy_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_OBJECT_HIERARCHY), encoding="latin1")
object_hierarchy = object_hierarchy_npz['arr_0']
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.roi_objects(img=img, roi_contour=roi_contour, roi_hierarchy=roi_hierarchy,
object_contour=object_contours, obj_hierarchy=object_hierarchy, roi_type="largest")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.roi_objects(img=img, roi_contour=roi_contour, roi_hierarchy=roi_hierarchy,
object_contour=object_contours, obj_hierarchy=object_hierarchy, roi_type="partial")
# Test with debug = None and roi_type = cutto
pcv.params.debug = None
_ = pcv.roi_objects(img=img, roi_contour=roi_contour, roi_hierarchy=roi_hierarchy,
object_contour=object_contours, obj_hierarchy=object_hierarchy, roi_type="cutto")
# Test with debug = None
kept_contours, kept_hierarchy, mask, area = pcv.roi_objects(img=img, roi_contour=roi_contour,
roi_hierarchy=roi_hierarchy,
object_contour=object_contours,
obj_hierarchy=object_hierarchy, roi_type="partial")
# Assert that the contours were filtered as expected
assert len(kept_contours) == 1891
def test_plantcv_roi_objects_bad_input():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
roi_contour_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_ROI_CONTOUR), encoding="latin1")
roi_contour = [roi_contour_npz[arr_n] for arr_n in roi_contour_npz]
roi_hierarchy_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_ROI_HIERARCHY), encoding="latin1")
roi_hierarchy = roi_hierarchy_npz['arr_0']
object_contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_OBJECT_CONTOURS), encoding="latin1")
object_contours = [object_contours_npz[arr_n] for arr_n in object_contours_npz]
object_hierarchy_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_OBJECT_HIERARCHY), encoding="latin1")
object_hierarchy = object_hierarchy_npz['arr_0']
pcv.params.debug = None
with pytest.raises(RuntimeError):
_ = pcv.roi_objects(img=img, roi_type="cut", roi_contour=roi_contour, roi_hierarchy=roi_hierarchy,
object_contour=object_contours, obj_hierarchy=object_hierarchy)
def test_plantcv_roi_objects_grayscale_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_roi_objects_grayscale_input")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR), 0)
roi_contour_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_ROI_CONTOUR), encoding="latin1")
roi_contour = [roi_contour_npz[arr_n] for arr_n in roi_contour_npz]
roi_hierarchy_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_ROI_HIERARCHY), encoding="latin1")
roi_hierarchy = roi_hierarchy_npz['arr_0']
object_contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_OBJECT_CONTOURS), encoding="latin1")
object_contours = [object_contours_npz[arr_n] for arr_n in object_contours_npz]
object_hierarchy_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_OBJECT_HIERARCHY), encoding="latin1")
object_hierarchy = object_hierarchy_npz['arr_0']
# Test with debug = "plot"
pcv.params.debug = "plot"
kept_contours, kept_hierarchy, mask, area = pcv.roi_objects(img=img, roi_type="partial", roi_contour=roi_contour,
roi_hierarchy=roi_hierarchy,
object_contour=object_contours,
obj_hierarchy=object_hierarchy)
# Assert that the contours were filtered as expected
assert len(kept_contours) == 1891
def test_plantcv_rotate():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
rotated = pcv.rotate(img=img, rotation_deg=45, crop=True)
imgavg = np.average(img)
rotateavg = np.average(rotated)
assert rotateavg != imgavg
def test_plantcv_transform_rotate():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_rotate_img")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.transform.rotate(img=img, rotation_deg=45, crop=True)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.transform.rotate(img=img, rotation_deg=45, crop=True)
# Test with debug = None
pcv.params.debug = None
rotated = pcv.transform.rotate(img=img, rotation_deg=45, crop=True)
imgavg = np.average(img)
rotateavg = np.average(rotated)
assert rotateavg != imgavg
def test_plantcv_transform_rotate_gray():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.transform.rotate(img=img, rotation_deg=45, crop=False)
# Test with debug = None
pcv.params.debug = None
rotated = pcv.transform.rotate(img=img, rotation_deg=45, crop=False)
imgavg = np.average(img)
rotateavg = np.average(rotated)
assert rotateavg != imgavg
def test_plantcv_scale_features():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_scale_features")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
mask = cv2.imread(os.path.join(TEST_DATA, TEST_MASK_SMALL), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_VIS_COMP_CONTOUR), encoding="latin1")
obj_contour = contours_npz['arr_0']
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.scale_features(obj=obj_contour, mask=mask, points=TEST_ACUTE_RESULT, line_position=50)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.scale_features(obj=obj_contour, mask=mask, points=TEST_ACUTE_RESULT, line_position='NA')
# Test with debug = None
pcv.params.debug = None
points_rescaled, centroid_rescaled, bottomline_rescaled = pcv.scale_features(obj=obj_contour, mask=mask,
points=TEST_ACUTE_RESULT,
line_position=50)
assert len(points_rescaled) == 23
def test_plantcv_scale_features_bad_input():
mask = np.array([])
obj_contour = np.array([])
pcv.params.debug = None
result = pcv.scale_features(obj=obj_contour, mask=mask, points=TEST_ACUTE_RESULT, line_position=50)
assert all(i == j for i, j in zip(result, [("NA", "NA"), ("NA", "NA"), ("NA", "NA")]))
def test_plantcv_scharr_filter():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_scharr_filter")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
pcv.params.debug = "print"
# Test with debug = "print"
_ = pcv.scharr_filter(img=img, dx=1, dy=0, scale=1)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.scharr_filter(img=img, dx=1, dy=0, scale=1)
# Test with debug = None
pcv.params.debug = None
scharr_img = pcv.scharr_filter(img=img, dx=1, dy=0, scale=1)
# Assert that the output image has the dimensions of the input image
assert all(i == j for i, j in zip(np.shape(scharr_img), TEST_GRAY_DIM))
def test_plantcv_shift_img():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_shift_img")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.shift_img(img=img, number=300, side="top")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.shift_img(img=img, number=300, side="top")
# Test with debug = "plot"
_ = pcv.shift_img(img=img, number=300, side="bottom")
# Test with debug = "plot"
_ = pcv.shift_img(img=img, number=300, side="right")
# Test with debug = "plot"
_ = pcv.shift_img(img=mask, number=300, side="left")
# Test with debug = None
pcv.params.debug = None
rotated = pcv.shift_img(img=img, number=300, side="top")
imgavg = np.average(img)
shiftavg = np.average(rotated)
assert shiftavg != imgavg
def test_plantcv_shift_img_bad_input():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
with pytest.raises(RuntimeError):
pcv.params.debug = None
_ = pcv.shift_img(img=img, number=-300, side="top")
def test_plantcv_shift_img_bad_side_input():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
with pytest.raises(RuntimeError):
pcv.params.debug = None
_ = pcv.shift_img(img=img, number=300, side="starboard")
def test_plantcv_sobel_filter():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_sobel_filter")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.sobel_filter(gray_img=img, dx=1, dy=0, ksize=1)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.sobel_filter(gray_img=img, dx=1, dy=0, ksize=1)
# Test with debug = None
pcv.params.debug = None
sobel_img = pcv.sobel_filter(gray_img=img, dx=1, dy=0, ksize=1)
# Assert that the output image has the dimensions of the input image
assert all(i == j for i, j in zip(np.shape(sobel_img), TEST_GRAY_DIM))
def test_plantcv_stdev_filter():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_stdev_filter")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY_SMALL), -1)
pcv.params.debug = "plot"
_ = pcv.stdev_filter(img=img, ksize=11)
pcv.params.debug = "print"
filter_img = pcv.stdev_filter(img=img, ksize=11)
assert (np.shape(filter_img) == np.shape(img))
def test_plantcv_watershed_segmentation():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_watershed_segmentation")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_CROPPED))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_CROPPED_MASK), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.watershed_segmentation(rgb_img=img, mask=mask, distance=10, label="prefix")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.watershed_segmentation(rgb_img=img, mask=mask, distance=10)
# Test with debug = None
pcv.params.debug = None
_ = pcv.watershed_segmentation(rgb_img=img, mask=mask, distance=10)
assert pcv.outputs.observations['default']['estimated_object_count']['value'] > 9
def test_plantcv_white_balance_gray_16bit():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_white_balance_gray_16bit")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_NIR_MASK), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.white_balance(img=img, mode='hist', roi=(5, 5, 80, 80))
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.white_balance(img=img, mode='max', roi=(5, 5, 80, 80))
# Test without an ROI
pcv.params.debug = None
_ = pcv.white_balance(img=img, mode='hist', roi=None)
# Test with debug = None
white_balanced = pcv.white_balance(img=img, roi=(5, 5, 80, 80))
imgavg = np.average(img)
balancedavg = np.average(white_balanced)
assert balancedavg != imgavg
def test_plantcv_white_balance_gray_8bit():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_white_balance_gray_8bit")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_NIR_MASK))
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.white_balance(img=img, mode='hist', roi=(5, 5, 80, 80))
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.white_balance(img=img, mode='max', roi=(5, 5, 80, 80))
# Test without an ROI
pcv.params.debug = None
_ = pcv.white_balance(img=img, mode='hist', roi=None)
# Test with debug = None
white_balanced = pcv.white_balance(img=img, roi=(5, 5, 80, 80))
imgavg = np.average(img)
balancedavg = np.average(white_balanced)
assert balancedavg != imgavg
def test_plantcv_white_balance_rgb():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_white_balance_rgb")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MARKER))
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.white_balance(img=img, mode='hist', roi=(5, 5, 80, 80))
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.white_balance(img=img, mode='max', roi=(5, 5, 80, 80))
# Test without an ROI
pcv.params.debug = None
_ = pcv.white_balance(img=img, mode='hist', roi=None)
# Test with debug = None
white_balanced = pcv.white_balance(img=img, roi=(5, 5, 80, 80))
imgavg = np.average(img)
balancedavg = np.average(white_balanced)
assert balancedavg != imgavg
def test_plantcv_white_balance_bad_input():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_NIR_MASK), -1)
# Test with debug = "plot"
with pytest.raises(RuntimeError):
pcv.params.debug = "plot"
_ = pcv.white_balance(img=img, mode='hist', roi=(5, 5, 5, 5, 5))
def test_plantcv_white_balance_bad_mode_input():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MARKER))
# Test with debug = "plot"
with pytest.raises(RuntimeError):
pcv.params.debug = "plot"
_ = pcv.white_balance(img=img, mode='histogram', roi=(5, 5, 80, 80))
def test_plantcv_white_balance_bad_input_int():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_NIR_MASK), -1)
# Test with debug = "plot"
with pytest.raises(RuntimeError):
pcv.params.debug = "plot"
_ = pcv.white_balance(img=img, mode='hist', roi=(5., 5, 5, 5))
def test_plantcv_x_axis_pseudolandmarks():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_x_axis_pseudolandmarks_debug")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_VIS_SMALL))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_MASK_SMALL), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_VIS_COMP_CONTOUR), encoding="latin1")
obj_contour = contours_npz['arr_0']
pcv.params.debug = "print"
_ = pcv.x_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.x_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img, label="prefix")
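# Degenerate inputs (empty or tiny objects and masks) are exercised below and should not raise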
_ = pcv.x_axis_pseudolandmarks(obj=np.array([[0, 0], [0, 0]]), mask=np.array([[0, 0], [0, 0]]), img=img)
_ = pcv.x_axis_pseudolandmarks(obj=np.array(([[89, 222]], [[252, 39]], [[89, 207]])),
mask=np.array(([[42, 161]], [[2, 47]], [[211, 222]])), img=img)
_ = pcv.x_axis_pseudolandmarks(obj=(), mask=mask, img=img)
# Test with debug = None
pcv.params.debug = None
top, bottom, center_v = pcv.x_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img)
pcv.outputs.clear()
assert all([all(i == j for i, j in zip(np.shape(top), (20, 1, 2))),
            all(i == j for i, j in zip(np.shape(bottom), (20, 1, 2))),
            all(i == j for i, j in zip(np.shape(center_v), (20, 1, 2)))])
def test_plantcv_x_axis_pseudolandmarks_small_obj():
img = cv2.imread(os.path.join(TEST_DATA, TEST_VIS_SMALL_PLANT))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_MASK_SMALL_PLANT), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_VIS_COMP_CONTOUR_SMALL_PLANT), encoding="latin1")
obj_contour = contours_npz['arr_0']
# Test with debug = "print"
pcv.params.debug = "print"
_, _, _ = pcv.x_axis_pseudolandmarks(obj=[], mask=mask, img=img)
_, _, _ = pcv.x_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img)
# Test with debug = "plot"
pcv.params.debug = "plot"
_, _, _ = pcv.x_axis_pseudolandmarks(obj=[], mask=mask, img=img)
top, bottom, center_v = pcv.x_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img)
assert all([all(i == j for i, j in zip(np.shape(top), (20, 1, 2))),
            all(i == j for i, j in zip(np.shape(bottom), (20, 1, 2))),
            all(i == j for i, j in zip(np.shape(center_v), (20, 1, 2)))])
def test_plantcv_x_axis_pseudolandmarks_bad_input():
img = np.array([])
mask = np.array([])
obj_contour = np.array([])
pcv.params.debug = None
result = pcv.x_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img)
assert all(i == j for i, j in zip(result, [("NA", "NA"), ("NA", "NA"), ("NA", "NA")]))
def test_plantcv_x_axis_pseudolandmarks_bad_obj_input():
img = cv2.imread(os.path.join(TEST_DATA, TEST_VIS_SMALL_PLANT))
with pytest.raises(RuntimeError):
_ = pcv.x_axis_pseudolandmarks(obj=np.array([[-2, -2], [-2, -2]]), mask=np.array([[-2, -2], [-2, -2]]), img=img)
def test_plantcv_y_axis_pseudolandmarks():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_y_axis_pseudolandmarks_debug")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_VIS_SMALL))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_MASK_SMALL), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_VIS_COMP_CONTOUR), encoding="latin1")
obj_contour = contours_npz['arr_0']
pcv.params.debug = "print"
_ = pcv.y_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img, label="prefix")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.y_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img)
pcv.outputs.clear()
_ = pcv.y_axis_pseudolandmarks(obj=[], mask=mask, img=img)
_ = pcv.y_axis_pseudolandmarks(obj=(), mask=mask, img=img)
_ = pcv.y_axis_pseudolandmarks(obj=np.array(([[89, 222]], [[252, 39]], [[89, 207]])),
mask=np.array(([[42, 161]], [[2, 47]], [[211, 222]])), img=img)
_ = pcv.y_axis_pseudolandmarks(obj=np.array(([[21, 11]], [[159, 155]], [[237, 11]])),
mask=np.array(([[38, 54]], [[144, 169]], [[81, 137]])), img=img)
# Test with debug = None
pcv.params.debug = None
left, right, center_h = pcv.y_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img)
pcv.outputs.clear()
assert all([all(i == j for i, j in zip(np.shape(left), (20, 1, 2))),
            all(i == j for i, j in zip(np.shape(right), (20, 1, 2))),
            all(i == j for i, j in zip(np.shape(center_h), (20, 1, 2)))])
def test_plantcv_y_axis_pseudolandmarks_small_obj():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_y_axis_pseudolandmarks_small_obj")
os.mkdir(cache_dir)
img = cv2.imread(os.path.join(TEST_DATA, TEST_VIS_SMALL_PLANT))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_MASK_SMALL_PLANT), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_VIS_COMP_CONTOUR_SMALL_PLANT), encoding="latin1")
obj_contour = contours_npz['arr_0']
# Test with debug = "print"
pcv.params.debug = "print"
_, _, _ = pcv.y_axis_pseudolandmarks(obj=[], mask=mask, img=img)
_, _, _ = pcv.y_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img)
# Test with debug = "plot"
pcv.params.debug = "plot"
pcv.outputs.clear()
left, right, center_h = pcv.y_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img)
pcv.outputs.clear()
assert all([all(i == j for i, j in zip(np.shape(left), (20, 1, 2))),
            all(i == j for i, j in zip(np.shape(right), (20, 1, 2))),
            all(i == j for i, j in zip(np.shape(center_h), (20, 1, 2)))])
def test_plantcv_y_axis_pseudolandmarks_bad_input():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_y_axis_pseudolandmarks_bad_input")
os.mkdir(cache_dir)
img = np.array([])
mask = np.array([])
obj_contour = np.array([])
pcv.params.debug = None
result = pcv.y_axis_pseudolandmarks(obj=obj_contour, mask=mask, img=img)
pcv.outputs.clear()
assert all(i == j for i, j in zip(result, [("NA", "NA"), ("NA", "NA"), ("NA", "NA")]))
def test_plantcv_y_axis_pseudolandmarks_bad_obj_input():
img = cv2.imread(os.path.join(TEST_DATA, TEST_VIS_SMALL_PLANT))
with pytest.raises(RuntimeError):
_ = pcv.y_axis_pseudolandmarks(obj=np.array([[-2, -2], [-2, -2]]), mask=np.array([[-2, -2], [-2, -2]]), img=img)
def test_plantcv_background_subtraction():
# List to hold result of all tests.
truths = []
fg_img = cv2.imread(os.path.join(TEST_DATA, TEST_FOREGROUND))
bg_img = cv2.imread(os.path.join(TEST_DATA, TEST_BACKGROUND))
big_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Testing if background subtraction is actually still working.
# This should return an array whose sum is greater than zero
pcv.params.debug = None
fgmask = pcv.background_subtraction(background_image=bg_img, foreground_image=fg_img)
truths.append(np.sum(fgmask) > 0)
fgmask = pcv.background_subtraction(background_image=big_img, foreground_image=bg_img)
truths.append(np.sum(fgmask) > 0)
# The same foreground subtracted from itself should be 0
fgmask = pcv.background_subtraction(background_image=fg_img, foreground_image=fg_img)
truths.append(np.sum(fgmask) == 0)
# The same background subtracted from itself should be 0
fgmask = pcv.background_subtraction(background_image=bg_img, foreground_image=bg_img)
truths.append(np.sum(fgmask) == 0)
# All of these should be true for the function to pass testing.
assert all(truths)
def test_plantcv_background_subtraction_debug():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_background_subtraction_debug")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# List to hold result of all tests.
truths = []
fg_img = cv2.imread(os.path.join(TEST_DATA, TEST_FOREGROUND))
bg_img = cv2.imread(os.path.join(TEST_DATA, TEST_BACKGROUND))
# Test with debug = "print"
pcv.params.debug = "print"
fgmask = pcv.background_subtraction(background_image=bg_img, foreground_image=fg_img)
truths.append(np.sum(fgmask) > 0)
# Test with debug = "plot"
pcv.params.debug = "plot"
fgmask = pcv.background_subtraction(background_image=bg_img, foreground_image=fg_img)
truths.append(np.sum(fgmask) > 0)
# All of these should be true for the function to pass testing.
assert all(truths)
def test_plantcv_background_subtraction_bad_img_type():
fg_color = cv2.imread(os.path.join(TEST_DATA, TEST_FOREGROUND))
bg_gray = cv2.imread(os.path.join(TEST_DATA, TEST_BACKGROUND), 0)
pcv.params.debug = None
with pytest.raises(RuntimeError):
_ = pcv.background_subtraction(background_image=bg_gray, foreground_image=fg_color)
def test_plantcv_background_subtraction_different_sizes():
fg_img = cv2.imread(os.path.join(TEST_DATA, TEST_FOREGROUND))
bg_img = cv2.imread(os.path.join(TEST_DATA, TEST_BACKGROUND))
bg_shp = np.shape(bg_img) # type: tuple
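# Note: cv2.resize takes dsize as (width, height); the exact target size does not
# matter here, only that the background image differs in size from the foreground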
bg_img_resized = cv2.resize(bg_img, (int(bg_shp[0] / 2), int(bg_shp[1] / 2)), interpolation=cv2.INTER_AREA)
pcv.params.debug = None
fgmask = pcv.background_subtraction(background_image=bg_img_resized, foreground_image=fg_img)
assert np.sum(fgmask) > 0
def test_plantcv_spatial_clustering_dbscan():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_spatial_clustering_dbscan")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI_MASK), -1)
pcv.params.debug = "print"
_ = pcv.spatial_clustering(img, algorithm="DBSCAN", min_cluster_size=10, max_distance=None)
pcv.params.debug = "plot"
spmask = pcv.spatial_clustering(img, algorithm="DBSCAN", min_cluster_size=10, max_distance=None)
assert len(spmask[1]) == 2
def test_plantcv_spatial_clustering_optics():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_spatial_clustering_optics")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI_MASK), -1)
pcv.params.debug = None
spmask = pcv.spatial_clustering(img, algorithm="OPTICS", min_cluster_size=100, max_distance=5000)
assert len(spmask[1]) == 2
def test_plantcv_spatial_clustering_badinput():
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_MULTI_MASK), -1)
pcv.params.debug = None
with pytest.raises(NameError):
_ = pcv.spatial_clustering(img, algorithm="Hydra", min_cluster_size=5, max_distance=100)
# ##############################
# Tests for the learn subpackage
# ##############################
def test_plantcv_learn_naive_bayes():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_learn_naive_bayes")
os.mkdir(cache_dir)
# Make image and mask directories in the cache directory
imgdir = os.path.join(cache_dir, "images")
maskdir = os.path.join(cache_dir, "masks")
if not os.path.exists(imgdir):
os.mkdir(imgdir)
if not os.path.exists(maskdir):
os.mkdir(maskdir)
# Copy an image and a mask to the image/mask directories
shutil.copyfile(os.path.join(TEST_DATA, TEST_VIS_SMALL), os.path.join(imgdir, "image.png"))
shutil.copyfile(os.path.join(TEST_DATA, TEST_MASK_SMALL), os.path.join(maskdir, "image.png"))
# Run the naive Bayes training module
outfile = os.path.join(cache_dir, "naive_bayes_pdfs.txt")
plantcv.learn.naive_bayes(imgdir=imgdir, maskdir=maskdir, outfile=outfile, mkplots=True)
assert os.path.exists(outfile)
def test_plantcv_learn_naive_bayes_multiclass():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_learn_naive_bayes_multiclass")
os.mkdir(cache_dir)
# Run the naive Bayes multiclass training module
outfile = os.path.join(cache_dir, "naive_bayes_multiclass_pdfs.txt")
plantcv.learn.naive_bayes_multiclass(samples_file=os.path.join(TEST_DATA, TEST_SAMPLED_RGB_POINTS), outfile=outfile,
mkplots=True)
assert os.path.exists(outfile)
# ####################################
# Tests for the morphology subpackage
# ####################################
def test_plantcv_morphology_segment_curvature():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_curvature")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON_PRUNED), -1)
pcv.params.debug = "print"
segmented_img, seg_objects = pcv.morphology.segment_skeleton(skel_img=skeleton)
pcv.outputs.clear()
_ = pcv.morphology.segment_curvature(segmented_img, seg_objects, label="prefix")
pcv.params.debug = "plot"
pcv.outputs.clear()
_ = pcv.morphology.segment_curvature(segmented_img, seg_objects)
assert len(pcv.outputs.observations['default']['segment_curvature']['value']) == 22
def test_plantcv_morphology_check_cycles():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_check_cycles")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
pcv.params.debug = "print"
_ = pcv.morphology.check_cycles(mask, label="prefix")
pcv.params.debug = "plot"
_ = pcv.morphology.check_cycles(mask)
pcv.params.debug = None
_ = pcv.morphology.check_cycles(mask)
assert pcv.outputs.observations['default']['num_cycles']['value'] == 1
def test_plantcv_morphology_find_branch_pts():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_branches")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
pcv.params.debug = "print"
_ = pcv.morphology.find_branch_pts(skel_img=skeleton, mask=mask, label="prefix")
pcv.params.debug = "plot"
_ = pcv.morphology.find_branch_pts(skel_img=skeleton)
pcv.params.debug = None
branches = pcv.morphology.find_branch_pts(skel_img=skeleton)
assert np.sum(branches) == 9435
def test_plantcv_morphology_find_tips():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_tips")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
pcv.params.debug = "print"
_ = pcv.morphology.find_tips(skel_img=skeleton, mask=mask, label="prefix")
pcv.params.debug = "plot"
_ = pcv.morphology.find_tips(skel_img=skeleton)
pcv.params.debug = None
tips = pcv.morphology.find_tips(skel_img=skeleton)
assert np.sum(tips) == 9435
def test_plantcv_morphology_prune():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_pruned")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
pcv.params.debug = "print"
_ = pcv.morphology.prune(skel_img=skeleton, size=1)
pcv.params.debug = "plot"
_ = pcv.morphology.prune(skel_img=skeleton, size=1, mask=skeleton)
pcv.params.debug = None
pruned_img, _, _ = pcv.morphology.prune(skel_img=skeleton, size=3)
assert np.sum(pruned_img) < np.sum(skeleton)
def test_plantcv_morphology_prune_size0():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_prune_size0")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
pruned_img, _, _ = pcv.morphology.prune(skel_img=skeleton, size=0)
assert np.sum(pruned_img) == np.sum(skeleton)
def test_plantcv_morphology_iterative_prune():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_iterative_prune")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
pruned_img = pcv.morphology._iterative_prune(skel_img=skeleton, size=3)
assert np.sum(pruned_img) < np.sum(skeleton)
def test_plantcv_morphology_segment_skeleton():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_segment_skeleton")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
pcv.params.debug = "print"
_ = pcv.morphology.segment_skeleton(skel_img=skeleton, mask=mask)
pcv.params.debug = "plot"
segmented_img, segment_objects = pcv.morphology.segment_skeleton(skel_img=skeleton)
assert len(segment_objects) == 73
def test_plantcv_morphology_fill_segments():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_fill_segments")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
obj_dic = np.load(os.path.join(TEST_DATA, TEST_SKELETON_OBJECTS))
obj = []
for key, val in obj_dic.items():
obj.append(val)
pcv.params.debug = "print"
_ = pcv.morphology.fill_segments(mask, obj, label="prefix")
pcv.params.debug = "plot"
_ = pcv.morphology.fill_segments(mask, obj)
tests = [pcv.outputs.observations['default']['segment_area']['value'][42] == 5529,
pcv.outputs.observations['default']['segment_area']['value'][20] == 5057,
pcv.outputs.observations['default']['segment_area']['value'][49] == 3323]
assert all(tests)
def test_plantcv_morphology_fill_segments_with_stem():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_fill_segments_with_stem")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
obj_dic = np.load(os.path.join(TEST_DATA, TEST_SKELETON_OBJECTS))
obj = []
for key, val in obj_dic.items():
obj.append(val)
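# Use the first four segments as the stem; the remaining segments should be reported as leaves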
stem_obj = obj[0:4]
pcv.params.debug = "print"
_ = pcv.morphology.fill_segments(mask, obj, stem_obj)
num_objects = len(pcv.outputs.observations['default']['leaf_area']['value'])
assert num_objects == 70
def test_plantcv_morphology_segment_angle():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_segment_angles")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON_PRUNED), -1)
pcv.params.debug = "print"
segmented_img, segment_objects = pcv.morphology.segment_skeleton(skel_img=skeleton)
_ = pcv.morphology.segment_angle(segmented_img=segmented_img, objects=segment_objects, label="prefix")
pcv.params.debug = "plot"
_ = pcv.morphology.segment_angle(segmented_img, segment_objects)
assert len(pcv.outputs.observations['default']['segment_angle']['value']) == 22
def test_plantcv_morphology_segment_angle_overflow():
# Clear previous outputs
pcv.outputs.clear()
# Don't prune; this would usually give an overflow error without the extra if statement in segment_angle
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_segment_angle_overflow")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
segmented_img, segment_objects = pcv.morphology.segment_skeleton(skel_img=skeleton)
_ = pcv.morphology.segment_angle(segmented_img, segment_objects)
assert len(pcv.outputs.observations['default']['segment_angle']['value']) == 73
def test_plantcv_morphology_segment_euclidean_length():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_segment_eu_length")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON_PRUNED), -1)
pcv.params.debug = "print"
segmented_img, segment_objects = pcv.morphology.segment_skeleton(skel_img=skeleton)
_ = pcv.morphology.segment_euclidean_length(segmented_img, segment_objects, label="prefix")
pcv.params.debug = "plot"
_ = pcv.morphology.segment_euclidean_length(segmented_img, segment_objects)
assert len(pcv.outputs.observations['default']['segment_eu_length']['value']) == 22
def test_plantcv_morphology_segment_euclidean_length_bad_input():
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
skel = pcv.morphology.skeletonize(mask=mask)
pcv.params.debug = None
segmented_img, segment_objects = pcv.morphology.segment_skeleton(skel_img=skel)
with pytest.raises(RuntimeError):
_ = pcv.morphology.segment_euclidean_length(segmented_img, segment_objects)
def test_plantcv_morphology_segment_path_length():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_segment_path_length")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON_PRUNED), -1)
pcv.params.debug = "print"
segmented_img, segment_objects = pcv.morphology.segment_skeleton(skel_img=skeleton)
_ = pcv.morphology.segment_path_length(segmented_img, segment_objects, label="prefix")
pcv.params.debug = "plot"
_ = pcv.morphology.segment_path_length(segmented_img, segment_objects)
assert len(pcv.outputs.observations['default']['segment_path_length']['value']) == 22
def test_plantcv_morphology_skeletonize():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_skeletonize")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
input_skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
pcv.params.debug = "print"
_ = pcv.morphology.skeletonize(mask=mask)
pcv.params.debug = "plot"
_ = pcv.morphology.skeletonize(mask=mask)
pcv.params.debug = None
skeleton = pcv.morphology.skeletonize(mask=mask)
arr = np.array(skeleton == input_skeleton)
assert arr.all()
def test_plantcv_morphology_segment_sort():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_segment_sort")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
segmented_img, seg_objects = pcv.morphology.segment_skeleton(skel_img=skeleton)
pcv.params.debug = "print"
_ = pcv.morphology.segment_sort(skeleton, seg_objects, mask=skeleton)
pcv.params.debug = "plot"
leaf_obj, stem_obj = pcv.morphology.segment_sort(skeleton, seg_objects)
assert len(leaf_obj) == 36
def test_plantcv_morphology_segment_tangent_angle():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_segment_tangent_angle")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skel = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON_PRUNED), -1)
objects = np.load(os.path.join(TEST_DATA, TEST_SKELETON_OBJECTS), encoding="latin1")
objs = [objects[arr_n] for arr_n in objects]
pcv.params.debug = "print"
_ = pcv.morphology.segment_tangent_angle(skel, objs, 2, label="prefix")
pcv.params.debug = "plot"
_ = pcv.morphology.segment_tangent_angle(skel, objs, 2)
assert len(pcv.outputs.observations['default']['segment_tangent_angle']['value']) == 73
def test_plantcv_morphology_segment_id():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_segment_id")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skel = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON_PRUNED), -1)
objects = np.load(os.path.join(TEST_DATA, TEST_SKELETON_OBJECTS), encoding="latin1")
objs = [objects[arr_n] for arr_n in objects]
pcv.params.debug = "print"
_ = pcv.morphology.segment_id(skel, objs)
pcv.params.debug = "plot"
_, labeled_img = pcv.morphology.segment_id(skel, objs, mask=skel)
assert np.sum(labeled_img) > np.sum(skel)
def test_plantcv_morphology_segment_insertion_angle():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_segment_insertion_angle")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
pruned, _, _ = pcv.morphology.prune(skel_img=skeleton, size=6)
segmented_img, seg_objects = pcv.morphology.segment_skeleton(skel_img=pruned)
leaf_obj, stem_obj = pcv.morphology.segment_sort(pruned, seg_objects)
pcv.params.debug = "plot"
_ = pcv.morphology.segment_insertion_angle(pruned, segmented_img, leaf_obj, stem_obj, 3, label="prefix")
pcv.params.debug = "print"
_ = pcv.morphology.segment_insertion_angle(pruned, segmented_img, leaf_obj, stem_obj, 10)
assert pcv.outputs.observations['default']['segment_insertion_angle']['value'][:6] == ['NA', 'NA', 'NA',
24.956918822001636,
50.7313343343401,
56.427712102130734]
def test_plantcv_morphology_segment_insertion_angle_bad_stem():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_segment_insertion_angle_bad_stem")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
pruned, _, _ = pcv.morphology.prune(skel_img=skeleton, size=5)
segmented_img, seg_objects = pcv.morphology.segment_skeleton(skel_img=pruned)
leaf_obj, stem_obj = pcv.morphology.segment_sort(pruned, seg_objects)
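# Deliberately substitute leaf segments for the stem to trigger a RuntimeError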
stem_obj = [leaf_obj[0], leaf_obj[10]]
with pytest.raises(RuntimeError):
_ = pcv.morphology.segment_insertion_angle(pruned, segmented_img, leaf_obj, stem_obj, 10)
def test_plantcv_morphology_segment_combine():
skel = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON_PRUNED), -1)
segmented_img, seg_objects = pcv.morphology.segment_skeleton(skel_img=skel)
pcv.params.debug = "plot"
# Test with list of IDs input
_, new_objects = pcv.morphology.segment_combine([0, 1], seg_objects, skel)
assert len(new_objects) + 1 == len(seg_objects)
def test_plantcv_morphology_segment_combine_lists():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_segment_combine_lists")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skel = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON_PRUNED), -1)
segmented_img, seg_objects = pcv.morphology.segment_skeleton(skel_img=skel)
pcv.params.debug = "print"
# Test with list of lists input
_, new_objects = pcv.morphology.segment_combine([[0, 1, 2], [3, 4]], seg_objects, skel)
assert len(new_objects) + 3 == len(seg_objects)
def test_plantcv_morphology_segment_combine_bad_input():
skel = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON_PRUNED), -1)
segmented_img, seg_objects = pcv.morphology.segment_skeleton(skel_img=skel)
pcv.params.debug = "plot"
with pytest.raises(RuntimeError):
_, new_objects = pcv.morphology.segment_combine([0.5, 1.5], seg_objects, skel)
def test_plantcv_morphology_analyze_stem():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_analyze_stem")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
pruned, segmented_img, _ = pcv.morphology.prune(skel_img=skeleton, size=6)
segmented_img, seg_objects = pcv.morphology.segment_skeleton(skel_img=pruned)
leaf_obj, stem_obj = pcv.morphology.segment_sort(pruned, seg_objects)
pcv.params.debug = "plot"
_ = pcv.morphology.analyze_stem(rgb_img=segmented_img, stem_objects=stem_obj, label="prefix")
pcv.params.debug = "print"
_ = pcv.morphology.analyze_stem(rgb_img=segmented_img, stem_objects=stem_obj)
assert pcv.outputs.observations['default']['stem_angle']['value'] == -12.531776428222656
def test_plantcv_morphology_analyze_stem_bad_angle():
# Clear previous outputs
pcv.outputs.clear()
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_morphology_analyze_stem_bad_angle")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
skeleton = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_SKELETON), -1)
pruned, _, _ = pcv.morphology.prune(skel_img=skeleton, size=5)
segmented_img, seg_objects = pcv.morphology.segment_skeleton(skel_img=pruned)
_, _ = pcv.morphology.segment_sort(pruned, seg_objects)
# Hard-coded, perfectly vertical stem segment (used in place of stem_obj[3]);
# its near-infinite slope produces the degenerate stem_angle value asserted below
stem_obj = [[[[1116, 1728]], [[1116, 1]]]]
_ = pcv.morphology.analyze_stem(rgb_img=segmented_img, stem_objects=stem_obj)
assert pcv.outputs.observations['default']['stem_angle']['value'] == 22877334.0
# ########################################
# Tests for the hyperspectral subpackage
# ########################################
def test_plantcv_hyperspectral_read_data_default():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_read_data_default")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = "plot"
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
_ = pcv.hyperspectral.read_data(filename=spectral_filename)
pcv.params.debug = "print"
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
assert np.shape(array_data.array_data) == (1, 1600, 978)
def test_plantcv_hyperspectral_read_data_no_default_bands():
pcv.params.debug = "plot"
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA_NO_DEFAULT)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
assert np.shape(array_data.array_data) == (1, 1600, 978)
def test_plantcv_hyperspectral_read_data_approx_pseudorgb():
pcv.params.debug = "plot"
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA_APPROX_PSEUDO)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
assert np.shape(array_data.array_data) == (1, 1600, 978)
def test_plantcv_spectral_index_ndvi():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_ndvi")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.ndvi(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_ndvi_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.ndvi(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.ndvi(hsi=index_array, distance=20)
def test_plantcv_spectral_index_gdvi():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_gdvi")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.gdvi(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_gdvi_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.gdvi(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.gdvi(hsi=index_array, distance=20)
def test_plantcv_spectral_index_savi():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_savi")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.savi(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_savi_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.savi(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.savi(hsi=index_array, distance=20)
def test_plantcv_spectral_index_pri():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_pri")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.pri(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_pri_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.pri(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.pri(hsi=index_array, distance=20)
def test_plantcv_spectral_index_ari():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_ari")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.ari(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_ari_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.ari(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.ari(hsi=index_array, distance=20)
def test_plantcv_spectral_index_ci_rededge():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_ci_rededge")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.ci_rededge(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_ci_rededge_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.ci_rededge(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.ci_rededge(hsi=index_array, distance=20)
def test_plantcv_spectral_index_cri550():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_cri550")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.cri550(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_cri550_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.cri550(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.cri550(hsi=index_array, distance=20)
def test_plantcv_spectral_index_cri700():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_cri700")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.cri700(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_cri700_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.cri700(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.cri700(hsi=index_array, distance=20)
def test_plantcv_spectral_index_egi():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_egi")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
index_array = pcv.spectral_index.egi(rgb_img=rgb_img)
assert np.shape(index_array.array_data) == (2056, 2454) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_evi():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_evi")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.evi(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_evi_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.evi(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.evi(hsi=index_array, distance=20)
def test_plantcv_spectral_index_mari():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_mari")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.mari(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_mari_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.mari(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.mari(hsi=index_array, distance=20)
def test_plantcv_spectral_index_mcari():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_mcari")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.mcari(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_mcari_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.mcari(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.mcari(hsi=index_array, distance=20)
def test_plantcv_spectral_index_mtci():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_mtci")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.mtci(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_mtci_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.mtci(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.mtci(hsi=index_array, distance=20)
def test_plantcv_spectral_index_ndre():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_ndre")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.ndre(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_ndre_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.ndre(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.ndre(hsi=index_array, distance=20)
def test_plantcv_spectral_index_psnd_chla():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_psnd_chla")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.psnd_chla(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_psnd_chla_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.psnd_chla(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.psnd_chla(hsi=index_array, distance=20)
def test_plantcv_spectral_index_psnd_chlb():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_psnd_chlb")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.psnd_chlb(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_psnd_chlb_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.psnd_chlb(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.psnd_chlb(hsi=index_array, distance=20)
def test_plantcv_spectral_index_psnd_car():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_psnd_car")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.psnd_car(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_psnd_car_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.psnd_car(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.psnd_car(hsi=index_array, distance=20)
def test_plantcv_spectral_index_psri():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_psri")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.psri(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_psri_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.psri(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.psri(hsi=index_array, distance=20)
def test_plantcv_spectral_index_pssr_chla():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_pssr_chla")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.pssr_chla(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_pssr_chla_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.pssr_chla(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.pssr_chla(hsi=index_array, distance=20)
def test_plantcv_spectral_index_pssr_chlb():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_pssr_chlb")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.pssr_chlb(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_pssr_chlb_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.pssr_chlb(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.pssr_chlb(hsi=index_array, distance=20)
def test_plantcv_spectral_index_pssr_car():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_pssr_car")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.pssr_car(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_pssr_car_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.pssr_car(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.pssr_car(hsi=index_array, distance=20)
def test_plantcv_spectral_index_rgri():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_rgri")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.rgri(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_rgri_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.rgri(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.rgri(hsi=index_array, distance=20)
def test_plantcv_spectral_index_rvsi():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_rvsi")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.rvsi(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_rvsi_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.rvsi(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.rvsi(hsi=index_array, distance=20)
def test_plantcv_spectral_index_sipi():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_sipi")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.sipi(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_sipi_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.sipi(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.sipi(hsi=index_array, distance=20)
def test_plantcv_spectral_index_sr():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_sr")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.sr(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_sr_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.sr(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.sr(hsi=index_array, distance=20)
def test_plantcv_spectral_index_vari():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_vari")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.vari(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_vari_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.vari(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.vari(hsi=index_array, distance=20)
def test_plantcv_spectral_index_vi_green():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_vi_green")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.vi_green(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_vi_green_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.vi_green(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.vi_green(hsi=index_array, distance=20)
def test_plantcv_spectral_index_wi():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_index_wi")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.wi(hsi=array_data, distance=20)
assert np.shape(index_array.array_data) == (1, 1600) and np.nanmax(index_array.pseudo_rgb) == 255
def test_plantcv_spectral_index_wi_bad_input():
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
pcv.params.debug = None
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.wi(hsi=array_data, distance=20)
with pytest.raises(RuntimeError):
_ = pcv.spectral_index.wi(hsi=index_array, distance=20)
def test_plantcv_hyperspectral_analyze_spectral():
# Clear previous outputs
pcv.outputs.clear()
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_analyze_spectral")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
mask = cv2.imread(os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_MASK), -1)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
# pcv.params.debug = "plot"
# _ = pcv.hyperspectral.analyze_spectral(array=array_data, mask=mask, histplot=True)
# pcv.params.debug = "print"
# _ = pcv.hyperspectral.analyze_spectral(array=array_data, mask=mask, histplot=True, label="prefix")
pcv.params.debug = None
_ = pcv.hyperspectral.analyze_spectral(array=array_data, mask=mask, histplot=True, label="prefix")
assert len(pcv.outputs.observations['prefix']['spectral_frequencies']['value']) == 978
def test_plantcv_hyperspectral_analyze_index():
# Clear previous outputs
pcv.outputs.clear()
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_analyze_index")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.savi(hsi=array_data, distance=801)
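    # Create an all-white (255) mask covering the entire index image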
mask_img = np.ones(np.shape(index_array.array_data), dtype=np.uint8) * 255
# pcv.params.debug = "print"
# pcv.hyperspectral.analyze_index(index_array=index_array, mask=mask_img, histplot=True)
# pcv.params.debug = "plot"
# pcv.hyperspectral.analyze_index(index_array=index_array, mask=mask_img, histplot=True)
pcv.params.debug = None
pcv.hyperspectral.analyze_index(index_array=index_array, mask=mask_img, histplot=True)
assert pcv.outputs.observations['default']['mean_index_savi']['value'] > 0
def test_plantcv_hyperspectral_analyze_index_set_range():
# Clear previous outputs
pcv.outputs.clear()
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_analyze_index_set_range")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.savi(hsi=array_data, distance=801)
mask_img = np.ones(np.shape(index_array.array_data), dtype=np.uint8) * 255
pcv.params.debug = None
pcv.hyperspectral.analyze_index(index_array=index_array, mask=mask_img, histplot=True, min_bin=0, max_bin=1)
assert pcv.outputs.observations['default']['mean_index_savi']['value'] > 0
def test_plantcv_hyperspectral_analyze_index_auto_range():
# Clear previous outputs
pcv.outputs.clear()
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_analyze_index_auto_range")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.savi(hsi=array_data, distance=801)
mask_img = np.ones(np.shape(index_array.array_data), dtype=np.uint8) * 255
pcv.params.debug = None
pcv.hyperspectral.analyze_index(index_array=index_array, mask=mask_img, min_bin="auto", max_bin="auto")
assert pcv.outputs.observations['default']['mean_index_savi']['value'] > 0
def test_plantcv_hyperspectral_analyze_index_outside_range_warning():
import io
from contextlib import redirect_stdout
    cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_analyze_index_outside_range_warning")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.savi(hsi=array_data, distance=801)
mask_img = np.ones(np.shape(index_array.array_data), dtype=np.uint8) * 255
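    # Capture stdout so the out-of-range warning message can be checked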
f = io.StringIO()
with redirect_stdout(f):
pcv.params.debug = None
pcv.hyperspectral.analyze_index(index_array=index_array, mask=mask_img, min_bin=.5, max_bin=.55, label="i")
out = f.getvalue()
# assert os.listdir(cache_dir) is 0
assert out[0:10] == 'WARNING!!!'
def test_plantcv_hyperspectral_analyze_index_bad_input_mask():
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.savi(hsi=array_data, distance=801)
mask_img = cv2.imread(os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_MASK))
with pytest.raises(RuntimeError):
pcv.hyperspectral.analyze_index(index_array=index_array, mask=mask_img)
def test_plantcv_hyperspectral_analyze_index_bad_input_index():
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
index_array = pcv.spectral_index.savi(hsi=array_data, distance=801)
mask_img = cv2.imread(os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_MASK), -1)
index_array.array_data = cv2.imread(os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_MASK))
with pytest.raises(RuntimeError):
pcv.hyperspectral.analyze_index(index_array=index_array, mask=mask_img)
def test_plantcv_hyperspectral_analyze_index_bad_input_datatype():
pcv.params.debug = None
spectral_filename = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
array_data = pcv.hyperspectral.read_data(filename=spectral_filename)
mask_img = cv2.imread(os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_MASK), -1)
with pytest.raises(RuntimeError):
pcv.hyperspectral.analyze_index(index_array=array_data, mask=mask_img)
def test_plantcv_hyperspectral_calibrate():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_calibrate")
os.mkdir(cache_dir)
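    # Read in raw image data along with the white and dark reference datacubes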
raw = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
white = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_WHITE)
dark = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DARK)
raw = pcv.hyperspectral.read_data(filename=raw)
white = pcv.hyperspectral.read_data(filename=white)
dark = pcv.hyperspectral.read_data(filename=dark)
pcv.params.debug = "plot"
_ = pcv.hyperspectral.calibrate(raw_data=raw, white_reference=white, dark_reference=dark)
pcv.params.debug = "print"
calibrated = pcv.hyperspectral.calibrate(raw_data=raw, white_reference=white, dark_reference=dark)
assert np.shape(calibrated.array_data) == (1, 1600, 978)
def test_plantcv_hyperspectral_extract_wavelength():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_hyperspectral_extract_wavelength")
os.mkdir(cache_dir)
spectral = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
spectral = pcv.hyperspectral.read_data(filename=spectral)
pcv.params.debug = "plot"
_ = pcv.hyperspectral.extract_wavelength(spectral_data=spectral, wavelength=500)
pcv.params.debug = "print"
new = pcv.hyperspectral.extract_wavelength(spectral_data=spectral, wavelength=500)
assert np.shape(new.array_data) == (1, 1600)
def test_plantcv_hyperspectral_avg_reflectance():
spectral = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
mask_img = cv2.imread(os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_MASK), -1)
spectral = pcv.hyperspectral.read_data(filename=spectral)
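    # Average reflectance across the masked pixels; one value per wavelength band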
avg_reflect = pcv.hyperspectral._avg_reflectance(spectral, mask=mask_img)
assert len(avg_reflect) == 978
def test_plantcv_hyperspectral_inverse_covariance():
spectral = os.path.join(HYPERSPECTRAL_TEST_DATA, HYPERSPECTRAL_DATA)
spectral = pcv.hyperspectral.read_data(filename=spectral)
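    # Inverse covariance matrix computed across all wavelength bands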
inv_cov = pcv.hyperspectral._inverse_covariance(spectral)
assert np.shape(inv_cov) == (978, 978)
# ########################################
# Tests for the photosynthesis subpackage
# ########################################
def test_plantcv_photosynthesis_read_dat():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_photosynthesis_read_dat")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
pcv.params.debug = "plot"
fluor_filename = os.path.join(FLUOR_TEST_DATA, FLUOR_IMG)
_, _, _ = pcv.photosynthesis.read_cropreporter(filename=fluor_filename)
pcv.params.debug = "print"
fdark, fmin, fmax = pcv.photosynthesis.read_cropreporter(filename=fluor_filename)
assert np.sum(fmin) < np.sum(fmax)
def test_plantcv_photosynthesis_analyze_fvfm():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_analyze_fvfm")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# filename = os.path.join(cache_dir, 'plantcv_fvfm_hist.png')
# Read in test data
fdark = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FDARK), -1)
fmin = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMIN), -1)
fmax = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMAX), -1)
fmask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMASK), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.photosynthesis.analyze_fvfm(fdark=fdark, fmin=fmin, fmax=fmax, mask=fmask, bins=1000, label="prefix")
# Test with debug = "plot"
pcv.params.debug = "plot"
fvfm_images = pcv.photosynthesis.analyze_fvfm(fdark=fdark, fmin=fmin, fmax=fmax, mask=fmask, bins=1000)
assert len(fvfm_images) != 0
def test_plantcv_photosynthesis_analyze_fvfm_print_analysis_results():
# Test cache directory
    cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_analyze_fvfm_print_analysis_results")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
fdark = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FDARK), -1)
fmin = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMIN), -1)
fmax = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMAX), -1)
fmask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMASK), -1)
_ = pcv.photosynthesis.analyze_fvfm(fdark=fdark, fmin=fmin, fmax=fmax, mask=fmask, bins=1000)
result_file = os.path.join(cache_dir, "results.txt")
pcv.print_results(result_file)
pcv.outputs.clear()
assert os.path.exists(result_file)
def test_plantcv_photosynthesis_analyze_fvfm_bad_fdark():
# Clear previous outputs
pcv.outputs.clear()
    cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_analyze_fvfm_bad_fdark")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
fdark = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FDARK), -1)
fmin = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMIN), -1)
fmax = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMAX), -1)
fmask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMASK), -1)
_ = pcv.photosynthesis.analyze_fvfm(fdark=fdark + 3000, fmin=fmin, fmax=fmax, mask=fmask, bins=1000)
check = pcv.outputs.observations['default']['fdark_passed_qc']['value'] is False
assert check
def test_plantcv_photosynthesis_analyze_fvfm_bad_input():
fdark = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
fmin = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMIN), -1)
fmax = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMAX), -1)
fmask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_FMASK), -1)
with pytest.raises(RuntimeError):
_ = pcv.photosynthesis.analyze_fvfm(fdark=fdark, fmin=fmin, fmax=fmax, mask=fmask, bins=1000)
# ##############################
# Tests for the roi subpackage
# ##############################
def test_plantcv_roi_from_binary_image():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_roi_from_binary_image")
os.mkdir(cache_dir)
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Create a binary image
bin_img = np.zeros(np.shape(rgb_img)[0:2], dtype=np.uint8)
cv2.rectangle(bin_img, (100, 100), (1000, 1000), 255, -1)
# Test with debug = "print"
pcv.params.debug = "print"
pcv.params.debug_outdir = cache_dir
_, _ = pcv.roi.from_binary_image(bin_img=bin_img, img=rgb_img)
# Test with debug = "plot"
pcv.params.debug = "plot"
_, _ = pcv.roi.from_binary_image(bin_img=bin_img, img=rgb_img)
# Test with debug = None
pcv.params.debug = None
roi_contour, roi_hierarchy = pcv.roi.from_binary_image(bin_img=bin_img, img=rgb_img)
# Assert the contours and hierarchy lists contain only the ROI
assert np.shape(roi_contour) == (1, 3600, 1, 2)
def test_plantcv_roi_from_binary_image_grayscale_input():
# Read in a test grayscale image
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Create a binary image
bin_img = np.zeros(np.shape(gray_img)[0:2], dtype=np.uint8)
cv2.rectangle(bin_img, (100, 100), (1000, 1000), 255, -1)
# Test with debug = "plot"
pcv.params.debug = "plot"
roi_contour, roi_hierarchy = pcv.roi.from_binary_image(bin_img=bin_img, img=gray_img)
# Assert the contours and hierarchy lists contain only the ROI
assert np.shape(roi_contour) == (1, 3600, 1, 2)
def test_plantcv_roi_from_binary_image_bad_binary_input():
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Binary input is required but an RGB input is provided
with pytest.raises(RuntimeError):
_, _ = pcv.roi.from_binary_image(bin_img=rgb_img, img=rgb_img)
def test_plantcv_roi_rectangle():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_roi_rectangle")
os.mkdir(cache_dir)
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
pcv.params.debug_outdir = cache_dir
_, _ = pcv.roi.rectangle(x=100, y=100, h=500, w=500, img=rgb_img)
# Test with debug = "plot"
pcv.params.debug = "plot"
_, _ = pcv.roi.rectangle(x=100, y=100, h=500, w=500, img=rgb_img)
# Test with debug = None
pcv.params.debug = None
roi_contour, roi_hierarchy = pcv.roi.rectangle(x=100, y=100, h=500, w=500, img=rgb_img)
# Assert the contours and hierarchy lists contain only the ROI
assert np.shape(roi_contour) == (1, 4, 1, 2)
def test_plantcv_roi_rectangle_grayscale_input():
# Read in a test grayscale image
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with debug = "plot"
pcv.params.debug = "plot"
roi_contour, roi_hierarchy = pcv.roi.rectangle(x=100, y=100, h=500, w=500, img=gray_img)
# Assert the contours and hierarchy lists contain only the ROI
assert np.shape(roi_contour) == (1, 4, 1, 2)
def test_plantcv_roi_rectangle_out_of_frame():
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# The resulting rectangle needs to be within the dimensions of the image
with pytest.raises(RuntimeError):
_, _ = pcv.roi.rectangle(x=100, y=100, h=500, w=3000, img=rgb_img)
def test_plantcv_roi_circle():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_roi_circle")
os.mkdir(cache_dir)
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
pcv.params.debug_outdir = cache_dir
_, _ = pcv.roi.circle(x=100, y=100, r=50, img=rgb_img)
# Test with debug = "plot"
pcv.params.debug = "plot"
_, _ = pcv.roi.circle(x=100, y=100, r=50, img=rgb_img)
# Test with debug = None
pcv.params.debug = None
roi_contour, roi_hierarchy = pcv.roi.circle(x=200, y=225, r=75, img=rgb_img)
# Assert the contours and hierarchy lists contain only the ROI
assert np.shape(roi_contour) == (1, 424, 1, 2)
def test_plantcv_roi_circle_grayscale_input():
# Read in a test grayscale image
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with debug = "plot"
pcv.params.debug = "plot"
roi_contour, roi_hierarchy = pcv.roi.circle(x=200, y=225, r=75, img=gray_img)
# Assert the contours and hierarchy lists contain only the ROI
assert np.shape(roi_contour) == (1, 424, 1, 2)
def test_plantcv_roi_circle_out_of_frame():
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
    # The resulting circle needs to be within the dimensions of the image
with pytest.raises(RuntimeError):
_, _ = pcv.roi.circle(x=50, y=225, r=75, img=rgb_img)
def test_plantcv_roi_ellipse():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_roi_ellipse")
os.mkdir(cache_dir)
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
pcv.params.debug_outdir = cache_dir
_, _ = pcv.roi.ellipse(x=200, y=200, r1=75, r2=50, angle=0, img=rgb_img)
# Test with debug = "plot"
pcv.params.debug = "plot"
_, _ = pcv.roi.ellipse(x=200, y=200, r1=75, r2=50, angle=0, img=rgb_img)
# Test with debug = None
pcv.params.debug = None
roi_contour, roi_hierarchy = pcv.roi.ellipse(x=200, y=200, r1=75, r2=50, angle=0, img=rgb_img)
# Assert the contours and hierarchy lists contain only the ROI
assert np.shape(roi_contour) == (1, 360, 1, 2)
def test_plantcv_roi_ellipse_grayscale_input():
# Read in a test grayscale image
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with debug = "plot"
pcv.params.debug = "plot"
roi_contour, roi_hierarchy = pcv.roi.ellipse(x=200, y=200, r1=75, r2=50, angle=0, img=gray_img)
# Assert the contours and hierarchy lists contain only the ROI
assert np.shape(roi_contour) == (1, 360, 1, 2)
def test_plantcv_roi_ellipse_out_of_frame():
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
    # The resulting ellipse needs to be within the dimensions of the image
with pytest.raises(RuntimeError):
_, _ = pcv.roi.ellipse(x=50, y=225, r1=75, r2=50, angle=0, img=rgb_img)
def test_plantcv_roi_multi():
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.roi.multi(rgb_img, coord=[(25, 120), (100, 100)], radius=20)
# Test with debug = None
pcv.params.debug = None
rois1, roi_hierarchy1 = pcv.roi.multi(rgb_img, coord=(25, 120), radius=20, spacing=(10, 10), nrows=3, ncols=6)
    # Assert that the contours list has 18 ROIs
assert len(rois1) == 18
def test_plantcv_roi_multi_bad_input():
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# The user must input a list of custom coordinates OR inputs to make a grid. Not both
with pytest.raises(RuntimeError):
_, _ = pcv.roi.multi(rgb_img, coord=[(25, 120), (100, 100)], radius=20, spacing=(10, 10), nrows=3, ncols=6)
def test_plantcv_roi_multi_bad_input_oob():
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
    # Inputs to make a grid create ROIs that go off the edge of the image
with pytest.raises(RuntimeError):
_, _ = pcv.roi.multi(rgb_img, coord=(25000, 12000), radius=2, spacing=(1, 1), nrows=3, ncols=6)
def test_plantcv_roi_multi_bad_input_oob_list():
# Read in test RGB image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
    # All vertices in the list of centers must produce ROIs that are inside the image
with pytest.raises(RuntimeError):
_, _ = pcv.roi.multi(rgb_img, coord=[(25000, 25000), (25000, 12000), (12000, 12000)], radius=20)
def test_plantcv_roi_custom():
# Read in test RGB image
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
pcv.params.debug = "plot"
cnt, hier = pcv.roi.custom(img=img, vertices=[[226, 1], [313, 184], [240, 202], [220, 229], [161, 171]])
assert np.shape(cnt) == (1, 5, 2)
def test_plantcv_roi_custom_bad_input():
# Read in test RGB image
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# ROI goes out of bounds
with pytest.raises(RuntimeError):
_ = pcv.roi.custom(img=img, vertices=[[226, -1], [3130, 1848], [2404, 2029], [2205, 2298], [1617, 1761]])
# ##############################
# Tests for the transform subpackage
# ##############################
def test_plantcv_transform_get_color_matrix():
# load in target_matrix
matrix_file = np.load(os.path.join(TEST_DATA, TEST_TARGET_MATRIX), encoding="latin1")
matrix_compare = matrix_file['arr_0']
# Read in rgb_img and gray-scale mask
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_MASK), -1)
# The result should be a len(np.unique(mask))-1 x 4 matrix
headers, matrix = pcv.transform.get_color_matrix(rgb_img, mask)
assert np.array_equal(matrix, matrix_compare)
def test_plantcv_transform_get_color_matrix_img():
# Read in two gray-scale images
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_MASK), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_MASK), -1)
# The input for rgb_img needs to be an RGB image
with pytest.raises(RuntimeError):
_, _ = pcv.transform.get_color_matrix(rgb_img, mask)
def test_plantcv_transform_get_color_matrix_mask():
    # Read in an RGB image and a color (3-channel) mask
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_MASK))
    # The mask input needs to be a gray-scale image
with pytest.raises(RuntimeError):
_, _ = pcv.transform.get_color_matrix(rgb_img, mask)
def test_plantcv_transform_get_matrix_m():
# load in comparison matrices
matrix_m_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_M1), encoding="latin1")
matrix_compare_m = matrix_m_file['arr_0']
matrix_b_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_B1), encoding="latin1")
matrix_compare_b = matrix_b_file['arr_0']
# read in matrices
t_matrix_file = np.load(os.path.join(TEST_DATA, TEST_TARGET_MATRIX), encoding="latin1")
t_matrix = t_matrix_file['arr_0']
s_matrix_file = np.load(os.path.join(TEST_DATA, TEST_SOURCE1_MATRIX), encoding="latin1")
s_matrix = s_matrix_file['arr_0']
# apply matrices to function
matrix_a, matrix_m, matrix_b = pcv.transform.get_matrix_m(t_matrix, s_matrix)
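    # Round matrices to the nearest integer before comparison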
matrix_compare_m = np.rint(matrix_compare_m)
matrix_compare_b = np.rint(matrix_compare_b)
matrix_m = np.rint(matrix_m)
matrix_b = np.rint(matrix_b)
assert np.array_equal(matrix_m, matrix_compare_m) and np.array_equal(matrix_b, matrix_compare_b)
def test_plantcv_transform_get_matrix_m_unequal_data():
# load in comparison matrices
matrix_m_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_M2), encoding="latin1")
matrix_compare_m = matrix_m_file['arr_0']
matrix_b_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_B2), encoding="latin1")
matrix_compare_b = matrix_b_file['arr_0']
# read in matrices
t_matrix_file = np.load(os.path.join(TEST_DATA, TEST_TARGET_MATRIX), encoding="latin1")
t_matrix = t_matrix_file['arr_0']
s_matrix_file = np.load(os.path.join(TEST_DATA, TEST_SOURCE2_MATRIX), encoding="latin1")
s_matrix = s_matrix_file['arr_0']
# apply matrices to function
matrix_a, matrix_m, matrix_b = pcv.transform.get_matrix_m(t_matrix, s_matrix)
matrix_compare_m = np.rint(matrix_compare_m)
matrix_compare_b = np.rint(matrix_compare_b)
matrix_m = np.rint(matrix_m)
matrix_b = np.rint(matrix_b)
assert np.array_equal(matrix_m, matrix_compare_m) and np.array_equal(matrix_b, matrix_compare_b)
def test_plantcv_transform_calc_transformation_matrix():
# load in comparison matrices
matrix_file = np.load(os.path.join(TEST_DATA, TEST_TRANSFORM1), encoding="latin1")
matrix_compare = matrix_file['arr_0']
# read in matrices
matrix_m_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_M1), encoding="latin1")
matrix_m = matrix_m_file['arr_0']
matrix_b_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_B1), encoding="latin1")
matrix_b = matrix_b_file['arr_0']
# apply to function
_, matrix_t = pcv.transform.calc_transformation_matrix(matrix_m, matrix_b)
matrix_t = np.rint(matrix_t)
matrix_compare = np.rint(matrix_compare)
assert np.array_equal(matrix_t, matrix_compare)
def test_plantcv_transform_calc_transformation_matrix_b_incorrect():
# read in matrices
matrix_m_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_M1), encoding="latin1")
matrix_m = matrix_m_file['arr_0']
matrix_b_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_B1), encoding="latin1")
matrix_b = matrix_b_file['arr_0']
matrix_b = np.asmatrix(matrix_b, float)
with pytest.raises(RuntimeError):
_, _ = pcv.transform.calc_transformation_matrix(matrix_m, matrix_b.T)
def test_plantcv_transform_calc_transformation_matrix_not_mult():
# read in matrices
matrix_m_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_M1), encoding="latin1")
matrix_m = matrix_m_file['arr_0']
matrix_b_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_B1), encoding="latin1")
matrix_b = matrix_b_file['arr_0']
with pytest.raises(RuntimeError):
_, _ = pcv.transform.calc_transformation_matrix(matrix_m, matrix_b[:3])
def test_plantcv_transform_calc_transformation_matrix_not_mat():
# read in matrices
matrix_m_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_M1), encoding="latin1")
matrix_m = matrix_m_file['arr_0']
matrix_b_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_B1), encoding="latin1")
matrix_b = matrix_b_file['arr_0']
with pytest.raises(RuntimeError):
_, _ = pcv.transform.calc_transformation_matrix(matrix_m[:, 1], matrix_b[:, 1])
def test_plantcv_transform_apply_transformation():
# load corrected image to compare
corrected_compare = cv2.imread(os.path.join(TEST_DATA, TEST_S1_CORRECTED))
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform")
os.mkdir(cache_dir)
    # Make an image output directory in the cache directory
imgdir = os.path.join(cache_dir, "images")
# read in matrices
matrix_t_file = np.load(os.path.join(TEST_DATA, TEST_TRANSFORM1), encoding="latin1")
matrix_t = matrix_t_file['arr_0']
# read in images
target_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
source_img = cv2.imread(os.path.join(TEST_DATA, TEST_SOURCE1_IMG))
# Test with debug = "print"
pcv.params.debug = "print"
pcv.params.debug_outdir = imgdir
_ = pcv.transform.apply_transformation_matrix(source_img, target_img, matrix_t)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.transform.apply_transformation_matrix(source_img, target_img, matrix_t)
# Test with debug = None
pcv.params.debug = None
corrected_img = pcv.transform.apply_transformation_matrix(source_img, target_img, matrix_t)
    # Assert that the corrected image matches the expected result
assert np.array_equal(corrected_img, corrected_compare)
def test_plantcv_transform_apply_transformation_incorrect_t():
# read in matrices
matrix_t_file = np.load(os.path.join(TEST_DATA, TEST_MATRIX_B1), encoding="latin1")
matrix_t = matrix_t_file['arr_0']
# read in images
target_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
source_img = cv2.imread(os.path.join(TEST_DATA, TEST_SOURCE1_IMG))
with pytest.raises(RuntimeError):
_ = pcv.transform.apply_transformation_matrix(source_img, target_img, matrix_t)
def test_plantcv_transform_apply_transformation_incorrect_img():
# read in matrices
matrix_t_file = np.load(os.path.join(TEST_DATA, TEST_TRANSFORM1), encoding="latin1")
matrix_t = matrix_t_file['arr_0']
# read in images
target_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
source_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_MASK), -1)
with pytest.raises(RuntimeError):
_ = pcv.transform.apply_transformation_matrix(source_img, target_img, matrix_t)
def test_plantcv_transform_save_matrix():
# Test cache directory
    cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_save_matrix")
os.mkdir(cache_dir)
# read in matrix
matrix_t_file = np.load(os.path.join(TEST_DATA, TEST_TRANSFORM1), encoding="latin1")
matrix_t = matrix_t_file['arr_0']
# .npz filename
filename = os.path.join(cache_dir, 'test.npz')
pcv.transform.save_matrix(matrix_t, filename)
assert os.path.exists(filename) is True
def test_plantcv_transform_save_matrix_incorrect_filename():
# Test cache directory
    cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_save_matrix_incorrect_filename")
os.mkdir(cache_dir)
# read in matrix
matrix_t_file = np.load(os.path.join(TEST_DATA, TEST_TRANSFORM1), encoding="latin1")
matrix_t = matrix_t_file['arr_0']
    # Filename missing the required .npz extension
filename = "test"
with pytest.raises(RuntimeError):
pcv.transform.save_matrix(matrix_t, filename)
def test_plantcv_transform_load_matrix():
# read in matrix_t
matrix_t_file = np.load(os.path.join(TEST_DATA, TEST_TRANSFORM1), encoding="latin1")
matrix_t = matrix_t_file['arr_0']
# test load function with matrix_t
matrix_t_loaded = pcv.transform.load_matrix(os.path.join(TEST_DATA, TEST_TRANSFORM1))
assert np.array_equal(matrix_t, matrix_t_loaded)
def test_plantcv_transform_correct_color():
# load corrected image to compare
corrected_compare = cv2.imread(os.path.join(TEST_DATA, TEST_S1_CORRECTED))
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_correct_color")
os.mkdir(cache_dir)
    # Make image and saved-matrix directories in the cache directory
imgdir = os.path.join(cache_dir, "images")
matdir = os.path.join(cache_dir, "saved_matrices")
# Read in target, source, and gray-scale mask
target_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
source_img = cv2.imread(os.path.join(TEST_DATA, TEST_SOURCE1_IMG))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_MASK), -1)
output_path = os.path.join(matdir)
# Test with debug = "print"
pcv.params.debug = "print"
pcv.params.debug_outdir = imgdir
_, _, _, _ = pcv.transform.correct_color(target_img, mask, source_img, mask, cache_dir)
# Test with debug = "plot"
pcv.params.debug = "plot"
_, _, _, _ = pcv.transform.correct_color(target_img, mask, source_img, mask, output_path)
# Test with debug = None
pcv.params.debug = None
_, _, matrix_t, corrected_img = pcv.transform.correct_color(target_img, mask, source_img, mask, output_path)
    # Assert that the corrected image matches the expected result and that all matrices were saved
assert all([np.array_equal(corrected_img, corrected_compare),
os.path.exists(os.path.join(output_path, "target_matrix.npz")) is True,
os.path.exists(os.path.join(output_path, "source_matrix.npz")) is True,
os.path.exists(os.path.join(output_path, "transformation_matrix.npz")) is True])
def test_plantcv_transform_correct_color_output_dne():
# load corrected image to compare
corrected_compare = cv2.imread(os.path.join(TEST_DATA, TEST_S1_CORRECTED))
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_correct_color_output_dne")
os.mkdir(cache_dir)
    # Make an image output directory in the cache directory
imgdir = os.path.join(cache_dir, "images")
# Read in target, source, and gray-scale mask
target_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
source_img = cv2.imread(os.path.join(TEST_DATA, TEST_SOURCE1_IMG))
mask = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_MASK), -1)
output_path = os.path.join(cache_dir, "saved_matrices_1") # output_directory that does not currently exist
# Test with debug = "print"
pcv.params.debug = "print"
pcv.params.debug_outdir = imgdir
_, _, _, _ = pcv.transform.correct_color(target_img, mask, source_img, mask, output_path)
# Test with debug = "plot"
pcv.params.debug = "plot"
_, _, _, _ = pcv.transform.correct_color(target_img, mask, source_img, mask, output_path)
# Test with debug = None
pcv.params.debug = None
_, _, matrix_t, corrected_img = pcv.transform.correct_color(target_img, mask, source_img, mask, output_path)
    # Assert that the corrected image matches the expected result and that all matrices were saved
assert all([np.array_equal(corrected_img, corrected_compare),
os.path.exists(os.path.join(output_path, "target_matrix.npz")) is True,
os.path.exists(os.path.join(output_path, "source_matrix.npz")) is True,
os.path.exists(os.path.join(output_path, "transformation_matrix.npz")) is True])
def test_plantcv_transform_create_color_card_mask():
# Load target image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_create_color_card_mask")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.transform.create_color_card_mask(rgb_img=rgb_img, radius=6, start_coord=(166, 166),
spacing=(21, 21), nrows=6, ncols=4, exclude=[20, 0])
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.transform.create_color_card_mask(rgb_img=rgb_img, radius=6, start_coord=(166, 166),
spacing=(21, 21), nrows=6, ncols=4, exclude=[20, 0])
# Test with debug = None
pcv.params.debug = None
mask = pcv.transform.create_color_card_mask(rgb_img=rgb_img, radius=6, start_coord=(166, 166),
spacing=(21, 21), nrows=6, ncols=4, exclude=[20, 0])
    assert all(i == j for i, j in zip(np.unique(mask), np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110,
                                                                 120, 130, 140, 150, 160, 170, 180, 190, 200, 210,
                                                                 220], dtype=np.uint8)))
def test_plantcv_transform_quick_color_check():
    # Load target and source color matrices
t_matrix = np.load(os.path.join(TEST_DATA, TEST_TARGET_MATRIX), encoding="latin1")
target_matrix = t_matrix['arr_0']
s_matrix = np.load(os.path.join(TEST_DATA, TEST_SOURCE1_MATRIX), encoding="latin1")
source_matrix = s_matrix['arr_0']
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_quick_color_check")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Test with debug = "print"
pcv.params.debug = "print"
pcv.transform.quick_color_check(target_matrix, source_matrix, num_chips=22)
# Test with debug = "plot"
pcv.params.debug = "plot"
pcv.transform.quick_color_check(target_matrix, source_matrix, num_chips=22)
# Test with debug = None
pcv.params.debug = None
pcv.transform.quick_color_check(target_matrix, source_matrix, num_chips=22)
assert os.path.exists(os.path.join(cache_dir, "color_quick_check.png"))
def test_plantcv_transform_find_color_card():
# Load rgb image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_find_color_card")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
df, start, space = pcv.transform.find_color_card(rgb_img=rgb_img, threshold_type='adaptgauss', blurry=False,
threshvalue=90)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.transform.create_color_card_mask(rgb_img=rgb_img, radius=6, start_coord=start,
spacing=space, nrows=6, ncols=4, exclude=[20, 0])
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.transform.create_color_card_mask(rgb_img=rgb_img, radius=6, start_coord=start,
spacing=space, nrows=6, ncols=4, exclude=[20, 0])
# Test with debug = None
pcv.params.debug = None
mask = pcv.transform.create_color_card_mask(rgb_img=rgb_img, radius=6, start_coord=start,
spacing=space, nrows=6, ncols=4, exclude=[20, 0])
    assert all(i == j for i, j in zip(np.unique(mask), np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110,
                                                                 120, 130, 140, 150, 160, 170, 180, 190, 200, 210,
                                                                 220], dtype=np.uint8)))
def test_plantcv_transform_find_color_card_optional_parameters():
# Clear previous outputs
pcv.outputs.clear()
# Load rgb image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG_COLOR_CARD))
# Test cache directory
    cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_find_color_card_optional_parameters")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
    # Test with threshold_type = 'normal'
df1, start1, space1 = pcv.transform.find_color_card(rgb_img=rgb_img, threshold_type='normal', blurry=True,
background='light', threshvalue=90, label="prefix")
assert pcv.outputs.observations["prefix"]["color_chip_size"]["value"] > 15000
def test_plantcv_transform_find_color_card_otsu():
# Clear previous outputs
pcv.outputs.clear()
# Load rgb image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG_COLOR_CARD))
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_find_color_card_otsu")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
    # Test with threshold_type = 'otsu'
df1, start1, space1 = pcv.transform.find_color_card(rgb_img=rgb_img, threshold_type='otsu', blurry=True,
background='light', threshvalue=90, label="prefix")
assert pcv.outputs.observations["prefix"]["color_chip_size"]["value"] > 15000
def test_plantcv_transform_find_color_card_optional_size_parameters():
# Clear previous outputs
pcv.outputs.clear()
# Load rgb image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG_COLOR_CARD))
# Test cache directory
    cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_find_color_card_optional_size_parameters")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
_, _, _ = pcv.transform.find_color_card(rgb_img=rgb_img, record_chip_size="mean")
assert pcv.outputs.observations["default"]["color_chip_size"]["value"] > 15000
def test_plantcv_transform_find_color_card_optional_size_parameters_none():
# Clear previous outputs
pcv.outputs.clear()
# Load rgb image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG_COLOR_CARD))
# Test cache directory
    cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_find_color_card_optional_size_parameters_none")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
_, _, _ = pcv.transform.find_color_card(rgb_img=rgb_img, record_chip_size=None)
assert pcv.outputs.observations.get("default") is None
def test_plantcv_transform_find_color_card_bad_record_chip_size():
# Clear previous outputs
pcv.outputs.clear()
# Load rgb image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
pcv.params.debug = None
_, _, _ = pcv.transform.find_color_card(rgb_img=rgb_img, record_chip_size='averageeeed')
assert pcv.outputs.observations["default"]["color_chip_size"]["value"] is None
def test_plantcv_transform_find_color_card_bad_thresh_input():
# Load rgb image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
with pytest.raises(RuntimeError):
pcv.params.debug = None
_, _, _ = pcv.transform.find_color_card(rgb_img=rgb_img, threshold_type='gaussian')
def test_plantcv_transform_find_color_card_bad_background_input():
# Load rgb image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
with pytest.raises(RuntimeError):
pcv.params.debug = None
_, _, _ = pcv.transform.find_color_card(rgb_img=rgb_img, background='lite')
def test_plantcv_transform_find_color_card_bad_colorcard():
# Load rgb image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG_WITH_HEXAGON))
with pytest.raises(RuntimeError):
pcv.params.debug = None
_, _, _ = pcv.transform.find_color_card(rgb_img=rgb_img)
def test_plantcv_transform_rescale():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_rescale")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.transform.rescale(gray_img=gray_img, min_value=0, max_value=100)
pcv.params.debug = "plot"
rescaled_img = pcv.transform.rescale(gray_img=gray_img, min_value=0, max_value=100)
assert max(np.unique(rescaled_img)) == 100
def test_plantcv_transform_rescale_bad_input():
# Load rgb image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
with pytest.raises(RuntimeError):
_ = pcv.transform.rescale(gray_img=rgb_img)
def test_plantcv_transform_resize():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_trancform_resize")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY_SMALL), -1)
size = (100, 100)
# Test with debug "print"
pcv.params.debug = "print"
_ = pcv.transform.resize(img=gray_img, size=size, interpolation="auto")
# Test with debug "plot"
pcv.params.debug = "plot"
resized_img = pcv.transform.resize(img=gray_img, size=size, interpolation="auto")
assert resized_img.shape == size
def test_plantcv_transform_resize_unsupported_method():
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY_SMALL), -1)
with pytest.raises(RuntimeError):
_ = pcv.transform.resize(img=gray_img, size=(100, 100), interpolation="mymethod")
def test_plantcv_transform_resize_crop():
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY_SMALL), -1)
size = (20, 20)
resized_im = pcv.transform.resize(img=gray_img, size=size, interpolation=None)
assert resized_im.shape == size
def test_plantcv_transform_resize_pad():
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY_SMALL), -1)
size = (100, 100)
resized_im = pcv.transform.resize(img=gray_img, size=size, interpolation=None)
assert resized_im.shape == size
def test_plantcv_transform_resize_pad_crop_color():
color_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY_SMALL))
size = (100, 100)
resized_im = pcv.transform.resize(img=color_img, size=size, interpolation=None)
assert resized_im.shape == (size[1], size[0], 3)
def test_plantcv_transform_resize_factor():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_trancform_resize_factor")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY_SMALL), -1)
# Resizing factors
factor_x = 0.5
factor_y = 0.2
# Test with debug "print"
pcv.params.debug = "print"
_ = pcv.transform.resize_factor(img=gray_img, factors=(factor_x, factor_y), interpolation="auto")
# Test with debug "plot"
pcv.params.debug = "plot"
resized_img = pcv.transform.resize_factor(img=gray_img, factors=(factor_x, factor_y), interpolation="auto")
output_size = resized_img.shape
expected_size = (int(gray_img.shape[0] * factor_y), int(gray_img.shape[1] * factor_x))
assert output_size == expected_size
def test_plantcv_transform_resize_factor_bad_input():
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY_SMALL), -1)
with pytest.raises(RuntimeError):
_ = pcv.transform.resize_factor(img=gray_img, factors=(0, 2), interpolation="auto")
def test_plantcv_transform_nonuniform_illumination_rgb():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_nonuniform_illumination")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Load rgb image
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_TARGET_IMG))
pcv.params.debug = "plot"
_ = pcv.transform.nonuniform_illumination(img=rgb_img, ksize=11)
pcv.params.debug = "print"
corrected = pcv.transform.nonuniform_illumination(img=rgb_img, ksize=11)
assert np.mean(corrected) < np.mean(rgb_img)
def test_plantcv_transform_nonuniform_illumination_gray():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_transform_nonuniform_illumination")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
    # Load gray image
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
pcv.params.debug = "plot"
_ = pcv.transform.nonuniform_illumination(img=gray_img, ksize=11)
pcv.params.debug = "print"
corrected = pcv.transform.nonuniform_illumination(img=gray_img, ksize=11)
assert np.shape(corrected) == np.shape(gray_img)
# ##############################
# Tests for the threshold subpackage
# ##############################
def test_plantcv_threshold_binary():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_binary")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with object type = dark
pcv.params.debug = None
_ = pcv.threshold.binary(gray_img=gray_img, threshold=25, max_value=255, object_type="dark")
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.threshold.binary(gray_img=gray_img, threshold=25, max_value=255, object_type="light")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.threshold.binary(gray_img=gray_img, threshold=25, max_value=255, object_type="light")
# Test with debug = None
pcv.params.debug = None
binary_img = pcv.threshold.binary(gray_img=gray_img, threshold=25, max_value=255, object_type="light")
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(binary_img), TEST_GRAY_DIM)):
# Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(binary_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_threshold_binary_incorrect_object_type():
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
with pytest.raises(RuntimeError):
pcv.params.debug = None
_ = pcv.threshold.binary(gray_img=gray_img, threshold=25, max_value=255, object_type="lite")
def test_plantcv_threshold_gaussian():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_gaussian")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with object type = dark
pcv.params.debug = None
_ = pcv.threshold.gaussian(gray_img=gray_img, max_value=255, object_type="dark")
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.threshold.gaussian(gray_img=gray_img, max_value=255, object_type="light")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.threshold.gaussian(gray_img=gray_img, max_value=255, object_type="light")
# Test with debug = None
pcv.params.debug = None
binary_img = pcv.threshold.gaussian(gray_img=gray_img, max_value=255, object_type="light")
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(binary_img), TEST_GRAY_DIM)):
# Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(binary_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_threshold_gaussian_incorrect_object_type():
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
with pytest.raises(RuntimeError):
pcv.params.debug = None
_ = pcv.threshold.gaussian(gray_img=gray_img, max_value=255, object_type="lite")
def test_plantcv_threshold_mean():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_mean")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with object type = dark
pcv.params.debug = None
_ = pcv.threshold.mean(gray_img=gray_img, max_value=255, object_type="dark")
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.threshold.mean(gray_img=gray_img, max_value=255, object_type="light")
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.threshold.mean(gray_img=gray_img, max_value=255, object_type="light")
# Test with debug = None
pcv.params.debug = None
binary_img = pcv.threshold.mean(gray_img=gray_img, max_value=255, object_type="light")
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(binary_img), TEST_GRAY_DIM)):
# Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(binary_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_threshold_mean_incorrect_object_type():
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
with pytest.raises(RuntimeError):
pcv.params.debug = None
_ = pcv.threshold.mean(gray_img=gray_img, max_value=255, object_type="lite")
def test_plantcv_threshold_otsu():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_otsu")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GREENMAG), -1)
# Test with object set to light
pcv.params.debug = None
_ = pcv.threshold.otsu(gray_img=gray_img, max_value=255, object_type="light")
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.threshold.otsu(gray_img=gray_img, max_value=255, object_type='dark')
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.threshold.otsu(gray_img=gray_img, max_value=255, object_type='dark')
# Test with debug = None
pcv.params.debug = None
binary_img = pcv.threshold.otsu(gray_img=gray_img, max_value=255, object_type='dark')
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(binary_img), TEST_GRAY_DIM)):
# Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(binary_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_threshold_otsu_incorrect_object_type():
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
with pytest.raises(RuntimeError):
pcv.params.debug = None
_ = pcv.threshold.otsu(gray_img=gray_img, max_value=255, object_type="lite")
def test_plantcv_threshold_custom_range():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_range")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with debug = "print"
pcv.params.debug = 'print'
# Test channel='gray'
_, _ = pcv.threshold.custom_range(img, lower_thresh=[0], upper_thresh=[255], channel='gray')
_, _ = pcv.threshold.custom_range(gray_img, lower_thresh=[0], upper_thresh=[255], channel='gray')
# Test channel='HSV'
_, _ = pcv.threshold.custom_range(img, lower_thresh=[0, 0, 0], upper_thresh=[255, 255, 255], channel='HSV')
# Test channel='LAB'
_, _ = pcv.threshold.custom_range(img, lower_thresh=[0, 0, 0], upper_thresh=[255, 255, 255], channel='LAB')
pcv.params.debug = 'plot'
# Test channel='RGB'
mask, binary_img = pcv.threshold.custom_range(img, lower_thresh=[0, 0, 0], upper_thresh=[255, 255, 255],
channel='RGB')
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(binary_img), TEST_GRAY_DIM)):
# Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(binary_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_threshold_custom_range_bad_input_hsv():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_range")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
with pytest.raises(RuntimeError):
_, _ = pcv.threshold.custom_range(img, lower_thresh=[0, 0], upper_thresh=[2, 2, 2, 2], channel='HSV')
def test_plantcv_threshold_custom_range_bad_input_rgb():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_range")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
with pytest.raises(RuntimeError):
_, _ = pcv.threshold.custom_range(img, lower_thresh=[0, 0], upper_thresh=[2, 2, 2, 2], channel='RGB')
def test_plantcv_threshold_custom_range_bad_input_lab():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_range")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
with pytest.raises(RuntimeError):
_, _ = pcv.threshold.custom_range(img, lower_thresh=[0, 0], upper_thresh=[2, 2, 2], channel='LAB')
def test_plantcv_threshold_custom_range_bad_input_gray():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_range")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
with pytest.raises(RuntimeError):
_, _ = pcv.threshold.custom_range(img, lower_thresh=[0, 0], upper_thresh=[2], channel='gray')
def test_plantcv_threshold_custom_range_bad_input_channel():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_range")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
with pytest.raises(RuntimeError):
_, _ = pcv.threshold.custom_range(img, lower_thresh=[0], upper_thresh=[2], channel='CMYK')
def test_plantcv_threshold_saturation():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_saturation")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.threshold.saturation(rgb_img=rgb_img, threshold=254, channel="all")
# Test with debug = "plot"
pcv.params.debug = "plot"
thresh = pcv.threshold.saturation(rgb_img=rgb_img, threshold=254, channel="any")
assert np.sum(thresh) == 920050455 and len(np.unique(thresh)) == 2
def test_plantcv_threshold_saturation_bad_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_saturation_bad_input")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
rgb_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
with pytest.raises(RuntimeError):
_ = pcv.threshold.saturation(rgb_img=rgb_img, threshold=254, channel="red")
def test_plantcv_threshold_triangle():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_triangle")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.threshold.triangle(gray_img=gray_img, max_value=255, object_type="dark", xstep=10)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.threshold.triangle(gray_img=gray_img, max_value=255, object_type="light", xstep=10)
# Test with debug = None
pcv.params.debug = None
binary_img = pcv.threshold.triangle(gray_img=gray_img, max_value=255, object_type="light", xstep=10)
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(binary_img), TEST_GRAY_DIM)):
# Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(binary_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
def test_plantcv_threshold_triangle_incorrect_object_type():
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
with pytest.raises(RuntimeError):
pcv.params.debug = None
_ = pcv.threshold.triangle(gray_img=gray_img, max_value=255, object_type="lite", xstep=10)
def test_plantcv_threshold_texture():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_threshold_texture")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
gray_img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY_SMALL), -1)
binary_img = pcv.threshold.texture(gray_img, ksize=6, threshold=7, offset=3, texture_method='dissimilarity',
borders='nearest', max_value=255)
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(binary_img), TEST_GRAY_DIM)):
# Assert that the image is binary
        if all(i == j for i, j in zip(np.unique(binary_img), [0, 255])):
assert 1
else:
assert 0
else:
assert 0
# ###################################
# Tests for the visualize subpackage
# ###################################
def test_plantcv_visualize_auto_threshold_methods_bad_input():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_auto_threshold_methods")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
with pytest.raises(RuntimeError):
_ = pcv.visualize.auto_threshold_methods(gray_img=img)
def test_plantcv_visualize_auto_threshold_methods():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_auto_threshold_methods")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
pcv.params.debug = "print"
_ = pcv.visualize.auto_threshold_methods(gray_img=img)
pcv.params.debug = "plot"
labeled_imgs = pcv.visualize.auto_threshold_methods(gray_img=img)
assert len(labeled_imgs) == 5 and np.shape(labeled_imgs[0])[0] == np.shape(img)[0]
def test_plantcv_visualize_pseudocolor():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_pseudocolor")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_CONTOURS), encoding="latin1")
obj_contour = contours_npz['arr_0']
filename = os.path.join(cache_dir, 'plantcv_pseudo_image.png')
# Test with debug = "print"
pcv.params.debug = "print"
_ = pcv.visualize.pseudocolor(gray_img=img, mask=None)
_ = pcv.visualize.pseudocolor(gray_img=img, mask=None)
pimg = pcv.visualize.pseudocolor(gray_img=img, mask=mask, min_value=10, max_value=200)
pcv.print_image(pimg, filename)
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.visualize.pseudocolor(gray_img=img, mask=mask, background="image")
_ = pcv.visualize.pseudocolor(gray_img=img, mask=mask, background="image", title="customized title")
_ = pcv.visualize.pseudocolor(gray_img=img, mask=None)
_ = pcv.visualize.pseudocolor(gray_img=img, mask=mask, background="black", obj=obj_contour, axes=False,
colorbar=False)
_ = pcv.visualize.pseudocolor(gray_img=img, mask=mask, background="image", obj=obj_contour, obj_padding=15)
_ = pcv.visualize.pseudocolor(gray_img=img, mask=None, axes=False, colorbar=False)
# Test with debug = None
pcv.params.debug = None
_ = pcv.visualize.pseudocolor(gray_img=img, mask=None)
pseudo_img = pcv.visualize.pseudocolor(gray_img=img, mask=mask, background="white")
# Assert that the output image has the dimensions of the input image
    if all(i == j for i, j in zip(np.shape(pseudo_img), TEST_BINARY_DIM)):
assert 1
else:
assert 0
def test_plantcv_visualize_pseudocolor_bad_input():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_pseudocolor")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
with pytest.raises(RuntimeError):
_ = pcv.visualize.pseudocolor(gray_img=img)
def test_plantcv_visualize_pseudocolor_bad_background():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_pseudocolor_bad_background")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
with pytest.raises(RuntimeError):
_ = pcv.visualize.pseudocolor(gray_img=img, mask=mask, background="pink")
def test_plantcv_visualize_pseudocolor_bad_padding():
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_pseudocolor_bad_background")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
contours_npz = np.load(os.path.join(TEST_DATA, TEST_INPUT_CONTOURS), encoding="latin1")
obj_contour = contours_npz['arr_0']
with pytest.raises(RuntimeError):
_ = pcv.visualize.pseudocolor(gray_img=img, mask=mask, obj=obj_contour, obj_padding="pink")
def test_plantcv_visualize_colorize_masks():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_naive_bayes_classifier")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
mask = pcv.naive_bayes_classifier(rgb_img=img, pdf_file=os.path.join(TEST_DATA, TEST_PDFS))
_ = pcv.visualize.colorize_masks(masks=[mask['plant'], mask['background']],
colors=[(0, 0, 0), (1, 1, 1)])
# Test with debug = "plot"
pcv.params.debug = "plot"
_ = pcv.visualize.colorize_masks(masks=[mask['plant'], mask['background']],
colors=[(0, 0, 0), (1, 1, 1)])
# Test with debug = None
pcv.params.debug = None
colored_img = pcv.visualize.colorize_masks(masks=[mask['plant'], mask['background']],
colors=['red', 'blue'])
# Assert that the output image has the dimensions of the input image
assert not np.average(colored_img) == 0
def test_plantcv_visualize_colorize_masks_bad_input_empty():
with pytest.raises(RuntimeError):
_ = pcv.visualize.colorize_masks(masks=[], colors=[])
def test_plantcv_visualize_colorize_masks_bad_input_mismatch_number():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
mask = pcv.naive_bayes_classifier(rgb_img=img, pdf_file=os.path.join(TEST_DATA, TEST_PDFS))
with pytest.raises(RuntimeError):
_ = pcv.visualize.colorize_masks(masks=[mask['plant'], mask['background']], colors=['red', 'green', 'blue'])
def test_plantcv_visualize_colorize_masks_bad_color_input():
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
# Test with debug = "print"
pcv.params.debug = "print"
mask = pcv.naive_bayes_classifier(rgb_img=img, pdf_file=os.path.join(TEST_DATA, TEST_PDFS))
with pytest.raises(RuntimeError):
_ = pcv.visualize.colorize_masks(masks=[mask['plant'], mask['background']], colors=['red', 1.123])
@pytest.mark.parametrize("bins,lb,ub,title", [[200, 0, 255, "Include Title"], [100, None, None, None]])
def test_plantcv_visualize_histogram(bins, lb, ub, title):
# Test with debug = None
pcv.params.debug = None
# Read test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
mask = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
fig_hist, hist_df = pcv.visualize.histogram(img=img, mask=mask, bins=bins, lower_bound=lb, upper_bound=ub,
title=title, hist_data=True)
assert all([isinstance(fig_hist, ggplot), isinstance(hist_df, pd.core.frame.DataFrame)])
def test_plantcv_visualize_histogram_no_mask():
# Test with debug = None
pcv.params.debug = None
# Read test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
fig_hist = pcv.visualize.histogram(img=img, mask=None)
assert isinstance(fig_hist, ggplot)
def test_plantcv_visualize_histogram_rgb_img():
# Test with debug = None
pcv.params.debug = None
# Test RGB input image
img_rgb = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
fig_hist = pcv.visualize.histogram(img=img_rgb)
assert isinstance(fig_hist, ggplot)
def test_plantcv_visualize_histogram_multispectral_img():
# Test with debug = None
pcv.params.debug = None
# Test multi-spectral image
img_rgb = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
img_multi = np.concatenate((img_rgb, img_rgb), axis=2)
fig_hist = pcv.visualize.histogram(img=img_multi)
assert isinstance(fig_hist, ggplot)
def test_plantcv_visualize_histogram_no_img():
with pytest.raises(RuntimeError):
_ = pcv.visualize.histogram(img=None)
def test_plantcv_visualize_histogram_array():
# Read test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
with pytest.raises(RuntimeError):
_ = pcv.visualize.histogram(img=img[0, :])
def test_plantcv_visualize_clustered_contours():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_plot_hist")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_VISUALIZE_BACKGROUND), -1)
roi_objects = np.load(os.path.join(TEST_DATA, TEST_INPUT_VISUALIZE_CONTOUR), encoding="latin1")
hierarchy = np.load(os.path.join(TEST_DATA, TEST_INPUT_VISUALIZE_HIERARCHY), encoding="latin1")
cluster_i = np.load(os.path.join(TEST_DATA, TEST_INPUT_VISUALIZE_CLUSTERS), encoding="latin1")
objs = [roi_objects[arr_n] for arr_n in roi_objects]
obj_hierarchy = hierarchy['arr_0']
cluster = [cluster_i[arr_n] for arr_n in cluster_i]
# Test in plot mode
pcv.params.debug = "plot"
# Reset the saved color scale (can be saved between tests)
pcv.params.saved_color_scale = None
_ = pcv.visualize.clustered_contours(img=img1, grouped_contour_indices=cluster, roi_objects=objs,
roi_obj_hierarchy=obj_hierarchy, bounding=False)
# Test in print mode
pcv.params.debug = "print"
# Reset the saved color scale (can be saved between tests)
pcv.params.saved_color_scale = None
cluster_img = pcv.visualize.clustered_contours(img=img, grouped_contour_indices=cluster, roi_objects=objs,
roi_obj_hierarchy=obj_hierarchy, nrow=2, ncol=2, bounding=True)
assert np.sum(cluster_img) > np.sum(img)
def test_plantcv_visualize_colorspaces():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_plot_hist")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
pcv.params.debug = "plot"
vis_img_small = pcv.visualize.colorspaces(rgb_img=img, original_img=False)
pcv.params.debug = "print"
vis_img = pcv.visualize.colorspaces(rgb_img=img)
assert np.shape(vis_img)[1] > (np.shape(img)[1]) and np.shape(vis_img_small)[1] > (np.shape(img)[1])
def test_plantcv_visualize_colorspaces_bad_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_plot_hist")
os.mkdir(cache_dir)
pcv.params.debug_outdir = cache_dir
# Read in test data
img = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_GRAY), -1)
with pytest.raises(RuntimeError):
_ = pcv.visualize.colorspaces(rgb_img=img)
def test_plantcv_visualize_overlay_two_imgs():
pcv.params.debug = None
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_visualize_overlay_two_imgs")
os.mkdir(cache_dir)
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
img2 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY))
pcv.params.debug = None
out_img = pcv.visualize.overlay_two_imgs(img1=img1, img2=img2)
sample_pt1 = img1[1445, 1154]
sample_pt2 = img2[1445, 1154]
sample_pt3 = out_img[1445, 1154]
pred_rgb = (sample_pt1 * 0.5) + (sample_pt2 * 0.5)
pred_rgb = pred_rgb.astype(np.uint8)
assert np.array_equal(sample_pt3, pred_rgb)
def test_plantcv_visualize_overlay_two_imgs_grayscale():
pcv.params.debug = None
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_visualize_overlay_two_imgs_grayscale")
os.mkdir(cache_dir)
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
img2 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY), -1)
out_img = pcv.visualize.overlay_two_imgs(img1=img1, img2=img2)
sample_pt1 = np.array([255, 255, 255], dtype=np.uint8)
sample_pt2 = np.array([255, 255, 255], dtype=np.uint8)
sample_pt3 = out_img[1445, 1154]
pred_rgb = (sample_pt1 * 0.5) + (sample_pt2 * 0.5)
pred_rgb = pred_rgb.astype(np.uint8)
assert np.array_equal(sample_pt3, pred_rgb)
def test_plantcv_visualize_overlay_two_imgs_bad_alpha():
pcv.params.debug = None
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_visualize_overlay_two_imgs_bad_alpha")
os.mkdir(cache_dir)
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
img2 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_BINARY))
alpha = -1
with pytest.raises(RuntimeError):
_ = pcv.visualize.overlay_two_imgs(img1=img1, img2=img2, alpha=alpha)
def test_plantcv_visualize_overlay_two_imgs_size_mismatch():
pcv.params.debug = None
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_visualize_overlay_two_imgs_size_mismatch")
os.mkdir(cache_dir)
img1 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_COLOR))
img2 = cv2.imread(os.path.join(TEST_DATA, TEST_INPUT_CROPPED))
with pytest.raises(RuntimeError):
_ = pcv.visualize.overlay_two_imgs(img1=img1, img2=img2)
# ##############################
# Tests for the utils subpackage
# ##############################
def test_plantcv_utils_json2csv():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_utils_json2csv")
os.mkdir(cache_dir)
plantcv.utils.json2csv(json_file=os.path.join(TEST_DATA, "merged_output.json"),
csv_file=os.path.join(cache_dir, "exports"))
assert all([os.path.exists(os.path.join(cache_dir, "exports-single-value-traits.csv")),
os.path.exists(os.path.join(cache_dir, "exports-multi-value-traits.csv"))])
def test_plantcv_utils_json2csv_no_json():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_utils_json2csv_no_json")
os.mkdir(cache_dir)
with pytest.raises(IOError):
plantcv.utils.json2csv(json_file=os.path.join(TEST_DATA, "not_a_file.json"),
csv_file=os.path.join(cache_dir, "exports"))
def test_plantcv_utils_json2csv_bad_json():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_utils_json2csv_bad_json")
os.mkdir(cache_dir)
with pytest.raises(ValueError):
plantcv.utils.json2csv(json_file=os.path.join(TEST_DATA, "incorrect_json_data.txt"),
csv_file=os.path.join(cache_dir, "exports"))
def test_plantcv_utils_sample_images_snapshot():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_utils_sample_images")
os.mkdir(cache_dir)
snapshot_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
img_outdir = os.path.join(cache_dir, "snapshot")
plantcv.utils.sample_images(source_path=snapshot_dir, dest_path=img_outdir, num=3)
assert os.path.exists(os.path.join(cache_dir, "snapshot"))
def test_plantcv_utils_sample_images_flatdir():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_utils_sample_images")
os.mkdir(cache_dir)
flat_dir = os.path.join(TEST_DATA)
img_outdir = os.path.join(cache_dir, "images")
plantcv.utils.sample_images(source_path=flat_dir, dest_path=img_outdir, num=30)
random_images = os.listdir(img_outdir)
assert all([len(random_images) == 30, len(np.unique(random_images)) == 30])
def test_plantcv_utils_sample_images_bad_source():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_utils_sample_images")
os.mkdir(cache_dir)
fake_dir = os.path.join(TEST_DATA, "snapshot")
img_outdir = os.path.join(cache_dir, "images")
with pytest.raises(IOError):
plantcv.utils.sample_images(source_path=fake_dir, dest_path=img_outdir, num=3)
def test_plantcv_utils_sample_images_bad_flat_num():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_utils_sample_images")
os.mkdir(cache_dir)
flat_dir = os.path.join(TEST_DATA)
img_outdir = os.path.join(cache_dir, "images")
with pytest.raises(RuntimeError):
plantcv.utils.sample_images(source_path=flat_dir, dest_path=img_outdir, num=300)
def test_plantcv_utils_sample_images_bad_phenofront_num():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_utils_sample_images")
os.mkdir(cache_dir)
snapshot_dir = os.path.join(PARALLEL_TEST_DATA, TEST_SNAPSHOT_DIR)
img_outdir = os.path.join(cache_dir, "images")
with pytest.raises(RuntimeError):
plantcv.utils.sample_images(source_path=snapshot_dir, dest_path=img_outdir, num=300)
def test_plantcv_utils_tabulate_bayes_classes():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_utils_tabulate_bayes_classes")
os.mkdir(cache_dir)
outfile = os.path.join(cache_dir, "rgb_table.txt")
plantcv.utils.tabulate_bayes_classes(input_file=os.path.join(TEST_DATA, PIXEL_VALUES), output_file=outfile)
table = pd.read_csv(outfile, sep="\t")
assert table.shape == (228, 2)
def test_plantcv_utils_tabulate_bayes_classes_missing_input():
# Test cache directory
cache_dir = os.path.join(TEST_TMPDIR, "test_plantcv_utils_tabulate_bayes_classes_missing_input")
os.mkdir(cache_dir)
outfile = os.path.join(cache_dir, "rgb_table.txt")
with pytest.raises(IOError):
plantcv.utils.tabulate_bayes_classes(input_file=os.path.join(PIXEL_VALUES), output_file=outfile)
# ##############################
# Clean up test files
# ##############################
def teardown_function():
shutil.rmtree(TEST_TMPDIR)
``` |
{
"source": "josephorourke/topp",
"score": 2
} |
#### File: josephorourke/topp/TOPP.py
```python
from __future__ import nested_scopes
import fileinput, glob, os, re, shutil, string, sys, time, warnings
import distutils.spawn
from UserDict import UserDict
from UserList import UserList
import remove_url_spaces
home_url = "http://cs.smith.edu/~orourke/TOPP/"
public_html = "/Users/orourke/public_html/TOPP/"
front_page_links_to_problems = 1
warning_file = open ("warnings", "w+")
#latex2html = "/Users/edemaine/Packages/bin/latex2html"
#if not os.path.exists (latex2html): latex2html = "latex2html"
pandoc = 'pandoc'
#latex = 'latex'
latex = 'pdflatex'
bibtex = 'bibtex'
## Netlify
if os.environ.get('DEPLOY_URL'):
latex = './netlify-latex.sh'
bibtex = './netlify-latex.sh bibtex'
pandoc = './netlify-pandoc.sh'
##############################################################################
def main ():
problems = read_problems (glob.glob ("Problems/P.[0-9][0-9][0-9][0-9][0-9][0-9]"))
process_categories (problems)
if not os.path.isdir('tex'): os.mkdir('tex')
make_problems_latex (problems, "tex/problems.tex")
make_numerical_problem_list (problems, "tex/problems_by_number.tex")
make_categorized_problem_list (problems, "tex/categorized_problem_list.tex")
make_category_list (problems, "tex/category_list.tex")
os.system ("%s master" % latex)
find_cites (problems, "master.aux")
run ("%s master" % bibtex,
"Warning", "Error", "couldn't open", "Repeated", "^ : ", "^I", "You're")
remove_url_spaces.replace_file ("master.bbl")
bibitems = grab_bibitems ("master.bbl")
make_problems_latex (problems, "tex/problems.tex", "tex", bibitems)
os.system ("%s master" % latex)
run ("%s master" % latex, "(Citation|Reference).*undefined")
#os.system ("dvips -o master.ps master.dvi")
#os.system ("ps2pdf -dMaxSubsetPct=100 -dCompatibilityLevel=1.2 -dSubsetFonts=true -dEmbedAllFonts=true master.ps")
#run ("cp master.tex Welcome.tex")
#run ("cp master.aux Welcome.aux")
header = '\\section*{\\href{./}{The Open Problems Project}}\n\n'
#footer = "The Open Problems Project - %s" % time.strftime ("%B %d, %Y")
#os.system ("%s -noreuse -split 4 -toc_depth 3 -link 0 -image_type gif -no_math -html_version 3.2,math,latin1,unicode -local_icons -accent_images=normalsize -nofootnode -address '%s' -init_file dot_latex2html-init -custom_titles Welcome.tex" % (latex2html, footer))
## -no_navigation
if not os.path.isdir('html'): os.mkdir('html')
for probnum in problems.problem_numbers ():
problem = problems[probnum]
prefix = header
if probnum+1 in problems:
prefix += '\\textbf{Next:} \\href{P%d.html}{%s}\n\n' % (probnum+1, problems[probnum+1].text_with_number_focus())
if probnum-1 in problems:
prefix += '\\textbf{Previous:} \\href{P%d.html}{%s}\n\n' % (probnum-1, problems[probnum-1].text_with_number_focus())
run_pandoc('tex/P%s.tex' % probnum, 'html/P%s.html' % probnum,
'TOPP: ' + problem.text_with_number_focus(), prefix)
run_pandoc('tex/problems_by_number.tex', 'html/problems_by_number.html',
'TOPP: Numerical List of All Problems', header)
indexbib = open('tex/index_bib.tex', 'w')
indexbib.write(bibitems['_begin'])
indexbib.write(bibitems['mo-cgc42-01'])
indexbib.write(bibitems['_end'])
indexbib.close()
run ("cat author.tex intro.tex tex/categorized_problem_list.tex tex/index_bib.tex acknowledgments.tex >tex/index.tex")
run_pandoc('tex/index.tex', 'html/index.html',
'TOPP: The Open Problems Project', '\\section*{The Open Problems Project}')
# + '''
#<h2>edited by <a href="http://erikdemaine.org/"><NAME></a>, <a href="http://www.ams.sunysb.edu/~jsbm/">Joseph~S.~B.~Mitchell</a>, <a href="
#''')
#run ("cp master.ps master.pdf Welcome/")
run ("cp master.pdf html/")
run ("cp Problems/problem.template html/")
run ("ln -f -s index.html html/Welcome.html") # backward compatibility
run ("chmod -R a+rX html")
#run ("chgrp -R topp Welcome && chmod -R g+rwX Welcome")
if len (sys.argv) > 1:
print "Copying files into public_html..."
#run ("cp -d -p Welcome/* %s" % public_html)
copy_files ("html/*", public_html)
if warning_file.tell () > 0:
print "*** Warnings from TOPP.py are in the file 'warnings'. Please inspect. ***"
##############################################################################
class Problem (UserDict):
def __init__ (self):
UserDict.__init__ (self)
self.fields = [] ## Ordered list of fields
def cleanup_fields (self):
## Remove extreme blank lines.
for key in self.keys ():
if type (self[key]) == type ([]):
while self[key] and self[key][0] == "":
del self[key][0]
while self[key] and self[key][-1] == "":
del self[key][-1]
def text_with_number_focus (self):
return "Problem %d: %s" % (self['Number'], self['Problem'])
def text_without_number_focus (self):
return "%s (Problem~%d)" % (self['Problem'], self['Number'])
class Problems (UserDict):
def category_list (self):
## Sort keys (categories) without regard to case.
catlist = self.categories.keys ()
catlist.sort (lambda x, y: cmp (x.lower (), y.lower ()))
return catlist
def problem_numbers (self):
"""Returns a sorted list of category numbers."""
probnums = self.keys ()
probnums.sort ()
return probnums
class Categories (UserList):
def __init__ (self, s):
return UserList.__init__ (self, map (string.strip, string.join (s, " ").replace (",", ";").split (";")))
def __str__ (self):
return string.join (self, "; ")
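## Illustrative behaviour of Categories (the input below is invented for this sketch,
## not taken from a real problem file): Categories(['folding; linkages, rigidity'])
## joins the lines, normalizes ',' to ';', splits and strips, giving
## ['folding', 'linkages', 'rigidity']; str() then renders 'folding; linkages; rigidity'.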
##############################################################################
class TOPPWarning (UserWarning): pass
## Copy all warnings to warning_file.
def showwarning (*args, **dict):
formatted = apply (warnings.formatwarning, args, dict)
sys.stderr.write (formatted)
warning_file.write (formatted)
warnings.showwarning = showwarning
def run (command, *keep):
"""Run the given command with os.system but with stderr copied to
`warning_file`.
If extra options are specified beyond the command, filter to include only
lines with the specified regular expressions.
"""
stdin, stdouterr = os.popen4 (command, bufsize = 1)
stdin.close ()
first = 1
if not keep: keep = [""]
while 1:
line = stdouterr.readline ()
if not line: break
sys.stdout.write (line)
for keeper in keep:
if re.search (keeper, line):
if first:
warning_file.write ("--- Messages from running: %s\n" % command)
first = 0
warning_file.write (line)
break
stdouterr.close ()
def copy_files (src_spec, dest_dir):
"""Copy multiple files specified by glob pattern to target directory.
Attempts to preserve mode and modification times, and produces one unique
warning if this happens.
Doesn't preserve links of any kind, but that seems safest in the context
of LaTeX2HTML's hard links.
"""
if not os.path.isdir (dest_dir):
warnings.warn ("%r is not a directory; failing to copy %r there" %
(dest_dir, src_spec), TOPPWarning)
return
for src in glob.glob (src_spec):
dest = os.path.join (dest_dir, os.path.basename (src))
try:
shutil.copyfile (src, dest)
## like shutil.copy but without preserving mode or anything
except (IOError, OSError), e:
warnings.warn ("Failed to copy %r to %r: %s" % (src, dest, e),
TOPPWarning)
else:
try:
copygroup (src, dest)
shutil.copymode (src, dest)
shutil.copystat (src, dest)
except (IOError, OSError), e:
warnings.warn ("Unable to preserve some modes and/or modification "
"times while copying %r to %r" % (src_spec, dest_dir),
TOPPWarning)
def copygroup (src, dest):
src_stat = os.stat (src)
dest_stat = os.stat (dest)
os.chown (dest, dest_stat.st_uid, src_stat.st_gid)
##############################################################################
problem_sep_re = r"^-+\s*$"
problem_sep_rec = re.compile (problem_sep_re)
field_re = r"^\* ([^:]*):\s*(.*)$"
field_rec = re.compile (field_re)
none_re = r"\<none\>"
none_rec = re.compile (none_re)
def read_problems (files):
"""Read one or more problems in TOPP problem format.
Reads file(s) in problem format and outputs a dictionary of Problem's,
indexed by problem number and secondarily indexed by field.
A problem-file consists of one or more problem records, separated by
a line of all dashes. Each problem consists of a number of fields.
Files no longer have to end with a line of all dashes to separate from the
next file.
Field begin:
^* field-name: field-value
For certain fields (Number, Problem), the value is expected to be on that
line. For others, it may continue for an arbitrary number of lines.
To suppress output for a field, use field-value of <none>.
For Categories field, categories are separated by ;'s (or ,'s).
fname : field name
fvalue : value of field on the field-name line
pname : Problem name
pnumber : Problem number
Note each problem is given a LaTeX label "Problem.N" where N is the
problem number. So there can be inter-problem references using \ref{}.
"""
def end_problem ():
problem.cleanup_fields ()
## Put problem in numeric index, assuming it has a number.
if problem.has_key ('Number'):
problems[problem['Number']] = problem
def warn_where ():
return "%s:%d" % (input.filename (), input.filelineno ())
problems = Problems ()
for file in files:
problem = Problem ()
fname = None
input = fileinput.input (file)
for line in input:
line = line.rstrip () ## Trim off trailing whitespace, including newline
if problem_sep_rec.match (line):
end_problem ()
problem = Problem ()
fname = None
elif field_rec.match (line):
## Beginning of new field
match = field_rec.match (line)
fname = match.group (1)
fvalue = match.group (2) ## Note: leading spaces removed by pattern match.
if none_rec.match (fvalue):
pass ## Ignore empty field.
elif fname == 'Number': ## Numeric single-line field
problem[fname] = int (fvalue)
elif fname == 'Problem': ## Single-line field
problem[fname] = fvalue
else:
if problem.has_key (fname):
warnings.warn ("%s: Field %s occurs a second time in the same problem; second occurrence overwriting first" % (warn_where (), fname), TOPPWarning)
else:
problem.fields.append (fname)
problem[fname] = [fvalue]
else:
if problem.has_key (fname):
if type (problem[fname]) == type ([]):
problem[fname].append (line)
elif line:
## Nonblank lines after one-liner are ignored.
warnings.warn ("%s: Field %s extends to multiple lines; only first line kept" % (warn_where (), fname), TOPPWarning)
else:
warnings.warn ("%s: Stray line outside of any field" % warn_where (), TOPPWarning)
end_problem ()
return problems
##############################################################################
def process_categories (problems):
"""Process 'Categories' field of all problems.
Splits 'Categories' field according to separations by semicolons."""
problems.categories = {}
for problem in problems.values ():
if not problem.has_key ('Categories'):
##continue
warnings.warn ("Problem %d has no categories specified; listing under Miscellaneous" % problem['Number'], TOPPWarning)
problem['Categories'] = 'Miscellaneous'
problem['Categories'] = Categories (problem['Categories'])
for category in problem['Categories']:
if problems.categories.has_key (category):
problems.categories[category].append (problem)
else:
problems.categories[category] = [problem]
##############################################################################
auto_disclaimer = "% DO NOT EDIT THIS FILE. Auto-generated by TOPP.py.\n"
def make_problems_latex (problems, outname, outdir = None, bibitems = None):
"""Converts set of problems into a LaTeX file."""
outfile = open (outname, "w")
outfile.write (auto_disclaimer)
if bibitems: ## Disable final \bibliography
outfile.write ("%begin{latexonly}\n")
outfile.write ("\\onebigbibfalse\n")
outfile.write ("%end{latexonly}\n")
outfile.write (bibitems['_commands'])
for probnum in problems.problem_numbers ():
if outdir: probfile = open (os.path.join(outdir, 'P%s.tex' % probnum), 'w')
def write(s):
outfile.write(s)
if outdir: probfile.write(s)
problem = problems[probnum]
text = problem.text_with_number_focus ()
#\\section*{\\htmladdnormallink{The Open Problem Project:}{%s}\\\\
# \\label{Problem.%d}%s}
write ("""
\\problem{%d}
\\section*{\\label{Problem.%d}%s}
\\begin{description}
""" % (probnum, probnum, text))
for field in problem.fields:
if type (problem[field]) == type ([]):
write ("\\item[%s] %s" %
(field, string.join (problem[field] + [""], "\n")))
else:
write ("\\item[%s] %s" % (field, str (problem[field])))
write ("""
\\end{description}
""")
if bibitems and problem.cites:
write (bibitems['_begin'])
for tag in problem.cites:
if bibitems.has_key (tag):
write (bibitems[tag])
else:
warnings.warn ("Problem %d cites unseen reference %s" % (probnum, tag), TOPPWarning)
write (bibitems['_end'])
if outdir: probfile.close()
outfile.close ()
##############################################################################
def make_numerical_problem_list (problems, outname):
"""Creates LaTeX file with list of all problems by number."""
outfile = open (outname, "w")
outfile.write (auto_disclaimer + """
\\section{\\label{problems by number}Numerical List of All Problems}
%%\\refstepcounter{section}
The following lists all problems sorted by number.
These numbers can be used for citations and correspond to the order in which
the problems were entered.
\\begin{itemize}
""")
for probnum in problems.problem_numbers ():
problem = problems[probnum]
text = "Problem %d: %s" % (problem['Number'], problem['Problem'])
outfile.write (" \\item \\htmlref{%s}{Problem.%d}\n" %
(text, problem['Number']))
outfile.write ("""
\\end{itemize}
""")
outfile.close ()
##############################################################################
def make_categorized_problem_list (problems, outname):
"""Creates LaTeX file with list of categories and corresponding problems."""
outfile = open (outname, "w")
if front_page_links_to_problems:
section = "\\subsection"
else:
section = "\\section"
outfile.write (auto_disclaimer + """
%s{\\label{categorized problem list}Categorized List of All Problems}
Below, each category lists the problems that are classified under that category.
Note that each problem may be classified under several categories.
\\begin{description}
""" % section)
catnum = 0
for category in problems.category_list ():
catnum += 1
outfile.write ("""
\\item[\label{Category.%d}%s:]
%%begin{latexonly}
~
%%end{latexonly}
""" % (catnum, category))
outfile.write ("\\begin{itemize}\n")
problems_in_cat = problems.categories[category]
problems_in_cat.sort (lambda x, y: cmp (x['Problem'], y['Problem']))
for problem in problems_in_cat:
text = problem.text_without_number_focus ()
outfile.write (" \\item \\htmlref{%s}{Problem.%d}\n" %
(text, problem['Number']))
#outfile.write (" \\item %s\n" % (problem.text_without_number_focus ()))
outfile.write ("\\end{itemize}\n")
outfile.write ("""
\\end{description}
""")
outfile.close ()
##############################################################################
def make_category_list (problems, outname):
"""Creates LaTeX file with list of categories."""
outfile = open (outname, "w")
if front_page_links_to_problems: outfile.close (); return
outfile.write (auto_disclaimer + """
\\html{%
\\subsection{\\label{category list}Categories}
\\begin{htmlonly}
To begin navigating through the open problems,
select a category of interest. Alternatively, you may view
\\end{htmlonly}
%begin{latexonly}
The following lists the categories covered by the open problems.
See also
%end{latexonly}
\\hyperref{a list of all problems sorted by category}
{Section }
{ for a list of all problems sorted by category}
{categorized problem list}
or
\\hyperref{a list of all problems sorted numerically}
{Section }
{ for a list of all problems sorted numerically}
{problems by number}.
\\begin{itemize}
""")
catnum = 0
for category in problems.category_list ():
catnum += 1
outfile.write (" \\item \htmlref{%s}{Category.%d}\n" % (category, catnum))
outfile.write ("""
\\end{itemize}
}% \\html
""")
outfile.close ()
##############################################################################
beginbiblio_re = r"^\\begin\{thebibliography\}"
beginbiblio_rec = re.compile (beginbiblio_re)
newcommand_re = r"^\\newcommand"
newcommand_rec = re.compile (newcommand_re)
endbiblio_re = r"^\\end\{thebibliography\}"
endbiblio_rec = re.compile (endbiblio_re)
bibitem_re = r"^\\bibitem.*\{([^\}]*)\}"
bibitem_rec = re.compile (bibitem_re)
def grab_bibitems (bblfile):
"""Finds \\bibitem statements in .bbl file.
Returns a dictionary of strings indexed by the \\cite key.
Each string is all the lines of the bibitem.
There is also a special key '_begin' which contains the
\\begin{thebibliography} line. Ditto for '_end'.
An additional entry '_commands' should be included just once."""
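    ## Rough sketch of the returned mapping (keys and bibliography text are
    ## illustrative, not real content):
    ##   '_begin'     -> the \begin{thebibliography}{...} line
    ##   '_commands'  -> any \newcommand lines (e.g. for \etalchar)
    ##   'some-key'   -> all lines of '\bibitem...{some-key}' up to the next item
    ##   '_end'       -> the \end{thebibliography} line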
bibitems = {'_begin': "", '_commands': "", '_end': ""}
key = None
for line in fileinput.input (bblfile):
## Store lines for later use.
if beginbiblio_rec.match (line):
bibitems['_begin'] += line
elif newcommand_rec.match (line):
bibitems['_commands'] += line
elif endbiblio_rec.match (line):
bibitems['_end'] += line
break
elif bibitem_rec.match (line):
match = bibitem_rec.match (line)
key = match.group (1)
bibitems[key] = line
elif key is not None:
bibitems[key] += line
return bibitems
##############################################################################
problem_marker_re = r"^% BeginProblem\{(\d+)\}"
problem_marker_rec = re.compile (problem_marker_re)
citation_re = r"^\\citation\{([^\}]*)\}"
citation_rec = re.compile (citation_re)
def find_cites (problems, auxfile):
"""Finds which problems cite which bibliographic references.
Finds all \\citation commands in the given .aux file which has already been
augmented by '% BeginProblem{123}' comments. Sets the 'cites' entry of
each problem to the list of references cited by that problem."""
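    ## Illustrative fragment of the augmented .aux input (the problem number and
    ## citation keys are made up):
    ##   % BeginProblem{12}
    ##   \citation{ab-ex-99}
    ##   \citation{cd-ex-00}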
problem = None
for line in fileinput.input (auxfile):
if problem_marker_rec.match (line):
match = problem_marker_rec.match (line)
problem = problems[int (match.group (1))]
problem.cites = []
elif citation_rec.match (line):
match = citation_rec.match (line)
if problem is not None:
problem.cites.append (match.group (1))
else:
warnings.warn ("Citation %s used before a problem began; potentially dangerous" % match.group (1), TOPPWarning)
for problem in problems.values ():
## Sort and remove duplicate citations
problem.cites.sort ()
for i in range (len (problem.cites) - 1, 0, -1):
if problem.cites[i] == problem.cites[i-1]:
del problem.cites[i]
##############################################################################
tempfile = 'temp.tex'
def run_pandoc(infile, outfile, title, prefix = ''):
print 'pandoc %s -> %s' % (infile, outfile)
tex = prefix + open(infile, 'r').read()
tex = re.sub(r'%begin{latexonly}(?:.|\n)*?%end{latexonly}', '', tex)
tex = re.sub(r'\\label\s*{((Problem|Category)\.(\d+)|problems by number|categorized problem list)}', '', tex)
tex = re.sub(r'\\author\s*{((?:[^{}]|{[^{}]*})*)}', r'\\subsubsection*{\1}', tex)
tex = re.sub(r'\\and\b', ', ', tex)
tex = re.sub(r'(Problem[\s~]*|)\\ref\s*{Problem\.(\d+)}', r'\\href{P\2.html}{\1\2}', tex)
tex = re.sub(r'\\htmlref\s*{([^{}]*)}\s*{Problem\.(\d+)}', r'\\href{P\2.html}{\1}', tex)
tex = re.sub(r'\\htmlref\s*{([^{}]*)}\s*{([^{}]*)}', (lambda match: '\\href{%s}{%s}' % (match.group(2).replace(' ','_').replace('Problem.','P')+'.html', match.group(1))), tex)
tex = re.sub(r'\\htmladdnormallink\s*{([^{}]*)}\s*{([^{}]*)}', r'\\href{\2}{\1}', tex)
tex = re.sub(r'\\hyperref\s*{([^{}]*)}\s*{([^{}]*)}\s*{([^{}]*)}\s*{([^{}]*)}', (lambda match: '\\href{%s}{%s}' % (match.group(4).replace(' ','_').replace('Problem.','P')+'.html', match.group(1))), tex)
tex = re.sub(r'\\begin\s*{thebibliography}\s*{[^{}]*}', r'''
\\section*{Bibliography}
\\begin{description}
''', tex)
tex = re.sub(r'\\end\s*{thebibliography}', r'\\end{description}', tex)
tex = re.sub(r'\\newblock\b', r'', tex)
tex = re.sub(r'{\\etalchar{\+}}', r'+', tex)
tex = re.sub(r"{\\'e}", r'é', tex)
tex = re.sub(r"{\\'o}", r'ó', tex)
tex = re.sub(r'{\\"o}', r'ö', tex)
tex = re.sub(r'{\\"u}', r'ü', tex)
tex = re.sub(r'{Her}', r'Her', tex)
cites = {}
def bibitem(match):
cites[match.group(2)] = match.group(1)
return '\item[\label{%s}]' % match.group(1)
tex = re.sub(r'\\bibitem\s*\[([^][]*)\]\s*{([^{}]*)}', bibitem, tex)
def cite(match):
out = []
for part in match.group(2).split(','):
part = part.strip()
if part in cites: part = cites[part]
out.append('\\ref{%s}' % part)
if match.group(1): out.append(match.group(1))
return ', '.join(out)
tex = re.sub(r'\\cite\s*(?:\[([^][]*)\]\s*)?{([^{}]*)}', cite, tex)
def includegraphics(match):
#run ("convert figs/%s.pdf html/%s.png" % (match.group(2), match.group(2)))
run ("cp figs/%s.png html/%s.png" % (match.group(2), match.group(2)))
return match.group(0)[:-1] + '.png}'
tex = re.sub(r'\\includegraphics\s*(?:\[([^][]*)\]\s*)?{([^{}]*)}', includegraphics, tex)
#print tex
temp = open(tempfile, 'w')
temp.write(tex)
temp.close()
os.system ("%s -d pandoc.defaults -i %s -o %s -M title=%s" % (pandoc, tempfile, outfile, repr(title)))
os.remove(tempfile)
##############################################################################
if __name__ == '__main__':
main ()
``` |
{
"source": "josephp27/env2yml",
"score": 3
} |
#### File: env2yml/E2Yaml/utilities.py
```python
import subprocess
def ignored_term_in_line(line, ignored_terms):
for term in ignored_terms:
if term in line:
return True
return False
def process_key(key, preserved_words_dict):
key = key.strip().lower()
key = replace_separator_in_preserved_words(key, preserved_words_dict)
tree = key.split('_')
converted_cases = convert_casing(tree, preserved_words_dict)
return convert_separator_back(converted_cases)
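# Illustrative call, assuming the preserved-words dict maps lowercased words to the
# casing that should be kept (the dict contents here are hypothetical):
#   process_key('SPRING_DATA_MONGODB_HOST', {'mongodb': 'mongoDb'})
#   -> ['spring', 'data', 'mongoDb', 'host']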
def convert_casing(targets, source):
for i, word in enumerate(targets):
targets[i] = source.get(word, word)
return targets
def replace_separator_in_preserved_words(target, preserved_words):
for key, val in preserved_words.items():
modified_val = val.replace('_', '|')
target = target.replace(val, modified_val)
return target
def convert_separator_back(targets):
for i, word in enumerate(targets):
targets[i] = word.replace('|', '_')
return targets
def convert_key_value_pairs_to_dictionary(keys, value, dictionary):
level = dictionary
for i in range(len(keys) - 1):
node = keys[i]
if node not in level:
level[node] = {}
level = level[node]
level[keys[-1]] = value
return dictionary
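# Illustrative call (keys and value are hypothetical):
#   convert_key_value_pairs_to_dictionary(['spring', 'data', 'host'], 'localhost', {})
#   -> {'spring': {'data': {'host': 'localhost'}}}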
def write_to_clipboard(output):
process = subprocess.Popen(
'pbcopy', env={'LANG': 'en_US.UTF-8'}, stdin=subprocess.PIPE)
process.communicate(output.encode('utf-8'))
def read_from_clipboard():
return subprocess.check_output(
'pbpaste', env={'LANG': 'en_US.UTF-8'}).decode('utf-8')
``` |
{
"source": "josephp27/Zeno",
"score": 3
} |
#### File: Zeno/examples/impl.py
```python
import yaml
from ZenoMapper.Configuration import ConfigParser, Configuration
from ZenoMapper.Types import String, Boolean, Integer, List
from ZenoMapper.zeno import Zeno
class MyConfig(ConfigParser):
"""
loading your own config is done via subclassing the ConfigParser class and implementing the
get_config function.
"""
cache = None
@staticmethod
def get_config():
# each time an object is instantiated, this is called, so let's cache the results to increase performance
if not MyConfig.cache:
with open("data.yml", 'r') as stream:
MyConfig.cache = yaml.safe_load(stream)
return MyConfig.cache
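# For orientation, a data.yml consistent with the classes and printed values below
# could look roughly like this (a sketch, not necessarily the actual example file):
#
#   Spring:
#     Data:
#       MongoDb:
#         database: TESTDB
#         encryption: yes
#         ...
#       myList: [first, second, third]
#       second: 1
#   MyServer:
#     host: my.server.com
#     port: ...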
class Spring(Configuration):
"""
    Loads values from data.yml; nested sections are accessed via nested classes.
"""
class Data:
class MongoDb:
database = String()
encryption = Boolean() # conversion automatically happens when specifying the type
encryptionKey = String()
password = String()
replicaSet = String()
second = Integer()
myList = List()
class MyServer(Configuration):
host = String()
port = Integer()
class SuperNested(Configuration):
"""Specifying section"""
__section__ = 'Spring.Data.MongoDb'
database = String()
encryption = Boolean()
encryptionKey = String()
password = String()
replicaSet = String()
class Nested:
key = Integer()
print(Spring().Data.myList) # ['first', 'second', 'third']
print(Spring().Data.MongoDb.encryption is True) # True
print(MyServer().host) # my.server.com
print(SuperNested().database) # TESTDB
print(SuperNested().Nested.key)  # integer value, e.g. 5243
# this method is used if specifying a class is not ideal for the user
zeno = Zeno()
print(zeno.Spring.Data.MongoDb.database) # TESTDB
# if the constructor is specified, it denotes how to search within the yml file starting
# in the section MongoDb within the Data section, within the Spring section
# in this case it will be all these member variables: database encryption encryptionKey password replicaSet
zeno_2 = Zeno('Spring.Data.MongoDb')
print(zeno_2.database) # TESTDB
```
#### File: Zeno/test/test_immutability.py
```python
from unittest import TestCase
from ZenoMapper.zeno import Zeno
from test import SuperNested
class TestImmutable(TestCase):
def test_cannot_set_item(self):
with self.assertRaises(AttributeError):
SuperNested()['database'] = 1
def test_cannot_set_attriute(self):
with self.assertRaises(AttributeError):
SuperNested().database = 1
def test_cannot_set_item_zeno(self):
with self.assertRaises(AttributeError):
Zeno()['Spring'] = 1
def test_cannot_set_attriute_zeno(self):
with self.assertRaises(AttributeError):
Zeno().Spring = 1
```
#### File: Zeno/test/test_types.py
```python
from unittest import TestCase
from ZenoMapper import Integer, List, Boolean, String
class TestInteger(TestCase):
def test_convert_converts_integer(self):
actual = Integer().convert(183)
self.assertEqual(actual, 183)
def test_convert_converts_string(self):
actual = Integer().convert("183")
self.assertEqual(actual, 183)
class TestList(TestCase):
def test_convert_converts_list(self):
actual = List().convert([])
self.assertEqual(actual, [])
def test_convert_converts_empty_string(self):
actual = List().convert('')
self.assertEqual(actual, [])
def test_convert_converts_empty_list(self):
actual = List().convert('[]')
self.assertEqual(actual, [])
def test_convert_converts_list_with_braces(self):
actual = List().convert('[first, second, third]')
self.assertEqual(actual, ['first', 'second', 'third'])
def test_convert_converts_list_without_braces(self):
actual = List().convert('first, second, third')
self.assertEqual(actual, ['first', 'second', 'third'])
def test_convert_throws_exception_when_not_string_or_list(self):
with self.assertRaises(Exception) as context:
List().convert(543234)
self.assertTrue('543234 is not a string or list' in str(context.exception))
class TestBoolean(TestCase):
def test_convert_converts_boolean_True(self):
actual = Boolean().convert(True)
self.assertEqual(actual, True)
def test_convert_converts_boolean_False(self):
actual = Boolean().convert(False)
self.assertEqual(actual, False)
def test_convert_converts_string_yes(self):
actual = Boolean().convert('yes')
self.assertEqual(actual, True)
def test_convert_converts_string_yes_different_case(self):
actual = Boolean().convert('YES')
self.assertEqual(actual, True)
def test_convert_converts_string_true(self):
actual = Boolean().convert('true')
self.assertEqual(actual, True)
def test_convert_converts_string_t(self):
actual = Boolean().convert('t')
self.assertEqual(actual, True)
def test_convert_converts_string_one(self):
actual = Boolean().convert('1')
self.assertEqual(actual, True)
def test_convert_converts_string_false(self):
actual = Boolean().convert('false')
self.assertEqual(actual, False)
def test_convert_converts_string_not_yes_true_t_1(self):
actual = Boolean().convert('fjkdsla;')
self.assertEqual(actual, False)
def test_convert_throws_exception_when_not_string(self):
with self.assertRaises(Exception) as context:
Boolean().convert(543234)
self.assertTrue('Invalid literal for boolean. Not a string: 543234' in str(context.exception))
class TestString(TestCase):
def test_converts_string(self):
actual = String().convert(183)
self.assertEqual(actual, "183")
def test_converts_list(self):
actual = String().convert([])
self.assertEqual(actual, "[]")
```
#### File: Zeno/test/test_zeno.py
```python
from unittest import TestCase
from ZenoMapper.zeno import Zeno
from test import parsed_yml
class TestZeno(TestCase):
def test_zeno_is_equal_dictionary_when_nothing_set_in_constructor(self):
self.assertEqual(Zeno(), parsed_yml)
def test_zeno_is_equal_dictionary(self):
self.assertEqual(Zeno('Spring'), parsed_yml['Spring'])
def test_zeno_is_equal_dictionary_referencing_first_nested(self):
self.assertEqual(Zeno('Spring').Data, parsed_yml['Spring']['Data'])
def test_zeno_is_equal_dictionary_referencing_second_nested(self):
self.assertEqual(Zeno('Spring.Data.MongoDb'), parsed_yml['Spring']['Data']['MongoDb'])
def test_zeno_is_equal_dictionary_referencing_third_nested(self):
self.assertEqual(Zeno('Spring.Data.MongoDb').Nested, parsed_yml['Spring']['Data']['MongoDb']['Nested'])
def test_zeno_nested_loads_list(self):
self.assertEqual(Zeno('Spring').Data.myList, ['first', 'second', 'third'])
def test_zeno_nested_loads_second_first_nested(self):
self.assertEqual(Zeno('Spring').Data.second, 1)
def test_zeno_nested_loads_nested_dot_notation_db(self):
self.assertEqual(Zeno('Spring.Data.MongoDb').database, 'TESTDB')
def test_zeno_nested_loads_nested_dot_notation_encryption(self):
self.assertEqual(Zeno('Spring.Data.MongoDb').encryption, True)
def test_zeno_nested_loads_nested_dot_notation_enckey(self):
self.assertEqual(Zeno('Spring.Data.MongoDb').encryptionKey, 'FakePassWord!')
def test_zeno_nested_loads_nested_dot_notation_pass(self):
self.assertEqual(Zeno('Spring.Data.MongoDb').password, '!<PASSWORD>')
def test_zeno_nested_loads_nested_dot_notation_replica(self):
self.assertEqual(Zeno('Spring.Data.MongoDb').replicaSet, 'FAKE-DB-531')
def test_zeno_nested_loads_nested_dot_notation_nested_key(self):
self.assertEqual(Zeno('Spring.Data.MongoDb').Nested.key, 5243)
``` |
{
"source": "JosephP91/obstacle-avoidance",
"score": 3
} |
#### File: JosephP91/obstacle-avoidance/sift.py
```python
import cv2
from time import time
from base import FeatureExtractorThread
from utils import save_image, debug
class SIFTThread(FeatureExtractorThread):
def __init__(self, image_path, name, config):
super(SIFTThread, self).__init__(image_path, name)
self.results = None
self.config = config
def run(self):
sift = cv2.SIFT(self.config.get('points'), self.config.get('levels'))
start_time = time()
keypoints, descriptors = sift.detectAndCompute(self.image, None)
debug("SIFT time: {} seconds.".format(time() - start_time))
self.results = {'img': self.image, 'ext': self.extension, 'kp': keypoints, 'desc': descriptors}
image = cv2.drawKeypoints(self.image, keypoints)
save_image(image, self.name, self.extension)
def join(self, timeout=None):
"""
Override del metodo join, che effettua dapprima il joining del thread e poi ritorna i risultati
della computazione precedente.
:param timeout: eventuale timeout di connessione.
:return: i risultati della computazione.
"""
super(SIFTThread, self).join(timeout)
return self.results
``` |
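A minimal usage sketch for the thread above. The frame path, thread name, and config keys are assumptions, and it presumes `FeatureExtractorThread` subclasses `threading.Thread` so that `start()` is available:
```python
# Hypothetical usage; the path, name and config keys are illustrative assumptions.
thread = SIFTThread('frames/frame_000.jpg', 'frame_000', {'points': 500, 'levels': 3})
thread.start()
results = thread.join()  # the overridden join() returns {'img', 'ext', 'kp', 'desc'}
```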
{
"source": "JosephPai/FashionAI-Attributes",
"score": 3
} |
#### File: JosephPai/FashionAI-Attributes/single_task_predict.py
```python
import gc
from tqdm import tqdm
from keras.layers import *
from keras.models import *
from keras.applications import *
from keras.applications.densenet import preprocess_input
from .dataset import *
def getX(n, x_path):
X = np.zeros((n, image_size, image_size, 3), dtype=np.uint8)
for i in tqdm(range(n)):
X[i] = padding(x_path[i])
return X
def predict():
x_path, y = create_dataset(TEST_PATH % task_name)
num_classes = len(y[0])
n = len(x_path)
X = getX(n, x_path)
cnn_model = DenseNet121(include_top=False, input_shape=(image_size, image_size, 3), weights='imagenet')
inputs = Input((image_size, image_size, 3))
x = inputs
x = Lambda(preprocess_input, name='preprocessing')(x)
x = cnn_model(x)
x = GlobalAveragePooling2D()(x)
x = Dropout(0.5)(x)
x = Dense(num_classes, activation='softmax', name='softmax')(x)
model = Model(inputs, x)
model.load_weights(model_name)
test_np = model.predict(X, batch_size=256)
np.savetxt(SAVE_LABEL_PATH, test_np)
del model
del X
gc.collect()
if __name__=="__main__":
predict()
``` |
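The script stores one softmax vector per image at `SAVE_LABEL_PATH`; a hedged sketch of recovering hard class predictions from that file (the file name below is an assumption):
```python
import numpy as np

# The path is an assumption standing in for SAVE_LABEL_PATH in the script above.
probs = np.loadtxt("label_probs.txt")   # shape (n, num_classes)
pred_classes = probs.argmax(axis=1)     # most probable class index per image
```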
{
"source": "JosephPai/Python3App",
"score": 2
} |
#### File: Python3App/www/ormTest.py
```python
import logging; logging.basicConfig(level=logging.INFO)
from models import User
import asyncio
import orm
loop = asyncio.get_event_loop()
# Insert
async def insert():
await orm.create_pool(loop,user='root', password='password', db='awesome')
u = User(name='Test2', email='<EMAIL>', passwd='<PASSWORD>', image='about:blank')
await u.save()
r = await User.findAll()
print(r)
# Delete
async def remove():
await orm.create_pool(loop, user='root', password='password', db='awesome')
r = await User.find('001492757565916ec<PASSWORD>eeb<PASSWORD>')
await r.remove()
print('remove',r)
await orm.destory_pool()
# Update
async def update():
await orm.create_pool(loop, user='root', password='password', db='awesome')
r = await User.find('00149276202953<PASSWORD>')
r.passwd = '<PASSWORD>'
await r.update()
print('update',r)
await orm.destory_pool()
async def find():
await orm.create_pool(loop, user='root', password='password', db='awesome')
all = await User.findAll()
print(all)
pk = await User.find('00149276202953187d8d3176f894f1fa82d9caa7d36775a000')
print(pk)
num = await User.findNumber('email')
print(num)
await orm.destory_pool()
loop.run_until_complete(find())
loop.close()
``` |
{
"source": "josephpalma/manda-dash",
"score": 3
} |
#### File: manda-dash/scraper/ports.py
```python
import requests
import pandas as pd
from bs4 import BeautifulSoup
import time
import numpy as np
import random
import logging
def searates(country, country_abr):
response = requests.get(url='https://www.searates.com/maritime/' + country +'.html',)
soup = BeautifulSoup(response.content, 'html.parser')
# This is going to collect all of the ports in the given country
ports = []
for tr in soup.find_all('ul')[1:2]:
tds = tr.find_all('li')
for x in range ((len(tds))):
ports.append("%s" % \
(tds[x].text))
ports_details = []
# This is going to go through each port in the given country and collect all of the current port data on searates.com
for x in range(len(ports)):
link = ports[x].lower().replace(' ','_')
link = link.replace('-','_')
link = link.replace('(','')
link = link.replace(')','')
link = link.replace(',','')
link = link.replace("'",'')
link = link.replace('/','')
response = requests.get(url='https://www.searates.com/port/' + link + '_' + country_abr + '.htm',)
soup = BeautifulSoup(response.content, 'html.parser')
ports_details.append([ports[x]])
for tr in soup.find_all('table'):
tds = tr.find_all('tr')
for k in range ((len(tds))-1):
ports_details[x].append("%s" %
(tds[k+1].text))
time.sleep(random.uniform(.5, 9.9)) # Random delay between visits to the website to avoid being blocked/stopped
col_title = []
# Give col headers to the csv file.
if len(ports_details[0]) > 2:
for x in range(len(ports_details[0])):
x_split = ports_details[0][x].split(':',1)
col_title.append(x_split[0])
col_title[0] = ' Port Name'
else:
for x in range(len(ports_details[-1])):
x_split = ports_details[-1][x].split(':',1)
col_title.append(x_split[0])
col_title[0] = ' Port Name'
# Remove any redundant information from the csv.
for x in range(len(ports_details)):
for k in range(len(ports_details[x])):
data = ports_details[x][k].split(':',1)
if len(data)>1:
ports_details[x][k] = data[1]
# Creates the data frame for the csv.
df = pd.DataFrame(ports_details)
df.columns = col_title
for x in range(len(df)):
name = df[' Port Name'][x].split()
if name[0] == 'Port':
new_name = ' '.join(name[1:])
df[' Port Name'][x] = new_name
df.to_csv(r'../../../../../../scraper/data/ports_vs.csv')
def WPI(country, country_abr):
# This pulls data from the already downloaded PUB150 database, which was manually exported from an Access Database file to a csv file saved as WPI_complete.csv
file = pd.read_csv(r'../../../../../../scraper/WPI_complete.csv')
df = pd.DataFrame(file)
df_new = df[df['Country '] == country_abr.upper()]
df_new.to_csv(r'../../../../../../scraper/WPI_Data.csv')
def combine_data_frames(country, country_abr):
file1 = pd.read_csv(r'../../../../../../scraper/data/ports_vs.csv')
file2 = pd.read_csv(r'../../../../../../scraper/WPI_Data.csv')
df1 = pd.DataFrame(file1)
df2 = pd.DataFrame(file2)
# Clearing all of the blank rows from the dataframe
df2 = df2.dropna(how='all')
# Creating a Latitude and longitude field in the PUB150 extracted data. This is because it does not list data in a readable long lat format.
lat = []
long = []
df2 = df2.reset_index(drop=True)
for x in range(len(df2)):
lat.append('-' +str(df2['Field4'][x]) + ' ' + str(df2['Field5'][x]) + ' ' + str(df2['Combo353'][x]))
long.append(str(int(df2['Field7'][x])) + ' ' + str(int(df2['Field8'][x])) + ' ' + str(df2['Combo214'][x]))
df2['Latitude'] = lat
df2['Longitude'] = long
# Removing white spaces from the columns
df1_col = list(df1.columns)
df2_col = list(df2.columns)
for x in range(len(df1_col)):
df1_col[x] = df1_col[x].strip()
for x in range(len(df2_col)):
df2_col[x] = df2_col[x].strip()
df2.columns = df2_col
df1.columns = df1_col
# Renaming columns so that they match in both dataframes and can easily be combined
df2 = df2.rename(columns = {
'1st Port of Entry' : 'First Port of Entry',
'ETA Message' : 'ETA Message Required',
'U.S. Representative' : 'USA Representative',
'Maximum Size Vessel' : 'Maximum Vessel Size',
'Overhead Limits' : 'Overhead Limit',
'Tide.1' : 'Mean Tide',
'100 Tons Plus' : '100+ Ton Lifts',
'50-100 Tons' : '50-100 Ton Lifts',
'25-49 Tons' : '25-49 Ton Lifts',
'0-24 Tons' : '0-24 Ton Lifts',
'Fixed' : 'Fixed Cranes',
'Mobile' : 'Mobile Cranes',
'Floating' : 'Floating Cranes',
'Electric Repair' : 'Electrical Repair',
'Nav Equipment' : 'Navigation Equipment',
'Repair' : 'Ship Repairs',
'Railway' : 'Marine Railroad Size',
'Drydock' : 'Drydock Size'
})
df1 = df1.rename(columns = {
'Local Assist' : 'Local Assistance',
'Assist' : 'Tug Assistance',
'Salvage' : 'Tug Salvage',
'Deratt Cert' : 'SSCC Cert',
'Radio Tel' : 'Radio Telephone',
'Med Moor' : 'Med. Moor',
'Ice' : 'Ice Moor',
'Beach' : 'Beach Moor',
})
for x in range(len(df2['Port Name'])):
name = df2['Port Name'][x].split()
if name[0] == 'PORT':
new_name = ' '.join(name[1:])
df2['Port Name'][x] = new_name
# Combining both dataframes into one
combine = [df1,df2]
result = pd.concat(combine, ignore_index = True,
keys= ['Port Name', 'Publication', 'Chart',
'Harbor Size', 'Harbor Type', 'Shelter', 'Tide', 'Swell',
'Other', 'Overhead Limit', 'Channel', 'Anchorage', 'Cargo Pier',
'Oil Terminal', 'Mean Tide', 'Maximum Vessel Size',
'Good Holding Ground', 'Turning Area', 'First Port of Entry',
'USA Representative', 'ETA Message Required', 'Compulsory', 'Available',
'Local Assistance', 'Advisable', 'Tug Salvage', 'Tug Assistance',
'Pratique', 'SSCC Cert', 'Other.1', 'Telephone', 'Telefax', 'Radio',
'Radio Telephone', 'Air', 'Rail', 'Wharves', 'Anchor', 'Med. Moor',
'Beach Moor', 'Ice Moor', 'Medical Facilities', 'Garbage Disposal',
'Degauss', 'Dirty Ballast', 'Fixed Cranes', 'Mobile Cranes',
'Floating Cranes', '100+ Ton Lifts', '50-100 Ton Lifts',
'25-49 Ton Lifts', '0-24 Ton Lifts', 'Longshore', 'Electrical', 'Steam',
'Navigation Equipment', 'Electrical Repair', 'Provisions', 'Water',
'Fuel Oil', 'Diesel Oil', 'Deck', 'Engine', 'Ship Repairs',
'Drydock Size', 'Marine Railroad Size', 'Latitude', 'Longitude'],
sort=False)
# Formatting the combined data frame so that the country abbreviation is filled for all ports, and capitalizing all port names for uniform data
result['Country'] = df2['Country'][0]
result['Port Name'] = result['Port Name'].str.upper()
result = result.drop(columns = ['Combo214', 'Combo353', 'Field4', 'Field5', 'Field7', 'Field8',
'Unnamed: 0', 'Index No.', 'Region', 'Ice', 'Telefax'])
# Reordering to move the Country abr to the first column
first_column = result.pop('Country')
result.insert(0, 'Country', first_column)
# returns the csv of the combined data
result.to_csv(r'../../../../../../scraper/data/data_before_clean.csv')
clean_up(country, country_abr)
# remove duplicate port info
def clean_up(country, country_abr):
file = pd.read_csv(r'../../../../../../scraper/data/data_before_clean.csv')
df = pd.DataFrame(file)
df = df.replace(np.nan, '', regex=True)
# This section is a very roundabout way to check for duplicates and prevent data loss when merging the data frames.
# It could be quicker but I could not figure out a way to make faster functions work without data loss.
df = df.replace(r'^\s*$', 'nan', regex=True)
data = [[x] for x in df.columns]
test = []
for x in range(len(df)):
w = 0
k = 0
name = df['Port Name'][x].split()
if name[0] == 'PORT':
name = name[1]
else:
name = ' '.join(name)
while k < len(df):
name2 = df['Port Name'][k].split()
if name2[0] == 'PORT':
name2 = name2[1]
else:
name2 = ' '.join(name2)
if name == name2 and x != k:
if k not in test:
test.append(x)
set1 = df.iloc[x]
set2 = df.iloc[k]
for j in range(len(data)):
if set1[j] == set2[j]:
data[j].append(set1[j])
elif set1[j] == 'nan':
data[j].append(set2[j])
elif set2[j] == 'nan':
data[j].append(set1[j])
else:
data[j].append(set1[j])
break
k+=1
if k == len(df):
set1 = df.iloc[x]
check = set1['Port Name']
check = check.split()
if check[-1] == 'HARBOR' or check[-1] == 'HARBOUR':
check = ' '.join(check[:-1])
if len(data[2]) > 1:
for x in data[2]:
if check in x:
w = 1
break
if w == 1:
break
for j in range(len(data)):
data[j].append(set1[j])
final = pd.DataFrame(data)
final = final.transpose()
final.columns = df.columns
final = final.drop([0])
final = final.replace('nan', '', regex=True)
# End of roundabout data loss fix.
# This section cleans the data into the format agreed upon for the data reporting.
# These columns were deemed unnecessary at the time; however,
# the program still scrapes the data and only drops it here, in case these data points are needed at any point in the future.
final = final.drop(['Unnamed: 0', 'UN/LOCODE', '800 Number', 'Max Draft', 'ETA Message Required', 'Other', 'Advisable', 'Local Assistance', 'Other.1', 'SSCC Cert', 'Telephone', 'Radio', 'Air', 'Telegraph',
'Radio Telephone', 'Ice Moor', 'Anchor', 'Beach Moor', 'Electrical Repair', 'Steam', 'Electrical', 'Navigation Equipment', 'Engine', 'Degauss', 'Garbage Disposal', 'Dirty Ballast'], axis = 1)
# This section cleans the data into a uniform standard via hard-coded mappings, decoding some of the encoded sections of data.
final['Harbor Size'].replace({
# HARBOR SIZE
'L': 'Large',
'M': 'Medium',
'S': 'Small' ,
'V': 'Very Small'
}, inplace=True)
final['Harbor Type'].replace({
# HARBOR TYPE
'RT' : 'River Tide Gate' ,
'LC' : 'Lake or Canal' ,
'OR' : 'Open Roadstead' ,
'TH' : 'Typhoon Harbor' ,
'RN' : 'River Natural',
'CN' : 'Coastal Natural',
'CB' : 'Coastal Breakwater',
'CT' : 'Coastal Tide Gate' ,
'RB' : 'RIVER BASIN',
'N' : 'NONE'
}, inplace=True)
final['Shelter'].replace({
# SHELTER AFFORDED
'E' : 'Excellent',
'G' : 'Good',
'F' : 'Fair',
'P' : 'Poor',
'N' : 'None'
}, inplace=True)
for x in ['Channel','Cargo Pier','Anchorage','Oil Terminal']:
final[x].replace({
# FEET
'A' : '76 - over ft' ,
'B' : '71 - 75 ft',
'C' : '66 - 70 ft' ,
'D' : '61 - 65 ft' ,
'E' : '56 - 60 ft' ,
'F' : '51 - 55 ft' ,
'G' : '46 - 51 ft' ,
'H' : '41 - 45 ft' ,
'J' : '36 - 40 ft' ,
'K' : '31 - 35 ft' ,
'L' : '26 - 30 ft' ,
'M' : '21 - 25 ft' ,
'N' : '16 - 20 ft',
'O' : '11 - 15 ft',
'P' : '6 - 10 ft',
'Q' : '0 - 5 ft'
}, inplace=True)
final['Maximum Vessel Size'].replace({
# MAXIMUM SIZE VESSEL
'L': 'over 500 feet', #(152.4 meters)
'M': 'less than 500 feet' #(152.4 meters)
}, inplace=True)
final['Ship Repairs'].replace({
#REPAIRS
'A' : 'Major', #– Extensive overhauling and rebuilding in well equipped shipyards.
'B' : 'Moderate', #– Extensive overhauling and rebuilding that does not require drydocking. Suitable drydocking facilities are usually lacking or inadequate.
'C' : 'Limited', #– Small repair work in independent machine shops or foundries.
'D' : 'Emergency only',
'N' : 'None'
}, inplace=True)
final['Marine Railroad Size'].replace({
#Railways
'S' : 'Up to 200 tons',
'M' : '201 to 1,000 tons',
'L' : 'over 1,000 tons',
' Small ' : 'Up to 200 tons',
' Medium ' : '201 to 1,000 tons',
' Large ' : 'over 1,000 tons'
}, inplace=True)
final['Drydock Size'].replace({
# Drydock
'S' : 'Up to 656 ft', #(200 meters)
'M' : '657 ft to 984 ft', #(201 to 300 meters)
'L' : '985 ft and over', #(301 meters and over)
' Small ' : 'Up to 656 ft', #(200 meters)
' Medium ' : '657 ft to 984 ft', #(201 to 300 meters)
' Large ' : '985 ft and over' #(301 meters and over)
}, inplace=True)
# Leaving this in case anyone wants to add Max Draft again
# final['Max Draft'].replace({
# # FEET
# ' a ' : '76 - over ft' ,
# ' b ' : '71 - 75 ft',
# ' c ' : '66 - 70 ft' ,
# ' d ' : '61 - 65 ft' ,
# ' e ' : '56 - 60 ft' ,
# ' f ' : '51 - 55 ft' ,
# ' g ' : '46 - 51 ft' ,
# ' h ' : '41 - 45 ft' ,
# ' j ' : '36 - 40 ft' ,
# ' k ' : '31 - 35 ft' ,
# ' l ' : '26 - 30 ft' ,
# ' m ' : '21 - 25 ft' ,
# ' n ' : '16 - 20 ft',
# ' o ' : '11 - 15 ft',
# ' p ' : '6 - 10 ft',
# ' q ' : '0 - 5 ft'
# }, inplace=True)
remove_units = ['Channel','Cargo Pier','Anchorage','Oil Terminal']
for x in remove_units:
final[x].replace({
' 6 - 10 feet 1.8 - 3 meters ' : '6 -10 ft',
' 11 - 15 feet 3.4 - 4.6 meters ' :'11 - 15 ft',
' 16 - 20 feet 4.9 - 6.1 meters ' :'16 - 20 ft',
' 21 - 25 feet 6.4 - 7.6 meters ' :'21 - 25 ft',
' 26 - 30 feet 7.1 - 9.1 meters ' :'26 - 30 ft',
' 31 - 35 feet 9.4 - 10 meters ' :'31 - 35 ft',
' 36 - 40 feet 11 - 12.2 meters ' :'36 - 40 ft',
' 41 - 45 feet 12.5 - 13.7 meters ' : '41 - 45 ft',
' 46 - 50 feet 14 - 15.2 meters ' :'46 - 50 ft',
' 51 - 55 feet 15.5 - 16 meters ' :'51 - 55 ft',
' 61 - 65 feet 18.6 - 19.8 meters ' : '61 - 65 ft',
' 71 - 75 feet 21.6 - 22.9 meters ' : '71 - 75 ft',
' 76 feet - OVER 23.2m - OVER ' : '76 - over ft'
}, inplace=True)
for x in range(len(final['Mean Tide'])):
data = final['Mean Tide'][x+1].split()
if len(data)>0:
final['Mean Tide'][x+1] = data[0]
else:
final['Mean Tide'][x+1] = ''
final_fixes = ['First Port of Entry', 'USA Representative', 'Medical Facilities', 'Turning Area', 'Good Holding Ground', 'Tide', 'Overhead Limit', 'Swell',
'Compulsory', 'Available', 'Tug Assistance', 'Tug Salvage', 'Pratique', 'Rail', 'Wharves', 'Med. Moor', '100+ Ton Lifts', '50-100 Ton Lifts',
'25-49 Ton Lifts', '0-24 Ton Lifts', 'Fixed Cranes', 'Mobile Cranes', 'Floating Cranes', 'Longshore', 'Provisions', 'Fuel Oil', 'Deck', 'Water', 'Diesel Oil']
for x in final_fixes:
try:
final[x].replace({
'Y': 'Yes',
' Yes ': 'Yes',
'N': 'No' ,
' No ': 'No',
1 : 'Yes',
' 1 ': 'Yes'
}, inplace=True)
except:
final[x].astype(str).replace({
'Y': 'Yes',
' Yes ': 'Yes',
'N': 'No' ,
' No ': 'No',
1 : 'Yes',
' 1 ': 'Yes'
}, inplace=True)
# This is to fix encoding errors in the degree symbols and make the pulled data easily readable
for x in range(len(final['Latitude'])):
data = final['Latitude'][x+1].strip()
if len(data) > 0:
if data[-1] == 'S':
data = final['Latitude'][x+1].replace("'",'')
data = data.replace('º','')
data = data.replace('-','')
final['Latitude'][x+1] = '-' + data
else:
data = final['Latitude'][x+1].replace("'",'')
data = data.replace('º','')
data = data.replace('-','')
final['Latitude'][x+1] = data
else:
final['Latitude'][x+1] = '0'
for x in range(len(final['Longitude'])):
data = final['Longitude'][x+1].strip()
if len(data) > 0:
if data[-1] == 'E':
data = final['Longitude'][x+1].strip()
data = data.replace('º','')
data = data.replace("'",'')
data = data.replace('-','')
final['Longitude'][x+1] = data.replace('"','')
else:
data = final['Longitude'][x+1].strip()
data = data.replace('º','')
data = data.replace("'",'')
data = data.replace('-','')
data = data.replace('"','')
final['Longitude'][x+1] = '-' + data
else:
final['Longitude'][x+1] = '0'
# This section produces decimal-degree (DD) coordinates for digital maps
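# Illustrative trace (assumed values, not scraped data): a cleaned southern latitude
# '-33 51 S' splits into ['-', '33', '51'] and becomes '-' + str(33 + 51/60) = '-33.85',
# while an eastern longitude '151 12 E' becomes str(151 + 12/60) = '151.2'.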
dd_lat_list = []
for x in range(len(final['Latitude'])):
data = final['Latitude'][x+1].strip()
if data == '0':
dd_lat_list.append(data)
else:
if data[-1] == 'S':
data = data.replace('S','')
data = data.replace('-','- ')
data = data.split()
if len(data) > 3:
dd = int(data[1]) + int(data[2])/60 + int(data[3])/3600
dd_lat_list.append(str(data[0]) + str(dd))
else:
dd = int(data[1]) + int(data[2])/60
dd_lat_list.append(str(data[0]) + str(dd))
else:
data = data.replace('N','')
data = data.replace('-','- ')
data = data.split()
if len(data) > 3:
dd = int(data[0]) + int(data[1])/60 + int(data[2])/3600
dd_lat_list.append(str(dd))
else:
dd = int(data[0]) + int(data[1])/60
dd_lat_list.append(str(dd))
dd_long_list = []
for x in range(len(final['Longitude'])):
data = final['Longitude'][x+1].strip()
if data == '0':
dd_long_list.append(data)
else:
if data[-1] == 'E':
data = data.replace('E','')
data = data.split()
if len(data) > 3:
dd = int(data[0]) + int(data[1])/60 + int(data[2])/3600
dd_long_list.append(str(dd))
else:
dd = int(data[0]) + int(data[1])/60
dd_long_list.append(str(dd))
else:
data = data.replace('W','')
data = data.replace('-','- ')
data = data.split()
if len(data) > 3:
dd = int(data[1]) + int(data[2])/60 + int(data[3])/3600
dd_long_list.append(str(data[0]) + str(dd))
else:
dd = int(data[1]) + int(data[2])/60
dd_long_list.append(str(data[0]) + str(dd))
insert = final.columns.get_loc('Longitude')+1
final.insert(insert, 'DD Lat.', dd_lat_list)
final.insert(insert+1, 'DD Long.', dd_long_list)
# Final change and send to csv
# This header change is to make the headers work with the app.
new_headers = [
'country',
'portName',
'portAuthority',
'address',
'phone',
'fax',
'email',
'latitude',
'longitude',
'ddLatitude',
'ddLongitude',
'portType',
'portSize',
'firstPortofEntry',
'publication',
'chart',
'usaRep',
'medicalFacilities',
'harborSize',
'shelter',
'maxVesselSize',
'harborType',
'turningArea',
'holdingGround',
'tide',
'overheadLimit',
'swell',
'channel',
'cargoPier',
'meanTide',
'anchorage',
'oilTerminal',
'compulsory',
'available',
'tugAssistance',
'tugSalvage',
'pratique',
'rail',
'wharves',
'medMoor',
'hundredTonLifts',
'fiftyTonLifts',
'twentyTonLifts',
'zeroTonLifts',
'fixedCranes',
'mobileCranes',
'floatingCranes',
'longshore',
'provisions',
'fuelOil',
'deck',
'water',
'dieselOil',
'shipRepairs',
'marineRailroadSize',
'drydockSize'
]
final.columns = new_headers
# This section is to fix some formatting with addresses
lst = [x for x in final.address]
for x in range(len(lst)):
res = []
if type(lst[x]) == str:
new_lst = lst[x].split()
test = [x for x in new_lst]
# these two for loops from https://www.geeksforgeeks.org/python-add-space-between-potential-words/
for ele in test:
temp = [[]]
for char in ele:
# checking for upper case character
if char.isupper():
temp.append([])
# appending character at latest list
temp[-1].append(char)
# joining lists after adding space
res.append(' '.join(''.join(ele) for ele in temp))
new_add = ' '.join(res)
new_add = new_add.replace(' ', ' ')
new_add = new_add.replace('- ', '-')
new_add = new_add.replace('G P O', 'GPO')
new_add = new_add.replace('P O', 'PO')
new_add = new_add.strip()
final.address[x+1] = new_add
else:
final.address[x+1] = 'should be blank'
final = final.reset_index(drop=True)
final.to_csv(r'../../../../../../scraper/data/port_data/' + country + '_ports.csv')
def run_ports():
# List of countries of interest to scrape through. This is based on the formatting of the searates website urls (which require country name and country code)
# and the PUB150 database country codes.
countries = [
['australia', 'au'],
['bangladesh', 'bd'],
['cambodia', 'kh'],
['china', 'cn'],
['fiji', 'fj'],
['germany', 'de'],
['hong_kong', 'hk'],
['italy', 'it'],
['india', 'in'],
['indonesia', 'id'],
['japan', 'jp'],
['malaysia', 'my'],
['myanmar', 'mm'],
['nauru', 'nr'],
['new_caledonia', 'nc'],
['new_zealand', 'nz'],
['papua_new_guinea', 'pg'],
['philippines', 'ph'],
['samoa', 'ws'],
['solomon_islands', 'sb'],
['south_korea', 'kr'],
['sri_lanka', 'lk'],
['taiwan', 'tw'],
['thailand', 'th'],
['tonga', 'to'],
['tuvalu', 'tv'],
['vanuatu', 'vu'],
['vietnam', 'vn']
]
errors = []
for x in range(len(countries)):
try:
searates(countries[x][0], countries[x][1])
WPI(countries[x][0], countries[x][1])
combine_data_frames(countries[x][0], countries[x][1])
except Exception as e:
error_log = 'Failed to update ' + str(countries[x][0]) + ' because of ' + str(e)
logging.error('Failed to update ' + str(countries[x][0]) + ' because of ' + str(e))
errors.append(error_log)
# This tells the user which countries ran into errors and did not run completely.
# Most often these errors are caused by being rejected by the website.
with open(r'../../../../../../scraper/data/port_errors.txt', 'w') as f:
f.write('Encountered errors with the following countries please run again with only the following countries. \n' + '\n'.join(errors))
if (__name__ == '__main__'):
run_ports()
``` |
{
"source": "Joseph-Park-dev/image_scraper_with_URL",
"score": 4
} |
#### File: Joseph-Park-dev/image_scraper_with_URL/image_scraper_with_URL.py
```python
from selenium import webdriver
import requests
import os
import time
import io
from PIL import Image
import hashlib
'''
Web Image Scraper
- Give this a URL, you get the image.
Original Author
-"<NAME>" on towardsdatascience.com
[https://towardsdatascience.com/image-scraping-with-python-a96feda8af2d]
- "eamander" on GitHub [https://github.com/eamander/Pinterest_scraper]
Code Modified by
- <NAME> (<NAME>)
'''
class InputManager(object):
def __init__(self):
self.print_menu()
self.img_source = None
def print_menu(self):
print("""
Web Image Scraper_ver 1.11
- Give it the URL, you get the image.
Original Author
-"<NAME>" on towardsdatascience.com
[https://towardsdatascience.com/image-scraping-with-python-a96feda8af2d]
- "eamander" on GitHub [https://github.com/eamander/Pinterest_scraper]
Code Modified by
- <NAME> (<NAME>)
""")
def get_search_info(self):
while self.img_source == None:
URL = input("Please insert the URL containing images [Google] > ")
img_source = self.verify_user_input(URL)
if img_source != None:
self.img_source = img_source
break
if img_source == "Google":
print("""
Current Image Source : Google
""")
return URL, None, None
elif img_source == "Others":
print("""
Current Image Source : Others
""")
return URL, None, None
def get_destination_folder(self):
folder_name = None
folder_location = None
folder_location = input("Please insert the destination to save image > ")
folder_name = input("How would you name your image folder? > ")
return os.path.join(folder_location,'_'.join(folder_name.split(' ')))
def get_image_count(self):
return int(input("How many images are you willing to download? > "))
def verify_user_input(self, URL):
src = None
if URL.find("www.google.com") != -1:
src = "Google"
else:
src = "Others"
return src
########## Fabian Bosler's Code, modified by <NAME> ##########
class URLImageScraper(object):
def __init__(self, driver_path):
my_input_manager = InputManager()
self.URL, self.login_Pinterest_ID, self.login_Pinterest_PWD = my_input_manager.get_search_info()
self.folder_location = my_input_manager.get_destination_folder()
self.max_download_count = my_input_manager.get_image_count()
self.wd = webdriver.Chrome(executable_path=driver_path)
self.num_of_downloads = 0
def __del__(self):
self.wd.quit()
self.show_result()
def search_and_download(self, driver_path):
if not os.path.exists(self.folder_location):
os.makedirs(self.folder_location)
res = self.fetch_image_urls(wd= self.wd, sleep_between_interactions=0.5)
for elem in res:
self.persist_image(elem)
self.num_of_downloads += 1
def fetch_image_urls(self, wd:webdriver, sleep_between_interactions:int=1):
def scroll_to_end(wd):
self.wd.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(sleep_between_interactions)
# build the google query
# load the page
wd.get(self.URL)
image_urls = set()
download_count = 0
results_start = 0
while download_count < self.max_download_count:
scroll_to_end(wd)
# get all image thumbnail results
thumbnail_results = wd.find_elements_by_css_selector("img.Q4LuWd")
number_results = len(thumbnail_results)
print(f"Found: {number_results} search results. Extracting links from {results_start}:{number_results}")
for img in thumbnail_results[results_start:number_results]:
# try to click every thumbnail such that we can get the real image behind it
try:
img.click()
time.sleep(sleep_between_interactions)
except Exception:
continue
# extract image urls
actual_images = wd.find_elements_by_css_selector('img.n3VNCb')
for actual_image in actual_images:
if actual_image.get_attribute('src') and 'http' in actual_image.get_attribute('src'):
image_urls.add(actual_image.get_attribute('src'))
download_count = len(image_urls)
if len(image_urls) >= self.max_download_count:
print(f"Found: {len(image_urls)} image links, done!")
break
else:
print("Found:", len(image_urls), "image links, looking for more ...")
time.sleep(30)
return image_urls  # return what was collected so far rather than None
load_more_button = wd.find_element_by_css_selector(".mye4qd")
if load_more_button:
wd.execute_script("document.querySelector('.mye4qd').click();")
# move the result startpoint further down
results_start = len(thumbnail_results)
return image_urls
def persist_image(self, URL:str):
try:
image_content = requests.get(URL).content
except Exception as e:
print(f"ERROR - Could not download {URL} - {e}")
try:
image_file = io.BytesIO(image_content)
image = Image.open(image_file).convert('RGB')
file_path = os.path.join(self.folder_location,hashlib.sha1(image_content).hexdigest()[:10] + '.jpg')
with open(file_path, 'wb') as f:
image.save(f, "JPEG", quality=85)
print(f"SUCCESS - saved {URL} - as {file_path}")
except Exception as e:
print(f"ERROR - Could not save {URL} - {e}")
def show_result(self):
print("""
Process Finished
Number of Downloaded Images : {num_of_dwn}
Location of Folder containing Images : {folder_loc}
""".format(
num_of_dwn = self.num_of_downloads,
folder_loc = self.folder_location)
)
########## Fabian Bosler's Code, modified by <NAME> ##########
def main():
DRIVER_PATH = "./ChromeDriver/chromedriver.exe"
my_image_scraper = URLImageScraper(DRIVER_PATH)
my_image_scraper.search_and_download(DRIVER_PATH)
del my_image_scraper
query = input("Type anything to exit > ")
if query != None:
return
if __name__ == "__main__":
main()
``` |
{
"source": "josephpaulgiroux/gpt-2-tensorflow2.0",
"score": 3
} |
#### File: josephpaulgiroux/gpt-2-tensorflow2.0/sample.py
```python
import tensorflow as tf
import sentencepiece as spm
from gpt2_model import Gpt2
import json
import encoder
def argmax(logits):
return tf.argmax(logits)
def top_k_logits(logits, k):
if k == 0:
return logits
values, _ = tf.nn.top_k(logits, k=k)
min_values = values[:, -1]
return tf.where(
logits < min_values,
tf.ones_like(logits, dtype=logits.dtype) * -1e10,
logits
)
# Nucleus Sampling (https://arxiv.org/pdf/1904.09751.pdf)
def top_p_logits(logits, p):
"""Took from OpenAI GPT-2 Implememtation"""
batch = tf.shape(logits)[0]
sorted_logits = tf.sort(logits, direction='DESCENDING', axis=-1)
cumulative_probs = tf.cumsum(tf.nn.softmax(sorted_logits, axis=-1), axis=-1)
indices = tf.stack([
tf.range(0, batch),
tf.maximum(tf.reduce_sum(tf.cast(cumulative_probs <= p, tf.int32), axis=-1) - 1, 0),
], axis=-1)
min_values = tf.gather_nd(sorted_logits, indices)
return tf.where(
logits < min_values,
tf.ones_like(logits) * -1e10,
logits,
)
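# Illustrative numbers (an assumption, not taken from the repo): for logits [2.0, 1.0, 0.5, -1.0]
# the softmax is ~[0.61, 0.22, 0.14, 0.03] with cumulative sum ~[0.61, 0.83, 0.97, 1.00];
# with p = 0.9 the cutoff index is max(2 - 1, 0) = 1, so min_values = 1.0 and every logit
# below 1.0 is masked to -1e10, leaving only the top two tokens available for sampling.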
class SequenceGenerator:
def __init__(self, model_path, model_param, vocab_path, encoder_path):
self.sp = None
self.model = None
self.model_path = model_path
self.model_param = model_param
self.vocab_path = vocab_path
self.encoder_path = encoder_path
def load_weights(self):
with open(self.model_param) as f:
param = json.load(f)
self.model = Gpt2(param['num_layers'],
param['d_model'],
param['num_heads'],
param['dff'],
param['max_seq_len'],
param['vocab_size'])
ckpt = tf.train.Checkpoint(model=self.model)
ckpt_manager = tf.train.CheckpointManager(ckpt, self.model_path, max_to_keep=1)
ckpt.restore(ckpt_manager.latest_checkpoint).expect_partial()
print('Model weights loaded into memory')
self.encoder = encoder.get_encoder(
model_name='1558M',
models_dir=self.model_path,
)
# self.sp = spm.SentencePieceProcessor()
# self.sp.load(self.vocab_path)
def sample_sequence(self,
context=None,
seq_len=512,
bos=3,
eos=4,
temperature=1,
top_k=8,
top_p=8,
nucleus_sampling=True):
if context is None:
print("Give some context to model.................")
return
context = tf.expand_dims(([bos] + self.encoder.encode(context)), 0)
prev = context
output = context
past = None
for i in range(seq_len):
logits, past = self.model(prev, training=False, past=past)
# print(logits)
logits = logits[:, -1, :] / tf.cast(temperature, tf.float32)
# print(logits)
logits = top_k_logits(logits, k=top_k)
# print(logits)
if nucleus_sampling:
logits = top_p_logits(logits, p=top_p)
samples = tf.random.categorical(logits, num_samples=1, dtype=tf.int32)
# print(samples)
if tf.equal(samples, eos):
# print("Predicted end of sequence.")
break
# print("shape.........")
# print(tf.shape(output))
# print(tf.shape(samples))
output = tf.concat([output, samples], axis=-1)
prev = samples
# print(tf.shape(output))
# print(output)
# print("--------------------------")
result = tf.squeeze(output, axis=0)
pred = [int(i) for i in result]
generated_seq = self.encoder.decode(pred[1:])
generated_seq = generated_seq.replace("[SEP]", "").strip()
generated_seq = ' '.join(generated_seq.split())
return generated_seq
``` |
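A hedged usage sketch for the generator above; the paths and sampling parameters are assumptions rather than values from the repository:
```python
# Paths and hyperparameters are illustrative assumptions.
gen = SequenceGenerator(
    model_path="./model",
    model_param="./model/model_params.json",
    vocab_path="./model/vocab.model",
    encoder_path="./model/encoder",
)
gen.load_weights()
print(gen.sample_sequence("The weather today is", seq_len=64,
                          temperature=0.8, top_k=40, top_p=0.9,
                          nucleus_sampling=True))
```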
{
"source": "JosephPB/n3jet",
"score": 3
} |
#### File: c++_calls/conversion/modeldump_single.py
```python
import numpy as np
np.random.seed(1337)
from keras.models import Sequential, model_from_json
from keras.optimizers import Adam
import json
import argparse
from modeldump import ModelDump
def parse():
"""
Parse arguments
"""
parser = argparse.ArgumentParser(
description='This is a simple script to dump a Keras model into a '
'simple format suitable for porting into a pure C++ model'
)
parser.add_argument('-a', '--architecture', help="JSON with model architecture", required=True)
parser.add_argument('-w', '--weights', help="Model weights in HDF5 format", required=True)
parser.add_argument('-o', '--output', help="Ouput file name", required=True)
parser.add_argument('-v', '--verbose', help="Verbose", required=False)
args = parser.parse_args()
return args
if __name__ == "__main__":
args = parse()
print 'Read architecture from', args.architecture
print 'Read weights from', args.weights
print 'Writing to', args.output
model_dump = ModelDump(
architecture = args.architecture,
weights = args.weights,
output = args.output,
verbose = args.verbose,
init = True
)
```
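For reference, a hedged sketch of the equivalent programmatic call; the file names mirror the `-a`/`-w`/`-o` flags and are assumptions:
```python
from modeldump import ModelDump

# File names are assumptions; any Keras architecture JSON / HDF5 weights pair would do.
ModelDump(
    architecture="model.json",
    weights="weights.h5",
    output="dumped.nnet",
    verbose=True,
    init=True,
)
```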
#### File: n3jet/deprecated/rambo_while.py
```python
import numpy as np
from tqdm import tqdm
import pandas as pd
from n3jet.utils.general_utils import dot
# set com energy
s = 500
# set ps variable product
delta_vars = 0.2
def pair_check(p1,p2,delta,s_com):
'''Check proximity of pair of momenta
:param p1, p2: 4-momenta
:param delta: proximity measure according to the JADE algorithm - e.g. 0.01
returns: boolean - True if too close, False otherwise
'''
distance = (dot(p1,p2))/s_com
close = False
if distance <= delta:
close = True
#print ('True: Distance is: {} and the delta is {}'.format(distance,delta))
return close
def check_all(p_array,delta,s_com):
'Given an array of 4-momenta, check proximity of all pairs'
too_close = False
p_array = p_array[2:]
#print ('Checking {}'.format(p_array))
for idx, p in enumerate(p_array):
to_check = p_array[idx+1:]
for j in to_check:
proximity = pair_check(p,j,delta,s_com=s_com)
if proximity == True:
too_close = True
return too_close
def random_moms(num_points):
'Generate 4-mom components from a uniform distribution'
moms = []
for i in range(num_points):
moms.append(np.random.uniform(0,1,4))
return np.array(moms)
def isotropic_moms(mom):
'Create massless 4-mom with isotropic angular distribution'
c = 2*mom[0] - 1
phi = 2*np.pi*mom[1]
q_0 = -np.log(mom[2]*mom[3])
q_1 = q_0*np.sqrt(1-c**2)*np.cos(phi)
q_2 = q_0*np.sqrt(1-c**2)*np.sin(phi)
q_3 = q_0*c
return np.array([q_0,q_1,q_2,q_3])
def dot(p1,p2):
'Minkowski metric dot product'
prod = p1[0]*p2[0]-(p1[1]*p2[1]+p1[2]*p2[2]+p1[3]*p2[3])
return prod
def boost(mom, moms, w):
'''
Boost and scale isotropic 4-mom for momentum conservation
:param mom: one isotropic 4-mom
:param moms: array of all isotropic momenta
:param w: centre of mass energy
returns: np array of boosted and scaled 4-mom corresponding to mom
'''
q = mom
Q = np.zeros(4)
for i in moms:
Q+=i
M = np.sqrt(dot(Q,Q))
b = -Q[1:]/M
x = w/M
gamma = Q[0]/M
a = 1/(1+gamma)
p_0 = x*(gamma*q[0]+np.dot(b,q[1:]))
p_space = x*(q[1:]+b*q[0]+a*(np.dot(b,q[1:]))*b)
return np.array([p_0,p_space[0],p_space[1],p_space[2]])
def generate(num_jets, num_points, w, delta=0.4):
p_1 = np.array([w/2,0.,0.,w/2])
p_2 = np.array([w/2,0.,0.,-w/2])
cut_momenta = []
# other_momenta = []
pbar = tqdm(total=num_points)
while len(cut_momenta) < num_points:
moms = random_moms(num_jets)
iso_moms = []
for i in moms:
iso_moms.append(isotropic_moms(i))
iso_moms = np.array(iso_moms)
boost_moms = []
boost_moms.append(p_1)
boost_moms.append(p_2)
for i in iso_moms:
boost_moms.append(boost(i,iso_moms,w))
close = check_all(boost_moms, delta=delta,s_com=dot(p_1,p_2))
if close == False:
cut_momenta.append(boost_moms)
pbar.update(1)
# else:
# other_momenta.append(boost_moms)
pbar.close()
cut_mom = pd.DataFrame({'momenta':list(cut_momenta)})
# other_mom = pd.DataFrame({'momenta':list(other_momenta)})
return cut_mom['momenta']#, other_mom['momenta']
```
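A minimal usage sketch for the generator above; the parameter values are illustrative, and the momentum-conservation check relies on the boost construction described in the docstrings:
```python
import numpy as np

# Illustrative parameters; delta is the JADE-style proximity cut.
events = generate(num_jets=3, num_points=10, w=500., delta=0.01)
outgoing = np.array(events[0])[2:]     # drop the two incoming momenta
print(np.sum(outgoing, axis=0))        # ~ [500, 0, 0, 0]: total momentum is conserved
```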
#### File: n3jet/models/model.py
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize, MinMaxScaler
import tensorflow as tf
from tensorflow.keras import activations
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.wrappers.scikit_learn import KerasRegressor
from keras import metrics
from keras.initializers import glorot_uniform
from keras.callbacks import EarlyStopping
from keras.optimizers import Adam
import keras.backend as K
class Model:
def __init__(
self,
input_size,
momenta,
labels,
all_jets = False,
all_legs = False,
model_dataset = False,
high_precision = False
):
'''
:param input_size: the flattened input dim for the model
e.g. 3 jets has input_dim of (3-1)*4=8
:param momenta: input momenta in NJET format (i.e. [num points, num jets, 4])
:param labels: labels
'''
self.input_size = input_size
self.momenta = momenta
self.labels = labels
self.all_jets = all_jets
self.all_legs = all_legs
self.model_dataset = model_dataset
self.high_precision = high_precision
if self.high_precision:
K.set_floatx('float64')
def standardise(self, data):
'''standardise data
:param data: an array over which to standardise (this array may be a variable column)
'''
array = np.array(data)
mean = np.mean(array)
std = np.std(array)
standard = (array-mean)/(std)
return mean, std, standard
def normalise(self, data):
array = np.array(data)
minimum = np.min(array)
maximum = np.max(array)
norm = (array-minimum)/(maximum-minimum)
return minimum, maximum, norm
def root_mean_squared_error(self, y_true, y_pred):
'custom loss function RMSE'
return K.sqrt(K.mean(K.square(y_pred - y_true)))
def process_training_data(self, random_state=42, scaling='standardise', **kwargs):
'''
training data must be standardised and split for training and validation
**kwargs can take on:
:param moms: the PS points in format [no_PS_points, points, 4]
:param labs: ground truth labels of squared matrix elements
'''
moms = kwargs.get('moms', self.momenta)
labs = kwargs.get('labs', self.labels)
if self.all_legs == True:
momenta = np.array(moms)
elif self.all_jets == True:
momenta = np.array(moms)[:,2:,:] #include all outgoing jets
else:
momenta = np.array(moms)[:,3:,:] #pick out all but one jet
labels = np.array(labs)
x_standard = momenta.reshape(-1,4).copy() #shape for standardising each momentum element
self.x_mean = np.zeros(4)
self.x_std = np.zeros(4)
if scaling == 'standardise':
self.x_mean[0],self.x_std[0],x_standard[:,0] = self.standardise(momenta.reshape(-1,4)[:,0])
self.x_mean[1],self.x_std[1],x_standard[:,1] = self.standardise(momenta.reshape(-1,4)[:,1])
self.x_mean[2],self.x_std[2],x_standard[:,2] = self.standardise(momenta.reshape(-1,4)[:,2])
self.x_mean[3],self.x_std[3],x_standard[:,3] = self.standardise(momenta.reshape(-1,4)[:,3])
self.y_mean, self.y_std, y_standard = self.standardise(labels)
elif scaling == 'normalise':
self.x_mean[0],self.x_std[0],x_standard[:,0] = self.normalise(momenta.reshape(-1,4)[:,0])
self.x_mean[1],self.x_std[1],x_standard[:,1] = self.normalise(momenta.reshape(-1,4)[:,1])
self.x_mean[2],self.x_std[2],x_standard[:,2] = self.normalise(momenta.reshape(-1,4)[:,2])
self.x_mean[3],self.x_std[3],x_standard[:,3] = self.normalise(momenta.reshape(-1,4)[:,3])
self.y_mean, self.y_std, y_standard = self.normalise(labels)
else:
raise ValueError('scaling must be either normalise or standardise and you have used {}'.format(scaling))
x_standard = x_standard.reshape(-1,self.input_size) #shape for passing into network
# Note: shuffling is on by default for train_test_split
if self.model_dataset:
X_train, X_test, y_train, y_test = train_test_split(x_standard, y_standard, test_size=0.2)
else:
X_train, X_test, y_train, y_test = train_test_split(x_standard, y_standard, test_size=0.2, random_state=random_state)
return X_train, X_test, y_train, y_test, self.x_mean, self.x_std, self.y_mean, self.y_std
def baseline_model_dataset(self, layers, lr=0.001, activation='tanh', loss='mean_squared_error'):
'define and compile model with fixing weight initialisers and a random dataset'
# create model
# at some point can use new Keras tuning feature for optimising this model
seeds = [
1337,
1337+123,
1337+345,
1337+545,
1337-123,
1337-345,
1337+567,
1337-567,
1337-189,
1337+189,
1337+194,
1337-194,
1337-347,
1337+347,
1337-545
]
if len(layers) > len(seeds)-1:
raise Exception(
'the number of layers cannot be more than {}, you have defined {} layers'.format(
len(seeds)-1, len(layers)
)
)
model = Sequential()
model.add(Dense(layers[0], input_dim=(self.input_size), kernel_initializer = glorot_uniform(seed=seeds[0])))
if activation == 'tanh':
model.add(Activation(activations.tanh))
elif activation == 'relu':
model.add(Activation(activations.relu))
else:
raise ValueError('supported activations are either tanh or relu, and you have used {}'.format(activation))
for i in range(1,len(layers)):
model.add(Dense(layers[i], kernel_initializer = glorot_uniform(seed=seeds[i])))
if activation == 'tanh':
model.add(Activation(activations.tanh))
elif activation == 'relu':
model.add(Activation(activations.relu))
model.add(Dense(1, kernel_initializer = glorot_uniform(seed=seeds[-1])))
# Compile model
model.compile(optimizer = Adam(lr=lr, beta_1=0.9, beta_2=0.999, amsgrad=False), loss = loss)
return model
def baseline_model(self, layers, lr=0.001, activation='tanh', loss='mean_squared_error'):
'define and compile model with a fixed dataset but random weights'
# create model
# at some point can use new Keras tuning feature for optimising this model
model = Sequential()
model.add(Dense(layers[0], input_dim=(self.input_size)))
if activation == 'tanh':
model.add(Activation(activations.tanh))
elif activation == 'relu':
model.add(Activation(activations.relu))
else:
raise ValueError('supported activations are either tanh or relu, and you have used {}'.format(activation))
for i in range(1, len(layers)):
model.add(Dense(layers[i]))
if activation == 'tanh':
model.add(Activation(activations.tanh))
elif activation == 'relu':
model.add(Activation(activations.relu))
model.add(Dense(1))
# Compile model
model.compile(optimizer = Adam(lr=lr, beta_1=0.9, beta_2=0.999, amsgrad=False), loss = loss)
return model
def fit(
self,
scaling='standardise',
layers=[32,16,8],
epochs=10000,
lr=0.001,
activation='tanh',
loss='mean_squared_error',
**kwargs
):
'''
fit model
:param layers: a list providing the number of hidden nodes in each hidden layer (the default has three layers)
'''
if activation == 'relu' and scaling !='normalise':
raise ValueError('if activation is set to relu then scaling must be normalised')
random_state = kwargs.get('random_state', 42)
X_train, X_test, y_train, y_test,_,_,_,_ = self.process_training_data(random_state=random_state, scaling=scaling)
print ('The training dataset has size {}'.format(X_train.shape))
if self.model_dataset:
self.model = self.baseline_model_dataset(layers=layers, lr=lr, activation=activation, loss=loss)
else:
self.model = self.baseline_model(layers=layers, lr=lr, activation=activation, loss=loss)
ES = EarlyStopping(monitor='val_loss', min_delta=0, patience=100, verbose=0, restore_best_weights=True)
if self.model_dataset:
self.model.fit(X_train, y_train, epochs=epochs, validation_data=(X_test, y_test), callbacks=[ES], batch_size=512, shuffle=False)
else:
self.model.fit(X_train, y_train, epochs=epochs, validation_data=(X_test, y_test), callbacks=[ES], batch_size=512)
return self.model, self.x_mean, self.x_std, self.y_mean, self.y_std
def standardise_test(self, data, mean, std):
array = np.array(data)
standard = (array-mean)/(std)
return standard
def normalise_test(self, data, minimum, maximum):
array = np.array(data)
norm = (array-minimum)/(maximum-minimum)
return norm
def process_testing_data(self, moms, scaling='standardise', **kwargs):
'''
**kwargs can take on:
:param x_mean, x_std, y_mean, y_std: mean and std of x and y values if
not (properly) provided by class e.g. if using a pretrained model with
known mean and std
'''
labs = kwargs.get('labs', None)
if self.all_legs == True:
momenta = np.array(moms)
elif self.all_jets == True:
momenta = np.array(moms)[:,2:,:] #include all outgoing jets
else:
momenta = np.array(moms)[:,3:,:] #pick out all but one jet
y_mean = kwargs.get('y_mean', self.y_mean)
print_y = kwargs.get('print_y', True)
if print_y == True:
print ('Using y_mean of {} instead of {}'.format(y_mean, self.y_mean))
y_std = kwargs.get('y_std', self.y_std)
x_mean = kwargs.get('x_mean', self.x_mean)
x_std = kwargs.get('x_std', self.x_std)
if labs is not None:
labels = np.array(labs)
x_standard = momenta.reshape(-1,4).copy() #shape for standardising each momentum element
if scaling == 'standardise':
x_standard[:,0] = self.standardise_test(momenta.reshape(-1,4)[:,0],x_mean[0],x_std[0])
x_standard[:,1] = self.standardise_test(momenta.reshape(-1,4)[:,1],x_mean[1],x_std[1])
x_standard[:,2] = self.standardise_test(momenta.reshape(-1,4)[:,2],x_mean[2],x_std[2])
x_standard[:,3] = self.standardise_test(momenta.reshape(-1,4)[:,3],x_mean[3],x_std[3])
x_standard = x_standard.reshape(-1,self.input_size) #shape for passing into network
if labs is not None:
y_standard = self.standardise_test(labels,y_mean,y_std)
return x_standard, y_standard
else:
return x_standard
elif scaling == 'normalise':
x_standard[:,0] = self.normalise_test(momenta.reshape(-1,4)[:,0],x_mean[0],x_std[0])
x_standard[:,1] = self.normalise_test(momenta.reshape(-1,4)[:,1],x_mean[1],x_std[1])
x_standard[:,2] = self.normalise_test(momenta.reshape(-1,4)[:,2],x_mean[2],x_std[2])
x_standard[:,3] = self.normalise_test(momenta.reshape(-1,4)[:,3],x_mean[3],x_std[3])
x_standard = x_standard.reshape(-1,self.input_size) #shape for passing into network
if labs is not None:
y_standard = self.standardise_test(labels,y_mean,y_std)
return x_standard, y_standard
else:
return x_standard
else:
raise ValueError('scaling must be either normalise or standardise and you have used {}'.format(scaling))
def destandardise(self, data, mean, std):
'destandardise array for inference and comparison'
array = np.array(data)
return (array*std) + mean
def denormalise(self, data, minimum, maximum):
array = np.array(data)
return array*(maximum-minimum) + minimum
def destandardise_data(self, y_pred, x_pred=None, scaling='standardise', **kwargs):
'''
destandardise any standardised data
:param y_pred: squared matrix element values
:param x_pred: optional parameter of momenta values to be destandardised
**kwargs can take on:
:param x_mean, x_std, y_mean, y_std: mean and std of x and y values if not (properly) provided by class e.g. if using a pretrained model with known mean and std
note: when initialising the class with the data used to train a pretrained model, the standardised data will be the same as used in training if the dataset is loaded and passed correctly, since the mean and std are independent of the data splitting
'''
y_mean = kwargs.get('y_mean', self.y_mean)
y_std = kwargs.get('y_std', self.y_std)
x_mean = kwargs.get('x_mean', self.x_mean)
x_std = kwargs.get('x_std', self.x_std)
if scaling == 'standardise':
y_destandard = self.destandardise(y_pred,y_mean,y_std)
if x_pred is not None:
x_pred = x_pred.reshape(-1,4)
x_destandard = x_pred.copy()
x_destandard[:,0] = self.destandardise(x_pred[:,0],x_mean[0],x_std[0])
x_destandard[:,1] = self.destandardise(x_pred[:,1],x_mean[1],x_std[1])
x_destandard[:,2] = self.destandardise(x_pred[:,2],x_mean[2],x_std[2])
x_destandard[:,3] = self.destandardise(x_pred[:,3],x_mean[3],x_std[3])
x_destandard = x_destandard.reshape(-1,int((self.input_size)/4),4)
return x_destandard, y_destandard
else:
return y_destandard
elif scaling == 'normalise':
y_destandard = self.denormalise(y_pred,y_mean,y_std)
if x_pred is not None:
x_pred = x_pred.reshape(-1,4)
x_destandard = x_pred.copy()
x_destandard[:,0] = self.denormalise(x_pred[:,0],x_mean[0],x_std[0])
x_destandard[:,1] = self.denormalise(x_pred[:,1],x_mean[1],x_std[1])
x_destandard[:,2] = self.denormalise(x_pred[:,2],x_mean[2],x_std[2])
x_destandard[:,3] = self.denormalise(x_pred[:,3],x_mean[3],x_std[3])
x_destandard = x_destandard.reshape(-1,int((self.input_size)/4),4)
return x_destandard, y_destandard
else:
return y_destandard
else:
raise ValueError('scaling must be either normalise or standardise and you have used {}'.format(scaling))
```
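A minimal end-to-end sketch of how the class above appears to be used; the dummy arrays, leg count, and hyperparameters are assumptions for illustration only:
```python
import numpy as np

# Dummy stand-ins for real phase-space points and matrix elements (assumptions).
train_moms = list(np.random.rand(100, 5, 4))
train_labels = np.random.rand(100)
test_moms = list(np.random.rand(10, 5, 4))

nn = Model(input_size=5 * 4, momenta=train_moms, labels=train_labels, all_legs=True)
model, x_mean, x_std, y_mean, y_std = nn.fit(layers=[16, 8], epochs=10, lr=1e-3)

x_test = nn.process_testing_data(moms=test_moms)
y_pred = nn.destandardise_data(model.predict(x_test).reshape(-1))
```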
#### File: n3jet/n3jet/paths.py
```python
import logging
import os
try:
from pathlib2 import Path
except:
from pathlib import Path
from sys import argv
logger = logging.getLogger(
__name__
)
project_directory = Path(
os.path.abspath(__file__)
).parent.parent
working_directory = Path(
os.getcwd()
)
working_directory_parent = working_directory.parent
def find_default(name):
"""
Get a default path when no command line argument is passed.
- First attempt to find the folder in the current working directory.
- If it is not found there then try the directory in which the package lives.
- Finally, try the directory above the current working directory. This
is for the build pipeline.
This means that tests will find the configuration regardless of whether
they are run together or individually.
Parameters
----------
name
The name of some folder
Returns
-------
The full path to that directory
"""
for directory in (
working_directory,
project_directory,
working_directory_parent
):
path = directory / name
if os.path.exists(
path
):
return path
raise FileNotFoundError(
"Could not find a default path for {}".format(name)
)
def path_for_name(name):
"""
Get a path input using a flag when the program is run.
If no such argument is given, default to the path returned by
find_default for the name of the flag.
e.g. --data indicates where the data folder is and defaults
to a "data" directory located via find_default.
Parameters
----------
name
A string such as "data" which corresponds to the flag --data
Returns
-------
A path
"""
flag = "--{}".format(name)
try:
path = Path(argv[argv.index(flag) + 1])
if not path.exists():
raise FileNotFoundError(
"No such folder {}".format(path)
)
except (IndexError, ValueError):
path = find_default(name)
logger.warning(
"No {} argument given - defaulting to:\n{}".format(flag, path)
)
return path
configs_path = path_for_name("configs")
```
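A short behaviour sketch of the lookup above; the folder and script names are assumptions:
```python
# python run.py --configs /tmp/my_configs  ->  configs_path == Path("/tmp/my_configs")
# python run.py                            ->  configs_path falls back to find_default("configs"),
#     which returns the first existing of <cwd>/configs, <repo root>/configs, <cwd parent>/configs
#     and raises FileNotFoundError if none of them exist.
```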
#### File: n3jet/utils/fks_partition.py
```python
import numpy as np
from tqdm import tqdm
from n3jet.utils.general_utils import dot
from n3jet.phase import check_all
class FKSPartition:
def __init__(
self,
momenta,
labels,
all_legs = False,
):
self.momenta = momenta
self.labels = labels
self.all_legs = all_legs
if type(self.momenta) != list:
raise AssertionError('Momentum must be in the form of a list')
def cut_near_split(self, delta_cut, delta_near, return_indices = False):
'''
Split momenta into near and cut arrays -
near is the region close to the PS cuts and the cut region is the rest of the cut PS
:param delta_cut: the PS cut delta
:param delta_near: the secondary 'cut' defining the region 'close to' the cut boundary
'''
cut_momenta = []
self.near_momenta = []
cut_labels = []
self.near_labels = []
cut_indices = []
near_indices = []
for idx, i in tqdm(enumerate(self.momenta), total = len(self.momenta)):
close, min_distance = check_all(
mom=i,
delta=delta_cut,
s_com=dot(i[0],i[1]),
all_legs=self.all_legs
)
if not close:
if min_distance < delta_near:
self.near_momenta.append(i)
self.near_labels.append(self.labels[idx])
near_indices.append(idx)
else:
cut_momenta.append(i)
cut_labels.append(self.labels[idx])
cut_indices.append(idx)
if return_indices:
return cut_momenta, self.near_momenta, cut_labels, self.near_labels, cut_indices, near_indices
else:
return cut_momenta, self.near_momenta, cut_labels, self.near_labels
def s(self, p_1,p_2):
'CoM energy of two massless jets'
return (2*dot(p_1,p_2))
def d_ij(self, mom, i,j):
'CoM energy of selected massless jets'
return self.s(mom[i],mom[j])
def D_ij(self, mom):
"Reciprocal of CoM energy for pairwise sum"
ds = []
pairs = []
if not self.all_legs:
for i in range(2, len(mom)):
for j in range(i+1, len(mom)):
ds.append(self.d_ij(mom,i,j))
pairs.append([i,j])
else:
for i in range(len(mom)):
for j in range(i+1, len(mom)):
if i == 0 and j == 1:
pass
else:
ds.append(self.d_ij(mom,i,j))
pairs.append([i,j])
return np.sum(1/np.array(ds)), pairs
def S_ij(self, mom, i, j):
'Partition function'
D_1,_ = self.D_ij(mom)
return (1/(D_1*self.d_ij(mom,i,j)))
def weighting(self, return_weights = False):
'''
Weights scattering amplitudes according to the different partition functions for pairs of particles
'''
D_1, pairs = self.D_ij(self.near_momenta[0])
S_near = []
for idx, i in enumerate(pairs):
print ('Pair {} of {}'.format(idx+1, len(pairs)))
S = []
for j in tqdm(self.near_momenta, total=len(self.near_momenta)):
S.append(self.S_ij(j,i[0],i[1]))
S_near.append(np.array(S))
S_near = np.array(S_near)
labs_split = []
for i in S_near:
labs_split.append(self.near_labels*i)
if return_weights:
return pairs, labs_split, S_near
else:
return pairs, labs_split
``` |
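A hedged usage sketch; `events` (a list of phase-space points) and `mes` (their matrix elements) are assumed to exist, and the delta values are illustrative. Since the S_ij weights sum to one for each phase-space point, the per-pair weighted labels should add back up to the near-region labels:
```python
import numpy as np

# `events` and `mes` are assumptions standing in for real phase-space points and matrix elements.
fks = FKSPartition(momenta=events, labels=mes, all_legs=False)
cut_mom, near_mom, cut_lab, near_lab = fks.cut_near_split(delta_cut=0.01, delta_near=0.02)
pairs, labs_split = fks.weighting()
print(np.allclose(np.sum(labs_split, axis=0), near_lab))  # True: S_ij sums to 1 per point
```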
{
"source": "josephpcox/udivs-backend",
"score": 3
} |
#### File: josephpcox/udivs-backend/questions.py
```python
import pandas as pd
import random
import numpy as np
from operator import itemgetter
from datetime import datetime, timedelta
import math
import csv
import os.path
#import numpy as np
#from sklearn.feature_selection import VarianceThreshold
# def getWeek(DataFrame):
def getLocation(DataFrame):
'''
Grabs the day and location from a dataframe generated by the csv file stored in the amazon s3 bucket.
'''
# keep only the Place and Time columns
filtered = DataFrame[['Place', 'Time']].copy()
# remove rows with Nan in any column
df = filtered.dropna()
return df
def getActivity(DataFrame):
'''
Gets time and activity from the csv file stored in the amazon s3 bucket
'''
# keep only the Day, Time, and Activity columns
filtered = DataFrame[['Day', 'Time', 'Activity']]
# remove rows with NaN in any column
df = filtered.dropna()
excluded = ['walk', 'lesson', 'home time', 'vehicle', 'groceries',
'sleep', 'drinks', 'religion', 'exhibition']
final = df[~df.Activity.isin(excluded)]
return final
def getTodayLoc(DataFrame):
'''
This returns a dataframe of location data for one day
grabs the last index in the file (indicating today's date)
use index to return all the locations from today as Dataframe
from the csv file stored in the amazon s3 bucket.
'''
day = int(datetime.strftime(datetime.now(), '%Y%m%d')) # """FIXME"""
df = DataFrame[DataFrame.Day == day]
df = getLocation(df)
return df
def getYesterdayLoc(DataFrame):
'''
get all data from yesterday
uses the datetime library to grab all the data from yesterday
returns all the locations from yesterday as a dataframe
'''
day = int(datetime.strftime(datetime.now() - timedelta(days=1), '%Y%m%d'))  # """FIXME"""
# print(day)
df = DataFrame[DataFrame.Day == day]
df = getLocation(df)
return df
def checkLocList(DataFrame):
'''
We return geolocations that are not from today; these locations get cleaned and double checked in the udivs question set.
steps------------------------------------------------------------------------------------------------#
1 create a list of all the places visited yesterday in placesVisitedList
2 make an empty list that stores incorrect locations called inCorrect_loc
iterate until you have a list of 3
3 grab a random place from the data set, check it against the placesVisitedList
if the random place does not exist inside the places visited list
append it to the inCorrect_loc list
else continue
'''
df = getYesterdayLoc(DataFrame)
df = df.drop_duplicates(subset='Place', keep='first')
df = df['Place']
return df
#this returns the time of place in the format HH:MM AM/PM----------------------------------------------#
def getHourTime(DataFrame):
''' This is a helper function that returns the time from a geolocation in Hours and Minutes and AM or PM'''
date_time = DataFrame['Time'].iloc[0]
time = datetime.strptime(date_time, '%a %b %d %H:%M:%S %Z %Y')
hour_time = time.strftime('%I:%M %p')
return hour_time
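# Example (for illustration): a 'Time' value like 'Wed May 14 20:59:22 GMT 2014',
# parsed with '%a %b %d %H:%M:%S %Z %Y', is rendered as '08:59 PM' by '%I:%M %p'.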
# This grabs location --------------------------------------------------------------------------------- #
def getData(DataFrame, Amount):
''' This is a helper function to return the most recent location for the udivs system'''
lastday = DataFrame.iloc[:, 1]
lastindex = len(lastday.index) - 1
return lastday.iloc[lastindex]
def getDuration(DataFrame):
'''function returns an array of applications used in a day each with a total duration '''
day = int(datetime.strftime(datetime.now(), '%Y%m%d')) # """FIXME"""
df = DataFrame[DataFrame.Day == day]
df = df[['Time', 'Activity', 'Duration ms']].copy()
df = df.dropna()
df = df[df['Activity'].str.contains("phone:")]
group = df.groupby('Activity').sum()
return group
def convertms(ms):
''' This helper function converts milliseconds into minutes for a question in the UDIVS system. It returns the floored minute count. '''
minutes = (ms/(1000*60))
minutes = math.floor(minutes)
return minutes
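# Example (for illustration): convertms(125000) -> 2, since 125000 ms = 125 s ~= 2.08 min, floored to 2.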
# -----------------------------------------------------------------------------------------------------#
def getRecentApp():
''' This helper function returns the most recent app used for the UDIVS system'''
day = somDay_df['Activity'].dropna()
for x in day[::-1]:
# print(x)
if "phone:" not in x:
continue
ans = x
break
return ans
#-------------------------------------------------------------------------------------------------------#
# get the first location that is not the current location, generate incorrect answers
def getRecentLocation():
''' Returns the most recently visited place (other than the current one) for the UDIVS system'''
x = 1
while(True):
curLoc = somDay_df['Place'].iloc[-x]
if curLoc == "nan":
x = x+1
else:
break
# print("curLock:",curLoc)
locData = somDay_df['Place'].dropna()
# print(locData)
ans = ""
for x in locData[::-1]:
if x != curLoc:
ans = x
break
return ans
def getOptions(n):
'''
This is the logic to produce the questions, the incorrect answers, and the actual answer for the
UDIVS survey.
'''
options = []
q_string = '' # empty string to be returned
# question options for "which app did you use most recently
if n == 0:
# Which app did you use most recently?
ans = getRecentApp()
options.append(ans)
count = 1
q_string = 'Which app did you use most recently ?'
# this loop gives an array of answers called options for the user to choose from
day = somDay_df['Activity'].dropna()
for x in day:
flag = 0
if "phone:" in x:
for y in options:
if x == y:
flag = 1
if flag == 0:
options.append(x)
count = count + 1
if count == 4:
break
random.shuffle(options, random.random)
return q_string, ans, options
elif n == 1:
# What place were you at most recently?
ans = getRecentLocation()
options.append(ans)
count = 1
locData = somDay_df['Place'].dropna()
q_string = 'What place were you at most recently ?'
# This loop gives an array of answers called options for the user to choose from
for x in locData:
flag = 0
for y in options:
if x == y:
flag = 1
if flag == 0:
options.append(x)
count = count + 1
if count == 4:
break
random.shuffle(options, random.random)
return q_string, ans, options
elif n == 2:
# which place were you at around:(time) ?
time_loc = getTodayLoc(data)
ans_data = time_loc.sample(n=1)
ans = ans_data['Place'].iloc[0]
options.append(ans)
q_string = 'Which place were you at around {} today ?'.format(getHourTime(ans_data))
dummy_data = getLocation(data)
count = 1
while count < 4:
random_day = dummy_data.sample(n=1)
place = random_day['Place'].iloc[0]
flag = 0
for y in options:
if y == place:
flag = 1
if flag == 1:
pass
else:
options.append(place)
count = count + 1
random.shuffle(options, random.random)
return q_string, ans, options
elif n == 3:
# Which of these places did you go to yesterday ?
time_loc = getYesterdayLoc(data)
ans_data = time_loc.sample(n=1)
ans = ans_data['Place'].iloc[0]
options.append(ans)
placesVisited = checkLocList(data)
q_string = 'Which of these places did you go to yesterday ?'
dummy_data = getLocation(data)
count = 1
while count < 4:
random_day = dummy_data.sample(n=1)
place = random_day['Place'].iloc[0]
flag = 0
for z in placesVisited:
if z == place:
flag = 1
for y in options:
if y == place:
flag = 1
if flag == 1:
pass
else:
options.append(place)
count = count + 1
random.shuffle(options, random.random)
return q_string, ans, options
elif n == 4:
# About how long did you use __ for ?
options = ['0-10 minutes', '11-20 minutes',
'21-30 minutes', '+30 minutes']
groups = getDuration(data)
activity = groups.sample(n=1)
miliseconds = int(activity['Duration ms'])
minutes = convertms(miliseconds)
app = activity.index[0]
print("About how long did you use",
app.replace('phone: ', '', 1), "today?")
if minutes <= 10:
ans = options[0]
elif minutes <= 20:
ans = options[1]
elif minutes <= 30:
ans = options[2]
else:
ans = options[3]
return ans, options
elif n == 5:
# which app did you use most frequently today ?
q_string = 'Which app did you use most frequently today ?'
applicationList = []
count = 1
day = somDay_df['Activity'].dropna()
for x in day:
if "phone:" in x:
applicationList.append(x)
app_df = pd.DataFrame(data=applicationList)
ans = app_df[0].value_counts().idxmax()
options.append(ans)
for x in day:
flag = 0
if "phone:" in x:
for y in options:
if x == y:
flag = 1
if flag == 0:
options.append(x)
count = count + 1
if count == 4:
break
random.shuffle(options, random.random)
return q_string, ans, options
'''
data = pd.read_csv('../../userdevice_data/Joe_Data/Smarter_time/timeslots.csv')
# new version of filter to one day without hardcoding
last_index = len(data) - 1
day = data.loc[last_index, 'Day']
somDay_df = data[data.Day == day]
#-------------------------------------------------------------------------------------------------------------------------#
'''
'''
This is where the actual survey begins; we ask the user three questions from our question set.
This is a score fusion with random questions formed from features chosen from the data set.
'''
'''
#-------------------------------------------------------------------------------------------------------------------------#
print("Welcome to Joe's Device ! See if you can enter!")
questions=['Which app did you use most recently?','What place were you at most recently?','which place were you at around ','Which of these places did you go to yesterday?',
'How long were you on this app?','Which app did you use most frequently today?']
randomNums=random.sample(range(0,6),3)
print(randomNums)
# Ask the user if they are genuine or an imposter to collect the data properly
user = 2
genuine = True
while(user !=1 and user !=0):
print("Are you a genuine(1)user or an imposter(0)?")
user =int(input("0: imposter\n1: genuine\n"))
print(user)
if (user == 0):
genuine = False
else:
genuine =True
score = 0
count = 1
for n in randomNums:
ans,options = getOptions(n)
#print(ans) # This is where we normaly print the answer for debugging
for o in options:
print(count,". ",o)
count = count+1
userAns=int(input("input answer here: ")) # Utilize switch case for getOptions(n)
if genuine:
user = 'genuine'
else:
user = 'imposter'
Q_Num = n + 1
file = open('../raw_scores/question' + str(Q_Num) + '_' + user + '.csv','a')
writer = csv.writer(file)
if ans == options[userAns-1]:
score = score + 1
Qdata = [1]
writer.writerow(Qdata)
else:
Qdata = [0]
writer.writerow(Qdata)
file.close()
count = 1
if genuine:
user = 'genuine'
else:
user = 'imposter'
# This will write the score to the appropriate file
scores = [score]
file = open('../raw_scores/survey_score_'+user+'.csv','a')
writer = csv.writer(file)
writer.writerow(scores)
file.close()
#------------------------------------------------------------------------------ This is where the data analysis goes-------------------------------------------#
'''
'''
This section of code is to produce the False Reject Rate, the False Acceptance Rate,
the True Reject Rate, and the True Accept Rate for the total system, as well as analysis of each question'''
'''
# Generate genuine and imposter scores with the seed at 1
genuine_scores = pd.read_csv('../raw_scores/survey_score_genuine.csv')
imposter_scores = pd.read_csv('../raw_scores/survey_score_imposter.csv')
Q1_gen = pd.read_csv('../raw_scores/question1_genuine.csv')
Q1_imp = pd.read_csv('../raw_scores/question1_imposter.csv')
Q2_gen = pd.read_csv('../raw_scores/question2_genuine.csv')
Q2_imp = pd.read_csv('../raw_scores/question2_imposter.csv')
Q3_gen = pd.read_csv('../raw_scores/question3_genuine.csv')
Q3_imp = pd.read_csv('../raw_scores/question3_imposter.csv')
Q4_gen = pd.read_csv('../raw_scores/question4_genuine.csv')
Q4_imp = pd.read_csv('../raw_scores/question4_imposter.csv')
Q5_gen = pd.read_csv('../raw_scores/question5_genuine.csv')
Q5_imp = pd.read_csv('../raw_scores/question5_imposter.csv')
'''
``` |
{
"source": "josephperrott/rules_webtesting",
"score": 2
} |
#### File: web/versioned/browsers-0.3.2.bzl
```python
load("//web/internal:platform_http_file.bzl", "platform_http_file")
def browser_repositories(firefox = False, chromium = False, sauce = False):
"""Sets up repositories for browsers defined in //browsers/....
Args:
firefox: Configure repositories for //browsers:firefox-native.
chromium: Configure repositories for //browsers:chromium-native.
sauce: Configure repositories for //browser/sauce:chrome-win10-connect.
"""
if chromium:
org_chromium_chromedriver()
org_chromium_chromium()
if firefox:
org_mozilla_firefox()
org_mozilla_geckodriver()
if sauce:
com_saucelabs_sauce_connect()
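# Example WORKSPACE usage (illustrative sketch; the actual load path may differ):
# load("//web/versioned:browsers-0.3.2.bzl", "browser_repositories")
# browser_repositories(chromium = True, firefox = True)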
def com_saucelabs_sauce_connect():
platform_http_file(
name = "com_saucelabs_sauce_connect",
licenses = ["by_exception_only"], # SauceLabs EULA
amd64_sha256 = "6eb18a5a3f77b190fa0bb48bcda4694d26731703ac3ee56499f72f820fe10ef1",
amd64_urls = [
"https://saucelabs.com/downloads/sc-4.5.4-linux.tar.gz",
],
macos_sha256 = "7dd691a46a57c7c39f527688abd4825531d25a8a1c5b074f684783e397529ba6",
macos_urls = [
"https://saucelabs.com/downloads/sc-4.5.4-osx.zip",
],
windows_sha256 =
"4b2baaeb32624aa4e60ea4a2ca51f7c5656d476ba29f650a5dabb0faaf6cb793",
windows_urls = [
"https://saucelabs.com/downloads/sc-4.5.4-win32.zip",
],
)
# To update Chromium, do the following:
# Step 1: Go to https://omahaproxy.appspot.com/
# Step 2: Look for branch_base_position of current stable releases
# Step 3: Go to https://commondatastorage.googleapis.com/chromium-browser-snapshots/index.html?prefix=Linux_x64/ etc to verify presence of that branch release for that platform.
# If no results, delete the last digit to broaden your search until you find a result.
# Step 4: Verify both Chromium and ChromeDriver are released at that version.
# Step 5: Update the URL to the new release.
def org_chromium_chromedriver():
platform_http_file(
name = "org_chromium_chromedriver",
licenses = ["reciprocal"], # BSD 3-clause, ICU, MPL 1.1, libpng (BSD/MIT-like), Academic Free License v. 2.0, BSD 2-clause, MIT
amd64_sha256 =
"c8b8be2fc6835bd3003c16d73b9574242e215e81e9b3e01d6fed457988d052f4",
amd64_urls = [
"https://commondatastorage.googleapis.com/chromium-browser-snapshots/Linux_x64/870763/chromedriver_linux64.zip",
],
macos_sha256 =
"ad36367b3cfa825ec5528954ef07408e66d7873fa59aa8917f6893a8c062034b",
macos_urls = [
"https://commondatastorage.googleapis.com/chromium-browser-snapshots/Mac/870776/chromedriver_mac64.zip",
],
windows_sha256 =
"038624e31c327c40df979d699e7c1bba0f322025277f9c875266258169a56faa",
windows_urls = [
"https://commondatastorage.googleapis.com/chromium-browser-snapshots/Win/870788/chromedriver_win32.zip",
],
)
def org_chromium_chromium():
platform_http_file(
name = "org_chromium_chromium",
licenses = ["notice"], # BSD 3-clause (maybe more?)
amd64_sha256 =
"3a2ae26b7cc56018ea3435bbe22470a82c26340aac72330d6a87555bc3946ab1",
amd64_urls = [
"https://commondatastorage.googleapis.com/chromium-browser-snapshots/Linux_x64/870763/chrome-linux.zip",
],
macos_sha256 =
"667be4bd866e14b38fdb1b4d1f4c04b4f86e1af710082c30f78c3c5b52e5a34d",
macos_urls = [
"https://commondatastorage.googleapis.com/chromium-browser-snapshots/Mac/870776/chrome-mac.zip",
],
windows_sha256 =
"c0ef527ab7e4776b43da164b96969350cc87f1d18de2f6dfc6b74781092fcce5",
windows_urls = [
"https://commondatastorage.googleapis.com/chromium-browser-snapshots/Win/870788/chrome-win.zip",
],
)
def org_mozilla_firefox():
platform_http_file(
name = "org_mozilla_firefox",
licenses = ["reciprocal"], # MPL 2.0
amd64_sha256 =
"284f58b5ee75daec5eaf8c994fe2c8b14aff6c65331e5deeaed6ba650673357c",
amd64_urls = [
"https://ftp.mozilla.org/pub/firefox/releases/68.0.2/linux-x86_64/en-US/firefox-68.0.2.tar.bz2",
],
macos_sha256 =
"173440ca6147c6e1eebbe36f332da2c4347e37269152ad55c431f6b0d7078862",
macos_urls = [
"https://ftp.mozilla.org/pub/firefox/releases/68.0.2/mac/en-US/Firefox%2068.0.2.dmg",
],
)
def org_mozilla_geckodriver():
platform_http_file(
name = "org_mozilla_geckodriver",
licenses = ["reciprocal"], # MPL 2.0
amd64_sha256 =
"03be3d3b16b57e0f3e7e8ba7c1e4bf090620c147e6804f6c6f3203864f5e3784",
amd64_urls = [
"https://github.com/mozilla/geckodriver/releases/download/v0.24.0/geckodriver-v0.24.0-linux64.tar.gz",
],
macos_sha256 =
"4739ef8f8af5d89bd4a8015788b4dc45c2f5f16b2fdc001254c9a92fe7261947",
macos_urls = [
"https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-macos.tar.gz",
],
)
``` |
{
"source": "josephquang97/py4e03",
"score": 3
} |
#### File: josephquang97/py4e03/test_assignment.py
```python
import pytest
from main import calculate_price
@pytest.mark.parametrize(
"hours, rate, expected",
[
(39, 3, 117.0),
(40, 3, 120.0),
(41, 3, 124.5),
(45, 3, 142.5),
(45, 2.75, 130.625),
],
)
def test_price(hours, rate, expected):
result = calculate_price(hours, rate)
assert result == expected
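# For reference, a minimal sketch of the function under test, consistent with the
# cases above (assumed; the real implementation lives in main.py):
# def calculate_price(hours, rate):
#     if hours <= 40:
#         return hours * rate
#     return 40 * rate + (hours - 40) * rate * 1.5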
``` |
{
"source": "josephquang97/pymemapi",
"score": 3
} |
#### File: pymemapi/tests/test_PyMemAPI.py
```python
import sqlite3
from PyMemAPI import __version__
from PyMemAPI import Memrise, SQLite, Course
from PyMemAPI.exception import LoginError, InvalidSeperateElement, AddBulkError, AddLevelError, InputOutOfRange, LanguageError
import unittest
from pytest import MonkeyPatch
# Test version
def test_version():
assert __version__ == "0.1.0"
# Test Memrise features
CLIENT = Memrise()
COURSE: Course
class TestMemrise(unittest.TestCase):
def setUp(self):
self.monkeypatch = MonkeyPatch()
def test_login_fail(self):
with self.assertRaises(LoginError):
CLIENT.login("testingerror", "nopassword")
def test_select_course(self):
user = {"Enter username: ":"dummy_user", "Enter password: ":"<PASSWORD>"}
# self.monkeypatch.setattr("builtins.input", lambda msg: user[msg])
# self.monkeypatch.setattr("getpass.getpass", lambda msg: user[msg])
success = CLIENT.login("dummy_user","testing2022")
if success is True:
responses = {"Make your choice: ": "1"}
self.monkeypatch.setattr("builtins.input", lambda msg : responses[msg])
global COURSE
COURSE = CLIENT.select_course()
COURSE.delete_all_level()
assert COURSE.name == "Testing Course"
else:
assert success
# Unit test for Course
class TestCourse(unittest.TestCase):
def test_addlevel_with_bulk(self):
global COURSE
success = COURSE.add_level_with_bulk("Test Level", "Hello\tXinChao", "\t")
self.assertEqual(success, True)
def test_delete_level(self):
global COURSE
level_id, headers = COURSE.add_level()
success = COURSE.delete_level(level_id)
self.assertEqual(success, True)
def test_move_level(self):
global COURSE
success = COURSE.move_level(1,2)
self.assertEqual(1,1)
def test_update_external_language(self):
global COURSE
COURSE._update_audio_external("en")
self.assertEqual(1,1)
# Test the Exceptions
class TestException(unittest.TestCase):
def setUp(self):
self.monkeypatch = MonkeyPatch()
# When the separator element is different from "tab" or "comma".
def test_InvalidSeperateElement(self):
global COURSE
with self.assertRaises(InvalidSeperateElement):
success = COURSE.add_level_with_bulk("Test Level", "Hello\tXinChao", "a")
# When the user requests audio generation for an unsupported language -> handled
# This test case will fail on Linux
def test_LanguageError(self):
global COURSE
responses = {
"Choose the voice number 1: ": "1",
"Enter the number of voices you wish: ": "1",
}
self.monkeypatch.setattr("builtins.input", lambda msg : responses[msg])
COURSE.update_audio("unvalid language")
# Raise Exception for Coverage
def test_AddLevelException(self):
with self.assertRaises(AddLevelError):
raise AddLevelError(id="1",message="Test")
# Raise Exception for Coverage
def test_AddBulkException(self):
with self.assertRaises(AddBulkError):
raise AddBulkError(id="1",message="Test")
def test_InputOutOfRangeException(self):
with self.assertRaises(InputOutOfRange):
responses = {"Make your choice: ": "99"}
self.monkeypatch.setattr("builtins.input", lambda msg : responses[msg])
CLIENT.select_course()
def test_TypeError(self):
global COURSE
success = COURSE.add_level_with_bulk("Test Level", "Hello\tXinChao", "\t")
level = (COURSE.levels())[0]
word = (level.get_words())[0]
with self.assertRaises(TypeError):
word.upload_audio(1)
# Test SQLite
# This case test for Windows
def test_sync_database(db_conn,cmd):
cur: sqlite3.Cursor = db_conn.cursor()
cur.executescript(cmd)
cur.close()
db_conn.commit()
global COURSE
COURSE.sync_database("./course/course.db")
level = (COURSE.levels())[-1]
assert (level.name=="I can't say for sure")
def test_remove_audio():
global COURSE
level = (COURSE.levels())[-1]
words = level.get_words()
for word in words:
word.remove_audio()
word.upload_audio("./audio/audio.mp3")
with open("./audio/audio.mp3","rb") as fp:
audio = fp.read()
word.upload_audio(audio)
assert (1==1)
class TestSQLite(unittest.TestCase):
def test_SQLite_topic_to_bulk(self):
with self.assertRaises(Exception):
db = SQLite("./course/course.db")
db.update_ipas()
db.update_trans(src="en",dest="vi")
db.topic_to_bulk(1,external=True)
db.conn.close()
def test_SQLite_topic_to_bulk2(self):
db = SQLite("./course/course.db")
bulk = db.topic_to_bulk(1,external=True,language="en")
self.assertIsInstance(bulk,str)
def test_ExceptionLanguageError(self):
with self.assertRaises(LanguageError):
db = SQLite("./course/course.db")
db.topic_to_bulk(1,external=True,language="fr")
db.conn.close()
``` |
{
"source": "josephramsay/LDSAPI",
"score": 2
} |
#### File: LDSAPI/APIInterface/LDSAPI.py
```python
import shlex
from abc import ABC, abstractmethod #, ABCMeta
import json
import re
import os
import datetime as DT
import time
import base64
#from http.client import HTTPMessage
from six.moves.http_client import HTTPMessage
from six.moves import http_cookiejar as cjar
from six.moves.urllib import request
#from six.moves.urllib import parse as ul1
from six.moves.urllib.parse import urlparse, urlencode
from six.moves.urllib.error import URLError
from six.moves.urllib.error import HTTPError
from six.moves.urllib.request import Request
from six import string_types
try:
from LDSUtilityScripts.LinzUtil import LogManager, Authentication, LDS
except ImportError:
from LinzUtil import LogManager, Authentication, LDS
try:
from http.client import RemoteDisconnected as HttpResponseError
except ImportError:
from http.client import BadStatusLine as HttpResponseError
#from Main import CReader
#request = ul2.Request("http://api.foursquare.com/v1/user")
#base64string = base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
#request.add_header("Authorization", "Basic %s" % base64string)
#result = ul2.urlopen(request)
REDIRECT = False
SLEEP_TIME = 5*60
SLEEP_RETRY_INCR = 5
MAX_RETRY_ATTEMPTS = 10
INIT_MAX_RETRY_ATTEMPTS = 10
KEYINDEX = 0
LM = LogManager()
LM.register()
class LDSAPI(ABC):
#class LDSAPI(object):
# __metaclass__ = ABCMeta
__sec = 'LDSAPI Wrapper'
sch = None
sch_def = 'https'
url = {'lds-l': 'data.linz.govt.nz',
'lds-t': 'data-test.linz.govt.nz',
'mfe-t': 'mfe.data-test.linz.govt.nz',
'apiary': 'private-cbd7-koordinates.apiary.io',
'koordinates': 'koordinates.com',
'ambury': 'ambury-ldsl.kx.gd'}
url_def = 'lds-l'
pxy = {'linz': 'webproxy1.ad.linz.govt.nz:3128',
'local': '127.0.0.1:3128',
'noproxy':':'}
pxy_def = 'noproxy'
ath = {'key':'.apikey3',#'api_llckey',
'basic':'.credentials'}
ath_def = 'key'
def __init__(self):#, creds, cfile):
self.cookies = cjar.LWPCookieJar()
#//
self.setProxyRef(self.pxy_def)
# cant set auth because we dont know yet what auth type to use, doesnt make sense to set up on default
# self.setAuthentication(creds, self.ath[self.ath_def], self.ath_def)
self.scheme = self.sch_def
self.auth = None
self.head = {}
@abstractmethod
def setParams(self):
'''abstract host/path setting method'''
pass
def setCommonParams(self,scheme=None,host=None,fmt='json',sec=None,pth=None,url=None):
'''Assigns path/host params or tries to extract them from a url'''
self.fmt = fmt
if url:
p = urlparse(url)
self.scheme = p.scheme
self.host = p.netloc
self.path = p.path
return
if pth: self.path = self.path_ref[sec][pth]+'?format={0}'.format(self.fmt)
if host: self.host = self.url[host]
self.scheme = scheme or self.sch_def
def setProxyRef(self, pref):
#proxy = ul2.ProxyHandler({'http': self.pxy[pref]})
#self.openerstrs_ntlm = ul2.build_opener(proxy)
self.pref = pref
self.openerstrs_ntlm = self.pxy[pref]
def setAuthentication(self, creds, cfile, auth):
self.ath = {auth:self.ath[auth]}
if auth == 'basic':
self._setBasicAuth(creds, cfile)
elif auth == 'key':
self._setKeyAuth(creds, cfile)
else:
LM.err('Auth error. Need key/basic specifier',LM._LogExtra('LDSAPI','sA'))
raise Exception('Incorrect auth configuration supplied')
def _setBasicAuth(self,creds,cfile):
self.setCredentials(creds(cfile))
self._setRequestAuth("Basic {0}".format(self.b64a))
def _setKeyAuth(self,creds,cfile):
self._setRequestKey(creds(cfile))
self._setRequestAuth("key {0}".format(self.key))
def setCredentials(self, creds):
if type(creds) is dict:
self.usr, self.pwd, self.dom = [creds[i] for i in 'upd']
else:
#self.usr, self.pwd, self.dom = (*creds,None,None)[:3] #only since py3.5
self.usr, self.pwd, self.dom = (creds+(None,None))[:3]
self.b64a = LDSAPI.encode_auth(
{'user': self.usr, 'pass': <PASSWORD>, 'domain': self.dom}
if self.dom and self.dom != 'WGRP' else
{'user': self.usr, 'pass': <PASSWORD>}
)
#---------------------------------------------------------------------------
def _setRequestKey(self, creds):
self.key = creds['k'] if type(creds) is dict else creds
def _setRequestAuth(self,auth):
self.auth = auth
def setRequest(self,req):
if isinstance(req,string_types):
self.req_str = req
self.req = Request(req)
else:
self.req_str = req.full_url
self.req = req
def _setRequestData(self,data):
self.data = data
def _setRequestInfo(self,info):
self.info = info
def _setRequestHead(self,name,val):
self.head[name] = val
def addRequestHeader(self,name,head):
self.req.add_header(name,head)
def addRequestData(self,data=None):
self.req.add_data(data if data else self.data)
def getRequest(self):
return self.req
def getRequestStr(self,mask = True):
return LDS.kmask(self.req_str) if mask else self.req_str
#---------------------------------------------------------------------------
def setResponse(self,res):
self.res = res
self._setResponseData(res.read())
self._setResponseInfo(res.info())
self._setResponseURL(res.geturl())
self._setResponseHead(LDSAPI.parseHeaders(self.info._headers) if hasattr(self.info,'_headers') else None)
def _setResponseData(self,respdata):
self.respdata = respdata
def _setResponseHead(self,head):
self.head = head
def _setResponseInfo(self,info):
self.info = info
def _setResponseURL(self,url):
self.url = url
def getResponse(self):
return {'info':self.info,'head':self.head,'data':self.respdata}
#---------------------------------------------------------------------------
def setExteralLogManager(self,lm):
global LM
LM = lm
@staticmethod
def parseHeaders(head):
# ['Server: nginx\r\n',
# 'Date: Wed, 14 May 2014 20:59:22 GMT\r\n',
# 'Content-Type: application/json\r\n',
# 'Transfer-Encoding: chunked\r\n',
# 'Connection: close\r\n',
# 'Vary: Accept-Encoding\r\n',
# 'Link: <https://data.linz.govt.nz/services/api/v1/layers/?sort=name&kind=raster&format=json>;
# rel="sort-name",
# <https://data.linz.govt.nz/services/api/v1/layers/?sort=-name&kind=raster&format=json>;
# rel="sort-name-desc",
# <https://data.linz.govt.nz/services/api/v1/layers/?kind=raster&page=1&format=json>;
# rel="page-previous",
# <https://data.linz.govt.nz/services/api/v1/layers/?kind=raster&page=3&format=json>;
# rel="page-next",
# <https://data.linz.govt.nz/services/api/v1/layers/?kind=raster&page=4&format=json>;
# rel="page-last"\r\n',
# 'Allow: GET, POST, HEAD, OPTIONS\r\n',
# 'Vary: Accept,Accept-Encoding\r\n',
# 'X-K-gentime: 0.716\r\n']
h={}
relist = {'server':'Server:\s(.*)\r\n',
'date':'Date:\s(.*)\r\n',
'content-type':'Content-Type:\s(.*)\r\n',
'transfer-encoding':'Transfer-Encoding:\s(.*)\r\n',
'connection':'Connection:\s(.*)\r\n',
'vary':'Vary:\s(.*)\r\n',
'link':'Link:\s(.*)\r\n',
'vary-acc':'Vary:\s(.*?)\r\n',
'x-k-gentime':'X-K-gentime:\s(.*?)\r\n',
'oauth-scopes':'OAuth-Scopes:\s(.*?)\r\n'}
if isinstance(head,HTTPMessage):
for k in relist.keys():
s = [i[1] for i in head._headers if i[0].lower()==k]
if s: h[k] = s[0]
elif isinstance(head,string_types):
for k in relist.keys():
s = re.search(relist[k],'|'.join(head))
if s: h[k] = s.group(1)
elif isinstance(head,list):
for k in relist.keys():
s = [i[1] for i in head if i[0].lower()==k]
if s: h[k] = s[0]
# ---------------------
#Pull apart link string, if its available
lnlist = {'sort-name':'<(http.*?)>;\s+rel="sort-name"',
'sort-name-desc':'<(http.*?)>;\s+rel="sort-name-desc"',
'page-previous':'<(http.*?)>;\s+rel="page-previous"',
'page-next':'<(http.*?)>;\s+rel="page-next"',
'page-last':'<(http.*?)>;\s+rel="page-last"'}
link = h['link'].split(',') if 'link' in h else []
for ref in link:
for rex in lnlist.keys():
s = re.search(lnlist[rex],ref)
if s:
if 'page' in rex:
p = re.search('page=(\d+)',s.group(1))
h[rex] = {'u':s.group(1),'p':int(p.group(1)) if p else None}
else:
h[rex] = s.group(1)
continue
return h
def opener(self,purl,puser=None,ppass=None,pscheme=('http','https')):
if REDIRECT:
h1,h2 = REDIRECT.BindableHTTPHandler,REDIRECT.BindableHTTPSHandler
else:
h1,h2 = request.HTTPHandler, request.HTTPSHandler
handlers = [h1(), h2(), request.HTTPCookieProcessor(self.cookies)]
if self.pref != 'noproxy' and purl and len(purl)>1:
#if not noproxy and a proxy url is provided (and its not the placeholder url ie noproxy=':') add a handler
handlers += [request.ProxyHandler({ps:purl for ps in pscheme}),]
#handlers += [request.ProxyHandler({ps:purl}) for ps in pscheme]
if puser and ppass:
#if proxy user/pass provided and a proxy auth handler
pm = request.HTTPPasswordMgrWithDefaultRealm()
pm.add_password(None, purl, puser, ppass)
handlers += [request.ProxyBasicAuthHandler(pm),]
return request.build_opener(*handlers)
def connect(self, plus='', head=None, data={}, auth=None):
'''URL connection wrapper, wraps URL strings in request objects, applying selected openers'''
#self.path='/services/api/v1/layers/{id}/versions/{version}/import/'
self.setRequest('{0}://{1}{2}{3}'.format(self.scheme,self.host, self.path, plus))
# Add user header if provided
if head:
hname, hval = shlex.split(head)[0].strip("(),"), shlex.split(head)[1].strip("(),")
self._setRequestHead(hname, hval)
self.addRequestHeader(hname, hval)
if auth:
self._setRequestAuth(auth)
self.addRequestHeader("Authorization", auth)
# Add user data if provided
if data: #or true #for testing
#NB. adding a data component in request switches request from GET to POST
data = urlencode(data)
self._setRequestData(data)
self.addRequestData(data)
return self.conn(self.getRequest())
def conn(self,req):
'''URL connection wrapper catching common exceptions and retrying where necessary
:param req: either a URL string or a Request object
'''
sr = self.__sec,'Connection Manager'
self.setRequest(req)
req_str = self.getRequestStr()
#if self.auth is set it should have been added to the request header... might be legacy where that hasn't happened
if self.auth:
self.addRequestHeader("Authorization", self.auth)
request.install_opener(self.opener(purl=self.openerstrs_ntlm))
retry = INIT_MAX_RETRY_ATTEMPTS
while retry>0:
retry -= 1
try:
handle = request.urlopen(self.getRequest())#,data)
if handle:
if handle.geturl()!=req_str:
msg = 'Redirect Warning'
#cannot easily mask redirected url so logging original
LM.info(msg,LM._LogExtra(*sr,exc=None,url=req_str,rty=0))
return handle
#self.setResponse(handle)
#break
except HTTPError as he:
last_exc = he
if re.search('429',str(he)):
msg = 'RateLimit Error {0}. Sleep awaiting 429 expiry. Attempt {1}'.format(he,MAX_RETRY_ATTEMPTS-retry)
LM.error(msg,LM._LogExtra(*sr,exc=he,url=req_str,rty=retry))
LDSAPI.sleepIncr(retry)
continue
elif retry:
# I'm leaving this code here to test with because LDS was
# somehow throwing exceptions as well as redirecting
#
#if re.search('301',str(he)):
# msg = 'Redirect Error {0}'.format(he)
# #if we have a valid response and its a 301 see if it contains a redirect-to
# if handle and handle.geturl():
# retry = 1
# self.setRequest(handle.geturl()) #TODO reauth?
# msg += '. Attempting alternate connect'
# else:
# retry = 0
# LM.error(msg,LM._LogExtra(*sr,exc=he,url=self.getRequestStr(),rty=0))
# continue
if re.search('401|500',str(he)):
msg = 'HTTP Error {0} Returns {1}. Attempt {2}'.format(req_str,he,MAX_RETRY_ATTEMPTS-retry)
LM.error(msg,LM._LogExtra(*sr,exc=he,url=req_str,rty=retry))
continue
elif re.search('403',str(he)):
msg = 'HTTP Error {0} Returns {1}. Attempt {2} (consider proxy)'.format(req_str,he,MAX_RETRY_ATTEMPTS-retry)
LM.error(msg,LM._LogExtra(*sr,exc=he,url=req_str,rty=retry))
continue
elif re.search('502',str(he)):
msg = 'Proxy Error {0} Returns {1}. Attempt {2}'.format(req_str,he,MAX_RETRY_ATTEMPTS-retry)
LM.error(msg,LM._LogExtra(*sr,exc=he,url=req_str,rty=retry))
continue
elif re.search('410',str(he)):
msg = 'Layer removed {0} Returns {1}. Attempt {2}'.format(req_str,he,MAX_RETRY_ATTEMPTS-retry)
LM.error(msg,LM._LogExtra(*sr,exc=he,url=req_str,rty=retry))
retry = 0
continue
else:
msg = 'Error with request {0} returns {1}'.format(req_str,he)
LM.error(msg,LM._LogExtra(*sr,exc=he,url=req_str,rty=retry))
continue
else:
#Retries have been exhausted, re-raise the active HTTP exception
raise HTTPError(req_str, he.code, '{0} {1}'.format(he.msg, msg), he.hdrs, he.fp)
except HttpResponseError as rd:
LM.warning('Disconnect. {}'.format(rd),
LM._LogExtra(*sr,exc=rd,url=req_str,rty=retry))
LDSAPI.sleepIncr(retry)
continue
except URLError as ue:
LM.warning('URL error on connect {}'.format(ue),
LM._LogExtra(*sr,exc=ue,url=req_str,rty=retry))
if re.search('Connection refused|violation of protocol',str(ue)):
LDSAPI.sleepIncr(retry)
continue
#raise ue
except ConnectionError as ce:
LM.warning('Error on connection. {}'.format(ce),
LM._LogExtra(*sr,exc=ce,url=req_str,rty=retry))
LDSAPI.sleepIncr(retry)
continue
except ValueError as ve:
LM.error('Value error on connect {}'.format(ve),LM._LogExtra(*sr,exc=ve,url=req_str,rty=retry))
raise ve
except Exception as xe:
LM.error('Other error on connect {}'.format(xe),LM._LogExtra(*sr,exc=xe,url=req_str,rty=retry))
raise xe
else:
raise last_exc
#except Exception as e:
# print e
# def _testalt(self,ref='basic'):
# p = os.path.join(os.path.dirname(__file__),cxf or LDSAPI.ath[ref])
# self.setAuthentication(Authentication.creds,p, ref)
@staticmethod
def encode_auth(auth):
'''Build and b64 encode a http authentication string. [Needs to be bytes str, hence en/decode()]'''
if 'domain' in auth:
astr = '{d}\{u}:{p}'.format(u=auth['user'], p=auth['pass'], d=auth['domain']).strip().encode()
else:
astr = '{u}:{p}'.format(u=auth['user'], p=auth['pass']).strip().encode()
return base64.b64encode(astr).decode()
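# Example (for illustration): encode_auth({'user': 'u', 'pass': 'p'}) -> 'dTpw',
# i.e. base64 of 'u:p'; with a 'domain' key the encoded string is of the form 'domain\user:pass'.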
def fetchPages(self,psub=''):
sr = self.__sec,'Page fetch'
upd = []
page = 0
pagel = None
morepages = True
while morepages:
page = page + 1
pstr = psub+'&page={0}'.format(page)
try:
res = self.connect(plus=pstr)
if res: self.setResponse(res)
else: raise HTTPError(pstr, None, 'No Response using URL {}'.format(pstr), None, None)
#api.dispReq(api.req)
#api.dispRes(api.res)
except HTTPError as he:
LM.error('HTTP Error on page fetch {}'.format(he),LM._LogExtra(*sr,exc=he,url=pstr))
morepages = False
raise
#continue
except Exception as e:
#Outer catch of unknown errors
LM.error('Error on page fetch {}'.format(e),LM._LogExtra(*sr,exc=e,url=pstr))
raise
# The logic here is a bit redundant but basically if no last page found then its prob the last page
# otherwise save the last page value and compare to current page. If they're equal get off loop
if 'page-last' in self.head:
pagel = self.head['page-last']['p']
else:
morepages = False
if page == pagel:
morepages = False
jdata = json.loads(self.respdata.decode())
upd += [jdata,] if isinstance(jdata,dict) else jdata
return upd
@staticmethod
def sleepIncr(r):
t = (INIT_MAX_RETRY_ATTEMPTS-r)*SLEEP_RETRY_INCR
print('tock' if t%2 else 'tick',t,'{}/{}'.format(r,INIT_MAX_RETRY_ATTEMPTS))
time.sleep(t)
@staticmethod
def dispReq(req):
print ('Request\n-------\n')
print (LDS.kmask(req.get_full_url()),'auth',req.get_header('Authorization'))
@staticmethod
def dispRes(res):
print ('Response\n--------\n')
print (res.info())
@staticmethod
def dispJSON(res):
for l in json.loads(res.read()):
print ('{0} - {1}\n'.format(l[0],l[1]))
@staticmethod
def _populate(data):
return json.dumps({"name": data[0],"type": data[1],"description": data[2],
"categories": data[3], "user": data[4], "options":{"username": data[5],"password": data[6]},
"url_remote": data[7],"scan_schedule": data[8]})
# GET
# /services/api/v1/data/
# Read-only, filterable list views.
# GET
# /services/api/v1/layers/
# Filterable views of layers (at layers/) and tables (at tables/) respectively.
# POST
# /services/api/v1/layers/
# Creates a new layer. All fields except name and data.datasources are optional.
# GET
# /services/api/v1/layers/drafts/
# Filterable list views of layers (layers/drafts/) and tables (tables/drafts/) respectively, similar to /layers/ and /tables/. These views show the draft version of each layer or table
#--------------------------------DataAccess
# GET
# /services/api/v1/layers/{id}/
# Displays details of a layer layers/{id}/ or a table tables/{id}/.
# POST
# /services/api/v1/layers/{id}/versions/
# Creates a new draft version, accepting the same content as POST layers/.
# GET
# /services/api/v1/layers/{id}/versions/draft/
# Get a link to the draft version for a layer or table.
# GET
# /services/api/v1/layers/{id}/versions/published/
# Get a link to the current published version for a layer or table.
# GET
# /services/api/v1/layers/{id}/versions/{version}/
# Get the details for a specific layer or table version.
# PUT
# /services/api/v1/layers/{id}/versions/{version}/
# Edits this draft layerversion. If it's already published, a 405 response will be returned.
# POST
# /services/api/v1/layers/{id}/versions/{version}/import/
# Starts importing this draft layerversion (cancelling any running import), even if the data object hasn't changed from the previous version.
# POST
# /services/api/v1/layers/{id}/versions/import/
# A shortcut to create a new version and start importing it.
# POST
# /services/api/v1/layers/{id}/versions/{version}/publish/
# Creates a publish task just for this version, which publishes as soon as any import is complete.
# DELETE
# /services/api/v1/layers/{id}/versions/{version}/
class DataAPI(LDSAPI):
path_ref = {'list':
{'dgt_data' :'/services/api/v1/data/',
'dgt_layers' :'/services/api/v1/layers/',
'dgt_tables' :'/services/api/v1/tables/',
'dgt_groups' :'/services/api/v1/groups',
'dgt_users' :'/services/api/v1/users',
'dpt_layers' :'/services/api/v1/layers/',
'dpt_tables' :'/services/api/v1/tables/',
'dpt_groups' :'/services/api/v1/groups',
'dpt_users' :'/services/api/v1/users',
'dgt_draftlayers' :'/services/api/v1/layers/drafts/',
'dgt_drafttables' :'/services/api/v1/tables/drafts/'},
'detail':
{'dgt_layers' :'/services/api/v1/layers/{id}/',
'dgt_tables' :'/services/api/v1/tables/{id}/',
'dgt_groups' :'/services/api/v1/groups/{id}/',
'dgt_users' :'/services/api/v1/users/{id}/',
'ddl_delete' :'/services/api/v1/layers/{id}/'},
'access':
{'dgt_permissions' :'/services/api/v1/layers/{id}/permissions/'},
'version':
{'dgt_version' :'/services/api/v1/layers/{id}/versions/',
'dpt_version' :'/services/api/v1/layers/{id}/versions/',
'dgt_draftversion' :'/services/api/v1/layers/{id}/versions/draft/',
'dgt_publicversion':'/services/api/v1/layers/{id}/versions/published/',
'dgt_versioninfo' :'/services/api/v1/layers/{id}/versions/{version}/',
'dpu_draftversion' :'/services/api/v1/layers/{id}/versions/{version}/',
'dpt_importversion':'/services/api/v1/layers/{id}/versions/{version}/import/',
'dpt_publish' :'/services/api/v1/layers/{id}/versions/{version}/publish/',
'ddl_delete' :'/services/api/v1/layers/{id}/versions/{version}/'},
'publish':
{'dpt_publish' :'/services/api/v1/publish/',
'dgt_publish' :'/services/api/v1/publish/{id}/',
'ddl_delete' :'/services/api/v1/publish/{id}/'},
'permit':{},
'metadata':
{'dgt_metadata' :'/services/api/v1/layers/{id}/metadata/',
'dgt_metaconv' :'/services/api/v1/layers/{id}/metadata/{type}/',
'dgt_metaorig' :'/services/api/v1/layers/{id}/versions/{version}/metadata/',
'dgt_metaconvver' :'/services/api/v1/layers/{id}/versions/{version}/metadata/{type}/'},
'unpublished':
{'dgt_users':'/services/api/v2/users/'}#this of course doesn't work
}
def __init__(self):
super(DataAPI,self).__init__()
def setParams(self, sec='list', pth='dgt_data', host=LDSAPI.url_def, fmt='json', id=None, version=None, type=None):
super(DataAPI,self).setCommonParams(host=host,fmt=fmt,sec=sec,pth=pth)
if id and re.search('{id}',self.path): self.path = self.path.replace('{id}',str(id))
if version and re.search('{version}',self.path): self.path = self.path.replace('{version}',str(version))
if type and re.search('{type}',self.path): self.path = self.path.replace('{type}',str(type))
#self.host = super(DataAPI, self).url[host]
class SourceAPI(LDSAPI):
path_ref = {'list':
{'sgt_sources':'/services/api/v1/sources/',
'spt_sources':'/services/api/v1/sources/'},
'detail':
{'sgt_sources':'/services/api/v1/sources/{id}/',
'spt_sources':'/services/api/v1/sources/{id}/'},
'metadata':
{'sgt_metadata':'/services/api/v1/sources/{id}/metadata/',
'spt_metadata':'/services/api/v1/sources/{id}/metadata/',
'spt_metatype':'/services/api/v1/sources/{id}/metadata/{type}/'},
'scans':
{'sgt_scans':'/services/api/v1/sources/{source-id}/',
'spt_scans':'/services/api/v1/sources/{source-id}/',
'sgt_scanid':'/services/api/v1/sources/{source-id}/scans/{scan-id}/',
'sdt_scandelete':'/services/api/v1/sources/{source-id}/scans/{scan-id}/',
'sgt_scanlog':'/services/api/v1/sources/{source-id}/scans/{scan-id}/log/'},
'datasource':
{'sgt_dslist':'/services/api/v1/sources/{source-id}/datasources/',
'sgt_dsinfo':'/services/api/v1/sources/{source-id}/datasources/{datasource-id}/',
'sgt_dsmeta':'/services/api/v1/sources/{source-id}/datasources/{datasource-id}/metadata/',
},
'groups':
{'sgt_groups':'/services/api/v1/groups/',
'sgt_groupid':'/services/api/v1/groups/{id}/'}
}
def __init__(self):
super(SourceAPI,self).__init__()
def setParams(self,sec='list',pth='sgt_sources',host='lds-l',fmt='json',id=None,type=None,source_id=None,scan_id=None,datasource_id=None):
super(SourceAPI,self).setCommonParams(host=host,fmt=fmt,sec=sec,pth=pth)
#insert optional args if available
if id and re.search('{id}',self.path): self.path = self.path.replace('{id}',str(id))
if type and re.search('{type}',self.path): self.path = self.path.replace('{type}',str(type))
if source_id and re.search('{source-id}',self.path): self.path = self.path.replace('{source-id}',str(source_id))
if scan_id and re.search('{scan-id}',self.path): self.path = self.path.replace('{scan-id}',str(scan_id))
if datasource_id and re.search('{datasource-id}',self.path): self.path = self.path.replace('{datasource-id}',str(datasource_id))
#self.host = super(SourceAPI,self).url[host]
# GET
# /services/api/v1/layers/{id}/redactions/
# Displays a detailed list of redactions for the layer.
# POST
# /services/api/v1/layers{id}/redactions/
# Creates a new redaction for layer {id}.
#
# Note that start_version <= affected versions <= end_version
# primary_key: The primary key(s) for the item being redacted. This should identify a single feature.
# start_version: The URL of the first layer version to perform the redaction on.
# end_version: (Optional) The URL of the last layer version to perform the redaction on.
# new_values: The new values for the row. This can be any subset of fields and only specified fields will be redacted.
# message: A message to be stored with the redaction.
# GET
# /services/api/v1/layers/{id}/redactions/{redaction}/
# Gets information about a specific redaction.
class RedactionAPI(LDSAPI):
path_ref = {'list':
{'rgt_disp' :'/services/api/v1/layers/{id}/redactions/',
'rpt_disp' :'/services/api/v1/layers/{id}/redactions/'},
'redact':
{'rgt_info':'/services/api/v1/layers/{id}/redactions/{redaction}/'}
}
def __init__(self):
super(RedactionAPI,self).__init__()
def setParams(self,sec='list',pth='rgt_disp',h='lds-l',fmt='json',id=None,redaction=None):
super(RedactionAPI,self).setCommonParams(host=h,fmt=fmt,sec=sec,pth=pth)
#insert optional args if available
if id and re.search('{id}',self.path): self.path = self.path.replace('{id}',str(id))
if redaction and re.search('{redaction}',self.path): self.path = self.path.replace('{redaction}',str(redaction))
#self.host = super(RedactionAPI,self).url[h]
class APIAccess(object):
defs = (LDSAPI.url_def, LDSAPI.pxy_def, LDSAPI.ath_def)
def __init__(self, apit, creds, cfile, refs):
self.api = apit() # Set a data, src or redact api
self.uref,self.pref,self.aref = refs
self.api.setProxyRef(self.pref)
self.api.setAuthentication(creds, cfile, self.aref)
def readLayerPages(self):
'''Calls API custom page reader'''
self.api.setParams(sec='list',pth=self.lpath,host=self.uref)
return self.api.fetchPages()
def readAllLayerIDs(self):
'''Extracts and returns IDs from reading layer-pages'''
return [p['id'] for p in self.readLayerPages() if 'id' in p]
def readGroupPages(self):
'''Calls API custom page reader'''
self.api.setParams(sec='list',pth=self.gpath,host=self.uref)
return self.api.fetchPages()
def readAllGroupIDs(self):
'''Extracts and returns IDs from reading group-pages'''
return [p['id'] for p in self.readGroupPages() if 'id' in p]
class SourceAccess(APIAccess):
'''Convenience class for accessing sourceapi data'''
def __init__(self,creds,ap_creds, uref=LDSAPI.url_def, pref=LDSAPI.pxy_def, aref=LDSAPI.ath_def):
super(SourceAccess,self).__init__(SourceAPI,creds,ap_creds, (uref, pref, aref))
self.path = 'sgt_sources'
#TODO. Implement these functions
def writeDetailFields(self):
pass
def writePermissionFields(self):
pass
def writeSelectedFields(self):
pass
def writePrimaryKeyFields(self):
pass
class RedactionAccess(APIAccess):
'''Convenience class for redacting api data'''
def __init__(self,creds,ap_creds, uref=LDSAPI.url_def, pref=LDSAPI.pxy_def, aref=LDSAPI.ath_def):
super(RedactionAccess,self).__init__(RedactionAPI,creds,ap_creds, (uref, pref, aref))
self.path = 'sgt_sources'
#TODO. Implement these functions
def redactDetailFields(self):
pass
def redactPermissionFields(self):
pass
def redactSelectedFields(self):
pass
def redactPrimaryKeyFields(self):
pass
class StaticFetch():
UREF = LDSAPI.url_def
PREF = LDSAPI.pxy_def
AREF = LDSAPI.ath_def
@classmethod
def get(cls,uref=None,pref=None,korb=None,cxf=None):
'''get requested URL using specified defs'''
uref = uref or cls.UREF
pref = pref or cls.PREF
if isinstance(korb,dict):
return cls._get(uref,pref,korb,cxf)
elif korb and korb.lower() in ['key','basic']:
aref = korb.lower()
else:
aref = cls.AREF
method = (Authentication.creds,cxf or LDSAPI.ath[aref]) if aref=='basic' else (Authentication.apikey,cxf or LDSAPI.ath[aref])
da = DataAccess(*method,uref=uref,pref=pref,aref=aref)
return da.api.conn(uref)
#return res or da.api.getResponse()['data']
@classmethod
def _get(cls,uref=None,pref=None,korb={},cxf=None):
'''korb must be a dict containing {'key':'ABC...','up':['user','pass'],'kfile':'apikey','cfile':'creds'}'''
kk0 = list(korb.keys())[0]
kd = {
'key' :(Authentication.direct,'key'),
'up' :(Authentication.direct,'basic'),
'kfile' :(Authentication.apikey,'key'),
'cfile' :(Authentication.creds, 'basic')
}
da = DataAccess(kd[kk0][0],korb[kk0],uref=uref,pref=pref,aref=kd[kk0][1])
return da.api.conn(uref)
#unnecessary since only classmethods
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
pass
class DataAccess(APIAccess):
'''Convenience class for accessing commonly needed data-api data'''
PAGES = ('data','permission','group')
LAYER_PAGES = ('data','permission')
GROUP_PAGES = ('group',)
def __init__(self, creds, cfile, uref=LDSAPI.url_def, pref=LDSAPI.pxy_def, aref=LDSAPI.ath_def):
super(DataAccess, self).__init__(DataAPI, creds, cfile, (uref, pref, aref))
self.path = 'dgt_layers'
self.lpath = 'dgt_layers'
self.gpath = 'dgt_groups'
self.ppath = 'dgt_permissions'
def _set(self,l,nl=None):
'''fetch if value present in path and returns utf encoded'''
if nl:
for ni in nl:
if l and (ni in l or (isinstance(ni,int) and isinstance(l,(list,tuple)))):
l = l[ni]
else:
l = None
if isinstance(l, string_types):
return l.encode('utf-8')
else:
return l
#if n2: return l[n1][n2].encode('utf8') if n1 in l and n2 in l[n1] else None
#else: return l[n1].encode('utf8') if n1 in l else None
def readLayerFields(self,i):
'''All fields from detail layer pages'''
self.api.setParams(sec='detail',pth=self.lpath,host='lds-l',id=i)
return self.api.fetchPages()[0]
def readGroupFields(self,i):
'''All fields from detail group pages'''
self.api.setParams(sec='detail',pth=self.gpath,host='lds-l',id=i)
return self.api.fetchPages()[0]
def readPermissionFields(self,i,gfilter=True):
'''All fields from permission pages, filtered by group.everyone i.e. publicly accessible'''
self.api.setParams(sec='access',pth=self.ppath,host='lds-l',id=i)
pge = [p for p in self.api.fetchPages() if not gfilter or p['id']=='group.everyone']
return pge[0] if pge else None
def readLayers(self): return self._readFields(idfunc=self.readAllLayerIDs,pagereq=self.LAYER_PAGES)
#def readLayers(self): return self._readFields(idfunc=self._testLayerList,pagereq=self.LAYER_PAGES)
#def _testLayerList(self): return [52109,51779]
def readGroups(self): return self._readFields(idfunc=self.readAllGroupIDs,pagereq=self.GROUP_PAGES)
#def readGroups(self): return self._readFields(idfunc=self._testGroupList,pagereq=self.GROUP_PAGES)
#def _testGroupList(self): return [2006,2115]
def _readFields(self,idfunc,pagereq):
'''Read the fields from selected (predefined) pages'''
detail,herror = {},{}
for i in idfunc():
#print ('WARNING. READING LDS-API-ID SUBSET',i)
detail[str(i)],herror[str(i)] = self._readDetail(i,pagereq)
return detail,herror
def _readDetail(self,i,pr):
'''INPROGRESS Attempt to consolidate the readX functions'''
dd,he = {},None
fun_det = {
'data':(self.readLayerFields,
{'id':('id',),'title':('title',),'type':('type',),'group':('group','id'),'kind':('kind',),'cat':('categories',0,'slug'),'crs':('data','crs'),\
'grp-id':('group','id'),'grp-nm':('group','name'),\
'lic-id':('license','id'),'lic-ttl':('license','title'),'lic-typ':('license','type'),'lic-ver':('license','version'),\
'data-crs':('data','crs'),'data-pky':('data','primary_key_fields'),'data-geo':('data','geometry_field'),'data-fld':('data','fields'),\
'date-pub':('published_at',),'date-fst':('first_published_at',),'date-crt':('created_at',),'date-col':('collected_at',)
}),
'permission':(self.readPermissionFields,
{'prm-id':('id',),'prm-typ':('permission',),'prm-gid':('group','id',),'prm-gnm':('group','name')}),
'group':(self.readGroupFields,
{'grp-id':('id',),'grp-name':('name',),'grp-lyrs':('stats','layers',),'grp-tbls':('stats','tables'),'grp-docs':('stats','documents')})
}
for fd in set(fun_det.keys()).intersection(pr):
#fetch the requested pages
try:
d = fun_det[fd][0](i)
except HTTPError as he:
LM.error('HTTP Error on selectedFields data {}'.format(he),LM._LogExtra('LArsf','dhe',xid=i))
return
#put the results into a dict
try:
dd.update( {k:self._set(d,fun_det[fd][1][k]) for k in fun_det[fd][1]} if d else {d:None for d in fun_det[fd][1]} )
#special postprocess
if fd == 'data':
dd['data-pky'] = self._set(','.join(dd['data-pky']))
dd['data-fld'] = self._set(','.join([f['name'] for f in dd['data-fld']]))
except IndexError as ie:
#not raising this as an error since it only occurs on 'test' layers
msg = '{0}. Index error getting {1},{2}'.format(ie,d['id'],d['name'])
LM.error(msg,LM._LogExtra('LArsf','die',xid=i))
except TypeError as te:
msg = '{0}. Type error on layer {1}/{2}'.format(te,d['id'],d['name'])
LM.error(msg,LM._LogExtra('LArsf','dte',xid=i))
return
except Exception as e:
msg = '{0}. Error on layer {1}/{2}'.format(e,d['id'],d['name'])
LM.error(msg,LM._LogExtra('LArsf','de',xid=i))
raise
return dd,he
# def _readDetailGroup(self,i):
# he = None
# try:
# #returns the permissions for group.everyone only
# g = self.readGroupFields(i)
# except HTTPError as he:
# LM.error('HTTP Error on selectedFields group '+he,LM._LogExtra('LArsf','phe',xid=i))
# return
#
# try:
# gx = {'grp-name':('name',),'grp-lyrs':('stats','layers',),'grp-tbls':('group','tables'),'grp-docs':('group','documents')}
# gg = {k:self._set(g,gx[k]) for k in gx} if g else {g:None for g in gx}
#
# except IndexError as ie:
# #not raising this as an error since it only occurs on 'test' layers
# msg = '{0} error getting {1},{2}'.format(ie,i,g['name'])
# LM.error(msg,LM._LogExtra('LArsf','gie',xid=i))
# except TypeError as te:
# msg = '{0} error on layer {1}/{2}'.format(te,i,g['name'])
# LM.error(msg,LM._LogExtra('LArsf','gte',xid=i))
# return
# except Exception as e:
# msg = '{0} error on layer {1}/{2}'.format(e,g['id'],g['name'])
# LM.error(msg,LM._LogExtra('LArsf','ge',xid=i))
# raise
#
# return gg,he
def _readSummaryPages2(self,pagereq=('data','group')):
'''IN_PROGRESS Sometimes we don't need to get the detail pages. Just extract the summary'''
detail = {}
herror = {}
if 'data' in pagereq:
d,dh = self._readSummaryData()
detail.update(d)
if dh: herror += dh
if 'group' in pagereq:
d,dh = self._readSummaryGroup()
detail.update(d)
if dh: herror += dh
return detail,herror
def readPrimaryKeyFields(self):
'''Read PrimaryKey field from detail pages'''
res,_ = self.readLayers(pagereq=('data',))
return res
'''Copied from LDSChecker for availability'''
# class AuthenticationException(Exception):pass
# class Authentication(object):
# '''Static methods to read keys/user/pass from files'''
#
# @staticmethod
# def apikey(keyfile,kk='key',keyindex=None):
# '''Returns current key from a keyfile advancing KEYINDEX on subsequent calls (if ki not provided)'''
# global KEYINDEX
# key = Authentication.searchfile(keyfile,'{0}'.format(kk))
# if key: return key
# key = Authentication.searchfile(keyfile,'{0}{1}'.format(kk,keyindex or KEYINDEX))
# if not key and not keyindex:
# KEYINDEX = 0
# key = Authentication.searchfile(keyfile,'{0}{1}'.format(kk,KEYINDEX))
# elif not keyindex:
# KEYINDEX += 1
# return key
#
# @staticmethod
# def direct(value):
# '''Returns arg for cases where user just wants to submit a key/userpass directly'''
# return value
#
# @staticmethod
# def creds(cfile):
# '''Read CIFS credentials file'''
# return (Authentication.searchfile(cfile,'username'),\
# Authentication.searchfile(cfile,'password'),\
# Authentication.searchfile(cfile, 'domain'))
#
# @staticmethod
# def userpass(cfile):
# return creds(cfile)[:2]
#
# #@staticmethod
# #def userpass(upfile):
# # return (Authentication.searchfile(upfile,'username'),Authentication.searchfile(upfile,'password'))
#
# @staticmethod
# def searchfile(spf,skey,default=None):
# '''Given a file name incl path look for the file in the provided path, the home dir and
# the current dir then checks this file for the key/val named in skey'''
# #value = default
# #look in current then app then home
# sp,sf = os.path.split(spf)
# spath = (sp,'',os.path.expanduser('~'),os.path.dirname(__file__))
# verified = [os.path.join(p,sf) for p in spath if os.path.lexists(os.path.join(p,sf))]
# if not verified:
# LM.error('Cannot find file '+sf,LM._LogExtra('LAAs','sf'))
# raise AuthenticationException('Cannot find requested file {}'.format(sf))
# with open(verified[0],'r') as h:
# for line in h.readlines():
# k = re.search('^{key}=(.*)$'.format(key=skey),line)
# if k: return k.group(1)
# return default
#
# @staticmethod
# def getHeader(korb,kfile):
# '''Convenience method for auth header'''
# if korb.lower() == 'basic':
# b64s = base64.encodestring('{0}:{1}'.format(*Authentication.userpass(kfile))).replace('\n', '')
# return ('Authorization', 'Basic {0}'.format(b64s))
# elif korb.lower() == 'key':
# key = Authentication.apikey(kfile)
# return ('Authorization', 'key {0}'.format(key))
# return None # Throw something
class APIFunctionTest(object):
'''Class will not run as-is but illustrates by example API use and the paging mechanism'''
credsfile = os.path.abspath(os.path.join(os.path.dirname(__file__),'..','.test_credentials'))
def _getCreds(self,cfile):
return 'user','pass','domain'
def _getPages(self):
api = DataAPI(creds,self.credsfile)
api.setParams(sec='list',pth='dgt_layers',host='lds-l')
return api.fetchPages()
def _getUsers(self):
api = DataAPI(creds,self.credsfile)
api.setParams(sec='unpublished',pth='dgt_users',host='lds-l')
return api.fetchPages()
def _getLastPubLayers(self,lk):
'''Example function fetching raster layer id's with their last published date'''
api = DataAPI(creds,self.credsfile)
api.setParams(sec='list',pth='dgt_layers',host='lds-l')
pages = api.fetchPages('&kind={0}'.format(lk))
return [(p['id'],DT.datetime(*map(int, re.split('[^\d]', p['published_at'])[:-1]))) for p in pages if 'id' in p and 'published_at' in p]
def _testSA(self):
sa = SourceAccess(creds,self.credsfile)
#print sa.readAllLayerIDs()
res = sa.readLayerPages()
lsaids = [(r['id'],r['last_scanned_at']) for r in res if r['last_scanned_at']]
for lid,dt in lsaids:
print ('layer {} last scanned at {}'.format(lid,dt))
def _testDA(self):
da = DataAccess(creds,self.credsfile)
#print sa.readAllLayerIDs()
res = da.readPrimaryKeyFields()
print(res)
def _testSF(self):
#,plus='',head=None,data={}
res1 = StaticFetch.get(uref='https://data.linz.govt.nz/layer/51424',korb='KEY',cxf='.apikey3')
print(res1)
res2 = StaticFetch.get(uref='https://data.linz.govt.nz/layer/51414')
print(res2)
def creds(cfile):
'''Read CIFS credentials file'''
return {
'u':searchfile(cfile,'username'),
'p':searchfile(cfile,'password'),
'd':searchfile(cfile,'domain','WGRP'),
'k':searchfile(cfile,'key')
}
def searchfile(sfile,skey,default=None):
value = default
with open(sfile,'r') as h:
for line in h.readlines():
k = re.search('^{key}=(.*)$'.format(key=skey),line)
if k: value=k.group(1)
return value
def main():
global REDIRECT
if REDIRECT:
import BindingIPHandler as REDIRECT
bw = REDIRECT.BindableWrapper()
bw.getLocalIP(True)
t = APIFunctionTest()
#print t._getLastPubLayers(lk='raster')
#t._testSA()
#t._testDA()
t._testSF()
#print t._getUsers()
if __name__ == '__main__':
main()
```
#### File: LDSAPI/APITest/TestDataAPI.py
```python
import unittest
import json
import os
from APIInterface.LDSAPI import DataAPI
#from APIInterface.LDSAPI import SourceAPI
#from APIInterface.LDSAPI import RedactionAPI
from .TestFileReader import FileReader
from .TestSuper import APITestCase
ID = 455
VER = 460
TYP = 'iso'
FMT = 'json'
class DataTester(APITestCase):
avoid = ['dpu_draftversion','ddl_delete']
def setUp(self):
print('D')#\n----------------------------------\n'
self.api = DataAPI(FileReader.creds,self.cdir+self.cfile)
self.api.setParams()
def tearDown(self):
self.api = None
#basic connect
def test_10_ReadBaseData(self):
self.api.connect()
self.api.dispRes(self.api.res)
#parameter sets test
def test_20_DataAPI_list(self):
sec='list'
for pth in list(self.api.data_path[sec].keys()):
print('*** Data API section={} key={} ***'.format(sec,str(pth)))
self.api.setParams(s=sec,p=pth)
self.api.connect('?format={}'.format(FMT))
print('*** {}'.format(self.api.req.get_full_url()))
self.outputRes(self.api.res.read())
def test_30_DataAPI_detail(self):
sec = 'detail'
for pth in list(self.api.data_path[sec].keys()):
if pth in self.avoid: continue
print('*** Data API section={} key={} ***'.format(sec,str(pth)))
self.api.setParams(s=sec,p=pth,id=ID)
self.api.connect('?format={}'.format(FMT))
print('*** {}'.format(self.api.req.get_full_url()))
self.outputRes(self.api.res.read())
def test_40_DataAPI_version(self):
sec = 'version'
for pth in list(self.api.data_path[sec].keys()):
if pth in self.avoid: continue
print('*** Data API section={} key={} ***'.format(sec,str(pth)))
self.api.setParams(s=sec,p=pth,id=ID,version=VER)
self.api.connect('?format={}'.format(FMT))
print('*** {}'.format(self.api.req.get_full_url()))
self.outputRes(self.api.res.read())
# def test_50_DataAPI_publish(self):
# sec = 'publish'
# for pth in self.api.data_path[sec].keys():
# if pth in self.avoid: continue
# print '*** Data API section={} key={} ***'.format(sec,str(pth))
# self.api.setParams(s=sec,p=pth,id=ID)
# self.api.connect('?format=json')
# print '*** {}'.format(self.api.req.get_full_url())
# self.outputRes(self.api.res.read())
def test_60_DataAPI_metadata(self):
sec = 'metadata'
for pth in list(self.api.data_path[sec].keys()):
if pth in self.avoid: continue
print('*** Data API section={} key={} ***'.format(sec,str(pth)))
self.api.setParams(s=sec,p=pth,id=ID,version=VER,type=TYP)
self.api.connect('?format={}'.format(FMT))
print('*** {}'.format(self.api.req.get_full_url()))
self.outputRes(self.api.res.read())
def outputRes(self,res):
be = json.loads(res)
print('JSON - start')
if isinstance(be, dict):
self.outputFeat(be)
else:
for l in be: self.outputFeat(l)
print('JSON - end')
def outputFeat(self,feat):
print('name:{} - id:{} type:{} published:{}'.format(feat['name'],feat['id'],feat['type'],feat['published_at']))
if __name__ == '__main__':
unittest.main()
```
#### File: LDSAPI/KPCInterface/AuthReader.py
```python
import os
import re
KEYINDEX = 0
KEY_FILE = '.apikey_ldst'
AD_CREDS = '.credentials'
CREDS = {}
ARCONF = None
class Authentication(object):
'''Static methods to read keys/user/pass from files'''
@staticmethod
def userpass(upfile):
return (Authentication.searchfile(upfile, 'username'),
Authentication.searchfile(upfile, 'password'))
@staticmethod
def _apikey(kfile, kk='key'):
'''Returns current key from a keyfile advancing on subsequent calls'''
global KEYINDEX
key = Authentication.searchfile(kfile, '{0}{1}'.format(kk, KEYINDEX))
if not key:
KEYINDEX = 0
key = Authentication.searchfile(kfile, '{0}{1}'.format(kk, KEYINDEX))
else:
KEYINDEX += 1
return key
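    # Illustrative (assumed) keyfile layout consumed by _apikey()/searchfile()
    # above -- numbered entries that the KEYINDEX counter cycles through on
    # repeated calls:
    #   key0=aaaa1111bbbb2222cccc3333dddd4444
    #   key1=bbbb2222cccc3333dddd4444eeee5555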
@staticmethod
def _creds(cfile):
'''Read CIFS credentials file'''
return (Authentication.searchfile(cfile, 'username'),
Authentication.searchfile(cfile, 'password'),
Authentication.searchfile(cfile, 'domain', 'WGRP'))
@staticmethod
def searchfile(sfile, skey, default=None):
value = default
# look in current then app then home
spath = (os.path.dirname(__file__), os.path.expanduser('~'))
first = [os.path.join(p, sfile) for p in spath if os.path.exists(os.path.join(p, sfile))][0]
with open(first, 'r') as h:
for line in h.readlines():
k = re.search('^{key}=(.*)$'.format(key=skey), line)
if k: value = k.group(1)
return value
@staticmethod
def apikey():
return Authentication._apikey(os.path.join(os.path.expanduser('~'), KEY_FILE), 'admin')
@staticmethod
def creds():
# upd
return Authentication._creds(os.path.join(os.path.expanduser('~'), AD_CREDS))
class ConfigAuthentication(Authentication):
'''auth subclass where a configreader object is supplied instead of selected searchable files'''
def __init__(self,conf):
self.conf = conf
def creds(self,sect='remote'):
return (self.conf.d[sect]['user'],
self.conf.d[sect]['pass'],
self.conf.d[sect]['workgroup'])
def userpass(self,sect='remote'):
return (self.conf.d[sect]['user'], self.conf.d[sect]['pass'])
def apikey(self,sect='server'):
        '''Returns a single known apikey; doesn't do key cycling'''
return self.conf.d[sect]['key']
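# Illustrative (assumed) shape of the configreader object expected by
# ConfigAuthentication above; only the keys actually read are shown:
#   conf.d = {'remote': {'user': 'u', 'pass': 'p', 'workgroup': 'WGRP'},
#             'server': {'key': 'aaaa1111bbbb2222cccc3333dddd4444'}}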
```
#### File: LDSAPI/KPCInterface/KPCAPI.py
```python
import re
import os
import pickle
from collections import namedtuple
from KPCInterface.AuthReader import Authentication
LDS_LIVE = 'data.linz.govt.nz'
LDS_TEST = 'data-test.linz.govt.nz'
RELOAD_INDEX = False
CONF = 'rbu.conf'
LayerInfo = namedtuple('LayerInfo', 'title id version versions dates files')
class LayerRef(object):
layeridmap = {}
def __init__(self,client,reload):
#init name and id refs
self.client = client
if reload or RELOAD_INDEX or not self._load():
self._indexLayers()
self._dump()
def _indexLayers(self, stype='catalog'):
'''query the catalog/layer obj for layer name id pairs returning dict indexed by name (for easier metadata matching)'''
#NB. Catalog layers not necessarily complete
lsrc = self.client.catalog.list if stype=='layer' else self.client.layers.list
#for i in self.client.catalog.list(): print('CL',i)
#for i in self.client.layers.list(): print('LL',i)
for ly in lsrc():#.filter(type='layer')[:10]:
print('Ly',ly)
dd = {'cpub':getattr(ly,'published_at',None),
'fpub':getattr(ly,'first_published_at',None),
'crt':getattr(ly,'created_at',None),
'col':getattr(ly,'collected_at',None)}
self.layeridmap[ly.title] = LayerInfo(title=ly.title,id=ly.id,version=ly.version,versions=[x.id for x in ly.list_versions()],dates=dd, files=())
def _dump(self, config=CONF):
pickle.dump(self.layeridmap,open(config,'wb'))
def _load(self, config=CONF):
try:
self.layeridmap = pickle.load(open(config,'rb'))
except:
return False
return True
# class Authentication(object):
# '''Static methods to read keys/user/pass from files'''
#
#
# @staticmethod
# def userpass(upfile):
# return (Authentication.searchfile(upfile,'username'),Authentication.searchfile(upfile,'password'))
#
# @staticmethod
# def _apikey(kfile,kk='key'):
# '''Returns current key from a keyfile advancing on subsequent calls'''
# global KEYINDEX
# key = Authentication.searchfile(kfile,'{0}{1}'.format(kk,KEYINDEX))
# if not key:
# KEYINDEX = 0
# key = Authentication.searchfile(kfile,'{0}{1}'.format(kk,KEYINDEX))
# else:
# KEYINDEX += 1
# return key
#
# @staticmethod
# def _creds(cfile):
# '''Read CIFS credentials file'''
# return (Authentication.searchfile(cfile,'username'),\
# Authentication.searchfile(cfile,'password'),\
# Authentication.searchfile(cfile,'domain','WGRP'))
#
# @staticmethod
# def searchfile(sfile,skey,default=None):
# value = default
# #look in current then app then home
# spath = (os.path.dirname(__file__),os.path.expanduser('~'))
# first = [os.path.join(p,sfile) for p in spath if os.path.exists(os.path.join(p,sfile))][0]
# with open(first,'r') as h:
# for line in h.readlines():
# k = re.search('^{key}=(.*)$'.format(key=skey),line)
# if k: value=k.group(1)
# return value
#
# @staticmethod
# def apikey():
# return Authentication._apikey(os.path.join(os.path.expanduser('~'),KEY_FILE),'admin')
#
# @staticmethod
# def creds():
# #upd
# return Authentication._creds(os.path.join(os.path.expanduser('~'),AD_CREDS))
```
#### File: LDSAPI/LXMLWrapper/LDSLXML.py
```python
import re
import sys
import http
from lxml import etree
from six.moves.urllib.error import HTTPError
from six.moves.urllib.request import urlopen
PYVER3 = sys.version_info > (3,)
class LXMLWrapperException(Exception): pass
class LXMLSyntaxException(Exception): pass
class LXMLetree(object):
def __init__(self):
pass
def parse(self,content,op='parse'):
return LXMLtree(content,op)
def fromstring(self,text,op='fromstring'):
return LXMLtree(text,op)
def XMLSchema(self,xsdd):
'''return a vanilla schema since no customisation required'''
return etree.XMLSchema(xsdd)
@classmethod
def _subNS(cls,url,ns):
'''Hack for Py2.6 version of LXML to substitute namespace declarations /abc:path for /{full_abc}path'''
for k in set(re.findall('(\w*?):',url)):
url = re.sub(k+':','{'+ns[k]+'}',url)
return url
class LXMLtree(object):
'''Wrapper class for etree objects'''
def __init__(self,ct,op='parse'):
if op=='parse':
self._tree = self.parse(ct)
elif op=='parsestring':
self._tree = self.parse(ct,p='fromstring')
elif op=='fromstring':
self._tree = self.fromstring(ct)
elif op=='recover':
self._tree = self.parse(ct,p=etree.XMLParser(recover=True))
def fromstring(self,text):
'''parses a string to root node'''
return etree.fromstring(text)
def tostring(self):
'''string rep of tree'''
return etree.tostring(self._tree)
def xpath(self,text,namespaces=None):
return etree.XPath(text,namespaces) if namespaces and False else etree.XPath(text)
def parse(self,content,p=None):
'''parses a URL or a response-string to doc tree'''
#HACK. With CSW data need to check for JSON/masked 429s first
try:
if p=='fromstring': etp = self._parse_f(content) #parse using string method
else: etp = self._parse_p(content,p) #parse normally or using provided parser
except HTTPError as he:
raise #but this won't happen because LXML pushes HTTP errors up to IO errors
except (IOError,http.client.IncompleteRead) as ioe:
#if re.search('failed to load HTTP resource', ioe.message): #No longer works on Py3
ioem = str(ioe) if PYVER3 else ioe.message
if re.search('failed to load HTTP resource', ioem):
raise HTTPError(content, 429, 'IOE. Possible 429 Rate Limiting Error. '+ioem, None, None)
if re.search('IncompleteRead', ioem):
raise HTTPError(content, 418, 'IOE. Cannot read. '+ioem, None, None)
raise HTTPError(content, 404, 'IOE. Probable HTTP Error. '+ioem, None, None)
        except etree.XMLSyntaxError as xse:
            if re.search('Document is empty', str(xse)):
                raise LXMLSyntaxException('Response from server is empty') from xse
            raise
except Exception as e:
raise
return etp
def _parse_f(self,content):
res = urlopen(content).read()
        if re.search(b'API rate limit exceeded', res):  # urlopen().read() returns bytes, so match a bytes pattern
raise HTTPError(content, 429, 'Masked HTTP429 Rate Limiting Error. ', None, None)
return etree.fromstring(res).getroottree()
def _parse_p(self,content,p):
return etree.parse(content,p) if p else etree.parse(content)
def gettree(self):
return self._tree
def getroot(self):
self.root = LXMLelem(self._tree,root=True)
return self.root
def get(self,path):
self.elem = LXMLelem(self._tree,root=False,path=path)
return self.elem
def find(self,url,namespaces=None):
return self._tree.find(url,namespaces) if sys.version_info[1]>6 else self._tree.find(LXMLetree._subNS(url, namespaces))
def findall(self,url,namespaces=None):
reclist = self._tree.findall(url,namespaces) if sys.version_info[1]>6 else self._tree.findall(LXMLetree._subNS(url, namespaces))
return [LXMLelem(e,wrap=True) for e in reclist]
class LXMLelem(object):
#Python versions <2.7 dont parse namespace aliases
SVI = sys.version_info[0]+(sys.version_info[1]/10)>2.6
'''Wrapper for the LXML element object, hacked to also act as a self wrapper for list/findall queries'''
def __init__(self,parent,wrap=False,root=False,path=''):
if wrap:
self._elem = parent
elif root:
self._elem = parent.getroot()
elif path:
#this isn't used (in current code) so hasn't been tested
self._elem = parent.get(path)
else:
raise LXMLWrapperException('Missing Element descriptor')
self.tag = self._elem.tag
self.text = self._elem.text
def get(self,path):
return self._elem.get(path)
def items(self):
return list(self._elem.items())
def find(self,url,namespaces=None):
'''returns element'''
return self._elem.find(url,namespaces) if self.SVI else self._elem.find(LXMLetree._subNS(url, namespaces))
def findall(self,url,namespaces=None):
reclist = self._elem.findall(url,namespaces) if self.SVI else self._elem.findall(LXMLetree._subNS(url, namespaces))
return [LXMLelem(e,wrap=True) for e in reclist]
``` |
{
"source": "josephramsay/LDS",
"score": 2
} |
#### File: LDSReplicate/lds/ConfigWrapper.py
```python
import logging
import re
from lds.ReadConfig import MainFileReader#, LayerFileReader
from lds.LDSUtilities import LDSUtilities
class ConfigFormatException(Exception): pass
class ConfigContentException(Exception): pass
ldslog = LDSUtilities.setupLogging()
class ConfigWrapper(object):
'''
Convenience wrapper class to main and user config-file reader instances. Main purpose of this class is to
allow user to override selected portions of the main config file. This has nothing
'''
def __init__(self,configdata=None):
self.confdict = {}
#self.layerconfig = None #internal/external; but only external is coded an never used
#self.mainconfig = None #always a file
#self.userconfig = None #always a file
#check to see if conf file is string ie file path or not eg list, dict
##if isinstance(configdata,basestring):
self.setupMainAndUserConfig(configdata)
if isinstance(configdata,dict):
#self.setupMainAndUserConfig(None)
self.setupTempParameters(configdata)
#else:
#raise ConfigFormatException('Provided Config specifier is neither a parameter array or a file path')
def setupMainAndUserConfig(self,inituserconfig):
'''Sets up a reader to the main configuration file or alternatively, a user specified config file.
        Userconfig is not meant to replace mainconfig, just to override the parts the user has decided to customise'''
#self.userconfig = None
self.userconfig = MainFileReader(LDSUtilities.standardiseUserConfigName(inituserconfig),False) if inituserconfig else None
self.mainconfig = MainFileReader()
def setupTempParameters(self,confdict):
'''Build a dict matching returned values for use when doing a temporary setup e.g. to test a connection'''
#this is only used for proxy testing at the moment. will configure others if needed
self.confdict = confdict
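    # Illustrative (assumed) shape of the temporary confdict consumed by
    # readTempParameters() below, e.g. when testing a proxy connection:
    #   {'Proxy': {'type': 'HTTP', 'host': '127.0.0.1', 'port': 3128,
    #              'auth': 'basic', 'user': 'u', 'pass': 'p'}}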
# #==============MAINCONFIG===========================================================
def readDSParameters(self,drv,params=None):
'''Returns the datasource parameters. By request updated to let users override parts of the basic config file'''
ul = ()
#read main config
ml = self.mainconfig.readDriverConfig(drv)
if self.confdict.has_key(drv):
ul = self.readTempParameters(drv)
elif self.userconfig:
ul = self.userconfig.readDriverConfig(drv)
#else:
# return None
if drv == 'Misc':
ml = self._substIDP(params['idp'],ml)
ul = self._substIDP(params['idp'],ul)
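        # Merge below: user-config (or temp) values win wherever they are non-empty
        # and main-config values fill the gaps, e.g. ml=('host','','GML2') merged
        # with ul=('','key','') gives ('host','key','GML2') (illustrative values).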
rconfdata = [x if x else y for x,y in zip(ul if ul else (None,)*len(ml),ml)]
return rconfdata
def readTempParameters(self,drv):
cdd = self.confdict[drv]
if drv=='Proxy':
return (cdd['type'],cdd['host'],cdd['port'],cdd['auth'],cdd['user'],cdd['pass'])
elif drv=='WFS':
return ('',cdd['key'],'','','','')
else:
raise ConfigContentException('Support for Proxy config type only')
def _substIDP(self,idp,mul):
'''add requested prefix to layer list. IDP = ID Prefix'''
#64layers
m0 = list([idp+str(s) for s in mul[0]]) if mul[0] else None
#ptnlayers
m1 = list([idp+str(s) for s in mul[1]]) if mul[1] else None
return (m0,m1,mul[2],mul[3])
def readDSProperty(self,drv,prop):
'''Gets a single property from a selected driver config'''
#NB uprop can be none if there is no uc object or if the prop isnt listed in the uc
uprop = self.userconfig.readMainProperty(drv,prop) if self.userconfig else None
return uprop if uprop else self.mainconfig.readMainProperty(drv,prop)
@classmethod
def buildNewUserConfig(cls,ucfilename,uctriples):
'''Class method to initialise a user config from an array of parameters'''
uc = MainFileReader(ucfilename,False)
#uc.initMainFile(os.path.join(os.path.dirname(__file__), '../conf/template.conf'))
uc.initMainFile()
cls.writeUserConfigData(uc,uctriples)
@classmethod
def writeUserConfigData(cls,ucfile,uctriples):
'''Write config data to config file'''
for sfv in uctriples:
ucfile.writeMainProperty(sfv[0],sfv[1],sfv[2])
```
#### File: LDS/LDSReplicate/ldsreplicate.py
```python
import sys
import getopt
from datetime import datetime
from urllib2 import HTTPError
from lds.TransferProcessor import TransferProcessor
from lds.TransferProcessor import InputMisconfigurationException
from lds.VersionUtilities import AppVersion, VersionChecker, UnsupportedVersionException
from lds.DataStore import DSReaderException
from lds.LDSUtilities import LDSUtilities
from lds.ConfigConnector import DatasourceRegister
ldslog = LDSUtilities.setupLogging()
#ldslog = logging.getLogger('LDS')
#ldslog.setLevel(logging.DEBUG)
#
#path = os.path.normpath(os.path.join(os.path.dirname(__file__), "../../log/"))
#if not os.path.exists(path):
# os.mkdir(path)
#df = os.path.join(path,"debug.log")
#
#fh = logging.FileHandler(df,'a')
#fh.setLevel(logging.DEBUG)
#
#formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(module)s %(lineno)d - %(message)s')
#fh.setFormatter(formatter)
#ldslog.addHandler(fh)
__version__ = AppVersion.getVersion()
def usage():
print "Usage: python LDSReader/ldsreplicate.py -l <layer_id> [-f <from date>|-t <to date>|-c <cql filter>|-s <src conn str>|-d <dst conn str>|-v|-h] <output> [full]"
print "For help use --help"
def main():
'''Main entrypoint if the LDS incremental replication script
usage: python LDSReader/ldsreplicate.py -l <layer_id>
[-f <from date>|-t <to date>|-c <cql filter>|-s <src conn str>|-d <dst conn str>|-u <user_config>|-g <group keyword>|-e <conversion-epsg>|-h (help)]
<output>
'''
td = None
fd = None
ly = None
gp = None
ep = None
sc = None
dc = None
cq = None
uc = None
gdal_ver = VersionChecker.getGDALVersion()
#pgis_ver = VersionChecker.getPostGISVersion()
#pg_ver = VersionChecker.getPostgreSQLVersion()
if VersionChecker.compareVersions(VersionChecker.GDAL_MIN,gdal_ver.get('GDAL') if gdal_ver.get('GDAL') is not None else VersionChecker.GDAL_MIN):
        raise UnsupportedVersionException('GDAL version '+str(gdal_ver.get('GDAL'))+' does not meet required minimum '+str(VersionChecker.GDAL_MIN))
#do the datasource checks in object and once initialised
# message += 'GDAL '+pgis_ver.get('GDAL')+'<'+VersionChecker.GDAL_MIN+'(reqd) \n'
# if VersionChecker.compareVersions(VersionChecker.GDAL_MIN,pgis_ver.get('GDAL') if pgis_ver.get('GDAL') is not None else VersionChecker.GDAL_MIN):
# message += 'GDAL(pgis) '+pgis_ver.get('GDAL')+'<'+VersionChecker.GDAL_MIN+'(reqd) \n'
# if VersionChecker.compareVersions(VersionChecker.PostgreSQL_MIN,pg_ver.get('PostgreSQL') if pgis_ver.get('PostgreSQL') is not None else VersionChecker.PostgreSQL_MIN):
# message += 'PostgreSQL '+pg_ver.get('PostgreSQL')+'<'+VersionChecker.PostgreSQL_MIN+' (reqd)\n'
# parse command line options
try:
opts, args = getopt.getopt(sys.argv[1:], "hvixf:t:l:g:e:s:d:c:u:", ["help","version","internal","external","fromdate=","todate=","layer=","group=","epsg=","source=","destination=","cql=","userconf="])
ldslog.info("OPTS:"+str(opts))
ldslog.info("ARGS:"+str(args))
except getopt.error, msg:
print msg
usage()
sys.exit(2)
# process options
for opt, val in opts:
if opt in ("-h", "--help"):
print __doc__
sys.exit(0)
elif opt in ("-v", "--version"):
print __version__
sys.exit(0)
elif opt in ("-f","--fromdate"):
fd = val
elif opt in ("-t","--todate"):
td = val
elif opt in ("-l","--layer"):
ly = val
elif opt in ("-g","--group"):
gp = val
elif opt in ("-e","--epsg"):
ep = val
elif opt in ("-s","--source"):
sc = val
elif opt in ("-d","--destination"):
dc = val
elif opt in ("-c","--cql"):
cq = val
elif opt in ("-u","--userconf"):
uc = val
else:
print "unrecognised option:\n" \
"-f (--fromdate) Date in yyyy-mm-dd format start of incremental range (omission assumes auto incremental bounds)," \
"-t (--todate) Date in yyyy-mm-dd format for end of incremental range (omission assumes auto incremental bounds)," \
"-l (--layer) Layer name/id in format v:x### (IMPORTANT. Omission assumes all layers)," \
"-g (--group) Layer sub group list for layer selection, comma separated" \
"-e (--epsg) Destination EPSG. Layers will be converted to this SRS" \
"-s (--source) Connection string for source DS," \
"-d (--destination) Connection string for destination DS," \
"-c (--cql) Filter definition in CQL format," \
"-u (--user) User defined config file used as partial override for template.conf," \
"-h (--help) Display this message"
sys.exit(2)
# #TODO consider ly argument to specify a file name containing a list of layers?
st = datetime.now()
m1 = '*** Begin *** '+str(st.isoformat())
print m1
ldslog.info(m1)
    #layer overrides group, whether layer is IN group is not considered
    tp = TransferProcessor(None,ly if ly else gp,ep,fd,td,sc,dc,cq,uc)
#output format
if len(args)==0:
print __doc__
sys.exit(0)
else:
#since we're not breaking the switch the last arg read will be the DST used
pn = None
for arg in args:
if arg.lower() in ("init", "initialise", "initalize"):
ldslog.info("Initialisation of configuration files/tables requested. Implies FULL rebuild")
tp.setInitConfig()
elif arg in ("clean"):
ldslog.info("Cleaning named layer")
tp.setCleanConfig()
else:
#if we dont have init/clean the only other arg must be output type
pn = LDSUtilities.standardiseDriverNames(arg)
if pn is None:
print __doc__
raise InputMisconfigurationException("Unrecognised command; output type (pg,ms,slite,fgdb) declaration required")
#aggregation point for common LDS errors
mm = '*** Complete *** '
try:
reg = DatasourceRegister()
sep = reg.openEndPoint('WFS',uc)
dep = reg.openEndPoint(pn,uc)
reg.setupLayerConfig(tp,sep,dep, tp.getInitConfig())
tp.setSRC(sep)
tp.setDST(dep)
tp.processLDS()
except HTTPError as he:
ldslog.error('Error connecting to LDS. '+str(he))
mm = '*** Failed 1 *** '
except DSReaderException as dse:
ldslog.error('Error creating DataSource. '+str(dse))
mm = '*** Failed 2 *** '
#except Exception as e:
#if errors are getting through we catch/report them
# ldslog.error("Error! "+str(e))
# mm = '*** Failed 3 *** '
finally:
reg.closeEndPoint(pn)
reg.closeEndPoint('WFS')
sep,dep = None,None
et = datetime.now()
m2 = mm + str(et.isoformat())
print m2
ldslog.info(m2)
dur = et-st
m3 = '*** Duration *** '+str(dur)
print m3
ldslog.info(m3)
return 1000*dur.total_seconds()
if __name__ == "__main__":
#main()
try:
main()
except Exception as e:
exc_type, exc_value, exc_traceback = sys.exc_info()
ldslog.error('LDSReplicate Error.',exc_info=(exc_type,exc_value,exc_traceback))
print str(e)+'\n(see debug.log for full stack trace)'
```
#### File: lds/test/RequestBuilder_Test.py
```python
import unittest
import os
import sys
import time
import subprocess
sys.path.append('..')
from lds.LDSUtilities import LDSUtilities
from lds.RequestBuilder import RequestBuilder
testlog = LDSUtilities.setupLogging(ff=2)
class Test_1_RequestBuilder(unittest.TestCase):
UCONF = 'TEST'
LGVAL = 'v:x100'
DESTNAME = 'PostgreSQL'
PARAMS100 = ['http://wfs.data.linz.govt.nz/', 'aaaa1111bbbb2222cccc3333dddd4444', 'WFS', '1.0.0', 'GML2', '']
PARAMS110 = ['http://wfs.data.linz.govt.nz/', '1111bbbb2222cccc3333dddd4444eeee', 'WFS', '1.1.0', 'GML2', '']
PARAMS200 = ['http://wfs.data.linz.govt.nz/', 'bbbb2222cccc3333dddd4444eeee5555', 'WFS', '2.0.0', 'GML2', '']
def setUp(self):
testlog.debug('LDSDataStore_Test.setUp')
def tearDown(self):
testlog.debug('LDSDataStore_Test.tearDown')
def test_1_getInstance(self):
w100 = RequestBuilder.getInstance(self.PARAMS100,None)
w110 = RequestBuilder.getInstance(self.PARAMS110,None)
w200 = RequestBuilder.getInstance(self.PARAMS200,None)
#since RB is incomplete just test for name string
#self.assertEqual(w100.__str__(),'RequestBuilder_WFS-1.0.0','str cmp 100')
self.assertEqual(w100.__str__(),'RequestBuilder_WFS-1.1.0','str cmp 100 (subst)')
self.assertEqual(w110.__str__(),'RequestBuilder_WFS-1.1.0','str cmp 110')
self.assertEqual(w200.__str__(),'RequestBuilder_WFS-2.0.0','str cmp 200')
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testLDSRead']
unittest.main()
```
#### File: LDSReplicate/lds/WinUtilities.py
```python
import os
import re
import sys
import platform
import _winreg
from _winreg import *
class WinUtilities(object):
'''Windows utility/info functions.'''
@staticmethod
def callStartFile(file):
os.startfile(file)
@staticmethod
def getArchitecture():
a = int(re.match('(\d+)',platform.architecture()[0]).group(1))
b = 64 if sys.maxsize>1e11 else 32
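        # the two probes above normally agree, so the average is simply 32 or 64;
        # a mixed result (48) would indicate e.g. a 32-bit Python on a 64-bit OS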
return (a+b)/2
class Registry(object):
'''Windows Registry functions'''
INTERNET_SETTINGS = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, r'Software\Microsoft\Windows\CurrentVersion\Internet Settings', 0, _winreg.KEY_ALL_ACCESS)
@staticmethod
def readProxyValues():
enable = Registry._getRegistryKey('ProxyEnable')
hp = Registry._getRegistryKey('ProxyServer')
host,port = hp.split(':')
return (enable,host,port)
@staticmethod
def writeProxyValues(host,port):
hp = str(host)+":"+str(port)
Registry._setRegistryKey('ProxyEnable',1)
Registry._setRegistryKey('ProxyServer',hp)
@staticmethod
def readInstDir(name):
return Registry._readAppVal(name)
#---------------------------------------------------------
@classmethod
def _readAppVal(cls,name):
'''Used to find name in reg i.e. install path to LDSR app'''
ipath = 0
val = None
arch = WinUtilities.getArchitecture()
if arch == 32: ipath = r'SOFTWARE\LDS Replicate'
elif arch == 64: ipath = r'SOFTWARE\Wow6432Node\LDS Replicate'
try:
key = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, ipath, 0, _winreg.KEY_READ)
val,_ = _winreg.QueryValueEx(key, name)
except: pass
return val
@classmethod
def _setRegistryKey(cls, name, value):
_, reg_type = _winreg.QueryValueEx(cls.INTERNET_SETTINGS, name)
_winreg.SetValueEx(cls.INTERNET_SETTINGS, name, 0, reg_type, value)
@classmethod
def _getRegistryKey(cls, name):
val ,_ = _winreg.QueryValueEx(cls.INTERNET_SETTINGS, name)
return val
``` |
{
"source": "josephrexme/stegman",
"score": 3
} |
#### File: josephrexme/stegman/stegman.py
```python
import sys, re, binascii, string
def gethex(image):
f = open(image, 'rb')
data = f.read()
f.close()
hexcode = binascii.hexlify(data)
return hexcode
def embed(embedFile, coverFile, stegFile):
filetype = coverFile[-3:]
stegtype = stegFile[-3:]
if filetype != 'png' and filetype != 'jpg':
print 'Invalid format'
elif filetype != stegtype:
print 'Output file has to be in the same format as cover image (%s)' % string.swapcase(filetype)
else:
data = open(embedFile, 'r').read()
info = gethex(coverFile)
if extradatacheck(info, filetype):
print 'File already contains embedded data'
else:
info += data.encode('hex')
f = open(stegFile, 'w')
f.write(binascii.unhexlify(info))
f.close()
print 'Storing data to', stegFile
def extract(stegFile, outFile):
filetype = stegFile[-3:]
data = gethex(stegFile)
if extradatacheck(data, filetype):
store = open(outFile, 'w')
store.write( binascii.unhexlify(extradatacheck(data, filetype)) )
store.close()
print 'Extracted data stored to', outFile
else:
print 'File has no embedded data in it'
def extradatacheck(data, type):
if type == 'png':
pattern = r'(?<=426082)(.*)'
elif type == 'jpg':
        pattern = r'(?<=ffd9)(.*)'  # hexlify() output is lowercase, so match the JPEG EOI marker in lowercase
match = re.search(pattern, data)
if match:
return match.group(0)
    else:
        return False
def usage():
print """
Usage:
Embeding
stegman -s embedfile.txt coverfile.jpg output.jpg
Extracting
stegman -e stegfile.jpg output.txt
Valid Formats:
JPG, PNG
"""
def args():
if sys.argv[1] == '-s':
embed(sys.argv[2], sys.argv[3], sys.argv[4])
elif sys.argv[1] == '-e':
extract(sys.argv[2], sys.argv[3])
else:
usage()
def main():
if len(sys.argv) > 1:
args()
else:
usage()
if __name__ == '__main__':
main()
``` |
{
"source": "joseph-reynolds/minecraft-pidoodles",
"score": 4
} |
#### File: joseph-reynolds/minecraft-pidoodles/try1.py
```python
import mcpi.minecraft as minecraft
import mcpi.block as block
from mcpi.vec3 import Vec3
from mcpi.connection import Connection
import time
import timeit
import Queue
import threading
def vec3_get_sign(self):
# Warning: hack! Assumes unit vector
return self.x + self.y + self.z
Vec3.get_sign = vec3_get_sign
class TorchFindError(Exception):
"""Torch not found nearby"""
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class CornerFindError(Exception):
"""Corner not found"""
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
def different_block(b):
if b == block.STONE.id: b = block.SANDSTONE.id
elif b == block.SANDSTONE.id: b = block.DIRT.id
elif b == block.DIRT.id: b = block.WOOD.id
else: b = block.STONE.id
return b
def get_nearby_torchpos(p):
"""Return the position of a nearby torch"""
torchpos = None
search_size = 1
for x in range(int(p.x - search_size), int(p.x + search_size+1)):
for z in range(int(p.z - search_size), int(p.z + search_size+1)):
b = mc.getBlock(x, p.y, z)
if b == block.TORCH.id:
if not torchpos is None:
raise TorchFindError("Too many torches")
torchpos = Vec3(x, p.y, z)
if torchpos is None:
raise TorchFindError("Torch not found nearby")
return torchpos
def get_corner_data(pos):
"""Returns data about a corner next to the input position.
A "corner" is two walls meeting at right angles with a post.
The walls and post define the corner.
The input position should be the inside corner at ground level
such as returned by get_nearby_torchpos.
The return value is a 2-tuple:
- The corner position
- A unit vector pointing "inside" the corner
"""
# Read blocks around the input pos
blocks = [] # Usage: blocks[x][z]
mask = 0 # Bit mask: abcdefghi, like:
# adg
# beh
# cfi
for x in range(int(pos.x - 1), int(pos.x + 2)):
col = []
for z in range(int(pos.z - 1), int(pos.z + 2)):
b = mc.getBlockWithData(x, pos.y, z) # Test
b = mc.getBlock(x, pos.y, z)
col.append(b)
mask = (mask << 1) + (0 if b == block.AIR.id else 1)
blocks.append(col)
mask &= 0b111101111 # Mask off center block
# print "Mask", format(mask,"#011b")
nw_corner_mask = 0b111100100
ne_corner_mask = 0b100100111
sw_corner_mask = 0b111001001
se_corner_mask = 0b001001111
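    # Each mask covers the 8 neighbours of pos, most-significant bit first in
    # x-major then z order; e.g. nw_corner_mask marks the whole x-1 column and
    # the whole z-1 row as solid (walls to the west and north), so the corner
    # post sits at (pos.x - 1, pos.z - 1), as handled below.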
if mask == nw_corner_mask:
corner = Vec3(pos.x - 1, pos.y, pos.z - 1)
vector = Vec3(1, 1, 1)
elif mask == ne_corner_mask:
corner = Vec3(pos.x +1, pos.y, pos.z - 1)
vector = Vec3(-1, 1, 1)
elif mask == sw_corner_mask:
corner = Vec3(pos.x - 1, pos.y, pos.z + 1)
vector = Vec3(1, 1, -1)
elif mask == se_corner_mask:
corner = Vec3(pos.x + 1, pos.y, pos.z + 1)
vector = Vec3(-1, 1, -1)
else:
raise CornerFindError("Corner not found")
return corner, vector
def get_length_of_block_run(startpos, vector):
"""determine the length of a run of blocks
parameters: startpos is the starting block
vector is a unit vector in the direction to count
"""
ans = 0
pos = startpos
while mc.getBlock(pos) != block.AIR.id:
ans += 1
pos = pos + vector
ans -= 1
return ans * vector.get_sign()
def get_house_data(pos):
corner, vector = get_corner_data(pos)
sizex = get_length_of_block_run(corner, Vec3(vector.x, 0, 0))
sizey = get_length_of_block_run(corner, Vec3(0, vector.y, 0))
sizez = get_length_of_block_run(corner, Vec3(0, 0, vector.z))
return corner, Vec3(sizex, sizey, sizez)
def do_house(corner, dim):
newblockid = different_block(mc.getBlock(corner))
# Unit test: just do a chimney
#mc.setBlocks(corner.x, corner.y, corner.z,
# corner.x, corner.y + dim.y, corner.z,
# newblockid)
# Near wall along x direction
mc.setBlocks(corner.x, corner.y, corner.z,
corner.x + dim.x, corner.y + dim.y, corner.z,
newblockid)
# Near wall along z direction
mc.setBlocks(corner.x, corner.y, corner.z,
corner.x, corner.y + dim.y, corner.z + dim.z,
newblockid)
mc = minecraft.Minecraft.create()
#p = mc.player.getTilePos()
#mc.x_connect_multiple(p.x, p.y+2, p.z, block.GLASS.id)
#print 'bye!'
#exit(0)
# Algorithm to build a house
for i in range(0,0):
time.sleep(1)
p = mc.player.getTilePos()
b = mc.getBlock(p.x, p.y-1, p.z)
info = ""
try:
tp = get_nearby_torchpos(p)
info = "torch found"
try:
c,v = get_house_data(tp)
info = "corner found"
do_house(c,v)
except CornerFindError as e:
pass
except TorchFindError as e:
pass # print "TorchFindError:", e.value
print b, info
connections = []
def get_blocks_in_parallel(c1, c2, degree=35):
"""get a cuboid of block data
parms:
c1, c2: the corners of the cuboid
degree: the degree of parallelism (number of sockets)
returns:
map from mcpi.vec3.Vec3 to mcpi.block.Block
"""
# Set up the work queue
c1.x, c2.x = sorted((c1.x, c2.x))
c1.y, c2.y = sorted((c1.y, c2.y))
c1.z, c2.z = sorted((c1.z, c2.z))
workq = Queue.Queue()
for x in range(c1.x, c2.x+1):
for y in range(c1.y, c2.y+1):
for z in range(c1.z, c2.z+1):
workq.put((x,y,z))
print "Getting data for %d blocks" % workq.qsize()
# Create socket connections, if needed
# TO DO: Bad! Assumes degree is a constant
# To do: close the socket
global connections
if not connections:
connections = [Connection("localhost", 4711) for i in range(0,degree)]
# Create worker threads
def worker_fn(connection, workq, outq):
try:
while True:
pos = workq.get(False)
# print "working", pos[0], pos[1], pos[2]
connection.send("world.getBlockWithData", pos[0], pos[1], pos[2])
ans = connection.receive()
blockid, blockdata = map(int, ans.split(","))
outq.put((pos, (blockid, blockdata)))
except Queue.Empty:
pass
outq = Queue.Queue()
workers = []
for w in range(degree):
t = threading.Thread(target = worker_fn,
args = (connections[w], workq, outq))
t.start()
workers.append(t)
# Wait for workers to finish
for w in workers:
# print "waiting for", w.name
w.join()
# Collect results
answer = {}
while not outq.empty():
pos, block = outq.get()
answer[pos] = block
return answer
while False:
# mc.getHeight works
ppos = mc.player.getPos()
h = mc.getHeight(ppos.x, ppos.z)
print h
time.sleep(1)
if False:
"""
degree = 200
corner1 = Vec3(-50, 8, -50)
corner2 = Vec3( 50, 8, 50)
starttime = timeit.default_timer()
blks = get_blocks_in_parallel(corner1, corner2, degree)
endtime = timeit.default_timer()
print endtime-starttime, 'get_blocks_in_parallel'
blks = get_blocks_in_parallel(corner1, corner2, degree)
endtime2 = timeit.default_timer()
print endtime2-endtime, 'get_blocks_in_parallel again'
for z in range(corner1.z, corner2.z):
s = ""
for x in range(corner1.x, corner2.x):
c = " " if blks[(x, 8, z)][0] == block.AIR.id else "x"
s = s + c
print s
"""
# Performance experiments
"""Results:
Hardware: Raspbery Pi 3 Model B V1.2 with heat sink
Linux commands show:
$ uname -a
Linux raspberrypi 4.4.34-v7+ #930 SMP Wed Nov 23 15:20:41 GMT 2016 armv7l GNU/Linux
$ lscpu
Architecture: armv7l
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Model name: ARMv7 Processor rev 4 (v7l)
CPU max MHz: 1200.0000
CPU min MHz: 600.0000
GPU memory is 128 (unit is Mb, I think)
Test getting 10201 blocks, and stack_size=128Kb
varying the number of threads:
threads time(sec) blocks/sec
------- --------- ----------
10 39.87
25 19.46
50 10.68
75 7.29
100 5.57
115 5.01
120 4.86
125 4.75
130 4.58
150 4.47
175 4.55
200 4.24
250 4.41
400 4.60
Observations:
- Each thread process 15 to 25 blocks/sec
- Some blocks take much longer to fetch, about 0.3 sec
- performance peaks with 200 threads, at 2400 blocks/sec
- creating threads is not free
- can create 50 threads in 1 sec, 100 threads in 2.5 sec
- memory consumption increases (not measured)
- the tests were repeated while the game was being
played interactively, specifically, flying at altitude
and looking down so that new blocks were being fetched
as quickly as possible. This did not affect performance:
+ no graphical slowdowns or glitches were observed
+ the performance of fetching blocks was not affected
Note:
The expected case is to create the required threads once
and keep them around for the lifetime of the program.
The experimental code was designed to do just that.
Some data was captured that suggests how expensive
starting up hundreds of threads is. Although it was
not one of the objectives of the original study, it is
given as an interesting observation.
Conclusions:
Eyeballing the data suggests that 200 threads is optimal.
However, if the 6 seconds it takes to create the threads
is not acceptable, consider using 100 threads which is
about 30% slower, but only takes 1 second to create the
threads.
"""
threading.stack_size(128*1024)
for degree in [100, 150, 200]:
connections = []
corner1 = Vec3(-50, 8, -50)
corner2 = Vec3( 50, 8, 50)
starttime = timeit.default_timer()
blks = get_blocks_in_parallel(corner1, corner2, degree)
endtime = timeit.default_timer()
blks = get_blocks_in_parallel(corner1, corner2, degree)
endtime2 = timeit.default_timer()
print "entries=10201 degree=%s time1=%s time2=%s" % (
str(degree),
str(endtime-starttime),
str(endtime2-endtime))
# Idea: class for get_blocks_in_parallel()
class ParallelGetter:
"""Get block data from the Mincraft Pi API -- NOT FINISHED"""
def __init__(self, address = "localhost", port = 4711, parallelism=200):
self.address = address
self.port = port
self.parallelism = parallelism
self.connections = [Connection(address, port) for i in range(parallelism)]
# To do: close the socket connection
@staticmethod
def normalize_corners(c1, c2):
"""ensure c1.x <= c2.x, etc., without changing the cuboid"""
c1.x, c2.x = sorted((c1.x, c2.x))
c1.y, c2.y = sorted((c1.y, c2.y))
c1.z, c2.z = sorted((c1.z, c2.z))
return c1, c2
@staticmethod
def generate_work_items_xyz(c1, c2):
        c1, c2 = ParallelGetter.normalize_corners(c1, c2)
workq = Queue.Queue()
for x in range(c1.x, c2.x+1):
for y in range(c1.y, c2.y+1):
for z in range(c1.z, c2.z+1):
workq.put((x,y,z))
return workq
@staticmethod
    def _unpack_int(response):
        return int(response)
    @staticmethod
    def _unpack_int_int(response):
        i1, i2 = map(int, response.split(","))
        return i1, i2
    def get_blocks(self, c1, c2):
        workq = ParallelGetter.generate_work_items_xyz(c1, c2)
        return self._do_work(workq, "world.getBlock", ParallelGetter._unpack_int)
    def get_blocks_with_data(self, c1, c2):
        workq = ParallelGetter.generate_work_items_xyz(c1, c2)
        return self._do_work(workq, "world.getBlockWithData", ParallelGetter._unpack_int_int)
def _do_work(self, workq, api_name, unpack_fn):
"""Perform the parallel portion of the work.
parms:
workq - such as from generate_work_items_xyz
Specifically, start a worker thread for each connection,
Each worker feeds work from the workq to the API, formats
the results, and enqueues the results.
When there is no more work, the workers quit, and the
return value is computed.
"""
def worker_fn(connection, workq, outq, unpack_fn):
try:
while True:
pos = workq.get(False)
connection.send(api_name, pos)
outq.put((pos, unpack_fn(connection.receive())))
except Queue.Empty:
pass
# Create worker threads
outq = Queue.Queue()
workers = []
        for w in range(self.parallelism):
            t = threading.Thread(
                target = worker_fn,
                args = (self.connections[w], workq, outq, unpack_fn))
t.start()
workers.append(t)
# Wait for workers to finish, then collect their data
for w in workers:
w.join()
answer = {}
while not outq.empty():
pos, data = outq.get()
answer[pos] = data
return answer
"""Idea: Tree jumper
You can jump from tree to tree.
If you are
(a) on a tree (LEAVES = Block(18)),
(b) moving forward, and
(c) jumping,
you will jump/fly to the nearest tree in your path.
Algorithm:
while True:
player_velocity = "track player position to determine velocity"
if "player is moving and on a leaf and jumps":
destination = "find nearest tree(player_pos, player_vel)"
if destination:
parabola = compute(player_pos, destination)
"move player smoothly along the parabola"
"if player hits a block: break"
where
def nearest_tree(player_pos, player_vel):
search_areas = [
player_pos + 30 * player_vel with radius=15,
player_pos + 15 * player_vel with radius=10,
player_pos + 40 * player_vel with radius=15]
search areas *= [player_pos.y, player_pos.y-7, player_pos+7]
for area in search_areas:
fetch a plane of block data centered at (area)
tree = find tree, prefering center of the area
if tree: return tree
return tree
def compute_parabola():
gravity = 0.3 # blocks/time**2
xz_distance = sqrt(xd**2 + zd**2)
xz_speed = 1
total_time = xz_distance / xz_speed
x_vel = xd / total_time
z_vel = zd / total_time
y_vel = ((-yd / total_time) +
((0.5 * gravity * (total_time ** 2)))
"""
# Lets' try a leap/jump
if False:
# mc.player.setPos(1, 4, 3) # if the jump goes badly wrong
ppos = mc.player.getPos()
x = ppos.x
y = ppos.y
z = ppos.z
xv = 0.005
yv = 0.1
zv = 0.02
while yv > 0:
mc.player.setPos(x, y, z)
x += xv
y += yv
z += zv
yv -= 0.0001
time.sleep(0.001)
# Try stacking up multiple getBlocks:
if True:
"""This code is weird. Delete it!"""
connection = Connection("localhost", 4711)
for x in range(-20, 20):
for z in range(-20, 20):
connection.send("world.getBlockWithData", x, 2, z)
print connection.receive()
print connection.receive()
# How big is the world?
if False:
corner1 = Vec3(-200, 0, 0)
corner2 = Vec3(200, 0, 0)
degree = 150
xaxis = get_blocks_in_parallel(corner1, corner2, degree)
xmin = xmax = 0
for x in range(200):
if xaxis[(x, 0, 0)][0] == block.BEDROCK_INVISIBLE.id:
xmax = x - 1
break
for x in range(0, -200, -1):
if xaxis[(x, 0, 0)][0] == block.BEDROCK_INVISIBLE.id:
xmin = x + 1
break
#print "X-axis: %d to %d" % (xmin, xmax)
corner1 = Vec3(0, 0, 200)
corner2 = Vec3(0, 0, -200)
degree = 150
zaxis = get_blocks_in_parallel(corner1, corner2, degree)
zmin = zmax = 0
for z in range(200):
if zaxis[(0, 0, z)][0] == block.BEDROCK_INVISIBLE.id:
zmax = z - 1
break
for z in range(0, -200, -1):
if zaxis[(0, 0, z)][0] == block.BEDROCK_INVISIBLE.id:
zmin = z + 1
break
#print "Z-axis: %d to %d" % (zmin, zmax)
print "The world is: [%d..%d][y][%d..%d]" % (
xmin, xmax, zmin, zmax)
###
### Try stuff with sockets
###
'''
I have not finished coding this part.
def gen_answers(connection, requests, format, parse_fn):
"""generate answers for each request, like (req, answer)"""
request_buf = io.BufferedWriter(connection.socket.makefile('w'))
response_buf = io.BufferedReader(connection.socket.makefile('r'))
request_queue = []
while True:
# Write requests into request_buffer
# ...to do...
# Perform socket I/O
Communicate:
r,w,e = select.select([response_buf], [request_buf], [], 1)
if r:
response_data = response_buf.peek()
if "response_data has a newline at position n":
response_text = response_buf.read(n)
response_buf.read(???)
if w:
request_buf.write(???)
# Read answers
while resp_buf.hasSomeData:
request = request_queue[0] # Er, use a queue?
request_queue = request_queue[1:]
response = parse_fn(response_buf.readline())
yield (request, response)
Hmmm, my sockets are rusty, and my Python io buffer classes weak,
but this seems more correct:
# We write requests (like b"world.getBlock(0,0,0)" into the
# request_buffer and then into the request_file (socket).
request_buffer = bytearray() # bytes, bytearray, or memoryview
request_file = io.FileIO(connection.socket.fileno(), "w", closeFd=False)
"...append data to request_buffer..."
if request_buffer: can select
if selected:
# Write exactly once
bytes_written = request_file.write(request_buffer)
request_buffer = request_buffer[bytes_written:]
if bytes_written == 0: "something is wrong"
# We read responses (like b"2") from the response_file (socket)
# into the response_buffer.
response_file = io.FileIO(connection.socket.fileno(), "r", closeFd=False)
response_buffer = bytes()
if selected:
# Read exactly once
response_buffer.append(response_file.read())
"...remove data from response_buffer..."
# Try gen_answers:
if True:
connection = Connection("localhost", 4711)
def some_rectangle():
for x in range(-2,2):
for z in range(-2,2):
yield (x, 0, z)
for pos, blk in gen_answers(connection,
some_rectangle,
"world.getBlock(%d,%d)",
int):
print "Got", pos, blk
my_blocks = {}
for pos, blk in gen_answers(connection,
some_rectangle,
"world.getBlock(%d,%d)",
int):
my_blocks[pos] = blk
'''
``` |
{
"source": "joseph-r-hamilton/dataframe_image",
"score": 3
} |
#### File: dataframe_image/dataframe_image/_command_line.py
```python
import argparse
import textwrap
import sys
class CustomFormatter(argparse.RawTextHelpFormatter):
pass
HELP = '''\n\n
========================================================
dataframe_image
========================================================
Embed pandas DataFrames as images when converting Jupyter Notebooks to
pdf or markdown documents.
Required Positional Arguments
=============================
filename
The filename of the notebook you wish to convert
Optional Keyword Arguments
==========================
--to
Type of document to create - either pdf or markdown. Possible values
are 'pdf', 'markdown', and 'md'. (default: pdf)
--use
Possible options are 'latex' or 'browser'.
Choose to convert using latex or chrome web browser when converting
to pdf. Output is significantly different for each. Use 'latex' when
you desire a formal report. Use 'browser' to get output similar to
that when printing to pdf within a chrome web browser.
(default: latex)
--center-df
Choose whether to center the DataFrames or not in the image. By
default, this is True, though in Jupyter Notebooks, they are
left-aligned. Use False to make left-aligned. (default: True)
--max-rows
Maximum number of rows to output from DataFrame. This is forwarded to
the `to_html` DataFrame method. (default: 30)
--max-cols
Maximum number of columns to output from DataFrame. This is forwarded
    to the `to_html` DataFrame method. (default: 10)
--execute
Whether or not to execute the notebook first. (default: False)
--save-notebook
Whether or not to save the notebook with DataFrames as images as a new
notebook. The filename will be '{notebook_name}_dataframe_image.ipynb'
(default: False)
--limit
Limit the number of cells in the notebook for conversion. This is
useful to test conversion of a large notebook on a smaller subset.
--document-name
Name of newly created pdf/markdown document without the extension.
If not provided, the name of the notebook will be used.
--table-conversion
DataFrames (and other tables) will be inserted in your document
as an image using a screenshot from Chrome. If this doesn't
work, use matplotlib, which will always work and produce
similar results.
Valid values are 'chrome' or 'matplotlib' (default: 'chrome')
--chrome-path
Path to your machine's chrome executable. By default, it is
automatically found. Use this when chrome is not automatically found.
--latex-command
Pass in a list of commands that nbconvert will use to convert the
latex document to pdf. The latex document is created temporarily when
converting to pdf with the `use` option set to 'latex'.
If the xelatex command is not found on your machine, then pdflatex
will be substituted for it. You must have latex installed on your
machine for this to work. Get more info on how to install latex -
https://nbconvert.readthedocs.io/en/latest/install.html#installing-tex
(default: ['xelatex', {filename}, 'quiet'])
--output-dir
Directory where new pdf and/or markdown files will be saved. By default,
this will be in the same directory where the notebook is. The directory
for images will also be created in here. If --save-notebook is set to
True, it will be saved here as well. Provide a relative path to the
current working directory or an absolute path.
Examples
========
dataframe_image my_notebook.ipynb --to=pdf --save-notebook=True --execute=True
dataframe_image path/to/my_notebook.ipynb --to=md --output-dir="some other/directory/"
Created by <NAME> (https://www.dunderdata.com)
'''
parser = argparse.ArgumentParser(formatter_class=CustomFormatter, add_help=False, usage=argparse.SUPPRESS)
parser.add_argument('filename', default=False)
parser.add_argument('-h', '--help', action='store_true', dest='help')
parser.add_argument('--to', type=str, choices=['md', 'pdf', 'markdown'], default='pdf')
parser.add_argument('--use', type=str, choices=['latex', 'browser'], default='latex')
parser.add_argument('--center-df', type=bool, default=True)
parser.add_argument('--max-rows', type=int, default=30)
parser.add_argument('--max-cols', type=int, default=10)
parser.add_argument('--execute', type=bool, default=False)
parser.add_argument('--save-notebook', type=bool, default=False)
parser.add_argument('--limit', type=int)
parser.add_argument('--document-name')
parser.add_argument('--table-conversion', type=str, choices=['chrome', 'matplotlib'], default='chrome')
parser.add_argument('--chrome-path')
parser.add_argument('--latex-command', type=list, default=['xelatex', '{filename}', 'quiet'])
parser.add_argument('--output-dir')
def main():
if len(sys.argv) == 1 or '-h' in sys.argv or '--help' in sys.argv:
print(HELP)
else:
args = vars(parser.parse_args())
del args['help']
from ._convert import convert
convert(**args)
```
#### File: dataframe_image/tests/test_df_image.py
```python
import pytest
import pandas as pd
import dataframe_image
df = pd.read_csv('tests/notebooks/data/covid19.csv', parse_dates=['date'], index_col='date')
class TestImage:
def test_df(self):
df.tail(10).dfi.export('tests/test_output/covid19.png')
def test_styled(self):
df.tail(10).style.background_gradient().export_png('tests/test_output/covid19_styled.png')
def test_mpl(self):
df.tail(10).dfi.export('tests/test_output/covid19_mpl.png', table_conversion='matplotlib')
``` |
{
"source": "josephrocca/onnx-typecast",
"score": 2
} |
#### File: josephrocca/onnx-typecast/convert-float16-to-float32.py
```python
import onnx
from onnx import helper as h
from onnx import checker as ch
from onnx import TensorProto, GraphProto, AttributeProto
from onnx import numpy_helper as nph
import numpy as np
from collections import OrderedDict
from logger import log
import typer
def make_param_dictionary(initializer):
params = OrderedDict()
for data in initializer:
params[data.name] = data
return params
def convert_params_to_float(params_dict):
converted_params = []
for param in params_dict:
data = params_dict[param]
if data.data_type == TensorProto.FLOAT16:
data_cvt = nph.to_array(data).astype(np.float32)
data = nph.from_array(data_cvt, data.name)
converted_params += [data]
return converted_params
def convert_constant_nodes_to_float(nodes):
"""
convert_constant_nodes_to_float Convert Constant nodes to FLOAT. If a constant node has data type FLOAT16, a new version of the
node is created with FLOAT data type and stored.
Args:
nodes (list): list of nodes
Returns:
list: list of new nodes all with FLOAT constants.
"""
new_nodes = []
for node in nodes:
if (
node.op_type == "Constant"
and node.attribute[0].t.data_type == TensorProto.FLOAT16
):
data = nph.to_array(node.attribute[0].t).astype(np.float32)
new_t = nph.from_array(data)
new_node = h.make_node(
"Constant",
inputs=[],
outputs=node.output,
name=node.name,
value=new_t,
)
new_nodes += [new_node]
else:
new_nodes += [node]
return new_nodes
def convert_model_to_float(model_path: str, out_path: str):
"""
convert_model_to_float Converts ONNX model with FLOAT16 params to FLOAT params.\n
Args:\n
model_path (str): path to original ONNX model.\n
out_path (str): path to save converted model.
"""
log.info("ONNX FLOAT16 --> FLOAT Converter")
log.info(f"Loading Model: {model_path}")
# * load model.
model = onnx.load_model(model_path)
ch.check_model(model)
# * get model opset version.
opset_version = model.opset_import[0].version
graph = model.graph
# * The initializer holds all non-constant weights.
init = graph.initializer
# * collect model params in a dictionary.
params_dict = make_param_dictionary(init)
log.info("Converting FLOAT16 model params to FLOAT...")
# * convert all FLOAT16 aprams to FLOAT.
converted_params = convert_params_to_float(params_dict)
log.info("Converting constant FLOAT16 nodes to FLOAT...")
new_nodes = convert_constant_nodes_to_float(graph.node)
# convert input and output to FLOAT:
input_type = graph.input[0].type.tensor_type.elem_type
output_type = graph.output[0].type.tensor_type.elem_type
if input_type == TensorProto.FLOAT16:
graph.input[0].type.tensor_type.elem_type = TensorProto.FLOAT
if output_type == TensorProto.FLOAT16:
graph.output[0].type.tensor_type.elem_type = TensorProto.FLOAT
# convert node attributes to FLOAT:
for node in new_nodes:
for attribute in node.attribute:
if attribute.name == "to" and attribute.i == TensorProto.FLOAT16: # for op_type=="Cast"
attribute.i = AttributeProto.FLOAT
if hasattr(attribute, "type"):
if attribute.type == TensorProto.FLOAT16:
attribute.type = TensorProto.FLOAT
elif attribute.type == AttributeProto.TENSOR:
if attribute.t.data_type == TensorProto.FLOAT16:
attribute.t.CopyFrom( nph.from_array( nph.to_array(attribute.t).astype(np.float32) ) )
graph_name = f"{graph.name}-float"
log.info("Creating new graph...")
# * create a new graph with converted params and new nodes.
graph_float = h.make_graph(
new_nodes,
graph_name,
graph.input,
graph.output,
initializer=converted_params,
)
log.info("Creating new float model...")
model_float = h.make_model(graph_float, producer_name="onnx-typecast")
model_float.opset_import[0].version = opset_version
ch.check_model(model_float)
log.info(f"Saving converted model as: {out_path}")
onnx.save_model(model_float, out_path)
log.info(f"Done Done London. 🎉")
return
if __name__ == "__main__":
typer.run(convert_model_to_float)
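# Example invocation (argument names taken from the function signature above;
# the model filenames are placeholders):
#   python convert-float16-to-float32.py model_fp16.onnx model_fp32.onnx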
``` |
{
"source": "josephrocca/TokenCut",
"score": 2
} |
#### File: josephrocca/TokenCut/networks.py
```python
import torch
import torch.nn as nn
from torchvision.models.resnet import resnet50
from torchvision.models.vgg import vgg16
import dino.vision_transformer as vits
#import moco.vits as vits_moco
def get_model(arch, patch_size, device):
# Initialize model with pretraining
url = None
if "moco" in arch:
if arch == "moco_vit_small" and patch_size == 16:
url = "moco-v3/vit-s-300ep/vit-s-300ep.pth.tar"
elif arch == "moco_vit_base" and patch_size == 16:
url = "moco-v3/vit-b-300ep/vit-b-300ep.pth.tar"
model = vits.__dict__[arch](num_classes=0)
elif "mae" in arch:
if arch == "mae_vit_base" and patch_size == 16:
url = "mae/visualize/mae_visualize_vit_base.pth"
model = vits.__dict__[arch](num_classes=0)
elif "vit" in arch:
if arch == "vit_small" and patch_size == 16:
url = "dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain.pth"
elif arch == "vit_small" and patch_size == 8:
url = "dino/dino_deitsmall8_300ep_pretrain/dino_deitsmall8_300ep_pretrain.pth"
elif arch == "vit_base" and patch_size == 16:
url = "dino/dino_vitbase16_pretrain/dino_vitbase16_pretrain.pth"
elif arch == "vit_base" and patch_size == 8:
url = "dino/dino_vitbase8_pretrain/dino_vitbase8_pretrain.pth"
elif arch == "resnet50":
url = "dino/dino_resnet50_pretrain/dino_resnet50_pretrain.pth"
model = vits.__dict__[arch](patch_size=patch_size, num_classes=0)
else:
raise NotImplementedError
for p in model.parameters():
p.requires_grad = False
if url is not None:
print(
"Since no pretrained weights have been provided, we load the reference pretrained DINO weights."
)
state_dict = torch.hub.load_state_dict_from_url(
url="https://dl.fbaipublicfiles.com/" + url
)
if "moco" in arch:
state_dict = state_dict['state_dict']
for k in list(state_dict.keys()):
# retain only base_encoder up to before the embedding layer
if k.startswith('module.base_encoder') and not k.startswith('module.base_encoder.head'):
# remove prefix
state_dict[k[len("module.base_encoder."):]] = state_dict[k]
# delete renamed or unused k
del state_dict[k]
elif "mae" in arch:
state_dict = state_dict['model']
for k in list(state_dict.keys()):
# retain only base_encoder up to before the embedding layer
if k.startswith('decoder') or k.startswith('mask_token'):
# remove prefix
#state_dict[k[len("module.base_encoder."):]] = state_dict[k]
# delete renamed or unused k
del state_dict[k]
msg = model.load_state_dict(state_dict, strict=True)
print(
"Pretrained weights found at {} and loaded with msg: {}".format(
url, msg
)
)
else:
print(
"There is no reference weights available for this model => We use random weights."
)
model.eval()
model.to(device)
return model
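# Example use (illustrative arguments matching the cases handled above):
#   model = get_model("vit_small", patch_size=16, device="cuda")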
```
#### File: TokenCut/unsupervised_saliency_detection/object_discovery.py
```python
import torch
import torch.nn.functional as F
import numpy as np
#from scipy.linalg.decomp import eig
import scipy
from scipy.linalg import eigh
from scipy import ndimage
#from sklearn.mixture import GaussianMixture
#from sklearn.cluster import KMeans
def ncut(feats, dims, scales, init_image_size, tau = 0, eps=1e-5, im_name='', no_binary_graph=False):
"""
Implementation of NCut Method.
Inputs
feats: the pixel/patche features of an image
dims: dimension of the map from which the features are used
scales: from image to map scale
init_image_size: size of the image
tau: thresold for graph construction
eps: graph edge weight
im_name: image_name
no_binary_graph: ablation study for using similarity score as graph edge weight
"""
feats = F.normalize(feats, p=2, dim=0)
A = (feats.transpose(0,1) @ feats)
A = A.cpu().numpy()
if no_binary_graph:
A[A<tau] = eps
else:
A = A > tau
A = np.where(A.astype(float) == 0, eps, A)
d_i = np.sum(A, axis=1)
D = np.diag(d_i)
    # Compute the second and third smallest generalized eigenvectors
_, eigenvectors = eigh(D-A, D, subset_by_index=[1,2])
eigenvec = np.copy(eigenvectors[:, 0])
# method1 avg
second_smallest_vec = eigenvectors[:, 0]
avg = np.sum(second_smallest_vec) / len(second_smallest_vec)
bipartition = second_smallest_vec > avg
seed = np.argmax(np.abs(second_smallest_vec))
if bipartition[seed] != 1:
eigenvec = eigenvec * -1
bipartition = np.logical_not(bipartition)
bipartition = bipartition.reshape(dims).astype(float)
# predict BBox
pred, _, objects,cc = detect_box(bipartition, seed, dims, scales=scales, initial_im_size=init_image_size) ## We only extract the principal object BBox
mask = np.zeros(dims)
mask[cc[0],cc[1]] = 1
mask = torch.from_numpy(mask).to('cuda')
# mask = torch.from_numpy(bipartition).to('cuda')
bipartition = F.interpolate(mask.unsqueeze(0).unsqueeze(0), size=init_image_size, mode='nearest').squeeze()
eigvec = second_smallest_vec.reshape(dims)
eigvec = torch.from_numpy(eigvec).to('cuda')
eigvec = F.interpolate(eigvec.unsqueeze(0).unsqueeze(0), size=init_image_size, mode='nearest').squeeze()
return seed, bipartition.cpu().numpy(), eigvec.cpu().numpy()
def detect_box(bipartition, seed, dims, initial_im_size=None, scales=None, principle_object=True):
"""
    Extract a box corresponding to the seed patch. Among the connected components extracted from the bipartition, select the one containing the seed patch.
"""
w_featmap, h_featmap = dims
objects, num_objects = ndimage.label(bipartition)
cc = objects[np.unravel_index(seed, dims)]
if principle_object:
mask = np.where(objects == cc)
# Add +1 because excluded max
ymin, ymax = min(mask[0]), max(mask[0]) + 1
xmin, xmax = min(mask[1]), max(mask[1]) + 1
# Rescale to image size
r_xmin, r_xmax = scales[1] * xmin, scales[1] * xmax
r_ymin, r_ymax = scales[0] * ymin, scales[0] * ymax
pred = [r_xmin, r_ymin, r_xmax, r_ymax]
# Check not out of image size (used when padding)
if initial_im_size:
pred[2] = min(pred[2], initial_im_size[1])
pred[3] = min(pred[3], initial_im_size[0])
# Coordinate predictions for the feature space
    # Axes ordered differently than in image space
pred_feats = [ymin, xmin, ymax, xmax]
return pred, pred_feats, objects, mask
else:
raise NotImplementedError
```
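
As a side note, the spectral step at the core of `ncut` can be reproduced on a toy affinity matrix; the standalone sketch below (not from the repository) solves the same generalized eigenproblem with `scipy.linalg.eigh` and bipartitions at the mean of the second-smallest eigenvector.

```python
import numpy as np
from scipy.linalg import eigh

# Toy affinity matrix: nodes {0, 1, 2} and {3, 4} form two weakly linked clusters.
A = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.1],
    [0.9, 1.0, 0.9, 0.1, 0.1],
    [0.8, 0.9, 1.0, 0.1, 0.1],
    [0.1, 0.1, 0.1, 1.0, 0.9],
    [0.1, 0.1, 0.1, 0.9, 1.0],
])
D = np.diag(A.sum(axis=1))

# Second- and third-smallest generalized eigenvectors of (D - A) v = lambda D v.
_, eigenvectors = eigh(D - A, D, subset_by_index=[1, 2])
fiedler = eigenvectors[:, 0]

# Threshold at the mean, as ncut() does; the two clusters land on opposite sides.
bipartition = fiedler > fiedler.mean()
print(bipartition)
```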
#### File: TokenCut/weakly_supvervised_detection/datasets.py
```python
import os
import re
import torch
import numpy as np
import pandas as pd
import requests
from PIL import Image
from bs4 import BeautifulSoup
from collections import defaultdict
from tqdm import tqdm
from torchvision.datasets.folder import default_loader
from torchvision.datasets.utils import download_url
from torch.utils.data import Dataset
from torchvision import transforms
# re, requests and BeautifulSoup are required by ImageNet.verify_wnid / get_terminal_wnids below
class CUB200(Dataset):
def __init__(self, root, is_train, transform=None, ori_size=False, input_size=224, center_crop=True):
self.root = root
self.is_train = is_train
self.ori_size = ori_size
if not ori_size and center_crop:
image_size = int(256/224*input_size) #TODO check
crop_size = input_size #TODO check
shift = (image_size - crop_size) // 2
        elif not ori_size and not center_crop:
            image_size = input_size
            crop_size = input_size
            shift = 0
        else:
            # ori_size: the resize parameters are unused by _load_data
            image_size = crop_size = shift = None
        self.data = self._load_data(image_size, crop_size, shift, center_crop)
self.transform = transform
def _load_data(self, image_size, crop_size, shift, center_crop=True):
self._labelmap_path = os.path.join(self.root, 'CUB_200_2011', 'classes.txt')
paths = pd.read_csv(
os.path.join(self.root, 'CUB_200_2011', 'images.txt'),
sep=' ', names=['id', 'path'])
labels = pd.read_csv(
os.path.join(self.root, 'CUB_200_2011', 'image_class_labels.txt'),
sep=' ', names=['id', 'label'])
splits = pd.read_csv(
os.path.join(self.root, 'CUB_200_2011', 'train_test_split.txt'),
sep=' ', names=['id', 'is_train'])
orig_image_sizes = pd.read_csv(
os.path.join(self.root, 'CUB_200_2011', 'image_sizes.txt'),
sep=' ', names=['id', 'width', 'height'])
bboxes = pd.read_csv(
os.path.join(self.root, 'CUB_200_2011', 'bounding_boxes.txt'),
sep=' ', names=['id', 'x', 'y', 'w', 'h'])
if self.ori_size:
resized_bboxes = pd.DataFrame({'id': paths.id,
'xmin': bboxes.x,
'ymin': bboxes.y,
'xmax': bboxes.x + bboxes.w,
'ymax': bboxes.y + bboxes.h})
else:
if center_crop:
resized_xmin = np.maximum(
(bboxes.x / orig_image_sizes.width * image_size - shift).astype(int), 0)
resized_ymin = np.maximum(
(bboxes.y / orig_image_sizes.height * image_size - shift).astype(int), 0)
resized_xmax = np.minimum(
((bboxes.x + bboxes.w - 1) / orig_image_sizes.width * image_size - shift).astype(int),
crop_size - 1)
resized_ymax = np.minimum(
((bboxes.y + bboxes.h - 1) / orig_image_sizes.height * image_size - shift).astype(int),
crop_size - 1)
else:
min_length = pd.concat([orig_image_sizes.width, orig_image_sizes.height], axis=1).min(axis=1)
resized_xmin = (bboxes.x / min_length * image_size).astype(int)
resized_ymin = (bboxes.y / min_length * image_size).astype(int)
resized_xmax = ((bboxes.x + bboxes.w - 1) / min_length * image_size).astype(int)
resized_ymax = ((bboxes.y + bboxes.h - 1) / min_length * image_size).astype(int)
resized_bboxes = pd.DataFrame({'id': paths.id,
'xmin': resized_xmin.values,
'ymin': resized_ymin.values,
'xmax': resized_xmax.values,
'ymax': resized_ymax.values})
data = paths.merge(labels, on='id')\
.merge(splits, on='id')\
.merge(resized_bboxes, on='id')
if self.is_train:
data = data[data.is_train == 1]
else:
data = data[data.is_train == 0]
return data
def __len__(self):
return len(self.data)
# def _preprocess_bbox(self, origin_bbox, orig_image_size, center_crop=True):
# xmin, ymin, xmax, ymax = origin_bbox
# orig_width, orig_height = orig_image_size
# if center_crop:
# resized_xmin = np.maximum(
# (bboxes.x / orig_image_sizes.width * image_size - shift).astype(int), 0)
# resized_ymin = np.maximum(
# (bboxes.y / orig_image_sizes.height * image_size - shift).astype(int), 0)
# resized_xmax = np.minimum(
# ((bboxes.x + bboxes.w - 1) / orig_image_sizes.width * image_size - shift).astype(int),
# crop_size - 1)
# resized_ymax = np.minimum(
# ((bboxes.y + bboxes.h - 1) / orig_image_sizes.height * image_size - shift).astype(int),
# crop_size - 1)
# else:
# print(f'width: {orig_image_sizes.width}, height: {orig_image_sizes.height}')
# min_length = min(orig_image_sizes.width , orig_image_sizes.height)
# resized_xmin = int(bb / min_length * self.image_size)
# resized_ymin = int(ymin / min_length * self.image_size)
# resized_xmax = int(xmax / min_length * self.image_size)
# resized_ymax = int(ymax / min_length * self.image_size)
# resized_bboxes = pd.DataFrame({'id': paths.id,
# 'xmin': resized_xmin.values,
# 'ymin': resized_ymin.values,
# 'xmax': resized_xmax.values,
# 'ymax': resized_ymax.values})
def __getitem__(self, idx):
sample = self.data.iloc[idx]
path = os.path.join(self.root, 'CUB_200_2011/images', sample.path)
image = Image.open(path).convert('RGB')
label = sample.label - 1 # label starts from 1
gt_box = torch.tensor(
[sample.xmin, sample.ymin, sample.xmax, sample.ymax])
if self.transform is not None:
image = self.transform(image)
return (image, label, gt_box)
@property
def class_id_to_name(self):
if hasattr(self, '_class_id_to_name'):
return self._class_id_to_name
labelmap = pd.read_csv(self._labelmap_path, sep=' ', names=['label', 'name'])
labelmap['label'] = labelmap['label'].apply(lambda x: x - 1)
self._class_id_to_name = labelmap.set_index('label')['name'].to_dict()
return self._class_id_to_name
@property
def class_name_to_id(self):
if hasattr(self, '_class_name_to_id'):
return self._class_name_to_id
self._class_name_to_id = {v: k for k, v in self.class_id_to_name.items()}
return self._class_name_to_id
@property
def class_to_images(self):
if hasattr(self, '_class_to_images'):
return self._class_to_images
#self.log.warn('Create index...')
self._class_to_images = defaultdict(list)
for idx in tqdm(range(len(self))):
sample = self.data.iloc[idx]
label = sample.label - 1
self._class_to_images[label].append(idx)
#self.log.warn('Done!')
return self._class_to_images
#class ImageNet(H5Dataset):
# def __init__(self, root, is_train, transform=None):
# self.root = root
# self.is_train = is_train
# tag = 'train' if is_train else 'val'
# self.h5_path = os.path.join(root, f'imagenet_{tag}.h5')
#
# super().__init__(self.h5_path, transform)
class ImageNet(Dataset):
def __init__(self, root, is_train, transform=None, ori_size=False, input_size=224, center_crop=True):
self.root = root
self.is_train = is_train
self.ori_size = ori_size
self.center_crop = center_crop
if not ori_size and center_crop:
self.image_size = int(256/224 * input_size)
self.crop_size = input_size
self.shift = (self.image_size - self.crop_size) // 2
elif not ori_size and not center_crop:
print('resize, without center crop')
self.image_size = input_size
self._load_data()
self.transform = transform
def _load_data(self):
self._labelmap_path = os.path.join(
self.root, 'ILSVRC/Detection', 'imagenet1000_clsidx_to_labels.txt')
if self.is_train:
self.path = os.path.join(self.root, 'ILSVRC/Data/train')
self.metadata = pd.read_csv(
os.path.join(self.root, 'ILSVRC/Detection', 'train.txt'),
sep=' ', names=['path', 'label'])
else:
self.path = os.path.join(self.root, 'ILSVRC/Data/val')
self.metadata = pd.read_csv(
os.path.join(self.root, 'ILSVRC/Detection', 'val.txt'),
sep='\t', names=['path', 'label', 'xmin', 'ymin', 'xmax', 'ymax'])
self.wnids = pd.read_csv(
os.path.join(self.root, 'ILSVRC/Detection/', 'wnids.txt'), names=['dir_name'])
def _preprocess_bbox(self, origin_bbox, orig_image_size, center_crop=True, image_path=None):
xmin, ymin, xmax, ymax = origin_bbox
orig_width, orig_height = orig_image_size
if center_crop:
resized_xmin = np.maximum(
int(xmin / orig_width * self.image_size - self.shift), 0)
resized_ymin = np.maximum(
int(ymin / orig_height * self.image_size - self.shift), 0)
resized_xmax = np.minimum(
int(xmax / orig_width * self.image_size - self.shift), self.crop_size - 1)
resized_ymax = np.minimum(
int(ymax / orig_height * self.image_size - self.shift), self.crop_size - 1)
else:
#print(f'ori W: {orig_width} ori H: {orig_height}, xmin: {xmin}, ymin: {ymin}, xmax: {xmax}, ymax: {ymax}, input_size: {self.image_size}, image_path: {image_path}')
min_length = min(orig_height, orig_width)
resized_xmin = int(xmin / min_length * self.image_size)
resized_ymin = int(ymin / min_length * self.image_size)
resized_xmax = int(xmax / min_length * self.image_size)
resized_ymax = int(ymax / min_length * self.image_size)
#print(f'output: xmin, ymin, xmax, ymax: {[resized_xmin, resized_ymin, resized_xmax, resized_ymax]}')
return [resized_xmin, resized_ymin, resized_xmax, resized_ymax]
def __len__(self):
return len(self.metadata)
def __getitem__(self, idx):
sample = self.metadata.iloc[idx]
if self.is_train:
image_path = os.path.join(self.path, sample.path)
else:
image_path = os.path.join(
self.path, self.wnids.iloc[int(sample.label)].dir_name, sample.path)
image = Image.open(image_path).convert('RGB')
label = sample.label
# preprocess bbox
if self.is_train:
gt_box = torch.tensor([0., 0., 0., 0.])
else:
origin_box = [sample.xmin, sample.ymin, sample.xmax, sample.ymax]
if self.ori_size:
gt_box = torch.tensor(origin_box)
else:
gt_box = torch.tensor(
self._preprocess_bbox(origin_box, image.size, self.center_crop, image_path))
if self.transform is not None:
image = self.transform(image)
return (image, label, gt_box)
@property
def class_id_to_name(self):
if hasattr(self, '_class_id_to_name'):
return self._class_id_to_name
with open(self._labelmap_path, 'r') as f:
self._class_id_to_name = eval(f.read())
return self._class_id_to_name
@property
def class_name_to_id(self):
if hasattr(self, '_class_name_to_id'):
return self._class_name_to_id
self._class_name_to_id = {v: k for k, v in self.class_id_to_name.items()}
return self._class_name_to_id
@property
def wnid_list(self):
if hasattr(self, '_wnid_list'):
return self._wnid_list
self._wnid_list = self.wnids.dir_name.tolist()
return self._wnid_list
@property
def class_to_images(self):
if hasattr(self, '_class_to_images'):
return self._class_to_images
        #self.log.warn('Create index...')  # no logger is defined on this class
self._class_to_images = defaultdict(list)
for idx in tqdm(range(len(self))):
sample = self.metadata.iloc[idx]
label = sample.label
self._class_to_images[label].append(idx)
        #self.log.warn('Done!')
return self._class_to_images
def verify_wnid(self, wnid):
is_valid = bool(re.match(u'^[n][0-9]{8}$', wnid))
is_terminal = bool(wnid in self.wnids.dir_name.tolist())
return is_valid and is_terminal
def get_terminal_wnids(self, wnid):
page = requests.get("http://www.image-net.org/api/text/wordnet.structure.hyponym?wnid={}&full=1".format(wnid))
str_wnids = str(BeautifulSoup(page.content, 'html.parser'))
split_wnids = re.split('\r\n-|\r\n', str_wnids)
return [_wnid for _wnid in split_wnids if self.verify_wnid(_wnid)]
def get_image_ids(self, wnid):
terminal_wnids = self.get_terminal_wnids(wnid)
image_ids = set()
for terminal_wnid in terminal_wnids:
class_id = self.wnid_list.index(terminal_wnid)
image_ids |= set(self.class_to_images[class_id])
return list(image_ids)
DATASETS = {
'cub': CUB200,
'imagenet': ImageNet,
}
LABELS = {
'cub': 200,
'imagenet': 1000,
}
def build_dataset(is_train, args):
# Define arguments
data_name = args.dataset
root = args.data_path
batch_size = args.batch_size_per_gpu
num_workers = args.num_workers
transform = build_transform(is_train, args)
dataset = DATASETS[data_name](root, is_train=is_train, transform=transform, ori_size=args.ori_size, input_size = args.input_size, center_crop = not args.no_center_crop)
return dataset, LABELS[data_name]
def build_transform(is_train, args):
resize_im = args.input_size > 32
if is_train:
transform = transforms.Compose([
transforms.RandomResizedCrop(args.input_size),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
if not resize_im:
# replace RandomResizedCropAndInterpolation with
# RandomCrop
transform.transforms[0] = transforms.RandomCrop(
args.input_size, padding=4)
return transform
t = []
if resize_im and (not args.ori_size):
if args.no_center_crop:
t.append(transforms.Resize(args.input_size, interpolation=3))
else:
size = int((256 / 224) * args.input_size)
t.append(
transforms.Resize(size, interpolation=3), # to maintain same ratio w.r.t. 224 images
)
if not args.ori_size and not args.no_center_crop:
print('center crop')
t.append(transforms.CenterCrop(args.input_size))
t.append(transforms.ToTensor())
t.append(transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)))
return transforms.Compose(t)
```
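
A small self-contained illustration (not part of the repository) of the bounding-box rescaling arithmetic used in `CUB200._load_data` for the center-crop case; the 500×375 source image and the box values are made-up numbers.

```python
import numpy as np

input_size = 224
image_size = int(256 / 224 * input_size)   # 256: size after the Resize step
crop_size = input_size                     # 224: size after the CenterCrop step
shift = (image_size - crop_size) // 2      # 16: pixels trimmed on each side

# Made-up original image size and (x, y, w, h) box in original pixel coordinates.
orig_w, orig_h = 500, 375
x, y, w, h = 120.0, 80.0, 200.0, 150.0

# Same formulas as CUB200._load_data with center_crop=True.
xmin = np.maximum(int(x / orig_w * image_size - shift), 0)
ymin = np.maximum(int(y / orig_h * image_size - shift), 0)
xmax = np.minimum(int((x + w - 1) / orig_w * image_size - shift), crop_size - 1)
ymax = np.minimum(int((y + h - 1) / orig_h * image_size - shift), crop_size - 1)
print(xmin, ymin, xmax, ymax)
```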
#### File: TokenCut/weakly_supvervised_detection/metrics.py
```python
import cv2
import torch
import numpy as np
def loc_accuracy(outputs, labels, gt_boxes, bboxes, iou_threshold=0.5):
if outputs is not None:
_, pred = torch.topk(outputs, k=1, dim=1, largest=True, sorted=True)
pred = pred.t()
correct = pred.eq(labels.view(1, -1).expand_as(pred))
wrongs = [c == 0 for c in correct.cpu().numpy()][0]
batch_size = len(gt_boxes)
gt_known, top1 = 0., 0.
for i, (gt_box, bbox) in enumerate(zip(gt_boxes, bboxes)):
iou_score = iou(gt_box, bbox)
if iou_score >= iou_threshold:
gt_known += 1.
if outputs is not None and not wrongs[i]:
top1 += 1.
gt_loc = gt_known / batch_size
top1_loc = top1 / batch_size
return gt_loc, top1_loc
def iou(box1, box2):
"""box: (xmin, ymin, xmax, ymax)"""
box1_xmin, box1_ymin, box1_xmax, box1_ymax = box1
box2_xmin, box2_ymin, box2_xmax, box2_ymax = box2
inter_xmin = max(box1_xmin, box2_xmin)
inter_ymin = max(box1_ymin, box2_ymin)
inter_xmax = min(box1_xmax, box2_xmax)
inter_ymax = min(box1_ymax, box2_ymax)
    inter_area = max(inter_xmax - inter_xmin + 1, 0) * max(inter_ymax - inter_ymin + 1, 0)  # clamp to 0 when the boxes do not overlap
box1_area = (box1_xmax - box1_xmin + 1) * (box1_ymax - box1_ymin + 1)
box2_area = (box2_xmax - box2_xmin + 1) * (box2_ymax - box2_ymin + 1)
iou = inter_area / (box1_area + box2_area - inter_area).float()
return iou.item()
``` |
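
For reference, a hedged usage sketch (not from the repository) of the `iou` helper on two hand-picked boxes; the boxes are torch tensors because `iou` calls `.float()` on the denominator.

```python
import torch
from metrics import iou  # the module shown above

# Two overlapping boxes in (xmin, ymin, xmax, ymax) order.
gt_box = torch.tensor([10, 10, 110, 110])
pred_box = torch.tensor([60, 60, 160, 160])

# With the inclusive-pixel (+1) convention above: intersection 51*51,
# each box 101*101, so IoU = 2601 / 17801 ≈ 0.146.
score = iou(gt_box, pred_box)
print(round(score, 4))
```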
{
"source": "joseph-roitman/pytest-snapshot",
"score": 2
} |
#### File: pytest-snapshot/tests/test_assert_match.py
```python
import os
from pathlib import Path
import pytest
from pytest_snapshot.plugin import _file_encode
from tests.utils import assert_pytest_passes
@pytest.fixture
def basic_case_dir(testdir):
case_dir = testdir.mkdir('case_dir')
case_dir.join('snapshot1.txt').write_text('the valuÉ of snapshot1.txt\n', 'utf-8')
return case_dir
def test_assert_match_with_external_snapshot_path(testdir, basic_case_dir):
testdir.makepyfile(r"""
from pathlib import Path
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match('the value of snapshot1.txt\n', Path('not_case_dir/snapshot1.txt').absolute())
""")
result = testdir.runpytest('-v')
result.stdout.fnmatch_lines([
'*::test_sth FAILED*',
"E* AssertionError: Snapshot path not_case_dir?snapshot1.txt is not in case_dir",
])
assert result.ret == 1
def test_assert_match_success_string(testdir, basic_case_dir):
testdir.makepyfile(r"""
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match('the valuÉ of snapshot1.txt\n', 'snapshot1.txt')
""")
assert_pytest_passes(testdir)
def test_assert_match_success_bytes(testdir, basic_case_dir):
testdir.makepyfile(r"""
import os
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match(b'the valu\xc3\x89 of snapshot1.txt' + os.linesep.encode(), 'snapshot1.txt')
""")
assert_pytest_passes(testdir)
def test_assert_match_failure_string(testdir, basic_case_dir):
testdir.makepyfile(r"""
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match('the INCORRECT value of snapshot1.txt\n', 'snapshot1.txt')
""")
result = testdir.runpytest('-v')
result.stdout.fnmatch_lines([
'*::test_sth FAILED*',
">* raise AssertionError(snapshot_diff_msg)",
'E* AssertionError: value does not match the expected value in snapshot case_dir?snapshot1.txt',
"E* assert * == *",
"E* - the valuÉ of snapshot1.txt",
"E* ? ^",
"E* + the INCORRECT value of snapshot1.txt",
"E* ? ++++++++++ ^",
])
assert result.ret == 1
def test_assert_match_failure_bytes(testdir, basic_case_dir):
testdir.makepyfile(r"""
import os
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match(b'the INCORRECT value of snapshot1.txt' + os.linesep.encode(), 'snapshot1.txt')
""")
result = testdir.runpytest('-v')
result.stdout.fnmatch_lines([
r'*::test_sth FAILED*',
r">* raise AssertionError(snapshot_diff_msg)",
r'E* AssertionError: value does not match the expected value in snapshot case_dir?snapshot1.txt',
r"E* assert * == *",
r"E* At index 4 diff: * != *",
r"E* Full diff:",
r"E* - b'the valu\xc3\x89 of snapshot1.txt{}'".format(repr(os.linesep)[1:-1]),
r"E* + b'the INCORRECT value of snapshot1.txt{}'".format(repr(os.linesep)[1:-1]),
])
assert result.ret == 1
def test_assert_match_invalid_type(testdir, basic_case_dir):
testdir.makepyfile(r"""
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match(123, 'snapshot1.txt')
""")
result = testdir.runpytest('-v')
result.stdout.fnmatch_lines([
'*::test_sth FAILED*',
'E* TypeError: value must be str or bytes',
])
assert result.ret == 1
def test_assert_match_missing_snapshot(testdir, basic_case_dir):
testdir.makepyfile(r"""
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match('something', 'snapshot_that_doesnt_exist.txt')
""")
result = testdir.runpytest('-v')
result.stdout.fnmatch_lines([
'*::test_sth FAILED*',
"E* snapshot case_dir?snapshot_that_doesnt_exist.txt doesn't exist. "
"(run pytest with --snapshot-update to create it)",
])
assert result.ret == 1
def test_assert_match_update_existing_snapshot_no_change(testdir, basic_case_dir):
testdir.makepyfile(r"""
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match('the valuÉ of snapshot1.txt\n', 'snapshot1.txt')
""")
result = testdir.runpytest('-v', '--snapshot-update')
result.stdout.fnmatch_lines([
'*::test_sth PASSED*',
])
assert result.ret == 0
assert_pytest_passes(testdir) # assert that snapshot update worked
@pytest.mark.parametrize('case_dir_repr',
["'case_dir'",
"str(Path('case_dir').absolute())",
"Path('case_dir')",
"Path('case_dir').absolute()"],
ids=['relative_string_case_dir',
'abs_string_case_dir',
'relative_path_case_dir',
'abs_path_case_dir'])
@pytest.mark.parametrize('snapshot_name_repr',
["'snapshot1.txt'",
"str(Path('case_dir/snapshot1.txt').absolute())",
"Path('case_dir/snapshot1.txt')", # TODO: support this or "Path('snapshot1.txt')"?
"Path('case_dir/snapshot1.txt').absolute()"],
ids=['relative_string_snapshot_name',
'abs_string_snapshot_name',
'relative_path_snapshot_name',
'abs_path_snapshot_name'])
def test_assert_match_update_existing_snapshot(testdir, basic_case_dir, case_dir_repr, snapshot_name_repr):
"""
Tests that `Snapshot.assert_match` works when updating an existing snapshot.
Also tests that `Snapshot` supports absolute/relative str/Path snapshot directories and snapshot paths.
"""
testdir.makepyfile(r"""
from pathlib import Path
def test_sth(snapshot):
snapshot.snapshot_dir = {case_dir_repr}
snapshot.assert_match('the NEW value of snapshot1.txt\n', {snapshot_name_repr})
""".format(case_dir_repr=case_dir_repr, snapshot_name_repr=snapshot_name_repr))
result = testdir.runpytest('-v', '--snapshot-update')
result.stdout.fnmatch_lines([
'*::test_sth PASSED*',
'*::test_sth ERROR*',
"E* AssertionError: Snapshot directory was modified: case_dir",
'E* Updated snapshots:',
'E* snapshot1.txt',
])
assert result.ret == 1
assert_pytest_passes(testdir) # assert that snapshot update worked
def test_assert_match_update_existing_snapshot_and_exception_in_test(testdir, basic_case_dir):
"""
Tests that `Snapshot.assert_match` works when updating an existing snapshot and then the test function fails.
In this case, both the snapshot update error and the test function error are printed out.
"""
testdir.makepyfile(r"""
from pathlib import Path
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match('the NEW value of snapshot1.txt\n', 'snapshot1.txt')
assert False
""")
result = testdir.runpytest('-v', '--snapshot-update')
result.stdout.fnmatch_lines([
'*::test_sth FAILED*',
'*::test_sth ERROR*',
"E* AssertionError: Snapshot directory was modified: case_dir",
'E* Updated snapshots:',
'E* snapshot1.txt',
'E* assert False',
])
assert result.ret == 1
def test_assert_match_create_new_snapshot(testdir, basic_case_dir):
testdir.makepyfile(r"""
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match('the NEW value of new_snapshot1.txt', 'sub_dir/new_snapshot1.txt')
""")
result = testdir.runpytest('-v', '--snapshot-update')
result.stdout.fnmatch_lines([
'*::test_sth PASSED*',
'*::test_sth ERROR*',
"E* Snapshot directory was modified: case_dir",
'E* Created snapshots:',
'E* sub_dir?new_snapshot1.txt',
])
assert result.ret == 1
assert_pytest_passes(testdir) # assert that snapshot update worked
def test_assert_match_create_new_snapshot_in_default_dir(testdir):
testdir.makepyfile(r"""
def test_sth(snapshot):
snapshot.assert_match('the value of new_snapshot1.txt', 'sub_dir/new_snapshot1.txt')
""")
result = testdir.runpytest('-v', '--snapshot-update')
result.stdout.fnmatch_lines([
'*::test_sth PASSED*',
'*::test_sth ERROR*',
"E* Snapshot directory was modified: snapshots?test_assert_match_create_new_snapshot_in_default_dir?test_sth",
'E* Created snapshots:',
'E* sub_dir?new_snapshot1.txt',
])
assert result.ret == 1
assert testdir.tmpdir.join(
'snapshots/test_assert_match_create_new_snapshot_in_default_dir/test_sth/sub_dir/new_snapshot1.txt'
).read_text('utf-8') == 'the value of new_snapshot1.txt'
assert_pytest_passes(testdir) # assert that snapshot update worked
def test_assert_match_existing_snapshot_is_not_file(testdir, basic_case_dir):
basic_case_dir.mkdir('directory1')
testdir.makepyfile(r"""
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match('something', 'directory1')
""")
result = testdir.runpytest('-v', '--snapshot-update')
result.stdout.fnmatch_lines([
'*::test_sth FAILED*',
"E* AssertionError: snapshot exists but is not a file: case_dir?directory1",
])
assert result.ret == 1
@pytest.mark.parametrize('tested_value', [
b'',
'',
bytes(bytearray(range(256))),
''.join(chr(i) for i in range(0, 10000)).replace('\r', ''),
' \n \t \n Whitespace! \n\t Whitespace! \n \t \n ',
# We don't support \r due to cross-compatibility and git by default modifying snapshot files...
pytest.param('\r', marks=pytest.mark.xfail(strict=True)),
], ids=[
'empty-bytes',
'empty-string',
'all-bytes',
'unicode',
'whitespace',
'slash-r',
])
def test_assert_match_edge_cases(testdir, basic_case_dir, tested_value):
"""
This test tests many possible values to snapshot test.
This test will fail if we change the snapshot file format in any way.
This test also checks that assert_match will pass after a snapshot update.
"""
testdir.makepyfile(r"""
def test_sth(snapshot):
tested_value = {tested_value!r}
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match(tested_value, 'tested_value_snapshot')
""".format(tested_value=tested_value))
result = testdir.runpytest('-v', '--snapshot-update')
result.stdout.fnmatch_lines([
'*::test_sth PASSED*',
'*::test_sth ERROR*',
])
assert result.ret == 1
if isinstance(tested_value, str):
expected_encoded_snapshot = tested_value.replace('\n', os.linesep).encode()
else:
expected_encoded_snapshot = tested_value
encoded_snapshot = Path(str(basic_case_dir)).joinpath('tested_value_snapshot').read_bytes()
assert encoded_snapshot == expected_encoded_snapshot
assert_pytest_passes(testdir) # assert that snapshot update worked
def test_assert_match_unsupported_value_existing_snapshot(testdir, basic_case_dir):
"""
Test that when running tests without --snapshot-update, we don't tell the user that the value is unsupported.
We instead tell the user that the value does not equal the snapshot. This behaviour is more helpful.
"""
basic_case_dir.join('newline.txt').write_binary(_file_encode('\n'))
testdir.makepyfile(r"""
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match('\r', 'newline.txt')
""")
result = testdir.runpytest('-v')
result.stdout.fnmatch_lines([
'*::test_sth FAILED*',
'E* AssertionError: value does not match the expected value in snapshot case_dir?newline.txt',
"E* - '\\n'",
"E* + '\\r'",
])
assert result.ret == 1
def test_assert_match_unsupported_value_update_existing_snapshot(testdir, basic_case_dir):
basic_case_dir.join('newline.txt').write_binary(_file_encode('\n'))
testdir.makepyfile(r"""
import os
from unittest import mock
def _file_encode(string: str) -> bytes:
return string.replace('\n', os.linesep).encode()
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
with mock.patch('pytest_snapshot.plugin._file_encode', _file_encode):
snapshot.assert_match('\r', 'newline.txt')
""")
result = testdir.runpytest('-v', '--snapshot-update')
result.stdout.fnmatch_lines([
'*::test_sth FAILED*',
"E* ValueError: value is not supported by pytest-snapshot's serializer.",
])
assert result.ret == 1
def test_assert_match_unsupported_value_create_snapshot(testdir, basic_case_dir):
testdir.makepyfile(r"""
import os
from unittest import mock
def _file_encode(string: str) -> bytes:
return string.replace('\n', os.linesep).encode()
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
with mock.patch('pytest_snapshot.plugin._file_encode', _file_encode):
snapshot.assert_match('\r', 'newline.txt')
""")
result = testdir.runpytest('-v', '--snapshot-update')
result.stdout.fnmatch_lines([
'*::test_sth FAILED*',
"E* ValueError: value is not supported by pytest-snapshot's serializer.",
])
assert result.ret == 1
def test_assert_match_unsupported_value_slash_r(testdir, basic_case_dir):
testdir.makepyfile(r"""
def test_sth(snapshot):
snapshot.snapshot_dir = 'case_dir'
snapshot.assert_match('\r', 'newline.txt')
""")
result = testdir.runpytest('-v', '--snapshot-update')
result.stdout.fnmatch_lines([
'*::test_sth FAILED*',
'E* ValueError: Snapshot testing strings containing "\\r" is not supported.',
])
assert result.ret == 1
``` |
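
Outside the plugin's own test-suite, the end-user pattern exercised above reduces to something like the following minimal sketch (a hypothetical test file, assuming pytest-snapshot is installed and run once with `--snapshot-update`):

```python
# test_example.py — hypothetical test module using the pytest-snapshot fixture.
def test_report(snapshot):
    report = "total: 3\npassed: 3\n"
    # Directory holding the snapshot files for this test module.
    snapshot.snapshot_dir = "snapshots"
    # The first run with `pytest --snapshot-update` writes snapshots/report.txt;
    # later runs without the flag assert the value still matches it.
    snapshot.assert_match(report, "report.txt")
```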
{
"source": "josephroqueca/campus-guide-backend",
"score": 3
} |
#### File: campus-guide-backend/script/schema_validate.py
```python
import json
import os
import re
import sys
import jsonschema
RE_LANGUAGE = re.compile(r'[.][a-z]+$')
RE_COMMENT = re.compile(r'^\s*[/]{2}.*$', flags=re.MULTILINE)
VERBOSE = False
SUCCESS_CODE = 0
if '-v' in sys.argv:
VERBOSE = True
sys.argv.remove('-v')
if '--verbose' in sys.argv:
VERBOSE = True
sys.argv.remove('--verbose')
if len(sys.argv) < 3:
print('Usage: ./schema_validate.py <asset_dir> <schema_dir>')
SUCCESS_CODE = 2
sys.exit(SUCCESS_CODE)
# Import base schemas
STORE = {}
BASE_SCHEMA_DIR = os.path.join(sys.argv[2], '__base__')
for base_schema_name in os.listdir(BASE_SCHEMA_DIR):
with open(os.path.join(BASE_SCHEMA_DIR, base_schema_name)) as base_schema_raw:
base_schema = json.load(base_schema_raw)
STORE[base_schema['id']] = base_schema
def set_success_code(code):
"""
Set the success code for the program.
:param code:
New code
:type code:
`int`
"""
global SUCCESS_CODE # pylint:disable=global-statement
SUCCESS_CODE = code
def strip_comments(str_in):
"""
Remove single line comments from a multiline string.
:param str_in:
Input string
:type str_in:
`str`
:rtype:
`str`
"""
comment = re.search(RE_COMMENT, str_in)
while comment:
str_in = str_in[:comment.span()[0]] + str_in[comment.span()[1] + 1:]
comment = re.search(RE_COMMENT, str_in)
return str_in
def validate(config, schema_path, schema_name):
"""
Validate a single configuration file using the schema at the provided path,
with the provided name.
:param config:
Location of the config file
:type config:
`str`
:param schema_path:
Location of the schema
:type schema_path:
`str`
:param schema_name:
Name of the schema file
:type schema_name:
`str`
"""
config_json = schema_json = None
with open(config) as file:
config_json = json.loads(strip_comments(file.read()))
with open(os.path.join(schema_path, schema_name)) as file:
schema_json = json.loads(file.read())
resolver = jsonschema.RefResolver(
'file://{0}/{1}'.format(schema_path, schema_name),
schema_json,
STORE,
)
try:
jsonschema.Draft4Validator(schema_json, resolver=resolver).validate(config_json)
if VERBOSE:
print(' Success: {0}'.format(config))
except jsonschema.ValidationError as error:
set_success_code(1)
print(' Failed: `{0}`'.format(config))
print(' {0}'.format(error.message))
def validate_all(config_dir, schema_dir):
"""
Validate all files in a directory.
:param config_dir:
The base directory of configuration files
:type config_dir:
`str`
:param schema_dir:
The base directory of schema files to validate with
:type schema_dir:
`str`
"""
directories = []
print('Beginning validation of `{0}`'.format(config_dir))
for file in os.listdir(config_dir):
file_path = os.path.join(config_dir, file)
if os.path.isfile(file_path):
if not file_path.endswith('.json'):
if VERBOSE:
print(' Skipping `{0}`'.format(file_path))
continue
schema_path = schema_name = None
# Use specific schema for app config files
if re.search(os.path.join(sys.argv[1], 'config'), file_path):
schema_path = os.path.join(sys.argv[2], 'config')
schema_name = 'config.schema.json'
else:
# Strip filetype and language modifier
schema_path = schema_dir
schema_name = file[:file.index('.json')]
language_pos = re.search(RE_LANGUAGE, schema_name)
if language_pos:
schema_name = schema_name[:language_pos.span()[0]] + \
schema_name[language_pos.span()[1] + 1:]
schema_name = '{0}.schema.json'.format(schema_name)
validate(file_path, schema_path, schema_name)
else:
directories.append(file)
for directory in directories:
d_path = os.path.join(config_dir, directory)
sd_path = os.path.join(schema_dir, directory)
# Recursively push assets in directories
validate_all(d_path, sd_path)
validate_all(sys.argv[1], sys.argv[2])
sys.exit(SUCCESS_CODE)
``` |
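
A compact, self-contained sketch (not part of the repository) of the `jsonschema` resolution pattern used above, with an inline base schema registered in a store and referenced through an absolute `$ref`:

```python
import jsonschema

# Hypothetical base schema registered in a shared store, keyed by its id.
base_schema = {
    "id": "https://example.com/base.schema.json",
    "type": "string",
    "minLength": 1,
}
store = {base_schema["id"]: base_schema}

# Config schema that reuses the base definition through an absolute $ref.
config_schema = {
    "type": "object",
    "properties": {"name": {"$ref": "https://example.com/base.schema.json"}},
    "required": ["name"],
}
resolver = jsonschema.RefResolver(
    base_uri="https://example.com/config.schema.json",
    referrer=config_schema,
    store=store,
)
try:
    jsonschema.Draft4Validator(config_schema, resolver=resolver).validate({"name": "campus"})
    print("Success")
except jsonschema.ValidationError as error:
    print("Failed:", error.message)
```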
{
"source": "josephroquedev/advent-of-code",
"score": 3
} |
#### File: day_13/python/day13.py
```python
from aoc import AOC
import re
## Part 1
aoc = AOC(year=2015, day=13)
data = aoc.load()
# Regular expression to get the names and happiness changes of each pair
regex_happiness = re.compile(
r"(\w+) would (gain|lose) (\d+) happiness units by sitting next to (\w+)."
)
happiness = {}
possibilities = []
# For every line in input
for line in data.lines():
info = re.match(regex_happiness, line)
# Check if the person is gaining or losing happiness
mult = 1
if info.group(2) == "lose":
mult = -1
# Add the person and their neighbor as an entry in the dict
if info.group(1) in happiness:
happiness[info.group(1)][info.group(4)] = mult * int(info.group(3))
else:
happiness[info.group(1)] = {info.group(4): mult * int(info.group(3))}
def calc_possibilities(first_person, person, visited, total_so_far):
# Finds all the possibilities from a person to neighbors which have not been tried so far
# and adds the total change in happiness together
global happiness
global possibilities
# Make a copy of the list and add a new entry
visited = visited[:]
visited.append(person)
# If all of the people are in the list, add the total change in happiness to the possibilities
if len(visited) == len(happiness):
total_so_far += (
happiness[first_person][person] + happiness[person][first_person]
)
possibilities.append(total_so_far)
# For each person the person can sit beside
for neighbor in happiness[person]:
# If they're already in the list, skip them
if neighbor in visited:
continue
# Get all the possibilities of the next person's neighbor
calc_possibilities(
first_person,
neighbor,
visited,
total_so_far + happiness[neighbor][person] + happiness[person][neighbor],
)
# Start with each person and go around the table, trying every combination
for p in happiness:
for n in happiness[p]:
calc_possibilities(p, n, [p], happiness[p][n] + happiness[n][p])
aoc.p1(max(possibilities))
## Part 2
# Regular expression to get the names and happiness changes of each pair
regex_happiness = re.compile(
r"(\w+) would (gain|lose) (\d+) happiness units by sitting next to (\w+)."
)
happiness = {}
possibilities = []
# For every line in input
for line in data.lines():
info = re.match(regex_happiness, line)
# Check if the person is gaining or losing happiness
mult = 1
if info.group(2) == "lose":
mult = -1
# Add the person and their neighbor as an entry in the dict
if info.group(1) in happiness:
happiness[info.group(1)][info.group(4)] = mult * int(info.group(3))
else:
happiness[info.group(1)] = {info.group(4): mult * int(info.group(3))}
# Adding myself to the table
happiness["Joseph"] = {}
for p in happiness:
if not p == "Joseph":
happiness[p]["Joseph"] = 0
happiness["Joseph"][p] = 0
def calc_possibilities(first_person, person, visited, total_so_far):
# Finds all the possibilities from a person to neighbors which have not been tried so far
# and adds the total change in happiness together
global happiness
global possibilities
# Make a copy of the list and add a new entry
visited = visited[:]
visited.append(person)
# If all of the people are in the list, add the total change in happiness to the possibilities
if len(visited) == len(happiness):
total_so_far += (
happiness[first_person][person] + happiness[person][first_person]
)
possibilities.append(total_so_far)
# For each person the person can sit beside
for neighbor in happiness[person]:
# If they're already in the list, skip them
if neighbor in visited:
continue
# Get all the possibilities of the next person's neighbor
calc_possibilities(
first_person,
neighbor,
visited,
total_so_far + happiness[neighbor][person] + happiness[person][neighbor],
)
# Start with each person and go around the table, trying every combination
for p in happiness:
for n in happiness[p]:
calc_possibilities(p, n, [p], happiness[p][n] + happiness[n][p])
aoc.p2(max(possibilities))
```
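
The same brute-force search can be phrased with `itertools.permutations`; the toy sketch below (not from the repository) scores every circular seating of a made-up three-person happiness table.

```python
from itertools import permutations

# Made-up table: happiness[a][b] is a's gain from sitting next to b.
happiness = {
    "Alice": {"Bob": 54, "Carol": -79},
    "Bob": {"Alice": 83, "Carol": -7},
    "Carol": {"Alice": -62, "Bob": 60},
}

def table_happiness(order):
    # Sum both directions around the circular table.
    return sum(
        happiness[left][right] + happiness[right][left]
        for left, right in zip(order, order[1:] + order[:1])
    )

best = max(table_happiness(list(p)) for p in permutations(happiness))
print(best)  # 49 for this toy table
```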
#### File: day_24/python/day24.py
```python
from aoc import AOC
from operator import mul
from functools import reduce
aoc = AOC(year=2015, day=24)
data = aoc.load()
## Part 1
total_weight = 0
weights = []
for line in data.lines():
w = int(line)
total_weight += w
weights.append(w)
# Get the weights from heaviest to lightest
weights.sort(reverse=True)
# Goal is to get 3 groups of exactly this weight
target_weight = total_weight // 3
minimum_presents = -1
quantum_entanglement = -1
potential_first_bags = []
def bag_it(ws, add_to_compilation, compilation, bag):
current_weight = sum(bag)
if current_weight == target_weight:
if add_to_compilation:
compilation.append(bag)
return True
if current_weight > target_weight:
return False
bagged = False
for i, weight in enumerate(ws):
remaining = ws[i + 1 :]
bagged = (
bag_it(remaining, add_to_compilation, compilation, bag + [weight]) or bagged
)
if not add_to_compilation and bagged:
return True
return bagged
bag_it(weights, True, potential_first_bags, [])
for first_bag in potential_first_bags:
if minimum_presents != -1 and minimum_presents < len(first_bag):
continue
bag_entanglement = reduce(mul, first_bag)
if quantum_entanglement != -1 and bag_entanglement > quantum_entanglement:
continue
b = first_bag[:]
remaining_weights = weights[:]
while b:
remaining_weights.remove(b.pop())
if bag_it(remaining_weights, False, None, []):
minimum_presents = len(first_bag)
quantum_entanglement = bag_entanglement
aoc.p1(quantum_entanglement)
## Part 2
total_weight = 0
weights = []
for line in data.lines():
w = int(line)
total_weight += w
weights.append(w)
# Get the weights from heaviest to lightest
weights.sort(reverse=True)
# Goal is to get 4 groups of exactly this weight
target_weight = total_weight // 4
minimum_presents = -1
quantum_entanglement = -1
potential_first_bags = []
def bag_it(ws, add_to_compilation, compilation, bag):
current_weight = sum(bag)
if current_weight == target_weight:
if add_to_compilation:
compilation.append(bag)
return True
if current_weight > target_weight:
return False
bagged = False
for i, weight in enumerate(ws):
remaining = ws[i + 1 :]
bagged = (
bag_it(remaining, add_to_compilation, compilation, bag + [weight]) or bagged
)
if not add_to_compilation and bagged:
return True
return bagged
bag_it(weights, True, potential_first_bags, [])
for first_bag in potential_first_bags:
if minimum_presents != -1 and minimum_presents < len(first_bag):
continue
bag_entanglement = reduce(mul, first_bag)
if quantum_entanglement != -1 and bag_entanglement > quantum_entanglement:
continue
potential_second_bags = []
b = first_bag[:]
remaining_weights = weights[:]
while b:
remaining_weights.remove(b.pop())
bag_it(remaining_weights, True, potential_second_bags, [])
for second_bag in potential_second_bags:
bag_again = second_bag[:]
remaining_weights_again = remaining_weights[:]
while bag_again:
remaining_weights_again.remove(bag_again.pop())
        if bag_it(remaining_weights_again, False, None, []):  # check the weights left after removing both bags
minimum_presents = len(first_bag)
quantum_entanglement = bag_entanglement
aoc.p2(quantum_entanglement)
```
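
The heart of `bag_it` is enumerating subsets that hit an exact weight; with `itertools.combinations` the same idea reads as below on a made-up weight list (not from the repository; the feasibility check for the remaining weights is deliberately omitted in this sketch).

```python
from functools import reduce
from itertools import combinations
from operator import mul

weights = [1, 2, 3, 4, 5, 7, 8, 9, 10, 11]  # made-up list, sums to 60
target = sum(weights) // 3                   # 20 per group

# Smallest candidate front groups hitting the target exactly; unlike part 1,
# this sketch skips verifying that the leftover weights still split evenly.
for size in range(1, len(weights) + 1):
    groups = [c for c in combinations(weights, size) if sum(c) == target]
    if groups:
        break

best_qe = min(reduce(mul, g) for g in groups)
print(size, best_qe)  # 2 99
```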
#### File: day_07/python/day07.py
```python
from aoc import AOC
aoc = AOC(year=2017, day=7)
data = aoc.load()
## Part 1
class Program:
def __init__(self, params):
components = params.split()
self.name = components[0]
self.weight = int(components[1][1:-1])
self.heldPrograms = []
if len(components) > 2:
self.heldPrograms = [
name if name[-1] != "," else name[0:-1] for name in components[3:]
]
programMap = {}
for line in data.lines():
program = Program(line)
programMap[program.name] = program
subPrograms = {
program
for programName in programMap
for program in programMap[programName].heldPrograms
}
baseProgram = "".join([x for x in programMap if x not in subPrograms])
aoc.p1(baseProgram)
## Part 2
class Program:
def __init__(self, params):
components = params.split()
self.name = components[0]
self.weight = int(components[1][1:-1])
self.held_programs = []
if len(components) > 2:
self.held_programs = [
name if name[-1] != "," else name[0:-1] for name in components[3:]
]
def total_weight(self):
weight = self.weight
sub_program_weights = [
program_map[name].total_weight() for name in self.held_programs
]
if not sub_program_weights:
return weight
sub_weight = sum(sub_program_weights)
return weight + sub_weight
program_map = {}
for line in data.lines():
program = Program(line)
program_map[program.name] = program
sub_programs = {
program
for program_name in program_map
for program in program_map[program_name].held_programs
}
base_program_name = [x for x in program_map if x not in sub_programs][0]
base_program = program_map[base_program_name]
base_weight = base_program.total_weight()
aoc.p2(base_weight)
```
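
The root-finding trick in part 1 (the only program never listed as someone's child) works for any parent-to-children map; a tiny standalone sketch (not from the repository, using the puzzle's published example names):

```python
# Find the root of a tree given as node -> list of children (toy data).
held = {
    "tknk": ["ugml", "padx", "fwft"],
    "ugml": ["gyxo", "ebii", "jptl"],
    "padx": [],
    "fwft": [],
    "gyxo": [],
    "ebii": [],
    "jptl": [],
}
children = {child for kids in held.values() for child in kids}
root = next(name for name in held if name not in children)
print(root)  # tknk
```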
#### File: day_06/python/day06.py
```python
from aoc import AOC, manhattan
aoc = AOC(year=2018, day=6)
data = aoc.load()
## Part 1
coords = [tuple(t) for t in data.numbers_by_line()]
width = max([x for x, _ in coords])
height = max([y for _, y in coords])
left = min([x for x, _ in coords])
top = min([y for _, y in coords])
region_sizes = dict(zip(coords, [0] * len(coords)))
for region_center in region_sizes:
x, y = region_center
if x == left or x == width or y == top or y == height:
region_sizes[region_center] = -1
def find_closest(coord):
min_dist = max(width, height)
current_closest = None
for center in region_sizes:
dist = manhattan(coord, center)
if dist < min_dist:
current_closest = center
min_dist = dist
elif dist == min_dist:
current_closest = None
return current_closest
for x in range(left, width + 1):
for y in range(top, height + 1):
closest = find_closest((x, y))
if closest is not None and region_sizes[closest] >= 0:
region_sizes[closest] += 1
aoc.p1(max(region_sizes.values()))
## Part 2
coords = [tuple(t) for t in data.numbers_by_line()]
width = max([x for x, _ in coords])
height = max([y for _, y in coords])
left = min([x for x, _ in coords])
top = min([y for _, y in coords])
last_region_size = -1
valid_region_size = 0
max_valid_dist = 10000
def coord_is_valid(coord):
total = 0
for other_coord in coords:
total += manhattan(coord, other_coord)
return total < max_valid_dist
for x in range(0, width + 1):
for y in range(0, height + 1):
if coord_is_valid((x, y)):
valid_region_size += 1
xx = width + 1
yy = height + 1
while last_region_size != valid_region_size:
last_region_size = valid_region_size
for y in range(yy + 1):
if coord_is_valid((xx, y)):
valid_region_size += 1
for x in range(xx + 1):
if coord_is_valid((x, yy)):
valid_region_size += 1
aoc.p2(valid_region_size)
```
#### File: day_22/python/day22.py
```python
from aoc import AOC, numbers_from
import queue
aoc = AOC(year=2018, day=22)
data = aoc.load()
PUZZLE_INPUT = numbers_from(data.contents())
depth = PUZZLE_INPUT[0]
target = (PUZZLE_INPUT[1], PUZZLE_INPUT[2])
mouth = (0, 0)
ROCKY, WET, NARROW = 0, 1, 2
geologic_indices = {
mouth: 0,
target: 0,
}
def geologic_index(region):
if region in geologic_indices:
return geologic_indices[region]
x, y = region
if y == 0:
index = x * 16807
elif x == 0:
index = y * 48271
else:
index = erosion_level((x - 1, y)) * erosion_level((x, y - 1))
geologic_indices[region] = index
return index
def erosion_level(region):
index = geologic_index(region)
return (index + depth) % 20183
def region_type(region):
erosion = erosion_level(region)
return erosion % 3
total_risk = sum(
region_type((x, y)) for x in range(target[0] + 1) for y in range(target[1] + 1)
)
aoc.p1(total_risk)
## Part 2
ROCKY, WET, NARROW = 0, 1, 2
TORCH, CLIMBING, NEITHER = 0, 1, 2
rocky_equip = [TORCH, CLIMBING]
wet_equip = [CLIMBING, NEITHER]
narrow_equip = [TORCH, NEITHER]
torch_region = [ROCKY, NARROW]
climbing_region = [ROCKY, WET]
neither = [WET, NARROW]
PUZZLE_INPUT = numbers_from(data.contents())
depth = PUZZLE_INPUT[0]
puzzle_target = (PUZZLE_INPUT[1], PUZZLE_INPUT[2])
mouth = (0, 0)
init = (TORCH, mouth)
geologic_indices = {
mouth: 0,
puzzle_target: 0,
}
def geologic_index(region):
if region in geologic_indices:
return geologic_indices[region]
x, y = region
if y == 0:
index = x * 16807
elif x == 0:
index = y * 48271
else:
index = erosion_level((x - 1, y)) * erosion_level((x, y - 1))
geologic_indices[region] = index
return index
def erosion_level(region):
index = geologic_index(region)
return (index + depth) % 20183
def region_type(region):
erosion = erosion_level(region)
return erosion % 3
def neighboring_regions(region):
x, y = region
return [
x
for x in [
(x - 1, y) if x > 0 else None,
(x, y - 1) if y > 0 else None,
(x + 1, y),
(x, y + 1),
]
if x is not None
]
def neighbors(state):
equipment, region = state
rt = region_type(region)
nrs = neighboring_regions(region)
nrts = [region_type(x) for x in nrs]
possible_equipment = (
rocky_equip if rt == ROCKY else wet_equip if rt == WET else narrow_equip
)
possible_regions = (
torch_region
if equipment == TORCH
else climbing_region
if equipment == CLIMBING
else neither
)
return list(
set(
[(e, region, 7) for e in possible_equipment if e is not equipment]
+ [
(equipment, r, 1)
for i, r in enumerate(nrs)
if nrts[i] in possible_regions
]
)
)
def bfs(initial_state, target):
q = queue.PriorityQueue()
dist = {initial_state: 0}
prev = {initial_state: None}
q.put((0, initial_state))
best_time = 99999
while not q.empty():
minutes, current_state = q.get()
ns = neighbors(current_state)
for ne in ns:
ne_equipment, ne_region, ne_minutes = ne
ne_state = (ne_equipment, ne_region)
alt = minutes + ne_minutes
if alt >= best_time:
continue
if ne_state not in dist or alt < dist[ne_state]:
dist[ne_state] = alt
prev[ne_state] = current_state
if ne_region == target:
if ne_equipment != TORCH:
alt += 7
dist[(TORCH, ne_region)] = alt
prev[(TORCH, ne_region)] = ne_state
best_time = alt
q.put((alt, ne_state))
return best_time
aoc.p2(bfs(init, puzzle_target))
```
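
The search in part 2 is essentially Dijkstra over (region, equipment) states; for comparison, a minimal generic Dijkstra sketch with `heapq` on a made-up graph (not from the repository):

```python
import heapq

def dijkstra(graph, start, goal):
    # graph: node -> list of (neighbor, cost); returns the cheapest cost start -> goal.
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:
            return cost
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, step in graph.get(node, []):
            alt = cost + step
            if alt < dist.get(neighbor, float("inf")):
                dist[neighbor] = alt
                heapq.heappush(heap, (alt, neighbor))
    return None

toy = {
    "mouth": [("a", 1), ("b", 7)],   # the 7 mimics a tool-switch cost
    "a": [("b", 1), ("target", 7)],
    "b": [("target", 1)],
}
print(dijkstra(toy, "mouth", "target"))  # 3
```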
#### File: day_04/python/day04.py
```python
from aoc import AOC, chunk
aoc = AOC(year=2019, day=4)
password_range = range(256310, 732736 + 1)
# Part 1
def has_pair(p):
return any(x == y for x, y in zip(p[:-1], p[1:]))
def never_decreases(p):
return all(int(y) >= int(x) for x, y in zip(p[:-1], p[1:]))
def is_valid(p):
return has_pair(str(p)) and never_decreases(str(p))
valid_passwords = sum(1 for p in password_range if is_valid(p))
aoc.p1(valid_passwords)
# Part 2
def has_pair(p):
p = [None, None] + list(p) + [None, None]
return any(
x == y and x != w and y != z
for w, x, y, z in zip(p[:-3], p[1:-2], p[2:-1], p[3:])
)
valid_passwords = sum(1 for p in password_range if is_valid(p))
aoc.p2(valid_passwords)
```
#### File: day_05/python/day05.py
```python
from aoc import AOC
aoc = AOC(year=2020, day=5)
data = aoc.load()
# Part 1
def bin_search(commands, lower, upper):
for idx, dir in enumerate(commands):
mid = (lower + upper) // 2
if dir == "F" or dir == "L":
upper = mid
elif dir == "B" or dir == "R":
lower = mid + 1
return [lower, upper]
def seat_id(boarding_pass):
lower, upper = bin_search(boarding_pass[:7], 0, 127)
row = lower if boarding_pass[6] == "B" else upper
lower, upper = bin_search(boarding_pass[7:], 0, 7)
col = lower if boarding_pass[-1] == "R" else upper
return row * 8 + col
seats = [seat_id(p) for p in data.lines()]
aoc.p1(max(seats))
# Part 2
seats = set(seats)
aoc.p2(next(sid for sid in seats if (sid + 1) not in seats) + 1)
```
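
Because F/B/L/R are just binary digits, the seat id can also be read directly as a 10-bit number; a one-line equivalent of `seat_id` (not from the repository), checked against the puzzle's worked example:

```python
def seat_id(boarding_pass: str) -> int:
    # F/L -> 0, B/R -> 1, then read the 10 characters as a binary number.
    return int(boarding_pass.translate(str.maketrans("FBLR", "0101")), 2)

print(seat_id("FBFBBFFRLR"))  # 357, the worked example from the puzzle statement
```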
#### File: day_12/python/day12.py
```python
from aoc import AOC, Position, Direction
aoc = AOC(year=2020, day=12)
data = aoc.load()
# Part 1
def turn_ship(dir, ins, val):
directions = [d.name for d in Direction]
directions = list(reversed(directions)) if ins == "L" else directions
rotations = directions.index(dir)
return (directions[rotations:] + directions[:rotations])[val // 90]
def move_ship(ship, offset, val):
return Position(ship.x + offset.x * val, ship.y + offset.y * val)
def command_ship(ship, dir, ins, val):
if ins in ["L", "R"]:
return ship, turn_ship(dir, ins, val)
elif ins == "F":
return move_ship(ship, Direction[dir].position, val), dir
else:
return move_ship(ship, Direction[ins].position, val), dir
ship, direction = Position(0, 0), "E"
for instruction in data.parse_lines(r"(\w)(\d+)"):
ship, direction = command_ship(ship, direction, instruction[0], int(instruction[1]))
aoc.p1(abs(ship.x) + abs(ship.y))
# Part 2
def rotate_waypoint(waypoint, ins, val):
while val >= 90:
if ins == "R":
waypoint = Position(-waypoint.y, waypoint.x)
if ins == "L":
waypoint = Position(waypoint.y, -waypoint.x)
val -= 90
return waypoint
def command_waypoint(ship, waypoint, ins, val):
if ins in ["L", "R"]:
return ship, rotate_waypoint(waypoint, ins, val)
elif ins == "F":
return move_ship(ship, waypoint, val), waypoint
else:
return ship, move_ship(waypoint, Direction[ins].position, val)
ship, waypoint = Position(0, 0), Position(10, -1)
for instruction in data.parse_lines(r"(\w)(\d+)"):
ship, waypoint = command_waypoint(
ship, waypoint, instruction[0], int(instruction[1])
)
aoc.p2(abs(ship.x) + abs(ship.y))
```
#### File: day_19/python/day19.py
```python
from aoc import AOC, Regex, Drop, String
import re
from functools import lru_cache
aoc = AOC(year=2020, day=19)
data = aoc.load()
chunks = data.chunk(
[
Regex(r"^(\d.*)$"),
Drop(1),
String(),
]
)
rules = {
int(m[0][: m[0].find(":")]): [
s[1] if '"' in s else [int(x) for x in s.split(" ")]
for s in m[0][m[0].find(" ") + 1 :].split(" | ")
]
for m in chunks[0]
}
"""
Given input:
0: 1 2
1: "a"
2: 1 3 | 3 1
3: "b"
`rules` will be:
{
0: [[1, 2]],
1: ['a'],
2: [[1, 3], [3, 1]],
3: ['b']
}
"""
# Part 1
def resolve_rule(r):
@lru_cache
def resolver(r):
if type(rules[r][0]) is str:
return rules[r][0]
else:
return (
"("
+ "|".join(["".join([resolver(y) for y in x]) for x in rules[r]])
+ ")"
)
return "^" + resolver(r) + "$"
rule_zero = resolve_rule(0)
aoc.p1(len([1 for x in chunks[1] if re.match(rule_zero, x)]))
# Part 2
rules[8] = [[42], [42, 8]]
rules[11] = [[42, 31], [42, 11, 31]]
def resolve_infinite_rule(r):
@lru_cache
def manual_resolution(r):
if r == 8:
return "(" + resolver(42) + ")" + "+"
elif r == 11:
return (
"("
+ "|".join([resolver(42) * i + resolver(31) * i for i in range(1, 5)])
+ ")"
)
return ""
@lru_cache
def resolver(r):
if type(rules[r][0]) is str:
return rules[r][0]
else:
return (
"("
+ "|".join(
[
"".join(
[
manual_resolution(y) if y in [8, 11] else resolver(y)
for y in x
]
)
for x in rules[r]
]
)
+ ")"
)
return "^" + resolver(r) + "$"
rule_zero = resolve_infinite_rule(0)
aoc.p2(len([1 for x in chunks[1] if re.match(rule_zero, x)]))
```
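
To make the rule-to-regex expansion concrete, the toy grammar from the docstring above expands as follows (a standalone sketch, not from the repository):

```python
import re
from functools import lru_cache

# Toy rules in the same parsed shape as `rules` above.
rules = {
    0: [[1, 2]],
    1: ["a"],
    2: [[1, 3], [3, 1]],
    3: ["b"],
}

@lru_cache
def resolver(r):
    if isinstance(rules[r][0], str):
        return rules[r][0]
    return "(" + "|".join("".join(resolver(y) for y in alt) for alt in rules[r]) + ")"

rule_zero = "^" + resolver(0) + "$"
print(rule_zero)                         # ^(a(ab|ba))$
print(bool(re.match(rule_zero, "aab")))  # True
print(bool(re.match(rule_zero, "abb")))  # False
```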
#### File: day_23/python/day23.py
```python
from aoc import AOC
aoc = AOC(year=2020, day=23)
data = aoc.load()
def get_cups(extended):
cups = [int(c) for c in data.contents().strip()]
if extended:
        cups.extend([x for x in range(max(cups) + 1, 1_000_001)])
cups = {
c: {"v": c, "n": cups[i + 1] if i + 1 < len(cups) else cups[0]}
for i, c in enumerate(cups)
}
return cups
def step():
global current, cups
head = cups[current]
picked_up = (
cups[head["n"]]["v"],
cups[cups[head["n"]]["n"]]["v"],
cups[cups[cups[head["n"]]["n"]]["n"]]["v"],
)
head["n"] = cups[cups[cups[cups[head["n"]]["n"]]["n"]]["n"]]["v"]
destination = head["v"] - 1 if head["v"] > lowest else highest
while destination in picked_up:
destination = destination - 1 if destination > lowest else highest
cups[picked_up[2]]["n"] = cups[destination]["n"]
cups[destination]["n"] = picked_up[0]
current = head["n"]
# Part 1
cups = get_cups(False)
lowest = min(cups.keys())
highest = max(cups.keys())
current = next(iter(cups.keys()))
for _ in range(100):
step()
current = cups[1]["n"]
labels = []
while current != 1:
labels.append(str(cups[current]["v"]))
current = cups[current]["n"]
aoc.p1("".join(labels))
# Part 2
cups = get_cups(True)
lowest = min(cups.keys())
highest = max(cups.keys())
current = next(iter(cups.keys()))
for _ in range(10_000_000):
step()
aoc.p2(cups[1]["n"] * cups[cups[1]["n"]]["n"])
```
#### File: day_16/python/day16.py
```python
from aoc import AOC, chunk, flatten
from typing import List, Tuple
from math import prod
aoc = AOC(year=2021, day=16)
data = aoc.load()
def read_binary(hexa: str) -> List[int]:
    # Convert hexadecimal to binary, padding all hex values to 4-digit binary values
return flatten([list(str(bin(int(c, 16))[2:].rjust(4, "0"))) for c in hexa])
def read_value(packet: List[int], length: int) -> Tuple[List[int], int]:
    # Read an int from the start of the packet and return the packet's remainder, and the value
    return packet[length:], int("".join(packet[:length]), 2)
def perform_operation(op: int, values: List[int]) -> int:
if op == 0:
return sum(values)
elif op == 1:
return prod(values)
elif op == 2:
return min(values)
elif op == 3:
return max(values)
elif op == 5:
return 1 if values[0] > values[1] else 0
elif op == 6:
return 1 if values[0] < values[1] else 0
elif op == 7:
return 1 if values[0] == values[1] else 0
def read_literal(packet: List[int]) -> Tuple[List[int], int]:
# Read the literal value
literal = []
for c in chunk(5, packet):
literal.append(c[1:])
if c[0] == "0":
break
packet = packet[len(literal) * 5 :]
literal = int("".join(flatten(literal)), 2)
return packet, literal
def read_subpackets_by_length(
packet: List[int],
) -> Tuple[List[int], List[int], List[int]]:
packet, total_subpacket_length = read_value(packet, 15)
subpackets = packet[:total_subpacket_length]
values, versions = [], []
while subpackets:
subpackets, value, subversions = read_packet(subpackets)
values.append(value)
versions += subversions
return packet[total_subpacket_length:], values, versions
def read_subpackets_by_count(
packet: List[int],
) -> Tuple[List[int], List[int], List[int]]:
packet, total_subpackets = read_value(packet, 11)
values, versions = [], []
for _ in range(total_subpackets):
packet, value, subversions = read_packet(packet)
values.append(value)
versions += subversions
return packet, values, versions
def read_packet(packet: List[int]):
packet, version = read_value(packet, 3)
packet, type_id = read_value(packet, 3)
if type_id == 4:
packet, literal = read_literal(packet)
return packet, literal, [version]
else:
# Parse the subpackets
packet, length_type_id = read_value(packet, 1)
if length_type_id == 0:
packet, subvalues, subversions = read_subpackets_by_length(packet)
value = perform_operation(type_id, subvalues)
return packet, value, subversions + [version]
else:
packet, subvalues, subversions = read_subpackets_by_count(packet)
value = perform_operation(type_id, subvalues)
return packet, value, subversions + [version]
packet = read_binary(data.contents())
_, output, versions = read_packet(packet)
aoc.p1(sum(versions))
aoc.p2(output)
```
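
The hex-to-bit expansion in `read_binary` is easy to sanity-check in isolation; a short standalone sketch (not from the repository) using the `D2FE28` literal packet from the puzzle's worked example:

```python
def read_binary(hexa: str):
    # Each hex digit becomes exactly four bits, preserving leading zeros.
    return [bit for c in hexa for bit in bin(int(c, 16))[2:].rjust(4, "0")]

bits = read_binary("D2FE28")
print("".join(bits))  # 110100101111111000101000
print(len(bits))      # 24 bits for 6 hex digits
```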
#### File: day_19/python/day19.py
```python
from itertools import product
from typing import Dict, Iterable, List, Optional, Set, Tuple
from aoc import AOC, manhattan, flatten
from dataclasses import dataclass
aoc = AOC(year=2021, day=19)
data = aoc.load()
@dataclass
class ScannerPosition:
absolute_offset: Tuple[int, int, int]
offset: Tuple[int, int, int]
rotation: Tuple[int, int, int]
direction: int
base: int
def __iter__(self):
return iter(
(
self.absolute_offset,
self.offset,
self.rotation,
self.direction,
self.base,
)
)
scanners: Dict[int, List[Tuple[int, int, int]]] = {}
scanner_id: int = None
for readings in data.numbers_by_line():
if len(readings) == 1:
scanner_id = readings[0]
scanners[scanner_id] = []
continue
elif len(readings) == 3:
scanners[scanner_id].append(tuple(readings))
def sub(a: Tuple[int, int, int], b: Tuple[int, int, int]) -> Tuple[int, int, int]:
return tuple(map(lambda i, j: i - j, a, b))
def add(a: Tuple[int, int, int], b: Tuple[int, int, int]) -> Tuple[int, int, int]:
return tuple(map(lambda i, j: i + j, a, b))
def mult(a: Tuple[int, int, int], b: Tuple[int, int, int]) -> Tuple[int, int, int]:
return tuple(map(lambda i, j: i * j, a, b))
def facing(beacon: Tuple[int, int, int], i: int) -> Tuple[int, int, int]:
x, y, z = beacon
if i == 0:
return (x, y, z)
if i == 1:
return (x, z, y)
if i == 2:
return (y, x, z)
if i == 3:
return (y, z, x)
if i == 4:
return (z, x, y)
if i == 5:
return (z, y, x)
rotations = list(product((1, -1), (1, -1), (1, -1)))
positions: Dict[int, ScannerPosition] = {}
positions[0] = ScannerPosition(
absolute_offset=(0, 0, 0), offset=(0, 0, 0), rotation=(1, 1, 1), direction=0, base=0
)
def find_overlap(
base: Set[Tuple[int, int, int]], beacons: List[Tuple[int, int, int]]
) -> Optional[Tuple[int, int, int]]:
for ref_idx, reference in enumerate(base):
if len(base) - ref_idx < 12:
continue
for idx, beacon in enumerate(beacons):
if len(beacons) - idx < 12:
break
offset = sub(reference, beacon)
matched_beacons = sum(1 for b in beacons if add(b, offset) in base)
if matched_beacons >= 12:
return offset
def find_position(
base_ids: Iterable[int], scanner_id: int, beacons: List[Tuple[int, int, int]]
):
for base_id in base_ids:
if scanner_id == base_id:
# Can't compare the same scanners
continue
base = scanners[base_id]
checked_beacons = set()
for rotation in rotations:
for direction in range(6):
first_beacon = mult(facing(beacons[0], direction), rotation)
if first_beacon in checked_beacons:
continue
checked_beacons.add(first_beacon)
comparable_beacons = [
mult(facing(b, direction), rotation) for b in beacons
]
offset = find_overlap(set(base), comparable_beacons)
if offset:
return base_id, offset, rotation, direction
def offset_relative_to_zero(
position: Tuple[int, int, int], base: int
) -> Tuple[int, int, int]:
relative_offset, _, relative_rotation, relative_direction, base = positions[base]
position = mult(facing(position, relative_direction), relative_rotation)
while base != 0:
_, _, relative_rotation, _, base = positions[base]
position = mult(position, relative_rotation)
return add(position, relative_offset)
def position_relative_to_zero(
position: Tuple[int, int, int], base: int
) -> Tuple[int, int, int]:
_, relative_offset, relative_rotation, relative_direction, base = positions[base]
position = add(
mult(facing(position, relative_direction), relative_rotation), relative_offset
)
while base != 0:
_, relative_offset, relative_rotation, relative_direction, base = positions[
base
]
position = add(
mult(facing(position, relative_direction), relative_rotation),
relative_offset,
)
return position
prior_comparisons: Set[Tuple[int, int]] = set()
while len(positions) < len(scanners):
for scanner_id, beacons in scanners.items():
if scanner_id in positions:
continue
# Saving some work by not checking against scanners that have been checked in the past
base_ids = [
id for id in positions.keys() if (scanner_id, id) not in prior_comparisons
]
prior_comparisons.update([(scanner_id, id) for id in base_ids])
position = find_position(base_ids, scanner_id, beacons)
if position is None:
continue
base_id, offset, rotation, direction = position
if base_id == 0:
positions[scanner_id] = ScannerPosition(
offset, offset, rotation, direction, base_id
)
else:
positions[scanner_id] = ScannerPosition(
offset_relative_to_zero(offset, base_id),
offset,
rotation,
direction,
base_id,
)
# Part 1
beacons = set(
flatten(
[position_relative_to_zero(b, id) for b in scanners[id]] for id in positions
)
)
aoc.p1(len(beacons))
# Part 2
maximum_distance = 0
for ida in positions:
for idb in positions:
position_a = position_relative_to_zero(
positions[ida].offset, positions[ida].base
)
position_b = position_relative_to_zero(
positions[idb].offset, positions[idb].base
)
maximum_distance = max(maximum_distance, manhattan(position_a, position_b))
aoc.p2(maximum_distance)
```
#### File: day_21/python/day21.py
```python
from aoc import AOC
from functools import cache
from itertools import product
aoc = AOC(year=2021, day=21)
data = aoc.load()
# Part 1
class Die:
def __init__(self):
self.last_roll = 0
self.roll_count = 0
def roll(self):
self.roll_count += 1
self.last_roll = (self.last_roll % 100) + 1
return self.last_roll
scores = [0, 0]
positions = [l[1] for l in data.numbers_by_line()]
current_player = 0
die = Die()
while max(scores) < 1000:
roll = sum([die.roll() for _ in range(3)])
positions[current_player] += roll
while positions[current_player] > 10:
positions[current_player] -= 10
scores[current_player] += positions[current_player]
current_player = (current_player + 1) % 2
aoc.p1(min(scores) * die.roll_count)
# Part 2
@cache
def possibilities(scores, positions, current_player):
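    # Memoized Dirac-dice search: every combination of three 1-3 rolls branches the
    # game state, and functools.cache collapses repeated (scores, positions, player)
    # states so the recursion stays tractable.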
if scores[0] >= 21:
return (1, 0)
if scores[1] >= 21:
return (0, 1)
p1_wins = p2_wins = 0
for r1, r2, r3 in product(range(1, 4), range(1, 4), range(1, 4)):
new_position = (positions[current_player] + sum([r1, r2, r3])) % 10
next_positions = (
(new_position, positions[1])
if current_player == 0
else (positions[0], new_position)
)
new_score = scores[current_player] + new_position + 1
next_score = (
(new_score, scores[1]) if current_player == 0 else (scores[0], new_score)
)
p1, p2 = possibilities(next_score, next_positions, (current_player + 1) % 2)
p1_wins += p1
p2_wins += p2
return (p1_wins, p2_wins)
positions = tuple([l[1] - 1 for l in data.numbers_by_line()])
scores = (0, 0)
aoc.p2(max(possibilities(scores, positions, 0)))
```
#### File: lib/commands/fetch.py
```python
from argparse import ArgumentParser
from lib.session import Session
from os import path
import requests
class Fetch:
@classmethod
def build_parser(cls, parser: ArgumentParser):
parser.description = "Fetch the input for the set challenge"
def run(self, session: Session):
session.validate(require_token=True)
# Already cached input
if path.exists(session.input_file):
print(f"input already exists for {session.challenge}, not fetching")
return
# Fetch the input with the session
cookies = {"session": session.token}
r = requests.get(session.challenge.input_url, cookies=cookies)
# Cache to the file
with open(session.input_file, "w") as f:
f.write(r.text)
print(f"Fetched and cached input for ${session.challenge}")
```
#### File: lib/commands/submit.py
```python
from argparse import ArgumentParser
# from lib.commands.run import Run
from lib.session import Session
# import requests
class Submit:
@classmethod
def build_parser(cls, parser: ArgumentParser):
parser.description = "Submit the set challenge"
parser.add_argument(
"PART",
help="which part of the day's challenge to submit",
type=int,
choices=[1, 2],
)
def run(self, session: Session):
print("submission is currently a WIP")
# session.validate(require_token=True)
# part = session.command_args.PART
# # Run command will return the last line output when the Submit command is running
# output = Run().run(session)
# for line in output.splitlines():
# if line.startswith(f"p{part}="):
# submission = line[3:]
# if not submission:
# print(f"no submission for part {part} in {output}")
# return
# # Submit the answer with the current "level"
# cookies = {"session": session.token}
# r = requests.post(
# session.challenge.submit_url,
# cookies=cookies,
# json={
# 'level': str(part),
# 'answer': submission
# }
# )
# print(r)
# print(r.text)
# print(r.json())
```
#### File: aoc/util/aoc_wrapper.py
```python
from datetime import datetime
from typing import Optional
from util.data.data import Data
import sys
class AOC:
token: Optional[str] = None
is_submitting: bool = False
input_file: Optional[str] = None
log_file: Optional[str] = None
contains_test_input: bool = False
force_skip_test: bool = False
@classmethod
def on_test_input_set(cls):
AOC.contains_test_input = True
def __init__(self, year: int, day: int):
self.year = year
self.day = day
self.p1_solution = None
self.p2_solution = None
def load(self):
contents = None
if AOC.input_file:
with open(AOC.input_file) as f:
contents = f.read()
if not contents:
raise Exception(f"Failed to load input data ({AOC.input_file})")
return Data(
contents,
force_skip_test=AOC.force_skip_test,
on_test_input_set=AOC.on_test_input_set,
)
def d(self, s):
print(s)
sys.stdout.flush()
def log(self, s):
if AOC.log_file:
with open(AOC.log_file, "a") as f:
f.write(f"{datetime.now()}: {s}\n")
def p1(self, solution):
self.p1_solution = solution
if AOC.is_submitting:
self.d(f"p1={solution}")
else:
self.d(solution)
self.log(solution)
def p2(self, solution):
self.p2_solution = solution
if AOC.is_submitting:
self.d(f"p2={solution}")
else:
self.d(solution)
self.log(solution)
```
#### File: aoc/util/direction.py
```python
from enum import Enum
from util.position import Position
class Direction(Enum):
N = (0, -1)
E = (1, 0)
S = (0, 1)
W = (-1, 0)
@property
def position(self):
return Position(self.value[0], self.value[1])
```
#### File: util/intcode/state.py
```python
from typing import DefaultDict, List, Tuple
from aoc import AOC
from util.functions import digits
from util.intcode.mode import Mode
class State:
def __init__(
self,
program: DefaultDict[int, int],
pointer: int = 0,
mode: Mode = Mode.POSITION,
):
self.program = program
self.pointer = pointer
self.mode = mode
self.inputs = []
self.outputs = []
@property
def output(self):
return self.program[0]
@property
def next_input(self):
return self.inputs.pop(0)
def add_input(self, input):
self.inputs.append(input)
def add_output(self, output):
self.outputs.append(output)
@property
def instruction(self) -> Tuple[int, List[int]]:
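        # Decode the current instruction: the last two digits form the opcode and any
        # remaining digits, read right to left, give the per-parameter modes.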
instruction = digits(self.program[self.pointer])
if len(instruction) == 1:
opcode = instruction[0]
mode = []
else:
opcode = instruction[-2] * 10 + instruction[-1]
mode = [] if len(instruction) == 2 else instruction[-3::-1]
return opcode, mode
```
#### File: lib/language/__init__.py
```python
from lib.language.language_helper import LanguageHelper as _LanguageHelper
from lib.language.language_id import LanguageID as _LanguageID
from lib.language.python_helper import PythonHelper as _PythonHelper
from lib.session import Session
LanguageHelper = _LanguageHelper
LanguageID = _LanguageID
PythonHelper = _PythonHelper
def language_helper(session: Session) -> LanguageHelper:
if session.language == LanguageID.PYTHON:
return PythonHelper(session)
```
#### File: lib/language/language_helper.py
```python
from abc import ABC, abstractmethod, abstractproperty
from lib.language.language_id import LanguageID
from lib.session import Session
from lib.util.filesystem import cd
from os import listdir, path
from typing import List, Optional, Tuple
import subprocess
class LanguageHelper(ABC):
id: LanguageID
session: Session
def __init__(self, id: LanguageID, session: Session):
self.id = id
self.session = session
@abstractproperty
def file_extension(self) -> str:
raise NotImplementedError()
@abstractmethod
def solve_challenge(self) -> Optional[subprocess.Popen[str]]:
raise NotImplementedError()
def run(self) -> Tuple[int, str]:
return self._run()
def _run(self, within_execution_directory: bool = False) -> Tuple[int, str]:
if not within_execution_directory and self.execution_directory:
with cd(self.execution_directory):
return self._run(within_execution_directory=True)
print("---")
p = self.solve_challenge()
if p is None:
return 1, "failed to run"
output = []
while True:
stream = p.stdout.readline()
if stream == "" and p.poll() is not None:
break
if stream:
output.append(stream.rstrip())
print(stream.rstrip())
print("---")
return p.returncode, "\n".join(output)
@property
def execution_directory(self) -> Optional[str]:
return None
@property
def language_support_directory(self) -> str:
return path.join(self.session.base_directory, "lib", "helpers", self.id.value)
@property
def starter_file(self) -> str:
return path.join(self.language_support_directory, f"starter{self.file_extension}")
@property
def supporting_files_directory(self):
return path.join(self.language_support_directory, "supporting_files")
@property
def helper_library(self):
return path.join(self.language_support_directory, "aoc")
@property
def helper_files(self) -> List[str]:
return list(
map(lambda x: path.join(self.helper_library, x), listdir(self.helper_library))
)
@property
def root_file(self) -> str:
return path.join(
self.session.working_directory,
f"day{self.session.challenge.day_with_padding}{self.file_extension}",
)
def open_pipe(self, command: str) -> subprocess.Popen[str]:
return subprocess.Popen(
command, stderr=subprocess.STDOUT, stdout=subprocess.PIPE, text=True
)
``` |
{
"source": "JosephRPalmer/greenlight",
"score": 3
} |
#### File: greenlight/signalman/signalman.py
```python
__author__ = "<NAME>"
__version__ = "0.1.17"
__license__ = "MIT"
import argparse
import requests
import sys
import time
from interruptingcow import timeout
from retrying import retry
class Timeout(Exception):
pass
class ResponseError(Exception):
def __init__(self, message="Request response did not match required response"):
self.message = message
super().__init__(self.message)
def timedprint(message):
print("{} -- {}".format(time.strftime("%H:%M:%S", time.localtime()), message))
def urlbuilder(url, port, ssl):
scheme = "http"
colon = ":"
    if ssl or port == 443:
        scheme = "https"
        if not port:
            port = "443"
    elif not port:
        port = "80"
if "://" in str(url):
schema_array = url.split("://", 1)
url = schema_array[1]
timedprint(
"Detected '{}://'. Removing protocol scheme and rebuilding URL.".format(schema_array[0]))
if "/" in url:
fqdn = url.split("/", 1)[0]
path = url.split("/", 1)[1]
else:
fqdn = url
path = ""
if ":" in fqdn:
colon = ""
port = ""
timedprint("Ignoring --port directive as port found in URL")
urlbuilder = "{}://{}{}{}/{}".format(scheme, fqdn, colon, port, path)
timedprint("Using built url {}".format(urlbuilder))
return urlbuilder
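# caller() below is retried with exponential backoff (capped at 10 s between attempts);
# a non-matching response raises ResponseError to trigger another attempt, and the
# interruptingcow timeout set up in main() is what eventually stops the polling.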
@retry(wait_exponential_multiplier=1000, wait_exponential_max=10000)
def caller(url, return_type, return_value, headers, debug):
resp = requests.get(url, headers=headers)
if debug:
timedprint("Sent: {}".format(resp.request.headers))
timedprint("Recieved: Headers:{} Body:{}".format(
resp.headers, str(resp.content)))
if return_type == "code":
if int(resp.status_code) != int(return_value):
timedprint("Response code was {}, looking for {}".format(
                resp.status_code, return_value))
raise ResponseError()
else:
timedprint("Response code conditions met, found {}".format(
resp.status_code))
elif return_type == "text":
if return_value not in resp.text:
timedprint("Response text did not contain {}".format(text))
raise ResponseError()
else:
timedprint(
"Response text conditions met, found {} in response text".format(text))
elif return_type == "json":
json_key = return_value.split(":", 1)[0]
json_value = return_value.split(":", 1)[1]
if json_key in resp.json():
if str(resp.json()[json_key]) == str(json_value):
timedprint("Response JSON contains matching key and value. Found '{}:{}'".format(
json_key, resp.json()[json_key]))
else:
timedprint("Response JSON contains matching key but wrong value. Value found is {}, looking for {}.".format(
str(resp.json()[json_key]), str(json_value)))
raise ResponseError()
else:
timedprint("Response key/value pair not matched. Retrying...")
raise ResponseError()
def header_format(headers):
    # argparse passes a list via nargs='+'; also accept a single space-separated string.
    if isinstance(headers, str):
        headerlist = headers.split(" ")
    else:
        headerlist = headers
outputheaders = {}
for header in headerlist:
if header.count(":") < 1:
print("Header with detail {} was skipped due to incompatible formatting".format(
header))
continue
templist = header.split(":")
outputheaders[templist[0]] = templist[1]
timedprint("Adding header '{}:{}'".format(templist[0], templist[1]))
return outputheaders
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--timeout", type=int,
help='Set timeout for signalman to run in minutes', required=True)
parser.add_argument("--endpoint", type=str,
help='Endpoint to poll', required=True)
parser.add_argument("--port", type=int, help='Port to poll')
parser.add_argument(
"--r-type", type=str, help='Set a return type for signalman to look for, choose from text, code and json',
choices=["json", "code", "text"], required=True)
parser.add_argument("--r-value", type=str,
help='Set a return value for signalman to look for', required=True)
parser.add_argument("--headers", type=str, nargs='+',
help='Set request headers to use, for example to request Content-Type: application/json use content-type:application/json')
parser.add_argument('--ssl', action='store_true',
help="Use to poll with https enabled")
parser.add_argument('--debug', action='store_true',
help="Use to enable debugging")
# Specify output of "--version"
parser.add_argument(
"--version",
action="version",
version="%(prog)s (version {version})".format(version=__version__))
args = parser.parse_args()
headers = {}
if args.headers:
headers = header_format(args.headers)
try:
with timeout(args.timeout*60, exception=TimeoutError):
caller(urlbuilder(args.endpoint, args.port, args.ssl), args.r_type,
args.r_value, headers, args.debug)
except TimeoutError:
print("signalman timed out")
sys.exit(1)
if __name__ == '__main__':
""" This is executed when run from the command line """
main()
``` |
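A minimal usage sketch of the functions above, reproducing the same poll-until-match loop outside the CLI. The `signalman` import path is an assumption; the function signatures and the retry/timeout pattern are taken from the file itself.
```python
# Illustrative only: poll an endpoint until it returns HTTP 200, giving up after 2 minutes.
from interruptingcow import timeout

from signalman import caller, urlbuilder  # assumed import path

try:
    with timeout(2 * 60, exception=TimeoutError):
        url = urlbuilder("example.com/health", 8080, ssl=False)
        caller(url, "code", 200, headers={}, debug=False)
        print("endpoint is up")
except TimeoutError:
    print("endpoint never returned the expected response")
```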
{
"source": "JosephRS409/cse210-project",
"score": 3
} |
#### File: finding_dallin/game/dallin.py
```python
import arcade
import random as r
from game import constants
class Dallin(arcade.Sprite):
def __init__(self):
"""Sets up the Dallin spritee in a random location on the map"""
super().__init__(constants.DALLIN)
# self.center_y = 3100
# self.center_x = 100
self.center_x = r.randint(100, 6300)
self.center_y = r.randint(100, 6300)
```
#### File: finding_dallin/game/player.py
```python
import arcade
from game import constants
class Player(arcade.Sprite):
def __init__(self):
"""Sets up the player sprite"""
super().__init__(constants.PLAYER_BAT, .4)
self.center_x = 100
self.center_y = 6300
self.change_x = 0
self.change_y = 0
self.keys = []
def update(self):
""" Move the player """
# Move player.
self.center_x += self.change_x
self.center_y += self.change_y
``` |
{
"source": "josephrubin/order-from-chaos",
"score": 3
} |
#### File: josephrubin/order-from-chaos/plane_v1.py
```python
import json
import math
from os import sys
import kdtree
from matplotlib import pyplot as plt
import matplotlib
import matplotlib.patches
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from drop import Drop
from util import *
import cProfile
__author__ = "<NAME>, <NAME>"
def bounce_probability(bounce_count):
"""Return the probability that a drop will bounce given that it
has bounced `bounce_count` times already."""
return 1 / (math.pow(2, bounce_count + 1))
def _main():
"""Run a 2D simulation of life."""
# Validate the cmd args.
if len(sys.argv) > 3 or len(sys.argv) < 2 or '--help' in sys.argv:
print('usage: {} <json_config_file_name> [output_file_name]'.format(sys.argv[0]), file=sys.stderr)
exit(1)
if len(sys.argv) < 3:
output_file_name = '/dev/null'
else:
output_file_name = sys.argv[2]
# Load the simulation settings from the JSON config file.
json_config_file_name = sys.argv[1]
with open(json_config_file_name, 'r') as json_config_file:
settings = json.loads(json_config_file.read())['settings']
public_entry(settings, output_file_name)
def public_entry(settings, output_file_name):
# Define the initial state.
state = {'points': {}, 'geo': kdtree.create(dimensions=2), 'steps_completed': 0, 'settings': settings}
visualize_init(settings)
# Run the simulation.
state = simulate_step(state, settings['DROP_COUNT'])
geo = state['geo']
points = state['points']
# Collect all of the stems for later analysis.
none_count = 0
stems = []
for node in kdtree.level_order(geo):
# Bug fix for empty root.
if node.data is None:
none_count += 1
assert none_count <= 1
break
assert node is not None
point_id = node.data.ident
point = points[point_id]
stems.append({'coord': point['coord'], 'height': point['height']})
#print('Number of stems: ', len(stems), file=sys.stderr)
#print('Number of points: ', len(points), file=sys.stderr)
# Output the relevant parts of the state.
_state = {'settings': settings, 'stems': stems}
with open(output_file_name, 'w') as output_file:
output_file.write(json.dumps(_state))
def visualize_init(settings):
matplotlib.use('TkAgg')
plt.clf()
fig = plt.gcf()
ax = plt.gca()
fig.set_size_inches(12, 12)
ax.axis('equal', adjustable='datalim')
ax.set(xlim=(-1.5, 1.5), ylim=(-1.5, 1.5))
if settings['PLANE_SHAPE'] == DISK:
ax.add_artist(plt.Circle((0,0), radius=1, fill=False))
elif settings['PLANE_SHAPE'] == SQUARE:
border = matplotlib.patches.Rectangle((-1, -1), 2, 2, fill=False)
ax.add_patch(border)
def visualize_random(settings, count=100):
fig = plt.gcf()
fig.clf()
ax = plt.gca()
ax.cla()
fig.set_size_inches(12, 12)
ax.axis('equal', adjustable='datalim')
ax.set(xlim=(-1.5, 1.5), ylim=(-1.5, 1.5))
if settings['PLANE_SHAPE'] == DISK:
ax.add_artist(plt.Circle((0,0), radius=1, fill=False))
elif settings['PLANE_SHAPE'] == SQUARE:
border = matplotlib.patches.Rectangle((-1, -1), 2, 2, fill=False)
ax.add_patch(border)
for _ in range(count):
coord = random_coord(settings['PLANE_SHAPE'])
drop_artist = plt.Circle((coord[0], coord[1]), radius=settings['DROP_RADIUS'], fill=True, color=(0, 0, 0, 1))
ax.add_artist(drop_artist)
if settings['SHOW_BOUNCE_RADIUS']:
bounce_artist_outer = plt.Circle((coord[0], coord[1]),
radius=settings['BOUNCE_DISTANCE'] + settings['DROP_RADIUS'],
fill=False, color=(0, 0, 0, 1))
bounce_artist_inner = plt.Circle((coord[0], coord[1]),
radius=settings['BOUNCE_DISTANCE'] - settings['DROP_RADIUS'],
fill=False, color=(0, 0, 0, 1))
ax.add_artist(bounce_artist_outer)
ax.add_artist(bounce_artist_inner)
plt.draw()
plt.show()
last_fast_draw_artists = []
def visualize_state(state):
global last_fast_draw_artists
for artist in last_fast_draw_artists:
artist.remove()
last_fast_draw_artists = []
points = state['points']
geo = state['geo']
settings = state['settings']
fig = plt.gcf()
ax = plt.gca()
if settings['PLANE_SHAPE'] == DISK:
border = plt.Circle((0,0), radius=1, fill=False)
ax.add_artist(border)
last_fast_draw_artists.append(border)
elif settings['PLANE_SHAPE'] == SQUARE:
border = matplotlib.patches.Rectangle((-1, -1), 2, 2, fill=False)
ax.add_patch(border)
last_fast_draw_artists.append(border)
height_max = None
for point_id in points:
point = points[point_id]
height = point['height']
if height_max is None or height > height_max:
height_max = height
none_count = 0
for node in kdtree.level_order(geo):
# Bug fix for empty root.
if node.data is None:
none_count += 1
assert none_count <= 1
break
assert node is not None
point_id = node.data.ident
point = points[point_id]
coord = point['coord']
height = point['height']
if height < 0.5 * height_max:
continue
color = (height / (height_max + 1))
drop_artist = plt.Circle((coord[0], coord[1]), radius=settings['DROP_RADIUS'], fill=True, color=(color, 0, 0, 1))
ax.add_artist(drop_artist)
last_fast_draw_artists.append(drop_artist)
if settings['SHOW_BOUNCE_RADIUS']:
bounce_artist_outer = plt.Circle((coord[0], coord[1]),
radius=settings['BOUNCE_DISTANCE'] + settings['DROP_RADIUS'],
fill=False, color=(color, 0, 0, 1))
bounce_artist_inner = plt.Circle((coord[0], coord[1]),
radius=settings['BOUNCE_DISTANCE'] - settings['DROP_RADIUS'],
fill=False, color=(color, 0, 0, 1))
ax.add_artist(bounce_artist_outer)
ax.add_artist(bounce_artist_inner)
plt.draw()
#plt.savefig("gallery1/{}.png".format(state['steps_completed']))
plt.pause(settings['INTERACTIVE_DELAY'])
def visualize_drop(coords, settings):
"""Draw a single drop on the screen."""
fig = plt.gcf()
ax = plt.gca()
drop_artist = plt.Circle((coords[0], coords[1]), radius=settings['DROP_RADIUS'], fill=True, color=(1, 0, 0, 0.4))
ax.add_artist(drop_artist)
plt.draw()
plt.pause(settings['INTERACTIVE_DELAY'])
return drop_artist
def visualize_drop_bounce(coords, settings):
fig = plt.gcf()
ax = plt.gca()
drop_artist = plt.Circle((coords[0], coords[1]), radius=settings['DROP_RADIUS'], fill=True, color=(0, 1, 0, 0.4))
ax.add_artist(drop_artist)
plt.draw()
plt.pause(settings['INTERACTIVE_DELAY'])
return drop_artist
def visualize_drop_active(coords, settings):
"""Draw a single drop on the screen."""
fig = plt.gcf()
ax = plt.gca()
drop_artist = plt.Circle((coords[0], coords[1]), radius=settings['DROP_RADIUS'], fill=True, color=(0, 0, 1, 0.4))
ax.add_artist(drop_artist)
plt.draw()
plt.pause(settings['INTERACTIVE_DELAY'])
return drop_artist
def unvisualize_drop(artist):
artist.remove()
plt.draw()
def simulate_step(state, step_count=1):
"""Run `step_count` steps of the simulation on `state`, returning
the final state."""
for _ in range(step_count):
state = _simulate_single_step(state)
return state
def _simulate_single_step(state):
"""Run a single step of the simulation on `state`, returning the next state."""
# Recover the important parts of our state.
geo = state['geo']
points = state['points']
steps_completed = state['steps_completed']
settings = state['settings']
PLANE_SHAPE = settings['PLANE_SHAPE']
DROP_RADIUS = settings['DROP_RADIUS']
STEM_RADIUS = settings['STEM_RADIUS']
STEM_STICK_PROBABILITY = settings['STEM_STICK_PROBABILITY']
GROUND_STICK_PROBABILITY = settings['GROUND_STICK_PROBABILITY']
OLD_GENOME_BIAS = settings['OLD_GENOME_BIAS']
MELT_INTERVAL = settings['MELT_INTERVAL']
INTERACTIVE_MODE = settings['INTERACTIVE_MODE']
PERIODIC_BOUNDARY = settings['PERIODIC_BOUNDARY']
BOUNCE_DISTANCE = settings['BOUNCE_DISTANCE']
# We're constantly removing and adding to our kdtree, so rebalance it every
# so often for efficiency.
if len(points) != 0 and steps_completed % 600 == 0 and geo.data is not None:
geo = geo.rebalance()
# In interactive fast mode we draw the state every so often.
if settings['INTERACTIVE_FAST_MODE'] and len(points) != 0:
if steps_completed % settings['INTERACTIVE_FAST_INTERVAL'] == 0:
visualize_state(state)
# Create a new drop in the plane.
drop_coord = random_coord(PLANE_SHAPE)
drop_artist = None
if settings['INTERACTIVE_MODE']:
drop_artist = visualize_drop_active(drop_coord, settings)
# The drop can keep bouncing as long as it intersects a stem
# and hasn't stuck yet. The bounce probability is determined
# by bounce_probability(bounce_count).
drop_stuck = False
bounce_count = 0
while not drop_stuck:
# Check intersection with any stem.
drop_stem_intersect = False
# Search the tree for any drop intersections.
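        # The search radius below is the squared sum of the drop and stem radii.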
intersections = geo.search_nn_dist(drop_coord, (DROP_RADIUS + STEM_RADIUS) * (DROP_RADIUS + STEM_RADIUS))
if settings['PERIODIC_BOUNDARY']:
phantom_drops = [
(drop_coord[0] + 2, drop_coord[1]),
(drop_coord[0] - 2, drop_coord[1]),
(drop_coord[0], drop_coord[1] + 2),
(drop_coord[0], drop_coord[1] - 2),
(drop_coord[0] + 2, drop_coord[1] + 2),
(drop_coord[0] + 2, drop_coord[1] - 2),
(drop_coord[0] - 2, drop_coord[1] + 2),
(drop_coord[0] - 2, drop_coord[1] - 2)
]
for phantom_drop in phantom_drops:
intersections.extend(geo.search_nn_dist(phantom_drop, (DROP_RADIUS + STEM_RADIUS) * (DROP_RADIUS + STEM_RADIUS)))
if intersections:
# The drop has intersected with some other drops that are already there.
# Find the drop that is the highest up.
highest_point_height = None
highest_point = None
highest_point_id = None
for node in intersections:
point_id = node.ident
point = points[point_id]
assert node == point['coord']
intersection_height = point['height']
if highest_point_height is None or intersection_height > highest_point_height:
highest_point_height = intersection_height
highest_point = point
highest_point_id = point_id
drop_stem_intersect = True
# Check if the drop bounces.
if drop_stem_intersect and random_real() <= bounce_probability(bounce_count):
assert highest_point is not None
if settings['INTERACTIVE_MODE']:
unvisualize_drop(drop_artist)
# The drop bounces.
bounce_count += 1
bounce_angle = random_theta()
bounce_offset = polar_to_cartesian(BOUNCE_DISTANCE, bounce_angle)
# For periodic boundary conditions we roll over from the edges of the boundary.
if PERIODIC_BOUNDARY:
assert PLANE_SHAPE == SQUARE
drop_coord = [highest_point['coord'][0] + bounce_offset[0], highest_point['coord'][1] + bounce_offset[1]]
if drop_coord[0] > 1:
drop_coord[0] -= 2
if drop_coord[1] > 1:
drop_coord[1] -= 2
if drop_coord[0] < -1:
drop_coord[0] += 2
if drop_coord[1] < -1:
drop_coord[1] += 2
drop_coord = tuple(drop_coord)
else:
drop_coord = (highest_point['coord'][0] + bounce_offset[0], highest_point['coord'][1] + bounce_offset[1])
if INTERACTIVE_MODE:
                drop_artist = visualize_drop_bounce(drop_coord, settings)
else:
# The drop does not bounce.
drop_stuck = True
if drop_stem_intersect:
# The drop has landed on top of an existing stem. Replace
# the top of the stem with the new drop.
if random_real() <= STEM_STICK_PROBABILITY:
if INTERACTIVE_MODE:
unvisualize_drop(drop_artist)
                drop_artist = visualize_drop(drop_coord, settings)
unvisualize_drop(highest_point['artist'])
geo = geo.remove(highest_point['coord'])
assert drop_coord is not None
new_coord = ((OLD_GENOME_BIAS * highest_point['coord'][0] + drop_coord[0]) / (1 + OLD_GENOME_BIAS),
(OLD_GENOME_BIAS * highest_point['coord'][1] + drop_coord[1]) / (1 + OLD_GENOME_BIAS))
geo.add(Drop(new_coord, ident=steps_completed))
new_height = highest_point['height'] + 1 + settings['BOUNCE_HEIGHT_ADDITION']
points[steps_completed] = {'height': new_height, 'artist': drop_artist, 'coord': new_coord}
else:
# The drop has landed outside of any stem. Simply add
# it as a new stem.
if random_real() <= GROUND_STICK_PROBABILITY:
# Create a new stem for this drop.
if INTERACTIVE_MODE:
unvisualize_drop(drop_artist)
                drop_artist = visualize_drop(drop_coord, settings)
assert drop_coord is not None
geo.add(Drop(drop_coord, ident=steps_completed))
points[steps_completed] = {'height': 0, 'artist': drop_artist, 'coord': drop_coord}
elif INTERACTIVE_MODE:
unvisualize_drop(drop_artist)
new_state = {'points': points, 'geo': geo, 'steps_completed': steps_completed, 'settings': settings}
if steps_completed % MELT_INTERVAL == 0:
new_state = melt(new_state)
return {'points': new_state['points'], 'geo': new_state['geo'], 'steps_completed': new_state['steps_completed'] + 1, 'settings': settings}
def melt(state):
steps_completed = state['steps_completed']
geo = state['geo']
points = state['points']
settings = state['settings']
MELT_INTERVAL = settings['MELT_INTERVAL']
MELT_PROBABILITY = settings['MELT_PROBABILITY']
INTERACTIVE_MODE = settings['INTERACTIVE_MODE']
# Melt stems from the bottom.
none_count = 0
if len(points) != 0:
for node in kdtree.level_order(geo):
# Bug fix for empty root.
if node.data is None:
none_count += 1
assert none_count <= 1
break
point_id = node.data.ident
point = points[point_id]
            # Calculate the proper melt amount probabilistically.
binom_melt_amount = np.random.binomial(MELT_INTERVAL, MELT_PROBABILITY)
# There shouldn't be any stems that should have already been removed.
assert points[point_id]['height'] >= 0
# Melt the stem and remove it if its height has decreased past zero.
points[point_id]['height'] -= binom_melt_amount
if points[point_id]['height'] < 0:
geo = geo.remove(point['coord'])
if INTERACTIVE_MODE:
unvisualize_drop(point['artist'])
return {'points': points, 'geo': geo, 'steps_completed': steps_completed, 'settings': settings}
if __name__ == '__main__':
#cProfile.run('_main()')
_main()
``` |
{
"source": "JosephRynkiewicz/CIFAR10",
"score": 3
} |
#### File: JosephRynkiewicz/CIFAR10/aggregate.py
```python
from __future__ import print_function
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import os
import argparse
import numpy as np
from utils import progress_bar
from collections import Counter
from efficientnet_pytorch import EfficientNet
parser = argparse.ArgumentParser(description='PyTorch CIFAR 10 aggregation')
parser.add_argument('--ns', default=10, type=int, help='number of samples')
args = parser.parse_args()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print("device : ",device)
# Data
print('==> Preparing data..')
transform_test = transforms.Compose([
transforms.Resize(200),
transforms.ToTensor(),
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
# Model
print('==> Building model..')
net = EfficientNet.from_pretrained('efficientnet-b4', num_classes=10)
net = net.to(device)
batch_size=10
nclasses=10
nsplit=args.ns
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=1)
namesave='./checkpoint/ckpt'
def extractoutputs(loader,namesave='./checkpoint/ckpt',batch_size=1,nsplit=10,nclasses=10,nbobs=10000):
assert os.path.isdir('checkpoint'), 'Error: no checkpoint directory found!'
outputsnet = torch.zeros(nsplit,nbobs,nclasses)
predictsnet = torch.zeros(nsplit,nbobs,dtype=torch.int)
for i in range(0,nsplit):
print('split ',i)
correct = 0
namesaveb=namesave+str(i)+'.t7'
checkpoint = torch.load(namesaveb)
net.load_state_dict(checkpoint['net'])
net.eval()
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(loader):
inputs = inputs.to(device)
targets = targets.to(device)
output = F.softmax(net(inputs),dim=1)
_, predicted = output.max(1)
correct += predicted.eq(targets).sum().item()
output = output.to('cpu')
indice=batch_idx*batch_size
outputsnet[i,indice:(indice+output.size(0))]=output
predictsnet[i,indice:(indice+output.size(0))]=predicted
print("Test accuracy : ", 100.0*correct/nbobs)
return outputsnet, predictsnet
def find_majority(predictsnet):
majvote=torch.zeros(predictsnet.size(1),dtype=torch.int)
for i in range(predictsnet.size(1)):
votes = Counter(predictsnet[:,i].tolist())
majvote[i]=votes.most_common(1)[0][0]
return majvote
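# Three ensembling schemes over the per-split softmax outputs: soft voting (mean of
# probabilities), "harmonic" soft voting (mean of log-probabilities, i.e. a geometric
# mean up to normalisation), and hard majority voting over the predicted labels.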
outputsnet, predictsnet = extractoutputs(testloader,namesave,batch_size,nsplit,nclasses)
moytest = torch.mean(outputsnet,dim=0)
moyharmotest = torch.mean(outputsnet.log(),dim=0)
majtest = find_majority(predictsnet)
def testaggregatemoy(testloader,moytensor):
correct = 0
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(testloader):
_, predicted = moytensor[batch_idx:(batch_idx+1)].max(1)
correct += predicted.eq(targets).sum().item()
return correct/moytensor.size(0)
def testaggregatemaj(testloader,majtensor):
correct = 0
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(testloader):
correct += majtensor[batch_idx].eq(targets).sum().item()
return correct/majtensor.size(0)
testloader = torch.utils.data.DataLoader(testset, batch_size=1, shuffle=False, num_workers=0)
acc_soft = testaggregatemoy(testloader, moytest)
acc_soft_harmonic = testaggregatemoy(testloader, moyharmotest)
acc_majority = testaggregatemaj(testloader, majtest)
print('==> Accuracy of soft aggregation: ', acc_soft)
print('==> Accuracy of soft harmonic aggregation: ', acc_soft_harmonic)
print('==> Accuracy of hard aggregation: ', acc_majority)
``` |
{
"source": "josephsalimin/flask-router-wrapper",
"score": 2
} |
#### File: flask-router-wrapper/tests/conftest.py
```python
from flask import Flask
import pytest
@pytest.fixture
def app():
yield Flask(__name__)
```
#### File: flask-router-wrapper/tests/handler.py
```python
from flask import g, jsonify
from flask_router_wrapper import Middleware
class SetValueMiddleware(Middleware):
def _exec(self, next_function, *args, **kwargs):
g.val = 0
return next_function(*args, **kwargs)
class IncrementValueMiddleware(Middleware):
def _exec(self, next_function, *args, **kwargs):
g.val += 1
return next_function(*args, **kwargs)
class SetValueCallable:
def __call__(self, next_function, *args, **kwargs):
g.val = 0
return next_function(*args, **kwargs)
class IncrementValueCallable:
def __call__(self, next_function, *args, **kwargs):
g.val += 1
return next_function(*args, **kwargs)
class NotCallableMiddleware:
pass
def set_value_middleware(next_function, *args, **kwargs):
g.val = 0
return next_function(*args, **kwargs)
def increment_value_middleware(next_function, *args, **kwargs):
g.val += 1
return next_function(*args, **kwargs)
def index_handler():
return jsonify({"message": "hello"})
def value_json_handler():
return jsonify({"value": g.val})
``` |
{
"source": "josephsalmon/celer",
"score": 3
} |
#### File: celer/utils/testing.py
```python
import numpy as np
from scipy import sparse
def build_dataset(n_samples=50, n_features=200, n_targets=1, sparse_X=False):
"""Build samples and observation for linear regression problem."""
random_state = np.random.RandomState(0)
if n_targets > 1:
w = random_state.randn(n_features, n_targets)
else:
w = random_state.randn(n_features)
if sparse_X:
X = sparse.random(n_samples, n_features, density=0.5, format='csc',
random_state=random_state)
else:
X = np.asfortranarray(random_state.randn(n_samples, n_features))
y = X.dot(w)
return X, y
``` |
{
"source": "JosephSalomon/GN-Core",
"score": 3
} |
#### File: src/helper/fileManager.py
```python
from os import sys, path
import os
sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
from helper import constants
def create_dir(dir_path):
try:
os.makedirs(dir_path)
print("Directory ", dir_path, " Created ")
except FileExistsError:
pass
```
#### File: src/helper/gpa.py
```python
class GPA():
_term_gpa = 0
_cumulative_gpa = 0
def __init__(self, term_gpa=0, cumulative_gpa=0):
self._term_gpa = term_gpa
self._cumulative_gpa = cumulative_gpa
def get_cumulative_gpa(self):
return self._cumulative_gpa
def get_term_gpa(self):
return self._term_gpa
@staticmethod
def get_letter_grade(gpa):
return {
'term_gpa': GPA.convert_float(gpa.get_term_gpa()),
'cumulative_gpa': GPA.convert_float(gpa.get_cumulative_gpa())
}
@staticmethod
def get_number_grade(gpa):
return {
'term_gpa':
GPA.convert_letter(GPA.convert_float(gpa.get_term_gpa())),
'cumulative_gpa':
GPA.convert_letter(GPA.convert_float(gpa.get_cumulative_gpa()))
}
@staticmethod
def convert_float(f):
if 0 <= f < 1:
return 'F'
elif 1 <= f < 1.3:
return 'D'
elif 1.3 <= f < 1.7:
return 'D+'
elif 1.7 <= f < 2:
return 'C-'
elif 2 <= f < 2.3:
return 'C'
elif 2.3 <= f < 2.7:
return 'C+'
elif 2.7 <= f < 3:
return 'B-'
elif 3 <= f < 3.3:
return 'B'
elif 3.3 <= f < 3.7:
return 'B+'
elif 3.7 <= f < 4:
return 'A-'
else:
return 'A'
@staticmethod
def convert_letter(l):
scale = {
'A+': '97 - 100',
'A': '93 - 96',
'A-': '90 - 92',
'B+': '87 - 89',
'B': '83 - 86',
'B-': '80 - 82',
'C+': '77 - 79',
'C': '73 - 76',
'C-': '70 - 72',
'D+': '67 - 69',
'D': '65 - 66',
'F': '0'
}
return scale[l]
```
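A small illustration of the conversion helpers above. The `helper.gpa` import path is an assumption based on the file location; the expected outputs follow directly from the scales encoded in `convert_float` and `convert_letter`.
```python
from helper.gpa import GPA  # assumed import path

gpa = GPA(term_gpa=3.4, cumulative_gpa=2.9)
print(GPA.get_letter_grade(gpa))  # {'term_gpa': 'B+', 'cumulative_gpa': 'B-'}
print(GPA.get_number_grade(gpa))  # {'term_gpa': '87 - 89', 'cumulative_gpa': '80 - 82'}
```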
#### File: src/helper/helper.py
```python
def print_to_screen(text):
print("RENDER::" + text)
```
#### File: src/helper/refresh_result.py
```python
class RefreshResult():
def __init__(self, classes, gpa):
self.classes = classes
self.gpa = gpa
``` |
{
"source": "JosephSamela/hyperv-quick-create-gallery-generator",
"score": 3
} |
#### File: hyperv-quick-create-gallery-generator/example/generate_new_gallery.py
```python
import os
import time
import hashlib
import sys
import json
base_url = "http://dc-sitf-ail/gallery/"
def generate_images():
images = []
for root, dirs, files in os.walk('.'):
for dir in dirs:
name = dir
locale = "en-US"
publisher = "Draper"
lastUpdated = time.strftime("%Y-%m-%dT%TZ", time.localtime())
for root, dirs, files in os.walk('./'+dir):
for file in files:
uri = base_url+dir+'/'+file
hash = "sha256:" + sha256_checksum(dir+'/'+file)
f = {"uri":uri, "hash":hash}
if "disk" in file:
disk = f
elif "logo" in file:
logo = f
elif "symbol" in file:
symbol = f
elif "thumbnail" in file:
thumbnail = f
elif "description" in file:
with open(dir+'/description.txt', 'r') as myfile:
description = myfile.readlines()
elif "version" in file:
with open(dir+'/version.txt', 'r') as myfile:
version=myfile.read().replace('\n', '')
elif "details" in file:
with open(dir+'/details.json') as myfile:
details = json.load(myfile)
image = {
"name":name,
"version":version,
"locale":locale,
"publisher":publisher,
"lastUpdated":lastUpdated,
"description":description,
"disk":disk,
"logo":logo,
"symbol":symbol,
"thumbnail":thumbnail,
"details":details
}
images.append(image)
return images
def sha256_checksum(filename, block_size=65536):
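    # Read and hash the file in 64 KiB chunks so large disk images are never loaded
    # into memory all at once.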
print("HASHING - "+filename)
sha256 = hashlib.sha256()
with open(filename, 'rb') as f:
for block in iter(lambda: f.read(block_size), b''):
sha256.update(block)
return sha256.hexdigest()
if __name__ == "__main__":
print("Starting...")
images = generate_images()
gallery = {"images":images}
with open('gallery.json', 'w') as outfile:
json.dump(gallery, outfile)
print("Success!")
``` |
{
"source": "JosephSamela/pychop3d",
"score": 3
} |
#### File: pychop3d/test/test_configuration.py
```python
import trimesh
import numpy as np
import tempfile
import yaml
import os
from pychop3d.configuration import Configuration
from pychop3d import bsp_tree
from pychop3d import bsp_node
def test_modify_configuration():
"""Verify that modifying the configuration modifies the behavior of the other modules. Create a tree with the
default part and the default configuration, verify that it will fit in the printer volume, then modify the
printer volume in the config and verify that a newly created tree will have a different n_parts objective
"""
config = Configuration.config
print()
mesh = trimesh.load(config.mesh, validate=True)
# create bsp tree
tree = bsp_tree.BSPTree(mesh)
print(f"n parts: {tree.nodes[0].n_parts}")
assert tree.nodes[0].n_parts == 1
config.printer_extents = config.printer_extents / 2
print("modified config")
print(f"original tree n parts: {tree.nodes[0].n_parts}")
assert tree.nodes[0].n_parts == 1
new_tree = bsp_tree.BSPTree(mesh)
print(f"new tree n parts: {new_tree.nodes[0].n_parts}")
assert new_tree.nodes[0].n_parts == 2
config.restore_defaults()
def test_load():
"""load a non-default parameter from a yaml file and verify that the config object matches
"""
config = Configuration.config
with tempfile.TemporaryDirectory() as tempdir:
params = {
'printer_extents': [1, 2, 3],
'test_key': 'test_value'
}
yaml_path = os.path.join(tempdir, "test.yml")
with open(yaml_path, 'w') as f:
yaml.safe_dump(params, f)
new_config = Configuration(yaml_path)
assert isinstance(new_config.printer_extents, np.ndarray)
assert np.all(new_config.printer_extents == np.array([1, 2, 3]))
assert new_config.test_key == 'test_value'
assert not hasattr(config, 'test_key')
def test_save():
"""modify the config, save it, verify that the modified values are saved and can be loaded
"""
config = Configuration.config
config.connector_diameter = 100
with tempfile.TemporaryDirectory() as tempdir:
# change directory
config.directory = tempdir
# save using a file name
path = config.save("test_config.yml")
# load the config back
new_config = Configuration(path)
assert new_config.connector_diameter == 100
with tempfile.TemporaryDirectory() as tempdir:
# change config directory
config.directory = tempdir
# save using cached name, should be 'test_config.yml'
path = config.save()
assert path == os.path.join(tempdir, 'test_config.yml')
config.restore_defaults()
def test_functions():
"""modify the config and verify that various functions correctly use the updated version
"""
config = Configuration.config
mesh = trimesh.load(config.mesh, validate=True)
print()
# BSPNode instantiation (n_parts)
n_parts_1 = bsp_node.BSPNode(mesh).n_parts
config.printer_extents = np.array([20, 20, 20])
n_parts_2 = bsp_node.BSPNode(mesh).n_parts
assert n_parts_1 != n_parts_2
config.restore_defaults()
    # get_planes (plane_spacing)
node = bsp_node.BSPNode(mesh)
planes_1 = bsp_tree.get_planes(node.part, np.array([0, 1, 0]))
config.plane_spacing /= 2
planes_2 = bsp_tree.get_planes(node.part, np.array([0, 1, 0]))
assert len(planes_2) > len(planes_1)
config.restore_defaults()
# uniform normals
normals1 = config.normals.copy()
config.n_theta = 10
normals2 = config.normals.copy()
config.n_phi = 10
normals3 = config.normals.copy()
assert len(normals1) < len(normals2) < len(normals3)
config.restore_defaults()
# etc, etc ...
``` |
{
"source": "josephsarz/Instagram-Automation-Bot",
"score": 2
} |
#### File: josephsarz/Instagram-Automation-Bot/byPassSetup.py
```python
from selenium import webdriver
import time
from selenium.webdriver.common.by import By
def byPassSetup(browser):
follow = browser.find_element_by_xpath('//*[@id="react-root"]/section/main/section/div/div/div[1]/div/div/div[2]/div[3]/button')
follow.click()
time.sleep(8)
print('following....')
```
#### File: josephsarz/Instagram-Automation-Bot/createUser.py
```python
from selenium import webdriver
from random import randint
import time
from selenium.webdriver.common.by import By
import accountInfoGenerator as account
import json
def createAccount():
browser= webdriver.Chrome("/usr/local/bin/chromedriver")
browser.get("http://www.instagram.com")
time.sleep(10) #time.sleep count can be changed depending on the Internet speed.
#Generate User details
email = account.generatingEmail()
username = account.username()
fullname = account.generatingName()
password = '<PASSWORD>'+username
#Save to File in json Format
data = {}
data['users'] = []
data['users'].append({
'email': email,
'username': username,
'fullname': fullname,
'password': password
})
with open('data.txt', 'w') as outfile:
json.dump(data, outfile)
#Fill the email value
email_field = browser.find_element_by_name('emailOrPhone')
email_field.send_keys(email)
print('email : '+email)
#Fill the fullname value
fullname_field = browser.find_element_by_name('fullName')
fullname_field.send_keys(fullname)
print('account : '+fullname)
#Fill username value
username_field = browser.find_element_by_name('username')
username_field.send_keys(username)
print('username : '+username)
#Fill password value
password_field = browser.find_element_by_name('password')
password_field.send_keys(password) #You can determine another password here.
print('password : '+password)
submit = browser.find_element_by_xpath('//*[@id="react-root"]/section/main/article/div[2]/div[1]/div/form/div[7]/div[1]/button')
submit.click()
time.sleep(8)
print('Registering....')
```
#### File: josephsarz/Instagram-Automation-Bot/loginUser.py
```python
from selenium import webdriver
from random import randint
import time
from selenium.webdriver.common.by import By
import accountInfoGenerator as account
def loginUser():
browser= webdriver.Chrome("/usr/local/bin/chromedriver")
browser.get("https://www.instagram.com/accounts/login/")
time.sleep(10) #time.sleep count can be changed depending on the Internet speed.
username = 'your username'
password = '<PASSWORD>'
#Fill the email value
email_field = browser.find_element_by_name('username')
email_field.send_keys(username)
print('email : '+username)
#Fill password value
password_field = browser.find_element_by_name('password')
password_field.send_keys(password) #You can determine another password here.
print('password : '+password)
submit = browser.find_element_by_xpath('//*[@id="react-root"]/section/main/div/article/div/div[1]/div/form/div[4]/button')
submit.click()
time.sleep(8)
print('Login....')
``` |
{
"source": "josephsavage/hive-sbi-api",
"score": 2
} |
#### File: hive_sbi_api/v1/views.py
```python
import logging
from django_filters import rest_framework as filters
from rest_framework.mixins import (ListModelMixin,
RetrieveModelMixin)
from rest_framework.viewsets import GenericViewSet
from rest_framework.filters import OrderingFilter
from rest_framework.generics import get_object_or_404
from hive_sbi_api.core.models import (Member,
Transaction)
from .serializers import (MemberSerializer,
TransactionSerializer)
from .filters import TransactionFilter
logger = logging.getLogger('v1')
class MemberViewSet(ListModelMixin,
RetrieveModelMixin,
GenericViewSet):
lookup_value_regex = '[^/]+'
lookup_field = 'account'
queryset = Member.objects.all()
serializer_class = MemberSerializer
filter_backends = [OrderingFilter]
ordering_fields = [
'total_shares',
'shares',
'bonus_shares',
'estimate_rewarded',
'pending_balance',
'next_upvote_estimate',
'total_rshares',]
def get_object(self):
"""
Returns the object the view is displaying.
You may want to override this if you need to provide non-standard
queryset lookups. Eg if objects are referenced using multiple
keyword arguments in the url conf.
"""
queryset = self.filter_queryset(self.get_queryset())
# Perform the lookup filtering.
lookup_url_kwarg = self.lookup_url_kwarg or self.lookup_field
assert lookup_url_kwarg in self.kwargs, (
'Expected view %s to be called with a URL keyword argument '
'named "%s". Fix your URL conf, or set the `.lookup_field` '
'attribute on the view correctly.' %
(self.__class__.__name__, lookup_url_kwarg)
)
filter_kwargs = {self.lookup_field: self.kwargs[lookup_url_kwarg].lower()}
obj = get_object_or_404(queryset, **filter_kwargs)
# May raise a permission denied
self.check_object_permissions(self.request, obj)
return obj
class TransactionViewSet(ListModelMixin,
RetrieveModelMixin,
GenericViewSet):
queryset = Transaction.objects.all()
serializer_class = TransactionSerializer
lookup_field = 'index'
filter_backends = [
filters.DjangoFilterBackend,
]
filterset_class = TransactionFilter
``` |
{
"source": "josephsavage/hive-sbi-webapp",
"score": 2
} |
#### File: hive_sbi_webapp/webapp/views.py
```python
import json
import logging
import requests
from django.conf import settings
from django.views.generic import TemplateView
from .viewmixins import BaseMixinView
from .forms import UseInfoForm
logger = logging.getLogger('webapp')
class HomeView(BaseMixinView, TemplateView):
template_name = "webapp/home.html"
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['active_home'] = True
return context
class UserInfoForm(BaseMixinView, TemplateView):
template_name = "webapp/userinfo_form.html"
def get_user(self, **kwargs):
return self.request.GET.get('user')
def get_userinfo_form(self, **kwargs):
user = self.get_user()
initial = {}
if user:
initial = {'user': user}
return UseInfoForm(initial=initial)
def get_userinfo(self, **kwargs):
user = self.get_user()
userinfo = None
if not user:
return userinfo
userinfo = {
"status_code": None,
"success": False,
"data": None,
"error": None,
}
try:
response = requests.get(
"{}/getUserInfo?user={}".format(settings.SBI_API_URL, user),
)
userinfo["status_code"] = response.status_code
if response.status_code == 200:
content = json.loads(response.content.decode("utf-8"))
userinfo["success"] = content["success"]
if userinfo["success"]:
userinfo["data"] = content["data"]
else:
userinfo["error"] = content["error"]
except requests.exceptions.ConnectionError:
userinfo["error"] = "Connection Error"
return userinfo
def get_userinfo_hive(self, **kwargs):
user = self.get_user()
userinfo_hive = None
if not user:
return userinfo_hive
userinfo_hive = {
"status_code": None,
"success": False,
"data": None,
"error": None,
}
#try:
response = requests.get(
"{}/users/{}/".format(settings.SBI_API_URL_V1, user),
)
userinfo_hive["status_code"] = response.status_code
if response.status_code == 200:
content = json.loads(response.content.decode("utf-8"))
userinfo_hive["success"] = content["success"]
if userinfo_hive["success"]:
userinfo_hive["data"] = content["data"]
else:
userinfo_hive["error"] = content["error"]
#except requests.exceptions.ConnectionError:
# userinfo_hive["error"] = "Connection Error"
return userinfo_hive
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['active_userinfo'] = True
context['user'] = self.get_user()
context['userinfo_form'] = self.get_userinfo_form()
context['userinfo'] = self.get_userinfo()
context['userinfo_hive'] = self.get_userinfo_hive()
return context
class RichListView(BaseMixinView, TemplateView):
template_name = "webapp/rich_list.html"
def get_richlist(self, **kwargs):
LIMIT = 200
ordering = self.request.GET.get("ordering", "")
try:
offset = int(self.request.GET.get("offset", 0))
except ValueError:
offset = 0
richlist = {
"status_code": None,
"previous": None,
"next": None,
"active_page_number": None,
"prev_page_number": None,
"next_page_number": None,
}
try:
params = ""
if ordering:
params = "?ordering={}".format(ordering)
if offset:
if params:
params = "{}&offset={}".format(params, offset)
else:
params = "?offset={}".format(offset)
response = requests.get(
"{}/v1/members/{}".format(settings.SBI_API_URL_V1, params),
)
richlist["status_code"] = response.status_code
if response.status_code == 200:
content = json.loads(response.content.decode("utf-8"))
if content["previous"]:
richlist["previous"] = content["previous"].split("?")[1]
if content["next"]:
richlist["next"] = content["next"].split("?")[1]
active_page_number = offset / LIMIT + 1
richlist["active_page_number"] = int(active_page_number)
richlist["prev_page_number"] = int(active_page_number - 1)
if offset + 200 < content["count"]:
richlist["next_page_number"] = int(active_page_number + 1)
richlist["results"] = content["results"]
except requests.exceptions.ConnectionError:
richlist["content"] = "Connection Error"
return richlist
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['active_richlist'] = True
context['richlist'] = self.get_richlist()
context['total_shares_ascending_active'] = False
context['total_shares_descending_active'] = False
context['shares_ascending_active'] = False
context['shares_descending_active'] = False
context['bonus_shares_ascending_active'] = False
context['bonus_shares_descending_active'] = False
context['pending_balance_ascending_active'] = False
context['pending_balance_descending_active'] = False
context['next_upvote_estimate_ascending_active'] = False
context['next_upvote_estimate_descending_active'] = False
context['estimate_rewarded_ascending_active'] = False
context['estimate_rewarded_descending_active'] = False
ordering = self.request.GET.get("ordering", "")
if ordering == "total_shares":
context['total_shares_ascending_active'] = True
if ordering == "-total_shares":
context['total_shares_descending_active'] = True
if ordering == "shares":
context['shares_ascending_active'] = True
if ordering == "-shares":
context['shares_descending_active'] = True
if ordering == "bonus_shares":
context['bonus_shares_ascending_active'] = True
if ordering == "-bonus_shares":
context['bonus_shares_descending_active'] = True
if ordering == "pending_balance":
context['pending_balance_ascending_active'] = True
if ordering == "-pending_balance":
context['pending_balance_descending_active'] = True
if ordering == "next_upvote_estimate":
context['next_upvote_estimate_ascending_active'] = True
if ordering == "-next_upvote_estimate":
context['next_upvote_estimate_descending_active'] = True
if ordering == "estimate_rewarded":
context['estimate_rewarded_ascending_active'] = True
if ordering == "-estimate_rewarded":
context['estimate_rewarded_descending_active'] = True
return context
``` |
{
"source": "Joseph-Schafer/Twitter2SQL",
"score": 3
} |
#### File: twitter2sql/analysis/coding.py
```python
import os
import csv
import random
from pprint import pprint
from collections import Counter, defaultdict, namedtuple
from itertools import combinations
import xlsxwriter
import pandas as pd
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix
from tqdm import tqdm
def analyze_codes(
input_data,
coders,
output_directory,
suffix='',
multi_select_codes=None,
lead_code=None,
code_hierarchy=None,
exclude_codes=None,
exclude_values=None,
arb_cols=None,
code_groups=None,
max_raters=2,
aggregate_stats=True,
confusion_matrices=True,
pairwise_stats=True,
arb=True,
individual_stats=True,
discussion=True,
verbose=True,
exclude=None):
""" Input xlsx is expected to have the following format:
1. A 'Codebook' sheet with vertical cols of titled codes.
2. Coding sheets per coder titled 'Tweets_{coder}'
3. Any number of unrelated sheets.
Each coding sheet is expected to have, in order:
1. Any number of data cols
2. TRUE/FALSE coder cols = to # of coders.
3. Code cols
"""
if exclude_codes is None:
exclude_codes = ['Notes']
if exclude_values is None:
exclude_values = ['Unclear']
code_level_stats = os.path.join(output_directory, f'Code_Level_Stats{suffix}.csv')
confusion_matrix_stats = os.path.join(output_directory, f'Confusion_Matrices{suffix}.csv')
confusion_matrix_image_dir = os.path.join(output_directory, f'Confusion_Matrices{suffix}')
individual_csv = os.path.join(output_directory, f'Individual_Statistics{suffix}.csv')
arb_csv = os.path.join(output_directory, f'Arbitration{suffix}.csv')
output_xlsx = os.path.join(output_directory, f'Coding_Analysis{suffix}.xlsx')
discussion_xlsx = os.path.join(output_directory, f'Discussion{suffix}.xlsx')
if not os.path.exists(output_directory):
os.mkdir(output_directory)
if not os.path.exists(confusion_matrix_image_dir):
os.mkdir(confusion_matrix_image_dir)
file_type = os.path.splitext(input_data)[1]
""" 1. Load data
"""
# Maybe support for non-Sheets coding later?
if file_type not in ['.xlsx']:
raise NotImplementedError
if file_type == '.xlsx':
raw_xlsx = pd.ExcelFile(input_data)
data_dict = {}
for coder in tqdm(coders):
for name in raw_xlsx.sheet_names:
if coder in name:
df = raw_xlsx.parse(name, keep_default_na=False, na_values=[''])
df = df.dropna(subset=['Tweet'])
data_dict[coder] = df
code_dict = {}
codebook = raw_xlsx.parse('Codebook', keep_default_na=False, na_values=[''])
for col in list(codebook):
code_list = list(codebook[col].dropna().astype(str))
code_list = [x for x in code_list if x not in exclude_values]
code_dict[col] = code_list
""" 2. Extract codes and identify coders. Not great code, but extracts codes according to the
order specified above.
"""
codes_only = []
past_coders = False
for colname in list(data_dict[coders[0]]):
if colname in coders:
past_coders = True
if past_coders:
if colname not in coders and colname != exclude:
codes_only += [colname]
analysis_codes = [x for x in codes_only if x not in exclude_codes]
if lead_code is None:
lead_code = analysis_codes[0]
print(lead_code)
if max_raters is None:
max_raters = len(coders)
""" 3. Write out data.
"""
with open(code_level_stats, 'w') as codefile, \
open(confusion_matrix_stats, 'w') as confusionfile, \
open(arb_csv, 'w') as arbfile, \
open(individual_csv, 'w') as indivfile, \
xlsxwriter.Workbook(output_xlsx) as workbook, \
xlsxwriter.Workbook(discussion_xlsx) as dworkbook:
# First figure out filewriting (ugh)...
stats_sheet, confusion_sheet, arb_sheets, individual_sheet, \
writer, confusion_writer, arb_writer, individual_writer, \
code_header, individual_header = format_filewriters(
workbook, arb_cols, coders + ['Discussion'],
codes_only, data_dict, codefile, arbfile, confusionfile,
indivfile)
confusion_rownum, individual_rownum, stats_rownum = 0, 1, 1
arb_rownums = {coder: 1 for coder in coders}
# Calculate individual ratios
if individual_stats:
process_individual_stats(
individual_writer, individual_sheet, individual_rownum,
data_dict, lead_code, analysis_codes, multi_select_codes,
individual_header)
# Get scores for the aggregate data first..
if aggregate_stats:
coder1, coder2 = '0', '1'
combined = process_aggregate_codesheet(data_dict, analysis_codes, lead_code)
output_row = {'Combined Pair': 'All'}
stats_rownum, confusion_rownum = write_data(
output_row, combined, coder1, coder2, code_dict,
code_hierarchy, multi_select_codes, code_groups,
lead_code, exclude_values, confusion_matrices,
confusion_rownum, confusion_writer, confusion_sheet,
analysis_codes, stats_rownum, stats_sheet, code_header,
writer)
# Calculate scores for each coding pair.
if pairwise_stats:
for idx, (coder1, coder2) in enumerate(combinations(coders, 2)):
combined = process_pair_codesheet(
data_dict, coders, analysis_codes, coder1, coder2, lead_code)
output_row = {
'Combined Pair': f'{coder1}_{coder2}',
'Pair 1': coder1, 'Pair 2': coder2}
stats_rownum, confusion_rownum = write_data(
output_row, combined, coder1, coder2, code_dict,
code_hierarchy, multi_select_codes, code_groups,
lead_code, exclude_values, False,
confusion_rownum, confusion_writer, confusion_sheet,
analysis_codes, stats_rownum, stats_sheet, code_header,
writer)
# Then write out rows for arb
if arb:
write_arb(
arb_writer, data_dict, lead_code, arb_cols,
codes_only, arb_sheets, arb_rownums)
if discussion:
write_discussion(dworkbook, data_dict, lead_code, codes_only, coders, analysis_codes)
return output_xlsx
def write_data(
output_row, combined, coder1, coder2, code_dict,
code_hierarchy, multi_select_codes, code_groups,
lead_code, exclude_values, confusion_matrices,
confusion_rownum, confusion_writer, confusion_sheet,
analysis_codes, stats_rownum, stats_sheet, code_header,
writer):
for code in tqdm(analysis_codes):
output_row['Code'] = code
output_row = calculate_all_scores(
output_row, combined, coder1, coder2, code_dict,
code, code_hierarchy, multi_select_codes, code_groups,
lead_code, exclude_values)
if confusion_matrices:
confusion_rownum = write_confusion(
combined, confusion_writer, confusion_sheet, confusion_rownum,
code, coder1, coder2, code_dict, multi_select_codes)
stats_rownum = write_xlsx_row(
stats_sheet, output_row,
stats_rownum, code_header)
writer.writerow(output_row)
return stats_rownum, confusion_rownum
def write_xlsx_row(sheet, row, rownum, code_header=None):
if isinstance(row, dict):
row = [row[x] if x in row else '' for x in code_header]
sheet.write_row(rownum, 0, row)
rownum += 1
return rownum
def format_filewriters(
workbook, arb_cols,
coders, codes_only, data_dict,
codefile, arbfile, confusionfile, indivfile):
# XLSX Sheet Creation
# Agreement Statistics
code_header = ['Combined Pair', 'Pair 1', 'Pair 2', 'Code']
extra_measures = ['', 'Conditional_', 'Partial_', 'Grouped_']
for extra in extra_measures:
code_header += [f'{extra}N', f'{extra}Agreement', f'{extra}Cohen_Kappa']
stats_sheet = workbook.add_worksheet('Agreement_Statistics')
stats_sheet.write_row(0, 0, code_header)
# Individual Statistics
individual_header = ['Coder', 'Code', 'Value', 'N', 'Percent', 'Multi_N', 'Multi_Percent']
individual_sheet = workbook.add_worksheet('Individual_Statistics')
individual_sheet.write_row(0, 0, individual_header)
# Confusion Matrices
confusion_sheet = workbook.add_worksheet('Confusion_Statistics')
# Arbitration
if arb_cols is None:
arb_header = list(data_dict[coders[0]])
else:
arb_header = [x for x in list(data_dict[coders[0]]) if x in arb_cols]
arb_header = arb_header + ['Coder', 'Arbitrate?'] + codes_only
arb_sheets = {}
for coder in coders:
arb_sheets[coder] = workbook.add_worksheet(f'Arbitration_{coder}')
arb_sheets[coder].write_row(0, 0, arb_header)
# Discussion
# CSV Headers and Writers
arb_writer = csv.writer(arbfile, delimiter=',')
arb_writer.writerow(arb_header)
writer = csv.DictWriter(codefile, code_header, delimiter=',')
writer.writeheader()
individual_writer = csv.DictWriter(indivfile, individual_header, delimiter=',')
individual_writer.writeheader()
confusion_writer = csv.writer(confusionfile, delimiter=',')
return stats_sheet, confusion_sheet, arb_sheets, individual_sheet, \
writer, confusion_writer, arb_writer, individual_writer, code_header, \
individual_header
def process_aggregate_codesheet(data_dict, analysis_codes, lead_code):
# Get all codes from coders
all_dfs = []
for coder, data in data_dict.items():
df = data[analysis_codes]
all_dfs += [df]
# Fill in that dataframe with the matching codes from coders
coding_array = []
sample_df = df
for index in tqdm(range(sample_df.shape[0])):
row_vals = []
for df in all_dfs:
row = df.iloc[index]
if not pd.isnull(row[lead_code]):
row_vals += row.values.tolist()
row_vals += (len(all_dfs) * len(analysis_codes) - len(row_vals)) * [np.nan]
coding_array += [row_vals]
colnames = []
for i in range(len(all_dfs)):
colnames += [f'{code}_{i}' for code in analysis_codes]
combined_df = pd.DataFrame(coding_array, columns=colnames)
combined_df = remove_bad_rows(combined_df, '0', '1', lead_code)
return combined_df
def process_pair_codesheet(data_dict, coders, analysis_codes, coder1, coder2, lead_code):
dfs = []
for coder, pair_coder in [[coder1, coder2], [coder2, coder1]]:
df = data_dict[coder][coders + analysis_codes]
df = df.astype({c: 'bool' for c in coders})
df = df[(df[coder]) & (df[pair_coder])][analysis_codes]
df = df.rename(columns={x: f'{x}_{coder}' for x in list(df)})
dfs += [df]
combined_df = pd.concat(dfs, axis=1)
combined_df = remove_bad_rows(combined_df, coder1, coder2, lead_code)
return combined_df
def remove_bad_rows(df, coder1, coder2, lead_code):
""" Modify this to work for more than 2 coders with df.query
"""
# Remove null rows that have not yet been coded, according to the "lead code" (i.e. first code).
df = df[~(df[f'{lead_code}_{coder1}'].isna()) & ~(df[f'{lead_code}_{coder2}'].isna())]
return df
def calculate_all_scores(
output_row, df, coder1, coder2, code_dict, code, code_hierarchy,
multi_select_codes, code_groups, lead_code, exclude_values):
col1 = f'{code}_{coder1}'
col2 = f'{code}_{coder2}'
# Remove 'Unclear' and other excluded rows.
df = df[~(df[col1].isin(exclude_values)) & ~(df[col2].isin(exclude_values))]
# Basic Agreement
output_row = calculate_agreement_scores(
output_row, df,
col1, col2, code_dict, code, prefix='')
# Partial Agreement
if code in multi_select_codes:
if code in code_hierarchy:
# Not generalizable to other data, obviously
condition_code = code_hierarchy[code]
h_data = df.copy()
for key, item in condition_code.items():
h_data = h_data[(h_data[f'{key}_{coder1}'] != item) & (h_data[f'{key}_{coder2}'] != item)]
output_row = calculate_agreement_scores(
output_row, h_data, col1,
col2, code_dict, code, prefix='Partial_', partial=True)
else:
output_row = calculate_agreement_scores(
output_row, df, col1, col2,
code_dict, code, prefix='Partial_', partial=True)
# Conditional Agreement
if code in code_hierarchy:
output_row = calculate_agreement_scores(
output_row, h_data,
col1, col2, code_dict, code, prefix='Conditional_')
else:
output_row['Conditional_Agreement'] = None
output_row['Conditional_Cohen_Kappa'] = None
# Grouped Agreement
if code in code_groups:
group_dict = code_groups[code]
for key, item in group_dict.items():
df = df.replace(key, item)
grouped_categories = list(set(group_dict.values()))
output_categories = grouped_categories + [x for x in code_dict[code] if x not in group_dict]
output_dict = {code: list(output_categories)}
output_row = calculate_agreement_scores(
output_row, df,
col1, col2, output_dict, code, prefix='Grouped_')
else:
output_row['Grouped_Agreement'] = None
output_row['Grouped_Cohen_Kappa'] = None
return output_row
def calculate_agreement_scores(
output_row, df, col1, col2,
code_dict, code, prefix='', partial=False):
# Total Rows
count = df.shape[0]
output_row[f'{prefix}N'] = count
if partial:
# This is so messed up. Means to an end.
data1 = []
data2 = []
same_count = 0
for _, row in df.iterrows():
vals1, vals2 = str.split(str(row[col1]), '|'), str.split(str(row[col2]), '|')
if not set(vals1).isdisjoint(vals2):
vals1, vals2 = vals1[0], vals1[0]
same_count += 1
else:
vals1, vals2 = vals1[0], vals2[0]
data1 += [vals1]
data2 += [vals2]
else:
data1 = df[col1].astype(str)
data2 = df[col2].astype(str)
same_count = df[df[col1] == df[col2]].shape[0]
# Agreement
agreement = same_count / count
output_row[f'{prefix}Agreement'] = agreement
# Cohen's Kappa
c_kappa = cohen_kappa_score(data1, data2, labels=code_dict[code])
if np.isnan(c_kappa):
output_row[f'{prefix}Cohen_Kappa'] = 'N/A'
else:
output_row[f'{prefix}Cohen_Kappa'] = c_kappa
# Fleiss' Kappa
return output_row
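# Illustrative (hypothetical) numbers: with 10 coded rows where the two coders
# pick the same label 8 times, count = 10 and same_count = 8, so
# output_row['Agreement'] comes out as 0.8; Cohen's kappa additionally discounts
# the agreement expected by chance given the label distribution.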
def write_confusion(
df, writer, confusion_sheet, confusion_rownum,
code, coder1, coder2, code_dict, multi_select_codes):
""" Writes confusion matrices into something tractable in a .csv file.
Enable for 2+ raters
"""
col1 = list(df[f'{code}_{coder1}'].astype(str))
col2 = list(df[f'{code}_{coder2}'].astype(str))
# # Remove 'Unclear' and other excluded rows.
# df = df[~(df[col1].isin(exclude_values)) & ~(df[col2].isin(exclude_values))]
# Deal with multi-select in the confusion matrices. Not sure if redundant
if code in multi_select_codes:
new_cols = [[], []]
for idx, value1 in enumerate(col1):
value2 = col2[idx]
if '|' in value1 or '|' in value2:
value1 = set(str.split(value1, '|'))
value2 = set(str.split(value2, '|'))
diff_vals = (value1 - value2).union(value2 - value1)
match_vals = value1.intersection(value2)
for match in match_vals:
new_cols[0] += [match]
new_cols[1] += [match]
for diff in (value1 - value2):
for val in value2:
new_cols[0] += [diff]
new_cols[1] += [val]
for diff in (value2 - value1):
for val in value1:
new_cols[0] += [val]
new_cols[1] += [diff]
col1 += new_cols[0]
col2 += new_cols[1]
# Confusion matrix
confusion = confusion_matrix(col1, col2, labels=code_dict[code])
# No ground truth, so make it symmetric
# 20 points if you can find a Python-y way to do this.
for i in range(confusion.shape[0]):
for j in range(confusion.shape[1]):
if i != j:
total = confusion[i, j] + confusion[j, i]
confusion[i, j] = total
confusion[j, i] = total
writer.writerow([code])
labels = code_dict[code]
code_row = [code]
writer.writerow(code_row)
confusion_rownum = write_xlsx_row(confusion_sheet, code_row, confusion_rownum)
header_row = [''] + labels
writer.writerow(header_row)
confusion_rownum = write_xlsx_row(confusion_sheet, header_row, confusion_rownum)
for idx, row in enumerate(confusion):
output_row = [labels[idx]] + row.tolist()
confusion_sheet.write_row(confusion_rownum, 0, output_row)
confusion_rownum += 1
writer.writerow(output_row)
return confusion_rownum
def write_arb(
writer, data_dict, lead_code, arb_cols,
codes_only, arb_sheets, arb_rownums):
""" Identify codes without a majority consensus, and write those to a new sheet for arbitrating.
Currently exits arb when any arbitrator affirms the choice of a previous coder, so
more complex majority systems are not implemented.
"""
sample_df = data_dict[list(data_dict.keys())[0]]
for i in tqdm(range(sample_df.shape[0])):
# See who has and has not coded this data.
have_coded = []
arbitrators = []
for coder, data in data_dict.items():
row = data.iloc[i]
if not pd.isnull(row[lead_code]):
have_coded += [coder]
else:
arbitrators += [coder]
# If 2+ people have coded...
if len(have_coded) > 1:
# Grab the relevant data...
if arbitrators:
arbitrator = random.choice(arbitrators)
else:
arbitrator = 'Discussion'
output_rows = []
code_rows = []
for coder in have_coded:
output_row = data_dict[coder].iloc[i].fillna('')
data_cols = output_row[arb_cols].values.tolist()
code_cols = output_row[codes_only].values.tolist()
code_rows += [code_cols]
output_rows += [data_cols + [coder, 'FALSE'] + code_cols]
# Wonky method to determine if there is a majority.
# There's probably a better way..
# Hack here for the Notes column, TODO
# majority_dict = defaultdict(int)
majority = False
for idx, row in enumerate(code_rows):
for row2 in code_rows[idx + 1:]:
if row[:-1] == row2[:-1]:
majority = True
# Finally, if there is no majority, write to arb file
# if any([value > 0 for key, value in majority_dict.items()]):
if majority:
continue
for output_row in output_rows:
writer.writerow(output_row)
arb_rownums[arbitrator] = write_xlsx_row(
arb_sheets[arbitrator],
output_row, arb_rownums[arbitrator])
arb_row = [''] * len(arb_cols) + [arbitrator, 'TRUE']
# Blank out disagreements for arbitrator
# Very confusing. I'm a little out of it right now tbh.
for idx in range(len(code_cols)):
answers = [row[idx] for row in code_rows]
if len(answers) == len(set(answers)):
arb_row += ['']
else:
arb_row += [max(set(answers), key=answers.count)]
writer.writerow(arb_row)
arb_rownums[arbitrator] = write_xlsx_row(
arb_sheets[arbitrator],
arb_row, arb_rownums[arbitrator])
return arb_sheets, arb_rownums
def process_individual_stats(
individual_writer, individual_sheet, individual_rownum,
data_dict, lead_code, analysis_codes, multi_select_codes,
individual_header):
for coder, df in data_dict.items():
output_row = {'Coder': coder}
df = df[~df[lead_code].isnull()]
for code in analysis_codes:
output_row['Code'] = code
sub_df = df[code].dropna()
# Deal with multi-select in the confusion matrices. Not sure if redundant
if code in multi_select_codes:
data = list(sub_df.astype(str))
multi_select_count = 0
new_data = []
for idx, value in enumerate(data):
if '|' in value:
multi_select_count += 1
values = set(str.split(value, '|'))
for val in values:
new_data += [val]
else:
new_data += [value]
sub_df = pd.DataFrame(new_data, columns=[code])
sub_df = sub_df[code]
multi_select_percent = multi_select_count / len(data)
else:
multi_select_count = None
multi_select_percent = None
output_row['Multi_N'] = multi_select_count
output_row['Multi_Percent'] = multi_select_percent
# sub_df = sub_df[~sub_df[code].isnull()]
percents = sub_df.value_counts(normalize=True) * 100
counts = sub_df.value_counts()
for key, value in percents.items():
output_row['N'] = counts[key]
output_row['Percent'] = value
output_row['Value'] = key
individual_writer.writerow(output_row)
write_xlsx_row(individual_sheet, output_row, individual_rownum, individual_header)
individual_rownum += 1
return
def write_discussion(dworkbook, data_dict, lead_code, codes_only, coders, analysis_codes):
# Individual Statistics
discussion_header = ['Pair_1', 'Pair_2', 'Row'] + analysis_codes
with open('Test_Count.csv', 'a') as f:
writer = csv.writer(f, delimiter=',')
for idx, (coder1, coder2) in enumerate(combinations(coders, 2)):
discussion_sheet = dworkbook.add_worksheet(f'{coder1}_{coder2}')
discussion_sheet.write_row(0, 0, discussion_header)
combined = process_pair_codesheet(
data_dict, coders, analysis_codes, coder1, coder2, lead_code)
rownum = 1
for i in tqdm(range(combined.shape[0])):
row = combined.iloc[i]
output_row = {'Pair_1': coder1, 'Pair_2': coder2, 'Row': combined.index[i]}
if row.iloc[0:len(analysis_codes)].tolist() == row.iloc[len(analysis_codes):].tolist():
writer.writerow([combined.index[i], 0])
continue
wrong_codes = 0
for code in analysis_codes:
if row[f'{code}_{coder1}'] != row[f'{code}_{coder2}']:
output_row[code] = ' // '.join([str(row[f'{code}_{coder1}']), str(row[f'{code}_{coder2}'])])
wrong_codes += 1
writer.writerow([combined.index[i], wrong_codes])
write_xlsx_row(discussion_sheet, output_row, rownum, discussion_header)
rownum += 1
return
def scratch_code():
# params = namedtuple('Parameters', [
# 'code_dict', 'code_hierarchy', 'multi_select_codes', 'code_groups',
# 'lead_code', 'exclude_values', 'analysis_codes', 'code_header', 'max_raters'])
return
if __name__ == '__main__':
pass
```
#### File: twitter2sql/core/json_util.py
```python
import json
import pandas as pd
from pprint import pprint
def load_json(input_json, output_type='dict'):
# Should probably check this at the format level.
if input_json.endswith('json'):
with open(input_json, 'r') as f:
data = json.load(f)
else:
with open(input_json, 'r') as f:
data = [json.loads(jline) for jline in list(f)]
return data
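# Hedged usage sketch (file names are made up): a file ending in 'json' is read
# as a single JSON document, anything else is treated as JSON Lines (one object
# per line).
# tweets = load_json('tweets.json') # single JSON array/document
# tweets = load_json('tweets.jsonl') # newline-delimited JSON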
def extract_mentions(input_data):
return
def extract_images(input_data, types=['photo']):
# Stolen from https://github.com/morinokami/twitter-image-downloader/blob/master/twt_img/twt_img.py
if len(types) > 1:
raise NotImplementedError
if "media" in input_data["entities"]:
if "extended_entities" in input_data:
media_types = [x['type'] for x in input_data["extended_entities"]["media"]]
extra = [
x["media_url"] for x in input_data["extended_entities"]["media"] if x['type'] in types
]
else:
media_types = [] # empty list (not None) so the all() check below does not fail
extra = []
if all([x in types for x in media_types]):
urls = [x["media_url"] for x in input_data["entities"]["media"] if x['type'] in types]
urls = set(urls + extra)
return urls
else:
return None
if __name__ == '__main__':
pass
```
#### File: twitter2sql/core/util.py
```python
import time
import os
import psycopg2
import psycopg2.extras # DictCursor used below lives in the extras submodule
import csv
import pandas as pd
import re
import tweepy
import json
from datetime import timedelta
from datetime import datetime
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
from collections import defaultdict
def twitter_str_to_dt(dt_str):
return datetime.strptime(dt_str, "%a %b %d %H:%M:%S +0000 %Y")
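# Example of the Twitter timestamp format this parses (value is illustrative):
# twitter_str_to_dt("Wed Oct 10 20:19:24 +0000 2018")
# -> datetime.datetime(2018, 10, 10, 20, 19, 24)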
def open_tweepy_api(twitter_c_key=None, twitter_c_key_secret=None,
twitter_a_key=None, twitter_a_key_secret=None,
credentials=None):
# This is a little stupid.
if credentials:
creds = {}
for line in open(credentials).readlines():
key, value = line.strip().split("=")
creds[key] = value
twitter_c_key = creds['twitter_c_key']
twitter_c_key_secret = creds['twitter_c_key_secret']
twitter_a_key = creds['twitter_a_key']
twitter_a_key_secret = creds['twitter_a_key_secret']
#authorize twitter, initialize tweepy
auth = tweepy.OAuthHandler(twitter_c_key, twitter_c_key_secret)
auth.set_access_token(twitter_a_key, twitter_a_key_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
return api
def open_database(database_name,
db_config_file,
overwrite_db=False,
owner='example',
admins=[],
named_cursor=None,
itersize=None,):
# Parse the database credentials out of the file
database_config = {"database": database_name}
for line in open(db_config_file).readlines():
key, value = line.strip().split("=")
database_config[key] = value
# cursor.execute("select * from information_schema.tables where table_name=%s", ('mytable',))
if overwrite_db:
create_statement = """CREATE DATABASE {db}
WITH
OWNER = {owner}
ENCODING = 'UTF8'
LC_COLLATE = 'en_US.UTF-8'
LC_CTYPE = 'en_US.UTF-8'
TABLESPACE = pg_default
CONNECTION LIMIT = -1;
""".format(db=database_name, owner=owner)
public_permissions = """GRANT TEMPORARY, CONNECT ON DATABASE {db} TO PUBLIC;""".format(db=database_name)
owner_permissions = """GRANT ALL ON DATABASE {db} TO {user};""".format(db=database_name, user=owner)
admin_permissions = []
for admin in admins:
admin_permissions += ['\nGRANT TEMPORARY ON DATABASE {db} to {user}'.format(db=database_name, user=admin)]
all_commands = [create_statement] + [public_permissions] + [owner_permissions] + admin_permissions
# Copy the config so the requested database name is preserved for the later connection
create_database_config = dict(database_config)
create_database_config['database'] = 'postgres'
database = psycopg2.connect(**create_database_config)
database.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
cursor = database.cursor(cursor_factory=psycopg2.extras.DictCursor)
for command in all_commands:
cursor.execute(command)
database.commit()
cursor.close()
database.close()
# Connect to the database and get a cursor object
database = psycopg2.connect(**database_config)
cursor = database.cursor(cursor_factory=psycopg2.extras.DictCursor, name=named_cursor)
if itersize is not None:
cursor.itersize = itersize
return database, cursor
def get_column_header_dict(input_column_csv):
column_header_dict = {}
with open(input_column_csv, 'r') as readfile:
reader = csv.reader(readfile, delimiter=',')
next(reader) # This line skips the header row.
for row in reader:
column_header_dict[row[0]] = {'type': row[2], 'json_fieldname': row[1], 'clean': row[4], 'instructions': row[5]}
if column_header_dict[row[0]]['clean'] == 'TRUE':
column_header_dict[row[0]]['clean'] = True
else:
column_header_dict[row[0]]['clean'] = False
return column_header_dict
def close_database(cursor, database, commit=True):
# Close everything
cursor.close()
if commit:
database.commit()
database.close()
def clean(s):
# Fix bytes/str mixing from earlier in code:
if type(s) is bytes:
s = s.decode('utf-8')
# Replace weird characters that make Postgres unhappy
s = s.replace("\x00", "") if s else None
# re_pattern = re.compile(u"\u0000", re.UNICODE)
# s = re_pattern.sub(u'\u0000', '')
# add_item = re.sub(r'(?<!\\)\\(?!["\\/bfnrt]|u[0-9a-fA-F]{4})', r'', add_item)
s = re.sub(r'(?<!\\)\\u0000', r'', s) if s else None
return s
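# Illustrative example (input is made up): clean(b'foo\x00bar') returns 'foobar'
# -- bytes are decoded to str, NUL characters are dropped, and literal '\u0000'
# escape sequences are stripped so Postgres accepts the text.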
def c(u):
# Encode unicode so it plays nice with the string formatting
return u.encode('utf8')
def get_last_modified(json_file):
return os.path.getmtime(json_file)
def within_time_bounds(json_file, start_time, end_time):
json_modified_time = get_last_modified(json_file)
return (json_modified_time >= time.mktime(start_time.timetuple())) and (json_modified_time <= (time.mktime(end_time.timetuple()) + timedelta(days=1).total_seconds()))
def save_to_csv(rows, output_filename, column_headers=None):
with open(output_filename, 'w') as outfile:
writer = csv.writer(outfile, delimiter=',')
if column_headers is None:
writer.writerow(rows[0].keys())
else:
writer.writerow(column_headers)
for item in rows:
if column_headers is None:
# This might not work if dictionaries don't pull out keys in same order.
writer.writerow(item.values())
else:
output_row = [item[column] for column in column_headers]
writer.writerow(output_row)
return
def load_from_csv(input_csv, time_columns=[]):
with open(input_csv, 'r') as readfile:
reader = csv.reader(readfile, delimiter=',')
output_dict_list = []
header = next(reader)
for row in reader:
output_dict = {}
for idx, item in enumerate(row):
# Time conversion is a little inefficient. But who cares!
if header[idx] in time_columns:
item = datetime.strptime(item, "%Y-%m-%d %H:%M:%S")
output_dict[header[idx]] = item
output_dict_list += [output_dict]
return output_dict_list
def list_on_key(dict_list, key):
""" Is there a one-liner for this?
"""
return_list = []
for sub_dict in dict_list:
return_list += [sub_dict[key]]
return return_list
def extract_entity_to_column():
return
def to_list_of_dicts(cursor):
results = cursor.fetchall()
dict_result = []
for row in results:
dict_result.append(dict(row))
return dict_result
def to_pandas(cursor, dtype=None):
results = cursor.fetchall()
column_headers = list(results[0].keys())
if not dtype:
data_frame = pd.DataFrame(results)
else:
new_results = []
for result in results:
new_results += [[str(x) if x else None for x in result]]
data_frame = pd.DataFrame(new_results, dtype='str')
data_frame.columns = column_headers
return data_frame
def sort_json(input_file, output_file=None, reverse=False, key='created_at', format=None):
if output_file is None:
output_file = input_file
with open(input_file, "r") as f:
json_dict = json.load(f)
if key == 'created_at':
json_dict.sort(reverse=reverse, key=lambda t: twitter_str_to_dt(t[key]))
else:
json_dict.sort(reverse=reverse, key=lambda t: t[key])
with open(output_file, 'w') as f:
json.dump(json_dict, f)
return
def write_json(input_file, output_file=None):
return
def format_json(input_file, output_file=None, json_format='newlines'):
if output_file is None:
output_file = input_file
with open(input_file, "r") as f:
json_dict = json.load(f)
if json_format == 'newlines':
with open(output_file, "w") as openfile:
openfile.write("[\n")
for idx, tweet in enumerate(json_dict):
json.dump(tweet, openfile)
if idx == len(json_dict) - 1:
openfile.write('\n')
else:
openfile.write(",\n")
openfile.write("]")
return
def sample_json_to_csv(input_directories, number, keys):
return
def int_dict():
return defaultdict(int)
def set_dict():
return defaultdict(set)
def dict_dict():
return defaultdict(dict)
def list_dict():
return defaultdict(list)
def sql_type_dictionary():
""" Return a dictionary of PSQL types for typical column names
in Twitter2SQL databases.
"""
type_dict = {'user_id': 'bigint',
'tweet': 'TEXT',
'user_name': 'TEXT',
'user_screen_name': 'TEXT',
'in_reply_to_status_id': 'bigint',
'created_at': 'timestamptz',
'in_reply_to_user_screen_name': 'TEXT',
'in_reply_to_user_id': 'bigint'}
return type_dict
```
#### File: twitter2sql/query/commands.py
```python
import datetime
from psycopg2 import sql
from collections import OrderedDict
from twitter2sql.core.util import open_database, save_to_csv, to_list_of_dicts
def aggregate_by_time(database_name,
db_config_file,
column_name='example',
num_returned=10000,
table_name='table',
return_headers=None,
distinct=None,
output_filename=None,
output_column_headers=None):
return
def filter_by_time(database_name,
db_config_file,
time_column='example',
start_date=None,
end_date=None,
num_returned=10000,
table_name='table',
return_headers=None,
distinct=None,
output_filename=None,
output_column_headers=None):
database, cursor = open_database(database_name, db_config_file)
cursor.execute(sql.SQL("""
SELECT *
FROM {}
WHERE time_column BETWEEN %s and %s
LIMIT %s;
""").format(sql.Identifier(table_name), sql.Identifier(column_name)), [start_date, end_date])
results = cursor.fetchall()
dict_result = []
for row in results:
dict_result.append(dict(row))
results = remove_duplicates(dict_result, limit=100)
if output_filename is not None:
save_to_csv(results, output_filename, output_column_headers)
return results
class Command(object):
def __init__(self, verbose=False):
self.verbose = verbose
return
def execute_sql(self):
return
def aggregate(database_name,
db_config_file,
aggregate_column='example',
output_columns='example',
table_name='table',
count_column_name='total_tweets',
num_returned=1000,
return_headers=None,
output_filename=None,
output_column_headers=None,
verbose=False):
database, cursor = open_database(database_name, db_config_file)
# Max is a bit shady here.
output_columns_sql = sql.SQL(',').join([sql.SQL("MAX({output}) as {output}").format(output=sql.Identifier(output)) for output in output_columns])
sql_statement = sql.SQL("""
SELECT {agg}, COUNT({count}) as {count_name},
{outputs}
FROM {table}
GROUP BY {agg}
ORDER BY {count_name} DESC
LIMIT %s;
""").format(agg=sql.Identifier(aggregate_column),
count=sql.Identifier(aggregate_column),
count_name=sql.Identifier(count_column_name),
table=sql.Identifier(table_name),
outputs=output_columns_sql)
cursor.execute(sql_statement, [num_returned])
results = to_list_of_dicts(cursor)
if verbose:
for result in results:
print(result)
if output_filename is not None:
save_to_csv(results, output_filename, output_column_headers)
return
def grab_top(database_name=None,
db_config_file=None,
cursor=None,
column_name='example',
num_returned=100,
table_name='table',
return_headers=None,
distinct=None,
output_filename=None,
output_column_headers=None):
if cursor is None:
database, cursor = open_database(database_name, db_config_file)
else:
return
if distinct is None:
sql_statement = sql.SQL("""
SELECT user_screen_name,user_name,user_description,user_created_ts,user_followers_count,user_id,created_at,complete_text
FROM (SELECT DISTINCT ON (user_id) user_screen_name,user_name,user_description,user_created_ts,user_followers_count,user_id,created_at,complete_text
FROM {} WHERE lang='en') as sub_table
ORDER BY {} DESC
LIMIT %s;
""").format(sql.Identifier(table_name), sql.Identifier(column_name))
cursor.execute(sql_statement, [num_returned])
else:
# Currently non-functional.
cursor.execute(sql.SQL("""
SELECT * FROM (
SELECT DISTINCT ON {} *
FROM {}
ORDER BY {} DESC
LIMIT %s;
""").format(sql.Identifier(distinct),
sql.Identifier(table_name),
sql.Identifier(column_name)),
[num_returned])
results = to_list_of_dicts(cursor)
# results = remove_duplicates(results, limit=100)
if output_filename is not None:
save_to_csv(results, output_filename, output_column_headers)
return results
def remove_duplicates(rows, duplicate_key='user_id', sort_key='user_followers_count', limit=None):
""" Removes duplicates in a list, can return only top 'limit' results.
Placeholder function until I find how to do this in SQL.
Assumes list is sorted.
This is like the worst function I have ever made.
"""
output_rows = []
key_dict = OrderedDict()
for idx, item in enumerate(rows):
item_key = item[duplicate_key]
# Very difficult to understand code.
if item_key not in key_dict:
key_dict[item_key] = item
else:
if key_dict[item_key][sort_key] < item[sort_key]:
key_dict[item_key] = item
if limit is not None:
if len(key_dict) == limit:
break
print('Removed duplicates from..', idx)
for key, value in key_dict.items():
output_rows += [value]
return output_rows
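# Hypothetical usage: given rows already sorted by 'user_followers_count'
# descending, this keeps one row per 'user_id' (the one with the larger follower
# count seen so far) and stops once 'limit' unique users are collected, e.g.
# top_users = remove_duplicates(rows, limit=100)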
if __name__ == '__main__':
grab_top()
```
#### File: twitter2sql/request/lists.py
```python
import tweepy
import csv
import os
from pprint import pprint
from tqdm import tqdm
def get_list_members(api, user, list_id, output_csv_folder):
members = []
if not os.path.exists(output_csv_folder):
os.mkdir(output_csv_folder)
descriptors = ['name', 'description', 'slug', 'created_at', 'id']
for list_object in api.lists_all(user):
slug = list_object._json['slug']
print(slug, list_object._json['id'])
try:
users = []
for result in tweepy.Cursor(api.list_members, user, slug).pages():
for page in result:
users += [[page._json['id'], page._json['screen_name'], page._json['name']]]
with open(os.path.join(output_csv_folder, f'{slug}.csv'), 'w') as writefile:
writer = csv.writer(writefile, delimiter=',')
for row in users:
print(row)
writer.writerow(row)
except Exception as e:
print(f'Failed on {slug}')
print(e)
return None
def add_to_list(api, input_ids, list_id):
if type(input_ids) is list:
pass
elif type(input_ids) is str:
if input_ids.endswith('.txt'):
with open(input_ids, 'r') as f:
input_ids = f.readlines()
else:
raise ValueError(f"{input_ids} in str format must be a .txt file.")
else:
raise ValueError(f"{input_ids} must be either a filepath or a list of screen names.")
for idx, uid in enumerate(tqdm(input_ids)):
print(list_id, uid)
try:
api.add_list_member(list_id=str(list_id), user_id=uid)
except Exception as e:
print(e)
return None
if __name__ == '__main__':
pass
``` |
{
"source": "josephschorr/Python-Wrapper",
"score": 3
} |
#### File: Python-Wrapper/xboxapi/gamer.py
```python
class Gamer(object):
''' Xbox profile wrapper '''
def __init__(self, gamertag=None, client=None, xuid=None):
self.client = client
self.gamertag = gamertag
self.xuid = xuid if xuid is not None else self.fetch_xuid()
self.endpoints = ['messages',
'conversations',
'recent-players',
'activity-feed',
'latest-xbox360-games',
'latest-xboxone-games',
'latest-xboxone-apps',
'xboxone-gold-lounge',
'game-details',
'game-details-hex']
self.endpoints_xuid = ['achievements',
'profile',
'presence',
'gamercard',
'activity',
'friends',
'followers',
'game-clips',
'game-clips/saved',
'game-stats',
'screenshots',
'xbox360games',
'xboxonegames',
'game-status']
def get(self, method=None, term=None):
''' Retrieve data from supported endpoints '''
# Hack to avoid calling api again for xuid retrieval
if method == 'xuid':
return self.xuid
url = self.parse_endpoints(method, term)
if url is not False:
return self.client.api_get(url).json()
url = self.parse_endpoints_secondary(method, term)
if url is not False:
return self.client.api_get(url).json()
return {}
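# Hypothetical usage (assumes a client object providing api_get/api_post, as
# used throughout this class):
# gamer = Gamer(gamertag='SomeGamertag', client=client)
# profile = gamer.get('profile') # xuid-based endpoint
# games = gamer.get('latest-xboxone-games') # plain endpoint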
def parse_endpoints(self, method=None, term=None):
''' Constructs a valid endpoint url for api '''
if method is None:
return False
for endpoint in self.endpoints:
if endpoint != method:
continue
url = endpoint
if term is not None:
url = url + '/' + term
return url
return False
def parse_endpoints_secondary(self, method=None, term=None):
''' Parse secondary endpoints that require xuid in url '''
for endpoint in self.endpoints_xuid:
if endpoint != method:
continue
url = str(self.xuid) + '/' + endpoint
if term is not None:
url = url + '/' + term
return url
return False
def send_message(self, message=None, xuids=None):
''' Send a message given a list of gamer xuids '''
payload = {}
if message is None:
raise ValueError('A message is required!')
if xuids is not None and not hasattr(xuids, 'append'):
raise TypeError('List was not given!')
if xuids is None:
xuids = [self.xuid]
payload['to'] = xuids
payload['message'] = message
return self.client.api_post('messages', payload)
def post_activity(self, message=None):
''' Post directly to your activity feed '''
payload = {}
if message is None:
raise ValueError('A message is required!')
payload['message'] = message
return self.client.api_post('activity-feed', payload)
def fetch_xuid(self):
''' Fetch gamer xuid from gamertag '''
return self.client.api_get('xuid/' + self.gamertag).json()
``` |
{
"source": "josephscott/browser-engineering",
"score": 3
} |
#### File: josephscott/browser-engineering/browser.py
```python
def request(url):
assert url.startswith("http://")
url = url[len("http://"):]
host, path = url.split("/", 1)
path = "/" + path
import socket
s = socket.socket(
family=socket.AF_INET,
type=socket.SOCK_STREAM,
proto=socket.IPPROTO_TCP,
)
s.connect(("example.org", 80))
s.send(b"GET /index.html HTTP/1.0\r\n" +
b"Host: example.org\r\n\r\n")
response = s.makefile("r", encoding="utf8", newline="\r\n")
statusline = response.readline()
version, status, explanation = statusline.split(" ", 2)
assert status == "200", "{}: {}".format(status, explanation)
headers = {}
while True:
line = response.readline()
if line == "\r\n": break
header, value = line.split(":", 1)
headers[header.lower()] = value.strip()
body = response.read()
s.close()
return headers, body
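# Example (URL is just an illustration):
# headers, body = request("http://example.org/index.html")
# 'headers' is a dict keyed by lower-cased header names, 'body' is the HTML text.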
def show(body):
in_angle = False
for c in body:
if c == "<":
in_angle = True
elif c == ">":
in_angle = False
elif not in_angle:
print(c, end="")
def load(url):
headers, body = request(url)
show(body)
if __name__ == "__main__":
import sys
load(sys.argv[1])
``` |
{
"source": "josephsdavid/N2D",
"score": 3
} |
#### File: examples/stockCluster/augment.py
```python
import numpy as np
import pandas as pd
from scipy.interpolate import CubicSpline # for warping
from transforms3d.axangles import axangle2mat # for rotation
# augmentation of data
def Jitter(X, sigma=0.5):
myNoise = np.random.normal(loc=0, scale=sigma, size=X.shape)
return X + myNoise
df = pd.read_csv("Data/stock_close.csv")
df.apply(Jitter, axis=1)
def augment(df, n):
res = []
for i in range(0, n):
x = df.apply(Jitter, axis=1)
res.append(np.asarray(x))
return np.hstack(res)
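# Hypothetical usage: for a (rows x columns) frame of closing prices,
# augment(df, 3) returns a NumPy array with the same number of rows and three
# times the columns, i.e. three independently jittered copies stacked side by side.
# noisy = augment(pd.read_csv("Data/stock_close.csv"), 3)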
```
#### File: examples/stockCluster/preprocess.py
```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
def scale(path):
df = pd.read_csv(path, index_col=0)
scaled_feats = StandardScaler().fit_transform(df.values)
scaled_features_df = pd.DataFrame(scaled_feats, columns=df.columns)
return scaled_features_df
```
#### File: N2D/n2d/generators.py
```python
import numpy as np
from tensorflow.keras.models import Model
# Local modules
from . import N2D
class manifold_cluster_generator(N2D.UmapGMM):
def __init__(self, manifold_class, manifold_args, cluster_class, cluster_args):
# cluster exceptions
self.manifold_in_embedding = manifold_class(**manifold_args)
self.cluster_manifold = cluster_class(**cluster_args)
proba = getattr(self.cluster_manifold, "predict_proba", None)
self.proba = callable(proba)
self.hle = None
def fit(self, hl):
super().fit(hl)
def predict(self, hl):
if self.proba:
return super().predict(hl)
else:
manifold = self.manifold_in_embedding.transform(hl)
y_pred = self.cluster_manifold.predict(manifold)
return np.asarray(y_pred)
def fit_predict(self, hl):
if self.proba:
return super().fit_predict(hl)
else:
self.hle = self.manifold_in_embedding.fit_transform(hl)
y_pred = self.cluster_manifold.fit_predict(self.hle)
return np.asarray(y_pred)
def predict_proba(self, hl):
if self.proba:
return super().predict_proba(hl)
else:
print("Your clusterer cannot predict probabilities")
class autoencoder_generator(N2D.AutoEncoder):
def __init__(self, model_levels=(), x_lambda=lambda x: x):
self.Model = Model(model_levels[0], model_levels[2])
self.encoder = Model(model_levels[0], model_levels[1])
self.x_lambda = x_lambda
def fit(
self,
x,
batch_size,
epochs,
loss,
optimizer,
weights,
verbose,
weight_id,
patience,
):
super().fit(
x,
batch_size,
epochs,
loss,
optimizer,
weights,
verbose,
weight_id,
patience,
)
``` |
{
"source": "JosephSemrai/TensorMap",
"score": 3
} |
#### File: app/resources/data_preprocessing.py
```python
from flask import jsonify, redirect, url_for, render_template, request, session
from . import main
from .. import db
from .database_models.data_preproc import dataset
import os
import json
import csv
import pandas as pd
from flask import send_file
from .template_manipulation import editExperimentConfigurations
def createJsonData(entry):
i= 0
columns = []
data = []
splitLine = None
allData = {}
if entry:
with open(entry.filePath, 'r') as f:
index = 0
for line in f:
if index == 0:
print(line)
newLine = line.replace('\n', '').replace('"','')
splitLine = newLine.split(",")
print(splitLine)
for column in splitLine:
temColumn = {}
temColumn["title"] = column
temColumn["field"] = column
columns.append(temColumn)
print(columns)
else:
newRowLine = line.replace('\n', '').replace('"','')
splitData = newRowLine.split(",")
if index==1:
print(line)
print(newRowLine)
print(splitData)
print(splitLine)
temRow = {}
for i in range(len(splitData)):
temRow[splitLine[i]] = splitData[i]
data.append(temRow)
index += 1
allData["columns"]=columns
allData["data"]=data
allData["error"]="None"
responseData = json.dumps(allData)
return responseData
else:
allData["columns"]="None"
allData["data"]="None"
allData["error"]="Dataset Not Found"
responseData = json.dumps(allData)
return responseData
@main.route('/addData', methods=['POST'])
def addData():
file = request.files['file']
datasetFile = file.filename
dirPath = os.path.abspath(os.path.join(os.path.dirname( __file__ ), '.', 'dataset'))
path = '{}{}{}'.format(dirPath, "/", datasetFile)
splitFileInfo = datasetFile.split(".")
fileName = splitFileInfo[0]
fileFormat = splitFileInfo[1]
if dataset.query.filter_by(fileName=fileName).one_or_none():
return "error"
else:
file.save(path)
# writing the file entry in the database.
data = dataset(fileName, path, fileFormat, "None","None",0)
db.session.add(data)
db.session.commit()
return splitFileInfo[0]
@main.route('/visualizeData', methods=['GET'])
def visualizeData():
# ******************************************change
# entry = dataset.query.filter_by(fileName=request.args['fileName']).one()
entry = dataset.query.filter_by(fileName="store").one()
print(entry.filePath)
responseData = createJsonData(entry)
return responseData
@main.route('/addRow', methods=['POST'])
def addDataRow():
content = request.get_json()
entry = dataset.query.filter_by(fileName=content['fileName']).one()
print(entry.filePath)
row = []
dataCsv = pd.read_csv(entry.filePath)
for column in content["columnData"]:
if column["title"] in content["rowdata"]:
row.append(content["rowdata"][column["title"]])
else:
row.append("None")
print(row)
dataCsv.loc[len(dataCsv)] = row
dataCsv.to_csv(entry.filePath, index=False)
return "Done"
@main.route('/editRow', methods=['POST'])
def editDataRow():
content = request.get_json()
entry = dataset.query.filter_by(fileName=content['fileName']).one()
print(entry.filePath)
row = []
dataCsv = pd.read_csv(entry.filePath)
for column in content["columnData"]:
if column["title"] in content["newRowData"]:
row.append(content["newRowData"][column["title"]])
else:
row.append("None")
print(row)
dataCsv.loc[content["newRowData"]["tableData"]["id"]] = row
dataCsv.to_csv(entry.filePath, index=False)
return "Done"
@main.route('/deleteRow', methods=['POST'])
def deleteDataRow():
content = request.get_json()
entry = dataset.query.filter_by(fileName=content['fileName']).one()
print(entry.filePath)
dataCsv = pd.read_csv(entry.filePath)
dataCsv.drop(dataCsv.index[content["oldRowData"]["tableData"]["id"]], inplace = True)
dataCsv.to_csv(entry.filePath, index=False)
return "Done"
@main.route('/deleteColumn', methods=['POST'])
def deleteDataColumn():
content = request.get_json()
entry = dataset.query.filter_by(fileName=content['fileName']).one()
dataCsv = pd.read_csv(entry.filePath)
for column in content["columnData"]:
if column["checked"]:
print(column["title"])
dataCsv.drop(column["title"], axis=1, inplace=True)
print(column)
dataCsv.to_csv(entry.filePath, index=False)
entry.features = "None"
entry.labels = "None"
entry.testPercentage = 0
db.session.commit()
responseData = createJsonData(entry)
return responseData
@main.route('/downloadCSV', methods=['POST'])
def download():
content = request.get_json()
entry = dataset.query.filter_by(fileName=content['fileName']).one()
fullFileName = '{}{}{}'.format(entry.fileName, ".", entry.fileFormat)
try:
return send_file(entry.filePath,
attachment_filename=fullFileName,
as_attachment=True)
except Exception as e:
return str(e)
@main.route('/saveConfig', methods=['POST'])
def saveConfig():
content = request.get_json()
featureString = ""
labelString = ""
index = 0
entry = dataset.query.filter_by(fileName=content['fileName']).one()
for feature in content["features"]:
if feature["checked"] :
print(feature["title"])
if index == 0:
featureString = '{}{}'.format(featureString, feature["title"])
else:
featureString = '{}{}{}'.format(featureString, ",", feature["title"])
index += 1
index = 0
for label in content["labels"]:
if label["checked"] :
print(label["title"])
if index == 0:
labelString = '{}{}'.format(labelString, label["title"])
else:
labelString = '{}{}{}'.format(labelString, ",", label["title"])
index += 1
print(labelString)
entry.features = featureString
entry.labels = labelString
entry.testPercentage = content['trainPercentage']
db.session.commit()
editExperimentConfigurations(featureString,labelString,content['trainPercentage'],entry)
return "done"
@main.route('/viewData', methods=['GET'])
def viewData():
entries = dataset.query.all()
entries = [entry.serialize() for entry in entries]
return json.dumps(entries)
``` |
{
"source": "josephshanks/Swing-and-a-miss",
"score": 3
} |
#### File: Swing-and-a-miss/src/Cleaner.py
```python
import numpy as np
import pandas as pd
def cleaning(df):
#dropping columns that contains information of the result of the play. I am trying to model the contact quality only by factors known before the batter hits the ball
df=df.drop(['pitch_type','game_date','player_name','pitcher','batter','events','description','spin_dir','spin_rate_deprecated','break_angle_deprecated',
'break_length_deprecated','des','game_type','home_team','away_team','type','hit_location','bb_type','game_year','hc_x','hc_y',
'tfs_deprecated','tfs_zulu_deprecated','umpire','sv_id','hit_distance_sc','launch_speed','launch_angle','game_pk','pitcher',
'estimated_ba_using_speedangle','estimated_woba_using_speedangle','woba_value','woba_denom','babip_value','iso_value','pitch_name',
'launch_speed_angle','home_score','away_score','post_away_score','post_home_score','post_bat_score','post_fld_score'],axis=1)
df[['on_3b','on_2b','on_1b']] = df[['on_3b','on_2b','on_1b']].fillna(value=0)
df[['if_fielding_alignment','of_fielding_alignment']] = df[['if_fielding_alignment','of_fielding_alignment']].fillna(value='Standard')
df = df.dropna() # assign the result; dropna() does not modify the frame in place
#Was there anybody on third base?
df['on_3b']=df['on_3b'].apply(lambda x: 1 if x >= 1 else 0)
#Was there anybody on first and second base?
df['on_2b']=df['on_2b'].apply(lambda x: 1 if x >= 1 else 0)
df['on_1b']=df['on_1b'].apply(lambda x: 1 if x >= 1 else 0)
#batter stance and pitcher stance: 1 for Right, 0 for Left
df['stand']=df['stand'].apply(lambda x: 1 if x=='R' else 0)
df['p_throws']=df['p_throws'].apply(lambda x: 1 if x=='R' else 0)
df['inning_topbot']=df['inning_topbot'].apply(lambda x: 1 if x=='Bot' else 0)
df['if_fielding_alignment']=df['if_fielding_alignment'].apply(lambda x: 0 if x=='Standard' else 1)
df['of_fielding_alignment']=df['of_fielding_alignment'].apply(lambda x: 0 if x=='Standard' else 1)
#drop nulls
df=df.dropna()
return df
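# Hypothetical usage (file name is an assumption): the input is expected to be
# a raw Statcast pitch-level CSV with the columns dropped/encoded above.
# cleaned = cleaning(pd.read_csv('statcast_pitches.csv'))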
``` |
{
"source": "josephshell/bounceIO",
"score": 2
} |
#### File: josephshell/bounceIO/ProgramRunner.py
```python
import pygame
from src.utils.Logger import debug
from src.games.bounceIO.BounceIo import bounce_io_debug
RED = (255, 0, 0)
BLUE = (0, 0, 255)
WHITE = (255, 255, 255)
SIZE = (800, 600)
def main():
bounce_io_debug(pygame, WHITE, RED, BLUE, SIZE, 10, 50, debug)
if __name__ == '__main__':
main()
```
#### File: src/utils/Logger.py
```python
import datetime
def debug(msg: str):
time = datetime.datetime.now().time()
print("[%s]: %s" % (time, msg))
``` |
{
"source": "Josephshitandi/News-API",
"score": 3
} |
#### File: app/main/views.py
```python
from flask import render_template,request,redirect,url_for
from . import main
from ..request import get_news,get_article,get_category
from .forms import ReviewForm
# from ..models import Review
@main.route('/')
def index():
'''
View root page function that returns the index page and its data
'''
# Getting news sources
popularity = get_news('popularity')
bitcoin = get_news('bitcoin')
business = get_news('business')
techcrunch = get_news('techcrunch')
wall_street = get_news('wsj')
title = 'Home - Welcome to The best News Review Website Online'
search_news = request.args.get('news_query')
if search_news:
return redirect(url_for('search', category_name = search_news))
else:
return render_template('index.html', title = title, popularity = popularity, bitcoin = bitcoin, business = business, techcrunch = techcrunch, wall_street = wall_street )
@main.route('/article/<id>')
def article(id):
'''
View article page function that returns the news article details page and its data
'''
news = get_article(id)
title = f'{news.title}'
return render_template('article.html', title=title, news=news)
@main.route('/categories/<category_name>')
def category(category_name):
'''
method that returns the categories page
'''
category = get_category(category_name)
title = f'{category_name}'
return render_template('categories.html', title = title, category = category)
@main.route('/categories/technology')
def technology():
'''
method that returns the categories page
'''
technology = get_category('technology')
title = 'TECHNOLOGY'
return render_template('categories.html', title = title, technology = technology)
@main.route('/categories/sport')
def sports():
'''
method that returns the categories page
'''
sports = get_category('sports')
title = 'SPORTS'
return render_template('categories.html', title = title, sports = sports)
@main.route('/categories/entertainment')
def entertainment():
'''
method that returns the categories page
'''
entertainment = get_category('entertainment')
title = 'ENTERTAINMENT'
return render_template('categories.html', title = title, entertainment = entertainment)
@main.route('/categories/business')
def business():
'''
method that returns the categories page
'''
business = get_category('business')
title = 'BUSINESS'
return render_template('categories.html', title = title, business = business)
@main.route('/search/<category_name>')
def search(category_name):
'''
View function to display the search results
'''
category_list = category.split(" ")
category_format = "+".join(category_list)
searched_news = search_movie(category_format)
title = f'search results for {category_name}'
return render_template('search.html',news = searched_news)
``` |
{
"source": "Josephshitandi/safe-boda-group",
"score": 3
} |
#### File: app/main/forms.py
```python
from flask_wtf import FlaskForm
from wtforms import StringField, TextAreaField, SubmitField, DateField, SelectField, PasswordField, IntegerField
from ..models import Comment, User # User import assumed here; the validators below reference it
from wtforms.validators import Required, Email, EqualTo, ValidationError
class CommentForm(FlaskForm):
comment = TextAreaField('Comment')
submit = SubmitField('Post a comment')
class RiderForm(FlaskForm):
email = StringField('Your Email Address',validators=[Required(),Email()])
ridername = StringField('Enter your ridername',validators = [Required()])
number_plate = StringField('Your number_plate',validators =[Required()])
motor_model = StringField('Your motor_model',validators =[Required()])
password = PasswordField('Password',validators = [Required(), EqualTo('password_confirm',message = 'Passwords must match')])
password_confirm = PasswordField('<PASSWORD> Passwords',validators = [Required()])
submit = SubmitField('Upload')
def validate_email(self,data_field):
if User.query.filter_by(email =data_field.data).first():
raise ValidationError('There is an account with that email')
def validate_ridername(self,data_field):
if User.query.filter_by(ridername = data_field.data).first():
raise ValidationError('That ridername is taken')
class UpdateProfile(FlaskForm):
bio = TextAreaField('Say something about yourself',validators=[Required()])
submit = SubmitField('Save')
class TaskForm(FlaskForm):
title = StringField('Title', validators=[Required()])
post = TextAreaField('Your Task', validators=[Required()])
submit = SubmitField('Task')
class BookForm(FlaskForm):
first_point = StringField('Enter your current Location')
second_point = StringField('Enter your Destination point')
mobile = IntegerField('Enter your Mobile number')
payment = SelectField(u'Payment Method', choices=[('Cash', 'Cash'), ('Mpesa', 'Mpesa'),('Bank', 'Bank')])
submit = SubmitField('Submit')
``` |
{
"source": "josephsieh/Halide",
"score": 3
} |
#### File: python_bindings/apps/blur.py
```python
import os, sys
from halide import *
OUT_DIMS = (1536, 2560)
def main():
input = ImageParam(UInt(16), 2, 'input')
x, y = Var('x'), Var('y')
blur_x = Func('blur_x')
blur_y = Func('blur_y')
blur_x[x,y] = (input[x,y]+input[x+1,y]+input[x+2,y])/3
blur_y[x,y] = (blur_x[x,y]+blur_x[x,y+1]+blur_x[x,y+2])/3
xi, yi = Var('xi'), Var('yi')
blur_y.tile(x, y, xi, yi, 8, 4).parallel(y).vectorize(xi, 8)
blur_x.compute_at(blur_y, x).vectorize(x, 8)
maxval = 255
in_image = Image(UInt(16), builtin_image('rgb.png'), scale=1.0) # Set scale to 1 so that we only use 0...255 of the UInt(16) range
eval_func = filter_image(input, blur_y, in_image, disp_time=True, out_dims = (OUT_DIMS[0]-8, OUT_DIMS[1]-8), times=5)
I = eval_func()
if len(sys.argv) >= 2:
I.save(sys.argv[1], maxval)
else:
I.show(maxval)
if __name__ == '__main__':
main()
``` |
{
"source": "JosephSilvermanArt/qLib",
"score": 2
} |
#### File: scripts/python/qlibattribmenu.py
```python
"""
*** History entry for related change (update date accordingly) ***
2019-11-13:
- Updated attribute popup menu(s) to use shared menu python code ([#899|https://github.com/qLab/qLib/issues/899])
*** Some Make-Me-Life-Easier Codez ***
# use "class" parameter to determine if point, prim, etc
# and return numeric attribs
#
import traceback
r = []
try:
import qlibattribmenu as qm
r = qm.buildAttribMenu(kwargs,
hou.pwd().parm("class").evalAsString(),
filter=qm.isNumeric )
except:
r = ["", ":("]
print traceback.format_exc()
return r
# list ALL attributes
# instead of "all" use "comp" or "component" to list all but detail attribs
#
import traceback
r = []
try:
import qlibattribmenu as qm
r = qm.buildAttribMenu(kwargs,
"all")
except:
r = ["", ":("]
print traceback.format_exc()
return r
# list per-prim and per-point attributes, of type int or string
#
import traceback
r = []
try:
import qlibattribmenu as qm
r = qm.buildAttribMenu(kwargs,
"prim point",
filter=lambda a: qm.isInt(a) or qm.isString(a) )
except:
r = ["", "couldn't build this menu :("]
print traceback.format_exc()
return r
"""
import hou
import re
import traceback
def buildAttribLabel(a, showClass=True):
"""Build an informative attrib label.
TODO:
- show unique values for ints too? not just strings?
"""
assert type(a) is hou.Attrib
csh = { "global": "detail", "point": "pt", "vertex": "vtx" }
had=hou.attribData
td = { had.String:'s', had.Int:'i', had.Float:'f' }
t = a.dataType()
ts = a.size()
ty = '?'
if t in td: ty = td[t]
if ts==3: ty='v'
if ts==4: ty='p'
ax=[]
if showClass:
c = re.search('[^.]+$', str(a.type()) ).group(0).lower()
if c in csh: c = csh[c]
ax.append(c)
q = a.qualifier()
if q and q!='':
ax.append(str(q).lower())
s = len(a.strings())
if s>0: ax.append('strings:%d' % s)
ax = ' (%s)' % ', '.join(ax) if len(ax) else ''
R = '%s@ %s%s' % (ty, a.name(), ax, )
return R
def buildAttribMenu(
kwargs, # regular hou kwargs
attribClass, # string or tuple
inputGeo=None, # either a hou.Geometry or None means function will try its best
filter=None, # filter function, taking a hou.Attrib object
showClass=None, # None, True or False, to override default decision
# if attribute class should be shown
):
"""Build an attribute popup menu based on various criteria.
"""
assert type(kwargs) is dict, "expected a valid kwargs dict"
assert type(attribClass) in (str, tuple, list, ), "invalid attribClass argument"
# auto-detect geometry input if necessary
#
if not inputGeo and kwargs and "node" in kwargs:
i = kwargs["node"].inputs()
if len(i):
inputGeo = i[0].geometry()
# process attribClass input
#
if type(attribClass) is str:
# support plain strings like "point primitive"
attribClass = tuple(attribClass.split())
if "all" in attribClass:
attribClass = ("point", "primitive", "vertex", "detail", )
if "comp" in attribClass or "component" in attribClass:
attribClass = ("point", "primitive", "vertex", )
if type(attribClass) is not tuple:
# this is intended for lists
attribClass = tuple(attribClass)
attribClass = tuple(sorted(attribClass))
# got inputGeo and attribClass (hopefully)
if not inputGeo:
raise hou.OperationFailed("Couldn't determine input geometry")
# collect attributes, filter and sort them
#
get_funcs = {"point": "pointAttribs",
"primitive": "primAttribs",
"prim": "primAttribs",
"vertex": "vertexAttribs",
"detail": "globalAttribs",
"global": "globalAttribs" }
show_class = len(attribClass)>1
if showClass:
show_class = showClass
R = []
add_sep = False
for c in attribClass:
# get them attributes
attribs = ()
if c in get_funcs:
attribs = inputGeo.__getattribute__(get_funcs[c])()
# filter them if required
if filter:
attribs = [ a for a in attribs if filter(a) ]
# sort 'em alphabetically
attribs = sorted(attribs, key = lambda a: a.name().lower())
# add menu separator between classes
if add_sep and len(attribs)>0:
R.append("_separator_")
R.append("")
for a in attribs:
R.append(a.name())
R.append(buildAttribLabel(a, showClass=show_class))
add_sep = True
return R
def isNumeric(hou_attrib):
"""Convenience filter function for numeric attributes (floats, ints, vectors).
"""
assert type(hou_attrib) is hou.Attrib, "invalid argument"
return \
hou_attrib.dataType()!=hou.attribData.String
def isNumber(hou_attrib):
"""Convenience filter function for numeric (int/float of size 1) attributes.
"""
assert type(hou_attrib) is hou.Attrib, "invalid argument"
return \
hou_attrib.dataType()!=hou.attribData.String and \
hou_attrib.size()==1
def isInt(hou_attrib):
"""Convenience filter function for integer attributes.
"""
assert type(hou_attrib) is hou.Attrib, "invalid argument"
return \
hou_attrib.dataType()==hou.attribData.Int and \
hou_attrib.size()==1
def isString(hou_attrib):
"""Convenience filter function for string attributes.
"""
assert type(hou_attrib) is hou.Attrib, "invalid argument"
return \
hou_attrib.dataType()==hou.attribData.String and \
hou_attrib.size()==1
def isIntOrString(hou_attrib):
"""Convenience filter function for int/string
(usually partition or piece) attributes.
"""
assert type(hou_attrib) is hou.Attrib, "invalid argument"
return isInt(hou_attrib) or isString(hou_attrib)
def isVector(hou_attrib):
"""Convenience filter function for numeric (int/float of size 3) attributes.
"""
assert type(hou_attrib) is hou.Attrib, "invalid argument"
return \
hou_attrib.dataType()!=hou.attribData.String and \
hou_attrib.size()==3
def isVector4(hou_attrib):
"""Convenience filter function for numeric (int/float of size 4) attributes.
"""
assert type(hou_attrib) is hou.Attrib, "invalid argument"
return \
hou_attrib.dataType()!=hou.attribData.String and \
hou_attrib.size()==4
``` |
{
"source": "josephslab/shared-code",
"score": 3
} |
#### File: shared-code/dnds_NeiGojoboriMethod/neiGojobori_dnds.py
```python
from Bio import SeqIO
from Bio.Seq import MutableSeq
from Bio.Seq import Seq
import sys
import numpy as np
# input for overall script, just fasta file of codon-aligned sequences
fastaInput = sys.argv[1]
# DNA alphabet, could be changed but probably not
alphabet = ["A", "T", "G", "C"]
### Determine expected number of nonsynonymous sites in a codon
# input: codon
# output: expected number of nonsynonymous sites for codon
def nonsynonymousSites(codon):
codon = MutableSeq(codon)
first = codon[0]
second = codon[1]
third = codon[2]
mutatedCodons = list()
# Generate a list of codons mutated at first position
alphabetMinusOne = [x for x in alphabet if x not in first]
# copy codon sequence, need [:] to avoid changing both original and copy
mutatedCodon = codon[:]
for letter in alphabetMinusOne:
mutatedCodon[0] = letter
mutatedCodons.append(mutatedCodon[:])
#print(mutatedCodon)
alphabetMinusOne = [x for x in alphabet if x not in second]
mutatedCodon = codon[:]
for letter in alphabetMinusOne:
mutatedCodon[1] = letter
mutatedCodons.append(mutatedCodon[:])
#print(mutatedCodon)
alphabetMinusOne = [x for x in alphabet if x not in third]
mutatedCodon = codon[:]
for letter in alphabetMinusOne:
mutatedCodon[2] = letter
mutatedCodons.append(mutatedCodon[:])
#print(mutatedCodon)
# Translate seq to identify nonsynonymous mutations. For every nonsynonymous mutation, add 1/3 to number of nonsynonymous sites
n = 0
for mutatedCodon in mutatedCodons:
if mutatedCodon.translate() != codon.translate():
n = n + (1/3)
return(n)
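# Illustrative check (not part of the original script): for the codon "TTT" (Phe),
# every change at the first or second position is nonsynonymous, while only 2 of the
# 3 changes at the third position are, so the function returns 1 + 1 + 2/3 = 8/3 (~2.67)
# expected nonsynonymous sites:
#   nonsynonymousSites("TTT")  # -> 2.666...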
### Count substitutions in a step-wise mutational pathway
# input: list of codons from a mutational pathway
# output: number of non-synonymous and synonymous differences in pathway
def countSubs(pathway):
aaList = [x.translate() for x in pathway]
nd = 0
sd = 0
for i in range(len(aaList) - 1):
if aaList[i] != aaList[i+1]:
nd = nd + 1
else:
sd = sd + 1
return([nd, sd])
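# Illustrative check (not part of the original script): the pathway
# CCG (Pro) -> CTG (Leu) -> CTA (Leu) contains one nonsynonymous and one synonymous
# change, so countSubs([Seq("CCG"), Seq("CTG"), Seq("CTA")]) should return [1, 1].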
### Determine distance between reference and sample codon
# input: two codons of same length
# output: distance between two codons (i.e. number of pairwise differences)
def codonDistance(codon1, codon2):
# make sequences mutable
codon1 = MutableSeq(codon1)
codon2 = MutableSeq(codon2)
# vectors to store outputs
distance = 0
nd = 0
sd = 0
mismatchPositions = list() # mismatches between codons
#calculate differences between two codons
for i in range(len(codon1)):
letter1 = codon1[i]
letter2 = codon2[i]
if letter1 != letter2:
distance = distance + 1
mismatchPositions.append(i)
# If the codons are identical, then there are no differences
if distance == 0:
return([nd, sd])
    # If codons differ by one basepair, there is either one synonymous or one nonsynonymous difference
if distance == 1:
if codon1.translate() == codon2.translate():
sd = sd + 1
return([nd, sd])
else:
nd = nd + 1
return([nd, sd])
# If codons differ by two basepairs, there are two possible step-wise mutational pathways
if distance == 2:
# construct pathway one
pathway = list()
interCodon = codon1[:] # intermediate codon
pathway.append(interCodon[:])
interCodon[mismatchPositions[0]] = codon2[mismatchPositions[0]]
pathway.append(interCodon[:])
interCodon[mismatchPositions[1]] = codon2[mismatchPositions[1]]
pathway.append(interCodon[:])
pathway1Count = countSubs(pathway) # count subs
#print(pathway)
# construct pathway two
pathway = list()
interCodon = codon1[:]
pathway.append(interCodon[:])
interCodon[mismatchPositions[1]] = codon2[mismatchPositions[1]]
pathway.append(interCodon[:])
interCodon[mismatchPositions[0]] = codon2[mismatchPositions[0]]
pathway.append(interCodon[:])
pathway2Count = countSubs(pathway) # count substitutions
#print(pathway)
# assume pathways equally probable, so average result
sum_list = [(a + b)/2 for a, b in zip(pathway1Count, pathway2Count)]
return(sum_list)
if distance == 3:
mutPos = [[0,1,2],[0,2,1],[1,2,0],[1,0,2],[2,1,0],[2,0,1]]
subCounts = list() # output counts of substitutions per pathway
for i in mutPos:
pathway = list()
interCodon = codon1[:] # intermediate codon
pathway.append(interCodon[:])
for j in i:
interCodon[j] = codon2[j]
pathway.append(interCodon[:])
subCounts.append(countSubs(pathway)) # count subs
#print(pathway)
#print(countSubs(pathway))
#print(subCounts)
#print(list(np.sum(subCounts, axis=0)/6))
#sum_list = [sum(x) for x in zip(*subCounts)]
# assume pathways are equally probable, so average result
return(list(np.sum(subCounts, axis=0)/6))
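# Illustrative check (not part of the original script): TTT (Phe) and GTA (Val)
# differ at two positions, so two pathways are averaged:
#   TTT -> GTT -> GTA gives 1 nonsynonymous + 1 synonymous change,
#   TTT -> TTA -> GTA gives 2 nonsynonymous changes,
# so codonDistance(Seq("TTT"), Seq("GTA")) should return [1.5, 0.5].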
### APPLY FUNCTIONS ###
# Load sequences
records = list(SeqIO.parse(fastaInput, "fasta"))
# Number of sequences
recNum = len(records)
# print header of output
print("\t".join(["reference", "sample", "N", "S", "Nd", "Sd", "pN", "pS", "dN", "dS", "dNdS"]))
# Loop over all pairs of sequences
for j in range((recNum - 1)):
for k in range(j+1, recNum):
refRecord = records[j]
samRecord = records[k]
refSeq = refRecord.seq
samSeq = samRecord.seq
samName = samRecord.name
refName = refRecord.name
# count number of codons in sequence
codonCount = len(refSeq)/3
# counts of nonsynonymous sites
N = list()
# loop over codons in reference sequence
for i in range(int(codonCount)):
codonRefSeq = refSeq[i*3:i*3+3]
N.append(nonsynonymousSites(codonRefSeq))
# sum over nonsynonymous sites of individual codons to get total
N = sum(N)
# calculate number of synonymous sites: total sites - nonsynonymous sites
S = 3*codonCount - N
# calculate number of differences between sequence pair
NdSd = list()
for i in range(int(codonCount)):
codonRefSeq = refSeq[i*3:i*3+3]
codonSamSeq = samSeq[i*3:i*3+3]
NdSd.append(codonDistance(codonRefSeq, codonSamSeq))
# Sum differences across codons to get total differences
NdSd = np.sum(NdSd, axis=0)
Nd = NdSd[0]
Sd = NdSd[1]
# Calculate proportion of differences
pN = Nd/N
pS = Sd/S
# Calculate substitution rates
dN = (-3/4)*np.log(1-(4/3)*pN)
dS = (-3/4)*np.log(1-(4/3)*pS)
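        # (dN and dS above apply the Jukes-Cantor-style correction
        # d = -(3/4) * ln(1 - (4/3) * p) used by Nei & Gojobori (1986).)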
# output final result
print("\t".join([refName, samName, str(N), str(S), str(Nd), str(Sd), str(pN), str(pS), str(dN), str(dS), str(dN/dS)]))
# Test countSubs function
#print(countSubs([Seq("CCG"), Seq("CTG"), Seq("CTA")]))
#print(countSubs([Seq("CCG"), Seq("CCA"), Seq("CTA")]))
#print(countSubs([Seq("AAA"), Seq("TAA"), Seq("TTA"), Seq("TTG")]))
# Test codonDistance
#print(codonDistance(Seq("AAA"), Seq("AAA"))) # no difference
#print(codonDistance(Seq("AAA"), Seq("TAA"))) # one nonsynonymous difference
#print(codonDistance(Seq("AAA"), Seq("AAG"))) # one synonymous difference
#print(codonDistance(Seq("TTT"), Seq("GTA"))) # two differences
#print(codonDistance(Seq("AAA"), Seq("TTG"))) # three differences
``` |
{
"source": "josephsmann/UnsupervisedDeepLearning-Pytorch",
"score": 3
} |
#### File: test/.ipynb_checkpoints/helperFunctions-checkpoint.py
```python
import sys
# Check if using python 2 or 3
if sys.version_info.major == 2:
import cPickle as pickle
else:
import pickle
def get_experiment_data(exp_num):
'''
    This function loads a precomputed file, 'Experiment_data.pkl', that contains the subjects that will
    be used for training/testing purposes. We have data for 10 experiments. The data in the pkl contains
    70% of the data for training purposes and the remaining 30% for testing purposes. The data was divided
    in a stratified way, i.e. the proportion of positive samples is the same in both the training and test sets.
Inputs:
- exp_num: it is a number between 0 - 9 indicating the experiment ID
Outputs:
    - train_info: A matrix of dimensions 1254 x 5. The first column contains a string with the subject_ID.
The rest of the columns are: label, age, gender and TIV
- test_info: Similar to train_info, but for testing purposes. The size of this matrix is 538 x 5
Example:
train_info, test_info = get_experiment_data(0)
print(train_info[0, :])
['PAC2018_0592' 2 51 2 1658.0]
print(test_info[0, :])
['PAC2018_1807' 1 27 2 1620.0]
'''
# Load the pickle files with the data to use as train and test sets in a given experiment
matrices = pickle.load(open('Experiment_data.pkl', 'rb'))
# Extract the train and test matrices
train_matrices = matrices[0]
test_matrices = matrices[1]
train_info = train_matrices[exp_num]
test_info = test_matrices[exp_num]
return train_info, test_info
``` |
{
"source": "josephsnyder/CastXML-python-distributions",
"score": 2
} |
#### File: CastXML-python-distributions/tests/test_distribution.py
```python
import os
import pytest
from path import Path
DIST_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '../dist'))
def _check_castxml_install(virtualenv, tmpdir):
expected_version = "0.3.4"
for executable_name in ["castxml"]:
output = virtualenv.run(
"%s --version" % executable_name, capture=True).splitlines()[0]
assert output == "%s version %s" % (executable_name, expected_version)
@pytest.mark.skipif(not Path(DIST_DIR).exists(), reason="dist directory does not exist")
def test_wheel(virtualenv, tmpdir):
wheels = Path(DIST_DIR).files(match="*.whl")
if not wheels:
pytest.skip("no wheel available")
assert len(wheels) == 1
print(wheels)
virtualenv.run("pip install %s" % wheels[0])
_check_castxml_install(virtualenv, tmpdir)
``` |
{
"source": "josephsnyder/cppwg",
"score": 3
} |
#### File: cppwg/input/info_helper.py
```python
import os
class CppInfoHelper(object):
"""
This attempts to automatically fill in some class info based on
simple analysis of the source tree.
"""
def __init__(self, module_info):
self.module_info = module_info
self.class_dict = {}
self.setup_class_dict()
def setup_class_dict(self):
# For convenience collect class info in a dict keyed by name
for eachClassInfo in self.module_info.class_info:
self.class_dict[eachClassInfo.name] = eachClassInfo
def expand_templates(self, feature_info, feature_type):
template_substitutions = feature_info.hierarchy_attribute_gather('template_substitutions')
if len(template_substitutions) == 0:
return
# Skip any features with pre-defined template args
no_template = feature_info.template_args is None
source_path = feature_info.source_file_full_path
if not (no_template and source_path is not None):
return
if not os.path.exists(source_path):
return
f = open(source_path)
lines = (line.rstrip() for line in f) # Remove blank lines
lines = list(line for line in lines if line)
for idx, eachLine in enumerate(lines):
stripped_line = eachLine.replace(" ", "")
if idx+1 < len(lines):
stripped_next = lines[idx+1].replace(" ", "")
else:
continue
for idx, eachSub in enumerate(template_substitutions):
template_args = eachSub['replacement']
template_string = eachSub['signature']
cleaned_string = template_string.replace(" ", "")
if cleaned_string in stripped_line:
feature_string = feature_type + feature_info.name
feature_decl_next = feature_string + ":" in stripped_next
feature_decl_whole = feature_string == stripped_next
if feature_decl_next or feature_decl_whole:
feature_info.template_args = template_args
break
f.close()
def do_custom_template_substitution(self, feature_info):
pass
```
#### File: cppwg/writers/header_collection_writer.py
```python
import os
import ntpath
class CppHeaderCollectionWriter():
"""
This class manages generation of the header collection file for
parsing by CastXML
"""
def __init__(self, package_info, wrapper_root):
self.wrapper_root = wrapper_root
self.package_info = package_info
self.header_file_name = "wrapper_header_collection.hpp"
self.hpp_string = ""
self.class_dict = {}
self.free_func_dict = {}
for eachModule in self.package_info.module_info:
for eachClassInfo in eachModule.class_info:
self.class_dict[eachClassInfo.name] = eachClassInfo
for eachFuncInfo in eachModule.free_function_info:
self.free_func_dict[eachFuncInfo.name] = eachFuncInfo
def add_custom_header_code(self):
"""
Any custom header code goes here
"""
pass
def write_file(self):
"""
The actual write
"""
if not os.path.exists(self.wrapper_root + "/"):
os.makedirs(self.wrapper_root + "/")
file_path = self.wrapper_root + "/" + self.header_file_name
hpp_file = open(file_path, 'w')
hpp_file.write(self.hpp_string)
hpp_file.close()
def should_include_all(self):
"""
Return whether all source files in the module source locs should be included
"""
for eachModule in self.package_info.module_info:
if eachModule.use_all_classes or eachModule.use_all_free_functions:
return True
return False
def write(self):
"""
Main method for generating the header file output string
"""
hpp_header_dict = {'package_name': self.package_info.name}
hpp_header_template = """\
#ifndef {package_name}_HEADERS_HPP_
#define {package_name}_HEADERS_HPP_
// Includes
"""
self.hpp_string = hpp_header_template.format(**hpp_header_dict)
# Now our own includes
if self.should_include_all():
for eachFile in self.package_info.source_hpp_files:
include_name = ntpath.basename(eachFile)
self.hpp_string += '#include "' + include_name + '"\n'
else:
for eachModule in self.package_info.module_info:
for eachClassInfo in eachModule.class_info:
if eachClassInfo.source_file is not None:
self.hpp_string += '#include "' + eachClassInfo.source_file + '"\n'
elif eachClassInfo.source_file_full_path is not None:
include_name = ntpath.basename(eachClassInfo.source_file_full_path)
self.hpp_string += '#include "' + include_name + '"\n'
for eachFuncInfo in eachModule.free_function_info:
if eachFuncInfo.source_file_full_path is not None:
include_name = ntpath.basename(eachFuncInfo.source_file_full_path)
self.hpp_string += '#include "' + include_name + '"\n'
# Add the template instantiations
self.hpp_string += "\n// Instantiate Template Classes \n"
for eachModule in self.package_info.module_info:
for eachClassInfo in eachModule.class_info:
full_names = eachClassInfo.get_full_names()
if len(full_names) == 1:
continue
prefix = "template class "
for eachTemplateName in full_names:
self.hpp_string += prefix + eachTemplateName.replace(" ","") + ";\n"
# Add typdefs for nice naming
self.hpp_string += "\n// Typedef for nicer naming\n"
self.hpp_string += "namespace cppwg{ \n"
for eachModule in self.package_info.module_info:
for eachClassInfo in eachModule.class_info:
full_names = eachClassInfo.get_full_names()
if len(full_names) == 1:
continue
short_names = eachClassInfo.get_short_names()
for idx, eachTemplateName in enumerate(full_names):
short_name = short_names[idx]
typdef_prefix = "typedef " + eachTemplateName.replace(" ","") + " "
self.hpp_string += typdef_prefix + short_name + ";\n"
self.hpp_string += "}\n"
self.add_custom_header_code()
self.hpp_string += "\n#endif // {}_HEADERS_HPP_\n".format(self.package_info.name)
self.write_file()
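# For illustration only (hypothetical package "pkg" with a templated class Foo
# declared in Foo.hpp, instantiated as Foo<2> and Foo<3>, and assuming
# get_short_names() yields Foo2 and Foo3), the file written by write() would
# look roughly like:
#
#   #ifndef pkg_HEADERS_HPP_
#   #define pkg_HEADERS_HPP_
#   // Includes
#   #include "Foo.hpp"
#   // Instantiate Template Classes
#   template class Foo<2>;
#   template class Foo<3>;
#   // Typedef for nicer naming
#   namespace cppwg{
#   typedef Foo<2> Foo2;
#   typedef Foo<3> Foo3;
#   }
#   #endif // pkg_HEADERS_HPP_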
``` |
{
"source": "josephsnyder/gitlab-runner-auth",
"score": 2
} |
#### File: gitlab-runner-auth/tests/test_generate_config.py
```python
import os
import re
import socket
import toml
import json
import stat
import pytest
from unittest.mock import MagicMock, patch
from urllib.parse import parse_qs
from httmock import HTTMock, urlmatch, response
from pytest import fixture
from pathlib import Path
from tempfile import TemporaryDirectory
from gitlab_runner_config import (
Runner,
Executor,
GitLabClientManager,
SyncException,
identifying_tags,
generate_tags,
owner_only_permissions,
load_executors,
create_runner,
generate_runner_config,
)
base_path = os.getcwd()
@fixture
def instance():
return "main"
@fixture
def top_level_call_patchers():
create_runner_patcher = patch("gitlab_runner_config.create_runner")
client_manager_patcher = patch("gitlab_runner_config.GitLabClientManager")
runner = MagicMock()
runner.to_dict.return_value = {}
create_runner_mock = create_runner_patcher.start()
create_runner_mock.return_value = runner
yield [create_runner_mock, client_manager_patcher.start()]
create_runner_mock.stop()
client_manager_patcher.stop()
@fixture
def established_prefix(instance, tmp_path):
prefix = tmp_path
prefix.chmod(0o700)
instance_config_template_file = prefix / "config.template.{}.toml".format(instance)
instance_config_template_file.write_text(
"""
[client_configs]
foo = "bar"
""".strip()
)
executor_dir = prefix / "main"
executor_dir.mkdir()
executor_dir.chmod(0o700)
return (prefix, instance)
@fixture
def executor_configs():
configs = []
url_tmpl = "http://localhost/{}"
executor_tmpl = "{}-executor"
for desc in ["foo", "bar"]:
configs.append(
{
"description": "runner-{}".format(desc),
"url": url_tmpl.format(desc),
"executor": executor_tmpl.format(desc),
}
)
return configs
@fixture
def client_configs():
configs = []
url_tmpl = "http://localhost/{}"
for server in ["foo", "bar"]:
configs.append(
{
"registration_token": server,
"url": url_tmpl.format(server),
"personal_access_token": server,
}
)
return configs
@fixture
def runner_config(client_configs):
return {"name": "foo", "client_configs": client_configs}
@fixture
def executor_tomls_dir(executor_configs):
td = TemporaryDirectory()
for config in executor_configs:
with open(td.name / Path(config["description"] + ".toml"), "w") as f:
toml.dump(config, f)
yield Path(td.name)
td.cleanup()
@fixture
def executor(instance, executor_configs):
yield Executor(instance, executor_configs)
@fixture
def url_matchers():
runners = [{"id": 1}, {"id": 2}]
@urlmatch(path=r".*\/api\/v4\/runners/all$", method="get")
def runner_list_resp(url, request):
query = parse_qs(url.query)
# All calls by this utility must be qualified by tags
assert "tag_list" in query
tag_list = query["tag_list"].pop().split(",")
assert len(tag_list)
assert len(tag_list) == len(set(tag_list))
headers = {"content-type": "application/json"}
content = json.dumps(runners)
return response(200, content, headers, None, 5, request)
@urlmatch(path=r".*\/api\/v4\/runners\/\d+$", method="get")
def runner_detail_resp(url, request):
runner_id = url.path.split("/")[-1]
headers = {"content-type": "application/json"}
content = json.dumps(
{
"id": runner_id,
"token": "token",
"description": "runner-{}".format(runner_id),
}
)
return response(200, content, headers, None, 5, request)
@urlmatch(path=r".*\/api\/v4\/runners\/\d+$", method="delete")
def runner_delete_resp(url, request):
runner_id = url.path.split("/")[-1]
headers = {"content-type": "application/json"}
content = json.dumps(
{
"id": runner_id,
"token": "token",
"description": "runner-{}".format(runner_id),
}
)
return response(204, content, headers, None, 5, request)
@urlmatch(path=r".*\/api\/v4\/runners$", method="post")
def runner_registration_resp(url, request):
# All registered runners must be qualified by tags
body = json.loads(request.body.decode())
assert "tag_list" in body
tag_list = body["tag_list"].split(",")
assert len(tag_list)
assert len(tag_list) == len(set(tag_list))
headers = {"content-type": "application/json"}
# TODO id from request
content = json.dumps(
{
"id": 3,
"token": "token",
"description": "runner-{}".format(3),
}
)
return response(201, content, headers, None, 5, request)
return (
runner_list_resp,
runner_detail_resp,
runner_delete_resp,
runner_registration_resp,
)
def test_identifying_tags(instance):
hostname = socket.gethostname()
trimmed_hostname = re.sub(r"\d", "", hostname)
with pytest.raises(ValueError, match="instance name cannot be"):
identifying_tags("managed")
with pytest.raises(ValueError, match="instance name cannot be"):
identifying_tags(hostname)
with pytest.raises(ValueError, match="instance name cannot be"):
identifying_tags(trimmed_hostname)
def test_generate_tags(instance):
tags = generate_tags(instance)
hostname = socket.gethostname()
assert instance in tags
assert hostname in tags
# test finding a resource manager
with TemporaryDirectory() as td:
def get_tags(tag_schema=None):
schema = None
if tag_schema:
with open(tag_schema) as fh:
schema = json.load(fh)
tags = generate_tags(instance, executor_type="batch", tag_schema=schema)
return tags
# test schema runs without error
get_tags(tag_schema="tag_schema.json")
def test_generate_tags_env(instance):
env_name = "TEST_TAG"
missing_env_name = "TEST_MISSING_TAG"
env_val = "tag"
os.environ[env_name] = env_val
tags = generate_tags(instance, env=[env_name, missing_env_name])
assert env_val in tags
assert missing_env_name not in tags
def test_owner_only_permissions():
with TemporaryDirectory() as td:
d = Path(td)
os.chmod(d, 0o700)
assert owner_only_permissions(d)
os.chmod(d, 0o750)
assert not owner_only_permissions(d)
os.chmod(d, 0o705)
assert not owner_only_permissions(d)
os.chmod(d, 0o755)
assert not owner_only_permissions(d)
class TestExecutor:
def test_normalize(self, executor):
executor.normalize()
assert all(c.get("description") for c in executor.configs)
assert all(c.get("tags") for c in executor.configs)
def test_missing_token(self, executor):
url = executor.configs[0]["url"]
assert len(executor.missing_token(url)) == 1
for e in executor.missing_token(url):
e["token"] = "token"
assert len(executor.missing_token(url)) == 0
def test_missing_required_config(self, executor):
assert len(executor.missing_required_config()) == len(executor.configs)
def test_load_executors(self, instance, executor_configs, executor_tomls_dir):
executor = load_executors(instance, executor_tomls_dir)
assert len(executor.configs) == len(executor_configs)
def test_load_executors_no_files(self, instance, executor_tomls_dir):
with TemporaryDirectory() as td:
executor = load_executors(instance, Path(td))
assert len(executor.configs) == 0
    def test_load_executors_extra_file(self, instance, executor_configs, executor_tomls_dir):
with open(executor_tomls_dir / "bat", "w") as fh:
fh.write("bat")
# loaded executors should only consider .toml files
executor = load_executors(instance, executor_tomls_dir)
assert len(executor.configs) == len(executor_configs)
class TestRunner:
def test_create(self, instance, runner_config, executor_tomls_dir):
runner = create_runner(runner_config, instance, executor_tomls_dir)
assert runner_config.get("client_configs") is not None
assert runner.config is not None
assert runner.executor is not None
def test_empty(self, instance, runner_config):
runner = Runner(runner_config, Executor(instance, []))
assert runner.empty()
def test_to_dict(self, instance, runner_config, executor_tomls_dir):
runner = create_runner(runner_config, instance, executor_tomls_dir)
runner_dict = runner.to_dict()
assert type(runner_dict.get("runners")) == list
assert toml.dumps(runner_dict)
class TestGitLabClientManager:
def setup_method(self, method):
self.runner = MagicMock()
def test_init(self, instance, client_configs):
client_manager = GitLabClientManager(instance, client_configs)
assert client_manager.clients
assert client_manager.registration_tokens
def test_sync_runner_state(self, instance, client_configs, url_matchers):
client_manager = GitLabClientManager(instance, client_configs)
with HTTMock(*url_matchers):
client_manager.sync_runner_state(self.runner)
self.runner.executor.add_token.assert_called()
def test_sync_runner_state_delete(self, instance, client_configs, url_matchers):
client_manager = GitLabClientManager(instance, client_configs)
self.runner.executor.add_token.side_effect = KeyError("Missing key!")
with HTTMock(*url_matchers):
client_manager.sync_runner_state(self.runner)
self.runner.executor.add_token.assert_called()
def test_sync_runner_state_missing(self, instance, client_configs, url_matchers):
client_manager = GitLabClientManager(instance, client_configs)
self.runner.executor.missing_token.return_value = [
{"description": "bat", "tags": ["bat", "bam"]}
]
with HTTMock(*url_matchers):
client_manager.sync_runner_state(self.runner)
self.runner.executor.missing_token.assert_called()
for config in client_configs:
self.runner.executor.missing_token.assert_any_call(config["url"])
self.runner.executor.add_token.assert_called()
class TestGitLabRunnerConfig:
def test_generate_runner_config(self, established_prefix, top_level_call_patchers):
generate_runner_config(*established_prefix)
def test_generate_runner_config_invalid_prefix_perms(
self, established_prefix, top_level_call_patchers
):
prefix, instance = established_prefix
prefix.chmod(0o777)
with pytest.raises(SystemExit):
generate_runner_config(prefix, instance)
def test_generate_runner_config_invalid_executor_perms(
self, established_prefix, top_level_call_patchers
):
prefix, instance = established_prefix
executor_dir = prefix / instance
executor_dir.chmod(0o777)
with pytest.raises(SystemExit):
generate_runner_config(prefix, instance)
def test_generate_runner_config_missing(
self, established_prefix, top_level_call_patchers
):
prefix, instance = established_prefix
instance_config_template_file = prefix / "config.template.{}.toml".format(
instance
)
instance_config_template_file.unlink()
with pytest.raises(SystemExit):
generate_runner_config(prefix, instance)
def test_generate_runner_sync_error(
self, established_prefix, top_level_call_patchers
):
manager = MagicMock()
manager.sync_runner_state.side_effect = SyncException("oh no!")
_, client_manager_patcher = top_level_call_patchers
client_manager_patcher.return_value = manager
with pytest.raises(SystemExit):
generate_runner_config(*established_prefix)
``` |
{
"source": "josephsnyder/spack-infrastructure",
"score": 2
} |
#### File: images/gh-gl-sync/test_SpackCIBridge.py
```python
import os
from unittest.mock import create_autospec, patch, Mock
import SpackCIBridge
class AttrDict(dict):
def __init__(self, iterable, **kwargs):
super(AttrDict, self).__init__(iterable, **kwargs)
for key, value in iterable.items():
if isinstance(value, dict):
self.__dict__[key] = AttrDict(value)
else:
self.__dict__[key] = value
def test_list_github_prs(capfd):
"""Test the list_github_prs method."""
github_pr_response = [
AttrDict({
"number": 1,
"merge_commit_sha": "aaaaaaaa",
"head": {
"ref": "improve_docs",
"sha": "shafoo"
},
"base": {
"sha": "shabar"
}
}),
AttrDict({
"number": 2,
"merge_commit_sha": "bbbbbbbb",
"head": {
"ref": "fix_test",
"sha": "shagah"
},
"base": {
"sha": "shafaz"
}
}),
]
gh_repo = Mock()
gh_repo.get_pulls.return_value = github_pr_response
bridge = SpackCIBridge.SpackCIBridge()
bridge.py_gh_repo = gh_repo
import subprocess
actual_run_method = subprocess.run
mock_run_return = Mock()
mock_run_return.stdout = b"Merge shagah into ccccccc"
subprocess.run = create_autospec(subprocess.run, return_value=mock_run_return)
retval = bridge.list_github_prs()
subprocess.run = actual_run_method
github_prs = retval[0]
assert github_prs["pr_strings"] == ["pr1_improve_docs", "pr2_fix_test"]
assert github_prs["merge_commit_shas"] == ["aaaaaaaa", "bbbbbbbb"]
assert gh_repo.get_pulls.call_count == 1
out, err = capfd.readouterr()
expected = """Skip pushing pr2_fix_test because GitLab already has HEAD shagah
All Open PRs:
pr1_improve_docs
pr2_fix_test
Filtered Open PRs:
pr1_improve_docs
"""
assert out == expected
def test_list_github_protected_branches(capfd):
"""Test the list_github_protected_branches method and verify that we do not
push main_branch commits when it already has a pipeline running."""
github_branches_response = [
AttrDict({
"name": "alpha",
"protected": True
}),
AttrDict({
"name": "develop",
"protected": True
}),
AttrDict({
"name": "feature",
"protected": False
}),
AttrDict({
"name": "main",
"protected": True
}),
AttrDict({
"name": "release",
"protected": True
}),
AttrDict({
"name": "wip",
"protected": False
}),
]
gh_repo = Mock()
gh_repo.get_branches.return_value = github_branches_response
bridge = SpackCIBridge.SpackCIBridge(main_branch="develop")
bridge.currently_running_sha = "aaaaaaaa"
bridge.py_gh_repo = gh_repo
protected_branches = bridge.list_github_protected_branches()
assert protected_branches == ["alpha", "main", "release"]
expected = "Skip pushing develop because it already has a pipeline running (aaaaaaaa)"
out, err = capfd.readouterr()
assert expected in out
def test_get_synced_prs(capfd):
"""Test the get_synced_prs method."""
bridge = SpackCIBridge.SpackCIBridge()
bridge.get_gitlab_pr_branches = lambda *args: None
bridge.gitlab_pr_output = b"""
gitlab/github/pr1_example
gitlab/github/pr2_another_try
"""
assert bridge.get_synced_prs() == ["pr1_example", "pr2_another_try"]
out, err = capfd.readouterr()
assert out == "Synced PRs:\n pr1_example\n pr2_another_try\n"
def test_get_prs_to_delete(capfd):
"""Test the get_prs_to_delete method."""
open_prs = ["pr3_try_this", "pr4_new_stuff"]
synced_prs = ["pr1_first_try", "pr2_different_approach", "pr3_try_this"]
bridge = SpackCIBridge.SpackCIBridge()
closed_refspecs = bridge.get_prs_to_delete(open_prs, synced_prs)
assert closed_refspecs == [":github/pr1_first_try", ":github/pr2_different_approach"]
out, err = capfd.readouterr()
assert out == "Synced Closed PRs:\n pr1_first_try\n pr2_different_approach\n"
def test_get_open_refspecs():
"""Test the get_open_refspecs and update_refspecs_for_protected_branches methods."""
open_prs = {
"pr_strings": ["pr1_this", "pr2_that"],
"merge_commit_shas": ["aaaaaaaa", "bbbbbbbb"],
"base_shas": ["shafoo", "shabar"],
"head_shas": ["shabaz", "shagah"],
"backlogged": [False, False]
}
bridge = SpackCIBridge.SpackCIBridge()
open_refspecs, fetch_refspecs = bridge.get_open_refspecs(open_prs)
assert open_refspecs == [
"github/pr1_this:github/pr1_this",
"github/pr2_that:github/pr2_that"
]
assert fetch_refspecs == [
"+aaaaaaaa:refs/remotes/github/pr1_this",
"+bbbbbbbb:refs/remotes/github/pr2_that"
]
protected_branches = ["develop", "master"]
bridge.update_refspecs_for_protected_branches(protected_branches, open_refspecs, fetch_refspecs)
assert open_refspecs == [
"github/pr1_this:github/pr1_this",
"github/pr2_that:github/pr2_that",
"github/develop:github/develop",
"github/master:github/master",
]
assert fetch_refspecs == [
"+aaaaaaaa:refs/remotes/github/pr1_this",
"+bbbbbbbb:refs/remotes/github/pr2_that",
"+refs/heads/develop:refs/remotes/github/develop",
"+refs/heads/master:refs/remotes/github/master"
]
def test_ssh_agent():
"""Test starting & stopping ssh-agent."""
def check_pid(pid):
"""Local function to check if a PID is running or not."""
try:
os.kill(pid, 0)
except OSError:
return False
else:
return True
# Read in our private key.
# Don't worry, this key was just generated for testing.
# It's not actually used for anything.
key_file = open("test_key.base64", "r")
ssh_key_base64 = key_file.read()
key_file.close()
# Start ssh-agent.
bridge = SpackCIBridge.SpackCIBridge()
bridge.setup_ssh(ssh_key_base64)
assert "SSH_AGENT_PID" in os.environ
pid = int(os.environ["SSH_AGENT_PID"])
assert check_pid(pid)
# Run our cleanup function to kill the ssh-agent.
SpackCIBridge.SpackCIBridge.cleanup()
# Make sure it's not running any more.
# The loop/sleep is to give the process a little time to shut down.
import time
for i in range(10):
if check_pid(pid):
time.sleep(0.01)
assert not check_pid(pid)
# Prevent atexit from trying to kill it again.
del os.environ["SSH_AGENT_PID"]
def test_get_pipeline_api_template():
"""Test that pipeline_api_template get constructed properly."""
bridge = SpackCIBridge.SpackCIBridge(gitlab_host="https://gitlab.spack.io", gitlab_project="zack/my_test_proj")
template = bridge.pipeline_api_template
assert template[0:84] == "https://gitlab.spack.io/api/v4/projects/zack%2Fmy_test_proj/pipelines?updated_after="
assert template.endswith("&ref={1}")
def test_dedupe_pipelines():
"""Test the dedupe_pipelines method."""
input = [
{
"id": 1,
"sha": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"ref": "github/pr1_readme",
"status": "failed",
"created_at": "2020-08-26T17:26:30.216Z",
"updated_at": "2020-08-26T17:26:36.807Z",
"web_url": "https://gitlab.spack.io/zack/my_test_proj/pipelines/1"
},
{
"id": 2,
"sha": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"ref": "github/pr1_readme",
"status": "passed",
"created_at": "2020-08-27T17:27:30.216Z",
"updated_at": "2020-08-27T17:27:36.807Z",
"web_url": "https://gitlab.spack.io/zack/my_test_proj/pipelines/2"
},
{
"id": 3,
"sha": "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
"ref": "github/pr2_todo",
"status": "failed",
"created_at": "2020-08-26T17:26:30.216Z",
"updated_at": "2020-08-26T17:26:36.807Z",
"web_url": "https://gitlab.spack.io/zack/my_test_proj/pipelines/3"
},
]
expected = {
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": {
"id": 2,
"sha": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"ref": "github/pr1_readme",
"status": "passed",
"created_at": "2020-08-27T17:27:30.216Z",
"updated_at": "2020-08-27T17:27:36.807Z",
"web_url": "https://gitlab.spack.io/zack/my_test_proj/pipelines/2"
},
"bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb": {
"id": 3,
"sha": "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
"ref": "github/pr2_todo",
"status": "failed",
"created_at": "2020-08-26T17:26:30.216Z",
"updated_at": "2020-08-26T17:26:36.807Z",
"web_url": "https://gitlab.spack.io/zack/my_test_proj/pipelines/3"
},
}
bridge = SpackCIBridge.SpackCIBridge()
assert bridge.dedupe_pipelines(input) == expected
def test_make_status_for_pipeline():
"""Test the make_status_for_pipeline method."""
bridge = SpackCIBridge.SpackCIBridge()
pipeline = {"web_url": "foo"}
status = bridge.make_status_for_pipeline(pipeline)
assert status == {}
pipeline["status"] = "canceled"
status = bridge.make_status_for_pipeline(pipeline)
assert status == {}
test_cases = [
{
"input": "created",
"state": "pending",
"description": "Pipeline has been created",
},
{
"input": "waiting_for_resource",
"state": "pending",
"description": "Pipeline is waiting for resources",
},
{
"input": "preparing",
"state": "pending",
"description": "Pipeline is preparing",
},
{
"input": "pending",
"state": "pending",
"description": "Pipeline is pending",
},
{
"input": "running",
"state": "pending",
"description": "Pipeline is running",
},
{
"input": "manual",
"state": "pending",
"description": "Pipeline is running manually",
},
{
"input": "scheduled",
"state": "pending",
"description": "Pipeline is scheduled",
},
{
"input": "failed",
"state": "error",
"description": "Pipeline failed",
},
{
"input": "skipped",
"state": "failure",
"description": "Pipeline was skipped",
},
{
"input": "success",
"state": "success",
"description": "Pipeline succeeded",
},
]
for test_case in test_cases:
pipeline["status"] = test_case["input"]
status = bridge.make_status_for_pipeline(pipeline)
assert status["state"] == test_case["state"]
assert status["description"] == test_case["description"]
class FakeResponse:
status: int
data: bytes
def __init__(self, *, data: bytes):
self.data = data
def read(self):
self.status = 201 if self.data is not None else 404
return self.data
def close(self):
pass
def test_post_pipeline_status(capfd):
"""Test the post_pipeline_status method."""
open_prs = {
"pr_strings": ["pr1_readme"],
"merge_commit_shas": ["aaaaaaaa"],
"base_shas": ["shafoo"],
"head_shas": ["shabaz"],
"backlogged": [False]
}
gh_commit = Mock()
gh_commit.create_status.return_value = AttrDict({"state": "error"})
gh_repo = Mock()
gh_repo.get_commit.return_value = gh_commit
bridge = SpackCIBridge.SpackCIBridge(gitlab_host="https://gitlab.spack.io",
gitlab_project="zack/my_test_proj",
github_project="zack/my_test_proj")
bridge.py_gh_repo = gh_repo
os.environ["GITHUB_TOKEN"] = "my_github_token"
mock_data = b'''[
{
"id": 1,
"sha": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"ref": "github/pr1_readme",
"status": "failed",
"created_at": "2020-08-26T17:26:30.216Z",
"updated_at": "2020-08-26T17:26:36.807Z",
"web_url": "https://gitlab.spack.io/zack/my_test_proj/pipelines/1"
}
]'''
with patch('urllib.request.urlopen', return_value=FakeResponse(data=mock_data)) as mock_urlopen:
bridge.post_pipeline_status(open_prs, [])
assert mock_urlopen.call_count == 2
assert gh_repo.get_commit.call_count == 1
assert gh_commit.create_status.call_count == 1
out, err = capfd.readouterr()
expected_content = " pr1_readme -> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\n"
assert expected_content in out
del os.environ["GITHUB_TOKEN"]
def test_pipeline_status_backlogged_by_main_branch(capfd):
"""Test the post_pipeline_status method for a PR that is backlogged because its base is being tested."""
open_prs = {
"pr_strings": ["pr1_readme"],
"merge_commit_shas": ["aaaaaaaa"],
"base_shas": ["shafoo"],
"head_shas": ["shabaz"],
"backlogged": ["base"]
}
gh_commit = Mock()
gh_commit.create_status.return_value = AttrDict({"state": "pending"})
gh_repo = Mock()
gh_repo.get_commit.return_value = gh_commit
bridge = SpackCIBridge.SpackCIBridge(gitlab_host="https://gitlab.spack.io",
gitlab_project="zack/my_test_proj",
github_project="zack/my_test_proj",
main_branch="develop")
bridge.py_gh_repo = gh_repo
os.environ["GITHUB_TOKEN"] = "my_github_token"
currently_running_url = "https://gitlab.spack.io/zack/my_test_proj/pipelines/4"
bridge.currently_running_url = currently_running_url
expected_desc = "waiting for base develop commit pipeline to succeed"
bridge.post_pipeline_status(open_prs, [])
assert gh_commit.create_status.call_count == 1
gh_commit.create_status.assert_called_with(
state="pending",
context="ci/gitlab-ci",
description=expected_desc,
target_url=currently_running_url
)
out, err = capfd.readouterr()
expected_content = """Posting backlogged status to the following:
pr1_readme -> shabaz"""
assert expected_content in out
del os.environ["GITHUB_TOKEN"]
def test_pipeline_status_backlogged_by_checks(capfd):
"""Test the post_pipeline_status method for a PR that is backlogged because of a required check."""
"""Helper function to parameterize the test"""
def verify_backlogged_by_checks(capfd, checks_return_value):
github_pr_response = [
AttrDict({
"number": 1,
"merge_commit_sha": "aaaaaaaa",
"head": {
"ref": "improve_docs",
"sha": "shafoo"
},
"base": {
"sha": "shabar"
}
}),
]
gh_commit = Mock()
gh_commit.get_check_runs.return_value = checks_return_value
gh_commit.create_status.return_value = AttrDict({"state": "pending"})
gh_repo = Mock()
gh_repo.get_pulls.return_value = github_pr_response
gh_repo.get_commit.return_value = gh_commit
bridge = SpackCIBridge.SpackCIBridge(gitlab_host="https://gitlab.spack.io",
gitlab_project="zack/my_test_proj",
github_project="zack/my_test_proj",
prereq_checks=["style"])
bridge.py_gh_repo = gh_repo
bridge.currently_running_sha = None
import subprocess
actual_run_method = subprocess.run
mock_run_return = Mock()
mock_run_return.stdout = b"Merge shagah into ccccccc"
subprocess.run = create_autospec(subprocess.run, return_value=mock_run_return)
all_open_prs, open_prs = bridge.list_github_prs()
subprocess.run = actual_run_method
os.environ["GITHUB_TOKEN"] = "<PASSWORD>"
expected_desc = open_prs["backlogged"][0]
assert expected_desc == "waiting for style check to succeed"
bridge.post_pipeline_status(open_prs, [])
assert gh_commit.create_status.call_count == 1
gh_commit.create_status.assert_called_with(
state="pending",
context="ci/gitlab-ci",
description=expected_desc,
target_url="",
)
out, err = capfd.readouterr()
expected_content = """Posting backlogged status to the following:
pr1_improve_docs -> shafoo"""
assert expected_content in out
del os.environ["GITHUB_TOKEN"]
# Verify backlogged status when the required check hasn't passed successfully.
checks_return_value = [
AttrDict({
"name": "style",
"status": "in_progress",
"conclusion": None,
})
]
verify_backlogged_by_checks(capfd, checks_return_value)
# Verify backlogged status when the required check is missing from the API's response.
checks_api_response = []
verify_backlogged_by_checks(capfd, checks_api_response)
``` |
{
"source": "josephsnyder/tech_journal",
"score": 2
} |
#### File: josephsnyder/tech_journal/setup.py
```python
from setuptools import setup, Command, find_packages
from subprocess import check_call
import os
class BuildUICommand(Command):
description = 'Build the standalone front end and include it in the sdist'
user_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
dest = os.path.join(
os.path.abspath('girder_tech_journal'),
'external_web_client')
install = ['yarn', 'install']
build = ['yarn', 'build']
copy = ['cp', '-r', 'dist', dest]
commands = [install, build, copy]
os.chdir(os.path.abspath('girder-tech-journal-gui'))
for cmd in commands:
check_call(cmd)
with open('README.rst') as readme:
long_description = readme.read()
setup(
name='girder-tech-journal',
version='1.0.0',
description='A Girder plugin for a Technical Journal',
long_description=long_description,
    long_description_content_type='text/x-rst',
url='https://github.com/girder/tech_journal',
maintainer='Kitware, Inc.',
maintainer_email='<EMAIL>',
include_package_data=True,
packages=find_packages(),
install_requires=[
'girder>=3',
'celery',
'girder-oauth',
'girder-worker[girder]',
'girder-worker-utils'
],
entry_points={
'girder.plugin': [
'tech_journal = girder_tech_journal:TechJournalPlugin'
],
'girder_worker_plugins': [
'tech_journal_tasks = girder_tech_journal.tasks:TechJournalTasks',
]
},
cmdclass={
'build_ui': BuildUICommand
}
)
``` |
{
"source": "josephsnyder/VistA-1",
"score": 2
} |
#### File: Python/Pexpect/winpexpect.py
```python
import os
import sys
import pywintypes
import itertools
import random
import time
import signal
import types
from Queue import Queue, Empty
from threading import Thread, Lock
from pexpect import spawn, ExceptionPexpect, EOF, TIMEOUT
from subprocess import list2cmdline
from msvcrt import open_osfhandle
from win32api import (SetHandleInformation, GetCurrentProcess, OpenProcess,
PostMessage, SendMessage,
CloseHandle, GetCurrentThread, STD_INPUT_HANDLE)
from win32pipe import CreateNamedPipe, ConnectNamedPipe
from win32process import (STARTUPINFO, CreateProcess, CreateProcessAsUser,
GetExitCodeProcess, TerminateProcess, ExitProcess,
GetWindowThreadProcessId)
from win32event import WaitForSingleObject, INFINITE
from win32security import (LogonUser, OpenThreadToken, OpenProcessToken,
GetTokenInformation, TokenUser, ACL_REVISION_DS,
ConvertSidToStringSid, ConvertStringSidToSid,
SECURITY_ATTRIBUTES, SECURITY_DESCRIPTOR, ACL,
LookupAccountName)
from win32file import CreateFile, ReadFile, WriteFile, INVALID_HANDLE_VALUE, FILE_SHARE_READ
from win32console import (GetStdHandle, KEY_EVENT, ENABLE_WINDOW_INPUT, ENABLE_MOUSE_INPUT,
ENABLE_ECHO_INPUT, ENABLE_LINE_INPUT, ENABLE_PROCESSED_INPUT,
ENABLE_MOUSE_INPUT)
from win32con import (HANDLE_FLAG_INHERIT, STARTF_USESTDHANDLES,
STARTF_USESHOWWINDOW, CREATE_NEW_CONSOLE, SW_HIDE,
PIPE_ACCESS_DUPLEX, WAIT_OBJECT_0, WAIT_TIMEOUT,
LOGON32_PROVIDER_DEFAULT, LOGON32_LOGON_INTERACTIVE,
TOKEN_ALL_ACCESS, GENERIC_READ, GENERIC_WRITE,
OPEN_EXISTING, PROCESS_ALL_ACCESS, MAXIMUM_ALLOWED,
LEFT_CTRL_PRESSED,RIGHT_CTRL_PRESSED,
WM_CHAR, VK_RETURN, WM_KEYDOWN, WM_KEYUP)
from win32gui import EnumWindows
from winerror import (ERROR_PIPE_BUSY, ERROR_HANDLE_EOF, ERROR_BROKEN_PIPE,
ERROR_ACCESS_DENIED)
from pywintypes import error as WindowsError
# Compatibility with Python < 2.6
try:
from collections import namedtuple
except ImportError:
def namedtuple(name, fields):
d = dict(zip(fields, [None]*len(fields)))
return type(name, (object,), d)
# Compatibility with Python 3
if sys.version_info[0] == 3:
_WriteFile = WriteFile
def WriteFile(handle, s):
return _WriteFile(handle, s.encode('ascii'))
_ReadFile = ReadFile
def ReadFile(handle, size):
err, data = _ReadFile(handle, size)
return err, data.decode('ascii')
def split_command_line(cmdline):
"""Split a command line into a command and its arguments according to
the rules of the Microsoft C runtime."""
# http://msdn.microsoft.com/en-us/library/ms880421
s_free, s_in_quotes, s_in_escape = range(3)
state = namedtuple('state',
('current', 'previous', 'escape_level', 'argument'))
state.current = s_free
state.previous = s_free
state.argument = []
result = []
for c in itertools.chain(cmdline, ['EOI']): # Mark End of Input
if state.current == s_free:
if c == '"':
state.current = s_in_quotes
state.previous = s_free
elif c == '\\':
state.current = s_in_escape
state.previous = s_free
state.escape_count = 1
elif c in (' ', '\t', 'EOI'):
if state.argument or state.previous != s_free:
result.append(''.join(state.argument))
del state.argument[:]
else:
state.argument.append(c)
elif state.current == s_in_quotes:
if c == '"':
state.current = s_free
state.previous = s_in_quotes
elif c == '\\':
state.current = s_in_escape
state.previous = s_in_quotes
state.escape_count = 1
else:
state.argument.append(c)
elif state.current == s_in_escape:
if c == '\\':
state.escape_count += 1
elif c == '"':
nbs, escaped_delim = divmod(state.escape_count, 2)
state.argument.append(nbs * '\\')
if escaped_delim:
state.argument.append('"')
state.current = state.previous
else:
if state.previous == s_in_quotes:
state.current = s_free
else:
state.current = s_in_quotes
state.previous = s_in_escape
else:
state.argument.append(state.escape_count * '\\')
state.argument.append(c)
state.current = state.previous
state.previous = s_in_escape
if state.current != s_free:
raise ValueError, 'Illegal command line.'
return result
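# For illustration (not part of the original module):
#   split_command_line(r'"C:\Program Files\app.exe" -o out.txt')
# returns ['C:\\Program Files\\app.exe', '-o', 'out.txt'], i.e. quoted arguments
# keep their spaces and the surrounding quotes are stripped.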
join_command_line = list2cmdline
def which(command):
path = os.environ.get('Path', '')
path = path.split(os.pathsep)
pathext = os.environ.get('Pathext', '.exe;.com;.bat;.cmd')
pathext = pathext.split(os.pathsep)
for dir in itertools.chain([''], path):
for ext in itertools.chain([''], pathext):
fname = os.path.join(dir, command) + ext
if os.access(fname, os.X_OK):
return fname
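# For illustration (the result depends on the local installation):
#   which('notepad') might return r'C:\Windows\system32\notepad.exe', while
#   which('no-such-tool') returns None.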
def _read_header(handle, bufsize=4096):
"""INTERNAL: read a stub header from a handle."""
header = ''
while '\n\n' not in header:
err, data = ReadFile(handle, bufsize)
header += data
return header
def _parse_header(header):
"""INTERNAL: pass the stub header format."""
    parsed = {}
    key = None
    lines = header.split('\n')
    for line in lines:
        if not line:
            break
        p1 = line.find('=')
        if p1 == -1:
            if line.startswith(' '): # Continuation of the previous value
                if key is None:
                    raise ValueError, 'Continuation on first line.'
                parsed[key] += '\n' + line[1:]
                continue
            else:
                raise ValueError, 'Expecting key=value format'
        key = line[:p1]
        parsed[key] = line[p1+1:]
return parsed
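# For illustration (matching the replies written by _stub below):
#   _parse_header('status=ok\npid=1234\n\n')
# returns {'status': 'ok', 'pid': '1234'}.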
def _quote_header(s):
"""INTENAL: quote a string to be used in a stub header."""
return s.replace('\n', '\n ')
def _get_current_sid():
"""INTERNAL: get current SID."""
try:
token = OpenThreadToken(GetCurrentThread(), MAXIMUM_ALLOWED, True)
except WindowsError:
token = OpenProcessToken(GetCurrentProcess(), MAXIMUM_ALLOWED)
sid = GetTokenInformation(token, TokenUser)[0]
return sid
def _lookup_sid(domain, username):
"""INTERNAL: lookup the SID for a user in a domain."""
return LookupAccountName(domain, username)[0]
def _create_security_attributes(*sids, **kwargs):
"""INTERNAL: create a SECURITY_ATTRIBUTES structure."""
inherit = kwargs.get('inherit', 0)
access = kwargs.get('access', GENERIC_READ|GENERIC_WRITE)
attr = SECURITY_ATTRIBUTES()
attr.bInheritHandle = inherit
desc = SECURITY_DESCRIPTOR()
dacl = ACL()
for sid in sids:
dacl.AddAccessAllowedAce(ACL_REVISION_DS, access, sid)
desc.SetSecurityDescriptorDacl(True, dacl, False)
attr.SECURITY_DESCRIPTOR = desc
return attr
def _create_named_pipe(template, sids=None):
"""INTERNAL: create a named pipe."""
if sids is None:
sattrs = None
else:
sattrs = _create_security_attributes(*sids)
for i in range(100):
name = template % random.randint(0, 999999)
try:
pipe = CreateNamedPipe(name, PIPE_ACCESS_DUPLEX,
0, 1, 1, 1, 100000, sattrs)
SetHandleInformation(pipe, HANDLE_FLAG_INHERIT, 0)
except WindowsError, e:
if e.winerror != ERROR_PIPE_BUSY:
raise
else:
return pipe, name
raise ExceptionPexpect, 'Could not create pipe after 100 attempts.'
def _stub(cmd_name, stdin_name, stdout_name, stderr_name):
"""INTERNAL: Stub process that will start up the child process."""
# Open the 4 pipes (command, stdin, stdout, stderr)
cmd_pipe = CreateFile(cmd_name, GENERIC_READ|GENERIC_WRITE, 0, None,
OPEN_EXISTING, 0, None)
SetHandleInformation(cmd_pipe, HANDLE_FLAG_INHERIT, 1)
stdin_pipe = CreateFile(stdin_name, GENERIC_READ, 0, None,
OPEN_EXISTING, 0, None)
SetHandleInformation(stdin_pipe, HANDLE_FLAG_INHERIT, 1)
stdout_pipe = CreateFile(stdout_name, GENERIC_WRITE, 0, None,
OPEN_EXISTING, 0, None)
SetHandleInformation(stdout_pipe, HANDLE_FLAG_INHERIT, 1)
stderr_pipe = CreateFile(stderr_name, GENERIC_WRITE, 0, None,
OPEN_EXISTING, 0, None)
SetHandleInformation(stderr_pipe, HANDLE_FLAG_INHERIT, 1)
# Learn what we need to do..
header = _read_header(cmd_pipe)
input = _parse_header(header)
if 'command' not in input or 'args' not in input:
ExitProcess(2)
# http://msdn.microsoft.com/en-us/library/ms682499(VS.85).aspx
startupinfo = STARTUPINFO()
startupinfo.dwFlags |= STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW
startupinfo.hStdInput = stdin_pipe
startupinfo.hStdOutput = stdout_pipe
startupinfo.hStdError = stderr_pipe
startupinfo.wShowWindow = SW_HIDE
# Grant access so that our parent can open its grandchild.
if 'parent_sid' in input:
mysid = _get_current_sid()
parent = ConvertStringSidToSid(input['parent_sid'])
sattrs = _create_security_attributes(mysid, parent,
access=PROCESS_ALL_ACCESS)
else:
sattrs = None
try:
res = CreateProcess(input['command'], input['args'], sattrs, None,
True, CREATE_NEW_CONSOLE, os.environ, os.getcwd(),
startupinfo)
except WindowsError, e:
message = _quote_header(str(e))
WriteFile(cmd_pipe, 'status=error\nmessage=%s\n\n' % message)
ExitProcess(3)
else:
pid = res[2]
# Pass back results and exit
err, nbytes = WriteFile(cmd_pipe, 'status=ok\npid=%s\n\n' % pid)
ExitProcess(0)
class ChunkBuffer(object):
"""A buffer that allows a chunk of data to be read in smaller reads."""
def __init__(self, chunk=''):
self.add(chunk)
def add(self, chunk):
self.chunk = chunk
self.offset = 0
def read(self, size):
data = self.chunk[self.offset:self.offset+size]
self.offset += size
return data
def __len__(self):
return max(0, len(self.chunk)-self.offset)
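# For illustration: b = ChunkBuffer('abcdef'); b.read(4) -> 'abcd';
# len(b) -> 2; b.read(4) -> 'ef'; len(b) -> 0.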
def run (command, timeout=-1, withexitstatus=False, events=None, extra_args=None, logfile=None, cwd=None, env=None, stub=None):
"""
This function runs the given command; waits for it to finish; then
returns all output as a string. STDERR is included in output. If the full
path to the command is not given then the path is searched.
Note that lines are terminated by CR/LF (\\r\\n) combination even on
UNIX-like systems because this is the standard for pseudo ttys. If you set
'withexitstatus' to true, then run will return a tuple of (command_output,
exitstatus). If 'withexitstatus' is false then this returns just
command_output.
The run() function can often be used instead of creating a spawn instance.
For example, the following code uses spawn::
from pexpect import *
child = spawn('scp foo <EMAIL>:.')
child.expect ('(?i)password')
child.sendline (mypassword)
    The previous code can be replaced with the following::
from pexpect import *
run ('scp foo <EMAIL>:.', events={'(?i)password': mypassword})
Examples
========
Start the apache daemon on the local machine::
from pexpect import *
run ("/usr/local/apache/bin/apachectl start")
Check in a file using SVN::
from pexpect import *
run ("svn ci -m 'automatic commit' my_file.py")
Run a command and capture exit status::
from pexpect import *
(command_output, exitstatus) = run ('ls -l /bin', withexitstatus=1)
Tricky Examples
===============
The following will run SSH and execute 'ls -l' on the remote machine. The
password '<PASSWORD>' will be sent if the '(?i)password' pattern is ever seen::
run ("ssh <EMAIL>@machine.<EMAIL>.com 'ls -l'", events={'(?i)password':'<PASSWORD>'})
This will start mencoder to rip a video from DVD. This will also display
progress ticks every 5 seconds as it runs. For example::
from pexpect import *
def print_ticks(d):
print d['event_count'],
run ("mencoder dvd://1 -o video.avi -oac copy -ovc copy", events={TIMEOUT:print_ticks}, timeout=5)
The 'events' argument should be a dictionary of patterns and responses.
    Whenever one of the patterns is seen in the command output, run() will send the
associated response string. Note that you should put newlines in your
string if Enter is necessary. The responses may also contain callback
    functions. Any callback is a function that takes a dictionary as an argument.
The dictionary contains all the locals from the run() function, so you can
access the child spawn object or any other variable defined in run()
(event_count, child, and extra_args are the most useful). A callback may
return True to stop the current run process otherwise run() continues until
the next event. A callback may also return a string which will be sent to
    the child. 'extra_args' is not used directly by run(). It provides a way to
    pass data to a callback function through the locals dictionary passed to
    that callback. """
    if timeout == -1:
        child = winspawn(command, maxread=2000, logfile=logfile, cwd=cwd, env=env, stub=stub)
    else:
        child = winspawn(command, timeout=timeout, maxread=2000, logfile=logfile, cwd=cwd, env=env, stub=stub)
if events is not None:
patterns = events.keys()
responses = events.values()
else:
patterns=None # We assume that EOF or TIMEOUT will save us.
responses=None
child_result_list = []
event_count = 0
while 1:
try:
index = child.expect (patterns)
if isinstance(child.after, basestring):
child_result_list.append(child.before + child.after)
else: # child.after may have been a TIMEOUT or EOF, so don't cat those.
child_result_list.append(child.before)
if isinstance(responses[index], basestring):
child.send(responses[index])
elif isinstance(responses[index], types.FunctionType):
callback_result = responses[index](locals())
#sys.stdout.flush()
if isinstance(callback_result, basestring):
child.send(callback_result)
elif callback_result:
child.expect(EOF)
break
else:
child.terminate()
raise TypeError ('The callback must be a string or function type.')
event_count = event_count + 1
except TIMEOUT, e:
child_result_list.append(child.before)
child.terminate()
break
except EOF, e:
child_result_list.append(child.before)
child.close()
break
child_result = ''.join(child_result_list)
if withexitstatus:
child.wait()
return (child_result, child.exitstatus)
else:
return child_result
class winspawn(spawn):
"""A version of pexpect.spawn for the Windows platform. """
# The Windows version of spawn is quite different when compared to the
# Posix version.
#
# The first difference is that it's not possible on Windows to select()
# on a file descriptor that corresponds to a file or a pipe. Therefore,
# to do non-blocking I/O, we need to use threads.
#
# Secondly, there is no way to pass /only/ the file descriptors
# corresponding to the redirected stdin/out/err to the child. Either all
# inheritable file descriptors are passed, or none. We solve this by
# indirectly executing our child via a stub for which we close all file
# descriptors. The stub communicates back to us via a named pipe.
#
# Finally, Windows does not have ptys. It does have the concept of a
# "Console" though but it's much less sophisticated. This code runs the
# child in a new console by passing the flag CREATE_NEW_CONSOLE to
# CreateProcess(). We create a new console for our child because this
# way it cannot interfere with the current console, and it is also
# possible to run the main program without a console (e.g. a Windows
# service).
#
# NOTE:
    # Some applications inspect the type of their input handle. If the input
    # handle is not a console stdin, the child process may disable interactive
    # mode. For example, to run Python in interactive mode, do the following:
    #     child = winspawn('python', ['-i'])
    # The '-i' option forces Python into interactive mode.
#
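    #
    # A minimal usage sketch (illustrative only; assumes a Windows host with
    # the pywin32 extensions available):
    #
    #   child = winspawn('cmd.exe')
    #   child.expect('>')
    #   child.sendline('echo hello')
    #   child.expect('hello')
    #   child.terminate()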
pipe_buffer = 4096
pipe_template = r'\\.\pipe\winpexpect-%06d'
def __init__(self, command, args=[], timeout=30, maxread=2000,
searchwindowsize=None, logfile=None, cwd=None, env=None,
username=None, domain=None, password=<PASSWORD>, stub=None):
"""Constructor."""
self.username = username
self.domain = domain
self.password = password
self.stub = stub
self.child_hwnd = None
self.child_handle = None
self.child_output = Queue()
self.user_input = Queue()
self.chunk_buffer = ChunkBuffer()
self.stdout_handle = None
self.stdout_eof = False
self.stdout_reader = None
self.stderr_handle = None
self.stderr_eof = False
self.stderr_reader = None
self.stdin_reader = None # stdin of parent console
self.stdin_handle = None # stdin of parent console
self.interrupted = False
super(winspawn, self).__init__(command, args, timeout=timeout,
maxread=maxread, searchwindowsize=searchwindowsize,
logfile=logfile, cwd=cwd, env=env)
def __del__(self):
try:
self.terminate()
except WindowsError:
pass
def _spawn(self, command, args=None):
"""Start the child process. If args is empty, command will be parsed
according to the rules of the MS C runtime, and args will be set to
the parsed args."""
if args:
args = args[:] # copy
args.insert(0, command)
else:
args = split_command_line(command)
command = args[0]
self.command = command
self.args = args
command = which(self.command)
if command is None:
raise ExceptionPexpect, 'Command not found: %s' % self.command
args = join_command_line(self.args)
# Create the pipes
sids = [_get_current_sid()]
if self.username and self.password:
sids.append(_lookup_sid(self.domain, self.username))
cmd_pipe, cmd_name = _create_named_pipe(self.pipe_template, sids)
stdin_pipe, stdin_name = _create_named_pipe(self.pipe_template, sids)
stdout_pipe, stdout_name = _create_named_pipe(self.pipe_template, sids)
stderr_pipe, stderr_name = _create_named_pipe(self.pipe_template, sids)
startupinfo = STARTUPINFO()
startupinfo.dwFlags |= STARTF_USESHOWWINDOW
startupinfo.wShowWindow = SW_HIDE
if self.stub == None or not getattr(sys, 'frozen', False):
# python = os.path.join(sys.exec_prefix, 'python.exe')
python = sys.executable
self_dir = os.path.normpath(os.path.dirname(os.path.abspath(__file__)))
pycmd = 'import sys; sys.path.insert(0, r"%s"); import winpexpect; winpexpect._stub(r"%s", r"%s", r"%s", r"%s")' \
% (self_dir, cmd_name, stdin_name, stdout_name, stderr_name)
pyargs = join_command_line([python, '-c', pycmd])
else:
python = self.stub
pyargs = join_command_line([python, cmd_name, stdin_name, stdout_name, stderr_name])
# Create a new token or run as the current process.
if self.username and self.password:
token = LogonUser(self.username, self.domain, self.password,
LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT)
res = CreateProcessAsUser(token, python, pyargs, None, None,
False, CREATE_NEW_CONSOLE, self.env,
self.cwd, startupinfo)
else:
token = None
res = CreateProcess(python, pyargs, None, None, False,
CREATE_NEW_CONSOLE, self.env, self.cwd,
startupinfo)
child_handle = res[0]
res[1].Close() # don't need thread handle
ConnectNamedPipe(cmd_pipe)
ConnectNamedPipe(stdin_pipe)
ConnectNamedPipe(stdout_pipe)
ConnectNamedPipe(stderr_pipe)
# Tell the stub what to do and wait for it to exit
WriteFile(cmd_pipe, 'command=%s\n' % command)
WriteFile(cmd_pipe, 'args=%s\n' % args)
if token:
parent_sid = ConvertSidToStringSid(_get_current_sid())
WriteFile(cmd_pipe, 'parent_sid=%s\n' % str(parent_sid))
WriteFile(cmd_pipe, '\n')
header = _read_header(cmd_pipe)
output = _parse_header(header)
if output['status'] != 'ok':
m = 'Child did not start up correctly. '
m += output.get('message', '')
raise ExceptionPexpect, m
self.pid = int(output['pid'])
self.child_handle = OpenProcess(PROCESS_ALL_ACCESS, False, self.pid)
WaitForSingleObject(child_handle, INFINITE)
# Start up the I/O threads
self.child_fd = open_osfhandle(stdin_pipe.Detach(), 0) # for pexpect
self.stdout_handle = stdout_pipe
self.stdout_reader = Thread(target=self._child_reader,
args=(self.stdout_handle,))
self.stdout_reader.start()
self.stderr_handle = stderr_pipe
self.stderr_reader = Thread(target=self._child_reader,
args=(self.stderr_handle,))
self.stderr_reader.start()
# find the handle of the child console window
find_hwnds = []
def cb_comparewnd (hwnd, lparam):
_, pid = GetWindowThreadProcessId(hwnd)
if pid == self.pid:
find_hwnds.append(hwnd)
return True
tmfind = time.time()
while True:
EnumWindows(cb_comparewnd, None)
if find_hwnds:
self.child_hwnd = find_hwnds[0]
break
if time.time() - tmfind > self.timeout:
raise ExceptionPexpect, 'Did not find child console window'
self.terminated = False
self.closed = False
def terminate(self, force=False):
"""Terminate the child process. This also closes all the file
descriptors."""
if self.child_handle is None or self.terminated:
return
self.__terminate(force)
self.close()
self.wait()
self.terminated = True
def close(self):
"""Close all communications channels with the child."""
if self.closed:
return
self.interrupted = True
if self.stdin_reader:
CloseHandle(self.stdin_handle)
self.stdin_reader.join()
os.close(self.child_fd)
CloseHandle(self.stdout_handle)
CloseHandle(self.stderr_handle)
# Now the threads are ready to be joined.
self.stdout_reader.join()
self.stderr_reader.join()
self.closed = True
def wait(self, timeout=None):
"""Wait until the child exits. If timeout is not specified this
blocks indefinately. Otherwise, timeout specifies the number of
seconds to wait."""
if self.exitstatus is not None:
return
if timeout is None:
timeout = INFINITE
else:
timeout = 1000 * timeout
ret = WaitForSingleObject(self.child_handle, timeout)
if ret == WAIT_TIMEOUT:
raise TIMEOUT, 'Timeout exceeded in wait().'
self.exitstatus = GetExitCodeProcess(self.child_handle)
return self.exitstatus
def isalive(self):
"""Return True if the child is alive, False otherwise."""
if self.exitstatus is not None:
return False
ret = WaitForSingleObject(self.child_handle, 0)
if ret == WAIT_OBJECT_0:
self.exitstatus = GetExitCodeProcess(self.child_handle)
return False
return True
def kill(self, signo):
"""The signal.CTRL_C_EVENT and signal.CTRL_BREAK_EVENT signals is
avaiable under windows from Python3.2. Any other value for sig will
cause the process to be unconditionally killed by the TerminateProcess
API,"""
if sys.version_info[0] == 3 and sys.version_info[1] >= 2:
super().kill(signo)
else:
raise ExceptionPexpect, 'Signals are not available on Windows'
def __terminate(self, force=False):
"""This forces a child process to terminate. It starts nicely with
signal.CTRL_C_EVENT and signal.CTRL_BREAK_EVENT. If "force" is True
then moves onto TerminateProcess. This returns True if the child
was terminated. This returns False if the child could not be terminated.
For python earlier than 3.2, force parameter will be ignored and
TerminateProcess will be always used"""
if not self.isalive():
return True
if sys.version_info[0] == 3 and sys.version_info[1] >= 2:
try:
self.kill(signal.CTRL_C_EVENT)
time.sleep(self.delayafterterminate)
if not self.isalive():
return True
self.kill(signal.CTRL_BREAK_EVENT)
time.sleep(self.delayafterterminate)
if not self.isalive():
return True
if force:
# any value other than signal.CTRL_C_EVENT and signal.CTRL_BREAK_EVENT
# will unconditionally kill the process via the TerminateProcess API
self.kill(123)
time.sleep(self.delayafterterminate)
return (not self.isalive())
return False
except OSError as e:
# I think there are kernel timing issues that sometimes cause
# this to happen. I think isalive() reports True, but the
# process is dead to the kernel.
# Make one last attempt to see if the kernel is up to date.
time.sleep(self.delayafterterminate)
return (not self.isalive())
else:
try:
TerminateProcess(self.child_handle, 1)
time.sleep(self.delayafterterminate)
return (not self.isalive())
except WindowsError, e:
# ERROR_ACCESS_DENIED (also) happens when the child has already
# exited.
return (e.winerror == ERROR_ACCESS_DENIED and not self.isalive())
def direct_send(self, s):
"""Some subprocess is using the getche() to get the input, the most
common case is the password input. The getche() doesn't listen at
the stdin. So the send() doesn't work on this case. Here we will send
the string to the console window by windows message: WM_KEYDOWN,
WM_KEYUP, WM_CHAR.
There is another way available to implement the direct-send function.
That is attach the child console from the stub process and write the
console input directly. Here is the implement steps:
1. In the stub process add below code and don't exit the stub process.
def _string2records(s):
records = []
for c in s:
rec = win32console.PyINPUT_RECORDType(KEY_EVENT)
rec.KeyDown = True
rec.RepeatCount = 1
rec.Char = c
rec.VirtualKeyCode = ord(c)
records.append(rec)
rec = win32console.PyINPUT_RECORDType(KEY_EVENT)
rec.KeyDown = False
rec.RepeatCount = 1
rec.Char = c
rec.VirtualKeyCode = ord(c)
records.append(rec)
return records
while True:
header = _read_header(cmd_pipe)
input = _parse_header(header)
if input['command'] == 'send':
try:
win32console.AttachConsole(pid)
s = input['string']
stdin_handle = GetStdHandle(STD_INPUT_HANDLE)
records = _string2records(s)
nrecords = stdin_handle.WriteConsoleInput(records)
win32console.FreeConsole()
except WindowsError as e:
message = _quote_header(str(e))
WriteFile(cmd_pipe,
'status=error\nmessage=%s\n\n' % message)
else:
WriteFile(cmd_pipe,
'status=ok\nnbytes=%d\n\n' % nrecords)
2. The stub executable must be win32gui type, using "pythonw.exe"
instead of "python.exe"
3. direct_send function can be implemented as below:
WriteFile(self.stub_pipe, 'command=send\nstring=%s\n\n' % s)
header = _read_header(self.stub_pipe)
output = _parse_header(header)
if output['status'] != 'ok':
m = 'send string failed: '
m += output.get('message', '')
raise ExceptionPexpect(m)
4. This approach cannot send CRLF (the reason is unknown). To send
CRLF we still need SendMessage/PostMessage, as direct_sendline does.
In the end the Windows-message solution was chosen simply because it
is much simpler than the attach-console solution.
"""
self._input_log(s)
for c in s:
PostMessage(self.child_hwnd, WM_CHAR, ord(c), 1)
def direct_sendline(self, s):
self.direct_send(s)
self._input_log('\r\n')
PostMessage(self.child_hwnd, WM_KEYDOWN, VK_RETURN, 0x001C0001)
PostMessage(self.child_hwnd, WM_KEYUP, VK_RETURN, 0xC01C0001)
def interact(self, escape_character = chr(29), input_filter = None, output_filter = None):
# Flush the buffer.
self.stdin_reader = Thread(target=self._stdin_reader)
self.stdin_reader.start()
self.interrupted = False
try:
while self.isalive():
data = self._interact_read(self.stdin_handle)
if data != None:
if input_filter: data = input_filter(data)
i = data.rfind(escape_character)
if i != -1:
data = data[:i]
os.write(self.child_fd, data.encode('ascii'))
break
os.write(self.child_fd, data.encode('ascii'))
data = self._interact_read(self.child_fd)
if data != None:
if output_filter: data = output_filter(data)
self._output_log(data)
if sys.stdout not in (self.logfile, self.logfile_read):
# interactive mode, the child output will be always output to stdout
sys.stdout.write(data)
# child exited, read all the remainder output
while self.child_output.qsize():
handle, status, data = self.child_output.get(block=False)
if status != 'data':
break
self._output_log(data)
if sys.stdout not in (self.logfile, self.logfile_read):
sys.stdout.write(data)
except KeyboardInterrupt:
self.interrupted = True
self.terminate()
return
self.close()
def _output_log(self, data):
if self.logfile is not None:
self.logfile.write (data)
self.logfile.flush()
if self.logfile_read is not None:
self.logfile_read.write(data)
self.logfile_read.flush()
def _input_log(self, data):
if self.logfile is not None:
self.logfile.write (data)
self.logfile.flush()
if self.logfile_send is not None:
self.logfile_send.write (data)
self.logfile_send.flush()
def _interact_read(self, fd):
"""This is used by the interact() method.
"""
data = None
try:
if fd == self.stdin_handle:
data = self.user_input.get(block=False)
else:
handle, status, data = self.child_output.get(timeout=0.1)
if status == 'eof':
self._set_eof(handle)
raise EOF, 'End of file in interact_read().'
elif status == 'error':
self._set_eof(handle)
raise OSError, data
except Exception as e:
data = None
return data
def _stdin_reader(self):
"""INTERNAL: Reader thread that reads stdin for user interaction"""
self.stdin_handle = GetStdHandle(STD_INPUT_HANDLE)
self.stdin_handle.SetConsoleMode(ENABLE_LINE_INPUT|ENABLE_ECHO_INPUT|ENABLE_MOUSE_INPUT|
ENABLE_WINDOW_INPUT|ENABLE_PROCESSED_INPUT)
# Remove ENABLE_PROCESSED_INPUT from this mode to handle ctrl-c ourselves
try:
while not self.interrupted:
ret = WaitForSingleObject(self.stdin_handle, 1000)
if ret == WAIT_OBJECT_0:
records = self.stdin_handle.PeekConsoleInput(1)
rec = records[0]
if rec.EventType == KEY_EVENT:
if not rec.KeyDown or ord(rec.Char) == 0:
self.stdin_handle.FlushConsoleInputBuffer()
continue
else:
# discard the events: FOCUS_EVENT/WINDOW_BUFFER_SIZE_EVENT/MENU_EVENT,
self.stdin_handle.FlushConsoleInputBuffer()
continue
err, data = ReadFile(self.stdin_handle, self.maxread)
#print('read finished:', [hex(ord(i)) for i in data], err)
self.user_input.put(data)
except Exception as e:
pass
def _child_reader(self, handle):
"""INTERNAL: Reader thread that reads stdout/stderr of the child
process."""
status = 'data'
while not self.interrupted:
try:
err, data = ReadFile(handle, self.maxread)
assert err == 0 # not expecting error w/o overlapped io
except WindowsError, e:
if e.winerror == ERROR_BROKEN_PIPE:
status = 'eof'
data = ''
else:
status = 'error'
data = e.winerror
self.child_output.put((handle, status, data))
if status != 'data':
break
def _set_eof(self, handle):
"""INTERNAL: mark a file handle as end-of-file."""
if handle == self.stdout_handle:
self.stdout_eof = True
elif handle == self.stderr_handle:
self.stderr_eof = True
def read_nonblocking(self, size=1, timeout=-1):
"""INTERNAL: Non blocking read."""
if len(self.chunk_buffer):
return self.chunk_buffer.read(size)
if self.stdout_eof and self.stderr_eof:
assert self.child_output.qsize() == 0
return ''
if timeout == -1:
timeout = self.timeout
try:
handle, status, data = self.child_output.get(timeout=timeout)
except Empty:
raise TIMEOUT, 'Timeout exceeded in read_nonblocking().'
if status == 'data':
self.chunk_buffer.add(data)
elif status == 'eof':
self._set_eof(handle)
raise EOF, 'End of file in read_nonblocking().'
elif status == 'error':
self._set_eof(handle)
raise OSError, data
buf = self.chunk_buffer.read(size)
self._output_log(buf)
return buf
```
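A minimal usage sketch for the winspawn class above; this is only a sketch, assuming the module is importable as winpexpect on a Windows host with pywin32 installed and that ping.exe is on the PATH.
```python
# Hedged usage sketch for winspawn (Windows only, pywin32 required; assumes the
# module above is importable as winpexpect and that ping.exe is on the PATH).
from winpexpect import winspawn

def ping_localhost():
    # Spawn a hidden child console; winspawn drives it through named pipes.
    child = winspawn('ping', ['-n', '1', '127.0.0.1'], timeout=30)
    try:
        # expect() and the "before" buffer are inherited from the pexpect base class.
        child.expect('Packets:')
        print child.before
    finally:
        # terminate() closes the pipe handles and joins the reader threads.
        child.terminate()

if __name__ == '__main__':
    ping_localhost()
```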
#### File: VistA-1/Scripts/PopulatePatchesByPackage.py
```python
from __future__ import with_statement
import sys
import os
import csv
# append this module in the sys.path at run time
curDir = os.path.dirname(os.path.abspath(__file__))
if curDir not in sys.path:
sys.path.append(curDir)
from LoggerManager import logger, initConsoleLogging
from PatchOrderGenerator import PatchOrderGenerator
from PatchInfoParser import installNameToDirName
from ConvertToExternalData import addToGitIgnoreList, isValidKIDSBuildHeaderSuffix
from ConvertToExternalData import isValidSha1Suffix
from PopulatePackages import populatePackageMapByCSV, order_long_to_short
def place(src,dst):
logger.info('%s => %s\n' % (src,dst))
d = os.path.dirname(dst)
if d and not os.path.exists(d):
try: os.makedirs(d)
except OSError as ex:
logger.error(ex)
pass
if not os.path.exists(dst):
try:
os.rename(src,dst)
except OSError as ex:
logger.error(ex)
logger.error( "%s => %s" % (src, dst))
pass
def placeToDir(infoSrc, destDir, addToGitIgnore=True):
if not infoSrc or not os.path.exists(infoSrc):
return
infoSrcName = os.path.basename(infoSrc)
infoDest = os.path.join(destDir, infoSrcName)
if os.path.normpath(infoDest) != os.path.normpath(infoSrc):
place(infoSrc, infoDest)
if addToGitIgnore and isValidSha1Suffix(infoSrcName):
addToGitIgnoreList(infoDest[:infoDest.rfind('.')])
def placeAssociatedFiles(associatedFileList, destDir):
if associatedFileList:
for infoSrc in associatedFileList:
placeToDir(infoSrc, destDir)
def placePatchInfo(patchInfo, curDir, path):
""" place the KIDS info file first if present """
logger.debug("place patch info %s" % patchInfo)
destDir = os.path.join(curDir, path)
infoSrc = patchInfo.kidsInfoPath
if infoSrc:
placeToDir(infoSrc, destDir)
""" place the associated files """
placeAssociatedFiles(patchInfo.associatedInfoFiles, destDir)
""" place the global files """
placeAssociatedFiles(patchInfo.associatedGlobalFiles, destDir)
""" place the custom installer file """
placeToDir(patchInfo.customInstallerPath, destDir)
""" ignore the multiBuilds kids file """
if patchInfo.isMultiBuilds: return
placeToDir(patchInfo.kidsFilePath, destDir)
""" check the KIDS Sha1 path """
placeToDir(patchInfo.kidsSha1Path, destDir)
#-----------------------------------------------------------------------------
def populate(input):
packages, namespaces = populatePackageMapByCSV(input)
#---------------------------------------------------------------------------
# Collect all KIDS and info files under the current directory recursively
#---------------------------------------------------------------------------
curDir = os.getcwd()
patchOrderGen = PatchOrderGenerator()
patchOrder = patchOrderGen.generatePatchOrder(curDir)
patchInfoDict = patchOrderGen.getPatchInfoDict()
patchInfoSet = set(patchInfoDict.keys())
patchList = patchInfoDict.values()
noKidsInfoDict = patchOrderGen.getNoKidsBuildInfoDict()
noKidsInfoSet = set(noKidsInfoDict.keys())
noKidsPatchList = noKidsInfoDict.values()
leftoverTxtFiles = patchOrderGen.getInvalidInfoFiles()
#---------------------------------------------------------------------------
# place multiBuilds KIDS Build under MultiBuilds directory
#---------------------------------------------------------------------------
multiBuildSet = set([x.installName for x in patchList if x.isMultiBuilds])
for info in multiBuildSet:
logger.info("Handling Multibuilds Kids %s" % info)
patchInfo = patchInfoDict[info]
src = patchInfo.kidsFilePath
dest = os.path.normpath(os.path.join(curDir, "MultiBuilds",
os.path.basename(src)))
if src != dest:
place(src,dest)
if isValidKIDSBuildHeaderSuffix(dest):
" add to ignore list if not there"
addToGitIgnoreList(dest[0:dest.rfind('.')])
src = patchInfo.kidsSha1Path
if not src: continue
dest = os.path.normpath(os.path.join(curDir, "MultiBuilds",
os.path.basename(src)))
if src != dest:
place(src,dest)
# Map by package namespace (prefix).
for ns in sorted(namespaces.keys(),order_long_to_short):
path = namespaces[ns]
nsPatchList = [x.installName for x in patchList if x.namespace==ns]
for patch in nsPatchList:
logger.info("Handling Kids %s" % patch)
patchInfo = patchInfoDict[patch]
patchDir = os.path.join(path, "Patches", installNameToDirName(patch))
placePatchInfo(patchInfo, curDir, patchDir)
# Map KIDS Info Files that do not have associated KIDS Build Files
nsNoKidsList = [x.installName for x in noKidsPatchList if x.namespace==ns]
for patch in nsNoKidsList:
logger.info("Handling No Kids info File %s" % patch)
patchInfo = noKidsInfoDict[patch]
patchDir = os.path.join(path, "Patches", installNameToDirName(patch))
placePatchInfo(patchInfo, curDir, patchDir)
patchInfoSet.difference_update(nsPatchList)
noKidsInfoSet.difference_update(nsNoKidsList)
# Put leftover kids files in Uncategorized package.
for patch in patchInfoSet:
logger.info("Handling left over Kids File %s" % patch)
patchInfo = patchInfoDict[patch]
placePatchInfo(patchInfo, curDir, 'Uncategorized')
for patch in noKidsInfoSet:
logger.info("Handling left over no Kids Info File %s" % patch)
patchInfo = noKidsInfoDict[patch]
placePatchInfo(patchInfo, curDir, 'Uncategorized')
# Put invalid kids info files in Uncategorized package.
for src in leftoverTxtFiles:
logger.info("Handling left over files: %s" % src)
from KIDSAssociatedFilesMapping import getAssociatedInstallName
installName = getAssociatedInstallName(src)
if installName == "MultiBuilds": # put in Multibuilds directory
dest = os.path.normpath(os.path.join(curDir, "MultiBuilds",
os.path.basename(src)))
if src != dest:
place(src,dest)
continue
dirName = os.path.dirname(src)
if not dirName.endswith("Packages"):
logger.debug("Do not move %s" % src)
continue
dest = os.path.normpath(os.path.join(curDir, 'Uncategorized',
os.path.basename(src)))
if src != dest:
place(src,dest)
def main():
import logging
initConsoleLogging(logging.INFO)
populate(sys.stdin)
if __name__ == '__main__':
main()
```
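A hedged sketch of how this script is typically driven; populate() reads the package-map CSV from stdin and reorganizes the KIDS/info files found under the current working directory. The checkout path and CSV file name below are placeholders, not values taken from the script itself.
```python
# Hedged invocation sketch; the repository root and the package-map CSV name
# are placeholders.
import os
import subprocess

def run_populate(repo_root, packages_csv='Packages.csv'):
    # The script expects to run from the repository root and reads the CSV on stdin.
    csv_path = os.path.join(repo_root, packages_csv)
    with open(csv_path, 'rb') as csv_in:
        subprocess.check_call(
            ['python', os.path.join('Scripts', 'PopulatePatchesByPackage.py')],
            stdin=csv_in, cwd=repo_root)

if __name__ == '__main__':
    run_populate('/path/to/VistA-M')  # placeholder checkout path
```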
#### File: Testing/PyUnit/TestSSEP.py
```python
import socket
import sys,os
import unittest
def createAndConnect(host="127.0.0.1", port=9210):
print "Connect to host %s, port %s" % (host, port)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((host, port))
sock.setblocking(0) # non-blocking
return sock
def sendDivGet(sock):
inputfile = "divget.xml"
sendRequestByFile(inputfile, sock)
def sendDivSet(sock):
inputfile = "divset.xml"
sendRequestByFile(inputfile, sock)
def sendGetUserInfo(sock):
inputfile = "getuserinfo.xml"
sendRequestByFile(inputfile, sock)
def sendIamHere(sock):
inputfile = "im_here.xml"
sendRequestByFile(inputfile, sock)
def sendGetPatientList(sock):
inputfile = "getpatientlist.xml"
sendRequestByFile(inputfile, sock)
def sendGetPatientVitals(sock):
inputfile = "getpatientvitals.xml"
sendRequestByFile(inputfile, sock)
def sendIntroMsgGet(sock):
inputfile = "getintromessage.xml"
sendRequestByFile(inputfile, sock)
def createORContext(sock):
inputfile = "createorguichartcontext.xml"
sendRequestByFile(inputfile,sock)
def createSignonContext(sock):
inputfile = "createSignonContext.xml"
sendRequestByFile(inputfile,sock)
def signInAlexander(sock):
inputfile = "signon.xml"
sendRequestByFile(inputfile,sock)
def sendRequestByFile(inputfile, sock):
sock.settimeout(5)
if os.path.dirname(__file__)+ "/" == "/":
commandfile = inputfile
else:
commandfile = os.path.dirname(__file__)+ "/" + inputfile
with open(commandfile,'r') as input:
for line in input:
sock.send(line)
sock.send(chr(4))
def getResponse(sock, timeout=10):
sock.settimeout(timeout)
output = ""
while True:
data=sock.recv(256)
if data:
output = output + data
if chr(4) in data:
break
return output
def runRPC(self,rpcfilename, signon):
if os.path.dirname(__file__)+ "/" == "/":
resultsfile = rpcfilename.replace(".xml","_results.xml")
else:
resultsfile = os.path.dirname(__file__)+ "/" + rpcfilename.replace(".xml","_results.xml")
correct_response = open(resultsfile,'r').read()
sock = createSocket()
if signon:
createSignonContext(sock)
getResponse(sock)
signInAlexander(sock)
getResponse(sock)
createORContext(sock)
getResponse(sock)
sendRequestByFile(rpcfilename,sock)
response = getResponse(sock)
print "Response from " + rpcfilename
print response
self.assertEquals(response[:-1],correct_response, msg= "Didn't find a correct response to '" + rpcfilename + "'" )
sock.close()
def createSocket():
sock = None
sock = createAndConnect(results.host,int(results.port))
return sock
class TestM2MBroker(unittest.TestCase):
def test_IamHere_NoSignon(self):
runRPC(self,"imhere.xml",False)
def test_IamHere_Signon(self):
runRPC(self,"imhere.xml",True)
def test_GetIntroMessage_NoSignon(self):
runRPC(self,"getintromessage.xml",False)
def test_GetPatientList_Signon(self):
runRPC(self,"getpatientlist.xml",True)
def test_GetPatientList_NoSignon(self):
runRPC(self,"patientlisterror.xml",False)
def test_GetPatientVitals_NoSignon(self):
runRPC(self,"getpatientvitals.xml",False)
def main():
# Import Argparse and add Scripts/ directory to sys.path
# by finding directory of current script and going up two levels
import argparse
curDir = os.path.dirname(os.path.abspath(__file__))
scriptDir = os.path.normpath(os.path.join(curDir, "../../"))
if scriptDir not in sys.path:
sys.path.append(scriptDir)
# OSEHRA Imports
from RPCBrokerCheck import CheckRPCListener
from VistATestClient import createTestClientArgParser,VistATestClientFactory
# Arg Parser to get address and port of RPC Listener along with a log file
# Inherits the connection arguments of the testClientParser
testClientParser = createTestClientArgParser()
ssepTestParser= argparse.ArgumentParser(description='Test the M2M broker via XML files',
parents=[testClientParser])
ssepTestParser.add_argument("-ha",required=True,dest='host',
help='Address of the host where RPC Broker is listening')
ssepTestParser.add_argument("-hp",required=True,dest='port',
help='Port of the host machine where RPC Broker is listening')
ssepTestParser.add_argument("-l",required=True,dest='log_file',
help='Path to a file to log the output.')
# A global variable so that each test is able to use the port and address of the host
global results
results = ssepTestParser.parse_args()
testClient = VistATestClientFactory.createVistATestClientWithArgs(results)
assert testClient
with testClient:
# If checkresult == 0, RPC listener is set up correctly and tests should be run
# else, don't bother running the tests
testClient.setLogFile(results.log_file)
checkresult = CheckRPCListener(testClient.getConnection(),results.host,results.port)
if checkresult == 0:
suite = unittest.TestLoader().loadTestsFromTestCase(TestM2MBroker)
unittest.TextTestRunner(verbosity=2).run(suite)
else:
print "FAILED: The RPC listener is not set up as needed."
if __name__ == "__main__":
main()
```
#### File: RAS/lib/PATActions.py
```python
import time
import TestHelper
from Actions import Actions
class PATActions (Actions):
'''
This class extends the Actions class with methods specific to actions performed
through the Roll and Scroll interface to add patients to the system. Two methods
are provided: patientaddcsv(), which adds a single record from a CSV file, and
patientaddallcsv(), which adds all patient records from a CSV file.
'''
def __init__(self, VistAconn, user=None, code=None):
Actions.__init__(self, VistAconn, user, code)
def setuser (self, user=None, code=None):
'''Set access code and verify code'''
self.acode = user
self.vcode = code
def signon (self):
'''Signon via XUP'''
self.VistA.write('S DUZ=1 D ^XUP')
def signoff (self):
'''Signoff and halt'''
self.VistA.write('')
self.VistA.write('h\r\r')
def patientaddcsv(self, ssn, pfile=None, getrow=None):
'''Add a patient from a specified record of a specified CSV file'''
prec = [1]
if pfile is not None:
preader = TestHelper.CSVFileReader()
prec = preader.getfiledata(pfile, 'key', getrow)
for pitem in prec:
self.signon()
self.VistA.wait('OPTION NAME');
self.VistA.write('Register a Patient')
self.VistA.wait('PATIENT NAME');
self.VistA.write(prec[pitem]['fullname'].rstrip().lstrip())
self.VistA.wait('NEW PATIENT');
self.VistA.write('YES')
self.VistA.wait('SEX');
self.VistA.write(prec[pitem]['sex'].rstrip().lstrip())
self.VistA.wait('DATE OF BIRTH');
self.VistA.write(prec[pitem]['dob'].rstrip().lstrip())
self.VistA.wait('SOCIAL SECURITY NUMBER');
self.VistA.write(ssn)
self.VistA.wait('TYPE');
self.VistA.write(prec[pitem]['type'].rstrip().lstrip())
self.VistA.wait('PATIENT VETERAN');
self.VistA.write(prec[pitem]['veteran'].rstrip().lstrip())
self.VistA.wait('SERVICE CONNECTED');
self.VistA.write(prec[pitem]['service'].rstrip().lstrip())
self.VistA.wait('MULTIPLE BIRTH INDICATOR');
self.VistA.write(prec[pitem]['twin'].rstrip().lstrip())
self.VistA.wait('//');
self.VistA.write('^\r')
self.VistA.wait('MAIDEN NAME');
self.VistA.write(prec[pitem]['maiden'].rstrip().lstrip())
self.VistA.wait('PLACE OF BIRTH');
self.VistA.write(prec[pitem]['cityob'].rstrip().lstrip())
self.VistA.wait('PLACE OF BIRTH');
self.VistA.write(prec[pitem]['stateob'].rstrip().lstrip())
self.VistA.wait('');
self.VistA.write('\r\r\r')
def patientaddallcsv(self, pfile):
'''Add ALL patients from specified CSV '''
preader = TestHelper.CSVFileReader()
prec = preader.getfiledata(pfile, 'key')
for pitem in prec:
self.signon()
self.VistA.wait('OPTION NAME');
self.VistA.write('Register a Patient')
self.VistA.wait('PATIENT NAME');
self.VistA.write(prec[pitem]['fullname'].rstrip().lstrip())
self.VistA.wait('NEW PATIENT');
self.VistA.write('YES')
self.VistA.wait('SEX');
self.VistA.write(prec[pitem]['sex'].rstrip().lstrip())
self.VistA.wait('DATE OF BIRTH');
self.VistA.write(prec[pitem]['dob'].rstrip().lstrip())
self.VistA.wait('SOCIAL SECURITY NUMBER');
self.VistA.write(pitem)
self.VistA.wait('TYPE');
self.VistA.write(prec[pitem]['type'].rstrip().lstrip())
self.VistA.wait('PATIENT VETERAN');
self.VistA.write(prec[pitem]['veteran'].rstrip().lstrip())
self.VistA.wait('SERVICE CONNECTED');
self.VistA.write(prec[pitem]['service'].rstrip().lstrip())
self.VistA.wait('MULTIPLE BIRTH INDICATOR');
self.VistA.write(prec[pitem]['twin'].rstrip().lstrip())
if str(prec[pitem]['pdup'].rstrip().lstrip()) == '1':
self.VistA.wait('as a new patient')
self.VistA.write('Yes')
self.VistA.wait('//');
self.VistA.write('^\r')
self.VistA.wait('MAIDEN NAME');
self.VistA.write(prec[pitem]['maiden'].rstrip().lstrip())
self.VistA.wait('PLACE');
self.VistA.write(prec[pitem]['cityob'].rstrip().lstrip())
self.VistA.wait('PLACE');
self.VistA.write(prec[pitem]['stateob'].rstrip().lstrip())
self.VistA.wait('');
self.VistA.write('\r\r\r')
```
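A hedged sketch of exercising PATActions without a live VistA instance; the fake connection and the sys.path entry are illustrative assumptions, and it presumes the Actions base class simply stores the connection object it is given.
```python
# Hedged usage sketch. Driving PATActions normally requires a live VistA
# roll-and-scroll connection; a trivial fake connection that echoes the
# wait()/write() traffic stands in here, assuming Actions.__init__ merely
# stores the connection it receives.
import sys
import os

# Placeholder path adjustment so PATActions is importable.
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'RAS', 'lib'))
from PATActions import PATActions

class FakeVistA(object):
    """Minimal stand-in for the VistA connection used by Actions/PATActions."""
    def wait(self, prompt):
        print 'WAIT : %s' % prompt
    def write(self, text):
        print 'WRITE: %r' % text

if __name__ == '__main__':
    pat = PATActions(FakeVistA())
    pat.signon()   # writes: S DUZ=1 D ^XUP
    pat.signoff()  # halts the roll-and-scroll session
```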
#### File: OTJ/VA FileMan 22.2/VistAInitFileMan.py
```python
from __future__ import with_statement
import sys
import os
import argparse
# setup system path
filedir = os.path.dirname(os.path.abspath(__file__))
test_python_dir = os.path.normpath(os.path.join(filedir, "../../Python/"))
scripts_python_dir = os.path.normpath(os.path.join(filedir, "../../../Scripts"))
sys.path.append(test_python_dir)
sys.path.append(scripts_python_dir)
from VistATestClient import VistATestClientFactory, createTestClientArgParser
from LoggerManager import logger, initConsoleLogging
from ExternalDownloader import obtainKIDSBuildFileBySha1
from ConvertToExternalData import readSha1SumFromSha1File
from ConvertToExternalData import isValidRoutineSha1Suffix
def getFileMan22_2RoutineFile(rFile):
sha1sum = readSha1SumFromSha1File(rFile)
logger.info("sha1sum is %s" % sha1sum)
result, path = obtainKIDSBuildFileBySha1(rFile, sha1sum, filedir)
if not result:
logger.error("Could not obtain FileMan 22V2 file for %s" % rFile)
raise Exception("Error getting FileMan 22V2 file for %s" % rFile)
return path
def inputMumpsSystem(testClient):
connection = testClient.getConnection()
if testClient.isCache(): # this is the Cache
connection.send("CACHE\r")
elif testClient.isGTM(): # this is GT.M(UNIX)
connection.send("GT.M(UNIX)\r")
else:
pass
def initFileMan(testClient, siteName, siteNumber):
connection = testClient.getConnection()
testClient.waitForPrompt()
connection.send("D ^DINIT\r")
connection.expect("Initialize VA FileMan now?")
connection.send("YES\r")
connection.expect("SITE NAME:")
if siteName and len(siteName) > 0:
connection.send(siteName+"\r")
else:
connection.send("\r") # just use the default
connection.expect("SITE NUMBER")
if siteNumber and int(siteNumber) != 0:
connection.send(str(siteNumber)+"\r")
else:
connection.send("\r")
selLst = [
"Do you want to change the MUMPS OPERATING SYSTEM File?",
"TYPE OF MUMPS SYSTEM YOU ARE USING:",
]
while True:
idx = connection.expect(selLst)
if idx == 0:
connection.send("YES\r") # we want to change MUMPS OPERATING SYSTEM File
continue
elif idx == len(selLst) - 1:
inputMumpsSystem(testClient)
break
testClient.waitForPrompt()
connection.send('\r')
def initFileMan22_2(testClient):
testClient.waitForPrompt()
conn = testClient.getConnection()
conn.send('D ^DIINIT\r')
conn.expect('ARE YOU SURE EVERYTHING\'S OK\? ')
conn.send('YES\r')
testClient.waitForPrompt()
conn.send('D ^DMLAINIT\r')
testClient.waitForPrompt()
conn.send('\r')
def inhibitLogons(testClient, flag=True):
from VistAMenuUtil import VistAMenuUtil
from VistATaskmanUtil import getBoxVolPair
volumeSet = getBoxVolPair(testClient).split(':')[0]
menuUtil = VistAMenuUtil(duz=1)
menuUtil.gotoFileManEditEnterEntryMenu(testClient)
conn = testClient.getConnection()
conn.send('14.5\r') # 14.5 is the VOLUME SET File
conn.expect('EDIT WHICH FIELD: ')
conn.send('1\r') # field Inhibit Logons?
conn.expect('THEN EDIT FIELD: ')
conn.send('\r')
conn.expect('Select VOLUME SET: ')
conn.send('%s\r' % volumeSet)
conn.expect('INHIBIT LOGONS\?: ')
if flag:
conn.send('YES\r')
else:
conn.send('NO\r')
conn.expect('Select VOLUME SET: ')
conn.send('\r')
menuUtil.exitFileManMenu(testClient)
def stopAllMumpsProcessGTM():
pass
def deleteFileManRoutinesCache(testClient):
conn = testClient.getConnection()
testClient.waitForPrompt()
conn.send('D ^%ZTRDEL\r')
conn.expect('All Routines\? ')
conn.send('No\r')
for input in ("DI*", "'DIZ*", "DM*", "'DMZ*", "DD*", "'DDZ*"):
conn.expect('Routine: ')
conn.send(input+'\r')
conn.expect('Routine: ')
conn.send('\r')
conn.expect('routines to DELETE, OK: ')
conn.send('YES\r')
testClient.waitForPrompt(120)
conn.send('\r')
def deleteFileManRoutinesGTM():
""" first get routine directory """
from ParseGTMRoutines import extract_m_source_dirs
var = os.getenv('gtmroutines')
routineDirs = extract_m_source_dirs(var)
if not routineDirs:
return []
import glob
outDir = routineDirs[0:1]
for routineDir in routineDirs:
for pattern in ['DI*.m', 'DD*.m', 'DM*.m']:
globPtn = os.path.join(routineDir, pattern)
fmFiles = glob.glob(os.path.join(routineDir, pattern))
if fmFiles:
if routineDir not in outDir:
outDir.append(routineDir)
for fmFile in fmFiles:
logger.debug("removing file %s" % fmFile)
os.remove(fmFile)
return outDir
def verifyRoutines(testClient):
testClient.waitForPrompt()
conn = testClient.getConnection()
conn.send('D ^DINTEG\r')
testClient.waitForPrompt()
conn.send('\r')
def rewriteFileManRoutineCache(testClient):
testClient.waitForPrompt()
conn = testClient.getConnection()
for filename in ['DIDT','DIDTC','DIRCR']:
conn.send('ZL %s ZS %s\r' % (filename, filename.replace('DI','%')))
testClient.waitForPrompt()
conn.send('\r')
def rewriteFileManRoutineGTM(outDir):
import shutil
for filename in ['DIDT','DIDTC','DIRCR']:
src = os.path.join(outDir, filename + ".m")
dst = os.path.join(outDir, filename.replace('DI','_') + '.m')
logger.debug("Copy %s to %s" % (src, dst))
shutil.copyfile(src, dst)
def installFileMan22_2(testClient, inputROFile):
"""
Script to install and initialize FileMan 22.2
"""
if not os.path.exists(inputROFile):
logger.error("File %s does not exist" % inputROFile)
return
rFile = inputROFile
if isValidRoutineSha1Suffix(inputROFile):
rFile = getFileMan22_2RoutineFile(inputROFile)
from VistATaskmanUtil import VistATaskmanUtil
outDir = None # gtm routine import out dir
# allow logons so that Taskman can be shut down via the menu
inhibitLogons(testClient, flag=False)
# stop all taskman tasks
taskManUtil = VistATaskmanUtil()
logger.info("Stop Taskman...")
taskManUtil.stopTaskman(testClient)
logger.info("Inhibit logons...")
inhibitLogons(testClient)
# remove fileman 22.2 affected routines
logger.info("Remove FileMan 22 routines")
if testClient.isCache():
deleteFileManRoutinesCache(testClient)
else:
outDir = deleteFileManRoutinesGTM()
if not outDir:
logger.info("Can not identify mumps routine directory")
return
outDir = outDir[0]
# import routines into System
from VistARoutineImport import VistARoutineImport
vistARtnImport = VistARoutineImport()
logger.info("Import FileMan 22.2 Routines from %s" % rFile)
vistARtnImport.importRoutines(testClient, rFile, outDir)
# verify integrity of the routines that just imported
logger.info("Verify FileMan 22.2 Routines...")
verifyRoutines(testClient)
# rewrite fileman routines
logger.info("Rewrite FileMan 22.2 Routines...")
if testClient.isCache():
rewriteFileManRoutineCache(testClient)
else:
rewriteFileManRoutineGTM(outDir)
# initial fileman
logger.info("Initial FileMan...")
initFileMan(testClient, None, None)
logger.info("Initial FileMan 22.2...")
initFileMan22_2(testClient)
logger.info("Enable logons...")
inhibitLogons(testClient, flag=False)
""" restart taskman """
logger.info("Restart Taskman...")
taskManUtil.startTaskman(testClient)
DEFAULT_OUTPUT_LOG_FILE_NAME = "VistAInitFileMan.log"
import tempfile
def getTempLogFile():
return os.path.join(tempfile.gettempdir(), DEFAULT_OUTPUT_LOG_FILE_NAME)
def main():
testClientParser = createTestClientArgParser()
parser = argparse.ArgumentParser(description='VistA Initialize FileMan Utilities',
parents=[testClientParser])
parser.add_argument('roFile', help="routine import file in ro format")
result = parser.parse_args();
print (result)
""" create the VistATestClient"""
testClient = VistATestClientFactory.createVistATestClientWithArgs(result)
assert testClient
with testClient as vistAClient:
logFilename = getTempLogFile()
print "Log File is %s" % logFilename
initConsoleLogging()
vistAClient.setLogFile(logFilename)
installFileMan22_2(vistAClient, result.roFile)
if __name__ == '__main__':
main()
```
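A hedged sketch of invoking this utility from another script; the .ro file name is a placeholder, and any connection flags required by createTestClientArgParser are left to the caller as extra_args.
```python
# Hedged invocation sketch. The positional argument is the FileMan 22.2 routine
# export (.ro) file or its .sha1 stub; connection flags required by
# createTestClientArgParser (not shown here) go in extra_args.
import subprocess

def run_init_fileman(ro_file, extra_args=None):
    cmd = ['python', 'VistAInitFileMan.py'] + (extra_args or []) + [ro_file]
    subprocess.check_call(cmd)

if __name__ == '__main__':
    # extra_args is a placeholder for whatever your local test-client setup needs.
    run_init_fileman('FILEMAN_22.2_ROUTINES.ro', extra_args=[])
```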
#### File: Dox/PythonScripts/FileManGlobalDataParser.py
```python
import os
import sys
import re
from datetime import datetime
import logging
import json
from CrossReference import FileManField
from ZWRGlobalParser import getKeys, sortDataEntryFloatFirst, printGlobal
from ZWRGlobalParser import convertToType, createGlobalNodeByZWRFile
from ZWRGlobalParser import readGlobalNodeFromZWRFileV2
from FileManSchemaParser import FileManSchemaParser
FILE_DIR = os.path.dirname(os.path.abspath(__file__))
SCRIPTS_DIR = os.path.normpath(os.path.join(FILE_DIR, "../../../Scripts"))
if SCRIPTS_DIR not in sys.path:
sys.path.append(SCRIPTS_DIR)
from FileManDateTimeUtil import fmDtToPyDt
from PatchOrderGenerator import PatchOrderGenerator
import glob
""" These are used to capture install entries that don't use the
package prefix as their install name or have odd capitalization
after being passed through the title function
"""
INSTALL_PACKAGE_FIX = {"VA FILEMAN 22.0": "VA FileMan",
"DIETETICS 5.5" : "Dietetics"
}
INSTALL_RENAME_DICT = {"Kernel Public Domain" : "Kernel",
"Kernel - Virgin Install" : "Kernel",
#"DIETETICS " : "Dietetics",
"Rpc Broker": "RPC Broker",
"Pce Patient Care Encounter": "PCE Patient Care Encounter",
"Sagg" : "SAGG Project",
"Sagg Project" : "SAGG Project",
"Emergency Department" : "Emergency Department Integration Software",
"Gen. Med. Rec. - Vitals" : "General Medical Record - Vitals",
"Gen. Med. Rec. - I/O" : "General Medical Record - IO",
"Mailman" : "MailMan",
"Bar Code Med Admin" : "Barcode Medication Administration",
"Ifcap" : "IFCAP",
"Master Patient Index Vista" : "Master Patient Index VistA",
"Consult/Request Tracking" : "Consult Request Tracking",
"Outpatient Pharmacy Version" : "Outpatient Pharmacy",
"Clinical Info Resource Network" : "Clinical Information Resource Network",
"Dss Extracts" : "DSS Extracts",
"Automated Info Collection Sys" : "Automated Information Collection System",
"Text Integration Utilities" : "Text Integration Utility",
"Drug Accountability V." : "Drug Accountability",
"Women'S Health" : "Womens Health",
"Health Data & Informatics" : "Health Data and Informatics",
"Capacity Management - Rum" : "Capacity Management - RUM",
"Authorization/Subscription" : "Authorization Subscription",
"Pharmacy Data Management Host" : "Pharmacy Data Management",
"Equipment/Turn-In Request" : "Equipment Turn-In Request",
"Pbm" : "Pharmacy Benefits Management",
"Cmoph" : "CMOP",
"Cmop" : "CMOP"
}
regexRtnCode = re.compile("( ?[DQI] |[:',])(\$\$)?(?P<tag>"
"([A-Z0-9][A-Z0-9]*)?)\^(?P<rtn>[A-Z%][A-Z0-9]+)")
def getMumpsRoutine(mumpsCode):
"""
For a given mumpsCode, parse the routine and tag information
via regular expression.
return an iterator with (routine, tag, rtnpos)
"""
pos = 0
endpos = 0
for result in regexRtnCode.finditer(mumpsCode):
if result:
routine = result.group('rtn')
if routine:
tag = result.group('tag')
start, end = result.span('rtn')
endpos = result.end()
pos = endpos
yield (routine, tag, start)
return  # let the generator end normally instead of raising StopIteration (PEP 479)
def test_getMumpsRoutine():
for input in (
('D ^TEST1', [('TEST1','',3)]),
('D ^%ZOSV', [('%ZOSV','',3)]),
('D TAG^TEST2',[('TEST2','TAG',6)]),
('Q $$TST^%RRST1', [('%RRST1','TST',8)]),
('D ACKMSG^DGHTHLAA',[('DGHTHLAA','ACKMSG',9)]),
('S XQORM(0)="1A",XQORM("??")="D HSTS^ORPRS01(X)"',[('ORPRS01','HSTS',36)]),
('I $$TEST^ABCD D ^EST Q:$$ENG^%INDX K ^DD(0)',
[
('ABCD','TEST',9),
('EST','',17),
('%INDX','ENG',29)
]
),
('S DUZ=1 K ^XUTL(0)', None),
("""W:'$$TM^%ZTLOAD() *7,!!,"WARNING -- TASK MANAGER DOESN'T!!!!",!!,*7""",
[('%ZTLOAD','TM',8)]
),
("""W "This is a Test",$$TM^ZTLOAD()""",[('ZTLOAD','TM',24)]),
("""D ^PSIVXU Q:$D(XQUIT) D EN^PSIVSTAT,NOW^%DTC S ^PS(50.8,1,.2)=% K %""",
[
('PSIVXU','',3),
('PSIVSTAT','EN',27),
('%DTC','NOW',40)
]
),
("""D ^TEST1,EN^TEST2""",
[
('TEST1','',3),
('TEST2','EN',12)
]
),
):
for idx, (routine,tag,pos) in enumerate(getMumpsRoutine(input[0])):
assert (routine, tag, pos) == input[1][idx], "%s: %s" % ((routine, tag, pos), input[1][idx])
class FileManFileData(object):
"""
Class to represent FileMan File data WRT
either a FileMan file or a subFile
"""
def __init__(self, fileNo, name):
self._fileNo = fileNo
self._name = name
self._data = {}
@property
def dataEntries(self):
return self._data
@property
def name(self):
return self._name
@property
def fileNo(self):
return self._fileNo
def addFileManDataEntry(self, ien, dataEntry):
self._data[ien] = dataEntry
def __repr__(self):
return "%s, %s, %s" % (self._fileNo, self._name, self._data)
class FileManDataEntry(object):
"""
One FileMan File DataEntry
"""
def __init__(self, fileNo, ien):
self._ien = ien
self._data = {}
self._fileNo = fileNo
self._name = None
self._type = None
@property
def fields(self):
return self._data
@property
def name(self):
return self._name
@property
def type(self):
return self._type
@property
def ien(self):
return self._ien
@property
def fileNo(self):
return self._fileNo
@name.setter
def name(self, name):
self._name = name
@type.setter
def type(self, type):
self._type = type
def addField(self, fldData):
self._data[fldData.id] = fldData
def __repr__(self):
return "%s: %s: %s" % (self._fileNo, self._ien, self._data)
class FileManDataField(object):
"""
Represent an individual field in a FileMan DataEntry
"""
def __init__(self, fieldId, type, name, value):
self._fieldId = fieldId
self._type = type
self._name = name
self._value = value
@property
def id(self):
return self._fieldId
@property
def name(self):
return self._name
@property
def type(self):
return self._type
@property
def value(self):
return self._value
@value.setter
def value(self, value):
self._value = value
def __repr__(self):
return "%s: %s" % (self._name, self._value)
def printFileManFileData(fileManData, level=0):
curIndent = "\t"*(level+1)
if level == 0:
print "File#: %s, Name: %s" % (fileManData.fileNo, fileManData.name)
for ien in getKeys(fileManData.dataEntries.keys(), float):
dataEntry = fileManData.dataEntries[ien]
printFileManDataEntry(dataEntry, ien, level)
def printFileManDataEntry(dataEntry, ien, level):
curIndent = "\t"*(level+1)
if level == 0:
print "FileEntry#: %s, Name: %s" % (ien, dataEntry.name)
else:
print
for fldId in sorted(dataEntry.fields.keys(), key=lambda x: float(x)):
dataField = dataEntry.fields[fldId]
if dataField.type == FileManField.FIELD_TYPE_SUBFILE_POINTER:
if dataField.value and dataField.value.dataEntries:
print "%s%s:" % (curIndent, dataField.name)
printFileManFileData(dataField.value, level+1)
elif dataField.type == FileManField.FIELD_TYPE_WORD_PROCESSING:
wdList = dataField.value
if wdList:
print "%s%s:" % (curIndent, dataField.name)
for item in wdList:
print "%s\t%s" % (curIndent, item)
else:
print "%s%s: %s" % (curIndent, dataField.name, dataField.value)
print
def test_FileManDataEntry():
fileManData = FileManFileData('1', 'TEST FILE 1')
dataEntry = FileManDataEntry("Test",1)
dataEntry.addField(FileManDataField('0.1', 0, 'NAME', 'Test'))
dataEntry.addField(FileManDataField('1', 0, 'TAG', 'TST'))
dataEntry.addField(FileManDataField('2', 1, 'ROUTINE', 'TEST1'))
dataEntry.addField(FileManDataField('3', 2, 'INPUT TYPE', '1'))
subFileData = FileManFileData('1.01', 'TEST FILE SUB-FIELD')
subDataEntry = FileManDataEntry("1.01", 1)
subDataEntry.addField(FileManDataField('.01',0, 'NAME', 'SUBTEST'))
subDataEntry.addField(FileManDataField('1', 1, 'DATA', '0'))
subFileData.addFileManDataEntry('1', subDataEntry)
subDataEntry = FileManDataEntry("1.01", 2)
subDataEntry.addField(FileManDataField('.01', 0, 'NAME', 'SUBTEST1'))
subDataEntry.addField(FileManDataField('1', 1, 'DATA', '1'))
subFileData.addFileManDataEntry('2', subDataEntry)
dataEntry.addField(FileManDataField('4', 9, 'SUB-FIELD', subFileData))
fileManData.addFileManDataEntry('1', dataEntry)
printFileManFileData(fileManData)
def sortSchemaByLocation(fileSchema):
locFieldDict = {}
for fldAttr in fileSchema.getAllFileManFields().itervalues():
loc = fldAttr.getLocation()
if not loc: continue
locInfo = loc.split(';')
if len(locInfo) != 2:
logging.error("Unknown location info %s for %r" % (loc, fldAttr))
continue
index,pos = locInfo
if index not in locFieldDict:
locFieldDict[index] = {}
locFieldDict[index][pos] = fldAttr
return locFieldDict
"""
hard code initial map due to the way the ^DIC is extracted
"""
initGlobalLocationMap = {
x: "^DIC(" + x for x in (
'.2', '3.1', '3.4', '4', '4.001', '4.005',
'4.009', '4.05', '4.1', '4.11', '4.2', '4.2996',
'5', '7', '7.1', '8', '8.1', '8.2', '9.2', '9.4',
'9.8', '10', '10.2', '10.3', '11', '13', '19', '19.1',
'19.2', '19.8', '21', '22', '23', '25','30', '31', '34',
'35', '36', '37', '39.1', '39.2', '39.3', '40.7', '40.9',
'42', '42.4', '42.55', '43.4', '45.1', '45.3', '45.6',
'45.61', '45.68', '45.7', '45.81', '45.82', '45.88', '45.89',
'47', '49', '51.5', '68.4', '81.1', '81.2', '81.3', '150.9',
'194.4', '194.5', '195.1', '195.2', '195.3', '195.4', '195.6',
'213.9', '220.2', '220.3', '220.4', '620', '625', '627', '627.5',
'627.9', '6910', '6910.1', '6921', '6922',
)
}
""" handle file# 0 or the schema file """
initGlobalLocationMap['0'] = '^DD('
class FileManGlobalDataParser(object):
def __init__(self, crossRef=None):
self.patchDir = None
self._dataRoot = None
self._allSchemaDict = None
self._crossRef = crossRef
self._curFileNo = None
self._glbData = {} # fileNo => FileManData
self._pointerRef = {}
self._fileKeyIndex = {} # File: => ien => Value
self._glbLocMap = initGlobalLocationMap # File: => Global Location
self._fileParsed = set() # set of files that has been parsed
self._rtnRefDict = {} # dict of rtn => fileNo => Details
self._allFiles = {} # Dict of fileNum => Global file
@property
def outFileManData(self):
return self._glbData
@property
def crossRef(self):
return self._crossRef
@property
def globalLocationMap(self):
return self._glbLocMap
def getFileNoByGlobalLocation(self, glbLoc):
"""
get the file no by global location
return fileNo if found, otherwise return None
"""
outLoc = normalizeGlobalLocation(glbLoc)
for key, value in self._glbLocMap.iteritems():
if value == outLoc:
return key
return None
def getFileManFileNameByFileNo(self, fileNo):
if self._crossRef:
fileManFile = self._crossRef.getGlobalByFileNo(fileNo)
if fileManFile:
return fileManFile.getFileManName()
return ""
def _createDataRootByZWRFile(self, inputFileName):
self._dataRoot = createGlobalNodeByZWRFile(inputFileName)
def getAllFileManZWRFiles(self, dirName, pattern):
searchFiles = glob.glob(os.path.join(dirName, pattern))
outFiles = {}
for file in searchFiles:
fileName = os.path.basename(file)
if fileName == 'DD.zwr':
outFiles['0'] = {'name': 'Schema File',
'path': os.path.normpath(os.path.abspath(file))}
continue
result = re.search("(?P<fileNo>^[0-9.]+)(-[1-9])?\+(?P<des>.*)\.zwr$", fileName)
if result:
"ignore split file for now"
if result.groups()[1]:
logging.info("Ignore file %s" % fileName)
continue
fileNo = result.group('fileNo')
if fileNo.startswith('0'): fileNo = fileNo[1:]
globalDes = result.group('des')
outFiles[fileNo] = {'name': globalDes,
'path': os.path.normpath(os.path.abspath(file))}
return outFiles
def parseAllZWRGlobaFilesBySchema(self, mRepositDir, allSchemaDict):
""" Parsing all ZWR Global Files via Schema
"""
allFiles = self.getAllFileManZWRFiles(os.path.join(mRepositDir,
'Packages'),
"*/Globals/*.zwr")
allFileManGlobal = self._crossRef.getAllFileManGlobals()
sizeLimit = 59*1024*1024 # 59 MiB
for key in sorted(allFileManGlobal.keys()):
fileManFile = allFileManGlobal[key]
fileNo = fileManFile.getFileNo()
if fileNo not in allFiles:
continue
ddFile = allFiles[fileNo]['path']
glbDes = allFiles[fileNo]['name']
self._createDataRootByZWRFile(ddFile)
if not self._dataRoot:
logging.warn("not data for file %s" % ddFile)
continue
globalName = fileManFile.getName()
parts = globalName.split('(')
rootName, subscript = (None, None)
if len(parts) > 1:
rootName = parts[0]
subscript = parts[1].strip('"')
else:
rootName = parts[0]
subscript = rootName
assert rootName == self._dataRoot.subscript
logging.info("File: %s, root: %s, sub: %s" % (ddFile, rootName, subscript))
self.parseZWRGlobalDataBySchema(self._dataRoot, allSchemaDict,
fileNo, subscript)
self._glbData = {}
def parseZWRGlobalFileBySchema(self, inputFileName, allSchemaDict,
fileNumber, subscript):
self._createDataRootByZWRFile(inputFileName)
self.parseZWRGlobalDataBySchema(self._dataRoot, allSchemaDict,
fileNumber, subscript)
def generateFileIndex(self, inputFileName, allSchemaDict,
fileNumber):
self._allSchemaDict = allSchemaDict
schemaFile = allSchemaDict[fileNumber]
if not schemaFile.hasField('.01'):
logging.error("File does not have a .01 field, ignore")
return
keyField = schemaFile.getFileManFieldByFieldNo('.01')
keyLoc = keyField.getLocation()
if not keyLoc:
logging.error(".01 field does not have a location")
return
self._curFileNo = fileNumber
glbLoc = self._glbLocMap[fileNumber]
for dataRoot in readGlobalNodeFromZWRFileV2(inputFileName, glbLoc):
if not dataRoot: continue
self._dataRoot = dataRoot
fileDataRoot = dataRoot
(ien, detail) = self._getKeyNameBySchema(fileDataRoot, keyLoc, keyField)
if detail:
self._addFileKeyIndex(fileNumber, ien, detail)
elif ien:
logging.info("No name associated with ien: %s, file: %s" % (ien, fileNumber))
else:
logging.info("No index for data with ien: %s, file: %s" % (ien, fileNumber))
def _getKeyNameBySchema(self, dataRoot, keyLoc, keyField):
floatKey = getKeys(dataRoot, float)
logging.debug('Total # of entry is %s' % len(floatKey))
for ien in floatKey:
if float(ien) <=0:
continue
dataEntry = dataRoot[ien]
index, loc = keyLoc.split(';')
if not index or index not in dataEntry:
continue
dataEntry = dataEntry[index]
if not dataEntry.value:
return (ien, None)
values = dataEntry.value.split('^')
dataValue = None
if convertToType(loc, int):
intLoc = int(loc)
if intLoc > 0 and intLoc <= len(values):
dataValue = values[intLoc-1]
else:
dataValue = str(dataEntry.value)
if dataValue:
return (ien, self._parseIndividualFieldDetail(dataValue, keyField, None))
return (None, None)
def parseZWRGlobalFileBySchemaV2(self, inputFileName, allSchemaDict,
fileNumber, glbLoc=None):
self._allSchemaDict = allSchemaDict
schemaFile = allSchemaDict[fileNumber]
self._glbData[fileNumber] = FileManFileData(fileNumber,
self.getFileManFileNameByFileNo(fileNumber))
self._curFileNo = fileNumber
if not glbLoc:
glbLoc = self._glbLocMap.get(fileNumber)
logging.info("File: %s global loc: %s" % (fileNumber, glbLoc))
elif fileNumber in self._glbLocMap:
logging.info("global loc %s, %s" % (glbLoc, self._glbLocMap[fileNumber]))
for dataRoot in readGlobalNodeFromZWRFileV2(inputFileName, glbLoc):
if not dataRoot:
continue
self._dataRoot = dataRoot
fileDataRoot = dataRoot
self._parseDataBySchema(fileDataRoot, schemaFile,
self._glbData[fileNumber])
self._resolveSelfPointer()
if self._crossRef:
self._updateCrossReference()
def parseZWRGlobalDataBySchema(self, dataRoot, allSchemaDict,
fileNumber, subscript):
self._allSchemaDict = allSchemaDict
schemaFile = allSchemaDict[fileNumber]
fileDataRoot = dataRoot
if subscript:
if subscript in dataRoot:
logging.info("using subscript %s" % subscript)
fileDataRoot = dataRoot[subscript]
self._glbData[fileNumber] = FileManFileData(fileNumber,
self.getFileManFileNameByFileNo(fileNumber))
self._parseDataBySchema(fileDataRoot, schemaFile,
self._glbData[fileNumber])
else: # assume this is for all files in the entry
for fileNo in getKeys(self._dataRoot, float):
fileDataRoot = self._dataRoot[fileNo]
self._glbData[fileNo] = FileManFileData(fileNo,
schemaFile.getFileManName())
self._parseDataBySchema(fileDataRoot, schemaFile, self._glbData[fileNo])
self._resolveSelfPointer()
if self._crossRef:
self._updateCrossReference()
def _updateCrossReference(self):
if '8994' in self._glbData:
self._updateRPCRefence()
if '101' in self._glbData:
self._updateHL7Reference()
if '779.2' in self._glbData:
self._updateHLOReference()
if '9.7' in self._glbData:
self._updateInstallReference()
def outRtnReferenceDict(self):
if len(self._rtnRefDict):
""" generate the dependency in json file """
with open(os.path.join(self.outDir, "Routine-Ref.json"), 'w') as output:
logging.info("Generate File: %s" % output.name)
json.dump(self._rtnRefDict, output)
def _updateHLOReference(self):
hlo = self._glbData['779.2']
for ien in sorted(hlo.dataEntries.keys(),key=lambda x: float(x)):
hloEntry = hlo.dataEntries[ien]
entryName = hloEntry.name
namespace, package = \
self._crossRef.__categorizeVariableNameByNamespace__(entryName)
if package:
package.hlo.append(hloEntry)
logging.info("Adding hlo: %s to Package: %s" %
(entryName, package.getName()))
def _updateHL7Reference(self):
protocol = self._glbData['101']
for ien in sorted(protocol.dataEntries.keys(), key=lambda x: float(x)):
protocolEntry = protocol.dataEntries[ien]
if '4' in protocolEntry.fields:
type = protocolEntry.fields['4'].value
if (type != 'event driver' and type != 'subscriber') and (not re.search("[Mm]enu", type)):
logging.info("Adding Protocol Entry of type: %s" % (type))
entryName = protocolEntry.name
namespace, package = \
self._crossRef.__categorizeVariableNameByNamespace__(entryName)
if package:
package.protocol.append(protocolEntry)
logging.info("Adding Protocol Entry: %s to Package: %s" %
(entryName, package.getName()))
# only care about the event drive and subscriber type
elif (type == 'event driver' or type == 'subscriber'):
entryName = protocolEntry.name
namespace, package = \
self._crossRef.__categorizeVariableNameByNamespace__(entryName)
if package:
package.hl7.append(protocolEntry)
logging.info("Adding HL7: %s to Package: %s" %
(entryName, package.getName()))
elif '12' in protocolEntry.fields: # check the package it belongs to
pass
else:
logging.warn("Can not find a package for HL7: %s" % entryName)
for field in ('771', '772'):
if field not in protocolEntry.fields:
continue
hl7Rtn = protocolEntry.fields[field].value
if not hl7Rtn:
continue
for rtn, tag, pos in getMumpsRoutine(hl7Rtn):
hl7Info = {"name": entryName,
"ien": ien}
if tag:
hl7Info['tag'] = tag
self._rtnRefDict.setdefault(rtn,{}).setdefault('101',[]).append(hl7Info)
def _updateRPCRefence(self):
rpcData = self._glbData['8994']
for ien in sorted(rpcData.dataEntries.keys(), key=lambda x: float(x)):
rpcEntry = rpcData.dataEntries[ien]
rpcRoutine = None
if rpcEntry.name:
namespace, package = \
self._crossRef.__categorizeVariableNameByNamespace__(rpcEntry.name)
if package:
package.rpcs.append(rpcEntry)
logging.info("Adding RPC: %s to Package: %s" %
(rpcEntry.name, package.getName()))
if '.03' in rpcEntry.fields:
rpcRoutine = rpcEntry.fields['.03'].value
else:
if rpcRoutine:
""" try to categorize by routine called """
namespace, package = \
self._crossRef.__categorizeVariableNameByNamespace__(rpcRoutine)
if package:
package.rpcs.append(rpcEntry)
logging.info("Adding RPC: %s to Package: %s based on routine calls" %
(rpcEntry.name, package.getName()))
else:
logging.error("Can not find package for RPC: %s" %
(rpcEntry.name))
""" Generate the routine referenced based on RPC Call """
if rpcRoutine:
rpcInfo = {"name": rpcEntry.name,
"ien" : ien
}
if '.02' in rpcEntry.fields:
rpcTag = rpcEntry.fields['.02'].value
rpcInfo['tag'] = rpcTag
self._rtnRefDict.setdefault(rpcRoutine,{}).setdefault('8994',[]).append(rpcInfo)
def _findInstallPackage(self,packageList, installEntryName):
namespace, package = self._crossRef.__categorizeVariableNameByNamespace__(installEntryName)
# A check to remove the mis-categorized installs which happen to fall in a namespace
if installEntryName in INSTALL_PACKAGE_FIX:
package = INSTALL_PACKAGE_FIX[installEntryName]
# If it cannot match a package by namespace, capture the name via Regular Expression
if package is None:
pkgMatch = re.match("[A-Z./ \&\-\']+",installEntryName)
if pkgMatch:
# if a match is found, switch to title case and remove extra spaces
targetName = pkgMatch.group(0).title().strip()
# First check it against the list of package names
if targetName in packageList:
package = targetName
# Then check it against the dictionary above for some odd spellings or capitalization
elif targetName in INSTALL_RENAME_DICT:
package = INSTALL_RENAME_DICT[targetName]
# If all else fails, assign it to the "Unknown"
else:
package = "Unknown"
package = str(package).strip()
return package
def _updateInstallReference(self):
installData = self._glbData['9.7']
output = os.path.join(self.outDir, "install_information.json")
installJSONData = {}
packageList = self._crossRef.getAllPackages()
patchOrderGen = PatchOrderGenerator()
patchOrderGen.analyzeVistAPatchDir(self.patchDir +"/Packages")
with open(output, 'w') as installDataOut:
logging.warn("inside the _updateInstallReference")
for ien in sorted(installData.dataEntries.keys(), key=lambda x: float(x)):
installItem = {}
installEntry = installData.dataEntries[ien]
package = self._findInstallPackage(packageList, installEntry.name)
# if this is the first time the package is found, add an entry in the install JSON data.
if package not in installJSONData:
installJSONData[package]={}
if installEntry.name:
logging.warn("Gathering info for: %s" % installEntry.name)
installItem['name'] = installEntry.name
installItem['ien'] = installEntry.ien
installItem['label'] = installEntry.name
installItem['value'] = len(installJSONData[package])
if installEntry.name in patchOrderGen._kidsDepBuildDict:
installchildren = []
for child in patchOrderGen._kidsDepBuildDict[installEntry.name]:
childPackage = self._findInstallPackage(packageList,child)
installchildren.append({"name": child, "package": childPackage});
installItem['children'] = installchildren
if '11' in installEntry.fields:
installItem['installDate'] = installEntry.fields['11'].value.strftime("%Y-%m-%d")
if '1' in installEntry.fields:
installItem['packageLink'] = installEntry.fields['1'].value
if '40' in installEntry.fields:
installItem['numRoutines'] = len(installEntry.fields['40'].value.dataEntries)
if '14' in installEntry.fields:
installItem['numFiles'] = len(installEntry.fields['14'].value.dataEntries)
# Checks for the absence of asterisks which usually denotes a package change.
testMatch = re.search("\*+",installEntry.name)
if testMatch is None:
installItem['packageSwitch'] = True
installJSONData[package][installEntry.name] = installItem
installJSONData['MultiBuild']={}
for multiBuildFile in patchOrderGen._multiBuildDict:
multibuildItem = {}
multibuildItem['name']=os.path.basename(multiBuildFile);
multibuildItem['children'] = []
for installName in patchOrderGen._multiBuildDict[multiBuildFile]:
package = self._findInstallPackage(packageList, installName)
multibuildItem['children'].append({"name": installName, "package": package});
installJSONData['MultiBuild'][os.path.basename(multiBuildFile)] = multibuildItem
logging.warn("About to dump data into %s" % output)
json.dump(installJSONData,installDataOut)
def _resolveSelfPointer(self):
""" Replace self-reference with meaningful data """
for fileNo in self._pointerRef:
if fileNo in self._glbData:
fileData = self._glbData[fileNo]
for ien, fields in self._pointerRef[fileNo].iteritems():
if ien in fileData.dataEntries:
name = fileData.dataEntries[ien].name
if not name: name = str(ien)
for field in fields:
field.value = "^".join((field.value, name))
del self._pointerRef
self._pointerRef = {}
def _parseFileDetail(self, dataEntry, ien):
if 'GL' in dataEntry:
loc = dataEntry['GL'].value
loc = normalizeGlobalLocation(loc)
self._glbLocMap[ien] = loc
def _parseDataBySchema(self, dataRoot, fileSchema, outGlbData):
""" first sort the schema Root by location """
locFieldDict = sortSchemaByLocation(fileSchema)
""" for each data entry, parse data by location """
floatKey = getKeys(dataRoot, float)
for ien in floatKey:
if float(ien) <=0:
continue
#if level == 0 and int(ien) != 160: continue
dataEntry = dataRoot[ien]
outDataEntry = FileManDataEntry(fileSchema.getFileNo(), ien)
dataKeys = [x for x in dataEntry]
sortedKey = sorted(dataKeys, cmp=sortDataEntryFloatFirst)
for locKey in sortedKey:
if locKey == '0' and fileSchema.getFileNo() == '1':
self._parseFileDetail(dataEntry[locKey], ien)
if locKey in locFieldDict:
fieldDict = locFieldDict[locKey] # a dict of {pos: field}
curDataRoot = dataEntry[locKey]
if len(fieldDict) == 1:
fieldAttr = fieldDict.values()[0]
if fieldAttr.isSubFilePointerType(): # Multiple
self._parseSubFileField(curDataRoot, fieldAttr, outDataEntry)
else:
self._parseSingleDataValueField(curDataRoot, fieldAttr,
outDataEntry)
else:
self._parseDataValueField(curDataRoot, fieldDict, outDataEntry)
outGlbData.addFileManDataEntry(ien, outDataEntry)
if fileSchema.getFileNo() == self._curFileNo:
self._addFileKeyIndex(self._curFileNo, ien, outDataEntry.name)
def _parseSingleDataValueField(self, dataEntry, fieldAttr, outDataEntry):
if not dataEntry.value:
return
values = dataEntry.value.split('^')
location = fieldAttr.getLocation()
dataValue = None
if location:
index, loc = location.split(';')
if loc:
if convertToType(loc, int):
intLoc = int(loc)
if intLoc > 0 and intLoc <= len(values):
dataValue = values[intLoc-1]
else:
dataValue = str(dataEntry.value)
else:
dataValue = str(dataEntry.value)
if dataValue:
self._parseIndividualFieldDetail(dataValue, fieldAttr, outDataEntry)
def _parseDataValueField(self, dataRoot, fieldDict, outDataEntry):
if not dataRoot.value:
return
values = dataRoot.value.split('^')
if not values: return # this check is very important
for idx, value in enumerate(values, 1):
if value and str(idx) in fieldDict:
fieldAttr = fieldDict[str(idx)]
self._parseIndividualFieldDetail(value, fieldAttr, outDataEntry)
def _parseIndividualFieldDetail(self, value, fieldAttr, outDataEntry):
if not value.strip(' '):
return
value = value.strip(' ')
fieldDetail = value
pointerFileNo = None
if fieldAttr.isSetType():
setDict = fieldAttr.getSetMembers()
if setDict and value in setDict:
fieldDetail = setDict[value]
elif fieldAttr.isFilePointerType() or fieldAttr.isVariablePointerType():
fileNo = None
ien = None
if fieldAttr.isFilePointerType():
filePointedTo = fieldAttr.getPointedToFile()
if filePointedTo:
fileNo = filePointedTo.getFileNo()
ien = value
else:
fieldDetail = 'No Pointed to File'
else: # for variable pointer type
vpInfo = value.split(';')
if len(vpInfo) != 2:
logging.error("Unknown variable pointer format: %s" % value)
fieldDetail = "Unknow Variable Pointer"
else:
fileNo = self.getFileNoByGlobalLocation(vpInfo[1])
ien = vpInfo[0]
if not fileNo:
logging.warn("Could not find File for %s" % value)
fieldDetail = 'Global Root: %s, IEN: %s' % (vpInfo[1], ien)
if fileNo and ien:
fieldDetail = '^'.join((fileNo, ien))
idxName = self._getFileKeyIndex(fileNo, ien)
if idxName:
idxes = str(idxName).split('^')
if len(idxes) == 1:
fieldDetail = '^'.join((fieldDetail, str(idxName)))
elif len(idxes) == 3:
fieldDetail = '^'.join((fieldDetail, str(idxes[-1])))
elif fileNo == self._curFileNo:
pointerFileNo = fileNo
else:
logging.warn("Can not find value for %s, %s" % (ien, fileNo))
elif fieldAttr.getType() == FileManField.FIELD_TYPE_DATE_TIME: # datetime
if value.find(',') >=0:
fieldDetail = horologToDateTime(value)
else:
outDt = fmDtToPyDt(value)
if outDt:
fieldDetail = outDt
else:
logging.warn("Could not parse Date/Time: %s" % value)
elif fieldAttr.getName().upper().startswith("TIMESTAMP"): # timestamp field
if value.find(',') >=0:
fieldDetail = horologToDateTime(value)
if outDataEntry:
dataField = FileManDataField(fieldAttr.getFieldNo(),
fieldAttr.getType(),
fieldAttr.getName(),
fieldDetail)
if pointerFileNo:
self._addDataFieldToPointerRef(pointerFileNo, value, dataField)
outDataEntry.addField(dataField)
if fieldAttr.getFieldNo() == '.01':
outDataEntry.name = fieldDetail
outDataEntry.type = fieldAttr.getType()
return fieldDetail
def _addDataFieldToPointerRef(self, fileNo, ien, dataField):
self._pointerRef.setdefault(fileNo, {}).setdefault(ien, set()).add(dataField)
def _addFileKeyIndex(self, fileNo, ien, value):
ienDict = self._fileKeyIndex.setdefault(fileNo, {})
if ien not in ienDict:
ienDict[ien] = value
def _getFileKeyIndex(self, fileNo, ien):
if fileNo in self._fileKeyIndex:
if ien in self._fileKeyIndex[fileNo]:
return self._fileKeyIndex[fileNo][ien]
return None
def _parseSubFileField(self, dataRoot, fieldAttr, outDataEntry):
logging.debug ("%s" % (fieldAttr.getName() + ':'))
subFile = fieldAttr.getPointedToSubFile()
if fieldAttr.hasSubType(FileManField.FIELD_TYPE_WORD_PROCESSING):
outLst = self._parsingWordProcessingNode(dataRoot)
outDataEntry.addField(FileManDataField(fieldAttr.getFieldNo(),
FileManField.FIELD_TYPE_WORD_PROCESSING,
fieldAttr.getName(),
outLst))
elif subFile:
subFileData = FileManFileData(subFile.getFileNo(),
subFile.getFileManName())
self._parseDataBySchema(dataRoot, subFile, subFileData)
outDataEntry.addField(FileManDataField(fieldAttr.getFieldNo(),
FileManField.FIELD_TYPE_SUBFILE_POINTER,
fieldAttr.getName(),
subFileData))
else:
logging.info ("Sorry, do not know how to intepret the schema %s" %
fieldAttr)
def _parsingWordProcessingNode(self, dataRoot):
outLst = []
for key in getKeys(dataRoot, int):
if '0' in dataRoot[key]:
outLst.append("%s" % dataRoot[key]['0'].value)
return outLst
def testGlobalParser(crosRef=None):
parser = createArgParser()
result = parser.parse_args()
print result
from InitCrossReferenceGenerator import parseCrossRefGeneratorWithArgs
from FileManDataToHtml import FileManDataToHtml
crossRef = parseCrossRefGeneratorWithArgs(result)
glbDataParser = FileManGlobalDataParser(crossRef)
#glbDataParser.parseAllZWRGlobaFilesBySchema(result.MRepositDir, allSchemaDict)
allFiles = glbDataParser.getAllFileManZWRFiles(os.path.join(result.MRepositDir,
'Packages'),
"*/Globals/*.zwr")
assert '0' in allFiles and '1' in allFiles and set(result.fileNos).issubset(allFiles)
schemaParser = FileManSchemaParser()
allSchemaDict = schemaParser.parseSchemaDDFileV2(allFiles['0']['path'])
isolatedFiles = schemaParser.isolatedFiles
glbDataParser.parseZWRGlobalFileBySchemaV2(allFiles['1']['path'],
allSchemaDict, '1', '^DIC(')
glbDataParser._allFiles = allFiles
glbDataParser._allSchemaDict = allSchemaDict
for fileNo in result.fileNos:
assert fileNo in glbDataParser.globalLocationMap
if result.outdir:
glbDataParser.outDir = result.outdir
if result.patchRepositDir:
glbDataParser.patchDir = result.patchRepositDir
htmlGen = FileManDataToHtml(crossRef, result.outdir)
if not result.all or set(result.fileNos).issubset(isolatedFiles):
for fileNo in result.fileNos:
gdFile = allFiles[fileNo]['path']
logging.info("Parsing file: %s at %s" % (fileNo, gdFile))
glbDataParser.parseZWRGlobalFileBySchemaV2(gdFile,
allSchemaDict,
fileNo)
if result.outdir:
htmlGen.outputFileManDataAsHtml(glbDataParser)
else:
fileManDataMap = glbDataParser.outFileManData
for file in getKeys(fileManDataMap.iterkeys(), float):
printFileManFileData(fileManDataMap[file])
del glbDataParser.outFileManData[fileNo]
glbDataParser.outRtnReferenceDict()
return
""" Also generate all required files as well """
sccSet = schemaParser.sccSet
fileSet = set(result.fileNos)
for idx, value in enumerate(sccSet):
fileSet.difference_update(value)
if not fileSet:
break
for i in xrange(0,idx+1):
fileSet = sccSet[i]
fileSet &= set(allFiles.keys())
fileSet -= isolatedFiles
fileSet.discard('757')
if len(fileSet) > 1:
for file in fileSet:
zwrFile = allFiles[file]['path']
globalSub = allFiles[file]['name']
logging.info("Generate file key index for: %s at %s" % (file, zwrFile))
glbDataParser.generateFileIndex(zwrFile, allSchemaDict, file)
for file in fileSet:
zwrFile = allFiles[file]['path']
globalSub = allFiles[file]['name']
logging.info("Parsing file: %s at %s" % (file, zwrFile))
glbDataParser.parseZWRGlobalFileBySchemaV2(zwrFile,
allSchemaDict,
file)
if result.outdir:
htmlGen.outputFileManDataAsHtml(glbDataParser)
del glbDataParser.outFileManData[file]
def horologToDateTime(input):
"""
Convert a MUMPS $HOROLOG value to a Python datetime.
"""
from datetime import timedelta
originDt = datetime(1840,12,31,0,0,0)
if input.find(',') < 0: # invalid format
return None
days, seconds = input.split(',')
return originDt + timedelta(int(days), int(seconds))
def test_horologToDateTime():
input = (
('57623,29373', datetime(1998,10,7,8,9,33)),
)
for one, two in input:
assert horologToDateTime(one) == two, "%s, %s" % (one, two)
def normalizeGlobalLocation(input):
if not input:
return input
result = input
if input[0] != '^':
result = '^' + result
if input[-1] == ',':
result = result[0:-1]
return result
def test_normalizeGlobalLocation():
input = (
('DIPT(', '^DIPT('),
('^DIPT(', '^DIPT('),
('DIPT("IX",', '^DIPT("IX"'),
)
for one, two in input:
assert normalizeGlobalLocation(one) == two, "%s, %s" % (one, two)
def createArgParser():
import argparse
from InitCrossReferenceGenerator import createInitialCrossRefGenArgParser
initParser = createInitialCrossRefGenArgParser()
parser = argparse.ArgumentParser(description='FileMan Global Data Parser',
parents=[initParser])
#parser.add_argument('ddFile', help='path to ZWR file contains DD global')
#parser.add_argument('gdFile', help='path to ZWR file contains Globals data')
parser.add_argument('fileNos', help='FileMan File Numbers', nargs='+')
#parser.add_argument('glbRoot', help='Global root location for FileMan file')
parser.add_argument('-outdir', help='top directory to generate output in html')
parser.add_argument('-all', action='store_true',
help='generate all dependency files as well')
return parser
def unit_test():
test_normalizeGlobalLocation()
test_horologToDateTime()
test_getMumpsRoutine()
def main():
from LogManager import initConsoleLogging
initConsoleLogging(formatStr='%(asctime)s %(message)s')
unit_test()
#test_FileManDataEntry()
testGlobalParser()
if __name__ == '__main__':
main()
```
#### File: Dox/PythonScripts/LogManager.py
```python
import logging
import sys
logger = logging.getLogger()
def initConsoleLogging(defaultLevel=logging.INFO,
formatStr = '%(asctime)s %(levelname)s %(message)s'):
logger.setLevel(defaultLevel)
consoleHandler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter(formatStr)
consoleHandler.setLevel(defaultLevel)
consoleHandler.setFormatter(formatter)
logger.addHandler(consoleHandler)
``` |
{
"source": "JosephSolomon99/Web_Scraping",
"score": 3
} |
#### File: Web_Scraping/Scraper/scraper.py
```python
from methods import *
#from secrets import (LINKEDINUSERNAME, LINKEDINPASSWORD)
def main():
'''
Function that controls scraper script
'''
website = "https://www.linkedin.com/feed/"
chrome_options = webdriver.ChromeOptions()
user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36'
chrome_options.add_argument("window-size=1920,1080")
chrome_options.add_argument("--headless")
chrome_options.add_argument(f'user-agent={user_agent}')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--disable-dev-shm-usage')
chrome_options.add_argument('--ignore-certificate-errors')
chrome_options.add_argument('--allow-running-insecure-content')
chrome_options.add_argument("--allow-insecure-localhost")
chrome_options.add_experimental_option('detach', True)
LINKEDINUSERNAME = input("Enter Linkedin username: ")
LINKEDINPASSWORD = input("\n Enter Linkedin password: ")
scraper = WebDriver(chrome_options, website, LINKEDINUSERNAME, LINKEDINPASSWORD)
scraper.driver.implicitly_wait(2)
scraper.driver.get(website)
sleep(3)
scraper.accept_cookies()
sleep(2)
scraper.log_me_in()
sleep(2)
scraper.get_database_details()
sleep(1)
# Edit this to change search term and location
search_term = input("Enter job title to search for: ")
search_location = input("Enter location to search in: ")
scraper.search_term(search_term, search_location)
sleep(2)
scraper.extract_job_details()
sleep(2)
print("\n\nScraping has been completed")
scraper.driver.quit()
if __name__ == "__main__":
# safeguard used to prevent script running
# automatically if it's imported into another file
main()
``` |
{
"source": "josephsookim/v2",
"score": 2
} |
#### File: josephsookim/v2/main.py
```python
from flask import Flask, render_template
from flask_talisman import Talisman
app = Flask(__name__)
csp = {
'default-src': [
'\'self\'',
],
'script-src': [
'\'self\'',
'\'unsafe-inline\'',
'cdn.jsdelivr.net',
'ajax.googleapis.com',
'https://www.google-analytics.com',
'https://ssl.google-analytics.com',
'https://www.googletagmanager.com',
'cdnjs.cloudflare.com'
],
'style-src': [
'\'self\'',
'\'unsafe-inline\'',
'cdnjs.cloudflare.com',
'cdn.jsdelivr.net',
'fonts.googleapis.com'
],
'font-src': [
'\'self\'',
'cdnjs.cloudflare.com',
'fonts.gstatic.com'
],
'img-src': [
'\'self\'',
'https://www.google-analytics.com',
'www.googletagmanager.com'
],
'connect-src': '*',
}
Talisman(
app,
content_security_policy=csp,
)
@app.route('/', methods=['GET'])
def home():
return render_template('index.html')
if __name__ == '__main__':
app.run()
``` |
{
"source": "josephstalin117/pytorch-DCRNN",
"score": 2
} |
#### File: pytorch-DCRNN/lib/metrics.py
```python
import numpy as np
import torch
def masked_mse_torch(preds, labels, null_val=np.nan):
"""
Mean squared error with masking.
:param preds:
:param labels:
:param null_val:
:return:
"""
if np.isnan(null_val):
# mask = ~tf.is_nan(labels)
mask = ~torch.isnan(labels)
else:
# mask = tf.not_equal(labels, null_val)
mask = torch.ne(labels, null_val)
# mask = tf.cast(mask, tf.float32)
mask = mask.to(torch.float32)
# mask /= tf.reduce_mean(mask)
mask /= torch.mean(mask)
# mask = tf.where(tf.is_nan(mask), tf.zeros_like(mask), mask)
mask = torch.where(torch.isnan(mask), torch.zeros_like(mask), mask)
# loss = tf.square(tf.subtract(preds, labels))
loss = torch.pow(preds - labels, 2)
loss = loss * mask
# loss = tf.where(tf.is_nan(loss), tf.zeros_like(loss), loss)
loss = torch.where(torch.isnan(loss), torch.zeros_like(loss), loss)
return torch.mean(loss)
def masked_mae_torch(preds, labels, null_val=np.nan):
"""
Mean absolute error with masking.
:param preds:
:param labels:
:param null_val:
:return:
"""
if np.isnan(null_val):
mask = ~torch.isnan(labels)
else:
mask = torch.ne(labels, null_val)
mask = mask.to(torch.float32)
mask /= torch.mean(mask)
mask = torch.where(torch.isnan(mask), torch.zeros_like(mask), mask)
loss = torch.abs(preds - labels)
loss = loss * mask
loss = torch.where(torch.isnan(loss), torch.zeros_like(loss), loss)
return torch.mean(loss)
def masked_rmse_torch(preds, labels, null_val=np.nan):
"""
Root mean squared error with masking.
:param preds:
:param labels:
:param null_val:
:return:
"""
return torch.sqrt(masked_mse_torch(preds=preds, labels=labels, null_val=null_val))
def masked_rmse_np(preds, labels, null_val=np.nan):
return np.sqrt(masked_mse_np(preds=preds, labels=labels, null_val=null_val))
def masked_mse_np(preds, labels, null_val=np.nan):
with np.errstate(divide='ignore', invalid='ignore'):
if np.isnan(null_val):
mask = ~np.isnan(labels)
else:
mask = np.not_equal(labels, null_val)
mask = mask.astype('float32')
mask /= np.mean(mask)
rmse = np.square(np.subtract(preds, labels)).astype('float32')
rmse = np.nan_to_num(rmse * mask)
return np.mean(rmse)
def masked_mae_np(preds, labels, null_val=np.nan):
with np.errstate(divide='ignore', invalid='ignore'):
if np.isnan(null_val):
mask = ~np.isnan(labels)
else:
mask = np.not_equal(labels, null_val)
mask = mask.astype('float32')
mask /= np.mean(mask)
mae = np.abs(np.subtract(preds, labels)).astype('float32')
mae = np.nan_to_num(mae * mask)
return np.mean(mae)
def masked_mape_np(preds, labels, null_val=np.nan):
with np.errstate(divide='ignore', invalid='ignore'):
if np.isnan(null_val):
mask = ~np.isnan(labels)
else:
mask = np.not_equal(labels, null_val)
mask = mask.astype('float32')
mask /= np.mean(mask)
mape = np.abs(np.divide(np.subtract(preds, labels).astype('float32'), labels))
mape = np.nan_to_num(mask * mape)
return np.mean(mape)
# Builds loss function.
def masked_mse_loss(scaler, null_val):
def loss(preds, labels):
if scaler:
preds = scaler.inverse_transform(preds)
labels = scaler.inverse_transform(labels)
return masked_mse_torch(preds=preds, labels=labels, null_val=null_val)
return loss
def masked_rmse_loss(scaler, null_val):
def loss(preds, labels):
if scaler:
preds = scaler.inverse_transform(preds)
labels = scaler.inverse_transform(labels)
return masked_rmse_torch(preds=preds, labels=labels, null_val=null_val)
return loss
def masked_mae_loss(scaler, null_val):
def loss(preds, labels):
if scaler:
preds = scaler.inverse_transform(preds)
labels = scaler.inverse_transform(labels)
mae = masked_mae_torch(preds=preds, labels=labels, null_val=null_val)
return mae
return loss
# def masked_mae_loss(null_val):
# def loss(preds, labels):
# mae = masked_mae_torch(preds=preds, labels=labels, null_val=null_val)
# return mae
# return loss
def calculate_metrics(df_pred, df_test, null_val):
"""
Calculate the MAE, MAPE, RMSE
:param df_pred:
:param df_test:
:param null_val:
:return:
"""
# DataFrame.as_matrix() was removed in pandas 1.0; use to_numpy() instead.
mape = masked_mape_np(preds=df_pred.to_numpy(), labels=df_test.to_numpy(), null_val=null_val)
mae = masked_mae_np(preds=df_pred.to_numpy(), labels=df_test.to_numpy(), null_val=null_val)
rmse = masked_rmse_np(preds=df_pred.to_numpy(), labels=df_test.to_numpy(), null_val=null_val)
return mae, mape, rmse
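# Illustrative usage sketch (the tensors below are made up, not DCRNN data):
# the loss factories above return closures, with null_val marking readings
# that should be excluded from the average.
def _demo_masked_mae_loss():
    preds = torch.tensor([[1.0, 2.0], [3.0, 0.0]])
    labels = torch.tensor([[1.5, 0.0], [2.0, 4.0]])
    criterion = masked_mae_loss(scaler=None, null_val=0.0)
    # only the three non-zero label entries contribute to the mean
    return criterion(preds, labels)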
``` |
{
"source": "josephsurin/UoM-WAM-Spam",
"score": 3
} |
#### File: josephsurin/UoM-WAM-Spam/wamspam.py
```python
import time
import getpass
import requests
from bs4 import BeautifulSoup
from functools import partial
from notification import *
# # #
# SCRIPT CONFIGURATION
#
# set this to True if you would like the script to repeatedly check the results
# page, or False if you only want it to run once
CHECK_REPEATEDLY = True
# if you set the script to check repeatedly above, you can configure the delay
# between WAM checks in minutes here
DELAY_BETWEEN_CHECKS = 60 # minutes
# leave these lines unchanged to be prompted for your username and password
# every time you run the script, or just hard code your credentials here if
# you're lazy (but then be careful not to let anyone else see this file)
UNIMELB_USERNAME = input("Username: ")
UNIMELB_PASSWORD = getpass.getpass()
# your WAM will be stored in this file in between checks
WAM_FILENAME = "wam.txt"
# # #
# WEB SCRAPING CONFIGURATION
#
# if you have multiple degrees, set this to the id of the degree with the WAM
# you want the script to watch (0, 1, 2, ... based on order from results page).
# if you only have a single degree, you can ignore this one.
DEGREE_INDEX = int(input("Degree index (or just press enter): ") or 0)
# select the HTML parser for BeautifulSoup to use. in most cases, you won't
# have to touch this.
BS4_PARSER = "html.parser"
# # #
# EMAIL CONFIGURATION
#
# here we specify the format of the email messages (customise to your liking)
SUBJECT = "WAM Update Detected"
EMAIL_TEMPLATE = """Hello there!
{message}
Love,
WAM Spammer
"""
INCREASE_MESSAGE_TEMPLATE = """
I noticed that your WAM increased from {before} to {after}.
Congratulations! The hard work paid off (and I'm sure there
was a little luck involved, too).
"""
DECREASE_MESSAGE_TEMPLATE = """
I noticed that your WAM changed from {before} to {after}.
That's okay, I know you tried your best, and that's all
anyone can ask for.
"""
FIRSTMSG_MESSAGE_TEMPLATE = """
I noticed that your WAM is {after}. I hope it makes you
happy. Anyway, I'll keep an eye on it for you from now on,
and I'll let you know if it changes.
"""
HELLO_SUBJECT = "Hello! I'm WAM Spammer"
HELLO_MESSAGE = """
I'm <NAME>. This is just a test message to tell you
that I'm running. I'll look out for a change to your WAM
every so often---unless I crash! Every now and then you
should probably check to make sure nothing has gone wrong.
"""
# let's get to it!
def main():
"""Run the checking script, once or forever, depending on configuration."""
# conduct the first check! don't catch any exceptions here, if the
# check fails this first time, it's likely to be a configuration problem
# (e.g. wrong username/password) so we should crash the script to let the
# user know.
# notification_helper = EmailNotification(UNIMELB_USERNAME, UNIMELB_PASSWORD)
notification_helpers = select_notification_method()
poll_and_notify(notification_helpers)
# also send a test message to make sure the email configuration is working
for notification_helper in notification_helpers:
notification_helper.notify(HELLO_SUBJECT, EMAIL_TEMPLATE.format(message=HELLO_MESSAGE))
while CHECK_REPEATEDLY:
print("Sleeping", DELAY_BETWEEN_CHECKS, "minutes before next check.")
time.sleep(DELAY_BETWEEN_CHECKS * 60) # seconds
print("Waking up!")
try:
poll_and_notify(notification_helpers)
except Exception as e:
# if we get an exception now, it may have been some temporary
# problem accessing the website, let's just ignore it and try
# again next time.
print("Exception encountered:")
print(f"{e.__class__.__name__}: {e}")
print("Hopefully it won't happen again. Continuing.")
def poll_and_notify(notification_helpers):
"""
Check for an updated WAM, and send an email notification if any change is
detected.
"""
# check the results page for the updated WAM
new_wam_text = scrape_wam()
if new_wam_text is None:
# no WAM found
return
new_wam = float(new_wam_text)
# load the previous WAM to compare against
try:
with open(WAM_FILENAME) as wamfile:
old_wam_text = wamfile.read()
old_wam = float(old_wam_text)
except:
# the first time we run the script, there probably won't be a WAM file
old_wam = None
# detect the type of WAM change so that we can choose which message to send
if old_wam is None:
message_template = FIRSTMSG_MESSAGE_TEMPLATE
elif new_wam > old_wam:
message_template = INCREASE_MESSAGE_TEMPLATE
elif new_wam < old_wam:
message_template = DECREASE_MESSAGE_TEMPLATE
else:
print("No change to WAM---stop before triggering notifications.")
return
# compose and send the message
message = message_template.format(before=old_wam, after=new_wam)
email_text = EMAIL_TEMPLATE.format(message=message)
for notification_helper in notification_helpers:
notification_helper.notify(SUBJECT, email_text)
# update the wam file for next time
with open(WAM_FILENAME, 'w') as wamfile:
wamfile.write(f"{new_wam}\n")
class InvalidLoginException(Exception):
"""Represent a login form validation error"""
def scrape_wam(username=UNIMELB_USERNAME, password=UNIMELB_PASSWORD):
with requests.Session() as session:
# step 1. load a login page to initialse session
print("Logging in to the results page")
response = session.get("https://prod.ss.unimelb.edu.au"
"/student/SM/ResultsDtls10.aspx?f=$S1.EST.RSLTDTLS.WEB")
soup = BeautifulSoup(response.content, BS4_PARSER)
# step 2. fill in login form and authenticate, reaching results page
# get the form's hidden field values into the POST data
hidden_fields = soup.find_all('input', type='hidden')
login_form = {tag['name']: tag['value'] for tag in hidden_fields}
# simulate filling in the form with username and password,
# and pressing the login button
login_form['ctl00$Content$txtUserName$txtText'] = username
login_form['ctl00$Content$txtPassword$txtText'] = password
login_form['__EVENTTARGET'] = "ctl00$Content$cmdLogin"
# post the form, with a URL that will take us back to the results page
response = session.post("https://prod.ss.unimelb.edu.au/student/"
"login.aspx?f=$S1.EST.RSLTDTLS.WEB&ReturnUrl=%2fstudent%2fSM%2f"
"ResultsDtls10.aspx%3ff%3d%24S1.EST.RSLTDTLS.WEB", data=login_form)
# detect a potential failed login
soup = BeautifulSoup(response.content, BS4_PARSER)
if soup.find(id="ctl00_Content_valErrors"):
raise InvalidLoginException("Your login attempt was not successful."
" Please check your details and try again.")
# now `soup` should be the parsed results page or multi-degree page...
# step 3. if necessary, navigate to a specific degree page (for
# multi-degree students)
title = soup.find(id="ctl00_h1PageTitle")
if title.text == "Results > Choose a Study Plan":
print("Multiple degrees detected.")
degree_grid = soup.find("table", id="ctl00_Content_grdResultPlans")
cell = degree_grid.find_all("tr")[DEGREE_INDEX+1].find_all("td")[2]
print(f"Loading results for degree {DEGREE_INDEX} - {cell.text}")
# get the form's hidden field values into the POST data
hidden_fields = soup.find_all('input', type='hidden')
degree_form = {tag['name']: tag['value'] for tag in hidden_fields}
# now simulate pressing the required degree button
degree_form['__EVENTTARGET'] = "ctl00$Content$grdResultPlans"
degree_form['__EVENTARGUMENT'] = f"ViewResults${DEGREE_INDEX}"
# post the form, to take us to the results page proper
response = session.post("https://prod.ss.unimelb.edu.au/student/SM/"
"ResultsDtls10.aspx?f=$S1.EST.RSLTDTLS.WEB", data=degree_form)
soup = BeautifulSoup(response.content, BS4_PARSER)
# now `soup` should be the parsed results page for the chosen degree...
# step 4. extract the actual WAM text from the results page, as required
print("Extracting WAM")
wam_para = soup.find(class_='UMWAMText')
if wam_para is not None:
wam_text = wam_para.find('b').text
else:
print("Couldn't find WAM (no WAM yet?)")
wam_text = None
return wam_text
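# The helper below is a self-contained sketch of the hidden-field harvesting
# used in scrape_wam(); the HTML is made up and is not the real results page.
def _demo_hidden_field_harvesting():
    html = ('<form>'
            '<input type="hidden" name="__VIEWSTATE" value="abc"/>'
            '<input type="hidden" name="__EVENTVALIDATION" value="xyz"/>'
            '</form>')
    soup = BeautifulSoup(html, BS4_PARSER)
    hidden_fields = soup.find_all('input', type='hidden')
    # copy every hidden ASP.NET field into the POST data, then add our own keys
    return {tag['name']: tag['value'] for tag in hidden_fields}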
def select_notification_method() -> NotificationHelper:
print()
methods = [
("Email", partial(EmailNotification, UNIMELB_USERNAME, UNIMELB_PASSWORD)),
("Pushbullet", PushBulletNotification),
("ServerChan (WeChat)", ServerChanNotification),
("Telegram Bot", TelegramBotNotification),
("IFTTT Webhook", IFTTTWebhookNotification),
("Desktop Notifications", DesktopNotification),
("Log File", LogFile)
]
for i, m in enumerate(methods):
print("{}: {}".format(i, m[0]))
inp = input("Please select your preferred notification method(s) (comma delimited): ")
try:
selected = set([int(i) for i in inp.split(',')])
print("You have selected:", ", ".join([methods[i][0] for i in selected]))
except:
print(f"There was an error in your input, defaulting to method 0: {methods[0][0]}")
return [methods[0][1]()]
helpers = []
for i in selected:
helpers.append(methods[i][1]())
return helpers
if __name__ == '__main__':
main()
``` |
{
"source": "josephsv96/crate_classifier",
"score": 2
} |
#### File: crate_classifier/src/augmentation.py
```python
import numpy as np
from imgaug import augmenters as iaa
from pathlib import Path
from tqdm import tqdm
import cv2
# Local Modules
from src.utils import create_output_folder
from src.utils import save_npy_v2
from src.utils import read_cmp
from src.utils import write_cmp
from src.utils import get_timestamp
class Augmenter:
"""Class load image sets at different exposures and their annotations
to generate augmented images from them.
"""
def __init__(self, PARAMS, img_paths, ann_paths, out_dir=None):
self.img_paths = img_paths
self.ann_paths = ann_paths
self.src_h, self.src_w = PARAMS["img_src_shape"]
self.out_h, self.out_w = PARAMS["net_in_shape"]
self.num_exp = PARAMS["num_exp"]
self.num_img = int(len(img_paths) / self.num_exp)
self.AUG_CONFIG = PARAMS["augmentation"]
self.GAUSS_CONFIG = PARAMS["guassian"]
self.out_dir = out_dir
if out_dir is None:
out_path = create_output_folder(Path.cwd(),
folder_name="output_")
self.out_dir = "/".join(str(out_path).split("\\")
) + "/" + get_timestamp()
(Path(self.out_dir)).mkdir()
(Path(self.out_dir) / "images").mkdir()
(Path(self.out_dir) / "npy_images").mkdir()
(Path(self.out_dir) / "npy_annots").mkdir()
@ staticmethod
def get_augmenters(num_gen, aug_config):
"""Generate a list of deterministic augmenters. num_gen = 0 or None
will return a single Augmenter
Args:
num_gen (int): Number of augmenters
Returns:
list: List of Augmeters
"""
seq_img = iaa.Sequential([iaa.Fliplr(0.5),
iaa.Affine(scale=aug_config["scale"],
translate_percent={
"x": aug_config["trans_x"],
"y": aug_config["trans_y"]},
rotate=aug_config["rotate"],
shear=aug_config["shear"]
)],
random_order=False, random_state=0)
seq_img_list = seq_img.to_deterministic(n=num_gen)
return seq_img_list
@ staticmethod
def to_square(img_instance):
"""Converts a wide image to square image, without loosing resolution.
Args:
img_instance (numpy.array): Image in wide resolution
Returns:
numpy.array: Squared image
"""
if len(img_instance.shape) != 3:
img_instance = np.expand_dims(img_instance, axis=-1)
if img_instance.shape[1] > img_instance.shape[0]:
dim = img_instance.shape[0]
mid = int(img_instance.shape[1]/2)
img_sq = img_instance[:, int(mid-dim/2):int(mid+dim/2), :]
else:
dim = img_instance.shape[1]
mid = int(img_instance.shape[0]/2)
img_sq = img_instance[int(mid-dim/2):int(mid+dim/2), :, :]
return img_sq
@ staticmethod
def gaussian_blur(img_instance, gauss_config):
"""Apply Gaussian filter to an image instance
Args:
img_instance (numpy.array): Input image array
gauss_config (dict): Filter parameters
Returns:
numpy.array: Blurred image output
"""
img_instance = cv2.GaussianBlur(img_instance,
ksize=tuple(gauss_config["ksize"]),
sigmaX=gauss_config["sigma"],
borderType=cv2.BORDER_DEFAULT)
return img_instance
def get_img_augs(self, img_files, augmenter):
"""Apply augmenter to img_files array.
Args:
img_files (numpy.array): Image set; (height, width, num_exp * 3)
augmenter (imgaug.augmenters.meta.Augmenter): Augmenter
Returns:
numpy.array: Augmented image set of (out_h, out_w, num_exp * 3)
"""
img_aug = np.zeros([self.out_h, self.out_w, self.num_exp*3],
dtype=np.float32)
j = 0
for i in range(self.num_exp):
image_instance = cv2.imread(str(img_files[i]),
cv2.IMREAD_UNCHANGED)
aug_img = augmenter.augment_image(image_instance)
img_buffer = self.to_square(self.gaussian_blur(aug_img,
self.GAUSS_CONFIG))
img_buffer = cv2.resize(img_buffer, (self.out_h, self.out_w),
interpolation=cv2.INTER_NEAREST)
img_aug[:, :, j:j+3] = img_buffer
j += 3
return img_aug
def get_ann_augs(self, annot, augmenter):
"""Apply augmenter to annotation array.
Args:
annot (numpy.array): Annotation; (height, width)
augmenter (imgaug.augmenters.meta.Augmenter): Augmenter
Returns:
numpy.array: Augmented annotation set of (out_h, out_w, 1)
"""
annot_aug = np.zeros([self.out_h, self.out_w, 1],
dtype=np.float32)
aug_img = augmenter.augment_image(annot[:, :])
ann_buffer = self.to_square(aug_img)
annot_aug[:, :, 0] = cv2.resize(ann_buffer, (self.out_h, self.out_w),
interpolation=cv2.INTER_NEAREST)
return annot_aug
def generate_aug(self, num_gen, r_state=1, write_img=False, start_index=0):
"""To generate augmented image and annotation sets.
Same augmentation is applied to a set of images and its annotation.
Args:
num_gen (int): Number of images to be generated.
r_state (int, optional): Random state to select image set.
Defaults to 1.
write_img (bool, optional): Flag to write images to output folder.
Defaults to False.
Returns:
numpy.array: Augmented image sets of shape (num_gen, out_h, out_w, num_exp*3)
numpy.array: Augmented annotation sets of shape (num_gen, out_h, out_w, 1)
"""
img_aug_arr = np.zeros([num_gen, self.out_h, self.out_w, self.num_exp*3],
dtype=np.float32)
ann_aug_arr = np.zeros([num_gen, self.out_h, self.out_w, 1],
dtype=np.float32)
# Generating augmenters
augs = self.get_augmenters(num_gen=num_gen, aug_config=self.AUG_CONFIG)
# Select image and corresponding annotation randomly
# for i in range(num_gen):
for i in tqdm(range(num_gen)):
np.random.seed(i * abs(r_state))
random_index = np.random.randint(0, self.num_img - 1)
# Select image
img_index = random_index * self.num_exp
img_files = self.img_paths[img_index:img_index + self.num_exp]
# Select annotation
ann_file = self.ann_paths[random_index]
# Image Generation
# (i,128,128,9) = f(random_index,964,1292,9)
img_aug_arr[i, :, :, :] = self.get_img_augs(img_files,
augmenter=augs[i])
# Annot Generation
# (i,128,128,1) = f(random_index,964,1292,1)
annot_instance = read_cmp(ann_file, (self.src_h, self.src_w))
ann_aug_arr[i, :, :, :] = self.get_ann_augs(annot_instance,
augmenter=augs[i])
# Saving images and masks
if write_img is True:
j = 0
exp_names = [chr(ord('`')+i+1) for i in range(self.num_exp)]
# writing images to .bmp
for _, ch in zip(range(self.num_exp), exp_names):
img_name = f"img_{str(i+start_index).zfill(6)}_{ch}.bmp"
img_file = f"{self.out_dir}/images/{img_name}"
cv2.imwrite(img_file,
img_aug_arr[i, :, :, j:j+self.num_exp])
j += self.num_exp
# writing image set to .npy
img_name = f"img_{str(i+start_index).zfill(6)}"
img_file = f"{self.out_dir}/npy_images/{img_name}"
save_npy_v2(img_aug_arr[i, :, :, :], img_file)
# writing annotation to .npy
ann_name = f"ann_{str(i+start_index).zfill(6)}"
ann_file = f"{self.out_dir}/npy_annots/{ann_name}"
save_npy_v2(ann_aug_arr[i, :, :, :], ann_file)
# !WORK-IN-PROGRESS
# writing annotation to .cmp
# ann_name = f"img_{str(i+start_index).zfill(6)}.cmp"
# ann_file = f"{self.out_dir}/images/{ann_name}"
# write_cmp(np.argmax(ann_aug_arr[i, :, :, :], axis=-1),
# ann_file)
return img_aug_arr, ann_aug_arr
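# Hypothetical driver sketch: the config path, glob patterns and num_gen below
# are illustrative assumptions, not project defaults. It pairs each exposure
# set with its annotation and writes augmented .bmp/.npy files to a fresh
# output folder.
if __name__ == "__main__":
    from src.utils import load_json  # same utility module used above
    PARAMS = load_json("config/pkg_2_config.json")
    src = Path(PARAMS["src_dir"])
    img_paths = sorted(src.glob("**/*.bmp"))
    ann_paths = sorted(src.glob("**/*.cmp"))
    augmenter = Augmenter(PARAMS, img_paths, ann_paths)
    images, annots = augmenter.generate_aug(num_gen=32, write_img=True)
    print(images.shape, annots.shape)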
```
#### File: crate_classifier/src/main.py
```python
def main():
# initialize the number of epochs to train for, initial learning rate,
# batch size, and image dimensions
"""
EPOCHS = 500
R_STATE = 0
BS = 32
INIT_LR = 1e-3
input_height = 128
input_width = 128
# STEPS = len(train_X) // BS ##steps_per_epoch
class_limit = 20"""
```
#### File: src/models/model_resnet18.py
```python
from tensorflow.keras import Model
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import Reshape
# from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Add
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.optimizers import Adam
class CrateNet:
def __init__(self, grid_h, grid_w, num_exp, num_classes, depth, init_lr, epochs):
self.grid_h = grid_h
self.grid_w = grid_w
self.num_exp = num_exp
self.num_classes = num_classes
self.depth = depth
self.init_lr = init_lr
self.epochs = epochs
@staticmethod
def res_identity(x, kernel_no, num_layer):
x_init = x
x = Conv2D(kernel_no, (3, 3), strides=(1, 1), padding='same',
name='conv_' + str(num_layer), use_bias=False)(x)
x = BatchNormalization(name='norm_' + str(num_layer))(x)
x = LeakyReLU(alpha=0.1)(x)
num_layer += 1
x = Conv2D(kernel_no, (3, 3), strides=(1, 1), padding='same',
name='conv_' + str(num_layer), use_bias=False)(x)
x = BatchNormalization(name='norm_' + str(num_layer))(x)
num_layer += 1
# Add block
x = Add()([x, x_init])
x = LeakyReLU(alpha=0.1)(x)
num_layer += 1
return x, num_layer
@staticmethod
def res_conv(x, kernel_no, num_layer):
x_init = x
x = Conv2D(kernel_no, (3, 3), strides=(1, 1), padding='same',
name='conv_' + str(num_layer), use_bias=False)(x)
x = BatchNormalization(name='norm_' + str(num_layer))(x)
x = LeakyReLU(alpha=0.1)(x)
num_layer += 1
x = Conv2D(kernel_no, (3, 3), strides=(1, 1), padding='same',
name='conv_' + str(num_layer), use_bias=False)(x)
x = BatchNormalization(name='norm_' + str(num_layer))(x)
num_layer += 1
# Skip line
x_skip = Conv2D(kernel_no, (1, 1), strides=(1, 1), padding='same',
name='skip_conv_' + str(num_layer), use_bias=False)(x_init)
x_skip = BatchNormalization(name='skip_norm_' + str(num_layer))(x_skip)
# Add block
x = Add()([x, x_skip])
x = LeakyReLU(alpha=0.1)(x)
num_layer += 1
return x, num_layer
@staticmethod
def denife_cnn(height, width, num_exposures, num_classes, depth=3):
input_layer = Input(shape=(height, width, depth*num_exposures),
name="input_1")
x = input_layer
num_layer = 1
# stack 1
x = Conv2D(8, (7, 7), strides=(1, 1), padding='same',
name='conv_' + str(num_layer), use_bias=False)(x)
x = BatchNormalization(name='norm_' + str(num_layer))(x)
x = LeakyReLU(alpha=0.1)(x)
num_layer += 1
print("stack 1")
# stack 2
# sub stack 1
x, num_layer = CrateNet.res_identity(x, 8, num_layer)
# sub stack 2
x, num_layer = CrateNet.res_identity(x, 8, num_layer)
print("stack 2")
# stack 3
# sub stack 1
x, num_layer = CrateNet.res_conv(x, 16, num_layer)
# sub stack 2
x, num_layer = CrateNet.res_identity(x, 16, num_layer)
print("stack 3")
# stack 4
# sub stack 1
x, num_layer = CrateNet.res_conv(x, 32, num_layer)
# sub stack 2
x, num_layer = CrateNet.res_identity(x, 32, num_layer)
print("stack 4")
# stack 5
# sub stack 1
x, num_layer = CrateNet.res_conv(x, 64, num_layer)
# sub stack 2
x, num_layer = CrateNet.res_identity(x, 64, num_layer)
print("stack 5")
# stack 6
x = Conv2D(num_classes, (1, 1), strides=(1, 1), padding='same',
name='conv_' + str(num_layer), use_bias=False)(x)
x = BatchNormalization(name='norm_' + str(num_layer))(x)
x = LeakyReLU(alpha=0.1)(x)
num_layer += 1
print("stack 6")
# Output Detection layer
x = Conv2D(num_classes, (1, 1), strides=(1, 1),
padding='same', name='DetectionLayer', use_bias=False)(x)
output_layer = Reshape((height, width, num_classes),
name="reshape_1")(x)
return input_layer, output_layer
@staticmethod
def build(grid_h, grid_w, num_exp, num_classes, depth, init_lr, epochs):
"""Build model with CategoricalCrossentropy loss
"""
input_l, output_l = CrateNet.denife_cnn(height=grid_h,
width=grid_w,
num_exposures=num_exp,
num_classes=num_classes,
depth=depth)
# Model Defenition
model = Model(inputs=input_l, outputs=output_l,
name="cnn_model_" + str(num_exp) + "_exp")
opt = Adam(lr=init_lr,
decay=init_lr / (epochs * 0.5))
model.compile(loss=CategoricalCrossentropy(from_logits=True),
optimizer=opt,
metrics=["accuracy"])
return model
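# Minimal usage sketch; the hyper-parameters below are illustrative
# assumptions, not the project's tuned training configuration.
if __name__ == "__main__":
    demo_model = CrateNet.build(grid_h=128, grid_w=128, num_exp=3,
                                num_classes=20, depth=3,
                                init_lr=1e-3, epochs=100)
    demo_model.summary()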
```
#### File: src/sub_modules/model_utils.py
```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook
# Local modules
from data_loader import load_json, load_npy
from preprocessing import img_preprocess, ann_preprocess
from preprocessing import resize_arr, split_data, stack_exp
from preprocessing import stack_exp_v2, ann_preprocess_v2
from utils import img_arr_to_gray
# import augmentation_dev as augmentation
# import plotting
# from train import train_model_1, train_model_2, train_model_1_v2
# Models
from models import model_14k as model_1 # baseline model
# VGG16 models
from models import model_vgg16_24k as model_2
from models import model_vgg16_34k as model_3
from models import model_vgg16_47k as model_4
# ResNet models
from models import model_resnet18_34k as model_5
from models import model_resnet18_46k as model_6
# DenseNet models
from models import model_densenet21_35k as model_7
from models import model_densenet21_48k as model_8
# Colormap object for custom colours
from matplotlib.colors import LinearSegmentedColormap
# Colors
class_colors = [(0, 0, 0),
(0, 0.5, 0), (0, 1, 1), (0.5, 0.5, 0.5), (0.5, 1, 0.5),
(0.5, 0.25, 0.25), (0.5, 0, 1), (0, 0.25, 0.5), (0, 0, 1),
(1, 1, 1), (1, 0, 0.5)]
CMAP_11 = LinearSegmentedColormap.from_list("cmap_11", class_colors, N=11)
# Global
num_exp = 3
model_list = [model_1,
model_2, model_3, model_4,
model_5, model_6,
model_7, model_8]
model_names = ["model_14k",
"VGG16_24k", "VGG16_34k", "VGG16_47k",
"ResNet18_34k", "ResNet18_46k",
"DenseNet21_35k", "DenseNet21_48k"]
# Constants
net_h = 128
net_w = 128
class_limit = 20
R_STATE = 0
BS = 32
LR = 1e-2
PRE_EPOCHS = 500
EPOCHS = 500
# Converting from RGB to BGR
def bgr_to_rgb(images):
buffer_arr = np.zeros((images.shape))
j = 0
for i in range(num_exp):
buffer_arr[:, :, :, j] = images[:, :, :, j+2]
buffer_arr[:, :, :, j+1] = images[:, :, :, j+1]
buffer_arr[:, :, :, j+2] = images[:, :, :, j]
j += 3
return buffer_arr
# Preview of data
def preview_data_v2(img_arr, annot_arr, num_exp, index=0):
plt.figure(figsize=(20, 50))
# showing images
j = 0
for i in range(num_exp):
plt.subplot(1, num_exp+2, i+1)
plt.imshow(img_arr[index, :, :, j:j+num_exp]/255)
plt.xlabel(f"exp_{i}")
j += num_exp
plt.xticks([])
plt.yticks([])
# showing annotations
plt.subplot(1, num_exp+2, i+2)
plt.imshow(annot_arr[index, :, :, 0])
plt.xlabel("mask")
plt.xticks([])
plt.yticks([])
# showing label
plt.show()
# Misc
def print_all_preds(img_arr, y_test, y_pred, cols=5, class_to_show=6, color_mode='rgb'):
num = y_test.shape[0]
if (num % cols != 0):
rows = int(num/cols) + 1
else:
rows = int(num/cols)
fig_size = 2
col_set = cols * 3
k = 0
# Creating a img grid
plt.figure(figsize=(fig_size*col_set, fig_size*rows))
plt_num = 1
rgb_img = np.zeros([img_arr.shape[1], img_arr.shape[2], 3])
for i in range(rows):
for j in range(cols):
if k == num:
break
if color_mode == 'bgr':
rgb_img[:, :, 0] = img_arr[k, :, :, 2]
rgb_img[:, :, 1] = img_arr[k, :, :, 1]
rgb_img[:, :, 2] = img_arr[k, :, :, 0]
else:
rgb_img = img_arr[k, :, :, 0:3]
plt.subplot(rows, col_set, plt_num)
plt.imshow(rgb_img)
plt.xlabel("Image")
plt.xticks([])
plt.yticks([])
plt.subplot(rows, col_set, plt_num+1)
plt.imshow(y_test[k])
plt.xlabel("Ground_Truth")
plt.xticks([])
plt.yticks([])
plt.clim(vmin=0, vmax=class_to_show)
plt.subplot(rows, col_set, plt_num+2)
plt.imshow(y_pred[k])
plt.xlabel("Prediction")
plt.xticks([])
plt.yticks([])
plt.colorbar()
plt.clim(vmin=0, vmax=class_to_show)
plt_num = plt_num + 3
k += 1
plt.tight_layout()
plt.show()
# plt.savefig("/content/drive/My Drive/rhs_werk_March2020/dataset_3_ann/all_preds.png")
def print_all_preds_v2(img_arr, model_pred, cols=5, class_to_show=6, color_mode='rgb'):
# arrays
is_crate_arr = model_pred[:, :, :, 0]
class_arr = np.argmax(model_pred[:, :, :, 1:], axis=-1)
# Grid spec
num = model_pred.shape[0]
if (num % cols != 0):
rows = int(num/cols) + 1
else:
rows = int(num/cols)
fig_size = 2
col_set = cols * 3
k = 0
# Creating a img grid
plt.figure(figsize=(fig_size*col_set, fig_size*rows))
plt_num = 1
rgb_img = np.zeros([img_arr.shape[1], img_arr.shape[2], 3])
for i in range(rows):
for j in range(cols):
if k == num:
break
if color_mode == 'bgr':
rgb_img[:, :, 0] = img_arr[k, :, :, 2]
rgb_img[:, :, 1] = img_arr[k, :, :, 1]
rgb_img[:, :, 2] = img_arr[k, :, :, 0]
else:
rgb_img = img_arr[k, :, :, 0:3]
plt.subplot(rows, col_set, plt_num)
plt.imshow(rgb_img)
plt.xlabel("Image")
plt.xticks([])
plt.yticks([])
plt.subplot(rows, col_set, plt_num+1)
plt.imshow(is_crate_arr[k])
plt.xlabel("is_Crate")
plt.xticks([])
plt.yticks([])
plt.clim(vmin=0, vmax=1)
plt.subplot(rows, col_set, plt_num+2)
plt.imshow(class_arr[k, :, :])
plt.xlabel("class")
plt.xticks([])
plt.yticks([])
plt.colorbar()
plt.clim(vmin=0, vmax=class_to_show)
plt_num = plt_num + 3
k += 1
plt.tight_layout()
plt.show()
# plt.savefig("/content/drive/My Drive/rhs_werk_March2020/dataset_3_ann/all_preds.png")
def pred_to_img(y_test, y_pred):
y_test = np.argmax(y_test, axis=-1)
y_pred = np.argmax(y_pred, axis=-1)
return y_test, y_pred
def show_model_pred(model, img_arr, annot_arr, net_h, net_w, class_to_show, color_mode):
img_arr = resize_arr(img_preprocess(img_arr), net_h, net_w)
model_pred = model.predict(img_arr)
model_true = resize_arr(ann_preprocess(
annot_arr, class_limit), net_h, net_w)
y_test, y_pred = pred_to_img(model_true, model_pred)
print_all_preds(img_arr, y_test, y_pred, cols=4,
class_to_show=class_to_show, color_mode=color_mode)
return model_pred
def show_model_pred_2(model, img_arr, annot_arr, net_h, net_w, class_to_show, color_mode):
img_arr = resize_arr(img_preprocess(img_arr), net_h, net_w)
model_pred = model.predict(img_arr)
# model_true = resize_arr(ann_preprocess(annot_arr,class_limit), net_h, net_w)
print_all_preds_v2(img_arr, model_pred, cols=4,
class_to_show=class_to_show, color_mode=color_mode)
return model_pred
def synth_exp(image_arr, num_exp=3):
new_image_arr = np.zeros(
[image_arr.shape[0], image_arr.shape[1], image_arr.shape[2], num_exp*3])
for i in range(image_arr.shape[0]):
new_image_arr[i, :, :, 0:3] = image_arr[i, :, :, :] - 70
new_image_arr[i, :, :, 3:6] = image_arr[i, :, :, :]
new_image_arr[i, :, :, 6:9] = image_arr[i, :, :, :] + 70
new_image_arr[new_image_arr > 255] = 255.0
new_image_arr[new_image_arr < 0] = 0.0
return new_image_arr
def join_2_arr(arr_1, arr_2):
img_num = arr_1.shape[0] + arr_2.shape[0]
joined_arr = np.zeros(
[img_num, arr_1.shape[1], arr_1.shape[2], arr_1.shape[-1]])
joined_arr[0:arr_1.shape[0], :, :, :] = arr_1
joined_arr[arr_1.shape[0]:img_num, :, :, :] = arr_2
return joined_arr
# Plotting
def lowest_point(arr):
minimun_val = arr.index(min(arr))
return minimun_val
def highest_point(arr):
minimun_val = arr.index(max(arr))
return minimun_val
def plot_history_mode(history_list, epochs, models_list, model_names=None, plot_mode="train"):
# Setting modes (train or val)
if plot_mode == "train":
acc_mode = "accuracy"
loss_mode = "loss"
style = "dashed"
elif plot_mode == "test":
acc_mode = "val_accuracy"
loss_mode = "val_loss"
style = "solid"
else:
print("ERROR: Invalid plot_mode")
plt.figure(figsize=(30, 15))
#plot_colors = ['b', 'm', 'r', 'g', ]
# Initializing names
if model_names is None or len(model_names) != len(history_list):
model_names = [f"model_1{i}" for i in range(len(history_list))]
# "Accuracies"
plt.subplot(1, 2, 1)
plot_legend = []
print("Max Accuracies:")
for i, model_history in enumerate(history_list):
max_acc_val = max(model_history.history[acc_mode]) * 100
max_pt_val = highest_point(model_history.history[acc_mode])
print(
f"({plot_mode}) {model_names[i]}\t: {max_acc_val:.2f} @ {max_pt_val}/{epochs}")
# Plots
plt.plot(model_history.history[acc_mode], linestyle=style)
# plt.hlines(y=max(model_history.history['accuracy']), xmin=0, xmax=epochs)
plot_legend.append(
f'{model_names[i]} [{models_list[i].count_params():,d}]')
# plot labels
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(plot_legend, loc='best')
# "Losses"
plt.subplot(1, 2, 2)
plot_legend = []
for i, model_history in enumerate(history_list):
plt.plot(model_history.history[loss_mode], linestyle=style)
# plt.hlines(y=min(model_history.history['loss']), xmin=0, xmax=epochs)
plot_legend.append(
f'{model_names[i]} [{models_list[i].count_params():,d}]')
# plot labels
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(plot_legend, loc='best')
plt.show()
# Training Helper funcs
# Preprocessing for Model 1 ("isCrate")
def preprocess_iscrate(img_arr, annot_arr):
X = resize_arr(img_preprocess(img_arr), net_h, net_w)
# print("X shape:", X.shape)
# convert y to "isCrate" classes
y = resize_arr(ann_preprocess(
annot_arr[:, :, :, 0], class_limit), net_h, net_w)
y = y[:, :, :, 0:2]
y[:, :, :, 1] = (y[:, :, :, 0] * - 1) + 1
# print("y shape:", y.shape)
# Splitting data to detect "isCrate"
train_data, test_data = split_data(X, y)
return (train_data, test_data)
# Preprocessing for Model 2 ("isClass")
def preprocess_isclass(img_arr, annot_arr):
X = resize_arr(img_preprocess(img_arr), net_h, net_w)
# print("X shape:", X.shape)
y = resize_arr(ann_preprocess(
annot_arr[:, :, :, 0], class_limit), net_h, net_w)
# print("y shape:", y.shape)
# Splitting data to detect "isCrate"
train_data, test_data = split_data(X, y)
return (train_data, test_data)
def bundle_models(model_list, height=net_h, width=net_w, exp_num=3, class_num=2, net_depth=3, LR=LR, EPOCHS=EPOCHS):
"""
Returns a list of compiled models with given config and model defenitions
"""
compiled_models = []
for i, model_def in enumerate(model_list):
print(f"Building model_{i+1}/{len(model_list)}")
model_comp = model_def.CrateNet.build(grid_h=height, grid_w=width,
num_exp=exp_num,
num_classes=class_num,
depth=net_depth,
init_lr=LR, epochs=EPOCHS)
compiled_models.append(model_comp)
return compiled_models
def train_model_bundle(models, train_data, test_data, EPOCHS=EPOCHS, to_h5=True, mod_no='1'):
"""Training model from the list of defined models"""
models_hist = []
for i, model in enumerate(models):
print(f"Training model_{i+1}/{len(models)}")
models_hist.append(model.fit(x=train_data[0], y=train_data[1],
validation_data=test_data, epochs=EPOCHS))
if to_h5:
model.save("mod_" + mod_no + '_' + model_names[i] + ".h5")
model.save_weights("mod_" + mod_no + '_' +
model_names[i] + "_w.h5")
return models_hist
# Crate detector
def crate_detector(model_1, model_2, img_arr, annnot_arr, index):
output_1 = np.argmax(model_1.predict(
np.expand_dims(img_arr[index], axis=0)), axis=-1)[0, :, :]
output_2 = np.argmax(model_2.predict(
np.expand_dims(img_arr[index], axis=0)), axis=-1)[0, :, :]
# Colors
class_colors = ['black', '#259c14', '#4c87c6', '#737373', '#cbec24',
'#f0441a', '#0d218f', 'blue', 'magenta', 'green']
output_3 = output_1 * output_2
plt.figure(figsize=(20, 4))
plt.subplot(1, 5, 1)
plt.imshow(img_arr[index, :, :, 6:9], cmap="cividis")
plt.title("Image")
plt.xticks([])
plt.yticks([])
plt.subplot(1, 5, 2)
plt.imshow(np.argmax(annnot_arr[index, :, :, :], axis=-1), cmap="cividis")
plt.title("Groud truth")
plt.xticks([])
plt.yticks([])
plt.colorbar()
plt.clim(0, 10)
plt.subplot(1, 5, 3)
plt.imshow(output_1, cmap="cividis")
plt.title("model_1 output")
plt.xticks([])
plt.yticks([])
plt.colorbar()
plt.clim(0, 1)
plt.subplot(1, 5, 4)
plt.imshow(output_2, cmap="cividis")
plt.title("model_2 output")
plt.xticks([])
plt.yticks([])
plt.colorbar()
plt.clim(0, 10)
plt.subplot(1, 5, 5)
plt.imshow(output_3, cmap="cividis")
plt.title("combined output")
plt.xticks([])
plt.yticks([])
plt.colorbar()
plt.clim(0, 10)
plt.show()
return output_3
def load_mod_weights(model_list, model_names, prefix="mod_1_"):
loaded_model_list = []
for model, name in zip(model_list, model_names):
loaded_model_list.append(model.load_weights(prefix + name + ".h5"))
print(model)
print(prefix + name + ".h5")
print(loaded_model_list)
return loaded_model_list
class ModelBundle:
def __init__(self, model_list, model_names, net_config, train_config):
self.model_list = model_list
self.model_names = model_names
self.height = net_config["net_h"]
self.width = net_config["net_w"]
self.num_exp = net_config["num_exp"]
self.net_depth = net_config["net_depth"]
self.LR = train_config["learning_rate"]
self.EPOCHS = train_config["epochs"]
def bundle_models(self, class_num=2):
"""
Returns a list of compiled models with given config and model
definitions
"""
compiled_models = []
for i, model_def in enumerate(self.model_list):  # use the list passed to the constructor
print(f"Building model_{i+1}/{len(self.model_list)}")
model_comp = model_def.CrateNet.build(grid_h=self.height,
grid_w=self.width,
num_exp=self.num_exp,
num_classes=class_num,
depth=self.net_depth,
init_lr=self.LR,
epochs=self.EPOCHS)
compiled_models.append(model_comp)
return compiled_models
def train_bundle(self, train_d, test_d, class_num, mod_no=1, to_h5=True):
"""Training model from the list of defined models"""
c_models = self.bundle_models(class_num)
models_hist = []
for i, model in enumerate(c_models):
print(f"Training (mod_{mod_no}) {self.model_names[i]}")
print(f"{i+1}/{len(c_models)}")
# Loop over the different datasets
models_hist.append(model.fit(x=train_d[0], y=train_d[1],
validation_data=test_d,
epochs=self.EPOCHS))
if to_h5:
model_name_i = "mod_" + str(mod_no) + '_' + model_names[i]
print(f"Saving model: {model_name_i}")
model.save(model_name_i + ".h5")
model.save_weights(model_name_i + "_w.h5")
return models_hist
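# Minimal usage sketch; the net/train configuration values below are
# illustrative assumptions, not the project's tuned settings. It compiles
# every imported architecture for the two-class "isCrate" head and prints
# the parameter counts.
if __name__ == "__main__":
    demo_bundle = ModelBundle(model_list, model_names,
                              net_config={"net_h": net_h, "net_w": net_w,
                                          "num_exp": num_exp, "net_depth": 3},
                              train_config={"learning_rate": LR,
                                            "epochs": EPOCHS})
    for name, model in zip(model_names, demo_bundle.bundle_models(class_num=2)):
        print(f"{name} [{model.count_params():,d} params]")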
```
#### File: src/sub_modules/pkg_1a.py
```python
import numpy as np
from pathlib import Path
# Local Modules
from src.utils import load_json, rm_duplicate
class DataChecker:
def __init__(self, src_dir, num_exp):
self.src_dir = src_dir
self.num_exp = num_exp
self.img_paths, self.ann_paths = DataChecker.data_checker(src_dir,
num_exp)
@staticmethod
def data_checker(src_dir, num_exp, img_ext=".bmp", ann_ext=".cmp"):
"""Checks if there are missing annotation files in the source folder.
Return pathlib.Path files for images and annotations
WARNING!! DOES NOT CHECK IF ANNOTATIONS ARE CORRECT (by name)
ADD CROSS VALIDATION
Args:
src_dir (str): Source path
num_exp (int): Number of exposures of the images
img_ext (str, optional): Image File Extension. Defaults to ".bmp".
ann_ext (str, optional): Annotation File Extension. Defaults to
".cmp".
Returns:
list: Valid image file paths
list: Valid annotation file paths
"""
src_path = Path(src_dir)
# All image and annotation paths
image_paths = list(src_path.glob("**/*" + img_ext))
annot_paths = list(src_path.glob("**/*" + ann_ext))
img_nums = rm_duplicate([file.stem.split("_")[1]
for file in image_paths])
ann_nums = [file.stem.split("_")[1] for file in annot_paths]
img_dict = dict(enumerate(img_nums))
indices = {v: k for k, v in img_dict.items()}
# Matching
valid_sets = set(img_dict.values()).intersection(ann_nums)
valid_indices = np.sort([indices[value] for value in valid_sets])
# Missing annotations
missing_sets = np.sort(
list(set(img_dict.values()).symmetric_difference(ann_nums)))
if missing_sets.size > 0:
print(f"Missing annotation files: {missing_sets}")
# Valid paths
valid_image_paths = []
valid_annot_paths = []
for i, index in enumerate(valid_indices):
j = index * num_exp
valid_annot_paths.append(annot_paths[i])
for k in range(num_exp):
valid_image_paths.append(image_paths[j+k])
return valid_image_paths, valid_annot_paths
def logging(self):
"""Logging for pkg_1a
"""
print("img_paths sample:", self.img_paths[:4], sep="\n")
print("ann_paths sample:", self.ann_paths[:4], sep="\n")
return None
def main():
# Initial Parameters
PKG_1_PARAMS = load_json("pkg_1_config.json")
# Checking for consistency in the dataset
pkg_1a_obj = DataChecker(PKG_1_PARAMS["src_dir"],
PKG_1_PARAMS["num_exp"])
pkg_1a_obj.logging()
return None
if __name__ == "__main__":
main()
```
#### File: crate_classifier/src/train.py
```python
import numpy as np
from data_loader import load_json, load_npy
from preprocessing import img_preprocess, ann_preprocess
from preprocessing import resize_arr, split_data, stack_exp
from model import make_model, make_model_2
def train_model_1(img_arr, annot_arr, net_h, net_w, class_limit, epochs=100):
"""
To train the standard model.
img_arr shape => (img_num, height, width, 9)
"""
# Preprocessing - Images
X = resize_arr(img_preprocess(img_arr), net_h, net_w)
print("X shape:", X.shape)
# Preprocessing - Annotations
y = resize_arr(ann_preprocess(annot_arr, class_limit),
net_h, net_w)
print("y shape:", y.shape)
# Splitting data
train_data, test_data = split_data(X, y)
model_input, model = make_model(img_shape=[net_w, net_h],
num_classes=class_limit,
num_exposures=3)
model_hist = model.fit(x=train_data[0],
y=train_data[1],
validation_data=test_data,
epochs=epochs)
return model, model_input, model_hist
def add_is_crate(class_label_arr):
is_crate_arr = (class_label_arr[:, :, :, 0] * -1.0) + 1.0  # invert background channel: 1 - background == "isCrate"
is_crate_arr_2 = np.zeros(
[is_crate_arr.shape[0], is_crate_arr.shape[1], is_crate_arr.shape[2], 1])
is_crate_arr_2[:, :, :, 0] = is_crate_arr
return is_crate_arr_2
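# Sanity-check sketch with synthetic labels (not project data): a pixel whose
# background channel is 1 must map to isCrate == 0, and vice versa.
def _check_add_is_crate():
    dummy = np.zeros((1, 2, 2, 20))
    dummy[0, :, :, 0] = [[1, 0], [0, 1]]
    assert (add_is_crate(dummy)[0, :, :, 0] == [[0, 1], [1, 0]]).all()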
def train_model_1_v2(img_arr, annot_arr, net_h, net_w, class_limit, init_lr=1e-3, epochs=100):
"""
To train the standard model.
img_arr shape => (img_num, height, width, 9)
"""
# Preprocessing - Images
X = resize_arr(img_preprocess(img_arr), net_h, net_w)
print("X shape:", X.shape)
# Preprocessing - Annotations
y = resize_arr(ann_preprocess(annot_arr, class_limit),
net_h, net_w)
print("y shape:", y.shape)
# Splitting data
train_data, test_data = split_data(X, y)
# Creating 'isCrate output'
train_data.append(add_is_crate(train_data[1]))
test_data.append(add_is_crate(test_data[1]))
# Making the model
model_input, model = make_model_2(img_shape=[net_w, net_h],
num_classes=class_limit,
num_exposures=3,
init_lr=init_lr,
epochs=epochs)
model_hist = model.fit(x=train_data[0],
y={"isCrate": train_data[2],
"class": train_data[1]},
validation_data=(test_data[0],
{"isCrate": test_data[2],
"class": test_data[1]}),
epochs=epochs)
return model, model_input, model_hist
def train_model_2(img_arr, annot_arr, net_h, net_w, class_limit, epochs=100):
"""
To train the standard model.
img_arr shape => (img_num, height, width, 9)
Same as train_model_1, but with the background channel inverted into an "is_crate" class.
"""
# Preprocessing - Images
X = resize_arr(img_preprocess(img_arr), net_h, net_w)
print("X shape:", X.shape)
# Preprocessing - Annotations
y = resize_arr(ann_preprocess(annot_arr, class_limit),
net_h, net_w)
# Adding "is_crate" class
y[:, :, :, 0] = (y[:, :, :, 0] * -1) + 1
print("y shape:", y.shape)
# Splitting data
train_data, test_data = split_data(X, y)
model_input, model = make_model(img_shape=[net_w, net_h],
num_classes=class_limit,
num_exposures=3)
model_hist = model.fit(x=train_data[0],
y=train_data[1],
validation_data=test_data,
epochs=epochs)
return model, model_input, model_hist
def main():
data_dir = "dataset/data_1"
images_file = data_dir + "/dataset_images.npy"
labels_file = data_dir + "/dataset_labels.npy"
config = data_dir + "/label_config.json"
images = load_npy(images_file)
annotations = load_npy(labels_file)
config = load_json(config)
print("Image batch shape:\t", images.shape)
print("Annotation batch shape:\t", annotations.shape)
epochs = 10
R_STATE = 0
BS = 32
INIT_LR = 1e-3
net_w = 128
net_h = 128
# STEPS = len(train_X) // BS ##steps_per_epoch
class_limit = 20
# Preprocessing - Images
X = resize_arr(img_preprocess(stack_exp(images)),
net_h, net_w)
print(X.shape)
# Preprocessing - Annotations
y = resize_arr(ann_preprocess(annotations, class_limit),
net_h, net_w)
print(y.shape)
# Splitting data
train_data, test_data = split_data(X, y)
model_1_input, model_1 = make_model(img_shape=[net_w, net_h],
num_classes=class_limit,
num_exposures=3)
model_1_hist = model_1.fit(x=train_data[0],
y=train_data[1],
validation_data=test_data,
epochs=epochs)
    return model_1_hist.history.keys()
if __name__ == "__main__":
main()
```
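For reference, a minimal driver that mirrors what `main()` does but goes through `train_model_1`; the module name `train` and the data file paths are assumptions based on the imports in `train_v2.py` and on `main()` above.

```python
# Usage sketch for train_model_1 (module name `train` is an assumption for the file above)
from data_loader import load_npy
from preprocessing import stack_exp
from train import train_model_1

images = load_npy("dataset/data_1/dataset_images.npy")
annotations = load_npy("dataset/data_1/dataset_labels.npy")

# stack_exp merges the separate exposures into the 9-channel layout the docstring expects
img_arr = stack_exp(images)

model, model_input, hist = train_model_1(img_arr, annotations,
                                         net_h=128, net_w=128,
                                         class_limit=20, epochs=10)
print(hist.history.keys())
```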
#### File: crate_classifier/src/train_v2.py
```python
from pathlib import Path
from data_loader import load_npy
from preprocessing import img_preprocess, ann_preprocess
from preprocessing import resize_arr, split_data
from preprocessing import stack_exp_v2, ann_preprocess_v2
from model_v2 import CrateNet
def main():
# loading files
images = load_npy(Path(
"C:/Users/josep/Documents/work/crate_classifier/dataset/data_1/dataset/data_1_images_v1.npy"))
annots = load_npy(Path(
"C:/Users/josep/Documents/work/crate_classifier/dataset/data_1/dataset/data_1_labels_v1.npy"))
# Constants
net_h = 128
net_w = 128
class_limit = 20
LR = 1e-2
EPOCHS = 100
# Preprocessing
img_arr = stack_exp_v2(images)
annot_arr = ann_preprocess_v2(annots)
X = resize_arr(img_preprocess(img_arr), net_h, net_w)
print("X shape:", X.shape)
y = resize_arr(ann_preprocess(annot_arr, class_limit), net_h, net_w)
print("y shape:", y.shape)
# Splitting data
train_data, test_data = split_data(X, y)
# Building and fitting model
model = CrateNet.build(grid_h=net_h, grid_w=net_w, num_exp=3,
num_classes=class_limit, init_lr=LR, epochs=EPOCHS)
model_hist = model.fit(x=train_data[0], y=train_data[1],
validation_data=test_data,
epochs=EPOCHS)
return model_hist
if __name__ == "__main__":
model_hist = main()
```
#### File: crate_classifier/tests/testing_utils.py
```python
try:
from src.utils import load_json
except ImportError as error:
print(f"Error: {error}; Local modules not found")
except Exception as exception:
print(exception)
def load_params_1():
"""Returns source path of images and number of exposures
"""
PKG_1_PARAMS = load_json("config/pkg_1_config.json")
return PKG_1_PARAMS
def load_params_2():
"""Returns source path of images and number of exposures
"""
PKG_2_PARAMS = load_json("config/pkg_2_config.json")
return PKG_2_PARAMS
```
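The config files read by these helpers are not part of the dump; a sketch of what `pkg_1_config.json` appears to contain, based only on the keys the tests below read (`src_dir`, `num_exp`). The directory value and the exposure count are placeholders, not the project's real test data.

```python
# Sketch: write a pkg_1 test config with the two keys the tests read (src_dir, num_exp)
import json
from pathlib import Path

Path("config").mkdir(exist_ok=True)
with open("config/pkg_1_config.json", "w") as fh:
    # placeholder directory and exposure count
    json.dump({"src_dir": "tests/data/pkg_1_images", "num_exp": 3}, fh, indent=2)
```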
#### File: crate_classifier/tests/test_pkg_1a.py
```python
from pathlib import Path
try:
from tests.testing_utils import load_params_1
from src.sub_modules.pkg_1a import DataChecker
except ImportError as error:
print(f"Error: {error}; Local modules not found")
except Exception as exception:
print(exception)
# Helper Functions
def chk_dir(src_dir):
"""To check if the directory exists. Returns boolean
"""
src_is_dir = Path(src_dir).is_dir()
if src_is_dir is False:
print(f"Test Directory '{src_dir}' does not exist.")
print("Add a directory containing the source images to run this test")
return src_is_dir
# Tests
def test_data_checker_1():
PARAMS = load_params_1()
src_dir, num_exp = PARAMS["src_dir"], PARAMS["num_exp"]
assert(chk_dir(src_dir) is True)
data_checker_obj = DataChecker(src_dir, num_exp)
img_files = data_checker_obj.img_paths
ann_files = data_checker_obj.ann_paths
assert(int(len(img_files)/num_exp) == len(ann_files))
assert(img_files[int(0*num_exp)].stem.split('_')[1] ==
ann_files[0].stem.split('_')[1])
def test_data_checker_2():
PARAMS = load_params_1()
src_dir, num_exp = PARAMS["src_dir"], PARAMS["num_exp"]
assert(chk_dir(src_dir) is True)
data_checker_obj = DataChecker(src_dir, num_exp)
img_files = data_checker_obj.img_paths
ann_files = data_checker_obj.ann_paths
assert(int(len(img_files)/num_exp) == len(ann_files))
assert(img_files[int(10*num_exp)].stem.split('_')[1] ==
ann_files[10].stem.split('_')[1])
``` |
{
"source": "josephtesla/quine-mccluskey-minimizer",
"score": 4
} |
#### File: quine-mccluskey-minimizer/Python/converter.py
```python
def get_term(_bit_string):
_ascii = 97
term = ""
for x in range(len(_bit_string)):
ch = chr(_ascii + x)
if _bit_string[x] != '_':
term = term + ch if _bit_string[x] == '1' else term + ch + "'"
return term.upper()
def get_function(prime_implicants, no_of_variables):
bit_strings = list(map(lambda x: x[1], prime_implicants))
func = " + ".join([get_term(x) for x in bit_strings])
return func
```
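A quick check of the two helpers above, assuming prime implicants are stored as `(terms, bit_string)` pairs as in `qm_minimizer.py` below; `'1'` keeps the variable, `'0'` complements it, and `'_'` marks an eliminated variable.

```python
# "1_0" -> A kept, B eliminated, C complemented -> AC'
from converter import get_term, get_function

print(get_term("1_0"))                                        # AC'
print(get_function([((0, 4), "_00"), ((5, 7), "1_1")], 3))    # B'C' + AC
```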
#### File: quine-mccluskey-minimizer/Python/main.py
```python
from qm_minimizer import QM_Minimizer
from GUI import GUI
def main():
print("|-----------------------------------------------|")
print("| |")
print("| |")
print("| QUINE-McKluskey Minimizer |")
print("| |")
print("| SELECT AN ENVIRONMENT: |")
print("| |")
print("| ENTER '1' FOR GUI |")
print("| |")
print("| ENTER '2' FOR CONSOLE |")
print("| |")
print("| |")
print("|-----------------------------------------------|")
env = input("Enter here: ")
while env not in ['1', '2']:
print("!INVALID INPUT!")
env = input("Enter here: ")
if env == "1":
GUI() #RUN GUI instance
elif env == "2":
print("\n|-------------- CONSOLE MODE -------------| \n")
invalid = True
while invalid:
try:
print("|-----ENTER MINTERMS SEPERATED BY COMMA (e.g 3,2,5,12...) ------|")
minterms = input("minterms: ")
print("\n\n|-----ENTER NUMBER OF VARIABLES (e.g 4) ------|")
no_of_vars = int(input("no of variables: "))
func = QM_Minimizer(minterms, no_of_vars).minimize()
print("|------ MINIMIZED EXPRESSION -------| \n")
print("\t " + func)
invalid = False
            except Exception as error:
                print(f"\n\n!----- ERROR: INVALID FUNCTION INPUT ({error}) ------|\n\n")
invalid = True
main()
```
#### File: quine-mccluskey-minimizer/Python/qm_minimizer.py
```python
from converter import get_function
class QM_Minimizer:
def __init__(self, min_terms, N):
self.min_terms = sorted([int(x) for x in min_terms.split(",")]) #initialize minterm from given function
self.no_of_variables = N #set maximum number of variables in function
self.s1_implicants = [] #to store size 2 implicants
self.s2_implicants = [] #to store size 4 implicants
self.prime_implicants = [] #to store prime implicants
#Converts a given minterm integer into binary format with length of no of variables
def to_binary(self, n):
N = self.no_of_variables
bin_string = ""
while n != 0:
rem = n % 2
n = n // 2
bin_string += str(rem)
bin_string = bin_string[::-1]
L = len(bin_string)
if L < N:
bin_string = "0" * (N - L) + bin_string
return bin_string
def generate_column_1(self):
bin_column = []
no_of_terms = len(self.min_terms)
for i in range(self.no_of_variables + 1):
for _m in self.min_terms:
bin_eq = self.to_binary(_m)
if (bin_eq.count("1") == i):
bin_column.append((_m, bin_eq))
return bin_column
def position_of(self, string, key):
pos = []
for k in range(len(string)):
if string[k] == key:
pos.append(k)
return pos
def is_next_gray(self, bin_1, bin_2):
bit = "1"
changed_bit_pos = ""
for k in range(len(bin_1)):
if (bin_1[k] == bit and bin_2[k] != bit):
return (False, -1)
if bin_1[k] != bit and bin_2[k] == bit:
changed_bit_pos += str(k)
if bin_1.count(bit) + 1 == bin_2.count(bit):
return (True, changed_bit_pos)
return (False, -1)
def replace_all(self, string, x, y):
return y.join(string.split(x))
def replace_pos(self, string, p, y):
x = list(string)
x[p] = y
return "".join(x)
def generate_column_2(self):
self.s1_implicants = self.generate_column_1()
for x in range(len(self.s1_implicants) - 1):
for y in range(x + 1, len(self.s1_implicants)):
imp_1 = self.s1_implicants[x]
imp_2 = self.s1_implicants[y]
gray_result = self.is_next_gray(imp_1[1], imp_2[1])
if gray_result[0]:
changed_pos = gray_result[1]
bit_str = self.replace_pos(imp_1[1], int(changed_pos), "_")
self.s2_implicants.append([(imp_1[0], imp_2[0]), bit_str])
self.update_prime_implicants(self.s2_implicants)
return self.s2_implicants
def same_changed_pos(self, bit_str_1, bit_str_2):
first_pos = bit_str_1.index("_")
for k in range(len(bit_str_1)):
if bit_str_1[k] == "_" and bit_str_2[k] != "_":
return (False, first_pos)
return (True, first_pos)
def update_prime_implicants(self, current_column):
for x in range(len(current_column)):
c_imp_1 = current_column[x]
is_match = False
for y in range(len(current_column)):
c_imp_2 = current_column[y]
if x != y:
is_match = self.same_changed_pos(c_imp_1[1], c_imp_2[1])
if is_match[0]:
break
if not is_match[0]:
if c_imp_1 not in self.prime_implicants:
self.prime_implicants.append(c_imp_1)
return self.prime_implicants
def get_next_column_and_results(self, current_column, _last_reached):
if _last_reached:
return self.prime_implicants
is_last_column = False
next_column = []
for x in range(len(current_column) - 1):
for y in range(x + 1, len(current_column)):
c_imp_1 = current_column[x]
c_imp_2 = current_column[y]
bit_compare_result = self.same_changed_pos(c_imp_1[1], c_imp_2[1])
fp = bit_compare_result[1]
changed_bit = self.is_next_gray(c_imp_1[1][:fp], c_imp_2[1][:fp])
if bit_compare_result[0] and changed_bit[0]:
# last_pos = len(c_imp_1[1]) - 1 - c_imp_1[1][::-1].index("_")
if c_imp_1[1][fp + 1:] == c_imp_2[1][fp + 1:]:
terms = c_imp_1[0] + c_imp_2[0]
bit_str = self.replace_pos(c_imp_1[1], int(changed_bit[1]), "_")
next_column.append([terms, bit_str])
self.update_prime_implicants(current_column)
if len(next_column) == 0:
is_last_column = True
#recursively generate new columns
return self.get_next_column_and_results(next_column, is_last_column)
def minimize(self):
prime_imps = self.get_next_column_and_results(self.generate_column_2(), False)
return get_function(prime_imps, self.no_of_variables)
``` |
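End-to-end usage of the minimizer, mirroring what `main.py` does in console mode; the minterm set is arbitrary and the exact ordering of terms in the output depends on how the prime implicants are collected.

```python
from qm_minimizer import QM_Minimizer

# f(A, B, C) specified by its minterms, exactly as console mode passes them in
expression = QM_Minimizer("0,1,2,5,6,7", 3).minimize()
print(expression)   # a sum-of-products string; term order may vary
```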
{
"source": "josephtessmer/EMsoft",
"score": 3
} |
#### File: EMsoft/resources/DDD_CSV_Presorter.py
```python
import pandas as pd
import h5py
import numpy as np
def round25(x):
# return round(x*4)/4
return round(x*16)/16
rotate = True # does the data need to be rotated
propdir = 'Z' # direction of beam propagation
posscale= 1 # scale point grid, if not in nm
disscale= 1 # scale of displacement values, if not in nm
frac = 1 # Lower the resolution of the grid, which fraction of points are included
# If the data needs to be rotated to a proper reference frame, this is the matrix describing the
# relationship between the crystal and sample axes
rotmat = np.matrix([[1, 0, 0],[0, 1, 0],[0, 0, 1]])
rotmat = rotmat.transpose()
# path to input file
path = 'full/input/path/here' #full path to the input file
inpath = path + ".txt" # extension for input path
print(inpath)
atomdata = pd.read_csv(inpath, sep=' ', skipinitialspace=True,header=None)#, skiprows=9,)
maxIndex = atomdata.shape[0]
print('read file')
print(maxIndex)
# rotate data if necessary
ad = atomdata.to_numpy()
if(rotate):
for i in range(0,maxIndex):
ad[i,3:6] = np.matmul(ad[i,3:6], rotmat) * disscale
ad[i,0:3] = np.matmul(ad[i,0:3], rotmat) * posscale
if i%1000000 == 0: print(i)
print('rotated matrix')
# now the 2D array needs to be sorted:
maxs = np.amax(ad,0)
mins = np.amin(ad,0)
xxdim = (maxs[0] - mins[0])/frac
yydim = (maxs[1] - mins[1])/frac
zzdim = (maxs[2] - mins[2])/frac
displacements = np.zeros((5,int(xxdim)+ 1,int(yydim)+ 1,int(zzdim)+ 1),dtype=float)
for i in range(0,maxIndex):
xx = round25(ad[i,0]/frac+xxdim/2)
yy = round25(ad[i,1]/frac+yydim/2)
zz = round25(ad[i,2]/frac+zzdim/2)
if (xx%1 == 0.0 and yy%1 == 0.0 and zz%1 == 0.0):
displacements[0:3,int(xx),int(yy),int(zz)] = ad[i,3:6]
displacements[3,int(xx),int(yy),int(zz)] = 1
displacements[4,int(xx),int(yy),int(zz)] = 1
if i%1000000 == 0: print(i)
# sort displacements for different prop dirs
if propdir == 'Z':
    displacements = np.transpose(displacements, [3, 2, 1, 0])
elif propdir == 'Y':
    # (disp,z,x,y): reorder the displacement components at every grid point,
    # looping over the grid dimensions of the displacement array itself
    _, nx, ny, nz = displacements.shape
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                displacements[:, i, j, k] = (displacements[2, i, j, k], displacements[0, i, j, k],
                                             displacements[1, i, j, k], displacements[3, i, j, k],
                                             displacements[4, i, j, k])
    displacements = np.transpose(displacements, [2, 1, 3, 0])
elif propdir == 'X':
    # (disp,z,y,x): reorder the displacement components at every grid point
    _, nx, ny, nz = displacements.shape
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                displacements[:, i, j, k] = (displacements[2, i, j, k], displacements[1, i, j, k],
                                             displacements[0, i, j, k], displacements[3, i, j, k],
                                             displacements[4, i, j, k])
    displacements = np.transpose(displacements, [1, 2, 3, 0])
print('sorted to 4D')
outpath = path + ".h5"
f = h5py.File(outpath,'w')
dset = f.create_dataset('Data',data=displacements)
```
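Reading back the file written by the script above; the dataset name `'Data'`, the component layout, and the `(z, y, x, component)` axis order for `propdir == 'Z'` come from the code, while the file path is the same placeholder used in the script.

```python
# Sketch: inspect the presorted displacement grid written by DDD_CSV_Presorter.py
import h5py
import numpy as np

with h5py.File("full/input/path/here.h5", "r") as f:   # placeholder path, same stem as the .txt input
    disp = f["Data"][()]                                # shape (z, y, x, 5) when propdir == 'Z'

print(disp.shape)
print("occupied grid fraction:", np.mean(disp[..., 3]))  # channel 3 is the occupancy flag
```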
#### File: pyEMsoft/unittests/test_constants_and_typedefs.py
```python
import sys
from EMsoft import pyEMsoft
from EMsoft.pyEMsoftTools import Tools
import numpy as np
import unittest
class Test_Constants(unittest.TestCase):
def setUp(self):
pass
    # list a few constants; the complete list also includes: permeability/permittivity
    # of vacuum, electron charge/rest mass, electron volt, Avogadro's constant, Bohr magneton
def test_SIconstants(self):
print('Value of \u03C0 is %.19f' % (pyEMsoft.constants.cpi), '\n')
print('Speed of light (c) is %.1f (m/s)' % (pyEMsoft.constants.clight), '\n')
print('Planck constant (h) is %e (Js)' % (pyEMsoft.constants.cplanck), '\n')
        print('Boltzmann Constant (k) is %e (mˆ2 kg sˆ(-2) Kˆ(-1))' % (pyEMsoft.constants.cboltzmann), '\n')
# list of element symbols and atom color (the original output character array is an numpy.array in ASCII encoded format)
# a tool is created to get the character array (Tools.get_character_array)
def test_characterarray(self):
print('Element Symbols:\n', Tools.get_character_array(pyEMsoft.constants.atom_sym), '\n')
print('Atom colors for PostScript drawings:\n', Tools.get_character_array(pyEMsoft.constants.atom_color), '\n')
# print out some numerical arrays
def test_array(self):
print('Shannon-Prewitt ionic radii in nanometer:\n', pyEMsoft.constants.atom_spradii, '\n')
print('Atomic weights for things like density computations (from NIST elemental data base):\n', pyEMsoft.constants.atom_weights, '\n')
print('Fundamental zone type:\n', pyEMsoft.constants.fztarray, '\n')
print('Butterfly9x9 Filter:\n', pyEMsoft.constants.butterfly9x9, '\n')
class Test_Typedefs(unittest.TestCase):
def setUp(self):
pass
# print out a lot of useful crystallographic data
def test_SG(self):
print('All space group names:\n', Tools.get_character_array(pyEMsoft.typedefs.sym_sgname),'\n')
print('Extended Hermann-Mauguin symbols for the orthorhombic space groups:\n',Tools.get_character_array(pyEMsoft.typedefs.extendedhmorthsymbols),'\n')
print('First space group of each crystal system:\n', pyEMsoft.typedefs.sgxsym,'\n')
print('First space group # for a given point group:\n', pyEMsoft.typedefs.sgpg,'\n')
print('Numbers of all the symmorphic space groups:\n', pyEMsoft.typedefs.sgsym,'\n')
print('Number of the symmorphic space group with the same point group symmetry:\n', pyEMsoft.typedefs.sgsymnum,'\n')
print('10 2D point group symbols:\n', Tools.get_character_array(pyEMsoft.typedefs.pgtwd), '\n')
print('Inverse table for 2D point groups:\n', pyEMsoft.typedefs.pgtwdinverse, '\n')
print('2D point group orders :\n', pyEMsoft.typedefs.pgtwdorder, '\n')
print('32 3D point group symbols:\n', Tools.get_character_array(pyEMsoft.typedefs.pgthd), '\n')
if __name__ == '__main__':
unittest.main()
``` |
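These are plain `unittest` cases, so a single class can be run in isolation once the pyEMsoft bindings are importable; the dotted name below assumes the test file is on the current import path.

```python
# Run only the constants tests (assumes pyEMsoft is built and importable)
import unittest

suite = unittest.TestLoader().loadTestsFromName(
    "test_constants_and_typedefs.Test_Constants")
unittest.TextTestRunner(verbosity=2).run(suite)
```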
{
"source": "JosephThePatrician/WikipediaQA",
"score": 3
} |
#### File: WikipediaQA/wikipediaqa/TextProcessor.py
```python
import numpy as np
import spacy
import re
from warnings import warn
class TextProcessor:
def __init__(self, spacy_model : str = "en_core_web_sm"):
        try:
            self.nlp = spacy.load(spacy_model)
        except OSError:
            raise OSError(
                f"{spacy_model} was not found; install it with "
                f"'python -m spacy download {spacy_model}'")
def __call__(self, question : str):
"""
Extract everything helpful for search queries
"""
question = self.removeExtraSpaces(question)
doc = self.nlp(question)
important_stuff = []
inQuot = self.getInQuotation(question)
important_stuff.extend(inQuot)
ne = self.getNE(doc)
important_stuff.extend([sent for sent in ne
if sent not in important_stuff])
title = self.getTitle(question)
important_stuff.extend([sent for sent in title
if sent not in important_stuff])
nnp = self.getNNP(doc)
# check if any NNPs have already been taken
nnp = self.remove_repeating(nnp + ne + inQuot + title)
important_stuff.extend([sent for sent in nnp
if sent not in important_stuff])
# if nothing is found, return question
if len(important_stuff) == 0:
return [question]
return important_stuff
def removeExtraSpaces(self, text):
        # remove multiple spaces
text = re.sub(' +', ' ', text)
# remove spaces at the end and start of string
text = re.sub(r'\A +| +$', '', text)
return text
def postprocess(self, text):
        # the word "the" adds nothing to a search query, so strip it (whole words only)
        text = re.sub(r'\bthe\b', '', text)
text = self.removeExtraSpaces(text)
return text
def postprocess_answer(self, answer):
"""postrocess answer of model"""
if '▁' in answer: # some models have this approach
# replace "▁wo rd" with "word"
answer = answer.replace(' ', '').replace('▁', ' ')
        if '##' in answer:  # other models use this convention
# replace "wo ##rd" with "word"
answer = answer.replace(' ##', '')
# replace "word . Word" with "word. Word", same with comma
answer = answer.replace(' .', '.').replace(' ,', ',')
# replace "word ' s" with "word's"
answer = answer.replace(' \' ', '\'')
# replace floats "2. 3" and "2, 3" with "2.3" and "2,3"
answer = re.sub(r'\d,\s\d|\d\.\s\d',
lambda x: x.group(0).replace(' ', ''),
answer)
answer = self.removeExtraSpaces(answer)
return answer
def remove_punctuation(self, text):
text = re.sub(r"[.,!?;:()[]{}\"'«»]", '', text)
return text
def getTitle(self, text):
"""
        Get all capitalized words, except the first word of the sentence
        example:
        "Hello World, it's me Mario" -> ["World", "Mario"]
"""
text = self.remove_punctuation(text)
        split = text.split()
capital = [i if i[0].isupper() else ' ' for i in split[1:]]
capital = ' '.join(capital)
return [i for i in re.split(' +', capital) if i]
def getNE(self, doc):
"""return every named entity, that can be used in search"""
ne = [self.postprocess(ent.text) for ent in doc.ents
if ent.label_ not in ('CARDINAL', 'DATE', 'ORDINAL', 'NORP')]
return ne
def getAllNE(self, doc):
"""return every named entity"""
if type(doc) == str:
doc = self.nlp(doc)
return doc.ents
def getInQuotation(self, text):
"""
return sentence in quotations if there are any
"""
        # temporarily mask apostrophes inside words ('s, 've, etc.) so they are not mistaken for quotes
        sent = re.sub(r"(?<=[a-zA-Z])'(?=[a-zA-Z])", '[quot]', text)
# check if there are any quots
if ('\'' not in sent) and ('\"' not in sent) and ('«' not in sent):
return []
# there might be words like "James' " that have quots in the end
if ('\"' in sent or '«' in sent) and "\'" in sent:
sent = sent.replace("\'", '[quot]')
if sent.count("\'") % 2 == 1:
warn(f"could not figure out the quotations in {text}")
return []
# find sentences in quotations
sent = re.findall(r"(?<=').*(?=')|"
r'(?<=").*(?=")|'
r"(?<=«).*(?=»)", sent)
# replace all quotes with "
# wikipedia will search exact match for sentences in quotes
        sent = [f'"{i}"'.replace('[quot]', "'") for i in sent]  # put back '
return [self.postprocess(i) for i in sent]
def getNNP(self, doc):
"""return every proper noun or noun and everything associated with this words"""
important = []
        for token in doc:
if (token.pos_ in ['PROPN', 'NOUN']):
associated = self.find_associated(token, [])
# do not return simple words like "name"
if (len(associated) > 0 or
(len(associated) == 0 and token.text.istitle())):
important.append(associated + [token.i])
return [' '.join([
self.postprocess(doc[i].text) for i in sorted(sent)
if 'name' not in doc[i].text and
'many' not in doc[i].text and
'much' not in doc[i].text
]) for sent in important
]
def remove_repeating(self, important):
"""
remove repeating words and parts of words
example:
["Hello", "Hello World", "Hello World"] -> ["Hello World"]
"""
# get amount of words
len_important = [i.count(' ') for i in important]
# sort by the amount of words in ascending order
sort = np.argsort(len_important)
important = np.array(important, dtype=object)[sort]
l = len(important)
# list of positions of words to remove
leave = list(range(l))
for i in range(l):
for u in range(i+1, l):
if important[i] in important[u]:
leave[i] = -1
break
# which words to leave
leave = [i for i in leave if i != -1]
return important[leave].tolist()
def find_associated(self, token, output): # you need to pass output=[]
"""return everything connected with word"""
for child in token.children:
if child.dep_ in ['compound', 'nmod', 'nummod', 'amod']:
output.append(child.i)
self.find_associated(child, output)
return output
``` |
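A small usage sketch: calling the processor on a question returns the list of search-query candidates described in `__call__`. The spaCy model must be downloaded first, the import path depends on how the package is installed, and the exact output varies with the model version.

```python
# python -m spacy download en_core_web_sm   # required once, before first use
from TextProcessor import TextProcessor     # or wikipediaqa.TextProcessor, depending on install

tp = TextProcessor("en_core_web_sm")
print(tp('Who wrote the novel "The Old Man and the Sea"?'))
# -> a list of search candidates: the quoted title first, then named entities and nouns
```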
{
"source": "JosephTico/paid-catalogo",
"score": 3
} |
#### File: entrega2/4-restauracion_imagenes/4_8_filtro_butterworth.py
```python
import numpy as np
from PIL import Image, ImageOps
import matplotlib.pyplot as plt
from scipy import ndimage
def imageToArray(filename: str, rgb: bool = False):
img = Image.open(filename)
if not rgb:
img = ImageOps.grayscale(img)
img_array = np.array(img)
return img_array
img = imageToArray("camarografo.jpg")
ruido = imageToArray("ruido_periodico.jpg")
A = (img*(3/4)+ruido*(1/4)).astype(float)
F1 = np.fft.fft2(A)
D1 = np.fft.fftshift(F1)
freq_base = np.log(1+np.abs(D1))
# distance matrix
m, n = A.shape
dist = np.zeros(A.shape)
m1 = m//2
n1 = n//2
for i in range(m):
for j in range(n):
dist[i, j] = np.sqrt((i-m1)**2 + (j-n1)**2)
W = 6      # band width
D0 = 32    # band (notch) radius
order = 2  # Butterworth filter order
# ind = ((D0-W/2) <= dist) & (dist <= (D0+W/2))
H = 1/(1 + ((dist*W)/(dist**2 - D0**2))**(2*order))
F2 = np.fft.fftshift(H)*F1
D2 = np.fft.fftshift(F2)
freq_filtrada = np.log(1+np.abs(D2))
A_filtrada = np.fft.ifft2(F2)
A_filtrada = np.real(A_filtrada)
plt.subplot(2, 3, 1)
plt.title('Imagen A')
plt.imshow(A, cmap='gray')
plt.subplot(2, 3, 4)
plt.title('Frecuencia imagen')
plt.imshow(freq_base, cmap='gray')
plt.subplot(2, 3, 2)
plt.title('Filtro rechaza banda Butterworth')
plt.imshow(H, cmap='gray')
plt.subplot(2, 3, 5)
plt.title('Frecuencia imagen con filtro')
plt.imshow(freq_filtrada, cmap='gray')
plt.subplot(2, 3, 3)
plt.title('Imagen con filtro')
plt.imshow(A_filtrada, cmap='gray')
plt.show()
```
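For comparison with the commented-out mask in the filter section, an ideal band-reject filter built from the same distance matrix looks like this; it reuses `A`, `dist`, `D0`, `W` and `F1` from the script above.

```python
# Ideal band-reject version of the same notch (reuses dist, D0, W and F1 from the script above)
import numpy as np

H_ideal = np.ones(A.shape)
ring = ((D0 - W/2) <= dist) & (dist <= (D0 + W/2))
H_ideal[ring] = 0

A_ideal = np.real(np.fft.ifft2(np.fft.fftshift(H_ideal) * F1))
```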
#### File: entrega2/5-morfologicas/5_2_apertura_clausura_binarias.py
```python
import numpy as np
from PIL import Image, ImageOps
import matplotlib.pyplot as plt
from scipy import ndimage
def imageToArray(filename:str, rgb:bool=False):
img = Image.open(filename)
if not rgb:
img = ImageOps.grayscale(img)
img_array = np.array(img)
return img_array
def binaria(input_image, th=127):
    result_image = input_image >= th
return result_image
def apertura(input_image,structure_rank=2, structure_connectivity=1):
structure=ndimage.generate_binary_structure(structure_rank, structure_connectivity)
result_image = ndimage.binary_erosion(input_image,structure=structure )
result_image = ndimage.binary_dilation(result_image,structure=structure )
return result_image
def clausura(input_image, structure_rank=2, structure_connectivity=1):
structure=ndimage.generate_binary_structure(structure_rank, structure_connectivity)
result_image = ndimage.binary_dilation(input_image,structure=structure )
result_image = ndimage.binary_erosion(result_image,structure=structure )
return result_image
A = imageToArray("imagenes/imagen5.jpg")
A_bin = binaria(A)
# A_dilatada = ndimage.binary_dilation(A_bin, structure=np.ones((3,3)))
# A_dilatada = ndimage.binary_dilation(A_bin, structure=np.ones((2,2)))
# A_dilatada = ndimage.binary_dilation(A_bin)
# A_erosionada = ndimage.binary_dilation(A_bin)
A_apertura=apertura(A_bin, 2)
A_clausura=clausura(A_bin, 2)
# Imagen A
plt.subplot(1, 3, 1)
plt.title('Imagen A')
plt.imshow(A_bin, cmap='gray')
plt.subplot(1, 3, 2)
plt.title('Apertura')
plt.imshow(A_apertura, cmap='gray')
plt.subplot(1, 3, 3)
plt.title('Clausura')
plt.imshow(A_clausura, cmap='gray')
plt.show()
``` |
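The two helpers are the standard opening and closing operations, so they should agree with SciPy's built-ins under the same structuring element; a quick sanity check, assuming `apertura` and `clausura` from the script above are in scope.

```python
# Sanity check: apertura/clausura match SciPy's binary_opening/binary_closing
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((64, 64)) > 0.5
structure = ndimage.generate_binary_structure(2, 1)

assert np.array_equal(apertura(img), ndimage.binary_opening(img, structure=structure))
assert np.array_equal(clausura(img), ndimage.binary_closing(img, structure=structure))
```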
{
"source": "josephtjohnson/Meme_Generator",
"score": 3
} |
#### File: Meme_Generator/MemeGenerator/MemeEngine.py
```python
import os
import random
from utils import image_resize, text_draw
from typing import List
from PIL import Image, ImageFont, ImageDraw
class MemeEngine:
"""
A class to represent a meme.
...
Attributes
----------
output : str
file save location
img_path : str
image file path
text : str
quote text
author : str
quote author
width : int
width of image in pixels (default = 500)
Methods
-------
make_meme(img_path, text, author, width=500):
Receives the image path, quote, and author and returns the file
save location of the generated meme.
"""
def __init__(self, output):
"""
Constructs attribute for save location object.
Parameters
----------
output : str
file save location
"""
self.output = output
if output is not None:
if not os.path.exists(self.output):
os.mkdir(self.output)
def make_meme(self, img_path, text, author, width=500) -> str:
"""
Constructs attributes for meme and returns file save location
for meme image.
Parameters
----------
img_path : str
image file path
text : str
quote text
author : str
quote author
width : int
width of image in pixels (default = 500)
"""
img = image_resize(img_path, width)
width, height = img.size
ext = img_path.split('.')[-1]
font = ImageFont.truetype('fonts/Courgette-Regular.ttf', 25)
fill = (255, 255, 255)
draw = ImageDraw.Draw(img)
text_draw(draw, text, author, fill, font, int(width), int(height))
save_dir = f'{self.output}/{random.randint(0,10000)}.{ext}'
img.save(save_dir)
return save_dir
```
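Typical usage: build the engine with an output directory, then pass an image path plus a quote. The `from MemeGenerator import MemeEngine` import style matches `utils.py` below; the image path and quote text are placeholders.

```python
from MemeGenerator import MemeEngine

meme = MemeEngine('./tmp')                                       # memes are written into ./tmp
out_path = meme.make_meme('./_data/photos/dog/sample_dog.jpg',   # placeholder image path
                          'Life is short, nap hard',             # placeholder quote
                          'A. Dog')                              # placeholder author
print(out_path)   # e.g. ./tmp/4231.jpg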
#### File: Meme_Generator/QuoteEngine/DocxIngestor.py
```python
from docx import Document
from typing import List
from .IngestorInterface import IngestorInterface
from .QuoteModel import QuoteModel
class DocxIngestor(IngestorInterface):
"""
A class that inherits from IngestorInterface and creates a list of quotes.
...
Attributes
----------
path : str
file path
Methods
-------
parse(path):
Receives the quote file path and returns list of QuoteModel objects that contains the quote body and author.
"""
file_types = ['docx']
@classmethod
def parse(cls, path: str) -> List[QuoteModel]:
"""
Parses an accepted file and returns a list of QuoteModel objects that contains the quote body and author.
Parameters
----------
path : str
file path
"""
if not cls.can_ingest(path):
raise Exception(f'Cannot ingest {path}')
quotes = []
doc = Document(path)
for para in doc.paragraphs:
if para.text != "":
parse = para.text.split(' - ')
new_quote = QuoteModel(str(parse[0]), str(parse[1]))
quotes.append(new_quote)
return quotes
```
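The parser expects each paragraph to hold a quote and author separated by `' - '`; a short round-trip sketch, with a placeholder file name and python-docx installed.

```python
# Round-trip sketch: each paragraph must look like '<quote> - <author>' (file name is a placeholder)
from docx import Document
from QuoteEngine.DocxIngestor import DocxIngestor

doc = Document()
doc.add_paragraph('"To err is human" - Anonymous')
doc.save('sample_quotes.docx')

for quote in DocxIngestor.parse('sample_quotes.docx'):
    print(quote)   # a QuoteModel holding the body and author
```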
#### File: josephtjohnson/Meme_Generator/utils.py
```python
from QuoteEngine import Ingestor, QuoteModel
from MemeGenerator import MemeEngine
from PIL import Image
import argparse
import random
import os
import textwrap
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s:%(levelname)s:%(message)s')
file_handler = logging.FileHandler('utils.log')
file_handler.setLevel(logging.INFO)
file_handler.setFormatter(formatter)
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(formatter)
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
def open_image(category):
"""
Opens an image from a user-specified category.
Parameters
----------
category : str
image category (dog or book, default=dog)
"""
images = "./_data/photos/book/"
if category == 'dog':
images = "./_data/photos/dog/"
imgs = []
for root, dirs, files in os.walk(images):
imgs = [os.path.join(root, name) for name in files]
return random.choice(imgs)
def open_image_app():
"""
    Return the list of all dog image paths used to build memes.
"""
images = "./_data/photos/dog/"
imgs = []
for root, dirs, files in os.walk(images):
imgs = [os.path.join(root, name) for name in files]
return imgs
def open_quote(category):
"""
Opens a quote from a user-specified category.
Parameters
----------
category : str
image category (dog or book, default=dog)
"""
quote_files = ['./_data/BookQuotes/BookQuotesDOCX.docx']
if category == 'dog':
quote_files = ['./_data/DogQuotes/DogQuotesTXT.txt',
'./_data/DogQuotes/DogQuotesDOCX.docx',
'./_data/DogQuotes/DogQuotesPDF.pdf',
'./_data/DogQuotes/DogQuotesCSV.csv']
quotes = []
for f in quote_files:
quotes.extend(Ingestor.parse(f))
return random.choice(quotes)
def open_quote_app():
"""
    Return the full list of quotes used to build memes.
"""
quote_files = ['./_data/DogQuotes/DogQuotesTXT.txt',
'./_data/DogQuotes/DogQuotesDOCX.docx',
'./_data/DogQuotes/DogQuotesPDF.pdf',
'./_data/DogQuotes/DogQuotesCSV.csv']
quotes = []
for f in quote_files:
quotes.extend(Ingestor.parse(f))
return quotes
def image_resize(img_path, width=500):
"""
Resize an image to be used by make_meme()
    Parameters
    ----------
img_path : str
image file path
width : int
width of image in pixels (default = 500)
"""
MAX_WIDTH: int = 500
    assert width is not None, 'Width must not be None'
    assert width <= MAX_WIDTH, f'Width must not exceed {MAX_WIDTH}'
img_path = img_path
with Image.open(img_path) as img:
ratio = width/float(img.size[0])
height = int(ratio*img.size[1])
img = img.resize((width, height))
return img
def text_draw(draw, text, author, fill, font, width, height):
"""
Draw text in random location on image.
    Parameters
    ----------
draw : image object
image
text : str
quote text
author : str
quote text
fill : tuple
text fill
font : font object
text font
width : int
image width
height : int
image height
"""
x_max = int(0.6*width)
y_max = int(0.8*height)
x = random.randint(15, x_max)
y = random.randint(20, y_max)
wrap_limit = (width - x)*0.08
text = textwrap.fill(text, wrap_limit)
if len(text+author) > (height-y)*0.5:
draw.text((20, 20), text=text+'\n'+'-'+author, fill=fill, font=font)
else:
draw.text((x, y), text=text+'\n'+'-'+author, fill=fill, font=font)
return draw
``` |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.