question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
71,942,122 | 2022-4-20 | https://stackoverflow.com/questions/71942122/creating-python-zip-for-aws-lambda-using-bazel | I've a monorepo that contains a set of Python AWS lambdas and I'm using Bazel for building and packaging the lambdas. I'm now trying to use Bazel to create a zip file that follows the expected AWS Lambdas packaging and that I can upload to Lambda. Wondering what's the best way to do this with Bazel? Below are a few different things I've tried thus far: Attempt 1: py_binary BUILD.bazel py_binary( name = "main_binary", srcs = glob(["*.py"]), main = "main.py", visibility = ["//appcode/api/transaction_details:__subpackages__"], deps = [ requirement("Faker"), ], ) Problem: This generates the following: main_binary (python executable) main_binary.runfiles main_binary.runfiles_manifest Lambda expects the handler to be in the format of lambda_function.lambda_handler. Since main_binary is an executable vs. a python file, it doesn't expose the actual handler method and the lambda blows up because it can't find it. I tried updating the handler configuration to simply point to the main_binary but it blows up because it expects two arguments(i.e. lambda_function.lambda_handler). Attempt 2: py_library + pkg_zip BUILD.bazel py_library( name = "main", srcs = glob(["*.py"]), visibility = ["//appcode/api/transaction_details:__subpackages__"], deps = [ requirement("Faker"), ], ) pkg_zip( name = "main_zip", srcs =["//appcode/api/transaction_details/src:main" ], ) Problem: This generates a zip file with: main.py __init__.py The zip file now includes the main.py but none of its runtime dependencies. Thus the lambda blows up because it can't find Faker. Other Attempts: I've also tried using the --build_python_zip flag as well as the @bazel_tools//tools/zip:zipper with a generic rule but they both lead to similar outcomes as the two previous attempts. | Below are the changes I made to the previous answer to generate the lambda zip. Thanks @jvolkman for the original suggestion. project/BUILD.bazel: Added rule to generate requirements_lock.txt from project/requirements.txt load("@rules_python//python:pip.bzl", "compile_pip_requirements") compile_pip_requirements( name = "requirements", extra_args = ["--allow-unsafe"], requirements_in = "requirements.txt", requirements_txt = "requirements_lock.txt", ) project/WORKSPACE.bazel: swap pip_install with pip_parse workspace(name = "mdc-eligibility") load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") http_archive( name = "rules_python", sha256 = "9fcf91dbcc31fde6d1edb15f117246d912c33c36f44cf681976bd886538deba6", strip_prefix = "rules_python-0.8.0", url = "https://github.com/bazelbuild/rules_python/archive/refs/tags/0.8.0.tar.gz", ) load("@rules_python//python:repositories.bzl", "python_register_toolchains") python_register_toolchains( name = "python3_9", python_version = "3.9", ) load("@rules_python//python:pip.bzl", "pip_parse") load("@python3_9//:defs.bzl", "interpreter") pip_parse( name = "mndc-eligibility-deps", requirements_lock = "//:requirements_lock.txt", python_interpreter_target = interpreter, quiet = False ) load("@mndc-eligibility-deps//:requirements.bzl", "install_deps") install_deps() project/build_rules/lambda_packaging/lambda.bzl: Modified custom rule provided by @jvolkman to include source code in the resulting zip code. 
def contains(pattern): return "contains:" + pattern def startswith(pattern): return "startswith:" + pattern def endswith(pattern): return "endswith:" + pattern def _is_ignored(path, patterns): for p in patterns: if p.startswith("contains:"): if p[len("contains:"):] in path: return True elif p.startswith("startswith:"): if path.startswith(p[len("startswith:"):]): return True elif p.startswith("endswith:"): if path.endswith(p[len("endswith:"):]): return True else: fail("Invalid pattern: " + p) return False def _short_path(file_): # Remove prefixes for external and generated files. # E.g., # ../py_deps_pypi__pydantic/pydantic/__init__.py -> pydantic/__init__.py short_path = file_.short_path if short_path.startswith("../"): second_slash = short_path.index("/", 3) short_path = short_path[second_slash + 1:] return short_path # steven chambers def _py_lambda_zip_impl(ctx): deps = ctx.attr.target[DefaultInfo].default_runfiles.files f = ctx.outputs.output args = [] for dep in deps.to_list(): short_path = _short_path(dep) # Skip ignored patterns if _is_ignored(short_path, ctx.attr.ignore): continue args.append(short_path + "=" + dep.path) # MODIFICATION: Added source files to the map of files to zip source_files = ctx.attr.target[DefaultInfo].files for source_file in source_files.to_list(): args.append(source_file.basename+"="+source_file.path) ctx.actions.run( outputs = [f], inputs = deps, executable = ctx.executable._zipper, arguments = ["cC", f.path] + args, progress_message = "Creating archive...", mnemonic = "archiver", ) out = depset(direct = [f]) return [ DefaultInfo( files = out, ), OutputGroupInfo( all_files = out, ), ] _py_lambda_zip = rule( implementation = _py_lambda_zip_impl, attrs = { "target": attr.label(), "ignore": attr.string_list(), "_zipper": attr.label( default = Label("@bazel_tools//tools/zip:zipper"), cfg = "host", executable = True, ), "output": attr.output(), }, executable = False, test = False, ) def py_lambda_zip(name, target, ignore, **kwargs): _py_lambda_zip( name = name, target = target, ignore = ignore, output = name + ".zip", **kwargs ) project/appcode/api/transaction_details/src/BUILD.bazel: Used custom py_lambda_zip rule to zip up py_library load("@mndc-eligibility-deps//:requirements.bzl", "requirement") load("@python3_9//:defs.bzl", "interpreter") load("//build_rules/lambda_packaging:lambda.bzl", "contains", "endswith", "py_lambda_zip", "startswith") py_library( name = "main", srcs = glob(["*.py"]), visibility = ["//appcode/api/transaction_details:__subpackages__"], deps = [ requirement("Faker"), ], ) py_lambda_zip( name = "lambda_archive", ignore = [ contains("/__pycache__/"), endswith(".pyc"), endswith(".pyo"), # Ignore boto since it's provided by Lambda. startswith("boto3/"), startswith("botocore/"), # With the move to hermetic toolchains, the zip gets a lib/ directory containing the # python runtime. We don't need that. startswith("lib/"), ], target = ":main", ) | 5 | 2 |
72,019,169 | 2022-4-26 | https://stackoverflow.com/questions/72019169/what-is-the-function-of-the-errors-parameter-to-subprocess-popen | The python subprocess docs only mention the errors parameter in passing: If encoding or errors are specified, or universal_newlines is true, file objects for stdin, stdout and stderr are opened in text mode using the specified encoding and errors or the io.TextIOWrapper default. I understand the usage of text and universal_newlines, which are synonyms. I also understand the usage of encoding. What is the function of the errors parameter? It appears to have been added in Python 3.6. Is errors just a synonym of encoding? If it is just a synonym, are there any published reasoning for this addition? | The errors parameter is described in the linked documentation for io.TextIOWrapper: errors is an optional string that specifies how encoding and decoding errors are to be handled. Pass 'strict' to raise a ValueError exception if there is an encoding error (the default of None has the same effect), or pass 'ignore' to ignore errors. (Note that ignoring encoding errors can lead to data loss.) 'replace' causes a replacement marker (such as '?') to be inserted where there is malformed data. 'backslashreplace' causes malformed data to be replaced by a backslashed escape sequence. When writing, 'xmlcharrefreplace' (replace with the appropriate XML character reference) or 'namereplace' (replace with \N{...} escape sequences) can be used. Any other error handling name that has been registered with codecs.register_error() is also valid. | 5 | 5 |
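A minimal sketch of the effect in practice (not from the post above; the child command is just a stand-in that emits one byte that is not valid UTF-8):

```python
import subprocess
import sys

# Child process writes raw bytes that are not valid UTF-8 (0xE9 is Latin-1 'é').
child = [sys.executable, "-c", "import sys; sys.stdout.buffer.write(b'caf\\xe9')"]

# With the default errors='strict', decoding the child's stdout would raise
# UnicodeDecodeError; errors='replace' substitutes U+FFFD for the bad byte instead.
result = subprocess.run(child, capture_output=True, encoding="utf-8", errors="replace")
print(result.stdout)  # caf�
```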
72,001,505 | 2022-4-25 | https://stackoverflow.com/questions/72001505/how-to-get-unique-elements-and-their-firstly-appeared-indices-of-a-pytorch-tenso | Assume a 2*X(always 2 rows) pytorch tensor: A = tensor([[ 1., 2., 2., 3., 3., 3., 4., 4., 4.], [43., 33., 43., 76., 33., 76., 55., 55., 55.]]) torch.unique(A, dim=1) will return: tensor([[ 1., 2., 2., 3., 3., 4.], [43., 33., 43., 33., 76., 55.]]) But I also need the indices of every unique elements where they firstly appear in original input. In this case, indices should be like: tensor([0, 1, 2, 3, 4, 6]) # Explanation # A = tensor([[ 1., 2., 2., 3., 3., 3., 4., 4., 4.], # [43., 33., 43., 76., 33., 76., 55., 55., 55.]]) # (0) (1) (2) (3) (4) (6) It's complex for me because the second row of tensor A may not be nicely sorted: A = tensor([[ 1., 2., 2., 3., 3., 3., 4., 4., 4.], [43., 33., 43., 76., 33., 76., 55., 55., 55.]]) ^ ^ Is there a simple and efficient method to get the desired indices? P.S. It may be useful that the first row of the tensor is always in ascending order. | One possible way to gain such indicies: unique, idx, counts = torch.unique(A, dim=1, sorted=True, return_inverse=True, return_counts=True) _, ind_sorted = torch.sort(idx, stable=True) cum_sum = counts.cumsum(0) cum_sum = torch.cat((torch.tensor([0]), cum_sum[:-1])) first_indicies = ind_sorted[cum_sum] For tensor A in snippet above: print(first_indicies) # tensor([0, 1, 2, 4, 3, 6]) Note that unique in this case is equal to: tensor([[ 1., 2., 2., 3., 3., 4.], [43., 33., 43., 33., 76., 55.]]) | 7 | 8 |
72,002,559 | 2022-4-25 | https://stackoverflow.com/questions/72002559/converting-object-to-dictionary-key | I was wondering if there is an easy way to essentially have multiple keys in a dictionary for one value. An example of what I would like to achieve is as following: class test: key="test_key" def __str__(self): return self.key tester = test() dictionary = {} dictionary[tester] = 1 print(dictionary[tester]) print(dictionary["test_key"]) where the output would be: >>> 1 >>> 1 What I'm looking for is a way to automatically convert the object to a string before its used as a key. Is this possible? | Personally, I think it's better to explicitly cast the object to a string, e.g. dictionary[str(tester)] = 1 That being said, if you're really really REALLY sure you want to do this, define the __hash__ and __eq__ dunder methods. No need to create a new data structure or change the existing code outside of the class definition: class test: key="test_key" def __hash__(self): return hash(self.key) def __eq__(self, other): if isinstance(other, str): return self.key == other return self.key == other.key def __str__(self): return self.key This will output: 1 1 | 6 | 4 |
71,973,392 | 2022-4-22 | https://stackoverflow.com/questions/71973392/importerror-cannot-import-rendering-from-gym-envs-classic-control | I'm working with RL agents, and was trying to replicate the findings of this paper, wherein they make a custom parkour environment based on OpenAI Gym; however, when trying to render this environment I run into the following error. import numpy as np import time import gym import TeachMyAgent.environments env = gym.make('parametric-continuous-parkour-v0', agent_body_type='fish', movable_creepers=True) env.set_environment(input_vector=np.zeros(3), water_level = 0.1) env.reset() while True: _, _, d, _ = env.step(env.action_space.sample()) env.render(mode='human') time.sleep(0.1) c:\users\manu dwivedi\teachmyagent\TeachMyAgent\environments\envs\parametric_continuous_parkour.py in render(self, mode, draw_lidars) 462 463 def render(self, mode='human', draw_lidars=True): --> 464 from gym.envs.classic_control import rendering 465 if self.viewer is None: 466 self.viewer = rendering.Viewer(RENDERING_VIEWER_W, RENDERING_VIEWER_H) ImportError: cannot import name 'rendering' from 'gym.envs.classic_control' (C:\ProgramData\Anaconda3\envs\teachagent\lib\site-packages\gym\envs\classic_control\__init__.py) [1]: https://github.com/flowersteam/TeachMyAgent I thought this might be a problem with this custom environment and how the authors decided to render it; however, when I try just from gym.envs.classic_control import rendering I run into the same error. GitHub users here suggested this can be solved by adding render_mode='human' when calling gym.make(), but that only seems to apply to their specific case. | I got it to work (with help from a fellow student) by downgrading the gym package to 0.21.0. I performed the command pip install gym==0.21.0 for this. Update, from a GitHub issue: Based on https://github.com/openai/gym/issues/2779 This seems to be a problem with gymgrid; there is an open PR: wsgdrfz/gymgrid#1 If you want to use the latest version of gym, you can try using the branch of that PR (https://github.com/CedricHermansBIT/gymgrid2); you can install it with pip install gymgrid2 | 10 | 8 |
72,001,132 | 2022-4-25 | https://stackoverflow.com/questions/72001132/python-typing-tuplestr-vs-tuplestr | What's the difference between tuple[str, ...] vs tuple[str]? (python typing) | giving a tuple type with ellipsis means that number of elements in this tuple is not known, but type is known: x: tuple[str, ...] = ("hi",) #VALID x: tuple[str, ...] = ("hi", "world") #ALSO VALID not using ellipsis means a tuple with specific number of elements, e.g.: y: tuple[str] = ("hi", "world") # Type Warning: Expected type 'Tuple[str]', got 'Tuple[str, str]' instead This goes in contrast to notation of other collections, e.g. list[str] means list of any length with elements of type str. | 17 | 20 |
71,949,467 | 2022-4-21 | https://stackoverflow.com/questions/71949467/pydantic-validation-error-for-basesettings-model-with-local-env-file | I'm developing a simple FastAPI app and I'm using Pydantic for storing app settings. Some settings are populated from the environment variables set by Ansible deployment tools, but some other settings need to be set explicitly from a separate env file. So I have this in config.py class Settings(BaseSettings): # Project wide settings PROJECT_MODE: str = getenv("PROJECT_MODE", "sandbox") VERSION: str class Config: env_file = "config.txt" And I have this config.txt VERSION="0.0.1" So the project_mode env var is being set by the deployment script and version is being set from the env file. The reason for that is that we'd like to keep the deployment script similar across all projects, so any custom vars are populated from the project-specific env files. But the problem is that when I run the app, it fails with: pydantic.error_wrappers.ValidationError: 1 validation error for Settings VERSION field required (type=value_error.missing) So how can I populate the Pydantic settings model from the local ENV file? | If the environment file isn't being picked up, most of the time it's because it isn't placed in the current working directory. In your application it needs to be in the directory where the application is run from (or, if the application manages the CWD itself, wherever it expects to find it). In particular when running tests this can be a bit confusing, and you might have to configure your IDE to run tests with the CWD set to the project root if you run the tests from your IDE. | 7 | 11 |
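One hedged way around the working-directory dependency, assuming the pydantic v1-style BaseSettings used in the question, is to anchor env_file to the module's own location (the file layout below is an assumption for illustration):

```python
from os import getenv
from pathlib import Path

from pydantic import BaseSettings

# Assumes config.txt lives next to this config.py module, so it is found
# regardless of which directory the application is started from.
BASE_DIR = Path(__file__).resolve().parent


class Settings(BaseSettings):
    PROJECT_MODE: str = getenv("PROJECT_MODE", "sandbox")
    VERSION: str

    class Config:
        env_file = BASE_DIR / "config.txt"


settings = Settings()
```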
71,996,380 | 2022-4-25 | https://stackoverflow.com/questions/71996380/how-to-assign-a-function-to-a-route-functionally-without-a-route-decorator-in-f | In Flask, it is possible to assign an arbitrary function to a route functionally like: from flask import Flask app = Flask() def say_hello(): return "Hello" app.add_url_rule('/hello', 'say_hello', say_hello) which is equivalent to (with decorators): @app.route("/hello") def say_hello(): return "Hello" Is there such a simple and functional way (add_url_rule) in FastAPI? | You can use the add_api_route method to add a route to a router or an app programmatically: from fastapi import FastAPI, APIRouter def foo_it(): return {'Fooed': True} app = FastAPI() router = APIRouter() router.add_api_route('/foo', endpoint=foo_it) app.include_router(router) app.add_api_route('/foo-app', endpoint=foo_it) Both expose the same endpoint in two different locations: λ curl http://localhost:8000/foo {"Fooed":true} λ curl http://localhost:8000/foo-app {"Fooed":true} | 4 | 8 |
71,996,274 | 2022-4-25 | https://stackoverflow.com/questions/71996274/upsampling-entire-groups-in-python | Suppose I have a dataframe like this import pandas as pd df = pd.DataFrame({'ID':[1,1,1,1,1,2,2,2,2], 'Order':[1,2,3,4,5,1,2,3,4]}) df ID Order 0 1 1 1 1 2 2 1 3 3 1 4 4 1 5 5 2 1 6 2 2 7 2 3 8 2 4 I need to upsample by entire blocks, so essentially creating new copies of the block for ID == 1, and ID == 2. So the upsampled df (for n = 2 two samples with replacement) might look something like df_upsampled = pd.DataFrame({'ID':[1,1,1,1,1,2,2,2,2,1,1,1,1,1,2,2,2,2], 'Order':[1,2,3,4,5,1,2,3,4,1,2,3,4,5,1,2,3,4]}) df_upsampled ID Order 0 1 1 1 1 2 2 1 3 3 1 4 4 1 5 5 2 1 6 2 2 7 2 3 8 2 4 9 1 1 10 1 2 11 1 3 12 1 4 13 1 5 14 2 1 15 2 2 16 2 3 17 2 4 I thought I could handle this quick with resample() but haven't figured out how to copy entire blocks (per ID) | So basically all you need to do is to get a copy of your data frame and concat it to the original one: df = pd.concat([df, df], ignore_index=True) | 4 | 2 |
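The accepted answer duplicates the whole frame once; for the more general case the question mentions (drawing n blocks with replacement), one possible sketch - not taken from the answer - is to sample the IDs and concatenate the matching blocks:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [1, 1, 1, 1, 1, 2, 2, 2, 2],
                   'Order': [1, 2, 3, 4, 5, 1, 2, 3, 4]})

n = 2  # number of blocks to draw
rng = np.random.default_rng(0)
sampled_ids = rng.choice(df['ID'].unique(), size=n, replace=True)

# Pull each sampled block in the drawn order and stack them into one frame.
df_upsampled = pd.concat([df[df['ID'] == i] for i in sampled_ids],
                         ignore_index=True)
print(df_upsampled)
```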
71,969,299 | 2022-4-22 | https://stackoverflow.com/questions/71969299/how-to-disable-code-formatting-in-ipython | IPython has this new feature that reformats my prompt. Unfortunately, it is really buggy, so I want to disable it. I managed to do it when starting IPython from the command line by adding the following line in my ipython_config.py: c.TerminalInteractiveShell.autoformatter = None However, it does not work when I run it from a python script. I start IPython from my script the following way: c = traitlets.config.get_config() c.InteractiveShellEmbed.colors = "Linux" c.TerminalInteractiveShell.autoformatter = None c.InteractiveShellEmbed.loop_runner = lambda coro: loop.run_until_complete(coro) IPython.embed(display_banner='', using='asyncio', config=c) If I change the colors value, the colors change accordingly, so the configuration itself works. However, no matter what I do with autoformatter, IPython autoformats my code regardless. What am I doing wrong? | Apparently, the answer is: c.InteractiveShellEmbed.autoformatter = None | 8 | 1 |
71,990,420 | 2022-4-24 | https://stackoverflow.com/questions/71990420/how-do-i-efficiently-find-which-elements-of-a-list-are-in-another-list | I want to know which elements of list_1 are in list_2. I need the output as an ordered list of booleans. But I want to avoid for loops, because both lists have over 2 million elements. This is what I have and it works, but it's too slow: list_1 = [0,0,1,2,0,0] list_2 = [1,2,3,4,5,6] booleans = [] for i in list_1: booleans.append(i in list_2) # booleans = [False, False, True, True, False, False] I could split the list and use multithreading, but I would prefer a simpler solution if possible. I know some functions like sum() use vector operations. I am looking for something similar. How can I make my code more efficient? | If you want to use a vector approach you can also use Numpy isin. It's not the fastest method, as demonstrated by oda's excellent post, but it's definitely an alternative to consider. import numpy as np list_1 = [0,0,1,2,0,0] list_2 = [1,2,3,4,5,6] a1 = np.array(list_1) a2 = np.array(list_2) np.isin(a1, a2) # array([False, False, True, True, False, False]) | 46 | 15 |
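For a pure-Python route (one of the faster alternatives the answer alludes to), converting list_2 to a set makes each membership test O(1) on average; a minimal version:

```python
list_1 = [0, 0, 1, 2, 0, 0]
list_2 = [1, 2, 3, 4, 5, 6]

# Build the set once, then each lookup avoids scanning list_2 element by element.
lookup = set(list_2)
booleans = [x in lookup for x in list_1]
print(booleans)  # [False, False, True, True, False, False]
```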
71,962,260 | 2022-4-22 | https://stackoverflow.com/questions/71962260/reading-data-in-vertex-ai-pipelines | This is my first time using Google's Vertex AI Pipelines. I checked this codelab as well as this post and this post, on top of some links derived from the official documentation. I decided to put all that knowledge to work, in some toy example: I was planning to build a pipeline consisting of 2 components: "get-data" (which reads some .csv file stored in Cloud Storage) and "report-data" (which basically returns the shape of the .csv data read in the previous component). Furthermore, I was cautious to include some suggestions provided in this forum. The code I currently have, goes as follows: from kfp.v2 import compiler from kfp.v2.dsl import pipeline, component, Dataset, Input, Output from google.cloud import aiplatform # Components section @component( packages_to_install=[ "google-cloud-storage", "pandas", ], base_image="python:3.9", output_component_file="get_data.yaml" ) def get_data( bucket: str, url: str, dataset: Output[Dataset], ): import pandas as pd from google.cloud import storage storage_client = storage.Client("my-project") bucket = storage_client.get_bucket(bucket) blob = bucket.blob(url) blob.download_to_filename('localdf.csv') # path = "gs://my-bucket/program_grouping_data.zip" df = pd.read_csv('localdf.csv', compression='zip') df['new_skills'] = df['new_skills'].apply(ast.literal_eval) df.to_csv(dataset.path + ".csv" , index=False, encoding='utf-8-sig') @component( packages_to_install=["pandas"], base_image="python:3.9", output_component_file="report_data.yaml" ) def report_data( inputd: Input[Dataset], ): import pandas as pd df = pd.read_csv(inputd.path) return df.shape # Pipeline section @pipeline( # Default pipeline root. You can override it when submitting the pipeline. pipeline_root=PIPELINE_ROOT, # A name for the pipeline. name="my-pipeline", ) def my_pipeline( url: str = "test_vertex/pipeline_root/program_grouping_data.zip", bucket: str = "my-bucket" ): dataset_task = get_data(bucket, url) dimensions = report_data( dataset_task.output ) # Compilation section compiler.Compiler().compile( pipeline_func=my_pipeline, package_path="pipeline_job.json" ) # Running and submitting job from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") run1 = aiplatform.PipelineJob( display_name="my-pipeline", template_path="pipeline_job.json", job_id="mlmd-pipeline-small-{0}".format(TIMESTAMP), parameter_values={"url": "test_vertex/pipeline_root/program_grouping_data.zip", "bucket": "my-bucket"}, enable_caching=True, ) run1.submit() I was happy to see that the pipeline compiled with no errors, and managed to submit the job. However "my happiness lasted short", as when I went to Vertex AI Pipelines, I stumbled upon some "error", which goes like: The DAG failed because some tasks failed. The failed tasks are: [get-data].; Job (project_id = my-project, job_id = 4290278978419163136) is failed due to the above error.; Failed to handle the job: {project_number = xxxxxxxx, job_id = 4290278978419163136} I did not find any related info on the web, neither could I find any log or something similar, and I feel a bit overwhelmed that the solution to this (seemingly) easy example, is still eluding me. Quite obviously, I don't what or where I am mistaking. Any suggestion? | With some suggestions provided in the comments, I think I managed to make my demo pipeline work. 
I will first include the updated code: from kfp.v2 import compiler from kfp.v2.dsl import pipeline, component, Dataset, Input, Output from datetime import datetime from google.cloud import aiplatform from typing import NamedTuple # Importing 'COMPONENTS' of the 'PIPELINE' @component( packages_to_install=[ "google-cloud-storage", "pandas", ], base_image="python:3.9", output_component_file="get_data.yaml" ) def get_data( bucket: str, url: str, dataset: Output[Dataset], ): """Reads a csv file, from some location in Cloud Storage""" import ast import pandas as pd from google.cloud import storage # 'Pulling' demo .csv data from a know location in GCS storage_client = storage.Client("my-project") bucket = storage_client.get_bucket(bucket) blob = bucket.blob(url) blob.download_to_filename('localdf.csv') # Reading the pulled demo .csv data df = pd.read_csv('localdf.csv', compression='zip') df['new_skills'] = df['new_skills'].apply(ast.literal_eval) df.to_csv(dataset.path + ".csv" , index=False, encoding='utf-8-sig') @component( packages_to_install=["pandas"], base_image="python:3.9", output_component_file="report_data.yaml" ) def report_data( inputd: Input[Dataset], ) -> NamedTuple("output", [("rows", int), ("columns", int)]): """From a passed csv file existing in Cloud Storage, returns its dimensions""" import pandas as pd df = pd.read_csv(inputd.path+".csv") return df.shape # Building the 'PIPELINE' @pipeline( # i.e. in my case: PIPELINE_ROOT = 'gs://my-bucket/test_vertex/pipeline_root/' # Can be overriden when submitting the pipeline pipeline_root=PIPELINE_ROOT, name="readcsv-pipeline", # Your own naming for the pipeline. ) def my_pipeline( url: str = "test_vertex/pipeline_root/program_grouping_data.zip", bucket: str = "my-bucket" ): dataset_task = get_data(bucket, url) dimensions = report_data( dataset_task.output ) # Compiling the 'PIPELINE' compiler.Compiler().compile( pipeline_func=my_pipeline, package_path="pipeline_job.json" ) # Running the 'PIPELINE' TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") run1 = aiplatform.PipelineJob( display_name="my-pipeline", template_path="pipeline_job.json", job_id="mlmd-pipeline-small-{0}".format(TIMESTAMP), parameter_values={ "url": "test_vertex/pipeline_root/program_grouping_data.zip", "bucket": "my-bucket" }, enable_caching=True, ) # Submitting the 'PIPELINE' run1.submit() Now, I will add some complementary comments, which in sum, managed to solve my problem: First, having the "Logs Viewer" (roles/logging.viewer) enabled for your user, will greatly help to troubleshoot any existing error in your pipeline (Note: that role worked for me, however you might want to look for a better matching role for you own purposes here). Those errors will appear as "Logs", which can be accessed by clicking the corresponding button: NOTE: In the picture above, when the "Logs" are displayed, it might be helpful to carefully check each log (close to the time when you created you pipeline), as generally each eof them corresponds with a single warning or error line: Second, the output of my pipeline was a tuple. In my original approach, I just returned the plain tuple, but it is advised to return a NamedTuple instead. In general, if you need to input / output one or more "small values" (int or str, for any reason), pick a NamedTuple to do so. Third, when the connection between your pipelines is Input[Dataset] or Ouput[Dataset], adding the file extension is needed (and quite easy to forget). 
Take for instance the output of the get_data component, and notice how the data is recorded by specifically adding the file extension, i.e. dataset.path + ".csv". Of course, this is a very tiny example, and real projects can easily grow much larger; however, as a sort of "Hello Vertex AI Pipelines" it will work well. Thank you. | 4 | 4 |
71,993,231 | 2022-4-24 | https://stackoverflow.com/questions/71993231/how-to-type-a-tuple-with-many-elements-in-python | I am experimenting with the typing module and I wanted to know how to properly type something like a Nonagon (a 9 point polygon), which should be a Tuple and not a List because it should be immutable. In 2D space, it would be something like this: Point2D = Tuple[float, float] Nonagon = Tuple[Point2D, Point2D, Point2D, Point2D, Point2D, Point2D, Point2D, Point2D, Point2D] nine_points: Nonagon = ( (0.0, 0.0), (6.0, 0.0), (6.0, 2.0), (2.0, 2.0), (6.0, 5.0), (2.0, 8.0), (6.0, 8.0), (6.0, 10.0), (0.0, 10.0), ) Is there any syntactic sugar available to make the Nonagon declaration shorter or easier to read? This is not valid Python, but I am looking for something similar to this: Nonagon = Tuple[*([Point2D] * 9)] # Not valid Python Or using NamedTuple # Not properly detected by static type analysers Nonagon = NamedTuple('Nonagon', [(f"point_{i}", Point2D) for i in range(9)]) This is NOT what I want: # Valid but allows for more and less than 9 points Nonagon = Tuple[Point2D, ...] I think the most adequate way would be something like: from typing import Annotated # Valid but needs MinMaxLen and checking logic to be defined from scratch Nonagon = Annotated[Point2D, MinMaxLen(9, 9)] | You can use the types module. All type hints come from types.GenericAlias. From the doc: Represent a PEP 585 generic type E.g. for t = list[int], t.__origin__ is list and t.__args__ is (int,). This means that you can make your own type hinting by passing the type arguments to the class itself. >>> Point2D = tuple[float, float] >>> Nonagon = types.GenericAlias(tuple, (Point2D,)*9) >>> Nonagon tuple[tuple[float, float], tuple[float, float], tuple[float, float], tuple[float, float], tuple[float, float], tuple[float, float], tuple[float, float], tuple[float, float], tuple[float, float]] | 9 | 4 |
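A related shorthand, not part of the answer above but worth noting: subscripting with a pre-built tuple of types is treated the same as listing the types individually, so the alias can also be spelled without calling GenericAlias directly:

```python
Point2D = tuple[float, float]

# Equivalent to writing Point2D nine times inside the brackets.
Nonagon = tuple[(Point2D,) * 9]
print(Nonagon)  # same nine-element alias as the GenericAlias version above
```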
71,990,386 | 2022-4-24 | https://stackoverflow.com/questions/71990386/calculating-divergence-and-curl-from-optical-flow-and-plotting-it | I'm using flow = cv2.calcOpticalFlowFarneback() to calculate optical flow in a video and it gives me a numpy array with a shape of (height, width, 2) that contains the Fx and Fy values for each pixel (flow[:,:,0] = Fx and flow[:,:,1] = Fy). For calculating the divergence I'm using np.gradient like this: def divergence_npgrad(flow): Fx, Fy = flow[:, :, 0], flow[:, :, 1] F = [Fx, Fy] d = len(F) return np.ufunc.reduce(np.add, [np.gradient(F[i], axis=i) for i in range(d)]) Next, I want to calculate the curl. I know there is a curl function in sympy.physics.vector but I really don't get how is it working or how is it would apply to my flow. So I thought I could use np.gradient for this too. In 2D I need to calculate dFy/dx - dFx/dy for every pixel, so i wold be like this: def curl_npgrad(flow): Fx, Fy = flow[:, :, 0], flow[:, :, 1] dFx_dy = np.gradient(Fx, axis=1) dFy_dx = np.gradient(Fy, axis=0) curl = np.ufunc.reduce(np.subtract, [dFy_dx, dFx_dy]) return curl Is it a right way to do this or am I missing something? Now if I have the curl, I want to make two plots with matplotlib. My point is that I want to show the vectors from flow with different colormaps. One plot would use the magnitude values of the vectors as colormap, normalized to (0-magnitude_max). The other plot would use the curl values as colormap, where the arrows are blue if the curl is negative and red if the curl is positive in that position. Here is what I'm trying to use: def flow_plot(flow, frame): frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) h, w = flow.shape[:2] dpi = 72 xinch = w / dpi yinch = h / dpi step = 24 y, x = np.mgrid[step / ((h % step) // 2):h:step, step / ((w % step) // 2):w:step].reshape(2, -1).astype(int) fx, fy = flow[y, x].T mag = np.sqrt(np.power(fx, 2) + np.power(fy, 2)) fx = fx / mag fy = fy / mag curl = curl_npgrad(flow) curl_map = curl[y, x] quiver_params = dict(cmap='Oranges', # for magnitude #cmap='seismic', # for curl norm=colors.Normalize(vmin=0.0, vmax=1.0), # for magnitude #norm = colors.CenteredNorm(), # for curl units='xy', scale=0.03, headwidth=3, headlength=5, minshaft=1.5, pivot='middle') fig = plt.figure(figsize=(xinch, yinch), dpi=dpi) plt.imshow(frame) plt.quiver(x, y, fx, fy, mag, **quiver_params) plt.gca().invert_yaxis() plt.gca().set_aspect('equal', 'datalim') plt.axis('off') fig.tight_layout(pad=0) fig.canvas.draw() img = np.frombuffer(fig.canvas.tostring_rgb(), dtype='uint8') img = img.reshape(fig.canvas.get_width_height()[::-1] + (3,)) img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR) img = cv2.flip(img, 0) plt.close(fig) return img I'm converting the plot to a cv2 image so i can use it for opencv video writer. I noticed that if I'm not showing the original frame behind the plot, I have to invert the y axis and use -fy in plt.quiver(), if I want to show the frame behind, I have to invert the y axis too, can use fy, but than I have to flip the whole image afterwards. How does it make any sense? I can't get it. As for the curl, it's kinda messy for me. barely showing any color, random red an blue spots, not a buch of red/blue arrows where the fluid clearly rotating. It's like these: image1 of the messy curl, image2 of the messy curl Is it a bad way to calculate the curl for this kind of flow? What am I missing? 
| If I understand the way you've set up your axes correctly, you're missing a flow = np.swapaxes(flow, 0, 1) at the top of both divergence_npgrad and curl_npgrad. I tried applying your curl and divergence functions to simple functions where I already knew the correct curl and divergence. For example, I tried this function: F_x = x F_y = y Produces this plot: For this function, divergence should be high everywhere. Applying your function, it tells me that divergence is 0. Code to produce this array: import numpy as np import matplotlib.pyplot as plt x,y = np.meshgrid(np.linspace(-2,2,10),np.linspace(-2,2,10)) u = x v = y field2 = np.stack((u, v), axis=-1) plt.quiver(x, y, field2[:, :, 0], field2[:, :, 1]) I also tried a function for testing curl: F_x = cos(x + y) F_y = sin(x - y) Which produces this plot: Code to produce this array: import numpy as np import matplotlib.pyplot as plt x,y = np.meshgrid(np.linspace(0,2,10),np.linspace(0,2,10)) u = np.cos(x + y) v = np.sin(x - y) field = np.stack((u, v), axis=-1) plt.quiver(x, y, field[:, :, 0], field[:, :, 1]) Thanks to U of Mich. Math department for this example! For this function, curl should be highest around the center swirl. Applying your curl function to this, I get a result that curl is highest in the corners. Here's the code I tried which works: def divergence_npgrad(flow): flow = np.swapaxes(flow, 0, 1) Fx, Fy = flow[:, :, 0], flow[:, :, 1] dFx_dx = np.gradient(Fx, axis=0) dFy_dy = np.gradient(Fy, axis=1) return dFx_dx + dFy_dy def curl_npgrad(flow): flow = np.swapaxes(flow, 0, 1) Fx, Fy = flow[:, :, 0], flow[:, :, 1] dFx_dy = np.gradient(Fx, axis=1) dFy_dx = np.gradient(Fy, axis=0) curl = dFy_dx - dFx_dy return curl Some explanation of why I think this is a correct change: np.gradient(..., axis=0) provides the gradient across rows, because that's axis 0. In your input image, you have the shape (height, width, 2), so axis 0 actually represents height, or y. Using np.swapaxes(flow, 0, 1) swaps the order of the x and y axes of that array. Using np.ufunc.reduce is not required - you can use broadcasting instead and it will work fine. It's also a little easier to read. Here are the results for using this code. Here is the result of the divergence calculation on the first function. Result: positive, constant value everywhere. Here is the result of the curl calculation on the second function. Result: a positive value centered on the swirly part of the function. (There's a change of axes here - 0,0 is in the top left.) | 4 | 5 |
71,985,442 | 2022-4-24 | https://stackoverflow.com/questions/71985442/pycharm-is-generating-language-errors-for-python-version-3-6-although-interprete | The language interpreter is set to a Python 3.9 version: But a Python scratch file is being parsed by some kind of 3.6 interpreter: Note that I tried this in two different scratch files and the same error occurs. Why would this happen and is there a workaround [short of creating an entirely new project from scratch]? I am on PyCharm Professional 2021.3.1 Update based on the answer by @TurePaisson: he thought that maybe the "Code is compatible with specific Python" option was set. That was a shrewd guess - but it turns out I have not set that: Update The following snippet can be used to test python3.6 vs 3.8+ x = (y := 3) + 7 | Following up on @bad_coder's suggested fix, which can be paraphrased as: check whether the Run Configuration points to a different Python interpreter than the project-level one. That fix worked for me: Bring up the Run [Context menu] | Edit Configurations Change the Python interpreter to the appropriate one: Shown below is a case where the interpreter is set to the earlier language level. Go to the dropdown and select a correct [python 3.8+] interpreter. | 6 | 1 |
71,944,041 | 2022-4-20 | https://stackoverflow.com/questions/71944041/using-modern-typing-features-on-older-versions-of-python | So, I was writing an event emitter class using Python. Code currently looks like this: from typing import Callable, Generic, ParamSpec P = ParamSpec('P') class Event(Generic[P]): def __init__(self): ... def addHandler(self, action : Callable[P, None]): ... def removeHandler(self, action : Callable[P, None]): ... def fire(self, *args : P.args, **kwargs : P.kwargs): ... As you can see, annotations depend on ParamSpec, which was added to typing in python 3.10 only. And while it works good in Python 3.10 (on my machine), it fails in Python 3.9 and older (on other machines) because ParamSpec is a new feature. So, how could I avoid importing ParamSpec when running program or use some fallback alternative, while not confusing typing in editor (pyright)? | I don't know if there was any reason to reinvent the wheel, but typing_extensions module is maintained by python core team, supports python3.7 and later and is used exactly for this purpose. You can just check python version and choose proper import source: import sys if sys.version_info < (3, 10): from typing_extensions import ParamSpec else: from typing import ParamSpec | 6 | 12 |
71,987,196 | 2022-4-24 | https://stackoverflow.com/questions/71987196/decrypt-in-go-what-was-encrypted-with-aes-in-cfb-mode-in-python | Issue I want to be able to decrypt in Go what was encrypted in Python. The encrypting/decrypting functions work respectively in each language but not when I am encrypting in Python and decrypting in Go, I am guessing there is something wrong with the encoding because I am getting gibberish output: RxοΏ½οΏ½οΏ½οΏ½dοΏ½οΏ½IοΏ½K|οΏ½apοΏ½οΏ½οΏ½kοΏ½οΏ½B%FοΏ½οΏ½οΏ½UVοΏ½~d3hοΏ½ΓοΏ½οΏ½οΏ½οΏ½|οΏ½οΏ½οΏ½οΏ½οΏ½>οΏ½BοΏ½οΏ½BοΏ½ Encryption/Decryption in Python def encrypt(plaintext, key=config.SECRET, key_salt='', no_iv=False): """Encrypt shit the right way""" # sanitize inputs key = SHA256.new((key + key_salt).encode()).digest() if len(key) not in AES.key_size: raise Exception() if isinstance(plaintext, string_types): plaintext = plaintext.encode('utf-8') # pad plaintext using PKCS7 padding scheme padlen = AES.block_size - len(plaintext) % AES.block_size plaintext += (chr(padlen) * padlen).encode('utf-8') # generate random initialization vector using CSPRNG if no_iv: iv = ('\0' * AES.block_size).encode() else: iv = get_random_bytes(AES.block_size) log.info(AES.block_size) # encrypt using AES in CFB mode ciphertext = AES.new(key, AES.MODE_CFB, iv).encrypt(plaintext) # prepend iv to ciphertext if not no_iv: ciphertext = iv + ciphertext # return ciphertext in hex encoding log.info(ciphertext) return ciphertext.hex() def decrypt(ciphertext, key=config.SECRET, key_salt='', no_iv=False): """Decrypt shit the right way""" # sanitize inputs key = SHA256.new((key + key_salt).encode()).digest() if len(key) not in AES.key_size: raise Exception() if len(ciphertext) % AES.block_size: raise Exception() try: ciphertext = codecs.decode(ciphertext, 'hex') except TypeError: log.warning("Ciphertext wasn't given as a hexadecimal string.") # split initialization vector and ciphertext if no_iv: iv = '\0' * AES.block_size else: iv = ciphertext[:AES.block_size] ciphertext = ciphertext[AES.block_size:] # decrypt ciphertext using AES in CFB mode plaintext = AES.new(key, AES.MODE_CFB, iv).decrypt(ciphertext).decode() # validate padding using PKCS7 padding scheme padlen = ord(plaintext[-1]) if padlen < 1 or padlen > AES.block_size: raise Exception() if plaintext[-padlen:] != chr(padlen) * padlen: raise Exception() plaintext = plaintext[:-padlen] return plaintext Encryption/Decryption in Go // PKCS5Padding adds padding to the plaintext to make it a multiple of the block size func PKCS5Padding(src []byte, blockSize int) []byte { padding := blockSize - len(src)%blockSize padtext := bytes.Repeat([]byte{byte(padding)}, padding) return append(src, padtext...) } // Encrypt encrypts the plaintext,the input salt should be a random string that is appended to the plaintext // that gets fed into the one-way function that hashes it. func Encrypt(plaintext) string { h := sha256.New() h.Write([]byte(os.Getenv("SECRET"))) key := h.Sum(nil) plaintextBytes := PKCS5Padding([]byte(plaintext), aes.BlockSize) block, err := aes.NewCipher(key) if err != nil { panic(err) } // The IV needs to be unique, but not secure. Therefore it's common to // include it at the beginning of the ciphertext. 
ciphertext := make([]byte, aes.BlockSize+len(plaintextBytes)) iv := ciphertext[:aes.BlockSize] if _, err := io.ReadFull(rand.Reader, iv); err != nil { panic(err) } stream := cipher.NewCFBEncrypter(block, iv) stream.XORKeyStream(ciphertext[aes.BlockSize:], plaintextBytes) // return hexadecimal representation of the ciphertext return hex.EncodeToString(ciphertext) } func PKCS5UnPadding(src []byte) []byte { length := len(src) unpadding := int(src[length-1]) return src[:(length - unpadding)] } func Decrypt(ciphertext string) string { h := sha256.New() // have to check if the secret is hex encoded h.Write([]byte(os.Getenv("SECRET"))) key := h.Sum(nil) ciphertext_bytes := []byte(ciphertext) block, err := aes.NewCipher(key) if err != nil { panic(err) } log.Print(aes.BlockSize) // The IV needs to be unique, but not secure. Therefore it's common to // include it at the beginning of the ciphertext. iv := ciphertext_bytes[:aes.BlockSize] if len(ciphertext) < aes.BlockSize { panic("ciphertext too short") } ciphertext_bytes = ciphertext_bytes[aes.BlockSize:] stream := cipher.NewCFBDecrypter(block, iv) stream.XORKeyStream(ciphertext_bytes, ciphertext_bytes) plaintext := PKCS5UnPadding(ciphertext_bytes) return string(plaintext) } | The CFB mode uses a segment size which corresponds to the bits encrypted per encryption step, see CFB. Go only supports a segment size of 128 bits (CFB128), at least without deeper modifications (s. here and here). In contrast, the segment size in PyCryptodome is configurable and defaults to 8 bits (CFB8), s. here. The posted Python code uses this default value, so the two codes are incompatible. Since the segment size is not adjustable in the Go code, it must be set to CFB128 in the Python code: cipher = AES.new(key, AES.MODE_CFB, iv, segment_size=128) Also, the ciphertext is hex encoded in the Python code, so it must be hex decoded in the Go code, which does not yet happen in the posted code. With these both changes, the ciphertext produced with the Python code can be decrypted. The ciphertext in the following Go Code was created with the Python code using a segment size of 128 bits and the passphrase my passphrase and is successfully decrypted: package main import ( "crypto/aes" "crypto/cipher" "crypto/sha256" "encoding/hex" "fmt" ) func main() { ciphertextHex := "546ddf226c4c556c7faa386940f4fff9b09f7e3a2ccce2ed26f7424cf9c8cd743e826bc8a2854bb574df9f86a94e7b2b1e63886953a6a3eb69eaa5fa03d69ba5" // Fix 1: Apply CFB128 on the Python side fmt.Println(Decrypt(ciphertextHex)) // The quick brown fox jumps over the lazy dog } func PKCS5UnPadding(src []byte) []byte { length := len(src) unpadding := int(src[length-1]) return src[:(length - unpadding)] } func Decrypt(ciphertext string) string { h := sha256.New() //h.Write([]byte(os.Getenv("SECRET"))) h.Write([]byte("my passphrase")) // Apply passphrase from Python side key := h.Sum(nil) //ciphertext_bytes := []byte(ciphertext) ciphertext_bytes, _ := hex.DecodeString(ciphertext) // Fix 2. Hex decode ciphertext block, err := aes.NewCipher(key) if err != nil { panic(err) } iv := ciphertext_bytes[:aes.BlockSize] if len(ciphertext) < aes.BlockSize { panic("ciphertext too short") } ciphertext_bytes = ciphertext_bytes[aes.BlockSize:] stream := cipher.NewCFBDecrypter(block, iv) stream.XORKeyStream(ciphertext_bytes, ciphertext_bytes) plaintext := PKCS5UnPadding(ciphertext_bytes) return string(plaintext) } Security: Using a digest as key derivation function is insecure. Apply a dedicated key derivation function like PBKDF2. 
A static or missing salt is also insecure. Use a randomly generated salt for each encryption. Concatenate the non-secret salt with the ciphertext (analogous to the IV), e.g. salt|IV|ciphertext. The variant no_iv=True applies a static IV (zero IV), which is insecure and should not be used. The correct way is described with the variant no_iv=False. CFB is a stream cipher mode and therefore does not require padding/unpadding, which can therefore be removed on both sides. | 4 | 7 |
71,986,643 | 2022-4-24 | https://stackoverflow.com/questions/71986643/userwarning-failed-to-initialize-numpy-module-compiled-against-api-version-0xf | (my2022) C:\Users\donhu>pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113 Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu113 Requirement already satisfied: torch in d:\programdata\anaconda3\envs\my2022\lib\site-packages (1.10.2) Collecting torchvision Downloading https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp310-cp310-win_amd64.whl (5.4 MB) |ββββββββββββββββββββββββββββββββ| 5.4 MB 1.1 MB/s Collecting torchaudio Downloading https://download.pytorch.org/whl/cu113/torchaudio-0.11.0%2Bcu113-cp310-cp310-win_amd64.whl (573 kB) |ββββββββββββββββββββββββββββββββ| 573 kB 6.4 MB/s Requirement already satisfied: typing_extensions in d:\programdata\anaconda3\envs\my2022\lib\site-packages (from torch) (4.1.1) Collecting torch Downloading https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp310-cp310-win_amd64.whl (2186.0 MB) |ββββββββββββββββββββββββββββββββ| 2186.0 MB 4.7 kB/s Requirement already satisfied: numpy in d:\programdata\anaconda3\envs\my2022\lib\site-packages (from torchvision) (1.21.5) Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\programdata\anaconda3\envs\my2022\lib\site-packages (from torchvision) (9.0.1) Requirement already satisfied: requests in d:\programdata\anaconda3\envs\my2022\lib\site-packages (from torchvision) (2.27.1) Requirement already satisfied: urllib3<1.27,>=1.21.1 in d:\programdata\anaconda3\envs\my2022\lib\site-packages (from requests->torchvision) (1.26.8) Requirement already satisfied: certifi>=2017.4.17 in d:\programdata\anaconda3\envs\my2022\lib\site-packages (from requests->torchvision) (2021.5.30) Requirement already satisfied: idna<4,>=2.5 in d:\programdata\anaconda3\envs\my2022\lib\site-packages (from requests->torchvision) (3.3) Requirement already satisfied: charset-normalizer~=2.0.0 in d:\programdata\anaconda3\envs\my2022\lib\site-packages (from requests->torchvision) (2.0.4) Installing collected packages: torch, torchvision, torchaudio Attempting uninstall: torch Found existing installation: torch 1.10.2 Uninstalling torch-1.10.2: Successfully uninstalled torch-1.10.2 Successfully installed torch-1.11.0+cu113 torchaudio-0.11.0+cu113 torchvision-0.12.0+cu113 (my2022) C:\Users\donhu>python Python 3.10.4 | packaged by conda-forge | (main, Mar 30 2022, 08:38:02) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import torch D:\ProgramData\Anaconda3\envs\my2022\lib\site-packages\torch\_masked\__init__.py:223: UserWarning: Failed to initialize NumPy: module compiled against API version 0xf but this version of numpy is 0xe (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:68.) 
example_input = torch.tensor([[-3, -2, -1], [0, 1, 2]]) >>> torch.FloatTensor([31, 1, 1989], [26, 8, 1987]) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: new() received an invalid combination of arguments - got (list, list), but expected one of: * (*, torch.device device) didn't match because some of the arguments have invalid types: (list, list) * (torch.Storage storage) * (Tensor other) * (tuple of ints size, *, torch.device device) * (object data, *, torch.device device) >>> torch.FloatTensor([[31, 1, 1989], [26, 8, 1987]]) tensor([[3.1000e+01, 1.0000e+00, 1.9890e+03], [2.6000e+01, 8.0000e+00, 1.9870e+03]]) >>> import torchtorch.FloatTensor([[31, 1, 1989], [26, 8, 1987]]) File "<stdin>", line 1 import torchtorch.FloatTensor([[31, 1, 1989], [26, 8, 1987]]) ^ SyntaxError: invalid syntax >>> import torchtorch.FloatTensor([[31, 1, 1989], [26, 8, 1987]]) File "<stdin>", line 1 import torchtorch.FloatTensor([[31, 1, 1989], [26, 8, 1987]]) ^ SyntaxError: invalid syntax >>> import torch >>> torch.FloatTensor([[31, 1, 1989], [26, 8, 1987]]) tensor([[3.1000e+01, 1.0000e+00, 1.9890e+03], [2.6000e+01, 8.0000e+00, 1.9870e+03]]) >>> jupyter Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'jupyter' is not defined >>> notebook Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'notebook' is not defined >>> jupyter-notebook Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'jupyter' is not defined >>> exit() (my2022) C:\Users\donhu>python -m notebook [I 15:14:15.768 NotebookApp] The port 8888 is already in use, trying another port. [I 15:14:15.769 NotebookApp] Serving notebooks from local directory: C:\Users\donhu [I 15:14:15.770 NotebookApp] Jupyter Notebook 6.4.8 is running at: [I 15:14:15.770 NotebookApp] http://localhost:8889/?token=a354cd600920030068b4020dfc955b40d721d0da3e749421 [I 15:14:15.770 NotebookApp] or http://127.0.0.1:8889/?token=a354cd600920030068b4020dfc955b40d721d0da3e749421 [I 15:14:15.770 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). [C 15:14:15.828 NotebookApp] To access the notebook, open this file in a browser: file:///C:/Users/donhu/AppData/Roaming/jupyter/runtime/nbserver-14140-open.html Or copy and paste one of these URLs: http://localhost:8889/?token=a354cd600920030068b4020dfc955b40d721d0da3e749421 or http://127.0.0.1:8889/?token=a354cd600920030068b4020dfc955b40d721d0da3e749421 [I 15:15:03.168 NotebookApp] 302 GET /?token=a354cd600920030068b4020dfc955b40d721d0da3e749421 (::1) 1.000000ms [W 15:15:09.559 NotebookApp] Notebook Untitled.ipynb is not trusted [I 15:15:09.849 NotebookApp] Kernel started: d123ea3f-ccd3-4d2c-8bb0-2265c8610c76, name: python3 [I 15:15:17.878 NotebookApp] Starting buffering for d123ea3f-ccd3-4d2c-8bb0-2265c8610c76:c8907cbab1e34c409557d30a3227e5a6 [W 15:15:20.088 NotebookApp] Notebook Untitled.ipynb is not trusted [I 15:17:20.272 NotebookApp] Saving file at /Untitled.ipynb [W 15:17:20.272 NotebookApp] Notebook Untitled.ipynb is not trusted run import torch print(torch.__version__) Error 1.11.0+cu113 D:\ProgramData\Anaconda3\envs\my2022\lib\site-packages\torch\_masked\__init__.py:223: UserWarning: Failed to initialize NumPy: module compiled against API version 0xf but this version of numpy is 0xe (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:68.) example_input = torch.tensor([[-3, -2, -1], [0, 1, 2]]) How to fix it? 
| The problem is caused by: using the latest version of PyTorch (too new), and using the latest version of conda (which does not provide the NumPy build that PyTorch 1.11.0 needs), then running Python from conda's virtual environment. Solution: download Python 3.10.4 from https://www.python.org/, install PyTorch with the command in the question, and install Jupyter notebook. Do not use conda or Anaconda Navigator. Then everything works with no problem. | 11 | -2 |
71,986,268 | 2022-4-24 | https://stackoverflow.com/questions/71986268/pandas-constant-values-after-each-zero-value | Say I have the following dataframe: values 0 4 1 0 2 2 3 3 4 0 5 8 6 5 7 1 8 0 9 4 10 7 I want to find a pandas vectorized function (preferably using groupby) that would replace all nonzero values with the first nonzero value in that chunk of nonzero values, i.e. something that would give me values new 0 4 4 1 0 0 2 2 2 3 3 2 4 0 0 5 8 8 6 5 8 7 1 8 8 0 0 9 4 4 10 7 4 Is there a good way of achieving this? | Make a boolean mask to select the rows having zero and its following row, then use this boolean mask with where to replace remaining values with NaN, then use forward fill to propagate the values in forward direction. m = df['values'].eq(0) df['new'] = df['values'].where(m | m.shift()).ffill().fillna(df['values']) Result print(df) values new 0 4 4.0 1 0 0.0 2 2 2.0 3 3 2.0 4 0 0.0 5 8 8.0 6 5 8.0 7 1 8.0 8 0 0.0 9 4 4.0 10 7 4.0 | 5 | 4 |
71,984,449 | 2022-4-23 | https://stackoverflow.com/questions/71984449/how-to-add-an-extra-middle-step-into-a-list-comprehension | Let's say I have a list[str] object containing timestamps in "HH:mm" format, e.g. timestamps = ["22:58", "03:11", "12:21"] I want to convert it to a list[int] object with the "number of minutes since midnight" values for each timestamp: converted = [22*60+58, 3*60+11, 12*60+21] ... but I want to do it in style and use a single list comprehension to do it. A (syntactically incorrect) implementation that I naively constructed was something like: def timestamps_to_minutes(timestamps: list[str]) -> list[int]: return [int(hh) * 60 + int(mm) for ts in timestamps for hh, mm = ts.split(":")] ... but this doesn't work because for hh, mm = ts.split(":") is not a valid syntax. What would be the valid way of writing the same thing? To clarify: I can see a formally satisfying solution in the form of: def timestamps_to_minutes(timestamps: list[str]) -> list[int]: return [int(ts.split(":")[0]) * 60 + int(ts.split(":")[1]) for ts in timestamps] ... but this is highly inefficient and I don't want to split the string twice. | You could use an inner generator expression to do the splitting: [int(hh)*60 + int(mm) for hh, mm in (ts.split(':') for ts in timestamps)] Although personally, I'd rather use a helper function instead: def timestamp_to_minutes(timestamp: str) -> int: hh, mm = timestamp.split(":") return int(hh)*60 + int(mm) [timestamp_to_minutes(ts) for ts in timestamps] # Alternative list(map(timestamp_to_minutes, timestamps)) | 26 | 33 |
71,984,170 | 2022-4-23 | https://stackoverflow.com/questions/71984170/pandas-dataframe-get-maximum-with-respect-to-other-entries | I have a Dataframe like this: name phase value BOB 1 .9 BOB 2 .05 BOB 3 .05 JOHN 2 .45 JOHN 3 .45 JOHN 4 .05 FRANK 1 .4 FRANK 3 .6 I want to find which entry in column 'phase' has the maximum value in column 'value'. If more than one share the same maximum value keep the first or a random value for 'phase'. Desired result table: name phase value BOB 1 .9 JOHN 2 .45 FRANK 3 .6 my approach was: df.groupby(['name'])[['phase','value']].max() but it returned incorrect values. | You don't need to use groupby. Sort values by value and phase (adjust the order if necessary) and drop duplicates by name: out = (df.sort_values(['value', 'phase'], ascending=[False, True]) .drop_duplicates('name') .sort_index(ignore_index=True)) print(out) # Output name phase value 0 BOB 1 0.90 1 JOHN 2 0.45 2 FRANK 3 0.60 | 4 | 4 |
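An alternative groupby-based sketch (not from the answer): idxmax returns the label of the first maximum per group, so ties keep the earliest row, matching the requirement; note the result comes out ordered by group name rather than by first appearance:

```python
import pandas as pd

df = pd.DataFrame({'name': ['BOB', 'BOB', 'BOB', 'JOHN', 'JOHN', 'JOHN', 'FRANK', 'FRANK'],
                   'phase': [1, 2, 3, 2, 3, 4, 1, 3],
                   'value': [.9, .05, .05, .45, .45, .05, .4, .6]})

# Select, for each name, the row whose 'value' is largest (first one on ties).
out = df.loc[df.groupby('name')['value'].idxmax()].reset_index(drop=True)
print(out)
```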
71,978,756 | 2022-4-23 | https://stackoverflow.com/questions/71978756/keras-symbolic-inputs-outputs-do-not-implement-len-error | I want to make an AI playing my custom environment, unfortunately, when I run my code, following error accrues: File "C:\Program Files\JetBrains\PyCharm Community Edition 2021.2\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "C:\Program Files\JetBrains\PyCharm Community Edition 2021.2\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "D:/PycharmProjects/Custom Enviroment AI/Enviroment.py", line 88, in <module> DQN = buildAgent(model, actions) File "D:/PycharmProjects/Custom Enviroment AI/Enviroment.py", line 82, in buildAgent dqn = DQNAgent(model, memory=memory, policy=policy, nb_actions=actions, nb_steps_warmup=10, File "D:\PycharmProjects\Custom Enviroment AI\venv\lib\site-packages\rl\agents\dqn.py", line 108, in __init__ if hasattr(model.output, '__len__') and len(model.output) > 1: File "D:\PycharmProjects\Custom Enviroment AI\venv\lib\site-packages\keras\engine\keras_tensor.py", line 221, in __len__ raise TypeError('Keras symbolic inputs/outputs do not ' TypeError: Keras symbolic inputs/outputs do not implement `__len__`. You may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model. This error will also get raised if you try asserting a symbolic input/output directly. The error says that you souldn't use len() and you should use .shape istead, unfortunately this seems to be an error inside tensorflow My full code is: from rl.memory import SequentialMemory from rl.policy import BoltzmannQPolicy from rl.agents.dqn import DQNAgent from keras.layers import Dense import tensorflow as tf import numpy as np import random import pygame import gym class Env(gym.Env): def __init__(self): self.action_space = gym.spaces.Discrete(4) self.observation_space = gym.spaces.MultiDiscrete([39, 27]) self.screen = pygame.display.set_mode((800, 600)) self.PlayerX = 0 self.PlayerY = 0 self.FoodX = 0 self.FoodY = 0 self.state = [self.FoodX - self.PlayerX + 19, self.FoodY - self.PlayerY + 14] self.timeLimit = 1000 def render(self, mode="human"): self.screen.fill((0, 0, 0)) pygame.draw.rect(self.screen, (255, 255, 255), pygame.Rect(self.PlayerX * 40, self.PlayerY * 40, 40, 40)) pygame.draw.rect(self.screen, (255, 0, 0), pygame.Rect(self.FoodX * 40, self.FoodY * 40, 40, 40)) pygame.display.update() def reset(self): self.FoodX = random.randint(1, 19) self.FoodY = random.randint(1, 14) self.PlayerX = 0 self.PlayerY = 0 self.timeLimit = 1000 return self.state def step(self, action): self.timeLimit -= 1 reward = -1 if action == 0 and self.PlayerY > 0: self.PlayerY -= 1 if action == 1 and self.PlayerX > 0: self.PlayerX -= 1 if action == 2 and self.PlayerY < 14: self.PlayerY += 1 if action == 3 and self.PlayerX < 19: self.PlayerX += 1 if self.PlayerX == self.FoodX and self.PlayerY == self.FoodY: reward += 30 self.FoodX = random.randint(1, 19) self.FoodY = random.randint(1, 14) if self.timeLimit <= 0: done = True else: done = False self.state = [self.FoodX - self.PlayerX, self.FoodY - self.PlayerY] return self.state, reward, done env = Env() states = env.observation_space.shape actions = env.action_space.n def build_model(states, actions): 
model = tf.keras.Sequential() model.add(Dense(2, activation='relu', input_shape=states)) model.add(Dense(4, activation='relu')) model.add(Dense(actions, activation='linear')) return model def buildAgent(model, actions): policy = BoltzmannQPolicy() memory = SequentialMemory(limit=50000, window_length=1) dqn = DQNAgent(model, memory=memory, policy=policy, nb_actions=actions, nb_steps_warmup=10, target_model_update=1e-2) return dqn model = build_model(states, actions) DQN = buildAgent(model, actions) DQN.compile(tf.keras.optimizers.Adam(learning_rate=1e-3), metrics=['mae']) DQN.fit(env, nb_steps=50000, visualize=False, verbose=1) scores = DQN.test(env, nb_episodes=100, visualize=True) print(np.mean(scores.history['episode_reward'])) pygame.quit() model.save('model.h5') I use Tensorflow: 2.8.0. This seems to be an error in Tensorflow's code but I have no idea what to do | As mentioned here, you need to install a newer version of keras-rl: !pip install keras-rl2 You also need to add an extra dimension to your input shape and a Flatten layer at the end, since Keras expects this when working with the DQN agent: def build_model(states, actions): model = tf.keras.Sequential() model.add(Dense(2, activation='relu', input_shape=(1, states[0]))) model.add(Dense(4, activation='relu')) model.add(Dense(actions, activation='linear')) model.add(Flatten()) return model Lastly, your step method in your custom environment must also return an info dictionary (I just created an empty one): def step(self, action): self.timeLimit -= 1 reward = -1 if action == 0 and self.PlayerY > 0: self.PlayerY -= 1 if action == 1 and self.PlayerX > 0: self.PlayerX -= 1 if action == 2 and self.PlayerY < 14: self.PlayerY += 1 if action == 3 and self.PlayerX < 19: self.PlayerX += 1 if self.PlayerX == self.FoodX and self.PlayerY == self.FoodY: reward += 30 self.FoodX = random.randint(1, 19) self.FoodY = random.randint(1, 14) if self.timeLimit <= 0: done = True else: done = False self.state = [self.FoodX - self.PlayerX, self.FoodY - self.PlayerY] return self.state, reward, done, {} If you make these changes, it should work fine. 
Here is the full working code: from rl.memory import SequentialMemory from rl.policy import BoltzmannQPolicy from rl.agents.dqn import DQNAgent from keras.layers import Dense, Flatten import tensorflow as tf import numpy as np import random import pygame import gym class Env(gym.Env): def __init__(self): self.action_space = gym.spaces.Discrete(4) self.observation_space = gym.spaces.MultiDiscrete([39, 27]) self.screen = pygame.display.set_mode((800, 600)) self.PlayerX = 0 self.PlayerY = 0 self.FoodX = 0 self.FoodY = 0 self.state = [self.FoodX - self.PlayerX + 19, self.FoodY - self.PlayerY + 14] self.timeLimit = 1000 def render(self, mode="human"): self.screen.fill((0, 0, 0)) pygame.draw.rect(self.screen, (255, 255, 255), pygame.Rect(self.PlayerX * 40, self.PlayerY * 40, 40, 40)) pygame.draw.rect(self.screen, (255, 0, 0), pygame.Rect(self.FoodX * 40, self.FoodY * 40, 40, 40)) pygame.display.update() def reset(self): self.FoodX = random.randint(1, 19) self.FoodY = random.randint(1, 14) self.PlayerX = 0 self.PlayerY = 0 self.timeLimit = 1000 return self.state def step(self, action): self.timeLimit -= 1 reward = -1 if action == 0 and self.PlayerY > 0: self.PlayerY -= 1 if action == 1 and self.PlayerX > 0: self.PlayerX -= 1 if action == 2 and self.PlayerY < 14: self.PlayerY += 1 if action == 3 and self.PlayerX < 19: self.PlayerX += 1 if self.PlayerX == self.FoodX and self.PlayerY == self.FoodY: reward += 30 self.FoodX = random.randint(1, 19) self.FoodY = random.randint(1, 14) if self.timeLimit <= 0: done = True else: done = False self.state = [self.FoodX - self.PlayerX, self.FoodY - self.PlayerY] return self.state, reward, done, {} env = Env() states = env.observation_space.shape actions = env.action_space.n def build_model(states, actions): model = tf.keras.Sequential() model.add(Dense(2, activation='relu', input_shape=(1, states[0]))) model.add(Dense(4, activation='relu')) model.add(Dense(actions, activation='linear')) model.add(Flatten()) return model def buildAgent(model, actions): policy = BoltzmannQPolicy() memory = SequentialMemory(limit=50000, window_length=1) dqn = DQNAgent(model, memory=memory, policy=policy, nb_actions=actions, nb_steps_warmup=10, target_model_update=1e-2) return dqn model = build_model(states, actions) DQN = buildAgent(model, actions) DQN.compile(tf.keras.optimizers.Adam(learning_rate=1e-3), metrics=['mae']) DQN.fit(env, nb_steps=50000, visualize=False, verbose=1) scores = DQN.test(env, nb_episodes=100, visualize=True) print(np.mean(scores.history['episode_reward'])) pygame.quit() model.save('model.h5') For more information, see the docs. | 4 | 8 |
71,973,225 | 2022-4-22 | https://stackoverflow.com/questions/71973225/generating-indices-of-a-2d-numpy-array | I want to generate a 2D numpy array with elements calculated from their positions. Something like the following code: import numpy as np def calculate_element(i, j, other_parameters): # do something return value_at_i_j def main(): arr = np.zeros((M, N)) # (M, N) is the shape of the array for i in range(M): for j in range(N): arr[i][j] = calculate_element(i, j, ...) This code runs extremely slow since the loops in Python are just not very efficient. Is there any way to do this faster in this case? By the way, for now I use a workaround by calculating two 2D "index matrices". Something like this: def main(): index_matrix_i = np.array([range(M)] * N).T index_matrix_j = np.array([range(N)] * M) ''' index_matrix_i is like [[0,0,0,...], [1,1,1,...], [2,2,2,...], ... ] index_matrix_j is like [[0,1,2,...], [0,1,2,...], [0,1,2,...], ... ] ''' arr = calculate_element(index_matrix_i, index_matrix_j, ...) Edit1: The code becomes much faster after I apply the "index matrices" trick, so the main question I want to ask is that if there is a way to not use this trick, since it takes more memory. In short, I want to have a solution that is efficient in both time and space. Edit2: Some examples I tested # a simple 2D Gaussian def calculate_element(i, j, i_mid, j_mid, i_sig, j_sig): gaus_i = np.exp(-((i - i_mid)**2) / (2 * i_sig**2)) gaus_j = np.exp(-((j - j_mid)**2) / (2 * j_sig**2)) return gaus_i * gaus_j # size of M, N M, N = 1200, 4000 # use for loops to go through every element # this code takes ~10 seconds def main_1(): arr = np.zeros((M, N)) # (M, N) is the shape of the array for i in range(M): for j in range(N): arr[i][j] = calculate_element(i, j, 600, 2000, 300, 500) # print(arr) plt.figure(figsize=(8, 5)) plt.imshow(arr, aspect='auto', origin='lower') plt.show() # use index matrices # this code takes <1 second def main_2(): index_matrix_i = np.array([range(M)] * N).T index_matrix_j = np.array([range(N)] * M) arr = calculate_element(index_matrix_i, index_matrix_j, 600, 2000, 300, 500) # print(arr) plt.figure(figsize=(8, 5)) plt.imshow(arr, aspect='auto', origin='lower') plt.show() | You can use np.indices() to generate the desired output: For example, np.indices((3, 4)) outputs: [[[0 0 0 0] [1 1 1 1] [2 2 2 2]] [[0 1 2 3] [0 1 2 3] [0 1 2 3]]] | 4 | 4 |
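A quick sketch of how the np.indices suggestion above applies to the Gaussian example from the question; the np.ogrid variant at the end is my own addition, broadcasting (M, 1) and (1, N) index vectors instead of allocating two full (M, N) grids:

import numpy as np

def calculate_element(i, j, i_mid, j_mid, i_sig, j_sig):
    gaus_i = np.exp(-((i - i_mid) ** 2) / (2 * i_sig ** 2))
    gaus_j = np.exp(-((j - j_mid) ** 2) / (2 * j_sig ** 2))
    return gaus_i * gaus_j

M, N = 1200, 4000
ii, jj = np.indices((M, N))          # two (M, N) arrays of row and column indices
arr = calculate_element(ii, jj, 600, 2000, 300, 500)

# Memory-saving variant: 1-D index vectors plus broadcasting give the same result
i_vec, j_vec = np.ogrid[:M, :N]      # shapes (M, 1) and (1, N)
assert np.allclose(arr, calculate_element(i_vec, j_vec, 600, 2000, 300, 500))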
71,971,105 | 2022-4-22 | https://stackoverflow.com/questions/71971105/importerror-cannot-import-name-x-from-y | (ldm) C:\WBC\latent-diffusion-main>python scripts/txt2img.py --prompt "a sunset behind a mountain range, vector image" --ddim_eta 1.0 --n_samples 1 --n_iter 1 --H 384 --W 1024 --scale 5.0 Loading model from models/ldm/text2img-large/model.ckpt Traceback (most recent call last): File "scripts/txt2img.py", line 108, in <module> model = load_model_from_config(config, "models/ldm/text2img-large/model.ckpt") # TODO: check path File "scripts/txt2img.py", line 19, in load_model_from_config model = instantiate_from_config(config.model) File "c:\wbc\latent-diffusion-main\ldm\util.py", line 78, in instantiate_from_config return get_obj_from_str(config["target"])(**config.get("params", dict())) File "c:\wbc\latent-diffusion-main\ldm\util.py", line 86, in get_obj_from_str return getattr(importlib.import_module(module, package=None), cls) File "C:\ProgramData\Anaconda3\envs\ldm\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "c:\wbc\latent-diffusion-main\ldm\models\diffusion\ddpm.py", line 12, in <module> import pytorch_lightning as pl File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\__init__.py", line 20, in <module> from pytorch_lightning import metrics # noqa: E402 File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\metrics\__init__.py", line 15, in <module> from pytorch_lightning.metrics.classification import ( # noqa: F401 File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\metrics\classification\__init__.py", line 14, in <module> from pytorch_lightning.metrics.classification.accuracy import Accuracy # noqa: F401 File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\metrics\classification\accuracy.py", line 18, in <module> from pytorch_lightning.metrics.utils import deprecated_metrics, void File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\metrics\utils.py", line 22, in <module> from torchmetrics.utilities.data import get_num_classes as _get_num_classes ImportError: cannot import name 'get_num_classes' from 'torchmetrics.utilities.data' (C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torchmetrics\utilities\data.py) I can't run a latent diffusion neural network. I'm using anaconda and the environment that comes with the neural network. I think I'm doing everything right, but something goes wrong. If you know how to solve this problem, I would be extremely grateful. | Looking at your error, it appears get_num_classes doesn't exist anymore. I verified this by looking that their github and docs. It was removed after this commit. | 4 | 4 |
71,967,845 | 2022-4-22 | https://stackoverflow.com/questions/71967845/error-when-using-visual-keras-for-plotting-model | I'm trying to visualize my Deep Learning model using visual keras, but i am getting an error which i am not sure i understand. This is my first time using visual keras, and i am not sure what to do. As an example !pip install visual keras import visualkeras import tensorflow as tf tf.keras.utils.plot_model(model, show_shapes=True) input = tf.keras.Input(shape=(100,), dtype='int32', name='input') x = tf.keras.layers.Embedding(output_dim=512, input_dim=10000, input_length=100)(input) x = tf.keras.layers.LSTM(32)(x) x = tf.keras.layers.Dense(64, activation='relu')(x) x = tf.keras.layers.Dense(64, activation='relu')(x) x = tf.keras.layers.Dense(64, activation='relu')(x) output = tf.keras.layers.Dense(1, activation='sigmoid', name='output')(x) model = tf.keras.Model(inputs=[input], outputs=[output]) visualkeras.layered_view(model, legend=True, draw_volume=False) and the error looks like this TypeError: 'int' object is not iterable. Any help will be much appreciated. | It is a bug as you can read here. The author suggests to install a newer version of the library: !pip install git+https://github.com/paulgavrikov/visualkeras --upgrade If for some reason you cannot update the version. Go to the source code of the library (where it was installed) and navigate to visualkeras/layered.py. In line 100 of this file, change z = min(max(z), max_z) to z = min(max([z]), max_z). Save the changes and it will work. An example: !pip install git+https://github.com/paulgavrikov/visualkeras --upgrade import visualkeras import tensorflow as tf input = tf.keras.Input(shape=(100,), dtype='int32', name='input') x = tf.keras.layers.Embedding(output_dim=512, input_dim=10000, input_length=100)(input) x = tf.keras.layers.LSTM(32)(x) x = tf.keras.layers.Dense(64, activation='relu')(x) x = tf.keras.layers.Dense(64, activation='relu')(x) x = tf.keras.layers.Dense(64, activation='relu')(x) output = tf.keras.layers.Dense(1, activation='sigmoid', name='output')(x) model = tf.keras.Model(inputs=[input], outputs=[output]) visualkeras.layered_view(model, legend=True, draw_volume=False) | 5 | 5 |
71,956,208 | 2022-4-21 | https://stackoverflow.com/questions/71956208/detect-thick-black-lines-in-image-with-opencv | I have the following image of a lego board with some bricks on it Now I am trying to detect the thick black lines (connecting the white squares) with OpenCV. I have already experimented a lot with HoughLinesP, converted the image to gray or b/w before, applied blur, ... Nonthing led to usable results. # Read image img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE) # Resize Image img = cv2.resize(img, (0,0), fx=0.25, fy=0.25) # Initialize output out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) # Median blurring to get rid of the noise; invert image img = cv2.medianBlur(img, 5) # Adaptive Treshold bw = cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,\ cv2.THRESH_BINARY,15,8) # HoughLinesP linesP = cv2.HoughLinesP(bw, 500, np.pi / 180, 50, None, 50, 10) # Draw Lines if linesP is not None: for i in range(0, len(linesP)): l = linesP[i][0] cv2.line(out, (l[0], l[1]), (l[2], l[3]), (0,0,255), 3, cv2.LINE_AA) The adaptive treshold lets you see edges quite well, but with HoughLinesP you don't get anything usable out of it What am I doing wrong? Thanks, both @fmw42 and @jeru-luke for your great solutions to this problem! I liked isolating / masking the green board, so I combined both: import cv2 import numpy as np img = cv2.imread("image.jpg") scale_percent = 50 # percent of original size width = int(img.shape[1] * scale_percent / 100) height = int(img.shape[0] * scale_percent / 100) dim = (width, height) # resize image img = cv2.resize(img, dim, interpolation = cv2.INTER_AREA) lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB) a_component = lab[:,:,1] # binary threshold the a-channel th = cv2.threshold(a_component,127,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)[1] # numpy black black = np.zeros((img.shape[0],img.shape[1]),np.uint8) # function to obtain the largest contour in given image after filling it def get_region(image): contours, hierarchy = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) c = max(contours, key = cv2.contourArea) mask = cv2.drawContours(black,[c],0,255, -1) return mask mask = get_region(th) # turning the region outside the green block white green_block = cv2.bitwise_and(img, img, mask = mask) green_block[black==0]=(255,255,255) # median blur median = cv2.medianBlur(green_block, 5) # threshold on black lower = (0,0,0) upper = (15,15,15) thresh = cv2.inRange(median, lower, upper) # apply morphology open and close kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3)) morph = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel) kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (29,29)) morph = cv2.morphologyEx(morph, cv2.MORPH_CLOSE, kernel) # filter contours on area contours = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) contours = contours[0] if len(contours) == 2 else contours[1] result = green_block.copy() for c in contours: area = cv2.contourArea(c) if area > 1000: cv2.drawContours(result, [c], -1, (0, 0, 255), 2) # view result cv2.imshow("result", result) cv2.waitKey(0) cv2.destroyAllWindows() | Here I am presenting a repeated segmentation approach using color. This answer is based on the usage of LAB color space 1. 
Isolating the green lego block img = cv2.imread(image_path) lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB) a_component = lab[:,:,1] # binary threshold the a-channel th = cv2.threshold(a_component,127,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)[1] th # function to obtain the largest contour in given image after filling it def get_region(image): contours, hierarchy = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) c = max(contours, key = cv2.contourArea) black = np.zeros((image.shape[0], image.shape[1]), np.uint8) mask = cv2.drawContours(black,[c],0,255, -1) return mask mask = get_region(th) mask # turning the region outside the green block white green_block = cv2.bitwise_and(img, img, mask = mask) green_block[black==0]=(255,255,255) green_block 2. Segmenting the road To get an approximate region of the road, I subtracted the mask and th. cv2.subtract() performs arithmetic subtraction, where cv2 will take care of negative values. road = cv2.subtract(mask,th) # `road` contains some unwanted spots/contours which are removed using the function "get_region" only_road = get_region(road) only_road Masking only the road segment with the original image gives road_colored = cv2.bitwise_and(img, img, mask = only_road) road_colored[only_road==0]=(255,255,255) road_colored From the above image only the black regions (road) are present, which is easy to segment: # converting to grayscale and applying threshold th2 = cv2.threshold(road_colored[:,:,1],127,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)[1] # using portion of the code from fmw42's answer, to get contours above certain area contours = cv2.findContours(th2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) contours = contours[0] if len(contours) == 2 else contours[1] result = img.copy() for c in contours: area = cv2.contourArea(c) if area > 1000: cv2.drawContours(result, [c], -1, (0, 0, 255), 4) result Note: To clean up the end result, you can apply morphological operations on th2 before drawing contours. | 5 | 6 |
71,944,832 | 2022-4-20 | https://stackoverflow.com/questions/71944832/how-to-dump-a-hydra-config-into-yaml-with-target-fields | I instantiate a hydra configuration from a python dataclass. For example from dataclasses import dataclass from typing import Any from hydra.utils import instantiate class Model(): def __init__(self, x=1): self.x = x @dataclass class MyConfig: model: Any param: int static_config = MyConfig(model=Model(x=2), param='whatever') instantiated_config = instantiate(static_config) Now, I would like to dump this configuration as a yaml, including the _target_ fields that Hydra uses to re-instantiate the objects pointed to inside the configuration. I would like to avoid having to write my own logic to write those _target_ fields, and I imagine there must be some hydra utility that does this, but I can't seem to find it in the documentation. | See OmegaConf.to_yaml and OmegaConf.save: from omegaconf import OmegaConf # dumps to yaml string yaml_data: str = OmegaConf.to_yaml(my_config) # dumps to file: with open("config.yaml", "w") as f: OmegaConf.save(my_config, f) # OmegaConf.save can also accept a `str` or `pathlib.Path` instance: OmegaConf.save(my_config, "config.yaml") See also the Hydra-Zen project, which offers automatic generation of OmegaConf objects (which can be saved to yaml). | 5 | 4 |
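To get _target_ fields into the dump (the original goal of the question), the usual pattern is to describe the object in the config instead of instantiating it first; a minimal sketch, where "mymodule.Model" is a placeholder for wherever the class actually lives:

from omegaconf import OmegaConf

cfg = OmegaConf.create({
    "model": {"_target_": "mymodule.Model", "x": 2},   # hypothetical import path
    "param": "whatever",
})
print(OmegaConf.to_yaml(cfg))   # the YAML now carries the _target_ key
OmegaConf.save(cfg, "config.yaml")
# hydra.utils.instantiate(cfg.model) would rebuild Model(x=2) from this config later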
71,938,799 | 2022-4-20 | https://stackoverflow.com/questions/71938799/python-asyncio-create-task-really-need-to-keep-a-reference | The documentation of asyncio.create_task() states the following warning: Important: Save a reference to the result of this function, to avoid a task disappearing mid execution. (source) My question is: Is this really true? I have several IO bound "fire and forget" tasks which I want to run concurrently using asyncio by submitting them to the event loop using asyncio.create_task(). However, I do not really care for the return value of the coroutine or even if they run successfully, only that they do run eventually. One use case is writing data from an "expensive" calculation back to a Redis data base. If Redis is available, great. If not, oh well, no harm. This is why I do not want/need to await those tasks. Here a generic example: import asyncio async def fire_and_forget_coro(): """Some random coroutine waiting for IO to complete.""" print('in fire_and_forget_coro()') await asyncio.sleep(1.0) print('fire_and_forget_coro() done') async def async_main(): """Main entry point of asyncio application.""" print('in async_main()') n = 3 for _ in range(n): # create_task() does not block, returns immediately. # Note: We do NOT save a reference to the submitted task here! asyncio.create_task(fire_and_forget_coro(), name='fire_and_forget_coro') print('awaiting sleep in async_main()') await asyncio.sleep(2.0) # <-- note this line print('sleeping done in async_main()') print('async_main() done.') # all references of tasks we *might* have go out of scope when returning from this coroutine! return if __name__ == '__main__': asyncio.run(async_main()) Output: in async_main() awaiting sleep in async_main() in fire_and_forget_coro() in fire_and_forget_coro() in fire_and_forget_coro() fire_and_forget_coro() done fire_and_forget_coro() done fire_and_forget_coro() done sleeping done in async_main() async_main() done. When commenting out the await asyncio.sleep() line, we never see fire_and_forget_coro() finish. This is to be expected: When the event loop started with asyncio.run() closes, tasks will not be executed anymore. But it appears that as long as the event loop is still running, all tasks will be taken care of, even when I never explicitly created references to them. This seem logical to me, as the event loop itself must have a reference to all scheduled tasks in order to run them. And we can even get them all using asyncio.all_tasks()! So, I think I can trust Python to have at least one strong reference to every scheduled tasks as long as the event loop it was submitted to is still running, and thus I do not have to manage references myself. But I would like a second opinion here. Am I right or are there pitfalls I have not yet recognized? If I am right, why the explicit warning in the documentation? It is a usual Python thing that stuff is garbage-collected if you do not keep a reference to it. Are there situations where one does not have a running event loop but still some task objects to reference? Maybe when creating an event loop manually (never did this)? | There is an open issue at the cpython bug tracker at github about this topic I just found: https://github.com/python/cpython/issues/88831 Quote: asyncio will only keep weak references to alive tasks (in _all_tasks). If a user does not keep a reference to a task and the task is not currently executing or sleeping, the user may get "Task was destroyed but it is pending!". 
So the answer to my question is, unfortunately, yes. One has to keep a reference to the scheduled task around. However, the GitHub issue also describes a relatively simple workaround: keep all running tasks in a set() and add a done-callback to each task that removes it from the set again.

running_tasks = set()
# [...]
task = asyncio.create_task(some_background_function())
running_tasks.add(task)
task.add_done_callback(lambda t: running_tasks.remove(t)) | 26 | 20 |
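One more option, not mentioned in the accepted answer and only relevant for Python 3.11+: asyncio.TaskGroup holds strong references to its tasks for you, at the price of waiting for them before leaving the block, so it is not strictly fire-and-forget. A sketch reusing fire_and_forget_coro() from the question:

import asyncio

async def async_main():
    async with asyncio.TaskGroup() as tg:          # Python 3.11+
        for _ in range(3):
            tg.create_task(fire_and_forget_coro())
    # all three tasks have completed here; the group kept references to them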
71,953,766 | 2022-4-21 | https://stackoverflow.com/questions/71953766/xarray-create-variables-attributes | I want to create a dataset with xarray and want to add attributes to variables while creating the dataset. The xarray documentation provides a way of adding global attribute. For example, as below: ds = xr.Dataset( data_vars=dict( 'temperature'=(["x", "y", "time"], temperature), 'precipitation'=(["x", "y", "time"], precipitation), ), coords=dict( lon=(["x", "y"], lon), lat=(["x", "y"], lat), time=time, reference_time=reference_time, ), attrs=dict(description="Weather related data."),) One way to add variable attribute would be some like this: ds['temperature'].attrs = {"units": K, '_FillValue': -999} But, in my opinion it is more like updating the attribute. Is there a way to directly assign attributes while creating the dataset directly using xr.Dataset ? | Yes, you can directly define variable attributes when defining the data_vars. You just need to provide the attributes in a dictionary Form. See also: https://xarray.pydata.org/en/stable/internals/variable-objects.html In your example above that would be: ds = xr.Dataset( data_vars=dict( temperature=(["x", "y", "time"], temperature,{'units':'K'}), precipitation=(["x", "y", "time"], precipitation,{'units':'mm/day'}), ), coords=dict( lon=(["x", "y"], lon), lat=(["x", "y"], lat), time=time, reference_time=reference_time, ), attrs=dict(description="Weather related data."),) | 4 | 9 |
71,949,010 | 2022-4-21 | https://stackoverflow.com/questions/71949010/google-cloud-sdk-python-was-not-found | After I install Google cloud sdk in my computer, I open the terminal and type "gcloud --version" but it says "python was not found" note: I unchecked the box saying "Install python bundle" when I install Google cloud sdk because I already have python 3.10.2 installed. so, how do fix this? Thanks in advance. | As mentioned in the document: Cloud SDK requires Python; supported versions are Python 3 (preferred, 3.5 to 3.8) and Python 2 (2.7.9 or later). By default, the Windows version of Cloud SDK comes bundled with Python 3 and Python 2. To use Cloud SDK, your operating system must be able to run a supported version of Python. As suggested by @John Hanley the CLI cannot find Python which is already installed. Try reinstalling the CLI selecting install Python bundle. If you are still facing the issue another workaround can be to try with Python version 2.x.x . You can follow the below steps : 1.Uninstall all Python version 3 and above. 2.Install Python version -2.x.x (I have installed - 2.7.17) 3.Create environment variable - CLOUDSDK_PYTHON and provide value as C:\Python27\python.exe 4.Run GoogleCloudSDKInstaller.exe again. | 16 | 0 |
71,946,233 | 2022-4-20 | https://stackoverflow.com/questions/71946233/typeerror-textiowrapper-seek-takes-no-keyword-arguments | I wanted to seek to the start of the file of to write from the start. In the documentation of python 3.9 io.IOBase.seek it is displayed seek has a parameter "whence" yet an error is being displayed: TypeError: TextIOWrapper.seek() takes no keyword arguments my code is: with open("t.txt",'a+') as f: f.seek(0,) print(f.readlines()) f.seek(0,whence=0) f.write("12\n23\n32") I have use "a+" as I want to preserve the contains of the file when it is opened as well edit later. I wanted to edit the contains from the start that's why I used whence = 0, as it would help me edit from start of the stream | Yeah, it's a little bit weird. Take a look at help(f.seek): Help on built-in function seek: seek(cookie, whence=0, /) method of _io.TextIOWrapper instance Note the / slash. https://stackoverflow.com/a/24735582/8431111 It says "no keywords, please!". You can specify f.seek(0), or f.seek(0, 0). You just can't name that 2nd parameter whence. It is helpful documentation in the signature, but you can't name it in the call. | 4 | 5 |
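A corrected version of the snippet from the question, for completeness. One extra note of my own: files opened in 'a'/'a+' mode always append writes at the end regardless of the seek position, so 'r+' is the mode that actually lets you overwrite from the start:

with open("t.txt", "r+") as f:      # read/write, file must already exist
    print(f.readlines())
    f.seek(0, 0)                    # whence passed positionally: seek(cookie, whence=0, /)
    f.write("12\n23\n32")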
71,934,914 | 2022-4-20 | https://stackoverflow.com/questions/71934914/cannot-import-is-safe-url-from-django-utils-http-alternatives | I am trying to update my code to the latest Django version, The method is_safe_url() seems to be removed from new version of Django (4.0). I have been looking for an alternative for the past few hours with no luck. Does anyone know of any alternatives for the method in Django 4.0? | in Django 3.0 they have renamed is_safe_url to url_has_allowed_host_and_scheme. Here you can read more about it docs | 11 | 21 |
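A typical drop-in replacement looks like the sketch below; the view, next_url and fallback are placeholders standing in for wherever the old is_safe_url call lived:

from django.shortcuts import redirect
from django.utils.http import url_has_allowed_host_and_scheme

def safe_redirect(request, next_url, fallback="/"):
    # next_url usually comes from user input (e.g. ?next=...), hence the check
    if url_has_allowed_host_and_scheme(
        url=next_url,
        allowed_hosts={request.get_host()},
        require_https=request.is_secure(),
    ):
        return redirect(next_url)
    return redirect(fallback)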
71,933,885 | 2022-4-20 | https://stackoverflow.com/questions/71933885/when-is-finally-run-in-a-python-generator | I think I might misunderstand how the finally clause of a try/except/finally block works in Python, for generators. In the following block of code, a generator starts a thread, and if the caller exits for any reason, the thread is cleaned up. That's the intention, at least. However, I've noticed that in some strange situations, the finally block isn't run: If the caller raises and then catches their own exception, and assigns the exception object to a variable, the finally isn't run. I have no idea why this would be the case. Here's the code. import signal from threading import Thread import time class MyThread(Thread): def __init__(self): super().__init__() self._stopped = False def run(self): while not self._stopped: time.sleep(0.2) def stop(self): self._stopped = True class ThreadRunner: def start(self): self._my_thread = MyThread() self._my_thread.start() def end(self): self._my_thread.stop() self._my_thread.join() print('ThreadRunner end!') def loop_forever(): thread_runner = ThreadRunner() try: thread_runner.start() yield finally: print('loop_forever() is all done!') # When does this line get run? thread_runner.end() def listener(): print('listener begin!') looper = loop_forever() next(looper) try: raise Exception() except Exception as e: es = e # If you comment out this line (and replace it with `pass`), it doesn't hang print('listener... done!') def main(): def handle_exit(signum, *args): raise Exception("SIGINT: EXITING") signal.signal(signal.SIGINT, handle_exit) listener() if __name__ == "__main__": main() print('This program has exited. Or has it?') What you'll see if run this code is this: Press CTRL+C to trigger a signal handler. That will raise an exception, which triggers the finally in question. As I said above, if remove es = e (replace it with pass), for some reason, our code exits as expected. def listener(): print('listener begin!') looper = loop_forever() next(looper) try: raise Exception() except Exception as e: pass # es = e # If you comment out this line (and replace it with `pass`), it doesn't hang print('listener... done!') Also, if I re-write listener() to use a for loop instead of next(), our code exits as expected: def listener(): print('listener_using_next') for _ in loop_forever(): try: raise Exception() except Exception as e: es = e print('listener_using_next... done!') Edit: In response to one comment, I want to note that our code exits as expected even if you return early, interrupting the generator. def listener(): print('listener_using_next') for _ in loop_forever(): try: raise Exception() except Exception as e: es = e return print('listener_using_next... done!') Finally, if I invoke garbage collection after main() (with import gc;gc.collect()), our code exits almost as expected: The "This program has exited" line prints, followed by "loop_forever() is all done!" if __name__ == "__main__": main() print('This program has exited. Or has it?') import gc;gc.collect() Can someone point to me to an explanation of how finally: works for generators? Ideally, in such a way that would help explain this strange behavior? | finally in a generator is run under the same conditions as in any other code: when the execution of the try block finishes, as well as execution of any except block that got triggered, the finally runs. However, relative to most non-generator code, it is much easier in a generator for these conditions to just not happen. 
When you suspend your generator in the middle of a try, the conditions for the finally to run have not occurred. If the generator just never resumes again, the conditions for the finally to run will never occur. To try to bandage this a bit, when a generator is garbage collected, Python will throw a GeneratorExit exception into the generator. This usually causes the generator to run any pending finally blocks and finish up, but if the generator catches the exception and suspends again, or if the generator doesn't ever get garbage collected, finally blocks may not run. In your test case, by saving the exception to the es variable, you create a reference cycle (through the exception object's stack trace) which keeps the generator alive. The es = e is necessary to create the reference cycle because Python sticks an implicit del e at the end of the except specifically to avoid a reference cycle. Python then just doesn't run garbage collection until the end of the program, at which point your generator finally gets cleaned up. Note that there is no guarantee garbage collection will run, or that it will clean up everything it "should" clean up, and as you've seen, even if it does run, it doesn't have to run any time soon. When you use a for loop instead of next, you're not leaving the generator suspended in the middle of the try. for runs the generator to completion, immediately and naturally running the finally. | 5 | 5 |
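A small runnable illustration of the deterministic fix implied above: explicitly closing the generator throws GeneratorExit into it, so the finally block runs right away instead of whenever (or whether) garbage collection gets to it:

def looper():
    try:
        yield
    finally:
        print("finally ran")    # cleanup would go here, e.g. thread_runner.end()

g = looper()
next(g)       # generator is now suspended inside the try block
g.close()     # throws GeneratorExit into the generator, prints "finally ran"

In the question's listener(), calling looper.close() before returning (or wrapping the generator in contextlib.closing) would guarantee the thread cleanup runs without relying on the garbage collector.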
71,904,575 | 2022-4-17 | https://stackoverflow.com/questions/71904575/matplotlib-3d-scatter-plot-alpha-varies-when-viewing-different-angles | When creating 3D scatter plots with matplotlib I noticed that when the alpha (transparency) of the points is varied it will draw them differently depending on how you rotate the view. The example images below are the same plot rotated slightly, which causes the alpha values to mysteriously reverse. Is anyone familiar with this behavior and how to address it? It looks like the 'zorder' (draw order) is a single value for the entire scatter plot call. Simplified example code to recreate: import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(projection="3d") X = [i for i in range(10)] Y = [i for i in range(10)] Z = [i for i in range(10)] S = [(i+1)*400 for i in range(10)] A = [i/10 for i in range(10)] ax.scatter(xs=X, ys=Y, zs=Z, s=S, alpha=A) plt.show() Python 3.9.5 matplotlib 3.5.1 | UPDATE: a fix has been added to matplotlib as part of the future 3.11.0 release (not yet released at time of writing) I also posted this on the matplotlib github to see what the developers think. It appears to be a bug with a very low priority, per their latest response: "We have limited core developer resources and mpl_toolkits/mplot3d is only a secondary priority for the core team. Likely this needs someone in the community to pick up and provide a fix." I will try to do so, but am posting this for visibility in the meantime so that other people with more experience are made aware. | 5 | 2 |
71,898,644 | 2022-4-17 | https://stackoverflow.com/questions/71898644/how-to-use-python-typing-annotated | I'm having a hard time understanding from the documentation exactly what typing.Annotated is good for and an even harder time finding explanations/examples outside the documentation. Or does it "being good for something" depend entirely on what third party libraries you're using? In what (real-world) context would you use Annotated? | Annotated in python allows developers to declare the type of a reference and provide additional information related to it. name: Annotated[str, "first letter is capital"] This tells that name is of type str and that name[0] is a capital letter. On its own Annotated does not do anything other than assigning extra information (metadata) to a reference. It is up to another code, which can be a library, framework or your own code, to interpret the metadata and make use of it. For example, FastAPI uses Annotated for data validation: def read_items(q: Annotated[str, Query(max_length=50)]) Here the parameter q is of type str with a maximum length of 50. This information was communicated to FastAPI (or any other underlying library) using the Annotated keyword. | 142 | 143 |
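A short sketch of how your own code can read that metadata back out; the function and the "metres" tag are invented purely for illustration:

from typing import Annotated, get_args, get_type_hints

def to_centimetres(value: Annotated[float, "metres"]) -> float:
    return value * 100

hints = get_type_hints(to_centimetres, include_extras=True)   # keep the Annotated wrapper
base_type, *metadata = get_args(hints["value"])
print(base_type)   # <class 'float'>
print(metadata)    # ['metres']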
71,882,419 | 2022-4-15 | https://stackoverflow.com/questions/71882419/fastapi-how-to-get-the-response-body-in-middleware | Is there any way to get the response content in a middleware? The following code is a copy from here. @app.middleware("http") async def add_process_time_header(request: Request, call_next): start_time = time.time() response = await call_next(request) process_time = time.time() - start_time response.headers["X-Process-Time"] = str(process_time) return response | The response body is an iterator, which once it has been iterated through, it cannot be re-iterated again. Thus, you either have to save all the iterated data to a list (or bytes variable) and use that to return a custom Response, or initiate the iterator again. The options below demonstrate both approaches. In case you would like to get the request body inside the middleware as well, please have a look at this answer. Option 1 Save the data to a list and use iterate_in_threadpool to initiate the iterator again, as described hereβwhich is what StreamingResponse uses, as shown here. from starlette.concurrency import iterate_in_threadpool @app.middleware("http") async def some_middleware(request: Request, call_next): response = await call_next(request) response_body = [chunk async for chunk in response.body_iterator] response.body_iterator = iterate_in_threadpool(iter(response_body)) print(f"response_body={response_body[0].decode()}") return response Note 1: If your code uses StreamingResponse, response_body[0] would return only the first chunk of the response. To get the entire response body, you should join that list of bytes (chunks), as shown below (.decode() returns a string representation of the bytes object): print(f"response_body={(b''.join(response_body)).decode()}") Note 2: If you have a StreamingResponse streaming a body that wouldn't fit into your server's RAM (for example, a response of 30GB), you may run into memory errors when iterating over the response.body_iterator (this applies to both options listed in this answer), unless you loop through response.body_iterator (as shown in Option 2), but instead of storing the chunks in an in-memory variable, you store it somewhere on the disk. However, you would then need to retrieve the entire response data from that disk location and load it into RAM, in order to send it back to the client (which could extend the delay in responding to the client even more)βin that case, you could load the contents into RAM in chunks and use StreamingResponse, similar to what has been demonstrated here, here, as well as here, here and here (in Option 1, you can just pass your iterator/generator function to iterate_in_threadpool). However, I would not suggest following that approach, but instead have such endpoints returning large streaming responses excluded from the middleware, as described in this answer. Option 2 The below demosntrates another approach, where the response body is stored in a bytes object (instead of a list, as shown above), and is used to return a custom Response directly (along with the status_code, headers and media_type of the original response). 
@app.middleware("http")
async def some_middleware(request: Request, call_next):
    response = await call_next(request)
    chunks = []
    async for chunk in response.body_iterator:
        chunks.append(chunk)
    response_body = b''.join(chunks)
    print(f"response_body={response_body.decode()}")
    return Response(content=response_body, status_code=response.status_code,
                    headers=dict(response.headers), media_type=response.media_type) | 18 | 41 |
71,897,602 | 2022-4-16 | https://stackoverflow.com/questions/71897602/sqlalchemy-exc-programmingerror-psycopg2-programmingerror-cant-adapt-type-r | I created a database with 3 tables using PostgreSQL and flask-sqlalchemy. I am querying 3 tables to get only their ids then I check their ids to see if there's any similar one then add the similar one to the third table but anytime i run it i get this error sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'Row' [SQL: INSERT INTO login (student_id, name, timestamp) VALUES (%(student_id)s, %(name)s, %(timestamp)s)] [parameters: {'student_id': (1234567,), 'name': None, 'timestamp': datetime.datetime(2022, 4, 16, 21, 10, 53, 30512)}] @app.route('/') def check(): id = Esp32.query.with_entities(Esp32.student_id).all() students = Student.query.with_entities(Student.student_id).all() logins = Login.query.with_entities(Login.student_id).all() for ids in id: if ids in students and ids not in logins: new = Login(student_id= ids) db.session.add(new) db.session.commit() return render_template('check.html', newlog = new) please could someone tell me what this error means and why I am getting it | id is a query result. ids is a query row. To get one value from that row, you need to tell it which column (even if there is only one column): ids['student_id']. | 9 | 10 |
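Applied to the route from the question, the fix looks roughly like the sketch below; the set lookups are my own addition so each id is checked against a set instead of rescanning the query results:

@app.route('/')
def check():
    esp_rows = Esp32.query.with_entities(Esp32.student_id).all()
    student_ids = {row.student_id for row in Student.query.with_entities(Student.student_id).all()}
    login_ids = {row.student_id for row in Login.query.with_entities(Login.student_id).all()}

    new = None
    for row in esp_rows:
        sid = row.student_id          # plain int; row[0] or row['student_id'] also work
        if sid in student_ids and sid not in login_ids:
            new = Login(student_id=sid)
            db.session.add(new)
    db.session.commit()
    return render_template('check.html', newlog=new)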
71,905,671 | 2022-4-17 | https://stackoverflow.com/questions/71905671/how-to-go-through-all-pydantic-validators-even-if-one-fails-and-then-raise-mult | Is it possible to call all validators to get back a full list of errors? @validator('password', always=True) def validate_password1(cls, value): password = value.get_secret_value() min_length = 8 if len(password) < min_length: raise ValueError('Password must be at least 8 characters long.') return value @validator('password', always=True) def validate_password2(cls, value): password = value.get_secret_value() if not any(character.islower() for character in password): raise ValueError('Password should contain at least one lowercase character.') return value The current behavior seems to call one validator at a time. My Pydantic class: class User(BaseModel): email: EmailStr password: SecretStr If I did not include the email, or password, field on a request then I would get both validation failures in an array, which is what I want to do for the password field, but the current behavior seems to call one, and if it fails then throws the error immediately. | You can't raise multiple Validation errors/exceptions for a specific field in the way this is demonstrated in your question. Suggested solutions are given below. Option 1 Update Note that in Pydantic V2, @validator has been deprecated and was replaced by @field_validator. Please have a look at this answer for more details and examples. Original answer Concatenate error messages using a single variable, and raise the ValueError once at the end (if errors occured): @validator('password', always=True) def validate_password1(cls, value): password = value.get_secret_value() min_length = 8 errors = '' if len(password) < min_length: errors += 'Password must be at least 8 characters long. ' if not any(character.islower() for character in password): errors += 'Password should contain at least one lowercase character.' if errors: raise ValueError(errors) return value In the case that all the above conditional statements are met, the output will be: { "detail": [ { "loc": [ "body", "password" ], "msg": "Password must be at least 8 characters long. Password should contain at least one lowercase character.", "type": "value_error" } ] } Option 2 Update In Pydantic V2, ErrorWrapper has been removedβhave a look at Migration Guide. If one would like to implement this on their own, please have a look at Pydantic V1.9 error_wrappers.py. Additionally, @validator has been deprecated and was replaced by @field_validator. Please have a look at this answer for more details and examples. Original answer Raise ValidationError directly, using a list of ErrorWrapper class. from pydantic import ValidationError from pydantic.error_wrappers import ErrorWrapper @validator('password', always=True) def validate_password1(cls, value): password = value.get_secret_value() min_length = 8 errors = [] if len(password) < min_length: errors.append(ErrorWrapper(ValueError('Password must be at least 8 characters long.'), loc=None)) if not any(character.islower() for character in password): errors.append(ErrorWrapper(ValueError('Password should contain at least one lowercase character.'), loc=None)) if errors: raise ValidationError(errors, model=User) return value Since FastAPI seems to be adding the loc attribute itself, loc would end up having the field name (i.e., password) twice, if it was added in the ErrorWrapper, using the loc attribute (which is a required parameter). 
Hence, you could leave it empty (using None), which you can later remove through a validation exception handler, as shown below: from fastapi import Request, status from fastapi.encoders import jsonable_encoder from fastapi.exceptions import RequestValidationError from fastapi.responses import JSONResponse @app.exception_handler(RequestValidationError) async def validation_exception_handler(request: Request, exc: RequestValidationError): for error in exc.errors(): error['loc'] = [x for x in error['loc'] if x] # remove null attributes return JSONResponse(content=jsonable_encoder({"detail": exc.errors()}), status_code=status.HTTP_422_UNPROCESSABLE_ENTITY) In the case that all the above conditional statements are met, the output will be: { "detail": [ { "loc": [ "body", "password" ], "msg": "Password must be at least 8 characters long.", "type": "value_error" }, { "loc": [ "body", "password" ], "msg": "Password should contain at least one lowercase character.", "type": "value_error" } ] } | 8 | 6 |
71,915,358 | 2022-4-18 | https://stackoverflow.com/questions/71915358/spark-read-bigquery-external-table | Trying to Read a external table from BigQuery but gettint a error SCALA_VERSION="2.12" SPARK_VERSION="3.1.2" com.google.cloud.bigdataoss:gcs-connector:hadoop3-2.2.0, com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.24.2' table = 'data-lake.dataset.member' df = spark.read.format('bigquery').load(table) df.printSchema() Result: root |-- createdAtmetadata: date (nullable = true) |-- eventName: string (nullable = true) |-- producerName: string (nullable = true) So when im print df.createOrReplaceTempView("member") spark.sql("select * from member limit 100").show() i got this message error: INVALID_ARGUMENT: request failed: Only external tables with connections can be read with the Storage API. | As external tables are not supported in queries by spark, i tried the other way and got! def read_query_bigquery(project, query): df = spark.read.format('bigquery') \ .option("parentProject", "{project}".format(project=project))\ .option('query', query)\ .option('viewsEnabled', 'true')\ .load() return df project = 'data-lake' query = 'select * from data-lake.dataset.member' spark.conf.set("materializationDataset",'dataset') df = read_query_bigquery(project, query) df.show() | 4 | 4 |
71,861,779 | 2022-4-13 | https://stackoverflow.com/questions/71861779/mwaa-airflow-pythonvirtualenvoperator-requires-virtualenv | I am using AWS's MWAA service (2.2.2) to run a variety of DAGs, most of which are implemented with standard PythonOperator types. I bundle the DAGs into an S3 bucket alongside any shared requirements, then point MWAA to the relevant objects & versions. Everything runs smoothly so far. I would now like to implement a DAG using the PythonVirtualenvOperator type, which AWS acknowledge is not supported out of the box. I am following their guide on how to patch the behaviour using a custom plugin, but continue to receive an error from Airflow, shown at the top of the dashboard in big red writing: DAG Import Errors (1) ... ... AirflowException: PythonVirtualenvOperator requires virtualenv, please install it. I've confirmed that the plugin is indeed being picked up by Airflow (I see it referenced in the admin screen), and for the avoidance of doubt I am using the exact code provided by AWS in their examples for the DAG. AWS's documentation on this is pretty light and I've yet to stumble across any community discussion for the same. From AWS's docs, we'd expect the plugin to run at startup prior to any DAGs being processed. The plugin itself appears to effectively rewrite the venv command to use the pip-installed version, rather than that which is installed on the machine, however I've struggled to verify that things are happening in the order I expect. Any pointers on debugging the instance's behavior would be very much appreciated. Has anyone faced a similar issue? Is there a gap in the MWAA documentation that needs addressing? Am I missing something incredibly obvious? Possibly related, but I do see this warning in the scheduler's logs, which may indicate why MWAA is struggling to resolve the dependency? WARNING: The script virtualenv is installed in '/usr/local/airflow/.local/bin' which is not on PATH. | Airflow uses shutil.which to look for virtualenv. The installed virtualenv via requirements.txt isn't on the PATH. Adding the path to virtualenv to PATH solves this. The doc here is wrong https://docs.aws.amazon.com/mwaa/latest/userguide/samples-virtualenv.html import os from airflow.plugins_manager import AirflowPlugin import airflow.utils.python_virtualenv from typing import List def _generate_virtualenv_cmd(tmp_dir: str, python_bin: str, system_site_packages: bool) -> List[str]: cmd = ['python3','/usr/local/airflow/.local/lib/python3.7/site-packages/virtualenv', tmp_dir] if system_site_packages: cmd.append('--system-site-packages') if python_bin is not None: cmd.append(f'--python={python_bin}') return cmd airflow.utils.python_virtualenv._generate_virtualenv_cmd=_generate_virtualenv_cmd #This is the added path code os.environ["PATH"] = f"/usr/local/airflow/.local/bin:{os.environ['PATH']}" class VirtualPythonPlugin(AirflowPlugin): name = 'virtual_python_plugin' | 7 | 9 |
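For reference, a minimal DAG that exercises the operator once the plugin above is in place; the dag id, schedule and the cowsay requirement are all placeholders:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonVirtualenvOperator

def callable_in_venv():
    import cowsay                      # only installed inside the virtualenv
    cowsay.cow("hello from the venv")

with DAG("virtualenv_demo", start_date=datetime(2022, 1, 1),
         schedule_interval=None, catchup=False) as dag:
    PythonVirtualenvOperator(
        task_id="venv_task",
        python_callable=callable_in_venv,
        requirements=["cowsay"],
        system_site_packages=False,
    )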
71,866,688 | 2022-4-14 | https://stackoverflow.com/questions/71866688/visualize-decision-tree-with-not-only-training-set-tag-distribution-but-also-te | We can visualize decision tree with training set distribution, for example from matplotlib import pyplot as plt from sklearn import datasets from sklearn.tree import DecisionTreeClassifier from sklearn import tree # Prepare the data data, can do row sample and column sample here iris = datasets.load_iris() X = iris.data y = iris.target # Fit the classifier with default hyper-parameters clf = DecisionTreeClassifier(random_state=1234) clf.fit(X, y) fig = plt.figure(figsize=(25,20)) _ = tree.plot_tree(clf, feature_names=iris.feature_names, class_names=iris.target_names, filled=True) gives us with distribution of training set, for example value = [50, 50, 50] in the root node. However, I am not able to give it a test set, and get the distribution of test set in the visualized tree. | I don't think there is an sklearn method to do this (yet). Option 1: Changing the annotation plot of the tree by adding X_test information You can use the custom function below: def plot_tree_test(clf, tree_plot, X_test, y_test): n = len(tree_plot) cat = clf.n_classes_ # Getting the path for each item in X_test path = clf.decision_path(X_test).toarray().transpose() # Looping through each node/leaf in the tree and adding information from X_test path for i in range(n): value = [] for j in range(cat): value += [sum(y_test[path[i]==1]==j)] tree_plot[i].set_text(tree_plot[i].get_text()+f'\ntest samples = {path[i].sum()}\ntest value = {value}') return tree_plot Then changing slightly the script: from matplotlib import pyplot as plt from sklearn import datasets from sklearn.tree import DecisionTreeClassifier from sklearn import tree from sklearn.model_selection import train_test_split # Prepare the data data, can do row sample and column sample here iris = datasets.load_iris() X = iris.data y = iris.target # Creating a train and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=1234) # Fit the classifier with default hyper-parameters clf = DecisionTreeClassifier(random_state=1234) clf.fit(X_train, y_train) fig = plt.figure(figsize=(25,20)) tree_plot = tree.plot_tree(clf, feature_names=iris.feature_names, class_names=iris.target_names, filled=True) tree_plot = plot_tree_test(clf, tree_plot, X_test, y_test) plt.show() Output: Option 2: Changing the classifier itself with X_test information You can use the custom function below: def tree_test(clf, X_test, y_test): state = clf.tree_.__getstate__() n = len(state['values']) cat = clf.n_classes_ # Getting the path for each item in X_test path = clf.decision_path(X_test).toarray().transpose() # Looping through each node/leaf in the tree and adding information from X_test path values = [] for i in range(n): value = [] for j in range(cat): value += [float(sum(y_test[path[i]==1]==j))] values += [[value]] state['nodes'][i][5] = path[i].sum() state['nodes'][i][6] = max(path[i].sum(), 0.1) # 0 returns error values = np.array(values) state['values'] = values clf.tree_.__setstate__(state) return clf Then changing slightly the script: from matplotlib import pyplot as plt from sklearn import datasets from sklearn.tree import DecisionTreeClassifier from sklearn import tree from sklearn.model_selection import train_test_split import numpy as np # Prepare the data data, can do row sample and column sample here iris = datasets.load_iris() X = iris.data y = iris.target # Creating a 
train and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=1234) # Fit the classifier with default hyper-parameters clf = DecisionTreeClassifier(random_state=1234) clf.fit(X_train, y_train) clf = tree_test(clf, X_test, y_test) fig = plt.figure(figsize=(25,20)) tree_plot = tree.plot_tree(clf, feature_names=iris.feature_names, class_names=iris.target_names, filled=True) plt.show() Output: | 6 | 1 |
71,850,888 | 2022-4-13 | https://stackoverflow.com/questions/71850888/finding-cdrs-in-ngs-data | I have millions of sequences in fasta format and want to extract CDRs (CDR1, CDR2 and CDR3).I chose only one sequence as an example and tried to extract CDR1 but not able to extract CDR1. sequence:-'FYSHSAVTLDESGGGLQTPGGGLSLVCKASGFTFSSYGMMWVRQAPGKGLEYVAGIRNDA GDKRYGSAVQGRATISRDNGQSTVRLQLNNLRAEDTGTYFCAKESGCYWDSTHCIDAWGH GTEVIVSTGG'. cdr1 starts from:- 'VCKASGFTFS', with maximum three replacements but C at 2nd place is must. cdr1 ends at:-'WVRQAP', with maximum two replacements but R at 3rd place is must. Extracted cdr1 should be SYGMM def cdr1_in(cdr_in): #VCKASGFTFS pin=0 max_pin=3 if cdr[1]!='C': pin+=1 if cdr[0]!='V': pin+=1 if cdr[2]!='K': pin+=1 if cdr[3]!='A': pin+=1 if cdr[4]!='S': pin+=1 if cdr[5]!='G': pin+=1 if cdr[6]!='F': pin+=1 if cdr[7]!='T': pin+=1 if cdr[8]!='F': pin+=1 if cdr[9]!='S': pin+=1 if pin<max_pin: print('CDR_in pattern', cdr_in) # print('CDR_starts from', arr.index(cdr_in)+9) return (arr.index(cdr_in)+9) def cdr1_out(cdr_out):#WVRQAP pin=0 max_pin=2 if cdr[1]!='V': pin+=1 if cdr[0]!='W': pin+=1 if cdr[2]!='R': pin+=1 if cdr[3]!='Q': pin+=1 if cdr[4]!='A': pin+=1 if cdr[5]!='P': pin+=1 if pin<max_pin: # print('CDR_in pattern', cdr_out) # print('CDR_ends at', arr.index(cdr_out)) return (arr.index(cdr_out)) K=10 arr=sequence for i in range(len(arr)-k+1): slider=arr[i:k+i] print("CDR_1 is:", arr[cdr1_in(slider): cdr1_out(slider)]) | I got it by the following method which works absolutely fine for me to find CDR1,2 and 3. All I need to define 3 different dictionaries having the definition ie prefix, suffix, max pin, fix position and pass them to the following code. Here I have performed this to find the CDR1, which gives me the desired output. dictionary_1={ 'cdr1_in': 'VCKASGFTFS', 'cdr1_out':'WVRQAP', 'max_pin_in':3, 'position_c':2, 'max_pin_out':2, 'position_r':3 } def cdr_out(dictionary_1,x): count=0 for i in range(len(x)-5): rider=x[i:i+6] # print(rider) if rider[2]=='R': # print(rider) for i in range(len(dictionary_1['cdr1_out'])): if rider[i]!=dictionary_1['cdr1_out'][i]: count+=1 if count<dictionary_1['max_pin_out']: # print(rider) return x.index(rider) def cdr_in(dictionary_1,x): count=0 for i in range(len(x)-9): rider=x[i:i+10] # print(rider) if rider[1]=='C': # print(rider) for i in range(len(dictionary_1['cdr1_in'])): if rider[i]!=dictionary_1['cdr1_in'][i]: count+=1 if count<dictionary_1['max_pin_in']: # print(rider) y=x.index(rider) z=cdr_out(dictionary_1,x) cdr=x[y+10:z] return cdr print(cdr_in(dictionary_1,x)) SYGMM | 4 | 2 |
71,873,314 | 2022-4-14 | https://stackoverflow.com/questions/71873314/getting-error-value-is-not-a-valid-dict-when-using-pydantic-models-in-fastapi | I'm trying to use Pydantic models with FastAPI to make multiple predictions (for a list of inputs). The problem is that one can't pass Pydantic models directly to model.predict() function, so I converted it to a dictionary, however, I'm getting the following error: AttributeError: 'list' object has no attribute 'dict' My code: from fastapi import FastAPI import uvicorn from pydantic import BaseModel import pandas as pd from typing import List app = FastAPI() class Inputs(BaseModel): id: int f1: float f2: float f3: str class InputsList(BaseModel): inputs: List[Inputs] @app.post('/predict') def predict(input_list: InputsList): df = pd.DataFrame(input_list.inputs.dict()) prediction = classifier.predict(df.loc[:, df.columns != 'id']) probability = classifier.predict_proba(df.loc[:, df.columns != 'id']) return {'id': df["id"].tolist(), 'prediction': prediction.tolist(), 'probability': probability.tolist()} I have also a problem with the return, I need the output to be something like : [ { "id": 123, "prediction": "class1", "probability": 0.89 }, { "id": 456, "prediction": "class3", "probability": 0.45 } ] PS: the id in Inputs class doesn't take place in the prediction (is not a feature), but I need it to be shown next to its prediction (to reference it). Request: | First, there are unecessary commas , at the end of both f1 and f2 attributes of your schema, as well as in the JSON payload you are sending. Hence, your schema should be: class Inputs(BaseModel): id: int f1: float f2: float f3: str Second, the 422 error is due to that the JSON payload you are sending does not match your schema. As noted by @MatsLindh your JSON payload should look like this: { "inputs": [ { "id": 1, "f1": 1.0, "f2": 1.0, "f3": "text" }, { "id": 2, "f1": 2.0, "f2": 2.0, "f3": "text" } ] } Third, you are creating the DataFrame in the worng way. You are attempting to call the dict() method on a list object; hence, the AttributeError: 'list' object has no attribute 'dict'. Instead, as shown here, you should call the .dict() method on each item in the list, as shown below: df = pd.DataFrame([i.dict() for i in input_list.inputs]) Finally, to return the results in the output format mentioned in your question, use the below. Note predict_proba() returns an array of lists containing the class probabilities for the input. If you would like to return only the probability for a specific class, use the index for that class instead, e.g., prob[0]. results = [] for (id, pred, prob) in zip(df["id"].tolist(), prediction.tolist(), probability.tolist()): results.append({"id": id, "prediction": pred, "probability": prob}) return results alternatively, you can use a DataFrame and call its to_dict() method to convert it into a dictionary, as shown below. If you have a large amount of data and find the approach below being quite slow in returning the results, please have a look at this answer for alternative approaches. 
results = pd.DataFrame({'id': df["id"].tolist(),'prediction': prediction.tolist(),'probability': probability.tolist()}) return results.to_dict(orient="records") If you would like to return only the probability for a specific class when using DataFrame, you could extract it and add it to a new list like this prob_list = [item[0] for item in probability.tolist()] or using operator.itemgetter() like this prob_list = list(map(itemgetter(0), probability.tolist())), and use that list instead when creating the DataFrame. | 5 | 3 |
71,907,619 | 2022-4-18 | https://stackoverflow.com/questions/71907619/python-not-found-for-node-gyp | I am trying to npm install for a project in my mac but for some reason it says python not found even though python3 command is working fine and I also set alias python to python3 in by ~/.zshrc and ~/.bash-profile and restarted several times but still the same issue. Screenshot of the issue. NOTE: See comments for the solution | The problem is that Python is required in the system path to operate this command. Solutions: Install pyenv Install either Python 2.7 or Python 3.x: pyenv install 2.7.18 or pyenv install 3.9.11 (for example) If you have more than one python version, ensure one of them is set as global: pyenv global 3.9.11 Add pyenv to your system path On Mac, put this in your ~/.bashrc or ~/.zshrc: export PATH=$(pyenv root)/shims:$PATH, then run source ~/.bashrc or source ~/.zshrc For Windows, try npm config set python C:\Library\Python\Python310\python.exe (for example) via administrator Run npm install again | 6 | 15 |
71,915,309 | 2022-4-18 | https://stackoverflow.com/questions/71915309/token-used-too-early-error-thrown-by-firebase-admin-auths-verify-id-token-metho | Whenever I run from firebase_admin import auth auth.verify_id_token(firebase_auth_token) It throws the following error: Token used too early, 1650302066 < 1650302067. Check that your computer's clock is set correctly. I'm aware that the underlying google auth APIs do check the time of the token, however as outlined here there should be a 10 second clock skew. Apparently, my server time is off by 1 second, however running this still fails even though this is well below the allowed 10 second skew. Is there a way to fix this? | This is how the firebase_admin.verify_id_token verifies the token: verified_claims = google.oauth2.id_token.verify_token( token, request=request, audience=self.project_id, certs_url=self.cert_url) and this is the definition of google.oauth2.id_token.verify_token(...) def verify_token( id_token, request, audience=None, certs_url=_GOOGLE_OAUTH2_CERTS_URL, clock_skew_in_seconds=0, ): As you can see, the function verify_token allows to specify a "clock_skew_in_seconds" but the firebase_admin function is not passing it along, thus the the default of 0 is used and since your server clock is off by 1 second, the check in verify_token fails. I would consider this a bug in firebase_admin.verify_id_token and maybe you can open an issue against the firebase admin SDK, but other than that you can only make sure, your clock is either exact or shows a time EARLIER than the actual time Edit: I actually opened an issue on GitHub for firebase/firebase-admin-Python and created an according pull request since I looked at all the source files already anyway... If and when the pull request is merged, the server's clock is allowed to be off by up to a minute. | 12 | 17 |
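For the entry above, until a firebase-admin release exposes the clock-skew tolerance, one hedged workaround is to retry verification briefly when the "Token used too early" message appears, since the clock is only about a second off. This is my own sketch, not an official API: the helper name and retry policy are invented, and the exact exception class varies between firebase_admin versions, so it matches on the message instead.

```python
import time

from firebase_admin import auth


def verify_id_token_with_retry(token, retries=2, delay_seconds=1.0):
    """Retry verification when the server clock is marginally ahead of Google's."""
    last_exc = None
    for _ in range(retries + 1):
        try:
            return auth.verify_id_token(token)
        except Exception as exc:  # message check below narrows this to the skew error
            if "Token used too early" not in str(exc):
                raise
            last_exc = exc
            time.sleep(delay_seconds)
    raise last_exc
```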
71,858,814 | 2022-4-13 | https://stackoverflow.com/questions/71858814/could-not-find-a-working-python-interpreter-unity-firebase | Could not find a working python interpreter. Please make sure one of the following is in your PATH: python python3 python3.8 python3.7 python2.7 python2 I installed python 3.10.4 Path is set in environment variables. Still not working. | How to set path: Find the path to install Python on your computer. To do this, open the Windows search bar and type python.exe. Select the Open file location option. Copy path of python folder. To add Python To PATH In User Variables: Open My Computer\Properties\Advanced system settings\Advanced Environment Variables\Environment Variables. In the User Variables menu, find a variable named Path. Then paste the path you copied earlier into the Variable Value option using Ctrl+v and click OK. if you cannot find this variable, you may need to create it. To do this, click New. Then, in the variable name form, enter the path and paste your Python path into the variable value field. 6.You can also add Python to the PATH system variable. Although this is just an alternative and not needed if you have already added it to the Users variables. To use the System Variables option, follow the steps highlighted above to copy the Python path and its script. Then go back to environment variables. Then, in the system variables segment, look for a variable named Path. Click this variable and click Edit. | 6 | 6 |
71,895,146 | 2022-4-16 | https://stackoverflow.com/questions/71895146/pandas-to-latex-how-to-make-column-names-bold | When I'm using the pandas.to_latex function to create latex table, the column names are unfortunately not bold. What can I do to make it bold? | Update I have been told on GitHub that this is allready possible with plain pandas but there is some missing documentation, which will be updated soon. You can use the line below. result = df.style.applymap_index( lambda v: "font-weight: bold;", axis="columns" ).to_latex(convert_css=True) Old answer Here is a complete example, this is adapted from the offical documentation. There is a keyword to print bold columns bold_rows=True. Sadly there isn't a kyword parameter to do the same for the columns. But I can use this to check if my code gives the same result for the column headers. I use the result of to_latex() and split it in three sections. One section is the line with the column names. In this line I use a regular expression to add the \text{}-string. My code works only if you colum names don't have a whitespace. import pandas as pd df = pd.DataFrame( dict(name=['Raphael', 'Donatello'], mask=['red', 'purple'], weapon=['sai', 'bo staff'] ) ) ans = df.to_latex(bold_rows=True) split_middle = '\n\midrule\n' split_top = '\\toprule\n' top, mid = ans.split(split_middle) start, columns = top.split(split_top) columns = re.sub('(\w)+', '\\\\textbf{\g<0>}', columns) result = split_middle.join([split_top.join([start, columns]), mid]) >>> result \begin{tabular}{llll} \toprule {} & \textbf{name} & \textbf{mask} & \textbf{weapon} \\ \midrule \textbf{0} & Raphael & red & sai \\ \textbf{1} & Donatello & purple & bo staff \\ \bottomrule \end{tabular} In the output you can see, that the header now is bold. | 5 | 3 |
71,915,551 | 2022-4-18 | https://stackoverflow.com/questions/71915551/prevent-mypy-errors-in-platform-dependent-python-code | I have something akin to the following piece of python code: import platform if platform.system() == "Windows": import winreg import win32api def do_cross_platform_thing() -> None: if platform.system() == "Windows": # do some overly complicated windows specific thing with winreg and win32api else: # do something reasonable for everyone else Now, on linux, mypy complains that it's missing imports because win32api doesn't exist, "module has no attribute ..." because the winreg module is defined, but basically disabled (all code is behind an 'is windows' check). Is there any reasonable way to deal with this ? My current solution is spamming # type: ignore everywhere --ignore-missing-imports Are there any better solutions for this? | Ok, after checking the Docs as @SUTerliakov so kindly suggested, it seems that i have to change my if platform.system() == "Windows" to this, semantically identical check: if sys.platform == "win32" Only this second version triggers some magic builtin mypy special case that identifies this as a platform check and ignores the branch not applicable to its plattform. | 4 | 4 |
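A small self-contained sketch of that pattern for the entry above, using only winreg (win32api needs an extra package); the function name and the registry value read here are just an example I picked, not anything from the question:

```python
import sys

if sys.platform == "win32":
    import winreg


def program_files_dir() -> str:
    if sys.platform == "win32":
        # mypy only type-checks this branch when the target platform is Windows
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r"SOFTWARE\Microsoft\Windows\CurrentVersion")
        try:
            value, _ = winreg.QueryValueEx(key, "ProgramFilesDir")
            return str(value)
        finally:
            key.Close()
    return "/usr/local"  # something reasonable for everyone else


print(program_files_dir())
```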
71,915,400 | 2022-4-18 | https://stackoverflow.com/questions/71915400/how-do-i-superimpose-an-image-in-the-back-of-a-matplotlib-plot | I'm trying to superimpose an image in the back of a matplotlib plot. It is being rendered as HTML in a flask website so I am saving the plot as an image before inserting it. The plot without the background image looks like this: The code that produces the above output is here: fname = 'scatter_averages.png' url_full = os.path.join(_path_full, fname) image = plt.imread("app/static/images/quantum_grid.jpg") if os.path.isfile(url_full): os.remove(url_full) plt.clf() df = self.text_numeric_averages() if df is not None: plt.figure(figsize=(6, 8)) fig, ax = plt.subplots() df.index += 1 x, y = df.iloc[:, 2], df.iloc[:, 1] ax.plot(x, y, '-o') plt.xlabel(df.columns[2]) plt.ylabel(df.columns[1]) for i in range(len(df)): xyi = df.iloc[i, :].values ax.annotate(str(df.index[i]) + " " + xyi[0][:3], (xyi[2], xyi[1])) axes = plt.gca() y_min, y_max = axes.get_ylim() x_min, x_max = axes.get_xlim() # ax.imshow(image, extent=[x_min, x_max, y_min, y_max]) plt.savefig(url_full) The commented out line above is my attempt to get the image to superimpose. The output when that line is uncommented is this: How do I keep the sizing and scale of the first image but use the background image in the second plot as the background? I'm not concerned with the image looking distorted. | ax.imshow(image, extent=[x_min, x_max, y_min, y_max], aspect="auto") This will fix it. | 7 | 6 |
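A minimal, self-contained version of that fix with synthetic data (the random array just stands in for plt.imread(...)); the key parts are passing the current axis limits as extent and using aspect="auto" so the plot's scale is not distorted:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
image = rng.random((60, 80, 3))          # placeholder for plt.imread("background.jpg")
x = np.arange(10)
y = rng.integers(0, 50, size=10)

fig, ax = plt.subplots()
ax.plot(x, y, "-o")
x_min, x_max = ax.get_xlim()
y_min, y_max = ax.get_ylim()
# draw the image behind the data, stretched to the existing axis limits
ax.imshow(image, extent=[x_min, x_max, y_min, y_max], aspect="auto", zorder=0)
plt.savefig("scatter_with_background.png")
```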
71,923,704 | 2022-4-19 | https://stackoverflow.com/questions/71923704/new-color-terminal-prograss-bar-in-pip | I find that the new version of pip (the package installer for Python) has a colorful progress bar to show download progress. How can I do that? Like this: | pip itself is using the rich package! In particular, their progress bar docs show this example: from rich.progress import track for step in track(range(n), description="Processing..."): do_work(step) | 7 | 13

71,925,980 | 2022-4-19 | https://stackoverflow.com/questions/71925980/cannot-perform-operation-another-operation-is-in-progress-in-pytest | I want to test some function, that work with asyncpg. If I run one test at a time, it works fine. But if I run several tests at a time, all tests except the first one crash with the error asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress. Tests: @pytest.mark.asyncio async def test_project_connection(superuser_id, project_id): data = element_data_random(project_id) element_id = (await resolve_element_create(data=data, user_id=superuser_id))["id"] project_elements = (await db_projects_element_ids_get([project_id]))[project_id] assert element_id in project_elements @pytest.mark.asyncio async def test_project_does_not_exist(superuser_id): data = element_data_random(str(uuid.uuid4())) with pytest.raises(ObjectWithIdDoesNotExistError): await resolve_element_create(data=data, user_id=superuser_id) All functions for work with db use pool look like: async def <some_db_func>(*args): pool = await get_pool() await pool.execute(...) # or fetch/fetchrow/fetchval How I get the pool: db_pool = None async def get_pool(): global db_pool async def init(con): await con.set_type_codec('jsonb', encoder=ujson.dumps, decoder=ujson.loads, schema='pg_catalog') await con.set_type_codec('json', encoder=ujson.dumps, decoder=ujson.loads, schema='pg_catalog') if not db_pool: dockerfiles_dir = os.path.join(src_dir, 'dockerfiles') env_path = os.path.join(dockerfiles_dir, 'dev.env') try: # When code and DB inside docker containers host = 'postgres-docker' socket.gethostbyname(host) except socket.error: # When code on localhost, but DB inside docker container host = 'localhost' load_dotenv(dotenv_path=env_path) db_pool = await asyncpg.create_pool( database=os.getenv("POSTGRES_DBNAME"), user=os.getenv("POSTGRES_USER"), password=os.getenv("POSTGRES_PASSWORD"), host=host, init=init ) return db_pool As far as I understand under the hood, asynΡpg creates a new connection and runs the request inside that connection if you run the request through pool. Which makes it clear that each request should have its own connection. However, this error occurs, which is caused when one connection tries to handle two requests at the same time | Okay, thanks to @Adelin I realized that I need to run each asynchronous test synchronously. I I'm new to asyncio so I didn't understand it right away and found a solution. It was: @pytest.mark.asyncio async def test_...(*args): result = await <some_async_func> assert result == excepted_result It become: def test_...(*args): async def inner() result = await <some_async_func> assert result == excepted_result asyncio.get_event_loop().run_until_complete(inner()) | 8 | 4 |
71,862,398 | 2022-4-13 | https://stackoverflow.com/questions/71862398/install-python-3-6-on-mac-m1 | I'm trying to run an old app that requires python < 3.7. I'm currently using python 3.9 and need to use multiple versions of python. I've installed pyenv-virtualenv and pyenv and successfully installed python 3.7.13. However, when I try to install 3.6.*, I get this: $ pyenv install 3.6.13 python-build: use [email protected] from homebrew python-build: use readline from homebrew Downloading Python-3.6.13.tar.xz... -> https://www.python.org/ftp/python/3.6.13/Python-3.6.13.tar.xz Installing Python-3.6.13... python-build: use tcl-tk from homebrew python-build: use readline from homebrew python-build: use zlib from xcode sdk BUILD FAILED (OS X 12.3.1 using python-build 2.2.5-11-gf0f2cdd1) Inspect or clean up the working tree at /var/folders/r5/xz73mp557w30h289rr6trb800000gp/T/python-build.20220413143259.33773 Results logged to /var/folders/r5/xz73mp557w30h289rr6trb800000gp/T/python-build.20220413143259.33773.log Last 10 log lines: checking for --with-cxx-main=<compiler>... no checking for clang++... no configure: By default, distutils will build C++ extension modules with "clang++". If this is not intended, then set CXX on the configure command line. checking for the platform triplet based on compiler characteristics... darwin configure: error: internal configure error for the platform triplet, please file a bug report make: *** No targets specified and no makefile found. Stop. Is there a way to solve this? I've looked and it seems like Mac M1 doesn't allow installing 3.6.* | Copying from a GitHub issue. I successfully installed Python 3.6 on an Apple M1 MacBook Pro running Monterey using the following setup. There is probably some things in here that can be removed/refined... but it worked for me! #Install Rosetta /usr/sbin/softwareupdate --install-rosetta --agree-to-license # Install x86_64 brew arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)" # Set up x86_64 homebrew and pyenv and temporarily set aliases alias brew86="arch -x86_64 /usr/local/bin/brew" alias pyenv86="arch -x86_64 pyenv" # Install required packages and flags for building this particular python version through emulation brew86 install pyenv gcc libffi gettext export CPPFLAGS="-I$(brew86 --prefix libffi)/include -I$(brew86 --prefix openssl)/include -I$(brew86 --prefix readline)/lib" export CFLAGS="-I$(brew86 --prefix openssl)/include -I$(brew86 --prefix bzip2)/include -I$(brew86 --prefix readline)/include -I$(xcrun --show-sdk-path)/usr/include -Wno-implicit-function-declaration" export LDFLAGS="-L$(brew86 --prefix openssl)/lib -L$(brew86 --prefix readline)/lib -L$(brew86 --prefix zlib)/lib -L$(brew86 --prefix bzip2)/lib -L$(brew86 --prefix gettext)/lib -L$(brew86 --prefix libffi)/lib" # Providing an incorrect openssl version forces a proper openssl version to be downloaded and linked during the build export [email protected] # Install Python 3.6 pyenv86 install --patch 3.6.15 <<(curl -sSL https://raw.githubusercontent.com/pyenv/pyenv/master/plugins/python-build/share/python-build/patches/3.6.15/Python-3.6.15/0008-bpo-45405-Prevent-internal-configure-error-when-runn.patch\?full_index\=1) Note, the build succeeds but gives the following warning WARNING: The Python readline extension was not compiled. Missing the GNU readline lib? running pyenv versions shows that 3.6.15 can be used normally by the system | 16 | 30 |
71,922,124 | 2022-4-19 | https://stackoverflow.com/questions/71922124/python-convert-punycode-back-to-unicode | I'm trying to add contacts to Sendgrid from a db which occasionally is storing the user email in punycode [email protected] which translates to example-email@yahΓ³o.com in Unicode. Anyway if I try and add the ascii version there's an error because sendgrid doesn't accept it - however it does accept the Unicode version. So is there a way to convert them in python. So I think long story short is there a way to decode punycode to Unicode? Edit As suggested in comments i tried 'example-email@yahΓ³o.com'.encode('punycode').decode() which returns [email protected] so this is incorrect outside of python so is not a valid solution. Thanks in advance. | There is the xn-- ACE prefix in your encoded e-mail address: The ACE prefix for IDNA is "xn--" or any capitalization thereof. So apply the idna encoding (see Python Specific Encodings): codec idna Implement RFC 3490, see also encodings.idna. Only errors='strict' is supported. Result: 'yahΓ³o.com'.encode('idna').decode() # 'xn--yaho-sqa.com' and vice versa: 'xn--yaho-sqa.com'.encode().decode('idna') # 'yahΓ³o.com' You could use the idna library instead: Support for the Internationalised Domain Names in Applications (IDNA) protocol as specified in RFC 5891. This is the latest version of the protocol and is sometimes referred to as βIDNA 2008β. This library also provides support for Unicode Technical Standard 46, Unicode IDNA Compatibility Processing. This acts as a suitable replacement for the βencodings.idnaβ module that comes with the Python standard library, but which only supports the older superseded IDNA specification (RFC 3490). | 9 | 12 |
71,922,261 | 2022-4-19 | https://stackoverflow.com/questions/71922261/typeerror-setup-got-an-unexpected-keyword-argument-stage | I am trying to train my q&a model through pytorch_lightning. However while running the command trainer.fit(model,data_module) I am getting the following error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-72-b9cdaa88efa7> in <module>() ----> 1 trainer.fit(model,data_module) 4 frames /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in _call_setup_hook(self) 1488 1489 if self.datamodule is not None: -> 1490 self.datamodule.setup(stage=fn) 1491 self._call_callback_hooks("setup", stage=fn) 1492 self._call_lightning_module_hook("setup", stage=fn) TypeError: setup() got an unexpected keyword argument 'stage' I have installed and imported pytorch_lightning. Also I have defined data_module = BioQADataModule(train_df, val_df, tokenizer, batch_size = BATCH_SIZE) where BATCH_SIZE = 2, N_EPOCHS = 6. The model I have used is as follows:- model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True) Also, I have defined the class for the model as follows:- class BioQAModel(pl.LightningModule): def __init__(self): super().__init__() self.model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True) def forward(self, input_ids, attention_mask, labels=None): output = self.model( input_ids = encoding["input_ids"], attention_mask = encoding["attention_mask"], labels=labels ) return output.loss, output.logits def training_step(self, batch, batch_idx): input_ids = batch["input_ids"] attention_mask = batch["attention_mask"] labels = batch["labels"] loss, outputs = self(input_ids, attention_mask, labels) self.log("train_loss", loss, prog_bar=True, logger=True) return loss def validation_step(self, batch, batch_idx): input_ids = batch["input_ids"] attention_mask = batch["attention_mask"] labels = batch["labels"] loss, outputs = self(input_ids, attention_mask, labels) self.log("val_loss", loss, prog_bar=True, logger=True) return loss def test_step(self, batch, batch_idx): input_ids = batch["input_ids"] attention_mask = batch["attention_mask"] labels = batch["labels"] loss, outputs = self(input_ids, attention_mask, labels) self.log("test_loss", loss, prog_bar=True, logger=True) return loss def configure_optimizers(self): return AdamW(self.parameters(), lr=0.0001) For any additional information required, please specify. 
Edit 1: Adding BioQADataModule: class BioQADataModule(pl.LightningDataModule): def __init__( self, train_df: pd.DataFrame, test_df: pd.DataFrame, tokenizer: T5Tokenizer, batch_size: int = 8, source_max_token_len = 396, target_max_token_len = 32 ): super().__init__() self.batch_size = batch_size self.train_df = train_df self.test_df = test_df self.tokenizer = tokenizer self.source_max_token_len = source_max_token_len self.target_max_token_len = target_max_token_len def setup(self): self.train_dataset = BioQADataset( self.train_df, self.tokenizer, self.source_max_token_len, self.target_max_token_len ) self.test_dataset = BioQADataset( self.test_df, self.tokenizer, self.source_max_token_len, self.target_max_token_len ) def train_dataloader(self): return DataLoader( self.train_dataset, batch_size = self.batch_size, shuffle = True, num_workers = 4 ) def val_dataloader(self): return DataLoader( self.train_dataset, batch_size = 1, shuffle = True, num_workers = 4 ) def test_dataloader(self): return DataLoader( self.train_dataset, batch_size = 1, shuffle = True, num_workers = 4 ) | You need to add an extra argument stage=None to your setup method: def setup(self, stage=None): self.train_dataset = BioQADataset( self.train_df, self.tokenizer, self.source_max_token_len, self.target_max_token_len ) self.test_dataset = BioQADataset( self.test_df, self.tokenizer, self.source_max_token_len, self.target_max_token_len ) I've played with Pytorch Lightning myself for multi-GPU training here. Although some of the code is a bit outdated (metrics are a standalone module now), you might find it useful. | 4 | 10 |
71,918,897 | 2022-4-19 | https://stackoverflow.com/questions/71918897/why-is-mypy-trying-to-instantiate-my-abstract-class-in-python | If I have a Python module like this: from abc import ABC, abstractmethod class AbstractClass(ABC): @abstractmethod def method(self): pass class ConcreteClass1(AbstractClass): def method(self): print("hello") class ConcreteClass2(AbstractClass): def method(self): print("hello") class ConcreteClass3(AbstractClass): def method(self): print("hello") classes = [ ConcreteClass1, ConcreteClass2, ConcreteClass3, ] for c in classes: c().method() And I hit it with mypy test.py, I get this: test.py:27: error: Cannot instantiate abstract class "AbstractClass" with abstract attribute "method" The code runs without any issues though, and I can't see any issues in the logic. At no point am I trying to instantiate the AbstractClass directly. Some weird behavior I've noticed: If, instead of a loop, I do this: ... ConcreteClass1().method() ConcreteClass2().method() ConcreteClass3().method() mypy is happy. Also, if, instead of 3 classes in the loop, I do 2: classes = [ ConcreteClass1, ConcreteClass2, #ConcreteClass3, ] for c in classes: c().method() mypy is happy with that as well. What's going on? Is this a mypy bug? If so, can I tell mypy to ignore this "problem"? | The problem appears similar to mypy issues with abstract classes and dictionaries - for some reason, mypy can't typecheck this properly without a type annotation on the list: classes: list[Type[AbstractClass]] = [ ConcreteClass1, ConcreteClass2, ConcreteClass3, ] (Change list to List if you're on Python 3.8 or below) You might want to file an issue for this bug on the mypy Github. | 5 | 5 |
71,927,889 | 2022-4-19 | https://stackoverflow.com/questions/71927889/parse-yaml-with-dots-delimiter-in-keys | We use YAML configuration for services scaling. Usually it goes like this: service: scalingPolicy: capacity: min: 1 max: 1 So it's easy to open with basic PyYAML and parse as an dict to get config['service']['scalingPolicy']['capacity']['min'] result as 1. Problem is that some configs are built with dots delimiter e.g: service.scalingPolicy.capacity: min: 1 max: 1 Basic consumer of this configs is Java's Spring and somehow it's treated equally as the example above. But due to need to also parse these configs with Python - I get whole dot separated line as a config['service.scalingPolicy.capacity'] key. The question is - how would I make python parse any kind of keys combinations (both separated by dots and separated by tabulation and :). I didn't find related parameters for Python YAML libs (I've checked standard PyYAML and ruamel.yaml) and handling any possible combination manually seems like a crazy idea. The only possible idea I have is to write my own parser but maybe there is something I'm missing so I won't have to reinvent the bicycle. | This is not trivial, it is much more easy to split a lookup with a key with dots into recursing into a nested data structure. Here you have a nested data structure and different [key] lookups mean different things at different levels. If you use ruamel.yaml in the default round-trip mode, you can add a class-variable to the type that represents a mapping, that defines on what the keys were split and an instance variable that keeps track of the prefix already matched: import sys import ruamel.yaml from ruamel.yaml.compat import ordereddict from ruamel.yaml.comments import merge_attrib yaml_str = """\ service.scalingPolicy.capacity: min: 1 max: 1 """ def mapgetitem(self, key): sep = getattr(ruamel.yaml.comments.CommentedMap, 'sep') if sep is not None: if not hasattr(self, 'splitprefix'): self.splitprefix = '' if self.splitprefix: self.splitprefix += sep + key else: self.splitprefix = key if self.splitprefix not in self: for k in self.keys(): if k.startswith(self.splitprefix): break else: raise KeyError(self.splitprefix) return self key = self.splitprefix delattr(self, 'splitprefix') # to make the next lookup work from start try: return ordereddict.__getitem__(self, key) except KeyError: for merged in getattr(self, merge_attrib, []): if key in merged[1]: return merged[1][key] raise old_mapgetitem = ruamel.yaml.comments.CommentedMap.__getitem__ # save the original __getitem__ ruamel.yaml.comments.CommentedMap.__getitem__ = mapgetitem ruamel.yaml.comments.CommentedMap.sep = '.' yaml = ruamel.yaml.YAML() # yaml.indent(mapping=4, sequence=4, offset=2) # yaml.preserve_quotes = True config = yaml.load(yaml_str) print('min:', config['service']['scalingPolicy']['capacity']['min']) print('max:', config['service']['scalingPolicy']['capacity']['max']) print('---------') config['service']['scalingPolicy']['capacity']['max'] = 42 # and dump with the original routine, as it uses __getitem__ ruamel.yaml.comments.CommentedMap.__getitem__ = old_mapgetitem yaml.dump(config, sys.stdout) which gives: min: 1 max: 1 --------- service.scalingPolicy.capacity: min: 1 max: 42 | 4 | 3 |
71,920,941 | 2022-4-19 | https://stackoverflow.com/questions/71920941/how-to-obtain-a-token-for-a-user-with-payload-using-django-simple-jwt | I can get a correct token when calling the URL /token/ but I wish to create a token manually for the user when /login/ is called. urls.py: from django.urls import path from . import views from .views import MyTokenObtainPairView from rest_framework_simplejwt.views import ( TokenRefreshView, TokenVerifyView ) urlpatterns = [ path('', views.api_root), path('register/', views.register), path('login/', views.login), path('token/', MyTokenObtainPairView.as_view(), name='token_obtain_pair'), path('token/refresh/', TokenRefreshView.as_view(), name='token_refresh'), path('token/verify/', TokenVerifyView.as_view(), name='token_verify'), ] views.py: @api_view(['POST']) def login(request): email = request.data.get('email') password = request.data.get('password') user = get_object_or_404(User, email=email) if make_password(password) == user.password: if not user.is_active: return Response({"error": "User is not active"}, status=400) tokens = MyTokenObtainPairView.get_token(user) parse_token = { 'refresh': str(tokens), 'access': str(tokens.access_token), } return Response(status=200, data=parse_token) else: return Response(status=401) class MyTokenObtainPairSerializer(TokenObtainPairSerializer): def validate(self, attrs): data = super().validate(attrs) refresh = self.get_token(self.user) data['refresh'] = str(refresh) data['access'] = str(refresh.access_token) # Add extra responses here data['username'] = self.user.username data['groups'] = self.user.groups.values_list('name', flat=True) data['test'] = '1234' return data class MyTokenObtainPairView(TokenObtainPairView): serializer_class = MyTokenObtainPairSerializer How do I modify this line to get my token for a specific user? tokens = MyTokenObtainPairView.get_token(user) I have read the doc about manually create token by importing this: from rest_framework_simplejwt.tokens import RefreshToken but it is not adding the payload into the token... | Actually after Googling for an hour I finally got a solution from another post... Proper way to do this: refresh = RefreshToken.for_user(user) refresh['user_name'] = user.username refresh['first_name'] = user.first_name refresh['last_name'] = user.last_name refresh['full_name'] = user.get_full_name() return { 'refresh': str(refresh), 'access': str(refresh.access_token), } In case someone would need this... | 4 | 9 |
71,914,320 | 2022-4-18 | https://stackoverflow.com/questions/71914320/mutex-lock-in-python3 | I'm using mutex for blocking part of code in the first function. Can I unlock mutex in the second function? For example: import threading mutex = threading.Lock() def function1(): mutex.acquire() #do something def function2(): #do something mutex.release() #do something | You certainly can do what you're asking, locking the mutex in one function and unlocking it in another one. But you probably shouldn't. It's bad design. If the code that uses those functions calls them in the wrong order, the mutex may be locked and never unlocked, or be unlocked when it isn't locked (or even worse, when it's locked by a different thread). If you can only ever call the functions in exactly one order, why are they even separate functions? A better idea may be to move the lock-handling code out of the functions and make the caller responsible for locking and unlocking. Then you can use a with statement that ensures the lock and unlock are exactly paired up, even in the face of exceptions or other unexpected behavior. with mutex: function1() function2() Or if not all parts of the two functions are "hot" and need the lock held to ensure they run correctly, you might consider factoring out the parts that need the lock into a third function that runs in between the other two: function1_cold_parts() with mutex: hot_parts() function2_cold_parts() | 4 | 4 |
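A runnable sketch of the caller-owns-the-lock pattern recommended above; the counter workload, thread count and iteration count are just illustrative:

```python
import threading

counter = 0

def function1():
    global counter
    counter += 1

def function2():
    global counter
    counter += 1

def worker(lock: threading.Lock, iterations: int) -> None:
    for _ in range(iterations):
        with lock:          # acquired and released as a pair, even on exceptions
            function1()
            function2()

lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(lock, 10_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 80000 every run, because the paired updates are serialized
```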
71,916,052 | 2022-4-18 | https://stackoverflow.com/questions/71916052/tkinter-use-for-loop-to-display-multiple-images | I am trying to display multiple images (as labels) to the window but only the last images is displayed from tkinter import * from PIL import ImageTk, Image root = Tk() f = open("data/itemIDsList.txt") ids = [] for line in f: line = line.rstrip("\n") ids.append(line) f.close() for i in range(10): img = ImageTk.PhotoImage(Image.open(f"website/images/{ids[i]}.png")) Label(root, image=img, width=60, height=80).grid() root.mainloop() | Each time you reassign img in the loop, the data of the previous image gets destroyed and can no longer be displayed. To fix this, add the images to a list to store them permanently: from tkinter import * from PIL import ImageTk, Image root = Tk() f = open("data/itemIDsList.txt") ids = [] for line in f: line = line.rstrip("\n") ids.append(line) f.close() imgs = [] for i in range(10): imgs.append(ImageTk.PhotoImage(Image.open(f"website/images/{ids[i]}.png"))) Label(root, image=imgs[-1], width=60, height=80).grid() root.mainloop() | 4 | 5 |
71,914,660 | 2022-4-18 | https://stackoverflow.com/questions/71914660/subtract-columns-from-two-dfs-based-on-matching-condition | Suppose I have the following two DFs: DF A: First column is a date, and then there are columns that start with a year (2021, 2022...) Date 2021.Water 2021.Gas 2022.Electricity may-04 500 470 473 may-05 520 490 493 may-06 540 510 513 DF B: First column is a date, and then there are columns that start with a year (2021, 2022...) Date 2021.Amount 2022.Amount may-04 100 95 may-05 110 105 may-06 120 115 The expected result is a DF with the columns from DF A, but that have the rows divided by the values for the matching year in DF B. Such as: Date 2021.Water 2021.Gas 2022.Electricity may-04 5.0 4.7 5.0 may-05 4.7 4.5 4.7 may-06 4.5 4.3 4.5 I am really struggling with this problem. Let me know if any clarifications are needed and will be glad to help. | Try this: dfai = dfa.set_index('Date') dfai.columns = dfai.columns.str.split('.', expand=True) dfbi = dfb.set_index('Date').rename(columns = lambda x: x.split('.')[0]) df_out = dfai.div(dfbi, level=0).round(1) df_out.columns = df_out.columns.map('.'.join) df_out.reset_index() Output: Date 2021.Water 2021.Gas 2022.Electricity 0 may-04 5.0 4.7 5.0 1 may-05 4.7 4.5 4.7 2 may-06 4.5 4.2 4.5 Details First, move 'Date' into the index of both dataframes, then use string split to get years into a level in each dataframe. Use, pd.DataFrame.div with level=0 to align operations on the top level index of each dataframe. Flatten multiindex column header back to a single level and reset_index. | 4 | 2 |
71,911,077 | 2022-4-18 | https://stackoverflow.com/questions/71911077/python-multiprocessing-progress-approach | I've been busy writing my first multiprocessing code and it works, yay. However, now I would like some feedback of the progress and I'm not sure what the best approach would be. What my code (see below) does in short: A target directory is scanned for mp4 files Each file is analysed by a separate process, the process saves a result (an image) What I'm looking for could be: Simple Each time a process finishes a file it sends a 'finished' message The main code keeps count of how many files have finished Fancy Core 0 processing file 20 of 317 ||||||____ 60% completed Core 1 processing file 21 of 317 |||||||||_ 90% completed ... Core 7 processing file 18 of 317 ||________ 20% completed I read all kinds of info about queues, pools, tqdm and I'm not sure which way to go. Could anyone point to an approach that would work in this case? Thanks in advance! EDIT: Changed my code that starts the processes as suggested by gsb22 My code: # file operations import os import glob # Multiprocessing from multiprocessing import Process # Motion detection import cv2 # >>> Enter directory to scan as target directory targetDirectory = "E:\Projects\Programming\Python\OpenCV\\videofiles" def get_videofiles(target_directory): # Find all video files in directory and subdirectories and put them in a list videofiles = glob.glob(target_directory + '/**/*.mp4', recursive=True) # Return the list return videofiles def process_file(videofile): ''' What happens inside this function: - The video is processed and analysed using openCV - The result (an image) is saved to the results folder - Once this function receives the videofile it completes without the need to return anything to the main program ''' # The processing code is more complex than this code below, this is just a test cap = cv2.VideoCapture(videofile) for i in range(10): succes, frame = cap.read() # cv2.imwrite('{}/_Results/{}_result{}.jpg'.format(targetDirectory, os.path.basename(videofile), i), frame) if succes: try: cv2.imwrite('{}/_Results/{}_result_{}.jpg'.format(targetDirectory, os.path.basename(videofile), i), frame) except: print('something went wrong') if __name__ == "__main__": # Create directory to save results if it doesn't exist if not os.path.exists(targetDirectory + '/_Results'): os.makedirs(targetDirectory + '/_Results') # Get a list of all video files in the target directory all_files = get_videofiles(targetDirectory) print(f'{len(all_files)} video files found') # Create list of jobs (processes) jobs = [] # Create and start processes for file in all_files: proc = Process(target=process_file, args=(file,)) jobs.append(proc) for job in jobs: job.start() for job in jobs: job.join() # TODO: Print some form of progress feedback print('Finished :)') | I read all kinds of info about queues, pools, tqdm and I'm not sure which way to go. Could anyone point to an approach that would work in this case? 
Here's a very simple way to get progress indication at minimal cost: from multiprocessing.pool import Pool from random import randint from time import sleep from tqdm import tqdm def process(fn) -> bool: sleep(randint(1, 3)) return randint(0, 100) < 70 files = [f"file-{i}.mp4" for i in range(20)] success = [] failed = [] NPROC = 5 pool = Pool(NPROC) for status, fn in tqdm(zip(pool.imap(process, files), files), total=len(files)): if status: success.append(fn) else: failed.append(fn) print(f"{len(success)} succeeded and {len(failed)} failed") Some comments: tqdm is a 3rd-party library which implements progressbars extremely well. There are others. pip install tqdm. we use a pool (there's almost never a reason to manage processes yourself for simple things like this) of NPROC processes. We let the pool handle iterating our process function over the input data. we signal state by having the function return a boolean (in this example we choose randomly, weighting in favour of success). We don't return the filename, although we could, because it would have to be serialised and sent from the subprocess, and that's unnecessary overhead. we use Pool.imap, which returns an iterator which keeps the same order as the iterable we pass in. So we can use zip to iterate files directly. Since we use an iterator with unknown size, tqdm needs to be told how long it is. (We could have used pool.map, but there's no need to commit the ram---although for one bool it probably makes no difference.) I've deliberately written this as a kind of recipe. You can do a lot with multiprocessing just by using the high-level drop in paradigms, and Pool.[i]map is one of the most useful. References https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool https://tqdm.github.io/ | 5 | 1 |
71,902,156 | 2022-4-17 | https://stackoverflow.com/questions/71902156/why-we-declare-metaclass-abc-abcmeta-when-use-abstract-class-in-python | When I was reading the code online, I have encountered the following cases of using abstract classes: from abc import abstractmethod,ABCMeta class Generator(object,metaclass=ABCMeta): @abstractmethod def generate(self): raise NotImplementedError("method not implemented") generator=Generator() generator.generate() The following error is returned, as expected: TypeError: Can't instantiate abstract class Generator with abstract methods generate But if I write it like this (the only difference is in the second line) from abc import abstractmethod,ABCMeta class Generator(object): @abstractmethod def generate(self): raise NotImplementedError("method not implemented") generator=Generator() generator.generate() Although there are changes in the error message, NotImplementedError: method not implemented When I implemented the generate method, both of the above ways of Generator were executed correctly, class GeneticAlgorithm(Generator): def generate(self): print("ABC") ga=GeneticAlgorithm() ga.generate() >>> ABC So why do we need the statement metaclass=ABCMeta? I know something from GeeksforGeeks that ABCMeta metaclass provides a method called register method that can be invoked by its instance. By using this register method, any abstract base class can become an ancestor of any arbitrary concrete class. But this still doesn't make me understand the necessity of declaring metaclass=ABCMeta, it feels like @abstractmethod modifying the method is enough. | You "need" the metaclass=ABCMeta to enforce the rules at instantiation time. generator=Generator() # Errors immediately when using ABCMeta generator.generate() # Only errors if and when you call generate otherwise Imagine if the class had several abstract methods, only some of which were implemented in a child. It might work for quite a while, and only error when you got around to calling an unimplemented method. Failing eagerly before you rely on the ABC is generally a good thing, in the same way it's usually better for a function to raise an exception rather than just returning None to indicate failure; you want to know as soon as things are wrong, not get a weird error later without knowing the ultimate cause of the error. Side-note: There's a much more succinct way to be an ABC than explicitly using the metaclass=ABCMeta syntax: from abc import abstractmethod, ABC class Generator(ABC): Python almost always makes empty base classes that use the metaclass to simplify use (especially during the 2 to 3 transition period, where there was no compatible metaclass syntax that worked in both, and direct inheritance was the only thing that worked). | 4 | 6 |
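To make the "fail eagerly" point from the answer above concrete, here is a small illustrative sketch (class names are mine) contrasting the two behaviours when a subclass forgets one of several abstract methods:

```python
from abc import ABC, abstractmethod

class StrictBase(ABC):
    @abstractmethod
    def generate(self):
        raise NotImplementedError
    @abstractmethod
    def validate(self):
        raise NotImplementedError

class LooseBase:  # no ABCMeta: @abstractmethod alone enforces nothing
    @abstractmethod
    def generate(self):
        raise NotImplementedError
    @abstractmethod
    def validate(self):
        raise NotImplementedError

class StrictChild(StrictBase):
    def generate(self):
        return "ok"
    # validate() forgotten

class LooseChild(LooseBase):
    def generate(self):
        return "ok"
    # validate() forgotten

loose = LooseChild()   # instantiates without complaint
loose.generate()       # works for a while...
# loose.validate()     # ...and only this call raises NotImplementedError

try:
    StrictChild()      # the ABC version fails here, before anything else runs
except TypeError as exc:
    print(exc)         # Can't instantiate abstract class StrictChild ...
```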
71,904,130 | 2022-4-17 | https://stackoverflow.com/questions/71904130/how-to-add-type-hints-in-pycharm | I often find myself having to start a debugging session in PyCharm only in order to inspect a variable and look up its class with something.__class__ so that I can insert the type hint into the code in order to make it more readable. Is there a way to do it automatically in PyCharm via a context action, in VSCode or maybe some other tool? | Have you tried the "Adding type hints" feature in PyCharm? The Python extension for VSCode does not support this feature yet, and I have submitted a feature request on GitHub. | 4 | 2
71,902,946 | 2022-4-17 | https://stackoverflow.com/questions/71902946/numba-no-implementation-of-function-functionbuilt-in-function-getitem-found | IΒ΄m having a hard time implementing numba to my function. Basically, I`d like to concatenate to arrays with 22 columns, if the new data hasn't been added yet. If there is no old data, the new data should become a 2d array. The function works fine without the decorator: @jit(nopython=True) def add(new,original=np.array([])): duplicate=True if original.size!=0: for raw in original: for ii in range(11,19): if raw[ii]!=new[ii]: duplicate=False if duplicate==False: res=np.zeros((original.shape[0]+1,22)) res[:original.shape[0]]=original res[-1]=new return res else: return original else: res=np.zeros((1,22)) res[0]=new return res Also if I remove the last part of the code: else: res=np.zeros((1,22)) res[0]=new return res It would work with njit So if I ignore the case, that there hasnΒ΄t been old data yet, everything would be fine. FYI: the data I`m passing in is mixed float and np.nan. Anybody an idea? Thank you so much in advance! this is my error log: --------------------------------------------------------------------------- TypingError Traceback (most recent call last) <ipython-input-255-d05a5f4ea944> in <module>() 19 return res 20 #add(a,np.array([b])) ---> 21 add(a) 2 frames /usr/local/lib/python3.7/dist-packages/numba/core/dispatcher.py in _compile_for_args(self, *args, **kws) 413 e.patch_message(msg) 414 --> 415 error_rewrite(e, 'typing') 416 except errors.UnsupportedError as e: 417 # Something unsupported is present in the user code, add help info /usr/local/lib/python3.7/dist-packages/numba/core/dispatcher.py in error_rewrite(e, issue_type) 356 raise e 357 else: --> 358 reraise(type(e), e, None) 359 360 argtypes = [] /usr/local/lib/python3.7/dist-packages/numba/core/utils.py in reraise(tp, value, tb) 78 value = tp() 79 if value.__traceback__ is not tb: ---> 80 raise value.with_traceback(tb) 81 raise value 82 TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(<built-in function getitem>) found for signature: >>> getitem(float64, int64) There are 22 candidate implementations: - Of which 22 did not match due to: Overload of function 'getitem': File: <numerous>: Line N/A. With argument(s): '(float64, int64)': No match. During: typing of intrinsic-call at <ipython-input-255-d05a5f4ea944> (7) File "<ipython-input-255-d05a5f4ea944>", line 7: def add(new,original=np.array([])): <source elided> for ii in range(11,19): if raw[ii]!=new[ii]: ^ Update: Here is how it should work. 
The function shall cover three main cases sample input for new data (1d array): array([9.0000000e+00, 0.0000000e+00, 1.0000000e+00, 0.0000000e+00, 0.0000000e+00, nan, 5.7300000e-01, 9.2605450e-01, 9.3171725e-01, 9.2039175e-01, 9.3450000e-01, 1.6491636e+09, 1.6494228e+09, 1.6496928e+09, 1.6497504e+09, 9.2377000e-01, 9.3738000e-01, 9.3038000e-01, 9.3450000e-01, nan, nan, nan]) sample input for original data (2d array): array([[4.00000000e+00, 0.00000000e+00, 1.00000000e+00, 0.00000000e+00, 0.00000000e+00, nan, 5.23000000e-01, 8.31589755e-01, 8.34804877e-01, 8.28374632e-01, 8.36090000e-01, 1.64938320e+09, 1.64966400e+09, 1.64968920e+09, 1.64975760e+09, 8.30750000e-01, 8.38020000e-01, 8.34290000e-01, 8.36090000e-01, nan, nan, nan]]) new data will be added and there is no original data add(new) Output: array([[9.0000000e+00, 0.0000000e+00, 1.0000000e+00, 0.0000000e+00, 0.0000000e+00, nan, 5.7300000e-01, 9.2605450e-01, 9.3171725e-01, 9.2039175e-01, 9.3450000e-01, 1.6491636e+09, 1.6494228e+09, 1.6496928e+09, 1.6497504e+09, 9.2377000e-01, 9.3738000e-01, 9.3038000e-01, 9.3450000e-01, nan, nan, nan]]) new data will be added, which hasnΒ΄t already been added before and there is original data add(new,original) Output: array([[4.00000000e+00, 0.00000000e+00, 1.00000000e+00, 0.00000000e+00, 0.00000000e+00, nan, 5.23000000e-01, 8.31589755e-01, 8.34804877e-01, 8.28374632e-01, 8.36090000e-01, 1.64938320e+09, 1.64966400e+09, 1.64968920e+09, 1.64975760e+09, 8.30750000e-01, 8.38020000e-01, 8.34290000e-01, 8.36090000e-01, nan, nan, nan], [9.00000000e+00, 0.00000000e+00, 1.00000000e+00, 0.00000000e+00, 0.00000000e+00, nan, 5.73000000e-01, 9.26054500e-01, 9.31717250e-01, 9.20391750e-01, 9.34500000e-01, 1.64916360e+09, 1.64942280e+09, 1.64969280e+09, 1.64975040e+09, 9.23770000e-01, 9.37380000e-01, 9.30380000e-01, 9.34500000e-01, nan, nan, nan]]) new data will be added, which already had been added before add(new,original) Output: array([[9.0000000e+00, 0.0000000e+00, 1.0000000e+00, 0.0000000e+00, 0.0000000e+00, nan, 5.7300000e-01, 9.2605450e-01, 9.3171725e-01, 9.2039175e-01, 9.3450000e-01, 1.6491636e+09, 1.6494228e+09, 1.6496928e+09, 1.6497504e+09, 9.2377000e-01, 9.3738000e-01, 9.3038000e-01, 9.3450000e-01, nan, nan, nan]]) | The main issue is that Numba assumes that original is a 1D array while this is not the case. The pure-Python code works because the interpreter it never execute the body of the loop for raw in original but Numba need to compile all the code before its execution. You can solve this problem using the following function prototype: def add(new,original=np.array([[]])): # Note the `[[]]` instead of `[]` With that, Numba can deduce correctly that the original array is a 2D one. Note that specifying the dimension and types of Numpy arrays and inputs is a good method to avoid such errors and sneaky bugs (eg. due to integer/float truncation). | 6 | 8 |
71,902,175 | 2022-4-17 | https://stackoverflow.com/questions/71902175/create-venn-diagram-in-python-with-4-circles | How can I create a venn diagram in python from 4 sets? Seems like the limit in matplotlib is only 3? from matplotlib_venn import venn3 v = venn3( [ set(ether_list), set(bitcoin_list), set(doge_list), ], ) | Venn diagrams drawn with circles only work for fewer than 4 sets: with 4 or more circles, some of the required intersection regions are geometrically impossible to show. Some Python libraries that draw Venn diagrams with more flexible shapes (ellipses) are: pyvenn and venn | 10 | 9
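A short sketch of the second option for the entry above, assuming the venn package (pip install venn) and its dict-of-sets interface; the integer sets below are placeholders for the real coin lists:

```python
import matplotlib.pyplot as plt
from venn import venn  # pip install venn

# placeholder data standing in for the four coin lists
sets = {
    "ether":   set(range(0, 30)),
    "bitcoin": set(range(10, 40)),
    "doge":    set(range(20, 50)),
    "other":   set(range(5, 25)),
}
venn(sets)       # ellipse-based layout, so 4 (and up to 6) sets are supported
plt.show()
```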
71,885,891 | 2022-4-15 | https://stackoverflow.com/questions/71885891/urllib3-exceptions-maxretryerror-httpconnectionpoolhost-localhost-port-5958 | At dawn my code was working perfectly, but today when I woke up it is no longer working, and I didn't change any line of code, I also checked if Firefox updated, and no, it didn't, and I have no idea what maybe, I've been reading the urllib documentation but I couldn't find any information from asyncio.windows_events import NULL from ctypes.wintypes import PINT from logging import root from socket import timeout from string import whitespace from tkinter import N from turtle import color from urllib.request import Request from hyperlink import URL from selenium import webdriver from selenium.webdriver.firefox.service import Service from selenium.webdriver.firefox.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support.expected_conditions import presence_of_element_located #from webdriver_manager.firefox import GeckoDriverManager import time from datetime import datetime import telebot #driver = webdriver.Firefox(service=Service(GeckoDriverManager().install())) colors = NULL api = "******" url = "https://blaze.com/pt/games/double" bot = telebot.TeleBot(api) chat_id = "*****" firefox_driver_path = "/Users/AntΓ΄nio/Desktop/roletarobo/geckodriver.exe" firefox_options = Options() firefox_options.add_argument("--headless") webdriver = webdriver.Firefox( executable_path = firefox_driver_path, options = firefox_options) with webdriver as driver: driver.get(url) wait = WebDriverWait(driver, 25) wait.until(presence_of_element_located((By.CSS_SELECTOR, "div#roulette.page.complete"))) time.sleep(2) results = driver.find_elements(By.CSS_SELECTOR, "div#roulette-recent div.entry") for quote in results: quote.text.split('\n') data = [my_elem.text for my_elem in driver.find_elements(By.CSS_SELECTOR, "div#roulette-recent div.entry")][:8] #mΓ©todo convertElements, converte elementos da lista em elementos declarados def convertElements( oldlist, convert_dict ): newlist = [] for e in oldlist: if e in convert_dict: newlist.append(convert_dict[e]) else: newlist.append(e) return newlist #fim do mΓ©todo colors = convertElements(data, {'':"white",'1':"red",'2':"red",'3':"red",'4':"red",'5':"red",'6':"red",'7':"red",'8':"black",'9':"black",'10':"black",'11':"black",'12':"black",'13':"black",'14':"black"}) print(colors) It was working perfectly, since Sunday I've been coding and it's always been working File "C:\Users\AntΓ΄nio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\support\wait.py", line 78, in until value = method(self._driver) File "C:\Users\AntΓ΄nio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\support\expected_conditions.py", line 64, in _predicate return driver.find_element(*locator) File "C:\Users\AntΓ΄nio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 1248, in find_element return self.execute(Command.FIND_ELEMENT, { File "C:\Users\AntΓ΄nio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 423, in execute response = self.command_executor.execute(driver_command, params) File "C:\Users\AntΓ΄nio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 333, in execute return 
self._request(command_info[0], url, body=data) File "C:\Users\AntΓ΄nio\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\remote_connection.py", line 355, in _request resp = self._conn.request(method, url, body=body, headers=headers) File "C:\Users\AntΓ΄nio\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\request.py", line 78, in request return self.request_encode_body( File "C:\Users\AntΓ΄nio\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\request.py", line 170, in request_encode_body return self.urlopen(method, url, **extra_kw) File "C:\Users\AntΓ΄nio\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 813, in urlopen return self.urlopen( File "C:\Users\AntΓ΄nio\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 785, in urlopen retries = retries.increment( File "C:\Users\AntΓ΄nio\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\retry.py", line 592, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause))urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=59587): Max retries exceeded with url: /session/b38be2fe-6d92-464f-a096-c43183aef6a8/element (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000173145EF520>: Failed to establish a new connection: [WinError 10061] No connections could be made because the target machine actively refused them')) | This error message... MaxRetryError(_pool, url, error or ResponseError(cause))urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=59587): Max retries exceeded with url: /session/b38be2fe-6d92-464f-a096-c43183aef6a8/element (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000173145EF520>: Failed to establish a new connection: [WinError 10061] No connections could be made because the target machine actively refused them')) ...implies that the GeckoDriver was unable to initiate/spawn a new Browsing Context i.e. firefox session. Root cause The root cause of this error can be either of the following: This error may surface if have closed the Browsing Context manually with brute force when the driver have already initiated a lookout for element/elements. There is a possibility that the application you are trying to access is throttling the requests from your system/machine/ip-address/network. There is also a possibility that the application have identified the Selenium driven GeckoDriver initiated firefox Browsing Context as a bot and is denying any access. Solution Ensure that: To evade the detection as a bot, pass the argument --disable-blink-features=AutomationControlled as follows: from selenium.webdriver.firefox.options import Options options = Options() options.add_argument('--disable-blink-features=AutomationControlled') Always invoke driver.quit() within tearDown(){} method to close & destroy the WebDriver and Web Client instances gracefully. Induce WebDriverWait to synchronize the fast moving WebDriver along with the Browsing Context. | 10 | 12 |
71,894,769 | 2022-4-16 | https://stackoverflow.com/questions/71894769/keras-attributeerror-adam-object-has-no-attribute-name | I want to compile my DQN Agent but I get error: AttributeError: 'Adam' object has no attribute '_name', DQN = buildAgent(model, actions) DQN.compile(Adam(lr=1e-3), metrics=['mae']) I tried adding fake _name but it doesn't work, I'm following a tutorial and it works on tutor's machine, it's probably some new update change but how to fix this Here is my full code: from keras.layers import Dense, Flatten import gym from keras.optimizer_v1 import Adam from rl.agents.dqn import DQNAgent from rl.policy import BoltzmannQPolicy from rl.memory import SequentialMemory env = gym.make('CartPole-v0') states = env.observation_space.shape[0] actions = env.action_space.n episodes = 10 def buildModel(statez, actiones): model = Sequential() model.add(Flatten(input_shape=(1, statez))) model.add(Dense(24, activation='relu')) model.add(Dense(24, activation='relu')) model.add(Dense(actiones, activation='linear')) return model model = buildModel(states, actions) def buildAgent(modell, actionz): policy = BoltzmannQPolicy() memory = SequentialMemory(limit=50000, window_length=1) dqn = DQNAgent(model=modell, memory=memory, policy=policy, nb_actions=actionz, nb_steps_warmup=10, target_model_update=1e-2) return dqn DQN = buildAgent(model, actions) DQN.compile(Adam(lr=1e-3), metrics=['mae']) DQN.fit(env, nb_steps=50000, visualize=False, verbose=1) | Your error came from importing Adam with from keras.optimizer_v1 import Adam, You can solve your problem with tf.keras.optimizers.Adam from TensorFlow >= v2 like below: (The lr argument is deprecated, it's better to use learning_rate instead.) # !pip install keras-rl2 import tensorflow as tf from keras.layers import Dense, Flatten import gym from rl.agents.dqn import DQNAgent from rl.policy import BoltzmannQPolicy from rl.memory import SequentialMemory env = gym.make('CartPole-v0') states = env.observation_space.shape[0] actions = env.action_space.n episodes = 10 def buildModel(statez, actiones): model = tf.keras.Sequential() model.add(Flatten(input_shape=(1, statez))) model.add(Dense(24, activation='relu')) model.add(Dense(24, activation='relu')) model.add(Dense(actiones, activation='linear')) return model def buildAgent(modell, actionz): policy = BoltzmannQPolicy() memory = SequentialMemory(limit=50000, window_length=1) dqn = DQNAgent(model=modell, memory=memory, policy=policy, nb_actions=actionz, nb_steps_warmup=10, target_model_update=1e-2) return dqn model = buildModel(states, actions) DQN = buildAgent(model, actions) DQN.compile(tf.keras.optimizers.Adam(learning_rate=1e-3), metrics=['mae']) DQN.fit(env, nb_steps=50000, visualize=False, verbose=1) | 5 | 4 |
71,893,002 | 2022-4-16 | https://stackoverflow.com/questions/71893002/how-to-make-flask-handle-25k-request-per-second-like-express-js | So i am making a big social media app but i have a problem which framework to choose flask or express.js i like flask so much but it cant handle too much requests. Express.js can handle about 25k request per second (google). So is there anyway to make flask handle 25k request per second using gunicorn currently i am using this command $ gunicorn -w 4 -b 0.0.0.0:5000 your_project:app but it can only handle 4 request at a time. And uh one more question can flask handle 1Million user at a time. Should i choose express.js because it can handle 25k request | You can use multithreads or gevent to increase gunicorn's concurrency. Option1 multithreads eg: gunicorn -w 4 --threads 100 -b 0.0.0.0:5000 your_project:app --threads 100 means 100 threads per process. -w 4 means 4 processes, so -w 4 --threads 100 means 400 requests at a time Option2 gevent worker eg: pip install gevent gunicorn -w 4 -k gevent --worker-connections 1000 -b 0.0.0.0:5000 your_project:app -k gevent --worker-connections 1000 means 1000 coroutines per gevent worker process. -w 4 means 4 processes, so -w 4 -k gevent --worker-connections 1000 means 4000 requests at a time. For more information, you can refer to my blog post: https://easydevguide.com/posts/gunicorn_concurrency | 4 | 6 |
71,893,082 | 2022-4-16 | https://stackoverflow.com/questions/71893082/how-can-i-send-results-of-a-test-as-a-parameter-to-my-python-script | I created a scheduled task and my cypress script is being run once an hour. But after that I want to execute a python script and pass the result data there. Run the script and get the "results" as failed or success. $ cypress run --spec "cypress/integration/myproject/myscript.js" And pass the "results" data to a python script. $ python test.py results How can I do this? | There is a subprocess module which is able to run external commands, here is the example: import subprocess def get_test_output(): filepath = './cypress/integration/myproject/myscript.js' res = subprocess.run( ['echo', filepath], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, ) # In your case it will be: # res = subprocess.run( # ['cypress', 'run', '--spec', filepath], # stdout=subprocess.PIPE, # stderr=subprocess.STDOUT, # ) return res.stdout.decode() if __name__ == '__main__': test_res = get_test_output() print(test_res) # => ./cypress/integration/myproject/myscript.js You could run cypress in the begining of test.py and pass results further to needed functions | 4 | 5 |
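A small hedged extension of the answer above: if only a pass/fail flag is needed rather than the raw log, the exit code of the cypress process can be inspected (cypress run normally exits non-zero when specs fail); the function name below is illustrative.

import subprocess
import sys

def run_cypress(spec_path):
    # run the spec; merge stderr into stdout so the full log is captured
    res = subprocess.run(
        ["cypress", "run", "--spec", spec_path],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
    # treat a non-zero exit code as a failed run
    status = "success" if res.returncode == 0 else "failed"
    return status, res.stdout.decode()

if __name__ == "__main__":
    status, output = run_cypress("./cypress/integration/myproject/myscript.js")
    print(status)
    sys.exit(0 if status == "success" else 1)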
71,886,600 | 2022-4-15 | https://stackoverflow.com/questions/71886600/algorithm-for-ordering-data-so-that-neighbor-elements-are-as-identical-as-possib | I have a (potentially large) list data of 3-tuples of small non-negative integers, like data = [ (1, 0, 5), (2, 4, 2), (3, 2, 1), (4, 3, 4), (3, 3, 1), (1, 2, 2), (4, 0, 3), (0, 3, 5), (1, 5, 1), (1, 5, 2), ] I want to order the tuples within data so that neighboring tuples (data[i] and data[i+1]) are "as similar as possible". Define the dissimilarity of two 3-tuples as the number of elements which are unequal between them. E.g. (0, 1, 2) vs. (0, 1, 2): Dissimilarity 0. (0, 1, 2) vs. (0, 1, 3): Dissimilarity 1. (0, 1, 2) vs. (0, 2, 1): Dissimilarity 2. (0, 1, 2) vs. (3, 4, 5): Dissimilarity 3. (0, 1, 2) vs. (2, 0, 1): Dissimilarity 3. Question: What is a good algorithm for finding the ordering of data which minimizes the sum of dissimilarities between all neighboring 3-tuples? Some code Here's a function which computes the dissimilarity between two 3-tuples: def dissimilar(t1, t2): return sum(int(a != b) for a, b in zip(t1, t2)) Here's a function which computes the summed total dissimilarity of data, i.e. the number which I seek to minimize: def score(data): return sum(dissimilar(t1, t2) for t1, t2 in zip(data, data[1:])) The problem can be solved by simply running score() over every permutation of data: import itertools n_min = 3*len(data) # some large number for perm in itertools.permutations(data): n = score(perm) if n < n_min: n_min = n data_sorted = list(perm) print(data_sorted, n_min) Though the above works, it's very slow as we explicitly check each and every permutation (resulting in O(N!) complexity). On my machine the above takes about 20 seconds when data has 10 elements. For completeness, here's the result of running the above given the example data: data_sorted = [ (1, 0, 5), (4, 0, 3), (4, 3, 4), (0, 3, 5), (3, 3, 1), (3, 2, 1), (1, 5, 1), (1, 5, 2), (1, 2, 2), (2, 4, 2), ] with n_min = 15. Note that several other orderings (10 in total) with a score of 15 exist. For my purposes these are all equivalent and I just want one of them. Final remarks In practice the size of data may be as large as say 10000. The sought-after algorithm should beat O(N!), i.e. probably be polynomial in time (and space). If no such algorithm exists, I would be interested in "near-solutions", i.e. a fast algorithm which gives an ordering of data with a small but not necessarily minimal total score. One such algorithm would be lexicographic sorting, i.e. sorted(data) # score 18 though I hope to be able to do better than this. Edit (comments on accepted solution) I have tried all of the below heuristic solutions given as code (I have not tried e.g. Google OR-tools). For large len(data), I find that the solution of Andrej Kesely is both quick and gives the best results. The idea behind this method is quite simple. The sorted list of data elements (3-tuples) is built up one by one. Given some data element, the next element is chosen to be the most similar one out of the remaining (not yet part of the sorted) data. Essentially this solves a localized version of the problem where we only "look one ahead", rather than optimizing globally over the entire data set. We can imagine a hierarchy of algorithms looking n ahead, each successively delivering better (or at least as good) results but at the cost of being much more expensive. The solution of Andrej Kesely then sits lowest in this hierarchy. 
The algorithm at the highest spot, looking len(data) ahead, solves the problem exactly. Let's settle for "looking 1 ahead", i.e. the answer by Andrej Kesely. This leaves room for a) the choice of initial element, b) what to do when several elements are equally good candidates (same dissimilarity) for use as the next one. Choosing the first element in data as the initial element and the first occurrence of an element with minimal dissimilarity, both a) and b) are determined from the original order of elements within data. As Andrej Kesely points out, it then helps to (lex)sort data in advance. In the end I went with this solution, but refined in a few ways: I try out the algorithm for 6 initial sortings of data; lex sort for columns (0, 1, 2), (2, 0, 1), (1, 2, 0), all in ascending as well as descending order. For large len(data), the algorithm becomes too slow for me. I suspect it scales like O(nΒ²). I thus process chunks of the data of size n_max independently, with the final result being the different sorted chunks concatenated. Transitioning from one chunk to the next we expect a dissimilarity of 3, but this is unimportant if we keep n_max large. I go with n_max = 1000. As an implementation note, the performance can be improved by not using data.pop(idx) as this itself is O(n). Instead, either leave the original data as is and use another data structure for keeping track of which elements/indices have been used, or replace data[idx] with some marker value upon use. | This isn't exact algorithm, just heuristic, but should be better that naive sorting: # you can sort first the data for lower total average score: # data = sorted(data) out = [data.pop(0)] while data: idx, t = min(enumerate(data), key=lambda k: dissimilar(out[-1], k[1])) out.append(data.pop(idx)) print(score(out)) Testing (100 repeats with data len(data)=1000): import random from functools import lru_cache def get_data(n=1000): f = lambda n: random.randint(0, n) return [(f(n // 30), f(n // 20), f(n // 10)) for _ in range(n)] @lru_cache(maxsize=None) def dissimilar(t1, t2): a, b, c = t1 x, y, z = t2 return (a != x) + (b != y) + (c != z) def score(data): return sum(dissimilar(t1, t2) for t1, t2 in zip(data, data[1:])) def lexsort(data): return sorted(data) def heuristic(data, sort_data=False): data = sorted(data) if sort_data else data[:] out = [data.pop(0)] while data: idx, t = min(enumerate(data), key=lambda k: dissimilar(out[-1], k[1])) out.append(data.pop(idx)) return out N, total, total_lexsort, total_heuristic, total_heuristic2 = 100, 0, 0, 0, 0 for i in range(N): data = get_data() r0 = score(data) r1 = score(lexsort(data)) r2 = score(heuristic(data)) r3 = score(heuristic(data, True)) print("original data", r0) print("lexsort", r1) print("heuristic", r2) print("heuristic with sorted", r3) total += r0 total_lexsort += r1 total_heuristic += r2 total_heuristic2 += r3 print("total original data score", total) print("total score lexsort", total_lexsort) print("total score heuristic", total_heuristic) print("total score heuristic(with sorted)", total_heuristic2) Prints: ... total original data score 293682 total score lexsort 178240 total score heuristic 162722 total score heuristic(with sorted) 160384 | 47 | 10 |
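On the implementation note at the end of the question (avoiding data.pop(idx), which is itself O(n)): a minimal sketch of the same greedy look-one-ahead loop using a used-flag list instead of popping; it assumes the dissimilar function defined above and is not otherwise tuned for speed.

def heuristic_no_pop(data):
    n = len(data)
    used = [False] * n
    out = [data[0]]
    used[0] = True
    for _ in range(n - 1):
        best_idx, best_d = None, 4  # dissimilarity of 3-tuples is at most 3
        for i in range(n):
            if used[i]:
                continue
            d = dissimilar(out[-1], data[i])
            if d < best_d:
                best_idx, best_d = i, d
                if d == 0:
                    break  # cannot improve on a perfect match
        out.append(data[best_idx])
        used[best_idx] = True
    return out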
71,888,628 | 2022-4-15 | https://stackoverflow.com/questions/71888628/allocate-an-integer-randomly-across-k-bins | I'm looking for an efficient Python function that randomly allocates an integer across k bins. That is, some function allocate(n, k) will produce a k-sized array of integers summing to n. For example, allocate(4, 3) could produce [4, 0, 0], [0, 2, 2], [1, 2, 1], etc. It should be randomly distributed per item, assigning each of the n items randomly to each of the k bins. | Adapting Michael Szczesny's comment based on numpy's new paradigm: def allocate(n, k): return np.random.default_rng().multinomial(n, [1 / k] * k) This notebook verifies that it returns the same distribution as my brute-force approach. | 4 | 1 |
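The answer refers to a brute-force check in a notebook without showing it; a sketch of what such a per-item simulation might look like (each of the n items independently picks one of k equally likely bins, which is exactly the multinomial model):

import numpy as np

def allocate_bruteforce(n, k, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # drop each of the n items into one of k bins uniformly at random
    bins = rng.integers(0, k, size=n)
    # count how many items landed in each bin
    return np.bincount(bins, minlength=k)

print(allocate_bruteforce(4, 3))  # e.g. [1 2 1]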
71,883,661 | 2022-4-15 | https://stackoverflow.com/questions/71883661/pytube-error-get-throttling-function-name-could-not-find-match-for-multiple | I am trying to download a YouTube playlist from the URL "https://www.youtube.com/watch?v=uyVYfSNb_Pc&list=PLBxwSeQlMDNiNt72UmSvKBLsxPgGY_Jy-", but getting the error 'get_throttling_function_name: could not find match for multiple'. Code block is: ` from pytube import Playlist play_list = Playlist('https://www.youtube.com/watch?v=uyVYfSNb_Pc&list=PLBxwSeQlMDNiNt72UmSvKBLsxPgGY_Jy-') print(f'Downloading: {play_list.title}') for video in play_list.videos: print(video.title) st = video.streams.get_highest_resolution() st.download(r'path') ` I am using the latest version of pytube. | Because YouTube changed something on its end, you now have to change the function_patterns in pytube's cipher.py to the following r'a\.[a-zA-Z]\s*&&\s*\([a-z]\s*=\s*a\.get\("n"\)\)\s*&&\s*' r'\([a-z]\s*=\s*([a-zA-Z0-9$]{2,3})(\[\d+\])?\([a-z]\)' And you also have to change line 288 to this: nfunc=re.escape(function_match.group(1))), You'll have to use this workaround until pytube officially releases a fix. | 5 | 7
71,889,136 | 2022-4-15 | https://stackoverflow.com/questions/71889136/python-pandas-weighted-average-with-the-use-of-groupby-agg | I want the ability to use custom functions in pandas groupby agg(). I Know there is the option of using apply but doing several aggregations is what I want. Below is my test code that I tried to get working for the weighted average. Python Code import pandas as pd import numpy as np def weighted_avg(df, values, weights): '''To calculate a weighted average in Pandas. Demo see https://www.statology.org/pandas-weighted-average/ Example: df.groupby('Group Names').apply(w_avg, 'Results', 'AFY')''' v = df[values] w = df[weights] return (v * w).sum() / w.sum() # below creates a dataframe. dfr = pd.DataFrame(np.random.randint(1,50,size=(4,4)), columns=list('ABCD')) dfr['group'] = [1, 1, 0, 1] print(dfr) dfr = dfr.groupby('group').agg({'A':'mean', 'B':'sum', 'C': lambda x: weighted_avg(dfr, 'D', 'C')}).reset_index() print(dfr) Results - Output A B C D group 0 5 2 17 38 1 1 35 30 22 32 1 2 15 18 16 11 0 3 46 6 20 34 1 group A B C 0 0 15.000000 18 29.413333 1 1 28.666667 38 29.413333 The problem: The weighted average is returning the value for the whole table and not the 'group' column. How can I get the weighted average by group working? I did try placing the groupby inside the function like shown here but no success. Thank you for taking a look. | You can use x you have in lambda (specifically, use it's .index to get values you want). For example: import pandas as pd import numpy as np def weighted_avg(group_df, whole_df, values, weights): v = whole_df.loc[group_df.index, values] w = whole_df.loc[group_df.index, weights] return (v * w).sum() / w.sum() dfr = pd.DataFrame(np.random.randint(1, 50, size=(4, 4)), columns=list("ABCD")) dfr["group"] = [1, 1, 0, 1] print(dfr) dfr = ( dfr.groupby("group") .agg( {"A": "mean", "B": "sum", "C": lambda x: weighted_avg(x, dfr, "D", "C")} ) .reset_index() ) print(dfr) Prints: A B C D group 0 32 2 34 29 1 1 33 32 15 49 1 2 4 43 41 10 0 3 39 33 7 31 1 group A B C 0 0 4.000000 43 10.000000 1 1 34.666667 67 34.607143 EDIT: As @enke stated in comments, you can call your weighted_avg function with already filtered dataframe: weighted_avg(dfr.loc[x.index], 'D', 'C') | 4 | 2 |
71,882,225 | 2022-4-15 | https://stackoverflow.com/questions/71882225/slicing-of-a-scanned-image-based-on-large-white-spaces | I am planning to split the questions from this PDF document. The challenge is that the questions are not orderly spaced. For example the first question occupies an entire page, second also the same while the third and fourth together make up one page. If I have to manually slice it, it will be ages. So, I thought to split it up into images and work on them. Is there a possibility to take image as this and split into individual components like this? | We may solve it using (mostly) morphological operations: Read the input image as grayscale. Apply thresholding with inversion. Automatic thresholding using cv2.THRESH_OTSU is working well. Apply opening morphological operation for removing small artifacts (using the kernel np.ones(1, 3)) Dilate horizontally with very long horizontal kernel - make horizontal lines out of the text lines. Apply closing vertically - create two large clusters. The size of the vertical kernel should be tuned according to the typical gap. Finding connected components with statistics. Iterate the connected components and crop the relevant area in the vertical direction. Complete code sample: import cv2 import numpy as np img = cv2.imread('scanned_image.png', cv2.IMREAD_GRAYSCALE) # Read image as grayscale thesh = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY_INV)[1] # Apply automatic thresholding with inversion. thesh = cv2.morphologyEx(thesh, cv2.MORPH_OPEN, np.ones((1, 3), np.uint8)) # Apply opening morphological operation for removing small artifacts. thesh = cv2.dilate(thesh, np.ones((1, img.shape[1]), np.uint8)) # Dilate horizontally - make horizontally lines out of the text. thesh = cv2.morphologyEx(thesh, cv2.MORPH_CLOSE, np.ones((50, 1), np.uint8)) # Apply closing vertically - create two large clusters nlabel, labels, stats, centroids = cv2.connectedComponentsWithStats(thesh, 4) # Finding connected components with statistics parts_list = [] # Iterate connected components: for i in range(1, nlabel): top = int(stats[i, cv2.CC_STAT_TOP]) # Get most top y coordinate of the connected component height = int(stats[i, cv2.CC_STAT_HEIGHT]) # Get the height of the connected component roi = img[top-5:top+height+5, :] # Crop the relevant part of the image (add 5 extra rows from top and bottom). parts_list.append(roi.copy()) # Add the cropped area to a list cv2.imwrite(f'part{i}.png', roi) # Save the image part for testing cv2.imshow(f'part{i}', roi) # Show part for testing # Show image and thesh testing cv2.imshow('img', img) cv2.imshow('thesh', thesh) cv2.waitKey() cv2.destroyAllWindows() Results: Stage 1: Stage 2: Stage 3: Stage 4: Top area: Bottom area: | 5 | 8 |
71,878,323 | 2022-4-14 | https://stackoverflow.com/questions/71878323/adaptive-resizing-for-a-tkinter-text-widget | I have been attempting to create a application that contains two Text() widgets, both of which can dynamically resize when the window size is changed. Before I have always used the root.pack() manager, with fill='both' and expand=True. While this works for LabelFrames and most other widgets, it does not work when a Text widget is resized smaller then its original dimensions. Is there a way to have dynamically resizing Text widgets? Ex. import tkinter as tk window = tk.Tk() editor = tk.Text(bg='red') editor.pack(side='top',fill='both',expand=True) output = tk.Text(bg='green') output.pack(side='top',fill='both',expand=True) window.mainloop() | Tkinter will try to honor the requested size of a text widget. Since you didn't specify a size, the text widget will request a size of 80x24. When you resize the window smaller, pack tries to make room for everything at its requested size, and it does so in stacking order. As the window shrinks, there's room for all of the first text widget but not enough for both. Because there's not enough room, it has to subtract space from the remaining widgets. Thus, it starts chopping off the last text widget. To combat this, you can set the requested size of the text widgets to a small value that will fit in almost any window size, and then force them to grow by setting the size of the window as a whole. This way, pack will first allocate enough space for each small window, and then expand them equally when there's extra space. For example: import tkinter as tk window = tk.Tk() window.geometry("400x400") editor = tk.Text(bg='red', width=1, height=1) output = tk.Text(bg='green', width=1, height=1) editor.pack(side='top',fill='both',expand=True) output.pack(side='top',fill='both',expand=True) window.mainloop() The other solution is to use grid which lets you specify that rows and columns should be of uniform size. import tkinter as tk window = tk.Tk() editor = tk.Text(bg='red') output = tk.Text(bg='green') editor.grid(row=0, column=0, sticky="nsew") output.grid(row=1, column=0, sticky="nsew") window.grid_columnconfigure(0, weight=1) window.grid_rowconfigure((0,1), weight=1, uniform=1) window.mainloop() | 4 | 3 |
71,875,067 | 2022-4-14 | https://stackoverflow.com/questions/71875067/adding-text-labels-to-a-plotly-scatter-plot-for-a-subset-of-points | I have a plotly.express.scatter plot with thousands of points. I'd like to add text labels, but only for outliers (eg, far away from a trendline). How do I do this with plotly? I'm guessing I need to make a list of points I want labeled and then pass this somehow to plotly (update_layout?). I'm interested in a good way to do this. Any help appreciated. | You have the right idea: you'll want to have the coordinates of your outliers, and use Plotly's text annotations to add text labels to these points. I am not sure how you want to determine outliers, but the following is an example using the tips dataset. import pandas as pd from sklearn import linear_model import plotly.express as px df = px.data.tips() ## use linear model to determine outliers by residual X = df["total_bill"].values.reshape(-1, 1) y = df["tip"].values regr = linear_model.LinearRegression() regr.fit(X, y) df["predicted_tip"] = regr.predict(X) df["residual"] = df["tip"] - df["predicted_tip"] residual_mean, residual_std = df["residual"].mean(), df["residual"].std() df["residual_normalized"] = (((df["tip"] - df["predicted_tip"]) - residual_mean) / residual_std).abs() ## determine outliers using whatever method you like outliers = df.loc[df["residual_normalized"] > 3.0, ["total_bill","tip"]] fig = px.scatter(df, x="total_bill", y="tip", trendline="ols", trendline_color_override="red") ## add text to outliers using their (x,y) coordinates: for x,y in outliers.itertuples(index=False): fig.add_annotation( x=x, y=y, text="outlier", showarrow=False, yshift=10 ) fig.show() | 4 | 4 |
71,858,905 | 2022-4-13 | https://stackoverflow.com/questions/71858905/does-urllib3-support-http-2-requests-will-it | I know the following about various python HTTP libraries: Requests does not support HTTP/2 requests. Hyper does support HTTP/2 requests, but is archived as of early 2021 and wouldn't be a good choice for new projects. HTTPX does support HTTP/2, but this support is optional, requires installing extra dependencies, and comes with some caveats about rough edges. AIOHTTP does not support HTTP/2 yet (as of mid April 2022). The focus of this project is also not solely on being a client -- this package also includes a server. The other major HTTP request library I'm aware of is urllib3. This is what OpenAPI Generator uses by default when generating python client libraries. My questions are: Can urllib3 be configured to make HTTP/2 requests? I cannot find any information on HTTP/2 support in the documentation, and through my testing of a generated OpenAPI client, all requests are HTTP/1.1. If the answer is no currently, are the maintainers planning HTTP/2 support? I cannot find any evidence of this in the project's open issues. | I asked about this in the urllib3 discord, and got an answer from one of the maintainers that corroborates what Tim Roberts commented: proper HTTP/2 implementations require async/await to take advantage of the main differentiating feature of HTTP/2, which is making requests in parallel. urllib3 in particular is not planning to support this because it would in general require a rewrite. | 10 | 9
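For completeness, going beyond the answer above: if HTTP/2 is a hard requirement today, HTTPX from the question is the usual choice. A minimal sketch, assuming the optional extra is installed with pip install "httpx[http2]"; the URL is a placeholder.

import httpx

# http2=True enables HTTP/2 negotiation (ALPN) for HTTPS URLs
with httpx.Client(http2=True) as client:
    response = client.get("https://www.example.com")
    print(response.http_version)  # expected to be "HTTP/2" when the server supports it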
71,867,872 | 2022-4-14 | https://stackoverflow.com/questions/71867872/checking-if-the-number-is-a-decimal-decimal-type-in-python | Variable 'a' could be of type - int/float/decimal.Decimal (but not a string) I want to check if its a decimal.Decimal type. Following works: import decimal a = decimal.Decimal(4) if type(a) is decimal.Decimal: print('yes decimal') else: print('not decimal') But, is there a righter way of doing the same? tnx. | Use isinstance which outputs True or False. result = isinstance(a, decimal.Decimal) print(result) | 5 | 6 |
71,864,620 | 2022-4-13 | https://stackoverflow.com/questions/71864620/pandas-how-to-avoid-map-converting-int-to-floats | I have a dictionary: matches = {282: 285, 266: 277, 276: 293, 263: 264, 286: 280, 356: 1371, 373: 262, 314: 327, 294: 290, 285: 282, 277: 266, 293: 276, 264: 263, 280: 286, 1371: 356, 262: 373, 327: 314, 290: 294} And a df, like so: team_id 0 327 1 293 2 373 3 282 4 314 5 263 6 280 7 354 8 264 9 294 10 1371 11 262 12 266 13 356 14 290 15 285 16 286 17 275 18 277 19 276 Now I'm trying to create an 'adversary_id' column, mapped from the dict, like so: df['adversary_id'] = df['team_id'].map(matches) But this new column adversary_id is being converted to type float, and two rows are ending up with NaN: Why, if all data is type int? How do I fix this? | This is because the np.nan or NaN (they are not exact same) values you see in the dataframe are of type float. It is a limitation that pitifully can't be avoided as long as you have NaN values in your code. Kindly read more in pandas' documentation here. Because NaN is a float, a column of integers with even one missing values is cast to floating-point dtype (see Support for integer NA for more). pandas provides a nullable integer array, which can be used by explicitly requesting the dtype: The proposed solution is to force the type with: df['team_id'] = pd.Series(df['team_id'],dtype=pd.Int64Dtype()) Returning: <class 'pandas.core.frame.DataFrame'> RangeIndex: 5 entries, 0 to 4 Data columns (total 1 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Example 4 non-null Int64 dtypes: Int64(1) memory usage: 173.0 bytes | 5 | 5 |
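A hedged addition to the answer above: in the question the NaNs appear in the mapped adversary_id column, so that column can also be cast to the nullable integer dtype after the map (missing matches then become <NA> rather than float NaN). This assumes the df and matches objects from the question and pandas >= 1.0.

import pandas as pd

df["adversary_id"] = df["team_id"].map(matches).astype("Int64")
print(df.dtypes)  # adversary_id is now Int64, with <NA> for teams without a match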
71,863,508 | 2022-4-13 | https://stackoverflow.com/questions/71863508/cant-get-react-and-flask-cors-to-work-locally | I'm trying to get an application with a React/NodeJS frontend and a Flask backend to run locally for development purposes. I've been scouring StackOverflow for the last hour, but I can't seem to get past the CORS-issue. I have: import json from flask import Flask, request from flask_cors import CORS, cross_origin app = Flask(__name__) # Enable cors requests CORS(app) # Initiate model @app.route("/query", methods=["POST"]) @cross_origin(origin="*", headers=["Content-Type"]) def query(): """Endpoint for receiving bot response""" print(request) query = request.json bot_answer = blablagetanswer() ... #filling the json return json.dumps({"botResponse": bot_answer}) if __name__ == "__main__": app.run( host="0.0.0.0", port=80, debug=True, ) I have read multiple answers, and tried many variations of CORS handling in Flask (tried with and without the @cross_origin() decorater, added @cross_origin(origin="*") or even @cross_origin(origin="*", headers=["Content-Type"]) right under the @app.route()), but to no avail. I even tried adding these two lines to my flask app: app.config["CORS_HEADERS"] = "Content-Type" CORS(app, resources={r"/*": {"origins": "*"}}) It also didn't help. I'm fetching from React like this: var jsonData = { "lastConversations": [this.state.chatInput] } console.log("Sending Chat: " + JSON.stringify(jsonData, null, 2)); fetch('http://my-ip:80/query', { method: 'POST', mode: 'cors', body: JSON.stringify(jsonData) }) and the log does contain the text I want to send, but Chrome just says: Access to fetch at 'http://my-ip/query' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. What am I missing? This is just for local testing, how can I get it to work? | Turns out when working with JSON, it's important to make sure you let the other side know that you're sending application/json data. Modifying the fetch like this solved it: var jsonData = { "lastConversations": [this.state.chatInput] } console.log("Sending Chat: " + JSON.stringify(jsonData, null, 2)); fetch('http://my-ip:80/query', { method: 'POST', mode: 'cors', headers: { # this is the key part 'Content-Type': 'application/json' }, body: JSON.stringify(jsonData) }) | 7 | 0 |
71,862,034 | 2022-4-13 | https://stackoverflow.com/questions/71862034/how-to-add-type-hint-for-all-protocol-buffer-objects-in-python-functions | I want to add type hints for arguments in functions that accept any google protocol buffer object. def do_something(protobuf_obj: WHAT_IS_HERE): # protobuf_obj can be any protocol buffer instance pass What class should I put there from the google.protobuf library? | I ended up using the Message abstract base class. From the docs: class google.protobuf.message.Message Abstract base class for protocol messages. Protocol message classes are almost always generated by the protocol compiler. These generated types subclass Message and implement the methods shown below. So, now it looks like: from google.protobuf.message import Message def do_something(protobuf_obj: Message): # protobuf_obj can be any protocol buffer instance pass | 4 | 8 |
71,860,253 | 2022-4-13 | https://stackoverflow.com/questions/71860253/how-to-deploy-a-python-dash-application-on-an-internal-company-server | I have written a Python Dash Application and it works completely fine on my local computer. Now, I want to be able to deploy this application on a server within the corporate network. I do NOT want to deploy this on Heroku etc because the datasource is an internal API. How do I go about deploying this application on the server? It's a Linux based machine. I found this post that says use the code below but not quite sure where to add this piece of code. waitress-serve --host=0.0.0.0 --port=8080 appname:app.server | The code you are referring to, waitress-serve, is a command-line wrapper bound to the function waitress.serve provided by Waitress. You run it in your terminal or from a shell script. Waitress is a production-quality pure-Python WSGI server with very acceptable performance. It has no dependencies except ones which live in the Python standard library. It runs on CPython on Unix and Windows under Python 3.7+. You can install it with pip install waitress. @see waitress-serve documentation here. | 4 | 3 |
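To make the accepted answer concrete: the waitress-serve one-liner can also be written as a small Python entry point, because a Dash app exposes the underlying Flask/WSGI object as app.server. The file and module names below are assumptions, not from the original post.

# run_server.py -- assumes the Dash app is created as `app` inside appname.py
from waitress import serve
from appname import app  # the module that builds the Dash application

if __name__ == "__main__":
    # serve the WSGI app on all interfaces of the internal server
    serve(app.server, host="0.0.0.0", port=8080)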
71,857,720 | 2022-4-13 | https://stackoverflow.com/questions/71857720/how-to-put-a-matplotlib-figure-and-a-seaborn-figure-into-one-combined-matplotlib | I create 2 figures. One is a Seaborn figure and one is a matplotlib figure. Now I would like to combined those 2 figures into 1 combined figure. While the matplotlib figure is being displayed on the left hand side, the seaborn figure is not displayed. Here is the code #Plot the seaborn figure fig, ax = plt.subplots(figsize=(12,6)) sns.kdeplot(data=data_train.squeeze(), color='cornflowerblue', label='train', fill=False, ax=ax) sns.kdeplot(data=data_valid.squeeze(), color='orange', label='valid', fill=False, ax=ax) sns.kdeplot(data=data_test.squeeze(), color='green', label='test', fill=False, ax=ax) ax.legend(loc=2, prop={'size': 20}) plt.tight_layout() plt.xticks(fontsize =20) plt.yticks(fontsize =20) plt.title(f"{currentFeature}\nKernel density functions", fontsize = 20) plt.ylabel('Density',fontsize=20) plt.show() # Plot a matplotlib figure in a combined plot on the left side X1 = np.linspace(data_train.min(), data_train.max(), 1000) X2 = np.linspace(data_valid.min(), data_valid.max(), 1000) X3 = np.linspace(data_test.min(), data_test.max(), 1000) fig, ax = plt.subplots(1,2, figsize=(12,6)) ax[0].plot(X1, histogram_dist_train.pdf(X1), label='train') ax[0].plot(X2, histogram_dist_valid.pdf(X2), label='valid') ax[0].plot(X3, histogram_dist_test.pdf(X3), label='test') ax[0].set_title('matplotlib figure', fontsize = 14) ax[0].legend() #Try to plot the same seaborn figure from above on the right side of the combined figure ax[1].plot(sns.kdeplot(data=data_train.squeeze(), color='cornflowerblue', label='train', fill=False, ax=ax)) ax[1].plot(sns.kdeplot(data=data_valid.squeeze(), color='orange', label='valid', fill=False, ax=ax)) ax[1].plot(sns.kdeplot(data=data_test.squeeze(), color='green', label='test', fill=False, ax=ax)) ax[1].set_title('seaborn figure', fontsize = 14) ax[1].legend() When running the code I get the following error "AttributeError: 'numpy.ndarray' object has no attribute 'xaxis'". The single seaborn figure is created and the combined matplotlib figure is also created. But only on the left side you can see the correct matplotlib figure while on the right side it is just empty. Any ideas how I can do this? | Some hopefully clarifying comments: A figure is the top-level container for all plot elements. It's misleading/incorrect to refer to a matplotlib figure or seaborn figure, when really you're referring to an Axes. This creates one figure with two subplots. fig, ax = plt.subplots(1,2, figsize=(12,6)) Pure matplotlib plotting: ax[0].plot(X1, histogram_dist_train.pdf(X1), label='train')) Seaborn plotting: passing an existing Axes to kdeplot: sns.kdeplot(data=data_train.squeeze(), color='cornflowerblue', label='train', fill=False, ax=ax[1]) | 4 | 4 |
71,855,414 | 2022-4-13 | https://stackoverflow.com/questions/71855414/goeopandas-plot-shape-and-apply-opacity-outside-shape | I am plotting a city boundary (geopandas dataframe) to which I added a basemap using contextily. I would like to apply opacity to the region of the map outside of the city limits. The below example shows the opposite of the desired effect, as the opacity should be applied everywhere except whithin the city limits. import osmnx as ox import geopandas as gpd import contextily as cx berlin = ox.geocode_to_gdf('Berlin,Germany') fig, ax = plt.subplots(1, 1, figsize=(10,10)) _ = ax.axis('off') berlin.plot(ax=ax, color='white', edgecolor='black', alpha=.7, ) # basemap cx.add_basemap(ax,crs=berlin.crs,) plt.savefig('stackoverflow_question.png', dpi=100, bbox_inches='tight', ) Plot showing opposite of desired result: | You can create a new polygon that is a buffer on the total bounds of your geometry minus your geometry import osmnx as ox import geopandas as gpd import contextily as cx import matplotlib.pyplot as plt from shapely.geometry import box berlin = ox.geocode_to_gdf("Berlin,Germany") notberlin = gpd.GeoSeries( [ box(*box(*berlin.total_bounds).buffer(0.1).bounds).difference( berlin["geometry"].values[0] ) ], crs=berlin.crs, ) fig, ax = plt.subplots(1, 1, figsize=(10, 10)) _ = ax.axis("off") notberlin.plot( ax=ax, color="white", edgecolor="black", alpha=0.7, ) # basemap cx.add_basemap( ax, crs=berlin.crs, ) # plt.savefig('stackoverflow_question.png', # dpi=100, # bbox_inches='tight', # ) | 4 | 4 |
71,851,010 | 2022-4-13 | https://stackoverflow.com/questions/71851010/geopandas-plot-two-geo-dataframes-over-each-other-on-a-map | I am new to using Geopandas and plotting maps from Geo Dataframe. I have two Geo DataFrames which belong to the same city. But they are sourced from different sources. One contains the Geometry data for houses and another for Census tracts. I want to plot the houses' boundary on top of the tract boundry. Below is the first row from each data set. I am also not sure why the Geometry Polygon values are on such a different scale in each of these datasets. Houses Data Set House Data Tract Data Set Tract Data I tried the following code in the Jupyer Notebook but nothing is showing up. f, ax = plt.subplots() tract_data.plot(ax=ax) house_data.plot(ax=ax) But an empty plot shows up. This is my first post. Please let me know what else I can provide. | You probably need to set the correct coordinate reference system (crs). More info here An easy fix might be f, ax = plt.subplots() tract_data.to_crs(house_data.crs).plot(ax=ax) house_data.plot(ax=ax) | 5 | 6 |
71,775,175 | 2022-4-7 | https://stackoverflow.com/questions/71775175/convert-pandas-pivot-table-function-into-polars-pivot-function | I'm trying to convert some python pandas into polars. I'm stuck trying to convert pandas pivot_table function into polars. The following is the working pandas code. I can't seem to get the same behavior with the Polars pivot function. The polars pivot function forces the column parameter and uses the column values as headers instead of the column label as a header. I'm going for the same output below but with Polars instead of Pandas. df = pd.DataFrame({"obj" : ["ring", "shoe", "ring"], "price":["65", "42", "65"], "value":["53", "55", "54"], "date":["2022-02-07", "2022-01-07", "2022-03-07"]}) table = pd.pivot_table(df, values=['price','value','date'],index=['obj'], aggfunc={'price': pd.Series.nunique,'value':pd.Series.nunique,'date':pd.Series.nunique}) print(table) Outputs the following: date price value obj ring 2 1 2 shoe 1 1 1 | In Polars, we would not use a pivot table for this. Instead, we would use the group_by and agg functions. Using your data, it would be: import polars as pl df = pl.from_pandas(df) df.group_by("obj").agg(pl.all().n_unique()) shape: (2, 4) ┌──────┬───────┬───────┬──────┐ │ obj ┆ price ┆ value ┆ date │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ u32 ┆ u32 ┆ u32 │ ╞══════╪═══════╪═══════╪══════╡ │ ring ┆ 1 ┆ 2 ┆ 2 │ │ shoe ┆ 1 ┆ 1 ┆ 1 │ └──────┴───────┴───────┴──────┘ pivot and unpivot Where we would use the pivot function in Polars is to summarize a dataset in 'long' format to a dataset in 'wide' format. As an example, let's convert your original dataset to 'long' format using the unpivot function. df2 = df.unpivot(index="obj") print(df2) shape: (9, 3) ┌──────┬──────────┬────────────┐ │ obj ┆ variable ┆ value │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str │ ╞══════╪══════════╪════════════╡ │ ring ┆ price ┆ 65 │ │ shoe ┆ price ┆ 42 │ │ ring ┆ price ┆ 65 │ │ ring ┆ value ┆ 53 │ │ shoe ┆ value ┆ 55 │ │ ring ┆ value ┆ 54 │ │ ring ┆ date ┆ 2022-02-07 │ │ shoe ┆ date ┆ 2022-01-07 │ │ ring ┆ date ┆ 2022-03-07 │ └──────┴──────────┴────────────┘ Now let's use pivot to summarize this 'long' format dataset back to one in "wide" format and simply count the number of values. df2.pivot(on='variable', index='obj', aggregate_function=pl.len()) shape: (2, 4) ┌──────┬──────┬───────┬───────┐ │ obj ┆ date ┆ price ┆ value │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ u32 ┆ u32 ┆ u32 │ ╞══════╪══════╪═══════╪═══════╡ │ ring ┆ 2 ┆ 2 ┆ 2 │ │ shoe ┆ 1 ┆ 1 ┆ 1 │ └──────┴──────┴───────┴───────┘ Does this help clarify the use of the pivot functionality? | 5 | 7
71,808,640 | 2022-4-9 | https://stackoverflow.com/questions/71808640/filling-null-values-of-a-column-with-another-column | I want to fill the null values of a column with the content of another column of the same row in a lazy data frame in Polars. Is this possible with reasonable performance? | There's a function for this: fill_null. Let's say we have this data: import polars as pl df = pl.DataFrame({'a': [1, None, 3, 4], 'b': [10, 20, 30, 40] }).lazy() print(df.collect()) shape: (4, 2) ββββββββ¬ββββββ β a β b β β --- β --- β β i64 β i64 β ββββββββͺββββββ‘ β 1 β 10 β β null β 20 β β 3 β 30 β β 4 β 40 β ββββββββ΄ββββββ We can fill the null values in column a with values in column b: df.with_columns(pl.col('a').fill_null(pl.col('b'))).collect() shape: (4, 2) βββββββ¬ββββββ β a β b β β --- β --- β β i64 β i64 β βββββββͺββββββ‘ β 1 β 10 β β 20 β 20 β β 3 β 30 β β 4 β 40 β βββββββ΄ββββββ The performance of this will be quite good. | 6 | 8 |
71,850,031 | 2022-4-12 | https://stackoverflow.com/questions/71850031/polars-how-to-filter-using-in-and-not-in-like-in-sql | How can I achieve the equivalents of SQL's IN and NOT IN? I have a list with the required values. Here's the scenario: import pandas as pd import polars as pl exclude_fruit = ["apple", "orange"] df = pl.DataFrame( { "A": [1, 2, 3, 4, 5, 6], "fruits": ["banana", "banana", "apple", "apple", "banana", "orange"], "B": [5, 4, 3, 2, 1, 6], "cars": ["beetle", "audi", "beetle", "beetle", "beetle", "frog"], "optional": [28, 300, None, 2, -30, 949], } ) df.filter(~pl.select("fruits").str.contains(exclude_fruit)) df.filter(~pl.select("fruits").to_pandas().isin(exclude_fruit)) df.filter(~pl.select("fruits").isin(exclude_fruit)) | You were close. df.filter(~pl.col('fruits').is_in(exclude_fruit)) shape: (3, 5) βββββββ¬βββββββββ¬ββββββ¬βββββββββ¬βββββββββββ β A β fruits β B β cars β optional β β --- β --- β --- β --- β --- β β i64 β str β i64 β str β i64 β βββββββͺβββββββββͺββββββͺβββββββββͺβββββββββββ‘ β 1 β banana β 5 β beetle β 28 β β 2 β banana β 4 β audi β 300 β β 5 β banana β 1 β beetle β -30 β βββββββ΄βββββββββ΄ββββββ΄βββββββββ΄βββββββββββ | 21 | 35 |
71,837,398 | 2022-4-12 | https://stackoverflow.com/questions/71837398/pydantic-validations-for-extra-fields-that-not-defined-in-schema | I am using pydantic for schema validations and I would like to throw an error when any extra field that isn't defined is added to a schema. from typing import Literal, Union from pydantic import BaseModel, Field, ValidationError class Cat(BaseModel): pet_type: Literal['cat'] meows: int class Dog(BaseModel): pet_type: Literal['dog'] barks: float class Lizard(BaseModel): pet_type: Literal['reptile', 'lizard'] scales: bool class Model(BaseModel): pet: Union[Cat, Dog, Lizard] = Field(..., discriminator='pet_type') n: int print(Model(pet={'pet_type': 'dog', 'barks': 3.14, 'eats': 'biscuit'}, n=1)) """ try: Model(pet={'pet_type': 'dog'}, n=1) except ValidationError as e: print(e) """ In the above code, I have added the eats field which is not defined. The pydantic validations are applied and the extra values that I defined are removed in response. I want to throw an error saying eats is not allowed for Dog or something like that. Is there any way to achieve that? And is there any chance that we can provide the input directly instead of the pet object? print(Model({'pet_type': 'dog', 'barks': 3.14, 'eats': 'biscuit', n=1})). I tried without descriminator but those specific validations are missing related to pet_type. Can someone guide me how to achieve either one of that? | Pydantic v2 You can use the extra field in the model_config class attribute to forbid extra attributes during model initialisation (by default, additional attributes will be ignored). For example: from pydantic import BaseModel, ConfigDict class Pet(BaseModel): model_config = ConfigDict(extra="forbid") name: str data = { "name": "some name", "some_extra_field": "some value", } my_pet = Pet.model_validate(data) # <- effectively the same as Pet(**pet_data) will raise a ValidationError: ValidationError: 1 validation error for Pet some_extra_field Extra inputs are not permitted [type=extra_forbidden, input_value='some value', input_type=str] For further information visit https://errors.pydantic.dev/2.7/v/extra_forbidden Works as well when the model is "nested", e.g.: class PetModel(BaseModel): my_pet: Pet n: int pet_data = { "my_pet": {"name": "Some Name", "invalid_field": "some value"}, "n": 5, } pet_model = PetModel.model_validate(pet_data) # Effectively the same as # pet_model = PetModel(my_pet={"name": "Some Name", "invalid_field": "some value"}, n=5) will raise: ValidationError: 1 validation error for PetModel my_pet.invalid_field Extra inputs are not permitted [type=extra_forbidden, input_value='some value', input_type=str] For further information visit https://errors.pydantic.dev/2.7/v/extra_forbidden NB: As you can see, extra has the type ExtraValues now, and its value will get validated by ConfigDict. This means it's not possible to accidentally provide an unsupported value for extra (e.g. having a typo), i.e. something like ConfigDict(extra="fordib") will fail with a SchemaError. Pydantic v1 You can use the extra field in the Config class to forbid extra attributes during model initialisation (by default, additional attributes will be ignored). 
For example: from pydantic import BaseModel, Extra class Pet(BaseModel): name: str class Config: extra = Extra.forbid data = { "name": "some name", "some_extra_field": "some value", } my_pet = Pet.parse_obj(data) # <- effectively the same as Pet(**pet_data) will raise a VaidationError: ValidationError: 1 validation error for Pet some_extra_field extra fields not permitted (type=value_error.extra) Works as well when the model is "nested", e.g.: class PetModel(BaseModel): my_pet: Pet n: int pet_data = { "my_pet": {"name": "Some Name", "invalid_field": "some value"}, "n": 5, } pet_model = PetModel.parse_obj(pet_data) # Effectively the same as # pet_model = PetModel(my_pet={"name": "Some Name", "invalid_field": "some value"}, n=5) will raise: ValidationError: 1 validation error for PetModel my_pet -> invalid_field extra fields not permitted (type=value_error.extra) | 45 | 67 |
71,769,359 | 2022-4-6 | https://stackoverflow.com/questions/71769359/how-to-use-python-poetry-to-install-package-to-a-virtualenv-in-a-standalone-fash | I've recently migrated to poetry for my dependencies management so pardon if my question is out of the scope of poetry here. Final goal My final goal is to create a RPM package that contains a virtualenv with my software installed along with all its dependencies. This RPM would then provide my software in isolation with the system where it is installed. Reproduce the problem I'm facing a problem while using poetry install in my virtualenv. As soon as the source directory of my software is deleted, my CLI refuses to work any longer. Reproduce I've created a simple repository to reproduce the problem: https://github.com/riton/python-poetry-venv Here are the that I'm using with poetry: #!/bin/bash -ex VENV_DIR="/venv" SRC_DIR="/src" ALT_SRC_DIR="/src2" USER_CACHE_DIR="~/.cache" # Copy directory (cause we're mounting it read-only in the container) # and we want to remove the source directory later on cp -r $SRC_DIR $ALT_SRC_DIR # We'll remove this directory to test if the soft is still working # without the source dir cd $ALT_SRC_DIR [...] python3.8 -m venv "$VENV_DIR" source $VENV_DIR/bin/activate [...] poetry install --no-dev -v [...] # Our software will be called without an activated virtualenv # so 'deactivate' the current one deactivate cd / echo "Try after install" # Start the "CLI" after installation $VENV_DIR/bin/python-poetry-venv echo "Removing source directory and trying again" rm -rf $ALT_SRC_DIR $VENV_DIR/bin/python-poetry-venv echo "Removing user cache dir and trying again" rm -rf $USER_CACHE_DIR $VENV_DIR/bin/python-poetry-venv The script above fails with the following error: [...] Try after install + /venv/bin/python-poetry-venv THIS IS THE MAIN + echo 'Removing source directory and trying again' Removing source directory and trying again + rm -rf /src2 + /venv/bin/python-poetry-venv Traceback (most recent call last): File "/venv/bin/python-poetry-venv", line 2, in <module> from python_poetry_venv.cli import main ModuleNotFoundError: No module named 'python_poetry_venv' make: *** [Makefile:2: test-with-poetry-install] Error 1 link to the full script source As soon as the source directory is removed. The CLI refuses to work any longer. Trying with pip install I've tried to replace the poetry install with something like poetry build && pip install dist/*.whl (link to this script version) With the version using pip install of the .whl file, I'm successfully creating a standalone deployment of my application. This is suitable to RPM packaging and could be deployed anywhere. Software versions + python3.8 -V Python 3.8.13 + poetry --version Poetry version 1.1.13 Final thoughts I can't help but think that I'm misusing poetry here. So any help will be very much appreciated. Thanks in advance Regards | I'm late to the party, but I want to suggest a way to accomplish this. While poetry is amazing at managing your project's main and dev dependencies and locking their versions, I wouldn't rely on it while deploying on your situation. Here's a way to solve it: # export your dependencies in the requirements.txt format using poetry poetry export --without-hashes -f requirements.txt -o requirements.txt # create your venv like you did on your example (you may want to upgrade pip/wheel/setuptools first) python3 -m venv venv && . 
venv/bin/activate # then install the dependencies pip install --no-cache-dir --no-deps -r requirements.txt # then you install your own project pip install . There you have it, everything you need will be self-contained in the venv folder. | 6 | 16 |
71,814,658 | 2022-4-10 | https://stackoverflow.com/questions/71814658/python-typing-does-typeddict-allow-additional-extra-keys | Does typing.TypedDict allow extra keys? Does a value pass the typechecker, if it has keys which are not present on the definition of the TypedDict? | It depends. PEP-589, the specification of TypedDict, explicitely forbids extra keys: Extra keys included in TypedDict object construction should also be caught. In this example, the director key is not defined in Movie and is expected to generate an error from a type checker: m: Movie = dict( name='Alien', year=1979, director='Ridley Scott') # error: Unexpected key 'director' [emphasis by me] The typecheckers mypy, pyre, and pyright implement this according to the specification. However, it is possible that a value with extra keys is accepted. This is because subtyping of TypedDicts is allowed, and the subtype might implement the extra key. PEP-589 only forbids extra keys in object construction, i.e. in literal assignment. As any value that complies with a subtype is always deemed to comply with the parent type and can be upcasted from the subtype to the parent type, an extra key can be introduced through a subtype: from typing import TypedDict class Movie(TypedDict): name: str year: int class MovieWithDirector(Movie): director: str # This is illegal: movie: Movie = { 'name': 'Ash is purest white', 'year': 2018, 'director': 'Jia Zhangke', } # This is legal: movie_with_director: MovieWithDirector = { 'name': 'Ash is purest white', 'year': 2018, 'director': 'Jia Zhangke', } # This is legal, MovieWithDirector is a subtype of Movie movie: Movie = movie_with_director In the example above, we see that the same value can sometimes be considered complying with Movie by the typing system, and sometimes not. As a consequence of subtyping, typing a parameter as a certain TypedDict is not a safeguard against extra keys, because they could have been introduced through a subtype. If your code is sensitive with regard to the presence of extra keys (for instance, if it makes use of param.keys(), param.values() or len(param) on the TypedDict parameter param), this could lead to problems when extra keys are present. A solution to this problem is to either handle the exceptional case that extra keys are actually present on the parameter or to make your code insensitive against extra keys. If you want to test that your code is robust against extra keys, you cannot simply add a key in the test value: def some_movie_function(movie: Movie): # ... def test_some_movie_function(): # this will not be accepted by the type checker: result = some_movie_function({ 'name': 'Ash is purest white', 'year': 2018, 'director': 'Jia Zhangke', 'genre': 'drama', }) Workarounds are to either make the type checkers ignore the line or to create a subtype for your test, introducing the extra keys only for your test: class ExtendedMovie(Movie): director: str genre: str def test_some_movie_function(): extended_movie: ExtendedMovie = { 'name': 'Ash is purest white', 'year': 2018, 'director': 'Jia Zhangke', 'genre': 'drama', } result = some_movie_function(test_some_movie_function) # run assertions against result | 18 | 15 |
71,805,911 | 2022-4-9 | https://stackoverflow.com/questions/71805911/elastic-transport-tlserror-tls-error-caused-bytlserrortls-error-caused-by-ss | Getting this error Trying to connect elasticsearch docker container with elasticsearch-python client. /home/raihan/dev/aims_lab/ai_receptionist/env/lib/python3.6/site-packages/elasticsearch/_sync/client/__init__.py:379: SecurityWarning: Connecting to 'https://localhost:9200' using TLS with verify_certs=False is insecure **transport_kwargs, <Elasticsearch(['https://localhost:9200'])> Traceback (most recent call last): File "test_all.py", line 29, in <module> resp = es.index(index="test-index", id=1, document=doc) File "/home/raihan/dev/aims_lab/ai_receptionist/env/lib/python3.6/site-packages/elasticsearch/_sync/client/utils.py", line 404, in wrapped return api(*args, **kwargs) File "/home/raihan/dev/aims_lab/ai_receptionist/env/lib/python3.6/site-packages/elasticsearch/_sync/client/__init__.py", line 2218, in index __method, __path, params=__query, headers=__headers, body=__body File "/home/raihan/dev/aims_lab/ai_receptionist/env/lib/python3.6/site-packages/elasticsearch/_sync/client/_base.py", line 295, in perform_request client_meta=self._client_meta, File "/home/raihan/dev/aims_lab/ai_receptionist/env/lib/python3.6/site-packages/elastic_transport/_transport.py", line 334, in perform_request request_timeout=request_timeout, File "/home/raihan/dev/aims_lab/ai_receptionist/env/lib/python3.6/site-packages/elastic_transport/_node/_http_urllib3.py", line 199, in perform_request raise err from None elastic_transport.TlsError: TLS error caused by: TlsError(TLS error caused by: SSLError([SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:852))) contents in elastic.py host = "https://localhost:9200" es = Elasticsearch(host, ca_certs=False, verify_certs=False) print(es) doc = { 'author': 'kimchy', 'text': 'Elasticsearch: cool. bonsai cool.', 'timestamp': datetime.now(), } resp = es.index(index="test-index", id=1, document=doc) print(resp['result']) resp = es.get(index="test-index", id=1) print(resp['_source']) contents in elasticsearch dockerfile FROM docker.elastic.co/elasticsearch/elasticsearch:7.12.0 RUN elasticsearch-plugin install --batch https://github.com/alexklibisz/elastiknn/releases/download/7.12.0.0/elastiknn-7.12.0.0.zip urllib3==1.26.9 requests==2.27.1 | #disable certificate es = Elasticsearch(hosts="https://localhost:9200", basic_auth=(USER, PASS), verify_certs=False) #if getting an issue relevant to the certificate then: es = Elasticsearch(hosts="https://localhost:9200", basic_auth=(USER, PASS), ca_certs=CERTIFICATE, verify_certs=False) # I hope you know where to find certificate, e.g: $ find /usr/share/elasticsearch -name "certs.pem" Reference Follow for more | 10 | 19 |
71,831,415 | 2022-4-11 | https://stackoverflow.com/questions/71831415/downgrade-python-version-in-virtual-environment | I am always getting the same error regarding TensorFlow: ModuleNotFoundError: No module named 'tensorflow.contrib'. I am actually using Python version 3.9 but, reading online, it seems that version 3.7 is the last stable one that can work with TensorFlow version >2.0. Unfortunately I have started my project in a venv with the wrong version of Python and I would like to downgrade it, how can I do that? | Building on @chepner's comment above, since venvs are just directories, you can save your current state and start a fresh virtual environment instead. # Save current installs (venv) -> pip freeze -r > requirements.txt # Shutdown current env (venv) -> deactivate # Copy it to keep a backup -> mv venv venv-3.9 # Ensure you have python3.7 -> python3.7 -V # Create and activate a 3.7 venv -> python3.7 -m venv venv-3.7 -> source venv-3.7/bin/activate # Reinstall previous requirements (venv-3.7) -> pip install -r requirements.txt # Install new requirements Hope that helps! | 14 | 8 |
71,800,133 | 2022-4-8 | https://stackoverflow.com/questions/71800133/how-to-return-a-custom-404-not-found-page-using-fastapi | I am making a rick roll site for Discord and I would like to redirect to the rick roll page on 404 response status codes. I've tried the following, but didn't work: @app.exception_handler(fastapi.HTTPException) async def http_exception_handler(request, exc): ... | Update A more elegant solution would be to use a custom exception handler, passing the status code of the exception you would like to handle, as shown below: from fastapi.responses import RedirectResponse from fastapi.exceptions import HTTPException @app.exception_handler(404) async def not_found_exception_handler(request: Request, exc: HTTPException): return RedirectResponse('https://fastapi.tiangolo.com') or, use the exception_handlers parameter of the FastAPI class like this: async def not_found_error(request: Request, exc: HTTPException): return RedirectResponse('https://fastapi.tiangolo.com') exception_handlers = {404: not_found_error} app = FastAPI(exception_handlers=exception_handlers) Note: In the examples above, a RedirectResponse is returned, as OP asked for redirecting the user. However, you could instead return some custom Response, HTMLResponse or Jinja2 TemplateResponse, as demosntrated in the example below. Working Example app.py from fastapi import FastAPI, Request from fastapi.templating import Jinja2Templates from fastapi.exceptions import HTTPException async def not_found_error(request: Request, exc: HTTPException): return templates.TemplateResponse('404.html', {'request': request}, status_code=404) async def internal_error(request: Request, exc: HTTPException): return templates.TemplateResponse('500.html', {'request': request}, status_code=500) templates = Jinja2Templates(directory='templates') exception_handlers = { 404: not_found_error, 500: internal_error } app = FastAPI(exception_handlers=exception_handlers) templates/404.html <!DOCTYPE html> <html> <title>Not Found</title> <body> <h1>Not Found</h1> <p>The requested resource was not found on this server.</p> </body> </html> templates/500.html <!DOCTYPE html> <html> <title>Internal Server Error</title> <body> <h1>Internal Server Error</h1> <p>The server encountered an internal error or misconfiguration and was unable to complete your request. </p> </body> </html> Original answer You would need to create a middleware and check for the status_code of the response. If it is 404, then return a RedirectResponse. Example: from fastapi import Request from fastapi.responses import RedirectResponse @app.middleware("http") async def redirect_on_not_found(request: Request, call_next): response = await call_next(request) if response.status_code == 404: return RedirectResponse("https://fastapi.tiangolo.com") else: return response | 6 | 8 |
71,835,308 | 2022-4-11 | https://stackoverflow.com/questions/71835308/how-to-use-python-docx-template-to-insert-bullet-points | I'm trying to insert bullet point text with docx-template. I know that it can be done with the standard docx like below; document.add_paragraph('text to be bulleted', style='List Bullet') But I just can't get the same thing to work on docx-template; rt = RichText() rt.add('text to be bulleted', style='List Bullet') The above code just returns 'text to be bulleted' without the bullet in front. I've looked through all the "tests" on the github and nothing mentions bullet. Any assistance would be greatly appreciated. Thank you in advance. | Until this is officially supported (open issue), you might use a workaround (but only fixed indentations): In your word document add this (with a real single bullet list item): {% for bullet in bullets %} β {{ bullet }}{% endfor %} Note that the {% endfor %} must be in the same line to avoid blank lines between the bullet items. Use this python code here: from docxtpl import DocxTemplate tpl=DocxTemplate('template.docx') context = { 'bullets': [ 'item 1', 'item 2', ], } tpl.render(context) tpl.save("output.docx") | 4 | 5 |
71,768,274 | 2022-4-6 | https://stackoverflow.com/questions/71768274/how-to-extract-all-youtube-comments-using-youtube-api-python | Let's say I have a video_id having 8487 comments. This code returns only 4309 comments. def get_comments(youtube, video_id, comments=[], token=''): video_response=youtube.commentThreads().list(part='snippet', videoId=video_id, pageToken=token).execute() for item in video_response['items']: comment = item['snippet']['topLevelComment'] text = comment['snippet']['textDisplay'] comments.append(text) if "nextPageToken" in video_response: return get_comments(youtube, video_id, comments, video_response['nextPageToken']) else: return comments youtube = build('youtube', 'v3',developerKey=api_key) comment_threads = get_comments(youtube,video_id) print(len(comment_threads)) > 4309 How can I extract all the 8487 comments? | In the commentThreads call, you have to add the replies parameter in order to retrieve the replies the comments might have. So, your request should look like this: video_response=youtube.commentThreads().list(part='id,snippet,replies', videoId=video_id, pageToken=token).execute() Then, modify your code accordingly to read the replies of the comments. In this example I made using the try-it feature available in the documentation, you can check that the response contains both the top comment and its replies. Edit (08/04/2022): Create a new variable that contains the totalReplyCount that the topLevelComment might have. Something like: def get_comments(youtube, video_id, comments=[], token=''): # Stores the total reply count a top-level comment has. totalReplyCount = 0 # Replies the top-level comment might have. replies=[] video_response=youtube.commentThreads().list(part='snippet', videoId=video_id, pageToken=token).execute() for item in video_response['items']: comment = item['snippet']['topLevelComment'] text = comment['snippet']['textDisplay'] comments.append(text) # Get the total reply count: totalReplyCount = item['snippet']['totalReplyCount'] # Check if the total reply count is greater than zero; # if so, call the new function "getAllTopLevelCommentReplies(topCommentId, replies, token)" # and extend the returned "comments" list. if (totalReplyCount > 0): comments.extend(getAllTopLevelCommentReplies(comment['id'], replies, None)) # Clear variable - just in case - not sure if needed since the "get_comments" function initializes the variable. replies = [] if "nextPageToken" in video_response: return get_comments(youtube, video_id, comments, video_response['nextPageToken']) else: return comments Then, if the value of totalReplyCount is greater than zero, make another call using comments.list to bring the replies the top-level comment has. For this new call, you have to pass the id of the top-level comment. Example (untested): # Returns all replies the top-level comment has: # topCommentId = the id of the top-level comment whose replies you want to retrieve. # replies = array of replies returned by this function. # token = comments.list might return more than 100 replies; if so, use the nextPageToken to retrieve the next batch of results.
def getAllTopLevelCommentReplies(topCommentId, replies, token): replies_response=youtube.comments().list(part='snippet', maxResults=100, parentId=topCommentId, pageToken=token).execute() for item in replies_response['items']: # Append the reply's text to the replies list. replies.append(item['snippet']['textDisplay']) if "nextPageToken" in replies_response: return getAllTopLevelCommentReplies(topCommentId, replies, replies_response['nextPageToken']) else: return replies Edit (11/04/2022): I've added the Google Colab example I modified based on your code and it works with my video example (ouf0ozwnU84): it brings its 130 comments, but with your video example (BaGgScV4NN8) I got 3300 of 3359. This might be because some comments are under approval/moderation, because of something else I'm missing, because there are comments too old that need additional filters, or because the API is buggy - see here some other questions related to troubles with pagination using the API. I suggest you check this tutorial, which shows code that you can change. | 8 | 6
71,764,921 | 2022-4-6 | https://stackoverflow.com/questions/71764921/how-to-delete-an-element-in-a-json-file-python | I am trying to delete an element in a json file, here is my json file: before: { "names": [ { "PrevStreak": false, "Streak": 0, "name": "Brody B#3719", "points": 0 }, { "PrevStreak": false, "Streak": 0, "name": "XY_MAGIC#1111", "points": 0 } ] } after running script: { "names": [ { "PrevStreak": false, "Streak": 0, "name": "Brody B#3719", "points": 0 } ] } How would I do this in Python? The file is stored locally and I am deciding which element to delete by the name in each element. Thanks | You will have to read the file, convert it to a native Python data type (e.g. a dictionary), then delete the element and save the file. In your case something like this could work: import json filepath = 'data.json' with open(filepath, 'r') as fp: data = json.load(fp) del data['names'][1] with open(filepath, 'w') as fp: json.dump(data, fp) | 4 | 5
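The accepted answer above removes the entry by list index (del data['names'][1]). Since the question selects the element by its "name" value, here is a minimal hedged sketch of the same read-modify-write pattern that filters by name instead; the target name and the indent=4 pretty-printing are assumptions, not part of the original answer.

```python
import json

filepath = 'data.json'
name_to_remove = 'XY_MAGIC#1111'  # hypothetical target; use the name you want to drop

with open(filepath, 'r') as fp:
    data = json.load(fp)

# Keep every entry whose "name" field is not the one being deleted.
data['names'] = [entry for entry in data['names'] if entry.get('name') != name_to_remove]

with open(filepath, 'w') as fp:
    json.dump(data, fp, indent=4)
```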
71,811,731 | 2022-4-9 | https://stackoverflow.com/questions/71811731/how-do-you-get-vs-code-to-write-debug-stdout-to-the-debug-console | I am trying to debug my Python Pytest tests in VS Code, using the Testing Activity on the left bar. I am able to run my tests as expected, with some passing and some failing. I would like to debug the failing tests to more accurately determine what is causing the failures. When I run an individual test in debug mode VS Code is properly hitting a breakpoint and stopping, and the Run and Debug pane shows the local variables. I can observe the status of local variables either in the Variables > Local pane or through the REPL, by typing the name of the variable. When I try to print out any statement, such as using > print("here") I do not get any output to the Debug Console. When I reference a variable, or put the string directly using > "here" I do see the output to the Debug Console. It seems to me that the stdout of my REPL is not displaying to the Debug Console. A number of answers online have been suggesting to add options like "redirectOutput": true or "console": "integratedTerminal", but neither of those seem to have worked. My full launch.json is below: { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "debugOptions": [ "WaitOnAbnormalExit", "WaitOnNormalExit" ], "console": "integratedTerminal", "stopOnEntry": false, "redirectOutput": true, "outputCapture": "std" } ] } Is there another setting I'm missing to enable this output? Have I got the wrong console type? | So after a lot of frustrating "debugging" I found a solution that worked for me (if you are using pytest like me): tldr Two solutions: downgrade your VS Code Python extension to v2022.2.1924087327; that will do the trick (or any version that had debugpy<=1.5.1). Or, launch the debugger from the debug tab, not the testing tab, and use a configuration like the following one { "name": "Python: Current File (Integrated Terminal)", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "purpose": ["debug-test"], "redirectOutput": true, "env": {"PYTHONPATH": "${workspaceRoot}"} } Bonus. If you are using pytest you can temporarily disable pytest's capture of stdout, so your print statements (and the print function, if you set a breakpoint inside the context manager) will work too. This is very cumbersome but points out the original problem of why the prints are no longer working. def test_disabling_capturing(capsys): print('this output is captured') with capsys.disabled(): print('output not captured, going directly to sys.stdout') print('this output is also captured') the long explanation So the problem apparently is that debugpy (which is the library used by the VS Code Python debugger) in its latest version, v1.6.0, fixed this "bug (827)". In a nutshell, this "bug" was that VS Code "duplicated" all the stdout when debugging because it captures the pytest stdout and replicates it in the debugger console. This is because, by default, pytest captures all the stdout and stores it (so when running all tests in parallel it doesn't create a mess).
After "fixing" this issue, now, when you launch the test via the testing tab, by default, pytest is capturing all the stdout and the "new" (>=v1.6.1) debugpy ignores it. Therefore, all the print statements are not shown anymore on the debug console, even when you call print() in a breakpoint, because are captured by pytest (IDK where the pytest captured stdout is showing/stored if it is anywhere). which, in my case is a PITA. You can disable the pytest capture option using the flag -s or --capture=no when launching pytest in a console or even from the debug tab as a custom configuration. but the problem is that there is no way (apparently) to add these parameters in vscode for the testing tab so pytest is executed using that option. Therefore the solution that I found was to downgrade the python extension to a version that uses an older version of debugpy v1.5.1, you can see in the python extension changelog that from the version 2022.4.0 they update the debugpy version, so going before that did the trick for me, you will have the double stdout "bug" in the console, but the print statement will work. ref: The issue that lead me to the solution You may make your voice heard here in the vscode-python issues | 13 | 16 |
71,824,282 | 2022-4-11 | https://stackoverflow.com/questions/71824282/sqlfluff-always-returns-templating-parsing-errors | I am trying to set up sqlfluff, but for all of our queries it always returns this when running sqlfluff fix: [1 templating/parsing errors found] Is there any way I can force it to tell me the error that occurs? I tried running it at the highest verbosity level but no useful information was logged. | Use the command sqlfluff parse, which will give you the line numbers of where the parse violations are occurring. Once you've rectified all parse violations, run sqlfluff fix again. | 12 | 10
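If you want the same information from Python rather than the CLI, sqlfluff also exposes a simple Python API. The sketch below is only an illustration under assumptions: the query string and the ansi dialect are invented, and the lint/fix signatures should be checked against the sqlfluff version you have installed.

```python
# Hypothetical use of sqlfluff's simple Python API; verify against your installed version.
import sqlfluff

sql = "SELECT a, b FROM my_table WHERE"  # deliberately unparsable example query (assumed)

# lint() returns a list of violation dicts, including parse errors with their positions.
for violation in sqlfluff.lint(sql, dialect="ansi"):
    print(violation)

# Once the query actually parses, fix() returns the rewritten SQL as a string.
print(sqlfluff.fix("SELECT a , b FROM my_table", dialect="ansi"))
```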
71,768,061 | 2022-4-6 | https://stackoverflow.com/questions/71768061/huggingface-transformers-classification-using-num-labels-1-vs-2 | question 1) The answer to this question suggested that for a binary classification problem I could use num_labels as 1 (positive or not) or 2 (positive and negative). Is there any guideline regarding which setting is better? It seems that if we use 1 then the probability would be calculated using the sigmoid function, and if we use 2 then probabilities would be calculated using the softmax function. question 2) In both cases are my y labels going to be the same? Each data point will have 0 or 1 and not a one-hot encoding? For example, if I have 2 data points then y would be 0,1 and not [0,0],[0,1] I have a very unbalanced classification problem where class 1 is present only 2% of the time. In my training data I am oversampling. question 3) My data is in a pandas dataframe and I am converting it to a dataset and creating the y variable as below. How should I cast my y column (label) if I am planning to use num_labels=1? `train_dataset=Dataset.from_pandas(train_df).cast_column("label", ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None))` | Well, it probably is kind of late, but I want to point out one thing: according to the Hugging Face code, if you set num_labels = 1, it will actually trigger the regression modeling, and the loss function will be set to MSELoss(). You can find the code here. Also, in their own tutorial, for a binary classification problem (IMDB, positive vs. negative), they set num_labels = 2. from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2) Here is the link. | 5 | 11
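To make the label format from question 2 concrete, here is a small hedged sketch reusing the distilbert-base-uncased checkpoint from the answer (the example sentences and labels are invented): with num_labels=2 the model expects plain integer class ids of shape (batch_size,), not one-hot vectors, and computes a cross-entropy loss over the two logits.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

# Two invented examples: labels are class indices (0 = neg, 1 = pos), not one-hot vectors.
batch = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

outputs = model(**batch, labels=labels)
print(outputs.logits.shape)  # torch.Size([2, 2]): one logit per class
print(outputs.loss)          # cross-entropy loss computed from the integer class ids
```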