question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
73,295,856 | 2022-8-9 | https://stackoverflow.com/questions/73295856/dataproc-errors-when-reading-and-writing-data-from-bigquery-using-pyspark | I am trying to read some BigQuery data, (ID: my-project.mydatabase.mytable [original names protected]) from a user-managed Jupyter Notebook instance, inside Dataproc Workbench. What I am trying is inspired in this, and more specifically, the code is (please read some additional comments, on the code itself): from pyspark.sql import SparkSession from pyspark.sql.functions import udf, col from pyspark.sql.types import IntegerType, ArrayType, StringType from google.cloud import bigquery # UPDATE (2022-08-10): BQ conector added spark = SparkSession.builder.appName('SpacyOverPySpark') \ .config('spark.jars.packages', 'com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.24.2') \ .getOrCreate() # ------------------ IMPORTING DATA FROM BIG QUERY -------------------------- # UPDATE (2022-08-10): This line now runs... df = spark.read.format('bigquery').option('table', 'my-project.mydatabase.mytable').load() # But imports the whole table, which could become expensive and not optimal print("DataFrame shape: ", (df.count(), len(df.columns)) # 109M records & 9 columns; just need 1M records and one column: "posting" # I tried the following, BUT with NO success: # sql = """ # SELECT `posting` # FROM `mentor-pilot-project.indeed.indeed-data-clean` # LIMIT 1000000 # """ # df = spark.read.format("bigquery").load(sql) # print("DataFrame shape: ", (df.count(), len(df.columns))) # ------- CONTINGENCY PLAN: IMPORTING DATA FROM CLOUD STORAGE --------------- # This section WORKS (just to enable the following sections) # HINT: This dataframe contains 1M rows of text, under a single column: "posting" df = spark.read.csv("gs://hidden_bucket/1M_samples.csv", header=True) # ---------------------- EXAMPLE CUSTOM PROCESSING -------------------------- # Example Python UDF Python def split_text(text:str) -> list: return text.split() # Turning Python UDF into Spark UDF textsplitUDF = udf(lambda z: split_text(z), ArrayType(StringType())) # "Applying" a UDF on a Spark Dataframe (THIS WORKS OK) df.withColumn("posting_split", textsplitUDF(col("posting"))) # ------------------ EXPORTING DATA TO BIG QUERY ---------------------------- # UPDATE (2022-08-10) The code causing the error: # df.write.format('bigquery') \ # .option('table', 'wordcount_dataset.wordcount_output') \ # .save() # has been replace by a code that successfully stores data in BQ: df.write \ .format('bigquery') \ .option("temporaryGcsBucket", "my_temp_bucket_name") \ .mode("overwrite") \ .save("my-project.mynewdatabase.mytable") When reading data from BigQuery, using a SQL query, the error triggered is: Py4JJavaError: An error occurred while calling o195.load. : com.google.cloud.spark.bigquery.repackaged.com.google.inject.ProvisionException: Unable to provision, see the following errors: 1) Error in custom provider, java.lang.IllegalArgumentException: 'dataset' not parsed or provided. 
at com.google.cloud.spark.bigquery.SparkBigQueryConnectorModule.provideSparkBigQueryConfig(SparkBigQueryConnectorModule.java:65) while locating com.google.cloud.spark.bigquery.SparkBigQueryConfig 1 error at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InternalProvisionException.toProvisionException(InternalProvisionException.java:226) at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1097) at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1131) at com.google.cloud.spark.bigquery.BigQueryRelationProvider.createRelationInternal(BigQueryRelationProvider.scala:75) at com.google.cloud.spark.bigquery.BigQueryRelationProvider.createRelation(BigQueryRelationProvider.scala:46) at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332) at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:242) at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:230) at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:197) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:750) Caused by: java.lang.IllegalArgumentException: 'dataset' not parsed or provided. 
at com.google.cloud.bigquery.connector.common.BigQueryUtil.lambda$parseTableId$2(BigQueryUtil.java:153) at java.util.Optional.orElseThrow(Optional.java:290) at com.google.cloud.bigquery.connector.common.BigQueryUtil.parseTableId(BigQueryUtil.java:153) at com.google.cloud.spark.bigquery.SparkBigQueryConfig.from(SparkBigQueryConfig.java:237) at com.google.cloud.spark.bigquery.SparkBigQueryConnectorModule.provideSparkBigQueryConfig(SparkBigQueryConnectorModule.java:67) at com.google.cloud.spark.bigquery.SparkBigQueryConnectorModule$$FastClassByGuice$$db983008.invoke(<generated>) at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.ProviderMethod$FastClassProviderMethod.doProvision(ProviderMethod.java:264) at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.ProviderMethod.doProvision(ProviderMethod.java:173) at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InternalProviderInstanceBindingImpl$CyclicFactory.provision(InternalProviderInstanceBindingImpl.java:185) at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InternalProviderInstanceBindingImpl$CyclicFactory.get(InternalProviderInstanceBindingImpl.java:162) at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40) at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:168) at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:39) at com.google.cloud.spark.bigquery.repackaged.com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1094) ... 18 more When writing data to BigQuery, the error is: Py4JJavaError: An error occurred while calling o167.save. : java.lang.ClassNotFoundException: Failed to find data source: bigquery. Please find packages at http://spark.apache.org/third-party-projects.html UPDATE: (2022-09-10) The error when writing data to BigQuery has been solved, please refer to the code above, as well as the comment section below. What am I doing wrong? | Key points found during the discussion: Add the BigQuery connector as a dependency through spark.jars=<gcs-uri> or spark.jars.packages=com.google.cloud.spark:spark-bigquery-with-dependencies_<scala-version>:<version>. Specify the correct table name in <project>.<dataset>.<table> format. The default mode for dataframe writer is errorifexists. When writing to a non-existent table, the dataset must exist, the table will be created automatically. When writing to an existing table, mode needs to be set as "append" or "overwrite" in df.write.mode(<mode>)...save(). When writing to a BQ table, do either a) direct write (supported since 0.26.0) df.write \ .format("bigquery") \ .option("writeMethod", "direct") \ .save("dataset.table") b) or indirect write df.write \ .format("bigquery") \ .option("temporaryGcsBucket","some-bucket") \ .save("dataset.table") See this doc. 
When reading from BigQuery through a SQL query, add mandatory properties viewsEnabled=true and materializationDataset=<dataset>: spark.conf.set("viewsEnabled","true") spark.conf.set("materializationDataset","<dataset>") sql = """ SELECT tag, COUNT(*) c FROM ( SELECT SPLIT(tags, '|') tags FROM `bigquery-public-data.stackoverflow.posts_questions` a WHERE EXTRACT(YEAR FROM creation_date)>=2014 ), UNNEST(tags) tag GROUP BY 1 ORDER BY 2 DESC LIMIT 10 """ df = spark.read.format("bigquery").load(sql) df.show() See this doc. | 4 | 6 |
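The accepted answer's read and write fixes can be combined into one short script. This is a hedged consolidation using the placeholder names from the question (`my-project.mydatabase.mytable`, `my_temp_bucket_name`) and the connector version the question pins; it adds nothing beyond what the answer already describes:

```python
from pyspark.sql import SparkSession

# Connector version and table/bucket names are the placeholders from the question.
spark = (
    SparkSession.builder.appName("SpacyOverPySpark")
    .config(
        "spark.jars.packages",
        "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.24.2",
    )
    .getOrCreate()
)

# Both properties are required before load() can accept a SQL query;
# the query result is materialized into a temporary table in this dataset.
spark.conf.set("viewsEnabled", "true")
spark.conf.set("materializationDataset", "mydatabase")

sql = """
SELECT posting
FROM `my-project.mydatabase.mytable`
LIMIT 1000000
"""
df = spark.read.format("bigquery").load(sql)
print("DataFrame shape:", (df.count(), len(df.columns)))

# Indirect write back to BigQuery through a temporary GCS bucket.
(
    df.write.format("bigquery")
    .option("temporaryGcsBucket", "my_temp_bucket_name")
    .mode("overwrite")
    .save("my-project.mynewdatabase.mytable")
)
```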
73,259,393 | 2022-8-6 | https://stackoverflow.com/questions/73259393/retrying-failed-futures-in-pythons-threadpoolexecutor | I want to implement retry logic with Python's concurrent.futures.ThreadPoolExecutor. I would like the following properties: A new future is added to the work queue as soon as it fails. A retried future can be retried again, either indefinitely or up to a maximum retry count. A lot of existing code I found online basically operates in "rounds", where they call as_completed on an initial list of futures, resubmits failed futures, gathers those futures in a new list, and goes back to calling as_completed on the new list if it's not empty. Basically something like this: with concurrent.futures.ThreadPoolExecutor(...) as executor: futures = {executor.submit(fn, job): job for job in jobs} while len(futures) > 0: new_futures = {} for fut in concurrent.futures.as_completed(futures): if fut.exception(): job = futures[fut] new_futures[executor.submit(fn, job)] = job else: ... # logic to handle successful job futures = new_futures However, I think that doesn't satisfy the first property, since it's possible that a retried future completes before the initial futures, but we won't process it until all the initial futures complete. Here's a hypothetical pathological case. Let's say we have two jobs, the first runs for 1 second but has a 90% chance of failure, while the second runs for 100 seconds. If our executor has 2 workers, and the first job fails after 1 second, we'll retry it immediately. But if it failed again, we won't be able to retry until the second job completes. So my question is, is it possible to implement retry logic with these desired properties, without using external libraries or rewriting low-level executor logic? One thing I tried is putting the retry logic in the code sent to the worker: def worker_job(fn): try: return fn() except Exception: executor.submit(fn) with concurrent.futures.ThreadPoolExecutor(...) as executor: jobs = [functools.partial(fn, arg) for arg in args] executor.map(worker_job, jobs) But it seems like submitting new jobs from within a job doesn't work. | Retry using as_completed Simple way Loop with wait(..., return_when=FIRST_COMPLETED) instead of as_completed(...). Trade-offs: Overhead of pending futures (re-adding waiter, building new_futures). Troublesome if want to specify overall timeout. with concurrent.futures.ThreadPoolExecutor() as executor: futures = {executor.submit(fn, job): job for job in jobs} while len(futures) > 0: new_futures = {} done, pending = concurrent.futures.wait(futures, return_when=FIRST_COMPLETED) for fut in done: if fut.exception(): job = futures[fut] new_futures[executor.submit(fn, job)] = job else: ... # logic to handle successful job for fut in pending: job = futures[fut] new_futures[fut] = job futures = new_futures Efficient way Tweak as_completed(...) to add to fs and pending, and use waiter. Trade-off: Maintenance. Advantage: Ability to specify overall timeout if wanted. class AsCompletedWaiterWrapper: def __init__(self): self.fs = None self.pending = None self.waiter = None def listen(self, fut): with self.waiter.lock: self.fs.add(fut) self.pending.add(fut) fut._waiters.append(self.waiter) def as_completed(self, fs, timeout=None): """ concurrent.futures.as_completed plus the 3 lines marked with +. 
""" if timeout is not None: end_time = timeout + time.monotonic() fs = set(fs) total_futures = len(fs) with _AcquireFutures(fs): finished = set( f for f in fs if f._state in [CANCELLED_AND_NOTIFIED, FINISHED]) pending = fs - finished waiter = _create_and_install_waiters(fs, _AS_COMPLETED) self.fs = fs # + self.pending = pending # + self.waiter = waiter # + finished = list(finished) try: yield from _yield_finished_futures(finished, waiter, ref_collect=(fs,)) while pending: if timeout is None: wait_timeout = None else: wait_timeout = end_time - time.monotonic() if wait_timeout < 0: raise TimeoutError( '%d (of %d) futures unfinished' % ( len(pending), total_futures)) waiter.event.wait(wait_timeout) with waiter.lock: finished = waiter.finished_futures waiter.finished_futures = [] waiter.event.clear() # reverse to keep finishing order finished.reverse() yield from _yield_finished_futures(finished, waiter, ref_collect=(fs, pending)) finally: # Remove waiter from unfinished futures for f in fs: with f._condition: f._waiters.remove(waiter) Usage: with concurrent.futures.ThreadPoolExecutor() as executor: futures = {executor.submit(fn, job): job for job in jobs} w = AsCompletedWaiterWrapper() for fut in w.as_completed(futures): if fut.exception(): job = futures[fut] new_fut = executor.submit(fn, job) futures[new_fut] = job w.listen(new_fut) else: ... # logic to handle successful job Retry from job helper Wait for events in with ... executor: as ThreadPoolExecutor.__exit__ shuts down executor so it cannot schedule new futures. Trade-offs: Would not work with ProcessPoolExecutor due to executor reference in main process. Troublesome if want to specify overall timeout. def worker_job(fn, event): try: rv = fn() event.set() return rv except Exception: executor.submit(worker_job, fn, event) with concurrent.futures.ThreadPoolExecutor() as executor: jobs = [functools.partial(fn, arg) for arg in args] events = [threading.Event() for _ in range(len(jobs))] executor.map(worker_job, jobs, events) for e in events: e.wait() | 4 | 3 |
73,216,605 | 2022-8-3 | https://stackoverflow.com/questions/73216605/add-background-color-to-cells-reportlab-python | summary = [['Metrics','Status']] try: for i in output['responsetimes']: if i['metric'] == 'ResponseTime': k = i['value'].split(' ') if int(k[0])<1000: temp = ['Response Times','Green'] summary.append(temp) else: temp = ['Response Times','Red'] summary.append(temp) except: summary.append(['Response Times','NA']) try: for i in output['runtimedumps']: if i['metric'] == 'Shortdumps Frequency': k = i['value'].split(' ') if int(k[0])==0: temp = ['Runtime Dumps','Green'] summary.append(temp) else: temp = ['Runtime Dumps','Red'] summary.append(temp) except: summary.append(['Runtime Dumps','NA']) try: temp = [] for i in output['buffer']: if (i['metric'] == 'HitRatio'): k = i['value'].split(' ') if int(k[0])>95: temp.append('green') else: temp.append('red') if 'red' in temp: summary.append(['Buffer','Red']) else: summary.append(['Buffer','Green']) except: summary.append(['Buffer','NA']) try: for i in output['updatemonitoring']: if i['metric'] == 'ErrorsInWpUD1': if int(i['value'])==0: temp = ['Update Monitoring','Green'] summary.append(temp) else: temp = ['Update Monitoring','Red'] summary.append(temp) except: summary.append(['Update Monitoring','NA']) try: for i in output['memory']: if i['metric'] == 'Physical': total = int(i['value'].split(' ')[0]) if i['metric'] == 'Free (Value)': free = int(i['value'].split(' ')[0]) if int((free*100)/total)<5: summary.append(['Memory Utilization','Red']) else: summary.append(['Memory Utilization','Green']) except: summary.append(['Memory Utilization','Green']) try: for i in output['cpu']: if i['metric'] == 'CPU_Utilization': used = int(i['value'].split(' ')[0]) if used>80: summary.append(['CPU Utilization','Red']) else: summary.append(['CPU Utilization','Green']) except: summary.append(['CPU Utilization','NA']) try: temp = [] for i in output['fs']: if int(i['perc'].split(' ')[0])>85: temp.append('red') else: temp.append('green') if 'red' in temp: summary.append(['File System','Red']) else: summary.append(['File System','Green']) except: summary.append(['File System','NA']) t=Table(summary,hAlign='LEFT') GRID_STYLE = TableStyle( [ ('GRID',(0,0),(-1,-1),0.5,colors.black), ('BACKGROUND', (0, 0), (-1, 0), '#2766A8'), ('TEXTCOLOR', (0, 0), (-1, 0), colors.white), ] ) t.setStyle(GRID_STYLE) Story.append(t) Story.append(Spacer(1, 12)) I am creating a table using report lab, You can see how it looks in the image below: I want to highlight the cell based on their values. For example, Response Times cell would be green if its value is green or red otherwise. I'm new at this, and can use some guidance on how to achieve this. | Found the answer myself after some more searching stackoverflow questions. Might help someone else.. 
In my case, list summary has the heading by default which is ['Metrics','Status'] and I'm appending the rest of the values based on validations like this, for i in output['responsetimes']: if i['metric'] == 'ResponseTime': k = i['value'].split(' ') if int(k[0])<1000: temp = ['Response Times','Green'] summary.append(temp) else: temp = ['Response Times','Red'] summary.append(temp) So in the end my summary list looks something like this, [ ['Metrics','Status'], ['Response','Green'], ['Metrics2','Red'], ['Metrics4','Green'], ['Metrics3','Red'] ] And now I just need to loop over this summary list and add styles to the already existing GRID_STYLE with the help of TableStyle class like this, GRID_STYLE = TableStyle( [ ('GRID',(0,0),(-1,-1),0.5,colors.black), ('BACKGROUND', (0, 0), (-1, 0), '#2766A8'), ('TEXTCOLOR', (0, 0), (-1, 0), colors.white), ] ) for row, values, in enumerate(summary): for column, value in enumerate(values): if value == "Red": GRID_STYLE.add('BACKGROUND', (column, row), (column, row), colors.red) if value == "Green": GRID_STYLE.add('BACKGROUND', (column, row), (column, row), colors.green) t.setStyle(GRID_STYLE) And voila, now the table looks like this, | 4 | 3 |
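Putting the accepted answer's pieces together, here is a small self-contained sketch that writes an actual PDF; the summary rows and the output filename are invented for illustration:

```python
from reportlab.lib import colors
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle

summary = [
    ["Metrics", "Status"],
    ["Response Times", "Green"],
    ["Runtime Dumps", "Red"],
    ["Buffer", "NA"],
]

grid_style = TableStyle([
    ("GRID", (0, 0), (-1, -1), 0.5, colors.black),
    ("BACKGROUND", (0, 0), (-1, 0), "#2766A8"),
    ("TEXTCOLOR", (0, 0), (-1, 0), colors.white),
])

# Colour each "Status" cell according to its value, as in the accepted answer.
for row, values in enumerate(summary):
    for column, value in enumerate(values):
        if value == "Red":
            grid_style.add("BACKGROUND", (column, row), (column, row), colors.red)
        elif value == "Green":
            grid_style.add("BACKGROUND", (column, row), (column, row), colors.green)

table = Table(summary, hAlign="LEFT")
table.setStyle(grid_style)
SimpleDocTemplate("summary.pdf").build([table])
```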
73,291,228 | 2022-8-9 | https://stackoverflow.com/questions/73291228/add-route-to-fastapi-with-custom-path-parameters | I am trying to add routes from a file and I don't know the actual arguments beforehand so I need to have a general function that handles arguments via **kwargs. To add routes I am using add_api_route as below: from fastapi import APIRouter my_router = APIRouter() def foo(xyz): return {"Result": xyz} my_router.add_api_route('/foo/{xyz}', endpoint=foo) Above works fine. However enrty path parameters are not fixed and I need to read them from a file, to achieve this, I am trying something like this: from fastapi import APIRouter my_router = APIRouter() def foo(**kwargs): return {"Result": kwargs['xyz']} read_from_file = '/foo/{xyz}' # Assume this is read from a file my_router.add_api_route(read_from_file, endpoint=foo) But it throws this error: {"detail":[{"loc":["query","kwargs"],"msg":"field required","type":"value_error.missing"}]} FastAPI tries to find actual argument xyz in foo signature which is not there. Is there any way in FastAPI to achieve this? Or even any solution to accept a path like /foo/... whatever .../? | This will generate a function with a new signature (I assume every parameter is a string): from fastapi import APIRouter import re import inspect my_router = APIRouter() def generate_function_signature(route_path: str): args = {arg: str for arg in re.findall(r'\{(.*?)\}', route_path)} def new_fn(**kwargs): return {"Result": kwargs['xyz']} params = [ inspect.Parameter( param, inspect.Parameter.POSITIONAL_OR_KEYWORD, annotation=type_ ) for param, type_ in args.items() ] new_fn.__signature__ = inspect.Signature(params) new_fn.__annotations__ = args return new_fn read_from_file = '/foo/{xyz}' # Assume this is read from a file my_router.add_api_route( read_from_file, endpoint=generate_function_signature(read_from_file) ) However I am sure there is a better way of doing whatever you are trying to do, but I would need to understand your problem first | 4 | 3 |
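A quick way to verify that such a dynamically generated route behaves as intended is FastAPI's TestClient. This assumes the `my_router` and `generate_function_signature` definitions from the answer above are available in scope; `/foo/hello` is just an example value:

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()
app.include_router(my_router)  # my_router built with generate_function_signature above

client = TestClient(app)
response = client.get("/foo/hello")
print(response.json())  # expected: {"Result": "hello"}
```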
73,247,204 | 2022-8-5 | https://stackoverflow.com/questions/73247204/black-not-respecting-extend-exclude-in-pyproject-toml | In VSCode, with Python 3.9 and black==22.6.0, I have a project structure like: --- root ------src ---------module0.py ---------module1.py ------tests ---------test_folder0 ------------test_file0.py ------------test_file1.py ---------test_folder1 ---------etc. In pyproject.toml I can't get the extend-exlude part to actually exclude my test files. I've tried multiple different ways, both for the entire tests folder as well as for the test_whatever.py files but nothing seems to work, even though my various attempts have been validated by https://regex101.com/. The simplest example: [tool.black] line-length = 200 target-version = ['py39'] include = '\.pyi?$' extend-exclude = '''.*test_.*''' Either my regex is wrong (or black requires some modifications) or VSCode is ignoring my project's configuration or idk. | Maintainer of Black here :wave: OK, so I actually missed a few points in my comments. To address the main question, this is 100% expected behaviour. Your regex is fine. The thing is that when you ask VSCode to format your file on save, it calls Black passing the filepath to your current file (you just saved) directly. If you open the "Output" tab in the bottom panel and then save a Python file, you'll notice something like this: ./venv/bin/python -m black --safe --diff --quiet ./tests/data/nothing-changed.py --include, --exclude, and --extend-exclude only affect files and directories that are discovered by Black itself. You might be wondering, huh, Black can look for files to format? Yes, it does when you run black . or give Black any other directory. The flipside of this is that these options do nothing to files given as an argument to Black. If you want to keep Format on Save enabled, your only recourse is to use --force-exclude which is similar to --extend-exclude but it's always enforced. You can either configure --force-exclude in pyproject.toml or via VSCode's Black arguments setting (preferably for the current workspace only). The difference between putting it in pyproject.toml and configuring VSCode to pass extra options to Black is well, when it's applied. If it's in pyproject.toml it will always be enforced, even when you're not using VSCode and instead are using a bash shell or whatever. This can be useful if you're using pre-commit (which passes files to Black directly just like VSCode) or similar, but can be annoying otherwise. (if you do choose to force exclude project-wide via pyproject.toml, you can format force-excluded files by either piping it in or temporarily clearing --force-exclude on the CLI, e.g. black --force-exclude='' file_you_want_to_format_even_though_it_is_force_excluded.py) | 6 | 16 |
73,293,535 | 2022-8-9 | https://stackoverflow.com/questions/73293535/install-newer-version-of-sqlite3-on-aws-lambda-for-use-with-python | I have a Python script running in a Docker container on AWS Lambda. I'm using the recommended AWS image (public.ecr.aws/lambda/python:3.9), which comes with SQLite version 3.7.17 (from 2013!). When I test the container locally on my M1 Mac, I see this: $ docker run --env-file .env --entrypoint bash -ti my-image bash-4.2# uname -a Linux e9ed14d35cbe 5.10.104-linuxkit #1 SMP PREEMPT Thu Mar 17 17:05:54 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux bash-4.2# sqlite3 --version 3.7.17 2013-05-20 00:56:22 118a3b35693b134d56ebd780123b7fd6f1497668 However, I use newer SQLite features, so I need to find a way to use a newer version of the library. The most straightforward solution would be to install a binary package as suggested in this answer. The docs say it should be as simple as installing using pip. Unfortunately, when I attempt to use this approach inside the Docker container, I get this: bash-4.2# pip3 install pysqlite3-binary ERROR: Could not find a version that satisfies the requirement pysqlite3-binary (from versions: none) ERROR: No matching distribution found for pysqlite3-binary And I get the same error when I attempt to install it outside the container using pipenv (which is what I'm actually using for package management): π 01:08:24 β― pipenv install pysqlite3-binary Installing pysqlite3-binary... Error: An error occurred while installing pysqlite3-binary! Error text: ERROR: Could not find a version that satisfies the requirement pysqlite3-binary (from versions: none) ERROR: No matching distribution found for pysqlite3-binary β Installation Failed Am I doing something wrong? And if not, how can I get a recent version of SQLite which Python can use in this container? Do I really need to use a separate build stage in the Dockerfile as suggested here and copy the rpm components into place as laid out here? That feels like a lot of work for something that many people presumably need to do all the time. Update: I tried the rpm approach inside the container using version 3.26 from EPEL8 (IIUC) and it failed with a bunch of dependency errors like this: bash-4.2# curl --output-dir /tmp -sO https://vault.centos.org/centos/8/BaseOS/aarch64/os/Packages/sqlite-3.26.0-15.el8.aarch64.rpm bash-4.2# yum localinstall /tmp/sqlite-3.26.0-15.el8.aarch64.rpm Loaded plugins: ovl Examining /tmp/sqlite-3.26.0-15.el8.aarch64.rpm: sqlite-3.26.0-15.el8.aarch64 # etc. --> Finished Dependency Resolution Error: Package: sqlite-3.26.0-15.el8.aarch64 (/sqlite-3.26.0-15.el8.aarch64) Requires: libc.so.6(GLIBC_2.28)(64bit) # Plus 6 other package dependency errors Error: Package: nss-softokn-3.67.0-3.amzn2.0.1.aarch64 (@amzn2-core) Requires: libsqlite3.so.0()(64bit) Removing: sqlite-3.7.17-8.amzn2.1.1.aarch64 (@amzn2-core) libsqlite3.so.0()(64bit) Updated By: sqlite-3.26.0-15.el8.aarch64 (/sqlite-3.26.0-15.el8.aarch64) Not found Obsoleted By: sqlite-3.26.0-15.el8.aarch64 (/sqlite-3.26.0-15.el8.aarch64) Not found Available: sqlite-3.7.17-8.amzn2.0.2.aarch64 (amzn2-core) libsqlite3.so.0()(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest When I try --skip-broken, it just skips installing the 3.26 package altogether. Update 2: I've tried downloading the Python 3.9 wheel from pysqlite3-binary manually. However, it looks like that project only produces wheels for x86_64, not the aarch64 platform which Lambda uses. 
(This is not correct, see answer.) So presumably that's why pip is not finding it. | The problem was that I was running Docker locally to do my testing, on an M1 Mac. Hence the aarch64 architecture. Lambda does allow you to use ARM, but thankfully it still defaults to x86_64. I confirmed that my Lambda function was running x86_64, which is what the binary wheel uses, so that's good: So I needed to do three things: Change my Pipfile to conditionally install the binary package only on x86_64: pysqlite3-binary = { version = "*", platform_machine = "== 'x86_64'" } Tweak the sqlite import, as described in the original answer: try: import pysqlite3 as sqlite3 except ModuleNotFoundError: import sqlite3 # for local testing because pysqlite3-binary couldn't be installed on macos print(f"{sqlite3.sqlite_version=}") Set my Docker container to launch in x86 emulation mode locally. $ DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build -t my-image . $ DOCKER_DEFAULT_PLATFORM=linux/amd64 docker run -ti my-image Et, voilΓ ! sqlite3.sqlite_version='3.39.2' | 5 | 1 |
73,279,086 | 2022-8-8 | https://stackoverflow.com/questions/73279086/converting-32-bit-tiff-to-8-bit-tiff-while-retaining-metadata-and-tags-in-python | I would like to convert several TIFF files with 32-bit pixel depth into 8-bit pixel depth TIFFs while retaining metadata and TIFF tags. The 32-bit TIFFs are four-dimensional ImageJ-hyperstacks with TZYX axes (i.e. time, z-depth, y-coordinate, x-coordinate) and values in the range of [0, 1]. I can convert to 8-bit and copy over the metadata (using a very small sample image created in ImageJ): import numpy as np import tifffile infile32 = "test.tif" with tifffile.TiffFile(infile32) as tif: imagej_metadata = tif.imagej_metadata a = tifffile.imread(infile32) print(a.shape) a = np.round(255 * a) a = a.astype(np.uint8) tifffile.imwrite("test_py-8bit.tif", a, imagej=True, metadata = imagej_metadata) >>> (4, 3, 10, 11) However, the pixel resolution (how many micrometers are in 1 pixel) is wrong, the "z" axis (a.shape[1]) is wrongly recognized as color channel, and the "time" axis (a.shape[0]) is wrongly recognized as z. If I do this process manually in ImageJ, this problem does not occur, so I suspect the TIFF tags are necessary. I'd like a programmatic way to do it, so I can run the script on a cluster over hundreds of files. Looking at the documentation of tifffile, I know it's possible to also extract tags: with tifffile.TiffFile(infile32) as tif: for page in tif.pages: for tag in page.tags: tag_name, tag_value = tag.name, tag.value But how can I pass these tags to tifffile.imwrite? | Copy the resolution and resolutionunit properties and add the axes order of the image array to the imagej_metadata dict: import numpy import tifffile with tifffile.TiffFile('imagej_float32.tif') as tif: data = tif.asarray() imagej_metadata = tif.imagej_metadata imagej_metadata['axes'] = tif.series[0].axes resolution = tif.pages[0].resolution resolutionunit = tif.pages[0].resolutionunit del imagej_metadata['hyperstack'] imagej_metadata['min'] = 0 imagej_metadata['max'] = 255 tifffile.imwrite( 'imagej_uint8.tif', numpy.round(255 * data).astype(numpy.uint8), imagej=True, resolution=resolution, resolutionunit=resolutionunit, metadata=imagej_metadata, ) | 5 | 5 |
73,251,559 | 2022-8-5 | https://stackoverflow.com/questions/73251559/x-ref-the-python-standard-library-with-intersphinx-while-omitting-the-module-nam | EDIT: Other answers than the one I provided are welcome! Consider the following function: from pathlib import Path from typing import Union def func(path: Union[str, Path]) -> None: """My super function. Parameters ---------- path : str | Path path to a super file. """ pass When documenting with sphinx, I would like to cross-ref both str and Path with intersphinx. But obviously, it does not work for the latter since it is referenced as pathlib.Path in the objects.inv file. Is there a way to tell intersphinx/sphinx that Path is from the pathlib module? Without resorting to: path : str | `pathlib.Path` or path : str | `~pathlib.Path` which does not render nicely in a python interpreter, e.g. IPython. | numpydoc can do this through x-ref aliases: https://numpydoc.readthedocs.io/en/latest/ In the configuration conf.py: numpydoc_xref_param_type = True numpydoc_xref_aliases = { "Path": "pathlib.Path", } It might try to match other words from the parameters types of the Parameters, Other Parameters, Returns and Yields sections. They can be ignored by adding the following to the configuration conf.py: numpydoc_xref_ignore = { "of", "shape", } numpydoc also offers other useful tools, such as docsting validation, which can be configured in conf.py with the keys described here: https://numpydoc.readthedocs.io/en/latest/install.html#configuration | 4 | 0 |
73,238,617 | 2022-8-4 | https://stackoverflow.com/questions/73238617/passing-argument-key-pair-to-vscode-python-debugger-separated-with-an-equal-sign | For management of my config files, I'm using Hydra which requires passing additional arguments using a plus and then an equal sign between the argument and its value, e.g. python evaluate.py '+model_path="logs/fc/version_1/checkpoints/epoch=1-step=2.ckpt"' Above I'm also using quotes to escape the equal signs in the value. I want to pass this in vscode to launch.json to the args field; however, I don't know how to do it properly because typically the argument and value parts are separated by a space and not an equal sign as for Hydra. So the following doesn't work: { "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "args" : ["+model_path", "logs/fc/version_1/checkpoints/epoch=1-step=2.ckpt"] } ] } How should I change args to get it right? | This may not be the most elegant solution but it works if we pass everything as a single argument (no comma inbetween) like this: "args" : ["+model_path='logs/fc/version_7/checkpoints/epoch=19-step=3700.ckpt'"] | 4 | 1 |
73,242,764 | 2022-8-4 | https://stackoverflow.com/questions/73242764/how-to-efficiently-calculate-membership-counts-by-month-and-group | I have to calculate in Python the number of unique active members by year, month, and group for a large dataset (N ~ 30M). Membership always starts at the beginning of the month and ends at the end of the month. Here is a very small subset of the data. print(df.head(6)) member_id type start_date end_date 1 10 A 2021-12-01 2022-05-31 2 22 B 2022-01-01 2022-07-31 3 17 A 2022-01-01 2022-06-30 4 57 A 2022-02-02 2022-02-28 5 41 B 2022-02-02 2022-04-30 My current solution is inefficient as it relies on a for loop: import pandas as pd date_list = pd.date_range( start=min(df.start_date), end=max(df.end_date), freq='MS' ) members = pd.DataFrame() for d in date_list: df['date_filter'] = ( (d >= df.start_date) & (d <= df.end_date) ) grouped_members = ( df .loc[df.date_filter] .groupby(by='type', as_index=False) .member_id .nunique() ) member_counts = pd.DataFrame( data={'year': d.year, 'month': d.month} index=[0] ) member_counts = member_counts.merge( right=grouped_members, how='cross' ) members = pd.concat[members, member_counts] members = members.reset_index(drop=True) It produces the following: print(members) year month type member_id 0 2021 12 A 1 1 2021 12 B 0 2 2022 1 A 3 3 2022 1 B 1 4 2022 2 A 3 5 2022 2 B 2 6 2022 3 A 2 7 2022 3 B 2 8 2022 4 A 2 9 2022 4 B 2 10 2022 5 A 2 11 2022 5 B 1 12 2022 6 A 1 13 2022 6 B 1 14 2022 7 A 0 15 2022 7 B 1 I'm looking for a completely vectorized solution to reduce computational time. | Updated answer that avoids melt. Maybe faster? Uses the same idea as before where we don't actually care about member ids, we are just keeping track of start/end counts #Create multiindexed series for reindexing later months = pd.date_range( start=df.start_date.min(), end=df.end_date.max(), freq='MS', ).to_period('M') ind = pd.MultiIndex.from_product([df.type.unique(),months],names=['type','month']) #push each end date to the next month df['end_date'] += pd.DateOffset(1) #Convert the dates to yyyy-mm df['start_date'] = df.start_date.dt.to_period('M') df['end_date'] = df.end_date.dt.to_period('M') #Get cumsum counts per type/month of start and ends gb_counts = ( df.groupby('type').agg( start = ('start_date','value_counts'), end = ('end_date','value_counts'), ) .reindex(ind) .fillna(0) .groupby('type') .cumsum() .astype(int) ) counts = (gb_counts.start-gb_counts.end).unstack() counts ORIGINAL Updated answer than works unless the same member_id/group has overlapping date ranges (in which case it double-counts) The idea is to keep track of when the number of users changes per group instead of exploding out all months per user. 
I think this should be very fast and I'm curious how it performs Output Code (looks long but is mostly comments) import pandas as pd import itertools #Load example data import io #just for reading in your example table df = pd.read_csv( io.StringIO(""" 0 member_id type start_date end_date 1 10 A 2021-12-01 2022-05-31 2 22 B 2022-01-01 2022-07-31 3 17 A 2022-01-01 2022-06-30 4 57 A 2022-02-02 2022-02-28 5 41 B 2022-02-02 2022-04-30 """), delim_whitespace=True, index_col=0, parse_dates=['start_date','end_date'], ).reset_index(drop=True) #Create categorical index for reindexing and ffill months = pd.date_range( start=df.start_date.min(), end=df.end_date.max(), freq='MS', ).to_period('M') cat_ind = pd.Categorical(itertools.product(df.type.unique(),months)) #push each end date to the next month df['end_date'] += pd.DateOffset(1) #Convert the dates to yyyy-mm df['start_date'] = df.start_date.dt.to_period('M') df['end_date'] = df.end_date.dt.to_period('M') #Melt from: # #member_id | type | start_date | end_date #----------|------|------------|----------- # 10 | A | 2021-12-01 | 2022-05-31 # ... # #to # # type | active_users | date #---------------------------- # A | start_date | 2021-12-01 # A | end_date | 2022-05-31 # ... df = df.melt( id_vars='type', value_vars=['start_date','end_date'], var_name='active_users', value_name='date', ).sort_values('date') #Replace var column with +1/-1 for start/end date rows # # type | active_users | date #---------------------------- # A | 1 | 2021-12-01 # A | -1 | 2022-05-31 # ... df['active_users'] = df.active_users.replace({'start_date':1,'end_date':-1}) #Sum within each type/date then cumsum the number of active users df = df.groupby(['type','date']).sum().cumsum() #Reindex to ffill missing dates df = df.reindex(cat_ind).ffill().astype(int) df.unstack() | 6 | 2 |
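The core trick in the accepted answer — add +1 in the month a membership starts and −1 in the month after it ends, then take a cumulative sum per type — can be condensed into a short runnable sketch on the question's sample data. This is my own simplified variant, not the answer's exact code, and it counts a member as active in every calendar month the membership overlaps:

```python
import pandas as pd

df = pd.DataFrame({
    "member_id": [10, 22, 17, 57, 41],
    "type": ["A", "B", "A", "A", "B"],
    "start_date": pd.to_datetime(["2021-12-01", "2022-01-01", "2022-01-01", "2022-02-02", "2022-02-02"]),
    "end_date": pd.to_datetime(["2022-05-31", "2022-07-31", "2022-06-30", "2022-02-28", "2022-04-30"]),
})

months = pd.period_range(df.start_date.min(), df.end_date.max(), freq="M")

# +1 in the month a membership starts, -1 in the month after it ends.
starts = df.assign(month=df.start_date.dt.to_period("M"), delta=1)
ends = df.assign(month=(df.end_date + pd.DateOffset(days=1)).dt.to_period("M"), delta=-1)
deltas = pd.concat([starts, ends])[["type", "month", "delta"]]

counts = (
    deltas.groupby(["type", "month"])["delta"].sum()
    .unstack("type", fill_value=0)
    .reindex(months, fill_value=0)
    .cumsum()
)
print(counts)  # active member count per month, one column per type
```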
73,280,922 | 2022-8-8 | https://stackoverflow.com/questions/73280922/python-how-to-type-hint-tf-keras-object-in-functions | This example function returns a dictionary of keras tensors: import pandas as pd import tensorflow as tf def create_input_tensors(data: pd.DataFrame) -> Dict[str,tf.keras.engine.keras_tensor.KerasTensor]: """Turns each dataframe column into a keras tensor and returns them as a dict""" tensors = {} for name, column in data.items(): tensors[name] = tf.keras.Input(shape=(1, ), name=name, dtype=float32) return tensors I do not know how to correctly type hint the return value. Running the code snippet yields the following exception: Exception has occurred: AttributeError module 'keras.api._v2.keras' has no attribute 'engine' Googling this exception did not help. Running type(tensors['year']) in the debugger at the end of the function to see what type one of the elements in the return dictionary is (year is one of the columns in data) yields <class 'keras.engine.keras_tensor.KerasTensor'>. I have issues with this specific function, but also generally when trying to type hint functions that handle any kind of keras object. An answer that is applicable to these similar problems would be much appreciated. | This works like a charm for me: import typing from keras.engine.keras_tensor import KerasTensor def f() -> typing.Dict[str, KerasTensor]: return {"a": tf.keras.Input(shape=(1, ),)} f() | 4 | 2 |
73,275,978 | 2022-8-8 | https://stackoverflow.com/questions/73275978/how-do-i-solve-userwarning-dataframe-columns-are-not-unique-some-columns-will | I have a dataframe of 2 columns. I tried converting it into a dictionary using df2.set_index('pay').T.to_dict('list'). As there are duplicated keys, some columns were omitted. Is there any way to resolve this issue or an alternative method? pay score 500 1 700 4 1000 5 700 3 I would like to achieve this dictionary. {'0': [500, 1], '1': [700, 4], '2': [1000, 5], '3': [700, 3]} | IIUC use: d = df2.T.to_dict('list') print(d) {0: [500, 1], 1: [700, 4], 2: [1000, 5], 3: [700, 3]} | 4 | 2 |
73,274,305 | 2022-8-8 | https://stackoverflow.com/questions/73274305/can-i-assign-the-result-of-a-function-on-multiple-lines-python | Let's say I have a function that returns a lot of variables: def func(): return var_a, var_b, var_c, var_d, var_e, var_f, var_g, var_h, var_i, Using this function results in very long lines: var_a, var_b, var_c, var_d, var_e, var_f, var_g, var_h, var_i = func() Ideally, I would like to use the line-continuation character \, e.g. a = var \ + var \ + var \ + var \ + var However, I don't think this is possible with the result of a function (i.e. tuple unpacking). Are there methods to do so? Or should I find another way to return fewer variables? Do you have any other style suggestions? | You can wrap the variables in round brackets: (var_a, var_b, var_c, var_d, var_e, var_f, var_g, var_h, var_i) = func() | 5 | 6 |
73,257,454 | 2022-8-6 | https://stackoverflow.com/questions/73257454/filter-shapely-polygons-by-centroid-clip-or-something-else | I drew this flower of life by buffering points to polygons. I wanted each overlapping region to be its own polygon, so I used union and polygonize on the lines. I have filtered the polygons by area to eliminate sliver polygons, and now I'd like to filter them again and am stuck. I only want to keep the circles that are complete, so the first circle at 0,0 and the first level of surrounding rings (or petals). I want circles like this: I am wondering if I can filter by centroid location, something like: complete_polys = [polygon for polygon in filtered_polys if centroid[i].x < 4] complete_polys = [polygon for polygon in complete_polys_x if centroid[i].x > -4] Obviously this doesn't work, and I don't even know if it is possible. Perhaps this is the wrong approach entirely, and maybe snap() or clip_by_rect() might be better options? Thanks in advance for your insight and help. Here's the code to generate the circles: import matplotlib.pyplot as plt from shapely.geometry import Point, LineString from shapely.ops import unary_union, polygonize from matplotlib.pyplot import cm import numpy as np def plot_coords(coords, color): pts = list(coords) x, y = zip(*pts) # print(color) plt.plot(x,y, color='k', linewidth=1) plt.fill_between(x, y, facecolor=color) def plot_polys(polys, color): for poly, color in zip(polys, color): plot_coords(poly.exterior.coords, color) x = 0 y = 0 h = 1.73205080757 points = [# center Point(x, y), # first ring Point((x + 2), y), Point((x - 2), y), Point((x + 1), (y + h)), Point((x - 1), (y + h)), Point((x + 1), (y - h)), Point((x - 1), (y - h)), # second ring Point((x + 3), h), Point((x - 3), h), Point((x + 3), -h), Point((x - 3), -h), Point((x + 2), (h + h)), Point((x - 2), (h + h)), Point((x + 2), (-h + -h)), Point((x - 2), (-h + -h)), Point((x + 4), y), Point((x - 4), y), Point(x, (h + h)), Point(x, (-h + -h)), #third ring Point((x + 4), (h + h)), Point((x - 4), (h + h)), Point((x + 4), (-h + -h)), Point((x - 4), (-h + -h)), Point((x + 1), (h + h + h)), Point((x - 1), (h + h + h)), Point((x + 1), (-h + -h + -h)), Point((x - 1), (-h + -h + -h)), Point((x + 5), h), Point((x - 5), h), Point((x + 5), -h), Point((x - 5), -h)] # buffer points to create circle polygons circles = [] for point in points: circles.append(point.buffer(2)) # unary_union and polygonize to find overlaps rings = [LineString(list(pol.exterior.coords)) for pol in circles] union = unary_union(rings) result_polys = [geom for geom in polygonize(union)] # remove tiny sliver polygons threshold = 0.01 filtered_polys = [polygon for polygon in result_polys if polygon.area > threshold] print("total polygons = " + str(len(result_polys))) print("filtered polygons = " + str(len(filtered_polys))) colors = cm.viridis(np.linspace(0, 1, len(filtered_polys))) fig = plt.figure() ax = fig.add_subplot() fig.subplots_adjust(top=0.85) plot_polys(filtered_polys, colors) ax.set_aspect('equal') plt.show() | Is this what you wanted? I used the x^2 + y^2 = r^2 circle formula to filter. complete_polys = [polygon for polygon in filtered_polys if (polygon.centroid.x**2 + polygon.centroid.y**2 < 4**2)] plot_polys(complete_polys, colors) | 4 | 3 |
73,273,628 | 2022-8-8 | https://stackoverflow.com/questions/73273628/find-a-element-in-bs4-by-partial-class-name-not-working | I want to find an a element in a soup object by a substring present in its class name. This particular element will always have JobTitle inside the class name, with random preceding and trailing characters, so I need to locate it by its substring of JobTitle. You can see the element here: It's safe to assume there is only 1 a element to find, so using find should work, however my attempts (there have been more than the 2 shown below) have not worked. I've also included the top elements in case it's relevant for location for some reason. I'm on Windows 10, Python 3.10.5, and BS4 4.11.1. I've created a reproducible example below (I thought the regex way would have worked, but I guess not): import re from bs4 import BeautifulSoup # Parse this HTML, getting the only a['href'] in it (line 22) html_to_parse = """ <li> <div class="cardOutline tapItem fs-unmask result job_5ef6bf779263a83c sponsoredJob resultWithShelf sponTapItem desktop vjs-highlight"> <div class="slider_container css-g7s71f eu4oa1w0"> <div class="slider_list css-kyg8or eu4oa1w0"> <div class="slider_item css-kyg8or eu4oa1w0"> <div class="job_seen_beacon"> <div class="fe_logo"> <img alt="CyberCoders logo" class="feLogoImg desktop" src="https://d2q79iu7y748jz.cloudfront.net/s/_squarelogo/256x256/f0b43dcaa7850e2110bc8847ebad087b" /> </div> <table cellpadding="0" cellspacing="0" class="jobCard_mainContent big6_visualChanges" role="presentation"> <tbody> <tr> <td class="resultContent"> <div class="css-1xpvg2o e37uo190"> <h2 class="jobTitle jobTitle-newJob css-bdjp2m eu4oa1w0" tabindex="-1"> <a aria-label="full details of REMOTE Senior Python Developer" class="jcs-JobTitle css-jspxzf eu4oa1w0" data-ci="385558680" data-empn="8690912762161442" data-hide-spinner="true" data-hiring-event="false" data-jk="5ef6bf779263a83c" data-mobtk="1g9u19rmn2ea6000" data-tu="https://jsv3.recruitics.com/partner/a51b8de1-f7bf-11e7-9edd-d951492604d9.gif?client=521&rx_c=&rx_campaign=indeed16&rx_group=110383&rx_source=Indeed&job=KE2-168714218&rx_r=none&rx_ts=20220808T034442Z&rx_pre=1&indeed=sp" href="/pagead/clk?mo=r&ad=-6NYlbfkN0CpFJQzrgRR8WqXWK1qKKEqALWJw739KlKqr2H-MSI4eoBlI4EFrmor2FYZMP3muM35UEpv7D8dnBwRFuIf8XmtgYykaU5Nl3fSsXZ8xXiGdq3dZVwYJYR2-iS1SqyS7j4jGQ4Clod3n72L285Zn7LuBKMjFoBPi4tB5X2mdRnx-UikeGviwDC-ahkoLgSBwNaEmvShQxaFt_IoqJP6OlMtTd7XlgeNdWJKY9Ph9u8n4tcsN_tCjwIc3RJRtS1O7U0xcsVy5Gi1JBR1W7vmqcg5n4WW1R_JnTwQQ8LVnUF3sDzT4IWevccQb289ocL5T4jSfRi7fZ6z14jrR6bKwoffT6ZMypqw4pXgZ0uvKv2v9m3vJu_e5Qit1D77G1lNCk9jWiUHjWcTSYwhhwNoRzjAwd4kvmzeoMJeUG0gbTDrXFf3V2uJQwjZhTul-nbfNeFPRX6vIb4jgiTn4h3JVq-zw0woq3hTrLq1z9Xpocf5lIGs9U7WJnZM-Mh7QugzLk1yM3prCk7tQYRl3aKrDdTsOdbl5Afs1DkatDI7TgQgFrr5Iauhiv7I9Ss-fzPJvezhlYR4hjkkmSSAKr3Esz06bh5GlZKFONpq1I0IG5aejSdS_kJUhnQ1D4Uj4x7X_mBBN-fjQmL_CdyWM1FzNNK0cZwdLjKL-d8UK1xPx3MS-O-WxVGaMq0rn4lyXgOx7op9EHQ2Qdxy9Dbtg6GNYg5qBv0iDURQqi7_MNiEBD-AaEyqMF3riCBJ4wQiVaMjSTiH_DTyBIsYc0UsjRGG4a949oMHZ8yL4mGg57QUvvn5M_urCwCtQTuyWZBzJhWFmdtcPKCn7LpvKTFGQRUUjsr6mMFTQpA0oCYSO7E-w2Kjj0loPccA9hul3tEwQm1Eh58zHI7lJO77kseFQND7Zm9OMz19oN45mvwlEgHBEj4YcENhG6wdB6M5agUoyyPm8fLCTOejStoecXYnYizm2tGFLfqNnV-XtyDZNV_sQKQ2TQ==&xkcb=SoD0-_M3b-KooEWCyR0LbzkdCdPP&p=0&fvj=0&vjs=3" id="sj_5ef6bf779263a83c" role="button" target="_blank"> <span id="jobTitle-5ef6bf779263a83c" title="REMOTE Senior Python Developer">REMOTE Senior Python Developer</span> </a> </h2> </div> </td> </tr> </tbody> </table> </div> </div> </div> </div> </div> </li> """ # Soupify 
it soup = BeautifulSoup(html_to_parse, "html.parser") # Start by making sure "find_all("a")" works all_links = soup.find_all("a") print(all_links) # Good. # Attempt 1 job_url = soup.find('a[class*="JobTitle"]').a['href'] print(job_url) # Nope. # Attempt 2 job_url = soup.find("a", {"class": re.compile("^.*jobTitle.*")}).a['href'] print(job_url) # Nope... | To find an element with partial class name you need to use select, not find. The will give you the <a> tag, the href will be in it job_url = soup.select_one('a[class*="JobTitle"]')['href'] print(job_url) # /pagead/clk?mo=r&ad=-6NYlbfkN0CpFJQzrgRR8WqXWK1qKKEqALWJw739KlKqr2H-MSI4eoBlI4EFrmor2FYZMP3muM35UEpv7D8dnBwRFuIf8XmtgYykaU5Nl3fSsXZ8xXiGdq3dZVwYJYR2-iS1SqyS7j4jGQ4Clod3n72L285Zn7LuBKMjFoBPi4tB5X2mdRnx-UikeGviwDC-ahkoLgSBwNaEmvShQxaFt_IoqJP6OlMtTd7XlgeNdWJKY9Ph9u8n4tcsN_tCjwIc3RJRtS1O7U0xcsVy5Gi1JBR1W7vmqcg5n4WW1R_JnTwQQ8LVnUF3sDzT4IWevccQb289ocL5T4jSfRi7fZ6z14jrR6bKwoffT6ZMypqw4pXgZ0uvKv2v9m3vJu_e5Qit1D77G1lNCk9jWiUHjWcTSYwhhwNoRzjAwd4kvmzeoMJeUG0gbTDrXFf3V2uJQwjZhTul-nbfNeFPRX6vIb4jgiTn4h3JVq-zw0woq3hTrLq1z9Xpocf5lIGs9U7WJnZM-Mh7QugzLk1yM3prCk7tQYRl3aKrDdTsOdbl5Afs1DkatDI7TgQgFrr5Iauhiv7I9Ss-fzPJvezhlYR4hjkkmSSAKr3Esz06bh5GlZKFONpq1I0IG5aejSdS_kJUhnQ1D4Uj4x7X_mBBN-fjQmL_CdyWM1FzNNK0cZwdLjKL-d8UK1xPx3MS-O-WxVGaMq0rn4lyXgOx7op9EHQ2Qdxy9Dbtg6GNYg5qBv0iDURQqi7_MNiEBD-AaEyqMF3riCBJ4wQiVaMjSTiH_DTyBIsYc0UsjRGG4a949oMHZ8yL4mGg57QUvvn5M_urCwCtQTuyWZBzJhWFmdtcPKCn7LpvKTFGQRUUjsr6mMFTQpA0oCYSO7E-w2Kjj0loPccA9hul3tEwQm1Eh58zHI7lJO77kseFQND7Zm9OMz19oN45mvwlEgHBEj4YcENhG6wdB6M5agUoyyPm8fLCTOejStoecXYnYizm2tGFLfqNnV-XtyDZNV_sQKQ2TQ==&xkcb=SoD0-_M3b-KooEWCyR0LbzkdCdPP&p=0&fvj=0&vjs=3 | 4 | 2 |
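For completeness, the same partial-class lookup also works with find() if a regular expression is passed as class_, since BeautifulSoup matches it against each individual class token (here the token jcs-JobTitle). A hedged alternative to the select_one() call above:

```python
import re

# Matches the anchor whose class list contains a token containing "JobTitle".
link = soup.find("a", class_=re.compile("JobTitle"))
print(link["href"])
```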
73,271,056 | 2022-8-7 | https://stackoverflow.com/questions/73271056/hydra-install-on-python-3-10-fails-due-to-vs-build-tools | I'm trying to install Hydra 2.5 on a Windows 10 system. I have Visual Studio Build Tools 2022 installed with the desktop C++ development option. When I use pip I get the error attached below. I've tried it with both python 3.10 and 3.9. I've tried a fresh conda environment. I've also tried to install Mingw-w64 to see if that might help but it didn't help. Any suggestions would be appreciated. Building wheels for collected packages: hfcnn, hydra Building wheel for hfcnn (setup.py) ... done Created wheel for hfcnn: filename=hfcnn-0.1.0-py2.py3-none-any.whl size=42563 sha256=6eddd284ff183a321cd329928f9d3c242253072b854502a6df97de9b413edc69 Stored in directory: C:\Users\natep\AppData\Local\Temp\pip-ephem-wheel-cache-w1fs1loj\wheels\24\17\50\c13d5e23193f95d3a4a29906052d1bfec09abb75cf58968c32 Building wheel for hydra (setup.py) ... error error: subprocess-exited-with-error Γ python setup.py bdist_wheel did not run successfully. β exit code: 1 β°β> [39 lines of output] running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.10 copying src\hydra.py -> build\lib.win-amd64-3.10 running build_ext building '_hydra' extension creating build\temp.win-amd64-3.10 creating build\temp.win-amd64-3.10\Release creating build\temp.win-amd64-3.10\Release\src "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.32.31326\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\natep\AppData\Local\Temp\pip-install-k0ndin1s\hydra_a0d7e613ec4f4a2091c58d266385d27f\src -IC:\Users\natep\anaconda3\envs\hfcnn\include -IC:\Users\natep\anaconda3\envs\hfcnn\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.32.31326\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\cppwinrt" /Tcsrc/MurmurHash3.c /Fobuild\temp.win-amd64-3.10\Release\src/MurmurHash3.obj -std=gnu99 -O2 -D_LARGEFILE64_SOURCE cl : Command line warning D9002 : ignoring unknown option '-std=gnu99' MurmurHash3.c "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.32.31326\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\natep\AppData\Local\Temp\pip-install-k0ndin1s\hydra_a0d7e613ec4f4a2091c58d266385d27f\src -IC:\Users\natep\anaconda3\envs\hfcnn\include -IC:\Users\natep\anaconda3\envs\hfcnn\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.32.31326\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\cppwinrt" /Tcsrc/_hydra.c /Fobuild\temp.win-amd64-3.10\Release\src/_hydra.obj -std=gnu99 -O2 -D_LARGEFILE64_SOURCE cl : Command line warning D9002 : ignoring unknown option '-std=gnu99' _hydra.c 
src/_hydra.c(1621): warning C4244: 'function': conversion from 'Py_ssize_t' to 'int', possible loss of data src/_hydra.c(2668): warning C4267: '=': conversion from 'size_t' to 'char', possible loss of data src/_hydra.c(3377): warning C4018: '<': signed/unsigned mismatch src/_hydra.c(6964): warning C4244: 'function': conversion from 'unsigned __int64' to 'unsigned long', possible loss of data src/_hydra.c(7072): warning C4244: 'function': conversion from 'Py_ssize_t' to 'int', possible loss of data src/_hydra.c(7103): warning C4244: 'function': conversion from 'Py_ssize_t' to 'int', possible loss of data src/_hydra.c(7316): warning C4267: 'function': conversion from 'size_t' to 'unsigned int', possible loss of data src/_hydra.c(7445): error C2105: '++' needs l-value src/_hydra.c(7447): error C2105: '--' needs l-value src/_hydra.c(8530): error C2039: 'tp_print': is not a member of '_typeobject' C:\Users\natep\anaconda3\envs\hfcnn\include\cpython/object.h(191): note: see declaration of '_typeobject' src/_hydra.c(8535): error C2039: 'tp_print': is not a member of '_typeobject' C:\Users\natep\anaconda3\envs\hfcnn\include\cpython/object.h(191): note: see declaration of '_typeobject' src/_hydra.c(8539): error C2039: 'tp_print': is not a member of '_typeobject' C:\Users\natep\anaconda3\envs\hfcnn\include\cpython/object.h(191): note: see declaration of '_typeobject' src/_hydra.c(8551): error C2039: 'tp_print': is not a member of '_typeobject' C:\Users\natep\anaconda3\envs\hfcnn\include\cpython/object.h(191): note: see declaration of '_typeobject' src/_hydra.c(9924): warning C4996: '_PyUnicode_get_wstr_length': deprecated in 3.3 src/_hydra.c(9940): warning C4996: '_PyUnicode_get_wstr_length': deprecated in 3.3 src/_hydra.c(11521): warning C4996: 'PyCFunction_Call': deprecated in 3.9 src/_hydra.c(11586): warning C4996: 'PyCFunction_Call': deprecated in 3.9 error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.32.31326\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. 
ERROR: Failed building wheel for hydra Running setup.py clean for hydra Successfully built hfcnn Failed to build hydra Installing collected packages: hydra, funcy, dictdiffer, commonmark, billiard, appdirs, zipp, zc.lockfile, xmltodict, websocket-client, waitress, vine, urllib3, typing-extensions, torchinfo, tomlkit, toml, tensorboard-data-server, tabulate, sqlparse, sniffio, smmap, six, shtab, shortuuid, ruamel.yaml.clib, rsa, pyyaml, python-slugify, pypiwin32, pyparsing, pyjwt, pygments, pycparser, pyasn1-modules, psutil, protobuf, prompt-toolkit, prometheus-client, pillow, pathspec, oauthlib, numpy, networkx, multidict, MarkupSafe, markdown, kiwisolver, itsdangerous, idna, h11, greenlet, future, ftfy, fsspec, frozenlist, fonttools, entrypoints, dvc-render, dpath, distro, diskcache, dill, cycler, colorama, cloudpickle, charset-normalizer, cachetools, attrs, atpublic, async-timeout, absl-py, yarl, werkzeug, tqdm, torch, sqlalchemy, scipy, ruamel.yaml, rich, requests, querystring-parser, python-dateutil, pydot, packaging, Mako, Jinja2, importlib-metadata, grpcio, grandalf, google-auth, gitdb, flufl.lock, flatten-dict, dulwich, configobj, click, cffi, anyio, amqp, aiosignal, torchvision, requests-oauthlib, python-benedict, pygit2, pandas, matplotlib, kombu, hyperopt, httpcore, gitpython, Flask, dvclive, dvc-objects, docker, databricks-cli, cryptography, click-repl, click-plugins, click-didyoumean, alembic, aiohttp, rna, prometheus-flask-exporter, pandarallel, httpx, google-auth-oauthlib, dvc-data, celery, asyncssh, aiohttp-retry, webdav4, tensorboard, scmrepo, mlflow, dvc-task, dvc, hfcnn Running setup.py install for hydra ... error error: subprocess-exited-with-error Γ Running setup.py install for hydra did not run successfully. β exit code: 1 β°β> [41 lines of output] running install C:\Users\natep\anaconda3\envs\hfcnn\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 
warnings.warn( running build running build_py creating build creating build\lib.win-amd64-3.10 copying src\hydra.py -> build\lib.win-amd64-3.10 running build_ext building '_hydra' extension creating build\temp.win-amd64-3.10 creating build\temp.win-amd64-3.10\Release creating build\temp.win-amd64-3.10\Release\src "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.32.31326\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\natep\AppData\Local\Temp\pip-install-k0ndin1s\hydra_a0d7e613ec4f4a2091c58d266385d27f\src -IC:\Users\natep\anaconda3\envs\hfcnn\include -IC:\Users\natep\anaconda3\envs\hfcnn\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.32.31326\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\cppwinrt" /Tcsrc/MurmurHash3.c /Fobuild\temp.win-amd64-3.10\Release\src/MurmurHash3.obj -std=gnu99 -O2 -D_LARGEFILE64_SOURCE cl : Command line warning D9002 : ignoring unknown option '-std=gnu99' MurmurHash3.c "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.32.31326\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\natep\AppData\Local\Temp\pip-install-k0ndin1s\hydra_a0d7e613ec4f4a2091c58d266385d27f\src -IC:\Users\natep\anaconda3\envs\hfcnn\include -IC:\Users\natep\anaconda3\envs\hfcnn\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.32.31326\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\cppwinrt" /Tcsrc/_hydra.c /Fobuild\temp.win-amd64-3.10\Release\src/_hydra.obj -std=gnu99 -O2 -D_LARGEFILE64_SOURCE cl : Command line warning D9002 : ignoring unknown option '-std=gnu99' _hydra.c src/_hydra.c(1621): warning C4244: 'function': conversion from 'Py_ssize_t' to 'int', possible loss of data src/_hydra.c(2668): warning C4267: '=': conversion from 'size_t' to 'char', possible loss of data src/_hydra.c(3377): warning C4018: '<': signed/unsigned mismatch src/_hydra.c(6964): warning C4244: 'function': conversion from 'unsigned __int64' to 'unsigned long', possible loss of data src/_hydra.c(7072): warning C4244: 'function': conversion from 'Py_ssize_t' to 'int', possible loss of data src/_hydra.c(7103): warning C4244: 'function': conversion from 'Py_ssize_t' to 'int', possible loss of data src/_hydra.c(7316): warning C4267: 'function': conversion from 'size_t' to 'unsigned int', possible loss of data src/_hydra.c(7445): error C2105: '++' needs l-value src/_hydra.c(7447): error C2105: '--' needs l-value src/_hydra.c(8530): error C2039: 'tp_print': is not a member of '_typeobject' C:\Users\natep\anaconda3\envs\hfcnn\include\cpython/object.h(191): note: see declaration of '_typeobject' src/_hydra.c(8535): error C2039: 'tp_print': is not a member of '_typeobject' 
C:\Users\natep\anaconda3\envs\hfcnn\include\cpython/object.h(191): note: see declaration of '_typeobject' src/_hydra.c(8539): error C2039: 'tp_print': is not a member of '_typeobject' C:\Users\natep\anaconda3\envs\hfcnn\include\cpython/object.h(191): note: see declaration of '_typeobject' src/_hydra.c(8551): error C2039: 'tp_print': is not a member of '_typeobject' C:\Users\natep\anaconda3\envs\hfcnn\include\cpython/object.h(191): note: see declaration of '_typeobject' src/_hydra.c(9924): warning C4996: '_PyUnicode_get_wstr_length': deprecated in 3.3 src/_hydra.c(9940): warning C4996: '_PyUnicode_get_wstr_length': deprecated in 3.3 src/_hydra.c(11521): warning C4996: 'PyCFunction_Call': deprecated in 3.9 src/_hydra.c(11586): warning C4996: 'PyCFunction_Call': deprecated in 3.9 error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.32.31326\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> hydra note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. | The package name is hydra-core :). | 12 | 42 |
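To make the one-line answer above concrete, here is a minimal, hedged check that the right package is installed (assuming a standard pip environment; hydra-core installs from a wheel, so the MSVC C-extension build that failed for the unrelated `hydra` package is not needed):

# pip install hydra-core   (not "hydra", which is a different, C-extension package)
import hydra

# confirms the import target is hydra-core, e.g. "1.x"
print(hydra.__version__)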
73,270,890 | 2022-8-7 | https://stackoverflow.com/questions/73270890/how-do-i-convert-a-torch-tensor-to-an-image-to-be-returned-by-fastapi | I have a torch tensor which I need to convert to a byte object so that I can pass it to starlette's StreamingResponse which will return a reconstructed image from the byte object. I am trying to convert the tensor and return it like so: def some_unimportant_function(params): return_image = io.BytesIO() torch.save(some_img, return_image) return_image.seek(0) return_img = return_image.read() return StreamingResponse(content=return_img, media_type="image/jpeg") The below works fine on regular byte objects and my API returns the reconstructed image: def some_unimportant_function(params): image = Image.open(io.BytesIO(some_image)) return_image = io.BytesIO() image.save(return_image, "JPEG") return_image.seek(0) return StreamingResponse(content=return_image, media_type="image/jpeg") Using PIL library for this what am I doing wrong here? | Converting PyTorch Tensor to the PIL Image object using torchvision.transforms.ToPILImage() module and then treating it as PIL Image as your second function would work. Here is an example. def some_unimportant_function(params): tensor = # read the tensor from disk or whatever image = torchvision.transforms.ToPILImage()(tensor.unsqueeze(0)) return_image = io.BytesIO() image.save(return_image, "JPEG") return_image.seek(0) return StreamingResponse(content=return_image, media_type="image/jpeg") | 7 | 5 |
73,271,404 | 2022-8-7 | https://stackoverflow.com/questions/73271404/how-to-find-the-average-of-the-differences-between-all-the-numbers-of-a-python-l | I have a python list like this, arr = [110, 60, 30, 10, 5] What I need to do is actually find the difference of every number with all the other numbers and then find the average of all those differences. So, for this case, it would first find the difference between 110 and all the remaining elements, i.e. 60, 30, 10, 5, and then it will find the difference of 60 with the remaining elements, i.e. 30, 10, 5, and so on. After that, it will compute the average of all these differences. Now, this can easily be done with two for loops, but in O(n^2) time complexity and with a little bit of "messy" code. I was wondering if there was a faster and more efficient way of doing this same thing? | I'll just give the formula first: n = len(arr) out = np.sum(arr * np.arange(n-1, -n, -2) ) / (n*(n-1) / 2) # 52 Explanation: You want to find the mean of a[0] - a[1], a[0] - a[2], ..., a[0] - a[n-1], a[1] - a[2], ..., a[1] - a[n-1], and so on. In that sum, `a[0]` occurs `n-1` times with a `+` sign and `0` times with a `-` sign, so its net weight is `n-1`; `a[1]` occurs `n-2` times with `+` and `1` time with `-`, so its net weight is `n-3`; and so on. Those net weights are exactly `np.arange(n-1, -n, -2)`, and dividing by the number of pairs, `n*(n-1)/2`, gives the average. | 20 | 36 |
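As a quick sanity check of the weight argument above, a small hedged sketch using the question's sample list compares the O(n) formula against the naive O(n^2) double loop:

import numpy as np

arr = np.array([110, 60, 30, 10, 5])
n = len(arr)

# O(n) formula: a[i] ends up with net weight (n - 1 - 2*i)
fast = np.sum(arr * np.arange(n - 1, -n, -2)) / (n * (n - 1) / 2)

# naive O(n^2) reference: average of a[i] - a[j] over all pairs i < j
slow = np.mean([arr[i] - arr[j] for i in range(n) for j in range(i + 1, n)])

print(fast, slow)  # both print 52.0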
73,267,809 | 2022-8-7 | https://stackoverflow.com/questions/73267809/run-playwright-in-interactive-mode-in-python | I was using playwright to scrape pages using Python. I know how to do the same using a script, but I was trying this in an interactive mode. from playwright.sync_api import Playwright, sync_playwright, expect import time def run(playwright: Playwright) -> None: browser = playwright.chromium.launch(headless=False) context = browser.new_context() page = context.new_page() page.goto("https://www.wikipedia.org/") context.close() browser.close() with sync_playwright() as playwright: run(playwright) I tried to do this in interactive mode as: >>> from playwright.sync_api import Playwright, sync_playwright, expect >>> playwright = sync_playwright() >>> browser = playwright.chromium.launch(headless=False) But this gave me an error: Traceback (most recent call last): File "C:\Users\hpoddar\AppData\Local\Programs\Python\Python310\lib\idlelib\run.py", line 578, in runcode exec(code, self.locals) File "<pyshell#2>", line 1, in <module> AttributeError: 'PlaywrightContextManager' object has no attribute 'chromium' | Use the .start() method: >>> from playwright.sync_api import Playwright, sync_playwright, expect >>> playwright = sync_playwright().start() >>> browser = playwright.chromium.launch(headless=False) >>> page = browser.new_page() Alternatively, if you just want an interactive browser, and don't care about an interactive shell, you can also use the wait_for_timeout function instead (only applicable on Page objects) and set the timeout to a high value: from playwright.sync_api import sync_playwright with sync_playwright() as playwright: browser = playwright.chromium.launch(headless=False) page = browser.new_page() page.wait_for_timeout(10000) | 5 | 13 |
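One small addition to the answer above: when driving Playwright interactively via .start(), it can be worth cleaning up explicitly at the end of the session. A hedged sketch (the URL is just an example):

from playwright.sync_api import sync_playwright

playwright = sync_playwright().start()
browser = playwright.chromium.launch(headless=False)
page = browser.new_page()
page.goto("https://www.wikipedia.org/")

# ... interact with `page` from the shell ...

browser.close()
playwright.stop()  # releases the driver started by .start()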
73,270,707 | 2022-8-7 | https://stackoverflow.com/questions/73270707/functools-singledispatchmethod-with-own-class-as-arg-type | I would like to use functools.singledispatchmethod to overload the binary arithmetic operator methods of a class called Polynomial. The problem I have is that I can't find a way to register method calls where other is a Polynomial. Perhaps better explained with a simple example: from __future__ import annotations from functools import singledispatchmethod class Polynomial: @singledispatchmethod def __add__(self, other): return NotImplemented @__add__.register def _(self, other: Polynomial): return NotImplemented The code above raises a NameError: NameError: name 'Polynomial' is not defined This NameError is not caused by the annotation but is raised inside functools. Also, annotating using a string 'Polynomial' instead without the need of __future__.annotations doesn't work either. This behavior is not documented in the documentation for functools.singledispatchmethod. I can make it work by making Polynomial inherit from another class, and then using that class in the type annotation: from functools import singledispatchmethod class _Polynomial: pass class Polynomial(_Polynomial): @singledispatchmethod def __add__(self, other): return NotImplemented @__add__.register def _(self, other: _Polynomial): return NotImplemented ..but I am not overly fond of this solution. How can I make this work without needing the useless intermediate class? | The alternative is to add the methods after the class definition: from __future__ import annotations from functools import singledispatchmethod class Polynomial: pass @singledispatchmethod def __add__(self, other): return NotImplemented @__add__.register def _(self, other: Polynomial): return NotImplemented Polynomial.__add__ = __add__ You can't actually reference the class during a class definition, because the class doesn't exist yet. Using the from __future__ import annotations makes annotations get saved as strings, but as soon as the decorator tries to evaluate those strings for their values, the same problem occurs, so it doesn't get around it (because it isn't a mere annotation). 
This is evident in the stack trace (abbreviated): ~/miniconda3/envs/py39/lib/python3.9/functools.py in register(cls, func) 858 # only import typing if annotation parsing is necessary 859 from typing import get_type_hints --> 860 argname, cls = next(iter(get_type_hints(func).items())) 861 if not isinstance(cls, type): 862 raise TypeError( ~/miniconda3/envs/py39/lib/python3.9/typing.py in get_type_hints(obj, globalns, localns, include_extras) 1447 if isinstance(value, str): 1448 value = ForwardRef(value) -> 1449 value = _eval_type(value, globalns, localns) 1450 if name in defaults and defaults[name] is None: 1451 value = Optional[value] ~/miniconda3/envs/py39/lib/python3.9/typing.py in _eval_type(t, globalns, localns, recursive_guard) 281 """ 282 if isinstance(t, ForwardRef): --> 283 return t._evaluate(globalns, localns, recursive_guard) 284 if isinstance(t, (_GenericAlias, GenericAlias)): 285 ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__) ~/miniconda3/envs/py39/lib/python3.9/typing.py in _evaluate(self, globalns, localns, recursive_guard) 537 localns = globalns 538 type_ =_type_check( --> 539 eval(self.__forward_code__, globalns, localns), 540 "Forward references must evaluate to types.", 541 is_argument=self.__forward_is_argument__, So, when the singledispatchmethod decorater typing.get_type_hints, which is essentially eval (with the added feature of taking care to obtain the correct global scope to evaluate the string in, the one where the annotation was made) | 5 | 2 |
73,227,632 | 2022-8-3 | https://stackoverflow.com/questions/73227632/matplotlib-chart-not-animating-pandas-data-issue | I'm experimenting with Matplotlib animated charts currently. Having an issue where, using a public dataset, the data isn't animating. I am pulling data from a public CSV file following some of the guidance from this post (which has to be updated a bit for things like the URL of the data). I've tested my Matplotlib installation, and it displays static test data without issue in a variety of formats. Not sure if this is a problem in Pandas getting the data across or something I've missed with the animation. Code: import matplotlib.animation as ani import matplotlib.pyplot as plt import numpy as np import pandas as pd url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv' df = pd.read_csv(url, delimiter=',', header='infer') df_interest = df.loc[ df['Country/Region'].isin(['United Kingdom', 'US', 'Italy', 'Germany']) & df['Province/State'].isna()] df_interest.rename( index=lambda x: df_interest.at[x, 'Country/Region'], inplace=True) df1 = df_interest.transpose() df1 = df1.drop(['Province/State', 'Country/Region', 'Lat', 'Long']) df1 = df1.loc[(df1 != 0).any(1)] df1.index = pd.to_datetime(df1.index) print(df1) color = ['red', 'green', 'blue', 'orange'] fig = plt.figure() plt.xticks(rotation=45, ha="right", rotation_mode="anchor") # rotate the x-axis values plt.subplots_adjust(bottom=0.2, top=0.9) # ensuring the dates (on the x-axis) fit in the screen plt.ylabel('No of Deaths') plt.xlabel('Dates') def buildmebarchart(i=int): plt.legend(df1.columns) p = plt.plot(df1[:i].index, df1[:i].values) # note it only returns the dataset, up to the point i for i in range(0, 4): p[i].set_color(color[i]) # set the colour of each curve animator = ani.FuncAnimation(fig, buildmebarchart, interval=50) plt.show() Result (a static graph with no data): UPDATE: It appears to be this line of pandas code that might be the culprit: p = plt.plot(df1[:i].index, df1[:i].values) The df1[:i].index and .values are empty when I insert a breakpoint to print them out. | Actually, your code is fine. I can successfully run it as a script without making any changes. Here's my environment: python 3.8.5 pandas 1.1.3 matplotlib 3.3.2 But I suspect, that you are working in Jupyter-Notebook. In this case you have to make some changes. First, set up matplotlib to work interactively in the notebook. You can use ether notebook or nbAgg as a backend (see The builtin backends for more info): %matplotlib notebook Next, deactivate the last line of code: #plt.show() <- comment or delete this line Should work. Take into account that I use Jupyter-Notebook version 6.1.4. If the problem persists, update the description to reflect your environment. P.S. Try ipympl as an alternative backend. | 4 | 3 |
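For readers not working in a notebook, the same FuncAnimation pattern can be checked as a plain script with a synthetic line instead of the COVID data; a minimal, hedged sketch:

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as ani

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlim(0, 2 * np.pi)
ax.set_ylim(-1.1, 1.1)

def update(i):
    # reveal one more point of the sine curve on every frame
    line.set_data(x[:i], np.sin(x[:i]))
    return line,

animator = ani.FuncAnimation(fig, update, frames=len(x), interval=20)
plt.show()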
73,264,498 | 2022-8-7 | https://stackoverflow.com/questions/73264498/how-to-divide-an-array-in-several-sections | I have an array with approximately 12000 length, something like array([0.3, 0.6, 0.3, 0.5, 0.1, 0.9, 0.4...]). Also, I have a column in a dataframe that provides values like 2,3,7,3,2,7.... The length of the column is 48, and the sum of those values is 36. I want to distribute the values, which means the 12000 lengths of array is distributed by specific every value. For example, the first value in that column( = 2) gets its own array of 12000*(2/36) (maybe [0.3, 0.6, 0.3]), and the second value ( = 3) gets its array of 12000*(3/36), and its value continues after the first value(something like [0.5, 0.1, 0.9, 0.4]) and so on. | import pandas as pd import numpy as np # mock some data a = np.random.random(12000) df = pd.DataFrame({'col': np.random.randint(1, 5, 48)}) indices = (len(a) * df.col.to_numpy() / sum(df.col)).cumsum() indices = np.concatenate(([0], indices)).round().astype(int) res = [] for s, e in zip(indices[:-1], indices[1:]): res.append(a[round(s):round(e)]) # some tests target_pcts = df.col.to_numpy() / sum(df.col) realized_pcts = np.array([len(sl) / len(a) for sl in res]) diffs = target_pcts / realized_pcts assert 0.99 < np.min(diffs) and np.max(diffs) < 1.01 assert all(np.concatenate([*res]) == a) | 4 | 2 |
73,255,282 | 2022-8-5 | https://stackoverflow.com/questions/73255282/min-of-given-keys-from-python-defaultdictionary | I got a defaultdict with lists as values and tuples as keys (ddict in the code below). I want to find the min and max of values for a given set of keys. The keys are given as a numpy array. The numpy array is a 3D array containing the keys. Each row of the 3D array is the block of keys for which we need to find the min and max i.e. for each row we take the corresponding 2D array entries, and get the values corresponding to those entries and find the min and max over those values. I need to do it for all the rows of the 3D array. from operator import itemgetter import numpy as np ddict = {(1.0, 1.0): [1,2,3,4], (1.0, 2.5): [2,3,4,5], (1.0, 3.75): [], (1.5, 1.0): [8,9,10], (1.5, 2.5): [2,6,8,19,1,31], (1.5,3.75): [4]} indA = np.array([ [ [( 1.0, 1.0), ( 1.0, 3.75)], [(1.5,1.0), (1.5,3.75)] ], [ [(1.0, 2.5), (1.5,1.0)], [(1.5, 2.5), (1.5,3.75)] ] ], dtype='float16,float16') mins = min(ddict, key=itemgetter(*[tuple(i) for b in indA for i in b.flatten()])) maxs = max(ddict, key=itemgetter(*[tuple(i) for b in indA for i in b.flatten()])) I tried the above code to get the output of min1 = min([1,2,3,4,8,9,10,4]) & min2 = min([2,3,4,5,8,9,10,2,6,8,19,1,31,4]) and max1= max([1,2,3,4,8,9,10,4]) & max2 = max([2,3,4,5,8,9,10,2,6,8,19,1,31,4]) I want to calculate the min and max for every 2D array in the numpy array. Any workaround ? Why my code is not working ? It gives me error TypeError: tuple indices must be integers or slices, not tuple | Here is what I think you're after: import numpy as np # I've reformatted your example data, to make it a bit clearer # no change in content though, just different whitespace # whether d is a dict or defaultdict doesn't matter d = { (1.0, 1.0): [1, 2, 3, 4], (1.0, 2.5): [2, 3, 4, 5], (1.0, 3.75): [], (1.5, 1.0): [8, 9, 10], (1.5, 2.5): [2, 6, 8, 19, 1, 31], (1.5, 3.75): [4] } # indA is just an array of indices, avoid capitals in variable names indices = np.array( [ [[(1.0, 1.0), (1.0, 3.75)], [(1.5, 1.0), (1.5, 3.75)]], [[(1.0, 2.5), (1.5, 1.0)], [(1.5, 2.5), (1.5, 3.75)]] ]) for group in indices: # you flattened each grouping of indices, but that flattens # the tuples you need intact as well: print('not: ', group.flatten()) # Instead, you just want all the tuples: print('but: ', group.reshape(-1, group.shape[-1])) # with that knowledge, this is how you can get the lists you want # the min and max for for group in indices: group = group.reshape(-1, group.shape[-1]) values = list(x for key in group for x in d[tuple(key)]) print(values) # So, the solution: result = [ (min(vals), max(vals)) for vals in ( list(x for key in grp.reshape(-1, grp.shape[-1]) for x in d[tuple(key)]) for grp in indices ) ] print(result) Output: not: [1. 1. 1. 3.75 1.5 1. 1.5 3.75] but: [[1. 1. ] [1. 3.75] [1.5 1. ] [1.5 3.75]] not: [1. 2.5 1.5 1. 1.5 2.5 1.5 3.75] but: [[1. 2.5 ] [1.5 1. ] [1.5 2.5 ] [1.5 3.75]] [1, 2, 3, 4, 8, 9, 10, 4] [2, 3, 4, 5, 8, 9, 10, 2, 6, 8, 19, 1, 31, 4] [(1, 10), (1, 31)] That is, [(1, 10), (1, 31)] is the result you are after, 1 being the minimum of the combined values of the first group of indices, 10 the maximum of that same group of values, etc. Some explanation of key lines: values = list(x for key in group for x in d[tuple(key)]) This constructs a list of combined values by looping over every pair of key values in group and using them as an index into the dictionary d. 
However, since key will be an ndarray after the reshaping, it is passed to the tuple() function first, so that the dict is correctly indexed. It loops over the retrieved values and adds each value x to the resulting list. The solution is put together in a single comprehension: [ (min(vals), max(vals)) for vals in ( list(x for key in grp.reshape(-1, grp.shape[-1]) for x in d[tuple(key)]) for grp in indices ) ] The outer brackets indicate that a list is being constructed. (min(vals), max(vals)) is a tuple of min and max of vals, and vals loops over the inner comprehension. The inner comprehension is a generator (with parentheses instead of brackets) generating the lists for each group in indices, as explained above. Edit: you updated your question adding a dtype to indices, making it a structured array like this: indices = np.array( [ [[(1.0, 1.0), (1.0, 3.75)], [(1.5, 1.0), (1.5, 3.75)]], [[(1.0, 2.5), (1.5, 1.0)], [(1.5, 2.5), (1.5, 3.75)]] ], dtype='float16,float16') To counter that change and still make the solution work, you can simply work on an unstructured copy: unstructured_indices = rf.structured_to_unstructured(indices) for group in unstructured_indices: group = group.reshape(-1, group.shape[-1]) values = list(x for key in group for x in d[tuple(key)]) print(values) And the solution becomes: result = [ (min(vals), max(vals)) for vals in ( list(x for key in grp.reshape(-1, grp.shape[-1]) for x in d[tuple(key)]) for grp in rf.structured_to_unstructured(indices) ) ] | 4 | 3 |
73,225,265 | 2022-8-3 | https://stackoverflow.com/questions/73225265/how-to-insert-bulk-data-into-cosmos-db-in-python | I'm developing an application in Python which uses Azure Cosmos DB as the main database. At some point in the app, I need to insert bulk data (a batch of items) into Cosmos DB. So far, I've been using Azure Cosmos DB Python SDK for SQL API for communicating with Cosmos DB; however, it doesn't provide a method for bulk data insertion. As I understood, these are the insertion methods provided in this SDK, both of which only support single item insert, which can be very slow when using it in a for loop: .upsert_item() .create_item() Is there another way to use this SDK to insert bulk data instead of using the methods above in a for loop? If not, is there an Azure REST API that can handle bulk data insertion? | The Cosmos DB service does not provide this via its REST API. Bulk mode is implemented at the SDK layer and unfortunately, the Python SDK does not yet support bulk mode. It does however support asynchronous IO. Here's an example that may help you. from azure.cosmos.aio import CosmosClient import os URL = os.environ['ACCOUNT_URI'] KEY = os.environ['ACCOUNT_KEY'] DATABASE_NAME = 'myDatabase' CONTAINER_NAME = 'myContainer' async def create_products(): async with CosmosClient(URL, credential=KEY) as client: database = client.get_database_client(DATABASE_NAME) container = database.get_container_client(CONTAINER_NAME) for i in range(10): await container.upsert_item({ 'id': 'item{0}'.format(i), 'productName': 'Widget', 'productModel': 'Model {0}'.format(i) } ) Update: I remembered another way you can do bulk inserts in Cosmos DB for Python SDK and that is using Stored Procedures. There are examples of how to write these, including samples that demonstrate passing an array, which is what you want to do. I would also take a look at bounded execution as you will want to implement this as well. You can learn how to write them here, How to write stored procedures. Then how to register and call them here, How to use Stored Procedures. Note: these can only be used when passing a partition key value so you can only do batches within logical partitions. | 5 | 4 |
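Since the async example above still awaits each upsert one at a time, a hedged variation is to fan the calls out with asyncio.gather so several upserts are in flight at once; this is only client-side concurrency, not an official bulk API, and the names and database/container ids mirror the answer's example:

import asyncio
import os
from azure.cosmos.aio import CosmosClient

URL = os.environ['ACCOUNT_URI']
KEY = os.environ['ACCOUNT_KEY']

async def create_products(items):
    async with CosmosClient(URL, credential=KEY) as client:
        container = client.get_database_client('myDatabase').get_container_client('myContainer')
        # issue the upserts concurrently instead of awaiting them one by one
        await asyncio.gather(*(container.upsert_item(item) for item in items))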
73,258,013 | 2022-8-6 | https://stackoverflow.com/questions/73258013/python-pandas-data-frame-error-while-trying-to-print-it-within-single-df | I see dataframe error while trying to print it within single df[ _ , _ ] form. Below are the code lines #Data Frames code import numpy as np import pandas as pd randArr = np.random.randint(0,100,20).reshape(5,4) df =pd.DataFrame(randArr,np.arange(101,106,1),['PDS','Algo','SE','INS']) print(df['PDS','SE']) errors: Traceback (most recent call last): File "C:\Users\subro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\pandas\core\indexes\base.py", line 3621, in get_loc return self._engine.get_loc(casted_key) File "pandas\_libs\index.pyx", line 136, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\index.pyx", line 163, in pandas._libs.index.IndexEngine.get_loc File "pandas\_libs\hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item File "pandas\_libs\hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item KeyError: ('PDS', 'SE') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "D:\Education\4th year\1st sem\Machine Learning Lab\1st Lab\python\pandas\pdDataFrame.py", line 11, in <module> print(df['PDS','SE']) File "C:\Users\subro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\pandas\core\frame.py", line 3505, in __getitem__ indexer = self.columns.get_loc(key) File "C:\Users\subro\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\pandas\core\indexes\base.py", line 3623, in get_loc raise KeyError(key) from err KeyError: ('PDS', 'SE') | Do you mean to do this? Need to indicate the column names when creating the dataframe, and also need double square brackets df[[ ]] when extracting a slice of the dataframe import numpy as np import pandas as pd randArr = np.random.randint(0,100,20).reshape(5,4) df = pd.DataFrame(randArr, columns=['PDS', 'SE', 'ABC', 'CDE']) print(df) print(df[['PDS','SE']]) Output: PDS SE ABC CDE 0 56 77 82 42 1 17 12 84 46 2 34 9 19 12 3 19 88 34 19 4 51 54 9 94 PDS SE 0 56 77 1 17 12 2 34 9 3 19 88 4 51 54 | 4 | 2 |
73,257,192 | 2022-8-6 | https://stackoverflow.com/questions/73257192/convert-a-list-of-dictionary-of-dictionaries-to-a-dataframe | I have a list of "dictionary of dictionaries" that looks like this: lis = [{'Health and Welfare Plan + Change Notification': {'evidence_capture': 'null', 'test_result_justification': 'null', 'latest_test_result_date': 'null', 'last_updated_by': 'null', 'test_execution_status': 'Not Started', 'test_result': 'null'}}, {'Health and Welfare Plan + Computations': {'evidence_capture': 'null', 'test_result_justification': 'null', 'latest_test_result_date': 'null', 'last_updated_by': 'null', 'test_execution_status': 'Not Started', 'test_result': 'null'}}, {'Health and Welfare Plan + Data Agreements': {'evidence_capture': 'null', 'test_result_justification': 'Due to the Policy', 'latest_test_result_date': '2019-10-02', 'last_updated_by': 'null', 'test_execution_status': 'In Progress', 'test_result': 'null'}}, {'Health and Welfare Plan + Data Elements': {'evidence_capture': 'null', 'test_result_justification': 'xxx', 'latest_test_result_date': '2019-10-02', 'last_updated_by': 'null', 'test_execution_status': 'In Progress', 'test_result': 'null'}}, {'Health and Welfare Plan + Data Quality Monitoring': {'evidence_capture': 'null', 'test_result_justification': 'xxx', 'latest_test_result_date': '2019-08-09', 'last_updated_by': 'null', 'test_execution_status': 'Completed', 'test_result': 'xxx'}}, {'Health and Welfare Plan + HPU Source Reliability': {'evidence_capture': 'null', 'test_result_justification': 'xxx.', 'latest_test_result_date': '2019-10-02', 'last_updated_by': 'null', 'test_execution_status': 'In Progress', 'test_result': 'null'}}, {'Health and Welfare Plan + Lineage': {'evidence_capture': 'null', 'test_result_justification': 'null', 'latest_test_result_date': 'null', 'last_updated_by': 'null', 'test_execution_status': 'Not Started', 'test_result': 'null'}}, {'Health and Welfare Plan + Metadata': {'evidence_capture': 'null', 'test_result_justification': 'Valid', 'latest_test_result_date': '2020-07-02', 'last_updated_by': 'null', 'test_execution_status': 'Completed', 'test_result': 'xxx'}}, {'Health and Welfare Plan + Usage Reconciliation': {'evidence_capture': 'null', 'test_result_justification': 'Test out of scope', 'latest_test_result_date': '2019-10-02', 'last_updated_by': 'null', 'test_execution_status': 'In Progress', 'test_result': 'null'}}] I would like to convert the list into a dataframe that looks like this: evidence_capture last_updated_by latest_test_result_date test_execution_status test_result test_result_justification test_category Change Notification null null null Not Started null null Health and Welfare Plan Computations null null null Not Started null null Health and Welfare Plan Data Agreements null null 2019-10-02 In Progress null Due to the Policy Health and Welfare Plan Data Elements null null 2019-10-02 In Progress null xxx Health and Welfare Plan Data Quality Monitoring null null 2019-08-09 Completed xxx xxx Health and Welfare Plan HPU Source Reliability null null 2019-10-02 In Progress null xxx. Health and Welfare Plan Lineage null null null Not Started null null Health and Welfare Plan Metadata null null 2020-07-02 Completed xxx Valid Health and Welfare Plan Usage Reconciliation null null 2019-10-02 In Progress null Test out of scope Health and Welfare Plan My code to build the dataframe is using a for-loop to concat the records column by column. After that to process the column names, and then transpose it. 
The final output would have the repeated string "Health and Welfare Plan" removed from each row index, but appended as a new column. df3 = pd.DataFrame(lis[0]) for i in range(1, len(lis)): df3 = pd.concat([df3, pd.DataFrame(lis[i])], axis=1) df3.columns = [col.split(' + ')[1] for col in df3.columns] df3 = df3.T df3['test_category'] = 'Health and Welfare Plan' print(df3) The code is able to produce the final output, but using "expensive" functions of both for-loop and dataframe concat. So I was wondering if there is a better way to output the same results? | Let us do dict comp to flatten the list of dictionaries pd.DataFrame({k.split(' + ')[1]: v for d in lis for k, v in d.items()}).T evidence_capture test_result_justification latest_test_result_date last_updated_by test_execution_status test_result Change Notification null null null null Not Started null Computations null null null null Not Started null Data Agreements null Due to the Policy 2019-10-02 null In Progress null Data Elements null xxx 2019-10-02 null In Progress null Data Quality Monitoring null xxx 2019-08-09 null Completed xxx HPU Source Reliability null xxx. 2019-10-02 null In Progress null Lineage null null null null Not Started null Metadata null Valid 2020-07-02 null Completed xxx Usage Reconciliation null Test out of scope 2019-10-02 null In Progress null | 4 | 5 |
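If the test_category column from the question's desired output is also needed, one hedged way is to split each key once while flattening, before building the frame (using the `lis` defined in the question):

import pandas as pd

flat, cats = {}, {}
for d in lis:
    for key, rec in d.items():
        category, name = key.split(' + ', 1)
        flat[name] = rec
        cats[name] = category

df = pd.DataFrame(flat).T
df['test_category'] = pd.Series(cats)  # aligns on the row index (the B names)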
73,244,027 | 2022-8-5 | https://stackoverflow.com/questions/73244027/character-set-utf8-unsupported-in-python-mysql-connector | I'm trying to connect my database to a python project using the MySQL connector. However, when using the code below, import mysql.connector mydb = mysql.connector.MySQLConnection( host="localhost", user="veensew", password="%T5687j5IiYe" ) print(mydb) I encounter the following error: mysql.connector.errors.ProgrammingError: Character set 'utf8' unsupported I tried to find out why this is happening, but I keep getting the same error. MySQL Connector version - 8.0.30 I'd appreciate any help. Thank you in advance! | I ran into the same issue. There were apparently some changes in version 8.0.30 to the way utf8_ collations are handled (see MySQL Connector release notes). I installed version 8.0.29 which fixed the issue for me. pip3 install mysql-connector-python==8.0.29 | 16 | 34 |
73,251,012 | 2022-8-5 | https://stackoverflow.com/questions/73251012/put-logo-and-title-above-on-top-of-page-navigation-in-sidebar-of-streamlit-multi | I am using the new multipage feature and would like to style my multipage app and put a logo with a title on top of/before the page navigation. Here's a small example tested on Python 3.9 with streamlit==1.11.1 in the following directory structure: /Home.py /pages/Page_1.py /pages/Page_2.py Home.py: import streamlit as st st.sidebar.markdown( "My Logo (sidebar) should be on top of the Navigation within the sidebar" ) st.markdown("# Home") Page_1.py: import streamlit as st st.markdown("Page 1") Page_2.py: import streamlit as st st.markdown("Page 2") which I can run using: $ streamlit run Home.py But this leads to the Text printed below and not above the navigation: Is there any way to do this? Any hints are welcome! Best wishes, Cord | One option is to do it via CSS, with a function like this: def add_logo(): st.markdown( """ <style> [data-testid="stSidebarNav"] { background-image: url(http://placekitten.com/200/200); background-repeat: no-repeat; padding-top: 120px; background-position: 20px 20px; } [data-testid="stSidebarNav"]::before { content: "My Company Name"; margin-left: 20px; margin-top: 20px; font-size: 30px; position: relative; top: 100px; } </style> """, unsafe_allow_html=True, ) And then just call that function at the top of each page. That produces an effect like this: | 9 | 12 |
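To illustrate the "call that function at the top of each page" remark, a hedged sketch of one page file; the utils module name is hypothetical and simply holds the CSS helper shown above:

# pages/Page_1.py
import streamlit as st
from utils import add_logo  # hypothetical module containing the add_logo() helper

add_logo()
st.markdown("Page 1")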
73,247,210 | 2022-8-5 | https://stackoverflow.com/questions/73247210/how-to-plot-a-gantt-chart-using-timesteps-and-not-dates-using-plotly | So I found this code online that would make a Gantt chart: import plotly.express as px import pandas as pd df = pd.DataFrame([ dict(Task="Job A", Start='2009-01-01', Finish='2009-02-28', Resource="Alex"), dict(Task="Job B", Start='2009-03-05', Finish='2009-04-15', Resource="Alex"), dict(Task="Job C", Start='2009-02-20', Finish='2009-05-30', Resource="Max") ]) fig = px.timeline(df, x_start="Start", x_end="Finish", y="Task", color="Resource") fig.update_yaxes(autorange="reversed") fig.show() However, I wish to not use date and times but rather use timesteps, starting from 0 nanoseconds and so forth, what I would want is something that would look like this: import plotly.express as px import pandas as pd df = pd.DataFrame([ dict(Task="Job A", Start=0, Finish=10, Resource="Alex"), dict(Task="Job B", Start=12, Finish=24, Resource="Alex"), dict(Task="Job C", Start=5, Finish=20, Resource="Max") ]) fig = px.timeline(df, x_start="Start", x_end="Finish", y="Task", color="Resource") fig.update_yaxes(autorange="reversed") fig.show() Is this something that would be possible using plotly or does it only use Date and time? I want to use plotly because it gives a very clean graph. I've looked in the plotly express documentation but found nothing to convert and use timesteps. | You can do this using plotly.figure_factory gantt chart and forcing the x-axis to show numbers. There are few examples here, if you need to know more about this. The code output is as shown below. import pandas as pd import plotly.figure_factory as ff df = pd.DataFrame([ dict(Task="Job A", Start=0, Finish=10, Resource="Alex"), dict(Task="Job B", Start=12, Finish=24, Resource="Alex"), dict(Task="Job C", Start=5, Finish=20, Resource="Max") ]) fig = ff.create_gantt(df, index_col = 'Resource', bar_width = 0.4, show_colorbar=True) fig.update_layout(xaxis_type='linear', autosize=False, width=800, height=400) fig.show() Plot | 4 | 4 |
73,245,007 | 2022-8-5 | https://stackoverflow.com/questions/73245007/socketexception-connection-refused-os-error-connection-refused-errno-111 | In my flutter app I am using the flask server for testing purpose. I started my server and run the API url in my flutter app. But SocketException: Connection refused (OS Error: Connection refused, errno = 111), address = 127.0.0.1, port = 44164. error is showing. var headers = {'Content-Type': 'application/json'}; var request = http.Request('POST', Uri.parse('http://127.0.0.1:5000/addrec')); request.body = json.encode({ "name": UploadedName, "grade": Uploadedgrade, "loaction": Uploadedlocation, "like": Uploadedlike, "admission": Uploadedadmission, "comments": Uploadedcomments, "entery_time": UploadeddataentryTime }); request.headers.addAll(headers); http.StreamedResponse response = await request.send(); if (response.statusCode == 200) { print(await response.stream.bytesToString()); } else { print(response.reasonPhrase); } I am using actual android device for app running. | This happens because the localhost (or 127.0.0.1) on the device is only accessible to the device itself. Solution 1 You can reverse-proxy a localhost port to the Android device/emulator running adb reverse on the command prompt like so: adb reverse tcp:5000 tcp:5000 Solution 2 Use the machine's IP address where the API is running. Also, the API should be listening to the IP 0.0.0.0 to be accessible outside the localhost. Supposing the API machine's IP is 192.168.1.123 it's going to be something like: Uri.parse('http://192.168.1.123:5000/addrec') Just take care because changing the API to listen to 0.0.0.0 is a security risk as the API is going to be accessible to the outside world. | 5 | 7 |
73,236,048 | 2022-8-4 | https://stackoverflow.com/questions/73236048/is-there-no-faster-way-to-convert-bgr-opencv-image-to-cmyk | I have an OpenCV image, as usual in BGR color space, and I need to convert it to CMYK. I searched online but found basically only (slight variations of) the following approach: def bgr2cmyk(cv2_bgr_image): bgrdash = cv2_bgr_image.astype(float) / 255.0 # Calculate K as (1 - whatever is biggest out of Rdash, Gdash, Bdash) K = 1 - numpy.max(bgrdash, axis=2) with numpy.errstate(divide="ignore", invalid="ignore"): # Calculate C C = (1 - bgrdash[..., 2] - K) / (1 - K) C = 255 * C C = C.astype(numpy.uint8) # Calculate M M = (1 - bgrdash[..., 1] - K) / (1 - K) M = 255 * M M = M.astype(numpy.uint8) # Calculate Y Y = (1 - bgrdash[..., 0] - K) / (1 - K) Y = 255 * Y Y = Y.astype(numpy.uint8) return (C, M, Y, K) This works fine, however, it feels quite slow - for an 800 x 600 px image it takes about 30 ms on my i7 CPU. Typical operations with cv2 like thresholding and alike take only a few ms for the same image, so since this is all numpy I was expecting this CMYK conversion to be faster. However, I haven't found anything that makes this significantly fater. There is a conversion to CMYK via PIL.Image, but the resulting channels do not look as they do with the algorithm listed above. Any other ideas? | There are several things you should do: shake the math use integer math where possible optimize beyond what numpy can do Shaking the math Given RGB' = RGB / 255 K = 1 - max(RGB') C = (1-K - R') / (1-K) M = (1-K - G') / (1-K) Y = (1-K - B') / (1-K) You see what you can factor out. RGB' = RGB / 255 J = max(RGB') K = 1 - J C = (J - R') / J M = (J - G') / J Y = (J - B') / J Integer math Don't normalize to [0,1] for these calculations. The max() can be done on integers. The differences can too. K can be calculated entirely with integer math. J = max(RGB) K = 255 - J C = 255 * (J - R) / J M = 255 * (J - G) / J Y = 255 * (J - B) / J Numba import numba Numba will optimize that code beyond simply using numpy library routines. It will also do the parallelization as indicated. Choosing the numpy error model and allowing fastmath will cause division by zero to not throw an exception or warning, but also make the math a little faster. Both variants significantly outperform a plain python/numpy solution. Much of that is due to better use of CPU registers caches, rather than intermediate arrays, as is usual with numpy. First variant: ~1.9 ms @numba.njit(parallel=True, error_model="numpy", fastmath=True) def bgr2cmyk_v4(bgr_img): bgr_img = np.ascontiguousarray(bgr_img) (height, width) = bgr_img.shape[:2] CMYK = np.empty((height, width, 4), dtype=np.uint8) for i in numba.prange(height): for j in range(width): B,G,R = bgr_img[i,j] J = max(R, G, B) K = np.uint8(255 - J) C = np.uint8(255 * (J - R) / J) M = np.uint8(255 * (J - G) / J) Y = np.uint8(255 * (J - B) / J) CMYK[i,j] = (C,M,Y,K) return CMYK Thanks to Cris Luengo for pointing out further refactoring potential (pulling out 255/J), leading to a second variant. 
It takes ~1.6 ms @numba.njit(parallel=True, error_model="numpy", fastmath=True) def bgr2cmyk_v5(bgr_img): bgr_img = np.ascontiguousarray(bgr_img) (height, width) = bgr_img.shape[:2] CMYK = np.empty((height, width, 4), dtype=np.uint8) for i in numba.prange(height): for j in range(width): B,G,R = bgr_img[i,j] J = np.uint8(max(R, G, B)) Jinv = np.uint16((255*256) // J) # fixed point math K = np.uint8(255 - J) C = np.uint8(((J - R) * Jinv) >> 8) M = np.uint8(((J - G) * Jinv) >> 8) Y = np.uint8(((J - B) * Jinv) >> 8) CMYK[i,j] = (C,M,Y,K) return CMYK This fixed point math causes floor rounding. For round-to-nearest, the expression must be ((J - R) * Jinv + 128) >> 8. That would cost a bit more time then (~1.8 ms). What else? I think that numba/LLVM didn't apply SIMD here. Some investigation revealed that the Loop Vectorizer doesn't like any of the instances it was asked to consider. An OpenCL kernel might be even faster. OpenCL can run on CPUs. Numba can also use CUDA. | 5 | 3 |
73,240,620 | 2022-8-4 | https://stackoverflow.com/questions/73240620/the-right-way-to-type-hint-a-coroutine-function | I cannot wrap my head around type hinting a Coroutine. As far as I understand, when we declare a function like so: async def some_function(arg1: int, arg2: str) -> list: ... we effectively declare a function, which returns a coroutine, which, when awaited, returns a list. So, the way to type hint it would be: f: Callable[[int, str], Coroutine[???]] = some_function But Coroutine generic type has 3 arguments! We can see it if we go to the typing.py file: ... Coroutine = _alias(collections.abc.Coroutine, 3) ... There is also Awaitable type, which logically should be a parent of Coroutine with only one generic parameter (the return type, I suppose): ... Awaitable = _alias(collections.abc.Awaitable, 1) ... So maybe it would be more or less correct to type hint the function this way: f: Callable[[int, str], Awaitable[list]] = some_function Or is it? So, basically, the questions are: Can one use Awaitable instead of Coroutine in the case of type hinting an async def function? What are the correct parameters for the Coroutine generic type and what are its use-cases? | As the docs state: Coroutine objects and instances of the Coroutine ABC are all instances of the Awaitable ABC. And for the Coroutine type: A generic version of collections.abc.Coroutine. The variance and order of type variables correspond to those of Generator. Generator in turn has the signature Generator[YieldType, SendType, ReturnType]. So if you want to preserve that type information, use Coroutine, otherwise Awaitable should suffice. | 44 | 27 |
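A short, hedged sketch of how the two annotations discussed above look in practice; Any, Any stands in for the yield and send types, which are rarely needed for plain async functions:

from typing import Any, Awaitable, Callable, Coroutine

async def some_function(arg1: int, arg2: str) -> list:
    return [arg1, arg2]

# Full generic form: Coroutine[YieldType, SendType, ReturnType]
f: Callable[[int, str], Coroutine[Any, Any, list]] = some_function

# If the callable is only ever awaited, Awaitable is enough
g: Callable[[int, str], Awaitable[list]] = some_function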
73,239,270 | 2022-8-4 | https://stackoverflow.com/questions/73239270/numpy-rounding-issue | Can someone explain to me why is numpy round acting strange with this exact number rounding: df = pd.DataFrame({'c': [121921117.714999988675115, 445, 22]}) df = np.round(df['c'], 8) Result: 121921117.71499997 445.0 22.0 Expected: 121921117.71499999 445.0 22.0 It's obvious that the first number is not rounded well, any ideas? EDIT: Since I'm focused here on the precision, not so much on the performance, I've used python round function to solve this problem: df.applymap(round, ndigits=8) | Check the small print 2 in the documentation of round aka around. The short answer is that round "uses a fast but sometimes inexact algorithm" and to use format_float_positional if you want to see the correct result. >>> import pandas as pd >>> df = pd.DataFrame({'c': [121921117.714999988675115, 445, 22]}) >>> df["c"][0] 121921117.71499999 >>> round(df["c"][0],8) 121921117.71499997 >>> np.format_float_positional(df["c"][0],8) '121921117.71499999' | 4 | 3 |
73,234,081 | 2022-8-4 | https://stackoverflow.com/questions/73234081/print-a-string-which-has-the-reverse-order | Assignment: Print a string which has the reverse order 'Python love We. Science Data love We' I tried this: strg = 'We love Data Science. We love Python' words = strg.split(" ") words.reverse() new_strg = " ".join(words) print(new_strg) >>> Python love We Science. Data love We But the answer isn't as expected because the . after Science is not at the proper place. How to get the expected result? | Is this the output you need? Python love We. Science Data love We Then the code is strg = 'We love Data Science. We love Python' pos = len(strg) - strg.index('.') - 2 words = [e.strip('.') for e in strg.split()] words.reverse() new_strg = ' '.join(words) print(new_strg[:pos] + '.' + new_strg[pos:]) Or another way to do it: strg = 'We love Data Science. We love Python' new_strg = [s.split()[::-1] for s in strg.split('.')][::-1] print(' '.join(new_strg[0]) + '. ' + ' '.join(new_strg[1])) #or print('{}. {}'.format(' '.join(new_strg[0]), ' '.join(new_strg[1]))) Or to raise the bar: strg = 'We love Data Science. We love Python' print('. '.join([' '.join(new_strg.split()[::-1]) for new_strg in strg.split('.')[::-1]])) | 4 | 2 |
73,234,979 | 2022-8-4 | https://stackoverflow.com/questions/73234979/how-to-check-if-element-is-present-or-not-using-playwright-and-timeout-parameter | I need to find a specific element in my webpage. The element may be in the page or not. This code is giving me error if the element is not visible: error_text = self.page.wait_for_selector( self.ERROR_MESSAGE, timeout=7000).inner_text() How can I look for the element using timeout, and get a bool telling me if the element is found or not? | You have to use the page.is_visible(selector, **kwargs) for this as this returns a boolean value. Playwright Docs, bool = page.is_visible(selector, timeout=7000) print(bool) #OR if page.is_visible(selector, timeout=7000): print("Element Found") else: print("Element not Found") You can also use expect assertion if you directly want to assert the presence of the element and fail the tests if the condition is not met. expect(page.locator("selector")).to_have_count(1, timeout=7000) | 4 | 7 |
73,229,993 | 2022-8-4 | https://stackoverflow.com/questions/73229993/how-to-upload-a-specific-file-to-google-colab | I have a file on my computer that I want to upload to Google Colab. I know there are numerous ways to do this, including a from google.colab import files uploaded = files.upload() or just uploading manually from the file system. But I want to upload that specific file without needing to choose that file myself. Something like: from google.colab import files file_path = 'path/to/the/file' files.upload(file_path) Is there any way to do this? | Providing a file path directly rather than clicking through the GUI for an upload requires access to your local machine's file system. However, when your run cell IPython magic commands such as %pwd in Google collab, you'll notice that the current working directory shown is that of the notebook environment - not that of your machine. The way to eschew the issue are as follows. 1. Local Runtime Only local runtimes via Jupyter seems to enable such access to the local file system. This necessitates the installation of jupyterlab, a Jupyter server extension for using a WebSocket, and launching a local server. See this tutorial. 2. Google Drive In case Google Drive is convenient, you can upload files into Google Drive from your local machine without clicking through a GUI. 3. Embracing the GUI If these options seem overkill, you, unfortunately, have to stick with from google.colab import files uploaded = files.upload() as you alluded to. | 4 | 1 |
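A hedged sketch of option 2 above (the Drive path is hypothetical): drive.mount asks for authorization once, after which the file can be read by path with no upload dialog:

from google.colab import drive

drive.mount('/content/drive')

file_path = '/content/drive/MyDrive/data/my_file.csv'  # hypothetical location in Drive
with open(file_path) as f:
    print(f.readline())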
73,228,173 | 2022-8-3 | https://stackoverflow.com/questions/73228173/how-to-aggregate-a-subset-of-rows-in-and-append-to-a-multiindexed-pandas-datafra | Problem Setup & Goal I have a Multindexed Pandas DataFrame that looks like this: import pandas as pd df = pd.DataFrame({ 'Values':[1, 3, 4, 8, 5, 2, 9, 0, 2], 'A':['A1', 'A1', 'A1', 'A1', 'A2', 'A2', 'A3', 'A3', 'A3'], 'B':['foo', 'bar', 'fab', 'baz', 'foo', 'baz', 'qux', 'baz', 'bar'] }) df.set_index(['A','B'], inplace=True) print(df.to_string()) Values A B A1 foo 1 bar 3 fab 4 baz 8 A2 foo 5 baz 2 A3 qux 9 baz 0 bar 2 My ultimate goal is to replace all the "bar" and "baz" rows in the B column with a summed row called "other" (see below) in the simplest, most canonical Pandas way. Values A B A1 foo 1 fab 4 other 11 A2 foo 5 other 2 A3 qux 9 other 2 Current Work I managed to figure out how to create a mask for a MultiIndex DataFrame from a similar problem to highlight the rows we want to eventually aggregate, which are in an agg_list. agg_list = ['bar', 'baz'] # Create a mask that highlights the rows in B that are in agg_list filterFunc = lambda x: x.index.get_level_values('B') in agg_list mask = df.groupby(level=['A','B']).apply(filterFunc) This produces the expected mask: print(mask.to_string()) A B A1 bar True baz True fab False foo False A2 baz True foo False A3 bar True baz True qux False And I know how to remove the rows I no longer need: # Remove rows in B col that are in agg_list using mask df_masked = df[[~mask.loc[i1, i2] for i1,i2 in df.index]] print(df_masked.to_string()) Values A B A1 foo 1 fab 4 A2 foo 5 A3 qux 9 But I don't know how to do the actual aggregation/sum on these rows and append it to each Multindexed row. Similar Problems/Solutions Similar problems I've seen didn't involve a Multindex DataFrame, so I can't quite use some of the solutions like this one, which has the same general idea of creating a mask and then append a summed row: threshold = 6 m = df['value'] < threshold df1 = df[~m].copy() df1.loc['Z'] = df.loc[m, 'value'].sum() or m = df['value'] < threshold df1 = df[~m].append(df.loc[m, ['value']].sum().rename('Z')) | Here is one way which resets the index for just B, performs a replace and aggregates the values. agg_list = ['bar', 'baz'] (df.reset_index(level=1) .replace({'B':{'|'.join(agg_list):'other'}},regex=True) .groupby(['A','B']).sum()) Another way is to create a new MultiIndex with bar and baz being replaced with other. (df.set_axis(pd.MultiIndex.from_arrays([df.index.get_level_values(0), df.index.get_level_values(1).str.replace('|'.join(agg_list),'other')])) .groupby(level=[0,1]).sum()) Output: Values A B A1 fab 4 foo 1 other 11 A2 foo 5 other 2 A3 other 2 qux 9 | 4 | 2 |
73,217,036 | 2022-8-3 | https://stackoverflow.com/questions/73217036/how-to-set-required-fields-in-patch-api-in-swagger-ui | I'm using drf-spectacular and here's code in settings.py SPECTACULAR_SETTINGS = { 'TITLE': 'TITLE', 'VERSION': '1.0.0', 'SCHEMA_PATH_PREFIX_TRIM': True, 'PREPROCESSING_HOOKS': ["custom.url_remover.preprocessing_filter_spec"], } in serializers.py class ChangePasswordSerilaizer(serializers.Serializer): current_password = serializers.CharField(write_only=True, min_length=8, required=True) new_password = serializers.CharField(write_only=True, min_length=8, required=True) confirm_new_password = serializers.CharField(write_only=True, min_length=8, required=True) but still fields in request body are showing not required | change your SPECTACULAR_SETTINGS SPECTACULAR_SETTINGS = { 'TITLE': 'APP NAME', 'VERSION': '1.0.0', 'SCHEMA_PATH_PREFIX_TRIM': True, 'PREPROCESSING_HOOKS': ["custom.url_remover.preprocessing_filter_spec"], 'COMPONENT_SPLIT_PATCH': False, } by default COMPONENT_SPLIT_PATCH is true in SPECTACULAR_SETTINGS so you can simply override('COMPONENT_SPLIT_PATCH': False) it to fix this problem | 4 | 5 |
73,193,006 | 2022-8-1 | https://stackoverflow.com/questions/73193006/how-to-add-a-column-to-a-polars-dataframe-using-with-columns | I am currently creating a new column in a polars data frame using predictions = [10, 20, 30, 40, 50] df['predictions'] = predictions where predictions is a numpy array or list containing values I computed with another tool. However, polars throws a warning, that this option will be deprecated. How can the same result be achieved using .with_columns()? | You can now also pass numpy arrays in directly. E.g, df = pl.DataFrame({"x": [0, 1, 2, 3, 4]}) p1 = [10, 20, 30, 40, 50] p2 = np.array(p1) df.with_columns( p1=pl.Series(p1), # For python lists, construct a Series p2=p2, # For numpy arrays, you can pass them directly ) # shape: (5, 3) # βββββββ¬ββββββ¬ββββββ # β x β p1 β p2 β # β --- β --- β --- β # β i64 β i64 β i64 β # βββββββͺββββββͺββββββ‘ # β 0 β 10 β 10 β # β 1 β 20 β 20 β # β 2 β 30 β 30 β # β 3 β 40 β 40 β # β 4 β 50 β 50 β # βββββββ΄ββββββ΄ββββββ | 26 | 4 |
73,181,243 | 2022-7-31 | https://stackoverflow.com/questions/73181243/warningtensorflowlayers-in-a-sequential-model-should-only-have-a-single-input | I have copy past code from tensorflow website's introduction to autoencoder first examplefollowing code works with mnist fashion dataset but not mine.This gives me a very long warning.Please tell me what is worng with my dataset the warning screen short of same error here x_train is my dataset: tf.shape(x_train) output <tf.Tensor: shape=(3,), dtype=int32, numpy=array([169,** **28, 28])> here x_train is the mnist dataset: tf.shape(x_train) output<tf.Tensor: shape=(3,), dtype=int32, numpy=array([60000, 28, 28])> My whole code to make dataset: dir_path='auto/ttt/' data=[] x_train=[] for i in os.listdir(dir_path): img=image.load_img(dir_path+'//'+i,color_mode='grayscale',target_size=(28,28)) data=np.array(img) data=data/255.0 x_train.append(data) this is the warning: WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor. Received: inputs=(<tf.Tensor 'IteratorGetNext:0' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:1' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:2' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:3' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:4' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:5' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:6' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:7' shape=(None, 28) dtype=flo... also this value error (same warning): ValueError: Exception encountered when calling layer "sequential_4" (type Sequential). Layer "flatten_2" expects 1 input(s), but it received 169 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:1' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:2' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:3' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:4' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:5' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:6' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:7' shape=(None, 28) dtype=float32>, <tf.Tensor 'IteratorGetNext:8' shape=(None, 28) dtype=float3... | The model.fit() is given a list of arrays as input. A list of arrays is generally passed to fit() when a model has multiple inputs. In this case, the fit() method is treating each array as an input, resulting in the error. Please convert the data to a tensor as follows and try again. x_train=tf.convert_to_tensor(x_train) Refer to the gist for complete code. | 4 | 4 |
73,135,157 | 2022-7-27 | https://stackoverflow.com/questions/73135157/redis-timeseries-with-python-responseerror-unknown-command-ts-create | I am trying to create a timeseries in Redis using python like so: import redis connection_redis = redis.Redis(host='127.0.0.1', port=6379) connection_redis.ts().create('ts', retention_msecs=0) but I get the following error: ResponseError: unknown command 'TS.CREATE'. I have been searching for a way to solve this problem but I haven't found anything. I am running redis in a docker. Thank you :)! | The Redis docker image does not contain any Redis module. You can use the Redis Stack docker image. redis/redis-stack-server contains the RediSearch, RedisJSON, RedisGraph, RedisTimeSeries, and RedisBloom modules. redis/redis-stack also contains RedisInsight. Update, October 2024 From a Redis Blog Post: Redis 8 introduces seven new data structures βJSON, time series, and five probabilistic typesβ along with the fastest and most scalable Redis query engine to date. These capabilities, previously only available separately through Redis Stack or our Software and Cloud offerings, are now built natively into Redis Community Edition. You can now simply use Redis docker image (version 8.0-M01 or later). | 5 | 8 |
73,150,560 | 2022-7-28 | https://stackoverflow.com/questions/73150560/get-the-name-of-all-fields-in-a-dataclass | I am trying to write a function to log dataclasses I would like to get the name of all fields in the dataclass and print the value to each (similar to how you might write a function to print a dictionary) i.e. import dataclasses @dataclasses.dataclass class Test: a: str = "a value" b: str = "b value" test = Test() def print_data_class(dataclass_instance): fields = ... # get dataclass fields for field in fields: print(f"{field.name}: {field.value}") print_data_class(test) Desired output: "a": "a value" "b": "b value" However I haven't been able to find how to get the fields of a dataclass, does anyone know how this could be done? | This example shows only a name, type and value, however, __dataclass_fields__ is a dict of Field objects, each containing information such as name, type, default value, etc. Using dataclasses.fields() Using dataclasses.fields() you can access fields you defined in your dataclass. fields = dataclasses.fields(dataclass_instance) Using inspect.getmembers() Using inspect.getmembers() you can access all fields in your dataclass. members = inspect.getmembers(type(dataclass_instance)) fields = list(dict(members)['__dataclass_fields__'].values()) Complete code solution import dataclasses import inspect @dataclasses.dataclass class Test: a: str = "a value" b: str = "b value" def print_data_class(dataclass_instance): # option 1: fields fields = dataclasses.fields(dataclass_instance) # option 2: inspect members = inspect.getmembers(type(dataclass_instance)) fields = list(dict(members)['__dataclass_fields__'].values()) for v in fields: print(f'{v.name}: ({v.type.__name__}) = {getattr(dataclass_instance, v.name)}') print_data_class(Test()) # a: (str) = a value # b: (str) = b value print_data_class(Test(a="1", b="2")) # a: (str) = 1 # b: (str) = 2 | 17 | 27 |
73,176,563 | 2022-7-30 | https://stackoverflow.com/questions/73176563/python-polars-join-on-column-with-greater-or-equal | I have two polars dataframe, one dataframe df_1 with two columns start and end the other dataframe df_2 one with a column dates and I want to do a left join on df_2 under the condition that the dates column is in between the start and end column. To make it more obvious what I want to do here is an example DATA import polars as pl from datetime import date df_1 = pl.DataFrame( { "id": ["abc", "abc", "456"], "start": [date(2022, 1, 1), date(2022, 3, 4), date(2022, 5, 11)], "end": [date(2022, 2, 4), date(2022, 3, 10), date(2022, 5, 16)], "value": [10, 3, 4] } ) df_2 = pl.DataFrame( { "id": ["abc", "abc", "456", "abc", "abc", "456"], "dates": [date(2022, 1, 2), date(2022, 3, 4), date(2022, 5, 11), date(2022, 1, 4), date(2022, 3, 7), date(2022, 5, 13)], } ) So now I would join on id and that dates is in between start and end and the result should look like that RESULT shape: (6, 3) βββββββ¬βββββββββββββ¬ββββββββ β id β dates β value β β --- β --- β --- β β str β date β i64 β βββββββͺβββββββββββββͺββββββββ‘ β abc β 2022-01-02 β 10 β β abc β 2022-03-04 β 3 β β 456 β 2022-05-11 β 4 β β abc β 2022-01-04 β 10 β β abc β 2022-03-07 β 3 β β 456 β 2022-05-13 β 4 β βββββββ΄βββββββββββββ΄ββββββββ | (I'm going to assume that your intervals in df_1 do not overlap for a particular id - otherwise, there may not be a unique value that we can assign to the id/dates combinations in df_2.) One way to do this is with join_asof. The Algorithm ( df_2 .sort("dates") .join_asof( df_1.sort("start"), by="id", left_on="dates", right_on="start", strategy="backward", ) .with_columns( pl.when(pl.col('dates') <= pl.col('end')) .then(pl.col('value')) ) .select('id', 'dates', 'value') ) shape: (6, 3) βββββββ¬βββββββββββββ¬ββββββββ β id β dates β value β β --- β --- β --- β β str β date β i64 β βββββββͺβββββββββββββͺββββββββ‘ β abc β 2022-01-02 β 10 β β abc β 2022-01-04 β 10 β β abc β 2022-03-04 β 3 β β abc β 2022-03-07 β 3 β β 456 β 2022-05-11 β 4 β β 456 β 2022-05-13 β 4 β βββββββ΄βββββββββββββ΄ββββββββ In Steps First, let's append some additional rows to df_2, to show what will happen if a particular row is not contained in an interval in df_1. I'll also add a row number, for easier inspection. df_2 = pl.DataFrame( { "id": ["abc", "abc", "456", "abc", "abc", "456", "abc", "abc", "abc"], "dates": [ date(2022, 1, 2), date(2022, 3, 4), date(2022, 5, 11), date(2022, 1, 4), date(2022, 3, 7), date(2022, 5, 13), date(2021, 12, 31), date(2022, 3, 1), date(2023, 1, 1), ], } ).with_row_index() df_2 shape: (9, 3) βββββββββ¬ββββββ¬βββββββββββββ β index β id β dates β β --- β --- β --- β β u32 β str β date β βββββββββͺββββββͺβββββββββββββ‘ β 0 β abc β 2022-01-02 β β 1 β abc β 2022-03-04 β β 2 β 456 β 2022-05-11 β β 3 β abc β 2022-01-04 β β 4 β abc β 2022-03-07 β β 5 β 456 β 2022-05-13 β β 6 β abc β 2021-12-31 β β 7 β abc β 2022-03-01 β β 8 β abc β 2023-01-01 β βββββββββ΄ββββββ΄βββββββββββββ The join_asof step finds the latest start date that is on or before the dates date. Since intervals do not overlap, this is the only interval that might contain the dates date. For our purposes, I'll make a copy of the start column so that we can inspect the results. (The start column will not be in the results of the join_asof.) Note that for a join_asof, both DataFrames must be sorted by the asof columns (dates and start in this case). 
( df_2 .sort("dates") .join_asof( df_1.sort("start").with_columns(pl.col("start").alias("start_df1")), by="id", left_on="dates", right_on="start", strategy="backward", ) .sort("index") ) shape: (9, 7) βββββββββ¬ββββββ¬βββββββββββββ¬βββββββββββββ¬βββββββββββββ¬ββββββββ¬βββββββββββββ β index β id β dates β start β end β value β start_df1 β β --- β --- β --- β --- β --- β --- β --- β β u32 β str β date β date β date β i64 β date β βββββββββͺββββββͺβββββββββββββͺβββββββββββββͺβββββββββββββͺββββββββͺβββββββββββββ‘ β 0 β abc β 2022-01-02 β 2022-01-01 β 2022-02-04 β 10 β 2022-01-01 β β 1 β abc β 2022-03-04 β 2022-03-04 β 2022-03-10 β 3 β 2022-03-04 β β 2 β 456 β 2022-05-11 β 2022-05-11 β 2022-05-16 β 4 β 2022-05-11 β β 3 β abc β 2022-01-04 β 2022-01-01 β 2022-02-04 β 10 β 2022-01-01 β β 4 β abc β 2022-03-07 β 2022-03-04 β 2022-03-10 β 3 β 2022-03-04 β β 5 β 456 β 2022-05-13 β 2022-05-11 β 2022-05-16 β 4 β 2022-05-11 β β 6 β abc β 2021-12-31 β null β null β null β null β β 7 β abc β 2022-03-01 β 2022-01-01 β 2022-02-04 β 10 β 2022-01-01 β β 8 β abc β 2023-01-01 β 2022-03-04 β 2022-03-10 β 3 β 2022-03-04 β βββββββββ΄ββββββ΄βββββββββββββ΄βββββββββββββ΄βββββββββββββ΄ββββββββ΄βββββββββββββ The last three rows are the ones that I added. In the last step, we'll inspect the end date, and null out any values where dates is beyond end. ( df_2 .sort("dates") .join_asof( df_1.sort("start").with_columns(pl.col("start").alias("start_df1")), by="id", left_on="dates", right_on="start", strategy="backward", ) .with_columns( pl.when(pl.col('dates') <= pl.col('end')) .then(pl.col('value')) ) .sort("index") ) shape: (9, 7) βββββββββ¬ββββββ¬βββββββββββββ¬βββββββββββββ¬βββββββββββββ¬ββββββββ¬βββββββββββββ β index β id β dates β start β end β value β start_df1 β β --- β --- β --- β --- β --- β --- β --- β β u32 β str β date β date β date β i64 β date β βββββββββͺββββββͺβββββββββββββͺβββββββββββββͺβββββββββββββͺββββββββͺβββββββββββββ‘ β 0 β abc β 2022-01-02 β 2022-01-01 β 2022-02-04 β 10 β 2022-01-01 β β 1 β abc β 2022-03-04 β 2022-03-04 β 2022-03-10 β 3 β 2022-03-04 β β 2 β 456 β 2022-05-11 β 2022-05-11 β 2022-05-16 β 4 β 2022-05-11 β β 3 β abc β 2022-01-04 β 2022-01-01 β 2022-02-04 β 10 β 2022-01-01 β β 4 β abc β 2022-03-07 β 2022-03-04 β 2022-03-10 β 3 β 2022-03-04 β β 5 β 456 β 2022-05-13 β 2022-05-11 β 2022-05-16 β 4 β 2022-05-11 β β 6 β abc β 2021-12-31 β null β null β null β null β β 7 β abc β 2022-03-01 β 2022-01-01 β 2022-02-04 β null β 2022-01-01 β β 8 β abc β 2023-01-01 β 2022-03-04 β 2022-03-10 β null β 2022-03-04 β βββββββββ΄ββββββ΄βββββββββββββ΄βββββββββββββ΄βββββββββββββ΄ββββββββ΄βββββββββββββ You can see that the last three rows that I added (which purposely don't match any intervals in df_1) have null as value. Instead of using when/then/otherwise to set value to null, you can filter these out, if that's what you need. | 4 | 4 |
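If the data is small enough (or you simply want the join condition spelled out literally), an ordinary join on id followed by a range filter is a simpler, if less scalable, sketch; it materialises every id match before filtering:

```python
import polars as pl
from datetime import date

df_1 = pl.DataFrame({
    "id": ["abc", "abc", "456"],
    "start": [date(2022, 1, 1), date(2022, 3, 4), date(2022, 5, 11)],
    "end": [date(2022, 2, 4), date(2022, 3, 10), date(2022, 5, 16)],
    "value": [10, 3, 4],
})
df_2 = pl.DataFrame({
    "id": ["abc", "abc", "456", "abc", "abc", "456"],
    "dates": [date(2022, 1, 2), date(2022, 3, 4), date(2022, 5, 11),
              date(2022, 1, 4), date(2022, 3, 7), date(2022, 5, 13)],
})

# Join on id, then keep only rows whose date falls inside the interval.
result = (
    df_2.join(df_1, on="id", how="left")
    .filter((pl.col("dates") >= pl.col("start")) & (pl.col("dates") <= pl.col("end")))
    .select(["id", "dates", "value"])
)
print(result)
```

Unlike the join_asof route above, dates that fall inside no interval are dropped here rather than kept with a null value.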
73,212,628 | 2022-8-2 | https://stackoverflow.com/questions/73212628/retrieve-date-from-datetime-column-in-polars | Currently when I try to retrieve date from a polars datetime column, I have to write something similar to: import polars as pl import datetime as dt df = pl.DataFrame({ 'time': [dt.datetime.now()] }) df = df.with_columns( pl.col("time").map_elements(lambda x: x.date()).alias("date") ) shape: (1, 2) ββββββββββββββββββββββββββββββ¬βββββββββββββ β time β date β β --- β --- β β datetime[ΞΌs] β date β ββββββββββββββββββββββββββββββͺβββββββββββββ‘ β 2024-07-20 11:41:04.265539 β 2024-07-20 β ββββββββββββββββββββββββββββββ΄βββββββββββββ Is there a different way, something closer to: pl.col("time").dt.date().alias("date") | You can use .dt.date() import datetime import polars as pl df = pl.DataFrame({ "time": [datetime.datetime.now()] }) df.with_columns( pl.col("time").dt.date().alias("date") ) shape: (1, 2) ββββββββββββββββββββββββββββββ¬βββββββββββββ β time β date β β --- β --- β β datetime[ΞΌs] β date β ββββββββββββββββββββββββββββββͺβββββββββββββ‘ β 2024-07-21 16:17:41.489579 β 2024-07-21 β ββββββββββββββββββββββββββββββ΄βββββββββββββ | 14 | 17 |
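An equivalent spelling, if you prefer casts over the dt namespace (for a Datetime column the cast to pl.Date should behave the same as .dt.date(), dropping the time-of-day component):

```python
import datetime
import polars as pl

df = pl.DataFrame({"time": [datetime.datetime.now()]})

# Casting Datetime -> Date truncates the time component.
df = df.with_columns(pl.col("time").cast(pl.Date).alias("date"))
print(df)
```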
73,187,905 | 2022-8-1 | https://stackoverflow.com/questions/73187905/shuffling-two-2d-tensors-in-pytorch-and-maintaining-same-order-correlation | Is it possible to shuffle two 2D tensors in PyTorch by their rows, but maintain the same order for both? I know you can shuffle a 2D tensor by rows with the following code: a=a[torch.randperm(a.size()[0])] To elaborate: If I had 2 tensors a = torch.tensor([[1, 1, 1, 1, 1], [2, 2, 2, 2, 2], [3, 3, 3, 3, 3]]) b = torch.tensor([[4, 4, 4, 4, 4], [5, 5, 5, 5, 5], [6, 6, 6, 6, 6]]) And ran them through some function/block of code to shuffle randomly but maintain correlation and produce something like the following a = torch.tensor([[2, 2, 2, 2, 2], [1, 1, 1, 1, 1], [3, 3, 3, 3, 3]]) b = torch.tensor([[5, 5, 5, 5, 5], [4, 4, 4, 4, 4], [6, 6, 6, 6, 6]]) My current solution is converting to a list, using the random.shuffle() function like below. a_list = a.tolist() b_list = b.tolist() temp_list = list(zip(a_list , b_list )) random.shuffle(temp_list) # Shuffle a_temp, b_temp = zip(*temp_list) a_list, b_list = list(a_temp), list(b_temp) # Convert back to tensors a = torch.tensor(a_list) b = torch.tensor(b_list) This takes quite a while and was wondering if there is a better way. | You can use the function torch.randperm to get a set of indices that act as a random permutation. The following is a small example of getting a random permutation, then applying it to both the a and b tensors: indices = torch.randperm(a.size()[0]) a=a[indices] b=b[indices] | 4 | 8 |
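Wrapping the accepted approach in a small helper (the function name and the optional seed are ours; the seed only makes the shuffle reproducible):

```python
from typing import Optional

import torch

def shuffle_together(a: torch.Tensor, b: torch.Tensor,
                     seed: Optional[int] = None) -> tuple[torch.Tensor, torch.Tensor]:
    # One permutation of the row indices, applied to both tensors,
    # keeps the row-to-row correspondence intact.
    generator = torch.Generator()
    if seed is not None:
        generator.manual_seed(seed)
    indices = torch.randperm(a.size(0), generator=generator)
    return a[indices], b[indices]

a = torch.tensor([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
b = torch.tensor([[4, 4, 4], [5, 5, 5], [6, 6, 6]])
a_shuffled, b_shuffled = shuffle_together(a, b, seed=0)
```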
73,191,533 | 2022-8-1 | https://stackoverflow.com/questions/73191533/using-conftest-py-vs-importing-fixtures-from-dedicate-modules | I have been familiarizing with pytest lately and on how you can use conftest.py to define fixtures that are automatically discovered and imported within my tests. It is pretty clear to me how conftest.py works and how it can be used, but I'm not sure about why this is considered a best practice in some basic scenarios. Let's say my tests are structured in this way: tests/ --test_a.py --test_b.py The best practice, as suggested by the documentation and various articles about pytest around the web, would be to define a conftest.py file with some fixtures to be used in both test_a.py and test_b.py. In order to better organize my fixtures, I might have the need of splitting them into separate files in a semantically meaningful way, ex. db_session_fixtures.py, dataframe_fixtures.py, and then import them as plugins in conftest.py. tests/ --test_a.py --test_b.py --conftest.py --db_session_fixtures.py --dataframe_fixtures.py In conftest.py I would have: import pytest pytest_plugins = ["db_session_fixtures", "dataframe_fixtures"] and I would be able to use db_session_fixtures and dataframe_fixtures seamlessly in my test cases without any additional code. While this is handy, I feel it might hurt readability. For example, if I would not use conftest.py as described above, I might write in test_a.py from .dataframe_fixtures import my_dataframe_fixture def test_case_a(my_dataframe_fixture): #some tests and use the fixtures as usual. The downside is that it requires me to import the fixture, but the explicit import improves the readability of my test case, letting me know in a glance where the fixture come from, just as any other python module. Are there downsides I am overlooking on about this solution or other advantages that conftest.py brings to the table, making it the best practice when setting up pytest test suites? | There's not a huge amount of difference, it's mainly just down to preference. I mainly use conftest.py to pull in fixures that are required, but not directly used by your test. So you may have a fixture that does something useful with a database, but needs a database connection to do so. So you make the db_connection fixture available in conftest.py, and then your test only has to do something like: conftest.py from tests.database_fixtures import db_connection __all__ = ['db_connection'] tests/database_fixtures.py import pytest @pytest.fixture def db_connection(): ... @pytest.fixture def new_user(db_connection): ... test/test_user.py from tests.database_fixtures import new_user def test_user(new_user): assert new_user.id > 0 # or whatever the test needs to do If you didn't make db_connection available in conftest.py or directly import it then pytest would fail to find the db_connection fixture when trying to use the new_user fixture. If you directly import db_connection into your test file, then linters will complain that it is an unused import. Worse, some may remove it, and cause your tests to fail. So making the db_connection available in conftest.py, to me, is the simplest solution. Overriding Fixtures The one significant difference is that it is easier to override fixtures using conftest.py. 
Say you have a directory layout of: ./ ββ conftest.py ββ tests/ ββ test_foo.py ββ bar/ ββ conftest.py ββ test_foobar.py In conftest.py you could have: import pytest @pytest.fixture def some_value(): return 'foo' And then in tests/bar/conftest.py you could have: import pytest @pytest.fixture def some_value(some_value): return some_value + 'bar' Having multiple conftests allows you to override a fixture whilst still maintaining access to the original fixture. So following tests would all work. tests/test_foo.py def test_foo(some_value): assert some_value == 'foo' tests/bar/test_foobar.py def test_foobar(some_value): assert some_value == 'foobar' You can still do this without conftest.py, but it's a bit more complicated. You'd need to do something like: import pytest # in this scenario we would have something like: # mv contest.py tests/custom_fixtures.py from tests.custom_fixtures import some_value as original_some_value @pytest.fixture def some_value(original_some_value): return original_some_value + 'bar' def test_foobar(some_value): assert some_value == 'foobar' | 16 | 14 |
73,154,451 | 2022-7-28 | https://stackoverflow.com/questions/73154451/configure-vscode-to-autocomplete-from-two-python-projects | I have read and tried a lot but am still struggling with correctly configuring IntelliSense, i.e. VS Code's autocomplete, for my Python projects. It works fine within a single Python project. But I have a workspace with two of my projects open at the same time since one of them is imported and used inside the other. For simplicity, let's call them myProjectA and myProjectB. Inside myProjectA, I get correct autocomplete (incl. suggestions and auto import) for myProjectA but not for myProjectB, which is used inside A. How can I configure VSCode to show auto suggestions of myProjectB in myProjectA? I have tried adding "python.autoComplete.extraPaths": [ "myProjectB" ] to my config, but it doesn't change anything. Do I need to provide the full path to myProjectB? Or do I need to configure something else? | It wasn't working for me until I added the path to "python.analysis.extraPaths" as well: "python.analysis.extraPaths": [ "path\\to\\directory" ], "python.autoComplete.extraPaths": [ "path\\to\\directory" ] | 4 | 7 |
73,198,957 | 2022-8-1 | https://stackoverflow.com/questions/73198957/how-to-exclude-optional-unset-values-from-a-pydantic-model-using-fastapi | I have this model: class Text(BaseModel): id: str text: str = None class TextsRequest(BaseModel): data: list[Text] n_processes: Union[int, None] So I want to be able to take requests like: {"data": ["id": "1", "text": "The text 1"], "n_processes": 8} and {"data": ["id": "1", "text": "The text 1"]}. Right now in the second case I get {'data': [{'id': '1', 'text': 'The text 1'}], 'n_processes': None} using this code: app = FastAPI() @app.post("/make_post/", response_model_exclude_none=True) async def create_graph(request: TextsRequest): input_data = jsonable_encoder(request) So how can I exclude n_processes here? | Since Pydantic >= 2.0 deprecates model.dict() use model.model_dump(...) instead. You can use exclude_none param of Pydantic's model.dict(...): class Text(BaseModel): id: str text: str = None class TextsRequest(BaseModel): data: list[Text] n_processes: Optional[int] request = TextsRequest(**{"data": [{"id": "1", "text": "The text 1"}]}) print(request.dict(exclude_none=True)) Output: {'data': [{'id': '1', 'text': 'The text 1'}]} Also, it's more idiomatic to write Optional[int] instead of Union[int, None]. | 14 | 5 |
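For completeness, a runnable sketch of the Pydantic v2 spelling the answer points at (where .dict() is deprecated in favour of model_dump), including the related exclude_unset option:

```python
from typing import Optional

from pydantic import BaseModel

class Text(BaseModel):
    id: str
    text: Optional[str] = None

class TextsRequest(BaseModel):
    data: list[Text]
    n_processes: Optional[int] = None

request = TextsRequest(data=[{"id": "1", "text": "The text 1"}])

# exclude_none drops every field whose value is None ...
print(request.model_dump(exclude_none=True))
# {'data': [{'id': '1', 'text': 'The text 1'}]}

# ... while exclude_unset only drops fields the caller never supplied,
# which matters when None is itself a meaningful value.
print(request.model_dump(exclude_unset=True))
```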
73,200,382 | 2022-8-1 | https://stackoverflow.com/questions/73200382/using-typevartuple-with-inner-typevar-variadic-generics-transformation | How would I use TypeVarTuple for this example? T = TypeVar("T") Ts = TypeVarTuple("Ts") @dataclass class S(Generic[T]): data: T def data_from_s(*structs: ??) -> ??: return tuple(x.data for x in structs) a = data_from_s(S(1), S("3")) # is type tuple[int, str] | I don't see any way to do this with the current spec. The main issue I see is that TypeVarTuple does not support bounds. You can't constrain the types referred to by Ts to be bounded to S. You need to translate somehow tuple[S[T1], S[T2], ...] -> tuple[T1, T2, ...], but you have no way to know that the types contained by Ts are specializations of S or that types themselves are generic with further parameterization. Without using TypeVarTuple, your goal can be accomplished to some extent with a pattern like the following, using overload to handle subsets of the signature for differing amounts of arguments. I also use an ending / in the overloads to prevent usage of named arguments (forcing positional args to be used), which allows the overloads to match the real method definition. Obviously, this pattern becomes awkward as you add more ranges of arguments, but in some cases it can be a nice escape hatch. from dataclasses import dataclass from typing import Any, Generic, TypeVar, assert_type, overload T = TypeVar("T") @dataclass class S(Generic[T]): data: T ... T1 = TypeVar("T1") T2 = TypeVar("T2") T3 = TypeVar("T3") @overload def data_from_s(s1: S[T1], /) -> tuple[T1]: ... @overload def data_from_s(s1: S[T1], s2: S[T2], /) -> tuple[T1, T2]: ... @overload def data_from_s(s1: S[T1], s2: S[T2], s3: S[T3], /) -> tuple[T1, T2, T3]: ... def data_from_s(*structs: S[Any]) -> tuple[Any, ...]: return tuple(x.data for x in structs) Which will pass this test: assert_type( data_from_s(S(1)), tuple[int] ) assert_type( data_from_s(S(1), S("3")), tuple[int, str] ) assert_type( data_from_s(S(1), S("3"), S(3.9)), tuple[int, str, float] ) | 5 | 4 |
73,132,769 | 2022-7-27 | https://stackoverflow.com/questions/73132769/what-is-the-right-way-to-get-unit-vector-to-index-elasticsearch-ann-dot-product | I am trying to index word embedding vectors into an Elasticsearch V8 ANN dense_vector field that uses dot_product similarity. I can successfully index the vectors with cosine similarity, so I converted them to unit vectors with numpy for dot_product: unit_vector = vec / np.linalg.norm(vec) but I get a 400 error like this: The [dot_product] similarity can only be used with unit-length vectors. Preview of invalid vector: [-0.0038341882, -0.1564709, 0.08771773, -0.14555556, -0.07952896, ...] Am I missing something? | I was confronted with the exact same problem and found a solution after much experimentation. In my case, when indexing lots of embeddings into Elasticsearch (dense_vector with the similarity parameter set to dot_product), most of them got indexed properly and a small percentage failed with The [dot_product] similarity can only be used with unit-length vectors. I found after intensive testing that the problem was that the unit vectors I was working with were of numerical type np.float16, and this was causing the error. Working with np.float32 as the numerical type in my workflow for the unit vectors solved the issue. | 4 | 5 |
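A sketch of the normalisation step with the dtype made explicit, following the answer above (the 384-dimension figure is only an example):

```python
import numpy as np

def to_unit_vector(vec) -> list[float]:
    # Normalise in float32 (or float64): float16 rounding can leave the
    # norm far enough from 1.0 that Elasticsearch rejects the vector.
    v = np.asarray(vec, dtype=np.float32)
    v /= np.linalg.norm(v)
    return v.tolist()

vec = np.random.rand(384).astype(np.float16)  # 384 dims is just an example
unit = to_unit_vector(vec)
print(np.linalg.norm(np.asarray(unit, dtype=np.float32)))  # ~1.0
```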
73,144,451 | 2022-7-27 | https://stackoverflow.com/questions/73144451/modulenotfounderror-no-module-named-setuptools-command-build | I am trying to pip install sentence transformers. I am working on a Macbook pro with an M1 chip. I am using the following command: pip3 install -U sentence-transformers When I run this, I get this error/output and I do not know how to fix it... Defaulting to user installation because normal site-packages is not writeable Collecting sentence-transformers Using cached sentence-transformers-2.2.2.tar.gz (85 kB) Preparing metadata (setup.py) ... done Collecting transformers<5.0.0,>=4.6.0 Using cached transformers-4.21.0-py3-none-any.whl (4.7 MB) Collecting tqdm Using cached tqdm-4.64.0-py2.py3-none-any.whl (78 kB) Requirement already satisfied: torch>=1.6.0 in ./Library/Python/3.8/lib/python/site-packages (from sentence-transformers) (1.12.0) Collecting torchvision Using cached torchvision-0.13.0-cp38-cp38-macosx_11_0_arm64.whl (1.2 MB) Requirement already satisfied: numpy in ./Library/Python/3.8/lib/python/site-packages (from sentence-transformers) (1.23.1) Collecting scikit-learn Using cached scikit_learn-1.1.1-cp38-cp38-macosx_12_0_arm64.whl (7.6 MB) Collecting scipy Using cached scipy-1.8.1-cp38-cp38-macosx_12_0_arm64.whl (28.6 MB) Collecting nltk Using cached nltk-3.7-py3-none-any.whl (1.5 MB) Collecting sentencepiece Using cached sentencepiece-0.1.96.tar.gz (508 kB) Preparing metadata (setup.py) ... done Collecting huggingface-hub>=0.4.0 Using cached huggingface_hub-0.8.1-py3-none-any.whl (101 kB) Collecting requests Using cached requests-2.28.1-py3-none-any.whl (62 kB) Collecting pyyaml>=5.1 Using cached PyYAML-6.0.tar.gz (124 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Requirement already satisfied: typing-extensions>=3.7.4.3 in ./Library/Python/3.8/lib/python/site-packages (from huggingface-hub>=0.4.0->sentence-transformers) (4.3.0) Requirement already satisfied: filelock in ./Library/Python/3.8/lib/python/site-packages (from huggingface-hub>=0.4.0->sentence-transformers) (3.7.1) Requirement already satisfied: packaging>=20.9 in ./Library/Python/3.8/lib/python/site-packages (from huggingface-hub>=0.4.0->sentence-transformers) (21.3) Collecting tokenizers!=0.11.3,<0.13,>=0.11.1 Using cached tokenizers-0.12.1.tar.gz (220 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error Γ Getting requirements to build wheel did not run successfully. 
β exit code: 1 β°β> [20 lines of output] Traceback (most recent call last): File "/Users/joeyoneill/Library/Python/3.8/lib/python/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module> main() File "/Users/joeyoneill/Library/Python/3.8/lib/python/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "/Users/joeyoneill/Library/Python/3.8/lib/python/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 130, in get_requires_for_build_wheel return hook(config_settings) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages/setuptools/build_meta.py", line 146, in get_requires_for_build_wheel return self._get_build_requires( File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages/setuptools/build_meta.py", line 127, in _get_build_requires self.run_setup() File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages/setuptools/build_meta.py", line 142, in run_setup exec(compile(code, __file__, 'exec'), locals()) File "setup.py", line 2, in <module> from setuptools_rust import Binding, RustExtension File "/private/var/folders/bg/ncfh283n4t39vqhvbd5n9ckh0000gn/T/pip-build-env-vjj6eow8/overlay/lib/python3.8/site-packages/setuptools_rust/__init__.py", line 1, in <module> from .build import build_rust File "/private/var/folders/bg/ncfh283n4t39vqhvbd5n9ckh0000gn/T/pip-build-env-vjj6eow8/overlay/lib/python3.8/site-packages/setuptools_rust/build.py", line 20, in <module> from setuptools.command.build import build as CommandBuild # type: ignore[import] ModuleNotFoundError: No module named 'setuptools.command.build' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ Getting requirements to build wheel did not run successfully. β exit code: 1 β°β> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. Can anybody tell me what I should do or what is wrong with what I am currently doing? I factory reset my Mac and re-downloaded everything but I still get this same error. I am stumped. | I posted this as an issue to the actual Sentence Transformers GitHub page. Around 4 days ago I was given this answer by a "Federico Viticci" which resolved the issue and allowed me to finally install the library: "For what it is worth, I was having the exact issue. Installing it directly from source using pip install git+https://github.com/huggingface/transformers fixed it on my M1 Max MacBook Pro." Original Git Issue here: https://github.com/UKPLab/sentence-transformers/issues/1652 | 10 | 0 |
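Once the workaround above (installing transformers from source) has gone through, a short sanity check confirms that the package imports and encodes on the M1 machine; the model name here is just an example and will be downloaded on first use:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(["hello world", "earphones not working"])
print(embeddings.shape)  # (2, 384) for this model
```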
73,143,854 | 2022-7-27 | https://stackoverflow.com/questions/73143854/linking-opencv-python-to-opencv-cuda-in-arch | I'm trying to get OpenCV with CUDA to be used in Python open-cv on Arch Linux, but I'm not sure how to link it. Arch provides a package opencv-cuda, which provides these files. Guides I've found said to link the Python cv2.so to the one provided, but the package doesn't provide that. My Python site-packages has cv2.abi3.so in it, and I've tried linking that to core.so and cvv.so to no avail. Do I need to build it differently to support Python? Or is there another step I'm missing? | On Arch, opencv-cuda provides opencv=4.6.0, but you still need the Python bindings. Fortunately, installing python-opencv after installing opencv-cuda works, since it leverages it. I just set up my Python virtual environment to allow system site packages (python -m venv .venv --system-site-packages), and it works like a charm! Neural-net image detection runs about 3x as fast now. | 6 | 6 |
73,165,109 | 2022-7-29 | https://stackoverflow.com/questions/73165109/what-is-the-type-of-sum | I want to express that the first parameter is a "list" of the second parameter, and that the result has the same type as the second parameter. This mysum (ie. not the standard lib sum) should work equally well with int/float/str/list, and any other type that supports +=. Naively: def mysum(lst: list[T], start: T) -> T: x = start for item in lst: x += item return x which produces: (dev311) go|c:\srv\tmp> mypy sumtype.py sumtype.py:26: error: Unsupported left operand type for + ("T") sumtype.py:35: error: Cannot infer type argument 1 of "mysum" Found 2 errors in 1 file (checked 1 source file) Second attempt: from typing import Iterable, Protocol, TypeVar T = TypeVar('T') class Addable(Protocol[T]): def __add__(self, other: T) -> T: ... class RAddable(Protocol[T]): def __radd__(self, other: T) -> T: ... def mysum(lst: Iterable[Addable|RAddable], start: Addable) -> Addable: x = start for item in lst: x += item return x however, this doesn't place any restrictions on the "list" items and start being the same type, so this typechecks: class Foo: def __radd__(self, other: int) -> int: return other + 42 mysum([Foo()], []) # <=== should ideally fail since Foo() and [] are not the same type and fails with a type-error at runtime: (dev311) go|c:\srv\tmp> python sumtype.py Traceback (most recent call last): File "c:\srv\tmp\sumtype.py", line 27, in <module> mysum([Foo()], []) File "c:\srv\tmp\sumtype.py", line 18, in mysum x += item File "c:\srv\tmp\sumtype.py", line 24, in __radd__ return other + 42 ~~~~~~^~~~ TypeError: can only concatenate list (not "int") to list I'm assuming it can't be this difficult to type-annotate a generic function, so what am I missing..? | You can make a generic type bound to a protocol that supports __add__ instead: Type variables can be bound to concrete types, abstract types (ABCs or protocols), and even unions of types from typing import TypeVar, Protocol T = TypeVar('T', bound='Addable') class Addable(Protocol): def __add__(self: T, other: T) -> T: ... def mysum(lst: list[T], start: T) -> T: x = start for item in lst: x += item return x Demo: https://mypy-play.net/?gist=ecf4031f21fb81e75d6e6f75d16f3911 | 4 | 7 |
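For reference, a self-contained usage sketch of the bound-protocol version from the answer; each call pins T to a single concrete type, which is what rules out the mixed Foo/list call:

```python
from typing import Protocol, TypeVar

T = TypeVar("T", bound="Addable")

class Addable(Protocol):
    def __add__(self: T, other: T) -> T: ...

def mysum(lst: list[T], start: T) -> T:
    x = start
    for item in lst:
        x += item
    return x

# Each call fixes T to one concrete type, so these type-check and run:
print(mysum([1, 2, 3], 0))         # 6
print(mysum(["a", "b"], ""))       # ab
print(mysum([[1], [2, 3]], []))    # [1, 2, 3]
```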
73,212,759 | 2022-8-2 | https://stackoverflow.com/questions/73212759/append-rows-to-dataset-if-missing-from-declared-dictionary-in-python | I have a dataset where I would like to add or append rows with values listed in dictionary (if these values are missing from original dataset) Data ID Date Type Cost Alpha Q1 2022 ok 1 Alpha Q2 2022 ok 1 Alpha Q3 2022 hi 1 Alpha Q4 2022 hi 2 Desired ID Date Type Cost Alpha Q1 2022 ok 1 Alpha Q2 2022 ok 1 Alpha Q3 2022 hi 1 Alpha Q4 2022 hi 2 Gamma Q1 2022 0 Theta Q1 2022 0 Doing I am using the script below, however, this is not appending, but only maps the value if date matches. Any suggestion is appreciated #values = {'Alpha': 'Q1 2022', 'Gamma':' Q1 2022', 'Theta': 'Q1 2022'} df['ID']=out['Date'].map({'Alpha': 'Q1 2022', 'Gamma':' Q1 2022', 'Theta': 'Q1 2022' }) df1 = df1.merge(df, how='left').fillna({'Cost': 0}) | Let us create a delta dataframe from the items of dictionary, then do a outer merge to append distinct rows delta = pd.DataFrame(values.items(), columns=['ID', 'Date']) df.merge(delta, how='outer') ID Date Type Cost 0 Alpha Q1 2022 ok 1.0 1 Alpha Q2 2022 ok 1.0 2 Alpha Q3 2022 hi 1.0 3 Alpha Q4 2022 hi 2.0 4 Gamma Q1 2022 NaN NaN 5 Theta Q1 2022 NaN NaN | 4 | 4 |
73,203,318 | 2022-8-2 | https://stackoverflow.com/questions/73203318/how-to-transform-spark-dataframe-to-polars-dataframe | I wonder how i can transform Spark dataframe to Polars dataframe. Let's say i have this code on PySpark: df = spark.sql('''select * from tmp''') I can easily transform it to pandas dataframe using .toPandas. Is there something similar in polars, as I need to get a polars dataframe for further processing? | Context Pyspark uses arrow to convert to pandas. Polars is an abstraction over arrow memory. So we can hijack the API that spark uses internally to create the arrow data and use that to create the polars DataFrame. TLDR Given an spark context we can write: import pyarrow as pa import polars as pl sql_context = SQLContext(spark) data = [('James',[1, 2]),] spark_df = sql_context.createDataFrame(data=data, schema = ["name","properties"]) df = pl.from_arrow(pa.Table.from_batches(spark_df._collect_as_arrow())) print(df) shape: (1, 2) βββββββββ¬βββββββββββββ β name β properties β β --- β --- β β str β list[i64] β βββββββββͺβββββββββββββ‘ β James β [1, 2] β βββββββββ΄βββββββββββββ Serialization steps This will actually be faster than the toPandas provided by spark itself, because it saves an extra copy. toPandas() will lead to this serialization/copy step: spark-memory -> arrow-memory -> pandas-memory With the query provided we have: spark-memory -> arrow/polars-memory | 14 | 36 |
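If relying on the private _collect_as_arrow() helper feels too fragile, a slower but public-API-only fallback is to go through pandas; a minimal sketch:

```python
import polars as pl
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("to-polars").getOrCreate()
spark_df = spark.createDataFrame([("James", 1), ("Anna", 2)], schema=["name", "value"])

# One extra copy compared with the arrow-batch route, but only public API.
df = pl.from_pandas(spark_df.toPandas())
print(df)
```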
73,204,179 | 2022-8-2 | https://stackoverflow.com/questions/73204179/how-to-scrape-video-media-with-scrapy-or-other-pythons-libraries | Concretely I want to extract videos from this website: equidia. The first problem is when I launch scrapy shell https://www.equidia.fr/courses/2022-07-31/R1/C1 -s USER_AGENT="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.67 Safari/537.36". I inspected with view(response) and notice the container of the video exists. I can click on play but there is no flow, so no video to extract. I guess the spider cannot be more able than I do in the web browser during inspection. UPDATE: Thanks to palpitus_on_fire I checked more the XHR type requests while I only concentrated on media type. The XHR request I am interested about is: https://equidia-vodce-p-api.hexaglobe.net/video/info/20220731_35186008_00000HR/01NdvIX6xNbihxxc3U52RDBX9novEE47lA1ZmEDw/0b94eb69d04aa18ca33ba5e8767b1a85264c35b6acefc8bfbd7eabb5e259cf2c/1659574478 It returns a response json file: { "mp4": { "240": "https://private4-stream-ovp.vide.io/dl/0bee7ff9bce79419fb27bac012c5d090/62e904dc/equidia_2752987/videos/240p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-240p_full_mp4.mp4?v=0", "360": "https://private3-stream-ovp.vide.io/dl/7c53dc70ae5cc46d7e2c9d43e3dfc685/62e904dc/equidia_2752987/videos/360p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-360p_full_mp4.mp4?v=0", "480": "https://private3-stream-ovp.vide.io/dl/4261c5bd7b48595535710e1dc34e8f93/62e904dc/equidia_2752987/videos/480p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-480p_full_mp4.mp4?v=0", "540": "https://private3-stream-ovp.vide.io/dl/d40592e39e27931f20245edde9f30152/62e904dc/equidia_2752987/videos/540p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-540p_full_mp4.mp4?v=0", "720": "https://private8-stream-ovp.vide.io/dl/9cb5a24a43a8d22d27e73b164020894d/62e904dc/equidia_2752987/videos/720p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-720p_full_mp4.mp4?v=0" } "hls": "https://private3-stream-ovp.vide.io/dl/6abf08b12203c84390a01fc4de65a4d5/62e904dc/equidia_2752987/videos/hls/63/03/S506303/V556523/playlist.m3u8?v=1", "master": "https://private8-stream-ovp.vide.io/dl/6830b3a5039eef57f844bb08cd252584/62e904dc/equidia_2752987/videos/master/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523.mp4?v=0" } "720": "https://private8-stream-ovp.vide.io/dl/9cb5a24a43a8d22d27e73b164020894d/62e904dc/equidia_2752987/videos/720p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-720p_full_mp4.mp4?v=0" contains the video and the resolution I want. Two main problems now: How to obtain XHR request and precisely the one starting with equidia-vodce-p-api.hexaglobe.net ? The popular solutions are to use webtesters like selenium or playwright but I know none of them. How to integrate it in a scrapy spider ? 
If it is possible I would like to get a code of this form: class VideoPageScraper(scrapy.Spider): def __init__(self): self.start_urls = ['https://www.equidia.fr/courses/2022-07-31/R1/C1'] def parse(self, response): # code to catch the XHR request desired urlvideo = 'https://private8-stream-ovp.vide.io/dl/9cb5a24a43a8d22d27e73b164020894d/62e904dc/equidia_2752987/videos/720p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-720p_full_mp4.mp4?v=0' yield scrapy.Request(url=urlvideo, callback=self.obtainvideo) def obtainvideo(self, response): with #open('/home/avy/binary.mp4', returns'wb') as wfile: wfile.write(response.body) UPDATES POSSIBILITY TO GENERATE XHR'S URL ? Looking at the developement tool of Firefox I am trying to get how the desired XHR request is done. The script main.d685b66f8277bab3376b.js produce the elements to make the XHR's url (at line 1761 as far as I understand). Object { endpoint: "https://equidia-vodce-p-api.hexaglobe.net/video/info", publicKey: "01NdvIX6xNbihxxc3U52RDBX9novEE47lA1ZmEDw", hash: "c79502b086116f30262dfbf7675225dc1c8bfb4c144df09da9a34b9106fccd39", expiresAt: "1659608062" } Then https://www.equidia.fr/polyfills.de328f7d13aac150968d.js initiates a fecth with the XHR's url. Do you think there is a way to get a generated object based on the script as shown above in order to make the desired XHR request ? APPENDICE WITH PLAYWRIGHT TRY (headless browser) I found a library that could be able to obtain the XHR requests I am looking for. playwright is a webtester and it is compatible with scrapy with scrapy-playwright. Inspired by those examples, I tried the following code: from playwright.sync_api import sync_playwright url = 'https://www.equidia.fr/courses/2022-07-31/R1/C1' with sync_playwright() as p: def handle_response(response): # the endpoint we are insterested in if ("equidia-vodce-p-api" in response.url): item = response.json()['mp4'] print(item) browser = p.chromium.launch() page = browser.new_page() page.on("response", handle_response) page.goto(url, wait_until="networkidle", timeout=900000) page.context.close() browser.close() Exception in callback SyncBase._sync.<locals>.callback(<Task finishe...t responses')>) at /home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_sync_base.py:104 handle: <Handle SyncBase._sync.<locals>.callback(<Task finishe...t responses')>) at /home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_sync_base.py:104> Traceback (most recent call last): File "/home/avy/anaconda3/envs/Turf/lib/python3.10/asyncio/events.py", line 80, in _run self._context.run(self._callback, *self._args) File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_sync_base.py", line 105, in callback g_self.switch() File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_browser_context.py", line 124, in <lambda> lambda params: self._on_response( File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_browser_context.py", line 396, in _on_response page.emit(Page.Events.Response, response) File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/pyee/base.py", line 176, in emit handled = self._call_handlers(event, args, kwargs) File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/pyee/base.py", line 154, in _call_handlers self._emit_run(f, args, kwargs) File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/pyee/asyncio.py", line 50, in _emit_run self.emit("error", exc) File 
"/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/pyee/base.py", line 179, in emit self._emit_handle_potential_error(event, args[0] if args else None) File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/pyee/base.py", line 139, in _emit_handle_potential_error raise error File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/pyee/asyncio.py", line 48, in _emit_run coro: Any = f(*args, **kwargs) File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_impl_to_api_mapping.py", line 88, in wrapper_func return handler( File "<input>", line 7, in handle_response File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/sync_api/_generated.py", line 601, in json self._sync("response.json", self._impl_obj.json()) File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_sync_base.py", line 111, in _sync return task.result() File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_network.py", line 362, in json return json.loads(await self.text()) File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_network.py", line 358, in text content = await self.body() File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_network.py", line 354, in body binary = await self._channel.send("body") File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_connection.py", line 39, in send return await self.inner_send(method, params, False) File "/home/avy/anaconda3/envs/Turf/lib/python3.10/site-packages/playwright/_impl/_connection.py", line 63, in inner_send result = next(iter(done)).result() playwright._impl._api_types.Error: Response body is unavailable for redirect responses {'240': 'https://private4-stream-ovp.vide.io/dl/ec808fddd1b10659cb7d3bfeb7cba591/62e9abc8/equidia_2752987/videos/240p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-240p_full_mp4.mp4?v=0', '360': 'https://private3-stream-ovp.vide.io/dl/7c5cc10f2bb466fd85d7109c698500d3/62e9abc8/equidia_2752987/videos/360p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-360p_full_mp4.mp4?v=0', '480': 'https://private3-stream-ovp.vide.io/dl/c65277c3e385fb825e0ea4f5c1447d00/62e9abc8/equidia_2752987/videos/480p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-480p_full_mp4.mp4?v=0', '540': 'https://private3-stream-ovp.vide.io/dl/036f88b63a1157ce51bf68a7e60791cc/62e9abc8/equidia_2752987/videos/540p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-540p_full_mp4.mp4?v=0', '720': 'https://private8-stream-ovp.vide.io/dl/459a1e569da63e69031d1ecbf1f87ed0/62e9abc8/equidia_2752987/videos/720p_full_mp4/63/03/S506303/V556523/20220731_35186008_00000hr-s506303-v556523-720p_full_mp4.mp4?v=0'} I managed to obtain the piece I wanted but with a lot of exceptions that I have no idea what's wrong. | Here is a fully working solution by using scrapy-playwright, I got the idea the following issue 61 and the users profile @lime-n. We download the xhr requests sent and store these into a dict with both the playwright tools and scrapy-playwright. I include the playwright_page_event_handlers to integrate playwright tools for this. Depending on the video size, it will take a few minutes to download. 
main.py import scrapy from playwright.async_api import Response as PlaywrightResponse import jsonlines import pandas as pd import json from video.items import videoItem headers = { 'authority': 'api.equidia.fr', 'accept': 'application/json, text/plain, */*', 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', 'content-type': 'application/json', 'origin': 'https://www.equidia.fr', 'referer': 'https://www.equidia.fr/', 'sec-ch-ua': '".Not/A)Brand";v="99", "Google Chrome";v="103", "Chromium";v="103"', 'sec-ch-ua-mobile': '?0', 'sec-ch-ua-platform': '"macOS"', 'sec-fetch-dest': 'empty', 'sec-fetch-mode': 'cors', 'sec-fetch-site': 'same-site', 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36', } class videoDownloadSpider(scrapy.Spider): name = 'video' start_urls = ['https://www.equidia.fr/courses/2022-07-31/R1/C1'] def start_requests(self): for url in self.start_urls: yield scrapy.Request( url, #callback = self.parse, meta = { 'playwright':True, 'playwright_page_event_handlers':{ "response": "handle_response", } } ) async def handle_response(self, response: PlaywrightResponse) -> None: """ We can grab the post data with response.request.post - there are three different types for different needs. The method below helps grab those resource types of 'xhr' and 'fetch' until I can work out how to only send these to the download request. """ self.logger.info(f'test the log of data: {response.request.resource_type, response.request.url, response.request.method}') jl_file = "videos.jl" data = {} if response.request.resource_type == "xhr": if response.request.method == "GET": if 'videos' in response.request.url: data['resource_type']=response.request.resource_type, data['request_url']=response.request.url, data['method']=response.request.method with jsonlines.open(jl_file, mode='a') as writer: writer.write(data) def parse(self, response): video = pd.read_json('videos.jl', lines=True) print('KEST: %s' % video['request_url'][0][0]) yield scrapy.FormRequest( url = video['request_url'][0][0], headers=headers, callback = self.parse_video_json ) def parse_video_json(self, response): another_url = response.json().get('video_url') yield scrapy.FormRequest( another_url, headers=headers, callback=self.extract_videos ) def extract_videos(self, response): videos = response.json().get('mp4') for keys, vids in videos.items(): loader = videoItem() loader['video'] = [vids] yield loader items.py class videoItem(scrapy.Item): video = scrapy.Field() pipelines.py class downloadFilesPipeline(FilesPipeline): def file_path(self, request, response=None, info=None, item=None): file = request.url.split('/')[-1] video_file = f"{file}.mp4" return video_file settings.py from pathlib import Path import os BASE_DIR = Path(__file__).resolve().parent.parent FILES_STORE = os.path.join(BASE_DIR, 'videos') FILES_URLS_FIELD = 'video' FILES_RESULT_FIELD = 'results' DOWNLOAD_HANDLERS = { "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler", "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler", } TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor" ITEM_PIPELINES = { 'video.pipelines.downloadFilesPipeline': 150, } | 6 | 1 |
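Once one of the signed mp4 URLs has been extracted (whether via scrapy-playwright as above or plain playwright), the download itself needs nothing scrapy-specific; a minimal streaming sketch with requests (the URL below is a placeholder, since the real signed links expire):

```python
import requests

def download_video(url: str, out_path: str, chunk_size: int = 1 << 20) -> None:
    # Stream the body so the whole file never has to sit in memory.
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(out_path, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                fh.write(chunk)

# The real URL would be the '720' entry of the JSON shown earlier:
# download_video("https://example.invalid/some-race-720p.mp4", "race.mp4")
```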
73,195,338 | 2022-8-1 | https://stackoverflow.com/questions/73195338/how-to-avoid-database-connection-pool-from-being-exhausted-when-using-fastapi-in | I use FastAPI for a production application that uses asyncio almost entirely except when hitting the database. The database still relies on synchronous SQLAlchemy as the async version was still in alpha (or early beta) at the time. While our services do end up making synchronous blocking calls when it hits the database it's still wrapped in async functions. We do run multiple workers and several instances of the app to ensure we don't hit serious bottlenecks. Concurrency with Threads I understand that FastAPI offers concurrency using threads when using the def controller_method approach but I can't seem to find any details around how it controls the environment. Could somebody help me understand how to control the maximum threads a process can generate. What if it hits system limits? Database connections When I use the async await model I create database connection objects in the middleware which is injected into the controller actions. @app.middleware("http") async def db_session_middleware(request: Request, call_next): await _set_request_id() try: request.state.db = get_sessionmaker(scope_func=None) response = await call_next(request) finally: if request.state.db.is_active: request.state.db.close() return response When it's done via threads is the controller already getting called in a separate thread, ensuring a separate connection for each request? Now if I can't limit the number of threads that are being spawned by the main process, if my application gets a sudden surge of requests won't it overshoot the database connection pool limit and eventually blocking my application? Is there a central threadpool used by FastAPI that I can configure or is this controlled by Uvicorn? Uvicorn I see that Uvicorn has a configuration that let's it limit the concurrency using the --limit-concurrency 60 flag. Is this governing the number of concurrent threads created in the threaded mode? If so, should this always be a lower than my connection pool ( connection pool + max_overflow=40) So in the scenario, where I'm allowing a uvicorn concurrency limit of 60 my db connection pool configurations should be something like this? engine = sqlalchemy.create_engine( cfg("DB_URL"), pool_size=40, max_overflow=20, echo=False, pool_use_lifo=False, pool_recycle=120 ) Is there a central threadpool that is being used in this case? Are there any sample projects that I can look at to see how this could be configured when deployed at scale. I've used Netflix Dispatch as a reference but if there are other projects I'd definitely want to look at those. | Fastapi uses Starlette as an underlying framework. Starlette provides a mechanism for starting def path operations in the thread pool for which it uses anyio. Therefore, we can limit the number of threads which can be executed simultaneously by setting property total_tokens of anyio's CapacityLimiter. 
Example below: import threading import anyio import uvicorn from fastapi import FastAPI import time import logging THREADS_LIMIT = 5 logging.basicConfig(level=logging.DEBUG) app = FastAPI() class Counter(object): def __init__(self): self._value = 0 self._lock = threading.Lock() def increment(self): with self._lock: self._value += 1 def decrement(self): with self._lock: self._value -= 1 def value(self): with self._lock: return self._value counter = Counter() @app.get("/start_task") def start_task(): counter.increment() logging.info("Route started. Counter: %d", counter.value()) time.sleep(10) counter.decrement() logging.info("Route stopped. Counter: %d", counter.value()) return "Task done" @app.on_event("startup") async def startup_event(): limiter = anyio.to_thread.current_default_thread_limiter() limiter.total_tokens = THREADS_LIMIT if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", port=8000, log_level="debug") Try to open 50 connections in parallel: seq 1 50 | xargs -n1 -P50 curl "http://localhost:8000/start_task" We see that the number of simultaneously processed requests is limited to 5. Output: INFO:root:Route started. Counter: 1 INFO:root:Route started. Counter: 2 INFO:root:Route started. Counter: 3 INFO:root:Route started. Counter: 4 INFO:root:Route started. Counter: 5 INFO:root:Route stopped. Counter: 4 INFO:uvicorn.access:127.0.0.1:60830 - "GET /start_task HTTP/1.1" 200 INFO:root:Route stopped. Counter: 3 INFO:root:Route started. Counter: 4 INFO:uvicorn.access:127.0.0.1:60832 - "GET /start_task HTTP/1.1" 200 INFO:root:Route started. Counter: 5 ... | 6 | 7 |
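One way to connect the limiter to the pool-sizing part of the question: per worker, at most total_tokens def-routes can touch the database at once, so the SQLAlchemy pool can be sized against that number rather than against Uvicorn's --limit-concurrency. A hedged sketch only, with a placeholder URL and figures that would need tuning per deployment (psycopg2 assumed installed):

```python
from sqlalchemy import create_engine

THREADS_LIMIT = 5  # the same value handed to anyio's CapacityLimiter above

# With at most THREADS_LIMIT def routes running concurrently per worker,
# a pool at least that large should keep requests from also queueing
# on the connection pool itself.
engine = create_engine(
    "postgresql+psycopg2://user:password@localhost:5432/mydb",
    pool_size=THREADS_LIMIT,
    max_overflow=2,
    pool_pre_ping=True,
    pool_recycle=120,
)
```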
73,209,565 | 2022-8-2 | https://stackoverflow.com/questions/73209565/strange-behaviour-during-multiprocess-calls-to-numpy-conjugate | The attached script evaluates the numpy.conjugate routine for varying numbers of parallel processes on differently sized matrices and records the corresponding run times. The matrix shape only varies in it's first dimension (from 1,64,64 to 256,64,64). Conjugation calls are always made on 1,64,64 sub matrices to ensure that the parts that are being worked on fit into the L2 cache on my system (256 KB per core, L3 cache in my case is 25MB). Running the script yields the following diagram (with slightly different ax labels and colors). As you can see starting from a shape of around 100,64,64 the runtime is depending on the number of parallel processes which are used. What could be the cause of this ? Or why is the dependence on the number of processes for matrices below (100,64,64) so low? My main goal is to find a modification to this script such that the runtime becomes as independent as possible from the number of processes for matrices 'a' of arbitrary size. In case of 20 Processes: all 'a' matrices take at most: 20 * 16 * 256 * 64 * 64 Byte = 320MB all 'b' sub matrices take at most: 20 * 16 * 1 * 64 * 64 Byte = 1.25MB So all sub matrices fit simultaneously in L3 cache as well as individually in the L2 cache per core of my CPU. I did only use physical cores no hyper-threading for these tests. Here is the script: from multiprocessing import Process, Queue import time import numpy as np import os from matplotlib import pyplot as plt os.environ['OPENBLAS_NUM_THREADS'] = '1' os.environ['MKL_NUM_THREADS'] = '1' def f(q,size): a = np.random.rand(size,64,64) + 1.j*np.random.rand(size,64,64) start = time.time() n=a.shape[0] for i in range(20): for b in a: b.conj() duration = time.time()-start q.put(duration) def speed_test(number_of_processes=1,size=1): number_of_processes = number_of_processes process_list=[] queue = Queue() #Start processes for p_id in range(number_of_processes): p = Process(target=f,args=(queue,size)) process_list.append(p) p.start() #Wait until all processes are finished for p in process_list: p.join() output = [] while queue.qsize() != 0: output.append(queue.get()) return np.mean(output) if __name__ == '__main__': processes=np.arange(1,20,3) data=[[] for i in processes] for p_id,p in enumerate(processes): for size_0 in range(1,257): data[p_id].append(speed_test(number_of_processes=p,size=size_0)) fig,ax = plt.subplots() for d in data: ax.plot(d) ax.set_xlabel('Matrix Size: 1-256,64,64') ax.set_ylabel('Runtime in seconds') fig.savefig('result.png') | The problem is due to at least a combination of two complex effects: cache-thrashing and frequency-scaling. I can reproduce the effect on my 6 core i5-9600KF processor. Cache thrashing The biggest effect comes from a cache-thrashing issue. It can be easily tracked by looking at the RAM throughput. Indeed, it is 4 GiB/s for 1 process and 20 GiB/s for 6 processes. The read throughput is similar to the write one so the throughput is symmetric. My RAM is able to reach up to ~40 GiB/s but usually ~32 GiB/s only for mixed read/write patterns. This means the RAM pressure is pretty big. Such use-case typically occurs in two cases: an array is read/written-back from/to the RAM because cache are not big enough; many access to different locations in memory are made but they are mapped in the same cache lines in the L3. 
At first glance, the first case is much more likely to happen here since arrays are contiguous and pretty big (the other effect unfortunately also happens, see below). In fact, the main problem is the a array that is too big to fit in the L3. Indeed, when size is >128, a takes more than 128*64*64*8*2 = 8 MiB/process. Actually, a is built from two array that must be read so the space needed in cache is 3 time bigger than that: ie. >24 MiB/process. The thing is all processes allocate the same amount of memory, so the bigger the number of processes the bigger the cumulative space taken by a. When the cumulative space is bigger than the cache, the processor needs to write data to the RAM and read it back which is slow. In fact, this is even worse: processes are not fully synchronized so some process can flush data needed by others due to the filling of a. Furthermore, b.conj() creates a new array that may not be allocated at the same memory allocation every time so the processor also needs to write data back. This effect is dependent of the low-level allocator being used. One can use the out parameter so to fix this problem. That being said, the problem was not significant on my machine (using out was 2% faster with 6 processes and equally fast with 1 process). Put it shortly, more processes access more data and the global amount of data do not fit in CPU caches decreasing performance since arrays need to be reloaded over and over. Frequency scaling Modern-processors use frequency scaling (like turbo-boost) so to make (quite) sequential applications faster, but they cannot use the same frequency for all cores when they are doing computation because processors have a limited power budget. This results of a lower theoretical scalability. The thing is all processes are doing the same work so N processes running on N cores are not N times takes more time than 1 process running on 1 core. When 1 process is created, two cores are operating at 4550-4600 MHz (and others are at 3700 MHz) while when 6 processes are running, all cores operate at 4300 MHz. This is enough to explain a difference up to 7% on my machine. You can hardly control the turbo frequency but you can either disable it completely or control the frequency so the minimum-maximum frequency are both set to the base frequency. Note that the processor is free to use a much lower frequency in pathological cases (ie. throttling, when a critical temperature reached). I do see an improved behavior by tweaking frequencies (7~10% better in practice). Other effects When the number of process is equal to the number of core, the OS do more context-switches of the process than if one core is left free for other tasks. Context-switches decrease a bit the performance of the process. THis is especially true when all cores are allocated because it is harder for the OS scheduler to avoid unnecessary migrations. This usually happens on PC with many running processes but not much on computing machines. This overhead is about 5-10% on my machine. Note that the number of processes should not exceed the number of cores (and not hyper-threads). Beyond this limit, the performance are hardly predictable and many complex overheads appears (mainly scheduling issues). | 5 | 2 |
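For reference, the out= variant mentioned in the answer (the buffer name is ours); it writes every conjugate into the same preallocated array instead of allocating a new one per slice:

```python
import numpy as np

size = 256
a = np.random.rand(size, 64, 64) + 1.j * np.random.rand(size, 64, 64)

# Reusing one preallocated buffer per process avoids a fresh allocation
# (and the associated write-back traffic) for every 1x64x64 slice.
buf = np.empty((64, 64), dtype=a.dtype)
for _ in range(20):
    for b in a:
        np.conjugate(b, out=buf)
```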
73,166,250 | 2022-7-29 | https://stackoverflow.com/questions/73166250/why-does-a-recursive-python-program-not-crash-my-system | I've written an R.py script which contains the following two lines: import os os.system("python3 R.py") I expected my system to run out of memory after running this script for a few minutes, but it is still surprisingly responsive. Does someone know what kind of Python interpreter magic is happening here? | Preface os.system() is actually a call to C's system(). Here is what the documentation states: The system() function shall behave as if a child process were created using fork(), and the child process invoked the sh utility using execl() as follows: execl(, "sh", "-c", command, (char *)0); where is an unspecified pathname for the sh utility. It is unspecified whether the handlers registered with pthread_atfork() are called as part of the creation of the child process. The system() function shall ignore the SIGINT and SIGQUIT signals, and shall block the SIGCHLD signal, while waiting for the command to terminate. If this might cause the application to miss a signal that would have killed it, then the application should examine the return value from system() and take whatever action is appropriate to the application if the command terminated due to receipt of a signal. The system() function shall not affect the termination status of any child of the calling processes other than the process or processes it itself creates. The system() function shall not return until the child process has terminated. [Option End] The system() function need not be thread-safe. Solution system() creates a child process and exits; there is no stack to be resolved, therefore one would expect this to run as long as resources to do so are available. Furthermore, the operation of creating a child process is not an intensive one; the processes aren't using up many resources, but if allowed to run long enough the script will start to affect general performance and eventually run out of memory to spawn a new child process. Once this occurs the processes will exit.
Example To demonstrate this, set recursion depth limit to 10 and allow the program to run: import os, sys, inspect sys.setrecursionlimit(10) args = sys.argv[1:] arg = int(args[0]) if len(args) else 0 stack_depth = len(inspect.stack(0)) print(f"Iteration {arg} - at stack depth of {stack_depth}") arg += 1 os.system(f"python3 main.py {arg}") Outputs: Iteration 0 - at stack depth of 1 - avaialable memory 43337904128 Iteration 1 - at stack depth of 1 - avaialable memory 43370692608 Iteration 2 - at stack depth of 1 - avaialable memory 43358756864 Iteration 3 - at stack depth of 1 - avaialable memory 43339202560 Iteration 4 - at stack depth of 1 - avaialable memory 43354894336 Iteration 5 - at stack depth of 1 - avaialable memory 43314974720 Iteration 6 - at stack depth of 1 - avaialable memory 43232366592 Iteration 7 - at stack depth of 1 - avaialable memory 43188719616 Iteration 8 - at stack depth of 1 - avaialable memory 43173384192 Iteration 9 - at stack depth of 1 - avaialable memory 43286093824 Iteration 10 - at stack depth of 1 - avaialable memory 43288162304 Iteration 11 - at stack depth of 1 - avaialable memory 43310637056 Iteration 12 - at stack depth of 1 - avaialable memory 43302408192 Iteration 13 - at stack depth of 1 - avaialable memory 43295440896 Iteration 14 - at stack depth of 1 - avaialable memory 43303870464 Iteration 15 - at stack depth of 1 - avaialable memory 43303870464 Iteration 16 - at stack depth of 1 - avaialable memory 43296256000 Iteration 17 - at stack depth of 1 - avaialable memory 43286032384 Iteration 18 - at stack depth of 1 - avaialable memory 43246657536 Iteration 19 - at stack depth of 1 - avaialable memory 43213336576 Iteration 20 - at stack depth of 1 - avaialable memory 43190259712 Iteration 21 - at stack depth of 1 - avaialable memory 43133902848 Iteration 22 - at stack depth of 1 - avaialable memory 43027984384 Iteration 23 - at stack depth of 1 - avaialable memory 43006255104 ... https://replit.com/@pygeek1/os-system-recursion#main.py References https://pubs.opengroup.org/onlinepubs/9699919799/functions/system.html | 7 | 3 |
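The sample output above reports an available-memory figure that the listing itself never prints (the linked replit presumably did); a variant that produces that kind of line, assuming psutil is installed, could look like this:

```python
import inspect
import os
import sys

import psutil  # not part of the original listing; assumed installed

sys.setrecursionlimit(10)

args = sys.argv[1:]
arg = int(args[0]) if len(args) else 0
stack_depth = len(inspect.stack(0))
available = psutil.virtual_memory().available
print(f"Iteration {arg} - at stack depth of {stack_depth} - available memory {available}")

arg += 1
os.system(f"python3 main.py {arg}")
```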
73,157,370 | 2022-7-28 | https://stackoverflow.com/questions/73157370/pyspark-to-azure-sql-database-connection-issue | I'm trying to connect to Azure SQL Database from Azure Synapse workspace Notebook using PySpark. Also I would like to use Active Directory integrated authentication. So what I've tried: jdbc_df = spark.read \ .format("com.microsoft.sqlserver.jdbc.spark") \ .option("url", "jdbc:sqlserver://my_server_name.database.windows.net:1433") \ .option("database","my_db_name") \ .option("dbtable", "my_table_or_query") \ .option("authentication", "ActiveDirectoryIntegrated") \ .option("encrypt", "true") \ .option("hostNameInCertificate", "*.database.windows.net") \ .load() Also I've tried the same way but in different syntax jdbcUrl = "jdbc:sqlserver://my_server_name.database.windows.net:1433;database=my_db_name;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;Authentication=ActiveDirectoryIntegrated" pushdown_query = "SELECT col1 FROM my_table_name" connectionProperties = { "driver" : "com.microsoft.sqlserver.jdbc.SQLServerDriver" } df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query) display(df) And in both cases I get error IllegalArgumentException: KrbException: Cannot locate default realm What I'm doing wrong? | Finally I have found the solution! First of all there should be created working Linked service to Azure SQL database in your Synapse Analytics that uses Authentication type "System Assigned Managed Identity". Than you can reference it in your PySpark Notebook. And don't be confused that method getConnectionString is used to get access token - it really returns not connection string but token. jdbcUrl = "jdbc:sqlserver://my_server_name.database.windows.net:1433;database=my_db_name;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30" token=TokenLibrary.getConnectionString("AzureSQLLinkedServiceName") pushdown_query = "(SELECT col1 FROM my_table_name) as tbl" connectionProperties = { "driver" : "com.microsoft.sqlserver.jdbc.SQLServerDriver", "accessToken" : token } df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties) display(df) | 4 | 4 |
73,205,546 | 2022-8-2 | https://stackoverflow.com/questions/73205546/spacy-how-not-to-remove-not-when-cleaning-the-text-with-space | I use this spaCy code to later apply it on my text, but I need the negative words, like "not", to stay in the text. nlp = spacy.load("en_core_web_sm") def my_tokenizer(sentence): return [token.lemma_ for token in tqdm(nlp(sentence.lower()), leave = False) if token.is_stop == False and token.is_alpha == True and token.lemma_ ] With this, when I apply it, I get this as a result: [hello, earphone, work] However, the original sentence was hello,my earphones are still not working. So, I would like to see the following sentence: [earphone, still, not, work] Thank you | "not" is actually a stop word, and in your code a token is removed if it's a stop word. You can see this either by checking "not" in spacy.lang.en.stop_words.STOP_WORDS or by looping over the tokens of your doc object: for tok in nlp(text.lower()): print(tok.text, tok.is_stop, tok.lemma_) #hello False hello #, False , #my True my #earphones False earphone #are True be #still True still #not True not #working False work #. False . Solution To solve this, you should remove the target words such as "not" from the list of stop words. You can do it this way: # spacy.lang.en.stop_words.STOP_WORDS.remove("not") # or for multiple words use this to_del_elements = {"not", "no"} nlp.Defaults.stop_words = nlp.Defaults.stop_words - to_del_elements Then you can rerun your code and you'll get your expected results: import spacy #spacy.lang.en.stop_words.STOP_WORDS.remove("not") to_del_elements = {"not", "no"} nlp.Defaults.stop_words = nlp.Defaults.stop_words - to_del_elements nlp = spacy.load("en_core_web_sm") def my_tokenizer(sentence): return [token.lemma_ for token in tqdm(nlp(sentence.lower()), leave = False) if token.is_stop == False and token.is_alpha == True and token.lemma_ ] sentence = "hello,my earphones are still not working. no way they will work" results = my_tokenizer(sentence) print(results) #['hello', 'earphone', 'not', 'work', 'no', 'way', 'work'] | 4 | 4
73,206,939 | 2022-8-2 | https://stackoverflow.com/questions/73206939/heroku-postgres-postgis-django-releases-fail-with-relation-spatial-ref-sys | Heroku changed their PostgreSQL extension schema management on 01 August 2022. (https://devcenter.heroku.com/changelog-items/2446) Since then every deployment to Heroku of our existing django 4.0 application fails during the release phase, the build succeeds. Has anyone experienced the same issue? Is there a workaround to push new release to Heroku except reinstalling the postgis extension? If I understand the changes right, Heroku added a schema called "heroku_ext" for newly created extensions. As the extension is existing in our case, it should not be affected. All currently installed extensions will continue to work as intended. Following the full logs of an release via git push: git push staging develop:master Gesamt 0 (Delta 0), Wiederverwendet 0 (Delta 0), Pack wiederverwendet 0 remote: Compressing source files... done. remote: Building source: remote: remote: -----> Building on the Heroku-20 stack remote: -----> Using buildpacks: remote: 1. https://github.com/heroku/heroku-geo-buildpack.git remote: 2. heroku/python remote: -----> Geo Packages (GDAL/GEOS/PROJ) app detected remote: -----> Installing GDAL-2.4.0 remote: -----> Installing GEOS-3.7.2 remote: -----> Installing PROJ-5.2.0 remote: -----> Python app detected remote: -----> Using Python version specified in runtime.txt remote: -----> No change in requirements detected, installing from cache remote: -----> Using cached install of python-3.9.13 remote: -----> Installing pip 22.1.2, setuptools 60.10.0 and wheel 0.37.1 remote: -----> Installing SQLite3 remote: -----> Installing requirements with pip remote: -----> Skipping Django collectstatic since the env var DISABLE_COLLECTSTATIC is set. remote: -----> Discovering process types remote: Procfile declares types -> release, web, worker remote: remote: -----> Compressing... remote: Done: 156.1M remote: -----> Launching... remote: ! Release command declared: this new release will not be available until the command succeeds. remote: Released v123 remote: https://myherokuapp.herokuapp.com/ deployed to Heroku remote: remote: This app is using the Heroku-20 stack, however a newer stack is available. remote: To upgrade to Heroku-22, see: remote: https://devcenter.heroku.com/articles/upgrading-to-the-latest-stack remote: remote: Verifying deploy... done. remote: Running release command... 
remote: remote: Traceback (most recent call last): remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/utils.py", line 87, in _execute remote: return self.cursor.execute(sql) remote: psycopg2.errors.UndefinedTable: relation "spatial_ref_sys" does not exist remote: remote: remote: The above exception was the direct cause of the following exception: remote: remote: Traceback (most recent call last): remote: File "/app/manage.py", line 22, in <module> remote: main() remote: File "/app/manage.py", line 18, in main remote: execute_from_command_line(sys.argv) remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line remote: utility.execute() remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/core/management/__init__.py", line 440, in execute remote: self.fetch_command(subcommand).run_from_argv(self.argv) remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/core/management/base.py", line 414, in run_from_argv remote: self.execute(*args, **cmd_options) remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/core/management/base.py", line 460, in execute remote: output = self.handle(*args, **options) remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/core/management/base.py", line 98, in wrapped remote: res = handle_func(*args, **kwargs) remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 106, in handle remote: connection.prepare_database() remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/contrib/gis/db/backends/postgis/base.py", line 26, in prepare_database remote: cursor.execute("CREATE EXTENSION IF NOT EXISTS postgis") remote: File "/app/.heroku/python/lib/python3.9/site-packages/sentry_sdk/integrations/django/__init__.py", line 544, in execute remote: return real_execute(self, sql, params) remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute remote: return self._execute_with_wrappers( remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers remote: return executor(sql, params, many, context) remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute remote: return self.cursor.execute(sql, params) remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/db/utils.py", line 91, in __exit__ remote: raise dj_exc_value.with_traceback(traceback) from exc_value remote: File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/utils.py", line 87, in _execute remote: return self.cursor.execute(sql) remote: django.db.utils.ProgrammingError: relation "spatial_ref_sys" does not exist remote: remote: Sentry is attempting to send 2 pending error messages remote: Waiting up to 2 seconds remote: Press Ctrl-C to quit remote: Waiting for release.... failed. 
To https://git.heroku.com/myherokuapp | I've worked around it by overriding the PostGIS base.py engine. I've put the following in my app under db/base.py: from django.contrib.gis.db.backends.postgis.base import ( DatabaseWrapper as PostGISDatabaseWrapper, ) class DatabaseWrapper(PostGISDatabaseWrapper): def prepare_database(self): # This is the override - we don't want to call # super() because of the faulty extension creation pass Then in my settings I've just pointed the database ENGINE at "app.db". It won't help with backups, but at least I can release again. | 32 | 3
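For illustration, the settings change mentioned at the end of that answer, sketched as a minimal snippet. Here "app.db" is a hypothetical package path and stands for wherever the db/base.py wrapper above actually lives; Django resolves the ENGINE string to <package>.base.DatabaseWrapper, which is why only the package path is given.

# Hypothetical settings.py fragment; "app.db" is the package that contains
# the overriding base.py shown above. Django appends ".base" and loads its
# DatabaseWrapper class, so the remaining connection keys stay unchanged.
DATABASES = {
    "default": {
        "ENGINE": "app.db",
        # NAME, USER, PASSWORD, HOST, PORT ... exactly as in the original config
    }
}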
73,212,039 | 2022-8-2 | https://stackoverflow.com/questions/73212039/pyside6-app-crashes-when-using-qpainter-drawline | On Windows 10, python3.10, PySide6 (or PyQt6) QApplication crashes when calling QPainter.drawLine() . The terminal just displays : Process finished with exit code -1073741819 (0xC0000005) Please find below the code: import sys from PySide6.QtCore import QPoint, Qt from PySide6.QtGui import QColor, QPainter, QPen, QPixmap from PySide6.QtWidgets import QApplication, QLabel, QMainWindow # from PyQt6.QtCore import QPoint, Qt # from PyQt6.QtGui import QColor, QPainter, QPen, QPixmap # from PyQt6.QtWidgets import QApplication, QLabel, QMainWindow class MainWindow(QMainWindow): def __init__(self): super().__init__() self.label = QLabel() canvas = QPixmap(400, 300) canvas.fill(Qt.GlobalColor.white) self.label.setPixmap(canvas) self.setCentralWidget(self.label) self.draw_something() def draw_something(self): painter = QPainter(self.label.pixmap()) painter.drawLine(10, 10, 300, 200) # >=========== Crash Here painter.end() if __name__ == '__main__': app = QApplication(sys.argv) window = MainWindow() window.show() app.exec() | This is caused by a slight (and not well documented) change in the API that happened starting with Qt5.15. Until Qt5, pixmap() returned a direct pointer to the current pixmap of the label, while in Qt6 it returns an implicit copy of the pixmap. The difference is highlighted only for the latest Qt5 documentation of the pixmap() property: Previously, Qt provided a version of pixmap() which returned the pixmap by-pointer. That version is now deprecated. To maintain compatibility with old code, you can explicitly differentiate between the by-pointer function and the by-value function: To Python developers that's not obvious, but to C++ it's clear by the change of the const QPixmap * (note the asterisk, meaning that it's a pointer) to a pure QPixmap type, meaning that the returned object is a new QPixmap object based on the current pixmap, and not a reference to the pixmap object currently set for the label. Now, the fact is that, conceptually speaking, we should not be able to directly "live draw" on the current pixmap of the label, because: setPixmap() always creates a copy of the pixmap for the label; due to the point above, there is no point in sharing the same pixmap object among different labels, so, as a consequence, pixmap() should always return a copy of that object; Previously, it was possible to directly paint on a QLabel's pixmap (while ensuring that update() was immediately called on the label). The current API instead requests an explicit call to setPixmap() after drawing. So, the solution is to create a reference to the pixmap as long as it's needed: def draw_something(self): pm = self.label.pixmap() painter = QPainter(pm) painter.drawLine(10, 10, 300, 200) painter.end() self.label.setPixmap(pm) | 4 | 2 |
73,206,785 | 2022-8-2 | https://stackoverflow.com/questions/73206785/logging-snakemakes-own-console-output-how-to-change-what-file-snakemake-logs | I'm trying to save Snakemake's own console output (not the logs generated by the individual jobs) to an arbitrary file while still having it written to stdout/stderr. (Unfortunately, my setup means I can't just use tee.) It looks to me like Snakemake should provide that functionality, given it saves the log output to a default location. However, looking through the documentation, I couldn't find a parameter to easily change the output location for the log. From the code of snakemake.logging, I'm getting the impression the logfile's parent directory may be hardcoded, with the file just getting a timestamped name. Is there some bleedingly obvious way to configure what file Snakemake logs to that I'm overlooking? | The file name/path is hardcoded in setup_logfile. It's a hack, but one option is to copy the log file to the desired location using onsuccess/onerror (note that there is no oncompletion, so the log copying should probably apply to both cases): onsuccess: shell("cp -v {log} some_path_for_log_copy.log") onerror: shell("cp -v {log} some_path_for_log_copy.log") | 4 | 4 |
73,200,080 | 2022-8-1 | https://stackoverflow.com/questions/73200080/assign-line-a-color-by-its-angle-in-matplotlib | I'm looking for a way to assign color to line plots in matplotlib in a way that's responsive to the line's angle. This is my current code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline horz = [[0.5,0.6,0.8],[0.1,0.8,0.9],[0.2,0.5,0.9]] vert = [[0.1,0.2,0.3],[0.05,0.1,0.15],[0.2,0.3,0.35]] f = plt.figure(figsize=(6,6)) ax = plt.axes() for column in range(0,len(horz)): x = np.array(horz[column]) y = np.array(vert[column]) #LINEAR TRENDLINE z = np.polyfit(horz[column], vert[column], 1) p = np.poly1d(z) ax.plot(horz[column],p(horz[column]),"-") plt.arrow(x=horz[column][-2],y=p(horz[column])[-2],dx=(horz[column][-1]-horz[column][-2]),dy=(p(horz[column])[-1]-p(horz[column])[-2]), shape='full', lw=.01, length_includes_head=True, head_width=.012, head_length=0.02, head_starts_at_zero=False, overhang = 0.5) #FIG SETTINGS plt.xlim([0, 1]) plt.ylim([0.1,0.5]) ax.set_title('Title', fontsize = 14) The idea here would be that if the line is at 0 degrees, it would be at one end of a given gradient, and if it were at 90 degrees, at the other end. Additionally, I'd like the line length to be taken as the intensity of the color. So if the line is short, it'd be closer to white, and if the line is long, it'd be closer to the raw color from the gradient. | Managed to solve it myself. Used pretty simple formulas for calculating the lines' slopes and distances and then used these as input for the color mapping and alpha transparency attribute. import geopandas as gpd import pandas as pd import matplotlib.pyplot as plt from matplotlib import cm import matplotlib.colors as colors import numpy as np %matplotlib inline #Data horz = [[0.5,0.6,0.8],[0.1,0.3,0.4],[0.2,0.5,0.9],[0.9,0.95,0.95]] vert = [[0.1,0.2,0.45],[0.05,0.1,0.15],[0.2,0.3,0.35],[0.1,0.3,0.5]] #Slope calculation def slopee(x1,y1,x2,y2): x = (y2 - y1) / (x2 - x1) return x #Color set up cmap = plt.cm.coolwarm_r #0 means a horizontal line, 1 means a line at 45 degrees, infinite means a vertical line (2 is vertical enough) cNorm = colors.Normalize(vmin=0, vmax=2) scalarMap = cm.ScalarMappable(norm=cNorm,cmap=cmap) #Fig settings f = plt.figure(figsize=(6,6)) ax = plt.axes() for column in range(0,len(horz)): x = np.array(horz[column]) y = np.array(vert[column]) #LINEAR TRENDLINE # 1 LINEAR # >=2 POLINOMIAL z = np.polyfit(horz[column], vert[column], 1) p = np.poly1d(z) #Distance calc formula def calculateDistance(x1,y1,x2,y2): dist = np.sqrt((x2 - x1)**2 + (y2 - y1)**2) return dist #Set up max an min distances maxdist = calculateDistance(0,0,0,0.9) mindist = calculateDistance(0,0,0,0) #Calculate line slope slope = slopee(horz[column][0],p(horz[column])[0],horz[column][-1],p(horz[column])[-1]) #Not interested in any slopes going "down" if slope >=0: #Map colors based on slope (0-2) colorVal = scalarMap.to_rgba(slope) #Map transparency based on distance transparency = (calculateDistance(horz[column][0],p(horz[column])[0],horz[column][-1],p(horz[column])[-1])-mindist)/(maxdist-mindist) #Set up minimun transparency to be 50% instead of 0% transparency = (0.5*transparency) + 0.5 #The actual arrow plot plt.arrow(x=horz[column][0],y=p(horz[column])[0],dx=(horz[column][-1]-horz[column][0]),dy=(p(horz[column])[-1]-p(horz[column])[0]), shape='full',length_includes_head=True, head_starts_at_zero=False, lw=.5, head_width=.011, head_length=0.01, overhang = 0.5, color=colorVal,alpha=transparency) #FIG SETTINGS plt.xlim([0, 1]) 
plt.ylim([0,0.5]) ax.set_title('Title',fontsize = 14) | 5 | 1 |
73,199,376 | 2022-8-1 | https://stackoverflow.com/questions/73199376/requestsdependencywarning-urllib3-1-26-11-or-chardet-3-0-4-doesnt-match-a | I have this script to acess my internet modem and reboot the device, but stop to work some weeks ago. Here my code: from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager service = Service(executable_path=ChromeDriverManager().install()) driver = webdriver.Chrome(service=service) from selenium.webdriver.common.by import By chrome_options = Options() chrome_options.add_argument('--headless') chrome_options.add_argument('--no-sandbox') chrome_options.add_argument('--disable-dev-shm-usage') driver = webdriver.Chrome(ChromeDriverManager().install(),options=chrome_options) driver.get('http://192.168.15.1/me_configuracao_avancada.asp',) user = driver.find_element(By.ID, "txtUser") user.send_keys("support") btnLogin = driver.find_element(By.ID, "btnLogin") btnLogin.click() driver.get('http://192.168.15.1/reboot.asp',) reboot = driver.find_element(By.ID, "btnReboot") reboot.click() print("Modem Reiniciado!") now when i run, this error messages return: /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported " Traceback (most recent call last): File "modem.py", line 7, in <module> driver = webdriver.Chrome(service=service) File "/home/fabio/.local/lib/python3.8/site-packages/selenium/webdriver/chrome/webdriver.py", line 69, in __init__ super().__init__(DesiredCapabilities.CHROME['browserName'], "goog", File "/home/fabio/.local/lib/python3.8/site-packages/selenium/webdriver/chromium/webdriver.py", line 92, in __init__ super().__init__( File "/home/fabio/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 277, in __init__ self.start_session(capabilities, browser_profile) File "/home/fabio/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 370, in start_session response = self.execute(Command.NEW_SESSION, parameters) File "/home/fabio/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 435, in execute self.error_handler.check_response(response) File "/home/fabio/.local/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 247, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally. (unknown error: DevToolsActivePort file doesn't exist) (The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.) Stacktrace: #0 0x561e346d9cd3 <unknown> #1 0x561e344e1968 <unknown> #2 0x561e3450625c <unknown> #3 0x561e345018fa <unknown> #4 0x561e3453c94a <unknown> #5 0x561e34536aa3 <unknown> #6 0x561e3450c3fa <unknown> #7 0x561e3450d555 <unknown> #8 0x561e347212bd <unknown> #9 0x561e34725418 <unknown> #10 0x561e3470b36e <unknown> #11 0x561e34726078 <unknown> #12 0x561e346ffbb0 <unknown> #13 0x561e34742d58 <unknown> #14 0x561e34742ed8 <unknown> #15 0x561e3475ccfd <unknown> #16 0x7fc22f8b9609 <unknown> Some weeks ago this code run without any problems, but now i'm stuck I'm using Google Chrome 103.0.5060.134 and ChromeDriver 103.0.5060.134. | This error message... 
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported " ...implies that the installed requests module is out of sync with its dependencies (urllib3/chardet) and needs an update. Solution You can update the requests module using either of the following commands: pip3 install requests or pip3 install --upgrade requests | 10 | 23
73,206,810 | 2022-8-2 | https://stackoverflow.com/questions/73206810/faker-python-generating-chinese-pinyin-names | I am trying to generate random chinese names using Faker (Python), but it generates the names in chinese characters instead of pinyin. I found this : and it show that it generates them in pinyin, while when I try the same code, it gives me only chinese characters. how to get the pinyin ?? | fake.romanized_name() worked for me. I got lucky by looking through dir(fake). Doesn't seem to have a method for pinyin address that I can see... | 6 | 4 |
73,202,494 | 2022-8-2 | https://stackoverflow.com/questions/73202494/how-do-i-type-hint-a-function-that-returns-a-zip-object | I've got a function that takes an arbitrary amount of lists (or any iterables, for that matter) and sorts them as one. The code looks like this: def sort_as_one(*args): return zip(*sorted(zip(*args))) def main(): list1 = [3, 1, 2, 4] list2 = ["a", "b", "d", "e"] list3 = [True, False, True, False] result = sort_as_one(list1, list2, list3) # <zip object at ...> print(result) list1, list2, list3 = result print(list1, list2, list3) if __name__ == "__main__": main() How can I accurately type hint the function output? | A zip object is an iterator - it follows the iterator protocol. Idiomatically, you would probably just typing hint it as such. In this case, you want to type hint it as a generic using a type variable: import typing T = typing.TypeVar("T") def sort_as_one(*args: T) -> typing.Iterator[T]: return zip(*sorted(zip(*args))) Note, if you are using variadic arguments, you have to only accept a single type. In this case, the best you can probably do in your case is use Any instead of T. But you should consider using only a function like the above in your code if you want to be able to use it with static type checkers. | 5 | 7 |
73,200,378 | 2022-8-1 | https://stackoverflow.com/questions/73200378/typeerror-float-argument-must-be-a-string-or-a-number-not-natype | I have a column in my dataframe that contains nan values and int values. The original dType was float64, but I was trying to change it to int6, and change nan values to np.nan. now I get this error: TypeError: float() argument must be a string or a number, not 'NAType' when trying to run imputation on it. In the following table, column is similar to "age" data = {'name': ['Alex', 'Ben', 'Marry','Alex', 'Ben', 'Marry'], 'job': ['teacher', 'doctor', 'engineer','teacher', 'doctor', 'engineer'], 'age': [27, 32, 78,27, 32, 78], 'weight': [160, 209, 130,164, 206, 132], 'date': ['6-12-2022', '6-12-2022', '6-12-2022','6-13-2022', '6-13-2022', '6-13-2022'] } df = pd.DataFrame(data) df |name |job |age|weight |date |---|-------|-----------|---|-------|-------- |0 |Alex |teacher |27 |160 |6-12-2022 |1 |Ben |doctor |32 |209 |6-12-2022 |2 |Marry |engineer |78 |130 |6-12-2022 |3 |Alex |teacher |27 |164 |6-13-2022 |4 |Ben |doctor |32 |206 |6-13-2022 |5 |Marry |engineer |78 |132 |6-13-2022 |6 |Alex |teacher |NaN|NaN |6-14-2022 |7 |Ben |doctor |NaN|NaN |6-14-2022 |8 |Marry |engineer |NaN|NaN |6-14-2022 and this is what I tried: df['age']=df['age'].astype( dtype={'age': pd.Int8Dtype()}) df.loc[df.age== '<NA>', 'age']=np.nan Is there any way to change float64 to smaller datatype without causing this issue? Please advise, thanks | Use df['age'] = df['age'].astype(dtype='Int64') with extension datatype Int64 (with a capitalized I) rather than the default dtype which is int64 (lower case i). The latter throws an IntCastingNaNError while the former works smoothly. This functionality was added to Pandas 0.24 and mentioned in this thread. | 8 | 7 |
73,198,894 | 2022-8-1 | https://stackoverflow.com/questions/73198894/which-magic-method-does-hasattr-call | Which magic method does hasattr call? getattr(__o, name) can also be called as __o.__getattr__(name) setattr(__o, name) can also be called as __o.__setattr__(name) But what is the equivalent for hasattr? I know the associated magic method for the in keyword is __contains__. | There is no specific dunder method for hasattr(). It's essentially equivalent to: def hasattr(object, name): try: getattr(object, name) return True except AttributeError: return False So it's dependent on the same dunder methods used by getattr(). | 5 | 5
73,186,539 | 2022-7-31 | https://stackoverflow.com/questions/73186539/dynamic-enum-values-on-nested-classes-with-python | Consider the following enum class: from enum import Enum class Namespace: class StockAPI(Enum): ITEMS = "{url}/items" INVENTORY = "{url}/inventory" class CustomerAPI(Enum): USERS = "{url}/users" PURCHASES = "{url}/purchases" def __init__(self, url): self.url = url I am trying to make url a dynamic value for each enum class. What can I do here so that I can call some enum class in one of the following ways: Namespace.StockAPI.ITEMS.value would return http://localhost/items? Namespace(url="http://localhost").StockAPI.ITEMS.value would also return http://localhost/items Is this possible to do without doing variable interpolation each time I access each enum property? Could factory pattern be of any help here? | I did it the following way, while keeping the inner enum classes: from enum import Enum class Namespace: class StockAPI(Enum): ITEMS = "{url}/items" INVENTORY = "{url}/inventory" class CustomerAPI(Enum): USERS = "{url}/users" PURCHASES = "{url}/purchases" def __init__(self, url: str): attrs = (getattr(self, attr) for attr in dir(self)) enums = (a for a in attrs if isinstance(a, type) and issubclass(a, Enum)) for enum in enums: kv = {e.name: e.value.format(url=url) for e in enum} setattr(self, enum.__name__, Enum(enum.__name__, kv)) print(Namespace(url="http://test.com").StockAPI.ITEMS.value) # http://test.com/items | 4 | 4 |
73,183,974 | 2022-7-31 | https://stackoverflow.com/questions/73183974/how-to-get-a-class-and-definitions-diagram-from-python-code | I have a large multi-file Python application I'd like to document graphically. But first, I made a small "dummy" app to test out different UML packages. (Note: I do have graphviz installed and in the path). Here's my "dummy" code: class User: def __init__(self, level_security=0): self.level_security = level_security def level_security_increment(level_security): level_security += 1 return level_security def printNice(aValue): if type(aValue) != "str": print(str(aValue)) else: print(aValue) return def main(): printNice("Process start") for x in range(0,9): printNice(x) myUser = User(3) printNice(f"Sec lvl: {myUser.level_security}") printNice("Process_done") if __name__ == "__main__": main() Here are the different pyreverse command-line codes I've used to get varying charts. I'll post the chart that came closest to what I want below these codes: pyreverse test.py -S -m y -A -o png pyreverse -ASmn test.py -b -o vdx pyreverse -ASmy test.py -o emf pyreverse -AS -m y test.py -o emf pyreverse -p test.py -o emf pyreverse -AS -b test.py -o emf pyreverse -ASmy test.py -b -o png <-- closest Now here is what that last one you see above produces: Finally, in case it is not clear, I'll reiterate what I want it to show: Classes, Definitions (functions), and - if possible - even variables. But I'd be happy for now to get just Classes and Definitions (functions). To be clear: I want to see function names. Is it as simple as adding/removing a switch to the pyreverse command? Or is there some package I need to add to my "dummy" code? | pyreverse aims to produce a class diagram. It will show you classes, and non-filtered class members (see option -f), as well as associations that can be detected. In this regard, the diagram seems complete. Instances (objects) at top level are not part of a class diagram. This is why pyreverse doesn't show them. Free standing functions do not appear in class diagrams either as they are not classes. There is no consensus about what a free standing function should be in UML. They could be seen as instances of a more general function class (of all the functions with the same signature), or they could be considered as a specific functor class. As no UML rule is defined, pyrevere doesn't show them either. If you want pyreverse to detect them, you should rewrite them as a functor class. | 4 | 3 |
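To illustrate the answer's closing suggestion, here is one possible (hypothetical) way to rewrite the free-standing printNice() from the dummy app as a functor class, so that pyreverse can show it as a class with its __call__ member:

# Sketch only: a functor-class version of printNice(); call sites keep working.
class PrintNice:
    def __call__(self, aValue):
        # same intent as the original function, with an isinstance type check
        if not isinstance(aValue, str):
            print(str(aValue))
        else:
            print(aValue)

printNice = PrintNice()   # printNice("Process start") still works as before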
73,195,438 | 2022-8-1 | https://stackoverflow.com/questions/73195438/openai-gyms-env-step-what-are-the-values | I am getting to know OpenAI's GYM (0.25.1) using Python3.10 with gym's environment set to 'FrozenLake-v1 (code below). According to the documentation, calling env.step() should return a tuple containing 4 values (observation, reward, done, info). However, when running my code accordingly, I get a ValueError: Problematic code: observation, reward, done, info = env.step(new_action) Error: 3 new_action = env.action_space.sample() ----> 5 observation, reward, done, info = env.step(new_action) 7 # here's a look at what we get back 8 print(f"observation: {observation}, reward: {reward}, done: {done}, info: {info}") ValueError: too many values to unpack (expected 4) Adding one more variable fixes the error: a, b, c, d, e = env.step(new_action) print(a, b, c, d, e) Output: 5 0 True True {'prob': 1.0} My interpretation: 5 should be observation 0 is reward prob: 1.0 is info One of the True's is done So what's the leftover boolean standing for? Thank you for your help! Complete code: import gym env = gym.make('FrozenLake-v1', new_step_api=True, render_mode='ansi') # build environment current_obs = env.reset() # start new episode for e in env.render(): print(e) new_action = env.action_space.sample() # random action observation, reward, done, info = env.step(new_action) # perform action, ValueError! for e in env.render(): print(e) | From the code's docstrings: Returns: observation (object): this will be an element of the environment's :attr:`observation_space`. This may, for instance, be a numpy array containing the positions and velocities of certain objects. reward (float): The amount of reward returned as a result of taking the action. terminated (bool): whether a `terminal state` (as defined under the MDP of the task) is reached. In this case further step() calls could return undefined results. truncated (bool): whether a truncation condition outside the scope of the MDP is satisfied. Typically a timelimit, but could also be used to indicate agent physically going out of bounds. Can be used to end the episode prematurely before a `terminal state` is reached. info (dictionary): `info` contains auxiliary diagnostic information (helpful for debugging, learning, and logging). This might, for instance, contain: metrics that describe the agent's performance state, variables that are hidden from observations, or individual reward terms that are combined to produce the total reward. It also can contain information that distinguishes truncation and termination, however this is deprecated in favour of returning two booleans, and will be removed in a future version. (deprecated) done (bool): A boolean value for if the episode has ended, in which case further :meth:`step` calls will return undefined results. A done signal may be emitted for different reasons: >Maybe the task underlying the environment was solved successfully, a certain timelimit was exceeded, or the physics >simulation has entered an invalid state. It appears that the first boolean represents a terminated value, i.e. "whether a terminal state (as defined under the MDP of the task) is reached. In this case further step() calls could return undefined results." It appears that the second represents whether the value has been truncated, i.e. did your agent go out of bounds or not? From the docstring: "whether a truncation condition outside the scope of the MDP is satisfied. 
Typically a timelimit, but could also be used to indicate agent physically going out of bounds. Can be used to end the episode prematurely before a terminal state is reached." | 18 | 6 |
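For the question's setup (gym 0.25 with new_step_api=True), a minimal sketch of how to unpack the five values and recover the old-style done flag:

# step() now returns (observation, reward, terminated, truncated, info)
observation, reward, terminated, truncated, info = env.step(new_action)

# If the rest of the code still expects a single boolean, combine the two:
done = terminated or truncated
print(f"observation: {observation}, reward: {reward}, done: {done}, info: {info}")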
73,191,999 | 2022-8-1 | https://stackoverflow.com/questions/73191999/when-to-use-prepare-data-vs-setup-in-pytorch-lightning | Pytorch's docs on Dataloaders only say, in the code def prepare_data(self): # download ... and def setup(self, stage: Optional[str] = None): # Assign train/val datasets for use in dataloaders Please explain the intended separation between prepare_data and setup, what callbacks may occur between them, and why put something in one over the other. | If you look at the pseudo for the Trainer.fit function provided in the documentation page of LightningModule at Β§ Hooks, you can read: def fit(self): if global_rank == 0: # prepare data is called on GLOBAL_ZERO only prepare_data() ## <-- prepare_data configure_callbacks() with parallel(devices): # devices can be GPUs, TPUs, ... train_on_device(model) def train_on_device(model): # called PER DEVICE on_fit_start() setup("fit") ## <-- setup configure_optimizers() # the sanity check runs here on_train_start() for epoch in epochs: fit_loop() on_train_end() on_fit_end() teardown("fit") You can see prepare_data being called only for global_rank == 0, i.e. it is only called by a single processor. It turns out you can read from the documentation description of prepare_data: LightningModule.prepare_data() Use this to download and prepare data. Downloading and saving data with multiple processes (distributed settings) will result in corrupted data. Lightning ensures this method is called only within a single process, so you can safely add your downloading logic within. Whereas setup is called on all processes as you can read from the pseudo-code above as well as its documentation description: LightningModule.setup(stage=None)Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP. | 4 | 6 |
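A minimal sketch of how the two hooks are typically split, shown here on a LightningDataModule with torchvision's MNIST purely for illustration (the same division applies when the hooks live on a LightningModule):

import pytorch_lightning as pl
from torch.utils.data import random_split
from torchvision.datasets import MNIST
from torchvision import transforms

class MNISTDataModule(pl.LightningDataModule):
    def prepare_data(self):
        # runs once, on a single process: download / write to disk only,
        # and do not assign any state (no self.x = ...) here
        MNIST("data", train=True, download=True)

    def setup(self, stage=None):
        # runs on every process: build datasets and assign them to self
        full = MNIST("data", train=True, transform=transforms.ToTensor())
        self.train_set, self.val_set = random_split(full, [55000, 5000])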
73,179,592 | 2022-7-30 | https://stackoverflow.com/questions/73179592/show-a-dataframe-with-all-rows-that-have-null-values | I am new to pyspark and using Dataframes what I am trying to do is get the subset of all the columns with Null value(s). Most examples I see online show me a filter function on a specific column. Is it possible to filter the entire data frame and show all the rows that contain at least 1 null value? | If you don't care about which columns are null, you can use a loop to create a filtering condition: from pyspark.sql import SparkSession from pyspark.sql import functions as func q1_df = spark\ .createDataFrame([(None, 1, 2), (3, None, 4), (5, 6, None), (7, 8, 9)], ['a', 'b', 'c']) q1_df.show(5, False) +----+----+----+ |a |b |c | +----+----+----+ |null|1 |2 | |3 |null|4 | |5 |6 |null| |7 |8 |9 | +----+----+----+ condition = (func.lit(False)) for col in q1_df.columns: condition = condition | (func.col(col).isNull()) q1_df.filter(condition).show(3, False) +----+----+----+ |a |b |c | +----+----+----+ |null|1 |2 | |3 |null|4 | |5 |6 |null| +----+----+----+ As you're finding the row that any one column is null, you can use the OR condition. Edit on: 2022-08-01 The reason why I first declare condition as func.lit(False) is just for the simplification of my coding, just want to create a "base" condition. In fact, this filter doesn't have any usage in this filtering. When you check the condition, you will see: Column<'(((false OR (a IS NULL)) OR (b IS NULL)) OR (c IS NULL))'> In fact you can use other method to create the condition. For example: for idx, col in enumerate(q1_df.columns): if idx == 0: condition = (func.col(col).isNull()) else: condition = condition | (func.col(col).isNull()) condition Column<'(((a IS NULL) OR (b IS NULL)) OR (c IS NULL))'> Alternatively, if you want to filter out the row that BOTH not null in all columns, in my coding, I would: condition = (func.lit(True)) for col in q1_df.columns: condition = condition & (func.col(col).isNotNull()) As long as you can create all the filtering condition, you can eliminate the func.lit(False). Just to remind that if you create the "base" condition like me, please don't use the python built-in bool type like below since they are not the same type (boolean vs spark column): condition = False for col in q1_df.columns: condition = condition | (func.col(col).isNull()) | 4 | 3 |
73,186,315 | 2022-7-31 | https://stackoverflow.com/questions/73186315/openai-command-not-found-mac | I'm trying to follow the fine tuning guide for Openai here. I ran: pip install --upgrade openai Which install without any errors. But even after restarting my terminal, i still get zsh: command not found: openai Here is the output of echo $PATH: /bin:/usr/bin:/usr/local/bin:/Users/nickrose/Downloads/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin Here is the output of which python: /usr/bin/python Any tips for how to fix this? I'm on MacOS Big Sur 11.6. | Basically pip installs the packages under its related python directory, in a directory called site-packages (most likely, I'm not a python expert tbh). This is not included in the path you provided. First, ask pip to show the location to the package: pip show openai The output would be something like this: Name: openai Version: 0.22.0 Summary: Python client library for the OpenAI API Home-page: https://github.com/openai/openai-python Author: OpenAI Author-email: [email protected] License: Location: /Users/<USER>/DIR/TO/SOME/PYTHON/site-packages Requires: numpy, openpyxl, pandas, pandas-stubs, requests, tqdm Required-by: So your package will be available in /Users/<USER>/DIR/TO/SOME/PYTHON/site-packages/openai Either add /Users/<USER>/DIR/TO/SOME/PYTHON/site-packages/ to your path, or use the complete address to your package, or try to access it using your python: python -m openai # -m stands for module To get more information about the -m flag, run python --help. Update So as you mentioned in the comments, you get permission denied after you add the directory to your package. This actually means that the package exists, but it's not permitted by your OS to execute. This is the thing you have to do, locate your package, and then: sudo chmod +x /PATH/TO/script And the reason you're getting command not found after you use sudo directly with the package, is that you update your path variable in zsh, but when you use sudo, superuser uses sh instead of zsh. | 6 | 7 |
73,183,197 | 2022-7-31 | https://stackoverflow.com/questions/73183197/opencv-not-installing-on-anaconda-prompt | In order to install OpenCV through the Anaconda prompt, I run the following: conda install -c conda-forge opencv However, whenever I try to install it, I get the messages failed with initial frozen solve. Retrying with flexible solve. Failed with repodata from current_repodata.json, will retry with next repodata source This continues as the prompt tries to diagnose which conflicts in my system could be preventing OpenCV from installing. I kept my laptop on overnight, but when I woke up in the morning, it was still diagnosing potential conflicts. I'm not too sure what to do at this point. I just started trying again, but I'm experiencing the same issues. I am trying to install OpenCV so that I can import cv2 to work on machine learning projects for object/image detection. I have also tried pip install -c anaconda opencv but am having the same issues. | Please note that to import cv2, the library/package to install is called opencv-python. From a Jupyter notebook, you can try !pip install opencv-python If you're using Anaconda, you can try conda install -c conda-forge opencv-python | 5 | 4
73,176,562 | 2022-7-30 | https://stackoverflow.com/questions/73176562/how-to-load-a-zip-file-with-pyscript-and-save-into-the-virtual-file-system | I am trying to load a zip file and save it in the virtual file system for further processing with pyscript. In this example, I aim to open it and list its content. As far as I got: See the self standing html code below, adapted from tutorials (with thanks to the author, btw) It is able to load Pyscript, lets the user select a file and loads it (although not in the right format it seems). It creates a dummy zip file and saves it to the virtual file, and list the content. All this works upfront and also if I point the process_file function to that dummy zip file, it indeed opens and lists it. The part that is NOT working is when I select via the button/file selector any valid zip file in the local file system, when loading the data into data it is text (utf-8) and I get this error: File "/lib/python3.10/zipfile.py", line 1353, in _RealGetContents raise BadZipFile("Bad magic number for central directory") zipfile.BadZipFile: Bad magic number for central directory I have tried saving to a file and loading it, instead of using BytesIO , also tried variations of using ArrayBuffer or Stream from here I have also tried creating a FileReader and using readAsBinaryString() or readAsText() and various transformations, with same result: either it fails to recognise the "magic number" or I get "not a zip file". When feeding some streams or arrayBuffer I get variations of: TypeError: a bytes-like object is required, not 'pyodide.JsProxy' At this point I suspect there is something embarrassingly obvious that yet I am unable to see, so, any fresh pair of eyes and advice on how best/simply load a file is much appreciated :) Many thanks in advance. 
<!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" /> <script defer src="https://pyscript.net/alpha/pyscript.js"></script> <title>Example</title> </head> <body> <p>Example</p> <br /> <label for="myfile">Select a file:</label> <input type="file" id="myfile" name="myfile"> <br /> <br /> <div id="print_output"></div> <br /> <p>File Content:</p> <div style="border:2px inset #AAA;cursor:text;height:120px;overflow:auto;width:600px; resize:both"> <div id="content"> </div> </div> <py-script output="print_output"> import asyncio import zipfile from js import document, FileReader from pyodide import create_proxy import io async def process_file(event): fileList = event.target.files.to_py() for f in fileList: data= await f.text() mf=io.BytesIO(bytes(data,'utf-8')) with zipfile.ZipFile(mf,"r") as zf: nl=zf.namelist() nlf=" _ ".join(nl) document.getElementById("content").innerHTML=nlf def main(): # Create a Python proxy for the callback function # process_file() is your function to process events from FileReader file_event = create_proxy(process_file) # Set the listener to the callback e = document.getElementById("myfile") e.addEventListener("change", file_event, False) mf = io.BytesIO() with zipfile.ZipFile(mf, mode="w",compression=zipfile.ZIP_DEFLATED) as zf: zf.writestr('file1.txt', b"hi") zf.writestr('file2.txt', str.encode("hi")) zf.writestr('file3.txt', str.encode("hi",'utf-8')) with open("a.txt.zip", "wb") as f: # use `wb` mode f.write(mf.getvalue()) with zipfile.ZipFile("a.txt.zip", "r") as zf: nl=zf.namelist() nlf=" ".join(nl) document.getElementById("content").innerHTML = nlf main() </py-script> </body> </html> | You were very close with your code. The problem was in converting the file data to the correct data type. The requirement is to convert the arrayBuffer to Uint8Array and then to a bytearray. Import the required function: from js import Uint8Array Read the file data into an arrayBuffer and copy it to a new Uint8Array data = Uint8Array.new(await f.arrayBuffer()) Convert the Uint8Array to a bytearray that BytesIO expects mf = io.BytesIO(bytearray(data)) | 4 | 4 |
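Putting the answer's three steps back into the question's callback, the corrected process_file would look roughly like this (same imports as the question, plus from js import Uint8Array):

async def process_file(event):
    fileList = event.target.files.to_py()
    for f in fileList:
        # copy the JS ArrayBuffer into a Uint8Array, then into a bytearray
        data = Uint8Array.new(await f.arrayBuffer())
        mf = io.BytesIO(bytearray(data))
        with zipfile.ZipFile(mf, "r") as zf:
            document.getElementById("content").innerHTML = " _ ".join(zf.namelist())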
73,177,807 | 2022-7-30 | https://stackoverflow.com/questions/73177807/unable-to-build-vocab-for-a-torchtext-text-classification | I'm trying to prepare a custom dataset loaded from a csv file in order to use in a torchtext text binary classification problem. It's a basic dataset with news headlines and a market sentiment label assigned "positive" or "negative". I've been following some online tutorials on PyTorch to get this far but they've made some significant changes in the latest torchtext package so most of the stuff is out of date. Below I've successfully parsed my csv file into a pandas dataframe with two columns - text headline and a label which is either 0 or 1 for positive/negative, split into a training and test dataset then wrapped them as a PyTorch dataset class: train, test = train_test_split(eurusd_df, test_size=0.2) class CustomTextDataset(Dataset): def __init__(self, text, labels): self.text = text self.labels = labels def __getitem__(self, idx): label = self.labels.iloc[idx] text = self.text.iloc[idx] sample = {"Label": label, "Text": text} return sample def __len__(self): return len(self.labels) train_dataset = CustomTextDataset(train['Text'], train['Labels']) test_dataset = CustomTextDataset(test['Text'], test['Labels']) I'm now trying to build a vocabulary of tokens following this tutorial https://coderzcolumn.com/tutorials/artificial-intelligence/pytorch-simple-guide-to-text-classification and the official pytorch tutorial https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html . However using the below code from torchtext.data.utils import get_tokenizer from torchtext.vocab import build_vocab_from_iterator tokenizer = get_tokenizer('basic_english') train_iter = train_dataset def yield_tokens(data_iter): for _, text in data_iter: yield tokenizer(text) vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"]) vocab.set_default_index(vocab["<unk>"]) yields a very small length of vocabulary, and applying the example vocab(['here', 'is', 'an', 'example']) on a text field taken from the original dataframe yields a list of 0s, implying the vocab is being built from the label field, containing only 0s and 1s, not the text field. Could anyone review and show me how to build the vocab targeting the text field? | The very small length of vocabulary is because under the hood, build_vocab_from_iterator uses a Counter from the Collections standard library, and more specifically its update function. This function is used in a way that assumes that what you are passing to build_vocab_from_iterator is an iterable wrapping an iterable containing words/tokens. This means that in its current state, because strings can be iterated upon, your code will create a vocab able to encode all letters, not words, comprising your dataset, hence the very small vocab size. I do not know if that is intended by Python/Pytorch devs, but because of this you need to wrap your simple iterator in a list, for example like this : vocab = build_vocab_from_iterator([yield_tokens(train_iter)], specials=["<unk>"]) Note : If your vocab gives only zeros, it is not because it is taking from the label field, it is just returning the integer corresponding to an unknown token, since all words that are not just a character will be unknown to it. Hope this helps! | 4 | 3 |
73,176,227 | 2022-7-30 | https://stackoverflow.com/questions/73176227/how-to-get-first-element-of-list-or-none-when-list-is-empty | I can do it like this, but there has to be a better way: arr = [] if len(arr) > 0: first_or_None = arr[0] else: first_or_None = None If I just do arr[0] I get an IndexError. Is there something where I can give a default argument? | I think the example you give is absolutely fine - it is very readable and would not suffer performance issues. You could use Python's equivalent of the ternary operator if you really want shorter code: first_or_none = arr[0] if len(arr) > 0 else None | 6 | 12
73,170,578 | 2022-7-29 | https://stackoverflow.com/questions/73170578/python-code-blocks-not-rendering-with-readthedocs-and-sphinx | I'm building docs for a python project using Sphinx and readthedocs. For some reason, the code blocks aren't rendering after building the docs, and inline code (marked with backticks) appears italic. I've checked the raw build, there was a warning that the exomagpy module couldn't be found which I resolved by changing ".." to "../" in the os.path.abspath() part of conf.py, this didn't appear to change anything in the docs. There was a similar question here on stackoverflow, I tried the solution but it didn't change. The raw build can be found here: https://readthedocs.org/api/v2/build/17574729.txt The docs are being built from the "Develop" branch of the github page (link below) Here is the link to the github page: https://github.com/quasoph/exomagpy/tree/Develop The repo is structured like so: >.vscode >build/lib/exomagpy >docs >conf.py >rst files (.rst) >makefile >make.bat >exomagpy.egg-info >exomagpy >__init__.py >module files (.py) >.readthedocs.yml >requirements.txt >setup.py Here are my files: conf.py import os import sys sys.path.insert(0, os.path.abspath("../")) # project info project = "exomagpy" root_doc = "index" release = "1.3.0" # general config extensions = ["sphinx.ext.autodoc","sphinx.ext.napoleon","sphinx.ext.autosummary" ] templates_path = ['_templates'] exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] # html options html_theme = 'sphinx_rtd_theme' html_static_path = [] .readthedocs.yml version: 1 build: os: ubuntu-20.04 tools: python: "3.8" sphinx: configuration: docs/conf.py python: version: 3.8 install: - method: setuptools path: . - requirements: requirements.txt dependencies: - python=3.8 requirements.txt numpy matplotlib pandas tensorflow lightkurve requests sphinx==5.1.1 sphinx_rtd_theme==1.0.0 setup.py from setuptools import setup, find_packages setup( name = "exomagpy", version = "1.3.0", author = "Sophie", author_email = "email", url = "link", description = "description", packages = find_packages() ) Any help really appreciated, please let me know if more information is needed. | As @mzjn pointed out, a blank line is required between the code-block directive and the code that is supposed to be highlighted. https://raw.githubusercontent.com/quasoph/exomagpy/main/docs/tutorials.rst .. code-block:: python import exomagpy.predictExo exomagpy.predictExo.tess() exomagpy.predictExo.kepler() Additionally for inline code literals, single backticks in reStructuredText are rendered as italics, as you realized, whereas in Markdown and its flavors it would render as an inline code literal. For reStructuredText, surround the literal with double backticks: ``inline_literal`` | 5 | 4 |
73,172,760 | 2022-7-30 | https://stackoverflow.com/questions/73172760/github-action-couldnt-find-environment-variable-for-django | I was trying to use the environment variable in my Django application where I use the django-environ package with the .env file in my local machine. But I can't use the .env file in my GitHub action. I've configured action secret variables manually from my project settings. Here is my local machine code: import environ env = environ.Env() environ.Env.read_env() DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': env('POSTGRES_DB_NAME'), 'USER': env('POSTGRES_USER'), 'PASSWORD': env('POSTGRES_PASSWORD'), 'HOST': env('POSTGRES_HOST'), 'PORT': env('POSTGRES_PORT'), } } This code is working in my local environment but failed to load in GitHub actions. I have configured the same variables in GitHub actions too, but still, the application couldn't find the environment variable there in the GitHub action. django.yml name: Django CI on: push: branches: [ "main" ] pull_request: branches: [ "main" ] jobs: build: runs-on: ubuntu-latest strategy: max-parallel: 4 matrix: python-version: [3.7, 3.8, 3.9] steps: - uses: actions/checkout@v3 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v3 with: python-version: ${{ matrix.python-version }} - name: Install Dependencies run: | python -m pip install --upgrade pip pip install -r requirements.txt - name: Run Tests run: | python manage.py test The GitHub Action displays the following errors: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/environ/environ.py", line 403, in get_value value = self.ENVIRON[var_name] File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/os.py", line 681, in __getitem__ raise KeyError(key) from None KeyError: 'POSTGRES_DB_NAME' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "manage.py", line 22, in <module> main() File "manage.py", line 18, in main execute_from_command_line(sys.argv) File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line utility.execute() File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/django/core/management/__init__.py", line 413, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/django/core/management/commands/test.py", line 23, in run_from_argv super().run_from_argv(argv) File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/django/core/management/base.py", line 346, in run_from_argv parser = self.create_parser(argv[0], argv[1]) File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/django/core/management/base.py", line 320, in create_parser self.add_arguments(parser) File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/django/core/management/commands/test.py", line 44, in add_arguments test_runner_class = get_runner(settings, self.test_runner) File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/django/test/utils.py", line 317, in get_runner test_runner_class = test_runner_class or settings.TEST_RUNNER File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/django/conf/__init__.py", line 82, in __getattr__ self._setup(name) File 
"/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/django/conf/__init__.py", line 69, in _setup self._wrapped = Settings(settings_module) File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/django/conf/__init__.py", line 170, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/runner/work/lha-backend/lha-backend/root/settings.py", line 87, in <module> 'NAME': env('POSTGRES_DB_NAME'), File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/environ/environ.py", line 201, in __call__ parse_default=parse_default File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/site-packages/environ/environ.py", line 407, in get_value raise ImproperlyConfigured(error_msg) from exc django.core.exceptions.ImproperlyConfigured: Set the POSTGRES_DB_NAME environment variable | You need to configure env for your run step, something like this: - name: Run Tests run: | python manage.py test env: POSTGRES_DB_NAME: ${{ secrets.POSTGRES_DB_NAME }} POSTGRES_USER: ${{ secrets.POSTGRES_USER }} POSTGRES_PASSWORD: ${{ secrets.POSTGRES_PASSWORD }} POSTGRES_HOST: ${{ secrets.POSTGRES_HOST }} POSTGRES_PORT: ${{ secrets.POSTGRES_PORT }} Relevant docs: https://docs.github.com/en/actions/security-guides/encrypted-secrets#using-encrypted-secrets-in-a-workflow Additional advice Use the same name for both GitHub secret and Environment Variable Create an organization on GitHub if you want to use the same secrets across multiple repositories | 4 | 3 |
73,170,302 | 2022-7-29 | https://stackoverflow.com/questions/73170302/ordinal-encoding-in-pandas | Is there a way to have pandas.get_dummies output the numerical representation in one column rather than a separate column for each option? Concretely, currently when using pandas.get_dummies it gives me a column for every option: Size Size_Big Size_Medium Size_Small Big 1 0 0 Medium 0 1 0 Small 0 0 1 But I'm looking for more of the following output: Size Size_Numerical Big 1 Medium 2 Small 3 | You don't want dummies, you want factors/categories. Use pandas.factorize: df['Size_Numerical'] = pd.factorize(df['Size'])[0] + 1 output: Size Size_Numerical 0 Big 1 1 Medium 2 2 Small 3 | 4 | 9 |
73,162,991 | 2022-7-29 | https://stackoverflow.com/questions/73162991/can-i-disable-mypys-cannot-find-implementation-or-library-stub-for-module-name | There are many threads regarding Cannot find implementation or library stub for module named... error, but there's no associated error code. I'd like to disable this completely. How might I go about doing that? | To disable the warning: Pass the --ignore-missing-imports flag on the CLI. Or if using a config file: For mypy.ini or setup.cfg [mypy] ignore_missing_imports = true For pyproject.toml [tool.mypy] ignore_missing_imports = true This will disable warning for all modules. You can set it on a per module basis, but you'll have to check the docs for examples on how to do that. | 4 | 11 |
73,166,964 | 2022-7-29 | https://stackoverflow.com/questions/73166964/python-3-match-values-based-on-column-name-similarity | I have a dataframe of the following form: Year 1 Grade Year 2 Grade Year 3 Grade Year 4 Grade Year 1 Students Year 2 Students Year 3 Students Year 4 Students 60 70 80 100 20 32 18 25 I would like to somehow transpose this table to the following format: Year Grade Students 1 60 20 2 70 32 3 80 18 4 100 25 I created a list of years and initiated a new dataframe with the "year" column. I was thinking of matching the year integer to the column name containing it in the original DF, match and assign the correct value, but got stuck there. | Here's one way to do it. Feel free to ask questions about how it works. import pandas as pd cols = ["Year 1 Grade", "Year 2 Grade", "Year 3 Grade" , "Year 4 Grade", "Year 1 Students", "Year 2 Students", "Year 3 Students", "Year 4 Students"] vals = [60,70,80,100,20,32,18,25] vals = [[v] for v in vals] df = pd.DataFrame({k:v for k,v in zip(cols,vals)}) grades = df.filter(like="Grade").T.reset_index(drop=True).rename(columns={0:"Grades"}) students = df.filter(like="Student").T.reset_index(drop=True).rename(columns={0:"Students"}) pd.concat([grades,students], axis=1) | 4 | 1 |
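An alternative sketch of the same reshape using melt and pivot, which avoids hard-coding the column list; the regex assumes every column follows the "Year <n> <Measure>" pattern shown in the question.

import pandas as pd

df = pd.DataFrame({
    "Year 1 Grade": [60], "Year 2 Grade": [70], "Year 3 Grade": [80], "Year 4 Grade": [100],
    "Year 1 Students": [20], "Year 2 Students": [32], "Year 3 Students": [18], "Year 4 Students": [25],
})

long = df.melt()  # columns: variable, value
long[["Year", "Measure"]] = long["variable"].str.extract(r"Year (\d+) (\w+)")
out = (long.pivot(index="Year", columns="Measure", values="value")
           .reset_index()
           .astype({"Year": int}))
print(out)  # columns: Year, Grade, Students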
73,165,967 | 2022-7-29 | https://stackoverflow.com/questions/73165967/how-to-suppress-a-warning-in-one-line-for-pylint-and-flake8-at-the-same-time | I would like to ignore a specific line in static code analysis. For Flake8, I'd use the syntax # noqa: F401. For pylint, I'd use the syntax # pylint: disable=unused-import. As I am working on a code generation framework, I would like the code to support both linters. Is there a way to combine both directives such that both of them are correctly detected? | both of these combinations work for me: import os # noqa: F401 # pylint:disable=unused-import import sys # pylint:disable=unused-import # noqa: F401 | 14 | 21 |
73,166,298 | 2022-7-29 | https://stackoverflow.com/questions/73166298/cant-do-python-imports-from-another-dir | I'm unable to import from Python file in another directory. Directory structure: some_root/ - __init__.py - dir_0/ - __init__.py - dir_1/ - __init__.py - file_1.py - dir_2/ - __init__.py - file_2.py file_1.py has some exported member: # file_1.py def foo(): pass file_2.py tries to import member from file_1.py: # file_2.py from dir_0.dir_1.file_1 import foo But not absolute nor relative import seems work. How to do Python's imports correctly? If I could avoid using sys.path.insert it would be nice, but if there's no way around this then I guess that's how things stand. | Don't mess with the search path You are right not messing around with sys.path. This is not recommended and always just an ugly workaround. There are better solutions for this. Restructure your folder layout See official Python docs about packaging. Distinguish between project folder and package folder. We assume your some_root folder as the package folder which is also the name of the package (used in import statements). And it is recommended to put the package folder into a folder named src. Above it is the project folder some_project. This project folder layout is also known as the "Src-Layout". In your case it should look like this. some_project βββ src βββ some_root βββ dir_0 β βββ dir_1 β β βββ file_1.py β β βββ __init__.py β βββ dir_2 β β βββ file_2.py β β βββ __init__.py β βββ __init__.py βββ __init__.py Make your package installable Create a some_project/setup.cfg with that content. Keep the line breaks and indention in line 5 and 6. They have to be like this but I don't know why. [metadata] name = some_project [options] package_dir= =src packages = find: zip_safe = False python_requires = >= 3 [options.packages.find] where = src exclude = tests* .gitignore Create some_project/setup.py with that content: from setuptools import setup setup() "Install" the package This is not a usual installation. Please see Developement Mode to understand what this really means. The package is not copied into /usr/lib/python/site-packages; only links are created. Navigate into the project folder some_project and run python3 -m pip install --editable . Don't forget the . at the end. Depending on your OS and environment maybe you have to replace python3 with py -3 or python or something else. Import Your file_2.py import some_root import some_root.dir_0 import some_root.dir_0.dir_1 from some_root.dir_0.dir_1 import file_1 file_1.foo() But as others said in the comments. Improve your structure of files and folders and reduce its complexity. | 5 | 5 |
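Once the package is importable (installed in editable mode as above, or run with python -m from the project root), a relative import is another option inside file_2.py. This sketch assumes the question's layout; note that relative imports only work when the file runs as part of the package, e.g. python -m some_root.dir_0.dir_2.file_2, not as a plain script path.

# src/some_root/dir_0/dir_2/file_2.py
from ..dir_1.file_1 import foo                      # relative: up to dir_0, then into dir_1
# from some_root.dir_0.dir_1.file_1 import foo      # absolute equivalent

foo()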
73,165,636 | 2022-7-29 | https://stackoverflow.com/questions/73165636/no-module-named-importlib-metadata | I'm trying to install Odoo 15.0 on a Mac (Python 3.7). When I run the command pip3 install -r requirements.txt, I get this error message: Traceback (most recent call last): File "/usr/local/opt/[email protected]/bin/pip3", line 10, in <module> from importlib.metadata import distribution ModuleNotFoundError: No module named 'importlib.metadata' | Try installing this library manually, using: pip install importlib-metadata or pip3 install importlib-metadata | 13 | 20
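importlib.metadata entered the standard library in Python 3.7, so this traceback usually means the pip3 being invoked is tied to an older interpreter. Besides installing the backport as suggested above, code that must run on older interpreters commonly uses a fallback import; a minimal sketch:

try:
    from importlib.metadata import distribution      # stdlib, Python 3.7+
except ImportError:
    from importlib_metadata import distribution      # backport package for older Pythons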
73,164,169 | 2022-7-29 | https://stackoverflow.com/questions/73164169/multiplying-a-list-of-integer-with-a-list-of-string | Suppose there are two lists: l1 = [2,2,3] l2 = ['a','b','c'] I wonder how one finds the "product" of the two such that the output would be: #output: ['a','a','b','b','c','c','c'] If I do: l3 = [] for i in l2: for j in l1: l3.append(i) I get: ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c'] which is wrong. Where am I making the mistake? | The inner loop for j in l1: iterates 3 times on every pass (because list l1 has 3 items), so each letter gets appended 3 times regardless of its own count. Try: out = [b for a, b in zip(l1, l2) for _ in range(a)] print(out) Prints: ['a', 'a', 'b', 'b', 'c', 'c', 'c'] | 5 | 4
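An equivalent itertools spelling of the accepted answer, using the question's lists:

from itertools import chain, repeat

l1 = [2, 2, 3]
l2 = ['a', 'b', 'c']

# repeat(ch, n) yields ch exactly n times; chain.from_iterable flattens the pieces.
l3 = list(chain.from_iterable(repeat(ch, n) for n, ch in zip(l1, l2)))
print(l3)  # ['a', 'a', 'b', 'b', 'c', 'c', 'c']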
73,136,808 | 2022-7-27 | https://stackoverflow.com/questions/73136808/aws-glue-error-invalid-input-provided-while-running-python-shell-program | I have a Glue job (a Python shell script). When I try to run it, I end up getting the error below: Job Name : xxxxx Job Run Id : yyyyyy failed to execute with exception Internal service error : Invalid input provided. It is not specific to my code; even if the job is just import boto3 and print('loaded'), I get the error right after clicking the run-job option. What is the issue here? | I think Quatermass is right: the jobs started working out of the blue the next day, without any changes on my side. | 6 | 2
73,159,836 | 2022-7-28 | https://stackoverflow.com/questions/73159836/vectorized-way-to-contract-numpy-array-using-advanced-indexing | I have a Numpy array of dimensions (d1,d2,d3,d4), for instance A = np.arange(120).reshape((2,3,4,5)). I would like to contract it so as to obtain B of dimensions (d1,d2,d4). The d3-indices of parts to pick are collected in an indexing array Idx of dimensions (d1,d2). Idx provides, for each couple (x1,x2) of indices along (d1,d2), the index x3 for which B should retain the whole corresponding d4-line in A, for example Idx = rng.integers(4, size=(2,3)). To sum up, for all (x1,x2), I want B[x1,x2,:] = A[x1,x2,Idx[x1,x2],:]. Is there an efficient, vectorized way to do that, without using a loop? I'm aware that this is similar to Easy way to do nd-array contraction using advanced indexing in Python but I have trouble extending the solution to higher dimensional arrays. MWE A = np.arange(120).reshape((2,3,4,5)) Idx = rng.integers(4, size=(2,3)) # correct result: B = np.zeros((2,3,5)) for i in range(2): for j in range(3): B[i,j,:] = A[i,j,Idx[i,j],:] # what I would like, which doesn't work: B = A[:,:,Idx[:,:],:] | Times for 3 alternatives: In [91]: %%timeit ...: B = np.zeros((2,3,5),A.dtype) ...: for i in range(2): ...: for j in range(3): ...: B[i,j,:] = A[i,j,Idx[i,j],:] ...: 11 Β΅s Β± 48.8 ns per loop (mean Β± std. dev. of 7 runs, 100,000 loops each) In [92]: timeit A[np.arange(2)[:,None],np.arange(3),Idx] 8.58 Β΅s Β± 44 ns per loop (mean Β± std. dev. of 7 runs, 100,000 loops each) In [94]: timeit np.squeeze(np.take_along_axis(A, Idx[:,:,None,None], axis=2), axis=2) 29.4 Β΅s Β± 448 ns per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) Relative times may differ with larger arrays. But this is a good size for testing the correctness. | 4 | 2 |
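Another equivalent vectorized spelling of the indexing answer, using np.ogrid to build the broadcastable index grids (timings may differ from those quoted above):

import numpy as np

rng = np.random.default_rng(0)
A = np.arange(120).reshape((2, 3, 4, 5))
Idx = rng.integers(4, size=(2, 3))

i, j = np.ogrid[:2, :3]      # index grids for the first two axes, shapes (2,1) and (1,3)
B = A[i, j, Idx, :]          # Idx picks the d3 index per (i, j); result shape (2, 3, 5)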
73,160,583 | 2022-7-29 | https://stackoverflow.com/questions/73160583/difference-between-filter-and-where-in-sqlalchemy | I've seen a few variations of running a query with SQLAlchemy. For example, here is one version: posts = db.query(models.Post).filter(models.Post.owner_id==user.id).all() What would be the difference between using the above or using .where? Why are there two variations here? | According to the documentation, there is no difference. method sqlalchemy.orm.Query.where(*criterion) A synonym for Query.filter(). It was added in version 1.4 by this commit. According to the commit message the reason to add it was to Convert remaining ORM APIs to support 2.0 style. You can read more about "2.0 style" here. | 17 | 30 |
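A side-by-side sketch of the two spellings, assuming SQLAlchemy 1.4+ and borrowing db, models.Post, and user from the question (they are not defined here):

from sqlalchemy import select

# 1.x Query style, as in the question:
posts = db.query(models.Post).filter(models.Post.owner_id == user.id).all()

# 2.0 select() style; where() is the equivalent on the select construct:
stmt = select(models.Post).where(models.Post.owner_id == user.id)
posts = db.execute(stmt).scalars().all()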
73,155,924 | 2022-7-28 | https://stackoverflow.com/questions/73155924/inheritance-subclassing-issue-in-pydantic | I came across a code snippet for declaring Pydantic models. The inheritance used there has me confused. class RecipeBase(BaseModel): label: str source: str url: HttpUrl class RecipeCreate(RecipeBase): label: str source: str url: HttpUrl submitter_id: int class RecipeUpdate(RecipeBase): label: str I am not sure what the benefit of inheriting from RecipeBase in the RecipeCreate and RecipeUpdate classes is. The part that has me confused is: even after inheriting, why does one have to re-declare label, source, and url, which are already part of the RecipeBase class, in the RecipeCreate class? | I'd say it is an oversight in the tutorial. There is no benefit, and it only causes confusion. Typically, Base holds all the overlapping fields, and they are only overloaded when they change type (for example, XyzBase has name: str whereas XyzCreate has name: str|None because it doesn't have to be provided when updating an instance). The tutorial does a bad job of explaining why the setup is as it is. | 9 | 6
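A sketch of the leaner layout the answer describes: inherited fields are not repeated, and a field is re-declared only when its type or optionality changes.

from typing import Optional
from pydantic import BaseModel, HttpUrl

class RecipeBase(BaseModel):
    label: str
    source: str
    url: HttpUrl

class RecipeCreate(RecipeBase):
    submitter_id: int              # only the extra field needs declaring

class RecipeUpdate(RecipeBase):
    label: Optional[str] = None    # overloaded because it becomes optional for updates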
73,157,702 | 2022-7-28 | https://stackoverflow.com/questions/73157702/attributeerror-tuple-object-has-no-attribute-sort | Here is my code, and i am getting an AttributeError: 'tuple' object has no attribute 'sort. I am trying to do an image alignment and found this standard image alignment code in an article. I am learning openCV and python which i am really new too, I am able to do basic stuff with openCV right now i am trying to learn image alignment and i am stuck on this part. Traceback (most recent call last): File "test9.py", line 31, in <module> matches.sort(key = lambda x: x.distance) AttributeError: 'tuple' object has no attribute 'sort' ------------------ (program exited with code: 1) Press return to continue import cv2 import numpy as np # Open the image files. img1_color = cv2.imread("/home/pi/Desktop/Project AOI/testboard1/image_2.jpg") # Image to be aligned. img2_color = cv2.imread("/home/pi/Desktop/Project AOI/testboard1/image_0.jpg") # Reference image. # Convert to grayscale. img1 = cv2.cvtColor(img1_color, cv2.COLOR_BGR2GRAY) img2 = cv2.cvtColor(img2_color, cv2.COLOR_BGR2GRAY) height, width = img2.shape # Create ORB detector with 5000 features. orb_detector = cv2.ORB_create(5000) # Find keypoints and descriptors. # The first arg is the image, second arg is the mask # (which is not required in this case). kp1, d1 = orb_detector.detectAndCompute(img1, None) kp2, d2 = orb_detector.detectAndCompute(img2, None) # Match features between the two images. # We create a Brute Force matcher with # Hamming distance as measurement mode. matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck = True) # Match the two sets of descriptors. matches = matcher.match(d1, d2) # Sort matches on the basis of their Hamming distance. matches.sort(key = lambda x: x.distance) # Take the top 90 % matches forward. matches = matches[:int(len(matches)*0.9)] no_of_matches = len(matches) # Define empty matrices of shape no_of_matches * 2. p1 = np.zeros((no_of_matches, 2)) p2 = np.zeros((no_of_matches, 2)) for i in range(len(matches)): p1[i, :] = kp1[matches[i].queryIdx].pt p2[i, :] = kp2[matches[i].trainIdx].pt # Find the homography matrix. homography, mask = cv2.findHomography(p1, p2, cv2.RANSAC) # Use this matrix to transform the # colored image wrt the reference image. transformed_img = cv2.warpPerspective(img1_color, homography, (width, height)) # Save the output. cv2.imwrite('output.jpg', transformed_img) | You're getting a tuple returned, not a list. You can't just matches.sort(...) that. OpenCV, since v4.5.4, exhibits this behavior in its Python bindings generation. You have to use this instead: matches = sorted(matches, ...) This creates a new list, which contains the sorted elements of the original tuple. Related issues: https://github.com/opencv/opencv/issues/21266 https://github.com/opencv/opencv/issues/21284 | 4 | 4 |
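Spelling out the answer's replacement line with the sort key taken from the question's original call; it slots directly into the question's script in place of matches.sort(...):

matches = sorted(matches, key=lambda x: x.distance)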
73,157,383 | 2022-7-28 | https://stackoverflow.com/questions/73157383/how-do-you-create-a-fully-fledged-python-package | When creating a Python package, you can simply write the code, build the package, and share it on PyPI. But how do you do that? How do you create a Python package? How do you publish it? And then, what if you want to go further? How do you set up CI/CD for it? How do you test it and check code coverage? How do you lint it? How do you automate everything you can? | Preamble When you've published dozens of packages, you know how to answer these questions in ways that suit your workflow(s) and taste. But answering these questions for the first time can be quite difficult, time consuming, and frustrating! That's why I spent days researching ways of doing these things, which I then published as a blog article called How to create a Python package in 2022. That article, and this answer, document my findings for when I wanted to publish my package extendedjson Overview Here is an overview of some tools you can use and the steps you can take, in the order I followed them while discovering all of this. Disclaimer: other alternative tools exist (usually) & most of the steps here are not mandatory. Use Poetry for dependency management Use GitHub to host the code Use pre-commit to ensure committed code is linted & formatted well Use Test PyPI to test uploading your package (which will make it installable with pip) Use Scriv for changelog management Upload to the real PyPI Use pytest to test your Python code Use tox to automate linting, formatting, and testing across Python versions black isort pylint flake8 with mccabe Add code coverage with coverage.py Set up CI/CD with GitHub Actions run linters and tests trigger automatically on pull requests and commits integrate with Codecov for coverage reports publish to PyPI automatically Add cool README badges Tidy up a bit set tox to use pre-commit remove duplicate work between tox and pre-commit hooks remove some redundancy in CI/CD Steps Here is an overview of the things you can do and more or less how to do it. Again, thorough instructions plus the rationale of why I picked certain tools, methods, etc, can be found in the reference article. Use Poetry for dependency management. poetry init initialises a project in a directory or poetry new dirname creates a new directory structure for you do poetry install to install all your dependencies poetry add packagename can be used to add packagename as a dependency, use -D if it's a development dependency (i.e., you need it while developing the package, but the package users won't need it. For example, black is a nice example of a development dependency) Set up a repository on GitHub to host your code. Set up pre-commit hooks to ensure your code is always properly formatted and it passes linting. This goes on .pre-commit-config.yaml. E.g., the YAML below checks TOML and YAML files, ensures all files end with a newline, makes sure the end-of-line marker is consistent across all files, and then runs black and isort on your code. 
# See https://pre-commit.com for more information # See https://pre-commit.com/hooks.html for more hooks repos: - repo: https://github.com/pre-commit/pre-commit-hooks rev: v4.0.1 hooks: - id: check-toml - id: check-yaml - id: end-of-file-fixer - id: mixed-line-ending - repo: https://github.com/psf/black rev: 22.3.0 hooks: - id: black - repo: https://github.com/PyCQA/isort rev: 5.10.1 hooks: - id: isort args: ["--profile", "black"] Configure Poetry to use the Test PyPI to make sure you can publish a package and it is downloadable & installable. Tell Poetry about Test PyPI with poetry config repositories.testpypi https://test.pypi.org/legacy/ Log in to Test PyPI, get an API token, and tell Poetry to use it with poetry config http-basic.testpypi __token__ pypi-your-api-token-here (the __token__ is a literal and shouldn't be replaced, your token goes after that). Build poetry build and upload your package poetry publish -r testpypi Manage your CHANGELOG with Scriv run scriv create before any substantial commit and edit the file that pops up run scriv collect before any release to collect all fragments into one changelog Configure Poetry to use PyPI login to PyPI and get an API token tell Poetry about it with poetry config pypi-token.pypi pypi-your-token-here build & publish your package in one fell swoop with poetry publish --build Do a victory lap: try pip install yourpackagename to make sure everything is going great ;) Publish a GH release that matches what you uploaded to PyPI Write tests. There are many options out there. Pytest is simple, versatile, and not too verbose. write tests in a directory tests/ start test files with test_... actual tests are functions with a name starting with test_... use assertions (assert) to check for things (tests fail when asserting something Falsy); notice sometimes you don't even need to import pytest in your test files; e.g.: # In tests/test_basic_example.py def this_test_would_definitely_fail(): assert 5 > 10 def this_test_would_definitely_pass(): assert 5 > 0 run tests with the command pytest Automate testing, linting, and formatting, with tox. tox creates virtual environments for separate Python versions and can run essentially what you tell it to. Configuration goes in tox.ini. You can also embed it in the file pyproject.toml, but as of writing this, that's only supported if you add a string that actually represents the .ini configuration, which is ugly. Example tox.ini: [tox] isolated_build = True envlist = py38,py39,py310 [testenv] deps = black pytest commands = black --check extendedjson pytest . The environments py38 to py310 are automatically understood by tox to represent different Python versions (you guess which ones). The header [testenv] defines configurations for all those environments that tox knows about. We install the dependencies listed in deps = ... and then run the commands listed in commands = .... run tox with tox for all environments or tox -e py39 to pick a specific environment Add code coverage with coverage.py run tests and check coverage with coverage run --source=yourpackage --branch -m pytest . create a nice HTML report with coverage html add this to tox Create a GitHub action that runs linting and testing on commits and pull requests GH Actions are just YAML files in .github/workflows this example GH action runs tox on multiple Python versions # .github/workflows/build.yaml name: Your amazing CI name # Run automatically on... on: push: # pushes... branches: [ main ] # to the main branch... 
and pull_request: # on pull requests... branches: [ main ] # against the main branch. # What jobs does this workflow run? jobs: build: # There's a job called βbuildβ which runs-on: ubuntu-latest # runs on an Ubuntu machine strategy: matrix: # that goes through python-version: ["3.8", "3.9", "3.10"] # these Python versions. steps: # The job βbuildβ has multiple steps: - name: Checkout sources uses: actions/checkout@v2 # Checkout the repository into the runner, - name: Setup Python uses: actions/setup-python@v2 # then set up Python, with: python-version: ${{ matrix.python-version }} # with the version that is currently βselectedβ... - name: Install dependencies run: | # Then run these commands python -m pip install --upgrade pip python -m pip install tox tox-gh-actions # install two dependencies... - name: Run tox run: tox # and finally run tox. Notice that, above, we installed tox and a plugin called tox-gh-actions. This plugin will make tox aware of the Python version that is set up in the GH action runner, which will allow us to specify which environments tox will run in that case. We just need to set a correspondence in the file tox.ini: # tox.ini # ... [gh-actions] python = 3.8: py38 3.9: py39 3.10: py310 Integrate with Codecov for nice coverage reports in the pull requests. log in to Codecov with GitHub and give permissions add Codecov's action to the YAML from before after tox runs (it's tox that generates the local coverage report data) and add/change a coverage command to generate an xml report (it's a format that Codecov understands) # ... - name: Upload coverage to Codecov uses: codecov/codecov-action@v2 with: fail_ci_if_error: true Add a GH Action to publish to PyPI automatically just set up a YAML file that does your manual steps of building and publishing with Poetry when a new release is made create a PyPI token to be used by GitHub add it as a secret in your repository configure Poetry in the action to use that secret name: Publish to PyPI on: release: types: [ published ] branches: [ main ] workflow_dispatch: jobs: build-and-publish: runs-on: ubuntu-latest steps: # Checkout and set up Python - name: Install poetry and dependencies run: | python -m pip install --upgrade pip python -m pip install poetry - name: Configure poetry env: pypi_token: ${{ secrets.PyPI_TOKEN }} # You set this manually as a secret in your repository run: poetry config pypi-token.pypi $pypi_token - name: Build and publish run: poetry publish --build Add cool badges to your README file like Tidy up a bit run linting through tox on pre-commit to deduplicate effort and run your preferred versions of the linters/formatters/... separate linting/formatting from testing in tox as a separate environment check coverage only once as a separate tox environment | 6 | 8 |
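For the "Write tests" step above, a slightly fuller pytest sketch using parametrize; add() here is a stand-in for whatever your package actually exposes, so the snippet runs on its own.

# tests/test_example.py
import pytest

def add(a, b):          # placeholder for a function from your package
    return a + b

@pytest.mark.parametrize("a, b, expected", [(1, 2, 3), (0, 0, 0), (-1, 1, 0)])
def test_add(a, b, expected):
    assert add(a, b) == expected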
73,155,460 | 2022-7-28 | https://stackoverflow.com/questions/73155460/how-to-get-the-cookies-from-an-http-request-using-fastapi | Is it possible to get the cookies when someone hits the API? I need to read the cookies for each request. @app.get("/") async def root(text: str, sessionKey: str = Header(None)): print(sessionKey) return {"message": text+" returned"} if __name__ == "__main__": uvicorn.run("main:app", host="0.0.0.0", port=5001 ,reload=True) | You can do it in the same way you are accessing the headers in your example (see docs): from fastapi import Cookie @app.get("/") async def root(text: str, sessionKey: str = Header(None), cookie_param: int | None = Cookie(None)): print(cookie_param) return {"message": f"{text} returned"} | 11 | 2 |
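If you want every cookie on the request rather than one named parameter, the underlying Request object exposes them as a dict; a minimal sketch:

from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/")
async def root(request: Request):
    # request.cookies is a plain dict of all cookies the client sent
    return {"cookies": request.cookies}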
73,151,382 | 2022-7-28 | https://stackoverflow.com/questions/73151382/how-to-interperet-the-num-layers-line-when-using-keras-tuner | I'm reading an article about tuning hyperparameters in keras tuner. It includes code to build a model that has this code: def build_model(hp): """ Builds model and sets up hyperparameter space to search. Parameters ---------- hp : HyperParameter object Configures hyperparameters to tune. Returns ------- model : keras model Compiled model with hyperparameters to tune. """ # Initialize sequential API and start building model. model = keras.Sequential() model.add(keras.layers.Flatten(input_shape=(28,28))) # Tune the number of hidden layers and units in each. # Number of hidden layers: 1 - 5 # Number of Units: 32 - 512 with stepsize of 32 for i in range(1, hp.Int("num_layers", 2, 6)): model.add( keras.layers.Dense( units=hp.Int("units_" + str(i), min_value=32, max_value=512, step=32), activation="relu") ) # Tune dropout layer with values from 0 - 0.3 with stepsize of 0.1. model.add(keras.layers.Dropout(hp.Float("dropout_" + str(i), 0, 0.3, step=0.1))) # Add output layer. model.add(keras.layers.Dense(units=10, activation="softmax")) # Tune learning rate for Adam optimizer with values from 0.01, 0.001, or 0.0001 hp_learning_rate = hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4]) # Define optimizer, loss, and metrics model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=["accuracy"]) return model I'm confused about what the numbers, 1,2 and 6 mean in the range function for the num_layers line. | If you see the docs, the 2 and 6 are referring to the min and max values respectively. Also note: [...] max_value is included in the possible values this parameter can take on So this line: for i in range(1, hp.Int("num_layers", 2, 6)): basically means: generate a x number of Dense layers, where x is between 1 and 5 and the range from 2 to 6 is the hyperparameter you are working on / tuning. | 5 | 3 |
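A plain-Python illustration of the answer: hp.Int("num_layers", 2, 6) hands back an integer between 2 and 6 (both ends included), and range(1, value) then yields 1 to 5 iterations, i.e. 1 to 5 hidden layers.

for sampled in (2, 6):                       # the two extremes the tuner can choose
    hidden = list(range(1, sampled))
    print(sampled, "->", len(hidden), "hidden layer(s)")
# 2 -> 1 hidden layer(s)
# 6 -> 5 hidden layer(s)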
73,146,024 | 2022-7-28 | https://stackoverflow.com/questions/73146024/sqlalchemy-method-to-get-orm-object-as-dict | Take the following code: from sqlalchemy import create_engine from sqlalchemy.orm import declarative_base from sqlalchemy import Column, Integer, String engine = create_engine('postgresql://postgres:password@localhost:5432/db', echo=True, echo_pool='debug') Base = declarative_base() class Item(Base): __tablename__ = 'items' id = Column(Integer, primary_key=True) name = Column(String) def __repr__(self): return "<Item(id=%s, name='%s')>" % (self.id, self.name) item = Item(name="computer") Is there a way to get a python dict of the Item with all its fields? For example, I would like to get the following returned: item.to_dict() {"id": None, "name": "computer"} Or do I have to write my own method to do that? | Here would be one way to do it: class MyBase(Base): __abstract__ = True def to_dict(self): return {field.name:getattr(self, field.name) for field in self.__table__.c} class Item(MyBase): # as before item = Item(name="computer") item.to_dict() # {'id': None, 'name': 'computer'} Also, a lot of these usability simplifications can be found in: https://github.com/absent1706/sqlalchemy-mixins. | 7 | 12 |
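An equivalent helper that goes through the ORM mapper via inspect() instead of __table__; item is the instance from the question:

from sqlalchemy import inspect

def to_dict(obj):
    # column_attrs lists the mapped column attributes, keyed by attribute name
    return {attr.key: getattr(obj, attr.key) for attr in inspect(obj).mapper.column_attrs}

to_dict(item)   # {'id': None, 'name': 'computer'}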
73,144,724 | 2022-7-27 | https://stackoverflow.com/questions/73144724/python-vs-c-precision | I am trying to reproduce a C++ high precision calculation in full python, but I got a slight difference and I do not understand why. Python: from decimal import * getcontext().prec = 18 r = 0 + (((Decimal(0.95)-Decimal(1.0))**2)+(Decimal(0.00403)-Decimal(0.00063))**2).sqrt() # r = Decimal('0.0501154666744709107') C++: #include <iostream> #include <math.h> int main() { double zx2 = 0.95; double zx1 = 1.0; double zy2 = 0.00403; double zy1 = 0.00063; double r; r = 0.0 + sqrt((zx2-zx1)*(zx2-zx1)+(zy2-zy1)*(zy2-zy1)); std::cout<<"r = " << r << " ****"; return 0; } // r = 0.050115466674470907 **** There is this 1 showing up near the end in python but not in c++, why ? Changing the precision in python will not change anything (i already tried) because, the 1 is before the "rounding". Python: 0.0501154666744709107 C++ : 0.050115466674470907 Edit: I though that Decimal would convert anything passed to it into a string in order to "recut" them, but the comment of juanpa.arrivillaga made me doubt about it and after checking the source code, it is not the case ! So I changed to use string. Now the Python result is the same as WolframAlpha shared by Random Davis: link. | The origin of the discrepancy is that Python Decimal follows the more modern IBM's General Decimal Arithmetic Specification. In C++ however there too exist support available for 80-bit "extended precision" through the long double format. For reference, the standard IEEE-754 floating point doubles contain 53 bits of precision. Here below the C++ example from the question, refactored using long doubles: #include <iostream> #include <math.h> #include <iomanip> int main() { long double zx2 = 0.95; long double zx1 = 1.0; long double zy2 = 0.00403; long double zy1 = 0.00063; long double r; r = 0.0 + sqrt((zx2-zx1)*(zx2-zx1)+(zy2-zy1)*(zy2-zy1)); std::fixed; std::cout<< std::setprecision(25) << "r = " << r << " ****"; //25 floats // prints "r = 0.05011546667447091067728042 ****" return 0; } | 4 | 4 |
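The question's edit, restated as runnable code: Decimal(0.95) inherits the binary representation error of the float literal 0.95, while Decimal("0.95") is exact, which is why switching to string inputs lines the Python result up with the reference value.

from decimal import Decimal, getcontext

getcontext().prec = 18

from_float  = ((Decimal(0.95) - Decimal(1.0))**2 + (Decimal(0.00403) - Decimal(0.00063))**2).sqrt()
from_string = ((Decimal("0.95") - Decimal("1.0"))**2 + (Decimal("0.00403") - Decimal("0.00063"))**2).sqrt()
print(from_float)
print(from_string)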
73,141,350 | 2022-7-27 | https://stackoverflow.com/questions/73141350/override-global-dependency-for-certain-endpoints-in-fastapi | I have a FastAPI server that communicates with a web app. My web app also has 2 types of users, Users (non-admins) and Admins. I added a global dependency to FastAPI to verify the user. I want the verify dependency to only allow Admins to access endpoints by default, and have some decorator (or something similar) to allow non-admins to access certain routes. This way, no one accidentally creates a public route that is supposed to be only for admins. def verify_token(request: Request): # make sure the user's auth token is valid # retrieve the user's details from the database # make sure user is Admin, otherwise throw HTTP exception return True app = FastAPI( title="My App", dependencies=[Depends(verify_token)] ) @app.get(/admins_only) def admins_only(): # this works well! return {'result': 2} @app.get(/non_admin_route) def non_admin_route(): # this doesn't work because verify_token # only allows admins by default, but it should # be accessible to non admins return {'result': 1} | You cannot have conditional global dependencies. You either have them on all endpoints of your app, or on none of them. My recommendation is to split your endpoints in two routers, and only add routes to the respective routers. Then you can add a global dependency to only one of the routers like this: from fastapi import APIRouter, FastAPI, Request, Depends def verify_token(request: Request): # make sure the user's auth token is valid # retrieve the user's details from the database # make sure user is Admin, otherwise throw HTTP exception return True app = FastAPI( title="My App", ) only_admin_router = APIRouter( tags=["forAdmins"], dependencies=[Depends(verify_token)] ) all_users_router = APIRouter(tags="forEverybody") @only_admin_router.get("/admins_only") def admins_only(): # this will only work if verify doesn't raise. return {'result': 2} @all_users_router.get("/non_admin_route") def non_admin_route(): #this will work for all users, verify will not be called. return {'result': 1} app.include_router(only_admin_router) app.include_router(all_users_router) | 10 | 12 |
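A small variation on the answer's setup: the dependency can also be attached when the router is included, which keeps the router itself reusable. verify_token is the function from the question, and note that tags expects a list.

from fastapi import APIRouter, Depends, FastAPI

app = FastAPI(title="My App")
only_admin_router = APIRouter(tags=["forAdmins"])
all_users_router = APIRouter(tags=["forEverybody"])

# ... route definitions as in the answer ...

app.include_router(only_admin_router, dependencies=[Depends(verify_token)])
app.include_router(all_users_router)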