Dataset columns: question_id (int64, 59.5M to 79.4M), creation_date (string, 8 to 10 chars), link (string, 60 to 163 chars), question (string, 53 to 28.9k chars), accepted_answer (string, 26 to 29.3k chars), question_vote (int64, 1 to 410), answer_vote (int64, -9 to 482).
78,570,018
2024-6-3
https://stackoverflow.com/questions/78570018/bash-cannot-execute-required-file-not-found
I am having multiple problems with making my pythonscript dartsapp.py executable for my pi. At first i tried to use chmod +x dartsapp.py and ./dartsapp.py but i got the error: -bash: ./dartsapp.py: cannot execute: required file not found I also added the dir to PATH but still no luck. Afterwards i tried to use pyinstaller but when i try to call pyinstaller --version i get the same -bash: pyinstaller: command not found even though my PATH looks like this: maxdu@LEDpi:~/0306 $ echo $PATH /home/maxdu/0306:/home/maxdu/0306/dartsapp.py:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games What am i missing / doing wrong here? Update: ls -l /usr/bin/python3 gives me lrwxrwxrwx 1 root root 10 Apr 9 2023 /usr/bin/python3 -> python3.11 and head -n 3 dartsapp.py | cat -e gives me #!/usr/bin/python3^M$ ^M$ # Python-App fM-CM-<r das PunktezM-CM-$hlen beim Darts^M$ and type python3 gives me python3 is hashed (/usr/bin/python3)
head -n 3 dartsapp.py | cat -e outputs: #!/usr/bin/python3^M$ ^M$ # Python-App fM-CM-<r das PunktezM-CM-$hlen beim Darts^M$ The ^Ms at the end of each line tell us that dartsapp.py has DOS line endings; see linux-bash-shell-script-error-cannot-execute-required-file-not-found for more information on this specific error, and Why does my tool output overwrite itself and how do I fix it? for more information on DOS line endings in general. Both of those questions contain various fixes for this problem, starting with running dos2unix dartsapp.py.
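If dos2unix is not installed on the Pi, a minimal Python equivalent of that fix (a sketch of my own, assuming the file comfortably fits in memory) is:

# Rewrite dartsapp.py with Unix line endings (same effect as dos2unix)
with open("dartsapp.py", "rb") as f:
    data = f.read()
with open("dartsapp.py", "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))

After this, the shebang line no longer ends in a carriage return, so the interpreter path can be resolved and ./dartsapp.py runs.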
7
7
78,566,724
2024-6-2
https://stackoverflow.com/questions/78566724/selecting-a-particular-set-of-strings-from-a-list-in-polars
df = pl.DataFrame({'list_column': [['a.xml', 'b.xml', 'c', 'd'], ['e.xml', 'f.xml', 'g', 'h']]}) def func(x): return [y for y in x if '.xml' in y] df.with_columns(pl.col('list_column').map_elements(func, return_dtype=pl.List(pl.String))) Is there a way to achieve the same without using map_elements?
There is an accepted request for list.filter() https://github.com/pola-rs/polars/issues/9189 You can emulate the behaviour using list.eval() df.with_columns( pl.col('list_column').list.eval( pl.element().filter(pl.element().str.ends_with('.xml')) ) ) shape: (2, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ list_column β”‚ β”‚ --- β”‚ β”‚ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ ["a.xml", "b.xml"] β”‚ β”‚ ["e.xml", "f.xml"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
2
2
78,565,591
2024-6-2
https://stackoverflow.com/questions/78565591/load-jsonb-data-from-postgresql-to-pyspark-and-store-it-in-maptype
Products tables create and insert scripts: create table products (product_id varchar, description varchar, attributes jsonb, tax_rate decimal); insert into products values ('P1', 'Detergent', '{"cost": 45.50, "size": "10g"}', 5.0 ); insert into products values ('P2', 'Bread', '{"cost": 45.5, "size": "200g"}',3.5); I am trying to store jsonb data from postgresql to maptype data/dictionary format in PySpark, then extract 'cost' and 'size' from 'attributes' column into seperate columns. But PySpark is reading jsonb data as string. PySpark code to read data from Postgresql: import pyspark from pyspark.sql import SparkSession from pyspark.sql.functions import col from pyspark.sql.types import StructType, StructField, StringType, IntegerType, MapType, DecimalType spark = SparkSession \ .builder \ .appName("Python Spark SQL basic example") \ .config("spark.jars", "C:\\Users\\nupsingh\\Documents\\Jars\\postgresql-42.7.3.jar") \ .getOrCreate() schema = StructType([ StructField('product_id', StringType(), True), StructField('description', StringType(), True), StructField('attributes', MapType(StringType(),IntegerType()),False), StructField('tax_rate', DecimalType(), True) ]) df = spark.read \ .format("jdbc") \ .option("url", "jdbc:postgresql://localhost:5432/postgres") \ .option("dbtable", "products") \ .option("user", "user1") \ .option("password", "password") \ .option("driver", "org.postgresql.Driver") \ .option("schema", schema) \ .load() df.show() df.printSchema() attributes_col = df.select("attributes") attributes_col.show() products_df = attributes_col.withColumn("cost", col("attributes")["cost"]).withColumn( "size", col("attributes")["size"]) products_df.show()
Short answer: You can't directly load a jsonb field into a MapType in PySpark. Here's why: When you read data from a database using PySpark's JDBC data source, PySpark relies on the metadata provided by the JDBC driver to infer the data types of the columns. The JDBC driver maps database data types to Java data types, which are then mapped to PySpark data types. In the case of the Postgres jsonb data type, the Postgres JDBC driver maps it to a Java String type. PySpark, when interacting with the JDBC driver, receives the jsonb data as a string and infers it as a StringType in the schema. PySpark does not have a built-in understanding of the specific structure within the jsonb data. Is there a workaround? Yes, you can achieve your goal in two steps: (1) read the jsonb data as a string and (2) use from_json to convert it to a MapType. Here's how you can do it: from pyspark.sql.types import StructType, StructField, StringType, MapType, IntegerType, DecimalType from pyspark.sql.functions import from_json, col json_schema = MapType(StringType(), IntegerType()) schema = StructType([ StructField('product_id', StringType(), True), StructField('description', StringType(), True), StructField('attributes', MapType(StringType(), IntegerType(), True), True), StructField('tax_rate', DecimalType(), True), StructField('attributes_json', StringType(), True) # Add a temporary field to hold the jsonb data ]) df = spark.read \ .format("jdbc") \ .option("url", "jdbc:postgresql://localhost:5432/postgres") \ .option("dbtable", "products") \ .option("user", "user1") \ .option("password", "password") \ .option("driver", "org.postgresql.Driver") \ .option("schema", schema) \ .load() # Convert the jsonb column to a MapType df = df.withColumn("attributes", from_json(col("attributes_json"), json_schema)) \ .drop("attributes_json")
2
1
78,549,738
2024-5-29
https://stackoverflow.com/questions/78549738/least-square-inaccurate-in-chemical-speciation
I'm having an issue using the least_squares function to solve a system of non-linear equation to obtain a chemical speciation (here of the system Nickel-ammonia). The system of equations is given here : def equations(x): Ni0, Ni1, Ni2, Ni3, Ni4, Ni5, Ni6, NH3 = x eq1 = np.log10(Ni1)-np.log10(Ni0)-np.log10(NH3)-K1 eq2 = np.log10(Ni2)-np.log10(Ni1)-np.log10(NH3)-K2 eq3 = np.log10(Ni3)-np.log10(Ni2)-np.log10(NH3)-K3 eq4 = np.log10(Ni4)-np.log10(Ni3)-np.log10(NH3)-K4 eq5 = np.log10(Ni5)-np.log10(Ni4)-np.log10(NH3)-K5 eq6 = np.log10(Ni6)-np.log10(Ni5)-np.log10(NH3)-K6 eq7 = Ni0 + Ni1 + Ni2 + Ni3 + Ni4 + Ni5 + Ni6 - Ni_tot eq8 = Ni1 + 2*Ni2 + 3*Ni3 + 4*Ni4 + 5*Ni5 + 6*Ni6 - NH3_tot[i] return np.array([eq1, eq2, eq3, eq4, eq5, eq6, eq7, eq8]) I use a loop to calculate the values of the parameters at various NH3_tot, allowing to observe the variation of concentrations of the various species. The solving is done using the following : # Arrays for data storage NH3_tot = np.arange(0, 6, 0.1) Ni0_values = [] Ni1_values = [] Ni2_values = [] Ni3_values = [] Ni4_values = [] Ni5_values = [] Ni6_values = [] Solutions = [] # Speciation determination with least_square at various NH3_tot for i in range(len(NH3_tot)): x0 = [1,0.1,0.1,0.1,0.1,0.1,0.1,0.1] result = least_squares(equations, x0, bounds=([0,0,0,0,0,0,0,0],[Ni_tot,Ni_tot,Ni_tot,Ni_tot,Ni_tot,Ni_tot,Ni_tot,Ni_tot])) Solution = result.x Solutions.append(Solution) Ni0_value,Ni1_value,Ni2_value,Ni3_value,Ni4_value,Ni5_value,Ni6_value, NH3 = Solution # Extract concentrations of species Ni0_values.append(Ni0_value) Ni1_values.append(Ni1_value) Ni2_values.append(Ni2_value) Ni3_values.append(Ni3_value) Ni4_values.append(Ni4_value) Ni5_values.append(Ni5_value) Ni6_values.append(Ni6_value) The issue I get is that for some values (generally the ones for low and high NH3_tot values), are completelly wrong, especially not respecting eq. 7, as the sum of the species exceeds Ni_tot. I tried moving the bounds, but it does not seem to help, and the extrem values are generally wrong. Printing the result always shows : message: xtol termination condition is satisfied. success: True Indicating that the function effectively operates correctly. Changing the chemical system results in similar output. Changing the writting of the equations impacts greatly the result (If know why please help !!), and the use of np.log10 gives the most accurate, but still unsatisfactory. Changing the method of resolution of the least_squares function does not improve the result. 
Here is the complete script with all the variables: import numpy as np import matplotlib.pyplot as plt from scipy.optimize import least_squares import pandas as pd Ni_tot = 1 K1 = 2.81 K2 = 2.27 K3 = 1.77 K4 = 1.27 K5 = 0.81 K6 = 0.15 # Define the system of nonlinear equations def equations(x): Ni0, Ni1, Ni2, Ni3, Ni4, Ni5, Ni6, NH3 = x eq1 = np.log10(Ni1)-np.log10(Ni0)-np.log10(NH3)-K1 eq2 = np.log10(Ni2)-np.log10(Ni1)-np.log10(NH3)-K2 eq3 = np.log10(Ni3)-np.log10(Ni2)-np.log10(NH3)-K3 eq4 = np.log10(Ni4)-np.log10(Ni3)-np.log10(NH3)-K4 eq5 = np.log10(Ni5)-np.log10(Ni4)-np.log10(NH3)-K5 eq6 = np.log10(Ni6)-np.log10(Ni5)-np.log10(NH3)-K6 eq7 = Ni0 + Ni1 + Ni2 + Ni3 + Ni4 + Ni5 + Ni6 - Ni_tot eq8 = Ni1 + 2*Ni2 + 3*Ni3 + 4*Ni4 + 5*Ni5 + 6*Ni6 - NH3_tot[i] return np.array([eq1, eq2, eq3, eq4, eq5, eq6, eq7, eq8]) # Arrays for data storage NH3_tot = np.arange(0, 14, 0.1) Ni0_values = [] Ni1_values = [] Ni2_values = [] Ni3_values = [] Ni4_values = [] Ni5_values = [] Ni6_values = [] Solutions = [] # Speciation determination with least_square at various NH3_tot for i in range(len(NH3_tot)): x0 = [0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1] result = least_squares(equations, x0, bounds=([0,0,0,0,0,0,0,0],[Ni_tot,Ni_tot,Ni_tot,Ni_tot,Ni_tot,Ni_tot,Ni_tot,Ni_tot])) print(result) Solution = result.x Solutions.append(Solution) Ni0_value,Ni1_value,Ni2_value,Ni3_value,Ni4_value,Ni5_value,Ni6_value, NH3 = Solution # Extract concentrations of species Ni0_values.append(Ni0_value) Ni1_values.append(Ni1_value) Ni2_values.append(Ni2_value) Ni3_values.append(Ni3_value) Ni4_values.append(Ni4_value) Ni5_values.append(Ni5_value) Ni6_values.append(Ni6_value) # Conversion to DataFrame results = pd.DataFrame(Solutions, columns=['Ni', 'Ni(NH3)', 'Ni(NH3)2', 'Ni(NH3)3', 'Ni(NH3)4', 'Ni(NH3)5', 'Ni(NH3)6', 'NH3']) # Plot the concentrations plt.figure(figsize=(10, 6)) plt.plot(NH3_tot, Ni0_values, label='Ni') plt.plot(NH3_tot, Ni1_values, label='Ni1') plt.plot(NH3_tot, Ni2_values, label='Ni2') plt.plot(NH3_tot, Ni3_values, label='Ni3') plt.plot(NH3_tot, Ni4_values, label='Ni4') plt.plot(NH3_tot, Ni5_values, label='Ni5') plt.plot(NH3_tot, Ni6_values, label='Ni6') plt.xlabel('eq NH3') plt.ylabel('Concentration') plt.title('Concentration des espèces en fonction de NH3') plt.legend() plt.grid(True) plt.show() The purpose is to study the variation of distribution of species for various ligands, i.e. different K values. But if the speciation is not generated correctly, i cannot trust my data. Edit after jlandercy helped : Using the correction of @jlandercy, we managed to lower the inacuracy of the curve shape, using the mathematical expression of proportion of each complexes (see alpha_i). However there seem to be an issue regarding the "concentration" of Ligand. Indeed, it does not make sense physically to have the formation of a complex (Ni-NH3) only as a function of the ligand concentration, and the ratio NH3/Ni_tot should be the variable. The expected result would be the one given bellow : With C_Ni_tot = 1 mol/L, and C_NH3_tot = 0 -> 12 mol/L. Using the script given by @jlandercy with the following modification : L = np.linspace(0, 0.2, 100) As = alphas(pK, L) I manage to obtain something of the same type as shown : However, there is still the problem of the "ligand scale", which is weird, and I don't see the issue.
Potential cause Optimization procedures in chemistry are inherently tricky because the numbers involved span several decades, leading to float errors and inaccuracies. You can see in your example that the bounds are met but the results are meaningless, as they do not fulfill the mass balance (the real constraint in eq7). Additionally, the problem occurs at the sides of the axis, where inaccuracies become prominent. Anyway, it seems it can be numerically solved by smartly tuning the initial guess for the solver. Concentrations (numerical solution) I leave here a procedure to find the concentrations by solving the complex equilibria and mass balances simultaneously. Given your pK (which for complexes are simply the log10 of the constants K, without the minus sign we usually find in pKa): from scipy import optimize pK = np.array([2.81, 2.27, 1.77, 1.27, 0.81, 0.15]) K = np.power(10., pK) def system(x, K, xt, Lt): # (X, XL, XL2, XL3, XL4, XL5, XL6, L) n = len(K) return np.array([ # Complexation equilibria: x[i + 1] - K[i] * x[i] * x[n + 1] for i in range(len(K)) ] + [ # Mass balance (Nickel): np.sum(x[:n + 1]) - xt, # Mass balance (Ammonium): np.sum(np.arange(n + 1) * x[:n + 1]) + x[-1] - Lt, ]) Llin = np.linspace(0, 12, 2000) xt = 1. def solve(K, xt, Lts): sols = [] for Lt in Lts: sol = optimize.fsolve(system, x0=[Lt * 1.1] * (len(K) + 2), args=(K, xt, Lt)) sols.append(sol) C = np.array(sols) return C C = solve(K, xt, Llin) fig, axe = plt.subplots(figsize=(6,6)) axe.plot(Llin, C[:,:len(K) + 1]) axe.set_title("Complex Specie Concentrations w/ Acid/Base") axe.set_xlabel("Total concentration of Ligand, $L$ [mol/L]") axe.set_ylabel("Complex Specie Concentration, $x_i$ [mol/L]") axe.legend(["$x_{%d}$" % i for i in range(len(K) + 1)]) axe.grid() It gives: Which seems to agree quite well with your reference figure. Partitions (analytical solution) Your problem reduces to writing the partition functions of the complex species (Nickel and Ammonia), for which an analytical solution exists. Therefore there is no need for optimization if we can rely on the math. Your complexation problem is governed by: Details on how these formulae can be derived are explained in this post. The key point is that the partitions can be expressed uniquely by the constants and the free ligand concentration. Caution: in this solution the x-axis is the free ligand equilibrium concentration instead of the total concentration of ligand in the above figure. I leave here the procedure to compute such partition functions. We can write the partition functions (rational functions summing up to unity): def monom(i, K, L): return np.power(L, i) * np.prod(K[:i]) def polynom(K, L): return np.sum([monom(i, K, L) for i in range(len(K) + 1)], axis=0) def alpha(i, K, L): return monom(i, K, L) / polynom(K, L) And compute them all at once: def alphas(K, L): return np.array([ alpha(i, K, L) for i in range(len(K) + 1) ]).T Then we evaluate them over some concentration range: Llog = np.logspace(-5, 2, 2000) As = alphas(K, Llog) We check that the partition sums are unitary: np.allclose(np.sum(As, axis=1), 1.) # True Which leads to: From this analytical solution, we can compute the total ligand concentration: Lt = np.sum(As * np.arange(len(K) + 1), axis=1) + Llog And we get the desired figure: fig, axe = plt.subplots() axe.plot(Lt, As) axe.set_title("Complex Partition Functions") axe.set_xlabel("Total Ligand Concentration, L [mol/L]") axe.set_ylabel(r"Partition Function, $\alpha_i$ [-]") axe.legend([r"$\alpha_{%d}$" % i for i in range(len(pK) + 1)]) axe.grid() Which also agrees with your reference figure.
Personally I'd choose the analytical solution, as it is more performant and does not risk diverging on singularities. Comparison We can confirm that both solutions agree with each other: Ltlin = np.linspace(0, 12, 2000) Cs = solve(K, xt, Ltlin) Lf = Cs[:,-1] Cs = (Cs[:,:-1].T / np.sum(Cs[:,:-1], axis=1)).T As = alphas(K, Lf) Where solid colored lines are the analytical solutions and dashed black lines are the numerical solutions.
4
3
78,564,771
2024-6-1
https://stackoverflow.com/questions/78564771/alternative-to-df-renamecolumns-str-replace
I noticed that it's possible to use df.rename(columns=str.lower), but not df.rename(columns=str.replace(" ", "_")). Is this because it is allowed to use the variable which stores the method (str.lower), but it's not allowed to actually call the method (str.lower())? There is a similar question about why the error message of df.rename(columns=str.replace(" ", "_")) is rather confusing, but it has no answer. Is it possible to use methods of the .str accessor (of pd.DataFrame().columns) inside of df.rename(columns=...)? The only solution I have come up with so far is df = df.rename(columns=dict(zip(df.columns, df.columns.str.replace(" ", "_")))) but maybe there is something more consistent and similar in style to df.rename(columns=str.lower)? I know df.rename(columns=lambda x: x.replace(" ", "_")) works, but it doesn't use the .str accessor of pandas columns, it uses the str.replace() of the standard library. The purpose of the question is to explore the possibilities of using pandas str methods when renaming columns in method chaining, which is why df.columns = df.columns.str.replace(' ', '_') is not suitable for me. As an example df, assume: df = pd.DataFrame([[0,1,2]], columns=["a pie", "an egg", "a nut"])
df.rename accepts a function object (or other callable). In the first case, str.lower is a function. However, str.replace(" ", "_") calls the function and evaluates to its result (although in this case the call is not correct, so it raises an error). But you don't want to pass the result of calling the function, you want to pass the function. So something like def space_to_dash(col): return col.replace(" ", "_") df.rename(columns=space_to_dash) Or, use a lambda expression: df.rename(columns=lambda col: col.replace(" ", "_")) Note, df.rename(columns=str.lower) doesn't use the .str accessor either, it uses the built-in str method. So I think you are confused. Now, you can use the .str accessor on the column index object, so: df.columns.str.replace(" ", "_") But then you would need to do what you already said you didn't want to do: df.columns = df.columns.str.replace(" ", "_") It is important to point out that this mutates the original dataframe object in place, as opposed to df.rename, which returns a new dataframe object. It isn't clear why you want to use the .str accessor, is that the reason?
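If the goal is specifically to keep the pandas .str accessor usable inside a method chain, one further option (a sketch of my own, not part of the accepted answer) is to rename through set_axis, which accepts the Index produced by the .str methods:

import pandas as pd

df = pd.DataFrame([[0, 1, 2]], columns=["a pie", "an egg", "a nut"])

# Rename columns inside a chain via the .str accessor of the column Index
out = df.pipe(lambda d: d.set_axis(d.columns.str.replace(" ", "_"), axis=1))
print(out.columns.tolist())  # ['a_pie', 'an_egg', 'a_nut']

Like df.rename, this returns a new dataframe, so it slots into a longer chain without mutating df.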
1
4
78,564,217
2024-6-1
https://stackoverflow.com/questions/78564217/starting-container-process-caused-exec-fastapi-executable-file-not-found-in
I am trying to Dockerize my fastapi application, but it fails with the following error Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "fastapi": executable file not found in $PATH: unknown Can someone help me out? Dockerfile FROM python:3.12 as builder # install and setup poetry config RUN pip install poetry==1.8.2 ENV POETRY_NO_INTERACTION=1 \ POETRY_VIRTUALENVS_IN_PROJECT=1 \ POETRY_VIRTUALENVS_CREATE=1 \ POETRY_CACHE_DIR=/tmp/poetry_cache WORKDIR /navigation COPY pyproject.toml poetry.lock ./ # poetry complains if there is no README file RUN touch README.md # install without dev dependencies + remove poetry cache RUN poetry install --without dev && rm -rf $POETRY_CACHE_DIR FROM python:3.12-alpine as runtime ENV VIRTUAL_ENV=/navigation/.venv \ PATH="/navigation/.venv/bin:$PATH" COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV} COPY navigation ./navigation CMD ["fastapi", "run", "main.py", "--proxy-headers", "--port", "80"] docker-compose.yml services: navigation-api: build: context: . dockerfile: Dockerfile volumes: - ./navigation:/navigation I'm using poetry (as can be seen in the Dockerfile) to install my dependencies. Here are my dependencies in my pyproject.toml file. pyproject.toml [tool.poetry.dependencies] python = ">=3.12,<3.13" fastapi = "^0.111.0" prisma = "^0.13.1" I also tried to use uvicorn instead of using the fastapi-cli, but the same error occurs. I tried to not use the builder pattern to see if it was an issue there. But the same error. I checked this ticket out, but no solutions offered worked: starting container process caused: exec: "uvicorn": executable file not found in $PATH: unknown
There are two things that are obvious problems in the setup you show. In the Compose file, you should delete: volumes: - ./navigation:/navigation This overwrites the /navigation directory in the image (everything the Dockerfile sets up other than the base image) with content from the host. That includes your virtual environment, which is in /navigation/.venv. This is probably directly causing the error you see. (If it's important to you that the running program sees the code on your local system without rebuilding the image, this setup is almost like an ordinary Python virtual environment, except you've placed the Python interpreter in a container where it's hard to run and hard for an IDE to see it. Using a plain virtual environment without Docker will be better.) In the Dockerfile, the base images of the two build stages need to match. Your current setup looks like this: FROM python:3.12 as builder ... FROM python:3.12-alpine as runtime # <-- doesn't match first stage COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV} Virtual environments can be extremely specific to the Python they're built against. You can hit some obscure hard-to-debug errors if the two Pythons don't match exactly. For Python in particular, I'd recommend generally using the Debian-based images (with no suffix or with a "-slim" suffix) and not Alpine, since the Debian-based images can install precompiled "wheel" format packages, often without requiring a C toolchain.
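A minimal sketch of the runtime stage with both fixes applied; it mirrors the question's Dockerfile, keeps the builder stage on python:3.12, and assumes the volumes: block has been removed from docker-compose.yml:

# Use the same Debian-based image as the builder stage (python:3.12-slim would also work for both)
FROM python:3.12 as runtime

ENV VIRTUAL_ENV=/navigation/.venv \
    PATH="/navigation/.venv/bin:$PATH"

# Copy the virtual environment built in the first stage
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
COPY navigation ./navigation

CMD ["fastapi", "run", "main.py", "--proxy-headers", "--port", "80"]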
2
1
78,555,122
2024-5-30
https://stackoverflow.com/questions/78555122/sqlalchemy-i-cant-execute-query-with-two-tables
My problem is this: I can't build a query that returns a list of Accounts together with the sum of their Records (matched on account_id). My query is: subq = ( select( func.sum(Record.amount).label('amount') ) .filter(Record.account_id == Account.uuid) .subquery() ) accounts = ( await self.session.execute( select(Account) .join(subq, Account.uuid == Record.account_id) .filter(Account.creator_id == uuid) # uuid = the User ID (Creator) ) ).scalars().all() Also, the Record table looks like this: [ { "type": "income", "amount": 2000, "uuid": "cf876a3d-3395-4f5f-82b5-b496a66e107c" }, { "type": "income", "amount": 5000, "uuid": "fe25274d-111f-410c-a18c-04c73cbcc9db" }, { "type": "expense", "amount": 3000, "uuid": "7a151849-fb8e-47dc-96a0-50fef12233e8" }, { "type": "expense", "amount": 750, "uuid": "dd90988a-cd53-4125-8017-2e2ed05ec48c" } ] The desired result is: [ { "uuid": "38528eff-61d8-4210-8f9b-7739414dff4a", "name": "newAccount", "creator_id": "15fvh85j-012f-41bh-9km5-e22c8t78e321", "is_private": false, "amount": 25000 }, { "uuid": "86256mdjv-40m7-96v3-4de1-1e946b356gs8", "name": "personalAccount", "creator_id": "867becf0-c86a-4551-af81- "is_private": false, "amount": 53500 } ] But I get only the same rows without the amount. What am I doing wrong?
I found the answer that solved my problem, this using join and mappings().all(): accounts = ( await self.session.execute( select( Account.uuid, Account.name, Account.creator_id, Account.is_private, func.coalesce(func.sum(Record.amount), 0).label('amount') ) .filter(Account.creator_id == uuid) .join(Record, Record.account_id == Account.uuid, isouter=True) .group_by(Account.uuid) ) ).mappings().all()
2
0
78,561,796
2024-5-31
https://stackoverflow.com/questions/78561796/cant-find-command-fastapi-for-python-on-msys2
I want to make the Python fastapi work with MSYS2. I work from the MSYS2 MinGW x64 shell. I have the following installed using the commands: pacman -S mingw-w64-x86_64-python pacman -S mingw-w64-x86_64-python-pip pacman -S mingw-w64-x86_64-python-fastapi and I also ran pip install fastapi which gave the output imelf@FLORI-LENOVO-93 MINGW64 /c/Users/imelf/Documents/NachhilfeInfoUni/Kadala/Pybind11 $ pip install fastapi Requirement already satisfied: fastapi in c:/software/msys64/mingw64/lib/python3.11/site-packages (0.109.0) Requirement already satisfied: pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from fastapi) (2.5.3) Requirement already satisfied: starlette<0.36.0,>=0.35.0 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from fastapi) (0.35.0) Requirement already satisfied: typing-extensions>=4.8.0 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from fastapi) (4.9.0) Requirement already satisfied: annotated-types>=0.4.0 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi) (0.6.0) Requirement already satisfied: pydantic-core==2.14.6 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from pydantic!=1.8,!=1.88.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi) (2.14.6) Requirement already satisfied: anyio<5,>=3.4.0 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from starlette<0.36.0,>=0.35.0->fastapi) (4.2.0) Requirement already satisfied: idna>=2.8 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from anyio<5,>=3.4.0->starlette<0.36.0,>=0.35.0->fastapi) (3.6) Requirement already satisfied: sniffio>=1.1 in c:/software/msys64/mingw64/lib/python3.11/site-packages (from anyio<5,>=3.4.0->starlette<0.36.0,>=0.35.0->fastapi) (1.3.0) Now I have this file test.py with the following content: from fastapi import FastAPI meine_coole_rest_api = FastAPI() @meine_coole_rest_api.get("/") async def wurzel_pfad(): return {"coole_nachricht" : "Fast API works"} Trying to launch it with as described in this tutorial fastapi dev test.py gives the error message imelf@FLORI-LENOVO-93 MINGW64 /c/Users/imelf/Documents/NachhilfeInfoUni/Kadala/Pybind11 $ fastapi dev test.py bash: fastapi: command not found Why does he not recognize the command? How do I install fastapi correctly?
fastapi doesn't ship with the CLI by default anymore; you can install it with pip install fastapi-cli.
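In the MSYS2 MinGW shell from the question that would look roughly like this (a sketch assuming the MinGW pip is the one on PATH; the module and app names come from the question's test.py):

# install the FastAPI command-line tool, then start the dev server
pip install fastapi-cli
fastapi dev test.py

# alternative, if uvicorn is installed: run the app object from test.py directly
python -m uvicorn test:meine_coole_rest_api --reload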
1
5
78,552,387
2024-5-30
https://stackoverflow.com/questions/78552387/optimizing-node-placement-in-a-2d-grid-to-match-specific-geodesic-distances
I'm working on a problem where I need to arrange a set of nodes within a 2D grid such that the distance between pairs of nodes either approximate specific values as closely as possible or meet a minimum threshold. In other words, for some node pairs, the distance should be approximately equal to a predefined value (β‰ˆ). For other node pairs, the distance should be greater than or equal to a predefined threshold (β‰₯). Additional challenge: the grid is inscribed inside a concave polygon, so distances must be geodesic. Question: Using OR-Tools, how can I efficiently approximate the location of the nodes given the constraints above-mentioned? [EDIT] I've revised the example script, trying as best I can to apply Laurent's wise suggestions, but my poor understanding of OR-Tools (and its subtleties) still makes this a very difficult task. This version: creates a simple conforming grid pre-computes geodesic distances between all pairs of cells within that grid indicates the number of nodes to place as well as their associated target pairwise distances each pairwise distance comes with an objective to match (either β‰ˆ or β‰₯) declares the CP-SAT model and creates the main variables for the problem (new) ensures each node is assigned exactly to one position and each position can have at most one node (new) creates a Boolean variable checking if the distance constraint is met for each pair of nodes (new) use AddImplication to connect the Boolean variables with the node positions (new) applies conditional penalties based on whether the distance condition is met and tries to minimize their sum Unfortunately, I must be missing a few nuances because this implementation doesn't return any results even though the solution space not null. from ortools.sat.python import cp_model from itertools import combinations import networkx as nx # Dimensions of the original grid (width & height) w, h = 7, 5 # Selection of grid-cell indices (conforming/concave grid) cell_indices = list(sorted(set(range(w * h)) - set([0, 1, 2, 3, 7, 8, 9, 10, 28, 29]))) # Topology of the conforming/concave grid T = nx.Graph() for i in cell_indices: if i >= w and (i - w) in cell_indices: T.add_edge(i, i - w, weight=1) if i < w * (h - 1) and (i + w) in cell_indices: T.add_edge(i, i + w, weight=1) if i % w != 0 and (i - 1) in cell_indices: T.add_edge(i, i - 1, weight=1) if (i + 1) % w != 0 and (i + 1) in cell_indices: T.add_edge(i, i + 1, weight=1) # Precompute geodesic distances using Dijkstra's algorithm geodesic_distances = dict(nx.all_pairs_dijkstra_path_length(T)) # Get the largest geodesic distance max_distance = float('-inf') for i1 in geodesic_distances: for i2 in geodesic_distances[i1]: if i1 != i2 and i1 > i2: distance = geodesic_distances[i1][i2] if distance > max_distance: max_distance = distance # Number of nodes to place num_nodes = 5 # Target distances to match between each pair of nodes + type of objective (β‰ˆ or β‰₯) objective_distances = {(0, 1): (3, 'β‰ˆ'), (0, 2): (2, 'β‰₯'), (0, 3): (2, 'β‰₯'), (0, 4): (3, 'β‰ˆ'), (1, 2): (3, 'β‰₯'), (1, 3): (3, 'β‰₯'), (1, 4): (4, 'β‰₯'), (2, 3): (2, 'β‰ˆ'), (2, 4): (4, 'β‰₯'), (3, 4): (3, 'β‰ˆ')} # Instantiate model model = cp_model.CpModel() # Ensure each position can have at most one node node_at_position = {} for index in cell_indices: at_most_one = [] for node in range(num_nodes): var = model.NewBoolVar(f'node_{node}_at_position_{index}') node_at_position[node, index] = var at_most_one.append(var) # Apply at most one node per position constraint model.AddAtMostOne(at_most_one) 
# Ensure each node is assigned exactly to one position for node in range(num_nodes): model.AddExactlyOne(node_at_position[node, idx] for idx in cell_indices) penalties = [] # For each pair of nodes: for (node1, node2), (target_distance, constraint_type) in objective_distances.items(): # For each compatible pair of cells for i1, i2 in combinations(cell_indices, 2): # Get the corresponding geodesic distance distance = geodesic_distances[i1][i2] # Create a Boolean variable is_compatible = model.NewBoolVar(f'compat_{node1}_{node2}_{i1}_{i2}') # Create a penalty variable penalty = model.NewIntVar(0, max_distance, f'penalty_{node1}_{node2}_{i1}_{i2}') if constraint_type == 'β‰ˆ': # Condition that `is_compatible` will be True if the distance approximates (deviation: -1/+1) the target distance model.Add(is_compatible == (target_distance - 1 <= distance <= target_distance + 1)) elif constraint_type == 'β‰₯': # Condition that `is_compatible` will be True if the distance is at least the target distance model.Add(is_compatible == (distance >= target_distance)) # If 'is_compatible' is true -> implications to enforce node positions model.AddImplication(is_compatible, node_at_position[node1, i1]) model.AddImplication(is_compatible, node_at_position[node2, i2]) # If it is not -> add a penalty model.Add(penalty == abs(distance - target_distance)).OnlyEnforceIf(is_compatible.Not()) # Accumulate penalties penalties.append(penalty) # Objective to minimize total penalty model.Minimize(sum(penalties)) # Solving the model solver = cp_model.CpSolver() status = solver.Solve(model) print("Solver status:", solver.StatusName(status)) if status == cp_model.FEASIBLE or status == cp_model.OPTIMAL: print("Solution found:") for node in range(num_nodes): for index in cell_indices: if solver.Value(node_at_position[node, index]): print(f'Node {node} is at position {index}') else: print("No solution found.")
If you're not beholden to ortools, here is a solution using pyomo and HiGHS solver. Minimally, this might give you some insights on the correct ortools syntax to implement some of these types of constraints, if this formulation works. I'm not familiar w/ the ortools syntax, but this aligns with the context of what @Laurent Perron mentioned above. This is a pretty stout BIP (Binary Integer Program) and I've put an absolute gap in the solve such that if it gets within 1.0 of the theoretical limit, it will stop rather than continuing the solve. With that, it pops out an optimal answer in about 10 minutes on my machine, with the solution shown at bottom with a total penalty of 3 distance units for the example shown. Code: """ placing nodes within a grid Created on: 5/31/24 """ from collections import defaultdict import pyomo.environ as pyo import highspy import networkx as nx # Dimensions of the original grid (width & height) w, h = 7, 5 # Selection of grid-cell indices (conforming/concave grid) cell_indices = list(sorted(set(range(w * h)) - {0, 1, 2, 3, 7, 8, 9, 10, 28, 29})) # Topology of the conforming/concave grid T = nx.Graph() for i in cell_indices: if i >= w and (i - w) in cell_indices: T.add_edge(i, i - w, weight=1) if i < w * (h - 1) and (i + w) in cell_indices: T.add_edge(i, i + w, weight=1) if i % w != 0 and (i - 1) in cell_indices: T.add_edge(i, i - 1, weight=1) if (i + 1) % w != 0 and (i + 1) in cell_indices: T.add_edge(i, i + 1, weight=1) # Precompute geodesic distances using Dijkstra's algorithm geodesic_distances = dict(nx.all_pairs_dijkstra_path_length(T)) # Get the largest geodesic distance max_distance = float('-inf') for i1 in geodesic_distances: for i2 in geodesic_distances[i1]: if i1 != i2 and i1 > i2: distance = geodesic_distances[i1][i2] if distance > max_distance: max_distance = distance # Number of nodes to place num_nodes = 5 # Target distances to match between each pair of nodes + type of objective (β‰ˆ or β‰₯) objective_distances = {(0, 1): (3, 'β‰ˆ'), (0, 2): (2, 'β‰₯'), (0, 3): (2, 'β‰₯'), (0, 4): (3, 'β‰ˆ'), (1, 2): (3, 'β‰₯'), (1, 3): (3, 'β‰₯'), (1, 4): (4, 'β‰₯'), (2, 3): (2, 'β‰ˆ'), (2, 4): (4, 'β‰₯'), (3, 4): (3, 'β‰ˆ') } # print(geodesic_distances) # let's chop up the pairs into needed subsets, and hook them up to penalty values fudge_factor = 1.0 # the max allowable miss for β‰ˆ constraints cutoff_points = [t[0] for t in objective_distances.values()] ge_pairs = defaultdict(dict) # GTE pairs at cutoff point ae_pairs = defaultdict(dict) # Approx Equal pairs at cutoff point for idx in cell_indices: for other, dist in geodesic_distances[idx].items(): if other <= idx: continue # we only need "sorted" pairs like the objective_distances for c in cutoff_points: if dist >= c: ge_pairs[c][idx, other] = dist - c if c - fudge_factor <= dist <= c + fudge_factor: ae_pairs[c][idx, other] = abs(dist - c) # now bin up the requirements as basis for the associated constraints ge_reqts = {} ae_reqts = {} for (n1, n2), (c, c_type) in objective_distances.items(): if c_type == 'β‰ˆ': ae_reqts[n1, n2] = c elif c_type == 'β‰₯': ge_reqts[n1, n2] = c else: raise ValueError() # on to the model... 
m = pyo.ConcreteModel() # === SETS === m.G = pyo.Set(initialize=cell_indices, doc='Grid Points') m.GG = pyo.Set(initialize=[(g1, g2) for g1 in m.G for g2 in m.G if g2 > g1], doc=('paired ' 'assignment at ' 'g_1, g_2)')) m.N = pyo.Set(initialize=list(range(num_nodes)), doc='Nodes') m.NN = pyo.Set(initialize=objective_distances.keys(), doc='assignment requirements') def eligible_assignments(m, *nn): # a helper function to determine (limit) assignments to eligible spots if nn in ae_reqts: cutoff = ae_reqts[nn] eligible_locations = ae_pairs[cutoff] return eligible_locations if nn in ge_reqts: cutoff = ge_reqts[nn] eligible_locations = ge_pairs[cutoff] return eligible_locations m.eligible_locations = pyo.Set(m.NN, initialize=eligible_assignments) m.NNGG = pyo.Set(initialize=[(nn, gg) for nn in m.NN for gg in m.eligible_locations[nn]]) # === VARS === m.place = pyo.Var(m.N, m.G, domain=pyo.Binary, doc='place node N at point G') m.paired_assignment = pyo.Var(m.NNGG, domain=pyo.Binary, doc='node pair assigned to grid pair') # === OBJ === # helper expressions ge_penalties = sum(m.paired_assignment[nn, gg] * ge_pairs[ge_reqts[nn]][gg] for nn in ge_reqts for gg in m.eligible_locations[nn]) ae_penalties = sum(m.paired_assignment[nn, gg] * ae_pairs[ae_reqts[nn]][gg] for nn in ae_reqts for gg in m.eligible_locations[nn]) # the OBJECTIVE m.obj = pyo.Objective(expr=ge_penalties + ae_penalties, sense=pyo.minimize) # === CONSTRAINTS === @m.Constraint(m.N) def assign_exactly_once(m, n): return sum(m.place[n, g] for g in m.G) == 1 @m.Constraint(m.G) def max_one_per_grid_point(m, g): return sum(m.place[n, g] for n in m.N) <= 1 @m.Constraint(m.NNGG) def link_assignment(m, n1, n2, *gg): return m.paired_assignment[n1, n2, gg] <= sum(m.place[n1, g] + m.place[n2, g] for g in gg)/2 @m.Constraint(m.NN) def assign_pair_once(m, *nn): return sum(m.paired_assignment[nn, gg] for gg in m.eligible_locations[nn]) == 1 # === QA === # m.pprint() # === SOLVE === opt = pyo.SolverFactory('appsi_highs') res = opt.solve(m, options={'mip_abs_gap': 1.1}, tee=True) # abs gap implies that if we get within 1.0 of theoretical limit, that's good enough to stop. if pyo.check_optimal_termination(res): print('Optimal termination') print(res) print(' --- placements ---') for n in sorted(m.N): for g in m.G: if pyo.value(m.place[n, g]) > 0.5: print(f'place {n} at {g}') print(' --- pairing penalties ---') for nngg in m.NNGG: if pyo.value(m.paired_assignment[nngg]) > 0.5: nn = nngg[:2] gg = nngg[2:] if nn in ge_reqts: c = ge_reqts.get(nn, None) penalty = ge_pairs[c][gg] print(f'assign {nn} to (unordered) {gg} for penalty {penalty}', end='') else: c = ae_reqts.get(nn, None) penalty = ae_pairs[c][gg] print(f'assign {nn} to (unordered) {gg} for penalty {penalty}', end='') print(f' (dist: {geodesic_distances[gg[0]][gg[1]]})') else: print('failed solve ... see log') Output: ... Solving report Status Optimal Primal bound 3 Dual bound 2 Gap 33.33% (tolerance: 36.67%) Solution status feasible 3 (objective) 0 (bound viol.) 0 (int. viol.) 0 (row viol.) Timing 782.32 (total) 0.28 (presolve) 0.00 (postsolve) Nodes 170202 LP iterations 6791616 (total) 493334 (strong br.) 
682673 (separation) 357533 (heuristics) Optimal termination Problem: - Lower bound: 2.0 Upper bound: 3.0 Number of objectives: 1 Number of constraints: 0 Number of variables: 0 Sense: 1 Solver: - Status: ok Termination condition: optimal Termination message: TerminationCondition.optimal Solution: - number of solutions: 0 number of solutions displayed: 0 --- placements --- place 0 at 33 place 1 at 12 place 2 at 27 place 3 at 25 place 4 at 31 --- pairing penalties --- assign (0, 1) to (unordered) (12, 33) for penalty 0 (dist: 3) assign (0, 2) to (unordered) (27, 33) for penalty 0 (dist: 2) assign (0, 3) to (unordered) (25, 33) for penalty 0 (dist: 2) assign (0, 4) to (unordered) (31, 33) for penalty 1 (dist: 2) assign (1, 2) to (unordered) (12, 27) for penalty 0 (dist: 3) assign (1, 3) to (unordered) (12, 25) for penalty 0 (dist: 3) assign (1, 4) to (unordered) (12, 31) for penalty 1 (dist: 5) assign (2, 3) to (unordered) (25, 27) for penalty 0 (dist: 2) assign (2, 4) to (unordered) (27, 31) for penalty 0 (dist: 4) assign (3, 4) to (unordered) (25, 31) for penalty 1 (dist: 2) Process finished with exit code 0
3
1
78,561,392
2024-5-31
https://stackoverflow.com/questions/78561392/how-to-substitute-each-regex-pattern-with-a-corresponding-item-from-a-list
I have a string that I want to do regex substitutions on: string = 'ptn, ptn; ptn + ptn' And a list of strings: array = ['ptn_sub1', 'ptn_sub2', '2', '2'] I want to replace each appearance of the regex pattern 'ptn' with a corresponding item from array. Desired result: 'ptn_sub1, ptn_sub2; 2 + 2' I tried using re.finditer to iterate through the matches and substitute each time, but this caused the new string to get mangled since the length of the string changes with each iteration. import re matches = re.finditer(r'ptn', string) new_string = string for i, match in enumerate(matches): span = match.span() new_string = new_string[:span[0]] + array[i] + new_string[span[1]:] Mangled output: 'ptn_sptn_s2, ptn2tn + ptn' How can I do these substitutions correctly?
As stated, one just has to use the "callback" form of re.sub, and it is actually simple enough: import re string = 'ptn, ptn; ptn + ptn' array = ['ptn_sub1', 'ptn_sub2', '2', '2'] array_iter = iter(array) result = re.sub("ptn", lambda m: next(array_iter), string)
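Each time the pattern matches, re.sub calls the lambda, which pulls the next replacement from the iterator, so the substitutions are applied left to right without any index bookkeeping. Checking the result against the desired output from the question:

print(result)  # ptn_sub1, ptn_sub2; 2 + 2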
2
3
78,558,385
2024-5-31
https://stackoverflow.com/questions/78558385/python-sympy-partial-fraction-decomposition-result-not-as-expected
I am using the Python library Sympy to calculate the partial fraction decomposition of the following rational function: Z = (6.43369157032015e-9*s^3 + 1.35203404799555e-5*s^2 + 0.00357538393743079*s + 0.085)/(4.74334912634438e-11*s^4 + 4.09576274286244e-6*s^3 + 0.00334241812250921*s^2 + 0.15406018058983*s + 1.0) I am calling the method apart() like this: Z_f = apart(Z, full=True).doit() And this is the result: -1.30434068814287e+23/(s + 85524.0054884464) + 19145168.0/(s + 774.88576677949) + 91.9375/(s + 40.7977016133126) + 2.59375/(s + 7.79746609204661) As you can see from the result, these are the residuals: -3.60418953263334e+22 -4789228.25000000 11.0468750000000 0.787109375000000 Note: I identify them like this: Z_f_terms = Z_f.as_ordered_terms() and then extract them in a for cycle. The problem is that by using other tools I get different residuals. By using GNU Octave's residue() I get these residuals: 133.6 1.0776 0.39501 0.56426 Via Wolphram Alpha I get the same residuals as Octave see here the results (please wait until you see the partial fraction expansion) . Why isn't the Sympy library not working as expected? The poles are always correct. Only the residuals are not as I expect. Thanks for any suggestion.
The problem is to do with floats. Presumably something in the algorithm used by apart(full=True) or RootSum.doit() is numerically unstable for approximate coefficients. If you convert the coefficients to rational numbers before computing the RootSum then evalf gives the expected result: In [27]: apart(nsimplify(Z), full=True).evalf() Out[27]: 133.599202650992 1.07757928431867 0.395006955518971 0.564264854137341 ──────────────────── + ─────────────────── + ──────────────────── + ──────────────────── s + 85524.0054884464 s + 774.88576677949 s + 40.7977016133126 s + 7.79746609204661 In [29]: apart(Z, domain="QQ", full=True).evalf() # rational field Out[29]: 133.599202650994 1.07757928431867 0.395006955518971 0.564264854137341 ──────────────────── + ─────────────────── + ──────────────────── + ──────────────────── s + 85524.0054884464 s + 774.88576677949 s + 40.7977016133126 s + 7.79746609204661 Note that the general algorithm and approach used for full=True is really intended for finding an exact representation of the decomposition by avoiding radicals. It is not intended for floats. Probably though normal apart without full=True should compute this decomposition.
2
3
78,560,356
2024-5-31
https://stackoverflow.com/questions/78560356/how-to-update-fields-with-previous-fields-value-in-polars
I have this dataframe: import polars as pl df = pl.DataFrame({ 'file':['a','a','a','a','b','b'], 'ru':['fe','fe','ev','ev','ba','br'], 'rt':[0,0,1,1,1,0], }) shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ file ┆ ru ┆ rt β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ a ┆ ev ┆ 1 β”‚ β”‚ a ┆ ev ┆ 1 β”‚ β”‚ b ┆ ba ┆ 1 β”‚ β”‚ b ┆ br ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ I'd like to replace the values in "ru" and "rt" within the same group defined by "file" with the values of the first row in the group if the first "rt" value is 0. The desired output would look as follows. shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ file ┆ ru ┆ rt β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ b ┆ ba ┆ 1 β”‚ β”‚ b ┆ br ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ How can I achieve that?
Get first column values within each "file" group. This can be achieved using window functions (using pl.Expr.over in polars). df.with_columns( pl.col("ru").first().over("file"), pl.col("rt").first().over("file"), ) Polars also accepts multiple column names in pl.col and will evaluate the expressions independently of each other (just as above). df.with_columns( pl.col("ru", "rt").first().over("file") ) shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ file ┆ ru ┆ rt β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ b ┆ ba ┆ 1 β”‚ β”‚ b ┆ ba ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ Add condition to take use first column values. To only use the first values within each group, if the first column value of "rt" within the group is 0, we can use a pl.when().then().otherwise() construct. For the condition we again use window functions. df.with_columns( pl.when( pl.col("rt").first().over("file") == 0 ).then( pl.col("ru", "rt").first().over("file") ).otherwise( pl.col("ru", "rt") ) ) shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ file ┆ ru ┆ rt β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ a ┆ fe ┆ 0 β”‚ β”‚ b ┆ ba ┆ 1 β”‚ β”‚ b ┆ br ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
4
6
78,557,248
2024-5-30
https://stackoverflow.com/questions/78557248/pandas-map-multiple-columns-based-on-specific-conditions
My organization uses special codes for various employee attributes. We are migrating to a new system and I have to map these codes to a new code based on certain logic. Here is my mappings df Mappings: State Old_Mgmt New_Mgmt Old_ID New_ID New_Site 01 A001 A100 0000 0101 123 01 A002 A100 0000 0102 01 A003 A105 0000 0103 123 02 A001 A100 0000 0101 And here is EmployeeData: State Management ID Site 01 A001 0000 456 01 A002 0000 987 02 A002 0000 987 .... The logic for the mapping is to go through each row of EmployeeData and if there is a match for State, Management, and ID, then it will update to the corresponding New_ value. However for Site, it will update the Site ID only if New_Site is not blank/NaN. This mapping will modify the original dataframe. Based on the above mapping the new EmployeeData would be: State Management ID Site 01 A100 0101 123 (modified this row) 01 A100 0102 987 (modified this row) 02 A002 0000 987 .... My initial thought process was to do something like this: for i,r in EmployeeData.iterrows(): # For each employee row # Create masks for the filters we are looking for mask_state = Mappings['State'] == r['State'] mask_mgmt = Mappings['Old_Mgmt'] == r['Management'] mask_id = Mappings['Old_ID'] == r['ID'] # Filter mappings for the above 3 conditions MATCH = Mappings[mask_state & mask_mgmt & mask_id] if MATCH.empty: # No matches found print("No matches found in mapping. No need to update. Skipping.") continue MATCH = MATCH.iloc[0] # If a match is found, it will correspond to only 1 row EmployeeData.at[i, 'Management'] = MATCH['New_Mgmt'] EmployeeData.at[i, 'ID'] = MATCH['New_ID'] if pd.notna(MATCH['New_Site']): EmployeeData.at[i, 'Site'] = MATCH['New_Site'] However this seems fairly inefficient because I have to filter mappings for every row. If only 1 column was being mapped, I would do something like: # Make a dict mapping Old_Mgmt -> New_Mgmt MGMT_MAPPING = pd.Series(Mappings['New_Mgmt'].values,index=Mappings['Old_Mgmt']).to_dict() mask_state = Mappings['State'] = r['State'] EmployeeData.loc[mask_state, 'Management'] = EmployeeData.loc[mask_state, 'Management'].replace(MGMT_MAPPING) But that would not work for my situation since I need to map multiple values
Left-merge and update: EmployeeData.update(EmployeeData .rename(columns={'Management': 'Old_Mgmt', 'ID': 'Old_ID'}) .merge(Mappings.rename(columns={'New_Mgmt': 'Management', 'New_ID': 'ID', 'New_Site': 'Site'}), how='left', on=['State', 'Old_Mgmt', 'Old_ID'], suffixes=('_old', None)) .replace('', None)[EmployeeData.columns] ) Updated EmployeeData: State Management ID Site 0 01 A100 0101 123 1 01 A100 0102 987 2 02 A002 0000 987
2
1
78,556,399
2024-5-30
https://stackoverflow.com/questions/78556399/hide-pandas-column-headings-in-a-terminal-window-to-save-space-and-reduce-cognit
I am looping through the groups of a pandas groupby object to print the (sub)dataframe for each group. The headings are printed for each group. Here are some of the (sub)dataframes, with column headings "MMSI" and "ShipName": MMSI ShipName 15468 109080345 OYANES 3 [19%] 46643 109080345 OYANES 3 [18%] MMSI ShipName 19931 109080342 OYANES 2 [83%] 48853 109080342 OYANES 2 [82%] MMSI ShipName 45236 109050943 SVARTHAV 2 [11%] 48431 109050943 SVARTHAV 2 [14%] MMSI ShipName 21596 109050904 MR:N2FE [88%] 49665 109050904 MR:N2FE [87%] MMSI ShipName 13523 941500907 MIKKELSEN B 5 [75%] 45711 941500907 MIKKELSEN B 5 [74%] Web searching shows that pandas.io.formats.style.Styler.hide_columns can be used to suppress the headings. I am using Python 3.9, in which hide_columns is not recognized. However, dir(pd.io.formats.style.Styler) shows a hide method, for which the doc string gives this first example: >>> df = pd.DataFrame([[1,2], [3,4], [5,6]], index=["a", "b", "c"]) >>> df.style.hide(["a", "b"]) # doctest: +SKIP 0 1 c 5 6 When I try hide() and variations thereof, all I get is an address to the resulting Styler object: >>> df.style.hide(["a", "b"]) # doctest: +SKIP <pandas.io.formats.style.Styler at 0x243baeb1760> >>> df.style.hide(axis='columns') # https://stackoverflow.com/a/69111895 <pandas.io.formats.style.Styler at 0x243baeb17c0> >>> df.style.hide() # Desparate random trial & error <pandas.io.formats.style.Styler at 0x243baeb1520> What could cause my result to differ from the doc string? How can I properly use the Styler object to get the dataframe printed without column headings? I am using pandas 2.0.3 with Spyder 5.4.3.
I don’t know about Styler, but you can print the data without headers like this. import pandas as pd df = pd.read_csv(r"/home/bera/GIS/data/Pandas_testdata/flights.csv") for month, subframe in df.groupby("month"): print(subframe.to_string(header=False)) print("\n") Output: 3 1949 April 129 15 1950 April 135 27 1951 April 163 39 1952 April 181 51 1953 April 235 63 1954 April 227 75 1955 April 269 87 1956 April 313 99 1957 April 348 111 1958 April 348 123 1959 April 396 135 1960 April 461 7 1949 August 148 19 1950 August 170 31 1951 August 199 43 1952 August 242 55 1953 August 272 67 1954 August 293 79 1955 August 347 91 1956 August 405 103 1957 August 467 115 1958 August 505 127 1959 August 559 139 1960 August 606
3
2
78,552,542
2024-5-30
https://stackoverflow.com/questions/78552542/how-to-decrypt-a-key-encrypted-using-the-sodium-r-package-in-python
I have he following R functions that use the sodium R package to encrypt and decrypt an access key with a password: encrypt_access_key <- function(access_key, password) { key <- sha256(charToRaw(password)) nonce <- random(24) # Generate a random nonce encrypted <- data_encrypt(charToRaw(access_key), key, nonce) list( encrypted_access_key = base64encode(encrypted), nonce = base64encode(nonce) ) } decrypt_access_key <- function(encrypted_access_key, nonce, password) { key <- sha256(charToRaw(password)) encrypted_raw <- base64decode(encrypted_access_key) nonce_raw <- base64decode(nonce) rawToChar(data_decrypt(encrypted_raw, key, nonce_raw)) } I want to create a python function that mimics the functionality of the decrypt_access_key function, but I have not been able to do so... In case it is useful, here is the documentation for the data_encrypt and data_decrypt functions from the sodium package: link Edit: Here is a reproducible example encripted_message <- encrypt_access_key("This is a hidden message", "Example123") decrypt_access_key(encripted_message$encrypted_access_key, encripted_message$nonce, "Example123") There is no need to replicate the functionality of encrypt_access_key, all I need is a python function that can get the string stored as encripted_message$encrypted_access_key and decrypt the message Edit 2: Here is the reproductible example with the explicit value of the nonce and the encrypted message decrypt_access_key(encrypted_access_key = "QG42J4fSVcY9luHaXiEhQcR85isjgUlmFc97pMHIIwuxQyZzDNpz2w==", nonce = "LpO7fBaCs1iobxDxVmOPFycTe/BCifqh", password = "Example123") And here is one of the failed attempts to decrypt the message using python: import base64 from Crypto.Cipher import ChaCha20_Poly1305 from Crypto.Protocol.KDF import PBKDF2 import hashlib def decrypt_access_key(encrypted_access_key, nonce, password): # Decode the base64 encoded components encrypted_access_key_bytes = base64.b64decode(encrypted_access_key) nonce_bytes = base64.b64decode(nonce) # Derive the key using PBKDF2 key = hashlib.pbkdf2_hmac('sha256', password.encode(), b'', 100000) # Decrypt the access key using ChaCha20-Poly1305 cipher = ChaCha20_Poly1305.new(key=key, nonce=nonce_bytes) decrypted = cipher.decrypt(encrypted_access_key_bytes) return decrypted.decode('utf-8') # Example usage encrypted_access_key = 'QG42J4fSVcY9luHaXiEhQcR85isjgUlmFc97pMHIIwuxQyZzDNpz2w==' nonce = 'LpO7fBaCs1iobxDxVmOPFycTe/BCifqh' password = 'Example123' decrypted_access_key = decrypt_access_key(encrypted_access_key, nonce, password) print(decrypted_access_key)
The R code for encryption derives the key from a password using SHA256 and performs authenticated encryption with SecretBox, which uses XSalsa20 and Poly1305 under the hood, see here. The result returns the (24 bytes) nonce and the concatenation of ciphertext and (16 bytes) tag separately. The posted Python code is not compatible with SecretBox. A possible NaCl/Libsodium Python port that supports SecretBox is PyNaCl. In contrast to the R library, PyNaCl returns the concatenation of nonce, ciphertext and tag as the result of the encryption, and requires the concatenation of nonce, ciphertext and tag during decryption: import nacl.secret import hashlib import base64 key = hashlib.sha256(b'Example123').digest() box = nacl.secret.SecretBox(key) nonceB64 = 'LpO7fBaCs1iobxDxVmOPFycTe/BCifqh' ciphertextTagB64 = 'QG42J4fSVcY9luHaXiEhQcR85isjgUlmFc97pMHIIwuxQyZzDNpz2w==' nonceCiphertextTag = base64.b64decode(nonceB64) + base64.b64decode(ciphertextTagB64) decrypted = box.decrypt(nonceCiphertextTag) print(decrypted.decode('utf-8')) # This is a hidden message Note that the use of a fast digest such as SHA256 is a vulnerability. It is more secure to use a reliable key derivation function, at least PBKDF2, or even better the more modern algorithms mentioned in the Libsodium documentation (Argon2, Scrypt), see here.
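As a sketch of the key-derivation hardening mentioned above (my own example, not part of the original R scheme, so it only applies when both sides agree to move off plain SHA256), PyNaCl exposes Argon2id through nacl.pwhash:

import nacl.secret
import nacl.utils
import nacl.pwhash

password = b'Example123'

# Derive a 32-byte SecretBox key from the password with Argon2id.
# The salt must be stored alongside the nonce and ciphertext.
salt = nacl.utils.random(nacl.pwhash.argon2id.SALTBYTES)
key = nacl.pwhash.argon2id.kdf(
    nacl.secret.SecretBox.KEY_SIZE, password, salt,
    opslimit=nacl.pwhash.argon2id.OPSLIMIT_MODERATE,
    memlimit=nacl.pwhash.argon2id.MEMLIMIT_MODERATE,
)

box = nacl.secret.SecretBox(key)
encrypted = box.encrypt(b'This is a hidden message')  # nonce || ciphertext || tag
print(box.decrypt(encrypted).decode('utf-8'))  # This is a hidden message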
1
4
78,555,894
2024-5-30
https://stackoverflow.com/questions/78555894/find-all-columns-with-dateformat-in-dataframe
For the following df I would like to extract columns with "dates": import pandas as pd df = pd.DataFrame([["USD", 12.3, 1, 23.33, 33.1],["USD", 32.1, 2, 34.44, 23.1]],columns= ['currency', '1999-07-31', 'amount', '1999-10-31', '2000-01-31']) currency 1999-07-31 amount 1999-10-31 2000-01-31 USD 12.3 1 23.33 33.1 USD 32.1 2 34.44 23.1 Current code: datetime_types = ["datetime", "datetime64", "datetime64[ns]", "datetimetz"] dates = df.columns.to_frame().select_dtypes(include=datetime_types) Current output: dates.to_string() 'Empty DataFrame\nColumns: []\nIndex: [currency, 1999-07-31, amount, 1999-10-31, 2000-01-31]' Desired output: [1999-07-31, 1999-10-31, 2000-01-31]
You can't use select_dtypes in this context, this function filters based on the dtypes of the columns, not their names. You can however use pd.to_datetime with errors='coerce' and notna to form a boolean indexer: out = df.loc[:, pd.to_datetime(df.columns, errors='coerce', format='mixed') .notna()] Output: 1999-07-31 1999-10-31 2000-01-31 0 12.3 23.33 33.1 1 32.1 34.44 23.1 If you just want the column names: cols = df.columns[pd.to_datetime(df.columns, errors='coerce', format='mixed').notna()] Output: Index(['1999-07-31', '1999-10-31', '2000-01-31'], dtype='object')
2
6
78,555,396
2024-5-30
https://stackoverflow.com/questions/78555396/or-tools-cp-sat-python-example-terminates-after-creating-solver
I am new to Google OR-Tools and trying to start by running a basic example from OR-Tools example library found here https://github.com/google/or-tools/blob/stable/ortools/sat/samples/cp_sat_example.py. Every code example gets hung up on the same part after creating the solver. When I run the code I am not getting an output from any of the examples, each time it gets hung up after creating the solver, then the script just seems to terminate. I have modified this CP-Stat model code example with some print statements for debugging but I still can't figure out why every example gets hung up on the same part. #!/usr/bin/env python3 # [START program] """Simple solve.""" # [START import] from ortools.sat.python import cp_model # [END import] def main() -> None: """Minimal CP-SAT example to showcase calling the solver.""" print("Starting the solver...") # Debug print statement try: # Creates the model. # [START model] print("Creating model...") model = cp_model.CpModel() print("Model created.") # [END model] # Creates the variables. # [START variables] print("Creating variables...") var_upper_bound = max(50, 45, 37) x = model.NewIntVar(0, var_upper_bound, "x") y = model.NewIntVar(0, var_upper_bound, "y") z = model.NewIntVar(0, var_upper_bound, "z") print(f"Variables created: x in [0, {var_upper_bound}], y in [0, {var_upper_bound}], z in [0, {var_upper_bound}]") # [END variables] # Creates the constraints. # [START constraints] print("Adding constraints...") model.Add(2 * x + 7 * y + 3 * z <= 50) model.Add(3 * x - 5 * y + 7 * z <= 45) model.Add(5 * x + 2 * y - 6 * z <= 37) print("Constraints added.") # [END constraints] # [START objective] print("Setting objective...") model.Maximize(2 * x + 2 * y + 3 * z) print("Objective set.") # [END objective] # Creates a solver and solves the model. # [START solve] print("Creating solver...") solver = cp_model.CpSolver() print("Solver created.") print("Solving model...") status = solver.Solve(model) print(f"Model solved with status: {status}") # [END solve] # [START print_solution] print("Checking solution...") if status == cp_model.OPTIMAL or status == cp_model.FEASIBLE: print(f"Maximum of objective function: {solver.ObjectiveValue()}\n") print(f"x = {solver.Value(x)}") print(f"y = {solver.Value(y)}") print(f"z = {solver.Value(z)}") else: print("No solution found.") # [END print_solution] # Statistics. # [START statistics] print("Printing statistics...") print("\nStatistics") print(f" status : {solver.StatusName(status)}") print(f" conflicts: {solver.NumConflicts()}") print(f" branches : {solver.NumBranches()}") print(f" wall time: {solver.WallTime()} s") print("Statistics printed.") # [END statistics] except Exception as e: print(f"An error occurred: {e}") if __name__ == "__main__": main() print("Solver finished.") # Debug print statement # [END program] I am using python version 3.12.1 with VS Code. I have uninstalled and reinstalled OR-Tools and all dependent libraries multiple times. The output in the terminal appears as follows: Starting the solver... Creating model... Model created. Creating variables... Variables created: x in [0, 50], y in [0, 50], z in [0, 50] Adding constraints... Constraints added. Setting objective... Objective set. Creating solver... Solver created. Solving model... The optimized solution should look something like: Model solved with status: 4 # This status code should indicate OPTIMAL or FEASIBLE Checking solution... 
Maximum of objective function: 32.0 # Example value x = 5.0 # Example value y = 0.0 # Example value z = 0.0 # Example value Printing statistics... Statistics status : OPTIMAL # Example status conflicts: 0 branches : 0 wall time: 0.001 s Statistics printed. Solver finished. I believe there is an issue with solver itself but I'm not sure how to diagnose what it is, it may be something simple that I am overlooking. Any help would be greatly appreciated.
Are you using OR-Tools 9.10 on Windows? If so, check the known issues in the release notes: https://github.com/google/or-tools/releases. You may need to update the Visual Studio redistributable libraries.
3
2
78,555,411
2024-5-30
https://stackoverflow.com/questions/78555411/fillna-with-values-from-other-rows-with-matching-keys
In the dataframe I define below I want to use the features ID and ID2 to fill the cells of features val1 and val2 with values. I want all ID and ID2 combinations to have the same values for the features val1 and val2. df = pd.DataFrame({'ID':[0,0,0,1,1,1], 'DATE':['2021', '2022', '2023', '2021', '2022', '2023'], 'ID2':[23, 34, 54, 321, 1244, 1244], 'val1':[np.nan, 200, 300, np.nan, 234, np.nan], 'val2':[55555, 66666, 77777, 88888, 99999, np.nan], 'val3':['A', 'F', 'W', 'T', 'I', 'O']}) #expected result print(pd.DataFrame({'ID':[0,0,0,1,1,1], 'DATE':['2021', '2022', '2023', '2021', '2022', '2023'], 'ID2':[23, 34, 54, 321, 1244, 1244], 'val1':[np.nan, 200, 300, np.nan, 234, 234], 'val2':[55555, 66666, 77777, 88888, 99999, 99999], 'val3':['A', 'F', 'W', 'T', 'I', 'O']}))
If need first non missing value per groups in columns val1 and val2 use GroupBy.transform with GroupBy.first: df[['val1','val2']] = df.groupby(['ID','ID2'])[['val1','val2']].transform('first') print (df) ID DATE ID2 val1 val2 val3 0 0 2021 23 NaN 55555.0 A 1 0 2022 34 200.0 66666.0 F 2 0 2023 54 300.0 77777.0 W 3 1 2021 321 NaN 88888.0 T 4 1 2022 1244 234.0 99999.0 I 5 1 2023 1244 234.0 99999.0 O
2
3
78,555,334
2024-5-30
https://stackoverflow.com/questions/78555334/filter-a-pandas-dataframe-based-on-multiple-columns-with-a-corresponding-list-of
I have a DataFrame that looks a bit like this: A B C D ... G H I J 0 First First First First ... 0.412470 0.758011 0.066926 0.877992 1 First First First Third ... 0.007162 0.957042 0.601337 0.636086 2 First First Third First ... 0.956398 0.640909 0.602861 0.679656 3 First First Third Third ... 0.905421 0.199685 0.471300 0.975808 4 First Third First First ... 0.378181 0.498606 0.865298 0.914407 5 First Third First Third ... 0.387706 0.247412 0.339593 0.431647 6 First Third Third First ... 0.582202 0.046199 0.496258 0.533133 7 First Third Third Third ... 0.877199 0.011512 0.338528 0.938252 8 Third First First First ... 0.446433 0.175686 0.115796 0.985400 9 Third First First Third ... 0.315839 0.252855 0.142463 0.929233 10 Third First Third First ... 0.192566 0.600732 0.434166 0.933182 11 Third First Third Third ... 0.380029 0.511411 0.672583 0.807731 12 Third Third First First ... 0.915590 0.507470 0.390135 0.303314 13 Third Third First Third ... 0.977414 0.062521 0.909845 0.314432 14 Third Third Third First ... 0.608958 0.384802 0.193425 0.689283 15 Third Third Third Third ... 0.496223 0.478222 0.076192 0.695453 [16 rows x 10 columns] I also have a list (coming from elsewhere) with the values for A, B, C & D that I'm looking for, something like this: expected = ['First', 'Third', 'First', 'Third'] I'd like to filter to find a row matching a certain set of ABCD values, where the expected values are in a list. Something like this (which doesn't work): # This looks neat, but doesn't work rows = df[df[['A', 'B', 'C', 'D'] == expected]] rows # Not what I was hoping for! Out[17]: A B C D E F G H I J 0 First NaN First NaN NaN NaN NaN NaN NaN NaN 1 First NaN First Third NaN NaN NaN NaN NaN NaN 2 First NaN NaN NaN NaN NaN NaN NaN NaN NaN 3 First NaN NaN Third NaN NaN NaN NaN NaN NaN 4 First Third First NaN NaN NaN NaN NaN NaN NaN 5 First Third First Third NaN NaN NaN NaN NaN NaN 6 First Third NaN NaN NaN NaN NaN NaN NaN NaN 7 First Third NaN Third NaN NaN NaN NaN NaN NaN 8 NaN NaN First NaN NaN NaN NaN NaN NaN NaN 9 NaN NaN First Third NaN NaN NaN NaN NaN NaN 10 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 11 NaN NaN NaN Third NaN NaN NaN NaN NaN NaN 12 NaN Third First NaN NaN NaN NaN NaN NaN NaN 13 NaN Third First Third NaN NaN NaN NaN NaN NaN 14 NaN Third NaN NaN NaN NaN NaN NaN NaN NaN 15 NaN Third NaN Third NaN NaN NaN NaN NaN NaN I could use dropna(subset=['A', 'B', 'C', 'D']) to get the relevant rows, then extract the index and use it on the original table, but that's getting quite long-winded. I know I can do this long-hand like this, but I'm wondering whether there's a neater way: # This works, but is clunky: rows = df[(df['A'] == expected[0]) & (df['B'] == expected[1]) & (df['C'] == expected[2]) & (df['D'] == expected[3])] rows # This is what I want: Out[19]: A B C D ... G H I J 5 First Third First Third ... 0.387706 0.247412 0.339593 0.431647 [1 rows x 10 columns] Is there a simpler way of doing this? My searching for filtering by lists just seems to come up with lots of isin suggestions, which aren't relevant.
You need select columns by [[]] first for subset, compare by list and test if all values are Trues by DataFrame.all: expected = ['First', 'Third', 'First', 'Third'] rows = df[(df[['A', 'B', 'C', 'D']] == expected).all(axis=1)] print (rows) A B C D ... G H I J 5 First Third First Third ... 0.387706 0.247412 0.339593 0.431647 How it working: print (df[['A', 'B', 'C', 'D']]) A B C D 0 First First First First 1 First First First Third 2 First First Third First 3 First First Third Third 4 First Third First First 5 First Third First Third 6 First Third Third First 7 First Third Third Third 8 Third First First First 9 Third First First Third 10 Third First Third First 11 Third First Third Third 12 Third Third First First 13 Third Third First Third 14 Third Third Third First 15 Third Third Third Third print ((df[['A', 'B', 'C', 'D']] == expected)) A B C D 0 True False True False 1 True False True True 2 True False False False 3 True False False True 4 True True True False 5 True True True True 6 True True False False 7 True True False True 8 False False True False 9 False False True True 10 False False False False 11 False False False True 12 False True True False 13 False True True True 14 False True False False 15 False True False True print ((df[['A', 'B', 'C', 'D']] == expected).all(axis=1)) 0 False 1 False 2 False 3 False 4 False 5 True 6 False 7 False 8 False 9 False 10 False 11 False 12 False 13 False 14 False 15 False dtype: bool
2
3
78,554,950
2024-5-30
https://stackoverflow.com/questions/78554950/polars-map-elements-doesnt-work-properly-with-isinstance
I have a polars dataframe with two columns, one contains lists of string and the other one string. I want to apply the following expression to both columns. However, for some reason isinstance(x, list) doesn't work properly. def process_column(column_name: str, alias_name: str) -> pl.Expr: return ( pl.col(column_name).map_elements( lambda x: " ".join(x) if isinstance(x, list) else x ) .str.to_lowercase() .str.split(by="-") .list.join(" ") .alias(alias_name) ) Here is a sample dataframe df = pl.DataFrame({ "lists": [["hello", "World"], ["polars", "IS", "fast"]], "strings": ["foo-hello", "bOO"] }) Applying process_column to the "strings" column gives the expected result. df.with_columns(process_column("strings", "processed_string")) However, for the "lists" column a SchemaError is raised. df.with_columns(process_column("lists", "processed_lists")) polars.exceptions.SchemaError: invalid series dtype: expected `String`, got `list[str]` I tried using map_elements with return_dtype=pl.String. This doesn't return an error, but the output is wrong.
I'd start by checking what is going on when applying pl.Expr.map_elements to a column of type List[pl.String] as follows. def check(x): print(x) return x df.with_columns( pl.col("lists").map_elements(check) ) shape: (2,) Series: '' [str] [ "hello" "World" ] shape: (3,) Series: '' [str] [ "polars" "IS" "fast" ] shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ lists ┆ strings β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[str] ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ ["hello", "World"] ┆ foo-hello β”‚ β”‚ ["polars", "IS", "fast"] ┆ bOO β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ This suggests that the function passed to pl.Expr.map_elements doesn't receive native python lists, but pl.Series objects instead. Hence, we should replace lambda x: " ".join(x) if isinstance(x, list) else x with lambda x: x.str.concat(" ").item() if isinstance(x, pl.Series) else x to obtain the expected result. def process_column(column_name: str, alias_name: str) -> pl.Expr: return ( pl.col(column_name) .map_elements( lambda x: x.str.concat(" ").item() if isinstance(x, pl.Series) else x, return_dtype=pl.String ) .str.to_lowercase() .str.split(by="-") .list.join(" ") .alias(alias_name) ) df.with_columns(process_column("lists", "processed_lists")) Note. I've also added the parameter return_dtype=pl.String to silence a warning about calling map_elements without specifying return_dtype possibly leading to unpredictable results. shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ lists ┆ strings ┆ processed_lists β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ list[str] ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═════════════════║ β”‚ ["hello", "World"] ┆ foo-hello ┆ hello world β”‚ β”‚ ["polars", "IS", "fast"] ┆ bOO ┆ polars is fast β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
2
2
78,554,836
2024-5-30
https://stackoverflow.com/questions/78554836/group-by-specific-column-value-pandas
I have my dataframe set up like this Cus_ID Cus_Type Cost birthdate 123 Owner 50 01 Jan 1980 123 Spouse 50 10 Feb 1982 123 Father 300 01 Dec 1950 125 Owner 20 30 Jan 1990 125 Spouse 30 15 Jul 1994 125 Mother 100 06 Sep 1970 I am trying to do a groupby where I group by Cus_ID but return the cus_type owner such that the final product looks like this: Cus_ID Cus_Type Cost birthdate 123 Owner 400 01 Jan 1980 125 Owner 150 30 Jan 1990 I have a solution to work around it such as doing the groupby cus_id, summing the cost column then adding a column called cus_type on the new dataframe populated with cus_type then do a left join by cus_id and cus_type to the original DF to get the birth date. Is there a better way to do it than the solution that I currently have?
Sort the rows to have the Owner first or last, then use a custom groupby.agg: out = (df.sort_values(by='Cus_Type', key=lambda s: s.eq('Owner')) .groupby('Cus_ID', as_index=False) .agg({'Cus_Type': 'last', 'Cost': 'sum', 'birthdate': 'last'}) ) Another way that is more generic if you have many columns: d = dict.fromkeys(df.columns, 'last') d['Cost'] = 'sum' out = (df.sort_values(by='Cus_Type', key=lambda s: s.eq('Owner')) .groupby('Cus_ID', as_index=False).agg(d) ) # OR out = (df.assign(flag=df['Cus_Type'].eq('Owner')) .groupby('Cus_ID', as_index=False) .agg({'flag': 'idxmax', 'Cost': 'sum'}) ) out = out.join(df.drop(columns=['Cus_ID', 'Cost']) .reindex(out.pop('flag')).reset_index(drop=True)) Output: Cus_ID Cus_Type Cost birthdate 0 123 Owner 400 01 Jan 1980 1 125 Owner 150 30 Jan 1990
2
2
78,551,982
2024-5-29
https://stackoverflow.com/questions/78551982/why-is-unpacking-a-list-in-indexing-a-syntax-error-in-python-3-8-but-not-python
The following code import numpy as np x = np.arange(32).reshape(2,2,2,2,2) extra = [1 for _ in range(3)] print(x[*extra, 0, 0]) prints 28 as expected in Python 3.12 but results in the syntax error File "main.py", line 4 print(x[*extra, 0, 0]) ^ SyntaxError: invalid syntax in Python 3.8. Replacing the indexing with print(x[(*extra, 0, 0)]) works in both versions. The NumPy documentation states that the offending line is syntax sugar for the working one, so I do not understand why there is a syntax error. A quick search for the terms "unpack","index", and "getitem" for the versions 3.9 to 3.12 (inclusive) on https://docs.python.org/3/whatsnew/index.html didn't turn up any indication that there was a change to indexing or unpacking syntax, although I may have missed something. The closest I found was that Python 3.11 has the change "Starred unpacking expressions can now be used in for statements."
This was a consequence of grammar changes in PEP 646 – Variadic Generics and somehow didn't get a mention in the changelog for 3.11. Minimal reproducer: $ python3.11 -c 'print({():42}[*()])' 42 $ python3.10 -c 'print({():42}[*()])' File "<string>", line 1 print({():42}[*()]) ^^^^^^^^^^^^ SyntaxError: invalid syntax. Perhaps you forgot a comma? The relevant diff can be found at #31018: Implement PEP 646 grammar changes.
5
5
78,551,846
2024-5-29
https://stackoverflow.com/questions/78551846/pandas-access-column-like-results-from-groupby-and-agg
I am using groupby and agg to summarize groups of dataframe rows. I summarize each group in terms of its count and size: >>> import pandas as pd >>> df = pd.DataFrame([ [ 1, 2, 3 ], [ 2, 3, 1 ], [ 3, 2, 1 ], [ 2, 1, 3 ], [ 1, 3, 2 ], [ 3, 3, 3 ] ], columns=['A','B','C'] ) >>> gbB = df.groupby('B',as_index=False) >>> Cagg = gbB.C.agg(['count','size']) B count size 0 1 1 1 1 2 2 2 2 3 3 3 The result looks like a dataframe with columns for the grouping variable B and for the summaries count and size: >>> Cagg.columns Index(['B', 'count', 'size'], dtype='object') However, I can't access each of the count and size columns for further manipulation as series or by conversion to_list: >>> Cagg.count <bound method DataFrame.count of B count size 0 1 1 1 1 2 2 2 2 3 3 3> >>> Cagg.size 9 Can I access the individual column-like data with headings count and size?
Don't use attributes to access the columns, this conflicts with the existing methods/properties. Go with indexing using square brackets: Cagg['count'] # 0 1 # 1 2 # 2 3 # Name: count, dtype: int64 Cagg['size'] # 0 1 # 1 2 # 2 3 # Name: size, dtype: int64
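For the further manipulation mentioned in the question, the bracket form behaves like any other Series; for example (values taken from the sample data above):

counts = Cagg['count'].to_list()   # [1, 2, 3]
sizes = Cagg['size'].to_list()     # [1, 2, 3]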
2
2
78,551,302
2024-5-29
https://stackoverflow.com/questions/78551302/pandas-cannot-set-a-value-for-multiindex
It is not allowed to set a value for a MultiIndex. It works for the single index. Maybe because of the version of pandas. import pandas as pd df = pd.DataFrame() df.loc[('a', 'b'), 'col1'] = 1 print(df) KeyError: "None of [Index(['a', 'b'], dtype='object')] are in the [index]"
You can do either import pandas as pd index = pd.MultiIndex.from_tuples([], names=['level_1', 'level_2']) df = pd.DataFrame(columns=['col1'], index=index) df.loc[('a', 'b'), 'col1'] = 1 print(df) or import pandas as pd df = pd.DataFrame(columns=['col1'], index=pd.MultiIndex(levels=[[], []], codes=[[], []], names=['level_1', 'level_2'])) df.loc[('a', 'b'), 'col1'] = 1 print(df) which returns col1 level_1 level_2 a b 1
2
2
78,546,847
2024-5-29
https://stackoverflow.com/questions/78546847/how-do-you-get-the-stringvar-not-the-text-value-the-stringvar-itself-of-an-en
I'm trying to get the StringVar object associated with an Entry widget, but I can't figure out how. I found this question: get StringVar bound to Entry widget but none of the answers work for me. Both entry["textvariable"] and entry.cget("textvariable") return the name of the StringVar and not the StringVar itself (although the answer with the second explains it properly, unlike the answer with the first which is NOT clear that it only returns the name. I've submitted an edit for the first that fixes this). You're supposed to be able to get the StringVar from its name using entry.getvar(name), but this is returning a str with the contents of the StringVar instead of the StringVar itself. I don't understand why this is happening, because the answer that explains this is marked as correct, and the person who asked the question seems to have wanted the StringVar itself. Did something get changed? If so, how would I get the StringVar now? I'm using Python 3.11.9 at the moment. I would also prefer a method that doesn't need the name of the StringVar, as an Entry without a StringVar explicitly set seems to have a StringVar without a name. Here is some example code: from tkinter import * from tkinter.ttk import * root = Tk() stringVar = StringVar(root, "test") # obviously in the real program I wouldn't be able to access this without using the Entry entry = Entry(root, textvariable=stringVar) entry.pack() name1 = entry["textvariable"] name2 = entry.cget("textvariable") print(name1 == name2) # True shouldBeStringVar = entry.getvar(name1) print(name1, type(name1)) # PY_VAR0 <class 'str'> print(shouldBeStringVar, type(shouldBeStringVar)) # test <class 'str'>
If you know the name of the variable, you can simply "create" a StringVar and set its parameter "name" to that name. This way you will get back the variable you created earlier. So try this: ... string_name = entry["textvariable"] TheVarObjectYouNeed = var = tk.StringVar(name=string_name) print(f"The var {var} has a type {type(var)}, just as you wanted") print(f'And it has the value "{var.get()}"') ... Note that this code actually creates a new StringVar object. But since I set it to an existing name, "TheVarObjectYouNeed" will actually refer to the same internal variable inside the underlying Tcl interpreter (as correctly stated in the comment below), i.e. the same Tcl variable that is used for your Entry widget.
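For completeness, a self-contained sketch of that approach (the variable names are my own); it recovers the StringVar from the widget and shows that changes made through it are visible to the original variable and the Entry:

import tkinter as tk
from tkinter import ttk

root = tk.Tk()
original = tk.StringVar(root, "test")        # pretend this reference is not available later
entry = ttk.Entry(root, textvariable=original)
entry.pack()

name = entry.cget("textvariable")            # e.g. 'PY_VAR0'
recovered = tk.StringVar(root, name=name)    # binds to the same underlying Tcl variable
print(recovered.get())                       # -> test
recovered.set("changed")                     # the Entry and `original` both see this
print(original.get())                        # -> changed

root.mainloop()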
2
2
78,549,172
2024-5-29
https://stackoverflow.com/questions/78549172/understanding-array-comparison-while-using-custom-data-types-with-numpy
I have a question regarding comparison with numpy arrays while using custom data types. Here is my code: import numpy as np class Expr: def __add__(self, other): return Add(self, other) def __eq__(self, other): return Eq(self, other) class Variable(Expr): def __init__(self, name): self.name = name def __repr__(self): return self.name class Operator(Expr): def __init__(self, left, right): self.left = left self.right = right def __repr__(self): return f'{self.__class__.__name__}({self.left}, {self.right})' class Add(Operator): ... class Eq(Operator): ... if __name__ == '__main__': arr1 = np.array([Variable('v1'), Variable('v2')]) arr2 = np.array([Variable('v3'), Variable('v4')]) print(arr1 + arr2) print(arr1 == arr2) The output is: [Add(v1, v3) Add(v2, v4)] [ True True] I don't get why the equal comparison does not return [Eq(v1, v3) Eq(v2, v4)], because my code works for the addition. How do I make this work? Thank you!
== on arrays produces an output of boolean dtype. The result array physically cannot hold Eq instances; it can only hold booleans. Your Eq instances get converted to booleans, producing True, since that's the boolean value of any object that doesn't define __len__ or __bool__. If you want an array of Eq instances, you need an output of object dtype. You can get that by specifying a dtype for numpy.equal: res = numpy.equal(arr1, arr2, dtype=object)
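For illustration, with the classes and arrays from the question's __main__ block this returns an object-dtype array of Eq instances instead of booleans:

import numpy as np

res = np.equal(arr1, arr2, dtype=object)
print(res)          # [Eq(v1, v3) Eq(v2, v4)]
print(res.dtype)    # object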
2
2
78,544,143
2024-5-28
https://stackoverflow.com/questions/78544143/what-is-the-proper-way-to-handle-mypy-attr-defined-errors-due-to-transitions
MRE from transitions import Machine class TradingSystem: def __init__(self): self.machine = Machine(model=self, states=['RUNNING'], initial='RUNNING') def check_running(self) -> None: if self.is_RUNNING(): print("System is running") Example usage system = TradingSystem() system.check_running() Issue mypy transitions_mypy.py gives the error: transitions_mypy.py:9: error: "TradingSystem" has no attribute "is_RUNNING" [attr-defined] This can be avoided by bypassing mypy, for example adding # type: ignore[attr-defined] at the end of line 9. But what is the proper way? Is it better to avoid bypassing mypy? Perhaps by manually defining the attribute?
The answer @Mark proposed won't work since transitions does not override already existing model attributes because that would be unexpected (mis)behaviour (as @Mark pointed out). If you enable logging, transitions will also tell you that: import logging logging.basicConfig(level=logging.DEBUG) The output should contain: WARNING:transitions.core:Model already contains an attribute 'is_RUNNING'. Skip binding. In the past, I suggested inheriting from Machine and overriding Machine._checked_assignment to work around that safeguard. However, @james-hirschorn's approach using attrs, which was posted in the transitions issue tracker, is a better solution in my opinion: from typing import Callable from attrs import define, field from transitions import Machine @define(slots=False) class TradingSystem: is_RUNNING: Callable[[], bool] = field(init=False) stop: Callable[[], None] = field(init=False) def __attrs_post_init__(self): self.machine = Machine(model=self, states=['RUNNING', 'STOPPED'], initial='RUNNING') self.machine.add_transition(trigger='stop', source='RUNNING', dest='STOPPED') def check_running(self) -> None: if self.is_RUNNING(): print("System is running") def stop_system(self) -> None: self.stop() print("System stopped") # Example usage system = TradingSystem() system.check_running() system.stop_system() This produces some overhead since you have to define triggers and convenience methods twice. Furthermore, type information is also incomplete because transitions will also add is_STOPPED and auto transitions like to_STOPPED/RUNNING or 'peek' transition methods such as may_stop to TradingSystem during runtime. Improving transitions typing support is currently an open issue.
2
1
78,518,903
2024-5-22
https://stackoverflow.com/questions/78518903/reverse-flip-right-leftanti-diagonals-of-a-non-square-numpy-array
What I am after is Python code able to reverse the order of the values in each of the array anti-diagonals in a numpy array. I have already tried various combinations of np.rot90, np.fliplr, np.transpose, np.flipud but none is able to give me the original shape of the 5x3 array with all the anti-diagonals reversed. Any idea how to accomplish this? Example: [[ 1 2 4] [ 3 5 7] [ 6 8 10] [ 9 11 13] [12 14 15]] Should become: [[ 1 3 6] [ 2 5 9] [ 4 8 12] [ 7 11 14] [10 13 15]] I suppose it must be easy, but somehow I have yet failed to find how to do it efficiently on arrays with millions of values. Inspired by the already provided answers (status 2024-05-23 11:37 CET) and re-thinking what would be the most efficient way of getting the required transformation done it seems that giving a simple function taking two indices : iRow, jColumn of a value in an array and returning the required i,j indices to access the array as if it were flipped/reversed over the diagonals will provide fastest results. With such function for the over the diagonals flipped version of the array would be getting the right values without operating on the array as easy as in a trivial case of one-based and column/row based access to array values demonstrated below: import numpy as np srcArr = np.array([[ 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18], [19, 20, 21, 22, 23, 24]]) def ijOfArrayValueGivenOneBasedColumnRowBasedIndices(i, j): return ( j - 1, i - 1 ) print( srcArr[ ijOfArrayValueGivenOneBasedColumnRowBasedIndices( 3,4)] ) # gives 21 print( srcArr[3,4] ) # gives 23 From this perspective the question comes down to providing a function ijIndicesToSourceArray_gettingValueOfSourceArrayWithReversedRightLeftAntiDiagonalsAt(i,j,arrShapeRows,arrShapeColumns)
This one seems fairly fast, especially for wide matrices like your width=n=1920, height=m=1080 : def mirror(a): m, n = a.shape if m == n: return a.T.copy() if m > n: return mirror(a.T).T # Shear v = a.flatten() w = v[:-m].reshape((m, n-1)) # Flip the parallelogram w[:, m-1:] = w[::-1, m-1:] # Flip the triangles t = np.vstack((w[:, :m-1].reshape((m-1, m)), v[-m:])) t = t.T w[:, :m-1] = t[:-1].reshape((m, m-1)) # Write flipped parts back and unshear v[:-m] = w.ravel() v[-m:] = t[-1] return v.reshape((m, n)) Attempt This Online! The idea: Slice/reshape the mΓ—n matrix to an mΓ—(n-1) matrix so that the parallelogram (green part) becomes a rectangle so we can just flip it upside down. For example: Now reshape the mΓ—n matrix to an mΓ—(n-1) matrix (omit the black last m cells), which moves the yellow cells to the front of the next row, shifting the next row as needed: Now we can easily flip the green parallelogram part. For the top-left and bottom-right triangles, reshape/shear the left part of this back and put the omitted cells under it: This can now simply be transposed and written back.
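Not part of the answer above, but a sanity check I would use: a straightforward per-anti-diagonal reference implementation to compare mirror() against on a few shapes.

import numpy as np

def mirror_reference(a):
    # Naive version: collect each anti-diagonal, reverse it, write it back.
    m, n = a.shape
    out = np.empty_like(a)
    for s in range(m + n - 1):
        i = np.arange(max(0, s - n + 1), min(m - 1, s) + 1)  # rows on anti-diagonal s
        j = s - i
        out[i, j] = a[i, j][::-1]
    return out

rng = np.random.default_rng(0)
for shape in [(5, 3), (3, 5), (4, 4), (200, 300)]:
    a = rng.integers(0, 100, size=shape)
    assert np.array_equal(mirror(a), mirror_reference(a))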
9
2
78,537,075
2024-5-27
https://stackoverflow.com/questions/78537075/why-is-dictint-int-incompatible-with-dictint-int-str
import typing a: dict[int, int] = {} b: dict[int, int | str] = a c: typing.Mapping[int, int | str] = a d: typing.Mapping[int | str, int] = a Pylance reports an error for b: dict[int, int | str] = a: Expression of type "dict[int, int]" is incompatible with declared type "dict[int, int | str]" "dict[int, int]" is incompatible with "dict[int, int | str]" Type parameter "_VT@dict" is invariant, but "int" is not the same as "int | str" Consider switching from "dict" to "Mapping" which is covariant in the value type But c: typing.Mapping[int, int | str] = a is OK. Additionally, d: typing.Mapping[int | str, int] = a also gets an error: Expression of type "dict[int, int]" is incompatible with declared type "Mapping[int | str, int]" "dict[int, int]" is incompatible with "Mapping[int | str, int]" Type parameter "_KT@Mapping" is invariant, but "int" is not the same as "int | str" Why are these types hint incompatible? If a function declares a parameter of type dict[int, int | str], how can I pass a dict[int, int] object as its parameter?
dict type was designed to be completely invariant on key and value. Hence when you assign dict[int, int] to dict[int, int | str], you make the type system raise errors. [1] Mapping type on the other hand wasn’t designed to be completely invariant but rather is invariant on key and covariant on value. Hence you can assign one Mapping type (dict[int, int]) to another (Mapping[int, int | str]) if they are both covariant on value. if they are invariant on key, you can assign them else you cannot. Hence when you assign dict[int, int] to Mapping[int | str, int], you make the type system raise errors. [2][3] There is a good reason for the above design in the type system and I will give a few: 1. dict type is a concrete type so it will actually get used in a program. 2. Because of the above mentioned, it was designed the way it was to avoid things like this: a: dict[int, int] = {} b: dict[int, int | str] = a b[0] = 0xDEADBEEF b[1] = "Bull" dicts are assigned by reference [4] hence any mutation to b is actually a mutation to a. So if one reads a as follows: x: int = a[0] assert isinstance(x, int) y: int = a[1] assert isinstance(y, int) One gets unexpected results. x passes but y doesn’t. It then seems like the type system is contradicting itself. This can cause worse problems in a program. For posterity, to correctly type a dictionary in Python, use Mapping type to denote a readonly dictionary and use MutableMapping type to denote a read-write dictionary. [1] Of course Python’s type system doesn’t influence program’s running behaviour but at least linters have some use of this. [2] dict type is a Mapping type but Mapping type is not a dict type. [3] Keep in mind that the ordering of types is important in type theory. [4] All variable names in Python are references to values.
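To make the last paragraph concrete, here is a small sketch (the function names are my own): declare read-only parameters as Mapping so that dict[int, int] is accepted, and use MutableMapping (or dict) only where the function actually writes.

from typing import Mapping, MutableMapping

def read_prices(prices: Mapping[int, int | str]) -> None:
    # Covariant value type: dict[int, int] and dict[int, str] are both accepted here.
    for key, value in prices.items():
        print(key, value)

def update_prices(prices: MutableMapping[int, int]) -> None:
    # Writing requires the exact value type, so this parameter stays invariant.
    prices[0] = 42

a: dict[int, int] = {1: 10}
read_prices(a)     # accepted by type checkers
update_prices(a)   # accepted: value types match exactly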
29
29
78,515,406
2024-5-22
https://stackoverflow.com/questions/78515406/changing-values-in-a-dataframe-under-growth-change-with-year-condition
Suppose I have sample = {"Location+Type": ["A", "A", "A", "A", "B", "B", "B", "B"], "Year": ["2010", "2011", "2012", "2013", "2010", "2011", "2012", "2013"], "Price": [1, 2, 1, 3, 1, 1, 1, 1]} df_sample = pd.DataFrame(data=sample) df_sample['pct_pop'] = df_sample['Price'].pct_change() df_sample.head(8) From say 2012, I want to make a change in the dataframes values following the pct_chage() values. In other words, if the percentage change is greater than 30 percent than I want to average the previous values with respect to the Location+Type column starting from the year of 2012. Thus the new dataframe would look like this sample = {"Location+Type": ["A", "A", "A", "A", "B", "B", "B", "B"], "Year": ["2010", "2011", "2012", "2013", "2010", "2011", "2012", "2013"], "Price": [1, 2, 1.5, 1.33, 1, 1, 1, 1]} df_sample = pd.DataFrame(data=sample) df_sample['pct_pop'] = df_sample['Price'].pct_change() df_sample.head(8) Still having some issues following the answers for my particular data which is instead of changing from 2012 its 2023, here is the shared drive https://drive.google.com/drive/u/0/folders/11dh05Dq_VNfUNgALeMg7MI1keVQdN0ZN And here is the code modified df_complete['Year'] = df_complete['Year'].astype(int) # just in case if "Year" holds strings def calculate_new_price(row, df): current_index = row.name # row number will be used to check if value is in the first of the DF # Check if row is not the first if current_index > 0: # assign previous pct_pop value to variable: previous_pct_pop previous_pct_pop = df.loc[current_index - 1, 'pct_pop'] # Check if the year is 2023 or later and if the difference in pct_pop is 0.3 or more if row['Year'] >= 2023 and abs(row['pct_pop'] - previous_pct_pop) >= 0.3: # Filter previous years previous_data = df[(df['Year'] < row['Year']) & (df['Location+Type'] == row['Location+Type'])] # Calculate average if row above is not empty if not previous_data.empty: return previous_data['Median_Home_Value_prediction'].mean() else: return row['Median_Home_Value_prediction'] # if pct_pop change was smaller than 0.3 or year is older than 2012, - use default price. else: return row['Median_Home_Value_prediction'] # if index 0 (first row) use value from first row of col: Price return None # Apply the function to create the new_price column df_complete['Median_Home_Value_prediction_new'] = df_complete.apply(lambda row: calculate_new_price(row, df_complete), axis=1)
Pandas Style We have 4 steps here that can be done using typical pandas tools: Sort data. Find where the percent change exceeds the limit. Calculate cumulative means. Replace the data with average values by masking. # Load data and define key variables data = pd.DataFrame({ 'Location+Type': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'], 'Year': [2010, 2011, 2012, 2013, 2010, 2011, 2012, 2013], 'Price': [1, 2, 1, 3, 1, 1, 1, 1], }) group = 'Location+Type' inner_index = 'Year' value = 'Price' index_start = 2012 # apply to index >= this one value_pct_change_upper_limit = 0.3 # update values where percent change >= this one # Sort and group data df = data.sort_values([group, inner_index]) by_groups = df.groupby(group, as_index=False) # Find where abs(percent change) is greater than or equal to change limit pct_change_ge_limit = (by_groups[value].pct_change() .abs().ge(value_pct_change_upper_limit) .rename('pct_change')) # Find cumulative mean by groups shifted one step forward # to the place where they can be used cum_mean = (by_groups[value].expanding().mean() .groupby(level=0).shift().bfill() # shift the means to replace the values following them .reset_index(level=0, drop=True) # drop group names returend at the level 0 .rename('cum_mean')) # Replace values with cumulative means # where inner group index (Year) starts from a given value # and abs(percent change) >= percent change limit df[value] = df[value].mask( df[inner_index].ge(index_start) & pct_change_ge_limit, other=cum_mean ) NumPy/Numba Acceleration To make it faster, we can calculate whatever is needed on the fly with help of NumPy and Numba with some modifications in the algorithm: Sort data by group (e.g. Location+Type) and inner group index (e.g. Year). Find group start/end points. Iterate over every group starting from the end until the group index limit. Calculate the average only if needed. 
import numpy as np from numba import njit def get_bounds(series): '''Return the starting points of consecutive values and put `len(series)` at the end, where `series` is a properly sorted `pandas.Series` ''' grouper = series.ne(series.shift()) size = grouper.sum() + 1 bounds = np.empty(size, dtype=int) bounds[:-1] = np.nonzero(grouper)[0] bounds[-1] = len(series) return bounds @njit def modify_excessive_values(bounds, years, values, year_limit, pct_limit): '''Replace the values with averages of their predecessors if their percent change exceeds pct_limit and the corresponding year is more or equal the year_limit ''' out = values.copy().astype('float') # use float to avoid truncating means for integer values for left, right in zip(bounds[:-1], bounds[1:]): # iterate over groups ind = right - 1 while years[ind] >= year_limit and ind > left: # iterate inside the group starting from its end if abs(out[ind]/out[ind-1] - 1) >= pct_limit: out[ind] = out[left:ind].mean() ind -= 1 return out Working with the original data Here's an example of the original data: csv_data = '''\ Location+Type,Year,Tract_number,Median_Home_Value_prediction,pct_pop "Census Tract 1.02, Jefferson County, Arkansas: Summary level: 140, state:05> county:069> tract:000102",2015,5069000102,112000,0.3365155131264916 "Census Tract 1.02, Jefferson County, Arkansas: Summary level: 140, state:05> county:069> tract:000102",2016,5069000102,117811,0.0518839285714285 "Census Tract 1.02, Jefferson County, Arkansas: Summary level: 140, state:05> county:069> tract:000102",2017,5069000102,122031,0.0358200847119538 "Census Tract 1.02, Jefferson County, Arkansas: Summary level: 140, state:05> county:069> tract:000102",2018,5069000102,127031,0.0409731953356113 "Census Tract 1.02, Jefferson County, Arkansas: Summary level: 140, state:05> county:069> tract:000102",2019,5069000102,132779,0.045248797537609 "Census Tract 1.02, Jefferson County, Arkansas: Summary level: 140, state:05> county:069> tract:000102",2020,5069000102,145703,0.0973346688858931 "Census Tract 1.02, Jefferson County, Arkansas: Summary level: 140, state:05> county:069> tract:000102",2021,5069000102,166769,0.1445817862363849 "Census Tract 1.02; Jefferson County; Arkansas: Summary level: 140, state:05> county:069> tract:000102",2023,5069000102,168596,0.594531611608196 "Census Tract 1.02; Jefferson County; Arkansas: Summary level: 140, state:05> county:069> tract:000102",2024,5069000102,213385,0.2656597148863364 "Census Tract 1.02; Jefferson County; Arkansas: Summary level: 140, state:05> county:069> tract:000102",2025,5069000102,176771,-0.1715863204676784 "Census Tract 1.02; Jefferson County; Arkansas: Summary level: 140, state:05> county:069> tract:000102",2026,5069000102,181905,0.0290384486578858 "Census Tract 1.02; Jefferson County; Arkansas: Summary level: 140, state:05> county:069> tract:000102",2027,5069000102,291646,0.6032906623467567 "Census Tract 34.00, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2010,18003003400,86600,-0.7280150753768844 "Census Tract 34.00, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2011,18003003400,85500,-0.0127020785219399 "Census Tract 34.00, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2012,18003003400,85900,0.0046783625730995 "Census Tract 34.00, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2013,18003003400,85400,-0.0058207217694994 "Census Tract 34.00, Allen County, Indiana: Summary level: 140, state:18> 
county:003> tract:003400",2014,18003003400,83900,-0.0175644028103044 "Census Tract 34.00, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2015,18003003400,83200,-0.0083432657926102 "Census Tract 34.00, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2016,18003003400,87700,0.0540865384615385 "Census Tract 34.00, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2017,18003003400,90800,0.0353477765108323 "Census Tract 34.00, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2018,18003003400,100400,0.1057268722466959 "Census Tract 34.00, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2019,18003003400,103100,0.0268924302788844 "Census Tract 34, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2020,18003003400,111500,-0.3719003776540518 "Census Tract 34, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2021,18003003400,117700,0.0556053811659191 "Census Tract 34, Allen County, Indiana: Summary level: 140, state:18> county:003> tract:003400",2022,18003003400,132200,0.1231945624468988 "Census Tract 34; Allen County; Indiana: Summary level: 140, state:18> county:003> tract:003400",2023,18003003400,186965,-0.4049759667529261 "Census Tract 34; Allen County; Indiana: Summary level: 140, state:18> county:003> tract:003400",2024,18003003400,209587,0.1209947984468349 "Census Tract 34; Allen County; Indiana: Summary level: 140, state:18> county:003> tract:003400",2025,18003003400,186934,-0.1080818221957703 "Census Tract 34; Allen County; Indiana: Summary level: 140, state:18> county:003> tract:003400",2026,18003003400,194739,0.0417513977349723 "Census Tract 34; Allen County; Indiana: Summary level: 140, state:18> county:003> tract:003400",2027,18003003400,200658,0.0303972448096778 "Census Tract 34, Genesee County, Michigan: Summary level: 140, state:26> county:049> tract:003400",2010,26049003400,113600,-0.2685125563425627 "Census Tract 34, Genesee County, Michigan: Summary level: 140, state:26> county:049> tract:003400",2011,26049003400,173800,0.5299295774647887 "Census Tract 34, Genesee County, Michigan: Summary level: 140, state:26> county:049> tract:003400",2012,26049003400,164000,-0.0563866513233601 "Census Tract 34, Genesee County, Michigan: Summary level: 140, state:26> county:049> tract:003400",2013,26049003400,115300,-0.2969512195121951 "Census Tract 34, Genesee County, Michigan: Summary level: 140, state:26> county:049> tract:003400",2014,26049003400,50000,-0.5663486556808326 "Census Tract 34, Genesee County, Michigan: Summary level: 140, state:26> county:049> tract:003400",2016,26049003400,13900,-0.722 "Census Tract 34, Genesee County, Michigan: Summary level: 140, state:26> county:049> tract:003400",2022,26049003400,9999,-0.2806474820143885 "Census Tract 34; Genesee County; Michigan: Summary level: 140, state:26> county:049> tract:003400",2023,26049003400,64772,-0.5894961662501869 "Census Tract 34; Genesee County; Michigan: Summary level: 140, state:26> county:049> tract:003400",2025,26049003400,89406,0.3803308197295025 "Census Tract 34; Genesee County; Michigan: Summary level: 140, state:26> county:049> tract:003400",2026,26049003400,61398,-0.3132673555695764 "Census Tract 34; Genesee County; Michigan: Summary level: 140, state:26> county:049> tract:003400",2027,26049003400,37238,-0.3935019397012603 "Census Tract 113.02, Jefferson County, Texas: Summary level: 140, state:48> county:245> 
tract:011302",2018,48245011302,154081,0.4454127579737335 "Census Tract 113.02, Jefferson County, Texas: Summary level: 140, state:48> county:245> tract:011302",2019,48245011302,160861,0.0440028296804926 ''' There you can find all the records for 4 selected Tract_number values representing some oddities of data: Location+Type can have up to 3 variants for one Tract_number, which complicate grouping data by them. Especially note the difference in the use of commas and semicolons, which can be difficult to pick out visually. So maybe it's better to use Tract_number for grouping, not Location+Type. There are missing years in the middle, making pct_change difficult to interpret (e.g. missing 2022 where Tract_number is 5069000102). Far not all years are represented for Tract_number, sometimes there are only 2 records in a group. The minimum year found is 2010. However, the initial year in the group may be different from 2010, which may complicate the interpretation of the mean values. All of the above oddities are not rare exceptions, but are widely represented in the data (e.g. 7% of all Tract_number have missing years in the middle). Anyway, let's do the calculations on the data example with 2020 as a starting year and grouping the data by Tract_number: import pandas as pd from io import StringIO # Load data and define key variables file = StringIO(csv_data) df = pd.read_csv(file) group = 'Tract_number' inner_index = 'Year' value = 'Median_Home_Value_prediction' index_start = 2020 # inner_index >= index_start pct_change_upper_limit = 0.3 # pct_change >= pct_change_upper_limit # Modify data df.sort_values([group, inner_index], inplace=True) df['pct_change'] = df.groupby(group)[value].pct_change() df[f'New_{value}'] = modify_excessive_values( get_bounds(df[group]), df[inner_index].values, df[value].values, index_start, pct_change_upper_limit ) result = df[[group, inner_index, value, f'New_{value}', 'pct_change']] result.columns = ['Tract', 'Year', 'Value', 'New_Value', 'pct_change'] >>> print(result) Tract Year Value New_Value pct_change 0 5069000102 2015 112000 112000.000000 NaN 1 5069000102 2016 117811 117811.000000 0.051884 2 5069000102 2017 122031 122031.000000 0.035820 3 5069000102 2018 127031 127031.000000 0.040973 4 5069000102 2019 132779 132779.000000 0.045249 5 5069000102 2020 145703 145703.000000 0.097335 6 5069000102 2021 166769 166769.000000 0.144582 7 5069000102 2023 168596 168596.000000 0.010955 8 5069000102 2024 213385 213385.000000 0.265659 9 5069000102 2025 176771 176771.000000 -0.171587 10 5069000102 2026 181905 181905.000000 0.029043 11 5069000102 2027 291646 151343.727273 0.603287 12 18003003400 2010 86600 86600.000000 NaN 13 18003003400 2011 85500 85500.000000 -0.012702 14 18003003400 2012 85900 85900.000000 0.004678 15 18003003400 2013 85400 85400.000000 -0.005821 16 18003003400 2014 83900 83900.000000 -0.017564 17 18003003400 2015 83200 83200.000000 -0.008343 18 18003003400 2016 87700 87700.000000 0.054087 19 18003003400 2017 90800 90800.000000 0.035348 20 18003003400 2018 100400 100400.000000 0.105727 21 18003003400 2019 103100 103100.000000 0.026892 22 18003003400 2020 111500 111500.000000 0.081474 23 18003003400 2021 117700 117700.000000 0.055605 24 18003003400 2022 132200 132200.000000 0.123195 25 18003003400 2023 186965 96453.846154 0.414259 26 18003003400 2024 209587 209587.000000 0.120996 27 18003003400 2025 186934 186934.000000 -0.108084 28 18003003400 2026 194739 194739.000000 0.041753 29 18003003400 2027 200658 200658.000000 0.030395 30 26049003400 2010 113600 
113600.000000 NaN 31 26049003400 2011 173800 173800.000000 0.529930 32 26049003400 2012 164000 164000.000000 -0.056387 33 26049003400 2013 115300 115300.000000 -0.296951 34 26049003400 2014 50000 50000.000000 -0.566349 35 26049003400 2016 13900 13900.000000 -0.722000 36 26049003400 2022 9999 9999.000000 -0.280647 37 26049003400 2023 64772 91514.142857 5.477848 38 26049003400 2025 89406 88171.375000 0.380319 39 26049003400 2026 61398 88308.555556 -0.313268 40 26049003400 2027 37238 85617.500000 -0.393498 41 48245011302 2018 154081 154081.000000 NaN 42 48245011302 2019 160861 160861.000000 0.044003
3
1
78,542,042
2024-5-28
https://stackoverflow.com/questions/78542042/cannot-install-chromadb-on-python3-12-3-alpine3-19-docker-image
I am trying to install Python dependencies on a python:3.12.3-alpine3.19 Docker image. When the requirements.txt file is processed I get the following error: 7.932 ERROR: Ignored the following versions that require a different python version: 0.5.12 Requires-Python >=3.7,<3.12; 0.5.13 Requires-Python >=3.7,<3.12; 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11 7.932 ERROR: Could not find a version that satisfies the requirement onnxruntime==1.18.0 (from versions: none) 7.933 ERROR: No matching distribution found for onnxruntime==1.18.0 ------ failed to solve: process "/bin/sh -c pip install -r requirements.txt && pip uninstall Even if I try to install an older version of Python I still get an error: From Python:3.10.14-alpine3.19. 61.58 ERROR: Could not find a version that satisfies the requirement onnxruntime==1.18.0 (from versions: none) 61.58 ERROR: No matching distribution found for onnxruntime==1.18.0 Why is this happening?
It seems that the Chroma vector DB does not work (at least not out of the box) with Alpine-based Python images. I switched to Bookworm and was able to install Chroma and use it in the script running in my Docker container.
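For reference, this is the kind of Dockerfile that worked after the switch; the file and script names are placeholders for your own project layout:

FROM python:3.12-bookworm
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]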
2
2
78,540,724
2024-5-27
https://stackoverflow.com/questions/78540724/how-to-convert-cugraph-directed-graph-to-undirected-to-run-mst
I'm trying to build MST from a directed graph by converting it to an undirected one. I followed cuGraph example here but getting NotImplementedError: Not supported for distributed graph. I tried doing G = cugraph.Graph(directed=False), but it still gave me an error. My code: columns_to_read = ['subject', 'object', 'predicate'] # Read the TSV file using Dask cuDF with specific columns df = dask_cudf.read_csv('graph_edge.tsv', sep='\t', usecols=columns_to_read) # Renaming columns to match cuGraph requirements df = df.rename(columns={'subject': 'src', 'object': 'dst'}) # Create undirected graph from input data DiG = cugraph.Graph(directed=True) DiG.from_dask_cudf_edgelist(df, source='src', destination='dst') G = DiG.to_undirected() # Verify the graph has edges print("Number of edges in the graph:", G.number_of_edges()) # Define the list of target terminals terminals = ['COMPOUND:7045767', 'COMPOUND:0007271', 'COMPOUND:0035249', 'COMPOUND:0005947', 'COMPOUND:0004129', 'COMPOUND:26519', 'COMPOUND:C0132173', 'COMPOUND:0018393', 'COMPOUND:0006979', 'COMPOUND:0025464', 'COMPOUND:0007613', 'COMPOUND:64645', 'COMPOUND:000010173', 'COMPOUNDGO:0014055', 'COMPOUND:0061535', 'COMPOUND:0150076', 'COMPOUND:0001152', 'COMPOUND:0002185', 'COMPOUND:0004975', 'COMPOUND:0005816', 'COMPOUND:60425'] # Compute the minimum spanning tree of the graph mst = cugraph.minimum_spanning_tree(G) # Convert the resulting MST to a cuDF DataFrame mst_df = mst.view_edge_list() # Ensure to finalize the cuGraph communication client.close() cluster.close() # Display the MST DataFrame print(mst_df.compute()) I looked at cuGraph documentation and Stackoverflow posts, but I still get errors.
cugraph does not currently have a multi-GPU (distributed) implementation of MST. This is the reason you are getting the NotImplementedError: Not supported for distributed graph message. The latest detail on which algorithms are supported for multi-GPU can be found here: https://docs.rapids.ai/api/cugraph/stable/graph_support/algorithms/#supported-algorithms We welcome user feedback; if you would like to register your interest in a multi-GPU implementation (or any other cugraph features) you can create an issue here: https://github.com/rapidsai/cugraph/issues. A multi-GPU implementation is not currently on our development road map, although it is on the list of things to eventually get to.
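If the edge list fits in a single GPU's memory, one workaround is to drop dask_cudf and build the graph with plain cudf, where MST is supported. A rough sketch adapted from the code in the question (not tested against your data):

import cudf
import cugraph

# Single-GPU path: read the edge list with cudf instead of dask_cudf
df = cudf.read_csv('graph_edge.tsv', sep='\t', usecols=['subject', 'object'])
df = df.rename(columns={'subject': 'src', 'object': 'dst'})

G = cugraph.Graph(directed=False)
G.from_cudf_edgelist(df, source='src', destination='dst')

mst = cugraph.minimum_spanning_tree(G)
print(mst.view_edge_list())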
2
1
78,544,018
2024-5-28
https://stackoverflow.com/questions/78544018/processing-csv-type-file-and-creating-new-output-using-python-or-awk
I have a file "linked_policies.txt" that looks like the following (small sample of data): 0000001G|11111111 0000002G|10000018 0000002G|10000320 0000002G|10000337 0000002G|10000343 101359B|10000018 101359B|16023380 504529A|10000018 504529A|15856008 504529A|16007139 504529A|16007151 504529A|16526483 620667G|16526483 0000003G|22222222 0000003G|33333333 The first column represents 'policy' and second column represents 'customer'. Policy 0000001G is not linked to any other policies as it contains only one customer number not linked to any other policies. Similarly, Policy 0000003G is not linked to any other policies as it contains two customer numbers not linked to any other policies. All of the other policies (0000002G, 101359B, 504529A and 620667G) are considered to be linked to each other because for instance policy 0000002G is directly linked to policies 101359B and 504529A (as they all share the same customer 10000018) but policy 0000002G is also indirectly linked to policy 620667G because even though they do not share a customer number directly between them, policy 620667G shares customer 16526483 with policy 504529A and this policy itself has a shared customer with 0000002G so therefore all are considered to be linked policies. I want to process the input file and end up with a new "output_file.txt" whereby for each policy, the policy is listed in the first column and in the second column (space delimited between first and second column) the list of associated policy/ies are listed (including the policy in question itself). For example, from the sample input file following processing, file "output_file.txt" would look as follows: 0000001G 0000001G 0000002G 0000002G|101359B|504529A|620667G 101359B 101359B|0000002G|504529A|620667G 504529A 504529A|0000002G|101359B|620667G 0000003G 0000003G First column shows each unique policy number (from the input file) and the second column shows all of the policy/ies that this policy is directly or indirectly linked to. 
Looking for a way for this to be done using either AWK or Python (noting that the Python version available to me is earlier than version 3.6) My (Python) script "linked_policies.py" attempt is as follows: # Read the input data with open('linked_policies.txt', 'r') as f: lines = f.readlines() # Create a dictionary to store the customers and their associated policies customer_policies = {} # Iterate through the lines for line in lines: # Split the line into policy and customer policy, customer = line.strip().split('|') # Add the customer to the customer_policies dictionary if customer not in customer_policies: customer_policies[customer] = set() customer_policies[customer].add(policy) # Create a dictionary to store the linked policies linked_policies = {} # Link policies based on shared customers for customer, policies in customer_policies.items(): for policy in policies: if policy not in linked_policies: linked_policies[policy] = set(policies) else: linked_policies[policy].update(policies) # Write the output to a file with open('output_file.txt', 'w') as f: for policy, linked_policies_set in linked_policies.items(): linked_policies_str = '|'.join(sorted(linked_policies_set)) f.write('{} {}\n'.format(policy, linked_policies_str)) When I run this: python ./linked_policies.py ....the resulting output file "output_file.txt" looks like this: 620667G 504529A|620667G 504529A 0000002G|101359B|504529A|620667G 0000003G 0000003G 0000002G 0000002G|101359B|504529A 101359B 0000002G|101359B|504529A 0000001G 0000001G The second, third and sixth lines are all correct. The other lines are incorrect (first, fourth and fifth lines) as they are not correctly showing all of the linked policies.
OP has posted that the original awk answer (see 2nd half of this answer) ran for a few minutes and then ran out of memory (on a host with 23 GB of RAM). Here'a rewrite using 1-dimensional arrays: awk ' function get_customer (p, c, i, n) { n = split(pol_cus[p],c,",") for (i=1; i<=n; i++) get_policy(c[i]) } function get_policy (c, p, i, n) { n = split(cus_pol[c],p,",") for (i=1; i<=n; i++) if (! (p[i] in links)) { links[p[i]] get_customer(p[i]) } } BEGIN { FS = "|" } { sep = ($1 in pol_cus) ? "," : "" pol_cus[$1] = pol_cus[$1] sep $2 sep = ($2 in cus_pol) ? "," : "" cus_pol[$2] = cus_pol[$2] sep $1 } END { PROCINFO["sorted_in"] = "@ind_str_asc" for (policy in pol_cus) { delete links links[policy] get_customer(policy) delete links[policy] printf "%s %s", policy, policy for (link in links) printf "|%s", link print "" } } ' policy.dat To simulate OP's environment I used the following script: awk 'BEGIN { for (i=1;i<=2900000;i++) printf "p%07d|c%07d\np%07d|C%07d\n", i,i,i,i+1 } ' > policy.big.dat The resulting file: 104 MBytes in size 5.8 million lines 2.9 million unique policy #s 5.8 million unique customer #s Test environment: i7-1260P 64 GB RAM gawk v 5.1.0 Results of running both awk scripts against the 104 MByte, 5.8 million line policy.big.dat file: RAM time ----------------------------------------------------------------- awk (2-dimensional arrays) 4.4 GB 12 secs awk (1-dimensional arrays) 2.6 GB 14 secs A diff of the two resulting data sets report no differences. Running the same tests in a virtual machine (Windows 10, 16 GB RAM, gawk v 5.3.0) and making sure Windows Antimalware service is disabled: RAM time ----------------------------------------------------------------- awk (2-dimensional arrays) 7.5 GB 17 secs awk (1-dimensional arrays) 2.9 GB 24 secs Not sure at this point how the original awk answer (below) could take minutes to run and generate an OOM error on OP's host with 23 GB of RAM ... Original answer General approach: save policy/customer and customer/policy pairs in 2-dimensional array for each policy in the policy/customer array recursively search for linked policies One awk approach: awk ' function get_customer (p, c) { for (c in pol_cus[p]) # for each customer associated with this policy ... get_policy(c) # find the associated policies } function get_policy (c, p) { for (p in cus_pol[c]) # for each policy associated with this customer ... if (! (p in links)) { # if we have not seen this policy yet then ... links[p] # add to our list of links and then ... 
get_customer(p) # find the associated customer(s) } } BEGIN { FS = "|" } { pol_cus[$1][$2] # policy/customer array cus_pol[$2][$1] # customer/policy array } END { PROCINFO["sorted_in"] = "@ind_str_asc" # process array indices in sorted order for (policy in pol_cus) { # loop through our list of policies delete links # clear list of links links[policy] # self link this policy; insures recursive search halts if we see this policy again get_customer(policy) # start the recursive search delete links[policy] # delete the self link printf "%s %s", policy, policy for (link in links) # loop through links and append to current line of output printf "|%s", link print "" # terminate current line of output } } ' policy.dat NOTES: requires GNU awk for a) multi-dimensional arrays (aka array of arrays) and b) sorting the policies (via PROCINFO["sorted_in"]) if sorting is not required you can remove the line PROCINFO["sorted_in"] = "@ind_str_asc" see predefined array scanning orders for details on PROCINFO["sorted_in"] = "@ind_str_asc" This generates: 0000001G 0000001G 0000002G 0000002G|101359B|504529A|620667G 0000003G 0000003G 101359B 101359B|0000002G|504529A|620667G 504529A 504529A|0000002G|101359B|620667G 620667G 620667G|0000002G|101359B|504529A
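Since the question also allows Python, here is a minimal Python sketch of the same approach: build the policy-to-customer and customer-to-policy maps, then walk the links iteratively so deep chains cannot hit the recursion limit. It assumes the same input file name, policy.dat, and avoids f-strings so it also runs on pre-3.6 interpreters:
from collections import defaultdict, deque

pol_cus = defaultdict(set)   # policy   -> customers
cus_pol = defaultdict(set)   # customer -> policies

with open('policy.dat') as f:
    for line in f:
        policy, customer = line.strip().split('|')
        pol_cus[policy].add(customer)
        cus_pol[customer].add(policy)

with open('output_file.txt', 'w') as out:
    for policy in sorted(pol_cus):
        linked = {policy}
        queue = deque([policy])
        while queue:                      # breadth-first walk of the links
            p = queue.popleft()
            for c in pol_cus[p]:
                for q in cus_pol[c]:
                    if q not in linked:
                        linked.add(q)
                        queue.append(q)
        linked.discard(policy)            # the policy itself is printed first, not sorted in
        out.write('{} {}\n'.format(policy, '|'.join([policy] + sorted(linked))))
This produces the same line format as the awk END block above (policy, a space, then the policy followed by its sorted links).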
2
3
78,542,361
2024-5-28
https://stackoverflow.com/questions/78542361/gekko-optimizer-converging-on-wrong-point
I am using the GEKKO optimization package (https://gekko.readthedocs.io/en/latest/index.html) in python. Since my problem sometimes includes integer variables I run with the following options to get optimization for a steady state mixed integer problem (APOPT): m.options.IMODE = 3 m.options.SOLVER = 1 The optimizer finds a solution that is not optimal. Consider the following minimal working example: from gekko import GEKKO m = GEKKO(remote=False) # Define objective def obj(x1, x2): return x1*0.04824647331436083 + x2*0.1359023029124411 + x1*x2*(0.5659570743890336) # Define variables x1 = m.Var(lb=-0.7280376309420935, ub=0.2719623690579065) x2 = m.Var(lb=-0.8912733608888134, ub=0.10872663911118663) u = m.Intermediate(obj(x1,x2)) m.Maximize(u) # Solve the problem m.options.SOLVER = 'APOPT' m.solve(disp=False) # Print the results print(f'x1: {x1.value[0]}') print(f'x2: {x2.value[0]}') print(f'u: {u.value[0]}') This gives the output: x1: 0.27196236906 x2: 0.10872663911 u: 0.044632524297 However, consider the following solution and its utility: x1 = -0.7280376309420935 x2 = -0.8912733608888134 obj(x1, x2) This results in a utility of 0.2109871851434535
A contour plot verifies that there are two local maxima. Gekko solvers report local minima or maxima when the Karush Kuhn Tucker (KKT) conditions are satisfied. from gekko import GEKKO m = GEKKO(remote=False) # Define objective def obj(x1, x2): return x1*0.04824647331436083 + x2*0.1359023029124411 + x1*x2*(0.5659570743890336) # Define variables x1 = m.Var(lb=-0.7280376309420935, ub=0.2719623690579065) x2 = m.Var(lb=-0.8912733608888134, ub=0.10872663911118663) u = m.Intermediate(obj(x1,x2)) m.Maximize(u) # Solve the problem m.options.SOLVER = 'APOPT' m.solve(disp=False) # Print the results print(f'x1: {x1.value[0]}') print(f'x2: {x2.value[0]}') print(f'u: {u.value[0]}') import numpy as np import matplotlib.pyplot as plt # Generate grid of values x1v = np.linspace(-0.7280376309420935, 0.2719623690579065, 400) x2v = np.linspace(-0.8912733608888134, 0.10872663911118663, 400) x1g, x2g = np.meshgrid(x1v, x2v) ug = obj(x1g, x2g) # Create contour plot plt.figure(figsize=(6, 3.5)) contour = plt.contourf(x1g, x2g, ug, levels=50, cmap='viridis') plt.colorbar(contour); plt.xlabel('x1'); plt.ylabel('x2') plt.tight_layout(); plt.show() For problems with multiple local minima or maxima, there are multi-start methods to assist in finding the global optimum: Multi-start implementation with GEKKO Below is an implementation of hyperopt that is used to find the global optimum. hyperopt is a Python package for performing hyperparameter optimization with a variety of optimization algorithms including random search, Tree-structured Parzen Estimator (TPE), and adaptive TPE, as well as a simple and flexible way to define the search space for the hyperparameters. from gekko import GEKKO from hyperopt import fmin, tpe, hp from hyperopt import STATUS_OK, STATUS_FAIL # Define the search space for the multi-start parameters space = {'x1': hp.quniform('x1', -0.72, 0.27, 0.1), 'x2': hp.quniform('x2', -0.89, 0.10, 0.1)} def objective(params): m = GEKKO(remote=False) x1 = m.Var(lb=-0.7280376309420935, ub=0.2719623690579065) x2 = m.Var(lb=-0.8912733608888134, ub=0.10872663911118663) x1.value = params['x1'] x2.value = params['x2'] u = m.Intermediate(x1*0.04824647331436083 + x2*0.1359023029124411 + x1*x2*(0.5659570743890336)) m.Maximize(u) m.options.SOLVER = 1 m.solve(disp=False,debug=False) obj = m.options.objfcnval if m.options.APPSTATUS==1: s=STATUS_OK else: s=STATUS_FAIL m.cleanup() return {'loss':obj, 'status': s, 'x1':x1.value[0],'x2':x2.value[0],'u':u.value[0]} best = fmin(objective, space, algo=tpe.suggest, max_evals=50) sol = objective(best) print(f"Solution Status: {sol['status']}") print(f"Objective: {-sol['loss']:.2f}") print(f"x1: {sol['x1']}") print(f"x2: {sol['x2']}") print(f"u: {sol['u']}") This produces the global maximum with the TPE algorithm to find the best optimal solution by changing the initial guess values using a Bayesian approach. Solution Status: ok Objective: 0.21 x1: -0.72803763094 x2: -0.89127336089 u: 0.21098718514
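If you would rather avoid the extra dependency, the same multi-start idea works with a plain loop: re-solve the problem from several initial guesses and keep the best local optimum. A minimal sketch (the start points below are arbitrary picks inside the bounds, not anything prescribed by Gekko):
from gekko import GEKKO

def solve_from(x1_0, x2_0):
    # one local solve from a given starting point
    m = GEKKO(remote=False)
    x1 = m.Var(value=x1_0, lb=-0.7280376309420935, ub=0.2719623690579065)
    x2 = m.Var(value=x2_0, lb=-0.8912733608888134, ub=0.10872663911118663)
    m.Maximize(x1*0.04824647331436083 + x2*0.1359023029124411
               + x1*x2*0.5659570743890336)
    m.options.SOLVER = 1
    m.solve(disp=False, debug=False)
    return -m.options.objfcnval, x1.value[0], x2.value[0]

# corners of the box plus the centre; keep the best of the local solutions
starts = [(-0.72, -0.89), (-0.72, 0.10), (0.27, -0.89), (0.27, 0.10), (-0.23, -0.39)]
best = max(solve_from(a, b) for a, b in starts)
print(best)   # approximately (0.211, -0.728, -0.891)
For a two-variable problem this brute-force restart is cheap; hyperopt becomes more attractive as the number of decision variables grows.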
2
1
78,546,774
2024-5-28
https://stackoverflow.com/questions/78546774/how-to-add-filename-to-polars-pl-scan-csv
I'm reading multiple files with Polars, but I want to add the filename as an identifier in a new column.
#how to add filenames to polars
lazy_dfs = (pl.scan_csv("data/file_*.tsv", separator="\t", has_header=False).fetch(n_rows=500))
You can take a couple of approaches here, if you are working off a local filesystem, then you can manually glob the files, add the column yourself, and concatenate the results. Polars concat + scan from pathlib import Path from tempfile import TemporaryDirectory from numpy.random import default_rng import polars as pl def random_dataframe(rng, size=10): return pl.DataFrame({ 'x': rng.random(size), 'y': rng.integers(10, size=size), 'z': rng.normal(10, scale=3, size=size), }) rng = default_rng(0) with TemporaryDirectory() as d: outdir = Path(d) for letter in 'abcd': df = random_dataframe(rng) df.write_csv(outdir / f'{letter}.csv') plan = pl.concat([ pl.scan_csv(p).with_columns(source=pl.lit(str(p))) for p in outdir.glob('*.csv') ]) print(plan.collect()) # shape: (40, 4) # β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ x ┆ y ┆ z ┆ source β”‚ # β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ # β”‚ f64 ┆ i64 ┆ f64 ┆ str β”‚ # β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ═════β•ͺ═══════════β•ͺ════════════════════════║ # β”‚ 0.636962 ┆ 2 ┆ 7.803198 ┆ /tmp/tmp_c8ysn7b/a.csv β”‚ # β”‚ 0.269787 ┆ 8 ┆ 8.367223 ┆ /tmp/tmp_c8ysn7b/a.csv β”‚ # β”‚ 0.040974 ┆ 6 ┆ 9.0511 ┆ /tmp/tmp_c8ysn7b/a.csv β”‚ # β”‚ 0.016528 ┆ 0 ┆ 11.234892 ┆ /tmp/tmp_c8ysn7b/a.csv β”‚ # β”‚ 0.81327 ┆ 3 ┆ 13.12754 ┆ /tmp/tmp_c8ysn7b/a.csv β”‚ # β”‚ … ┆ … ┆ … ┆ … β”‚ # β”‚ 0.529312 ┆ 7 ┆ 13.094359 ┆ /tmp/tmp_c8ysn7b/d.csv β”‚ # β”‚ 0.785786 ┆ 9 ┆ 10.483029 ┆ /tmp/tmp_c8ysn7b/d.csv β”‚ # β”‚ 0.414656 ┆ 9 ┆ 8.243414 ┆ /tmp/tmp_c8ysn7b/d.csv β”‚ # β”‚ 0.734484 ┆ 6 ┆ 5.976341 ┆ /tmp/tmp_c8ysn7b/d.csv β”‚ # β”‚ 0.711143 ┆ 9 ┆ 5.795439 ┆ /tmp/tmp_c8ysn7b/d.csv β”‚ # β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Do note that scanning with a glob and concatenating across a list of inputs produce the same plans: print(pl.scan_csv(outdir / '*.csv').explain()) # UNION # PLAN 0: # # Csv SCAN /tmp/tmp3t7rlzyq/a.csv # PROJECT */3 COLUMNS # PLAN 1: # # Csv SCAN /tmp/tmp3t7rlzyq/b.csv # PROJECT */3 COLUMNS # PLAN 2: # # Csv SCAN /tmp/tmp3t7rlzyq/c.csv # PROJECT */3 COLUMNS # PLAN 3: # # Csv SCAN /tmp/tmp3t7rlzyq/d.csv # PROJECT */3 COLUMNS # END UNION print( pl.concat([pl.scan_csv(p) for p in outdir.glob('*.csv')]).explain() ) # UNION # PLAN 0: # # Csv SCAN /tmp/tmp3t7rlzyq/a.csv # PROJECT */3 COLUMNS # PLAN 1: # # Csv SCAN /tmp/tmp3t7rlzyq/b.csv # PROJECT */3 COLUMNS # PLAN 2: # # Csv SCAN /tmp/tmp3t7rlzyq/c.csv # PROJECT */3 COLUMNS # PLAN 3: # # Csv SCAN /tmp/tmp3t7rlzyq/d.csv # PROJECT */3 COLUMNS # END UNION Duckdb read_csv(…, filename=True) Alternatively, one can use duckdb.read_csv and pass the filename=True parameter. 
Then materialize the result into a polars.DataFrame from pathlib import Path from string import ascii_lowercase from tempfile import TemporaryDirectory from numpy.random import default_rng import polars as pl import duckdb def random_dataframe(rng, size=10): return pl.DataFrame({ 'x': rng.random(size), 'y': rng.integers(10, size=size), 'z': rng.normal(10, scale=3, size=size), }) rng = default_rng(0) with TemporaryDirectory() as d: outdir = Path(d) for letter in 'abcd': df = random_dataframe(rng) df.write_csv(outdir / f'{letter}.csv') result = duckdb.read_csv(outdir / '*.csv', filename=True).pl() print(result) # shape: (40, 4) # β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ x ┆ y ┆ z ┆ filename β”‚ # β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ # β”‚ f64 ┆ i64 ┆ f64 ┆ str β”‚ # β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ═════β•ͺ═══════════β•ͺ════════════════════════║ # β”‚ 0.636962 ┆ 2 ┆ 7.803198 ┆ /tmp/tmp8cyagj76/a.csv β”‚ # β”‚ 0.269787 ┆ 8 ┆ 8.367223 ┆ /tmp/tmp8cyagj76/a.csv β”‚ # β”‚ 0.040974 ┆ 6 ┆ 9.0511 ┆ /tmp/tmp8cyagj76/a.csv β”‚ # β”‚ 0.016528 ┆ 0 ┆ 11.234892 ┆ /tmp/tmp8cyagj76/a.csv β”‚ # β”‚ 0.81327 ┆ 3 ┆ 13.12754 ┆ /tmp/tmp8cyagj76/a.csv β”‚ # β”‚ … ┆ … ┆ … ┆ … β”‚ # β”‚ 0.529312 ┆ 7 ┆ 13.094359 ┆ /tmp/tmp8cyagj76/d.csv β”‚ # β”‚ 0.785786 ┆ 9 ┆ 10.483029 ┆ /tmp/tmp8cyagj76/d.csv β”‚ # β”‚ 0.414656 ┆ 9 ┆ 8.243414 ┆ /tmp/tmp8cyagj76/d.csv β”‚ # β”‚ 0.734484 ┆ 6 ┆ 5.976341 ┆ /tmp/tmp8cyagj76/d.csv β”‚ # β”‚ 0.711143 ┆ 9 ┆ 5.795439 ┆ /tmp/tmp8cyagj76/d.csv β”‚ # β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
3
78,539,430
2024-5-27
https://stackoverflow.com/questions/78539430/nearest-neighbor-for-list-of-arrays
I have a list of arrays like this (in x, y coordinates):
coordinates= [array([[ 300, 2300],
       [ 670, 2360],
       [ 400, 2300]]),
 array([[1500, 1960],
       [1620, 2200],
       [1505, 1975]]),
 array([[ 980, 1965],
       [1060, 2240],
       [1100, 2250],
       [ 980, 1975]]),
 array([[ 565, 1940],
       [ 680, 2180],
       [ 570, 1945]])]
I want the arrays to be sorted by the nearest neighbor. So, the last coordinate of the first array should be close to the first coordinate of the next array and so forth. I extracted all of the first and last coordinates and put them in two lists using python numpy. Then I tried to use sklearn.neighbors NearestNeighbors, but it didn't work.
expected output:
coordinates= [array([[ 300, 2300],
       [ 670, 2360],
       [ 400, 2300]]),
 array([[ 565, 1940],
       [ 680, 2180],
       [ 570, 1945]]),
 array([[ 980, 1965],
       [1060, 2240],
       [1100, 2250],
       [ 980, 1975]]),
 array([[1500, 1960],
       [1620, 2200],
       [1505, 1975]])]
This looks like a traveling_salesman_problem.
Compute the pairwise distance between all nodes, make the self distance NaN/Inf, then get the shortest path:
from numpy import array
import numpy as np   # needed for np.vstack / np.sqrt / np.fill_diagonal below
import networkx as nx

coordinates= [array([[ 300, 2300],
       [ 670, 2360],
       [ 400, 2300]]),
 array([[1500, 1960],
       [1620, 2200],
       [1505, 1975]]),
 array([[ 980, 1965],
       [1060, 2240],
       [1100, 2250],
       [ 980, 1975]]),
 array([[ 565, 1940],
       [ 680, 2180],
       [ 570, 1945]])]

# extract starts and ends
starts = np.vstack([a[0] for a in coordinates])
ends = np.vstack([a[-1] for a in coordinates])

# compute pairwise distances, except self
dist = np.sqrt(((starts[:, None] - ends)**2).sum(-1))
np.fill_diagonal(dist, np.nan)

# build the graph
G = nx.from_numpy_array(np.round(dist, 1), create_using=nx.DiGraph)
G.remove_edges_from(nx.selfloop_edges(G))

# find shortest path (NB. cycle could be False)
path = nx.approximation.traveling_salesman_problem(G, cycle=True)
# [1, 0, 3, 2, 1]

out = [coordinates[i] for i in path[1:]]

If you don't necessarily want a cycle, use cycle=False and out = [coordinates[i] for i in path].
Output:
[array([[ 300, 2300],
        [ 670, 2360],
        [ 400, 2300]]),
 array([[ 565, 1940],
        [ 680, 2180],
        [ 570, 1945]]),
 array([[ 980, 1965],
        [1060, 2240],
        [1100, 2250],
        [ 980, 1975]]),
 array([[1500, 1960],
        [1620, 2200],
        [1505, 1975]])]
Intermediate dist:
array([[          nan, 1248.05849222,  753.67433285,  446.01008957],
       [1151.34703717,           nan,  520.21630117,  930.12095988],
       [ 669.79474468,  525.09522946,           nan,  410.48751504],
       [ 396.01136347,  940.65137006,  416.47328846,           nan]])
Graph (with distances as edge labels and the shortest path in red):
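Minor follow-up: if you'd rather not write the broadcasting yourself, the same rectangular distance matrix can come from scipy. This is only a drop-in for the dist computation above and reuses starts, ends and np from that snippet:
from scipy.spatial.distance import cdist

dist = cdist(starts, ends)        # same values as the manual np.sqrt(...) line
np.fill_diagonal(dist, np.nan)    # still mask the self distances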
2
1
78,544,920
2024-5-28
https://stackoverflow.com/questions/78544920/pywinauto-how-to-get-rid-of-delays-in-clicks
import time
from pywinauto.application import Application
import os

#Installer path
pycharm_exe = r"G:\Softwarewa\JetBrains PyCharm\JetBrains PyCharm Professional 2023.1 [FileCR]\JetBrains PyCharm Professional 2023.1\pycharm-professional-2023.1.exe"

app = Application(backend="uia").start(pycharm_exe)

# Wait for window to appear
while True:
    if app.PyCharmSetup.exists() == True:
        break

#activate the window
app.PyCharmSetup.window()
# app.PyCharmSetup.print_control_identifiers()

app.PyCharmSetup.Next.click()   #click on next again
app.PyCharmSetup.Next.click()   #click on next once again
#app.PyCharmSetup.print_control_identifiers()

Hi there, I am trying to automate the PyCharm installation, but there is a problem with the script: when it reaches the point where it starts to click the "Next" button, it takes about 3 seconds before it clicks the next button again. How do I get rid of these delays between clicks?
You need to modify the global timings. See the bottom of this guide: https://pywinauto.readthedocs.io/en/latest/wait_long_operations.html#global-timings-for-all-actions
But please be careful, especially about setting all timings to zero: it may affect the reliability of the automation steps, because the application often needs some time to handle events on its side. Maybe Timings.fast() is better in general.
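For reference, a minimal sketch of what adjusting the global timings looks like in code (Timings.fast() is the preset mentioned above; the individual attribute shown is only an illustrative example of selective tuning, and the value is an assumption, not a recommendation):
from pywinauto.timings import Timings

Timings.fast()                       # shrink all default waits at once
# or tune selectively instead of going to the extreme, e.g. (illustrative value):
# Timings.after_clickinput_wait = 0.05
Put this near the top of the script, before the clicks, so every subsequent action picks up the shorter waits.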
3
1
78,516,686
2024-5-22
https://stackoverflow.com/questions/78516686/calculation-of-the-rotation-matrix-between-two-coordinate-systems-with-python-in
I have a UR5 CB3 robotic arm that shall be trained to pick up watermelons. For that, I installed a camera that determines the location of the watermelon with respect to the TCP (or end effector / the gripping "hand"). This TCP coordinate system is called K1. I also know the location and orientation of the TCP with respect to the base coordinate system. This base coordinate system is called K0.
In order to simplify the problem, I removed all the code used to determine the watermelon location, and just inserted arbitrary coordinates given in the K1 coordinate system, as for example [ 0 , 0 , 0.1]. Now I want to calculate what these coordinates are in the K0 coordinate system.
Mathematically, this should be fairly simple: I would use the [roll, pitch, yaw] to calculate the rotation matrix rot_mat_K1_to_K0 and then use matrix multiplication of rot_mat_K1_to_K0 with the vector pointing to W in the K1 system. Then I would add the vector from K0 to K1 given in K1 coordinates and the now rotated vector from K1 to W in a simple vector addition. Then I could send this final vector to my robot to move there.
But in the following Python code, I run into the problem that the robot moves, but in a completely unrelated direction. At least the error is repeatable, so the arm always moves in the same, but wrong, direction.
The following code is called "raw_motion_tester_6.py":
from time import sleep
import numpy as np
from scipy.spatial import transform as tf
import rtde_receive
import rtde_control

print("Environment Ready")

# RTDE configuration
ROBOT_IP = "192.168.1.102"  # IP address of your UR5 CB3 robot
FREQUENCY = 125  # Frequency of data retrieval in Hz

# Establish RTDE connections
rtde_receiver = rtde_receive.RTDEReceiveInterface(ROBOT_IP)
rtde_controler = rtde_control.RTDEControlInterface(ROBOT_IP)

# Define the variables to retrieve from RTDE
variables = ["actual_TCP_pose"] #What does this do?

# Start data synchronization
rtde_receiver.startFileRecording("recorded_data.csv")

####################################################################
# Define the movement distance in the K1 coordinate system
move_distance_x = 0.0  # Move X.X m along the x-axis of K1
move_distance_y = 0.0  # Move X.X m along the y-axis of K1
move_distance_z = -0.1  # Move X.X m along the z-axis of K1
####################################################################

try:
    while True:
        # Receive and parse RTDE data
        tcp_position = rtde_receiver.getActualTCPPose()

        # Matrix transformation
        origin_K1_wrt_K0 = tcp_position[:3]  # Origin of the TCP w.r.t. the Robot Base
        angles_K1_wrt_K0 = tcp_position[3:6]  # Orientation of the origin of the TCP w.r.t.
the Robot Base # Create a rotation matrix based on the angles with scipy transform rot_K0_to_K1 = tf.Rotation.from_rotvec(angles_K1_wrt_K0) # rotvec or euler should not make a difference rot_mat_K0_to_K1 = rot_K0_to_K1.as_matrix() print(f"Rotation Matrix K0 to K1: {rot_mat_K0_to_K1}") rot_mat_K1_to_K0 = rot_mat_K0_to_K1.T # Create the movement vector in the K1 coordinate system move_vector_K1 = np.array([move_distance_x, move_distance_y, move_distance_z]) # Convert the movement vector from K1 to K0 coordinate system move_vector_K0 = rot_mat_K1_to_K0 @ move_vector_K1 print(f"Move Vector K0: {move_vector_K0}", f"Shape of Move Vector K0: {move_vector_K0.shape}") # Calculate the new TCP position in the K0 coordinate system new_tcp_position_K0 = origin_K1_wrt_K0 + move_vector_K0 # Move the robot to the new TCP position rtde_controler.moveL(new_tcp_position_K0.tolist() + [tcp_position[3], tcp_position[4], tcp_position[5]], 0.1, 0.2) # sleep(1) break except KeyboardInterrupt: # Stop file recording and disconnect on keyboard interrupt rtde_receiver.stopFileRecording() rtde_receiver.disconnect() rtde_controler.disconnect() finally: print("Disconnected from the UR5 CB3 robot") The simplified version, without rotation matrix, just with translation, called "raw_motion_tester_7.py" works just fine, with rotation as well as translation of the TCP, so the assumption is logical that the problem is somewhere in the mathematical part of the original code. from time import sleep import numpy as np from scipy.spatial import transform as tf import rtde_receive import rtde_control from math import pi print("Environment Ready") # RTDE configuration ROBOT_IP = "192.168.1.102" # IP address of your UR5 CB3 robot FREQUENCY = 125 # Frequency of data retrieval in Hz # Establish RTDE connections rtde_receiver = rtde_receive.RTDEReceiveInterface(ROBOT_IP) rtde_controler = rtde_control.RTDEControlInterface(ROBOT_IP) # Define the variables to retrieve from RTDE variables = ["actual_TCP_pose"] #What does this do? # Start data synchronization rtde_receiver.startFileRecording("recorded_data.csv") #################################################################### # Define the movement distance in the K1 coordinate system move_distance_x = 0.0 # Move X.X m along the x-axis of K1 move_distance_y = -0.0 # Move X.X m along the y-axis of K1 move_distance_z = -0.0 # Move X.X m along the z-axis of K1 rotate_tcp_x = 0 # Rotate X.X rad around the x-axis of K0 rotate_tcp_y = 0 # Rotate X.X rad around the y-axis of K0 rotate_tcp_z = 0 # Rotate X.X rad around the z-axis of K0 #################################################################### try: while True: # Receive and parse RTDE data tcp_position = rtde_receiver.getActualTCPPose() # Matrix transformation origin_K1_wrt_K0 = tcp_position[:3] # Origin of the TCP w.r.t. the Robot Base angles_K1_wrt_K0 = tcp_position[3:6] # Orientation of the origin of the TCP w.r.t. 
the Robot Base

        # Create the movement vector in the K0 coordinate system
        move_vector_K0 = np.array([move_distance_x, move_distance_y, move_distance_z])

        # Calculate the new TCP position in the K0 coordinate system
        new_tcp_position_K0 = origin_K1_wrt_K0 + move_vector_K0

        # Move the robot to the new TCP position
        rtde_controler.moveL(new_tcp_position_K0.tolist() + [tcp_position[3]+rotate_tcp_x, tcp_position[4]+rotate_tcp_y, tcp_position[5]+rotate_tcp_z], 0.1, 0.2)
        # sleep(1)
        break

except KeyboardInterrupt:
    # Stop file recording and disconnect on keyboard interrupt
    rtde_receiver.stopFileRecording()
    rtde_receiver.disconnect()
    rtde_controler.disconnect()

finally:
    print("Disconnected from the UR5 CB3 robot")

In order to understand what data the rtde_receiver.getActualTCPPose() actually fetches, I saved that to a txt file:
"TCP Position: [0.17477931598731355, 0.5513537036720785, 0.524021777855706, 0.4299023783620231, -1.6571432341558983, 1.3242450163108708]"
The first three are the coordinates in meters and the last three are the orientation in rad. The documentation describes the function as "Actual Cartesian coordinates of the tool: (x,y,z,rx,ry,rz), where rx, ry and rz is a rotation vector representation of the tool orientation".
As described above, I wrote a simpler version of the code to rule out basic connectivity issues or something like that. Also, I am quite certain that I calculate the rotation matrix correctly. I investigated the libraries that I use and, to the best of my knowledge, I am using everything correctly; especially scipy's rot_K0_to_K1 = tf.Rotation.from_rotvec(angles_K1_wrt_K0) seems correct.
If you guys have any input, please let me know.
Edit: I forgot to add a version of the code that runs while not connected to a robot. This one has arbitrary, but realistic, position data for the UR5. I hope that makes it easier to understand my issue.
from time import sleep
import numpy as np
from scipy.spatial import transform as tf

print("Environment Ready")

####################################################################
# Define the movement distance in the K1 coordinate system
move_distance_x = 0.0  # Move X.X m along the x-axis of K1
move_distance_y = 0.0  # Move X.X m along the y-axis of K1
move_distance_z = -0.1  # Move X.X m along the z-axis of K1
####################################################################

# Pseudo TCP Position for test purposes
tcp_position = [0.17477931598731355, 0.5513537036720785, 0.524021777855706, 0.4299023783620231, -1.6571432341558983, 1.3242450163108708] # This is an arbitrary location of the robot, the first three are the xyz coordinates of the TCP in the baseframe, and the last three are the rotation along the xyz axis

# Matrix transformation
origin_K1_wrt_K0 = tcp_position[:3]  # Origin of the TCP w.r.t. the Robot Base
angles_K1_wrt_K0 = tcp_position[3:6]  # Orientation of the origin of the TCP w.r.t.
the Robot Base # Create a rotation matrix based on the angles with scipy transform rot_K0_to_K1 = tf.Rotation.from_rotvec(angles_K1_wrt_K0) # rotvec or euler should not make a difference rot_mat_K0_to_K1 = rot_K0_to_K1.as_matrix() print(f"Rotation Matrix K0 to K1: {rot_mat_K0_to_K1}") rot_mat_K1_to_K0 = rot_mat_K0_to_K1.T # Create the movement vector in the K1 coordinate system move_vector_K1 = np.array([move_distance_x, move_distance_y, move_distance_z]) # Convert the movement vector from K1 to K0 coordinate system move_vector_K0 = rot_mat_K1_to_K0 @ move_vector_K1 print(f"Move Vector K0: {move_vector_K0}", f"Shape of Move Vector K0: {move_vector_K0.shape}") # Calculate the new TCP position in the K0 coordinate system new_tcp_position_K0 = origin_K1_wrt_K0 + move_vector_K0 # Move the robot to the new TCP position new_complete_tcp_description = new_tcp_position_K0.tolist() + [tcp_position[3], tcp_position[4], tcp_position[5]] print("new_complete_tcp_description: ", new_complete_tcp_description) # sleep(1)
Well, guys, I solved it. It looks like the TCP position and orientation had been calibrated wrong before I started using this robot. The calibration was done on the robot itself, not in my code, so I could not find it directly. So my motions were actually correct, but the coordinate system K1 was not what I expected it to be. I hope someone sees this before they waste as much time as I did during the hunt for the error. Once again, we see how important systematic debugging is.
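For anyone who lands here with a similar symptom, a quick offline sanity check can separate a math problem from a calibration problem: take one real pose, push a known tool-frame offset through both possible rotation conventions, and compare the predictions against the position the robot reports after you physically jog the TCP in the pendant's Tool frame. If neither prediction matches, suspect the TCP calibration rather than the transformation code. A minimal sketch (the pose values are just the example numbers from the question):
import numpy as np
from scipy.spatial.transform import Rotation

# one pose as returned by getActualTCPPose(): x, y, z, rx, ry, rz
tcp_pose = [0.17478, 0.55135, 0.52402, 0.42990, -1.65714, 1.32425]
t = np.array(tcp_pose[:3])
R = Rotation.from_rotvec(tcp_pose[3:]).as_matrix()

offset_tool = np.array([0.0, 0.0, 0.1])   # 10 cm along the tool z-axis

print("prediction with R:  ", t + R @ offset_tool)
print("prediction with R.T:", t + R.T @ offset_tool)
# Jog the TCP +0.1 m in the pendant's Tool frame and compare the pose it then
# reports with the two predictions above; a mismatch on both points to the
# TCP/payload calibration, not the math.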
2
1
78,545,235
2024-5-28
https://stackoverflow.com/questions/78545235/sorting-an-array-list-by-a-specific-item-in-an-array-attribute
I'm new to python/jmespath and am trying to sort the below students by their final grade in 2024. I'm sure I can come up with something in python to do it but was just wondering if there is any jmespath magic I can use. I have seen sort_by and know how to search/sort by simple attributes, but haven't yet found anything that can do this. { "school": "lame", "students": [ { "id": 1, "name": "Joe", "grades": [ {"year": 2024, "final": 85}, {"year": 2023, "final": 73} ] }, { "id": 2, "name": "Pedro", "grades": [ {"year": 2024, "final": 92}, {"year": 2023, "final": 90} ] }, { "id": 3, "name": "Mary", "grades": [ {"year": 2024, "final": 88}, {"year": 2023, "final": 70} ] } ] }
Assuming that you want to sort by the latest year (not specifically the value 2024), you could achieve it with this:
jmespath.search("students | sort_by(@,&grades|max_by(@,&year).final)", data)
That would result in:
[{'id': 1, 'name': 'Joe', 'grades': [{'year': 2024, 'final': 85}, {'year': 2023, 'final': 73}]},
 {'id': 3, 'name': 'Mary', 'grades': [{'year': 2024, 'final': 88}, {'year': 2023, 'final': 70}]},
 {'id': 2, 'name': 'Pedro', 'grades': [{'year': 2024, 'final': 92}, {'year': 2023, 'final': 90}]}]
If you actually wanted to sort by the year 2024 specifically, you could use this:
jmespath.search("students | sort_by(@,&(grades[?year==`2024`].final)[0])", data)
But you would get an error if you have any student record without any grade in 2024, as the index selector [0] would return null. In that case you have two options:
If you want to ignore students without a 2024 grade, you could use this:
jmespath.search("students[?grades[?year==`2024`]] | sort_by(@,&(grades[?year==`2024`].final)[0])", data)
If you want to sort them at the end or beginning:
jmespath.search("students | sort_by(@,&(grades[?year==`2024`].final)[0]||`0`)", data)
jmespath.search("students | sort_by(@,&(grades[?year==`2024`].final)[0]||`999`)", data)
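For completeness, the full Python wiring looks like this (the students.json filename is only illustrative; any way of getting the question's JSON into data works):
import json
import jmespath

with open("students.json") as f:
    data = json.load(f)

result = jmespath.search(
    "students | sort_by(@, &grades | max_by(@, &year).final)", data
)
print([s["name"] for s in result])   # ['Joe', 'Mary', 'Pedro']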
2
3
78,541,706
2024-5-28
https://stackoverflow.com/questions/78541706/how-to-computing-k-nearest-neighbors-from-rectangular-distance-matrix-i-e-sci
I want to calculate the k-nearest neighbors using either sklearn, scipy, or numpy but from a rectangular distance matrix that is output from scipy.spatial.distance.cdist. I have tried inputting into the kneighbors_graph and KNeighborsTransformer with metric="precomputed" but have not been successful. How can I achieve this? from scipy.spatial.distance import cdist from sklearn.datasets import make_classification from sklearn.neighbors import kneighbors_graph, KNeighborsTransformer X, _ = make_classification(n_samples=15, n_features=4, n_classes=2, n_clusters_per_class=1, random_state=0) A = X[:10,:] B = X[10:,:] A.shape, B.shape # ((10, 4), (5, 4)) # Rectangular distance matrix dist = cdist(A,B) dist.shape # (10, 5) n_neighbors=3 kneighbors_graph(dist, n_neighbors=n_neighbors, metric="precomputed") # --------------------------------------------------------------------------- # ValueError Traceback (most recent call last) # Cell In[165], line 17 # 14 # (10, 5) # 16 n_neighbors=3 # ---> 17 kneighbors_graph(dist, n_neighbors=n_neighbors, metric="precomputed") # File ~/miniconda3/envs/soothsayer_env/lib/python3.9/site-packages/sklearn/neighbors/_graph.py:117, in kneighbors_graph(X, n_neighbors, mode, metric, p, metric_params, include_self, n_jobs) # 50 """Compute the (weighted) graph of k-Neighbors for points in X. # 51 # 52 Read more in the :ref:`User Guide <unsupervised_neighbors>`. # (...) # 114 [1., 0., 1.]]) # 115 """ # 116 if not isinstance(X, KNeighborsMixin): # --> 117 X = NearestNeighbors( # 118 n_neighbors=n_neighbors, # 119 metric=metric, # 120 p=p, # 121 metric_params=metric_params, # 122 n_jobs=n_jobs, # 123 ).fit(X) # 124 else: # 125 _check_params(X, metric, p, metric_params) # File ~/miniconda3/envs/soothsayer_env/lib/python3.9/site-packages/sklearn/neighbors/_unsupervised.py:176, in NearestNeighbors.fit(self, X, y) # 159 """Fit the nearest neighbors estimator from the training dataset. # 160 # 161 Parameters # (...) # 173 The fitted nearest neighbors estimator. # 174 """ # 175 self._validate_params() # --> 176 return self._fit(X) # File ~/miniconda3/envs/soothsayer_env/lib/python3.9/site-packages/sklearn/neighbors/_base.py:545, in NeighborsBase._fit(self, X, y) # 543 # Precomputed matrix X must be squared # 544 if X.shape[0] != X.shape[1]: # --> 545 raise ValueError( # 546 "Precomputed matrix must be square." # 547 " Input is a {}x{} matrix.".format(X.shape[0], X.shape[1]) # 548 ) # 549 self.n_features_in_ = X.shape[1] # 551 n_samples = X.shape[0] # ValueError: Precomputed matrix must be square. Input is a 10x5 matrix.
Both kneighbors_graph and KNeighborsTransformer are intended to work for graphs within a single set of points; you are computing neighbors between two distinct arrays. I'm not aware of any built-in utility for this, but you could create such a graph manually using numpy and scipy routines. For example: import numpy as np from scipy import sparse indices = np.argpartition(dist, n_neighbors, axis=1)[:, :n_neighbors] data = np.ones(dist.shape[0] * n_neighbors) row = np.repeat(np.arange(dist.shape[0]), n_neighbors) col = indices.ravel() graph = sparse.coo_matrix((data, (row, col)), shape=dist.shape).tocsr() print(graph.todense()) [[1. 0. 1. 1. 0.] [0. 1. 1. 1. 0.] [1. 0. 1. 1. 0.] [1. 0. 1. 1. 0.] [1. 0. 1. 1. 0.] [0. 1. 1. 1. 0.] [0. 1. 1. 1. 0.] [1. 0. 1. 1. 0.] [0. 1. 1. 1. 0.] [1. 0. 1. 1. 0.]] Alternatively, if you want to compute this graph more easily without reference to the pre-computed pairwise distance matrix, you can use KNeighborsTransformer directly: from sklearn.neighbors import KNeighborsTransformer knn = KNeighborsTransformer(n_neighbors=n_neighbors).fit(B) graph = knn.kneighbors_graph(A)
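A small follow-up to the manual numpy/scipy construction above: if you want the edges weighted by the actual distances instead of plain 1s, keep everything the same and take the data from the precomputed distance matrix (this reuses dist, row and col from that snippet):
data = dist[row, col]   # neighbor distances as edge weights
graph = sparse.coo_matrix((data, (row, col)), shape=dist.shape).tocsr()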
2
1
78,543,721
2024-5-28
https://stackoverflow.com/questions/78543721/python-tkinter-with-oop-how-to-define-function-for-changing-button-attributes
I am learning Object Oriented Programming in Python using Tkinter. I want to create a class for each type of widget (right now I am considering only Buttons). I am passing the Button class to my main App class. Below is the code I have so far:
import tkinter as tk
from tkinter import ttk

class Buttons:
    def __init__(self):
        pass

    def buttons(self,text,width,cmd,col,row):
        self.text = text
        self.width = width
        self.cmd = cmd
        self.col = col
        self.row = row
        ttk.Button(text=self.text, width=self.width, command=self.cmd).grid(column=self.col,row=self.row)

class App(tk.Tk, Buttons):
    def __init__(self):
        super().__init__()
        self.geometry('100x100')
        self.resizable(0,0)

        b1 = self.buttons('Btn1',10,lambda: changeButton(),0,0)
        b2 = self.buttons('Btn2',10,lambda: changeButton(),0,1)

        self.mainloop()

    # where should I place this function? How can I change button attributes?
    def changeButton(self):
        b1['text'] = 'Done' # If button 1 pressed
        b2['text'] = 'OKAY' # If button 2 pressed

if __name__=="__main__":
    app = App()
I have two questions -
Where should I place the function 'changeButton()' and why?
If I have 2 buttons that should change their text to something else when pressed, is it possible to do that via a single function, or should there be 2 functions?
Thank you
When I try the above code, I get the error:
line 20, in <lambda>
    b1 = self.buttons('Btn1',10,lambda: changeButton(),0,0)
                                        ^^^^^^^^^^^^^^
TypeError: changeButton() missing 1 required positional argument: 'self'
There are many ways to perform this task, I've added comments inside the code for convenience reasons. import tkinter as tk from tkinter import ttk class MyButton(ttk.Button): def __init__(self, text, width, cmd, col, row): super().__init__() self.textvar = tk.StringVar() # create a StringVar object self.textvar.set(text) # set the value of the StringVar object self.config(textvariable=self.textvar, width=width) self.grid(column=col,row=row) self.bind('<Button-1>', cmd) # binding the cmd function with the Mouse '<Button-1>' event to MyButton object class App(tk.Tk, MyButton): def __init__(self): super().__init__() self.geometry('100x100') self.resizable(0,0) MyButton('Btn1', 10, changeButton, 0, 0) MyButton('Btn2', 10, changeButton, 0, 1) self.mainloop() # where should I place this function? How can I change button attributes? def changeButton(event): """ event is the Mouse '<Button-1>' event object containing information about the event event.widget is the MyButton object that generated the event event.widget.textvar is the StringVar object of the MyButton object event.widget.textvar.get() returns the value of the StringVar object event.widget.textvar.set() sets the value of the StringVar object print(dir(event)) to see all the attributes of the event object """ text = ( 'Done' if event.widget.textvar.get() == 'Btn1' else 'OKAY' if event.widget.textvar.get() == 'Btn2' else 'Btn1' if event.widget.textvar.get() == 'Done' else 'Btn2' ) event.widget.textvar.set(text) if __name__=="__main__": app = App()
2
1
78,543,852
2024-5-28
https://stackoverflow.com/questions/78543852/description-for-apirouter-in-fastapi
Suppose I have the following sample code: from fastapi import APIRouter, FastAPI router_one = APIRouter(tags=["users"]) router_two = APIRouter(tags=["products"]) app = FastAPI(description="## Description for whole application") @router_one.get("/users") async def fn1(): pass @router_one.post("/users") async def fn2(): pass @router_two.get("/products") async def fn3(): pass @router_two.post("/products") async def fn4(): pass app.include_router(router_one) app.include_router(router_two) It is rendered as below in swagger: I know I can pass description argument for individual path operation functions but what I really need is to pass description argument to the APIRouter itself(I showed in the picture). I have some common information which is shared among the path operations below a certain tag like users. I noticed that there is no api available for that in FastAPI like this: router_one = APIRouter(tags=["users"], description=...) # or app.include_router(router_one, description=...) Is there any other way to achieve that?
You can use the openapi_tags parameter of FastAPI class as below - from fastapi import APIRouter, FastAPI router_one = APIRouter(tags=["users"]) router_two = APIRouter(tags=["products"]) app = FastAPI( description="## Description for whole application", openapi_tags=[ {"name": "users", "description": "Operations with users"}, { "name": "products", "description": "Operations with products", "externalDocs": { "description": "Items external docs", "url": "https://fastapi.tiangolo.com/", }, }, ], ) @router_one.get("/users") async def fn1(): pass @router_one.post("/users") async def fn2(): pass @router_two.get("/products") async def fn3(): pass @router_two.post("/products") async def fn4(): pass app.include_router(router_one) app.include_router(router_two) Result
2
3
78,528,308
2024-5-24
https://stackoverflow.com/questions/78528308/index-difference-is-not-working-in-the-pandas
I have one year of hourly time-series data, so a few timestamps are missing in between. The shape of this data is (8188, 3); sample data is attached below.
I am resampling the timestamps according to my data duration, which will generate all the timestamps of the year, including the ones that were missing in my original data:
df_hourly = temp_df.resample('h').asfreq()
The shape of the resampled index is (8764, 1).
Now I am taking the difference of the resampled data and the original data:
new_rows = df_hourly.index.difference(original_index)
so the resulting index shape should come out as (8764-8188=576,), and then I will replace these 576 missing timestamps with the median of the Total column.
temp_df = temp_df[temp_df['cell'] == cell]
print(temp_df.head())
temp_df.to_csv('temp_df.csv')
print(temp_df.shape)
# get_missing_duplicates(temp_df)
# fill_missing_duplicates()
print(temp_df.shape)
temp_df['_time'] = pd.to_datetime(temp_df['_time'])
print(temp_df['_time'].dtype)
temp_df.set_index('_time', inplace=True)
original_index = temp_df.index
print(original_index)
print("original_index",original_index.shape)
df_hourly = temp_df.resample('h').asfreq()
print(df_hourly)
# This line is not working as expected
new_rows = df_hourly.index.difference(original_index)
print("&&&&&&&&&&&&",original_index)
median_value = df['Total'].median()
# new_rows = df_hourly.index.difference(temp_df.index)
print("new_rows",new_rows)
The line new_rows = df_hourly.index.difference(original_index) gives the wrong result; basically, it should return the difference from df_hourly.index.
The shape of temp_df is coming as (8188,),
the shape of df_hourly is coming as (8764,),
and the shape of new_rows is also coming as (8764,).
The output of temp_df.index is
DatetimeIndex(['2023-05-22 02:00:04+00:00', '2023-05-22 03:00:03+00:00',
               '2023-05-22 04:00:03+00:00', '2023-05-22 05:00:03+00:00',
               '2023-05-22 06:00:03+00:00', '2023-05-22 07:00:03+00:00',
               '2023-05-22 08:00:03+00:00', '2023-05-22 09:00:03+00:00',
               '2023-05-22 10:00:03+00:00', '2023-05-22 11:00:03+00:00',
               ...
               '2024-05-20 17:00:03+00:00', '2024-05-20 18:00:04+00:00',
               '2024-05-20 20:00:03+00:00', '2024-05-20 21:00:03+00:00',
               '2024-05-20 22:00:03+00:00', '2024-05-20 23:00:03+00:00',
               '2024-05-21 01:00:03+00:00', '2024-05-21 02:00:03+00:00',
               '2024-05-21 04:00:03+00:00', '2024-05-21 05:00:03+00:00'],
              dtype='datetime64[ns, UTC]', name='_time', length=8188, freq=None)
The output of df_hourly.index is:
df_hourly_index RangeIndex(start=0, stop=8764, step=1) (8764, 3)
Sample data:
pandas 2.2.1 Problem with resampling of non-uniform timestamps The issue is not with the code line you mentioned, but with your data. If you look at your time values, you will see that after a whole hour, a few seconds passed before a recording was made: '2023-05-22 02:00:04+00:00' - 4 seconds after 2:00 '2023-05-22 03:00:03+00:00' - 3 seconds after 3:00 Hourly resampling eliminates these sporadic biases. Therefore, you will end up with different indexes that may not overlap at all with the original timestamps. So you may need to round your time before resampling in order to find out missing timestamps, sort of: temp_df['_time'] = pd.to_datetime(temp_df['_time'], utc=True).dt.round('h') As an alternative, you could find missing timestamps by counting data in each hour. In your code, it could look like this: new_rows = df_hourly.index[temp_df.resample('h').size() == 0] If for some reason we need to keep the original timestamps, then I think that asfreq() may not be the best choice; using first() looks more reasonable. Also, I'd add in this case some negative offset when resampling, just in case if some recordings were made a few seconds before the end of an hour, something like this: df_hourly = temp_df.resample('h', offset=pd.Timedelta('-5min')).first() If the reason to find the difference between the resampled timestamps and the original ones is to fill in the blanks, we can use fillna(...) instead. Code to experiment with The following sample data was taken from the original post with one change to represent a possible problem (see "a second before ..." in the code and a notion about the offset= parameter of resample): import pandas as pd timestamps = [ '2024-05-20 17:00:03+00:00', '2024-05-20 18:00:04+00:00', # missing '2024-05-20 20:00:03+00:00', '2024-05-20 20:59:59+00:00', # a second before 21:00:00 '2024-05-20 22:00:03+00:00', '2024-05-20 23:00:03+00:00', # missing '2024-05-21 01:00:03+00:00', '2024-05-21 02:00:03+00:00', # missing '2024-05-21 04:00:03+00:00', '2024-05-21 05:00:03+00:00' ] values = [403, 369, 375, 394, 398, 372, 335, 385, 415, 383] df = pd.DataFrame({ 'time': pd.to_datetime(timestamps, utc=True), 'value': values} ) Option 1: Keep the the original timestamps (no rounding) five_min = pd.Timedelta('5min') grouped_by_hour = df.set_index('time').resample('h', offset=-five_min) resampled_data = grouped_by_hour.first() is_new_record = grouped_by_hour.size() == 0 resampled_data.loc[is_new_record, 'value'] = round(df['value'].median()) resampled_data['value'] = resampled_data.astype({'value': int}) resampled_data.index += five_min print('Resampled_data'.upper(), resampled_data, '\nNew records'.upper(), resampled_data.index[is_new_record], sep='\n') Option 2: Round the timestamps to the nearest hour before resampling rounded_original_time = df['time'].dt.round('h') resampled_data = ( df[['value']] .set_index(rounded_original_time) .resample('h') .asfreq() ) new_records = resampled_data.index.difference(rounded_original_time) resampled_data.loc[new_records, 'value'] = round(df['value'].median()) print('Resampled_data'.upper(), resampled_data, '\nNew records'.upper(), new_records, sep='\n')
2
2
78,543,027
2024-5-28
https://stackoverflow.com/questions/78543027/polars-how-to-drop-inf-rows
I have a polars dataframe containing some rows with infinite values (np.inf and -np.inf) which I'd like to drop. I am aware of drop_nans and drop_null but I don't see a similar drop_inf. As I want to handle nan values separately, I cannot just replace inf with nan and then call drop_nans. What's an idiomatic way of dropping rows with infinite values?
you can use DataFrame.filter() and Expr.is_infinite() to filter out rows you don't need: import numpy as np import polars as pl df = pl.DataFrame({ "a": [1, 2, 3, np.inf], "b": [1, 2, 3, 4] }) df.filter(~pl.col('a').is_infinite()) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ f64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 1.0 ┆ 1 β”‚ β”‚ 2.0 ┆ 2 β”‚ β”‚ 3.0 ┆ 3 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ Or, if you want to check all columns for infinite values, you can use .any_horizontal(): import numpy as np import polars as pl df = pl.DataFrame({ "a": [1, 2, 3, np.inf], "b": [1, 2, -np.inf, 4] }) df.filter( ~pl.any_horizontal(pl.all().is_infinite()) ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 1.0 ┆ 1.0 β”‚ β”‚ 2.0 ┆ 2.0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ If you have non-numeric columns you can also use selectors: import numpy as np import polars as pl import polars.selectors as cs df = pl.DataFrame({ "a": [1, 2, 3, np.inf], "b": [1, 2, -np.inf, 4], "c": list("abcd") }) df.filter( ~pl.any_horizontal(cs.numeric().is_infinite()) ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 1.0 ┆ 1.0 ┆ a β”‚ β”‚ 2.0 ┆ 2.0 ┆ b β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
2
3
78,540,833
2024-5-27
https://stackoverflow.com/questions/78540833/how-to-write-a-list-of-mixed-type-of-nested-elements-in-google-document-using-go
My first post here. My apologies in advance if I inadvertently break any rules. Prerequisites: Assume that I've created a Google service account, and obtained the credentials (JSON) file. I created a Google document, and I've its doc_ID available. My programming language is Python. Problem: Assume that I've a list of mixed-type of elements, e.g. (normal text, few nested list items, normal text, another nested list item), etc. See the example below: my_list = [ "This is the normal text line, without any itemized list format.", "* This is the outermost list item (number)", "* This is another outermost list item (number)", " * This is a first-level indented list item (alpha)", " * This is a second-level indented list item (roman)", " * This is another second-level indented list item (roman)", " * This is another first-level indented list item (alpha)", "This is the normal text line, without any itemized list format.", "* This is one more outermost list item (number)", ] I want to write it to a Google document and obtain a mix of normal text paragraph, followed by a nested and properly indented numbered list, followed by a line of normal text paragraph, and finally, another nested list item (preferably continuing from my previous preset, but with the fresh numbering will also be acceptable). I tried several approaches, and got the closest possible result with the suggestion of @tanaike in the following post. How do I indent a bulleted list with the Google Docs API Here is my sample code: text_insert = "" for text in my_list: if text.startswith('* '): text_insert += text[2:] + '\n' elif text.startswith(' * '): text_insert += '\t' + text[4:] + '\n' elif text.startswith(' * '): text_insert += '\t\t' + text[6:] + '\n' else: text_normal = text + '\n' text_insert += text_normal end_index_normal = start_index + len(text_insert) + 1 - indent_corr start_index_normal = end_index_normal - len(text_normal) end_index = start_index + len(text_insert) + 1 indented_requests = [ { "insertText": { "text": text_insert, 'location': {'index': start_index}, } }, { "createParagraphBullets": { 'range': { 'startIndex': start_index, 'endIndex': end_index, # Add 2 for the newline character }, "bulletPreset": "NUMBERED_DECIMAL_ALPHA_ROMAN", } }, { "deleteParagraphBullets": { 'range': { 'startIndex': start_index_normal, 'endIndex': end_index_normal, }, } }, ] try: u_service.documents().batchUpdate(documentId=doc_id, body={'requests': indented_requests}).execute() except HttpError as error: print(f"An error occurred: {error}") What I get in my Google document is the following: However, my goal is the following (obtained after manual editing): How can I achieve this? Any help will be greatly appreciated.
Modification points: In your showing script, end_index_normal and start_index_normal are overwritten by another loop. In the case of deleteParagraphBullets, only one character is selected. Also, I guessed that in your situation, the last character might not be required to be included in createParagraphBullets. When these points are reflected in your script, how about the following modification? Modified script: # Please set your variables. doc_id = "###" start_index = 1 indent_corr = 1 my_list = [ "This is the normal text line, without any itemized list format.", "* This is the outermost list item (number)", "* This is another outermost list item (number)", " * This is a first-level indented list item (alpha)", " * This is a second-level indented list item (roman)", " * This is another second-level indented list item (roman)", " * This is another first-level indented list item (alpha)", "This is the normal text line, without any itemized list format.", "* This is one more outermost list item (number)", ] text_insert = "" deleteParagraphBullets = [] for text in my_list: if text.startswith('* '): text_insert += text[2:] + '\n' elif text.startswith(' * '): text_insert += '\t' + text[4:] + '\n' elif text.startswith(' * '): text_insert += '\t\t' + text[6:] + '\n' else: text_normal = text + '\n' text_insert += text_normal end_index_normal = start_index + len(text_insert) + 1 - indent_corr start_index_normal = end_index_normal - len(text_normal) deleteParagraphBullets.append({ "deleteParagraphBullets": { 'range': { 'startIndex': start_index_normal, 'endIndex': start_index_normal + 1, }, } }) deleteParagraphBullets.reverse() indented_requests = [ { "insertText": { "text": text_insert, 'location': {'index': start_index}, } }, { "createParagraphBullets": { 'range': { 'startIndex': start_index, 'endIndex': start_index + len(text_insert), }, "bulletPreset": "NUMBERED_DECIMAL_ALPHA_ROMAN", } } ] + deleteParagraphBullets u_service.documents().batchUpdate(documentId=doc_id, body={'requests': indented_requests}).execute() Testing: When this script is run, the following result is obtained. From To References: CreateParagraphBulletsRequest DeleteParagraphBulletsRequest
2
2
78,525,287
2024-5-23
https://stackoverflow.com/questions/78525287/flat-a-python-dict-containing-lists
I am trying to normalize a dictionary containing some lists. As an MVCE (Minimal, Verifiable, Complete Example), consider the following dictionary: test_dict = { 'name' : 'john', 'age' : 20, 'addresses' : [ { 'street': 'XXX', 'number': 123, 'complement' : [ 'HOUSE', 'NEAR MARKET' ] }, { 'street': 'YYY', 'number': 456, 'complement' : [ 'AP', 'NEAR PARK' ] }, ], 'phones' : [ '123456' ], 'gender' : 'MASC' } I want each list found in the dictionary to generate a line, so my desired output is: {'name': 'john', 'age': 20, 'street': 'XXX', 'number': 123, 'complement': 'HOUSE', 'phones': '123456', 'gender' : 'MASC'} {'name': 'john', 'age': 20, 'street': 'XXX', 'number': 123, 'complement': 'NEAR MARKET', 'phones': '123456', 'gender' : 'MASC'} {'name': 'john', 'age': 20, 'street': 'YYY', 'number': 456, 'complement': 'AP', 'phones': '123456', 'gender' : 'MASC'} {'name': 'john', 'age': 20, 'street': 'YYY', 'number': 456, 'complement': 'NEAR PARK', 'phones': '123456', 'gender' : 'MASC'} However, when I run my code, I am not able to iterate over more than one list. My intention was to develop a recursive function, so I wouldn't have to worry about a dictionary with more complex structures (a dictionary with more lists inside dictionaries, etc.). However, when I run my code, the output I get is: {'name': 'john', 'age': 20, 'street': 'XXX', 'number': 123, 'complement': 'HOUSE'} {'name': 'john', 'age': 20, 'street': 'XXX', 'number': 123, 'complement': 'NEAR MARKET'} {'name': 'john', 'age': 20, 'street': 'YYY', 'number': 456, 'complement': 'AP'} {'name': 'john', 'age': 20, 'street': 'YYY', 'number': 456, 'complement': 'NEAR PARK'} {'name': 'john', 'age': 20, 'phones': '123456'} My python code (MVCE): def get_list_values(lista, dicionario, key_name, results): if len(lista) > 0: for l in lista: if isinstance(l, dict): search_values(l, dicionario.copy(), results) else: dicionario_metodo = dicionario.copy() dicionario_metodo[key_name] = l results.append(dicionario_metodo) def search_values(dicionario, test, results): for k, v in dicionario.items(): if isinstance(v, list): get_list_values(v, test, k, results ) else: test[k] = v if not any(isinstance(v, list) for v in dicionario.values()): results.append(test.copy()) return results test = {} results = [] for r in search_values(test_dict, test, results): print(r) In which part of my recursion am I going wrong, so it doesn't generate my desired output? Edit 1: test_dict = { 'name' : 'john', 'age' : 20, 'addresses' : [ { 'street': 'XXX', 'number': 123, 'complement' : [ 'HOUSE', 'NEAR MARKET' ] }, { 'street': 'YYY', 'number': 456, 'complement' : [ 'AP', 'NEAR PARK' ] }, ], 'type' : { 'category': 'G123', 'products': [ 'test1', 'test2' ] }, 'phones' : [ '123456' ], 'gender' : 'MASC' }
It took me some time to get this right, but check this out.
def flat(out, *kvs):
    match kvs:
        case []:
            yield out
        case (k, []), *kvs:
            yield from flat(out, *kvs)
        case (k, list(l)), *kvs:
            for v in l:
                yield from flat(out, (k, v), *kvs)
        case (_, dict(d)), *kvs:
            yield from flat(out, *d.items(), *kvs)
        case (k, v), *kvs:
            yield from flat([*out, (k, v)], *kvs)
        case _:
            raise ValueError("Invalid")
That is all you need! This implementation makes extensive use of recursion, pattern matching and generators. You can try it out like this:
x = map(dict, flat([], (..., test_dict)))
print(*x, sep='\n')
# {'name': 'john', 'age': 20, 'street': 'XXX', 'number': 123, 'complement': 'HOUSE', 'phones': '123456', 'gender': 'MASC'}
# {'name': 'john', 'age': 20, 'street': 'XXX', 'number': 123, 'complement': 'NEAR MARKET', 'phones': '123456', 'gender': 'MASC'}
# {'name': 'john', 'age': 20, 'street': 'YYY', 'number': 456, 'complement': 'AP', 'phones': '123456', 'gender': 'MASC'}
# {'name': 'john', 'age': 20, 'street': 'YYY', 'number': 456, 'complement': 'NEAR PARK', 'phones': '123456', 'gender': 'MASC'}
With your second input data, the result would be as below:
# {'name': 'john', 'age': 20, 'street': 'XXX', 'number': 123, 'complement': 'HOUSE', 'category': 'G123', 'products': 'test1', 'phones': '123456', 'gender': 'MASC'}
# {'name': 'john', 'age': 20, 'street': 'XXX', 'number': 123, 'complement': 'HOUSE', 'category': 'G123', 'products': 'test2', 'phones': '123456', 'gender': 'MASC'}
# {'name': 'john', 'age': 20, 'street': 'XXX', 'number': 123, 'complement': 'NEAR MARKET', 'category': 'G123', 'products': 'test1', 'phones': '123456', 'gender': 'MASC'}
# {'name': 'john', 'age': 20, 'street': 'XXX', 'number': 123, 'complement': 'NEAR MARKET', 'category': 'G123', 'products': 'test2', 'phones': '123456', 'gender': 'MASC'}
# {'name': 'john', 'age': 20, 'street': 'YYY', 'number': 456, 'complement': 'AP', 'category': 'G123', 'products': 'test1', 'phones': '123456', 'gender': 'MASC'}
# {'name': 'john', 'age': 20, 'street': 'YYY', 'number': 456, 'complement': 'AP', 'category': 'G123', 'products': 'test2', 'phones': '123456', 'gender': 'MASC'}
# {'name': 'john', 'age': 20, 'street': 'YYY', 'number': 456, 'complement': 'NEAR PARK', 'category': 'G123', 'products': 'test1', 'phones': '123456', 'gender': 'MASC'}
# {'name': 'john', 'age': 20, 'street': 'YYY', 'number': 456, 'complement': 'NEAR PARK', 'category': 'G123', 'products': 'test2', 'phones': '123456', 'gender': 'MASC'}
Edit: Mapped the key-value pairs into dicts as per requirements, made the code neater.
2
3
78,539,059
2024-5-27
https://stackoverflow.com/questions/78539059/querying-root-main-tkinter-tcl-event-queue-depth
I have an application written with a tkinter UI that performs real world activities via a GPIO stack. As an example, imagine my application has a temperature probe that checks the room temp and then has an external hardware interface to the air conditioning to adjust the temperature output of the HVAC system. This is done in a simple way via a GPIO stack.
The application has some "safety critical" type actions that it can perform. There are lots of hardware interlocks, but I still don't want the application crashing and causing the AC to run full blast and turn my theoretical house into a freezer.
The problem is always: how to convert a GUI event driven application into a GUI application that also has another loop running without
Breaking tkinter / tcl design by shoving everything in a "while(true): do things" and adding root.update() to it.
Adding threading to the application, which is difficult and potentially unsafe.
So, I've settled on the following architecture:
import tkinter as tk

def main(args):
    # Build the UI for the user
    root = tk.Tk()
    buttonShutDown = tk.Button(root, text="PUSH ME!")#, command=sys.exit(0))
    buttonShutDown.pack()

    # Go into the method that looks at inputs and outputs
    monitorInputsAndActionOutputs(root)

    # The main tkinter loop. NOTE, there are about 9000 articles on why you should NOT just shove everything in a loop and call root.update(). This is the correct way of doing it! Use root.after() calls!
    root.mainloop()

def monitorInputsAndActionOutputs(root):
    print ("checked the room temperature")
    print ("adjusted the airconditioning to make comfy")
    root.after(100, monitorInputsAndActionOutputs, root) # readd one event of the monitorInputsAndActionOutputs process to the tkinter / tcl queue
    return None

if __name__ == '__main__':
    import sys
    sys.exit(main(sys.argv))
This actually works just fine, and theoretically it will continue to work forever and everyone will have the comfy(TM).
However, I see a potential problem in that the application has a decent number of methods doing different closed loop PID style adjustments (read, think, change, iterate) and I'm concerned that the tkinter event queue will start to clog with processing after() commands and the overall system will either slow to a crawl, break completely, freeze the UI or a combination of all three.
So what I want to do is add in something like (rough code example, this does not run of course):
def (checkQueueDepth):
    myQueue = root.getQueueLength()
    if(myQueue > 100 items):
        doSomethingToAlertSomeone()
        deprioritiseNonCriticalEventsTillThingsCalmTFDown()
In order to implement something like this, I need to programmatically acquire the root / tkinter queue depth. After an extensive beardscratching wade through
the tkinter docs
TCL / TK docs
The source code of tkinter
The rest of stackoverflow
The Internet
The fridge (for a snacc)
I have ascertained that there is no ability for an external module to query the queue length of tk, which is based on the tcl library and probably implements or extends their notifier. Based on the tkinter code, it also doesn't look like there is any private method or object/variable that I can crash through the front door onto and interrogate. Thirdly, the only "solution" would seem to be to implement a new notifier system for TCL and then extend its functionality to provide this feature. There is zero chance of me doing that; I don't have the knowledge or time and I'd sooner rewrite my entire application in another language.
Since it can't be done the "right" way, I thought about the "we're not going to call it wrong, but might as well be" way. What I came up with is running something like this: def checkCodeLoops(root): # The maximum time we will accept the queue processing to blow out to is 3 seconds. maxDelayInQueueResponse = datetime.timedelta(seconds = 3) # Take a note of the time now as well as the last time this method was run lastCheckTime = checkTime checkTime = datetime.datetime.now() # If the time between this invocation of the checkCodeLoops method and the last one is greater than the max allowable time, panic! if(checkTime - lastCheckTime > maxDelayInQueueResponse): doSomethingToReduceLoad() lockUISoUserCantAddMoreWorkToTheQueue() # reque this method so that we can check again in 100ms (theoretically in 100ms at least - if the queue's flooded then this might never fire) root.after_idle(100, monitorInputsAndActionOutputs, root) return None Overall it's a s*** way of doing it. You basically add work to the queue in order to see if the queue is busy (stupid) and also there is another issue - if the queue is flooded then this code won't actually run because any events added to the after_idle() process are only processed when the queue is empty of after() and other higher priority items. So I could change it to root.after instead of root.after_idle, and theoretically that would help, but we still have the problem of using a queued event to check if the queue is not functioning, which is kinda stupid. If I had a way of checking the queue depth, I could start to implement load control strategies BEFORE things got to the panic stage. Really keen to hear if anyone has a better way of doing this.
There isn't really any way to know, unfortunately. At the point when you could ask the question, many event sources only know whether there is at least one event pending (that corresponds to a single OS event) and the system calls all are capable of accepting multiple events from the corresponding source at that point (and can do so safely because of non-blocking I/O). It's really complicated down there! What you can do is monitor whether regular timer events have an idle event occur between them. Idle events fire when the other event sources have nothing to contribute and the alternative is to go to sleep in a select() syscall. (Or poll() or whatever the notifier has been configured to use, depending on platform and build options.) If idle events are being serviced between those regular timer events, the queue is healthy; if not, you are in an overrun situation. Queueing theory tells us that as long as the consumers keep up with the producers, queue size will tend to zero. When it is the other way round, queue size tends to infinity. (Exact balancing is very unlikely.)
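A minimal sketch of that idle-event heartbeat, using made-up names (health_check, mark_idle) rather than anything from the original post: each timer tick arms an after_idle marker, and if the marker has not fired by the next tick, the queue never drained in between.

import tkinter as tk

idle_seen = True  # start optimistic so the first tick does not false-alarm

def mark_idle():
    # Runs only once the event queue has drained down to the idle stage.
    global idle_seen
    idle_seen = True

def health_check(root):
    global idle_seen
    if not idle_seen:
        # No idle event since the last tick: the queue is in overrun.
        print("event queue overrun - start shedding load")
    idle_seen = False
    root.after_idle(mark_idle)        # marker for the coming interval
    root.after(500, health_check, root)

root = tk.Tk()
root.after(500, health_check, root)
root.mainloop()

(A production version would also cancel the previous marker with after_cancel so unfired markers do not pile up during a long overrun.)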
4
3
78,539,704
2024-5-27
https://stackoverflow.com/questions/78539704/conda-update-flips-between-two-versions-of-the-same-package-from-the-same-channe
This is the continuation of my previous question. Now, with both channel_priority: flexible and channel_priority: strict, every conda update -n base --all flips between The following packages will be UPDATED: libarchive 3.7.2-h313118b_1 --> 3.7.4-haf234dc_0 zstd 1.5.5-h12be248_0 --> 1.5.6-h0ea2cb4_0 The following packages will be DOWNGRADED: zstandard 0.22.0-py311he5d195f_0 --> 0.19.0-py311ha68e1ae_0 and The following packages will be UPDATED: zstandard 0.19.0-py311ha68e1ae_0 --> 0.22.0-py311he5d195f_0 The following packages will be DOWNGRADED: libarchive 3.7.4-haf234dc_0 --> 3.7.2-h313118b_1 zstd 1.5.6-h0ea2cb4_0 --> 1.5.5-h12be248_0 and the corresponding conda list -n base --show-channel-urls are libarchive 3.7.2 h313118b_1 XXX zstandard 0.22.0 py311he5d195f_0 XXX zstd 1.5.5 h12be248_0 XXX and libarchive 3.7.4 haf234dc_0 XXX zstandard 0.19.0 py311ha68e1ae_0 XXX zstd 1.5.6 h0ea2cb4_0 XXX where XXX is the exact same (internal/corp) channel (which I do not cotrol). What am I doing wrong? How do I avoid this? Is this a bug in conda 24.5.0 or in the configuration of the XXX channel? PS. Reported
I cannot fully replicate the behavior that you are experiencing, so your mileage may vary with this solution. Conda has a config feature that allows you to pin package specs. This feature is still in beta according to the conda docs, so it may change in the future. First add the package pinning to your conda config: conda config --append pinned_packages "zstandard>=0.22.0" Then install the newest version of zstandard and downgrade the other two packages. conda install zstandard==0.22.0 libarchive==3.7.2 zstd==1.5.5 Finally check the upgrade again: conda update -n base --all
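To confirm the pin was actually recorded (a quick sanity check, assuming a reasonably recent conda), you can print the config key back:

conda config --show pinned_packages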
2
1
78,537,616
2024-5-27
https://stackoverflow.com/questions/78537616/how-to-plot-rows-containing-nas-in-stacked-bar-chart-python
I am still an absolute beginner in python and I googled a lot to built code to make a stacked bar chart plot my df. I am using spyder and python 3.12 After some data preparation, I am left with this df: {'storage': {0: 'a', 2: 'a', 3: 'a', 4: 'b', 5: 'b'}, 'time': {0: '4h', 2: '4h', 3: '4h', 4: '4h', 5: '4h'}, 'Predicted Class': {0: 'germling', 2: 'resting', 3: 'swollen', 4: 'germling', 5: 'hyphae'}, '%': {0: 0.22, 2: 99.02, 3: 0.76, 4: 65.41, 5: 1.72}} I prepare it with pivoting for plotting as a stacked bar chart. class_order = ['resting', 'swollen', 'germling', 'hyphae', 'mature hyphae'] counts2_reduced_df['Predicted Class'] = pd.Categorical(counts2_reduced_df['Predicted Class'], categories=class_order, ordered=True) counts2_pivot_df = counts2_reduced_df.pivot_table(index=["storage","time"], columns="Predicted Class", values="%", aggfunc="sum") This leaves me with this df that now contains to cells with 0 as values that I would like to prevent from being plotted. When I plot this df now, I get the following plot (it still has other issues like making the values of the small percentages readable, but this is an issue for another post I guess). I tried different approaches like converting 0s to nas and then drop.na of course removes the entire rows, but I would like to only prevent the cells containing 0s from being plotted / shown in the plot. I tried counts2_pivot_df = counts2_pivot_df.replace(0, "") but it prevents the columns "hyphae" and "mature hyphae" completely from being plotted. {'resting': {('a', '4h'): 99.02, ('b', '4h'): 2.57, ('c', '4h'): 2.19}, 'swollen': {('a', '4h'): 0.76, ('b', '4h'): 30.19, ('c', '4h'): 25.57}, 'germling': {('a', '4h'): 0.22, ('b', '4h'): 65.41, ('c', '4h'): 70.67}, 'hyphae': {('a', '4h'): '', ('b', '4h'): 1.72, ('c', '4h'): 1.47}, 'mature hyphae': {('a', '4h'): '', ('b', '4h'): 0.11, ('c', '4h'): 0.1}} Is there a way for solving this issue? I have read and tried many posts but cannot come up with a solution. EDIT: This is the code I use to create the plots: fig, ax = plt.subplots() # Colors for each class, color_mapping = { 'resting': 'yellow', 'swollen': 'blue', 'germling': 'red', 'hyphae': 'cyan', 'mature hyphae': 'green', } # Colors for each class using the mapping colors = [color_mapping[class_name] for class_name in class_order] # Ensure colors follow the same order # Plotting the stacked bars counts2_pivot_df.plot(kind='bar', stacked=True, color=colors, ax=ax) # Set x-tick labels correctly based on the pivot index ax.set_xticks(range(len(counts2_pivot_df.index))) ax.set_xticklabels([f'{i[0]} & {i[1]}' for i in counts2_pivot_df.index], rotation=0) ax.set_xlabel('Storage Condition and Time after Inoculation') ax.set_ylabel('Percentage (%)') plt.xticks(rotation=360) plt.legend(title='Predicted Class') ax.legend(loc='center left', bbox_to_anchor=(1, 0.5)) # Annotating percentages for n, bar_group in enumerate(ax.containers): for bar in bar_group: height = bar.get_height() text_x = bar.get_x() + bar.get_width() / 2 text_y = bar.get_y() + height / 2 ha = 'center' if height > 5 else 'left' va = 'center' text_x_adjusted = text_x if height > 5 else bar.get_x() + bar.get_width() ax.annotate(f'{height:.2f}%', xy=(text_x_adjusted, text_y), ha=ha, va=va, color='black', fontsize=8, fontweight='bold') plt.show()
Since you are plotting the labels manually, an easy option is to add a continue in your loop when the height is null: for bar in bar_group: height = bar.get_height() if height == 0: continue text_x = bar.get_x() + bar.get_width() / 2 text_y = bar.get_y() + height / 2 ha = 'center' if height > 5 else 'left' va = 'center' text_x_adjusted = text_x if height > 5 else bar.get_x() + bar.get_width() ax.annotate(f'{height:.2f}%', xy=(text_x_adjusted, text_y), ha=ha, va=va, color='black', fontsize=8, fontweight='bold') Output:
2
2
78,538,178
2024-5-27
https://stackoverflow.com/questions/78538178/python-httpx-log-all-request-headers
How can I log all request headers of an httpx request? I use log level DEBUG and response headers log fine, but there are no request headers in the log. If it matters, I'm using the async API of the httpx lib.
Use custom event hooks: import httpx import logging logging.basicConfig(level=logging.DEBUG) async def log_request(request): logging.debug(f"Request headers: {request.headers}") async with httpx.AsyncClient(event_hooks={'request': [log_request]}) as client: response = await client.get('https://httpbin.org/get')
2
2
78,535,999
2024-5-26
https://stackoverflow.com/questions/78535999/how-to-reorganize-data-to-correctly-lineplot-in-python
I have the following code that plots ydata vs xdata which is supposed to be a circle. The plot has two subplots -- a lineplot with markers and a scatter plot. import matplotlib.pyplot as plt xdata = [-1.9987069285852805, -1.955030386765729, -1.955030386765729, -1.8259096357678795, -1.8259096357678795, -1.6169878720004491, -1.6169878720004491, -1.3373959790579202, -1.3373959790579202, -0.9993534642926399, -0.9993534642926399, -0.6176344077078071, -0.6176344077078071, -0.20892176376743077, -0.20892176376743077, 0.20892176376743032, 0.20892176376743032, 0.6176344077078065, 0.6176344077078065, 0.999353464292642, 0.999353464292642, 1.3373959790579217, 1.3373959790579217, 1.6169878720004487, 1.6169878720004487, 1.8259096357678786, 1.8259096357678786, 1.9550303867657255, 1.9550303867657255, 1.9987069285852832] ydata = (0.0, -0.038801795445724575, 0.038801795445724575, -0.07590776623879933, 0.07590776623879933, -0.10969620340136318, 0.10969620340136318, -0.13869039009450249, 0.13869039009450249, -0.16162314123018345, 0.16162314123018345, -0.1774921855402276, 0.1774921855402276, -0.18560396964016201, 0.18560396964016201, -0.185603969640162, 0.185603969640162, -0.17749218554022747, 0.17749218554022747, -0.16162314123018337, 0.16162314123018337, -0.13869039009450224, 0.13869039009450224, -0.10969620340136294, 0.10969620340136294, -0.0759077662387991, 0.0759077662387991, -0.038801795445725006, 0.038801795445725006, 0.0) fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16,6)) fig.suptitle('Plot comparison: line vs scatter'+ 3*'\n') fig.subplots_adjust(wspace=1, hspace=3) fig.supxlabel('x') fig.supylabel('y') ax1.plot(xdata, ydata, 'o-', c='blue') ax1.set_title('Line-point plot', c='blue') for i in range(len(xdata)): ax2.scatter(xdata, ydata, c='orange') ax2.set_title('Scatter plot', c='orange') plt.savefig('line_vs_scatter_plot.png') plt.show() Output: From the output, it can be seen that the lineplot does not connect the dots (or points). Can we rearrange the x or y data in someway that fixes the issue? Or do something else?
If the xy coordinates have a consistent zig-zag pattern (odd/even), you could do : def unzigzag(data): data = list(data) # just in case return data[::2] + data[::-2] + [data[0]] ax1.plot(*map(unzigzag, [xdata, ydata]), "bo-") NB: This plot is annotated with the physical position (starting from 1) of x/y in their lists. If not, one option would be to use shapely : from shapely import MultiPoint from shapely.geometry.polygon import orient def unzigzag(x, y, ori=-1): # -1: clock-wise p = MultiPoint(list(zip(x, y))).convex_hull return list(orient(p, ori).boundary.coords) ax2.plot(*zip(*unzigzag(xdata, ydata)), "-bo") # the ori has no effect Animation to highlight the orientation :
2
4
78,536,677
2024-5-26
https://stackoverflow.com/questions/78536677/what-is-the-magic-behind-dataclass-decorator-type-hint-of-dataclasses-module
I'm trying to deepen my understanding of how Python's dataclasses module works, particularly the role of type hints in defining class attributes. When I define a dataclass like this: from dataclasses import dataclass @dataclass class Person: name: str age: int I notice that the type hints (str for name and int for age) are used by the dataclass decorator to generate various methods like __init__, __repr__, and __eq__ (I'm only interested in __init__). This automatic generation makes it very convenient, but I'm curious about the underlying mechanisms that enable this functionality. Specifically, I have a question: How does the dataclass decorator utilize type hints to generate these methods? I’ve copied and pasted the same code from the original dataclasses file into my working directory with the name custom_dataclass (for testing purposes), but my linter did not recognize the __init__ attributes. In picture1, you can see that when I try to instantiate the Person class, I correctly visualize all attributes mentioned in the Person class. This class uses the original @dataclass decorator. However, when I try the same with PersonCustom using the @custom_dataclass decorator, no attributes appear at all, as shown in picture2. If anyone has any answers or insights, I’d like to hear them. Thanks in advance for your help! I've also tried to see declarative_base method from sqlalchemy like from sqlalchemy import Column, Integer, String from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class User(Base): __tablename__ = "user" id = Column(Integer, primary_key=True) name = Column(String) age = Column(Integer) User() but there no attributes appear either
[sqlalchemy] no attributes appear either Well, there aren't any type hints involved. Rather we're assigning Column objects to those names, and each such object is storing a certain sql type (not a hint, not an annotation). the type hints ... are used by the dataclass decorator Yes. It accesses them via an API call. Details are in the documentation. It appears your custom decorator is not making use of dataclass_transform() and is not doing work similar to what the @dataclass decorator does. Which is unsurprising, as there's a lot of work involved, at least when addressing the general case, about 1.5 KLOC of source code.
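To illustrate the mechanism the answer refers to, here is a deliberately tiny sketch, not the real @dataclass implementation: the decorator reads the class's __annotations__ mapping and synthesizes an __init__ from the field names it finds there.

def tiny_dataclass(cls):
    # The type hints written in the class body end up in __annotations__.
    fields = list(cls.__dict__.get('__annotations__', {}))

    def __init__(self, *args, **kwargs):
        # Assign positional, then keyword, arguments to the declared fields.
        for name, value in zip(fields, args):
            setattr(self, name, value)
        for name, value in kwargs.items():
            setattr(self, name, value)

    cls.__init__ = __init__
    return cls

@tiny_dataclass
class Person:
    name: str
    age: int

p = Person("Ada", 36)
print(p.name, p.age)  # Ada 36

The generated __init__ only exists at runtime, which is why a linter will not show its parameters for a custom decorator unless that decorator is advertised with typing.dataclass_transform() (Python 3.11+, or typing_extensions), as noted above.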
3
1
78,532,738
2024-5-25
https://stackoverflow.com/questions/78532738/how-can-i-optimize-the-code-of-inversion-mask-in-pygame
And so, I wrote a test function to invert an image in a specific area and shape with the ability to change the location and size. The only problem is that the larger the size of the mask, the slower the mask moves or enlarges. Are there any solutions to speed up the code? import pygame fps = pygame.time.Clock() pygame.init() pygame.event.set_allowed([pygame.QUIT]) # declare all necessary vars image = pygame.image.load('...\\image.png') org_mask_inversion = pygame.image.load('...\\mask_inversion.png') ix, iy = 0, 0 size_x, size_y = 100, 100 current_ix, current_iy = ix, iy current_size_x, current_size_y = size_x, size_y screen = pygame.display.set_mode([960, 720]) mask_inversion = pygame.transform.scale(org_mask_inversion, (size_x, size_y)) image2 = image # function that inverts color values def inversion_of_color(r, g, b): return [255 - r, 255 - g, 255 - b] # image inversion function def inversion_of_surface(surface, sx, sy): # it creates a new surface of the same size as the one specified in the function inverted_surface = pygame.surface.Surface(mask_inversion.get_size(), pygame.SRCALPHA) # passes over the entire size of the mask for x in range(mask_inversion.get_width()): for y in range(mask_inversion.get_height()): # takes the color of the mask pixel m_color = mask_inversion.get_at((x, y)) # checks if the pixel is green color if m_color.g == 255: # checks if the mask extends beyond the screen area if 0 <= sx + x <= screen.get_width() - 1 and 0 <= sy + y <= screen.get_height() - 1: # takes the color of a surface pixel color = surface.get_at((x + sx, y + sy)) # inverts the pixel color and inserts the inverted pixel into the created surface inverted_color = inversion_of_color(color.r, color.g, color.b) inverted_surface.set_at((x, y), inverted_color) return inverted_surface running = True while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False fps.tick(60) keys = pygame.key.get_pressed() pygame.display.set_caption(f'Inverted Image {round(fps.get_fps())} x:{ix} y:{iy}') if keys[pygame.K_LEFT]: ix -= 6 elif keys[pygame.K_RIGHT]: ix += 6 if keys[pygame.K_UP]: iy -= 6 elif keys[pygame.K_DOWN]: iy += 6 if keys[pygame.K_a]: size_x -= 6 size_y -= 6 mask_inversion = pygame.transform.scale(org_mask_inversion, (size_x, size_y)) elif keys[pygame.K_d]: size_x += 6 size_y += 6 mask_inversion = pygame.transform.scale(org_mask_inversion, (size_x, size_y)) # checks if the size or location of the mask has changed if current_ix != ix or current_iy != iy or current_size_x != size_y or current_size_x != size_x: image2 = inversion_of_surface(image, ix, iy) current_ix, current_iy = ix, iy current_size_x, current_size_y = size_x, size_y screen.blit(image, (0, 0)) screen.blit(image2, (ix, iy)) pygame.display.update() pygame.quit() Here's the image of the mask itself enter image description here I tried different ways to cache the result, I tried to use numpy arrays, and tried to check the already changed pixels, but all to no avail.
You can achieve what you want with Pygame functions alone and without for loops: Use pygame.mask.from_surface and pygame.mask.Mask.to_surface to prepare the mask. The mask must consist of two colors, opaque white (255, 255, 255, 255) and transparent black (0, 0, 0, 0): mask = pygame.mask.from_surface(inversionMaskImage) inversionMask = mask.to_surface(setcolor=(255, 255, 255, 255), unsetcolor=(0, 0, 0, 0)) Get the effected area with pygame.Surface.subsurface subSurface = surface.subsurface(pygame.Rect((sx, sy), mask.get_size())) Create a copy of the mask and blend the inverted surface with the mask using the BLEND_SUB mode (see pygame.Surface.blit) finalImage = mask.copy() finalImage.blit(invertedArea, (0, 0), special_flags = pygame.BLEND_MULT) See also Blending and transparency and Clipping Minimal example import pygame pygame.init() screen = pygame.display.set_mode((1024, 683)) clock = pygame.time.Clock() image = pygame.image.load('image/parrot1.png').convert_alpha() inversionMaskImage = pygame.Surface((200, 200), pygame.SRCALPHA) pygame.draw.circle(inversionMaskImage, (255, 255, 255), inversionMaskImage.get_rect().center, inversionMaskImage.get_width()//2) mask = pygame.mask.from_surface(inversionMaskImage) inversionMask = mask.to_surface(setcolor=(255, 255, 255, 255), unsetcolor=(0, 0, 0, 0)) def invert_surface(surface, mask, sx, sy): areaRect = pygame.Rect((sx, sy), mask.get_size()) clipRect = areaRect.clip(surface.get_rect()) subSurface = surface.subsurface(clipRect) finalImage = mask.copy() finalImage.blit(subSurface, (clipRect.x - areaRect.x, clipRect.y - areaRect.y), special_flags = pygame.BLEND_SUB) return finalImage run = True while run: clock.tick(100) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False areaRect = pygame.Rect(pygame.mouse.get_pos(), (0, 0)).inflate(inversionMask.get_size()) invertedArea = invert_surface(image, inversionMask, areaRect.x, areaRect.y) screen.fill('black') screen.blit(image, (0, 0)) screen.blit(invertedArea, areaRect) pygame.display.flip() pygame.quit() exit()
3
2
78,535,203
2024-5-26
https://stackoverflow.com/questions/78535203/find-one-value-between-two-lists-of-dicts-then-find-another-same-value-based-on
This is my code: from collections import defaultdict ip_port_device = [ {'ip': '192.168.1.140', 'port_number': 4, 'device_name': 'device1'}, {'ip': '192.168.1.128', 'port_number': 8, 'device_name': 'device1'}, {'ip': '192.168.1.56', 'port_number': 14, 'device_name': 'device1'}, {'ip': '192.168.1.61', 'port_number': 4, 'device_name': 'device1'}, {'ip': '192.168.1.78', 'port_number': 8, 'device_name': 'device1'}, {'ip': '192.168.1.13', 'port_number': 16, 'device_name': 'device1'}, {'ip': '192.168.2.140', 'port_number': 4, 'device_name': 'device2'}, {'ip': '192.168.2.128', 'port_number': 8, 'device_name': 'device2'}, {'ip': '192.168.2.56', 'port_number': 14, 'device_name': 'device2'}, {'ip': '192.168.2.61', 'port_number': 4, 'device_name': 'device2'}, {'ip': '192.168.2.78', 'port_number': 8, 'device_name': 'device2'}, {'ip': '192.168.2.13', 'port_number': 16, 'device_name': 'device2'}, {'ip': '192.168.3.140', 'port_number': 4, 'device_name': 'device3'}, {'ip': '192.168.3.128', 'port_number': 8, 'device_name': 'device3'}, {'ip': '192.168.3.56', 'port_number': 14, 'device_name': 'device3'}, {'ip': '192.168.3.61', 'port_number': 4, 'device_name': 'device3'}, {'ip': '192.168.3.78', 'port_number': 8, 'device_name': 'device3'}, {'ip': '192.168.3.13', 'port_number': 16, 'device_name': 'device3'}, ] ip_per_node = [ {'node_name': 'server9.example.com', 'ip_address': '192.168.1.140'}, {'node_name': 'server19.example.com', 'ip_address': '192.168.1.128'}, {'node_name': 'server11.example.com', 'ip_address': '192.168.2.140'}, {'node_name': 'server21.example.com', 'ip_address': '192.168.2.128'}, {'node_name': 'server17.example.com', 'ip_address': '192.168.3.140'}, {'node_name': 'server6.example.com', 'ip_address': '192.168.3.128'}, ] ips_and_ports_in_switch = [] for compute in ip_per_node: for port in ip_port_device: if compute['ip_address'] == port['ip']: port = port['port_number'] for new_port in ip_port_device: if port == new_port['port_number']: ips_and_ports_in_switch.append({ 'port_number': new_port['port_number'], 'ip_address': new_port['ip'], 'node_name': compute['node_name'], 'device_name': new_port['device_name'] }) concatenated = defaultdict(list) for entry in ips_and_ports_in_switch: concatenated[(entry['device_name'], entry['port_number'], entry['node_name'])].append(entry['ip_address']) The logic is: if ip_per_node['ip_address'] matches ip_port_device['ip'], then in ip_port_device find all ips have the same port number. Then save like this (expected output): node server9.example.com, port 4, device device1, ips ['192.168.1.140', '192.168.1.61'] node server19.example.com, port 8, device device1, ips ['192.168.1.128', '192.168.1.78'] node server11.example.com, port 4, device device2, ips ['192.168.2.140', '192.168.2.61'] node server21.example.com, port 8, device device2, ips ['192.168.2.128', '192.168.2.78'] node server17.example.com, port 4, device device3, ips ['192.168.3.140', '192.168.3.61'] node server6.example.com, port 8, device device3, ips ['192.168.3.128', '192.168.3.78'] My current code doesn't work as I expect. It saves one port multiple times for all nodes. I tried to add least but needed data for the sample.
# loop both lists to match nodes with port_number and device_name for item in ip_port_device: for node in ip_per_node: if item["ip"] == node["ip_address"]: node.update( { "port_number": item["port_number"], "device_name": item["device_name"], "ips": [item["ip"]], } ) # loop both lists again to get ips that have no direct correspondence in ip_per_node for item in ip_port_device: for node in ip_per_node: if ( item["port_number"] == node["port_number"] and item["device_name"] == node["device_name"] and item["ip"] not in node["ips"] ): node["ips"].append(item["ip"]) for node in ip_per_node: print(f'node {node['node_name']}, port {node['port_number']}, device {node['device_name']}, ips {node['ips']}') node server9.example.com, port 4, device device1, ips ['192.168.1.140', '192.168.1.61'] node server19.example.com, port 8, device device1, ips ['192.168.1.128', '192.168.1.78'] node server11.example.com, port 4, device device2, ips ['192.168.2.140', '192.168.2.61'] node server21.example.com, port 8, device device2, ips ['192.168.2.128', '192.168.2.78'] node server17.example.com, port 4, device device3, ips ['192.168.3.140', '192.168.3.61'] node server6.example.com, port 8, device device3, ips ['192.168.3.128', '192.168.3.78']
2
1
78,524,575
2024-5-23
https://stackoverflow.com/questions/78524575/importerror-cannot-import-name-get-column-indices-from-sklearn-utils
I am getting an import Error when trying to import imblearn.over_sampling for RandomOverSampler. I believe the issue is not with my code but with the libraries clashing, I'm not sure though. import pandas as pd import matplotlib.pyplot as plt import numpy as np from sklearn.preprocessing import StandardScaler #actually scikit-learn from imblearn.over_sampling import RandomOverSampler code that's using StandardScaler and RandomOverSampler: def scale_dataset(dataframe, oversample=False): X = dataframe[dataframe.columns[:-1]].values Y = dataframe[dataframe.columns[-1]].values scaler = StandardScaler() X = scaler.fit_transform(X) if oversample: ros = RandomOverSampler() X, Y = ros.fit_resample(X,Y) data = np.hstack((X, np.reshape(Y, (-1, 1)))) return data, X, Y print(len(train[train["class"]==1])) print(len(train[train["class"]==0])) train, X_train, Y_train = scale_dataset(train, True) I tried fully importing sklearn, uninstalled and reinstalled scipi and sklearn (as scikit-learn), installing Tensorflow. I do have numpy, scipy, pandas and other dependent libraries installed.
This is a known issue (https://github.com/scikit-learn-contrib/imbalanced-learn/issues/1081#issuecomment-2127245933). You can either pip install git+https://github.com/scikit-learn-contrib/imbalanced-learn.git@master or downgrade scikit-learn to a version below 1.5.
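Before picking either route it can help to confirm which versions are actually installed (a quick check; the compatible pairings are discussed in the linked issue):

import sklearn, imblearn
print(sklearn.__version__, imblearn.__version__)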
3
0
78,530,673
2024-5-24
https://stackoverflow.com/questions/78530673/how-to-update-the-text-of-a-list-element-within-a-ctklistbox
I am writing mass spec data reduction software. I want to provide the user with a list of all samples in the current sequence, next to a check box to show whether the analysis data has been reduced (viewing the analysis reduces the data). The user should be able to click on a sample in the list to jump to it, or use buttons to go to the Next or Previous analysis. The sample label should appear like this when the analysis is un-reduced: ☐ CB-042 and when viewed/reduced: β˜‘ CB-042 This worked great when I was using tkinter: I could just destroy the entire list and repopulate it every time I wanted to update an entry, and there was basically no overhead/delay. I want my software to be more modern-looking, however, so I reached for CTkListbox instead. This looks way better, except that my previous approach of deleting and repopulating the entire list with every update resulted in the CTkListbox visually removing and repopulating the entire list every time. This takes about two seconds with a longer sequence, and is obviously undesirable behavior. My next idea was to delete the individual entry from the listbox, and re-add it with a check mark. This somehow resulted in program continuously deleting the first unchecked item in the list until the list was emptied of unchecked items and the program froze. This also generates a ton of errors about references to list items that no longer exist, so I guess deleting and re-entering the list item won't work either. Below is a not-so-minimal-but-still-reproducible example of where I'm currently at. The functions highlight_current_sample() and on_sample_select() are broken and need to be re-written, but I'm not sure how to do this correctly. How can I accomplish my goal of inserting a checked box for reduced samples in the list? I'd switch to themed tk treeview if that is easier. 
import tkinter as tk import customtkinter as ctk import matplotlib import matplotlib.pyplot as plt import matplotlib.backends.backend_tkagg as tkagg from tkinter import ttk from CTkListbox import * # highlights the current sample in the list def highlight_current_sample(sample_listbox, filtered_data): global analysis_index # replace the sample with a checked box if it is reduced if filtered_data[analysis_index].reduced and sample_listbox.get(analysis_index)[0] == '☐': sample_listbox.delete(analysis_index) # print('deleting sample', analysis_index) sample_listbox.insert(analysis_index, f'β˜‘ {filtered_data[analysis_index].analysis_label}') sample_listbox.activate(analysis_index) # moves to the selected sample when clicked in the list def on_sample_select(event, sample_listbox, filtered_data): global analysis_index widget = event.widget # get the listbox widget try: index = int(widget.curselection()) analysis_index = index filtered_data[analysis_index].reduced = True update_buttons(filtered_data) highlight_current_sample(sample_listbox, filtered_data) except IndexError: pass # ignore clicks outside of items def on_next(filtered_data, sample_listbox): global analysis_index filtered_data[analysis_index].reduced = True analysis_index = (analysis_index + 1) % len(filtered_data) update_buttons(filtered_data) highlight_current_sample(sample_listbox, filtered_data) def on_previous(filtered_data, sample_listbox): global analysis_index analysis_index = (analysis_index - 1) % len(filtered_data) update_buttons(filtered_data) highlight_current_sample(sample_listbox, filtered_data) def update_buttons(filtered_data): global analysis_index # disable previous button on first index if analysis_index == 0: prev_button.config(state=tk.DISABLED) else: prev_button.config(state=tk.NORMAL) # convert next to finish on last index if analysis_index == len(filtered_data) - 1: view_button.pack(side=tk.BOTTOM, fill=tk.X, expand=True) # Show the finish button next_button.pack_forget() # Hide the next button else: next_button.pack(side=tk.BOTTOM, fill=tk.X, expand=True) # Show the next button view_button.pack_forget() # Hide the finish button class Analysis: def __init__(self, analysis_label, reduced): self.analysis_label = analysis_label self.reduced = reduced # Create a list of Analysis objects filtered_data = [ Analysis("Analysis 1", False), Analysis("Analysis 2", False), Analysis("Analysis 3", False), Analysis("Analysis 4", False), Analysis("Analysis 5", False), Analysis("Analysis 6", False), Analysis("Analysis 7", False), Analysis("Analysis 8", False), Analysis("Analysis 9", False), Analysis("Analysis 10", False), Analysis("Analysis 11", False), Analysis("Analysis 12", False) ] # force matplotlib backend TkAgg (for MacOSX development) matplotlib.use('TkAgg') # set the appearance mode to light ctk.set_appearance_mode('light') # initiate GUI window = tk.Tk() # define and pack main frame main_frame = ctk.CTkFrame(window) main_frame.pack(fill=tk.BOTH, expand=True) # left frame (to hold stats and buttons) left_frame = ctk.CTkFrame(main_frame) left_frame.pack(side=tk.LEFT, fill=tk.BOTH, expand=True) # define and pack frame for buttons button_frame = ctk.CTkFrame(left_frame) button_frame.pack(side=tk.BOTTOM, fill=tk.X, pady=(0, 1), padx=(2, 0)) # define and pack center frame for raw data plots and sample list center_frame = ctk.CTkFrame(main_frame) center_frame.pack(side=tk.RIGHT, fill=tk.BOTH, expand=True) # initialize the data frame, figure and canvas data_frame = ctk.CTkFrame(center_frame) figure = 
plt.figure(figsize=(15,8)) canvas = tkagg.FigureCanvasTkAgg(figure, master=data_frame) # initialize the right frame, stats frame, and sample_listbox # set light_color and dark_color as very light gray and very dark gray respectively stats_frame = ctk.CTkFrame(left_frame, border_color='gray95') right_frame = ctk.CTkFrame(center_frame) # sample list frame sample_listbox = CTkListbox( right_frame, font=('TkDefaultFont', 11), width=220, label_anchor='w' ) # pack the sample listbox for analysis in filtered_data: if analysis.reduced: sample_listbox.insert(tk.END, f'β˜‘ {analysis.analysis_label}') else: sample_listbox.insert(tk.END, f'☐ {analysis.analysis_label}') sample_listbox.pack(fill=tk.BOTH, expand=True) # initialize and pack buttons global prev_button, exit_button, next_button, view_button s = ttk.Style() s.configure('TButton', font=('TkDefaultFont', 16), anchor='center', justify='center') s.configure("TButton.label", font=('TkDefaultFont', 16), justify='center', anchor="center") exit_button = ttk.Button(button_frame, text="Exit", command=lambda: window.quit(), style='TButton', ) prev_button = ttk.Button(button_frame, text="Previous", command=lambda: on_previous(filtered_data, sample_listbox), style='TButton', ) next_button = ttk.Button(button_frame, text="Next", command=lambda: on_next(filtered_data, sample_listbox), style='TButton', ) view_button = ttk.Button(button_frame, text="View\nAll\nAnalyses", command=window.quit, style='TButton', ) # Place the buttons in their respective rows exit_button.pack(side=tk.BOTTOM, fill=tk.X, expand=True, ipady=20, ipadx=10) prev_button.pack(side=tk.BOTTOM, fill=tk.X, expand=True, ipady=20, ipadx=10) view_button.pack(side=tk.BOTTOM, fill=tk.X, expand=True, ipady=20, ipadx=10) next_button.pack(side=tk.BOTTOM, fill=tk.X, expand=True, ipady=20, ipadx=10) # pack the stats frame on the left stats_frame.pack(side=tk.TOP, fill=tk.X, expand=True, padx=(3,0)) # pack frame for data data_frame.pack(side=tk.LEFT, fill=tk.BOTH, expand=True, padx=(0,2)) # draw the canvas and pack it canvas.draw() canvas.get_tk_widget().pack(side=tk.TOP, fill=tk.BOTH, expand=True) # sample list packing right_frame.pack(side=tk.LEFT, fill=tk.Y) # header label for the sample list header_label = ctk.CTkLabel(right_frame, text='Analyses', font=('TkDefaultFont', 20, 'bold')) header_label.pack(pady=(10,10)) # set current_plot_index global analysis_index analysis_index = 0 # bind the listbox to the sample select function sample_listbox.bind("<<ListboxSelect>>", lambda event: on_sample_select(event, sample_listbox, filtered_data) ) # update the panes update_buttons(filtered_data) highlight_current_sample(sample_listbox, filtered_data) # main window loop window.mainloop()
The solution was to only use insert, not delete. I also needed to unbind and rebind the listbox with every change. It also gave buggy behavior while using globals so I switched to passing the variables properly. Here's the updated code: # highlights the current analysis in the list def highlight_current_analysis(analysis_listbox, analysis_index, filtered_data): # check that the analysis is reduced and that the symbol is ☐ before changing to β˜‘ if filtered_data[analysis_index.get()].reduced and analysis_listbox.get(analysis_index.get())[0] == '☐': analysis_listbox.insert(analysis_index.get(), f'β˜‘ {filtered_data[analysis_index.get()].analysis_label}') analysis_listbox.activate(analysis_index.get()) # moves to the selected analysis when clicked in the list def on_analysis_select(analysis_index, analysis_listbox, filtered_data, fit_type, baseline_type, figure, canvas, stats_frame, prev_button, next_button, view_button): analysis_index.set(analysis_listbox.curselection()) analysis = filtered_data[analysis_index.get()] interactive_update(analysis, fit_type, baseline_type, figure, canvas, stats_frame) analysis_listbox.unbind("<<ListboxSelect>>") highlight_current_analysis(analysis_listbox, analysis_index, filtered_data) analysis_listbox.bind("<<ListboxSelect>>", lambda event: on_analysis_select(analysis_index, analysis_listbox, filtered_data, fit_type, baseline_type, figure, canvas, stats_frame, prev_button, next_button, view_button) )
2
1
78,534,210
2024-5-26
https://stackoverflow.com/questions/78534210/plotly-convert-epoch-timestamps-with-ms-to-readable-datetimes
The below code uses Python lists to create a Plotly graph. The timestamps are Epoch milliseconds. How do I format the x-axis to readable datetime? I tried fig.layout['xaxis_tickformat'] = '%HH-%MM-%SS' but it didn't work. import plotly.graph_objects as go time_series = [1716693661000, 1716693662000, 1716693663000, 1716693664000] prices = [20, 45, 32, 19] fig = go.Figure() fig.add_trace(go.Scatter(x=time_series, y=prices, yaxis='y')) fig.update_layout(xaxis=dict(rangeslider=dict(visible=True),type="linear")) fig.layout['xaxis_tickformat'] = '%Y-%m-%d' fig.show()
You can use fromtimestamp().strftime() from datetime: tdt = [datetime.fromtimestamp(ts / 1000).strftime('%Y-%m-%d %H:%M:%S') for ts in time_series] Code time_series = [1716693661000, 1716693662000, 1716693663000, 1716693664000] prices = [20, 45, 32, 19] tdt = [datetime.fromtimestamp(ts / 1000).strftime('%Y-%m-%d %H:%M:%S') for ts in time_series] fig = go.Figure() fig.add_trace(go.Scatter(x=tdt, y=prices, yaxis='y')) fig.update_layout( xaxis=dict( rangeslider=dict(visible=True), type="date", tickformat='%Y-%m-%d %H:%M:%S' ) ) fig.show()
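For the snippet above to run on its own it also needs the imports, which the question's code only partly shows:

from datetime import datetime
import plotly.graph_objects as go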
2
1
78,534,106
2024-5-26
https://stackoverflow.com/questions/78534106/how-to-decorate-instance-methods-and-avoid-sharing-closure-environment-between-i
I'm having trouble finding a solution to this problem. Whenever we decorate a method of a class, the method is not yet bound to any instance, so say we have: from functools import wraps def decorator(f): closure_variable = 0 @wraps(f) def wrapper(*args, **kwargs): nonlocal closure_variable closure_variable += 1 print(closure_variable) f(*args, **kwargs) return return wrapper class ClassA: @decorator def decorated_method(self): pass This leads to something funny, which is all instances of ClassA are bound to the same closure environment. inst1 = ClassA() inst2 = ClassA() inst3 = ClassA() inst1.decorated_method() inst2.decorated_method() inst3.decorated_method() The above lines will output: 1 2 3 Now to my issue at hand, I had created a decorator which caches a token and only requests a new one once it expires. This decorator was applied to a method of a class called TokenSupplier. I realized this behavior and I clearly don't want this to happen, can I solve this issue and keep the decorator design pattern here? I thought of storing a dictionary in the closure environment and using the instance hash to index the desired data but I believe I might be simply missing something more fundamental. My goal would be to have each instance having it's own closure environment but still being able to use a decorator pattern to decorate different future TokenSupplier implementations. Thank you in advance!
In order to avoid sharing the cache across all instances, which may not be required or desired, it is best to have a cache for each instance with expiry time, etc. In other words, we don't need to have a "single source cache" for all instances. In the following implementation, each and every instance of a class initializes its own cache dict() to store the token, its expiration time and other relevant info, that will give you the full control. from functools import wraps import time class TokenCacheDecorator: def __init__(self, get_token_func): self.get_token_func = get_token_func def __get__(self, inst, owner): if inst is None: return self @wraps(self.get_token_func) def wrapper(*args, **kwargs): if not hasattr(inst, '_token_cache') or inst._token_cache['expiration_time'] < time.time(): print(f"[{id(inst)}] Cache miss") token, expires_in = self.get_token_func(inst, *args, **kwargs) inst._token_cache = { 'token': token, 'expiration_time': time.time() + expires_in } print(f"[{id(inst)}] New token - {token} expiration time: {inst._token_cache['expiration_time']}") return inst._token_cache['token'] return wrapper class ClassA: def __init__(self, token, expires_in): self.token = token self.expires_in = expires_in self._token_cache = {'token': None, 'expiration_time': 0} @TokenCacheDecorator def get_token(self): return self.token, self.expires_in inst1 = ClassA("token1", 2) inst2 = ClassA("token2", 2) inst3 = ClassA("token3", 2) print(inst1.get_token()) print(inst2.get_token()) print(inst3.get_token()) time.sleep(3) print(inst1.get_token()) print(inst2.get_token()) print(inst3.get_token()) Prints [4439687776] Cache miss [4439687776] New token - token1 expiration time: 1716693215.503801 token1 [4440899024] Cache miss [4440899024] New token - token2 expiration time: 1716693215.503846 token2 [4440899360] Cache miss [4440899360] New token - token3 expiration time: 1716693215.503862 token3 [4439687776] Cache miss [4439687776] New token - token1 expiration time: 1716693218.5076532 token1 [4440899024] Cache miss [4440899024] New token - token2 expiration time: 1716693218.50767 token2 [4440899360] Cache miss [4440899360] New token - token3 expiration time: 1716693218.507679 token3
2
2
78,530,805
2024-5-24
https://stackoverflow.com/questions/78530805/function-to-swap-top-right-and-bottom-left-squares-in-an-n-x-n-matrix
Given an n x n matrix, swap elements situated in two squares : top right and bottom left. As it is more an academic task, we can restrict size to 10x10. Moreover, we can't have extra space and must swap it in place. F.e. you are given: 0 3 1 2 9 2 3 4 5 6 5 7 7 8 9 9 You need to return: 0 3 5 6 9 2 7 8 1 2 5 7 3 4 9 9 If we have matrix 5x5: 1 2 3 4 5 6 7 8 9 1 3 8 7 5 0 3 1 7 8 5 7 4 3 6 7 We still have to swap 2x2 squares, so the result will be: 1 2 3 3 1 6 7 8 7 4 3 8 7 5 0 4 5 7 8 5 9 1 3 6 7 I've figured out that we'll have two variants of this function depending on matrix size: if n is even or uneven. I've tried to solve this task in two loops but i understand that i just swap elems (it's a variant for an even size): def Swap4(m,r): k = r//2 if r % 2 != 0: for i in range(0, k): for j in range(k+1, r): t = m[i][j] m[i][j] = m[j][i] m[j][i] = t else: for i in range(0, k): for j in range(k, r): t = m[i][j] m[i][j] = m[j][i] m[j][i] = t here k is matrix size n // 2. However, it looks to me that we need a kind of shift in i loop for indices as we swap elems like this: [0][2] -> [2][0] [0][3] -> [2][1] [1][2] -> [3][0] [1][3] -> [3][1]
You don't need a loop, but with a (nested) loop you can avoid having an extra copy of part of the array temporarily, which can matter if your array is enormous and you have memory restrictions that matter. Note: in a comment, you noted that you couldn't use numpy, something that I missed about your question. I've added a solution with only lists at the end. Here: import numpy as np def swap_quad_copy(m: np.array, n: int) -> None: assert (m.shape[0] == m.shape[1]) and n <= m.shape[0] // 2 o = m.shape[0] - n x = m[0:n, o:].copy() m[0:n, o:] = m[o:, 0:n] m[o:, 0:n] = x def swap_quad_loop(m: np.array, n: int) -> None: assert (m.shape[0] == m.shape[1]) and n <= m.shape[0] // 2 o = m.shape[0] - n for i in range(n): for j in range(n): m[i, o+j], m[o+i, j] = m[o+i, j], m[i, o+j] def main(): m = np.array([[11, 12, 13, 14], [21, 22, 23, 24], [31, 32, 33, 34], [41, 42, 43, 44]]) print(m) swap_quad_copy(m, 2) print(m) swap_quad_loop(m, 2) print(m) main() The swap_quad_copy shows how you can copy one quad (the upper right one) to a temporary variable, then overwrite that area with the other quad (the lower left one), and then overwrite the other with the temporary variable. The swap_quad_loop shows how to do the same thing in place, with a nested loop. This takes a bit more time, but uses less space. Output: [[11 12 13 14] [21 22 23 24] [31 32 33 34] [41 42 43 44]] [[11 12 31 32] [21 22 41 42] [13 14 33 34] [23 24 43 44]] [[11 12 13 14] [21 22 23 24] [31 32 33 34] [41 42 43 44]] Note that this doesn't work: m[0:2, 2:4], m[2:4, 0:2] = m[2:4, 0:2], m[0:2, 2:4] The reason you can swap single elements but not whole slices is that a single element doesn't get you a reference, and the tuple assignment just becomes a normal variable swap. But the slicing operation returns views of the original array, not copies, and thus the copy ends up overwriting one of the quads, and you end up with duplicate data, something like: [[11 12 31 32] [21 22 41 42] [31 32 33 34] [41 42 43 44]] If you limit the problem and solution to only using lists of lists, the swap_quad_loop solution still works with minor changes for the type and looks like this: def swap_quad_loop2(m: list[list[int]], n: int) -> None: assert (all(len(m) == len(xs) for xs in m)) and n <= len(m) // 2 o = len(m) - n for i in range(n): for j in range(n): m[i][o+j], m[o+i][j] = m[o+i][j], m[i][o+j] def main(): xss = [[11, 12, 13, 14], [21, 22, 23, 24], [31, 32, 33, 34], [41, 42, 43, 44]] swap_quad_loop2(xss, 2) print('\n'.join(map(str, xss))) main() Output: [11, 12, 31, 32] [21, 22, 41, 42] [13, 14, 33, 34] [23, 24, 43, 44]
3
3
78,530,503
2024-5-24
https://stackoverflow.com/questions/78530503/solving-coupled-2nd-order-ode-numerical-in-python
I would like to solve the following DGL system numerically in python: The procedure should be the same as always. I have 2 coupled differential equations of the 2nd order and I use the substitution g' = v and f' = u to create four coupled differential equations of the 1st order. Here ist my code: from scipy.integrate import solve_ivp import numpy as np import matplotlib.pyplot as plt # Constants e = 1 mu = 1 lambd = 1 # Initial second-order system # g'' + 2/r g'-2/r^2 g - 3/r eg^2- e^2 g^3 - e/(2r) f^2 - e^2/4 f^2g = 0 # f'' + 2/r f' - 2/r^2 f - 2e/r fg - e^2/2 fg^2 + mu^2 f - lambda f^3 = 0 # Substitution # g' = v # v' = g'' # f' = u # u' = f'' # Equivalent set of first-order systems def dSdr(r, S): g, v, f, u = S dgdr = v dvdr = -2 / r * v + 2 / r ** 2 * g + 3 / r * e * g ** 2 + \ e ** 2 * g ** 3 + e / (2 * r) * f**2 + e**2 / 4 * f ** 2 * g dfdr = u dudr = -2 / r * u + 2 / r ** 2 * f + 2 * e / r * f * g + e ** 2 / 2 * f * g**2 - mu ** 2 * f + lambd * f ** 3 return [dgdr, dvdr, dfdr, dudr] # Initial conditions g0 = 0.0001 v0 = 0.0001 f0 = 1 u0 = 0.0001 S0 = [g0, v0, f0, u0] r_start = 0.1 r_end = 100 r = np.linspace(r_start, r_end, 1000) # Solve the differential equations sol = solve_ivp(dSdr, t_span=(min(r), max(r)), y0=S0, t_eval=r, method='RK45') # Check if integration was successful if sol.success: print("Integration successful") else: print("Integration failed:", sol.message) # Debugging information print("Number of function evaluations:", sol.nfev) print("Number of accepted steps:", sol.t_events) print("Number of steps that led to errors:", sol.t) # Plot results plt.figure(figsize=(10, 6)) plt.plot(sol.t, sol.y[0], label='g(r)') plt.plot(sol.t, sol.y[2], label='f(r)') plt.xlabel('r') plt.ylabel('Function values') plt.legend() plt.title('Solutions of the differential equations') plt.show() My boundary con. should be f(0) = g(0) = 0 and f(infinity) = mu/sqrt(lambd) g(infinity) = 0 so that the system makes physical sense. But how can I incorporate this condition or how do I know the initial conditions for v,u? The system should look like this (From the original paper): but it doesn't. Does anyone know what to do? Source: enter link description here
Although you could try a "shooting" method using initial-value solvers, I find it best to go for a boundary-value problem directly. I do so by a method akin to the finite-volume method in computational fluid mechanics (CFD). Division by r creates singularities, so start by multiplying your equations by r2. Collecting the derivative terms you will then get equations which look like spherically-symmetric diffusion equations: Here, Sf and Sg are the great long source terms (which you will probably have to check in my code - there's plenty of scope for minor errors). Discretise each (equivalent to integrating over a cell [ri-1/2,ri+1/2] by the finite-volume method). e.g. for f: This, and the corresponding equation for g can be written in the form There are small differences at the boundaries and it is the non-zero boundary condition on f that produces a non-trivial solution. The discretised equations can then be solved iteratively either by the tri-diagonal matrix algorithm (TDMA) or by iterative schemes like Gauss-Seidel or Jacobi. Although experience with CFD suggests that the TDMA would probably be best, for simplicity I’ve opted for a (slightly under-relaxed) Jacobi method here. Note that, in common with CFD practice, I have, in my edited code, split the source terms as, e.g. sf + spf.f, with spf being negative so that it can be combined with the diagonal coefficient ap on the LHS. EDIT: code amended so that cell-centre values are 1...N with boundary indices 0 and N+1, source terms split into explicit and implicit parts, code reads more like Python than Fortran! import numpy as np import matplotlib.pyplot as plt N = 100 # number of cells (1-N); added boundaries 0 and N+1 rmin, rmax = 0.0, 20.0 # minimum and maximum r (latter an approximation to "infinity" dr = rmax / N # cell width gL, gR, fL, fR = 0, 0, 0, 1 # boundary conditions e, nu, lamda = 1, 1, 1 # Cell / node arrangement: # # 0 N+1 # | 1 2 N-1 N | # |-----|-----| . . . . . . 
|-----|-----| # r = np.linspace( rmin - 0.5 * dr, rmax + 0.5 * dr, N + 2 ) # cell centres (except for boundaries) r[0] = rmin; r[N+1] = rmax # boundary nodes on faces of cells # Connection coefficients awL = 2 * rmin ** 2 / dr ** 2 aeR = 2 * rmax ** 2 / dr ** 2 aw = np.zeros( N + 2 ) ae = np.zeros( N + 2 ) for i in range( 1, N + 1 ): if i == 1: aw[i] = 0 else: aw[i] = ( 0.5 * ( r[i-1] + r[i] ) ) ** 2 / dr ** 2 if i == N: ae[i] = 0 else: ae[i] = ( 0.5 * ( r[i+1] + r[i] ) ) ** 2 / dr ** 2 ap = aw + ae # Initial values g = np.zeros( N + 2 ) f = np.zeros( N + 2 ) f = f + 1 g = g + 1 # Boundary conditions f[0] = fL; f[N+1] = fR; g[0] = gL; g[N+1] = gR alpha = 0.9 # Under-relaxation factor niter = 0 for _ in range( 10000 ): niter += 1 # Source term (e.g., for f this would be decomposed as sf + spf.f, where spf is negative) spf = -2 -0.5 * e**2 * r**2 * g**2 - lamda * r**2 * f**2 sf = -2 * e * r * f * g + nu**2 * r**2 * f spg = -2 - e**2 * r**2 * g**2 - 0.25 * e**2 * r**2 * f**2 sg = -e * r * ( 3 * g**2 + 0.5 * f**2 ) # Dirichlet boundary conditions applied via source term sf[1] += awL * fL; spf[1] -= awL sg[1] += awL * gL; spg[1] -= awL sf[N] += aeR * fR; spf[N] -= aeR sg[N] += aeR * gR; spg[N] -= aeR # Update g and f (under-relaxed Jacobi method) ftarget = f.copy() gtarget = g.copy() for i in range( 2, N ): # cells 2 - (N-1) ftarget[i] = ( sf[i] + aw[i] * f[i-1] + ae[i] * f[i+1] ) / ( ap[i] - spf[i] ) gtarget[i] = ( sg[i] + aw[i] * g[i-1] + ae[i] * g[i+1] ) / ( ap[i] - spg[i] ) i = 1 # leftmost cell ftarget[i] = ( sf[i] + ae[i] * f[i+1] ) / ( ap[i] - spf[i] ) gtarget[i] = ( sg[i] + ae[i] * g[i+1] ) / ( ap[i] - spg[i] ) i = N # rightmost cell ftarget[i] = ( sf[i] + aw[i] * f[i-1] ) / ( ap[i] - spf[i] ) gtarget[i] = ( sg[i] + aw[i] * g[i-1] ) / ( ap[i] - spg[i] ) # Under-relax the update f = alpha * ftarget + ( 1 - alpha ) * f g = alpha * gtarget + ( 1 - alpha ) * g change = ( np.linalg.norm( f - ftarget ) + np.linalg.norm( g - gtarget ) ) / N # print( niter, change ) if change < 1.0e-10: break print( niter, " iterations " ) plt.plot( r, f, label='f' ) plt.plot( r, g, label='g' ) plt.grid() plt.legend() plt.show()
2
7
78,531,808
2024-5-25
https://stackoverflow.com/questions/78531808/sqlalchemy-and-postgresql-unexpected-timestamp-with-onupdate-func-now
In the following code after 5 seconds sleep I expect the second part of date_updated to be changed, but only the millisecond part is changed. If I use database_url = 'sqlite:///:memory:' it works as expected. Why? class Base(MappedAsDataclass, DeclarativeBase): pass class Test(Base): __tablename__ = 'test' test_id: Mapped[int] = mapped_column(primary_key=True, init=False) name: Mapped[str] date_created: Mapped[datetime] = mapped_column( TIMESTAMP(timezone=True), insert_default=func.now(), init=False ) date_updated: Mapped[datetime] = mapped_column( TIMESTAMP(timezone=True), nullable=True, insert_default=None, onupdate=func.now(), init=False ) database_url: URL = URL.create( drivername='postgresql+psycopg', username='my_username', password='my_password', host='localhost', port=5432, database='my_db' ) engine = create_engine(database_url, echo=True) Base.metadata.drop_all(engine) Base.metadata.create_all(engine) with Session(engine) as session: test = Test(name='foo') session.add(test) session.commit() print(test) time.sleep(5) test.name = 'bar' session.commit() print(test.date_created.time()) # prints: 08:07:45.413737 print(test.date_updated.time()) # prints: 08:07:45.426483
PostgreSQL's now() function returns the time of the start of the current transaction. In the code in the question, the second transaction is triggered by the print statement, since a new transaction must be opened to retrieve the values expired by the previous commit. Thus the difference in the observed timestamps is only a few milliseconds. To obtain the desired behaviour, any of these approaches should work: Don't expire on commit (but carefully consider the consequences in production code): with orm.Session(engine, expire_on_commit=False) as session: remove the print() call Call session.commit() again after the call to time.sleep() to start a new transaction SQLite doesn't have a now() function; SQLAlchemy converts the function call to SELECT CURRENT_TIMESTAMP AS now SQLite does not seem to restrict CURRENT_TIMESTAMP's value to the start of the current transaction: sqlite> begin; sqlite> select current_timestamp as 'now'; 2024-05-25 13:21:22 sqlite> select current_timestamp as 'now'; 2024-05-25 13:21:24 sqlite> select current_timestamp as 'now'; 2024-05-25 13:21:26 sqlite> rollback;
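A sketch of the third option, reusing the names from the question and showing only the tail of the script: the extra commit closes the transaction that the print opened, so the UPDATE runs in a fresh transaction and now() is evaluated about five seconds later.

with Session(engine) as session:
    test = Test(name='foo')
    session.add(test)
    session.commit()
    print(test)           # opens a new transaction to reload the expired row
    time.sleep(5)
    session.commit()      # close that transaction before making the update
    test.name = 'bar'
    session.commit()      # the UPDATE now runs in a transaction started here
    print(test.date_created.time())
    print(test.date_updated.time())  # roughly five seconds after date_created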
3
4
78,526,105
2024-5-24
https://stackoverflow.com/questions/78526105/is-there-a-compact-f-string-expression-for-printing-non-str-object-with-space-fo
f-string allows a very compact expression for printing str objects with spacing like so: a = "Hello" print(f'{a=:>20}') a= Hello Is there a way to do the same for other objects like so: from pathlib import Path b=Path.cwd() print(f'{b=:>20}') Traceback (most recent call last): File "/usr/lib/python3.10/idlelib/run.py", line 578, in runcode exec(code, self.locals) File "<pyshell#11>", line 1, in <module> TypeError: unsupported format string passed to PosixPath.__format__ An alternative is: print(f'b={str(b):>20}') b= /home/user But this method loses the object info that is shown when I do: print(f'{b=}') b=PosixPath('/home/user') The desired outcome is to print b= PosixPath('/home/user') Multiple printed statements should show something like: self.project= PosixPath('/home/user/project') self.project_a= PosixPath('/home/user/project/project') self.longer= PosixPath('/home/user/project/project/icons/project/pic.png') self.PIPFILELOCK= PosixPath('/home/user/project/Pipfile.lock') self.VIRTUALENV= PosixPath('/home/user/.local/share/virtualenvs/project-mKDFEK')
You can use !s and !r: >>> b = Path("/var") >>> f"{b=!s:>20}" 'b= /var' >>> f"{b=!r:>20}" "b= PosixPath('/var')" These are explicit conversion flags. To tabulate a namespace of keys and values: >>> d {'project': PosixPath('/home/user/project'), 'project_a': PosixPath('/home/user/project/project'), 'longer': PosixPath('/home/user/project/project/icons/project/pic.png'), 'PIPFILELOCK': PosixPath('/home/user/project/Pipfile.lock'), 'VIRTUALENV': PosixPath('/home/user/.local/share/virtualenvs/project-mKDFEK')} >>> for k, v in d.items(): ... print(f"{k+'=':<14}{v!r}") ... project= PosixPath('/home/user/project') project_a= PosixPath('/home/user/project/project') longer= PosixPath('/home/user/project/project/icons/project/pic.png') PIPFILELOCK= PosixPath('/home/user/project/Pipfile.lock') VIRTUALENV= PosixPath('/home/user/.local/share/virtualenvs/project-mKDFEK') In your example on printing the attributes of a class, just do: for k, v in self.__dict__.items(): print(f"self.{k + '=':<14}{v!r}") Class attributes are already stored in a dictionary that can be accessed from self.__dict__ so you don't have to create d. Or using 3rd-party tabulate: >>> from tabulate import tabulate # pip install tabulate >>> d = {k+'=': repr(v) for k,v in d.items()} >>> print(tabulate(d.items(), tablefmt="plain")) project= PosixPath('/home/user/project') project_a= PosixPath('/home/user/project/project') longer= PosixPath('/home/user/project/project/icons/project/pic.png') PIPFILELOCK= PosixPath('/home/user/project/Pipfile.lock') VIRTUALENV= PosixPath('/home/user/.local/share/virtualenvs/project-mKDFEK')
3
3
78,531,096
2024-5-25
https://stackoverflow.com/questions/78531096/how-to-get-the-maximum-number-of-consecutive-columns-with-values-above-zero-for
In a dataframe with 12 columns and 4 million rows, I need to add a column that gets the maximum number of consecutive columns with values above zero for each row. Here's a sample df = pd.DataFrame(np.array([[284.77, 234.37, 243.8, 84.36, 0., 0., 0., 55.04, 228.2, 181.97, 0., 0.], [13.78, 0., 38.58, 33.16, 0., 38.04, 74.02, 45.74, 27.2, 9.19, 0., 0.], [88.66, 255.72, 323.19, 7.24, 0., 73.38, 45.73, 0., 0., 77.39, 26.57, 279.34], [0., 0., 34.42, 9.16, 0., 43.4, 42.17, 123.69, 60.5, 25.47, 72.32, 7.29], [320.6, 1445.56, 856.23, 371.21, 0., 244.22, 134.58, 631.59, 561.82, 1172.44, 895.68, 186.28], [0., 0., 32.29, 1000.91, 0., 680., 585.46, 466.6, 0., 493.48, 157.1, 125.31]]), columns=[1,2,3,4,5,6,7,8,9,10,11,12]) And here's an example of my goal: df['MAX_CONSECUTIVE_COL'] = pd.Series([4,5,4,7,7,3]) Due to the size of dataframe, performance is a must have for the solution. I've tried to mask the data with boolean values and do a cumulative sum to identify each group of consecutive columns with values == 0 or != 0 ((df\>0) != (df\>0).shift(axis=1)).cumsum(axis=1) Then, I've got the results of one row: ((df>0) != (df>0).shift(axis=1)).cumsum(axis=1).iloc[0] Applied a value_counts and transformed the result in a dataframe: pd.DataFrame(((df>0) != (df>0).shift(axis=1)).cumsum(axis=1).iloc[0].value_counts()) applied a sort_values: pd.DataFrame(((df>0) != (df>0).shift(axis=1)).cumsum(axis=1).iloc[0].value_counts()).sort_values('count', ascending=False) and, finally, got the first value (the max number of consecutive columns with values !=0 or == 0): pd.DataFrame(((df>0) != (df>0).shift(axis=1)).cumsum(axis=1).iloc[0].value_counts()).sort_values('count', ascending=False).iloc[0,0] Now, I've got a problem: I don't know how to filter only the consecutive columns with values != 0. But let's consider that this method worked and we have now the number of consecutive columns with values !=0 for the first row. The only solution I was capable to develop to get the results for the other rows is iterating each one. Something like this: df['MAX_CONSECUTIVE_COL'] = 0 for n in range(0,df.shape[0]-1): df.loc[df.index[n], 'MAX_CONSECUTIVE_COL'] = pd.DataFrame(((df>0) != df>0).shift(axis=1)).cumsum(axis=1).iloc[n].value_counts()).sort_values('count',ascending=False).iloc[0,0] But remember we have 4 million rows, so this iteration would take a looooong time to be completed, and that's the second problem I have.
If performance is concern I'd consider to use numba: from numba import njit, prange @njit(parallel=True) def get_max(matrix, out): n, m = matrix.shape for row in prange(n): mx, cnt = 0, 0 for col in range(m - 1): # -1 because last column is OUT if matrix[row, col] > 0: cnt += 1 mx = max(mx, cnt) else: cnt = 0 out[row] = mx df["OUT"] = 0 get_max(df.values, df["OUT"].values) print(df) Prints: 1 2 3 4 5 6 7 8 9 10 11 12 OUT 0 284.77 234.37 243.80 84.36 0.0 0.00 0.00 55.04 228.20 181.97 0.00 0.00 4 1 13.78 0.00 38.58 33.16 0.0 38.04 74.02 45.74 27.20 9.19 0.00 0.00 5 2 88.66 255.72 323.19 7.24 0.0 73.38 45.73 0.00 0.00 77.39 26.57 279.34 4 3 0.00 0.00 34.42 9.16 0.0 43.40 42.17 123.69 60.50 25.47 72.32 7.29 7 4 320.60 1445.56 856.23 371.21 0.0 244.22 134.58 631.59 561.82 1172.44 895.68 186.28 7 5 0.00 0.00 32.29 1000.91 0.0 680.00 585.46 466.60 0.00 493.48 157.10 125.31 3 Quick benchmark: from time import monotonic # >4 million rows df = pd.concat([df] * 800_000, ignore_index=True) start_time = monotonic() df["OUT"] = 0 get_max(df.values, df["OUT"].values) print(monotonic() - start_time) Prints on my computer (AMD 5700x): 0.21921994583681226
3
3
78,530,305
2024-5-24
https://stackoverflow.com/questions/78530305/python-how-can-i-call-the-original-of-an-overloaded-method
Let's say I have this: class MyPackage ( dict ) : def __init__ ( self ) : super().__init__() def __setitem__ ( self, key, value ) : raise NotImplementedError( "use set()" ) def __getitem__ ( self, key ) : raise NotImplementedError( "use get()" ) def set ( self, key, value ) : # some magic self[key] = value def get ( self, key ) : # some magic if not key in self.keys() : return "no!" return self[key] (Here, # some magic is additional code that justifies MyPackage as opposed to 'just a dictionary.') The whole point is, I want to provided a dictionary-like object that forces the use of get() and set() methods and disallows all access via [], i.e. it is not permitted to use a['x']="wow" or print( a['x'] ) However, the minute I call get() or set(), the NotImplementedError is raised. Contrast this with, say, Lua, where you can bypass "overloading" by using "raw" getters and setters. Is there any way I can do this in Python without making MyPackage contain a dictionary (as opposed to being a dictionary)?
You already have the solution in your code: use super(). This allows you to call methods on the parent class, in this case, dict: def set ( self, key, value ) : # some magic super().__setitem__(key, value) def get ( self, key ) : # some magic if not key in self.keys() : return "no!" return super().__getitem__(key)
2
2
78,524,556
2024-5-23
https://stackoverflow.com/questions/78524556/typeerror-cannot-convert-numpy-ndarray-to-numpy-ndarray
I'm not sure why but after getting a new install of windows and a new pycharm install I am having issues with running some previously functional code. I am now getting the above error with the code below. Is it a setup issue or has something changed that now makes this code not function? Error happens on the last line. The error doesn't make sense to me as there should be no conversion required for ndarray to ndarray. import numpy as np import pyodbc import pandas as pd import sqlalchemy as SQL import torch import datetime # Setup your SQL connection server = [hidden for security] database = [hidden for security] username = [hidden for security] password = [hidden for security] # This is using the pyodbc connection cnxn = pyodbc.connect( 'DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password) cursor = cnxn.cursor() # This is using the SQLAlchemy connection engine_str = SQL.URL.create( drivername="mssql+pyodbc", username=username, password=password, host=server, port=1433, database=database, query={ "driver": "ODBC Driver 17 for SQL Server", "TrustServerCertificate": "no", "Connection Timeout": "30", "Encrypt": "yes", }, ) engine = SQL.create_engine(engine_str) storeemployee = [] regionalemployee = [] regionid = [] storeid = [] # get table from dev with engine.connect() as connection: result = connection.execute(SQL.text("SELECT StoreId, R_Num, RegionalMerchandiserEmployeeId, StoreMerchandiserEmployeeId from Staging.StoreMerchandiserInput")) for row in result: # set your variables = to the results storeemployee.append(row.StoreMerchandiserEmployeeId) regionalemployee.append(row.RegionalMerchandiserEmployeeId) regionid.append(row.R_Num) storeid.append(row.StoreId) storeemployee = np.array(storeemployee) regionalemployee = np.array(regionalemployee) regionid = np.array(regionid) storeid = np.array(storeid) # StoreMerchandiserEmail data = {'StoreMerchandiserEmployeeId': storeemployee, 'RegionalMerchandiserEmployeeId': regionalemployee, "R_Num": regionid, "StoreId":storeid} FinalData = pd.DataFrame(data, columns=['StoreMerchandiserEmployeeId', 'RegionalMerchandiserEmployeeId', 'R_Num', 'StoreId']) Edit - Full Error Messaging: Traceback (most recent call last): File "C:\Users\Carter.Lowe\Documents\Python Files\Data Import 2.py", line 56, in <module> FinalData = pd.DataFrame(data, columns=['StoreMerchandiserEmployeeId', 'RegionalMerchandiserEmployeeId', 'R_Num', 'StoreId']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Carter.Lowe\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\core\frame.py", line 778, in __init__ mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Carter.Lowe\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\core\internals\construction.py", line 443, in dict_to_mgr arrays = Series(data, index=columns, dtype=object) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Carter.Lowe\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\core\series.py", line 490, in __init__ index = ensure_index(index) ^^^^^^^^^^^^^^^^^^^ File "C:\Users\Carter.Lowe\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\core\indexes\base.py", line 7647, in ensure_index return Index(index_like, copy=copy, tupleize_cols=False) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"C:\Users\Carter.Lowe\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\core\indexes\base.py", line 565, in __new__ arr = sanitize_array(data, None, dtype=dtype, copy=copy) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Carter.Lowe\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\core\construction.py", line 654, in sanitize_array subarr = maybe_convert_platform(data) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Carter.Lowe\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\core\dtypes\cast.py", line 139, in maybe_convert_platform arr = lib.maybe_convert_objects(arr) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "lib.pyx", line 2538, in pandas._libs.lib.maybe_convert_objects TypeError: Cannot convert numpy.ndarray to numpy.ndarray
I just had the same error today and solved it by updating numpy to 2.0.0rc2.
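For example, assuming pip is the package manager in use, pinning the pre-release explicitly should be enough: pip install numpy==2.0.0rc2 (or pip install --pre --upgrade numpy to pick up whatever pre-release is current).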
2
4
78,529,922
2024-5-24
https://stackoverflow.com/questions/78529922/dataframe-print-entire-row-s-where-keys-in-the-same-row-hold-equal-values
I would like to recovery the rows in a dataframe where, in the same row, differing keys hold equal values. I can display where, for instance, the rows where col2 == col3. I would like to get this code to track across col1 matching across col2, col3 and col4. Then col2 to match across col 3 and col4. Then finally col3 across col4. I have read through this post and I am confused if iteration is the solution to my problem. If so, how can this be done. I can display, for instance, the rows where col2 == col3. # -*- coding: utf-8 -*- import pandas as pd ## writing a dataframe rows = {'col1':['5412','5148','5800','2122','5645','1060','4801','1039'], 'col2':['542','512','541','412','565','562','645','152'], 'col3':['542','3120','3410','2112','5650','5620','4801','152'], 'col4':['5800','2122','5645','2112','412','562','562','645'] } df = pd.DataFrame(rows) print(f'Unsorted dataframe \n\n{df}') ## print the rows where col2 == col3 dft = df[(df['col2'] == df['col3'])] print('\n\nupdate - list row of matching row elements') print(dft) ## print all except the rows where col2 == col3 dft = df.drop(df[(df['col2'] == df['col3'])].index) print('\n\nupdate - Dropping rows of matching row elements') print(dft) With this I am getting back col1 col2 col3 col4 0 5412 542 542 5800 7 1039 152 152 645 I would like to get back col1 col2 col3 col4 0 5412 542 542 5800 3 2122 412 2112 2112 4 5645 565 5650 412 5 1060 562 5620 562 6 4801 645 4801 562 7 1039 152 152 645
Use nunique with axis=1 and compare it to the number of columns: import pandas as pd rows = { "col1": ["5412", "5148", "5800", "2122", "5645", "1060", "4801", "1039"], "col2": ["542", "512", "541", "412", "565", "562", "645", "152"], "col3": ["542", "3120", "3410", "2112", "5650", "5620", "4801", "152"], "col4": ["5800", "2122", "5645", "2112", "412", "562", "562", "645"], } df = pd.DataFrame(rows) df = df[df.nunique(axis=1) < len(df.columns)] print(df) Output: col1 col2 col3 col4 0 5412 542 542 5800 3 2122 412 2112 2112 5 1060 562 5620 562 6 4801 645 4801 562 7 1039 152 152 645
2
4
78,525,821
2024-5-23
https://stackoverflow.com/questions/78525821/handling-c2016-error-on-windows-using-visual-studio-code
I have to use someone else's C header files, which include empty structs. I have no control over these headers or I would change them as empty structs are not conventional C. The structs are throwing C2016 errors, as expected with the standard compiler in Visual Studio Code (on Windows). The original author of the headers is using some other compiler, which allows empty structs. Here is an example of the error I'm receiving: message_definitions.h(45): error C2016: C requires that a struct or union have at least one member Here is an example of the structs: typedef struct { } Controller_Do_Graceful_Shutdown_t; According to what I've read you are permitted empty structs using other compilers, such as gcc. I have installed gcc and have verified it exists: gcc -v Using built-in specs. COLLECT_GCC=C:\msys64\ucrt64\bin\gcc.exe COLLECT_LTO_WRAPPER=C:/msys64/ucrt64/bin/../lib/gcc/x86_64-w64-mingw32/13.2.0/lto-wrapper.exe Target: x86_64-w64-mingw32 Configured with: ../gcc-13.2.0/configure --prefix=/ucrt64 --with-local-prefix=/ucrt64/local --build=x86_64-w64-mingw32 --host=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --with-native-system-header-dir=/ucrt64/include --libexecdir=/ucrt64/lib --enable-bootstrap --enable-checking=release --with-arch=nocona --with-tune=generic --enable-languages=c,lto,c++,fortran,ada,objc,obj-c++,jit --enable-shared --enable-static --enable-libatomic --enable-threads=posix --enable-graphite --enable-fully-dynamic-string --enable-libstdcxx-filesystem-ts --enable-libstdcxx-time --disable-libstdcxx-pch --enable-lto --enable-libgomp --disable-libssp --disable-multilib --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-libiconv --with-system-zlib --with-gmp=/ucrt64 --with-mpfr=/ucrt64 --with-mpc=/ucrt64 --with-isl=/ucrt64 --with-pkgversion='Rev3, Built by MSYS2 project' --with-bugurl=https://github.com/msys2/MINGW-packages/issues --with-gnu-as --with-gnu-ld --disable-libstdcxx-debug --with-boot-ldflags=-static-libstdc++ --with-stage1-ldflags=-static-libstdc++ Thread model: posix Supported LTO compression algorithms: zlib zstd gcc version 13.2.0 (Rev3, Built by MSYS2 project) The hard part I'm using cffi in Python to "import" the C headers, so the C compiler being used is whatever the ffibuilder decides to use. Out of the box it uses the Microsoft C compiler, which throws the C2016 errors. :-( Here is the cffi code: from cffi import FFI ffibuilder = FFI() ffibuilder.set_source("_message_definitions", # name of the output C extension """ #include "message_definitions.h" """, sources=['message_definitions.c'], libraries=[]) if __name__ == "__main__": ffibuilder.compile(verbose=True) Is there a way to tell cffi to use gcc instead, or suppress the C2016 errors being thrown?
Visual Studio will refuse to compile that, as it doesn't allow zero-size structs in C; you need to use gcc instead to build that library. If you must use MSVC, then you need to modify those structs to contain a simple char, which is what gcc does, and hope that no one is depending on their size in the code. typedef struct { char unused; } some_struct; To get cffi to use MinGW when compiling, you need to create a file named setup.cfg in the same folder as your cffi Python file with the following content. [build] compiler=mingw32 There is no documentation for this, but cffi uses distutils; if this answer fails in the future, then you can check this answer's edit history for a complex way to hack distutils.
2
3
78,529,845
2024-5-24
https://stackoverflow.com/questions/78529845/stop-process-from-another-process
Hi guys, I have two different processes running at the same time. I'm trying to stop the other process, which I have named loop_b, but I have only managed to stop the current process and the other doesn't stop. from multiprocessing import Process import time import sys counter=0 def loop_a(): global counter while True: print("a") time.sleep(1) counter+=1 if counter==10: print("I want to stop the loop_b") sys.exit() #Process(target=loop_b).close() -> Also I tried this but it did not work def loop_b(): while True: print("b") time.sleep(1) if __name__ == '__main__': Process(target=loop_a).start() Process(target=loop_b).start()
One possible solution could be using multiprocessing.Event synchronization primitive to signal other process that it should exit, e.g.: import sys import time from multiprocessing import Event, Process def loop_a(evt): counter = 0 while True: print("a") time.sleep(1) counter += 1 if counter == 3: print("I want to stop the loop_b") evt.set() # <-- set the flag to be True to signal exit sys.exit() def loop_b(evt): while not evt.is_set(): print("b") time.sleep(1) if __name__ == "__main__": evt = Event() p1 = Process(target=loop_a, args=(evt,)).start() p2 = Process(target=loop_b, args=(evt,)).start() Prints: a b a b a b I want to stop the loop_b # <-- and the loob_b ends here too
2
2
78,525,564
2024-5-23
https://stackoverflow.com/questions/78525564/find-matching-rows-in-dataframes-based-on-number-of-matching-items
I have two topic models, topics1 and topics2. They were created from very similar but different datasets. As a result, the words representing each topic/cluster as well as the topic numbers will be different for each dataset. A toy example looks like: import pandas as pd topics1 = pd.DataFrame({'topic_num':[1,2,3], 'words':[['red','blue','green'], ['blue','sky','cloud'], ['eat','food','nomnom']] }) topics2 = pd.DataFrame({'topic_num':[1,2,3], 'words':[['blue','sky','airplane'], ['blue','green','yellow'], ['mac','bit','byte']] }) For each topic in topics1, I would like to find the topic in topics2 with the maximum number of matches. In the above example, in topics1 topic_num 1 would match topic_num 2 in topics2 and topic_num 2 in topics1 would match topic_num 1 in topics2. In both of these cases, 2 of the 3 words in each row match across dataframes. Is there a way to find this using built-in pandas functions such as eq()? My solution just iterated across every word in topics1 and every word in topics2.
You will have to compute all combinations in any case. You could use sets and broadcast the comparisons: # set the topic_num as index # convert the Series of lists to Series of sets s1 = topics1.set_index('topic_num')['words'].map(set) s2 = topics2.set_index('topic_num')['words'].map(set) # convert to numpy arrays # broadcast the intersection of s1 and s2 # to form a square array, convert back to DataFrame # get the length of the intersection tmp = (pd.DataFrame(s1.to_numpy()[:, None] &s2.to_numpy(), index=s1.index, columns=s2.index ) .map(len) #.applymap(len) ) # get the col id with the max length per row # mask if the length was 0 out = tmp.idxmax(axis=1).where(tmp.gt(0).any(axis=1)) Output: topic_num 1 2.0 2 1.0 3 NaN dtype: float64 Intermediate tmp: topic_num 1 2 3 topic_num 1 1 2 0 2 2 1 0 3 0 0 0 Note that this logic can be wrapped by cdist as rightly suggested by @Onyambu, however I would first convert to set to avoid doing it repeatedly in the cdist (which would be expensive). This should be done in a bit weird way by encapsulating the sets in list since cdist requires a 2D input: from scipy.spatial.distance import cdist dst = lambda x, y : len(x[0] & y[0]) val = cdist([[set(x)] for x in topics1['words']], [[set(x)] for x in topics2['words']], dst) pos = val.argmax(1) out = pd.DataFrame({'topic1': topics1['topic_num'], 'topic2': (topics2['topic_num'] .iloc[pos] .where(val[topics1.index, pos]!=0) .values )}) # topic1 topic2 # 0 1 2.0 # 1 2 1.0 # 2 3 NaN
2
1
78,525,945
2024-5-23
https://stackoverflow.com/questions/78525945/fit-same-model-to-many-datasets-in-python
Below I demonstrate a workflow for fitting the same model to many datasets in R by nesting datasets by test_id, and then fitting the same model to each dataset, and extracting a statistic from each model. My goal is to create the equivalent workflow in Python, using polars, but I will use pandas if necessary. Demonstration in R library(tidyverse) SIMS <- 3 TRIALS <- 1e3 PROB_A <- .65 PROB_B <- .67 df <- bind_rows( tibble( recipe = "A", trials = TRIALS, events = rbinom(n=SIMS, size=trials, prob=PROB_A), rate = events/trials) |> mutate(test_id = 1:n()), tibble( recipe = "B", trials = TRIALS, events = rbinom(n=SIMS, size=trials, prob=PROB_B), rate = events/trials) |> mutate(test_id = 1:n()) ) df df_nest <- df |> group_by(test_id) |> nest() df_nest Define two functions to map over my nested data: glm_foo <- function(.data){ glm(formula = rate ~ recipe, data = .data, weights = trials, family = binomial) } glm_foo(df_nest$data[[1]]) fit_and_extract <- function(.data){ m <- glm(formula = rate ~ recipe, data = .data, weights = trials, family = binomial) m$coefficients['recipeB'] } fit_and_extract(df_nest$data[[1]]) df_nest |> mutate( model = map(.x = data, .f = glm_foo), trt_b = map_dbl(.x = data, .f = fit_and_extract) ) test_id data model trt_b <int> <list> <list> <dbl> 1 <tibble> <S3: glm> 0.05606076 2 <tibble> <S3: glm> 0.11029236 3 <tibble> <S3: glm> 0.01304480 #Python Section I can create the same nested data structure in polars, but I am unsure of how to fit the model to each nested dataset within the list column called data. import polars as pl from polars import col import numpy as np SIMS = 3 TRIALS = int(1e3) PROB_A = .65 PROB_B = .67 df_a = pl.DataFrame({ 'recipe': "A", 'trials': TRIALS, 'events': np.random.binomial(n=TRIALS, p=PROB_A, size=SIMS), 'test_id': np.arange(SIMS) }) df_b = pl.DataFrame({ 'recipe': "B", 'trials': TRIALS, 'events': np.random.binomial(n=TRIALS, p=PROB_B, size=SIMS), 'test_id': np.arange(SIMS) }) df = (pl.concat([df_a, df_b], rechunk=True) .with_columns( fails = col('trials') - col('events') )) df df_agg = df.group_by('test_id').agg(data = pl.struct('events', 'fails', 'recipe')) df_agg.sort('test_id') At this point my mental model of pandas starts to crumble. There are so many mapping options and I'm not really sure how to trouble shoot at this stage. df_agg.with_columns( ( pl.struct(["data"]).map_batches( lambda x: smf.glm('events + fails ~ recipe', family=sm.families.Binomial(), data=x.struct.field('data').to_pandas()).fit() ) ).alias("model") ) ComputeError: PatsyError: Error evaluating factor: TypeError: cannot use __getitem__ on Series of dtype List(Struct({'events': Int64, 'fails': Int64, 'recipe': String})) with argument 'recipe' of type 'str' events + fails ~ recipe
smf.glm() appears to want a Pandas DataFrame and the given formula can refer to column names. data = pd.DataFrame(dict(events=[630], fails=[370], recipe=["A"])) (smf.glm("events + fails ~ recipe", family=sm.families.Binomial(), data=data) .fit().pvalues) # Intercept 4.448408e-16 # dtype: float64 Struct When you pass a struct through a UDF, you get a Series: df.head(1).with_columns(pl.struct("events", "fails", "recipe").map_batches(print)) shape: (1,) Series: "events" [struct[3]] [ {630,370,"A"} ] Series.struct.unnest() will give you a DataFrame (which you can then call .to_pandas() on) shape: (1, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ events ┆ fails ┆ recipe β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ════════║ β”‚ 630 ┆ 370 ┆ A β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Lists I don't know R but it seems the difference in this case is that with Polars, creating a List would be the final step in the order of operations. Once you have a List, it's not as easy to execute such custom functionality for each element. So we would execute the function at the .agg() stage: (df.group_by("test_id") .agg( data = pl.struct("events", "fails", "recipe"), glm = pl.struct("events", "fails", "recipe").map_batches(lambda x: pl.Series( smf.glm( formula = "events + fails ~ recipe", family = sm.families.Binomial(), data = x.struct.unnest().to_pandas()).fit().pvalues )) ) ) shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ test_id ┆ data ┆ glm β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ list[struct[3]] ┆ list[f64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ════════════════════════════════β•ͺ════════════════════════║ β”‚ 1 ┆ [{655,345,"A"}, {674,326,"B"}] ┆ [5.5700e-22, 0.368276] β”‚ β”‚ 2 ┆ [{657,343,"A"}, {658,342,"B"}] ┆ [1.7233e-22, 0.962417] β”‚ β”‚ 0 ┆ [{630,370,"A"}, {657,343,"B"}] ┆ [4.4486e-16, 0.207569] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Polars doesn"t recognize the return value of glm() but wrapping it with pl.Series() inside the callback gives us a list[f64]. Window functions If you don't actually want to aggregate the DataFrame, you can use .over() instead. 
df.with_columns( pl.struct("events", "fails", "recipe").map_batches(lambda x: pl.Series( smf.glm( formula = "events + fails ~ recipe", family = sm.families.Binomial(), data = x.struct.unnest().to_pandas()).fit().pvalues )) .flatten() .over("test_id") .alias("glm") ) shape: (6, 6) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ recipe ┆ trials ┆ events ┆ test_id ┆ fails ┆ glm β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i32 ┆ i64 ┆ i64 ┆ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════β•ͺ════════β•ͺ═════════β•ͺ═══════β•ͺ════════════║ β”‚ A ┆ 1000 ┆ 630 ┆ 0 ┆ 370 ┆ 4.4486e-16 β”‚ β”‚ A ┆ 1000 ┆ 655 ┆ 1 ┆ 345 ┆ 5.5700e-22 β”‚ β”‚ A ┆ 1000 ┆ 657 ┆ 2 ┆ 343 ┆ 1.7233e-22 β”‚ β”‚ B ┆ 1000 ┆ 657 ┆ 0 ┆ 343 ┆ 0.207569 β”‚ β”‚ B ┆ 1000 ┆ 674 ┆ 1 ┆ 326 ┆ 0.368276 β”‚ β”‚ B ┆ 1000 ┆ 658 ┆ 2 ┆ 342 ┆ 0.962417 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ https://docs.pola.rs/user-guide/expressions/window/
2
3
78,523,591
2024-5-23
https://stackoverflow.com/questions/78523591/how-to-connect-data-points-with-line-where-values-are-missing
I need to draw several biomarker changes by Date on one graph, but biomarker samples were measured in different dates and different times, so for example: data = { 'PatientID': [244651, 244651, 244651, 244651, 244652, 244653, 244651], 'LocationType': ['IP', 'IP', 'OP', 'IP', 'IP', 'OP', 'IP'], 'Date': ['2023-01-01', '2023-01-02', '2023-01-03', '2023-01-04', '2023-01-01', '2023-01-01', '2023-01-05'], 'Biomarker1': [1.1, 1.2, None, 1.4, 2.1, 3.1, 1.5], 'Biomarker2': [2.1, None, 2.3, 2.4, 3.1, 4.1, 2.5], 'Biomarker3': [3.1, 3.2, 3.3, None, 4.1, 5.1, 3.5] } to draw a graph: # Set the date as the index filtered_df.set_index('Date', inplace=True) # Plot all biomarkers plt.figure(figsize=(12, 8)) # Loop through each biomarker column to plot for column in filtered_df.columns: if column not in ['PatientID', 'LocationType']: plt.plot(filtered_df.index, filtered_df[column], marker='o', linestyle='-', label=column) here is my output: Biomarker change over time I need all the point of one biomarkers to be connected just with the line. I cannot use interpolate, the points should be just connected with line. How do I do it? Please, help! I tried to interpolate, but it creates new points, I don't need new points. Here is the full code: import pandas as pd import matplotlib.pyplot as plt import numpy as np # Sample DataFrame (replace this with your actual DataFrame) data = { 'PatientID': [244651, 244651, 244651, 244651, 244652, 244653, 244651], 'LocationType': ['IP', 'IP', 'OP', 'IP', 'IP', 'OP', 'IP'], 'Date': ['2023-01-01', '2023-01-02', '2023-01-03', '2023-01-04', '2023-01-01', '2023-01-01', '2023-01-05'], 'Biomarker1': [1.1, 1.2, None, 1.4, 2.1, 3.1, 1.5], 'Biomarker2': [2.1, None, 2.3, 2.4, 3.1, 4.1, 2.5], 'Biomarker3': [3.1, 3.2, 3.3, None, 4.1, 5.1, 3.5] } # Create DataFrame df = pd.DataFrame(data) df['Date'] = pd.to_datetime(df['Date']) # Filter the data for the specified patient ID and IP location type filtered_df = df[(df['PatientID'] == 244651) & (df['LocationType'] == 'IP')] # Set the date as the index filtered_df.set_index('Date', inplace=True) # Plot all biomarkers plt.figure(figsize=(12, 8)) # Loop through each biomarker column to plot each one separately for column in filtered_df.columns: if column not in ['PatientID', 'LocationType']: plt.plot(filtered_df.index, filtered_df[column], marker='o', linestyle='-', label=column) plt.title('Biomarkers by Date for Patient ID 244651 (IP Location Type)') plt.xlabel('Date') plt.ylabel('Biomarker Value') plt.legend() plt.grid(True) plt.xticks(rotation=45) plt.show()
You can replace the code creating the plot with the following: # Plot all biomarkers plt.figure(figsize=(12, 8)) # Loop through each biomarker column to plot each one separately for column in filtered_df.columns: if column not in ['PatientID', 'LocationType']: biomarker = filtered_df[column].dropna() plt.plot(biomarker.index, biomarker, 'o-', label=column) plt.title('Biomarkers by Date for Patient ID 244651 (IP Location Type)') plt.xlabel('Date') plt.ylabel('Biomarker Value') plt.legend() plt.grid(True) plt.xticks(rotation=45) plt.show() Alternatively, you can use seaborn: import seaborn as sns # Plot all biomarkers plt.figure(figsize=(12, 8)) sns.lineplot(data = filtered_df[['Biomarker1', 'Biomarker2', 'Biomarker3']], markers=['o', 'o', 'o'], dashes=False ) plt.title('Biomarkers by Date for Patient ID 244651 (IP Location Type)') plt.ylabel('Biomarker Value') plt.grid(True) plt.xticks(rotation=45) plt.show() In either case, the plot looks as follows:
2
1
78,527,843
2024-5-24
https://stackoverflow.com/questions/78527843/des3-encryption-result-on-python-is-different-from-des-ede3-cbc-result-of-php
PHP code // php 8.1.13 $text = '1'; $key = base64_decode('3pKtqxNOolyBoJouXWwVYw=='); $iv = base64_decode('O99EDNAif90='); $encrypted = openssl_encrypt($text, 'des-ede3-cbc', $key, OPENSSL_RAW_DATA, $iv); $hex = strtoupper(bin2hex($encrypted)); echo $hex; The result is FF72E6D454B84A7C Python3.8 pycryptodome 3.20.0 import binascii import base64 from Crypto.Cipher import DES3 from Crypto.Util.Padding import pad key = base64.b64decode('3pKtqxNOolyBoJouXWwVYw==') iv = base64.b64decode('O99EDNAif90=') cipher = DES3.new(key, DES3.MODE_CBC, iv) data = '1' data = data.encode('utf-8') data = pad(data, DES3.block_size) encrypted = cipher.encrypt(data) print(binascii.hexlify(encrypted).decode('utf-8').upper()) The result is A311743FB5D91569 Why are these two results different, and how can I make python's results consistent with PHP's? I've tried using pycryptodome and pyDes and can't achieve the same result as PHP
The reason for the different results is that different TripleDES variants are used in both codes. des-ede3-cbc in the PHP code specifies 3TDEA, which requires a 24 bytes key. Since the key used is only 16 bytes in size, PHP/OpenSSL silently pads it with 0x00 values to the required size of 24 bytes. With PyCryptodome, the key length determines the variant. Since the key is 16 bytes in size, 2TDEA is applied. For this reason the result is different. In order to also use 3TDEA, the key must be explicitly padded in the Python code, i.e. expanded to 24 bytes with 0x00 values: key = base64.b64decode('3pKtqxNOolyBoJouXWwVYw==') + b'\x00'*8 If this key is applied in the Python code, the result is the same as in the PHP code.
2
3
78,522,031
2024-5-23
https://stackoverflow.com/questions/78522031/gaussian-fit-with-gap-in-data
I would like to fit a gaussian on an absorption band, in a reflectance spectra. Problem is, in this part of the spectra I have a gap in my data (blank space on the figure), between approximately 0.85 and 1.3 Β΅m. I really need this gaussian, because after this I would like to compute spectral parameters such as the Full-With Half-Maximum and the area occupied by the gaussian. Here is the functions I use to perform the gaussian fit (credit to this post) : def gauss(x, H, A, x0, sigma): return H + A * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) def gauss_fit(x, y): mean = sum(x * y) / sum(y) sigma = np.sqrt(sum(y * (x - mean) ** 2) / sum(y)) popt, pcov = scipy.optimize.curve_fit(gauss, x, y, p0=[min(y), max(y), mean, sigma]) return popt How I apply it : H, A, x0, sigma = gauss_fit(x, y) And plot it : plt.plot(x, y, '.k', label='data') plt.plot(x, gauss(x, *gauss_fit(x, y)), '-r', label='fit') plt.xlabel('Wavelength (Β΅m)') plt.ylabel('Reflectance') plt.legend() Here is what I get : Gaussian fit with gap in data As you can see the fit seems to work fine until it reaches the gap. It would really help me if I could have a clean gaussian fit, bonus points if I can then easily compute the FMWH and the area of the gaussian afterward. Please note that the solution should not be too case-specific, as I have huge datasets to work on. It should thus be implementable in loops. The only post I could find talking about this issue is this one, but it did not provide me with a satisfying solution. It's my first time posting on Stack Overflow, please feel free to ask for any complementary information if needed. Edit 1 I think Tino D and lastchance answers solved the main problem of my code, it being that I defined a normal gaussian fit instead of a substracted one. I obtain this new fit. However as you can see it is still not perfect, and it strangely goes beyond y=1 on the right side of the fit. I think the problem now resides in my data itself, so here it is as requested. I use pandas to manage my data, so the file is in pickle format as I store entire nupy arrays in unique DataFrame cells. For this fit we only need the columns 'Wavelength' and 'Reflectance Normalized'. We also only need part of the spectra, so here is the code I use to sample what I need : #df_merged is the name of the DataFrame R0700=FindNearest(df_merged.at[0,'Wavelength'], df_merged.at[0,'Reflectance Normalized'], 0.7, show=True) R1800=FindNearest(df_merged.at[0,'Wavelength'], df_merged.at[0,'Reflectance Normalized'], 1.8, show=True) x=df_merged.at[0,'Wavelength'][R0700[1]:R1800[1]] y=df_merged.at[0,'Reflectance Normalized'][R0700[1]:R1800[1]] FindNearest is a function of mine I use to find specific values in my arrays, it is defined as the following : def FindNearest(wl, rf, x, show=False): # wl = wavelength array ; rf = reflectance array ; x = value to find wl_dif=np.abs(wl-x) idx=np.argmin(wl_dif) # Get reflectance and wavelength values wl_x=wl[idx] rf_x=rf[idx] if show: print("Nearest wavelength :", wl_x, ", Index :", idx, ", Corresponding reflectance :", rf_x) # Values : [0] = nearest wavelength value, [1] = position of nearest value in array # (both reflectance and wavelength since same size), [2] = nearest reflectance value return(wl_x, idx, rf_x) We almost got this, thanks a lot to you 2 !
As lastchance said and as my comment suggested, you were fitting the wrong curve (or your initial guess was not compatible). The function that you wrote was a normal Gaussian and not a subtracted one. So the following can be used import numpy as np %matplotlib notebook import matplotlib.pyplot as plt from scipy.optimize import curve_fit def gauss(x, H, A, x0, sigma): return H - A * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) def gauss_fit(x, y): mean = sum(x * y) / sum(y) sigma = np.sqrt(sum(y * (x - mean) ** 2) / sum(y)) popt, pcov = curve_fit(gauss, x, y, p0=[min(y), max(y), mean, sigma]) return popt # generate toydataset x = np.linspace(0.5, 2, 100) H, A, x0, sigma = 1, 0.5, 1.2, 0.3 y = gauss(x, H, A, x0, sigma) + 0.005 * np.random.normal(size=x.size) # enerate gap in data gapStart = int(0.1 * len(x)) gapEnd = int(0.4 * len(x)) xGap = np.concatenate([x[:gapStart], x[gapEnd:]]) yGap = np.concatenate([y[:gapStart], y[gapEnd:]]) popt = gauss_fit(xGap, yGap) plt.figure() plt.scatter(xGap, yGap, label='Gap in data', color='blue') plt.plot(x, gauss(x, *popt), label='Fit', color='red', linestyle='--') plt.xlabel('x') plt.ylabel('y') plt.legend() plt.title("30% missing") plt.grid() This results in the following: Now I did generate the toydata but the idea remains the same. As for the calculations: H_fit, A_fit, x0_fit, sigma_fit = popt ''' full width at half maximum taken from here and area calculation here: https://www.physicsforums.com/threads/area-under-gaussian-peak-by-easy-measurements.419285/ ''' FWHM = 2.35 * sigma_fit Area = H_fit*FWHM/(2.35*0.3989) print(f"FWHM = {FWHM}") print(f"Area = {Area}") Results: FWHM = 0.7030608784583746 Area = 0.7486638847495126 Hope this helps! Edit 1: with data from OP The badly fitted line that you showed in your example is because you are not passing those x values in the middle. Alternatively: popt = gauss_fit(x, y) plt.figure() plt.scatter(x, y, label='Gap in data', color='blue') plt.plot(np.linspace(min(x), max(x), 100), # resample the x variable gauss(np.linspace(min(x), max(x), 100), *popt), # and calculate label='Fit', color='red', linestyle='--') plt.xlabel('x') plt.ylabel('y') plt.legend() plt.grid() We can resample the x variable from min to max with 100 points using np.linspace, and also important to pass it to the gaus functions. Results: I highly recommend to ditch Gaussian or to hard-code H=1 as lastchance suggested. The data on the top right is too different. OR you say that you don't care about these points and that you can cut them out. Edit 2: setting the upper limit to 1 As you saw from your calculations, the fit went overboard and set the sigma as a minus value (hence the negative values). For the fit it is not important since sigma is raised to the power of 2. In any case, I suggest the followign adjustments to your code, although I still think a Gaussian fit does not make sense: def gauss(x, A, x0, sigma): return 1 - A * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) def gauss_fit(x, y): mean = sum(x * y) / sum(y) sigma = np.sqrt(sum(y * (x - mean) ** 2) / sum(y)) popt, pcov = curve_fit(gauss, x, y) return popt In the above functions, H is being set to 1. There is also no need for the initial guesses, the optimizer finds values automatically. The code for the plotting stays the same. Results: This gives the following: FWHM = 0.5 Area = 0.54 Now we see that the fit prefers sticking with the edges to minimize the error and neglects a good fit on the tail. Which btw is too wide of a tail for a Gaussian fit to begin with (in my eyes). 
The distribution looks slightly platykurtic. Which we can also confirm by calculating the kurtosis: mean = np.mean(y) std = np.std(y, ddof=1) n = len(y) kurtosis = (n * (n + 1) * np.sum((y - mean) ** 4) / ((n - 1) * (n - 2) * (n - 3) * std ** 4)) - (3 * (n - 1) ** 2 / ((n - 2) * (n - 3))) print(kurtosis) # -0.13344688172380037 -0.13 is not that far away from 0, but still I think the fit can be done in a better way. Reading material: Link I suggest you open another question on crossvalidated to ask for a recommendation regarding which curve to fit.
2
2
78,526,797
2024-5-24
https://stackoverflow.com/questions/78526797/how-to-distribute-pandas-dataframe-rows-unevenly-across-timestamps-based-on-valu
E.g. DF which contains number of executions across timestamps. DateTime Execution 0 2023-04-03 07:00:00 11 1 2023-04-03 11:00:00 1 2 2023-04-03 12:00:00 1 3 2023-04-03 14:00:00 3 4 2023-04-03 18:00:00 1 <class 'pandas.core.frame.DataFrame'> RangeIndex: 5080 entries, 0 to 5079 Below is the output I'm trying to achieve DateTime Execution 0 2023-04-03 07:00:00 4 1 2023-04-03 08:00:00 4 2 2023-04-03 09:00:00 3 3 2023-04-03 11:00:00 1 4 2023-04-03 12:00:00 1 5 2023-04-03 14:00:00 3 6 2023-04-03 18:00:00 1 Only if the execution is more than 4, it should be distributed to the next hours. Maximum for any hour is 4. Thanks again for quick help. How to distribute pandas dataframe rows evenly across timestamps based on value of the column This helps with Evenly distribution, I'm looking at uneven distribution.
With asfreq/clip : N, C = 4, "Execution" asfreq = df.set_index("DateTime").asfreq("h") out = ( (gby:=asfreq.groupby(asfreq[C].notna().cumsum()))[C] .transform("first") .sub(gby.cumcount() * N) .clip(upper=N) .loc[lambda s: s.gt(0)] .reset_index(name=C) .convert_dtypes() ) Output : DateTime Execution 0 2023-04-03 07:00:00 4 1 2023-04-03 08:00:00 4 2 2023-04-03 09:00:00 3 3 2023-04-03 11:00:00 1 4 2023-04-03 12:00:00 1 5 2023-04-03 14:00:00 3 6 2023-04-03 18:00:00 1
4
7
78,516,650
2024-5-22
https://stackoverflow.com/questions/78516650/perform-aggregation-using-min-max-avg-on-all-columns
I have a dataframe like β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ ts ┆ 646150 ┆ 646151 ┆ 646154 ┆ 646153 ┆ week β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ datetime[ΞΌs] ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ i8 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═══════════β•ͺ═══════════β•ͺ═══════════β•ͺ══════║ β”‚ 2024-02-01 00:00:00 ┆ 24.490348 ┆ 65.088941 ┆ 53.545259 ┆ 13.499832 ┆ 5 β”‚ β”‚ 2024-02-01 01:00:00 ┆ 15.054187 ┆ 63.095247 ┆ 60.786479 ┆ 29.538156 ┆ 5 β”‚ β”‚ 2024-02-01 02:00:00 ┆ 24.54212 ┆ 63.880298 ┆ 57.535928 ┆ 24.840966 ┆ 5 β”‚ β”‚ 2024-02-01 03:00:00 ┆ 24.85621 ┆ 69.778516 ┆ 67.57284 ┆ 24.672476 ┆ 5 β”‚ β”‚ 2024-02-01 04:00:00 ┆ 21.21628 ┆ 61.137849 ┆ 55.231299 ┆ 16.648383 ┆ 5 β”‚ β”‚ … ┆ … ┆ … ┆ … ┆ … ┆ … β”‚ β”‚ 2024-02-29 19:00:00 ┆ 23.17318 ┆ 62.590752 ┆ 72.026908 ┆ 24.614523 ┆ 9 β”‚ β”‚ 2024-02-29 20:00:00 ┆ 23.86416 ┆ 64.87102 ┆ 61.023656 ┆ 20.095353 ┆ 9 β”‚ β”‚ 2024-02-29 21:00:00 ┆ 18.553397 ┆ 67.530137 ┆ 63.477737 ┆ 17.313834 ┆ 9 β”‚ β”‚ 2024-02-29 22:00:00 ┆ 22.339175 ┆ 67.456563 ┆ 62.552035 ┆ 20.880844 ┆ 9 β”‚ β”‚ 2024-02-29 23:00:00 ┆ 15.5809 ┆ 66.774367 ┆ 57.066264 ┆ 29.529057 ┆ 9 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ which is generated as follows import numpy as np from datetime import datetime, timedelta def generate_test_data(): # Function to generate hourly timestamps for a month def generate_hourly_timestamps(start_date, end_date): current = start_date while current <= end_date: yield current current += timedelta(hours=1) # Define the date range start_date = datetime(2024, 2, 1) end_date = datetime(2024, 2, 29, 23, 0, 0) # February 29th 23:00 for a leap year # Generate the data timestamps = list(generate_hourly_timestamps(start_date, end_date)) num_hours = len(timestamps) data = { "ts": timestamps, "646150": np.random.uniform(15, 25, num_hours), # Random temperature data between 15 and 25 "646151": np.random.uniform(60, 70, num_hours), # Random humidity data between 60 and 70 "646154": np.random.uniform(50, 75, num_hours), # Random sensor data between 50 and 75 "646153": np.random.uniform(10, 30, num_hours) # Random sensor data between 10 and 30 } df = pl.DataFrame(data) df = df.with_columns(pl.col("ts").cast(pl.Datetime)) return df df = generate_test_data() # Add a week column df = df.with_columns((pl.col("ts").dt.week()).alias("week")) I would like to group by week or some other time intervals and aggregate using min, mean, and max. For this, I could do something like # Group by week and calculate min, max, and avg aggregated_df = df.groupby("week").agg([ pl.col("646150").min().alias("646150_min"), pl.col("646150").max().alias("646150_max"), pl.col("646150").mean().alias("646150_avg"), pl.col("646151").min().alias("646151_min"), pl.col("646151").max().alias("646151_max"), pl.col("646151").mean().alias("646151_avg"), pl.col("646154").min().alias("646154_min"), pl.col("646154").max().alias("646154_max"), pl.col("646154").mean().alias("646154_avg"), pl.col("646153").min().alias("646153_min"), pl.col("646153").max().alias("646153_max"), pl.col("646153").mean().alias("646153_avg") ]) but I would like to avoid specifying the column names. 
I would like to generate the dataframe like below where the column value is a list or tuples or some other multiple value format that holds the min, max, avg values. β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ week ┆ 646150 ┆ 646151 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i8 ┆ List[f64] ┆ List[f64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════════════β•ͺ══════════════════║ β”‚ 5 ┆ [24.1,26.3,25.0] ┆ [22.1,23.3,22.5] β”‚ β”‚ … ┆ … ┆ … ┆ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Is this possible in polars ? Thanks
you can do something like this to get struct-typed columns: df.group_by("week").agg( pl.struct( pl.col(c).min().alias("min"), pl.col(c).max().alias("max"), pl.col(c).mean().alias("mean") ).alias(c) for c in df.columns if c not in ('week', 'ts') ) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ week ┆ 646150 ┆ 646151 ┆ 646154 ┆ 646153 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i8 ┆ struct[3] ┆ struct[3] ┆ struct[3] ┆ struct[3] β”‚ β•žβ•β•β•β•β•β•β•ͺ═══════════════════════════β•ͺ═══════════════════════════β•ͺ═══════════════════════════β•ͺ═══════════════════════════║ β”‚ 9 ┆ {15.044939,24.764209,20.0 ┆ {60.257012,69.928978,64.7 ┆ {50.530551,74.878361,63.0 ┆ {10.190688,29.82809,20.31 β”‚ β”‚ ┆ 59679… ┆ 75548… ┆ 78632… ┆ 305} β”‚ β”‚ 8 ┆ {15.102004,24.991653,20.0 ┆ {60.055959,69.92977,65.01 ┆ {50.048389,74.599839,61.2 ┆ {10.006159,29.938469,20.8 β”‚ β”‚ ┆ 30854… ┆ 5284} ┆ 97655… ┆ 86438… β”‚ β”‚ 5 ┆ {15.292633,24.75995,19.50 ┆ {60.068354,69.961624,64.4 ┆ {50.351197,74.665052,62.7 ┆ {10.128425,29.995835,20.5 β”‚ β”‚ ┆ 5275} ┆ 92186… ┆ 59774… ┆ 48913… β”‚ β”‚ 7 ┆ {15.054969,24.872284,20.1 ┆ {60.015595,69.978639,65.0 ┆ {50.264015,74.722436,61.2 ┆ {10.058671,29.906661,20.2 β”‚ β”‚ ┆ 02549… ┆ 56007… ┆ 16711… ┆ 89755… β”‚ β”‚ 6 ┆ {15.137601,24.97642,19.91 ┆ {60.118846,69.99577,65.32 ┆ {50.025932,74.968677,62.4 ┆ {10.003437,29.798739,19.7 β”‚ β”‚ ┆ 5258} ┆ 6988} ┆ 8508} ┆ 37678… β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ or, if you want to put it into list: df.group_by("week").agg( pl.concat_list(agg(c) for agg in [pl.min, pl.max, pl.mean]) for c in df.columns if c not in ('week', 'ts') ) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ week ┆ 646150 ┆ 646151 ┆ 646154 ┆ 646153 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i8 ┆ list[f64] ┆ list[f64] ┆ list[f64] ┆ list[f64] β”‚ β•žβ•β•β•β•β•β•β•ͺ════════════════════════β•ͺ════════════════════════β•ͺ════════════════════════β•ͺ═════════════════════════════════║ β”‚ 7 ┆ [15.054969, 24.872284, ┆ [60.015595, 69.978639, ┆ [50.264015, 74.722436, ┆ [10.058671, 29.906661, 20.2897… β”‚ β”‚ ┆ 20.1025… ┆ 65.0560… ┆ 61.2167… ┆ β”‚ β”‚ 6 ┆ [15.137601, 24.97642, ┆ [60.118846, 69.99577, ┆ [50.025932, 74.968677, ┆ [10.003437, 29.798739, 19.7376… β”‚ β”‚ ┆ 19.91525… ┆ 65.32698… ┆ 62.4850… ┆ β”‚ β”‚ 5 ┆ [15.292633, 24.75995, ┆ [60.068354, 69.961624, ┆ [50.351197, 74.665052, ┆ [10.128425, 29.995835, 20.5489… β”‚ β”‚ ┆ 19.50527… ┆ 64.4921… ┆ 62.7597… ┆ β”‚ β”‚ 9 ┆ [15.044939, 24.764209, ┆ [60.257012, 69.928978, ┆ [50.530551, 74.878361, ┆ [10.190688, 29.82809, 20.31305… β”‚ β”‚ ┆ 20.0596… ┆ 64.7755… ┆ 63.0786… ┆ β”‚ β”‚ 8 ┆ [15.102004, 
24.991653, ┆ [60.055959, 69.92977, ┆ [50.048389, 74.599839, ┆ [10.006159, 29.938469, 20.8864… β”‚ β”‚ ┆ 20.0308… ┆ 65.01528… ┆ 61.2976… ┆ β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
4
4
78,525,590
2024-5-23
https://stackoverflow.com/questions/78525590/tensorflow-model-cant-predict-on-polars-dataframe
I trained a TensorFlow model for text classification, but I can't use a Polars DataFrame to make my predictions on it. However, I can use a Pandas DataFrame. import pandas as pd import polars as pl import joblib from tensorflow.keras.models import load_model loaded_model =load_model('model.keras') load_Le = joblib.load('label_encoder.joblib') If i do: text = "some example text" df = pd.DataFrame({"Coment":[text]}) preddict = loaded_model.predict(df["Coment"]) I have no problems, but if I do: text = "some example text" df = pl.DataFrame({"Coment":[text]}) preddict = loaded_model.predict(df["Coment"]) I get TypeError: cannot convert the argument type_value: String to a TensorFlow Dtype. Any advice? Some extra info: Before saving my model, I added this so I can predict on any text (Works fine on pandas) inputs = keras.Input(shape=(1,), dtype="string") processed_inputs = text_vectorization(inputs) outputs = model(processed_inputs) inference_model = keras.Model(inputs, outputs) inference_model.save('model.keras')
As of late 2023, there were no plans to make Polars dataframes compatible with TensorFlow. In the meantime you can convert Polars frames to pandas with the to_pandas() method and pass those to the models.
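A minimal sketch of that workaround, reusing the loaded_model and column name from the question: text = "some example text" df = pl.DataFrame({"Coment": [text]}) preddict = loaded_model.predict(df["Coment"].to_pandas()) Here df["Coment"] is a Polars Series, and Series.to_pandas() hands the model a pandas object it already knows how to consume.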
2
1
78,525,518
2024-5-23
https://stackoverflow.com/questions/78525518/python-pandas-create-lookup-table-of-items-bought-together
I'm struggling a bit trying to create a lookup table of items that are bought together. I have a solution in "brute force" Python iterating over a DataFrame, but I'd prefer one in pure Pandas using some kind of aggregate trick if possible. Here is what I have. I've made a small example with a list of invoice numbers and the item that was bought. import pandas as pd df = pd.DataFrame({ "Invoice": [1, 1, 2, 3, 4, 4, 4, 5, 5], "Item": ["Apple", "Pear", "Banana", "Apple", "Apple", "Orange", "Pear", "Apple", "Orange"] }) df We can see that invoice number 4 has three items and invoice number 2 has just one. And here is my solution that I'm not really happy with: item_keys = [] item_values = [] for i, row in df.iterrows(): invoice = row["Invoice"] item = row["Item"] for item_2 in df[df["Invoice"] == invoice]["Item"]: if item_2 != item: item_keys.append(item) item_values.append(item_2) lookup_table = pd.DataFrame({"key": item_keys, "value": item_values}) lookup_table In lookup_table I can then search for a key item and get all items that were sold together with it. Is there a faster and more elegant way to do this?
You can also do a self join and filter down: import pandas as pd df = pd.DataFrame({ "Invoice": [1, 1, 2, 3, 4, 4, 4, 5, 5], "Item": ["Apple", "Pear", "Banana", "Apple", "Apple", "Orange", "Pear", "Apple", "Orange"] }) lookup_table = ( df.merge(df, on='Invoice') .rename(columns={'Item_x': 'key', 'Item_y': 'value'}) .loc[lambda d: (d['key'] != d['value'])] .reset_index(drop=True) # .drop(columns='Invoice') # uncomment to remove the 'Invoice' column ) print(lookup_table) # Invoice key value # 0 1 Apple Pear # 1 1 Pear Apple # 2 4 Apple Orange # 3 4 Apple Pear # 4 4 Orange Apple # 5 4 Orange Pear # 6 4 Pear Apple # 7 4 Pear Orange # 8 5 Apple Orange # 9 5 Orange Apple
2
2
78,523,337
2024-5-23
https://stackoverflow.com/questions/78523337/how-is-it-possible-that-rpy2-is-altering-the-values-within-my-dataframe
I am trying to utilize some R based packages within a Python script using the rpy2 package. In order to implement the code, I first need to convert a Pandas dataframe into an R based data matrix. However, something incredibly strange is happening to the values within the code. Here is a minimally reproducible example of the code import pandas as pd import numpy as np import rpy2.robjects as ro from rpy2.robjects.packages import importr from rpy2.robjects import pandas2ri pandas2ri.activate() utils = importr('utils') # Function to generate random column names def generate_column_names(n, suffixes): columns = [] for _ in range(n): name = ''.join(random.choices(string.ascii_uppercase, k=3)) # Random 3-character string suffix = random.choice(suffixes) # Randomly choose between "_Healthy" and "_Sick" columns.append(name + suffix) return columns # Number of rows and columns n_rows = 1000 n_cols = 15 # Generate random float values between 0 and 10 data = np.random.uniform(0, 10, size=(n_rows, n_cols)) # Introduce NaN values sporadically nan_indices = np.random.choice([True, False], size=data.shape, p=[0.1, 0.9]) data[nan_indices] = np.nan # Generate random column names column_names = generate_column_names(n_cols, ["_Healthy", "_Sick"]) # Create the DataFrame df = pd.DataFrame(data, columns=column_names) df = df.replace(np.nan, "NA") with localconverter(ro.default_converter + pandas2ri.converter): R_df = ro.conversion.py2rpy(df) r_matrix = r('data.matrix')(R_df) Now, the input Pandas dataframe looks like this: However, after turning it into a R based dataframe using ro.conversion.py2rpy(), and then recasting that as a data matrix using r('data.matrix'), I get a r_matrix dataframe that look like this: How could this happen? I have checked the intermediate R_df and have found that it has the same values as the input Pandas df, so it seems that the line r('data.matrix') is drastically altering my contents. I have run the analogous commands in R (after importing the exact same dataframe into R using readr), and data.matrix does not affect my dataframe's contents at all, so I am incredibly confused as to what the problem is. Has anyone else experienced this at all?
Your column is being coerced to a factor and then numeric When in Python you do df = df.replace(np.nan, "NA"), you are replacing with the literal string "NA". That means that the "NA" values are then stored as an object rather than float64. Unlike pandas, R does not have an object type. Columns (or vectors in R) need to all be the same type. If a vector contains numeric and string values, R ultimately treats the whole thing as character. The behaviour that you get with a character vector using data.matrix() is: Character columns are first converted to factors and then to integers. For example: set.seed(1) (df <- data.frame( x = 1:5, y = (as.character(rnorm(5))) )) # x y # 1 1 -0.626453810742332 # 2 2 0.183643324222082 # 3 3 -0.835628612410047 # 4 4 1.59528080213779 # 5 5 0.32950777181536 data.matrix(df) # x y # [1,] 1 1 # [2,] 2 3 # [3,] 3 2 # [4,] 4 5 # [5,] 5 4 Use NA_real_ There is a class rpy2.rinterface_lib.sexp.NARealType. You need to instantiate this and then replace np.nan with this object. This means the entire column can remain a float64 in Python, and numeric in R, so there is no coercion to factor. na = rpy2.rinterface_lib.sexp.NARealType() df2 = df.replace(np.nan, na) with localconverter(ro.default_converter + pandas2ri.converter): R_df = ro.conversion.py2rpy(df2) r_matrix = ro.r('data.matrix')(R_df) r_matrix Output: array([[6.71551482, 3.37235768, 1.73878498, ..., 9.26968137, 4.44605036, 0.57638575], [2.14651571, 5.14706755, 7.43517449, ..., 7.56905516, 3.1960465 , 9.13240441], [0.67569123, 8.55601696, 3.34151056, ..., nan, 4.12252086, 5.79825217], ..., [2.93515376, 2.29766304, 2.70761156, ..., 7.80345898, 0.34809462, 4.5128469 ], [5.66194126, 1.32135235, 2.57649142, ..., 3.49908635, 3.77794316, 8.96322655], [8.43950172, 1.65306388, 7.37031975, ..., 8.01045219, 8.68857319, 7.51309124]])
2
2
78,524,354
2024-5-23
https://stackoverflow.com/questions/78524354/polars-for-processing-search-terms-in-text-data
I have a Python script that loads search terms from a JSON file and processes a Pandas DataFrame to add new columns indicating whether certain terms are present in the text data. However, I would like to modify the script to use Polars instead of Pandas and possibly remove the JSON dependency. Here is my original code: import pandas as pd import json class SearchTermLoader: def __init__(self, json_file): self.json_file = json_file def load_terms(self): with open(self.json_file, 'r') as f: data = json.load(f) terms = {} for phase_name, phase_data in data.items(): terms[phase_name] = ( phase_data.get('words', []), phase_data.get('exact_phrases', []) ) return terms class DataFrameProcessor: def __init__(self, df: pd.DataFrame, col_name: str) -> None: self.df = df self.col_name = col_name def add_contains_columns(self, search_terms): columns_to_add = ["type1", "type2"] for column in columns_to_add: self.df[column] = self.df[self.col_name].apply( lambda text: any( term in text for term in search_terms.get(column, ([], []))[0] + search_terms.get(column, ([], []))[1] ) ) return self.df # Example Usage data = {'text_column': ['The apple is red', 'I like bananas', 'Cherries are tasty']} df = pd.DataFrame(data) term_loader = SearchTermLoader('word_list.json') search_terms = term_loader.load_terms() processor = DataFrameProcessor(df, 'text_column') new_df = processor.add_contains_columns(search_terms) new_df Here is an example of the json file: { "type1": { "words": ["apple", "tasty"], "exact_phrases": ["soccer ball"] }, "type2": { "words": ["banana"], "exact_phrases": ["red apple"] } } I understand that I can use the .str.contains() function, but I want to use it with specific words and exact phrases. Could you provide some guidance on how to get started with this?
For non-regex matching, .str.contains_any() is probably a better option. It seems like you want to concat both lists: word_list = pl.read_json("word_list.json") """ for older versions without struct "*" expansion type1 = pl.concat_list( pl.col("type1").struct.field("words", "exact_phrases") ) """ word_list = word_list.select( type1 = pl.concat_list(pl.col("type1").struct["*"]), type2 = pl.concat_list(pl.col("type2").struct["*"]) ) Shape: (1, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ type1 ┆ type2 β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[str] ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════════════════║ β”‚ ["apple", "tasty", "soccer ball"] ┆ ["banana", "red apple"] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ You could then .concat() them into your frame and run .contains_any() new_df = pl.concat([df, word_list], how="horizontal") new_df.with_columns( type1 = pl.col("text_column").str.contains_any(pl.col("type1").flatten()), type2 = pl.col("text_column").str.contains_any(pl.col("type2").flatten()) ) shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ text_column ┆ type1 ┆ type2 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ bool ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════║ β”‚ The apple is red ┆ true ┆ false β”‚ β”‚ I like bananas ┆ false ┆ true β”‚ β”‚ Cherries are tasty ┆ true ┆ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
2
1
78,523,521
2024-5-23
https://stackoverflow.com/questions/78523521/is-this-still-an-uncontrolled-command-line-which-can-execute-malicious-code
import os command = "conda run -n python3.5 python generate_handwriting.py -text '{}' -style {} -bias {} -stroke_color '{}' -stroke_width {} -output '{}'".format(text, style, bias, stroke_color, stroke_width, output_filename) os.system(command) I get the variables, for example text, directly from the user. I've been notified that this could allow the user to execute malicious code. I have rewritten the code to this: import subprocess cmd = ["conda", "run", "-n", "python3.5", "python", "generate_handwriting.py"] args = ["-text", str(text), "-style", str(style), "-bias", str(bias), "-stroke_color", str(stroke_color), "-stroke_width", str(stroke_width), "-output", output_filename] process = subprocess.run(cmd + args, check=True) Is the issue fixed now? The full code can be found here.
Shell Injection The subprocess.run command, as opposed to os.system, doesn't allow for arbitrary shell execution, like redirections or multiple commands in one string, by default. You'd have to explicitly set shell=True for that. Also, you're putting the arguments in a list, which is pretty much the Python equivalent of SQL's prepared statements. Therefore, it is much better to do it like this. That being said, I'd still advise you to escape user input with shlex.quote, just on general principle. For example, if you were to pass a single string command to run instead of a command list, a malicious user could pass something like -c "my arbitrary command". This shouldn't be an issue in your specific case, but better safe than sorry. Other Issues If this process is running with higher permissions than the user, you'd need to make sure to close other avenues of attack as well. Some obvious ones would be: Checking that the user has permissions to edit the output destination. Restricting write access to generate_handwriting.py. PATH poisoning, such that the conda executable is provided by the user. But these are from the top of my head and there might very well be more.
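For reference, here is a minimal sketch of why shlex.quote only matters when a single shell string is built (user_input is a hypothetical variable standing in for something like text; the argument-list form already passes each value as one literal argument):
import shlex

user_input = '"; rm -rf ~'                    # hypothetical malicious value
unsafe = f"echo {user_input}"                 # would be interpreted by the shell if run with shell=True
safe = f"echo {shlex.quote(user_input)}"      # quoted so it stays a single literal argument
print(unsafe)
print(safe)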
3
1
78,522,213
2024-5-23
https://stackoverflow.com/questions/78522213/how-to-use-elements-created-in-a-kivy-file-to-create-another-element-in-another
I'm creating a button in StackLayout in file info_options.kv. Now I want to call a function in class InfoOptionsPanel whenever this button is pressed. I'm able to call this funtion when I'm replacing return MainPanel() with return InfoOptionsPanel() i.e. when I'm not using the layout created by MainPanel, but when I'm using MainPanel class for gui then button gives below, also note that there is no problem in GUI even when using MainPanel for GUI. error thrown File "C:\Users\..\AppData\Local\Programs\Python\Python312\Lib\site-packages\kivy\lang\builder.py", line 60, in custom_callback exec(__kvlang__.co_value, idmap) File "K:\path\to\project\Info_options.kv", line 5, in <module> on_press: self.parent.on_press() ^^^^^^^^^^^^^^^^ AttributeError: 'InfoOptionsPanel' object has no attribute 'on_press' info.kv <InfoPanel@BoxLayout> orientation: 'vertical' size_hint_y: 0.8 pos_hint: {"top": 1} Label: id: info_label text:"Click button" font_size:50 info_options.kv <InfoOptionsPanel@StackLayout>: orientation: 'lr-tb' Button: text: 'Button' on_press: self.parent.on_press() size_hint :.7, 0.1 main_screen.kv <MainPanel@BoxLayout>: orientation: 'vertical' AnchorLayout: anchor_x: 'left' anchor_y: 'top' InfoPanel: size: 100, 100 AnchorLayout: anchor_x: 'right' anchor_y: 'bottom' InfoOptionsPanel: size: 100, 100 main.py from kivy.app import App from kivy.lang.builder import Builder from kivy.uix.boxlayout import BoxLayout from kivy.uix.stacklayout import StackLayout import os kivy_design_files = ["Info_options", "Info", "main_screen",] for kv_file in kivy_design_files: Builder.load_file(os.path.join(os.path.abspath(os.getcwd()), kv_file + ".kv")) class InfoPanel(BoxLayout): pass class InfoOptionsPanel(StackLayout): def on_press(self): print("Button pressed\n"*55) class MainPanel(BoxLayout): pass class MyApp(App): def build(self): return MainPanel() if __name__ == '__main__': MyApp().run()
You don't need to inherit from Kivy's layout classes in both main.py and kivy files. Either remove inheritance from main.py or from your kivy files. For main.py, class InfoPanel(): # don't inherit from BoxLayout pass class InfoOptionsPanel(): # don't inherit from StackLayout def on_press(self): print("Button pressed\n"*55) class MainPanel(): # don't inherit from BoxLayout pass Or for kivy files (do the same in each), <InfoOptionsPanel>: # removed @StackLayout orientation: 'lr-tb' Button: text: 'Button' on_press: root.on_press() size_hint :.7, 0.1 I would suggest you to remove inheritance from main.py and keep python code nice and clean. Kivy files are meant to deal with UI, so take advantage of that. Also, use root.on_press() instead of self.parent.on_press(). Both will work regardless, but root clarifies that you're referring to the root widget. self.parent would refer to some other widget if you add multiple parent widgets or layouts to Button in the future.
2
1
78,522,329
2024-5-23
https://stackoverflow.com/questions/78522329/how-to-distribute-pandas-dataframe-rows-evenly-across-timestamps-based-on-value
E.g. a DF which contains the number of executions across timestamps. DateTime Execution 0 2023-04-03 07:00:00 4 1 2023-04-03 10:00:00 1 2 2023-04-03 12:00:00 1 3 2023-04-03 14:00:00 1 4 2023-04-03 18:00:00 1 <class 'pandas.core.frame.DataFrame'> RangeIndex: 5080 entries, 0 to 5079 Below is the output I'm trying to achieve: DateTime Execution 0 2023-04-03 07:00:00 1 1 2023-04-03 08:00:00 1 2 2023-04-03 09:00:00 1 3 2023-04-03 10:00:00 1 4 2023-04-03 10:00:00 1 5 2023-04-03 12:00:00 1 6 2023-04-03 14:00:00 1 7 2023-04-03 18:00:00 1 So I want to distribute each execution count greater than 1 evenly across hourly timestamps. I tried: How to divide up 'supply' evenly among rows in a dataframe by rank in Python? but this doesn't give the desired output. I also tried this, but it only covers arranging dataframe rows based on the values in a given column.
Use Index.repeat with DataFrame.loc for repeat rows, set 1 and add hours by to_timedelta with GroupBy.cumcount: #if string repr of datetimes df['DateTime'] = pd.to_datetime(df['DateTime']) out = (df.loc[df.index.repeat(df['Execution'])] .assign(Execution=1, DateTime = lambda x: x['DateTime'] + pd.to_timedelta(x.groupby(level=0).cumcount(), unit='H')) .reset_index(drop=True)) print (out) DateTime Execution 0 2023-04-03 07:00:00 1 1 2023-04-03 08:00:00 1 2 2023-04-03 09:00:00 1 3 2023-04-03 10:00:00 1 4 2023-04-03 10:00:00 1 5 2023-04-03 12:00:00 1 6 2023-04-03 14:00:00 1 7 2023-04-03 18:00:00 1
3
1
78,519,636
2024-5-22
https://stackoverflow.com/questions/78519636/lookup-by-datetime-in-timestamp-index-does-not-work
Consider a date-indexed DataFrame: d0 = datetime.date(2024,5,5) d1 = datetime.date(2024,5,10) df0 = pd.DataFrame({"a":[1,2],"b":[10,None],"c":list("xy")}, index=[d0,d1]) df0.index Index([2024-05-05, 2024-05-10], dtype='object') Note that df0.index.dtype is object. Now, lookup works for date: df0.loc[d0] a 1 b 10.0 c x Name: 2024-05-05, dtype: object but both df0.loc[str(d0)] and df0.loc[pd.Timestamp(d0)] raise KeyError. This seems to be reasonable. However, consider df1 = df0.reindex(pd.date_range(d0,d1)) df1.index DatetimeIndex(['2024-05-05', '2024-05-06', '2024-05-07', '2024-05-08', '2024-05-09', '2024-05-10'], dtype='datetime64[ns]', freq='D') Note that df1.index.dtype is datetime64. Now, lookup works for both df1.loc[pd.Timestamp(d0)] (as expected) and df1.loc[str(d0)] (why?!) but not for df1.loc[d0] (if it works for a string, why not date?!) Is this the expected behavior (a bug with tenure)? Is this intentional? PS. Reported.
Looking at the source code it's how pandas implements .loc for DateTimeIndex: https://github.com/pandas-dev/pandas/blob/d9cdd2ee5a58015ef6f4d15c7226110c9aab8140/pandas/core/indexes/datetimes.py#L596 It's implemented for: datetime.datetime and np.datetime64 - https://github.com/pandas-dev/pandas/blob/d9cdd2ee5a58015ef6f4d15c7226110c9aab8140/pandas/core/arrays/datetimes.py#L211C28-L211C51 str datetime.timedelta - raises TypeError datetime.time But NOT for datetime.date
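As a practical workaround (a small self-contained sketch, not an official pandas recommendation), convert the date explicitly before indexing a DatetimeIndex:
import datetime
import pandas as pd

d0 = datetime.date(2024, 5, 5)
df1 = pd.DataFrame({"a": [1, 2]}, index=pd.date_range(d0, periods=2))

print(df1.loc[pd.Timestamp(d0)])  # works: Timestamp is handled
print(df1.loc[str(d0)])           # works: strings are parsed
# df1.loc[d0]                     # KeyError: datetime.date is not handled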
2
2
78,519,562
2024-5-22
https://stackoverflow.com/questions/78519562/can-we-print-a-text-only-once-that-is-inside-a-loop-without-making-it-print-agai
I am trying to make a countdown. When I use a while loop to achieve that, I also need to print the countdown, but what happens is that the countdown takes place on separate lines, printing the text over and over again. I want the loop to print the text only once but keep changing the value of countdown until the countdown is over. My code is: clearing = True countdown = 3 while clearing: while countdown > 0: print(f"Clearing console in {countdown} second(s)") time.sleep(1) countdown -= 1 os.system('clear') clearing = False and this is the result I am currently achieving in the console: Clearing console in 3 second(s) Clearing console in 2 second(s) Clearing console in 1 second(s) As you can see, the print repeats every time until countdown == 0. What I want is for the loop to print the text only once but keep changing the value of countdown inside the text, something like this: Clearing console in '3 then 2 then 1' second(s)
I am posting the answer from the comments: clearing = True countdown = 3 while clearing: while countdown > 0: print(f"Clearing console in {countdown} second(s)", end='\r', flush=True) time.sleep(1) countdown -= 1 os.system('clear') clearing = False I also found another way to achieve this effect: clearing = True countdown = 3 while clearing: while countdown > 0: os.system('cls') print(f"Exiting in {countdown} second(s)") time.sleep(1) countdown -= 1 os.system('cls') os.system('cls') clearing = False
2
0
78,519,106
2024-5-22
https://stackoverflow.com/questions/78519106/dataframe-replace-nan-whith-random-in-range
I have a DataFrame in Python with NaN, like this: import pandas as pd details = { 'info1' : [10,None,None,None,None,None,15,None,None,None,5], 'info2' : [15,None,None,None,10,None,None,None,None,None,20], } df = pd.DataFrame(details) print(df) info1 info2 0 10 15 1 nan nan 2 nan nan 3 nan nan 4 nan 10 5 nan nan 6 15 nan 7 nan nan 8 nan nan 9 nan nan 10 5 20 How can I replace the NaNs with random numbers (e.g., uniform) in a specific range (based on the rows that have values), like this:
For a vectorial solution, directly call np.random.uniform with the ffill/bfill as boundaries: import numpy as np df[:] = np.random.uniform(df.ffill(), df.bfill()) Output (with np.random.seed(0)): info1 info2 0 10.000000 15.000000 1 13.013817 12.275584 2 12.118274 11.770529 3 12.187936 10.541135 4 14.818314 10.000000 5 13.958625 15.288949 6 15.000000 19.255966 7 14.289639 10.871293 8 14.797816 18.326198 9 7.218432 18.700121 10 5.000000 20.000000 Note As pointed out by @wjandrea, the behavior of uniform is not officially supported when low > high*. If you want a robust solution, use my original approach with an intermediate array and sort: import numpy as np # ensure the low boundary is before the high tmp = np.sort(np.dstack([df.ffill(), df.bfill()]), axis=2) # generate random numbers between low and high df[:] = np.random.uniform(tmp[..., 0], tmp[..., 1]) (*) If high < low, the results are officially undefined and may eventually raise an error, i.e. do not rely on this function to behave when passed arguments satisfying that inequality condition.
2
2
78,515,954
2024-5-22
https://stackoverflow.com/questions/78515954/suppressing-recommendations-on-attributeerror-python-3-12
I like to use __getattr__ to give objects optional properties while avoiding issues with None. This is a simple example: from typing import Any class CelestialBody: def __init__(self, name: str, mass: float | None = None) -> None: self.name = name self._mass = mass return None def __getattr__(self, name: str) -> Any: if f"_{name}" not in self.__dict__: raise AttributeError(f"Attribute {name} not found") out = getattr(self, f"_{name}") if out is None: raise AttributeError(f"Attribute {name} not set for {self.name}") return out My problem is that Python tries to be nice when I raise my custom AttributeError and exposes the "private" attribute for which I am creating the interface: AttributeError: Attribute mass not set for Earth. Did you mean: '_mass'? Is there a standard way to suppress these recommendations?
The name recommendation logic is triggered only if the name attribute of the AttributeError is not None: elif exc_type and issubclass(exc_type, (NameError, AttributeError)) and \ getattr(exc_value, "name", None) is not None: wrong_name = getattr(exc_value, "name", None) suggestion = _compute_suggestion_error(exc_value, exc_traceback, wrong_name) if suggestion: self._str += f". Did you mean: '{suggestion}'?" So you can easily disable the suggestion by setting name to None when instantiating an AttributeError: if out is None: raise AttributeError(f"Attribute {name} not set for {self.name}", name=None) Demo here
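Putting it together with the class from the question (a sketch; the name keyword argument requires Python 3.10+):
from typing import Any

class CelestialBody:
    def __init__(self, name: str, mass: float | None = None) -> None:
        self.name = name
        self._mass = mass

    def __getattr__(self, name: str) -> Any:
        if f"_{name}" not in self.__dict__:
            raise AttributeError(f"Attribute {name} not found")
        out = self.__dict__[f"_{name}"]
        if out is None:
            # passing name=None keeps the traceback from appending "Did you mean: '_mass'?"
            raise AttributeError(f"Attribute {name} not set for {self.name}", name=None)
        return out

earth = CelestialBody("Earth")
earth.mass  # AttributeError: Attribute mass not set for Earth (no suggestion appended)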
2
2
78,515,236
2024-5-22
https://stackoverflow.com/questions/78515236/longest-repeating-subsequence-edge-cases
Problem While solving the Longest Repeating Subsequence problem using bottom-up dynamic programming, I started running into an edge case whenever a letter was repeated an odd number of times. The goal is to find the longest subsequence that occurs twice in the string using elements at different indices. The ranges can overlap, but the indices should be disjoint (i.e., str[1], str[4] and str[2], str[6] can be a solution, but not str[1], str[2] and str[2], str[3]). EDIT: Misunderstood example. (See the comment) Minimum Reproducible Example s = 'AXXXA' n = len(s) dp = [['' for i in range(n + 1)] for j in range(n + 1)] for i in range(1, n + 1): for j in range(1, n + 1): if (i != j and s[i - 1] == s[j - 1]): dp[i][j] = dp[i - 1][j - 1] + s[i - 1] else: dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]) print(dp[n][n]) Question Any pointers on how to avoid this? With input s = 'AXXXA', the answer should be either A or X, but the final result returns XX, apparently pairing up the third X with both the first X and the second X. False Start I don't want to add a check on a match (s[i - 1] == s[j - 1]) to see if s[i - 1] in dp[i - 1][j - 1] because another input might be something like AAJDDAJJTATA, which must add the A twice.
Actually, your initial algorithm and its answer are correct (... but this is a good question because others might be confused about what an LRS means). Given your input (in), the subsequences (s1, s2) are: in: AXXXA s1: XX s2: XX So XX (length 2) is indeed the correct answer here. X would be the correct answer for the problem's non-overlapping version, where the ranges - not just individual indices - must be disjoint.
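To make the two definitions concrete, here is a small brute-force checker (exponential, only suitable for tiny inputs) comparing the per-position-distinct rule used by the DP with the fully-disjoint variant:
from itertools import combinations

def lrs_bruteforce(s, disjoint=False):
    # Try lengths from longest to shortest; return the first repeated subsequence found.
    n = len(s)
    for k in range(n, 0, -1):
        for idx_a in combinations(range(n), k):
            sub_a = ''.join(s[i] for i in idx_a)
            for idx_b in combinations(range(n), k):
                if ''.join(s[i] for i in idx_b) != sub_a:
                    continue
                if disjoint:
                    ok = set(idx_a).isdisjoint(idx_b)                   # no shared indices at all
                else:
                    ok = all(a != b for a, b in zip(idx_a, idx_b))      # DP rule: indices differ position-wise
                if ok:
                    return sub_a
    return ''

print(lrs_bruteforce('AXXXA'))                 # XX
print(lrs_bruteforce('AXXXA', disjoint=True))  # A  (length 1 is the best the fully-disjoint variant allows here)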
3
0
78,481,278
2024-5-15
https://stackoverflow.com/questions/78481278/splitting-html-file-and-saving-chunks-using-langchain
I'm very new to LangChain, and I'm working with around 100-150 HTML files on my local disk that I need to upload to a server for NLP model training. However, I have to divide my information into chunks because each file is only permitted to have a maximum of 20K characters. I'm trying to use the LangChain library to do so, but I'm not being successful in splitting my files into my desired chunks. For reference, I'm using this URL: http://www.hadoopadmin.co.in/faq/ Saved locally as HTML only. It's a Hadoop FAQ page that I've downloaded as an HTML file onto my PC. There are many questions and answers there. I've noticed that sometimes, for some files, it gets split by a mere title, and another split is the paragraph following that title. But my desired output would be to have the title and the specific paragraph or following text from the body of the page, and as metadata, the title of the page. I'm using this code: from langchain_community.document_loaders import UnstructuredHTMLLoader from langchain_text_splitters import HTMLHeaderTextSplitter # Same Example with the URL http://www.hadoopadmin.co.in/faq/ Saved Locally as HTML Only dir_html_file='FAQ – BigData.html' data_html = UnstructuredHTMLLoader(dir_html_file).load() headers_to_split_on = [ ("h1", "Header 1")] html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on) html_header_splits = html_splitter.split_text(str(data_html)) But is returning a bunch of weird characters and not splitting the document at all. This is an output: [Document(page_content='[Document(page_content=\'BigData\\n\\n"You can have data without information, but you cannot have information without Big data."\\n\\[email protected]\\n\\n+91-8147644946\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nToggle navigation\\n\\nHome\\n\\nBigData\\n\\n\\tOverview of BigData\\n\\tSources of BigData\\n\\tPros & Cons of BigData\\n\\tSolutions of BigData\\n\\nHadoop Admin\\n\\n\\tHadoop\\n\\t\\n\\t\\tOverview of HDFS\\n\\t\\tOverview of MapReduce\\n\\t\\tApache YARN\\n\\t\\tHadoop Architecture\\n\\t\\n\\n\\tPlanning of Hadoop Cluster\\n\\tAdministration and Maintenance\\n\\tHadoop Ecosystem\\n\\tSetup HDP cluster from scratch\\n\\tInstallation and Configuration\\n\\tAdvanced Cluster Configuration\\n\\tOverview of Ranger\\n\\tKerberos\\n\\t\\n\\t\\tInstalling kerberos/Configuring the KDC and Enabling Kerberos Security\\n\\t\\tConfigure SPNEGO Authentication for Hadoop\\n\\t\\tDisabled kerberos via ambari\\n\\t\\tCommon issues after Disabling kerberos via Ambari\\n\\t\\tEnable https for ambari Server\\n\\t\\tEnable SSL or HTTPS for Oozie Web UI\\n\\nHadoop Dev\\n\\n\\tSolr\\n\\t\\n\\t\\tSolr Installation\\n\\t\\tCommits and Optimizing in Solr and its use for NRT\\n\\t\\tSolr FAQ\\n\\t\\n\\n\\tApache Kafka\\n\\t\\n\\t\\tKafka QuickStart\\n\\t\\n\\n\\tGet last access time of hdfs files\\n\\tProcess hdfs data with Java\\n\\tProcess hdfs data with Pig\\n\\tProcess hdfs data with Hive\\n\\tProcess hdfs data with Sqoop/Flume\\n\\nBigData Architect\\n\\n\\tSolution Vs Enterprise Vs Technical Architect’s Role and Responsibilities\\n\\tSolution architect certification\\n\\nAbout me\\n\\nFAQ\\n\\nAsk Questions\\n\\nFAQ\\n\\nHome\\n\\nFAQ\\n\\nFrequently\\xa0Asked Questions about Big Data\\n\\nMany questions about big data have yet to be answered in a vendor-neutral way. With so many definitions, opinions run the gamut. 
Here I will attempt to cut to the heart of the matter by addressing some key questions I often get from readers, clients and industry analysts.\\n\\n1) What is Big Data?\\n\\n1) What is Big Data?\\n\\nBig data” is an all-inclusive term used to describe vast amounts of information. In contrast to traditional structured data which is typically stored in a relational database, big data varies in terms of volume, velocity, and variety.\\n\\nBig data\\xa0is characteristically generated in large volumes – on the order of terabytes or exabytes of data (starts with 1 and has 18 zeros after it, or 1 million terabytes) per individual data set.\\n\\nBig data\\xa0is also generated with high velocity – it is collected at frequent intervals – which makes it difficult to analyze (though analyzing it rapidly makes it more valuable).\\n\\nOr in simple words we can say β€œBig Data includes data sets whose size is beyond the ability of traditional software tools to capture, manage, and process the data in a reasonable time.”\\n\\n2) How much data does it take to be called Big Data?\\n\\nThis question cannot be easily answered absolutely. Based on the infrastructure on the market the lower threshold is at about 1 to 3 terabytes.\\n\\nBut using Big Data technologies can be sensible for smaller databases as well, for example if complex mathematiccal or statistical analyses are run against a database. Netezza offers about 200 built in functions and computer languages like Revolution R or Phyton which can be used in such cases.\\n\\ My Expected output will look something like this: One chunk: Frequently Asked Questions about Big Data Many questions about big data have yet to be answered in a vendor-neutral way. With so many definitions, opinions run the gamut. Here I will attempt to cut to the heart of the matter by addressing some key questions I often get from readers, clients and industry analysts. 1) What is Big Data? β€œBig data” is an all-inclusive term used to describe vast amounts of information. In contrast to traditional structured data which is typically stored in a relational database, big data varies in terms of volume, velocity, and variety. Big data is characteristically generated in large volumes – on the order of terabytes or exabytes of data (starts with 1 and has 18 zeros after it, or 1 million terabytes) per individual data set. Big data is also generated with high velocity – it is collected at frequent intervals – which makes it difficult to analyze (though analyzing it rapidly makes it more valuable). Or in simple words we can say β€œBig Data includes data sets whose size is beyond the ability of traditional software tools to capture, manage, and process the data in a reasonable time.” 2) How much data does it take to be called Big Data? This question cannot be easily answered absolutely. Based on the infrastructure on the market the lower threshold is at about 1 to 3 terabytes. But using Big Data technologies can be sensible for smaller databases as well, for example if complex mathematical or statistical analyses are run against a database. Netezza offers about 200 built in functions and computer languages like Revolution R or Phyton which can be used in such cases. Metadata: FAQ Another Chunck 7) Where is the big data trend going? Eventually the big data hype will wear off, but studies show that big data adoption will continue to grow. With a projected $16.9B market by 2015 (Wikibon goes even further to say $50B by 2017), it is clear that big data is here to stay. 
However, the big data talent pool is lagging behind and will need to catch up to the pace of the market. McKinsey & Company estimated in May 2011 that by 2018, the US alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions. The emergence of big data analytics has permanently altered many businesses’ way of looking at data. Big data can take companies down a long road of staff, technology, and data storage augmentation, but the payoff – rapid insight into never-before-examined data – can be huge. As more use cases come to light over the coming years and technologies mature, big data will undoubtedly reach critical mass and will no longer be labeled a trend. Soon it will simply be another mechanism in the BI ecosystem. 8) Who are some of the BIG DATA users? From cloud companies like Amazon to healthcare companies to financial firms, it seems as if everyone is developing a strategy to use big data. For example, every mobile phone user has a monthly bill which catalogs every call and every text; processing the sheer volume of that data can be challenging. Software logs, remote sensing technologies, information-sensing mobile devices all pose a challenge in terms of the volumes of data created. The size of Big Data can be relative to the size of the enterprise. For some, it may be hundreds of gigabytes, for others, tens or hundreds of terabytes to cause consideration. 9) Data visualization is becoming more popular than ever. In my opinion, it is absolutely essential for organizations to embrace interactive data visualization tools. Blame or thank big data for that and these tools are amazing. They are helping employees make sense of the never-ending stream of data hitting them faster than ever. Our brains respond much better to visuals than rows on a spreadsheet. Companies like Amazon, Apple, Facebook, Google, Twitter, Netflix and many others understand the cardinal need to visualize data. And this goes way beyond Excel charts, graphs or even pivot tables. Companies like Tableau Software have allowed non-technical users to create very interactive and imaginative ways to visually represent information. Metadata: FAQ My thought process is being able to gather all the information and split it into chunks, but I don't want titles without their following paragraphs separated, and I also want as much info as possible (max 20K characters) before creating another chunk. I would also like to save these chunks and their meta data. Is there a function in LangChain to do this? I am open to hearing not to do this in LangChain for efficiency reasons.
Check out the html_chunking package: pip install html_chunking The HTML chunking algorithm operates through a well-structured process that involves several key stages, each tailored to efficiently chunk and merge HTML content while adhering to a token limit. This approach is well suited to scenarios where token limitations are critical and accurate HTML parsing is paramount, especially in tasks like web automation or navigation where HTML content serves as input. Here's a demo: from html_chunking import get_html_chunks merged_chunks = get_html_chunks(your_html_string_here, max_tokens=1000, is_clean_html=True, attr_cutoff_len=25) merged_chunks The output should consist of several HTML chunks, where each chunk contains valid HTML code with preserved structure and attributes (from the root node all the way down to the current node), and any excessively long attributes are truncated to the specified length. See the html_chunking PyPI page and the GitHub page for more examples. For those investigating the best way of chunking HTML for web automation or other web-agent tasks, html_chunking is worth trying. LangChain (HTMLHeaderTextSplitter & HTMLSectionSplitter) and LlamaIndex (HTMLNodeParser) split text at the element level and add metadata for each header relevant to the chunk. However, they extract only the text content and exclude the HTML structure, attributes, and other non-text elements, limiting their use for tasks requiring the full HTML context. GitHub repo: https://github.com/KLGR123/html_chunking
2
1
78,498,481
2024-5-18
https://stackoverflow.com/questions/78498481/userwarning-plan-failed-with-a-cudnnexception-cudnn-backend-execution-plan-des
I'm trying to train a model with Yolov8. Everything was good but today I suddenly notice getting this warning apparently related to PyTorch and cuDNN. In spite the warning, the training seems to be progressing though. I'm not sure if it has any negative effects on the training progress. site-packages/torch/autograd/graph.py:744: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.) return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass What is the problem and how to address this? Here is the output of collect_env: Collecting environment information... PyTorch version: 2.3.0+cu118 Is debug build: False CUDA used to build PyTorch: 11.8 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.29.3 Libc version: glibc-2.31 Python version: 3.9.7 | packaged by conda-forge | (default, Sep 2 2021, 17:58:34) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe Nvidia driver version: 515.105.01 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] onnx==1.16.0 [pip3] onnxruntime==1.17.3 [pip3] onnxruntime-gpu==1.17.1 [pip3] onnxsim==0.4.36 [pip3] optree==0.11.0 [pip3] torch==2.3.0+cu118 [pip3] torchaudio==2.3.0+cu118 [pip3] torchvision==0.18.0+cu118 [pip3] triton==2.3.0 [conda] numpy 1.24.4 pypi_0 pypi [conda] pytorch-quantization 2.2.1 pypi_0 pypi [conda] torch 2.1.1+cu118 pypi_0 pypi [conda] torchaudio 2.1.1+cu118 pypi_0 pypi [conda] torchmetrics 0.8.0 pypi_0 pypi [conda] torchvision 0.16.1+cu118 pypi_0 pypi [conda] triton 2.1.0 pypi_0 pypi
June 2024 Solution: Upgrade torch version to 2.3.1 to fix it: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
14
4
78,502,481
2024-5-19
https://stackoverflow.com/questions/78502481/algorithm-for-transformation-of-an-1-d-gradient-into-a-special-form-of-a-2-d-gra
Assuming there is a 1-D array/list which defines a color gradient, I would like to use it in order to create a 2-D color gradient as follows. Let's for simplicity replace the color information with a single numerical value for an example of a 1-D array/list: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ] To keep the gradient progressing diagonally, with the largest next value progressing diagonally over the entire array, I would like to transform the 1-D sequence into a 2D-array with a deliberately chosen shape (i.e. width/height, i.e. number of rows x number of columns, where rows * columns == length of the 1-D gradient array) as follows: [[ 1 2 4 ] [ 3 6 7 ] [ 5 9 10 ] [ 8 12 13 ] [ 11 14 15 ]] or [[ 1 2 4 7 10 ] [ 3 6 9 12 13 ] [ 5 8 11 14 15 ]] or starting from a sequence: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16] to [[ 1 2 4 7 ] [ 3 6 9 11 ] [ 5 10 13 14 ] [ 8 12 15 16 ]] Is there a ready-to-use, out-of-the-box Python module or C library capable of performing such a reshaping of an array, or does this special case need to be coded by hand? And if coding the loops by hand is necessary, what would be the most efficient way of doing this, given that the sequence I would like to transform is 256³ elements in size? Is there perhaps ready-to-use code for such a reshaping/transformation already out there in the deep space of the Internet that I have failed to find when asking both search engines and LLMs?
General idea From what I can see, this can be done in three steps: Split the sequence in fragments as if they were diagonals of an array of a given shape. Separate the elements of each fragment by the parity of their index. Assemble a new array from the modified diagonal fragments. Step 1. Split the sequence into diagonal fragments I think it will be enough to find the stopping points, so then we can slice the sequence with them. For this, we can apply the cumulative sum to a sequence of diagonal lengths: import numpy as np from numba import njit @njit def flat_diagonal_stops(height, width): '''Return a sequence of breakpoints separating a sequence of length height*width into a sequence of matrix diagonals of the shape (height, width) ''' min_dim = min(height, width) lengths = np.empty(height + width, dtype='int') lengths[:min_dim] = [*range(min_dim)] # diagonal lengths in the lower triangle lengths[min_dim:1-min_dim] = min_dim # diagonal lengths in the main body lengths[:-min_dim:-1] = lengths[1:min_dim] # diagonal lengths in the upper triangle return lengths.cumsum() Step 2. Separate elements by index parity A sequence transformation like this: (0, 1, 2, 3, 4, 5) >>> (0, 2, 4, 5, 3, 1) is actually a separation of elements by the parity of their positional index. Elements with an even index are shifted to the left, while the others - to the right in reverse order: @njit def separate_by_index_parity(arr): '''Return a numpy.ndarray filled with elements of arr, first those in even-numbered positions, then those in odd-numbered positions in reverse order ''' out = np.empty_like(arr) middle = sum(divmod(len(out), 2)) out[:middle] = arr[::2] out[:middle-len(out)-1:-1] = arr[1::2] return out Step 3. Assemble the fragments as diagonals of a new array To do this, we can create a flat representation of the required output and work within it by slicing diagonal positions: @njit def assemble_diagonals_separated_by_parity(arr, height, width): '''Return a matrix of shape (height, width) with elements of the given sequence arr arranged along diagonals, where the elements on each diagonal are separated by the parity of their index in them ''' out = np.empty(height*width, dtype=arr.dtype) stops = flat_diagonal_stops(height, width) out_step = width + 1 for offset, (start, stop) in enumerate(zip(stops[:-1], stops[1:]), 1-height): # out_from: the first element of an off-diagonal # out_to : next after the last element of an off-diagonal # out_step: a stride to get diagonal items out_from = -offset*width if offset < 0 else offset out_to = out_from + (stop-start)*out_step # stop - start is equal to the diagonal size out[out_from:out_to:out_step] = separate_by_index_parity(arr[start:stop]) return out.reshape(height, width) The result is a stacking of the modified sequence on diagonals from bottom to top and from left to right. To get other types of stacking, we combine flipping and transposing. 
For example, we can stack elements in the left-to-right and top-to-bottom order along anti-diagonals as follows (note the reverse order of dimensions (width, height) in a function call): height, width = 6, 4 arr = np.arange(1, 1+height*width) out = np.fliplr(assemble_diagonals_separated_by_parity(arr, width, height).T) print(out) [[ 1 2 4 7] [ 3 6 9 11] [ 5 10 13 15] [ 8 14 17 19] [12 18 21 22] [16 20 23 24]] Code for experiments import numpy as np from numba import njit @njit def flat_diagonal_stops(height, width): min_dim = min(height, width) lengths = np.empty(height + width, dtype='int') lengths[:min_dim] = [*range(min_dim)] lengths[min_dim:1-min_dim] = min_dim lengths[:-min_dim:-1] = lengths[1:min_dim] return lengths.cumsum() @njit def separate_by_index_parity(arr): out = np.empty_like(arr) middle = sum(divmod(len(out), 2)) out[:middle] = arr[::2] out[:middle-len(out)-1:-1] = arr[1::2] return out @njit def assemble_diagonals_separated_by_parity(arr, height, width): if height == 1 or width == 1: return arr.reshape(height, width).copy() out = np.empty(height*width, dtype=arr.dtype) stops = flat_diagonal_stops(height, width) out_step = width + 1 for offset, (start, stop) in enumerate(zip(stops[:-1], stops[1:]), 1-height): out_from = -offset*width if offset < 0 else offset out_to = out_from + (stop-start)*out_step out[out_from:out_to:out_step] = separate_by_index_parity(arr[start:stop]) return out.reshape(height, width) height, width = 6, 4 arr = np.arange(1, 1+height*width) out = np.fliplr(assemble_diagonals_separated_by_parity(arr, width, height).T) print(out) P.S. Stack the data directly along anti-diagonals Let's specialize the assembly function to work directly with anti-diagonals, so as not to get confused with flip-transpose tricks. In this case, we have a shorter slicing step, and the starting point will be along the top and right edges. Everything else remains unchanged: @njit def assemble_antidiagonals_separated_by_parity(arr, height, width): if height == 1 or width == 1: return arr.reshape(height, width).copy() out = np.empty(height*width, dtype=arr.dtype) stops = flat_diagonal_stops(height, width) out_step = width - 1 for offset, (start, stop) in enumerate(zip(stops[:-1], stops[1:])): out_from = offset if offset < width else (offset-width+2)*width-1 out_to = out_from + (stop-start)*out_step out[out_from:out_to:out_step] = separate_by_index_parity(arr[start:stop]) return out.reshape(height, width) >>> height, width = 8, 5 >>> arr = np.arange(1, 1+height*width) >>> out = assemble_antidiagonals_separated_by_parity(arr, height, width) >>> print(out) [[ 1 2 4 7 11] [ 3 6 9 13 16] [ 5 10 15 18 21] [ 8 14 20 23 26] [12 19 25 28 31] [17 24 30 33 35] [22 29 34 37 38] [27 32 36 39 40]]
5
4
78,501,763
2024-5-19
https://stackoverflow.com/questions/78501763/how-to-put-a-python-numpy-array-back-together-from-diagonals-obtained-from-split
As I to my surprise failed to find a Python numpy method able to put an array split into its right to left diagonals in first place back together from the obtained diagonals, I have put some code together for this purpose, but have now hard time to arrive at the right algorithm. The code below works for the 4x3 array, but does not for the 3x5 and the other ones: import numpy as np def get_array(height, width): return np.arange(1, 1 + height*width).reshape(height, width) array = get_array(5, 3) print( array ) print("---", end="") # Dimensions of a 2D array N, M = array.shape print(f" {N=} {M=} ", end="") rangeNMbeg = -M + 2 if N < M else -N + 1 rangeNMend = N + 1 if N < M else M flip = np.fliplr # ( another one: flipur ) diagonals = list( reversed ( [ np.diagonal( flip(array), offset=i) for i in range( rangeNMbeg, rangeNMend ) ] ) ) # Modification of each of diagonals (for example, double each first item) print() for diagonal in diagonals: pass #print(diagonal) print("---") # exit() # Create a new array and arrange the diagonals back to the array arrFromDiagonals = np.empty_like(array) # Fill the array with the modified diagonals for n in range(N): row = [] backCounter=0 for m, diagonal in enumerate( diagonals[n : n + M ] ) : print(f"{str(diagonal):12s} {n=} {m=} len={len(diagonal)}", end = " ") if len(diagonal) > m and n < M: print(f"len > m ", end = " > ") print(f"diagonal[{-1-m=}] = {diagonal[-1-m]}", end = " ") row.append( diagonal[-1- m ]) elif len(diagonal) == m : print(f"len===m ", end = " > ") print(f"diagonal[{0}] = {diagonal[0]}", end = " ") row.append( diagonal[0] ) else: print(f"l <<< m ", end = " > ") print(f"diagonal[{-1}] = {diagonal[-1]}", end = " ") row.append( diagonal[-1] ) print(row) arrFromDiagonals[n,:] = np.array( row ) print(arrFromDiagonals) and outputs: [[ 1 2 3] [ 4 5 6] [ 7 8 9] [10 13 12] [13 14 15]] instead of [[ 1 2 3], [ 4 5 6], [ 7 8 9], [10 11 12], [13 14 15]] Wanted: a general approach to work on arrays using indices being one-based number of a diagonal (or anti-diagonal) and a one-based index of an element of the diagonal from the perspective of a diagonal root-element on the array edges. This approach will allow reading and writing the diagonals as easy as reading and writing rows and columns.
Numpy 1.25 Correcting the initial approach First, let's make the initial approach work. There are two important parts: splitting the data into diagonals and merging them back together. The smallest and largest diagonal offsets are -(hight-1) and width-1 respectively. So we can simplify splitting into diagonals as follows: def get_diagonals(array): '''Return all the top-right to bottom-left diagonals of an array starting from the top-left corner. ''' height, width = array.shape fliplr_arr = np.fliplr(array) return [fliplr_arr.diagonal(offset).copy() # Starting in NumPy 1.9 diagonal returns a read-only view on the original array for offset in range(width-1, -height, -1)] Having all diagonals collected in a list, let's numerate them by their index. Now, for all diagonals with index less then the array width, the index of an element within the diagonal is equal to its first index within the array. For other diagonals, a position of an element within the diagonal plus the height of the diagonal starting point gives the height of the element within the array. The diagonal starting point is equal to the difference of the diagonal number and the last possible index within the second array dimension (see the picture below). So we can rewrite the second part of the code as follows: def from_diagonals(diagonals, shape, dtype): arr = np.empty(shape, dtype) height, width = shape for h in range(height): arr[h, :] = [ diagonal[h if d < width else h - (d - (width-1))] for d, diagonal in enumerate(diagonals[h:h+width], start=h) ] return arr Now the whole process of work is as follows: arr = ... diagonals = get_diagonals(arr) for diag in diagonals: ... new_arr = from_diagonals(diagonals, arr.shape, arr.dtype) Working with indexes We can create a function to reach data by the diagonal indexes like this: def get_diagonal_function(height, width, base=0, antidiagonal=False): from functools import partial index = np.mgrid[:height, :width] if antidiagonal: index = index[..., ::-1] # flip horizontal indices left-to-right _diag = partial(index.diagonal, axis1=1, axis2=2) max_offset = width - 1 shift = max_offset + base diagonal = lambda x: _diag(shift - x) diagonal.min = base diagonal.max = (height-1) + (width-1) + base diagonal.off = _diag return diagonal Example of extracting all anti-diagonals: diagonal = get_diagonal_function(*arr.shape, base=1, antidiagonal=True) diagonals = [arr[*diagonal(x)] for x in range(diagonal.min, 1+diagonal.max)] main_antidiagonal = diagonal.off(0) Example of flipping data along anti-diagonals: arr = [ [ 1, 2, 4], [ 3, 5, 7], [ 6, 8, 10], [ 9, 11, 13], [12, 14, 15] ] arr = np.array(arr) diagonal = get_diagonal_function(*arr.shape, antidiagonal=True) for x in range(1+diagonal.max): index = tuple(diagonal(x)) arr[index] = arr[index][::-1] print(arr) [[ 1 3 6] [ 2 5 9] [ 4 8 12] [ 7 11 14] [10 13 15]] Converting coordinates Warning: The code below has not been properly tested; it is not intended for fast calculations; it is just an ad hoc proof of concept. In case of anti-diagonals as described in the original post, converting can be performed as follows (see figure above for reference): def diag_to_array(d, x, shape, direction='top-bottom,right-left'): height, width = shape h = x if d < width else x + (d - (width-1)) w = d - h return h, w In general, converting resembles an affine transformation (with cutting at the edges, if I may say so). But we need to know the direction and ordering of diagonals, of which there can be 16 options. 
They can be given by vectors indicating the direction of counting diagonals and elements along them. These vectors can also be used to determine a new origin. def get_origin(d, e, shape): height, width = np.array(shape) - 1 # max coordinate values along dimensions match d, e: case ((1, x), (y, 1)) | ((x, 1), (1, y)): origin = (0, 0) case ((1, x), (y, -1)) | ((x, -1), (1, y)): origin = (0, width) case ((-1, x), (y, 1)) | ((x, 1), (-1, y)): origin = (height, 0) case ((-1, x), (y, -1)) | ((x, -1), (-1, y)): origin = (height, width) return np.array(origin) Notes: d is one of the {(1, 0), (-1, 0), (0, 1), (0, -1)} vectors showing the direction of the diagonal number coordinate; e is one of the {(1, 1), (1, -1), (-1, 1), (-1, -1)} vectors showing the direction of numbering elements along the diagonal. As for coordinate transformations, it seems more convenient to implement them in a class: class Transfomer: def __init__(self, d, e, shape): self.d = np.array(d) self.e = np.array(e) self.shape = np.array(shape) self.A = np.stack([self.d, self.e]) self.origin = get_origin(d, e, shape) self.edge = abs(self.d @ self.shape) - 1 def diag_to_array(self, ndiagonal, element): '''Return the array coordiantes where: ndiagonal: the number of a diagonal element: the element index on the diagonal ''' if ndiagonal > self.edge: element = element + ndiagonal - self.edge elif ndiagonal < 0: element = element - ndiagonal diag_coords = np.array([ndiagonal, element]) array_coords = diag_coords @ self.A + self.origin return array_coords def array_to_diag(self, *args, **kwargs): raise NotImplementedError Example: d = (0, 1) # take anti-diagonals from left to right starting from the top-left corner e = (1, -1) # and index their elements from top-right to bottom-left arr = np.arange(5*3).reshape(5, 3) coords = Transfomer(d, e, arr.shape) diagonal = get_diagonal_function(*arr.shape, base=1, antidiagonal=True) diagonals = [arr[*diagonal(x)] for x in range(diagonal.min, 1+diagonal.max)] for ndiag, element in [(0, 0), (2, 1), (3, 0), (5, 1), (6, 0)]: array_point = coords.diag_to_array(ndiag, element) try: assert diagonals[ndiag][element] == arr[*array_point], f'{ndiag=}, {element=}' except AssertionError as e: print(e) Working with diagonals as a writeable view If the array structure is straight simple (i.e. a contiguous sequence of data with no stride tricks or complicated transpositions), we can try (with some caution) to force a diagonal view to be writeable. Then all changes will be made with the original data, and you don't have to reassemble the diagonals again. 
Let's do mirroring anti-diagonals, for example: def diag_range(shape, asnumpy=False): '''Returns a sequence of numbers arranged diagonally in a given shape as separate diagonals ''' diagonals = [] dmin, dmax = min(shape), max(shape) start = 1 for n in range(dmin): end = start + n +1 diagonals.append([*range(start, end)]) start = end for n in range(dmin, dmax): end = start + dmin diagonals.append([*range(start, end)]) start = end for n in range(dmin-1, 0, -1): end = start + n diagonals.append([*range(start, end)]) start = end if asnumpy: from numpy import array diagonals = list(map(array, diagonals)) return diagonals def from_diagonals(diagonals, shape, dtype): '''Returns an array constructed by the given diagonals''' arr = np.empty(shape, dtype) height, width = shape for h in range(height): row = [] for w, diag in enumerate(diagonals[h:h+width], start=h): index = h - (w >= width) * (w - (width-1)) row.append(diag[index]) arr[h, :] = row return arr h, w = shape = (6,3) arr = from_diagonals(diag_range(shape), shape, 'int') print('Initial array:'.upper(), arr, '', sep='\n') fliplr_arr = np.fliplr(arr) diagonals = [fliplr_arr.diagonal(i) for i in range(-h+1, w)] for d in diagonals: d.flags.writeable = True # use with caution d[:] = d[::-1] print('Array with flipped diagonals:'.upper(), arr, sep='\n') INITIAL ARRAY: [[ 1 2 4] [ 3 5 7] [ 6 8 10] [ 9 11 13] [12 14 16] [15 17 18]] ARRAY WITH FLIPPED DIAGONALS: [[ 1 3 6] [ 2 5 9] [ 4 8 12] [ 7 11 15] [10 14 17] [13 16 18]]
2
2
78,483,318
2024-5-15
https://stackoverflow.com/questions/78483318/building-python-packages-from-source-for-amd-gpu-hipcc-failed-with-exit-code-1
I am trying to install Python packages, like lietorch or pytorch-scatter with AMD support. I download the repository, git clone --recursive https://github.com/repo If I just try to install the package, I'll get an error, stating clang: error: cannot find HIP runtime; provide its path via '--rocm-path', or pass '-nogpuinc' to build without HIP runtime Therefore, I change the extra_compile_args inside setup.py, adding '-DCMAKE_CXX_FLAGS=--rocm-path=/opt/rocm' to the nvcc flags. The flags get passed and is executed by hipcc: /opt/rocm-6.0.2/bin/hipcc _various_compile_flags_ -DCMAKE_CXX_FLAGS=--rocm-path=/opt/rocm-6.0.2 _even_more_flags_ But it terminates with the error sh: 1: --rocm-path=/opt/rocm-6.0.2/llvm/bin/clang: not found sh: 1: --rocm-path=/opt/rocm-6.0.2/llvm/bin/clang: not found error: command '/opt/rocm-6.0.2/bin/hipcc' failed with exit code 127 Is there anything I can do to get the packages built? Do these packages have insufficient support for AMD and there is nothing I can do? Or is there a grave oversight on my part and an obvious solution? Some System Information: GPU: AMD Radeon RX 6800 XT (gfx1030) Environment Variables: ROCM_PATH=/opt/rocm ROCM_ROOT=/opt/rocm ROCM_HOME=/opt/rocm-6.0.2 CPATH=/opt/rocm-6.0.2/include:/opt/rocm-6.0.2/include/hip:/opt/rocm-6.0.2/include/rocrand:/opt/rocm-6.0.2/include/hiprand:/opt/rocm-6.0.2/include/roctracer:/opt/rocm-6.0.2/include/hipblas:/opt/rocm-6.0.2/include/hipsparse:/opt/rocm-6.0.2/include/hipfft:/opt/rocm-6.0.2/include/rocsolver HIP_PATH=/opt/rocm/hip HCC_HOME=/opt/rocm/hcc HIP_PLATFORM="amd" PYTORCH_ROCM_ARCH="gfx1030" HSA_OVERRIDE_GFX_VERSION=10.3.0 HIP_VISIBLE_DEVICES=0 I tried setting the rocm-path flag as described above and looked for help in github issues and Stack Overflow posts without success. I expect the packages to compile successfully and run on my hardware.
While searching for an answer I stumbled upon this Clang documentation. It mentions the Order of Precedence for HIP Path: Order of Precedence for HIP Path --hip-path compiler option HIP_PATH environment variable (use with caution) --rocm-path compiler option ROCM_PATH environment variable (use with caution) Default automatic detection (relative to Clang or at the default ROCm installation location) Including --hip-path=/opt/rocm inside the extra_compile_args let me compile the packages.
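For illustration only, the flag can go in the same place as the --rocm-path attempt from the question; the exact keys of extra_compile_args depend on each package's setup.py, so treat the structure below as an assumption:
extra_compile_args = {
    'cxx': ['-O3'],
    'nvcc': ['-O3', '--hip-path=/opt/rocm'],  # hipcc honours --hip-path first (precedence rule 1)
}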
2
1
78,512,202
2024-5-21
https://stackoverflow.com/questions/78512202/pytests-failing-on-teamcity-due-to-matplotlib-backend
I'm trying to run some Python unit tests on a remote build server using Teamcity. They fail when attempting to execute some matplotlib code. I get the following output in the Teamcity build logs, which seems to point towards the matplotlib backend as the culprit. XXXXX\stats.py:144: in PerformHypothesisTest fig, ax = plt.subplots(1, 1, figsize=(10, 6)) .venv\lib\site-packages\matplotlib\pyplot.py:1702: in subplots fig = figure(**fig_kw) .venv\lib\site-packages\matplotlib\pyplot.py:1022: in figure manager = new_figure_manager( .venv\lib\site-packages\matplotlib\pyplot.py:545: in new_figure_manager return _get_backend_mod().new_figure_manager(*args, **kwargs) .venv\lib\site-packages\matplotlib\backend_bases.py:3521: in new_figure_manager return cls.new_figure_manager_given_figure(num, fig) .venv\lib\site-packages\matplotlib\backend_bases.py:3526: in new_figure_manager_given_figure return cls.FigureCanvas.new_manager(figure, num) .venv\lib\site-packages\matplotlib\backend_bases.py:1811: in new_manager return cls.manager_class.create_with_canvas(cls, figure, num) .venv\lib\site-packages\matplotlib\backends\_backend_tk.py:479: in create_with_canvas with _restore_foreground_window_at_end(): C:\Python310\lib\contextlib.py:135: in __enter__ return next(self.gen) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ @contextmanager def _restore_foreground_window_at_end(): > foreground = _c_internal_utils.Win32_GetForegroundWindow() E ValueError: PyCapsule_New called with null pointer .venv\lib\site-packages\matplotlib\backends\_backend_tk.py:43: ValueError The tests run fine both: Locally on my PC from Pycharm On the build server when executed from the command line, i.e. by running python -m pytest I'm not super familiar with how Teamcity works and how to debug it, so I would appreciate any ideas as to what might be going wrong here. The build server is running the following versions: Python 3.10.0 Matplotlib 3.9.0 Pytest 8.2.1 If it is useful, the build server is using the 'tkagg' backend (from matplotlib.get_backend()). Update: Thanks for the responses. As snark says, the issue seems to be due to a bug in the most recent Matplotlib release (3.9.0). Until this is fixed, I've dealt with this by explicitly setting the Matplotlib backend to 'Agg' as suggested by BadCaffe and Lemmy. I did this programmatically via a pytest conftest.py file.
I had the same issue; it has something to do with the missing physical video output (on a server running nightly builds, you usually have no monitor connected). This is why it works on your private machine. matplotlib.use('Agg') should work, but if you have completely automatic nightly builds that pull down some repository and you cannot alter the code, look here: https://matplotlib.org/stable/users/explain/customizing.html#the-matplotlibrc-file You can place a "matplotlibrc" file in the user folder (depending on your OS) and simply put "backend : Agg" in there. This does the same thing as matplotlib.use('Agg') in changing the backend used, but you can do it without changing the code.
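A minimal sketch of the conftest.py approach mentioned in the question update (the backend must be set before pyplot is imported anywhere in the test session):
# conftest.py
import matplotlib

matplotlib.use("Agg")  # force the non-interactive backend for the whole pytest run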
2
1
78,511,506
2024-5-21
https://stackoverflow.com/questions/78511506/issues-with-publishing-and-subscribing-rates-for-h-264-video-streaming-over-rabb
I am working on a project to stream an H.264 video file using RabbitMQ (AMQP protocol) and display it in a web application. The setup involves capturing video frames, encoding them, sending them to RabbitMQ, and then consuming and decoding them on the web application side using Flask and Flask-SocketIO. However, I am encountering performance issues with the publishing and subscribing rates in RabbitMQ. I cannot seem to achieve more than 10 messages per second. This is not sufficient for smooth video streaming. I need help to diagnose and resolve these performance bottlenecks. Here is my code: Video Capture and Publishing Script: # RabbitMQ setup RABBITMQ_HOST = 'localhost' EXCHANGE = 'DRONE' CAM_LOCATION = 'Out_Front' KEY = f'DRONE_{CAM_LOCATION}' QUEUE_NAME = f'DRONE_{CAM_LOCATION}_video_queue' # Path to the H.264 video file VIDEO_FILE_PATH = 'videos/FPV.h264' # Configure logging logging.basicConfig(level=logging.INFO) @contextmanager def rabbitmq_channel(host): """Context manager to handle RabbitMQ channel setup and teardown.""" connection = pika.BlockingConnection(pika.ConnectionParameters(host)) channel = connection.channel() try: yield channel finally: connection.close() def initialize_rabbitmq(channel): """Initialize RabbitMQ exchange and queue, and bind them together.""" channel.exchange_declare(exchange=EXCHANGE, exchange_type='direct') channel.queue_declare(queue=QUEUE_NAME) channel.queue_bind(exchange=EXCHANGE, queue=QUEUE_NAME, routing_key=KEY) def send_frame(channel, frame): """Encode the video frame using FFmpeg and send it to RabbitMQ.""" ffmpeg_path = 'ffmpeg/bin/ffmpeg.exe' cmd = [ ffmpeg_path, '-f', 'rawvideo', '-pix_fmt', 'rgb24', '-s', '{}x{}'.format(frame.shape[1], frame.shape[0]), '-i', 'pipe:0', '-f', 'h264', '-vcodec', 'libx264', '-pix_fmt', 'yuv420p', '-preset', 'ultrafast', 'pipe:1' ] start_time = time.time() process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = process.communicate(input=frame.tobytes()) encoding_time = time.time() - start_time if process.returncode != 0: logging.error("ffmpeg error: %s", err.decode()) raise RuntimeError("ffmpeg error") frame_size = len(out) logging.info("Sending frame with shape: %s, size: %d bytes", frame.shape, frame_size) timestamp = time.time() formatted_timestamp = datetime.fromtimestamp(timestamp).strftime('%H:%M:%S.%f') logging.info(f"Timestamp: {timestamp}") logging.info(f"Formatted Timestamp: {formatted_timestamp[:-3]}") timestamp_bytes = struct.pack('d', timestamp) message_body = timestamp_bytes + out channel.basic_publish(exchange=EXCHANGE, routing_key=KEY, body=message_body) logging.info(f"Encoding time: {encoding_time:.4f} seconds") def capture_video(channel): """Read video from the file, encode frames, and send them to RabbitMQ.""" if not os.path.exists(VIDEO_FILE_PATH): logging.error("Error: Video file does not exist.") return cap = cv2.VideoCapture(VIDEO_FILE_PATH) if not cap.isOpened(): logging.error("Error: Could not open video file.") return try: while True: start_time = time.time() ret, frame = cap.read() read_time = time.time() - start_time if not ret: break frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) frame_rgb = np.ascontiguousarray(frame_rgb) # Ensure the frame is contiguous send_frame(channel, frame_rgb) cv2.imshow('Video', frame) if cv2.waitKey(1) & 0xFF == ord('q'): break logging.info(f"Read time: {read_time:.4f} seconds") finally: cap.release() cv2.destroyAllWindows() the backend (flask): app = Flask(__name__) CORS(app) socketio = 
SocketIO(app, cors_allowed_origins="*") RABBITMQ_HOST = 'localhost' EXCHANGE = 'DRONE' CAM_LOCATION = 'Out_Front' QUEUE_NAME = f'DRONE_{CAM_LOCATION}_video_queue' def initialize_rabbitmq(): connection = pika.BlockingConnection(pika.ConnectionParameters(RABBITMQ_HOST)) channel = connection.channel() channel.exchange_declare(exchange=EXCHANGE, exchange_type='direct') channel.queue_declare(queue=QUEUE_NAME) channel.queue_bind(exchange=EXCHANGE, queue=QUEUE_NAME, routing_key=f'DRONE_{CAM_LOCATION}') return connection, channel def decode_frame(frame_data): # FFmpeg command to decode H.264 frame data ffmpeg_path = 'ffmpeg/bin/ffmpeg.exe' cmd = [ ffmpeg_path, '-f', 'h264', '-i', 'pipe:0', '-pix_fmt', 'bgr24', '-vcodec', 'rawvideo', '-an', '-sn', '-f', 'rawvideo', 'pipe:1' ] process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) start_time = time.time() # Start timing the decoding process out, err = process.communicate(input=frame_data) decoding_time = time.time() - start_time # Calculate decoding time if process.returncode != 0: print("ffmpeg error: ", err.decode()) return None frame_size = (960, 1280, 3) # frame dimensions expected by the frontend frame = np.frombuffer(out, np.uint8).reshape(frame_size) print(f"Decoding time: {decoding_time:.4f} seconds") return frame def format_timestamp(ts): dt = datetime.fromtimestamp(ts) return dt.strftime('%H:%M:%S.%f')[:-3] def rabbitmq_consumer(): connection, channel = initialize_rabbitmq() for method_frame, properties, body in channel.consume(QUEUE_NAME): message_receive_time = time.time() # Time when the message is received # Extract the timestamp from the message body timestamp_bytes = body[:8] frame_data = body[8:] publish_timestamp = struct.unpack('d', timestamp_bytes)[0] print(f"Message Receive Time: {message_receive_time:.4f} ({format_timestamp(message_receive_time)})") print(f"Publish Time: {publish_timestamp:.4f} ({format_timestamp(publish_timestamp)})") frame = decode_frame(frame_data) decode_time = time.time() - message_receive_time # Calculate decode time if frame is not None: _, buffer = cv2.imencode('.jpg', frame) frame_data = buffer.tobytes() socketio.emit('video_frame', {'frame': frame_data, 'timestamp': publish_timestamp}, namespace='/') emit_time = time.time() # Time after emitting the frame # Log the time taken to emit the frame and its size rtt = emit_time - publish_timestamp # Calculate RTT from publish to emit print(f"Current Time: {emit_time:.4f} ({format_timestamp(emit_time)})") print(f"RTT: {rtt:.4f} seconds") print(f"Emit time: {emit_time - message_receive_time:.4f} seconds, Frame size: {len(frame_data)} bytes") channel.basic_ack(method_frame.delivery_tag) @app.route('/') def index(): return render_template('index.html') @socketio.on('connect') def handle_connect(): print('Client connected') @socketio.on('disconnect') def handle_disconnect(): print('Client disconnected') if __name__ == '__main__': consumer_thread = threading.Thread(target=rabbitmq_consumer) consumer_thread.daemon = True consumer_thread.start() socketio.run(app, host='0.0.0.0', port=5000) How can I optimize the publishing and subscribing rates to handle a higher number of messages per second? Any help or suggestions would be greatly appreciated! I attempted to use threading and multiprocessing to handle multiple frames concurrently and I tried to optimize the frame decoding function to make it faster but with no success.
First, I don't know RabbitMQ in great depth, but it should handle far more than 10 messages per second. You have some design issues. You read the video file into RGB frames via cv2 and re-encode them to H.264, even though the file is already H.264 encoded; that is pure overhead. Use PyAV to read the file packet-wise so you do not need the re-encode step when sending. Likewise, on the consumer side you launch a whole ffmpeg process for every frame when decoding, just as you do when encoding; use PyAV there as well to feed the packets into a decoder as a stream. That removes the per-frame process launch entirely. If you want to stay with subprocesses, start ffmpeg once and keep working with its pipes, but PyAV is far more developer-friendly and gives you more than just working with pipes.
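A rough sketch of the packet-wise publishing idea with PyAV follows. It is untested; the file path, exchange and routing key are copied from the question, everything else is my assumption about how you would wire it up:

    import struct, time
    import av      # PyAV
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    container = av.open('videos/FPV.h264')       # the file is already H.264, no re-encoding
    stream = container.streams.video[0]

    for packet in container.demux(stream):       # raw encoded packets straight from the file
        if packet.dts is None:                   # skip flush packets, they carry no data
            continue
        body = struct.pack('d', time.time()) + bytes(packet)
        channel.basic_publish(exchange='DRONE', routing_key='DRONE_Out_Front', body=body)

On the consumer side, a single av.CodecContext.create('h264', 'r') can be fed those bytes through its parse() and decode() methods, so no ffmpeg subprocess is started per frame.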
2
2
78,490,151
2024-5-16
https://stackoverflow.com/questions/78490151/autotokenizer-from-pretrained-took-forever-to-load
I used the following code to load my custom-trained tokenizer: from transformers import AutoTokenizer test_tokenizer = AutoTokenizer.from_pretrained('raptorkwok/cantonese-tokenizer-test') It took forever to load. Even if I replace the AutoTokenizer with PreTrainedTokenizerFast, it still loads forever. How to debug or fix this issue?
The problem was resolved by downgrading the transformers version from 4.41.0 to 4.28.1. Both pipeline() and from_pretrained() then load the tokenizer successfully within seconds.
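If you want to try the same workaround, the pin and the load look roughly like this (version number taken from the answer above; whether later releases also fix the slowdown is not verified here):

    # in the shell: pip install "transformers==4.28.1"
    from transformers import AutoTokenizer
    test_tokenizer = AutoTokenizer.from_pretrained('raptorkwok/cantonese-tokenizer-test')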
4
1
78,510,670
2024-5-21
https://stackoverflow.com/questions/78510670/fail-to-change-the-stacked-bar-chart-style-by-openpyxl-how-to-fix-it
Current Style Target Style When I generate a percentage stacked bar chart with openpyxl, I would like it to look like the target style in the attached image, but what I get is the current one. The code is below. I tried changing the chart.style number, but it doesn't work. Environment: Windows 10 + Python 3.10 (VS 2017) + Excel 365 def chart_add_A(filename): workbook = load_workbook(filename) worksheet = workbook['A'] chart_data = Reference(worksheet, min_row = 2, max_row = worksheet.max_row, min_col = 2, max_col = 3) chart_series = Reference(worksheet, min_row = 2, max_row = worksheet.max_row, min_col = 1, max_col = 1) chart_A = BarChart() chart_A.type = 'col' chart_A.style = 3 chart_A.grouping = "percentStacked" chart_A.title = 'A' chart_A.x_axis.title = 'Month' chart_A.y_axis.title = 'Count' chart_A.legend = None chart_A.dLbls=label.DataLabelList() chart_A.dLbls.showVal=True chart_A.showVal = True chart_A.width = 24 chart_A.height = 12 chart_A.add_data(chart_data) chart_A.set_categories(chart_series) worksheet.add_chart(chart_A, 'A0') workbook.save(filename) How can I save the chart to Excel with the target style? Thanks
As referenced in the openpyxl documentation, stacked bar charts should have their overlap set to 100. You need to add the following setting: chart_A.overlap = 100
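In the context of the chart_add_A function from the question, the new line sits naturally next to the grouping setting; only the overlap line is added, the rest is unchanged:

    chart_A.type = 'col'
    chart_A.grouping = "percentStacked"
    chart_A.overlap = 100   # required for stacked and percentStacked bar charts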
2
2
78,512,160
2024-5-21
https://stackoverflow.com/questions/78512160/read-extensionobject-opc-ua-python
I'm trying to read an OPC UA extension object in Python with the following code: from opcua import Client from opcua import ua connessione= 'opc.tcp://10.1.17.21:4840/' client = Client(connessione) try: client.connect() var = client.get_node('ns=1;s=VARIABLE_OPC_first_coiling_machine_production_list') before = var.get_value() client.load_type_definitions() after = var.get_value() print(before, '\n', after) client.disconnect() except Exception as e: print(e.__str__()) But in 'after' I get an undecipherable byte array. Example: ExtensionObject(TypeId:StringNodeId(ns=1;s=ENC_DATATYPE_OPC_coiling_machine_production_list), Encoding:1, 2311 bytes) I also tried the opcua-asyncio library, but nothing changed.
Try the opcua-asyncio library and call the load_data_type_definitions function on the client.
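A minimal sketch with asyncua (the package installed by opcua-asyncio); the endpoint and node id are copied from the question, the rest is an untested outline:

    import asyncio
    from asyncua import Client

    async def main():
        url = 'opc.tcp://10.1.17.21:4840/'
        async with Client(url=url) as client:
            # builds Python classes for the server-defined structures,
            # so ExtensionObjects are decoded into usable objects
            await client.load_data_type_definitions()
            var = client.get_node('ns=1;s=VARIABLE_OPC_first_coiling_machine_production_list')
            value = await var.read_value()
            print(value)

    asyncio.run(main())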
4
1
78,506,890
2024-5-20
https://stackoverflow.com/questions/78506890/how-can-i-clean-up-the-useless-python-packages
I tried to install the requirements for Grok on my server (I don’t have much disk space). dm_haiku==0.0.12 jax[cuda12-pip]==0.4.25 -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html numpy==1.26.4 sentencepiece==0.2.0 But I found they are so big that the disk quota was exceeded. Thus, I wanted to uninstall them. I used pip uninstall -r requirements.txt to uninstall these packages and deleted all the cache files by using rm -rf ~/.cache/*. However, the disk usage was still very high. How can I clean up these packages?
pip uninstall will remove only the requirements you pass to it, not all other things they depend on. List the full set of installed packages in your environment, like this: pip freeze Choose the subset you want to uninstall, and pass them to pip uninstall. pip uninstall --yes -r requirements-to-remove.txt This could possibly result in some packages being broken... use pip check to check for that (so you don't have to wait until a runtime error happens to discover that). pip check For example: # run a Python 3.11 environment in Docker docker run --rm -it python:3.11 bash # install 'lightgbm' and 'pandas' (which both depend on 'numpy') pip install lightgbm pandas # that pulls in other dependencies, like 'numpy' and 'pytz' pip freeze # lightgbm==4.3.0 # numpy==1.26.4 # pandas==2.2.2 # ... # if you just uninstall 'pandas', 'numpy' will still be in the environment pip uninstall --yes pandas pip freeze | grep numpy # numpy==1.26.4 # if you then uninstall 'numpy' too, it'll break 'lightgbm' pip uninstall --yes numpy python -c "import lightgbm" # ModuleNotFoundError: No module named 'numpy' # you could detect this with 'pip check' pip check # lightgbm 4.3.0 requires numpy, which is not installed. # scipy 1.13.0 requires numpy, which is not installed. # that could be fixed by reinstalling 'numpy' pip install numpy
2
2
78,513,078
2024-5-21
https://stackoverflow.com/questions/78513078/aggregation-on-sub-dataframes-defined-by-sets-of-indices-without-loop
Suppose I have a Pandas DataFrame; here is a simple example: import pandas as pd df = pd.DataFrame(columns=["A", "B"], data = [(1, 2), (4, 5), (7, 8), (10, 11)]) I have a set of indices; let's make it simple and random: inds = [(0, 1, 3), (0, 1, 2), (1, 2, 3)] I want to aggregate the data according to those indices in the following way: for instance, if the aggregation operation is the mean, I would obtain: A B df.loc[inds[0], "A"].mean() df.loc[inds[0], "B"].mean() df.loc[inds[1], "A"].mean() df.loc[inds[1], "B"].mean() df.loc[inds[2], "A"].mean() df.loc[inds[2], "B"].mean() Is there a way to perform this in pure pandas without writing a loop? This is very similar to a df.groupby and then .agg type of operation, but I did not find a way to create a GroupBy object from a custom set of indices.
Edit: showing how to achieve this with groupby, but surely "significantly simpler to think of this as a selection by index problem"; see the answer by @HenryEcker. Option 1 (reindex + groupby) s = pd.Series(inds).explode() out = df.reindex(s).groupby(s.index).mean() out A B 0 5.0 6.0 # i.e. A: (1+4+10)/3, B: (2+5+11)/3, etc. 1 4.0 5.0 2 7.0 8.0 Explanation Use inds to create a pd.Series (here: s), and apply series.explode. The index values function as group identifiers: # intermediate series ('group 0, 1, 2') 0 0 0 1 0 3 1 0 1 1 1 2 2 1 2 2 2 3 dtype: object Apply df.reindex with values from s, use df.groupby with s.index, and get groupby.mean. Option 2 (merge + groupby) out = ( df.merge( pd.Series(inds, name='g').explode(), left_index=True, right_on='g', how='right' ) .drop(columns=['g']) .groupby(level=0) .mean() ) # same result Explanation As with option 1, we create a pd.Series and explode it, but this time we add a name, which we need for the merge in the next step. Now, use df.merge with how=right to add the values from df using g values from our series and index from df as the keys. Finally, drop column 'g' (df.drop), apply df.groupby on the index (level=0), and get groupby.mean.
4
4
78,513,762
2024-5-21
https://stackoverflow.com/questions/78513762/validate-additional-info-using-pydantic-model
I'm a new user of FastAPI. I'm writing a small web application, and I'm wondering whether it's good practice to validate additional information, which is not directly related to the object itself, using the Pydantic model itself, for example checking whether a user with the given name already exists in the database: class CreateUser(BaseModel): model_config = ConfigDict(strict=True) username: str = Field(pattern=r"[0-9a-zA-Z!@#$%&*_.-]{3,}") password: str secret: str @field_validator("username") def validate_username(cls, value: str): # check if the user exists in the DB... # if not, return the username # if yes, raise an error
In my opinion you shouldn't do that. In FastAPI, to me, Pydantic acts as a gateway to the path operation function (whether as input or output). It deserializes, cleans (and/or converts) and validates the data so that you can rely on the data you receive being in good shape. The rest is passed on to the path operation function, or to the service layer if you have one. I would decouple the job of checking whether the username is available in the database from the Pydantic models.
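A minimal sketch of that decoupling; the CreateUser model comes from the question, while user_repository, its methods and the route path are placeholders invented here to illustrate the idea:

    from fastapi import FastAPI, HTTPException

    app = FastAPI()

    @app.post("/users", status_code=201)
    async def create_user(payload: CreateUser):
        # the existence check lives in the service/data layer, not in the Pydantic model
        if await user_repository.username_exists(payload.username):   # hypothetical helper
            raise HTTPException(status_code=409, detail="Username already taken")
        return await user_repository.create(payload)                  # hypothetical helper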
3
2