Dataset schema (column: dtype, min to max):
question_id: int64, 59.5M to 79.4M
creation_date: string, lengths 8 to 10
link: string, lengths 60 to 163
question: string, lengths 53 to 28.9k
accepted_answer: string, lengths 26 to 29.3k
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
78,944,702
2024-9-3
https://stackoverflow.com/questions/78944702/convert-logx-log2-to-log-2x-in-sympy
I received a sympy equation from a library. It makes extensive use of log2, but the output was converted to log(x)/log(2). This makes reading the results messy. I would like to have sympy simplify this equation again with a focus on using log2 directly where possible. How could this be done? Example: (log(A) / log(2)) * (log(B + C) / log(2))
Basically, you can just reverse the process: replace log(w) with log(2) * log2(w) and let sympy cancel all the factors of log(2) / log(2). The only wrinkle is that you don't want to substitute when you find log(2) itself, but that's easy enough. You just use a wildcard, and check if the matched value is literally 2 before doing the substitution: import sympy from sympy import log from sympy.codegen.cfunctions import log2 A, B, C = sympy.symbols("A, B, C") w = sympy.Wild("w") expr = (log(A) / log(2)) * (log(B + C) / log(2)) expr.replace(log(w), lambda w: log(2) if w==2 else log(2)*log2(w)) The result is log2(A)*log2(B + C) which is correct. [H/T @ThomasWeller for finding log2 buried in codegen.]
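To make the conversion reusable on any expression coming out of the library, here is a minimal sketch wrapping the replacement from the answer in a helper function; the helper name to_log2 is an illustrative choice, not part of the original post.
import sympy
from sympy import log
from sympy.codegen.cfunctions import log2

def to_log2(expr):
    """Rewrite log(x)/log(2) combinations in terms of log2(x)."""
    w = sympy.Wild("w")
    # Replace log(w) with log(2)*log2(w), except for literal log(2),
    # and let the log(2) factors cancel against the denominators.
    return expr.replace(log(w), lambda w: log(2) if w == 2 else log(2) * log2(w))

A, B, C = sympy.symbols("A, B, C")
print(to_log2((log(A) / log(2)) * (log(B + C) / log(2))))  # log2(A)*log2(B + C)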
3
2
78,944,749
2024-9-3
https://stackoverflow.com/questions/78944749/explode-polars-rows-on-multiple-columns-but-with-different-logic
I have this code, which splits a product column into a list, and then uses explode to expand it: import polars as pl import datetime as dt from dateutil.relativedelta import relativedelta def get_3_month_splits(product: str) -> list[str]: front, start_dt, total_m = product.rsplit('.', 2) start_dt = dt.datetime.strptime(start_dt, '%Y%m') total_m = int(total_m) return [f'{front}.{(start_dt+relativedelta(months=m)).strftime("%Y%m")}.3' for m in range(0, total_m, 3)] df = pl.DataFrame({ 'product': ['CHECK.GB.202403.12', 'CHECK.DE.202506.6', 'CASH.US.202509.12'], 'qty': [10, -20, 50], 'price_paid': [1400, -3300, 900], }) print(df.with_columns(pl.col('product').map_elements(get_3_month_splits, return_dtype=pl.List(str))).explode('product')) This currently gives shape: (10, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ product ┆ qty ┆ price_paid β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════β•ͺ════════════║ β”‚ CHECK.GB.202403.3 ┆ 10 ┆ 1400 β”‚ β”‚ CHECK.GB.202406.3 ┆ 10 ┆ 1400 β”‚ β”‚ CHECK.GB.202409.3 ┆ 10 ┆ 1400 β”‚ β”‚ CHECK.GB.202412.3 ┆ 10 ┆ 1400 β”‚ β”‚ CHECK.DE.202506.3 ┆ -20 ┆ -3300 β”‚ β”‚ CHECK.DE.202509.3 ┆ -20 ┆ -3300 β”‚ β”‚ CASH.US.202509.3 ┆ 50 ┆ 900 β”‚ β”‚ CASH.US.202512.3 ┆ 50 ┆ 900 β”‚ β”‚ CASH.US.202603.3 ┆ 50 ┆ 900 β”‚ β”‚ CASH.US.202606.3 ┆ 50 ┆ 900 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ However, I want to keep the total price paid the same. So after splitting the rows into several "sub categories", I want to change the table to this: shape: (10, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ product ┆ qty ┆ price_paid β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════β•ͺ════════════║ β”‚ CHECK.GB.202403.3 ┆ 10 ┆ 1400 β”‚ β”‚ CHECK.GB.202406.3 ┆ 10 ┆ 0 β”‚ β”‚ CHECK.GB.202409.3 ┆ 10 ┆ 0 β”‚ β”‚ CHECK.GB.202412.3 ┆ 10 ┆ 0 β”‚ β”‚ CHECK.DE.202506.3 ┆ -20 ┆ -3300 β”‚ β”‚ CHECK.DE.202509.3 ┆ -20 ┆ 0 β”‚ β”‚ CASH.US.202509.3 ┆ 50 ┆ 900 β”‚ β”‚ CASH.US.202512.3 ┆ 50 ┆ 0 β”‚ β”‚ CASH.US.202603.3 ┆ 50 ┆ 0 β”‚ β”‚ CASH.US.202606.3 ┆ 50 ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ i.e. only keeping the price_paid in the first expanded row. So my total price paid remains the same. The qty is okay to stay the way it is. I tried e.g. with_columns(price_arr=pl.col('product').cast(pl.List(pl.Float64))) but was then unable to add anything to first element of the list. Or with_columns(price_arr=pl.col(['product', 'price_paid']).map_elements(price_func)) but it did not seem possible to use map_elements on pl.col([...]).
Concat the appropriate number of trailing 0s to price_paid before calling .explode() on both product and price_paid at once: print( df.with_columns( pl.col("product").map_elements(get_3_month_splits, return_dtype=pl.List(str)) ) .with_columns( pl.concat_list( pl.col("price_paid"), pl.lit(0).repeat_by(pl.col("product").list.len() - 1) ) ) .explode("product", "price_paid") ) Output: shape: (10, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ product ┆ qty ┆ price_paid β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════β•ͺ════════════║ β”‚ CHECK.GB.202403.3 ┆ 10 ┆ 1400 β”‚ β”‚ CHECK.GB.202406.3 ┆ 10 ┆ 0 β”‚ β”‚ CHECK.GB.202409.3 ┆ 10 ┆ 0 β”‚ β”‚ CHECK.GB.202412.3 ┆ 10 ┆ 0 β”‚ β”‚ CHECK.DE.202506.3 ┆ -20 ┆ -3300 β”‚ β”‚ CHECK.DE.202509.3 ┆ -20 ┆ 0 β”‚ β”‚ CASH.US.202509.3 ┆ 50 ┆ 900 β”‚ β”‚ CASH.US.202512.3 ┆ 50 ┆ 0 β”‚ β”‚ CASH.US.202603.3 ┆ 50 ┆ 0 β”‚ β”‚ CASH.US.202606.3 ┆ 50 ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
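As a quick sanity check that the split keeps the total unchanged, the same pipeline can be bound to a name and the sums compared; this sketch only reuses the code already shown above.
out = (
    df.with_columns(
        pl.col("product").map_elements(get_3_month_splits, return_dtype=pl.List(str))
    )
    .with_columns(
        pl.concat_list(
            pl.col("price_paid"),
            pl.lit(0).repeat_by(pl.col("product").list.len() - 1),
        )
    )
    .explode("product", "price_paid")
)
# The total paid must be unchanged by the split (both sums are -1000 here).
assert out["price_paid"].sum() == df["price_paid"].sum()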
2
1
78,942,459
2024-9-3
https://stackoverflow.com/questions/78942459/bar-chart-with-slanted-lines-instead-of-horizontal-lines
I wish to display a barchart over a time series canvas, where the bars have width that match the duration and where the edges connect the first value with the last value. In other words, how could I have slanted bars at the top to match the data? I know how to make barcharts using either the last value (example 1) or the first value (example 2), but what I'm looking for are polygons that would follow the black line shown. Example 1 Example 2 Code: import pandas as pd from pandas import Timestamp import datetime import matplotlib.pyplot as plt import numpy as np # np.nan dd = {'Name': {0: 'A', 1: 'B', 2: 'C'}, 'Start': {0: Timestamp('1800-01-01 00:00:00'), 1: Timestamp('1850-01-01 00:00:00'), 2: Timestamp('1950-01-01 00:00:00')}, 'End': {0: Timestamp('1849-12-31 00:00:00'), 1: Timestamp('1949-12-31 00:00:00'), 2: Timestamp('1979-12-31 00:00:00')}, 'Team': {0: 'Red', 1: 'Blue', 2: 'Red'}, 'Duration': {0: 50*365-1, 1: 100*365-1, 2: 30*365-1}, 'First': {0: 5, 1: 10, 2: 8}, 'Last': {0: 10, 1: 8, 2: 12}} d = pd.DataFrame.from_dict(dd) d.dtypes d # set up colors for team colors = {'Red': '#E81B23', 'Blue': '#00AEF3'} # reshape data to get a single Date | is there a better way? def reshape(data): d1 = data[['Start', 'Name', 'Team', 'Duration', 'First']].rename(columns={'Start': 'Date', 'First': 'value'}) d2 = data[['End', 'Name', 'Team', 'Duration', 'Last']].rename(columns={'End': 'Date', 'Last': 'value'}) return pd.concat([d1, d2]).sort_values(by='Date').reset_index(drop=True) df = reshape(d) df.dtypes df plt.plot(df['Date'], df['value'], color='black') plt.bar(d['Start'], height=d['Last'], align='edge', width=list(+d['Duration']), edgecolor='white', linewidth=2, color=[colors[key] for key in d['Team']]) plt.show() plt.plot(df['Date'], df['value'], color='black') plt.bar(d['End'], height=d['First'], align='edge', width=list(-d['Duration']), edgecolor='white', linewidth=2, color=[colors[key] for key in d['Team']]) plt.show()
You can use Matplotlib's Axes.fill_between to generate these types of charts. Importantly, this will accurately represent the gaps between your rows where they exist, whereas the bar-based approach will make those gaps appear wider than they truly are unless you set the edge width of the bars to 0. Additionally, your data transformation is a pandas.lreshape, which is similar to performing multiple melt operations at the same time. import pandas as pd from pandas import Timestamp import matplotlib.pyplot as plt dd = pd.DataFrame({ 'Name': ['A', 'B', 'C'], 'Start': pd.to_datetime(['1800-01-01', '1850-01-01', '1950-01-01']), 'End': pd.to_datetime(['1849-12-31', '1949-12-31', '1979-12-31']), 'Team': ['Red', 'Blue', 'Red'], 'Duration': [50*365-1, 100*365-1, 30*365-1], 'First': [5, 10, 8], 'Last': [10, 8, 12] }) df = ( pd.lreshape(dd, groups={'Date': ['Start', 'End'], 'Value': ['First', 'Last']}) .sort_values('Date') ) colors = {'Red': '#E81B23', 'Blue': '#00AEF3'} fig, ax = plt.subplots() for team in df['Team'].unique(): ax.fill_between( df['Date'], df['Value'], where=(df['Team'] == team), color=colors[team], linewidth=0, ) ax.set_ylim(bottom=0) plt.show()
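An alternative sketch, not from the original answer, that skips the reshape entirely and draws one slanted-top polygon per row directly from Start/End and First/Last (it assumes the dd and colors objects defined above):
# Each fill_between call fills between y=0 and the line from (Start, First) to (End, Last).
fig, ax = plt.subplots()
for row in dd.itertuples():
    ax.fill_between(
        [row.Start, row.End],    # bar spans the duration
        [row.First, row.Last],   # slanted top edge
        color=colors[row.Team],
        linewidth=0,
    )
ax.set_ylim(bottom=0)
plt.show()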
2
2
78,942,670
2024-9-3
https://stackoverflow.com/questions/78942670/how-to-detect-a-circle-with-uncertain-thickness-and-some-noise-in-an-binary-imag
The input image is here : the input image I try to use cv2.HoughCircles in opencv-python to find the expected circle, but the result is noise as in this picture : result in param2=0.2 the code is: import cv2 import numpy as np img = cv2.imread('image.png') # apply GaussianBlur kernel_size = (15, 15) sigma = 0 blurred_image = cv2.GaussianBlur(img, kernel_size, sigma) gray = cv2.cvtColor(blurred_image, cv2.COLOR_BGR2GRAY) circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT_ALT, dp=1.5, minDist=20, param1=50, param2=0.2, minRadius=100, maxRadius=400) # draw the result if circles is not None: circles = np.uint16(np.around(circles)) for i in circles[0, :]: cv2.circle(img, (i[0], i[1]), i[2], (0, 255, 0), 2) cv2.circle(img, (i[0], i[1]), 1, (0, 0, 255), 3) # show the result cv2.imshow('used image', gray) cv2.imshow('detected circles', img) cv2.waitKey(0) cv2.destroyAllWindows() If i set param2=0.8, then no any circle can be found! What should I do to get a better result that fit the further perfect circle in the example picture above? The "perfect circle" can be defined as follows: Given an angular resolution of 0.1 degrees, the area will be divided into 360/0.1 = 3600 segments. A perfect circle includes as many sectors as possible, where these sectors contain pixels with grayscale values below a certain threshold (e.g., 15). In other words, the goal is to have the proportion of sectors that meet the criteria as close to 1 as possible. If there are multiple resulted circles with same proportion, any one of them can be considered a perfect circle. An example "perfect circle" : perfect circle
This is sort of a static solution for now, but if you always have such nice contrast and you can accurately get the coordinates, then fitting a circle with least squares might not be such a bad idea: import cv2 import numpy as np def fit_circle(x, y): A = np.c_[x, y, np.ones(len(x))] # design matrix A with columns for x, y, and a constant term f = x**2 + y**2 # get the function as xΒ²+yΒ² C, _, _, _ = np.linalg.lstsq(A, f, rcond=None) # optimize cx = C[0] / 2 # get the centre x coordinate cy = C[1] / 2 # get the centre y coordinate radius = np.sqrt(C[2] + cx**2 + cy**2) # calculate the radius return round(cx), round(cy), round(radius) # return everything as int To use this function, I did the following: im = cv2.imread("circle.png") # read as BGR imGray = cv2.imread("circle.png", 0) # read as gray y, x = np.where(imGray==0) # get x,y coords cx, cy, r = fit_circle(x,y) # get the circle properties, center and radius im = cv2.circle(im, (cx,cy), r, (255, 0, 0), 5) im = cv2.line(im, (cx-r,cy), (cx+r,cy), (0, 255,0), 2) im = cv2.line(im, (cx,cy-r), (cx,cy+r), (0, 255,0), 2) im = cv2.circle(im, (cx,cy), 10, (0, 0, 255), -1) cv2.imwrite("WithCircle.png", im) # save im for stack The result: As I said, it is very important that you can get those nice pixels that define your circle. This method can presumably work with some added noise as well, but it is definitely not good if you have any one-sided deviations.
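To connect this back to the question's "perfect circle" criterion, here is a rough sketch of the coverage metric: it samples one point per 0.1-degree sector on the fitted circle and checks how many sectors hit a dark pixel. The threshold and the one-sample-per-sector simplification are assumptions, not part of the original answer.
import numpy as np

def circle_coverage(gray, cx, cy, r, threshold=15, ang_res_deg=0.1):
    # Fraction of sectors whose sampled point on the circle is darker than the threshold;
    # a value close to 1 means the fit matches the question's criterion.
    angles = np.deg2rad(np.arange(0.0, 360.0, ang_res_deg))
    xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, gray.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, gray.shape[0] - 1)
    return float((gray[ys, xs] < threshold).mean())

# e.g. print(circle_coverage(imGray, cx, cy, r)) using the variables from above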
3
6
78,942,406
2024-9-3
https://stackoverflow.com/questions/78942406/how-to-smooth-a-discrete-stepped-signal-in-a-vectorized-way-with-numpy-scipy
I have a signal like the orange one in the following plot that can only have integer values: As you can see, the orange signal in a bit noisy and "waffles" between levels sometimes when its about to change to a new steady state. I'd like to "smooth" this effect and achieve the blue signal. The blue signal is the orange one filtered such that the transitions don't occur until 3 samples in a row have made the jump to the next step. This is pretty easy if I loop through each sample manually and use a couple state variables to track how many times in a row I've jumped to a new step, but its also slow. I'd like to find a way to vectorize this in numpy. Any ideas? Here's an example of the non-vectorized way that seems to do what I want: up_count = 0 dn_count = 0 out = x.copy() for i in range(len(out)-1): if out[i+1] > out[i]: up_count += 1 dn_count = 0 if up_count == 3: up_count = 0 out[i+1] = out[i+1] else: out[i+1] = out[i] elif out[i+1] < out[i]: up_count = 0 dn_count += 1 if dn_count == 3: dn_count = 0 out[i+1] = out[i+1] else: out[i+1] = out[i] else: dn_count = 0 up_count = 0 EDIT: Thanks to @Bogdan Shevchenko for this solution. I already have numpy and scipy available, so rather than get pandas involved here's my numpy/scipy version of his answer: def ffill(arr, mask): idx = np.where(~mask, np.arange(mask.shape[0])[:, None], 0) np.maximum.accumulate(idx, axis=0, out=idx) return arr[idx, np.arange(idx.shape[1])] x_max = scipy.ndimage.maximum_filter1d(x, 3, axis=0, origin=1, mode="nearest") x_min = scipy.ndimage.minimum_filter1d(x, 3, axis=0, origin=1, mode="nearest") x_smooth = ffill(x, x_max!=x_min)
There could be a lot of different strategies based on DataFrame.rolling processing. For example: import pandas as pd t = pd.Series([ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1, 2, 1, 1, 2, 2, 3, 1, 2, 2, 1, 1, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]) min_at_row = 3 t_smoothed = t.copy() t_smoothed[ t.rolling(min_at_row, min_periods=1).max() != t.rolling(min_at_row, min_periods=1).min() ] = None t_smoothed = t_smoothed.ffill() The result looks like what you want to obtain: If you want to stay with numpy only, without pandas, use np.lib.stride_tricks.sliding_window_view(t, min_at_row).max(axis=1) instead of t.rolling(min_at_row).max() (and analogously for min), but be aware that it will return an array without the first (min_at_row - 1) values. You can also use t_smoothed.shift(-min_at_row + 1) to remove the delay of the smoothed signal.
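The numpy-only variant mentioned above, written out as a sketch of the same rolling min == max idea; the forward-fill-by-index trick mirrors the one in the question's edit, and because the windows are trailing the first min_at_row - 1 samples are simply kept as-is.
import numpy as np

def smooth_steps(x, min_at_row=3):
    """Hold the previous level until min_at_row consecutive samples agree."""
    x = np.asarray(x, dtype=float)                    # float so NaN can be used as a mask
    win = np.lib.stride_tricks.sliding_window_view(x, min_at_row)
    stable = win.max(axis=1) == win.min(axis=1)       # True once the last 3 samples agree
    out = x.copy()
    out[min_at_row - 1:][~stable] = np.nan            # mask samples that are still "waffling"
    idx = np.where(~np.isnan(out), np.arange(len(out)), 0)
    np.maximum.accumulate(idx, out=idx)               # forward-fill via last valid index
    return out[idx]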
2
2
78,942,495
2024-9-3
https://stackoverflow.com/questions/78942495/why-does-my-code-fail-to-turn-to-the-second-page
1st i tried working in a program that generates random pokemon so we can create teams to play in showdown. It looked like this import tkinter as tk from tkinter import * import random import pypokedex root = tk.Tk() #main #1ra pΓ‘gina inicio = Frame(root) inicio.grid(row=0, column=0) ttl = Label(inicio, text="ΒΏQuienes juegan?") quienes_juegan = StringVar() txt1 = Entry(inicio, textvariable = quienes_juegan, width=50) #quienes juegan btn1 = Button(inicio, text="Confirmar", command= lambda: accion()) #avanza y tira los pokes #colocando los botones ttl.grid(padx=650) txt1.grid(pady=20) btn1.grid() elecciones = Frame(root) elecciones.grid(row=0, column=0) jugadores = [] def accion(): #pa avanzar de pagina quienes_juegan_geteado = quienes_juegan.get() variablesss = quienes_juegan_geteado.split() jugadores.extend(variablesss) print(jugadores) elecciones.tkraise() botoncitos(elecciones,pokimones) #Numero de pokimons numero_pokimons = 30 #Quieres legendarios? si = Tru ; no = False legendarios = False #Magia legendary_list = [144,145,146,150,151,243,244,245,249,250,251,377,378,379,380,381,382,383,384,385,386,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,638,639,640,641,642,643,644,645,646,647,648,649,716,717,718,719,720,721,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809] pokimones = [] while numero_pokimons != 0: poke = random.randint(1,809) if poke in pokimones: continue #iteracion para legendarios if legendarios == True: if poke in legendary_list : pokimones.append(poke) numero_pokimons = numero_pokimons-1 else: continue #iteracion para no legendarios elif legendarios == False: if poke in legendary_list: continue else: pokimones.append(poke) numero_pokimons = numero_pokimons-1 #ordenamos de mayor a menor los pokimons para que se vea mas bonito pokimones = sorted(pokimones) #Imprime los pokimons con su nombre #'interfaz' if legendarios == True: conf_lege = 'SI' else: conf_lege = 'NO' elegibles = [pypokedex.get(dex=dex_number) for dex_number in pokimones] def botoncitos(elecciones, pokimones): def crear_boton(index): def seleccionar_pokemon(): button.config(state=DISABLED) button = Button(elecciones, text=item, command=seleccionar_pokemon) button.grid(sticky="nsew", row = index // 6, column = index % 6 ) for index, item in enumerate(elegibles): crear_boton(index) inicio.tkraise() root.geometry("1400x250") root.mainloop() this one worked, it made 30 buttons in rows of 6 with each pokemon generated I tried to order everything in functions so the code could work when i had to modify some variables, like instead of creating just 30 pokemon numbers, it created 6 per user import tkinter as tk from tkinter import * import random import pypokedex root = tk.Tk() #main jugadores = [] #players cantidad_jugadores = len(jugadores) #number of players pokimones = [] #list of pokemon numbers generated by generar_pokimons elegibles = [pypokedex.get(dex=dex_number) for dex_number in pokimones] #transforms the numbers generated into pokemon names def accion(): #command that turns to elecciones, the second page quienes_juegan_geteado = quienes_juegan.get() #gets the entry variablesss = quienes_juegan_geteado.split() #splits the entry jugadores.extend(variablesss) #extends it to the players list in the global variable print(jugadores) #just to test if it works generar_pokimones() #generate the pokemon numbers botoncitos(elecciones,pokimones) elecciones.tkraise() def botoncitos(elecciones, pokimones): #creates 1 button per pokemon generated and places 
them in a grid of 6 def crear_boton(index): def seleccionar_pokemon(): button.config(state=DISABLED) button = Button(elecciones, text=item, command=seleccionar_pokemon) button.grid(sticky="nsew", row = index // 6, column = index % 6 ) for index, item in enumerate(elegibles): crear_boton(index) def generar_pokimones(): pokemones= [] numero_pokimons = cantidad_jugadores * 6 #number of pokemons determined by the number of players #if you want to play with legendary pokemons legendarios = False legendary_list = [144,145,146,150,151,243,244,245,249,250,251,377,378,379,380,381,382,383,384,385,386,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,638,639,640,641,642,643,644,645,646,647,648,649,716,717,718,719,720,721,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809] #loop while numero_pokimons != 0: poke = random.randint(1,809) if poke in pokemones: continue if legendarios == True: if poke in legendary_list : pokemones.append(poke) numero_pokimons = numero_pokimons-1 else: continue elif legendarios == False: if poke in legendary_list: continue else: pokemones.append(poke) numero_pokimons = numero_pokimons-1 pokemones = sorted(pokemones)#sort pokimones.extend(pokemones) #adds the list to the global variable pokimones #1st page inicio = Frame(root) inicio.grid(row=0, column=0) ttl = Label(inicio, text="ΒΏQuienes juegan?") quienes_juegan = StringVar() txt1 = Entry(inicio, textvariable = quienes_juegan, width=50) #who plays? btn1 = Button(inicio, text="Confirmar", command= lambda: accion()) #turns to the 2nd page, elecciones ttl.grid(padx=650) txt1.grid(pady=20) btn1.grid() elecciones = Frame(root) #the second page, where it displays 6 buttons with pokemon per user elecciones.grid(row=0, column=0) inicio.tkraise() root.geometry("1400x250") root.mainloop() but now when i run this code and click button 1, it just stops, it seems that it doesnt gets the list of numbers so it can generate the buttons. in accion(), i tried printing the list after its supposed to generate and it comes out blank apologies for grammar, english its not my first language
Note that cantidad_jugadores is zero because it is set by the line: cantidad_jugadores = len(jugadores) As jugadores is an empty list when the above line is executed. So the code block inside the while loop inside generar_pokimones() will not be executed because numero_pokimons (which is calculated from the value of cantidad_jugadores) is zero. You need to get the size of jugadores inside generar_pokimones() instead: ... def generar_pokimones(): pokkemones = [] numero_pokimons = len(jugadores) * 6 # get the size of jugadores here ... ... Similar issue on elegibles. You need to create elegibles inside botoncitos() instead: def botoncitos(elecciones, pokimones): #creates 1 button per pokemon generated and places them in a grid of 6 def crear_boton(index): def seleccionar_pokemon(): button.config(state=DISABLED) button = Button(elecciones, text=item, command=seleccionar_pokemon) button.grid(sticky="nsew", row = index // 6, column = index % 6 ) # create elegibles here elegibles = [pypokedex.get(dex=dex_number) for dex_number in pokimones] for index, item in enumerate(elegibles): crear_boton(index)
2
1
78,941,537
2024-9-2
https://stackoverflow.com/questions/78941537/opencv-not-able-to-detect-aruco-marker-within-image-created-with-opencv
I encountered an issue while trying out a simple example of creating and detecting aruco-images. In the following code-snippet, I generate aruco images, save them to a file and then load one of these files for detection: import cv2 aruco_dict= cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_ARUCO_ORIGINAL) params = cv2.aruco.DetectorParameters() ##create aruco marker_size = 500 #mm for i in range(6): marker_image = cv2.aruco.generateImageMarker(aruco_dict, i, marker_size) cv2.imwrite(f"marker_{i}.png", marker_image) ##load aruco image img = cv2.imread("marker_5.png") #detect markers aruco_detector = cv2.aruco.ArucoDetector(aruco_dict,params) corners, ids, _ = aruco_detector.detectMarkers(img) print(corners) print(ids) This code results in an empty list of corners (e.g. the detector wasn't able to find the aruco). I assumed it should be able to detect it easily if I simply re-use the image created by OpenCV. Does someone have an idea what I did wrong or where the issue lies? Any tips are welcome. Kind regards!
These markers need a "quiet zone" around them so the edge of the marker is detectable. That quiet zone should be a white border. You may have heard of this in relation to QR codes. generateImageMarker() does not produce a quiet zone around the marker. This is as it should be. The image contains just the imagery that goes inside the marker's bounds. Just the marker: Use copyMakeBorder() to add a quiet zone. The recommended width is one "module". Half a module may work marginally but I would not recommend operating near that limit. The width of the quiet zone also matters when you print and cut out markers. That quiet zone must be visible on the paper. You can't just cut the marker right up to its edge. Maintain one module of quiet zone. Your marker is sized as 7x7 modules (5x5 of data, and 1 of border all around), resolved at 500 pixels width. One module would figure to be 500/7 ~= 71 pixels wide. The size of a marker (counted in modules) depends on the dictionary you chose (which was "original"). 4x4 and 6x6 flavors are popular. This specifies the data area, so those would be sized 6x6 and 8x8 respectively. border_width = int(500 / 7) marker_image = cv.copyMakeBorder( marker_image, border_width, border_width, border_width, border_width, borderType=cv.BORDER_CONSTANT, value=255) The marker, padded to about one module of quiet zone, will now be detectable:
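An end-to-end sketch combining the question's generation code with the padding above, to confirm the padded marker is now detected; marker id 5 and the 500-pixel size are taken from the question.
import cv2 as cv

aruco_dict = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_ARUCO_ORIGINAL)
marker = cv.aruco.generateImageMarker(aruco_dict, 5, 500)

border = 500 // 7  # about one module for a 7x7-module "original" dictionary marker
padded = cv.copyMakeBorder(marker, border, border, border, border,
                           borderType=cv.BORDER_CONSTANT, value=255)

detector = cv.aruco.ArucoDetector(aruco_dict, cv.aruco.DetectorParameters())
corners, ids, _ = detector.detectMarkers(padded)
print(ids)  # expected: [[5]]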
2
2
78,940,370
2024-9-2
https://stackoverflow.com/questions/78940370/how-to-translate-pandas-dataframe-operations-to-polars-in-python
I am trying to convert some pandas DataFrame operations to Polars in Python, but I am running into difficulties, particularly with row-wise operations and element-wise comparisons. Here is the pandas code I am working with: df_a = pd.DataFrame({ "feature1": [1, 2, 3], "feature2": [7, 8, 9], }) df_b = pd.DataFrame({ "feature1": [3, 8, 2], "feature2": [7, 4, 9], }) if selection_mode == 'option1': max_values = df_a.max(axis=1) selected_features = df_a.eq(max_values, axis=0) final_result = selected_features.mul(df_b).sum(axis=1) / selected_features.sum(axis=1) elif selection_mode == 'option2': above_avg = df_a.ge(df_a.mean(axis=1), axis=0) combined_df = above_avg.mul(df_a).mul(df_b) sum_combined = combined_df.sum(axis=1) sum_above_avg = above_avg.mul(df_a).sum(axis=1) final_result = sum_combined / sum_above_avg Any guidance on translating this pandas code to Polars would be greatly appreciated!
Polars has dedicated horizontal functions for "row-wise" operations. df_a.max_horizontal() shape: (3,) Series: 'max' [i64] [ 7 8 9 ] For DataFrames, Polars will "broadcast" the operation across all columns if the right-hand side is a Series. df_a == df_a.max_horizontal() # df_a.select(pl.all() == pl.Series([7, 8, 9])) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ feature1 ┆ feature2 β”‚ β”‚ --- ┆ --- β”‚ β”‚ bool ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════════║ β”‚ false ┆ true β”‚ β”‚ false ┆ true β”‚ β”‚ false ┆ true β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Option #1 max_values = df_a.max_horizontal() selected_features = df_a == max_values final_result = ( (selected_features * df_b).sum_horizontal() / selected_features.sum_horizontal() ) Option #2 above_avg = df_a >= df_a.mean_horizontal() combined_df = above_avg * df_a * df_b sum_combined = combined_df.sum_horizontal() sum_above_avg = (above_avg * df_a).sum_horizontal() final_result = sum_combined / sum_above_avg
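A sketch tying the two options back to the original selection_mode branch; it assumes df_a and df_b are Polars DataFrames with the same column layout, and simply packages the expressions shown above.
import polars as pl

def compute(df_a: pl.DataFrame, df_b: pl.DataFrame, selection_mode: str) -> pl.Series:
    if selection_mode == "option1":
        selected = df_a == df_a.max_horizontal()
        return (selected * df_b).sum_horizontal() / selected.sum_horizontal()
    elif selection_mode == "option2":
        above_avg = df_a >= df_a.mean_horizontal()
        combined = above_avg * df_a * df_b
        return combined.sum_horizontal() / (above_avg * df_a).sum_horizontal()
    raise ValueError(f"unknown selection_mode: {selection_mode!r}")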
2
1
78,939,900
2024-9-2
https://stackoverflow.com/questions/78939900/exploding-multiple-column-list-in-pandas
I've already tried everything posted here but nothing is working, so please don't mark this as duplicate because I think the problem is different. I have a json like this: [{'Id': 1, 'Design': ["09", '10', '13' ], 'Research': ['Eng', 'Math'] }] Plus other non-list colums. This is repeated for 500 ids. I need to explode the list columns. the final output should be an excel file, I don't care if the explosion is done directly in json or in pandas. Already tried: def lenx(x): return len(x) if isinstance(x,(list, tuple, np.ndarray, pd.Series)) else 1 def cell_size_equalize2(row, cols='', fill_mode='internal', fill_value=''): jcols = [j for j,v in enumerate(row.index) if v in cols] if len(jcols)<1: jcols = range(len(row.index)) Ls = [lenx(x) for x in row.values] if not Ls[:-1]==Ls[1:]: vals = [v if isinstance(v,list) else [v] for v in row.values] if fill_mode=='external': vals = [[e] + [fill_value]*(max(Ls)-1) if (not j in jcols) and (isinstance(row.values[j],list)) else e + [fill_value]*(max(Ls)-lenx(e)) for j,e in enumerate(vals)] elif fill_mode == 'internal': vals = [[e]+[e]*(max(Ls)-1) if (not j in jcols) and (isinstance(row.values[j],list)) else e+[e[-1]]*(max(Ls)-lenx(e)) for j,e in enumerate(vals)] else: vals = [e[0:min(Ls)] for e in vals] row = pd.Series(vals,index=row.index.tolist()) return row Leads to index error df.explode(['B', 'C', 'D', 'E']).reset_index(drop=True) Columns must have same lenght df1 = pd.concat([df[x].explode().to_frame() .assign(g=lambda x: x.groupby(level=0).cumcount()) .set_index('g', append=True) for x in cols_to_explode], axis=1) Somehow it creates lots of rows, I think it just explodes a column after another, and it leads to memory error. Desired Output: Id Design Research 1 09 Eng 1 10 Math 1 13
You can use json_normalize and a deduplication-explode (as presented here): tmp = pd.json_normalize(json) def explode_dedup(s): s = s.explode() return s.set_axis( pd.MultiIndex.from_arrays([s.index, s.groupby(level=0).cumcount()]) ) ids = ['Id'] cols = tmp.columns.difference(ids) out = (tmp[ids] .join(pd.concat({c: explode_dedup(tmp[c]) for c in cols}, axis=1) .droplevel(-1) )[tmp.columns] ) NB. if you know the columns to explode, you can alternatively use: cols = ['Design', 'Research'] ids = tmp.columns.difference(cols) Output: Id Design Research 0 1 09 Eng 0 1 10 Math 0 1 13 NaN
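Since the question ultimately wants an Excel file, a final step could look like the sketch below (assuming the out frame from above and that an Excel engine such as openpyxl or xlsxwriter is installed):
# Export the exploded frame to Excel; index=False drops the duplicated row index.
out.to_excel("exploded.xlsx", index=False)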
2
1
78,938,398
2024-9-1
https://stackoverflow.com/questions/78938398/find-a-fragment-in-the-whole-image
Globally, my task is to determine the similarity / dissimilarity of two .jpg files. Below I will describe the process in more detail. I have five (in reality there are more) template .jpg files. And I have a new .jpg file, which I must match with each template .jpg file to make a decision - is the new .jpg file similar to any of the template .jpg files or not. Correlating entire files in my case is a bad idea, since the error is large. So I came up with a way to "cut" the new file into 12 equal parts (fragments) (that is, into 12 .jpg files) and search for each individual fragment in the template. For this, I used the tutorial https://docs.opencv.org/4.x/dc/dc3/tutorial_py_matcher.html But the problem is that the fragments from the new .jpg file are extremely incorrectly matched with the template. Below I will show an example: Let's take the document below as a template And the document below as a new document (schematically I cut it into 12 parts, that is, I receive one document as input, but I cut this one document into 12 parts (fragments) (that is, 12 new files)) Next, take a look at my code. The gist of it is that I take each of the 12 fragments and search for that fragment in the template def match_slices_in_template(path_template): directory_in_str = 'slices' directory = os.fsencode(directory_in_str) img1 = cv.imread(path_template, cv.IMREAD_GRAYSCALE) # queryImage good = [] for slice_image in os.listdir(directory): print(slice_image) filename = os.fsdecode(slice_image) img2 = cv.imread(f'slices/{filename}', cv.IMREAD_GRAYSCALE) # trainImage # Initiate SIFT detector sift = cv.SIFT_create() # find the keypoints and descriptors with SIFT kp1, des1 = sift.detectAndCompute(img1, None) kp2, des2 = sift.detectAndCompute(img2, None) # BFMatcher with default params bf = cv.BFMatcher() matches = bf.knnMatch(des1, des2, k=2) # Apply ratio test for m, n in matches: if m.distance < 0.3 * n.distance: good.append([m]) # cv.drawMatchesKnn expects list of lists as matches. img3 = cv.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS) plt.imshow(img3), plt.show() print(match_slices_in_template('Bill_1.jpg')) But the search result is completely incorrect, take a look at some sample graphs that matplotlib built In my example, in fact, the two files are different (although they have a lot in common). But the program identifies them as quite similar. So you probably understand the essence of my question: how to improve the performance of the algorithm so that it would determine similarity/difference more accurately than now
I wrote up an example to guide you in the direction of classifying by relying on text instead of structural similarity between the documents. My folder is organized like this: I have two training images and three test images. The training images are in the template folder and the images I want to classify are in the toBeClassified folder. Here are my input images: Now that we got that out of the way, here's a function that can get text from the images: def text_from_image(filename): ''' Load image as grayscale by using cv2.irmead(_, 0) extract text using pytesseract, we use the text for classification ''' imGray = cv2.imread(filename, 0) text = pytesseract.image_to_string(imGray) return text Now that we have the text, we can use it together with some labels that we put as training reponses to train the model: texts = [text_from_image("Templates/"+filename) for filename in os.listdir("Templates/")] # get the texts labels = ["Shipping", "Invoice"] # label the texts These lines are used to get the model and to train it on the templates: model = make_pipeline(TfidfVectorizer(), MultinomialNB()) # init the model model.fit(texts, labels) # fit the model Now we can go with two approaches, either we are just satisfied with the class with the highest probability or we go with predicting the probabilities. The first approach: def classify_with_model(filename, model): ''' We pass the filename of the images to be classified and the model to get the class with the highest probability, not very good but good for now ''' text = text_from_image(filename) category = model.predict([text])[0] return category If we loop through the test images, we get: for filename in os.listdir("toBeClassified/"): # loop through the test images category = classify_with_model("toBeClassified/"+filename, model) # get the class with the highest prob print(f'Document {filename} is classified as {category}') # print out results # Document Bill1.png is classified as Invoice # Document Shipping1.jpg is classified as Shipping # Document ShippingInvoice1.png is classified as Invoice Notice the last image, is kind of a hybrid that I found online. So looking at the probabilities is quite essential in some cases. As the joke when it comes to SVM learners: A healthy person goes to the doctor for a cancer screening. The doctor uses a state-of-the-art SVM machine learning model with 100% accuracy in identifying different types of cancer. After the test, the doctor comes back and says, "Good news and bad news. The good news: our model is perfect at diagnosing every type of cancer. The bad news: it doesn't know how to say you're healthy." In any case, no time for humour, or my attempt at it. 
We go with probabilities: def classify_with_probability(filename, model): ''' This is to classify with probability, a bit better to see the confidences ''' text = text_from_image(filename) probabilities = model.predict_proba([text])[0] # get probabilities for each class categories = model.classes_ # get the class labels bestMatchIdx = probabilities.argmax() # index of the highest probability bestMatch = categories[bestMatchIdx] # class with the highest probability confidence = probabilities[bestMatchIdx] # probability of the best match return bestMatch, dict(zip(categories, probabilities)) # return everythin And the results are: for filename in os.listdir("toBeClassified/"): category = classify_with_probability("toBeClassified/"+filename, model) print(f'Document {filename} is classified as {category[0]} with the following confidence:') print(category[1]) print("_______________________") #Document Bill1.png is classified as Invoice with the following confidence: #{'Invoice': 0.5693790615221315, 'Shipping': 0.4306209384778673} #_______________________ #Document Shipping1.jpg is classified as Shipping with the following confidence: #{'Invoice': 0.38458825025403914, 'Shipping': 0.6154117497459587} #_______________________ #Document ShippingInvoice1.png is classified as Invoice with the following confidence: #{'Invoice': 0.5519774748181495, 'Shipping': 0.4480225251818504} #_______________________ Here, we see the probabilities as well, this can be helpful if you want to classify your images into multiple clases at the same time, e.g., invoice and shipping form. I hope this helps you in some way, as I said, and as Christoph mentioned, going with features on those documents will be a wild attempt. The imports: import cv2 import pytesseract from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import make_pipeline import os pytesseract.pytesseract.tesseract_cmd = "C:/Program Files/Tesseract-OCR/tesseract.exe"
2
1
78,938,073
2024-9-1
https://stackoverflow.com/questions/78938073/why-doesnt-the-repeated-number-go-up-in-the-traceback-as-i-increase-the-recur
When I run a recursive function and it exceeds the recursion depth limit, the below error is displayed: Python 3.12.4+ (heads/3.12:99bc8589f0, Jul 27 2024, 11:20:07) [GCC 12.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> def f(): f() ... >>> f() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 1, in f File "<stdin>", line 1, in f File "<stdin>", line 1, in f [Previous line repeated 996 more times] RecursionError: maximum recursion depth exceeded From what I understand, because the traceback is all the same File "<stdin>", line 1, in f, it does not show it all (because obviously it's not really helpful) and only tells me that this line was repeated 996 times more. When I manually change the recursion limit, I expect that the traceback size grows as well. But it does not: >>> sys.setrecursionlimit(2000) >>> >>> f() Traceback (most recent call last): File "<stdin>", line 1, in f File "<stdin>", line 1, in f File "<stdin>", line 1, in f [Previous line repeated 997 more times] RecursionError: maximum recursion depth exceeded I doubled the recursion limit, so now I expect that traceback size doubles, but it says that the previous line is repeated 997 times. Why is this the case? Note I also found this question which seems same as my question, but it isn't. My question is specifically about the size of traceback. Why does increasing the recursion depth result in stack overflow error?
I figured out that this relates to the sys.tracebacklimit variable, which limits how many traceback frames are printed. When it is not set, CPython's traceback printer shows at most 1000 frames by default, which is why the repeat count stays around 997 no matter how high you raise the recursion limit. Raising sys.tracebacklimit makes the printed traceback grow with it: >>> def f(): f() ... >>> >>> import sys >>> sys.setrecursionlimit(2000) >>> >>> f() Traceback (most recent call last): File "<stdin>", line 1, in f File "<stdin>", line 1, in f File "<stdin>", line 1, in f [Previous line repeated 997 more times] RecursionError: maximum recursion depth exceeded >>> >>> sys.tracebacklimit = 1500 >>> >>> f() Traceback (most recent call last): File "<stdin>", line 1, in f File "<stdin>", line 1, in f File "<stdin>", line 1, in f [Previous line repeated 1497 more times] RecursionError: maximum recursion depth exceeded
2
2
78,939,233
2024-9-2
https://stackoverflow.com/questions/78939233/match-columns-from-column-list-in-a-dataframe-and-rename-them-in-python
I have column list - col_list = ['Subsidiary','State of Jurisdiction of Incorporation','Jurisdiction ofIncorporationor Organization','Jurisdiction of Incorporation or Organization', 'Subsidiaries','State or Other Jurisdiction of Organization','Jurisdiction ofIncorporation orOrganization', 'Company Name', 'State of Incorporation', 'Legal Name','Entity','Name of Company/Jurisdiction of Incorporation or Formation','Place of Formation','Name of Company'] Can I match if my dataframe has these columns using regex or any other library in python like Subsidiary,Subsidiaries,Company Name,Legal Name,Entity,Name of Company/Jurisdiction of Incorporation or Formation,Name of Company , then rename it as 'entity_name' and another column like Jurisdiction, State of Jurisdiction,Place of Formation then rename it as 'entity_place' ? Example - df1 - |Subsidiary|Jurisdiction| |A1|X1| |A2|X2| |A3|X3| expected output - |entity_name|entity_place| |A1|X1| |A2|X2| |A3|X3| df2- |Legal Name|Place of Formation| |A1|X1| |A2|X2| |A3|X3| Expected output- |entity_name|entity_place| |A1|X1| |A2|X2| |A3|X3| Any help would be appreciated.
If there are two lists of column names, one for entity_name and one for entity_place, create a dictionary mapping each old name to its new name and pass it to rename: L1 = ['Subsidiary','Subsidiaries','Company Name','Legal Name', 'Entity', 'Name of Company/Jurisdiction of Incorporation or Formation','Name of Company'] L2 = ['Jurisdiction', 'State of Jurisdiction','Place of Formation'] d = {**dict.fromkeys(L1, 'entity_name'), **dict.fromkeys(L2, 'entity_place')} out = df.rename(columns=d)
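A quick check applying the mapping d to the question's two example frames (the frames are recreated here from the question for illustration):
import pandas as pd

df1 = pd.DataFrame({"Subsidiary": ["A1", "A2", "A3"], "Jurisdiction": ["X1", "X2", "X3"]})
df2 = pd.DataFrame({"Legal Name": ["A1", "A2", "A3"], "Place of Formation": ["X1", "X2", "X3"]})

print(df1.rename(columns=d).columns.tolist())  # ['entity_name', 'entity_place']
print(df2.rename(columns=d).columns.tolist())  # ['entity_name', 'entity_place']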
2
1
78,929,948
2024-8-29
https://stackoverflow.com/questions/78929948/load-a-saved-on-disk-duckdb-instance-into-a-new-in-memory-duckdb-instance
I'm working on a project where, in a first stage, I pull some raw data, do a bunch of processing of it in duckdb, and end up with a bunch of tables that are used by a bunch of downstream components that also operate in duckdb. I'd like to have the outputs from the first stage persist on disk, and not be modified by the downstream components, but the downstream components need to at least be able to create views and temporary tables. Further, there's no reason to have the downstream components operate on-disk... the data is small enough to fit in memory. I'd like a magical solution like conn = duckdb.connect(":memory:") conn.load_from_disk(path_to_on_disk) but nothing like that seems to exist. I can read each table from the on disk connection, convert to pandas, then load into the in memory connection, but that takes forever. Any ideas? Example of that inefficient approach: def load_disk_duck_to_mem_duck(path: pathlib.Path) -> duckdb.DuckDBPyConnection: """Slow and ugly!""" source_db = duckdb.connect((path / "duck.db").as_posix()) in_memory_db = duckdb.connect(":memory:") tables = source_db.execute("SHOW TABLES").fetchall() # Copy each table from on-disk to in-memory for table in tables: table_name = table[0] temp_df = source_db.table(table_name).df() # Load the table from on-disk and create a copy in the in-memory database in_memory_db.from_df(temp_df).create(table_name) return in_memory_db
I'd like to have the outputs from the first stage persist on disk This can be accomplished using a DuckDB EXPORT DATABASE statement. ... the downstream components need to at least be able to create views and temporary tables. Further, there's no reason to have the downstream components operate on-disk... the data is small enough to fit in memory. So each downstream component should read the exported database, e.g. using the IMPORT DATABASE statement. Alternatively, if you want a single file, you can use ATTACH and COPY DATABASE, like so: attach 'test.db' as test; copy from database memory to test; This assumes the specified file ('test.db' in the example) does not already exist.
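A sketch of what the question's helper could look like with the ATTACH + COPY DATABASE approach, assuming a DuckDB version recent enough to support COPY FROM DATABASE; the attach alias src is arbitrary.
import duckdb

def load_disk_duck_to_mem_duck(path: str) -> duckdb.DuckDBPyConnection:
    """Attach the on-disk file read-only and copy everything into memory."""
    conn = duckdb.connect(":memory:")          # the default in-memory catalog is named 'memory'
    conn.execute(f"ATTACH '{path}' AS src (READ_ONLY)")
    conn.execute("COPY FROM DATABASE src TO memory")
    conn.execute("DETACH src")
    return conn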
2
1
78,938,440
2024-9-1
https://stackoverflow.com/questions/78938440/using-regex-to-split-subsections-with-unique-titles
I'm struggling to find a way to split a corpus of legal documents I have, by section. I've been trying to do this with regex, and while I've come reasonably close, I'm looking to see if there's a way I can refine the output even more to consolidate the number of matches that result from the regex script. Each document is organized with multiple section headers, but all follow the same basic structure. First, there is an "Argument" header summarizing the points made in each of the subsections. I want to include this Argument section because a handful of documents in the corpus have no subsequent subsections; however, the vast majority of them do have these sections. Each subsection begins with roman numerals, and the number of subsections in each document can vary. While I don't know exactly how many subsections are in each document, I'm assuming no more than 10. For the modal document, the structure looks something like: string = """ARGUMENT Summary of argument I. TITLE OF SUBSECTION 1 Text of subsection 1 II. TITLE OF SUBSECTION 2 Text of subsection 2 CONCLUSION Text of conclusion """ I created a regex script to try and split each section by its title using re.split, designating the ARGUMENT header, roman numeral subsections 1 (I) through 10 (X), and the CONCLUSION section, adding new line symbols to avoid splitting on every instance of those words/symbols, regardless of whether they happen to be in the titles themselves: r'(\nARGUMENT|\nI\.|\nII\.|\nIII\.|\nIV\.|\nV\.|\nVI\.|\nVII\.|\nVIII\.|\nIX\.|\nX\.|\nCONCLUSION.*)' The output I want is a list where each title and the resulting text underneath are combined into one element, like below: ['ARGUMENT Summary of argument', 'I. TITLE OF SUBSECTION 1 Text of subsection 1', 'II. TITLE OF SUBSECTION 2 Text of subsection 2', 'CONCLUSION Text of conclusion'] However, when using re.split on the above string, my actual output separates the roman numerals from the rest of the text for that section (note the second and fourth elements of the list below: ['ARGUMENT\nSummary of argument\n', '\nI.', ' TITLE OF SUBSECTION 1\nText of subsection 1\n', '\nII.', ' TITLE OF SUBSECTION 2\nText of subsection 2\n', '\nCONCLUSION', '\nText of conclusion\n'] The newline symbols in the output are not particularly important to me. Rather, the consolidation of the header and the text underneath it is what matters to me more than anything. Is there some edit I can make to my regex script to get the first output as opposed to the second? Or if not, is there some other regex command that I could use to attain that particular output? And, less critically, is there a more efficient or streamlined way to match section headers with roman numerals I through X in my script?
Your pattern is just missing some instructions about where and what to split on. I wrote a pattern that gives more instructions on where to split the string and removes the Roman numeral redundancy. import re string = """ legal doc """ pattern = r'(?=\b(?:[IVXL]+\.)|CONCLUSION)' answer = re.split(pattern, string) answer = [line.replace('\n', ' ').strip() for line in answer] print(answer) Assuming the legal docs follow the same structure, we can ignore the ARGUMENT portion and split the text starting at the first numbered point. pattern = r'(?=\b(?:[IVXL]+\.)|CONCLUSION)' With ?= we are including and looking ahead for the desired pattern. \b is the word boundary; it helps keep larger Roman numerals from being split apart. ?: makes the group non-capturing, to prevent redundant elements in the final answer. With [IVXL]+ we look for these characters, which may repeat, and then \. requires the Roman numeral to end with a period. CONCLUSION simply splits off the final paragraph and its contents. answer = [line.replace('\n', ' ').strip() for line in answer] This line is a list comprehension that removes those newline symbols and also strips whitespace to help with cleanup. Hope this helps!
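Run on the sample document from the question, the pattern produces exactly the consolidated sections that were asked for:
import re

string = """ARGUMENT
Summary of argument
I. TITLE OF SUBSECTION 1
Text of subsection 1
II. TITLE OF SUBSECTION 2
Text of subsection 2
CONCLUSION
Text of conclusion
"""

pattern = r'(?=\b(?:[IVXL]+\.)|CONCLUSION)'
answer = [part.replace('\n', ' ').strip() for part in re.split(pattern, string)]
print(answer)
# ['ARGUMENT Summary of argument',
#  'I. TITLE OF SUBSECTION 1 Text of subsection 1',
#  'II. TITLE OF SUBSECTION 2 Text of subsection 2',
#  'CONCLUSION Text of conclusion']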
3
3
78,938,211
2024-9-1
https://stackoverflow.com/questions/78938211/computing-cross-sectional-rankings-using-a-tidy-polars-dataframe
I need to compute cross-sectional rankings across a number of trading securities. Consider the following pl.DataFrame in long (tidy) format. It comprises three different symbols with respective prices, where each symbol also has a dedicated (i.e. local) trading calendar. df = pl.DataFrame( { "symbol": [*["symbol1"] * 6, *["symbol2"] * 5, *["symbol3"] * 5], "date": [ "2023-12-30", "2023-12-31", "2024-01-03", "2024-01-04", "2024-01-05", "2024-01-06", "2023-12-30", "2024-01-03", "2024-01-04", "2024-01-05", "2024-01-06", "2023-12-30", "2023-12-31", "2024-01-03", "2024-01-04", "2024-01-05", ], "price": [ 100, 105, 110, 115, 120, 125, 200, 210, 220, 230, 240, 3000, 3100, 3200, 3300, 3400, ], } ) print(df) shape: (16, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ price β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════║ β”‚ symbol1 ┆ 2023-12-30 ┆ 100 β”‚ β”‚ symbol1 ┆ 2023-12-31 ┆ 105 β”‚ β”‚ symbol1 ┆ 2024-01-03 ┆ 110 β”‚ β”‚ symbol1 ┆ 2024-01-04 ┆ 115 β”‚ β”‚ symbol1 ┆ 2024-01-05 ┆ 120 β”‚ β”‚ … ┆ … ┆ … β”‚ β”‚ symbol3 ┆ 2023-12-30 ┆ 3000 β”‚ β”‚ symbol3 ┆ 2023-12-31 ┆ 3100 β”‚ β”‚ symbol3 ┆ 2024-01-03 ┆ 3200 β”‚ β”‚ symbol3 ┆ 2024-01-04 ┆ 3300 β”‚ β”‚ symbol3 ┆ 2024-01-05 ┆ 3400 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ The first step is to compute the periodic returns using pct_change and subsequently using pivot to align the symbols per date. returns = df.drop_nulls().with_columns( pl.col("price").pct_change(n=2).over("symbol").alias("return") ).pivot(on="symbol", index="date", values="return") print(returns) shape: (6, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ symbol1 ┆ symbol2 ┆ symbol3 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ f64 ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ══════════β•ͺ══════════║ β”‚ 2023-12-30 ┆ null ┆ null ┆ null β”‚ β”‚ 2023-12-31 ┆ null ┆ null ┆ null β”‚ β”‚ 2024-01-03 ┆ 0.1 ┆ null ┆ 0.066667 β”‚ β”‚ 2024-01-04 ┆ 0.095238 ┆ 0.1 ┆ 0.064516 β”‚ β”‚ 2024-01-05 ┆ 0.090909 ┆ 0.095238 ┆ 0.0625 β”‚ β”‚ 2024-01-06 ┆ 0.086957 ┆ 0.090909 ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The next step is to use concat_list to create a list to compute the ranks per row (descending, i.e. highest return gets rank 1). 
ranks = ( returns.with_columns(all_symbols=pl.concat_list(pl.all().exclude("date"))) .select( pl.all().exclude("all_symbols"), pl.col("all_symbols") .list.eval( pl.element().rank(descending=True, method="ordinal").cast(pl.UInt8) ) .alias("rank"), ) ) print(ranks) shape: (6, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ symbol1 ┆ symbol2 ┆ symbol3 ┆ rank β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ f64 ┆ f64 ┆ f64 ┆ list[u8] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ══════════β•ͺ══════════β•ͺ════════════════════║ β”‚ 2023-12-30 ┆ null ┆ null ┆ null ┆ [null, null, null] β”‚ β”‚ 2023-12-31 ┆ null ┆ null ┆ null ┆ [null, null, null] β”‚ β”‚ 2024-01-03 ┆ 0.1 ┆ null ┆ 0.066667 ┆ [1, null, 2] β”‚ β”‚ 2024-01-04 ┆ 0.095238 ┆ 0.1 ┆ 0.064516 ┆ [2, 1, 3] β”‚ β”‚ 2024-01-05 ┆ 0.090909 ┆ 0.095238 ┆ 0.0625 ┆ [2, 1, 3] β”‚ β”‚ 2024-01-06 ┆ 0.086957 ┆ 0.090909 ┆ null ┆ [2, 1, null] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Now we are finally getting to the actual question: I would like to unpivot ranks again and produce a tidy dataframe. I am looking for the following columns: symbol, date, return, and rank. I was thinking about creating three new columns (basically using explode to unpack the list, but this will only create new rows rather than columns). Also, I am wondering if I am required to pivot df in the first place or if there's a better way to directly operate on the original df in tidy format? I am actually looking for performance as df could have millions of rows.
Well you can simplify the process without the need of explode and to avoid the need to pivot and unpivot: returns = df.drop_nulls().with_columns( pl.col("price").pct_change(n=2).over("symbol").alias("return") ) shape: (16, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ price ┆ return β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════β•ͺ══════════║ β”‚ symbol1 ┆ 2023-12-30 ┆ 100 ┆ null β”‚ β”‚ symbol1 ┆ 2023-12-31 ┆ 105 ┆ null β”‚ β”‚ symbol1 ┆ 2024-01-03 ┆ 110 ┆ 0.1 β”‚ β”‚ symbol1 ┆ 2024-01-04 ┆ 115 ┆ 0.095238 β”‚ β”‚ symbol1 ┆ 2024-01-05 ┆ 120 ┆ 0.090909 β”‚ β”‚ … ┆ … ┆ … ┆ … β”‚ β”‚ symbol3 ┆ 2023-12-30 ┆ 3000 ┆ null β”‚ β”‚ symbol3 ┆ 2023-12-31 ┆ 3100 ┆ null β”‚ β”‚ symbol3 ┆ 2024-01-03 ┆ 3200 ┆ 0.066667 β”‚ β”‚ symbol3 ┆ 2024-01-04 ┆ 3300 ┆ 0.064516 β”‚ β”‚ symbol3 ┆ 2024-01-05 ┆ 3400 ┆ 0.0625 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Next rank the return values: ranked_returns = returns.with_columns( pl.col("return").rank(descending=True).over("date").cast(pl.UInt8).alias("rank") ) shape: (16, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ price ┆ return ┆ rank β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 ┆ f64 ┆ u8 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════β•ͺ══════════β•ͺ══════║ β”‚ symbol1 ┆ 2023-12-30 ┆ 100 ┆ null ┆ null β”‚ β”‚ symbol1 ┆ 2023-12-31 ┆ 105 ┆ null ┆ null β”‚ β”‚ symbol1 ┆ 2024-01-03 ┆ 110 ┆ 0.1 ┆ 1 β”‚ β”‚ symbol1 ┆ 2024-01-04 ┆ 115 ┆ 0.095238 ┆ 2 β”‚ β”‚ symbol1 ┆ 2024-01-05 ┆ 120 ┆ 0.090909 ┆ 2 β”‚ β”‚ … ┆ … ┆ … ┆ … ┆ … β”‚ β”‚ symbol3 ┆ 2023-12-30 ┆ 3000 ┆ null ┆ null β”‚ β”‚ symbol3 ┆ 2023-12-31 ┆ 3100 ┆ null ┆ null β”‚ β”‚ symbol3 ┆ 2024-01-03 ┆ 3200 ┆ 0.066667 ┆ 2 β”‚ β”‚ symbol3 ┆ 2024-01-04 ┆ 3300 ┆ 0.064516 ┆ 3 β”‚ β”‚ symbol3 ┆ 2024-01-05 ┆ 3400 ┆ 0.0625 ┆ 3 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ And select only the symbol, date, return, and rank columns: tidy_df = ranked_returns.select(["symbol", "date", "return", "rank"]) shape: (16, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ return ┆ rank β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ f64 ┆ u8 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ══════════β•ͺ══════║ β”‚ symbol1 ┆ 2023-12-30 ┆ null ┆ null β”‚ β”‚ symbol1 ┆ 2023-12-31 ┆ null ┆ null β”‚ β”‚ symbol1 ┆ 2024-01-03 ┆ 0.1 ┆ 1 β”‚ β”‚ symbol1 ┆ 2024-01-04 ┆ 0.095238 ┆ 2 β”‚ β”‚ symbol1 ┆ 2024-01-05 ┆ 0.090909 ┆ 2 β”‚ β”‚ … ┆ … ┆ … ┆ … β”‚ β”‚ symbol3 ┆ 2023-12-30 ┆ null ┆ null β”‚ β”‚ symbol3 ┆ 2023-12-31 ┆ null ┆ null β”‚ β”‚ symbol3 ┆ 2024-01-03 ┆ 0.066667 ┆ 2 β”‚ β”‚ symbol3 ┆ 2024-01-04 ┆ 0.064516 ┆ 3 β”‚ β”‚ symbol3 ┆ 2024-01-05 ┆ 0.0625 ┆ 3 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
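Since the question also asks about performance on millions of rows, the same expressions can be run through the lazy engine so Polars can optimize the whole pipeline end to end; this sketch also uses method="ordinal" to mirror the question's original ranking.
tidy_df = (
    df.lazy()
    .drop_nulls()
    .with_columns(pl.col("price").pct_change(n=2).over("symbol").alias("return"))
    .with_columns(
        pl.col("return")
        .rank(descending=True, method="ordinal")
        .over("date")
        .cast(pl.UInt8)
        .alias("rank")
    )
    .select("symbol", "date", "return", "rank")
    .collect()
)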
3
2
78,924,799
2024-8-28
https://stackoverflow.com/questions/78924799/i-cant-get-selenium-chrome-to-work-in-docker-with-python
I have a classic "it works on my machine" problem, a web scraper I ran successfully on my laptop, but with a persistent error whenever I tried and run it in a container. My minimal reproducible dockerized example consists of the following files: requirements.txt: selenium==4.23.1 # 4.23.1 pandas==2.2.2 pandas-gbq==0.22.0 tqdm==4.66.2 Dockerfile: FROM selenium/standalone-chrome:latest # Set the working directory in the container WORKDIR /usr/src/app # Copy your application files COPY . . # Install Python and pip USER root RUN apt-get update && apt-get install -y python3 python3-pip python3-venv # Create a virtual environment RUN python3 -m venv /usr/src/app/venv # Activate the virtual environment and install dependencies RUN . /usr/src/app/venv/bin/activate && \ pip install --no-cache-dir -r requirements.txt # Switch back to the selenium user USER seluser # Set the entrypoint to activate the venv and run your script CMD ["/bin/bash", "-c", "source /usr/src/app/venv/bin/activate && python -m scrape_ev_files"] scrape_ev_files.py (slimmed down to just what's needed to repro error): import os from selenium import webdriver from selenium.webdriver.chrome.service import Service def init_driver(local_download_path): os.makedirs(local_download_path, exist_ok=True) # Set Chrome Options chrome_options = Options() chrome_options.add_argument("--headless") chrome_options.add_argument("--no-sandbox") chrome_options.add_argument("--disable-dev-shm-usage") chrome_options.add_argument("--remote-debugging-port=9222") prefs = { "download.default_directory": local_download_path, "download.prompt_for_download": False, "download.directory_upgrade": True, "safebrowsing.enabled": True } chrome_options.add_experimental_option("prefs", prefs) # Set up the driver service = Service() chrome_options = Options() driver = webdriver.Chrome(service=service, options=chrome_options) # Set download behavior driver.execute_cdp_cmd("Page.setDownloadBehavior", { "behavior": "allow", "downloadPath": local_download_path }) return driver if __name__ == "__main__": # PARAMS ELECTION = '2024 MARCH 5TH DEMOCRATIC PRIMARY' ORIGIN_URL = "https://earlyvoting.texas-election.com/Elections/getElectionDetails.do" CSV_DL_DIR = "downloaded_files" # initialize the driver driver = init_driver(local_download_path=CSV_DL_DIR) shell command to reproduce the error: docker build -t my_scraper . # (no error) docker run --rm -t my_scraper # (error) stacktrace from error is below. Any help would be much appreciated! 
I've tried many iterations of my requirements.txt and Dockerfile attempting to fix this, but this error at this spot has been frustratingly persistent: File "/workspace/scrape_ev_files.py", line 110, in <module> driver = init_driver(local_download_path=CSV_DL_DIR) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/scrape_ev_files.py", line 47, in init_driver driver = webdriver.Chrome(service=service, options=chrome_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/chrome/webdriver.py", line 45, in __init__ super().__init__( File "/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/chromium/webdriver.py", line 66, in __init__ super().__init__(command_executor=executor, options=options) File "/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/remote/webdriver.py", line 212, in __init__ self.start_session(capabilities) File "/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/remote/webdriver.py", line 299, in start_session response = self.execute(Command.NEW_SESSION, caps)["value"] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/remote/webdriver.py", line 354, in execute self.error_handler.check_response(response) File "/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/remote/errorhandler.py", line 229, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.SessionNotCreatedException: Message: session not created: Chrome failed to start: exited normally. (session not created: DevToolsActivePort file doesn't exist) (The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
You override the chrome_options variable just before passing it to webdriver.Chrome(), so none of your options are applied — in particular --disable-dev-shm-usage, which is the option that fixes this crash. Just remove the second chrome_options = Options() right before the driver initialization. As a side note, consider using --headless=new instead of --headless: it behaves closer to regular Chrome, and --headless will be deprecated in future versions. Edit The image you are using turns off the Selenium manager, which is why you get that warning. You can turn it back on by adding ENV SE_OFFLINE=false to the Dockerfile. The driver initialization sometimes hangs and raises TimeoutException: Message: timeout: Timed out receiving message from renderer: 600.000, probably due to too many JS commands. Add these options:
chrome_options.add_argument('--dns-prefetch-disable')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--enable-cdp-events')
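For reference, a minimal corrected init_driver along the lines of this answer (the option values come from the question; treat this as a sketch rather than the exact code the answerer ran):
import os
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options

def init_driver(local_download_path):
    os.makedirs(local_download_path, exist_ok=True)
    chrome_options = Options()
    chrome_options.add_argument("--headless=new")
    chrome_options.add_argument("--no-sandbox")
    chrome_options.add_argument("--disable-dev-shm-usage")
    chrome_options.add_experimental_option("prefs", {
        "download.default_directory": local_download_path,
        "download.prompt_for_download": False,
    })
    # Do NOT recreate Options() here; reuse the configured object
    driver = webdriver.Chrome(service=Service(), options=chrome_options)
    return driver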
2
1
78,933,402
2024-8-30
https://stackoverflow.com/questions/78933402/tkinter-with-images
With the following code, I can not display an image in a tkinter cell: from tkinter import * from tkinter import filedialog from PIL import Image, ImageTk root = Tk() root.geometry=("1000x1000") def orig(): orig_image = filedialog.askopenfilename(filetypes=[("Image file", "*.jpg"), ("All files", "*.")]) my_img = ImageTk.PhotoImage(Image.open(orig_image)) lbl = Label(image=my_img) lbl.grid(row=0, column=0) orig() root.mainloop() But, by taking this out of the method, it works fine: from tkinter import * from tkinter import filedialog from PIL import Image, ImageTk root = Tk() root.geometry=("1000x1000") orig_image = filedialog.askopenfilename(filetypes=[("Image file", "*.jpg"), ("All files", "*.")]) my_img = ImageTk.PhotoImage(Image.open(orig_image)) lbl = Label(image=my_img) lbl.grid(row=0, column=0) root.mainloop() What am I missing ? This is part of a larger project where I want to display an "original" OCR scan image and then with other methods, display a "corrected image" next to the original image (in another column) to show if that correction was an improvement or not.
There are a couple of issues in your code. The geometry method is being assigned instead of called; it should be invoked as root.geometry("1000x1000"). More importantly, the PhotoImage object created inside orig() must be kept in a reference that outlives the function (for example a global or an attribute), otherwise it is garbage-collected and the label shows nothing. Try it this way:
from tkinter import *
from tkinter import filedialog
from PIL import Image, ImageTk

root = Tk()
root.geometry("1000x1000")  # Corrected: called as a method

def orig():
    global my_img  # Important: keep a reference to prevent garbage collection
    orig_image = filedialog.askopenfilename(filetypes=[("Image file", "*.jpg"), ("All files", "*.*")])
    img = Image.open(orig_image)
    my_img = ImageTk.PhotoImage(img)
    lbl = Label(root, image=my_img)
    lbl.grid(row=0, column=0)

orig()  # Call the function to open the file dialog and display the image
root.mainloop()
3
1
78,933,210
2024-8-30
https://stackoverflow.com/questions/78933210/is-string-slice-by-copy-a-cpython-implementation-detail-or-part-of-spec
Python does slice-by-copy on strings: Does Python do slice-by-reference on strings? Is this something that all implementations of Python need to respect, or is it just a detail of the CPython implementation?
Copying the underlying memory on str slices is chosen because reference counting / garbage collection becomes complicated otherwise, and reference counting itself is an implementation detail of CPython. Therefore copying string slices is also an implementation detail. Copying could, in theory, be avoided without violating any requirements: some years ago a patch was proposed that attempted to avoid copying, but it was rejected by Guido over performance concerns (ref). Note that some particular slices already avoid copying in CPython (for example, a full slice s[:] returns the same object). This, too, is an implementation detail/optimization and is not documented in the language reference.
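If avoiding copies actually matters for your use case, one option (an addition beyond the answer above) is to work with bytes and memoryview rather than str, since memoryview slices share the underlying buffer instead of copying:
data = b"some long buffer" * 1000
view = memoryview(data)
chunk = view[5:5000]          # no copy: the slice references the same buffer
print(chunk.obj is data)      # True
print(bytes(chunk[:4]))       # materialize a small copy only when needed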
5
3
78,933,132
2024-8-30
https://stackoverflow.com/questions/78933132/type-hinting-a-generator-send-type-any-or-none
I have a generator that does not use send() values. Should I type its send_value as Any or None? import typing as t def pi_generator() -> t.Generator[int, ???, None]: pi = "3141592" for digit in pi: yield int(digit) pi_gen = pi_generator() next(pi_gen) # 3 pi_gen.send('foo') # 1 pi_gen.send(pi_gen) # 4 Reasons I see for Any: The generator works perfectly fine with send() for any type, so if somebody had a reason to use .send(1) with this generator, it's totally fine. Methods' arguments' types should be general, and .send(x: Any) is more general than .send(x: None). Reasons I see for None: Return types should be specific, and "Generator that never uses send" is a more specific type than "Any kind of generator". If someone is using .send() to this generator, it's likely they're misunderstanding what it does and the type hint should inform them.
Quoting the docs for typing.Generator: If your generator will only yield values, set the SendType and ReturnType to None: def infinite_stream(start: int) -> Generator[int, None, None]: while True: yield start start += 1 Setting SendType to None is the recommended way to communicate that the generator does not expect callees to send() values.
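Applied to the generator from the question, that would look like this:
import typing as t

def pi_generator() -> t.Generator[int, None, None]:
    pi = "3141592"
    for digit in pi:
        yield int(digit)
With this annotation, a type checker will flag pi_gen.send('foo') as an error, which is exactly the signal you want to give callers. The same docs also allow annotating a yield-only generator simply as t.Iterator[int].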
4
6
78,934,877
2024-8-31
https://stackoverflow.com/questions/78934877/how-do-i-formalize-a-repeated-relationship-among-disjoint-groups-of-classes-in-p
I have Python code that has the following shape to it: from dataclasses import dataclass @dataclass class Foo_Data: foo: int class Foo_Processor: def process(self, data: Foo_Data): ... class Foo_Loader: def load(self, file_path: str) -> Foo_Data: ... #---------------------------------------------------------------- @dataclass class Bar_Data: bar: str class Bar_Processor: def process(self, data: Bar_Data): ... class Bar_Loader: def load(self, file_path: str) -> Bar_Data: ... I have several instances of this sort of Data/Processor/Loader setup, and the classes all have the same method signatures modulo the specific class family (Foo, Bar, etc.). Is there a pythonic way of formalizing this relationship among classes to enforce a similar structure if I decide to create a Spam_Data, Spam_Processor, and Spam_Loader family of classes? For instance, I want something to enforce that Spam_Processor have a process method which takes an argument of type Spam_Data. Is there a way of achieving this standardization somehow with abstract classes, generic types, or some other structure? I tried using abstract classes, but mypy correctly points out that having all *_Data classes be subclasses of an abstract Data class and similarly having all *_Processor classes be subclasses of an abstract Processor class violates the Liskov substitution principle, since each processor is only designed for its respective Data class (i.e., Foo_Processor can't process Bar_Data, but one would expect that it could if these classes have superclasses Processor and Data which are compatible in this way).
You can use abstract base classes (ABCs) with Generics. This way you can define a common interface while ensuring type safety: from abc import ABC, abstractmethod from dataclasses import dataclass from typing import Generic, TypeVar # generic type variable for Data T = TypeVar('T', bound='BaseData') @dataclass class BaseData(ABC): pass class BaseProcessor(ABC, Generic[T]): @abstractmethod def process(self, data: T) -> None: pass class BaseLoader(ABC, Generic[T]): @abstractmethod def load(self, file_path: str) -> T: pass Now you can define your specific classes @dataclass class Foo_Data(BaseData): foo: int class Foo_Processor(BaseProcessor[Foo_Data]): def process(self, data: Foo_Data) -> None: ... class Foo_Loader(BaseLoader[Foo_Data]): def load(self, file_path: str) -> Foo_Data: ... @dataclass class Bar_Data(BaseData): bar: str class Bar_Processor(BaseProcessor[Bar_Data]): def process(self, data: Bar_Data) -> None: ... class Bar_Loader(BaseLoader[Bar_Data]): def load(self, file_path: str) -> Bar_Data: ... Writing your code this way combines the benefits of a common interface with type safety. ABCs ensure that subclasses implement required methods, promoting a consistent structure. Generics allow for type-specific operations, enhancing code readability and maintainability. As a confirmation with mypy: mypy script.py Success: no issues found in 1 source file
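As a quick illustration (a hypothetical snippet, not part of the original answer) of what the generic bases buy you, mixing up the families now becomes a type error while matching usage stays clean:
foo_processor = Foo_Processor()
foo_processor.process(Foo_Data(foo=1))    # OK
foo_processor.process(Bar_Data(bar="x"))  # mypy: incompatible type "Bar_Data"; expected "Foo_Data"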
2
3
78,932,535
2024-8-30
https://stackoverflow.com/questions/78932535/python-async-thread-safe-semaphore
I'm looking for a thread-safe implementation of a Semaphore I can use in Python. The standard libraries asyncio.Semaphore isn't thread-safe. The standard libraries threading.Semaphore doesn't have awaitable interface. I am using sanic which has multiple threads (workers) but also an asynchronous loop on each thread. I want to be able to yield execution back to the event loop on each of the workers whenever it encounters a blocked semaphore, while it waits. UPDATE: I meant to say process here, not threads. So these should be that Sanic splits across processes, and multiprocessing.Semaphore. I believe the answer given is still relevant to where I can apply a similar solution.
If you are saying that you need a single semaphore instance to be used across all the threads, then create a threading.Semaphore instance and an async function acquire_semaphore such as the following:
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor()
semaphore = threading.Semaphore()

async def acquire_semaphore():
    """Make a threading.Semaphore awaitable."""
    def acquirer():
        semaphore.acquire()

    loop = asyncio.get_running_loop()
    await loop.run_in_executor(executor, acquirer)

async def test():
    """Acquire and release a semaphore sharable across threads."""
    await acquire_semaphore()
    print('acquired')
    await asyncio.sleep(1)
    semaphore.release()  # This never blocks
    print('released')

def worker():
    asyncio.run(test())

def main():
    threads = [
        threading.Thread(target=worker) for _ in range(3)
    ]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()

if __name__ == '__main__':
    main()
Prints:
acquired
released
acquired
released
acquired
released
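Since the update says the Sanic workers are separate processes rather than threads, the same pattern can be applied with a multiprocessing.Semaphore created in the parent and handed to each worker. A rough sketch under that assumption (the names here are illustrative, not from the original answer):
import asyncio
import multiprocessing as mp
from concurrent.futures import ThreadPoolExecutor

async def acquire_mp_semaphore(sem):
    """Await a multiprocessing.Semaphore without blocking the event loop."""
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=1) as executor:
        await loop.run_in_executor(executor, sem.acquire)

async def task(sem):
    await acquire_mp_semaphore(sem)
    try:
        print('acquired in', mp.current_process().name)
        await asyncio.sleep(1)
    finally:
        sem.release()

def worker(sem):
    asyncio.run(task(sem))

if __name__ == '__main__':
    sem = mp.Semaphore(1)  # shared across processes
    procs = [mp.Process(target=worker, args=(sem,)) for _ in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()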
2
3
78,933,467
2024-8-30
https://stackoverflow.com/questions/78933467/how-can-access-the-pointer-values-passed-to-and-returned-by-c-functions-from-pyt
Can my python code have access to the actual pointer values received and returned by C functions called through ctypes? If yes, how could I achieve that ? I'd like to test the pointer values passed to and returned from a shared library function to test an assignment with pytest (here, to test that strdup didn't return the same pointer but a new pointer to a different address). I've wrapped one of the functions to implement (strdup) in a new C function in a file named wrapped_strdup.c to display the pointer values and memory areas contents: /* ** I'm compiling this into a .so the following way: ** - gcc -o wrapped_strdup.o -c wrapped_strdup.c ** - ar rc wrapped_strdup.a wrapped_strdup.o ** - ranlib wrapped_strdup.a ** - gcc -shared -o wrapped_strdup.so -Wl,--whole-archive wrapped_strdup.a -Wl,--no-whole-archive */ #include <stdio.h> #include <stdlib.h> #include <string.h> char *wrapped_strdup(char *src){ char *dst; printf("From C:\n"); printf("- src address: %X, src content: [%s].\n", src, src); dst = strdup(src); printf("- dst address: %X, dst content: [%s].\n", dst, dst); return dst; } I also create in the same directory a pytest test file named test_strdup.py: #!/usr/bin/env python3 import ctypes import pytest # Setting wrapped_strdup: lib_wrapped_strdup = ctypes.cdll.LoadLibrary("./wrapped_strdup.so") wrapped_strdup = lib_wrapped_strdup.wrapped_strdup wrapped_strdup.restype = ctypes.c_char_p wrapped_strdup.argtypes = [ctypes.c_char_p] @pytest.mark.parametrize("src", [b"", b"foo"]) def test_strdup(src: bytes): print("") dst = wrapped_strdup(src) print("From Python:") print(f"- src address: {hex(id(src))}, src content: [{src!r}].") print(f"- dst address: {hex(id(dst))}, dst content: [{dst!r}].") assert src == dst assert hex(id(src)) != hex(id(dst)) Then, running my test gives me the following output: $ pytest test_strdup.py --maxfail=2 -v -s =================================== test session starts ==================================== platform linux -- Python 3.12.5, pytest-8.3.2, pluggy-1.5.0 -- /usr/bin/python cachedir: .pytest_cache rootdir: /home/vmonteco/code/MREs/MRe_strdup_test_with_ctypes plugins: anyio-4.4.0, cov-5.0.0, typeguard-4.3.0 collected 2 items test_strdup.py::test_strdup[] From C: - src address: C19BDBE8, src content: []. - dst address: 5977DFA0, dst content: []. From Python: - src address: 0x75bcc19bdbc8, src content: [b'']. - dst address: 0x75bcc19bdbc8, dst content: [b'']. FAILED test_strdup.py::test_strdup[foo] From C: - src address: BF00A990, src content: [foo]. - dst address: 59791030, dst content: [foo]. From Python: - src address: 0x75bcbf00a970, src content: [b'foo']. - dst address: 0x75bcbefc18f0, dst content: [b'foo']. 
PASSED ========================================= FAILURES ========================================= ______________________________________ test_strdup[] _______________________________________ src = b'' @pytest.mark.parametrize("src", [b"", b"foo"]) def test_strdup(src: bytes): print("") dst = wrapped_strdup(src) print("From Python:") print(f"- src address: {hex(id(src))}, src content: [{src!r}].") print(f"- dst address: {hex(id(dst))}, dst content: [{dst!r}].") assert src == dst > assert hex(id(src)) != hex(id(dst)) E AssertionError: assert '0x75bcc19bdbc8' != '0x75bcc19bdbc8' E + where '0x75bcc19bdbc8' = hex(129453562518472) E + where 129453562518472 = id(b'') E + and '0x75bcc19bdbc8' = hex(129453562518472) E + where 129453562518472 = id(b'') test_strdup.py:22: AssertionError ================================= short test summary info ================================== FAILED test_strdup.py::test_strdup[] - AssertionError: assert '0x75bcc19bdbc8' != '0x75bcc19bdbc8' =============================== 1 failed, 1 passed in 0.04s ================================ This output shows two things : Addresses for variables referencing b'' in Python are identical either way (that's the same object) despite addresses being different from the lower level perspective. This is consistent with some pure Python tests and I guess it could be some optimization feature. Addresses values from C and Python for dst and src variables don't actually seem related. So the above attempt is actually unreliable to check that a function returned a pointer to a different area. I could also try to retrieve the pointer value itself and make a second test run for checking this part specifically by changing the restype attribute : #!/usr/bin/env python3 import ctypes import pytest # Setting wrapped_strdup: lib_wrapped_strdup = ctypes.cdll.LoadLibrary("./wrapped_strdup.so") wrapped_strdup = lib_wrapped_strdup.wrapped_strdup wrapped_strdup.restype = ctypes.c_void_p # Note that it's not a c_char_p anymore. wrapped_strdup.argtypes = [ctypes.c_char_p] @pytest.mark.parametrize("src", [b"", b"foo"]) def test_strdup_for_pointers(src: bytes): print("") dst = wrapped_strdup(src) print("From Python:") print(f"- retrieved dst address: {hex(dst)}.") The above gives the following output : $ pytest test_strdup_for_pointers.py --maxfail=2 -v -s =================================== test session starts ==================================== platform linux -- Python 3.12.5, pytest-8.3.2, pluggy-1.5.0 -- /usr/bin/python cachedir: .pytest_cache rootdir: /home/vmonteco/code/MREs/MRe_strdup_test_with_ctypes plugins: anyio-4.4.0, cov-5.0.0, typeguard-4.3.0 collected 2 items test_strdup_for_pointers.py::test_strdup_for_pointers[] From C: - src address: E15BDBE8, src content: []. - dst address: 84D4D820, dst content: []. From Python: - retrieved dst address: 0x608984d4d820. PASSED test_strdup_for_pointers.py::test_strdup_for_pointers[foo] From C: - src address: DEC7EA80, src content: [foo]. - dst address: 84EA7C40, dst content: [foo]. From Python: - retrieved dst address: 0x608984ea7c40. PASSED ==================================== 2 passed in 0.01s ===================================== Which would give the actual address (or at least something that looks related). But without knowing the value the C function receives, it's not of much help. 
Addendum: what I came up with from Mark's answer (and that works): Here's a test that implements both the solution suggested in the accepted answer : #!/usr/bin/env python3 import ctypes import pytest # Setting libc: libc = ctypes.cdll.LoadLibrary("libc.so.6") strlen = libc.strlen strlen.restype = ctypes.c_size_t strlen.argtypes = (ctypes.c_char_p,) # Setting wrapped_strdup: lib_wrapped_strdup = ctypes.cdll.LoadLibrary("./wrapped_strdup.so") wrapped_strdup = lib_wrapped_strdup.wrapped_strdup # Restype will be set directly in the tests. wrapped_strdup.argtypes = (ctypes.c_char_p,) @pytest.mark.parametrize("src", [b"", b"foo"]) def test_strdup(src: bytes): print("") # Just to make pytest output more readable. # Set expected result type. wrapped_strdup.restype = ctypes.POINTER(ctypes.c_char) # Create the src buffer and retrieve its address. src_buffer = ctypes.create_string_buffer(src) src_addr = ctypes.addressof(src_buffer) src_content = src_buffer[:strlen(src_buffer)] # Run function to test. dst = wrapped_strdup(src_buffer) # Retrieve result address and content. dst_addr = ctypes.addressof(dst.contents) dst_content = dst[: strlen(dst)] # Assertions. assert src_content == dst_content assert src_addr != dst_addr # Output. print("From Python:") print(f"- Src content: {src_content!r}. Src address: {src_addr:X}.") print(f"- Dst content: {dst_content!r}. Dst address: {dst_addr:X}.") @pytest.mark.parametrize("src", [b"", b"foo"]) def test_strdup_alternative(src: bytes): print("") # Just to make pytest output more readable. # Set expected result type. wrapped_strdup.restype = ctypes.c_void_p # Create the src buffer and retrieve its address. src_buffer = ctypes.create_string_buffer(src) src_addr = ctypes.addressof(src_buffer) src_content = src_buffer[:strlen(src_buffer)] # Run function to test. dst = wrapped_strdup(src_buffer) # Retrieve result address and content. dst_addr = dst # cast dst: dst_pointer = ctypes.cast(dst, ctypes.POINTER(ctypes.c_char)) dst_content = dst_pointer[:strlen(dst_pointer)] # Assertions. assert src_content == dst_content assert src_addr != dst_addr # Output. print("From Python:") print(f"- Src content: {src_content!r}. Src address: {src_addr:X}.") print(f"- Dst content: {dst_content!r}. Dst address: {dst_addr:X}.") Output : $ pytest test_strdup.py -v -s =============================== test session starts =============================== platform linux -- Python 3.10.14, pytest-8.3.2, pluggy-1.5.0 -- /home/vmonteco/.pyenv/versions/3.10.14/envs/strduo_test/bin/python3.10 cachedir: .pytest_cache rootdir: /home/vmonteco/code/MREs/MRe_strdup_test_with_ctypes plugins: anyio-4.4.0, stub-1.1.0 collected 4 items test_strdup.py::test_strdup[] From C: - src address: 661BBE90, src content: []. - dst address: F5D8A7A0, dst content: []. From Python: - Src content: b''. Src address: 7C39661BBE90. - Dst content: b''. Dst address: 57B4F5D8A7A0. PASSED test_strdup.py::test_strdup[foo] From C: - src address: 661BBE90, src content: [foo]. - dst address: F5E03340, dst content: [foo]. From Python: - Src content: b'foo'. Src address: 7C39661BBE90. - Dst content: b'foo'. Dst address: 57B4F5E03340. PASSED test_strdup.py::test_strdup_alternative[] From C: - src address: 661BBE90, src content: []. - dst address: F5B0AC50, dst content: []. From Python: - Src content: b''. Src address: 7C39661BBE90. - Dst content: b''. Dst address: 57B4F5B0AC50. PASSED test_strdup.py::test_strdup_alternative[foo] From C: - src address: 661BBE90, src content: [foo]. - dst address: F5BF9C20, dst content: [foo]. 
From Python: - Src content: b'foo'. Src address: 7C39661BBE90. - Dst content: b'foo'. Dst address: 57B4F5BF9C20. PASSED ================================ 4 passed in 0.01s ================================
A return type of ctypes.c_char_p is "helpful" and converts the return value to a Python string, losing the actual C pointer. Use ctypes.POINTER(ctypes.c_char) to keep the pointer. A return type of ctypes.c_void_p is also "helpful" and converts the returned C address to a Python integer, but can be cast a more specific pointer type to access the data at the address To find it's address, use ctypes.addressof on the contents of the pointer; otherwise you get the address of the storage of the pointer. I use char* strcpy(char* dest, const char* src) as an example because the returned pointer is the same address as the dest pointer and it shows the C addresses are the same from Python without needing a C helper function. In the code below the mutable string buffer dest has the same address as the return value and a few ways to examine the C address of the return value are shown: import ctypes as ct dll = ct.CDLL('msvcrt') dll.strcpy.argtypes = ct.c_char_p, ct.c_char_p dll.strcpy.restype = ct.POINTER(ct.c_char) # NOT ct.c_char_p to keep pointer dll.strlen.argtypes = ct.c_char_p, dll.strlen.restype = ct.c_size_t dest = ct.create_string_buffer(10) # writable char buffer print(f'{ct.addressof(dest) = :#x}') # its C address result = dll.strcpy(dest, b'abcdefg') # Note that for strcpy, returned address is the same as dest address print(f'{ct.addressof(dest) = :#x}') # dest array's C address print(f'{ct.addressof(result.contents) = :#x}') # result pointer's C address (same as dest) n = dll.strlen(result) print(f'{result[:n] = }') # must slice char* or only prints one character. print(f'{dest.value = }') # array has .value (nul-termination) or .raw dll.strcpy.restype = ct.c_void_p # alternative, get the pointer address as Python int result = dll.strcpy(dest, b'abcdefg') print(f'{result = :#x}') # same C address as above p = ct.cast(result, ct.POINTER(ct.c_char)) # Cast afterward print(f'{ct.addressof(p.contents) = :#x}') # same C address n = dll.strlen(p) print(f'{p[:n] = }') # must slice char* p = ct.cast(result, ct.POINTER(ct.c_char * n)) # alternate, pointer to sized array print(f'{p.contents.value = }') # don't have to slice, (char*)[n] has known size n Output: ct.addressof(dest) = 0x1abe80ea398 ct.addressof(dest) = 0x1abe80ea398 ct.addressof(result.contents) = 0x1abe80ea398 result[:n] = b'abcdefg' dest.value = b'abcdefg' result = 0x1abe80ea398 ct.addressof(p.contents) = 0x1abe80ea398 p[:n] = b'abcdefg' p.contents.value = b'abcdefg'
3
4
78,933,109
2024-8-30
https://stackoverflow.com/questions/78933109/multiply-an-input-by-5-raised-to-the-number-of-digits-of-each-numbers-but-my-co
I'm trying to multiply an input * 5 raised to the power of the input. I tried this: def multiply(n): return 5 ** len(str(n)) * n I tried (n = -2), but instead of giving me -10, which is the correct answer, it gave me -50 Why doesn't this output the correct numbers when n is negative?
You are checking the length of your input by casting it to a string, and for a negative number the minus sign gets counted too:
>>> n = -2
>>> str(n)
'-2'   # string of length two: '-' and '2'
>>> len(str(n))
2
One way around it (not the prettiest, but it works as you expected) is to count only the digit characters:
# The dirty way..
def multiply(num: int):
    # Initialize a variable to keep track of length
    length = 0
    # Convert the input to string and iterate over it
    for n in str(num):
        # If the current character is an int, increment the length variable
        try:
            if isinstance(int(n), int):
                length += 1
        # The line above raises ValueError for '-', catch it here
        except ValueError:
            pass
    return 5 ** length * num

# Try with the input -2 again
print(multiply(-2))
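A shorter equivalent, assuming the input is always an int, is to drop the sign before measuring the length:
def multiply(n: int) -> int:
    return 5 ** len(str(abs(n))) * n

print(multiply(-2))   # -10
print(multiply(12))   # 300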
2
1
78,932,957
2024-8-30
https://stackoverflow.com/questions/78932957/accessing-pokeapi-taking-a-long-time
I want to get a list of names of pokemon from the first 150 that have a hp less than a particular value. Here's what I've got so far: def get_pokemon_with_similar_hp(max_hp): pokemon_names = [] poke_data = [] for i in range(1, 151): api_url = f"https://pokeapi.co/api/v2/pokemon/{i}" pokemon_response = requests.get(api_url) pokemon_data = pokemon_response.json() poke_data.append(pokemon_data) for j in range(0, 150): if poke_data[j]['stats'][0]['base_stat'] < max_hp: pokemon_names.append(poke_data[j]['name']) return pokemon_names This works, and gives me the data I want, but it's currently taking 8.49 s to process and send through to my frontend. Is there a way of improving my code to speed things up? Many thanks in advance :)
According to the API documentation you can use a GraphQL query, so you can do this in one request. E.g.:
import json
import requests

graphql_url = "https://beta.pokeapi.co/graphql/v1beta"  # https://beta.pokeapi.co/graphql/console/

payload = {
    "operationName": "samplePokeAPIquery",
    "query": r"query samplePokeAPIquery($maxhp: Int) {pokemon_v2_pokemon(where: {pokemon_v2_pokemonstats: {base_stat: {_lt: $maxhp}}}) {id name}}",
    "variables": {"maxhp": 100},
}

data = requests.post(graphql_url, json=payload)
print(json.dumps(data.json(), indent=4))
Prints:
{
    "data": {
        "pokemon_v2_pokemon": [
            {
                "id": 1,
                "name": "bulbasaur"
            },
            {
                "id": 2,
                "name": "ivysaur"
            },
            {
                "id": 3,
                "name": "venusaur"
            },
            ...
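If you'd rather keep the REST endpoints, another common speedup (not part of the original answer) is to issue the 150 requests concurrently, e.g. with a thread pool and a requests.Session:
import requests
from concurrent.futures import ThreadPoolExecutor

def get_pokemon_with_similar_hp(max_hp):
    session = requests.Session()

    def fetch(i):
        return session.get(f"https://pokeapi.co/api/v2/pokemon/{i}").json()

    # Fetch the 150 pokemon in parallel instead of one at a time
    with ThreadPoolExecutor(max_workers=20) as pool:
        poke_data = list(pool.map(fetch, range(1, 151)))

    return [p["name"] for p in poke_data if p["stats"][0]["base_stat"] < max_hp]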
3
3
78,932,341
2024-8-30
https://stackoverflow.com/questions/78932341/why-does-2x-x-x-in-ieee-floating-point-precision
I would expect this to only hold when the last bit of the mantissa is 0. Otherwise, in order to subtract them (since their exponents differ by 1), x would lose a bit of precision first and the result would either end up being rounded up or down. But a quick experiment shows that it seems to always hold (assuming x and 2x are finite) for any random number (including those with a a trailing 1 bit). import random import struct from collections import Counter def float_to_bits(f: float) -> int: """ Convert a double-precision floating-point number to a 64-bit integer. """ # Pack the float into 8 bytes, then unpack as an unsigned 64-bit integer return struct.unpack(">Q", struct.pack(">d", f))[0] def check_floating_point_precision(num_trials: int) -> float: true_count = 0 false_count = 0 bit_counts = Counter() for _ in range(num_trials): x = random.uniform(0, 1) if 2 * x - x == x: true_count += 1 else: false_count += 1 bits = float_to_bits(x) # Extract the last three bits of the mantissa last_three_bits = bits & 0b111 bit_counts[last_three_bits] += 1 return (bit_counts, true_count / num_trials) num_trials = 1_000_000 (bit_counts, proportion_true) = check_floating_point_precision(num_trials) print(f"The proportion of times 2x - x == x holds true: {proportion_true:.6f}") print("Distribution of last three bits (mod 8):") for bits_value in range(8): print(f"{bits_value:03b}: {bit_counts[bits_value]} occurrences") The proportion of times 2x - x == x holds true: 1.000000 Distribution of last three bits (mod 8): 000: 312738 occurrences 001: 62542 occurrences 010: 125035 occurrences 011: 62219 occurrences 100: 187848 occurrences 101: 62054 occurrences 110: 125129 occurrences 111: 62435 occurrences
If we had to do arithmetic only in the floating-point format, even for internal values during the arithmetic, then, yes, 2*x - x would not always yield x. For example, with four-bit significands, we could have: Expression Value/Calculation x 1.001β€’20 (9) 2*x 1.001β€’21 (18) 2*x - x 1.001β€’21 βˆ’ 1.001β€’20 = 1.001β€’21 βˆ’ 0.100β€’21 (shifted right operand and lost a bit) = 0.101β€’21 = 1.010β€’20 (10). However, that is not how floating-point implementations work. To get the right results, they use either more digits internally or more sophisticated algorithms or both. IEEE-754 specifies that the result of an elementary operation is the value you would get by computing the exact real-number arithmetic result with no error or rounding or limitations on digits and then rounding that real number to the nearest value representable in the destination format, using the rounding rule in effect for the operation. (Most commonly, that rounding rule is to round to the nearest value, breaking ties in favor of the one with an even low digit in its significand.) A consequence of this requirement is that, if the mathematical result is representable in the format, the computed result must be that mathematical result. When the mathematical result is representable, there is never any rounding error in a properly implemented elementary operation.
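You can check this claim empirically from Python with exact rational arithmetic (a quick sanity check, not part of the original answer): converting a float to Fraction is lossless, so it exposes whether any rounding happened.
from fractions import Fraction
import random

for _ in range(100_000):
    x = random.uniform(0, 1)
    two_x = 2 * x
    # Multiplying a finite double by 2 only changes the exponent, so it is exact
    assert Fraction(two_x) == 2 * Fraction(x)
    # The exact difference 2x - x is x itself, which is representable,
    # so the correctly rounded subtraction must return exactly x
    assert two_x - x == x
print("no rounding observed")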
5
5
78,932,725
2024-8-30
https://stackoverflow.com/questions/78932725/pandas-sort-one-column-by-custom-order-and-the-other-naturally
Consider the following code: import pandas import numpy strs = ['custom','sort']*5 df = pandas.DataFrame( { 'string': strs, 'number': numpy.random.randn(len(strs)), } ) sort_string_like_this = {'sort': 0, 'custom': 1} print(df.sort_values(['string','number'], key=lambda x: x.map(sort_string_like_this))) which prints string number 1 sort -0.074041 3 sort 1.057676 5 sort -0.612289 7 sort 0.757922 9 sort 0.671288 0 custom -0.339373 2 custom -0.320231 4 custom -1.125380 6 custom 2.120829 8 custom -0.031580 I would like to sort it according to the column string using a custom ordering as given by the dictionary and the column number using the natural ordering of numbers. How can this be done?
You can use a condition in the sort key function: df.sort_values( ["string", "number"], key=lambda x: x.map(sort_string_like_this) if x.name == "string" else x, ) string number 7 sort -1.673626 3 sort -0.212634 5 sort -0.071417 9 sort 0.413497 1 sort 0.489508 8 custom -1.787110 0 custom 0.230875 4 custom 0.535791 2 custom 0.671282 6 custom 2.119993
3
4
78,932,231
2024-8-30
https://stackoverflow.com/questions/78932231/rotating-a-curve-using-python
I have written following code to rotate a curve by specified angle and return new equation. I know when we want to rotate the axes by an angle A then new coords become X=xcosA-ysinA and Y=xsinA+ycosA. when i test my fxn on the hyperbola x^2+y^2=1 . Expected equations is 2xy-1=0 but my fxn gives -2xy-1=0 Where I am doing wrong? import sympy as sp @display_mathjax_and_replace def rotate(f,theta): x, y, a, b = sp.symbols('x y a b') rotation_matrix = sp.Matrix([[sp.cos(theta),-sp.sin(theta)],[sp.sin(theta), sp.cos(theta)]]) transformed_coords = rotation_matrix * sp.Matrix([a, b]) g = f.subs({x: transformed_coords[0], y: transformed_coords[1]}) return g I have created following decorator to present it beautifully: def display_mathjax_and_replace(func): def wrapper(*args, **kwargs): result = func(*args, **kwargs) result = result.subs({'a': 'x', 'b': 'y'}) simplified_expr = sp.simplify(result) display(Math(sp.latex(sp.Eq(simplified_expr, 0)))) return wrapper I call my code : x,y = sp.symbols('x y') f = x**2 - y**2 -1 rotate(f,sp.pi/4) Output: -2xy-1 Expected output:2xy-1
You swapped the sign on the sin terms of the transformation matrix. Here is how it should look for a positive rotation:
def rotate_1(f, theta):
    x, y, a, b = sp.symbols('x y a b')
    rotation_matrix = sp.Matrix([[sp.cos(theta), sp.sin(theta)], [-sp.sin(theta), sp.cos(theta)]])
    transformed_coords = rotation_matrix * sp.Matrix([a, b])
    g = f.subs({x: transformed_coords[0], y: transformed_coords[1]})
    return g

rotate_1(f, sp.pi/4).simplify()
# out: 2*a*b - 1
If you use SymPy Plotting Backend and an interactive environment like Jupyter Notebook, you can create nice visualizations that help you catch mistakes. For example:
%matplotlib widget
from sympy import *
from spb import *

def symbolic_rotate(f, theta, x, y):
    a, b = var("a, b")
    rotation_matrix = Matrix([[cos(theta), sin(theta)], [-sin(theta), cos(theta)]])
    transformed_coords = rotation_matrix * Matrix([a, b])
    g = f.subs({x: transformed_coords[0], y: transformed_coords[1]})
    g = g.subs({a: x, b: y})
    return g

var("x, y, theta, a, b")
f = x**2 - y**2 - 1

graphics(
    implicit_2d(f, (x, -10, 10), (y, -10, 10), label="f"),
    implicit_2d(symbolic_rotate(f, theta, x, y), (x, -10, 10), (y, -10, 10),
                label="f rot", params={theta: (pi/6, 0, 2*pi)}),
    aspect="equal"
)
symbolic_rotate creates a new expression based on a symbolic angle of rotation. Then we can plot the original hyperbola and the rotated one and play with a slider. Here, you can visually verify that the transformation matrix is correct.
2
2
78,932,164
2024-8-30
https://stackoverflow.com/questions/78932164/python-average-values-in-2d-array
I want to generate a twodimensional array in Python and I would like to iterate through each element and take an average. An element i should be averaged using the 8 surrounding array elements (including element i). I generated the twodimensional array with a frame of zeros using Forming a frame of zeros around a matrix in python. A = np.array([[1,2,3],[4,5,6],[7,8,9]]) x,y = A.shape n = 1 B = np.zeros((x+2*n,y+2*n),dtype=int) B[n:x+n, n:y+n] = A print(B) What is the easiest way to take the average?
What you want is a 2D convolution, use scipy.signal.convolve2d with numpy.ones to get the sum: from scipy.signal import convolve2d out = convolve2d(B, np.ones((3, 3), dtype='int'), mode='same') Output: array([[ 1, 3, 6, 5, 3], [ 5, 12, 21, 16, 9], [12, 27, 45, 33, 18], [11, 24, 39, 28, 15], [ 7, 15, 24, 17, 9]]) If you want the small matrix, no need to pad with zeros: convolve2d(A, np.ones((3, 3), dtype='int'), mode='same') # array([[12, 21, 16], # [27, 45, 33], # [24, 39, 28]]) For the average, repeat the same operation with an array od ones and divide: kernel = np.ones((3, 3), dtype='int') out = (convolve2d(A, kernel, mode='same') /convolve2d(np.ones_like(A), kernel, mode='same') ) output: array([[3. , 3.5, 4. ], [4.5, 5. , 5.5], [6. , 6.5, 7. ]])
2
2
78,931,947
2024-8-30
https://stackoverflow.com/questions/78931947/finding-the-maximum-product-of-an-array-element-and-a-distance
An array of integers is given. It is necessary to find the maximum product of the distance between a pair of elements and the minimum element of this pair. For example, [2, 5, 2, 2, 1, 5, 2] -> 20, 5 and 5 (5 * (5-1)); [1, 2] -> 1 In the first example, the maximum product is obtained if we take 5 and 5(a[1] and a[5]). The minimum element of the pair is 5. The distance between the elements: 5 - 1 = 4. Result: 5 * 4 = 20 In the second example, there is only one pair. The minimum element is 1. The distance between the elements is 1. Result: 1*1 = 1 not an effective solution: a = [2,5,2,2,1,5,2] res = -10**9 for i in range(len(a)-1): for j in range(i+1,len(a)): res = max(res, min(a[i],a[j])*(j-i)) print(res) The code is slow to work. How can I make it more efficient?
O(n log n): from itertools import accumulate from bisect import bisect_left a = [2,5,2,2,1,5,2] print(max( x * (j - bisect_left(m, x)) for a in [a, a[::-1]] for m in [list(accumulate(a, max))] for j, x in enumerate(a) )) Attempt This Online! Assume the best pair has the smaller-or-equal value as the right value. For each right value, find the furthest left value larger or equal (so that the right value is indeed smaller or equal) and evaluate that pair. Do the whole thing both for the given array and for it reversed, which covers the case that the best pair has the smaller-or-equal value as the left value. Inspired by Nijat Mursali's opening paragraph.
2
1
78,932,035
2024-8-30
https://stackoverflow.com/questions/78932035/filter-or-join-a-polars-dataframe-by-columns-from-another-dataframe
I have two pl.DataFrames: from datetime import date import polars as pl df1 = pl.DataFrame( { "symbol": [ "sec1", "sec1", "sec1", "sec1", "sec1", "sec1", "sec2", "sec2", "sec2", "sec2", "sec2", ], "date": [ date(2021, 9, 14), date(2021, 9, 15), date(2021, 9, 16), date(2021, 9, 17), date(2021, 8, 31), date(2020, 12, 31), date(2021, 9, 14), date(2021, 9, 15), date(2021, 8, 31), date(2021, 12, 30), date(2020, 12, 31), ], "price": range(11), } ) df2 = pl.DataFrame( { "symbol": ["sec1", "sec2"], "current_date": [date(2021, 9, 17), date(2021, 9, 15)], "mtd": [date(2021, 8, 31), date(2021, 8, 31)], "ytd": [date(2020, 12, 31), date(2020, 12, 30)], } ) with pl.Config(tbl_rows=-1): print(df1) print(df2) shape: (11, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ price β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ date ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════║ β”‚ sec1 ┆ 2021-09-14 ┆ 0 β”‚ β”‚ sec1 ┆ 2021-09-15 ┆ 1 β”‚ β”‚ sec1 ┆ 2021-09-16 ┆ 2 β”‚ β”‚ sec1 ┆ 2021-09-17 ┆ 3 β”‚ β”‚ sec1 ┆ 2021-08-31 ┆ 4 β”‚ β”‚ sec1 ┆ 2020-12-31 ┆ 5 β”‚ β”‚ sec2 ┆ 2021-09-14 ┆ 6 β”‚ β”‚ sec2 ┆ 2021-09-15 ┆ 7 β”‚ β”‚ sec2 ┆ 2021-08-31 ┆ 8 β”‚ β”‚ sec2 ┆ 2021-12-30 ┆ 9 β”‚ β”‚ sec2 ┆ 2020-12-31 ┆ 10 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ shape: (2, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ current_date ┆ mtd ┆ ytd β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ date ┆ date ┆ date β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ══════════════β•ͺ════════════β•ͺ════════════║ β”‚ sec1 ┆ 2021-09-17 ┆ 2021-08-31 ┆ 2020-12-31 β”‚ β”‚ sec2 ┆ 2021-09-15 ┆ 2021-08-31 ┆ 2020-12-30 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I need to filter the prices of df1 for each group with the respective dates from df2. I need to incorporate all columns of type date. The number of these columns in df2 might not be fixed. I am looking for the following result: shape: (11, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ price β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ date ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════║ β”‚ sec1 ┆ 2021-09-17 ┆ 3 β”‚ β”‚ sec1 ┆ 2021-08-31 ┆ 4 β”‚ β”‚ sec1 ┆ 2020-12-31 ┆ 5 β”‚ β”‚ sec2 ┆ 2021-09-15 ┆ 7 β”‚ β”‚ sec2 ┆ 2021-08-31 ┆ 8 β”‚ β”‚ sec2 ┆ 2020-12-30 ┆ 9 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ I was thinking of filtering df1 by symbol and then do a join operation for every individual date column of df2. I would then subsequently concatenate the resulting dataframes. However, there's probably a much more elegant solution.
You can unpivot, then join: df1.join( df2.unpivot(index='symbol', value_name='date').drop('variable'), on=['symbol', 'date'], how='inner', ) Output: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ date ┆ price β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ date ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═══════║ β”‚ sec1 ┆ 2021-09-17 ┆ 3 β”‚ β”‚ sec1 ┆ 2021-08-31 ┆ 4 β”‚ β”‚ sec1 ┆ 2020-12-31 ┆ 5 β”‚ β”‚ sec2 ┆ 2021-09-15 ┆ 7 β”‚ β”‚ sec2 ┆ 2021-08-31 ┆ 8 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
3
4
78,931,529
2024-8-30
https://stackoverflow.com/questions/78931529/conditional-deduplication-in-polars
I have a dataset i'm trying to remove duplicate entries from. The lazyframe i'm working with is structured like this: df = pl.from_repr(""" β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ title ┆ type ┆ type2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ════════════β•ͺ══════════════════β•ͺ═══════║ β”‚ 1001 ┆ Research A ┆ journal article ┆ 35 β”‚ β”‚ 1002 ┆ Research B ┆ book chapter ┆ 41 β”‚ β”‚ 1003 ┆ Research C ┆ journal article ┆ 35 β”‚ β”‚ 1004 ┆ Research D ┆ conference paper ┆ 42 β”‚ β”‚ 1001 ┆ Research E ┆ journal article ┆ 35 β”‚ β”‚ 1002 ┆ Research F ┆ journal article ┆ 41 β”‚ β”‚ 1003 ┆ Research G ┆ ┆ 41 β”‚ β”‚ 1002 ┆ Research I ┆ book chapter ┆ 41 β”‚ β”‚ 1003 ┆ Research J ┆ journal article ┆ 35 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ """) I want to remove entries that have the same id, but there are actually different cases: The duplicates have the same type (e.g. 1001): keep the first one. The duplicates have a different type: discard the ones with an empty string ("") as type, and then keep only the entries that respect the following pairs of type and type2: dict_df = pl.DataFrame({ "type": ['journal article', 'book chapter', 'book chapter'], "type2": [35, 41, 42] }) Expected output id[i64] title[str] type[str] type2[i64] 1001 Research A journal article 35 1002 Research B book chapter 41 1003 Research C journal article 35 1004 Research D conference paper 42 1001: same type, keep the first one 1002: different type, keep the first occurrence of the entries with pair {'book chapter': 41} 1003: different type, discard the entries with an empty type and keep the first occurrence 1004: not a duplicate I've tried many things, mainly using the pl.when() expression, but I cannot work out a way to filter the groups. ( df .sort('type', descending=True) .group_by("id") .agg([ pl.when(pl.col("type").n_unique() > 1) .then( ... ) .otherwise(pl.all().first()) ]) )
Create a preference table indicating how much you want each combination:
preference = pl.DataFrame({
    "type": ["journal article", "book chapter", "book chapter"],
    "type2": [35, 41, 42],
    "preference": [0, 1, 2]
})
Join the preference table with your data table:
joined = df.lazy().join(preference.lazy(), on=["type", "type2"], how="left")
Sort the joined table on the preference, pick the first row in each group and drop the preference column:
out = (
    joined.sort("preference", descending=True, nulls_last=True)
    .group_by("id")
    .first()
    .drop("preference")
    .collect()
)
4
3
78,931,082
2024-8-30
https://stackoverflow.com/questions/78931082/abstract-base-class-function-pointer-python
I'd like to make an abstraction of one of my api classes to resolve the following problem. Let's say I have a base class like: class AbstractAPI(ABC): @abstractmethod def create(self): pass @abstractmethod def delete(self): pass And a concrete class: class API(AbstractAPI): def create(self): print("create") def delete(self): print("delete") When requests come in, I don't have access to my API instance. Both due multithreading and to avoid some circular imports. At that point I do know which method I would like to call later. So my plan was to put a function pointer of AbstractAPI onto a queue and wait until I do have access to the API instance. function_pointer=AbstractAPI.create # later on ... function_pointer(ConcreteAPIInstance) At that point call the function pointer onto the API instance, ba da bim, ba da boom. That does not work. Of course calling the function pointer of AbstractAPI onto an API instance calls the empty AbstractAPI method. Nothing happens. Is there a way to make this work?
Rather than directly referencing the abstract class method function_pointer = AbstractAPI.create, you can write a function that calls the named create() method on a given object:
function_pointer = lambda api: api.create()
or
def function_pointer(api):
    return api.create()
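Another option worth mentioning (not in the original answer) is operator.methodcaller from the standard library, which builds exactly this kind of deferred call by method name:
import operator

function_pointer = operator.methodcaller("create")
# later, once a concrete instance is available ...
function_pointer(API())   # prints "create"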
2
2
78,931,016
2024-8-30
https://stackoverflow.com/questions/78931016/numpy-built-in-function-to-find-largest-vector-in-an-matrix
I have a 2D numpy matrix: arr = np.array([(1, 2), (6, 0), (3, 3), (5, 4)]) I am trying to get the output: [5, 4] I have tried to do the following: max_arr = np.max(arr, axis=0) But this finds the largest value in each column of the matrix, regardless of the vectors from which those values come from. Maybe there is some function that will consider the sum of each vector in the array? The idea is to use only the optimized NumPy functionality in the least steps possible.
Compute the sum per row, then get the position of the maximum with argmax and use this for indexing: out = arr[arr.sum(axis=1).argmax()] Output: array([5, 4]) Intermediates: arr.sum(axis=1) # array([3, 6, 6, 9]) arr.sum(axis=1).argmax() # 3 variant: all maxes If you can have multiple maxima, you can use instead: out = arr[(s:=arr.sum(axis=1)) == s.max()] Or for python < 3.8: s = arr.sum(axis=1) out = arr[s == s.max()] Output: array([[5, 4]]) Note the 2D output, you would get multiple rows if several have the same maximum sum.
2
3
78,930,856
2024-8-30
https://stackoverflow.com/questions/78930856/what-is-an-equivalent-of-in-operator-for-2d-numpy-array
Using Python lists: a = [[0, 1], [3, 4]] b = [0, 2] print(b in a) I'm getting False as an output, but with Numpy arrays: a = np.array([[0, 1], [3, 4]]) b = np.array([0, 2]) print(b in a) I'm getting True as an output. What is an equivalent of in operator above for 2D Numpy arrays?
In the NumPy array case, b in a is roughly equivalent to (a == b).any(): it is True if any element-wise comparison (after broadcasting) matches, rather than checking for the presence of b as a whole row. You can use numpy.all together with numpy.any to compare rows:
a = np.array([[0, 1], [3, 4]])
b = np.array([0, 2])

is_row_present = np.any(np.all(a == b, axis=1))
print(is_row_present)
Output:
False
2
3
78,929,964
2024-8-29
https://stackoverflow.com/questions/78929964/utf-16-as-sequence-of-code-units-in-python
I have the string 'abΓ§' which in UTF-8 is b'ab\xc3\xa7'. I want it in UTF-16, but not this way: b'ab\xc3\xa7'.decode('utf-8').encode('utf-16-be') which gives me: b'\x00a\x00b\x00\xe7' The answer I want is the UTF-16 code units, that is, a list of int: [32, 33, 327] Is there any straightforward way to do that? And of course, the reverse. Given a list of ints which are UTF-16 code units, how do I convert that to UTF-8?
The simple solution that may work in many cases would be something like: def sort_of_get_utf16_code_units(s): return list(map(ord, s)) print(sort_of_get_utf16_code_units('abç') Output: [97, 98, 231] However, that doesn't work for characters outside the Basic Multilingual Plane (BMP): print(sort_of_get_utf16_code_units('😊')) Output is the Unicode code point: [128522] Where you might have expected the code units (as your question states): [55357, 56842] To get that: def get_utf16_code_units(s): utf16_bytes = s.encode('utf-16-be') return [int.from_bytes(utf16_bytes[i:i+2]) for i in range(0, len(utf16_bytes), 2)] print(get_utf16_code_units('😊')) Output: [55357, 56842] Doing the reverse is similar: def utf16_code_units_to_string(code_units): utf16_bytes = b''.join([unit.to_bytes(2, byteorder='big') for unit in code_units]) return utf16_bytes.decode('utf-16-be') print(utf16_code_units_to_string([55357, 56842])) Output: 😊 The byteorder is 'big' by default, but it doesn't hurt to be specific there.
2
6
78,926,011
2024-8-29
https://stackoverflow.com/questions/78926011/gekko-deep-learning-does-not-find-solution
I want to create a regression ANN model with Gekko. I made this model using tf_Keras, and it works very well. Unfortunately, it is not possible to convert the Keras model to a Gekko amp model. Therefore, I need to make it using the Gekko Brain class. However, it fails to find a solution. Here is my dataset: ss10_1k_converted.csv Here is my model in Gekko: import pandas as pd import numpy as np from gekko import brain # load dataset df = pd.read_csv("ss10_1k_converted.csv") # split data for train and test train=df.sample(frac=0.8,random_state=200) test=df.drop(train.index) # Preparing training data train_x = train[['D','V']].to_numpy() train_x = np.transpose(train_x) train_y = train['F'].to_numpy() train_y = np.transpose(train_y).reshape(1,-1) # Preparing test data test_x = test[['D','V']].to_numpy() test_y = test['F'].to_numpy().reshape(1,-1) # neural network b = brain.Brain() b.input_layer(2) b.layer(relu=61) #b.layer(relu=61) b.output_layer(1) b.learn(train_x, train_y, obj=2) Here is the error: EXIT: Restoration Failed! An error occured. The error code is -2 --------------------------------------------------- Solver : IPOPT (v3.12) Solution time : 179.147199999999 sec Objective : 2064584.02728447 Unsuccessful with error code 0 --------------------------------------------------- Creating file: infeasibilities.txt @error: Solution Not Found complete error --> link Please let me know what the problem is. Thank you
There is a way to import models from TensorFlow directly into Gekko. Here is an example from the Gekko documentation: from gekko import GEKKO x = m.Var(.0,lb = 0,ub=1) y = Gekko_NN_TF(model,mma,m,n_output = 1).predict([x]) m.Obj(y) m.solve(disp=False) print('solution:',y.value[0]) print('x:',x.value[0]) print('Gekko Solvetime:',m.options.SOLVETIME,'s') If you do want to fit the Neural Network with Gekko, it is always better to scale the data. This doesn't change the solution, but helps the Neural Network if the data is scaled with a standard scaler (mean=0, stdev=1). import pandas as pd import numpy as np from sklearn.preprocessing import StandardScaler from gekko import brain # load dataset df = pd.read_csv("ss10_1k_converted.csv") # Initialize the StandardScaler s = StandardScaler() # split data for train and test train = df.sample(frac=0.8, random_state=200) test = df.drop(train.index) # scale train data train_s = s.fit_transform(train) train_s = pd.DataFrame(train_s,columns=train.columns) # Preparing training data train_x = train_s[['D', 'V']].to_numpy() train_x = np.transpose(train_x) train_y = train_s['F'].to_numpy() train_y = np.transpose(train_y).reshape(1, -1) # scale test data test_s = s.transform(test) test_s = pd.DataFrame(test_s,columns=test.columns) # Preparing test data test_x = test_s[['D', 'V']].to_numpy() test_y = test_s['F'].to_numpy().reshape(1, -1) # neural network b = brain.Brain() b.input_layer(2) b.layer(relu=61) b.output_layer(1) b.learn(train_x, train_y, obj=2) Tested with the first 300 data rows: D,V,F 0.0039933,0.083865,-64.83 0.004411,0.011882,-65.2739 0.0046267,-0.047719,-65.7733 0.0043474,0.032426,-67.1328 0.0046642,0.0026268,-66.7721 0.004506,-0.019885,-66.7998 0.0042449,0.0027718,-66.0785 0.0046068,-0.004274,-64.9826 0.0040353,-0.015186,-64.8578 0.0040928,0.0061156,-65.0936 0.004671,-0.007791,-65.912 0.004279,-0.014973,-66.3976 0.0044212,0.026233,-67.0634 0.0046039,-0.0026849,-66.5501 0.0041202,-0.0067733,-65.8565 0.0043672,0.012628,-65.3988 0.004455,-0.0044196,-64.3723 0.0040499,0.0075972,-65.0242 0.0045626,-7.75E-05,-65.6485 0.0044811,-0.0017835,-66.2727 0.0043505,0.0099326,-67.8957 0.0047254,0.0031688,-66.4947 0.004119,-0.0035763,-66.2588 0.0043387,-0.0021411,-65.8011 0.0046139,-0.0041471,-64.4139 0.0041006,-0.015224,-64.8855 0.0045054,0.016978,-65.2878 0.0044468,-0.01442,-65.8565 0.0042841,-0.0033144,-67.0357 0.0044818,0.017859,-66.4947 0.0043907,-0.012453,-66.7582 0.004108,0.022928,-66.0646 0.0045609,0.0015028,-65.1907 0.0043863,-0.0077244,-65.052 0.0043062,0.014014,-64.9271 0.0048137,-0.0020048,-66.0369 0.0043082,-0.0070927,-66.4947 0.0043583,0.0067367,-67.2576 0.0046149,-0.013683,-66.6611 0.0040759,-0.016978,-66.0508 0.0044474,0.0066973,-65.6069 0.0044941,-0.01849,-64.4139 0.0040872,0.0051854,-64.9688 0.004542,0.0072096,-65.9675 0.0044153,-0.015739,-66.3282 0.0040624,0.0049035,-68.1593 0.0046978,-0.0025099,-66.675 0.0041913,-0.012541,-66.5779 0.0041956,0.011957,-66.1479 0.0045725,0.0023067,-64.1226 0.0040471,-0.0062125,-65.4959 0.0043414,0.026824,-65.0104 0.0046067,-0.0037695,-66.3837 0.0040993,-0.0022286,-67.4518 0.0045984,-0.0097582,-66.9524 0.0046205,-0.0067457,-67.0495 0.0041353,0.01126,-66.786 0.0045369,0.011541,-65.2739 0.0038909,-0.011067,-65.0104 0.0042334,0.01567,-65.4681 0.0047868,0.0051842,-66.2866 0.004284,-0.006591,-66.8137 0.0044214,0.018015,-67.2299 0.0048712,-0.018352,-66.994 0.0042182,-0.015794,-66.0369 0.0044314,0.0070755,-65.8843 0.0041792,-0.0040999,-64.7468 0.0039398,-0.0083723,-65.2462 0.0045341,-0.0067345,-66.134 
0.0043972,-0.017462,-66.4669 0.0043558,0.0091193,-68.7696 0.0047372,0.0067633,-66.6334 0.0041105,-0.003295,-66.8137 0.004152,0.0062506,-65.6207 0.0043251,-0.0089543,-64.4139 0.004048,-0.0089443,-65.482 0.0044814,0.021243,-65.371 0.0044972,0.0062206,-66.1895 0.0042125,-0.0060468,-67.4657 0.0045941,-0.001696,-67.0911 0.0044504,-0.015574,-67.3131 0.0042516,0.010873,-65.9537 0.0042283,0.00069826,-65.3294 0.0040209,-0.0075884,-65.2184 0.0042322,0.017588,-65.0797 0.0046712,-0.019778,-66.6334 0.0042237,0.0066664,-66.3421 0.0044727,0.028501,-67.4796 0.0047318,-0.0079854,-67.1328 0.0039351,0.0094957,-66.1617 0.004335,0.022909,-65.9398 0.0044769,-0.0043703,-64.4555 0.0041044,-0.001986,-65.3294 0.0048231,0.023936,-66.0924 0.0045902,0.0015897,-66.5918 0.0044028,0.01097,-68.3257 0.0047,0.0097195,-66.6195 0.0043181,-0.023635,-66.8831 0.0043604,-0.007966,-65.9537 0.0046524,-0.0089459,-64.5249 0.0042599,0.0021704,-65.4543 0.0044449,0.0049516,-65.5098 0.0044628,-0.0078797,-66.4253 0.0042596,-0.0047872,-67.5212 0.0047347,0.01064,-66.8137 0.0043973,-0.010622,-67.4102 0.0042344,0.021881,-66.3976 0.0045758,0.0062213,-65.4127 0.0044302,-0.015902,-65.0797 0.004275,0.011271,-65.2323 0.0048129,-0.0094669,-66.5363 0.0043592,-0.0073374,-66.6611 0.0044803,0.0047882,-67.7986 0.0045777,-0.0046428,-66.7305 0.0040829,-0.0091087,-66.1617 0.0044584,0.003626,-65.9398 0.0045068,-0.0059609,-64.5526 0.0042007,0.013363,-65.4127 0.0046671,-0.0036145,-66.0785 0.0044172,-0.019604,-66.994 0.0043849,0.0069483,-68.2425 0.0047915,-0.018207,-66.6611 0.0040251,-0.011774,-66.6611 0.0041447,0.0042927,-65.7456 0.004618,-0.019381,-64.5249 0.0039094,-0.0062988,-65.4127 0.0044787,0.024759,-65.4681 0.0044859,-0.0035376,-66.6195 0.0040748,-0.0071321,-67.4379 0.0046907,0.016193,-66.9663 0.0044212,-0.0068536,-67.2021 0.0040881,0.012219,-65.9675 0.0045039,0.0073159,-65.6346 0.0041685,-0.012811,-65.3572 0.0043691,0.01791,-65.6762 0.0046814,-0.0047478,-66.3559 0.0041919,-0.0095948,-66.6889 0.0044857,0.013557,-67.6044 0.0047077,-0.017045,-66.9524 0.0040122,-0.013703,-66.453 0.0043503,0.011901,-65.6901 0.0043854,-0.0091003,-64.5803 0.0040964,-0.0030619,-65.3433 0.0044621,0.010922,-66.2588 0.0043823,-0.015234,-66.7721 0.0043401,0.01006,-68.298 0.0046588,-0.014118,-66.8969 0.0041869,-0.0081592,-66.8969 0.0041066,-0.0032463,-65.8565 0.0045463,-0.009837,-64.5942 0.0038705,-0.016649,-65.4265 0.0043316,0.0051654,-65.704 0.0043303,0.00058075,-66.6195 0.0041827,-0.0077244,-67.5905 0.0043924,0.026038,-67.1744 0.0042696,-0.0022192,-67.3825 0.0041159,0.01319,-66.453 0.0044228,0.0031794,-65.371 0.0042871,0.00028224,-65.0381 0.0042961,0.020099,-65.4959 0.0045414,-0.0034207,-66.3421 0.0042336,-0.020485,-66.8553 0.0043969,-0.012384,-67.4934 0.0045912,-0.015408,-67.0773 0.0040612,0.0043115,-66.3559 0.004095,0.007946,-65.7317 0.0040889,-0.003616,-64.5942 0.0040495,0.002248,-65.3156 0.0046104,-0.0010946,-66.3005 0.0043607,0.014525,-66.8692 0.004339,0.022639,-68.4367 0.0046271,0.00042635,-66.8692 0.0040437,-0.01568,-66.7721 0.0043274,0.014712,-65.9953 0.0044512,-0.001676,-64.6497 0.0040564,0.0066864,-65.5375 0.0043595,0.027658,-65.5375 0.0045961,-0.016929,-66.1895 0.0043122,0.010872,-67.4796 0.0047341,0.011687,-67.1466 0.0044862,-0.0031194,-67.1744 0.0040565,0.0017347,-66.2588 0.0046251,-0.010573,-65.1491 0.0042434,-0.022105,-65.2739 0.0043096,0.0079566,-65.6207 0.0046239,-0.012723,-66.2588 0.004143,-0.020797,-66.675 0.0043804,0.007956,-67.0079 0.0046135,-0.0091693,-66.9663 0.003853,0.0038752,-66.1479 0.0042924,0.033501,-65.6485 0.0043707,-0.0085086,-64.5526 
0.0041629,-0.004419,-64.9965 0.0046859,0.01378,-66.2727 0.0046791,-0.0082854,-66.5085 0.0042443,0.013296,-68.0483 0.0045428,-0.014748,-66.7305 0.0040734,-0.021863,-66.4253 0.0041598,-0.010524,-65.9537 0.0045834,-0.0075484,-64.3168 0.0039269,0.0030525,-65.1907 0.0043361,0.017656,-65.5236 0.0045107,-0.00312,-66.1479 0.0041235,-0.004234,-67.6044 0.0045916,0.01156,-66.6056 0.0043559,-0.0093513,-67.0357 0.0041099,0.014507,-66.134 0.0045156,0.0071321,-65.1768 0.0041119,-0.003837,-65.3433 0.0041865,0.014789,-65.0104 0.0047428,-0.0051173,-66.2866 0.0042376,-0.021698,-66.4392 0.0045302,0.0053204,-67.0357 0.0045926,-0.015031,-66.8969 0.004028,-0.0058731,-65.9537 0.0041685,0.0039933,-65.6623 0.004197,-0.014634,-64.4416 0.003946,0.0090512,-65.1352 0.0046488,0.0042834,-66.1895 0.0043015,0.0097195,-66.3698 0.0042961,0.0282,-68.1593 0.0047374,0.01752,-66.6056 0.0040965,-0.0067645,-66.453 0.004324,0.016571,-65.8011 0.0046482,-0.012066,-64.0255 0.0042263,-0.0074715,-65.2601 0.0045406,0.0093031,-65.5098 0.0045666,-0.015834,-66.1062 0.004103,-0.012898,-67.5212 0.0046178,-0.0030325,-66.6334 0.0042454,-0.020021,-67.0218 0.0040706,0.013268,-65.9814 0.0044418,0.0070652,-65.2184 0.0041778,-0.021369,-65.2046 0.0042203,0.026001,-64.9965 0.0047789,0.0012015,-66.1062 0.0042161,-0.00043788,-66.5085 0.0042759,0.01094,-67.0357 0.0046614,-0.012782,-66.786 0.0040899,-0.0026661,-65.8565 0.0044348,0.0026455,-65.593 0.0043528,0.0050292,-64.2752 0.0040157,-0.0049422,-64.9271 0.0047363,0.018034,-65.912 0.0042584,-0.01004,-66.2033 0.0043564,-0.00097832,-67.9096 0.0045823,-0.010078,-66.3143 0.0043784,-0.025371,-66.3005 0.0042742,0.023956,-65.704 0.0043372,-0.0097389,-63.97 0.0038545,0.0087318,-65.052 0.0043303,0.0017541,-65.0242 0.0046417,0.00086268,-66.0508 0.0042006,0.015011,-67.2576 0.004722,0.013285,-66.453 0.0044065,-0.0017341,-66.786 0.004288,-0.0038564,-65.7178 0.0045773,0.0016372,-65.1075 0.0040671,-0.017337,-64.9688 0.0043026,0.020536,-64.9133 0.00458,-0.016696,-66.0091 0.0042268,-0.027163,-66.0785 0.0044446,0.014682,-67.0357 0.004735,-0.028151,-66.3143 0.0040209,0.009797,-65.7594 0.0041427,0.0013096,-65.3156 0.004302,-0.017008,-64.0948 0.0038522,-0.003682,-64.7191 0.0047367,0.018354,-66.134 0.0042477,-0.0030631,-65.704 0.0041725,0.00010627,-67.9928 0.0046761,0.0081098,-66.7166 0.0043145,-0.020981,-65.8149 0.0040937,0.028742,-65.2601 0.0043037,0.0014828,-64.6358 0.0039819,-0.013111,-64.7607 0.004401,0.0028881,-64.9549 0.0047076,0.0030719,-66.6195 0.0041962,0.021863,-66.7443 0.0044663,0.0021904,-66.5085 0.0043607,0.004399,-67.5212 0.0041429,0.0039252,-65.3294 0.0046535,0.0034401,-64.9549 0.004017,-0.0025199,-64.7191 0.0044714,0.025264,-64.8162 0.0047704,-0.0057655,-66.0924 0.0042513,-0.01883,-66.675 0.004426,0.019915,-66.7998 0.004765,-0.019584,-66.3421 0.0040506,-0.012056,-66.4392 0.0043522,0.0013278,-65.3433 0.0043356,-0.0074818,-64.2335 0.0041581,0.0020642,-64.7607 0.0045775,0.019129,-65.9814 0.0042725,-0.0068808,-66.4253 0.0043063,0.0038183,-67.8957 0.004798,-0.021794,-66.3559 0.0043567,-0.012045,-66.3005 0.0042421,0.015245,-65.7872 0.0043967,-0.017288,-63.8174 It didn't terminate to optimality after 500 iterations, but it didn't fail like it did before. Perhaps setting m.options.MAX_ITER to a reasonable level and reporting the solution with m.solve(debug=0) can be one way to terminate early without converging to optimality. Overall recommendation is to import the model from Keras / TensorFlow instead of refitting it with the gekko brain function.
2
1
78,929,543
2024-8-29
https://stackoverflow.com/questions/78929543/iterating-over-each-dataframe-in-a-list-of-dataframes-and-changing-name-of-first
I'm trying to iterate through a list of dataframes and do two things: replace blanks with "_" (which I've done) and add a suffix to the first column of each dataframe. I know I can access the first column of each dataframe via the print row within the loop below, but I'm having trouble adding the suffix to the first column. Can someone please help? Sample data: import pandas as pd px_data = {'Date': ['8/11/18', '8/12/18', '8/13/18', '8/14/18'], 'A df': [58.63, 21.25, 19.17, 18.8], 'B df': [35,105,27,98]} SC_data = {'Date': ['8/11/18', '8/12/18', '8/13/18', '8/14/18'], 'A df': [20.50, 6, 82, 74.6], 'B df': [74,62,8,99]} SMA_data = {'Date': ['8/11/18', '8/12/18', '8/13/18', '8/14/18'], 'A df': [2, 95.3, 39, 68.27], 'B df': [58,37,74,11]} px = pd.DataFrame(px_data) SC = pd.DataFrame(SC_data) SMA = pd.DataFrame(SMA_data) dfs=[px,SC,SMA] for i in range(len(dfs)): dfs[i].columns = dfs[i].columns.str.replace(' ', '_') print(dfs[i].columns[0])
After replacing blanks with an underscore, you can add a suffix to the first column of each dataframe in the following way: first_column = dfs[i].columns[0] # get first column dfs[i].rename(columns={first_column: first_column + '_suffix'}, inplace=True) # Add a suffix to the first column
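A minimal end-to-end sketch that combines both steps inside the loop ('_suffix' is just a placeholder for whatever suffix you want):
for i in range(len(dfs)):
    dfs[i].columns = dfs[i].columns.str.replace(' ', '_')  # replace blanks in all column names
    first_column = dfs[i].columns[0]  # name of the first column
    dfs[i].rename(columns={first_column: first_column + '_suffix'}, inplace=True)
print(dfs[0].columns)  # Index(['Date_suffix', 'A_df', 'B_df'], dtype='object')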
2
1
78,927,980
2024-8-29
https://stackoverflow.com/questions/78927980/hiding-facet-row-axis-in-altair-chart
I'm using a faceted chart in Altair in order to split a lengthy timeline into multiple rows. My initial dataset is a pandas dataframe with "Start" and "End" timestamp columns and a "Product" string column. I bin the dataset into roughly equal rows by evenly dividing the time range: timestamp_normalized = (data.Start - data.Start[0]) / (data.Start.iloc[-1] - data.Start[0]) # range from 0 to 1 data['Row'] = (timestamp_normalized * 3 - 1e-9).astype(int) # divide into 3 bins Then I draw the facet chart like this: import altair as alt alt.Chart(data).mark_bar().encode( x=alt.X('Start', title=''), x2='End', color=alt.Color('Product', scale=alt.Scale(scheme='dark2')) ).properties( width=800, height=50 ).facet( row=alt.Row('Row', title=''), ).resolve_scale( x='independent', ) This produces the right chart, but unfortunately the bin indices (which are completely irrelevant, as they only serve to split it into three pieces) are shown on the left side. Is there any way to hide these?
To remove both the header title and labels you can use alt.Row('Row').header(None). You could also control them individually via alt.Row('Row').header(title=None, labels=False) For versions of Altair prior to 5: alt.Row('Row', header=alt.Header(title=None, labels=False))
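Applied to the chart from the question, the facet call would look roughly like this (a sketch assuming Altair 5 and the same data frame):
import altair as alt
alt.Chart(data).mark_bar().encode(
    x=alt.X('Start', title=''),
    x2='End',
    color=alt.Color('Product', scale=alt.Scale(scheme='dark2'))
).properties(
    width=800, height=50
).facet(
    row=alt.Row('Row').header(None),  # hides both the row title and the bin labels
).resolve_scale(
    x='independent',
)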
2
2
78,929,128
2024-8-29
https://stackoverflow.com/questions/78929128/how-to-get-unique-value-count-in-a-polars-series-excluding-null-values
I am working with a Polars Series in Python and I need to obtain the number of unique values in the series. However, I want to exclude any null values from the result. For example, given the following Series: series = pl.Series([1, 2, None, 4, 2, 4, None]) The result should be 3 unique values (1, 2, and 4). What is the best way to achieve this in Polars?
Drop the nulls using .drop_nulls() and count the unique values using .n_unique(). Also, the correct way to represent null values in a call to pl.Series is using Python's None. pl.Null might have worked earlier but it doesn't in the latest polars. import polars as pl series = pl.Series([1, 2, None, 4, 2, 4, None]) print(series.drop_nulls().n_unique()) Output: 3
2
3
78,927,891
2024-8-29
https://stackoverflow.com/questions/78927891/what-is-password-based-authentication-in-the-usercreationform-in-django
I created a signup form in Django using Django forms, and when I run my code there is a field I didn't expect: "Password-based authentication". I did not use it and I have no idea what it is, so can anyone tell me what it is and how I can remove it from the user signup form? form.py from django import forms from django.contrib.auth import get_user_model from django.contrib.auth.forms import UserCreationForm from django.contrib.auth.hashers import check_password class RegisterForm(UserCreationForm): """Form to Create new User""" def __init__(self, *args,hashed_code=None, **kwargs) -> None: super(RegisterForm,self).__init__(*args, **kwargs) self.hashed_code = hashed_code code = forms.CharField(max_length=4, required=True, label="code", help_text="Enter the four-digit code" ) def is_valid(self): """Return True if the form has no errors, or False otherwise.""" if not self.hashed_code: self.add_error("code","you have not any valid code get the code first") elif not check_password(self.data.get("code"),self.hashed_code) : self.add_error("code","code is invalid") return self.is_bound and not self.errors class Meta: model = get_user_model() fields = ["email", "password1", "password2","code"]
That field comes from BaseUserCreationForm, the superclass of UserCreationForm. This is actually a regression in Django 5.1, and will be fixed when https://github.com/django/django/pull/18484 is released in Django 5.1.1. As a workaround, you should be able to delete the field from your subclass with class RegisterForm(UserCreationForm): usable_password = None # Workaround; see https://github.com/django/django/pull/18484
2
3
78,923,273
2024-8-28
https://stackoverflow.com/questions/78923273/compare-two-boolean-arrays-considering-a-tolerance
I have two boolean arrays, first and second that should be mostly equal (up to a tolerance). I would like to compare them in a way that is forgiving if a few elements are different. Something like np.array_equal(first, second, equal_nan=True) is too strict because all values must be the same and np.allclose(first, second, atol=tolerance, equal_nan=True) is not suitable for comparing booleans. The following case should succeed: tolerance = 1e-5 seed = np.random.rand(100, 100, 100) first = seed > 0.5 second = (seed > 0.5) & (seed < 1. - 1e-6) # 99.9999% overlap in true elements The following case should fail: first = seed > 0.5 second = (seed > 0.5) & (seed < 1. - 1e-4) # 99.99% overlap in true elements The following case should also fail: first = seed > 0.5 second = first[::-1] # first.sum() == second.sum(), but they are not similar How can I handle this case in an elegant manner?
Simon's answer was pretty close to what I needed. However, I preferred using a volume overlap metric (e.g. Dice and IoU) instead of normalizing by the size of the voxel. Dice and IoU range over [0, 1], which is pretty convenient in this case, with 0 meaning no overlap and 1 meaning perfect overlap. Dice implementation: tolerance = 1e-5 dice = 2 * (first & second).sum() / (first.sum() + second.sum()) if 1 - dice > tolerance: raise ValueError("arrays differ beyond tolerance") IoU implementation: iou = (first & second).sum() / (first | second).sum() if 1 - iou > tolerance: raise ValueError("arrays differ beyond tolerance")
3
1
78,926,086
2024-8-29
https://stackoverflow.com/questions/78926086/parsing-numeric-data-with-thousands-seperator-in-polars
I have a tsv file that contains integers with thousand separators. I'm trying to read it using polars==1.6.0, the encoding is utf-16 from io import BytesIO import polars as pl data = BytesIO( """ Id\tA\tB 1\t537\t2,288 2\t325\t1,047 3\t98\t194 """.encode("utf-16") ) df = pl.read_csv(data, encoding="utf-16", separator="\t") print(df) I cannot figure out how to get polars to treat column "B" as integer rather than string, and I also cannot find a clean way of casting it to an integer. shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ Id ┆ A ┆ B β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═════β•ͺ═══════║ β”‚ 1 ┆ 537 ┆ 2,288 β”‚ β”‚ 2 ┆ 325 ┆ 1,047 β”‚ β”‚ 3 ┆ 98 ┆ 194 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ cast fails, as does passing the schema explicitly. I also tried using str.strip_chars and to remove the comma, my work-around is to use str.replace_all instead. df = df.with_columns( pl.col("B").str.strip_chars(",").alias("B_strip_chars"), pl.col("B").str.replace_all("[^0-9]", "").alias("B_replace"), ) print(df) shape: (3, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Id ┆ A ┆ B ┆ B_strip_chars ┆ B_replace β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═════β•ͺ═══════β•ͺ═══════════════β•ͺ═══════════║ β”‚ 1 ┆ 537 ┆ 2,288 ┆ 2,288 ┆ 2288 β”‚ β”‚ 2 ┆ 325 ┆ 1,047 ┆ 1,047 ┆ 1047 β”‚ β”‚ 3 ┆ 98 ┆ 194 ┆ 194 ┆ 194 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Also for this to work in general I'd need to ensure that read_csv doesn't try and infer types for any columns so I can convert them all manually (any numeric column with a value > 999 will contain a comma)
To allow for possible multiple , separators use .str.replace_all: df = df.with_columns(pl.col('B').str.replace_all(",", "").cast(pl.Int64)) which gives for the sample data: shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ Id ┆ A ┆ B β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ══════║ β”‚ 1 ┆ 537 ┆ 2288 β”‚ β”‚ 2 ┆ 325 ┆ 1047 β”‚ β”‚ 3 ┆ 98 ┆ 194 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
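To handle the more general case from the question (any numeric column may contain commas), one sketch is to read every column as a string first and clean the affected ones in one pass; recent Polars versions accept infer_schema=False on read_csv to skip type inference (older versions used infer_schema_length=0), and the column list here is just the one from the example:
df = pl.read_csv(data, encoding="utf-16", separator="\t", infer_schema=False)
numeric_cols = ["Id", "A", "B"]  # columns that should end up as integers
df = df.with_columns(
    pl.col(numeric_cols).str.replace_all(",", "").cast(pl.Int64)
)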
5
3
78,925,412
2024-8-28
https://stackoverflow.com/questions/78925412/finding-the-k-th-largest-element-using-heap
I am trying to solve the leetcode problem: kth-largest-element-in-an-array I know a way to solve this is by using a heap. However, I wanted to implement my own heapify method for practice, and here is my code: def findKthLargest(self, nums: List[int], k: int) -> int: def heapify(nums: List[int], i: int): print(nums, i) largest = i left = (2 * i) + 1 right = (2 * i) + 2 if left < len(nums) and nums[largest] < nums[left]: largest = left if right < len(nums) and nums[largest] < nums[right]: largest = right if largest != i: nums[i], nums[largest] = nums[largest], nums[i] print(nums) heapify(nums, largest) print(nums) for i in range(len(nums)-1, -1, -1): heapify(nums, i) print(nums) return nums[k-1] My code is basically following the implementation given in an editorial: def max_heapify(heap_size, index): left, right = 2 * index + 1, 2 * index + 2 largest = index if left < heap_size and lst[left] > lst[largest]: largest = left if right < heap_size and lst[right] > lst[largest]: largest = right if largest != index: lst[index], lst[largest] = lst[largest], lst[index] max_heapify(heap_size, largest) # heapify original lst for i in range(len(lst) // 2 - 1, -1, -1): max_heapify(len(lst), i) And this worked for 21/41 test cases and is failing Input: nums = [3,2,3,1,2,4,5,5,6] k = 4 My code is returning 3 instead of 4. Here is my output: [3, 2, 3, 1, 2, 4, 5, 5, 6] [3, 2, 3, 1, 2, 4, 5, 5, 6] 8 [3, 2, 3, 1, 2, 4, 5, 5, 6] 7 [3, 2, 3, 1, 2, 4, 5, 5, 6] 6 [3, 2, 3, 1, 2, 4, 5, 5, 6] 5 [3, 2, 3, 1, 2, 4, 5, 5, 6] 4 [3, 2, 3, 1, 2, 4, 5, 5, 6] 3 [3, 2, 3, 6, 2, 4, 5, 5, 1] [3, 2, 3, 6, 2, 4, 5, 5, 1] 8 [3, 2, 3, 6, 2, 4, 5, 5, 1] 2 [3, 2, 5, 6, 2, 4, 3, 5, 1] [3, 2, 5, 6, 2, 4, 3, 5, 1] 6 [3, 2, 5, 6, 2, 4, 3, 5, 1] 1 [3, 6, 5, 2, 2, 4, 3, 5, 1] [3, 6, 5, 2, 2, 4, 3, 5, 1] 3 [3, 6, 5, 5, 2, 4, 3, 2, 1] [3, 6, 5, 5, 2, 4, 3, 2, 1] 7 [3, 6, 5, 5, 2, 4, 3, 2, 1] 0 [6, 3, 5, 5, 2, 4, 3, 2, 1] [6, 3, 5, 5, 2, 4, 3, 2, 1] 1 [6, 5, 5, 3, 2, 4, 3, 2, 1] [6, 5, 5, 3, 2, 4, 3, 2, 1] 3 [6, 5, 5, 3, 2, 4, 3, 2, 1] I see that 4 in index 5 is never being sorted after the initial few iterations. Why is this happening? What am I missing? Any help would be appreciated.
There are a few problems with your (partial) heap-sort algorithm: The loop should start from the last non-leaf node (a node with at least one child) and move to the root. In your code, the loop starts from the last element. Starting from the last element won't help in building the heap correctly because leaf nodes (which include the last elements) already satisfy the heap property and don’t need to be heapified. After building the max heap, the largest element is at the root. We then: Swap it with the last element in the heap. Reduce the heap size by one and restore the heap by calling heapify on the root node. Repeat this process k-1 times. After that, the kth largest element is at the root of the heap: def findKthLargest(self, nums: List[int], k: int) -> int: # added heap_size parameter def heapify(nums: List[int], i: int, heap_size: int): print(nums, i) largest = i left = (2 * i) + 1 right = (2 * i) + 2 if left < heap_size and nums[largest] < nums[left]: largest = left if right < heap_size and nums[largest] < nums[right]: largest = right if largest != i: nums[i], nums[largest] = nums[largest], nums[i] print(nums) heapify(nums, largest, heap_size) print(nums) # starting from the last non-leaf node for i in range(len(nums) // 2 - 1, -1, -1): heapify(nums, i, len(nums)) print(nums) # performing k - 1 swaps for i in range(len(nums) - 1, len(nums) - k, -1): nums[0], nums[i] = nums[i], nums[0] heapify(nums, 0, i) # heapify the reduced heap return nums[0] # kth largest element is at the root
2
1
78,924,751
2024-8-28
https://stackoverflow.com/questions/78924751/how-can-i-consolidate-all-rows-with-the-same-id-in-polars
I have a Polars dataframe with a lot of duplicate data I would like to consolidate. Input: shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ data β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════║ β”‚ 1 ┆ a β”‚ β”‚ 1 ┆ b β”‚ β”‚ 1 ┆ c β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ My current (non-working) solution: df = pl.DataFrame({'id': [1, 1, 1], 'data': ['a', 'b', 'c']}) df = df.join(df.select('id', 'data'), on='id') Output: shape: (9, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ data ┆ data_right β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ════════════║ β”‚ 1 ┆ a ┆ a β”‚ β”‚ 1 ┆ b ┆ a β”‚ β”‚ 1 ┆ c ┆ a β”‚ β”‚ 1 ┆ a ┆ b β”‚ β”‚ 1 ┆ b ┆ b β”‚ β”‚ 1 ┆ c ┆ b β”‚ β”‚ 1 ┆ a ┆ c β”‚ β”‚ 1 ┆ b ┆ c β”‚ β”‚ 1 ┆ c ┆ c β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Desired output: shape: (1, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ data_1 ┆ data_2 ┆ data_3 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ════════β•ͺ════════β•ͺ════════║ β”‚ 1 ┆ a ┆ b ┆ c β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ It seems like a self join would be the way to get a table with all the columns I want, but I'm unsure how to write a self join that would include multiple columns instead of just a bunch of rows with only two, and looping through self joins does not seem like the correct thing to do as it quickly balloons in size. This isn't specifically a Polars problem, but I am working in Python-Polars
The following gives you a df in the format you want: import polars as pl df = ( pl.DataFrame({"id": [1, 1, 1], "data": ["a", "b", "c"]}) .group_by("id") .agg(pl.col("data")) .with_columns(structure=pl.col("data").list.to_struct()) .unnest("structure") .drop("data") ) print(df) """ β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ field_0 ┆ field_1 ┆ field_2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ 1 ┆ a ┆ b ┆ c β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ """
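If you want the data_1, data_2, ... names from the desired output instead of field_0, field_1, ..., list.to_struct also accepts a callable that maps the field index to a name (a sketch on the same frame):
df = (
    pl.DataFrame({"id": [1, 1, 1], "data": ["a", "b", "c"]})
    .group_by("id")
    .agg(pl.col("data"))
    .with_columns(pl.col("data").list.to_struct(fields=lambda i: f"data_{i + 1}"))
    .unnest("data")
)
print(df)  # columns: id, data_1, data_2, data_3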
5
5
78,925,371
2024-8-28
https://stackoverflow.com/questions/78925371/creating-a-column-by-if-else-statements
I have a data frame with 4 columns. The columns are: Home_team, Away_team, Home_team_goal, Away_team_goal I want to create a winner column by using goals. I wrote the code below, but when I run it, I get only home teams in the winner column. A = ['A', '1', 'D', '2'] B = ['B', '2', 'E', '2'] C = ['C', '3', 'F', '2'] df = pd.DataFrame(np.array([A,B,C]), columns = ['Home_team', 'Home_team_goal', 'Away_team', 'Away_team_goal']) df['Home_team_goal'] = pd.to_numeric(df['Home_team_goal']) df['Away_team_goal'] = pd.to_numeric(df['Away_team_goal']) df['Average'] = df['Home_team_goal'] - df['Away_team_goal'] df['Winner'] = '' for i, e in enumerate(df['Average']): if e > 0: df['Winner'] = df['Home_team'] elif e < 0: df['Winner'] = df['Away_team'] else: df['Winner'] = 'Tie'
Here is another approach, mapping the goal difference to a code (1 = home win, 2 = away win, 0 = tie): def Winner(x): if x > 0: return 1 elif x < 0: return 2 else: return 0 df['Winner'] = df['Average'].apply(Winner)
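If you want the actual team names instead of numeric codes, a vectorized sketch with numpy.select (using the same dataframe as in the question) avoids both the loop and apply:
import numpy as np
df['Winner'] = np.select(
    [df['Average'] > 0, df['Average'] < 0],  # conditions, checked in order
    [df['Home_team'], df['Away_team']],      # corresponding choices
    default='Tie'
)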
2
0
78,925,432
2024-8-28
https://stackoverflow.com/questions/78925432/pandas-unpack-list-of-dicts-to-columns
I have a dataframe that has a field called fields which is a list of dicts (all rows have the same format). Here is how the dataframe is structured: formId fields 123 [{'number': 1, 'label': 'Last Name', 'value': 'Doe'}, {'number': 2, 'label': 'First Name', 'value': 'John'}] I am trying to unpack the fields column so it looks like: formId Last Name First Name 123 Doe John The code I have currently is: for i,r in df.iterrows(): for field in r['fields']: df.at[i, field['label']] = field['value'] However this does not seem like the most efficient way. Is there a better way to accomplish this?
Personally, I'd construct new dataframe: df = pd.DataFrame( [ {"formId": form_id, **{f["label"]: f["value"] for f in fields}} for form_id, fields in zip(df["formId"], df["fields"]) ] ) print(df) Prints: formId Last Name First Name 0 123 Doe John
5
2
78,925,465
2024-8-28
https://stackoverflow.com/questions/78925465/find-rows-where-pandas-dataframe-column-which-is-a-paragraph-or-list-contains
I have a Pandas DataFrame which contains information about various jobs. I am working on filtering based on values in some lists. I have no problem with single value conditional filtering. However, I am having difficulties doing conditional filtering on the Job Description field, which is essentially a paragraph and multiple lines, and the Job Skills field which is essentially a list after I split on the \n\n. EXAMPLE DATA: dftest=pd.DataFrame({ 'Job Posting':['Data Scientist', 'Cloud Engineer', 'Systems Engineer', 'Data Engineer'], 'Time Type':['Full Time', 'Part Time', 'Full Time', 'Part Time'], 'Job Location': ['Colorado', 'Maryland', 'Florida', 'Virginia'], 'Job Description': [ 'asdfas fasdfsad sadfsdaf sdfsdaf', 'asdfasd fasdfasd fwertqqw rtwergd fverty', 'qwerq e5r45yb rtfgs dfaesgf reasdfs dafads', 'aweert scdfsdf asdfa sdfsds vwerewr'], 'Job Skills': [ 'Algorithms\n\nData Analysis\n\nData Mining\n\nData Modeling\n\nData Science\n\nExploratory Data Analysis (EDA)\n\nMachine Learning\n\nUnstructured Data', 'Application Development\n\nApplication Integrations\n\nArchitectural Modeling\n\nCloud Computing\n\nSoftware Product Design\n\nTechnical Troubleshooting', 'Configuration Management (CM)\n\nInformation Management\n\nIntegration Testing\n\nRequirements Analysis\n\nRisk Management\n\nVerification and Validation (V&V)', 'Big Data Analytics\n\nBig Data Management\n\nDatabase Management\n\nData Mining\n\nData Movement\n\nETL Processing\n\nMetadata Repository'] }) Job Posting Time Type Job Location Job Description Job Skills 0 Data Scientist Full Time Maryland asdfas fasdfsad sadfsdaf sdfsdaf Algorithms\n\nData Analysis\n\nPython\n\n Data... 1 Cloud Engineer Part Time Maryland asdfasd fasdfasd fwertqqw rtwergd fverty Application Development\n\nApplication Integra... 2 Systems Engineer Full Time Virginia qwerq e5r45yb rtfgs dfaesgf reasdfs dafads Configuration Management (CM)\n\nInformation M... 3 Data Engineer Part Time Virginia aweert scdfsdf asdfa sdfsds vwerewr Big Data Analytics\n\nBig Data Management\n\nP... LISTS and SPLITTING OF 'Job Skills' data by '\n\n': state = ['Virginia', 'Maryland', 'District of Columbia'] time = ['Full Time'] skills = ['AI', 'Artificial Intelligence', 'Deep Learning', 'Machine Learning', 'Feature Selection', 'Feature Selection', 'Python', 'Cloud Computing'] dftest['Job Skills'] = dftest['Job Skills'].str.split('\n\n') Results: [Algorithms, Data Analysis, Data Mining, Data Modeling, Data Science, Exploratory Data Analysis (EDA), Machine Learning, Unstructured Data] [Application Development, Application Integrations, Architectural Modeling, Cloud Computing, Software Product Design, Technical Troubleshooting] [Configuration Management (CM), Information Management, Integration Testing, Requirements Analysis, Risk Management, Verification and Validation (V&V)] [Big Data Analytics, Big Data Management, Database Management, Data Mining, Data Movement, ETL Processing, Metadata Repository] CONDITIONAL FILTERING: dftest[dftest['Job Location'].isin(state) & dftest['Time Type'].isin(time)] Results: Job Posting Time Type Job Location Job Description Job Skills 0 Data Scientist Full Time Maryland asdfas fasdfsad sadfsdaf sdfsdaf [Algorithms, Data Analysis, Python, Data Mini... 2 Systems Engineer Full Time Virginia qwerq e5r45yb rtfgs dfaesgf reasdfs dafads Configuration Management (CM), Information Ma... ISSUE: Now I want to take the values in dftest['Job Skills'] and find all the rows that match the skills list. 
I've tried, among others: Iterating through the values in the field and comparing to the skills list and doing it the other way around, but that doesn't work. dftest['Job Skills'].filter(like=skills, axis=0), but that gives another error. I think I am almost there with this, but I just want to have a single unique row if there is a match. For example, this shows rows 0 and 3 match, so I want those rows to print. for i in skills: print('skill: ',i) print(dftest['Job Skills'].map(set([i]).issubset))
IIUC, you can use pd.Series.apply() + any(): out = dftest[dftest["Job Skills"].apply(lambda x: any(s in skills for s in x))] print(out) Prints: Job Posting Time Type Job Location Job Description Job Skills 0 Data Scientist Full Time Colorado asdfas fasdfsad sadfsdaf sdfsdaf [Algorithms, Data Analysis, Data Mining, Data Modeling, Data Science, Exploratory Data Analysis (EDA), Machine Learning, Unstructured Data] 1 Cloud Engineer Part Time Maryland asdfasd fasdfasd fwertqqw rtwergd fverty [Application Development, Application Integrations, Architectural Modeling, Cloud Computing, Software Product Design, Technical Troubleshooting]
3
1
78,925,467
2024-8-28
https://stackoverflow.com/questions/78925467/how-to-have-decorated-function-in-a-python-doctest
How can I include a decorated function inside a Python doctest? def decorator(func): def wrapper() -> None: func() return wrapper def foo() -> None: """ Stub. Examples: >>> @decorator >>> def stub(): ... """ if __name__ == "__main__": import doctest doctest.testmod() Running the above with Python 3.12 throws a SyntaxError: UNEXPECTED EXCEPTION: SyntaxError('invalid syntax', ('<doctest path.to.a[0]>', 1, 0, '@decorator\n', 1, 0))
Multi-line commands should use ... for continuation lines. So the proper way is to use ... instead of >>> on the second line: def foo() -> None: """ Stub. Examples: >>> @decorator ... def stub(): ... """
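A complete, runnable sketch of the module with the corrected doctest (same decorator and foo as in the question; it prints nothing when the test passes):
def decorator(func):
    def wrapper() -> None:
        func()
    return wrapper

def foo() -> None:
    """
    Stub.

    Examples:
    >>> @decorator
    ... def stub(): ...
    """

if __name__ == "__main__":
    import doctest
    doctest.testmod()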
4
4
78,925,207
2024-8-28
https://stackoverflow.com/questions/78925207/real-and-imaginary-part-of-a-complex-number-in-polar-form
I am a bit confused about the proper way to deal with complex numbers in polar form and the way to separate their real and imaginary parts. Notice that I am expecting the radius as the "real part" and the angle as the "imaginary part". The inbuilt re and im functions always get the real and imaginary part of the Cartesian representation of the complex number. Here is an example: from sympy import I, pi, re, im, exp, sqrt # complex num in Cartesian form z = -4 + I*4 print(re(z), im(z)) # -4 4 # complex num in polar form z = 4* sqrt(2) * exp(I * pi * 3/4) print(re(z), im(z)) # -4 4 but expecting 4*sqrt(2), pi*3/4 What is the most SymPythonic way to deal with such a problem?
Maybe you are looking for Abs and arg functions? z = -4 + I*4 print(Abs(z), arg(z)) # 4*sqrt(2) 3*pi/4 z = 4* sqrt(2) * exp(I * pi * 3/4) print(Abs(z), arg(z)) # 4*sqrt(2) 3*pi/4
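A self-contained version of that check (Abs and arg are imported from sympy just like the other helpers in the question):
from sympy import I, pi, exp, sqrt, Abs, arg
z = 4 * sqrt(2) * exp(I * pi * 3 / 4)
print(Abs(z), arg(z))  # 4*sqrt(2) 3*pi/4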
3
3
78,925,095
2024-8-28
https://stackoverflow.com/questions/78925095/how-to-swap-values-between-two-columns-based-on-conditions-in-python
I am trying to switch values between the Range and Unit columns in the dataframe below based on the condition that if Unit contains -, then replace Unit with Range and Range with Unit. To do that, I am creating a unit_backup column so that I don't lose the original Unit value. 1. dataframe sample_df = pd.DataFrame({'Range':['34-67',12,'gm','45-90'], 'Unit':['ml','35-50','10-100','mg']}) sample_df Range Unit 0 34-67 ml 1 12 35-50 2 gm 10-100 3 45-90 mg 2. Function Below is the code I have tried but I am getting an error in this: def range_unit_correction_fn(df): # creating backup of Unit column df['unit_backup'] = df['Unit'] # condition check if df['unit_backup'].str.contains("-"): # if condition is True then replace `Unit` value with `Range` and `Range` with `unit_backup` df['Unit'] = df['Range'] df['Range'] = df['unit_backup'] else: # if condition False then keep the same value df['Range'] = df['Range'] # drop the backup column df = df.drop(['unit_backup'],axis=1) return df Applying the above function on the dataframe sample_df = sample_df.apply(range_unit_correction_fn, axis=1) sample_df Error: 1061 def apply_standard(self): 1062 if self.engine == "python": -> 1063 results, res_index = self.apply_series_generator() ... ----> 4 if df['unit_backup'].str.contains("-"): 5 df['Unit'] = df['Range'] 6 df['Range'] = df['unit_backup'] AttributeError: 'str' object has no attribute 'str' It seems like some silly mistake, but I am not sure where I am going wrong. Appreciate any sort of help here.
When you access df['unit_backup'], you get a scalar string value, not a pandas Series, so calling .str on it raises an error. To fix it you can check the condition directly on the string value in a row-wise approach: def range_unit_correction_fn(df): # creating backup of Unit column df['unit_backup'] = df['Unit'] # condition check if '-' in df['unit_backup']: # if condition is True then replace `Unit` value with `Range` and `Range` with `unit_backup` df['Unit'] = df['Range'] df['Range'] = df['unit_backup'] # drop the backup column df = df.drop(['unit_backup']) return df
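If you prefer to avoid a row-wise apply, a vectorized sketch that swaps the two columns wherever Unit contains a dash (same sample_df as above) is:
mask = sample_df['Unit'].str.contains('-', na=False)
sample_df.loc[mask, ['Range', 'Unit']] = sample_df.loc[mask, ['Unit', 'Range']].to_numpy()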
3
0
78,924,586
2024-8-28
https://stackoverflow.com/questions/78924586/how-to-correctly-render-a-3d-surface-date-axis
I'm having some problems rendering date axes correctly in Shiny for Python with plotly surface plots. In particular, axes of type date are rendered as floats. Find here an example in shiny playground. Note that the same exact code works if I render the figure with fig.show() outside of Shiny for Python (i.e. the x axis renders as a date, not as a float). I already tried to explicitly cast the layout of the figure as fig.update_layout( scene=dict( xaxis=dict( type="date", tickformat='%Y' ) ) ) But I get an even worse result (figure not rendering at all).
Replace the @render_widget with express.render.ui and replace the return fig with return ui.HTML(fig.to_html()) (see express.ui.HTML and plotly.io.to_html). Link to playground import plotly.graph_objects as go import pandas as pd import numpy as np from shiny.express import render, ui def plotSurface(plot_dfs: list, names: list, title: str, **kwargs): """helper function to plot multiple surface with ESM color themes. Args: plot_dfs (list): list of dataframes containing the matrix. index of each df is datetime format and columns are maturities names (list): list of names for the surfaces in plot_dfs title (str): title of the plot Raises: TypeError: _description_ TypeError: _description_ ValueError: _description_ Returns: Figure: plotly figure """ for i, plot_df in enumerate(plot_dfs): if not isinstance(plot_df.index, pd.core.indexes.datetimes.DatetimeIndex): raise TypeError(f"plot_df number {i} in plot_dfs should have an index of type datetime but got {type(plot_df.index)}") if not (isinstance(plot_dfs, list) and isinstance(names, list)): raise TypeError(f"both plot_dfs and names should be list. Instead got {type(plot_dfs), {type(names)}}") if len(plot_dfs) != len(names): raise ValueError(f"plot_dfs and names should have the same length but got {len(plot_dfs)} != {len(names)}") fig = go.Figure() # stack surfaces. The last one will overwrite the first one when values are equal for i, (plot_df, name) in enumerate(zip(plot_dfs, names)): X, Y = np.meshgrid(plot_df.index, plot_df.columns) Z = plot_df.values.T fig.add_trace(go.Surface(z=Z, x=X, y=Y, name=name, showscale=False, showlegend=True, opacity=0.9)) # Update layout for better visualization and custom template fig.update_layout( title=title, title_x=0.5, scene=dict( xaxis_title='Date', yaxis_title='Maturity', zaxis_title='Value', ), margin=dict(l=30, r=30, b=30, t=50), # template=esm_theme, legend=dict(title="Legend"), ) return fig @render.ui def plot_1(): plot_dfs = [ pd.DataFrame( index = pd.to_datetime([f"{y}/01/01" for y in range(2020, 2100)]), columns = ["3m", "6m", "9m"] + [f"{y}Y" for y in range(1,31)], data = 1 ) ] fig = plotSurface(plot_dfs, names=["t"], title=" ") return ui.HTML(fig.to_html())
2
1
78,924,591
2024-8-28
https://stackoverflow.com/questions/78924591/how-to-use-numpy-financials-irr-when-theres-a-withdrawal-of-value-in-the-middl
I am trying to calculate the IRR for a month using numpy_financial, but I am not succeeding. At which position in the array am I making a mistake? For example: initial_balance = 579676.18 final_balance = 2921989.17 From what I understand from the documentation, the final balance should be treated as a withdrawal (positive value) on the last day because it calculates as zero. The cash flows are an array of 31, corresponding to the number of days in that month: [0, 22700.00, 0 ,0 ,0 ,5320.00 ,0 ,0 ,-39900.00 ,0 ,0 ,0 ,0 ,0 ,0 ,-2278500.00 ,0, 31472.00, -42000.00, 0 , 0 , 0, -70300.00, 0, 0, 0, 0, 0, 0,36650.00, 0] So, from what I understand from the documentation, the final array should be an array of 32? With the first being the initial balance and the other 31 being the daily cash flows + final balance? [initial_balance, 0, 22700.00, 0, 0, 0, 5320.00, 0, 0, -39900.00, 0, 0, 0, 0, 0, 0, -2278500.00, 0, 31472.00, -42000.00, 0, 0, 0, -70300.00, 0, 0, 0, 0, 0, 0, 36650.00, final_balance] The correct IRR for this is 0.0001465, but the output is 0.001685. What am I doing wrong? from numpy_financial import irr final_array = [initial_balance, 0, 22700.00, 0, 0, 0, 5320.00, 0, 0, -39900.00, 0, 0, 0, 0, 0, 0, -2278500.00, 0, 31472.00, -42000.00, 0, 0, 0, -70300.00, 0, 0, 0, 0, 0, 0, 36650.00, final_balance] irr(final_array)
According to the document: Thus, for example, at least the first element of values, which represents the initial investment, will typically be negative. Your initial_balance should be negative, rest everything is correct in your example. from numpy_financial import irr final_array = [-initial_balance, 0, 22700.00, 0, 0, 0, 5320.00, 0, 0, -39900.00, 0, 0, 0, 0, 0, 0, -2278500.00, 0, 31472.00, -42000.00, 0, 0, 0, -70300.00, 0, 0, 0, 0, 0, 0, 36650.00, final_balance] # see negative sign before initial_balance irr(final_array) #Output 0.00014651601147774862
2
1
78,924,081
2024-8-28
https://stackoverflow.com/questions/78924081/syntax-error-near-token-on-bash-script-initialising-conda
I've been asked to add some BASH script to a server .bashrc file so I can then initialise conda. When I log into the server, I get the following message: Last login: Wed Aug 28 16:57:04 2024 from {IP ADDRESS} -bash: /home/{username}/.bashrc: line 129: syntax error near unexpected token `then' -bash: /home/{username}/.bashrc: line 129: ` if [ -f "/opt/anaconda3/etc/profile.d/conda.sh" ]; then' Here's the code: # >>> conda initialize >>> # !! Contents within this block are managed by 'conda init' !! __conda_setup="$('/opt/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)" if [ $? -eq 0 ]; then eval "$__conda_setup" else if [ -f "/opt/anaconda3/etc/profile.d/conda.sh" ]; then . "/opt/anaconda3/etc/profile.d/conda.sh" else export PATH="/opt/anaconda3/bin:$PATH" fi fi unset __conda_setup # <<< conda initialize <<< I ran the bash script through shellcheck and it produced this: Line 126: if [ $? -eq 0 ]; then ^-- SC1046 (error): Couldn't find 'fi' for this 'if'. ^-- SC1073 (error): Couldn't parse this if expression. Fix to allow more checks. Line 129: if [ -f "/opt/anaconda3/etc/profile.d/conda.sh" ]; then ^-- SC1047 (error): Expected 'fi' matching previously mentioned 'if'. >> ^-- SC1072 (error): Unexpected keyword/token. Fix any mentioned problems and try again Can anybody see the error? Edit: The /opt/anaconda3/bin/conda file is definitely there, and the /opt/anaconda3/etc/profile.d/conda.sh file is also there.
Your file is filled with non-ASCII characters. For example, here's a hexdump of line 7: $ sed -n 7p foo.sh | xxd 00000000: e280 afe2 80af e280 af20 6966 205b 202d ......... if [ - 00000010: 6620 222f 6f70 742f 616e 6163 6f6e 6461 f "/opt/anaconda 00000020: 332f 6574 632f 7072 6f66 696c 652e 642f 3/etc/profile.d/ 00000030: 636f 6e64 612e 7368 2220 5d3b 2074 6865 conda.sh" ]; the 00000040: 6e0a n. The byte sequence e2 80 af is a NARROW NO-BREAK SPACE, and bash doesn't know what to do with that. Replace all the whitespace with actual spaces; the code is otherwise correct. # >>> conda initialize >>> # !! Contents within this block are managed by 'conda init' !! __conda_setup="$('/opt/anaconda3/bin/conda' 'shell.bash' 'hook' 2>/dev/null)" if [ $? -eq 0 ]; then eval "$__conda_setup" else if [ -f "/opt/anaconda3/etc/profile.d/conda.sh" ]; then . "/opt/anaconda3/etc/profile.d/conda.sh" else export PATH="/opt/anaconda3/bin:$PATH" fi fi unset __conda_setup # <<< conda initialize <<<
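One way to do that replacement in bulk (assuming GNU sed, which supports \xHH escapes; \xe2\x80\xaf is the UTF-8 encoding of the narrow no-break space):
sed -i 's/\xe2\x80\xaf/ /g' ~/.bashrc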
2
6
78,923,520
2024-8-28
https://stackoverflow.com/questions/78923520/filter-a-pandas-row-if-it-is-significantly-larger-than-neighbours-given-that-it
I have a dataframe like this Name Year Value 0 Mexico 1961 14357 1 Mexico 1961 15161 2 Mexico 1961 514658 3 Mexico 1962 15559 4 United States of America 1977 2191197 5 United States of America 1978 2470734 6 United States of America 1978 52470734 7 United States of America 1979 2737377 8 United States of America 1979 52731457 9 United States of America 1980 3029030 10 United States of America 1980 53024589 11 United States of America 1981 3272565 12 United States of America 2010 15199150 13 United States of America 2010 515543018 14 United States of America 2011 15861873 15 United States of America 2011 16250364 I want to convert it to: Name Year Value 0 Mexico 1961 14357 1 Mexico 1961 15161 2 Mexico 1961 14658 3 Mexico 1962 15559 4 United States of America 1977 2191197 5 United States of America 1978 2470734 6 United States of America 1978 2470734 7 United States of America 1979 2737377 8 United States of America 1979 2731457 9 United States of America 1980 3029030 10 United States of America 1980 3024589 11 United States of America 1981 3272565 12 United States of America 2010 15199150 13 United States of America 2010 15543018 14 United States of America 2011 15861873 15 United States of America 2011 16250364 As you can see when the last column was significantly larger than its neighbours, it it replaced by a different number which is just removal of 5 from its front. for example for Mexico, in 3rd row 514658 is replaced by 14658, firstly because 514658 is significantly(5x-10x) larger than its neighbours ie 15161 and 15559. Similarly for USA, United States of America,1979,52731457 is replaced by United States of America,1979,2731457 Similarly United States of America,1978,52470734 United States of America,1980,53024589 United States of America,2010,515543018 are replaced by United States of America,1978,2470734 United States of America,1980,3024589 and United States of America,2010,15543018 respectively. But mind you, firstly the first column ie Name should exactly match, secondly the last column ie Value has to start with 5 and finally Value has to be significantly larger ie with mostly one digit more than neighbours to avoid risk of removing false positives. By now you might have understood this is a data cleaning exercise where some $ symbols have been written as 5 and hence have to be fixed.
You probably want to compare the previous or the next value, as if you only compare to the next one you will not be able to fix any errors that happen to be the last entry for that country. So create a lag and lead column grouped by country using pandas.DataFrame.shift() and then create a Boolean mask of values to replace where: The value starts with 5. The value is at least 5 times larger than the previous value or next value. Of course if you have countries with only one entry then this will not replace any values for that country. df['lag'] = df.groupby('Name')['Value'].shift(1) df['lead'] = df.groupby('Name')['Value'].shift(-1) replace_val = ( df['Value'].astype(str).str.startswith('5') & ( (df['Value'] > df['lag'] * 5) | (df['Value'] > df['lead'] * 5) ) ) df.loc[replace_val, 'Value'] = df.loc[replace_val, 'Value'].astype(str).str[1:].astype(int) df.drop(["lag", "lead"], axis = 1, inplace=True) Output: Name Year Value 0 Mexico 1961 14357 1 Mexico 1961 15161 2 Mexico 1961 14658 3 Mexico 1962 15559 4 United States of America 1977 2191197 5 United States of America 1978 2470734 6 United States of America 1978 2470734 7 United States of America 1979 2737377 8 United States of America 1979 2731457 9 United States of America 1980 3029030 10 United States of America 1980 3024589 11 United States of America 1981 3272565 12 United States of America 2010 15199150 13 United States of America 2010 15543018 14 United States of America 2011 15861873 15 United States of America 2011 16250364 Data data = { "Name": [ "Mexico", "Mexico", "Mexico", "Mexico", "United States of America", "United States of America", "United States of America", "United States of America", "United States of America", "United States of America", "United States of America", "United States of America", "United States of America", "United States of America", "United States of America", "United States of America" ], "Year": [ 1961, 1961, 1961, 1962, 1977, 1978, 1978, 1979, 1979, 1980, 1980, 1981, 2010, 2010, 2011, 2011 ], "Value": [ 14357, 15161, 514658, 15559, 2191197, 2470734, 52470734, 2737377, 52731457, 3029030, 53024589, 3272565, 15199150, 515543018, 15861873, 16250364 ] } df = pd.DataFrame(data)
3
2
78,921,222
2024-8-28
https://stackoverflow.com/questions/78921222/polars-unexpected-behaviour-when-using-drop-nans-on-all-columns
I have a simple Polars dataframe with some nulls and some NaNs and I want to drop only the latter. I'm trying to use drop_nans() by applying it to all columns and for whatever reason it replaces NaNs with a literal 1.0. I am confusion. Maybe I'm using the method wrong, but the docs don't have much info and definitely don't describe this behaviour: ex = pl.DataFrame( { 'a': [float('nan'), 1, float('nan')], 'b': [None, 'a', 'b'] } ) ex.with_columns(pl.all().drop_nans()) Out: a b 1.0 null 1.0 "a" 1.0 "b" I'm using the latest Polars 1.5. What is the correct way of dropping NaNs across all the columns given that in Polars 1.5 dataframes don't seem to have drop_nans() method, only the Series do? EDIT: I'm expecting the result should be: a b 1.0 'a'
What happens in your example is that drop_nans works on a per-column basis. It will first convert the series [float('nan'), 1, float('nan')] to [1], and then broadcast that value to the entire column when combined with ["a", "b"]. It does this because Polars doesn't have a concept of a scalar value yet, and it will treat any column with a single value as such when deciding when to broadcast. In the future this will change, but it is a lot of work. So right now you can see incorrect broadcasts like that when using functions that filter columns, like drop_nan, if it happens to filter down to a length-1 column. Instead of doing pl.all().drop_nans() you should filter to just those rows where column a is not nan: >>> df.filter(pl.col.a.is_not_nan()) shape: (1, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ f64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 1.0 ┆ a β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ Or more generically, if you have multiple columns with floating-point values: >>> import polars.selectors as cs >>> df.filter(pl.all_horizontal(cs.float().is_not_nan())) shape: (1, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ f64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 1.0 ┆ a β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
4
5
78,922,012
2024-8-28
https://stackoverflow.com/questions/78922012/django-how-to-create-foreignkey-field-when-annotate
I wanted to create a ForeignKey field in Django using .annotate, but I couldn't find any option for it, maybe it doesn't exist. I just want to LEFT JOIN a specific model with a specific condition. But now I have to do it like this: queyrset = Invoice.objects.annotate(provided_amount=Subquery(InvoicePayment.objects.filter(invoice_id=OuterRef("id"), date=timezone.now().date()).values("provided_amount"))) queryset.first().provided_amount But I want it like this: queryset = Invoice.objects.annotate(invoice_payment=...) queryset.first().invoice_payment.provided_amount I could do it using property, but I don't want to do it like that, I would like to do everything in one query. Maybe there is a library for this? Django - 4.2.4 Python - 3.10 I tried creating a SQL VIEW and binding it to the model with managed = False, but it's not very convenient to do that every time.
Please don't use related_name='+': it means querying in reverse is now a lot harder. We can name the relation payments instead: class InvoicePayment(models.Model): invoice = models.ForeignKey( Invoice, on_delete=models.CASCADE, related_name='payments' ) date = models.DateField() provided_amount = models.DecimalField( max_digits=64, decimal_places=24, null=True ) class Meta: unique_together = ['invoice', 'date'] class Invoice(models.Model): pass Then, we can work with a FilteredRelation [Django-doc]: from django.db.models import FilteredRelation, Q qs = Invoice.objects.annotate( todays_payment=FilteredRelation( 'payments', condition=Q(payments__date=timezone.now().date()) ), ).select_related('todays_payment') and for example: qs.first().todays_payment.provided_amount
2
2
78,921,783
2024-8-28
https://stackoverflow.com/questions/78921783/regex-how-to-match-a-line-between-two-optional-characters-that-are-not-include
The regex should match /exampleline - match exampleline exampleline - match exampleline exampleline/ - match exampleline /exampleline/ - match exampleline I tried ?\/(.+)\/? but it didn't work /exampleline/ and exampleline/ matched exampleline/, instead of exampleline
You can use negative lookarounds to assert that a match does not start with and does not end with a /: (?!/).+(?<!/) Demo: https://regex101.com/r/EKOvzG/2
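For instance, in Python (just illustrating the pattern on the strings from the question):
import re
pattern = re.compile(r"(?!/).+(?<!/)")
for s in ["/exampleline", "exampleline", "exampleline/", "/exampleline/"]:
    print(pattern.search(s).group(0))  # prints "exampleline" each time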
4
1
78,902,565
2024-8-22
https://stackoverflow.com/questions/78902565/how-do-i-install-python-dev-dependencies-using-uv
I'm trying out uv to manage my Python project's dependencies and virtualenv but I can't see how to install all my dependencies for local development, including the dev dependencies. In my pyproject.toml I have this kind of thing: [project] name = "my-project" dependencies = [ "django", ] [tool.uv] dev-dependencies = [ "factory-boy", ] [tool.uv.pip] python-version = "3.10" I can do the following to create a virtualenv and then generate a requirements.txt lockfile, which does not contain dev-dependencies (which is OK, because that is for production): $ uv venv --python 3.10 $ uv pip compile pyproject.toml -o requirements.txt But I can't see how to install all the dependencies in my virtualenv. uv pip sync will use the requirements.txt. There's also uv sync but I don't understand how that differs, and trying that generates an error: error: Multiple top-level packages discovered in a flat-layout: ['conf', 'hines', 'docker', 'assets', 'node_modules'].
To expand on the other good answers by Phil Dependency Groups With uv >= 0.4.27 we can use the new Python packaging feature dependency groups to make pyproject uv-agnostic: [project] name = "my-project" dynamic = ["version"] requires-python = ">=3.10" dependencies = ["django"] [dependency-groups] dev = ["factory-boy"] Python Pinned We can also make use of .python-version to pin the project Python version, which can be created manually or by uv python: uv python pin 3.10 Check the version written to the file $ cat .python-version 3.10 Dev venv With the above configuration in place, a developer virtualenv can be created with: uv sync Finally, activate the venv: source .venv/bin/activate
13
8
78,901,681
2024-8-22
https://stackoverflow.com/questions/78901681/exporting-only-dev-dependencies
Is there a command for uv that exports/extracts just the dependencies declared as dev dependencies from the pyproject.toml file, for example to pass test dependencies to tox? uv add Django uv add pytest --dev Results in this pyproject.toml: [project] dependencies = [ "django>=4.2.15", ] [tool.uv] dev-dependencies = [ "pytest>=8.3.2", ] How can I generate a file that only contains the dev dependencies, basically a requirements-dev.txt? uv pip compile pyproject.toml does not include the dev deps, only the main deps. And I did not see an argument to make it include it: Resolved 4 packages in 83ms # This file was autogenerated by uv via the following command: # uv pip compile pyproject.toml asgiref==3.8.1 # via django django==4.2.15 # via hatch-demo (pyproject.toml) sqlparse==0.5.1 # via django typing-extensions==4.12.2 # via asgiref For comparison, poetry has poetry export --only dev -o requirements-dev.txt, which will generate something like this: iniconfig==2.0.0 ; python_version >= "3.12" and python_version < "4.0" packaging==24.1 ; python_version >= "3.12" and python_version < "4.0" pluggy==1.5.0 ; python_version >= "3.12" and python_version < "4.0" pytest==8.3.2 ; python_version >= "3.12" and python_version < "4.0"
Update uv to the version 0.4.11 or greater (uv self update), and use the following command: uv export --only-dev --no-hashes | awk '{print $1}' FS=' ;' > requirements-dev.txt
2
1
78,907,444
2024-8-23
https://stackoverflow.com/questions/78907444/is-it-possible-use-vs-code-to-pass-multiple-command-line-arguments-to-python-scr
According to the official documentation "Python debugging in VS Code", launch.json can be configured to run with specific command line arguments, or you can use ${command:pickArgs} to input arguments at run time. Examples of putting arguments in launch.json: Specifying arguments in launch.json for Python Visual Studio Code: How debug Python script with arguments However, I would rather use ${command:pickArgs} because it makes it easier to test multiple times with different values. The first time I tried this, I allowed VS Code to create launch.json. By default it contained the following: "args": [ "${command:pickArgs}" ] When I run the file, I get a dialog for inputting arguments: However, if I put in multiple arguments, they get wrapped in quotes and treated as a single string argument. In a case where, e.g. the arguments are supposed to be numeric, an error is generated. For example, if I pass in 4 7, which both need to be cast to int, sys.argv[1] gets the value '4 7' rather than '4', yielding the error invalid literal for int() with base 10: '4 7' I have tried comma-separating the arguments, and putting quotes around them, and what I get is sys.argv[1] with values like '4, 7' or '"4", "7"'. Needless to say, these don't work either. I've seen examples online of a launch.json configuration as follows: "args": "${command:pickArgs}" That is, there are no brackets around ${command:pickArgs}. However, this generates a problem in that if there are spaces in the path to the Python interpreter, the path gets broken apart at the spaces. See, for example: Spaces in python interpreter path or program result in incorrectly quoted debugging commands with arguments #233 The solution seems to be to put the brackets in, which is what I started with in the first place. Since that's what VS Code did automatically, I'm not sure where the varying examples are coming from (with or without brackets) and can't find documentation on this other than the very short mention of ${command:pickArgs} in the official documentation I linked at the very beginning. So, I have not been able to figure out a way to pass in multiple arguments using ${command:pickArgs} (as opposed to hard-coding directly in launch.json), and the only promising solution (remove the brackets) is poorly documented, generates other errors, and the solution seems to be to put the brackets back in. Is this possible to do at all?
Answering my own question after further testing. I will leave it unaccepted for a while to see if other answers come in. TL;DR This is possible as long as there are no spaces in the path to the Python interpreter. The fact that it breaks if there are spaces in the path is a currently open bug. The question arises because of a previously attempted bug fix. Workaround Make sure your launch.json does not have brackets around ${command:pickArgs}. Make sure the path to Python interpreter has no spaces. If it does, create a link to the environment containing the interpreter (the target) from a path that does not have spaces. On Windows in PowerShell, this looks like: New-Item -ItemType Junction -Path C:any\path\without\spaces -Target 'C:\path to Python\environment which may\have spaces in it' Note that the ItemType has to be Junction, not SymbolicLink. If you use a SymbolicLink, the path will be followed (it will find the interpreter) but then replaced with the target path, and the embedded spaces will still create problems. Long-Term Wait for currently open issue Spaces in python interpreter path or program result in incorrectly quoted debugging commands with arguments #233 to be fixed. Explanation As far as I can tell, this is what happened. In a previous version of VS Code, launch.json would be created with the following configuration: "args": "${command:pickArgs}" Notice there are no brackets around ${command:pickArgs}. This works and allows the user to submit multiple command line arguments via dialog box when debugging. However, due to a bug, there was an error if the path to the Python interpreter had spaces in it: Spaces in python interpreter path or program result in incorrectly quoted debugging commands with arguments #233. An attempted fix inserted brackets into launch.json: Fix error in args with default config #385. This fixed the problem with spaces in the Python path, but created the problem that the question asks about, that multiple command line arguments are treated as part of the same string. This fix appears to have made it into v1.92.2, the version I am using, breaking the ability to parse command line arguments. The solution therefore is to remove the brackets around ${command:pickArgs} introduced by the fix in issue #385, but since the Python environment I was testing had spaces in the path, this triggered issue #233. After additional information, issue #233 was reopened. The workaround is to make sure to use an environment without spaces in the path. The long-term solution is to wait for the fix to issue #233.
8
7
78,900,274
2024-8-22
https://stackoverflow.com/questions/78900274/scipy-1-14-1-breaks-statsmodels-0-14-2
After installation of scipy 1.14.1 a previously viable Python program now fails. The original program is more complex so here's a MRE: import pandas as pd import plotly.express as px if __name__ == "__main__": data = { "Date": [0, 7, 14, 21, 28], "Value": [100, 110, 120, 115, 122] } df = pd.DataFrame(data) px.scatter(df, x="Date", y="Value", trendline="ols") Here's the stack trace (username obfuscated): Traceback (most recent call last): File "/Users/****/Python/Aug22.py", line 12, in <module> tl = px.scatter( ^^^^^^^^^^^ File "/Users/****/venv/lib/python3.12/site-packages/plotly/express/_chart_types.py", line 66, in scatter return make_figure(args=locals(), constructor=go.Scatter) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/****/venv/lib/python3.12/site-packages/plotly/express/_core.py", line 2267, in make_figure patch, fit_results = make_trace_kwargs( ^^^^^^^^^^^^^^^^^^ File "/Users/****/venv/lib/python3.12/site-packages/plotly/express/_core.py", line 361, in make_trace_kwargs y_out, hover_header, fit_results = trendline_function( ^^^^^^^^^^^^^^^^^^^ File "/Users/****/venv/lib/python3.12/site-packages/plotly/express/trendline_functions/__init__.py", line 43, in ols import statsmodels.api as sm File "/Users/****/venv/lib/python3.12/site-packages/statsmodels/api.py", line 136, in <module> from .regression.recursive_ls import RecursiveLS File "/Users/****/venv/lib/python3.12/site-packages/statsmodels/regression/recursive_ls.py", line 14, in <module> from statsmodels.tsa.statespace.mlemodel import ( File "/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/mlemodel.py", line 33, in <module> from .simulation_smoother import SimulationSmoother File "/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/simulation_smoother.py", line 11, in <module> from .kalman_smoother import KalmanSmoother File "/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/kalman_smoother.py", line 11, in <module> from statsmodels.tsa.statespace.representation import OptionWrapper File "/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/representation.py", line 10, in <module> from .tools import ( File "/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/tools.py", line 14, in <module> from . import (_initialization, _representation, _kalman_filter, File "statsmodels/tsa/statespace/_initialization.pyx", line 1, in init statsmodels.tsa.statespace._initialization ImportError: dlopen(/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/_representation.cpython-312-darwin.so, 0x0002): symbol not found in flat namespace '_npy_cabs' If I revert to scipy 1.14.0 this error does not occur. Is there anything I can do that would allow me to run with scipy 1.14.1? Platform: macOS 14.6.1 python 3.12.5 Apple M2
Update: This issue has now been fixed in statsmodels 0.14.3. The rest of this answer is of historical interest only. This looks like a bug to me, so I filed issues against SciPy, statsmodels, and the statsmodels maintainer filed an issue against NumPy. Here's a summary of what I've learned from those issues. Cause of Problem In NumPy 2.1.0, NumPy started marking certain functions with __attribute__((visibility("hidden")), with the effect that extensions built against NumPy would not re-export symbols exported by NumPy. The pull request doing this is here. These are the cabs related symbols exported by NumPy prior to 2.1.0: $ nm -gU venv/lib/python3.12/site-packages/numpy/linalg/_umath_linalg.cpython-312-darwin.so | grep cabs 1 ↡ 00000000000148c0 T _npy_cabs 000000000001487c T _npy_cabsf 0000000000014904 T _npy_cabsl And after: $ nm -gU venv/lib/python3.12/site-packages/numpy/linalg/_umath_linalg.cpython-312-darwin.so | grep cabs It stopped exporting _npy_cabs. In SciPy 1.14.1, SciPy switched from being built against NumPy 2.0.1 to being built against 2.1.0. As a consequence, it also stopped exporting the _npy_cabs symbol. These are the cabs related symbols being exported prior to 1.14.1: $ nm -gU venv/lib/python3.12/site-packages/scipy/special/_ufuncs.cpython-312-darwin.so | grep cabs 000000000007c248 T _npy_cabs 000000000007c204 T _npy_cabsf 000000000007c28c T _npy_cabsl And after: $ nm -gU venv/lib/python3.12/site-packages/scipy/special/_ufuncs.cpython-312-darwin.so | grep cabs It also stopped exporting _npy_cabs. (The npy stands for NumPy, so it is a little strange for SciPy to provide this function.) Fix In summary, you have three choices of ways to solve the problem. Workaround #1 Downgrade SciPy from 1.14.1 to 1.14.0. As a consequence, you would be missing these fixes. Workaround #2 Downgrade NumPy from 2.1.0 to 2.0.1. As a consequence, you would be missing these fixes and new features. Workaround #3 Wait for a fix from statsmodels for the issue. They would need to adjust the way they are linking NumPy. This has now been fixed in statsmodels 0.14.3. Issue links: scipy statsmodels numpy
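For reference, the workarounds above are one-line installs (pin the versions as described; the statsmodels line is the permanent fix once 0.14.3 is available):
pip install 'scipy==1.14.0'        # workaround #1: older SciPy
pip install 'numpy==2.0.1'         # workaround #2: older NumPy
pip install 'statsmodels>=0.14.3'  # permanent fix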
2
3
78,912,175
2024-8-25
https://stackoverflow.com/questions/78912175/combine-cross-between-2-dataframe-efficiently
I am working with 2 datasets. One describes some time windows by their start and stop times. The second one contains a big list of events with their corresponding timestamps. I want to combine this into a single dataframe that contains the start and stop time of each window, together with how many events happened during this time window. I have managed to "solve" my problem with: import polars as pl actions = { "id": ["a", "a", "a", "a", "b", "b", "a", "a"], "action": ["start", "stop", "start", "stop", "start", "stop", "start", "stop"], "time": [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0], } events = { "name": ["x", "x", "x", "y", "y", "z", "w", "w", "w"], "time": [0.0, 0.1, 0.5, 1.1, 2.5, 3.0, 4.5, 4.9, 5.5], } actions_df = ( pl.DataFrame(actions) .group_by("id") .agg( start=pl.col("time").filter(pl.col("action") == "start"), stop=pl.col("time").filter(pl.col("action") == "stop"), ) .explode(["start", "stop"]) ) df = ( actions_df.join(pl.DataFrame(events), how="cross") .filter((pl.col("time") >= pl.col("start")) & (pl.col("time") <= pl.col("stop"))) .group_by(["id", "start", "stop", "name"]) .agg(count=pl.count("name")) .pivot("name", index=["id", "start", "stop"], values="count") .fill_null(0) ) result_df = ( actions_df.join(df, on=["id", "start", "stop"], how="left") .fill_null(0) .sort("start") ) print(result_df) """ β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ id ┆ start ┆ stop ┆ w ┆ y ┆ x ┆ z β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ f64 ┆ f64 ┆ u32 ┆ u32 ┆ u32 ┆ u32 β”‚ β•žβ•β•β•β•β•β•ͺ═══════β•ͺ══════β•ͺ═════β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ a ┆ 0.0 ┆ 1.0 ┆ 0 ┆ 0 ┆ 3 ┆ 0 β”‚ β”‚ a ┆ 2.0 ┆ 3.0 ┆ 0 ┆ 1 ┆ 0 ┆ 1 β”‚ β”‚ b ┆ 4.0 ┆ 5.0 ┆ 2 ┆ 0 ┆ 0 ┆ 0 β”‚ β”‚ a ┆ 6.0 ┆ 7.0 ┆ 0 ┆ 0 ┆ 0 ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ """ My issue is that this approach "explodes" in RAM and my process gets killed. I guess that the join(... how="cross") makes my dataframe huge, just to then ignore most of it again. Can I get some help/hints on a better way to solve this? To give some orders of magnitude, my "actions" datasets have on the order of 100-500 time windows (~1 MB), and my "events" datasets have on the order of ~10 million (~200 MB). And I am getting my process killed with 16 GB of RAM. EDIT In real data, my intervals can be overlapping. Thanks to @RomanPekar for bringing this up.
The mentioned non-equi joins PR has been merged as part of Polars 1.7.0 It is called .join_where() and is just an inner join for now. (actions_df .join_where(events_df, pl.col.start <= pl.col.time, pl.col.stop >= pl.col.time ) .pivot( on = "name", values = "time", aggregate_function = pl.len() ) ) shape: (3, 7) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ start ┆ stop ┆ x ┆ y ┆ z ┆ w β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ f64 ┆ f64 ┆ u32 ┆ u32 ┆ u32 ┆ u32 β”‚ β•žβ•β•β•β•β•β•ͺ═══════β•ͺ══════β•ͺ══════β•ͺ══════β•ͺ══════β•ͺ══════║ β”‚ a ┆ 0.0 ┆ 1.0 ┆ 3 ┆ null ┆ null ┆ null β”‚ β”‚ a ┆ 2.0 ┆ 3.0 ┆ null ┆ 1 ┆ 1 ┆ null β”‚ β”‚ b ┆ 4.0 ┆ 5.0 ┆ null ┆ null ┆ null ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ There is an issue for additional join types: https://github.com/pola-rs/polars/issues/18669
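Since .join_where() is an inner join, windows with no events at all (such as id "a", 6.0–7.0 in the question) drop out of the result. One way to get them back with zero counts is to left-join the pivoted counts onto actions_df, just as the original code did. A sketch, assuming actions_df and events_df (i.e. pl.DataFrame(events)) are the frames built in the question:

import polars as pl

counts = (
    actions_df
    .join_where(
        events_df,
        pl.col("start") <= pl.col("time"),
        pl.col("stop") >= pl.col("time"),
    )
    .pivot(on="name", index=["id", "start", "stop"], values="time", aggregate_function=pl.len())
)

result = (
    actions_df
    .join(counts, on=["id", "start", "stop"], how="left")
    .fill_null(0)   # windows without any events get 0 in every event column
    .sort("start")
)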
3
1
78,911,891
2024-8-25
https://stackoverflow.com/questions/78911891/could-not-parse-modelproto-from-meta-llama-3-1-8b-instruct-tokenizer-model
I tried to use Llama 3.1 without relying on external programs, but I was not successful. I downloaded the Meta-Llama-3.1-8B-Instruct model, which includes only the files consolidated.00.pth, params.json, and tokenizer.model. The params.json file contains the following configuration: { "dim": 4096, "n_layers": 32, "n_heads": 32, "n_kv_heads": 8, "vocab_size": 128256, "ffn_dim_multiplier": 1.3, "multiple_of": 1024, "norm_eps": 1e-05, "rope_theta": 500000.0, "use_scaled_rope": true } Can you guide me on how to use this model? I have tried the following code: import torch from transformers import LlamaTokenizer, LlamaForCausalLM, LlamaConfig model_path = 'Meta-Llama-3.1-8B-Instruct' tokenizer_path = f'{model_path}/tokenizer.model' # Load tokenizer tokenizer = LlamaTokenizer.from_pretrained(tokenizer_path) # Configure the model model_config = LlamaConfig( hidden_size=4096, num_hidden_layers=32, num_attention_heads=32, intermediate_size=5324.8, # This value is calculated as 4096 * 1.3 vocab_size=128256, use_scaled_rope=True ) # Load the model model = LlamaForCausalLM(config=model_config) model.load_state_dict(torch.load(f'{model_path}/consolidated.00.pth')) model.eval() # Tokenize and generate output input_text = "Hello, how are you?" inputs = tokenizer(input_text, return_tensors='pt') outputs = model.generate(inputs['input_ids']) # Decode and print the output decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True) print(decoded_output) However, I got the following error: (venv) PS C:\Users\Main\Desktop\mygguf> python app.py C:\Users\Main\Desktop\mygguf\venv\Lib\site-packages\transformers\tokenization_utils_base.py:2165: FutureWarning: Calling LlamaTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead. warnings.warn( You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. 
This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message Traceback (most recent call last): File "C:\Users\Main\Desktop\mygguf\app.py", line 9, in <module> tokenizer = LlamaTokenizer.from_pretrained(tokenizer_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Main\Desktop\mygguf\venv\Lib\site-packages\transformers\tokenization_utils_base.py", line 2271, in from_pretrained return cls._from_pretrained( ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Main\Desktop\mygguf\venv\Lib\site-packages\transformers\tokenization_utils_base.py", line 2505, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Main\Desktop\mygguf\venv\Lib\site-packages\transformers\models\llama\tokenization_llama.py", line 171, in __init__ self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Main\Desktop\mygguf\venv\Lib\site-packages\transformers\models\llama\tokenization_llama.py", line 198, in get_spm_processor tokenizer.Load(self.vocab_file) File "C:\Users\Main\Desktop\mygguf\venv\Lib\site-packages\sentencepiece\__init__.py", line 961, in Load return self.LoadFromFile(model_file) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Main\Desktop\mygguf\venv\Lib\site-packages\sentencepiece\__init__.py", line 316, in LoadFromFile return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Internal: could not parse ModelProto from Meta-Llama-3.1-8B-Instruct/tokenizer.model
The way to think about using an LLM is that you have to feed it information systematically. Since you are using a publicly available model, it already comes with its weights, config, etc., so you don't need to declare your own. All you need to do is start by declaring the file paths of your model (i.e. where you downloaded it). There is also tokenization: tokens are simply the numeric vectors the model understands, mapped from the words you pass in. If the output is not what you want, you can experiment with different tokenizers (such as BERT, All-net, etc.); there are plenty of tutorials on this, and it is also worth spending some time on the Hugging Face website. Here is a snippet of how to use the model, with comments on what each line does. I hope it helps you! import torch from transformers import AutoTokenizer, AutoModel from transformers import LlamaTokenizer, LlamaForCausalLM, LlamaConfig model_path = 'Meta-Llama-3.1-8B-Instruct' # Load the tokenizer directly from the model path tokenizer = AutoTokenizer.from_pretrained(model_path) # Load model configuration from params.json config = LlamaConfig.from_json_file(f'{model_path}/params.json') # load the model with the specific configs. model = LlamaForCausalLM(config=config) # Load the weights of the model state_dict = torch.load(f'{model_path}/consolidated.00.pth', map_location=torch.device('cpu')) model.load_state_dict(state_dict) model.eval() # tokenize the input and generate output input_text = "Hello, how are you?" inputs = tokenizer(input_text, return_tensors='pt') outputs = model.generate(inputs['input_ids']) # decode and print the output output = tokenizer.decode(outputs[0], skip_special_tokens=True) print(output)
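As a side note, the RuntimeError in the question comes from LlamaTokenizer trying to parse Llama 3.1's tokenizer.model with SentencePiece, while Llama 3.x ships a tiktoken-style tokenizer file. If you have access to the Hugging Face-format checkpoint (which ships config.json and tokenizer.json), loading through the Auto classes avoids the manual config and tokenizer handling entirely. A minimal sketch, assuming access to the gated meta-llama/Meta-Llama-3.1-8B-Instruct repo, a valid Hugging Face access token, and enough memory for an 8B model:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # gated repo; requires prior `huggingface-cli login`

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))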
6
1
78,912,212
2024-8-25
https://stackoverflow.com/questions/78912212/how-to-make-cpython-report-vectorcall-as-available-only-when-it-will-actually-he
The Vectorcall protocol is a new calling convention for Python's C API defined in PEP 590. The idea is to speed up calls in Python by avoiding the need to build intermediate tuples and dicts, and instead pass all arguments in a C array. Python supports checking if a callable supports vectorcall by checking if the result of PyVectorcall_Function() is not NULL. However, it appears that functions support vectorcall even when using it will actually harm performance. For example, take the following simple function: def foo(*args): pass This function won't benefit from vectorcall - because it collects args, Python needs to collect the arguments into a tuple anyway. So if I will allocate a tuple instead of a C style array, it will be faster. I also benchmarked this: use std::hint::black_box; use criterion::{criterion_group, criterion_main, Criterion}; use pyo3::conversion::ToPyObject; use pyo3::ffi; use pyo3::prelude::*; fn criterion_benchmark(c: &mut Criterion) { Python::with_gil(|py| { let module = PyModule::from_code( py, cr#" def foo(*args): pass "#, c"args_module.py", c"args_module", ) .unwrap(); let foo = module.getattr("foo").unwrap(); let args_arr = black_box([ 1.to_object(py).into_ptr(), "a".to_object(py).into_ptr(), true.to_object(py).into_ptr(), ]); unsafe { assert!(ffi::PyVectorcall_Function(foo.as_ptr()).is_some()); } c.bench_function("vectorcall - vectorcall", |b| { b.iter(|| unsafe { let args = vec![args_arr[0], args_arr[1], args_arr[2]]; let result = black_box(ffi::PyObject_Vectorcall( foo.as_ptr(), args.as_ptr(), 3, std::ptr::null_mut(), )); ffi::Py_DECREF(result); }) }); c.bench_function("vectorcall - regular call", |b| { b.iter(|| unsafe { let args = ffi::PyTuple_New(3); ffi::Py_INCREF(args_arr[0]); ffi::PyTuple_SET_ITEM(args, 0, args_arr[0]); ffi::Py_INCREF(args_arr[1]); ffi::PyTuple_SET_ITEM(args, 1, args_arr[1]); ffi::Py_INCREF(args_arr[2]); ffi::PyTuple_SET_ITEM(args, 2, args_arr[2]); let result = black_box(ffi::PyObject_Call(foo.as_ptr(), args, std::ptr::null_mut())); ffi::Py_DECREF(result); ffi::Py_DECREF(args); }) }); }); } criterion_group!(benches, criterion_benchmark); criterion_main!(benches); The benchmark is in Rust and uses the convenient functions of the PyO3 framework, but the core work is done using raw FFI calls to the C API, so this shouldn't affect the results. Results: vectorcall - vectorcall time: [51.008 ns 51.263 ns 51.530 ns] vectorcall - regular call time: [35.638 ns 35.826 ns 36.022 ns] The benchmark confirms my suspicion: Python has to do additional works when I use the vectorcall API. On the other hand, the vectorcall API can be more performant than using tuples even when needing to allocate memory, for example when calling a bound method with the PY_VECTORCALL_ARGUMENTS_OFFSET flag. A benchmark confirms that too. So here is my question: Is there a way to know when a vectorcall won't help and even do damage, or alternatively, when a vectorcall can help? Context, even though I don't think it's relevant: I'm experimenting with a pycall!() macro for PyO3. The macro has the ability to call with normal parameters, but also unpack parameters, and should do so in the most efficient way possible. Using vectorcall where available sounds like a good idea; but then I'm facing this obstacle where I cannot know if I should prefer converting directly to a tuple or to a C-style array for vectorcall.
If it looks like PyObject_Call is faster for you, that's probably some sort of inefficiency on the Rust side of things, and you should look into optimizing that. Trying to bypass vectorcall doesn't actually provide the Python-side speedup you're thinking of. Particularly, the tuple you're creating is overhead, not an optimization. For objects that support vectorcall, including standard function objects, PyObject_Call literally just uses vectorcall: if (vector_func != NULL) { return _PyVectorcall_Call(tstate, vector_func, callable, args, kwargs); } Even a direct call to tp_call will just indirectly delegate to vectorcall, because for most types that support vectorcall (again including standard function objects), tp_call is set to PyVectorcall_Call. So even if your function needs its arguments in a tuple, making a tuple for PyObject_Call doesn't actually save any work. PyVectorcall_Call will extract the tuple's item array: /* Fast path for no keywords */ if (kwargs == NULL || PyDict_GET_SIZE(kwargs) == 0) { return func(callable, _PyTuple_ITEMS(tuple), nargs, NULL); } and then if the function needs a tuple, it will have to build a second tuple out of that array. It would take a highly unusual custom callable type to actually support vectorcall, support tp_call without delegating to vectorcall, and have tp_call be faster. Adding extra code to check for this kind of highly unusual case, even if possible, would lose too much time on the usual cases to pay off. And anyway, it's not possible to implement that extra code, in general. There is nothing like a tp_is_vectorcall_faster slot, or any other way for a type to signal that vectorcall is supported but should sometimes be avoided. You'd have to special-case individual types and implement type-specific handling.
8
1
78,904,911
2024-8-23
https://stackoverflow.com/questions/78904911/pydantic-is-not-compatible-with-langchain-documents
I am using LangChain 0.2.34 together with Python 3.12.5 to build a RAG architecture and Pydantic 2.8.2 for validation. It appears that some LangChain classes are not compatible with Pydantic although I explicitly allow arbitrary types. Or am I missing something? Here is a code sample and the respective error. from typing import List from langchain_core.documents.base import Document from pydantic import BaseModel, ConfigDict class ResponseBody(BaseModel): message: List[Document] model_config = ConfigDict(arbitrary_types_allowed=True) docs = [Document(page_content="This is a document")] res = ResponseBody(message=docs) Error: TypeError: BaseModel.validate() takes 2 positional arguments but 3 were given
LangChain is still using Pydantic v1 functionality under the hood, so define your model with the v1 syntax: from pydantic.v1 import BaseModel, ConfigDict ... You can read about their migration plan and Pydantic compatibility here: How to use LangChain with different Pydantic versions
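For completeness, here is a minimal sketch of how the model from the question could look with the v1 API. It uses the classic inner Config class, which is the v1 way of setting model configuration (arbitrary_types_allowed may not even be needed here, since Document is itself a v1 model):

from typing import List
from langchain_core.documents.base import Document
from pydantic.v1 import BaseModel

class ResponseBody(BaseModel):
    message: List[Document]

    class Config:
        # v1-style equivalent of model_config / ConfigDict
        arbitrary_types_allowed = True

docs = [Document(page_content="This is a document")]
res = ResponseBody(message=docs)
print(res.message[0].page_content)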
2
3
78,918,133
2024-8-27
https://stackoverflow.com/questions/78918133/django-viewflow-passing-field-values-via-urls-upon-process-start
Is it possible to pass a value to a process via the startup URL/path? I have a process model with a note field. I want to start a new process flow and pass the note in the URL, e.g. http://server.com/my_process/start/?note=mynote
Since Viewflow is a thin workflow layer built on top of Django, URL parameter processing works just like in Django. Parameters are available in the request.GET object, and you can use them in a custom view according to your needs. For example, to pre-initialize a user form with a value from the URL, you can create a custom subclass of CreateProcessView: from viewflow.workflow.flow.views import CreateProcessView class CustomCreateProcessView(CreateProcessView): """ Custom view to initialize a process with data from the request URL. """ def get_initial(self): initial = super().get_initial() initial['text'] = self.request.GET.get('note', '') return initial With this approach, when you navigate to a URL like http://server.com/my_process/start/?note=mynote, the note parameter will be extracted and used to initialize the text field in the form. For more details, refer to the Django documentation on class-based views and URL handling.
2
1
78,906,787
2024-8-23
https://stackoverflow.com/questions/78906787/how-to-update-the-content-of-a-div-with-a-button-using-fasthtml
Using FastHTML, I would like to create a navigation bar that updates a div's content based on the item clicked. I wrote the following piece of code, but nothing happens when I click the buttons: the #page-content div doesn't update, and I don't see the logs of the GET methods being triggered. from fasthtml.common import * app = FastHTML() @app.route("/", methods=["get"]) def index(): return Html( Body( Header(navigation_bar(["page1", "page2"])), Div(page1(), id="page-content"), ), ) def navigation_bar(navigation_items: list[str]): return Nav( Ul( *[ Li(Button(item, hx_get=f"/{item}", hx_target="#page-content")) for item in navigation_items ] ), ) @app.route("/page1", methods=["get"]) def page1(): return Div("You are on page 1.") @app.route("/page2", methods=["get"]) def page2(): return Div("You are on page 2.") serve() Here is the generated HTML: <html> <head></head> <body> <header> <nav> <ul> <li> <button hx-get="/page1" hx-target="#page-content">page-1</button> </li> <li> <button hx-get="/page2" hx-target="#page-content">page-2</button> </li> </ul> </nav> </header> <div id="page-content"><div>You are on page 1.</div></div> </body> </html> I think the HTML is generated correctly, as the tags' attributes related to HTMX look like what I could see on online examples. Where could the issue be?
The issue came from the fact that I was wrapping my Body component into an Html component manually. Removing the Html component, FastHTML takes care of the wrapping automatically and also includes the required scripts. Here is my updated piece of code: from fasthtml.common import * app = FastHTML() @app.route("/", methods=["get"]) def index(): return Body( Header(navigation_bar(["page1", "page2"])), Div(page1(), id="page-content"), ) def navigation_bar(navigation_items: list[str]): return Nav( Ul( *[ Li(Button(item, hx_get=f"/{item}", hx_target="#page-content")) for item in navigation_items ] ), ) @app.route("/page1", methods=["get"]) def page1(): return Div("You are on page 1.") @app.route("/page2", methods=["get"]) def page2(): return Div("You are on page 2.") serve()
2
1
78,920,796
2024-8-27
https://stackoverflow.com/questions/78920796/cannot-reproduce-working-crc-16-ccitt-kermit-with-c
I found an example of working CRC-16/CCITT KERMIT python code shown below: def make_crc_table(): poly = 0x8408 table = [] for byte in range(256): crc = 0 for bit in range(8): if (byte ^ crc) & 1: crc = (crc >> 1) ^ poly else: crc >>= 1 byte >>= 1 table.append(crc) return table table = make_crc_table() def crc_16_fast(msg): crc = 0xffff for byte in msg: crc = table[(byte ^ crc) & 0xff] ^ (crc >> 8) return crc ^ 0xffff # Test packet = "64 23 2F 36 27 2F 2F 23 1F 25" # Perform CRC16 (Kermit - with poly of 0x8408) msg = bytes.fromhex(packet) out = crc_16_fast(msg) hi, lo = out >> 8, out & 0xff print('{:04x} = {:02x} {:02x}'.format(out, hi, lo)) The output is shown below and matches what I expect: a40e = a4 0e I am trying to reproduce this in C#. I found the following code snippet, however it returns differing results from the python code. So far I have not been able to figure out what the difference is. How can I get the C# code to produce the same expected output? using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; public enum Crc16Mode : ushort { Standard = 0xA001, CcittKermit = 0x8408 } namespace CollinsWirelessTransModule { public class Crc16 { readonly ushort[] table = new ushort[256]; public ushort ComputeChecksum(params byte[] bytes) { ushort crc = 0; for (int i = 0; i < bytes.Length; ++i) { byte index = (byte)(crc ^ bytes[i]); crc = (ushort)((crc >> 8) ^ table[index]); } return crc; } public byte[] ComputeChecksumBytes(params byte[] bytes) { ushort crc = ComputeChecksum(bytes); return BitConverter.GetBytes(crc); } public Crc16(Crc16Mode mode) { ushort polynomial = (ushort)mode; Console.WriteLine("Polynomial: " + polynomial.ToString("X")); ushort value; ushort temp; for (ushort i = 0; i < table.Length; ++i) { value = 0; temp = i; for (byte j = 0; j < 8; ++j) { if (((value ^ temp) & 0x0001) != 0) { value = (ushort)((value >> 1) ^ polynomial); } else { value >>= 1; } temp >>= 1; } table[i] = value; } } } class Program { static void Main(string[] args) { var commsCrc = new Crc16(Crc16Mode.CcittKermit); var checkBytes = new byte[] { 0x64, 0x23, 0x2F, 0x36, 0x27, 0x2F, 0x2F, 0x23, 0x1F, 0x25 }; var valueToPrint = commsCrc.ComputeChecksum(checkBytes); var bytesToPrint = BitConverter.GetBytes(valueToPrint); string hex = BitConverter.ToString(bytesToPrint); Console.WriteLine("Hex bytes: " + hex); // Wait for the user to respond before closing. Console.ReadKey(); } } } The output here does not match Python and looks like: Polynomial: 8408 Hex bytes: 76-C7 I made sure the polynomial and starting values were the same.
We can rewrite the Python code directly, if that is the reference we must follow (rather than looking for a ready-made solution and checking whether it behaves the same, we translate the Python code into C#). Here's a C# snippet with comments pointing out which part of the Python code each block corresponds to: var poly = 0x8408; var table = new List<int>(); int crc; // start make_crc_table for (int @byte = 0; @byte < 256; @byte++) { var b = @byte; crc = 0; for (byte bit = 0; bit < 8; bit++) { crc = ((b ^ crc) & 1) != 0 ? (crc >> 1) ^ poly : crc >>= 1; b >>= 1; } table.Add(crc); } // end make_crc_table // Test var packet = "64 23 2F 36 27 2F 2F 23 1F 25"; var bytes = packet.Split(' ') .Select(x => Convert.ToByte(x, 16)) .ToArray(); // start crc_16_fast crc = 0xFFFF; foreach (var @byte in bytes) { crc = table[(@byte ^ crc) & 0xFF] ^ (crc >> 8); } var @out = crc ^ 0xFFFF; var hi = @out >> 8; var low = @out & 0xFF; // end crc_16_fast Console.WriteLine("out = " + @out.ToString("x4") + Environment.NewLine + "hi = " + hi.ToString("x2") + Environment.NewLine + "low = " + low.ToString("x2"));
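If you want to sanity-check either port, a bit-by-bit reference implementation in Python (no lookup table) is handy. It is mathematically equivalent to the table-driven crc_16_fast from the question, so both should print a40e for the test packet. A small sketch, using the same init/xor-out values as the question:

def crc16_bitwise(msg, poly=0x8408, init=0xFFFF, xorout=0xFFFF):
    # Reflected CRC-16: process each byte LSB-first, one bit at a time.
    crc = init
    for byte in msg:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ poly
            else:
                crc >>= 1
    return crc ^ xorout

msg = bytes.fromhex("64 23 2F 36 27 2F 2F 23 1F 25")
print(format(crc16_bitwise(msg), "04x"))  # should match the table-driven result: a40e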
3
1
78,907,902
2024-8-24
https://stackoverflow.com/questions/78907902/how-do-i-do-calculations-with-a-sliding-window-while-being-memory-efficient
I am working with very large (several GB) 2-dimensional square NumPy arrays. Given an input array a, for each element, I would like to find the direction of its largest adjacent neighbor. I am using the provided sliding window view to try to avoid creating unnecessary copies: # a is an L x L array of type np.float32 swv = sliding_window_view(a, (3, 3)) # (L-2) x (L-2) x 3 x 3 directions = swv.reshape(L-2, L-2, 9)[:,:,1::2].argmax(axis = 2).astype(np.uint8) However, calling reshape here creates a (L-2) x (L-2) x 9 copy instead of a view, which consumes an undesirably large chunk of memory. Is there a way to do this operation in a vectorized fashion, but with a smaller memory footprint? EDIT: Many of the responses are geared towards NumPy, which uses CPU (since that's what I initially asked, to simplify the problem). Would the optimal strategy be different for using CuPy, which is NumPy for GPU? As far as I know, it makes using Numba much less straightforward.
Since using sliding_window_view is not efficient for your use case, I will provide an alternative using Numba. First, to simplify the implementation, define the following argmax alternative. from numba import njit @njit def argmax(*values): """argmax alternative that can take an arbitrary number of arguments. Usage: argmax(0, 1, 3, 2) # 2 """ max_arg = 0 max_value = values[0] for i in range(1, len(values)): value = values[i] if value > max_value: max_value = value max_arg = i return max_arg This is a standard argmax function, except it takes multiple scalar arguments instead of a single numpy array. Using this argmax alternative, your operation can be easily re-implemented. @njit(cache=True) def neighbor_argmax(a): height, width = a.shape[0] - 2, a.shape[1] - 2 out = np.empty((height, width), dtype=np.uint8) for y in range(height): for x in range(width): # window: a[y:y + 3, x:x + 3] # center: a[y + 1, x + 1] out[y, x] = argmax( a[y, x + 1], # up a[y + 1, x], # left a[y + 1, x + 2], # right a[y + 2, x + 1], # down ) return out This function requires only a few variables to operate, excluding the input and output buffers. So we don't need to worry about memory footprint. Alternatively, you can use stencil, a sliding window utility for Numba. With stencil, you only need to define the kernel. Numba will take care of the rest. from numba import njit, stencil @stencil def kernel(window): # window: window[-1:2, -1:2] # center: window[0, 0] return np.uint8( # Don't forget to cast to np.uint8. argmax( window[-1, 0], # up window[0, -1], # left window[0, 1], # right window[1, 0], # down ) ) @njit(cache=True) def neighbor_argmax_stencil(a): return kernel(a)[1:-1, 1:-1] # Slicing is not mandatory. It can also be inlined, if you like. @njit(cache=True) def neighbor_argmax_stencil_inlined(a): f = stencil(lambda w: np.uint8(argmax(w[-1, 0], w[0, -1], w[0, 1], w[1, 0]))) return f(a)[1:-1, 1:-1] # Slicing is not mandatory. However, stencil is very limited in functionality and cannot completely replace sliding_window_view. One difference is that there is no option to skip the edges. It is always padded with a constant value (0 by default). That is, if you put (L, L) matrix, you will get (L, L) output, not (L-2, L-2). This is why I am slicing the output in the code above to match your implementation. However, this may not be the desired behavior, as it breaks memory contiguity. You can copy after slicing, but be aware that it will increase the peak memory usage. In addition, it should be noted that these functions can also be easily adapted for multi-threading. For details, please refer to the benchmark code below. Here is the benchmark. import math import timeit import numpy as np from numba import njit, prange, stencil from numpy.lib.stride_tricks import sliding_window_view def baseline(a): L = a.shape[0] swv = sliding_window_view(a, (3, 3)) # (L-2) x (L-2) x 3 x 3 directions = swv.reshape(L - 2, L - 2, 9)[:, :, 1::2].argmax(axis=2).astype(np.uint8) return directions @njit def argmax(*values): """argmax alternative that can accept an arbitrary number of arguments. 
Usage: argmax(0, 1, 3, 2) # 2 """ max_arg = 0 max_value = values[0] for i in range(1, len(values)): value = values[i] if value > max_value: max_value = value max_arg = i return max_arg @njit(cache=True) def neighbor_argmax(a): height, width = a.shape[0] - 2, a.shape[1] - 2 out = np.empty((height, width), dtype=np.uint8) for y in range(height): for x in range(width): # window: a[y:y + 3, x:x + 3] # center: a[y + 1, x + 1] out[y, x] = argmax( a[y, x + 1], # up a[y + 1, x], # left a[y + 1, x + 2], # right a[y + 2, x + 1], # down ) return out @njit(cache=True, parallel=True) # Add parallel=True. def neighbor_argmax_mt(a): height, width = a.shape[0] - 2, a.shape[1] - 2 out = np.empty((height, width), dtype=np.uint8) for y in prange(height): # Change this to prange. for x in range(width): # window: a[y:y + 3, x:x + 3] # center: a[y + 1, x + 1] out[y, x] = argmax( a[y, x + 1], # up a[y + 1, x], # left a[y + 1, x + 2], # right a[y + 2, x + 1], # down ) return out @stencil def kernel(window): # window: window[-1:2, -1:2] # center: window[0, 0] return np.uint8( # Don't forget to cast to np.uint8. argmax( window[-1, 0], # up window[0, -1], # left window[0, 1], # right window[1, 0], # down ) ) @njit(cache=True) def neighbor_argmax_stencil(a): return kernel(a)[1:-1, 1:-1] # Slicing is not mandatory. @njit(cache=True) def neighbor_argmax_stencil_with_copy(a): return kernel(a)[1:-1, 1:-1].copy() # Slicing is not mandatory. @njit(cache=True, parallel=True) def neighbor_argmax_stencil_mt(a): return kernel(a)[1:-1, 1:-1] # Slicing is not mandatory. @njit(cache=True) def neighbor_argmax_stencil_inlined(a): f = stencil(lambda w: np.uint8(argmax(w[-1, 0], w[0, -1], w[0, 1], w[1, 0]))) return f(a)[1:-1, 1:-1] # Slicing is not mandatory. def benchmark(): size = 2000 # Total nbytes (in MB) for a. n = math.ceil(math.sqrt(size * (10 ** 6) / 4)) rng = np.random.default_rng(0) a = rng.random(size=(n, n), dtype=np.float32) print(f"{a.shape=}, {a.nbytes=:,}") expected = baseline(a) # expected = neighbor_argmax_mt(a) assert expected.shape == (n - 2, n - 2) and expected.dtype == np.uint8 candidates = [ baseline, neighbor_argmax, neighbor_argmax_mt, neighbor_argmax_stencil, neighbor_argmax_stencil_mt, neighbor_argmax_stencil_with_copy, neighbor_argmax_stencil_inlined, ] name_len = max(len(f.__name__) for f in candidates) for f in candidates: assert np.array_equal(expected, f(a)), f.__name__ t = timeit.repeat(lambda: f(a), repeat=3, number=1) print(f"{f.__name__:{name_len}} : {min(t)}") if __name__ == "__main__": benchmark() Result: a.shape=(22361, 22361), a.nbytes=2,000,057,284 baseline : 24.971996600041166 neighbor_argmax : 0.1917789001017809 neighbor_argmax_mt : 0.11929619999136776 neighbor_argmax_stencil : 0.2940085999434814 neighbor_argmax_stencil_mt : 0.17756330000702292 neighbor_argmax_stencil_with_copy : 0.46573049994185567 neighbor_argmax_stencil_inlined : 0.29338629997801036 I think these results are enough to make you consider giving Numba a try :) The following section was added after this answer was accepted. Here is the CUDA version. (I'm using numba 0.60.0) from numba import cuda @cuda.jit(device=True) def argmax_cuda(values): # cuda.jit cannot handle an arbitrary number of arguments. max_arg = 0 max_value = values[0] for i in range(1, len(values)): value = values[i] if value > max_value: max_value = value max_arg = i return max_arg @cuda.jit def neighbor_argmax_cuda_impl(a, out): y, x = cuda.grid(2) if y < out.shape[0] and x < out.shape[1]: out[y, x] = argmax_cuda( # Make sure to use a tuple, not a list. 
( a[y, x + 1], # up a[y + 1, x], # left a[y + 1, x + 2], # right a[y + 2, x + 1], # down ) ) def neighbor_argmax_cuda(a, out): # If the input/output array is not on the GPU, you can transfer it like this. # However, note that this operation alone takes longer than neighbor_argmax_mt. # a = cuda.to_device(a) # out = cuda.to_device(out) # Block settings. I'm not sure if this is the optimal one. threadsperblock = (16, 16) blockspergrid_x = int(np.ceil(out.shape[1] / threadsperblock[1])) blockspergrid_y = int(np.ceil(out.shape[0] / threadsperblock[0])) blockspergrid = (blockspergrid_x, blockspergrid_y) neighbor_argmax_cuda_impl[blockspergrid, threadsperblock](a, out) # Back to CPU, if necessary. # out = out.copy_to_host() return out As JΓ©rΓ΄me explained in detail, the time taken to transfer the input/output arrays from the host to the device cannot be ignored. a.shape=(22361, 22361), a.nbytes=2,000,057,284 neighbor_argmax : 0.47917880106251687 neighbor_argmax_mt : 0.08353979291860014 neighbor_argmax_cuda (with transfer) : 0.5072600540006533 neighbor_argmax_cuda (without transfer) : 9.134004358202219e-05 (I had to use another machine to use CUDA. For that reason, the results for the CPU are different from the ones I put above.)
8
6
78,920,271
2024-8-27
https://stackoverflow.com/questions/78920271/is-there-a-best-practice-for-defining-optional-fields-in-pydantic-models
I'm working with Pydantic for data validation in a Python project and I'm encountering an issue with specifying optional fields in my BaseModel. from pydantic import BaseModel class MyModel(BaseModel): author_id: int | None # Case 1: throws error author_id: Optional[int] # Case 2: throws error author_id: int = None # Case 3: works Now, while requesting an endpoint that accepts the above model as its JSON body, I am not providing the field author_id in the request. When I use author_id: int | None, I get an error saying that a required field is missing. However, if I change it to author_id: Optional[int], I encounter the same error. But when I use author_id: int = None or author_id: Optional[int] = None, the model works as expected without errors. (Working if = is present) Do you have any recommendations on how to properly define optional fields in Pydantic models? Is there a specific version of Pydantic (or another library) that supports the int | None syntax correctly? python==3.11 pydantic==2.8.1 fastapi==0.111.1
The reason you encounter an error is that there is no default value. The annotations don't actually do anything by themselves. All of the options below work, and they are pretty much equivalent under the hood. The third option is now recommended for projects that only target Python 3.10+. from typing import Optional, Union from pydantic import BaseModel class MyModel(BaseModel): author_id: Optional[int] = None # the original way author_id: Union[int, None] = None # another option author_id: int | None = None # more modern option (Python 3.10+)
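A quick demonstration that the default value, not the annotation, is what makes the field optional in Pydantic v2 (a minimal sketch):

from typing import Optional
from pydantic import BaseModel, ValidationError

class WithDefault(BaseModel):
    author_id: Optional[int] = None   # optional: the field may be omitted entirely

class WithoutDefault(BaseModel):
    author_id: Optional[int]          # still required; it may be None, but must be provided

print(WithDefault())                  # author_id=None

try:
    WithoutDefault()                  # raises: field required
except ValidationError as exc:
    print(type(exc).__name__)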
2
3
78,919,290
2024-8-27
https://stackoverflow.com/questions/78919290/how-to-set-value-for-serializers-field-in-drf
I have a web page in which I show some reports of trades between sellers and customers. So for this purpose, I need to create an API to get all of the trades from database, extract necessary data and serialize them to be useful in web page. So I do not think of creating any model and just return the data in JSON format. First I created my serializers like this: from rest_framework import serializers from django.db.models import Sum, Count from account.models import User class IncomingTradesSerializer(serializers.Serializer): all_count = serializers.IntegerField() all_earnings = serializers.IntegerField() successful_count = serializers.IntegerField() successful_earnings = serializers.IntegerField() def __init__(self, *args, **kwargs): self.trades = kwargs.pop('trades', None) super().__init__(*args, **kwargs) def get_all_count(self, obj): return self.trades.count() def get_all_earnings(self, obj): return sum(trade.trade_price for trade in self.trades) def get_successful_count(self, obj): return self.trades.exclude(failure_reason=None).count() def get_successful_earnings(self, obj): return sum(trade.trade_price for trade in self.trades.exclude(failure_reason=None)) class TradesDistributionSerializer(serializers.Serializer): sellers = serializers.DictField() def __init__(self, *args, **kwargs): self.trades = kwargs.pop('trades', None) super().__init__(*args, **kwargs) def get_sellers(self, obj): sellers = {} for user in User.objects.all(): distributed_trades = self.trades.filter(creator=user) sellers[user.username] = sum( trade.trade_price for trade in distributed_trades) return sellers and then my apiView look like this : from rest_framework.views import APIView from rest_framework.response import Response from trade.models import Trade from report.serializers import IncomingTradesSerializer, TradesDistributionSerializer class IndicatorView(APIView): def get(self, request): trades = Trade.objects.all() incoming_trades_serializer = IncomingTradesSerializer(trades=trades) trades_distribution_serializer = TradesDistributionSerializer(trades=trades) results = { 'incomingTrades': incoming_trades_serializer.data, 'tradesDistribution': trades_distribution_serializer.data } return Response(results) the problem is in get_fieldname methods that are not called so the response at the end consist of null or empty values : { "incomingTrades": { "all_count": null, "all_earnings": null, "successful_count": null, "successful_earnings": null }, "tradesDistribution": { "sellers": {} } } I have already change the integerFields and DictField to MethodField but the problem was not solved and the only change was that the fields got disappeared in response and the only things were two empty dictionary for serializers. Another way I tried was overriding to_representation method but it was the same as before (of course my manager told me that this method is not good in performance if the number of fields increase). What is the problem? Do I need to change my approach or do something like overriding another method or something else? What is the standard way for this scenario?
You should use SerializerMethodField and also stick to Django ORM to fetch data instead of iterating over it: views.py class IndicatorView(APIView): def get(self, request): serializer = IndicatorSerializer(Trade.objects.all()) return Response(serializer.data) serializers.py class IndicatorSerializer(serializers.Serializer): incoming_trades = serializers.SerializerMethodField() trades_distribution = serializers.SerializerMethodField() def get_incoming_trades(self, trades): """ If there is a reason for failure then .exclude(failure_reason=None) would yield UNSUCCESFULL trades. Thus, if you want successfull ones, that would be: .filter(failure_reason=None).count() """ incoming_trades = { 'all_count': trades.count(), 'all_earnings': trades.aggregate(total=Sum("trade_price"))['total'], 'successful_count': trades.filter(failure_reason=None).count(), 'successful_earnings': ( trades.filter(failure_reason=None) .aggregate(total=Sum("trade_price"))['total']), 'unsuccessful_count': trades.exclude(failure_reason=None).count(), 'unsuccessful_earnings': ( trades.exclude(failure_reason=None) .aggregate(total=Sum("trade_price"))['total']), } return incoming_trades def get_trades_distribution(self, trades): """ Note that just like your query this does not distinguish successful / unsuccessful trades Therefore, you should filter the QS if that is your intention. """ trades_distribution =( trades.order_by("id") .annotate(seller=F("creator__username")) .values("seller") .annotate(trades_total=Sum("trade_price")) .order_by() ) return trades_distribution response { "incoming_trades": { "all_count": 3, "all_earnings": 733.76, "successful_count": 2, "successful_earnings": 165.87, "unsuccessful_count": 1, "unsuccessful_earnings": 567.89 }, "trades_distribution": [ { "seller": "admin", "trades_total": 691.34 }, { "seller": "someuser", "trades_total": 42.42 } ] } P.S Check the aggregation docs on how to group_by which can be a little bit tricky.
3
1
78,918,585
2024-8-27
https://stackoverflow.com/questions/78918585/count-same-consecutive-numbers-in-list-column-in-polars-dataframe
I have a pl.DataFrame with a column comprising lists with integers. I need to assert that each consecutive integer is showing up two times in a row at a maximum. For instance, a list containing [1,1,0,-1,1] would be OK, as the number 1 is showing up max two times in a row (the first two elements, followed by a zero). This list should lead to a failed assertion: [1,1,1,0,-1] The number 1 shows up three times in a row. Here's a toy example, where row2 should lead to a failed assertion. import polars as pl row1 = [0, 1, -1, -1, 1, 1, -1, 0] row2 = [1, -1, -1, -1, 0, 0, 1, -1] df = pl.DataFrame({"list": [row1, row2]}) print(f"row1: {row1}") print(f"row2: {row2}") print(df) row1: [0, 1, -1, -1, 1, 1, -1, 0] row2: [1, -1, -1, -1, 0, 0, 1, -1] shape: (2, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ list β”‚ β”‚ --- β”‚ β”‚ list[i64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ [0, 1, … 0] β”‚ β”‚ [1, -1, … -1] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
The following could be used. Perform run-length encoding of the list using pl.Expr.rle. This produces a list of structs. Each struct contains a (unique) list value and the corresponding run length. Check whether the maximum run length in the list is at most 2. Ensure the result is of type bool by selecting the first (and only) element in the resulting list (using pl.Expr.list.first). df.with_columns( ok=pl.col("list").list.eval( pl.element().rle().struct.field("len").max() <= 2 ).list.first() ) shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ list ┆ ok β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[i64] ┆ bool β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════║ β”‚ [0, 1, -1, -1, 1, 1, -1, 0] ┆ true β”‚ β”‚ [1, -1, -1, -1, 0, 0, 1, -1] ┆ false β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Doing all of this in a pl.Expr.list.eval can be avoided by exploding the list. Then, a window function (pl.Expr.over) is needed to ensure the maximum is computed separately for each list. max_run_length = pl.col("list").explode().rle().struct.field("len").max().over(pl.int_range(pl.len())) df.with_columns(passed=max_run_length <= 2) The result will be same.
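To turn this into the assertion described in the question, you can check that no row fails the test — a small sketch reusing df and the ok column computed above:

checked = df.with_columns(
    ok=pl.col("list").list.eval(
        pl.element().rle().struct.field("len").max() <= 2
    ).list.first()
)
bad = checked.filter(~pl.col("ok"))
assert bad.is_empty(), f"{bad.height} row(s) contain a value repeated more than twice in a row"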
3
5
78,917,816
2024-8-27
https://stackoverflow.com/questions/78917816/filter-rows-inside-window-function-in-python-polars
I need to compute the Herfindahl–Hirschman index ("HHI", the sum of squared market shares) but leaving out the firm represented in the row. Here's an example: df = (pl.DataFrame({ 'year':(2023, 2023, 2023, 2024, 2024, 2024), 'firm':('A', 'B', 'C', 'A', 'B', 'C'), 'volume':(20, 50, 3, 25, 13, 5) }) .with_columns( sum = pl.col('volume').sum().over('year'), leaveout_sum = (pl.col('volume').sum().over('year'))-(pl.col('volume')) ) .with_columns( share = (pl.col('volume')/pl.col('sum'))*100 ) .with_columns( hhi = (pl.col('share')**2).sum().over('year').round() )) Which gives: β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ year ┆ firm ┆ volume ┆ sum ┆ leaveout_sum ┆ share ┆ hhi β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ i64 ┆ i64 ┆ i64 ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ════════β•ͺ═════β•ͺ══════════════β•ͺ═══════════β•ͺ════════║ β”‚ 2023 ┆ A ┆ 20 ┆ 73 ┆ 53 ┆ 27.39726 ┆ 5459.0 β”‚ β”‚ 2023 ┆ B ┆ 50 ┆ 73 ┆ 23 ┆ 68.493151 ┆ 5459.0 β”‚ β”‚ 2023 ┆ C ┆ 3 ┆ 73 ┆ 70 ┆ 4.109589 ┆ 5459.0 β”‚ β”‚ 2024 ┆ A ┆ 25 ┆ 43 ┆ 18 ┆ 58.139535 ┆ 4429.0 β”‚ β”‚ 2024 ┆ B ┆ 13 ┆ 43 ┆ 30 ┆ 30.232558 ┆ 4429.0 β”‚ β”‚ 2024 ┆ C ┆ 5 ┆ 43 ┆ 38 ┆ 11.627907 ┆ 4429.0 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The hhi column there is the normal HHI index, including all firms in the market, and I can compute the "leaveout" volume summation to get the sum of volumes of the other firms in that year. For example, the leaveout-HHI for firm A in 2023 would be the square of 3/53 plus the square of 50/53 (i.e., the squares of the market shares of firms B and C assuming firm A didn't exist). How do I tell polars to do this? Is there a way to filter the window function, perhaps? My real dataset includes almost 800 firms over 204 months for 500 separate markets, so doing this manually is out of the question.
You can take out the denominator from the sum of squares: .with_columns( leaveout_sum = (pl.col.volume.sum().over('year')) - pl.col.volume, leaveout_sum_of_sq = (pl.col.volume**2).sum().over('year') - pl.col.volume**2 ) .with_columns( leaveout_hhi = pl.col.leaveout_sum_of_sq / pl.col.leaveout_sum**2 )) I left out the * 100 factor, the above correctly computes (3/53)^2 + (50/53)^2 for your example by doing (3^2 + 50^2) / (50 + 3)^2.
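A quick numeric sanity check against the hand-worked example in the question — for firm A in 2023 the leave-out HHI should equal (3/53)^2 + (50/53)^2. A small sketch rebuilding the frame from the question and applying the expressions above:

import polars as pl

df = pl.DataFrame({
    "year": (2023, 2023, 2023, 2024, 2024, 2024),
    "firm": ("A", "B", "C", "A", "B", "C"),
    "volume": (20, 50, 3, 25, 13, 5),
}).with_columns(
    leaveout_sum=pl.col("volume").sum().over("year") - pl.col("volume"),
    leaveout_sum_of_sq=(pl.col("volume") ** 2).sum().over("year") - pl.col("volume") ** 2,
).with_columns(
    leaveout_hhi=pl.col("leaveout_sum_of_sq") / pl.col("leaveout_sum") ** 2
)

value = df.filter((pl.col("year") == 2023) & (pl.col("firm") == "A"))["leaveout_hhi"][0]
expected = (3 / 53) ** 2 + (50 / 53) ** 2
assert abs(value - expected) < 1e-12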
3
2
78,917,261
2024-8-27
https://stackoverflow.com/questions/78917261/longest-complete-subsequence-of-ordered-vowels
Given a string made up of "a", "e", "i", "o", or "u", find the longest subsequence of vowels in order. For example, is the string is "aeiaeiou", then the answer is 6, making the subsequence "aaeiou". Another example is: "aeiaaioooaauuaeiou" where the answer is 10. A complete subsequence means that you need to have all the vowels in order, a -> e -> i -> o -> u, but you can have repeats of vowels like "aaeeeeiiooouuu" is valid. If no such subsequence exists, return 0. I came up with a DP approach of O(n^2) and I'm curious if there is a faster version for this problem. def find_longest_dp(vowels): n = len(vowels) dp = [[0, False] for _ in range(n)] prev_vowel = {"a": None, "e": "a", "i": "e", "o": "i", "u": "o"} for i in range(n): if vowels[i] == "a": dp[i] = [1, True] for j in range(i): if dp[j][1] and vowels[j] in {vowels[i], prev_vowel[vowels[i]]}: dp[i][0] = max(dp[i][0], dp[j][0] + 1) dp[i][1] = True return max((dp[i][0] for i in range(n) if vowels[i] == 'u'), default=0)
O(n) by keeping track of the max length for each last-of-the-subsequence vowel (e.g., maxlen[3] tells the max length of an ordered subsequence ending with 'o'): def find_longest_dp2(vowels): maxlen = [0] * 5 for v in vowels: i = 'aeiou'.find(v) maxlen[i] = max(maxlen[:i+1]) + 1 return max(maxlen) Or if all five vowels need to be in the subsequence (as the "complete" in your title suggests) and you want 0 if no such subsequence exists (as your code suggests): def find_longest_dp3(vowels): maxlen = [0] + [float('-inf')] * 5 for v in vowels: i = '.aeiou'.find(v) maxlen[i] = max(maxlen[i-1:i+1]) + 1 return max(maxlen[5], 0) Attempt This Online! with testing.
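The examples from the question double as a quick test of both variants (assuming the two functions defined above):

assert find_longest_dp2("aeiaeiou") == 6             # "aaeiou"
assert find_longest_dp2("aeiaaioooaauuaeiou") == 10
assert find_longest_dp3("aeiaeiou") == 6             # a complete a->e->i->o->u run exists
assert find_longest_dp3("aeio") == 0                 # no 'u', so no complete subsequence
print("all checks passed")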
5
3
78,914,940
2024-8-26
https://stackoverflow.com/questions/78914940/is-it-possible-to-adjust-qlineedit-icon-spacing
I'm planning to use a QLineEdit with three actions added via addAction(). Easy enough and it looks like this: (squares as icons for the example) But a minor annoyance is that I find the spacing between the icons a bit too large. Is it possible to adjust this spacing? QLineEdit doesn't seem to have an accessible layout where you could set the spacing.
Actions shown in QLineEdit (including that used for the "clear button") are implemented privately. The addAction(<action|icon>, position) is an overload of QWidget::addAction() that, after calling the base implementation, also creates an instance of a private QToolButton subclass (QLineEditIconButton). The reasoning behind using QToolButton is that it provides immediate mouse interaction, and also has a strong relation with QActions (see QToolButton::setDefaultAction()). Each button then follows common "side widgets parameters" that use hardcoded values: QLineEditPrivate::SideWidgetParameters QLineEditPrivate::sideWidgetParameters() const { Q_Q(const QLineEdit); SideWidgetParameters result; result.iconSize = q->style()->pixelMetric(QStyle::PM_SmallIconSize, nullptr, q); result.margin = result.iconSize / 4; result.widgetWidth = result.iconSize + 6; result.widgetHeight = result.iconSize + 2; return result; } That is a lot of spacing and margins; it may be fine for relatively wide line edits or for systems using large icons and/or huge fonts, but also a considerable waste of space for most cases using more than one action (possibly including the clear button): even using a basic 16px icon size, this means that every 2 icons there is possibly enough space for another one. In any case, once a new action is added, QLineEdit then lays out those "fake buttons" based on the above values, spacing them with the result.margin and considering the increased width. Note that those buttons are not even drawn as real QToolButtons, as the private QLineEditIconButton subclass also overrides its own paintEvent(). In fact, it just draws the icon pixmap (using the enabled and pressed state of the action/button) within the QToolButton rect(). Luckily, painting and mouse interaction are always based on the button geometry, meaning that we can programmatically change their geometry once they've been set (and still have full functionality), which happens when: a new action is added (which automatically creates the related button); an existing action is removed/deleted; the widget is resized; the layoutDirection changes; For very basic applications that only consider "trailing actions", we could just consider the first three cases above (and left-to-right text), then arbitrarily update the geometries of the tool buttons based on assumed sizes and positions; that may not be sufficient, though, because there can be actions on both sides, and the layout direction also inverts the position (and order) of those actions. Also, QLineEdit considers the effective text margins (the "bounding rectangle" in which text is actually shown and can be interacted with) based on the above "side widget parameters", meaning that once we've reduced the size and/or spacing between the buttons, we're still left out with the same, possibly small, horizontal space left for displayed text and cursor movement. This means that we need to keep track of the position of each action (leading or trailing) and finally update the text margins with a further "private" implementation. In the following example I'm showing a possible implementation of everything explained above, which should work fine in most cases. There may be some issues for actions that are explicitly hidden (but I'll check that later, as I'm under the impression that it could be a bug, see the note below[1]). It should also work fine for both PyQt6 and PySide6, as well as PyQt5 and PySide2 (except for the enum namespaces for older PyQt5 versions and PySide2). 
class CustomLineEdit(QLineEdit): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.__textMargins = QMargins() self.__iconMargins = QMargins() self.__leadingActions = set() self.__trailingActions = set() self.__actions = { QLineEdit.ActionPosition.LeadingPosition: self.__leadingActions, QLineEdit.ActionPosition.TrailingPosition: self.__trailingActions } def __updateGeometries(self): buttons = self.findChildren(QToolButton) if len(buttons) <= 1: return iconSize = self.style().pixelMetric( QStyle.PixelMetric.PM_SmallIconSize, None, self) iconMargin = max(1, iconSize // 8) btnWidth = iconSize + 2 leading = [] trailing = [] for button in buttons: if button.defaultAction() in self.__leadingActions: leading.append(button) else: trailing.append(button) if self.layoutDirection() == Qt.LayoutDirection.RightToLeft: leading, trailing = trailing, leading if leading: if len(leading) > 1: leading.sort(key=lambda b: b.x()) lastGeo = leading[-1].geometry() it = iter(leading) prev = None while True: button = next(it, None) if not button: break geo = button.geometry() geo.setWidth(btnWidth) if prev: geo.moveLeft(prev.x() + prev.width() + iconMargin) button.setGeometry(geo) prev = button else: button = leading[0] lastGeo = button.geometry() geo = button.geometry() geo.setWidth(btnWidth) button.setGeometry(geo) left = geo.right() - lastGeo.right() else: left = 0 if trailing: if len(trailing) > 1: trailing.sort(key=lambda b: -b.x()) lastGeo = trailing[-1].geometry() it = iter(trailing) prev = None while True: button = next(it, None) if not button: break geo = button.geometry() if prev: geo.setWidth(btnWidth) geo.moveRight(prev.x() - iconMargin) else: geo.setLeft(geo.right() - btnWidth + 1) button.setGeometry(geo) prev = button else: button = trailing[0] lastGeo = button.geometry() geo = button.geometry() geo.setLeft(geo.right() - btnWidth + 1) button.setGeometry(geo) right = lastGeo.x() - geo.x() else: right = 0 self.__iconMargins = QMargins(left, 0, right, 0) super().setTextMargins(self.__textMargins + self.__iconMargins) # Note that these are NOT "real overrides" def addAction(self, *args): if len(args) != 2: # possibly, the default QWidget::addAction() super().addAction(*args) return arg, position = args if isinstance(arg, QAction): action = arg # check if the action already exists in a different position, and # eventually remove it from the related set if ( position == QLineEdit.ActionPosition.LeadingPosition and action in self.__trailingActions ): self.__trailingActions.discard(action) elif action in self.__leadingActions: self.__leadingActions.discard(action) super().addAction(action, position) self.__actions[position].add(action) # addAction(action, position) is "void" and returns None return action = super().addAction(arg, position) self.__actions[position].add(action) # for compliance with the default addAction() behavior return action def textMargins(self): return QMargins(self.__textMargins) def setTextMargins(self, *args): self.__textMargins = QMargins(*args) super().setTextMargins(self.__textMargins + self.__iconMargins) # Common event overrides def actionEvent(self, event): super().actionEvent(event) if event.type() == event.Type.ActionRemoved: # this should take care of actions that are being deleted, even # indirectly (eg. 
their parent is being destroyed) action = event.action() if action in self.__leadingActions: self.__leadingActions.discard(action) elif action in self.__trailingActions: self.__trailingActions.discard(action) else: return self.__updateGeometries() def changeEvent(self, event): super().changeEvent(event) if event.type() == event.Type.LayoutDirectionChange: self.__updateGeometries() def childEvent(self, event): super().childEvent(event) if ( event.polished() and isinstance(event.child(), QToolButton) # the following is optional and the name may change in the future and event.child().metaObject().className() == 'QLineEditIconButton' ): self.__updateGeometries() def resizeEvent(self, event): super().resizeEvent(event) self.__updateGeometries() And here is an example code that creates two line edits with similar functionalities (two undo/redo actions and clear button), in order to compare the differences in layouts. if __name__ == '__main__': import sys app = QApplication(sys.argv) test = QWidget() layout = QFormLayout(test) undoIcon = QIcon.fromTheme('edit-undo') if undoIcon.isNull(): undoIcon = app.style().standardIcon( QStyle.StandardPixmap.SP_ArrowLeft) redoIcon = QIcon.fromTheme('edit-redo') if redoIcon.isNull(): redoIcon = app.style().standardIcon( QStyle.StandardPixmap.SP_ArrowRight) def makeLineEdit(cls): def checkUndoRedo(): undoAction.setEnabled(widget.isUndoAvailable()) redoAction.setEnabled(widget.isRedoAvailable()) widget = cls() widget.setClearButtonEnabled(True) undoAction = QAction(undoIcon, 'Undo', widget, enabled=False) redoAction = QAction(redoIcon, 'Redo', widget, enabled=False) widget.addAction(redoAction, QLineEdit.ActionPosition.TrailingPosition) widget.addAction(undoAction, QLineEdit.ActionPosition.TrailingPosition) undoAction.triggered.connect(widget.undo) redoAction.triggered.connect(widget.redo) widget.textChanged.connect(checkUndoRedo) return widget layout.addRow('Standard QLineEdit:', makeLineEdit(QLineEdit)) layout.addRow('Custom QLineEdit:', makeLineEdit(CustomLineEdit)) test.show() sys.exit(app.exec()) Here is the result: [1]: there could be some inconsistencies with QEvent.Type.ActionChanged for actionEvent() (which is received first by the private QLineEditIconButton) when the visibility of an action changes at runtime and after __updateGeometries() has been already called following the other events, which may reset the text margins. I will do further investigation, as I believe it as a symptom of a bug, other than a possible inconsistency in the behavior of the latest Qt versions. [2]: I've not tested this on many styles, nor with complex selections or drag&drop; feel free to leave a comment about these aspects or related unexpected behavior.
2
1
78,915,972
2024-8-26
https://stackoverflow.com/questions/78915972/drop-row-with-3-columns-value-equal
I would like to drop the rows where all three value columns have equal values, e.g. import pandas as pd data = [ ['A',2,2,2], ['B',2,2,3], ['C',3,3,3], ['D',4,2,2], ['E',5,5,2] ] df = pd.DataFrame(data,columns=['name','val1','val2','val3']) print(df) In the above example, row 0 and row 2 will be dropped since their values are all equal.
You can easily drop rows where the specified columns all hold the same value using the following steps: Step 1: Use df[['val1', 'val2', 'val3']].nunique(axis=1) to calculate the number of unique values in each row across these columns. Step 2: Keep only the rows where the number of unique values is greater than 1. Here is a code snippet to guide you: # keep rows where val1, val2, val3 contain more than one distinct value df_filtered = df[df[['val1', 'val2', 'val3']].nunique(axis=1) > 1] print(df_filtered) Result: name val1 val2 val3 1 B 2 2 3 3 D 4 2 2 4 E 5 5 2 OR you can drop rows where all values in val1, val2, val3 are equal (see the pandas documentation on drop for reference): df_drop = df[~df[['val1', 'val2', 'val3']].eq(df['val1'], axis=0).all(axis=1)] print(df_drop) name val1 val2 val3 1 B 2 2 3 3 D 4 2 2 4 E 5 5 2
2
2
78,915,951
2024-8-26
https://stackoverflow.com/questions/78915951/find-the-index-of-the-first-non-null-value-in-a-column-in-a-polars-dataframe
I need to find the first non-null value in a column over a grouped pl.DataFrame. import polars as pl df = pl.DataFrame( { "symbol": ["s1", "s1", "s2", "s2"], "trade": [None, 1, -1, None], } ) shape: (4, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ trade β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════║ β”‚ s1 ┆ null β”‚ β”‚ s1 ┆ 1 β”‚ β”‚ s2 ┆ -1 β”‚ β”‚ s2 ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ How can I get the row numbers/index values of the first non-null value in the trade columns while group_by symbol? I am actually looking for the row/index numbers 1 and 0. Maybe the result could be something like this: shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ first-non-null β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════════║ β”‚ s1 ┆ 1 β”‚ β”‚ s2 ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I am actually looking for the equivalent to pd.first_valid_index()
Here's one way using .arg_true().first(): print( df.group_by("symbol").agg( pl.col("trade").is_not_null().arg_true().first().alias("first-non-null") ) ) Output: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ first-non-null β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════════║ β”‚ s1 ┆ 1 β”‚ β”‚ s2 ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
2
3
78,914,882
2024-8-26
https://stackoverflow.com/questions/78914882/replace-a-cell-in-a-column-based-on-a-cell-in-another-column-in-a-polars-datafra
Consider the following pl.DataFrame: import polars as pl df = pl.DataFrame( { "symbol": ["s1", "s1", "s2", "s2"], "signal": [0, 1, 2, 0], "trade": [None, 1, None, -1], } ) shape: (4, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ signal ┆ trade β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════β•ͺ═══════║ β”‚ s1 ┆ 0 ┆ null β”‚ β”‚ s1 ┆ 1 ┆ 1 β”‚ β”‚ s2 ┆ 2 ┆ null β”‚ β”‚ s2 ┆ 0 ┆ -1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Now, I need to group the dataframe by symbol and check if the first row in every group in column signal is not equal to 0 (zero). It this equals to True, I need to replace the corresponding cell in column trade with the value in the cell in signal. Here's what I am actually looking for: shape: (4, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ signal ┆ trade β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════β•ͺ═══════║ β”‚ s1 ┆ 0 ┆ null β”‚ β”‚ s1 ┆ 1 ┆ 1 β”‚ β”‚ s2 ┆ 2 ┆ 2 β”‚ <- copy value from the ``signal`` column β”‚ s2 ┆ 0 ┆ -1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
For this, a when-then-otherwise construct may be used. We create a condition that evaluates to True exactly for the first rows (create index on the fly using pl.int_range) in each group with signal not equal to 0. Based on that condition, we either select the value in the signal or trade column. df.with_columns( trade=pl.when( pl.col("signal") != 0, pl.int_range(pl.len()) == 0, ).then("signal").otherwise("trade").over("symbol") ) shape: (4, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ signal ┆ trade β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════β•ͺ═══════║ β”‚ s1 ┆ 0 ┆ null β”‚ β”‚ s1 ┆ 1 ┆ 1 β”‚ β”‚ s2 ┆ 2 ┆ 2 β”‚ β”‚ s2 ┆ 0 ┆ -1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
3
4
78,914,836
2024-8-26
https://stackoverflow.com/questions/78914836/is-there-a-way-to-include-column-index-name-with-pandas-dataframe-to-csv
Is there a way to include the column (not rows!) index name in the output when calling Pandas' dataframe.to_csv() method? For example: import pandas as pd iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv') pivot_iris = iris.pivot_table(index='species', columns='sepal_length', values='sepal_width') print(pivot_iris.columns) print(pivot_iris) pivot_iris.to_csv('pivot_iris.csv', index=True, header=True) After calling pivot, the column index name is set to sepal_length as you can see in the prints Index([4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.6, 7.7, 7.9], dtype='float64', name='sepal_length') and sepal_length 4.3 4.4 4.5 4.6 4.7 ... 7.3 7.4 7.6 7.7 7.9 species ... setosa 3.0 3.033333 2.3 3.325 3.2 ... NaN NaN NaN NaN NaN versicolor NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN virginica NaN NaN NaN NaN NaN ... 2.9 2.8 3.0 3.05 3.8 [3 rows x 35 columns] Unfortunately the output file produced with to_csv() is missing the label in front of the column names: species,4.30,4.40,4.50,4.60,4.70,4.80,4.90,5.00,5.10,5.20,5.30,5.40,5.50,5.60,5.70,5.80,5.90,6.00,6.10,6.20,6.30,6.40,6.50,6.60,6.70,6.80,6.90,7.00,7.10,7.20,7.30,7.40,7.60,7.70,7.90 setosa,3.00,3.03,2.30,3.33,3.20,3.18,3.20,3.36,3.60,3.67,3.70,3.66,3.85,,4.10,4.00,,,,,,,,,,,,,,,,,,, versicolor,,,,,,,2.40,2.15,2.50,2.70,,3.00,2.44,2.82,2.82,2.67,3.10,2.80,2.88,2.55,2.70,3.05,2.80,2.95,3.07,2.80,3.10,3.20,,,,,,, virginica,,,,,,,2.50,,,,,,,2.80,2.50,2.73,3.00,2.60,2.80,3.10,2.93,2.92,3.05,,3.04,3.10,3.13,,3.00,3.27,2.90,2.80,3.00,3.05,3.80 is there a way to include it?
You can't really include the index names in a CSV. What you could do is to create a MultiIndex: pivot_iris = (pd.concat({'sepal_length': iris.pivot_table(index='species', columns='sepal_length', values='sepal_width')}, axis=1) .rename_axis(columns=(None, 'species')).reset_index() ) pivot_iris.to_csv('pivot_iris.csv', index=False) Output: species,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length,sepal_length ,4.3,4.4,4.5,4.6,4.7,4.8,4.9,5.0,5.1,5.2,5.3,5.4,5.5,5.6,5.7,5.8,5.9,6.0,6.1,6.2,6.3,6.4,6.5,6.6,6.7,6.8,6.9,7.0,7.1,7.2,7.3,7.4,7.6,7.7,7.9 setosa,3.0,3.03,2.3,3.32,3.2,3.18,3.2,3.36,3.6,3.67,3.7,3.66,3.85,,4.1,4.0,,,,,,,,,,,,,,,,,,, versicolor,,,,,,,2.4,2.15,2.5,2.7,,3.0,2.44,2.82,2.82,2.67,3.1,2.8,2.88,2.55,2.7,3.05,2.8,2.95,3.07,2.8,3.1,3.2,,,,,,, virginica,,,,,,,2.5,,,,,,,2.8,2.5,2.73,3.0,2.6,2.8,3.1,2.93,2.92,3.05,,3.04,3.1,3.13,,3.0,3.27,2.9,2.8,3.0,3.05,3.8
3
2
78,914,504
2024-8-26
https://stackoverflow.com/questions/78914504/pandas-dataframe-with-mixed-periods-1m-6m-and-12m-to-dataframe-with-one-per
I have a dataframe with income for a certain period of time. The period is given by a start date and an end date (For example 2023-04-01 and 2023-06-30). The period can vary between 3, 6 and 12 months. My goal is to bring everything to the same period, but I did not find an easy way to do it. This is how I am doing it now: get the periods in months: df["date_diff"] = (df.date_to.dt.to_period("M") - df.date.dt.to_period("M")).apply(lambda x: x.n) + 1 iterate over rows create a new row for each month in the date_diff column divide income by number of months in period shift date append new rows to array of new rows get df from array of new rows new_rows = [] for _,row in df.iterrows(): for i in range(row["date_diff"]): new_row = row.copy() new_row["new_date"] = row["date"] + relativedelta(months=i) + pd.offsets.MonthEnd() new_row["new_net_revenues"] = row["net_revenues"] / row["date_diff"] new_rows.append(new_row) new_df = pd.DataFrame(new_rows) This obviously takes a hell lot of time for larger dataframes. I wonder if there is a much faster solution with builtin funtions? Essentially it is kind of an explode to the months in the period. I also tried to just concat the rows according to the number of months in the period and do the math afterwards, but still not very elegant. Edit: The code above transforms this: net_revenues date date_to date_diff 0 0.02 2023-04-01 2023-06-30 3 1 0.01 2023-04-01 2023-06-30 3 2 0.02 2023-01-01 2023-03-31 3 into this: new_net_revenues new_date 0 0.006667 2023-04-30 0 0.006667 2023-05-31 0 0.006667 2023-06-30 1 0.003333 2023-04-30 1 0.003333 2023-05-31 1 0.003333 2023-06-30 2 0.006667 2023-01-31 2 0.006667 2023-02-28 2 0.006667 2023-03-31 3 0.003333 2023-01-31 3 0.003333 2023-02-28 3 0.003333 2023-03-31
You can create new rows with Index.repeat, add months by counter by GroupBy.cumcount and create datetimes by Series.dt.to_timestamp: df = pd.DataFrame({'date':['2023-04-01', '2023-01-01'], 'date_to':['2023-06-30', '2023-06-30'], }).apply(pd.to_datetime).assign(net_revenues=[100,20]) print (df) date date_to net_revenues 0 2023-04-01 2023-06-30 100 1 2023-01-01 2023-06-30 20 periods = df.date.dt.to_period("M") diff = (df.date_to.dt.to_period("M") - periods).apply(lambda x: x.n) + 1 new_df = df.loc[df.index.repeat(diff)] new_df['new_date'] = (periods + new_df.groupby(level=0).cumcount()).dt.to_timestamp("M") new_df["new_net_revenues"] = new_df["net_revenues"] / diff print (new_df) date date_to net_revenues new_date new_net_revenues 0 2023-04-01 2023-06-30 100 2023-04-30 33.333333 0 2023-04-01 2023-06-30 100 2023-05-31 33.333333 0 2023-04-01 2023-06-30 100 2023-06-30 33.333333 1 2023-01-01 2023-06-30 20 2023-01-31 3.333333 1 2023-01-01 2023-06-30 20 2023-02-28 3.333333 1 2023-01-01 2023-06-30 20 2023-03-31 3.333333 1 2023-01-01 2023-06-30 20 2023-04-30 3.333333 1 2023-01-01 2023-06-30 20 2023-05-31 3.333333 1 2023-01-01 2023-06-30 20 2023-06-30 3.333333
2
2
78,913,782
2024-8-26
https://stackoverflow.com/questions/78913782/how-can-i-run-python-package-using-google-collab
I want to run the DAN repo. I am using Google Cloud Collab. I cloned the project on my Google Drive in the following directory /content/drive/MyDrive/DAN/DAN Trying to run An example script file is available at OCR/document_OCR/dan/predict_examples to recognize images directly from paths using trained weights. using collab notebook located inside /content/drive/MyDrive/DAN/DAN from google.colab import drive drive.mount('/content/drive') import sys sys.path.append('/content/drive/MyDrive/DAN/DAN') !python3 /content/drive/MyDrive/DAN/DAN/OCR/document_OCR/dan/predict_example.py But this is not working for me And I am facing the following issue Traceback (most recent call last): File "/content/drive/MyDrive/DAN/DAN/OCR/document_OCR/dan/predict_example.py", line 9, in <module> from basic.models import FCN_Encoder ModuleNotFoundError: No module named 'basic' Where the predict_example.py scripts starts with import os.path import torch from torch.optim import Adam from PIL import Image import numpy as np from basic.models import FCN_Encoder from OCR.document_OCR.dan.models_dan import GlobalHTADecoder from OCR.document_OCR.dan.trainer_dan import Manager from basic.utils import pad_images from basic.metric_manager import keep_all_but_tokens
!python3 spawns a new interpreter which doesn't know about the sys.path.append(...) above it; that call only affects the current interpreter. Instead, either use runpy.run_path to execute the file within the current interpreter, or modify os.environ["PYTHONPATH"] by appending :/your_path to it before calling !python3 so the setting carries over to the spawned interpreter.
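A minimal sketch of both options, reusing the repository path and script path from the question (that the script's own dependencies are installed is an assumption):

import os
import runpy
import sys

repo = '/content/drive/MyDrive/DAN/DAN'
script = f'{repo}/OCR/document_OCR/dan/predict_example.py'

# Option 1: run the script inside the current interpreter,
# so the sys.path change is still in effect when it imports `basic`
sys.path.append(repo)
runpy.run_path(script, run_name='__main__')

# Option 2: export PYTHONPATH so a separately spawned interpreter sees the repo,
# then run `!python3 /content/drive/MyDrive/DAN/DAN/OCR/document_OCR/dan/predict_example.py` in a cell
os.environ['PYTHONPATH'] = os.environ.get('PYTHONPATH', '') + ':' + repo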
2
2
78,913,464
2024-8-26
https://stackoverflow.com/questions/78913464/how-to-filter-on-uniqueness-by-condition
Imagine I have a dataset like: data = { "a": [1, 4, 2, 4, 7, 4], "b": [4, 2, 3, 3, 0, 2], "c": ["a", "b", "c", "d", "e", "f"], } and I want to keep only the rows for which a + b is uniquely described by a single combination of a and b. I managed to hack this: df = ( pl.DataFrame(data) .with_columns(sum_ab=pl.col("a") + pl.col("b")) .group_by("sum_ab") .agg(pl.col("a"), pl.col("b"), pl.col("c")) .filter( (pl.col("a").list.unique().list.len() == 1) & (pl.col("b").list.unique().list.len() == 1) ) .explode(["a", "b", "c"]) .select("a", "b", "c") ) """ shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 4 ┆ 2 ┆ b β”‚ β”‚ 4 ┆ 2 ┆ f β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ """ Can someone suggest a better way to achieve the same? I struggled a bit to figure this logic out, so I imagine there is a more direct/elegant way of getting the same result.
.struct() to combine a and b into one column so we can check uniqueness. n_unique() to check uniqueness. over() to limit the calculation to be within a + b. df.filter( pl.struct("a","b").n_unique().over(pl.col.a + pl.col.b) == 1 ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 4 ┆ 2 ┆ b β”‚ β”‚ 4 ┆ 2 ┆ f β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ If you would need to extend it to larger number of columns then you could use sum_horizontal() to make it more generic: columns = ["a","b"] df.filter( pl.struct(columns).n_unique().over(pl.sum_horizontal(columns)) == 1 )
4
4
78,910,189
2024-8-24
https://stackoverflow.com/questions/78910189/using-einsum-for-transpose-times-matrix-times-transpose-xaxt
So I have m number of different vectors (say x), each one is (1,n), stacked horizontally, totally in a (m,n) matrix we call it B, and a matrix (A) with dimension (n,n). I want to compute xAx^T for all vectors x, output should be (m,1) How can I write an einsum query for this given B and A? Here is a sample without einsum: import torch m = 30 n = 4 B = torch.randn(m, n) A = torch.randn(n, n) result = torch.zeros(m,1) for i in range(m): x = B[i].unsqueeze(0) result[i] = torch.matmul(x, torch.matmul(A, x.T))
Using einsum and tensordot : import torch import numpy as np A = torch.tensor([[1.0, 0.0, 0.0, 0.0], [0.0, 2.0, 0.0, 0.0], [0.0, 0.0, 3.0, 0.0], [0.0, 0.0, 0.0, 4.0]]) B = torch.tensor([[1.0, 4.0, 7.0, 10.0], [2.0, 5.0, 8.0, 11.0], [3.0, 6.0, 9.0, 12.0]]) # Using einsum res_using_einsum = torch.einsum('bi,ij,bj -> b',B,A,B).unsqueeze(dim = -1) print(res_using_einsum) ''' tensor([[580.], [730.], [900.]]) ''' # Using tensordot BA = torch.tensordot(B, A, dims = ([1],[0])) ''' BA : tensor([[ 1., 8., 21., 40.], [ 2., 10., 24., 44.], [ 3., 12., 27., 48.]]) ''' res_using_tensordot = torch.tensordot(BA, B, dims =([1],[1]))#.unsqueeze(-1) ''' res_using_tensordot : tensor([[580., 650., 720.], [650., 730., 810.], [720., 810., 900.]]) ''' diagonal_result = torch.diagonal(res_using_tensordot, 0).unsqueeze(1) ''' tensor([[580.], [730.], [900.]]) '''
3
1
78,896,435
2024-8-21
https://stackoverflow.com/questions/78896435/function-takes-foo-subclass-and-wraps-it-in-bar-else-returns-type-unaltered
I have the following code: from typing import TypeVar, Any, Generic class Foo: ... FooT = TypeVar('FooT', bound=Foo) class Bar(Generic[FooT]): def __init__(self, foo: FooT): self._foo = foo T = TypeVar('T') def func(a: FooT | T) -> Bar[FooT] | T: if isinstance(a, Foo): return Bar(a) return a def my_other_func(a: Foo) -> None: func(a) func takes either a Foo subclass and returns a Bar which wraps that Foo object, or returns the input unaltered. I thought I could type it as def func(a: FooT | T) -> Bar[FooT] | T: But if I run mypy on this I get main.py:18: error: Argument 1 to "Bar" has incompatible type "Foo"; expected "FooT" [arg-type] main.py:22: error: Argument 1 to "func" has incompatible type "Foo"; expected "Never" [arg-type] Found 2 errors in 1 file (checked 1 source file) and I don't understand either error. How should I have typed it?
Use typing.overload: from typing import TypeVar, Generic, overload class Foo: ... FooT = TypeVar('FooT', bound=Foo) class Bar(Generic[FooT]): def __init__(self, foo: FooT): self._foo = foo T = TypeVar('T') @overload def func(a: FooT) -> Bar[FooT]: ... @overload def func(a: T) -> T: ... def func(a): if isinstance(a, Foo): return Bar(a) return a def my_other_func(a: Foo) -> None: func(a) Essentially, Callable[[FooT|T], Bar[FooT]|T] is not the same as Callable[[FooT], Bar[FooT]] | Callable[[T], T]. typing.overload lets you define the latter instead of the former.
2
1
78,910,274
2024-8-25
https://stackoverflow.com/questions/78910274/how-to-maintain-original-order-of-attributes-when-using-ruamel-yaml-dump
I am using a pydantic model to store some data. The model itself is not relevant, the relevant part is that the pydantic model.dump_to_json() gives me a string like this: str_val = '{"aspect":{"name":"tuskhelm_of_joritz_the_mighty"},"affix":[{"name":"maximum_life"},{"name":"intelligence"}]}' Note that "aspect" is listed first. However, whenever I try to write this out to a yaml file, affix is always put first. This is for a human readable/modifiable file and I really like how ruamble's YAML writes out the file, but is there any way for me to ensure that aspect will always be written out first? Here is some sample code: import sys from ruamel.yaml import YAML def main(): str_val = '{"aspect":{"name":"tuskhelm_of_joritz_the_mighty"},"affix":[{"name":"maximum_life"},{"name":"intelligence"}]}' yaml = YAML(typ="safe", pure=True) dict_val = yaml.load(str_val) print("Dictionary value: " + str(dict_val)) yaml.dump(dict_val, sys.stdout) if __name__ == "__main__": main() And here is the output of this code: Dictionary value: {'aspect': {'name': 'tuskhelm_of_joritz_the_mighty'}, 'affix': [{'name': 'maximum_life'}, {'name': 'intelligence'}]} affix: - {name: maximum_life} - {name: intelligence} aspect: {name: tuskhelm_of_joritz_the_mighty} Note that even though the dictionary is in the proper order, upon dumping aspect always comes out second. I've also tried other means of writing out the yaml (including using the default loader instead of safe) and though I can get the order to be preserved, what's given is not (in my opinion) as human readable. If there's a way to format a different yaml writer to write out like this I'd be fine with that too.
The safe loader doesn't preserve the order of the keys of a mapping, neither in pure mode or when using the C extension. It essentially follows the YAML specification, which says keys are unordered, and the behaviour of PyYAML from which it was forked, so the dumper activily sorts the output keys. If you leave out the typ="safe," (the default loader currently is pure only), the order is preserved as well as the flow-style of the original. The flow-style information is attached to the loaded dictionaries/lists (actually to subclass instances of these), so using a different dumper instance is not going to help, as the default dumper recognises this flow-style information, and dumps the original flow style. The safe dumper would ignore that information and sort the keys, if it could handle the dict/list subclasses, so that is not going to be of any use in this case. The thing to do is to recursively remove the flow-style information from the collection instances, and set the default_flow_style attribute of the YAML instance (which defaults to False): import sys from ruamel.yaml import YAML def rm_style_info(d): if isinstance(d, dict): d.fa._flow_style = None for k, v in d.items(): rm_style_info(k) rm_style_info(v) elif isinstance(d, list): d.fa._flow_style = None for elem in d: rm_style_info(elem) def main(): str_val = '{"aspect":{"name":"tuskhelm_of_joritz_the_mighty"},"affix":[{"name":"maximum_life"},{"name":"intelligence"}]}' yaml = YAML() yaml.default_flow_style = None dict_val = yaml.load(str_val) rm_style_info(dict_val) print("Dictionary value: " + str(dict_val)) yaml.dump(dict_val, sys.stdout) if __name__ == "__main__": main() which gives: Dictionary value: {'aspect': {'name': 'tuskhelm_of_joritz_the_mighty'}, 'affix': [{'name': 'maximum_life'}, {'name': 'intelligence'}]} aspect: {name: tuskhelm_of_joritz_the_mighty} affix: - {name: maximum_life} - {name: intelligence} There is currently no API to unset the flow attribute (.fa), so make sure to pin the version of ruamel.yaml you are using, and test before updating the version number.
3
2
78,909,971
2024-8-24
https://stackoverflow.com/questions/78909971/exponential-plot-in-python-is-a-curve-with-multiple-inflection-points-instead-of
I am trying to draw a simple exponential in python. When using the code below, everything works fine and the exponential is shown import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt import numpy as np def graph(func, x_range): x = np.arange(*x_range) y = func(x) plt.plot(x, y) graph(lambda x: pow(3,x), (0,20)) plt.savefig("mygraph.png") However if I change the range from 20 to 30, it draws a curve which is not exponential at all. Why is that happening?
I can't make time to nail this now, but the graph looks a whole lot like what I'd expect if integer arithmetic is overflowing. numpy's ints are fixed-width, unlike Python's ints (which are unbounded). In particular, >>> 3**20 3486784401 >>> _.bit_length() 32 so at the point things start to go crazy the result is overflowing a 32-bit int. Try replacing the 3 with 3.0 in your pow(3,x)?
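A quick way to check the overflow hypothesis; the int32 dtype is forced here as an assumption, since np.arange only defaults to 32-bit integers on some platforms (e.g. Windows):

import numpy as np

x = np.arange(0, 30, dtype=np.int32)
y_int = 3 ** x       # integer arithmetic: silently wraps around once 3**x exceeds 2**31 - 1
y_float = 3.0 ** x   # float64: stays monotone, no wraparound

print(y_int[19:22])    # the entries for 3**20 and 3**21 have wrapped around
print(y_float[19:22])  # correct values around 1.16e9, 3.49e9, 1.05e10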
2
5
78,909,690
2024-8-24
https://stackoverflow.com/questions/78909690/functools-partial-with-bound-first-argument-and-args-and-kwargs-interprets-ca
Using python 3.10: I have a function with a first argument and then *args and **kwargs. I want to bind the first argument and leave the *args and **kwargs free. If I do this and the bound function gets called using a list of arguments, python interprets it as multiple values for the bound argument. An example: from functools import partial def foo1(a : int, *args, **kwargs): print(a) print(args) print(kwargs) pfoo1 = partial(foo1, a=10) pfoo1() # works pfoo1(something="testme") # works # pfoo1("testme") # !! interpreted as multiple values for argument a? # pfoo1(*("testme")) # !! interpreted as multiple values for argument a? # pfoo1(*["testme"]) # !! interpreted as multiple values for argument a? I know I can easily solve this by replacing the foo1 function with: def foo1(*args, **kwargs): print(args) print(kwargs) And then running the same code, but I do not have the luxury of control over the incoming functions so I am stuck with a given signature. A solution could be to create one's own partial class through lambdas and bound members, but that seems excessive. Is there a way of doing this in the partial framework or at least a simple way?
a is the first parameter of foo1, so any positional argument you later pass to the partial also tries to fill it, clashing with the bound keyword. If you want the remaining positional arguments to go into *args, bind a positionally instead of by keyword: pfoo1 = partial(foo1, 10)
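A short sketch of the positional binding, reusing foo1 from the question:

from functools import partial

def foo1(a: int, *args, **kwargs):
    print(a)
    print(args)
    print(kwargs)

pfoo1 = partial(foo1, 10)        # 10 is bound positionally to a

pfoo1()                          # a=10, args=(), kwargs={}
pfoo1("testme")                  # a=10, args=('testme',), kwargs={}
pfoo1("x", something="testme")   # a=10, args=('x',), kwargs={'something': 'testme'}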
3
3
78,908,830
2024-8-24
https://stackoverflow.com/questions/78908830/how-can-i-adjust-the-white-empty-space-around-a-matplotlib-table-to-match-the-ta
There's extra white, empty space around my Matplotlib table, causing my table to get squished as if the figsize wasn't big enough. If I increase the figsize, the table displays normally, but then the white, empty space gets even bigger. Additionally, the title gets misplaced over the table when the number of rows passes a certain number (around 35 or so). import matplotlib.pyplot as plt def create_pivot_table( title: str, pivot_table: pd.core.frame.DataFrame, ): """ Creates a Matplotlib table from a Pandas pivot table. Returns fig and ax. """ fig_width = 1 + len(pivot_table.columns) * 0.6 fig_height = 1 + len(pivot_table) * 0.3 figsize=(fig_width, fig_height) fig, ax = plt.subplots(figsize=figsize) title = title ax.set_title( title, fontsize=16, weight='bold', loc='center' ) ax.axis('off') table = ax.table( cellText=pivot_table.values, colLabels=pivot_table.columns, rowLabels=pivot_table.index, cellLoc='center', loc='center' ) table.auto_set_font_size(False) # Formatting the table with alternating colors to make it more readable for (row, column), cell in table.get_celld().items(): if row == 0: cell.set_text_props(weight='bold') cell.set_fontsize(8) cell.set_height(0.05) if column % 2 == 0: cell.set_facecolor('#dadada') else: cell.set_facecolor('#ffffff') else: cell.set_fontsize(10) cell.set_height(0.03) if row % 2 == 0: cell.set_facecolor('#ffffff') else: cell.set_facecolor('#e6f7ff') if column % 2 == 0: cell.set_facecolor('#dadada' if row % 2 == 0 else '#b9d5d3') else: cell.set_facecolor('#ffffff' if row % 2 == 0 else '#e6f7ff') return fig, ax Dummy data: import pandas as pd import numpy as np data = np.random.randint(1, 100, size=(3, 4)) df = pd.DataFrame(data, index=['A', 'B', 'C'], columns=[1, 2, 3, 4]) title = 'Title' fig, ax = create_pivot_table( title=title, pivot_table=df ) Result: Desired outcome: the same table, minus the empty white space, so the table displays correctly. Additionally, I'd like to center the title and table within the figure, which does not happen with the code I wrote (depending on the table and title size). Thanks in advance for the help!
I think you need to try suptitle for title and play with bbox parameter of table. import matplotlib.pyplot as plt import pandas as pd import numpy as np def create_pivot_table( title: str, pivot_table: pd.core.frame.DataFrame, ): """ Creates a Matplotlib table from a Pandas pivot table. Returns fig and ax. """ fig_width = 1 + len(pivot_table.columns) * 0.6 fig_height = 1 + len(pivot_table) * 0.3 figsize=(fig_width, fig_height) fig, ax = plt.subplots(figsize=figsize) fig.suptitle(title, fontsize=16, weight='bold', y=0.95) ax.axis('off') table = ax.table( cellText=pivot_table.values, colLabels=pivot_table.columns, rowLabels=pivot_table.index, cellLoc='center', loc='center', bbox=[0,0,1,0.95] ) table.auto_set_font_size(False) # Formatting the table with alternating colors to make it more readable for (row, column), cell in table.get_celld().items(): if row == 0: cell.set_text_props(weight='bold') cell.set_fontsize(8) cell.set_height(0.05) if column % 2 == 0: cell.set_facecolor('#dadada') else: cell.set_facecolor('#ffffff') else: cell.set_fontsize(10) cell.set_height(0.03) if row % 2 == 0: cell.set_facecolor('#ffffff') else: cell.set_facecolor('#e6f7ff') if column % 2 == 0: cell.set_facecolor('#dadada' if row % 2 == 0 else '#b9d5d3') else: cell.set_facecolor('#ffffff' if row % 2 == 0 else '#e6f7ff') return fig, ax data = np.random.randint(1, 100, size=(3, 4)) df = pd.DataFrame(data, index=['A', 'B', 'C'], columns=[1, 2, 3, 4]) title = 'Title' fig, ax = create_pivot_table( title=title, pivot_table=df ) plt.show()
3
2
78,900,439
2024-8-22
https://stackoverflow.com/questions/78900439/how-to-show-two-outputs-of-the-same-function-without-running-it-twice
I have this function that iterates over my data and generates two outputs (in the example, the function check(). Now I want to show both outputs on different cards. AFAIK the card ID has to be the same as the function that generates the output. In my example, the function check() is run twice, which is very inefficient (in my real data this is the function generating the most computational load). Is there a way to run this function only once, but still using both outputs on different cards in shiny for Python? MRE: from shiny import App, render, ui, reactive from pathlib import Path app_ui = ui.page_fillable( ui.layout_sidebar( ui.sidebar( ui.input_text("numbers", "Number you want to check", value="1,2,3,4,5,6,7,8"), ui.input_action_button("check_numbers", "Check your numbers") ), ui.output_ui("numbers_frames") ) ) def server(input, output, session): @render.ui @reactive.event(input.check_numbers) def numbers_frames(): return ui.TagList( ui.layout_columns( ui.card( ui.card_header("even"), ui.output_text("even"), ), ui.card( ui.card_header("odd"), ui.output_text("odd"), ), ) ) @reactive.event(input.check_numbers) def check(): even = [] odd = [] for number in input.numbers().split(','): number_int = int(number.strip()) if number_int % 2 == 0: even.append(str(number_int)) else: odd.append(str(number_int)) print("check() has been exectuted") return (even, odd) @output @render.text def even(): even_output = check()[0] return ','.join(even_output) @output @render.text def odd(): odd_output = check()[1] return ','.join(odd_output) src_dir = Path(__file__).parent / "src" app = App(app_ui, server, static_assets=src_dir)
Rewrite your check() function such that it modifies a reactive.value() which contains the odd and the even numbers from the input. check() is decorated by reactive.effect() and reactive.event(), where the event is input.check_numbers. Then check() only gets executed once when the user clicks the button. And then, inside the render.text(), you display the current content of the reactive.value(). from shiny import App, render, ui, reactive from pathlib import Path app_ui = ui.page_fillable( ui.layout_sidebar( ui.sidebar( ui.input_text("numbers", "Number you want to check", value="1,2,3,4,5,6,7,8"), ui.input_action_button("check_numbers", "Check your numbers") ), ui.output_ui("numbers_frames") ) ) def server(input, output, session): outputNumbers = reactive.value({}) @render.ui @reactive.event(input.check_numbers) def numbers_frames(): return ui.TagList( ui.layout_columns( ui.card( ui.card_header("even"), ui.output_text("even"), ), ui.card( ui.card_header("odd"), ui.output_text("odd"), ), ) ) @reactive.effect() @reactive.event(input.check_numbers) def check(): nums = [n for n in input.numbers().split(',') if n] evenNums = [str(num) for num in nums if int(num.strip()) % 2 == 0] oddNums = [str(num) for num in nums if int(num.strip()) % 2 != 0] outputNumbers.set({'odd': ','.join(oddNums), 'even': ','.join(evenNums)}) print("check() has been executed") @output @render.text def even(): return outputNumbers()['even'] @output @render.text def odd(): return outputNumbers()['odd'] src_dir = Path(__file__).parent / "src" app = App(app_ui, server, static_assets=src_dir)
2
1
78,908,245
2024-8-24
https://stackoverflow.com/questions/78908245/python-pandas-dataframe-row-name-change
How can I replace the first index 0 with "Color" and 1 with "Size"? import pandas as pd clothes = {"shirt": ["red", "M"], "sweater": ["yellow", "L"], "jacket": ["black", "L"]} # pd.DataFrame.from_dict(clothes) df = pd.DataFrame.from_dict(clothes) df I tried like this but nothing changes. The output shows index name as an error df = pd.DataFrame.from_dict(clothes, index = ['color', 'size'])
Don't use from_dict; use the DataFrame constructor directly, which accepts an index: df = pd.DataFrame(clothes, index=['color', 'size']) If the DataFrame already exists, use set_axis: df = pd.DataFrame(clothes) df = df.set_axis(['color', 'size']) Output: shirt sweater jacket color red yellow black size M L L
2
1
78,907,326
2024-8-23
https://stackoverflow.com/questions/78907326/cant-update-data-in-data-base-via-patch-method-in-django
I have a model of items, and I need to write CRUD operations with data. Post and get works, but patch - no, and I can`t understand why serializers.py class CreateItemSerializer(serializers.ModelSerializer): photo = serializers.ImageField(max_length=None, allow_empty_file=True, allow_null=True) class Meta: model = CreateItem fields = '__all__' def create(self, validated_data): items = CreateItem.object.create_item( name = validated_data.get('name'), description = validated_data.get('description'), type_item = validated_data.get('type_item'), photo=validated_data.get('photo') ) return items def update(self, instance, validated_data): instance.name = validated_data.get('name', instance.name) instance.description = validated_data.get('description', instance.description) instance.type_item = validated_data.get('type_item', instance.type_item) instance.photo = validated_data.get('photo', instance.photo) instance.save() return instance views.py class CreateItemView(APIView): serializer_class = CreateItemSerializer def post(self, request): serializer = self.serializer_class(data=request.data) serializer.is_valid(raise_exception=True) serializer.save() return Response(_('item created successfully'), status=status.HTTP_201_CREATED) def get(self, request, pk, format=None): item = [item.name for item in CreateItem.object.all()] description = [item.description for item in CreateItem.object.all()] type_item = [item.type_item for item in CreateItem.object.all()] return Response({'name':item[pk], 'description':description[pk], 'type_item':type_item[pk]}, status=status.HTTP_200_OK) def patch(self, request, pk): serializer = CreateItemSerializer(data=request.data) serializer.is_valid(raise_exception=True) return Response(_("item updated successfully"), status=status.HTTP_200_OK) when I call patch method, "all works" but data doesn`t change
The behaviour of the serializer's save method differs depending on the arguments you pass when the DRF Serializer object is created. The patch method is meant to modify an existing instance, so when you create a serializer in patch you need to pass not only the data sent by the client but also the instance that you want to change. However, when creating the serializer object in your patch method, only the client data is passed as an argument (request.data). class CreateItemView(APIView): serializer_class = CreateItemSerializer def patch(self, request, pk): serializer = CreateItemSerializer(data=request.data) serializer.is_valid(raise_exception=True) return Response(_("item updated successfully"), status=status.HTTP_200_OK) In addition, the serializer's update method is never called because the view never calls serializer.save(). from django.shortcuts import get_object_or_404 class CreateItemView(APIView): serializer_class = CreateItemSerializer def patch(self, request, pk): instance = get_object_or_404(CreateItem, id=pk) serializer = CreateItemSerializer(instance, data=request.data) serializer.is_valid(raise_exception=True) serializer.save() return Response(_("item updated successfully"), status=status.HTTP_200_OK) Since you are using the patch method, you probably want a partial update; for that, pass partial=True when creating the serializer. from django.shortcuts import get_object_or_404 class CreateItemView(APIView): serializer_class = CreateItemSerializer def patch(self, request, pk): instance = get_object_or_404(CreateItem, id=pk) serializer = CreateItemSerializer(instance, data=request.data, partial=True) serializer.is_valid(raise_exception=True) serializer.save() return Response(_("item updated successfully"), status=status.HTTP_200_OK) Please refer to this article.
2
1
78,907,265
2024-8-23
https://stackoverflow.com/questions/78907265/polars-keep-the-biggest-value-using-2-categories
I have a polars dataframe that contain some ID, actions, and values : Example Dataframe: data = { "ID" : [1, 1, 2,2,3,3], "Action" : ["A", "A", "B", "B", "A", "A"], "Where" : ["Office", "Home", "Home", "Office", "Home", "Home"], "Value" : [1, 2, 3, 4, 5, 6] } df = pl.DataFrame(data) I want to select for each ID and action the biggest value, so i know where he rather do the action. I'm taking the following approach : ( df .select( pl.col("ID"), pl.col("Action"), pl.col("Where"), TOP = pl.col("Value").max().over(["ID", "Action"])) ) After that , i sorted the values and keep the unique values (The first one) to maintain the desired info, however the input its incorrect : ( df .select( pl.col("ID"), pl.col("Action"), pl.col("Where"), TOP = pl.col("Value").max().over(["ID", "Action"])) .sort( pl.col("*"), descending =True ) .unique( subset = ["ID", "Action"], maintain_order = True, keep = "first" ) ) Current Output : shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ ID ┆ Action ┆ Where ┆ TOP β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ════════β•ͺ════════β•ͺ═════║ β”‚ 3 ┆ A ┆ Home ┆ 6 β”‚ β”‚ 2 ┆ B ┆ Office ┆ 4 β”‚ β”‚ 1 ┆ A ┆ Office ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ Expected Output: shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ ID ┆ Action ┆ Where ┆ TOP β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ════════β•ͺ════════β•ͺ═════║ β”‚ 3 ┆ A ┆ Home ┆ 6 β”‚ β”‚ 2 ┆ B ┆ Office ┆ 4 β”‚ β”‚ 1 ┆ A ┆ Home ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ Also, i think this approach its not the optimal way
The over and unique could be combined into a group_by .arg_max() can give you the index of the max .get() will extract the corresponding values at that index (df.group_by("ID", "Action") .agg( pl.all().get(pl.col("Value").arg_max()) ) ) shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ ID ┆ Action ┆ Where ┆ Value β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ════════β•ͺ════════β•ͺ═══════║ β”‚ 1 ┆ A ┆ Home ┆ 2 β”‚ β”‚ 2 ┆ B ┆ Office ┆ 4 β”‚ β”‚ 3 ┆ A ┆ Home ┆ 6 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
4
3
78,903,312
2024-8-22
https://stackoverflow.com/questions/78903312/why-is-my-locally-installed-python-package-not-found-when-installed-as-an-editab
I am using Python 3.10.7 64-bit in VSCode. I have a Python package "pxml" built using setuptools. It is in a folder with an empty __init.py__ and a pyproject.toml containing: [build-system] requires = ["setuptools"] build-backend = "setuptools.build_meta" [project] name = "pxml" version = "0.0.1" dependencies = [] When installed normally with python -m pip install path/to/folder, an import pxml works just fine. However, when installed with python -m pip install -e path/to/folder, I get the error "Import 'pxml' cannot be resolved." python -m pip list shows the package installed and when I run a simple test file to access one of pxml's methods it runs just fine. Making sure that I am using the same python version in the terminal, python -c "import pxml" reports no errors. One thing that does show up as an autocomplete option when I type import pxml is __editable___pxml_0_0_1_finder, not sure if that is a clue. The directory structure is: - src + XML * pxml.py * __init__.py * pyproject.toml + test.py I am using test.py to import pxml. Any ideas?
For the package structure shown in your question, this import statement is incorrect: import pxml pxml is not a top-level module; it is a submodule of a package named XML. If python -c "import pxml" works, that is only by accident because pxml happens to be visible on sys.path for some reason - perhaps the subdirectory XML is your current working directory and it is resolved that way because the first element of sys.path is the empty string. Instead, the import should be one of: import XML.pxml or from XML import pxml
2
1
78,904,654
2024-8-23
https://stackoverflow.com/questions/78904654/how-to-make-a-selection-in-the-colorbar-of-an-altair-plot
Recently, I started using Altair and liked it a lot. However, I stumbled across something I could not figure out using the docs: I have a scatter plot that encodes some continuous data as color. Is there a way of selecting a range of these values via the colorbar? Similar to how it is possible to select discrete features in a clickable legend. I tried the following code: cars = data.cars() colorbar_brush = alt.selection_interval(fields=["Acceleration"], bind="legend") chart = alt.Chart(cars).mark_circle().encode( x="Horsepower", y="Miles_per_Gallon", color=alt.condition(colorbar_brush, "Acceleration:Q", alt.value("lightgray")), ).add_params( colorbar_brush, ) I also tried colorbar_brush = alt.selection_interval(encodings=["color"], bind="legend") Both did not work. Alternatively, it is possible to use separate charts as clickable legends. But how could I create a chart of a color gradient?
As @joelostblom mentioned in his comment you can use a second plot to generate a color scale with a selection interval. I have implemented this using a mark_tick with good results. Most of the options for the scale chart's y encoding are only for aesthetics and can be removed / tweaked as desired. cars = data.cars() color_encoding = alt.Color("Acceleration:Q", legend=None) selection = alt.selection_interval(encodings=["y"]) chart = ( alt.Chart(cars) .mark_circle() .encode( x="Horsepower", y="Miles_per_Gallon", color=alt.condition(selection, color_encoding, alt.ColorValue("lightgray")), ) ) legend_chart = ( alt.Chart( alt.sequence( min(cars.Acceleration), max(cars.Acceleration), 0.01, as_="Acceleration" ) ) .mark_tick(orient="horizontal") .encode( y=alt.Y( "Acceleration:Q", axis=alt.Axis( orient="right", ticks=False, grid=False, titleAngle=0, titleAnchor="end", titleAlign="center", titleBaseline="bottom", titleY=-10, tickCount=3 ), ).scale(zero=False, padding=0), color=color_encoding, ) ).add_params(selection) chart | legend_chart
3
1
78,904,263
2024-8-23
https://stackoverflow.com/questions/78904263/better-way-to-get-list-of-keys-which-has-the-same-value-from-a-output-dict
I'm trying to get all keys from a python dict which has the same value. as part of this, I have made the below attempt and it works. but checking if there is a neater way to do this. I have gone through the thread Finding all the keys with the same value in a Python dictionary b = {'a1': ['b1', 'b2', 'b3'], 'a2': ['b1', 'b2', 'b3'], 'a3': ['b4', 'b5', 'b6'], 'a4': ['b4', 'b5', 'b6'] } c = [] for i in b.values(): if i not in c: c.append(i) f = list() for i in c: print(i) e = [k for k, v in b.items() if v == i] print(e) f.append((i, e)) print(f) This gives the output as: [(['a1', 'a2'], ['b1', 'b2', 'b3']), (['a3', 'a4'], ['b4', 'b5', 'b6'])]
Why not go for defaultdict? It is efficient and fast, and you don't have to worry about errors such as the famous KeyError. Ref: Stackoverflow_old_question. Note that dictionary keys need to be immutable/hashable, which is why the value lists like ['b1', 'b2', 'b3'] are converted to tuples before being used as grouping keys. I would go for this implementation, which provides readability: from collections import defaultdict b = { 'a1': ['b1', 'b2', 'b3'], 'a2': ['b1', 'b2', 'b3'], 'a3': ['b4', 'b5', 'b6'], 'a4': ['b4', 'b5', 'b6'] } # group keys with collections group_keys = defaultdict(list) for k, v in b.items(): group_keys[tuple(v)].append(k) # make use of packing ans = [(v, [*k]) for k, v in group_keys.items()] print(ans) Clean result: [(['a1', 'a2'], ['b1', 'b2', 'b3']), (['a3', 'a4'], ['b4', 'b5', 'b6'])] Benchmark testing: Average execution time for the code above: 1.921582967042923e-06 seconds Average execution time for a large dataset: 0.01955163748934865 seconds Average execution time for a larger dataset: 2.0231823625043033 seconds Taking advice from @nocomment: if you are using Python 3.12, this should also work nicely: for k, v in b.items(): group_keys[*v].append(k) # make use of packing ans = [(v, [*k]) for k, v in group_keys.items()] print(ans)
2
1
78,904,936
2024-8-23
https://stackoverflow.com/questions/78904936/is-there-a-better-way-to-replace-all-non-ascii-characters-from-specific-columns
There are some sentences and words in Chinese and Japanese that I just want to drop. Or if there is a better solution than just dropping them, I would like to explore them as well. import pandas as pd import re # Define the function to check for English text def is_english(text): # Regex pattern for English letters, numbers, and common English symbols pattern = r"^[A-Za-z0-9\s.,!?\'\"()@#$%^&*;:\[\]{}Β±_+=/\\|`~]*$" return bool(re.match(pattern, text)) # Apply the function to check each relevant column and create a boolean mask df["is_english"] = df_clean.apply( lambda row: is_english(str(row.get("column_name1", ""))) if isinstance(row.get("column_name1", ""), str) else False and is_english(str(row.get("column_name2", ""))) if isinstance(row.get("column_name2", ""), str) else False, axis=1, ) # Delete rows where at least one relevant column contains non-English characters df_cleaned = df_clean[listings_clean["is_english"]].drop( columns="is_english" )
If you just want to keep ASCII characters, be explicit, select those and drop the rest with str.replace: df = pd.DataFrame({'col': ['AaΓ€Γ‘πŸ˜€Ξ±γ‚δ»Š']}) df['ascii'] = df['col'].str.replace(r'[^\x00-\x7F]', '', regex=True) You can also remove specific character ranges: # removing CJK, CJK symbols, hiragana, katakana # you can add all desired blocks here df['non_chinese_japanese'] = df['col'].str.replace( r'[\u4E00-\u9FFF\u3000-\u303F\u3040-\u309F\u30A0-\u30FF]', '', regex=True ) Output: col ascii non_chinese_japanese 0 AaΓ€Γ‘πŸ˜€Ξ±γ‚δ»Š Aa AaΓ€Γ‘πŸ˜€Ξ±
2
2
78,904,801
2024-8-23
https://stackoverflow.com/questions/78904801/polars-dataframe-decimal-precision-doubles-on-mul-with-integer
I have a Polars (v1.5.0) dataframe with 4 columns as shown in example below. When I multiply decimal columns with an integer column, the scale of the resultant decimal column doubles. from decimal import Decimal import polars as pl df = pl.DataFrame({ "a": [1, 2], "b": [Decimal('3.45'), Decimal('4.73')], "c": [Decimal('2.113'), Decimal('4.213')], "d": [Decimal('1.10'), Decimal('3.01')] }) shape: (2, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ decimal[*,2] ┆ decimal[*,3] ┆ decimal[*,2] β”‚ β•žβ•β•β•β•β•β•ͺ══════════════β•ͺ══════════════β•ͺ══════════════║ β”‚ 1 ┆ 3.45 ┆ 2.113 ┆ 1.10 β”‚ β”‚ 2 ┆ 4.73 ┆ 4.213 ┆ 3.01 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ df.with_columns(pl.col("c", "d").mul(pl.col("a"))) shape: (2, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ decimal[*,2] ┆ decimal[*,6] ┆ decimal[*,4] β”‚ β•žβ•β•β•β•β•β•ͺ══════════════β•ͺ══════════════β•ͺ══════════════║ β”‚ 1 ┆ 3.45 ┆ 2.113000 ┆ 1.1000 β”‚ β”‚ 2 ┆ 4.73 ┆ 8.426000 ┆ 6.0200 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I don't know why the scale doubles, when I am just multiplying a decimal with an integer. What do I do so that the scale does not change?
The scale indeed seems to double. You could cast back to the original dtype: cols = ['c', 'd', 'e'] df.with_columns(pl.col(c).mul(pl.col('a')).cast(df[c].dtype) for c in cols) Note that there currently doesn't seem to be a way to access the dtype in an Expr, but this is a discussed feature. Example: β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c ┆ d ┆ e β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ decimal[*,2] ┆ decimal[*,3] ┆ decimal[*,4] β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ══════════════β•ͺ══════════════β•ͺ══════════════║ β”‚ 1 ┆ 3 ┆ 2.11 ┆ 1.100 ┆ 1.1001 β”‚ β”‚ 2 ┆ 4 ┆ 8.42 ┆ 6.022 ┆ 6.0004 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Used input: from decimal import Decimal df = pl.DataFrame({ "a": [1, 2], "b": [3, 4], "c": [Decimal('2.11'), Decimal('4.21')], "d": [Decimal('1.10'), Decimal('3.011')], "e": [Decimal('1.1001'), Decimal('3.0002')], })
4
3
78,901,146
2024-8-22
https://stackoverflow.com/questions/78901146/why-does-recurse-true-cause-dill-not-to-respect-globals-in-functions
If I pickle a function with dill that contains a global, somehow that global state isn't respected when the function is loaded again. I don't understand enough about dill to be anymore specific, but take this working code for example: import multiprocessing import dill def initializer(): global foo foo = 1 def worker(arg): return foo with multiprocessing.Pool(2, initializer) as pool: res = pool.map(worker, range(10)) print(res) This works fine, and prints [1, 1] as expected. However, if I instead pickle the initializer and worker functions using dill's recurse=True, and then restore them, it fails: import multiprocessing import dill def initializer(): global foo foo = 1 def worker(arg): return foo with open('funcs.pkl', 'wb') as f: dill.dump((initializer, worker), f, recurse=True) with open('funcs.pkl', 'rb') as f: initializer, worker = dill.load(f) with multiprocessing.Pool(2, initializer) as pool: res = pool.map(worker, range(2)) This code fails with the following error: File "/tmp/ipykernel_158597/1183951641.py", line 9, in worker return foo ^^^ NameError: name 'foo' is not defined If I use recurse=False it works fine, but somehow pickling them in this way causes the code to break. Why?
With the recurse=True option, dill.dump builds a new globals dict for the function being serialized with objects that the function refers to also recursively serialized. The side effect is that when deserialized with dill.load, these objects are reconstructed as new objects, including the globals dict for the function. This is why, after deserialization, the globals dicts of the functions become different objects from each other, so that changes made to the globals dict of the initializer function have no effect on the globals dict of the worker function. You can verify this behavior by checking the identity of the global namespace in which a function object is defined and runs under, availble as the __globals__ attribute of the function object: import dill def initializer(): global foo foo = 1 def worker(arg): return foo print(id(globals())) print(id(initializer.__globals__)) print(id(worker.__globals__)) with open('funcs.pkl', 'wb') as f: dill.dump((initializer, worker), f, recurse=True) with open('funcs.pkl', 'rb') as f: initializer, worker = dill.load(f) print('-- dilled --') print(id(globals())) print(id(initializer.__globals__)) print(id(worker.__globals__)) This outputs something like: 124817730351552 124817730351552 124817730351552 -- dilled -- 124817730351552 124817727897280 124817728060352 Demo: https://replit.com/@blhsing1/RelievedPrimeLaws
4
1
78,902,034
2024-8-22
https://stackoverflow.com/questions/78902034/use-one-regex-to-extract-information-from-two-patterns
I would like to simplify below regex logic with one regex statement. Then it's easier to understand the logic. import re content = """ [ 1.765989] initcall init_module.cfi_jt+0x0/0x8 [altmode_glink] returned 0 after 379 usecs [ 0.001873] initcall selinux_init+0x0/0x1f8 returned 0 after 85 usecs """ for line in content.split("\n"): m = re.search("\[\s+(?P<ts>[\d.]+)\] initcall init_module.*\[(?P<func>[^ ]+)\] returned (?P<ret>[\d-]+) after (?P<val>[\d-]+) usecs",line) if m: ts = m['ts'] func = m['func'] ret = m['ret'] val = m['val'] print(ts,func,ret,val) else: m = re.search("\[\s+(?P<ts>[\d.]+)\] initcall (?P<func>[^ ]+)\+.*? returned (?P<ret>[\d-]+) after (?P<val>[\d-]+) usecs",line) if m: ts = m['ts'] func = m['func'] ret = m['ret'] val = m['val'] print(ts,func,ret,val) Current output is good: 1.765989 altmode_glink 0 379 0.001873 selinux_init 0 85
The tricky part is apparently how you identify a function name, which can be matched by non-left-bracket characters followed by either a right bracket or a plus sign followed by non-spaces: for ts, func, ret, val in re.findall( r'\[\s*([\d.]+)] initcall .*?([^[]+)(?:]|\+\S+) ' r'returned (\d+) after (\d+) usecs', content): print(ts, func, ret, val) This outputs: 1.765989 altmode_glink 0 379 0.001873 selinux_init 0 85 Demo here
3
1
78,903,730
2024-8-22
https://stackoverflow.com/questions/78903730/pandas-list-all-unique-values-based-on-groupby
I have a dataframe that has worksite info. District# Site# Address 1 1 123 Bayview Ln 1 2 456 Example St 2 36 789 Hello Dr 2 44 789 Hello Dr I am trying to transform this dataframe to add a column with the highest Site# as well as the distinct addresses when I group by District#. Here is an example of what I want the output to look like: District# Site# Address MaxSite# All District Addresses 1 1 123 Bayview Ln 2 123 Bayview Ln,456 Example St 1 2 456 Example St 2 123 Bayview Ln,456 Example St 2 36 789 Hello Dr 44 789 Hello Dr 2 44 789 Hello Dr 44 789 Hello Dr I am able to get the Max Site# by doing df['MaxSite#'] = df.groupby(by='District#')['Site#'].transform('max') But I am trying to find a similar way to list all of the unique addresses when I groupby District#. I have tried doing .transform('unique') but that is not a valid function name and doing .agg(['unique']) returns dimensions that do not match
You can use groupby and agg to get the max Site# and collect the unique addresses, then merge back to the original dataframe: grouped_df = df.groupby('District#').agg(Max_Site_Num=('Site#', 'max'), All_District_Addresses=('Address', lambda x: list(set(x)))).reset_index() df = df.merge(grouped_df, on='District#') Output: District# Site# Address Max_Site_Num All_District_Addresses 0 1 1 123 Bayview Ln 2 [123 Bayview Ln, 456 Example St] 1 1 2 456 Example St 2 [123 Bayview Ln, 456 Example St] 2 2 36 789 Hello Dr 44 [789 Hello Dr] 3 2 44 789 Hello Dr 44 [789 Hello Dr]
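If you prefer to stay with the transform pattern from the question (one value broadcast to every row, no merge), here is a sketch of the same idea; the comma-joined string format is an assumption based on the desired output shown there:

import pandas as pd

df['MaxSite#'] = df.groupby('District#')['Site#'].transform('max')
df['All District Addresses'] = (
    df.groupby('District#')['Address']
      .transform(lambda s: ','.join(s.unique()))  # scalar per group, broadcast to each row
)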
4
5